Podman Executing Unikernels?
I'm a really big fan of Podman. As a team, we are responsible for a component (oc-mirror) that does bulk copying of OpenShift container images from a source registry to a destination registry, even within disconnected (no internet) environments. Podman is extremely useful when verifying container manifests, blobs, etc.
I have also been working with Unikernels in my home lab setup. If you are interested, I published an article about Unikernel Platform as a Service (link to article), all about the benefits of Unikernels (small footprint, low attack surface, fast boot times, etc.).
I recently got to thinking: could I get Podman to execute Unikernels? I found that the best way to start would be to look at the OCI Specification.
Below is a high-level overview diagram of the "container ecosystem".
The Experiment
So basically I thought a great place to start would be to investigate the workings of "crun", the default OCI container runtime for both Podman and CRI-O.
Without boring you with all the intricate details, it has the ability to create, start, report the state of, and delete the underlying container.
I have only listed a couple of the "commands" here; you can get the detailed list from the OCI runtime specification.
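To make that concrete, this is roughly how Podman drives an OCI runtime such as "crun" on the command line (the container ID and bundle path below are made up for illustration):

crun create --bundle /path/to/bundle mycontainer   # prepare the container from its OCI bundle
crun start mycontainer                             # run the container's process
crun state mycontainer                             # print the container status as JSON
crun delete mycontainer                            # clean up the stopped container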
With this info I created my own "crun", which I called "ucrun" (the "u" for unikernel, a simple naming convention). As crun is written in C and I didn't want to go down that route, I opted for Rust; it has near-C performance, and as I'm still learning Rust, I felt this would be an extremely interesting challenge.
The end result is that for the basic "create", "delete", "state" and "start" commands, I could intercept the calls and pass them on to the Nanovm utility. The other, "not implemented" commands are simply forwarded to "crun".
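To give an idea of the approach, here is a minimal sketch of the intercept-and-forward logic in Rust. This is not the actual "ucrun" source; the crun path and the naive argument parsing are assumptions, and the Unikernel hand-off is only stubbed out.

use std::env;
use std::process::{exit, Command};

const CRUN: &str = "/usr/bin/crun"; // assumed location of the real runtime

fn main() {
    let args: Vec<String> = env::args().skip(1).collect();
    // Naive parsing for illustration: treat the first non-flag argument as the
    // OCI sub-command (a real runtime also has to handle global flag values).
    let cmd = args.iter().find(|a| !a.starts_with('-')).cloned();

    match cmd.as_deref() {
        Some("create") | Some("start") | Some("state") | Some("delete") => {
            let c = cmd.as_deref().unwrap();
            // Hand the life-cycle event to the Unikernel tooling (details elided;
            // the real runtime shells out to the Nanovm "ops" CLI here) ...
            eprintln!("ucrun: intercepted OCI command '{c}'");
            // ... then still delegate to crun so Podman sees a valid OCI runtime.
            forward_to_crun(&args);
        }
        // Everything else ("kill", "exec", ...) is simply forwarded untouched.
        _ => forward_to_crun(&args),
    }
}

fn forward_to_crun(args: &[String]) {
    let status = Command::new(CRUN)
        .args(args)
        .status()
        .expect("failed to launch crun");
    exit(status.code().unwrap_or(1));
}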
This meant I needed a simple container that would stay alive, i.e. sit in a continuous loop, to represent the running Unikernel; when Podman stops or deletes the container, that call is passed on to Nanovm to stop the Unikernel instance.
I found this simple loop service
#include <sys/syscall.h>
#include <unistd.h>

/* Tiny keep-alive process: prints a banner, then blocks until interrupted. */
static const char message[] = "ucrun running\n";

int main(void) {
    /* Write directly via the syscall; no libc stdio needed. */
    syscall(SYS_write, STDOUT_FILENO, message, sizeof(message) - 1);

    /* Block instead of busy-spinning; pause() only returns when a signal arrives. */
    while (1) {
        pause();
    }
    return 0;
}
All it does is print a message to the console and then remain in a loop until interrupted. I then used a Containerfile to build the container that Podman will use to "execute" the Unikernel; it's extremely lightweight (about 776 kB).
# Builder stage: compile a fully static loop-svc binary on UBI9 minimal
FROM docker.io/redhat/ubi9-minimal AS builder
RUN microdnf -y install gcc glibc-static
ADD hack/loop-svc.c .
RUN gcc -O2 -static -o loop-svc loop-svc.c
RUN strip loop-svc

# Final stage: just the static binary on top of scratch
FROM scratch
LABEL io.containers.capabilities="sys_chroot"
COPY --from=builder loop-svc /usr/local/bin/loop-svc
CMD ["/usr/local/bin/loop-svc"]
Executing the Podman build (and eventually pushing to the remote registry):
podman build -t 192.168.1.27:5000/unikernel-tracker:latest -f containerfile
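The push to the remote registry mentioned above would then be something along these lines (the --tls-verify=false flag is an assumption for an insecure, plain-HTTP local registry):

podman push --tls-verify=false 192.168.1.27:5000/unikernel-tracker:latest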
I compiled my "ucrun" project and installed the binary in /usr/bin/.
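For reference, a standard Cargo release build and install would look something like this (assuming the default Cargo project layout; the binary name "ucrun" is the one used throughout this article):

cargo build --release
sudo install -m 0755 target/release/ucrun /usr/bin/ucrun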
I also needed to update the Podman config file /usr/share/containers/containers.conf with my "ucrun" location:
[engine.runtimes]
ucrun = [
  "/usr/bin/ucrun"
]
After some troubleshooting (I found that it was important to have the exact version of "crun" that Podman was built with), I had everything working.
The final test was to install podman-compose, as I have three Unikernels: a Redis service (used as a message bus for publish and subscribe), a service that listens for requests and pushes the payload to a specific topic, and finally a message consumer listening on the topic.
The compose file is relatively simple
version: '3'
services:
  redis-mq:
    image: "192.168.1.27:5000/unikernel-tracker:latest"
    environment:
      SERVICE_NAME: "redis-server"
      PORT: "6379"
  publisher:
    image: "192.168.1.27:5000/unikernel-tracker:latest"
    depends_on:
      - redis-mq
    environment:
      SERVICE_NAME: "rust-redis-publisher"
      PORT: "8080"
  subscriber:
    image: "192.168.1.27:5000/unikernel-tracker:latest"
    depends_on:
      - redis-mq
    environment:
      SERVICE_NAME: "rust-redis-subscriber"
      PORT: ""
The "SERVICE_NAME" is the key, it does a lookup on my on-prem Unikernel registry and starts/stops the Unikernel by the set name. I Deployed this on a remote server so I used the IP of the remote registry to locate the "unikernel-tracker" loop service (for want of a better name).
I executed Podman compose with this command
podman-compose -f unikernel-deploy.yaml --podman-run-args="--runtime=ucrun" up
passing --runtime=ucrun to inform Podman to use my "ucrun" OCI runtime rather than the default "crun".
Here is a snapshot of the output
As a final test I pushed a payload to the publish endpoint
curl -d'@payload.json' https://127.0.0.1:8080/publish
json data successfully published to message queue from 10-0-2-15
Issuing a podman-compose down stops all the containers and Unikernel instances.
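That's the mirror of the up command, i.e. something along the lines of (same compose file assumed):

podman-compose -f unikernel-deploy.yaml down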
Conclusion
This was an extremely interesting project; it took me a weekend to get the "ucrun" OCI runtime to work correctly.
So you might be asking: why? What was the purpose?
Couldn't I have created a simple bash script to start and stop the Unikernels? The Nanovm project also has a compose utility (similar to Podman compose), so why not use it? All valid points!
Well, simply put, for the OpenShift/Kubernetes enthusiasts: looking at the original ecosystem diagram above, could I perhaps have a way to execute Unikernels on tainted nodes in OpenShift/Kubernetes clusters by setting up CRI-O in a similar way? Well, that's a story for another article :)
A massive shout-out to all the Podman creators/contributors and also the Nanovm (ops) project. Thanks for great products!