Lightweight Container Runtime for Kubernetes: CRI-O
What is CRI-O?
CRI-O is an implementation of the Kubernetes CRI (Container Runtime Interface) that enables the use of OCI (Open Container Initiative) compatible runtimes. It is a lightweight alternative to using Docker, Moby, or rkt as the runtime for Kubernetes: it allows Kubernetes to use any OCI-compliant runtime as the container runtime for running pods. Today it supports runc and Clear Containers, but in principle any OCI-conformant runtime can be plugged in.
CRI-O supports OCI container images and can pull from any container registry.
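To make the "interface" part concrete, the sketch below models a tiny slice of what the CRI looks like to Kubernetes. The real CRI is a gRPC API with far richer request and response types; the method names here mirror it, but the signatures, the `RuntimeService` interface, and the `fakeRuntime` type are simplified illustrations, not the actual API.

```go
package main

import "fmt"

// RuntimeService is a pared-down sketch of the kind of interface the
// Kubernetes CRI defines. Any OCI-compliant runtime can sit behind it.
type RuntimeService interface {
	RunPodSandbox(name string) (id string, err error)
	CreateContainer(sandboxID, image string) (id string, err error)
	StartContainer(id string) error
}

// fakeRuntime is a stand-in implementation used only to show how a
// runtime plugs in behind the interface.
type fakeRuntime struct{ nextID int }

func (r *fakeRuntime) RunPodSandbox(name string) (string, error) {
	r.nextID++
	return fmt.Sprintf("sandbox-%d", r.nextID), nil
}

func (r *fakeRuntime) CreateContainer(sandboxID, image string) (string, error) {
	r.nextID++
	return fmt.Sprintf("ctr-%d", r.nextID), nil
}

func (r *fakeRuntime) StartContainer(id string) error { return nil }

func main() {
	var rt RuntimeService = &fakeRuntime{}
	sb, _ := rt.RunPodSandbox("nginx-pod")
	ctr, _ := rt.CreateContainer(sb, "docker.io/library/nginx:latest")
	fmt.Println(sb, ctr) // sandbox-1 ctr-2
}
```

Because Kubernetes only talks to this interface, swapping runc for Clear Containers (or any other OCI runtime) does not change anything on the Kubernetes side.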
Contributors
- Red Hat
- Intel
- SUSE
- Hyper
- IBM
CRI-O is developed by maintainers and contributors from these companies and others. It is a community-driven, open source project. Feedback, users and, of course, contributors are always welcome at the kubernetes-incubator/cri-o project.
Architecture
The architectural components are as follows:
- Kubernetes contacts the kubelet to launch a pod. Pods are a Kubernetes concept consisting of one or more containers sharing the same IPC, NET, and PID namespaces and living in the same cgroup.
- The kubelet forwards the request to the CRI-O daemon via the Kubernetes CRI (Container Runtime Interface) to launch the new pod.
- CRI-O uses the containers/image library to pull the image from a container registry.
- The downloaded image is unpacked into the container's root filesystem, stored in a copy-on-write (COW) filesystem, using the containers/storage library.
- After the rootfs has been created for the container, CRI-O generates an OCI runtime specification JSON file describing how to run the container, using the OCI generate tools.
- CRI-O then launches an OCI-compatible runtime, using the specification, to run the container processes. The default OCI runtime is runc.
- Each container is monitored by a separate conmon process. The conmon process holds the pty of the PID 1 of the container process. It handles logging for the container and records the exit code for the container process.
- Networking for the pod is set up through the use of CNI, so any CNI plugin can be used with CRI-O.
Components
CRI-O is made up of several components that are found in different GitHub repositories.
- OCI-compatible runtime
- containers/storage
- containers/image
- networking (CNI)
- container monitoring (conmon)
- security, provided by several core Linux security features
OCI-compatible runtimes
CRI-O supports any OCI-compatible runtime. We test with runc and Clear Containers today.
Storage
The containers/storage library is used for managing layers and creating root filesystems for the containers in a pod. Overlayfs, devicemapper, AUFS, and btrfs are implemented, with overlayfs as the default driver.
Support for network-based filesystem images (NFS, Gluster, Ceph) is on the development roadmap.
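The layering model above can be sketched with a toy lookup: each image layer maps paths to content, and a read resolves to the topmost layer that contains the path, with the container's writable layer on top. This is an illustration of how COW drivers like overlayfs resolve files, not how containers/storage is implemented.

```go
package main

import "fmt"

// Layer models one image layer as a path-to-content map. A container's
// root filesystem is a stack of read-only layers plus one writable layer
// on top; writes go only to the top layer (copy-on-write).
type Layer map[string]string

// resolve returns the content for path from the topmost layer that has it,
// which is how an overlay-style union mount presents a single filesystem.
func resolve(layers []Layer, path string) (string, bool) {
	for i := len(layers) - 1; i >= 0; i-- {
		if v, ok := layers[i][path]; ok {
			return v, true
		}
	}
	return "", false
}

func main() {
	base := Layer{"/etc/os-release": "base", "/bin/sh": "shell"}
	app := Layer{"/app/server": "binary"}
	writable := Layer{"/etc/os-release": "modified"} // container's COW write
	layers := []Layer{base, app, writable}

	v, _ := resolve(layers, "/etc/os-release")
	fmt.Println(v) // modified: the writable layer shadows the base layer
}
```

The payoff of this design is that the read-only layers can be shared between every container using the same image; only the small writable layer is per-container.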
Container images
The containers/image library is used for pulling images from registries. Currently, it supports Docker image manifest version 2, schema 1 as well as schema 2. It also passes all Docker and Kubernetes tests.
Networking
The Container Network Interface (CNI) is used for setting up networking for the pods. Various CNI plugins, such as Flannel, Weave, and OpenShift-SDN, have been tested with CRI-O and work as expected.
Monitoring
conmon is a utility within CRI-O that is used to monitor the containers, handle logging from the container process, serve attach clients, and detect Out Of Memory (OOM) situations.
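Two of conmon's jobs, capturing the container's output for logging and recording its exit code, can be sketched with a plain child process. The `monitor` function below is a hypothetical stand-in written for this article; real conmon also holds the container's pty, survives CRI-O restarts, and watches for OOM events.

```go
package main

import (
	"fmt"
	"os/exec"
)

// monitor runs a command, captures its combined output (the logging role)
// and records its exit code (the exit-status role). This is a sketch of
// what conmon does for each container, not how conmon is implemented.
func monitor(name string, args ...string) (log string, exitCode int) {
	cmd := exec.Command(name, args...)
	out, err := cmd.CombinedOutput()
	exitCode = 0
	if err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			exitCode = ee.ExitCode() // child ran but exited non-zero
		} else {
			exitCode = -1 // child could not be started
		}
	}
	return string(out), exitCode
}

func main() {
	log, code := monitor("sh", "-c", "echo hello; exit 7")
	fmt.Printf("log=%q exit=%d\n", log, code)
}
```

Keeping this bookkeeping in a separate per-container process means a container's logs and exit code survive even if the CRI-O daemon itself is restarted.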
Security
Container security separation is provided by a series of kernel features and tools, including SELinux, capabilities, and seccomp, applied as specified in the OCI specification.