Cloud Native Computing Foundation adopts CRI-O container runtime + tutorial
The CNCF has voted to accept CRI-O as an incubation-level hosted project. CRI-O, created by Red Hat, is an implementation of the Kubernetes Container Runtime Interface (CRI) designed to enable the use of Open Container Initiative (OCI) compatible runtimes. In this article, let us look at its key features and components, how to configure kubeadm for the CRI-O container runtime, and how to deploy apps.
A container runtime is the software responsible for running containers. To understand this better, let us look at a typical Kubernetes cluster, which is comprised of master nodes and a set of worker nodes.
To learn about some of the alternative container runtimes, check out here. The Kubernetes master includes the following main components:
- API server exposes four APIs: the Kubernetes API, Extensions API, Autoscaling API, and Batch API. These are used for communicating with the Kubernetes cluster and executing container cluster operations.
- etcd is a key/value store. Kubernetes uses it as the persistent storage for all of its API objects.
- Scheduler monitors the resource usage of each node and schedules containers according to resource availability.
- Controller manager monitors the current state of the applications deployed on Kubernetes via the API server and makes sure that it meets the desired state.
On each Kubernetes node, the following components are available:
- Kubelet is the agent that runs on each node. It makes use of the pod specification for creating containers and managing them.
- Kube-proxy runs on each node for load balancing pods. It uses iptables rules for doing simple TCP/UDP stream forwarding or round-robin TCP/UDP forwarding.
- Container runtime is software that executes containers and manages container images on a node.
By default, Docker is the container runtime, but Kubernetes provides support for multiple container runtimes. The Open Container Initiative (OCI) is a Linux Foundation effort to create truly portable software containers. To standardize container formats and runtimes, OCI published the runtime-spec as a standard for container runtimes.
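To get a feel for what the runtime-spec standardizes, the sketch below writes a heavily trimmed, illustrative config.json (the OCI runtime specification file a compliant runtime such as runc consumes) and sanity-checks that it is well-formed JSON. Field values are illustrative, modeled on the defaults `runc spec` generates; they are not a complete, runnable container spec.

```shell
# Write a minimal, illustrative OCI runtime spec (config.json).
cat > config.json <<'EOF'
{
  "ociVersion": "1.0.0",
  "process": {
    "terminal": false,
    "user": { "uid": 0, "gid": 0 },
    "args": [ "sh" ],
    "cwd": "/"
  },
  "root": { "path": "rootfs", "readonly": true },
  "hostname": "demo"
}
EOF
# Sanity-check that the spec is well-formed JSON.
python3 -m json.tool config.json > /dev/null && echo "valid spec"
```

Any OCI-compliant runtime reads a file of this shape to learn what process to start, under which user, against which root filesystem.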
Introduction to CRI-O
As explained earlier, CRI-O is an implementation of the Kubernetes CRI (Container Runtime Interface) to enable using OCI (Open Container Initiative) compatible runtimes.
CRI-O is a lightweight alternative to Docker as the runtime for Kubernetes. It allows Kubernetes to use any OCI-compliant runtime as the container runtime for running pods. Key features include:
- Support multiple image formats including the existing Docker image format
- Support for multiple means to download images including trust & image verification
- Container image management (managing image layers, overlay filesystems, etc)
- Container process lifecycle management
- Monitoring and logging required to satisfy the CRI
- Resource isolation as required by the CRI
- Container Network Interface (CNI) is used for setting up networking for the pods. Various CNI plugins such as Flannel, Weave, Cilium, and OpenShift-SDN have been tested
- Container security separation policies provided by a series of tools including SELinux, Capabilities, and seccomp, as specified in the OCI Specification
Image – CRI-O Architecture
- Runtime: OCI-compatible runtime
- Storage: storage and management of image layers using containers/storage
- Images: image management using containers/image
- Networking: networking support through the use of CNI
- Monitoring: container monitoring using conmon
- Security: container security provided by core Linux facilities such as SELinux and seccomp
Sequence of launching a new pod
- Kubernetes control plane contacts the kubelet to launch a pod.
- kubelet forwards the request to the CRI-O daemon via the Kubernetes CRI (Container Runtime Interface) to launch the new pod.
- CRI-O then uses the containers/image library to pull the image from a container registry.
- The downloaded image is unpacked into the container’s root filesystem, using the containers/storage library.
- After the rootfs has been created for the container, CRI-O generates an OCI runtime specification json file describing how to run the container.
- CRI-O then launches an OCI-compatible runtime using the specification to run the container process. The default OCI runtime is runc.
- Each container is monitored by a separate conmon process.
- Networking for the pod is set up through the use of CNI(Container Network Interface), so any CNI plugin can be used with CRI-O.
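The sequence above can also be driven by hand with crictl against a running CRI-O daemon, which makes the individual steps visible. A sketch, assuming crio is running and that pod-config.json and container-config.json (illustrative file names) contain valid CRI sandbox and container configs:

```shell
# Pull the image via containers/image.
crictl pull docker.io/library/nginx:latest
# Create the pod sandbox -- CNI sets up the pod networking here.
POD_ID=$(crictl runp pod-config.json)
# Create and start the container inside the sandbox;
# CRI-O generates the OCI spec and hands it to runc.
CTR_ID=$(crictl create "$POD_ID" container-config.json pod-config.json)
crictl start "$CTR_ID"
# conmon now monitors the running container process.
crictl ps
```

In a normal cluster the kubelet issues these CRI calls itself; crictl is mainly useful for debugging the runtime directly.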
In the next section, we will look at how to install CRI-O and launch pods and containers.
Step #1. How to install CRI-O
For Ubuntu, use the following commands to install CRI-O runtime on the nodes.
# Install prerequisites
apt-get update
apt-get install software-properties-common
add-apt-repository ppa:projectatomic/ppa
apt-get update
# Install CRI-O
apt-get install cri-o-1.11
Once CRI-O is installed, use systemctl start crio to start the daemon.
Image – CRI-O Daemon status
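Before moving on, it can help to confirm that the daemon is healthy and reachable over its CRI socket; a quick check, assuming a systemd-based host:

```shell
# Start crio on boot as well.
systemctl enable crio
# Should report "active (running)".
systemctl status crio --no-pager
# Ask the daemon for its version over the CRI socket.
crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
```

The same socket path is what kubeadm is pointed at in the next step.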
Step #2. Configure kubeadm to use the CRI-O runtime
kubeadm solves the problem of handling TLS encryption configuration, deploying the core Kubernetes components, and ensuring that additional nodes can easily join the cluster.
More details on Kubeadm can be found at https://github.com/kubernetes/kubeadm
Use the below command to initialize the cluster:
kubeadm init --cri-socket=/var/run/crio/crio.sock --kubernetes-version $(kubeadm version -o short)
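kubeadm init prints a join command for the worker nodes; when CRI-O is the runtime on the workers as well, the same --cri-socket flag applies there. A sketch with placeholder values (the real token and hash come from the kubeadm init output):

```shell
# Run on each worker node. <master-ip>, <token>, and <hash>
# are placeholders taken from the kubeadm init output.
kubeadm join <master-ip>:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --cri-socket=/var/run/crio/crio.sock
```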
Kubernetes cluster has now been initialized. The Master node will manage the cluster and one of the worker nodes will run our container workloads.
Use the below commands to copy the configuration to the user's home directory and set the environment variable for use with the CLI.
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
Once the environment variable is configured, the Kubernetes CLI kubectl can use the configuration to access the cluster. Similar to the docker ps command, you can now use crictl ps to list all the containers.
crictl provides a CLI for CRI-compatible container runtimes. The crictl images command lists all the images.
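To confirm that kubectl is talking to the cluster and that the nodes really report CRI-O as their runtime, you can inspect the node list; `kubectl get nodes -o wide` includes a CONTAINER-RUNTIME column:

```shell
# The CONTAINER-RUNTIME column should show entries like cri-o://1.11.x
kubectl get nodes -o wide
# The same information appears under System Info in the node details.
kubectl describe node "$(hostname)" | grep "Container Runtime Version"
```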
Step #3. Deploy applications
For deploying applications to your pod, images need to be prefixed with the container image registry, such as docker.io for Docker Hub.
kubectl run http --image=docker.io/docker-http-server:latest --replicas=1
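Once the deployment is created, the usual Kubernetes workflow applies; for example, to check that the pod is running and expose it (the port and service type here are illustrative):

```shell
# Pods created by kubectl run carry the label run=http.
kubectl get pods -l run=http
# Expose the deployment on a node port and inspect the service.
kubectl expose deployment http --port=80 --type=NodePort
kubectl get svc http
```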
The rest of the steps are the same as for any Kubernetes deployment. Please refer to the earlier posts for an understanding of Kubernetes and how to create, deploy, and roll out updates to the cluster.
Congrats! Today we have learned how to configure and run pods and containers using the CRI-O runtime. Do check out the OCI Runtime Specification and Image Specification to learn more about the Open Container Initiative.
Like this post? Don’t forget to share it!
Additional Resources:
- See here for information about crictl.
- CRI Command line interface
- CRI performance benchmarking
- Kubelet container runtime API
- OCI Image Specification
- OCI Runtime Specification
- Container Networking Interface specification
- Kubectl cheat sheet
- Kubernetes tutorial – Create simple cluster & Deploy app
- Kubernetes tutorial – Scale & perform updates to your app
- Kubernetes tutorial – Create deployments using YAML file