Last year, the Kubernetes project introduced its Container Runtime Interface (CRI) -- a plugin interface that gives kubelet (a cluster node agent used to create pods and start containers) the ability to use different OCI-compliant container runtimes, without needing to recompile Kubernetes. Building on that work, the CRI-O project (originally known as OCID) is ready to provide a lightweight runtime for Kubernetes.
So what does this really mean?
CRI-O allows you to run containers directly from Kubernetes, without any unnecessary code or tooling. As long as the container is OCI-compliant, CRI-O can run it, cutting out extraneous tooling and allowing containers to do what they do best: fuel your next-generation cloud-native applications.
Prior to the introduction of CRI, Kubernetes was tied to specific container runtimes through “an internal and volatile interface.” This incurred a significant maintenance overhead for the upstream Kubernetes community as well as vendors building solutions on top of the orchestration platform.
With CRI, Kubernetes can be container runtime-agnostic. Providers of container runtimes don’t need to implement features that Kubernetes already provides. This is a win for the broad community, as it allows projects to move independently while still working well together.
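To make that contract a little more concrete, here is a deliberately simplified Go sketch of the kind of interface a CRI runtime exposes to the kubelet. The real interface is a gRPC service generated from protobuf, with dedicated request and response messages; only the method names below (Version, RunPodSandbox, CreateContainer, StartContainer, StopContainer, and so on) come from the CRI itself, while the signatures and placeholder types are illustrative assumptions.

```go
// Illustrative only: a trimmed-down, hand-written approximation of the CRI
// runtime contract. The real thing is a generated gRPC service with
// dedicated request/response messages; the method names match the CRI,
// but the signatures here are simplified for readability.
package cri

import "context"

type RuntimeService interface {
	// Report the runtime's name and version to the kubelet.
	Version(ctx context.Context) (runtimeName, runtimeVersion string, err error)

	// Pod sandbox lifecycle: the shared environment containers run in.
	RunPodSandbox(ctx context.Context, config *PodSandboxConfig) (podSandboxID string, err error)
	StopPodSandbox(ctx context.Context, podSandboxID string) error
	RemovePodSandbox(ctx context.Context, podSandboxID string) error

	// Container lifecycle within a sandbox.
	CreateContainer(ctx context.Context, podSandboxID string, config *ContainerConfig) (containerID string, err error)
	StartContainer(ctx context.Context, containerID string) error
	StopContainer(ctx context.Context, containerID string, timeoutSeconds int64) error
	RemoveContainer(ctx context.Context, containerID string) error
}

// Placeholder types standing in for the CRI's protobuf messages.
type PodSandboxConfig struct{ Name, Namespace string }
type ContainerConfig struct{ Name, Image string }
```

Any runtime that implements this contract can sit behind the kubelet, which is exactly what lets Kubernetes stay runtime-agnostic.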
For the most part, we don’t think users of Kubernetes (or distributions of Kubernetes, like OpenShift) really care much about the container runtime. They want it to work, but they don’t want to think about it a great deal. It’s a bit like how you don’t (usually) care whether a machine has GNU Bash, the Korn shell, Zsh, or another POSIX-compliant shell. You just want a standard way to run your script or application.
CRI-O: A Lightweight Container Runtime for Kubernetes
And that’s what CRI-O provides. The name derives from CRI plus Open Container Initiative (OCI), because CRI-O is strictly focused on OCI-compliant runtimes and container images.
Today, CRI-O supports the runc and Clear Containers runtimes, though it should work with any OCI-conformant runtime. It can pull images from any container registry, and it handles networking using the Container Network Interface (CNI), so any CNI-compatible networking plugin should work with the project.
When Kubernetes needs to run a container, it talks to CRI-O, and the CRI-O daemon works with runc (or another OCI-compliant runtime) to start the container. When Kubernetes needs to stop the container, CRI-O handles that. Nothing exciting: it just works behind the scenes to manage Linux containers so that users don’t need to worry about this crucial piece of container orchestration.
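As a rough illustration of that exchange, here is a minimal Go sketch of a CRI client dialing CRI-O’s local socket and asking the runtime service for its version, over the same gRPC channel the kubelet uses for its RunPodSandbox, CreateContainer, StartContainer, and StopContainer calls. The import path and socket path are assumptions that have moved around between releases, so treat this as a sketch rather than the kubelet’s actual code.

```go
// A minimal sketch, not the kubelet's actual code: dial CRI-O's unix
// socket and call the CRI RuntimeService. The import path and socket path
// are assumptions; both vary between Kubernetes and CRI-O releases.
package main

import (
	"context"
	"fmt"
	"net"
	"time"

	"google.golang.org/grpc"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1" // assumed location of the CRI API
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// CRI-O's usual listening socket (an assumption; check your CRI-O config).
	const sock = "/var/run/crio/crio.sock"
	conn, err := grpc.DialContext(ctx, sock,
		grpc.WithInsecure(), // local unix socket, no TLS
		grpc.WithContextDialer(func(ctx context.Context, addr string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", addr)
		}),
	)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// The same service the kubelet drives for RunPodSandbox, CreateContainer,
	// StartContainer, StopContainer, and the rest of the container lifecycle.
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("runtime: %s %s (CRI %s)\n",
		resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}
```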
What CRI-O isn’t
It’s worth spending a little time on what CRI-O isn’t. The scope for CRI-O is to work with Kubernetes, to manage and run OCI containers. It’s not meant as a developer-facing tool, though the project does have some user-facing tools for troubleshooting.
Building images, for example, is out of scope for CRI-O; that work is left to tools like Docker’s build command, Buildah, or OpenShift’s Source-to-Image (S2I). Once an image is built, CRI-O will happily consume it.
While CRI-O does include a command line interface (CLI), it’s provided mainly for testing CRI-O and not really as a method for managing containers in a production environment.
Next steps
Now that CRI-O 1.0 is released, we’re hoping to see it included as a stable feature in the next release of Kubernetes. The 1.0 release works with the Kubernetes 1.7.x series, and a CRI-O 1.8-rc1 release for Kubernetes 1.8.x will follow soon.
We invite you to join us in furthering the development of the open source CRI-O project, and we would like to thank our current contributors for their assistance in reaching this milestone. If you would like to contribute or follow development, head to the CRI-O project’s GitHub repository and follow the CRI-O blog.
About the author
Joe Brockmeier is the editorial director of the Red Hat Blog. He also acts as Vice President of Marketing & Publicity for the Apache Software Foundation.
Brockmeier joined Red Hat in 2013 as part of the Open Source and Standards (OSAS) group, now the Open Source Program Office (OSPO). Prior to Red Hat, Brockmeier worked for Citrix on the Apache CloudStack project, and was the first openSUSE community manager for Novell from 2008 to 2010.
He also has an extensive history in the tech press and publishing, having been editor-in-chief of Linux Magazine, editorial director of Linux.com, and a contributor to LWN.net, ZDNet, UnixReview.com, and many others.