
After hitting 1.0 in October of last year and being shipped as generally available (GA) in OpenShift 3.9, CRI-O has reached another important milestone—it’s now being used in production for many workloads running on OpenShift Online Starter accounts using OpenShift 3.10. Using CRI-O in a real-world production environment with diverse Kubernetes workloads is an important part of the development feedback loop for improving and extending CRI-O and OpenShift.

What is CRI-O?

To recap a brief history of CRI-O, the project was initially introduced as the “Open Container Initiative Daemon” (OCID) in September 2016. It was renamed shortly thereafter to acknowledge its relationship with Kubernetes’ Container Runtime Interface (CRI), as well as with the Open Container Initiative (OCI) standards.

At its core, CRI-O is a lightweight container engine for Kubernetes that looks to cut out any extraneous tooling and simply serve as a way to run Docker/OCI-compliant Linux containers with OCI-compliant container runtimes. Most Kubernetes users, including OpenShift users, don’t care about the container engine itself – so long as it works, they don’t really want to think about it.

And that’s one of the CRI-O project’s goals: to be “boring.” CRI-O is optimized for Kubernetes. The project is committed to ensuring that CRI-O passes the Kubernetes tests, and to having CRI-O work with any compliant container registry and run any OCI-compliant container.

CRI-O in production

OpenShift Online’s free Starter account allows developers to get hands-on experience with OpenShift quickly, without needing to stand up an instance on their own. Behind the scenes, it is actually a set of OpenShift clusters that provide the service. The OpenShift Online operations team has now transitioned the compute nodes for entire clusters to run CRI-O with no disruption to end-users.

In true cloud fashion, the OpenShift Online operations team was able to release a canary deployment of CRI-O in production, side by side with Docker and transparent to end users, and then expand the deployment to cover entire clusters. Note that the OpenShift Online operations team deploys and manages OpenShift Container Platform using the same methods our customers use in their own environments. This provides an additional layer of testing and observation to make OpenShift production-ready at scale.

In fact, in the short time that CRI-O has been in production, the potential reduction in support burden already looks promising. This is a testament to the stability and security features of CRI-O. That said, Red Hat continues to support the Docker engine shipped with Red Hat Enterprise Linux 7 for OpenShift, so Red Hat Enterprise Linux and OpenShift Container Platform users can rest assured that their existing usage of Docker remains well-supported.

By kicking off our production use of CRI-O with OpenShift Online, the operations team is collecting important data on how CRI-O performs in a real-world use case. This data will be fed back into the upstream CRI-O project as bug and security fixes, as well as new features that are useful for the entire community.

“It’s really exciting to see CRI-O being used more and more widely,” says Derek Carr, an OpenShift architect at Red Hat. “We test CRI-O extensively before each stable release, and our user and contributor base has grown. But there’s nothing like putting work into multiple large-scale production environments to get feedback and ensure it’s ready for real-world use.”

Using CRI-O today

CRI-O was declared GA in OpenShift 3.9, which means that customers can start using it today in their own environments. Scott McCarty, Principal Product Manager - Containers, Red Hat, has an excellent post that explains the steps to enable CRI-O in OpenShift Container Platform. Docker remains the default, and customers can fine-tune whether CRI-O runs on all nodes or just some.
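For reference, enabling CRI-O at install time is typically handled through the openshift-ansible inventory. The lines below are a minimal sketch assuming the openshift_use_crio variable documented for OpenShift 3.9 and 3.10; check the installation documentation for your release for the exact variable names and supported combinations, and see the post mentioned above for fine-tuning which nodes run CRI-O:

    # In the [OSEv3:vars] section of the openshift-ansible inventory
    # (minimal sketch; verify the variable name against your release's documentation)
    openshift_use_crio=True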

As always, CRI-O plans to continue releasing alongside Kubernetes and providing upstream updates for the past three major versions of Kubernetes. The CRI-O project welcomes new contributors, and we’d like to thank our current contributors for their assistance in reaching this milestone. If you would like to contribute, or follow development, head to the CRI-O project’s GitHub repository and follow the CRI-O blog.


About the author

Joe Brockmeier is the editorial director of the Red Hat Blog. He also acts as Vice President of Marketing & Publicity for the Apache Software Foundation.

Brockmeier joined Red Hat in 2013 as part of the Open Source and Standards (OSAS) group, now the Open Source Program Office (OSPO). Prior to Red Hat, Brockmeier worked for Citrix on the Apache CloudStack project, and was the first openSUSE community manager for Novell from 2008 to 2010.

He also has an extensive history in the tech press and publishing, having been editor-in-chief of Linux Magazine, editorial director of Linux.com, and a contributor to LWN.net, ZDNet, UnixReview.com, and many others. 
