Standards are powerful forces in the software industry. They can drive technology forward by bringing together the combined efforts of multiple developers, different communities and even competing vendors. Over the past year there has been no greater example of this than the evolution of Linux container technology driven by the Docker open source project. In OpenShift, which is built on container technology, we recognized early on the power of this emerging standard, and today the weight of Red Hat is squarely behind it.
Driving a Standard for Linux Containers
Container technology is far from new, with a heritage that goes back almost a decade in Linux and longer still in Unix. However, use of Linux containers was limited to select users and organizations that understood the power of the technology and had the sophistication to harness it. The advent of Platform-as-a-Service (PaaS) saw multiple PaaS vendors adopt containers, including OpenShift, Google App Engine, Heroku, and more. In fact, it was from the PaaS space that the Docker technology first emerged.
Today, hundreds of developers contribute to the Docker community project, as do multiple vendors, and their combined efforts have brought Linux containers into the mainstream. The latest evidence of this momentum came from Microsoft, which recently announced that it too would join the cause in an effort to bring containerization to Windows.
While a few PaaS vendors like Pivotal are still pushing their own platform-specific container management solutions, we at Red Hat were early proponents of Docker and quickly became one of the leading contributors to the community project. This enables us to standardize on Linux containers across our own solutions, including Red Hat Enterprise Linux, OpenShift, Red Hat Enterprise Linux Atomic Host and more, even as we help drive a standard for Linux containers across the industry.
Kubernetes as a Standard for Container Orchestration & Management
As I discussed in my prior blog post, however, applications in OpenShift typically span multiple containers, and those containers need to be deployed across multiple hosts. One of the key requirements for OpenShift is a system for orchestrating and managing these containers at very large scale. With more than two million containerized applications deployed on our own OpenShift Online service since inception, we’ve gained quite a bit of experience managing containers at scale. We tapped into that experience when we set out to build our next-generation container orchestration capabilities for OpenShift v3 and launched efforts like the GearD project.
Google also knows a little something about web-scale container orchestration, as containers power most of its services. When Google notified us of its intent to launch the Kubernetes project for container orchestration and management, we saw the opportunity to collaborate and drive another standard to propel container technology forward.
Today you will find multiple Red Hat developers among the leading contributors to Kubernetes, just as we have done in the Docker project. We’ve taken our initial work on container orchestration in GearD and our experience from running OpenShift over the past four years, and we are using both to help drive capabilities in Kubernetes, together with Google and other contributors.
Red Hat’s Clayton Coleman, who initiated the GearD project, is now one of the leading contributors to Kubernetes and has worked with other OpenShift developers to integrate our initial GearD orchestration efforts upstream. Individual contributors have shown up in great numbers to support the project, as have vendors like IBM, Microsoft, and others.
Why Kubernetes?
Kubernetes relies on Docker to package, instantiate, and run application containers. The power of Kubernetes is in the declarative model it implements for managing containerized application deployments. Rather than issuing step-by-step commands that specify which containers to deploy where (an imperative model), the user declares the desired end state that should be maintained. With this declared state established, Kubernetes can then employ automated self-healing mechanisms such as automatically restarting containers, rescheduling containers on different hosts, or replicating containers for use cases such as auto-scaling. For example, if your application server cluster should have four server instances, each running in its own container, then that state can be declared and Kubernetes will deploy and maintain it over time, starting or restarting containers as needed.
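To make the declarative model concrete, here is a minimal sketch of a replication controller definition, written against the Kubernetes v1 API with hypothetical names and a placeholder image. It declares that four replicas of an application server should exist; Kubernetes then works continuously to make reality match that declaration:

```yaml
# Declares desired state: four replicas of an application server.
# Kubernetes reconciles actual state against this declaration,
# starting or restarting containers as needed.
apiVersion: v1
kind: ReplicationController
metadata:
  name: app-server                # hypothetical name, for illustration
spec:
  replicas: 4                     # the declared number of instances
  selector:
    app: app-server               # label that identifies the managed pods
  template:                       # pod template used to create new replicas
    metadata:
      labels:
        app: app-server
    spec:
      containers:
      - name: app-server
        image: registry.example.com/app-server:latest   # placeholder image
        ports:
        - containerPort: 8080
```

If a container or host fails, the replication controller sees that fewer than four pods match its selector and starts replacements; no imperative intervention is required.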
How Kubernetes Works
In Kubernetes, Linux host instances referred to as “masters” are used to manage and schedule container deployments as well as manage state. This has historically been the function of the broker tier in OpenShift. The Kubernetes master uses the etcd repository for storing state, although it may be possible to use alternative repositories in the future. The master also contains the scheduler, which handles placement of pods onto selected hosts; the scheduler is pluggable, so third-party schedulers and cluster managers can be used instead. Finally, the master provides controllers, such as the replication controller, and an authenticated API server that clients use to interact with the cluster.
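As a small illustration of the scheduler’s placement role, the sketch below (again the v1 API, with a hypothetical label) constrains a pod to run only on hosts carrying that label; without such a constraint, the scheduler picks a suitable host on its own. Pods themselves are described next:

```yaml
# Constrains placement: this pod may only be scheduled onto nodes
# labeled "disktype=ssd" (a hypothetical label applied by an admin).
apiVersion: v1
kind: Pod
metadata:
  name: fast-storage-app          # hypothetical name
spec:
  nodeSelector:
    disktype: ssd                 # scheduler only considers matching nodes
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
```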
The Kubernetes node instances (previously referred to as minions) are where the containers actually run. A node agent, the kubelet, manages the containers on each host to maintain the desired state described by the master. Kubernetes deploys containers in pods, with each pod including one or more related containers. The containers in a pod share an IP address and data volumes and run on the same host. A pod may contain just a single container, but a multi-container pod might pair a database (e.g., PostgreSQL) with an admin tool (e.g., pgAdmin), or an application server with a management agent, as sketched below.
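Here is a hedged sketch of that database-plus-admin-tool pod, using the v1 API with illustrative image names and credentials (a real deployment would use secrets rather than inline values). Because both containers share the pod’s network namespace, pgAdmin can reach PostgreSQL at localhost:5432:

```yaml
# A multi-container pod: a PostgreSQL database plus an admin tool.
# Both containers share one IP address; volumes are pod-scoped.
apiVersion: v1
kind: Pod
metadata:
  name: postgresql-with-admin
  labels:
    app: postgresql               # label used by the service example below
spec:
  volumes:
  - name: db-data
    emptyDir: {}                  # pod-scoped data volume
  containers:
  - name: postgresql
    image: postgres:16            # official image; version chosen arbitrarily
    env:
    - name: POSTGRES_PASSWORD
      value: example              # illustration only; use a secret in practice
    volumeMounts:
    - name: db-data
      mountPath: /var/lib/postgresql/data
  - name: pgadmin
    image: dpage/pgadmin4         # community admin-tool image
    env:
    - name: PGADMIN_DEFAULT_EMAIL
      value: admin@example.com    # illustration only
    - name: PGADMIN_DEFAULT_PASSWORD
      value: example              # illustration only
    ports:
    - containerPort: 80           # pgAdmin web UI
```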
Labels are used to identify and group related pods across hosts. A service in Kubernetes is a logical set of pods (identified by a service label) that can be accessed as a unit, such as a database service consisting of multiple database instances, each in its own pod. This facilitates the deployment of microservices that span multiple containers; a service proxy distributes requests across those containers.
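A minimal sketch of such a service, assuming the app: postgresql label from the pod example above; every pod carrying that label becomes part of the service, and the service proxy spreads requests across them:

```yaml
# A service: a stable access point for the set of pods whose labels
# match the selector, however many instances exist and wherever they run.
apiVersion: v1
kind: Service
metadata:
  name: postgresql
spec:
  selector:
    app: postgresql               # groups every pod labeled app=postgresql
  ports:
  - port: 5432                    # port clients connect to on the service
    targetPort: 5432              # port the pods' containers listen on
```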
We also see the Kubernetes project driving innovation across adjacent communities, like Apache Mesos and Apache Hadoop YARN in the scheduler and cluster management space. Both of these communities are integrating their solutions with Kubernetes, bringing large-scale cluster management capabilities to application services. This was highlighted recently in a blog post by Hortonworks, which discussed the work they are doing to integrate YARN with Kubernetes and OpenShift v3.
While multiple container orchestration solutions continue to sprout in the Docker community, and others are developing their own platform-specific orchestration and scheduling solutions (e.g., Cloud Foundry Diego), we believe the power of this emerging standard will win out over time.
Summary
Kubernetes is the key component for managing and ensuring the reliability of application deployments in OpenShift v3. It not only orchestrates and schedules container deployments across a cluster of server hosts, but it also automates container health management. Leveraging a declarative model and automated controllers, Kubernetes brings the power of web scale container orchestration and management to OpenShift users.
If you want to play with Kubernetes in our latest upstream Origin builds, check out our recent OpenShift v3 deep dive blog post. In my next post, I will discuss how we are extending Kubernetes to provide additional functionality in OpenShift v3.
About the author
Joe Fernandes is Vice President and General Manager of the Artificial Intelligence (AI) Business Unit at Red Hat, where he leads product management, product marketing, and technical marketing for Red Hat's AI platforms, including Red Hat Enterprise Linux AI (RHEL AI) and Red Hat OpenShift AI.