
Red Hat OpenShift Container Platform is changing the way that clusters are installed, and the way those resulting clusters are structured. When the Kubernetes project began, there were no extension mechanisms. Over the last four-plus years, we have devoted significant effort to producing extension mechanisms, and they are now mature enough for us to build systems upon. This post is about what that new deployment topology looks like, how it is different from what came before, and why we’re making such a significant change.

Pre-managed (3.x) Topology

In 3.x, we developed a traditional product deployment topology, driven in large part by the fact that Kubernetes, at the time, had no extension mechanism. In this topology, the control plane components are installed onto hosts, then started on the hosts to provide the platform to other components and the end user workloads. This matched the general expectations for running enterprise software and allowed us to build a traditional installer, but it forced us to make a few compromises that ended up making things more difficult for the user.

Without an extension mechanism, we combined OpenShift-specific control plane components and Kubernetes control plane components into single binaries. While they are externally compatible and seemingly simple, this layout can be confusing if you are used to a “standard” Kubernetes deployment.

Managed (4.x) Topology

Kubernetes now has a robust set of extension mechanisms suitable for different needs, and in Red Hat OpenShift Container Platform 4 we decided to make use of them. The result is a topology that has a lot more pieces, but each piece is a discrete unit of function focused on doing a single thing well and running on the platform.
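To make the new shape concrete, here is a minimal sketch in Go (not from the original post) that lists those discrete units. In a 4.x cluster, each unit reports its own status through a ClusterOperator resource in the config.openshift.io API group, which any standard Kubernetes client can read; the kubeconfig path below is a placeholder assumption.

```go
// List the discrete components of an OpenShift 4 control plane by reading
// the cluster-scoped ClusterOperator resources with the dynamic client.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build client configuration from a local kubeconfig (placeholder path).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}

	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// ClusterOperator is how each discrete component reports its own
	// availability, progress, and degradation conditions.
	gvr := schema.GroupVersionResource{
		Group:    "config.openshift.io",
		Version:  "v1",
		Resource: "clusteroperators",
	}

	list, err := dyn.Resource(gvr).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	for _, co := range list.Items {
		fmt.Println(co.GetName())
	}
}
```

Each name printed corresponds to one of those discrete units, and each reports its own conditions independently of the others.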


Don’t panic. Separating out all these individual units initially seems like a big step backwards in terms of complexity, but the best measure of a system isn’t simplicity, it is understandability. One binary doing a single thing, failing independently, and clearly reporting that failure is easier to administer than anything that doesn’t.

Improved reliability

Having a binary do a single thing makes determining its health much easier than if it performed multiple functions. This is such a core principle of Kubernetes that we included it in the bedrock of pods: health checks.
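As an illustration of that principle, here is a minimal sketch (not taken from any actual OpenShift component) of a single-purpose binary exposing its health over HTTP; the port, path, and healthy flag are assumptions for the example. A pod’s liveness probe can poll the endpoint, and the kubelet restarts only this one process when it reports failure.

```go
// A single-purpose component that reports its own health over HTTP.
package main

import (
	"log"
	"net/http"
	"sync/atomic"
)

func main() {
	var healthy atomic.Bool
	healthy.Store(true) // flip to false when this component detects a fault

	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		if healthy.Load() {
			w.WriteHeader(http.StatusOK)
			return
		}
		// Reporting failure here affects only this component; the rest of
		// the control plane keeps running.
		w.WriteHeader(http.StatusServiceUnavailable)
	})

	// ... the component's single unit of function would run here ...

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Because the binary does exactly one thing, an unhealthy response from it is unambiguous, which is what makes the next point possible.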

If you have a combined binary, it can be scary to report it as unhealthy and restart it, because if only part of it is unhealthy, restarting the whole thing could do more harm than good to your operations. By separating different functions into separate binaries, we are able to turn discrete components off and back on again without side effects.

Bugs happen. When a particular unit of function inevitably starts to fail, it can no longer bring down unrelated functions by crashing a combined binary.

Faster fixes

By separating components, the interactions between them are reduced to API interactions, which makes the system more approachable for developers. When changes are made to any one binary, the developer can be assured of the state of the other systems in play during testing and during production rollouts, allowing them to isolate change-related problems faster. This gives us faster and more focused fixes for bugs.
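As a hypothetical illustration of interactions reduced to API interactions, the sketch below has one component observe shared state through the API server rather than being compiled into another binary; the namespace and kubeconfig path are assumptions for the example.

```go
// Observe state through the Kubernetes API instead of in-process calls.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Watch ConfigMaps in a namespace; any component that wants to influence
	// this one does so by writing API objects, not by linking code in.
	w, err := client.CoreV1().ConfigMaps("example-namespace").Watch(
		context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for event := range w.ResultChan() {
		fmt.Printf("observed %s event\n", event.Type)
	}
}
```

Because the only contract between components is the API, a developer changing one binary can test it against a known, versioned interface instead of against the internals of every other component.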

More like upstream Kubernetes

Separating our topology makes OpenShift’s relationship to Kubernetes much clearer: specifically, a set of security-related customizations plus a set of extensions that provide the additional OpenShift experience. This separation reduces the barrier to switching vendors and eases common challenges to adopting OpenShift.


