Self-managed OpenShift (Red Hat OpenShift Platform Plus, Red Hat OpenShift Container Platform, Red Hat OpenShift Kubernetes Engine, and Red Hat OpenShift Virtualization Engine) can be used anywhere 64-bit Red Hat Enterprise Linux is certified and supported. Refer to the documentation to learn more about OpenShift deployment methods and supported infrastructure types.
Self-managed OpenShift software editions:
- Red Hat OpenShift Kubernetes Engine: A hybrid cloud, enterprise Kubernetes runtime engine that provides core Red Hat OpenShift functionality to deploy and run applications, which you install and manage in a datacenter, public cloud, or edge environment.
- Red Hat OpenShift Container Platform: A full-featured hybrid cloud, enterprise Kubernetes application platform used to build, deploy, and run applications, which you install and manage in a datacenter, public cloud, or edge environment.
- Red Hat OpenShift Platform Plus: A hybrid cloud platform that allows enterprises to build, deploy, run, and manage intelligent applications at scale across multiple clusters and cloud environments. Multiple layers of security, manageability, and automation provide consistency throughout the software supply chain. OpenShift Platform Plus subscriptions are available for x86-based clusters only.
- Red Hat OpenShift Virtualization Engine: A bare metal-only virtualization infrastructure offering based on Red Hat OpenShift and the open source Kernel-based Virtual Machine (KVM) hypervisor, purpose-built to provide enterprises with a reliable, enterprise-grade solution for deploying, managing, and scaling VMs. A subset of OpenShift functionality, this edition of OpenShift is targeted at virtual machine-only workloads, with containers supported only for infrastructure services (i.e., no support for end-user application containers).
Subscription types
There are 2 types of subscriptions (core-pair and bare-metal node) for self-managed OpenShift with 2 support levels available for each subscription.
Compute node subscriptions are required for the compute nodes in your environment. These can be entitled by core-pairs or by bare-metal nodes:
1. Core-pair (2 cores or 4 vCPUs)
This subscription option is available for OpenShift Kubernetes Engine, OpenShift Container Platform, and OpenShift Platform Plus. Core-pair subscriptions are not applicable to OpenShift Virtualization Engine.
- When entitling CPU cores, count the aggregate number of physical cores or vCPUs across all OpenShift compute nodes running across all OpenShift clusters you want to entitle using core-pair subscriptions.
- Available with Standard 8x5 or Premium 24x7 support SLA.
2. Bare-metal node (1 physical node)
- A physical node is 1 server regardless of the number of CPU sockets in the server, or cores in the CPUs.
- This subscription option is available for all self-managed OpenShift editions, and is the only option for OpenShift Virtualization Engine.
- This subscription is available only for x86 and Arm bare-metal physical nodes where OpenShift is installed directly to the hardware. No third-party hypervisor is allowed.
- This is explicitly not a “virtual datacenter” subscription (like Red Hat Enterprise Linux for Virtual Datacenters where a single subscription gives you unlimited VM guest operating system [OS] installations on any hypervisor host).
- Available with Standard 8x5 or Premium 24x7 support SLA.
Additionally, you will require Red Hat AI Accelerator subscriptions for the accelerators in your environment:
1. AI Accelerator (1 Accelerator)
- This subscription is required for accelerator cards (GPU, TPU, NPU, FPGA, DPU, etc.) that provide compute acceleration for AI workloads and are discrete add-ons, not part of the CPU package.
- The same subscription is used for each physical AI Accelerator regardless of Red Hat OpenShift edition.
- A single AI accelerator subscription is sufficient for Red Hat OpenShift and OpenShift AI if both are installed on the cluster.
- This subscription is not required as long as the accelerator capability is not being used for compute acceleration (for example, DPUs used as SmartNICs for network acceleration only even if they have addressable Arm cores on them that are unused, or GPUs used for rendering graphics and not for AI Acceleration).
- Available with Standard 8x5 or Premium 24x7 support SLA. The SLA must match the SLA of the supporting core-pair or bare-metal node subscription.
When to choose core-pair subscriptions
You will most often choose core-pair subscriptions when you are deploying self-managed OpenShift to public cloud hyperscalers, within an Infrastructure-as-a-Service (IaaS) private cloud, or when you are deploying on a hypervisor such as VMware vSphere, Red Hat OpenStack® Platform, or Nutanix.
Core-pair subscriptions remove the need to attach subscriptions to physical servers and allow the freedom for pods to be deployed across your hybrid cloud whenever and wherever needed.
You can also use core-pair subscriptions on bare-metal servers or devices (i.e., with no hypervisor). Be aware there is usually a compute pod density where bare-metal node subscriptions may be more cost effective.
When using OpenShift Virtualization Engine as a dedicated virtualization platform, you can choose to entitle OpenShift containers on VMs using core-pair subscriptions on top of the bare-metal node subscriptions for the hypervisor itself. You would separately purchase OpenShift self-managed core-pair subscriptions and assign them to the VMs in this environment, just like any other application you may purchase and run as a VM. In this instance, there is a density of cores where switching to the bare-metal node model for self-managed OpenShift, which includes unlimited OpenShift containers on the bare-metal server and support for running those containers in OpenShift VMs, may be more cost effective.
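The core-density crossover described above can be sketched as a simple cost comparison. The prices below are placeholders invented purely for illustration, not Red Hat list prices; the actual break-even point depends on your negotiated pricing.

```python
import math

# Hypothetical per-year prices -- placeholders only, not Red Hat pricing.
CORE_PAIR_PRICE = 1_000         # one core-pair (2 cores / 4 vCPUs) subscription
BARE_METAL_NODE_PRICE = 20_000  # one bare-metal node subscription

def cheaper_model(vm_vcpus_on_host):
    """Compare entitling the VMs on one host via core-pair subscriptions
    against switching that host to a single bare-metal node subscription
    (which includes unlimited OpenShift containers on the server)."""
    core_pair_cost = math.ceil(vm_vcpus_on_host / 4) * CORE_PAIR_PRICE
    if core_pair_cost < BARE_METAL_NODE_PRICE:
        return "core-pair"
    return "bare-metal node"

print(cheaper_model(32))   # low vCPU density entitled -> core-pair
print(cheaper_model(128))  # high vCPU density -> bare-metal node
```

The same comparison applies whether the VMs run on OpenShift Virtualization Engine or a third-party hypervisor; only the bare-metal option additionally requires OpenShift to be installed directly on the hardware.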
Core-pair subscriptions can be distributed to cover all OpenShift compute nodes across all OpenShift clusters. For example, 100 core-pair Red Hat OpenShift Platform Plus subscriptions will provide 200 cores (400 vCPUs) that can be used across any number of compute nodes, across all running OpenShift clusters across your hybrid cloud environments.
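The aggregation rule above can be expressed as a short calculation. The cluster sizes here are illustrative assumptions; the only rule taken from the text is that cores are summed across all compute nodes and clusters before dividing by 2 cores (or 4 vCPUs) per subscription.

```python
import math

def core_pairs_needed(node_core_counts):
    """Return the number of core-pair subscriptions (2 physical cores each)
    needed to cover the aggregate cores of all listed compute nodes."""
    total_cores = sum(node_core_counts)
    # Counting is on the aggregate across all clusters, rounded up.
    return math.ceil(total_cores / 2)

# Illustrative example: compute nodes from three clusters, physical cores each.
cluster_a = [16, 16, 32]
cluster_b = [8, 8]
cluster_c = [64]
print(core_pairs_needed(cluster_a + cluster_b + cluster_c))  # 72 pairs for 144 cores
```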
When to choose bare-metal node subscriptions
Bare-metal node subscriptions are only an option for OpenShift compute nodes deployed to dedicated physical servers, either in your datacenter, in a hosted private cloud on a supported bare-metal offering, or with a hyperscaler on a supported bare-metal offering. Bare-metal node subscriptions are the only option for OpenShift Virtualization Engine, and are required for support of the OpenShift Virtualization feature in the other self-managed OpenShift editions.
Each bare-metal node subscription entitles a single physical node, regardless of the total number of CPU sockets or cores.
Each physical, bare-metal server using per node entitlements can only be used as a single OpenShift node due to the inherent architecture of Kubernetes. Since each node in Kubernetes can only belong to a single cluster, this means that all containers on a bare-metal server will be in the same cluster. This is suitable for resource intensive workloads like OpenShift Virtualization (where each workload is running a full VM), but may not be suitable for others. While OpenShift supports up to 2500 containers on a single node, there are situations, whether for performance or architectural reasons, where you might want to split containers between different nodes or different clusters. This is not possible without using virtualization to create separate compute nodes on the bare-metal server.
A common deployment model for containers is to architect a large number of clusters with smaller numbers of containers each. This model is common in hyperscaler environments, and can be achieved in the datacenter by using a hypervisor to create VMs which become the compute nodes on which containers are deployed. In the case of hypervisors such as VMware vSphere, Red Hat OpenStack Platform, and Nutanix, you must use core-pair subscriptions for OpenShift deployed in VMs.
OpenShift Kubernetes Engine, OpenShift Container Platform, and OpenShift Platform Plus clusters, deployed to bare-metal and entitled with node subscriptions, include OpenShift Virtualization and subscriptions for any virtual OpenShift clusters of the same product type deployed to them. This means that virtual OpenShift clusters deployed to, for example, a bare-metal OpenShift Container Platform cluster inherit OpenShift Container Platform subscriptions from the hosting bare-metal cluster.
An important note is that the OpenShift Virtualization Engine subscription does not include support for containerized application instances, the exception being infrastructure workloads as defined in the OpenShift Virtualization Engine section below. If you wish to run your own containerized application workloads with OpenShift Virtualization Engine, you need to entitle the VMs using core-pair subscriptions for self-managed OpenShift. Alternatively, at higher densities you can choose instead to purchase a bare-metal node subscription to self-managed OpenShift Kubernetes Engine, OpenShift Container Platform, or OpenShift Platform Plus, which allows you to run container-based applications natively on the bare-metal cluster or on virtual clusters that inherit subscriptions as described in the previous paragraph.
It is not possible to mix OpenShift product types within the same cluster: all nodes must be subscribed using the same product type, whether OpenShift Virtualization Engine, OpenShift Kubernetes Engine, OpenShift Container Platform, or OpenShift Platform Plus. However, core-pair and bare-metal node subscriptions can be used together within a single cluster. For example, you cannot have a single bare-metal cluster with some number of OpenShift Virtualization Engine nodes for hosting VMs and other nodes subscribed using OpenShift Platform Plus for hosting containerized applications and virtual OpenShift instances.
How to count AI Accelerator subscriptions
In recent years, specific hardware technologies have come onto the market that allow certain compute workloads to perform with increased speed. These types of hardware devices are collectively referred to as accelerators or AI accelerators in some Red Hat content. There are several types of hardware devices available for modern servers that can be classified as accelerators, including but not limited to GPUs, TPUs, ASICs, NPUs, and FPGAs.
These accelerators are typically a card, board, or other physical device occupying a peripheral component interconnect (PCI) slot in a server. This is almost always the unit quantity you have purchased from the accelerator vendor. For example, in a server whose vendor says “it has 8 GPUs,” this almost always means 8 physical accelerator units.
Each AI accelerator subscription covers 1 physical accelerator device. For example, focusing just on the AI accelerator subscriptions:
- A physical compute node with 4 physical GPU devices would require 4 x AI accelerator subscriptions in addition to the CPU core-pair or node subscriptions covering the compute node.
- A single virtual compute node with 1 physical GPU device presented to the VMs as multiple vGPUs would require 1 AI Accelerator subscription, as counting is based on physical accelerators, not virtual accelerators.
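The two rules above — count physical devices, and ignore virtual partitioning — can be sketched in a few lines. The device-record format here is an assumption made for illustration; only the counting rule comes from the text.

```python
def ai_accelerator_subs(devices):
    """Count AI Accelerator subscriptions: one per physical accelerator
    device used for compute acceleration, regardless of how many vGPUs
    the device is partitioned into."""
    return sum(1 for d in devices if d["used_for_compute"])

# Illustrative inventory for one compute node.
node_devices = [
    {"device": "gpu-0", "vgpus": 4, "used_for_compute": True},
    {"device": "gpu-1", "vgpus": 4, "used_for_compute": True},
    {"device": "dpu-0", "vgpus": 0, "used_for_compute": False},  # SmartNIC only
]
print(ai_accelerator_subs(node_devices))  # 2 -- the vGPU count never matters
```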
Accelerators are only counted when they are used to execute a compute workload. A workload is considered a compute workload when its primary purpose is something other than drawing pixels on a user's screen in near real time or moving data across a network.
This distinction is important because VFX (visual effects) and streaming applications may use GPUs and other accelerator hardware, but the primary purpose in these cases is drawing pixels on a screen. In some cases, the primary function is moving data across a network (such as data processing units dedicated to network functions), which also is not considered compute.
Examples of compute workloads include:
- Traditional software applications, such as those written in Java, Python, and Perl.
- LLMs or other compute-intensive software.
- Data science model training and tuning.
- Scientific modeling and physical simulation like protein folding and fluid dynamics.