
With the release of Red Hat OpenStack Services on OpenShift, there is a major change in the design and architecture that impacts how OpenStack is deployed and managed. The OpenStack control plane has moved from traditional standalone containers on Red Hat Enterprise Linux (RHEL) to an advanced pod-based Kubernetes managed architecture.

Introducing Red Hat OpenStack Services on OpenShift

In this new form factor, the OpenStack control services such as Keystone, Nova, Glance and Neutron, which were once deployed as standalone containers on top of bare metal or virtual machines (VMs), are now deployed as native Red Hat OpenShift pods, leveraging the flexibility, placement, abstraction and scalability of Kubernetes orchestration.

The OpenStack compute nodes that run VMs still rely on RHEL; the difference is that they are now provisioned by Metal3 and configured by an OpenShift operator using Red Hat Ansible Automation Platform behind the scenes. It is worth noting that it’s still possible to bring preprovisioned nodes with RHEL preinstalled.

New approach, new storage considerations

Deploying and managing the OpenStack control plane on top of OpenShift brings several new advantages, but it also comes with new storage considerations.

Previously, the OpenStack control plane was deployed as three “controllers,” which usually took the form of bare metal servers or, in some cases, VMs.

In terms of storage, the OpenStack control services used the server’s local disk(s) to write persistent data (or a network storage backend when booting from your storage area network (SAN)).

With the shift to a native OpenShift approach, the OpenStack control services are dynamically scheduled across OpenShift workers as pods. This approach introduces a number of benefits, but the default pod storage option is ephemeral storage. Ephemeral storage is perfectly fine for stateless services such as a service’s API, but it is not appropriate for services that require persistent data, such as the control plane database: when a pod restarts or is rescheduled, it must be able to get its data back.

Fortunately, OpenShift provides a persistent storage abstraction layer in the form of Persistent Volumes (PVs) and Persistent Volume Claims (PVCs), which enables pods to mount volumes that persist beyond a pod’s lifecycle. This persistent storage framework is tightly coupled with another standard, the Container Storage Interface (CSI), which allows OpenShift to provision volumes from a variety of storage backends, provided the storage vendor offers a certified CSI driver.
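To make the mechanism concrete, here is a minimal sketch of a PersistentVolumeClaim and a pod that mounts it. All names, the container image and the storage class are hypothetical placeholders, not anything specific to Red Hat OpenStack Services on OpenShift; the claim would be bound to a volume provisioned by whichever CSI driver backs the referenced class.

```yaml
# Minimal sketch: a claim for persistent storage and a pod that mounts it.
# Names and storageClassName are hypothetical examples.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: my-csi-class   # any class provided by a certified CSI driver
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi9/ubi-minimal
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /var/lib/demo   # data written here survives pod restarts
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: demo-data
```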

Red Hat OpenStack Services on OpenShift, high level design


This is where the paradigm changes. In previous versions of Red Hat OpenStack, the control services' persistent data was stored on the controllers' local disks, and no further design decisions were needed beyond the size, type, performance and RAID level of the disks.

With OpenStack Services on OpenShift, a storage solution must also be considered for OpenShift alongside the traditional OpenStack storage.

In this article, we dive into the main available options to back OpenShift and OpenStack data for environments that are using Ceph or third-party storage solutions.

Before we get into the details, you may wonder which OpenStack control services need persistent storage:

  • Glance for the staging area and optional cache
  • Galera for storing the database
  • OVN Northbound and Southbound databases
  • RabbitMQ for storing the queues
  • Swift for storing object data when not using external physical nodes
  • Telemetry for storing metrics
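To give an idea of how this storage is requested in practice, the hedged sketch below shows the kind of fields an OpenStackControlPlane custom resource uses to select a storage class and size per-service persistent volumes. The API version, field names and values are assumptions based on the openstack-operator and should be verified against the CRD shipped with your release.

```yaml
# Illustrative sketch only: field names and values are assumptions,
# check the OpenStackControlPlane CRD documentation for your release.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
spec:
  storageClass: lvms-vg1          # StorageClass backing the control plane PVCs
  galera:
    templates:
      openstack:
        replicas: 3
        storageRequest: 5G        # persistent volume size for the database
  rabbitmq:
    templates:
      rabbitmq:
        replicas: 3
```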

Red Hat OpenStack Services on OpenShift with Red Hat Ceph Storage

Ceph is a well known and widely used storage backend for OpenStack. It can serve block storage for Nova, Glance and Cinder, file storage for Manila, and object storage through the S3 and Swift APIs.

The integration between OpenStack Services on OpenShift and Ceph is the same as in previous OpenStack versions: block is served by RADOS block devices (RBD), file by CephFS or network file system (NFS), and object by S3 or Swift.

The different OpenStack services are configured to connect to the Ceph cluster, but what changes is the way you configure it at install time: we now use native Kubernetes Custom Resource Definitions (CRDs) instead of the TripleO templates of previous versions.
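As an example of that CR-based approach, the hedged sketch below shows how Ceph configuration and keyring files, stored in a Secret, might be propagated to the relevant services through the control plane CR's extraMounts mechanism. The secret name, field names and propagation targets are assumptions to be checked against the product documentation.

```yaml
# Illustrative sketch only: the Secret is assumed to contain ceph.conf and the
# client keyring; field names should be verified against your release's CRD.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
spec:
  extraMounts:
    - name: ceph
      region: r1
      extraVol:
        - propagation:
            - CinderVolume
            - GlanceAPI
          extraVolType: Ceph
          volumes:
            - name: ceph
              secret:
                secretName: ceph-conf-files
          mounts:
            - name: ceph
              mountPath: /etc/ceph
              readOnly: true
```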

The main design change is how to serve OpenShift volumes.

Using Ceph across both platforms

The first option is to use the same external Ceph cluster between OpenStack and OpenShift, consolidating the Ceph investment by sharing the storage resources.

Red Hat OpenStack Services on OpenShift design with shared Ceph cluster


In the above diagram, OpenStack consumes Ceph as usual, and OpenShift uses OpenShift Data Foundation (ODF) in external mode to connect to the same cluster. ODF external mode deploys the Ceph CSI drivers that allow OpenShift to provision persistent volumes from a Ceph cluster.
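For instance, once ODF external mode is connected, OpenShift typically exposes RBD- and CephFS-backed storage classes that claims can reference directly. The class name in the sketch below is the usual ODF external default and is an assumption; use whatever classes exist in your cluster.

```yaml
# Sketch: a claim backed by the shared Ceph cluster through ODF external mode.
# The storage class name is an assumption; list the classes in your cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-backed-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: ocs-external-storagecluster-ceph-rbd
```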

OpenStack and OpenShift use different Ceph pools and keys, but architects should review their cluster’s capacity and performance to anticipate any potential impact. It’s also possible to isolate the storage I/O of both platforms by customizing the CRUSH map and allowing data to be stored on different object storage daemons (OSDs).

The design outlined above shares the same Ceph cluster between OpenShift and OpenStack, but they can be different clusters depending on the use case.

Third-party or local storage for OpenShift and Ceph for OpenStack

In some cases, you do not want to share OpenStack and OpenShift data on the same cluster. As mentioned before, it’s possible to use another Ceph cluster, but the capacity needed by the control plane services may not be enough to justify one.

Another option is to leverage the OpenShift workers' local disks. To do so, OpenShift includes an out-of-the-box logical volume manager (LVM)-based CSI operator called LVM Storage (LVMS). LVMS allows dynamic local provisioning of persistent volumes via LVM on the workers' local disks. This has the advantage of direct local disk performance at minimal cost.
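A minimal LVMS configuration might look like the sketch below, which builds a device class from the workers' spare local disks and exposes it as a storage class (typically named lvms-&lt;deviceClass&gt;). The API version, namespace and field names are assumptions to verify against the LVMS documentation.

```yaml
# Hedged sketch of an LVMCluster resource: LVMS aggregates unused local disks
# into an LVM volume group and provisions thin-provisioned PVs from it.
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
  namespace: openshift-storage   # namespace assumed; use the operator's namespace
spec:
  storage:
    deviceClasses:
      - name: vg1
        default: true
        thinPoolConfig:
          name: thin-pool-1
          sizePercent: 90
          overprovisionRatio: 10
```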

On the other hand, because the data is local to a worker, the pods relying on those volumes cannot be evacuated to other workers. This is a limitation to consider, especially if the OpenStack control services are deployed on more than three workers.

It is also possible to rely on an existing third-party backend with a certified CSI driver, which removes the 1:1 pinning between the pod and the volume but can increase the cost. Using ODF internal mode as the OpenShift storage solution is also an option.

The OpenStack integration to Ceph remains the same.

Red Hat OpenStack Services on OpenShift design with Ceph cluster for OpenStack and alternative solution for OpenShift

OpenStack with Ceph hyper-converged

Deploying Ceph hyper-converged with OpenStack compute nodes is a popular solution to combine both compute and storage resources on the same hardware, reducing the cost and hardware footprint.

The integration with Ceph does not differ from an external Ceph cluster, aside from the fact that the compute and storage services are collocated.

The OpenShift storage options are more limited, however, as it is not possible to use the hyper-converged Ceph cluster to back OpenShift persistent volumes.

The options are the same as those outlined in the previous section—OpenShift can rely on LVMS to leverage the local worker disks or use an existing third-party backend with a certified CSI driver.

Red Hat OpenStack Services on OpenShift design with Ceph HyperConverged for OpenStack

OpenStack with third-party storage solutions

For environments that are not using Ceph, the same principle applies. The OpenStack integration does not change: the control and compute services are configured to use an external shared storage backend through iSCSI, FC, NFS, NVMe/TCP or other vendor-specific protocols. Cinder and Manila drivers are still used to integrate the storage solution with OpenStack.

On the OpenShift side, the options are to either use LVMS to leverage the local worker disks or use an existing third-party backend with a certified CSI driver. This third-party backend can be the same as the one used for OpenStack or a different one.
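As an illustration, integrating such a backend on the OpenShift side usually comes down to installing the vendor's certified CSI driver and defining a storage class for it, along the lines of the sketch below. The provisioner name and parameters are hypothetical placeholders and entirely vendor-specific.

```yaml
# Hypothetical example of a storage class for a third-party certified CSI driver.
# The provisioner and parameters are placeholders; use your vendor's values.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vendor-block
provisioner: block.csi.vendor.example.com
parameters:
  fsType: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
```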

Red Hat OpenStack Services on OpenShift design with third party storage

Wrap up

As Red Hat OpenStack moves to a more modern OpenShift-based deployment model, new storage systems need to be considered. Red Hat OpenStack Services on OpenShift offers a broad set of options for storing the OpenStack control services and the end user’s data. Whether you’re using Ceph or not, and whether you want shared storage or to rely on local disks, the different supported combinations will match a vast set of use cases and requirements.

For more details on Red Hat OpenStack Services on OpenShift storage integration, please refer to our planning guide.


About the author

Gregory Charot is a Principal Technical Product Manager at Red Hat covering OpenStack Storage, Ceph integration, Edge computing as well as OpenShift on OpenStack storage. His primary mission is to define product strategy and design features based on customers and market demands, as well as driving the overall productization to deliver production-ready solutions to the market. OpenSource and Linux-passionate since 1998, Charot worked as a production engineer and system architect for eight years before joining Red Hat—first as an Architect, then as a Field Product Manager prior to his current role as a Technical Product Manager for the Cloud Platform Business Unit.

