Earlier this year, writing in The New Stack, I argued that industry-wide moves toward strategic software development — over operational and transactional approaches — place DevOps teams in a new, more central role in business strategy. Filling that role effectively means adapting not just to their growing prominence in formulating project goals, but also to the role hybrid and multicloud environments play in the new, faster, cloud-native world.
“The question today typically revolves around what DevOps teams must do to achieve desired levels of agility, using cloud native platforms at scale to deploy software, at cadences that were unheard of not that long ago,” I argued. “Thanks to agile practices and a brave new world of cloud native infrastructures, developer teams can deploy code several times per day — compared to maybe once every several months — or, in many cases, even longer.”
In the mad rush to deploy rapidly and consistently in stateless container environments, though, unforeseen issues can occur.
A prominent stumbling block I often see organizations face when making the cloud native shift is how to manage data for stateful applications in ephemeral container environments. Cloud native applications can be much smaller, more agile and more easily integrated with other applications and services. Developers may work on applications or services that are part of a broader ecosystem, so when developing and deploying software for cloud native architectures, they must remain aware of how the code they create and distribute is going to interact across an organization's operations.
Containers and microservices offer developers incredible versatility for deployment. Deployments can scale up and down almost instantaneously thanks to the statelessness of the underlying architecture.
However, when it comes to data placement, maintaining data persistence, stability and security can pose challenges — particularly as application architects use code and microservices that might exist in multiple locations.
In early DevOps environments that first used containers, a developer might have attached a Network File System (NFS) mount for CI/CD pipelines, Git repositories or applications. But requirements for data portability, resilience and dynamic provisioning and deprovisioning made this route cumbersome and, particularly today, substandard. Similar issues can arise with proprietary cloud storage infrastructures that are not shareable and that introduce potential points of failure and data security risks.
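To make the contrast concrete, here is a minimal sketch of that older, static pattern using the Kubernetes Python client. The NFS server address, export path and image are placeholders, not real endpoints: the point is that the export is hardwired into the pod spec and must be provisioned and maintained by hand.

```python
# Sketch of the static NFS pattern: the export is hardwired into the pod.
# Server address, path and image are placeholders, not real endpoints.
from kubernetes import client

nfs_volume = client.V1Volume(
    name="shared-data",
    nfs=client.V1NFSVolumeSource(server="nfs.example.internal", path="/exports/ci"),
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="ci-runner"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="runner",
                image="ci-runner:latest",  # placeholder image
                volume_mounts=[
                    client.V1VolumeMount(name="shared-data", mount_path="/workspace")
                ],
            )
        ],
        volumes=[nfs_volume],
    ),
)
```

Every pipeline that needs the share has to repeat this stanza, and moving the data means editing every pod spec that references it.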
The best way to solve the persistent storage conundrum for application development and deployment in stateless, diverse environments is to adopt a storage layer that integrates with your container platform. Having a persistent storage layer in place beforehand can save organizations from expensive backtracking down the road.
It also saves headaches now. When developers work with a Kubernetes orchestrator backed by a dynamic storage platform, it becomes easier for them to create the resources a project needs. (Developers should have confidence that the storage layer also adheres to their data security and resilience requirements for application deployments.)
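As a minimal sketch of what that looks like from the developer's side, the snippet below requests a volume through the Kubernetes Python client. The StorageClass name "fast-rbd" is a placeholder for whatever dynamic class your cluster exposes.

```python
# Sketch: requesting dynamically provisioned storage through Kubernetes.
# The StorageClass name "fast-rbd" is a placeholder.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when run in a pod
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="fast-rbd",  # hypothetical dynamic StorageClass
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

# The provisioner behind the StorageClass creates the volume on demand;
# nothing has to be pre-provisioned by a storage administrator.
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```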
With a viable software-defined storage platform, developer teams can define and adjust their data requirements for a project on the fly, rather than completing the process manually as they would with an alternative like an NFS mount. And they don't need to rely on storage administrators to provision storage on their behalf; they can change their storage configurations as needed.
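Adjusting on the fly can be as simple as patching the claim created above. This sketch assumes the backing StorageClass was created with allowVolumeExpansion enabled.

```python
# Sketch: growing the "app-data" claim in place. Assumes the backing
# StorageClass sets allowVolumeExpansion: true.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

patch = {"spec": {"resources": {"requests": {"storage": "20Gi"}}}}
core.patch_namespaced_persistent_volume_claim(
    name="app-data", namespace="default", body=patch
)
```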
Likewise, applications storing data via a block protocol, such as SQL or NoSQL databases, can tempt organizations to adopt a service provider's proprietary solution without thinking about how that limits their storage availability across different multicloud or regional zones, or how it locks developers into a single provider's solutions. Open source software-defined storage allows for persistent and portable storage across many different kinds of infrastructure, including bare metal, virtual machines (VMs) and public and private cloud environments. Data federation can take place across hybrid and multicloud environments, so developers can place sensitive data where it needs to be and integrate applications and microservices from various multicloud deployments.
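For a database that wants a raw block device rather than a filesystem, the claim can request block mode. Another hedged sketch, with the same placeholder StorageClass:

```python
# Sketch: claiming a raw block volume for a database workload.
# "fast-rbd" is again a placeholder StorageClass name.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

block_pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="db-block"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        volume_mode="Block",  # raw device; no filesystem is imposed
        storage_class_name="fast-rbd",
        resources=client.V1ResourceRequirements(requests={"storage": "50Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="default", body=block_pvc)
```

A pod then consumes the claim through volumeDevices rather than volumeMounts, handing the database a device it can manage directly.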
For large-scale analytics applications such as artificial intelligence (AI) and machine learning (ML) workloads, data scientists must often manage huge volumes of data from multiple locations and devices. Edge devices and Internet of Things (IoT) sources are another example: data aggregation and dissemination from the device edge, through remote staging, to core systems must be delivered seamlessly across the data lifecycle. Often, different storage protocols, from object to block to file, are required for different types of events. The persistent storage layer must be capable of handling these very dynamic and diverse storage requirements.
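Object access in such pipelines is typically served through an S3-compatible endpoint. The sketch below writes a device event with boto3; the endpoint URL, credentials and bucket are placeholders for whatever your object gateway provides.

```python
# Sketch: writing an edge event to an S3-compatible object store.
# Endpoint, credentials and bucket names are placeholders, not real values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example.internal",  # hypothetical gateway
    aws_access_key_id="PLACEHOLDER_KEY",
    aws_secret_access_key="PLACEHOLDER_SECRET",
)

s3.put_object(
    Bucket="edge-telemetry",                 # hypothetical bucket
    Key="device-42/2020-06-01T12-00Z.json",
    Body=b'{"sensor": "temp", "value": 21.5}',
)
```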
Ultimately, developer teams must be able to rely on a standardized platform to automate storage management across often diverse and demanding environments, including multicloud, bare metal and VMs, through a single API. The storage layer should also offer distinct failover advantages for when developers need to scale back or redeploy on an as-needed basis. And it needs to be agile, so developers can provision what they need with near-zero delay.
Cloud native persistent storage offers many of these capabilities and provides significant flexibility and portability for DevOps teams. It can lend agility to software deployments in cloud native environments. It does this while empowering developers, granting them the freedom to manage their own storage needs and to better play their new, more central role in the overall project.
Additional Resources
OpenShift Container Storage: openshift.com/storage
OpenShift | Storage YouTube Playlist
OpenShift Commons ‘All Things Data’ YouTube Playlist
Feedback
To find out more about OpenShift Container Storage or to take a test drive, visit https://www.openshift.com/products/container-storage/.
If you would like to learn more about what the OpenShift Container Storage team is up to or provide feedback on any of the new 4.3 features, take this brief 3-minute survey.
About the author
Michael St-Jean is a Technical Alliance executive focused on building joint solutions with partners that accelerate time to value for organizations' strategic technology initiatives. For over two decades, Michael has worked with cross-functional teams helping organizations solve complex business challenges with innovative technology solutions and strategies.