In this post:
- Understand how outputs from existing pipelines can be used to feed another pipeline that validates a new platform release against existing applications and CNFs.
- Learn how end-to-end testing regimes generate a baseline and metric that service providers can use to assess the performance of existing applications and CNFs with a new platform version.
- Find out how to achieve continuous adoption of new platform releases while maintaining the stability of deployed services.
In two previous posts, I discussed the use of pipelines for cloud-native network functions (CNFs): onboarding pipelines in part one and lifecycle management pipelines in part two.
In this article, I discuss how to use the outputs from the previous pipelines and combine them to achieve automation, consistency and reliability of Day 2 operations at scale.
A pipeline can be used to validate other pipelines, in the context of proving operational readiness and use within a service provider’s production environment. I will use the two previous pipelines as examples:
- A new Red Hat OpenShift version accepted by the lifecycle management (LCM) pipelines.
- Deployment of various applications and CNFs within the service provider environment as a result of the onboarding pipelines.
When a new version of OpenShift has been accepted by the service provider’s lifecycle management pipelines, the end-to-end combination of applications and CNFs that have been accepted by the service provider’s onboarding pipelines needs to be tested and validated. This is achieved using multi-tenant end-to-end integration pipelines, as depicted below. The pipeline illustrates the concept and is not intended to represent any final configuration or definition of this type of pipeline.
Once a new OpenShift cluster version is identified as accepted, all the applications and CNFs that must work together are identified (A). This serves as the input for an ephemeral cluster with all the configurations validated by the lifecycle management pipeline (B). All the applications and CNFs that share a cluster or a multi-tenant cluster in the service provider’s production environment are onboarded into the ephemeral cluster. This pipeline (B) validates that there are no conflicts among the configurations or custom resource definitions (CRDs).
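The conflict check in step (B) can be illustrated with a small sketch. Assuming each tenant declares the custom resource definitions it installs as (API group, kind) pairs — all tenant and CRD names below are hypothetical — a pre-onboarding check could flag two tenants that try to own the same CRD on the shared ephemeral cluster:

```python
# Hypothetical sketch: detect CRD ownership conflicts between tenants
# that are about to be onboarded onto the same ephemeral cluster.
from collections import defaultdict

def find_crd_conflicts(tenant_crds):
    """tenant_crds maps a tenant name to the (group, kind) pairs it installs.
    Returns the CRDs claimed by more than one tenant."""
    owners = defaultdict(list)
    for tenant, crds in tenant_crds.items():
        for group, kind in crds:
            owners[(group, kind)].append(tenant)
    return {crd: claimants for crd, claimants in owners.items() if len(claimants) > 1}

# Example: two CNF vendors both ship a "PacketGateway" CRD in the same API group.
tenants = {
    "cnf-vendor-a": [("net.example.com", "PacketGateway"), ("net.example.com", "Slice")],
    "cnf-vendor-b": [("net.example.com", "PacketGateway")],
    "app-team-c":   [("apps.example.com", "Billing")],
}
print(find_crd_conflicts(tenants))
# → {('net.example.com', 'PacketGateway'): ['cnf-vendor-a', 'cnf-vendor-b']}
```

In a real pipeline this check would run against the rendered manifests of each tenant’s Helm chart or operator bundle rather than a hand-written map.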
Once validated, cross-tenant automated functional testing verifies compatibility among the applications and CNFs that are expected to work together. The pipeline (B) then executes an end-to-end scalability test and generates a baseline for the OpenShift release with this specific combination of applications and CNFs. This baseline serves as a comparison point between existing version combinations and the new version, giving the service provider a metric to track improvement or degradation across versions and combinations of applications and CNFs.
With scalability validated, the new cluster version and combination of applications and CNFs are ready for production (C) and the service provider can set the deployment of any future OpenShift cluster to use this new version.
The multi-tenant end-to-end integration pipelines allow the service provider to move to the continuous adoption of new releases of the platform while maintaining the stability of deployed services. When combining the types of pipelines described in this three-part series, the service provider benefits from the automation, consistency and reliability of a modern process while maintaining the availability and stability of the services provided to its end customers.
These pipelines serve as gatekeeping processes for service provider production environments. When combined with the GitOps operational model, benefits can be extended to Day 2 operations with granular auditability and control, with the output of the pipelines described brought into production environments.
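The gatekeeping idea combined with GitOps can be reduced to a simple rule: the desired platform version tracked in Git is only bumped when every gate the integration pipeline emits has passed. The gate names and config shape below are hypothetical, for illustration only:

```python
# Hypothetical sketch of the GitOps gate: the pipeline's verdicts decide
# whether the desired cluster version in a Git-tracked config may be bumped.
def promote_if_validated(desired_config, new_version, pipeline_results):
    """pipeline_results maps each gate emitted by the integration pipeline
    to a pass/fail boolean. The desired version is only updated when every
    required gate has passed; otherwise the config is returned unchanged."""
    required = {"crd_conflicts", "functional_tests", "scalability_baseline"}
    passed = {name for name, ok in pipeline_results.items() if ok}
    if required <= passed:
        return {**desired_config, "platform_version": new_version}
    return desired_config  # gate closed: keep the current, validated version

config = {"cluster": "prod-east", "platform_version": "current"}
all_passed = {"crd_conflicts": True, "functional_tests": True,
              "scalability_baseline": True}
print(promote_if_validated(config, "candidate", all_passed)["platform_version"])
# → candidate
```

Because the promotion happens in a Git-tracked config, every version bump is a commit: the granular auditability mentioned above falls out of the workflow for free.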
Closing remarks and where Red Hat can help
In this three-part series I have discussed how the use of pipelines can achieve automation, greater consistency and reliability of telecommunications service provider processes. These processes include Infrastructure as Code (IaC), development and operations (DevOps), development, security and operations (DevSecOps), network operations (NetOps) and GitOps.
In part one, I discussed the use of pipelines to onboard applications and the benefit of a digital twin to mitigate the risks of software deployment and to better meet compatibility and compliance requirements of existing service provider platforms. The digital twin concept can be achieved using the OpenShift hosted control plane capability, where a dedicated cluster is used to onboard applications and CNFs.
In part two, I discussed the use of pipelines for lifecycle management and how they facilitate the frequent and more reliable deployment and upgrade of a service provider’s infrastructure or platform while checking whether the software adheres to their governance policies. Red Hat Advanced Cluster Security for Kubernetes helps to safeguard both applications and the underlying infrastructure with built-in security enforcement that reduces operational risk.
In this post, I have discussed how the output of the pipelines described in parts one and two can be used to feed a new pipeline. This pipeline validates a particular OpenShift release against a specific set of onboarded applications and CNFs, allowing service providers to compare performance between different versions.
Red Hat OpenShift simplifies and accelerates the delivery and lifecycle management of applications and CNFs consistently across any cloud environment, and supports continuous innovation and speed for application delivery at scale. With Red Hat OpenShift Pipelines and Tekton, service providers benefit from a CI/CD experience through tight integration with Red Hat OpenShift and Red Hat developer tools, with each step scaling independently to meet the demands of the pipeline.
About the author
William is a Product Manager in Red Hat's AI Business Unit and is a seasoned professional and inventor at the forefront of artificial intelligence. With expertise spanning high-performance computing, enterprise platforms, data science, and machine learning, William has a track record of introducing cutting-edge technologies across diverse markets. He now leverages this comprehensive background to drive innovative solutions in generative AI, addressing complex customer challenges in this emerging field. Beyond his professional role, William volunteers as a mentor to social entrepreneurs, guiding them in developing responsible AI-enabled products and services. He is also an active participant in the Cloud Native Computing Foundation (CNCF) community, contributing to the advancement of cloud native technologies.