We live in a world that has changed the way it consumes applications. The last few years have seen a rapid rise in the adoption of Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS). Much of this can be attributed to the broad success of Amazon Web Services (AWS), which is reported to have grown revenue from $3.1B to $5B last year (Forbes). More and more people, enterprise customers included, are consuming applications and resources that require little to no maintenance, and any maintenance that does happen now goes unnoticed by users. This leaves traditional software vendors struggling to adapt their distribution models to make their software easier to consume. Lengthy, painful upgrades are no longer acceptable to users, forcing vendors to find a solution to this problem.
Let’s face it: the impact of this on traditional software companies is starting to be felt. Their services and methods of doing business are now being compared to a newer, more efficient model, one that is not bogged down by the inefficiencies of the traditional approach. SaaS providers have the advantage that the software runs in their own datacenters, where they have easy access to it and control the hardware, the architecture, the configurations, and so on.
Open source initiatives that target the enterprise market, like OpenStack, have to look at what others are doing in order to appeal to their intended audience. The grueling release cycle of the OpenStack community (major releases every six months) can put undue pressure on enterprise IT teams to update, deploy, and maintain environments, often leaving them unable to keep up from one release to the next. Inevitably, they start falling behind, and in some cases their attempts to update are slower than the software release cycle, so they fall further behind with each release. This is a major hindrance to successful OpenStack adoption.
Solving only one side of the problem
Looking at today’s best practices for upgrading, we can see that the technology hasn’t quite matured yet. And although DevOps allows companies to deliver code to customers faster, it doesn’t solve the problem of installing the new underlying infrastructure; faster is not enough. This situation is even more critical when considering your data security practices. The ability to patch quickly and efficiently is key for companies to deploy security updates when critical security issues are spotted.
Adding to this is the question of how businesses can shorten the feedback loop on development releases. Releasing an alpha or beta, then waiting for people to test it and send relevant feedback, is a long process that causes delays for both the customer and the provider. Yet another friction point.
Efforts are currently being made with the community projects Tempest and Rally to provide better visibility into a cloud’s stability and functionality. These two projects are necessary steps in the right direction; however, they currently lack holistic integration and still only offer insight into a single cloud’s performance. Additionally, they do not yet allow an OpenStack distribution provider to check whether its distribution’s new versions work with specific configurations or hardware. Whatever the solution is, it has to compete with what is currently being offered in the “*aaS” space or it will be seen as outdated and risk losing users.
Automation: A way out
Continuous integration and continuous delivery (CI/CD) is all the rage these days, and it might offer part of the solution. Automation has to play a key role if companies are to keep up. We need to look into ways of making the process repeatable, reliable, incrementally improving, and customizable. Just as developers can no longer claim “it worked on my laptop,” companies cannot limit themselves to saying it worked (or didn’t work) on their own infrastructure. Software providers have to get closer to their customers and share in the pain.
Every OpenStack deployment is a custom job these days. Not everyone is running the same hardware, the same configurations, and so on. This means we have to adapt to those customizations and provide a framework that allows people to test their specific use cases. Once unit testing, integration testing, and functional testing have happened inside the walls of the software provider, the software has to go out into the wild and survive real customer use cases. Just as important, feedback has to be received quickly so that the next iterations can be smaller, which eases the burden of identifying problems and fixing them as needed.
One of the concepts Red Hat is investigating is chaining different CI environments and managing the logs and log analysis from a “central CI”. We’ve been working with customers to validate this concept, testing it first on the equipment of customers and partners who have been able to set hardware aside for us. We want to deploy a new version, verify an update live on premises, and include this step in our gating process before merging code. We are not satisfied unless a change can be deployed and proven to work in a real environment. This means that CI/CD isn’t just about us anymore: it has to work on-site, or a patch is not merged.
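To make the gating idea concrete, here is a minimal sketch of what a “central CI” gate chained to on-site environments could look like. All names here (SiteResult, gate_merge, the site labels) are illustrative assumptions, not a real Red Hat API:

```python
# Hypothetical sketch of a "central CI" gate that aggregates results from
# remote on-site CI environments before a patch is allowed to merge.
# Names and data shapes are illustrative, not an actual Red Hat interface.
from dataclasses import dataclass

@dataclass
class SiteResult:
    site: str           # a customer or partner lab reporting back
    deployed: bool      # did the new version deploy cleanly on-site?
    tests_passed: bool  # did the live update verification pass?

def gate_merge(results):
    """A patch merges only if it deployed and passed on every chained site."""
    failures = [r.site for r in results
                if not (r.deployed and r.tests_passed)]
    return (len(failures) == 0, failures)

ok, failed_sites = gate_merge([
    SiteResult("partner-lab-a", True, True),
    SiteResult("customer-site-b", True, False),
])
# One site failed verification, so the patch is held back and the
# failing site is reported for log analysis.
```

The point of the sketch is the gating rule itself: a single on-site failure blocks the merge, which is what “it has to work on-site, or a patch is not merged” implies.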
Currently in our testing, we receive status reports from different architectures, which allows us to identify whether an issue is specific to a certain configuration, hardware, or environment, or is a more widespread problem that needs to be fixed in the release. Ideally, we envision a point where, once a new version reaches a certain “acceptance threshold,” it is marked as ready for release. It is then automatically pushed out and updated in a customer’s pre-production environment.
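An “acceptance threshold” could be as simple as a pass rate across the reporting architectures and configurations. The sketch below assumes a 90% threshold and a boolean report format purely for illustration; neither is a stated Red Hat policy:

```python
# Illustrative sketch of an "acceptance threshold": a new version is marked
# ready for release only once enough architecture/configuration combinations
# report success. The 0.9 threshold and report shape are assumptions.
def ready_for_release(reports, threshold=0.9):
    """reports maps an architecture/config name to True (pass) or False."""
    if not reports:
        return False  # no evidence yet, so never release
    pass_rate = sum(reports.values()) / len(reports)
    return pass_rate >= threshold

reports = {
    "x86_64-default":  True,
    "x86_64-ceph":     True,
    "ppc64le-default": False,  # isolated failure on one configuration
    "x86_64-sriov":    True,
}
# 3 of 4 combinations pass (0.75 < 0.9), so this version is not yet
# pushed out to pre-production environments.
```

A single failing configuration like this is exactly the signal described above: it points at a configuration-specific issue rather than a widespread one, while still holding the release below the threshold.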
A workflow might look something like this:
[Continuous delivery process diagram] Source (modified): https://en.wikipedia.org/wiki/Continuous_delivery#/media/File:Continuous_Delivery_process_diagram.png
This type of workflow could integrate well with existing tools like Red Hat Satellite. Updates would still be provided as usual, but additional options to test upgrades by leveraging the capabilities of the cloud would be made available. This would give system administrators an added level of certainty before deploying packages to existing servers, along with logs to troubleshoot, should anything go wrong, before pushing to production environments.
Red Hat is committed to delivering a better and smoother upgrade experience for our customers and partners. While there are many questions that remain to be answered, notably around security or proprietary code, there is no doubt in my mind that this is the way forward for software. Automation has to take over the busy work of testing and upgrading to free up critical IT staff members to spend more time delivering features to their customers or users.