Network engineering for the high availability of application services is nothing new. However, as Kubernetes-based environments become more widespread, companies are beginning to look for innovative deployments across multiple types of infrastructures (on-premise, AWS, Azure, IBM Cloud). They then begin to ask what achieving successful application delivery looks like for those types of deployments.
In our series on Davie Street Enterprises (DSE), we've used a fictitious company to illustrate how organizations have implemented protective measures against ransomware, transformed applications, and more.
In this post, we’ll cover how Gloria Fenderson, DSE’s Senior Manager of Network Engineering, plans to use Red Hat OpenShift and the NGINX Ingress Controller to deliver applications on a cluster running across multiple types of underlying infrastructure. Both NGINX and Red Hat solutions are designed to enable container-based environments across multiple clouds; we'll now see how this is accomplished.
Application Delivery Strategy Across Multiple Clouds
The Value of the Hybrid Cloud
DSE will be migrating to a hybrid cloud environment using Red Hat OpenShift. Red Hat OpenShift uses Red Hat Enterprise Linux CoreOS to provide centralized management of OpenShift cluster deployments across different infrastructures from within the cluster itself, whether that cluster runs in DSE’s on-premise environment, in a public cloud, or across both.
This is done to provide the benefits of cloud computing while also leveraging DSE’s existing investment in its on-premise infrastructure. When demand increases, DSE will be able to “burst” into the cloud, scaling applications up to match demand and then dynamically scaling back down to reduce costs and optimize efficiency. Red Hat OpenShift will also provide consistency in how developers and administrators interact with the containerized environment, allowing them to trust that an application built to run on OpenShift will run the same anywhere.
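One declarative way to express that burst-and-contract behavior is a Kubernetes HorizontalPodAutoscaler. The sketch below is illustrative only; the deployment name, namespace, and thresholds are assumptions, not DSE's actual configuration.

```yaml
# Illustrative only: autoscale a hypothetical "storefront" deployment
# between 2 and 10 replicas based on average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: storefront-hpa
  namespace: dse-apps
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: storefront
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Because the cluster spans on-premise and cloud nodes, the scheduler can place the additional replicas wherever capacity exists, which is the "burst" behavior described above.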
Fenderson realizes that getting traffic into and out of the environment (ingress and egress traffic) across different nodes will be critical to a successful deployment. To that end, Fenderson is most concerned about the application delivery aspect of this new environment and wants to keep using NGINX to accomplish this, as her team has done in the past. She knows that NGINX has an ingress controller that has been certified on Red Hat OpenShift for handling application delivery in OpenShift clusters.
From a networking perspective, Fenderson is looking for a solution that allows her team to see the health of individual services running across the different infrastructures, scale those services up and down as needed, and do all this consistently and reliably across the different types of infrastructure. To accomplish this, Red Hat and NGINX have combined forces to support the NGINX Ingress Operator for OpenShift, which helps automate the installation and maintenance of the NGINX Ingress Controller.
Utilizing the latest open source technologies is also a big win for Fenderson, who appreciates the dedication both Red Hat and NGINX have to open source software. With that in mind, she feels confident in telling her team to go ahead with the initial deployment and to run a test application.
Installing the Operator
The certified NGINX Ingress Operator for OpenShift is easy to install and automates the installation and deployment of the NGINX Ingress Controller. Red Hat OpenShift also helps with the management of all your operators through the OperatorHub catalog and Operator Lifecycle Manager (OLM).
First, Fenderson's team downloads the NGINX Certified Operator from the OpenShift OperatorHub. OperatorHub lists all the certified operators that Red Hat jointly supports with its partners so that you know the software you are using will have the support of Red Hat behind it. Below is an example of what installing an operator from the OperatorHub looks like.
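The same installation can also be expressed declaratively with an OLM Subscription. This is a minimal sketch: the package name, channel, and catalog source are assumptions based on how certified operators are typically listed, so verify the actual values shown in OperatorHub before applying it.

```yaml
# A minimal sketch, assuming the certified operator is published as
# "nginx-ingress-operator" in the certified-operators catalog; the channel
# name varies by release, so confirm it in OperatorHub first.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: nginx-ingress-operator
  namespace: openshift-operators
spec:
  name: nginx-ingress-operator
  channel: alpha
  source: certified-operators
  sourceNamespace: openshift-marketplace
```

Applying this with `oc apply -f` has the same effect as clicking Install in the console: OLM pulls the operator and keeps it updated on the chosen channel.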
Deploying the NGINX Ingress Controller with the certified operator
Next, they use the operator to deploy an instance of the NGINX Ingress Controller. They do this by using the YAML example on GitHub to create the NGINX Ingress Controller. The team learns that the NGINX Ingress Controller custom resource can be modified in different ways, for example to include NGINX App Protect or other features needed for their environment.
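A trimmed-down version of that custom resource might look like the following. The exact apiVersion and field names differ between operator releases, so treat this as a sketch and follow the YAML example in NGINX's repository for the version in use.

```yaml
# Illustrative only: a minimal NginxIngressController resource. Field names
# and apiVersion vary by operator release; consult the example in the NGINX
# GitHub repository for the installed version.
apiVersion: k8s.nginx.org/v1alpha1
kind: NginxIngressController
metadata:
  name: my-nginx-ingress-controller
  namespace: nginx-ingress
spec:
  type: deployment
  replicas: 2
  serviceType: LoadBalancer
  image:
    repository: nginx/nginx-ingress
    tag: edge-ubi
    pullPolicy: Always
  nginxPlus: false
  # With an NGINX Plus image, additional features such as NGINX App Protect
  # can be enabled through the corresponding spec fields.
```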
Now that the Ingress Controller and the operator are successfully installed, the team decides to deploy a test application. They will use the GitHub repo NGINX made available to do just that.
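NGINX's kubernetes-ingress repository includes a "cafe" example (coffee and tea services behind a single host) that is commonly used for this kind of smoke test. The abridged Ingress resource below follows that example; the service names, host, and secret name come from the repository's complete example and may change between releases.

```yaml
# Abridged from the cafe example in the NGINX kubernetes-ingress repository.
# The full example also creates the coffee/tea deployments, their services,
# and the cafe-secret TLS secret.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - cafe.example.com
      secretName: cafe-secret
  rules:
    - host: cafe.example.com
      http:
        paths:
          - path: /tea
            pathType: Prefix
            backend:
              service:
                name: tea-svc
                port:
                  number: 80
          - path: /coffee
            pathType: Prefix
            backend:
              service:
                name: coffee-svc
                port:
                  number: 80
```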
Once the team deploys the cafe application, they use the NGINX Plus status page to monitor the status of the services running on the OpenShift nodes deployed across any infrastructure.
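When the controller runs with an NGINX Plus image, one quick way to reach its live activity dashboard during testing is a port-forward to the controller pod. The pod name and namespace below are placeholders, and the port and path are typical defaults rather than guaranteed values.

```shell
# Placeholder pod name and namespace; the dashboard port (commonly 8080 for
# the NGINX Plus Ingress Controller) and path may differ in your deployment.
oc port-forward pod/<nginx-ingress-pod> 8080:8080 -n nginx-ingress
# Then open http://localhost:8080/dashboard.html in a browser.
```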
Conclusion
Fenderson’s team is only beginning to unlock the capabilities of the open source-powered solutions that Red Hat and NGINX have created. The team is excited to begin migrating its applications onto Red Hat OpenShift and to become more flexible in where and how it runs them in a highly available multi-cloud environment.
In case you missed them, you can find all posts in our DSE series in one place. See how Red Hat solutions have helped DSE implement DevSecOps, embrace GitOps, and more.
About the author
Cameron Skidmore is a Global ISV Partner Solution Architect. He has experience in enterprise networking, communications, and infrastructure design. He started his career in Cisco networking, later moving into cloud infrastructure technology. He now works as the Ecosystem Solution Architect for the Infrastructure and Automation team at Red Hat, which includes the F5 Alliance. He likes yoga, tacos and spending time with his niece.