
Cloud, Linux containers, and container orchestration (in the form of Kubernetes) are the topics I hear discussed the most today. Most IT organizations are talking about DevOps and microservices, and the urge to dive into that pool of fresh new experiences is leading many of them to rethink their tooling, culture, and in-house processes. Businesses want all the benefits of this digital transformation, but are you really prepared for this new paradigm? Are you really ready for containers?

When you want to standardize environments, isolate processes, and increase modularity, so you can produce and maintain code and services more effectively, containers are the solution that comes in handy: a small, standardized, isolated footprint that consumes the resources of its host is the perfect recipe.

Given all those benefits, how do we really use and profit from them?

First, when we think about modularity, we have to start breaking the monolithic application into smaller pieces, where one piece does not influence the behavior of another; instead, each piece has a single responsibility. An issue in a small part of a monolithic application can have a cascading effect on the rest of its functionality.

What if you had broken that application up into small services? If you had a problem with credit card payments, the users pulling reports, checking the homepage, or paying with debit cards, bitcoin, and so on should not be affected, because they are using other services. Your services are isolated and independent, so an issue in one container/application should not interfere with another.

Managing Your Containers

Adding Linux containers to your environment and replacing monolithic applications with containers means addressing a lot of questions.

Once you have split your big app into microservices and put them into containers, how are you going to manage them and enjoy their real benefits? Should you set up user permissions? Should you store data inside a container? What about HTTP sessions? What about microservice authentication? One container is okay to manage, but what about ten, one hundred, or a thousand containers? How can you scale up (increase the number of running containers) when your website load increases, and how can you scale down overnight when nobody is accessing it? What about having a way to do that automatically? How and where should your application write its logs, and how can you access them later?

See? That’s a lot of questions. Now, let’s answer them. As mentioned previously, containers include the necessary environment for their application, provide isolation, and are self-contained, which means they do not require other containers or external resources (although it is possible to have a database or other services linked to or dependent on them). And since they are isolated, the premise is that they will behave the same regardless of the environment. So you will not hear “It works on my laptop” any more.

Avoid running as root

The user configured to run your application should not be root. It should have the rights to read from and write to the directories your application needs, and that is it: no more rights than necessary. Assigning the correct permissions and ownership to the necessary folders is a best practice for controlling and maintaining your environment.
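For example, here is a minimal Dockerfile sketch of running as an unprivileged user; the base image, user ID, paths, and run.sh script are assumptions for illustration:

```dockerfile
FROM registry.access.redhat.com/ubi8/ubi-minimal

# ubi-minimal has no useradd by default, so install shadow-utils first,
# then create an unprivileged user and the directories the app may write to
RUN microdnf install -y shadow-utils \
 && useradd --uid 1001 --no-create-home appuser \
 && mkdir -p /opt/app/data \
 && chown -R 1001:0 /opt/app

# Copy the application owned by the unprivileged user, not by root
COPY --chown=1001:0 app/ /opt/app/

# Everything from here on runs without root privileges
USER 1001

ENTRYPOINT ["/opt/app/run.sh"]
```

The group 0 ownership is an OpenShift convention: the platform may run your container with an arbitrary user ID that belongs to the root group.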

Logging

As the application is running now inside a container, and the container itself is ephemeral, logging becomes a little more complex since you can’t just write to /var/log/whatever anymore.

So if the logs of your application are not written to /var/log, what should you do? Send them to the standard output (STDOUT). We do that so an application whose sole purpose is to collect and store logs can do its job, and the logs can then be accessed through that application at any time. There is no need for devs to request logs from the Ops team and no need to copy log files around for devs to analyze. A solution used to capture, manage, and later view the logs from containers is discussed below.
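If the application itself cannot easily be changed, a common pattern (the official nginx image uses it, for example) is to symlink its usual log files to the container’s standard streams. A minimal sketch, assuming the app writes to /var/log/myapp:

```dockerfile
# Redirect the app's usual log files to the container's standard streams,
# so the container runtime and the log collector pick them up automatically
RUN mkdir -p /var/log/myapp \
 && ln -sf /dev/stdout /var/log/myapp/access.log \
 && ln -sf /dev/stderr /var/log/myapp/error.log
```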

Stored data

If your application uses any directory inside the container to persist configuration or any other kind of data, for example by recording data into a database, you should first rethink the need to write locally. With containers in play, databases and data directories should live on dedicated storage that your container reads from and writes to, integrated through your container orchestration solution. Why integrated? If you use OpenShift, you can integrate your container platform with storage solutions such as Red Hat OpenShift Container Storage, so the storage solution provides the requested storage on demand for applications running in containers on OpenShift.
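In Kubernetes and OpenShift terms, the application claims storage from the platform instead of writing into its own filesystem. A minimal sketch, where the claim name, size, image, and mount path are assumptions:

```yaml
# Ask the platform for storage on demand
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
# Mount the claim where the application expects to write
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: myapp
      image: quay.io/example/myapp:latest
      volumeMounts:
        - name: data
          mountPath: /opt/app/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: myapp-data
```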

Avoid the “CMD” statement

Many examples on the internet show the execution of your application using the “CMD” statement in your Dockerfile. This statement does not guarantee that your application will run, because any command passed to the container at run time replaces CMD entirely.

Use “ENTRYPOINT” instead. Parameters passed to the container are appended to ENTRYPOINT rather than replacing it, so ENTRYPOINT guarantees that your application/service/process will be executed regardless of the parameters passed to the container.
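A minimal sketch of the difference (the server binary and flags are hypothetical):

```dockerfile
# With CMD alone, `docker run myimage sh` replaces the whole command
# and the application never starts:
CMD ["/opt/app/server", "--port", "8080"]

# With ENTRYPOINT, the application always runs; anything passed on
# `docker run` is appended to it as extra arguments:
ENTRYPOINT ["/opt/app/server"]
CMD ["--port", "8080"]
```

In the second form, CMD only supplies default arguments for ENTRYPOINT, which callers can override without bypassing the application.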

Application authentication

One thing that should also be considered is that when you start breaking your service up into microservices, some responsibilities are taken away from it. One of them is authentication: your application is not responsible for it. What if an authentication app, designed only for that, could authenticate and authorize your microservices? Then your app would only have the responsibility to do what it was designed for. To implement this kind of service, you should consider using single sign-on applications such as Red Hat Single Sign-On.
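To illustrate what that delegation can look like, here is a minimal Python sketch of a microservice validating a bearer token against an OAuth2 token introspection endpoint (RFC 7662), which Red Hat Single Sign-On supports; the server URL, realm, and client credentials are hypothetical:

```python
import requests

# Hypothetical SSO details -- replace with your own realm and client
INTROSPECT_URL = ("https://sso.example.com/auth/realms/myrealm"
                  "/protocol/openid-connect/token/introspect")
CLIENT_ID = "reports-service"
CLIENT_SECRET = "s3cr3t"

def is_token_active(bearer_token: str) -> bool:
    """Ask the SSO server whether a token is valid (RFC 7662)."""
    resp = requests.post(
        INTROSPECT_URL,
        data={"token": bearer_token},
        auth=(CLIENT_ID, CLIENT_SECRET),  # this service authenticates itself
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json().get("active", False)
```

The microservice never handles passwords or user records; it only asks the authentication app whether a token is valid.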

Caching

Once again, containers are ephemeral, and it is not the responsibility of your application to cache data by itself. That responsibility should be assigned to an application designed for it, which is one reason caching applications are becoming more and more popular these days. Redis, for example, is an open source project that helps your microservices cache data.
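As a minimal sketch of the pattern, a service can check Redis before doing expensive work; the host name, key format, TTL, and build_report function are hypothetical:

```python
import json
import redis

# In a container platform, the host would typically be a service name
cache = redis.Redis(host="redis.myproject.svc", port=6379)

def get_report(report_id: str) -> dict:
    """Return a report, using Redis as a shared cache with a 5-minute TTL."""
    key = f"report:{report_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)              # cache hit
    report = build_report(report_id)           # expensive work (hypothetical)
    cache.setex(key, 300, json.dumps(report))  # keep it for 300 seconds
    return report
```

Because the cache lives outside the containers, every replica of the service sees the same cached data, and nothing is lost when a single container restarts.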

Thinking a bit bigger, what about cross-site data replication (between data centers)? Caching applications such as Red Hat JBoss Data Grid help your services store and fetch data, and their cross-site replication ability helps you avoid data loss.

Container Orchestration

Containers are not a silver bullet. By themselves, containers cannot solve your issues with complexity, automation, and scalability. One way to address those matters is by adopting a container orchestration tool, which is responsible for building, deploying, managing, and scaling container applications.

You can use Red Hat’s OpenShift Kubernetes-based platform, for example, to build your application into a container, scale up and maintain the desired number of container replicas in your environment, and scale down when those containers are not used anymore. High availability and scalability features are brought to you in one shot.
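In practice, the desired replica count is part of the workload definition, and the platform keeps reality in line with it. A minimal sketch, where the names and image are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments
spec:
  replicas: 3          # the platform keeps three containers running
  selector:
    matchLabels:
      app: payments
  template:
    metadata:
      labels:
        app: payments
    spec:
      containers:
        - name: payments
          image: quay.io/example/payments:latest
```

You can then scale manually with a command such as `oc scale deployment payments --replicas=10`, or attach a horizontal pod autoscaler to adjust the count based on load.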

OpenShift also offers the ability to do health checks on your containers, so they only receive traffic while they’re running and ready to receive requests. If a container fails its “liveness” probe, OpenShift will kill the container and restart it based on policy; if it fails its readiness check, it is removed from the pool receiving traffic. Consequently, users should not be routed to containers that aren’t serving requests.
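A minimal sketch of both probes as they would appear in a container spec; the paths, port, and timings are assumptions:

```yaml
# Inside a container definition in the pod template
livenessProbe:            # failing this gets the container restarted
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 20
readinessProbe:           # failing this removes the pod from service traffic
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
```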

OpenShift

Therefore, in order to use containers really well, you will need to think about scale and orchestration while addressing all of the points above. That becomes possible when OpenShift adoption comes into play: it truly enables your team to deploy applications into containers. By default, OpenShift doesn’t allow containers to be run as the root user, although it is possible to configure an exception.

Your logs can be consulted on a Kibana dashboard provided by the EFK stack (Elasticsearch, Fluentd, and Kibana), which is multitenant.

The application data should be stored on a storage solution that is easy to plug into OpenShift such as OpenShift Container Storage.

Since OpenShift is based on Kubernetes, it handles orchestration, and it handles deployment and build versioning as well! Also, a router receives all external traffic and points it to your application services, and the services balance the requests between containers. With all of these topics in mind, you can certainly bring containers into your business.

 

Renato Puccini works as an OpenShift TAM in Brazil, South America. He is experienced and certified in container management and development, an OpenShift Specialist by Red Hat, with system and JBoss administration certifications (RHCSA, RHCJA). Renato has guided customers in South America through transitioning from monolithic applications to containerized microservices applications in cloud environments. His daily activities also include bringing best practices in CI/CD, pipeline automation, OpenShift management, and container deployment to Red Hat customers.

 

