What is a GitOps workflow?

GitOps is a modern software development and deployment approach where the entire infrastructure and application lifecycle is managed through Git repositories as the single source of truth. In this workflow, developers commit code changes and infrastructure configurations to Git repositories, triggering automated CI/CD pipelines that build, test, and deploy applications and infrastructure changes based on the Git repository state. 

Operators and administrators use declarative configuration files stored in Git to define the desired infrastructure state, along with continuous synchronization tools like Argo CD to ensure that the live environment matches the Git repository. This provides version control, collaboration, and auditability for both code and infrastructure, leading to more efficient and reliable software delivery and infrastructure management.

It helps you manage your cluster configuration and application deployments by introducing automation to a previously manual process. For example, GitOps can help you manage Red Hat® OpenShift® Container Platform clusters across multi-cluster Kubernetes environments. 

These capabilities help alleviate the challenges that come with a multi-cloud approach, such as the need for consistency, security, and collaboration as workloads move across public cloud, private cloud, and even on-premises environments.

In this article, we’ll cover the basics of GitOps workflows.

In GitOps, the Git repository is the sole source of truth for system and application configuration. It consists of a declarative description of the infrastructure for your environment and works in tandem with automated processes handled by GitOps tooling like Argo CD. This automation makes the actual state of your environment conform to the described state. You can also use the repository to view the list of changes to the system state, because the Git history provides change tracking. 
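
For example, because every change to the system state is a commit, you can inspect the history of any tracked configuration file. The short Python sketch below shells out to the standard git log --oneline command to list that history; the file path shown is only a placeholder for whatever configuration file your repository tracks.

```python
import subprocess

# Illustrative only: list the change history of one tracked configuration file.
# The file path used in the example below is a placeholder.
def config_history(path):
    """Return one line per commit that touched the given file."""
    result = subprocess.run(
        ["git", "log", "--oneline", "--", path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.splitlines()

if __name__ == "__main__":
    for line in config_history("clusters/prod/deployment.yaml"):
        print(line)
```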

Additionally, storing your infrastructure and configuration as code can help you reduce sprawl. You can store the configuration of clusters and applications as code within Git repositories. 

Infrastructure as Code (IaC) is the managing and provisioning of infrastructure through code instead of through manual processes.

With IaC, configuration files are created that contain your infrastructure specifications. This makes it easier to edit and distribute configurations, while ensuring that you provision the same environment every time. By codifying and documenting your configuration specifications, IaC aids configuration management and helps you to avoid undocumented, ad-hoc configuration changes. There are 2 ways to approach IaC: declarative or imperative.

A declarative approach defines the desired state of the system, including what resources you need and any properties they should have, and an IaC tool will configure it for you. It also keeps a list of the current state of your system objects. An imperative approach instead defines the specific commands needed to achieve the desired configuration, and those commands then must be executed in the correct order. 

Many IaC tools use a declarative approach and will automatically provision the desired infrastructure. If you make changes to the desired state, a declarative IaC tool will apply those changes for you. An imperative tool will require you to figure out how those changes should be applied.
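
To make the difference concrete, here is a minimal Python sketch rather than a real IaC tool: the declarative half describes what should exist and leaves the steps to a reconcile function, while the imperative half lists the exact commands to run in order. All resource names and commands in it are hypothetical.

```python
# Minimal, illustrative sketch only -- not a real IaC tool.
# Resource names and the pretend commands below are hypothetical.

# Declarative: describe WHAT you want; the tool decides HOW to get there.
desired_state = {
    "network": {"name": "app-net", "cidr": "10.0.0.0/16"},
    "servers": [{"name": "web-1", "size": "medium"},
                {"name": "web-2", "size": "medium"}],
}

def reconcile(desired, actual):
    """Compare desired vs. actual state and return the changes to apply."""
    changes = []
    if desired["network"] != actual.get("network"):
        changes.append(("update_network", desired["network"]))
    for server in desired["servers"]:
        if server not in actual.get("servers", []):
            changes.append(("create_server", server))
    return changes

# Imperative: spell out each step yourself, in the right order.
imperative_steps = [
    "create network app-net --cidr 10.0.0.0/16",
    "create server web-1 --size medium --network app-net",
    "create server web-2 --size medium --network app-net",
]

if __name__ == "__main__":
    actual_state = {"network": None, "servers": []}  # pretend nothing exists yet
    print(reconcile(desired_state, actual_state))
    print(imperative_steps)
```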

IaC is an important part of implementing DevOps practices and continuous integration/continuous delivery (CI/CD). IaC takes away the majority of provisioning work from developers, who can execute a script to have their infrastructure ready to go. That way, application deployments aren’t held up waiting for the infrastructure, and sysadmins aren’t managing time-consuming manual processes. 

CI/CD relies on ongoing automation and continuous monitoring throughout the application life cycle, from integration and testing to delivery and deployment. Aligning development and operations teams through a DevOps approach that uses IaC leads to fewer errors, fewer manual deployments, and fewer inconsistencies.

GitOps can be considered an evolution of Infrastructure as Code (IaC) that uses Git as the version control system for infrastructure configurations.

A pipeline is a process that drives software development through a path of building, testing, and deploying code, also known as continuous integration and continuous delivery/deployment (CI/CD). The objective of automating the pipeline is to minimize human error and maintain a consistent process for how software is released. Steps in the pipeline can include compiling code, running unit tests, performing code analysis and security scans, and creating binaries. For containerized environments, this pipeline would also include packaging the code into a container image to be deployed across a hybrid cloud.
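
As a rough illustration only, a pipeline can be thought of as an ordered list of stages in which each stage must succeed before the next one runs. The Python sketch below models that idea; the stage names and make commands are placeholders, not the configuration format of any real CI/CD product.

```python
import subprocess

# Hypothetical pipeline definition: stage name -> shell command.
# Real pipelines are declared in the CI system's own configuration format.
PIPELINE = [
    ("compile",       "make build"),
    ("unit-tests",    "make test"),
    ("code-analysis", "make lint"),
    ("security-scan", "make scan"),
    ("build-image",   "make image"),  # package code into a container image
]

def run_pipeline(stages):
    """Run each stage in order; stop at the first failure."""
    for name, command in stages:
        print(f"--- stage: {name} ---")
        result = subprocess.run(command, shell=True)
        if result.returncode != 0:
            print(f"stage '{name}' failed; aborting pipeline")
            return False
    print("pipeline succeeded")
    return True

if __name__ == "__main__":
    run_pipeline(PIPELINE)
```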

CI/CD is the backbone of a DevOps methodology, bringing developers and IT operations teams together to deploy software. As custom applications become key to how companies differentiate, the rate at which code can be released has become a competitive differentiator.

CI/CD pipelines are usually triggered by an external event, like code being pushed to a repository. In a GitOps workflow, changes are made through pull requests that modify the desired state in the Git repository.

To roll out a new release using a GitOps workflow, a pull request is made in Git, which makes a change to the declared state of the cluster. The GitOps operator, which sits between the GitOps pipeline and the orchestration system, picks up the commit and pulls in the new state declaration from Git.  

Once the changes are approved and merged, they are applied automatically to the live infrastructure. Developers can continue to use their standard workflows and CI/CD practices.

When using GitOps with Kubernetes for your workflow, the operator will often be a Kubernetes operator.

A Kubernetes operator is a method of packaging, deploying, and managing a Kubernetes application. It is an application-specific controller that extends the functionality of the Kubernetes API to create, configure, and manage instances of complex applications on behalf of a Kubernetes user. The operator builds upon the basic Kubernetes resource and controller concepts, but includes domain or application-specific knowledge to automate the entire life cycle of the software it manages. 

Within GitOps, the operator compares the desired state in the repository to the actual state of the deployed infrastructure. The operator will update the infrastructure whenever a difference is noticed between the actual state and what exists in the repository. The operator can also monitor a container image repository and make updates in the same way to deploy new images.
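
The following Python sketch illustrates that comparison loop. The fetch and apply helpers are hypothetical stand-ins for the Git and cluster API calls that a real operator, such as Argo CD, would make.

```python
import time

# Hypothetical helpers standing in for Git and cluster API calls.
def fetch_desired_state(repo_url):
    """Read the declarative configuration currently in the Git repository."""
    ...

def fetch_actual_state(cluster):
    """Read the live state of the deployed resources from the cluster."""
    ...

def apply_changes(cluster, desired):
    """Update the cluster so that it matches the desired state."""
    ...

def reconcile_forever(repo_url, cluster, interval_seconds=60):
    """Continuously correct any drift between Git and the live cluster."""
    while True:
        desired = fetch_desired_state(repo_url)
        actual = fetch_actual_state(cluster)
        if desired != actual:          # drift detected
            apply_changes(cluster, desired)
        time.sleep(interval_seconds)
```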

Observability refers to the ability to monitor, measure, and understand the state of a system or application by examining its outputs, logs, and performance metrics. In modern software systems and cloud computing, observability plays an increasingly crucial role in ensuring the reliability, performance, and security of applications and infrastructure.

Observability absorbs and extends classic monitoring systems and helps teams identify the root cause of issues. It allows stakeholders to answer questions about their application and business, including forecasting and predictions about what could go wrong. 

Benefits of observability: 

  • Improved reliability: Detect and resolve issues before they escalate, minimizing downtime and ensuring that systems remain available to users.
  • Efficient troubleshooting: Quickly identify the root cause of issues and resolve them efficiently with deep insights into the behavior of a system.
  • Optimized performance: Identify areas for optimization, such as bottlenecks in the system or underutilized resources, allowing for more efficient resource allocation and improved performance.
  • Data-driven decision-making: Receive up-to-date system performance and behavior information, enabling data-driven decision making and continuous improvement.

Red Hat® OpenShift® Observability solves modern architectural complexity by connecting observability tools and technologies to create a unified observability experience. The platform is designed to provide real-time visibility, monitoring, and analysis of various system metrics, logs, traces, and events to help users quickly troubleshoot issues before they impact their applications or end-users.

Red Hat OpenShift GitOps is an operator that installs and configures an Argo CD instance for you. It manages your infrastructure configuration and application deployments, organizing the deployment process around these configuration repositories. At least two repositories are always central to the process: an application repository with the source code and an environment configuration repository that defines the desired state of the application.

To maintain cluster resources, Red Hat OpenShift GitOps uses Argo CD, an open source tool that handles the continuous deployment portion of continuous integration and continuous deployment (CI/CD). Argo CD acts as a controller for Red Hat OpenShift GitOps by monitoring application state descriptions and configurations, as defined in a Git repository. It compares the defined state to the actual state, and then reports any configurations that deviate from the specified description.

Administrators can resync configurations to the defined state based on these reports. That resyncing can be manual or automated. In the case of automation, the configuration is essentially “self-healing.”
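
To give a sense of what this looks like in practice, the sketch below builds an Argo CD Application definition with automated, self-healing sync as a Python dictionary and prints it. In a real setup this would be written as a YAML manifest; the repository URL, path, and namespaces here are placeholders.

```python
import json

# Sketch of an Argo CD Application with automated sync and self-healing.
# The repository URL, path, and namespaces below are placeholders.
application = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Application",
    "metadata": {"name": "my-app", "namespace": "openshift-gitops"},
    "spec": {
        "project": "default",
        "source": {
            "repoURL": "https://example.com/org/environment-config.git",
            "path": "environments/prod",
            "targetRevision": "main",
        },
        "destination": {
            "server": "https://kubernetes.default.svc",
            "namespace": "my-app",
        },
        # With automated sync, Argo CD resyncs drifted resources on its own,
        # making the configuration effectively "self-healing".
        "syncPolicy": {"automated": {"prune": True, "selfHeal": True}},
    },
}

if __name__ == "__main__":
    print(json.dumps(application, indent=2))
```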

In other words, Red Hat OpenShift GitOps facilitates an optimal GitOps workflow, in which developers commit their code and configuration changes to Git repositories, which in turn trigger automated CI/CD pipelines. These pipelines are responsible for building, testing, and deploying applications and infrastructure based on the state of the Git repository. In this case, Red Hat OpenShift GitOps is the operator that defines the desired infrastructure state using declarative configuration files stored in Git. Argo CD then ensures that the actual environment consistently aligns with the state specified in the Git repository. This approach fosters version control, collaboration, and traceability for both code and infrastructure, ultimately streamlining software delivery and infrastructure management while bolstering reliability.

Now you have a conceptual understanding of how GitOps workflows can increase productivity and the velocity of development and deployments, while improving the stability and reliability of systems.

Next, give development with GitOps a try for yourself.
