How to deploy multicluster applications with OpenShift and GitOps

Learn how to use Red Hat Advanced Cluster Management for Kubernetes and Argo CD to handle complex deployments in the most automated way possible.
[Image: A large seaport with many shipping containers. Photo by CHUTTERSNAP from Unsplash]

Modern applications are increasingly complex and distributed, and managing the infrastructure and services that support them is a significant challenge for organizations. Container orchestrators like Kubernetes and Red Hat OpenShift Container Platform are popular solutions, but as scale and complexity grow, so does the challenge of managing the supporting systems.

This article explores using Red Hat Advanced Cluster Management for Kubernetes (RHACM) and a GitOps approach using Argo CD (upstream of Red Hat OpenShift GitOps) to install Red Hat OpenShift Service Mesh on OpenShift (although you can port this to any other operator).

[ Learn how to extend the Kubernetes API with operators. ]

OpenShift Service Mesh provides powerful features for administering microservices-based applications, including traffic management, security, observability, and platform integration. However, deploying and managing a mesh at scale is complex, especially in large clusters.

Integrating RHACM with Argo CD can make this easier. Using Argo CD's ApplicationSets controller to handle complex deployments simplifies the process of installing and managing applications including OpenShift Service Mesh, improves the reliability and consistency of clusters, and introduces disaster recovery capabilities. With GitOps, all infrastructure and application configuration changes are managed through pull requests to a Git repository, ensuring that changes are tracked, reproducible, and reversible if necessary.

The following sections explore the benefits of each of these approaches in more detail and describe the step-by-step multicluster installation of each of these components in the most automated way possible.

Set up a hands-on lab

If you want a hands-on opportunity to try these tools yourself, build a lab environment using the following steps. The hands-on lab uses two OpenShift 4.x clusters.

[Image: Hands-on lab architecture (Ignacio Lago, Pablo Castelo, CC BY-SA 4.0)]

Tested versions:

  • Red Hat OpenShift 4.10, 4.11, and 4.12
  • Red Hat Advanced Cluster Management for Kubernetes 2.6.3
  • OpenShift GitOps operator 1.7

Tools required: the commands in this article use the oc CLI, git, and tree (some of these tools are available through the web terminal provided by the Web Terminal operator).

[ Learn how to get the most out of Red Hat OpenShift Service on AWS (ROSA) in this series of videos. ]

Install the RHACM and OpenShift GitOps operators

Install RHACM and OpenShift GitOps in the first cluster.

You can use Kustomize templates to install the RHACM operator.

Clone the repository:

$ git clone https://github.com/ignaciolago/rhacm-gitops.git
Cloning into 'rhacm-gitops'...
remote: Enumerating objects: 74, done.
remote: Counting objects: 100% (74/74), done.
remote: Compressing objects: 100% (51/51), done.
remote: Total 74 (delta 20), reused 74 (delta 20), pack-reused 0
Unpacking objects: 100% (74/74), 11.27 KiB | 678.00 KiB/s, done.

The directory structure looks like this:

$ tree -L 2
.
├── README.md
├── bootstrap
│  ├── 00-openshift-gitops
│  ├── 01-advance-cluster-management
│  ├── 02-import-cluster
│  ├── 03-acm-integration
│  ├── 04-policies
│  ├── 05-demo-app
│  └── 06-service-mesh-operators
└── resources
    ├── 01-acm-operator
    ├── 02-import-cluster
    ├── 03-acm-integration
    ├── 04-policies
    ├── 05-demo-app
    └── 06-service-mesh-operators
15 directories, 1 file

This layout supports a declarative approach with Kustomize: the bootstrap folders contain the Argo CD objects, and the resources folders contain the manifests they deploy.

Install the OpenShift GitOps operator

To install the OpenShift GitOps operator, run:

$ oc apply -k bootstrap/00-openshift-gitops/
namespace/openshift-gitops created
clusterrolebinding.rbac.authorization.k8s.io/argocd-rbac-ca created
subscription.operators.coreos.com/openshift-gitops-operator created
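
As the output shows, bootstrap/00-openshift-gitops applies a namespace, an RBAC binding, and an operator Subscription. A minimal sketch of such a Subscription follows; the channel and namespace are assumptions, so check the repository for the actual manifest:

---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-gitops-operator
  namespace: openshift-operators # assumption; the repo may use another namespace
spec:
  channel: latest # assumption
  name: openshift-gitops-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic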

Wait until all resources are running.

$ oc get pods -n openshift-gitops
NAME                                                       READY  STATUS   RESTARTS  AGE
cluster-977d46b6d-mf8qs                                    1/1    Running  0         67s
kam-5c87945c98-bkn82                                       1/1    Running  0         67s
openshift-gitops-application-controller-0                  1/1    Running  0         32s
openshift-gitops-applicationset-controller-c5f76984-7gdz4  1/1    Running  0         34s
openshift-gitops-dex-server-857dc8b577-gdd5k               1/1    Running  0         35s
openshift-gitops-redis-87698688c-jqd4b                     1/1    Running  0         65s
openshift-gitops-repo-server-6b8dbb9b5d-wbn7v              1/1    Running  0         37s
openshift-gitops-server-776c8cd579-2gcnk                   1/1    Running  0         36s
openshift-gitops-server-7789547fb5-f8vpw                   0/1    Running  0         37s

Check that the Argo CD console is accessible and the pods are in Running status:

$ oc get route openshift-gitops-server -n openshift-gitops --template='https://{{.spec.host}}'

https://openshift-gitops-server-openshift-gitops.apps.cluster-57lgk.57lgk.sandbox2634.opentlc.com
[Image: Argo CD console login screen (Ignacio Lago, Pablo Castelo, CC BY-SA 4.0)]

Log in to the Argo CD console with your OpenShift credentials.

[Image: Argo CD console with user logged in (Ignacio Lago, Pablo Castelo, CC BY-SA 4.0)]

Install the RHACM operator and create the MultiClusterHub custom resource

Run the following commands:

$ oc apply -k bootstrap/01-advance-cluster-management/
appproject.argoproj.io/advance-cluster-management created
applicationset.argoproj.io/advance-cluster-management created
$ watch oc get pod -n open-cluster-management
Every 2.0s: oc get pods -n open-cluster-management
NAME                                                             READY  STATUS   RESTARTS  AGE
multicluster-observability-operator-7564f7d4f-qgjdp              1/1    Running  0         110s
multicluster-operators-application-6586bd956d-lqfsn              3/3    Running  0         58s
multicluster-operators-channel-6d74647db7-kp4k7                  1/1    Running  0         45s
multicluster-operators-hub-subscription-5555c5d88c-zkbgj         1/1    Running  0         45s
multicluster-operators-standalone-subscription-75bf6c5684-xnk59  1/1    Running  0         45s
multicluster-operators-subscription-report-78b7fdf784-xn2q8      1/1    Running  0         45s
multiclusterhub-operator-848bcd6c8-dxbw9                         1/1    Running  0         110s
submariner-addon-cd889f6b9-7dn6b                                 1/1    Running  0         110s
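
The ApplicationSet you just applied syncs the RHACM operator Subscription and the MultiClusterHub custom resource, presumably from resources/01-acm-operator. A minimal MultiClusterHub sketch looks like this; the repository's version may carry additional settings:

---
apiVersion: operator.open-cluster-management.io/v1
kind: MultiClusterHub
metadata:
  name: multiclusterhub
  namespace: open-cluster-management
spec: {}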

Wait until the RHACM operator is installed. It can take up to 10 minutes for the MultiClusterHub custom resource to report Running in its status.phase field:

$ oc get mch -o=jsonpath='{.items[0].status.phase}' -n open-cluster-management
Running

You now have RHACM and Argo CD installed. Next, configure the second cluster.

[ For more on OpenShift and Argo CD, download the complimentary eBook Getting GitOps. ]

Add the second cluster to RHACM

Since you are deploying to multiple clusters simultaneously, import a second cluster that is already running. You will then see and manage it from RHACM.

Edit the following YAML file (resources/02-import-cluster/cluster-import.yaml) to import it. You could perform this task through the GUI, but the goal of this article is to automate all tasks, including this one:

---
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: cluster_name # for this example, replace with sandbox1664
  labels:
    name: cluster_name # change this; in this example it is sandbox1664
    cloud: auto-detect
    vendor: auto-detect
    cluster.open-cluster-management.io/clusterset: gitops-clusters
  annotations: {}
spec:
  hubAcceptsClient: true
---
apiVersion: v1
kind: Secret
metadata:
  name: auto-import-secret
  namespace: cluster_name
stringData:
  autoImportRetry: "2"
  token: token_for_cluster # use your token
  server: https://api-cluster-url.com:6443 # your server API URL
type: Opaque
---
apiVersion: agent.open-cluster-management.io/v1
kind: KlusterletAddonConfig
metadata:
  name: cluster_name
  namespace: cluster_name # change this
spec:
  clusterName: cluster_name # change this
  clusterNamespace: cluster_name # change this
  clusterLabels:
    name: cluster_name # change this
    cloud: auto-detect
    vendor: auto-detect
    cluster.open-cluster-management.io/clusterset: gitops-clusters
  applicationManager:
    enabled: true
  policyController:
    enabled: true
  searchCollector:
    enabled: true
  certPolicyController:
    enabled: true
  iamPolicyController:
    enabled: true

[ Get the YAML cheat sheet. ]

After editing this file, run:

$ oc apply -k bootstrap/02-import-cluster 
appproject.argoproj.io/policies created
applicationset.argoproj.io/import-openshift-cluster created

After a few minutes, you will see the new server in RHACM.

Integrate GitOps and RHACM

RHACM introduces a new GitOpsCluster resource kind, which connects to a Placement resource to determine which clusters to import into Argo CD. This integration allows you to expand your fleet while Argo CD automatically begins working with your new clusters. If you leverage Argo CD ApplicationSets, your application payloads are automatically applied to new clusters as RHACM registers them in your Argo CD instances.
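
As a sketch of what this integration could look like (resource names are assumptions based on the repository layout), a Placement selects clusters from the gitops-clusters ClusterSet, and a GitOpsCluster resource registers the selected clusters in Argo CD:

---
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: gitops-clusters
  namespace: openshift-gitops
spec:
  clusterSets:
    - gitops-clusters
---
apiVersion: apps.open-cluster-management.io/v1beta1
kind: GitOpsCluster
metadata:
  name: gitops-cluster # assumption
  namespace: openshift-gitops
spec:
  argoServer:
    # Register the selected clusters in the Argo CD instance running
    # in the openshift-gitops namespace of the hub (local-cluster).
    cluster: local-cluster
    argoNamespace: openshift-gitops
  placementRef:
    apiVersion: cluster.open-cluster-management.io/v1beta1
    kind: Placement
    name: gitops-clusters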

[Image: OpenShift cluster diagram (Ignacio Lago, Pablo Castelo, CC BY-SA 4.0)]

Run the following command to perform the GitOps operator integration with RHACM:

$ oc apply -k bootstrap/03-acm-integration/
applicationset.argoproj.io/acm-argocd-integration created

Demo: Automate everything

At this point, you have successfully installed the necessary operators for OpenShift GitOps and RHACM, along with their respective configurations. You have also integrated these operators and imported a new cluster using OpenShift GitOps. These steps prepared your environment for the following demonstration.

Install the operator using RHACM policies

Before you start running commands, take a moment to become familiar with RHACM policies. This powerful feature allows you to automate and enforce cluster management tasks. Policies define a set of rules that describe the desired cluster state. RHACM can inform on them or even enforce those rules automatically.

RHACM policies can integrate with Open Policy Agent (OPA) Gatekeeper, a widely used policy engine in the Kubernetes ecosystem, and with other tools in the OpenShift ecosystem, such as Red Hat Ansible Automation Platform.

Policies can automate various aspects of cluster management such as scaling, security, and application deployment. With RHACM policies, you can keep your cluster always in the desired state, reduce the risk of human error, and improve the security and reliability of your applications.

This demonstration deploys two policies. First, analyze the YAML files:

$ tree -L 2 resources/04-policies/
resources/04-policies/
├── 00-compliance-placementbinding.yaml
├── 00-namespace-placementbinding.yaml
├── 01-compliance-placementrule.yaml
├── 01-namespace-placementrule.yaml
├── 02-compliance-policy.yaml
├── 02-namespace-policy.yaml
└── kustomization.yaml

0 directories, 7 files

There are two PlacementRules and two PlacementBindings. Here is resources/04-policies/01-compliance-placementrule.yaml:

---
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "7"
  name: compliance-policy-placement
  namespace: default
spec:
  clusterConditions: []
  clusterSelector:
    matchExpressions:
      - key: vendor
        operator: In
        values:
          - OpenShift

The placement rule defines where to deploy the policy. In this case, it targets all clusters labeled vendor = OpenShift. Next, look at resources/04-policies/00-compliance-placementbinding.yaml:

---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "7"
  name: compliance-policy-placement
  namespace: default
placementRef:
  name: compliance-policy-placement
  apiGroup: apps.open-cluster-management.io
  kind: PlacementRule
subjects:
  - name: compliance-policy
    apiGroup: policy.open-cluster-management.io
    kind: Policy

The placement binding binds the placement rule to the policy. Finally, look at the policy in resources/04-policies/02-compliance-policy.yaml:

---
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: compliance-policy
  namespace: default
  annotations:
    argocd.argoproj.io/sync-wave: "9"
    policy.open-cluster-management.io/categories: CA Security Assessment and Authorization
    policy.open-cluster-management.io/controls: CA-2 Security Assessments, CA-7 Continuous Monitoring
    policy.open-cluster-management.io/standards: NIST SP 800-53
spec:
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: comp-operator-ns
        spec:
          object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: v1
                kind: Namespace
                metadata:
                  name: openshift-compliance
          pruneObjectBehavior: None
          remediationAction: enforce
          severity: high
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: comp-operator-operator-group
        spec:
          object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: operators.coreos.com/v1
                kind: OperatorGroup
                metadata:
                  name: compliance-operator
                  namespace: openshift-compliance
                spec:
                  targetNamespaces:
                    - openshift-compliance
          pruneObjectBehavior: None
          remediationAction: enforce
          severity: high
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: comp-operator-subscription
        spec:
          object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: operators.coreos.com/v1alpha1
                kind: Subscription
                metadata:
                  name: compliance-operator
                  namespace: openshift-compliance
                spec:
                  name: compliance-operator
                  installPlanApproval: Automatic
                  source: redhat-operators
                  sourceNamespace: openshift-marketplace
          pruneObjectBehavior: None
          remediationAction: enforce
          severity: high
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: comp-operator-status
        spec:
          object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: operators.coreos.com/v1alpha1
                kind: ClusterServiceVersion
                metadata:
                  namespace: openshift-compliance
                spec:
                  displayName: Compliance Operator
                status:
                  phase: Succeeded
          pruneObjectBehavior: None
          remediationAction: enforce
          severity: high
  remediationAction: enforce

This policy checks whether the clusters have the Compliance Operator installed. If they do not, it enforces the installation:

$ oc apply -k bootstrap/04-policies/

appproject.argoproj.io/policies configured
applicationset.argoproj.io/demo-app configured
placement.cluster.open-cluster-management.io/gitops-clusters configured
[Image: Policy view (Ignacio Lago, Pablo Castelo, CC BY-SA 4.0)]

Deploy a multicluster application with GitOps

OpenShift GitOps implements Argo CD as a controller to continuously monitor application definitions and configurations defined in a Git repository. Argo CD compares the specified state of these configurations with their live state on the cluster.

The ApplicationSet leverages the cluster decision generator to interface with Kubernetes custom resources that use custom resource-specific logic to decide which managed clusters to deploy to. A cluster decision resource generates a list of managed clusters, which are then rendered into the template fields of the ApplicationSet resource. It does this using duck typing, which does not require knowledge of the full shape of the referenced Kubernetes resource.

ApplicationSet is a subproject of Argo CD that adds multicluster support. This section covers creating ApplicationSets from RHACM.
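
As a hedged sketch of how this looks in practice (the repository's actual ApplicationSet may differ in names, revision, and sync policy), here is a cluster decision resource generator wired to an RHACM placement such as the gitops-clusters-cpu one used below:

---
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: demo-app
  namespace: openshift-gitops
spec:
  generators:
    - clusterDecisionResource:
        # The acm-placement ConfigMap maps this generator onto RHACM
        # PlacementDecision resources (the duck typing described above).
        configMapRef: acm-placement
        labelSelector:
          matchLabels:
            cluster.open-cluster-management.io/placement: gitops-clusters-cpu
        requeueAfterSeconds: 180
  template:
    metadata:
      name: 'demo-app-{{name}}'
    spec:
      project: demo-app
      source:
        repoURL: https://github.com/ignaciolago/rhacm-gitops.git
        targetRevision: main # assumption; check the repository's branch
        path: resources/05-demo-app
      destination:
        server: '{{server}}' # populated from the placement decision
      syncPolicy:
        automated:
          prune: true
          selfHeal: true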

The next sections deploy the same application from RHACM or the CLI.

[ Learn more about automating DevSecOps in Red Hat OpenShift. ]

Create an Argo CD application from RHACM

The bootstrap/05-demo-app folder contains an AppProject for the demo-app, a Placement, the demo-app ApplicationSet itself, and the Kustomize file:

$ tree -L 2 bootstrap/05-demo-app
bootstrap/05-demo-app
├── 00-demo-app-appproject.yaml
├── 01-placement-cpu.yaml
├── 02-demo-app.yaml
└── kustomization.yaml

0 directories, 4 files

Look at the placement file, bootstrap/05-demo-app/01-placement-cpu.yaml:

---
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: gitops-clusters-cpu
  namespace: openshift-gitops
spec:
  clusterSets:
    - gitops-clusters
  numberOfClusters: 1
  prioritizerPolicy:
    configurations:
      - scoreCoordinate:
          type: BuiltIn
          builtIn: ResourceAllocatableCPU
        weight: 10
      - scoreCoordinate:
          type: BuiltIn
          builtIn: ResourceAllocatableMemory
        weight: 10

This placement deploys the demo-app on the cluster with the most allocatable CPU and memory in the gitops-clusters ClusterSet.

$ oc apply -k bootstrap/05-demo-app 

appproject.argoproj.io/demo-app created
applicationset.argoproj.io/demo-app created
placement.cluster.open-cluster-management.io/gitops-clusters-cpu created

Here is what you are creating:

$ tree -L 2 resources/05-demo-app
resources/05-demo-app
├── 000-namespace.yaml
├── 100-configmap.yaml
├── 200-deployment.yaml
├── 300-service.yaml
├── 400-route.yaml
└── kustomization.yaml

0 directories, 6 files

The application consists of a namespace, a ConfigMap for its properties, a deployment, a service, and a route.
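
The kustomization.yaml ties these manifests together; here is a sketch, with the file list taken from the tree above:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - 000-namespace.yaml
  - 100-configmap.yaml
  - 200-deployment.yaml
  - 300-service.yaml
  - 400-route.yaml

The numeric prefixes are a naming convention that keeps related manifests sorted; Kustomize renders whatever the resources list declares.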

Install operators with Argo CD applications in multiple clusters

This example is not meant to focus on Red Hat OpenShift Service Mesh specifically; it demonstrates different ways of installing and automating components that administrators normally install by hand.

With this in mind, run the following command to install the series of operators required for Service Mesh using the GitOps approach:

$ oc apply -k bootstrap/06-service-mesh-operators/

appproject.argoproj.io/service-mesh-operators created
applicationset.argoproj.io/service-mesh-operators created
placement.cluster.open-cluster-management.io/service-mesh-operator-placement created

Meanwhile, look at the files:

$ tree -L 2 resources/06-service-mesh-operators
resources/06-service-mesh-operators
├── 00_elastic_search_operator.yaml
├── 00_jaeger_operator.yaml
├── 00_kiali_operator.yaml
├── 01_ocp_servicemesh_operator.yaml
└── kustomization.yaml

0 directories, 5 files
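
Each file presumably wraps an Operator Lifecycle Manager (OLM) Subscription for its operator. A hedged sketch of what 00_jaeger_operator.yaml might contain (channel and namespace are assumptions):

---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: jaeger-product
  namespace: openshift-operators # assumption
spec:
  channel: stable # assumption
  name: jaeger-product
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic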

After a few minutes, confirm that all components are installed:

[Image: Application components installed (Ignacio Lago, Pablo Castelo, CC BY-SA 4.0)]

Which should you use: ApplicationSets or policies?

An ApplicationSet is designed to handle the deployment of numerous versions of an application across multiple clusters. It allows you to define a set of Kubernetes resources that make up an application and then deploy that application to multiple clusters based on certain criteria, such as cluster labels or namespaces. This can be useful when you have multiple clusters with different configurations and need to deploy different versions of an application to different clusters.

On the other hand, policies are designed to enforce a certain set of rules or requirements on clusters. For example, policies can ensure that all your pods have certain labels or limit the amount of resources a particular application can use. Use policies for tasks that involve monitoring or ensuring compliance across a cluster infrastructure.

If you need to deploy the same application to multiple clusters with different configurations, ApplicationSet is usually a better fit. If you need to enforce certain rules or requirements across clusters, then policies are the way to go. However, it's worth noting that you can combine ApplicationSets and policies to provide a comprehensive approach to managing Kubernetes resources across multiple clusters.

[ Hybrid cloud and Kubernetes: A guide to successful architecture. ]

Pablo Castelo

Pablo is an architect for the Solutions & Technology Practice LATAM team at Red Hat. He has more than ten years of experience developing and designing architectures, ensuring that best practices are used while carrying out projects.

Ignacio Lago

Ignacio is a software architect at Red Hat with over ten years of experience. He prides himself on being a hands-on software architect, specializing in microservices, event-driven, and cloud-native architectures.
