
How we provision Ansible Automation Platform in 10 minutes with a custom Operator

Learn how we migrated from Ansible Tower to Ansible Automation Platform 2 using Red Hat OpenShift and created a custom Operator to speed up provisioning.

Because infrastructure must be provisioned and prepared first, deploying Red Hat Ansible Automation Platform 2 can take hours or even days. You can significantly reduce that deployment time by using Red Hat OpenShift as the platform and infrastructure.

[ Get started with Ansible automation controller in this hands-on interactive lab. ]

Until recently, Swiss financial tech company SIX Group and Red Hat provided Ansible Tower as a service on OpenShift 3, using the awx-operator as a base. With Ansible Tower reaching end of life, the migration to Ansible Automation Platform 2 gave our team the opportunity to evaluate it on disconnected OpenShift 4 clusters.

Use Ansible Automation Platform on OpenShift in a disconnected environment

We wanted an Ansible Automation Platform based on supported components from Red Hat rather than creating a custom solution. Fortunately, Ansible Automation Platform 2 decouples the former Ansible Tower into Ansible automation controller (the control plane) and execution environments (the execution plane).

The first step is installing the Ansible Automation Platform Operator through the OpenShift web console or the command-line interface (CLI).
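
If you prefer the CLI, you can create an Operator Lifecycle Manager Subscription for it. The following is a minimal sketch; the namespace, channel, and approval strategy are assumptions you should adapt to your cluster and target Ansible Automation Platform version (a matching OperatorGroup may also be required):

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ansible-automation-platform-operator
  namespace: ansible-automation-platform-operator
spec:
  channel: 'stable-2.x'
  installPlanApproval: Automatic
  name: ansible-automation-platform-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace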

Note: In an air-gapped environment, you must mirror the images and prepare the CatalogSource and the ImageContentSourcePolicy. You can do that with the oc mirror command.
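
As a rough sketch, an oc mirror image set configuration for the Operator could look like the following; the registry URL, catalog index version, and package name are assumptions to verify against your environment:

kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
storageConfig:
  registry:
    # Assumption: an internal registry reachable from the disconnected cluster
    imageURL: registry.example.com/mirror/oc-mirror-metadata
mirror:
  operators:
    # Assumption: use the index that matches your OpenShift release
    - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12
      packages:
        - name: ansible-automation-platform-operator

$ oc mirror --config=imageset-config.yaml docker://registry.example.com/mirror

oc mirror then generates the CatalogSource and ImageContentSourcePolicy manifests for you to apply on the cluster.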

Explore new custom resources in Ansible Automation Platform

Ansible Automation Platform provides different custom resources that are extensions of the Kubernetes APIs: AutomationController, AutomationControllerBackup, and AutomationControllerRestore.
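
Once the Operator is installed, you can confirm these resource types are registered on the cluster (using the API group shown in the examples below):

$ oc api-resources --api-group=automationcontroller.ansible.com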

AutomationController

You can use this new resource type to provision an automation controller.

[Image: The AutomationController (Sylvain Chen, CC BY-SA 4.0)]

You can customize the payload within the YAML file for:

  • AutomationController container resources
  • AutomationController replicas and routes
  • Standalone PostgreSQL resources and storage
  • AutomationController time zone
  • Volumes (for example, the certificate authority)

[ Ansible vs. Terraform, clarified ]

Create a project

First, create a project to deploy Ansible Automation Platform:

$ oc new-project aap

Specify a route

Specify the DNS name where you will access your Ansible Automation Platform web interface:

apiVersion: automationcontroller.ansible.com/v1beta1
kind: AutomationController
metadata:
  name: aapshowcase
  namespace: aap
spec:
  route_host: aap.apps.ocp-clustername.example.com

Configure container resources (CPU and memory)

Configure processor and memory settings for your project:

apiVersion: automationcontroller.ansible.com/v1beta1
kind: AutomationController
metadata:
  name: aapshowcase
  namespace: aap
spec:
  replicas: 1
  task_resource_requirements:
    requests:
      cpu: 500m
      memory: 2Gi
    limits:
      cpu: 500m
      memory: 2Gi
  web_resource_requirements:
    requests:
      cpu: 500m
      memory: 2Gi
    limits:
      cpu: 1
      memory: 2Gi
  ee_resource_requirements:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: 500m
      memory: 1Gi

[ How to make the case for automation architecture: 5 ways to win investment ]

Customize standalone PostgreSQL (CPU, memory, storage)

The Ansible Automation Platform Operator can provision and use a standalone PostgreSQL database (version 13), an external PostgreSQL database, or CrunchyDB (outside the scope of this article). Here are the settings for the standalone database:

apiVersion: automationcontroller.ansible.com/v1beta1
kind: AutomationController
metadata:
  name: aapshowcase
  namespace: aap
spec:
  postgres_storage_requirements:
    requests:
      storage: 50Gi
    limits:
      storage: 50Gi
  postgres_storage_class: 'your_choice_of_storage_class'
  postgres_resource_requirements:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      memory: 4Gi
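
If you want to use an external PostgreSQL database instead, the Operator follows the upstream awx-operator convention of reading the connection details from a secret. This is a hedged sketch; the postgres_configuration_secret field and the key names are assumptions to verify against your Operator version:

$ oc create secret generic external-postgres-configuration \
    --from-literal=host=postgres.example.com \
    --from-literal=port=5432 \
    --from-literal=database=awx \
    --from-literal=username=awx \
    --from-literal=password=<YOUR_DB_PASSWORD> \
    --from-literal=sslmode=prefer \
    --from-literal=type=unmanaged \
    -n aap

Then reference it in the AutomationController spec:

spec:
  postgres_configuration_secret: external-postgres-configuration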

Customize time-zone settings

Customize time-zone settings using a combination of environment variables and Django settings:

apiVersion: automationcontroller.ansible.com/v1beta1
kind: AutomationController
metadata:
  name: aapshowcase
  namespace: aap
spec:
  task_extra_env: |
    - name: TZ
      value: Europe/Zurich
  web_extra_env: |
    - name: TZ
      value: Europe/Zurich
  ee_extra_env: |
    - name: TZ
      value: Europe/Zurich
  extra_settings:
    - setting: TIME_ZONE
      value: '"Europe/Zurich"'

Integrate a custom certificate authority

You may need to integrate a custom certificate authority (CA). The Ansible Automation Platform Operator supports this using the following steps:

1. Create a secret:

oc create secret generic custom-ca-bundle --from-file=bundle-ca.crt=<PATH_TO_YOUR_CA_FILE> -n aap

2. Patch the AutomationController:

apiVersion: automationcontroller.ansible.com/v1beta1
kind: AutomationController
metadata:
  name: aapshowcase
  namespace: aap
spec:
  bundle_cacert_secret: custom-ca-bundle
  extra_volumes: |
    - name: ca-volume
      secret: 
        secretName: custom-ca-bundle
  task_extra_volume_mounts: |
    - name: ca-volume
      readOnly: true
      mountPath: /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
      subPath: bundle-ca.crt
  web_extra_volume_mounts: |
    - name: ca-volume
      readOnly: true
      mountPath: /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
      subPath: bundle-ca.crt

Note: The extra volumes and volume mounts are necessary to trust the external logging stack.

[ Download The path to GitOps to put the right tools, workflows, and structures in place to enable a complete GitOps workflow. ] 

Integrate LDAP

Enable LDAP authentication at the AutomationController bootstrap.

1. If you need to trust a custom certificate for the LDAP server, create a secret referencing your LDAP CA:

oc create secret generic ldap-ca --from-file=ldap-ca.crt=<PATH_TO_YOUR_LDAP_CA_FILE> -n aap

2. Next, create the LDAP Bind DN password:

oc create secret generic ldap-password --from-literal=ldap-password=<YOUR_LDAP_DN_PASSWORD> -n aap

3. Patch the AutomationController using the ldap_cacert_secret, ldap_password_secret, and extra_settings fields. Here is an example:

apiVersion: automationcontroller.ansible.com/v1beta1
kind: AutomationController
metadata:
  name: aapshowcase
  namespace: aap
spec:
  ldap_cacert_secret: ldap-ca
  ldap_password_secret: ldap-password
  extra_settings:
    - setting: AUTH_LDAP_SERVER_URI
      value: '"ldaps://ldap.abc.com:636"'
    - setting: AUTH_LDAP_BIND_DN
      value: >-
        'CN=LDAP User,OU=Service Accounts,DC=abc,DC=com'
    - setting: AUTH_LDAP_USER_SEARCH
      value: >-
        LDAPSearch("DC=abc,DC=com",ldap.SCOPE_SUBTREE,"(sAMAccountName=%(user)s)",)

    - setting: AUTH_LDAP_GROUP_SEARCH
      value: >-
        LDAPSearch("OU=Groups,DC=abc,DC=com",ldap.SCOPE_SUBTREE,"(objectClass=group)",)
    - setting: AUTH_LDAP_USER_ATTR_MAP
      value: '''{"first_name": "givenName","last_name": "sn","email": "mail"}'''
    - setting: AUTH_LDAP_USER_FLAGS_BY_GROUP
      value:
        is_superuser:
          - >-
            CN=admin,OU=Groups,DC=abc,DC=com
    - setting: AUTH_LDAP_ORGANIZATION_MAP
      value:
        abc:
          admins: 'null'
          remove_admins: false
          remove_users: false
          users:
            - >-
              CN=admin,OU=Groups,DC=abc,DC=com
    - setting: AUTH_LDAP_TEAM_MAP
      value:
        admin:
          organization: abc
          remove: true
          users: >-
            CN=admin,OU=Groups,DC=abc,DC=com

    - setting: AUTH_LDAP_REQUIRE_GROUP
      value: >-
        "CN=operators,OU=Groups,DC=abc,DC=com"

See the upstream AWX operator documentation for more details about these settings.

Sync your project through HTTPS

We hit a bug when integrating custom certificates that prevented our end users from syncing their Ansible Automation Platform projects through HTTPS. To work around it, patch the AutomationController as shown below. A permanent fix is tracked in an issue on the Red Hat portal.

apiVersion: automationcontroller.ansible.com/v1beta1
kind: AutomationController
metadata:
  name: aapshowcase
  namespace: aap
spec:
  init_container_extra_commands: |
    echo $HOME

As this section shows, the AutomationController resource leaves plenty of room for customization.

AutomationControllerBackup and AutomationControllerRestore

These resources help you back up the standalone PostgreSQL database within OpenShift. Once an AutomationControllerBackup completes, an AutomationControllerRestore can restore it. Our team is currently working with the engineers to release the documentation. You can find a preview in the Ansible Automation Platform GitHub repository.
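
As a hedged sketch (the field names follow the upstream awx-operator backup and restore resources), a backup of the aapshowcase instance and a restore that references it could look like this:

apiVersion: automationcontroller.ansible.com/v1beta1
kind: AutomationControllerBackup
metadata:
  name: aapshowcase-backup-1
  namespace: aap
spec:
  deployment_name: aapshowcase
---
apiVersion: automationcontroller.ansible.com/v1beta1
kind: AutomationControllerRestore
metadata:
  name: aapshowcase-restore-1
  namespace: aap
spec:
  deployment_name: aapshowcase
  backup_name: aapshowcase-backup-1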

How we developed a custom Ansible Operator and resource

Ansible Automation Platform 2 offers many new features, especially with the OpenShift Operator customization options. For example, you can develop a custom Ansible Operator to add even more logic and automation. The SIX Group took this approach.

Our custom Ansible Operator, SIX AAP Operator, enables end users to provision and use Ansible Automation Platform within 10 minutes. We created an Operator using the Operator SDK. While SIX AAP Operator is not open source, the following explains how we did it.

Prerequisites:

  • Install the Ansible Automation Platform Operator.
  • Create your AutomationController.
  • Install your custom Operator (ours is the SIX AAP Operator).

Once you meet the prerequisites, create the following Custom Resource:

apiVersion: ansible.six-group.com/v1alpha1
kind: SIXAutomationControllerConfig
metadata:
  name: aapshowcase-config
  namespace: aap
spec:
  deployment_name: aapshowcase

The new Ansible Operator handles the following tasks:

  • Injects the subscription for a given automation controller to ensure the user can immediately start using the platform.
  • Configures logging so that each automation controller sends its logs to a centralized external logging stack. Logging is critical for auditing every Ansible Automation Platform instance.
  • Creates an auditor user and related secret for metrics tracking.
  • Creates a ServiceMonitor that exposes the Prometheus metrics to the OpenShift monitoring stack so they can be displayed in a Grafana dashboard.

The tracking features collect Ansible Automation Platform end-user data so the service remains compliant with audit requirements.

[ Check out Red Hat's Portfolio Architecture Center for a wide variety of reference architectures you can use. ]

The following is an example of a ServiceMonitor that scrapes the /api/v2/metrics/ endpoint of a given AutomationController service:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: aapshowcase-service-monitor
  namespace: aap
spec:
  endpoints:
  - basicAuth:
      password:
        key: password
        name: aap-auditor-credentials
      username:
        key: username
        name: aap-auditor-credentials
    interval: 1m
    path: /api/v2/metrics/
    port: web
    scheme: http
  selector:
    matchLabels:
      app.kubernetes.io/component: automationcontroller

Operator reconciliation loop logic

[Image: Operator reconciliation loop logic (Sylvain Chen, CC BY-SA 4.0)]

Our project depends on another resource (AutomationController) and its API. This dependency has shaped the Operator logic. It must:

  • Wait until the AutomationController deployment is up and running.
  • Wait until the AutomationController reports that it is ready.
  • Prepare the input payload.
  • Inject the configuration needed by calling APIs.

The primary Ansible collection we use is kubernetes.core, which makes it straightforward to communicate with Kubernetes and OpenShift (see the sketch after the module list below).

The main Ansible modules are:

  • kubernetes.core.k8s
  • kubernetes.core.k8s_info
  • kubernetes.core.k8s_cp
  • kubernetes.core.k8s_exec
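
For illustration, here is a hedged sketch of the kind of task the reconciliation loop runs before injecting any configuration. It is not the actual SIX AAP Operator code, and the condition type is an assumption based on how Ansible-based Operators typically report status:

# Wait until the referenced AutomationController reports a successful deployment
- name: Wait for the AutomationController to be ready
  kubernetes.core.k8s_info:
    api_version: automationcontroller.ansible.com/v1beta1
    kind: AutomationController
    name: "{{ deployment_name }}"
    namespace: "{{ ansible_operator_meta.namespace }}"
  register: controller_info
  until: >-
    controller_info.resources | length > 0
    and (controller_info.resources[0].status.conditions | default([])
         | selectattr('type', 'equalto', 'Successful')
         | selectattr('status', 'equalto', 'True')
         | list | length > 0)
  retries: 60
  delay: 10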

Wrap up

Ansible Automation Platform 2 brings many new features, especially on OpenShift, where the new Operator greatly improves the Day 1 experience. You can customize many options, including resources, volumes, and environment variables, in a single Custom Resource. You can also develop your own Ansible Operator to add even more logic and automation, as SIX Group did (in collaboration with Red Hat Consulting) to have an Ansible Automation Platform provisioned and ready for use in 10 minutes.


This article is based on "Ansible Automation Platform as a service using the Operator SDK," presented at AnsibleFest 2022.


Sylvain Chen

Sylvain Chen is an expert in OpenShift, DevOps, and software development and currently works primarily as a consultant in Switzerland. He has also presented at internal and external conferences, such as Red Hat Summit and AnsibleFest.
