This is the third post of our blog series on HashiCorp Vault. In the first post, we proposed a custom orchestration to more securely retrieve secrets stored in the Vault from a pod running in Red Hat OpenShift.
In the second post, we improved upon that approach by using the native Kubernetes Auth Method that Vault provides.
Both of the previous approaches assumed that the application knew how to handle the renewal of Vault tokens and how to retrieve secrets from Vault. In all of our examples we used Spring Boot, which, we believe, has a sophisticated, out-of-the-box Vault integration.
In this post, we are going to add further improvements with the purpose of enabling applications that cannot integrate directly with Vault. We will assume that these applications (henceforth referred to as legacy applications) can read a file to retrieve their secrets.
The Vault Agent
A recent release of Vault introduced the Vault Agent.
The Vault Agent performs two functions:
- It authenticates with Vault using a configured authentication method (we are, of course, interested in using the Kubernetes authentication method).
- It stores the Vault token in a sink (a directory), and keeps it valid by refreshing it at the appropriate time.
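The agent is driven by an HCL configuration file. A minimal sketch of such a file, consistent with the paths used later in this post (the role name and mount path are illustrative assumptions, not the repository's exact configuration), might look like:

```hcl
# agent.config — illustrative example, not the exact file from the repository
pid_file = "/tmp/agent.pid"

auto_auth {
  # Authenticate using the Kubernetes auth method, assuming a role
  # named "example" has been configured on the Vault side.
  method "kubernetes" {
    mount_path = "auth/kubernetes"
    config = {
      role = "example"
    }
  }

  # Write the resulting Vault token to a file sink shared with the
  # other containers in the pod.
  sink "file" {
    config = {
      path = "/var/run/secrets/vaultproject.io/token"
    }
  }
}
```

The agent renews the token before it expires and rewrites the sink file, so consumers can simply re-read the file whenever they need a valid token.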
We can configure the Vault Agent to run as a sidecar container and to share the directory in which the token is stored with our application using an in-memory shared folder. The architecture would look similar to the following:
The Vault Secret Fetcher
Even with a valid token, legacy applications cannot retrieve secrets from Vault because they were not designed to integrate with it. We need another component that uses the token to fetch the secrets on their behalf. The Vault Secret Fetcher is a program written in Go that serves this purpose: it uses the Vault token to retrieve secrets from Vault and store them in a file.
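Conceptually, the fetcher reads the token from the shared sink, calls Vault's HTTP API, and writes the secret payload to a properties file. The following Python sketch illustrates only the last step, transforming a Vault KV (version 1) read response into the JSON file the application consumes; it is a simplification of the idea, not the actual Go implementation, which also handles TLS, retries, and re-fetching:

```python
import json

def write_properties(vault_response: dict, path: str) -> None:
    """Extract the secret payload from a Vault KV (v1) read response
    and persist it as a JSON properties file for the application."""
    # For KV version 1, the key/value pairs live directly under "data".
    secrets = vault_response["data"]
    with open(path, "w") as f:
        json.dump(secrets, f)

# Shape of a response to `GET /v1/secret/example` (fields trimmed for brevity)
response = {
    "lease_duration": 2764800,
    "data": {"password": "pwd"},
}

# In the pod, the target path would be the shared in-memory volume,
# e.g. /var/run/secrets/vaultproject.io/application.json
write_properties(response, "/tmp/application.json")
```

The application then only needs to read and parse that file, with no knowledge of Vault at all.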
As previously described, we can use the sidecar pattern to keep this functionality out of our code and share the retrieved secrets with our application using an in-memory folder. The architecture would look as follows:
Here is a fragment of how an application pod would be instrumented to use the described approach:
containers:
  # Vault Agent
  - args:
      - agent
      - '-log-level=debug'
      - '-config=/vault/config/agent.config'
    env:
      - name: SKIP_SETCAP
        value: 'true'
      - name: VAULT_ADDR
        value: 'https://vault.hashicorp-vault.svc:8200'
      - name: VAULT_CAPATH
        value: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
    image: 'vault:latest'
    imagePullPolicy: Always
    name: vault-agent
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
      - mountPath: /vault/config/agent.config
        name: vault-config
        subPath: agent.config
      - mountPath: /var/run/secrets/vaultproject.io
        name: vault-agent-volume
  # Secret Fetcher
  - args:
      - start
    env:
      - name: LOG_LEVEL
        value: DEBUG
      - name: VAULT_ADDR
        value: 'https://vault.hashicorp-vault.svc:8200'
      - name: VAULT_CAPATH
        value: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
      - name: VAULT_TOKEN
        value: /var/run/secrets/vaultproject.io/token
      - name: VAULT_SECRET
        value: secret/example
      - name: PROPERTIES_FILE
        value: /var/run/secrets/vaultproject.io/application.json
      - name: PROPERTIES_TYPE
        value: json
    image: vault-secret-fetcher
    imagePullPolicy: Always
    name: vault-secret-fetcher
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
      - mountPath: /var/run/secrets/vaultproject.io
        name: vault-agent-volume
  # App container ...
Automating the injection of the sidecar containers
As we can see from the previous example, the two sidecar container definitions can be quite long and add a bit of noise to the pod manifest. While there is nothing wrong with that approach, we can make improvements by automatically injecting the sidecar containers using a Kubernetes mutating admission controller.
The Mutating Webhook Vault Agent can be used for this purpose. This mutating admission controller watches for newly created pods and injects the above sidecars into the pods that request it via the following annotation: sidecar.agent.vaultproject.io/inject.
The improved architecture looks similar to the following:
Installation
To install this solution in your own environment, first clone the repository (in it you can find more details on this process as well as more examples):
git clone https://github.com/openlab-red/hashicorp-vault-for-openshift
cd hashicorp-vault-for-openshift
Then install Vault:
oc new-project hashicorp-vault
oc adm policy add-scc-to-user privileged -z default
oc create configmap vault-config --from-file=vault-config=./vault/vault-config.json
oc create -f ./vault/vault.yaml
oc create route reencrypt vault --port=8200 --service=vault
Then we need to initialize Vault:
(Note: these steps should be executed manually for increased security)
export VAULT_ADDR=https://$(oc get route vault --no-headers -o custom-columns=HOST:.spec.host)
vault operator init -tls-skip-verify -key-shares=1 -key-threshold=1
Save the generated key and token that were provided by the previous command:
Unseal Key 1: NRvJGYdLeUc9emtX+eWJfa+JV7I0wzLb2lTlOcK5lmU=
Initial Root Token: 4Zh3yRX5orXFqdQUXdKrNxmg
Export the saved keys as environment variables for later use:
export KEYS=NRvJGYdLeUc9emtX+eWJfa+JV7I0wzLb2lTlOcK5lmU=
export ROOT_TOKEN=4Zh3yRX5orXFqdQUXdKrNxmg
export VAULT_TOKEN=$ROOT_TOKEN
At this point, we can unseal Vault, making it ready for use.
vault operator unseal -tls-skip-verify $KEYS
Configure the Kubernetes Auth Method:
oc create sa vault-auth
oc adm policy add-cluster-role-to-user system:auth-delegator system:serviceaccount:hashicorp-vault:vault-auth
reviewer_service_account_jwt=$(oc serviceaccounts get-token vault-auth)
pod=$(oc get pods -lapp=vault --no-headers -o custom-columns=NAME:.metadata.name)
oc exec $pod -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt > /tmp/ca.crt
vault auth enable -tls-skip-verify kubernetes
export OPENSHIFT_HOST=https://openshift-master.openlab.red
vault write -tls-skip-verify auth/kubernetes/config token_reviewer_jwt=$reviewer_service_account_jwt kubernetes_host=$OPENSHIFT_HOST kubernetes_ca_cert=@/tmp/ca.crt
Create a simple Vault policy representing a set of permissions to read and write secrets:
vault policy write -tls-skip-verify policy-example policy/policy-example.hcl
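The repository contains the actual policy file; as an illustration of the format only, a policy granting read and write access to the example path could look like this (the exact paths and capabilities are assumptions, see policy/policy-example.hcl for the real definition):

```hcl
# Illustrative Vault policy: read and write secrets under secret/example
path "secret/example" {
  capabilities = ["create", "read", "update", "list"]
}
```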
Bind this policy to principals that authenticate via the previously configured Kubernetes authentication method. In particular, we restrict the policy to service accounts named default in the app namespace:
vault write -tls-skip-verify auth/kubernetes/role/example \
bound_service_account_names=default bound_service_account_namespaces='app' \
policies=policy-example \
ttl=2h
Finally, we initialize a sample secret protected by the above policy:
vault write -tls-skip-verify secret/example password=pwd
At this point, we need to install the Mutating Webhook Vault Agent. Clone this repo:
cd ..
git clone https://github.com/openlab-red/mutating-webhook-vault-agent
cd mutating-webhook-vault-agent
Build the Mutating Webhook Vault Agent container:
oc project hashicorp-vault
oc apply -f openshift/webhook-build.yaml
Create the configuration:
oc apply -f openshift/sidecar-configmap.yaml
Process the webhook template:
pod=$(oc get pods -lapp=vault --no-headers -o custom-columns=NAME:.metadata.name)
export CA_BUNDLE=$(oc exec $pod -- cat /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt | base64 | tr -d '\n')
oc process -f openshift/webhook-template.yaml -p CA_BUNDLE=${CA_BUNDLE} | oc apply -f -
At this point, we can finally deploy our application. We are going to use Thorntail in this example, but other runtimes are available within the repository:
cd ..
cd hashicorp-vault-for-openshift
oc new-project app
Vault needs to be accessible from outside its project so that the sidecar agent can reach it.
With SDN Multi Tenant:
oc adm pod-network make-projects-global hashicorp-vault
With SDN Network Policy:
oc apply -f vault/app-allow-vault.yaml -n hashicorp-vault
Label the app namespace with vault-agent-webhook=enabled to enable the injection:
oc label namespace app vault-agent-webhook=enabled
Build the application:
oc new-build --name=thorntail-example registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift~https://github.com/openlab-red/hashicorp-vault-for-openshift --context-dir=/examples/thorntail-example
Deploy the application:
oc apply -f examples/thorntail-example/thorntail-inject.yaml
Conclusion
In this article we have explored how to enable applications that were not originally designed to work with Vault to retrieve secrets from it. The only requirement is that these applications can read a file in which the secrets will be stored.
This runtime-agnostic approach can enable broader adoption of Vault and help simplify the management of credentials, especially in hybrid cloud deployments, where Vault can also be used to share secrets between applications deployed across multiple OpenShift clusters as well as applications deployed outside of OpenShift.
About the author
Raffaele is a full-stack enterprise architect with 20+ years of experience. Raffaele started his career in Italy as a Java Architect, then gradually moved to Integration Architect and then Enterprise Architect. Later he moved to the United States to eventually become an OpenShift Architect for Red Hat consulting services, acquiring, in the process, knowledge of the infrastructure side of IT.
Currently, Raffaele holds a consulting position as a cross-portfolio application architect with a focus on OpenShift. For most of his career, Raffaele has worked with large financial institutions, which allowed him to acquire an understanding of enterprise processes and of the security and compliance requirements of large enterprise customers.
Raffaele has become part of the CNCF TAG Storage and contributed to the Cloud Native Disaster Recovery whitepaper.
Recently Raffaele has been focusing on how to improve the developer experience by implementing internal development platforms (IDP).