A previous post introduced the Secrets Store CSI Driver in Red Hat OpenShift. You can refer to it to learn the basics behind this driver. This post demonstrates how to integrate the OpenShift Secrets Store CSI Driver with an external secrets management system like Vault.
This article uses a Vault server running outside the OpenShift cluster. If you run the Vault server inside an OpenShift cluster, the procedure is slightly different and is not covered in this post.
IMPORTANT: As of Red Hat OpenShift 4.14, the Secrets Store CSI Driver Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product capabilities, enabling customers to test functionality and provide feedback during the development process.
Prerequisites
- An OpenShift v4.14 cluster
- OpenShift Secrets Store CSI Driver Operator deployed and a default ClusterCSIDriver created
- A Vault server deployed outside the OpenShift cluster
Configure the Vault CSI provider
For the Secrets Store CSI driver to gather secret data from the Vault server, you must first deploy the Vault CSI provider.
IMPORTANT: The Vault CSI provider for the Secrets Store CSI driver is an upstream provider. Currently, this provider is outside the Tech Preview program. We plan to get this provider certified for the GA release. The Vault CSI provider requires running its pods as privileged. Grant access to the privileged SCC to the ServiceAccount used by the Vault CSI pods:
oc -n openshift-cluster-csi-drivers adm policy add-scc-to-user privileged -z vault-csi-provider
Next, deploy the Vault CSI provider:
NOTE: This configuration is modified from the configuration provided in the upstream repository to work properly with OpenShift. Changes to this configuration might impact functionality.
cat <<EOF | oc apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vault-csi-provider
  namespace: openshift-cluster-csi-drivers
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: vault-csi-provider-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - serviceaccounts/token
  verbs:
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: vault-csi-provider-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: vault-csi-provider-clusterrole
subjects:
- kind: ServiceAccount
  name: vault-csi-provider
  namespace: openshift-cluster-csi-drivers
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: vault-csi-provider-role
  namespace: openshift-cluster-csi-drivers
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
  resourceNames:
  - vault-csi-provider-hmac-key
# 'create' permissions cannot be restricted by resource name:
# https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-resources
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: vault-csi-provider-rolebinding
  namespace: openshift-cluster-csi-drivers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: vault-csi-provider-role
subjects:
- kind: ServiceAccount
  name: vault-csi-provider
  namespace: openshift-cluster-csi-drivers
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app.kubernetes.io/name: vault-csi-provider
  name: vault-csi-provider
  namespace: openshift-cluster-csi-drivers
spec:
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app.kubernetes.io/name: vault-csi-provider
  template:
    metadata:
      labels:
        app.kubernetes.io/name: vault-csi-provider
    spec:
      serviceAccountName: vault-csi-provider
      tolerations: []
      containers:
      - name: provider-vault-installer
        image: docker.io/hashicorp/vault-csi-provider:1.4.1
        securityContext:
          privileged: true
        imagePullPolicy: Always
        args:
        - -endpoint=/provider/vault.sock
        - -debug=false
        resources:
          requests:
            cpu: 50m
            memory: 100Mi
          limits:
            cpu: 50m
            memory: 100Mi
        volumeMounts:
        - name: providervol
          mountPath: "/provider"
        livenessProbe:
          httpGet:
            path: "/health/ready"
            port: 8080
            scheme: "HTTP"
          failureThreshold: 2
          initialDelaySeconds: 5
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 3
        readinessProbe:
          httpGet:
            path: "/health/ready"
            port: 8080
            scheme: "HTTP"
          failureThreshold: 2
          initialDelaySeconds: 5
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 3
      volumes:
      - name: providervol
        hostPath:
          path: "/etc/kubernetes/secrets-store-csi-providers"
      nodeSelector:
        kubernetes.io/os: linux
EOF
The Vault CSI pods are now running:
oc -n openshift-cluster-csi-drivers get pods
NAME READY STATUS RESTARTS AGE
secrets-store-csi-driver-node-46lpg 3/3 Running 0 12h
secrets-store-csi-driver-node-4svsk 3/3 Running 0 12h
secrets-store-csi-driver-node-j4ljq 3/3 Running 0 12h
secrets-store-csi-driver-operator-7c5fb75769-g6x76 1/1 Running 0 12h
vault-csi-provider-26pdt 1/1 Running 0 8h
vault-csi-provider-68nhp 1/1 Running 0 8h
vault-csi-provider-kg52z 1/1 Running 0 8h
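Optionally, you can confirm that the provider pods were admitted with the privileged SCC by inspecting the openshift.io/scc annotation that OpenShift adds to admitted pods. This is just a quick check; the exact output depends on your environment:
oc -n openshift-cluster-csi-drivers get pods -l app.kubernetes.io/name=vault-csi-provider \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.openshift\.io/scc}{"\n"}{end}'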
Create secrets in Vault
This section assumes that you have access to your Vault server and you're authenticated with the Vault CLI. Create a secret for the application to consume.
vault kv put -mount=kv team1/db-pass password="mys3cretdbp4ss"
In your environment, you may need to change the path to the secret. Verify that the secret is readable:
vault kv get -mount=kv team1/db-pass
==== Secret Path ====
kv/data/team1/db-pass
======= Metadata =======
Key Value
--- -----
created_time 2023-11-15T08:34:51.014161533Z
custom_metadata <nil>
deletion_time n/a
destroyed false
version 1
====== Data ======
Key Value
--- -----
password mys3cretdbp4ss
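Note the kv/data/team1/db-pass path in the output: the KV version 2 secrets engine exposes values under a data/ prefix, which is why the secretPath used later in the SecretProviderClass includes data/. If you have jq installed, a quick way to inspect the raw structure (a sketch, assuming the same mount and path as above) is:
vault kv get -format=json -mount=kv team1/db-pass | jq '.data.data'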
Connect the CSI provider to Vault
Provide the required configuration so the Vault CSI provider and the Vault server can talk to each other. This example uses a long-lived ServiceAccount token; for production use, consider the Kubernetes JWT/OIDC authentication method instead. If you run the Vault server in the same OpenShift cluster as the Vault CSI provider, you can use the local service account token auth method instead. Create the required configuration in Kubernetes to integrate with Vault:
cat <<EOF | oc apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vault
  namespace: openshift-cluster-csi-drivers
---
apiVersion: v1
kind: Secret
metadata:
  name: vault-k8s-auth-secret
  namespace: openshift-cluster-csi-drivers
  annotations:
    kubernetes.io/service-account.name: vault
type: kubernetes.io/service-account-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: vault-sa-tokenreview-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: vault
  namespace: openshift-cluster-csi-drivers
EOF
Get the required information from the Kubernetes cluster:
KUBERNETES_API=$(oc whoami --show-server)
VAULT_SA_JWT=$(oc -n openshift-cluster-csi-drivers get secret vault-k8s-auth-secret -o jsonpath='{.data.token}' | base64 -d)
KUBERNETES_API_IP_PORT=$(echo $KUBERNETES_API | awk -F "//" '{print $2}')
KUBERNETES_API_CA=$(openssl s_client -connect $KUBERNETES_API_IP_PORT </dev/null 2>/dev/null | openssl x509 -outform PEM)
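Before continuing, it can help to sanity-check the gathered values, for example confirming the token is non-empty and the CA looks like a PEM certificate. This is optional:
echo "$KUBERNETES_API"
echo "$VAULT_SA_JWT" | cut -c1-20
echo "$KUBERNETES_API_CA" | head -1
The last command should print -----BEGIN CERTIFICATE-----.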
Configure the Kubernetes authentication in Vault:
vault auth enable kubernetes
vault write auth/kubernetes/config kubernetes_host="$KUBERNETES_API" token_reviewer_jwt="$VAULT_SA_JWT" kubernetes_ca_cert="$KUBERNETES_API_CA"
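You can verify the auth method configuration that Vault stored; the sensitive token_reviewer_jwt is not expected to appear in the output:
vault read auth/kubernetes/config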
Create a Vault policy and add a role to the Kubernetes auth method so the app that you will deploy later can read the secret created earlier. Use db-app-sa as the ServiceAccount name and db-app as the namespace for the role. In your environment, you may need to change the path to the secret.
Create the Policy:
vault policy write database-app - <<EOF
path "kv/data/team1/db-pass" {
  capabilities = ["read"]
}
EOF
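You can confirm that the policy was stored as expected:
vault policy read database-app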
Create the role:
vault write auth/kubernetes/role/database bound_service_account_names=db-app-sa bound_service_account_namespaces=db-app policies=database-app ttl=20m
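You can verify the role, including the bound ServiceAccount, namespace, and attached policy:
vault read auth/kubernetes/role/database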
Consume secrets from Vault in the workloads
Now that you've configured the CSI provider, you can see how to consume secrets from Vault in the workloads. First, create a namespace for the application:
oc create namespace db-app
Next, you must define a SecretProviderClass for the Vault store. Update the vaultAddress to match your environment. This example does not validate TLS certificates; you can instead use the provider parameters to specify a CA certificate so that TLS verification is not skipped.
cat <<EOF | oc apply -f -
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: vault-database
  namespace: db-app
spec:
  provider: vault
  parameters:
    vaultAddress: "https://192.168.122.20:8201"
    vaultSkipTLSVerify: "true"
    roleName: "database"
    objects: |
      - objectName: "db-password"
        secretPath: "kv/data/team1/db-pass"
        secretKey: "password"
EOF
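You can check that the SecretProviderClass was created in the application namespace:
oc -n db-app get secretproviderclass vault-database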
Create the application consuming the secret:
cat <<EOF | oc apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: db-app-sa
  namespace: db-app
---
kind: Pod
apiVersion: v1
metadata:
  name: dbapp
  namespace: db-app
spec:
  serviceAccountName: db-app-sa
  containers:
  - image: quay.io/mavazque/trbsht:latest
    name: dbapp
    securityContext:
      allowPrivilegeEscalation: false
      runAsNonRoot: true
      seccompProfile:
        type: RuntimeDefault
      capabilities:
        drop:
        - ALL
    volumeMounts:
    - name: secrets-store-inline
      mountPath: "/mnt/secrets-store"
      readOnly: true
  volumes:
  - name: secrets-store-inline
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: "vault-database"
EOF
Access the pod and view the secret:
oc -n db-app exec -ti dbapp -- cat /mnt/secrets-store/db-password
mys3cretdbp4ss
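You can also list everything the driver mounted under the secrets volume; with the SecretProviderClass above, you should see a single db-password file:
oc -n db-app exec -ti dbapp -- ls -l /mnt/secrets-store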
In addition to mounting secrets in the container filesystem, you can sync Vault data into a Kubernetes Secret so the pod can consume it that way.
IMPORTANT: If you plan to consume your secret data as Kubernetes Secrets only, then other solutions like External Secrets Operator may be a better fit. More on this topic in the closing thoughts section.
Update the SecretProviderClass to include the secretObjects entry. You can find a list of supported secret types in the upstream Secrets Store CSI Driver documentation.
cat <<EOF | oc apply -f -
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: vault-database
  namespace: db-app
spec:
  provider: vault
  secretObjects:
  - data:
    - key: password
      objectName: db-password
    secretName: db-pass
    type: Opaque
  parameters:
    vaultAddress: "https://192.168.122.20:8201"
    vaultSkipTLSVerify: "true"
    roleName: "database"
    objects: |
      - objectName: "db-password"
        secretPath: "kv/data/team1/db-pass"
        secretKey: "password"
EOF
If you try to get the secret, you'll see that it doesn't exist yet:
oc -n db-app get secret db-pass
Error from server (NotFound): secrets "db-pass" not found
If you create a pod that requests such secrets through the SecretProviderClass, the secret gets created. When a pod references this SecretProviderClass, the CSI driver creates a Kubernetes Secret called db-pass with the password field set to the contents of the db-password object from the parameters. The pod waits for the secret to be created before starting, and the secret is deleted when all pods using this SecretProviderClass are stopped.
cat <<EOF | oc apply -f -
kind: Pod
apiVersion: v1
metadata:
  name: dbapp-secret
  namespace: db-app
spec:
  serviceAccountName: db-app-sa
  containers:
  - image: quay.io/mavazque/trbsht:latest
    name: dbapp
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-pass
          key: password
    securityContext:
      allowPrivilegeEscalation: false
      runAsNonRoot: true
      seccompProfile:
        type: RuntimeDefault
      capabilities:
        drop:
        - ALL
    volumeMounts:
    - name: secrets-store-inline
      mountPath: "/mnt/secrets-store"
      readOnly: true
  volumes:
  - name: secrets-store-inline
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: "vault-database"
EOF
Get the secret:
oc -n db-app get secret db-pass
NAME TYPE DATA AGE
db-pass Opaque 1 22s
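You can also decode the synced secret directly to confirm it carries the same value as the Vault secret:
oc -n db-app get secret db-pass -o jsonpath='{.data.password}' | base64 -d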
Check the environment variable:
oc -n db-app exec -ti dbapp-secret -- sh -c 'echo $DB_PASSWORD'
mys3cretdbp4ss
If you delete every pod using the SecretProviderClass, the secret is also gone:
oc -n db-app delete pod dbapp-secret dbapp
oc -n db-app get secret db-pass
Error from server (NotFound): secrets "db-pass" not found
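When you are done experimenting, you can remove the demo resources. This is a minimal cleanup, assuming you no longer need the db-app namespace or the Vault role and policy created above:
oc delete namespace db-app
vault delete auth/kubernetes/role/database
vault policy delete database-app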
CSI and OpenShift
This post introduced the Vault CSI provider for the OpenShift Secrets Store CSI Driver. You connected an OpenShift cluster to an external Vault server and consumed secret data from Vault within the workloads. The Secrets Store CSI Driver is a good alternative to other solutions like the External Secrets Operator or Sealed Secrets when you need to avoid storing secrets in etcd, for example when running on a managed service where the control plane is outside your control. In addition, secrets are auto-rotated in the pods without any other tooling required.
While the Secrets Store CSI Driver can also create Kubernetes Secrets from the secret data from the secret management systems, solutions like External Secrets Operator may be a better fit for that specific use case. If you want to know more about what options exist today to protect your secret data on and off your OpenShift cluster, read A Holistic approach to encrypting secrets, both on and off your OpenShift clusters. Stay tuned for future improvements in the community projects and for the GA release in OpenShift.