This article outlines how to dynamically provision storage resources and use persistent volumes across mixed-node (Windows and Linux) clusters, enabling stateful applications to run on various cloud platforms supported by OpenShift.

What is a PV/PVC?

A Persistent Volume (PV) is a piece of storage in the cluster, provisioned either manually by an administrator or dynamically by a storage provisioner, while a Persistent Volume Claim (PVC) is a user's request for that storage. In a PVC, a user can request a specific size and access modes. These two API resources abstract the details of how storage is provided and how it is consumed, allowing developers to focus on their application. A PV and the PVC that claims it are bound to each other in a one-to-one mapping.
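
To make the distinction concrete, a statically provisioned PV object might look like the minimal sketch below (example-pv, azure-secret, and example-share are placeholder names, not values used later in this article). With dynamic provisioning, which this article relies on, the provisioner creates an equivalent PV for you and you only write the PVC.

apiVersion: v1
kind: PersistentVolume
metadata:
 name: example-pv
spec:
 capacity:
   storage: 2Gi
 accessModes:
 - ReadWriteMany
 azureFile:
   # Secret holding the Azure storage account name and key
   secretName: azure-secret
   # Name of an existing Azure Files share
   shareName: example-share
   readOnly: false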

Access Modes

PVC access modes are an integral part of dynamic provisioning across nodes, regulating both the number of nodes that can use a volume and their read/write permissions. Different storage provisioners support different access modes. Note that a single PV can only be mounted with one access mode at a time.

There are three access modes:

  • ReadWriteOnce (RWO) -- supports read/write mounting by only a single client node
  • ReadOnlyMany (ROX) -- supports read-only mounting by multiple client nodes
  • ReadWriteMany (RWX) -- supports read/write mounting by multiple client nodes

For more on access modes, refer to Kubernetes storage documentation for PVC access modes.
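
To check which access modes a provisioned volume or claim actually carries, you can inspect it from the CLI; for example (azure-file-win is the PVC name used later in this article):

$ oc get pv
$ oc get pvc azure-file-win -o jsonpath='{.spec.accessModes}'

The ACCESS MODES column of oc get pv uses the abbreviations RWO, ROX, and RWX.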

Background on Dynamic Provisioning and Storage Classes

Dynamic provisioning automates the life cycle of storage resources by removing the need for cluster administrators to manually request new storage volumes from their cloud providers or storage platforms. Rather than pre-provisioning volumes for later use, dynamic provisioning relies on StorageClass resources that use platform-specific provisioners to create storage volumes on demand.
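
To see which storage classes, and therefore which provisioners, are available in your cluster (and which one is annotated as the default), you can run:

$ oc get storageclass
$ oc describe storageclass <storageclass_name>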

Windows Supported Plug-ins

Note: OpenShift provides multiple dynamic provisioning plug-ins, but AWS and Azure are the focus of this post. For more information, refer to the Kubernetes storage documentation for Windows and the OpenShift storage documentation.

Plug-in: AWS Elastic Block Storage (EBS)
Description: Used to mount an AWS EBS volume to a pod.
Notes:
  • It can only be mounted with access mode RWO, so it is usable by workloads on a single node at a time.
  • To enable Windows support, specify the parameter fsType: ntfs when creating the StorageClass resource object to override the default ext4 file system type (see the sketch after this table).
  • AWS EBS offers additional features, including volume expansion and snapshot backup support.

Plug-in: AzureDisk
Description: Used to mount a Microsoft Azure Data Disk to a pod.
Notes:
  • It can only be mounted with access mode RWO, so it is usable by workloads on a single node at a time. For example, if a pod were restarted on a different node, the data that was read/write accessible before the move would not be accessible from the new node.
  • AzureDisk offers additional features, including server-side encryption and snapshot backups.

Plug-in: AzureFile
Description: Used to mount a Microsoft Azure File volume to a pod.
Notes:
  • Supports all three access modes (RWO, ROX, RWX).
  • Use AzureFile when a volume must be shared across multiple deployments or pods that may be running on multiple nodes in a cluster.
  • AzureFile has performance and size limitations, as well as file system limitations: it is not suitable for workloads that expect a POSIX-compliant file system, because the UNIX extension for SMB is not enabled by default. Volume expansion and snapshot backups are also not supported.
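
As a concrete illustration of the fsType parameter mentioned above, a minimal sketch of an EBS-backed StorageClass formatted with NTFS for Windows nodes could look like this (the name ebs-windows and the gp2 volume type are illustrative choices, not requirements):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
 name: ebs-windows
provisioner: kubernetes.io/aws-ebs
parameters:
 type: gp2
 # Overrides the default ext4 file system so Windows pods can use the volume
 fsType: ntfs
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer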

 

 

 

Testing

In order to validate Windows support for the storage plug-ins, we tested reading from and writing to attached persistent volumes in several scenarios: intra-OS, intra-node, inter-OS, and inter-node. For the Test Matrix table, refer to the Appendix.

  • Windows Node 1 (two pods deployed)
    • Deployed two pods, wrote data from one pod, and verified that the data was readable and writable from the second pod.
    • Scaled the deployment down to 0 pods and back up to 2, and verified read/write access on both pods.
    • These tests were successful on AWS EBS, AzureDisk, and AzureFile.
  • Windows Node 1 (one pod deployed) to Windows Node 2 (one pod deployed)
    • Wrote data from the pod on Windows Node 1 and verified that the data was readable and writable from the pod on Windows Node 2.
    • Scaled the deployment on Windows Node 1 down to 0 and back up to 1 pod, did the same on Windows Node 2, and verified read/write access on both.
    • These tests were successful only on AzureFile.
  • Windows Node 2 (one pod deployed) to Linux Node 3 (one pod deployed)
    • Wrote data from the pod on Windows Node 2 and verified that the data was readable and writable from the pod on Linux Node 3.
    • Scaled the deployment on Windows Node 2 down to 0 and back up to 1 pod, did the same on Linux Node 3, and verified read/write access on both.
    • These tests were successful only on AzureFile.

Prerequisites

  • You must be logged into an OpenShift Container Platform (OCP) cluster.
  • Your cluster must include:
    • At least one Red Hat Linux master node
    • At least one Red Hat Linux worker node
    • At least one Windows worker node (running a Windows Server 2019 image)
    • OVN-Kubernetes networking configured with hybrid overlay, which enables the orchestration of Windows containers
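
Before proceeding, it can be useful to confirm that both Linux and Windows workers are present and Ready, for example:

$ oc get nodes -o wide
$ oc get nodes -l kubernetes.io/os=windows

(On older clusters the node OS label may still be beta.kubernetes.io/os, as used in the deployment example later in this procedure.)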

Procedure

Note: A command preceded by > is to be run in a PowerShell window on a Windows instance, and a command preceded by $ is to be run on a Linux console.

Note: This general workflow is done using AzureFile storage. If you are following this workflow using another storage class, ensure it has your desired access mode by referring to the OpenShift storage documentation.

 1. Ensure your OCP cluster is up. Run $ oc whoami to ensure you are a system administrator.

CONFIGURING AN AZURE FILE OBJECT 

Note: This AzureFile object definition is taken from OpenShift storage documentation.

 2. Define and create a ClusterRole that allows access to create and view secrets. Example clusterrole.yaml file:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
 name: persistent-volume-binder
rules:
- apiGroups: ['']
  resources: ['secrets']
  verbs: ['get', 'create']
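
Assuming the definition above is saved as clusterrole.yaml, create it with:

$ oc create -f clusterrole.yaml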
 3. Add the ClusterRole to your Service Account by running 
$ oc adm policy add-cluster-role-to-user <clusterrole-name> system:serviceaccount:kube-system:persistent-volume-binder
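
With the example ClusterRole above, <clusterrole-name> is persistent-volume-binder, so the full command becomes:

$ oc adm policy add-cluster-role-to-user persistent-volume-binder system:serviceaccount:kube-system:persistent-volume-binder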
 4. Define and create Storage Class. Example storageclass.yaml:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
 name: standard-file 
provisioner: kubernetes.io/azure-file
parameters:
 location: centralus 
 skuName: Standard_LRS  
reclaimPolicy: Delete
volumeBindingMode: Immediate
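
Assuming the definition is saved as storageclass.yaml, create it and verify that it appears in the cluster with:

$ oc create -f storageclass.yaml
$ oc get storageclass standard-file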

MAKING A PERSISTENT VOLUME CLAIM ON A WINDOWS WORKLOAD

 1. Define and create a PVC with the desired access mode (ReadOnlyMany or ReadWriteMany needed for multi-workload access), storage request size, and storage class name. Example pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
 name: azure-file-win
spec:
 accessModes:
 - ReadWriteMany
 storageClassName: standard-file
 resources:
   requests:
     storage: 2Gi
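
Assuming the claim is saved as pvc.yaml, create it and confirm that it reaches the Bound state (with volumeBindingMode: Immediate in the storage class above, the backing PV is provisioned right away):

$ oc create -f pvc.yaml
$ oc get pvc azure-file-win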

 2. Define and create a workload with the desired container mountPath, PVC claimName, the proper nodeSelector OS (Windows), and the matching taint toleration. Example deployment.yaml (modified from Kubernetes storage documentation for Windows):

apiVersion: v1
kind: Service
metadata:
 name: win-webserver
 labels:
   app: win-webserver
spec:
 ports:
   # the port that this service should serve on
 - port: 80
   targetPort: 80
 selector:
   app: win-webserver
 type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
 labels:
   app: win-webserver
 name: win-webserver
spec:
 selector:
   matchLabels:
     app: win-webserver
 replicas: 1
 template:
   metadata:
     labels:
       app: win-webserver
     name: win-webserver
   spec:
     tolerations:
     - key: "os"
       value: "Windows"
        effect: "NoSchedule"
     containers:
     - name: windowswebserver
       image: mcr.microsoft.com/windows/servercore:ltsc2019
       imagePullPolicy: IfNotPresent
       command:
       - powershell.exe
       - -command
       - $listener = New-Object System.Net.HttpListener; $listener.Prefixes.Add('http://*:80/'); $listener.Start();Write-Host('Listening at http://*:80/'); while ($listener.IsListening) { $context = $listener.GetContext(); $response = $context.Response; $content='<html><body><H1>Windows Container Web Server</H1></body></html>'; $buffer = [System.Text.Encoding]::UTF8.GetBytes($content); $response.ContentLength64 = $buffer.Length; $response.OutputStream.Write($buffer, 0, $buffer.Length); $response.Close(); };
       volumeMounts:
       - mountPath: "C:\\Data"
         name: volume
     volumes:
     - name: volume
       persistentVolumeClaim:
         claimName: azure-file-win
     nodeSelector:
       beta.kubernetes.io/os: windows
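
Assuming the Service and Deployment above are saved together as deployment.yaml, create both objects with:

$ oc create -f deployment.yaml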
 
 3. Confirm the pod(s) were successfully deployed with $ oc get pods. The status should read ‘Running’. This can also be confirmed in the OCP web console by navigating on the left to Workloads -> Pods.
 

Figure: Windows web server Pod with ‘Running’ status.

 4. Run $ oc exec -it <pod_name> -c <container_name> -- powershell.exe to access the Windows container’s Microsoft PowerShell.

 5. Navigate to the file path specified by the container’s mountPath (C:\Data in the example deployment). This location is a shared directory backed by the dynamically provisioned PV.

 6. The directory should be read/write accessible, as specified by the PV. List the contents of the directory and create a new file or directory.
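
For example, from the container’s PowerShell prompt (pod1.txt is an arbitrary file name used for illustration):

> cd C:\Data
> Get-ChildItem
> 'Hello from the first Windows pod' | Out-File -FilePath .\pod1.txt
> Get-Content .\pod1.txt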

 7. Run $ exit from the container’s PowerShell to return to your CLI.

Figure: Read and write from a mounted volume through a Windows pod’s powershell.

PERSISTENT VOLUME SHARE BETWEEN WINDOWS CONTAINERS

Note: These containers may run on the same Windows node, or different nodes within the same cluster.

 1. Go back to the OCP web console and navigate on the left to Workloads -> Deployments. Scale the deployment up from 1 to multiple pods and check that the new pods are being created in the web console. You can also check the status in the terminal by running $ oc get pods.

Figure: Scaling a deployment up to three pods using the OCP web console.
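
If you prefer the CLI, the same scaling can be done with oc scale, where win-webserver is the Deployment name from the example above:

$ oc scale deployment win-webserver --replicas=3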

 2. After the pods are up and running, run $ oc exec -it <pod_2_name> -c <container_name> -- powershell.exe again but with a new pod that was just deployed after scaling.

 3. Navigate to the file path specified by the container’s mountPath.

 4. The directory should still be read/write accessible. List the contents of that directory and notice that the files are the same as in the first pod. Here you can also create more files or directories.

 5. Run $ exit to return to the CLI.

 6. To demonstrate failover persistence with no data loss, navigate back to the OCP web console and scale the deployment down to 0 pods. Run $ oc get pods and confirm that no pods are listed (‘No resources found’).

Figure: Scaling a deployment down to zero pods using the OCP web console.

 7. When all pods have terminated, use the up arrow to scale the deployment back up to 1 or more pods.

 8. Once the pods are up, run $ oc exec -it <new_pod_name> -c <container_name> -- powershell.exe with any one of the pods that was deployed after scaling.


 9. Navigate to the file path specified by the container’s mountPath.


 10. Again, notice that the directory is read/write accessible. List the contents of that directory and ensure that you are still able to view those files within.

 11. Run $ exit to return to the CLI.

PERSISTENT VOLUME SHARE BETWEEN WINDOWS AND LINUX CONTAINERS

Note: These containers run on different nodes within the same cluster.

 1. Define and create a Linux workload, specifying the PVC to use and the desired container mount paths. Example pod.yaml:
apiVersion: v1
kind: Pod
metadata:
 name: task-pv-pod
spec:
 volumes:
   - name: volumetest
     persistentVolumeClaim:
       claimName: azure-file-win
 containers:
   - name: task-pv-container
     image: nginx
     ports:
       - containerPort: 80
         name: "http-server"
     volumeMounts:
       - mountPath: "/usr/share/nginx/html"
         name: volumetest
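
Assuming the pod definition is saved as pod.yaml, create it with:

$ oc create -f pod.yaml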

 2. Ensure the workload is ‘Running’ on a Linux worker node from either the OCP web console (Workloads -> Pods -> <pod_name> -> Details) or the CLI ($ oc get pods -o wide).

 3. Run $ oc exec -it <linux_pod_name> -c <container_name> -- /bin/bash to access the Linux workload’s shell.

 4. Navigate to the location of the shared volume as specified by your chosen container mountPath.

 5. This directory should be read/write accessible, as specified by the PV’s ‘ReadWriteMany’ access mode. List the contents of the directory and view the files within to ensure data written from Windows containers running on other nodes within the cluster is accessible to the current Linux container.

 6. Write data to the shared persistent volume and exit the container’s shell.
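
For example, from the Linux container’s shell (pod1.txt refers to whatever you wrote from the Windows pods earlier; linux.txt is an arbitrary new file name):

$ cd /usr/share/nginx/html
$ ls
$ cat pod1.txt
$ echo 'Hello from the Linux pod' > linux.txt
$ exit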

Figure: Read and write data from a mounted volume from a Linux pod’s shell.

 7. To demonstrate failover persistence with no data loss, delete the Linux workload from either the OCP web console (Workloads -> Pods -> vertical ellipsis menu -> Delete Pod -> Delete) or the CLI ($ oc delete pod <pod_name>).

Figure: Deleting a pod from the OCP web console.

 8. Run $ oc exec -it <windows_pod_name> -c <container_name> -- powershell.exe to access the Windows container’s Microsoft PowerShell.

 9. Navigate to the location of the shared volume as specified by your chosen container mountPath.

 10. List the contents of the directory and view the files within to ensure the data written from the now-deleted Linux workload is still accessible to the other workloads in the cluster, even those running on a different operating system.

Figure: Reading shared persistent data, written from a Linux container, on a Windows container.

Appendix

 

Figure: Test Matrix table.

VERIFYING STORAGE ATTACHMENT:

In order to verify a successful mount on a Linux or Windows host, system and storage administrators can RDP or SSH into a node and view/edit the contents of the shared drive. Helpful commands include:

Windows PowerShell

  • > Get-Volume
  • > Get-Disk
  • > gdr (alias for Get-PSDrive)

Figure: Mounted 1 GiB volume (highlighted) on AWS EC2 Windows instance.

Linux Shell

  • $ df

Figure: Mounted 5 GiB volume (highlighted) on Linux instance.

 

 

 

