Support for Microsoft Windows-based applications, along with the technologies common to a Windows environment, continues to be a popular topic in both the Kubernetes and OpenShift ecosystems. As these types of workloads have become more readily available through the introduction of Windows container support in Kubernetes, providing seamless integration has never been more important. A prior article, written for OpenShift 3, described how a Kubernetes persistent storage plugin type called FlexVolumes could be leveraged to consume Common Internet File System (CIFS) shares, a common type of shared storage typically used by Windows workloads. A FlexVolume was essentially an executable residing on the master and node instances that contained the logic needed to manage access to a storage backend, including primary operations such as mounting and unmounting volumes.
Today, in OpenShift 4, FlexVolumes remain a supported storage provider, but there is an emphasis within the Kubernetes community on embracing a standards-based approach to consuming storage. The Container Storage Interface (CSI) has become the de facto standard for exposing file and block storage to containerized workloads, as it leverages the Kubelet device plugin registration system to allow externalized storage to integrate with the platform. One of the key benefits of the CSI implementation is that it no longer requires modification of the Kubernetes codebase itself (in-tree), as was required previously. Ongoing work in the community has migrated storage plugins to the CSI specification over time, including prior FlexVolume-based implementations for CIFS. The csi-driver-smb project within the Kubernetes CSI group has been working on providing a CSI-based solution for CIFS and recently announced its first release. With a new viable option available, the remainder of this article demonstrates how this project can be implemented in an OpenShift environment in order to consume a set of existing CIFS shares. For a deeper dive into the CSI specification, the CSI developer documentation provides a wealth of information on the specification as a whole and how it can be leveraged.
As with any type of persistent storage installation and configuration in a Kubernetes environment, elevated cluster access privileges are required. Be sure that you have the OpenShift command line tools installed locally and are logged in to a cluster.
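For example, a minimal login and sanity check might look like the following (the API server URL and user are placeholders for your own cluster):

$ oc login https://api.<cluster_domain>:6443 -u <admin_user>
$ oc whoami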
The first step is to install the CSIDriver resource, which enables the discovery of all defined CSI drivers along with any customizations that may be needed for Kubernetes to interact with the driver. Execute the following command to add the resource to the cluster:
$ oc apply -f https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/deploy/csi-smb-driver.yaml
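This manifest registers a CSIDriver object named smb.csi.k8s.io. As a rough sketch of what to expect, it resembles the following (field values reflect the project's defaults at the time of writing and may change between releases):

apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: smb.csi.k8s.io
spec:
  attachRequired: false
  podInfoOnMount: true

The registration can be confirmed by running oc get csidriver.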
With the CSI driver now defined, deploy a DaemonSet to the cluster to make the driver available to mount CIFS shares on all nodes. CSI drivers make use of several standard sidecar containers that aim to alleviate many of the common tasks needed during the lifecycle of a deployed driver. This allows the developer to focus on fulfilling the three primary functions of the CSI specification:
- Identity - Enables Kubernetes to identify the driver and functionality that it supports
- Node - Runs on each node and handles the volume operations requested by the Kubelet, such as mounting and unmounting volumes on the host
- Controller - Responsible for managing the volume, such as attaching and detaching
Each of these components is packaged into a single container and deployed with the rest of the standard CSI sidecar containers in a DaemonSet.
Execute the following command to deploy the DaemonSet to the cluster:
$ oc apply -f https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/deploy/csi-smb-node.yaml
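Assuming the manifest's defaults, which place the DaemonSet in the kube-system namespace with an app=csi-smb-node label, the driver pods can be verified with:

$ oc get pods -n kube-system -l app=csi-smb-node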
After the DaemonSet starts and the driver registers with the Kubelet, a new CSINode API object is created for each node in the cluster and can be viewed by executing the following command:
$ oc get csinode
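The output lists each node along with the number of CSI drivers registered on it, similar to the following (node names here are illustrative):

NAME                   DRIVERS   AGE
worker-0.example.com   1         2m
worker-1.example.com   1         2m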
With the CSI driver installed, CIFS shares can be used by OpenShift. The following diagram depicts the key components of the CIFS CSI driver architecture:
To demonstrate how OpenShift can make use of CIFS as a storage backend, let's create a new project into which we will deploy a sample application:
$ oc new-project cifs-csi-demo
The next step is to statically define PersistentVolumes to associate with any existing CIFS shares. CSI drivers, depending on their capabilities, can leverage either statically defined PersistentVolumes or dynamically provision PersistentVolumes at runtime. This CSI driver only supports the use of statically defined, pre-provisioned PersistentVolumes at this time. A statically defined PersistentVolume for a CIFS share takes the following format:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: <name>
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: <size>
  csi:
    driver: smb.csi.k8s.io
    volumeAttributes:
      source: //<host>/<share>
    volumeHandle: <unique_id>
    nodeStageSecretRef:
      name: <credentials_secret_name>
      namespace: <credentials_secret_namespace>
  mountOptions:
    - <array_mount_options>
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
With an understanding of the structure that represents the PersistentVolume, let’s create a secret that contains the credentials for accessing the share. Regardless of whether the share can be accessed anonymously or not, a secret must be created. Create a new secret called cifs-csi-credentials by specifying the username, password, and domain (optional parameter) as secret key literals as shown below:
$ oc create secret generic cifs-csi-credentials --from-literal=username=<username> --from-literal=password=<password>
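If the share requires a domain, the driver also honors an optional domain key in the same secret, so the command would instead take the following form:

$ oc create secret generic cifs-csi-credentials --from-literal=username=<username> --from-literal=password=<password> --from-literal=domain=<domain>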
Once the secret has been created, the name of the secret along with the namespace can be added to the PersistentVolume under the nodeStageSecretRef field.
Next, specify a name for the PersistentVolume underneath the metadata section, along with a size that represents the amount of storage backing this volume. Kubernetes does not enforce quota limitations at a storage level, so any reasonable value can be provided.
Specify the host and share of the existing CIFS share in the source field under the volumeAttributes section.
The volumeHandle field represents a unique value for the share. It can be set to any value (for example cifs-demo-share).
The final field that needs to be set is an array of mount options that will be used when mounting the share. At a minimum, the following mount options should be set:
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - vers=3.0
If the CIFS share should be accessed anonymously without the need of a username and password, add guest as an additional mount option.
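For example, an anonymously accessible share would use mount options similar to the following:

mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - vers=3.0
  - guest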
Once each of these fields has been set, the completed PersistentVolume looks similar to the following:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cifs-csi-demo
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 10Gi
  csi:
    driver: smb.csi.k8s.io
    volumeAttributes:
      source: //<host>/<share>
    volumeHandle: cifs-demo-share
    nodeStageSecretRef:
      name: cifs-csi-credentials
      namespace: cifs-csi-demo
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
    - vers=3.0
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
Save the contents to a file called cifs-csi-demo-pv.yaml and execute the following command to create the PersistentVolume:
$ oc create -f cifs-csi-demo-pv.yaml
Check the status of the PersistentVolume:
$ oc get pv cifs-csi-demo
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
cifs-csi-demo 10Gi RWX Retain Available 2s
Now, create a PersistentVolumeClaim to associate with the PersistentVolume created above:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cifs-csi-demo
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: ""
  volumeMode: Filesystem
  volumeName: cifs-csi-demo
Add the contents of the PersistentVolumeClaim to a file called cifs-csi-demo-pvc.yaml and execute the following command to add the PersistentVolumeClaim to the cluster:
$ oc create -f cifs-csi-demo-pvc.yaml
After a few moments, verify that the PersistentVolumeClaim has been bound to the PersistentVolume previously created:
$ oc get pvc cifs-csi-demo
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
cifs-csi-demo Bound cifs-csi-demo 10Gi RWX 32s
Finally, let's deploy a simple application that will write the current date to a file located on the CIFS share.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: cifs
  name: deployment-smb-secure
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cifs
  template:
    metadata:
      labels:
        app: cifs
      name: deployment-smb-secure
    spec:
      containers:
        - name: deployment-smb-secure
          image: registry.redhat.io/ubi8/ubi:latest
          command:
            - "/bin/sh"
            - "-c"
            - while true; do echo $(date) >> /mnt/smb/outfile; sleep 1; done
          volumeMounts:
            - name: smb
              mountPath: "/mnt/smb"
              readOnly: false
      volumes:
        - name: smb
          persistentVolumeClaim:
            claimName: cifs-csi-demo
  strategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate
Save the contents to a file called cifs-csi-demo-deployment.yaml and execute the following command to add the Deployment to the cluster:
$ oc create -f cifs-csi-demo-deployment.yaml
The CIFS share specified in the PersistentVolume will then be mounted on the node where the pod associated with the deployment is scheduled.
Display the log file that is being appended to within the CIFS share by executing the following command:
$ oc rsh $(oc get pods -l app=cifs -o jsonpath='{.items[0].metadata.name}') cat /mnt/smb/outfile
Wed Jul 1 02:35:02 UTC 2020
Wed Jul 1 02:35:03 UTC 2020
Wed Jul 1 02:35:04 UTC 2020
Wed Jul 1 02:35:05 UTC 2020
Wed Jul 1 02:35:06 UTC 2020
Wed Jul 1 02:35:07 UTC 2020
Wed Jul 1 02:35:08 UTC 2020
Wed Jul 1 02:35:09 UTC 2020
Wed Jul 1 02:35:10 UTC 2020
Wed Jul 1 02:35:11 UTC 2020
Wed Jul 1 02:35:12 UTC 2020
Now that you have verified logs are being written to the CIFS share from within OpenShift, as an additional validation step, you can confirm that the file is being written to the CIFS share by directly accessing the share from outside of OpenShift.
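For example, from a Linux machine with the Samba client tools installed, the file can be inspected directly (host, share, and username are placeholders for your environment):

$ smbclient //<host>/<share> -U <username> -c 'more outfile'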
While this scenario merely validated that applications running in OpenShift can communicate with persistent storage hosted on CIFS shares, it demonstrated that a standards-based approach to accessing storage backends through the Container Storage Interface (CSI) can unlock capabilities beyond the included Kubernetes and OpenShift feature set and drive new workloads onto the platform. A future enhancement to the CIFS CSI driver could add support for managing PersistentVolumes in a dynamic fashion, eliminating the need for a cluster administrator to manually provision volumes prior to their use.
About the author
Andrew Block is a Distinguished Architect at Red Hat, specializing in cloud technologies, enterprise integration and automation.