Background
Offering a range of predefined virtual machine images that are ready to consume and populated with a desired set of packages and configurations has become a default in the virtualization space.
With OpenShift Virtualization, the complete lifecycle of virtual machines can be controlled over the API by utilizing YAML-formatted manifests describing the virtual machines' specifications and state.
The same is true for the lifecycle of the disk images that are referenced by the virtual machines' templates.
In the simplest deployments, the use of vanilla images such as the RHEL KVM guest image, combined with cloud-init config, is enough to bring a virtual machine to life.
Let's take a closer look at how to shape (edit and define) such a vanilla image into our desired state to satisfy the requirements of more advanced scenarios and make it consumable for virtual machines in the cluster.
Additionally, we will utilize secrets to reference values such as passwords or public keys used in the image-building process.
Scenario
As a foundation for the creation of golden images, you can utilize any OpenShift cluster (4.12 or newer) with the OpenShift Virtualization and Red Hat Pipelines Operators installed. A storage class that provides file and block storage must also be in place.
Start by making a KVM guest image available (e.g., the RHEL 9.2 KVM guest image from the Red Hat Portal). It can be fetched with a DataVolume resource, which utilizes the CDI (Containerized Data Importer) to import the qcow2-formatted image into a PV (Persistent Volume) in OpenShift.
Building a Golden Image with Tekton including Secrets
Tekton allows you to build pipelines as OpenShift-native objects. A pipeline is composed of at least one task. A task runs a container image to achieve the desired state; for example, to download a qcow2 image into a PVC. More information on Tekton tasks related to OpenShift Virtualization is available here.
In this scenario, you are going to manipulate a qcow2 disk image that is stored in a Persistent Volume. The container that allows you to do this runs virt-customize (libguestfs tools) on the image mounted inside the container.
To bring the disk into your desired state, you will create a new user and populate its home directory with a public key. Additionally, you will subscribe the system to a Red Hat repository with your credentials, install some packages, and later unsubscribe the system again.
You will also change the root password for this image. These basic actions are performed by virt-customize.
For a list of all possible command options, refer to the libguestfs documentation.
Your passwords and the public key are stored in separate secrets and will be referenced by the disk-virt-customize task.
Architecture
Let's take a brief look at the container that is executed by running the disk-virt-customize ClusterTask:
The PVC holding the qcow2 image and the secrets are mounted in the container, so they can be referenced and targeted by the virt-customize commands. At its core, the container simply executes the virt-customize process with the given parameters as commands against the image.
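Conceptually, the task boils down to a virt-customize invocation against the mounted disk. Here is a minimal sketch for illustration only; the disk path and the selection of options are assumptions, not the task's exact internals:
# Illustrative sketch: roughly what the disk-virt-customize container runs.
# The path to the mounted qcow2 image is an assumption; the real path is task-internal.
virt-customize -a /mnt/pvc/disk.img \
  --run-command 'useradd -m techuser' \
  --install vim,ipa-client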
The Tekton pipeline allows you to chain several tasks together and execute them in order:
Highlight Involved Tasks
Task 01 supplies a container that handles the creation of a DataVolume. This DataVolume imports the RHEL KVM guest image in qcow2 format (source) into a PVC (target).
The PVC will be created in volumeMode ‘Filesystem’ so it can be natively consumed by the virt-customize container in the next step.
Task 02 supplies a container that runs virt-customize and mounts the PVC containing the qcow2 image at a well-known path on which the virt-customize commands are executed. Both the name of the PVC and the customize commands are expected parameters of this task.
Task 03 acts very similarly to Task 01, as both create a DataVolume. In this case, however, the source points to your PVC containing the now-customized qcow2 image, and its content is transferred into a PVC with volumeMode ‘Block’, which is then ready to be cloned and consumed by virtual machines in the OpenShift cluster. This new PVC will be your ready-to-use, customized ‘golden image.’
Procedure
The following procedure is based on these cluster prerequisites:
- OpenShift 4.13
- Red Hat Pipelines Operator
- OpenShift Virtualization Operator
- OpenShift Data Foundation (file/block storage)
Before you start, let's make sure that the ClusterTasks are available by checking the feature gate of the HyperConverged object.
The key deployTektonTaskResources must be set to true:
oc patch HyperConverged/kubevirt-hyperconverged -p '{"spec":{"featureGates":{"deployTektonTaskResources":true}}}' --type=merge -n openshift-cnv
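To verify that the patch took effect, you can read the feature gate back; it should print true:
oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o jsonpath='{.spec.featureGates.deployTektonTaskResources}'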
You should now be able to find the ClusterTask named disk-virt-customize:
oc get ClusterTasks | grep virt-customize
Now you can begin to set up your environment to build custom virtual machine images. First, create a namespace/project where everything should take place and move into it:
oc create namespace image-building && oc project image-building
Import the Disk Image
At this point, you need to make the qcow2 image you want to customize available. To get the image into your cluster, you can utilize a DataVolume; this imports the image from a source of your choice directly into a PVC.
The source can be an existing PVC that contains an image you clone, or you can pull the image from the web.
It is possible to download from a URL pointing directly to the image file or, as in this case, to use the RHEL 9.2 cloud image, which you pull from a container registry.
The target PVC needs to be created in volumeMode Filesystem, so you can directly mount and modify it in the virt-customize container (you will change the PVC to volumeMode Block later).
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: rhel9.2
  annotations:
    cdi.kubevirt.io/storage.bind.immediate.requested: 'true'
spec:
  source:
    registry:
      url: docker://registry.redhat.io/rhel9/rhel-guest-image:9.2
      pullMethod: node
  storage:
    resources:
      requests:
        storage: 10Gi
    volumeMode: Filesystem
Note: For readability, only the plain YAML-formatted manifests are printed in this blog. To post them against the OpenShift API, copy each one into a file (e.g., datavolume.yaml) and use the default command syntax:
oc create -f datavolume.yaml
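The import takes a short while. You can watch the DataVolume until its phase reports Succeeded (dv is the short name for DataVolume):
oc get dv rhel9.2 -n image-building -w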
Optional: Clone PVC from DataSource
You could also clone an existing image that is managed by a DataSource in the openshift-virtualization-os-images namespace.
DataSources automatically populate PVCs with the latest version of selected distributions such as RHEL, Fedora, and Windows. As shown in the example below, you can reference the DataSource to make sure you always have an up-to-date image for the build.
Note: To copy resources between namespaces, the proper permissions must be in place.
The good news is that cloning from the openshift-virtualization-os-images namespace works out of the box, as OpenShift Virtualization stores its default images there.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: rhel9-latest
  annotations:
    cdi.kubevirt.io/storage.bind.immediate.requested: 'true'
spec:
  sourceRef:
    kind: DataSource
    name: rhel9
    namespace: openshift-virtualization-os-images
  storage:
    resources:
      requests:
        storage: 30Gi
    volumeMode: Filesystem
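If you are unsure which DataSources are available in your cluster, you can list them first:
oc get datasource -n openshift-virtualization-os-images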
Create Secrets
Now that your image is ready, you can create the secrets holding the data (in this case, the passwords and the public key) that you want to reference during the image build process.
Three secrets will be created. The first one holds the value for the root password:
apiVersion: v1
kind: Secret
metadata:
  name: disk-virt-customize-workspace-secret-01
type: Opaque
stringData:
  password: "Naishogoto4000"
The second secret holds the password for the Red Hat account, which is used to subscribe the system so you can install the packages.
apiVersion: v1
kind: Secret
metadata:
  name: disk-virt-customize-workspace-secret-02
type: Opaque
stringData:
  smpassword: "Ulleungdo4000"
The third secret holds the public key. Note that in the YAML manifest it is stored under a field named “authorized_keys”.
Some context: In the virt-customize task, your secrets' values are mounted as files. These files carry the same names as their field keys in the manifest. Therefore, you will be able to directly upload the public key as a file named authorized_keys into the user's .ssh directory.
apiVersion: v1
kind: Secret
metadata:
  name: disk-virt-customize-workspace-secret-03
type: Opaque
stringData:
  authorized_keys: |
    ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAWUZs13H78b6ae502744c6c75ZEjWHpC3Sv+X04y/WNSreYvqvUA0V34jkiaYHvEXRpTJ9OM6XlBxndw2rRC1MUI=
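If you prefer the CLI to YAML manifests, the same three secrets can be created with oc create secret; a sketch, assuming your public key sits in a local file such as id_ecdsa.pub:
oc create secret generic disk-virt-customize-workspace-secret-01 --from-literal=password='Naishogoto4000'
oc create secret generic disk-virt-customize-workspace-secret-02 --from-literal=smpassword='Ulleungdo4000'
oc create secret generic disk-virt-customize-workspace-secret-03 --from-file=authorized_keys=id_ecdsa.pub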
These secrets will be referenced in the task by a workspace. In this case the workspace is named data01, which means that on the filesystem of the container that executes the task, a mountpoint /data01 is created. This mountpoint holds all the secrets and allows you to reference them as seen in the TaskRun example shown below.
Now that the PVC and the secrets are in place, you are already in position to change the image into your desired state.
Create TaskRun
The TaskRun object references the disk-virt-customize ClusterTask through the "taskRef" key, while the "params" key points to the PVC you just created.
Most interesting is the customizeCommands field, where you can use the full capabilities of the virt-customize command. Just like on the CLI, simply type the options to change the image.
In this case, change the root password, create a new user with a public key, and register the system with subscription-manager against the Red Hat repositories.
After you attach the subscription, install some packages of your choice and unregister the system again.
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: disk-virt-customize-taskrun-01
  namespace: image-building
spec:
  taskRef:
    kind: ClusterTask
    name: disk-virt-customize
  params:
    - name: pvc
      value: rhel9.2
    - name: customizeCommands
      value: |
        root-password file:/data01/password
        run-command useradd -m techuser
        mkdir /home/techuser/.ssh
        chmod 0700:/home/techuser/.ssh
        upload /data01/authorized_keys:/home/techuser/.ssh/
        chmod 0600:/home/techuser/.ssh/authorized_keys
        sm-credentials shadowman:file:/data01/smpassword
        sm-register
        sm-attach auto
        install vim,ipa-client
        sm-unregister
  workspaces:
    - name: data01
      projected:
        sources:
          - secret:
              name: disk-virt-customize-workspace-secret-01
          - secret:
              name: disk-virt-customize-workspace-secret-02
          - secret:
              name: disk-virt-customize-workspace-secret-03
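Save the manifest to a file (for example, taskrun.yaml; the filename is up to you) and create it in the project:
oc create -f taskrun.yaml -n image-building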
Once the TaskRun is created, a Pod is started that pulls the container image "registry.redhat.io/container-native-virtualization/kubevirt-tekton-tasks-disk-virt-customize-rhel9" and executes the virt-customize commands against the PVC mounted inside.
The workspace behaves as expected, mounting the secrets under a directory named after its name key (/data01 in this example). The secrets are projected there and can be referenced as shown in root-password file:/data01/password.
Once the TaskRun has started, you can inspect its progress in the disk-virt-customize-taskrun-01-pod:
oc logs -n image-building disk-virt-customize-taskrun-01-pod
step-run-virt-customize
[ 0.0] Examining the guest ...
[ 4.6] Setting a random seed
[ 4.7] Setting the machine ID in /etc/machine-id
[ 4.7] Running: useradd -m techuser
[ 5.0] Making directory: /home/techuser/.ssh
[ 5.0] Changing permissions of /home/techuser/.ssh to 0700
[ 5.0] Uploading: /data01/authorized_keys to /home/techuser/.ssh/
[ 5.0] Changing permissions of /home/techuser/.ssh/authorized_keys to 0600
[ 5.0] Running: chown techuser /home/techuser/.ssh/authorized_keys
[ 5.0] Registering with subscription-manager
[ 13.7] Attaching to compatible subscriptions
[ 32.4] Installing packages: vim ipa-client
[ 53.5] Unregistering with subscription-manager
[ 54.6] Setting passwords
[ 55.4] SELinux relabelling
[ 64.6] Finishing off
As you can see, the virt-customize command finished its job, building your new image.
At this point, you are ready to use your new image as a bootable volume. Of course, a VirtualMachine disk should be in volumeMode Block, so in the next section we will also transform the PVC from Filesystem to Block mode.
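If you want to double-check, the current volumeMode of the PVC can be queried directly; it should still report Filesystem:
oc get pvc rhel9.2 -n image-building -o jsonpath='{.spec.volumeMode}'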
Create a Pipeline
Now let's wrap everything up and put it in a pipeline:
The pipeline itself is just a collection of tasks and can be triggered by a PipelineRun object.
Your pipeline will consist of three tasks:
1. Pulling the image into a PVC (Filesystem)
2. Running the virt-customize commands on the mounted PVC
3. Transforming the PVC from volumeMode Filesystem to Block
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: golden-image-rhel9
spec:
  workspaces:
    - name: data01
  tasks:
    - name: create-datavolume
      params:
        - name: waitForSuccess
          value: 'true'
        - name: manifest
          value: |
            apiVersion: cdi.kubevirt.io/v1beta1
            kind: DataVolume
            metadata:
              name: rhel9.2
            spec:
              storage:
                resources:
                  requests:
                    storage: 15Gi
                volumeMode: Filesystem
              source:
                registry:
                  url: 'docker://registry.redhat.io/rhel9/rhel-guest-image:9.2'
                  pullMethod: node
      taskRef:
        kind: ClusterTask
        name: modify-data-object
    - name: disk-virt-customize
      params:
        - name: pvc
          value: rhel9.2
        - name: customizeCommands
          value: |
            root-password file:/data01/password
            run-command useradd -m techuser
            mkdir /home/techuser/.ssh
            chmod 0700:/home/techuser/.ssh
            upload /data01/authorized_keys:/home/techuser/.ssh/
            chmod 0600:/home/techuser/.ssh/authorized_keys
            sm-credentials shadowman:file:/data01/smpassword
            sm-register
            sm-attach auto
            install vim,ipa-client
            sm-unregister
      taskRef:
        kind: ClusterTask
        name: disk-virt-customize
      runAfter:
        - create-datavolume
      workspaces:
        - name: data01
          workspace: data01
    - name: transform-pvc-from-file-to-block
      params:
        - name: waitForSuccess
          value: 'true'
        - name: manifest
          value: |
            apiVersion: cdi.kubevirt.io/v1beta1
            kind: DataVolume
            metadata:
              name: rhel9.2-final
            spec:
              storage:
                accessModes:
                  - ReadWriteMany
                resources:
                  requests:
                    storage: 15Gi
                volumeMode: Block
              source:
                pvc:
                  namespace: image-building
                  name: rhel9.2
      taskRef:
        kind: ClusterTask
        name: modify-data-object
      runAfter:
        - disk-virt-customize
As shown above, the pipeline has a key named "runAfter" that allows you to define the logical order in which the tasks are executed.
The pipeline itself just bundles a number of tasks together. To trigger it, you create a PipelineRun referencing the pipeline.
Also note that the workspaces are defined in both the Pipeline and the PipelineRun manifests.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: golden-image-rhel9-
spec:
  pipelineRef:
    name: golden-image-rhel9
  workspaces:
    - name: data01
      projected:
        sources:
          - secret:
              name: disk-virt-customize-workspace-secret-01
          - secret:
              name: disk-virt-customize-workspace-secret-02
          - secret:
              name: disk-virt-customize-workspace-secret-03
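Because the manifest uses generateName rather than a fixed name, create it with oc create (oc apply does not work with generateName); assuming you saved it as pipelinerun.yaml, you can then watch the run on the CLI:
oc create -f pipelinerun.yaml -n image-building
oc get pipelinerun -n image-building -w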
The progress of the PipelineRun can be followed either on the CLI, by checking its status field, or visually in the OpenShift Console under the Pipelines tab.
Here you can track the progress of each task in the pipeline:
The TaskRun overview is also worth a look. Each TaskRun is listed and references its ClusterTask and Pod. The duration is also displayed. The complete image-build process, including download, took approximately three minutes.
On the Logs view, the container output of each task is gathered, allowing deep insight into each step, which is extraordinarily useful when debugging.
Conclusion
We are now ready to build virtual machine images inside and for OpenShift.
The whole build lifecycle of the virtual machine image happens inside the cluster. With workspaces, you can project and process data inside the image to set passwords or place public keys, certificates, etc., as desired.
Once the customization is done, the image is ready to use and can immediately be referenced as a source in a virtual machine template or as a bootable volume for the new instance type feature.
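As a minimal sketch of that last step, a DataVolume in the virtual machine's namespace can clone the golden image; the name vm-disk-rhel9 and the namespace my-vms below are placeholders, not part of the pipeline above:
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: vm-disk-rhel9
  namespace: my-vms
spec:
  source:
    pvc:
      name: rhel9.2-final
      namespace: image-building
  storage:
    resources:
      requests:
        storage: 15Gi
    volumeMode: Block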
Further reading:
To see how pipelines work with Windows images, check out this blog post by Karel Simon.
For a high-level overview of Tekton, check out this blog post by Ronen Sde-Or.