Abstract

In this article, we will show you how to move a .NET application deployment from local builds on virtual machines to a hosted build server on OpenShift. Our scenario assumes we’re starting with Azure DevOps Pipelines to deploy a .NET application to Red Hat OpenShift on AWS (ROSA), but the target platform can also be OpenShift running on Google Cloud, Azure, or on-premises. The goal is to keep the developer deployment experience largely the same while moving the agent to a cloud-native environment, which also lets you reap the benefits of OpenShift’s observability and scalability.

The instructions that follow describe how to set up a self-hosted Azure Pipelines build agent in OpenShift; it’s based on our hands-on experience with a customer. Azure Pipelines offers a Microsoft-hosted agent pool, but our customer deployed their ROSA cluster using AWS PrivateLink, and their OpenShift cluster was accessible from a private IP address space only. Microsoft has published solutions to self-host a build agent in Linux or in Docker. We wanted to take it a step further and host it within the OpenShift cluster that Azure Pipelines would interact with, to eliminate the need for an additional standalone Azure Virtual Machine.

Logical Overview

The following image shows how the containerized build agent works with an Azure DevOps Pipeline. The tasks are defined in the Azure Pipeline, and Azure DevOps hands each task to the build agent on the OpenShift cluster to perform:

  1. Build task
  2. Push image to registry task
  3. Deploy app task

Building the Agent

The following installation assumes you have a running OpenShift 4 cluster and have created an Azure DevOps organization. From the Azure DevOps web console, set up a Personal Access Token (PAT) to be used by the build agent. Then create a new self-hosted agent pool or configure an existing one; for this example, we leverage the existing Default self-hosted agent pool. Verify from the pool’s Security tab that you are assigned as an Administrator of the pool.
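
If you prefer the command line, the Azure CLI’s azure-devops extension can confirm the pool and your access to it. A quick check, as a convenience sketch, assuming the extension is installed and you are logged in to your organization:

$ az extension add --name azure-devops
$ az pipelines pool list --organization https://dev.azure.com/yourOrg --pool-name Default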

We will leverage a wrapper script to configure and run the agent container, and we have customized these instructions to streamline the build-time and runtime steps. Save the wrapper script locally as start.sh, and save the BuildConfig definition as buildconfig.yaml. As defined in the BuildConfig, our agent is built on the Red Hat Universal Base Image (UBI) for .NET 6.0, which is based on Red Hat Enterprise Linux and freely distributable without a Red Hat subscription.
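
For orientation, the BuildConfig is shaped roughly as follows. This is a minimal sketch, assuming a UBI 8 .NET 6.0 base image, the start-sh ConfigMap created below as a build input, and an inline Dockerfile; the actual buildconfig.yaml may differ in its Dockerfile steps and triggers:

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: azure-build-agent
spec:
  output:
    to:
      kind: ImageStreamTag
      name: azure-build-agent:latest
  source:
    type: Dockerfile
    dockerfile: |
      FROM registry.access.redhat.com/ubi8/dotnet-60:latest
      # ...install Podman/Buildah, download the agent from
      # AZP_AGENT_PACKAGE_LATEST_URL, and apply the ENV/RUN
      # security settings discussed below...
      COPY start.sh .
      CMD ["./start.sh"]
    configMaps:
    - configMap:
        name: start-sh        # injects start.sh into the build context
  strategy:
    type: Docker
  triggers:
  - type: ConfigChange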

Now create the following artifacts in OpenShift to build the agent image; preconfigured triggers in the BuildConfig will start a new build automatically.

$ oc new-project azure-build
$ oc create configmap start-sh --from-file=start.sh=start.sh
$ oc create imagestream azure-build-agent
$ oc create -f buildconfig.yaml
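
The ConfigChange trigger kicks off a first build as soon as the BuildConfig is created; you can follow its progress with:

$ oc logs -f bc/azure-build-agent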

Optionally, determine the latest published agent release: navigate to the Azure Pipelines Agent releases page, check for the highest version number listed, and note the agent download URL for Linux x64.
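
You can also query the latest release from the agent’s GitHub repository on the command line; a quick sketch, assuming curl and jq are available:

$ curl -s https://api.github.com/repos/microsoft/azure-pipelines-agent/releases/latest | jq -r .tag_name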

Configure the AZP_AGENT_PACKAGE_LATEST_URL environment variable in the BuildConfig with the desired agent download URL, and build a new agent image. At the time of this writing, the latest release is 2.210.1.

$ oc set env bc/azure-build-agent AZP_AGENT_PACKAGE_LATEST_URL=https://vstsagentpackage.azureedge.net/agent/2.210.1/vsts-agent-linux-x64-2.210.1.tar.gz
$ oc start-build azure-build-agent

A Word on Container Security

Now that the agent image has been built, we’ll need to deploy it. For the agent pod to perform image builds itself, we effectively need to run Podman within Podman in OpenShift. While there are various methods to accomplish this, we’d also like to adhere to the OpenShift security best practice that most containers, except those managing or monitoring the host system itself, should run as a non-root user.

Let’s review some of the security we’ve baked into the image. From the inline Dockerfile in the BuildConfig:

ENV _BUILDAH_STARTED_IN_USERNS="" \
  BUILDAH_ISOLATION=chroot \
  STORAGE_DRIVER=vfs
  • We’ve opted to lock down the Buildah container (Podman delegates docker-format builds to Buildah) by not starting in a user namespace and by isolating the filesystem with chroot.
  • For simplicity, we’ve opted for the VFS storage driver, although it has poor performance. We could alternatively use the fuse-overlayfs storage driver, which requires Podman on the host to mount /dev/fuse into the container; an example can be found here.
RUN usermod --add-subuids 100000-165535 default && \
  usermod --add-subgids 100000-165535 default && \
  setcap cap_setuid+eip /usr/bin/newuidmap && \
  setcap cap_setgid+eip /usr/bin/newgidmap
  • We’ve configured rootless Podman to map additional UIDs and GIDs so it can interact with multiple user namespaces.
  • We’ve set file capabilities on the newuidmap and newgidmap binaries so they can elevate privileges for the extra users and groups.

The existing default SecurityContextConstraints (SCCs) in OpenShift do not fit our requirement to run rootless; a further explanation can be found in this blog post. To support mapping additional UIDs and GIDs, we’ll create an SCC that runs rootless with the SETUID and SETGID Linux capabilities. Save the SCC definition as nonroot-builder.yaml.
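
As a rough sketch, the SCC grants only those two capabilities on top of a non-root baseline; the exact nonroot-builder.yaml may differ in its volume and group settings:

apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: nonroot-builder
allowPrivilegedContainer: false
allowPrivilegeEscalation: true      # newuidmap/newgidmap must be able to raise privileges
allowedCapabilities:
- SETUID
- SETGID
runAsUser:
  type: MustRunAsNonRoot            # pods must run with a non-root UID
seLinuxContext:
  type: MustRunAs
fsGroup:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
volumes:
- configMap
- emptyDir
- secret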

As cluster-admin, create a serviceaccount for the build agent, create the nonroot-builder SCC, and bind the SCC to the serviceaccount:

$ oc create serviceaccount azure-build-sa
$ oc create -f nonroot-builder.yaml
$ oc adm policy add-scc-to-user nonroot-builder -z azure-build-sa

Deploying the Agent

The Azure build agent is configured to run unattended, which allows us to deploy the agent as an OpenShift pod without manual intervention. Configure the Azure DevOps credentials in a Secret named azdevops, replacing the environment variable values with your own. For example:

$ oc create secret generic azdevops \
--from-literal=AZP_URL=https://dev.azure.com/yourOrg \
--from-literal=AZP_TOKEN=YourPAT \
--from-literal=AZP_POOL=NameOfYourPool
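
For context, the wrapper script reads these variables when registering the agent. A minimal sketch of the relevant lines in start.sh, assuming the flags from Microsoft’s documented container agent setup:

# Register the agent non-interactively using values from the azdevops Secret
./config.sh --unattended \
  --url "$AZP_URL" \
  --auth PAT \
  --token "$AZP_TOKEN" \
  --pool "${AZP_POOL:-Default}" \
  --agent "${AZP_AGENT_NAME:-$(hostname)}" \
  --replace \
  --acceptTeeEula

# Hand control to the agent's run loop
exec ./run.sh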

Optionally, for a proxy configuration, also create a Secret named azproxy, replacing the environment variable values with your own. The NO_PROXY proxy bypass list can be extracted from the cluster-wide proxy. For example:

$ oc get proxy -o jsonpath='{.items[0].status.noProxy}'
$ oc create secret generic azproxy \
--from-literal=AZP_PROXY_URL=http://192.168.0.1:8888 \
--from-literal=AZP_PROXY_USERNAME=myuser \
--from-literal=AZP_PROXY_PASSWORD=mypass \
--from-literal=HTTPS_PROXY=http://myuser:mypass@192.168.0.1:8888 \
--from-literal=HTTP_PROXY=http://myuser:mypass@192.168.0.1:8888 \
--from-literal=NO_PROXY=.cluster.local,.ec2.internal,.svc,10.0.0.0/16,10.128.0.0/14,127.0.0.1,169.254.169.254,172.30.0.0/16,api-int.example.com,example.com,localhost

An unauthenticated proxy can be defined as follows (note that no credentials appear in the proxy URLs):

$ oc create secret generic azproxy \
--from-literal=AZP_PROXY_URL=http://192.168.0.1:8888 \
--from-literal=HTTPS_PROXY=http://192.168.0.1:8888 \
--from-literal=HTTP_PROXY=http://192.168.0.1:8888 \
--from-literal=NO_PROXY=.cluster.local,.ec2.internal,.svc,10.0.0.0/16,10.128.0.0/14,127.0.0.1,169.254.169.254,172.30.0.0/16,api-int.example.com,example.com,localhost

Save the Deployment definition locally as deployment.yaml and create it; this will spin up a running build agent pod. You can also scale up the pod replicas to deploy additional agent pods, as shown below.

$ oc create -f deployment.yaml
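
For orientation, the Deployment ties together the pieces created so far. A minimal sketch, assuming the agent image was pushed to the azure-build-agent imagestream in the azure-build project; the actual deployment.yaml may differ:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-build-agent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-build-agent
  template:
    metadata:
      labels:
        app: azure-build-agent
    spec:
      serviceAccountName: azure-build-sa   # bound to the nonroot-builder SCC
      containers:
      - name: azure-build-agent
        image: image-registry.openshift-image-registry.svc:5000/azure-build/azure-build-agent:latest
        envFrom:
        - secretRef:
            name: azdevops                 # AZP_URL, AZP_TOKEN, AZP_POOL
        - secretRef:
            name: azproxy                  # optional proxy settings
            optional: true

Scaling the agent pool is then a one-liner:

$ oc scale deployment/azure-build-agent --replicas=3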

Finally, check that the build agent is running in Azure Pipelines. View the registered agents in the agent pool, and you should now see a build agent with Online status.

Your build agent container is now ready to use in your Azure DevOps Pipeline to build and deploy an application on your OpenShift cluster. Now you can take advantage of OpenShift’s built-in tools to monitor and scale the build agent in your cluster.


About the author

Kevin Chung is a Principal Architect focused on assisting enterprise customers in design, implementation and knowledge transfer through a hands-on approach to accelerate adoption of their managed OpenShift container platform.
