I recently collaborated with fellow Red Hatters to create a whiteboarding video that introduces OpenShift Serverless at a high level. In this article, I build upon that YouTube video and my recent work with Quarkus to create a hands-on deep dive into OpenShift Serverless. This article walks you through using the OpenShift Serverless Operator to seamlessly add serverless capabilities to an OpenShift 4.3 cluster, and then using the Knative CLI tool to deploy a Quarkus native application as a serverless service onto that same cluster.
OpenShift Serverless
OpenShift Serverless helps developers deploy and run applications that scale up, or down to zero, on demand. Applications are packaged as OCI-compliant Linux containers that can run anywhere. Under the serverless model, an application simply consumes compute resources and automatically scales up or down based on use. As mentioned in the introduction, the whiteboarding YouTube video below provides a high-level overview of OpenShift Serverless.
Watch the video: https://youtu.be/DYhK60vkDQY
OpenShift Serverless is based on Knative, an open source project started by Google. Specifically, OpenShift Serverless uses Knative's Serving component. Knative Serving extends Kubernetes with Custom Resource Definitions (CRDs) that support deploying and serving serverless applications and functions. It runs containerized applications while abstracting away details such as networking, autoscaling (including scaling down to zero), and revision tracking. As I will demonstrate below, Knative Serving is easy to get started with and scales to support advanced scenarios. The Knative documentation does a good job of breaking down the components of Knative Serving and providing examples. In this article, I focus on installing OpenShift Serverless and deploying my sample Quarkus application as a Knative Serving application.
Installing OpenShift Serverless
The best way to add serverless capabilities to an OpenShift cluster is by installing the OpenShift Serverless Operator. Adding the operator to an OpenShift 4.3 cluster is straightforward.
- Log in to the web console as a cluster administrator.
- Make sure you are using the web console in the Administrator perspective.
- Under the openshift-operators project, navigate to the Operators -> OperatorHub menu item.
- Search for OpenShift Serverless in the “Filter by keyword” search box.
- Click on OpenShift Serverless Operator to start installing. Install with all the default options selected.
- After clicking through the installation views, you’ll land on the Installed Operators view. The OpenShift Serverless Operator depends on Red Hat OpenShift Service Mesh, which in turn depends on Elasticsearch, Jaeger, and Kiali. Wait until the Status column for every operator shows a green checkmark indicating InstallSucceeded.
- Create a new project called knative-serving. This is where the Knative Serving object that manages all serverless applications on your cluster will live.
- Navigate to the Operators -> Installed Operators view under the knative-serving project. You’ll notice that all the operators we just installed in the openshift-operators project are copied here.
- Once all the operators show a green checkmark in the Status column indicating Copied, click on the OpenShift Serverless Operator.
- In the Operator Details view, click on the Knative Serving tab. If the view shows a 404 page, wait a few seconds and refresh.
- Click on the Create Knative Serving button. In the Create KnativeServing form view, click Create to create the Serving object using the default out-of-the-box YAML.
- After creating the Knative Serving object, navigate to the Workloads -> Pods view and wait until all pods are in the Running state. You can run the same check from the CLI, as shown below.
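If you prefer the command line, the following is the equivalent check; the exact pod names and counts vary by OpenShift Serverless version.

```bash
# All Knative Serving pods (activator, autoscaler, controller, webhook, ...)
# should report Running before you continue
oc get pods -n knative-serving
```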
And that’s it. Users on your cluster can now start deploying serverless applications. So let’s do just that.
Deploying a Serverless Application
I recently created a reference Quarkus application as part of a blog article I wrote introducing Quarkus. I’ll use that application as a reference here as well while demonstrating how to deploy a serverless application to OpenShift. The application is a RESTful API for storing simple text values. The values can be stored either in memory or in a MySQL database. In our serverless deployment, we will use the MySQL configuration so that values persist beyond any single serverless instance’s lifetime.
To get started, let’s create a project and deploy the MySQL instance. As mentioned in the last section, serverless features are available to any user on your cluster, including ones without cluster administrator privileges. Feel free to run the following steps as a non-administrator user.
NOTE: The following steps assume familiarity with the OpenShift CLI. See the OpenShift docs for an introduction.
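Here is a minimal sketch of these steps, assuming the mysql-ephemeral template is available on your cluster. The project name samples matches the deployment later in this article; the database name valuesdb and the valuesapi credentials are illustrative, and the exact schema setup is described in the application’s README.

```bash
# Create a project to hold the database and the serverless application
oc new-project samples

# Deploy MySQL from the built-in ephemeral template
# (database name and credentials here are illustrative)
oc new-app mysql-ephemeral \
  -p MYSQL_USER=valuesapi \
  -p MYSQL_PASSWORD=valuesapi \
  -p MYSQL_DATABASE=valuesdb

# Wait for the MySQL pod to reach Running, then note its pod ID
oc get pods

# Open a remote shell inside the MySQL pod and start the MySQL client
# to set up the application's schema (see the application's README)
oc rsh <mysql-pod-id>
mysql -u valuesapi -pvaluesapi valuesdb
```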
NOTE: One of the steps above uses oc rsh to run the MySQL command-line client inside the MySQL pod. To get the pod ID used in that step, run oc get pods, and make sure the MySQL pod is running before opening the shell.
We will configure our serverless application to connect to the MySQL instance, instead of using the in-memory value store, by injecting environment variables into the pod. Those environment variables will come from an OpenShift secret. Let’s create that secret now.
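A minimal sketch of that secret follows. The secret name valuesapi-mysql is an assumption, and the variable names assume the application reads the standard Quarkus datasource properties from the environment; adjust both to match the application’s configuration.

```bash
# Store the MySQL connection settings in a secret; the Knative service
# will inject these as environment variables at deploy time
oc create secret generic valuesapi-mysql \
  --from-literal=QUARKUS_DATASOURCE_URL=jdbc:mysql://mysql.samples.svc:3306/valuesdb \
  --from-literal=QUARKUS_DATASOURCE_USERNAME=valuesapi \
  --from-literal=QUARKUS_DATASOURCE_PASSWORD=valuesapi
```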
Next, we need to build our application’s container image and publish it to a registry visible to our OpenShift cluster. I published a public image to my Quay.io registry using the following commands. Feel free to push to a container registry of your choice.
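The commands below are a sketch of a typical Quarkus native build and push. The quay.io/<your-account>/valuesapi image path is a placeholder, and the native profile and Dockerfile.native path assume the standard Quarkus project layout.

```bash
# Build a native executable inside a container, so no local GraalVM is needed
./mvnw package -Pnative -Dquarkus.native.container-build=true

# Build and push the container image (log in first with: podman login quay.io)
podman build -f src/main/docker/Dockerfile.native -t quay.io/<your-account>/valuesapi .
podman push quay.io/<your-account>/valuesapi
```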
We are finally ready to deploy our serverless application. The most seamless approach is the Knative CLI tool, kn. The kn tool uses the Kubernetes authentication configuration stored in the kubeconfig file, which the oc login and oc new-project commands already set up. So, the following command deploys the application as a Knative service in the samples project.
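The following is a sketch of that deployment, reusing the placeholder image path and the secret name from the earlier steps; if your kn version does not support --env-from, pass each variable individually with --env instead.

```bash
# Deploy the image as a Knative service in the samples project,
# injecting the MySQL connection settings from the secret
kn service create valuesapi \
  --image quay.io/<your-account>/valuesapi \
  --env-from secret:valuesapi-mysql \
  -n samples
```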
Running this command creates the Knative objects needed to deploy this application as an OpenShift Serverless service. The results of this command should be similar to the following output.
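The exact wording varies by kn version, but the output looks roughly like this; the timings, revision suffix, and URL will differ on your cluster:

```
Creating service 'valuesapi' in namespace 'samples':

  0.345s The Route is still working to reflect the latest desired specification.
  1.582s Configuration "valuesapi" is waiting for a Revision to become ready.
 12.837s Ready to serve.

Service 'valuesapi' created to latest revision 'valuesapi-xxxxx-1' is available at URL:
http://valuesapi.samples.apps.<cluster-domain>
```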
This command creates three OpenShift Serverless objects: a service, a revision, and a route. You can view these objects in the OpenShift web console by navigating to Serverless under the samples project.
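You can also list them with the kn CLI:

```bash
kn service list -n samples
kn revision list -n samples
kn route list -n samples
```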
Autoscaling
Going to Workloads -> Deployments, you’ll notice that there is a deployment associated with the valuesapi service. However, since no requests have been sent to the application yet, no pods are running for that deployment. This is the scale-to-zero feature of OpenShift Serverless in action. Sending requests to the service’s route triggers OpenShift Serverless to automatically scale the deployment to one pod (or more, depending on the volume of requests). On my cluster, the auto-generated route for the valuesapi service is http://valuesapi.samples.apps.cluster-plano-dc47.plano-dc47.example.opentlc.com. Similarly, once OpenShift Serverless detects that requests are no longer arriving at the valuesapi route, the deployment is automatically scaled back down to zero. The README in the application’s Git repository lists all the endpoints exposed by the valuesapi. I’ll leave it as an exercise for the reader to consume the various endpoints and test the autoscaling behavior, as sketched below.
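A minimal way to observe the behavior, with the host and endpoint path left as placeholders since the exact paths live in the README:

```bash
# Terminal 1: watch pods appear when traffic arrives and disappear when idle
oc get pods -n samples --watch

# Terminal 2: hit the service route (replace the host and path with your own)
curl http://valuesapi.samples.apps.<cluster-domain>/<endpoint>
```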
Conclusion
Hopefully, this article demonstrates the value of OpenShift Serverless as a developer-friendly platform for cloud-native applications. The deployment approach here is fairly manual and primarily serves an educational purpose. For production-ready applications, I encourage users of OpenShift Serverless to adopt a more automated, continuous approach using tools like Tekton or Jenkins X.