Recently, I hosted a Red Hat webinar titled “Kubernetes is the Platform. What’s next?” during which I highlighted the current Kubernetes architecture and capabilities, some of the new innovation happening within the broader open source community, and how much of this innovation is making its way into Red Hat OpenShift Container Platform.
There were great questions from the audience afterward but due to time constraints, I wasn’t able to get to every one. I’ve tackled the remaining questions below and provided some additional links for details or learning.
NOTE: Many questions were similar or overlapping, so many have been consolidated.
Q1: Please help me understand Containers-as-a-Service (CaaS). With Platform-as-a-Service (PaaS), I understand that vendors provide the 'platform' that users can run their apps on top of. With CaaS, however, it isn't that providers deliver 'containers' as the service; I think providers/vendors still provide a 'platform' to run containers (instead of apps) on top of. Do you agree? Or am I missing something?
A1: In the original NIST definition of Cloud Computing (c.2011), they identified Infrastructure-as-a-Service (IaaS), PaaS and Software-as-a-Service (SaaS). At the time, IaaS implied that the unit of application packaging and isolation was a virtual machine (VM), since that was the most commonly used technology. Since then, Linux containers have grown in use and maturity. So we could say that a platform (e.g., Red Hat OpenShift Container Platform) which provides a management framework for containers (using Kubernetes) is an IaaS. But, that could confuse the marketplace, so the term CaaS is now more frequently used to specify that the expected packaging and isolation is using containers. In addition, Red Hat OpenShift Container Platform provides a number of additional capabilities that can improve developer productivity, or PaaS capabilities, so it can be considered both a CaaS and a PaaS, depending on how the platform is utilized by both developers and operations teams.
Q2: What drives C-level stakeholders to buy?
A2: In the context of Kubernetes platforms, Red Hat customers across a variety of industries have shown that containers and Kubernetes can deliver positive results. Stories about organizations from many industries and geographies that have made a positive impact on their business using Red Hat OpenShift Container Platform can be found on our customer success page, or by watching recorded sessions from recent OpenShift Commons Gathering events.
Q3: Can I use our own container registry?
A3: Yes. Kubernetes does not include a container registry as part of the open source project, so external registries can be used. Red Hat OpenShift provides an integrated container registry, as well as the Red Hat Quay registry (Enterprise and SaaS offerings).
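For a concrete picture of what "bring your own registry" looks like, here is a minimal sketch using the Python kubernetes client: it creates an image pull secret for a hypothetical registry.example.com and references it from a Pod spec. The registry URL, credentials, namespace, and image name are all placeholders.

```python
# Minimal sketch: use an external/private registry by creating an image pull
# secret and referencing it from a Pod. All names and credentials are placeholders.
import base64, json
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster
core = client.CoreV1Api()

docker_cfg = {"auths": {"registry.example.com": {
    "username": "builder",
    "password": "REPLACE_ME",
    "auth": base64.b64encode(b"builder:REPLACE_ME").decode(),
}}}

secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name="my-registry-creds"),
    type="kubernetes.io/dockerconfigjson",
    string_data={".dockerconfigjson": json.dumps(docker_cfg)},
)
core.create_namespaced_secret(namespace="demo", body=secret)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="app-from-private-registry"),
    spec=client.V1PodSpec(
        containers=[client.V1Container(name="app", image="registry.example.com/team/app:1.0")],
        image_pull_secrets=[client.V1LocalObjectReference(name="my-registry-creds")],
    ),
)
core.create_namespaced_pod(namespace="demo", body=pod)
```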
Q4: If a data center has x86 servers and ARM servers can Kubernetes / Red Hat OpenShift Container Platform manage workloads across both infrastructures?
A4: Yes, Kubernetes / Red Hat OpenShift Container Platform can support both x86 and ARM servers. There may be dependencies on the operating system version and chipset that you're running, so check the documentation to make sure you have the proper versions for compatibility.
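As a small illustration, the sketch below (Python kubernetes client) uses the well-known kubernetes.io/arch node label to pin a workload to ARM nodes in a mixed-architecture cluster; the image and namespace are placeholders.

```python
# Minimal sketch: schedule a Pod onto nodes of a specific CPU architecture
# using the kubernetes.io/arch node label. Image and namespace are placeholders.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="arm-only-app"),
    spec=client.V1PodSpec(
        node_selector={"kubernetes.io/arch": "arm64"},  # or "amd64" for x86 nodes
        containers=[client.V1Container(name="app", image="registry.example.com/team/app:1.0-arm64")],
    ),
)
core.create_namespaced_pod(namespace="demo", body=pod)
```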
Q5: Our company is concerned with container security. Does Red Hat OpenShift Container Platform bridge the gap between registry governance and Kubernetes?
A5: This should probably be broken down into two parts:
[1] Security of container content that gets into the registry and resides within the registry,
[2] Security of the platform where the containers run
Regarding [1], most commercial container registry offerings have embedded image scanning (for vulnerabilities), image signing, or both. These capabilities are available with the Red Hat OpenShift container registry, Red Hat Quay, and several OpenShift Commons ecosystem partners. In addition, the Red Hat Container Catalog (RHCC) provides a source for certified, up-to-date, and more secure container images.
We discussed some of these topics on recent episodes of the PodCTL podcast (Ep. 14, Ep. 32).
Regarding [2], Red Hat believes in defense-in-depth: proper security for containerized applications should come from several layers of protection. See this whitepaper for more details.
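To make one of those platform-side layers concrete, here is a minimal sketch (Python kubernetes client, placeholder image and namespace) of a restrictive securityContext applied at the workload level; registry scanning, OpenShift Security Context Constraints, SELinux, and network policy would layer on top of this.

```python
# Minimal sketch: one defense-in-depth layer -- a restrictive securityContext
# on the workload itself. Image and namespace are placeholders.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

container = client.V1Container(
    name="app",
    image="registry.example.com/team/app:1.0",
    security_context=client.V1SecurityContext(
        run_as_non_root=True,              # refuse to start if the image runs as root
        allow_privilege_escalation=False,  # block setuid-style escalation
        read_only_root_filesystem=True,    # immutable container filesystem
        capabilities=client.V1Capabilities(drop=["ALL"]),
    ),
)
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="hardened-app"),
    spec=client.V1PodSpec(containers=[container]),
)
core.create_namespaced_pod(namespace="demo", body=pod)
```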
Q6: Can you use Kubernetes to orchestrate non-containerized applications?
A6: Currently, Kubernetes only provides (supported) orchestration for containerized applications. But there is an emerging open source project, called “KubeVirt”, which is building a virtualization API for Kubernetes in order to manage virtual machines. Red Hat plans to offer this as “Container-Native Virtualization” (CNV). This was previewed at Red Hat Summit in May 2018 during the day 1 keynote.
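To show how a project like KubeVirt fits the Kubernetes API model, here is a minimal sketch that lists virtual machines as custom resources using the generic CustomObjectsApi from the Python kubernetes client. It assumes KubeVirt is installed and exposes its kubevirt.io API group; the exact group/version varies by KubeVirt release.

```python
# Minimal sketch: KubeVirt VMs are custom resources, so the generic
# CustomObjectsApi can list them. Group/version depend on the KubeVirt release.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

vms = custom.list_namespaced_custom_object(
    group="kubevirt.io", version="v1", namespace="default", plural="virtualmachines"
)
for vm in vms.get("items", []):
    # printableStatus is only present in some KubeVirt versions, hence the .get()
    print(vm["metadata"]["name"], vm.get("status", {}).get("printableStatus"))
```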
Q7: What higher-level frameworks in the 'developer tooling' space did you allude to?
A7: While “developer frameworks” are outside the scope of the Kubernetes project, a number of projects have emerged to look at ways to make it easier for developers to build cloud-native applications that interact with elements of Kubernetes, as well as abstract away some of the complexity that might be felt in working with Kubernetes YAML-based manifest files.
We discussed some of the emerging ways that developers get applications into Kubernetes on PodCTL #37, but here are some other emerging projects (note: most of these are in very early stages of development and may not be recommended for production uses):
• OpenShift.io
• OpenShift ODO
• Draft
• Brigade
• Metaparticle
• Pulumi
• Ballerina
Q8: What is the relationship between microservices and serverless? How does Kubernetes play into, or impact, these two concepts?
A8: Microservices is the concept of building applications in (relatively) smaller elements, typically confined to a specific business task, so that individual components can be updated independently of the broader system. It is a contrast to previously built “monolithic” applications, where all/most functionality was linked more closely together, making it more difficult to update or add new functionality. Microservices are often used in conjunction with new, cloud-native application models.
Serverless is the concept of application platforms where application developers do not need to have any awareness of the underlying infrastructure resources or the scaling of those resources. Applications in a serverless environment are defined as “functions”, or small chunks of code which perform a specific task or function. Because of this, the terms serverless and Function-as-a-Service (FaaS) are often intertwined or used interchangeably.
Kubernetes has supported patterns and frameworks used for microservice applications since v1.0.
Recently, a number of open source serverless projects have been created which run on Kubernetes (e.g., Fission, Fn, Kubeless, Nuclio, OpenFaaS, OpenWhisk, Riff). We discussed aspects of serverless on Kubernetes here and here. We highlighted OpenWhisk on OpenShift at Red Hat Summit in May 2018, and announced an early developer preview of a new serverless offering based on OpenWhisk called Red Hat OpenShift Cloud Functions.
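To make the FaaS model concrete, here is a minimal sketch of an Apache OpenWhisk-style Python action: the platform invokes main() per request and handles scaling the underlying containers, so no servers or Kubernetes objects appear in the code.

```python
# Minimal sketch of a function-as-a-service handler in the OpenWhisk style:
# the platform calls main() with the request parameters and expects a dict back.
def main(params):
    name = params.get("name", "world")
    return {"greeting": f"Hello, {name}!"}
```

With the OpenWhisk CLI, an action like this would typically be deployed with `wsk action create` and invoked with `wsk action invoke`.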
Q9: Is your concept of 'Service Brokers' similar to Kubernetes 'ExternalName' Services? And if so how do Service Brokers go beyond that type of Service?
A9: ExternalName enables Kubernetes to return the name of a resource that is external to the Kubernetes cluster. A Service Broker is based on the Open Service Broker API standard and is often tied to a Service Catalog entity, which can create and manage an external service or resource. More details on how the Service Catalog interacts with a Service Broker are provided here, in a discussion with one of the SIG engineering leads.
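For contrast, here is a minimal sketch (Python kubernetes client, placeholder hostname and namespace) of an ExternalName Service, which is purely a DNS alias to something that already exists outside the cluster; a Service Broker, on the other hand, can provision and bind that external resource for you.

```python
# Minimal sketch: an ExternalName Service is just a cluster-internal DNS alias
# to an external hostname. Hostname and namespace are placeholders.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="orders-db"),
    spec=client.V1ServiceSpec(
        type="ExternalName",
        external_name="db.example.com",  # in-cluster lookups of orders-db resolve to this CNAME
    ),
)
core.create_namespaced_service(namespace="demo", body=svc)
```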
Q10: Any recommendations for CI pipelines integrations?
A10: A number of CI platforms provide native integration with Kubernetes, and OpenShift offers several integration strategies and deployment models for working with CI pipelines.
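As one common pattern, the sketch below shows a CI step that updates the image on an existing Deployment through the Kubernetes API once tests pass, letting the normal rolling update do the rest; the deployment name, namespace, and image tag are placeholders.

```python
# Minimal sketch: a CI job patches the Deployment's image after tests pass,
# triggering a standard rolling update. Names and tag are placeholders.
from kubernetes import client, config

config.load_kube_config()  # in a CI runner this would usually be a service-account kubeconfig
apps = client.AppsV1Api()

patch = {"spec": {"template": {"spec": {"containers": [
    {"name": "app", "image": "registry.example.com/team/app:build-1234"}
]}}}}
apps.patch_namespaced_deployment(name="app", namespace="demo", body=patch)
```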
Q11: How does CoreOS complement OpenShift? Is there any redundancy in the stack?
A11: Many elements of the CoreOS technologies, acquired in January 2018, are planned to be integrated into Red Hat platforms (Red Hat OpenShift, Red Hat CoreOS, Red Hat Quay). In addition, emerging technologies such as the Operator Framework are planned to become core elements of the Red Hat OpenShift Container Platform.
More details about the integrations are provided in this blog, as well as in several sessions from Red Hat Summit in May 2018 (OpenShift Roadmap, Red Hat CoreOS Roadmap, Future of Kubernetes Platform).
Q12: Can Kubernetes exist without Docker, and where do you see this evolving?
A12: In the early versions of Kubernetes, the only supported container runtime was Docker. Since then, other container runtimes have emerged, along with the standardization efforts of the Open Container Initiative (OCI). This led the Kubernetes project to create the Container Runtime Interface (CRI), which provides a common interface and abstraction for multiple container runtimes, such as CRI-O and containerd. In the future, using tools like Buildah, Podman, Skopeo and others, I anticipate it will be possible to run Kubernetes without Docker.
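If you want to see what your cluster is actually running today, this small sketch (Python kubernetes client) prints the container runtime each node reports; on a CRI-O or containerd cluster, no Docker engine is involved.

```python
# Minimal sketch: report the container runtime behind each node's kubelet.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

for node in core.list_node().items:
    info = node.status.node_info
    # e.g. "cri-o://1.11.0", "containerd://1.1.0", or "docker://18.03.1-ce"
    print(node.metadata.name, info.container_runtime_version)
```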
Q13: Would JBoss integrations run as CRDs?
A13: This question was asked in the context of stateful applications and middleware services being deployed using the Operator Framework, which interacts with Kubernetes CRDs. Currently, the plan for OpenShift is to eventually have all middleware services and applications be deployed using the Operator Framework or to interact with the Operator Framework for day-2 operations.
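As a rough illustration of that CRD interaction, the sketch below lists the CustomResourceDefinitions registered on a cluster and then queries instances of a hypothetical middleware CRD; the middleware.example.com group and messagingclusters plural are made-up placeholders for whatever an operator would actually define.

```python
# Minimal sketch: inspect operator-registered CRDs, then list instances of a
# hypothetical middleware custom resource. Group/version/plural are placeholders.
from kubernetes import client, config

config.load_kube_config()

# See which CRDs operators have registered on the cluster.
for crd in client.ApiextensionsV1Api().list_custom_resource_definition().items:
    print(crd.metadata.name)

# List instances of a hypothetical middleware CRD managed by an operator.
custom = client.CustomObjectsApi()
instances = custom.list_namespaced_custom_object(
    group="middleware.example.com", version="v1alpha1",
    namespace="demo", plural="messagingclusters",
)
print(len(instances.get("items", [])), "middleware instances found")
```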
Q14: How do you do automated testing in Kubernetes?
A14: In most cases, automated testing is done in conjunction with a Continuous Integration (CI) platform and the associated plugins for testing tools (e.g., Selenium, Cucumber, SonarQube, etc.). As mentioned in Q10 (above), there are several ways to integrate CI/CD platforms with Kubernetes/OpenShift.
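As a simple example of the Kubernetes side of such a pipeline, here is a hypothetical smoke-test stage (Python kubernetes client, placeholder names and URL) that waits for a Deployment to report ready replicas and then hits an application endpoint; the richer functional tests would still live in tools like Selenium or Cucumber.

```python
# Minimal sketch: CI smoke test -- wait for the Deployment to be ready, then
# probe an application endpoint. Names, namespace, and URL are placeholders.
import time
import urllib.request
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

for _ in range(30):  # poll for up to ~5 minutes
    dep = apps.read_namespaced_deployment(name="app", namespace="demo")
    if (dep.status.ready_replicas or 0) == dep.spec.replicas:
        break
    time.sleep(10)
else:
    raise SystemExit("deployment never became ready")

with urllib.request.urlopen("http://app-demo.apps.example.com/healthz") as resp:
    assert resp.status == 200
```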