Recently, I hosted a Red Hat webinar titled “Kubernetes is the Platform. What’s next?” during which I highlighted the current Kubernetes architecture and capabilities, some of the new innovation happening within the broader open source community, and how much of this innovation is making its way into Red Hat OpenShift Container Platform.

There were great questions from the audience afterward, but due to time constraints I wasn't able to get to every one. I've tackled the remaining questions below and provided additional links for further details or learning.

NOTE: Many questions were similar or overlapping, so they have been consolidated.

Q1: Please help me understand Containers-as-a-Service (CaaS). With Platform-as-a-Service (PaaS), I understand that vendors provide the 'platform' that users can run their apps on top of. However, my understanding is that with CaaS, providers don't literally deliver 'containers' as a service; I think providers/vendors still provide a 'platform' to run containers (instead of apps) on top of. Do you agree? Or am I missing something?

A1: In the original NIST definition of Cloud Computing (c.2011), they identified Infrastructure-as-a-Service (IaaS), PaaS and Software-as-a-Service (SaaS). At the time, IaaS implied that the unit of application packaging and isolation was a virtual machine (VM), since that was the most commonly used technology. Since then, Linux containers have grown in use and maturity. So we could say that a platform (e.g., Red Hat OpenShift Container Platform) which provides a management framework for containers (using Kubernetes) is an IaaS. But that could confuse the marketplace, so the term CaaS is now more frequently used to specify that the expected unit of packaging and isolation is the container. In addition, Red Hat OpenShift Container Platform provides a number of additional capabilities that improve developer productivity, or PaaS capabilities, so it can be considered both a CaaS and a PaaS, depending on how the platform is utilized by developers and operations teams.

Q2: What drives C-level stakeholders to buy?

A2: In the context of Kubernetes platforms, Red Hat customers across a variety of industries have shown that containers and Kubernetes can deliver positive business results. Stories about organizations from many industries and geographies that have improved their business using Red Hat OpenShift Container Platform can be found on our customer success page, or by watching recorded sessions from recent OpenShift Commons Gathering events.


Q3: Can I use our own container registry?

A3: Yes. Kubernetes does not include a container registry as part of the open source project, so external registries can be used. Red Hat OpenShift provides an integrated container registry, as well as the Red Hat Quay registry (Enterprise and SaaS offerings).
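
As a rough illustration, here is a minimal sketch (using the official Kubernetes Python client) of running an image from an external registry; the registry host, image name, and pull-secret name are hypothetical, and the pull secret is assumed to already exist in the namespace:

```python
# Minimal sketch, using the official Kubernetes Python client, of running an
# image hosted in an external (non-integrated) registry. The registry host,
# image name, and pull-secret name are hypothetical; the pull secret is assumed
# to already exist in the namespace (e.g., a kubernetes.io/dockerconfigjson
# secret holding the registry credentials).
from kubernetes import client, config

config.load_kube_config()  # authenticate using the local kubeconfig

pod = client.V1Pod(
    api_version="v1",
    kind="Pod",
    metadata=client.V1ObjectMeta(name="app-from-external-registry"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="app",
                image="registry.example.com/myteam/app:1.0",  # external registry image
            )
        ],
        # Reference credentials for the external registry by secret name.
        image_pull_secrets=[client.V1LocalObjectReference(name="my-registry-creds")],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```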


Q4: If a data center has x86 servers and ARM servers, can Kubernetes / Red Hat OpenShift Container Platform manage workloads across both infrastructures?

A4: Yes, Kubernetes / Red Hat OpenShift Container Platform can support both x86 and ARM servers. There may be dependencies on the version of the operating system and chipset you're running, so check the documentation to make sure you have the proper versions for compatibility.
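
For a rough idea of how a workload can be steered to a particular CPU architecture, here is a minimal sketch using the Kubernetes Python client and the architecture node label. The image name is hypothetical, and on older clusters the label may be "beta.kubernetes.io/arch" rather than "kubernetes.io/arch":

```python
# Minimal sketch, using the Kubernetes Python client, of pinning a Deployment
# to ARM nodes via the architecture node label. The image name is hypothetical
# and would need to be built for ARM (or be a multi-arch image); on older
# clusters the label may be "beta.kubernetes.io/arch" instead of
# "kubernetes.io/arch".
from kubernetes import client, config

config.load_kube_config()

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web-arm"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "web-arm"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web-arm"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="registry.example.com/myorg/web:arm64",  # hypothetical ARM image
                    )
                ],
                # Schedule these pods only onto ARM nodes.
                node_selector={"kubernetes.io/arch": "arm64"},
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```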


Q5: Our company is concerned with container security. Does Red Hat OpenShift Container Platform bridge the gap between registry governance and Kubernetes?

A5: This should probably be broken down into two parts:
[1] Security of the container content that gets into the registry and resides within it,
[2] Security of the platform where the containers run

Regarding [1], most commercial container registry offerings have embedded image scanning (for vulnerabilities) and/or image signing. These capabilities are available with the Red Hat OpenShift container registry, Red Hat Quay, and offerings from several OpenShift Commons ecosystem partners. In addition, the Red Hat Container Catalog (RHCC) provides a source for certified, up-to-date, and more secure container images.

We discussed some of these topics on recent episodes of the PodCTL podcast (Ep. 14 and Ep. 32).

Regarding [2], Red Hat believes in defense-in-depth: proper security for containerized applications should come from several layers. See this whitepaper for more details.


Q6: Can you use Kubernetes to orchestrate non-containerized applications?

A6: Currently, Kubernetes only provides (supported) orchestration for containerized applications. But there is an emerging open source project, called “KubeVirt”, which is building a virtualization API for Kubernetes in order to manage virtual machines. Red Hat has plans to offer this as “Container-Native Virtualization” (CNV). This was previewed at Red Hat Summit in May 2018 during the day 1 keynote.


Q7: What higher-level frameworks in the 'developer tooling' space did you allude to?

A7: While “developer frameworks” are outside the scope of the Kubernetes project, a number of projects have emerged that aim to make it easier for developers to build cloud-native applications that interact with elements of Kubernetes, and to abstract away some of the complexity of working with Kubernetes YAML-based manifest files.

We discussed some of the emerging ways that developers get applications into Kubernetes on PodCTL #37, but here are some other emerging projects (note: most of these are in very early stages of development and may not be recommended for production uses):

    • OpenShift.io
    • OpenShift ODO
    • Draft
    • Brigade
    • Metaparticle
    • Pulumi
    • Ballerina


Q8: What is the relationship between microservices and serverless? How does Kubernetes play into or impact these two concepts?

A8: Microservices is the concept of building applications in (relatively) smaller elements, typically confined to a specific business task, so that individual components can be updated independently of the broader system. It is a contrast to previously built “monolithic” applications, where all/most functionality was linked more closely together, making it more difficult to update or add new functionality. Microservices are often used in conjunction with new, cloud-native application models.

Serverless is the concept of application platforms where application developers do not need to have any awareness of the underlying infrastructure resources or the scaling of those resources. Applications in a serverless environment are defined as “functions”, or small chunks of code which perform a specific task or function. Because of this, the terms serverless and Function-as-a-Service (FaaS) are often intertwined or used interchangeably.
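
For illustration only, here is a minimal sketch of what a FaaS-style “function” typically looks like: a small, stateless handler that receives an event and returns a result. The handler signature is hypothetical, since each serverless framework defines its own entry-point convention:

```python
# Illustrative sketch only: a FaaS-style "function" is a small, stateless
# handler that receives an event and returns a result. The handler signature
# below is hypothetical; each serverless framework (OpenWhisk, Kubeless, etc.)
# defines its own entry-point convention.
def handler(event, context=None):
    name = (event or {}).get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}


if __name__ == "__main__":
    # Invoke locally for testing, outside any serverless platform.
    print(handler({"name": "Kubernetes"}))
```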

Kubernetes has supported patterns and frameworks used for microservice applications since v1.0.

Recently, a number of open source serverless projects have been created which run on Kubernetes (e.g., Fission, Fn, Kubeless, Nuclio, OpenFaaS, OpenWhisk, Riff). We discussed aspects of serverless on Kubernetes here and here. We highlighted OpenWhisk on OpenShift at Red Hat Summit in May 2018, and announced an early developer preview of a new serverless offering based on OpenWhisk called Red Hat OpenShift Cloud Functions.


Q9: Is your concept of 'Service Brokers' similar to Kubernetes 'ExternalName' Services? And if so how do Service Brokers go beyond that type of Service?

A9: ExternalName enables Kubernetes to return the name of a resource that is external to the Kubernetes cluster. Service Brokers are based on the Open Service Broker API standard. A Service Broker is often tied to a Service Catalog entity, which can create and manage an external service or resource. More details on how the Service Catalog interacts with a Service Broker are provided here, in a discussion with one of the SIG engineering leads.
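
As a small illustration of what ExternalName does on its own, the sketch below (Kubernetes Python client, hypothetical hostname) creates a Service that simply resolves to an external DNS name from inside the cluster. Unlike a Service Broker, nothing here provisions or manages the external resource:

```python
# Minimal sketch, using the Kubernetes Python client, of an ExternalName
# Service. It only creates a DNS alias inside the cluster that resolves to an
# external hostname (a CNAME); unlike a Service Broker, nothing here provisions
# or manages the external resource. The hostname is hypothetical.
from kubernetes import client, config

config.load_kube_config()

svc = client.V1Service(
    api_version="v1",
    kind="Service",
    metadata=client.V1ObjectMeta(name="legacy-db"),
    spec=client.V1ServiceSpec(
        type="ExternalName",
        external_name="db.example.com",  # legacy-db.<namespace>.svc resolves to this name
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="default", body=svc)
```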


Q10: Any recommendations for CI pipelines integrations?

A10: A number of CI pipelines provide native integration with Kubernetes, and OpenShift provides a number of strategies and deployment models for integrating with CI pipelines.


Q11: How does CoreOS complement OpenShift? Is there any redundancy in the stack?

A11: Many elements of the CoreOS technologies, acquired in January 2018, are planned to be integrated into Red Hat platforms (Red Hat OpenShift, Red Hat CoreOS, Red Hat Quay). In addition, emerging technologies such as the Operator Framework are planned to become core elements of the Red Hat OpenShift Container Platform.

More details about the integrations are provided in this blog.

Several sessions from Red Hat Summit (OpenShift Roadmap, Red Hat CoreOS Roadmap, Future of Kubernetes Platform) in May 2018 provide more details about the integrations.


Q12: Can Kubernetes exist without Docker, and where do you see this evolving?

A12: In the early versions of Kubernetes, the only supported container runtime was Docker. Since then, other container runtimes have emerged, along with the standardization efforts within the Open Container Initiative (OCI). This led the Kubernetes project to create the Container Runtime Interface (CRI), which provides a common interface and abstraction for multiple container runtimes, such as CRI-O and containerd. In the future, using tools like Buildah, Podman, Skopeo and others, I anticipate it will be possible to run Kubernetes without Docker.


Q13: Would JBoss integrations run as CRDs?

A13: This question was asked in the context of stateful applications and middleware services being deployed using the Operator Framework, which interacts with Kubernetes CRDs. The plan for OpenShift is to eventually have all middleware services and applications deployed using the Operator Framework, or to interact with it for day-2 operations.
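
To illustrate the CRD interaction pattern in general terms, here is a minimal sketch (Kubernetes Python client) that creates an instance of a hypothetical custom resource which an Operator would watch and reconcile. The group, kind, and fields are invented for illustration and do not correspond to an actual Red Hat middleware Operator:

```python
# Minimal sketch, using the Kubernetes Python client, of declaring an
# application as an instance of a custom resource that an Operator would watch
# and reconcile. The group, kind, and fields ("middleware.example.com",
# "JBossServer", "replicas") are invented for illustration and do not
# correspond to an actual Red Hat middleware Operator.
from kubernetes import client, config

config.load_kube_config()

custom_resource = {
    "apiVersion": "middleware.example.com/v1alpha1",
    "kind": "JBossServer",
    "metadata": {"name": "example-eap"},
    "spec": {"replicas": 2},  # the Operator reconciles the cluster toward this desired state
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="middleware.example.com",
    version="v1alpha1",
    namespace="default",
    plural="jbossservers",
    body=custom_resource,
)
```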


Q14: How do you do automated testing in Kubernetes?

A14: In most cases, automated testing is done in conjunction with a Continuous Integration (CI) platform and the associated plugins for testing tools (e.g. Selenium, Cucumber, SonarQube, etc.). As mentioned in Q10 (above), there are several ways to integrate CI/CD platforms with Kubernetes/OpenShift.

The webinar is available on-demand.

