A few weeks ago, we shared our thoughts about running containers in cars. Today, let's discuss our vision for the experience of developing containerized in-vehicle applications.
In our previous blog post, we discussed using Podman and systemd to run containers in cars. You might be asking yourself:
- Does this mean that developers won’t be able to leverage their experience working with Kubernetes or Red Hat OpenShift?
- Will this prevent the automotive industry from attracting new talent because developers need to learn yet another new process for container management and deployment?
- Do we need to create new CI/CD (continuous integration/continuous delivery) pipelines rather than relying on existing infrastructure and tools, and won’t this slow down the development process?
In a word—no. Let us explain further.
If you work with containers, you are most likely familiar with Kubernetes objects, often expressed in .yaml format (Kubernetes YAML). Using Kubernetes YAML to describe how to deploy containerized applications is becoming a de facto standard. We want to leverage this standard to enable a model in which local application development, virtual testing, hardware-in-the-loop testing, and final deployment all share common ground: Kubernetes YAML describing how an application should be deployed.
Podman can run containers using Kubernetes YAML files as input. Our recent blog post, How to "build once, run anywhere" at the edge with containers, describes how to deploy containers with either Podman or Kubernetes (and thus OpenShift) from the same Kubernetes YAML. Following the demo in that blog post, you will retrieve a set of Kubernetes YAML files—describing how to deploy an application—that you can use to deploy that application on a Kubernetes or OpenShift cluster with `kubectl apply` (or `oc apply`). You will then use the same files to deploy and run that application locally with `podman kube play`.
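Concretely, such a shared manifest might look like the following minimal sketch. The pod name, image reference, and port here are illustrative placeholders, not values from an actual project:

```yaml
# app.yaml -- a minimal, illustrative Pod definition; the name, image
# reference, and port are placeholders rather than a real project's values
apiVersion: v1
kind: Pod
metadata:
  name: ivi-demo
  labels:
    app: ivi-demo
spec:
  containers:
    - name: ivi-demo
      image: quay.io/example/ivi-demo:latest
      ports:
        - containerPort: 8080
```

The same file then works unchanged in both targets: `kubectl apply -f app.yaml` (or `oc apply -f app.yaml`) on a cluster, and `podman kube play app.yaml` on a developer's machine.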
Using this mechanism, we foresee an architecture in which developers build and test their applications locally using Podman. Once satisfied, they push their source files, Containerfile, and Kubernetes YAML to one or more repositories, which are then used in a continuous integration pipeline to compile the code into a container image. That container image can then be deployed according to the instructions in the Kubernetes YAML and tested as desired. If the tests pass, the application moves on to the next testing phase, whether another virtual testing phase or testing on actual hardware. These files (sources, Containerfile, and Kubernetes YAML) are sufficient to run—and thus test—the application in all of these contexts.
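The local portion of that workflow might look roughly like the following sketch, assuming Podman is installed; the image reference and file names are illustrative placeholders:

```shell
# Illustrative inner loop for a containerized in-vehicle application.
# The registry, image tag, and file names below are placeholders; adjust
# them to your own repository layout.

# Build the container image locally from the project's Containerfile
podman build -t quay.io/example/ivi-demo:latest .

# Run the application locally exactly as described by the shared
# Kubernetes YAML -- the same file later used with kubectl/oc apply
podman kube play app.yaml

# ...exercise and test the application locally...

# Tear the local deployment back down
podman kube down app.yaml

# Push the image so the CI pipeline and later test phases can pull it
podman push quay.io/example/ivi-demo:latest
```

From there, a CI pipeline can rebuild the image from the same Containerfile and apply the same YAML against a virtual or hardware test target.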
In summary, we wanted to point you to our previous blog post to highlight what it means for the automotive industry, which is to say that Kubernetes knowledge can be transferred from the IT industry to the automotive sector. This will help reduce the amount of net-new knowledge engineers need to learn when joining the automotive industry and accelerate the development process of containerized applications by leveraging the existing container ecosystem.
About the authors
Pierre-Yves Chibon (aka pingou) is a Principal Software Engineer who spent nearly 15 years in the Fedora community and is now looking at the challenges the automotive industry offers to the FOSS ecosystems.
Ygal Blum is a Principal Software Engineer who is also an experienced manager and tech lead. He writes code from C and Java to Python and Golang, targeting platforms from microcontrollers to multicore servers, and servicing verticals from testing equipment through mobile and automotive to cloud infrastructure.
Alexander has worked at Red Hat since 2001, doing development work on desktops and containers, including the creation of Flatpak and a great deal of foundational work on GNOME.