How to use auto-updates and rollbacks in Podman

New auto-update capabilities enable you to use Podman in edge use cases, update workloads once they are connected to the network, and roll back failures to a known-good state.

Automatically keeping containers up to date has become one of edge computing's critical principles. Podman enables users to automate container updates using a feature called auto-updates. At a high level, you can configure Podman to check whether newer images are available, pull them down if needed, and restart the associated containers.

Running containers is essential for implementing edge computing in remote data centers or on Internet of Things (IoT) devices. Auto-updates enable you to use Podman in edge use cases, update workloads once they are connected to the network, and help reduce maintenance costs.

This article elaborates on improvements to auto-updates in Podman and enhancements to Podman's systemd support.

Configure Podman auto-updates

You can configure containers for auto-updates by setting the io.containers.autoupdate label when you create each container. At the time of writing, two auto-update policies determine how Podman fetches updates:

  • registry: When using the registry policy, Podman reaches out to the container registry to check whether a new image is available. For instance, Podman can compare registry.access.redhat.com/ubi8:8.4 on the registry with the image in the local storage. If they differ, the registry image is considered newer and is pulled down.
  • local: The local policy is slightly different. Podman will not reach out to the registry but will only compare local images. For instance, if a local image has been rebuilt, containers using the previous image can easily be auto-updated.

You can set the label with: 

podman create --label io.containers.autoupdate={registry,local}
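
To confirm which policy a given container carries, you can read the label back with podman inspect. The container name and the printed value below are illustrative:

$ podman inspect --format '{{index .Config.Labels "io.containers.autoupdate"}}' mycontainer
registry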

For the auto-updates functionality to work, the containers must run inside systemd services. Once Podman pulls an updated image, the auto-update boils down to restarting the associated systemd services. Advantages of running the containers inside systemd services include integrated dependency management across services and systemd's excellent lifecycle management.
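
For instance, systemd's dependency management can make one container's service wait for another. A minimal sketch, assuming two generated units named db.service and web.service, is a drop-in like the following:

$ mkdir -p ~/.config/systemd/user/web.service.d
$ cat ~/.config/systemd/user/web.service.d/deps.conf
[Unit]
Wants=db.service
After=db.service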

Note that Podman allows you to generate custom systemd units for containers and pods via podman generate systemd. This article explains how to generate these units, but if you are interested in more detailed background on running Podman inside systemd, please refer to our previous articles "Running containers with Podman and shareable systemd services" and "Improved systemd integration with Podman 2.0."

Explore Podman auto-update UI improvements

Podman 3.3 introduced improvements to Podman auto-update's output. In older versions, the podman auto-update output printed the restarted systemd services; however, it was almost impossible to correlate the output to running containers, images, or the update status. As people started using auto-updates in increasingly complex scenarios, they asked for output improvements for easier navigation. So, starting with version 3.3, Podman's output changed to a table showing the systemd unit, the container, the image, the auto-update policy, and the update status. The update status indicates whether a unit has been updated or if the update failed.

To see this in action, pull down a Fedora 33 image and tag it as Fedora 34. Tagging the older Fedora 33 image ensures that Podman detects a newer image is available on the registry:

$ podman pull fedora:33
$ podman tag fedora:33 registry.fedoraproject.org/fedora:34

Now create a container using the Fedora 34 image. Make sure to specify the fully qualified image name when using the registry auto-update policy. This ensures Podman knows precisely which image to look up and from which registry. For demo purposes, just run a simple container with sleep:

$ podman create --name enable-sysadmin --label io.containers.autoupdate=registry registry.fedoraproject.org/fedora:34 sleep infinity

Create the ~/.config/systemd/user/ directory if it does not already exist:

$ mkdir -p ~/.config/systemd/user/

Next, generate a systemd unit and load it into systemd to start it:

$ podman generate systemd --new enable-sysadmin > ~/.config/systemd/user/enable-sysadmin.service
$ systemctl --user daemon-reload
$ systemctl --user start enable-sysadmin.service

Begin working with podman-auto-update:

$ podman auto-update --dry-run
UNIT                     CONTAINER                       IMAGE                                 POLICY    UPDATED
enable-sysadmin.service  21153722570b (enable-sysadmin)  registry.fedoraproject.org/fedora:34  registry  pending

Notice the --dry-run flag? The dry-run feature lets you gather information about which services, containers, and images need updates before applying them. The --format option controls how that data is printed: --format json emits machine-readable JSON that integrates seamlessly into automation, while a Go template selects individual fields. Here is an example using a template:

$ podman auto-update --dry-run --format "{{.Unit}} {{.Updated}}"
enable-sysadmin.service pending
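
The JSON output mentioned above would look roughly like this; the run below is illustrative and trimmed, with field names matching the template keys:

$ podman auto-update --dry-run --format json
[
    {
        "Unit": "enable-sysadmin.service",
        "Image": "registry.fedoraproject.org/fedora:34",
        "Policy": "registry",
        "Updated": "pending"
    }
]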

Now, update the service. Podman figures out that the local fedora:34 image is outdated (remember it's actually fedora:33), pulls down the new image from the registry, and restarts the enable-sysadmin systemd service.

$ podman auto-update
Trying to pull registry.fedoraproject.org/fedora:34...
Getting image source signatures
Copying blob ecfb9899f4ce done   
Copying config 37e5619f4a done   
Writing manifest to image destination
Storing signatures
UNIT                     CONTAINER                       IMAGE                                 POLICY    UPDATED
enable-sysadmin.service  21153722570b (enable-sysadmin)  registry.fedoraproject.org/fedora:34  registry  true

This shows the output from fedora:34 being pulled from the registry, followed by the new table indicating the update (true in the UPDATED column).

Podman ships with an auto-update systemd timer and an auto-update service, which enable you to trigger auto-updates on a schedule or in response to events.
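
For example, to have systemd run auto-updates periodically instead of invoking them by hand, enable the shipped timer (shown here for a rootless user session):

$ systemctl --user enable --now podman-auto-update.timer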

Use sdnotify inside containers

systemd wants to know the main process ID (PID) for most service types. Knowing a service's main process allows systemd to manage and track the service's lifecycle properly. For instance, systemd marks the service stopped once the main process exits. The process's exit code further determines whether the service status is marked as failed.

With Podman, the main PID should point to conmon. The container monitor (conmon) monitors the container and manages parts of its lifecycle and resources. It also exits with the same exit code as the container, allowing smooth integration into systemd services. If the container exits with a failure, conmon does so as well, and the systemd service is marked as failed.

Prior to Podman v3.3, the unit files generated via podman generate systemd communicated the main PID through the file system by writing conmon's PID to a specific path, which was also set with the PIDFile option in the unit file. The service type was set to forking and the kill mode to none to let Podman stop and kill the container without interference from systemd. "Running containers with Podman and shareable systemd services" offers more information about the reasoning behind this design.
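
For illustration, the relevant portion of such a pre-v3.3 unit looked roughly like the following; the container name and paths are placeholders, and the exact contents varied by version:

[Service]
Type=forking
PIDFile=%t/container-mycontainer.pid
KillMode=none
ExecStart=/usr/bin/podman run --conmon-pidfile %t/container-mycontainer.pid -d --name mycontainer ...
ExecStop=/usr/bin/podman stop -t 10 mycontainer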

Starting with Podman v3.3, units generated via the --new flag use Type=notify with systemd's default kill mode (control-group). Using Type=notify lets Podman communicate the main PID directly to systemd via the sdnotify socket. By default, the service is marked as running as soon as conmon has started the container. However, this behavior changes if the container is created with --sdnotify=container: Podman still sets the main PID to conmon, but the container itself is responsible for marking the service as started by sending the ready message over the sdnotify socket. Imagine a container running a database that takes 30 seconds to initialize; the container can now send the ready message once the database is initialized and ready for work.
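
As a sketch of that database scenario, the entrypoint script inside the container could look like the following; the image name, script path, and the init/run commands are hypothetical placeholders:

$ podman create --name db --sdnotify=container registry.example.com/db:latest /start-db.sh
$ cat start-db.sh
#!/bin/sh
init-database              # placeholder: the slow startup work
systemd-notify --ready     # the container, not conmon, marks the service as started
exec run-database          # placeholder: the long-running main process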

So why is using sdnotify such a big deal? After all, it was supported to a certain degree before. Prior to Podman v3.3, the NOTIFY_SOCKET environment variable would be passed down to the container runtime (runc or crun), and the container could send the ready message. Unfortunately, the runtimes do not return until the container sends this specific ready message, which freezes Podman. We've been working with the community to resolve the Podman freezing issue by proxying the sdnotify socket in conmon and doing the necessary plumbing in Podman. This way, the container runtimes won't block, Podman won't freeze, and the user experience is at the desired level. Special thanks to Joseph Gooch for his core contributions to this effort.

Do simple rollbacks with Podman auto-updates

A welcome effect of all the work put into enabling --sdnotify=container is that it paved the way for implementing simple rollbacks in auto-updates. In short, podman auto-update detects whether a systemd service restarted successfully with the updated image. If the update fails, Podman reverts to the previous known-to-work image. Rollbacks are available starting with Podman v3.4.

While working on simple rollbacks, we found that detecting whether a service restarted successfully requires some special care. If Podman sent the ready message as soon as conmon started the container, systemd would see the service as restarted successfully, even if the container exited immediately. It would see a failed restart only if conmon exited immediately, which happens only in exceptional cases. The aforementioned work on letting the container send the ready message via --sdnotify=container solves this issue: systemd considers the service started only once the container has sent the ready message. If the container doesn't send the message within a given timeout, or if the main PID (conmon) exits before then, the service did not start successfully. In those cases, Podman detects the failure and rolls back.

Here is an example to work through. It builds a container image and starts a systemd service based on it, then rebuilds the image with a little bug. Once updated, the service is supposed to fail, and Podman should roll back to the previous known-to-work image and get the service back up and running. Here is the example:

$ podman build -f Containerfile.good -t localhost/enable:sysadmin
$ podman create --name rollback --label io.containers.autoupdate=local localhost/enable:sysadmin /runme
$ podman generate systemd --new rollback > ~/.config/systemd/user/rollback.service
$ systemctl --user daemon-reload
$ systemctl --user start rollback.service
$ podman auto-update --dry-run
UNIT              CONTAINER                IMAGE                      POLICY  UPDATED
rollback.service  abb7df72f094 (rollback)  localhost/enable:sysadmin  local   false

That worked well. The service is up and running. Here is the Containerfile it was built from:

$ cat Containerfile.good
FROM registry.access.redhat.com/ubi8/ubi-init
RUN echo -e "#!/bin/sh\n\
systemd-notify --ready;\n\
trap 'echo Received SIGTERM, finishing; exit' SIGTERM; echo WAITING; while :; do sleep 0.1; done" \
>> /runme
RUN chmod +x /runme

This Containerfile deserves some explanation. First, the example uses ubi-init as the base image; it's a flavor of the Universal Base Image (UBI) with systemd preinstalled. Then it creates a small script in the container (/runme) that sends a ready message via sdnotify and continues running until it receives a signal to terminate.

Next, update the test image. Rebuild the local image with --squash to force a change, and then let Podman update the service:

$ podman build -f Containerfile.good -t localhost/enable:sysadmin --squash
$ podman auto-update  
UNIT              CONTAINER                IMAGE                      POLICY  UPDATED
rollback.service  abb7df72f094 (rollback)  localhost/enable:sysadmin  local   true
$ systemctl --user is-active rollback.service  
active
$ podman ps --format "{{.Names}} {{.ID}}"
rollback 9dde07bf7fa3

Excellent. The update succeeded (true in the UPDATED column), the container ID changed, and rollback.service remains active. Now, rebuild the image to force a rollback:

$ podman build -f Containerfile.bad -t localhost/enable:sysadmin
$ cat Containerfile.bad
FROM registry.access.redhat.com/ubi8/ubi-init
RUN echo -e "#!/bin/sh\n\
exit 1" >> /runme
RUN chmod +x /runme

The bad Containerfile looks similar to the good one, but instead of sending the ready message, the /runme script exits immediately with a failure. During the next auto-update, Podman should detect that rollback.service could not restart and roll back to the previous image:

$ podman auto-update  
UNIT              CONTAINER                IMAGE                      POLICY  UPDATED
rollback.service  9dde07bf7fa3 (rollback)  localhost/enable:sysadmin  local   rolled back
$ systemctl --user is-active rollback.service  
active
$ podman ps --format "{{.Names}} {{.ID}}"
rollback 09160a33dcdd

Indeed, the container ID changed, which indicates that rollback.service restarted. The UPDATED column in the auto-update output now shows rolled back, meaning the update to the new, bad image failed and Podman rolled back to the previous image. If the rollback itself fails, the UPDATED column shows failure, and Podman prints the associated error message.

Looking ahead

Without any additional tools, you can run auto-updates on Podman containers manually or on a schedule (since Podman 2.0). With the --sdnotify=container implementation, Podman can also support simple rollbacks.

With capabilities such as automatic updates and intelligent rollbacks, Podman provides what users need to expand edge computing capabilities, regardless of their industry, location, or connectivity. As we look to expand Podman's use cases further, our team is working on features such as cloning containers, which will allow updating containers outside of systemd units.
