
Migrating a virtual machine from one hypervisor to another is a task fraught with challenges. We often get so caught up in installing the platform and configuring the network and other infrastructure components that we only afterwards consider the challenges within the virtual machines themselves. In this article, I discuss some of the ways you can make the transition of network configuration on Red Hat Enterprise Linux virtual machines from VMware to Red Hat OpenShift Virtualization smoother and easier for workloads that may not be ready to take advantage of some of the modern ways that OpenShift handles networking.

OpenShift Virtualization and traditional network configuration

Back when data centers were full of physical servers, each server would have one or more Ethernet cables that connected to a switch, which then allowed access to the rest of the environment or the internet. VMware virtual machines took the same approach, but virtualized not only the physical server, but also the physical Ethernet switches, allowing virtual machine administrators to change network configurations with a few mouse clicks. However, the basic architecture remained the same.

Red Hat OpenShift handles networking in a different way, by using a dedicated pod network for microservices, with software-defined load balancers (called services) in front of the applications. These services are available within the cluster and can be exposed by a route with a DNS name to other applications and users outside the OpenShift cluster. One of the benefits of this method is that the specific network configuration of any particular microservice, such as its IP address, becomes irrelevant. The OpenShift cluster and its services manage those relationships.

However, traditional virtual machines do not necessarily fit into the OpenShift way of doing things, certainly not when the priority is to lift and shift the workloads from VMware to OpenShift Virtualization. For those cases, OpenShift Virtualization does provide a variety of ways for virtual machine workloads to access the external network. One of my previous blog posts outlines some of those methods and how to use them.

The first step is to configure your OpenShift Virtualization cluster so virtual machines can access the external network, and to have a migration plan in place (such as one built on the Migration Toolkit for Virtualization). Then you must consider what changes need to be made to the network configuration on the virtual machine itself to ease the migration from VMware to OpenShift Virtualization.

The network adapter configuration puzzle

Red Hat Enterprise Linux and the underlying Linux kernel use a variety of different rules to name a network interface card (NIC). For example, a RHEL 9 server running on VMware may have a network adapter using the VMXNET driver that udev calls ens192, as in this example:

# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 00:50:56:af:23:16 brd ff:ff:ff:ff:ff:ff

But after using a tool such as the Migration Toolkit for Virtualization to move to OpenShift Virtualization, the same server might use the VirtIO driver and boot with a network adapter name called enp1s0, as in this example:

# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 00:50:56:af:23:16 brd ff:ff:ff:ff:ff:ff

If you do nothing to prepare for the migration, then your migrated virtual machine has no access to the network because it's unable to find the network adapter it was initially using. This means the virtual machine administrator must log in to the console of every migrated virtual machine and manually change its network configuration. Even automation tools like Red Hat Ansible Automation Platform wouldn't resolve the issue, because the virtual machines aren't connected to the network.

But don't worry! There are solutions to this puzzle.

Option #1: Updating NetworkManager configuration to match the network interfaces

I'll start with the simplest scenario, which is to make a clone of the network configuration and then update it to match the name of the network interfaces that will actually be present after the migration.

Advantages:

  • Very simple to implement
  • Easy to automate
  • Works on Red Hat Enterprise Linux 7, 8, and 9
  • No downtime other than migration

Disadvantages:

  • You must know the target interface names in advance of migration
  • Only works with NetworkManager
  • Requires a small amount of cleanup after migration

The example below assumes the server being migrated is getting its IP address from DHCP. However, the steps are identical for a server with a static IP address.

Here's the network configuration of my example server prior to migration when the virtual machine is still running on VMware (this output is abbreviated for convenience):

# ip address
1: ...
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
   link/ether 00:50:56:af:23:16 brd ff:ff:ff:ff:ff:ff
   altname enp11s0
   inet 10.8.50.215/23 brd 10.8.51.255 scope global dynamic noprefixroute ens192
      valid_lft 42965sec preferred_lft 42965sec
   inet6 2620:52:0:832:250:56ff:feaf:2316/64 scope global dynamic noprefixroute 
      valid_lft 2591985sec preferred_lft 604785sec
   inet6 fe80::250:56ff:feaf:2316/64 scope link noprefixroute 
      valid_lft forever preferred_lft forever

The network interface called ens192 has an IP address of 10.8.50.215. After migration, the network interface will use VirtIO drivers and will be renamed to enp1s0 (on Red Hat Enterprise Linux 7, the new interface name will be eth0).

To prepare this virtual machine for migration, you can clone the network configuration to the new name and then update the interface. First, clone the NetworkManager configuration for ens192 to a new connection called enp1s0:

# nmcli connection clone ens192 enp1s0

At this point, you have only created a new configuration called enp1s0, but it is still using the existing ens192 interface. Modify the new connection to use the enp1s0 interface instead:

# nmcli con modify enp1s0 connection.interface-name enp1s0
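If you have several connections to prepare, the two clone-and-modify steps above can be scripted. Here's a minimal sketch: the OLD:NEW interface pairs are assumptions for illustration, and by default it only prints the nmcli commands it would run (set DRY_RUN=0 on a real host to execute them).

```shell
# Sketch: clone each NetworkManager connection to its expected post-migration
# name and bind it to the new interface. The OLD:NEW pairs are assumptions;
# adjust them for your environment. DRY_RUN=1 (the default) only prints the
# commands instead of running them.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "$@"
  else
    "$@"
  fi
}

for pair in ens192:enp1s0 ens224:enp2s0; do
  old=${pair%%:*}   # name before migration
  new=${pair##*:}   # expected name after migration
  run nmcli connection clone "$old" "$new"
  run nmcli connection modify "$new" connection.interface-name "$new"
done
```

Reviewing the printed commands before re-running with DRY_RUN=0 gives you a safety net when rolling this out across many virtual machines.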

Now you're ready for the migration! After the virtual machine is migrated to OpenShift Virtualization and it's configured to connect to the external network, the new network adapter configuration is the same, except for the name. The IP address and MAC address are the same as they were before the migration, so any applications would continue to function just as before.

# ip a
1: ...
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc fq_codel state UP group default qlen 1000
   link/ether 00:50:56:af:23:16 brd ff:ff:ff:ff:ff:ff
   inet 10.8.50.215/23 brd 10.8.51.255 scope global dynamic noprefixroute enp1s0
      valid_lft 43182sec preferred_lft 43182sec
   inet6 2620:52:0:832:250:56ff:feaf:2316/64 scope global dynamic noprefixroute 
      valid_lft 2591982sec preferred_lft 604782sec
   inet6 fe80::250:56ff:feaf:2316/64 scope link noprefixroute 
      valid_lft forever preferred_lft forever

The old network adapter configuration is still present, but it can be removed at any time.

# nmcli connection show
NAME    UUID                                  TYPE      DEVICE
enp1s0  360d1330-11cf-3a91-ae05-590058fc3e82  ethernet  enp1s0
lo      5e40f44b-5388-4dad-bfdb-502e97a74c26  loopback  lo    
ens192  1d8cc2e0-c4f8-3dc8-a372-97bc75cec1ed  ethernet  --
# nmcli con delete ens192
Connection 'ens192' (1d8cc2e0-c4f8-3dc8-a372-97bc75cec1ed) successfully deleted.
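If you're cleaning up many machines, you can find the leftover connections by filtering nmcli's terse output for connections no longer bound to a device. A small sketch follows; the sample input mimics the output above, and on a live host you would pipe in `nmcli -t -f NAME,DEVICE connection show` instead.

```shell
# Sketch: print NetworkManager connection names that have no active device,
# given `nmcli -t -f NAME,DEVICE connection show` output (NAME:DEVICE lines,
# colon-separated). The printf below stands in for live nmcli output.
stale_connections() {
  awk -F: '$2 == "" { print $1 }'
}

printf 'enp1s0:enp1s0\nlo:lo\nens192:\n' | stale_connections  # prints: ens192
```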

Using this option with teams or bonds

If your virtual machine is using a team or bond interface, then it's not necessary to clone any of the interfaces. Instead, just add the new interface names to the team or bond. When the virtual machine is migrated to OpenShift Virtualization, the new interface names are active and the old interface names can be removed.

In the example below, I have a team called team0 with the interfaces ens192 and ens224 as members. Use the nmcli tool to add the interfaces enp1s0 and enp2s0 to the team. The enp1s0 and enp2s0 interfaces don't yet exist in the virtual machine before the migration, but you can add them anyway; doing so doesn't affect the function of the team or bond.

For a team:

# nmcli con add type ethernet con-name enp1s0 ifname enp1s0 slave-type team master team0
# nmcli con add type ethernet con-name enp2s0 ifname enp2s0 slave-type team master team0

For a bond, the syntax is:

# nmcli con add type ethernet con-name enp1s0 ifname enp1s0 master bond0
# nmcli con add type ethernet con-name enp2s0 ifname enp2s0 master bond0
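With many virtual machines, it can help to generate these per-port commands from a list. A minimal sketch, where bond0 and the interface names are assumptions; it echoes each command for review rather than executing it, and for a team you would also include `slave-type team`:

```shell
# Sketch: build the nmcli command that pre-creates a bond port connection for
# a post-migration interface name. bond0 and the names below are assumptions.
add_port_cmd() {
  # $1: interface name, $2: bond (or team) connection name
  echo nmcli con add type ethernet con-name "$1" ifname "$1" master "$2"
}

for ifname in enp1s0 enp2s0; do
  add_port_cmd "$ifname" bond0
done
```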

Once the virtual machine is migrated, the old ens192 and ens224 interfaces can be deleted:

# nmcli con del ens192
# nmcli con del ens224

Because the team or bond itself was never changed in this example, the MAC address remains the same, so the IP address from DHCP also remains the same.

Option #2: Updating network interfaces to match the NetworkManager configuration

The previous example was easy, but it assumed that you knew in advance what the network adapter names would be before starting the migration. That's a safe assumption in many cases, but if you use a different network adapter driver or have other configurations in place (such as a large number of different network adapter types), it might be difficult to predict what the new network adapter names will be.

In this case, you can make some changes to how udev names network adapters prior to migrating the virtual machine. This does require some work, but it's repeatable work that can be handled by a good automation tool like Red Hat Ansible Automation Platform.

Advantages:

  • Network interface names will be known in advance
  • Can be automated
  • Works for Red Hat Enterprise Linux 8 and 9

Disadvantages:

  • Doesn't work for Red Hat Enterprise Linux 7 servers
  • Requires an outage in advance of the migration
  • Certain interface name prefixes are reserved and can't be used

The example below requires modifying the kernel boot parameters to set the prefix for the network adapter name, or interface name, to something of your choice. As outlined in this Red Hat Knowledgebase article, avoid using network adapter names that might conflict with names that udev already uses, such as eth, ens, em, or eno. For this example, I use the prefix net, which makes the first network adapter net0, the second net1, and so on.

To set a network interface prefix, add the following line to the kernel boot parameters:

net.ifnames.prefix=net

The easiest way to set this parameter is with the grubby tool:

# grubby --update-kernel=ALL --args="net.ifnames.prefix=net"

Before you reboot, follow the same steps as in Option #1, but this time you know exactly what the network adapter names will be. Furthermore, you can do all of this work in a planned server outage at any time before the migration begins because these steps ensure the network adapter names stay the same, whether the virtual machine is running on VMware or on OpenShift Virtualization.

# nmcli connection clone ens192 net0

At this point, you've only created a new configuration called net0, but it's still using the existing ens192 interface. Modify the new connection to use the net0 interface:

# nmcli con modify net0 connection.interface-name net0

After the virtual machine is rebooted, the network adapters use the new naming convention. Remember, in this scenario the virtual machine is still running on VMware!

# ip address
1: ...
2: net0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
   link/ether 00:50:56:af:03:61 brd ff:ff:ff:ff:ff:ff
   altname enp11s0
   altname ens192
   inet 10.8.50.131/23 brd 10.8.51.255 scope global noprefixroute net0
      valid_lft forever preferred_lft forever
   inet6 2620:52:0:832:250:56ff:feaf:361/64 scope global dynamic noprefixroute 
      valid_lft 2591984sec preferred_lft 604784sec
   inet6 fe80::250:56ff:feaf:361/64 scope link noprefixroute 
      valid_lft forever preferred_lft forever
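At this point you can also confirm that the kernel parameter itself is active by checking the running kernel's command line. A minimal sketch follows; the sample command line is made up for illustration, and on a real host you would pass in `$(cat /proc/cmdline)` instead.

```shell
# Sketch: report whether net.ifnames.prefix is set to the expected value on a
# kernel command line. The sample cmdline below is an assumption.
check_prefix() {
  # $1: kernel command line, $2: expected prefix value
  case " $1 " in
    *" net.ifnames.prefix=$2 "*) echo "prefix '$2' is active" ;;
    *) echo "prefix '$2' not found" ;;
  esac
}

# On a live host: check_prefix "$(cat /proc/cmdline)" net
check_prefix "BOOT_IMAGE=/vmlinuz-5.14.0 root=/dev/mapper/rhel-root ro net.ifnames.prefix=net" net
```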

After migrating the virtual machine to OpenShift Virtualization, the network adapter name doesn't change, so the static IP address or DHCP configuration is exactly the same as it was before the migration.

# ip address
1: ...
2: net0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
   link/ether 00:50:56:af:03:61 brd ff:ff:ff:ff:ff:ff
   inet 10.8.50.131/23 brd 10.8.51.255 scope global noprefixroute net0
      valid_lft forever preferred_lft forever
   inet6 2620:52:0:832:250:56ff:feaf:361/64 scope global dynamic noprefixroute 
      valid_lft 2591964sec preferred_lft 604764sec
   inet6 fe80::250:56ff:feaf:361/64 scope link noprefixroute 
      valid_lft forever preferred_lft forever

The old network adapter configuration is still present, but it can be removed at any time.

# nmcli connection show
NAME    UUID                                  TYPE      DEVICE
net0    e69b124b-50a8-4458-8859-5e68c3f19460  ethernet  net0
lo      2a0388d4-36fa-4df5-bc65-01ec8e0b6f2e  loopback  lo    
ens192  38486930-d56a-4088-bbb5-d8a6a1d236c0  ethernet  --
# nmcli con delete ens192
Connection 'ens192' (38486930-d56a-4088-bbb5-d8a6a1d236c0) successfully deleted.

Seamless migration

Migrating from one hypervisor solution that has been in use for years to a new one is a daunting task. Preparing for the migration is a key aspect of ensuring the success of any project, especially when it comes to moving workloads to innovative technologies. In this article, I demonstrated one aspect of the complex process of performing virtual machine migrations from VMware to OpenShift Virtualization.

To find out more about Red Hat OpenShift Virtualization, please check out our product page, as well as a previous blog post of mine on the topic, or find more information about our tools to assist in the migration. Together, we can make the transition to OpenShift virtual machines or containers a success for your organization.


About the author

Matthew Secaur is a Red Hat Senior Technical Account Manager (TAM) for Canada and the Northeast United States. He has expertise in Red Hat OpenShift Platform, Red Hat OpenStack Platform, and Red Hat Ceph Storage.
