10 Edge Networking #
This section describes the approach to network configuration in the SUSE Edge solution. We will show how to configure NetworkManager on SLE Micro in a declarative manner, and explain how the related tools are integrated.
10.1 Overview of NetworkManager #
NetworkManager is a tool that manages the primary network connection and other connection interfaces.
NetworkManager stores network configurations as connection files that contain the desired state. These connections are stored as files in the /etc/NetworkManager/system-connections/ directory.
Details about NetworkManager can be found in the SLE Micro documentation.
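For orientation, a keyfile-style connection for a simple DHCP-managed interface might look like the following (an illustrative sketch, not output captured from a real system):

# /etc/NetworkManager/system-connections/eth0.nmconnection (illustrative)
[connection]
id=eth0
type=802-3-ethernet
interface-name=eth0
autoconnect=true

[ipv4]
method=auto

[ipv6]
method=disabled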
10.2 Overview of nmstate #
nmstate is a widely adopted library (with an accompanying CLI tool) which offers a declarative API for network configurations via a predefined schema.
Details about nmstate can be found in the upstream documentation.
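As a quick illustration of that declarative API (a minimal sketch; nmstatectl is the CLI tool accompanying the library):

# Query the current network state as nmstate YAML
nmstatectl show

# Apply a desired state from a YAML file, e.g. one enabling DHCP on eth0
cat << 'EOF' > dhcp-eth0.yaml
interfaces:
- name: eth0
  type: ethernet
  state: up
  ipv4:
    dhcp: true
    enabled: true
EOF
nmstatectl apply dhcp-eth0.yaml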
10.3 Enter: NetworkManager Configurator (nmc) #
The network customization options available in SUSE Edge are provided by a CLI tool called the NetworkManager Configurator, or nmc for short. It leverages the functionality provided by the nmstate library and, as such, is fully capable of configuring static IP addresses, DNS servers, VLANs, bonding, bridges, and more. This tool allows us to generate network configurations from predefined desired states and to apply those across many different nodes in an automated fashion.
Details about the NetworkManager Configurator (nmc) can be found in the upstream repository.
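For reference, a standalone nmc workflow looks roughly like this (a sketch based on upstream usage; the directory names are illustrative and the flags should be verified against your nmc version):

# Generate NetworkManager connection files from nmstate-style desired states
nmc generate --config-dir desired-states --output-dir network-config

# Identify the local host by MAC address and apply the matching configuration
nmc apply --config-dir network-config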
10.4 How does SUSE Edge use NetworkManager Configurator? #
SUSE Edge utilizes nmc for the network customizations in its various provisioning models:
Custom network configurations in the Directed Network Provisioning scenarios (Chapter 1, BMC automated deployments with Metal3)
Declarative static configurations in the Image Based Provisioning scenarios (Chapter 3, Standalone clusters with Edge Image Builder)
10.5 Configuring with Edge Image Builder #
Edge Image Builder (EIB) is a tool which enables configuring multiple hosts with a single OS image. In this section we’ll show how you can use a declarative approach to describe the desired network states, how those are converted to the respective NetworkManager connections, and are then applied during the provisioning process.
10.5.1 Prerequisites #
If you’re following this guide, it’s assumed that you’ve got the following already available:
An x86_64 physical host (or virtual machine) running SLES 15 SP6 or openSUSE Leap 15.6
An available container runtime (e.g. Podman)
A copy of the SL Micro 6.0 RAW image found here
10.5.2 Getting the Edge Image Builder container image #
The EIB container image is publicly available and can be downloaded from the SUSE Edge registry by running:
podman pull registry.suse.com/edge/3.1/edge-image-builder:1.1.0
10.5.3 Creating the image configuration directory #
Let’s start with creating the configuration directory:
export CONFIG_DIR=$HOME/eib
mkdir -p $CONFIG_DIR/base-images
We will now ensure that the downloaded base image copy is moved over to the configuration directory:
mv /path/to/downloads/SL-Micro.x86_64-6.0-Base-GM2.raw $CONFIG_DIR/base-images/
Note: EIB never modifies the input base image. It creates a new image containing its modifications.
The configuration directory at this point should look like the following:
└── base-images/
    └── SL-Micro.x86_64-6.0-Base-GM2.raw
10.5.4 Creating the image definition file #
The definition file describes the majority of configurable options that the Edge Image Builder supports.
Let’s start with a very basic definition file for our OS image:
cat << 'EOF' > $CONFIG_DIR/definition.yaml
apiVersion: 1.1
image:
  arch: x86_64
  imageType: raw
  baseImage: SL-Micro.x86_64-6.0-Base-GM2.raw
  outputImageName: modified-image.raw
operatingSystem:
  users:
    - username: root
      encryptedPassword: $6$jHugJNNd3HElGsUZ$eodjVe4te5ps44SVcWshdfWizrP.xAyd71CVEXazBJ/.v799/WRCBXxfYmunlBO2yp1hm/zb4r8EmnrrNCF.P/
EOF
The image section is required, and it specifies the input image, its architecture and type, as well as what the output image will be called.

The operatingSystem section is optional, and contains configuration to enable login on the provisioned systems with the root/eib username/password pair.
Note: Feel free to use your own encrypted password by running openssl passwd -6 <password>.
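For example (placeholder output shown; the hash is salted, so every run produces a different string):

openssl passwd -6 MyPassword
$6$<salt>$<hash>

Paste the resulting $6$... string verbatim as the encryptedPassword value in definition.yaml.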
The configuration directory at this point should look like the following:
├── definition.yaml
└── base-images/
    └── SL-Micro.x86_64-6.0-Base-GM2.raw
10.5.5 Defining the network configurations #
The desired network configurations are not part of the image definition file that we just created. We'll now populate those under the special network/ directory. Let's create it:
mkdir -p $CONFIG_DIR/network
As previously mentioned, the NetworkManager Configurator (nmc) tool expects input in the form of a predefined schema. You can find out how to set up a wide variety of networking options in the upstream NMState examples documentation.
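For example, a VLAN (one of the options mentioned earlier which the three nodes below do not cover) is described in the same schema like this (an illustrative sketch; the interface name and VLAN ID are arbitrary):

interfaces:
- name: eth0.100
  type: vlan
  state: up
  vlan:
    base-iface: eth0
    id: 100
  ipv4:
    dhcp: true
    enabled: true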
This guide will explain how to configure the networking on three different nodes:
A node which uses two Ethernet interfaces
A node which uses network bonding
A node which uses a network bridge
Using completely different network setups is not recommended in production builds, especially when configuring Kubernetes clusters. Networking configurations should generally be homogeneous amongst nodes, or at least amongst roles within a given cluster. This guide includes various options only to serve as an example reference.
Note: The following assumes a default libvirt network with an IP address range of 192.168.122.1/24. Adjust accordingly if this differs in your environment.
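If you are unsure which range your environment uses, you can inspect the default libvirt network definition:

# Show the default libvirt network; the <ip address=.../> element holds the range
virsh net-dumpxml default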
Let's create the desired state for the first node, which we will call node1.suse.com:
cat << EOF > $CONFIG_DIR/network/node1.suse.com.yaml
routes:
  config:
  - destination: 0.0.0.0/0
    metric: 100
    next-hop-address: 192.168.122.1
    next-hop-interface: eth0
    table-id: 254
  - destination: 192.168.122.0/24
    metric: 100
    next-hop-address:
    next-hop-interface: eth0
    table-id: 254

dns-resolver:
  config:
    server:
    - 192.168.122.1
    - 8.8.8.8

interfaces:
- name: eth0
  type: ethernet
  state: up
  mac-address: 34:8A:B1:4B:16:E1
  ipv4:
    address:
    - ip: 192.168.122.50
      prefix-length: 24
    dhcp: false
    enabled: true
  ipv6:
    enabled: false
- name: eth3
  type: ethernet
  state: down
  mac-address: 34:8A:B1:4B:16:E2
  ipv4:
    address:
    - ip: 192.168.122.55
      prefix-length: 24
    dhcp: false
    enabled: true
  ipv6:
    enabled: false
EOF
In this example we define a desired state of two Ethernet interfaces (eth0 and eth3), their requested IP addresses, routing, and DNS resolution.
You must ensure that the MAC addresses of all Ethernet interfaces are listed. Those are used during the provisioning process as the identifiers of the nodes and serve to determine which configurations should be applied. This is how we are able to configure multiple nodes using a single ISO or RAW image.
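If the MAC addresses are not known ahead of time, they can be listed on a running machine, for example:

# Brief per-interface listing, including MAC addresses
ip -br link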
Next up is the second node, which we will call node2.suse.com and which will use network bonding:
cat << EOF > $CONFIG_DIR/network/node2.suse.com.yaml
routes:
  config:
  - destination: 0.0.0.0/0
    metric: 100
    next-hop-address: 192.168.122.1
    next-hop-interface: bond99
    table-id: 254
  - destination: 192.168.122.0/24
    metric: 100
    next-hop-address:
    next-hop-interface: bond99
    table-id: 254

dns-resolver:
  config:
    server:
    - 192.168.122.1
    - 8.8.8.8

interfaces:
- name: bond99
  type: bond
  state: up
  ipv4:
    address:
    - ip: 192.168.122.60
      prefix-length: 24
    enabled: true
  link-aggregation:
    mode: balance-rr
    options:
      miimon: '140'
    port:
    - eth0
    - eth1
- name: eth0
  type: ethernet
  state: up
  mac-address: 34:8A:B1:4B:16:E3
  ipv4:
    enabled: false
  ipv6:
    enabled: false
- name: eth1
  type: ethernet
  state: up
  mac-address: 34:8A:B1:4B:16:E4
  ipv4:
    enabled: false
  ipv6:
    enabled: false
EOF
In this example we define a desired state of two Ethernet interfaces (eth0 and eth1) with no IP addressing enabled, as well as a bond with a round-robin policy and the address which will be used to forward the network traffic.
Lastly, we'll create the third and final desired state file, which will utilize a network bridge and which we'll call node3.suse.com:
cat << EOF > $CONFIG_DIR/network/node3.suse.com.yaml
routes:
  config:
  - destination: 0.0.0.0/0
    metric: 100
    next-hop-address: 192.168.122.1
    next-hop-interface: linux-br0
    table-id: 254
  - destination: 192.168.122.0/24
    metric: 100
    next-hop-address:
    next-hop-interface: linux-br0
    table-id: 254

dns-resolver:
  config:
    server:
    - 192.168.122.1
    - 8.8.8.8

interfaces:
- name: eth0
  type: ethernet
  state: up
  mac-address: 34:8A:B1:4B:16:E5
  ipv4:
    enabled: false
  ipv6:
    enabled: false
- name: linux-br0
  type: linux-bridge
  state: up
  ipv4:
    address:
    - ip: 192.168.122.70
      prefix-length: 24
    dhcp: false
    enabled: true
  bridge:
    options:
      group-forward-mask: 0
      mac-ageing-time: 300
      multicast-snooping: true
      stp:
        enabled: true
        forward-delay: 15
        hello-time: 2
        max-age: 20
        priority: 32768
    port:
    - name: eth0
      stp-hairpin-mode: false
      stp-path-cost: 100
      stp-priority: 32
EOF
The configuration directory at this point should look like the following:
├── definition.yaml
├── network/
│   ├── node1.suse.com.yaml
│   ├── node2.suse.com.yaml
│   └── node3.suse.com.yaml
└── base-images/
    └── SL-Micro.x86_64-6.0-Base-GM2.raw
Note: The names of the files under the network/ directory are intentional. They correspond to the hostnames which will be set during the provisioning process.
10.5.6 Building the OS image #
Now that all the necessary configurations are in place, we can build the image by simply running:
podman run --rm -it -v $CONFIG_DIR:/eib registry.suse.com/edge/3.1/edge-image-builder:1.1.0 build --definition-file definition.yaml
The output should be similar to the following:
Generating image customization components...
Identifier ................... [SUCCESS]
Custom Files ................. [SKIPPED]
Time ......................... [SKIPPED]
Network ...................... [SUCCESS]
Groups ....................... [SKIPPED]
Users ........................ [SUCCESS]
Proxy ........................ [SKIPPED]
Rpm .......................... [SKIPPED]
Systemd ...................... [SKIPPED]
Elemental .................... [SKIPPED]
Suma ......................... [SKIPPED]
Embedded Artifact Registry ... [SKIPPED]
Keymap ....................... [SUCCESS]
Kubernetes ................... [SKIPPED]
Certificates ................. [SKIPPED]
Building RAW image...
Kernel Params ................ [SKIPPED]
Image build complete!
The snippet above tells us that the Network component has been successfully configured, and we can proceed with provisioning our edge nodes.
Note: A log file (network-config.log) and the respective NetworkManager connection files can be inspected in the resulting _build directory, under a timestamped directory for the image run.
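For example, something along these lines will locate the artifacts (the timestamped directory name differs per run, so treat the exact layout as an assumption):

ls $CONFIG_DIR/_build/
find $CONFIG_DIR/_build -name network-config.log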
10.5.7 Provisioning the edge nodes #
Let’s copy the resulting RAW image:
mkdir edge-nodes && cd edge-nodes
for i in {1..4}; do cp $CONFIG_DIR/modified-image.raw node$i.raw; done
You will notice that we copied the built image four times but only specified the network configurations for three nodes. This is because we also want to showcase what will happen if we provision a node which does not match any of the desired configurations.
Note: This guide will use virtualization for the node provisioning examples. Ensure the necessary extensions are enabled in the BIOS (see here for details).
We will be using virt-install to create virtual machines using the copied raw disks. Each virtual machine will use 10 GB of RAM and 6 vCPUs.
10.5.7.1 Provisioning the first node #
Let’s create the virtual machine:
virt-install --name node1 --ram 10000 --vcpus 6 --disk path=node1.raw,format=raw \
--osinfo detect=on,name=sle-unknown --graphics none --console pty,target_type=serial \
--network default,mac=34:8A:B1:4B:16:E1 --network default,mac=34:8A:B1:4B:16:E2 \
--virt-type kvm --import
Note: It is important that we create the network interfaces with the same MAC addresses as the ones in the desired state we described above.
Once the operation is complete, we will see something similar to the following:
Starting install...
Creating domain...

Running text console command: virsh --connect qemu:///system console node1
Connected to domain 'node1'
Escape character is ^] (Ctrl + ])

Welcome to SUSE Linux Enterprise Micro 6.0 (x86_64) - Kernel 6.4.0-18-default (tty1).

SSH host key: SHA256:XN/R5Tw43reG+QsOw480LxCnhkc/1uqMdwlI6KUBY70 (RSA)
SSH host key: SHA256:/96yGrPGKlhn04f1rb9cXv/2WJt4TtrIN5yEcN66r3s (DSA)
SSH host key: SHA256:Dy/YjBQ7LwjZGaaVcMhTWZNSOstxXBsPsvgJTJq5t00 (ECDSA)
SSH host key: SHA256:TNGqY1LRddpxD/jn/8dkT/9YmVl9hiwulqmayP+wOWQ (ED25519)
eth0: 192.168.122.50
eth1:

Configured with the Edge Image Builder
Activate the web console with: systemctl enable --now cockpit.socket

node1 login:
We're now able to log in with the root:eib credentials pair. We're also able to SSH into the host if we prefer that over the virsh console presented here.
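For example, using the static address we assigned to eth0 and the credentials above:

ssh root@192.168.122.50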
Once logged in, let’s confirm that all the settings are in place.
Verify that the hostname is properly set:
node1:~ # hostnamectl
 Static hostname: node1.suse.com
 ...
Verify that the routing is properly configured:
node1:~ # ip r
default via 192.168.122.1 dev eth0 proto static metric 100
192.168.122.0/24 dev eth0 proto static scope link metric 100
192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.50 metric 100
Verify that an Internet connection is available:
node1:~ # ping google.com
PING google.com (142.250.72.78) 56(84) bytes of data.
64 bytes from den16s09-in-f14.1e100.net (142.250.72.78): icmp_seq=1 ttl=56 time=13.2 ms
64 bytes from den16s09-in-f14.1e100.net (142.250.72.78): icmp_seq=2 ttl=56 time=13.4 ms
^C
--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 13.248/13.304/13.361/0.056 ms
Verify that exactly two Ethernet interfaces are configured and only one of those is active:
node1:~ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 34:8a:b1:4b:16:e1 brd ff:ff:ff:ff:ff:ff
    altname enp0s2
    altname ens2
    inet 192.168.122.50/24 brd 192.168.122.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 34:8a:b1:4b:16:e2 brd ff:ff:ff:ff:ff:ff
    altname enp0s3
    altname ens3

node1:~ # nmcli -f NAME,UUID,TYPE,DEVICE,FILENAME con show
NAME  UUID                                  TYPE      DEVICE  FILENAME
eth0  dfd202f5-562f-5f07-8f2a-a7717756fb70  ethernet  eth0    /etc/NetworkManager/system-connections/eth0.nmconnection
eth1  7e211aea-3d14-59cf-a4fa-be91dac5dbba  ethernet  --      /etc/NetworkManager/system-connections/eth1.nmconnection
You'll notice that the second interface is named eth1 instead of the predefined eth3 from our desired networking state. This is because the NetworkManager Configurator (nmc) detected that the OS assigned a different name to the NIC with MAC address 34:8a:b1:4b:16:e2 and adjusted its settings accordingly.
Verify this has indeed happened by inspecting the Combustion phase of the provisioning:
node1:~ # journalctl -u combustion | grep nmc
Apr 23 09:20:19 localhost.localdomain combustion[1360]: [2024-04-23T09:20:19Z INFO nmc::apply_conf] Identified host: node1.suse.com
Apr 23 09:20:19 localhost.localdomain combustion[1360]: [2024-04-23T09:20:19Z INFO nmc::apply_conf] Set hostname: node1.suse.com
Apr 23 09:20:19 localhost.localdomain combustion[1360]: [2024-04-23T09:20:19Z INFO nmc::apply_conf] Processing interface 'eth0'...
Apr 23 09:20:19 localhost.localdomain combustion[1360]: [2024-04-23T09:20:19Z INFO nmc::apply_conf] Processing interface 'eth3'...
Apr 23 09:20:19 localhost.localdomain combustion[1360]: [2024-04-23T09:20:19Z INFO nmc::apply_conf] Using interface name 'eth1' instead of the preconfigured 'eth3'
Apr 23 09:20:19 localhost.localdomain combustion[1360]: [2024-04-23T09:20:19Z INFO nmc] Successfully applied config
We will now provision the rest of the nodes, but we will only show the differences in the final configuration. Feel free to apply any or all of the above checks for all nodes you are about to provision.
10.5.7.2 Provisioning the second node #
Let’s create the virtual machine:
virt-install --name node2 --ram 10000 --vcpus 6 --disk path=node2.raw,format=raw \
--osinfo detect=on,name=sle-unknown --graphics none --console pty,target_type=serial \
--network default,mac=34:8A:B1:4B:16:E3 --network default,mac=34:8A:B1:4B:16:E4 \
--virt-type kvm --import
Once the virtual machine is up and running, we can confirm that this node is using bonded interfaces:
node2:~ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond99 state UP group default qlen 1000
    link/ether 34:8a:b1:4b:16:e3 brd ff:ff:ff:ff:ff:ff
    altname enp0s2
    altname ens2
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond99 state UP group default qlen 1000
    link/ether 34:8a:b1:4b:16:e3 brd ff:ff:ff:ff:ff:ff permaddr 34:8a:b1:4b:16:e4
    altname enp0s3
    altname ens3
4: bond99: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 34:8a:b1:4b:16:e3 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.60/24 brd 192.168.122.255 scope global noprefixroute bond99
       valid_lft forever preferred_lft forever
Confirm that the routing is using the bond:
node2:~ # ip r
default via 192.168.122.1 dev bond99 proto static metric 100
192.168.122.0/24 dev bond99 proto static scope link metric 100
192.168.122.0/24 dev bond99 proto kernel scope link src 192.168.122.60 metric 300
Ensure that the static connection files are properly utilized:
node2:~ # nmcli -f NAME,UUID,TYPE,DEVICE,FILENAME con show
NAME    UUID                                  TYPE      DEVICE  FILENAME
bond99  4a920503-4862-5505-80fd-4738d07f44c6  bond      bond99  /etc/NetworkManager/system-connections/bond99.nmconnection
eth0    dfd202f5-562f-5f07-8f2a-a7717756fb70  ethernet  eth0    /etc/NetworkManager/system-connections/eth0.nmconnection
eth1    0523c0a1-5f5e-5603-bcf2-68155d5d322e  ethernet  eth1    /etc/NetworkManager/system-connections/eth1.nmconnection
10.5.7.3 Provisioning the third node #
Let’s create the virtual machine:
virt-install --name node3 --ram 10000 --vcpus 6 --disk path=node3.raw,format=raw \
--osinfo detect=on,name=sle-unknown --graphics none --console pty,target_type=serial \
--network default,mac=34:8A:B1:4B:16:E5 --virt-type kvm --import
Once the virtual machine is up and running, we can confirm that this node is using a network bridge:
node3:~ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master linux-br0 state UP group default qlen 1000
    link/ether 34:8a:b1:4b:16:e5 brd ff:ff:ff:ff:ff:ff
    altname enp0s2
    altname ens2
3: linux-br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 34:8a:b1:4b:16:e5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.70/24 brd 192.168.122.255 scope global noprefixroute linux-br0
       valid_lft forever preferred_lft forever
Confirm that the routing is using the bridge:
node3:~ # ip r
default via 192.168.122.1 dev linux-br0 proto static metric 100
192.168.122.0/24 dev linux-br0 proto static scope link metric 100
192.168.122.0/24 dev linux-br0 proto kernel scope link src 192.168.122.70 metric 425
Ensure that the static connection files are properly utilized:
node3:~ # nmcli -f NAME,UUID,TYPE,DEVICE,FILENAME con show
NAME       UUID                                  TYPE      DEVICE     FILENAME
linux-br0  1f8f1469-ed20-5f2c-bacb-a6767bee9bc0  bridge    linux-br0  /etc/NetworkManager/system-connections/linux-br0.nmconnection
eth0       dfd202f5-562f-5f07-8f2a-a7717756fb70  ethernet  eth0       /etc/NetworkManager/system-connections/eth0.nmconnection
10.5.7.4 Provisioning the fourth node #
Lastly, we will provision a node which will not match any of the predefined configurations by MAC address. In this case, nmc defaults to DHCP to configure the network interfaces.
Let’s create the virtual machine:
virt-install --name node4 --ram 10000 --vcpus 6 --disk path=node4.raw,format=raw \
--osinfo detect=on,name=sle-unknown --graphics none --console pty,target_type=serial \
--network default --virt-type kvm --import
Once the virtual machine is up and running, we can confirm that this node is using a dynamically assigned IP address for its network interface:
localhost:~ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:56:63:71 brd ff:ff:ff:ff:ff:ff
    altname enp0s2
    altname ens2
    inet 192.168.122.86/24 brd 192.168.122.255 scope global dynamic noprefixroute eth0
       valid_lft 3542sec preferred_lft 3542sec
    inet6 fe80::5054:ff:fe56:6371/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
Verify that nmc failed to apply static configurations for this node:
localhost:~ # journalctl -u combustion | grep nmc
Apr 23 12:15:45 localhost.localdomain combustion[1357]: [2024-04-23T12:15:45Z ERROR nmc] Applying config failed: None of the preconfigured hosts match local NICs
Verify that the Ethernet interface was configured via DHCP:
localhost:~ # journalctl | grep eth0
Apr 23 12:15:29 localhost.localdomain NetworkManager[704]: <info> [1713874529.7801] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Apr 23 12:15:29 localhost.localdomain NetworkManager[704]: <info> [1713874529.7802] device (eth0): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Apr 23 12:15:29 localhost.localdomain NetworkManager[704]: <info> [1713874529.7929] device (eth0): carrier: link connected
Apr 23 12:15:29 localhost.localdomain NetworkManager[704]: <info> [1713874529.7931] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed')
Apr 23 12:15:29 localhost.localdomain NetworkManager[704]: <info> [1713874529.7944] device (eth0): Activation: starting connection 'Wired Connection' (300ed658-08d4-4281-9f8c-d1b8882d29b9)
Apr 23 12:15:29 localhost.localdomain NetworkManager[704]: <info> [1713874529.7945] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Apr 23 12:15:29 localhost.localdomain NetworkManager[704]: <info> [1713874529.7947] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Apr 23 12:15:29 localhost.localdomain NetworkManager[704]: <info> [1713874529.7953] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Apr 23 12:15:29 localhost.localdomain NetworkManager[704]: <info> [1713874529.7964] dhcp4 (eth0): activation: beginning transaction (timeout in 90 seconds)
Apr 23 12:15:33 localhost.localdomain NetworkManager[704]: <info> [1713874533.1272] dhcp4 (eth0): state changed new lease, address=192.168.122.86

localhost:~ # nmcli -f NAME,UUID,TYPE,DEVICE,FILENAME con show
NAME              UUID                                  TYPE      DEVICE  FILENAME
Wired Connection  300ed658-08d4-4281-9f8c-d1b8882d29b9  ethernet  eth0    /var/run/NetworkManager/system-connections/default_connection.nmconnection
10.5.8 Unified node configurations #
There are occasions where relying on known MAC addresses is not an option. In these cases we can opt for the so-called unified configuration, which allows us to specify settings in an _all.yaml file which will then be applied across all provisioned nodes.
We will build and provision an edge node using a different configuration structure. Follow all steps starting from Section 10.5.3, “Creating the image configuration directory” up until Section 10.5.5, “Defining the network configurations”.
In this example we define a desired state of two Ethernet interfaces (eth0 and eth1) - one using DHCP, and one assigned a static IP address.
mkdir -p $CONFIG_DIR/network

cat <<- EOF > $CONFIG_DIR/network/_all.yaml
interfaces:
- name: eth0
  type: ethernet
  state: up
  ipv4:
    dhcp: true
    enabled: true
  ipv6:
    enabled: false
- name: eth1
  type: ethernet
  state: up
  ipv4:
    address:
    - ip: 10.0.0.1
      prefix-length: 24
    enabled: true
    dhcp: false
  ipv6:
    enabled: false
EOF
Let’s build the image:
podman run --rm -it -v $CONFIG_DIR:/eib registry.suse.com/edge/3.1/edge-image-builder:1.1.0 build --definition-file definition.yaml
Once the image is successfully built, let’s create a virtual machine using it:
virt-install --name node1 --ram 10000 --vcpus 6 --disk path=$CONFIG_DIR/modified-image.raw,format=raw \
--osinfo detect=on,name=sle-unknown --graphics none --console pty,target_type=serial \
--network default --network default --virt-type kvm --import
The provisioning process might take a few minutes. Once it’s finished, log in to the system with the provided credentials.
Verify that the routing is properly configured:
localhost:~ # ip r
default via 192.168.122.1 dev eth0 proto dhcp src 192.168.122.100 metric 100
10.0.0.0/24 dev eth1 proto kernel scope link src 10.0.0.1 metric 101
192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.100 metric 100
Verify that an Internet connection is available:
localhost:~ # ping google.com
PING google.com (142.250.72.46) 56(84) bytes of data.
64 bytes from den16s08-in-f14.1e100.net (142.250.72.46): icmp_seq=1 ttl=56 time=14.3 ms
64 bytes from den16s08-in-f14.1e100.net (142.250.72.46): icmp_seq=2 ttl=56 time=14.2 ms
^C
--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 14.196/14.260/14.324/0.064 ms
Verify that the Ethernet interfaces are configured and active:
localhost:~ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:26:44:7a brd ff:ff:ff:ff:ff:ff
    altname enp1s0
    inet 192.168.122.100/24 brd 192.168.122.255 scope global dynamic noprefixroute eth0
       valid_lft 3505sec preferred_lft 3505sec
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:ec:57:9e brd ff:ff:ff:ff:ff:ff
    altname enp7s0
    inet 10.0.0.1/24 brd 10.0.0.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever

localhost:~ # nmcli -f NAME,UUID,TYPE,DEVICE,FILENAME con show
NAME  UUID                                  TYPE      DEVICE  FILENAME
eth0  dfd202f5-562f-5f07-8f2a-a7717756fb70  ethernet  eth0    /etc/NetworkManager/system-connections/eth0.nmconnection
eth1  0523c0a1-5f5e-5603-bcf2-68155d5d322e  ethernet  eth1    /etc/NetworkManager/system-connections/eth1.nmconnection

localhost:~ # cat /etc/NetworkManager/system-connections/eth0.nmconnection
[connection]
autoconnect=true
autoconnect-slaves=-1
id=eth0
interface-name=eth0
type=802-3-ethernet
uuid=dfd202f5-562f-5f07-8f2a-a7717756fb70

[ipv4]
dhcp-client-id=mac
dhcp-send-hostname=true
dhcp-timeout=2147483647
ignore-auto-dns=false
ignore-auto-routes=false
method=auto
never-default=false

[ipv6]
addr-gen-mode=0
dhcp-timeout=2147483647
method=disabled

localhost:~ # cat /etc/NetworkManager/system-connections/eth1.nmconnection
[connection]
autoconnect=true
autoconnect-slaves=-1
id=eth1
interface-name=eth1
type=802-3-ethernet
uuid=0523c0a1-5f5e-5603-bcf2-68155d5d322e

[ipv4]
address0=10.0.0.1/24
dhcp-timeout=2147483647
method=manual

[ipv6]
addr-gen-mode=0
dhcp-timeout=2147483647
method=disabled
10.5.9 Custom network configurations #
We have already covered the default network configuration for Edge Image Builder, which relies on the NetworkManager Configurator. There is also the option of modifying the network via a custom script. Whilst this option is very flexible and not MAC address dependent, its limitation is that it is much less convenient when bootstrapping multiple nodes with a single image.
Note: It is recommended to use the default network configuration via files describing the desired network states under the network/ directory. Only opt for custom scripting when that behaviour is not applicable to your use case.
We will build and provision an edge node using a different configuration structure. Follow all steps starting from Section 10.5.3, “Creating the image configuration directory” up until Section 10.5.5, “Defining the network configurations”.
In this example, we will create a custom script which applies static configuration for the eth0 interface on all provisioned nodes, as well as removing and disabling the wired connections automatically created by NetworkManager. This is beneficial in situations where you want to make sure that every node in your cluster has an identical networking configuration, and as such you do not need to be concerned with the MAC address of each node prior to image creation.
Let's start by storing the connection file in the custom/files directory:
mkdir -p $CONFIG_DIR/custom/files

cat << EOF > $CONFIG_DIR/custom/files/eth0.nmconnection
[connection]
autoconnect=true
autoconnect-slaves=-1
autoconnect-retries=1
id=eth0
interface-name=eth0
type=802-3-ethernet
uuid=dfd202f5-562f-5f07-8f2a-a7717756fb70
wait-device-timeout=60000

[ipv4]
dhcp-timeout=2147483647
method=auto

[ipv6]
addr-gen-mode=eui64
dhcp-timeout=2147483647
method=disabled
EOF
Now that the static configuration is created, we will also create our custom network script:
mkdir -p $CONFIG_DIR/network

cat << EOF > $CONFIG_DIR/network/configure-network.sh
#!/bin/bash
set -eux

# Remove and disable wired connections
mkdir -p /etc/NetworkManager/conf.d/
printf "[main]\nno-auto-default=*\n" > /etc/NetworkManager/conf.d/no-auto-default.conf
rm -f /var/run/NetworkManager/system-connections/* || true

# Copy pre-configured network configuration files into NetworkManager
mkdir -p /etc/NetworkManager/system-connections/
cp eth0.nmconnection /etc/NetworkManager/system-connections/
chmod 600 /etc/NetworkManager/system-connections/*.nmconnection
EOF

chmod a+x $CONFIG_DIR/network/configure-network.sh
Note: The nmc binary will still be included by default, so it can also be used in the configure-network.sh script if necessary.
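For instance, a custom script could reuse the bundled binary to apply pre-generated connection files (a sketch only; the network-config directory name is an assumption, and the exact nmc invocation should be verified against the upstream repository):

#!/bin/bash
set -eux

# Assumption: connection files were generated beforehand (e.g. with 'nmc generate')
# and shipped next to this script in a 'network-config' directory
./nmc apply --config-dir network-config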
The custom script must always be provided under network/configure-network.sh in the configuration directory. If present, all other files under that directory will be ignored.
It is NOT possible to configure a network by working with both static configurations in YAML format and a custom script simultaneously.
The configuration directory at this point should look like the following:
├── definition.yaml
├── custom/
│   └── files/
│       └── eth0.nmconnection
├── network/
│   └── configure-network.sh
└── base-images/
    └── SL-Micro.x86_64-6.0-Base-GM2.raw
Let’s build the image:
podman run --rm -it -v $CONFIG_DIR:/eib registry.suse.com/edge/3.1/edge-image-builder:1.1.0 build --definition-file definition.yaml
Once the image is successfully built, let’s create a virtual machine using it:
virt-install --name node1 --ram 10000 --vcpus 6 --disk path=$CONFIG_DIR/modified-image.raw,format=raw \
--osinfo detect=on,name=sle-unknown --graphics none --console pty,target_type=serial \
--network default --virt-type kvm --import
The provisioning process might take a few minutes. Once it’s finished, log in to the system with the provided credentials.
Verify that the routing is properly configured:
localhost:~ # ip r
default via 192.168.122.1 dev eth0 proto dhcp src 192.168.122.185 metric 100
192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.185 metric 100
Verify that an Internet connection is available:
localhost:~ # ping google.com
PING google.com (142.250.72.78) 56(84) bytes of data.
64 bytes from den16s09-in-f14.1e100.net (142.250.72.78): icmp_seq=1 ttl=56 time=13.6 ms
64 bytes from den16s09-in-f14.1e100.net (142.250.72.78): icmp_seq=2 ttl=56 time=13.6 ms
^C
--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 13.592/13.599/13.606/0.007 ms
Verify that an Ethernet interface is statically configured using our connection file and is active:
localhost:~ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:31:d0:1b brd ff:ff:ff:ff:ff:ff
    altname enp0s2
    altname ens2
    inet 192.168.122.185/24 brd 192.168.122.255 scope global dynamic noprefixroute eth0

localhost:~ # nmcli -f NAME,UUID,TYPE,DEVICE,FILENAME con show
NAME  UUID                                  TYPE      DEVICE  FILENAME
eth0  dfd202f5-562f-5f07-8f2a-a7717756fb70  ethernet  eth0    /etc/NetworkManager/system-connections/eth0.nmconnection

localhost:~ # cat /etc/NetworkManager/system-connections/eth0.nmconnection
[connection]
autoconnect=true
autoconnect-slaves=-1
autoconnect-retries=1
id=eth0
interface-name=eth0
type=802-3-ethernet
uuid=dfd202f5-562f-5f07-8f2a-a7717756fb70
wait-device-timeout=60000

[ipv4]
dhcp-timeout=2147483647
method=auto

[ipv6]
addr-gen-mode=eui64
dhcp-timeout=2147483647
method=disabled