36 Lifecycle actions
This section covers the lifecycle management actions of deployed ATIP clusters.
36.1 Management cluster upgrades
The upgrade of the management cluster involves several components. For a list of the general components that require an upgrade, see the Day 2 management cluster documentation (Chapter 28, Management Cluster).
The upgrade procedure for the components specific to this setup is described below.
Upgrading Metal3
To upgrade Metal3, use the following commands to update the Helm repository cache and fetch the latest Metal3 chart from the Helm chart repository:
helm repo update
helm fetch suse-edge/metal3
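If you want to confirm which chart versions are available before upgrading, the repository can also be queried first. This is an optional check, not part of the documented procedure:
helm search repo suse-edge/metal3 --versions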
After that, the simplest way to upgrade is to export your current configuration values to a file and then upgrade the Metal3 version using that file.
If any changes are required for the new version, edit the file before the upgrade.
helm get values metal3 -n metal3-system -o yaml > metal3-values.yaml
helm upgrade metal3 suse-edge/metal3 \
  --namespace metal3-system \
  -f metal3-values.yaml \
  --version=0.9.0
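As an optional sanity check after the upgrade, not part of the documented steps, you can confirm that the chart revision was updated and that the Metal3 pods are running again, assuming the metal3-system namespace used above:
helm list -n metal3-system
kubectl get pods -n metal3-system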
36.2 Downstream cluster upgrades
Upgrading downstream clusters involves updating several components. The following sections cover the upgrade process for each of the components.
Upgrading the operating system
For this process, check the following reference (Section 35.2, “Prepare downstream cluster image for connected scenarios”) to build the new image with a new operating system version.
With this new image generated by EIB, the next provisioning phase uses the new operating system version provided.
In the following step, the new image is used to upgrade the nodes.
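If the new image has to be published on the cache server referenced later in this section, a minimal sketch is shown below. The destination directory and the checksum file layout are assumptions based on the imagecache.local example used in the Metal3MachineTemplate block, so adapt them to your environment:
sha256sum ${NEW_IMAGE_GENERATED}.raw > ${NEW_IMAGE_GENERATED}.sha256
# hypothetical cache directory; adjust to wherever imagecache.local:8080 serves its files
cp ${NEW_IMAGE_GENERATED}.raw ${NEW_IMAGE_GENERATED}.sha256 /srv/www/imagecache/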
Upgrading the RKE2 cluster
The changes required to upgrade the RKE2 cluster using the automated workflow are the following:
Change the block RKE2ControlPlane in the capi-provisioning-example.yaml file shown in the following section (Section 35.4, “Downstream cluster provisioning with Directed network provisioning (single-node)”):
Add the rollout strategy in the spec file.
Change the version of the RKE2 cluster to the new version, replacing ${RKE2_NEW_VERSION}.
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: RKE2ControlPlane
metadata:
  name: single-node-cluster
  namespace: default
spec:
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: Metal3MachineTemplate
    name: single-node-cluster-controlplane
  version: ${RKE2_NEW_VERSION}
  replicas: 1
  serverConfig:
    cni: cilium
  rolloutStrategy:
    rollingUpdate:
      maxSurge: 0
  registrationMethod: "control-plane-endpoint"
  agentConfig:
    format: ignition
    additionalUserData:
      config: |
        variant: fcos
        version: 1.4.0
        systemd:
          units:
            - name: rke2-preinstall.service
              enabled: true
              contents: |
                [Unit]
                Description=rke2-preinstall
                Wants=network-online.target
                Before=rke2-install.service
                ConditionPathExists=!/run/cluster-api/bootstrap-success.complete
                [Service]
                Type=oneshot
                User=root
                ExecStartPre=/bin/sh -c "mount -L config-2 /mnt"
                ExecStart=/bin/sh -c "sed -i \"s/BAREMETALHOST_UUID/$(jq -r .uuid /mnt/openstack/latest/meta_data.json)/\" /etc/rancher/rke2/config.yaml"
                ExecStart=/bin/sh -c "echo \"node-name: $(jq -r .name /mnt/openstack/latest/meta_data.json)\" >> /etc/rancher/rke2/config.yaml"
                ExecStartPost=/bin/sh -c "umount /mnt"
                [Install]
                WantedBy=multi-user.target
    kubelet:
      extraArgs:
        - provider-id=metal3://BAREMETALHOST_UUID
    nodeName: "localhost.localdomain"
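If you need to confirm which version the cluster is currently running before choosing ${RKE2_NEW_VERSION}, it can be read back from the existing RKE2ControlPlane object on the management cluster. This is a minimal sketch that assumes the resource name and namespace from this example:
kubectl get rke2controlplane single-node-cluster -n default -o jsonpath='{.spec.version}'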
Change the block Metal3MachineTemplate in the capi-provisioning-example.yaml file shown in the following section (Section 35.4, “Downstream cluster provisioning with Directed network provisioning (single-node)”):
Change the image name and checksum to the new version generated in the previous step.
Add the directive nodeReuse set to true to avoid creating a new node.
Add the directive automatedCleaningMode set to metadata to enable the automated cleaning for the node.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: Metal3MachineTemplate
metadata:
  name: single-node-cluster-controlplane
  namespace: default
spec:
  nodeReuse: True
  template:
    spec:
      automatedCleaningMode: metadata
      dataTemplate:
        name: single-node-cluster-controlplane-template
      hostSelector:
        matchLabels:
          cluster-role: control-plane
      image:
        checksum: http://imagecache.local:8080/${NEW_IMAGE_GENERATED}.sha256
        checksumType: sha256
        format: raw
        url: http://imagecache.local:8080/${NEW_IMAGE_GENERATED}.raw
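Before applying the changes, it can be worth verifying that the new image and its checksum are actually reachable at the URLs referenced above. A simple check, assuming the imagecache.local example server:
curl -I http://imagecache.local:8080/${NEW_IMAGE_GENERATED}.raw
curl -I http://imagecache.local:8080/${NEW_IMAGE_GENERATED}.sha256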
After making these changes, the capi-provisioning-example.yaml file can be applied to the cluster using the following command:
kubectl apply -f capi-provisioning-example.yaml
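After the file is applied, the rollout can be followed from the management cluster. The commands below are a sketch that assumes the single-node example names used in this section; Machine and BareMetalHost are the standard Cluster API and Metal3 resources involved in the re-provisioning:
kubectl get rke2controlplane single-node-cluster -n default
kubectl get machines -n default
kubectl get baremetalhosts -n default
Once the node has been re-provisioned, the downstream kubeconfig can be retrieved with clusterctl to confirm that the node reports the new RKE2 version:
clusterctl get kubeconfig single-node-cluster -n default > single-node-cluster-kubeconfig.yaml
kubectl get nodes -o wide --kubeconfig single-node-cluster-kubeconfig.yaml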