
26 Edge 3.1 migration

This section offers guidelines for migrating existing Edge 3.0 (including minor releases such as 3.0.1 and 3.0.2) management and downstream clusters to Edge 3.1.0.

For a list of Edge 3.1.0 component versions, refer to the release notes (Section 36.1, “Abstract”).

26.1 Management cluster

This section covers how to migrate a management cluster from Edge 3.0 to Edge 3.1.0.

Management cluster components should be migrated in the following order:

  1. Operating System (OS)

  2. RKE2

  3. Edge Helm charts

26.1.1 Operating System (OS)

This section covers the steps needed to migrate your management cluster nodes' OS to an Edge 3.1.0 supported version.

Important

The steps below should be performed on each node of the management cluster.

To avoid any unforeseen problems, migrate the cluster’s control-plane nodes first and the worker nodes second.

26.1.1.1 Prerequisites

  • SCC registered nodes - ensure that the OS of each cluster node is registered with a subscription key that supports the operating system version specified in the Edge 3.1.0 release (Section 36.1, “Abstract”).
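
    You can confirm a node's registration status with SUSEConnect, for example (a minimal check; the exact output depends on your subscriptions):

    SUSEConnect --status-text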

Air-gapped:

  • Mirror SUSE RPM repositories - the RPM repositories for the operating system specified in the Edge 3.1.0 release (Section 36.1, “Abstract”) should be locally mirrored, so that transactional-update can access them. This can be achieved by using either RMT or SUMA.
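
    For example, when using RMT, nodes can be registered against the local mirror (the server URL below is a placeholder for your own RMT instance):

    SUSEConnect --url https://rmt.example.com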

26.1.1.2 Migration steps

Note

The below steps assume you are running as root and that kubectl has been configured to connect to the management cluster.

  1. Mark the node as unschedulable:

    kubectl cordon <node_name>

    For a full list of the options for the cordon command, see kubectl cordon.

  2. Optionally, you may want to drain the node's workloads:

    kubectl drain <node_name>

    For a full list of the options for the drain command, see kubectl drain.

  3. Before migrating, ensure that the packages on your current OS are up to date. To do this, execute:

    transactional-update

    The above command executes zypper up to update the OS packages. For more information on transactional-update, see the transactional-update guide.

  4. Proceed to do the OS migration:

    transactional-update --continue migration
    Note

    The --continue option is used here to reuse the previous snapshot without having to reboot the system.

    • If your subscription key supports the SUSE Linux Micro 6.0 version, you will be prompted with something similar to:

      [Figure: OS migration prompt listing the available migration targets]

      Select the number that corresponds to SUSE Linux Micro 6.0 <arch>.

      Note

      The Edge 3.1.0 release supports only the SUSE Linux Micro 6.0 operating system.

  5. After a successful transactional-update run, reboot the system for the changes to take effect:

    reboot
  6. After the host has been rebooted, validate that the operating system has been migrated to SUSE Linux Micro 6.0:

    cat /etc/os-release

    Output should be similar to:

    NAME="SL-Micro"
    VERSION="6.0"
    VERSION_ID="6.0"
    PRETTY_NAME="SUSE Linux Micro 6.0"
    ID="sl-micro"
    ID_LIKE="suse"
    ANSI_COLOR="0;32"
    CPE_NAME="cpe:/o:suse:sl-micro:6.0"
    HOME_URL="https://www.suse.com/products/micro/"
    DOCUMENTATION_URL="https://documentation.suse.com/sl-micro/6.0/"
    Note

    If something failed during the migration, you can roll back to the last working snapshot using:

    transactional-update rollback last

    Reboot your system for the rollback to take effect. See the official transactional-update documentation for more information about the rollback procedure.

  7. Mark the node as schedulable:

    kubectl uncordon <node_name>

26.1.2 RKE2

Important

The steps below should be performed on each node of the management cluster.

As the RKE2 documentation explains, the upgrade procedure requires upgrading the cluster's control-plane nodes one at a time and, once all of them have been upgraded, the agent nodes.

Note

To ensure disaster recovery, we advise backing up the RKE2 cluster data. For information on how to do this, check the RKE2 backup and restore guide. The default location for the rke2 binary is /opt/rke2/bin.
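
For example, assuming the default binary location mentioned above, a one-off etcd snapshot can be taken on each server node before upgrading (the snapshot name is illustrative):

/opt/rke2/bin/rke2 etcd-snapshot save --name pre-edge-3.1-migration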

You can upgrade the RKE2 version to an Edge 3.1.0 compatible version using the RKE2 installation script as follows:

  1. Mark the node as unschedulable:

    kubectl cordon <node_name>

    For a full list of the options for the cordon command, see kubectl cordon.

  2. Optionally, you may want to drain the node's workloads:

    kubectl drain <node_name>

    For a full list of the options for the drain command, see kubectl drain.

  3. Use the RKE2 installation script to install the correct Edge 3.1.0 compatible RKE2 version:

    curl -sfL https://get.rke2.io | INSTALL_RKE2_VERSION=v1.30.3+rke2r1 sh -
  4. Restart the rke2 process:

    # For control-plane nodes:
    systemctl restart rke2-server
    
    # For worker nodes:
    systemctl restart rke2-agent
  5. Validate that the node's RKE2 version has been upgraded:

    kubectl get nodes
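
    # Example output (illustrative); the VERSION column should show the new RKE2 version
    NAME     STATUS   ROLES                       AGE   VERSION
    node-1   Ready    control-plane,etcd,master   10d   v1.30.3+rke2r1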
  6. Mark the node as schedulable:

    kubectl uncordon <node_name>

26.1.3 Edge Helm charts

Note

This section assumes that helm is installed on your system and that you have a valid kubeconfig pointing to the desired cluster. For helm installation instructions, check the Installing Helm guide.

This section provides guidelines for upgrading the Helm chart components that make up a specific Edge release. It covers the known limitations, the Cluster API controllers migration, and the Helm chart upgrade procedures for both EIB and non-EIB deployments.

26.1.3.1 Known Limitations

This section covers known limitations of the current migration process. Users should go through the steps described here before upgrading their Helm charts.

26.1.3.1.1 Rancher upgrade

With the RKE2 version that Edge 3.1.0 utilizes, there is an issue where all Ingresses that do not specify an IngressClass are ignored by the ingress controller. To mitigate this, users need to manually add the name of the default IngressClass to the default Rancher Ingress.

For more information on the problem that the steps below fix, see the upstream RKE2 issue and, more specifically, this comment.

Note

In some cases the default IngressClass might have a different name than nginx.

Make sure to validate the name by running:

kubectl get ingressclass
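
# Example output (illustrative)
NAME    CONTROLLER             PARAMETERS   AGE
nginx   k8s.io/ingress-nginx   <none>       10d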

Before upgrading Rancher, make sure to do one of the following:

  • If Rancher was deployed through EIB (Chapter 9, Edge Image Builder):

    kubectl patch helmchart rancher -n <namespace> --type='merge' -p '{"spec":{"set":{"ingress.ingressClassName":"nginx"}}}'
  • If Rancher was deployed through Helm, add the --set ingress.ingressClassName=nginx flag to your upgrade command. For a full example of how to utilize this option, see Section 26.1.3.4.1, “Example”.

26.1.3.2 Cluster API controllers migration

Starting with Edge 3.1.0, Cluster API (CAPI) controllers on a Metal3 management cluster are managed via Rancher Turtles.

To migrate the CAPI controllers to Edge 3.1.0 compatible versions, install the Rancher Turtles chart:

helm install rancher-turtles oci://registry.suse.com/edge/3.1/rancher-turtles-chart --version 0.3.2 --namespace rancher-turtles-system --create-namespace

After some time, the controller pods running in the capi-system, capm3-system, rke2-bootstrap-system and rke2-control-plane-system namespaces are upgraded to the Edge 3.1.0 compatible controller versions.
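
You can watch the rollout with kubectl, for example:

kubectl get pods -n capi-system
kubectl get pods -n capm3-system
kubectl get pods -n rke2-bootstrap-system
kubectl get pods -n rke2-control-plane-system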

For information on how to install Rancher Turtles in an air-gapped environment, refer to Rancher Turtles air-gapped installation (Section 26.1.3.2.1, “Rancher Turtles air-gapped installation”).

26.1.3.2.1 Rancher Turtles air-gapped installation
Note

The below steps assume that kubectl has been configured to connect to the management cluster that you wish to upgrade.

  1. Before installing the rancher-turtles-airgap-resources Helm chart mentioned below, ensure that it has correct ownership of the namespaces created by clusterctl:

    1. capi-system ownership change:

      kubectl label namespace capi-system app.kubernetes.io/managed-by=Helm --overwrite
      
      kubectl annotate namespace capi-system meta.helm.sh/release-name=rancher-turtles-airgap-resources --overwrite
      kubectl annotate namespace capi-system meta.helm.sh/release-namespace=rancher-turtles-system --overwrite
    2. capm3-system ownership change:

      kubectl label namespace capm3-system app.kubernetes.io/managed-by=Helm --overwrite
      
      kubectl annotate namespace capm3-system meta.helm.sh/release-name=rancher-turtles-airgap-resources --overwrite
      kubectl annotate namespace capm3-system meta.helm.sh/release-namespace=rancher-turtles-system --overwrite
    3. rke2-bootstrap-system ownership change:

      kubectl label namespace rke2-bootstrap-system app.kubernetes.io/managed-by=Helm --overwrite
      
      kubectl annotate namespace rke2-bootstrap-system meta.helm.sh/release-name=rancher-turtles-airgap-resources --overwrite
      kubectl annotate namespace rke2-bootstrap-system meta.helm.sh/release-namespace=rancher-turtles-system --overwrite
    4. rke2-control-plane-system ownership change:

      kubectl label namespace rke2-control-plane-system app.kubernetes.io/managed-by=Helm --overwrite
      
      kubectl annotate namespace rke2-control-plane-system meta.helm.sh/release-name=rancher-turtles-airgap-resources --overwrite
      kubectl annotate namespace rke2-control-plane-system meta.helm.sh/release-namespace=rancher-turtles-system --overwrite
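
    As a quick sanity check before proceeding, you can confirm that the ownership metadata was applied, for example:

      kubectl get namespace capi-system -o yaml | grep meta.helm.sh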
  2. Pull the rancher-turtles-airgap-resources and rancher-turtles chart archives:

    helm pull oci://registry.suse.com/edge/3.1/rancher-turtles-airgap-resources-chart --version 0.3.2
    helm pull oci://registry.suse.com/edge/3.1/rancher-turtles-chart --version 0.3.2
  3. To provide the needed resources for an air-gapped installation of the Rancher Turtles Helm chart, install the rancher-turtles-airgap-resources Helm chart:

    helm install rancher-turtles-airgap-resources ./rancher-turtles-airgap-resources-chart-0.3.2.tgz --namespace rancher-turtles-system --create-namespace
  4. Configure the cluster-api-operator in the Rancher Turtles Helm chart to fetch controller data from the correct locations:

    cat > values.yaml <<EOF
    cluster-api-operator:
      cluster-api:
        core:
          fetchConfig:
            selector: "{\"matchLabels\": {\"provider-components\": \"core\"}}"
        rke2:
          bootstrap:
            fetchConfig:
              selector: "{\"matchLabels\": {\"provider-components\": \"rke2-bootstrap\"}}"
          controlPlane:
            fetchConfig:
              selector: "{\"matchLabels\": {\"provider-components\": \"rke2-control-plane\"}}"
        metal3:
          infrastructure:
            fetchConfig:
              selector: "{\"matchLabels\": {\"provider-components\": \"metal3\"}}"
    EOF
  5. Install Rancher Turtles:

    helm install rancher-turtles ./rancher-turtles-chart-0.3.2.tgz --namespace rancher-turtles-system --create-namespace --values values.yaml

After some time, the controller pods running in the capi-system, capm3-system, rke2-bootstrap-system and rke2-control-plane-system namespaces will be upgraded to the Edge 3.1.0 compatible controller versions.

26.1.3.3 Edge Helm chart upgrade - EIB

This section explains how to upgrade a Helm chart from the Edge component stack, deployed via EIB (Chapter 9, Edge Image Builder), to an Edge 3.1.0 compatible version.

26.1.3.3.1 Prerequisites

In Edge 3.1, EIB changes the way it deploys charts and no longer uses the RKE2/K3s manifest auto-deploy mechanism.

This means that, before upgrading to an Edge 3.1.0 compatible version, any Helm charts deployed on an Edge 3.0 environment using EIB should have their chart manifests removed from the manifests directory of the relevant Kubernetes distribution.

Warning

If this is not done, any chart upgrade will be reverted by the RKE2/K3s process upon restart of the process or the operating system.

Note

Deleting manifests from the RKE2/K3s directory will not result in the resources being removed from the cluster.

As per the RKE2/K3s documentation:

"Deleting files out of this directory will not delete the corresponding resources from the cluster."

Removing any EIB deployed chart manifests involves the following steps:

  1. To ensure disaster recovery, make a backup of each EIB deployed manifest:

    Note

    EIB deployed manifests will have the "edge.suse.com/source: edge-image-builder" label.

    Note

    Make sure that the <backup_location> that you provide to the below command exists.

    grep -lrIZ 'edge.suse.com/source: edge-image-builder' /var/lib/rancher/rke2/server/manifests | xargs -0 -I{} cp {} <backup_location>
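
    # Example with a hypothetical backup location:
    mkdir -p /tmp/eib-manifest-backup
    grep -lrIZ 'edge.suse.com/source: edge-image-builder' /var/lib/rancher/rke2/server/manifests | xargs -0 -I{} cp {} /tmp/eib-manifest-backup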
  2. Remove all EIB deployed manifests:

    grep -lrIZ 'edge.suse.com/source: edge-image-builder' /var/lib/rancher/rke2/server/manifests | xargs -0 rm -f --
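
    After the removal, re-running the search is a quick sanity check; it should produce no output:

    grep -lrI 'edge.suse.com/source: edge-image-builder' /var/lib/rancher/rke2/server/manifests
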
26.1.3.3.2 Upgrade steps
Note

The below steps assume that kubectl has been configured to connect to the management cluster that you wish to upgrade.

  1. Locate the Edge 3.1.0 compatible chart version that you wish to migrate to by looking at the release notes (Section 36.1, “Abstract”).

  2. Pull the desired Helm chart version:

    • For charts hosted in HTTP repositories:

      helm repo add <chart_repo_name> <chart_repo_url>
      
      helm pull <chart_repo_name>/<chart_name> --version=X.Y.Z
    • For charts hosted in OCI registries:

      helm pull oci://<chart_oci_url> --version=X.Y.Z
  3. Encode the pulled chart archive:

    base64 -w 0 <chart_name>-X.Y.Z.tgz > <chart_name>-X.Y.Z.txt
  4. Check the Known Limitations (Section 26.1.3.1, “Known Limitations”) section for any additional steps that may be required for the charts.

  5. Patch the existing HelmChart resource:

    Important

    Make sure to pass the HelmChart name, namespace, encoded file and version to the command below.

    kubectl patch helmchart <helmchart_name> --type=merge -p "{\"spec\":{\"chartContent\":\"$(cat <helmchart_name>-X.Y.Z.txt)\", \"version\":\"<helmchart_version>\"}}" -n <helmchart_namespace>
  6. This signals the helm-controller to schedule a Job, which creates a Pod that upgrades the desired Helm chart. To view the logs of the created Pod, follow these steps:

    1. Locate the created Pod:

      kubectl get pods -l helmcharts.helm.cattle.io/chart=<helmchart_name> -n <namespace>
    2. View the Pod logs:

      kubectl logs <pod_name> -n <namespace>

A Pod in Completed state with error-free logs indicates a successful upgrade of the desired Helm chart.
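
If helm is available, you can additionally confirm the deployed chart version in the namespace the chart is installed into (for example, cattle-system for Rancher):

helm list -n <release_namespace>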

For a full example of how to upgrade a Helm chart deployed through EIB, refer to the Example (Section 26.1.3.3.3, “Example”) section.

26.1.3.3.3 Example

This section provides an example of upgrading the Rancher and Metal3 Helm charts to a version compatible with the Edge 3.1.0 release. It follows the steps outlined in the "Upgrade Steps" (Section 26.1.3.3.2, “Upgrade steps”) section.

Use-case:

  • Current Rancher and Metal3 charts need to be upgraded to an Edge 3.1.0 compatible version.

    • Rancher is deployed through EIB and its HelmChart is deployed in the default namespace.

    • Metal3 is deployed through EIB and its HelmChart is deployed in the kube-system namespace.

Steps:

  1. Locate the desired versions for Rancher and Metal3 from the release notes (Section 36.1, “Abstract”). For Edge 3.1.0, these versions are 2.9.1 for Rancher and 0.8.1 for Metal3.

  2. Pull the desired chart versions:

    • For Rancher:

      helm repo add rancher-prime https://charts.rancher.com/server-charts/prime
      helm pull rancher-prime/rancher --version=2.9.1
    • For Metal3:

      helm pull oci://registry.suse.com/edge/3.1/metal3-chart --version=0.8.1
  3. Encode the Rancher and Metal3 Helm charts:

    base64 -w 0 rancher-2.9.1.tgz > rancher-2.9.1.txt
    base64 -w 0 metal3-chart-0.8.1.tgz > metal3-chart-0.8.1.txt
  4. The directory structure should look similar to this:

    .
    ├── metal3-chart-0.8.1.tgz
    ├── metal3-chart-0.8.1.txt
    ├── rancher-2.9.1.tgz
    └── rancher-2.9.1.txt
  5. Check the Known Limitations (Section 26.1.3.1, “Known Limitations”) section for any additional steps that may be required for the charts.

    • For Rancher:

      • Execute the command described in the Known Limitations section:

        # In this example the rancher helmchart is in the 'default' namespace
        kubectl patch helmchart rancher -n default --type='merge' -p '{"spec":{"set":{"ingress.ingressClassName":"nginx"}}}'
      • Validate that the ingressClassName property was successfully added:

        kubectl get ingress rancher -n cattle-system -o yaml | grep -w ingressClassName
        
        # Example output
          ingressClassName: nginx
  6. Patch the Rancher and Metal3 HelmChart resources:

    # Rancher deployed in the default namespace
    kubectl patch helmchart rancher --type=merge -p "{\"spec\":{\"chartContent\":\"$(cat rancher-2.9.1.txt)\", \"version\":\"2.9.1\"}}" -n default
    
    # Metal3 deployed in the kube-system namespace
    kubectl patch helmchart metal3 --type=merge -p "{\"spec\":{\"chartContent\":\"$(cat metal3-chart-0.8.1.txt)\", \"version\":\"0.8.1\"}}" -n kube-system
  7. Locate the helm-controller created Rancher and Metal3 Pods:

    • Rancher:

      kubectl get pods -l helmcharts.helm.cattle.io/chart=rancher -n default
      
      # Example output
      NAME                         READY   STATUS      RESTARTS   AGE
      helm-install-rancher-wg7nf   0/1     Completed   0          5m2s
    • Metal3:

      kubectl get pods -l helmcharts.helm.cattle.io/chart=metal3 -n kube-system
      
      # Example output
      NAME                        READY   STATUS      RESTARTS   AGE
      helm-install-metal3-57lz5   0/1     Completed   0          4m35s
  8. View the logs of each pod using kubectl logs:

    • Rancher:

      kubectl logs helm-install-rancher-wg7nf -n default
      
      # Example successful output
      ...
      Upgrading rancher
      + helm_v3 upgrade --namespace cattle-system --create-namespace --version 2.9.1 --set-string global.clusterCIDR=10.42.0.0/16 --set-string global.clusterCIDRv4=10.42.0.0/16 --set-string global.clusterDNS=10.43.0.10 --set-string global.clusterDomain=cluster.local --set-string global.rke2DataDir=/var/lib/rancher/rke2 --set-string global.serviceCIDR=10.43.0.0/16 --set-string ingress.ingressClassName=nginx rancher /tmp/rancher.tgz --values /config/values-01_HelmChart.yaml
      Release "rancher" has been upgraded. Happy Helming!
      ...
    • Metal3:

      kubectl logs helm-install-metal3-57lz5 -n kube-system
      
      # Example successful output
      ...
      Upgrading metal3
      + echo 'Upgrading metal3'
      + shift 1
      + helm_v3 upgrade --namespace metal3-system --create-namespace --version 0.8.1 --set-string global.clusterCIDR=10.42.0.0/16 --set-string global.clusterCIDRv4=10.42.0.0/16 --set-string global.clusterDNS=10.43.0.10 --set-string global.clusterDomain=cluster.local --set-string global.rke2DataDir=/var/lib/rancher/rke2 --set-string global.serviceCIDR=10.43.0.0/16 metal3 /tmp/metal3.tgz --values /config/values-01_HelmChart.yaml
      Release "metal3" has been upgraded. Happy Helming!
      ...
  9. Validate that the pods for the specific chart are running:

    # For Rancher
    kubectl get pods -n cattle-system
    
    # For Metal3
    kubectl get pods -n metal3-system

26.1.3.4 Edge Helm chart upgrade - non-EIB

This section explains how to upgrade a Helm chart from the Edge component stack, deployed via Helm, to an Edge 3.1.0 compatible version.

Note

The below steps assume that kubectl has been configured to connect to the management cluster that you wish to upgrade.

  1. Locate the Edge 3.1.0 compatible chart version that you wish to migrate to by looking at the release notes (Section 36.1, “Abstract”).

  2. Get the custom values of the currently running helm chart:

    helm get values <chart_name> -n <chart_namespace> -o yaml > <chart_name>-values.yaml
  3. Check the Known Limitations (Section 26.1.3.1, “Known Limitations”) section for any additional steps or changes that may be required for the charts.

  4. Upgrade the helm chart to the desired version:

    • For non air-gapped setups:

      # For charts hosted in HTTP repositories
      helm upgrade <chart_name> <chart_repo>/<chart_name> --version <version> --values <chart_name>-values.yaml -n <chart_namespace>
      
      # For charts hosted in OCI registries
      helm upgrade <chart_name> oci://<oci_registry_url>/<chart_name> --namespace <chart_namespace> --values <chart_name>-values.yaml --version=X.Y.Z
    • For air-gapped setups:

      • On a machine with access to the internet, pull the desired chart version:

        # For charts hosted in HTTP repositories
        helm pull <chart_repo_name>/<chart_name> --version=X.Y.Z
        
        # For charts hosted in OCI registries
        helm pull oci://<chart_oci_url> --version=X.Y.Z
      • Transfer the chart archive to your management cluster:

        scp <chart>.tgz <machine-address>:<filesystem-path>
      • Upgrade the chart:

        helm upgrade <chart_name> <chart>.tgz --values <chart_name>-values.yaml -n <chart_namespace>
  5. Verify that the chart pods are running:

    kubectl get pods -n <chart_namespace>

You may want to verify the upgrade further by checking resources specific to your chart. After this has been done, the upgrade can be considered successful.

For a full example, refer to the Example (Section 26.1.3.4.1, “Example”) section.

26.1.3.4.1 Example

This section provides an example of upgrading the Rancher and Metal3 Helm charts to a version compatible with the Edge 3.1.0 release. It follows the steps outlined in the "Edge Helm chart upgrade - non-EIB" (Section 26.1.3.4, “Edge Helm chart upgrade - non-EIB”) section.

Use-case:

  • Current Rancher and Metal3 charts need to be upgraded to an Edge 3.1.0 compatible version.

    • The Rancher helm chart is deployed from the Rancher Prime repository in the cattle-system namespace. The Rancher Prime repository was added in the following way:

      helm repo add rancher-prime https://charts.rancher.com/server-charts/prime
    • Metal3 is deployed from the registry.suse.com OCI registry in the metal3-system namespace.

Steps:

  1. Locate the desired versions for Rancher and Metal3 from the release notes (Section 36.1, “Abstract”). For Edge 3.1.0, these versions are 2.9.1 for Rancher and 0.8.1 for Metal3.

  2. Get the custom values of the currently running Rancher and Metal3 helm charts:

    # For Rancher
    helm get values rancher -n cattle-system -o yaml > rancher-values.yaml
    
    # For Metal3
    helm get values metal3 -n metal3-system -o yaml > metal3-values.yaml
  3. Check the Known Limitations (Section 26.1.3.1, “Known Limitations”) section for any additional steps that may be required for the charts.

    • For Rancher, the --set ingress.ingressClassName=nginx option needs to be added to the upgrade command.

  4. Upgrade the Rancher and Metal3 helm charts:

    # For Rancher
    helm upgrade rancher rancher-prime/rancher --version 2.9.1 --set ingress.ingressClassName=nginx --values rancher-values.yaml -n cattle-system
    
    # For Metal3
    helm upgrade metal3 oci://registry.suse.com/edge/3.1/metal3-chart --version 0.8.1 --values metal3-values.yaml -n metal3-system
  5. Validate that the Rancher and Metal3 pods are running:

    # For Rancher
    kubectl get pods -n cattle-system
    
    # For Metal3
    kubectl get pods -n metal3-system

26.2 Downstream clusters

This section covers how to migrate your Edge 3.0.X downstream clusters to Edge 3.1.0.

26.2.1 Prerequisites

This section covers any prerequisite steps that users should go through before beginning the migration process.

26.2.1.1 Charts deployed through EIB

In Edge 3.1, EIB (Chapter 9, Edge Image Builder) changes the way it deploys charts and no longer uses the RKE2/K3s manifest auto-deploy mechanism.

This means that, before migrating to an Edge 3.1.0 compatible version, any Helm charts deployed on an Edge 3.0 environment using EIB should have their chart manifests removed from the manifests directory of the relevant Kubernetes distribution.

Warning

If this is not done, any chart upgrade will be reverted by the RKE2/K3s process upon restart of the process or the operating system.

On downstream clusters, the removal of the EIB created chart manifest files is handled by a Fleet called eib-charts-migration-prep located in the suse-edge/fleet-examples repository.

Warning

Using the eib-charts-migration-prep Fleet file from the main branch is not advised. The Fleet file should always be used from a valid Edge release tag.

Important

This process requires that the System Upgrade Controller (SUC) is already deployed. For installation details, refer to "Installing the System Upgrade Controller" (Section 19.2, “Installing the System Upgrade Controller”).

Once created, the eib-charts-migration-prep Fleet ships an SUC (Chapter 19, System Upgrade Controller) Plan containing a script that does the following:

  1. Determine whether the node on which it is running is an initializer node. If it is not, the script does nothing.

  2. If the node is an initializer, it will:

    • Detect all HelmChart resources deployed by EIB.

    • Locate the manifest file of each of the above HelmChart resources.

      Note

      HelmChart manifest files are located only on the initializer node under /var/lib/rancher/rke2/server/manifests for RKE2 and /var/lib/rancher/k3s/server/manifests for K3s.

    • To ensure disaster recovery, make a backup of each located manifest under /tmp.

      Note

      The backup location can be changed by defining the MANIFEST_BACKUP_DIR environment variable in the SUC Plan file of the Fleet.

    • Remove each manifest file related to a HelmChart resource deployed by EIB.

      Note

      Deleting manifests from the RKE2/K3s directory will not result in the resources being removed from the cluster.

      As per the RKE2/K3s documentation:

      "Deleting files out of this directory will not delete the corresponding resources from the cluster."

Depending on your use-case, the eib-charts-migration-prep Fleet can be deployed in the following two ways:

26.2.1.1.1 EIB chart manifest removal Fleet deployment - GitRepo
  1. On the management cluster, deploy the following GitRepo resource:

    Note

    Before deploying the resource below, you must provide a valid targets configuration, so that Fleet knows on which downstream clusters to deploy your resource. For information on how to map to downstream clusters, see Mapping to Downstream Clusters.

    kubectl apply -n fleet-default -f - <<EOF
    apiVersion: fleet.cattle.io/v1alpha1
    kind: GitRepo
    metadata:
      name: eib-chart-migration-prep
    spec:
      revision: release-3.1.0
      paths:
      - fleets/day2/system-upgrade-controller-plans/eib-charts-migration-prep
      repo: https://github.com/suse-edge/fleet-examples.git
      targets:
      - clusterSelector: CHANGEME
      # Example matching all clusters:
      # targets:
      # - clusterSelector: {}
    EOF

    Alternatively, you can also create the resource through Rancher's UI, if it is available. For more information, see Accessing Fleet in the Rancher UI.

  2. By creating the above GitRepo on your management cluster, Fleet will deploy an SUC Plan (called eib-chart-migration-prep) on each downstream cluster that matches the targets specified in the GitRepo. To monitor the lifecycle of this plan, refer to "Monitoring System Upgrade Controller Plans" (Section 19.3, “Monitoring System Upgrade Controller Plans”).

26.2.1.1.2 EIB chart manifest removal Fleet deployment - Bundle

This section describes how to convert the eib-chart-migration-prep Fleet to a Bundle resource that can then be used in air-gapped environments that cannot utilize a local git server.

Steps:

  1. On a machine with network access, download the fleet-cli:

    Note
    Note

    Make sure that the version of the fleet-cli you download matches the version of Fleet that has been deployed on your cluster.

    • For Mac users, there is a fleet-cli Homebrew formula.

    • For Linux users, the binaries are available as assets on each Fleet release.

      • Retrieve the desired binary:

        • Linux AMD64:

          curl -L -o fleet-cli https://github.com/rancher/fleet/releases/download/<FLEET_VERSION>/fleet-linux-amd64
        • Linux ARM64:

          curl -L -o fleet-cli https://github.com/rancher/fleet/releases/download/<FLEET_VERSION>/fleet-linux-arm64
      • Move the binary to /usr/local/bin:

        sudo mkdir -p /usr/local/bin
        sudo mv ./fleet-cli /usr/local/bin/fleet-cli
        sudo chmod 755 /usr/local/bin/fleet-cli
  2. Clone the suse-edge/fleet-examples release from which you wish to use the eib-charts-migration-prep fleet:

    git clone -b release-3.1.0 https://github.com/suse-edge/fleet-examples.git
  3. Navigate to the eib-charts-migration-prep fleet, located in the fleet-examples repo:

    cd fleet-examples/fleets/day2/system-upgrade-controller-plans/eib-charts-migration-prep
  4. Create a targets.yaml file that points to all downstream clusters on which you wish to deploy the fleet:

    cat > targets.yaml <<EOF
    targets:
    - clusterSelector: CHANGEME
    EOF

    For information on how to map to downstream clusters, see Mapping to Downstream Clusters.
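
    For example, a selector matching clusters labeled with a hypothetical environment label might look like this:

    targets:
    - clusterSelector:
        matchLabels:
          env: production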

  5. Proceed to build the Bundle:

    Note

    Make sure that you did not download the fleet-cli binary into the fleet-examples/fleets/day2/system-upgrade-controller-plans/eib-charts-migration-prep directory; otherwise it will be packaged with the Bundle, which is not advised.

    fleet-cli apply --compress --targets-file=targets.yaml -n fleet-default -o - eib-chart-migration-prep . > eib-chart-migration-prep-bundle.yaml

    For more information about this process, see Convert a Helm Chart into a Bundle.

    For more information about the fleet-cli apply command, see fleet apply.

  6. Transfer the eib-chart-migration-prep-bundle.yaml bundle to your management cluster machine:

    scp eib-chart-migration-prep-bundle.yaml <machine-address>:<filesystem-path>
  7. On your management cluster, deploy the eib-chart-migration-prep-bundle.yaml Bundle:

    kubectl apply -f eib-chart-migration-prep-bundle.yaml
  8. On your management cluster, validate that the Bundle is deployed:

    kubectl get bundle eib-chart-migration-prep -n fleet-default
    NAME                       BUNDLEDEPLOYMENTS-READY   STATUS
    eib-chart-migration-prep   1/1
  9. By creating the above Bundle on your management cluster, Fleet will deploy an SUC Plan (called eib-chart-migration-prep) on each downstream cluster that matches the targets specified in the targets.yaml file. To monitor the lifecycle of this plan, refer to "Monitoring System Upgrade Controller Plans" (Section 19.3, “Monitoring System Upgrade Controller Plans”).
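
    On a downstream cluster, a quick way to confirm that the Plan has been created is to query it directly (assuming SUC runs in the cattle-system namespace, as described in the SUC chapter referenced above):

    kubectl get plans.upgrade.cattle.io -n cattle-system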

26.2.2 Migration steps

After executing the prerequisite (Section 26.2.1, “Prerequisites”) steps, you can proceed to follow the downstream cluster (Chapter 28, Downstream clusters) upgrade documentation for the Edge 3.1.0 release.