
28 Downstream clusters

This section covers how to do various Day 2 operations for different parts of your downstream cluster using your management cluster.

28.1 Introduction

This section is meant to be a starting point for the Day 2 operations documentation. You can find the following information:

  1. The default components (Section 28.1.1, “Components”) used to achieve Day 2 operations over multiple downstream clusters.

  2. Determining which Day 2 resources you should use for your specific use-case (Section 28.1.2, “Determine your use-case”).

  3. The suggested workflow sequence (Section 28.1.3, “Day 2 workflow”) for Day 2 operations.

28.1.1 Components

Below you can find a description of the default components that should be set up on either your management cluster or your downstream clusters so that you can successfully perform Day 2 operations.

28.1.1.1 Rancher

Note

For use-cases where you want to utilise Fleet (Chapter 6, Fleet) without Rancher, you can skip the Rancher component altogether.

Responsible for the management of your downstream clusters. Should be deployed on your management cluster.

For more information, see Chapter 4, Rancher.

28.1.1.2 Fleet

Responsible for multi-cluster resource deployment.

Typically offered by the Rancher component. For use-cases where Rancher is not used, can be deployed as a standalone component.

For more information on installing Fleet as a standalone component, see Fleet’s Installation Details.

For more information regarding the Fleet component, see Chapter 6, Fleet.

Important

This documentation heavily relies on Fleet and more specifically on the GitRepo and Bundle resources (more on this in Section 28.1.2, “Determine your use-case”) for establishing a GitOps way of automating the deployment of resources related to Day 2 operations.

For use-cases where a third-party GitOps tool is desired, see:

  1. For OS upgrades - Section 28.2.4.3, “SUC Plan deployment - third-party GitOps workflow”

  2. For Kubernetes distribution upgrades - Section 28.3.4.3, “SUC Plan deployment - third-party GitOps workflow”

  3. For Helm chart upgrades - retrieve the chart version supported by the desired Edge release from the Section 36.1, “Abstract” page and populate the chart version and URL in your third-party GitOps tool

28.1.1.3 System Upgrade Controller (SUC)

System Upgrade Controller (SUC) is responsible for executing tasks on specified nodes based on configuration data provided through a custom resource, called a Plan.

Note

For SUC to support the different Day 2 operations, it must be deployed on each downstream cluster that requires an upgrade.

For more information about the SUC component and how it fits in the Edge stack, see the System Upgrade Controller (Chapter 19, System Upgrade Controller) component documentation.

For information on how to deploy SUC on your downstream clusters, first determine your use-case (Section 28.1.2, “Determine your use-case”) and then refer to System Upgrade Controller installation - GitRepo (Section 19.2.1.1, “System Upgrade Controller installation - GitRepo”), or System Upgrade Controller installation - Bundle (Section 19.2.1.2, “System Upgrade Controller installation - Bundle”).
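
A quick way to verify that SUC is up and running on a downstream cluster is to check its Deployment. The following is a minimal sketch which assumes that SUC was installed into the cattle-system namespace, as in the Edge examples; adjust the namespace to match your installation:

  # Run against the downstream cluster
  kubectl get deployment system-upgrade-controller -n cattle-system

  # The controller logs are useful when Plans are not being picked up
  kubectl logs deployment/system-upgrade-controller -n cattle-system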

28.1.2 Determine your use-case

As mentioned previously, resources related to Day 2 operations are propagated to downstream clusters using Fleet’s GitRepo and Bundle resources.

Below you can find more information regarding what these resources do and for which Day 2 use-cases they should be used.

28.1.2.1 GitRepo

A GitRepo is a Fleet (Chapter 6, Fleet) resource that represents a Git repository from which Fleet can create Bundles. Each Bundle is created based on configuration paths defined inside of the GitRepo resource. For more information, see the GitRepo documentation.

In terms of Day 2 operations, GitRepo resources are normally used to deploy SUC or SUC Plans in non air-gapped environments that utilise a Fleet GitOps approach.

Alternatively, GitRepo resources can also be used to deploy SUC or SUC Plans in air-gapped environments, if you mirror your repository setup through a local Git server.

28.1.2.2 Bundle

Bundles hold raw Kubernetes resources that will be deployed on the targeted cluster. Usually they are created from a GitRepo resource, but there are use-cases where they can be deployed manually. For more information refer to the Bundle documentation.

In terms of Day 2 operations, Bundle resources are normally used to deploy SUC or SUC Plans in air-gapped environments that do not use some form of local GitOps procedure (e.g. a local Git server).

Alternatively, if your use-case does not allow for a GitOps workflow (e.g. using a Git repository), Bundle resources can also be used to deploy SUC or SUC Plans in non air-gapped environments.

28.1.3 Day 2 workflow

The following is the Day 2 workflow that should be followed when upgrading a downstream cluster to a specific Edge release:

  1. OS upgrade (Section 28.2, “OS upgrade”)

  2. Kubernetes version upgrade (Section 28.3, “Kubernetes version upgrade”)

  3. Helm chart upgrade (Section 28.4, “Helm chart upgrade”)

28.2 OS upgrade

28.2.1 Components

This section covers the custom components that the OS upgrade process uses over the default Day 2 components (Section 28.1.1, “Components”).

28.2.1.1 systemd.service

A different systemd.service is created depending on what upgrade your OS requires from one Edge version to another:

  • For Edge versions that require the same OS version (e.g. 6.0), the os-pkg-update.service will be created. It uses the transactional-update command to perform a normal package upgrade.

  • For Edge versions that require an OS version migration (e.g. 5.5 → 6.0), the os-migration.service will be created. It uses transactional-update to perform:

    • First, a normal package upgrade. This is done to ensure that all packages are at their latest version before the migration, mitigating any failures related to old package versions.

    • After that it proceeds with the OS migration process by utilising the zypper migration command.

Shipped through a SUC Plan, which should be located on each downstream cluster that is in need of an OS upgrade.
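
Once an upgrade has been triggered, the created service can also be inspected directly on a node. The following is a minimal sketch which assumes SSH access to the node and the default service names described above; since the service is cleaned up after it completes, journalctl is usually the more reliable source:

  # For package updates
  systemctl status os-pkg-update.service
  journalctl -u os-pkg-update.service

  # For OS migrations
  journalctl -u os-migration.service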

28.2.2 Requirements

General:

  1. SCC registered machine - All downstream cluster nodes should be registered to https://scc.suse.com/. This is needed so that the os-pkg-update.service/os-migration.service can successfully connect to the needed OS RPM repositories.

    Important

    For Edge releases that require a new OS version (e.g. Edge 3.1), make sure that your SCC key supports the migration to the new version (e.g. for Edge 3.1, the SCC key should support the SLE Micro 5.5 → 6.0 migration).

  2. Make sure that SUC Plan tolerations match node tolerations - If your Kubernetes cluster nodes have custom taints, make sure to add tolerations for those taints in the SUC Plans. By default SUC Plans have tolerations only for control-plane nodes. Default tolerations include:

    • CriticalAddonsOnly=true:NoExecute

    • node-role.kubernetes.io/control-plane:NoSchedule

    • node-role.kubernetes.io/etcd:NoExecute

      Note

      Any additional tolerations must be added under the .spec.tolerations section of each Plan. SUC Plans related to the OS upgrade can be found in the suse-edge/fleet-examples repository under fleets/day2/system-upgrade-controller-plans/os-upgrade. Make sure you use the Plans from a valid repository release tag.

      An example of defining custom tolerations for the control-plane SUC Plan would look like this:

      apiVersion: upgrade.cattle.io/v1
      kind: Plan
      metadata:
        name: cp-os-upgrade-edge-3XX
      spec:
        ...
        tolerations:
        # default tolerations
        - key: "CriticalAddonsOnly"
          operator: "Equal"
          value: "true"
          effect: "NoExecute"
        - key: "node-role.kubernetes.io/control-plane"
          operator: "Equal"
          effect: "NoSchedule"
        - key: "node-role.kubernetes.io/etcd"
          operator: "Equal"
          effect: "NoExecute"
        # custom toleration
        - key: "foo"
          operator: "Equal"
          value: "bar"
          effect: "NoSchedule"
      ...

Air-gapped:

  1. Mirror SUSE RPM repositories - OS RPM repositories should be locally mirrored so that os-pkg-update.service/os-migration.service can have access to them. This can be achieved using RMT.

28.2.3 Update procedure

Note

This section assumes you will be deploying the OS upgrade SUC Plan using Fleet (Chapter 6, Fleet). If you intend to deploy the SUC Plan using a different approach, refer to Section 28.2.4.3, “SUC Plan deployment - third-party GitOps workflow”.

The OS upgrade procedure revolves around deploying SUC Plans to downstream clusters. These plans then hold information about how and on which nodes to deploy the os-pkg-update.service/os-migration.service. For information regarding the structure of a SUC Plan, refer to the upstream documentation.

OS upgrade SUC Plans are shipped in the following ways:

  • Through a GitRepo resource - Section 28.2.4.1, “SUC Plan deployment - GitRepo resource”

  • Through a Bundle resource - Section 28.2.4.2, “SUC Plan deployment - Bundle resource”

To determine which resource you should use, refer to Section 28.1.2, “Determine your use-case”.

For a full overview of what happens during the upgrade procedure, refer to the Section 28.2.3.1, “Overview” section.

28.2.3.1 Overview

This section aims to describe the full workflow that the OS upgrade process goes through from start to finish.

Figure 28.1: OS upgrade workflow

OS upgrade steps:

  1. Based on their use-case, the user determines whether to use a GitRepo or a Bundle resource for the deployment of the OS upgrade SUC Plans to the desired downstream clusters. For information on how to map a GitRepo/Bundle to a specific set of downstream clusters, see Mapping to Downstream Clusters.

    1. If you are unsure whether you should use a GitRepo or a Bundle resource for the SUC Plan deployment, refer to Section 28.1.2, “Determine your use-case”.

    2. For GitRepo/Bundle configuration options, refer to Section 28.2.4.1, “SUC Plan deployment - GitRepo resource” or Section 28.2.4.2, “SUC Plan deployment - Bundle resource”.

  2. The user deploys the configured GitRepo/Bundle resource to the fleet-default namespace in their management cluster. This is done either manually or through the Rancher UI if it is available.

  3. Fleet (Chapter 6, Fleet) constantly monitors the fleet-default namespace and immediately detects the newly deployed GitRepo/Bundle resource. For more information regarding which namespaces Fleet monitors, refer to Fleet’s Namespaces documentation.

  4. If the user has deployed a GitRepo resource, Fleet will reconcile the GitRepo and based on its paths and fleet.yaml configurations it will deploy a Bundle resource in the fleet-default namespace. For more information, refer to Fleet’s GitRepo Contents documentation.

  5. Fleet then proceeds to deploy the Kubernetes resources from this Bundle to all the targeted downstream clusters. In the context of OS upgrades, Fleet deploys the following resources from the Bundle:

    1. Agent SUC Plan - instructs SUC on how to do an OS upgrade on cluster agent nodes. It is not interpreted if the cluster consists only of control-plane nodes. It executes after all control-plane SUC plans have completed successfully.

    2. Control-plane SUC Plan - instructs SUC on how to do an OS upgrade on cluster control-plane nodes.

    3. Script Secret - referenced in each SUC Plan; ships an upgrade.sh script responsible for creating the os-pkg-update.service/os-migration.service which will do the actual OS upgrade.

    4. Script Data ConfigMap - referenced in each SUC Plan; ships configurations used by the upgrade.sh script.

      Note

      The above resources will be deployed in the cattle-system namespace of each downstream cluster.

  6. On the downstream cluster, SUC picks up the newly deployed SUC Plans and deploys an Update Pod on each node that matches the node selector defined in the SUC Plan. For information on how to monitor the SUC Plan Pod, refer to Section 19.3, “Monitoring System Upgrade Controller Plans”.

  7. The Update Pod (deployed on each node) mounts the script Secret and executes the upgrade.sh script that the Secret ships.

  8. The upgrade.sh proceeds to do the following:

    1. Based on its configurations, determine whether the OS needs a package update, or it needs to be migrated.

    2. Based on the above outcome it will create either an os-pkg-update.service (for package updates), or an os-migration.service (for migration). The service will be of type oneshot and will adopt the following workflow:

      1. For os-pkg-update.service:

        1. Update all package versions on the node OS by running transactional-update cleanup up

        2. After a successful transactional-update, schedule a system reboot so that the package version updates can take effect

      2. For os-migration.service:

        1. Update all package versions on the node OS by running transactional-update cleanup up. This is done to ensure that no old package versions cause an OS migration error.

        2. Proceed to migrate the OS to the desired values. Migration is done by utilising the zypper migration command.

        3. Schedule a system reboot so that the migration can take effect

    3. Start the os-pkg-update.service/os-migration.service and wait for it to complete.

    4. Clean up the os-pkg-update.service/os-migration.service after it has done its job. The service is removed from the system to ensure that no accidental executions/reboots happen in the future.

The OS upgrade procedure finishes with the system reboot. After the reboot, the OS package versions are upgraded and if the Edge release requires it, the OS might be migrated as well.
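
For a quick view of the upgrade progress from the downstream cluster itself, you can list the SUC Plans and the Jobs/Pods that SUC creates for them. The following is a minimal sketch which assumes the resources were deployed in the cattle-system namespace, as described above:

  kubectl get plans.upgrade.cattle.io -n cattle-system
  kubectl get jobs -n cattle-system
  kubectl get pods -n cattle-system

For the full monitoring procedure, refer to Section 19.3, “Monitoring System Upgrade Controller Plans”.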

28.2.4 OS upgrade - SUC Plan deployment

This section describes how to orchestrate the deployment of SUC Plans related to OS upgrades using Fleet’s GitRepo and Bundle resources.

28.2.4.1 SUC Plan deployment - GitRepo resource

A GitRepo resource that ships the needed OS upgrade SUC Plans can be deployed in one of the following ways:

  1. Through the Rancher UI - Section 28.2.4.1.1, “GitRepo creation - Rancher UI” (when Rancher is available).

  2. By manually deploying (Section 28.2.4.1.2, “GitRepo creation - manual”) the resource to your management cluster.

Once deployed, to monitor the OS upgrade process of the nodes of your targeted cluster, refer to the Section 19.3, “Monitoring System Upgrade Controller Plans” documentation.

28.2.4.1.1 GitRepo creation - Rancher UI
  1. In the upper left corner, ☰ → Continuous Delivery

  2. Go to Git Repos → Add Repository

If you use the suse-edge/fleet-examples repository:

  1. Repository URL - https://github.com/suse-edge/fleet-examples.git

  2. Watch → Revision - choose a release tag for the suse-edge/fleet-examples repository that you wish to use

  3. Under Paths add the path to the OS upgrade Fleets that you wish to use - fleets/day2/system-upgrade-controller-plans/os-upgrade

  4. Select Next to move to the target configuration section. Only select the clusters whose node packages you wish to upgrade

  5. Create

Alternatively, if you decide to use your own repository to host these files, you would need to provide your repo data above.

28.2.4.1.2 GitRepo creation - manual
  1. Choose the desired Edge release tag that you wish to apply the OS upgrade SUC Plans from (referenced below as ${REVISION}).

  2. Pull the GitRepo resource:

    curl -o os-update-gitrepo.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/${REVISION}/gitrepos/day2/os-update-gitrepo.yaml
  3. Edit the GitRepo configuration: under spec.targets, specify your desired target list. By default, the GitRepo resources from suse-edge/fleet-examples are NOT mapped to any downstream clusters.

    • To match all clusters change the default GitRepo target to:

      spec:
        targets:
        - clusterSelector: {}
    • Alternatively, if you want a more granular cluster selection see Mapping to Downstream Clusters

  4. Apply the GitRepo resources to your management cluster:

    kubectl apply -f os-update-gitrepo.yaml
  5. View the created GitRepo resource under the fleet-default namespace:

    kubectl get gitrepo os-upgrade -n fleet-default
    
    # Example output
    NAME            REPO                                              COMMIT         BUNDLEDEPLOYMENTS-READY   STATUS
    os-upgrade      https://github.com/suse-edge/fleet-examples.git   release-3.1.0  0/0

28.2.4.2 SUC Plan deployment - Bundle resource

A Bundle resource that ships the needed OS upgrade SUC Plans can be deployed in one of the following ways:

  1. Through the Rancher UI - Section 28.2.4.2.1, “Bundle creation - Rancher UI” (when Rancher is available).

  2. By manually deploying (Section 28.2.4.2.2, “Bundle creation - manual”) the resource to your management cluster.

Once deployed, to monitor the OS upgrade process of the nodes of your targeted cluster, refer to the Section 19.3, “Monitoring System Upgrade Controller Plans” documentation.

28.2.4.2.1 Bundle creation - Rancher UI
  1. In the upper left corner, click ☰ → Continuous Delivery

  2. Go to Advanced → Bundles

  3. Select Create from YAML

  4. From here you can create the Bundle in one of the following ways:

    1. By manually copying the Bundle content to the Create from YAML page. Content can be retrieved from https://raw.githubusercontent.com/suse-edge/fleet-examples/${REVISION}/bundles/day2/system-upgrade-controller-plans/os-upgrade/os-upgrade-bundle.yaml, where ${REVISION} is the Edge release that you are using

    2. By cloning the suse-edge/fleet-examples repository to the desired release tag and selecting the Read from File option in the Create from YAML page. From there, navigate to bundles/day2/system-upgrade-controller-plans/os-upgrade directory and select os-upgrade-bundle.yaml. This will auto-populate the Create from YAML page with the Bundle content.

  5. Change the target clusters for the Bundle:

    • To match all downstream clusters change the default Bundle .spec.targets to:

      spec:
        targets:
        - clusterSelector: {}
    • For more granular downstream cluster mappings, see Mapping to Downstream Clusters.

  6. Select Create

28.2.4.2.2 Bundle creation - manual
  1. Choose the desired Edge release tag that you wish to apply the OS upgrade SUC Plans from (referenced below as ${REVISION}).

  2. Pull the Bundle resource:

    curl -o os-upgrade-bundle.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/${REVISION}/bundles/day2/system-upgrade-controller-plans/os-upgrade/os-upgrade-bundle.yaml
  3. Edit the Bundle target configurations: under spec.targets, provide your desired target list. By default, the Bundle resources from suse-edge/fleet-examples are NOT mapped to any downstream clusters.

    • To match all clusters change the default Bundle target to:

      spec:
        targets:
        - clusterSelector: {}
    • Alternatively, if you want a more granular cluster selection see Mapping to Downstream Clusters

  4. Apply the Bundle resources to your management cluster:

    kubectl apply -f os-upgrade-bundle.yaml
  5. View the created Bundle resource under the fleet-default namespace:

    kubectl get bundles -n fleet-default

28.2.4.3 SUC Plan deployment - third-party GitOps workflow

There might be use-cases where users would like to incorporate the OS upgrade SUC Plans into their own third-party GitOps workflow (e.g. Flux).

To get the OS upgrade resources that you need, first determine the Edge release tag of the suse-edge/fleet-examples repository that you would like to use.

After that, resources can be found at fleets/day2/system-upgrade-controller-plans/os-upgrade, where:

  • plan-control-plane.yaml - system-upgrade-controller Plan resource for control-plane nodes

  • plan-agent.yaml - system-upgrade-controller Plan resource for agent nodes

  • secret.yaml - secret that ships a script that creates the os-pkg-update.service/os-migration.service systemd.service

Important

These Plan resources are interpreted by the system-upgrade-controller and should be deployed on each downstream cluster that you wish to upgrade. For information on how to deploy the system-upgrade-controller, see Section 19.2, “Installing the System Upgrade Controller”.

To better understand how your GitOps workflow can be used to deploy the SUC Plans for OS upgrade, it can be beneficial to take a look at the overview (Section 28.2.3.1, “Overview”) of the update procedure using Fleet.
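
Regardless of the GitOps tool, what ultimately needs to land on each downstream cluster are the manifests listed above. As a minimal sketch, they could be validated by applying them directly with kubectl before wiring them into your GitOps tool; note that some fields may rely on values that Fleet would normally substitute through fleet.yaml, so review and adjust the manifests for your environment first:

  kubectl apply -f secret.yaml -n cattle-system
  kubectl apply -f plan-control-plane.yaml -n cattle-system
  kubectl apply -f plan-agent.yaml -n cattle-system
  # Also apply any other manifests shipped in the same directory
  # (e.g. ConfigMaps referenced by the Plans)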

28.3 Kubernetes version upgrade

Important

This section covers Kubernetes upgrades for downstream clusters that have NOT been created through a Rancher (Chapter 4, Rancher) instance. For information on how to upgrade the Kubernetes version of Rancher created clusters, see Upgrading and Rolling Back Kubernetes.

28.3.1 Components

This section covers the custom components that the Kubernetes upgrade process uses over the default Day 2 components (Section 28.1.1, “Components”).

28.3.1.1 rke2-upgrade

Image responsible for upgrading the RKE2 version of a specific node.

Shipped through a Pod created by SUC based on a SUC Plan. The Plan should be located on each downstream cluster that is in need of an RKE2 upgrade.

For more information regarding how the rke2-upgrade image performs the upgrade, see the upstream documentation.

28.3.1.2 k3s-upgrade

Image responsible for upgrading the K3s version of a specific node.

Shipped through a Pod created by SUC based on a SUC Plan. The Plan should be located on each downstream cluster that is in need of a K3s upgrade.

For more information regarding how the k3s-upgrade image performs the upgrade, see the upstream documentation.

28.3.2 Requirements

  1. Back up your Kubernetes distribution:

    1. For imported RKE2 clusters, see the RKE2 Backup and Restore documentation.

    2. For imported K3s clusters, see the K3s Backup and Restore documentation.

  2. Make sure that SUC Plan tolerations match node tolerations - If your Kubernetes cluster nodes have custom taints, make sure to add tolerations for those taints in the SUC Plans. By default SUC Plans have tolerations only for control-plane nodes. Default tolerations include:

    • CriticalAddonsOnly=true:NoExecute

    • node-role.kubernetes.io/control-plane:NoSchedule

    • node-role.kubernetes.io/etcd:NoExecute

      Note

      Any additional tolerations must be added under the .spec.tolerations section of each Plan. SUC Plans related to the Kubernetes version upgrade can be found in the suse-edge/fleet-examples repository under:

      • For RKE2 - fleets/day2/system-upgrade-controller-plans/rke2-upgrade

      • For K3s - fleets/day2/system-upgrade-controller-plans/k3s-upgrade

      Make sure you use the Plans from a valid repository release tag.

      An example of defining custom tolerations for the RKE2 control-plane SUC Plan would look like this:

      apiVersion: upgrade.cattle.io/v1
      kind: Plan
      metadata:
        name: rke2-plan-control-plane
      spec:
        ...
        tolerations:
        # default tolerations
        - key: "CriticalAddonsOnly"
          operator: "Equal"
          value: "true"
          effect: "NoExecute"
        - key: "node-role.kubernetes.io/control-plane"
          operator: "Equal"
          effect: "NoSchedule"
        - key: "node-role.kubernetes.io/etcd"
          operator: "Equal"
          effect: "NoExecute"
        # custom toleration
        - key: "foo"
          operator: "Equal"
          value: "bar"
          effect: "NoSchedule"
      ...

28.3.3 Upgrade procedure

Note

This section assumes you will be deploying SUC Plans using Fleet (Chapter 6, Fleet). If you intend to deploy the SUC Plan using a different approach, refer to Section 28.3.4.3, “SUC Plan deployment - third-party GitOps workflow”.

The Kubernetes version upgrade procedure revolves around deploying SUC Plans to downstream clusters. These plans hold information that instructs the SUC on which nodes to create Pods which run the rke2/k3s-upgrade images. For information regarding the structure of a SUC Plan, refer to the upstream documentation.

Kubernetes upgrade SUC Plans are shipped in the following ways:

  • Through a GitRepo resource - Section 28.3.4.1, “SUC Plan deployment - GitRepo resource”

  • Through a Bundle resource - Section 28.3.4.2, “SUC Plan deployment - Bundle resource”

To determine which resource you should use, refer to Section 28.1.2, “Determine your use-case”.

For a full overview of what happens during the update procedure, refer to the Section 28.3.3.1, “Overview” section.

28.3.3.1 Overview

This section aims to describe the full workflow that the Kubernetes version upgrade process goes through from start to finish.

Figure 28.2: Kubernetes version upgrade workflow

Kubernetes version upgrade steps:

  1. Based on their use-case, the user determines whether to use a GitRepo or a Bundle resource for the deployment of the Kubernetes upgrade SUC Plans to the desired downstream clusters. For information on how to map a GitRepo/Bundle to a specific set of downstream clusters, see Mapping to Downstream Clusters.

    1. If you are unsure whether you should use a GitRepo or a Bundle resource for the SUC Plan deployment, refer to Section 28.1.2, “Determine your use-case”.

    2. For GitRepo/Bundle configuration options, refer to Section 28.3.4.1, “SUC Plan deployment - GitRepo resource” or Section 28.3.4.2, “SUC Plan deployment - Bundle resource”.

  2. The user deploys the configured GitRepo/Bundle resource to the fleet-default namespace in their management cluster. This is done either manually or through the Rancher UI if it is available.

  3. Fleet (Chapter 6, Fleet) constantly monitors the fleet-default namespace and immediately detects the newly deployed GitRepo/Bundle resource. For more information regarding which namespaces Fleet monitors, refer to Fleet’s Namespaces documentation.

  4. If the user has deployed a GitRepo resource, Fleet will reconcile the GitRepo and based on its paths and fleet.yaml configurations it will deploy a Bundle resource in the fleet-default namespace. For more information, refer to Fleet’s GitRepo Contents documentation.

  5. Fleet then proceeds to deploy the Kubernetes resources from this Bundle to all the targeted downstream clusters. In the context of the Kubernetes version upgrade, Fleet deploys the following resources from the Bundle (depending on the Kubernetes distribution):

    1. rke2-plan-agent/k3s-plan-agent - instructs SUC on how to do a Kubernetes upgrade on cluster agent nodes. It is not interpreted if the cluster consists only of control-plane nodes.

    2. rke2-plan-control-plane/k3s-plan-control-plane - instructs SUC on how to do a Kubernetes upgrade on cluster control-plane nodes.

      Note

      The above SUC Plans will be deployed in the cattle-system namespace of each downstream cluster.

  6. On the downstream cluster, SUC picks up the newly deployed SUC Plans and deploys an Update Pod on each node that matches the node selector defined in the SUC Plan. For information on how to monitor the SUC Plan Pod, refer to Section 19.3, “Monitoring System Upgrade Controller Plans”.

  7. Depending on which SUC Plans you have deployed, the Update Pod will run either a rke2-upgrade or a k3s-upgrade image and will execute the following workflow on each cluster node:

    1. Cordon cluster node - to ensure that no pods are scheduled accidentally on this node while it is being upgraded, we mark it as unschedulable.

    2. Replace the rke2/k3s binary that is installed on the node OS with the binary shipped by the rke2-upgrade/k3s-upgrade image that the Pod is currently running.

    3. Kill the rke2/k3s process that is running on the node OS - this instructs the supervisor to automatically restart the rke2/k3s process using the new version.

    4. Uncordon cluster node - after the successful Kubernetes distribution upgrade, the node is again marked as schedulable.

      Note

      For further information regarding how the rke2-upgrade and k3s-upgrade images work, see the rke2-upgrade and k3s-upgrade upstream projects.

With the above steps executed, the Kubernetes version of each cluster node should have been upgraded to the desired Edge compatible release.
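
Once the Update Pods have completed, a quick way to confirm the result is to check the kubelet version reported by each node of the downstream cluster. This is a minimal sketch; the exact version string depends on the distribution and release that you upgraded to (e.g. a value of the form v1.XX.Y+rke2rZ for RKE2):

  kubectl get nodes -o wide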

28.3.4 Kubernetes version upgrade - SUC Plan deployment

28.3.4.1 SUC Plan deployment - GitRepo resource

A GitRepo resource that ships the needed Kubernetes upgrade SUC Plans can be deployed in one of the following ways:

  1. Through the Rancher UI - Section 28.3.4.1.1, “GitRepo creation - Rancher UI” (when Rancher is available).

  2. By manually deploying (Section 28.3.4.1.2, “GitRepo creation - manual”) the resource to your management cluster.

Once deployed, to monitor the Kubernetes upgrade process of the nodes of your targeted cluster, refer to the Section 19.3, “Monitoring System Upgrade Controller Plans” documentation.

28.3.4.1.1 GitRepo creation - Rancher UI
  1. In the upper left corner, ☰ → Continuous Delivery

  2. Go to Git Repos → Add Repository

If you use the suse-edge/fleet-examples repository:

  1. Repository URL - https://github.com/suse-edge/fleet-examples.git

  2. Watch → Revision - choose a release tag for the suse-edge/fleet-examples repository that you wish to use

  3. Under Paths add the path to the Kubernetes distribution upgrade Fleets as seen in the release tag:

    1. For RKE2 - fleets/day2/system-upgrade-controller-plans/rke2-upgrade

    2. For K3s - fleets/day2/system-upgrade-controller-plans/k3s-upgrade

  4. Select Next to move to the target configuration section. Only select clusters for which you wish to upgrade the desired Kubernetes distribution

  5. Create

Alternatively, if you decide to use your own repository to host these files, you would need to provide your repo data above.

28.3.4.1.2 GitRepo creation - manual
  1. Choose the desired Edge release tag that you wish to apply the Kubernetes SUC upgrade Plans from (referenced below as ${REVISION}).

  2. Pull the GitRepo resource:

    • For RKE2 clusters:

      curl -o rke2-upgrade-gitrepo.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/${REVISION}/gitrepos/day2/rke2-upgrade-gitrepo.yaml
    • For K3s clusters:

      curl -o k3s-upgrade-gitrepo.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/${REVISION}/gitrepos/day2/k3s-upgrade-gitrepo.yaml
  3. Edit the GitRepo configuration: under spec.targets, specify your desired target list. By default, the GitRepo resources from suse-edge/fleet-examples are NOT mapped to any downstream clusters.

    • To match all clusters change the default GitRepo target to:

      spec:
        targets:
        - clusterSelector: {}
    • Alternatively, if you want a more granular cluster selection see Mapping to Downstream Clusters

  4. Apply the GitRepo resources to your management cluster:

    # RKE2
    kubectl apply -f rke2-upgrade-gitrepo.yaml
    
    # K3s
    kubectl apply -f k3s-upgrade-gitrepo.yaml
  5. View the created GitRepo resource under the fleet-default namespace:

    # RKE2
    kubectl get gitrepo rke2-upgrade -n fleet-default
    
    # K3s
    kubectl get gitrepo k3s-upgrade -n fleet-default
    
    # Example output
    NAME           REPO                                              COMMIT          BUNDLEDEPLOYMENTS-READY   STATUS
    k3s-upgrade    https://github.com/suse-edge/fleet-examples.git   release-3.0.1   0/0
    rke2-upgrade   https://github.com/suse-edge/fleet-examples.git   release-3.0.1   0/0

28.3.4.2 SUC Plan deployment - Bundle resource

A Bundle resource that ships the needed Kubernetes upgrade SUC Plans can be deployed in one of the following ways:

  1. Through the Rancher UI - Section 28.3.4.2.1, “Bundle creation - Rancher UI” (when Rancher is available).

  2. By manually deploying (Section 28.3.4.2.2, “Bundle creation - manual”) the resource to your management cluster.

Once deployed, to monitor the Kubernetes upgrade process of the nodes of your targeted cluster, refer to the Section 19.3, “Monitoring System Upgrade Controller Plans” documentation.

28.3.4.2.1 Bundle creation - Rancher UI
  1. In the upper left corner, click ☰ → Continuous Delivery

  2. Go to Advanced → Bundles

  3. Select Create from YAML

  4. From here you can create the Bundle in one of the following ways:

    1. By manually copying the Bundle content to the Create from YAML page. Content can be retrieved from https://raw.githubusercontent.com/suse-edge/fleet-examples/${REVISION}/bundles/day2/system-upgrade-controller-plans/rke2-upgrade/plan-bundle.yaml for RKE2, or from https://raw.githubusercontent.com/suse-edge/fleet-examples/${REVISION}/bundles/day2/system-upgrade-controller-plans/k3s-upgrade/plan-bundle.yaml for K3s, where ${REVISION} is the Edge release tag that you are using

    2. By cloning the suse-edge/fleet-examples repository to the desired release tag and selecting the Read from File option in the Create from YAML page. From there, navigate to the bundle that you need (/bundles/day2/system-upgrade-controller-plans/rke2-upgrade/plan-bundle.yaml for RKE2 and /bundles/day2/system-upgrade-controller-plans/k3s-upgrade/plan-bundle.yaml for K3s). This will auto-populate the Create from YAML page with the Bundle content

  5. Change the target clusters for the Bundle:

    • To match all downstream clusters change the default Bundle .spec.targets to:

      spec:
        targets:
        - clusterSelector: {}
    • For more granular downstream cluster mappings, see Mapping to Downstream Clusters.

  6. Create

28.3.4.2.2 Bundle creation - manual
  1. Choose the desired Edge release tag that you wish to apply the Kubernetes SUC upgrade Plans from (referenced below as ${REVISION}).

  2. Pull the Bundle resources:

    • For RKE2 clusters:

      curl -o rke2-plan-bundle.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/${REVISION}/bundles/day2/system-upgrade-controller-plans/rke2-upgrade/plan-bundle.yaml
    • For K3s clusters:

      curl -o k3s-plan-bundle.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/${REVISION}/bundles/day2/system-upgrade-controller-plans/k3s-upgrade/plan-bundle.yaml
  3. Edit the Bundle target configurations: under spec.targets, provide your desired target list. By default, the Bundle resources from suse-edge/fleet-examples are NOT mapped to any downstream clusters.

    • To match all clusters change the default Bundle target to:

      spec:
        targets:
        - clusterSelector: {}
    • Alternatively, if you want a more granular cluster selection see Mapping to Downstream Clusters

  4. Apply the Bundle resources to your management cluster:

    # For RKE2
    kubectl apply -f rke2-plan-bundle.yaml
    
    # For K3s
    kubectl apply -f k3s-plan-bundle.yaml
  5. View the created Bundle resource under the fleet-default namespace:

    # For RKE2
    kubectl get bundles rke2-upgrade -n fleet-default
    
    # For K3s
    kubectl get bundles k3s-upgrade -n fleet-default
    
    # Example output
    NAME           BUNDLEDEPLOYMENTS-READY   STATUS
    k3s-upgrade    0/0
    rke2-upgrade   0/0

28.3.4.3 SUC Plan deployment - third-party GitOps workflow

There might be use-cases where users would like to incorporate the Kubernetes upgrade resources into their own third-party GitOps workflow (e.g. Flux).

To get the upgrade resources that you need, first determine the Edge release tag of the suse-edge/fleet-examples repository that you would like to use.

After that, the resources can be found at:

  • For an RKE2 cluster upgrade:

    • For control-plane nodes - fleets/day2/system-upgrade-controller-plans/rke2-upgrade/plan-control-plane.yaml

    • For agent nodes - fleets/day2/system-upgrade-controller-plans/rke2-upgrade/plan-agent.yaml

  • For a K3s cluster upgrade:

    • For control-plane nodes - fleets/day2/system-upgrade-controller-plans/k3s-upgrade/plan-control-plane.yaml

    • For agent nodes - fleets/day2/system-upgrade-controller-plans/k3s-upgrade/plan-agent.yaml

Important

These Plan resources are interpreted by the system-upgrade-controller and should be deployed on each downstream cluster that you wish to upgrade. For information on how to deploy the system-upgrade-controller, see Section 19.2, “Installing the System Upgrade Controller”.

To better understand how your GitOps workflow can be used to deploy the SUC Plans for Kubernetes version upgrade, it can be beneficial to take a look at the overview (Section 28.3.3.1, “Overview”) of the update procedure using Fleet.

28.4 Helm chart upgrade

Note

The below sections focus on using Fleet functionalities to achieve a Helm chart update.

Users adopting a third-party GitOps workflow should take the configurations for their desired Helm chart from its fleet.yaml located at fleets/day2/chart-templates/<chart-name>. Make sure you are retrieving the chart data from a valid "Day 2" Edge release.

28.4.1 Components

Apart from the default Day 2 components (Section 28.1.1, “Components”), no other custom components are needed for this operation.

28.4.2 Preparation for air-gapped environments

28.4.2.1 Ensure that you have access to your Helm chart upgrade Fleet

Depending on what your environment supports, you can take one of the following options:

  1. Host your chart’s Fleet resources on a local Git server that is accessible by your management cluster.

  2. Use Fleet’s CLI to convert a Helm chart into a Bundle that you can use directly, without needing to host it anywhere. Fleet’s CLI can be retrieved from its release page; for Mac users, there is a fleet-cli Homebrew formula.

28.4.2.2 Find the required assets for your Edge release version

  1. Go to the Day 2 release page and find the Edge 3.X.Y release that you want to upgrade your chart to and click Assets.

  2. From the release’s Assets section, download the following files, which are required for an air-gapped upgrade of a SUSE-supported Helm chart:

    • edge-save-images.sh - pulls the images listed in the edge-release-images.txt file and saves them to a '.tar.gz' archive that can then be used in your air-gapped environment.

    • edge-save-oci-artefacts.sh - pulls the SUSE OCI chart artefacts listed in the edge-release-helm-oci-artefacts.txt file and creates a '.tar.gz' archive of a directory containing all chart OCI archives.

    • edge-load-images.sh - loads the images from the '.tar.gz' archive generated by edge-save-images.sh, retags them and pushes them to your private registry.

    • edge-load-oci-artefacts.sh - takes a directory containing '.tgz' SUSE OCI charts and loads all OCI charts to your private registry. The directory is extracted from the '.tar.gz' archive that the edge-save-oci-artefacts.sh script has generated.

    • edge-release-helm-oci-artefacts.txt - contains a list of OCI artefacts for the SUSE Edge release Helm charts.

    • edge-release-images.txt - contains a list of images needed by the Edge release Helm charts.

28.4.2.3 Create the SUSE Edge release images archive

On a machine with internet access:

  1. Make edge-save-images.sh executable:

    chmod +x edge-save-images.sh
  2. Use the edge-save-images.sh script to create a Docker-importable '.tar.gz' archive:

    ./edge-save-images.sh --source-registry registry.suse.com
  3. This will create a ready-to-load edge-images.tar.gz archive (unless you have specified the -i|--images option) with the needed images.

  4. Copy this archive to your air-gapped machine

    scp edge-images.tar.gz <user>@<machine_ip>:/path

28.4.2.4 Create a SUSE Edge Helm chart OCI images archive

On a machine with internet access:

  1. Make edge-save-oci-artefacts.sh executable:

    chmod +x edge-save-oci-artefacts.sh
  2. Use the edge-save-oci-artefacts.sh script to create a '.tar.gz' archive of all SUSE Edge Helm chart OCI images:

    ./edge-save-oci-artefacts.sh --source-registry registry.suse.com
  3. This will create an oci-artefacts.tar.gz archive containing all SUSE Edge Helm chart OCI images

  4. Copy this archive to your air-gapped machine

    scp oci-artefacts.tar.gz <user>@<machine_ip>:/path

28.4.2.5 Load SUSE Edge release images to your air-gapped machine

On your air-gapped machine:

  1. Log into your private registry (if required):

    podman login <REGISTRY.YOURDOMAIN.COM:PORT>
  2. Make edge-load-images.sh executable:

    chmod +x edge-load-images.sh
  3. Use edge-load-images.sh to load the images from the copied edge-images.tar.gz archive, retag them and push them to your private registry:

    ./edge-load-images.sh --source-registry registry.suse.com --registry <REGISTRY.YOURDOMAIN.COM:PORT> --images edge-images.tar.gz

28.4.2.6 Load SUSE Edge Helm chart OCI images to your air-gapped machine

On your air-gapped machine:

  1. Log into your private registry (if required):

    podman login <REGISTRY.YOURDOMAIN.COM:PORT>
  2. Make edge-load-oci-artefacts.sh executable:

    chmod +x edge-load-oci-artefacts.sh
  3. Untar the copied oci-artefacts.tar.gz archive:

    tar -xvf oci-artefacts.tar.gz
  4. This will produce a directory with the naming template edge-release-oci-tgz-<date>

  5. Pass this directory to the edge-load-oci-artefacts.sh script to load the SUSE Edge Helm chart OCI images into your private registry:

    Note

    This script assumes the helm CLI has been pre-installed on your environment. For Helm installation instructions, see Installing Helm.

    ./edge-load-oci-artefacts.sh --archive-directory edge-release-oci-tgz-<date> --registry <REGISTRY.YOURDOMAIN.COM:PORT> --source-registry registry.suse.com
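
To confirm that the charts were pushed successfully, you can query your private registry. This is a minimal sketch which assumes that the registry exposes the standard OCI/Docker Registry v2 API and that credentials are passed if required:

  # List the repositories known to the registry
  curl -k -u <user>:<password> https://<REGISTRY.YOURDOMAIN.COM:PORT>/v2/_catalog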

28.4.2.7 Create registry mirrors pointing to your private registry for your Kubernetes distribution

For RKE2, see Containerd Registry Configuration

For K3s, see Embedded Registry Mirror
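
For reference, an RKE2 containerd registry mirror is configured through the /etc/rancher/rke2/registries.yaml file on each node. The snippet below is a minimal sketch with placeholder values; see the linked Containerd Registry Configuration documentation for the authoritative format, and restart the rke2 service for changes to take effect:

  mirrors:
    registry.suse.com:
      endpoint:
        - "https://<REGISTRY.YOURDOMAIN.COM:PORT>"
  configs:
    "<REGISTRY.YOURDOMAIN.COM:PORT>":
      auth:
        username: <user>
        password: <password>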

28.4.3 Upgrade procedure

Note

The below upgrade procedure utilises Rancher’s Fleet (Chapter 6, Fleet) functionality. Users adopting a third-party GitOps workflow should retrieve the chart versions supported by each Edge release from the Section 36.1, “Abstract” page and populate these versions into their third-party GitOps workflow.

This section focuses on the following Helm upgrade procedure use-cases:

  1. I have a new cluster and would like to deploy and manage a SUSE Helm chart (Section 28.4.3.1, “I have a new cluster and would like to deploy and manage a SUSE Helm chart”)

  2. I would like to upgrade a Fleet managed Helm chart (Section 28.4.3.2, “I would like to upgrade a Fleet managed Helm chart”)

  3. I would like to upgrade an EIB deployed Helm chart (Section 28.4.3.3, “I would like to upgrade an EIB deployed Helm chart”)

Important

Manually deployed Helm charts cannot be reliably upgraded. We suggest redeploying the Helm chart using the Section 28.4.3.1, “I have a new cluster and would like to deploy and manage a SUSE Helm chart” method.

28.4.3.1 I have a new cluster and would like to deploy and manage a SUSE Helm chart

For users that want to manage their Helm chart lifecycle through Fleet.

28.4.3.1.1 Prepare your Fleet resources
  1. Acquire the Chart’s Fleet resources from the Edge release tag that you wish to use

    1. From the selected Edge release tag revision, navigate to the Helm chart fleet - fleets/day2/chart-templates/<chart>

    2. If you intend to use a GitOps workflow, copy the chart Fleet directory to the Git repository from where you will do GitOps.

    3. Optionally, if the Helm chart requires configurations to its values, edit the .helm.values configuration inside the fleet.yaml file of the copied directory

    4. Optionally, there may be use-cases where you need to add additional resources to your chart’s fleet so that it can better fit your environment. For information on how to enhance your Fleet directory, see Git Repository Contents

An example for the longhorn helm chart would look like:

  • User Git repository structure:

    <user_repository_root>
    ├── longhorn
    │   └── fleet.yaml
    └── longhorn-crd
        └── fleet.yaml
  • fleet.yaml content populated with user longhorn data:

    defaultNamespace: longhorn-system
    
    helm:
      releaseName: "longhorn"
      chart: "longhorn"
      repo: "https://charts.rancher.io/"
      version: "104.2.0+up1.7.1"
      takeOwnership: true
      # custom chart value overrides
      values:
        # Example for user provided custom values content
        defaultSettings:
          deletingConfirmationFlag: true
    
    # https://fleet.rancher.io/bundle-diffs
    diff:
      comparePatches:
      - apiVersion: apiextensions.k8s.io/v1
        kind: CustomResourceDefinition
        name: engineimages.longhorn.io
        operations:
        - {"op":"remove", "path":"/status/conditions"}
        - {"op":"remove", "path":"/status/storedVersions"}
        - {"op":"remove", "path":"/status/acceptedNames"}
      - apiVersion: apiextensions.k8s.io/v1
        kind: CustomResourceDefinition
        name: nodes.longhorn.io
        operations:
        - {"op":"remove", "path":"/status/conditions"}
        - {"op":"remove", "path":"/status/storedVersions"}
        - {"op":"remove", "path":"/status/acceptedNames"}
      - apiVersion: apiextensions.k8s.io/v1
        kind: CustomResourceDefinition
        name: volumes.longhorn.io
        operations:
        - {"op":"remove", "path":"/status/conditions"}
        - {"op":"remove", "path":"/status/storedVersions"}
        - {"op":"remove", "path":"/status/acceptedNames"}
    Note

    These are just example values that are used to illustrate custom configurations over the longhorn chart. They should NOT be treated as deployment guidelines for the longhorn chart.

28.4.3.1.2 Deploy your Fleet

If the environment supports working with a GitOps workflow, you can deploy your Chart Fleet by either using a GitRepo or Bundle.

Note

While deploying your Fleet, if you get a Modified message, make sure to add a corresponding comparePatches entry to the Fleet’s diff section. For more information, see Generating Diffs to Ignore Modified GitRepos.

28.4.3.1.2.1 GitRepo

Fleet’s GitRepo resource holds information on how to access your chart’s Fleet resources and to which clusters it needs to apply those resources.

The GitRepo resource can be deployed through the Rancher UI, or manually, by deploying the resource to the management cluster.

Example Longhorn GitRepo resource for manual deployment:

apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: longhorn-git-repo
  namespace: fleet-default
spec:
  # If using a tag
  # revision: <user_repository_tag>
  #
  # If using a branch
  # branch: <user_repository_branch>
  paths:
  # As seen in the 'Prepare your Fleet resources' example
  - longhorn
  - longhorn-crd
  repo: <user_repository_url>
  targets:
  # Match all clusters
  - clusterSelector: {}
28.4.3.1.2.2 Bundle

Bundle resources hold the raw Kubernetes resources that need to be deployed by Fleet. Normally it is encouraged to use the GitRepo approach, but for use-cases where the environment is air-gapped and cannot support a local Git server, Bundles can help you in propagating your Helm chart Fleet to your target clusters.

The Bundle can be deployed either through the Rancher UI (Continuous Delivery → Advanced → Bundles → Create from YAML) or by manually deploying the Bundle resource in the correct Fleet namespace. For information about Fleet namespaces, see the upstream documentation.

Example Longhorn Bundle resource deployment using a manual approach:

  1. Navigate to the Longhorn Chart fleet located under fleets/day2/chart-templates/longhorn/longhorn:

    cd fleets/day2/chart-templates/longhorn/longhorn
  2. Create a targets.yaml file that will instruct Fleet to which clusters it should deploy the Helm chart. In this case, we will deploy to a single downstream cluster. For information on how to map more complex targets, see Mapping to Downstream Clusters:

    cat > targets.yaml <<EOF
    targets:
    - clusterName: foo
    EOF
  3. Convert the Longhorn Helm chart Fleet to a Bundle resource. For more information, see Convert a Helm Chart into a Bundle:

    fleet apply --compress --targets-file=targets.yaml -n fleet-default -o - longhorn-bundle > longhorn-bundle.yaml
  4. Navigate to the Longhorn CRD Chart fleet located under fleets/day2/chart-templates/longhorn/longhorn-crd:

    cd fleets/day2/chart-templates/longhorn/longhorn-crd
  5. Create a targets.yaml file that will instruct Fleet to which clusters it should deploy the Helm chart. In this case, we will deploy to a single downstream cluster. For information on how to map more complex targets, see Mapping to Downstream Clusters:

    cat > targets.yaml <<EOF
    targets:
    - clusterName: foo
    EOF
  6. Convert the Longhorn CRD Helm chart Fleet to a Bundle resource. For more information, see Convert a Helm Chart into a Bundle:

    fleet apply --compress --targets-file=targets.yaml -n fleet-default -o - longhorn-crd-bundle > longhorn-crd-bundle.yaml
  7. Deploy longhorn-bundle.yaml and longhorn-crd-bundle.yaml to your management cluster:

    kubectl apply -f longhorn-crd-bundle.yaml
    kubectl apply -f longhorn-bundle.yaml

Following these steps will ensure that Longhorn is deployed on all of the specified target clusters.
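
To verify that Fleet has picked up the Bundles, you can inspect them on the management cluster. This is a minimal sketch which assumes the bundle names used in the fleet apply commands above:

  kubectl get bundles longhorn-bundle longhorn-crd-bundle -n fleet-default

Once the BUNDLEDEPLOYMENTS-READY column reports all deployments as ready, Longhorn should be running on the targeted clusters.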

28.4.3.1.3 Managing the deployed Helm chart

Once deployed with Fleet, for Helm chart upgrades, see Section 28.4.3.2, “I would like to upgrade a Fleet managed Helm chart”.

28.4.3.2 I would like to upgrade a Fleet managed Helm chart

  1. Determine the version to which you need to upgrade your chart so that it is compatible with an Edge 3.X.Y release. Helm chart version per Edge release can be viewed from the Section 36.1, “Abstract”.

  2. In your Fleet monitored Git repository, edit the Helm chart’s fleet.yaml file with the correct chart version and repository from the Section 36.1, “Abstract”.

  3. After committing and pushing the changes to your repository, Fleet will trigger an upgrade of the desired Helm chart
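
Continuing the Longhorn example from Section 28.4.3.1, “I have a new cluster and would like to deploy and manage a SUSE Helm chart”, such an upgrade amounts to changing the version (and, if needed, repo) field in the chart’s fleet.yaml. The snippet below is a sketch using versions that appear elsewhere in this document; always take the actual chart version from the Section 36.1, “Abstract” page:

  helm:
    releaseName: "longhorn"
    chart: "longhorn"
    repo: "https://charts.rancher.io/"
    # previously: version: "103.3.0+up1.6.1"
    version: "104.2.0+up1.7.1"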

28.4.3.3 I would like to upgrade an EIB deployed Helm chart

EIB deploys Helm charts by creating a HelmChart resource and utilising the helm-controller introduced by the RKE2/K3s Helm integration feature.

To ensure that an EIB deployed Helm chart is successfully upgraded, users need to upgrade the HelmChart resources that EIB created for that chart.

Below you can find information on:

  • The overview (Section 28.4.3.3.1, “Overview”) of the upgrade process.

  • The upgrade steps (Section 28.4.3.3.2, “Upgrade Steps”) needed to perform the upgrade.

  • An example (Section 28.4.3.3.3, “Example”) of a Longhorn chart upgrade.

28.4.3.3.1 Overview

This section is meant to give a high-level overview of the steps that need to be taken in order to upgrade one or multiple Helm charts that have been deployed by EIB. For a detailed explanation of the steps needed for a Helm chart upgrade, see Section 28.4.3.3.2, “Upgrade Steps”.

Figure 28.3: Helm chart upgrade workflow
  1. The workflow begins with the user pulling the new Helm chart archive(s) that they wish to upgrade their chart(s) to.

  2. The archive(s) should then be placed in a directory that will be processed by the generate-chart-upgrade-data.sh script.

  3. The user then proceeds to execute the generate-chart-upgrade-data.sh script which will generate a Kubernetes Secret YAML file for each Helm chart archive in the provided archive directory. These secrets will be automatically placed under the Fleet that will be used to upgrade the Helm charts. This is further explained in the upgrade steps (Section 28.4.3.3.2, “Upgrade Steps”) section.

  4. After the script finishes successfully, the user should continue to the configuration and deployment of either a Bundle or a GitRepo resource that will ship all the needed K8s resources to the target clusters.

    1. The resource is deployed on the management cluster under the fleet-default namespace.

  5. Fleet (Chapter 6, Fleet) detects the deployed resource, parses its data and deploys its resources to the specified target clusters. The most notable resources that are deployed are:

    1. eib-charts-upgrader - a Job that deploys the Chart Upgrade Pod. The eib-charts-upgrader-script as well as all Helm chart upgrade data secrets are mounted inside the Chart Upgrade Pod.

    2. eib-charts-upgrader-script - a Secret shipping the script that will be used by the Chart Upgrade Pod to patch an existing HelmChart resource.

    3. Helm chart upgrade data secrets - Secret YAML files created by the generate-chart-upgrade-data.sh script based on the user-provided data. These Secret YAML files should not be edited.

  6. Once the Chart Upgrade Pod has been deployed, the script from the eib-charts-upgrader-script secret is executed, which does the following:

    1. Process all the Helm chart upgrade data provided by the other secrets.

    2. Check if there is a HelmChart resource for each of the provided chart upgrade data.

    3. Proceed to patch the HelmChart resource with the data provided from the secret for the corresponding Helm chart.

  7. The RKE2/K3s helm-controller constantly monitors the existing HelmChart resources for edits. It detects the patch of the HelmChart, reconciles the changes and then proceeds to upgrade the chart behind the HelmChart resource.
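
To see which HelmChart resources exist on a downstream cluster, and to confirm that a patch has been applied, you can list them directly. This is a minimal sketch; the namespace depends on how the chart was defined in the EIB image definition, and <chart-name>/<namespace> are placeholders:

  kubectl get helmcharts.helm.cattle.io -A

  # Inspect the chart version currently recorded for a specific HelmChart
  kubectl get helmchart <chart-name> -n <namespace> -o jsonpath='{.spec.version}'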

28.4.3.3.2 Upgrade Steps
  1. Clone the suse-edge/fleet-examples repository from the Edge release tag that you wish to use.

  2. Create a directory in which you will store the pulled Helm chart archive(s).

    mkdir archives
  3. Inside of the newly created archive directory, pull the Helm chart archive(s) that you wish to upgrade to:

    cd archives
    helm pull [chart URL | repo/chartname]
    
    # Alternatively if you want to pull a specific version:
    # helm pull [chart URL | repo/chartname] --version 0.0.0
  4. From the desired release tag, download the generate-chart-upgrade-data.sh script

  5. Execute the generate-chart-upgrade-data.sh script:

    Important

    Users should not make any changes over what the generate-chart-upgrade-data.sh script generates.

    chmod +x ./generate-chart-upgrade-data.sh
    
    ./generate-chart-upgrade-data.sh --archive-dir /foo/bar/archives/ --fleet-path /foo/bar/fleet-examples/fleets/day2/eib-charts-upgrader

    The script will go through the following logic:

    1. Validate that the user-provided --fleet-path points to a valid Fleet that can initiate a Helm chart upgrade.

    2. Process all Helm chart archives from the user-created archives dir (e.g. /foo/bar/archives/).

    3. For each Helm chart archive create a Kubernetes Secret YAML resource. This resource will hold:

      1. The name of the HelmChart resource that needs to be patched.

      2. The new version for the HelmChart resource.

      3. The base64 encoded Helm chart archive that will be used to replace the HelmChart’s currently running configuration.

    4. Each Kubernetes Secret YAML resource will be transferred to the base/secrets directory inside of the path to the eib-charts-upgrader Fleet that was given under --fleet-path.

    5. Furthermore, the generate-chart-upgrade-data.sh script ensures that the secrets that it moved will be picked up and used in the Helm chart upgrade logic. It does that by:

      1. Editing the base/secrets/kustomization.yaml file to include the newly added resources.

      2. Editing the base/patches/job-patch.yaml file to include the newly added secrets in the mount configurations.

  6. After a successful generate-chart-upgrade-data.sh run, you should see the changes inside the following directories of the suse-edge/fleet-examples repository:

    1. fleets/day2/eib-charts-upgrader/base/patches

    2. fleets/day2/eib-charts-upgrader/base/secrets

The steps below depend on the environment that you are running:

  1. For an environment that supports GitOps (e.g. is non air-gapped, or is air-gapped, but allows for local Git server support):

    1. Copy the fleets/day2/eib-charts-upgrader Fleet to the repository that you will use for GitOps. Make sure that the Fleet includes the changes that have been made by the generate-chart-upgrade-data.sh script.

    2. Configure a GitRepo resource that will be used to ship all the resources of the eib-charts-upgrader Fleet.

      1. For GitRepo configuration and deployment through the Rancher UI, see Accessing Fleet in the Rancher UI.

      2. For GitRepo manual configuration and deployment, see Creating a Deployment.

  2. For an environment that does not support GitOps (e.g. is air-gapped and does not allow local Git server usage):

    1. Download the fleet-cli binary from the rancher/fleet releases page. For Mac users, there is a Homebrew formula that can be used - fleet-cli.

    2. Navigate to the eib-charts-upgrader Fleet:

      cd /foo/bar/fleet-examples/fleets/day2/eib-charts-upgrader
    3. Create a targets.yaml file that will instruct Fleet where to deploy your resources:

      cat > targets.yaml <<EOF
      targets:
      - clusterSelector: {} # Change this with your target data
      EOF

      For information on how to map target clusters, see the upstream documentation.

    4. Use the fleet-cli to convert the Fleet to a Bundle resource:

      fleet apply --compress --targets-file=targets.yaml -n fleet-default -o - eib-charts-upgrade > bundle.yaml

      This creates a Bundle (bundle.yaml) that holds all the templated resources from the eib-charts-upgrader Fleet.

      For more information regarding the fleet apply command, see fleet apply.

      For more information regarding converting Fleets to Bundles, see Convert a Helm Chart into a Bundle.

    5. Deploy the Bundle. This can be done in one of two ways:

      1. Through Rancher’s UI - Navigate to Continuous Delivery → Advanced → Bundles → Create from YAML and either paste the bundle.yaml contents, or click the Read from File option and pass the file itself.

      2. Manually - Deploy the bundle.yaml file manually on your management cluster.
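
For the GitOps-enabled path above, a GitRepo resource shipping the eib-charts-upgrader Fleet might look similar to the minimal sketch below. The repository URL, branch and metadata name are hypothetical placeholders; adjust the paths and targets to your environment.

apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: eib-charts-upgrader
  namespace: fleet-default
spec:
  # Hypothetical Git repository that holds the copied Fleet
  repo: https://github.com/example/day2-fleets.git
  branch: main
  paths:
  - fleets/day2/eib-charts-upgrader
  targets:
  - clusterSelector: {} # Change this with your target data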

Executing these steps will result in a successfully deployed GitRepo/Bundle resource. The resource will be picked up by Fleet and its contents will be deployed onto the target clusters that the user has specified in the previous steps. For an overview of the process, refer to the overview (Section 28.4.3.3.1, “Overview”) section.

For information on how to track the upgrade process, you can refer to the Example (Section 28.4.3.3.3, “Example”) section of this documentation.

28.4.3.3.3 Example
Note
Note

The example below illustrates how to upgrade an EIB-deployed Helm chart from one version to another. The versions in the example should not be treated as version recommendations. Version recommendations for a specific Edge release should be taken from the release notes (Section 36.1, “Abstract”).

Use-case:

  • A cluster named doc-example is running Rancher’s Longhorn chart version 103.3.0+up1.6.1.

  • The cluster has been deployed through EIB, using the following image definition snippet:

    kubernetes:
      helm:
        charts:
        - name: longhorn-crd
          repositoryName: rancher-charts
          targetNamespace: longhorn-system
          createNamespace: true
          version: 103.3.0+up1.6.1
        - name: longhorn
          repositoryName: rancher-charts
          targetNamespace: longhorn-system
          createNamespace: true
          version: 103.3.0+up1.6.1
        repositories:
        - name: rancher-charts
          url: https://charts.rancher.io/
    ...
    Figure 28.4: doc-example installed Longhorn version
  • Longhorn needs to be upgraded to a version that is compatible with the Edge 3.1 release, in this case 104.2.0+up1.7.1.

  • It is assumed that the management cluster in charge of the doc-example cluster is air-gapped, does not support a local Git server, and has a working Rancher setup.

Follow the Upgrade Steps (Section 28.4.3.3.2, “Upgrade Steps”):

  1. Clone the suse-edge/fleet-examples repository from the release-3.1.0 tag.

    git clone -b release-3.1.0 https://github.com/suse-edge/fleet-examples.git
  2. Create a directory where the Longhorn upgrade archive will be stored.

    mkdir archives
  3. Pull the desired Longhorn chart archive version:

    # First add the Rancher Helm chart repository
    helm repo add rancher-charts https://charts.rancher.io/
    
    # Pull the Longhorn 1.7.1 CRD archive
    helm pull rancher-charts/longhorn-crd --version 104.2.0+up1.7.1
    
    # Pull the Longhorn 1.7.1 chart archive
    helm pull rancher-charts/longhorn --version 104.2.0+up1.7.1
  4. Outside of the archives directory, download the generate-chart-upgrade-data.sh script from the release-3.1.0 release tag.

  5. Directory setup should look similar to:

    .
    ├── archives
    │   ├── longhorn-104.2.0+up1.7.1.tgz
    │   └── longhorn-crd-104.2.0+up1.7.1.tgz
    ├── fleet-examples
    ...
    │   ├── fleets
    │   │   ├── day2
    │   │   │   ├── ...
    │   │   │   ├── eib-charts-upgrader
    │   │   │   │   ├── base
    │   │   │   │   │   ├── job.yaml
    │   │   │   │   │   ├── kustomization.yaml
    │   │   │   │   │   ├── patches
    │   │   │   │   │   │   └── job-patch.yaml
    │   │   │   │   │   ├── rbac
    │   │   │   │   │   │   ├── cluster-role-binding.yaml
    │   │   │   │   │   │   ├── cluster-role.yaml
    │   │   │   │   │   │   ├── kustomization.yaml
    │   │   │   │   │   │   └── sa.yaml
    │   │   │   │   │   └── secrets
    │   │   │   │   │       ├── eib-charts-upgrader-script.yaml
    │   │   │   │   │       └── kustomization.yaml
    │   │   │   │   ├── fleet.yaml
    │   │   │   │   └── kustomization.yaml
    │   │   │   └── ...
    │   └── ...
    └── generate-chart-upgrade-data.sh
  6. Execute the generate-chart-upgrade-data.sh script:

    # First make the script executable
    chmod +x ./generate-chart-upgrade-data.sh
    
    # Then execute the script
    ./generate-chart-upgrade-data.sh --archive-dir ./archives --fleet-path ./fleet-examples/fleets/day2/eib-charts-upgrader

    The directory structure after the script execution should look similar to:

    .
    ├── archives
    │   ├── longhorn-104.2.0+up1.7.1.tgz
    │   └── longhorn-crd-104.2.0+up1.7.1.tgz
    ├── fleet-examples
    ...
    │   ├── fleets
    │   │   ├── day2
    │   │   │   ├── ...
    │   │   │   ├── eib-charts-upgrader
    │   │   │   │   ├── base
    │   │   │   │   │   ├── job.yaml
    │   │   │   │   │   ├── kustomization.yaml
    │   │   │   │   │   ├── patches
    │   │   │   │   │   │   └── job-patch.yaml
    │   │   │   │   │   ├── rbac
    │   │   │   │   │   │   ├── cluster-role-binding.yaml
    │   │   │   │   │   │   ├── cluster-role.yaml
    │   │   │   │   │   │   ├── kustomization.yaml
    │   │   │   │   │   │   └── sa.yaml
    │   │   │   │   │   └── secrets
    │   │   │   │   │       ├── eib-charts-upgrader-script.yaml
    │   │   │   │   │       ├── kustomization.yaml
    │   │   │   │   │       ├── longhorn-104-2-0-up1-7-1.yaml <- secret created by the generate-chart-upgrade-data.sh script
    │   │   │   │   │       └── longhorn-crd-104-2-0-up1-7-1.yaml <- secret created by the generate-chart-upgrade-data.sh script
    │   │   │   │   ├── fleet.yaml
    │   │   │   │   └── kustomization.yaml
    │   │   │   └── ...
    │   └── ...
    └── generate-chart-upgrade-data.sh

    The files changed in git should look like this:

    Figure 28.5: Changes over fleet-examples made by generate-chart-upgrade-data.sh
  7. Since the management cluster does not support a GitOps workflow, a Bundle needs to be created for the eib-charts-upgrader Fleet:

    1. First, navigate to the Fleet itself:

      cd ./fleet-examples/fleets/day2/eib-charts-upgrader
    2. Then create a targets.yaml file targeting the doc-example cluster:

      cat > targets.yaml <<EOF
      targets:
      - clusterName: doc-example
      EOF
    3. Then use the fleet-cli binary to convert the Fleet to a Bundle:

      fleet apply --compress --targets-file=targets.yaml -n fleet-default -o - eib-charts-upgrade > bundle.yaml
    4. Now, transfer the bundle.yaml file to your management cluster machine.

  8. Since the management cluster is running Rancher, deploy the Bundle through the Rancher UI:

    Figure 28.6: Deploy Bundle through Rancher UI

    From here, select Read from File and find the bundle.yaml file on your system.

    This will auto-populate the Bundle inside of Rancher’s UI:

    Figure 28.7: Auto-populated Bundle snippet

    Select Create.

  9. After a successful deployment, your Bundle should look similar to:

    Figure 28.8: Successfully deployed Bundle

After the successful deployment of the Bundle, to monitor the upgrade process:

  1. First, verify the logs of the Upgrade Pod:

    Figure 28.9: View the upgrade pod logs
  2. Next, verify the logs of the Pod created by the helm-controller for the upgrade:

    1. The Pod name follows the template helm-install-longhorn-<random-suffix>.

    2. The Pod will be in the namespace where the HelmChart resource was deployed. In this example, that is the default namespace.

      Figure 28.10: Logs for successfully upgraded Longhorn chart
  3. Check that the HelmChart version has been bumped:

    Figure 28.11: Bumped Longhorn version
  4. Finally, check that the Longhorn Pods are running:

    Figure 28.12: Example for validating the instance-manager pod

After making the above validations, it is safe to assume that the Longhorn Helm chart has been upgraded from 103.3.0+up1.6.1 to 104.2.0+up1.7.1.
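
If you prefer to run the same validations from a terminal instead of the Rancher UI, the commands below are a minimal sketch. They assume kubectl access to the doc-example cluster, a helm-install Job named helm-install-longhorn and, as in this example, HelmChart resources living in the default namespace.

# Logs of the Pod created by the helm-controller (assumes a Job named helm-install-longhorn)
kubectl logs -n default -l job-name=helm-install-longhorn

# Check that the HelmChart version has been bumped
kubectl get helmchart longhorn -n default -o jsonpath='{.spec.version}'

# Check that the Longhorn Pods are running
kubectl get pods -n longhorn-system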

28.4.3.3.4 Helm chart upgrade using a third-party GitOps tool

There might be use-cases where users would like to use this upgrade procedure with a GitOps workflow other than Fleet (e.g. Flux).

To produce the resources needed for the upgrade procedure, you can use the generate-chart-upgrade-data.sh script to populate the eib-charts-upgrader Fleet with the user-provided data. For more information on how to do this, see the upgrade steps (Section 28.4.3.3.2, “Upgrade Steps”).

After you have the full setup, you can use kustomize to generate a full working solution that you can deploy in your cluster:

cd /foo/bar/fleets/day2/eib-charts-upgrader

kustomize build .

If you want to include the solution in your GitOps workflow, you can remove the fleet.yaml file and use what is left as a valid Kustomize setup. Just do not forget to first run the generate-chart-upgrade-data.sh script, so that it can populate the Kustomize setup with the data for the Helm charts that you wish to upgrade to.
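
As a usage sketch (assuming kubectl is configured against the target cluster), the rendered manifests can also be applied directly instead of being committed to a GitOps repository:

cd /foo/bar/fleets/day2/eib-charts-upgrader

# Render the Kustomize setup and apply it to the current kubectl context
kustomize build . | kubectl apply -f -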

To understand how this workflow is intended to be used, it can be beneficial to look at the overview (Section 28.4.3.3.1, “Overview”) and upgrade steps (Section 28.4.3.3.2, “Upgrade Steps”) sections as well.