
31 Management Cluster

Currently, there are two ways to perform "Day 2" operations on your management cluster: through the Upgrade Controller (Section 31.1, “Upgrade Controller”) or through Fleet (Section 31.2, “Fleet”).

31.1 Upgrade Controller

Important

The Upgrade Controller currently only supports Day 2 operations for non air-gapped management clusters.

This section covers how to perform the various Day 2 operations related to upgrading your management cluster from one Edge platform version to another.

The Day 2 operations are automated by the Upgrade Controller (Chapter 21, Upgrade Controller) and include upgrades of the operating system, the Kubernetes distribution and the Edge Helm charts.

31.1.1 Prerequisites

Before upgrading your management cluster, the following prerequisites must be met:

  1. SCC registered nodes - ensure your cluster nodes' operating systems are registered with a subscription key that supports the OS version specified in the Edge release (Section 40.1, “Abstract”) you intend to upgrade to. A verification sketch follows this list.

  2. Upgrade Controller - make sure that the Upgrade Controller has been deployed on your management cluster. For installation steps, refer to Section 21.2, “Installing the Upgrade Controller”.
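A quick way to verify the first prerequisite is to check the registration status directly on each node. This is a minimal sketch, assuming SUSEConnect is available on the node operating system:

# Run on each management cluster node to confirm the SCC registration status.
# The installed products should be reported as registered.
SUSEConnect --status-text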

31.1.2 Upgrade

  1. Determine the Edge release (Section 40.1, “Abstract”) version that you wish to upgrade your management cluster to.

  2. In the management cluster, deploy an UpgradePlan that specifies the desired release version. The UpgradePlan must be deployed in the namespace of the Upgrade Controller.

    kubectl apply -n <upgrade_controller_namespace> -f - <<EOF
    apiVersion: lifecycle.suse.com/v1alpha1
    kind: UpgradePlan
    metadata:
      name: upgrade-plan-mgmt
    spec:
      # Version retrieved from release notes
      releaseVersion: 3.X.Y
    EOF
    Note

    There may be use-cases where you want to apply additional configuration to the UpgradePlan. For all possible configurations, refer to Section 21.4.1, “UpgradePlan”.

  3. Deploying the UpgradePlan to the Upgrade Controller’s namespace will begin the upgrade process.

    Note

    For more information on the actual upgrade process, refer to Section 21.3, “How does the Upgrade Controller work?”.

    For information on how to track the upgrade process, refer to Section 21.5, “Tracking the upgrade process”.
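In addition to the tracking documentation referenced above, the plan can also be inspected from the command line with standard kubectl commands. This is a minimal sketch; the label selector used for the controller logs is an assumption and may differ in your deployment:

# Inspect the UpgradePlan and its reported status.
kubectl get upgradeplan upgrade-plan-mgmt -n <upgrade_controller_namespace> -o yaml

# Follow the Upgrade Controller logs while the plan is being processed.
# The label selector below is an assumption - adjust it to match your installation.
kubectl logs -n <upgrade_controller_namespace> -l app.kubernetes.io/name=upgrade-controller -f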

31.2 Fleet

This section offers information on how to perform "Day 2" operations using the Fleet (Chapter 7, Fleet) component.

The following topics are covered as part of this section:

  1. Section 31.2.1, “Components” - default components used for all "Day 2" operations.

  2. Section 31.2.2, “Determine your use-case” - provides an overview of the Fleet custom resources that will be used and their suitability for different "Day 2" operations use-cases.

  3. Section 31.2.3, “Day 2 workflow” - provides a workflow guide for executing "Day 2" operations with Fleet.

  4. Section 31.2.4, “OS upgrade” - describes how to do OS upgrades using Fleet.

  5. Section 31.2.5, “Kubernetes version upgrade” - describes how to do Kubernetes version upgrades using Fleet.

  6. Section 31.2.6, “Helm chart upgrade” - describes how to do Helm chart upgrades using Fleet.

31.2.1 Components

Below you can find a description of the default components that should be set up on your management cluster so that you can successfully perform "Day 2" operations using Fleet.

31.2.1.1 Rancher

Optional. Responsible for managing downstream clusters and for deploying the System Upgrade Controller on your management cluster.

For more information, see Chapter 4, Rancher.

31.2.1.2 System Upgrade Controller (SUC)

System Upgrade Controller is responsible for executing tasks on specified nodes based on configuration data provided through a custom resource, called a Plan.

SUC is actively utilized to upgrade the operating system and Kubernetes distribution.

For more information about the SUC component and how it fits in the Edge stack, see Chapter 20, System Upgrade Controller.
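Before starting an upgrade, it can be useful to confirm that SUC is actually running. A minimal sketch, assuming SUC was deployed to the cattle-system namespace (adjust the namespace and deployment name to match your installation):

# Check that the System Upgrade Controller deployment is available.
kubectl get deployment system-upgrade-controller -n cattle-system

# Check that its pod is running.
kubectl get pods -n cattle-system | grep system-upgrade-controller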

31.2.2 Determine your use-case

Fleet uses two types of custom resources to enable the management of Kubernetes and Helm resources.

Below you can find information about the purpose of these resources and the use-cases they are best suited for in the context of "Day 2" operations.

31.2.2.1 GitRepo

A GitRepo is a Fleet (Chapter 7, Fleet) resource that represents a Git repository from which Fleet can create Bundles. Each Bundle is created based on configuration paths defined inside of the GitRepo resource. For more information, see the GitRepo documentation.

In the context of "Day 2" operations, GitRepo resources are normally used to deploy SUC or SUC Plans in non air-gapped environments that utilize a Fleet GitOps approach.

Alternatively, GitRepo resources can also be used to deploy SUC or SUC Plans on air-gapped environments, provided you mirror your repository setup through a local git server.

31.2.2.2 Bundle

Bundles hold raw Kubernetes resources that will be deployed on the targeted cluster. Usually they are created from a GitRepo resource, but there are use-cases where they can be deployed manually. For more information refer to the Bundle documentation.

In the context of "Day 2" operations, Bundle resources are normally used to deploy SUC or SUC Plans in air-gapped environments that do not use some form of local GitOps procedure (e.g. a local git server).

Alternatively, if your use-case does not allow for a GitOps workflow (e.g. using a Git repository), Bundle resources could also be used to deploy SUC or SUC Plans in non air-gapped environments.

31.2.3 Day 2 workflow

The following "Day 2" workflow should be followed when upgrading a management cluster to a specific Edge release:

  1. OS upgrade (Section 31.2.4, “OS upgrade”)

  2. Kubernetes version upgrade (Section 31.2.5, “Kubernetes version upgrade”)

  3. Helm chart upgrade (Section 31.2.6, “Helm chart upgrade”)

31.2.4 OS upgrade

This section describes how to perform an operating system upgrade using Chapter 7, Fleet and the Chapter 20, System Upgrade Controller.

The following topics are covered as part of this section:

  1. Section 31.2.4.1, “Components” - additional components used by the upgrade process.

  2. Section 31.2.4.2, “Overview” - overview of the upgrade process.

  3. Section 31.2.4.3, “Requirements” - requirements of the upgrade process.

  4. Section 31.2.4.4, “OS upgrade - SUC plan deployment” - information on how to deploy SUC plans, responsible for triggering the upgrade process.

31.2.4.1 Components

This section covers the custom components that the OS upgrade process uses over the default "Day 2" components (Section 31.2.1, “Components”).

31.2.4.1.1 systemd.service

The OS upgrade on a specific node is handled by a systemd.service.

A different service is created depending on what type of upgrade the OS requires from one Edge version to another:

  • For Edge versions that require the same OS version (e.g. 6.0), the os-pkg-update.service will be created. It uses transactional-update to perform a normal package upgrade.

  • For Edge versions that require an OS version migration (e.g. 5.5 → 6.0), the os-migration.service will be created. It uses transactional-update to perform:

    1. A normal package upgrade, ensuring that all packages are up-to-date before the migration and mitigating failures caused by old package versions.

    2. An OS migration by utilizing the zypper migration command.

The services mentioned above are shipped to each node through an SUC plan, which must be located on the management cluster that is in need of an OS upgrade.
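If you need to troubleshoot an upgrade on a particular node, the created unit can be inspected directly with systemd tooling. A minimal sketch, run on the node itself:

# For Edge versions that keep the same OS version:
systemctl status os-pkg-update.service
journalctl -u os-pkg-update.service --no-pager

# For Edge versions that require an OS version migration, inspect os-migration.service instead:
# journalctl -u os-migration.service --no-pager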

31.2.4.2 Overview

The upgrade of the operating system for management cluster nodes is done by utilizing Fleet and the System Upgrade Controller (SUC).

Fleet is used to deploy and manage SUC plans onto the desired cluster.

Note

SUC plans are custom resources that describe the steps that SUC needs to follow in order for a specific task to be executed on a set of nodes. For an example of what an SUC plan looks like, refer to the upstream repository.

The OS SUC plans are shipped to each cluster by deploying a GitRepo or Bundle resource to a specific Fleet workspace. Fleet retrieves the deployed GitRepo/Bundle and deploys its contents (the OS SUC plans) to the desired cluster(s).

Note

GitRepo/Bundle resources are always deployed on the management cluster. Whether to use a GitRepo or Bundle resource depends on your use-case; check Section 31.2.2, “Determine your use-case” for more information.

OS SUC plans describe the following workflow:

  1. Always cordon the nodes before OS upgrades.

  2. Always upgrade control-plane nodes before worker nodes.

  3. Always upgrade the cluster one node at a time.

Once the OS SUC plans are deployed, the workflow looks like this:

  1. SUC reconciles the deployed OS SUC plans and creates a Kubernetes Job on each node.

  2. The Kubernetes Job creates a systemd.service (Section 31.2.4.1.1, “systemd.service”) for either package upgrade, or OS migration.

  3. The created systemd.service triggers the OS upgrade process on the specific node.

    Important

    Once the OS upgrade process finishes, the corresponding node will be rebooted to apply the updates on the system.

Below you can find a diagram of the above description:

fleet day2 management os upgrade
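While the upgrade is running, the SUC plans and the Jobs they create can be observed with standard kubectl commands. This is a minimal sketch; the cattle-system namespace is an assumption and should be replaced with the namespace where SUC runs:

# Watch the OS upgrade SUC plans and the Jobs created from them.
kubectl get plans.upgrade.cattle.io -n cattle-system
kubectl get jobs -n cattle-system

# Nodes are cordoned during the upgrade, so their scheduling status is a useful signal.
kubectl get nodes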

31.2.4.3 Requirements

General:

  1. SCC registered machine - All management cluster nodes should be registered to https://scc.suse.com/, which is needed so that the respective systemd.service can successfully connect to the desired RPM repository.

    Important

    For Edge releases that require an OS version migration (e.g. 5.5 → 6.0), make sure that your SCC key supports the migration to the new version.

  2. Make sure that SUC Plan tolerations match node tolerations - If your Kubernetes cluster nodes have custom taints, make sure to add tolerations for those taints in the SUC Plans. By default, SUC Plans have tolerations only for control-plane nodes. Default tolerations include:

    • CriticalAddonsOnly=true:NoExecute

    • node-role.kubernetes.io/control-plane:NoSchedule

    • node-role.kubernetes.io/etcd:NoExecute

      Note

      Any additional tolerations must be added under the .spec.tolerations section of each Plan. SUC Plans related to the OS upgrade can be found in the suse-edge/fleet-examples repository under fleets/day2/system-upgrade-controller-plans/os-upgrade. Make sure you use the Plans from a valid repository release tag.

      An example of defining custom tolerations for the control-plane SUC Plan would look like this:

      apiVersion: upgrade.cattle.io/v1
      kind: Plan
      metadata:
        name: os-upgrade-control-plane
      spec:
        ...
        tolerations:
        # default tolerations
        - key: "CriticalAddonsOnly"
          operator: "Equal"
          value: "true"
          effect: "NoExecute"
        - key: "node-role.kubernetes.io/control-plane"
          operator: "Equal"
          effect: "NoSchedule"
        - key: "node-role.kubernetes.io/etcd"
          operator: "Equal"
          effect: "NoExecute"
        # custom toleration
        - key: "foo"
          operator: "Equal"
          value: "bar"
          effect: "NoSchedule"
      ...

Air-gapped:

  1. Mirror SUSE RPM repositories - OS RPM repositories should be locally mirrored so that the systemd.service has access to them. This can be achieved by using either RMT or SUMA. A quick node-side verification sketch follows below.
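A minimal sketch for verifying on a node that the mirrored repositories are in place; the mirror URL is a placeholder for your RMT/SUMA server:

# List the configured repositories and their URLs - they should point to the local mirror.
zypper lr -u

# Optionally confirm that the mirror is reachable from the node (placeholder URL).
curl -kI https://rmt.example.com/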

31.2.4.4 OS upgrade - SUC plan deployment

Important

For environments previously upgraded using this procedure, users should ensure that one of the following steps is completed:

  • Remove any previously deployed SUC Plans related to older Edge release versions from the management cluster - can be done by removing the desired cluster from the existing GitRepo/Bundle target configuration, or removing the GitRepo/Bundle resource altogether.

  • Reuse the existing GitRepo/Bundle resource - can be done by pointing the resource’s revision to a new tag that holds the correct fleets for the desired suse-edge/fleet-examples release.

This is done in order to avoid clashes between SUC Plans for older Edge release versions.

If users attempt to upgrade while there are existing SUC Plans on the management cluster, they will see the following Fleet error:

Not installed: Unable to continue with install: Plan <plan_name> in namespace <plan_namespace> exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error..
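A minimal cleanup sketch for the first option; the resource names follow the examples used later in this section and should be adjusted to whatever was deployed during the previous upgrade:

# Remove the previously deployed GitRepo resource...
kubectl delete gitrepo os-upgrade -n fleet-local

# ...or, if a Bundle resource was used instead:
kubectl delete bundle os-upgrade -n fleet-local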

As mentioned in Section 31.2.4.2, “Overview”, OS upgrades are done by shipping SUC plans to the desired cluster in one of the following ways: through a GitRepo resource (Section 31.2.4.4.1, “SUC plan deployment - GitRepo resource”) or through a Bundle resource (Section 31.2.4.4.2, “SUC plan deployment - Bundle resource”).

To determine which resource you should use, refer to Section 31.2.2, “Determine your use-case”.

For use-cases where you wish to deploy the OS SUC plans from a third-party GitOps tool, refer to Section 31.2.4.4.3, “SUC Plan deployment - third-party GitOps workflow”.

31.2.4.4.1 SUC plan deployment - GitRepo resource

A GitRepo resource that ships the needed OS SUC plans can be deployed in one of the following ways:

  1. Through the Rancher UI - Section 31.2.4.4.1.1, “GitRepo creation - Rancher UI” (when Rancher is available).

  2. By manually deploying (Section 31.2.4.4.1.2, “GitRepo creation - manual”) the resource to your management cluster.

Once deployed, to monitor the OS upgrade process of the nodes of your targeted cluster, refer to Section 20.3, “Monitoring System Upgrade Controller Plans”.

31.2.4.4.1.1 GitRepo creation - Rancher UI

To create a GitRepo resource through the Rancher UI, follow their official documentation.

The Edge team maintains a ready-to-use fleet. Depending on your environment, this fleet can be used directly or as a template.

Important

Always use this fleet from a valid Edge release tag.

For use-cases where no custom changes need to be made to the SUC plans that the fleet ships, users can directly refer to the os-upgrade fleet from the suse-edge/fleet-examples repository.

In cases where custom changes are needed (e.g. to add custom tolerations), users should refer to the os-upgrade fleet from a separate repository, allowing them to add the changes to the SUC plans as required.

An example of how a GitRepo can be configured to use the fleet from the suse-edge/fleet-examples repository can be viewed here. A minimal sketch is also shown below.
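The sketch below illustrates what such a GitRepo resource could look like, written as a kubectl manifest for illustration; the revision value is illustrative and should always point to a valid Edge release tag:

kubectl apply -f - <<EOF
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: os-upgrade
  namespace: fleet-local
spec:
  repo: https://github.com/suse-edge/fleet-examples.git
  # Always pin to a valid Edge release tag
  revision: release-3.2.0
  paths:
  - fleets/day2/system-upgrade-controller-plans/os-upgrade
EOF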

31.2.4.4.1.2 GitRepo creation - manual
  1. Pull the GitRepo resource:

    curl -o os-upgrade-gitrepo.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags/release-3.2.0/gitrepos/day2/os-upgrade-gitrepo.yaml
  2. Edit the GitRepo configuration:

    • Remove the spec.targets section - only needed for downstream clusters.

      # Example using sed
      sed -i.bak '/^  targets:/,$d' os-upgrade-gitrepo.yaml && rm -f os-upgrade-gitrepo.yaml.bak
      
      # Example using yq (v4+)
      yq eval 'del(.spec.targets)' -i os-upgrade-gitrepo.yaml
    • Point the namespace of the GitRepo to the fleet-local namespace - done in order to deploy the resource on the management cluster.

      # Example using sed
      sed -i.bak 's/namespace: fleet-default/namespace: fleet-local/' os-upgrade-gitrepo.yaml && rm -f os-upgrade-gitrepo.yaml.bak
      
      # Example using yq (v4+)
      yq eval '.metadata.namespace = "fleet-local"' -i os-upgrade-gitrepo.yaml
  3. Apply the GitRepo resource to your management cluster:

    kubectl apply -f os-upgrade-gitrepo.yaml
  4. View the created GitRepo resource under the fleet-local namespace:

    kubectl get gitrepo os-upgrade -n fleet-local
    
    # Example output
    NAME            REPO                                              COMMIT         BUNDLEDEPLOYMENTS-READY   STATUS
    os-upgrade      https://github.com/suse-edge/fleet-examples.git   release-3.2.0  0/0
31.2.4.4.2 SUC plan deployment - Bundle resource

A Bundle resource that ships the needed OS SUC Plans can be deployed in one of the following ways:

  1. Through the Rancher UI - Section 31.2.4.4.2.1, “Bundle creation - Rancher UI” (when Rancher is available).

  2. By manually deploying (Section 31.2.4.4.2.2, “Bundle creation - manual”) the resource to your management cluster.

Once deployed, to monitor the OS upgrade process of the nodes of your targeted cluster, refer to Section 20.3, “Monitoring System Upgrade Controller Plans”.

31.2.4.4.2.1 Bundle creation - Rancher UI

The Edge team maintains a ready-to-use bundle that can be used in the steps below.

Important

Always use this bundle from a valid Edge release tag.

To create a bundle through Rancher’s UI:

  1. In the upper left corner, click ☰ → Continuous Delivery

  2. Go to Advanced > Bundles

  3. Select Create from YAML

  4. From here you can create the Bundle in one of the following ways:

    Note

    There might be use-cases where you would need to include custom changes to the SUC plans that the bundle ships (e.g. to add custom tolerations). Make sure to include those changes in the bundle that will be generated by the below steps.

    1. By manually copying the bundle content from suse-edge/fleet-examples to the Create from YAML page.

    2. By cloning the suse-edge/fleet-examples repository from the desired release tag and selecting the Read from File option in the Create from YAML page. From there, navigate to the bundle location (bundles/day2/system-upgrade-controller-plans/os-upgrade) and select the bundle file. This will auto-populate the Create from YAML page with the bundle content.

  5. Edit the Bundle in the Rancher UI:

    • Change the namespace of the Bundle to point to the fleet-local namespace.

      # Example
      kind: Bundle
      apiVersion: fleet.cattle.io/v1alpha1
      metadata:
        name: os-upgrade
        namespace: fleet-local
      ...
    • Change the target clusters for the Bundle to point to your local (management) cluster:

      spec:
        targets:
        - clusterName: local
      Note

      There are some use-cases where your local cluster could have a different name.

      To retrieve your local cluster name, execute the command below:

      kubectl get clusters.fleet.cattle.io -n fleet-local
  6. Select Create

31.2.4.4.2.2 Bundle creation - manual
  1. Pull the Bundle resource:

    curl -o os-upgrade-bundle.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags/release-3.2.0/bundles/day2/system-upgrade-controller-plans/os-upgrade/os-upgrade-bundle.yaml
  2. Edit the Bundle configuration:

    • Change the target clusters for the Bundle to point to your local (management) cluster:

      spec:
        targets:
        - clusterName: local
      Note

      There are some use-cases where your local cluster could have a different name.

      To retrieve your local cluster name, execute the command below:

      kubectl get clusters.fleet.cattle.io -n fleet-local
    • Change the namespace of the Bundle to point to the fleet-local namespace.

      # Example
      kind: Bundle
      apiVersion: fleet.cattle.io/v1alpha1
      metadata:
        name: os-upgrade
        namespace: fleet-local
      ...
  3. Apply the Bundle resource to your management cluster:

    kubectl apply -f os-upgrade-bundle.yaml
  4. View the created Bundle resource under the fleet-local namespace:

    kubectl get bundles -n fleet-local
31.2.4.4.3 SUC Plan deployment - third-party GitOps workflow

There might be use-cases where users would like to incorporate the OS SUC plans into their own third-party GitOps workflow (e.g. Flux).

To get the OS upgrade resources that you need, first determine the Edge release tag of the suse-edge/fleet-examples repository that you would like to use.

After that, resources can be found at fleets/day2/system-upgrade-controller-plans/os-upgrade, where:

  • plan-control-plane.yaml is a SUC plan resource for control-plane nodes.

  • plan-worker.yaml is a SUC plan resource for worker nodes.

  • secret.yaml is a Secret that contains the upgrade.sh script, which is responsible for creating the systemd.service (Section 31.2.4.1.1, “systemd.service”).

  • config-map.yaml is a ConfigMap that holds configurations that are consumed by the upgrade.sh script.

Important

These Plan resources are interpreted by the System Upgrade Controller and should be deployed on the management cluster that you wish to upgrade. For SUC deployment information, see Section 20.2, “Installing the System Upgrade Controller”.

To better understand how your GitOps workflow can be used to deploy the SUC Plans for the OS upgrade, it can be beneficial to take a look at the overview (Section 31.2.4.2, “Overview”) of the upgrade procedure using Fleet.

31.2.5 Kubernetes version upgrade

This section describes how to perform a Kubernetes upgrade using Chapter 7, Fleet and the Chapter 20, System Upgrade Controller.

The following topics are covered as part of this section:

  1. Section 31.2.5.1, “Components” - additional components used by the upgrade process.

  2. Section 31.2.5.2, “Overview” - overview of the upgrade process.

  3. Section 31.2.5.3, “Requirements” - requirements of the upgrade process.

  4. Section 31.2.5.4, “K8s upgrade - SUC plan deployment” - information on how to deploy SUC plans, responsible for triggering the upgrade process.

31.2.5.1 Components

This section covers the custom components that the K8s upgrade process uses over the default "Day 2" components (Section 31.2.1, “Components”).

31.2.5.1.1 rke2-upgrade

Container image responsible for upgrading the RKE2 version of a specific node.

Shipped through a Pod created by SUC based on a SUC Plan. The Plan should be located on each cluster that is in need of a RKE2 upgrade.

For more information regarding how the rke2-upgrade image performs the upgrade, see the upstream documentation.

31.2.5.1.2 k3s-upgrade

Container image responsible for upgrading the K3s version of a specific node.

Shipped through a Pod created by SUC based on a SUC Plan. The Plan should be located on each cluster that is in need of a K3s upgrade.

For more information regarding how the k3s-upgrade image performs the upgrade, see the upstream documentation.

31.2.5.2 Overview

The Kubernetes distribution upgrade for management cluster nodes is done by utilizing Fleet and the System Upgrade Controller (SUC).

Fleet is used to deploy and manage SUC plans onto the desired cluster.

Note

SUC plans are custom resources that describe the steps that SUC needs to follow in order for a specific task to be executed on a set of nodes. For an example of what an SUC plan looks like, refer to the upstream repository.

The K8s SUC plans are shipped to each cluster by deploying a GitRepo or Bundle resource to a specific Fleet workspace. Fleet retrieves the deployed GitRepo/Bundle and deploys its contents (the K8s SUC plans) to the desired cluster(s).

Note

GitRepo/Bundle resources are always deployed on the management cluster. Whether to use a GitRepo or Bundle resource depends on your use-case; check Section 31.2.2, “Determine your use-case” for more information.

K8s SUC plans describe the following workflow:

  1. Always cordon the nodes before K8s upgrades.

  2. Always upgrade control-plane nodes before worker nodes.

  3. Always upgrade the control-plane nodes one node at a time and the worker nodes two nodes at a time.

Once the K8s SUC plans are deployed, the workflow looks like this:

  1. SUC reconciles the deployed K8s SUC plans and creates a Kubernetes Job on each node.

  2. Depending on the Kubernetes distribution, the Job will create a Pod that runs either the rke2-upgrade (Section 31.2.5.1.1, “rke2-upgrade”) or the k3s-upgrade (Section 31.2.5.1.2, “k3s-upgrade”) container image.

  3. The created Pod will go through the following workflow:

    1. Replace the existing rke2/k3s binary on the node with the one from the rke2-upgrade/k3s-upgrade image.

    2. Kill the running rke2/k3s process.

  4. Killing the rke2/k3s process triggers a restart, launching a new process that runs the updated binary, resulting in an upgraded Kubernetes distribution version.

Below you can find a diagram of the above description:

fleet day2 management k8s upgrade
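After the plans have finished, a quick sanity check is to confirm that every node reports the expected Kubernetes version:

# The VERSION column should show the new RKE2/K3s release on all nodes.
kubectl get nodes -o wide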

31.2.5.3 Requirements

  1. Backup your Kubernetes distribution:

    1. For RKE2 clusters, see the RKE2 Backup and Restore documentation.

    2. For K3s clusters, see the K3s Backup and Restore documentation.

  2. Make sure that SUC Plan tolerations match node tolerations - If your Kubernetes cluster nodes have custom taints, make sure to add tolerations for those taints in the SUC Plans. By default, SUC Plans have tolerations only for control-plane nodes. Default tolerations include:

    • CriticalAddonsOnly=true:NoExecute

    • node-role.kubernetes.io/control-plane:NoSchedule

    • node-role.kubernetes.io/etcd:NoExecute

      Note

      Any additional tolerations must be added under the .spec.tolerations section of each Plan. SUC Plans related to the Kubernetes version upgrade can be found in the suse-edge/fleet-examples repository under:

      • For RKE2 - fleets/day2/system-upgrade-controller-plans/rke2-upgrade

      • For K3s - fleets/day2/system-upgrade-controller-plans/k3s-upgrade

      Make sure you use the Plans from a valid repository release tag.

      An example of defining custom tolerations for the RKE2 control-plane SUC Plan would look like this:

      apiVersion: upgrade.cattle.io/v1
      kind: Plan
      metadata:
        name: rke2-upgrade-control-plane
      spec:
        ...
        tolerations:
        # default tolerations
        - key: "CriticalAddonsOnly"
          operator: "Equal"
          value: "true"
          effect: "NoExecute"
        - key: "node-role.kubernetes.io/control-plane"
          operator: "Equal"
          effect: "NoSchedule"
        - key: "node-role.kubernetes.io/etcd"
          operator: "Equal"
          effect: "NoExecute"
        # custom toleration
        - key: "foo"
          operator: "Equal"
          value: "bar"
          effect: "NoSchedule"
      ...

31.2.5.4 K8s upgrade - SUC plan deployment

Important

For environments previously upgraded using this procedure, users should ensure that one of the following steps is completed:

  • Remove any previously deployed SUC Plans related to older Edge release versions from the management cluster - can be done by removing the desired cluster from the existing GitRepo/Bundle target configuration, or removing the GitRepo/Bundle resource altogether.

  • Reuse the existing GitRepo/Bundle resource - can be done by pointing the resource’s revision to a new tag that holds the correct fleets for the desired suse-edge/fleet-examples release.

This is done in order to avoid clashes between SUC Plans for older Edge release versions.

If users attempt to upgrade while there are existing SUC Plans on the management cluster, they will see the following Fleet error:

Not installed: Unable to continue with install: Plan <plan_name> in namespace <plan_namespace> exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error..

As mentioned in Section 31.2.5.2, “Overview”, Kubernetes upgrades are done by shipping SUC plans to the desired cluster in one of the following ways: through a GitRepo resource (Section 31.2.5.4.1, “SUC plan deployment - GitRepo resource”) or through a Bundle resource (Section 31.2.5.4.2, “SUC plan deployment - Bundle resource”).

To determine which resource you should use, refer to Section 31.2.2, “Determine your use-case”.

For use-cases where you wish to deploy the K8s SUC plans from a third-party GitOps tool, refer to Section 31.2.5.4.3, “SUC Plan deployment - third-party GitOps workflow”.

31.2.5.4.1 SUC plan deployment - GitRepo resource

A GitRepo resource that ships the needed K8s SUC plans can be deployed in one of the following ways:

  1. Through the Rancher UI - Section 31.2.5.4.1.1, “GitRepo creation - Rancher UI” (when Rancher is available).

  2. By manually deploying (Section 31.2.5.4.1.2, “GitRepo creation - manual”) the resource to your management cluster.

Once deployed, to monitor the Kubernetes upgrade process of the nodes of your targeted cluster, refer to Section 20.3, “Monitoring System Upgrade Controller Plans”.

31.2.5.4.1.1 GitRepo creation - Rancher UI

To create a GitRepo resource through the Rancher UI, follow their official documentation.

The Edge team maintains ready-to-use fleets for both rke2 and k3s Kubernetes distributions. Depending on your environment, these fleets can be used directly or as templates.

Important

Always use these fleets from a valid Edge release tag.

For use-cases where no custom changes need to be made to the SUC plans that these fleets ship, users can directly refer to the fleets from the suse-edge/fleet-examples repository.

In cases where custom changes are needed (e.g. to add custom tolerations), users should refer to the fleets from a separate repository, allowing them to add the changes to the SUC plans as required.

Configuration examples for a GitRepo resource using the fleets from the suse-edge/fleet-examples repository can be found under gitrepos/day2 in that repository (rke2-upgrade-gitrepo.yaml for RKE2 and k3s-upgrade-gitrepo.yaml for K3s).

31.2.5.4.1.2 GitRepo creation - manual
  1. Pull the GitRepo resource:

    • For RKE2 clusters:

      curl -o rke2-upgrade-gitrepo.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags/release-3.2.0/gitrepos/day2/rke2-upgrade-gitrepo.yaml
    • For K3s clusters:

      curl -o k3s-upgrade-gitrepo.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags/release-3.2.0/gitrepos/day2/k3s-upgrade-gitrepo.yaml
  2. Edit the GitRepo configuration:

    • Remove the spec.targets section - only needed for downstream clusters.

      • For RKE2:

        # Example using sed
        sed -i.bak '/^  targets:/,$d' rke2-upgrade-gitrepo.yaml && rm -f rke2-upgrade-gitrepo.yaml.bak
        
        # Example using yq (v4+)
        yq eval 'del(.spec.targets)' -i rke2-upgrade-gitrepo.yaml
      • For K3s:

        # Example using sed
        sed -i.bak '/^  targets:/,$d' k3s-upgrade-gitrepo.yaml && rm -f k3s-upgrade-gitrepo.yaml.bak
        
        # Example using yq (v4+)
        yq eval 'del(.spec.targets)' -i k3s-upgrade-gitrepo.yaml
    • Point the namespace of the GitRepo to the fleet-local namespace - done in order to deploy the resource on the management cluster.

      • For RKE2:

        # Example using sed
        sed -i.bak 's/namespace: fleet-default/namespace: fleet-local/' rke2-upgrade-gitrepo.yaml && rm -f rke2-upgrade-gitrepo.yaml.bak
        
        # Example using yq (v4+)
        yq eval '.metadata.namespace = "fleet-local"' -i rke2-upgrade-gitrepo.yaml
      • For K3s:

        # Example using sed
        sed -i.bak 's/namespace: fleet-default/namespace: fleet-local/' k3s-upgrade-gitrepo.yaml && rm -f k3s-upgrade-gitrepo.yaml.bak
        
        # Example using yq (v4+)
        yq eval '.metadata.namespace = "fleet-local"' -i k3s-upgrade-gitrepo.yaml
  3. Apply the GitRepo resources to your management cluster:

    # RKE2
    kubectl apply -f rke2-upgrade-gitrepo.yaml
    
    # K3s
    kubectl apply -f k3s-upgrade-gitrepo.yaml
  4. View the created GitRepo resource under the fleet-local namespace:

    # RKE2
    kubectl get gitrepo rke2-upgrade -n fleet-local
    
    # K3s
    kubectl get gitrepo k3s-upgrade -n fleet-local
    
    # Example output
    NAME           REPO                                              COMMIT          BUNDLEDEPLOYMENTS-READY   STATUS
    k3s-upgrade    https://github.com/suse-edge/fleet-examples.git   release-3.2.0   0/0
    rke2-upgrade   https://github.com/suse-edge/fleet-examples.git   release-3.2.0   0/0
31.2.5.4.2 SUC plan deployment - Bundle resource

A Bundle resource that ships the needed Kubernetes upgrade SUC Plans can be deployed in one of the following ways:

  1. Through the Rancher UI - Section 31.2.5.4.2.1, “Bundle creation - Rancher UI” (when Rancher is available).

  2. By manually deploying (Section 31.2.5.4.2.2, “Bundle creation - manual”) the resource to your management cluster.

Once deployed, to monitor the Kubernetes upgrade process of the nodes of your targeted cluster, refer to Section 20.3, “Monitoring System Upgrade Controller Plans”.

31.2.5.4.2.1 Bundle creation - Rancher UI

The Edge team maintains ready-to-use bundles for both rke2 and k3s Kubernetes distributions. Depending on your environment, these bundles can be used directly or as templates.

Important

Always use these bundles from a valid Edge release tag.

To create a bundle through Rancher’s UI:

  1. In the upper left corner, click ☰ → Continuous Delivery

  2. Go to Advanced > Bundles

  3. Select Create from YAML

  4. From here you can create the Bundle in one of the following ways:

    Note

    There might be use-cases where you would need to include custom changes to the SUC plans that the bundle ships (e.g. to add custom tolerations). Make sure to include those changes in the bundle that will be generated by the below steps.

    1. By manually copying the bundle content for RKE2 or K3s from suse-edge/fleet-examples to the Create from YAML page.

    2. By cloning the suse-edge/fleet-examples repository from the desired release tag and selecting the Read from File option in the Create from YAML page. From there, navigate to the bundle that you need (bundles/day2/system-upgrade-controller-plans/rke2-upgrade/plan-bundle.yaml for RKE2 and bundles/day2/system-upgrade-controller-plans/k3s-upgrade/plan-bundle.yaml for K3s). This will auto-populate the Create from YAML page with the bundle content.

  5. Edit the Bundle in the Rancher UI:

    • Change the namespace of the Bundle to point to the fleet-local namespace.

      # Example
      kind: Bundle
      apiVersion: fleet.cattle.io/v1alpha1
      metadata:
        name: rke2-upgrade
        namespace: fleet-local
      ...
    • Change the target clusters for the Bundle to point to your local (management) cluster:

      spec:
        targets:
        - clusterName: local
      Note

      There are some use-cases where your local cluster could have a different name.

      To retrieve your local cluster name, execute the command below:

      kubectl get clusters.fleet.cattle.io -n fleet-local
  6. Select Create

31.2.5.4.2.2 Bundle creation - manual
  1. Pull the Bundle resources:

    • For RKE2 clusters:

      curl -o rke2-plan-bundle.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags/release-3.2.0/bundles/day2/system-upgrade-controller-plans/rke2-upgrade/plan-bundle.yaml
    • For K3s clusters:

      curl -o k3s-plan-bundle.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags/release-3.2.0/bundles/day2/system-upgrade-controller-plans/k3s-upgrade/plan-bundle.yaml
  2. Edit the Bundle configuration:

    • Change the target clusters for the Bundle to point to your local (management) cluster:

      spec:
        targets:
        - clusterName: local
      Note

      There are some use-cases where your local cluster could have a different name.

      To retrieve your local cluster name, execute the command below:

      kubectl get clusters.fleet.cattle.io -n fleet-local
    • Change the namespace of the Bundle to point to the fleet-local namespace.

      # Example
      kind: Bundle
      apiVersion: fleet.cattle.io/v1alpha1
      metadata:
        name: rke2-upgrade
        namespace: fleet-local
      ...
  3. Apply the Bundle resources to your management cluster:

    # For RKE2
    kubectl apply -f rke2-plan-bundle.yaml
    
    # For K3s
    kubectl apply -f k3s-plan-bundle.yaml
  4. View the created Bundle resource under the fleet-local namespace:

    # For RKE2
    kubectl get bundles rke2-upgrade -n fleet-local
    
    # For K3s
    kubectl get bundles k3s-upgrade -n fleet-local
    
    # Example output
    NAME           BUNDLEDEPLOYMENTS-READY   STATUS
    k3s-upgrade    0/0
    rke2-upgrade   0/0
31.2.5.4.3 SUC Plan deployment - third-party GitOps workflow

There might be use-cases where users would like to incorporate the Kubernetes upgrade SUC plans into their own third-party GitOps workflow (e.g. Flux).

To get the K8s upgrade resources that you need, first determine the Edge release tag of the suse-edge/fleet-examples repository that you would like to use.

After that, the resources can be found at:

  • For a RKE2 cluster upgrade:

    • For control-plane nodes - fleets/day2/system-upgrade-controller-plans/rke2-upgrade/plan-control-plane.yaml

    • For worker nodes - fleets/day2/system-upgrade-controller-plans/rke2-upgrade/plan-worker.yaml

  • For a K3s cluster upgrade:

    • For control-plane nodes - fleets/day2/system-upgrade-controller-plans/k3s-upgrade/plan-control-plane.yaml

    • For worker nodes - fleets/day2/system-upgrade-controller-plans/k3s-upgrade/plan-worker.yaml

Important

These Plan resources are interpreted by the System Upgrade Controller and should be deployed on the management cluster that you wish to upgrade. For SUC deployment information, see Section 20.2, “Installing the System Upgrade Controller”.

To better understand how your GitOps workflow can be used to deploy the SUC Plans for Kubernetes version upgrade, it can be beneficial to take a look at the overview (Section 31.2.5.2, “Overview”) of the update procedure using Fleet.

31.2.6 Helm chart upgrade

This section covers the following parts:

  1. Section 31.2.6.1, “Preparation for air-gapped environments” - holds information on how to ship Edge related OCI charts and images to your private registry.

  2. Section 31.2.6.2, “Upgrade procedure” - holds information on different Helm chart upgrade use-cases and their upgrade procedure.

31.2.6.1 Preparation for air-gapped environments

31.2.6.1.1 Ensure you have access to your Helm chart Fleet

Depending on what your environment supports, you can take one of the following options:

  1. Host your chart’s Fleet resources on a local Git server that is accessible by your management cluster.

  2. Use Fleet’s CLI to convert a Helm chart into a Bundle that you can use directly, without having to host it anywhere. Fleet’s CLI can be retrieved from its release page; for Mac users there is a fleet-cli Homebrew formula.

31.2.6.1.2 Find the required assets for your Edge release version
  1. Go to the "Day 2" release page, find the Edge release that you want to upgrade your chart to, and click Assets.

  2. From the "Assets" section, download the following files:

    • edge-save-images.sh - Pulls the images specified in the edge-release-images.txt file and packages them inside of a '.tar.gz' archive.

    • edge-save-oci-artefacts.sh - Pulls the OCI chart images related to the specific Edge release and packages them inside of a '.tar.gz' archive.

    • edge-load-images.sh - Loads images from a '.tar.gz' archive, retags and pushes them to a private registry.

    • edge-load-oci-artefacts.sh - Takes a directory containing Edge OCI '.tgz' chart packages and loads them to a private registry.

    • edge-release-helm-oci-artefacts.txt - Contains a list of OCI chart images related to a specific Edge release.

    • edge-release-images.txt - Contains a list of images related to a specific Edge release.

31.2.6.1.3 Create the Edge release images archive

On a machine with internet access:

  1. Make edge-save-images.sh executable:

    chmod +x edge-save-images.sh
  2. Generate the image archive:

    ./edge-save-images.sh --source-registry registry.suse.com
  3. This will create a ready-to-load archive named edge-images.tar.gz.

    Note

    If the -i|--images option is specified, the name of the archive may differ.

  4. Copy this archive to your air-gapped machine:

    scp edge-images.tar.gz <user>@<machine_ip>:/path
31.2.6.1.4 Create the Edge OCI chart images archive

On a machine with internet access:

  1. Make edge-save-oci-artefacts.sh executable:

    chmod +x edge-save-oci-artefacts.sh
  2. Generate the OCI chart image archive:

    ./edge-save-oci-artefacts.sh --source-registry registry.suse.com
  3. This will create an archive named oci-artefacts.tar.gz.

    Note

    If the -a|--archive option is specified, the name of the archive may differ.

  4. Copy this archive to your air-gapped machine:

    scp oci-artefacts.tar.gz <user>@<machine_ip>:/path
31.2.6.1.5 Load Edge release images to your air-gapped machine

On your air-gapped machine:

  1. Log into your private registry (if required):

    podman login <REGISTRY.YOURDOMAIN.COM:PORT>
  2. Make edge-load-images.sh executable:

    chmod +x edge-load-images.sh
  3. Execute the script, passing the previously copied edge-images.tar.gz archive:

    ./edge-load-images.sh --source-registry registry.suse.com --registry <REGISTRY.YOURDOMAIN.COM:PORT> --images edge-images.tar.gz
    Note

    This will load all images from the edge-images.tar.gz archive, retag them and push them to the registry specified under the --registry option.

31.2.6.1.6 Load the Edge OCI chart images to your air-gapped machine

On your air-gapped machine:

  1. Log into your private registry (if required):

    podman login <REGISTRY.YOURDOMAIN.COM:PORT>
  2. Make edge-load-oci-artefacts.sh executable:

    chmod +x edge-load-oci-artefacts.sh
  3. Untar the copied oci-artefacts.tar.gz archive:

    tar -xvf oci-artefacts.tar.gz
  4. This will produce a directory with the naming template edge-release-oci-tgz-<date>.

  5. Pass this directory to the edge-load-oci-artefacts.sh script to load the Edge OCI chart images to your private registry:

    Note

    This script assumes the helm CLI has been pre-installed on your environment. For Helm installation instructions, see Installing Helm.

    ./edge-load-oci-artefacts.sh --archive-directory edge-release-oci-tgz-<date> --registry <REGISTRY.YOURDOMAIN.COM:PORT> --source-registry registry.suse.com
31.2.6.1.7 Configure your private registry in your Kubernetes distribution

For RKE2, see Private Registry Configuration

For K3s, see Private Registry Configuration
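As a minimal sketch (based on the upstream private registry documentation linked above), the mirror configuration on an RKE2 node could look like the following; the registry address and credentials are placeholders, and for K3s the file lives under /etc/rancher/k3s/registries.yaml instead:

# Write a minimal mirror configuration that redirects pulls from registry.suse.com
# to the private registry (placeholders must be replaced with real values).
cat > /etc/rancher/rke2/registries.yaml <<EOF
mirrors:
  registry.suse.com:
    endpoint:
      - "https://<REGISTRY.YOURDOMAIN.COM:PORT>"
configs:
  "<REGISTRY.YOURDOMAIN.COM:PORT>":
    auth:
      username: <user>
      password: <password>
EOF

# Restart the Kubernetes service on the node so the configuration is picked up,
# e.g. systemctl restart rke2-server (or rke2-agent / k3s, depending on the node).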

31.2.6.2 Upgrade procedure

This section focuses on the following Helm upgrade procedure use-cases:

  1. Section 31.2.6.2.1, “I have a new cluster and would like to deploy and manage an Edge Helm chart”

  2. Section 31.2.6.2.2, “I would like to upgrade a Fleet managed Helm chart”

  3. Section 31.2.6.2.3, “I would like to upgrade a Helm chart deployed via EIB”

Important

Manually deployed Helm charts cannot be reliably upgraded. We suggest redeploying the Helm chart using the method described in Section 31.2.6.2.1, “I have a new cluster and would like to deploy and manage an Edge Helm chart”.

31.2.6.2.1 I have a new cluster and would like to deploy and manage an Edge Helm chart

This section covers how to:

  1. Prepare the fleet resources for your chart (Section 31.2.6.2.1.1, “Prepare the fleet resources for your chart”)

  2. Deploy the fleet for your chart (Section 31.2.6.2.1.2, “Deploy the fleet for your chart”)

  3. Manage the deployed Helm chart (Section 31.2.6.2.1.3, “Manage the deployed Helm chart”)

31.2.6.2.1.1 Prepare the fleet resources for your chart
  1. Acquire the chart’s Fleet resources from the Edge release tag that you wish to use.

  2. Navigate to the Helm chart fleet (fleets/day2/chart-templates/<chart>)

  3. If you intend to use a GitOps workflow, copy the chart Fleet directory to the Git repository from where you will do GitOps.

  4. Optionally, if the Helm chart requires configurations to its values, edit the .helm.values configuration inside the fleet.yaml file of the copied directory.

  5. Optionally, there may be use-cases where you need to add additional resources to your chart’s fleet so that it can better fit your environment. For information on how to enhance your Fleet directory, see Git Repository Contents.

Note

In some cases, the default timeout Fleet uses for Helm operations may be insufficient, resulting in the following error:

failed pre-install: context deadline exceeded

In such cases, add the timeoutSeconds property under the helm configuration of your fleet.yaml file.

An example for the longhorn helm chart would look like:

  • User Git repository structure:

    <user_repository_root>
    ├── longhorn
    │   └── fleet.yaml
    └── longhorn-crd
        └── fleet.yaml
  • fleet.yaml content populated with user Longhorn data:

    defaultNamespace: longhorn-system
    
    helm:
      # timeoutSeconds: 10
      releaseName: "longhorn"
      chart: "longhorn"
      repo: "https://charts.rancher.io/"
      version: "105.1.0+up1.7.2"
      takeOwnership: true
      # custom chart value overrides
      values:
        # Example for user provided custom values content
        defaultSettings:
          deletingConfirmationFlag: true
    
    # https://fleet.rancher.io/bundle-diffs
    diff:
      comparePatches:
      - apiVersion: apiextensions.k8s.io/v1
        kind: CustomResourceDefinition
        name: engineimages.longhorn.io
        operations:
        - {"op":"remove", "path":"/status/conditions"}
        - {"op":"remove", "path":"/status/storedVersions"}
        - {"op":"remove", "path":"/status/acceptedNames"}
      - apiVersion: apiextensions.k8s.io/v1
        kind: CustomResourceDefinition
        name: nodes.longhorn.io
        operations:
        - {"op":"remove", "path":"/status/conditions"}
        - {"op":"remove", "path":"/status/storedVersions"}
        - {"op":"remove", "path":"/status/acceptedNames"}
      - apiVersion: apiextensions.k8s.io/v1
        kind: CustomResourceDefinition
        name: volumes.longhorn.io
        operations:
        - {"op":"remove", "path":"/status/conditions"}
        - {"op":"remove", "path":"/status/storedVersions"}
        - {"op":"remove", "path":"/status/acceptedNames"}
    Note

    These are just example values that are used to illustrate custom configurations over the longhorn chart. They should NOT be treated as deployment guidelines for the longhorn chart.

31.2.6.2.1.2 Deploy the fleet for your chart

You can deploy the fleet for your chart by either using a GitRepo (Section 31.2.6.2.1.2.1, “GitRepo”) or Bundle (Section 31.2.6.2.1.2.2, “Bundle”).

Note

While deploying your Fleet, if you get a Modified message, make sure to add a corresponding comparePatches entry to the Fleet’s diff section. For more information, see Generating Diffs to Ignore Modified GitRepos.

31.2.6.2.1.2.1 GitRepo

Fleet’s GitRepo resource holds information on how to access your chart’s Fleet resources and to which clusters it needs to apply those resources.

The GitRepo resource can be deployed through the Rancher UI, or manually, by deploying the resource to the management cluster.

Example Longhorn GitRepo resource for manual deployment:

apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: longhorn-git-repo
  namespace: fleet-local
spec:
  # If using a tag
  # revision: user_repository_tag
  #
  # If using a branch
  # branch: user_repository_branch
  paths:
  # As seen in the 'Prepare your Fleet resources' example
  - longhorn
  - longhorn-crd
  repo: user_repository_url
31.2.6.2.1.2.2 Bundle

Bundle resources hold the raw Kubernetes resources that need to be deployed by Fleet. Normally it is encouraged to use the GitRepo approach, but for use-cases where the environment is air-gapped and cannot support a local Git server, Bundles can help you in propagating your Helm chart Fleet to your target clusters.

A Bundle can be deployed either through the Rancher UI (Continuous Delivery → Advanced → Bundles → Create from YAML) or by manually deploying the Bundle resource in the correct Fleet namespace. For information about Fleet namespaces, see the upstream documentation.

Bundles for Edge Helm charts can be created by utilizing Fleet’s Convert a Helm Chart into a Bundle approach.

Below you can find an example on how to create a Bundle resource from the longhorn and longhorn-crd Helm chart fleet templates and manually deploy this bundle to your management cluster.

Note

To illustrate the workflow, the below example uses the suse-edge/fleet-examples directory structure.

  1. Navigate to the longhorn Chart fleet template:

    cd fleets/day2/chart-templates/longhorn/longhorn
  2. Create a targets.yaml file that will instruct Fleet to which clusters it should deploy the Helm chart:

    cat > targets.yaml <<EOF
    targets:
    # Match your local (management) cluster
    - clusterName: local
    EOF
    Note

    There are some use-cases where your local cluster could have a different name.

    To retrieve your local cluster name, execute the command below:

    kubectl get clusters.fleet.cattle.io -n fleet-local
  3. Convert the Longhorn Helm chart Fleet to a Bundle resource using the fleet-cli.

    Note

    Fleet’s CLI can be retrieved from their release Assets page (fleet-linux-amd64).

    For Mac users, there is a fleet-cli Homebrew formula.

    fleet apply --compress --targets-file=targets.yaml -n fleet-local -o - longhorn-bundle > longhorn-bundle.yaml
  4. Navigate to the longhorn-crd Chart fleet template:

    cd fleets/day2/chart-templates/longhorn/longhorn-crd
  5. Create a targets.yaml file that will instruct Fleet to which clusters it should deploy the Helm chart:

    cat > targets.yaml <<EOF
    targets:
    # Match your local (management) cluster
    - clusterName: local
    EOF
  6. Convert the Longhorn CRD Helm chart Fleet to a Bundle resource using the fleet-cli.

    fleet apply --compress --targets-file=targets.yaml -n fleet-local -o - longhorn-crd-bundle > longhorn-crd-bundle.yaml
  7. Deploy the longhorn-bundle.yaml and longhorn-crd-bundle.yaml files to your management cluster:

    kubectl apply -f longhorn-crd-bundle.yaml
    kubectl apply -f longhorn-bundle.yaml

Following these steps will ensure that SUSE Storage is deployed on the specified management cluster.

31.2.6.2.1.3 Manage the deployed Helm chart

Once deployed with Fleet, for Helm chart upgrades, see Section 31.2.6.2.2, “I would like to upgrade a Fleet managed Helm chart”.

31.2.6.2.2 I would like to upgrade a Fleet managed Helm chart
  1. Determine the version to which you need to upgrade your chart so that it is compatible with the desired Edge release. Helm chart versions per Edge release can be viewed in the release notes (Section 40.1, “Abstract”).

  2. In your Fleet monitored Git repository, edit the Helm chart’s fleet.yaml file with the correct chart version and repository from the release notes (Section 40.1, “Abstract”).

  3. After committing and pushing the changes to your repository, Fleet triggers an upgrade of the desired Helm chart (see the sketch below).
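A minimal sketch of step 2 for the Longhorn chart, using yq as in the earlier examples; the version value is illustrative and should always be taken from the release notes:

# Bump the chart version in the fleet.yaml tracked by Fleet.
yq eval '.helm.version = "105.1.0+up1.7.2"' -i longhorn/fleet.yaml

# Commit and push the change - Fleet will pick it up and upgrade the chart.
git add longhorn/fleet.yaml
git commit -m "Upgrade longhorn chart to 105.1.0+up1.7.2"
git push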

31.2.6.2.3 I would like to upgrade a Helm chart deployed via EIB

Chapter 10, Edge Image Builder deploys Helm charts by creating a HelmChart resource and utilizing the helm-controller introduced by the RKE2/K3s Helm integration feature.

To ensure that a Helm chart deployed via EIB is successfully upgraded, users need to upgrade the respective HelmChart resources, which can be listed as shown below.
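A minimal sketch for locating those resources; the kube-system namespace matches the installationNamespace used in the EIB image definition example later in this section:

# List the HelmChart resources created through the EIB deployment - these are the
# objects that the upgrade procedure below patches.
kubectl get helmcharts.helm.cattle.io -n kube-system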

Below you can find information on:

  1. Section 31.2.6.2.3.1, “Overview”

  2. Section 31.2.6.2.3.2, “Upgrade Steps”

  3. Section 31.2.6.2.3.3, “Example”

31.2.6.2.3.1 Overview

Helm charts that are deployed via EIB are upgraded through a fleet called eib-charts-upgrader.

This fleet processes user-provided data to update a specific set of HelmChart resources.

Updating these resources triggers the helm-controller, which upgrades the Helm charts associated with the modified HelmChart resources.

The user is only expected to:

  1. Locally pull the archives for each Helm chart that needs to be upgraded.

  2. Pass these archives to the generate-chart-upgrade-data.sh script, which will include the data from these archives in the eib-charts-upgrader fleet.

  3. Deploy the eib-charts-upgrader fleet to their management cluster. This is done through either a GitRepo or Bundle resource.

Once deployed, the eib-charts-upgrader, with the help of Fleet, will ship its resources to the desired management cluster.

These resources include:

  1. A set of Secrets holding the user-provided Helm chart data.

  2. A Kubernetes Job which will deploy a Pod that will mount the previously mentioned Secrets and, based on them, patch the corresponding HelmChart resources.

As mentioned previously, this will trigger the helm-controller, which will perform the actual Helm chart upgrade.

Below you can find a diagram of the above description:

fleet day2 management helm eib upgrade
31.2.6.2.3.2 Upgrade Steps
  1. Clone the suse-edge/fleet-examples repository from the correct release tag.

  2. Create a directory in which you will store the pulled Helm chart archive(s).

    mkdir archives
  3. Inside of the newly created archive directory, pull the archive(s) for the Helm chart(s) you wish to upgrade:

    cd archives
    helm pull [chart URL | repo/chartname]
    
    # Alternatively if you want to pull a specific version:
    # helm pull [chart URL | repo/chartname] --version 0.0.0
  4. From Assets of the desired release tag, download the generate-chart-upgrade-data.sh script.

  5. Execute the generate-chart-upgrade-data.sh script:

    chmod +x ./generate-chart-upgrade-data.sh
    
    ./generate-chart-upgrade-data.sh --archive-dir /foo/bar/archives/ --fleet-path /foo/bar/fleet-examples/fleets/day2/eib-charts-upgrader

    For each chart archive in the --archive-dir directory, the script generates a Kubernetes Secret YAML file containing the chart upgrade data and stores it in the base/secrets directory of the fleet specified by --fleet-path.

    The generate-chart-upgrade-data.sh script also applies additional modifications to the fleet to ensure the generated Kubernetes Secret YAML files are correctly utilized by the workload deployed by the fleet.

    Important

    Users should not make any changes to what the generate-chart-upgrade-data.sh script generates.

The steps below depend on the environment that you are running:

  1. For an environment that supports GitOps (e.g. is non air-gapped, or is air-gapped, but allows for local Git server support):

    1. Copy the fleets/day2/eib-charts-upgrader Fleet to the repository that you will use for GitOps.

      Note

      Make sure that the Fleet includes the changes that have been made by the generate-chart-upgrade-data.sh script.

    2. Configure a GitRepo resource that will be used to ship all the resources of the eib-charts-upgrader Fleet.

      1. For GitRepo configuration and deployment through the Rancher UI, see Accessing Fleet in the Rancher UI.

      2. For GitRepo manual configuration and deployment, see Creating a Deployment.

  2. For an environment that does not support GitOps (e.g. is air-gapped and does not allow local Git server usage):

    1. Download the fleet-cli binary from the rancher/fleet release page (fleet-linux-amd64 for Linux). For Mac users, there is a Homebrew formula that can be used - fleet-cli.

    2. Navigate to the eib-charts-upgrader Fleet:

      cd /foo/bar/fleet-examples/fleets/day2/eib-charts-upgrader
    3. Create a targets.yaml file that will instruct Fleet where to deploy your resources:

      cat > targets.yaml <<EOF
      targets:
      # To map the local(management) cluster
      - clusterName: local
      EOF
      Note

      There are some use-cases where your local cluster could have a different name.

      To retrieve your local cluster name, execute the command below:

      kubectl get clusters.fleet.cattle.io -n fleet-local
    4. Use the fleet-cli to convert the Fleet to a Bundle resource:

      fleet apply --compress --targets-file=targets.yaml -n fleet-local -o - eib-charts-upgrade > bundle.yaml

      This will create a Bundle (bundle.yaml) that will hold all the templated resources from the eib-charts-upgrader Fleet.

      For more information regarding the fleet apply command, see fleet apply.

      For more information regarding converting Fleets to Bundles, see Convert a Helm Chart into a Bundle.

    5. Deploy the Bundle. This can be done in one of two ways:

      1. Through Rancher’s UI - Navigate to Continuous Delivery → Advanced → Bundles → Create from YAML and either paste the bundle.yaml contents, or click the Read from File option and pass the file itself.

      2. Manually - Deploy the bundle.yaml file manually inside of your management cluster.

Executing these steps will result in a successfully deployed GitRepo/Bundle resource. The resource will be picked up by Fleet and its contents will be deployed onto the target clusters that the user has specified in the previous steps. For an overview of the process, refer to Section 31.2.6.2.3.1, “Overview”.

For information on how to track the upgrade process, you can refer to Section 31.2.6.2.3.3, “Example”.

Important

Once the chart upgrade has been successfully verified, remove the Bundle/GitRepo resource.

This will remove the upgrade resources that are no longer necessary from your management cluster, preventing version clashes during future upgrades.
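If the Bundle was applied manually from the generated bundle.yaml, the same file can be used for the cleanup; a GitRepo deployed through Rancher can simply be deleted from the UI:

# Remove the upgrade Bundle once the chart upgrade has been verified.
kubectl delete -f bundle.yaml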

31.2.6.2.3.3 Example
Note

The example below demonstrates how to upgrade a Helm chart deployed via EIB from one version to another on a management cluster. Note that the versions used in this example are not recommendations. For version recommendations specific to an Edge release, refer to the release notes (Section 40.1, “Abstract”).

Use-case:

  • A management cluster is running an older version of Longhorn.

  • The cluster has been deployed through EIB, using the following image definition snippet:

    kubernetes:
      helm:
        charts:
        - name: longhorn-crd
          repositoryName: rancher-charts
          targetNamespace: longhorn-system
          createNamespace: true
          version: 104.2.0+up1.7.1
          installationNamespace: kube-system
        - name: longhorn
          repositoryName: rancher-charts
          targetNamespace: longhorn-system
          createNamespace: true
          version: 104.2.0+up1.7.1
          installationNamespace: kube-system
        repositories:
        - name: rancher-charts
          url: https://charts.rancher.io/
    ...
  • SUSE Storage needs to be upgraded to a version compatible with the Edge 3.2.0 release, meaning chart version 105.1.0+up1.7.2.

  • It is assumed that the management cluster is air-gapped, does not support a local Git server, and has a working Rancher setup.

Follow the Upgrade Steps (Section 31.2.6.2.3.2, “Upgrade Steps”):

  1. Clone the suse-edge/fleet-examples repository from the release-3.2.0 tag.

    git clone -b release-3.2.0 https://github.com/suse-edge/fleet-examples.git
  2. Create a directory where the Longhorn chart archives will be stored.

    mkdir archives
  3. Pull the desired Longhorn chart archive versions into the archives directory:

    # First add the Rancher Helm chart repository
    helm repo add rancher-charts https://charts.rancher.io/

    # Pull the Longhorn 1.7.2 CRD archive into ./archives
    helm pull rancher-charts/longhorn-crd --version 105.1.0+up1.7.2 -d ./archives

    # Pull the Longhorn 1.7.2 chart archive into ./archives
    helm pull rancher-charts/longhorn --version 105.1.0+up1.7.2 -d ./archives
  4. Download the generate-chart-upgrade-data.sh script from the suse-edge/fleet-examples release tag and place it outside of the archives directory.

  5. Directory setup should look similar to:

    .
    ├── archives
    │   ├── longhorn-105.1.0+up1.7.2.tgz
    │   └── longhorn-crd-105.1.0+up1.7.2.tgz
    ├── fleet-examples
    ...
    │   ├── fleets
    │   │   ├── day2
    │   │   │   ├── ...
    │   │   │   ├── eib-charts-upgrader
    │   │   │   │   ├── base
    │   │   │   │   │   ├── job.yaml
    │   │   │   │   │   ├── kustomization.yaml
    │   │   │   │   │   ├── patches
    │   │   │   │   │   │   └── job-patch.yaml
    │   │   │   │   │   ├── rbac
    │   │   │   │   │   │   ├── cluster-role-binding.yaml
    │   │   │   │   │   │   ├── cluster-role.yaml
    │   │   │   │   │   │   ├── kustomization.yaml
    │   │   │   │   │   │   └── sa.yaml
    │   │   │   │   │   └── secrets
    │   │   │   │   │       ├── eib-charts-upgrader-script.yaml
    │   │   │   │   │       └── kustomization.yaml
    │   │   │   │   ├── fleet.yaml
    │   │   │   │   └── kustomization.yaml
    │   │   │   └── ...
    │   └── ...
    └── generate-chart-upgrade-data.sh
  6. Execute the generate-chart-upgrade-data.sh script:

    # First make the script executable
    chmod +x ./generate-chart-upgrade-data.sh
    
    # Then execute the script
    ./generate-chart-upgrade-data.sh --archive-dir ./archives --fleet-path ./fleet-examples/fleets/day2/eib-charts-upgrader

    The directory structure after the script execution should look similar to:

    .
    ├── archives
    │   ├── longhorn-105.1.0+up1.7.2.tgz
    │   └── longhorn-crd-105.1.0+up1.7.2.tgz
    ├── fleet-examples
    ...
    │   ├── fleets
    │   │   ├── day2
    │   │   │   ├── ...
    │   │   │   ├── eib-charts-upgrader
    │   │   │   │   ├── base
    │   │   │   │   │   ├── job.yaml
    │   │   │   │   │   ├── kustomization.yaml
    │   │   │   │   │   ├── patches
    │   │   │   │   │   │   └── job-patch.yaml
    │   │   │   │   │   ├── rbac
    │   │   │   │   │   │   ├── cluster-role-binding.yaml
    │   │   │   │   │   │   ├── cluster-role.yaml
    │   │   │   │   │   │   ├── kustomization.yaml
    │   │   │   │   │   │   └── sa.yaml
    │   │   │   │   │   └── secrets
    │   │   │   │   │       ├── eib-charts-upgrader-script.yaml
    │   │   │   │   │       ├── kustomization.yaml
    │   │   │   │   │       ├── longhorn-VERSION.yaml - secret created by the generate-chart-upgrade-data.sh script
    │   │   │   │   │       └── longhorn-crd-VERSION.yaml - secret created by the generate-chart-upgrade-data.sh script
    │   │   │   │   ├── fleet.yaml
    │   │   │   │   └── kustomization.yaml
    │   │   │   └── ...
    │   └── ...
    └── generate-chart-upgrade-data.sh

    The files changed in git should look like this:

    Changes not staged for commit:
      (use "git add <file>..." to update what will be committed)
      (use "git restore <file>..." to discard changes in working directory)
    	modified:   fleets/day2/eib-charts-upgrader/base/patches/job-patch.yaml
    	modified:   fleets/day2/eib-charts-upgrader/base/secrets/kustomization.yaml
    
    Untracked files:
      (use "git add <file>..." to include in what will be committed)
    	fleets/day2/eib-charts-upgrader/base/secrets/longhorn-VERSION.yaml
    	fleets/day2/eib-charts-upgrader/base/secrets/longhorn-crd-VERSION.yaml
  7. Create a Bundle for the eib-charts-upgrader Fleet:

    1. First, navigate to the Fleet itself:

      cd ./fleet-examples/fleets/day2/eib-charts-upgrader
    2. Then create a targets.yaml file:

      cat > targets.yaml <<EOF
      targets:
      - clusterName: local
      EOF
    3. Then use the fleet-cli binary to convert the Fleet to a Bundle:

      fleet apply --compress --targets-file=targets.yaml -n fleet-local -o - eib-charts-upgrade > bundle.yaml
  8. Deploy the Bundle through the Rancher UI:

    Figure 31.1: Deploy Bundle through Rancher UI

    From here, select Read from File and find the bundle.yaml file on your system.

    This will auto-populate the Bundle inside of Rancher’s UI.

    Select Create.

  9. After a successful deployment, your Bundle would look similar to:

    Figure 31.2: Successfully deployed Bundle

After the successful deployment of the Bundle, to monitor the upgrade process:

  1. Verify the logs of the Upgrade Pod:

    Figure: Upgrade Pod logs
  2. Now verify the logs of the Pod created by the helm-controller for the chart upgrade (a kubectl sketch follows below):

    1. The Pod name follows the template helm-install-longhorn-<random-suffix>.

    2. The Pod will be in the namespace where the HelmChart resource was deployed. In our case this is kube-system.

      Figure 31.3: Logs for successfully upgraded Longhorn chart
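
      If you prefer the command line over the Rancher UI, a hedged sketch for retrieving these logs is shown below; the job-name label value assumes the longhorn chart name used in this example:

      # Logs of the helm-controller Pod that performed the chart upgrade
      kubectl logs -n kube-system -l job-name=helm-install-longhorn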
  3. Verify that the HelmChart version has been updated by navigating to Rancher’s HelmCharts section (More Resources → HelmCharts). Select the namespace where the chart was deployed; for this example, it is kube-system.

  4. Finally, check that the Longhorn Pods are running (a kubectl sketch for these last checks follows below).
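
    A minimal kubectl sketch for these checks, assuming the namespaces used throughout this example (kube-system for the HelmChart resources, longhorn-system for the Longhorn workloads):

    # List the HelmChart resources in the namespace where they were deployed
    kubectl get helmcharts -n kube-system

    # Confirm that the Longhorn Pods are up and running
    kubectl get pods -n longhorn-system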

After making the above validations, it is safe to assume that the Longhorn Helm chart has been upgraded from version 104.2.0+up1.7.1 to 105.1.0+up1.7.2.

31.2.6.2.3.4 Helm chart upgrade using a third-party GitOps tool

There might be use-cases where users would like to use this upgrade procedure with a GitOps workflow other than Fleet (e.g. Flux).

To produce the resources needed for the upgrade procedure, you can use the generate-chart-upgrade-data.sh script to populate the eib-charts-upgrader Fleet with the user-provided data. For more information on how to do this, see Section 31.2.6.2.3.2, “Upgrade Steps”.

After you have the full setup, you can use kustomize to generate a full working solution that you can deploy in your cluster:

cd /foo/bar/fleets/day2/eib-charts-upgrader

kustomize build .

If you want to include the solution in your GitOps workflow, you can remove the fleet.yaml file and use what is left as a valid Kustomize setup. Just do not forget to first run the generate-chart-upgrade-data.sh script, so that it populates the Kustomize setup with the data for the Helm charts that you wish to upgrade.
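
If you simply want to apply the generated manifests directly, rather than committing them to a GitOps repository, one option (a sketch, assuming kubectl access to the target cluster) is to pipe the kustomize output into kubectl:

# Run from within the eib-charts-upgrader Fleet directory
kustomize build . | kubectl apply -f -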

To understand how this workflow is intended to be used, it can be beneficial to look at Section 31.2.6.2.3.1, “Overview” and Section 31.2.6.2.3.2, “Upgrade Steps”.