29 Downstream clusters #
This section covers how to perform various Day 2 operations on different parts of your downstream clusters using your management cluster.
29.1 Introduction #
This section is meant to be a starting point for the Day 2 operations documentation. It covers the following information:
The default components (Section 29.1.1, “Components”) used to achieve Day 2 operations over multiple downstream clusters.
Determining which Day 2 resources you should use for your specific use-case (Section 29.1.2, “Determine your use-case”).
The suggested workflow sequence (Section 29.1.3, “Day 2 workflow”) for Day 2 operations.
29.1.1 Components #
Below you can find a description of the default components that should be set up on either your management cluster or your downstream clusters so that you can successfully perform Day 2 operations.
29.1.1.1 Rancher #
For use-cases where you want to utilize Fleet (Chapter 7, Fleet) without Rancher, you can skip the Rancher component altogether.
Responsible for the management of your downstream clusters. It should be deployed on your management cluster.
For more information, see Chapter 4, Rancher.
29.1.1.2 Fleet #
Responsible for multi-cluster resource deployment.
Typically offered by the Rancher component. For use-cases where Rancher is not used, it can be deployed as a standalone component.
For more information on installing Fleet as a standalone component, see Fleet’s Installation Details.
For more information regarding the Fleet component, see Chapter 7, Fleet.
This documentation relies heavily on Fleet, and more specifically on the GitRepo and Bundle resources (more on this in Section 29.1.2, “Determine your use-case”), to establish a GitOps way of automating the deployment of resources related to Day 2 operations.
For use-cases where the use of a third-party GitOps tool is desired, see:
For OS upgrades - Section 29.2.4.3, “SUC Plan deployment - third-party GitOps workflow”
For Kubernetes distribution upgrades - Section 29.3.4.3, “SUC Plan deployment - third-party GitOps workflow”
For EIB deployed Helm chart upgrades - Section 29.4.3.3.4, “Helm chart upgrade using a third-party GitOps tool”
For non-EIB deployed Helm chart upgrades - retrieve the chart version supported by the desired Edge release from the release notes (Section 37.1, “Abstract”) page and populate the chart version and URL in your third-party GitOps tool
29.1.1.3 System Upgrade Controller (SUC) #
System Upgrade Controller (SUC) is responsible for executing tasks on specified nodes based on configuration data provided through a custom resource called a Plan.
In order for SUC to be able to support different Day 2 operations, it is important that it is deployed on each downstream cluster that requires an upgrade.
For more information about the SUC component and how it fits in the Edge stack, see the System Upgrade Controller (Chapter 20, System Upgrade Controller) component documentation.
For information on how to deploy SUC on your downstream clusters, first determine your use-case (Section 29.1.2, “Determine your use-case”) and then refer to System Upgrade Controller installation - GitRepo (Section 20.2.1.1, “System Upgrade Controller installation - GitRepo”), or System Upgrade Controller installation - Bundle (Section 20.2.1.2, “System Upgrade Controller installation - Bundle”).
29.1.2 Determine your use-case #
As mentioned previously, resources related to Day 2 operations are propagated to downstream clusters using Fleet’s GitRepo and Bundle resources.
Below you can find more information regarding what these resources do and for which use-cases they should be used for Day 2 operations.
29.1.2.1 GitRepo #
A GitRepo is a Fleet (Chapter 7, Fleet) resource that represents a Git repository from which Fleet can create Bundles. Each Bundle is created based on configuration paths defined inside the GitRepo resource. For more information, see the GitRepo documentation.
In terms of Day 2 operations, GitRepo resources are normally used to deploy SUC or SUC Plans on non air-gapped environments that utilize a Fleet GitOps approach.
Alternatively, GitRepo resources can also be used to deploy SUC or SUC Plans on air-gapped environments, if you mirror your repository setup through a local Git server.
29.1.2.2 Bundle #
Bundles hold raw Kubernetes resources that will be deployed on the targeted cluster. Usually they are created from a GitRepo resource, but there are use-cases where they can also be deployed manually. For more information, refer to the Bundle documentation.
In terms of Day 2 operations, Bundle resources are normally used to deploy SUC or SUC Plans on air-gapped environments that do not use some form of local GitOps procedure (e.g. a local Git server).
Alternatively, if your use-case does not allow for a GitOps workflow (e.g. using a Git repository), Bundle resources could also be used to deploy SUC or SUC Plans on non air-gapped environments.
29.1.3 Day 2 workflow #
The following is a Day 2 workflow that should be followed when upgrading a downstream cluster to a specific Edge release.
OS upgrade (Section 29.2, “OS upgrade”)
Kubernetes version upgrade (Section 29.3, “Kubernetes version upgrade”)
Helm chart upgrade (Section 29.4, “Helm chart upgrade”)
29.2 OS upgrade #
29.2.1 Components #
This section covers the custom components that the OS upgrade process uses in addition to the default Day 2 components (Section 29.1.1, “Components”).
29.2.1.1 systemd.service #
A different systemd.service is created depending on what upgrade your OS requires from one Edge version to another:
For Edge versions that require the same OS version (e.g. 6.0), the os-pkg-update.service will be created. It uses the transactional-update command to perform a normal package upgrade.
For Edge versions that require an OS version migration (e.g. 5.5 → 6.0), the os-migration.service will be created. It uses transactional-update to perform:
First, a normal package upgrade. This is done to ensure that all packages are at their latest version before the migration, mitigating any failures related to old package versions.
After that, the OS migration itself, by utilizing the zypper migration command.
Shipped through a SUC plan, which should be located on each downstream cluster that is in need of an OS upgrade.
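As a rough, hedged sketch of what the generated service ends up running on a node (the command names are the ones mentioned in this section; the exact invocations are produced by the upgrade.sh script and may differ):
# Package-update path (os-pkg-update.service)
transactional-update cleanup up   # upgrade all packages in a new snapshot
reboot                            # reboot so the new snapshot takes effect

# Migration path (os-migration.service)
transactional-update cleanup up   # bring packages up to date first
zypper migration                  # migrate the OS to the new version
reboot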
29.2.2 Requirements #
General:
SCC registered machine - All downstream cluster nodes should be registered to https://scc.suse.com/. This is needed so that the os-pkg-update.service/os-migration.service can successfully connect to the needed OS RPM repositories.
Important: For Edge releases that require a new OS version (e.g. Edge 3.1), make sure that your SCC key supports the migration to the new version (e.g. for Edge 3.1, the SCC key should support the SLE Micro 5.5 → 6.0 migration).
Make sure that SUC Plan tolerations match node tolerations - If your Kubernetes cluster nodes have custom taints, make sure to add tolerations for those taints in the SUC Plans. By default, SUC Plans have tolerations only for control-plane nodes. Default tolerations include:
CriticalAddonsOnly=true:NoExecute
node-role.kubernetes.io/control-plane:NoSchedule
node-role.kubernetes.io/etcd:NoExecute
Note: Any additional tolerations must be added under the .spec.tolerations section of each Plan. SUC Plans related to the OS upgrade can be found in the suse-edge/fleet-examples repository under fleets/day2/system-upgrade-controller-plans/os-upgrade. Make sure you use the Plans from a valid repository release tag.
An example of defining custom tolerations for the control-plane SUC Plan would look like this:
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: os-upgrade-control-plane
spec:
  ...
  tolerations:
  # default tolerations
  - key: "CriticalAddonsOnly"
    operator: "Equal"
    value: "true"
    effect: "NoExecute"
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Equal"
    effect: "NoSchedule"
  - key: "node-role.kubernetes.io/etcd"
    operator: "Equal"
    effect: "NoExecute"
  # custom toleration
  - key: "foo"
    operator: "Equal"
    value: "bar"
    effect: "NoSchedule"
  ...
Air-gapped:
29.2.3 Update procedure #
This section assumes you will be deploying the OS upgrade SUC Plan using Fleet (Chapter 7, Fleet). If you intend to deploy the SUC Plan using a different approach, refer to Section 29.2.4.3, “SUC Plan deployment - third-party GitOps workflow”.
For environments previously upgraded using this procedure, users should ensure that one of the following steps is completed:
Remove any previously deployed SUC Plans related to older Edge release versions from the downstream cluster - this can be done by removing the desired downstream cluster from the existing GitRepo/Bundle target configuration, or by removing the GitRepo/Bundle resource altogether.
Reuse the existing GitRepo/Bundle resource - this can be done by pointing the resource’s revision to a new tag that holds the correct fleets for the desired suse-edge/fleet-examples release.
This is done in order to avoid clashes between SUC Plans for older Edge release versions.
If users attempt to upgrade while there are existing SUC Plans on the downstream cluster, they will see the following Fleet error:
Not installed: Unable to continue with install: Plan <plan_name> in namespace <plan_namespace> exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error..
The OS upgrade procedure revolves around deploying SUC Plans to downstream clusters. These plans hold information about how and on which nodes to deploy the os-pkg-update.service/os-migration.service. For information regarding the structure of a SUC Plan, refer to the upstream documentation.
OS upgrade SUC Plans are shipped in the following ways:
Through a GitRepo resource - Section 29.2.4.1, “SUC Plan deployment - GitRepo resource”
Through a Bundle resource - Section 29.2.4.2, “SUC Plan deployment - Bundle resource”
To determine which resource you should use, refer to Section 29.1.2, “Determine your use-case”.
For a full overview of what happens during the upgrade procedure, refer to the Section 29.2.3.1, “Overview” section.
29.2.3.1 Overview #
This section aims to describe the full workflow that the OS upgrade process goes through from start to finish.
OS upgrade steps:
Based on their use-case, the user determines whether to use a GitRepo or a Bundle resource for the deployment of the OS upgrade SUC Plans to the desired downstream clusters. For information on how to map a GitRepo/Bundle to a specific set of downstream clusters, see Mapping to Downstream Clusters.
If you are unsure whether you should use a GitRepo or a Bundle resource for the SUC Plan deployment, refer to Section 29.1.2, “Determine your use-case”.
For GitRepo/Bundle configuration options, refer to Section 29.2.4.1, “SUC Plan deployment - GitRepo resource” or Section 29.2.4.2, “SUC Plan deployment - Bundle resource”.
The user deploys the configured GitRepo/Bundle resource to the fleet-default namespace of their management cluster. This is done either manually or through the Rancher UI, if one is available.
Fleet (Chapter 7, Fleet) constantly monitors the fleet-default namespace and immediately detects the newly deployed GitRepo/Bundle resource. For more information regarding which namespaces Fleet monitors, refer to Fleet’s Namespaces documentation.
If the user has deployed a GitRepo resource, Fleet will reconcile the GitRepo and, based on its paths and fleet.yaml configurations, deploy a Bundle resource in the fleet-default namespace. For more information, refer to Fleet’s GitRepo Contents documentation.
Fleet then proceeds to deploy the Kubernetes resources from this Bundle to all the targeted downstream clusters. In the context of OS upgrades, Fleet deploys the following resources from the Bundle:
Worker SUC Plan - instructs SUC on how to do an OS upgrade on cluster worker nodes. It is not interpreted if the cluster consists only of control-plane nodes. It executes after all control-plane SUC Plans have completed successfully.
Control Plane SUC Plan - instructs SUC on how to do an OS upgrade on cluster control-plane nodes.
Script Secret - referenced in each SUC Plan; ships an upgrade.sh script responsible for creating the os-pkg-update.service/os-migration.service which will do the actual OS upgrade.
Script Data ConfigMap - referenced in each SUC Plan; ships configurations used by the upgrade.sh script.
Note: The above resources will be deployed in the cattle-system namespace of each downstream cluster.
On the downstream cluster, SUC picks up the newly deployed SUC Plans and deploys an Update Pod on each node that matches the node selector defined in the SUC Plan. For information on how to monitor the SUC Plan Pod, refer to Section 20.3, “Monitoring System Upgrade Controller Plans”.
The Update Pod (deployed on each node) mounts the script Secret and executes the upgrade.sh script that the Secret ships.
The upgrade.sh script proceeds to do the following:
Based on its configurations, determine whether the OS needs a package update or whether it needs to be migrated.
Based on the above outcome, create either an os-pkg-update.service (for package updates) or an os-migration.service (for migration). The service will be of type oneshot and will adopt the following workflow:
For os-pkg-update.service:
Update all package versions on the node OS by running transactional-update cleanup up.
After a successful transactional-update, schedule a system reboot so that the package version updates can take effect.
For os-migration.service:
Update all package versions on the node OS by running transactional-update cleanup up. This is done to ensure that no old package versions cause an OS migration error.
Proceed to migrate the OS to the desired version. The migration is done by utilizing the zypper migration command.
Schedule a system reboot so that the migration can take effect.
Start the os-pkg-update.service/os-migration.service and wait for it to complete.
Clean up the os-pkg-update.service/os-migration.service after the systemd.service has done its job. It is removed from the system to ensure that no accidental executions/reboots happen in the future.
The OS upgrade procedure finishes with the system reboot. After the reboot, the OS package versions are upgraded and if the Edge release requires it, the OS might be migrated as well.
29.2.4 OS upgrade - SUC Plan deployment #
This section describes how to orchestrate the deployment of SUC Plans related to OS upgrades using Fleet’s GitRepo and Bundle resources.
29.2.4.1 SUC Plan deployment - GitRepo resource #
A GitRepo resource that ships the needed OS upgrade SUC Plans can be deployed in one of the following ways:
Through the Rancher UI - Section 29.2.4.1.1, “GitRepo creation - Rancher UI” (when Rancher is available).
By manually deploying (Section 29.2.4.1.2, “GitRepo creation - manual”) the resource to your management cluster.
Once deployed, to monitor the OS upgrade process of the nodes of your targeted cluster, refer to the Section 20.3, “Monitoring System Upgrade Controller Plans” documentation.
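For a quick check directly from a downstream cluster, a minimal sketch (assuming kubectl access to that cluster and the default cattle-system namespace used by the shipped Plans) could look like:
# List the OS upgrade SUC Plans shipped to this cluster
kubectl get plans.upgrade.cattle.io -n cattle-system

# Watch the upgrade Jobs/Pods that SUC creates for the matching nodes
kubectl get jobs,pods -n cattle-system --watch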
29.2.4.1.1 GitRepo creation - Rancher UI #
To create a GitRepo resource through the Rancher UI, follow the official documentation.
The Edge team maintains a ready-to-use fleet that users can add as a path for their GitRepo resource.
Always use this fleet from a valid Edge release tag.
For use-cases where no custom tolerations need to be added to the SUC plans that the fleet ships, users can directly reference the os-upgrade fleet from the suse-edge/fleet-examples repository.
In cases where custom tolerations are needed, users should reference the os-upgrade fleet from a separate repository, allowing them to add the tolerations to the SUC plans as required.
An example of how a GitRepo can be configured to use the fleet from the suse-edge/fleet-examples repository can be viewed here.
29.2.4.1.2 GitRepo creation - manual #
Pull the GitRepo resource:
curl -o os-upgrade-gitrepo.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags/release-3.1.1/gitrepos/day2/os-upgrade-gitrepo.yaml
Edit the GitRepo configuration and, under spec.targets, specify your desired target list. By default, the GitRepo resources from suse-edge/fleet-examples are NOT mapped to any downstream clusters.
To match all clusters, change the default GitRepo target to:
spec:
  targets:
  - clusterSelector: {}
Alternatively, if you want a more granular cluster selection, see Mapping to Downstream Clusters.
Apply the GitRepo resource to your management cluster:
kubectl apply -f os-upgrade-gitrepo.yaml
View the created GitRepo resource under the fleet-default namespace:
kubectl get gitrepo os-upgrade -n fleet-default

# Example output
NAME         REPO                                              COMMIT          BUNDLEDEPLOYMENTS-READY   STATUS
os-upgrade   https://github.com/suse-edge/fleet-examples.git   release-3.1.1   0/0
29.2.4.2 SUC Plan deployment - Bundle resource #
A Bundle resource that ships the needed OS upgrade SUC Plans can be deployed in one of the following ways:
Through the Rancher UI - Section 29.2.4.2.1, “Bundle creation - Rancher UI” (when Rancher is available).
By manually deploying (Section 29.2.4.2.2, “Bundle creation - manual”) the resource to your management cluster.
Once deployed, to monitor the OS upgrade process of the nodes of your targeted cluster, refer to the Section 20.3, “Monitoring System Upgrade Controller Plans” documentation.
29.2.4.2.1 Bundle creation - Rancher UI #
The Edge team maintains a ready-to-use bundle that can be used in the below steps.
Always use this bundle from a valid Edge release tag.
To create a bundle through Rancher’s UI:
In the upper left corner, click ☰ → Continuous Delivery
Go to Advanced > Bundles
Select Create from YAML
From here you can create the Bundle in one of the following ways:
Note: There might be use-cases where you would need to add custom tolerations to the SUC plans that the bundle ships. Make sure to include those tolerations in the bundle that will be generated by the below steps.
By manually copying the bundle content from suse-edge/fleet-examples to the Create from YAML page.
By cloning the suse-edge/fleet-examples repository from the desired release tag and selecting the Read from File option in the Create from YAML page. From there, navigate to the bundle location (bundles/day2/system-upgrade-controller-plans/os-upgrade) and select the bundle file. This will auto-populate the Create from YAML page with the bundle content.
Change the target clusters for the Bundle:
To match all downstream clusters, change the default Bundle .spec.targets to:
spec:
  targets:
  - clusterSelector: {}
For more granular downstream cluster mappings, see Mapping to Downstream Clusters.
Select Create
29.2.4.2.2 Bundle creation - manual #
Pull the Bundle resource:
curl -o os-upgrade-bundle.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags/release-3.1.1/bundles/day2/system-upgrade-controller-plans/os-upgrade/os-upgrade-bundle.yaml
Edit the Bundle target configurations and, under spec.targets, provide your desired target list. By default, the Bundle resources from suse-edge/fleet-examples are NOT mapped to any downstream clusters.
To match all clusters, change the default Bundle target to:
spec:
  targets:
  - clusterSelector: {}
Alternatively, if you want a more granular cluster selection, see Mapping to Downstream Clusters.
Apply the Bundle resource to your management cluster:
kubectl apply -f os-upgrade-bundle.yaml
View the created Bundle resource under the fleet-default namespace:
kubectl get bundles -n fleet-default
29.2.4.3 SUC Plan deployment - third-party GitOps workflow #
There might be use-cases where users would like to incorporate the OS upgrade SUC Plans into their own third-party GitOps workflow (e.g. Flux).
To get the OS upgrade resources that you need, first determine the Edge release tag of the suse-edge/fleet-examples repository that you would like to use.
After that, the resources can be found at fleets/day2/system-upgrade-controller-plans/os-upgrade, where:
plan-control-plane.yaml - system-upgrade-controller Plan resource for control-plane nodes.
plan-worker.yaml - system-upgrade-controller Plan resource for worker nodes.
secret.yaml - Secret that ships the upgrade.sh script.
config-map.yaml - ConfigMap that provides upgrade configurations that are consumed by the upgrade.sh script.
These Plan resources are interpreted by the system-upgrade-controller and should be deployed on each downstream cluster that you wish to upgrade. For information on how to deploy the system-upgrade-controller, see Section 20.2, “Installing the System Upgrade Controller”.
To better understand how your GitOps workflow can be used to deploy the SUC Plans for the OS upgrade, it can be beneficial to take a look at the overview (Section 29.2.3.1, “Overview”) of the update procedure using Fleet.
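As a rough sketch of what a third-party GitOps tool ultimately needs to reconcile on each downstream cluster (the file names come from the fleet directory above; when shipped via Fleet these resources end up in the cattle-system namespace):
# Sketch: apply the OS upgrade SUC Plan resources to a downstream cluster.
# A third-party GitOps tool would reconcile the same manifests.
kubectl apply \
  -f secret.yaml \
  -f config-map.yaml \
  -f plan-control-plane.yaml \
  -f plan-worker.yaml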
29.3 Kubernetes version upgrade #
This section covers Kubernetes upgrades for downstream clusters that have NOT been created through a Rancher (Chapter 4, Rancher) instance. For information on how to upgrade the Kubernetes version of Rancher-created clusters, see Upgrading and Rolling Back Kubernetes.
29.3.1 Components #
This section covers the custom components that the Kubernetes upgrade process uses in addition to the default Day 2 components (Section 29.1.1, “Components”).
29.3.1.1 rke2-upgrade #
Image responsible for upgrading the RKE2 version of a specific node.
Shipped through a Pod created by SUC based on a SUC Plan. The Plan should be located on each downstream cluster that is in need of a RKE2 upgrade.
For more information regarding how the rke2-upgrade image performs the upgrade, see the upstream documentation.
29.3.1.2 k3s-upgrade #
Image responsible for upgrading the K3s version of a specific node.
Shipped through a Pod created by SUC based on a SUC Plan. The Plan should be located on each downstream cluster that is in need of a K3s upgrade.
For more information regarding how the k3s-upgrade image performs the upgrade, see the upstream documentation.
29.3.2 Requirements #
Backup your Kubernetes distribution:
For imported RKE2 clusters, see the RKE2 Backup and Restore documentation.
For imported K3s clusters, see the K3s Backup and Restore documentation.
Make sure that SUC Plan tolerations match node tolerations - If your Kubernetes cluster nodes have custom taints, make sure to add tolerations for those taints in the SUC Plans. By default SUC Plans have tolerations only for control-plane nodes. Default tolerations include:
CriticalAddonsOnly=true:NoExecute
node-role.kubernetes.io/control-plane:NoSchedule
node-role.kubernetes.io/etcd:NoExecute
Note: Any additional tolerations must be added under the .spec.tolerations section of each Plan. SUC Plans related to the Kubernetes version upgrade can be found in the suse-edge/fleet-examples repository under:
For RKE2 - fleets/day2/system-upgrade-controller-plans/rke2-upgrade
For K3s - fleets/day2/system-upgrade-controller-plans/k3s-upgrade
Make sure you use the Plans from a valid repository release tag.
An example of defining custom tolerations for the RKE2 control-plane SUC Plan would look like this:
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: rke2-upgrade-control-plane
spec:
  ...
  tolerations:
  # default tolerations
  - key: "CriticalAddonsOnly"
    operator: "Equal"
    value: "true"
    effect: "NoExecute"
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Equal"
    effect: "NoSchedule"
  - key: "node-role.kubernetes.io/etcd"
    operator: "Equal"
    effect: "NoExecute"
  # custom toleration
  - key: "foo"
    operator: "Equal"
    value: "bar"
    effect: "NoSchedule"
  ...
29.3.3 Upgrade procedure #
This section assumes you will be deploying SUC Plans using Fleet (Chapter 7, Fleet). If you intend to deploy the SUC Plan using a different approach, refer to Section 29.3.4.3, “SUC Plan deployment - third-party GitOps workflow”.
For environments previously upgraded using this procedure, users should ensure that one of the following steps is completed:
Remove any previously deployed SUC Plans related to older Edge release versions from the downstream cluster - this can be done by removing the desired downstream cluster from the existing GitRepo/Bundle target configuration, or by removing the GitRepo/Bundle resource altogether.
Reuse the existing GitRepo/Bundle resource - this can be done by pointing the resource’s revision to a new tag that holds the correct fleets for the desired suse-edge/fleet-examples release.
This is done in order to avoid clashes between SUC Plans for older Edge release versions.
If users attempt to upgrade while there are existing SUC Plans on the downstream cluster, they will see the following Fleet error:
Not installed: Unable to continue with install: Plan <plan_name> in namespace <plan_namespace> exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error..
The Kubernetes version upgrade procedure revolves around deploying SUC Plans to downstream clusters. These plans hold information that instructs the SUC on which nodes to create Pods which run the rke2/k3s-upgrade images. For information regarding the structure of a SUC Plan, refer to the upstream documentation.
Kubernetes upgrade Plans are shipped in the following ways:
Through a GitRepo resource - Section 29.3.4.1, “SUC Plan deployment - GitRepo resource”
Through a Bundle resource - Section 29.3.4.2, “SUC Plan deployment - Bundle resource”
To determine which resource you should use, refer to Section 29.1.2, “Determine your use-case”.
For a full overview of what happens during the update procedure, refer to the Section 29.3.3.1, “Overview” section.
29.3.3.1 Overview #
This section aims to describe the full workflow that the Kubernetes version upgrade process goes through from start to finish.
Kubernetes version upgrade steps:
Based on their use-case, the user determines whether to use a GitRepo or a Bundle resource for the deployment of the Kubernetes upgrade SUC Plans to the desired downstream clusters. For information on how to map a GitRepo/Bundle to a specific set of downstream clusters, see Mapping to Downstream Clusters.
If you are unsure whether you should use a GitRepo or a Bundle resource for the SUC Plan deployment, refer to Section 29.1.2, “Determine your use-case”.
For GitRepo/Bundle configuration options, refer to Section 29.3.4.1, “SUC Plan deployment - GitRepo resource” or Section 29.3.4.2, “SUC Plan deployment - Bundle resource”.
The user deploys the configured GitRepo/Bundle resource to the fleet-default namespace of their management cluster. This is done either manually or through the Rancher UI, if one is available.
Fleet (Chapter 7, Fleet) constantly monitors the fleet-default namespace and immediately detects the newly deployed GitRepo/Bundle resource. For more information regarding which namespaces Fleet monitors, refer to Fleet’s Namespaces documentation.
If the user has deployed a GitRepo resource, Fleet will reconcile the GitRepo and, based on its paths and fleet.yaml configurations, deploy a Bundle resource in the fleet-default namespace. For more information, refer to Fleet’s GitRepo Contents documentation.
Fleet then proceeds to deploy the Kubernetes resources from this Bundle to all the targeted downstream clusters. In the context of the Kubernetes version upgrade, Fleet deploys the following resources from the Bundle (depending on the Kubernetes distribution):
rke2-upgrade-worker/k3s-upgrade-worker - instructs SUC on how to do a Kubernetes upgrade on cluster worker nodes. It is not interpreted if the cluster consists only of control-plane nodes.
rke2-upgrade-control-plane/k3s-upgrade-control-plane - instructs SUC on how to do a Kubernetes upgrade on cluster control-plane nodes.
Note: The above SUC Plans will be deployed in the cattle-system namespace of each downstream cluster.
On the downstream cluster, SUC picks up the newly deployed SUC Plans and deploys an Update Pod on each node that matches the node selector defined in the SUC Plan. For information on how to monitor the SUC Plan Pod, refer to Section 20.3, “Monitoring System Upgrade Controller Plans”.
Depending on which SUC Plans you have deployed, the Update Pod will run either a rke2-upgrade or a k3s-upgrade image and will execute the following workflow on each cluster node:
Cordon the cluster node - to ensure that no Pods are accidentally scheduled on the node while it is being upgraded, it is marked as unschedulable.
Replace the rke2/k3s binary that is installed on the node OS with the binary shipped by the rke2-upgrade/k3s-upgrade image that the Pod is currently running.
Kill the rke2/k3s process that is running on the node OS - this instructs the supervisor to automatically restart the rke2/k3s process using the new version.
Uncordon the cluster node - after the successful Kubernetes distribution upgrade, the node is again marked as schedulable.
Note: For further information regarding how the rke2-upgrade and k3s-upgrade images work, see the rke2-upgrade and k3s-upgrade upstream projects.
With the above steps executed, the Kubernetes version of each cluster node should have been upgraded to the desired Edge compatible release.
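A minimal way to sanity-check the result, assuming kubectl access to the downstream cluster, is to confirm that every node reports the expected version:
# The VERSION column should now show the upgraded RKE2/K3s release on every node
kubectl get nodes -o wide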
29.3.4 Kubernetes version upgrade - SUC Plan deployment #
This section describes how to orchestrate the deployment of SUC Plans related to Kubernetes upgrades using Fleet’s GitRepo and Bundle resources.
29.3.4.1 SUC Plan deployment - GitRepo resource #
A GitRepo resource that ships the needed Kubernetes upgrade SUC Plans can be deployed in one of the following ways:
Through the Rancher UI - Section 29.3.4.1.1, “GitRepo creation - Rancher UI” (when Rancher is available).
By manually deploying (Section 29.3.4.1.2, “GitRepo creation - manual”) the resource to your management cluster.
Once deployed, to monitor the Kubernetes upgrade process of the nodes of your targeted cluster, refer to the Section 20.3, “Monitoring System Upgrade Controller Plans” documentation.
29.3.4.1.1 GitRepo creation - Rancher UI #
To create a GitRepo resource through the Rancher UI, follow the official documentation.
The Edge team maintains ready-to-use fleets for both the rke2 and k3s Kubernetes distributions, which users can add as a path for their GitRepo resource.
Always use these fleets from a valid Edge release tag.
For use-cases where no custom tolerations need to be added to the SUC plans that these fleets ship, users can directly reference the fleets from the suse-edge/fleet-examples repository.
In cases where custom tolerations are needed, users should reference the fleets from a separate repository, allowing them to add the tolerations to the SUC plans as required.
Configuration examples for a GitRepo resource using the fleets from the suse-edge/fleet-examples repository:
29.3.4.1.2 GitRepo creation - manual #
Pull the GitRepo resource:
For RKE2 clusters:
curl -o rke2-upgrade-gitrepo.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags/release-3.1.1/gitrepos/day2/rke2-upgrade-gitrepo.yaml
For K3s clusters:
curl -o k3s-upgrade-gitrepo.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags/release-3.1.1/gitrepos/day2/k3s-upgrade-gitrepo.yaml
Edit the GitRepo configuration and, under spec.targets, specify your desired target list. By default, the GitRepo resources from suse-edge/fleet-examples are NOT mapped to any downstream clusters.
To match all clusters, change the default GitRepo target to:
spec:
  targets:
  - clusterSelector: {}
Alternatively, if you want a more granular cluster selection, see Mapping to Downstream Clusters.
Apply the GitRepo resources to your management cluster:
# RKE2
kubectl apply -f rke2-upgrade-gitrepo.yaml

# K3s
kubectl apply -f k3s-upgrade-gitrepo.yaml
View the created GitRepo resources under the fleet-default namespace:
# RKE2
kubectl get gitrepo rke2-upgrade -n fleet-default

# K3s
kubectl get gitrepo k3s-upgrade -n fleet-default

# Example output
NAME           REPO                                              COMMIT          BUNDLEDEPLOYMENTS-READY   STATUS
k3s-upgrade    https://github.com/suse-edge/fleet-examples.git   release-3.1.1   0/0
rke2-upgrade   https://github.com/suse-edge/fleet-examples.git   release-3.1.1   0/0
29.3.4.2 SUC Plan deployment - Bundle resource #
A Bundle resource that ships the needed Kubernetes upgrade SUC Plans can be deployed in one of the following ways:
Through the Rancher UI - Section 29.3.4.2.1, “Bundle creation - Rancher UI” (when Rancher is available).
By manually deploying (Section 29.3.4.2.2, “Bundle creation - manual”) the resource to your management cluster.
Once deployed, to monitor the Kubernetes upgrade process of the nodes of your targeted cluster, refer to the Section 20.3, “Monitoring System Upgrade Controller Plans” documentation.
29.3.4.2.1 Bundle creation - Rancher UI #
The Edge team maintains ready-to-use bundles for both the rke2 and k3s Kubernetes distributions that can be used in the below steps.
Always use these bundles from a valid Edge release tag.
To create a bundle through Rancher’s UI:
In the upper left corner, click ☰ → Continuous Delivery
Go to Advanced > Bundles
Select Create from YAML
From here you can create the Bundle in one of the following ways:
Note: There might be use-cases where you would need to add custom tolerations to the SUC plans that the bundle ships. Make sure to include those tolerations in the bundle that will be generated by the below steps.
By manually copying the bundle content for RKE2 or K3s from suse-edge/fleet-examples to the Create from YAML page.
By cloning the suse-edge/fleet-examples repository from the desired release tag and selecting the Read from File option in the Create from YAML page. From there, navigate to the bundle that you need (bundles/day2/system-upgrade-controller-plans/rke2-upgrade/plan-bundle.yaml for RKE2 and bundles/day2/system-upgrade-controller-plans/k3s-upgrade/plan-bundle.yaml for K3s). This will auto-populate the Create from YAML page with the bundle content.
Change the target clusters for the Bundle:
To match all downstream clusters, change the default Bundle .spec.targets to:
spec:
  targets:
  - clusterSelector: {}
For more granular downstream cluster mappings, see Mapping to Downstream Clusters.
Select Create
29.3.4.2.2 Bundle creation - manual #
Pull the Bundle resources:
For RKE2 clusters:
curl -o rke2-plan-bundle.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags/release-3.1.1/bundles/day2/system-upgrade-controller-plans/rke2-upgrade/plan-bundle.yaml
For K3s clusters:
curl -o k3s-plan-bundle.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags/release-3.1.1/bundles/day2/system-upgrade-controller-plans/k3s-upgrade/plan-bundle.yaml
Edit the Bundle target configurations and, under spec.targets, provide your desired target list. By default, the Bundle resources from suse-edge/fleet-examples are NOT mapped to any downstream clusters.
To match all clusters, change the default Bundle target to:
spec:
  targets:
  - clusterSelector: {}
Alternatively, if you want a more granular cluster selection, see Mapping to Downstream Clusters.
Apply the Bundle resources to your management cluster:
# For RKE2
kubectl apply -f rke2-plan-bundle.yaml

# For K3s
kubectl apply -f k3s-plan-bundle.yaml
View the created Bundle resources under the fleet-default namespace:
# For RKE2
kubectl get bundles rke2-upgrade -n fleet-default

# For K3s
kubectl get bundles k3s-upgrade -n fleet-default

# Example output
NAME           BUNDLEDEPLOYMENTS-READY   STATUS
k3s-upgrade    0/0
rke2-upgrade   0/0
29.3.4.3 SUC Plan deployment - third-party GitOps workflow #
There might be use-cases where users would like to incorporate the Kubernetes upgrade resources into their own third-party GitOps workflow (e.g. Flux).
To get the upgrade resources that you need, first determine the Edge release tag of the suse-edge/fleet-examples repository that you would like to use.
After that, the resources can be found at:
For a RKE2 cluster upgrade:
For control-plane nodes - fleets/day2/system-upgrade-controller-plans/rke2-upgrade/plan-control-plane.yaml
For worker nodes - fleets/day2/system-upgrade-controller-plans/rke2-upgrade/plan-worker.yaml
For a K3s cluster upgrade:
For control-plane nodes - fleets/day2/system-upgrade-controller-plans/k3s-upgrade/plan-control-plane.yaml
For worker nodes - fleets/day2/system-upgrade-controller-plans/k3s-upgrade/plan-worker.yaml
These Plan resources are interpreted by the system-upgrade-controller and should be deployed on each downstream cluster that you wish to upgrade. For information on how to deploy the system-upgrade-controller, see Section 20.2, “Installing the System Upgrade Controller”.
To better understand how your GitOps workflow can be used to deploy the SUC Plans for the Kubernetes version upgrade, it can be beneficial to take a look at the overview (Section 29.3.3.1, “Overview”) of the update procedure using Fleet.
29.4 Helm chart upgrade #
The below sections focus on using Fleet functionalities to achieve a Helm chart update.
For use-cases where the use of a third-party GitOps tool is desired, see:
For EIB deployed Helm chart upgrades - Section 29.4.3.3.4, “Helm chart upgrade using a third-party GitOps tool”.
For non-EIB deployed Helm chart upgrades - retrieve the chart version supported by the desired Edge release from the release notes (Section 37.1, “Abstract”) page and populate the chart version and URL in your third-party GitOps tool.
29.4.1 Components #
Apart from the default Day 2 components (Section 29.1.1, “Components”), no other custom components are needed for this operation.
29.4.2 Preparation for air-gapped environments #
29.4.2.1 Ensure that you have access to your Helm chart upgrade Fleet #
Depending on what your environment supports, you can take one of the following options:
Host your chart’s Fleet resources on a local Git server that is accessible by your management cluster.
Use Fleet’s CLI to convert a Helm chart into a Bundle that you can use directly, without needing to host it anywhere. Fleet’s CLI can be retrieved from their release page; for Mac users, there is a fleet-cli Homebrew Formulae.
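As a rough sketch of the second option (directory, bundle and file names below are placeholders; the Bundle examples later in this chapter show the full workflow):
# Convert a chart fleet directory into a Bundle that can be applied directly,
# without hosting it on a Git server.
cd ./my-chart-fleet            # directory containing the chart's fleet.yaml

cat > targets.yaml <<EOF
targets:
- clusterSelector: {}          # adjust to your real target clusters
EOF

fleet apply --compress --targets-file=targets.yaml -n fleet-default -o - my-chart-bundle > my-chart-bundle.yaml

kubectl apply -f my-chart-bundle.yaml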
29.4.2.2 Find the required assets for your Edge release version #
Go to the Day 2 release page and find the Edge 3.X.Y release that you want to upgrade your chart to and click Assets.
From the "Assets" section, download the following files:
Release File - Description
edge-save-images.sh - Pulls the images specified in the edge-release-images.txt file and packages them inside of a '.tar.gz' archive.
edge-save-oci-artefacts.sh - Pulls the OCI chart images related to the specific Edge release and packages them inside of a '.tar.gz' archive.
edge-load-images.sh - Loads images from a '.tar.gz' archive, retags and pushes them to a private registry.
edge-load-oci-artefacts.sh - Takes a directory containing Edge OCI '.tgz' chart packages and loads them to a private registry.
edge-release-helm-oci-artefacts.txt - Contains a list of OCI chart images related to a specific Edge release.
edge-release-images.txt - Contains a list of images related to a specific Edge release.
29.4.2.3 Create the Edge release images archive #
On a machine with internet access:
Make edge-save-images.sh executable:
chmod +x edge-save-images.sh
Generate the image archive:
./edge-save-images.sh --source-registry registry.suse.com
This will create a ready-to-load archive named edge-images.tar.gz.
Note: If the -i|--images option is specified, the name of the archive may differ.
Copy this archive to your air-gapped machine:
scp edge-images.tar.gz <user>@<machine_ip>:/path
29.4.2.4 Create the Edge OCI chart images archive #
On a machine with internet access:
Make edge-save-oci-artefacts.sh executable:
chmod +x edge-save-oci-artefacts.sh
Generate the OCI chart image archive:
./edge-save-oci-artefacts.sh --source-registry registry.suse.com
This will create an archive named oci-artefacts.tar.gz.
Note: If the -a|--archive option is specified, the name of the archive may differ.
Copy this archive to your air-gapped machine:
scp oci-artefacts.tar.gz <user>@<machine_ip>:/path
29.4.2.5 Load Edge release images to your air-gapped machine #
On your air-gapped machine:
Log into your private registry (if required):
podman login <REGISTRY.YOURDOMAIN.COM:PORT>
Make edge-load-images.sh executable:
chmod +x edge-load-images.sh
Execute the script, passing the previously copied edge-images.tar.gz archive:
./edge-load-images.sh --source-registry registry.suse.com --registry <REGISTRY.YOURDOMAIN.COM:PORT> --images edge-images.tar.gz
Note: This will load all images from the edge-images.tar.gz archive, retag them and push them to the registry specified under the --registry option.
29.4.2.6 Load the Edge OCI chart images to your air-gapped machine #
On your air-gapped machine:
Log into your private registry (if required):
podman login <REGISTRY.YOURDOMAIN.COM:PORT>
Make edge-load-oci-artefacts.sh executable:
chmod +x edge-load-oci-artefacts.sh
Untar the copied oci-artefacts.tar.gz archive:
tar -xvf oci-artefacts.tar.gz
This will produce a directory with the naming template edge-release-oci-tgz-<date>.
Pass this directory to the edge-load-oci-artefacts.sh script to load the Edge OCI chart images to your private registry:
Note: This script assumes the helm CLI has been pre-installed on your environment. For Helm installation instructions, see Installing Helm.
./edge-load-oci-artefacts.sh --archive-directory edge-release-oci-tgz-<date> --registry <REGISTRY.YOURDOMAIN.COM:PORT> --source-registry registry.suse.com
29.4.2.7 Create registry mirrors pointing to your private registry for your Kubernetes distribution #
For RKE2, see Containerd Registry Configuration
For K3s, see Embedded Registry Mirror
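For RKE2, a minimal sketch of such a mirror configuration, placed in /etc/rancher/rke2/registries.yaml on each node (registry address and credentials are placeholders), could look like:
# Redirect pulls from registry.suse.com to your private registry
mirrors:
  registry.suse.com:
    endpoint:
      - "https://REGISTRY.YOURDOMAIN.COM:PORT"
configs:
  "REGISTRY.YOURDOMAIN.COM:PORT":
    auth:
      username: <user>
      password: <password>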
29.4.3 Upgrade procedure #
This section focuses on the following Helm upgrade procedure use-cases:
I have a new cluster and would like to deploy and manage a SUSE Helm chart (Section 29.4.3.1, “I have a new cluster and would like to deploy and manage a SUSE Helm chart”)
I would like to upgrade a Fleet managed Helm chart (Section 29.4.3.2, “I would like to upgrade a Fleet managed Helm chart”)
I would like to upgrade an EIB deployed Helm chart (Section 29.4.3.3, “I would like to upgrade an EIB deployed Helm chart”)
Manually deployed Helm charts cannot be reliably upgraded. We suggest redeploying the Helm chart using the Section 29.4.3.1, “I have a new cluster and would like to deploy and manage a SUSE Helm chart” method.
29.4.3.1 I have a new cluster and would like to deploy and manage a SUSE Helm chart #
For users that want to manage their Helm chart lifecycle through Fleet.
This section covers how to:
Prepare your Fleet resources (Section 29.4.3.1.1, “Prepare your Fleet resources”).
Deploy your Fleet resources (Section 29.4.3.1.2, “Deploy your Fleet”).
Manage the deployed Helm chart (Section 29.4.3.1.3, “Managing the deployed Helm chart”).
29.4.3.1.1 Prepare your Fleet resources #
Acquire the chart’s Fleet resources from the Edge release tag that you wish to use.
From the selected Edge release tag revision, navigate to the Helm chart fleet - fleets/day2/chart-templates/<chart>
If you intend to use a GitOps workflow, copy the chart Fleet directory to the Git repository from where you will do GitOps.
Optionally, if the Helm chart requires configurations to its values, edit the .helm.values configuration inside the fleet.yaml file of the copied directory.
Optionally, there may be use-cases where you need to add additional resources to your chart’s fleet so that it can better fit your environment. For information on how to enhance your Fleet directory, see Git Repository Contents.
An example for the longhorn Helm chart would look like:
User Git repository structure:
<user_repository_root>
├── longhorn
│   └── fleet.yaml
└── longhorn-crd
    └── fleet.yaml
fleet.yaml content populated with user longhorn data:
defaultNamespace: longhorn-system

helm:
  releaseName: "longhorn"
  chart: "longhorn"
  repo: "https://charts.rancher.io/"
  version: "104.2.0+up1.7.1"
  takeOwnership: true
  # custom chart value overrides
  values:
    # Example for user provided custom values content
    defaultSettings:
      deletingConfirmationFlag: true

# https://fleet.rancher.io/bundle-diffs
diff:
  comparePatches:
  - apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    name: engineimages.longhorn.io
    operations:
    - {"op":"remove", "path":"/status/conditions"}
    - {"op":"remove", "path":"/status/storedVersions"}
    - {"op":"remove", "path":"/status/acceptedNames"}
  - apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    name: nodes.longhorn.io
    operations:
    - {"op":"remove", "path":"/status/conditions"}
    - {"op":"remove", "path":"/status/storedVersions"}
    - {"op":"remove", "path":"/status/acceptedNames"}
  - apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    name: volumes.longhorn.io
    operations:
    - {"op":"remove", "path":"/status/conditions"}
    - {"op":"remove", "path":"/status/storedVersions"}
    - {"op":"remove", "path":"/status/acceptedNames"}
Note: These are just example values used to illustrate custom configurations over the longhorn chart. They should NOT be treated as deployment guidelines for the longhorn chart.
29.4.3.1.2 Deploy your Fleet #
If the environment supports working with a GitOps workflow, you can deploy your Chart Fleet by either using a GitRepo (Section 29.4.3.1.2.1, “GitRepo”) or Bundle (Section 29.4.3.1.2.2, “Bundle”).
While deploying your Fleet, if you get a Modified message, make sure to add a corresponding comparePatches entry to the Fleet’s diff section. For more information, see Generating Diffs to Ignore Modified GitRepos.
29.4.3.1.2.1 GitRepo #
Fleet’s GitRepo resource holds information on how to access your chart’s Fleet resources and to which clusters it needs to apply those resources.
The GitRepo resource can be deployed through the Rancher UI, or manually, by deploying the resource to the management cluster.
Example Longhorn GitRepo resource for manual deployment:
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
name: longhorn-git-repo
namespace: fleet-default
spec:
# If using a tag
# revision: <user_repository_tag>
#
# If using a branch
# branch: <user_repository_branch>
paths:
# As seen in the 'Prepare your Fleet resources' example
- longhorn
- longhorn-crd
repo: <user_repository_url>
targets:
# Match all clusters
- clusterSelector: {}
29.4.3.1.2.2 Bundle #
Bundle resources hold the raw Kubernetes resources that need to be deployed by Fleet. Normally it is encouraged to use the GitRepo approach, but for use-cases where the environment is air-gapped and cannot support a local Git server, Bundles can help you propagate your Helm chart Fleet to your target clusters.
The Bundle can be deployed either through the Rancher UI (Continuous Delivery → Advanced → Bundles → Create from YAML) or by manually deploying the Bundle resource in the correct Fleet namespace. For information about Fleet namespaces, see the upstream documentation.
Example Longhorn Bundle resource deployment using a manual approach:
Navigate to the Longhorn chart fleet located under fleets/day2/chart-templates/longhorn/longhorn:
cd fleets/day2/chart-templates/longhorn/longhorn
Create a targets.yaml file that will instruct Fleet to which clusters it should deploy the Helm chart. In this case, we will deploy to a single downstream cluster. For information on how to map more complex targets, see Mapping to Downstream Clusters:
cat > targets.yaml <<EOF
targets:
- clusterName: foo
EOF
Convert the Longhorn Helm chart Fleet to a Bundle resource. For more information, see Convert a Helm Chart into a Bundle:
fleet apply --compress --targets-file=targets.yaml -n fleet-default -o - longhorn-bundle > longhorn-bundle.yaml
Navigate to the Longhorn CRD chart fleet located under fleets/day2/chart-templates/longhorn/longhorn-crd:
cd fleets/day2/chart-templates/longhorn/longhorn-crd
Create a targets.yaml file that will instruct Fleet to which clusters it should deploy the Helm chart. In this case, we will deploy to a single downstream cluster. For information on how to map more complex targets, see Mapping to Downstream Clusters:
cat > targets.yaml <<EOF
targets:
- clusterName: foo
EOF
Convert the Longhorn CRD Helm chart Fleet to a Bundle resource. For more information, see Convert a Helm Chart into a Bundle:
fleet apply --compress --targets-file=targets.yaml -n fleet-default -o - longhorn-crd-bundle > longhorn-crd-bundle.yaml
Deploy longhorn-bundle.yaml and longhorn-crd-bundle.yaml to your management cluster:
kubectl apply -f longhorn-crd-bundle.yaml
kubectl apply -f longhorn-bundle.yaml
Following these steps will ensure that Longhorn is deployed on all of the specified target clusters.
29.4.3.1.3 Managing the deployed Helm chart #
Once deployed with Fleet, for Helm chart upgrades, see Section 29.4.3.2, “I would like to upgrade a Fleet managed Helm chart”.
29.4.3.2 I would like to upgrade a Fleet managed Helm chart #
Determine the version to which you need to upgrade your chart so that it is compatible with the desired Edge release. Helm chart version per Edge release can be viewed from the release notes (Section 37.1, “Abstract”).
In your Fleet-monitored Git repository, edit the Helm chart’s fleet.yaml file with the correct chart version and repository from the release notes (Section 37.1, “Abstract”), as sketched below.
Committing and pushing the changes to your repository will trigger an upgrade of the desired Helm chart.
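A minimal sketch of such an edit, using the Longhorn chart and versions from the example elsewhere in this chapter as placeholders:
# fleet.yaml - before the upgrade
helm:
  releaseName: "longhorn"
  chart: "longhorn"
  repo: "https://charts.rancher.io/"
  version: "103.3.0+up1.6.1"

# fleet.yaml - after bumping to the version listed in the release notes
helm:
  releaseName: "longhorn"
  chart: "longhorn"
  repo: "https://charts.rancher.io/"
  version: "104.2.0+up1.7.1"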
29.4.3.3 I would like to upgrade an EIB deployed Helm chart #
EIB deploys Helm charts by creating a HelmChart resource and utilizing the helm-controller introduced by the RKE2/K3s Helm integration feature.
To ensure that an EIB deployed Helm chart is successfully upgraded, users need to upgrade the HelmChart resources that EIB created for the chart.
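For orientation, a trimmed sketch of what such an EIB-created HelmChart resource roughly looks like (values are placeholders; the actual resource is generated by EIB and patched by the upgrade process described below):
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: longhorn
  namespace: kube-system          # namespace watched by the RKE2/K3s helm-controller
spec:
  version: 103.3.0+up1.6.1        # patched to the new chart version during the upgrade
  chartContent: <base64-encoded chart archive>  # patched with the new chart archive
  targetNamespace: longhorn-system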
Below you can find information on:
The general overview (Section 29.4.3.3.1, “Overview”) of the EIB deployed Helm chart upgrade process.
The necessary upgrade steps (Section 29.4.3.3.2, “Upgrade Steps”) needed for a successful EIB deployed Helm chart upgrade.
An example (Section 29.4.3.3.3, “Example”) showcasing a Longhorn chart upgrade using the explained method.
How to use the upgrade process with a different GitOps tool (Section 29.4.3.3.4, “Helm chart upgrade using a third-party GitOps tool”).
29.4.3.3.1 Overview #
This section is meant to give a high overview of the steps that need to be taken in order to upgrade one or multiple Helm charts that have been deployed by EIB. For a detailed explanation of the steps needed for a Helm chart upgrade, see Section 29.4.3.3.2, “Upgrade Steps”.
The workflow begins with the user pulling the new Helm chart archive(s) that they wish to upgrade their chart(s) to.
The archive(s) should then be placed in a directory that will be processed by the generate-chart-upgrade-data.sh script.
The user then proceeds to execute the generate-chart-upgrade-data.sh script, which will generate a Kubernetes Secret YAML file for each Helm chart archive in the provided archive directory. These secrets will be automatically placed under the Fleet that will be used to upgrade the Helm charts. This is further explained in the upgrade steps (Section 29.4.3.3.2, “Upgrade Steps”) section.
After the script finishes successfully, the user should continue with the configuration and deployment of either a Bundle or a GitRepo resource that will ship all the needed Kubernetes resources to the target clusters.
The resource is deployed on the management cluster under the fleet-default namespace.
Fleet (Chapter 7, Fleet) detects the deployed resource, parses its data and deploys its resources to the specified target clusters. The most notable resources that are deployed are:
eib-charts-upgrader - a Job that deploys the Chart Upgrade Pod. The eib-charts-upgrader-script as well as all Helm chart upgrade data secrets are mounted inside of the Chart Upgrade Pod.
eib-charts-upgrader-script - a Secret shipping the script that will be used by the Chart Upgrade Pod to patch an existing HelmChart resource.
Helm chart upgrade data secrets - Secret YAML files created by the generate-chart-upgrade-data.sh script based on the user provided data. These Secret YAML files should not be edited.
Once the Chart Upgrade Pod has been deployed, the script from the eib-charts-upgrader-script secret is executed, which does the following:
Process all the Helm chart upgrade data provided by the other secrets.
Check whether there is a HelmChart resource for each of the provided chart upgrade data entries.
Patch the HelmChart resource with the data provided by the secret for the corresponding Helm chart.
The RKE2/K3s helm-controller constantly monitors for edits to the existing HelmChart resources. It detects the patch of the HelmChart, reconciles the changes and then proceeds to upgrade the chart behind the HelmChart resource.
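A quick way to verify on the downstream cluster that the patch has been picked up (a sketch, assuming the HelmChart resources live in the namespace watched by the RKE2/K3s helm-controller):
# Check that the chart version on the HelmChart resources has been bumped
kubectl get helmcharts -A

# Inspect a specific chart in more detail (resource name is a placeholder)
kubectl describe helmchart longhorn -n kube-system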
29.4.3.3.2 Upgrade Steps #
Clone the suse-edge/fleet-examples repository from the Edge release tag that you wish to use.
Create a directory in which you will store the pulled Helm chart archive(s).
mkdir archives
Inside of the newly created archive directory, pull the Helm chart archive(s) that you wish to upgrade to:
cd archives
helm pull [chart URL | repo/chartname]
# Alternatively if you want to pull a specific version:
# helm pull [chart URL | repo/chartname] --version 0.0.0
From the desired release tag, download the generate-chart-upgrade-data.sh script.
Execute the generate-chart-upgrade-data.sh script:
Important: Users should not make any changes to what the generate-chart-upgrade-data.sh script generates.
chmod +x ./generate-chart-upgrade-data.sh
./generate-chart-upgrade-data.sh --archive-dir /foo/bar/archives/ --fleet-path /foo/bar/fleet-examples/fleets/day2/eib-charts-upgrader
The script will go through the following logic:
Validate that the provided --fleet-path points to a valid Fleet that can initiate a Helm chart upgrade.
Process all Helm chart archives from the user-created archives directory (e.g. /foo/bar/archives/).
For each Helm chart archive, create a Kubernetes Secret YAML resource. This resource will hold:
The name of the HelmChart resource that needs to be patched.
The new version for the HelmChart resource.
The base64 encoded Helm chart archive that will be used to replace the HelmChart’s currently running configuration.
Each Kubernetes Secret YAML resource will be transferred to the base/secrets directory inside the path to the eib-charts-upgrader Fleet that was given under --fleet-path.
Furthermore, the generate-chart-upgrade-data.sh script ensures that the secrets it moved will be picked up and used in the Helm chart upgrade logic. It does that by:
Editing the base/secrets/kustomization.yaml file to include the newly added resources.
Editing the base/patches/job-patch.yaml file to include the newly added secrets in the mount configurations.
After a successful generate-chart-upgrade-data.sh run, you should have changes inside the following directories of the suse-edge/fleet-examples repository:
repository:fleets/day2/eib-charts-upgrader/base/patches
fleets/day2/eib-charts-upgrader/base/secrets
The steps below depend on the environment that you are running:
For an environment that supports GitOps (e.g. is non air-gapped, or is air-gapped, but allows for local Git server support):
Copy the fleets/day2/eib-charts-upgrader Fleet to the repository that you will use for GitOps. Make sure that the Fleet includes the changes that have been made by the generate-chart-upgrade-data.sh script.
Configure a GitRepo resource that will be used to ship all the resources of the eib-charts-upgrader Fleet.
For GitRepo configuration and deployment through the Rancher UI, see Accessing Fleet in the Rancher UI.
For GitRepo manual configuration and deployment, see Creating a Deployment.
For an environment that does not support GitOps (e.g. is air-gapped and does not allow local Git server usage):
Download the fleet-cli binary from the rancher/fleet releases page. For Mac users, there is a Homebrew Formulae that can be used - fleet-cli.
Navigate to the eib-charts-upgrader Fleet:
cd /foo/bar/fleet-examples/fleets/day2/eib-charts-upgrader
Create a targets.yaml file that will instruct Fleet where to deploy your resources:
cat > targets.yaml <<EOF
targets:
- clusterSelector: {} # Change this with your target data
EOF
For information on how to map target clusters, see the upstream documentation.
Use the fleet-cli to convert the Fleet to a Bundle resource:
fleet apply --compress --targets-file=targets.yaml -n fleet-default -o - eib-charts-upgrade > bundle.yaml
This will create a Bundle (bundle.yaml) that holds all the templated resources from the eib-charts-upgrader Fleet.
For more information regarding the fleet apply command, see fleet apply.
For more information regarding converting Fleets to Bundles, see Convert a Helm Chart into a Bundle.
Deploy the Bundle. This can be done in one of two ways:
Through Rancher’s UI - Navigate to Continuous Delivery → Advanced → Bundles → Create from YAML and either paste the bundle.yaml contents, or click the Read from File option and pass the file itself.
Manually - Deploy the bundle.yaml file manually inside of your management cluster.
Executing these steps will result in a successfully deployed GitRepo/Bundle resource. The resource will be picked up by Fleet and its contents will be deployed onto the target clusters that the user has specified in the previous steps. For an overview of the process, refer to the overview (Section 29.4.3.3.1, “Overview”) section.
For information on how to track the upgrade process, you can refer to the Example (Section 29.4.3.3.3, “Example”) section of this documentation.
Once the chart upgrade has been successfully verified, remove the Bundle/GitRepo resource. This removes the no longer necessary upgrade resources from your downstream cluster and ensures that no future version clashes occur. An illustrative removal example is shown below.
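The removal can be done through the Rancher UI, or directly with kubectl against the management cluster. A minimal sketch, assuming the resource names used earlier in this section (eib-charts-upgrader for a GitRepo, eib-charts-upgrade for a Bundle) and the fleet-default namespace; adjust both to your setup:

# If you deployed a GitRepo
kubectl delete gitrepo eib-charts-upgrader -n fleet-default

# If you deployed a Bundle
kubectl delete bundle eib-charts-upgrade -n fleet-default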
29.4.3.3.3 Example #
The example below illustrates how to upgrade an EIB deployed Helm chart from one version to another. The versions in the example should not be treated as version recommendations; version recommendations for a specific Edge release should be taken from the release notes (Section 37.1, “Abstract”).
Use-case:

- A cluster named doc-example is running Rancher's Longhorn version 103.3.0+up1.6.1.
- The cluster has been deployed through EIB, using the following image definition snippet:

kubernetes:
  helm:
    charts:
    - name: longhorn-crd
      repositoryName: rancher-charts
      targetNamespace: longhorn-system
      createNamespace: true
      version: 103.3.0+up1.6.1
    - name: longhorn
      repositoryName: rancher-charts
      targetNamespace: longhorn-system
      createNamespace: true
      version: 103.3.0+up1.6.1
    repositories:
    - name: rancher-charts
      url: https://charts.rancher.io/
...
Figure 29.4: doc-example installed Longhorn version #

- Longhorn needs to be upgraded to a version that is compatible with the Edge 3.1 release, i.e. to 104.2.0+up1.7.1.
- It is assumed that the management cluster in charge of managing the doc-example cluster is air-gapped, without support for a local Git server, and has a working Rancher setup.
Follow the Upgrade Steps (Section 29.4.3.3.2, “Upgrade Steps”):
Clone the suse-edge/fleet-examples repository from the release-3.1.1 tag:

git clone -b release-3.1.1 https://github.com/suse-edge/fleet-examples.git
Create a directory where the Longhorn upgrade archives will be stored:

mkdir archives
Pull the desired Longhorn chart archive version:

# First add the Rancher Helm chart repository
helm repo add rancher-charts https://charts.rancher.io/

# Pull the Longhorn 1.7.1 CRD archive
helm pull rancher-charts/longhorn-crd --version 104.2.0+up1.7.1

# Pull the Longhorn 1.7.1 chart archive
helm pull rancher-charts/longhorn --version 104.2.0+up1.7.1
Outside of the archives directory, download the generate-chart-upgrade-data.sh script from the release-3.1.1 release tag.

The directory setup should look similar to:

.
├── archives
│   ├── longhorn-104.2.0+up1.7.1.tgz
│   └── longhorn-crd-104.2.0+up1.7.1.tgz
├── fleet-examples
...
│   ├── fleets
│   │   ├── day2
│   │   │   ├── ...
│   │   │   ├── eib-charts-upgrader
│   │   │   │   ├── base
│   │   │   │   │   ├── job.yaml
│   │   │   │   │   ├── kustomization.yaml
│   │   │   │   │   ├── patches
│   │   │   │   │   │   └── job-patch.yaml
│   │   │   │   │   ├── rbac
│   │   │   │   │   │   ├── cluster-role-binding.yaml
│   │   │   │   │   │   ├── cluster-role.yaml
│   │   │   │   │   │   ├── kustomization.yaml
│   │   │   │   │   │   └── sa.yaml
│   │   │   │   │   └── secrets
│   │   │   │   │       ├── eib-charts-upgrader-script.yaml
│   │   │   │   │       └── kustomization.yaml
│   │   │   │   ├── fleet.yaml
│   │   │   │   └── kustomization.yaml
│   │   │   └── ...
│   └── ...
└── generate-chart-upgrade-data.sh
Execute the generate-chart-upgrade-data.sh script:

# First make the script executable
chmod +x ./generate-chart-upgrade-data.sh

# Then execute the script
./generate-chart-upgrade-data.sh --archive-dir ./archives --fleet-path ./fleet-examples/fleets/day2/eib-charts-upgrader
The directory structure after the script execution should look similar to:

.
├── archives
│   ├── longhorn-104.2.0+up1.7.1.tgz
│   └── longhorn-crd-104.2.0+up1.7.1.tgz
├── fleet-examples
...
│   ├── fleets
│   │   ├── day2
│   │   │   ├── ...
│   │   │   ├── eib-charts-upgrader
│   │   │   │   ├── base
│   │   │   │   │   ├── job.yaml
│   │   │   │   │   ├── kustomization.yaml
│   │   │   │   │   ├── patches
│   │   │   │   │   │   └── job-patch.yaml
│   │   │   │   │   ├── rbac
│   │   │   │   │   │   ├── cluster-role-binding.yaml
│   │   │   │   │   │   ├── cluster-role.yaml
│   │   │   │   │   │   ├── kustomization.yaml
│   │   │   │   │   │   └── sa.yaml
│   │   │   │   │   └── secrets
│   │   │   │   │       ├── eib-charts-upgrader-script.yaml
│   │   │   │   │       ├── kustomization.yaml
│   │   │   │   │       ├── longhorn-104-2-0-up1-7-1.yaml <- secret created by the generate-chart-upgrade-data.sh script
│   │   │   │   │       └── longhorn-crd-104-2-0-up1-7-1.yaml <- secret created by the generate-chart-upgrade-data.sh script
│   │   │   │   ├── fleet.yaml
│   │   │   │   └── kustomization.yaml
│   │   │   └── ...
│   └── ...
└── generate-chart-upgrade-data.sh
The files changed in git should look like this:

Figure 29.5: Changes over fleet-examples made by generate-chart-upgrade-data.sh #

Since the management cluster does not support a GitOps workflow, a Bundle needs to be created for the eib-charts-upgrader Fleet:

First, navigate to the Fleet itself:
cd ./fleet-examples/fleets/day2/eib-charts-upgrader
Then create a targets.yaml file targeting the doc-example cluster:

cat > targets.yaml <<EOF
targets:
- clusterName: doc-example
EOF
Then use the fleet-cli binary to convert the Fleet to a Bundle:

fleet apply --compress --targets-file=targets.yaml -n fleet-default -o - eib-charts-upgrade > bundle.yaml
Now, transfer the bundle.yaml to your management cluster machine.
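How the file is transferred depends on your environment. For a management host that is reachable over SSH, something like the following would do; the user, host name and destination path are placeholders:

# Copy the Bundle to the machine that has Rancher/kubectl access to the management cluster
scp bundle.yaml user@management-host:/tmp/bundle.yaml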
Since the management cluster is running Rancher, deploy the Bundle through the Rancher UI:

Figure 29.6: Deploy Bundle through Rancher UI #

From here, select Read from File and find the bundle.yaml file on your system. This will auto-populate the Bundle inside of Rancher's UI:

Figure 29.7: Auto-populated Bundle snippet #

Select Create.
After a successful deployment, your Bundle would look similar to:
Figure 29.8: Successfully deployed Bundle #
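If you prefer the command line, the deployment state can also be checked from the management cluster. A minimal sketch, assuming kubectl is configured against the management cluster and the Bundle lives in the fleet-default namespace:

# The Bundle should eventually report as ready once Fleet has rolled it out
kubectl get bundles -n fleet-default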
After the successful deployment of the Bundle, monitor the upgrade process as follows (illustrative kubectl commands for these checks are shown after this list):

- First, verify the logs of the Upgrade Pod:

  Figure 29.9: View the upgrade pod logs #

- Now verify the logs of the Pod created for the upgrade by the helm-controller:

  The Pod name follows the template helm-install-longhorn-<random-suffix>.

  The Pod runs in the namespace where the HelmChart resource was deployed. In our case this is default.

  Figure 29.10: Logs for successfully upgraded Longhorn chart #

- Check that the HelmChart version has been bumped:

  Figure 29.11: Bumped Longhorn version #

- Finally, check that the Longhorn Pods are running:

  Figure 29.12: Example for validating the instance-manager pod #
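If you would rather verify from the command line than through the figures above, the following commands cover the same checks. They are a sketch only and assume, as in this example, that the HelmChart resources live in the default namespace; replace the Pod name placeholder with the actual name:

# Logs of the helm-controller Pod that performed the upgrade
kubectl get pods -n default | grep helm-install-longhorn
kubectl logs -n default <helm-install-longhorn-pod-name>

# Confirm that the HelmChart resources now carry the new version
kubectl get helmchart longhorn longhorn-crd -n default -o custom-columns=NAME:.metadata.name,VERSION:.spec.version

# Confirm that the Longhorn Pods are up
kubectl get pods -n longhorn-system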
After making the above validations, it is safe to assume that the Longhorn Helm chart has been upgraded from 103.3.0+up1.6.1 to 104.2.0+up1.7.1.
29.4.3.3.4 Helm chart upgrade using a third-party GitOps tool #
There might be use-cases where users would like to use this upgrade procedure with a GitOps workflow other than Fleet (e.g. Flux).

To produce the resources needed for the upgrade procedure, you can use the generate-chart-upgrade-data.sh script to populate the eib-charts-upgrader Fleet with the user-provided data. For more information on how to do this, see the upgrade steps (Section 29.4.3.3.2, “Upgrade Steps”).
After you have the full setup, you can use kustomize to generate a full working solution that you can deploy in your cluster:
cd /foo/bar/fleets/day2/eib-charts-upgrader
kustomize build .
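If you simply want to inspect or apply the rendered manifests directly, instead of committing them to a GitOps repository, the output of kustomize build can be piped straight to kubectl. A minimal sketch, assuming kubectl is pointed at the cluster where the upgrade resources should run:

# Render the Kustomize setup and apply it in one step
kustomize build . | kubectl apply -f -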
If you want to include the solution in your GitOps workflow, you can remove the fleet.yaml file and use what is left as a valid Kustomize setup. Just do not forget to first run the generate-chart-upgrade-data.sh script, so that it populates the Kustomize setup with the data for the Helm charts that you wish to upgrade.
To understand how this workflow is intended to be used, it can be beneficial to look at the overview (Section 29.4.3.3.1, “Overview”) and upgrade steps (Section 29.4.3.3.2, “Upgrade Steps”) sections as well.