22 MetalLB in front of the Kubernetes API server #
This guide demonstrates using a MetalLB service to expose the RKE2/K3s API externally on an HA cluster with three control-plane nodes.
To achieve this, a Kubernetes Service of type LoadBalancer and an Endpoints resource will be manually created. The Endpoints resource keeps the IPs of all control-plane nodes available in the cluster. For the Endpoints resource to stay continuously synchronized with the events occurring in the cluster (a node being added or removed, or going offline), the Endpoint Copier Operator will be deployed. The operator monitors the events happening in the default kubernetes Endpoints resource and updates the managed one automatically to keep them in sync. Since the managed Service is of type LoadBalancer, MetalLB assigns it a static ExternalIP. This ExternalIP will be used to communicate with the API server.
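For reference, once the cluster is up, the default kubernetes Endpoints resource that the operator watches can be inspected at any time; it lists the API server addresses of the control-plane nodes:

kubectl get endpoints kubernetes -n default -o yaml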
22.1 Prerequisites #
Three hosts on which to deploy RKE2/K3s.
Ensure the hosts have different hostnames.
For testing, these could be virtual machines.
At least two available IPs in the network (one for Traefik/NGINX and one for the managed Service).
Helm
22.2 Installing RKE2/K3s #
If you want to use an existing cluster instead of a fresh one, skip this step and proceed to the next one.
First, a free IP in the network must be reserved; it will be used later as the ExternalIP of the managed Service.
SSH to the first host and install the desired distribution in cluster mode.
For RKE2:
# Export the free IP mentioned above
export VIP_SERVICE_IP=<ip>
curl -sfL https://get.rke2.io | INSTALL_RKE2_EXEC="server \
--write-kubeconfig-mode=644 --tls-san=${VIP_SERVICE_IP} \
--tls-san=${VIP_SERVICE_IP}.sslip.io" sh -
systemctl enable rke2-server.service
systemctl start rke2-server.service
# Fetch the cluster token:
RKE2_TOKEN=$(tr -d '\n' < /var/lib/rancher/rke2/server/node-token)
For K3s:
# Export the free IP mentioned above
export VIP_SERVICE_IP=<ip>
export INSTALL_K3S_SKIP_START=false
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --cluster-init \
--disable=servicelb --write-kubeconfig-mode=644 --tls-san=${VIP_SERVICE_IP} \
--tls-san=${VIP_SERVICE_IP}.sslip.io" K3S_TOKEN=foobar sh -
Make sure the --disable=servicelb flag is provided in the k3s server command.
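Optionally, verify on the node that the server came up and, for K3s, that disabling servicelb left no Klipper svclb pods behind. This is a quick sanity check; for RKE2, kubectl ships under /var/lib/rancher/rke2/bin and the kubeconfig is written to /etc/rancher/rke2/rke2.yaml:

kubectl get nodes
# K3s only: with servicelb disabled, this should return nothing
kubectl get pods -n kube-system | grep svclb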
From now on, the commands should be run on the local machine.
To access the API server from outside, the IP of the RKE2/K3s VM will be used.
# Replace <node-ip> with the actual IP of the machine
export NODE_IP=<node-ip>
export KUBE_DISTRIBUTION=<k3s/rke2>
scp ${NODE_IP}:/etc/rancher/${KUBE_DISTRIBUTION}/${KUBE_DISTRIBUTION}.yaml ~/.kube/config
# Note: "sed -i ''" is the macOS/BSD form; with GNU sed, use "sed -i" without the empty suffix
sed -i '' "s/127.0.0.1/${NODE_IP}/g" ~/.kube/config && chmod 600 ~/.kube/config
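A quick check that the fetched kubeconfig works from the local machine:

kubectl get nodes -o wide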
22.3 Configuring an existing cluster #
This step is valid only if you intend to use an existing RKE2/K3s cluster.
To use an existing cluster, the tls-san flags should be modified and, for K3s, the servicelb load balancer should be disabled.
To change the flags for RKE2 or K3s servers, you need to modify either the rke2-server.service or the k3s.service systemd unit file (for example, /etc/systemd/system/k3s.service) on all the VMs in the cluster, depending on the distribution.
The flags should be inserted in the ExecStart line. For example:
For RKE2:
# Replace <vip-service-ip> with the actual IP
ExecStart=/usr/local/bin/rke2 \
    server \
        '--write-kubeconfig-mode=644' \
        '--tls-san=<vip-service-ip>' \
        '--tls-san=<vip-service-ip>.sslip.io' \
For K3s:
# Replace <vip-service-ip> with the actual IP
ExecStart=/usr/local/bin/k3s \
    server \
        '--cluster-init' \
        '--write-kubeconfig-mode=644' \
        '--disable=servicelb' \
        '--tls-san=<vip-service-ip>' \
        '--tls-san=<vip-service-ip>.sslip.io' \
Then the following commands should be executed to load the new configurations:
systemctl daemon-reload
systemctl restart rke2-server   # for K3s: systemctl restart k3s
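To confirm the new SANs were picked up after the restart, the API server's serving certificate can be inspected from any machine that can reach a node (an optional check; replace <node-ip> accordingly):

openssl s_client -connect <node-ip>:6443 </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'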
22.4 Installing MetalLB #
To deploy MetalLB, the MetalLB on K3s guide can be used.
NOTE: Ensure that the IP addresses of the ip-pool IPAddressPool do not overlap with the IP previously reserved for the managed LoadBalancer Service.
Create a separate IPAddressPool that will be used only for the managed Service.
# Export the VIP_SERVICE_IP on the local machine
# Replace with the actual IP
export VIP_SERVICE_IP=<ip>
cat <<-EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: kubernetes-vip-ip-pool
  namespace: metallb-system
spec:
  addresses:
    - ${VIP_SERVICE_IP}/32
  serviceAllocation:
    priority: 100
    namespaces:
      - default
EOF
cat <<-EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: ip-pool-l2-adv
  namespace: metallb-system
spec:
  ipAddressPools:
    - ip-pool
    - kubernetes-vip-ip-pool
EOF
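Optionally, confirm that both resources were created:

kubectl get ipaddresspools.metallb.io -n metallb-system
kubectl get l2advertisements.metallb.io -n metallb-system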
22.5 Installing the Endpoint Copier Operator #
helm install \
endpoint-copier-operator oci://registry.suse.com/edge/3.1/endpoint-copier-operator-chart \
--namespace endpoint-copier-operator \
--create-namespace
The command above deploys the endpoint-copier-operator Deployment with two replicas: one is the leader, and the other takes over the leader role if needed.
Now, the kubernetes-vip Service should be deployed. It will be reconciled by the operator, and an Endpoints resource with the configured ports and IPs will be created.
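Before creating the Service, it can be verified that both operator replicas are running:

kubectl get pods -n endpoint-copier-operator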
For RKE2:
cat <<-EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-vip
  namespace: default
spec:
  ports:
    - name: rke2-api
      port: 9345
      protocol: TCP
      targetPort: 9345
    - name: k8s-api
      port: 6443
      protocol: TCP
      targetPort: 6443
  type: LoadBalancer
EOF
For K3s:
cat <<-EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-vip
  namespace: default
spec:
  internalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - name: https
      port: 443
      protocol: TCP
      targetPort: 6443
  sessionAffinity: None
  type: LoadBalancer
EOF
Verify that the kubernetes-vip Service has the correct IP address:
kubectl get service kubernetes-vip -n default \
-o=jsonpath='{.status.loadBalancer.ingress[0].ip}'
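The command should print the reserved VIP, for example (placeholder value):

192.168.1.240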
Ensure that the kubernetes-vip and kubernetes Endpoints resources in the default namespace point to the same IPs.
kubectl get endpoints kubernetes kubernetes-vip
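Example output for RKE2 with a single node joined so far (placeholder IPs; K3s lists only port 6443 for kubernetes-vip):

NAME             ENDPOINTS                             AGE
kubernetes       192.168.1.11:6443                     10m
kubernetes-vip   192.168.1.11:9345,192.168.1.11:6443   2m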
If everything is correct, the last thing left is to use the VIP_SERVICE_IP in the kubeconfig.
sed -i '' "s/${NODE_IP}/${VIP_SERVICE_IP}/g" ~/.kube/config
From now on, all kubectl requests will go through the kubernetes-vip Service.
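A quick check that requests now flow through the VIP:

kubectl cluster-info
kubectl get --raw /healthz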
22.6 Adding control-plane nodes #
To monitor the entire process, two more terminal tabs can be opened.
First terminal:
watch kubectl get nodes
Second terminal:
watch kubectl get endpoints
Now execute the commands below on the second and third nodes.
For RKE2:
# Export the VIP_SERVICE_IP in the VM
# Replace with the actual IP
export VIP_SERVICE_IP=<ip>
# Export the cluster token fetched on the first node
export RKE2_TOKEN=<token>
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE="server" sh -
systemctl enable rke2-server.service
mkdir -p /etc/rancher/rke2/
cat <<EOF > /etc/rancher/rke2/config.yaml
server: https://${VIP_SERVICE_IP}:9345
token: ${RKE2_TOKEN}
EOF
systemctl start rke2-server.service
For K3s:
# Export the VIP_SERVICE_IP in the VM
# Replace with the actual IP
export VIP_SERVICE_IP=<ip>
export INSTALL_K3S_SKIP_START=false
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server \
--server https://${VIP_SERVICE_IP}:6443 --disable=servicelb \
--write-kubeconfig-mode=644" K3S_TOKEN=foobar sh -
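Once both nodes have joined, the watch terminals should show three Ready nodes and three control-plane IPs in each Endpoints resource. The same can be confirmed from the local machine:

kubectl get nodes
kubectl get endpoints kubernetes kubernetes-vip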