In this document, we will ensure uninterrupted service and high availability by redirecting the Kubernetes API (control-plane) traffic to a virtual IP, without disrupting our existing Kubernetes cluster.


Available Machines:


  • Kubernetes Master1
  • Kubernetes Master2
  • Kubernetes Master3
  • Kubernetes Worker1
  • Kubernetes Worker2


1) Current System and Virtual IP Access



In the current system, a Kubernetes cluster was created using the kubeadm init command on Master1, and then other machines (Master2, Master3, Worker1, and Worker2) were added to this cluster.

We need to define a Virtual IP on the Load Balancer and ensure that the existing machines access this Virtual IP through port 6443.
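Before touching the cluster, it is worth verifying that the Virtual IP is reachable on port 6443 from each machine. A minimal check, where <VIRTUAL_IP> is a placeholder for the address defined on the load balancer:

# Raw TCP connectivity to the Virtual IP on the API server port
nc -vz <VIRTUAL_IP> 6443

# If the load balancer already forwards 6443 to the masters,
# the API server health endpoint should answer with "ok"
curl -k https://<VIRTUAL_IP>:6443/healthz
BASH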


Very Important

If you are not using a load balancer and cannot create a virtual IP on the network, you can use the Keepalived and HAProxy tools to accomplish this.

Keepalived is used to create a virtual IP, while HAProxy is used for load balancing.
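If you go the Keepalived/HAProxy route, the sketch below is one possible minimal starting point, assuming both tools run on one or more dedicated load balancer machines rather than on the masters themselves (otherwise HAProxy cannot bind port 6443, which the API server already uses). The interface name, virtual router ID, and all IP addresses are placeholders that you must adapt to your own network.

# Install Keepalived and HAProxy (Debian/Ubuntu shown; adapt to your distribution)
sudo apt-get install -y keepalived haproxy

# Keepalived: advertise the Virtual IP on this load balancer machine.
# Placeholders: eth0 (network interface), 192.168.1.100 (Virtual IP).
sudo tee /etc/keepalived/keepalived.conf > /dev/null <<'EOF'
vrrp_instance VI_1 {
    # Use state BACKUP and a lower priority on the second load balancer
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 101
    advert_int 1
    virtual_ipaddress {
        192.168.1.100
    }
}
EOF

# HAProxy: forward port 6443 on the Virtual IP to the master nodes.
# Placeholders: 192.168.1.11-13 (master IPs).
sudo tee -a /etc/haproxy/haproxy.cfg > /dev/null <<'EOF'

frontend kubernetes-api
    bind *:6443
    mode tcp
    default_backend kubernetes-masters

backend kubernetes-masters
    mode tcp
    balance roundrobin
    option tcp-check
    server master1 192.168.1.11:6443 check
    server master2 192.168.1.12:6443 check
    server master3 192.168.1.13:6443 check
EOF

sudo systemctl enable keepalived haproxy
sudo systemctl restart keepalived haproxy
BASH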


2) Changing the IP


When there are multiple master nodes in the cluster, we need to reconfigure the cluster to leave only one master.

This process typically starts by completely removing the other masters from the cluster.

The following command is run on the other masters (master2 and master3), but not on master1.


sudo kubeadm reset
BASH
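kubeadm reset warns in its own output that it does not clean up CNI configuration or iptables/IPVS rules. Since master2 and master3 will rejoin the cluster later, you can optionally remove these leftovers as well; the commands below are an optional, destructive sketch (the iptables flush removes all rules, so skip it if the machine has other firewall rules).

# Optional cleanup after kubeadm reset
sudo rm -rf /etc/cni/net.d
# Flush iptables rules left behind by kube-proxy and the CNI plugin
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
# Only needed if kube-proxy ran in IPVS mode and ipvsadm is installed
sudo ipvsadm --clear
BASH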


From the Master1 server, the other masters (master2 and master3) are deleted from the cluster.


kubectl delete nodes master2 
kubectl delete nodes master3
BASH

At this point, only master1 and the worker servers must remain in the cluster.
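You can confirm this from master1 before continuing; only master1, worker1, and worker2 should be listed.

# master2 and master3 should no longer appear in the node list
kubectl get nodes
BASH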


2.1) What to do on the Master1 server:


Stop the Kubelet and containerd (and the Docker application if Docker is being used).

sudo systemctl stop kubelet
sudo systemctl stop containerd
BASH


The existing configuration is backed up, the cluster certificates are restored from the backup, and the files that must be regenerated are deleted.

# Back up the existing configuration and kubelet state
sudo mv -f /etc/kubernetes /etc/kubernetes.backup
sudo mv -f /var/lib/kubelet /var/lib/kubelet.backup

# Restore the cluster certificates from the backup
sudo mkdir -p /etc/kubernetes/pki
sudo cp -r /etc/kubernetes.backup/pki /etc/kubernetes

# Remove the certificates that must be regenerated for the new endpoint,
# along with the old kubeconfig
sudo rm -rf /etc/kubernetes/pki/{apiserver.*,etcd/peer.*}
sudo rm -f ~/.kube/config
BASH


Start Containerd (and the Docker application if Docker is being used).

sudo systemctl start containerd
BASH

The kubeadm init command is rerun, this time with the control-plane endpoint set to the Virtual IP, as follows.

sudo kubeadm init --pod-network-cidr "10.244.0.0/16" --control-plane-endpoint <VIRTUAL_IP> --upload-certs --ignore-preflight-errors=DirAvailable--var-lib-etcd
BASH
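Because ~/.kube/config was removed above, recreate it from the freshly generated admin.conf; this is the standard post-init step that kubeadm also prints in its output.

# Point kubectl at the new cluster configuration
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
BASH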

Note: Take note of the kubeadm join commands printed in the output; the worker join command will be used in step 2.2 and the control-plane join command in step 2.3.

2.2) Actions for Worker1 and Worker2:


The Kubelet and containerd (and the Docker application if Docker is being used) are stopped.


sudo systemctl stop kubelet
sudo systemctl stop containerd
BASH


Some files are backed up.

sudo mv -f /etc/kubernetes /etc/kubernetes.backup
sudo mv -f /var/lib/kubelet /var/lib/kubelet.backup
BASH


Containerd (and the Docker application if Docker is being used) is started.

sudo systemctl start containerd
BASH

The worker kubeadm join command noted in step 2.1 is executed on the Worker1 and Worker2 machines, as in the sketch below.
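The token and CA certificate hash below are placeholders; use the values printed by kubeadm init on Master1.

# Join a worker to the cluster through the Virtual IP
sudo kubeadm join <VIRTUAL_IP>:6443 --token <TOKEN> \
    --discovery-token-ca-cert-hash sha256:<HASH>

# If the token has expired, generate a fresh worker join command on master1:
#   kubeadm token create --print-join-command
BASH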

2.3) Actions for Master2 and Master3:

In this step, you only need to run the control-plane (master) kubeadm join command noted in step 2.1 on master2 and master3, as in the sketch below (kubeadm reset was already run on these machines in step 2).
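Again, the token, hash, and certificate key are placeholders taken from your own kubeadm init output.

# Join master2 and master3 back as control-plane nodes through the Virtual IP
sudo kubeadm join <VIRTUAL_IP>:6443 --token <TOKEN> \
    --discovery-token-ca-cert-hash sha256:<HASH> \
    --control-plane --certificate-key <CERTIFICATE_KEY>

# The uploaded certificates expire after two hours; if needed, regenerate
# the certificate key on master1 with:
#   sudo kubeadm init phase upload-certs --upload-certs
BASH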

2.4) Cluster Status Check:

You can see your new cluster information with the commands below.

kubectl cluster-info
kubectl get node -o wide
BASH
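To confirm that the nodes now reach the API server through the Virtual IP, you can also check the server address in the kubelet kubeconfig on any node (the path shown is the kubeadm default).

# The server address should point to the Virtual IP, not to an individual master
sudo grep server: /etc/kubernetes/kubelet.conf
BASH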