
IP Change in Kubernetes Cluster - Virtual IP Usage

In this document, we will reconfigure an existing Kubernetes cluster to be reached through a virtual IP, so that it keeps running with high availability and without interruption. Existing machines:
  • Kubernetes Master1
  • Kubernetes Master2
  • Kubernetes Master3
  • Kubernetes Worker1
  • Kubernetes Worker2

1) Existing System and Virtual IP Access

In the existing system, the Kubernetes cluster was created on Master1 with the kubeadm init command, and the other machines (Master2, Master3, Worker1 and Worker2) then joined this cluster with the worker role. By defining a virtual IP on the load balancer, we need to make the existing machines reachable through this virtual IP on port 6443.
If you are not using a load balancer and cannot create a virtual IP on your network, see the Kubernetes High Availability Cluster Installation page, which shows how to do this with the Keepalived and HAProxy tools: Keepalived creates the virtual IP, and HAProxy handles the load balancing.
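If you go the Keepalived + HAProxy route, the setup can be sketched roughly as follows. This is a minimal sketch, not a complete configuration; the interface name eth0, the virtual IP 10.0.0.100, and the master IPs 10.0.0.11-13 are hypothetical placeholders.

```
# /etc/keepalived/keepalived.conf -- Keepalived owns the virtual IP
vrrp_instance VI_1 {
    state MASTER
    interface eth0              # hypothetical interface name
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        10.0.0.100              # hypothetical virtual IP
    }
}

# /etc/haproxy/haproxy.cfg -- HAProxy load-balances 6443 to the masters
frontend kubernetes-api
    bind 10.0.0.100:6443
    mode tcp
    default_backend kubernetes-masters

backend kubernetes-masters
    mode tcp
    balance roundrobin
    server master1 10.0.0.11:6443 check
    server master2 10.0.0.12:6443 check
    server master3 10.0.0.13:6443 check
```

With this in place, clients reach the API server at 10.0.0.100:6443 regardless of which master is currently serving.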

2) IP Change

When the cluster has multiple master nodes, we first need to reduce it to a single master and reconfigure from there. This usually starts by completely removing the other masters from the cluster. The following command is run on the other masters (master2 and master3), but not on master1.
sudo kubeadm reset
The other masters (master2 and master3) are then deleted from the cluster on the Master1 server.
kubectl delete nodes master2
kubectl delete nodes master3
Master1 and worker servers should remain in the existing system.

2.1) What to do on Master1 server:

Kubelet and containerd (and the Docker service, if Docker is used) are stopped.
sudo systemctl stop kubelet
sudo systemctl stop containerd
The relevant directories are backed up, the CA material is restored from the backup, and the certificates that embed the old endpoint address (the apiserver and etcd peer certificates) are deleted.
sudo mv -f /etc/kubernetes /etc/kubernetes.backup
sudo mv -f /var/lib/kubelet /var/lib/kubelet.backup
sudo mkdir -p /etc/kubernetes/pki
sudo cp -r /etc/kubernetes.backup/pki /etc/kubernetes
sudo rm -rf /etc/kubernetes/pki/{apiserver.*,etcd/peer.*}
sudo rm -f ~/.kube/config
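The reason the apiserver.* and etcd/peer.* certificates are deleted is that they embed the node's endpoint addresses in their Subject Alternative Names, so they must be regenerated for the new virtual IP. A throwaway certificate illustrates this; the IP 192.0.2.10 is a hypothetical placeholder and requires OpenSSL 1.1.1 or newer.

```shell
# Generate a throwaway self-signed cert with an IP SAN, the same way
# kube-apiserver certificates carry their endpoint IPs.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -subj "/CN=kube-apiserver" \
  -addext "subjectAltName=IP:192.0.2.10"

# Inspect the embedded SAN; a cert issued for the old IP would show that
# old IP here, which is why it cannot be reused after the endpoint changes.
openssl x509 -in /tmp/demo.crt -noout -ext subjectAltName
```

When kubeadm init runs again, it regenerates these certificates with the new control-plane endpoint in their SANs.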
Containerd (and the Docker service, if Docker is used) is started.
sudo systemctl start containerd
The kubeadm init command is run again with the control-plane endpoint address set to the virtual IP.
sudo kubeadm init --pod-network-cidr "10.244.0.0/16" --control-plane-endpoint <VIRTUAL_IP> --upload-certs --ignore-preflight-errors=DirAvailable--var-lib-etcd
Note: Save the kubeadm join commands printed at the end of the init output; they are needed in the following steps.
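The init output prints two join commands: one for control-plane nodes and one for workers. They typically look like the sketch below; all values are hypothetical placeholders from a sample run, so use the ones printed by your own kubeadm init.

```shell
# Hypothetical placeholder values -- copy the real ones from your init output.
VIRTUAL_IP="10.0.0.100"
TOKEN="abcdef.0123456789abcdef"
CA_HASH="sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef"
CERT_KEY="f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07"

# Control-plane (master) join command, for master2 and master3:
echo "kubeadm join ${VIRTUAL_IP}:6443 --token ${TOKEN} --discovery-token-ca-cert-hash ${CA_HASH} --control-plane --certificate-key ${CERT_KEY}" | tee /tmp/join-master.sh

# Worker join command, for worker1 and worker2:
echo "kubeadm join ${VIRTUAL_IP}:6443 --token ${TOKEN} --discovery-token-ca-cert-hash ${CA_HASH}" | tee /tmp/join-worker.sh
```

If the bootstrap token expires (24 hours by default), a fresh worker join command can be printed on master1 with kubeadm token create --print-join-command.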

2.2) What to do for Worker1 and Worker2:

Kubelet and containerd (and the Docker service, if Docker is used) are stopped.
sudo systemctl stop kubelet
sudo systemctl stop containerd
Backups of some files are taken.
sudo mv -f /etc/kubernetes /etc/kubernetes.backup
sudo mv -f /var/lib/kubelet /var/lib/kubelet.backup
Containerd (and the Docker service, if Docker is used) is started first, and then kubelet.
sudo systemctl start containerd
sudo systemctl start kubelet
The kubeadm worker join command noted in step 2.1 is run on the worker1 and worker2 machines.

2.3) What to do for Master2 and Master3:

In this step, it is sufficient to run the master (control-plane) join command noted in step 2.1 on master2 and master3.

2.4) Cluster Status Check:

You can see your new cluster information with the following commands.
kubectl cluster-info
kubectl get node -o wide
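As a quick sanity check, you can confirm that every node reports Ready. The sketch below runs the check against a captured sample of kubectl get node output (the node names, ages, and versions are hypothetical); against a live cluster you would pipe kubectl get node directly.

```shell
# Sample output captured from `kubectl get node` (hypothetical values).
cat > /tmp/nodes.txt <<'EOF'
NAME      STATUS   ROLES           AGE   VERSION
master1   Ready    control-plane   10m   v1.28.2
master2   Ready    control-plane   5m    v1.28.2
master3   Ready    control-plane   5m    v1.28.2
worker1   Ready    <none>          3m    v1.28.2
worker2   Ready    <none>          3m    v1.28.2
EOF

# Count nodes whose STATUS column is not "Ready" (skip the header line).
NOT_READY=$(tail -n +2 /tmp/nodes.txt | awk '$2 != "Ready"' | wc -l | tr -d ' ')
echo "Nodes not Ready: ${NOT_READY}"
```

On a live cluster the same check is kubectl get node --no-headers | awk '$2 != "Ready"', which should print nothing once all five machines have rejoined through the virtual IP.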