This document describes installing Kubernetes version 1.29.3 on servers running the Ubuntu operating system. Ubuntu 22.04 LTS is recommended.


Pre-Installation Checks

Very Important

Before starting the installation, confirm with the system administrators that the servers are on the same network and in the same virtualization environment.

Very Important

Before starting the installation, make sure that no server's hostname is localhost.localdomain and that each hostname is unique (check with the hostname command). If either condition is not met, change the hostname before starting the operations.


# (If necessary) Change Hostname

sudo hostnamectl set-hostname your-new-hostname
BASH

There should be a 127.0.1.1 entry in the /etc/hosts file.
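For reference, a minimal /etc/hosts might look like the following; the hostnames (k8s-master, k8s-worker1) and IP addresses are placeholders, so use your own values:

127.0.0.1    localhost
127.0.1.1    k8s-master       # this server's own hostname
10.0.0.11    k8s-master
10.0.0.12    k8s-worker1
BASH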

There should be no "nameserver 127.0.1.1" entry in the /etc/resolv.conf file.


Very Important

If a proxy is required to access the internet, the following commands should be run.


# Run the following on the Linux shell:

export http_proxy=http://proxyIp:port/ 
export https_proxy=http://proxyIp:port/
export no_proxy=localhost,127.0.0.1,SERVERIP,*.hostname
BASH


# Add the settings below to the following files:


sudo vi /etc/apt/apt.conf

Acquire::http::Proxy "http://username:password@proxyIp:port";
Acquire::https::Proxy "https://username:password@proxyIp:port";
BASH

sudo vi /etc/systemd/system/docker.service.d/proxy.conf

[Service]
Environment="HTTP_PROXY=http://proxyIp:port"
Environment="HTTPS_PROXY=https://proxyIp:port"
Environment="NO_PROXY=localhost,127.0.0.1,::1,SERVERIP,*.hostname"
BASH
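After editing the drop-in file, systemd must be reloaded and the affected service restarted for the proxy settings to take effect. If containerd is used as the runtime (as later in this guide), the same drop-in can be placed under /etc/systemd/system/containerd.service.d/ instead:

sudo systemctl daemon-reload
sudo systemctl restart docker      # or: sudo systemctl restart containerd
BASH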

Important for Installation

In order for the installation to proceed smoothly, the Apinizer Kubernetes servers must be able to access the following addresses.

To access Docker Images:

*.docker.com
*.docker.io


Kubernetes package repository (used in section 2.2):

https://pkgs.k8s.io


Kubernetes network plugin (Flannel) and Dashboard:

https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml


SSL Inspection must be turned off on the firewall for the addresses below.

k8s.gcr.io
registry-1.docker.io
hub.docker.com
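Reachability of these endpoints (through the proxy, if one is configured) can be spot-checked with curl; the exact URLs below are only examples:

curl -Iv https://registry-1.docker.io/v2/
curl -Iv https://pkgs.k8s.io
BASH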


If not all traffic between servers is to be allowed, the following ports must be opened individually (a quick reachability check is shown after the list):

6443/tcp          # Kubernetes API server
2379-2380/tcp     # etcd server client API
10250/tcp         # Kubelet API
10251/tcp         # kube-scheduler
10252/tcp         # kube-controller-manager
8285/udp          # Flannel
8472/udp          # Flannel

30000-32767       # Applications on Kubernetes (NodePort range)
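A port's reachability can then be spot-checked from another server with netcat; <MASTER_SERVER_IP> is a placeholder for the actual address:

nc -zv <MASTER_SERVER_IP> 6443
BASH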


Important

While updating packages, Ubuntu tries to pull from the mirror located in Turkey. However, tr.archive.ubuntu.com occasionally has problems. In that case, you need to make the following change.

sudo vi /etc/apt/sources.list

# Replace all addresses starting with tr. (e.g., with your editor's "Replace All" function).

#Example: 

Old: http://tr.archive.ubuntu.com/ubuntu

New: http://archive.ubuntu.com/ubuntu
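Alternatively, the replacement can be done in one step with sed; this is a sketch, so review the file afterwards:

sudo sed -i 's|http://tr.archive.ubuntu.com|http://archive.ubuntu.com|g' /etc/apt/sources.list
BASH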

#1) Operating System Configurations (All Servers)


# The Apinizer user is created and authorized
sudo adduser apinizer
sudo usermod -aG sudo apinizer  

# Operations continue as this user
sudo su - apinizer

# It is recommended that the following tools be installed on all servers
sudo apt update
sudo apt install -y curl wget net-tools gnupg2 software-properties-common apt-transport-https ca-certificates


# The firewall is turned off
sudo systemctl stop ufw
sudo systemctl disable ufw  

# Swap is turned off, and the swap line in /etc/fstab is commented out so that it does not come back after a reboot
sudo swapoff -a
sudo vi /etc/fstab
# Comment out the line containing "swap", then save and close the file (:wq)
BASH
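If preferred, the swap line can be commented out non-interactively instead of editing the file by hand; a minimal sketch that comments every fstab line mentioning swap:

# Comment out any line containing " swap " in /etc/fstab
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
BASH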

#2) Kubernetes Installation


#2.1) Container Runtime (containerd) Installation (To Be Done on All Kubernetes Servers)


sudo tee /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF


# Modules need to be loaded on the running system for them to take effect
sudo modprobe overlay
sudo modprobe br_netfilter

sudo lsmod | grep br_netfilter

# sysctl settings: create the file below and add the lines that follow it
sudo vi /etc/sysctl.d/k8s.conf
BASH
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
net.ipv4.tcp_max_syn_backlog=40000
net.core.somaxconn=40000
net.core.wmem_default=8388608
net.core.rmem_default=8388608
net.ipv4.tcp_sack=1
net.ipv4.tcp_window_scaling=1
net.ipv4.tcp_fin_timeout=15
net.ipv4.tcp_keepalive_intvl=30
net.ipv4.tcp_tw_reuse=1
net.ipv4.tcp_moderate_rcvbuf=1
net.core.rmem_max=134217728
net.core.wmem_max=134217728
net.ipv4.tcp_mem=134217728 134217728 134217728
net.ipv4.tcp_rmem=4096 277750 134217728
net.ipv4.tcp_wmem=4096 277750 134217728
net.core.netdev_max_backlog=300000
YML
# Loading configurations
sudo sysctl --system
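To confirm that the key settings are active after loading, they can be queried directly (each should report 1):

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
BASH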

sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates
sudo apt update

# Add Docker repo
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

# Install containerd
sudo apt update
sudo apt install -y containerd.io

# Configure containerd and start service
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml

sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

# Restart containerd
sudo systemctl restart containerd
sudo systemctl enable containerd
systemctl status containerd

BASH
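Note that apt-key is deprecated on recent Ubuntu releases. If it is not available, a keyring-based way of adding the Docker repository, mirroring the approach used for the Kubernetes repository in the next step, might look like this sketch:

sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt update
BASH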

#2.2) Kubernetes Installation (On Master and Worker servers)


curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update


# Kubernetes installation 
sudo apt-get install -y kubelet=1.29.3-1.1 kubeadm=1.29.3-1.1 kubectl=1.29.3-1.1
sudo apt-mark hold kubelet kubeadm kubectl

# Check installation and start
kubectl version --client && kubeadm version

sudo systemctl enable kubelet
BASH
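If the key download above fails because /etc/apt/keyrings does not exist (it may be missing on older Ubuntu releases), create the directory first and repeat the commands:

sudo mkdir -p -m 755 /etc/apt/keyrings
BASH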

#2.2.1) Bash Auto-Completion (Optional, On Any Kubernetes Master Server)


This process can speed up the writing of Kubernetes commands:

sudo apt install bash-completion
source /usr/share/bash-completion/bash_completion
echo 'source <(kubectl completion bash)' >>~/.bashrc
echo 'alias k=kubectl' >>~/.bashrc
echo 'complete -o default -F __start_kubectl k' >>~/.bashrc
source ~/.bashrc
BASH


#2.2.2) Creating the Kubernetes Master Server (On Kubernetes Master Servers)


Run the following command to set up a Multi-Master Kubernetes cluster:

## Use the hostname of the master server.
sudo kubeadm init --pod-network-cidr "10.244.0.0/16" --control-plane-endpoint "<MASTER_SERVER_HOSTNAME>" --upload-certs
BASH

Important

If you will not use 10.244.0.0/16 as the IP block for the Kubernetes pods (the podCIDR value), edit the command above accordingly.

To use the Multi-Master structure, the other nodes that will act as Masters should be joined with the following command:

sudo kubeadm join <MASTER_SERVER_HOSTNAME>:6443 --token <XXX> --discovery-token-ca-cert-hash sha256:<YYY> --control-plane --certificate-key <ZZZ>
BASH

Very Important

# If the join command needs to be re-created, the relevant part of the second command's output below should be appended to the output of the first:

kubeadm token create --print-join-command

sudo kubeadm init phase upload-certs --upload-certs


# The result should look something like this:


<Output of the join command in step 1> --control-plane --certificate-key <Key value of the output in step 2>


# If the values are to be generated manually, the following commands are used:

for XXX → kubeadm token list

for YYY → openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

for ZZZ → sudo kubeadm init phase upload-certs --upload-certs


#2.2.3) Setting User Configuration of kubectl Command on Kubernetes Master Server (On Kubernetes Master Servers)


Definitions are made for the user who will run the kubectl commands:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown -R $(id -u):$(id -g) $HOME/.kube
BASH
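Alternatively, for the root user only, the admin kubeconfig can be used directly, as noted in the standard kubeadm output:

export KUBECONFIG=/etc/kubernetes/admin.conf
BASH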

#2.2.4) Install Kubernetes Network Plugin (On Kubernetes Master Servers)


In this guide, we will use the Flannel network add-on. You can choose other supported network add-ons. Flannel is a simple and easy way to configure a layer 3 network architecture for Kubernetes.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
BASH

Important

If you did not use 10.244.0.0/16 as the podCIDR value while initializing the Master, you should download the above YAML file and edit its network settings as well.
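In that case, a sketch of the adjustment is shown below; the field location follows the Flannel manifest, so verify it against the downloaded file:

curl -LO https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Edit the "Network" value inside the net-conf.json section to match your --pod-network-cidr
vi kube-flannel.yml
kubectl apply -f kube-flannel.yml
BASH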

#2.2.5) Using the Master Server as a Worker at the Same Time (Optional)


It is not recommended for production environments.

To add the worker role to the Master

# To add the worker role to all Master nodes (Kubernetes 1.25+ uses the control-plane taint instead of master)
kubectl taint nodes --all node-role.kubernetes.io/control-plane:NoSchedule-

# To add the worker role to a specific Master node
kubectl taint nodes <NODE_NAME> node-role.kubernetes.io/control-plane:NoSchedule-
BASH

To remove the worker role from the Master

# To remove the worker role from all Masters
kubectl taint nodes --all node-role.kubernetes.io/control-plane:NoSchedule

# To remove the worker role from only one Master
kubectl taint nodes <NODE_NAME> node-role.kubernetes.io/control-plane:NoSchedule
BASH
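The current taints on a node can be checked as follows; replace <NODE_NAME> with the actual node name:

kubectl describe node <NODE_NAME> | grep Taints
BASH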

#2.2.6) Registering Kubernetes Worker Servers to the Master (On Kubernetes Worker Servers)


A token is required to join the worker servers to the Master. It is printed during the setup phase on the master node, but if it was missed or you want to view it again, the following command can be used.

On Master Node

sudo kubeadm token create --print-join-command
BASH

On the nodes that will be Workers

sudo kubeadm join <MASTER_SERVER_HOSTNAME>:6443 --token XXX --discovery-token-ca-cert-hash sha256:YYY
BASH

#2.2.7) Installation Check (On Any Kubernetes Master Server )


If the nodes added in addition to the Master can be seen when the following command is run on the Master, the installation has been completed successfully.

If a node does not transition from NotReady to Ready status within two minutes, the problem should be investigated with the command 'kubectl describe node <MASTER_SERVER_HOSTNAME>'.

kubectl get node

NAME         STATUS   ROLES           AGE   VERSION
k8s-master   Ready    control-plane   5d    v1.29.3
k8s-worker1  Ready    <none>          5d    v1.29.3
BASH

#2.3) DNS Test (Optional, On Any Kubernetes Master Server)


https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#inheriting-dns-from-the-node

kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml

kubectl get pods --namespace=kube-system -l k8s-app=kube-dns

kubectl -n kube-system get configmap coredns -oyaml

kubectl exec -i -t dnsutils -- nslookup kubernetes.default

kubectl exec -ti dnsutils -- cat /etc/resolv.conf
BASH
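If DNS is working, the nslookup output should look roughly like the following (assuming the default service CIDR, where the cluster DNS service is 10.96.0.10):

Server:    10.96.0.10
Address:   10.96.0.10#53

Name:      kubernetes.default.svc.cluster.local
Address:   10.96.0.1
BASH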