This document explains the installation of Kubernetes version 1.29.3 on a server with the RHEL (Red Hat Enterprise Linux) 8.x / 9.x operating system. Red Hat 8.6 is the recommended operating system version.

Pre-Installation Checks to be Performed Before Starting the Installation

Very Important

Before starting the installations, make sure that the hostname of the server is not "localhost.localdomain" and that each one is unique (with the hostname command). If this is the case, change it before starting the operations.


# (If necessary) Change Hostname

hostnamectl set-hostname your-new-hostname
BASH

There should be an entry for 127.0.1.1 in the /etc/hosts file.

There should be no "nameserver 127.0.1.1" entry in the /etc/resolv.conf file.
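
These checks can be verified quickly with the commands below (a minimal sketch; the file locations are the distribution defaults):

hostname
grep "127.0.1.1" /etc/hosts
grep "nameserver" /etc/resolv.conf
BASH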


Very Important

If a proxy is required to access the internet, the following commands should be run.


# Run the following on the Linux shell:

export http_proxy=http://proxyIp:port/ 
export https_proxy=http://proxyIp:port/
export no_proxy=localhost,127.0.0.1,SERVERIP,*.hostname
BASH
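
Note that these exports only affect the current shell session. If the proxy settings should survive reboots, one common approach (an assumption; adjust to your environment) is to persist them under /etc/profile.d:

sudo tee /etc/profile.d/proxy.sh <<EOF
export http_proxy=http://proxyIp:port/
export https_proxy=http://proxyIp:port/
export no_proxy=localhost,127.0.0.1,SERVERIP,*.hostname
EOF
BASH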


# Add the entries below to the following files:


sudo vi /etc/dnf/dnf.conf

proxy=http://proxyIp:port
proxy_username=username
proxy_password=password
YML

sudo mkdir -p /etc/systemd/system/containerd.service.d
sudo vi /etc/systemd/system/containerd.service.d/proxy.conf

[Service]
Environment="HTTP_PROXY=http://proxyIp:port"
Environment="HTTPS_PROXY=https://proxyIp:port"
Environment="NO_PROXY=localhost,127.0.0.1,::1,SERVERIP,*.hostname"
YML
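
For the systemd drop-in above to take effect, the daemon configuration must be reloaded and the service restarted (run this once containerd has been installed as described in section 2.1):

sudo systemctl daemon-reload
sudo systemctl restart containerd
BASH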

Important for Installation

For the installation to proceed smoothly, the Apinizer Kubernetes servers must be able to access the following addresses.

To access Docker Images:

https://hub.docker.com


Kubernetes:

https://pkgs.k8s.io/core:/stable:/v1.29/rpm/

https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key


Flannel CNI and Kubernetes Dashboard:

https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml


SSL Inspection must be turned off on the firewall for the addresses below.

k8s.gcr.io
registry-1.docker.io
hub.docker.com


If not all traffic between servers is to be allowed, permissions must be defined for the following ports individually:

6443/tcp          # Kubernetes API server
2379-2380/tcp     # etcd server client API
10250/tcp         # Kubelet API
10251/tcp         # kube-scheduler
10252/tcp         # kube-controller-manager
8285/udp          # Flannel
8472/udp          # Flannel

30000-32767/tcp   # Applications exposed on Kubernetes (NodePort range)
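
Whether a port is reachable between the servers can be tested quickly, for example with telnet (one of the recommended tools installed in section 1; <MASTER_SERVER_IP> is a placeholder):

# Run from a worker server; a successful connection means the port is open
telnet <MASTER_SERVER_IP> 6443
BASH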


#1) Operating System Configurations (All Servers)

# Refresh your cache of enabled yum repositories.
dnf makecache --refresh

# Execute the following dnf command to update your RHEL server.
dnf update -y

# It is recommended that the following tools are installed on all servers.
sudo dnf install -y curl wget telnet zip lsof lvm2 net-tools yum-utils bind-utils device-mapper-persistent-data

# The Apinizer user is created and authorized.  
sudo adduser apinizer
sudo passwd apinizer
sudo usermod -aG wheel apinizer  

# Proceed to operations by switching to the user
su - apinizer

# Stop and disable the firewall
sudo systemctl stop firewalld
sudo systemctl disable firewalld

# Disable SELinux to prevent communication issues on servers
sudo setenforce 0 
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Swap is turned off, and the swap line in /etc/fstab is commented out so that it stays off after a reboot
sudo swapoff -a
sudo vi /etc/fstab
# Comment out the swap line, then save and close the file (:wq)
BASH
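
Alternatively, the swap line can be commented out non-interactively (a sketch that assumes the fstab entry contains the word "swap"):

sudo sed -i '/ swap / s/^/#/' /etc/fstab
BASH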

#2) Kubernetes Installation

#2.1) Container Installation (Will be Done on All Kubernetes Servers)


Before proceeding to Kubernetes installation, the following steps are followed to prepare the system and install containerd.

sudo tee /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF

# Load the modules into the running system
sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl settings 
sudo vi /etc/sysctl.d/k8s.conf
BASH

The first three lines here are mandatory; the others can be adjusted as needed.

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
net.ipv4.tcp_max_syn_backlog=40000
net.core.somaxconn=40000
net.core.wmem_default=8388608
net.core.rmem_default=8388608
net.ipv4.tcp_sack=1
net.ipv4.tcp_window_scaling=1
net.ipv4.tcp_fin_timeout=15
net.ipv4.tcp_keepalive_intvl=30
net.ipv4.tcp_tw_reuse=1
net.ipv4.tcp_moderate_rcvbuf=1
net.core.rmem_max=134217728
net.core.wmem_max=134217728
net.ipv4.tcp_mem=134217728 134217728 134217728
net.ipv4.tcp_rmem=4096 277750 134217728
net.ipv4.tcp_wmem=4096 277750 134217728
net.core.netdev_max_backlog=300000
YML

Apply the settings and install containerd.

#Loading configurations   
sudo sysctl --system   

#Defining the Docker repository   
sudo dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo

sudo dnf makecache

#Install containerd
sudo dnf install -y containerd.io

#Settings for the containerd module
sudo mkdir -p /etc/containerd  

sudo containerd config default | sudo tee /etc/containerd/config.toml
  
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

#Starting the containerd module
sudo systemctl restart containerd

sudo systemctl enable containerd

systemctl status containerd
BASH
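
Whether the configuration took effect can be verified with simple sanity checks:

# SystemdCgroup should now read "true"
grep SystemdCgroup /etc/containerd/config.toml

# Both client and server versions should be reported
sudo ctr version
BASH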

Note

If an error like "package runc-1:1.1.4-1.module+el8.7.0+16520+2db5507d.x86_64 is filtered out by modular filtering" is encountered, the following commands are executed:


sudo yum remove containerd.io && sudo yum remove runc

#2.2) Kubernetes Installation (On Master and Worker servers)


Kubernetes keys and repository addresses are added to the system, then Kubernetes is installed and started.

sudo vi /etc/yum.repos.d/kubernetes.repo
BASH

[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
YML
#Kubernetes installation
sudo dnf makecache

sudo dnf install -y kubeadm-1.29.3-1.1 kubelet-1.29.3-1.1 kubectl-1.29.3-1.1
sudo systemctl enable --now kubelet.service

# Check the installation
kubectl version --client && kubeadm version
BASH
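
To keep these packages from being upgraded accidentally by a routine dnf update, the versionlock plugin can be used (an optional suggestion; the plugin package name may vary between RHEL releases):

sudo dnf install -y python3-dnf-plugin-versionlock
sudo dnf versionlock add kubeadm kubelet kubectl
BASH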

#2.2.1) Installation of Flannel CNI

Kubernetes supports various CNI (Container Network Interface) plugins such as AWS VPC, Azure CNI, Cilium, Calico, Flannel.

This section describes how to install the Flannel CNI plug-in.


sudo mkdir -p /opt/bin

curl -fsSLo /tmp/flannel.tar.gz https://github.com/flannel-io/flannel/releases/download/v0.24.4/flannel-v0.24.4-linux-amd64.tar.gz

sudo tar -xzf /tmp/flannel.tar.gz -C /opt/bin flanneld

sudo chmod +x /opt/bin/flanneld
BASH
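
A quick sanity check that a binary (not an archive) ended up in place:

# The output should report an ELF executable
file /opt/bin/flanneld
BASH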

#2.2.2) Bash Auto-Completion (Optional, On Any Kubernetes Master Server)


This process can speed up the writing of Kubernetes commands.

sudo dnf install -y bash-completion
echo 'source <(kubectl completion bash)' >>~/.bashrc
echo 'alias k=kubectl' >>~/.bashrc
echo 'complete -o default -F __start_kubectl k' >>~/.bashrc
source ~/.bashrc
BASH

#2.2.3) Creating the Kubernetes Master Server (On Kubernetes Master Servers)


The following commands are run to initialize the cluster (this form also supports a Multi-Master setup):

sudo kubeadm config images pull

sudo kubeadm init --pod-network-cidr "10.244.0.0/16" --control-plane-endpoint "<MASTER_SERVER_HOSTNAME>" --upload-certs
BASH

Important

If you will not use 10.244.0.0/16 as the IP block that the Kubernetes pods will take (podCIDR value), you need to edit the above command accordingly.
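
For example, if the pods were to use the hypothetical block 10.100.0.0/16 instead, the command would become:

sudo kubeadm init --pod-network-cidr "10.100.0.0/16" --control-plane-endpoint "<MASTER_SERVER_HOSTNAME>" --upload-certs
BASH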

To use the Multi-Master structure, the other nodes that will become Masters are joined with the following command:

sudo kubeadm join <MASTER_SERVER_HOSTNAME>:6443 --token <XXX> --discovery-token-ca-cert-hash sha256:<YYY> --control-plane --certificate-key <ZZZ>
BASH

Very Important

# If the connection command is to be re-created, the output of the second command below should be added to the first one:

kubeadm token create --print-join-command

sudo kubeadm init phase upload-certs --upload-certs


# The result should look something like this:

<Output of the join command in step 1> --control-plane --certificate-key <Key value of the output in step 2>


#If the code is intended to be generated manually, the following is used:

for XXX → kubeadm token list

for YYY → openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

for ZZZ → sudo kubeadm init phase upload-certs --upload-certs
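
As a sketch, the pieces above can be assembled on the Master into a complete join command (it is assumed here that the certificate key is the last line of the upload-certs output):

TOKEN=$(sudo kubeadm token create)
HASH=$(openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //')
CERTKEY=$(sudo kubeadm init phase upload-certs --upload-certs | tail -1)
echo "sudo kubeadm join <MASTER_SERVER_HOSTNAME>:6443 --token $TOKEN --discovery-token-ca-cert-hash sha256:$HASH --control-plane --certificate-key $CERTKEY"
BASH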

#2.2.4) Setting the User Configuration of the kubectl Command (On Kubernetes Master Servers)


Definitions are made for the user who will run the kubectl commands:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown -R $(id -u):$(id -g) $HOME/.kube
BASH
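
Whether the configuration works can be checked with any kubectl command, for example:

kubectl cluster-info
kubectl get nodes
BASH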

#2.2.5) Install Kubernetes Network Plugin (On Kubernetes Master Servers)


In this guide, we will use the Flannel network add-on. You can choose other supported network add-ons. Flannel is a simple and easy way to configure a layer 3 network architecture for Kubernetes.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
BASH

Important

If you did not use the value 10.244.0.0/16 as podCIDR while initializing the Master, you should download the above yaml file and edit the network settings here as well.
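
A sketch of that approach (10.100.0.0/16 is an illustrative value and must match the podCIDR given to kubeadm init):

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# Change the "Network" value in the net-conf.json section of the ConfigMap
sed -i 's|10.244.0.0/16|10.100.0.0/16|' kube-flannel.yml

kubectl apply -f kube-flannel.yml
BASH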

#2.2.6) Using the Master Server as a Worker at the Same Time (Optional)



It is not recommended for production environments.

To add the worker role to the Master:

# To add worker role to all Master nodes
kubectl taint nodes --all node-role.kubernetes.io/control-plane:NoSchedule-   

# To add worker role to specific Master node
kubectl taint nodes <NODE_NAME> node-role.kubernetes.io/control-plane:NoSchedule-
BASH

To remove the worker role from the Master

# To remove the worker role from all Master nodes  
kubectl taint nodes --all node-role.kubernetes.io/control-plane:NoSchedule   
# To remove the worker role from only one Master node  
kubectl taint nodes <NODE_NAME> node-role.kubernetes.io/control-plane:NoSchedule
BASH
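
The current taints on a node can be listed to confirm the change:

kubectl describe node <NODE_NAME> | grep Taints
BASH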


#2.2.7) Registering Kubernetes Worker Servers to the Master (On Kubernetes Worker Servers)


Token information is required to connect a worker server to the Master. It is printed during the setup phase on the master node, but if it was missed or you want to view it again, the following command can be used.

On the Master Node:

sudo kubeadm token create --print-join-command
BASH

On the Node(s) that will become Workers:

sudo kubeadm join <MASTER_SERVER_HOSTNAME>:6443 --token <XXX> --discovery-token-ca-cert-hash sha256:<YYY>
BASH


#2.2.8) Installation Check (On Any Kubernetes Master Server)


If the node(s) joined in addition to the Master can be seen when the following command is run on the Master, the installation has been completed successfully.

If a node does not transition from NotReady to Ready status within a couple of minutes, the problem should be investigated with the command 'kubectl describe node <NODE_NAME>'.

kubectl get node

NAME         STATUS   ROLES           AGE   VERSION
k8s-master   Ready    control-plane   5d    v1.29.3
k8s-worker1  Ready    <none>          5d    v1.29.3
BASH

#2.3) DNS Test (Optional, On Any Kubernetes Master Server)

https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#inheriting-dns-from-the-node

# Deploy a test pod that contains DNS tools
kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml

# Check that the CoreDNS pods are running
kubectl get pods --namespace=kube-system -l k8s-app=kube-dns

# Inspect the CoreDNS configuration
kubectl -n kube-system get configmap coredns -o yaml

# Resolve the cluster-internal service name from inside the pod
kubectl exec -i -t dnsutils -- nslookup kubernetes.default

# Check the DNS configuration inside the pod
kubectl exec -ti dnsutils -- cat /etc/resolv.conf
BASH
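
If DNS is working, nslookup should resolve kubernetes.default to the ClusterIP of the kubernetes service, which can be cross-checked with:

kubectl get svc kubernetes -n default
BASH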