This document explains the installation of Kubernetes version 1.31.0 on a server running the RHEL (Red Hat Enterprise Linux) 8.x / 9.x operating system. Red Hat 8.6 is recommended.

Pre-Installation Checks to be Performed Before Starting the Installation

Very Important

Before starting the installations, make sure that the hostname of each server is not "localhost.localdomain" and that every hostname is unique (check with the hostname command). If a hostname is "localhost.localdomain" or duplicated, change it before starting the operations.


# (If necessary) Change Hostname

hostnamectl set-hostname your-new-hostname
BASH

There should be an entry for 127.0.1.1 in the /etc/hosts file.

There should be no "nameserver 127.0.1.1" entry in the /etc/resolv.conf file.
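
A quick way to run these checks before continuing (a minimal sketch; adjust to your own environment):

hostname                            # must not be "localhost.localdomain" and must be unique per server
grep 127.0.1.1 /etc/hosts           # review the /etc/hosts entry described above
grep nameserver /etc/resolv.conf    # must not list 127.0.1.1
BASH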


Very Important

If a proxy is required to access the internet, the following commands should be run.


# Run the following on the Linux shell:

export http_proxy=http://proxyIp:port/ 
export https_proxy=http://proxyIp:port/
export no_proxy=localhost,127.0.0.1,SERVERIP,*.hostname
BASH
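
These exports apply only to the current shell session. To make them persistent across sessions, one option is a profile script (a sketch; the file name /etc/profile.d/proxy.sh is just an example):

sudo tee /etc/profile.d/proxy.sh <<'EOF'
export http_proxy=http://proxyIp:port/
export https_proxy=http://proxyIp:port/
export no_proxy=localhost,127.0.0.1,SERVERIP,*.hostname
EOF
BASH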


# Add the settings below to the following files:


sudo vi /etc/dnf/dnf.conf

proxy=http://proxyIp:port/
proxy_username=username
proxy_password=password
POWERSHELL

sudo mkdir -p /etc/systemd/system/containerd.service.d
sudo vi /etc/systemd/system/containerd.service.d/proxy.conf

[Service]
Environment="HTTP_PROXY=http://proxyIp:port"
Environment="HTTPS_PROXY=https://proxyIp:port"
Environment="NO_PROXY=localhost,127.0.0.1,::1,SERVERIP,*.hostname"
POWERSHELL
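
The drop-in file above takes effect only after systemd reloads its configuration. Once containerd is installed later in this guide, reload and restart it; a minimal sketch:

sudo systemctl daemon-reload
sudo systemctl restart containerd
BASH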

#1) Operating System Configurations (All Servers)

# Refresh the cache of enabled yum repositories.
sudo dnf makecache --refresh

# Execute the following dnf command to update your RHEL server.
sudo dnf update -y

# It is recommended that the following tools are installed on all servers.
sudo dnf install -y curl wget telnet zip lsof lvm2 net-tools yum-utils bind-utils  device-mapper-persistent-data

# The Apinizer user is created and authorized.  
sudo adduser apinizer
sudo passwd apinizer
sudo usermod -aG wheel apinizer  

# Proceed to operations by switching to the user
su - apinizer

# Stop and disable the firewall
sudo systemctl stop firewalld
sudo systemctl disable firewalld

# Set SELinux to permissive mode to prevent communication issues between servers
sudo setenforce 0 
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Turn off swap, and comment out the swap line in the /etc/fstab file so that it does not come back after a reboot
sudo swapoff -a
sudo vi /etc/fstab
# Comment out the line containing "swap", then save and close the file (:wq)
BASH
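
If editing /etc/fstab by hand is not preferred, the swap line can also be commented out non-interactively; a sketch that assumes the swap entry contains the word "swap":

sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
BASH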

#2) Kubernetes Installation

#2.1) Container Installation (Will be Done on All Kubernetes Servers)


Before proceeding to Kubernetes installation, the following steps are followed to prepare the system and install containerd.

sudo tee /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF

# Load the modules into the running kernel
sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl settings 
sudo vi /etc/sysctl.d/k8s.conf
BASH

The first three lines here are mandatory; the others can be adjusted as needed.

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
net.ipv4.tcp_max_syn_backlog=40000
net.core.somaxconn=40000
net.core.wmem_default=8388608
net.core.rmem_default=8388608
net.ipv4.tcp_sack=1
net.ipv4.tcp_window_scaling=1
net.ipv4.tcp_fin_timeout=15
net.ipv4.tcp_keepalive_intvl=30
net.ipv4.tcp_tw_reuse=1
net.ipv4.tcp_moderate_rcvbuf=1
net.core.rmem_max=134217728
net.core.wmem_max=134217728
net.ipv4.tcp_mem=134217728 134217728 134217728
net.ipv4.tcp_rmem=4096 277750 134217728
net.ipv4.tcp_wmem=4096 277750 134217728
net.core.netdev_max_backlog=300000
YML

Apply the settings, add the Docker repository, and install containerd.

#Loading configurations   
sudo sysctl --system   

#Defining the Docker repository   
sudo dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo

sudo dnf makecache

#Install containerd
sudo dnf install -y containerd.io

#Configure containerd
sudo mkdir -p /etc/containerd  

sudo containerd config default | sudo tee /etc/containerd/config.toml
  
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

#Start and enable containerd
sudo systemctl restart containerd

sudo systemctl enable containerd

systemctl status containerd
BASH
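
Optionally, verify that the kernel modules, sysctl values, and the cgroup setting took effect before moving on (a minimal sketch):

lsmod | grep -E 'overlay|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
grep SystemdCgroup /etc/containerd/config.toml
BASH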

Note

# If an error such as "package runc-1:1.1.4-1.module+el8.7.0+16520+2db5507d.x86_64 is filtered out by modular filtering" is encountered, the following commands are executed:


sudo yum remove containerd.io
sudo yum remove runc
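
After the conflicting packages are removed, containerd can be installed again with the same command used in the previous step:

sudo dnf install -y containerd.io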

#2.2) Kubernetes Installation (On Master and Worker servers)


The Kubernetes repository definition and signing key are added to the system, then Kubernetes is installed and started.

sudo vi /etc/yum.repos.d/kubernetes.repo
BASH
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/repodata/repomd.xml.key
POWERSHELL
#Kubernetes installation
sudo dnf makecache --refresh
sudo dnf install -y kubeadm-1.31.0 kubelet-1.31.0 kubectl-1.31.0

sudo systemctl enable --now kubelet.service

# Check the installed versions
kubectl version --client && kubeadm version
BASH

#2.2.1) Bash Auto-Completion (Optional, On Any Kubernetes Master Server)


This process can speed up the writing of Kubernetes commands.

sudo yum install -y bash-completion
echo 'source <(kubectl completion bash)' >>~/.bashrc
echo 'alias k=kubectl' >>~/.bashrc
echo 'complete -o default -F __start_kubectl k' >>~/.bashrc
source ~/.bashrc
BASH

#2.2.2) Creating the Kubernetes Master Server (On Kubernetes Master Servers)


The following commands are run to initialize a Multi-Master Kubernetes cluster:

sudo kubeadm config images pull

sudo kubeadm init --pod-network-cidr "10.244.0.0/16" --control-plane-endpoint "<MASTER_SERVER_HOSTNAME>" --upload-certs
BASH

To use the Multi-Master structure, the other nodes that will become Masters should be joined with the following command:

sudo kubeadm join <MASTER_SERVER_HOSTNAME>:6443 --token <XXX> --discovery-token-ca-cert-hash sha256:<YYY> --control-plane --certificate-key <ZZZ>
BASH

Very Important

# If the join command needs to be re-created, the output of the second command below should be appended to the output of the first:

kubeadm token create --print-join-command

sudo kubeadm init phase upload-certs --upload-certs


# The result should look something like this:

<Output of the join command in step 1> --control-plane --certificate-key <Key value of the output in step 2>


#If the join command is to be constructed manually, the following are used (a combined sketch follows the list below):

for XXX → kubeadm token list

for YYY → openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

for ZZZ → sudo kubeadm init phase upload-certs --upload-certs
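
A combined sketch that assembles the control-plane join command from the pieces above (run on a Master node; assumes a valid token already exists, otherwise create one first):

TOKEN=$(kubeadm token list | awk 'NR==2{print $1}')
CA_HASH=$(openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //')
CERT_KEY=$(sudo kubeadm init phase upload-certs --upload-certs | tail -n 1)
echo "sudo kubeadm join <MASTER_SERVER_HOSTNAME>:6443 --token ${TOKEN} --discovery-token-ca-cert-hash sha256:${CA_HASH} --control-plane --certificate-key ${CERT_KEY}"
BASH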

#2.2.3) Setting User Configuration of Kubectl Command on Kubernetes Master Server (On Kubernetes Master Servers)


Definitions are made for the user who will run the kubectl commands:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown -R $(id -u):$(id -g) $HOME/.kube
BASH

Execute the following command to set the KUBECONFIG variable for all sessions.

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile.d/k8s.sh
POWERSHELL
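
To confirm the configuration works, the cluster can be queried (nodes will remain NotReady until the network plugin in the next step is installed):

kubectl cluster-info
kubectl get nodes
BASH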

#2.2.4) Install Kubernetes Network Plugin (On Kubernetes Master Servers)


In this guide, we will use the Flannel network add-on. You can choose other supported network add-ons. Flannel is a simple and easy way to configure a layer 3 network architecture for Kubernetes.


vi kube-flannel.yml
BASH
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    k8s-app: flannel
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "EnableNFTables": false,
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: docker.io/flannel/flannel-cni-plugin:v1.6.0-flannel1
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: docker.io/flannel/flannel:v0.26.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: docker.io/flannel/flannel:v0.26.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
YML

Important

If you did not use the value 10.244.0.0/16 as the podCIDR while initializing the Master, you should edit the Network value in the net-conf.json section of the yaml above accordingly.

kubectl apply -f kube-flannel.yml
BASH
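
After applying the manifest, the Flannel pods and node status can be checked (a minimal sketch):

kubectl -n kube-flannel get pods -o wide
kubectl get nodes
BASH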

#2.2.5) Using the Master Server as a Worker at the Same Time (Optional)



It is not recommended for production environments.

To add the worker role to the Master:

# To add worker role to all Master nodes
kubectl taint nodes --all node-role.kubernetes.io/control-plane:NoSchedule-   

# In some cases it is necessary to write master instead of control-plane
kubectl taint nodes --all node-role.kubernetes.io/master:NoSchedule-


# To add worker role to specific Master node
kubectl taint nodes <NODE_NAME> node-role.kubernetes.io/control-plane:NoSchedule-
BASH

To remove the worker role from the Master

# To remove the worker role from all Master nodes  
kubectl taint nodes --all node-role.kubernetes.io/control-plane:NoSchedule

# In some cases it is necessary to write master instead of control-plane
kubectl taint nodes --all node-role.kubernetes.io/master:NoSchedule
 
# To remove the worker role from only one Master node  
kubectl taint nodes <NODE_NAME> node-role.kubernetes.io/control-plane:NoSchedule
BASH
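
The current taints on a node can be inspected to confirm the change (a minimal sketch):

kubectl describe node <NODE_NAME> | grep -i Taint
BASH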


#2.2.6) Registering Kubernetes Worker Servers to the Master (On Kubernetes Worker Servers)


Token information is required to join a worker server to the Master. It is printed during the setup phase on the master node, but if it was missed or you want to view it again, the following command can be used.

On the Master Node:

sudo kubeadm token create --print-join-command
BASH

On the Node(s) that will become Workers:

sudo kubeadm join <MASTER_SERVER_HOSTNAME>:6443 --token <XXX> --discovery-token-ca-cert-hash sha256:<YYY>
BASH


#2.2.7) Installation Check (On Any Kubernetes Master Server)


If the Worker node(s) added alongside the Master can be seen when the following command is run on the Master, the installation has completed successfully.

If it does not transition from NotReady to Ready status within two minutes, the problem should be investigated with the command 'kubectl describe node <NODE_NAME>'.

kubectl get node

NAME          STATUS   ROLES           AGE   VERSION
k8s-master    Ready    control-plane   5d    v1.31.0
k8s-worker1   Ready    <none>          5d    v1.31.0
BASH

#2.3) DNS Test (Optional, On Any Kubernetes Master Server)

https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#inheriting-dns-from-the-node

kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml

kubectl get pods --namespace=kube-system -l k8s-app=kube-dns

kubectl -n kube-system get configmap coredns -oyaml

kubectl exec -i -t dnsutils -- nslookup kubernetes.default

kubectl exec -ti dnsutils -- cat /etc/resolv.conf
BASH
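
If cluster DNS is working, the nslookup command should resolve kubernetes.default to the cluster's service address. An illustrative result (addresses depend on your service CIDR; the values below assume kubeadm defaults):

Server:    10.96.0.10
Address:   10.96.0.10#53

Name:      kubernetes.default.svc.cluster.local
Address:   10.96.0.1
BASH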