Checks Required Before Starting Installation

Before starting the installation, make sure that each server's hostname is unique and is not localhost.localdomain (check with the hostname command). If it is, change it before starting operations.
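To check the current value on each server, the following commands can be used:
# Display the current hostname; it must be unique and must not be localhost.localdomain
hostname
hostnamectl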

(If Necessary) Changing Hostname

hostnamectl set-hostname your-new-hostname
The /etc/hosts file should not contain a hostname assignment to a loopback IP such as 127.0.1.1, and the /etc/resolv.conf file should not contain an entry like "nameserver 127.0.1.1".
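A quick check for both files can be done with grep; if the commands print nothing, there is no problematic entry:
# Check for loopback hostname and nameserver entries
grep "127.0.1.1" /etc/hosts
grep "nameserver 127.0.1.1" /etc/resolv.conf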
If a Proxy is required for internet access, execute the following commands.
# Run the following in the Linux shell:
export http_proxy=http://proxyIp:port/
export https_proxy=http://proxyIp:port/
export no_proxy=localhost,127.0.0.1,SERVERIP,*.hostname
Add the following settings to the files below:
sudo vi /etc/dnf/dnf.conf
proxy=http://proxyIp:port
proxy_username=username
proxy_password=password
sudo mkdir -p /etc/systemd/system/containerd.service.d/

sudo vi /etc/systemd/system/containerd.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxyIp:port"
Environment="HTTPS_PROXY=https://proxyIp:port"
Environment="NO_PROXY="localhost,127.0.0.1,::1,SERVERIP,*.hostname"
# Reload systemd
sudo systemctl daemon-reload
 
# Restart containerd
sudo systemctl restart containerd
 
# Verify that the settings have been loaded
sudo systemctl show --property=Environment containerd
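To verify that the proxy settings themselves work, an external address can be requested from the shell; download.docker.com is used here only as an example target:
# Should return HTTP response headers if the proxy is reachable
curl -I https://download.docker.com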

1) Operating System Configurations (To be performed on all servers)

# Refresh your cache of enabled yum repositories.
dnf makecache --refresh

# Execute the following dnf command to update your Rocky Linux server.
dnf update -y

# It is recommended that the following tools be installed on all servers.
sudo dnf install -y net-tools yum-utils bind-utils device-mapper-persistent-data lvm2 telnet wget zip curl lsof

# Apinizer user is created and authorized.
sudo adduser apinizer
sudo passwd apinizer
sudo usermod -aG wheel apinizer

# Switch to the user to continue operations
su - apinizer

# Firewall is disabled
sudo systemctl stop firewalld
sudo systemctl disable firewalld
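To confirm that firewalld is stopped and will not start again at boot:
# Expected outputs are inactive and disabled, respectively
sudo systemctl is-active firewalld
sudo systemctl is-enabled firewalld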

# SELinux is set to permissive mode to prevent communication problems between servers.
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
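The change can be verified with the getenforce command:
# Expected output: Permissive
getenforce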

# Swap is disabled, and the swap line in the /etc/fstab file is removed so that it is not re-enabled after a reboot.
sudo swapoff -a
sudo vi /etc/fstab
# Delete or comment out the swap line, then save and close the file (:wq)
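Alternatively, the swap line can be commented out with sed instead of editing the file manually; this sketch assumes the entry contains the word swap and takes a backup first:
sudo cp /etc/fstab /etc/fstab.bak
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab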

2) Kubernetes Installation

2.1) Container Installation (To be performed on all Kubernetes servers)

The following steps prepare the system and install the containerd runtime before proceeding to the Kubernetes installation.
# Load the required kernel modules in the running system
sudo modprobe overlay
sudo modprobe br_netfilter

sudo tee /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF

#sysctl settings 
sudo vi /etc/sysctl.d/k8s.conf
The first three lines here are mandatory; the others can be adjusted as needed.
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
net.ipv4.tcp_max_syn_backlog=40000
net.core.somaxconn=40000
net.core.wmem_default=8388608
net.core.rmem_default=8388608
net.ipv4.tcp_sack=1
net.ipv4.tcp_window_scaling=1
net.ipv4.tcp_fin_timeout=15
net.ipv4.tcp_keepalive_intvl=30
net.ipv4.tcp_tw_reuse=1
net.ipv4.tcp_moderate_rcvbuf=1
net.core.rmem_max=134217728
net.core.wmem_max=134217728
net.ipv4.tcp_mem=134217728 134217728 134217728
net.ipv4.tcp_rmem=4096 277750 134217728
net.ipv4.tcp_wmem=4096 277750 134217728
net.core.netdev_max_backlog=300000
# Load the sysctl configuration
sudo sysctl --system
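The mandatory parameters can be verified after loading; each should report the value 1:
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward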

#Defining Docker repository
sudo dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf makecache

#Installing containerd module
sudo dnf install -y containerd.io

#Configuring the containerd module
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml

sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
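To confirm that the change has been applied:
# Expected output: SystemdCgroup = true
grep SystemdCgroup /etc/containerd/config.toml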

#Starting containerd module
sudo systemctl restart containerd
sudo systemctl enable containerd
systemctl status containerd
If you encounter errors like "package runc-1:1.1.4-1.module+el8.7.0+16520+2db5507d.x86_64 is filtered out by modular filtering", remove the conflicting packages with the following command and then retry the containerd installation.
sudo yum remove -y containerd.io runc

2.2) Kubernetes Installation (On Master and Worker servers)

The Kubernetes signing key and repository address are added to the system, then Kubernetes is installed and started.
sudo vi /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.34/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.34/rpm/repodata/repomd.xml.key
#Kubernetes installation
sudo dnf makecache --refresh
sudo dnf install -y kubeadm kubelet kubectl
sudo systemctl enable --now kubelet.service

#Kubernetes installation check
kubectl version --client && kubeadm version

2.2.1) Bash Auto-Completion (Optional, On any Kubernetes Master Server)

This step can speed up typing Kubernetes commands.
sudo yum install -y bash-completion
echo 'source <(kubectl completion bash)' >>~/.bashrc
echo 'alias k=kubectl' >>~/.bashrc
echo 'complete -o default -F __start_kubectl k' >>~/.bashrc
source ~/.bashrc

2.2.2) Starting Kubernetes Control Plane (Master)

Execute the following command to download the container images required to create the Kubernetes Cluster. Since this Node is the first Node in the cluster, it will be selected as Kubernetes Control Plane.
sudo kubeadm config images pull
sudo kubeadm init --pod-network-cidr "10.244.0.0/16" --control-plane-endpoint "<MASTER_SERVER_HOSTNAME>" --upload-certs
To use a Multi-Master structure, the other nodes that will act as Masters should be joined with the following command.
sudo kubeadm join <MASTER_SERVER_HOSTNAME>:6443 --token <XXX> --discovery-token-ca-cert-hash sha256:<YYY> --control-plane --certificate-key <ZZZ>
Very Important: If the join command needs to be recreated, the output of the second of the following commands should be appended to the output of the first;
kubeadm token create --print-join-command
sudo kubeadm init phase upload-certs --upload-certs
As a result, it should look like the following:
<output of join command in step 1> --control-plane --certificate-key <key value that is the output of step 2>
If the command is to be constructed manually, the following are used to obtain the values:
  • For XXX → kubeadm token list
  • For YYY → openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
  • For ZZZ → sudo kubeadm init phase upload-certs --upload-certs

2.2.3) Setting User Configuration of Kubectl Command on Kubernetes Master Server (On Kubernetes Master Servers)

Make the following settings as the user who will run kubectl commands.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown -R $(id -u):$(id -g) $HOME/.kube
Execute the following command to set the KUBECONFIG variable for all sessions.
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile.d/k8s.sh

2.2.4) Install Kubernetes Network Plugin (On Kubernetes Master Servers)

We will use the Flannel network plugin in this guide. You can choose other supported network plugins. Flannel is a simple and easy way to configure a Layer-3 network structure designed for Kubernetes.
vi kube-flannel.yml
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    k8s-app: flannel
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "EnableNFTables": false,
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: docker.io/flannel/flannel-cni-plugin:v1.6.0-flannel1
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: docker.io/flannel/flannel:v0.26.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: docker.io/flannel/flannel:v0.26.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
If you did not use the value 10.244.0.0/16 as podCIDR when initializing the Master, you should edit the network settings in the yaml file above.
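For example, the Network value in the manifest can be changed with sed; <YOUR_POD_CIDR> below is a placeholder for the CIDR that was given to kubeadm init:
sed -i 's|10.244.0.0/16|<YOUR_POD_CIDR>|g' kube-flannel.yml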
kubectl apply -f kube-flannel.yml
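After applying the manifest, the Flannel pods should reach the Running state on every node within a few minutes:
kubectl -n kube-flannel get pods -o wide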

2.2.5) If Master Server is Also Desired to be Used as Worker (Optional)

This method is not recommended for production environments.
To add the Worker role to a Master
# To make all Masters Workers
kubectl taint nodes --all node-role.kubernetes.io/control-plane:NoSchedule-

# For some cases, master needs to be written instead of control-plane
kubectl taint nodes --all node-role.kubernetes.io/master:NoSchedule-

# To add the Worker role to only one Master
kubectl taint nodes <NODE_NAME> node-role.kubernetes.io/control-plane:NoSchedule-
To remove the Worker role from a Master
# To remove the Worker role from all Masters
kubectl taint nodes --all node-role.kubernetes.io/control-plane:NoSchedule

# For some cases, master needs to be written instead of control-plane
kubectl taint nodes --all node-role.kubernetes.io/master:NoSchedule

# To remove the Worker role from only one Master
kubectl taint nodes <NODE_NAME> node-role.kubernetes.io/control-plane:NoSchedule
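The taints currently applied to a node can be checked as follows; an empty result means the node will accept workloads:
kubectl describe node <NODE_NAME> | grep -i taints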

2.2.6) Registering Kubernetes Worker Servers to Master (On Kubernetes Worker Servers)

A token is needed to connect a Worker server to the Master. It is displayed in the output on the Master node during this installation phase, but if it was missed or you want to view it again, the following command can be used. On the Master Node:
sudo kubeadm token create --print-join-command
On the Node(s) that will be Workers:
sudo kubeadm join <MASTER_SERVER_HOSTNAME>:6443 --token xxx --discovery-token-ca-cert-hash sha256:yyy

2.2.7) Installation Check (On any Kubernetes Master Server)

If the Node added in addition to the Master is also visible when the following command is run from the Master, the installation has completed successfully. If a node does not transition from NotReady to Ready status within a couple of minutes, examine the problem with the "kubectl describe node <NODE_NAME>" command.
kubectl get node
NAME          STATUS   ROLES           AGE   VERSION
k8s-master    Ready    control-plane   5d    v1.34.0
k8s-worker1   Ready    <none>          5d    v1.34.0

2.3) DNS Test (Optional, On any Kubernetes Master Server)

kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
kubectl -n kube-system get configmap coredns -oyaml
kubectl exec -i -t dnsutils -- nslookup kubernetes.default
kubectl exec -ti dnsutils -- cat /etc/resolv.conf
For more information, please review the address at this link.