File to Download

offlineApinizerInstallation-RedHat86.tar (1.61 GB)
https://drive.google.com/file/d/1nTcklCjpmVhPRjSiPMo-XK7nsTNtHv5o/view?usp=share_link

The instructions in the following steps are also available in the relevant files; they are repeated here in full for servers that may have limited internet access.

The command "tar xvzf fileName.tar" or "tar xf fileName.tar" can be used to extract the files.


First, to avoid an automatic subscription error, open the file below and set enabled to 0.

vim /etc/yum/pluginconf.d/subscription-manager.conf
enabled=0

Go to the folder where net-tools-2.0-0.52.20160912git.el8.x86_64.rpm is located.

Install the net-tools package.

yum install -y --cacheonly --skip-broken --disablerepo=* *.rpm
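As a quick check, ifconfig (provided by net-tools) should now be available:

ifconfig -a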

Create Apinizer user

adduser apinizer
usermod -aG wheel apinizer
passwd apinizer
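You can confirm that the user was created and added to the wheel group:

id apinizer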

Stop and disable the Firewall

sudo systemctl stop firewalld
 
sudo systemctl disable firewalld
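As an optional check, the commands below should report inactive and disabled, respectively:

sudo systemctl is-active firewalld
sudo systemctl is-enabled firewalld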

Disable SELinux

Let's disable SELinux to avoid communication problems on servers.

sudo setenforce 0
 
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
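To verify, getenforce should now report Permissive (after the next reboot it will report Disabled, once the config change takes effect):

getenforce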

Disable the Swap

Let's disable swap to avoid communication problems on the nodes. To do this, run the commands below and delete (or comment out) the swap line in the /etc/fstab file.

sudo swapoff -a
sudo vi /etc/fstab             
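After running swapoff, the command below should produce no output, confirming that no swap area remains active:

swapon --show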

IPTables Setup

Then save and close the vi file (:wq). We will continue where we left off with the IPTables settings; add the following lines to /etc/sysctl.d/k8s.conf.

sudo vi /etc/sysctl.d/k8s.conf
 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
net.ipv4.tcp_max_syn_backlog=40000
net.core.somaxconn=40000
net.core.wmem_default=8388608
net.core.rmem_default=8388608
net.ipv4.tcp_sack=1
net.ipv4.tcp_window_scaling=1
net.ipv4.tcp_fin_timeout=15
net.ipv4.tcp_keepalive_intvl=30
net.ipv4.tcp_tw_reuse=1
net.ipv4.tcp_moderate_rcvbuf=1
net.core.rmem_max=134217728
net.core.wmem_max=134217728
net.ipv4.tcp_mem=134217728 134217728 134217728
net.ipv4.tcp_rmem=4096 277750 134217728
net.ipv4.tcp_wmem=4096 277750 134217728
net.core.netdev_max_backlog=300000

Load the br_netfilter module and apply the changes

sudo modprobe br_netfilter
sudo sysctl --system

Verify the br_netfilter Module (→ Reboot)

sudo lsmod | grep br_netfilter
sudo reboot


Docker Installation

The following packages must be removed before starting the Docker installation, since podman and buildah conflict with the Docker packages on RHEL 8.

yum remove podman* -y
yum remove buildah* -y

cd apinizerOffline/docker
 
rpm -ivh --replacefiles --replacepkgs *.rpm
 
# Create required directories
sudo mkdir -p /etc/systemd/system/docker.service.d
 
# Create daemon json config file
sudo tee /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
 
# Start and enable Services
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable docker

sudo usermod -aG docker apinizer
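As an optional check, Docker should report the cgroup driver configured in daemon.json:

sudo docker info --format '{{.CgroupDriver}}'
# Expected output: systemd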

Kubernetes Installation

Go to the apinizerOffline/kubernetes folder and install the RPMs.

cd apinizerOffline/kubernetes
yum install -y --cacheonly --skip-broken --disablerepo=* *.rpm


(On Master and Worker nodes)

Load the Kubernetes images, provided as .tar files, into Docker.

docker load < kube-apiserver_v1.18.20.tar
docker load < kube-proxy_v1.18.20.tar
docker load < kube-controller-manager_v1.18.20.tar
docker load < kube-scheduler_v1.18.20.tar
docker load < pause_3.2.tar
docker load < etcd_3.4.3-0.tar
docker load < coredns_1.6.7.tar
docker load < flannel_v0.13.1-rc2.tar
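You can list the loaded images to confirm that they were all imported:

docker images | grep -E 'kube|etcd|coredns|flannel'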
Then enable and start the kubelet service.

systemctl enable kubelet && systemctl start kubelet


(On Master Node)

sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=<MASTER_SERVER_IP_ADDRESS> --kubernetes-version=v1.18.20

(On Master Node)

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown -R $(id -u):$(id -g) $HOME/.kube
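At this point the master node should be listed (it may report NotReady until the Flannel network below is applied):

kubectl get nodes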
Then apply the Flannel network plugin.

kubectl apply -f kube-flannel.yml

If a single server will act as both master and worker, run the command below.

kubectl taint nodes --all node-role.kubernetes.io/master-

If there are multiple servers, run the command below on the master and execute the command it prints on the other servers.

sudo kubeadm token create --print-join-command
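The printed join command looks like the following; the token and hash values here are placeholders, as they are unique to each cluster:

kubeadm join <MASTER_SERVER_IP_ADDRESS>:6443 --token <TOKEN> \
    --discovery-token-ca-cert-hash sha256:<HASH>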


While in the Kubernetes folder (apinizerOffline/kubernetes), run the following commands.

kubectl apply -f service.yaml
kubectl apply -f adminuser.yaml
kubectl create clusterrolebinding permissive-binding --clusterrole=cluster-admin --user=admin --user=kubelet --group=system:serviceaccounts
kubectl create clusterrolebinding kubernetes-dashboard -n kube-system --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard

MongoDB Installation

cd apinizerOffline/mongodb
yum install -y --cacheonly --skip-broken --disablerepo=* *.rpm

Change the fields shown below in the /etc/mongod.conf file as follows.

sudo vi /etc/mongod.conf
--
net:
  port: 25080
  bindIp: 0.0.0.0
--
replication:
  replSetName: apinizer-replicaset

security:
  authorization: "enabled"

setParameter:
  transactionLifetimeLimitSeconds: 300
--
Start and enable the MongoDB service.

sudo systemctl start mongod
sudo systemctl enable mongod


ReplicaSet Configuration and Authorized User Definition

mongo mongodb://localhost:25080
 
rs.initiate()
rs.status()
use admin
db.createUser(
  {
    user: 'apinizer',
    pwd: '<YOUR_PASSWORD>',
    roles: [ { role: 'root', db: 'admin' } ],
  }
);
 
exit;

If you want to change the password:

use admin
 
db.changeUserPassword("apinizer", passwordPrompt())


Connect to MongoDB using the command below, update the replica set member's host with the server's IP address, and run a few commands to check that everything is working properly.

mongo mongodb://localhost:25080 --authenticationDatabase "admin" -u "apinizer" -p
 
cfg = rs.conf()
cfg.members[0].host = "<MONGO_IP>:25080"
rs.reconfig(cfg)
rs.status()

exit;


Elasticsearch Installation

Configure the system settings for Elasticsearch.

sudo adduser elasticsearch
sudo passwd elasticsearch
 
sudo usermod -aG wheel elasticsearch
 
# ulimit is a shell builtin, so it is run without sudo; the limits.conf entries below make the setting permanent
ulimit -n 65535
  
sudo vi /etc/security/limits.conf
elasticsearch  -  nofile  65535
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
  
sudo sysctl -w vm.swappiness=1
sudo sysctl -w vm.max_map_count=262144
  
sudo vi /etc/sysctl.conf
vm.max_map_count=262144
   
sudo sysctl -p
sudo sysctl vm.max_map_count

Create the folders that will hold the Elasticsearch installation, data, and snapshot files, and set their ownership and permissions.

sudo mkdir /opt/elasticsearch
sudo mkdir /mnt/elastic-data/
sudo mkdir /mnt/elastic-snapdata/
sudo chown -Rf elasticsearch:elasticsearch /opt/elasticsearch
sudo chown -Rf elasticsearch:elasticsearch /mnt/elastic-*/
sudo chmod -Rf 775 /opt/elasticsearch
sudo chmod -Rf 775 /mnt/elastic-*/

Switch to the elasticsearch user, copy the archive from the apinizerOffline/elasticsearch folder to the /opt/elasticsearch/ folder, and extract it.

su elasticsearch
sudo cp ~/apinizerOffline/elasticsearch/elasticsearch-7.9.2-linux-x86_64.tar.gz /opt/elasticsearch/
cd /opt/elasticsearch
sudo tar -xzf elasticsearch-7.9.2-linux-x86_64.tar.gz

Adjust the Java heap settings in jvm.options according to your machine's memory.

sudo vi /opt/elasticsearch/elasticsearch-7.9.2/config/jvm.options

-Xms8g
-Xmx8g

Copy the files from the installation folder and set their ownership and permissions.

sudo cp ~/apinizerOffline/elasticsearch/elasticsearch.yml /opt/elasticsearch/elasticsearch-7.9.2/config/elasticsearch.yml
sudo cp ~/apinizerOffline/elasticsearch/elasticsearch-service.sh /opt/elasticsearch/elasticsearch-7.9.2/bin/elasticsearch-service.sh
sudo cp ~/apinizerOffline/elasticsearch/elasticsearch.service /etc/systemd/system/elasticsearch.service
sudo chown -Rf elasticsearch:elasticsearch /opt/elasticsearch/*
sudo chmod -Rf 775 /opt/elasticsearch/*

In elasticsearch.yml, replace each "<ELASTICSEARCH_IP_ADDRESS>" field with the server's own private IP address.

cluster.name: ApinizerEsCluster

node.name: "<ELASTICSEARCH_IP_ADDRESS>"
node.master: true
node.data: true

network.host: <ELASTICSEARCH_IP_ADDRESS>
http.port: 9200
#detail the private IPs of your nodes:
#to avoid split brain ([Master Eligible Node) / 2 + 1])
  
cluster.initial_master_nodes: ["<ELASTICSEARCH_IP_ADDRESS>"]
  
discovery.seed_hosts: []
path.data: /mnt/elastic-data/
path.repo: /mnt/elastic-snapdata/ 

bootstrap.memory_lock: true
  
http.cors.enabled : true
http.cors.allow-origin : "*"
http.cors.allow-methods : OPTIONS, HEAD, GET, POST, PUT, DELETE
http.cors.allow-headers : X-Requested-With,X-Auth-Token,Content-Type, Content-Length

Reload systemd, then start, check, and enable the service.

sudo systemctl daemon-reload
sudo systemctl start elasticsearch
sudo systemctl status elasticsearch
sudo systemctl enable elasticsearch
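Once the service is up, the cluster health endpoint should return a yellow or green status:

curl http://<ELASTICSEARCH_IP_ADDRESS>:9200/_cluster/health?pretty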


Uploading Apinizer Images to Kubernetes

Go to the folder with Apinizer images and run the following commands.

cd apinizerOffline/apinizerImages
 
docker load < manager_2024.xx.1.tar
docker load < worker_2024.xx.1.tar
docker load < cache_2024.xx.1.tar
docker load < integration_2024.xx.1.tar
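You can confirm that the images are available locally:

docker images | grep apinizercloud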


SETUP CONFIGURATION SETTINGS

After installing all the tools, we can finally move on to the deploy stage.

First, go into the kubernetes folder, edit certain fields in apinizer-deployment.yaml, and then deploy it.

cd apinizerOffline/kubernetes
vi apinizer-deployment.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: apinizer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: manager
  namespace: apinizer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: manager
      version: 'v1'
  template:
    metadata:
      labels:
        app: manager
        version: 'v1'
    spec:
      containers:
        - name: manager
          image: apinizercloud/manager:<APINIZER_VERSION>
          imagePullPolicy: IfNotPresent
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: prod
            - name: SPRING_DATA_MONGODB_DATABASE
              value: apinizerdb
            - name: SPRING_DATA_MONGODB_URI
              value: 'mongodb://<MONGO_USERNAME>:<MONGO_PASSWORD>@<MONGO_IP>:<MONGO_PORT>/?authSource=admin&replicaSet=apinizer-replicaset'
            - name: JAVA_OPTS
              value: ' -Xmx2400m -Xms2400m -Dlog4j.formatMsgNoLookups=true'
          resources:
            requests:
              memory: '3Gi'
              cpu: '1'
            limits:
              memory: '3Gi'
              cpu: '1'
          ports:
            - name: http
              containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: manager
  namespace: apinizer
  labels:
    app: manager
spec:
  selector:
    app: manager
  type: NodePort
  ports:
    - name: http
      port: 8080
      nodePort: 32080

After the apinizer-deployment.yaml file is prepared, run the following command line on your Kubernetes Master server.

kubectl apply -f apinizer-deployment.yaml


After this step, to follow the created pod and examine its logs, run the first command below and use the pod name from its output in the second command.

kubectl get pods -n apinizer
 
kubectl logs <POD_NAME> -n apinizer


After the Apinizer images are deployed to the Kubernetes environment, you need to add the License Key given to you by Apinizer to the database.

To do this, put the License Key into a .js file as follows.

vi license.js
db.general_settings.updateOne(
{"_class":"GeneralSettings"},
{ $set: { licenseKey: '<LICENSE_KEY>'}}
)


Run the created license.js file against the database. A result with matchedCount: 1 is expected.

mongo mongodb://<MONGO_IP>:<MONGO_PORT>/apinizerdb --authenticationDatabase "admin" -u "apinizer" -p "<PASSWORD>" < license.js
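For reference, a successful run with the mongo shell prints a result along these lines (the exact formatting may vary between shell versions):

{ "acknowledged" : true, "matchedCount" : 1, "modifiedCount" : 1 }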