This document describes how to install Apinizer on the RHEL (Red Hat Enterprise Linux) 8.4 operating system.

Please be sure to review the topology examples, and note that MongoDB and Elasticsearch are installed on servers separate from the Kubernetes cluster.

  • MongoDB will be installed as version 6.0.0, configured as a ReplicaSet.
  • Elasticsearch will be installed as version 7.9.2.
  • Kubernetes will be installed as version 1.24.10.



Before Starting the Installation


Important

Please make sure that the installation requirements are met by clicking this link before starting the installations.

#1) Operating System Configurations (All Servers)


# It is recommended that the following tools are installed on all servers. 
sudo yum install -y net-tools yum-utils bind-utils device-mapper-persistent-data lvm2 telnet wget zip curl lsof

# The Apinizer user is created and authorized. 
sudo adduser apinizer
sudo passwd apinizer
sudo usermod -aG wheel apinizer

# Proceed to operations by switching to the user
su - apinizer

# Stop and disable the firewall
sudo systemctl stop firewalld
sudo systemctl disable firewalld

# Disable SELinux to prevent communication issues on servers
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
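
# Verify: getenforce should now report Permissive (Disabled after a reboot)
getenforce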

# Kubernetes, MongoDB, and Elasticsearch all recommend disabling swap on the operating system.
# To turn off swap in the running system
sudo swapoff -a

# Install the dnf versionlock plugin (it will be used later to pin the Kubernetes package versions)
sudo dnf install 'dnf-command(versionlock)'

# Remove or comment out the swap line in the /etc/fstab file so that swap is not enabled again when the system restarts. 
# Then save and close the file in vi (:wq) 
sudo vi /etc/fstab
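# Alternatively, the swap entry can be commented out non-interactively
# (assumes a standard fstab swap line; review the file afterwards):
sudo sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab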
POWERSHELL

#2) Kubernetes Installation


#2.1) Containerd Installation (Will be Done on All Kubernetes Servers)


Before proceeding to Kubernetes installation, the following steps are followed to prepare the system and install containerd.

sudo tee /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF

#For the modules to be installed on the running system
sudo modprobe overlay
sudo modprobe br_netfilter

#sysctl settings 
sudo vi /etc/sysctl.d/k8s.conf
POWERSHELL

The first three lines here are mandatory; the others can be adjusted as needed.

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
net.ipv4.tcp_max_syn_backlog=40000
net.core.somaxconn=40000
net.core.wmem_default=8388608
net.core.rmem_default=8388608
net.ipv4.tcp_sack=1
net.ipv4.tcp_window_scaling=1
net.ipv4.tcp_fin_timeout=15
net.ipv4.tcp_keepalive_intvl=30
net.ipv4.tcp_tw_reuse=1
net.ipv4.tcp_moderate_rcvbuf=1
net.core.rmem_max=134217728
net.core.wmem_max=134217728
net.ipv4.tcp_mem=134217728 134217728 134217728
net.ipv4.tcp_rmem=4096 277750 134217728
net.ipv4.tcp_wmem=4096 277750 134217728
net.core.netdev_max_backlog=300000
YML
#Loading configurations 
sudo sysctl --system
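
#Verify that the mandatory values are active
sudo sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward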

#Defining the Docker repository 
sudo dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo

sudo dnf update  

#Install containerd 
sudo dnf install -y containerd.io

# If the installation fails with "package runc ... is filtered out by modular filtering",
# remove the conflicting packages and repeat the containerd.io installation
sudo yum remove -y containerd.io runc
sudo dnf install -y containerd.io

#Settings for the containerd module 
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
  
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
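
#Verify the change (the output should contain "SystemdCgroup = true")
grep SystemdCgroup /etc/containerd/config.toml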


#Starting the containerd module 
sudo systemctl restart containerd
sudo systemctl enable containerd
systemctl status containerd
POWERSHELL

#2.2) Kubernetes Installation (On Master and Worker servers)


The Kubernetes GPG keys and repository definition are added to the system, then Kubernetes is installed and started.

sudo vi /etc/yum.repos.d/kubernetes.repo

[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg  


# Kubernetes installation 
sudo dnf update
sudo dnf install -y kubeadm-1.24.10 kubelet-1.24.10 kubectl-1.24.10
sudo yum versionlock add kubelet kubeadm kubectl

# Check the installation then start
kubectl version --client && kubeadm version

sudo systemctl enable kubelet
sudo systemctl start kubelet     
POWERSHELL

#2.2.1) Bash Auto-Completion (Optional, On Any Kubernetes Master Server)


This process can speed up the writing of Kubernetes commands.

sudo yum install -y bash-completion bash-completion-extras
locate bash_completion.sh
source /etc/profile.d/bash_completion.sh
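
# (Optional) Completion for kubectl itself can also be enabled for the current user:
echo 'source <(kubectl completion bash)' >> ~/.bashrc
source ~/.bashrc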
POWERSHELL

#2.2.2) Creating the Kubernetes Master Server (On Kubernetes Master Servers)


The following command is run to initialize a Multi-Master Kubernetes cluster:

sudo kubeadm init --pod-network-cidr "10.244.0.0/16" --control-plane-endpoint "MASTERSERVERHOSTNAME" --upload-certs
POWERSHELL

Important

If you will not use 10.244.0.0/16 as the IP block that the Kubernetes pods will take (podCIDR value), you need to edit the above command accordingly.
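
For example, if a different pod network block is going to be used (the 10.32.0.0/16 value below is purely illustrative), the command would be adjusted as follows:

sudo kubeadm init --pod-network-cidr "10.32.0.0/16" --control-plane-endpoint "MASTERSERVERHOSTNAME" --upload-certs
BASH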

To use the Multi-Master structure, the other nodes that will act as Masters should join the cluster with the following command:

sudo kubeadm join <MASTERSERVERHOSTNAME>:6443 --token xxx --discovery-token-ca-cert-hash sha256:yyy --control-plane --certificate-key zzz
BASH

Very Important

#If the connection command is to be re-created, the output of the second command below should be added to the first one;

kubeadm token create --print-join-command

sudo kubeadm init phase upload-certs --upload-certs


#The result should look something like this:

<Output of the join command in step 1> --control-plane --certificate-key <Key value of the output in step 2>
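
#For illustration only (all values below are hypothetical):
# sudo kubeadm join k8s-master:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:1f2e...9a0b --control-plane --certificate-key 7c8d...3e4f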


#If the code is intended to be generated manually, the following is used:

for xxx → kubeadm token list

for yyy → openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

for zzz → sudo kubeadm init phase upload-certs --upload-certs

#2.2.3) Setting User Configuration of Kubectl Command on Kubernetes Master Server (On Kubernetes Master Servers)


Definitions are made for the user who will run the kubectl commands


mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown -R $(id -u):$(id -g) $HOME/.kube
POWERSHELL

#2.2.4) Install Kubernetes Network Plugin (On Kubernetes Master Servers)


In this guide, we will use the Flannel network add-on. You can choose other supported network add-ons. Flannel is a simple and easy way to configure a layer 3 network architecture for Kubernetes.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
POWERSHELL

Important

If you did not use the value 10.244.0.0/16 as podCIDR while initializing the Master, you should download the above yaml file and edit the network settings here as well.
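
As an illustration, assuming 10.32.0.0/16 was used as podCIDR, the manifest could be downloaded and the Network value edited before applying it (the CIDR below is only an example):

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Edit the "Network" value in the net-conf.json section to match your podCIDR
sed -i 's|10.244.0.0/16|10.32.0.0/16|' kube-flannel.yml
kubectl apply -f kube-flannel.yml
BASH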

#2.2.5) Using the Master Server as a Worker at the Same Time (Optional)


It is not recommended for production environments.

To add the worker role to the Master

# To make all Masters Workers 
kubectl taint nodes --all node-role.kubernetes.io/control-plane:NoSchedule-

# To make only one Master a Worker  
kubectl taint nodes MASTERNODENAME node-role.kubernetes.io/control-plane:NoSchedule-
POWERSHELL

To remove the worker role from the Master

# To remove the worker role from all Masters  
kubectl taint nodes --all node-role.kubernetes.io/control-plane:NoSchedule   
# To remove the worker role from only one Master   
kubectl taint nodes MASTERNODENAME node-role.kubernetes.io/control-plane:NoSchedule
POWERSHELL


#2.2.6) Registering Kubernetes Worker Servers to the Master (On Kubernetes Worker Servers)


Token information is required to connect the worker servers to the Master. It is printed during the setup phase on the master node, but if it was missed or you want to view it again, the following command can be used.

On Master Node

sudo kubeadm token create --print-join-command
POWERSHELL

On the nodes that will be Workers

sudo kubeadm join <MASTERSERVERIPADDRESS>:6443 --token xxx --discovery-token-ca-cert-hash sha256:yyy
BASH


#2.2.7) Installation Check (On Any Kubernetes Master Server)


If the newly joined Node appears alongside the Master when the following command is run on the Master, the installation has been completed successfully.

If it does not transition from NotReady to Ready status within two minutes, the problem should be investigated with the command 'kubectl describe node NODENAME'.

kubectl get node

NAME         STATUS   ROLES          AGE   VERSION
k8s-master   Ready    control-plane  5d    v1.24.10
k8s-worker1  Ready    <none>         5d    v1.24.10

BASH

#2.2.8) Defining Kubernetes Permissions (On Kubernetes Master Servers)


By default, Kubernetes is deployed with a minimal RBAC configuration to protect your cluster data. The Kubernetes Dashboard currently only supports logging in with a Bearer Token. Follow the steps below in order.

vi service.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
YML

vi adminuser.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
YML
kubectl apply -f service.yaml

kubectl apply -f adminuser.yaml

kubectl create clusterrolebinding permissive-binding --clusterrole=cluster-admin --user=admin --user=kubelet --group=system:serviceaccounts

kubectl create clusterrolebinding apinizer -n kube-system --clusterrole=cluster-admin --serviceaccount=kube-system:apinizer
POWERSHELL
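
Since the Dashboard login works with a Bearer Token, a token for the admin-user service account can be generated with the command below (available with kubectl 1.24 and later); this is also a quick way to verify the RBAC definitions:

kubectl -n kube-system create token admin-user
BASH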


#2.3) DNS Test (Optional, On Any Kubernetes Master Server)


https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#inheriting-dns-from-the-node

kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml

kubectl get pods --namespace=kube-system -l k8s-app=kube-dns

kubectl -n kube-system get configmap coredns -oyaml

kubectl exec -i -t dnsutils -- nslookup kubernetes.default

kubectl exec -ti dnsutils -- cat /etc/resolv.conf
POWERSHELL

#3) MongoDB Installation


#3.1) Operating System Configuration and Installation of MongoDB Application (On All MongoDB Servers)


The MongoDB GPG key and repository definition are added to the system, then MongoDB is installed.

sudo vi /etc/yum.repos.d/mongodb-org-6.0.repo
POWERSHELL
[mongodb-org-6.0]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/6.0/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-6.0.asc
POWERSHELL
sudo yum install -y mongodb-org
POWERSHELL

#3.2) MongoDB Configurations (On All MongoDB Servers)


MongoDB configuration file is set and MongoDB is started.

Key creation:
sudo mkdir -p /etc/mongodb/keys/

sudo chown -Rf apinizer:apinizer /etc/mongodb/keys
sudo chmod -Rf 700 /etc/mongodb/keys

sudo openssl rand -base64 756 > /etc/mongodb/keys/mongo-key

sudo chmod -Rf 400 /etc/mongodb/keys/mongo-key
sudo chown -Rf mongod:mongod /etc/mongodb
POWERSHELL

You need to add the following parameters to the /etc/mongod.conf file, adjusting them to your environment:

    • storage / wiredTiger
    • replication
    • security
    • setParameter
    • processManagement

The expected state of the relevant configuration file:

systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

storage:
  dbPath: /var/lib/mongo
  wiredTiger:
    engineConfig:
       cacheSizeGB: 2
  journal:
    enabled: true

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: 25080
  bindIp: 0.0.0.0  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.

security:
  authorization: enabled
  keyFile:  /etc/mongodb/keys/mongo-key

replication:
  replSetName: apinizer-replicaset

setParameter:
  transactionLifetimeLimitSeconds: 300
POWERSHELL

Then, the MongoDB application is started.

sudo systemctl start mongod

sudo systemctl enable mongod
POWERSHELL

#3.3) ReplicaSet Configuration and Authorization User Definition (MongoDB Primary Master Server)


Activating Replicaset

mongosh mongodb://localhost:25080   
#(If a connection error occurs at this stage, add the server name and address to /etc/hosts and check that one of the entries for 127.0.0.1 is localhost)
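# Example /etc/hosts entries (the IP address below is illustrative):
#   127.0.0.1    localhost
#   192.168.1.10 mongoDb01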

> rs.initiate()
> rs.status()
POWERSHELL

Creating an authorized user for Apinizer application.

> use admin
> db.createUser(
  {
    user: 'apinizer',
    pwd: 'YOURPASSWORD',
    roles: [ { role: 'root', db: 'admin' } ]  }
 );

> exit;

POWERSHELL

If you want to change the password

use admin

db.changeUserPassword("apinizer", passwordPrompt())
POWERSHELL

Replicaset settings are made.

  mongosh  mongodb://localhost:25080 --authenticationDatabase "admin" -u "apinizer" -p

> cfg = rs.conf()
> cfg.members[0].host = "YOURMONGOIPADDRESS:25080"
> rs.reconfig(cfg)
> rs.status() 
POWERSHELL

Authorize a user on the previously created MongoDB using the following command lines.

mongosh mongodb://localhost:25080

> use admin;

> db.grantRolesToUser('admin', [{ role: 'root', db: 'admin' }])
POWERSHELL

#3.4) MongoDB ReplicaSet Installation on Multiple Servers (On MongoDB Secondary Servers)


After the MongoDB installation, the key file created on the primary node is copied to all nodes and given the same permissions; the key file must be identical on every replica set member.


sudo openssl rand -base64 756 > /home/apinizer/mongo-key
sudo chmod 400 /home/apinizer/mongo-key
sudo chown -R mongod:mongod /home/apinizer/mongo-key
BASH

Copy the mongo-key file to all secondary nodes (mongoDb02, mongoDb03) in the location /home/apinizer/mongo-key
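
A possible way to distribute the key from the primary and to set the same permissions on the secondaries is sketched below (password authentication for the apinizer user on the secondary servers is assumed; sudo is used because the key file is owned by mongod):

sudo scp /home/apinizer/mongo-key apinizer@mongoDb02:/home/apinizer/mongo-key
sudo scp /home/apinizer/mongo-key apinizer@mongoDb03:/home/apinizer/mongo-key

# Then, on mongoDb02 and mongoDb03:
sudo chmod 400 /home/apinizer/mongo-key
sudo chown -R mongod:mongod /home/apinizer/mongo-key
BASH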

The mongod.conf file on all Mongo servers should be as follows.

On Node 1 => mongoDb01

# network interfaces
net:
  port: 25080
  bindIp: "127.0.0.1,mongoDb01,mongoDb02,mongoDb03,k8sWorkerIP"
#security:
security:
  authorization: enabled
  keyFile: /home/apinizer/mongo-key
#replication:
replication:
  replSetName: "apinizer-replicaset"
YML


After restarting the mongod services, the Secondary servers are added from the Primary.

mongosh mongodb://localhost:25080 --authenticationDatabase "admin" -u "apinizer" -p 

> rs.add("mongoDb02:25080")
> rs.add("mongoDb03:25080")
BASH
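
To confirm that all members have joined the replica set, the member states can be listed from the Primary (a minimal check; the exact output format may differ between mongosh versions):

mongosh mongodb://localhost:25080 --authenticationDatabase "admin" -u "apinizer" -p

> rs.status().members.forEach(function(m) { print(m.name + " : " + m.stateStr) })
BASH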

#4) Elasticsearch Installation


#4.1) Operating System Configuration and Installation of Elasticsearch Application (On All Elasticsearch Servers)


The elasticsearch user is created and the file limits in the system are adjusted.

sudo adduser elasticsearch
sudo passwd elasticsearch
sudo usermod -aG wheel elasticsearch

ulimit -n 65535
 
sudo vi /etc/security/limits.conf
elasticsearch  -  nofile  65535
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
 
sudo sysctl -w vm.swappiness=1
sudo sysctl -w vm.max_map_count=262144
 
sudo vi /etc/sysctl.conf
vm.max_map_count=262144
  
sudo sysctl -p
sudo sysctl vm.max_map_count
BASH

#4.2) Elasticsearch Installation (On All Elasticsearch Servers)


Elasticsearch is downloaded and initial settings are made.

su elasticsearch
sudo mkdir /opt/elasticsearch
cd /opt/elasticsearch
sudo wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.9.2-linux-x86_64.tar.gz
sudo tar -xzf elasticsearch-7.9.2-linux-x86_64.tar.gz

sudo chown -Rf elasticsearch:elasticsearch /opt/elasticsearch
sudo chmod -Rf 775 /opt/elasticsearch  

#At this point, pay attention to where the appropriate disk is mounted or tell the system administrators to add the disk to the following path.  
sudo mkdir /mnt/elastic-data/
sudo chown -Rf elasticsearch:elasticsearch /mnt/elastic-data/
sudo chmod -Rf 775 /mnt/elastic-data/ 
BASH

#4.3) Setting Elasticsearch Parameters According to the Environment (On All Elasticsearch Servers)


The following parameters must be adjusted and added according to your environment.

  • cluster.initial_master_nodes
  • network.host
  • node.name


sudo vi /opt/elasticsearch/elasticsearch-7.9.2/config/elasticsearch.yml 
BASH

Important

Here, path.data should be set to the path of the disk that was added to the system for the log data (e.g. /mnt/elastic-data/).

cluster.name: ApinizerEsCluster
#give your node a name (the same as your hostname)
node.name: "YOURIP"
node.master: true
node.data: true
#enter the private IP and port of your node (the same ip as your machine)
network.host: YOURHOSTIP
http.port: 9200
#detail the private IPs of your nodes:
#to avoid split brain ([Master Eligible Node) / 2 + 1])
 
cluster.initial_master_nodes: ["YOURIP"]
 
discovery.seed_hosts: []
path.data: /mnt/elastic-data/
 
bootstrap.memory_lock: true
 
http.cors.enabled : true
http.cors.allow-origin : "*"
http.cors.allow-methods : OPTIONS, HEAD, GET, POST, PUT, DELETE
http.cors.allow-headers : X-Requested-With,X-Auth-Token,Content-Type, Content-Length
BASH


You can set the JVM (Java Virtual Machine) heap size and other JVM parameters used by Elasticsearch as follows.

sudo vi /opt/elasticsearch/elasticsearch-7.9.2/config/jvm.options
BASH

Important

Here, the heap size can be set to up to half of the server's RAM, and this value should not exceed 32 GB.

-Xms8g
-Xmx8g
YML

#4.4) Setting Elasticsearch as Linux Service (On All Elasticsearch Servers)


sudo vi /opt/elasticsearch/elasticsearch-7.9.2/bin/elasticsearch-service.sh
BASH
#!/bin/sh
SERVICE_NAME=elasticsearch
PATH_TO_APP="/opt/elasticsearch/elasticsearch-7.9.2/bin/$SERVICE_NAME"
PID_PATH_NAME="/opt/elasticsearch/elasticsearch-7.9.2/bin/$SERVICE_NAME.pid"
SCRIPTNAME=elasticsearch-service.sh
ES_USER=$SERVICE_NAME
ES_GROUP=$SERVICE_NAME
 
case $1 in
    start)
        echo "Starting $SERVICE_NAME ..."
        if [ ! -f $PID_PATH_NAME ]; then
        mkdir $(dirname $PID_PATH_NAME) > /dev/null 2>&1 || true
            chown $ES_USER $(dirname $PID_PATH_NAME)
            $SUDO $PATH_TO_APP -d -p $PID_PATH_NAME
        echo "Return code: $?"
            echo "$SERVICE_NAME started ..."
        else
            echo "$SERVICE_NAME is already running ..."
        fi
    ;;
    stop)
        if [ -f $PID_PATH_NAME ]; then
            PID=$(cat $PID_PATH_NAME);
            echo "$SERVICE_NAME stopping ..."
            kill -15 $PID;
            echo "$SERVICE_NAME stopped ..."
            rm $PID_PATH_NAME
        else
            echo "$SERVICE_NAME is not running ..."
        fi
    ;;
    restart)
        if [ -f $PID_PATH_NAME ]; then
            PID=$(cat $PID_PATH_NAME);
            echo "$SERVICE_NAME stopping ...";
            kill -15 $PID;
        sleep 1;
            echo "$SERVICE_NAME stopped ...";
            rm -rf $PID_PATH_NAME
            echo "$SERVICE_NAME starting ..."
            mkdir $(dirname $PID_PATH_NAME) > /dev/null 2>&1 || true
            chown $ES_USER $(dirname $PID_PATH_NAME)
            $SUDO $PATH_TO_APP -d -p $PID_PATH_NAME
            echo "$SERVICE_NAME started ..."
         else
            echo "$SERVICE_NAME is not running ..."
        fi
    ;;
  *)
    echo "Usage: $SCRIPTNAME {start|stop|restart}" >&2
    exit 3
    ;;
esac
BASH

The systemd service file is created and edited, then the service is started.

sudo chmod -Rf 775 /opt/elasticsearch/elasticsearch-7.9.2/*

sudo vi /etc/systemd/system/elasticsearch.service
BASH
[Unit]
Description=ElasticSearch Server
After=network.target
After=syslog.target

[Install]
WantedBy=multi-user.target

[Service]
Type=forking
ExecStart=/opt/elasticsearch/elasticsearch-7.9.2/bin/elasticsearch-service.sh start
ExecStop=/opt/elasticsearch/elasticsearch-7.9.2/bin/elasticsearch-service.sh stop
ExecReload=/opt/elasticsearch/elasticsearch-7.9.2/bin/elasticsearch-service.sh restart
LimitNOFILE=65536
LimitMEMLOCK=infinity
User=elasticsearch
BASH
sudo systemctl daemon-reload

sudo systemctl start elasticsearch
sudo systemctl status elasticsearch
sudo systemctl enable elasticsearch
BASH

You can use the following link for a compatible Kibana version

https://www.elastic.co/downloads/past-releases/kibana-oss-7-9-2
POWERSHELL

#5) Apinizer Installation


#5.1) Variables to be Configured Before Deployment


Environment Variables

  • APINIZER_VERSION - Parameter that determines which version of Apinizer you will install. To see the versions → Apinizer Versions
  • SPRING_DATA_MONGODB_DATABASE - Database name that will be used for Apinizer configuration
  • SPRING_DATA_MONGODB_URI - Database URL information that will be used for Apinizer configuration


Example Database Connection Clause

Example : mongodb://apinizer:***@mongoDb01:25080,mongoDb02:25080,mongoDb03:25080/?authSource=admin&replicaSet=apinizer-replicaset
POWERSHELL


  • JAVA_OPTS - Java Memory information used by the Management Console in the operating system
name: JAVA_OPTS
value: ' -Xmx2048m -Xms2048m -Dlog4j.formatMsgNoLookups=true'
POWERSHELL


#5.2) Installation of Apinizer Management Console Application (On One of the Kubernetes Master Servers)


Create a yaml file on your Kubernetes Master server as shown below and save it, changing the values of the variables above to suit your environment.

vi apinizer-deployment.yaml
POWERSHELL
apiVersion: v1
kind: Namespace
metadata:
  name: apinizer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: manager
  namespace: apinizer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: manager
      version: 'v1'
  template:
    metadata:
      labels:
        app: manager
        version: 'v1'
    spec:
      hostAliases:
      - ip: mongodbserver-ipaddress
        hostnames:
        - mongodbserver-hostname
      containers:
        - name: manager
          image: apinizercloud/manager:2023.01.1
          imagePullPolicy: IfNotPresent
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: prod
            - name: SPRING_DATA_MONGODB_DATABASE
              value: apinizerdb
            - name: SPRING_DATA_MONGODB_URI
              value: 'mongodb://apinizer:***@MONGOIPADDRESS:25080/?authSource=admin&replicaSet=apinizer-replicaset'
            - name: JAVA_OPTS
              value: ' -Xmx2400m -Xms2400m -Dlog4j.formatMsgNoLookups=true'
          resources:
            requests:
              memory: '3Gi'
              cpu: '1'
            limits:
              memory: '3Gi'
              cpu: '1'
          ports:
            - name: http
              containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: manager
  namespace: apinizer
  labels:
    app: manager
spec:
  selector:
    app: manager
  type: NodePort
  ports:
    - name: http
      port: 8080
      nodePort: 32080
YML

After preparing the apinizer-deployment.yaml file, run the following command line on your Kubernetes Master server.

kubectl apply -f apinizer-deployment.yaml
BASH

After this step, run the first command below to list the created pod, then take the pod name and use it in the second command to examine its log.

kubectl get pods -n apinizer

kubectl logs PODNAME -n apinizer
BASH


After the Apinizer images are deployed to the Kubernetes environment, you need to add the License Key given to you by Apinizer to the database.


You can update the license information in the database by placing the License Key provided by Apinizer in a .js file as follows.

vi license.js
BASH
db.general_settings.updateOne(
{"_class":"GeneralSettings"},
{ $set: { licenseKey: 'YOURLICENSEKEY'}}
)
POWERSHELL

The created license.js file is run. A result containing matchedCount: 1 is expected.

mongosh mongodb://YOURMONGOIPADDRESS:25080/apinizerdb --authenticationDatabase "admin" -u "apinizer" -p "***" < license.js
POWERSHELL


#5.3) Installation Test (Any Server That Can Access to Kubernetes Workers)


If the installation process was successful, you can access the Apinizer Management Console from the address below.

http://IPADDRESSOFANYWORKERSERVER:32080
POWERSHELL
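
For a quick command-line check (assuming the worker's IP is reachable on port 32080), any HTTP response, such as 200 or a redirect to the login page, indicates that the console is up:

curl -I http://IPADDRESSOFANYWORKERSERVER:32080
BASH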


#5.4) Definition of Log Servers (On the Apinizer Interface)


Apinizer keeps API traffic and metrics in the Elasticsearch database. Elasticsearch Cluster definitions must be made in order to continue with the installation process.

In the Apinizer Administration Console, navigate to the Elasticsearch Clusters page under Administration → Server Management → Elasticsearch Clusters.

To define an Elasticsearch Cluster, you can refer to the Elasticsearch Clusters document.


#5.5) Environment Identification (On Apinizer Interface)


For an API Proxy to be accessible, it must be deployed to at least one Environment. Apinizer allows an API Proxy to be deployed to multiple environments at the same time.

Follow the steps below to define an environment.

To define a new Environment, you can refer to the Environment document.

With the creation of the environment, the Apinizer installation is completed.