This document describes how to install Apinizer on the Ubuntu 22.04 LTS Server operating system.

Please be sure to review the topology examples, and note that MongoDB and Elasticsearch are installed on servers separate from the Kubernetes servers.

  • Ubuntu 22.04 LTS is officially supported by Canonical until April 2027.
  • MongoDB will be installed as a replica set, version 6.0.
  • Elasticsearch will be installed as version 7.9.2.
  • Kubernetes will be installed as version 1.24.


Before Starting the Installation


Important

Before starting the installations, please make sure that the installation requirements at this link are met.

Very Important

Before starting the installations, check with the hostname command that each server's hostname is unique and is not localhost.localdomain. If it is, change it before proceeding with any operations.

#(If necessary) Change the hostname

hostnamectl set-hostname your-new-hostname
BASH

Ensure that the hostname is not mapped to a loopback address such as 127.0.1.1 in the /etc/hosts file.

Make sure that there is no entry in /etc/resolv.conf like "nameserver 127.0.1.1"
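A quick way to verify both settings (a minimal sketch; the hostname should not resolve to a loopback address, and no nameserver should point to 127.0.1.1):

# Check how the hostname is mapped in /etc/hosts
grep "$(hostname)" /etc/hosts

# Check the nameserver entries
grep nameserver /etc/resolv.conf
BASH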

#1) Operating System Configurations (All Servers)


# It is recommended that the following tools be installed on all servers.
sudo apt update
sudo apt install -y curl wget net-tools gnupg2 software-properties-common apt-transport-https ca-certificates

# The Apinizer user is created and authorized.
sudo adduser apinizer
sudo usermod -aG sudo apinizer

# Switch to the apinizer user and continue the operations as that user.
sudo su - apinizer

# The firewall is turned off.
sudo systemctl stop ufw
sudo systemctl disable ufw

# Kubernetes, MongoDB, and Elasticsearch all require swap to be disabled on the operating system.
# Disable swap:
sudo swapoff -a

# The swap line in the /etc/fstab file is deleted or commented out so that swap is not re-enabled after a reboot.
# Then the file is saved and closed (:wq)
sudo vi /etc/fstab
BASH
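If you prefer a non-interactive alternative to editing the file in vi, the swap line can be commented out with sed (a sketch; verify the result with the grep afterwards):

# Comment out any fstab line that mounts swap
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab

# Verify that the swap line is now commented
grep swap /etc/fstab
BASH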

#2) Kubernetes Installation


#2.1) Container Installation (Will be Done on All Kubernetes Servers)


To load the required kernel modules permanently:

sudo tee /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF


# Load the modules into the running system so they take effect without a reboot
sudo modprobe overlay
sudo modprobe br_netfilter
BASH
sudo vi /etc/sysctl.d/k8s.conf
BASH

The first three lines here are mandatory; the others can be adjusted as needed.

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
net.ipv4.tcp_max_syn_backlog=40000
net.core.somaxconn=40000
net.core.wmem_default=8388608
net.core.rmem_default=8388608
net.ipv4.tcp_sack=1
net.ipv4.tcp_window_scaling=1
net.ipv4.tcp_fin_timeout=15
net.ipv4.tcp_keepalive_intvl=30
net.ipv4.tcp_tw_reuse=1
net.ipv4.tcp_moderate_rcvbuf=1
net.core.rmem_max=134217728
net.core.wmem_max=134217728
net.ipv4.tcp_mem=134217728 134217728 134217728
net.ipv4.tcp_rmem=4096 277750 134217728
net.ipv4.tcp_wmem=4096 277750 134217728
net.core.netdev_max_backlog=300000
YML
#Loading configurations
sudo sysctl --system

# Add Docker repo
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

# Install containerd
sudo apt update
sudo apt install -y containerd.io

# Configure containerd and start service
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml

# Change the given line's value to "true"
sudo vi /etc/containerd/config.toml
     SystemdCgroup = true

# restart containerd
sudo systemctl restart containerd
sudo systemctl enable containerd
systemctl status containerd

BASH
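If you prefer, the SystemdCgroup change above can be applied non-interactively with sed instead of vi, before restarting containerd (a sketch; confirm the result with the grep):

# Flip SystemdCgroup from false to true in the default config
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

# Verify the change
grep SystemdCgroup /etc/containerd/config.toml
BASH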

#2.2) Kubernetes Installation (On Master and Worker servers)


The Kubernetes repository key and address are added to the system, then Kubernetes is installed and enabled.

curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/k8s.gpg
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt update  

# Kubernetes installation 
sudo apt -y install kubelet=1.24.10-00 kubeadm=1.24.10-00 kubectl=1.24.10-00
sudo apt-mark hold kubelet kubeadm kubectl

# Check installation and start 
kubectl version --client && kubeadm version

sudo systemctl enable kubelet

BASH

#2.2.1) Bash Auto-Completion (Optional, On Any Kubernetes Master Server)


This step speeds up typing Kubernetes commands by enabling tab completion.

sudo apt install bash-completion
source /usr/share/bash-completion/bash_completion
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null 
BASH


#2.2.2) Creating the Kubernetes Master Server (On Kubernetes Master Servers)


The following commands are run to initialize a Multi-Master Kubernetes cluster:

sudo lsmod | grep br_netfilter

# Use the hostname of the master server. 
sudo kubeadm init --pod-network-cidr "10.244.0.0/16" --control-plane-endpoint "MASTERSERVERHOSTNAME" --upload-certs
BASH

Important

If you will not use 10.244.0.0/16 as the IP block for the Kubernetes pods (the podCIDR value), you need to edit the above command accordingly.

To use the Multi-Master structure, the other nodes that will act as Masters should join with the following command:

sudo kubeadm join <MASTERSERVERHOSTNAME>:6443 --token xxx --discovery-token-ca-cert-hash sha256:yyy --control-plane --certificate-key zzz
BASH

Very Important

#If the join command needs to be re-created, the output of the second command below should be appended to the output of the first:

kubeadm token create --print-join-command

sudo kubeadm init phase upload-certs --upload-certs


#The result should look something like this:

<Output of the join command in step 1> --control-plane --certificate-key <Key value of the output in step 2>


#If the values are intended to be generated manually, the following commands are used:

for xxx → kubeadm token list

for yyy → openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

for zzz → sudo kubeadm init phase upload-certs --upload-certs


#2.2.3) Setting User Configuration of Kubectl Command on Kubernetes Master Server (On Kubernetes Master Servers)


The following configuration is created for the user who will run the kubectl commands:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown -R $(id -u):$(id -g) $HOME/.kube
BASH

#2.2.4) Install Kubernetes Network Plugin (On Kubernetes Master Servers)


In this guide, we will use the Flannel network add-on. You can choose other supported network add-ons. Flannel is a simple and easy way to configure a layer 3 network architecture for Kubernetes.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
BASH

Important

If you did not use the value 10.244.0.0/16 as podCIDR while initializing the Master, you should download the above yaml file and edit its network settings to match, as shown below.
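For example, assuming a hypothetical podCIDR of 10.32.0.0/16, the flow would look like this (a sketch; the Network key lives in the net-conf.json section of the Flannel manifest):

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# In the net-conf.json section, change the default value
#   "Network": "10.244.0.0/16"
# to your own podCIDR, e.g. "Network": "10.32.0.0/16"
vi kube-flannel.yml

kubectl apply -f kube-flannel.yml
BASH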

#2.2.5) Using the Master Server as a Worker at the Same Time (Optional)


It is not recommended for production environments.

To add the worker role to the Master

# To make all Masters as Workers
kubectl taint nodes --all node-role.kubernetes.io/control-plane:NoSchedule-

# To make only one Master as Worker 
kubectl taint nodes MASTERNODENAME node-role.kubernetes.io/control-plane:NoSchedule-
BASH

To remove the worker role from the Master

# To remove the worker role from all Masters 
kubectl taint nodes --all node-role.kubernetes.io/control-plane:NoSchedule

# To remove the worker role from only one Master 
kubectl taint nodes MASTERNODENAME node-role.kubernetes.io/control-plane:NoSchedule
BASH

#2.2.6) Registering Kubernetes Worker Servers to the Master (On Kubernetes Worker Servers)


A token is required to connect the worker servers to the Master. It is printed during the setup phase on the master node, but if it was missed or you want to view it again, the following command can be used.

On the Master Node

sudo kubeadm token create --print-join-command
BASH

On the nodes that will be Workers

sudo kubeadm join <MASTERSERVERIPADDRESS>:6443 --token xxx --discovery-token-ca-cert-hash sha256:yyy
BASH

#2.2.7) Installation Check (On Any Kubernetes Master Server )


If the newly added node appears alongside the Master when the following command is run on the Master, the installation has completed successfully.

If it does not transition from NotReady to Ready status within two minutes, the problem should be investigated with the command 'kubectl describe node NODENAME'.

kubectl get node

NAME         STATUS   ROLES          AGE   VERSION
k8s-master   Ready    control-plane  5d    v1.24.0
ks-worker1   Ready    <none>         5d    v1.24.0
BASH

#2.2.8) Defining Kubernetes Permissions (On Kubernetes Master Servers)


By default, Kubernetes deploys with at least one RBAC configuration to protect your cluster data. Currently, the Dashboard only supports login with a Bearer Token. Follow the steps below in order.

vi service.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
YML

vi adminuser.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
YML

The settings are applied:

kubectl apply -f service.yaml

kubectl apply -f adminuser.yaml

kubectl create clusterrolebinding permissive-binding --clusterrole=cluster-admin --user=admin --user=kubelet --group=system:serviceaccounts

kubectl create clusterrolebinding apinizer -n kube-system --clusterrole=cluster-admin --serviceaccount=kube-system:apinizer
BASH
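If a Bearer Token is needed for the admin-user service account (for example, for Dashboard login), on Kubernetes 1.24 it can be generated as follows (a sketch; tokens created this way expire after a default lifetime):

kubectl -n kube-system create token admin-user
BASH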

#2.3) DNS Test (Optional, On Any Kubernetes Master Server)


https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#inheriting-dns-from-the-node

kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml

kubectl get pods --namespace=kube-system -l k8s-app=kube-dns

kubectl -n kube-system get configmap coredns -oyaml

kubectl exec -i -t dnsutils -- nslookup kubernetes.default

kubectl exec -ti dnsutils -- cat /etc/resolv.conf
BASH
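If cluster DNS is healthy, the nslookup command should print output similar to the following (the addresses will vary by cluster):

Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
BASH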


#3) MongoDB Installation


#3.1) Operating System Configuration and Installation of MongoDB Application (On All MongoDB Servers)


wget http://nz2.archive.ubuntu.com/ubuntu/pool/main/o/openssl/libssl1.1_1.1.1f-1ubuntu2.17_amd64.deb
sudo dpkg -i ./libssl1.1_1.1.1f-1ubuntu2.17_amd64.deb

curl -fsSL https://www.mongodb.org/static/pgp/server-6.0.asc | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/mongodb-6.gpg
echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/6.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-6.0.list

sudo apt update
sudo apt install mongodb-org -y
BASH

#3.2) MongoDB Configurations (On All MongoDB Servers)


Key creation:

sudo mkdir -p /etc/mongodb/keys/

sudo chown -Rf apinizer:apinizer /etc/mongodb/keys
sudo chmod -Rf 700 /etc/mongodb/keys

sudo openssl rand -base64 756 > /etc/mongodb/keys/mongo-key

sudo chmod -Rf 400 /etc/mongodb/keys/mongo-key
sudo chown -Rf mongodb:mongodb /etc/mongodb
BASH

You need to add the following parameters to the /etc/mongod.conf file, adjusting them to your environment:

    • storage / wiredTiger
    • replication
    • security
    • setParameter
    • processManagement

The expected state of the relevant configuration file:

storage:
  dbPath: /var/lib/mongodb
  wiredTiger:
    engineConfig:
       cacheSizeGB: 2

systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

net:
  port: 25080
  bindIp: 0.0.0.0

replication:
  replSetName: apinizer-replicaset

security:
  authorization: enabled
  keyFile:  /etc/mongodb/keys/mongo-key

setParameter:
  transactionLifetimeLimitSeconds: 300

processManagement:
  timeZoneInfo: /usr/share/zoneinfo
YML

Then, the MongoDB application is started.

sudo systemctl enable mongod
sudo systemctl start mongod
BASH
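It is worth verifying that the service started and is listening on the configured port (a sketch):

sudo systemctl status mongod
sudo ss -tlnp | grep 25080
BASH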


#3.3) ReplicaSet Configuration and Authorization User Definition (On the MongoDB Primary Server)


Activating Replicaset

mongosh mongodb://localhost:25080  
#(If a connection error occurs at this stage, add the server name and address to /etc/hosts and check that one of the values for 127.0.0.1 is localhost)

> rs.initiate()
> rs.status()
BASH

Creating an authorized user for the Apinizer application:

> use admin
> db.createUser(
  {
    user: 'apinizer',
    pwd: 'YOURPASSWORD',
    roles: [ { role: 'root', db: 'admin' } ],
    mechanisms: [ 'SCRAM-SHA-1' ]
  }
);

> exit;
BASH
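The new user can be tested by reconnecting with authentication (a sketch; replace YOURPASSWORD with the password you set above):

mongosh "mongodb://apinizer:YOURPASSWORD@localhost:25080/?authSource=admin" --eval 'db.runCommand({ ping: 1 })'
BASH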

Note: The following changes are optional. The MongoDB installation and configuration required for Apinizer are complete at this point.

A user on the previously created MongoDB can be granted roles using the following commands.

mongosh mongodb://localhost:25080

> use admin;

> db.grantRolesToUser('admin', [{ role: 'root', db: 'admin' }])
BASH

If Mongo Hostname or IP needs to be changed

mongosh  mongodb://localhost:25080 --authenticationDatabase "admin" -u "apinizer" -p

> cfg = rs.conf()
> cfg.members[0].host = "YOURMONGOIPADDRESS:25080"
> rs.reconfig(cfg)
> rs.status()
BASH

If you want to change the password

use admin

db.changeUserPassword("apinizer", passwordPrompt())
BASH

#3.4) MongoDB ReplicaSet Installation on Multiple Servers (On MongoDB Secondary Servers)


After the MongoDB installation, the keys folder created on the primary node is copied to all other nodes, and the same permissions are applied.

scp -r /etc/mongodb/keys/ apinizer@mongoDb02:/etc/mongodb/keys

sudo chmod -Rf 400 /etc/mongodb/keys
sudo chown -Rf mongodb:mongodb /etc/mongodb
BASH

Copy the mongo-key file to all secondary nodes (mongoDb02, mongoDb03) at the location /home/apinizer/mongo-key.

The mongod.conf file on all Mongo servers should be as follows.

On Node 1 => mongoDb01

# network interfaces
net:
  port: 25080
  bindIp: "127.0.0.1,mongoDb01,mongoDb02,mongoDb03,k8sWorkerIP"
YML


After the mongod services are restarted, the Secondary servers are added from the Primary:

mongosh mongodb://localhost:25080 --authenticationDatabase "admin" -u "apinizer" -p 

> rs.add("mongoDb02:25080")
> rs.add("mongoDb03:25080")
BASH
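After the members are added, a quick check should show one PRIMARY and two SECONDARY nodes (a sketch, run in the same mongosh session):

> rs.status().members.forEach(m => print(m.name + ' ' + m.stateStr))
BASH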


#4) Elasticsearch Installation


#4.1) Operating System Configuration and Installation of Elasticsearch Application (On All Elasticsearch Servers)


sudo adduser elasticsearch
sudo usermod -aG sudo elasticsearch
 
sudo vi /etc/security/limits.conf
elasticsearch  -  nofile  65535
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
 
sudo sysctl -w vm.swappiness=1
sudo sysctl -w vm.max_map_count=262144
 
sudo vi /etc/sysctl.conf
vm.max_map_count=262144
  
sudo sysctl -p
sudo sysctl vm.max_map_count
BASH

#4.2) Elasticsearch Installation (On All Elasticsearch Servers)

su elasticsearch
sudo mkdir /opt/elasticsearch
cd /opt/elasticsearch
sudo wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.9.2-linux-x86_64.tar.gz
sudo tar -xzf elasticsearch-7.9.2-linux-x86_64.tar.gz

sudo chown -Rf elasticsearch:elasticsearch /opt/elasticsearch
sudo chmod -Rf 775 /opt/elasticsearch

## At this point, pay attention to where the appropriate disk is mounted, or ask the system administrators to mount the disk at the following path
sudo mkdir /mnt/elastic-data/
sudo chown -Rf elasticsearch:elasticsearch /mnt/elastic-data/
sudo chmod -Rf 775 /mnt/elastic-data/ 
BASH

#4.3) Setting Elasticsearch Parameters According to the Environment (On All Elasticsearch Servers)

The following parameters must be adjusted and added according to your environment.

  • cluster.initial_master_nodes
  • network.host
  • node.name



sudo vi /opt/elasticsearch/elasticsearch-7.9.2/config/elasticsearch.yml
BASH

Important

Here, path.data should point to the disk on which Elasticsearch will store its data (the log data written by Apinizer).

cluster.name: ApinizerEsCluster
#give your node a name (the same as your hostname)
node.name: "YOURIP"
node.master: true
node.data: true
#enter the private IP and port of your node (the same ip as your machine)
network.host: YOURHOSTIP
http.port: 9200
#detail the private IPs of your nodes:
#to avoid split brain ((Master Eligible Nodes / 2) + 1)
 
cluster.initial_master_nodes: ["YOURIP"]
 
discovery.seed_hosts: []
path.data: /mnt/elastic-data/
 
bootstrap.memory_lock: true
 
http.cors.enabled : true
http.cors.allow-origin : "*"
http.cors.allow-methods : OPTIONS, HEAD, GET, POST, PUT, DELETE
http.cors.allow-headers : X-Requested-With,X-Auth-Token,Content-Type, Content-Length
YML


You can set the JVM (Java Virtual Machine) heap size and other JVM parameters used by Elasticsearch as follows.


sudo vi /opt/elasticsearch/elasticsearch-7.9.2/config/jvm.options
BASH

Important

Here, the heap size can be up to half of the amount of RAM the operating system has, and this value should not exceed 32 GB.

-Xms8g
-Xmx8g
YML


#4.4) Setting Elasticsearch as Linux Service (On All Elasticsearch Servers)


sudo vi /opt/elasticsearch/elasticsearch-7.9.2/bin/elasticsearch-service.sh
BASH
#!/bin/sh
SERVICE_NAME=elasticsearch
PATH_TO_APP="/opt/elasticsearch/elasticsearch-7.9.2/bin/$SERVICE_NAME"
PID_PATH_NAME="/opt/elasticsearch/elasticsearch-7.9.2/bin/$SERVICE_NAME.pid"
SCRIPTNAME=elasticsearch-service.sh
ES_USER=$SERVICE_NAME
ES_GROUP=$SERVICE_NAME
# SUDO is left empty on purpose; systemd already runs this script as the elasticsearch user
SUDO=""
 
case $1 in
    start)
        echo "Starting $SERVICE_NAME ..."
        if [ ! -f $PID_PATH_NAME ]; then
            mkdir $(dirname $PID_PATH_NAME) > /dev/null 2>&1 || true
            chown $ES_USER $(dirname $PID_PATH_NAME)
            $SUDO $PATH_TO_APP -d -p $PID_PATH_NAME
            echo "Return code: $?"
            echo "$SERVICE_NAME started ..."
        else
            echo "$SERVICE_NAME is already running ..."
        fi
    ;;
    stop)
        if [ -f $PID_PATH_NAME ]; then
            PID=$(cat $PID_PATH_NAME);
            echo "$SERVICE_NAME stopping ..."
            kill -15 $PID;
            echo "$SERVICE_NAME stopped ..."
            rm $PID_PATH_NAME
        else
            echo "$SERVICE_NAME is not running ..."
        fi
    ;;
    restart)
        if [ -f $PID_PATH_NAME ]; then
            PID=$(cat $PID_PATH_NAME);
            echo "$SERVICE_NAME stopping ...";
            kill -15 $PID;
            sleep 1;
            echo "$SERVICE_NAME stopped ...";
            rm -rf $PID_PATH_NAME
            echo "$SERVICE_NAME starting ..."
            mkdir $(dirname $PID_PATH_NAME) > /dev/null 2>&1 || true
            chown $ES_USER $(dirname $PID_PATH_NAME)
            $SUDO $PATH_TO_APP -d -p $PID_PATH_NAME
            echo "$SERVICE_NAME started ..."
         else
            echo "$SERVICE_NAME is not running ..."
        fi
    ;;
  *)
    echo "Usage: $SCRIPTNAME {start|stop|restart}" >&2
    exit 3
    ;;
esac
BASH
sudo chmod -Rf 775 /opt/elasticsearch/elasticsearch-7.9.2/*

sudo vi /etc/systemd/system/elasticsearch.service
BASH
[Unit]
Description=Elasticsearch Server
After=network.target
After=syslog.target

[Service]
Type=forking
ExecStart=/opt/elasticsearch/elasticsearch-7.9.2/bin/elasticsearch-service.sh start
ExecStop=/opt/elasticsearch/elasticsearch-7.9.2/bin/elasticsearch-service.sh stop
ExecReload=/opt/elasticsearch/elasticsearch-7.9.2/bin/elasticsearch-service.sh restart
LimitNOFILE=65536
LimitMEMLOCK=infinity
User=elasticsearch

[Install]
WantedBy=multi-user.target
BASH
sudo systemctl daemon-reload

sudo systemctl start elasticsearch
sudo systemctl status elasticsearch
sudo systemctl enable elasticsearch
BASH
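Cluster health can then be checked over the REST API (a sketch; replace YOURHOSTIP with the network.host value from elasticsearch.yml). A status of green, or yellow on a single-node cluster, is expected:

curl "http://YOURHOSTIP:9200/_cluster/health?pretty"
BASH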

You can use the following link for a compatible Kibana version

https://www.elastic.co/downloads/past-releases/kibana-oss-7-9-2


#5) Apinizer Installation


#5.1) Variables to be Configured Before Deployment


Environment Variables

  • APINIZER_VERSION - Parameter that determines which version of Apinizer you will install. To see the versions → Apinizer Versions
  • SPRING_DATA_MONGODB_DATABASE - Database name that will be used for Apinizer configuration
  • SPRING_DATA_MONGODB_URI - Database URL information that will be used for Apinizer configuration


Example Database Connection String

Example : mongodb://apinizer:***@mongoDb01:25080,mongoDb02:25080,mongoDb03:25080/?authSource=admin&replicaSet=apinizer-replicaset

#In order for Apinizer Manager to connect to the MongoDB database, a host alias must be defined.
spec:
  hostAliases:
  - ip: mongodbserver-ipaddress
    hostnames:
    - mongodbserver-hostname
YML


  • JAVA_OPTS - Java memory settings used by the Management Console
name: JAVA_OPTS
value: ' -Xmx2048m -Xms2048m -Dlog4j.formatMsgNoLookups=true'
YML


#5.2) Installation of Apinizer Management Console Application (On One of the Kubernetes Master Servers)


Create a yaml file on your Kubernetes Master server as shown below and save it, changing the values of the variables above to suit your environment.

vi apinizer-deployment.yaml
BASH
apiVersion: v1
kind: Namespace
metadata:
  name: apinizer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: manager
  namespace: apinizer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: manager
      version: 'v1'
  template:
    metadata:
      labels:
        app: manager
        version: 'v1'
    spec:
      hostAliases:
      - ip: mongodbserver-ipaddress
        hostnames:
        - mongodbserver-hostname
      containers:
        - name: manager
          image: apinizercloud/manager:2023.01.1
          imagePullPolicy: IfNotPresent
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: prod
            - name: SPRING_DATA_MONGODB_DATABASE
              value: apinizerdb
            - name: SPRING_DATA_MONGODB_URI
              value: 'mongodb://apinizer:***@MONGOIPADDRESS:25080/?authSource=admin&replicaSet=apinizer-replicaset'
            - name: JAVA_OPTS
              value: ' -Xmx2400m -Xms2400m -Dlog4j.formatMsgNoLookups=true'
          resources:
            requests:
              memory: '3Gi'
              cpu: '1'
            limits:
              memory: '3Gi'
              cpu: '1'
          ports:
            - name: http
              containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: manager
  namespace: apinizer
  labels:
    app: manager
spec:
  selector:
    app: manager
  type: NodePort
  ports:
    - name: http
      port: 8080
      nodePort: 32080
YML

After preparing the apinizer-deployment.yaml file, run the following command line on your Kubernetes Master server.

kubectl apply -f apinizer-deployment.yaml
BASH
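The rollout can be watched until the Deployment becomes available (a sketch):

kubectl -n apinizer rollout status deployment/manager
BASH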


After this step, run the first command below to find the created pod; then put the pod name into the second command to examine its log.

kubectl get pods -n apinizer

kubectl logs PODNAME -n apinizer
BASH


After the Apinizer images are deployed to the Kubernetes environment, you need to add the License Key given to you by Apinizer to the database.

You can update the license information in the database by putting the License Key provided by Apinizer into a .js file as follows.

vi license.js
BASH
db.general_settings.updateOne(
{"_class":"GeneralSettings"},
{ $set: { licenseKey: 'YOURLICENSEKEY'}}
)
JS


The created license.js file is run. A result indicating that one document was matched (matchedCount: 1) is expected.

mongosh mongodb://MONGOIPADDRESS:25080/apinizerdb --authenticationDatabase "admin" -u "apinizer" -p "***" < license.js
BASH


#5.3) Installation Test (Any Server That Can Access to Kubernetes Workers)


If the installation process was successful, you can access the Apinizer Management Console from the address below.


http://ANYWORKERIPADDRESS:32080
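Reachability can also be checked from the command line (a sketch; any HTTP response indicates the console is up):

curl -I http://ANYWORKERIPADDRESS:32080
BASH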



#5.4) Definition of Log Servers (On the Apinizer Interface)


Apinizer keeps API traffic and metrics in the Elasticsearch database. Elasticsearch Cluster definitions must be made in order to continue with the installation process.

In the Apinizer Administration Console, navigate to Administration → Server Management → Elasticsearch Clusters.

To define an Elasticsearch Cluster, you can refer to the Elasticsearch Clusters document.


#5.5) Environment Identification (On Apinizer Interface)


For an API Proxy to be accessible, it must be deployed to at least one Environment. Apinizer allows an API Proxy to be deployed to multiple Environments at the same time.

Follow the steps below to define an Environment.

To define a new Environment, you can refer to the Environment document.

With the creation of the Environment, the Apinizer installation is complete.