Installation of Apinizer on RHEL 8.4
This document describes how to install Apinizer on the RHEL (Red Hat Enterprise Linux) 8.4 operating system.
Please be sure to review the topology examples, and note that the MongoDB and Elasticsearch applications are installed on servers separate from the Kubernetes servers.
- MongoDB will be installed as a ReplicaSet, version 6.0.0.
- Elasticsearch will be installed as version 7.9.2.
- Kubernetes will be installed as version 1.24.10.
Before Starting the Installation
Important
Please make sure that the installation requirements are met by clicking this link before starting the installations.
#1) Operating System Configurations (All Servers)
POWERSHELL
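The operating system preparation commands are not reproduced above; a minimal sketch of the usual RHEL 8 preparation for a kubeadm-based cluster follows. Disabling swap and setting SELinux to permissive are standard kubeadm prerequisites; disabling firewalld is an assumption suitable only for isolated networks (otherwise open the required Kubernetes ports instead).

```shell
# Disable swap, which kubelet does not support by default
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab

# Set SELinux to permissive mode
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# In isolated networks the firewall can be stopped entirely;
# in production, open the Kubernetes ports instead
sudo systemctl disable --now firewalld
```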
#2) Kubernetes Installation
#2.1) Container Installation (Will be Done on All Kubernetes Servers)
Before proceeding to the Kubernetes installation, the following steps are followed to prepare the system and install containerd.
POWERSHELL
The first three lines here are mandatory; the others can be changed as needed.
YML
POWERSHELL
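The containerd preparation described above can be sketched as follows. These are the standard steps from the Kubernetes container-runtime documentation; using the Docker CE repository as the package source for containerd.io is an assumption.

```shell
# Load the kernel modules containerd and Kubernetes networking need
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# Kernel network settings required by Kubernetes
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl --system

# Install containerd and generate a default configuration
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y containerd.io
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo systemctl enable --now containerd
```

If kubelet is configured with the systemd cgroup driver, `SystemdCgroup = true` should also be set in /etc/containerd/config.toml.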
#2.2) Kubernetes Installation (On Master and Worker servers)
Kubernetes keys and repository addresses are added to the system, then Kubernetes is installed and started.
POWERSHELL
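A sketch of this step, pinned to the 1.24.10 version stated above. The repository definition below is the one that was current for Kubernetes 1.24 (the packages.cloud.google.com repository has since been deprecated in favour of pkgs.k8s.io, so adapt it if installing today):

```shell
# Define the Kubernetes package repository
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

# Install the pinned 1.24.10 packages and start kubelet
sudo yum install -y kubelet-1.24.10 kubeadm-1.24.10 kubectl-1.24.10 --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
```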
#2.2.1) Bash Auto-Completion (Optional, On Any Kubernetes Master Server)
This process can speed up the writing of Kubernetes commands.
POWERSHELL
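A typical way to enable kubectl completion:

```shell
# Install bash-completion and register kubectl's completion script
sudo yum install -y bash-completion
echo 'source <(kubectl completion bash)' >> ~/.bashrc
source ~/.bashrc
```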
#2.2.2) Creating the Kubernetes Master Server (On Kubernetes Master Servers)
The following command is run to set up a multi-master Kubernetes cluster.
POWERSHELL
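A typical form of this initialization command is sketched below. LOADBALANCER_DNS_OR_IP is a placeholder for the control-plane endpoint in front of the masters; --upload-certs is what allows additional masters to join later.

```shell
sudo kubeadm init \
  --control-plane-endpoint "LOADBALANCER_DNS_OR_IP:6443" \
  --upload-certs \
  --pod-network-cidr=10.244.0.0/16
```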
Important: If you will not use 10.244.0.0/16 as the IP block that the Kubernetes pods will use (the podCIDR value), you need to edit the above command accordingly. To use the multi-master structure, the other nodes that will be masters should be joined with the following code.
BASH
Very Important
#If the join command needs to be re-created, the output of the second command below should be appended to the first one;
#If the command is to be generated manually, the following is used: for xxx → for yyy → for zzz
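These two commands can be sketched as follows (standard kubeadm usage; the certificate key printed by the second command is appended to the first command's output as `--control-plane --certificate-key <key>` when joining new masters):

```shell
# First command: prints a fresh worker join command (token + CA cert hash)
kubeadm token create --print-join-command

# Second command: re-uploads the control-plane certificates and prints a new certificate key
sudo kubeadm init phase upload-certs --upload-certs
```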
#2.2.3) Setting User Configuration of Kubectl Command on Kubernetes Master Server (On Kubernetes Master Servers)
Definitions are made for the user who will run the kubectl commands.
POWERSHELL
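These are the standard commands that kubeadm itself prints after a successful init:

```shell
# Copy the cluster admin kubeconfig for the current user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```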
#2.2.4) Install Kubernetes Network Plugin (On Kubernetes Master Servers)
In this guide, we will use the Flannel network add-on. You can choose other supported network add-ons. Flannel is a simple and easy way to configure a layer 3 network architecture for Kubernetes.
POWERSHELL
Important: If you did not use the value 10.244.0.0/16 as podCIDR while initializing the Master, you should download the above yaml file and edit its network settings as well.
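The Flannel manifest can be applied directly from the upstream flannel-io repository (download and edit it first if you changed the podCIDR):

```shell
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
```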
#2.2.5) Using the Master Server as a Worker at the Same Time (Optional)
It is not recommended for production environments.
To add the worker role to the Master:
POWERSHELL
To remove the worker role from the Master
POWERSHELL
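Both operations above can be sketched with kubectl taint commands (on Kubernetes 1.24, kubeadm applies both the control-plane and the legacy master taint; NODENAME is a placeholder):

```shell
# Allow workloads to be scheduled on the control-plane node(s)
kubectl taint nodes --all node-role.kubernetes.io/control-plane- node-role.kubernetes.io/master-

# Restore the taint to stop scheduling workloads on the Master again
kubectl taint nodes NODENAME node-role.kubernetes.io/control-plane=:NoSchedule
```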
#2.2.6) Registering Kubernetes Worker Servers to the Master (On Kubernetes Worker Servers)
Token information is required to connect a worker server to the Master. This is printed during the setup phase on the master node, but if it was missed or you want to view it again, the following command can be used.
On Master Node
POWERSHELL
On the nodes that will be workers:
BASH
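A sketch of this step; the token and hash values are placeholders printed by the first command:

```shell
# On the Master: print the join command, including token and CA cert hash
kubeadm token create --print-join-command

# On each worker: run the printed command, which has this form
sudo kubeadm join MASTER_IP:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```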
#2.2.7) Installation Check (On Any Kubernetes Master Server)
If the node added in addition to the Master can be seen when the following command is run on the Master, the installation has been completed successfully.
If a node does not transition from NotReady to Ready status within two minutes, the problem should be investigated with the command 'kubectl describe node NODENAME'.
BASH
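The check described above:

```shell
kubectl get nodes -o wide
```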
#2.2.8) Defining Kubernetes Permissions (On Kubernetes Master Servers)
By default, Kubernetes deploys with RBAC configuration to protect your cluster data. Currently, the Dashboard only supports logging in with a Bearer Token. Follow the steps below in order.
vi service.yaml
YML
vi adminuser.yaml
YML
POWERSHELL
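The contents of service.yaml and adminuser.yaml are not reproduced above; a commonly used admin-user definition for Bearer Token login looks like the sketch below. The account name `admin-user` and namespace `kube-system` are assumptions, and the token command is the Kubernetes 1.24+ syntax.

```shell
# Create a ServiceAccount bound to cluster-admin
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kube-system
EOF

# Print a Bearer Token for the account
kubectl -n kube-system create token admin-user
```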
#2.3) DNS Test (Optional, On Any Kubernetes Master Server)
POWERSHELL
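A typical in-cluster DNS test (the busybox image choice follows the standard Kubernetes DNS debugging example):

```shell
# Start a temporary pod and resolve the cluster DNS name of the API server
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default
```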
#3) MongoDB Installation
#3.1) Operating System Configuration and Installation of MongoDB Application (On All MongoDB Servers)
MongoDB keys and repository addresses are added to the system, and MongoDB is installed.
POWERSHELL
POWERSHELL
POWERSHELL
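A sketch of this step for the MongoDB 6.0 series named at the top of this document, using the official MongoDB repository for RHEL 8:

```shell
# Define the MongoDB 6.0 repository
cat <<EOF | sudo tee /etc/yum.repos.d/mongodb-org-6.0.repo
[mongodb-org-6.0]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/8/mongodb-org/6.0/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-6.0.asc
EOF

# Install the server, tools and mongosh shell
sudo yum install -y mongodb-org
```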
#3.2) MongoDB Configurations (On All MongoDB Servers)
MongoDB configuration file is set and MongoDB is started.
Key creation:
POWERSHELL
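The key can be created as follows; the /home/apinizer/mongo-key path matches the location used later in this guide, and the mongod owner matches the package's service user:

```shell
# Generate the shared ReplicaSet key file with restrictive permissions
sudo mkdir -p /home/apinizer/mongo-key
sudo sh -c 'openssl rand -base64 756 > /home/apinizer/mongo-key/mongo-key'
sudo chmod 400 /home/apinizer/mongo-key/mongo-key
sudo chown -R mongod:mongod /home/apinizer/mongo-key
```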
You need to add the following parameters to the /etc/mongod.conf file, adjusting them to your environment:
The expected state of the relevant configuration file:
POWERSHELL
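A sketch of the relevant /etc/mongod.conf fragment. The port, key path, and ReplicaSet name below match the values used elsewhere in this guide; binding to 0.0.0.0 is an assumption to adapt to your network:

```yaml
net:
  port: 25080
  bindIp: 0.0.0.0
security:
  keyFile: /home/apinizer/mongo-key/mongo-key
  authorization: enabled
replication:
  replSetName: apinizer-replicaset
```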
Then, the MongoDB application is started.
POWERSHELL
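For example:

```shell
sudo systemctl enable --now mongod
```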
#3.3) ReplicaSet Configuration and Authorization User Definition (MongoDB Primary Master Server)
Activating Replicaset
POWERSHELL
Creating an authorized user for Apinizer application.
POWERSHELL
If you want to change the password
POWERSHELL
Replicaset settings are made.
POWERSHELL
Authorize a user on the previously created MongoDB using the following command lines.
POWERSHELL
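The steps above can be sketched with mongosh as follows; YOURPASSWORD and NEWPASSWORD are placeholders, and the user creation relies on MongoDB's localhost exception before authentication is fully in force:

```shell
# Activate the ReplicaSet (run once on the primary)
mongosh --port 25080 --eval 'rs.initiate()'

# Create the authorized user for the Apinizer application
mongosh --port 25080 --eval 'db.getSiblingDB("admin").createUser({user: "apinizer", pwd: "YOURPASSWORD", roles: [{role: "root", db: "admin"}]})'

# If you want to change the password later
mongosh --port 25080 -u apinizer -p 'YOURPASSWORD' --authenticationDatabase admin \
  --eval 'db.getSiblingDB("admin").changeUserPassword("apinizer", "NEWPASSWORD")'
```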
#3.4) MongoDB ReplicaSet Installation on Multiple Servers (On MongoDB Slave Servers)
After the MongoDB installation, the key folder created on the primary node is copied to all nodes and given the same permissions.
BASH
Copy the mongo-key file to the /home/apinizer/mongo-key location on all secondary nodes (mongoDb02, mongoDb03). On Node 1 => mongoDb01:
YML
After the mongod services are restarted, the secondary servers are connected from the primary.
BASH
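A sketch of these steps; host names match those used above, and the apinizer OS user and password are placeholders:

```shell
# On the primary: copy the key file to each secondary node
scp /home/apinizer/mongo-key/mongo-key apinizer@mongoDb02:/home/apinizer/mongo-key/
scp /home/apinizer/mongo-key/mongo-key apinizer@mongoDb03:/home/apinizer/mongo-key/

# After restarting mongod on all nodes, add the secondaries from the primary
mongosh --port 25080 -u apinizer -p 'YOURPASSWORD' --authenticationDatabase admin \
  --eval 'rs.add("mongoDb02:25080"); rs.add("mongoDb03:25080")'
```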
#4) Elasticsearch Installation
#4.1) Operating System Configuration and Installation of Elasticsearch Application (On All Elasticsearch Servers)
A user is created, and the file limits in the system are adjusted.
BASH
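A sketch of this step; the limit values follow Elasticsearch's documented bootstrap requirements, and the user name `elasticsearch` matches the service script later in this guide:

```shell
# Create a dedicated user and raise the limits Elasticsearch needs
sudo useradd elasticsearch
cat <<EOF | sudo tee -a /etc/security/limits.conf
elasticsearch soft nofile 65536
elasticsearch hard nofile 65536
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
EOF

# Virtual memory setting required by Elasticsearch's mmap-based storage
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
```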
#4.2) Elasticsearch Installation (On All Elasticsearch Servers)
Elasticsearch is downloaded and initial settings are made.
BASH
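A sketch of the download step; the /opt/elasticsearch/elasticsearch-7.9.2 path matches the one used by the service script later in this guide:

```shell
# Download and unpack Elasticsearch 7.9.2 under /opt
sudo mkdir -p /opt/elasticsearch
cd /opt/elasticsearch
sudo curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.9.2-linux-x86_64.tar.gz
sudo tar -xzf elasticsearch-7.9.2-linux-x86_64.tar.gz
sudo chown -R elasticsearch:elasticsearch /opt/elasticsearch
```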
#4.3) Setting Elasticsearch Parameters According to the Environment (On All Elasticsearch Servers)
The following parameters must be adjusted and added according to your environment.
- cluster.initial_master_nodes
- network.host
- node.name
BASH
Important: Here, the path.data address should point to the disk on the system where your log data will be stored.
BASH
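A sketch of the corresponding elasticsearch.yml fragment, assuming a three-node cluster; the cluster name, node names, and paths are placeholders to adapt:

```yaml
cluster.name: apinizer-es-cluster
node.name: es-node-01
network.host: 0.0.0.0
discovery.seed_hosts: ["es-node-01", "es-node-02", "es-node-03"]
cluster.initial_master_nodes: ["es-node-01", "es-node-02", "es-node-03"]
path.data: /opt/elasticsearch/data
path.logs: /opt/elasticsearch/logs
```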
You can set the JVM (Java Virtual Machine) values and other JVM parameters used by Elasticsearch as follows.
BASH
Important: This value can be up to half of the amount of RAM the operating system has, and it should not exceed 32 GB.
YML
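For example, on a server with 16 GB of RAM, half of it can be given to the heap in config/jvm.options (the 8g figure below is an assumption to adapt):

```
# /opt/elasticsearch/elasticsearch-7.9.2/config/jvm.options
-Xms8g
-Xmx8g
```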
#4.4) Setting Elasticsearch as Linux Service (On All Elasticsearch Servers)
sudo vi /opt/elasticsearch/elasticsearch-7.9.2/bin/elasticsearch-service.sh
#!/bin/sh
SERVICE_NAME=elasticsearch
PATH_TO_APP="/opt/elasticsearch/elasticsearch-7.9.2/bin/$SERVICE_NAME"
PID_PATH_NAME="/opt/elasticsearch/elasticsearch-7.9.2/bin/$SERVICE_NAME.pid"
SCRIPTNAME=elasticsearch-service.sh
ES_USER=$SERVICE_NAME
ES_GROUP=$SERVICE_NAME
# Run the application as the elasticsearch user
SUDO="sudo -u $ES_USER"
case $1 in
    start)
        echo "Starting $SERVICE_NAME ..."
        if [ ! -f "$PID_PATH_NAME" ]; then
            mkdir "$(dirname "$PID_PATH_NAME")" > /dev/null 2>&1 || true
            chown "$ES_USER" "$(dirname "$PID_PATH_NAME")"
            $SUDO "$PATH_TO_APP" -d -p "$PID_PATH_NAME"
            echo "Return code: $?"
            echo "$SERVICE_NAME started ..."
        else
            echo "$SERVICE_NAME is already running ..."
        fi
        ;;
    stop)
        if [ -f "$PID_PATH_NAME" ]; then
            PID=$(cat "$PID_PATH_NAME")
            echo "$SERVICE_NAME stopping ..."
            kill -15 "$PID"
            echo "$SERVICE_NAME stopped ..."
            rm "$PID_PATH_NAME"
        else
            echo "$SERVICE_NAME is not running ..."
        fi
        ;;
    restart)
        if [ -f "$PID_PATH_NAME" ]; then
            PID=$(cat "$PID_PATH_NAME")
            echo "$SERVICE_NAME stopping ..."
            kill -15 "$PID"
            sleep 1
            echo "$SERVICE_NAME stopped ..."
            rm -rf "$PID_PATH_NAME"
            echo "$SERVICE_NAME starting ..."
            mkdir "$(dirname "$PID_PATH_NAME")" > /dev/null 2>&1 || true
            chown "$ES_USER" "$(dirname "$PID_PATH_NAME")"
            $SUDO "$PATH_TO_APP" -d -p "$PID_PATH_NAME"
            echo "$SERVICE_NAME started ..."
        else
            echo "$SERVICE_NAME is not running ..."
        fi
        ;;
    *)
        echo "Usage: $SCRIPTNAME {start|stop|restart}" >&2
        exit 3
        ;;
esac
The file for service settings is created, edited and run.
BASH
BASH
BASH
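One way to register the script as a Linux service is a minimal systemd unit wrapping it, as sketched below (the unit name and options are assumptions; Type=forking is used because `elasticsearch -d` daemonizes itself):

```shell
# Create the unit file, then enable and start the service
cat <<'EOF' | sudo tee /etc/systemd/system/elasticsearch.service
[Unit]
Description=Elasticsearch
After=network.target

[Service]
Type=forking
ExecStart=/opt/elasticsearch/elasticsearch-7.9.2/bin/elasticsearch-service.sh start
ExecStop=/opt/elasticsearch/elasticsearch-7.9.2/bin/elasticsearch-service.sh stop
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF

sudo chmod +x /opt/elasticsearch/elasticsearch-7.9.2/bin/elasticsearch-service.sh
sudo systemctl daemon-reload
sudo systemctl enable --now elasticsearch
```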
You can use the following link for a compatible Kibana version.
POWERSHELL
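The Kibana release matching Elasticsearch 7.9.2 can be downloaded with a command of this form (the URL follows Elastic's standard artifact naming):

```shell
curl -L -O https://artifacts.elastic.co/downloads/kibana/kibana-7.9.2-linux-x86_64.tar.gz
```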
#5) Apinizer Installation
#5.1) Variables to be Configured Before Deployment
Environment Variables
APINIZER_VERSION
- Parameter that determines which version of Apinizer you will install. To see the versions → Apinizer Versions
SPRING_DATA_MONGODB_DATABASE
- Database name that will be used for the Apinizer configuration
SPRING_DATA_MONGODB_URI
- Database URL information that will be used for the Apinizer configuration
Example Database Connection Clause
Example : mongodb://apinizer:***@mongoDb01:25080,mongoDb02:25080,mongoDb03:25080/?authSource=admin&replicaSet=apinizer-replicaset
JAVA_OPTS
- Java Memory information used by the Management Console in the operating system
name: JAVA_OPTS
value: ' -Xmx2048m -Xms2048m -Dlog4j.formatMsgNoLookups=true'
#5.2) Installation of Apinizer Management Console Application (On One of the Kubernetes Master Servers)
Create a yaml file on your Kubernetes Master server as shown below, and save it after changing the values of the above variables to suit your environment.
vi apinizer-deployment.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: apinizer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: manager
  namespace: apinizer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: manager
      version: 'v1'
  template:
    metadata:
      labels:
        app: manager
        version: 'v1'
    spec:
      hostAliases:
        - ip: mongodbserver-ipaddress
          hostnames:
            - mongodbserver-hostname
      containers:
        - name: manager
          image: apinizercloud/manager:2023.01.1
          imagePullPolicy: IfNotPresent
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: prod
            - name: SPRING_DATA_MONGODB_DATABASE
              value: apinizerdb
            - name: SPRING_DATA_MONGODB_URI
              value: 'mongodb://apinizer:***@MONGOIPADDRESS:25080/?authSource=admin&replicaSet=apinizer-replicaset'
            - name: JAVA_OPTS
              value: ' -Xmx2400m -Xms2400m -Dlog4j.formatMsgNoLookups=true'
          resources:
            requests:
              memory: '3Gi'
              cpu: '1'
            limits:
              memory: '3Gi'
              cpu: '1'
          ports:
            - name: http
              containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: manager
  namespace: apinizer
  labels:
    app: manager
spec:
  selector:
    app: manager
  type: NodePort
  ports:
    - name: http
      port: 8080
      nodePort: 32080
After preparing the apinizer-deployment.yaml file, run the following command line on your Kubernetes Master server.
kubectl apply -f apinizer-deployment.yaml
After this process, run the first command below to follow the created pod and examine its log; take the pod name from its output and use it in the second command.
kubectl get pods -n apinizer
kubectl logs PODNAME -n apinizer
After the Apinizer images are deployed to the Kubernetes environment, you need to add the License Key given to you by Apinizer to the database.
You can update the license information in the database by placing the License Key provided by Apinizer in a .js file as follows.
vi license.js
db.general_settings.updateOne(
{"_class":"GeneralSettings"},
{ $set: { licenseKey: 'YOURLICENSEKEY'}}
)
The created license.js file is run. A result containing matchedCount: 1 is expected.
mongosh mongodb://YOURMONGOSIPADDRESS:25080/apinizerdb --authenticationDatabase "admin" -u "apinizer" -p "***" < license.js
#5.3) Installation Test (Any Server That Can Access to Kubernetes Workers)
If the installation process was successful, you can access the Apinizer Management Console from the address below.
http://IPADDRESSOFANYWORKERSERVER:32080
#5.4) Definition of Log Servers (On the Apinizer Interface)
Apinizer keeps API traffic and metrics in the Elasticsearch database. Elasticsearch Cluster definitions must be made in order to continue with the installation process.
In the Apinizer Administration Console, navigate to the Elasticsearch Clusters page under Administration → Server Management → Elasticsearch Clusters.
To define an Elasticsearch Cluster, you can refer to the Elasticsearch Clusters document.
#5.5) Environment Identification (On Apinizer Interface)
For an API Proxy to be accessible, it must be deployed to at least one Environment. Apinizer allows an API Proxy to be deployed to multiple Environments at the same time.
Follow the steps below to define the environment.
To define a new Environment, you can refer to the Environment document.
With the creation of the environment, the Apinizer installation is completed.
If the host alias structure is to be used, it can be defined under the hostAliases section of the deployment yaml, as in the example above.