This document explains the installation of the Apinizer API Management Platform.

1) Introduction

Apinizer is an application that runs on the Kubernetes Platform and consists of three components: the API Manager, the Gateway Server, and the Cache Server.

API Manager: The API Manager is a web-based management interface that allows the definition of APIs, policies, users, credentials, and configurations. It also provides the capability to view and analyze API traffic and analytics data.

Gateway Server: The Gateway Server is the most critical component of Apinizer. It serves as the entry point for incoming client requests and functions as the Policy Enforcement Point (PEP). It processes incoming requests in accordance with defined policies, directing them to the appropriate Backend API/Service. Moreover, it can act as a load balancer during routing, and TLS/SSL termination is conducted within this module. The Gateway Server also processes the responses from the Backend API/Service in alignment with specified policies before sending them back to the clients. All activities are logged and asynchronously transmitted to the log server. Additionally, sensitive data is recorded in compliance with predetermined rules (such as deletion, masking, or encryption). Each Gateway is associated with an Environment, and its settings are tailored to the respective operating Environment. Apinizer supports multiple Environments, and within each Environment, there may be several Gateways.

Cache Server: The Cache Server manages data shared between components by storing it in a distributed cache, which leads to performance improvements.

After deploying Apinizer images to the Kubernetes environment, you need to add the License Key provided by Apinizer to the database.

Apinizer installation proceeds with the API Manager's installation first, followed by defining the environments where Gateway and Cache Server will operate.

2) Pre-Installation Steps

Before starting the Apinizer installation, you must have a Kubernetes Cluster and a MongoDB replica set on your servers; optionally, Elasticsearch should also be installed if you plan to manage API traffic and analytics data through Apinizer.

If Kubernetes, MongoDB, and Log Servers are already set up in your environment, you can skip this section.


3) Installation and Configurations


3.1) API Manager Installation

API Manager is a web-based management interface where APIs, policies, users, credentials, and configurations are defined, and API traffic and analytical data can be viewed and analyzed.

Before deploying API Manager to Kubernetes, configure the following variables according to your environment.

  • APINIZER_VERSION - The parameter that specifies which Apinizer version to install. Please click to see the current versions. It is recommended to always use the latest version for new installations. Click to review the release notes.
  • MONGO_DBNAME - The name of the database to be used for Apinizer configurations. It is recommended to use the default name "apinizerdb".
  • MONGOX_IP and MONGOX_PORT - IP and port information for the MongoDB servers. The default port for MongoDB is 27017.
  • MONGO_USERNAME and MONGO_PASSWORD - Credentials of the MongoDB user authorized for Apinizer; this user must have permissions on the relevant database or the authority to create it.
  • YOUR_LICENSE_KEY - The license key sent to you by Apinizer.
  • K8S_ANY_WORKER_IP - Once the Apinizer installation is complete, you will need an IP from your Kubernetes Cluster to access the Apinizer API Manager through a web browser. This is typically one of the Kubernetes Worker servers, and it is recommended to later place it behind a Load Balancer and DNS.


It is recommended to store your MongoDB database connection information base64-encoded in a Kubernetes Secret. To achieve this, follow the steps below in a terminal on a Linux-based operating system.

Base64-encoding the database connection information:

DB_URL='mongodb://<MONGO_USERNAME>:<MONGO_PASSWORD>@<MONGO1_IP>:<MONGO1_PORT>,<MONGO2_IP>:<MONGO2_PORT>,<MONGO3_IP>:<MONGO3_PORT>/?authSource=admin&replicaSet=apinizer-replicaset'

DB_NAME=<MONGO_DBNAME>
# For the <MONGO_DBNAME> variable, our default recommendation is the name "apinizerdb".

echo -n ${DB_URL} | base64
# In the next step, this value will replace the <ENCODED_URL> variable.
 
echo -n ${DB_NAME} | base64
# In the next step, this value will replace the <ENCODED_DB_NAME> variable.
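As a concrete round-trip check, the snippet below encodes and then decodes a value; decoding must return the original string unchanged. The connection values are hypothetical placeholders, not real credentials.

```shell
# Hypothetical example values -- substitute your real connection details.
DB_URL='mongodb://apinizer:secret@10.1.1.1:27017,10.1.1.2:27017,10.1.1.3:27017/?authSource=admin&replicaSet=apinizer-replicaset'

# -w0 disables line wrapping (GNU coreutils); printf avoids a trailing newline.
ENCODED_URL=$(printf '%s' "${DB_URL}" | base64 -w0)

# Decoding the encoded value must give back the original string.
DECODED_URL=$(printf '%s' "${ENCODED_URL}" | base64 -d)
printf '%s\n' "${ENCODED_URL}"
```

Using `printf '%s'` (or `echo -n`) matters: an accidental trailing newline inside the Secret would corrupt the connection string at runtime.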

vi secret.yaml
POWERSHELL

Using the encoded database information in secret.yaml:

apiVersion: v1
kind: Namespace
metadata:
  name: apinizer
---

apiVersion: v1
kind: Secret
metadata:
  name: mongo-db-credentials
  namespace: apinizer
type: Opaque
data:
  dbUrl: <ENCODED_URL>
  dbName: <ENCODED_DB_NAME>
YML
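As an alternative to pasting the encoded values by hand, the Secret manifest can be generated in one step. This is only a sketch: the connection values below are placeholders you must replace, and the Namespace block from the original file is omitted for brevity.

```shell
# Placeholder values -- substitute your real MongoDB connection details.
DB_URL='mongodb://apinizer:secret@10.1.1.1:27017/?authSource=admin&replicaSet=apinizer-replicaset'
DB_NAME='apinizerdb'

# Generate secret.yaml with the values base64-encoded in place
# (-w0 disables line wrapping; GNU coreutils).
cat > secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: mongo-db-credentials
  namespace: apinizer
type: Opaque
data:
  dbUrl: $(printf '%s' "${DB_URL}" | base64 -w0)
  dbName: $(printf '%s' "${DB_NAME}" | base64 -w0)
EOF

# Sanity check: the stored dbName must decode back to the original name.
grep 'dbName:' secret.yaml | awk '{print $2}' | base64 -d
```

The generated file is then applied with `kubectl apply -f secret.yaml`, exactly as in the flow below.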

Applying the Secret to the cluster:

kubectl apply -f secret.yaml
POWERSHELL

Referencing the Secret from the deployment's environment variables:

env:
  - name: SPRING_DATA_MONGODB_DATABASE
    value: null
    valueFrom:
      secretKeyRef:
        name: mongo-db-credentials
        key: dbName
  - name: SPRING_DATA_MONGODB_URI
    value: null
    valueFrom:
      secretKeyRef:
        name: mongo-db-credentials
        key: dbUrl
YML


Defining Kubernetes Permissions

Kubernetes API permissions must be defined so that Apinizer can access the pods in its Namespace.

In Kubernetes, ClusterRole and ClusterRoleBinding provide cluster-level role and role binding mechanisms. These two resources enable cluster administrators and application developers to manage access and permissions to Kubernetes resources.

vi apinizer-role.yaml

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: apinizer-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: apinizer-role
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: apinizer-role
rules:
- apiGroups:
    - ''
  resources:
    - nodes
    - services
    - namespaces
    - pods
    - endpoints
    - pods/log
    - secrets
    - configmaps
  verbs:
    - get
    - list
    - watch
    - update
    - create
    - patch 
    - delete
- apiGroups:
    - apps
  resources:
    - deployments
    - replicasets
    - statefulsets
    - configmaps
  verbs:
    - get
    - list
    - watch
    - update
    - create
    - patch
    - delete
YML
kubectl apply -f apinizer-role.yaml
POWERSHELL


If environment management will be done through API Manager, Apinizer needs permissions to access the Kubernetes APIs and to create, delete, update, and monitor Namespaces, Deployments, Pods, and Services. The ClusterRole and ClusterRoleBinding applied above define these permissions.


Note: If environments will not be managed through Apinizer, the following permissions will be sufficient.

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: apinizer-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: apinizer-role
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: apinizer-role
rules:
- apiGroups:
    - ''
  resources:
    - services
    - namespaces
    - pods
    - endpoints
    - pods/log
    - secrets
  verbs:
    - get
    - list
    - watch
    - update
    - create
    - patch 
    - delete
YML


The deployment of API Manager to Kubernetes

Modify the following example YAML file according to your system, and deploy it to your Kubernetes cluster.

vi apinizer-api-manager-deployment.yaml
POWERSHELL
apiVersion: apps/v1
kind: Deployment
metadata:
  name: manager
  namespace: apinizer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: manager
      version: v1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: manager
        version: v1
    spec:
      containers:
      - env:
        - name: JAVA_OPTS
          value: ' -Xmx2048m -Xms2048m -Dlog4j.formatMsgNoLookups=true'
        - name: SPRING_PROFILES_ACTIVE
          value: prod
        - name: SPRING_DATA_MONGODB_URI
          valueFrom:
            secretKeyRef:
              key: dbUrl
              name: mongo-db-credentials
        - name: SPRING_DATA_MONGODB_DATABASE
          valueFrom:
            secretKeyRef:
              key: dbName
              name: mongo-db-credentials
        name: manager
        image: apinizercloud/manager:<APINIZER_VERSION>
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
          protocol: TCP
        resources:
          limits:
            cpu: 1
            memory: 3Gi
        startupProbe:
          failureThreshold: 3
          httpGet:
            path: /apinizer/management/health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 30
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /apinizer/management/health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 30
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /apinizer/management/health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 30
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      hostAliases:
      - ip: "<IP_ADDRESS>"
        hostnames:
        - "<DNS_ADDRESS_1>"
        - "<DNS_ADDRESS_2>"
YML
Host aliases may need to be defined for Apinizer Manager to connect to the MongoDB database.

As an example, the following entry can be added to the YAML file at the same level as the "containers" key:

spec:
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "mongodbserver1.ins.org"
    - "mongodbserver1"
  - ip: "10.1.2.3"
    hostnames:
    - "foo.remote"
POWERSHELL
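Each hostAliases entry is rendered into the pod's /etc/hosts file. Conceptually, the example values above map to hosts-file lines like the following sketch (the exact whitespace Kubernetes writes may differ):

```shell
# One /etc/hosts line per hostAliases entry: <ip> then its hostnames.
printf '%s\t%s %s\n' '127.0.0.1' 'mongodbserver1.ins.org' 'mongodbserver1'
printf '%s\t%s\n' '10.1.2.3' 'foo.remote'
```

This is why hostAliases is sufficient when the MongoDB servers are only resolvable by names your cluster's DNS does not know.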

Environment Variables

Apinizer API Manager runs on the Spring Boot infrastructure. In Spring Boot, environment variables are expressed using underscores (_) and uppercase letters, so properties such as spring.servlet.multipart.max-file-size and spring.servlet.multipart.max-request-size must be converted accordingly when set as environment variables.

Example: SPRING_SERVLET_MULTIPART_MAX_FILE_SIZE and SPRING_SERVLET_MULTIPART_MAX_REQUEST_SIZE can be set as environment variables.


env:
- name: SPRING_SERVLET_MULTIPART_MAX_FILE_SIZE
  value: "20MB"
- name: SPRING_SERVLET_MULTIPART_MAX_REQUEST_SIZE
  value: "50MB"
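The dot-to-underscore mapping can be sketched as a small shell helper. `to_env_var` is a hypothetical name for illustration, not part of Apinizer or Spring; it assumes GNU tr.

```shell
# Spring Boot relaxed binding: letters become uppercase,
# dots and dashes become underscores.
to_env_var() {
  printf '%s' "$1" | tr '[:lower:]' '[:upper:]' | tr '.-' '__'
}

to_env_var 'spring.servlet.multipart.max-file-size'      # SPRING_SERVLET_MULTIPART_MAX_FILE_SIZE
echo
to_env_var 'spring.servlet.multipart.max-request-size'   # SPRING_SERVLET_MULTIPART_MAX_REQUEST_SIZE
echo
```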


Create a Kubernetes Service for API Manager:

vi apinizer-api-manager-service.yaml
POWERSHELL
apiVersion: v1
kind: Service
metadata:
  name: manager
  namespace: apinizer
  labels:
    app: manager
spec:
  selector:
    app: manager
  type: NodePort
  ports:
    - name: http
      port: 8080
      nodePort: 32080
YML
kubectl apply -f apinizer-api-manager-deployment.yaml

kubectl apply -f apinizer-api-manager-service.yaml
BASH

The Service definition above creates a Kubernetes Service named "manager" of type NodePort. This Service is required for accessing API Manager from outside the Kubernetes cluster. You can adapt it to your organization's needs, either by removing it and using an Ingress, or by adjusting it to the connection method used in your environment.

After this, run the first command below to get the name of the created pod, and use that name in the second command to follow and inspect its logs.

kubectl get pods -n apinizer

kubectl logs PODNAME -n apinizer
BASH


After deploying Apinizer images to the Kubernetes environment, you need to add the license key provided by Apinizer to the database.


Entering the API Manager license key

Place the License Key provided by Apinizer in a .js file as shown below; this file will then be used to update the license information in the database.

vi license.js
BASH
db.general_settings.updateOne(
{"_class":"GeneralSettings"},
{ $set: { licenseKey: 'YOURLICENSEKEY'}}
)
POWERSHELL

Execute the created license.js on the MongoDB server:

mongosh mongodb://<MONGODB_IP>:<MONGO_PORT>/<MONGO_DBNAME> --authenticationDatabase "admin" -u "apinizer" -p "<MONGO_PASSWORD>" < license.js
POWERSHELL

Note: It is expected to see a result as Matched = 1.


If the installation was successful, you can access the Apinizer API Manager at the following address.

http://<K8S_ANY_WORKER_IP>:32080
POWERSHELL

Default Username: admin

Default User Password: Ask for assistance from the Apinizer support team.


It is recommended to change your password after your initial login to the Apinizer API Manager.


3.2) Settings to be Made on the Connection Management Page

Where the traffic logs flowing through Apinizer will be sent needs to be defined in Apinizer. This definition is made through Connectors on the Connection Configurations page. If you don't have a specific preference, you can use an Elasticsearch connector managed by Apinizer for data management, to fully benefit from Apinizer's Analytics and Monitoring capabilities.

For these processes, you can define the connections for the applications you will use on the Connection Management pages under the System Settings → Connection Management tab.

If you will manage your API Traffic and API Analytics data with your own log systems, you can define the suitable integration settings from the options provided.


3.3) Settings to be Made on the General Settings Page

Go to System Settings → General Settings page, and here;

  • Whether a value will be appended to the addresses defined for the relevant Worker environments when providing services through the system,
  • Whether you will manage the Kubernetes environment where Apinizer is located through Apinizer or not,
  • Whether logs of error messages will be sent to connected connectors even if log settings are turned off,
  • Settings related to login and session durations in the API Manager interface,
  • The number of rollback points to be maintained for each proxy,
  • Applications where application logs and token logs will be stored/sent.

These settings can be changed here; make the definitions appropriate for your organization.

For detailed information about this page, click here.


3.4) Settings to be Made on the Gateway Environment Page

In the System Settings → Gateways page, at least one Environment must be created and published.

Give the environment a suitable name and enter container settings with resources appropriate to your license and number of servers. This environment name will also be the Kubernetes namespace in which the applications of that environment run. Then enable logging for the environment by defining connectors for the targets where you want to write logs.

3.4.1) If Kubernetes management is done with Apinizer

For detailed information about the Gateway Environments page, click here.

On the page opened to create a new environment, configure general settings such as where the environment will be created, which address it will be served on, and which connectors it will use, as well as the resources and JVM parameters of the Worker and Cache applications.


3.4.2) If Kubernetes management is not done with Apinizer

For detailed information about Manual Management of Gateway Environments, click here.

Follow the steps below on your Kubernetes Cluster to create one namespace, two deployments named "worker" and "cache," and the Kubernetes services for access to the Pods created by these deployments.

Example:

Namespace (Required): prod

Deployment (Required): worker and cache

Service (Required): worker-http-service, cache-http-service, and cache-hz-service


Before deploying the necessary Environments for the Gateway on your Kubernetes environment, go to the Server Management section of Apinizer API Manager and add the Kubernetes Namespace you will create as an Environment in Apinizer. The Environment name here should be the same as your Namespace in Kubernetes.

Important: When creating the Environment, it must be in an Unpublished state. After deployments are done in the Kubernetes environment, it should be updated as Published again.


Edit the following YAML files according to your own environment and deploy them to your Kubernetes cluster.

Creating a Namespace;

vi apinizer-<NAMESPACE>-namespace.yaml
POWERSHELL


apiVersion: v1
kind: Namespace
metadata:
  name: <NAMESPACE>
YML


Creating a Worker Deployment;

vi apinizer-worker-deployment.yaml
POWERSHELL


apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
  namespace: <NAMESPACE>
spec:
  replicas: 1
  selector:
    matchLabels:
      app: worker
      version: "v1"
  strategy:
    type: "RollingUpdate"
    rollingUpdate:
      maxUnavailable: 75%
      maxSurge: 1
  template:
    metadata:
      labels:
        app: worker
        version: "v1"
    spec:
      automountServiceAccountToken: true
      containers:
        - name: worker
          image: apinizercloud/worker:<APINIZER_VERSION>
          imagePullPolicy: IfNotPresent
          env:
            - name: JAVA_OPTS
              value: -server -XX:MaxRAMPercentage=75 -Dhttp.maxConnections=4096 -Dlog4j.formatMsgNoLookups=true
            - name: tuneWorkerThreads
              value: "1024"
            - name: tuneWorkerMaxThreads
              value: "4096"
            - name: tuneBufferSize
              value: "16384"
            - name: tuneIoThreads
              value: "1"
            - name: tuneBacklog
              value: "10000"
            - name: tuneRoutingConnectionPoolMaxConnectionPerHost
              value: "1024"
            - name: tuneRoutingConnectionPoolMaxConnectionTotal
              value: "4096"
            - name: SPRING_DATA_MONGODB_DATABASE
              value: null
              valueFrom:
                secretKeyRef:
                  name: mongo-db-credentials
                  key: dbName
            - name: SPRING_DATA_MONGODB_URI
              value: null
              valueFrom:
                secretKeyRef:
                  name: mongo-db-credentials
                  key: dbUrl
            - name: SPRING_PROFILES_ACTIVE
              value: prod
          lifecycle:
            preStop:
              exec:
                command:
                  - /bin/sh
                  - -c
                  - sleep 10
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /apinizer/management/health
              port: 8091
              scheme: HTTP
            initialDelaySeconds: 60
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 30

          ports:
            - containerPort: 8091
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /apinizer/management/health
              port: 8091
              scheme: HTTP
            initialDelaySeconds: 60
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 30
          resources:
            limits:
              cpu: 1
              memory: 2Gi
          securityContext:
            allowPrivilegeEscalation: true
            readOnlyRootFilesystem: false
            runAsGroup: 0
            runAsNonRoot: false
            runAsUser: 0
          startupProbe:
            failureThreshold: 3
            httpGet:
              path: /apinizer/management/health
              port: 8091
              scheme: HTTP
            initialDelaySeconds: 60
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 30
      restartPolicy: Always
YML

If you want to expose the Worker application over HTTPS, set the port value to 8443 and the scheme value to HTTPS under livenessProbe, readinessProbe, and startupProbe in the YAML.
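If you prefer to script that change, a sed expression along the following lines could be used. The snippet below is only a dry run on a representative probe fragment, so you can verify the substitution before applying it to the real file; the expression itself is an assumption, check the result against your deployment YAML.

```shell
# Dry run: flip probe port/scheme to HTTPS on a sample fragment.
# 'port: 8091' matches only the probes' httpGet port (the match is
# case-sensitive, so 'containerPort: 8091' is untouched), and the $
# anchor keeps 'HTTP' from being rewritten twice into 'HTTPSS'.
sed 's/port: 8091/port: 8443/; s/scheme: HTTP$/scheme: HTTPS/' <<'EOF'
          livenessProbe:
            httpGet:
              path: /apinizer/management/health
              port: 8091
              scheme: HTTP
EOF
```

To apply it in place, the same expression would be run with `sed -i` against apinizer-worker-deployment.yaml.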

Creating a Cache Deployment;

vi apinizer-cache-deployment.yaml
POWERSHELL


apiVersion: apps/v1
kind: Deployment
metadata:
  name: cache
  namespace: <NAMESPACE>
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cache
      version: "v1"
  strategy:
    type: "RollingUpdate"
    rollingUpdate:
      maxUnavailable: 75%
      maxSurge: 1
  template:
    metadata:
      labels:
        app: cache
        version: "v1"
    spec:
      automountServiceAccountToken: true
      containers:
        - name: cache
          image: apinizercloud/cache:<APINIZER_VERSION>
          imagePullPolicy: IfNotPresent
          env:
            - name: JAVA_OPTS
              value: -server -XX:MaxRAMPercentage=75 -Dhttp.maxConnections=1024 -Dlog4j.formatMsgNoLookups=true
            - name: SPRING_PROFILES_ACTIVE
              value: prod
            - name: SPRING_DATA_MONGODB_DATABASE
              value: null
              valueFrom:
                secretKeyRef:
                  name: mongo-db-credentials
                  key: dbName
            - name: SPRING_DATA_MONGODB_URI
              value: null
              valueFrom:
                secretKeyRef:
                  name: mongo-db-credentials
                  key: dbUrl
            - name: CACHE_SERVICE_NAME
              value: cache-hz-service
            - name: CACHE_QUOTA_TIMEZONE
              value: +03:00
          ports:
            - containerPort: 8090
            - containerPort: 5701
          resources:
            limits:
              cpu: 1
              memory: 1024Mi
          lifecycle:
            preStop:
              exec:
                command:
                  - /bin/sh
                  - -c
                  - sleep 10
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /apinizer/management/health
              port: 8090
              scheme: HTTP
            initialDelaySeconds: 120
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 30
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /apinizer/management/health
              port: 8090
              scheme: HTTP
            initialDelaySeconds: 120
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: true
            readOnlyRootFilesystem: false
            runAsGroup: 0
            runAsNonRoot: false
            runAsUser: 0
          startupProbe:
            failureThreshold: 3
            httpGet:
              path: /apinizer/management/health
              port: 8090
              scheme: HTTP
            initialDelaySeconds: 120
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 30
      restartPolicy: Always
YML

Creating a Service;

vi apinizer-worker-service.yaml
POWERSHELL
apiVersion: v1
kind: Service
metadata:
  name: worker-http-service
  namespace: <NAMESPACE>
spec:
  ports:
  - nodePort: 30080
    port: 8091
    protocol: TCP
    targetPort: 8091
  selector:
    app: worker
  type: NodePort
YML

If you want to serve the Worker over HTTPS, set the port and targetPort values to 8443 in the YAML above.

vi apinizer-cache-service.yaml
POWERSHELL
apiVersion: v1
kind: Service
metadata:
  name: cache-http-service
  namespace: <NAMESPACE>
spec:
  ports:
  - port: 8090
    protocol: TCP
    targetPort: 8090
  selector:
    app: cache
  type: ClusterIP
 
---
apiVersion: v1
kind: Service
metadata:
  name: cache-hz-service
  namespace: <NAMESPACE>
spec:
  ports:
  - port: 5701
    protocol: TCP
    targetPort: 5701
  selector:
    app: cache
  type: ClusterIP
YML

The Gateway and Cache Server applications also connect to MongoDB. Therefore, the Secret created for the Manager application must be copied to the other namespaces. The following example copies the secret from the "apinizer" namespace to the namespace in use.

kubectl get secret mongo-db-credentials -n apinizer -o yaml | sed 's/namespace: apinizer/namespace: <NAMESPACE>/'  | kubectl create -f -
POWERSHELL
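To see what the sed substitution does before piping its output into kubectl, here is a local dry run on a minimal metadata fragment, using a hypothetical target namespace "prod":

```shell
# Local dry run of the namespace rewrite; only the namespace field changes.
printf 'metadata:\n  name: mongo-db-credentials\n  namespace: apinizer\n' \
  | sed 's/namespace: apinizer/namespace: prod/'
```

Only the `namespace:` line is rewritten; the secret's name and data are carried over unchanged, which is exactly what the copy command above relies on.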

Deploy the created .yaml files to the Kubernetes environment.

kubectl apply -f apinizer-<NAMESPACE>-namespace.yaml

kubectl apply -f apinizer-worker-deployment.yaml
kubectl apply -f apinizer-cache-deployment.yaml

kubectl apply -f apinizer-worker-service.yaml
kubectl apply -f apinizer-cache-service.yaml
POWERSHELL

3.4.3) Adding a Log Connector to the created environments

At least one Log Connector should be connected to the created environments.

For detailed information about adding Log Connectors to environments, click here.

After completing the above steps, go back to the Server Management section in API Manager and update the Environment you defined as published.

3.5) Settings to be Made on the Backup Management/Configuration Page

If configured on the General Settings page, Apinizer configuration data and the database holding log and token records can be backed up by extracting a dump file on the server(s) specified in that setting.

In any case, it is recommended that your organization's system team securely back up this file to a safe server.

For detailed information about this page, click here.


3.6) Other Settings

Please change the password of the user account named "admin" that you used to log in to the Apinizer API Manager during the first login from the Change Password page under the quick menu in the top right, and securely note it. For detailed information about the Users page where user management is done, click here.

If you want users to use their passwords in LDAP/Active Directory when logging into the API Manager, click here for detailed information.

Logs for many of the features you will use in Apinizer are written to the database that holds the configuration data. If some of these logs are not required by your organization's policies, click here for detailed information on what this data is and how to keep its growth under control.

If you are using an Elasticsearch managed by Apinizer for traffic logs and prefer to take snapshots at certain intervals for backup, click here for detailed information on performing these operations.

During Apinizer installation, it is strongly recommended to open the ports on which the Worker environments are exposed and to configure DNS forwarding to the servers they run on. For this, inform your organization's staff which servers and ports Apinizer is exposed on, and which addresses should be reached through which DNS names.

If your organization is part of the "kamunet" and Apinizer will access the "kamunet" directly, outbound traffic from the Apinizer servers must appear to originate from your "kamunet" IP. This process, called NAT, should be configured by your organization's firewall administrators.

If your organization wants to use the KPS (Identity Sharing System) services offered by the General Directorate of Population and Citizenship Affairs through Apinizer, your organization's KPS information must be entered in the API Manager on the KPS Settings page.



Congratulations! If you've made it this far, it means the Apinizer installation and settings are complete.