This guide explains how to upgrade Apinizer images on Kubernetes clusters that have no internet access. The steps are tailored for offline (air-gapped) systems that require updates in Docker and Kubernetes environments.

The example scenario was carried out on servers running the Ubuntu 22.04 operating system.


There are two main methods for carrying out this process:

  1. Pulling the image files on a server with internet access and then transferring those images to the Kubernetes cluster
  2. Using a local registry or repository application from which the Kubernetes cluster will pull the images

1) Transferring Images to the Servers

1.1) Extracting Images from an Online Server and Transferring Them to Offline Servers

If there is a machine that can access both the internet and the offline machines, the following steps can be performed.

1.1.1) If Docker is used:

# All images required for the upgrade are pulled from the online server
docker pull apinizercloud/manager:<NEW_VERSION>
docker pull apinizercloud/worker:<NEW_VERSION>
docker pull apinizercloud/cache:<NEW_VERSION>
docker pull apinizercloud/portal:<NEW_VERSION>
docker pull apinizercloud/integration:<NEW_VERSION>  

# To transfer the corresponding images to offline servers, the images are saved in `.tar` format on the online server
docker save apinizercloud/manager:<NEW_VERSION> -o apinizercloud-manager.<NEW_VERSION>.tar
docker save apinizercloud/worker:<NEW_VERSION> -o apinizercloud-worker.<NEW_VERSION>.tar
docker save apinizercloud/cache:<NEW_VERSION> -o apinizercloud-cache.<NEW_VERSION>.tar
docker save apinizercloud/portal:<NEW_VERSION> -o apinizercloud-portal.<NEW_VERSION>.tar
docker save apinizercloud/integration:<NEW_VERSION> -o apinizercloud-integration.<NEW_VERSION>.tar  

# Each of the images is transferred to the offline server
scp apinizercloud-*.tar <OFFLINE_MACHINE_USER>@<OFFLINE_MACHINE_IP>:<TARGET_DIRECTORY>  

# Images are loaded on each offline server
docker load -i apinizercloud-manager.<NEW_VERSION>.tar
docker load -i apinizercloud-worker.<NEW_VERSION>.tar
docker load -i apinizercloud-cache.<NEW_VERSION>.tar
docker load -i apinizercloud-portal.<NEW_VERSION>.tar
docker load -i apinizercloud-integration.<NEW_VERSION>.tar
BASH
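
The load can be verified by listing the images on the offline server:

# Confirm that the images are present with the expected tags
docker images | grep apinizercloud
BASH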

1.1.2) If containerd is used:

# All images required for the upgrade are pulled from the online server
ctr image pull docker.io/apinizercloud/manager:<NEW_VERSION>
ctr image pull docker.io/apinizercloud/worker:<NEW_VERSION>
ctr image pull docker.io/apinizercloud/cache:<NEW_VERSION>
ctr image pull docker.io/apinizercloud/portal:<NEW_VERSION>
ctr image pull docker.io/apinizercloud/integration:<NEW_VERSION>  

# To transfer the corresponding images to offline servers, the images are saved in `.tar` format on the online server
ctr images export apinizercloud-manager.tar docker.io/apinizercloud/manager:<NEW_VERSION>
ctr images export apinizercloud-worker.tar docker.io/apinizercloud/worker:<NEW_VERSION>
ctr images export apinizercloud-cache.tar docker.io/apinizercloud/cache:<NEW_VERSION>
ctr images export apinizercloud-portal.tar docker.io/apinizercloud/portal:<NEW_VERSION>
ctr images export apinizercloud-integration.tar docker.io/apinizercloud/integration:<NEW_VERSION>  

# Each of the images is transferred to the offline server
scp apinizercloud-*.tar <OFFLINE_MACHINE_USER>@<OFFLINE_MACHINE_IP>:<TARGET_DIRECTORY>  

# Images are loaded on each offline server
ctr images import apinizercloud-manager.tar
ctr images import apinizercloud-worker.tar
ctr images import apinizercloud-cache.tar
ctr images import apinizercloud-portal.tar
ctr images import apinizercloud-integration.tar
BASH
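
Note that kubelet reads images from containerd's k8s.io namespace, while ctr uses the default namespace unless one is specified. If the imported images are not visible to the cluster, importing them into the k8s.io namespace usually resolves this:

# Import into the k8s.io namespace that Kubernetes uses
ctr -n k8s.io images import apinizercloud-manager.tar

# Verify that the cluster-visible namespace contains the images
ctr -n k8s.io images ls | grep apinizercloud
BASH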

1.2) If a Local Image Registry or Repository Exists

Different image registries and repositories work in different ways, but in most of them the pulled images must be re-tagged and pushed to the registry.

If the registry application is used as a reverse proxy (pull-through cache), it is sufficient to define the apinizercloud repository on hub.docker.com as its upstream source.
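
As an illustrative sketch only (not part of the original procedure), Docker's open-source registry image can act as such a pull-through cache; the container name and port here are arbitrary placeholders:

# Hypothetical example: run a registry that proxies Docker Hub
docker run -d -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  --name registry-proxy registry:2
BASH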


1.2.1) If Docker is used:

# All necessary images are pulled
docker pull apinizercloud/manager:<NEW_VERSION>
docker pull apinizercloud/worker:<NEW_VERSION>
docker pull apinizercloud/cache:<NEW_VERSION>
docker pull apinizercloud/portal:<NEW_VERSION>
docker pull apinizercloud/integration:<NEW_VERSION>  

# The pulled images are re-tagged with the local registry address
docker tag apinizercloud/manager:<NEW_VERSION> <LOCAL_REGISTRY>/apinizercloud/manager:<NEW_VERSION>
docker tag apinizercloud/worker:<NEW_VERSION> <LOCAL_REGISTRY>/apinizercloud/worker:<NEW_VERSION>
docker tag apinizercloud/cache:<NEW_VERSION> <LOCAL_REGISTRY>/apinizercloud/cache:<NEW_VERSION>
docker tag apinizercloud/portal:<NEW_VERSION> <LOCAL_REGISTRY>/apinizercloud/portal:<NEW_VERSION>
docker tag apinizercloud/integration:<NEW_VERSION> <LOCAL_REGISTRY>/apinizercloud/integration:<NEW_VERSION>  

# Images are transferred to the local image registry of the organization:
docker push <LOCAL_REGISTRY>/apinizercloud/manager:<NEW_VERSION>
docker push <LOCAL_REGISTRY>/apinizercloud/worker:<NEW_VERSION>
docker push <LOCAL_REGISTRY>/apinizercloud/cache:<NEW_VERSION>
docker push <LOCAL_REGISTRY>/apinizercloud/portal:<NEW_VERSION>
docker push <LOCAL_REGISTRY>/apinizercloud/integration:<NEW_VERSION>
BASH
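
If the local registry requires authentication, a login is usually needed before pushing (credentials are placeholders):

# Log in to the organization's registry
docker login <LOCAL_REGISTRY> -u <USERNAME> -p <PASSWORD>
BASH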

1.2.2) If containerd is used:

# All necessary images are pulled
ctr image pull docker.io/apinizercloud/manager:<NEW_VERSION>
ctr image pull docker.io/apinizercloud/worker:<NEW_VERSION>
ctr image pull docker.io/apinizercloud/cache:<NEW_VERSION>
ctr image pull docker.io/apinizercloud/portal:<NEW_VERSION>
ctr image pull docker.io/apinizercloud/integration:<NEW_VERSION>  

# The pulled images are re-tagged with the local registry address
ctr image tag docker.io/apinizercloud/manager:<NEW_VERSION> <LOCAL_REGISTRY>/apinizercloud/manager:<NEW_VERSION>
ctr image tag docker.io/apinizercloud/worker:<NEW_VERSION> <LOCAL_REGISTRY>/apinizercloud/worker:<NEW_VERSION>
ctr image tag docker.io/apinizercloud/cache:<NEW_VERSION> <LOCAL_REGISTRY>/apinizercloud/cache:<NEW_VERSION>
ctr image tag docker.io/apinizercloud/portal:<NEW_VERSION> <LOCAL_REGISTRY>/apinizercloud/portal:<NEW_VERSION>
ctr image tag docker.io/apinizercloud/integration:<NEW_VERSION> <LOCAL_REGISTRY>/apinizercloud/integration:<NEW_VERSION>  

# Images are transferred to the local image registry of the organization:
ctr images push <LOCAL_REGISTRY>/apinizercloud/manager:<NEW_VERSION>
ctr images push <LOCAL_REGISTRY>/apinizercloud/worker:<NEW_VERSION>
ctr images push <LOCAL_REGISTRY>/apinizercloud/cache:<NEW_VERSION>
ctr images push <LOCAL_REGISTRY>/apinizercloud/portal:<NEW_VERSION>
ctr images push <LOCAL_REGISTRY>/apinizercloud/integration:<NEW_VERSION>
BASH
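
Depending on how the local registry is configured, ctr may need credentials, and a registry served over plain HTTP requires an extra flag; for example:

# Push with credentials (placeholders); add --plain-http for a registry without TLS
ctr images push --user <USERNAME>:<PASSWORD> <LOCAL_REGISTRY>/apinizercloud/manager:<NEW_VERSION>
BASH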

2) MongoDB Backup

To avoid data loss, a backup should always be taken as the first step of the update process. This must be done on the MongoDB primary server.

# The backup is taken
sudo mongodump --host <IP_ADDRESS> --port=25080 --username=apinizer --password=<PASSWORD> --authenticationDatabase=admin --gzip --archive=<BACKUP_DIRECTORY>/apinizer-backup<BACKUP_VERSION>.archive
BASH
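
Should a rollback ever be needed, the same archive can be restored with mongorestore (shown with the same connection parameters; the options should be verified against the installed MongoDB tools version):

# Restore the backup archive (only if a rollback is required)
sudo mongorestore --host <IP_ADDRESS> --port=25080 --username=apinizer --password=<PASSWORD> --authenticationDatabase=admin --gzip --archive=<BACKUP_DIRECTORY>/apinizer-backup<BACKUP_VERSION>.archive
BASH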

3) Updating the Apinizer Manager Application


Apinizer Manager must be updated before the Apinizer Worker and Cache applications. Worker and Cache pick up database-related updates only after Manager has applied its changes to the database; if they are updated before Manager, problems can occur on the Worker and Cache side. Therefore, after updating Manager, make sure that the Manager pods on Kubernetes are in the Ready state before updating the other components.


This and the subsequent steps are run on the servers that host the Kubernetes control plane.

# Information about the deployments is checked
kubectl get deployments -Ao wide  

# The Manager deployment's image is updated
kubectl set image deployment/<MANAGER_DEPLOYMENT_NAME> -n <NAMESPACE> <MANAGER_CONTAINER_NAME>=apinizercloud/manager:<NEW_VERSION>  

# Wait for the pod to be READY by monitoring the pod status
kubectl get pods -n <NAMESPACE> -w
BASH
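
Alternatively, the rollout can be followed until it completes:

# Blocks until the Manager deployment finishes rolling out
kubectl rollout status deployment/<MANAGER_DEPLOYMENT_NAME> -n <NAMESPACE>
BASH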

If the deployment image is hosted on a local registry:

# The registry address is added to the image name during the update
kubectl set image deployment/<MANAGER_DEPLOYMENT_NAME> -n <NAMESPACE> <MANAGER_CONTAINER_NAME>=<LOCAL_REGISTRY>/apinizercloud/manager:<NEW_VERSION>
BASH
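
If the local registry requires authentication, the deployment must also reference an image pull secret; a minimal sketch (the secret name is a placeholder, and the deployment's pod spec must list it under imagePullSecrets):

# Create a registry credential secret in the same namespace
kubectl create secret docker-registry <REGISTRY_SECRET_NAME> -n <NAMESPACE> --docker-server=<LOCAL_REGISTRY> --docker-username=<USERNAME> --docker-password=<PASSWORD>
BASH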

4) Updating Apinizer Worker and Cache Applications

After making sure that the image of the Apinizer Manager application is updated, the Apinizer Worker and Cache applications are updated.

kubectl set image deployment/<WORKER_DEPLOYMENT_NAME> -n <NAMESPACE> <WORKER_CONTAINER_NAME>=apinizercloud/worker:<NEW_VERSION>
kubectl set image deployment/<CACHE_DEPLOYMENT_NAME> -n <NAMESPACE> <CACHE_CONTAINER_NAME>=apinizercloud/cache:<NEW_VERSION>

# Wait for the pods to be READY by monitoring the pod status
kubectl get pods -n <NAMESPACE> -w
BASH
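
If a problem appears after an update, the previous image can be restored with a rollout undo (shown for Worker; the same applies to the other deployments):

# Roll the deployment back to its previous revision
kubectl rollout undo deployment/<WORKER_DEPLOYMENT_NAME> -n <NAMESPACE>
BASH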

If the deployment images are hosted on a local registry/repository:

# The registry/repository address is added to the image names during the update
kubectl set image deployment/<WORKER_DEPLOYMENT_NAME> -n <NAMESPACE> <WORKER_CONTAINER_NAME>=<LOCAL_REGISTRY>/apinizercloud/worker:<NEW_VERSION>

kubectl set image deployment/<CACHE_DEPLOYMENT_NAME> -n <NAMESPACE> <CACHE_CONTAINER_NAME>=<LOCAL_REGISTRY>/apinizercloud/cache:<NEW_VERSION>
BASH

5) Updating Apinizer Portal and Integration Applications

The Apinizer Portal and Integration applications are updated in a similar way.

kubectl set image deployment/<PORTAL_DEPLOYMENT_NAME> -n <NAMESPACE> <PORTAL_CONTAINER_NAME>=apinizercloud/portal:<NEW_VERSION>
kubectl set image deployment/<INTEGRATION_DEPLOYMENT_NAME> -n <NAMESPACE> <INTEGRATION_CONTAINER_NAME>=apinizercloud/integration:<NEW_VERSION>  

# Wait for the pods to be READY by monitoring the pod status
kubectl get pods -n <NAMESPACE> -w
BASH

If the deployment images are hosted on a local registry/repository:

# The registry/repository address is added to the image names during the update
kubectl set image deployment/<PORTAL_DEPLOYMENT_NAME> -n <NAMESPACE> <PORTAL_CONTAINER_NAME>=<LOCAL_REGISTRY>/apinizercloud/portal:<NEW_VERSION>

kubectl set image deployment/<INTEGRATION_DEPLOYMENT_NAME> -n <NAMESPACE> <INTEGRATION_CONTAINER_NAME>=<LOCAL_REGISTRY>/apinizercloud/integration:<NEW_VERSION>
BASH
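
After all the components are updated, the image versions actually running in the cluster can be verified (assuming all deployments share one namespace):

# The IMAGES column should show the new version for every deployment
kubectl get deployments -n <NAMESPACE> -o wide
kubectl get pods -n <NAMESPACE>
BASH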