Apinizer Version Upgrade
This guide explains step-by-step how to upgrade Apinizer images running on Kubernetes.
The example scenario was implemented on servers running Ubuntu 22.04.
Updating is performed in different ways depending on whether the Kubernetes worker servers hosting Apinizer have internet access.
Although not directly part of the update itself, backing up the MongoDB database used by Apinizer is always recommended in case a rollback is needed.
Before performing the update, it is recommended to review the Release Notes to identify any changes that may conflict with the current system configuration and to take necessary precautions accordingly.
1) MongoDB Backup
First, a backup of MongoDB is taken. The backup is always the first step of the update process to avoid data loss. The following command is executed on the MongoDB primary server.
# Taking Backup
sudo mongodump --host <PRIMARY_MONGODB_IP_ADDRESS> --port=25080 --username=apinizer --password=<PASSWORD> --authenticationDatabase=admin --gzip --archive=<BACKUP_DIRECTORY>/apinizer-backup--<CURRENT_VERSION>--<BACKUP_DATE>--01.archive
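Optionally, the backup archive can be verified without importing any data. This is a minimal sketch assuming the same connection parameters as above, using mongorestore's dry-run mode, which only reports what would be restored:
# Optional: verify that the backup archive is readable
sudo mongorestore --host <PRIMARY_MONGODB_IP_ADDRESS> --port=25080 --username=apinizer --password=<PASSWORD> --authenticationDatabase=admin --gzip --archive=<BACKUP_DIRECTORY>/apinizer-backup--<CURRENT_VERSION>--<BACKUP_DATE>--01.archive --dryRun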
2) Updating Apinizer Applications
If the relevant servers have internet access, proceed with section 2.1; otherwise, continue with section 2.2.
The system primarily consists of manager, worker, and cache components. Depending on your license, integration and apiportal (referred to as "portal" in versions prior to 2025.04.5) components may also be included.
Current version information can be accessed via https://hub.docker.com/u/apinizercloud or through the Release Notes page.
Key Considerations During Updates and Risk of Traffic Interruption
The risk of traffic interruption during the update may vary depending on factors such as the number of pods, the update strategy, and the availability of server resources.
Number of replicas/pods: If the application being updated runs on a single pod, a brief interruption may occur. Even so, running the Manager component on a single pod is recommended.
Update strategy: Unless a custom strategy is defined during installation, updates are applied using the RollingUpdate method, which keeps at least one pod active. This approach is suitable for zero-downtime updates when multiple pods are in use. Alternatively, the Recreate method may be preferred for the Manager and Cache components, depending on requirements. The currently configured strategy can be checked with the command shown after this list.
Server resource availability: In environments with a limited number of Kubernetes worker nodes and minimal free resources, manual intervention may be required during updates, and traffic interruptions are likely to occur.
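The update strategy configured for a deployment can be checked in advance. This is a minimal sketch using standard kubectl output options and the placeholder names used elsewhere in this guide:
# Show the configured update strategy of a deployment (RollingUpdate or Recreate)
kubectl get deployment <WORKER_DEPLOYMENT_NAME> -n <WORKER_CACHE_NAMESPACE> -o jsonpath='{.spec.strategy.type}'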
2.1) Online Update of Apinizer Applications
2.1.1) Updating the Apinizer Manager Application
Before updating other Apinizer components, Apinizer Manager must be updated.
After the Manager has updated the database, the other Apinizer applications must be started with the updated database settings.
Therefore, after updating the Manager, ensure that the Manager pods on Kubernetes are ready, and then update the other components.
This and the subsequent steps are run on a server with the Kubernetes Control Plane role.
# Information about the Deployment is checked
kubectl get deployments -Ao wide
# Manager's deployment image is updated
kubectl set image deployment/<MANAGER_DEPLOYMENT_NAME> -n <MANAGER_NAMESPACE> <MANAGER_CONTAINER_NAME>=apinizercloud/manager:<NEW_VERSION>
# Wait for the pod to be READY, monitor pod status
kubectl get pods -n <MANAGER_NAMESPACE>
kubectl logs -f -n <MANAGER_NAMESPACE> <POD_NAME>
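As an alternative to watching the pod list, the rollout can be followed until the new Manager pods become ready; this is a sketch using standard kubectl with the same placeholders:
# Optionally, block until the Manager rollout completes
kubectl rollout status deployment/<MANAGER_DEPLOYMENT_NAME> -n <MANAGER_NAMESPACE>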
2.1.2) Updating Apinizer Worker and Cache Applications
After making sure that the image of the Apinizer Manager application is updated, the Apinizer Worker and Cache applications are updated.
kubectl set image deployment/<WORKER_DEPLOYMENT_NAME> -n <WORKER_CACHE_NAMESPACE> <WORKER_CONTAINER_NAME>=apinizercloud/worker:<NEW_VERSION>
kubectl set image deployment/<CACHE_DEPLOYMENT_NAME> -n <WORKER_CACHE_NAMESPACE> <CACHE_CONTAINER_NAME>=apinizercloud/cache:<NEW_VERSION>
# Wait for the pods to be READY, monitor pod status
kubectl get pods -n <WORKER_CACHE_NAMESPACE>
kubectl logs -f -n <WORKER_CACHE_NAMESPACE> <POD_NAME>
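To confirm that the Worker and Cache pods are actually running the new image versions, the container image of each pod can be listed; this is a sketch using standard kubectl output formatting:
# Optionally, list the image used by each pod in the namespace
kubectl get pods -n <WORKER_CACHE_NAMESPACE> -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'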
2.1.3) Updating Apinizer Portal and Integration Applications
Apinizer Portal and Integration can be updated in a similar way.
kubectl set image deployment/<PORTAL_DEPLOYMENT_NAME> -n <PORTAL_NAMESPACE> <PORTAL_CONTAINER_NAME>=apinizercloud/apiportal:<NEW_VERSION>
kubectl set image deployment/<INTEGRATION_DEPLOYMENT_NAME> -n <INTEGRATION_NAMESPACE> <INTEGRATION_CONTAINER_NAME>=apinizercloud/integration:<NEW_VERSION>
# Wait for the pods to be READY, monitor pod status
kubectl get pods -n <PORTAL_NAMESPACE>
kubectl logs -f -n <PORTAL_NAMESPACE> <POD_NAME>
kubectl get pods -n <INTEGRATION_NAMESPACE>
kubectl logs -f -n <INTEGRATION_NAMESPACE> <POD_NAME>
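If desired, the rollout of these components can also be followed until completion, as with the other applications; a sketch with the same placeholders:
# Optionally, block until the rollouts complete
kubectl rollout status deployment/<PORTAL_DEPLOYMENT_NAME> -n <PORTAL_NAMESPACE>
kubectl rollout status deployment/<INTEGRATION_DEPLOYMENT_NAME> -n <INTEGRATION_NAMESPACE>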
2.2) Offline Update of Apinizer Applications
The following steps are tailored to offline Kubernetes environments that require updates.
There are two main methods for performing this process:
- Pulling the images on a server that has internet access and Docker/containerd installed, then transferring the image files to the target Kubernetes cluster.
- Using an image registry/repository application that the Kubernetes cluster can reach (Nexus, Harbor, etc.).
Continue with whichever method applies to your environment.
2.2.1) Retrieving Images from Online Servers and Transferring them to Offline Servers
If a machine with internet access is available in addition to the offline machines, the following steps can be performed.
2.2.1.1) If Docker will be used:
# All images required for the upgrade are pulled from the online server
docker pull apinizercloud/manager:<NEW_VERSION>
docker pull apinizercloud/worker:<NEW_VERSION>
docker pull apinizercloud/cache:<NEW_VERSION>
docker pull apinizercloud/portal:<NEW_VERSION>
docker pull apinizercloud/integration:<NEW_VERSION>
# To transfer the corresponding images to offline servers, the images are saved in `.tar` format on the online server
docker save apinizercloud/manager:<NEW_VERSION> -o apinizercloud-manager.<NEW_VERSION>.tar
docker save apinizercloud/worker:<NEW_VERSION> -o apinizercloud-worker.<NEW_VERSION>.tar
docker save apinizercloud/cache:<NEW_VERSION> -o apinizercloud-cache.<NEW_VERSION>.tar
docker save apinizercloud/portal:<NEW_VERSION> -o apinizercloud-portal.<NEW_VERSION>.tar
docker save apinizercloud/integration:<NEW_VERSION> -o apinizercloud-integration.<NEW_VERSION>.tar
# Each of the images is transferred to the offline server
scp apinizercloud-*.tar <OFFLINE_MACHINE_USER>@<OFFLINE_MACHINE_IP>:<TARGET_DIRECTORY>
# Images are loaded on each offline server
docker load -i apinizercloud-manager.<NEW_VERSION>.tar
docker load -i apinizercloud-worker.<NEW_VERSION>.tar
docker load -i apinizercloud-cache.<NEW_VERSION>.tar
docker load -i apinizercloud-portal.<NEW_VERSION>.tar
docker load -i apinizercloud-integration.<NEW_VERSION>.tar
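Because no shared registry is involved, the images must be loaded on every Kubernetes worker node that may schedule Apinizer pods. A simple check to confirm the load succeeded:
# Confirm that the images are present on the offline server
docker images | grep apinizercloud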
2.2.1.2) If containerd will be used:
# All images required for the upgrade are pulled from the online server
ctr image pull docker.io/apinizercloud/manager:<NEW_VERSION>
ctr image pull docker.io/apinizercloud/worker:<NEW_VERSION>
ctr image pull docker.io/apinizercloud/cache:<NEW_VERSION>
ctr image pull docker.io/apinizercloud/portal:<NEW_VERSION>
ctr image pull docker.io/apinizercloud/integration:<NEW_VERSION>
# To transfer the corresponding images to offline servers, the images are saved in `.tar` format on the online server
ctr images export apinizercloud-manager.tar docker.io/apinizercloud/manager:<NEW_VERSION>
ctr images export apinizercloud-worker.tar docker.io/apinizercloud/worker:<NEW_VERSION>
ctr images export apinizercloud-cache.tar docker.io/apinizercloud/cache:<NEW_VERSION>
ctr images export apinizercloud-portal.tar docker.io/apinizercloud/portal:<NEW_VERSION>
ctr images export apinizercloud-integration.tar docker.io/apinizercloud/integration:<NEW_VERSION>
# Each of the images is transferred to the offline server
scp apinizercloud-*.tar <OFFLINE_MACHINE_USER>@<OFFLINE_MACHINE_IP>:<TARGET_DIRECTORY>
# Images are loaded on each offline server
ctr images import apinizercloud-manager.tar
ctr images import apinizercloud-worker.tar
ctr images import apinizercloud-cache.tar
ctr images import apinizercloud-portal.tar
ctr images import apinizercloud-integration.tar
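Note that kubelet typically resolves images from containerd's k8s.io namespace. If the imported images are not visible to Kubernetes, re-importing them into that namespace may be necessary; this is a sketch to be adjusted to your containerd setup:
# Import into the containerd namespace used by Kubernetes, if required
ctr -n k8s.io images import apinizercloud-manager.tar
ctr -n k8s.io images import apinizercloud-worker.tar
ctr -n k8s.io images import apinizercloud-cache.tar
ctr -n k8s.io images import apinizercloud-portal.tar
ctr -n k8s.io images import apinizercloud-integration.tar
# Confirm the images are visible
ctr -n k8s.io images ls | grep apinizercloud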
2.2.2) If Local Image Registry or Repository Exists
Different image registries and repositories work in different ways, but in most of them the pulled images must be re-tagged and pushed to the registry.
If the registry application is used as a proxy (pull-through cache), it is sufficient to define the apinizercloud repository at hub.docker.com as its upstream source.
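If the local registry requires authentication, credentials are typically needed before the push steps below. The exact mechanism depends on the registry product; the following is only a sketch, and <REGISTRY_USER>/<REGISTRY_PASSWORD> are hypothetical placeholders:
# Docker: authenticate against the local registry before pushing
docker login <LOCAL_REGISTRY>
# containerd: credentials can be passed directly to the push command
ctr images push --user <REGISTRY_USER>:<REGISTRY_PASSWORD> <LOCAL_REGISTRY>/apinizercloud/manager:<NEW_VERSION>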
2.2.2.1) If Docker will be used:
# All required images are pulled
docker pull apinizercloud/manager:<NEW_VERSION>
docker pull apinizercloud/worker:<NEW_VERSION>
docker pull apinizercloud/cache:<NEW_VERSION>
docker pull apinizercloud/portal:<NEW_VERSION>
docker pull apinizercloud/integration:<NEW_VERSION>
# The pulled images are re-tagged with the local registry prefix
docker tag apinizercloud/manager:<NEW_VERSION> <LOCAL_REGISTRY>/apinizercloud/manager:<NEW_VERSION>
docker tag apinizercloud/worker:<NEW_VERSION> <LOCAL_REGISTRY>/apinizercloud/worker:<NEW_VERSION>
docker tag apinizercloud/cache:<NEW_VERSION> <LOCAL_REGISTRY>/apinizercloud/cache:<NEW_VERSION>
docker tag apinizercloud/portal:<NEW_VERSION> <LOCAL_REGISTRY>/apinizercloud/portal:<NEW_VERSION>
docker tag apinizercloud/integration:<NEW_VERSION> <LOCAL_REGISTRY>/apinizercloud/integration:<NEW_VERSION>
# Images are transferred to the local image registry of the organization:
docker push <LOCAL_REGISTRY>/apinizercloud/manager:<NEW_VERSION>
docker push <LOCAL_REGISTRY>/apinizercloud/worker:<NEW_VERSION>
docker push <LOCAL_REGISTRY>/apinizercloud/cache:<NEW_VERSION>
docker push <LOCAL_REGISTRY>/apinizercloud/portal:<NEW_VERSION>
docker push <LOCAL_REGISTRY>/apinizercloud/integration:<NEW_VERSION>
2.2.2.2) If containerd will be used:
# All required images are pulled
ctr image pull docker.io/apinizercloud/manager:<NEW_VERSION>
ctr image pull docker.io/apinizercloud/worker:<NEW_VERSION>
ctr image pull docker.io/apinizercloud/cache:<NEW_VERSION>
ctr image pull docker.io/apinizercloud/portal:<NEW_VERSION>
ctr image pull docker.io/apinizercloud/integration:<NEW_VERSION>
# The pulled images are re-tagged with the local registry prefix
ctr image tag docker.io/apinizercloud/manager:<NEW_VERSION> <LOCAL_REGISTRY>/apinizercloud/manager:<NEW_VERSION>
ctr image tag docker.io/apinizercloud/worker:<NEW_VERSION> <LOCAL_REGISTRY>/apinizercloud/worker:<NEW_VERSION>
ctr image tag docker.io/apinizercloud/cache:<NEW_VERSION> <LOCAL_REGISTRY>/apinizercloud/cache:<NEW_VERSION>
ctr image tag docker.io/apinizercloud/portal:<NEW_VERSION> <LOCAL_REGISTRY>/apinizercloud/portal:<NEW_VERSION>
ctr image tag docker.io/apinizercloud/integration:<NEW_VERSION> <LOCAL_REGISTRY>/apinizercloud/integration:<NEW_VERSION>
# Images are transferred to the local image registry of the organization:
ctr images push <LOCAL_REGISTRY>/apinizercloud/manager:<NEW_VERSION>
ctr images push <LOCAL_REGISTRY>/apinizercloud/worker:<NEW_VERSION>
ctr images push <LOCAL_REGISTRY>/apinizercloud/cache:<NEW_VERSION>
ctr images push <LOCAL_REGISTRY>/apinizercloud/portal:<NEW_VERSION>
ctr images push <LOCAL_REGISTRY>/apinizercloud/integration:<NEW_VERSION>
2.2.3) Updating the Apinizer Manager Application
Before updating other Apinizer components, Apinizer Manager must be updated.
After the Manager has updated the database, the other Apinizer applications must be started with the updated database settings.
Therefore, after updating the Manager, ensure that the Manager pods on Kubernetes are ready, and then update the other components.
This and the subsequent steps are run on a server with the Kubernetes Control Plane role.
# Information about the Deployment is checked
kubectl get deployments -Ao wide
# Manager's deployment image is updated
kubectl set image deployment/<MANAGER_DEPLOYMENT_NAME> -n <MANAGER_NAMESPACE> <MANAGER_CONTAINER_NAME>=<LOCAL_REGISTRY>/apinizercloud/manager:<NEW_VERSION>
# Wait for the pod to be READY, monitor pod status
kubectl get pods -n <MANAGER_NAMESPACE>
kubectl logs -f -n <MANAGER_NAMESPACE> <POD_NAME>
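If the Manager pod cannot pull the image because the local registry requires authentication from within the cluster, an image pull secret may be needed. The following is only a sketch; <REGISTRY_SECRET_NAME>, <REGISTRY_USER>, and <REGISTRY_PASSWORD> are hypothetical placeholders, and the same approach applies to the other namespaces and deployments:
# Create a docker-registry secret and reference it from the deployment
kubectl create secret docker-registry <REGISTRY_SECRET_NAME> -n <MANAGER_NAMESPACE> --docker-server=<LOCAL_REGISTRY> --docker-username=<REGISTRY_USER> --docker-password=<REGISTRY_PASSWORD>
kubectl patch deployment <MANAGER_DEPLOYMENT_NAME> -n <MANAGER_NAMESPACE> -p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name":"<REGISTRY_SECRET_NAME>"}]}}}}'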
2.2.4) Updating Apinizer Worker and Cache Applications
After making sure that the image of the Apinizer Manager application is updated, the Apinizer Worker and Cache applications are updated.
kubectl set image deployment/<WORKER_DEPLOYMENT_NAME> -n <WORKER_CACHE_NAMESPACE> <WORKER_CONTAINER_NAME>=<LOCAL_REGISTRY>/apinizercloud/worker:<NEW_VERSION>
kubectl set image deployment/<CACHE_DEPLOYMENT_NAME> -n <WORKER_CACHE_NAMESPACE> <CACHE_CONTAINER_NAME>=<LOCAL_REGISTRY>/apinizercloud/cache:<NEW_VERSION>
# Wait for the pod to be READY, monitor pod status
kubectl get pods -n <WORKER_CACHE_NAMESPACE>
kubectl logs -f -n <WORKER_CACHE_NAMESPACE> <POD_NAME>
2.2.5) Updating Apinizer Portal and Integration Applications
Apinizer Portal and Integration can be updated in a similar way.
kubectl set image deployment/<PORTAL_DEPLOYMENT_NAME> -n <PORTAL_NAMESPACE> <PORTAL_CONTAINER_NAME>=<LOCAL_REGISTRY>/apinizercloud/portal:<NEW_VERSION>
kubectl set image deployment/<INTEGRATION_DEPLOYMENT_NAME> -n <INTEGRATION_NAMESPACE> <INTEGRATION_CONTAINER_NAME>=<LOCAL_REGISTRY>/apinizercloud/integration:<NEW_VERSION>
# Wait for the pod to be READY, monitor pod status
kubectl get pods -n <PORTAL_NAMESPACE>
kubectl logs -f -n <PORTAL_NAMESPACE> <POD_NAME>
kubectl get pods -n <INTEGRATION_NAMESPACE>
kubectl logs -f -n <INTEGRATION_NAMESPACE> <POD_NAME>
3) Reverting Apinizer Application Updates
Downgrading versions in Apinizer applications is not supported. However, if backups of the updated version's database exist, it may be possible to revert to a previous state.
If there is any reason to roll back an update, the first step should be to verify that a MongoDB backup was taken prior to the update. If there is no pre-update database backup, and the Manager application has already applied changes to the database, there is no way to undo those changes. In such cases, if there is a system-level backup of the database servers, it may be possible to revert the system to the last backup and restore the system to its former state.
However, since this approach would result in the loss of all changes made after the backup date, it should be executed with caution. If needed, a backup of the updated database can be taken first, then the system backup can be restored to revert the application version, and afterward, data changes can be selectively imported from the updated backup to the restored system using collection-based migration.
Since the update may introduce structural changes to the database, this method may not be applicable in every case and should be considered for reference purposes only.
3.1) Restoring the MongoDB Database
Before proceeding, ensure that the Manager application is not stuck in a crash loop (CrashLoopBackOff) on Kubernetes. For a safe restore, scale down the Manager application by setting its replica count to 0.
# Scale down Manager and stop all pods
kubectl scale deploy <MANAGER_DEPLOYMENT_NAME> -n <MANAGER_NAMESPACE> --replicas=0
Then, restore MongoDB using the latest available backup. To avoid conflicts during restoration, the --drop parameter can be used; it drops only the collections that exist in the backup file before loading the data.
For detailed information, refer to the MongoDB Backup and Restore documentation.
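For reference, a restore command mirroring the backup taken in step 1 might look like the following; this is only a sketch, and the exact parameters should follow the MongoDB Backup and Restore documentation:
# Restore the pre-update backup on the MongoDB primary server
sudo mongorestore --host <PRIMARY_MONGODB_IP_ADDRESS> --port=25080 --username=apinizer --password=<PASSWORD> --authenticationDatabase=admin --gzip --archive=<BACKUP_DIRECTORY>/apinizer-backup--<CURRENT_VERSION>--<BACKUP_DATE>--01.archive --drop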
3.2) Reverting the Manager Application Version
If using a local image registry, update the commands accordingly.
# Set the Manager deployment image to the previous version and scale up to 1
kubectl set image deployment/<MANAGER_DEPLOYMENT_NAME> -n <MANAGER_NAMESPACE> <MANAGER_CONTAINER_NAME>=apinizercloud/manager:<OLD_VERSION>
kubectl scale deploy <MANAGER_DEPLOYMENT_NAME> -n <MANAGER_NAMESPACE> --replicas=1
# Wait for the pod to become READY; monitor pod status and logs
kubectl get pods -n <MANAGER_NAMESPACE>
kubectl logs -f -n <MANAGER_NAMESPACE> <POD_NAME>
3.3) Reverting Other Application Versions
Once the Manager application is operational, other components can also be reverted to their previous versions.
If using a local image registry, update the commands accordingly.
# Set deployment images to the previous version for all components
kubectl set image deployment/<WORKER_DEPLOYMENT_NAME> -n <WORKER_CACHE_NAMESPACE> <WORKER_CONTAINER_NAME>=apinizercloud/worker:<OLD_VERSION>
kubectl set image deployment/<CACHE_DEPLOYMENT_NAME> -n <WORKER_CACHE_NAMESPACE> <CACHE_CONTAINER_NAME>=apinizercloud/cache:<OLD_VERSION>
kubectl set image deployment/<PORTAL_DEPLOYMENT_NAME> -n <PORTAL_NAMESPACE> <PORTAL_CONTAINER_NAME>=apinizercloud/portal:<OLD_VERSION>
kubectl set image deployment/<INTEGRATION_DEPLOYMENT_NAME> -n <INTEGRATION_NAMESPACE> <INTEGRATION_CONTAINER_NAME>=apinizercloud/integration:<OLD_VERSION>
# Wait for pods to become READY; monitor their status and logs as needed
kubectl get pods -A