You can perform version upgrade operations for Apinizer images running on Kubernetes. Both online and offline update methods are supported, MongoDB backup operations can be managed, and update operations can be rolled back.
Apinizer version upgrade operations can be performed using two different methods depending on whether Kubernetes worker servers have internet access:
Online Update
Used when the Kubernetes cluster has internet access. Docker images are pulled directly from Docker Hub and updated automatically, providing a faster and easier update process.
When to Use:
If Kubernetes worker servers can access the internet
If you want to perform a quick and easy update
If you don’t want to take extra steps for image transfer
Offline Update
Used when the Kubernetes cluster does not have internet access or direct internet access is restricted due to security policies. Images are first pulled on a server with internet access and then either saved as .tar files for manual transfer or pushed to a local registry.
When to Use:
If Kubernetes worker servers cannot access the internet
If direct internet access is restricted due to security policies
If a local Docker registry (Nexus, Harbor, etc.) is used
Methods:
Manual Image Transfer
Local Registry Usage
Although not strictly part of the update, it is recommended to always back up the Apinizer database in MongoDB before upgrading, so that a rollback remains possible.
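For example, a point-in-time dump can be taken with mongodump (a sketch only; the host, port, credentials, database name, and archive file are placeholders to be adapted to your own environment):
mongodump --host <MONGO_HOST> --port <MONGO_PORT> --username <MONGO_USER> --password <MONGO_PASSWORD> --authenticationDatabase admin --db <DB_NAME> --gzip --archive=<BACKUP_FILE>.gz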
Before the update, it is recommended to review the Release Notes to identify changes that may be incompatible with the current system configuration and to take the necessary precautions.
The system basically consists of the apimanager (named manager in versions 2025.11.0 and earlier), worker, and cache applications. Depending on the license scope, the integration and apiportal (named portal in versions 2025.04.5 and earlier) applications may also be present.
Current version information can be accessed from Docker Hub or from the release notes page.
The risk of traffic interruption during update may vary depending on factors such as pod count, update strategy, and adequacy of server resources.
Replica/Pod Count
If the application being updated runs on only one pod, a short interruption may occur. The Api Manager component is recommended to run with a single pod.
Update Strategy
Unless a custom strategy was specified during installation, updates are performed with the RollingUpdate method, keeping at least one pod active. The Recreate strategy may also be preferred for the Api Manager and Cache components.
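The strategy currently configured for a deployment can be checked with kubectl before the update (a sketch; the deployment name and namespace are placeholders):
kubectl get deployment <DEPLOYMENT_NAME> -n <NAMESPACE> -o jsonpath='{.spec.strategy.type}'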
Server Resources
In environments where there are few Kubernetes worker servers and the existing resources are already used to their limits, manual intervention may be required during the update and a traffic interruption is likely.
If the relevant servers have internet access, continue with section 2.1 Online Update; otherwise, continue with section 2.2 Offline Update.
Apinizer Api Manager must be updated before the other Apinizer components. After Api Manager applies its updates to the database, the other Apinizer applications need to read the current settings from the database. Therefore, once Api Manager has been updated, make sure its pods on Kubernetes are in “ready” status before updating the other components.
Commands in this and subsequent steps are run on a server with the Kubernetes Control Plane role.
1. Check deployment information
kubectl get deployments -Ao wide
2. Update the Api Manager deployment image
kubectl set image deployment/<MANAGER_DEPLOYMENT_NAME> -n <MANAGER_NAMESPACE> <MANAGER_CONTAINER_NAME>=apinizercloud/apimanager:<NEW_VERSION>
3. Monitor pod status
Wait for the pod to be READY by monitoring the pod status and logs:
kubectl get pods -n <MANAGER_NAMESPACE>
kubectl logs -f -n <MANAGER_NAMESPACE> <POD_NAME>
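The progress of the rollout can also be watched with kubectl's built-in rollout status command (a minimal sketch, using the same deployment name and namespace placeholders as above):
kubectl rollout status deployment/<MANAGER_DEPLOYMENT_NAME> -n <MANAGER_NAMESPACE>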
# All images required for the upgrade are pulled on the online server
docker pull apinizercloud/apimanager:<NEW_VERSION>
docker pull apinizercloud/worker:<NEW_VERSION>
docker pull apinizercloud/cache:<NEW_VERSION>
docker pull apinizercloud/apiportal:<NEW_VERSION>
docker pull apinizercloud/integration:<NEW_VERSION>

# To transfer the images to the offline servers, they are saved in .tar format on the online server
docker save apinizercloud/apimanager:<NEW_VERSION> -o apinizercloud-apimanager.<NEW_VERSION>.tar
docker save apinizercloud/worker:<NEW_VERSION> -o apinizercloud-worker.<NEW_VERSION>.tar
docker save apinizercloud/cache:<NEW_VERSION> -o apinizercloud-cache.<NEW_VERSION>.tar
docker save apinizercloud/apiportal:<NEW_VERSION> -o apinizercloud-apiportal.<NEW_VERSION>.tar
docker save apinizercloud/integration:<NEW_VERSION> -o apinizercloud-integration.<NEW_VERSION>.tar

# Each image is transferred to the offline servers
scp apinizercloud-*.tar <OFFLINE_MACHINE_USER>@<OFFLINE_MACHINE_IP>:<TARGET_DIRECTORY>

# Images are loaded on each offline server
docker load -i apinizercloud-apimanager.<NEW_VERSION>.tar
docker load -i apinizercloud-worker.<NEW_VERSION>.tar
docker load -i apinizercloud-cache.<NEW_VERSION>.tar
docker load -i apinizercloud-apiportal.<NEW_VERSION>.tar
docker load -i apinizercloud-integration.<NEW_VERSION>.tar
Although different image registries and repositories work in different ways, in most of them the pulled images must be retagged and pushed to the registry.
If the registry is used as a reverse proxy (pull-through cache), it is sufficient to define the apinizercloud repository on hub.docker.com as its source.
# All required images are pulled
docker pull apinizercloud/apimanager:<NEW_VERSION>
docker pull apinizercloud/worker:<NEW_VERSION>
docker pull apinizercloud/cache:<NEW_VERSION>
docker pull apinizercloud/apiportal:<NEW_VERSION>
docker pull apinizercloud/integration:<NEW_VERSION>

# Pulled images are retagged with the necessary tag information
docker tag apinizercloud/apimanager:<NEW_VERSION> <LOCAL_REGISTRY>/apinizercloud/apimanager:<NEW_VERSION>
docker tag apinizercloud/worker:<NEW_VERSION> <LOCAL_REGISTRY>/apinizercloud/worker:<NEW_VERSION>
docker tag apinizercloud/cache:<NEW_VERSION> <LOCAL_REGISTRY>/apinizercloud/cache:<NEW_VERSION>
docker tag apinizercloud/apiportal:<NEW_VERSION> <LOCAL_REGISTRY>/apinizercloud/apiportal:<NEW_VERSION>
docker tag apinizercloud/integration:<NEW_VERSION> <LOCAL_REGISTRY>/apinizercloud/integration:<NEW_VERSION>

# Images are pushed to the organization's local image registry
docker push <LOCAL_REGISTRY>/apinizercloud/apimanager:<NEW_VERSION>
docker push <LOCAL_REGISTRY>/apinizercloud/worker:<NEW_VERSION>
docker push <LOCAL_REGISTRY>/apinizercloud/cache:<NEW_VERSION>
docker push <LOCAL_REGISTRY>/apinizercloud/apiportal:<NEW_VERSION>
docker push <LOCAL_REGISTRY>/apinizercloud/integration:<NEW_VERSION>
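As a quick check that the pushed images are available, the registry's repository list can be queried (a sketch, assuming the registry exposes the standard Docker Registry HTTP API; the credentials and registry address are placeholders):
curl -u <REGISTRY_USER>:<REGISTRY_PASSWORD> https://<LOCAL_REGISTRY>/v2/_catalog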
Apinizer Api Manager must be updated before the other Apinizer components. After Api Manager applies its updates to the database, the other Apinizer applications need to read the current settings from the database. Therefore, once Api Manager has been updated, make sure its pods on Kubernetes are in “ready” status before updating the other components.
Commands in this and subsequent steps are run on a server with the Kubernetes Control Plane role.
1. Check deployment information
kubectl get deployments -Ao wide
2. Update the Api Manager deployment image
kubectl set image deployment/<MANAGER_DEPLOYMENT_NAME> -n <MANAGER_NAMESPACE> <MANAGER_CONTAINER_NAME>=<LOCAL_REGISTRY>/apinizercloud/apimanager:<NEW_VERSION>
3. Monitor pod status
Wait for the pod to be READY by monitoring the pod status and logs:
kubectl get pods -n <MANAGER_NAMESPACE>
kubectl logs -f -n <MANAGER_NAMESPACE> <POD_NAME>
Version downgrade is not supported in Apinizer applications. However, if there are database backups, an updated version can be rolled back.
When an update needs to be rolled back for any reason, the first thing to do is to confirm that a MongoDB backup was taken before the update. If no such backup exists, there is no way to undo the changes the Api Manager application made to the database; in that case, if there is a system-level backup of the database servers, you can restore the most recent backup from before the update and return to that version. This operation will cause the loss of all changes made since that date, so it must be done carefully. Since structural differences may be introduced by the update, this approach should be considered for reference purposes only.
Before this operation, it should be confirmed that the Api Manager application is not in a restart loop with a CrashLoopBackOff error on Kubernetes. The safest method is to stop the Api Manager application by reducing its replica count to 0.
1. Stop Api Manager
The Api Manager replica count is reduced to 0 so that all of its pods are shut down.
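A typical way to do this is with kubectl scale (a sketch, using the same deployment name and namespace placeholders as in the earlier steps):
kubectl scale deployment/<MANAGER_DEPLOYMENT_NAME> -n <MANAGER_NAMESPACE> --replicas=0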
MongoDB should be restored from the most recent backup. The --drop parameter can be used to prevent possible conflicts during the restore; it drops only the collections that have a counterpart in the backup file and then loads the data.
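A minimal restore sketch with mongorestore, assuming the backup was taken as a gzipped archive as in the backup example above; the connection details, database name, and archive file are placeholders:
mongorestore --host <MONGO_HOST> --port <MONGO_PORT> --username <MONGO_USER> --password <MONGO_PASSWORD> --authenticationDatabase admin --nsInclude '<DB_NAME>.*' --drop --gzip --archive=<BACKUP_FILE>.gz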