Worker and Cache modules expose detailed system metrics, JVM information, thread states, and performance data through diagnostic endpoints. You can call these endpoints via pod-to-pod communication from within Kubernetes or through API calls from outside the cluster, query all pods collectively, and analyze system performance. The information they provide is critical for troubleshooting, performance analysis, and capacity planning.
Diagnostic endpoints let you collect detailed information about the operational status of the Worker and Cache modules, including JVM metrics, thread information, connection states, environment variables, and more.
You can access diagnostic endpoints in two ways:
Server Management Screen
You can view diagnostic information for all pods through the visual interface under System → Server Management in API Manager. This method is suitable for manual checks and quick analysis.
API Access
You can call the endpoints directly with HTTP requests. This method is ideal for automation, monitoring systems, and script-based checks.
When internal=false (or when the parameter is not provided), the pod that receives the request forwards it to all other pods in the cluster and collects the results:
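For example, the same endpoint can be queried in broadcast mode or restricted to a single pod. This is a sketch: the /jvm path matches the examples later in this document, but passing internal as a query parameter is an assumption, so verify the exact syntax for your version.
# Broadcast: the receiving pod queries every pod in the cluster and merges the results
curl -X GET -H "Authorization: <ENV_ID>" \
  "http://<WORKER_URL>/apinizer/diagnostics/jvm"

# Single pod only (assumed query-parameter syntax): skip the broadcast
curl -X GET -H "Authorization: <ENV_ID>" \
  "http://<WORKER_URL>/apinizer/diagnostics/jvm?internal=true"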
All diagnostic endpoints require authorization; the active Environment ID is used as the token:
1. Add Authorization Header
Send the environment ID in the Authorization header of the request:
Authorization: <ENVIRONMENT_ID>
2. Validation
Worker and Cache pods compare the incoming token with their own environment IDs.
3. Access
If the token matches, the request is processed; otherwise, 401 Unauthorized is returned.
You can obtain the Environment ID from Environment settings in API Manager or from your system administrator. Requests made with incorrect or missing authorization information will be rejected.
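A quick way to verify the authorization flow, assuming the /jvm endpoint and header format shown in the examples below; curl's -i flag prints the HTTP status so a 401 is visible immediately:
# Valid environment ID: the request is processed
curl -i -X GET -H "Authorization: <ENV_ID>" \
  "http://<WORKER_URL>/apinizer/diagnostics/jvm"

# Missing or incorrect ID: the pod responds with 401 Unauthorized
curl -i -X GET "http://<WORKER_URL>/apinizer/diagnostics/jvm"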
The /heapdump endpoint returns a binary dump of JVM heap memory and is used for memory analysis.
The heap dump endpoint does not support broadcast and returns a binary file (.hprof) as output. Therefore, the internal parameter is not used, and the result should be saved to a file.
A typical memory analysis workflow:
1. Regularly monitor memory usage with the /jvm endpoint
2. Detect a continuous increase in heap memory
3. Take a heap dump with /heapdump
4. Analyze the dump with Eclipse MAT or VisualVM
# Monitor memory usage
curl -X GET -H "Authorization: <ENV_ID>" \
  "http://<WORKER_URL>/apinizer/diagnostics/jvm" | jq '.pods[].jvm.memory'

# Take heap dump
curl -X GET -H "Authorization: <ENV_ID>" \
  "http://<WORKER_URL>/apinizer/diagnostics/heapdump" \
  --output analysis.hprof
Deadlock Analysis
You can detect deadlock situations by taking thread dumps:
# Take thread dump
curl -X GET -H "Authorization: <ENV_ID>" \
  "http://<WORKER_URL>/apinizer/diagnostics/threaddump" \
  | jq '.pods[].threadDump' > threaddump.txt

# Search for "deadlock" in the file
grep -i "deadlock" threaddump.txt
Cluster-Wide Performance Analysis
You can check the status of all pods in a single call:
# Get all metrics of all pods
curl -X GET -H "Authorization: <ENV_ID>" \
  "http://<WORKER_URL>/apinizer/diagnostics/all" \
  | jq '.' > cluster-diagnostics.json

# List memory usage of each pod
cat cluster-diagnostics.json | jq '.pods[] | {pod: .podName, heapUsed: .jvm.memory.heap.used, heapMax: .jvm.memory.heap.max}'
Automated Monitoring Integration
You can integrate with Prometheus, Grafana, or custom monitoring systems. Perform proactive troubleshooting by periodically checking the /health and /jvm endpoints.
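As a starting point, a small polling script can feed these endpoints into an external system. This is only a sketch: the /health and /jvm paths come from this document, while the 60-second interval and the jq field names (taken from the cluster-diagnostics example above) are assumptions to adapt to your own setup.
#!/bin/bash
# Periodically poll diagnostics and print heap usage per pod (sketch)
ENV_ID="<ENV_ID>"
WORKER_URL="<WORKER_URL>"

while true; do
  # Liveness check; -f makes curl fail on HTTP error statuses
  curl -sf -H "Authorization: ${ENV_ID}" "http://${WORKER_URL}/apinizer/diagnostics/health" > /dev/null \
    || echo "$(date) health check failed"

  # Heap usage of each pod (field names assumed from the cluster-diagnostics example)
  curl -s -H "Authorization: ${ENV_ID}" "http://${WORKER_URL}/apinizer/diagnostics/jvm" \
    | jq -r '.pods[] | "\(.podName) heapUsed=\(.jvm.memory.heap.used)"'

  sleep 60
done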
Broadcast Usage
Use broadcast mode (without the internal parameter) to check all pods in production environments.
Heap Dump Size
Heap dumps can create large files. Ensure sufficient disk space is available.
Authorization Security
Store the Environment ID securely and do not expose it in logs.
Performance Note: Heap dump and thread dump operations can put additional load on the pod. In production environments, it is recommended to perform them during low-traffic hours.