
Overview

Diagnostic endpoints let you collect detailed information about the operational status of the Worker and Cache modules. Through these endpoints you can access JVM metrics, thread information, connection states, environment variables, and more.

Use Cases

Performance Analysis

You can monitor JVM memory usage, thread counts, and system resources in real-time.

Troubleshooting

You can detect memory leaks and deadlock issues by taking thread dumps and heap dumps.

Capacity Planning

You can make scaling decisions by monitoring system resource usage.

Centralized Monitoring

You can perform cluster-wide status analysis by querying all pods collectively.

Access Methods

You can access diagnostic endpoints in two different ways:

Through API Manager: You can view diagnostic information for all pods on the System → Server Management screen in API Manager. This method is suitable for manual checks and quick analysis.

Through direct HTTP requests: You can call the endpoints directly, for example with curl as in the examples on this page. This method is suitable for scripting and automated monitoring.

Workflow

The operational flow of a diagnostic request is as follows: the caller authorizes with the Environment ID, the pod that receives the request answers directly when internal=true, and otherwise it broadcasts the request to all pods and aggregates the results.

Broadcast Mechanism

When internal=false (or when the parameter is not provided), the pod that receives the request forwards it to all other pods in the cluster and aggregates the results.
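The aggregated result groups each pod's data under a pods array. The sketch below is inferred from the jq filters used later on this page, so treat the exact field names as assumptions that may vary between versions:

# Broadcast call (no internal parameter): the receiving pod fans the request
# out to its peers and merges the answers
kubectl exec -it <any_pod_name> -n <namespace> -- curl -X GET \
  -H "Authorization: <ENVIRONMENT_ID>" \
  "http://worker-http-service.prod.svc.cluster.local:8091/apinizer/diagnostics/jvm"

# Illustrative, abridged shape of the aggregated response:
# {
#   "pods": [
#     { "podName": "<worker-pod-1>", "jvm": { "memory": { "heap": { "used": <bytes>, "max": <bytes> } } } },
#     { "podName": "<worker-pod-2>", "jvm": { "memory": { "heap": { "used": <bytes>, "max": <bytes> } } } }
#   ]
# }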

Authorization

All diagnostic endpoints require authorization. The active Environment ID is used as the token:

  1. Add Authorization Header
     Send the active Environment ID in the Authorization header of the request:
     Authorization: <ENVIRONMENT_ID>

  2. Validation
     Worker and Cache pods compare the incoming token with their own Environment ID.

  3. Access
     If the token matches, the request is processed; otherwise, 401 Unauthorized is returned.

You can obtain the Environment ID from the Environment settings in API Manager or from your system administrator. Requests with incorrect or missing authorization information are rejected.
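As a quick sanity check, you can compare the HTTP status codes with and without the header, reusing the Worker service URL from the examples below (the success code shown is an assumption; the 401 follows from the rule above):

# With the correct Environment ID the request is processed (expected: 200)
kubectl exec <any_pod_name> -n <namespace> -- curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: <ENVIRONMENT_ID>" \
  "http://worker-http-service.prod.svc.cluster.local:8091/apinizer/diagnostics/health?internal=true"

# Without the header (or with a wrong ID) the request is rejected (expected: 401)
kubectl exec <any_pod_name> -n <namespace> -- curl -s -o /dev/null -w "%{http_code}\n" \
  "http://worker-http-service.prod.svc.cluster.local:8091/apinizer/diagnostics/health?internal=true"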

Worker Diagnostic Endpoints

Available endpoints for the Worker module:

JVM Metrics

JVM memory usage, heap/non-heap information, garbage collection statistics.
# Single pod
kubectl exec -it <any_pod_name> -n <namespace> -- curl -X GET \
  -H "Authorization: <ENVIRONMENT_ID>" \
  "http://worker-http-service.prod.svc.cluster.local:8091/apinizer/diagnostics/jvm?internal=true"

# All pods (broadcast)
kubectl exec -it <any_pod_name> -n <namespace> -- curl -X GET \
  -H "Authorization: <ENVIRONMENT_ID>" \
  "http://worker-http-service.prod.svc.cluster.local:8091/apinizer/diagnostics/jvm"

Thread Information

Active thread count, thread states, thread pool usage.
kubectl exec -it <any_pod_name> -n <namespace> -- curl -X GET \
  -H "Authorization: <ENVIRONMENT_ID>" \
  "http://worker-http-service.prod.svc.cluster.local:8091/apinizer/diagnostics/threads?internal=true"

Thread Dump

Detailed stack trace information of all threads. Used for deadlock detection.
kubectl exec -it <any_pod_name> -n <namespace> -- curl -X GET \
  -H "Authorization: <ENVIRONMENT_ID>" \
  "http://worker-http-service.prod.svc.cluster.local:8091/apinizer/diagnostics/threaddump?internal=true"

Heap Dump

Binary dump of JVM heap memory. Used for memory analysis.
The heap dump endpoint does not support broadcast and returns a binary file (.hprof) as output. Therefore, the internal parameter is not used, and the result should be saved to a file.
kubectl exec -it <any_pod_name> -n <namespace> -- curl -X GET \
  -H "Authorization: <ENVIRONMENT_ID>" \
  "http://worker-http-service.prod.svc.cluster.local:8091/apinizer/diagnostics/heapdump" \
  --output heapdump-$(date +%Y%m%d-%H%M%S).hprof

Connection Information

Active HTTP connections, connection pool states, backend connections.
kubectl exec -it <any_pod_name> -n <namespace> -- curl -X GET \
  -H "Authorization: <ENVIRONMENT_ID>" \
  "http://worker-http-service.prod.svc.cluster.local:8091/apinizer/diagnostics/connections?internal=true"

Environment Variables

System and JVM environment variables, system properties.
kubectl exec -it <any_pod_name> -n <namespace> -- curl -X GET \
  -H "Authorization: <ENVIRONMENT_ID>" \
  "http://worker-http-service.prod.svc.cluster.local:8091/apinizer/diagnostics/env?internal=true"

Health Status

Pod health status, uptime, basic system information.
kubectl exec -it <any_pod_name> -n <namespace> -- curl -X GET \
  -H "Authorization: <ENVIRONMENT_ID>" \
  "http://worker-http-service.prod.svc.cluster.local:8091/apinizer/diagnostics/health?internal=true"

All Metrics

Collects all metrics except heap dump in a single call.
kubectl exec -it <any_pod_name> -n <namespace> -- curl -X GET \
  -H "Authorization: <ENVIRONMENT_ID>" \
  "http://worker-http-service.prod.svc.cluster.local:8091/apinizer/diagnostics/all?internal=true"

Cache Diagnostic Endpoints

Available endpoints for the Cache module:

JVM Metrics

# Single pod
kubectl exec -it <any_pod_name> -n <namespace> -- curl -X GET \
  -H "Authorization: <ENVIRONMENT_ID>" \
  "http://cache-http-service.prod.svc.cluster.local:8090/apinizer/diagnostics/jvm?internal=true"

# All pods (broadcast)
kubectl exec -it <any_pod_name> -n <namespace> -- curl -X GET \
  -H "Authorization: <ENVIRONMENT_ID>" \
  "http://cache-http-service.prod.svc.cluster.local:8090/apinizer/diagnostics/jvm"

Thread Information

kubectl exec -it <any_pod_name> -n <namespace> -- curl -X GET \
  -H "Authorization: <ENVIRONMENT_ID>" \
  "http://cache-http-service.prod.svc.cluster.local:8090/apinizer/diagnostics/threads?internal=true"

Thread Dump

kubectl exec -it <any_pod_name> -n <namespace> -- curl -X GET \
  -H "Authorization: <ENVIRONMENT_ID>" \
  "http://cache-http-service.prod.svc.cluster.local:8090/apinizer/diagnostics/threaddump?internal=true"

Heap Dump

kubectl exec -it <any_pod_name> -n <namespace> -- curl -X GET \
  -H "Authorization: <ENVIRONMENT_ID>" \
  "http://cache-http-service.prod.svc.cluster.local:8090/apinizer/diagnostics/heapdump" \
  --output cache-heapdump-$(date +%Y%m%d-%H%M%S).hprof

Environment Variables

kubectl exec -it <any_pod_name> -n <namespace> -- curl -X GET \
  -H "Authorization: <ENVIRONMENT_ID>" \
  "http://cache-http-service.prod.svc.cluster.local:8090/apinizer/diagnostics/env?internal=true"

Health Status

kubectl exec -it <any_pod_name> -n <namespace> -- curl -X GET \
  -H "Authorization: <ENVIRONMENT_ID>" \
  "http://cache-http-service.prod.svc.cluster.local:8090/apinizer/diagnostics/health?internal=true"

Hazelcast Metrics

Hazelcast cluster information, cache statistics, distributed map metrics.
This endpoint is only available in the Cache module and shows the detailed status of the Hazelcast cluster.
kubectl exec -it <any_pod_name> -n <namespace> -- curl -X GET \
  -H "Authorization: <ENVIRONMENT_ID>" \
  "http://cache-http-service.prod.svc.cluster.local:8090/apinizer/diagnostics/hazelcast?internal=true"

All Metrics

kubectl exec -it <any_pod_name> -n <namespace> -- curl -X GET \
  -H "Authorization: <ENVIRONMENT_ID>" \
  "http://cache-http-service.prod.svc.cluster.local:8090/apinizer/diagnostics/all?internal=true"

Endpoint Comparison Table

Endpoint       Worker  Cache  Broadcast  Description
/jvm           Yes     Yes    Yes        JVM memory and GC metrics
/threads       Yes     Yes    Yes        Thread information and states
/threaddump    Yes     Yes    Yes        Detailed thread stack trace
/heapdump      Yes     Yes    No         Binary heap dump file
/connections   Yes     No     Yes        HTTP connection pool information
/env           Yes     Yes    Yes        Environment variables
/health        Yes     Yes    Yes        Health status and uptime
/hazelcast     No      Yes    Yes        Hazelcast cluster metrics
/all           Yes     Yes    Yes        All metrics (except heapdump)
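
If you want to verify quickly which endpoints respond on your installation, a small loop over the Worker paths listed above works. This is only a sketch; for the Cache module you would swap in the cache service name, port 8090, and /hazelcast instead of /connections:

# Smoke test the Worker diagnostic endpoints (heapdump is skipped because it
# returns a binary file rather than JSON)
for ep in jvm threads threaddump connections env health; do
  code=$(kubectl exec <any_pod_name> -n <namespace> -- curl -s -o /dev/null -w "%{http_code}" \
    -H "Authorization: <ENVIRONMENT_ID>" \
    "http://worker-http-service.prod.svc.cluster.local:8091/apinizer/diagnostics/$ep?internal=true")
  echo "$ep: $code"
done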

Use Case Scenarios

You can track down memory leaks by watching heap usage over time and analyzing a heap dump:
  1. Regularly monitor memory usage with the /jvm endpoint
  2. Detect a continuous increase in heap memory
  3. Take a heap dump with /heapdump
  4. Analyze it with Eclipse MAT or VisualVM
# Monitor memory usage
curl -X GET -H "Authorization: <ENV_ID>" \
  "http://<WORKER_URL>/apinizer/diagnostics/jvm" | jq '.pods[].jvm.memory'

# Take heap dump
curl -X GET -H "Authorization: <ENV_ID>" \
  "http://<WORKER_URL>/apinizer/diagnostics/heapdump" \
  --output analysis.hprof
You can detect deadlock situations by taking thread dumps:
# Take thread dump
curl -X GET -H "Authorization: <ENV_ID>" \
  "http://<WORKER_URL>/apinizer/diagnostics/threaddump" \
  | jq '.pods[].threadDump' > threaddump.txt

# Search for "deadlock" in the file
grep -i "deadlock" threaddump.txt
You can check the status of all pods in a single call:
# Get all metrics of all pods
curl -X GET -H "Authorization: <ENV_ID>" \
  "http://<WORKER_URL>/apinizer/diagnostics/all" \
  | jq '.' > cluster-diagnostics.json

# List memory usage of each pod
cat cluster-diagnostics.json | jq '.pods[] | {pod: .podName, heapUsed: .jvm.memory.heap.used, heapMax: .jvm.memory.heap.max}'
You can integrate with Prometheus, Grafana, or custom monitoring systems:
#!/bin/bash
# monitoring-script.sh
while true; do
  RESPONSE=$(curl -s -X GET -H "Authorization: $ENV_ID" \
    "http://$WORKER_URL/apinizer/diagnostics/jvm")
  
  echo "$RESPONSE" | jq -r '.pods[] | "\(.podName) - Heap: \(.jvm.memory.heap.used)/\(.jvm.memory.heap.max)"'
  
  sleep 60
done
You can check connection pool states in Worker pods:
# Connection pool metrics
curl -X GET -H "Authorization: <ENV_ID>" \
  "http://<WORKER_URL>/apinizer/diagnostics/connections" \
  | jq '.pods[] | {pod: .podName, activeConnections: .connections.active, idleConnections: .connections.idle}'

Best Practices

Regular Monitoring

Perform proactive troubleshooting by periodically checking /health and /jvm endpoints.
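A simple way to do this is a small script run on a schedule. The sketch below assumes it runs somewhere that can reach the cluster-internal service URL (otherwise wrap the curl calls in kubectl exec as in the examples above); the file paths and cron schedule are only illustrative:

#!/bin/bash
# periodic-diagnostics.sh -- archive broadcast /health and /jvm snapshots
ENV_ID="<ENVIRONMENT_ID>"
WORKER_URL="worker-http-service.prod.svc.cluster.local:8091"
TS=$(date +%Y%m%d-%H%M%S)

curl -s -H "Authorization: $ENV_ID" \
  "http://$WORKER_URL/apinizer/diagnostics/health" > "health-$TS.json"
curl -s -H "Authorization: $ENV_ID" \
  "http://$WORKER_URL/apinizer/diagnostics/jvm" > "jvm-$TS.json"

# Example cron entry to run this every 15 minutes:
# */15 * * * * /opt/scripts/periodic-diagnostics.sh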

Broadcast Usage

Use broadcast mode (without the internal parameter) to check all pods in the production environment.

Heap Dump Size

Heap dumps can create large files. Ensure sufficient disk space is available.
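With the kubectl exec pattern used above, curl writes the .hprof file inside the pod you exec into, so check the free space there before taking the dump and copy the file out afterwards. This is a sketch: kubectl cp assumes tar is available in the container image, and the filename placeholder should be replaced with the one you actually created.

# Check free disk space inside the pod that will hold the .hprof file
kubectl exec <any_pod_name> -n <namespace> -- df -h

# Copy the finished dump to your workstation for analysis
kubectl cp <namespace>/<any_pod_name>:heapdump-<timestamp>.hprof ./heapdump-<timestamp>.hprof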

Authorization Security

Store the Environment ID securely and do not expose it in logs.
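One way to keep the ID out of shell history and log files is to load it from a Kubernetes secret at call time. The secret and key names below are hypothetical, and the call assumes the service URL is reachable from where the script runs:

# Read the Environment ID from a secret instead of typing it inline
# (secret "apinizer-environment-id" and key "envId" are hypothetical names)
ENV_ID=$(kubectl get secret apinizer-environment-id -n <namespace> \
  -o jsonpath='{.data.envId}' | base64 -d)

# Use the variable in requests and avoid echoing it
curl -s -H "Authorization: $ENV_ID" \
  "http://worker-http-service.prod.svc.cluster.local:8091/apinizer/diagnostics/health" | jq '.'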
Performance Note: Heap dump and thread dump operations can create load on the pod. It is recommended to perform these operations during low-traffic hours in production environments.