
Overview of Apinizer’s Metric System

- Apinizer Gateway: Collects metrics related to API traffic, external connections, JVM health, and system resources
- Apinizer Cache: Monitors cache operations, API requests, JVM performance, and system health
Metrics Collected by Apinizer

Apinizer Gateway Metrics
The Gateway component collects metrics in several categories:

API Traffic Metrics
These metrics track requests passing through the Apinizer Gateway:
- Total API traffic requests
- Successful/failed/blocked API requests
- Request processing times (pipeline, routing, total)
- Request and response sizes
- Cache hit statistics
- Total metrics (e.g., total API requests across all APIs)
- Tagged metrics with detailed dimensions (e.g., requests per API ID or API name; see the example query after this list)
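
Tagged metrics can be broken down by their dimensions in PromQL. A minimal sketch, using the apinizer_api_traffic_total_count_tagged_total metric and the api_name label that appear later in this article (names in your installation may differ slightly):

```promql
# Per-API request rate over the last 5 minutes, grouped by the api_name tag
sum by (api_name) (rate(apinizer_api_traffic_total_count_tagged_total[5m]))
```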

External Connection Metrics
These track connections made to external services:
- Total external requests
- External error count
- External response times
JVM Metrics
These provide insights into the Java Virtual Machine:
- Memory usage (heap, non-heap)
- Garbage collection statistics
- Thread counts and states
System Metrics
These monitor the underlying system:
- CPU usage
- Processor count
- System load average
- File descriptor counts
Apinizer Cache Metrics
The Cache component collects the following:

Cache Operation Metrics
- Cache get/put counts
- Cache size and entry counts
- Cache operation latencies
- Memory usage by cache entries
API Metrics
- API request counts
- API response times
- API error counts
JVM and System Metrics
Like the Gateway, the Cache component also tracks JVM performance and system resource usage.

Setting Up Prometheus Integration
1. Enabling Metrics in Apinizer Components
For Apinizer Gateway:
Go to the Gateway Environments page in the Apinizer interface and enable the “Prometheus Metric Server” option. This activates metric publishing on port 9091.

For Apinizer Cache:
Edit the Cache deployment and add the METRICS_ENABLED=TRUE environment variable. This can be done in either of the following ways:
Through the Apinizer interface: follow the Gateway Environments > Deployments & Services > Cache > Edit Deployment path.
Through the Kubernetes CLI:
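A minimal sketch of the CLI approach, assuming the Cache deployment is named cache and lives in the apinizer namespace (both names are assumptions; adjust them to your environment):

```bash
# Add (or update) the METRICS_ENABLED environment variable on the Cache deployment;
# the deployment name and namespace below are assumptions, not fixed values
kubectl set env deployment/cache METRICS_ENABLED=TRUE -n apinizer

# Verify that the variable is set
kubectl get deployment cache -n apinizer -o jsonpath='{.spec.template.spec.containers[0].env}'
```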
2. Configuring Prometheus to Collect Metrics
You can configure Prometheus to collect metrics from Apinizer components in two different ways:

Constant Scraping
Create a service targeting the Apinizer components on port 9091 and point a static scrape job at it, as in the sketch below.
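A minimal sketch of this approach, assuming a Service named apinizer-metrics in the apinizer namespace in front of the Gateway/Cache pods (the names and selector labels are assumptions):

```yaml
# Service exposing the metric port of the Apinizer pods (name, namespace and selector are assumptions)
apiVersion: v1
kind: Service
metadata:
  name: apinizer-metrics
  namespace: apinizer
spec:
  selector:
    app: worker          # adjust to the labels used by your Gateway/Cache pods
  ports:
    - name: metrics
      port: 9091
      targetPort: 9091
---
# prometheus.yml: static scrape job pointing at the service above
scrape_configs:
  - job_name: apinizer
    static_configs:
      - targets:
          - apinizer-metrics.apinizer.svc.cluster.local:9091
```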

Dynamic Scraping with Kubernetes Service Discovery
For more flexible configurations, you can use Kubernetes service discovery with pod annotations:
- Add annotations to the Deployment (sketch below):
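A sketch of the annotations, added under the pod template of the Gateway or Cache Deployment (the prometheus.io/* annotation names follow the common community convention and must match the relabeling rules in your Prometheus configuration):

```yaml
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9091"
```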
- Configure Prometheus to use Kubernetes service discovery:
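A sketch of the corresponding Prometheus job that discovers pods by those annotations (a standard pod-role service discovery configuration; adjust namespaces and relabeling to your setup):

```yaml
scrape_configs:
  - job_name: apinizer-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only pods that opted in via the prometheus.io/scrape annotation
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      # Scrape the port declared in the prometheus.io/port annotation
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: '([^:]+)(?::\d+)?;(\d+)'
        replacement: '$1:$2'
        target_label: __address__
      # Carry namespace and pod name into the scraped series
      - source_labels: [__meta_kubernetes_namespace]
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        target_label: pod
```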
Analyzing Apinizer Metrics with PromQL
After Prometheus starts collecting metrics from Apinizer components, you can use PromQL (Prometheus Query Language) to analyze the data. Here are some useful query examples:
Gateway API Traffic Analysis
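A few illustrative queries, built from the Gateway traffic metrics used throughout this article (metric names exposed by your installation may differ slightly):

```promql
# Total request rate across all APIs (requests/second, 5-minute window)
sum(rate(apinizer_api_traffic_total_count_total[5m]))

# Error ratio as a percentage of all requests
sum(rate(apinizer_api_traffic_error_count_total[5m]))
  / sum(rate(apinizer_api_traffic_total_count_total[5m])) * 100

# Average end-to-end processing time in milliseconds
sum(rate(apinizer_api_traffic_total_time_seconds_sum[5m]))
  / sum(rate(apinizer_api_traffic_total_time_seconds_count[5m])) * 1000

# Top 5 APIs by request count over the last 5 minutes
topk(5, sum by (api_name) (increase(apinizer_api_traffic_total_count_tagged_total[5m])))
```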

Cache Performance Analysis
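Illustrative queries based on the cache metrics used in the dashboards below (a sketch; adapt window sizes and aggregations to your needs):

```promql
# Cache get and put rates (operations/second)
rate(cache_gets_total[5m])
rate(cache_puts_total[5m])

# Share of cache gets that did not result in an error, as a percentage
(sum(increase(cache_gets_total[5m])) - sum(increase(apinizer_cache_api_errors_total[5m])))
  / sum(increase(cache_gets_total[5m])) * 100

# Total memory held by cache entries (bytes)
sum(cache_entry_memory_bytes)
```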
JVM Analysis
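Illustrative JVM and system queries, using the standard JVM metrics referenced in the dashboards below:

```promql
# Heap and non-heap memory usage as a percentage of the configured maximum
sum by (area) (jvm_memory_used_bytes) / sum by (area) (jvm_memory_max_bytes) * 100

# Live JVM thread count per pod
sum by (pod) (jvm_threads_live_threads)

# CPU usage per pod (fraction of a core, multiplied by 100 for %)
sum by (pod) (system_cpu_usage) * 100
```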
Creating Grafana Dashboards
After setting Prometheus as the data source in Grafana, you can create dashboards to visualize Apinizer metrics. A short data source provisioning sketch is shown below, followed by some panel suggestions.
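If you provision Grafana declaratively, a minimal data source sketch might look like the following (the Prometheus URL is an assumption for an in-cluster deployment; adjust it to your setup):

```yaml
# e.g. /etc/grafana/provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus-server.monitoring.svc.cluster.local:9090   # assumption; adjust
    isDefault: true
```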

API Traffic Dashboard

Panel 1: Request Rates
- Metrics:
  - Total requests: sum(rate(apinizer_api_traffic_total_count_total[5m]))
  - Successful requests: sum(rate(apinizer_api_traffic_success_count_total[5m]))
  - Failed requests: sum(rate(apinizer_api_traffic_error_count_total[5m]))
- Visualization: Time series

Panel 2: Top 5 APIs by Request Count
- Metric: topk(5, sum by (api_name) (increase(apinizer_api_traffic_total_count_tagged_total[5m])))
- Visualization: Bar chart

Panel 3: Average Processing Times (ms)
- Metrics:
  - Request pipeline: sum(rate(apinizer_api_traffic_request_pipeline_time_seconds_sum[5m])) / sum(rate(apinizer_api_traffic_request_pipeline_time_seconds_count[5m])) * 1000
  - Routing: sum(rate(apinizer_api_traffic_routing_time_seconds_sum[5m])) / sum(rate(apinizer_api_traffic_routing_time_seconds_count[5m])) * 1000
  - Response pipeline: sum(rate(apinizer_api_traffic_response_pipeline_time_seconds_sum[5m])) / sum(rate(apinizer_api_traffic_response_pipeline_time_seconds_count[5m])) * 1000
- Visualization: Time series

Panel 4: Average Request and Response Sizes (bytes)
- Metrics:
  - Request size: sum(rate(apinizer_api_traffic_request_size_bytes_sum[5m])) / sum(rate(apinizer_api_traffic_request_size_bytes_count[5m]))
  - Response size: sum(rate(apinizer_api_traffic_response_size_bytes_sum[5m])) / sum(rate(apinizer_api_traffic_response_size_bytes_count[5m]))
- Visualization: Time series
Cache Performance Dashboard

Panel 1: Cache Operation Rates
- Metrics:
  - Get operations: rate(cache_gets_total[5m])
  - Put operations: rate(cache_puts_total[5m])
- Visualization: Time series

Panel 2: Cache Success Rate (%)
- Metric: (sum(increase(cache_gets_total[5m])) - sum(increase(apinizer_cache_api_errors_total[5m]))) / sum(increase(cache_gets_total[5m])) * 100
- Visualization: Gauge

Panel 3: Cache Entry Memory Usage
- Metric: sum(cache_entry_memory_bytes)
- Visualization: Stat or gauge
System Health Dashboard

Panel 1: JVM Memory Usage (%)
- Metric: sum by (area)(jvm_memory_used_bytes) / sum by (area)(jvm_memory_max_bytes) * 100
- Visualization: Gauge or time series

Panel 2: CPU Usage (%)
- Metric: sum(system_cpu_usage{pod=~".*"}) by (pod) * 100
- Visualization: Time series

Panel 3: Live JVM Threads
- Metric: sum(jvm_threads_live_threads)
- Visualization: Stat or gauge
Best Practices
1. Metric Retention Duration
Configure appropriate retention durations in Prometheus according to your needs. The default configuration stores data for 7 days:
- "--storage.tsdb.path=/prometheus"
- "--storage.tsdb.retention.time=7d"
2. Alert Configuration
Set up alerts for critical metrics in Prometheus Alertmanager or Grafana (a sample alerting rule follows this list):
- High API error rate: (sum(increase(apinizer_api_traffic_error_count_total[5m])) / sum(increase(apinizer_api_traffic_total_count_total[5m]))) * 100 > 10
- High memory usage: sum(jvm_memory_used_bytes) / sum(jvm_memory_max_bytes) * 100 > 85
- Slow response times: sum(rate(apinizer_api_traffic_total_time_seconds_sum[5m])) / sum(rate(apinizer_api_traffic_total_time_seconds_count[5m])) > 1
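A minimal sketch of how the first condition could be expressed as a Prometheus alerting rule (the group, alert name, and thresholds here are illustrative, not part of Apinizer):

```yaml
groups:
  - name: apinizer-alerts
    rules:
      - alert: HighApiErrorRate
        expr: |
          (sum(increase(apinizer_api_traffic_error_count_total[5m]))
            / sum(increase(apinizer_api_traffic_total_count_total[5m]))) * 100 > 10
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "API error rate above 10% for the last 5 minutes"
```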
3. Dashboard Organization
Organize your Grafana dashboards logically:
- Create separate dashboards for the Gateway and Cache components
- Group related metrics together
- Use variables that allow filtering by namespace, pod, or API (an example variable query follows this list)
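For example, a dashboard variable that lists API names could use a query like the following sketch, built on the tagged traffic metric (Grafana's label_values helper for Prometheus data sources is assumed):

```promql
# Grafana variable query: all api_name values seen on the tagged traffic metric
label_values(apinizer_api_traffic_total_count_tagged_total, api_name)
```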
4. Label Usage
Take advantage of Prometheus labels for more effective querying:
- Filter by specific APIs using the api_name label (an example follows this list)
- Analyze metrics by namespace or pod
- Compare performance across different environments
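For instance, to watch the request rate of a single API, filter on the api_name label (the API name below is a placeholder):

```promql
# Request rate for one API, identified by its api_name tag (placeholder value)
sum(rate(apinizer_api_traffic_total_count_tagged_total{api_name="example-api"}[5m]))
```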
Conclusion
Integrating Apinizer with Prometheus and Grafana provides powerful monitoring capabilities for your API management infrastructure. By properly configuring metric collection, creating informative dashboards, and implementing alerts, you can ensure optimal performance, quickly detect issues, and make data-driven decisions about your API ecosystem. This integration draws on the strengths of each component:
- Apinizer’s comprehensive metric collection
- Prometheus’s efficient time series database and powerful query language
- Grafana’s flexible and beautiful visualizations

Resources
For more information:
- Apinizer Documentation
- Prometheus Documentation
- Grafana Documentation

