This document explains the detailed usage of a specific policy. If you are using the Apinizer policy framework for the first time or want to learn about the general working principles of policies, we recommend reading the What is a Policy? page first.

Overview

The Log policy captures the current message state (headers, parameters, body, user info, error status, etc.) at any desired point in the API Gateway pipeline and sends it to configured connectors. Unlike standard API traffic logging, it can be placed at any stage of the pipeline and uses a lightweight data structure containing only the relevant fields.

Use Cases

  • Monitor message state at specific pipeline stages (before/after policy comparison)
  • Collect detailed logs for debugging and troubleshooting
  • Meet audit and compliance requirements
  • Send real-time data to external systems (SIEM, log analysis platforms)

Supported Targets

| Target | Protocol |
| --- | --- |
| Elasticsearch | REST/HTTP |
| Database (MySQL, MariaDB, Oracle, MongoDB) | JDBC / MongoDB Driver |
| Graylog | GELF |
| Webhook | HTTP POST |
| Syslog | Syslog Protocol |
| Kafka | Kafka Producer |
| RabbitMQ | AMQP |

Execution Modes

| Mode | Behavior |
| --- | --- |
| Synchronous | The pipeline waits until log delivery completes. Delivery errors break the pipeline. |
| Asynchronous | Log delivery happens in the background; the pipeline continues without interruption. Delivery errors are only logged. |

Configuration Fields

| Field | Required | Default | Description |
| --- | --- | --- | --- |
| Connector Selection | Yes | - | One or more connectors to send log data to |
| Execution Mode | No | Synchronous | Synchronous or asynchronous delivery |
| Correlation ID | No | Enabled | Include the request tracking ID |
| Environment Info | No | Enabled | Include gateway environment info |
| API Proxy Info | No | Enabled | Include API Proxy and method info |
| User Info | No | Enabled | Include the username or API key |
| HTTP Context | No | Enabled | Include the HTTP status code |
| Result Info | No | Enabled | Include the operation result and error type |
| Headers | No | Enabled | Include request headers |
| Parameters | No | Enabled | Include query parameters |
| Body | No | Enabled | Include the request body |
| Body Mode | No | Full | Full body or partial body (with a byte limit) |
| Body Byte Limit | No | - | Maximum number of bytes logged in partial mode |
| Privacy | No | Disabled | Mask sensitive data |

Data Structure

The Log policy uses a lightweight data structure that is different from standard API traffic logs. The following table lists the fields sent and their Elasticsearch/database equivalents.
| Field Name | Short Name (ES/JSON) | Type | Description |
| --- | --- | --- | --- |
| Timestamp | @timestamp | Date | Log record creation time |
| Correlation ID | aci | String | Apinizer request tracking ID |
| Environment ID | ei | String | Gateway environment identifier |
| API Proxy ID | api | String | API Proxy identifier |
| API Proxy Name | apn | String | API Proxy name |
| API Proxy Method ID | apmi | String | API Proxy method identifier |
| API Proxy Method Name | apmn | String | API Proxy method name |
| User/Key | uok | String | Authenticated username or API key |
| HTTP Status Code | sc | Number | HTTP response status code |
| Result Type | rt | String | Operation result (SUCCESS, ERROR, etc.) |
| Error Type | et | String | Error type (if any) |
| Request Headers | fcrh | List | Request headers as key-value pairs |
| Request Parameters | fcrp | List | Query parameters as key-value pairs |
| Request Body | fcrb | String | Request body content |
| Pipeline Position | cr | String | Pipeline region where the log policy is placed |
| Attachment Scope | cl | String | Hierarchy level where the log policy is attached |

Example JSON Output

{
  "@timestamp": "2026-03-31T12:30:45.123Z",
  "aci": "550e8400-e29b-41d4-a716-446655440000",
  "ei": "env-production-001",
  "api": "proxy-payment-api",
  "apn": "Payment API",
  "apmi": "method-create-payment",
  "apmn": "POST /payments",
  "uok": "merchant-api-key-123",
  "sc": 200,
  "rt": "SUCCESS",
  "fcrh": [
    { "k": "Content-Type", "v": "application/json" },
    { "k": "Authorization", "v": "Bearer eyJhbGciOi..." },
    { "k": "X-Request-ID", "v": "req-abc-123" }
  ],
  "fcrp": [
    { "k": "currency", "v": "TRY" },
    { "k": "lang", "v": "tr" }
  ],
  "fcrb": "{\"amount\": 150.00, \"merchantId\": \"M-001\"}",
  "cr": "FROM_CLIENT",
  "cl": "API_PROXY_METHOD"
}
Each entry in the header and parameter fields consists of a k (key) and v (value) pair. This structure is indexed as a nested type in Elasticsearch.
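Because headers and parameters arrive as arrays of k/v pairs, consumers often flatten them into plain maps before analysis. A minimal Python sketch (the sample record is abbreviated from the JSON output above; the helper name kv_to_dict is illustrative):

```python
def kv_to_dict(pairs):
    """Flatten a list of {"k": ..., "v": ...} entries into a plain dict.

    Duplicate keys (e.g. repeated query parameters) keep the last value;
    collect values into lists instead if duplicates matter for your analysis.
    """
    return {entry["k"]: entry["v"] for entry in pairs}

record = {
    "fcrh": [
        {"k": "Content-Type", "v": "application/json"},
        {"k": "X-Request-ID", "v": "req-abc-123"},
    ],
    "fcrp": [{"k": "currency", "v": "TRY"}],
}

headers = kv_to_dict(record["fcrh"])
params = kv_to_dict(record["fcrp"])
print(headers["X-Request-ID"])  # req-abc-123
print(params["currency"])       # TRY
```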

Pipeline Position (cr) Values

| Value | Description |
| --- | --- |
| FROM_CLIENT | First pipeline stage, where the incoming client request is processed |
| TO_BACKEND | Last pipeline stage before the request is forwarded to the backend |
| FROM_BACKEND | First pipeline stage where the backend response is processed |
| TO_CLIENT | Last pipeline stage before the response is sent to the client |

Attachment Scope (cl) Values

| Value | Description |
| --- | --- |
| API_PROXY_GROUP | Policy is attached to a proxy group; applies to all proxies in the group |
| API_PROXY | Policy is attached directly to an API Proxy |
| API_PROXY_METHOD | Policy is attached to a specific API Proxy method |

Elasticsearch Integration

A separate index template must be created for log policy data in Elasticsearch. This template is different from the standard API traffic log template and contains only the fields sent by the log policy.
The Elasticsearch index template and ILM policy for log policy data are not created automatically from the Apinizer UI. You must apply the following steps manually on Elasticsearch.

Step 1: Create ILM Policy

Create an ILM policy for index lifecycle management. The following example creates a policy that rolls over at 30 GB or 1 day and deletes after 30 days. Adjust the values according to your needs.
PUT _ilm/policy/apinizer-log-policy-capture-ilm
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "30gb",
            "max_age": "1d"
          }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}

Step 2: Create Index Template

Run the following command using Kibana Dev Tools or curl. The template includes data stream support.
The field types in the template must exactly match the JSON structure sent by the log policy. Do not change field types.
PUT _index_template/apinizer-log-policy-capture-template
{
  "index_patterns": ["apinizer-log-policy-capture*"],
  "data_stream": {},
  "template": {
    "settings": {
      "index": {
        "lifecycle": {
          "name": "apinizer-log-policy-capture-ilm"
        },
        "number_of_shards": 1,
        "number_of_replicas": 0,
        "refresh_interval": "5s"
      }
    },
    "mappings": {
      "properties": {
        "@timestamp": {
          "type": "date",
          "format": "yyyy-MM-dd'T'HH:mm:ss.S'Z'||yyyy-MM-dd'T'HH:mm:ss.SS'Z'||yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"
        },
        "aci": {
          "type": "keyword"
        },
        "ei": {
          "type": "keyword"
        },
        "api": {
          "type": "keyword"
        },
        "apn": {
          "type": "keyword"
        },
        "apmi": {
          "type": "keyword"
        },
        "apmn": {
          "type": "keyword"
        },
        "uok": {
          "type": "keyword",
          "ignore_above": 50
        },
        "sc": {
          "type": "short"
        },
        "rt": {
          "type": "keyword",
          "ignore_above": 7
        },
        "et": {
          "type": "keyword",
          "ignore_above": 75
        },
        "fcrh": {
          "type": "nested",
          "properties": {
            "k": {
              "type": "keyword"
            },
            "v": {
              "type": "keyword"
            }
          }
        },
        "fcrp": {
          "type": "nested",
          "properties": {
            "k": {
              "type": "keyword"
            },
            "v": {
              "type": "keyword"
            }
          }
        },
        "fcrb": {
          "type": "text"
        },
        "cr": {
          "type": "keyword"
        },
        "cl": {
          "type": "keyword"
        }
      }
    }
  }
}

Step 3: Create Data Stream

After the template is created, the data stream is created automatically when the first document arrives. To create it manually:
PUT _data_stream/apinizer-log-policy-capture

Connector Configuration

Enter apinizer-log-policy-capture in the Elasticsearch connector’s Index Name field. This name must match the index_patterns in the template.
If you want to use a different index name, update the index_patterns field in the template accordingly. For example, for project-based separation you can use apinizer-log-policy-capture-projectname.
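Because fcrh and fcrp are mapped as nested, matching a specific header name together with its value requires a nested query; a plain match on fcrh.k and fcrh.v would not pair a key with its own value. A sketch of such a request body, built here as a Python dict (the helper name header_query is illustrative; POST the body to the data stream's _search endpoint):

```python
import json

def header_query(name, value):
    """Build an Elasticsearch nested query matching one header k/v pair."""
    return {
        "query": {
            "nested": {
                "path": "fcrh",
                "query": {
                    "bool": {
                        "must": [
                            {"term": {"fcrh.k": name}},
                            {"term": {"fcrh.v": value}},
                        ]
                    }
                },
            }
        }
    }

body = header_query("X-Request-ID", "req-abc-123")
print(json.dumps(body, indent=2))
# Send to: POST /apinizer-log-policy-capture/_search
```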

Database Integration

To send log policy data to a relational database (MySQL/MariaDB or Oracle), create the following table structure. The example below uses Oracle data types (VARCHAR2, NUMBER, CLOB); for MySQL/MariaDB, substitute VARCHAR, INT, and TEXT respectively.
You must create this table in the target database before using the database connector. The table is not created automatically.
CREATE TABLE log_PolicyCapture (
  id VARCHAR2(255) PRIMARY KEY,
  log_timestamp TIMESTAMP,
  correlation_id VARCHAR2(255),
  environment_id VARCHAR2(255),
  api_proxy_id VARCHAR2(255),
  api_proxy_name VARCHAR2(255),
  proxy_method_id VARCHAR2(255),
  proxy_method_name VARCHAR2(255),
  username_or_key VARCHAR2(255),
  status_code NUMBER(10),
  result_type VARCHAR2(255),
  error_type VARCHAR2(255),
  from_client_ro_header CLOB,
  from_client_ro_param CLOB,
  from_client_ro_body CLOB,
  capture_region VARCHAR2(50),
  capture_location VARCHAR2(50)
);
| Column | Description |
| --- | --- |
| id | Record identifier |
| log_timestamp | Time the log record was created |
| correlation_id | Correlation identifier |
| environment_id | Environment identifier |
| api_proxy_id | API Proxy identifier |
| api_proxy_name | API Proxy name |
| proxy_method_id | Method identifier |
| proxy_method_name | Method name |
| username_or_key | Username or API key |
| status_code | HTTP response status code |
| result_type | Processing result |
| error_type | Error type |
| from_client_ro_header | Headers from the client (JSON) |
| from_client_ro_param | Query parameters from the client (JSON) |
| from_client_ro_body | Request body from the client |
| capture_region | Pipeline stage where the policy ran: FROM_CLIENT, TO_BACKEND, FROM_BACKEND, TO_CLIENT |
| capture_location | Level at which the policy was defined: API_PROXY_GROUP (Group), API_PROXY (Proxy), API_PROXY_METHOD (Method) |
Header and parameter fields (from_client_ro_header, from_client_ro_param) are stored in JSON format. Each record contains an array of key-value pairs.
Since log records can grow to large volumes over time, creating the following indexes is recommended to improve query performance. The index on the correlation_id column is particularly important. When the log policy is placed at different points in the pipeline (for example, FROM_CLIENT and TO_CLIENT), the request and response records of the same call share the same correlation_id value. This allows request and response records to be joined on correlation_id to trace a transaction end to end.
CREATE INDEX idx_logpolicy_correlation ON log_PolicyCapture(correlation_id);
CREATE INDEX idx_logpolicy_timestamp ON log_PolicyCapture(log_timestamp);
CREATE INDEX idx_logpolicy_proxy ON log_PolicyCapture(api_proxy_id);
CREATE INDEX idx_logpolicy_region ON log_PolicyCapture(capture_region);
| Index | Purpose |
| --- | --- |
| idx_logpolicy_correlation | Join request and response records of the same call on correlation_id |
| idx_logpolicy_timestamp | Filter by time range and produce time-based reports |
| idx_logpolicy_proxy | List records belonging to a specific API Proxy |
| idx_logpolicy_region | Filter by pipeline stage (FROM_CLIENT, TO_CLIENT, etc.) |
To display the request and response side by side, you can self-join the table on correlation_id and use capture_region as the discriminator. For example:
SELECT req.correlation_id, req.from_client_ro_body AS request_body, res.from_client_ro_body AS response_body
FROM log_PolicyCapture req
JOIN log_PolicyCapture res ON req.correlation_id = res.correlation_id
WHERE req.capture_region = 'FROM_CLIENT' AND res.capture_region = 'TO_CLIENT';
Since log tables accumulate time-series data at high volumes, applying daily partitioning on the log_timestamp column is recommended. With daily partitioning, deleting old records via DROP PARTITION becomes nearly instantaneous, and time-range queries scan only the relevant partitions.
Partition syntax varies by database:
  • Oracle: Use PARTITION BY RANGE (log_timestamp) INTERVAL (NUMTODSINTERVAL(1, 'DAY')) for automatic daily partition creation.
  • PostgreSQL: Define PARTITION BY RANGE (log_timestamp) and create a child table for each day (e.g., log_PolicyCapture_20260421).
  • MySQL/MariaDB: Use PARTITION BY RANGE (TO_DAYS(log_timestamp)). The partition key must be part of the PRIMARY KEY.
  • SQL Server: Define a partition function and partition scheme for daily ranges; new partitions are typically added via a scheduled job.

MongoDB

When using the MongoDB connector, no table creation is needed. Data is automatically written to the log_policycapture collection.
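To pair request and response records in MongoDB (the equivalent of the relational self-join shown earlier), a $lookup on the correlation id can be used. A sketch of the aggregation pipeline as a Python structure, assuming the documents carry the same short field names (aci, cr, fcrb) as the JSON output documented above; verify the field names against your actual collection:

```python
# Aggregation pipeline sketch: pair FROM_CLIENT records with their
# TO_CLIENT counterparts via $lookup on the correlation id ("aci").
# Assumption: MongoDB documents use the short field names from the
# JSON output above; this may differ in your deployment.
pipeline = [
    {"$match": {"cr": "FROM_CLIENT"}},
    {
        "$lookup": {
            "from": "log_policycapture",
            "localField": "aci",
            "foreignField": "aci",
            "as": "paired",
        }
    },
    {"$unwind": "$paired"},
    {"$match": {"paired.cr": "TO_CLIENT"}},
    {"$project": {"aci": 1, "request_body": "$fcrb", "response_body": "$paired.fcrb"}},
]
# Run with: db.log_policycapture.aggregate(pipeline)
```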

Privacy Settings

When privacy is enabled in the log policy, header, parameter, and body data matching the specified field names are masked. This feature can be used to meet data protection requirements such as GDPR. Masking is applied before data is sent to connectors. Connector-level privacy settings are applied separately and independently.
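The masking behavior can be illustrated conceptually. The sketch below (not Apinizer's actual implementation; the function name, placeholder string, and case-insensitive matching are assumptions for illustration) masks configured field names in a k/v list before the record would leave the gateway:

```python
MASK = "*****"  # illustrative placeholder; the policy's actual mask may differ

def mask_kv(pairs, sensitive, mask=MASK):
    """Replace the values of sensitive keys (case-insensitive) in a k/v list.

    Illustrative only: shows the concept of masking before delivery,
    not the policy's real matching rules.
    """
    lowered = {s.lower() for s in sensitive}
    return [
        {"k": p["k"], "v": mask if p["k"].lower() in lowered else p["v"]}
        for p in pairs
    ]

headers = [
    {"k": "Authorization", "v": "Bearer eyJhbGciOi..."},
    {"k": "Content-Type", "v": "application/json"},
]
print(mask_kv(headers, ["authorization"]))
```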

Deleting the Policy

For the steps to delete this policy and the actions to be taken when it is in use, see the Deleting a Policy section on the What is a Policy? page.

Exporting/Importing the Policy

For the export steps and available options for this policy, see the Exporting/Importing a Policy section on the What is a Policy? page.

Attaching the Policy to an API

For the process of attaching this policy to APIs, see the Attaching a Policy to an API section on the Policy Management page.