This document explains the detailed usage of a specific policy. If you are using the Apinizer policy framework for the first time or want to learn about the general working principles of policies, we recommend reading the What is a Policy? page first.

Overview

The Log policy captures the current message state (headers, parameters, body, user info, error status, etc.) at any desired point in the API Gateway pipeline and sends it to configured connectors. Unlike standard API traffic logging, it can be placed at any stage of the pipeline and uses a lightweight data structure containing only the relevant fields.

Use Cases

  • Monitor message state at specific pipeline stages (before/after policy comparison)
  • Collect detailed logs for debugging and troubleshooting
  • Meet audit and compliance requirements
  • Send real-time data to external systems (SIEM, log analysis platforms)

Supported Targets

Target | Protocol
Elasticsearch | REST/HTTP
Database (MySQL, MariaDB, Oracle, MongoDB) | JDBC / MongoDB Driver
Graylog | GELF
Webhook | HTTP POST
Syslog | Syslog Protocol
Kafka | Kafka Producer
RabbitMQ | AMQP

Execution Modes

Mode | Behavior
Synchronous | The pipeline waits until log delivery completes. Delivery errors break the pipeline.
Asynchronous | Log delivery happens in the background; the pipeline continues without interruption. Errors are only logged.

Configuration Fields

Field | Required | Default | Description
Connector Selection | Yes | - | One or more connectors to send log data to
Execution Mode | No | Synchronous | Synchronous or Asynchronous delivery
Correlation ID | No | Enabled | Include request tracking ID
Environment Info | No | Enabled | Include gateway environment info
API Proxy Info | No | Enabled | Include API Proxy and method info
User Info | No | Enabled | Include username or API key
HTTP Context | No | Enabled | Include HTTP status code
Result Info | No | Enabled | Include operation result and error type
Headers | No | Enabled | Include request headers
Parameters | No | Enabled | Include query parameters
Body | No | Enabled | Include request body
Body Mode | No | Full | Full body or partial body (with byte limit)
Body Byte Limit | No | - | Maximum bytes in partial mode
Privacy | No | Disabled | Mask sensitive data

Data Structure

The Log policy uses a lightweight data structure that is different from standard API traffic logs. The following table lists the fields sent and their Elasticsearch/database equivalents.
Field Name | Short Name (ES/JSON) | Type | Description
Timestamp | @timestamp | Date | Log record creation time
Correlation ID | aci | String | Apinizer request tracking ID
Environment ID | ei | String | Gateway environment identifier
API Proxy ID | api | String | API Proxy identifier
API Proxy Name | apn | String | API Proxy name
API Proxy Method ID | apmi | String | API Proxy method identifier
API Proxy Method Name | apmn | String | API Proxy method name
User/Key | uok | String | Authenticated username or API key
HTTP Status Code | sc | Number | HTTP response status code
Result Type | rt | String | Operation result (SUCCESS, ERROR, etc.)
Error Type | et | String | Error type (if any)
Request Headers | fcrh | List | Request headers as key-value pairs
Request Parameters | fcrp | List | Query parameters as key-value pairs
Request Body | fcrb | String | Request body content

Example JSON Output

{
  "@timestamp": "2026-03-31T12:30:45.123Z",
  "aci": "550e8400-e29b-41d4-a716-446655440000",
  "ei": "env-production-001",
  "api": "proxy-payment-api",
  "apn": "Payment API",
  "apmi": "method-create-payment",
  "apmn": "POST /payments",
  "uok": "merchant-api-key-123",
  "sc": 200,
  "rt": "SUCCESS",
  "fcrh": [
    { "k": "Content-Type", "v": "application/json" },
    { "k": "Authorization", "v": "Bearer eyJhbGciOi..." },
    { "k": "X-Request-ID", "v": "req-abc-123" }
  ],
  "fcrp": [
    { "k": "currency", "v": "TRY" },
    { "k": "lang", "v": "tr" }
  ],
  "fcrb": "{\"amount\": 150.00, \"merchantId\": \"M-001\"}"
}
Each entry in the header and parameter fields consists of a k (key) and v (value) pair. This structure is indexed as a nested type in Elasticsearch.
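
Because headers and parameters use a nested mapping (see the Elasticsearch Integration steps below), filtering on a key-value pair requires a nested query. The following sketch, assuming the default index name apinizer-log-policy-capture, finds records that carried a specific X-Request-ID header. Both k and v are keyword fields, so term matches must be exact, including case:
GET apinizer-log-policy-capture/_search
{
  "query": {
    "nested": {
      "path": "fcrh",
      "query": {
        "bool": {
          "must": [
            { "term": { "fcrh.k": "X-Request-ID" } },
            { "term": { "fcrh.v": "req-abc-123" } }
          ]
        }
      }
    }
  }
}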

Elasticsearch Integration

A separate index template must be created for log policy data in Elasticsearch. This template is different from the standard API traffic log template and contains only the fields sent by the log policy.
The Elasticsearch index template and ILM policy for log policy data are not created automatically by the Apinizer UI; you must apply the following steps manually on Elasticsearch.

Step 1: Create ILM Policy

Create an ILM policy for index lifecycle management. The following example creates a policy that rolls over at 30 GB or 1 day and deletes after 30 days. Adjust the values according to your needs.
PUT _ilm/policy/apinizer-log-policy-capture-ilm
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "30gb",
            "max_age": "1d"
          }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
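
You can verify that the policy was created with a standard Elasticsearch check; the response should echo the phases defined above:
GET _ilm/policy/apinizer-log-policy-capture-ilm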

Step 2: Create Index Template

Run the following command in Kibana Dev Tools or via curl. The template includes data stream support.
The field types in the template must exactly match the JSON structure sent by the log policy; do not change them.
PUT _index_template/apinizer-log-policy-capture-template
{
  "index_patterns": ["apinizer-log-policy-capture*"],
  "data_stream": {},
  "template": {
    "settings": {
      "index": {
        "lifecycle": {
          "name": "apinizer-log-policy-capture-ilm"
        },
        "number_of_shards": 1,
        "number_of_replicas": 0,
        "refresh_interval": "5s"
      }
    },
    "mappings": {
      "properties": {
        "@timestamp": {
          "type": "date",
          "format": "yyyy-MM-dd'T'HH:mm:ss.S'Z'||yyyy-MM-dd'T'HH:mm:ss.SS'Z'||yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"
        },
        "aci": {
          "type": "keyword"
        },
        "ei": {
          "type": "keyword"
        },
        "api": {
          "type": "keyword"
        },
        "apn": {
          "type": "keyword"
        },
        "apmi": {
          "type": "keyword"
        },
        "apmn": {
          "type": "keyword"
        },
        "uok": {
          "type": "keyword",
          "ignore_above": 50
        },
        "sc": {
          "type": "short"
        },
        "rt": {
          "type": "keyword",
          "ignore_above": 7
        },
        "et": {
          "type": "keyword",
          "ignore_above": 75
        },
        "fcrh": {
          "type": "nested",
          "properties": {
            "k": {
              "type": "keyword"
            },
            "v": {
              "type": "keyword"
            }
          }
        },
        "fcrp": {
          "type": "nested",
          "properties": {
            "k": {
              "type": "keyword"
            },
            "v": {
              "type": "keyword"
            }
          }
        },
        "fcrb": {
          "type": "text"
        }
      }
    }
  }
}

Step 3: Create Data Stream

After the template is created, the data stream is created automatically when the first document arrives. To create it manually:
PUT _data_stream/apinizer-log-policy-capture
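
Either way, you can confirm that the data stream exists and is bound to the template with a standard Elasticsearch command; the response lists the backing indices and the ILM policy in effect:
GET _data_stream/apinizer-log-policy-capture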

Connector Configuration

Enter apinizer-log-policy-capture in the Elasticsearch connector’s Index Name field. This name must match the index_patterns in the template.
If you want to use a different index name, make sure it is covered by the index_patterns field in the template. For project-based separation you can use a suffixed name such as apinizer-log-policy-capture-projectname, which the apinizer-log-policy-capture* pattern above already matches; a completely different name requires updating index_patterns accordingly.

Database Integration

To send log policy data to a relational database (MySQL/MariaDB or Oracle), you must create the following table structure in the target database before using the database connector; it is not created automatically.
The example uses MySQL/MariaDB types; on Oracle, use VARCHAR2 in place of VARCHAR and CLOB in place of TEXT/LONGTEXT.
CREATE TABLE log_PolicyCapture (
  id VARCHAR(255) PRIMARY KEY,
  created TIMESTAMP,
  apinizer_correlation_id VARCHAR(255),
  environment_id VARCHAR(255),
  api_proxy_id VARCHAR(255),
  api_proxy_name VARCHAR(255),
  api_proxy_method_id VARCHAR(255),
  api_proxy_method_name VARCHAR(255),
  username_or_key VARCHAR(255),
  status_code INTEGER,
  result_type VARCHAR(255),
  error_type VARCHAR(255),
  from_client_read_only_header TEXT,
  from_client_read_only_parameter TEXT,
  from_client_read_only_body LONGTEXT
);
Header and parameter fields (from_client_read_only_header, from_client_read_only_parameter) are stored in JSON format. Each record contains an array of key-value pairs.
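
As a usage sketch, the following query (standard SQL with a MySQL/MariaDB date expression) pulls the most recent failed requests from the table above:
SELECT created,
       apinizer_correlation_id,
       status_code,
       error_type
FROM log_PolicyCapture
WHERE result_type = 'ERROR'
  AND created >= NOW() - INTERVAL 1 DAY
ORDER BY created DESC;
On MySQL 5.7+ the JSON-formatted header and parameter columns can additionally be filtered with JSON functions such as JSON_SEARCH, since they contain valid JSON arrays.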

MongoDB

When using the MongoDB connector, no table creation is needed. Data is automatically written to the log_policycapture collection.

Privacy Settings

When privacy is enabled in the log policy, header, parameter, and body data matching the specified field names are masked. This feature can be used to meet data protection requirements such as GDPR. Masking is applied before data is sent to connectors. Connector-level privacy settings are applied separately and independently.

Deleting the Policy

For the steps to delete this policy and the actions to be taken when it is in use, see the Deleting a Policy section on the What is a Policy? page.

Exporting/Importing the Policy

For the export steps and available options for this policy, see the Exporting/Importing a Policy section on the What is a Policy? page.

Attaching the Policy to an API

For the process of attaching this policy to APIs, see the Attaching a Policy to an API section on the Policy Management page.