Capture the message state at any point in the API Gateway pipeline and send it to selected connectors. Perform synchronous or asynchronous logging to Elasticsearch, databases, Graylog, Webhook, Kafka, RabbitMQ, and Syslog targets.
This document explains the detailed usage of a specific policy. If you are using the Apinizer policy framework for the first time or want to learn about the general working principles of policies, we recommend reading the What is a Policy? page first.
The Log policy captures a snapshot of the current message state at any point in the message processing pipeline and sends it to the configured connectors.
Unlike standard traffic logging, it can be placed at intermediate points in the pipeline (for example, before and after a transformation policy) to observe message changes.
It works with HTTP, WebSocket, and gRPC protocols.
It can send data to multiple connectors simultaneously.
You can select the connectors that the policy will send logs to. Connectors are selected through their connection definition and are environment-independent: the same connection definition may correspond to different connector instances across environments.
Definitions marked as global policies can be used in multiple environments; however, the connector corresponding to the connection definition must be configured in each environment.
In synchronous mode, a connector error interrupts the pipeline and returns an error to the client. If uninterrupted operation is required in production, prefer the asynchronous mode.
In asynchronous mode, when a connector error occurs, the error information is written to the application logs, but the client request is not affected.
The Log policy uses a lightweight data structure that is different from standard API traffic logs. The following table lists the fields sent and their Elasticsearch/database equivalents.
A separate index template must be created for log policy data in Elasticsearch. This template is different from the standard API traffic log template and contains only the fields sent by the log policy.
The Elasticsearch index template and ILM policy for log policy data are not automatically created from the Apinizer UI. You need to manually apply the following steps on Elasticsearch.
Create an ILM policy to manage the index lifecycle. The following example creates a policy that rolls over at 30 GB or after 1 day and deletes indexes after 30 days. Adjust the values according to your needs.
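A minimal sketch of such an ILM policy, created through the Kibana Dev Tools console; the policy name apinizer-log-policy-capture-ilm-policy is an assumption, so use whatever name your index template will reference:

```
PUT _ilm/policy/apinizer-log-policy-capture-ilm-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_primary_shard_size": "30gb",
            "max_age": "1d"
          }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```

Note that the 30-day deletion is counted from the moment an index rolls over.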
Enter apinizer-log-policy-capture in the Elasticsearch connector’s Index Name field. This name must match the index_patterns in the template.
If you want to use a different index name, update the index_patterns field in the template accordingly. For example, for project-based separation you can use apinizer-log-policy-capture-projectname.
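As a rough sketch under the same assumptions, an index template matching this pattern could look like the following. The template name, the rollover alias, and the single correlation_id mapping are illustrative only; the full mappings should contain the fields sent by the log policy as listed in the field table, and the ILM policy name is reused from the sketch above:

```
PUT _index_template/apinizer-log-policy-capture-template
{
  "index_patterns": ["apinizer-log-policy-capture*"],
  "template": {
    "settings": {
      "index.lifecycle.name": "apinizer-log-policy-capture-ilm-policy",
      "index.lifecycle.rollover_alias": "apinizer-log-policy-capture"
    },
    "mappings": {
      "properties": {
        "correlation_id": { "type": "keyword" }
      }
    }
  }
}
```

If alias-based rollover is used, an initial write index pointing at that alias also needs to be bootstrapped; consult the Elasticsearch documentation for the exact procedure for your version.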
The Database connector writes each log policy capture as a single row into the log_PolicyCapture table on the target relational database. Supported database types are Oracle, MySQL/MariaDB, PostgreSQL, and SQL Server. For MongoDB, the collection is created automatically on first write — no manual setup is required.
For relational databases, the log_PolicyCapture table is not created automatically — you must create it manually before enabling the connector. See Apinizer Log Table Creation Commands for the CREATE TABLE statement, recommended indexes, and partitioning guidance for each supported database type.
The correlation_id column links request and response rows of the same call. When the log policy is placed at multiple pipeline stages (for example, FROM_CLIENT and TO_CLIENT), records can be joined on correlation_id to trace a transaction end to end. The index on this column is therefore strongly recommended.
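The recommended index and an end-to-end join can be sketched as follows, assuming PostgreSQL syntax; the column names other than correlation_id (pipeline_point, body) are hypothetical placeholders, so substitute the actual column names from the table creation commands:

```
-- Recommended index for correlating request and response rows
CREATE INDEX idx_log_policycapture_correlation_id
    ON log_PolicyCapture (correlation_id);

-- Trace one call end to end by joining the capture taken at FROM_CLIENT
-- with the capture taken at TO_CLIENT (column names are hypothetical)
SELECT req.correlation_id,
       req.body  AS request_body,
       resp.body AS response_body
FROM   log_PolicyCapture AS req
JOIN   log_PolicyCapture AS resp
       ON resp.correlation_id = req.correlation_id
WHERE  req.pipeline_point  = 'FROM_CLIENT'
  AND  resp.pipeline_point = 'TO_CLIENT';
```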
For high-traffic APIs, we recommend configuring the log policy in Asynchronous mode so that database write latency does not block the request pipeline.
You can configure privacy settings at the policy level so that sensitive data is processed before being written to the log record.
| Operation | Description |
| --- | --- |
| Mask | Partially hides the sensitive data (e.g. ****1234) |
| Delete | Completely removes the sensitive data |
| Hash | Converts the sensitive data to a one-way hash value |
| Encrypt | Encrypts the sensitive data |
Privacy settings are applied at two levels: policy-level settings are applied first, before the data is sent to the connectors, and connector-level privacy settings are then applied separately and independently by each connector. If both levels are enabled, each is applied in sequence. This feature can be used to meet data protection requirements such as GDPR.