The DDL commands on this page use Apinizer's default table names (log_ApiProxyTraffic, log_PolicyCapture, log_AuditEvent, log_Application, log_UnsentMessage, log_TokenTraffic, log_LoginLog) and no schema prefix. If you prefer a custom schema or custom table names, configure them in the Log Target Usage — Table Schema section of the connection definition. You can rename the table or add a schema prefix in your DDL, but column names and data types must match exactly; otherwise log inserts will fail with column-not-found errors. Especially on Oracle, if you use case-sensitive table names (e.g. CREATE TABLE "HR"."logtablosu"), you must enter the table name inside double quotes in the connection definition. See Database Connection for details.
Apinizer can store traffic and audit records in relational databases using DB Connector. The following tables are prepared with appropriate data types for each supported database type.
Column names are identical across all database types (snake_case). Only the data types differ per database.

API Proxy Traffic Log

The main table where all HTTP traffic passing through API Proxy is recorded.
CREATE TABLE log_ApiProxyTraffic (
  id VARCHAR2(255) PRIMARY KEY,
  log_timestamp TIMESTAMP,
  correlation_id VARCHAR2(255),
  environment_id VARCHAR2(255),
  project_id VARCHAR2(255),
  application_id VARCHAR2(255),
  application_name VARCHAR2(255),
  api_proxy_id VARCHAR2(255),
  api_proxy_name VARCHAR2(255),
  enum_api_type VARCHAR2(255),
  proxy_relative_path VARCHAR2(255),
  proxy_method_id VARCHAR2(255),
  proxy_method_name VARCHAR2(255),
  x_forwarded_for VARCHAR2(255),
  username_or_key VARCHAR2(255),
  req_context_path VARCHAR2(255),
  req_remote_addr VARCHAR2(255),
  req_http_method VARCHAR2(255),
  req_content_type VARCHAR2(255),
  req_path_info VARCHAR2(255),
  req_query_string VARCHAR2(255),
  req_remote_user VARCHAR2(255),
  req_session_id VARCHAR2(255),
  req_request_uri VARCHAR2(255),
  req_char_encoding VARCHAR2(255),
  req_content_length NUMBER(10),
  req_protocol VARCHAR2(255),
  req_scheme VARCHAR2(255),
  req_server_name VARCHAR2(255),
  req_server_port NUMBER(10),
  req_remote_host VARCHAR2(255),
  req_remote_port NUMBER(10),
  req_local_name VARCHAR2(255),
  req_local_addr VARCHAR2(255),
  req_local_port NUMBER(10),
  from_client_ro_param CLOB,
  from_client_ro_header CLOB,
  from_client_ro_body CLOB,
  to_backend_param CLOB,
  to_backend_header CLOB,
  to_backend_body CLOB,
  routing_address VARCHAR2(255),
  routing_retry_count NUMBER(10),
  routing_failover_cnt NUMBER(10),
  from_backend_ro_header CLOB,
  from_backend_ro_body CLOB,
  to_client_header CLOB,
  to_client_body CLOB,
  time_req_pipeline NUMBER(10),
  time_resp_pipeline NUMBER(10),
  time_routing_total NUMBER(10),
  time_api_proxy_total NUMBER(10),
  size_request_total NUMBER(10),
  size_response_total NUMBER(10),
  cache_hit NUMBER(1),
  status_code NUMBER(10),
  error_type VARCHAR2(255),
  parent_error_type VARCHAR2(255),
  result_type VARCHAR2(255),
  location_lat NUMBER,
  location_lon NUMBER,
  is_canary_request NUMBER(1),
  canary_routing_status VARCHAR2(255),
  mirror_count NUMBER(10),
  mirror_success_count NUMBER(10),
  mirror_failure_count NUMBER(10),
  mirror_total_duration NUMBER(19),
  mirror_addresses CLOB
);
Column | Description
id | Record identifier
log_timestamp | Time the log record was created
correlation_id | Request-specific correlation ID; links records in the same request chain
environment_id | Identifier of the environment where the request was processed
project_id | Identifier of the project the API belongs to
application_id | Identifier of the application making the request
application_name | Name of the application making the request
api_proxy_id | Identifier of the API Proxy used for the request
api_proxy_name | Name of the API Proxy used for the request
enum_api_type | API type (REST, SOAP, etc.)
proxy_relative_path | Relative path of the API Proxy
proxy_method_id | Identifier of the matched method
proxy_method_name | Name of the matched method
x_forwarded_for | Real client IP address (proxy/load balancer chain)
username_or_key | Username or API key used for authentication
req_context_path | Request context path
req_remote_addr | Remote address of the request
req_http_method | HTTP method (GET, POST, PUT, etc.)
req_content_type | Content type of the request
req_path_info | Request path info
req_query_string | Request query string
req_remote_user | Remote user
req_session_id | Session identifier
req_request_uri | Request URI
req_char_encoding | Request character encoding
req_content_length | Request content length (bytes)
req_protocol | Protocol
req_scheme | Scheme (http/https)
req_server_name | Server name
req_server_port | Server port
req_remote_host | Remote host
req_remote_port | Remote port
req_local_name | Local name
req_local_addr | Local address
req_local_port | Local port
from_client_ro_param | Query parameters from the client (JSON)
from_client_ro_header | Headers from the client (JSON)
from_client_ro_body | Request body from the client
to_backend_param | Query parameters sent to the backend (JSON)
to_backend_header | Headers sent to the backend (JSON)
to_backend_body | Request body sent to the backend
routing_address | Backend address routed to
routing_retry_count | Number of retry attempts
routing_failover_cnt | Number of failovers
from_backend_ro_header | Response headers from the backend (JSON)
from_backend_ro_body | Response body from the backend
to_client_header | Response headers sent to the client (JSON)
to_client_body | Response body sent to the client
time_req_pipeline | Request pipeline duration (ms)
time_resp_pipeline | Response pipeline duration (ms)
time_routing_total | Total routing duration (ms)
time_api_proxy_total | Total API Proxy processing duration (ms)
size_request_total | Request size (bytes)
size_response_total | Response size (bytes)
cache_hit | Whether the response was served from cache
status_code | HTTP response status code
error_type | Error type
parent_error_type | Parent error type
result_type | Processing result
location_lat | Geographic location (latitude)
location_lon | Geographic location (longitude)
is_canary_request | Whether this is a canary routing request
canary_routing_status | Canary routing status
mirror_count | Number of mirror requests
mirror_success_count | Number of successful mirror requests
mirror_failure_count | Number of failed mirror requests
mirror_total_duration | Total mirror request duration (ms)
mirror_addresses | Mirror addresses (JSON)
This is the highest-volume log table. The following indexes cover the most common query patterns (end-to-end tracing, time-range scans, proxy/project filtering, error analysis).
CREATE INDEX idx_apitraffic_correlation ON log_ApiProxyTraffic(correlation_id);
CREATE INDEX idx_apitraffic_timestamp   ON log_ApiProxyTraffic(log_timestamp);
CREATE INDEX idx_apitraffic_proxy       ON log_ApiProxyTraffic(api_proxy_id);
CREATE INDEX idx_apitraffic_project     ON log_ApiProxyTraffic(project_id);
CREATE INDEX idx_apitraffic_status      ON log_ApiProxyTraffic(status_code);
CREATE INDEX idx_apitraffic_result      ON log_ApiProxyTraffic(result_type);
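As a sketch of how these indexes support the query patterns above, the two illustrative queries below trace one request chain and summarize recent errors for one proxy. The correlation and proxy IDs are hypothetical placeholders; date arithmetic uses Oracle syntax to match the DDL on this page.

```sql
-- End-to-end trace of a single request chain (served by idx_apitraffic_correlation)
SELECT log_timestamp, api_proxy_name, req_http_method, status_code, time_api_proxy_total
FROM log_ApiProxyTraffic
WHERE correlation_id = 'a1b2c3d4'          -- hypothetical correlation ID
ORDER BY log_timestamp;

-- Error breakdown for one proxy over the last day (timestamp + proxy indexes)
SELECT status_code, error_type, COUNT(*) AS cnt
FROM log_ApiProxyTraffic
WHERE api_proxy_id = 'proxy-123'           -- hypothetical proxy ID
  AND log_timestamp >= SYSTIMESTAMP - INTERVAL '1' DAY
GROUP BY status_code, error_type
ORDER BY cnt DESC;
```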

Partitioning

This table grows very quickly under production traffic. Daily partitioning on log_timestamp is strongly recommended so that retention can be applied via DROP PARTITION and time-range queries prune inactive partitions. See Partitioning Syntax for database-specific examples.

Token Traffic Log

The table that records OAuth/JWT token operations.
CREATE TABLE log_TokenTraffic (
  id VARCHAR2(255) PRIMARY KEY,
  log_timestamp TIMESTAMP,
  correlation_id VARCHAR2(255),
  environment_id VARCHAR2(255),
  project_id VARCHAR2(255),
  application_id VARCHAR2(255),
  application_name VARCHAR2(255),
  api_proxy_id VARCHAR2(255),
  api_proxy_name VARCHAR2(255),
  x_forwarded_for VARCHAR2(255),
  req_remote_addr VARCHAR2(255),
  req_http_method VARCHAR2(255),
  req_content_type VARCHAR2(255),
  username_or_key VARCHAR2(255),
  time_total NUMBER(10),
  status_code NUMBER(10),
  error_type VARCHAR2(255),
  parent_error_type VARCHAR2(255),
  result_type VARCHAR2(255),
  to_client_header CLOB,
  to_client_body CLOB,
  auth_token_type VARCHAR2(255),
  audience VARCHAR2(255),
  client_info VARCHAR2(255),
  token_never_expires NUMBER(1),
  issued_at TIMESTAMP,
  expires_at TIMESTAMP,
  expires_in NUMBER(10),
  scope VARCHAR2(255),
  refresh_token_allowed NUMBER(1),
  refresh_token_issued_at TIMESTAMP,
  refresh_token_expires_at TIMESTAMP,
  refresh_token_expires_in NUMBER(10),
  refresh_count NUMBER(10),
  max_refresh_count NUMBER(10)
);
Column | Description
id | Record identifier
log_timestamp | Time the log record was created
correlation_id | Correlation identifier
environment_id | Environment identifier
project_id | Project identifier
application_id | Application identifier
application_name | Application name
api_proxy_id | API Proxy identifier
api_proxy_name | API Proxy name
x_forwarded_for | Real client IP address
req_remote_addr | Remote address
req_http_method | HTTP method
req_content_type | Request content type
username_or_key | Username or API key
time_total | Total processing duration (ms)
status_code | HTTP response status code
error_type | Error type
parent_error_type | Parent error type
result_type | Processing result
to_client_header | Response headers sent to the client (JSON)
to_client_body | Response body sent to the client
auth_token_type | Token type (Bearer, Basic, etc.)
audience | Target audience of the token
client_info | Client information
token_never_expires | Whether the token has no expiry
issued_at | Time the token was issued
expires_at | Token expiry time
expires_in | Token validity duration (seconds)
scope | Token scope information
refresh_token_allowed | Whether refresh tokens are allowed
refresh_token_issued_at | Time the refresh token was issued
refresh_token_expires_at | Refresh token expiry time
refresh_token_expires_in | Refresh token validity duration (seconds)
refresh_count | Number of times refreshed
max_refresh_count | Maximum number of refreshes allowed
Token operations are frequently investigated per user/application and per time window. Correlation joins with the API traffic table are also common when tracing an end-to-end call.
CREATE INDEX idx_tokentraffic_correlation ON log_TokenTraffic(correlation_id);
CREATE INDEX idx_tokentraffic_timestamp   ON log_TokenTraffic(log_timestamp);
CREATE INDEX idx_tokentraffic_proxy       ON log_TokenTraffic(api_proxy_id);
CREATE INDEX idx_tokentraffic_user        ON log_TokenTraffic(username_or_key);
CREATE INDEX idx_tokentraffic_result      ON log_TokenTraffic(result_type);
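The correlation join mentioned above can be sketched as follows; this pairs a token operation with the API calls in the same request chain. The username is a hypothetical placeholder, and the time filter uses Oracle syntax.

```sql
-- Correlate token operations with the API calls they secured (illustrative)
SELECT t.log_timestamp, t.auth_token_type, t.expires_at,
       a.api_proxy_name, a.status_code, a.time_api_proxy_total
FROM log_TokenTraffic t
JOIN log_ApiProxyTraffic a
  ON a.correlation_id = t.correlation_id
WHERE t.username_or_key = 'demo-user'      -- hypothetical user
  AND t.log_timestamp >= SYSTIMESTAMP - INTERVAL '1' DAY;
```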

Partitioning

Token volumes usually scale with API traffic volumes. Daily partitioning on log_timestamp is recommended. See Partitioning Syntax.

Policy Capture Log

The table that stores request details recorded in policy capture mode.
CREATE TABLE log_PolicyCapture (
  id VARCHAR2(255) PRIMARY KEY,
  log_timestamp TIMESTAMP,
  correlation_id VARCHAR2(255),
  environment_id VARCHAR2(255),
  api_proxy_id VARCHAR2(255),
  api_proxy_name VARCHAR2(255),
  proxy_method_id VARCHAR2(255),
  proxy_method_name VARCHAR2(255),
  username_or_key VARCHAR2(255),
  status_code NUMBER(10),
  result_type VARCHAR2(255),
  error_type VARCHAR2(255),
  from_client_ro_header CLOB,
  from_client_ro_param CLOB,
  from_client_ro_body CLOB,
  capture_region VARCHAR2(50),
  capture_location VARCHAR2(50)
);
Column | Description
id | Record identifier
log_timestamp | Time the log record was created
correlation_id | Correlation identifier
environment_id | Environment identifier
api_proxy_id | API Proxy identifier
api_proxy_name | API Proxy name
proxy_method_id | Method identifier
proxy_method_name | Method name
username_or_key | Username or API key
status_code | HTTP response status code
result_type | Processing result
error_type | Error type
from_client_ro_header | Headers from the client (JSON)
from_client_ro_param | Query parameters from the client (JSON)
from_client_ro_body | Request body from the client
capture_region | Pipeline stage where the policy ran: FROM_CLIENT, TO_BACKEND, FROM_BACKEND, TO_CLIENT
capture_location | Level at which the policy was defined: API_PROXY_GROUP (Group), API_PROXY (Proxy), API_PROXY_METHOD (Method)
When the log policy is placed at multiple pipeline stages (for example, FROM_CLIENT and TO_CLIENT), request and response rows share the same correlation_id. The correlation_id index is therefore essential for joining request and response records end to end.
CREATE INDEX idx_policycapture_correlation ON log_PolicyCapture(correlation_id);
CREATE INDEX idx_policycapture_timestamp   ON log_PolicyCapture(log_timestamp);
CREATE INDEX idx_policycapture_proxy       ON log_PolicyCapture(api_proxy_id);
CREATE INDEX idx_policycapture_region      ON log_PolicyCapture(capture_region);
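A minimal sketch of the end-to-end join described above: listing every capture row for one call, ordered by time, shows which pipeline stages the policy fired at. The correlation ID is a hypothetical placeholder.

```sql
-- All capture rows for one request chain, in pipeline order (illustrative)
SELECT correlation_id, capture_region, capture_location, log_timestamp, status_code
FROM log_PolicyCapture
WHERE correlation_id = 'a1b2c3d4'          -- hypothetical correlation ID
ORDER BY log_timestamp;
```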

Partitioning

Volume depends on how many log policies are attached to flows and how often they fire. For policies attached at proxy/method scope on high-traffic APIs, daily partitioning on log_timestamp is recommended. See Partitioning Syntax.

Application Log

The table that stores platform-level application logs.
CREATE TABLE log_Application (
  id VARCHAR2(255) PRIMARY KEY,
  env_name VARCHAR2(255),
  api_proxy_id VARCHAR2(255),
  api_proxy_name VARCHAR2(255),
  correlation_id VARCHAR2(255),
  log_timestamp TIMESTAMP,
  log_level VARCHAR2(255),
  message CLOB,
  stack_trace CLOB,
  host_name VARCHAR2(255),
  enum_application_log VARCHAR2(255),
  total_duration NUMBER(10)
);
Column | Description
id | Record identifier
env_name | Environment name
api_proxy_id | API Proxy identifier
api_proxy_name | API Proxy name
correlation_id | Correlation identifier
log_timestamp | Time the log record was created
log_level | Log level (INFO, WARN, ERROR, etc.)
message | Log message
stack_trace | Error stack trace
host_name | Name of the server where the log was created
enum_application_log | Application log source type
total_duration | Total processing duration (ms)
Application logs are commonly filtered by time range and log level (for troubleshooting), and joined with traffic logs via correlation_id to reconstruct a single request’s platform-level events.
CREATE INDEX idx_application_correlation ON log_Application(correlation_id);
CREATE INDEX idx_application_timestamp   ON log_Application(log_timestamp);
CREATE INDEX idx_application_level       ON log_Application(log_level);
CREATE INDEX idx_application_proxy       ON log_Application(api_proxy_id);
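As an illustrative sketch of the troubleshooting pattern above, the query below pulls platform-level events for all requests that recently failed with a 5xx status, using the correlation_id link between the two tables. The time window and thresholds are arbitrary examples.

```sql
-- Platform-level events behind recent server errors (illustrative)
SELECT ap.log_timestamp, ap.log_level, ap.enum_application_log, ap.message
FROM log_Application ap
WHERE ap.correlation_id IN (
  SELECT correlation_id
  FROM log_ApiProxyTraffic
  WHERE status_code >= 500
    AND log_timestamp >= SYSTIMESTAMP - INTERVAL '1' HOUR
)
ORDER BY ap.log_timestamp;
```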

Partitioning

Application logs accumulate steadily. Daily partitioning on log_timestamp is recommended on production systems. See Partitioning Syntax.

Audit Event

The table that stores audit records of user operations.
CREATE TABLE log_AuditEvent (
  id VARCHAR2(255) PRIMARY KEY,
  audit_event_date TIMESTAMP,
  principal VARCHAR2(255),
  state VARCHAR2(255),
  object_id VARCHAR2(255),
  object_name VARCHAR2(255),
  ref_object_json CLOB,
  project_id VARCHAR2(255),
  class_name VARCHAR2(255)
);
Column | Description
id | Record identifier
audit_event_date | Time the audit event occurred
principal | User who performed the action
state | State of the action (CREATE, UPDATE, DELETE, etc.)
object_id | Identifier of the object acted upon
object_name | Name of the object acted upon
ref_object_json | JSON representation of the object acted upon
project_id | Project identifier
class_name | Type of the object
Audit queries most often filter by date range, user (principal), affected object type, or project. The partition key here is audit_event_date, not log_timestamp.
CREATE INDEX idx_auditevent_date     ON log_AuditEvent(audit_event_date);
CREATE INDEX idx_auditevent_principal ON log_AuditEvent(principal);
CREATE INDEX idx_auditevent_project  ON log_AuditEvent(project_id);
CREATE INDEX idx_auditevent_class    ON log_AuditEvent(class_name);
CREATE INDEX idx_auditevent_state    ON log_AuditEvent(state);
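A typical audit question ("what did this user change in this project over the last month?") can be sketched like this; the principal and project ID are hypothetical placeholders, and ADD_MONTHS is Oracle date arithmetic.

```sql
-- Changes by one user within one project over the last month (illustrative)
SELECT audit_event_date, state, class_name, object_name
FROM log_AuditEvent
WHERE principal = 'admin'                   -- hypothetical user
  AND project_id = 'project-1'              -- hypothetical project
  AND audit_event_date >= ADD_MONTHS(SYSTIMESTAMP, -1)
ORDER BY audit_event_date DESC;
```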

Partitioning

Audit events are generated by user actions (far fewer than traffic rows) and are usually retained for longer periods for compliance. Monthly partitioning on audit_event_date is typically sufficient. See Partitioning Syntax and substitute MONTH for DAY in the interval.

Login Log

The table that stores login attempts to the API Manager and API Portal (success and failure events).
CREATE TABLE log_LoginLog (
  id VARCHAR2(255) PRIMARY KEY,
  log_timestamp TIMESTAMP,
  source VARCHAR2(50),
  principal VARCHAR2(255),
  email VARCHAR2(255),
  display_name VARCHAR2(255),
  event_type VARCHAR2(50),
  remote_address VARCHAR2(255),
  message CLOB,
  error_type VARCHAR2(255),
  portal_id VARCHAR2(255),
  account_id VARCHAR2(255)
);
Column | Description
id | Record identifier
log_timestamp | Time the login event occurred
source | Login source (API_MANAGER, API_PORTAL)
principal | User principal (API Manager login)
email | User email (API Portal login)
display_name | User display name
event_type | Event type (LOGIN_SUCCESS, LOGIN_FAILURE)
remote_address | Remote client IP address
message | Result message or failure reason
error_type | Error class (for failures)
portal_id | Portal identifier (for portal logins)
account_id | Account identifier (for portal logins)
Login logs are queried by time range, user (email/principal), login source, and event type (for security auditing and locked-account investigation).
CREATE INDEX idx_loginlog_timestamp ON log_LoginLog(log_timestamp);
CREATE INDEX idx_loginlog_email     ON log_LoginLog(email);
CREATE INDEX idx_loginlog_principal ON log_LoginLog(principal);
CREATE INDEX idx_loginlog_source    ON log_LoginLog(source);
CREATE INDEX idx_loginlog_event     ON log_LoginLog(event_type);
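For the security-auditing pattern above, a sketch that flags accounts with repeated failures in the last 24 hours (the failure threshold of 5 is an arbitrary example; COALESCE covers the fact that Manager logins fill principal while Portal logins fill email):

```sql
-- Accounts with 5+ login failures in the last day (illustrative)
SELECT source, COALESCE(principal, email) AS user_id, COUNT(*) AS failures
FROM log_LoginLog
WHERE event_type = 'LOGIN_FAILURE'
  AND log_timestamp >= SYSTIMESTAMP - INTERVAL '1' DAY
GROUP BY source, COALESCE(principal, email)
HAVING COUNT(*) >= 5
ORDER BY failures DESC;
```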

Partitioning

Login log volume is typically low compared to traffic logs. Monthly partitioning on log_timestamp is sufficient for long retention periods commonly required for security audit. See Partitioning Syntax and substitute MONTH for DAY in the interval.

Unsent Message

The table that temporarily stores log records that could not be delivered due to connection errors.
CREATE TABLE log_UnsentMessage (
  id VARCHAR2(255) PRIMARY KEY,
  log_timestamp TIMESTAMP,
  connector_id VARCHAR2(50),
  content CLOB
);
Column | Description
id | Record identifier
log_timestamp | Time the record was created
connector_id | Identifier of the connector the data failed to be sent to
content | Content of the unsent log record (JSON)
Rows in this table are transient: they are inserted when a connector is unreachable and removed when delivery succeeds. A small number of indexes is sufficient for resend lookups and cleanup.
CREATE INDEX idx_unsent_timestamp ON log_UnsentMessage(log_timestamp);
CREATE INDEX idx_unsent_connector ON log_UnsentMessage(connector_id);
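Two illustrative maintenance queries for this table: a backlog check per connector, and an age-based purge. Note that purging discards log records that were never delivered, so run the DELETE only if losing those records is acceptable; the 7-day cutoff is an arbitrary example.

```sql
-- Backlog size and age per connector (illustrative)
SELECT connector_id, COUNT(*) AS pending, MIN(log_timestamp) AS oldest
FROM log_UnsentMessage
GROUP BY connector_id;

-- Purge undeliverable records older than 7 days (data loss; use deliberately)
DELETE FROM log_UnsentMessage
WHERE log_timestamp < SYSTIMESTAMP - INTERVAL '7' DAY;
```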

Partitioning

This table holds transient records that are normally consumed within a short time window. Partitioning is not required. If the table grows unexpectedly (prolonged outages), operational cleanup is a better remedy than partitioning.

Partitioning Syntax

The example below shows daily partitioning on log_timestamp using Oracle INTERVAL partitioning syntax. Substitute the actual table name and adjust the initial boundary date to suit your deployment. For the log_AuditEvent table, replace log_timestamp with audit_event_date and use a monthly interval instead.
-- Apply at CREATE TABLE time (add after the column list):
)
PARTITION BY RANGE (log_timestamp)
INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
(
  PARTITION p_initial VALUES LESS THAN (TIMESTAMP '2026-01-01 00:00:00')
);

-- Monthly variant (e.g., for log_AuditEvent):
-- INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
Unless the database creates partitions automatically, new daily partitions must be created in advance (for example, via a scheduled job). Behavior when no matching partition exists differs by database: PostgreSQL and MySQL reject the insert outright, SQL Server places out-of-range rows in the last partition, and Oracle auto-creates the partition only when INTERVAL partitioning is used, as above.
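With partitioning in place, retention becomes a metadata operation rather than a bulk DELETE. A hedged Oracle sketch (the DROP PARTITION FOR clause requires Oracle 12c or later; table name and date are examples):

```sql
-- Drop the partition covering one day's data (Oracle 12c+; illustrative)
ALTER TABLE log_ApiProxyTraffic
  DROP PARTITION FOR (TIMESTAMP '2026-01-01 00:00:00')
  UPDATE GLOBAL INDEXES;
```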

Data Type Mapping Table

The following table shows the data type mappings between different databases.
Usage | Oracle | MySQL/MariaDB | PostgreSQL | SQL Server
Short text | VARCHAR2(255) | VARCHAR(255) | VARCHAR(255) | NVARCHAR(255)
Long text | CLOB | TEXT / LONGTEXT | TEXT | NVARCHAR(MAX)
Integer | NUMBER(10) | INT | INTEGER | INT
Big integer | NUMBER(19) | BIGINT | BIGINT | BIGINT
Decimal | NUMBER | DOUBLE | DOUBLE PRECISION | FLOAT
Boolean | NUMBER(1) | TINYINT(1) | BOOLEAN | BIT
Timestamp | TIMESTAMP | TIMESTAMP NULL | TIMESTAMP | DATETIME2
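Applying this mapping to the Oracle DDL above yields the other dialects mechanically. For example, the log_UnsentMessage table translated to PostgreSQL (a sketch; keep the column names and adapt only the types):

```sql
-- PostgreSQL version of log_UnsentMessage, derived from the mapping table
CREATE TABLE log_UnsentMessage (
  id VARCHAR(255) PRIMARY KEY,
  log_timestamp TIMESTAMP,
  connector_id VARCHAR(50),
  content TEXT
);
```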