Gateway Infrastructure provides all the tools needed to create and manage the Gateway Runtime environments where API Proxies run in the Apinizer platform, to manage Kubernetes resources, to configure the Gateway Engine and SSL/TLS certificates, and to manage API traffic log connectors.

Module Components

The Gateway Infrastructure module is managed through the pages described in the sections below.

What is Gateway Runtime?

Gateway Runtime (Environment) is a runtime execution context for API Proxies in an organization. For one or more API Proxies to be accessible, they must be deployed to a Gateway Runtime environment. An API Proxy can be deployed to a single Gateway Runtime environment or to multiple environments.
This page appears when the Kubernetes Namespace and Resources are managed with Apinizer option is set to Active on the System General Settings screen.
A Gateway Runtime environment provides an isolated area, as a system resource, in which Gateways run. Multiple Gateway Runtime environments can be created in the Apinizer platform. Gateway Runtime management is performed in two ways:
  • Managed by Apinizer: All Kubernetes definitions required for Gateway Runtimes are created and managed through the API Manager screen
  • Remote Gateway: Only standard data of existing Kubernetes definitions are recorded in the API Manager screen
Gateway Runtimes and Cache Servers are now created and managed separately. For Cache Server creation and management, see the Distributed Cache page.

Gateway Runtime Roles

A Gateway Runtime environment can be created with one of several roles: Production, Development/Test, or Sandbox. Note that for a client to access an API Proxy, the proxy must be deployed to at least one Gateway Runtime environment.

Test Environment

Used to test API Proxies or Applications that are active in the test environment. Testing can also be used to measure whether hardware resources are adequate. If product development is done in an environment that differs greatly from the test environment, problems can arise as work progresses.

Sandbox Environment

API Proxies and Applications active in the sandbox environment are used during development and testing. This environment makes it possible to test safely, under conditions close to production, without affecting the production process. It is important because it mimics the state of the product in the production environment. For example, it makes it easy to add policies to the relevant API Proxy and test them before they are accessible in production. This protects the production environment and eliminates the risk of a developer accidentally changing a live (production) application. It also ensures that end users gain access only after the application has been verified to work.
Important Differences:
  • Test vs. sandbox: the product (API Proxy or Application) is activated in the test environment to be tested during development, and in the sandbox environment to simulate real users before the product is finished and released to end users
  • Installation is performed to the relevant role environment according to the usage purpose of the API Proxy. The same API Proxy can be activated on different environments for different purposes
  • Each environment works independently of others

Production Environment

API Proxies or Applications active in the production environment are used for end users. This environment is designed to carry the load of clients.

Gateway Runtime Architecture

Gateway Runtime is designed to be used in environments where many APIs are spread across multiple teams or projects. The Gateway Runtime concept in Apinizer corresponds exactly to the namespace in Kubernetes. The Gateway Runtime environment you create and deploy in Apinizer is created as a namespace in Kubernetes. A Gateway Runtime environment contains namespace, deployment, pod, replica set, service, and access URL components. All Gateway Runtime environments created with the Apinizer Platform are also located within the Kubernetes cluster.
Kubernetes Namespace Concept: Kubernetes clusters can manage large numbers of unrelated workloads simultaneously. Kubernetes uses a concept called the namespace to reduce the complexity of objects within the cluster. Namespaces enable grouping objects together and filtering and controlling those groups as a unit. Whether used to apply customized access control policies or to isolate all the pieces of a test environment, namespaces are a powerful and flexible structure for managing objects as a group. Namespaces provide a scope for object names within the cluster: names within a namespace must be unique, but the same name can be used in different namespaces. For detailed information, see the Kubernetes Documentation.

Gateway Runtime Components

The Gateway Runtime concept in the Apinizer Platform corresponds to the Namespace concept in the Kubernetes environment. A Gateway Runtime environment is made up of the following components:
  • Log Connectors: Log connector definitions that determine where all API traffic and requests in the created Gateway Runtime environment are logged.
  • Gateway Runtime Deployment (Worker Server): The core module of the Apinizer Platform; it routes all API requests to the Backend API and acts as the Policy Enforcement Point. Gateway Runtimes are managed independently from Cache Servers and can run in different namespaces.
  • Gateway Runtime Service: Created to access the Apinizer Worker pods in the Gateway Runtime Deployment. The service is the layer that handles requests to all pods and sits in front of them. Service information is defined when each Gateway Runtime environment is created. All Apinizer Workers in the cluster are managed through a service, and service ports in the Apinizer Platform must be unique. NodePort is used by default; other service types are not supported by Apinizer.
  • Access URL: The external access address of the Proxy, specified as https://<your-IP-address>. Messages can be sent to Proxies using this address.

Gateway Runtime Access Address

An API Proxy can be accessed through the access address of the Gateway Runtime environment it is deployed to.

Access Address Structure

Let the access address of an API Proxy be:
http://demo.apinizer.com/apigateway/myproxy
  • http://demo.apinizer.com/ : Gateway Runtime Environment Access Address
  • apigateway/ : Root Context
  • myproxy : API Proxy Relative Path
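The decomposition above can be sketched as a small helper. This is an illustrative reading aid only (the function name `split_access_address` is hypothetical, not part of Apinizer); it uses the documentation's example address.

```python
from urllib.parse import urlparse

def split_access_address(url: str):
    """Split an API Proxy access address into the three components:
    environment access address, root context, and relative path."""
    parsed = urlparse(url)
    env_address = f"{parsed.scheme}://{parsed.netloc}/"
    # Path is "<root-context>/<relative-path>"; split on the first "/"
    root_context, _, relative_path = parsed.path.lstrip("/").partition("/")
    return env_address, root_context + "/", relative_path

env, ctx, proxy = split_access_address("http://demo.apinizer.com/apigateway/myproxy")
print(env, ctx, proxy)  # http://demo.apinizer.com/ apigateway/ myproxy
```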

Access Address Configuration

The access address to be defined for the Gateway Runtime environment is generally the DNS address defined in a load balancer such as WAF or Nginx. Gateway Runtime environments defined in Apinizer correspond to a namespace in Kubernetes. A Kubernetes service of NodePort type is automatically created by Apinizer for access to Apinizer Workers running in the namespace. A service is created with the value entered in Engine Service Port when creating the Gateway Runtime environment.

Example Service Configuration

Example service information to access the Apinizer Worker: if the Engine Service Port value is 30080 (the default value; it must be different for each Gateway Runtime environment), the DNS definition in your WAF or Nginx server should point to:
http://kubernetes-worker-node-IP:30080/

Gateway Runtime Creation

The fields containing general definition information when creating a Gateway Runtime environment are described below:
  • Managed Type: Determines how the Gateway Runtime will be managed. Managed by Apinizer: Apinizer automatically creates the namespace, deployment, service, and all required Kubernetes resources. Remote Gateway: Apinizer only records connection details and does not manage the deployment; ensure that your Gateway pod is running and accessible.
  • Type: A value appropriate to the license and the environment used must be selected. Test or Production can be selected.
  • Communication Protocol Type: One of the HTTP, gRPC, or HTTP+Websocket communication protocols can be selected.
  • Name: The name of the Gateway Runtime environment. Corresponds to the namespace in Kubernetes.
  • Key: A shortened, environment-specific key for the created Gateway Runtime environment.
  • Access URL: The external access address of API Proxies running within the Gateway Runtime environment.
  • Description: Can be used for management convenience and important notes.
  • Environment Publishing Access / Projects: The projects in which the Gateway Runtime environment can be used are selected here, or the selection can be left empty for use in all projects. If one or more projects are selected, newly created projects must also be added here before they can use the environment. The default is unselected.
  • Node List: Select which Kubernetes servers the created Gateway Runtime environment will run on.
Namespace Independence:Gateway pods and Cache Server pods can now run in different Kubernetes namespaces. Gateway pods can access Cache Servers in other namespaces using Kubernetes service discovery (e.g., http://cache-http-service.apinizer-cache.svc.cluster.local:8090). This provides more flexible infrastructure management and allows you to separate Gateway and Cache workloads.
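Cross-namespace addresses like the one above follow the standard Kubernetes service-discovery form <service>.<namespace>.svc.cluster.local:<port>. A minimal sketch (the helper function is hypothetical, not an Apinizer API) that reproduces the documentation's example address:

```python
def cluster_local_url(service: str, namespace: str, port: int, scheme: str = "http") -> str:
    """Build a Kubernetes cluster-local service URL for cross-namespace access."""
    return f"{scheme}://{service}.{namespace}.svc.cluster.local:{port}"

# Gateway pod reaching a Cache Server in the apinizer-cache namespace
url = cluster_local_url("cache-http-service", "apinizer-cache", 8090)
print(url)  # http://cache-http-service.apinizer-cache.svc.cluster.local:8090
```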
Communication Protocol Type Selection: The type selected here determines which environments API Proxies can be deployed to:
  • REST and SOAP API Proxies → to environments of type HTTP
  • gRPC API Proxies → to environments of type gRPC
  • Websocket API Proxies → to environments of type HTTP+Websocket
Project Selection: If a project is selected, it means that only API Proxies within that project can be deployed to this environment.

Management API Access Endpoints

You can configure multiple Management API Access Endpoints for Gateway and Cache communication. Each endpoint configuration includes:
  • Name: A descriptive name for the API endpoint configuration (e.g., “Production Cluster”, “DR Site”)
  • Gateway Management API Access URL: The health check address of your Gateway server. This is where the Apinizer Management Console connects to Gateway pods for configuration updates. Example: http://worker-management-api-http-service.prod.svc.cluster.local:8091
  • Cache Management API Access URL: The health check address of your Cache Server. Gateway pods use this URL to connect to Cache Servers. Example: http://cache-http-service.apinizer-cache.svc.cluster.local:8090
Multiple Endpoints: You can configure multiple Management API endpoints for different scenarios:
  • Multi-region deployments: Configure separate endpoints for each region
  • High availability: Set up redundant endpoints for failover
  • Cluster separation: Use different endpoints for different Kubernetes clusters
When multiple endpoints are configured, you can select which endpoint to use via the environmentClusterName variable in Additional Variables.
Endpoint Configuration:
  • Each endpoint must have both Gateway and Cache Management API URLs configured
  • URLs should use Kubernetes service discovery format for cross-namespace communication
  • Ensure network connectivity between Management Console, Gateway pods, and Cache Server pods

API Proxy Traffic Log Connectors

Log connectors that record all API Proxy traffic and requests in the Gateway Runtime environment are defined here. For more information about adding connectors to the Gateway Runtime environment, see this page.

Gateway Engine Configuration

Gateway engine corresponds to Gateway pods in the Kubernetes environment.
  • Gateway engine (apinizer-worker): The core module of the Apinizer Platform, responsible for routing all API requests to BackendAPI and works as Policy Enforcement Point
The Gateway engine definition fields are described below:
  • Count: The number of Gateway engines; corresponds to the “replicas” value in the Kubernetes deployment. Specifies the number of Pods that will be created in the Kubernetes Cluster.
  • CPU: The maximum number of CPU cores the pod will use.
  • Memory: The maximum memory value the pod will use.
  • Memory Unit: The unit of the memory value; MB or GB.
Recommended Values:
  • 1 CPU: 2 GB memory
  • 2 CPU: 4 GB memory
  • 4 CPU: 6 GB memory
  • 8 CPU: 10 GB memory
  • HTTP Enabled (HTTP Enabled): Comes selected by default
  • HTTPS Enabled (HTTPS Enabled): Selected if HTTPS is desired; in this case the files required for encryption must be uploaded
  • mTLS: Can only be selected when the HTTPS setting is enabled, since it runs on the HTTPS protocol
Keystore and Truststore: When the HTTPS protocol is selected, keystore and truststore files can be uploaded in JKS or PFX format.
Service Ports: A service port in the range 30080-32767 is entered. The service will be created as NodePort in Kubernetes.
Gateway Management API Service:
  • Create HTTP service for API Gateway Management API access (httpServiceForManagementAPIEnabled): When enabled, creates a separate HTTP service for Gateway Management API access. This service is used by Apinizer Management Console to communicate with Gateway pods for configuration updates.
  • Create HTTPS service for API Gateway Management API access (httpsServiceForManagementAPIEnabled): When enabled, creates a separate HTTPS service for Gateway Management API access. Requires keystore and truststore files to be uploaded. Use this option when secure communication between Management Console and Gateway pods is required.
Management API Services: These services are separate from the main HTTP/HTTPS services used for API Proxy traffic. They are specifically used for:
  • Configuration updates from Management Console to Gateway pods
  • Health checks and monitoring
  • Management operations
When these services are enabled, you should configure the corresponding URLs in the Management API Access Endpoints section.
Default and optional variables, and the values they will run with in the pod, are defined here.
Important Variables:
Unless noted otherwise, each variable below applies to the HTTP Worker, HTTP+Websocket, and Management API targets.
  • JAVA_OPTS (all targets): Sets JVM Heap values
  • tuneWorkerThreads: Minimum worker thread count of the server (default: 1024)
  • tuneWorkerMaxThreads: Maximum worker thread count of the server (default: 2048)
  • tuneBufferSize: Buffer area a thread uses for write operations (default: 16384 bytes)
  • tuneIoThreads: IO thread count (default: CPU core count)
  • tuneBacklog: Backlog value (default: 1000)
  • tuneRoutingConnectionPoolMaxConnectionPerHost: Maximum connections per host (default: 1024)
  • tuneRoutingConnectionPoolMaxConnectionTotal: Total maximum connections (default: 2048)
  • tuneRoutingConnectionPoolMinThreadCount: Routing connection pool minimum thread count
  • tuneRoutingConnectionPoolMaxThreadCount: Routing connection pool maximum thread count
  • tuneElasticsearchClientIoThreadCount: Elasticsearch client IO thread count
  • tuneMaxConcurrentRequest: Maximum concurrent request count (default: tuneWorkerMaxThreads * tuneIoThreads)
  • tuneMaxQueueSize: Maximum queue size (default: 0)
  • tuneCacheConnectionPoolMaxConnectionTotal: Cache connection pool total maximum connections (default: 2048)
  • tuneApiCallConnectionPoolMaxConnectionPerHost: API Call connection pool maximum connections per host (default: 256)
  • tuneApiCallConnectionPoolMaxConnectionTotal: API Call connection pool total maximum connections (default: 4096)
  • multipartConfigMaxFileSize: Multipart config maximum file size (default: 100MB)
  • multipartConfigMaxRequestSize: Multipart config maximum request size (default: 100MB)
  • multipartConfigFileSizeThreshold: Multipart config file size threshold (default: 100MB)
  • defaultCharset: Default character set (default: UTF-8)
  • deploymentTimeout: Deployment timeout in seconds (default: 30)
  • tuneReadTimeout: Client data read timeout; the connection is closed if the client does not send data within this time (default: 30000 ms / 30 seconds)
  • tuneNoRequestTimeout: No-request-after-connection timeout; the connection is closed if no request is sent after the connection is established (default: 60000 ms / 60 seconds)
  • http2Enabled: Enable the HTTP/2 protocol (default: false)
  • environmentClusterName: Environment cluster name
  • tuneAsyncExecutorCorePoolSize: Async executor core pool size (default: tuneWorkerThreads)
  • tuneAsyncExecutorMaxPoolSize: Async executor maximum pool size (default: tuneWorkerMaxThreads)
  • tuneAsyncExecutorQueueCapacity: Async executor queue capacity (default: tuneMaxQueueSize > 0 ? tuneMaxQueueSize : 1000)
  • METRICS_ENABLED: Whether the metric collection feature is enabled (default: false)
  • logLevel (all targets): Application log level (ERROR, WARNING, INFO, DEBUG, TRACE, OFF)
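Some of the defaults above are derived from other variables. The sketch below reproduces those formulas exactly as stated in the variable list; it is a reading aid, not Apinizer code, and the function name is hypothetical.

```python
def derived_defaults(cpu_cores: int,
                     tune_worker_max_threads: int = 2048,
                     tune_max_queue_size: int = 0) -> dict:
    """Compute the documented derived defaults for the tuning variables."""
    tune_io_threads = cpu_cores  # tuneIoThreads default: CPU core count
    return {
        # tuneMaxConcurrentRequest default: tuneWorkerMaxThreads * tuneIoThreads
        "tuneMaxConcurrentRequest": tune_worker_max_threads * tune_io_threads,
        # tuneAsyncExecutorQueueCapacity default: tuneMaxQueueSize if > 0, else 1000
        "tuneAsyncExecutorQueueCapacity": (
            tune_max_queue_size if tune_max_queue_size > 0 else 1000
        ),
    }

print(derived_defaults(4))
# {'tuneMaxConcurrentRequest': 8192, 'tuneAsyncExecutorQueueCapacity': 1000}
```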
Asynchronous Operations Thread Pool: The Rest API Policy and Script Policy use a centralized thread pool for asynchronous operations, configured in ManagementConfig. This improves thread management and optimizes resource usage. Asynchronous operations performed in environments are managed by this separate thread pool, which can be configured with specific parameters to ensure optimal performance and resource allocation.
Usage Scenarios:
  • Rest API Policy: Asynchronous HTTP calls to external services
  • Script Policy: Asynchronous script executions that don’t block the main request thread
  • Logging Operations: Asynchronous log writing operations
  • Traffic Mirroring: Asynchronous mirroring of requests to secondary endpoints
Configuration Guidelines:
  • tuneAsyncExecutorCorePoolSize: Minimum number of threads kept alive in the pool (default: same as tuneWorkerThreads)
  • tuneAsyncExecutorMaxPoolSize: Maximum number of threads that can be created (default: same as tuneWorkerMaxThreads)
  • tuneAsyncExecutorQueueCapacity: Maximum number of tasks that can wait in the queue before new threads are created (default: tuneMaxQueueSize if > 0, otherwise 1000)
Thread Pool Sizing:The async executor thread pool is separate from the main worker thread pool. Ensure that the total thread count (worker threads + async executor threads) does not exceed your system’s capacity. Consider CPU cores and memory when configuring these values.
Recommended Thread Values by CPU:
  • 1 CPU: tuneWorkerThreads 512, tuneWorkerMaxThreads 1024
  • 2 CPU: tuneWorkerThreads 1024, tuneWorkerMaxThreads 2048
  • 4 CPU: tuneWorkerThreads 2048, tuneWorkerMaxThreads 4096
  • 8 CPU: tuneWorkerThreads 4096, tuneWorkerMaxThreads 8192
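The recommended values above scale linearly with CPU count (512 and 1024 threads per core). A minimal sketch reproducing that pattern; this is an observation about the table, not an official Apinizer sizing formula, and the helper name is hypothetical.

```python
def recommended_threads(cpu: int) -> tuple:
    """Return (tuneWorkerThreads, tuneWorkerMaxThreads) per the recommended table."""
    return 512 * cpu, 1024 * cpu

for cpu in (1, 2, 4, 8):
    print(cpu, recommended_threads(cpu))
```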
Java Options Warning: When configuring the Java Options setting in the Additional Variables field, keep the following in mind. The -Xmx and -Xms settings disable automatic heap sizing. Because Apinizer runs inside a container, it sets the JVM heap to use 75% of the memory given to the container. UseContainerSupport is enabled by default. The old -XX:MinRAMFraction and -XX:MaxRAMFraction flags are now deprecated; the newer -XX:MaxRAMPercentage flag takes a value between 0.0 and 100.0 and defaults to 25.0. Therefore, with a 1 GB memory limit, the JVM heap is limited to about 250 MB by default.
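The container-heap arithmetic described above can be checked with a one-line calculation. This sketch only reproduces the -XX:MaxRAMPercentage arithmetic; it is not how the JVM or Apinizer is configured.

```python
def max_heap_mb(container_limit_mb: int, max_ram_percentage: float = 25.0) -> int:
    """Heap cap implied by -XX:MaxRAMPercentage for a given container memory limit."""
    return int(container_limit_mb * max_ram_percentage / 100)

print(max_heap_mb(1024))        # JVM default 25% of a 1 GiB limit -> 256 MB (~250 MB)
print(max_heap_mb(1024, 75.0))  # Apinizer's 75% setting -> 768 MB
```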
If Gateway Management API will communicate with gateway pods over HTTPS:
  • The Create HTTPS service for API Gateway Management API access option is selected
  • Keystore and truststore files can be uploaded in JKS or PFX format
  • Gateway Server Access URL is entered
Example: https://worker-http-service.prod.svc.cluster.local:8443
gRPC Tuning Parameters:
  • tuneGrpcKeepAliveTime: gRPC keep-alive time in seconds (default: 120)
  • tuneGrpcKeepAliveTimeout: gRPC keep-alive timeout in seconds (default: 20)
  • tuneGrpcMaxMessageSize: gRPC maximum message size in bytes (default: 4194304 / 4MB)
  • tuneGrpcMaxHeaderListSize: gRPC maximum header list size in bytes (default: 8192)
  • tuneGrpcMaxConnectionAge: gRPC maximum connection age in seconds (default: 3600)
  • tuneGrpcMaxConnectionAgeGrace: gRPC maximum connection age grace period in seconds (default: 30)
  • tuneGrpcMaxConnectionIdle: gRPC maximum connection idle time in seconds (default: 300)
  • tuneGrpcMaxInboundMessageSize: gRPC maximum inbound message size in bytes (default: 4194304 / 4MB)
  • tuneGrpcMaxInboundMetadataSize: gRPC maximum inbound metadata size in bytes (default: 8192)
  • tuneGrpcHandshakeTimeout: gRPC handshake timeout in seconds (default: 20)
  • tuneGrpcPermitKeepAliveTime: gRPC permit keep-alive time in seconds (default: 120)
  • tuneGrpcThreadPoolSize: gRPC thread pool size (default: CPU count * 2)
WebSocket Tuning Parameters:
  • tuneWebsocketIdleTimeout: WebSocket idle timeout in seconds (default: 60)
  • tuneWebsocketBufferSize: WebSocket buffer size in bytes (default: 65536)
  • tuneWebsocketTcpNoDelay: WebSocket TCP no delay, true/false (default: true)
Purpose of CORS Settings: These CORS (Cross-Origin Resource Sharing) parameters apply to token acquisition requests. Policies are not added to token acquisition requests from the Management Console; these settings are provided through environment configuration instead. They are not used in the normal API Proxy flow and are only valid for token acquisition endpoints.
CORS (Cross-Origin Resource Sharing) Parameters (none has a default value):
  • tokenCorsAccessControlAllowOrigin: Access-Control-Allow-Origin header value
  • tokenCorsAccessControlAllowCredentials: Access-Control-Allow-Credentials header value
  • tokenCorsAccessControlAllowMethods: Access-Control-Allow-Methods header value
  • tokenCorsAccessControlAllowHeaders: Access-Control-Allow-Headers header value
  • tokenCorsAccessControlOrigin: Origin header value
  • tokenCorsAccessControlRequestMethod: Access-Control-Request-Method header value
  • tokenCorsAccessControlRequestHeaders: Access-Control-Request-Headers header value
  • tokenCorsAccessControlExposeHeaders: Access-Control-Expose-Headers header value
  • tokenCorsAccessControlMaxAge: Access-Control-Max-Age header value, in seconds
Purpose of X-Forwarded-For Settings: These X-Forwarded-For parameters apply to token acquisition requests. Policies are not added to token acquisition requests from the Management Console; these settings are provided through environment configuration instead. They are not used in the normal API Proxy flow and are only valid for token acquisition endpoints.
X-Forwarded-For Parameters:
  • tokenXForwardedForIpHeader: X-Forwarded-For IP header name (no default)
  • tokenXForwardedForIpOrder: X-Forwarded-For IP order, FIRST or LAST (default: LAST)
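The FIRST/LAST choice determines which entry in a comma-separated X-Forwarded-For list is treated as the client IP. A hedged illustration of that behavior (simplified parsing; the function is hypothetical and not Apinizer's implementation):

```python
def client_ip(x_forwarded_for: str, order: str = "LAST") -> str:
    """Pick the client IP from an X-Forwarded-For header.
    FIRST takes the first entry; LAST (the default) takes the last."""
    ips = [ip.strip() for ip in x_forwarded_for.split(",")]
    return ips[0] if order == "FIRST" else ips[-1]

header = "203.0.113.7, 10.0.0.5, 10.0.0.9"
print(client_ip(header, "FIRST"))  # 203.0.113.7
print(client_ip(header))           # 10.0.0.9
```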
Security configuration can be made with Additional Variables:
  • jdkTLSVersions: TLS protocol versions to be supported (default: TLSv1, TLSv1.1, TLSv1.2, TLSv1.3)
  • jdkTLSDisabledAlgorithms: TLS algorithms to be disabled for security reasons (default: RC4, DES, MD5withRSA, DH keySize < 768…)
  • jdkCipherSuites: Cipher suites to be used (default: JDK default value)
Security Recommendations:
  • TLSv1.2 and higher versions should be used
  • GCM mode encryption suites should be preferred
  • Weak algorithms (RC4, DES, 3DES) should be disabled
  • Key sizes should be at an adequate security level (at least 2048 bits for RSA, at least 224 bits for EC)
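As a generic illustration of the "TLSv1.2 and higher" recommendation, Python's standard ssl module can enforce a minimum protocol version on a client context. Apinizer itself is configured through the jdkTLSVersions variable above, not through Python; this is only a sketch of the same idea in a different stack.

```python
import ssl

# Client-side TLS context that refuses TLSv1 and TLSv1.1
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True
```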

Setting Host Aliases

What is Host Alias? Why is it needed?

IP addresses on the network are sometimes placed behind host names. If these names are not defined in a nameserver or hosts file, or if Apinizer cannot otherwise resolve them, a Host Alias definition must be made so that Gateway pods can resolve them. In Kubernetes, alias host names can be mapped to host names or their corresponding IP addresses. This setting is defined within the deployment.yaml file.
A republish is required for changes made here to take effect. Note that, unlike a version update, this operation causes a few minutes of interruption.
Example Usage:
  • IP Address: 10.10.10.10; Host Aliases: alias_name, other_alias_name

Environment Publishing

1. Select Unpublished Environment: To publish a Gateway Runtime environment, click the Unpublished button.
2. Confirm Operation: Click the Publish button in the window that appears to confirm the operation; the Gateway Runtime environment is then deployed to the Kubernetes server.

Environment Republishing

1. Select Published Environment: Hover over an environment that is in published status and click the Published button.
2. Republish: Click the Republish button in the window that appears to confirm the operation.
After the Environment Republish operation, Pods are also restarted.
There may be a few minutes of interruption during the republishing operation. This operation is required for changes made to Gateway Engine configuration, Host Aliases, and other environment settings to take effect.

JWT Token Validation Key

When the environment is saved, the JWT Token Validation Key is generated. This key is the private key used in token generation by the Apinizer Token Service for authentication policies. Users who want to generate their own tokens should use this private key. To access this information, go to the JWT Token Validation Keys tab. Available Operations:
  • Redeploy: Reloading the current key to servers without making changes
  • Regenerate & Deploy: Generating a new key and loading it to servers
  • Upload & Deploy: Uploading a PEM Encoded Private Key file to generate a new key

Environment Deletion

To delete an environment, select it and click Remove Environment on the Remove Environment tab at the bottom; the environment's information is then deleted from the database.
Warning: When the deletion operation is completed, all API Proxy deployments registered in this environment are also deleted and API Proxies previously deployed to this environment can no longer be accessed through this environment.

Metric Monitor

To monitor the status of Gateway pods on Kubernetes, click the Pods link of the relevant Gateway Runtime environment in the Gateway Runtime environment list.
If you want to access metrics for all Gateway Runtime environments, see the Kubernetes Workloads page.

Diagnostics

You can access detailed system metrics, JVM information, thread states, and performance data for Gateway Runtimes. You can access the Diagnostics page by clicking the Diagnostics button of the relevant Gateway Runtime environment from the Gateway Runtime environment list.
For Diagnostics features, usage, and detailed information, see the Environment Diagnostics page.