Gateway Environments
An environment is a runtime execution context for API Proxies in an enterprise.
This section describes environment creation and related operations.
Environment management becomes available when the Manage Kubernetes Namespace and Resources with Apinizer option is enabled on the System General Settings screen.
Environment Creation
The picture below shows the Environment Settings:
The fields that contain general information about the environment are shown in the table below.
Field | Description |
---|---|
Type | Either Test or Production can be selected as the type. Test environments are not deducted from the license. |
Communication Protocol Type | One of the HTTP, gRPC, or Websocket communication protocols can be selected. The type selected here determines which API Proxies can be deployed to the environment: REST and SOAP API Proxies can be deployed to HTTP environments, gRPC API Proxies to gRPC environments, and Websocket API Proxies to Websocket environments. |
Name | Name of environment. |
Key | An environment-specific abbreviated key used for the created environment. |
Node List | The Kubernetes worker servers on which the created environment will run are selected here. |
Project | The projects in which the environment can be used can be selected, or the field can be left blank so that the environment can be used in all projects. No project is selected by default. If one or more projects are selected, only the API Proxies included in those projects can be deployed to this environment, and the environment must also be added to newly created projects before it can be used in them. |
Access Address | The external access address of the API Proxies running in the environment. It is explained in detail in the previous section. |
Description | It can be used for ease of management and important notes. |
Gateway Server Access URL | The NodePort- or Ingress-type service access address required to deploy the configurations made in the Apinizer API Manager to the Gateway pods is entered here. Example: http://worker-service.prod.svc.cluster.local:8091 (a sketch of a matching Kubernetes Service is shown after this table). If the HTTPS Enabled option is selected, the address entered here must be an HTTPS address accordingly. Example: https://worker-https-service.prod.svc.cluster.local:8443 |
Cache Server Access URL | The NodePort- or Ingress-type service access address required to deploy the configurations made in the Apinizer API Manager to the Cache pods, and for the Gateway pods to access the cache pods, is entered here. Example: http://cache-service.prod.svc.cluster.local:8090 |
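The cluster-local addresses above follow the Kubernetes pattern http://&lt;service-name&gt;.&lt;namespace&gt;.svc.cluster.local:&lt;port&gt;. As a hedged sketch, a Service like the one below would resolve to the first example address; the service name, namespace, labels, and ports are placeholders and may differ in your installation.

```yaml
# Hypothetical Service that would resolve to
# http://worker-service.prod.svc.cluster.local:8091
apiVersion: v1
kind: Service
metadata:
  name: worker-service      # <service-name> part of the URL
  namespace: prod           # <namespace> part of the URL
spec:
  selector:
    app: apinizer-worker    # assumed label on the gateway (worker) pods
  ports:
    - name: http
      port: 8091            # port used in the Gateway Server Access URL
      targetPort: 8091
```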
API Proxy Traffic Log Connectors
The log connectors to which all API Proxy traffic and extensions in the environment will be logged are defined here.
The picture below shows the API Proxy Traffic Log Connector definitions:
Please refer to this page for more information about adding a connector to an environment.
Gateway Engine and Cache Server Settings
The gateway engine and the cache server correspond to pods in the Kubernetes environment.
The gateway engine is named apinizer-worker on the Apinizer Platform. It is the core module of the Apinizer Platform, responsible for routing all API requests to the Backend API, and acts as the Policy Enforcement Point.
The cache server is named apinizer-cache on the Apinizer Platform. It is where the cache values required by Apinizer are kept.
The picture below shows the Gateway and Cache server settings:
The fields used for configuration in the Gateway Engine section are shown in the table below.
Field | Description |
---|---|
Count | The number of gateway engine pods; equivalent to the replica count (ReplicaSet) in the Kubernetes cluster. |
CPU | The maximum number of CPU cores that the pod will use. |
Memory | The maximum amount of memory the pod will use. |
Memory Unit | The unit of value required for the memory is selected; MB, GB. |
Service Access Information | The HTTP Enabled option is selected by default. If HTTPS is to be used, the HTTPS Enabled option is also selected; in this case the required keystore and truststore files must be uploaded. Since the mTLS setting works over the HTTPS protocol, it can only be selected when HTTPS is enabled; it allows the server to request authentication from the client but does not enforce this as a strict requirement for establishing the connection. If mTLS authorization is required, the mTLS policy should be used. |
Keystore | When HTTPS protocol is selected, keystore files can be loaded in JKS or PFX format. |
Truststore | When HTTPS protocol is selected, truststore files can be loaded in JKS or PFX format. |
Keystore Password | Enter the password of the keystore file. |
Truststore Password | Enter the password of the truststore file. |
Create Secure Service For Gateway Service Access | The port of the secure service is entered; it must be in the range 30080-32767 (see the sketch after this table). |
Create Service For Gateway Service Access | The port information required to access the services running in the application. |
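For reference, a NodePort Service matching the secure gateway access described in the table above might look like the sketch below; the names and port numbers are illustrative, and only the port range (30080-32767) comes from the table.

```yaml
# Hypothetical NodePort Service for secure (HTTPS) gateway access
apiVersion: v1
kind: Service
metadata:
  name: worker-https-service    # illustrative name
  namespace: prod               # illustrative namespace
spec:
  type: NodePort
  selector:
    app: apinizer-worker        # assumed label on the gateway (worker) pods
  ports:
    - name: https
      port: 8443                # in-cluster service port
      targetPort: 8443
      nodePort: 30443           # must fall within the 30080-32767 range
```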
If gRPC is selected as Communication Protocol
If the Gateway Management API will communicate with the gRPC gateway pods over HTTPS, the Create HTTPS service for Gateway Management API access option is selected. Keystore and truststore files can be uploaded in JKS or PFX format. To push the configurations made in the Apinizer Management Console to the gRPC gateway pods, the service access address is entered in the Gateway Management API Access URL field, as in the example below.
Ex: https://worker-http-service.prod.svc.cluster.local:8443
In the Service Port field, the port that exposes gRPC communication to the outside world is specified. If gRPC communication to the outside world is to be secured, the Create secure service for Gateway Service access option is selected. To configure the security settings, keystore and truststore files can be uploaded in JKS or PFX format.
If Websocket is selected as Communication Protocol
If the Gateway Management API will communicate with the Websocket gateway pods over HTTPS, the Create HTTPS service for Gateway Management API access option is selected. Keystore and truststore files can be uploaded in JKS or PFX format. To push the configurations made in the Apinizer Management Console to the Websocket gateway pods, the service access address is entered in the Gateway Management API Access URL field, as in the example below.
Ex: https://worker-http-service.prod.svc.cluster.local:8443
In the Service Port field, the port that exposes Websocket communication to the outside world is specified. If Websocket communication to the outside world is to be secured, the Create secure service for Gateway Service access option is selected. To configure the security settings, keystore and truststore files can be uploaded in JKS or PFX format.
Additional Variables
The default and optional variables, and their values, used when running the pods are defined in this section.
Default variables cannot be deleted; only their values can be edited, or new variables can be added (a sketch of how these variables appear on the pods is given after the table below).
Variable | Target Environment Type | Description |
---|---|---|
JAVA_OPTS | All | -XX:MaxRAMPercentage sets the JVM heap to use 75% of the memory allocated to the container, because the JVM runs inside the container. http.maxConnections sets the maximum number of HTTP connections that can be created from this environment. |
tuneWorkerThreads | Http Worker, Management API | Specifies the minimum number of worker threads the server runs with. |
tuneWorkerMaxThreads | Http Worker, Management API | Specifies the maximum number of worker threads the server runs with. |
tuneBufferSize | Http Worker, Management API | The size, in bytes, of the buffer a thread uses for writing. |
tuneIoThreads | Http Worker, Management API | The number of IO threads. The recommended value is in line with the number of processors. |
tuneBacklog | Http Worker, Management API | Specifies the maximum pending connection queue size when the server is ready to accept connections. |
tuneRoutingConnectionPoolMaxConnectionPerHost | Http Worker, Management API | Specifies the maximum connection pool size that can be used per host for the backend connections of API Proxies. |
tuneRoutingConnectionPoolMaxConnectionTotal | Http Worker, Management API | Specifies the total maximum connection pool size used for the backend connections of API Proxies. |
logLevel | All | Sets the log level with which the application starts when the environment is started. It can take the values ERROR, WARNING, INFO, DEBUG, TRACE, and OFF. |
defaultCharset | Http Worker, Management API | Specifies the character set used in request and response operations. When this variable is not added, the UTF-8 character set is used by default. |
multi-part / File Upload Parameters | Http Worker, Management API | Key concepts used to configure multi-part HTTP requests for file uploads. The size values of all keys are in bytes, so 1024*1024*100 equals 100 MB. |
Token Service CORS Parameters | Http Worker, Management API | When a JavaScript application connects to obtain a JWT or OAuth2 token via Apinizer, the CORS values must be returned from the Apinizer token service. These parameters are used if JWT or OAuth2 token settings will be managed. |
Token Service X-Forwarded-For Parameters | Http Worker, Management API | When a request is made to obtain a JWT or OAuth2 token via Apinizer, the client's IP address must be obtained. These parameters are used if JWT or OAuth2 token settings will be managed. |
tuneGrpcKeepAliveTime | Grpc Worker | The time between ping messages used to keep the connection alive between client and server (in seconds). |
tuneGrpcKeepAliveTimeout | Grpc Worker | Maximum time to wait for a keep-alive ping response (in seconds). |
tuneGrpcMaxMessageSize | Grpc Worker | Maximum message size that can be processed (in bytes). |
tuneGrpcMaxHeaderListSize | Grpc Worker | Maximum allowed HTTP header list size (in bytes). |
tuneGrpcMaxConnectionAge | Grpc Worker | Maximum duration a connection may exist (in seconds). |
tuneGrpcMaxConnectionAgeGrace | Grpc Worker | Additional time given to aged connections before forced closure (in seconds). |
tuneGrpcMaxConnectionIdle | Grpc Worker | Maximum time a connection may be idle (in seconds). |
tuneGrpcMaxInboundMessageSize | Grpc Worker | Maximum size of incoming messages that can be received (in bytes). |
tuneGrpcMaxInboundMetadataSize | Grpc Worker | Maximum size of incoming metadata that can be processed (in bytes). |
tuneGrpcHandshakeTimeout | Grpc Worker | Maximum time to wait for handshake completion (in seconds). |
tuneGrpcPermitKeepAliveTime | Grpc Worker | Minimum allowed keep-alive time interval (in seconds). |
tuneGrpcThreadPoolSize | Grpc Worker | Size of the thread pool allocated for the gRPC server. |
tuneWebsocketReuseAddr | Websocket Worker | Boolean value determining whether reuse of the same port address is allowed. |
tuneWebsocketConnectionLostTimeout | Websocket Worker | Time to wait before considering the connection lost (in seconds). |
tuneWebsocketMaxPendingConnections | Websocket Worker | Maximum number of WebSocket connections that can be pending for processing. |
deploymentTimeout | Management API | When a request is received after deployment, it hits a pod within the environment. This pod locates the other pods within the environment (namespace) and synchronously sends the same configuration to them. This parameter sets the timeout duration for these operations between pods. The default value is 120 seconds. |
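These variables are applied by Apinizer to the pods of the environment; conceptually they end up as container environment variables. A hedged sketch of how a few variables from the table might appear in the resulting pod specification is shown below (the image name and the values are examples only, not recommendations).

```yaml
# Illustrative excerpt of a worker container spec with additional variables
containers:
  - name: apinizer-worker
    image: apinizer/worker              # assumed image name
    env:
      - name: JAVA_OPTS
        value: "-XX:MaxRAMPercentage=75.0 -Dhttp.maxConnections=1024"
      - name: logLevel
        value: "INFO"                   # ERROR, WARNING, INFO, DEBUG, TRACE or OFF
      - name: tuneIoThreads
        value: "4"                      # recommended: in line with the processor count
      - name: deploymentTimeout
        value: "120"                    # seconds; the default is 120
```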
Security Configuration with Additional Variables
Parameter | Description | Default Value | Possible Values | Target Java Property |
---|---|---|---|---|
jdkTLSVersions | Specifies supported TLS protocol versions | TLSv1,TLSv1.1,TLSv1.2,TLSv1.3 | SSLv2Hello, SSLv3, TLSv1, TLSv1.1, TLSv1.2, TLSv1.3 | https.protocols, jdk.tls.client.protocols, jdk.tls.server.protocols |
jdkTLSDisabledAlgorithms | Specifies TLS algorithms to be disabled for security reasons | RC4, DES, MD5withRSA, DH keySize < 768, EC keySize < 224, 3DES_EDE_CBC, anon, NULL | RC4, DES, MD5withRSA, DH keySize values, EC keySize values, 3DES_EDE_CBC, anon, NULL, SSLv3, TLSv1, TLSv1.1 | jdk.tls.disabledAlgorithms |
jdkCertPathDisabledAlgorithms | Defines algorithms to be disabled during certificate validation process | Uses JDK default value | MD2, MD5, SHA1, RSA keySize values, DSA keySize values, EC keySize values | jdk.certpath.disabledAlgorithms |
jdkCipherSuites | Defines the cipher suites to be used | Uses JDK default value | TLS 1.3: TLS_AES_256_GCM_SHA384, TLS_AES_128_GCM_SHA256, TLS_CHACHA20_POLY1305_SHA256; TLS 1.2: TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384; Legacy: TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA | https.cipherSuites |
jdkAllowUnsafeProtocols | Determines whether unsafe protocols are allowed | false | true, false | https.protocols.allow_unsafe, jdk.tls.client.protocols.allow_unsafe |
Important notes about the security configuration:
- When the environment variables are not set:
  - "TLSv1,TLSv1.1,TLSv1.2,TLSv1.3" is used for TLS Versions
  - "RC4, DES, MD5withRSA, DH keySize < 768, EC keySize < 224, 3DES_EDE_CBC, anon, NULL" is used for TLS Disabled Algorithms
  - JDK default values are used for the other parameters
- Security recommendations:
  - TLSv1.2 and above are recommended
  - Cipher suites with GCM mode should be preferred
  - Weak algorithms (RC4, DES, 3DES) should be disabled
  - Key sizes should maintain adequate security levels (minimum 2048 bits for RSA, minimum 224 bits for EC)
- Performance impact:
  - The order of cipher suites can affect performance
  - Disabling too many cipher suites may cause compatibility issues with legacy systems
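As an illustration of the mapping in the table above, restricting an environment to modern TLS versions could be expressed with additional variables roughly as follows; only the variable names and the target Java properties come from the table, while the values and the exact way Apinizer passes them to the JVM are assumptions.

```yaml
# Illustrative additional variables restricting an environment to modern TLS
env:
  - name: jdkTLSVersions
    value: "TLSv1.2,TLSv1.3"    # -> https.protocols, jdk.tls.client.protocols, jdk.tls.server.protocols
  - name: jdkTLSDisabledAlgorithms
    value: "RC4, DES, MD5withRSA, 3DES_EDE_CBC, anon, NULL, SSLv3, TLSv1, TLSv1.1"  # -> jdk.tls.disabledAlgorithms
  - name: jdkAllowUnsafeProtocols
    value: "false"              # -> https.protocols.allow_unsafe, jdk.tls.client.protocols.allow_unsafe
```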
Cache Server Configuration
The fields used for Cache Server configuration are shown in the table below.
Field | Description |
---|---|
Cache Count | The number of cache pods; equivalent to the replica count (ReplicaSet) in the Kubernetes cluster. |
CPU | The maximum number of CPU cores that the pod will use. |
Memory | The maximum amount of memory the pod will use. |
Memory Unit | The unit of value required for the memory is selected; MB, GB. |
Additional Variables | Default and optional variables and their values to be used when running the pod are defined here. Default variables cannot be deleted; only their values can be edited. For cache pods to be accessible to other cache pods via Kubernetes, the CACHE_SERVICE_NAME value should be added. Daily quota information is also kept in the cache and is reset according to the UTC timezone. If you want the daily quota reset time to follow your local time, add the CACHE_QUOTA_TIMEZONE value to the additional variables; the value must be written in the form "+03:00". If the cache pods should not try to load all values from the database at once on startup, the CACHE_LAZY_MODE value can be added with the value lazy; in this case priority is given to starting the pod, and the values continue to be loaded after the pod is up. Entering this value is recommended when there are a large number of records in the database. Maximum and minimum thread values were added to meet the high-concurrency requirements of the cache pods; they allow Tomcat to process requests more efficiently and reduce latency. (A sketch of these variables is given after this table.) |
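A hedged sketch of the cache-related additional variables mentioned above, as they might appear on the cache pods, is shown below; the service name is a placeholder.

```yaml
# Illustrative cache pod additional variables
env:
  - name: CACHE_SERVICE_NAME
    value: "cache-service"    # assumed Kubernetes service name of the cache pods
  - name: CACHE_QUOTA_TIMEZONE
    value: "+03:00"           # resets daily quotas at local midnight instead of UTC
  - name: CACHE_LAZY_MODE
    value: "lazy"             # start the pod first, keep loading cached values afterwards
```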
The following warning should be taken into account when configuring the Java Options setting in the Additional Variables area:
Please note that the -Xmx and -Xms settings disable automatic heap sizing.
Apinizer sets the JVM Heap values to use 75% of the memory allocated to the container because it runs inside the container.
UseContainerSupport is enabled by default.
The old (and somewhat broken) -XX:MinRAMFraction and -XX:MaxRAMFraction flags are now deprecated. There is a new -XX:MaxRAMPercentage flag that takes a value between 0.0 and 100.0 and defaults to 25.0. So with a 1 GB memory limit, the JVM heap is limited to ~250 MB by default.
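To make the arithmetic concrete, the sketch below shows how the container memory limit and MaxRAMPercentage interact; the numbers follow directly from the explanation above, while the env entry itself is only illustrative.

```yaml
# Heap sizing with a 1 GB container memory limit:
#   default  -XX:MaxRAMPercentage=25.0  ->  heap limited to ~250 MB (a quarter of 1 GB)
#   Apinizer -XX:MaxRAMPercentage=75.0  ->  heap limited to ~750 MB (three quarters of 1 GB)
env:
  - name: JAVA_OPTS
    value: "-XX:MaxRAMPercentage=75.0"  # do not combine with -Xmx/-Xms, which disable automatic sizing
```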
Setting Host Aliases
What is Host Alias? Why is it needed?
IP addresses on the network are sometimes placed behind host names. If those host names are not defined on the name server or in a hosts file, or if Apinizer cannot resolve them for some other reason, a Host Alias must be defined so that the worker pods can resolve these names.
Republishing is required for the changes made here to take effect. Note that, unlike a version update, this operation causes an interruption of a few minutes.
On Kubernetes, host aliases that map host names to their corresponding IP addresses can be defined for the pods. This setting is written to the deployment.yaml file.
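A minimal sketch of how such host aliases typically appear in a Deployment's pod specification is shown below; the host names and IP addresses are placeholders.

```yaml
# Illustrative hostAliases entry in the worker deployment.yaml
spec:
  template:
    spec:
      hostAliases:
        - ip: "10.0.0.15"               # placeholder IP address
          hostnames:
            - "backend.example.local"   # placeholder hostname the worker pods must resolve
        - ip: "10.0.0.16"
          hostnames:
            - "auth.example.local"
```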
The picture below shows the Host Alias settings:
Publishing Environment
Click the Unpublished button to publish an environment.
Click the Publish button in the confirmation window that appears, and the environment is deployed to the Kubernetes servers.
Republishing Environment
Hover over a published environment and click the Published button.
Click the Republish button in the confirmation window that appears.
After the environment is republished, the pods are also restarted.
JWT Token Validation Key
When an environment is saved, the JWT Token Validation Key is generated. This is the private key used when generating tokens through the Apinizer Token Service for authentication policies. Users who want to generate their own tokens should use the private key found here.
To access this information, go to the JWT Token Validation Keys tab.
By clicking the Redeploy button, the existing key can be redeployed to the servers without making any changes to the key.
By clicking the Regenerate & Deploy button, a new key can be generated and deployed to the servers.
By clicking the Upload & Deploy button, a PEM-encoded private key file can be uploaded; a new key is generated from it and used.
Deleting an Environment
By selecting the environment and clicking Remove Environment on the Remove Environment tab at the bottom, the environment's information is deleted from the database.
When the deletion is complete, the deployments of all API Proxies registered in this environment are also deleted, and API Proxies previously deployed to this environment can no longer be accessed through it.
Cache Monitor
With the settings in the Cache section of the Overview tab of the API Proxy screen, requests can be cached and answered without going to the Backend API. Quota and throttle policies are also managed via the cache pods.
To monitor, view, and delete the cache entries of an environment, go to this page from the Cache link of the environment.
The picture below shows the Cache Monitor:
Metric Monitor
To monitor the status of Worker and Cache pods on Kubernetes, click the Pods link of the relevant environment from the environment list.
The picture below shows the Pods Screen:
If you want to access metrics for all environments, click here.