The following tables show recommended Gateway Worker settings for different hardware profiles. These values should be validated with pre-production load tests.
JVM Parameters: The automatic memory profile system is recommended for all tiers (no need to set JAVA_OPTS manually). For profile details and manual GC configuration, see JVM Garbage Collector Tuning.
General rule: tuneWorkerThreads ≈ CPU × 512 and tuneWorkerMaxThreads ≈ CPU × 1024. These values are based on the thread management model of Undertow, the HTTP server used by the Gateway.
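The rule above can be computed from the detected core count. The sketch below is illustrative only; the class and method names are hypothetical, not part of the Gateway:

```java
public class WorkerThreadSizing {
    // General rule from this section:
    //   tuneWorkerThreads    ≈ CPU cores × 512
    //   tuneWorkerMaxThreads ≈ CPU cores × 1024
    public static int workerThreads(int cpuCores) {
        return cpuCores * 512;
    }

    public static int workerMaxThreads(int cpuCores) {
        return cpuCores * 1024;
    }

    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("tuneWorkerThreads=" + workerThreads(cores));
        System.out.println("tuneWorkerMaxThreads=" + workerMaxThreads(cores));
        // Example: a 4-core machine yields 2048 and 4096.
    }
}
```

These are starting points, not hard limits; validate them with the pre-production load tests mentioned above.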
IO threads are low-level threads that handle network I/O operations (socket read/write).
| Parameter | Default | Description |
| --- | --- | --- |
| tuneIoThreads | CPU core count | IO thread count |
IO thread count is typically kept equal to the CPU core count. Increasing it provides no benefit in most scenarios and may add context-switching overhead.
Asynchronous operations such as the RestApi Policy, Script Policy, logging, and traffic mirroring run on a separate thread pool, the async executor.
| Parameter | Default | Description |
| --- | --- | --- |
| tuneAsyncExecutorCorePoolSize | Same as tuneWorkerThreads | Minimum number of threads kept alive in the pool |
| tuneAsyncExecutorMaxPoolSize | Same as tuneWorkerMaxThreads | Maximum number of threads that can be created |
| tuneAsyncExecutorQueueCapacity | tuneMaxQueueSize if > 0, otherwise 1000 | Maximum number of tasks that can wait in the queue when all threads are busy |
Thread Pool Sizing Warning: The async executor thread pool is independent of the main worker thread pool. Ensure that the total thread count (worker + async executor) does not exceed your system's capacity. Consider CPU core count and available memory when determining thread counts.
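The three parameters described above map naturally onto Java's standard ThreadPoolExecutor model. The sketch below is illustrative only; the concrete sizes are assumptions, not Gateway defaults, and this is not the Gateway's actual implementation:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class AsyncExecutorSketch {
    // Builds a pool mirroring the three parameters above:
    //   corePoolSize    -> threads kept alive        (tuneAsyncExecutorCorePoolSize)
    //   maximumPoolSize -> hard cap on thread count  (tuneAsyncExecutorMaxPoolSize)
    //   queueCapacity   -> bounded task queue        (tuneAsyncExecutorQueueCapacity)
    public static ThreadPoolExecutor build(int core, int max, int queueCapacity) {
        return new ThreadPoolExecutor(
                core, max,
                60L, TimeUnit.SECONDS,                 // idle timeout for threads above core size
                new LinkedBlockingQueue<>(queueCapacity));
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = build(8, 16, 1000);
        System.out.println(pool.getCorePoolSize());    // 8
        System.out.println(pool.getMaximumPoolSize()); // 16
        pool.shutdown();
    }
}
```

Note one subtlety of this model: threads beyond the core size are only created once the bounded queue is full, so the queue capacity directly affects how quickly the pool grows toward its maximum.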
Client connection → tuneNoRequestTimeout (waiting for request) → Request received → tuneReadTimeout (reading data) → Streaming? → tuneStreamingReadTimeout
tuneNoRequestTimeout: Connection opened but no HTTP request sent — cleans up idle connections
tuneReadTimeout: Client stopped sending data during request processing — cleans up stuck requests
tuneStreamingReadTimeout: For long-lived connections such as SSE (Server-Sent Events) and LLM streaming, the normal tuneReadTimeout is insufficient
SSE/LLM Streaming Scenarios: In streaming connections, clients don't send data for extended periods, so the normal tuneReadTimeout may prematurely close the connection. tuneStreamingReadTimeout assigns a dedicated timeout value for streaming connections. The default value of 0 (unlimited) is suitable for most scenarios; however, you may set an appropriate upper limit for your environment to prevent resource leaks.
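The timeout selection described above can be sketched as a small helper. This is illustrative only; the class and method names are hypothetical, and 0 is treated as "unlimited" for streaming connections, per the default described above:

```java
public class ReadTimeoutSelection {
    // Returns the effective read timeout in milliseconds for a connection,
    // or -1 to indicate "no timeout" (streaming default of 0 = unlimited).
    public static long effectiveReadTimeout(boolean streaming,
                                            long tuneReadTimeout,
                                            long tuneStreamingReadTimeout) {
        if (streaming) {
            // SSE/LLM streaming connections get their own timeout;
            // 0 means unlimited, so no read timeout is applied.
            return tuneStreamingReadTimeout == 0 ? -1 : tuneStreamingReadTimeout;
        }
        // Normal requests use the regular read timeout.
        return tuneReadTimeout;
    }
}
```

For example, with tuneReadTimeout = 30000 and tuneStreamingReadTimeout = 0, a streaming connection gets no read timeout while a normal request still times out after 30 seconds.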
For tier-based performance comparisons and detailed benchmark results, see Capacity Planning.
Benchmark results were measured under ideal conditions (fast backend, minimal network latency, simple policy chain). Always perform load tests based on your own traffic patterns for production environments.