Routing Concept
Routing consists of two fundamental components in an API Proxy:
Client Route
The entry point where requests enter the API Proxy. Clients send requests to this endpoint.
Upstream Target
The backend API where requests are routed. The API Proxy sends requests to this address.
Routing Flow
The following diagram shows how requests and responses flow through the Gateway with the Routing and Upstream mechanism (a simplified sketch follows the steps):
1. Client Request
Client sends request to API Proxy
2. Client Route
Entry point where requests enter the API Proxy
Path, Method, Protocol and Port definitions
3. Routing Logic
Load Balancing, Failover and routing logic is applied
4. Upstream Target
Backend API where requests are routed
Backend address, protocol and configuration
5. Backend API
Processed request is sent to backend API
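As a rough sketch of this flow, assuming simplified ClientRoute and UpstreamTarget structures that are not Apinizer's internal model:

```python
from dataclasses import dataclass

@dataclass
class ClientRoute:
    path: str      # entry path exposed by the API Proxy
    methods: list  # allowed HTTP methods

@dataclass
class UpstreamTarget:
    url: str       # backend address the proxy forwards to

def handle_request(method: str, path: str,
                   route: ClientRoute, targets: list) -> str:
    # 1-2. Client request enters through the Client Route
    if not path.startswith(route.path) or method not in route.methods:
        return "404: no matching client route"
    # 3. Routing logic picks a target (here: always the first one)
    target = targets[0]
    # 4-5. Request is forwarded to the Upstream Target / backend API
    return f"forward {method} {path} -> {target.url}"

route = ClientRoute(path="/api/v1/products", methods=["GET", "POST"])
targets = [UpstreamTarget(url="http://backend-service:8080/products")]
print(handle_request("GET", "/api/v1/products/42", route, targets))
```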
Upstream Target
Upstream Target is the address of the backend API to which client requests in an API Proxy are routed. It is the point where the API Proxy communicates with the backend.
Upstream Target Overview
Upstream and target are fundamental concepts used when routing to backend services in an API Proxy. The Upstream Target represents the physical or logical address of the backend API. The API Proxy routes requests coming from the Client Route to this target.
Backend Address
URL or IP address of the Backend API
Protocol
HTTP, HTTPS, gRPC, WebSocket protocols
Load Balancing
Load balancing between multiple backend instances
Failover
Switching to alternative backend in error conditions
Upstream and Target
- Upstream: The configuration that defines the backend services to which the API Proxy routes requests. An upstream can contain multiple targets.
- Target: A backend service address defined within an upstream. Each target contains a URL and the necessary configuration information.
Upstream Target Structure
An Upstream Target contains the following information:
Example Upstream Targets
HTTP Target
HTTPS Target
gRPC Target
WebSocket Target
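The target examples above could look roughly like the following; the field names are illustrative assumptions, not Apinizer's configuration schema:

```python
# Illustrative upstream target definitions; field names are assumptions.
upstream_targets = {
    "http_target": {
        "protocol": "HTTP",
        "host": "backend-service",
        "port": 8080,
        "path": "/products",
    },
    "https_target": {
        "protocol": "HTTPS",
        "host": "secure-backend.example.com",
        "port": 443,
        "path": "/products",
        "connection_timeout_ms": 5000,  # optional timeouts (see Timeout Settings below)
        "read_timeout_ms": 30000,
    },
    "grpc_target": {
        "protocol": "gRPC",
        "host": "grpc-backend",
        "port": 50051,
        "service": "product.ProductService",
    },
    "websocket_target": {
        "protocol": "WebSocket",
        "host": "ws-backend",
        "port": 8081,
        "path": "/live",
    },
}
```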
Upstream Target Configuration
When creating an Upstream Target, the following information is defined:
Basic Configuration
- URL: Address of the Backend API
- Protocol: HTTP, HTTPS, gRPC, WebSocket
- Host: Backend server name or IP
- Port: Backend port number
- Path: Backend path (optional)
- Path Rewrite: Rewriting the backend path (optional)
Path Rewrite
Path rewrite can be performed in the Upstream Target:
Path Rewrite Examples:
- /api/v1/products → /products (prefix removal)
- /api/v1/products → /v2/products (version change)
- /api/products/{id} → /products/{id} (path simplification)
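A minimal sketch of how such rewrites behave, using ordinary regular expressions rather than Apinizer's rewrite syntax:

```python
import re

# Illustrative path-rewrite rules: (pattern, replacement) pairs.
rewrite_rules = [
    (r"^/api/v1(/products.*)$", r"\1"),      # prefix removal
    (r"^/api/v1/(products.*)$", r"/v2/\1"),  # version change
]

def rewrite(path: str) -> str:
    for pattern, replacement in rewrite_rules:
        if re.match(pattern, path):
            return re.sub(pattern, replacement, path)
    return path  # no rule matched; forward the path unchanged

print(rewrite("/api/v1/products/42"))  # -> /products/42 (first matching rule wins)
```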
Timeout Settings
- Connection Timeout: Maximum time allowed to establish a connection to the backend
- Read Timeout: Maximum time to wait for the backend response to be read
- Write Timeout: Maximum time allowed to write the request to the backend
Upstream Target Types
Single Target
A single backend instance receives all requests.
Multiple Targets
Multiple backend instances share the traffic (load balanced).
Dynamic Target
The backend is determined dynamically at request time, as in the sketch below.
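Below is a hedged illustration of the three target types; field names and backend addresses are assumptions rather than actual configuration keys:

```python
# Single Target: one backend instance receives all traffic.
single_target = {"targets": [{"url": "http://backend:8080"}]}

# Multiple Targets: several instances behind one upstream, load balanced.
multiple_targets = {
    "load_balancing": "round-robin",
    "targets": [
        {"url": "http://backend-1:8080", "weight": 1},
        {"url": "http://backend-2:8080", "weight": 1},
    ],
}

# Dynamic Target: the backend is resolved at request time,
# e.g. from a header value (purely illustrative).
def resolve_dynamic_target(headers: dict) -> str:
    tenant = headers.get("X-Tenant", "default")
    return f"http://{tenant}-backend:8080"

print(resolve_dynamic_target({"X-Tenant": "acme"}))  # http://acme-backend:8080
```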
Routing Types
The Apinizer platform provides routing support for different protocols:
HTTP Routing
Routing for HTTP/HTTPS protocol. Used for REST APIs.
gRPC Routing
Routing for gRPC protocol. Used for microservice architectures.
WebSocket Routing
Routing for WebSocket protocol. Used for real-time communication.
HTTP Routing
HTTP Routing provides routing support for REST APIs over the HTTP/HTTPS protocol.
HTTP Routing Features
Protocol Support
- HTTP/1.1
- HTTP/2
- HTTPS (TLS/SSL)
Method Support
- GET, POST, PUT, DELETE
- PATCH, HEAD, OPTIONS
- Custom methods
Content-Type
- application/json
- application/xml
- application/x-www-form-urlencoded
- multipart/form-data
Features
- Path matching (exact, prefix, regex)
- Query parameter handling
- Header manipulation
- Body transformation
- Host-based routing
- Method-based routing
HTTP Routing Configuration
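As a hedged illustration of what an HTTP routing definition might contain (field names are assumptions, not Apinizer's configuration format):

```python
# Illustrative HTTP route: client route plus upstream, with a path rewrite.
http_route = {
    "client_route": {
        "path": "/api/v1/products",
        "methods": ["GET", "POST", "PUT", "DELETE"],
        "protocol": "HTTPS",
        "port": 443,
    },
    "upstream": {
        "protocol": "HTTP",
        "targets": [{"url": "http://product-service:8080/products"}],
        "path_rewrite": {"from": "/api/v1/products", "to": "/products"},
    },
}
```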
HTTP Routing Usage Scenarios
- REST API Gateway: Management of REST APIs
- API Versioning: Exposing multiple versions of an API
- Legacy System Integration: Integration with legacy systems
- Public API Exposure: Exposing APIs to the outside world
gRPC Routing
gRPC Routing provides routing support for microservices over the gRPC protocol.
gRPC Routing Features
Protocol Support
- gRPC (over HTTP/2)
- gRPC-Web
- TLS/SSL support
Service Definition
- Protocol Buffers (Protobuf)
- Service definition
- Method routing
Features
- Unary calls
- Server streaming
- Client streaming
- Bidirectional streaming
Load Balancing
- gRPC-aware load balancing
- Failover
gRPC Routing Configuration
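A comparable sketch for a gRPC route, again with assumed field names and service names:

```python
# Illustrative gRPC route: routing is defined per Protobuf service and method.
grpc_route = {
    "client_route": {
        "service": "product.ProductService",       # Protobuf service name
        "methods": ["GetProduct", "ListProducts"],  # routed gRPC methods
        "protocol": "gRPC",
        "port": 443,
    },
    "upstream": {
        "protocol": "gRPC",
        "targets": [
            {"url": "grpc://product-grpc-1:50051"},
            {"url": "grpc://product-grpc-2:50051"},
        ],
        "load_balancing": "round-robin",  # gRPC-aware balancing over HTTP/2
    },
}
```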
gRPC Routing Usage Scenarios
- Microservice Communication: Communication between microservices
- High Performance APIs: High performance requirements
- Streaming APIs: Real-time data streaming
- Internal APIs: Internal system APIs
WebSocket Routing
WebSocket Routing provides routing support for real-time applications over the WebSocket protocol.
WebSocket Routing Features
Protocol Support
- WebSocket (ws://)
- Secure WebSocket (wss://)
- HTTP Upgrade
Connection Management
- Connection establishment
- Connection persistence
- Connection pooling
Message Processing
- Text messages
- Binary messages
- Message routing
Features
- Subprotocol support
- Ping/Pong frames
- Connection timeout
WebSocket Routing Configuration
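And a sketch for a WebSocket route, with assumed field names:

```python
# Illustrative WebSocket route: secure WebSocket on the client side,
# plain WebSocket toward the backend.
websocket_route = {
    "client_route": {
        "path": "/ws/chat",
        "protocol": "WSS",
        "port": 443,
    },
    "upstream": {
        "protocol": "WS",
        "targets": [{"url": "ws://chat-service:8081/chat"}],
        "idle_timeout_seconds": 300,   # drop idle connections
        "subprotocols": ["chat.v1"],   # supported subprotocols
    },
}
```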
WebSocket Routing Usage Scenarios
- Real-time Chat: Real-time chat applications
- Live Updates: Pushing live updates to clients
- Gaming: Gaming applications
- IoT: Communication with IoT devices
Protocol Comparison
| Feature | HTTP | gRPC | WebSocket |
|---|---|---|---|
| Protocol | HTTP/1.1, HTTP/2 | HTTP/2 | WebSocket |
| Data Format | JSON, XML | Protobuf | Text, Binary |
| Communication | Request-Response | Request-Response, Streaming | Bidirectional |
| Performance | Medium | High | High (persistent) |
| Usage | REST APIs | Microservices | Real-time apps |
| Browser Support | ✅ | ⚠️ (gRPC-Web) | ✅ |
Protocol Selection Guide
When Should HTTP Be Used?
- For REST APIs
- For browser-based applications
- For wide compatibility requirements
- For JSON/XML data formats
When Should gRPC Be Used?
- For microservice architectures
- For high performance requirements
- For streaming requirements
- For internal system APIs
When Should WebSocket Be Used?
- For real-time communication
- For live updates
- For bidirectional communication
- For low latency requirements
Load Balancing Strategies
If there are multiple instances in the backend, load balancing strategies are used. Load balancing distributes traffic across instances, provides high availability, and improves performance.
Round Robin
Requests are distributed to all backend instances in turn. It is a simple and predictable strategy, suitable when all backends have similar capacity.
Least Connections
Each request is sent to the backend with the fewest active connections. It distributes load more evenly and is suitable for long-running requests.
Weighted Round Robin
Weights are assigned to backends and requests are distributed accordingly. Suitable for backends with different capacities.
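The sketch below shows the idea behind the three strategies in plain Python; it is illustrative and not the Gateway's implementation:

```python
import itertools

backends = ["http://backend-1:8080", "http://backend-2:8080", "http://backend-3:8080"]

# Round Robin: cycle through the backends in turn.
rr = itertools.cycle(backends)
def round_robin():
    return next(rr)

# Least Connections: pick the backend with the fewest active connections.
active_connections = {b: 0 for b in backends}
def least_connections():
    return min(active_connections, key=active_connections.get)

# Weighted Round Robin: backends with a higher weight receive more requests.
weights = {"http://backend-1:8080": 3, "http://backend-2:8080": 1, "http://backend-3:8080": 1}
weighted_pool = itertools.cycle([b for b, w in weights.items() for _ in range(w)])
def weighted_round_robin():
    return next(weighted_pool)

print([round_robin() for _ in range(4)])           # backend-1, 2, 3, 1
print(least_connections())                         # backend-1 (all counts equal)
print([weighted_round_robin() for _ in range(5)])  # backend-1, 1, 1, 2, 3
```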
Strategy Comparison
| Strategy | Load Distribution | Complexity | Usage |
|---|---|---|---|
| Round Robin | Equal | Low | General use |
| Least Connections | Good | Medium | Variable durations |
| Weighted Round Robin | Weighted | Medium | Different capacities |
Best Practices
Strategy Selection
- Select strategy according to backend capacities
- Analyze traffic patterns
- Evaluate request durations
Monitoring
- Monitor backend load distribution
- Track performance metrics
- Regularly check backend statuses
Routing Configuration
Client Route Configuration
Client Route defines the endpoint where requests from clients are received:
- Path: /api/v1/products
- Method: GET, POST, PUT, DELETE, etc.
- Protocol: HTTP, HTTPS
- Port: 443, 80, etc.
Upstream Target Configuration
Upstream Target defines the address of the backend API:
- URL: http://backend-service:8080
- Protocol: HTTP, HTTPS, gRPC, WebSocket
- Timeout: Maximum time to wait for the backend response
Routing Features
Dynamic Routing
Thanks to Apinizer’s Client Route feature, dynamic routing can be performed:
Host-Based Routing
Routing to different proxies based on the Host header value.
Header-Based Routing
Routing based on HTTP header values.
Method-Based Routing
Routing based on the HTTP method.
Combination-Based Routing
Routing based on host, header and method combinations, as shown in the sketch below.
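This sketch combines the options above; the route table format and backend names are assumptions:

```python
# Illustrative route table: each entry routes to a different backend based on
# host, header, or method. Values and backend names are assumptions.
routes = [
    {"host": "partner.example.com", "backend": "http://partner-api:8080"},   # host-based
    {"header": ("X-Api-Version", "2"), "backend": "http://api-v2:8080"},     # header-based
    {"method": "POST", "backend": "http://write-api:8080"},                  # method-based
    {"host": "partner.example.com", "method": "POST",
     "backend": "http://partner-write-api:8080"},                            # combination
]

def select_backend(host: str, headers: dict, method: str) -> str:
    # More specific entries (combinations) should be listed first in practice;
    # here the last matching definition wins to keep the sketch short.
    selected = "http://default-backend:8080"
    for rule in routes:
        if "host" in rule and rule["host"] != host:
            continue
        if "header" in rule:
            name, value = rule["header"]
            if headers.get(name) != value:
                continue
        if "method" in rule and rule["method"] != method:
            continue
        selected = rule["backend"]
    return selected

print(select_backend("partner.example.com", {}, "POST"))  # -> partner-write-api
```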
Routing Priority Order
The Gateway evaluates incoming requests according to the following priority order:
1. Relative Path
Highest priority
2. Hosts
Host header check
3. Headers
Header check
4. Methods
Lowest priority
Matching Logic:
- Hosts: When multiple hosts are defined, it works with OR logic (any one matching is sufficient)
- Headers: When multiple headers are defined, it works with AND logic (all must match)
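A small sketch of this matching logic, showing OR across hosts and AND across headers (the route structure is an assumption):

```python
def route_matches(route: dict, host: str, headers: dict, method: str) -> bool:
    # Hosts: OR logic - matching any one of the defined hosts is sufficient.
    if route.get("hosts") and host not in route["hosts"]:
        return False
    # Headers: AND logic - every defined header must be present with its value.
    for name, value in route.get("headers", {}).items():
        if headers.get(name) != value:
            return False
    # Methods: if no method is defined, all methods are accepted.
    if route.get("methods") and method not in route["methods"]:
        return False
    return True

route = {
    "hosts": ["api.example.com", "partner.example.com"],
    "headers": {"X-Api-Key-Type": "partner", "Content-Type": "application/json"},
    "methods": ["GET", "POST"],
}

print(route_matches(route, "partner.example.com",
                    {"X-Api-Key-Type": "partner",
                     "Content-Type": "application/json"}, "GET"))  # True
print(route_matches(route, "partner.example.com",
                    {"X-Api-Key-Type": "partner"}, "GET"))         # False: headers use AND
```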
Path Matching
Path Matching
- Exact match: The request path must match the defined path exactly
- Prefix match: Prefix matching (more specific paths are prioritized)
- Regex match: Regular expression matching
Method Matching
Method Matching
- HTTP method check (GET, POST, PUT, DELETE)
- Method-based routing
- Method override support
- If method is not defined, all methods are accepted
Header Matching
Header Matching
- Header-based routing
- Content-Type based routing
- Custom header based routing
- AND logic: All headers must match
Host Matching
Host Matching
- Host header based routing
- Wildcard hostname support (*.example.com, example.*)
- OR logic: Any host matching is sufficient
Query Parameter Matching
Query Parameter Matching
- Query parameter based routing
- Parameter value based routing
Failover and Retry
Failover
When one of the backend instances returns an error:
Automatic Failover
Automatic switch to another instance
Circuit Breaker
Temporarily disabling a failing backend
Retry
Retry mechanism for temporary errors:
Retry Count
Number of retry attempts
Retry Delay
Time to wait between retry attempts
Retry Conditions
Which errors trigger a retry
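A minimal sketch of retry combined with failover, assuming a hypothetical send function and illustrative parameters:

```python
import time

def call_with_retry_and_failover(targets, send, retry_count=2, retry_delay=0.5,
                                 retry_on=(502, 503, 504)):
    """Try each target in order; retry temporary errors, then fail over."""
    last_status = None
    for target in targets:                      # failover: next backend on failure
        for attempt in range(retry_count + 1):  # retry: repeat temporary errors
            status = send(target)
            if status not in retry_on:
                return target, status
            last_status = status
            time.sleep(retry_delay)             # wait between attempts
    raise RuntimeError(f"all backends failed, last status {last_status}")

# Hypothetical send function: backend-1 is down, backend-2 answers.
def fake_send(target):
    return 503 if "backend-1" in target else 200

print(call_with_retry_and_failover(
    ["http://backend-1:8080", "http://backend-2:8080"], fake_send))
# -> ('http://backend-2:8080', 200)
```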
Usage Scenarios
Load Balancing
Load balancing can be performed by defining multiple targets
Different Upstreams
Different upstreams can be used to route to different backend services
Path Rewrite
API versioning and path transformations can be performed with path rewrite
Failover
High availability can be provided with failover mechanisms

