
Routing Concept

Routing in an API Proxy consists of two fundamental components: the Client Route, where requests enter the proxy, and the Upstream Target, the backend address to which requests are forwarded.

Routing Flow

The following diagram shows how requests and responses flow through the Gateway via the Routing and Upstream mechanism:

1. Client Request

The client sends a request to the API Proxy

2. Client Route

Entry point where requests enter the API Proxy
Path, Method, Protocol and Port definitions

3. Routing Logic

Load Balancing, Failover, and other routing logic are applied

4. Upstream Target

The backend API to which requests are routed
Backend address, protocol, and configuration

5. Backend API

The processed request is forwarded to the backend API

Upstream Target

Upstream Target is the address of the backend API to which client requests are routed by an API Proxy. It is the point where the API Proxy communicates with the backend.

Upstream Target Overview

Upstream and target are fundamental concepts used when routing to backend services in API Proxies. The Upstream Target represents the physical or logical address of the backend API; the API Proxy routes requests arriving at the Client Route to this target.

Backend Address

URL or IP address of the Backend API

Protocol

HTTP, HTTPS, gRPC, WebSocket protocols

Load Balancing

Load balancing between multiple backend instances

Failover

Switching to alternative backend in error conditions

Upstream and Target

  • Upstream: The configuration that defines the backend services to which the API Proxy routes requests. An upstream can contain multiple targets.
  • Target: A backend service address defined within an upstream. Each target contains a URL and the necessary configuration information.

Upstream Target Structure

An Upstream Target contains the following information:
http://backend-service:8080/api/products
│     │                │    │
│     │                │    └─ Backend Path
│     │                └─ Port
│     └─ Host/Service Name
└─ Protocol

Example Upstream Targets

HTTP Target

http://product-service:8080

HTTPS Target

https://api.backend.com/v1

gRPC Target

grpc://backend-service:50051

WebSocket Target

ws://websocket-service:8080

Upstream Target Configuration

When creating an Upstream Target, the following information is defined:
  • URL: Address of the Backend API
  • Protocol: HTTP, HTTPS, gRPC, WebSocket
  • Host: Backend server name or IP
  • Port: Backend port number
  • Path: Backend path (optional)
  • Path Rewrite: Changing backend path (optional)
Path rewrite can be performed in the Upstream Target:
Client Route: /api/v1/products
   ▼ (Path Rewrite)
Upstream Target: /products
   ▼
Backend API: http://backend:8080/products

Path Rewrite Examples:
  • /api/v1/products → /products (prefix removal)
  • /api/v1/products → /v2/products (version change)
  • /api/products/{id} → /products/{id} (path simplification)
Timeout settings can also be defined for the Upstream Target:
  • Connection Timeout: Maximum time allowed to establish a connection to the backend
  • Read Timeout: Maximum time to wait for the backend response
  • Write Timeout: Maximum time allowed to send the request to the backend
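
Putting these fields together, a minimal Upstream Target definition might look like the sketch below. It follows the illustrative notation used by the other configuration examples in this document; the field names are examples, not necessarily the exact Apinizer configuration keys:

upstream_target:
  url: http://backend-service:8080
  protocol: http
  path: /products
  path_rewrite:
    from: /api/v1/products      # path received on the Client Route
    to: /products               # path sent to the backend
  connection_timeout: 5s        # time allowed to establish the connection
  read_timeout: 30s             # time to wait for the backend response
  write_timeout: 30s            # time allowed to send the request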

Upstream Target Types

Single Target

Single backend instance:
http://backend:8080

Multiple Targets

Multiple backend instances (Load Balanced):
http://backend1:8080
http://backend2:8080
http://backend3:8080

Dynamic Target

Backend determined dynamically:
Variable-based target selection
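
As an illustration of the multiple-targets case, an upstream with several load-balanced targets could be sketched as follows. The notation mirrors the examples above, and the field names are illustrative rather than exact configuration keys:

upstream:
  name: product-backend
  load_balancing: round_robin   # distribute requests across targets in turn
  targets:
    - url: http://backend1:8080
    - url: http://backend2:8080
    - url: http://backend3:8080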

Routing Types

The Apinizer platform provides routing support for different protocols:

HTTP Routing

HTTP Routing provides routing support for REST APIs that use the HTTP/HTTPS protocol.

HTTP Routing Features

Protocol Support

  • HTTP/1.1
  • HTTP/2
  • HTTPS (TLS/SSL)

Method Support

  • GET, POST, PUT, DELETE
  • PATCH, HEAD, OPTIONS
  • Custom methods

Content-Type

  • application/json
  • application/xml
  • application/x-www-form-urlencoded
  • multipart/form-data

Features

  • Path matching (exact, prefix, regex)
  • Query parameter handling
  • Header manipulation
  • Body transformation
  • Host-based routing
  • Method-based routing

HTTP Routing Configuration

client_route:
  path: /api/v1/*
  method: GET, POST, PUT, DELETE
  protocol: https
  port: 443

upstream_target:
  url: http://backend-service:8080
  protocol: http

HTTP Routing Usage Scenarios

  • REST API Gateway: Management of REST APIs
  • API Versioning: API versioning
  • Legacy System Integration: Integration with legacy systems
  • Public API Exposure: Exposing APIs to the outside world
For detailed information, see the HTTP Routing page.

gRPC Routing

gRPC Routing provides routing support for microservices that use the gRPC protocol.

gRPC Routing Features

Protocol Support

  • gRPC (over HTTP/2)
  • gRPC-Web
  • TLS/SSL support

Service Definition

  • Protocol Buffers (Protobuf)
  • Service definition
  • Method routing

Features

  • Unary calls
  • Server streaming
  • Client streaming
  • Bidirectional streaming

Load Balancing

  • gRPC-aware load balancing
  • Failover

gRPC Routing Configuration

client_route:
  path: /com.example.ProductService/*
  protocol: grpc
  port: 50051

upstream_target:
  url: grpc://backend-service:50051
  protocol: grpc
  service: com.example.ProductService

gRPC Routing Usage Scenarios

  • Microservice Communication: Communication between microservices
  • High Performance APIs: High performance requirements
  • Streaming APIs: Real-time data streaming
  • Internal APIs: Internal system APIs
For detailed information, see the gRPC Routing page.

WebSocket Routing

WebSocket Routing provides routing support for real-time applications that use the WebSocket protocol.

WebSocket Routing Features

Protocol Support

  • WebSocket (ws://)
  • Secure WebSocket (wss://)
  • HTTP Upgrade

Connection Management

  • Connection establishment
  • Connection persistence
  • Connection pooling

Message Processing

  • Text messages
  • Binary messages
  • Message routing

Features

  • Subprotocol support
  • Ping/Pong frames
  • Connection timeout

WebSocket Routing Configuration

client_route:
  path: /ws/*
  protocol: wss
  port: 443

upstream_target:
  url: ws://backend-service:8080
  protocol: ws
  subprotocol: chat

WebSocket Routing Usage Scenarios

  • Real-time Chat: Real-time chat applications
  • Live Updates: Live updates
  • Gaming: Gaming applications
  • IoT: Communication with IoT devices
For detailed information, see the WebSocket Routing page.

Protocol Comparison

Feature          | HTTP                 | gRPC                         | WebSocket
Protocol         | HTTP/1.1, HTTP/2     | HTTP/2                       | WebSocket
Data Format      | JSON, XML            | Protobuf                     | Text, Binary
Communication    | Request-Response     | Request-Response, Streaming  | Bidirectional
Performance      | Medium               | High                         | High (persistent)
Usage            | REST APIs            | Microservices                | Real-time apps
Browser Support  | ✓                    | ⚠️ (gRPC-Web)                | ✓

Protocol Selection Guide

HTTP

  • For REST APIs
  • For browser-based applications
  • For wide compatibility requirements
  • For JSON/XML data formats

gRPC

  • For microservice architectures
  • For high performance requirements
  • For streaming requirements
  • For internal system APIs

WebSocket

  • For real-time communication
  • For live updates
  • For bidirectional communication
  • For low latency requirements

Load Balancing Strategies

If the backend has multiple instances, load balancing strategies are used. Load balancing provides even traffic distribution, high availability, and improved performance.

Round Robin

Requests are distributed to all backend instances in order. It is a simple and understandable strategy, suitable for situations where all backends have similar capacity.

Least Connections

Requests are sent to the backend with the fewest active connections. This distributes load more evenly and is suitable for long-running requests.

Weighted Round Robin

Weights are assigned to backends and requests are distributed accordingly. Suitable for backends with different capacities.
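
As a sketch of a weighted setup in the same illustrative notation, a larger instance can be given a higher weight so that it receives a proportionally larger share of the traffic (the weights and field names below are examples only):

upstream:
  load_balancing: weighted_round_robin
  targets:
    - url: http://backend1:8080   # larger instance, receives ~75% of traffic
      weight: 3
    - url: http://backend2:8080   # smaller instance, receives ~25% of traffic
      weight: 1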

Strategy Comparison

Strategy              | Load Distribution | Complexity | Usage
Round Robin           | Equal             | Low        | General use
Least Connections     | Good              | Medium     | Variable durations
Weighted Round Robin  | Weighted          | Medium     | Different capacities

Best Practices

  • Select strategy according to backend capacities
  • Analyze traffic patterns
  • Evaluate request durations
  • Monitor backend load distribution
  • Track performance metrics
  • Regularly check backend statuses

Routing Configuration

Client Route Configuration

Client Route defines the endpoint where requests from clients are received:

Path

/api/v1/products

Method

GET, POST, PUT, DELETE, etc.

Protocol

HTTP, HTTPS

Port

443, 80, etc.

Upstream Target Configuration

Upstream Target defines the address of the backend API:

URL

http://backend-service:8080

Protocol

HTTP, HTTPS, gRPC, WebSocket

Timeout

Request timeout

Routing Features

Dynamic Routing

Thanks to Apinizer’s Client Route feature, dynamic routing can be performed:

Host-Based Routing

Routing to different proxies based on Host header value:
Host: api.example.com → Proxy A
Host: test.example.com → Proxy B

Header-Based Routing

Routing based on HTTP header values:
X-Environment: production → Proxy A
X-Environment: test → Proxy B

Method-Based Routing

Routing based on HTTP method:
GET /api/products → Proxy A
POST /api/products → Proxy B

Combination-Based Routing

Routing based on host, header and method combinations
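
A combined rule could be sketched as follows, again in the illustrative notation used by the configuration examples in this document (the host, header name, and values are examples only):

client_route:
  path: /api/v1/*
  hosts:
    - api.example.com           # Host header must match (OR logic across hosts)
  headers:
    X-Environment: production   # all listed headers must match (AND logic)
  method: GET, POST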

Routing Priority Order

The Gateway evaluates incoming requests according to the following priority order:

1. Relative Path

Highest priority

2. Hosts

Host header check

3. Headers

Header check

4. Methods

Lowest priority
Matching Logic:
  • Hosts: When multiple hosts are defined, they are evaluated with OR logic (any one matching is sufficient)
  • Headers: When multiple headers are defined, they are evaluated with AND logic (all must match)
Path Matching (see the sketch after this list):
  • Exact match: The path must match exactly
  • Prefix match: The path prefix must match (more specific paths are prioritized)
  • Regex match: Regular expression matching
Path Priority:
/api/v1/products/{id} → More specific (prioritized)
/api/v1/products      → More general
Methods:
  • HTTP method check (GET, POST, PUT, DELETE)
  • Method-based routing
  • Method override support
  • If no method is defined, all methods are accepted
Headers:
  • Header-based routing
  • Content-Type based routing
  • Custom header based routing
  • AND logic: all defined headers must match
Hosts:
  • Host header based routing
  • Wildcard hostname support (*.example.com, example.*)
  • OR logic: any matching host is sufficient
Query Parameters:
  • Query parameter based routing
  • Parameter value based routing
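
To make the path matching types concrete, three client routes using exact, prefix, and regex matching could be sketched as follows (illustrative notation; the regex syntax shown is an assumption, not a documented Apinizer format):

# exact match
client_route:
  path: /api/v1/products

# prefix match
client_route:
  path: /api/v1/*

# regex match (regex syntax illustrative)
client_route:
  path: ^/api/v[0-9]+/products/[0-9]+$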

Failover and Retry

Failover

When one of the backend instances returns an error:

Automatic Failover

Automatic switch to another instance

Circuit Breaker

Temporarily disabling a failing backend when errors occur

Retry

Retry mechanism for temporary errors:

Retry Count

Number of retry attempts

Retry Delay

Wait time between retry attempts

Retry Conditions

Which error conditions trigger a retry
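
A retry and failover block could be sketched as follows, in the same illustrative notation (the field names, error conditions, and values are examples, not documented configuration keys):

upstream_target:
  url: http://backend-service:8080
  retry:
    count: 3                    # number of retry attempts
    delay: 500ms                # wait time between attempts
    conditions:                 # which error conditions trigger a retry
      - connection_error
      - http_503
  failover:
    enabled: true               # switch to another target on repeated failure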

Usage Scenarios

Load Balancing

Load balancing can be performed by defining multiple targets

Different Upstreams

Different upstreams can be used to route to different backend services

Path Rewrite

API versioning and path transformations can be performed with path rewrite

Failover

High availability can be provided with failover mechanisms

Next Steps