Backend Load Balancing
Configuration guide for load balancing across multiple backend servers (RADIUS, LDAP, ...) using Radiator Server's built-in algorithms
This document covers Radiator Server's backend load balancing capabilities for distributing authentication and accounting requests across multiple backend servers (RADIUS, LDAP, SQL, etc.) to improve reliability and performance.
Overview
Radiator Server can act as a RADIUS/TACACS+ proxy, distributing requests across multiple backend servers. When multiple backend servers are configured within a backend block, Radiator uses server selection algorithms to determine which backend server receives each request and how to handle failures.
Backend load balancing is configured using the server-selection statement within backend blocks and applies to all backends (RADIUS, LDAP, SQL, etc.).
See also High Availability and Load Balancing for broader HA architecture patterns.
Server Selection Algorithms
Radiator supports three backend server selection algorithms configured via the server-selection statement:
round-robin
Distributes requests evenly across all available backend servers in a rotating fashion. Each request goes to the next server in the list. If a server fails, the request is retried on the next available server.
backends {
  radius "BACKEND_CLUSTER" {
    server-selection round-robin;
    server "SERVER1" {
      secret "mysecret";
      timeout 3000;
      retries 2;
      connect {
        protocol udp;
        host "192.168.1.10";
        port 1812;
      }
    }
    server "SERVER2" {
      secret "mysecret";
      timeout 3000;
      retries 2;
      connect {
        protocol udp;
        host "192.168.1.11";
        port 1812;
      }
    }
    server "SERVER3" {
      secret "mysecret";
      timeout 3000;
      retries 2;
      connect {
        protocol udp;
        host "192.168.1.12";
        port 1812;
      }
    }
  }
}
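The rotation-with-failover behaviour described above can be illustrated with a small sketch. This is an assumption-laden model, not Radiator's implementation: server names are taken from the example, and `unavailable` stands in for the internal availability tracking.

```python
from itertools import count

# Sketch of round-robin selection with failover (illustrative only).
servers = ["SERVER1", "SERVER2", "SERVER3"]
unavailable = {"SERVER2"}  # assume SERVER2 has been marked down
counter = count()

def next_server():
    # Rotate through the list, skipping servers marked unavailable.
    for _ in range(len(servers)):
        candidate = servers[next(counter) % len(servers)]
        if candidate not in unavailable:
            return candidate
    return None  # all servers down

selections = [next_server() for _ in range(4)]
print(selections)  # ['SERVER1', 'SERVER3', 'SERVER1', 'SERVER3']
```

Requests keep rotating over the remaining servers; once SERVER2 recovers and is returned to the pool, it rejoins the rotation.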
fallback
Tries backend servers in priority order. Always attempts the highest priority (lowest number) server first. On failure, falls back to the next priority server. This is the default algorithm if server-selection is not specified.
Example:
backends {
  radius "BACKEND_CLUSTER" {
    server-selection fallback; # or omit, fallback is default
    server "PRIMARY" {
      secret "mysecret";
      timeout 3000;
      retries 2;
      priority 0; # Highest priority, tried first
      connect {
        protocol udp;
        host "192.168.1.10";
        port 1812;
      }
    }
    server "SECONDARY" {
      secret "mysecret";
      timeout 3000;
      retries 2;
      priority 1; # Lower priority, backup
      connect {
        protocol udp;
        host "192.168.1.11";
        port 1812;
      }
    }
  }
}
no-fallback
Only the first available server is attempted. If that server fails, the request fails immediately without trying other servers. Use no-fallback when backend failures should be immediately visible rather than masked by failover; it is not intended for typical production use.
Example:
backends {
  radius "BACKEND_TEST" {
    server-selection no-fallback;
    server "TEST_SERVER" {
      secret "mysecret";
      timeout 3000;
      retries 2;
      connect {
        protocol udp;
        host "192.168.1.10";
        port 1812;
      }
    }
    # This server will never be tried with no-fallback
    server "BACKUP" {
      secret "mysecret";
      timeout 3000;
      retries 2;
      connect {
        protocol udp;
        host "192.168.1.11";
        port 1812;
      }
    }
  }
}
SQL Backend Load Balancing
SQL backends (MySQL/MariaDB, PostgreSQL, ...) are among the most common uses of backend load balancing. Distributing queries across multiple database servers provides both high availability and the ability to scale read operations.
For write operations, database-specific failover management must be used.
PostgreSQL Backend Example
backends {
  postgres "USER_DATABASE" {
    server-selection fallback; # Primary/replica pattern
    server "PG_PRIMARY" {
      host "pg-primary.example.com";
      port 5432;
      database "radiator";
      username "radiator_user";
      password "db_password";
      timeout 5000;
      connections 25;
      priority 0; # Prefer primary
      tls {
        server_ca_certificate "PG_CA";
        @verification {
          if any {
            cert.subject_alt.dns != "pg-primary.example.com";
          } then {
            reject;
          } else {
            accept;
          }
        }
      }
    }
    server "PG_REPLICA1" {
      host "pg-replica1.example.com";
      port 5432;
      database "radiator";
      username "radiator_user";
      password "db_password";
      timeout 5000;
      connections 25;
      priority 1; # Fallback to replica
      tls {
        server_ca_certificate "PG_CA";
        @verification {
          if any {
            cert.subject_alt.dns != "pg-replica1.example.com";
          } then {
            reject;
          } else {
            accept;
          }
        }
      }
    }
    server "PG_REPLICA2" {
      host "pg-replica2.example.com";
      port 5432;
      database "radiator";
      username "radiator_user";
      password "db_password";
      timeout 5000;
      connections 25;
      priority 2; # Second fallback
      tls {
        server_ca_certificate "PG_CA";
        @verification {
          if any {
            cert.subject_alt.dns != "pg-replica2.example.com";
          } then {
            reject;
          } else {
            accept;
          }
        }
      }
    }
    query "AUTHENTICATE_USER" {
      statement "
        SELECT id, username, password_hash, attributes
        FROM users
        WHERE username = $1
          AND enabled = true
      ";
      bindings {
        aaa.identity;
      }
      mapping {
        vars.user_id = id;
        user.username = username;
        user.password = password_hash;
        radius.reply = attributes;
      }
    }
  }
}
LDAP Backend Load Balancing
LDAP backends support the same load balancing algorithms for distributing directory queries:
backends {
  ldap "LDAP_CLUSTER" {
    server-selection round-robin;
    server "LDAP1" {
      url "ldap://ldap1.example.com:389/";
      timeout 3000;
      authentication {
        dn "cn=radiator,dc=example,dc=com";
        password "ldap_password";
      }
    }
    server "LDAP2" {
      url "ldap://ldap2.example.com:389/";
      timeout 3000;
      authentication {
        dn "cn=radiator,dc=example,dc=com";
        password "ldap_password";
      }
    }
    search "AUTHENTICATE" {
      base "ou=users,dc=example,dc=com";
      scope sub;
      filter "(&(uid=%{aaa.identity})(objectClass=inetOrgPerson))";
      mapping {
        user.username = uid;
        vars.dn = entry::dn;
      }
    }
  }
}
Server Health and Status
Each backend server tracks its availability status. Radiator automatically:
- Marks servers as unavailable after connection failures
- Skips unavailable servers during selection
- Attempts to reconnect to failed servers periodically
- Returns servers to available pool when connections succeed
In addition, TCP keepalive can be used to verify TCP connectivity.
The status statement controls automatic health monitoring:
server "MONITORED_SERVER" {
  status true; # Enable automatic health monitoring (default)
  # ... server configuration
}

server "NO_MONITORING" {
  status false; # Disable automatic health monitoring
  # ... server configuration
}
- status true - Enables automatic periodic health checks to monitor server availability (default)
- status false - Disables automatic health monitoring (servers are still used for requests)
To add NAS-Identifier, see nas_identifier.
Note: The status statement does not enable/disable a server for request processing. All configured servers participate in load balancing regardless of this setting. The status statement only controls whether Radiator performs periodic health checks on the backend server.
Important: When status is false, failed backend servers will not automatically recover. Once all connections to a server have failed, there is no mechanism to detect when the server comes back online; new connection attempts will continue to fail until Radiator is restarted or the configuration is reloaded. It is strongly recommended to use the default status true in production environments to enable automatic health monitoring and recovery detection.
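The recovery behaviour with health monitoring enabled can be sketched as a tiny state model. This is an illustrative assumption, not Radiator's implementation; the probe interval value is invented for the example.

```python
# Toy model of availability tracking with health monitoring enabled:
# a failed server becomes eligible for a probe only after an interval.
PROBE_INTERVAL = 30.0  # seconds between reconnect attempts (assumed value)

class ServerStatus:
    def __init__(self, name):
        self.name = name
        self.available = True
        self.failed_at = 0.0

    def mark_failed(self, now):
        # Connection failure: remove the server from the selection pool.
        self.available = False
        self.failed_at = now

    def eligible(self, now):
        # Available servers are always eligible; failed ones only once
        # the probe interval has elapsed (periodic reconnect attempt).
        return self.available or (now - self.failed_at) >= PROBE_INTERVAL

s = ServerStatus("PG_REPLICA1")
s.mark_failed(now=100.0)
print(s.eligible(now=110.0))  # False: still inside the probe interval
print(s.eligible(now=140.0))  # True: reconnect attempt is due
```

With status false there is no equivalent of the `eligible` probe path: once `available` flips to false, nothing ever flips it back.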
Server Priority
The optional priority statement (integer 0-255, default 0) controls server order for the fallback algorithm:
backends {
  radius "TIERED_BACKENDS" {
    server-selection fallback;
    server "PRIMARY_DC" {
      priority 0; # Highest priority, tried first
      # ...
    }
    server "SECONDARY_DC" {
      priority 1; # Tried if PRIMARY fails
      # ...
    }
    server "DR_SITE" {
      priority 2; # Last resort
      # ...
    }
  }
}
Lower numbers = higher priority (priority 0 is highest).
When multiple servers have the same priority value, they are tried in alphabetical order by their server names. For example, if both "SERVER_A" and "SERVER_Z" have priority 0, SERVER_A will be tried first.
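The ordering rules above (lower priority number first, ties broken alphabetically by server name) amount to a sort on the pair (priority, name). A minimal sketch, using invented server names alongside ones from the examples:

```python
# Sketch of fallback try-order: sort by priority, then server name.
servers = [
    ("SERVER_Z", 0),
    ("DR_SITE", 2),
    ("SERVER_A", 0),
    ("SECONDARY_DC", 1),
]

order = sorted(servers, key=lambda s: (s[1], s[0]))
print([name for name, _ in order])
# ['SERVER_A', 'SERVER_Z', 'SECONDARY_DC', 'DR_SITE']
```

SERVER_A and SERVER_Z share priority 0, so SERVER_A is tried first; the priority 1 and 2 servers follow.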
Request Retry Behavior
When a backend server fails, Radiator's retry behavior depends on:
- Server-level retries: The retries statement controls how many times to retry the same server
- Server-level timeout: The timeout statement sets how long to wait for a response
- Server selection algorithm: Determines if/how to try alternative servers
Example with retries:
server "BACKEND1" {
  timeout 3000; # 3 second timeout per attempt
  retries 2; # Try this server up to 2 times
  # ...
}
With round-robin or fallback, if all retries on one server fail, the next server in rotation/priority is tried.
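These settings determine the worst-case time before a request finally fails, which matters when sizing client-side timeouts. A simplified calculation, assuming every attempt waits out the full timeout and that retries counts total attempts per server (as in the comment above):

```python
# Worst-case failure latency for the example settings, assuming each
# attempt times out fully and all servers are tried in turn.
timeout_ms = 3000   # per-attempt timeout (from the example above)
retries = 2         # attempts per server (from the example above)
servers = 3         # servers in the rotation/priority list (assumed)

worst_case_ms = timeout_ms * retries * servers
print(worst_case_ms)  # 18000 ms, i.e. 18 seconds
```

Clients and upstream proxies should be prepared to wait at least this long before their own timeouts fire, or requests may be abandoned while failover is still in progress.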
Common Patterns
SQL Read Replicas (Active-Passive)
Use fallback for primary database with read replicas:
backends {
  mysql "USER_DB" {
    server-selection fallback;
    server "PRIMARY" {
      priority 0; # Always try primary first
      host "mysql-primary.example.com";
      # Write and read operations
    }
    server "REPLICA1" {
      priority 1; # Fallback to replica
      host "mysql-replica1.example.com";
      # Read-only operations
    }
    server "REPLICA2" {
      priority 2; # Second fallback
      host "mysql-replica2.example.com";
      # Read-only operations
    }
  }
}
SQL Read Scaling (Active-Active)
Use round-robin to distribute read queries across multiple replicas:
backends {
  postgres "USER_DB_READ" {
    server-selection round-robin;
    server "REPLICA1" {
      host "pg-replica1.example.com";
      # Equal distribution across read replicas
    }
    server "REPLICA2" {
      host "pg-replica2.example.com";
    }
    server "REPLICA3" {
      host "pg-replica3.example.com";
    }
  }
}
Active-Active Load Balancing
Use round-robin with equal-capacity servers for even distribution:
backends {
  radius "ACTIVE_ACTIVE" {
    server-selection round-robin;
    server "BACKEND1" {
      # Equal configuration for all servers
    }
    server "BACKEND2" {
      # Equal configuration for all servers
    }
    server "BACKEND3" {
      # Equal configuration for all servers
    }
  }
}
Active-Passive Failover
Use fallback with priority for primary/backup pattern:
backends {
  radius "ACTIVE_PASSIVE" {
    server-selection fallback;
    server "PRIMARY" {
      priority 0;
      # Primary server config
    }
    server "BACKUP" {
      priority 1;
      # Backup server config
    }
  }
}
Geo-Distributed Backends
Use fallback with priority to prefer local datacenter:
backends {
  radius "GEO_DISTRIBUTED" {
    server-selection fallback;
    server "LOCAL_DC" {
      priority 0; # Prefer local datacenter
      timeout 2000;
      # ...
    }
    server "REMOTE_DC1" {
      priority 1; # Failover to remote DC
      timeout 5000; # Higher timeout for WAN
      # ...
    }
    server "REMOTE_DC2" {
      priority 2;
      timeout 5000;
      # ...
    }
  }
}
Related Documentation
- High Availability and Load Balancing - Overall HA architecture patterns
- PROXY Protocol Support - Preserving client IPs through proxies
- IP-Accept Configuration - IP-based access control
- Prometheus Scraping - Metrics collection and monitoring
- Architecture Overview - Radiator Server architecture
- Rate Limiting - Request rate control