Radiator sizing

Hardware resource requirements and performance guidelines for deploying the Radiator server at scale

This document explains how to size a Radiator deployment. The quick reference table below shows example single-instance allocations for typical commercial loads (illustrative — adjust after measuring real workloads and logging configurations):

CPU       Memory   Storage    Approx. peak TPS
2 vCPU    2 GiB    500 GiB    ~10 000
4 vCPU    3 GiB    1000 GiB   ~20 000

More detail and methodology are provided in the sections below.

CPU requirements

Baseline guidance: a single modest vCPU can process roughly 5 000 TPS for simple authentication, where one transaction means one request and one response. Some protocols or exchanges involve multiple round trips (for example, a TACACS+ authorisation plus accounting exchange can consist of several messages). Plan against peak-minute (or finer-grained) TPS estimates, not daily averages.

Notes:

  • Encryption (TLS handshakes, EAP-TLS/TEAP) decreases throughput. Depending on session reuse, cipher suites, and CPU crypto acceleration, impact can be up to ~50% for handshake-heavy mixes.
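
To make the baseline concrete, here is a minimal sizing sketch in Rust that combines the ~5 000 TPS-per-vCPU figure with the up-to-~50% handshake penalty noted above. Both numbers are illustrative starting points rather than guarantees, and the helper function is hypothetical; substitute measured values from your own traffic mix.

// Illustrative only: per-vCPU throughput and the handshake penalty are the rough
// figures from this section, not measured guarantees.
fn estimated_vcpus(peak_tps: f64, handshake_heavy: bool) -> u32 {
    let per_vcpu_tps = 5_000.0;
    // Handshake-heavy EAP-TLS/TEAP mixes can cut effective throughput by up to ~50%.
    let effective_tps = if handshake_heavy { per_vcpu_tps * 0.5 } else { per_vcpu_tps };
    (peak_tps / effective_tps).ceil() as u32
}

fn main() {
    println!("{}", estimated_vcpus(20_000.0, false)); // simple authentication: 4 vCPUs
    println!("{}", estimated_vcpus(20_000.0, true));  // handshake-heavy mix:   8 vCPUs
}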

Memory requirements

Undersizing memory risks avoidable outages. As a conservative starting point, allocate 2 GiB for an instance targeting ~10 000 TPS.

Approximate incremental usage (illustrative):

  • Base Radiator process: ~0.5 GiB for the first vCPU (code, dictionaries, caches)
  • Additional vCPUs: +~0.25 GiB each for thread stacks, buffers, transient state

If running in a VM, add overhead for the guest OS (commonly 0.5–1 GiB at minimum). The minimal scratch container image generally runs comfortably within 0.5 GiB for low-load (< 100 TPS) scenarios.

Backend latency (SQL/HTTP/LDAP/...) shifts the bottleneck from CPU to I/O wait. When backend responses are slow relative to the incoming request rate, more requests remain in flight at once, so plan additional concurrency headroom (memory).
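
To put rough numbers on this, the sketch below adds the incremental figures above to a concurrency allowance derived from Little's law (in-flight requests ≈ TPS × backend latency). The ~50 KiB of transient state per in-flight request is an assumed placeholder, not a measured value, and the helper is hypothetical.

// Illustrative memory estimate: base process + per-vCPU increment + concurrency headroom.
fn estimated_memory_gib(vcpus: u32, peak_tps: f64, backend_latency_s: f64) -> f64 {
    let base_gib = 0.5;                                       // first vCPU: code, dictionaries, caches
    let extra_vcpu_gib = 0.25 * (vcpus.saturating_sub(1) as f64);
    let in_flight = peak_tps * backend_latency_s;             // Little's law
    let per_request_gib = 50.0 / (1024.0 * 1024.0);           // assumed ~50 KiB per in-flight request
    base_gib + extra_vcpu_gib + in_flight * per_request_gib
}

fn main() {
    // 2 vCPUs, 10 000 TPS, 20 ms average backend latency -> ~0.76 GiB for the process itself,
    // leaving the rest of the conservative 2 GiB allocation as headroom and OS overhead.
    println!("{:.2} GiB", estimated_memory_gib(2, 10_000.0, 0.020));
}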

For very small embedded RADIUS deployments please contact Radiator Software for tailored sizing.

Storage requirements

Configuration data under /var/lib/radiator: a typical safe allowance is 1 GiB. Ultra‑low latency storage is not required.

Logs and optional packet captures under /var/log/radiator scale with verbosity, packet size, and retention:

  • Safe starting point: 20 GiB for test / early staging.
  • Very slow physical disks or network mounts can degrade log write performance.

Simple sizing example:

  • 1 TPS (~86 400 requests per day) with 1 KiB average JSON log entries → ~100 MiB/day uncompressed.
  • Compression commonly reduces size by ~90%.
  • 7 days uncompressed + 30 days compressed ≈ 1 GiB total.
  • Brief log entries (~100 bytes each) reduce storage needs roughly tenfold; see the sketch after this list.
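
The same arithmetic as a minimal Rust sketch, using the illustrative figures above (1 KiB entries, ~90% compression, 7 days uncompressed plus 30 days compressed); the helper function is hypothetical:

// Illustrative log-volume estimate matching the example above.
fn log_storage_mib(tps: f64, entry_bytes: f64, days_raw: u32, days_compressed: u32) -> f64 {
    let per_day_mib = tps * 86_400.0 * entry_bytes / (1024.0 * 1024.0);
    let compressed_ratio = 0.10; // ~90% reduction from compression
    per_day_mib * (days_raw as f64) + per_day_mib * compressed_ratio * (days_compressed as f64)
}

fn main() {
    // 1 TPS, 1 KiB JSON entries, 7 days raw + 30 days compressed ≈ 844 MiB,
    // i.e. roughly the ~1 GiB figure above; 100-byte entries shrink this roughly tenfold.
    println!("{:.0} MiB", log_storage_mib(1.0, 1024.0, 7, 30));
}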

Export logs to a long‑term aggregation system (recommended for clustering and audit).

Ensuring performance

Radiator is implemented in Rust using high‑performance multithreading and async I/O. Rust delivers near C/C++ performance while enforcing strong memory safety guarantees.

The server is designed and tested to scale near‑linearly with additional CPU cores (and sufficient memory / I/O bandwidth).

For every software change (see software release kinds) we execute performance tests to guard against regressions. Nightly and release builds run extended test suites on consistent hardware.

For historical performance‑related improvements and notable changes, see the release notes. Performance metrics for each build are attached to releases where available.

Resource constraints

Radiator degrades predictably under resource pressure. Mitigations generally involve vertical scaling (more resources per instance) or horizontal scaling (additional instances / listeners).

Resource pressure scenarios:

  1. CPU saturation

    • Behaviour: Radiator begins back‑pressuring—temporarily not accepting new work until internal queues drain. This can manifest as dropped client requests or timeouts if upstream retry windows are short.
    • Mitigations:
      • Ensure sufficient dedicated CPU capacity; avoid aggressive over‑commit.
      • Reduce expensive logging during peaks (debug / packet templates).
  2. Memory starvation

    • Behaviour: On memory allocation failure the process terminates; active transactions may be lost.
    • Contributing factors: Slow backends increase concurrent active requests and memory footprint.
    • Mitigations:
      • Provision adequate memory with headroom.
      • Consider a small swap area only as a transient safety net (swap can worsen latency and trigger further memory growth by elongating request lifetimes).
  3. Configuration storage full

    • Behaviour: Risk of partial / corrupted configuration writes; process typically continues but integrity is not guaranteed.
    • Mitigations:
      • Reserve sufficient free space under /var/lib/radiator.
      • Maintain regular backups / version control of configuration assets.
  4. Logging storage full

    • Behaviour: The server continues running and emits periodic warnings (per logging unit) to stdout that the disk is full. Logging output will be truncated until space is freed.
    • Mitigations:
      • Implement external log rotation & compression. It is safe to mv the active log file; the server will reopen its log file within ~1 second.
      • Monitor utilisation of the /var/log/radiator mount (a minimal sketch follows this list).
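
As a minimal illustration of that monitoring check, the hypothetical helper below parses the output of GNU df --output=pcent for the log mount; most deployments would rely on an existing monitoring agent rather than custom code.

use std::process::Command;

// Hypothetical helper: reports how full a mount is by parsing GNU `df --output=pcent`.
// Shown only as a sketch of the check; it assumes GNU coreutils is available.
fn mount_usage_percent(mount: &str) -> Option<u8> {
    let out = Command::new("df").args(["--output=pcent", mount]).output().ok()?;
    String::from_utf8(out.stdout)
        .ok()?
        .lines()
        .nth(1)?                 // skip the "Use%" header line
        .trim()
        .trim_end_matches('%')
        .parse()
        .ok()
}

fn main() {
    if let Some(pct) = mount_usage_percent("/var/log/radiator") {
        if pct >= 90 {
            eprintln!("warning: /var/log/radiator is {pct}% full");
        }
    }
}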

Scaling caveats

The RADIUS protocol uses an 8-bit identifier to track outstanding requests, which limits each connection to 256 requests in flight at a time. In high-volume or high-latency situations this can become the limiting factor on throughput.
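
A back-of-the-envelope sketch of that ceiling: with at most 256 requests outstanding on a connection, sustained per-connection throughput is bounded by 256 divided by the round-trip time. The latencies below are illustrative, and the helper is hypothetical.

// Illustrative per-connection throughput ceiling imposed by the 8-bit identifier space.
fn per_connection_tps_ceiling(round_trip_s: f64) -> f64 {
    256.0 / round_trip_s
}

fn main() {
    println!("{:.0} TPS", per_connection_tps_ceiling(0.010)); // 10 ms round trip  -> ~25 600
    println!("{:.0} TPS", per_connection_tps_ceiling(0.100)); // 100 ms round trip -> ~2 560
}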

Workarounds:

  • Run multiple Radiator instances behind a load balancer or configure clients to load balance.
  • Configure multiple UDP listeners (distinct ports) per instance to increase parallelism:
servers {
    radius "RADIUS_UDP_1812" {
        listen {
            protocol udp;
            port 1812;
            ip 0.0.0.0;
        }
        clients "CLIENTS_RADIUS_ALL";
    }
    radius "RADIUS_UDP_2812" {
        listen {
            protocol udp;
            port 2812;
            ip 0.0.0.0;
        }
        clients "CLIENTS_RADIUS_ALL";
    }
...