Load Proxy
Load Proxy refers to CDN edge nodes deployed across multiple geographic regions to serve end users with high availability and low latency. Each node can assume one or more roles in the content delivery pipeline, enabling both traffic routing and content caching.
Server Roles
Load Proxy nodes in the EdgeHit CDN are designed to operate in one or both of the following roles, depending on the deployment configuration and geographic location:
- **EdgeHit** (CDN role): Acts as a reverse proxy and caching server, powered by NGINX. Responsible for HTTP/HTTPS traffic delivery and content acceleration.
- **EdgeHit DNS** (DNS role): Acts as an authoritative DNS server, powered by a GDNSD (GeoDNS Server Daemon)-based component, supporting geo-based DNS resolution.
By default, all Load Proxy nodes are provisioned with both roles enabled. However, role assignment can be customized at deployment time through role-specific provisioning scripts.
Important Note:
- Load Proxy nodes must always include the **EdgeHit** (CDN) role, as it provides the foundational runtime environment and core system components.
- Deployments without the **EdgeHit DNS** (DNS) role are permitted and sometimes preferred, but **EdgeHit** is mandatory due to how critical services are packaged and initialized during installation.
Common Components
The following components are installed on all Load Proxy nodes, regardless of their assigned role (EdgeHit or EdgeHit DNS). These are critical control-plane services that facilitate secure communication and synchronization with EdgeHit Controller.
These components are installed automatically by the EdgeHit installation script.
Redis Slave Server
Each Load Proxy node runs two Redis instances, each operating in an isolated directory and fulfilling a specific role in the control plane.
**Instance 1: Configuration replica (`config_db`)**
- Acts as the replica (slave) of the `config_db` master instance running on EdgeHit Controller
- Continuously synchronizes configuration data from the control plane
- Listens on `127.0.0.1:8001` for local administrative access via `redis-cli`
- Establishes a TLS-encrypted connection, secured with X.509 client certificates, to the master instance on EdgeHit Controller port `9001`
- Retrieves key-value pairs based on the node's assigned roles (**EdgeHit**, **EdgeHit DNS**) to dynamically load the relevant configuration objects
**Instance 2: Local cache-control store**
- Bound to `127.0.0.1:8002`, accessible locally for CLI operations via `redis-cli`
- Not exposed externally, ensuring data isolation
- Utilized by local backend services to prefetch, read, or purge cached content based on runtime behavior
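For quick verification from the node itself, the following minimal Python sketch (using the `redis` client library, with the ports listed above) connects to both instances and confirms that the first is replicating from the controller:

```python
#!/usr/bin/env python3
"""Sketch: inspect the two local Redis instances from a Load Proxy node."""
import redis  # pip install redis

config_replica = redis.Redis(host="127.0.0.1", port=8001, decode_responses=True)
cache_store = redis.Redis(host="127.0.0.1", port=8002, decode_responses=True)

# Confirm the first instance really is replicating from the controller.
repl = config_replica.info("replication")
print("role:", repl["role"])                     # expected: "slave"
print("link:", repl.get("master_link_status"))   # expected: "up"

# The second instance is a plain local store; a PING suffices as a check.
# No authentication is needed for local access (see the security advisory below).
print("cache store reachable:", cache_store.ping())
```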
Security Advisory
Neither Redis instance implements authentication for local CLI access. Exercise caution when using web-based tools such as Redis-UI that expose these services externally; doing so may inadvertently introduce security risks.
Anycast Healthchecker
Certain Load Proxy nodes are deployed with Anycast IP addresses, serving as globally distributed ingress points for edge traffic.
These nodes use BIRD (a dynamic routing daemon) to establish BGP sessions with upstream border routers and advertise their assigned Anycast prefixes.
To enhance availability and routing accuracy, EdgeHit integrates the open-source `anycast-healthchecker` module.
Functionality
- Periodically checks the health of local services, most importantly the NGINX process.
- If a monitored process fails (e.g., NGINX is unresponsive), it signals BIRD to withdraw the BGP route advertisement, preventing further traffic from being routed to the faulty node.
- Once recovery is detected, it re-announces the prefix, allowing the node to rejoin the Anycast pool.
This mechanism ensures self-healing Anycast failover: traffic is routed only to healthy, available nodes.
Components
- `bird`: BGP daemon that handles route advertisement.
- `anycast-healthchecker`: Python-based module for monitoring service health and interacting with BIRD.
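The sketch below illustrates the withdraw/re-announce mechanic in plain Python. It is not the actual `anycast-healthchecker` code: the health URL, the BIRD include-file path, and the prefix are illustrative assumptions, and the real daemon is considerably more robust.

```python
#!/usr/bin/env python3
"""Sketch of the Anycast failover mechanic described above (not the real daemon)."""
import subprocess
import time
import urllib.request

ANYCAST_PREFIX = "203.0.113.10/32"                 # hypothetical Anycast prefix
BIRD_INCLUDE = "/etc/bird/anycast-prefixes.conf"   # assumed file included by bird.conf
HEALTH_URL = "http://127.0.0.1/healthz"            # assumed local NGINX health endpoint

def nginx_healthy() -> bool:
    """Return True if the local NGINX answers the health endpoint with 200."""
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

def set_announced(announce: bool) -> None:
    """(Un)announce the prefix by rewriting the include file and reloading BIRD."""
    # BIRD advertises whatever prefixes appear in the include file;
    # writing an empty file effectively withdraws the route.
    with open(BIRD_INCLUDE, "w") as f:
        if announce:
            f.write(f'route {ANYCAST_PREFIX} via "lo";\n')
    subprocess.run(["birdc", "configure"], check=False)  # ask BIRD to reload

if __name__ == "__main__":
    announced = True
    while True:
        healthy = nginx_healthy()
        if healthy != announced:   # state changed: withdraw or re-announce
            set_announced(healthy)
            announced = healthy
        time.sleep(10)
```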
Node Exporter
Node Exporter is automatically provisioned during the deployment of any Load Proxy instance. It exposes low-level system metrics to the observability stack (Prometheus) and enables real-time infrastructure monitoring.
- Listens on `0.0.0.0:9100` for HTTPS requests.
- Secured via Pre-Shared Key (PSK)-based HTTP authentication, configured natively in the Node Exporter service.
- The PSK is randomly generated at installation time and stored securely in the local `.env` file for system use.
Note
All Load Proxy instances in the same CDN network share the same HTTP auth PSK. This simplifies secure integration with centralized monitoring systems.
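As an illustration, a scrape with the shared PSK might look like the sketch below. How the PSK is presented is an assumption here (Basic auth with a fixed user name), and the hostname and environment variable are placeholders; adjust to the actual deployment.

```python
#!/usr/bin/env python3
"""Sketch: scrape a Load Proxy's Node Exporter using the shared PSK."""
import os
import requests  # pip install requests

NODE = "edge-node-1.example.net"           # hypothetical Load Proxy hostname
PSK = os.environ["NODE_EXPORTER_PSK"]      # e.g. sourced from the local .env file

resp = requests.get(
    f"https://{NODE}:9100/metrics",
    auth=("metrics", PSK),   # assumed auth scheme: PSK as Basic-auth password
    verify=False,            # self-signed edge certificates assumed
    timeout=5,
)
resp.raise_for_status()

# Print a few CPU metrics as a smoke test.
for line in resp.text.splitlines():
    if line.startswith("node_cpu_seconds_total"):
        print(line)
```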
EdgeHit Components
The EdgeHit role represents the core CDN function of a Load Proxy node. It is responsible for handling HTTP/HTTPS traffic, performing SSL termination, routing to origin servers, and caching content.
Configuration is dynamically sourced from Redis, where string-based configuration blocks tagged with a CDN key identifier define virtual host behavior and cache policy.
NGINX Engine
This is the primary component of the EdgeHit role, built on NGINX with Lua scripting for runtime flexibility.
Architecture Overview:
- A global NGINX configuration includes modular files from a structured directory tree.
- At runtime, a Lua module integrated via `ngx_lua` (e.g., `lua-nginx-module`) retrieves Redis key-value pairs that represent CDN virtual host definitions.
- These definitions are injected into NGINX as ephemeral configuration blocks, avoiding the need to reload the server.
Redis-Sourced Dynamic Config Includes:
- SSL certificates (full chain & private key) for terminating HTTPS traffic
- Virtual host logic, including:
  - Origin server address (IP or DNS)
  - Port, upstream protocol (HTTP/HTTPS), and health check behavior
- Cache control rules, including:
  - Wildcard-based and regex-driven cache key logic
  - TTL policies and argument handling (e.g., `ignore_args`)
Request Handling Flow:
1. A client request arrives at the Load Proxy node (**EdgeHit**).
2. A Lua script fetches the appropriate config from Redis, using the domain as the lookup key.
3. NGINX then:
   - Matches the request against the configured virtual host rules
   - Serves directly from disk if the content is cached
   - Otherwise forwards the request to the origin, caches the response, and returns it to the client
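The lookup step can be expressed in plain Python for illustration (the production logic runs in Lua inside NGINX). The `CDN:` key prefix and the JSON field names below are assumptions:

```python
#!/usr/bin/env python3
"""Illustration of the config-lookup step above, expressed in Python."""
import json
from typing import Optional

import redis  # pip install redis

r = redis.Redis(host="127.0.0.1", port=8001, decode_responses=True)

def vhost_config(domain: str) -> Optional[dict]:
    """Fetch the virtual-host definition for a domain, if one exists."""
    raw = r.get(f"CDN:{domain}")  # assumed key layout
    return json.loads(raw) if raw else None

cfg = vhost_config("www.example.com")
if cfg:
    print("origin:", cfg.get("origin"), "ttl:", cfg.get("ttl"))  # assumed fields
else:
    print("no CDN config for this domain")
```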
Cache Directory
- All cacheable responses are persisted locally using NGINX's `proxy_cache` mechanism.
- The default cache path is `/usr/local/loadproxy/storage/.cdn-cache-default/EdgeHit-cache/`.
- Cache is stored in per-domain directories, where each directory is named using the MD5 hash of the domain name.
- Within each domain's cache directory, individual URLs are hashed and stored in nested subdirectories, following NGINX's two-level cache key structure.
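Since the layout follows NGINX's standard `levels=1:2` scheme, the on-disk path of any cached URL can be computed directly. A sketch, assuming a `<domain><uri>` cache key (this must match the deployment's actual `proxy_cache_key`):

```python
#!/usr/bin/env python3
"""Sketch: reproduce NGINX's proxy_cache file layout for a cached URL."""
import hashlib
from pathlib import Path

CACHE_ROOT = Path("/usr/local/loadproxy/storage/.cdn-cache-default/EdgeHit-cache")

def cache_file_path(domain: str, uri: str) -> Path:
    domain_dir = hashlib.md5(domain.encode()).hexdigest()          # per-domain directory
    key_hash = hashlib.md5(f"{domain}{uri}".encode()).hexdigest()  # assumed key format
    # NGINX levels=1:2: first level is the last hex char of the hash,
    # second level is the two chars before it.
    return CACHE_ROOT / domain_dir / key_hash[-1] / key_hash[-3:-1] / key_hash

print(cache_file_path("www.example.com", "/assets/logo.png"))
```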
Task Automation (Python Function)
EdgeHit supports automated cache prefetch and purge operations, implemented through dedicated Python functions. These scripts interface with the Redis `config_db` instance to retrieve relevant configuration parameters and execute cache-related tasks accordingly.
- The script identifies the cache directory path and uses logic that mirrors NGINX's cache-key hashing and directory structure, ensuring compatibility.
- It can dynamically update (prefetch) or delete (purge) cached content by issuing `curl` requests to the destination origin or cache endpoint.
- This enables backend-controlled cache manipulation, independent of incoming NGINX traffic.
Note
By replicating the way NGINX constructs its cache file paths (e.g., using MD5 hashing and `levels=1:2` directory nesting), the Python script can directly access or modify cache files with full alignment to NGINX's internal storage model.
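A hedged sketch of what such purge/prefetch helpers could look like, reusing the path computation above. The helper names and the localhost prefetch approach are illustrative, not the actual EdgeHit functions:

```python
#!/usr/bin/env python3
"""Sketch: backend-driven purge and prefetch, mirroring the note above."""
import hashlib
from pathlib import Path

import requests  # pip install requests

CACHE_ROOT = Path("/usr/local/loadproxy/storage/.cdn-cache-default/EdgeHit-cache")

def cache_file_path(domain: str, uri: str) -> Path:
    domain_dir = hashlib.md5(domain.encode()).hexdigest()
    key_hash = hashlib.md5(f"{domain}{uri}".encode()).hexdigest()  # assumed key format
    return CACHE_ROOT / domain_dir / key_hash[-1] / key_hash[-3:-1] / key_hash

def purge(domain: str, uri: str) -> bool:
    """Remove a cached object; NGINX will treat the next request as a MISS."""
    path = cache_file_path(domain, uri)
    if path.exists():
        path.unlink()
        return True
    return False

def prefetch(domain: str, uri: str) -> int:
    """Warm the cache by requesting the URL through the local edge NGINX."""
    resp = requests.get(
        f"http://127.0.0.1{uri}",
        headers={"Host": domain},  # route to the right virtual host
        timeout=10,
    )
    return resp.status_code

purge("www.example.com", "/assets/logo.png")
print(prefetch("www.example.com", "/assets/logo.png"))
```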
ClickHouse Client (log-shipper)
Each Load Proxy node includes a ClickHouse Client installed within the host namespace. This client is responsible for forwarding telemetry data, such as HTTP(S) request logs, to the central ClickHouse Server hosted on EdgeHit Controller.
- Authentication is performed using a Pre-Shared Key (PSK), provisioned via environment variables at deployment time.
- Upon successful connection, the client forwards structured log data (e.g., NGINX access logs) in near real time.
- Data is parsed and transformed on the client side to match the expected schema, enabling accurate analytics, reporting, and billing aggregation.
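A minimal sketch of the shipping step, using the `clickhouse-connect` Python package; the controller hostname, account, table, and column names are assumptions:

```python
#!/usr/bin/env python3
"""Sketch: ship parsed NGINX access-log rows to the central ClickHouse."""
import os
from datetime import datetime, timezone

import clickhouse_connect  # pip install clickhouse-connect

client = clickhouse_connect.get_client(
    host="controller.example.net",         # hypothetical EdgeHit Controller host
    username="log_shipper",                # assumed account name
    password=os.environ["CLICKHOUSE_PSK"],  # PSK used as the password, per the note above
)

# One parsed access-log entry, already shaped to the (assumed) schema.
rows = [[datetime.now(timezone.utc), "www.example.com", "/assets/logo.png", 200, 5123]]
client.insert(
    "cdn_access_logs",                     # assumed table name
    rows,
    column_names=["ts", "host", "uri", "status", "bytes_sent"],
)
```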
Unbound DNS Resolver
The Unbound DNS Resolver is deployed on each Load Proxy node to handle internal DNS resolution, specifically for resolving the IP addresses of origin servers referenced in NGINX configurations.
Note
This is not the authoritative DNS server that maps customer domains to Load Proxy IPs; that role is handled by EdgeHit DNS (powered by GDNSD).
- Runs as a lightweight Docker container, scoped to the host network and isolated for internal use only.
- Listens on `127.0.0.53` to process local DNS queries from backend services such as NGINX, Redis sync processes, and health checks.
- Configured to fine-tune DNS caching behavior, including TTL limits and negative-cache lifetimes, ensuring fast and consistent resolution performance.
By running Unbound locally, the system ensures independent, high-performance resolution of origin domains without relying on the host OS resolver or external DNS sources. This improves latency, reliability, and security in content delivery workflows.
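For example, a backend process can resolve an origin through the local Unbound listener like this (a sketch using `dnspython`; the origin name is a placeholder):

```python
#!/usr/bin/env python3
"""Sketch: resolve an origin hostname through the local Unbound instance."""
import dns.resolver  # pip install dnspython

resolver = dns.resolver.Resolver(configure=False)  # ignore /etc/resolv.conf
resolver.nameservers = ["127.0.0.53"]              # local Unbound listener

answer = resolver.resolve("origin.example.com", "A")
for record in answer:
    print("origin resolves to:", record.address)
```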
EdgeHit DNS Components
EdgeHit DNS is the DNS server role assumed by a Load Proxy node. It operates as an authoritative DNS server for both system domains and customer-configured CDN zones.
Authoritative DNS Server
The EdgeHit DNS service is built on top of the open-source gdnsd (GeoDNS Daemon) and packaged as a Docker container by RootNetwork. It provides high-performance, geo-aware DNS resolution for CDN traffic routing.
- EdgeHit DNS uses BIND9-style zone files, stored as key-value pairs in the `redis_db` Redis instance.
- Redis keys with a DNS-specific prefix (e.g., `DNS:example.com`) contain full zone definitions, including A/AAAA/CNAME records, TTLs, and geo-routing policies.
Geo-based Routing
One of the key capabilities of gdnsd is the ability to serve geographically-aware DNS responses:
- DNS replies can vary based on the client's source IP address (country, continent, ASN, etc.).
- This enables intelligent traffic steering, where end-users are directed to the nearest Load Proxy node to reduce latency.
Note
EdgeHit DNS does not support EDNS Client Subnet (ECS). DNS queries are resolved based only on the recursive resolver's IP, not the originating client's IP.
Runtime Details
- Runs as a Docker container on the Load Proxy node.
- Uses a container image named **EdgeHit DNS**, published under the `rootnetworks` organization registry.
- This image is a self-maintained fork of the open-source `gdnsd` project, enhanced to support EdgeHit-specific functionality.
- Listens on port `53` (TCP/UDP) and responds to authoritative DNS queries for customer and system zones.
- Configuration is dynamically loaded from the local replica Redis instance, enabling real-time DNS zone updates without restarting the container.
- EdgeHit DNS retrieves zone data from Redis keys prefixed with `DNS:`; the associated value contains a DNS master zone file, structured similarly to BIND-style syntax but serialized in JSON for integration with EdgeHit's control plane.
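To see exactly what EdgeHit DNS consumes, a zone key can be read from the local replica and parsed. Since the JSON schema is EdgeHit-specific, this sketch just dumps it rather than guessing field names:

```python
#!/usr/bin/env python3
"""Sketch: inspect a zone definition the way EdgeHit DNS consumes it."""
import json

import redis  # pip install redis

r = redis.Redis(host="127.0.0.1", port=8001, decode_responses=True)

raw = r.get("DNS:example.com")  # zone key layout from the list above
if raw is None:
    print("no zone stored for example.com")
else:
    zone = json.loads(raw)
    # Exact schema is EdgeHit-specific; dump it for inspection.
    print(json.dumps(zone, indent=2))
```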
DNS Health Record Checker
Load 53 can remove DNS records that point to the IP address of a server that is DOWN, as detected via an ICMP or HTTP check. Natively, EdgeHit DNS supports mapping each DNS record to a unique ID. Load 53 then queries a web server (a local or remote process) for a file named `health-dns.txt`, which reports the status corresponding to each unique ID. An example is shown below:
```
$ curl http://0.0.0.0:16666/health-dns.txt
#updated_at=2025-06-13 07:42:50 UTC
#sha256=33e12c5b60fa7226dba05ddd37dd4789446dd25df655aaea44ada16f29923cef
2c2ddfd6-dc6e-42e5-8efc-ea5f9200b6ab,up
9344916f-6aac-472e-8435-64c37f917682,down
```
The DNS record with the Healthchecker ID `9344916f-6aac-472e-8435-64c37f917682` will be withdrawn until its status returns to `up`.
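A minimal sketch of an agent that emits this file format. The health facts are hard-coded here (a real agent would scrape them from the monitoring stack, per the note below), and the assumption that the SHA-256 covers only the record lines is unverified:

```python
#!/usr/bin/env python3
"""Sketch: an agent that publishes health-dns.txt in the format above."""
import hashlib
from datetime import datetime, timezone

# Healthchecker ID -> status, as gathered from monitoring (assumed source).
statuses = {
    "2c2ddfd6-dc6e-42e5-8efc-ea5f9200b6ab": "up",
    "9344916f-6aac-472e-8435-64c37f917682": "down",
}

body = "".join(f"{uid},{state}\n" for uid, state in statuses.items())
header = (
    f"#updated_at={datetime.now(timezone.utc):%Y-%m-%d %H:%M:%S} UTC\n"
    f"#sha256={hashlib.sha256(body.encode()).hexdigest()}\n"  # assumed checksum scope
)

with open("health-dns.txt", "w") as f:
    f.write(header + body)
```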
Note
You need to deploy the health-check service on certain Load Proxy endpoints; it acts as an agent that scrapes information from a monitoring stack server and presents it in a text file that EdgeHit DNS understands.
You might also need to deploy the monitoring stack itself, which is the LoadUP component in EdgeHit.


