Proxies incoming HTTP, TLS, DTLS, XMPP, and Minecraft connections based on the hostname contained in the initial request. For TCP protocols (TLS, HTTP, XMPP, Minecraft) the hostname is extracted from the initial TCP stream. For DTLS (TLS over UDP) the hostname is extracted from the SNI extension in the UDP ClientHello datagram, enabling proxying of WebRTC, OpenConnect VPN, CoAP, and other UDP/DTLS protocols by hostname without decryption. This enables HTTPS name-based virtual hosting to separate backend servers without installing the private key on the proxy machine.
SNIProxy is a production-ready, high-performance transparent proxy with a focus on security, reliability, and minimal resource usage.
- Name-based proxying of HTTPS without decrypting traffic - no keys or certificates required on the proxy
- Protocol support: TLS (SNI extraction), DTLS (SNI extraction over UDP), HTTP/1.x (Host header), HTTP/2 (HPACK :authority pseudo-header), XMPP (stream to attribute), and Minecraft Java Edition (handshake server address)
- Pattern matching: Exact hostname matching and PCRE2 regular expressions
- Wildcard backends: Route to dynamically resolved hostnames
- Fallback routing: Default backend for requests without valid hostnames
- HAProxy PROXY protocol: Propagate original client IP/port to backends (v1 and v2), and accept incoming PROXY headers from upstream proxies/load balancers
- IPv4, IPv6, and Unix domain sockets for both listeners and backends
- Multiple listeners per instance with independent configurations
- Source address binding for outbound connections
- Transparent proxy mode (IP_TRANSPARENT) to preserve client source IPs
- SO_REUSEPORT support for multi-process scalability
- Event-driven architecture using libev for efficient I/O multiplexing
- Dynamic ring buffers with automatic growth/shrinking
- Memory pressure trimming: global soft limit aggressively shrinks idle connection buffers before RAM balloons
- Per-connection buffer caps: a configurable connection_buffer_limit (or per-side overrides) prevents slow clients from pinning unbounded RAM
- Zero-copy forwarding via SO_SPLICE on OpenBSD for kernel-level data movement with automatic buffer reclamation and kernel-managed idle timeouts
- Bounded shrink queues: 4096-entry shrink candidate lists with automatic trimming prevent idle buffer bookkeeping from exhausting memory under churn.
- TLS 1.2+ required by default: use -T <version> to allow older TLS 1.0/1.1 clients or enforce TLS 1.3 for stricter deployments
- Cryptographic DNS query IDs: arc4random()-seeded IDs with lifecycle tracking prevent prediction or reuse
- Regex DoS prevention: Match limits scale with hostname length
- Buffer overflow protection: Strict bounds checking in all protocol parsers
- NULL byte rejection: Prevents hostname validation bypasses
- Listener ACLs: CIDR-based allow/deny policies per listener to block or permit client ranges
- Backend ACLs: CIDR-based restrictions on outbound connections prevent open proxy abuse
- HTTP/2 memory limits: Per-connection and global HPACK table size caps
- Request guardrails: Caps of 100 HTTP headers and 64 TLS extensions stop CPU exhaustion attempts before parsers process attacker-controlled blobs. Extension counting is enforced consistently across all TLS parsing paths.
- Rate limiter collision defense: arc4random()-seeded buckets use FNV-1a hashing and short-chain cutoffs so hash spraying cannot bypass per-IP token buckets.
- DNS resolver hardening: Async-signal-safe handlers, integer overflow protection, arc4random()-seeded query IDs, mutex-guarded restart state, and leak-resistant handle accounting prevent prediction, leaks, or use-after-free bugs.
- DNS query concurrency limits: Prevents resolver exhaustion
- Connection idle timeouts: Automatic cleanup of stalled connections
- Per-IP connection rate limiting: Token-bucket guardrail on new TCP connections and UDP sessions across all listeners
- DTLS source validation: New UDP sessions require a retransmission before contacting the backend, preventing reflection/amplification attacks from spoofed sources
- Privilege separation: Separate processes for logging and DNS resolution
- OpenBSD sandboxing: pledge(2) and unveil(2) for minimal system access
- FreeBSD sandboxing: Capsicum capability mode with per-fd rights limiting
- Input sanitization: Hostname validation, control character removal
- Comprehensive fuzzing: Protocol fuzzers for TLS, DTLS, HTTP/2, XMPP, Minecraft, hostname, address, config, listener ACL, IPC crypto, and resolver
- Configuration integrity: Config files are re-checked for strict permissions on reload, all path directives must be absolute, and resolver search domains are treated as literal suffixes instead of being DNS-parsed.
- Binder allowlisting: The privileged binder helper only binds sockets for listener addresses present in the configuration, preventing unprivileged processes from requesting arbitrary bound descriptors.
- DNS-over-TLS upstreams: Resolver blocks can send queries over TLS via dot://IP-or-hostname/<SNI>[/tls1.2|tls1.3] entries; IP literals require either a TLS hostname after the slash or /insecure to explicitly disable verification. TLS 1.2 is enforced by default, and TLS 1.3 can be requested when supported by the linked OpenSSL.
- Asynchronous DNS via a dedicated resolver process (powered by c-ares since version 0.8.7)
- IPv4/IPv6 preference modes: default, IPv4-only, IPv6-only, IPv4-first, IPv6-first
- Configurable nameservers and search domains
- Concurrency limits to prevent resource exhaustion
- Hot configuration reload via SIGHUP without dropping connections
- Reference counting ensures safe updates during reload
- Flexible logging: Syslog and file-based logs with per-listener overrides
- Access logs with connection duration and byte transfer statistics
- Process renaming: Processes show as sniproxy-mainloop (Linux only), sniproxy-binder, sniproxy-logger, and sniproxy-resolver in process listings
- IPC hardening: binder/logger/resolver channels encrypt control messages, validate framing, enforce max_payload_len, and emit clear restart guidance
- PID file support for process management with strict validation that rejects stale sockets, FIFOs, or symlinks before writing
- Privilege dropping to non-root user/group after binding privileged ports
- Privilege verification: startup fails fast if real or effective UID remains root after dropping privileges
- Config permission guard: sniproxy refuses to run when the configuration file is accessible to group/other users
- Legacy config compatibility: Accepts older listen, proto, user, and group keywords
- Resolver debug tracing: Enable verbose DNS resolver logs on demand with the -d CLI flag for troubleshooting query flow
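Putting the basics together, a minimal configuration might look like the following (a sketch using documentation addresses; adjust paths, ports, and backends to your environment):

user daemon
pidfile /var/run/sniproxy.pid

listener 0.0.0.0:443 {
    protocol tls
    table Hosts
    fallback 192.0.2.5:443
}

table Hosts {
    example.com 192.0.2.10:443
    .*\.example\.net *:443
}

The sections below describe each directive in detail.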
SNIProxy uses a multi-process architecture for security and isolation:
- Main process (sniproxy-mainloop): Accepts connections, parses protocol headers, routes to backends, and proxies data bidirectionally
- Binder process (sniproxy-binder): Creates privileged listening sockets before and after reloads so the main loop can drop root while still opening low ports
- Logger process (sniproxy-logger): Handles all log writes with dropped privileges, enabling secure logging from the main process
- Resolver process (sniproxy-resolver): Performs asynchronous DNS lookups in isolation when DNS support is enabled
This separation ensures that even if a component is compromised, the attack surface is minimized. The main process drops privileges after binding to ports, and helper processes run with minimal system access.
See ARCHITECTURE.md for detailed design documentation.
Usage: sniproxy [-c <config>] [-f] [-g] [-t] [-n <max file descriptor limit>] [-V] [-T <min TLS version>] [-d]
-c configuration file, defaults to /etc/sniproxy.conf
-f run in foreground
-g allow group-read (0640) config permissions for SIGHUP reload
-t test configuration and exit
-n specify file descriptor limit
-V print the version of SNIProxy and exit
-T <1.0|1.1|1.2|1.3> set minimum TLS client hello version (default 1.2)
-d enable resolver debug logging (verbose DNS tracing to stderr/error log)
For Debian-, Fedora-, or Alpine-based Linux distributions, see the package-building sections below.
Prerequisites
- Autotools (autoconf, automake, gettext and libtool)
- libev4, libpcre2, c-ares, OpenSSL (or LibreSSL) and libbsd development headers
- libbsd is not required on systems that provide arc4random and strlcpy natively (OpenBSD, FreeBSD, macOS)
- Perl and cURL for test suite
Install
./autogen.sh && ./configure && make check && sudo make install
Building Debian/Ubuntu package
This is the preferred installation method on recent Debian based distributions:
- Install required packages:
  sudo apt-get install autotools-dev cdbs debhelper dh-autoreconf dpkg-dev gettext libev-dev libpcre2-dev libc-ares-dev libssl-dev libbsd-dev pkg-config fakeroot devscripts
- Build a Debian package:
  ./autogen.sh && dpkg-buildpackage
- Install the resulting package:
  sudo dpkg -i ../sniproxy_<version>_<arch>.deb
Building Alpine package
- Install required packages:
  apk add build-base abuild autoconf automake libtool pkgconf libev-dev pcre2-dev c-ares-dev openssl-dev libbsd-dev
- Build a distribution tarball:
  ./autogen.sh && ./configure && make dist
- Build an APK package using the included APKBUILD:
  cp alpine/APKBUILD /tmp/aport/ && cp sniproxy-*.tar.gz /tmp/aport/
  cd /tmp/aport && abuild checksum && abuild -r
- Install the resulting package:
  apk add --allow-untrusted ~/packages/<arch>/sniproxy-<version>.apk
Building Fedora/RedHat package
This is the preferred installation method for modern Fedora based distributions.
- Install required packages:
  sudo yum install autoconf automake curl gettext-devel libev-devel pcre2-devel pkgconfig rpm-build c-ares-devel openssl-devel libbsd-devel
- Build a distribution tarball:
  ./autogen.sh && ./configure && make dist
- Build an RPM package:
  rpmbuild --define "_sourcedir `pwd`" -ba redhat/sniproxy.spec
- Install the resulting RPM:
  sudo yum install ../sniproxy-<version>.<arch>.rpm
Building on FreeBSD
- Install required packages:
  pkg install autoconf automake libtool pkgconf libev pcre2 c-ares
- Build:
  ./autogen.sh && ./configure LDFLAGS="-L/usr/local/lib" CPPFLAGS="-I/usr/local/include" && make
- Install:
  sudo make install
  sudo cp scripts/sniproxy.rc /usr/local/etc/rc.d/sniproxy
- Enable and start:
  sudo sysrc sniproxy_enable=YES
  sudo service sniproxy start
Capsicum capability mode is automatically enabled on FreeBSD when all listeners and backends use IP addresses (not Unix domain sockets).
Building on OS X with Homebrew
- Install dependencies:
  brew install libev pcre2 c-ares openssl autoconf automake gettext libtool
- Read the warning about gettext and force-link it so autogen.sh works. We need GNU gettext for the macro AC_LIB_HAVE_LINKFLAGS, which isn't present in the default macOS package:
  brew link --force gettext
- Build:
  ./autogen.sh && ./configure && make

macOS support is best effort and isn't a primary target platform.
Global directives appear before any listener or table blocks. In addition to
standard items such as user, group, pidfile, resolver, and access_log,
you can keep abusive clients in check with a global per-IP rate limiter:
per_ip_connection_rate 50 # allow 50 new connections per second per source IP
per_ip_max_connections 100 # max 100 simultaneous connections per source IP
per_ip_connection_rate limits the rate of new connections and UDP sessions
(default 30/s). per_ip_max_connections limits how many connections and UDP
sessions may be open concurrently from a single IP (default 0, disabled).
Both limits are shared between TCP and UDP. Set either value to 0 to disable.
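For example, to raise the rate limit while leaving the concurrency cap at its disabled default (a sketch; tune the numbers for your traffic):

per_ip_connection_rate 100   # 100 new connections/s per source IP
per_ip_max_connections 0     # concurrency cap disabled (default)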
To guard against descriptor exhaustion during floods, cap the number of
concurrent connections (set 0 to auto-derive ~80% of the file descriptor
limit, which is the default):
max_connections 20000
To cap how much memory any one connection can pin, set a shared limit (or override each side independently):
connection_buffer_limit 4M # both client and server buffers cap at 4 MiB
# client_buffer_limit 4M # optional per-side overrides
# server_buffer_limit 8M
Limit how many HTTP headers are accepted per request (default 100) to guard against header-count DoS attempts:
http_max_headers 200
Restrict which backend addresses sniproxy may connect to, preventing abuse as an open proxy to reach internal hosts:
backend_acl deny_except {
10.0.0.0/8
172.16.0.0/12
192.168.0.0/16
}
The policy is either deny_except (only allow listed ranges) or
allow_except (allow everything except listed ranges).
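Conversely, an allow_except policy permits all destinations except the listed ranges; for example, to stop the proxy from reaching loopback or link-local backends (a sketch; extend the list to cover your internal networks):

backend_acl allow_except {
    127.0.0.0/8
    169.254.0.0/16
}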
Enable TCP Fast Open on both listener and backend sockets for reduced connection latency (Linux 3.7+/4.11+, FreeBSD 12+):
tcp_fastopen on
user daemon
group daemon
pidfile /tmp/sniproxy.pid
# Allow libev to batch events for better throughput (seconds)
io_collect_interval 0.0005
timeout_collect_interval 0.005
error_log {
filename /var/log/sniproxy/error.log
priority notice
}
listener 127.0.0.1:443 {
protocol tls
table TableName
# Specify a server to use if the initial client request doesn't contain
# a hostname
fallback 192.0.2.5:443
# Optional: bind outbound connections to specific source address
source 192.0.2.100
# Optional: per-listener access log
access_log {
filename /var/log/sniproxy/access.log
}
}
table TableName {
# Bare hostnames are auto-anchored: example.com only matches
# "example.com", not "sub.example.com"
example.com 192.0.2.10:4343
# If port is not specified the listener port will be used
example.net [2001:DB8::1:10]
# Use regular expressions to match multiple hosts
.*\\.example\\.com 192.0.2.11:443
# Wildcard backends resolve the client-requested hostname
.*\\.dynamiccdn\\.com *:443
}
resolver {
# DNS resolution mode: ipv4_only, ipv6_only, ipv4_first, ipv6_first
mode ipv4_first
# Custom nameservers (handled by c-ares)
nameserver 8.8.8.8
nameserver 2001:4860:4860::8888
# DNS-over-TLS upstream with explicit TLS verification hostname
#nameserver dot://9.9.9.9/dns.quad9.net/tls1.2
# Limit concurrent DNS queries to prevent resource exhaustion
max_concurrent_queries 512
# Limit per-client concurrent DNS queries (default 16, 0 to disable)
max_concurrent_queries_per_client 16
# DNSSEC policy (default relaxed): off | relaxed | strict
dnssec_validation strict
}
dot:// entries accept an IP literal or hostname before the slash and the TLS
verification hostname after the slash. Bare IP literals are no longer accepted;
for IPs you must supply either a TLS hostname (preferred) or /insecure to
explicitly disable certificate verification (e.g. nameserver dot://9.9.9.9/insecure).
An optional third segment lets you pin a minimum TLS version (/tls1.2 (default)
or /tls1.3 where supported by your OpenSSL). Certificates are validated
against the system trust store, so keep /etc/ssl up-to-date.
Security recommendation: Prefer IP literals with explicit SNI hostnames over DNS hostnames for your DoT servers. This avoids a bootstrap problem where the DoT server's hostname must be resolved via potentially untrusted DNS before the secure channel is established:
# Recommended: IP literal with explicit TLS hostname
nameserver dot://9.9.9.9/dns.quad9.net
# Less secure: hostname requires DNS resolution before DoT is available
nameserver dot://dns.quad9.net
listener [::]:443 {
protocol tls
table SecureHosts
# Enable SO_REUSEPORT for multi-process load balancing
reuseport yes
# Enable IP_TRANSPARENT to preserve client source IPs
source client
# Log malformed/rejected requests
bad_requests log
# Restrict which clients may connect (default is allow all)
acl deny_except {
10.0.0.0/8
2001:db8::/32
}
# Accept incoming PROXY protocol headers (v1 and v2 auto-detected)
# from upstream proxies/load balancers
proxy_protocol on
# Fallback with PROXY protocol v1 header (text format)
fallback 192.0.2.50:443
fallback proxy_protocol
# Or use PROXY protocol v2 (binary format)
# fallback proxy_protocol_v2
}
table SecureHosts {
# Per-backend PROXY protocol v1 (text) or v2 (binary)
secure.example.com 192.0.2.20:443 proxy_protocol
other.example.com 192.0.2.21:443 proxy_protocol_v2
# Consistent backend selection: same client IP always
# reaches the same backend when DNS returns multiple records
backend_affinity on
.*\.cdn\.example\.com *:443
}
Setting io_collect_interval and timeout_collect_interval lets libev batch I/O readiness notifications and timer recalculations, which reduces system call pressure on busy instances. The defaults (0.0005s and 0.005s respectively) favor throughput; set the values to 0 if you need the absolute lowest latency.
SNIProxy supports proxying XMPP connections by extracting the target domain from
the to attribute in the initial <stream:stream> opening. This enables
routing of XMPP traffic including STARTTLS negotiation without terminating the
TLS connection on the proxy.
listener 0.0.0.0:5222 {
protocol xmpp
table XMPPServers
# Fallback for connections without a valid 'to' attribute
fallback 192.0.2.50:5222
}
table XMPPServers {
# Route XMPP domains to their respective servers
example.com 192.0.2.10:5222
chat.example.org 192.0.2.11:5222
# Wildcard for dynamic XMPP hosting
.*\\.xmpp\\.net *:5222
}
XMPP clients send an initial stream opening like:
<?xml version='1.0'?>
<stream:stream to="example.com" xmlns="jabber:client" ...>
SNIProxy extracts the to attribute value and routes to the appropriate
backend. The STARTTLS negotiation happens after the connection is established
and is transparent to the proxy.
Security notes:
- Hostnames are validated and sanitized (only alphanumeric, dots, hyphens, underscores, and bracketed IPv6 allowed)
- Control characters, path traversal attempts, and injection characters are rejected
- Maximum hostname length is 255 characters
- Maximum header size is 4096 bytes
SNIProxy supports proxying Minecraft Java Edition connections by extracting the server address from the initial handshake packet. This enables hosting multiple Minecraft servers behind a single IP and port using different hostnames.
listener 0.0.0.0:25565 {
protocol minecraft
table MinecraftServers
fallback 192.0.2.50:25565
}
table MinecraftServers {
mc.example.com 192.0.2.10:25565
play.example.org 192.0.2.11:25565
# Wildcard for dynamic Minecraft hosting
.*\\.mc\\.net *:25565
}
Minecraft clients send a handshake packet containing the server address as the very first data in the TCP connection. SNIProxy extracts the server address field and routes to the appropriate backend. All subsequent protocol traffic (login, encryption, compression) passes through transparently.
Forge Mod Loader (FML) markers and BungeeCord forwarding data appended to the server address after NUL bytes are automatically stripped before routing.
Listeners default to accepting clients from any address. Use acl allow_except to list forbidden ranges while permitting all other clients, or acl deny_except to start from a deny-all stance and explicitly list the ranges that should be accepted. IPv4 and IPv6 networks can be mixed in the same block, and IPv4-mapped IPv6 connections are evaluated against IPv4 CIDRs. Only one policy style may appear in the configuration; mixing allow_except and deny_except blocks causes SNIProxy to exit during parsing.
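For example, to serve all clients except a couple of abusive ranges (a sketch using documentation prefixes; the deny_except form at the listener example above shows the opposite stance):

listener 0.0.0.0:443 {
    protocol tls
    table TableName
    acl allow_except {
        198.51.100.0/24
        2001:db8:bad::/48
    }
}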
Using hostnames or wildcard entries in the configuration relies on c-ares for asynchronous resolution. DNS-dependent features such as fallback hostnames, wildcard tables, and transparent proxy mode all use this resolver.
SNIProxy spawns a dedicated sniproxy-resolver process that handles all DNS queries asynchronously. This architecture provides:
- Process isolation: DNS operations are separated from the main proxy
- Concurrency control: Configurable limits prevent resolver exhaustion
- IPv4/IPv6 flexibility: Multiple resolution modes for different deployment needs
- Custom nameservers: Override system DNS configuration per SNIProxy instance
Security note: Run SNIProxy alongside a local caching DNS resolver (e.g., unbound, dnsmasq) to reduce exposure to spoofed responses and improve performance.
DNSSEC validation runs in relaxed mode by default, which requests DNSSEC records and trusts replies carrying the AD flag while still falling back to unsigned answers when AD isn't set. Set dnssec_validation strict inside the resolver block to require DNS replies that carry the AD (Authenticated Data) flag from a validating upstream resolver. This mode needs a c-ares build with DNSSEC/Trust AD support and will fail to resolve unsigned zones. Use dnssec_validation off to disable DNSSEC entirely if your upstream resolvers do not support it.
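As a sketch, the three modes look like this inside a resolver block (only one dnssec_validation line should be active at a time):

resolver {
    nameserver 192.0.2.53
    dnssec_validation relaxed   # default: trust AD when set, accept unsigned answers
    # dnssec_validation strict  # require the AD flag; unsigned zones fail to resolve
    # dnssec_validation off     # disable DNSSEC handling entirely
}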
SNIProxy includes extensive security hardening:
- DNS query ID randomization: Uses arc4random() to generate cryptographically secure random IDs to prevent prediction attacks
- c-ares resolver hardening: Async-signal-safe signal handlers, integer overflow protection, and leak fixes keep the resolver stable under load
- TLS parser hardening: Early rejection of SSL 2.0/3.0 and malformed ClientHello variants that cannot carry the SNI extension
- Regex DoS mitigation: Match limits scale with hostname length to prevent catastrophic backtracking on hostile hostnames
- Buffer overflow protection: buffer_reserve() enforces strict overflow guards to block integer wraparound attempts
- NUL byte filtering: TLS SNI parsing rejects server names with embedded NUL bytes before hostname validation
- HTTP/2 memory limits: Enforces per-connection (64KB) and global (4MB) HPACK table limits to avoid memory exhaustion
- PROXY header hardening: Single-pass header composition eliminates read-past-buffer bugs
- Connection timeout protection: Idle timers clear pending events to prevent use-after-free conditions
- DNS concurrency limits: Mutex-protected resolver queues enforce configurable caps on in-flight lookups
The project includes comprehensive testing:
- Unit tests: All major components (buffer, TLS, DTLS, HTTP, HTTP/2, XMPP, Minecraft, tables, etc.)
- Fuzz testing: Dedicated fuzzers for TLS ClientHello, DTLS ClientHello, HTTP/2 HEADERS, XMPP stream, Minecraft handshake, hostname sanitization, address parsing, config parsing, listener ACL, IPC crypto, and resolver response in tests/fuzz/
- Integration tests: End-to-end listener and routing validation
- Protocol conformance: Tests for TLS 1.0-1.3, DTLS, HTTP/1.x, HTTP/2, XMPP, and Minecraft
Run tests with: make check
On OpenBSD, SNIProxy combines unveil(2) and pledge(2) to keep each helper process constrained:
- unveil(): Restricts access to the configuration file, pidfile, log destinations, and Unix domain sockets declared in the configuration
- pledge(): Promise sets are tailored per process to minimize available system calls:
  - Main process: starts with stdio getpw inet dns rpath proc id wpath cpath unix sendfd recvfd while reading configuration, then tightens to stdio inet dns rpath proc unix sendfd recvfd after dropping privileges (includes sendfd so the binder child can inherit it after fork)
  - Binder process: stdio unix inet sendfd while handling privileged socket creation
  - Logger process: starts with stdio rpath wpath cpath fattr id unix recvfd, then tightens to stdio rpath wpath cpath fattr unix recvfd after dropping privileges
  - Resolver process: stdio rpath inet dns unix to perform DNS lookups in isolation
All paths are collected from the loaded configuration, so custom locations work as long as files/directories exist before launch. Helper processes are forked (not exec'd) and inherit the master key for IPC encryption.
On FreeBSD, SNIProxy uses Capsicum capability mode to restrict each process after initialization:
- Resolver process: DoT SSL context is eagerly initialized before entering capability mode (since CA bundle loading requires filesystem access). The IPC socket is limited to read/write/send/recv/event rights.
- Logger process: Enters capability mode after privilege drop. Pre-opened directory fds allow log file rotation via openat() in capability mode. Syslog is pre-connected before cap_enter(). The IPC socket is limited to read/write/send/recv/event rights.
- Main process: Config directory and temp directory are pre-opened before entering capability mode. Config reload uses openat() on the pre-opened directory fd. Debug dumps use openat() on the pre-opened temp directory fd.
- Binder process: Not sandboxed with Capsicum because it must bind() AF_UNIX paths, which requires VFS lookups forbidden in capability mode.
The main process skips capability mode when any listener, fallback, or backend address is a Unix domain socket, since connect() and bind() to AF_UNIX paths require VFS lookups forbidden in capability mode. IP-only configurations (the common case) get full Capsicum protection. Adding new log file paths during SIGHUP reload is not supported in capability mode (existing log files can be reopened).
Set SNIPROXY_DISABLE_CAPSICUM=1 in the environment to disable Capsicum sandboxing for debugging.
SNIProxy is designed for high performance and low resource usage:
- Event-driven I/O: Uses libev for efficient non-blocking I/O multiplexing, handling thousands of concurrent connections per process
- Minimal per-connection overhead: Dynamic buffers start small (16KB client, 32KB server) and grow only as needed, then shrink when idle
- Efficient buffered I/O: Ring buffers and vectored writes minimize copies while remaining portable
- SO_SPLICE zero-copy: On OpenBSD, after the initial handshake is parsed the kernel splices data directly between client and server sockets, eliminating user-space copies for the bulk of proxied traffic. User-space buffers are shrunk to 4KB once the splice is active to minimize per-connection memory, and idle detection is handled by the kernel splice timeout which properly resets on data flow
- TCP_NODELAY: Nagle's algorithm is disabled on both client and server sockets to avoid coalescing delays on forwarded data
- JIT-compiled regex: PCRE2 JIT compilation is used when available, giving 2-10x faster backend pattern matching
- HPACK ring buffer: HTTP/2 dynamic table inserts are O(1) via ring buffer indexing, eliminating per-header memmove overhead
- SO_REUSEPORT support: Run multiple SNIProxy instances on the same port for kernel-level load balancing across CPU cores
- Compiled regex patterns: Pattern matching happens once at config load, not per connection
- Hot config reload: Update routing rules without restarting or dropping existing connections (SIGHUP)
Typical resource usage: 1-2 MB RAM per process plus ~2-8 KB per active connection (varies with traffic patterns)
"Address already in use" when starting
- Another process is bound to the port, or a previous SNIProxy instance didn't clean up. Use netstat -tlnp or ss -tlnp to check.
- Try enabling reuseport yes in the listener config for multi-instance setups
Connections fail to route / "No matching backend"
- Check that table names match between listener and table definitions
- Verify hostname patterns: remember that regex patterns need proper escaping (e.g., .*\.example\.com, not *.example.com)
- Enable bad_requests log to see rejected requests in the error log
DNS resolution not working
- Ensure the c-ares development headers were available when SNIProxy was built
- Check that the sniproxy-resolver process is running (it should appear in the process list)
- Verify nameserver configuration and network connectivity
High memory usage
- Check for connections stuck in RESOLVING state with slow/unresponsive DNS
- Reduce max_concurrent_queries to limit DNS-related memory
- Verify no regex patterns are causing excessive backtracking (check the error log)
Permissions errors on startup
- Ensure user/group specified in config exists
- Verify log file directories are writable by the configured user
- On OpenBSD, ensure all paths exist before starting (for unveil)
HTTP/2 connection coalescing causes wrong backend routing
HTTP/2 clients (browsers) reuse an existing TLS connection for a different
hostname when two conditions are met: (1) both hostnames resolve to the same
IP, and (2) the TLS certificate is valid for both (e.g. a wildcard cert
*.example.com). Since all proxied hostnames resolve to the sniproxy IP,
condition (1) is always true. If the backend presents a shared certificate,
the browser multiplexes requests for multiple hostnames over a single
connection. SNIProxy routes once per TCP connection based on the SNI in the
ClientHello and cannot inspect encrypted HTTP/2 frames, so subsequent
hostnames are silently sent to the wrong backend.
Symptoms include 404 errors, CORS failures, "Access denied" responses, or content from the wrong site. The problem resolves temporarily when the browser is restarted or connections are cleared.
Workarounds (pick the most practical for your setup):
- Use per-domain certificates instead of wildcard certs. Let's Encrypt makes this easy. This is the most effective fix when you control the backends.
- Assign separate IPs per backend so the browser's IP-match check fails. IPv6 makes this practical.
- Disable HTTP/2 on backends by removing h2 from ALPN negotiation. This loses HTTP/2 performance benefits but eliminates coalescing entirely.
- Configure backends to return HTTP 421 (Misdirected Request) for hostnames they do not serve. RFC 9110 defines this status code; compliant browsers retry on a fresh connection.
When proxying third-party services (e.g. CDNs) where you control neither the certificate nor the backend, there is no workaround within sniproxy. Use a TLS-terminating reverse proxy instead for those services.
Run in foreground with resolver debug logging enabled:
sniproxy -f -d -c /path/to/config.conf
This will:
- Keep process in foreground (not daemonize)
- Show detailed resolver tracing on stderr/error log to troubleshoot DNS issues
SNIProxy is actively maintained with a focus on security, stability, and standards compliance. The codebase has undergone extensive security hardening in recent releases, including protection against regex DoS, buffer overflows, and memory exhaustion attacks.
Primary platform: OpenBSD Best-effort support: Linux, Other BSDs, macOS
SNIProxy is production-ready and commonly used for:
- Name-based virtual hosting: Route HTTPS traffic by hostname without TLS termination
- TLS/SSL load balancing: Distribute connections across backend servers based on SNI
- Multi-tenant hosting: Route multiple domains to different backend infrastructure
- CDN origins: Route traffic to appropriate origin servers by hostname
- XMPP federation: Route federated XMPP traffic to appropriate servers based on the stream's target domain, including STARTTLS support
- Minecraft hosting: Host multiple Minecraft Java Edition servers behind a single IP and port using different hostnames
- DTLS/UDP proxying: Route WebRTC, OpenConnect VPN, CoAP, and other UDP/DTLS protocols by hostname without decryption
- Development proxies: Local HTTPS routing for development environments
- IoT/embedded systems: Lightweight SNI routing with minimal resource usage
Contributions are welcome! Areas of particular interest:
- Additional protocol parsers
- Performance optimizations
- Security improvements
- Documentation improvements
- Bug reports and test cases
When developing, please use the memory sanitizers to catch bugs early:
- See SANITIZERS.md for AddressSanitizer, MemorySanitizer, UndefinedBehaviorSanitizer, and ThreadSanitizer usage
- CI automatically runs ASAN and UBSAN on all pull requests
- Source code: https://github.com/renaudallard/sniproxy
- Architecture documentation: See ARCHITECTURE.md
- Memory sanitizers guide: See SANITIZERS.md
- Issue tracking: GitHub Issues
- License: BSD 2-Clause
Current author: Renaud Allard renaud@allard.it
Original author: Dustin Lundquist dustin@null-ptr.net
Contributors: Chris Lundquist, Igor Novgorodov, Nikos Mavrogiannopoulos, Vit Herman, Remi Gacogne, Pieter Lexis, Oldrich Jedlicka, Nick Kugaevsky, Manuel Kasper, Lars Reemts, Bearnard Hibbins, Robin Balyan, Andrej Manduch, Andreas Loibl, Aaron Schrab, Zhang Sen, Udit Raikwar, Thomas Nordquist, Theophile Helleboid, Sebastian Wiedenroth, RickieL, Pierre-Olivier Mercier, Peter van Dijk, Naveen Nathan, Marc Haber, Kirill Ponomarev, John Wang, imlonghao, Christopher Galtenberg, Bram Gotink, Arni Birgisson
All real life tests are only done on OpenBSD. If you see issues on other OSes feel free to submit PRs or bug reports.
SNIProxy builds on several excellent libraries: