- Introduced `client_adversarial_tests.rs` to stress-test connection limits and IP-tracker race conditions.
- Added `handshake_adversarial_tests.rs` for mutational bit-flipping tests and timing neutrality checks.
- Created `masking_adversarial_tests.rs` to validate probing indistinguishability and SSRF prevention.
- Implemented `relay_adversarial_tests.rs` to ensure head-of-line (HOL) blocking prevention and data quota enforcement.
- Updated the respective modules to register the new test paths.
- Adjusted `QUOTA_USER_LOCKS_MAX` to differ between test and non-test builds, so the limit is easy to exercise in tests without constraining production.
- Retained existing locks when the lock-count maximum is reached rather than evicting them, keeping memory usage bounded.
- Added comprehensive tests for quota user lock functionality, including cache reuse, saturation behavior, and race conditions.
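
A minimal sketch of the shape this could take, assuming a `HashMap`-backed cache and a shared overflow lock; all names besides `QUOTA_USER_LOCKS_MAX` are hypothetical:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Hypothetical values: a smaller limit in tests makes saturation easy to exercise.
#[cfg(test)]
const QUOTA_USER_LOCKS_MAX: usize = 8;
#[cfg(not(test))]
const QUOTA_USER_LOCKS_MAX: usize = 4096;

/// Returns the per-user quota lock, reusing an existing entry when present.
/// Once the cache is full, existing locks are retained and new users fall
/// back to a shared overflow lock, keeping memory bounded.
fn user_lock(
    locks: &mut HashMap<String, Arc<Mutex<()>>>,
    overflow: &Arc<Mutex<()>>,
    user: &str,
) -> Arc<Mutex<()>> {
    if let Some(l) = locks.get(user) {
        return Arc::clone(l); // cache hit: reuse, never evict
    }
    if locks.len() >= QUOTA_USER_LOCKS_MAX {
        return Arc::clone(overflow); // saturated: keep existing entries
    }
    let l = Arc::new(Mutex::new(()));
    locks.insert(user.to_string(), Arc::clone(&l));
    l
}
```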
- Enhanced the `StatsIo` struct to manage wake scheduling for read and write operations, preventing unnecessary self-wakes.
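
A rough sketch of the wake-gating idea, with hypothetical names; the actual `StatsIo` internals may differ:

```rust
use std::task::{Context, Waker};

/// Tracks whether a wake is already scheduled for one direction (read or
/// write), so the poll path never re-wakes itself for work it just queued.
struct WakeGate {
    waker: Option<Waker>,
    scheduled: bool,
}

impl WakeGate {
    /// Called from poll_read/poll_write when the operation is pending:
    /// store the latest waker and clear the scheduled flag.
    fn register(&mut self, cx: &Context<'_>) {
        self.waker = Some(cx.waker().clone());
        self.scheduled = false;
    }

    /// Called when readiness changes: wake at most once until the next poll.
    fn schedule_wake(&mut self) {
        if !self.scheduled {
            self.scheduled = true;
            if let Some(w) = self.waker.take() {
                w.wake();
            }
        }
    }
}
```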
- Introduced separate replay checker domains for handshake and TLS to ensure isolation and prevent cross-pollution of keys.
- Added security tests for replay checker to validate domain separation and window clamping behavior.
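
One common way to implement such domain separation is to prefix each key with a domain tag; a hedged sketch, with hypothetical names:

```rust
use std::collections::HashSet;

/// Hypothetical domain tag: keys from the MTProto handshake and from the
/// fake-TLS layer live in disjoint namespaces even if the raw bytes collide.
#[derive(Clone, Copy)]
enum ReplayDomain {
    Handshake = 1,
    Tls = 2,
}

struct ReplayChecker {
    seen: HashSet<Vec<u8>>,
}

impl ReplayChecker {
    /// Returns true the first time a (domain, key) pair is observed.
    fn check_and_insert(&mut self, domain: ReplayDomain, key: &[u8]) -> bool {
        let mut namespaced = Vec::with_capacity(key.len() + 1);
        namespaced.push(domain as u8); // domain byte prevents cross-pollution
        namespaced.extend_from_slice(key);
        self.seen.insert(namespaced)
    }
}
```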
- Modified `build_emulated_server_hello` to accept an optional ALPN (Application-Layer Protocol Negotiation) parameter, allowing an ALPN marker to be embedded in the application-data payload.
- Implemented logic to handle oversized ALPN values and ensure they do not interfere with the application data payload.
- Added new security tests in `emulator_security_tests.rs` to validate the behavior of the ALPN embedding, including scenarios for oversized ALPN and preference for certificate payloads over ALPN markers.
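
A hedged sketch of the embedding logic, assuming a hypothetical single-byte marker format; the real payload layout is not specified here:

```rust
/// Hypothetical marker format: [0xA1, len, alpn bytes...] appended after the
/// payload. Oversized or empty values are silently skipped so the
/// application-data payload is never disturbed.
const ALPN_MARKER: u8 = 0xA1;
const ALPN_MAX_LEN: usize = 255;

fn embed_alpn(app_data: &mut Vec<u8>, alpn: Option<&[u8]>) {
    if let Some(proto) = alpn {
        if proto.is_empty() || proto.len() > ALPN_MAX_LEN {
            return; // oversized/empty ALPN: leave payload untouched
        }
        app_data.push(ALPN_MARKER);
        app_data.push(proto.len() as u8);
        app_data.extend_from_slice(proto);
    }
}
```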
- Introduced `send_adversarial_tests.rs` to cover edge cases and potential issues in the middle proxy's send functionality, ensuring robustness against various failure modes.
- Updated `middle_proxy` module to include new test modules and ensure proper handling of writer commands during data transmission.
- Simplified eviction candidate selection in `auth_probe_record_failure_with_state` by tracking the oldest candidate directly.
- Enhanced the handling of stale entries to ensure newcomers are tracked even under capacity constraints.
- Added tests to verify behavior under stress conditions and ensure newcomers are correctly managed.
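
A sketch of the single-pass selection, assuming a timestamp-keyed map; types and names are illustrative:

```rust
use std::collections::HashMap;
use std::net::IpAddr;
use std::time::Instant;

/// Single-pass eviction: instead of collecting and sorting candidates,
/// remember only the oldest entry seen while iterating.
fn evict_oldest(entries: &mut HashMap<IpAddr, Instant>) {
    let oldest = entries
        .iter()
        .min_by_key(|(_, &seen)| seen)
        .map(|(ip, _)| *ip);
    if let Some(ip) = oldest {
        entries.remove(&ip); // frees a slot so the newcomer can be tracked
    }
}
```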
- Updated `decode_user_secrets` to prioritize preferred users based on SNI hints (see the sketch below).
- Introduced new tests for TLS SNI handling and replay protection mechanisms.
- Improved deduplication hash stability and collision resistance in middle relay logic.
- Refined cutover handling in route mode to ensure consistent error messaging and session management.
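
A hedged sketch of the SNI-hint prioritization in `decode_user_secrets` mentioned above; the entry type and matching rule are illustrative:

```rust
/// Hypothetical user entry: secret plus the TLS domain it was configured with.
struct UserSecret {
    name: String,
    secret: [u8; 16],
    tls_domain: Option<String>,
}

/// Try users whose configured domain matches the client's SNI hint first,
/// so the common case authenticates after one attempt.
fn ordered_candidates<'a>(
    users: &'a [UserSecret],
    sni_hint: Option<&str>,
) -> Vec<&'a UserSecret> {
    let mut preferred: Vec<&UserSecret> = Vec::new();
    let mut rest: Vec<&UserSecret> = Vec::new();
    for u in users {
        let matches = match (sni_hint, u.tls_domain.as_deref()) {
            (Some(h), Some(d)) => h.eq_ignore_ascii_case(d),
            _ => false,
        };
        if matches { preferred.push(u) } else { rest.push(u) }
    }
    preferred.extend(rest);
    preferred
}
```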
- Implemented a mechanism to log unknown datacenter indices, capping the number of distinct indices logged to avoid excessive logging.
- Introduced tests to ensure that logging is deduplicated per datacenter index and respects the distinct limit.
- Updated the fallback logic for datacenter resolution to prevent panics when only a single datacenter is available.
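
A minimal sketch of the deduplicated, distinct-limited logging; the limit value and storage are illustrative:

```rust
use std::collections::HashSet;
use std::sync::Mutex;

/// Hypothetical cap on how many *distinct* unknown DC indices get logged.
const UNKNOWN_DC_LOG_LIMIT: usize = 16;

static LOGGED_DCS: Mutex<Option<HashSet<i32>>> = Mutex::new(None);

/// Logs each unknown DC index at most once, and stops tracking new indices
/// entirely once the distinct limit is reached.
fn log_unknown_dc(dc_idx: i32) {
    let mut guard = LOGGED_DCS.lock().unwrap();
    let seen = guard.get_or_insert_with(HashSet::new);
    if seen.len() >= UNKNOWN_DC_LOG_LIMIT || !seen.insert(dc_idx) {
        return; // already logged, or too many distinct indices
    }
    eprintln!("unknown datacenter index: {dc_idx}");
}
```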
feat(proxy): add authentication probe throttling
- Added a pre-authentication probe throttling mechanism to limit the rate of invalid TLS and MTProto handshake attempts.
- Introduced a backoff strategy for repeated failures and ensured that successful handshakes reset the failure count.
- Implemented tests to validate the behavior of the authentication probe under various conditions.
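
A hedged sketch of the throttling state machine, assuming exponential backoff with a cap; the thresholds are illustrative:

```rust
use std::time::{Duration, Instant};

/// Hypothetical per-IP probe state: failures drive an exponential backoff,
/// a successful handshake resets everything.
struct ProbeState {
    failures: u32,
    blocked_until: Option<Instant>,
}

impl ProbeState {
    fn record_failure(&mut self) {
        self.failures += 1;
        // 2s, 4s, 8s, ... doubling per failure, capped at 60s.
        let backoff = Duration::from_secs(1u64 << self.failures.min(6))
            .min(Duration::from_secs(60));
        self.blocked_until = Some(Instant::now() + backoff);
    }

    fn record_success(&mut self) {
        self.failures = 0;
        self.blocked_until = None;
    }

    fn is_throttled(&self) -> bool {
        self.blocked_until.is_some_and(|t| Instant::now() < t)
    }
}
```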
fix(proxy): ensure proper flushing of masked writes
- Added a flush operation after writing initial data to the mask writer so buffered bytes are actually delivered.
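
For illustration, the shape of the fix, assuming a Tokio `AsyncWrite` mask writer (names are illustrative):

```rust
use tokio::io::{AsyncWrite, AsyncWriteExt};

/// Without the trailing flush, the initial bytes can sit in an internal
/// buffer and never reach the mask backend.
async fn send_initial_data<W: AsyncWrite + Unpin>(
    mask_writer: &mut W,
    data: &[u8],
) -> std::io::Result<()> {
    mask_writer.write_all(data).await?;
    mask_writer.flush().await?; // the fix: force buffered bytes out
    Ok(())
}
```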
refactor(proxy): optimize desynchronization deduplication
- Replaced the Mutex-based deduplication structure with a DashMap for improved concurrency and performance.
- Implemented a bounded cache for deduplication to limit memory usage and prevent stale entries from persisting.
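
A hedged sketch of the bounded `DashMap` cache; the TTL-based eviction shown here is illustrative, not necessarily the policy used:

```rust
use dashmap::DashMap;
use std::time::{Duration, Instant};

/// Bounded dedup cache over sharded locks (sketch).
struct DesyncDedup {
    seen: DashMap<u64, Instant>,
    capacity: usize,
    ttl: Duration,
}

impl DesyncDedup {
    /// Returns true if this hash was not seen recently (i.e. should be handled).
    fn first_seen(&self, hash: u64) -> bool {
        if let Some(entry) = self.seen.get(&hash) {
            if entry.elapsed() < self.ttl {
                return false; // duplicate within the TTL window
            }
        }
        if self.seen.len() >= self.capacity {
            // Bounded: drop expired entries before inserting a new one.
            let ttl = self.ttl;
            self.seen.retain(|_, inserted| inserted.elapsed() < ttl);
        }
        self.seen.insert(hash, Instant::now());
        true
    }
}
```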
test(proxy): enhance security tests for middle relay and handshake
- Added comprehensive tests for the middle relay and handshake processes, including scenarios for deduplication and authentication probe behavior.
- Ensured that the tests cover edge cases and validate the expected behavior of the system under load.
- Enforce TLS record length constraints in client handling, rejecting records outside the range of 512 to 16,384 bytes (the upper bound being the RFC 8446 record-size limit); see the sketch after this list.
- Update security tests to validate behavior for oversized and undersized TLS records, ensuring they are correctly masked or rejected.
- Introduce new tests to verify the handling of TLS records in both generic and client handler pipelines.
- Refactor handshake logic to enforce mode restrictions based on transport type, preventing misuse of secure tags.
- Add tests for nonce generation and encryption consistency, ensuring correct behavior for different configurations.
- Improve masking tests to ensure proper logging and detection of client types, including SSH and unknown probes.
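
A minimal sketch of the record-length check referenced above; header parsing is simplified and the function name is illustrative:

```rust
/// TLS record length bounds used by the client pipeline: records below 512
/// bytes (per the change above) or above the RFC 8446 limit of 16,384 bytes
/// (2^14) of payload are rejected.
const TLS_RECORD_MIN: usize = 512;
const TLS_RECORD_MAX: usize = 16_384;

fn tls_record_len_ok(header: &[u8; 5]) -> bool {
    // Record header: type (1) + legacy version (2) + length (2, big-endian).
    let len = u16::from_be_bytes([header[3], header[4]]) as usize;
    (TLS_RECORD_MIN..=TLS_RECORD_MAX).contains(&len)
}
```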
- Forward valid-TLS/invalid-MTProto clients to the mask backend in both client paths
- Harden TLS validation against timing and clock edge cases
- Move replay tracking behind successful authentication to avoid cache pollution
- Tighten secret decoding and key-material handling paths
- Add dedicated security test modules for tls/client/handshake/masking
- Include a production-path regression test for ClientHandler fallback behavior
Previously handle_bad_client used stream.local_addr() (the ephemeral
socket to the mask backend) as the dst in the outgoing PROXY protocol
header. This is wrong: the dst should be the address telemt is listening
on, or the dst from the incoming PROXY protocol header if one was present.
- handle_bad_client now receives local_addr from the caller
- handle_client_stream resolves local_addr from PROXY protocol info.dst_addr
or falls back to a synthetic address based on config.server.port
- RunningClientHandler.do_handshake resolves local_addr from stream.local_addr()
overridden by PROXY protocol info.dst_addr when present, and passes it
down to handle_tls_client / handle_direct_client
- masking.rs uses the caller-supplied local_addr directly, eliminating the
stream.local_addr() call
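
The resolution order described above, condensed into a sketch with simplified types:

```rust
use std::net::{IpAddr, Ipv4Addr, SocketAddr};

/// Resolution order for the PROXY-header dst address:
/// 1. dst_addr from an incoming PROXY protocol header, if present
/// 2. the accepted socket's local_addr()
/// 3. a synthetic address built from the configured listen port
fn resolve_local_addr(
    proxy_dst: Option<SocketAddr>,
    stream_local: Option<SocketAddr>,
    server_port: u16,
) -> SocketAddr {
    proxy_dst
        .or(stream_local)
        .unwrap_or_else(|| SocketAddr::new(IpAddr::V4(Ipv4Addr::UNSPECIFIED), server_port))
}
```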
Adds a mask_proxy_protocol config option (0 = off, 1 = v1 text, 2 = v2 binary)
that sends a PROXY protocol header when connecting to mask_host. This lets
the backend see the real client IP address.
Particularly useful when the masking site (nginx/HAProxy) runs on the same
host as telemt and listens on a local port — without this, the backend loses
the original client IP entirely.
A PROXY protocol header is also sent during TLS emulation fetches so that
backends with proxy_protocol required don't reject the connection.
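
For reference, a sketch of the v1 text form of the header (mask_proxy_protocol = 1); the v2 binary form uses a fixed 12-byte signature and is omitted here:

```rust
use std::net::SocketAddr;

/// Builds a PROXY protocol v1 line: "PROXY TCP4 src_ip dst_ip src_port dst_port\r\n".
fn proxy_v1_header(src: SocketAddr, dst: SocketAddr) -> String {
    let family = if src.is_ipv4() { "TCP4" } else { "TCP6" };
    format!(
        "PROXY {} {} {} {} {}\r\n",
        family,
        src.ip(),
        dst.ip(),
        src.port(),
        dst.port()
    )
}
```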
- Remove unused imports across multiple modules
- Add #![allow(dead_code)] for public API items preserved for future use
- Add #![allow(deprecated)] for rand::Rng::gen_range usage
- Add #![allow(unused_assignments)] in main.rs
- Add #![allow(unreachable_code)] in network/stun.rs
- Prefix unused variables with underscore (_ip_tracker, _prefer_ipv6)
- Fix unused_must_use warning in tls_front/cache.rs
This ensures clean compilation without warnings while preserving
public API items that may be used in the future.
- Per-DC latency tracking in UpstreamState (array of 5 EMA instances, one per DC):
- Added `dc_latency: [LatencyEma; 5]` – per‑DC tracking instead of a single global EMA
- `effective_latency(dc_idx)` – returns DC‑specific latency, falls back to average if unavailable
- `select_upstream(dc_idx)` – now performs latency‑weighted selection: effective_weight = config_weight × (1000 / latency_ms)
- Example: two upstreams with equal config weight but latencies of 50ms and 200ms → selection probabilities become 80% / 20%
- `connect(target, dc_idx)` – extended signature, dc_idx used for upstream selection and per‑DC RTT recording
- All ping/health‑check operations now record RTT into `dc_latency[dc_zero_index]`
- `upstream_manager.connect(dc_addr)` changed to `upstream_manager.connect(dc_addr, Some(success.dc_idx))` – DC index now participates in upstream selection and per‑DC RTT logging
- `client.rs` – passes dc_idx when connecting to Telegram
- Summary: Upstream selection now accounts for per‑DC latency using the formula weight × (1000/ms). With multiple upstreams (e.g., direct + socks5), traffic automatically flows to the faster route for each specific DC. With a single upstream, the data is used for monitoring without affecting routing.
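
The weighting formula and the 50ms / 200ms example from the summary, worked as a runnable sketch (struct and field names are illustrative):

```rust
/// Latency-weighted upstream choice:
/// effective_weight = config_weight * (1000 / latency_ms)
struct Upstream {
    name: &'static str,
    config_weight: f64,
    latency_ms: f64, // per-DC effective latency from the EMA
}

fn effective_weight(u: &Upstream) -> f64 {
    u.config_weight * (1000.0 / u.latency_ms.max(1.0))
}

fn main() {
    // Equal config weights, latencies of 50ms and 200ms.
    let ups = [
        Upstream { name: "direct", config_weight: 1.0, latency_ms: 50.0 },
        Upstream { name: "socks5", config_weight: 1.0, latency_ms: 200.0 },
    ];
    let total: f64 = ups.iter().map(effective_weight).sum();
    for u in &ups {
        // Prints direct: 80%, socks5: 20%.
        println!("{}: {:.0}%", u.name, 100.0 * effective_weight(u) / total);
    }
}
```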
Co-Authored-By: brekotis <93345790+brekotis@users.noreply.github.com>