Compare commits


4 Commits

Author SHA1 Message Date
David Osipov 3db62aaea8
Merge 97d4a1c5c8 into 4f55d08c51 2026-03-17 21:40:54 +00:00
David Osipov 97d4a1c5c8
Refactor and enhance security in proxy and handshake modules
- Updated `direct_relay_security_tests.rs` to ensure sanitized paths are correctly validated against resolved paths.
- Added tests for symlink handling in `unknown_dc_log_path_revalidation` to prevent symlink target escape vulnerabilities.
- Modified `handshake.rs` to use a more robust hashing strategy for eviction offsets, improving the eviction logic in `auth_probe_record_failure_with_state`.
- Introduced new tests in `handshake_security_tests.rs` to validate eviction logic under various conditions, ensuring low fail streak entries are prioritized for eviction.
- Simplified `route_mode.rs` by removing unnecessary atomic mode tracking, streamlining the transition logic in `RouteRuntimeController`.
- Enhanced `route_mode_security_tests.rs` with comprehensive tests for mode transitions and their effects on session states, ensuring consistency under concurrent modifications.
- Cleaned up `emulator.rs` by removing unused ALPN extension handling, improving code clarity and maintainability.
2026-03-18 01:40:38 +04:00
David Osipov c2443e6f1a
Refactor auth probe eviction logic and improve performance
- Simplified eviction candidate selection in `auth_probe_record_failure_with_state` by tracking the oldest candidate directly.
- Enhanced the handling of stale entries to ensure newcomers are tracked even under capacity constraints.
- Added tests to verify behavior under stress conditions and ensure newcomers are correctly managed.
- Updated `decode_user_secrets` to prioritize preferred users based on SNI hints.
- Introduced new tests for TLS SNI handling and replay protection mechanisms.
- Improved deduplication hash stability and collision resistance in middle relay logic.
- Refined cutover handling in route mode to ensure consistent error messaging and session management.
2026-03-18 00:38:59 +04:00
David Osipov a7cffb547e
Implement idle timeout for masking relay and add corresponding tests
- Introduced `copy_with_idle_timeout` function to handle reading and writing with an idle timeout.
- Updated the proxy masking logic to use the new idle timeout function.
- Added tests to verify that idle relays are closed by the idle timeout before the global relay timeout.
- Ensured that connect refusal paths respect the masking budget and that responses followed by silence are cut off by the idle timeout.
- Added tests for adversarial scenarios where clients may attempt to drip-feed data beyond the idle timeout.
2026-03-17 22:48:13 +04:00
20 changed files with 5226 additions and 183 deletions


@@ -390,6 +390,12 @@ you MUST explain why existing invariants remain valid.
- Do not modify existing tests unless the task explicitly requires it.
- Do not weaken assertions.
- Preserve determinism in testable components.
- Bug-first forces the discipline of proving you understand a bug before you fix it. Tests written after a fix almost always pass trivially and catch nothing new.
- Invariants over scenarios is the core shift. The route_mode table alone would have caught both BUG-1 and BUG-2 before they were written — "snapshot equals watch state after any transition burst" is a two-line property test that fails immediately on the current diverged-atomics code.
- Differential/model testing catches logic drift over time.
- Scheduler pressure is specifically aimed at the concurrent state bugs that keep reappearing. A single-threaded happy-path test of set_mode will never find subtle bugs; 10,000 concurrent calls will find them on the first run.
- Mutation gate answers your original complaint directly. It measures test power. If you can remove a bounds check and nothing breaks, the suite isn't covering that branch yet — it just says so explicitly.
- Flagging dead parameters is a code-smell rule.
### 15. Security Constraints

Cargo.lock generated

@@ -425,6 +425,32 @@ dependencies = [
"cipher",
]
[[package]]
name = "curve25519-dalek"
version = "4.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "97fb8b7c4503de7d6ae7b42ab72a5a59857b4c937ec27a3d4539dba95b5ab2be"
dependencies = [
"cfg-if",
"cpufeatures",
"curve25519-dalek-derive",
"fiat-crypto",
"rustc_version",
"subtle",
"zeroize",
]
[[package]]
name = "curve25519-dalek-derive"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f46882e17999c6cc590af592290432be3bce0428cb0d5f8b6715e4dc7b383eb3"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.114",
]
[[package]]
name = "dashmap"
version = "5.5.3"
@@ -517,6 +543,12 @@ version = "2.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "37909eebbb50d72f9059c3b6d82c0463f2ff062c9e95845c43a6c9c0355411be"
[[package]]
name = "fiat-crypto"
version = "0.2.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "28dea519a9695b9977216879a3ebfddf92f1c08c05d984f8996aecd6ecdc811d"
[[package]]
name = "filetime"
version = "0.2.27"
@@ -1609,7 +1641,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6db2770f06117d490610c7488547d543617b21bfa07796d7a12f6f1bd53850d1"
dependencies = [
"rand_chacha",
"rand_core",
"rand_core 0.9.5",
]
[[package]]
@@ -1619,9 +1651,15 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d3022b5f1df60f26e1ffddd6c66e8aa15de382ae63b3a0c1bfc0e4d3e3f325cb"
dependencies = [
"ppv-lite86",
"rand_core",
"rand_core 0.9.5",
]
[[package]]
name = "rand_core"
version = "0.6.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ec0be4795e2f6a28069bec0b5ff3e2ac9bafc99e6a9a7dc3547996c5c816922c"
[[package]]
name = "rand_core"
version = "0.9.5"
@@ -1637,7 +1675,7 @@ version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "513962919efc330f829edb2535844d1b912b0fbe2ca165d613e4e8788bb05a5a"
dependencies = [
"rand_core",
"rand_core 0.9.5",
]
[[package]]
@@ -2145,6 +2183,7 @@ dependencies = [
"tracing-subscriber",
"url",
"webpki-roots 0.26.11",
"x25519-dalek",
"x509-parser",
"zeroize",
]
@@ -3144,6 +3183,18 @@ version = "0.6.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9edde0db4769d2dc68579893f2306b26c6ecfbe0ef499b013d731b7b9247e0b9"
[[package]]
name = "x25519-dalek"
version = "2.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c7e468321c81fb07fa7f4c636c3972b9100f0346e5b6a9f2bd0603a52f7ed277"
dependencies = [
"curve25519-dalek",
"rand_core 0.6.4",
"serde",
"zeroize",
]
[[package]]
name = "x509-parser"
version = "0.15.1"


@@ -52,6 +52,7 @@ regex = "1.11"
crossbeam-queue = "0.3"
num-bigint = "0.4"
num-traits = "0.2"
x25519-dalek = "2"
anyhow = "1.0"
# HTTP


@@ -239,7 +239,7 @@ tls_full_cert_ttl_secs = 90
[access]
replay_check_len = 65536
replay_window_secs = 1800
replay_window_secs = 120
ignore_time_skew = false
[access.users]


@@ -73,7 +73,9 @@ pub(crate) fn default_replay_check_len() -> usize {
}
pub(crate) fn default_replay_window_secs() -> u64 {
1800
// Keep replay cache TTL tight by default to reduce replay surface.
// Deployments with higher RTT or longer reconnect jitter can override this in config.
120
}
pub(crate) fn default_handshake_timeout() -> u64 {
@@ -456,11 +458,11 @@ pub(crate) fn default_tls_full_cert_ttl_secs() -> u64 {
}
pub(crate) fn default_server_hello_delay_min_ms() -> u64 {
0
8
}
pub(crate) fn default_server_hello_delay_max_ms() -> u64 {
0
24
}
pub(crate) fn default_alpn_enforce() -> bool {


@@ -11,9 +11,8 @@ use crate::crypto::{sha256_hmac, SecureRandom};
use crate::error::ProxyError;
use super::constants::*;
use std::time::{SystemTime, UNIX_EPOCH};
use num_bigint::BigUint;
use num_traits::One;
use subtle::ConstantTimeEq;
use x25519_dalek::{X25519_BASEPOINT_BYTES, x25519};
// ============= Public Constants =============
@@ -27,8 +26,12 @@ pub const TLS_DIGEST_POS: usize = 11;
pub const TLS_DIGEST_HALF_LEN: usize = 16;
/// Time skew limits for anti-replay (in seconds)
pub const TIME_SKEW_MIN: i64 = -20 * 60; // 20 minutes before
pub const TIME_SKEW_MAX: i64 = 10 * 60; // 10 minutes after
///
/// The default window is intentionally narrow to reduce replay acceptance.
/// Operators with known clock-drifted clients should tune deployment config
/// (for example replay-window policy) to match their environment.
pub const TIME_SKEW_MIN: i64 = -2 * 60; // 2 minutes before
pub const TIME_SKEW_MAX: i64 = 2 * 60; // 2 minutes after
/// Maximum accepted boot-time timestamp (seconds) before skew checks are enforced.
pub const BOOT_TIME_MAX_SECS: u32 = 7 * 24 * 60 * 60;
@@ -117,27 +120,6 @@ impl TlsExtensionBuilder {
self
}
/// Add ALPN extension with a single selected protocol.
fn add_alpn(&mut self, proto: &[u8]) -> &mut Self {
// Extension type: ALPN (0x0010)
self.extensions.extend_from_slice(&extension_type::ALPN.to_be_bytes());
// ALPN extension format:
// extension_data length (2 bytes)
// protocols length (2 bytes)
// protocol name length (1 byte)
// protocol name bytes
let proto_len = proto.len() as u8;
let list_len: u16 = 1 + u16::from(proto_len);
let ext_len: u16 = 2 + list_len;
self.extensions.extend_from_slice(&ext_len.to_be_bytes());
self.extensions.extend_from_slice(&list_len.to_be_bytes());
self.extensions.push(proto_len);
self.extensions.extend_from_slice(proto);
self
}
/// Build final extensions with length prefix
fn build(self) -> Vec<u8> {
let mut result = Vec::with_capacity(2 + self.extensions.len());
@@ -173,8 +155,6 @@ struct ServerHelloBuilder {
compression: u8,
/// Extensions
extensions: TlsExtensionBuilder,
/// Selected ALPN protocol (if any)
alpn: Option<Vec<u8>>,
}
impl ServerHelloBuilder {
@@ -185,7 +165,6 @@ impl ServerHelloBuilder {
cipher_suite: cipher_suite::TLS_AES_128_GCM_SHA256,
compression: 0x00,
extensions: TlsExtensionBuilder::new(),
alpn: None,
}
}
@@ -200,18 +179,9 @@ impl ServerHelloBuilder {
self
}
fn with_alpn(mut self, proto: Option<Vec<u8>>) -> Self {
self.alpn = proto;
self
}
/// Build ServerHello message (without record header)
fn build_message(&self) -> Vec<u8> {
let mut ext_builder = self.extensions.clone();
if let Some(ref alpn) = self.alpn {
ext_builder.add_alpn(alpn);
}
let extensions = ext_builder.extensions.clone();
let extensions = self.extensions.extensions.clone();
let extensions_len = extensions.len() as u16;
// Calculate total length
@@ -316,7 +286,14 @@ pub fn validate_tls_handshake_with_replay_window(
};
let replay_window_u32 = u32::try_from(replay_window_secs).unwrap_or(u32::MAX);
let boot_time_cap_secs = BOOT_TIME_MAX_SECS.min(replay_window_u32);
// Boot-time bypass and ignore_time_skew serve different compatibility paths.
// When skew checks are disabled, force boot-time cap to zero to prevent
// accidental future coupling of boot-time logic into the ignore-skew path.
let boot_time_cap_secs = if ignore_time_skew {
0
} else {
BOOT_TIME_MAX_SECS.min(replay_window_u32)
};
validate_tls_handshake_at_time_with_boot_cap(
handshake,
@@ -369,6 +346,9 @@ fn validate_tls_handshake_at_time_with_boot_cap(
// Extract session ID
let session_id_len_pos = TLS_DIGEST_POS + TLS_DIGEST_LEN;
let session_id_len = handshake.get(session_id_len_pos).copied()? as usize;
if session_id_len > 32 {
return None;
}
let session_id_start = session_id_len_pos + 1;
if handshake.len() < session_id_start + session_id_len {
@@ -411,7 +391,7 @@ fn validate_tls_handshake_at_time_with_boot_cap(
if !ignore_time_skew {
// Allow very small timestamps (boot time instead of unix time)
// This is a quirk in some clients that use uptime instead of real time
let is_boot_time = timestamp < boot_time_cap_secs;
let is_boot_time = boot_time_cap_secs > 0 && timestamp < boot_time_cap_secs;
if !is_boot_time {
let time_diff = now - i64::from(timestamp);
if !(TIME_SKEW_MIN..=TIME_SKEW_MAX).contains(&time_diff) {
@@ -433,27 +413,14 @@ fn validate_tls_handshake_at_time_with_boot_cap(
})
}
fn curve25519_prime() -> BigUint {
(BigUint::one() << 255) - BigUint::from(19u32)
}
/// Generate a fake X25519 public key for TLS
///
/// Produces a quadratic residue mod p = 2^255 - 19 by computing n² mod p,
/// which matches Python/C behavior and avoids DPI fingerprinting.
/// Uses RFC 7748 X25519 scalar multiplication over the canonical basepoint,
/// yielding distribution-consistent public keys for anti-fingerprinting.
pub fn gen_fake_x25519_key(rng: &SecureRandom) -> [u8; 32] {
let mut n_bytes = [0u8; 32];
n_bytes.copy_from_slice(&rng.bytes(32));
let n = BigUint::from_bytes_le(&n_bytes);
let p = curve25519_prime();
let pk = (&n * &n) % &p;
let mut out = pk.to_bytes_le();
out.resize(32, 0);
let mut result = [0u8; 32];
result.copy_from_slice(&out[..32]);
result
let mut scalar = [0u8; 32];
scalar.copy_from_slice(&rng.bytes(32));
x25519(scalar, X25519_BASEPOINT_BYTES)
}
/// Build TLS ServerHello response
@@ -470,7 +437,7 @@ pub fn build_server_hello(
session_id: &[u8],
fake_cert_len: usize,
rng: &SecureRandom,
alpn: Option<Vec<u8>>,
_alpn: Option<Vec<u8>>,
new_session_tickets: u8,
) -> Vec<u8> {
const MIN_APP_DATA: usize = 64;
@@ -482,7 +449,6 @@
let server_hello = ServerHelloBuilder::new(session_id.to_vec())
.with_x25519_key(&x25519_key)
.with_tls13_version()
.with_alpn(alpn)
.build_record();
// Build Change Cipher Spec record
@@ -705,10 +671,10 @@ pub fn is_tls_handshake(first_bytes: &[u8]) -> bool {
return false;
}
// TLS record header: 0x16 (handshake) 0x03 0x01 (TLS 1.0)
// TLS ClientHello commonly uses legacy record versions 0x0301 or 0x0303.
first_bytes[0] == TLS_RECORD_HANDSHAKE
&& first_bytes[1] == 0x03
&& first_bytes[2] == 0x01
&& (first_bytes[2] == 0x01 || first_bytes[2] == 0x03)
}
/// Parse TLS record header, returns (record_type, length)


@@ -1,5 +1,8 @@
use super::*;
use crate::crypto::sha256_hmac;
use crate::tls_front::emulator::build_emulated_server_hello;
use crate::tls_front::types::{CachedTlsData, ParsedServerHello, TlsBehaviorProfile, TlsProfileSource};
use std::time::SystemTime;
/// Build a TLS-handshake-like buffer that contains a valid HMAC digest
/// for the given `secret` and `timestamp`.
@@ -369,16 +372,16 @@ fn one_byte_session_id_validates_and_is_preserved() {
}
#[test]
fn max_session_id_len_255_with_valid_digest_is_accepted() {
fn max_session_id_len_255_with_valid_digest_is_rejected_by_rfc_cap() {
let secret = b"sid_len_255_test";
let session_id = vec![0xCCu8; 255];
let handshake = make_valid_tls_handshake_with_session_id(secret, 0, &session_id);
let secrets = vec![("u".to_string(), secret.to_vec())];
let result = validate_tls_handshake(&handshake, &secrets, true)
.expect("session_id_len=255 with valid digest must validate");
assert_eq!(result.session_id.len(), 255);
assert_eq!(result.session_id, session_id);
assert!(
validate_tls_handshake(&handshake, &secrets, true).is_none(),
"legacy_session_id length > 32 must be rejected even with valid digest"
);
}
// ------------------------------------------------------------------
@@ -731,6 +734,246 @@ fn replay_window_cap_still_allows_small_boot_timestamp() {
);
}
#[test]
fn ignore_time_skew_explicitly_decouples_from_boot_time_cap() {
let secret = b"ignore_skew_boot_cap_decouple_test";
let ts: u32 = 1;
let h = make_valid_tls_handshake(secret, ts);
let secrets = vec![("u".to_string(), secret.to_vec())];
let cap_zero = validate_tls_handshake_at_time_with_boot_cap(&h, &secrets, true, 0, 0);
let cap_nonzero =
validate_tls_handshake_at_time_with_boot_cap(&h, &secrets, true, 0, BOOT_TIME_MAX_SECS);
assert!(cap_zero.is_some(), "ignore_time_skew=true must accept valid HMAC");
assert!(
cap_nonzero.is_some(),
"ignore_time_skew path must not depend on boot-time cap"
);
let a = cap_zero.unwrap();
let b = cap_nonzero.unwrap();
assert_eq!(a.user, b.user);
assert_eq!(a.timestamp, b.timestamp);
}
#[test]
fn adversarial_small_boot_timestamp_matrix_rejected_when_boot_cap_forced_zero() {
let secret = b"boot_cap_zero_matrix_test";
let secrets = vec![("u".to_string(), secret.to_vec())];
let now: i64 = 1_700_000_000;
for ts in 0u32..1024u32 {
let h = make_valid_tls_handshake(secret, ts);
let result = validate_tls_handshake_at_time_with_boot_cap(&h, &secrets, false, now, 0);
assert!(
result.is_none(),
"boot cap=0 must reject timestamp {ts} when skew checks are active"
);
}
}
#[test]
fn light_fuzz_boot_cap_zero_rejects_small_timestamp_space() {
let secret = b"boot_cap_zero_fuzz_test";
let secrets = vec![("u".to_string(), secret.to_vec())];
let now: i64 = 1_700_000_000;
let mut s: u64 = 0x9E37_79B9_7F4A_7C15;
for _ in 0..4096 {
s ^= s << 7;
s ^= s >> 9;
s ^= s << 8;
let ts = (s as u32) % 2048;
let h = make_valid_tls_handshake(secret, ts);
let result = validate_tls_handshake_at_time_with_boot_cap(&h, &secrets, false, now, 0);
assert!(
result.is_none(),
"fuzzed boot-range timestamp {ts} must be rejected when cap=0"
);
}
}
#[test]
fn stress_boot_cap_zero_rejection_is_deterministic_under_high_iteration_count() {
let secret = b"boot_cap_zero_stress_test";
let secrets = vec![("u".to_string(), secret.to_vec())];
let now: i64 = 1_700_000_000;
for i in 0u32..20_000u32 {
let ts = i % 4096;
let h = make_valid_tls_handshake(secret, ts);
let result = validate_tls_handshake_at_time_with_boot_cap(&h, &secrets, false, now, 0);
assert!(
result.is_none(),
"iteration {i}: timestamp {ts} must be rejected with cap=0"
);
}
}
#[test]
fn replay_window_one_allows_only_zero_timestamp_boot_bypass() {
let secret = b"replay_window_one_boot_test";
let secrets = vec![("u".to_string(), secret.to_vec())];
let ts0 = make_valid_tls_handshake(secret, 0);
let ts1 = make_valid_tls_handshake(secret, 1);
assert!(
validate_tls_handshake_with_replay_window(&ts0, &secrets, false, 1).is_some(),
"replay_window=1 must allow timestamp 0 via boot-time compatibility"
);
assert!(
validate_tls_handshake_with_replay_window(&ts1, &secrets, false, 1).is_none(),
"replay_window=1 must reject timestamp 1 on normal wall-clock systems"
);
}
#[test]
fn replay_window_two_allows_ts0_ts1_but_rejects_ts2() {
let secret = b"replay_window_two_boot_test";
let secrets = vec![("u".to_string(), secret.to_vec())];
let ts0 = make_valid_tls_handshake(secret, 0);
let ts1 = make_valid_tls_handshake(secret, 1);
let ts2 = make_valid_tls_handshake(secret, 2);
assert!(validate_tls_handshake_with_replay_window(&ts0, &secrets, false, 2).is_some());
assert!(validate_tls_handshake_with_replay_window(&ts1, &secrets, false, 2).is_some());
assert!(
validate_tls_handshake_with_replay_window(&ts2, &secrets, false, 2).is_none(),
"timestamp equal to replay-window cap must not use boot-time bypass"
);
}
#[test]
fn adversarial_skew_boundary_matrix_accepts_only_inclusive_window_when_boot_disabled() {
let secret = b"skew_boundary_matrix_test";
let secrets = vec![("u".to_string(), secret.to_vec())];
let now: i64 = 1_700_000_000;
for offset in -1500i64..=1500i64 {
let ts_i64 = now - offset;
let ts = u32::try_from(ts_i64).expect("timestamp must fit u32 for test matrix");
let h = make_valid_tls_handshake(secret, ts);
let accepted = validate_tls_handshake_at_time_with_boot_cap(&h, &secrets, false, now, 0)
.is_some();
let expected = (TIME_SKEW_MIN..=TIME_SKEW_MAX).contains(&offset);
assert_eq!(
accepted, expected,
"offset {offset} must match inclusive skew window when boot bypass is disabled"
);
}
}
#[test]
fn light_fuzz_skew_window_rejects_outside_range_when_boot_disabled() {
let secret = b"skew_outside_fuzz_test";
let secrets = vec![("u".to_string(), secret.to_vec())];
let now: i64 = 1_700_000_000;
let mut s: u64 = 0x0123_4567_89AB_CDEF;
for _ in 0..4096 {
s ^= s << 7;
s ^= s >> 9;
s ^= s << 8;
let magnitude = 1300i64 + ((s % 2000u64) as i64);
let sign = if (s & 1) == 0 { 1i64 } else { -1i64 };
let offset = sign * magnitude;
let ts_i64 = now - offset;
let ts = u32::try_from(ts_i64).expect("timestamp must fit u32 for fuzz test");
let h = make_valid_tls_handshake(secret, ts);
let accepted = validate_tls_handshake_at_time_with_boot_cap(&h, &secrets, false, now, 0)
.is_some();
assert!(
!accepted,
"offset {offset} must be rejected outside strict skew window"
);
}
}
#[test]
fn stress_boot_disabled_validation_matches_time_diff_oracle() {
let secret = b"boot_disabled_oracle_stress_test";
let secrets = vec![("u".to_string(), secret.to_vec())];
let now: i64 = 1_700_000_000;
let mut s: u64 = 0xBADC_0FFE_EE11_2233;
for _ in 0..25_000 {
s ^= s << 7;
s ^= s >> 9;
s ^= s << 8;
let ts = s as u32;
let h = make_valid_tls_handshake(secret, ts);
let accepted = validate_tls_handshake_at_time_with_boot_cap(&h, &secrets, false, now, 0)
.is_some();
let time_diff = now - i64::from(ts);
let expected = (TIME_SKEW_MIN..=TIME_SKEW_MAX).contains(&time_diff);
assert_eq!(
accepted, expected,
"boot-disabled validation must match pure time-diff oracle"
);
}
}
#[test]
fn integration_large_user_list_with_boot_disabled_finds_only_matching_user() {
let now: i64 = 1_700_000_000;
let target_secret = b"target_user_secret";
let target_ts = (now - 1) as u32;
let handshake = make_valid_tls_handshake(target_secret, target_ts);
let mut secrets = Vec::new();
for i in 0..512u32 {
secrets.push((format!("noise-{i}"), format!("noise-secret-{i}").into_bytes()));
}
secrets.push(("target-user".to_string(), target_secret.to_vec()));
let result = validate_tls_handshake_at_time_with_boot_cap(&handshake, &secrets, false, now, 0)
.expect("matching user should validate within strict skew window");
assert_eq!(result.user, "target-user");
}
#[test]
fn light_fuzz_ignore_time_skew_accepts_wide_timestamp_range_with_valid_hmac() {
let secret = b"ignore_skew_fuzz_accept_test";
let secrets = vec![("u".to_string(), secret.to_vec())];
let mut s: u64 = 0xC0FF_EE11_2233_4455;
for _ in 0..2048 {
s ^= s << 7;
s ^= s >> 9;
s ^= s << 8;
let ts = s as u32;
let h = make_valid_tls_handshake(secret, ts);
let result = validate_tls_handshake_with_replay_window(&h, &secrets, true, 60);
assert!(
result.is_some(),
"ignore_time_skew=true must accept valid HMAC for arbitrary timestamp"
);
}
}
#[test]
fn light_fuzz_small_replay_window_rejects_far_timestamps_when_skew_enabled() {
let secret = b"replay_window_reject_fuzz_test";
let secrets = vec![("u".to_string(), secret.to_vec())];
for ts in 300u32..=1323u32 {
let h = make_valid_tls_handshake(secret, ts);
let result = validate_tls_handshake_at_time_with_boot_cap(&h, &secrets, false, 0, 300);
assert!(
result.is_none(),
"with skew checks enabled and boot cap=300, timestamp >=300 at now=0 must be rejected"
);
}
}
// ------------------------------------------------------------------
// Extreme timestamp values
// ------------------------------------------------------------------
@@ -897,7 +1140,9 @@ fn first_matching_user_wins_over_later_duplicate_secret() {
#[test]
fn test_is_tls_handshake() {
assert!(is_tls_handshake(&[0x16, 0x03, 0x01]));
assert!(is_tls_handshake(&[0x16, 0x03, 0x03]));
assert!(is_tls_handshake(&[0x16, 0x03, 0x01, 0x02, 0x00]));
assert!(is_tls_handshake(&[0x16, 0x03, 0x03, 0x02, 0x00]));
assert!(!is_tls_handshake(&[0x17, 0x03, 0x01]));
assert!(!is_tls_handshake(&[0x16, 0x03, 0x02]));
assert!(!is_tls_handshake(&[0x16, 0x03]));
@@ -945,17 +1190,158 @@ fn test_gen_fake_x25519_key() {
}
#[test]
fn test_fake_x25519_key_is_quadratic_residue() {
use num_bigint::BigUint;
use num_traits::One;
fn test_fake_x25519_key_is_nonzero_and_varies() {
let rng = crate::crypto::SecureRandom::new();
let mut unique = std::collections::HashSet::new();
let mut saw_non_zero = false;
for _ in 0..64 {
let key = gen_fake_x25519_key(&rng);
let p = curve25519_prime();
let k_num = BigUint::from_bytes_le(&key);
let exponent = (&p - BigUint::one()) >> 1;
let legendre = k_num.modpow(&exponent, &p);
assert_eq!(legendre, BigUint::one());
if key != [0u8; 32] {
saw_non_zero = true;
}
unique.insert(key);
}
assert!(
saw_non_zero,
"generated X25519 public keys must not collapse to all-zero output"
);
assert!(
unique.len() > 1,
"generated X25519 public keys must vary across invocations"
);
}
#[test]
fn validate_tls_handshake_rejects_session_id_longer_than_rfc_cap() {
let secret = b"session_id_cap_secret";
let oversized_sid = vec![0x42u8; 33];
let handshake = make_valid_tls_handshake_with_session_id(secret, 0, &oversized_sid);
let secrets = vec![("u".to_string(), secret.to_vec())];
assert!(
validate_tls_handshake(&handshake, &secrets, true).is_none(),
"legacy_session_id length > 32 must be rejected"
);
}
fn server_hello_extension_types(record: &[u8]) -> Vec<u16> {
if record.len() < 9 || record[0] != TLS_RECORD_HANDSHAKE || record[5] != 0x02 {
return Vec::new();
}
let record_len = u16::from_be_bytes([record[3], record[4]]) as usize;
if record.len() < 5 + record_len {
return Vec::new();
}
let hs_len = u32::from_be_bytes([0, record[6], record[7], record[8]]) as usize;
let hs_start = 5;
let hs_end = hs_start + 4 + hs_len;
if hs_end > record.len() {
return Vec::new();
}
let mut pos = hs_start + 4 + 2 + 32;
if pos >= hs_end {
return Vec::new();
}
let sid_len = record[pos] as usize;
pos += 1 + sid_len;
if pos + 2 + 1 + 2 > hs_end {
return Vec::new();
}
pos += 2 + 1;
let ext_len = u16::from_be_bytes([record[pos], record[pos + 1]]) as usize;
pos += 2;
let ext_end = pos + ext_len;
if ext_end > hs_end {
return Vec::new();
}
let mut out = Vec::new();
while pos + 4 <= ext_end {
let etype = u16::from_be_bytes([record[pos], record[pos + 1]]);
let elen = u16::from_be_bytes([record[pos + 2], record[pos + 3]]) as usize;
pos += 4;
if pos + elen > ext_end {
break;
}
out.push(etype);
pos += elen;
}
out
}
#[test]
fn build_server_hello_never_places_alpn_in_server_hello_extensions() {
let secret = b"alpn_sh_forbidden";
let client_digest = [0x11u8; 32];
let session_id = vec![0xAA; 32];
let rng = crate::crypto::SecureRandom::new();
let response = build_server_hello(
secret,
&client_digest,
&session_id,
1024,
&rng,
Some(b"h2".to_vec()),
0,
);
let exts = server_hello_extension_types(&response);
assert!(
!exts.contains(&0x0010),
"ALPN extension must not appear in ServerHello"
);
}
#[test]
fn emulated_server_hello_never_places_alpn_in_server_hello_extensions() {
let secret = b"alpn_emulated_forbidden";
let client_digest = [0x22u8; 32];
let session_id = vec![0xAB; 32];
let rng = crate::crypto::SecureRandom::new();
let cached = CachedTlsData {
server_hello_template: ParsedServerHello {
version: TLS_VERSION,
random: [0u8; 32],
session_id: Vec::new(),
cipher_suite: [0x13, 0x01],
compression: 0,
extensions: Vec::new(),
},
cert_info: None,
cert_payload: None,
app_data_records_sizes: vec![1024],
total_app_data_len: 1024,
behavior_profile: TlsBehaviorProfile {
change_cipher_spec_count: 1,
app_data_record_sizes: vec![1024],
ticket_record_sizes: Vec::new(),
source: TlsProfileSource::Default,
},
fetched_at: SystemTime::now(),
domain: "example.com".to_string(),
};
let response = build_emulated_server_hello(
secret,
&client_digest,
&session_id,
&cached,
false,
&rng,
Some(b"h2".to_vec()),
0,
);
let exts = server_hello_extension_types(&response);
assert!(
!exts.contains(&0x0010),
"ALPN extension must not appear in emulated ServerHello"
);
}
#[test]
@@ -1394,3 +1780,191 @@ fn server_hello_application_data_payload_varies_across_runs() {
"ApplicationData payload should vary across runs to reduce fingerprintability"
);
}
#[test]
fn replay_window_zero_disables_boot_bypass_for_any_nonzero_timestamp() {
let secret = b"window_zero_boot_bypass_test";
let secrets = vec![("u".to_string(), secret.to_vec())];
let ts1 = make_valid_tls_handshake(secret, 1);
assert!(
validate_tls_handshake_with_replay_window(&ts1, &secrets, false, 0).is_none(),
"replay_window_secs=0 must reject nonzero timestamps even in boot-time range"
);
let ts0 = make_valid_tls_handshake(secret, 0);
assert!(
validate_tls_handshake_with_replay_window(&ts0, &secrets, false, 0).is_none(),
"replay_window_secs=0 enforces strict skew check and rejects timestamp=0 on normal wall-clock systems"
);
}
#[test]
fn large_replay_window_does_not_expand_time_skew_acceptance() {
let secret = b"large_replay_window_skew_bound_test";
let secrets = vec![("u".to_string(), secret.to_vec())];
let now: i64 = 1_700_000_000;
let ts_far_past = (now - 600) as u32;
let valid = make_valid_tls_handshake(secret, ts_far_past);
assert!(
validate_tls_handshake_with_replay_window(&valid, &secrets, false, 86_400).is_none(),
"large replay window must not relax strict skew check once boot-time bypass is not in play"
);
}
#[test]
fn parse_tls_record_header_accepts_tls_version_constant() {
let header = [TLS_RECORD_HANDSHAKE, TLS_VERSION[0], TLS_VERSION[1], 0x00, 0x2A];
let parsed = parse_tls_record_header(&header).expect("TLS_VERSION header should be accepted");
assert_eq!(parsed.0, TLS_RECORD_HANDSHAKE);
assert_eq!(parsed.1, 42);
}
#[test]
fn server_hello_clamps_fake_cert_len_lower_bound() {
let secret = b"fake_cert_lower_bound_test";
let client_digest = [0x11u8; TLS_DIGEST_LEN];
let session_id = vec![0x77; 32];
let rng = crate::crypto::SecureRandom::new();
let response = build_server_hello(secret, &client_digest, &session_id, 1, &rng, None, 0);
let sh_len = u16::from_be_bytes([response[3], response[4]]) as usize;
let ccs_pos = 5 + sh_len;
let ccs_len = u16::from_be_bytes([response[ccs_pos + 3], response[ccs_pos + 4]]) as usize;
let app_pos = ccs_pos + 5 + ccs_len;
let app_len = u16::from_be_bytes([response[app_pos + 3], response[app_pos + 4]]) as usize;
assert_eq!(response[app_pos], TLS_RECORD_APPLICATION);
assert_eq!(app_len, 64, "fake cert payload must be clamped to minimum 64 bytes");
}
#[test]
fn server_hello_clamps_fake_cert_len_upper_bound() {
let secret = b"fake_cert_upper_bound_test";
let client_digest = [0x22u8; TLS_DIGEST_LEN];
let session_id = vec![0x66; 32];
let rng = crate::crypto::SecureRandom::new();
let response = build_server_hello(secret, &client_digest, &session_id, 65_535, &rng, None, 0);
let sh_len = u16::from_be_bytes([response[3], response[4]]) as usize;
let ccs_pos = 5 + sh_len;
let ccs_len = u16::from_be_bytes([response[ccs_pos + 3], response[ccs_pos + 4]]) as usize;
let app_pos = ccs_pos + 5 + ccs_len;
let app_len = u16::from_be_bytes([response[app_pos + 3], response[app_pos + 4]]) as usize;
assert_eq!(response[app_pos], TLS_RECORD_APPLICATION);
assert_eq!(app_len, 16_640, "fake cert payload must be clamped to TLS record max bound");
}
#[test]
fn server_hello_new_session_ticket_count_matches_configuration() {
let secret = b"ticket_count_surface_test";
let client_digest = [0x33u8; TLS_DIGEST_LEN];
let session_id = vec![0x55; 32];
let rng = crate::crypto::SecureRandom::new();
let tickets: u8 = 3;
let response = build_server_hello(secret, &client_digest, &session_id, 1024, &rng, None, tickets);
let mut pos = 0usize;
let mut app_records = 0usize;
while pos + 5 <= response.len() {
let rtype = response[pos];
let rlen = u16::from_be_bytes([response[pos + 3], response[pos + 4]]) as usize;
let next = pos + 5 + rlen;
assert!(next <= response.len(), "TLS record must stay inside response bounds");
if rtype == TLS_RECORD_APPLICATION {
app_records += 1;
}
pos = next;
}
assert_eq!(
app_records,
1 + tickets as usize,
"response must contain one main application record plus configured ticket-like tail records"
);
}
#[test]
fn exhaustive_tls_minor_version_classification_matches_policy() {
for minor in 0u8..=u8::MAX {
let first = [TLS_RECORD_HANDSHAKE, 0x03, minor];
let expected = minor == 0x01 || minor == 0x03;
assert_eq!(
is_tls_handshake(&first),
expected,
"minor version {minor:#04x} classification mismatch"
);
}
}
#[test]
fn light_fuzz_tls_header_classifier_and_parser_policy_consistency() {
// Deterministic xorshift state keeps this fuzz test reproducible.
let mut s: u64 = 0x9E37_79B9_AA95_5A5D;
for _ in 0..10_000 {
s ^= s << 7;
s ^= s >> 9;
s ^= s << 8;
let header = [
(s & 0xff) as u8,
((s >> 8) & 0xff) as u8,
((s >> 16) & 0xff) as u8,
((s >> 24) & 0xff) as u8,
((s >> 32) & 0xff) as u8,
];
let classified = is_tls_handshake(&header[..3]);
let expected_classified = header[0] == TLS_RECORD_HANDSHAKE
&& header[1] == 0x03
&& (header[2] == 0x01 || header[2] == 0x03);
assert_eq!(
classified,
expected_classified,
"classifier policy mismatch for header {header:02x?}"
);
let parsed = parse_tls_record_header(&header);
let expected_parsed = header[1] == 0x03 && (header[2] == 0x01 || header[2] == TLS_VERSION[1]);
assert_eq!(
parsed.is_some(),
expected_parsed,
"parser policy mismatch for header {header:02x?}"
);
}
}
#[test]
fn stress_random_noise_handshakes_never_authenticate() {
let secret = b"stress_noise_secret";
let secrets = vec![("noise-user".to_string(), secret.to_vec())];
// Deterministic xorshift state keeps this stress test reproducible.
let mut s: u64 = 0xD1B5_4A32_9C6E_77F1;
for _ in 0..5_000 {
s ^= s << 7;
s ^= s >> 9;
s ^= s << 8;
let len = 1 + ((s as usize) % 196);
let mut buf = vec![0u8; len];
for b in &mut buf {
s ^= s << 7;
s ^= s >> 9;
s ^= s << 8;
*b = (s & 0xff) as u8;
}
assert!(
validate_tls_handshake(&buf, &secrets, true).is_none(),
"random noise must never authenticate"
);
}
}


@@ -51,9 +51,9 @@ impl UserConnectionReservation {
if !self.active {
return;
}
- self.ip_tracker.remove_ip(&self.user, self.ip).await;
self.active = false;
self.stats.decrement_user_curr_connects(&self.user);
+ self.ip_tracker.remove_ip(&self.user, self.ip).await;
}
}
@@ -111,7 +111,19 @@ use crate::proxy::middle_relay::handle_via_middle_proxy;
use crate::proxy::route_mode::{RelayRouteMode, RouteRuntimeController};
fn beobachten_ttl(config: &ProxyConfig) -> Duration {
- Duration::from_secs(config.general.beobachten_minutes.saturating_mul(60))
+ let minutes = config.general.beobachten_minutes;
+ if minutes == 0 {
+ static BEOBACHTEN_ZERO_MINUTES_WARNED: OnceLock<AtomicBool> = OnceLock::new();
+ let warned = BEOBACHTEN_ZERO_MINUTES_WARNED.get_or_init(|| AtomicBool::new(false));
+ if !warned.swap(true, Ordering::Relaxed) {
+ warn!(
+ "general.beobachten_minutes=0 is insecure because entries expire immediately; forcing minimum TTL to 1 minute"
+ );
+ }
+ return Duration::from_secs(60);
+ }
+ Duration::from_secs(minutes.saturating_mul(60))
}
fn record_beobachten_class(
@@ -494,7 +506,6 @@ impl RunningClientHandler {
pub async fn run(self) -> Result<()> {
self.stats.increment_connects_all();
let peer = self.peer;
- let _ip_tracker = self.ip_tracker.clone();
debug!(peer = %peer, "New connection");
if let Err(e) = configure_client_socket(
@@ -625,7 +636,6 @@ impl RunningClientHandler {
let is_tls = tls::is_tls_handshake(&first_bytes[..3]);
let peer = self.peer;
- let _ip_tracker = self.ip_tracker.clone();
debug!(peer = %peer, is_tls = is_tls, "Handshake type detected");
@@ -638,7 +648,6 @@ impl RunningClientHandler {
async fn handle_tls_client(mut self, first_bytes: [u8; 5], local_addr: SocketAddr) -> Result<HandshakeOutcome> {
let peer = self.peer;
- let _ip_tracker = self.ip_tracker.clone();
let tls_len = u16::from_be_bytes([first_bytes[3], first_bytes[4]]) as usize;
@@ -762,7 +771,6 @@ impl RunningClientHandler {
async fn handle_direct_client(mut self, first_bytes: [u8; 5], local_addr: SocketAddr) -> Result<HandshakeOutcome> {
let peer = self.peer;
- let _ip_tracker = self.ip_tracker.clone();
if !self.config.general.modes.classic && !self.config.general.modes.secure {
debug!(peer = %peer, "Non-TLS modes disabled");
@@ -1032,7 +1040,10 @@ impl RunningClientHandler {
}
match ip_tracker.check_and_add(user, peer_addr.ip()).await {
- Ok(()) => {}
+ Ok(()) => {
+ ip_tracker.remove_ip(user, peer_addr.ip()).await;
+ stats.decrement_user_curr_connects(user);
+ }
Err(reason) => {
stats.decrement_user_curr_connects(user);
warn!(


@@ -361,6 +361,93 @@ async fn short_tls_probe_is_masked_through_client_pipeline() {
accept_task.await.unwrap();
}
#[tokio::test]
async fn tls12_record_probe_is_masked_through_client_pipeline() {
let listener = TcpListener::bind("127.0.0.1:0").await.unwrap();
let backend_addr = listener.local_addr().unwrap();
let probe = vec![0x16, 0x03, 0x03, 0x00, 0x10];
let backend_reply = b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nOK".to_vec();
let accept_task = tokio::spawn({
let probe = probe.clone();
let backend_reply = backend_reply.clone();
async move {
let (mut stream, _) = listener.accept().await.unwrap();
let mut got = vec![0u8; probe.len()];
stream.read_exact(&mut got).await.unwrap();
assert_eq!(got, probe);
stream.write_all(&backend_reply).await.unwrap();
}
});
let mut cfg = ProxyConfig::default();
cfg.general.beobachten = false;
cfg.censorship.mask = true;
cfg.censorship.mask_unix_sock = None;
cfg.censorship.mask_host = Some("127.0.0.1".to_string());
cfg.censorship.mask_port = backend_addr.port();
cfg.censorship.mask_proxy_protocol = 0;
let config = Arc::new(cfg);
let stats = Arc::new(Stats::new());
let upstream_manager = Arc::new(UpstreamManager::new(
vec![UpstreamConfig {
upstream_type: UpstreamType::Direct {
interface: None,
bind_addresses: None,
},
weight: 1,
enabled: true,
scopes: String::new(),
selected_scope: String::new(),
}],
1,
1,
1,
1,
false,
stats.clone(),
));
let replay_checker = Arc::new(ReplayChecker::new(128, Duration::from_secs(60)));
let buffer_pool = Arc::new(BufferPool::new());
let rng = Arc::new(SecureRandom::new());
let route_runtime = Arc::new(RouteRuntimeController::new(RelayRouteMode::Direct));
let ip_tracker = Arc::new(UserIpTracker::new());
let beobachten = Arc::new(BeobachtenStore::new());
let (server_side, mut client_side) = duplex(4096);
let peer: SocketAddr = "203.0.113.78:55001".parse().unwrap();
let handler = tokio::spawn(handle_client_stream(
server_side,
peer,
config,
stats,
upstream_manager,
replay_checker,
buffer_pool,
rng,
None,
route_runtime,
None,
ip_tracker,
beobachten,
false,
));
client_side.write_all(&probe).await.unwrap();
let mut observed = vec![0u8; backend_reply.len()];
client_side.read_exact(&mut observed).await.unwrap();
assert_eq!(observed, backend_reply);
drop(client_side);
let _ = tokio::time::timeout(Duration::from_secs(3), handler)
.await
.unwrap()
.unwrap();
accept_task.await.unwrap();
}
#[tokio::test]
async fn handle_client_stream_increments_connects_all_exactly_once() {
let listener = TcpListener::bind("127.0.0.1:0").await.unwrap();
@@ -1381,6 +1468,34 @@ fn non_eof_error_is_classified_as_other() {
);
}
#[test]
fn beobachten_ttl_zero_minutes_is_floored_to_one_minute() {
let mut config = ProxyConfig::default();
config.general.beobachten = true;
config.general.beobachten_minutes = 0;
let ttl = beobachten_ttl(&config);
assert_eq!(
ttl,
Duration::from_secs(60),
"beobachten_minutes=0 must be fail-closed to a one-minute minimum TTL"
);
}
#[test]
fn beobachten_ttl_positive_minutes_remain_unchanged() {
let mut config = ProxyConfig::default();
config.general.beobachten = true;
config.general.beobachten_minutes = 7;
let ttl = beobachten_ttl(&config);
assert_eq!(
ttl,
Duration::from_secs(7 * 60),
"configured positive beobachten TTL must be preserved"
);
}
#[tokio::test]
async fn tcp_limit_rejection_does_not_reserve_ip_or_trigger_rollback() {
let mut config = ProxyConfig::default();
@@ -1449,6 +1564,83 @@ async fn zero_tcp_limit_rejects_without_ip_or_counter_side_effects() {
assert_eq!(ip_tracker.get_active_ip_count("user").await, 0);
}
#[tokio::test]
async fn check_user_limits_static_success_does_not_leak_counter_or_ip_reservation() {
let user = "check-helper-user";
let mut config = ProxyConfig::default();
config
.access
.user_max_tcp_conns
.insert(user.to_string(), 1);
let stats = Stats::new();
let ip_tracker = UserIpTracker::new();
let peer_addr: SocketAddr = "198.51.100.212:50002".parse().unwrap();
let first = RunningClientHandler::check_user_limits_static(
user,
&config,
&stats,
peer_addr,
&ip_tracker,
)
.await;
assert!(first.is_ok(), "first check-only limit validation must succeed");
let second = RunningClientHandler::check_user_limits_static(
user,
&config,
&stats,
peer_addr,
&ip_tracker,
)
.await;
assert!(second.is_ok(), "second check-only validation must not fail from leaked state");
assert_eq!(stats.get_user_curr_connects(user), 0);
assert_eq!(ip_tracker.get_active_ip_count(user).await, 0);
}
#[tokio::test]
async fn stress_check_user_limits_static_success_never_leaks_state() {
let user = "check-helper-stress-user";
let mut config = ProxyConfig::default();
config
.access
.user_max_tcp_conns
.insert(user.to_string(), 1);
let stats = Stats::new();
let ip_tracker = UserIpTracker::new();
for i in 0..4096u16 {
let peer_addr = SocketAddr::new(
IpAddr::V4(std::net::Ipv4Addr::new(198, 51, 110, (i % 250) as u8 + 1)),
40000 + (i % 1024),
);
let result = RunningClientHandler::check_user_limits_static(
user,
&config,
&stats,
peer_addr,
&ip_tracker,
)
.await;
assert!(result.is_ok(), "check-only helper must remain leak-free under stress");
}
assert_eq!(
stats.get_user_curr_connects(user),
0,
"stress success loop must not leak user connection counters"
);
assert_eq!(
ip_tracker.get_active_ip_count(user).await,
0,
"stress success loop must not leak active IP reservations"
);
}
#[tokio::test]
async fn concurrent_distinct_ip_rejections_rollback_user_counter_without_leak() {
let user = "rollback-storm-user";
@@ -1678,6 +1870,249 @@ async fn explicit_release_allows_immediate_cross_ip_reacquire_under_limit() {
assert_eq!(ip_tracker.get_active_ip_count(user).await, 0);
}
#[tokio::test]
async fn release_abort_storm_does_not_leak_user_or_ip_reservations() {
const ATTEMPTS: usize = 256;
let user = "release-abort-storm-user";
let mut config = ProxyConfig::default();
config
.access
.user_max_tcp_conns
.insert(user.to_string(), ATTEMPTS + 16);
let stats = Arc::new(Stats::new());
let ip_tracker = Arc::new(UserIpTracker::new());
for idx in 0..ATTEMPTS {
let peer = SocketAddr::new(
IpAddr::V4(std::net::Ipv4Addr::new(203, 0, 114, (idx % 250 + 1) as u8)),
52000 + idx as u16,
);
let reservation = RunningClientHandler::acquire_user_connection_reservation_static(
user,
&config,
stats.clone(),
peer,
ip_tracker.clone(),
)
.await
.expect("reservation acquisition must succeed in abort storm");
let release_task = tokio::spawn(async move {
reservation.release().await;
});
release_task.abort();
let _ = release_task.await;
}
tokio::time::timeout(Duration::from_secs(1), async {
loop {
if stats.get_user_curr_connects(user) == 0
&& ip_tracker.get_active_ip_count(user).await == 0
{
break;
}
tokio::task::yield_now().await;
tokio::time::sleep(Duration::from_millis(2)).await;
}
})
.await
.expect("release abort storm must not leak user slots or active IP entries");
}
#[tokio::test]
async fn release_abort_loop_preserves_immediate_same_ip_reacquire() {
const ITERATIONS: usize = 128;
let user = "release-abort-reacquire-user";
let peer: SocketAddr = "198.51.100.246:53001".parse().unwrap();
let mut config = ProxyConfig::default();
config.access.user_max_tcp_conns.insert(user.to_string(), 1);
let stats = Arc::new(Stats::new());
let ip_tracker = Arc::new(UserIpTracker::new());
for _ in 0..ITERATIONS {
let reservation = RunningClientHandler::acquire_user_connection_reservation_static(
user,
&config,
stats.clone(),
peer,
ip_tracker.clone(),
)
.await
.expect("baseline acquisition must succeed");
let release_task = tokio::spawn(async move {
reservation.release().await;
});
release_task.abort();
let _ = release_task.await;
tokio::time::timeout(Duration::from_secs(1), async {
loop {
if stats.get_user_curr_connects(user) == 0
&& ip_tracker.get_active_ip_count(user).await == 0
{
break;
}
tokio::task::yield_now().await;
tokio::time::sleep(Duration::from_millis(2)).await;
}
})
.await
.expect("aborted release must still converge to zero footprint");
}
let reservation = RunningClientHandler::acquire_user_connection_reservation_static(
user,
&config,
stats.clone(),
peer,
ip_tracker.clone(),
)
.await
.expect("same-ip reacquire must succeed after repeated abort-release churn");
reservation.release().await;
}
#[tokio::test]
async fn adversarial_mixed_release_drop_abort_wave_converges_to_zero() {
const RESERVATIONS: usize = 192;
let user = "mixed-wave-user";
let mut config = ProxyConfig::default();
config
.access
.user_max_tcp_conns
.insert(user.to_string(), RESERVATIONS + 8);
let stats = Arc::new(Stats::new());
let ip_tracker = Arc::new(UserIpTracker::new());
let mut reservations = Vec::with_capacity(RESERVATIONS);
for idx in 0..RESERVATIONS {
let peer = SocketAddr::new(
IpAddr::V4(std::net::Ipv4Addr::new(203, 0, 115, (idx % 250 + 1) as u8)),
54000 + idx as u16,
);
let reservation = RunningClientHandler::acquire_user_connection_reservation_static(
user,
&config,
stats.clone(),
peer,
ip_tracker.clone(),
)
.await
.expect("mixed-wave acquisition must succeed");
reservations.push(reservation);
}
let mut seed: u64 = 0xDEAD_BEEF_CAFE_BA5E;
let mut join_set = tokio::task::JoinSet::new();
for reservation in reservations {
seed ^= seed << 7;
seed ^= seed >> 9;
seed ^= seed << 8;
match seed % 3 {
0 => {
join_set.spawn(async move {
reservation.release().await;
});
}
1 => {
drop(reservation);
}
_ => {
let task = tokio::spawn(async move {
reservation.release().await;
});
task.abort();
let _ = task.await;
}
}
}
while let Some(result) = join_set.join_next().await {
result.expect("release subtask must not panic");
}
tokio::time::timeout(Duration::from_secs(2), async {
loop {
if stats.get_user_curr_connects(user) == 0
&& ip_tracker.get_active_ip_count(user).await == 0
{
break;
}
tokio::task::yield_now().await;
tokio::time::sleep(Duration::from_millis(2)).await;
}
})
.await
.expect("mixed release/drop/abort wave must converge to zero footprint");
}
#[tokio::test]
async fn parallel_users_abort_release_isolation_preserves_independent_cleanup() {
let user_a = "abort-isolation-a";
let user_b = "abort-isolation-b";
let mut config = ProxyConfig::default();
config.access.user_max_tcp_conns.insert(user_a.to_string(), 64);
config.access.user_max_tcp_conns.insert(user_b.to_string(), 64);
let stats = Arc::new(Stats::new());
let ip_tracker = Arc::new(UserIpTracker::new());
let mut tasks = tokio::task::JoinSet::new();
for idx in 0..64usize {
let user = if idx % 2 == 0 { user_a } else { user_b };
let peer = SocketAddr::new(
IpAddr::V4(std::net::Ipv4Addr::new(198, 18, 0, (idx % 250 + 1) as u8)),
55000 + idx as u16,
);
let reservation = RunningClientHandler::acquire_user_connection_reservation_static(
user,
&config,
stats.clone(),
peer,
ip_tracker.clone(),
)
.await
.expect("parallel-user acquisition must succeed");
tasks.spawn(async move {
let t = tokio::spawn(async move {
reservation.release().await;
});
t.abort();
let _ = t.await;
});
}
while let Some(result) = tasks.join_next().await {
result.expect("parallel-user abort task must not panic");
}
tokio::time::timeout(Duration::from_secs(2), async {
loop {
if stats.get_user_curr_connects(user_a) == 0
&& stats.get_user_curr_connects(user_b) == 0
&& ip_tracker.get_active_ip_count(user_a).await == 0
&& ip_tracker.get_active_ip_count(user_b).await == 0
{
break;
}
tokio::task::yield_now().await;
tokio::time::sleep(Duration::from_millis(2)).await;
}
})
.await
.expect("parallel users must cleanup independently under abort churn");
}
#[tokio::test]
async fn concurrent_release_storm_leaves_zero_user_and_ip_footprint() {
const RESERVATIONS: usize = 64;
@@ -2301,16 +2736,24 @@ async fn atomic_limit_gate_allows_only_one_concurrent_acquire() {
IpAddr::V4(std::net::Ipv4Addr::new(203, 0, 113, (i + 1) as u8)),
30000 + i,
);
- RunningClientHandler::check_user_limits_static("user", &config, &stats, peer, &ip_tracker)
+ RunningClientHandler::acquire_user_connection_reservation_static(
+ "user",
+ &config,
+ stats,
+ peer,
+ ip_tracker,
+ )
.await
- .is_ok()
+ .ok()
});
}
let mut successes = 0u64;
+ let mut held_reservations = Vec::new();
while let Some(joined) = tasks.join_next().await {
- if joined.unwrap() {
+ if let Some(reservation) = joined.unwrap() {
successes += 1;
+ held_reservations.push(reservation);
}
}
@@ -2319,6 +2762,8 @@ async fn atomic_limit_gate_allows_only_one_concurrent_acquire() {
"exactly one concurrent acquire must pass for a limit=1 user"
);
assert_eq!(stats.get_user_curr_connects("user"), 1);
+ drop(held_reservations);
}
#[tokio::test]


@@ -1,6 +1,8 @@
use std::ffi::OsString;
use std::fs::OpenOptions;
use std::io::Write;
use std::net::SocketAddr;
use std::path::{Component, Path, PathBuf};
use std::sync::Arc;
use std::collections::HashSet;
use std::sync::{Mutex, OnceLock};
@@ -24,14 +26,28 @@ use crate::stats::Stats;
use crate::stream::{BufferPool, CryptoReader, CryptoWriter};
use crate::transport::UpstreamManager;
#[cfg(unix)]
use std::os::unix::fs::OpenOptionsExt;
const UNKNOWN_DC_LOG_DISTINCT_LIMIT: usize = 1024;
static LOGGED_UNKNOWN_DCS: OnceLock<Mutex<HashSet<i16>>> = OnceLock::new();
#[derive(Clone)]
struct SanitizedUnknownDcLogPath {
resolved_path: PathBuf,
allowed_parent: PathBuf,
file_name: OsString,
}
// In tests, this function shares global mutable state. Callers that also use
// cache-reset helpers must hold `unknown_dc_test_lock()` to keep assertions
// deterministic under parallel execution.
fn should_log_unknown_dc(dc_idx: i16) -> bool {
let set = LOGGED_UNKNOWN_DCS.get_or_init(|| Mutex::new(HashSet::new()));
should_log_unknown_dc_with_set(set, dc_idx)
}
fn should_log_unknown_dc_with_set(set: &Mutex<HashSet<i16>>, dc_idx: i16) -> bool {
match set.lock() {
Ok(mut guard) => {
if guard.contains(&dc_idx) {
@@ -42,9 +58,81 @@ fn should_log_unknown_dc(dc_idx: i16) -> bool {
}
guard.insert(dc_idx)
}
- // If the lock is poisoned, keep logging rather than silently dropping
- // operator-visible diagnostics.
- Err(_) => true,
+ // Fail closed on poisoned state to avoid unbounded blocking log writes.
+ Err(_) => false,
}
}
fn sanitize_unknown_dc_log_path(path: &str) -> Option<SanitizedUnknownDcLogPath> {
let candidate = Path::new(path);
if candidate.as_os_str().is_empty() {
return None;
}
if candidate
.components()
.any(|component| matches!(component, Component::ParentDir))
{
return None;
}
let cwd = std::env::current_dir().ok()?;
let file_name = candidate.file_name()?;
let parent = candidate.parent().unwrap_or_else(|| Path::new("."));
let parent_path = if parent.is_absolute() {
parent.to_path_buf()
} else {
cwd.join(parent)
};
let canonical_parent = parent_path.canonicalize().ok()?;
if !canonical_parent.is_dir() {
return None;
}
Some(SanitizedUnknownDcLogPath {
resolved_path: canonical_parent.join(file_name),
allowed_parent: canonical_parent,
file_name: file_name.to_os_string(),
})
}
fn unknown_dc_log_path_is_still_safe(path: &SanitizedUnknownDcLogPath) -> bool {
let Some(parent) = path.resolved_path.parent() else {
return false;
};
let Ok(current_parent) = parent.canonicalize() else {
return false;
};
if current_parent != path.allowed_parent {
return false;
}
if let Ok(canonical_target) = path.resolved_path.canonicalize() {
let Some(target_parent) = canonical_target.parent() else {
return false;
};
let Some(target_name) = canonical_target.file_name() else {
return false;
};
if target_parent != path.allowed_parent || target_name != path.file_name {
return false;
}
}
true
}
fn open_unknown_dc_log_append(path: &Path) -> std::io::Result<std::fs::File> {
#[cfg(unix)]
{
OpenOptions::new()
.create(true)
.append(true)
.custom_flags(libc::O_NOFOLLOW)
.open(path)
}
#[cfg(not(unix))]
{
OpenOptions::new().create(true).append(true).open(path)
}
}
@@ -200,12 +288,17 @@ fn get_dc_addr_static(dc_idx: i16, config: &ProxyConfig) -> Result<SocketAddr> {
&& should_log_unknown_dc(dc_idx)
&& let Ok(handle) = tokio::runtime::Handle::try_current()
{
- let path = path.clone();
+ if let Some(path) = sanitize_unknown_dc_log_path(path) {
handle.spawn_blocking(move || {
- if let Ok(mut file) = OpenOptions::new().create(true).append(true).open(path) {
+ if unknown_dc_log_path_is_still_safe(&path)
+ && let Ok(mut file) = open_unknown_dc_log_append(&path.resolved_path)
+ {
let _ = writeln!(file, "dc_idx={dc_idx}");
}
});
+ } else {
+ warn!(dc_idx = dc_idx, raw_path = %path, "Rejected unsafe unknown DC log path");
+ }
}
}


@@ -6,7 +6,11 @@ use crate::proxy::route_mode::{RelayRouteMode, RouteRuntimeController};
use crate::stats::Stats;
use crate::stream::{BufferPool, CryptoReader, CryptoWriter};
use crate::transport::UpstreamManager;
use std::fs;
use std::io::Write;
use std::path::Path;
use std::sync::Arc;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::time::Duration;
use tokio::io::duplex;
use tokio::net::TcpListener;
@ -29,6 +33,10 @@ where
CryptoWriter::new(writer, AesCtr::new(&key, iv), 8 * 1024)
}
fn nonempty_line_count(text: &str) -> usize {
text.lines().filter(|line| !line.trim().is_empty()).count()
}
#[test]
fn unknown_dc_log_is_deduplicated_per_dc_idx() {
let _guard = unknown_dc_test_lock()
@@ -67,6 +75,771 @@ fn unknown_dc_log_respects_distinct_limit()
);
}
#[test]
fn unknown_dc_log_fails_closed_when_dedup_lock_is_poisoned() {
let poisoned = Arc::new(std::sync::Mutex::new(std::collections::HashSet::<i16>::new()));
let poisoned_for_thread = poisoned.clone();
let _ = std::thread::spawn(move || {
let _guard = poisoned_for_thread
.lock()
.expect("poison setup lock must be available");
panic!("intentional poison for fail-closed regression");
})
.join();
assert!(
!should_log_unknown_dc_with_set(poisoned.as_ref(), 4242),
"poisoned unknown-DC dedup lock must fail closed"
);
}
#[test]
fn stress_unknown_dc_log_concurrent_unique_churn_respects_cap() {
let _guard = unknown_dc_test_lock()
.lock()
.expect("unknown dc test lock must be available");
clear_unknown_dc_log_cache_for_testing();
let accepted = Arc::new(AtomicUsize::new(0));
let mut workers = Vec::new();
// Adversarial model: many concurrent peers rotate dc_idx values rapidly.
for worker in 0..16usize {
let accepted = Arc::clone(&accepted);
workers.push(std::thread::spawn(move || {
let base = (worker * 2048) as i32;
for offset in 0..512i32 {
let raw = base + offset;
let dc = (raw % i16::MAX as i32) as i16;
if should_log_unknown_dc(dc) {
accepted.fetch_add(1, Ordering::Relaxed);
}
}
}));
}
for worker in workers {
worker.join().expect("worker thread must not panic");
}
assert_eq!(
accepted.load(Ordering::Relaxed),
UNKNOWN_DC_LOG_DISTINCT_LIMIT,
"concurrent unique churn must never admit more than the configured distinct cap"
);
}
#[test]
fn light_fuzz_unknown_dc_log_mixed_duplicates_never_exceeds_cap() {
let _guard = unknown_dc_test_lock()
.lock()
.expect("unknown dc test lock must be available");
clear_unknown_dc_log_cache_for_testing();
// Deterministic xorshift sequence for reproducible mixed duplicate fuzzing.
let mut s: u64 = 0xA5A5_5A5A_C3C3_3C3C;
let mut admitted = 0usize;
for _ in 0..20_000 {
s ^= s << 7;
s ^= s >> 9;
s ^= s << 8;
let dc = (s as i16).wrapping_sub(i16::MAX / 2);
if should_log_unknown_dc(dc) {
admitted += 1;
}
}
assert!(
admitted <= UNKNOWN_DC_LOG_DISTINCT_LIMIT,
"mixed-duplicate fuzzed inputs must not admit more than cap"
);
}
#[test]
fn unknown_dc_log_path_sanitizer_rejects_parent_traversal_inputs() {
assert!(
sanitize_unknown_dc_log_path("../unknown-dc.txt").is_none(),
"parent traversal paths must be rejected"
);
assert!(
sanitize_unknown_dc_log_path("logs/../unknown-dc.txt").is_none(),
"embedded parent traversal must be rejected"
);
assert!(
sanitize_unknown_dc_log_path("./../unknown-dc.txt").is_none(),
"relative parent traversal must be rejected"
);
}
#[test]
fn unknown_dc_log_path_sanitizer_accepts_absolute_paths_with_existing_parent() {
let absolute = std::env::temp_dir().join("unknown-dc.txt");
let absolute_str = absolute
.to_str()
.expect("temp absolute path must be valid UTF-8");
let sanitized = sanitize_unknown_dc_log_path(absolute_str)
.expect("absolute paths with existing parent must be accepted");
assert_eq!(sanitized.resolved_path, absolute);
}
#[test]
fn unknown_dc_log_path_sanitizer_rejects_absolute_parent_traversal() {
assert!(
sanitize_unknown_dc_log_path("/tmp/../etc/passwd").is_none(),
"absolute parent traversal must be rejected"
);
}
#[test]
fn unknown_dc_log_path_sanitizer_accepts_safe_relative_path() {
let base = std::env::current_dir()
.expect("cwd must be available")
.join("target")
.join(format!("telemt-unknown-dc-log-{}", std::process::id()));
fs::create_dir_all(&base).expect("temp test directory must be creatable");
let candidate = base.join("unknown-dc.txt");
let candidate_relative = format!("target/telemt-unknown-dc-log-{}/unknown-dc.txt", std::process::id());
let sanitized = sanitize_unknown_dc_log_path(&candidate_relative)
.expect("safe relative path with existing parent must be accepted");
assert_eq!(sanitized.resolved_path, candidate);
}
#[test]
fn unknown_dc_log_path_sanitizer_rejects_empty_or_dot_only_inputs() {
assert!(
sanitize_unknown_dc_log_path("").is_none(),
"empty path must be rejected"
);
assert!(
sanitize_unknown_dc_log_path(".").is_none(),
"dot-only path without filename must be rejected"
);
}
#[test]
fn unknown_dc_log_path_sanitizer_accepts_directory_only_as_filename_projection() {
let sanitized = sanitize_unknown_dc_log_path("target/")
.expect("directory-only input is interpreted as filename projection in current sanitizer");
assert!(
sanitized.resolved_path.ends_with("target"),
"directory-only input should resolve to canonical parent plus filename projection"
);
}
#[test]
fn unknown_dc_log_path_sanitizer_accepts_dot_prefixed_relative_path() {
let rel_dir = format!("target/telemt-unknown-dc-dot-{}", std::process::id());
let abs_dir = std::env::current_dir()
.expect("cwd must be available")
.join(&rel_dir);
fs::create_dir_all(&abs_dir).expect("dot-prefixed test directory must be creatable");
let rel_candidate = format!("./{rel_dir}/unknown-dc.log");
let expected = abs_dir.join("unknown-dc.log");
let sanitized = sanitize_unknown_dc_log_path(&rel_candidate)
.expect("dot-prefixed safe path must be accepted");
assert_eq!(sanitized.resolved_path, expected);
}
#[test]
fn light_fuzz_unknown_dc_path_parentdir_inputs_always_rejected() {
let mut s: u64 = 0xD00D_BAAD_1234_5678;
for _ in 0..4096 {
s ^= s << 7;
s ^= s >> 9;
s ^= s << 8;
let a = (s as usize) % 32;
let b = ((s >> 8) as usize) % 32;
let candidate = format!("target/{a}/../{b}/unknown-dc.log");
assert!(
sanitize_unknown_dc_log_path(&candidate).is_none(),
"parent-dir candidate must be rejected: {candidate}"
);
}
}
#[test]
fn unknown_dc_log_path_sanitizer_rejects_nonexistent_parent_directory() {
let rel_candidate = format!(
"target/telemt-unknown-dc-missing-{}/nested/unknown-dc.txt",
std::process::id()
);
assert!(
sanitize_unknown_dc_log_path(&rel_candidate).is_none(),
"path with missing parent must be rejected to avoid implicit directory creation"
);
}
#[cfg(unix)]
#[test]
fn unknown_dc_log_path_sanitizer_accepts_symlinked_parent_inside_workspace() {
use std::os::unix::fs::symlink;
let base = std::env::current_dir()
.expect("cwd must be available")
.join("target")
.join(format!("telemt-unknown-dc-log-symlink-internal-{}", std::process::id()));
let real_parent = base.join("real_parent");
fs::create_dir_all(&real_parent).expect("real parent dir must be creatable");
let symlink_parent = base.join("internal_link");
let _ = fs::remove_file(&symlink_parent);
symlink(&real_parent, &symlink_parent).expect("internal symlink must be creatable");
let rel_candidate = format!(
"target/telemt-unknown-dc-log-symlink-internal-{}/internal_link/unknown-dc.txt",
std::process::id()
);
let sanitized = sanitize_unknown_dc_log_path(&rel_candidate)
.expect("symlinked parent that resolves inside workspace must be accepted");
assert!(
sanitized.resolved_path.starts_with(&real_parent),
"sanitized path must resolve to canonical internal parent"
);
}
#[cfg(unix)]
#[test]
fn unknown_dc_log_path_sanitizer_accepts_symlink_parent_escape_as_canonical_path() {
use std::os::unix::fs::symlink;
let base = std::env::current_dir()
.expect("cwd must be available")
.join("target")
.join(format!("telemt-unknown-dc-log-symlink-{}", std::process::id()));
fs::create_dir_all(&base).expect("symlink test directory must be creatable");
let symlink_parent = base.join("escape_link");
let _ = fs::remove_file(&symlink_parent);
symlink("/tmp", &symlink_parent).expect("symlink parent must be creatable");
let rel_candidate = format!(
"target/telemt-unknown-dc-log-symlink-{}/escape_link/unknown-dc.txt",
std::process::id()
);
let sanitized = sanitize_unknown_dc_log_path(&rel_candidate)
.expect("symlinked parent must canonicalize to target path");
assert!(
sanitized.resolved_path.starts_with(Path::new("/tmp")),
"sanitized path must resolve to canonical symlink target"
);
}
#[cfg(unix)]
#[test]
fn unknown_dc_log_path_revalidation_rejects_symlinked_target_escape() {
use std::os::unix::fs::symlink;
let base = std::env::current_dir()
.expect("cwd must be available")
.join("target")
.join(format!("telemt-unknown-dc-target-link-{}", std::process::id()));
fs::create_dir_all(&base).expect("target-link base must be creatable");
let outside = std::env::temp_dir().join(format!("telemt-outside-{}", std::process::id()));
let _ = fs::remove_file(&outside);
fs::write(&outside, "outside").expect("outside file must be writable");
let linked_target = base.join("unknown-dc.log");
let _ = fs::remove_file(&linked_target);
symlink(&outside, &linked_target).expect("target symlink must be creatable");
let rel_candidate = format!(
"target/telemt-unknown-dc-target-link-{}/unknown-dc.log",
std::process::id()
);
let sanitized = sanitize_unknown_dc_log_path(&rel_candidate)
.expect("candidate should sanitize before final revalidation");
assert!(
!unknown_dc_log_path_is_still_safe(&sanitized),
"final revalidation must reject symlinked target escape"
);
}
#[cfg(unix)]
#[test]
fn unknown_dc_open_append_rejects_symlink_target_with_nofollow() {
use std::os::unix::fs::symlink;
let base = std::env::current_dir()
.expect("cwd must be available")
.join("target")
.join(format!("telemt-unknown-dc-nofollow-{}", std::process::id()));
fs::create_dir_all(&base).expect("nofollow base must be creatable");
let outside = std::env::temp_dir().join(format!(
"telemt-unknown-dc-nofollow-outside-{}.log",
std::process::id()
));
let _ = fs::remove_file(&outside);
fs::write(&outside, "outside\n").expect("outside file must be writable");
let linked_target = base.join("unknown-dc.log");
let _ = fs::remove_file(&linked_target);
symlink(&outside, &linked_target).expect("symlink target must be creatable");
let err = open_unknown_dc_log_append(&linked_target)
.expect_err("O_NOFOLLOW open must fail for symlink target");
assert_eq!(
err.raw_os_error(),
Some(libc::ELOOP),
"symlink target must be rejected with ELOOP when O_NOFOLLOW is applied"
);
}
#[cfg(unix)]
#[test]
fn unknown_dc_open_append_rejects_broken_symlink_target_with_nofollow() {
use std::os::unix::fs::symlink;
let base = std::env::current_dir()
.expect("cwd must be available")
.join("target")
.join(format!("telemt-unknown-dc-broken-link-{}", std::process::id()));
fs::create_dir_all(&base).expect("broken-link base must be creatable");
let linked_target = base.join("unknown-dc.log");
let _ = fs::remove_file(&linked_target);
symlink(base.join("missing-target.log"), &linked_target)
.expect("broken symlink target must be creatable");
let err = open_unknown_dc_log_append(&linked_target)
.expect_err("O_NOFOLLOW open must fail for broken symlink target");
assert_eq!(
err.raw_os_error(),
Some(libc::ELOOP),
"broken symlink target must be rejected with ELOOP when O_NOFOLLOW is applied"
);
}
#[cfg(unix)]
#[test]
fn adversarial_unknown_dc_open_append_symlink_flip_never_writes_outside_file() {
use std::os::unix::fs::symlink;
let base = std::env::current_dir()
.expect("cwd must be available")
.join("target")
.join(format!("telemt-unknown-dc-symlink-flip-{}", std::process::id()));
fs::create_dir_all(&base).expect("symlink-flip base must be creatable");
let outside = std::env::temp_dir().join(format!(
"telemt-unknown-dc-symlink-flip-outside-{}.log",
std::process::id()
));
fs::write(&outside, "outside-baseline\n").expect("outside baseline file must be writable");
let outside_before = fs::read_to_string(&outside).expect("outside baseline must be readable");
let target = base.join("unknown-dc.log");
let _ = fs::remove_file(&target);
for step in 0..1024usize {
let _ = fs::remove_file(&target);
if step % 2 == 0 {
symlink(&outside, &target).expect("symlink creation in flip loop must succeed");
}
if let Ok(mut file) = open_unknown_dc_log_append(&target) {
writeln!(file, "dc_idx={step}").expect("append on regular file must succeed");
}
}
let outside_after = fs::read_to_string(&outside).expect("outside file must remain readable");
assert_eq!(
outside_after, outside_before,
"outside file must never be modified under symlink-flip adversarial churn"
);
}
#[test]
fn unknown_dc_open_append_creates_regular_file() {
let base = std::env::current_dir()
.expect("cwd must be available")
.join("target")
.join(format!("telemt-unknown-dc-open-{}", std::process::id()));
fs::create_dir_all(&base).expect("open test base must be creatable");
let target = base.join("unknown-dc.log");
let _ = fs::remove_file(&target);
{
let mut file = open_unknown_dc_log_append(&target)
.expect("regular target must be creatable with append open");
writeln!(file, "dc_idx=1234").expect("append write must succeed");
}
let meta = fs::symlink_metadata(&target).expect("created target metadata must be readable");
assert!(meta.file_type().is_file(), "target must be a regular file");
assert!(
!meta.file_type().is_symlink(),
"regular target open path must not produce symlink artifacts"
);
}
#[test]
fn stress_unknown_dc_open_append_regular_file_preserves_line_integrity() {
let base = std::env::current_dir()
.expect("cwd must be available")
.join("target")
.join(format!("telemt-unknown-dc-open-stress-{}", std::process::id()));
fs::create_dir_all(&base).expect("stress open base must be creatable");
let target = base.join("unknown-dc.log");
let _ = fs::remove_file(&target);
let writes = 2048usize;
for idx in 0..writes {
let mut file = open_unknown_dc_log_append(&target)
.expect("stress append open on regular file must succeed");
writeln!(file, "dc_idx={idx}").expect("stress append write must succeed");
}
let content = fs::read_to_string(&target).expect("stress output file must be readable");
assert_eq!(
nonempty_line_count(&content),
writes,
"regular-file append stress must preserve one logical line per write"
);
}
#[test]
fn unknown_dc_log_path_revalidation_accepts_regular_existing_target() {
let base = std::env::current_dir()
.expect("cwd must be available")
.join("target")
.join(format!("telemt-unknown-dc-safe-target-{}", std::process::id()));
fs::create_dir_all(&base).expect("safe target base must be creatable");
let target = base.join("unknown-dc.log");
fs::write(&target, "seed\n").expect("safe target seed write must succeed");
let rel_candidate = format!(
"target/telemt-unknown-dc-safe-target-{}/unknown-dc.log",
std::process::id()
);
let sanitized = sanitize_unknown_dc_log_path(&rel_candidate)
.expect("safe candidate must sanitize");
assert!(
unknown_dc_log_path_is_still_safe(&sanitized),
"revalidation must allow safe existing regular files"
);
}
#[test]
fn unknown_dc_log_path_revalidation_rejects_deleted_parent_after_sanitize() {
let base = std::env::current_dir()
.expect("cwd must be available")
.join("target")
.join(format!("telemt-unknown-dc-vanish-parent-{}", std::process::id()));
fs::create_dir_all(&base).expect("vanish-parent base must be creatable");
let rel_candidate = format!(
"target/telemt-unknown-dc-vanish-parent-{}/unknown-dc.log",
std::process::id()
);
let sanitized = sanitize_unknown_dc_log_path(&rel_candidate)
.expect("candidate must sanitize before parent deletion");
fs::remove_dir_all(&base).expect("test parent directory must be removable");
assert!(
!unknown_dc_log_path_is_still_safe(&sanitized),
"revalidation must fail when sanitized parent disappears before write"
);
}
#[cfg(unix)]
#[test]
fn unknown_dc_log_path_revalidation_rejects_parent_swapped_to_symlink() {
use std::os::unix::fs::symlink;
let parent = std::env::current_dir()
.expect("cwd must be available")
.join("target")
.join(format!("telemt-unknown-dc-parent-swap-{}", std::process::id()));
fs::create_dir_all(&parent).expect("parent-swap test parent must be creatable");
let rel_candidate = format!(
"target/telemt-unknown-dc-parent-swap-{}/unknown-dc.log",
std::process::id()
);
let sanitized = sanitize_unknown_dc_log_path(&rel_candidate)
.expect("candidate must sanitize before parent swap");
let moved = parent.with_extension("bak");
let _ = fs::remove_dir_all(&moved);
fs::rename(&parent, &moved).expect("parent must be movable for swap simulation");
symlink("/tmp", &parent).expect("symlink replacement for parent must be creatable");
assert!(
!unknown_dc_log_path_is_still_safe(&sanitized),
"revalidation must fail when canonical parent is swapped to a symlinked target"
);
}
#[cfg(unix)]
#[test]
fn adversarial_check_then_symlink_flip_is_blocked_by_nofollow_open() {
use std::os::unix::fs::symlink;
let parent = std::env::current_dir()
.expect("cwd must be available")
.join("target")
.join(format!("telemt-unknown-dc-check-open-race-{}", std::process::id()));
fs::create_dir_all(&parent).expect("check-open-race parent must be creatable");
let target = parent.join("unknown-dc.log");
fs::write(&target, "seed\n").expect("seed target file must be writable");
let rel_candidate = format!(
"target/telemt-unknown-dc-check-open-race-{}/unknown-dc.log",
std::process::id()
);
let sanitized = sanitize_unknown_dc_log_path(&rel_candidate)
.expect("candidate must sanitize");
assert!(
unknown_dc_log_path_is_still_safe(&sanitized),
"precondition: target should initially pass revalidation"
);
let outside = std::env::temp_dir().join(format!(
"telemt-unknown-dc-check-open-race-outside-{}.log",
std::process::id()
));
fs::write(&outside, "outside\n").expect("outside file must be writable");
fs::remove_file(&target).expect("target removal before flip must succeed");
symlink(&outside, &target).expect("target symlink flip must be creatable");
let err = open_unknown_dc_log_append(&sanitized.resolved_path)
.expect_err("nofollow open must fail after symlink flip between check and open");
assert_eq!(
err.raw_os_error(),
Some(libc::ELOOP),
"symlink flip in check/open window must be neutralized by O_NOFOLLOW"
);
}
#[tokio::test]
async fn unknown_dc_absolute_log_path_writes_one_entry() {
let _guard = unknown_dc_test_lock()
.lock()
.expect("unknown dc test lock must be available");
clear_unknown_dc_log_cache_for_testing();
let dc_idx: i16 = 31_001;
let file_path = std::env::temp_dir().join(format!(
"telemt-unknown-dc-abs-{}-{}.log",
std::process::id(),
dc_idx
));
let _ = fs::remove_file(&file_path);
let mut cfg = ProxyConfig::default();
cfg.general.unknown_dc_file_log_enabled = true;
cfg.general.unknown_dc_log_path = Some(
file_path
.to_str()
.expect("temp file path must be valid UTF-8")
.to_string(),
);
let _ = get_dc_addr_static(dc_idx, &cfg).expect("fallback routing must still work");
let mut content = None;
for _ in 0..20 {
if let Ok(text) = fs::read_to_string(&file_path) {
content = Some(text);
break;
}
tokio::time::sleep(Duration::from_millis(15)).await;
}
let text = content.expect("absolute unknown-DC log path must produce exactly one log write");
assert!(
text.contains(&format!("dc_idx={dc_idx}")),
"absolute unknown-DC integration log must contain requested dc_idx"
);
}
#[tokio::test]
async fn unknown_dc_safe_relative_log_path_writes_one_entry() {
let _guard = unknown_dc_test_lock()
.lock()
.expect("unknown dc test lock must be available");
clear_unknown_dc_log_cache_for_testing();
let dc_idx: i16 = 31_002;
let rel_dir = format!("target/telemt-unknown-dc-int-{}", std::process::id());
let rel_file = format!("{rel_dir}/unknown-dc.log");
let abs_dir = std::env::current_dir()
.expect("cwd must be available")
.join(&rel_dir);
fs::create_dir_all(&abs_dir).expect("integration test log directory must be creatable");
let abs_file = abs_dir.join("unknown-dc.log");
let _ = fs::remove_file(&abs_file);
let mut cfg = ProxyConfig::default();
cfg.general.unknown_dc_file_log_enabled = true;
cfg.general.unknown_dc_log_path = Some(rel_file);
let _ = get_dc_addr_static(dc_idx, &cfg).expect("fallback routing must still work");
let mut content = None;
for _ in 0..20 {
if let Ok(text) = fs::read_to_string(&abs_file) {
content = Some(text);
break;
}
tokio::time::sleep(Duration::from_millis(15)).await;
}
let text = content.expect("safe relative path must produce exactly one log write");
assert!(
text.contains(&format!("dc_idx={dc_idx}")),
"unknown-DC integration log must contain requested dc_idx"
);
}
#[tokio::test]
async fn unknown_dc_same_index_burst_writes_only_once() {
let _guard = unknown_dc_test_lock()
.lock()
.expect("unknown dc test lock must be available");
clear_unknown_dc_log_cache_for_testing();
let dc_idx: i16 = 31_010;
let rel_dir = format!("target/telemt-unknown-dc-same-{}", std::process::id());
let rel_file = format!("{rel_dir}/unknown-dc.log");
let abs_dir = std::env::current_dir().unwrap().join(&rel_dir);
fs::create_dir_all(&abs_dir).expect("same-index log directory must be creatable");
let abs_file = abs_dir.join("unknown-dc.log");
let _ = fs::remove_file(&abs_file);
let mut cfg = ProxyConfig::default();
cfg.general.unknown_dc_file_log_enabled = true;
cfg.general.unknown_dc_log_path = Some(rel_file);
for _ in 0..64 {
let _ = get_dc_addr_static(dc_idx, &cfg).expect("fallback routing must still work");
}
let mut content = None;
for _ in 0..30 {
if let Ok(text) = fs::read_to_string(&abs_file) {
content = Some(text);
break;
}
tokio::time::sleep(Duration::from_millis(10)).await;
}
let text = content.expect("same-index burst must produce at least one log write");
assert_eq!(
nonempty_line_count(&text),
1,
"same unknown dc index must be deduplicated to one file line"
);
}
#[tokio::test]
async fn unknown_dc_distinct_burst_is_hard_capped_on_file_writes() {
let _guard = unknown_dc_test_lock()
.lock()
.expect("unknown dc test lock must be available");
clear_unknown_dc_log_cache_for_testing();
let rel_dir = format!("target/telemt-unknown-dc-cap-{}", std::process::id());
let rel_file = format!("{rel_dir}/unknown-dc.log");
let abs_dir = std::env::current_dir().unwrap().join(&rel_dir);
fs::create_dir_all(&abs_dir).expect("cap log directory must be creatable");
let abs_file = abs_dir.join("unknown-dc.log");
let _ = fs::remove_file(&abs_file);
let mut cfg = ProxyConfig::default();
cfg.general.unknown_dc_file_log_enabled = true;
cfg.general.unknown_dc_log_path = Some(rel_file);
for i in 0..(UNKNOWN_DC_LOG_DISTINCT_LIMIT + 128) {
let dc_idx = 20_000i16.wrapping_add(i as i16);
let _ = get_dc_addr_static(dc_idx, &cfg).expect("fallback routing must still work");
}
let mut final_text = String::new();
for _ in 0..80 {
if let Ok(text) = fs::read_to_string(&abs_file) {
final_text = text;
if nonempty_line_count(&final_text) >= UNKNOWN_DC_LOG_DISTINCT_LIMIT {
break;
}
}
tokio::time::sleep(Duration::from_millis(10)).await;
}
let line_count = nonempty_line_count(&final_text);
assert!(
line_count > 0,
"distinct unknown-dc burst must write at least one line"
);
assert!(
line_count <= UNKNOWN_DC_LOG_DISTINCT_LIMIT,
"distinct unknown-dc writes must stay within dedup hard cap"
);
}
#[cfg(unix)]
#[tokio::test]
async fn unknown_dc_symlinked_target_escape_is_not_written_integration() {
use std::os::unix::fs::symlink;
let _guard = unknown_dc_test_lock()
.lock()
.expect("unknown dc test lock must be available");
clear_unknown_dc_log_cache_for_testing();
let base = std::env::current_dir()
.expect("cwd must be available")
.join("target")
.join(format!("telemt-unknown-dc-no-write-link-{}", std::process::id()));
fs::create_dir_all(&base).expect("integration symlink base must be creatable");
let outside = std::env::temp_dir().join(format!(
"telemt-unknown-dc-outside-{}.log",
std::process::id()
));
fs::write(&outside, "baseline\n").expect("outside baseline file must be writable");
let linked_target = base.join("unknown-dc.log");
let _ = fs::remove_file(&linked_target);
symlink(&outside, &linked_target).expect("symlink target must be creatable");
let rel_file = format!(
"target/telemt-unknown-dc-no-write-link-{}/unknown-dc.log",
std::process::id()
);
let dc_idx: i16 = 31_050;
let mut cfg = ProxyConfig::default();
cfg.general.unknown_dc_file_log_enabled = true;
cfg.general.unknown_dc_log_path = Some(rel_file);
let before = fs::read_to_string(&outside).expect("must read baseline outside file");
let _ = get_dc_addr_static(dc_idx, &cfg).expect("fallback routing must still work");
tokio::time::sleep(Duration::from_millis(80)).await;
let after = fs::read_to_string(&outside).expect("must read outside file after attempt");
assert_eq!(
after, before,
"symlink target escape must not be written by unknown-DC logging"
);
}
#[test]
fn fallback_dc_never_panics_with_single_dc_list() {
let mut cfg = ProxyConfig::default();
@@ -276,6 +1049,13 @@ async fn direct_relay_cutover_midflight_releases_route_gauge() {
relay_result.is_err(),
"cutover should terminate direct relay session"
);
assert!(
matches!(
relay_result,
Err(ProxyError::Proxy(ref msg)) if msg == ROUTE_SWITCH_ERROR_MSG
),
"client-visible cutover error must stay generic and avoid route-internal metadata"
);
assert_eq!(
stats.get_current_connections_direct(),
@@ -287,3 +1067,143 @@ async fn direct_relay_cutover_midflight_releases_route_gauge() {
tg_accept_task.abort();
let _ = tg_accept_task.await;
}
#[tokio::test]
async fn direct_relay_cutover_storm_multi_session_keeps_generic_errors_and_releases_gauge() {
let session_count = 6usize;
let tg_listener = TcpListener::bind("127.0.0.1:0").await.unwrap();
let tg_addr = tg_listener.local_addr().unwrap();
let tg_accept_task = tokio::spawn(async move {
let mut held_streams = Vec::with_capacity(session_count);
for _ in 0..session_count {
let (stream, _) = tg_listener.accept().await.unwrap();
held_streams.push(stream);
}
tokio::time::sleep(Duration::from_secs(60)).await;
drop(held_streams);
});
let stats = Arc::new(Stats::new());
let mut config = ProxyConfig::default();
config
.dc_overrides
.insert("2".to_string(), vec![tg_addr.to_string()]);
let config = Arc::new(config);
let upstream_manager = Arc::new(UpstreamManager::new(
vec![UpstreamConfig {
upstream_type: UpstreamType::Direct {
interface: None,
bind_addresses: None,
},
weight: 1,
enabled: true,
scopes: String::new(),
selected_scope: String::new(),
}],
1,
1,
1,
1,
false,
stats.clone(),
));
let rng = Arc::new(SecureRandom::new());
let buffer_pool = Arc::new(BufferPool::new());
let route_runtime = Arc::new(RouteRuntimeController::new(RelayRouteMode::Direct));
let route_snapshot = route_runtime.snapshot();
let mut relay_tasks = Vec::with_capacity(session_count);
let mut client_sides = Vec::with_capacity(session_count);
for idx in 0..session_count {
let (server_side, client_side) = duplex(64 * 1024);
client_sides.push(client_side);
let (server_reader, server_writer) = tokio::io::split(server_side);
let client_reader = make_crypto_reader(server_reader);
let client_writer = make_crypto_writer(server_writer);
let success = HandshakeSuccess {
user: format!("cutover-storm-direct-user-{idx}"),
dc_idx: 2,
proto_tag: ProtoTag::Intermediate,
dec_key: [0u8; 32],
dec_iv: 0,
enc_key: [0u8; 32],
enc_iv: 0,
peer: SocketAddr::new(
std::net::IpAddr::V4(std::net::Ipv4Addr::new(127, 0, 0, 1)),
51000 + idx as u16,
),
is_tls: false,
};
relay_tasks.push(tokio::spawn(handle_via_direct(
client_reader,
client_writer,
success,
upstream_manager.clone(),
stats.clone(),
config.clone(),
buffer_pool.clone(),
rng.clone(),
route_runtime.subscribe(),
route_snapshot,
0xA000_0000 + idx as u64,
)));
}
tokio::time::timeout(Duration::from_secs(4), async {
loop {
if stats.get_current_connections_direct() == session_count as u64 {
break;
}
tokio::time::sleep(Duration::from_millis(10)).await;
}
})
.await
.expect("all direct sessions must become active before cutover storm");
let route_runtime_flipper = route_runtime.clone();
let flipper = tokio::spawn(async move {
for step in 0..64u32 {
let mode = if (step & 1) == 0 {
RelayRouteMode::Middle
} else {
RelayRouteMode::Direct
};
let _ = route_runtime_flipper.set_mode(mode);
tokio::time::sleep(Duration::from_millis(15)).await;
}
});
for relay_task in relay_tasks {
let relay_result = tokio::time::timeout(Duration::from_secs(10), relay_task)
.await
.expect("direct relay task must finish under cutover storm")
.expect("direct relay task must not panic");
assert!(
matches!(
relay_result,
Err(ProxyError::Proxy(ref msg)) if msg == ROUTE_SWITCH_ERROR_MSG
),
"storm-cutover termination must remain generic for all direct sessions"
);
}
flipper.abort();
let _ = flipper.await;
assert_eq!(
stats.get_current_connections_direct(),
0,
"direct route gauge must return to zero after cutover storm"
);
drop(client_sides);
tg_accept_task.abort();
let _ = tg_accept_task.await;
}


@@ -4,11 +4,11 @@
use std::net::SocketAddr;
use std::collections::HashSet;
use std::collections::hash_map::RandomState;
use std::net::{IpAddr, Ipv6Addr};
use std::sync::Arc;
use std::sync::{Mutex, OnceLock};
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::hash::{BuildHasher, Hash, Hasher};
use std::time::{Duration, Instant};
use dashmap::DashMap;
use dashmap::mapref::entry::Entry;
@@ -36,6 +36,7 @@ const AUTH_PROBE_TRACK_MAX_ENTRIES: usize = 256;
const AUTH_PROBE_TRACK_MAX_ENTRIES: usize = 65_536;
const AUTH_PROBE_PRUNE_SCAN_LIMIT: usize = 1_024;
const AUTH_PROBE_BACKOFF_START_FAILS: u32 = 4;
const AUTH_PROBE_SATURATION_GRACE_FAILS: u32 = 2;
#[cfg(test)]
const AUTH_PROBE_BACKOFF_BASE_MS: u64 = 1;
@@ -54,12 +55,25 @@ struct AuthProbeState {
last_seen: Instant,
}
#[derive(Clone, Copy)]
struct AuthProbeSaturationState {
fail_streak: u32,
blocked_until: Instant,
last_seen: Instant,
}
static AUTH_PROBE_STATE: OnceLock<DashMap<IpAddr, AuthProbeState>> = OnceLock::new();
static AUTH_PROBE_SATURATION_STATE: OnceLock<Mutex<Option<AuthProbeSaturationState>>> = OnceLock::new();
static AUTH_PROBE_EVICTION_HASHER: OnceLock<RandomState> = OnceLock::new();
fn auth_probe_state_map() -> &'static DashMap<IpAddr, AuthProbeState> {
AUTH_PROBE_STATE.get_or_init(DashMap::new)
}
fn auth_probe_saturation_state() -> &'static Mutex<Option<AuthProbeSaturationState>> {
AUTH_PROBE_SATURATION_STATE.get_or_init(|| Mutex::new(None))
}
fn normalize_auth_probe_ip(peer_ip: IpAddr) -> IpAddr {
match peer_ip {
IpAddr::V4(ip) => IpAddr::V4(ip),
@@ -88,7 +102,8 @@ fn auth_probe_state_expired(state: &AuthProbeState, now: Instant) -> bool {
}
fn auth_probe_eviction_offset(peer_ip: IpAddr, now: Instant) -> usize {
let mut hasher = DefaultHasher::new();
let hasher_state = AUTH_PROBE_EVICTION_HASHER.get_or_init(RandomState::new);
let mut hasher = hasher_state.build_hasher();
peer_ip.hash(&mut hasher);
now.hash(&mut hasher);
hasher.finish() as usize
@@ -108,6 +123,83 @@ fn auth_probe_is_throttled(peer_ip: IpAddr, now: Instant) -> bool {
now < entry.blocked_until
}
fn auth_probe_saturation_grace_exhausted(peer_ip: IpAddr, now: Instant) -> bool {
let peer_ip = normalize_auth_probe_ip(peer_ip);
let state = auth_probe_state_map();
let Some(entry) = state.get(&peer_ip) else {
return false;
};
if auth_probe_state_expired(&entry, now) {
drop(entry);
state.remove(&peer_ip);
return false;
}
entry.fail_streak >= AUTH_PROBE_BACKOFF_START_FAILS + AUTH_PROBE_SATURATION_GRACE_FAILS
}
fn auth_probe_should_apply_preauth_throttle(peer_ip: IpAddr, now: Instant) -> bool {
if !auth_probe_is_throttled(peer_ip, now) {
return false;
}
if !auth_probe_saturation_is_throttled(now) {
return true;
}
auth_probe_saturation_grace_exhausted(peer_ip, now)
}
fn auth_probe_saturation_is_throttled(now: Instant) -> bool {
let saturation = auth_probe_saturation_state();
let mut guard = match saturation.lock() {
Ok(guard) => guard,
Err(_) => return false,
};
let Some(state) = guard.as_mut() else {
return false;
};
if now.duration_since(state.last_seen) > Duration::from_secs(AUTH_PROBE_TRACK_RETENTION_SECS) {
*guard = None;
return false;
}
if now < state.blocked_until {
return true;
}
false
}
fn auth_probe_note_saturation(now: Instant) {
let saturation = auth_probe_saturation_state();
let mut guard = match saturation.lock() {
Ok(guard) => guard,
Err(_) => return,
};
match guard.as_mut() {
Some(state)
if now.duration_since(state.last_seen)
<= Duration::from_secs(AUTH_PROBE_TRACK_RETENTION_SECS) =>
{
state.fail_streak = state.fail_streak.saturating_add(1);
state.last_seen = now;
state.blocked_until = now + auth_probe_backoff(state.fail_streak);
}
_ => {
let fail_streak = AUTH_PROBE_BACKOFF_START_FAILS;
*guard = Some(AuthProbeSaturationState {
fail_streak,
blocked_until: now + auth_probe_backoff(fail_streak),
last_seen: now,
});
}
}
}
fn auth_probe_record_failure(peer_ip: IpAddr, now: Instant) {
let peer_ip = normalize_auth_probe_ip(peer_ip);
let state = auth_probe_state_map();
@@ -144,24 +236,79 @@ fn auth_probe_record_failure_with_state(
}
if state.len() >= AUTH_PROBE_TRACK_MAX_ENTRIES {
let mut rounds = 0usize;
while state.len() >= AUTH_PROBE_TRACK_MAX_ENTRIES {
rounds += 1;
if rounds > 8 {
auth_probe_note_saturation(now);
return;
}
let mut stale_keys = Vec::new();
let mut eviction_candidates = Vec::new();
for entry in state.iter().take(AUTH_PROBE_PRUNE_SCAN_LIMIT) {
eviction_candidates.push(*entry.key());
let mut eviction_candidate: Option<(IpAddr, u32, Instant)> = None;
let state_len = state.len();
let scan_limit = state_len.min(AUTH_PROBE_PRUNE_SCAN_LIMIT);
let start_offset = if state_len == 0 {
0
} else {
auth_probe_eviction_offset(peer_ip, now) % state_len
};
let mut scanned = 0usize;
for entry in state.iter().skip(start_offset) {
let key = *entry.key();
let fail_streak = entry.value().fail_streak;
let last_seen = entry.value().last_seen;
match eviction_candidate {
Some((_, current_fail, current_seen))
if fail_streak > current_fail
|| (fail_streak == current_fail && last_seen >= current_seen) =>
{
}
_ => eviction_candidate = Some((key, fail_streak, last_seen)),
}
if auth_probe_state_expired(entry.value(), now) {
stale_keys.push(*entry.key());
stale_keys.push(key);
}
scanned += 1;
if scanned >= scan_limit {
break;
}
}
if scanned < scan_limit {
for entry in state.iter().take(scan_limit - scanned) {
let key = *entry.key();
let fail_streak = entry.value().fail_streak;
let last_seen = entry.value().last_seen;
match eviction_candidate {
Some((_, current_fail, current_seen))
if fail_streak > current_fail
|| (fail_streak == current_fail && last_seen >= current_seen) =>
{
}
_ => eviction_candidate = Some((key, fail_streak, last_seen)),
}
if auth_probe_state_expired(entry.value(), now) {
stale_keys.push(key);
}
}
}
for stale_key in stale_keys {
state.remove(&stale_key);
}
if state.len() >= AUTH_PROBE_TRACK_MAX_ENTRIES {
if eviction_candidates.is_empty() {
return;
if state.len() < AUTH_PROBE_TRACK_MAX_ENTRIES {
break;
}
let idx = auth_probe_eviction_offset(peer_ip, now) % eviction_candidates.len();
let evict_key = eviction_candidates[idx];
let Some((evict_key, _, _)) = eviction_candidate else {
auth_probe_note_saturation(now);
return;
};
state.remove(&evict_key);
auth_probe_note_saturation(now);
}
}
@@ -186,6 +333,11 @@ fn clear_auth_probe_state_for_testing() {
if let Some(state) = AUTH_PROBE_STATE.get() {
state.clear();
}
if let Some(saturation) = AUTH_PROBE_SATURATION_STATE.get()
&& let Ok(mut guard) = saturation.lock()
{
*guard = None;
}
}
#[cfg(test)]
@@ -200,6 +352,16 @@ fn auth_probe_is_throttled_for_testing(peer_ip: IpAddr) -> bool {
auth_probe_is_throttled(peer_ip, Instant::now())
}
#[cfg(test)]
fn auth_probe_saturation_is_throttled_for_testing() -> bool {
auth_probe_saturation_is_throttled(Instant::now())
}
#[cfg(test)]
fn auth_probe_saturation_is_throttled_at_for_testing(now: Instant) -> bool {
auth_probe_saturation_is_throttled(now)
}
#[cfg(test)]
fn auth_probe_test_lock() -> &'static Mutex<()> {
static TEST_LOCK: OnceLock<Mutex<()>> = OnceLock::new();
@@ -385,7 +547,8 @@ where
{
debug!(peer = %peer, handshake_len = handshake.len(), "Processing TLS handshake");
if auth_probe_is_throttled(peer.ip(), Instant::now()) {
let throttle_now = Instant::now();
if auth_probe_should_apply_preauth_throttle(peer.ip(), throttle_now) {
maybe_apply_server_hello_delay(config).await;
debug!(peer = %peer, "TLS handshake rejected by pre-auth probe throttle");
return HandshakeResult::BadClient { reader, writer };
@@ -397,7 +560,8 @@ where
return HandshakeResult::BadClient { reader, writer };
}
let secrets = decode_user_secrets(config, None);
let client_sni = tls::extract_sni_from_client_hello(handshake);
let secrets = decode_user_secrets(config, client_sni.as_deref());
let validation = match tls::validate_tls_handshake_with_replay_window(
handshake,
@@ -438,9 +602,9 @@ where
let cached = if config.censorship.tls_emulation {
if let Some(cache) = tls_cache.as_ref() {
let selected_domain = if let Some(sni) = tls::extract_sni_from_client_hello(handshake) {
let selected_domain = if let Some(sni) = client_sni.as_ref() {
if cache.contains_domain(&sni).await {
sni
sni.clone()
} else {
config.censorship.tls_domain.clone()
}
@@ -554,7 +718,8 @@ where
{
trace!(peer = %peer, handshake = ?hex::encode(handshake), "MTProto handshake bytes");
if auth_probe_is_throttled(peer.ip(), Instant::now()) {
let throttle_now = Instant::now();
if auth_probe_should_apply_preauth_throttle(peer.ip(), throttle_now) {
maybe_apply_server_hello_delay(config).await;
debug!(peer = %peer, "MTProto handshake rejected by pre-auth probe throttle");
return HandshakeResult::BadClient { reader, writer };

File diff suppressed because it is too large


@@ -24,8 +24,36 @@ const MASK_TIMEOUT: Duration = Duration::from_millis(50);
const MASK_RELAY_TIMEOUT: Duration = Duration::from_secs(60);
#[cfg(test)]
const MASK_RELAY_TIMEOUT: Duration = Duration::from_millis(200);
#[cfg(not(test))]
const MASK_RELAY_IDLE_TIMEOUT: Duration = Duration::from_secs(5);
#[cfg(test)]
const MASK_RELAY_IDLE_TIMEOUT: Duration = Duration::from_millis(100);
const MASK_BUFFER_SIZE: usize = 8192;
async fn copy_with_idle_timeout<R, W>(reader: &mut R, writer: &mut W)
where
R: AsyncRead + Unpin,
W: AsyncWrite + Unpin,
{
let mut buf = [0u8; MASK_BUFFER_SIZE];
loop {
let read_res = timeout(MASK_RELAY_IDLE_TIMEOUT, reader.read(&mut buf)).await;
let n = match read_res {
Ok(Ok(n)) => n,
Ok(Err(_)) | Err(_) => break,
};
if n == 0 {
break;
}
let write_res = timeout(MASK_RELAY_IDLE_TIMEOUT, writer.write_all(&buf[..n])).await;
match write_res {
Ok(Ok(())) => {}
Ok(Err(_)) | Err(_) => break,
}
}
}
async fn write_proxy_header_with_timeout<W>(mask_write: &mut W, header: &[u8]) -> bool
where
W: AsyncWrite + Unpin,
@@ -264,11 +292,11 @@
let _ = tokio::join!(
async {
let _ = tokio::io::copy(&mut reader, &mut mask_write).await;
copy_with_idle_timeout(&mut reader, &mut mask_write).await;
let _ = mask_write.shutdown().await;
},
async {
let _ = tokio::io::copy(&mut mask_read, &mut writer).await;
copy_with_idle_timeout(&mut mask_read, &mut writer).await;
let _ = writer.shutdown().await;
}
);


@@ -234,8 +234,9 @@ async fn backend_connect_refusal_waits_mask_connect_budget_before_fallback() {
let local_addr: SocketAddr = "127.0.0.1:443".parse().unwrap();
let probe = b"GET /probe HTTP/1.1\r\nHost: x\r\n\r\n";
// Keep reader open so fallback path does not terminate immediately on EOF.
let (_client_reader_side, client_reader) = duplex(256);
// Close client reader immediately to force the refusal path to rely on masking budget timing.
let (client_reader_side, client_reader) = duplex(256);
drop(client_reader_side);
let (_client_visible_reader, client_visible_writer) = duplex(256);
let beobachten = BeobachtenStore::new();
@@ -890,6 +891,59 @@ async fn mask_disabled_slowloris_connection_is_closed_by_consume_timeout() {
timeout(Duration::from_secs(1), task).await.unwrap().unwrap();
}
#[tokio::test]
async fn mask_enabled_idle_relay_is_closed_by_idle_timeout_before_global_relay_timeout() {
let listener = TcpListener::bind("127.0.0.1:0").await.unwrap();
let backend_addr = listener.local_addr().unwrap();
let probe = b"GET /idle HTTP/1.1\r\nHost: front.example\r\n\r\n".to_vec();
let accept_task = tokio::spawn({
let probe = probe.clone();
async move {
let (mut stream, _) = listener.accept().await.unwrap();
let mut received = vec![0u8; probe.len()];
stream.read_exact(&mut received).await.unwrap();
assert_eq!(received, probe);
sleep(Duration::from_millis(300)).await;
}
});
let mut config = ProxyConfig::default();
config.general.beobachten = false;
config.censorship.mask = true;
config.censorship.mask_host = Some("127.0.0.1".to_string());
config.censorship.mask_port = backend_addr.port();
config.censorship.mask_unix_sock = None;
config.censorship.mask_proxy_protocol = 0;
let peer: SocketAddr = "198.51.100.34:45456".parse().unwrap();
let local_addr: SocketAddr = "127.0.0.1:443".parse().unwrap();
let (_client_reader_side, client_reader) = duplex(512);
let (_client_visible_reader, client_visible_writer) = duplex(512);
let beobachten = BeobachtenStore::new();
let started = Instant::now();
handle_bad_client(
client_reader,
client_visible_writer,
&probe,
peer,
local_addr,
&config,
&beobachten,
)
.await;
let elapsed = started.elapsed();
assert!(
elapsed < Duration::from_millis(150),
"idle unauth relay must terminate on idle timeout instead of waiting for full relay timeout"
);
accept_task.await.unwrap();
}
struct PendingWriter;
impl tokio::io::AsyncWrite for PendingWriter {
@@ -1250,3 +1304,166 @@ async fn timing_matrix_masking_classes_under_controlled_inputs() {
(reachable_mean as u128) / BUCKET_MS
);
}
#[tokio::test]
async fn backend_connect_refusal_completes_within_bounded_mask_budget() {
let temp_listener = TcpListener::bind("127.0.0.1:0").await.unwrap();
let unused_port = temp_listener.local_addr().unwrap().port();
drop(temp_listener);
let mut config = ProxyConfig::default();
config.general.beobachten = false;
config.censorship.mask = true;
config.censorship.mask_host = Some("127.0.0.1".to_string());
config.censorship.mask_port = unused_port;
config.censorship.mask_unix_sock = None;
config.censorship.mask_proxy_protocol = 0;
let peer: SocketAddr = "203.0.113.41:51001".parse().unwrap();
let local_addr: SocketAddr = "127.0.0.1:443".parse().unwrap();
let probe = b"GET /bounded HTTP/1.1\r\nHost: x\r\n\r\n";
let (_client_reader_side, client_reader) = duplex(256);
let (_client_visible_reader, client_visible_writer) = duplex(256);
let beobachten = BeobachtenStore::new();
let started = Instant::now();
handle_bad_client(
client_reader,
client_visible_writer,
probe,
peer,
local_addr,
&config,
&beobachten,
)
.await;
let elapsed = started.elapsed();
assert!(
elapsed >= Duration::from_millis(45),
"connect refusal path must respect minimum masking budget"
);
assert!(
elapsed < Duration::from_millis(500),
"connect refusal path must stay bounded and avoid unbounded stall"
);
}
#[tokio::test]
async fn reachable_backend_one_response_then_silence_is_cut_by_idle_timeout() {
let listener = TcpListener::bind("127.0.0.1:0").await.unwrap();
let backend_addr = listener.local_addr().unwrap();
let probe = b"GET /oneshot HTTP/1.1\r\nHost: front.example\r\n\r\n".to_vec();
let response = b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nOK".to_vec();
let accept_task = tokio::spawn({
let probe = probe.clone();
let response = response.clone();
async move {
let (mut stream, _) = listener.accept().await.unwrap();
let mut received = vec![0u8; probe.len()];
stream.read_exact(&mut received).await.unwrap();
assert_eq!(received, probe);
stream.write_all(&response).await.unwrap();
sleep(Duration::from_millis(300)).await;
}
});
let mut config = ProxyConfig::default();
config.general.beobachten = false;
config.censorship.mask = true;
config.censorship.mask_host = Some("127.0.0.1".to_string());
config.censorship.mask_port = backend_addr.port();
config.censorship.mask_unix_sock = None;
config.censorship.mask_proxy_protocol = 0;
let peer: SocketAddr = "203.0.113.42:51002".parse().unwrap();
let local_addr: SocketAddr = "127.0.0.1:443".parse().unwrap();
let (_client_reader_side, client_reader) = duplex(256);
let (mut client_visible_reader, client_visible_writer) = duplex(512);
let beobachten = BeobachtenStore::new();
let started = Instant::now();
handle_bad_client(
client_reader,
client_visible_writer,
&probe,
peer,
local_addr,
&config,
&beobachten,
)
.await;
let elapsed = started.elapsed();
let mut observed = vec![0u8; response.len()];
client_visible_reader.read_exact(&mut observed).await.unwrap();
assert_eq!(observed, response);
assert!(
elapsed < Duration::from_millis(190),
"idle backend silence after first response must be cut by relay idle timeout"
);
accept_task.await.unwrap();
}
#[tokio::test]
async fn adversarial_client_drip_feed_longer_than_idle_timeout_is_cut_off() {
let listener = TcpListener::bind("127.0.0.1:0").await.unwrap();
let backend_addr = listener.local_addr().unwrap();
let initial = b"GET /drip HTTP/1.1\r\nHost: front.example\r\n\r\n".to_vec();
let accept_task = tokio::spawn({
let initial = initial.clone();
async move {
let (mut stream, _) = listener.accept().await.unwrap();
let mut observed = vec![0u8; initial.len()];
stream.read_exact(&mut observed).await.unwrap();
assert_eq!(observed, initial);
let mut extra = [0u8; 1];
let read_res = timeout(Duration::from_millis(220), stream.read_exact(&mut extra)).await;
assert!(
read_res.is_err() || read_res.unwrap().is_err(),
"drip-fed post-probe byte arriving after idle timeout should not be forwarded"
);
}
});
let mut config = ProxyConfig::default();
config.general.beobachten = false;
config.censorship.mask = true;
config.censorship.mask_host = Some("127.0.0.1".to_string());
config.censorship.mask_port = backend_addr.port();
config.censorship.mask_unix_sock = None;
config.censorship.mask_proxy_protocol = 0;
let peer: SocketAddr = "203.0.113.43:51003".parse().unwrap();
let local_addr: SocketAddr = "127.0.0.1:443".parse().unwrap();
let (mut client_writer_side, client_reader) = duplex(256);
let (_client_visible_reader, client_visible_writer) = duplex(256);
let beobachten = BeobachtenStore::new();
let relay_task = tokio::spawn(async move {
handle_bad_client(
client_reader,
client_visible_writer,
&initial,
peer,
local_addr,
&config,
&beobachten,
)
.await;
});
sleep(Duration::from_millis(160)).await;
let _ = client_writer_side.write_all(b"X").await;
drop(client_writer_side);
timeout(Duration::from_secs(1), relay_task).await.unwrap().unwrap();
accept_task.await.unwrap();
}
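The two tests above pin down the relay's idle-timeout behavior: one backend response is forwarded, then further silence is cut off instead of stalling. A std-only sketch of the same idea, using a blocking socket with `set_read_timeout` in place of the crate's async relay (all names and time budgets here are illustrative, not the crate's):

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::time::{Duration, Instant};

fn main() {
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap();
    let server = std::thread::spawn(move || {
        let (mut s, _) = listener.accept().unwrap();
        s.write_all(b"OK").unwrap();
        // Backend goes silent after the first response.
        std::thread::sleep(Duration::from_millis(300));
    });
    let mut client = TcpStream::connect(addr).unwrap();
    // Illustrative idle budget: reads are bounded at 100 ms.
    client
        .set_read_timeout(Some(Duration::from_millis(100)))
        .unwrap();
    let mut buf = [0u8; 2];
    client.read_exact(&mut buf).unwrap();
    assert_eq!(&buf, b"OK", "first response must be forwarded");
    let started = Instant::now();
    let res = client.read(&mut buf); // backend is silent now
    let elapsed = started.elapsed();
    // The read is cut by the timeout (error) or EOF, well before the
    // backend's 300 ms stall would have ended.
    assert!(res.is_err() || res.unwrap() == 0);
    assert!(elapsed < Duration::from_millis(250));
    server.join().unwrap();
}
```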


@ -1,4 +1,5 @@
use std::collections::hash_map::DefaultHasher;
use std::collections::hash_map::RandomState;
use std::hash::BuildHasher;
use std::hash::{Hash, Hasher};
use std::net::{IpAddr, SocketAddr};
use std::sync::atomic::{AtomicU64, Ordering};
@ -41,6 +42,7 @@ const C2ME_SENDER_FAIRNESS_BUDGET: usize = 32;
const ME_D2C_FLUSH_BATCH_MAX_FRAMES_MIN: usize = 1;
const ME_D2C_FLUSH_BATCH_MAX_BYTES_MIN: usize = 4096;
static DESYNC_DEDUP: OnceLock<DashMap<u64, Instant>> = OnceLock::new();
static DESYNC_HASHER: OnceLock<RandomState> = OnceLock::new();
struct RelayForensicsState {
trace_id: u64,
@ -80,7 +82,8 @@ impl MeD2cFlushPolicy {
}
fn hash_value<T: Hash>(value: &T) -> u64 {
let mut hasher = DefaultHasher::new();
let state = DESYNC_HASHER.get_or_init(RandomState::new);
let mut hasher = state.build_hasher();
value.hash(&mut hasher);
hasher.finish()
}
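The hunk above replaces the fixed-key `DefaultHasher` with a process-global `RandomState`, so dedup keys stay stable within a process (cache lookups agree) but are not precomputable across processes (attackers cannot mint colliding keys offline). A minimal std-only sketch of the pattern, with illustrative identifiers:

```rust
use std::collections::hash_map::RandomState;
use std::hash::{BuildHasher, Hash, Hasher};
use std::sync::OnceLock;

// One randomly keyed hasher state per process: each call builds a
// fresh Hasher from the same SipHash keys.
static KEYED: OnceLock<RandomState> = OnceLock::new();

fn keyed_hash<T: Hash>(value: &T) -> u64 {
    let state = KEYED.get_or_init(RandomState::new);
    let mut hasher = state.build_hasher();
    value.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let a = keyed_hash(&("user", 42u64));
    let b = keyed_hash(&("user", 42u64));
    assert_eq!(a, b, "hash must be stable within one process");
}
```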
@ -106,12 +109,17 @@ fn should_emit_full_desync(key: u64, all_full: bool, now: Instant) -> bool {
if dedup.len() >= DESYNC_DEDUP_MAX_ENTRIES {
let mut stale_keys = Vec::new();
let mut eviction_candidate = None;
let mut oldest_candidate: Option<(u64, Instant)> = None;
for entry in dedup.iter().take(DESYNC_DEDUP_PRUNE_SCAN_LIMIT) {
if eviction_candidate.is_none() {
eviction_candidate = Some(*entry.key());
let key = *entry.key();
let seen_at = *entry.value();
match oldest_candidate {
Some((_, oldest_seen)) if seen_at >= oldest_seen => {}
_ => oldest_candidate = Some((key, seen_at)),
}
if now.duration_since(*entry.value()) >= DESYNC_DEDUP_WINDOW {
if now.duration_since(seen_at) >= DESYNC_DEDUP_WINDOW {
stale_keys.push(*entry.key());
}
}
@ -119,7 +127,7 @@ fn should_emit_full_desync(key: u64, all_full: bool, now: Instant) -> bool {
dedup.remove(&stale_key);
}
if dedup.len() >= DESYNC_DEDUP_MAX_ENTRIES {
let Some(evict_key) = eviction_candidate else {
let Some((evict_key, _)) = oldest_candidate else {
return false;
};
dedup.remove(&evict_key);
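The eviction hunk scans at most `DESYNC_DEDUP_PRUNE_SCAN_LIMIT` entries, removes stale ones, and falls back to evicting the oldest scanned entry so a newcomer always fits under the hard cap. A simplified std-only sketch of that single-pass maintenance, with illustrative constants (the real code uses `DashMap` and also gates forensic emission):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

const MAX_ENTRIES: usize = 4; // illustrative cap
const SCAN_LIMIT: usize = 8; // illustrative prune-scan bound
const WINDOW: Duration = Duration::from_secs(60);

// Admit `key`, pruning stale entries and evicting the oldest scanned
// entry as a fallback when the map is full of fresh ones.
fn admit(map: &mut HashMap<u64, Instant>, key: u64, now: Instant) -> bool {
    if map.len() >= MAX_ENTRIES {
        let mut stale = Vec::new();
        let mut oldest: Option<(u64, Instant)> = None;
        for (&k, &seen) in map.iter().take(SCAN_LIMIT) {
            match oldest {
                Some((_, t)) if seen >= t => {}
                _ => oldest = Some((k, seen)),
            }
            if now.duration_since(seen) >= WINDOW {
                stale.push(k);
            }
        }
        for k in stale {
            map.remove(&k);
        }
        if map.len() >= MAX_ENTRIES {
            let Some((evict, _)) = oldest else { return false };
            map.remove(&evict);
        }
    }
    map.insert(key, now);
    true
}

fn main() {
    let mut map = HashMap::new();
    let now = Instant::now();
    for k in 0..MAX_ENTRIES as u64 {
        assert!(admit(&mut map, k, now));
    }
    // Full of fresh entries: newcomer still lands via one eviction.
    assert!(admit(&mut map, 999, now));
    assert!(map.len() <= MAX_ENTRIES);
    assert!(map.contains_key(&999));
}
```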


@ -8,7 +8,9 @@ use crate::proxy::route_mode::{RelayRouteMode, RouteRuntimeController};
use crate::stats::Stats;
use crate::stream::{BufferPool, CryptoReader, CryptoWriter, PooledBuffer};
use crate::transport::middle_proxy::MePool;
use std::collections::HashMap;
use rand::rngs::StdRng;
use rand::{Rng, SeedableRng};
use std::collections::{HashMap, HashSet};
use std::net::SocketAddr;
use std::sync::Arc;
use std::sync::atomic::AtomicU64;
@ -220,6 +222,190 @@ fn desync_dedup_full_cache_churn_stays_suppressed() {
}
}
#[test]
fn dedup_hash_is_stable_for_same_input_within_process() {
let sample = (
"scope_user",
hash_ip("198.51.100.7".parse().unwrap()),
ProtoTag::Secure,
);
let first = hash_value(&sample);
let second = hash_value(&sample);
assert_eq!(
first, second,
"dedup hash must be stable within a process for cache lookups"
);
}
#[test]
fn dedup_hash_resists_simple_collision_bursts_for_peer_ip_space() {
let mut seen = HashSet::new();
for octet in 1u16..=2048 {
let third = ((octet / 256) & 0xff) as u8;
let fourth = (octet & 0xff) as u8;
let ip = IpAddr::V4(std::net::Ipv4Addr::new(198, 51, third, fourth));
let key = hash_value(&(
"scope_user",
hash_ip(ip),
ProtoTag::Secure,
DESYNC_ERROR_CLASS,
));
seen.insert(key);
}
assert_eq!(
seen.len(),
2048,
"adversarial peer-IP burst should not collapse dedup keys via trivial collisions"
);
}
#[test]
fn light_fuzz_dedup_hash_collision_rate_stays_negligible() {
let mut rng = StdRng::seed_from_u64(0x9E37_79B9_A1B2_C3D4);
let mut seen = HashSet::new();
let samples = 8192usize;
for _ in 0..samples {
let user_seed: u64 = rng.random();
let peer_seed: u64 = rng.random();
let proto = if (peer_seed & 1) == 0 {
ProtoTag::Secure
} else {
ProtoTag::Intermediate
};
let key = hash_value(&(user_seed, peer_seed, proto, DESYNC_ERROR_CLASS));
seen.insert(key);
}
let collisions = samples - seen.len();
assert!(
collisions <= 1,
"light fuzz collision count should remain negligible for 64-bit dedup keys"
);
}
#[test]
fn stress_desync_dedup_churn_keeps_cache_hard_bounded() {
let _guard = desync_dedup_test_lock()
.lock()
.expect("desync dedup test lock must be available");
clear_desync_dedup_for_testing();
let now = Instant::now();
let total = DESYNC_DEDUP_MAX_ENTRIES + 8192;
for key in 0..total as u64 {
let emitted = should_emit_full_desync(key, false, now);
if key < DESYNC_DEDUP_MAX_ENTRIES as u64 {
assert!(emitted, "keys below cap must be admitted initially");
} else {
assert!(
!emitted,
"new keys above cap must stay suppressed under sustained churn"
);
}
}
let len = DESYNC_DEDUP
.get()
.expect("dedup cache must be initialized by stress run")
.len();
assert!(
len <= DESYNC_DEDUP_MAX_ENTRIES,
"dedup cache must stay bounded under stress churn"
);
}
#[test]
fn desync_dedup_full_cache_inserts_new_key_with_bounded_single_key_churn() {
let _guard = desync_dedup_test_lock()
.lock()
.expect("desync dedup test lock must be available");
clear_desync_dedup_for_testing();
let dedup = DESYNC_DEDUP.get_or_init(DashMap::new);
let base_now = Instant::now();
// Fill with fresh entries so stale-pruning does not apply.
for key in 0..DESYNC_DEDUP_MAX_ENTRIES as u64 {
dedup.insert(key, base_now - TokioDuration::from_millis(10));
}
let before_keys: std::collections::HashSet<u64> = dedup.iter().map(|e| *e.key()).collect();
let newcomer_key = u64::MAX;
let emitted = should_emit_full_desync(newcomer_key, false, base_now);
assert!(
!emitted,
"new entry under full fresh cache must stay suppressed"
);
assert!(
dedup.get(&newcomer_key).is_some(),
"new key must be inserted after bounded eviction"
);
let after_keys: std::collections::HashSet<u64> = dedup.iter().map(|e| *e.key()).collect();
let removed_count = before_keys.difference(&after_keys).count();
let added_count = after_keys.difference(&before_keys).count();
assert_eq!(
removed_count, 1,
"full-cache insertion must evict exactly one prior key"
);
assert_eq!(
added_count, 1,
"full-cache insertion must add exactly one newcomer key"
);
assert!(
dedup.len() <= DESYNC_DEDUP_MAX_ENTRIES,
"dedup cache must remain hard-bounded after full-cache churn"
);
}
#[test]
fn light_fuzz_desync_dedup_temporal_gate_behavior_is_stable() {
let _guard = desync_dedup_test_lock()
.lock()
.expect("desync dedup test lock must be available");
clear_desync_dedup_for_testing();
let key = 0xC0DE_CAFE_u64;
let start = Instant::now();
assert!(
should_emit_full_desync(key, false, start),
"first event for key must emit full forensic record"
);
// Deterministic pseudo-random time deltas around dedup window edge.
let mut s: u64 = 0x1234_5678_9ABC_DEF0;
for _ in 0..2048 {
s ^= s << 7;
s ^= s >> 9;
s ^= s << 8;
let delta_ms = s % (DESYNC_DEDUP_WINDOW.as_millis() as u64 * 2 + 1);
let now = start + TokioDuration::from_millis(delta_ms);
let emitted = should_emit_full_desync(key, false, now);
if delta_ms < DESYNC_DEDUP_WINDOW.as_millis() as u64 {
assert!(
!emitted,
"events inside dedup window must remain suppressed"
);
} else {
// Once window elapsed for this key, at least one sample should re-emit and refresh.
if emitted {
return;
}
}
}
panic!("expected at least one post-window sample to re-emit forensic record");
}
fn make_forensics_state() -> RelayForensicsState {
RelayForensicsState {
trace_id: 1,
@ -1010,6 +1196,13 @@ async fn middle_relay_cutover_midflight_releases_route_gauge() {
relay_result.is_err(),
"cutover should terminate middle relay session"
);
assert!(
matches!(
relay_result,
Err(ProxyError::Proxy(ref msg)) if msg == ROUTE_SWITCH_ERROR_MSG
),
"client-visible cutover error must stay generic and avoid route-internal metadata"
);
assert_eq!(
stats.get_current_connections_me(),
@ -1019,3 +1212,107 @@ async fn middle_relay_cutover_midflight_releases_route_gauge() {
drop(client_side);
}
#[tokio::test]
async fn middle_relay_cutover_storm_multi_session_keeps_generic_errors_and_releases_gauge() {
let session_count = 6usize;
let stats = Arc::new(Stats::new());
let me_pool = make_me_pool_for_abort_test(stats.clone()).await;
let config = Arc::new(ProxyConfig::default());
let buffer_pool = Arc::new(BufferPool::new());
let rng = Arc::new(SecureRandom::new());
let route_runtime = Arc::new(RouteRuntimeController::new(RelayRouteMode::Middle));
let route_snapshot = route_runtime.snapshot();
let mut relay_tasks = Vec::with_capacity(session_count);
let mut client_sides = Vec::with_capacity(session_count);
for idx in 0..session_count {
let (server_side, client_side) = duplex(64 * 1024);
client_sides.push(client_side);
let (server_reader, server_writer) = tokio::io::split(server_side);
let crypto_reader = make_crypto_reader(server_reader);
let crypto_writer = make_crypto_writer(server_writer);
let success = HandshakeSuccess {
user: format!("cutover-storm-middle-user-{idx}"),
dc_idx: 2,
proto_tag: ProtoTag::Intermediate,
dec_key: [0u8; 32],
dec_iv: 0,
enc_key: [0u8; 32],
enc_iv: 0,
peer: SocketAddr::new(
std::net::IpAddr::V4(std::net::Ipv4Addr::new(127, 0, 0, 1)),
52000 + idx as u16,
),
is_tls: false,
};
relay_tasks.push(tokio::spawn(handle_via_middle_proxy(
crypto_reader,
crypto_writer,
success,
me_pool.clone(),
stats.clone(),
config.clone(),
buffer_pool.clone(),
"127.0.0.1:443".parse().unwrap(),
rng.clone(),
route_runtime.subscribe(),
route_snapshot,
0xB000_0000 + idx as u64,
)));
}
tokio::time::timeout(TokioDuration::from_secs(4), async {
loop {
if stats.get_current_connections_me() == session_count as u64 {
break;
}
tokio::time::sleep(TokioDuration::from_millis(10)).await;
}
})
.await
.expect("all middle sessions must become active before cutover storm");
let route_runtime_flipper = route_runtime.clone();
let flipper = tokio::spawn(async move {
for step in 0..64u32 {
let mode = if (step & 1) == 0 {
RelayRouteMode::Direct
} else {
RelayRouteMode::Middle
};
let _ = route_runtime_flipper.set_mode(mode);
tokio::time::sleep(TokioDuration::from_millis(15)).await;
}
});
for relay_task in relay_tasks {
let relay_result = tokio::time::timeout(TokioDuration::from_secs(10), relay_task)
.await
.expect("middle relay task must finish under cutover storm")
.expect("middle relay task must not panic");
assert!(
matches!(
relay_result,
Err(ProxyError::Proxy(ref msg)) if msg == ROUTE_SWITCH_ERROR_MSG
),
"storm-cutover termination must remain generic for all middle sessions"
);
}
flipper.abort();
let _ = flipper.await;
assert_eq!(
stats.get_current_connections_me(),
0,
"middle route gauge must return to zero after cutover storm"
);
drop(client_sides);
}


@ -1,10 +1,10 @@
use std::sync::Arc;
use std::sync::atomic::{AtomicU8, AtomicU64, Ordering};
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::{Duration, SystemTime, UNIX_EPOCH};
use tokio::sync::watch;
pub(crate) const ROUTE_SWITCH_ERROR_MSG: &str = "Route mode switched by cutover";
pub(crate) const ROUTE_SWITCH_ERROR_MSG: &str = "Session terminated";
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
#[repr(u8)]
@ -14,17 +14,6 @@ pub(crate) enum RelayRouteMode {
}
impl RelayRouteMode {
pub(crate) fn as_u8(self) -> u8 {
self as u8
}
pub(crate) fn from_u8(value: u8) -> Self {
match value {
1 => Self::Middle,
_ => Self::Direct,
}
}
pub(crate) fn as_str(self) -> &'static str {
match self {
Self::Direct => "direct",
@ -41,8 +30,6 @@ pub(crate) struct RouteCutoverState {
#[derive(Clone)]
pub(crate) struct RouteRuntimeController {
mode: Arc<AtomicU8>,
generation: Arc<AtomicU64>,
direct_since_epoch_secs: Arc<AtomicU64>,
tx: watch::Sender<RouteCutoverState>,
}
@ -60,18 +47,13 @@ impl RouteRuntimeController {
0
};
Self {
mode: Arc::new(AtomicU8::new(initial_mode.as_u8())),
generation: Arc::new(AtomicU64::new(0)),
direct_since_epoch_secs: Arc::new(AtomicU64::new(direct_since_epoch_secs)),
tx,
}
}
pub(crate) fn snapshot(&self) -> RouteCutoverState {
RouteCutoverState {
mode: RelayRouteMode::from_u8(self.mode.load(Ordering::Relaxed)),
generation: self.generation.load(Ordering::Relaxed),
}
*self.tx.borrow()
}
pub(crate) fn subscribe(&self) -> watch::Receiver<RouteCutoverState> {
@ -84,20 +66,29 @@ impl RouteRuntimeController {
}
pub(crate) fn set_mode(&self, mode: RelayRouteMode) -> Option<RouteCutoverState> {
let previous = self.mode.swap(mode.as_u8(), Ordering::Relaxed);
if previous == mode.as_u8() {
let mut next = None;
let changed = self.tx.send_if_modified(|state| {
if state.mode == mode {
return false;
}
state.mode = mode;
state.generation = state.generation.saturating_add(1);
next = Some(*state);
true
});
if !changed {
return None;
}
if matches!(mode, RelayRouteMode::Direct) {
self.direct_since_epoch_secs
.store(now_epoch_secs(), Ordering::Relaxed);
} else {
self.direct_since_epoch_secs.store(0, Ordering::Relaxed);
}
let generation = self.generation.fetch_add(1, Ordering::Relaxed) + 1;
let next = RouteCutoverState { mode, generation };
self.tx.send_replace(next);
Some(next)
next
}
}
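The rewritten `set_mode` makes the watch channel's value the single source of truth via `send_if_modified`, instead of mirroring state in separate atomics. A std-only analogue using a mutex, showing the invariant it enforces: the generation bumps only on real transitions, so snapshots and subscribers can never disagree (identifiers here are illustrative):

```rust
use std::sync::Mutex;

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum Mode { Direct, Middle }

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct State { mode: Mode, generation: u64 }

// One protected value plays the role of the watch channel's payload.
struct Controller { state: Mutex<State> }

impl Controller {
    fn new(mode: Mode) -> Self {
        Self { state: Mutex::new(State { mode, generation: 0 }) }
    }
    fn snapshot(&self) -> State {
        *self.state.lock().unwrap()
    }
    fn set_mode(&self, mode: Mode) -> Option<State> {
        let mut s = self.state.lock().unwrap();
        if s.mode == mode {
            return None; // idempotent: no cutover event, no bump
        }
        s.mode = mode;
        s.generation = s.generation.saturating_add(1);
        Some(*s)
    }
}

fn main() {
    let c = Controller::new(Mode::Direct);
    assert!(c.set_mode(Mode::Direct).is_none());
    let next = c.set_mode(Mode::Middle).unwrap();
    assert_eq!(next.generation, 1);
    assert_eq!(c.snapshot(), next);
}
```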
@ -110,10 +101,10 @@ fn now_epoch_secs() -> u64 {
pub(crate) fn is_session_affected_by_cutover(
current: RouteCutoverState,
_session_mode: RelayRouteMode,
session_mode: RelayRouteMode,
session_generation: u64,
) -> bool {
current.generation > session_generation
current.generation > session_generation && current.mode != session_mode
}
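The changed predicate couples the generation check with a mode check, so sessions already on the final route survive oscillations. A self-contained restatement of that oracle (types mirror the hunk but are local to the sketch):

```rust
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum Mode { Direct, Middle }

#[derive(Clone, Copy)]
struct CutoverState { mode: Mode, generation: u64 }

// A session is cut over only when a newer generation exists AND the
// final mode differs from the session's own route mode.
fn affected(current: CutoverState, session_mode: Mode, session_gen: u64) -> bool {
    current.generation > session_gen && current.mode != session_mode
}

fn main() {
    let cur = CutoverState { mode: Mode::Direct, generation: 2 };
    assert!(!affected(cur, Mode::Direct, 0)); // same final mode: survives
    assert!(affected(cur, Mode::Middle, 0)); // differing mode, newer gen: cut
    assert!(!affected(cur, Mode::Middle, 2)); // equal generation: never cut
}
```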
pub(crate) fn affected_cutover_state(
@ -140,3 +131,7 @@ pub(crate) fn cutover_stagger_delay(session_id: u64, generation: u64) -> Duratio
let ms = 1000 + (value % 1000);
Duration::from_millis(ms)
}
#[cfg(test)]
#[path = "route_mode_security_tests.rs"]
mod security_tests;
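`cutover_stagger_delay` folds a mixed value into `1000 + (value % 1000)`, pinning every disconnect delay to the fixed 1000..=1999 ms window the security tests assert. A sketch of that shape with an illustrative splitmix64-style mixer (not the crate's actual hash):

```rust
use std::time::Duration;

// Mix session id and generation, then fold into a fixed coarse
// window so staggered disconnects avoid long-tail timing spikes.
fn stagger_delay(session_id: u64, generation: u64) -> Duration {
    let mut v = session_id ^ generation.wrapping_mul(0x9E37_79B9_7F4A_7C15);
    v ^= v >> 30;
    v = v.wrapping_mul(0xBF58_476D_1CE4_E5B9);
    v ^= v >> 27;
    Duration::from_millis(1000 + (v % 1000))
}

fn main() {
    let d = stagger_delay(0xdead_beef, 7);
    assert_eq!(d, stagger_delay(0xdead_beef, 7)); // deterministic
    let ms = d.as_millis() as u64;
    assert!((1000..=1999).contains(&ms)); // always in the fixed window
}
```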


@ -0,0 +1,340 @@
use super::*;
use rand::{Rng, SeedableRng};
use rand::rngs::StdRng;
use std::sync::Arc;
use std::sync::atomic::{AtomicU64, Ordering};
#[test]
fn cutover_stagger_delay_is_deterministic_for_same_inputs() {
let d1 = cutover_stagger_delay(0x0123_4567_89ab_cdef, 42);
let d2 = cutover_stagger_delay(0x0123_4567_89ab_cdef, 42);
assert_eq!(
d1, d2,
"stagger delay must be deterministic for identical session/generation inputs"
);
}
#[test]
fn cutover_stagger_delay_stays_within_budget_bounds() {
// Black-hat model: censors trigger many cutovers and correlate disconnect timing.
// Keep delay inside a narrow coarse window to avoid long-tail spikes.
for generation in [0u64, 1, 2, 3, 16, 128, u32::MAX as u64, u64::MAX] {
for session_id in [
0u64,
1,
2,
0xdead_beef,
0xfeed_face_cafe_babe,
u64::MAX,
] {
let delay = cutover_stagger_delay(session_id, generation);
assert!(
(1000..=1999).contains(&delay.as_millis()),
"stagger delay must remain in fixed 1000..=1999ms budget"
);
}
}
}
#[test]
fn cutover_stagger_delay_changes_with_generation_for_same_session() {
let session_id = 0x0123_4567_89ab_cdef;
let first = cutover_stagger_delay(session_id, 100);
let second = cutover_stagger_delay(session_id, 101);
assert_ne!(
first, second,
"adjacent cutover generations should decorrelate disconnect delays"
);
}
#[test]
fn route_runtime_set_mode_is_idempotent_for_same_mode() {
let runtime = RouteRuntimeController::new(RelayRouteMode::Direct);
let first = runtime.snapshot();
let changed = runtime.set_mode(RelayRouteMode::Direct);
let second = runtime.snapshot();
assert!(
changed.is_none(),
"setting already-active mode must not produce a cutover event"
);
assert_eq!(
first.generation, second.generation,
"idempotent mode set must not bump generation"
);
}
#[test]
fn affected_cutover_state_triggers_only_for_newer_generation() {
let runtime = RouteRuntimeController::new(RelayRouteMode::Direct);
let rx = runtime.subscribe();
let initial = runtime.snapshot();
assert!(
affected_cutover_state(&rx, RelayRouteMode::Direct, initial.generation).is_none(),
"current generation must not be considered a cutover for existing session"
);
let next = runtime
.set_mode(RelayRouteMode::Middle)
.expect("mode change must produce cutover state");
let seen = affected_cutover_state(&rx, RelayRouteMode::Direct, initial.generation)
.expect("newer generation must be observed as cutover");
assert_eq!(seen.generation, next.generation);
assert_eq!(seen.mode, RelayRouteMode::Middle);
}
#[test]
fn integration_watch_and_snapshot_follow_same_transition_sequence() {
let runtime = RouteRuntimeController::new(RelayRouteMode::Direct);
let rx = runtime.subscribe();
let sequence = [
RelayRouteMode::Middle,
RelayRouteMode::Middle,
RelayRouteMode::Direct,
RelayRouteMode::Direct,
RelayRouteMode::Middle,
];
let mut expected_generation = 0u64;
let mut expected_mode = RelayRouteMode::Direct;
for target in sequence {
let changed = runtime.set_mode(target);
if target == expected_mode {
assert!(changed.is_none(), "idempotent transition must return none");
} else {
expected_mode = target;
expected_generation = expected_generation.saturating_add(1);
let emitted = changed.expect("real transition must emit cutover state");
assert_eq!(emitted.mode, expected_mode);
assert_eq!(emitted.generation, expected_generation);
}
let snap = runtime.snapshot();
let watched = *rx.borrow();
assert_eq!(snap, watched, "snapshot and watch state must stay aligned");
assert_eq!(snap.mode, expected_mode);
assert_eq!(snap.generation, expected_generation);
}
}
#[test]
fn session_is_not_affected_when_mode_matches_even_if_generation_advanced() {
let session_mode = RelayRouteMode::Direct;
let current = RouteCutoverState {
mode: RelayRouteMode::Direct,
generation: 2,
};
let session_generation = 0;
assert!(
!is_session_affected_by_cutover(current, session_mode, session_generation),
"session on matching final route mode should not be force-cut over on intermediate generation bumps"
);
}
#[test]
fn cutover_predicate_rejects_equal_generation_even_if_mode_differs() {
let current = RouteCutoverState {
mode: RelayRouteMode::Middle,
generation: 77,
};
assert!(
!is_session_affected_by_cutover(current, RelayRouteMode::Direct, 77),
"equal generation must never trigger cutover regardless of mode mismatch"
);
}
#[test]
fn adversarial_route_oscillation_only_cuts_over_sessions_with_different_final_mode() {
let runtime = RouteRuntimeController::new(RelayRouteMode::Direct);
let rx = runtime.subscribe();
let session_generation = runtime.snapshot().generation;
runtime
.set_mode(RelayRouteMode::Middle)
.expect("direct->middle must transition");
runtime
.set_mode(RelayRouteMode::Direct)
.expect("middle->direct must transition");
assert!(
affected_cutover_state(&rx, RelayRouteMode::Direct, session_generation).is_none(),
"direct session should survive when final mode returns to direct"
);
assert!(
affected_cutover_state(&rx, RelayRouteMode::Middle, session_generation).is_some(),
"middle session should be cut over when final mode is direct"
);
}
#[test]
fn light_fuzz_cutover_predicate_matches_reference_oracle() {
let mut rng = StdRng::seed_from_u64(0xC0DEC0DE5EED);
for _ in 0..20_000 {
let current = RouteCutoverState {
mode: if rng.random::<bool>() {
RelayRouteMode::Direct
} else {
RelayRouteMode::Middle
},
generation: rng.random_range(0u64..1_000_000),
};
let session_mode = if rng.random::<bool>() {
RelayRouteMode::Direct
} else {
RelayRouteMode::Middle
};
let session_generation = rng.random_range(0u64..1_000_000);
let expected = current.generation > session_generation && current.mode != session_mode;
let actual = is_session_affected_by_cutover(current, session_mode, session_generation);
assert_eq!(
actual, expected,
"cutover predicate must match mode-aware generation oracle"
);
}
}
#[test]
fn light_fuzz_set_mode_generation_tracks_only_real_transitions() {
let runtime = RouteRuntimeController::new(RelayRouteMode::Direct);
let mut rng = StdRng::seed_from_u64(0x0DDC0FFE);
let mut expected_mode = RelayRouteMode::Direct;
let mut expected_generation = 0u64;
for _ in 0..10_000 {
let candidate = if rng.random::<bool>() {
RelayRouteMode::Direct
} else {
RelayRouteMode::Middle
};
let changed = runtime.set_mode(candidate);
if candidate == expected_mode {
assert!(changed.is_none(), "idempotent set_mode must not emit cutover state");
} else {
expected_mode = candidate;
expected_generation = expected_generation.saturating_add(1);
let next = changed.expect("mode transition must emit cutover state");
assert_eq!(next.mode, expected_mode);
assert_eq!(next.generation, expected_generation);
}
}
let final_state = runtime.snapshot();
assert_eq!(final_state.mode, expected_mode);
assert_eq!(final_state.generation, expected_generation);
}
#[test]
fn stress_snapshot_and_watch_state_remain_consistent_under_concurrent_switch_storm() {
let runtime = Arc::new(RouteRuntimeController::new(RelayRouteMode::Direct));
std::thread::scope(|scope| {
let mut writers = Vec::new();
for worker in 0..4usize {
let runtime = Arc::clone(&runtime);
writers.push(scope.spawn(move || {
for step in 0..20_000usize {
let mode = if (worker + step) % 2 == 0 {
RelayRouteMode::Direct
} else {
RelayRouteMode::Middle
};
let _ = runtime.set_mode(mode);
}
}));
}
for writer in writers {
writer
.join()
.expect("route mode writer thread must not panic");
}
let rx = runtime.subscribe();
for _ in 0..128 {
assert_eq!(
runtime.snapshot(),
*rx.borrow(),
"snapshot and watch state must converge after concurrent set_mode churn"
);
std::thread::yield_now();
}
});
}
#[test]
fn stress_concurrent_transition_count_matches_final_generation() {
let runtime = Arc::new(RouteRuntimeController::new(RelayRouteMode::Direct));
let successful_transitions = Arc::new(AtomicU64::new(0));
std::thread::scope(|scope| {
let mut workers = Vec::new();
for worker in 0..6usize {
let runtime = Arc::clone(&runtime);
let successful_transitions = Arc::clone(&successful_transitions);
workers.push(scope.spawn(move || {
let mut state = (worker as u64 + 1).wrapping_mul(0x9E37_79B9_7F4A_7C15);
for _ in 0..25_000usize {
state ^= state << 7;
state ^= state >> 9;
state ^= state << 8;
let mode = if (state & 1) == 0 {
RelayRouteMode::Direct
} else {
RelayRouteMode::Middle
};
if runtime.set_mode(mode).is_some() {
successful_transitions.fetch_add(1, Ordering::Relaxed);
}
}
}));
}
for worker in workers {
worker.join().expect("route mode transition worker must not panic");
}
});
let final_state = runtime.snapshot();
assert_eq!(
final_state.generation,
successful_transitions.load(Ordering::Relaxed),
"final generation must equal number of accepted mode transitions"
);
assert_eq!(
final_state,
*runtime.subscribe().borrow(),
"watch and snapshot state must match after concurrent transition accounting"
);
}
#[test]
fn light_fuzz_cutover_stagger_delay_distribution_stays_in_fixed_window() {
// Deterministic xorshift fuzzing keeps this test stable across runs.
let mut s: u64 = 0x9E37_79B9_7F4A_7C15;
for _ in 0..20_000 {
s ^= s << 7;
s ^= s >> 9;
s ^= s << 8;
let session_id = s;
s ^= s << 7;
s ^= s >> 9;
s ^= s << 8;
let generation = s;
let delay = cutover_stagger_delay(session_id, generation);
assert!(
(1000..=1999).contains(&delay.as_millis()),
"fuzzed inputs must always map into fixed stagger window"
);
}
}


@ -103,7 +103,7 @@ pub fn build_emulated_server_hello(
cached: &CachedTlsData,
use_full_cert_payload: bool,
rng: &SecureRandom,
alpn: Option<Vec<u8>>,
_alpn: Option<Vec<u8>>,
new_session_tickets: u8,
) -> Vec<u8> {
// --- ServerHello ---
@ -117,15 +117,6 @@ pub fn build_emulated_server_hello(
extensions.extend_from_slice(&0x002bu16.to_be_bytes());
extensions.extend_from_slice(&(2u16).to_be_bytes());
extensions.extend_from_slice(&0x0304u16.to_be_bytes());
if let Some(alpn_proto) = &alpn {
extensions.extend_from_slice(&0x0010u16.to_be_bytes());
let list_len: u16 = 1 + alpn_proto.len() as u16;
let ext_len: u16 = 2 + list_len;
extensions.extend_from_slice(&ext_len.to_be_bytes());
extensions.extend_from_slice(&list_len.to_be_bytes());
extensions.push(alpn_proto.len() as u8);
extensions.extend_from_slice(alpn_proto);
}
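For reference, the branch removed above encoded the ALPN ServerHello extension per RFC 7301. A standalone sketch of that wire layout (lengths are big-endian `u16` except the one-byte protocol-name length):

```rust
// extension_type (0x0010) | ext_len | protocol_name_list len |
// name len (u8) | name bytes
fn encode_alpn_ext(proto: &[u8]) -> Vec<u8> {
    let mut out = Vec::new();
    out.extend_from_slice(&0x0010u16.to_be_bytes()); // ALPN extension type
    let list_len = 1 + proto.len() as u16; // 1-byte length prefix + name
    let ext_len = 2 + list_len; // 2-byte protocol_name_list length
    out.extend_from_slice(&ext_len.to_be_bytes());
    out.extend_from_slice(&list_len.to_be_bytes());
    out.push(proto.len() as u8);
    out.extend_from_slice(proto);
    out
}

fn main() {
    let ext = encode_alpn_ext(b"h2");
    // 2 (type) + 2 (ext len) + 2 (list len) + 1 (name len) + 2 (name)
    assert_eq!(ext.len(), 9);
    assert_eq!(&ext[..2], &[0x00, 0x10]);
    assert_eq!(ext[6], 2); // "h2" name length
}
```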
let extensions_len = extensions.len() as u16;
let body_len = 2 + // version