Compare commits

...

30 Commits

Author SHA1 Message Date
Maxim Myalin 062464175e
Merge branch 'main' into feat/shadowsocks-upstream 2026-03-18 12:38:23 +03:00
Maxim Myalin a5983c17d3
Add Docker build context ignore file 2026-03-18 12:36:48 +03:00
Maxim Myalin def42f0baa
Add Shadowsocks upstream support 2026-03-18 12:36:44 +03:00
Alexey 30ba41eb47
Merge pull request #479 from telemt/bump
Update Cargo.toml
2026-03-18 11:57:25 +03:00
Alexey 42f946f29e
Update Cargo.toml 2026-03-18 11:57:09 +03:00
Alexey c53d7951b5
Merge pull request #468 from temandroid/main
feat: add Telemt Control API Python simple client with CLI
2026-03-18 11:56:32 +03:00
Alexey f36e264093
Merge pull request #477 from Dimasssss/CONFIG_PARAMS.md
Create CONFIG_PARAMS.en.md
2026-03-18 11:56:17 +03:00
Alexey a3bdf64353
ME Coverage Ratio in API + as Draining Factor: merge pull request #478 from telemt/flow-api
ME Coverage Ratio in API + as Draining Factor
2026-03-18 11:56:01 +03:00
Alexey 2aa7ea5137
ME Coverage Ratio in API + as Draining Factor 2026-03-18 11:46:13 +03:00
Dimasssss 462c927da6
Create CONFIG_PARAMS.en.md 2026-03-18 10:53:09 +03:00
Alexey cb87b2eac3
Adaptive Buffers + Session Eviction Method: merge pull request #475 from telemt/flow-buffers
Adaptive Buffers + Session Eviction Method
2026-03-18 10:52:22 +03:00
Alexey 3739f38440
Adaptive Buffers + Session Eviction Method 2026-03-18 10:49:02 +03:00
TEMAndroid 8e96039a1c
Merge branch 'telemt:main' into main 2026-03-17 20:09:50 +03:00
TEMAndroid 36b360dfb6
feat: add Telemt Control API Python simple client with CLI
Stdlib-only HTTP client covering all /v1 endpoints with argparse CLI.
Supports If-Match concurrency, typed errors, user CRUD, and all runtime/stats routes.

Usage: ./telemt_api.py help

AI-Generated from API.md. 
Partially tested. 
Use with caution...
2026-03-17 20:09:36 +03:00
Alexey 5dd0c47f14
Merge pull request #464 from temandroid/patch-1
feat(zabbix): add graphs to Telemt template
2026-03-17 18:53:07 +03:00
TEMAndroid 4739083f57
feat(zabbix): add graphs to Telemt template
- Add per-user graph prototypes (Connections, IPs, Traffic, Messages)
- Add server-level graphs (Connections, Uptime, ME Keepalive, ME Reconnects,
  ME Route Drops, ME Writer Pool/Removals, Desync, Upstream, Refill)
2026-03-17 18:24:57 +03:00
Alexey 37a31c13cb
Merge pull request #460 from telemt/bump
Update Cargo.toml
2026-03-17 16:31:46 +03:00
Alexey 35bca7d4cc
Update Cargo.toml 2026-03-17 16:31:32 +03:00
Alexey f39d317d93
Merge pull request #459 from telemt/flow-perf
Flow perf
2026-03-17 16:28:59 +03:00
Alexey d4d93aabf5
Merge pull request #458 from DavidOsipov/ME-draining-fix-3.3.19
Add health monitoring tests for draining writers
2026-03-17 16:17:41 +03:00
David Osipov c9271d9083
Add health monitoring tests for draining writers
- Introduced adversarial tests to validate the behavior of the health monitoring system under various conditions, including the management of draining writers.
- Implemented integration tests to ensure the health monitor correctly handles expired and empty draining writers.
- Added regression tests to verify the functionality of the draining writers' cleanup process, ensuring it adheres to the defined thresholds and budgets.
- Updated the module structure to include the new test files for better organization and maintainability.
2026-03-17 17:11:51 +04:00
Alexey 9c9ba4becd
Merge pull request #452 from Dimasssss/patch-1
Update TLS-F-TCP-S.ru.md
2026-03-17 15:27:43 +03:00
Dimasssss bd0cefdb12
Update TLS-F-TCP-S.ru.md 2026-03-17 11:56:56 +03:00
Alexey e2ed1eb286
Merge pull request #450 from kutovoys/main
feat: add metrics_listen option for metrics endpoint bind address
2026-03-17 11:46:52 +03:00
Sergey Kutovoy a74def9561
Update metrics configuration to support custom listen address
- Bump telemt dependency version from 3.3.15 to 3.3.19.
- Add `metrics_listen` option to `config.toml` for specifying a custom address for the metrics endpoint.
- Update `ServerConfig` struct to include `metrics_listen` and adjust logic in `spawn_metrics_if_configured` to prioritize this new option over `metrics_port`.
- Enhance error handling for invalid listen addresses in metrics setup.
2026-03-17 12:58:40 +05:00
Alexey 95c1306166
Merge pull request #444 from Dimasssss/patch-1
Update FAQ (add max_connections)
2026-03-16 22:06:27 +03:00
Dimasssss e1ef192c10
Update FAQ.en.md 2026-03-16 22:03:28 +03:00
Dimasssss ee4d15fed6
Update FAQ.ru.md 2026-03-16 22:02:55 +03:00
Alexey 0040e9b6da
Merge pull request #442 from telemt/bump
Update Cargo.toml
2026-03-16 21:25:44 +03:00
Alexey 2c10560795
Update Cargo.toml 2026-03-16 21:25:14 +03:00
53 changed files with 6055 additions and 340 deletions

8
.dockerignore Normal file
View File

@@ -0,0 +1,8 @@
.git
.github
target
.kilocode
cache
tlsfront
*.tar
*.tar.gz

602
Cargo.lock generated
View File

@@ -2,6 +2,16 @@
# It is not intended for manual editing.
version = 4
[[package]]
name = "aead"
version = "0.5.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d122413f284cf2d62fb1b7db97e02edb8cda96d769b16e443a4f6195e35662b0"
dependencies = [
"crypto-common",
"generic-array",
]
[[package]]
name = "aes"
version = "0.8.4"
@@ -13,6 +23,20 @@ dependencies = [
"cpufeatures",
]
[[package]]
name = "aes-gcm"
version = "0.10.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "831010a0f742e1209b3bcea8fab6a8e149051ba6099432c8cb2cc117dec3ead1"
dependencies = [
"aead",
"aes",
"cipher",
"ctr",
"ghash",
"subtle",
]
[[package]]
name = "aho-corasick"
version = "1.1.4"
@@ -55,6 +79,27 @@ version = "1.0.101"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5f0e0fee31ef5ed1ba1316088939cea399010ed7731dba877ed44aeb407a75ea"
[[package]]
name = "arc-swap"
version = "1.8.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f9f3647c145568cec02c42054e07bdf9a5a698e15b466fb2341bfc393cd24aa5"
dependencies = [
"rustversion",
]
[[package]]
name = "arrayref"
version = "0.3.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "76a2e8124351fda1ef8aaaa3bbd7ebbcb486bbcd4225aca0aa0d84bb2db8fecb"
[[package]]
name = "arrayvec"
version = "0.7.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7c02d123df017efcdfbd739ef81735b36c5ba83ec3c59c80a9d7ecc718f92e50"
[[package]]
name = "asn1-rs"
version = "0.5.2"
@@ -94,6 +139,17 @@ dependencies = [
"syn 1.0.109",
]
[[package]]
name = "async-trait"
version = "0.1.89"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9035ad2d096bed7955a320ee7e2230574d28fd3c3a0f186cbea1ff3c7eed5dbb"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.114",
]
[[package]]
name = "atomic-waker"
version = "1.1.2"
@@ -112,6 +168,12 @@ version = "0.22.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "72b3254f16251a8381aa12e40e3c4d2f0199f8c6508fbecb9d91f575e0fbb8c6"
[[package]]
name = "base64ct"
version = "1.8.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2af50177e190e07a26ab74f8b1efbfe2ef87da2116221318cb1c2e82baf7de06"
[[package]]
name = "bit-set"
version = "0.8.0"
@@ -139,6 +201,20 @@ version = "2.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "812e12b5285cc515a9c72a5c1d3b6d46a19dac5acfef5265968c166106e31dd3"
[[package]]
name = "blake3"
version = "1.8.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2468ef7d57b3fb7e16b576e8377cdbde2320c60e1491e961d11da40fc4f02a2d"
dependencies = [
"arrayref",
"arrayvec",
"cc",
"cfg-if",
"constant_time_eq",
"cpufeatures",
]
[[package]]
name = "block-buffer"
version = "0.10.4"
@@ -163,6 +239,12 @@ version = "3.19.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5dd9dc738b7a8311c7ade152424974d8115f2cdad61e8dab8dac9f2362298510"
[[package]]
name = "byte_string"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "11aade7a05aa8c3a351cedc44c3fc45806430543382fcc4743a9b757a2a0b4ed"
[[package]]
name = "bytes"
version = "1.11.1"
@@ -212,6 +294,30 @@ version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "613afe47fcd5fac7ccf1db93babcb082c5994d996f20b8b159f2ad1658eb5724"
[[package]]
name = "chacha20"
version = "0.9.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c3613f74bd2eac03dad61bd53dbe620703d4371614fe0bc3b9f04dd36fe4e818"
dependencies = [
"cfg-if",
"cipher",
"cpufeatures",
]
[[package]]
name = "chacha20poly1305"
version = "0.10.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "10cd79432192d1c0f4e1a0fef9527696cc039165d729fb41b3f4f4f354c2dc35"
dependencies = [
"aead",
"chacha20",
"cipher",
"poly1305",
"zeroize",
]
[[package]]
name = "chrono"
version = "0.4.43"
@@ -261,6 +367,7 @@ checksum = "773f3b9af64447d2ce9850330c473515014aa235e6a783b02db81ff39e4a3dad"
dependencies = [
"crypto-common",
"inout",
"zeroize",
]
[[package]]
@@ -288,6 +395,18 @@ version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3a822ea5bc7590f9d40f1ba12c0dc3c2760f3482c6984db1573ad11031420831"
[[package]]
name = "const-oid"
version = "0.9.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c2459377285ad874054d797f3ccebf984978aa39129f6eafde5cdc8315b612f8"
[[package]]
name = "constant_time_eq"
version = "0.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3d52eff69cd5e647efe296129160853a42795992097e8af39800e1060caeea9b"
[[package]]
name = "core-foundation-sys"
version = "0.8.7"
@@ -357,6 +476,12 @@ dependencies = [
"itertools",
]
[[package]]
name = "critical-section"
version = "1.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "790eea4361631c5e7d22598ecd5723ff611904e3344ce8720784c93e3d83d40b"
[[package]]
name = "crossbeam-channel"
version = "0.5.15"
@@ -413,6 +538,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "78c8292055d1c1df0cce5d180393dc8cce0abec0a7102adb6c7b1eef6016d60a"
dependencies = [
"generic-array",
"rand_core 0.6.4",
"typenum",
]
@@ -444,6 +570,16 @@ version = "2.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d7a1e2f27636f116493b8b860f5546edb47c8d8f8ea73e1d2a20be88e28d1fea"
[[package]]
name = "der"
version = "0.7.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e7c1832837b905bbfb5101e07cc24c8deddf52f93225eee6ead5f4d63d53ddcb"
dependencies = [
"const-oid",
"zeroize",
]
[[package]]
name = "der-parser"
version = "8.2.0"
@@ -489,12 +625,54 @@ dependencies = [
"syn 2.0.114",
]
[[package]]
name = "dynosaur"
version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a12303417f378f29ba12cb12fc78a9df0d8e16ccb1ad94abf04d48d96bdda532"
dependencies = [
"dynosaur_derive",
]
[[package]]
name = "dynosaur_derive"
version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0b0713d5c1d52e774c5cd7bb8b043d7c0fc4f921abfb678556140bfbe6ab2364"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.114",
]
[[package]]
name = "ed25519"
version = "2.2.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "115531babc129696a58c64a4fef0a8bf9e9698629fb97e9e40767d235cfbcd53"
dependencies = [
"pkcs8",
"signature",
]
[[package]]
name = "either"
version = "1.15.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "48c757948c5ede0e46177b7add2e67155f70e33c07fea8284df6576da70b3719"
[[package]]
name = "enum-as-inner"
version = "0.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a1e6a265c649f3f5979b601d26f1d05ada116434c87741c9493cb56218f76cbc"
dependencies = [
"heck",
"proc-macro2",
"quote",
"syn 2.0.114",
]
[[package]]
name = "equivalent"
version = "1.0.2"
@@ -709,6 +887,16 @@ dependencies = [
"wasip3",
]
[[package]]
name = "ghash"
version = "0.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f0d8a4362ccb29cb0b265253fb0a2728f592895ee6854fd9bc13f2ffda266ff1"
dependencies = [
"opaque-debug",
"polyval",
]
[[package]]
name = "h2"
version = "0.4.13"
@@ -783,6 +971,61 @@ version = "0.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7f24254aa9a54b5c858eaee2f5bccdb46aaf0e486a595ed5fd8f86ba55232a70"
[[package]]
name = "hickory-proto"
version = "0.25.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f8a6fe56c0038198998a6f217ca4e7ef3a5e51f46163bd6dd60b5c71ca6c6502"
dependencies = [
"async-trait",
"cfg-if",
"data-encoding",
"enum-as-inner",
"futures-channel",
"futures-io",
"futures-util",
"idna",
"ipnet",
"once_cell",
"rand",
"ring",
"thiserror 2.0.18",
"tinyvec",
"tokio",
"tracing",
"url",
]
[[package]]
name = "hickory-resolver"
version = "0.25.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dc62a9a99b0bfb44d2ab95a7208ac952d31060efc16241c87eaf36406fecf87a"
dependencies = [
"cfg-if",
"futures-util",
"hickory-proto",
"ipconfig",
"moka",
"once_cell",
"parking_lot",
"rand",
"resolv-conf",
"smallvec",
"thiserror 2.0.18",
"tokio",
"tracing",
]
[[package]]
name = "hkdf"
version = "0.12.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7b5f8eb2ad728638ea2c7d47a21db23b7b58a72ed6a38256b8a1849f15fbbdf7"
dependencies = [
"hmac",
]
[[package]]
name = "hmac"
version = "0.12.1"
@@ -1055,6 +1298,17 @@ dependencies = [
"libc",
]
[[package]]
name = "inotify"
version = "0.11.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bd5b3eaf1a28b758ac0faa5a4254e8ab2705605496f1b1f3fbbc3988ad73d199"
dependencies = [
"bitflags 2.10.0",
"inotify-sys",
"libc",
]
[[package]]
name = "inotify-sys"
version = "0.1.5"
@@ -1074,6 +1328,18 @@ dependencies = [
"generic-array",
]
[[package]]
name = "ipconfig"
version = "0.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b58db92f96b720de98181bbbe63c831e87005ab460c1bf306eb2622b4707997f"
dependencies = [
"socket2 0.5.10",
"widestring",
"windows-sys 0.48.0",
"winreg",
]
[[package]]
name = "ipnet"
version = "2.11.0"
@@ -1226,6 +1492,12 @@ version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "112b39cec0b298b6c1999fee3e31427f74f676e4cb9879ed1a121b43661a4154"
[[package]]
name = "lru_time_cache"
version = "0.11.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9106e1d747ffd48e6be5bb2d97fa706ed25b144fbee4d5c02eae110cd8d6badd"
[[package]]
name = "matchers"
version = "0.2.0"
@@ -1285,10 +1557,28 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a69bcab0ad47271a0234d9422b131806bf3968021e5dc9328caf2d4cd58557fc"
dependencies = [
"libc",
"log",
"wasi",
"windows-sys 0.61.2",
]
[[package]]
name = "moka"
version = "0.12.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "85f8024e1c8e71c778968af91d43700ce1d11b219d127d79fb2934153b82b42b"
dependencies = [
"crossbeam-channel",
"crossbeam-epoch",
"crossbeam-utils",
"equivalent",
"parking_lot",
"portable-atomic",
"smallvec",
"tagptr",
"uuid",
]
[[package]]
name = "nix"
version = "0.28.0"
@@ -1322,7 +1612,7 @@ dependencies = [
"crossbeam-channel",
"filetime",
"fsevent-sys",
"inotify", "inotify 0.9.6",
"kqueue",
"libc",
"log",
@@ -1331,6 +1621,33 @@
"windows-sys 0.48.0",
]
[[package]]
name = "notify"
version = "8.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4d3d07927151ff8575b7087f245456e549fea62edf0ec4e565a5ee50c8402bc3"
dependencies = [
"bitflags 2.10.0",
"fsevent-sys",
"inotify 0.11.1",
"kqueue",
"libc",
"log",
"mio 1.1.1",
"notify-types",
"walkdir",
"windows-sys 0.60.2",
]
[[package]]
name = "notify-types"
version = "2.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "42b8cfee0e339a0337359f3c88165702ac6e600dc01c0cc9579a92d62b08477a"
dependencies = [
"bitflags 2.10.0",
]
[[package]]
name = "nu-ansi-term"
version = "0.50.3"
@@ -1388,6 +1705,10 @@ name = "once_cell"
version = "1.21.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "42f5e15c9953c5e4ccceeb2e7382a716482c34515315f7b03532b8b4e8393d2d"
dependencies = [
"critical-section",
"portable-atomic",
]
[[package]]
name = "oorandom"
@@ -1395,6 +1716,12 @@ version = "11.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d6790f58c7ff633d8771f42965289203411a5e5c68388703c06e14f24770b41e"
[[package]]
name = "opaque-debug"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c08d65885ee38876c4f86fa503fb49d7b507c2b62552df7c70b2fce627e06381"
[[package]]
name = "parking_lot"
version = "0.12.5"
@@ -1424,6 +1751,26 @@ version = "2.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9b4f627cb1b25917193a259e49bdad08f671f8d9708acfd5fe0a8c1455d87220"
[[package]]
name = "pin-project"
version = "1.1.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f1749c7ed4bcaf4c3d0a3efc28538844fb29bcdd7d2b67b2be7e20ba861ff517"
dependencies = [
"pin-project-internal",
]
[[package]]
name = "pin-project-internal"
version = "1.1.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d9b20ed30f105399776b9c883e68e536ef602a16ae6f596d2c473591d6ad64c6"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.114",
]
[[package]]
name = "pin-project-lite"
version = "0.2.16"
@@ -1436,6 +1783,16 @@ version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8b870d8c151b6f2fb93e84a13146138f05d02ed11c7e7c54f8826aaaf7c9f184"
[[package]]
name = "pkcs8"
version = "0.10.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f950b2377845cebe5cf8b5165cb3cc1a5e0fa5cfa3e1f7f55707d8fd82e0a7b7"
dependencies = [
"der",
"spki",
]
[[package]]
name = "plotters"
version = "0.3.7"
@@ -1464,6 +1821,35 @@ dependencies = [
"plotters-backend",
]
[[package]]
name = "poly1305"
version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8159bd90725d2df49889a078b54f4f79e87f1f8a8444194cdca81d38f5393abf"
dependencies = [
"cpufeatures",
"opaque-debug",
"universal-hash",
]
[[package]]
name = "polyval"
version = "0.6.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9d1fe60d06143b2430aa532c94cfe9e29783047f06c0d7fd359a9a51b729fa25"
dependencies = [
"cfg-if",
"cpufeatures",
"opaque-debug",
"universal-hash",
]
[[package]]
name = "portable-atomic"
version = "1.13.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c33a9471896f1c69cecef8d20cbe2f7accd12527ce60845ff44c153bb2a21b49"
[[package]]
name = "potential_utf"
version = "0.1.4"
@@ -1609,7 +1995,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6db2770f06117d490610c7488547d543617b21bfa07796d7a12f6f1bd53850d1"
dependencies = [
"rand_chacha",
"rand_core", "rand_core 0.9.5",
]
[[package]]
@@ -1619,7 +2005,16 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d3022b5f1df60f26e1ffddd6c66e8aa15de382ae63b3a0c1bfc0e4d3e3f325cb"
dependencies = [
"ppv-lite86",
"rand_core", "rand_core 0.9.5",
]
[[package]]
name = "rand_core"
version = "0.6.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ec0be4795e2f6a28069bec0b5ff3e2ac9bafc99e6a9a7dc3547996c5c816922c"
dependencies = [
"getrandom 0.2.17",
] ]
[[package]]
@@ -1637,7 +2032,7 @@ version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "513962919efc330f829edb2535844d1b912b0fbe2ca165d613e4e8788bb05a5a"
dependencies = [
"rand_core", "rand_core 0.9.5",
]
[[package]]
@@ -1745,6 +2140,12 @@ dependencies = [
"webpki-roots 1.0.6",
]
[[package]]
name = "resolv-conf"
version = "0.7.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1e061d1b48cb8d38042de4ae0a7a6401009d6143dc80d2e2d6f31f0bdd6470c7"
[[package]]
name = "ring"
version = "0.17.14"
@@ -1759,6 +2160,19 @@ dependencies = [
"windows-sys 0.52.0",
]
[[package]]
name = "ring-compat"
version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ccce7bae150b815f0811db41b8312fcb74bffa4cab9cee5429ee00f356dd5bd4"
dependencies = [
"aead",
"ed25519",
"generic-array",
"pkcs8",
"ring",
]
[[package]]
name = "rustc-hash"
version = "2.1.1"
@@ -1870,12 +2284,33 @@ version = "1.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49"
[[package]]
name = "sealed"
version = "0.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "22f968c5ea23d555e670b449c1c5e7b2fc399fdaec1d304a17cd48e288abc107"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.114",
]
[[package]]
name = "semver"
version = "1.0.27"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d767eb0aabc880b29956c35734170f26ed551a859dbd361d140cdbeca61ab1e2"
[[package]]
name = "sendfd"
version = "0.4.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b183bfd5b1bc64ab0c1ef3ee06b008a9ef1b68a7d3a99ba566fbfe7a7c6d745b"
dependencies = [
"libc",
"tokio",
]
[[package]]
name = "serde"
version = "1.0.228"
@@ -1962,6 +2397,64 @@ dependencies = [
"digest",
]
[[package]]
name = "shadowsocks"
version = "1.24.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "482831bf9d55acf3c98e211b6c852c3dfdf1d1b0d23fdf1d887c5a4b2acad4e4"
dependencies = [
"aes",
"arc-swap",
"base64",
"blake3",
"byte_string",
"bytes",
"cfg-if",
"dynosaur",
"futures",
"hickory-resolver",
"libc",
"log",
"lru_time_cache",
"notify 8.2.0",
"percent-encoding",
"pin-project",
"rand",
"sealed",
"sendfd",
"serde",
"serde_json",
"serde_urlencoded",
"shadowsocks-crypto",
"socket2 0.6.2",
"spin",
"thiserror 2.0.18",
"tokio",
"tokio-tfo",
"trait-variant",
"url",
"windows-sys 0.61.2",
]
[[package]]
name = "shadowsocks-crypto"
version = "0.6.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3d038a3d17586f1c1ab3c1c3b9e4d5ef8fba98fb3890ad740c8487038b2e2ca5"
dependencies = [
"aes",
"aes-gcm",
"blake3",
"bytes",
"cfg-if",
"chacha20poly1305",
"hkdf",
"md-5",
"rand",
"ring-compat",
"sha1",
]
[[package]]
name = "sharded-slab"
version = "0.1.7"
@@ -1987,6 +2480,12 @@ dependencies = [
"libc",
]
[[package]]
name = "signature"
version = "2.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "77549399552de45a898a580c1b41d445bf730df867cc44e6c0233bbc4b8329de"
[[package]]
name = "slab"
version = "0.4.12"
@@ -2019,6 +2518,25 @@ dependencies = [
"windows-sys 0.60.2",
]
[[package]]
name = "spin"
version = "0.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d5fe4ccb98d9c292d56fec89a5e07da7fc4cf0dc11e156b41793132775d3e591"
dependencies = [
"lock_api",
]
[[package]]
name = "spki"
version = "0.7.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d91ed6c858b01f942cd56b37a94b3e0a1798290327d1236e4d9cf4eaca44d29d"
dependencies = [
"base64ct",
"der",
]
[[package]]
name = "stable_deref_trait"
version = "1.2.1"
@@ -2085,9 +2603,15 @@ dependencies = [
"syn 2.0.114",
]
[[package]]
name = "tagptr"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7b2093cf4c8eb1e67749a6762251bc9cd836b6fc171623bd0a9d324d37af2417"
[[package]]
name = "telemt"
version = "3.3.15" version = "3.3.20"
dependencies = [
"aes",
"anyhow",
@@ -2113,7 +2637,7 @@ dependencies = [
"lru",
"md-5",
"nix",
"notify", "notify 6.1.1",
"num-bigint",
"num-traits",
"parking_lot",
@@ -2126,6 +2650,7 @@ dependencies = [
"serde_json",
"sha1",
"sha2",
"shadowsocks",
"socket2 0.5.10",
"thiserror 2.0.18",
"tokio",
@@ -2330,6 +2855,23 @@ dependencies = [
"tokio-stream",
]
[[package]]
name = "tokio-tfo"
version = "0.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e6ad2c3b3bb958ad992354a7ebc468fc0f7cdc9af4997bf4d3fd3cb28bad36dc"
dependencies = [
"cfg-if",
"futures",
"libc",
"log",
"once_cell",
"pin-project",
"socket2 0.6.2",
"tokio",
"windows-sys 0.60.2",
]
[[package]]
name = "tokio-util"
version = "0.7.18"
@@ -2494,6 +3036,17 @@ dependencies = [
"tracing-log",
]
[[package]]
name = "trait-variant"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "70977707304198400eb4835a78f6a9f928bf41bba420deb8fdb175cd965d77a7"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.114",
]
[[package]]
name = "try-lock"
version = "0.2.5"
@@ -2524,6 +3077,16 @@ version = "0.2.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ebc1c04c71510c7f702b52b7c350734c9ff1295c464a03335b00bb84fc54f853"
[[package]]
name = "universal-hash"
version = "0.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fc1de2c688dc15305988b563c3854064043356019f97a4b46276fe734c4f07ea"
dependencies = [
"crypto-common",
"subtle",
]
[[package]]
name = "untrusted"
version = "0.9.0"
@@ -2548,6 +3111,17 @@ version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b6c140620e7ffbb22c2dee59cafe6084a59b5ffc27a8859a5f0d494b5d52b6be"
[[package]]
name = "uuid"
version = "1.22.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a68d3c8f01c0cfa54a75291d83601161799e4a89a39e0929f4b0354d88757a37"
dependencies = [
"getrandom 0.4.1",
"js-sys",
"wasm-bindgen",
]
[[package]]
name = "valuable"
version = "0.1.1"
@@ -2743,6 +3317,12 @@ dependencies = [
"rustls-pki-types",
]
[[package]]
name = "widestring"
version = "1.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "72069c3113ab32ab29e5584db3c6ec55d416895e60715417b5b883a357c3e471"
[[package]]
name = "winapi-util"
version = "0.1.11"
@@ -3042,6 +3622,16 @@ dependencies = [
"memchr",
]
[[package]]
name = "winreg"
version = "0.50.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "524e57b2c537c0f9b1e69f1965311ec12182b4122e45035b1508cd24d2adadb1"
dependencies = [
"cfg-if",
"windows-sys 0.48.0",
]
[[package]]
name = "wit-bindgen"
version = "0.51.0"

Cargo.toml
View File

@@ -1,6 +1,6 @@
[package]
name = "telemt"
version = "3.3.18" version = "3.3.21"
edition = "2024"
[dependencies]
@@ -26,6 +26,7 @@ zeroize = { version = "1.8", features = ["derive"] }
# Network
socket2 = { version = "0.5", features = ["all"] }
nix = { version = "0.28", default-features = false, features = ["net"] }
shadowsocks = { version = "1.24", features = ["aead-cipher-2022"] }
# Serialization
serde = { version = "1.0", features = ["derive"] }

config.toml
View File

@@ -32,6 +32,7 @@ show = "*"
port = 443
# proxy_protocol = false # Enable if behind HAProxy/nginx with PROXY protocol
# metrics_port = 9090
# metrics_listen = "0.0.0.0:9090" # Listen address for metrics (overrides metrics_port)
# metrics_whitelist = ["127.0.0.1", "::1", "0.0.0.0/0"]
[server.api]

API.md
View File

@@ -497,13 +497,14 @@ Note: the request contract is defined, but the corresponding route currently ret
| `direct_total` | `usize` | Direct-route upstream entries. |
| `socks4_total` | `usize` | SOCKS4 upstream entries. |
| `socks5_total` | `usize` | SOCKS5 upstream entries. |
| `shadowsocks_total` | `usize` | Shadowsocks upstream entries. |
#### `RuntimeUpstreamQualityUpstreamData`
| Field | Type | Description |
| --- | --- | --- |
| `upstream_id` | `usize` | Runtime upstream index. |
| `route_kind` | `string` | `direct`, `socks4`, `socks5`. | | `route_kind` | `string` | `direct`, `socks4`, `socks5`, `shadowsocks`. |
| `address` | `string` | Upstream address (`direct` literal for direct route kind). | | `address` | `string` | Upstream address (`direct` literal for direct route kind, `host:port` only for proxied upstreams). |
| `weight` | `u16` | Selection weight. |
| `scopes` | `string` | Configured scope selector. |
| `healthy` | `bool` | Current health flag. |
@@ -757,13 +758,14 @@ Note: the request contract is defined, but the corresponding route currently ret
| `direct_total` | `usize` | Number of direct upstream entries. |
| `socks4_total` | `usize` | Number of SOCKS4 upstream entries. |
| `socks5_total` | `usize` | Number of SOCKS5 upstream entries. |
| `shadowsocks_total` | `usize` | Number of Shadowsocks upstream entries. |
#### `UpstreamStatus`
| Field | Type | Description |
| --- | --- | --- |
| `upstream_id` | `usize` | Runtime upstream index. |
| `route_kind` | `string` | Upstream route kind: `direct`, `socks4`, `socks5`. | | `route_kind` | `string` | Upstream route kind: `direct`, `socks4`, `socks5`, `shadowsocks`. |
| `address` | `string` | Upstream address (`direct` for direct route kind). Authentication fields are intentionally omitted. | | `address` | `string` | Upstream address (`direct` for direct route kind, `host:port` for Shadowsocks). Authentication fields are intentionally omitted. |
| `weight` | `u16` | Selection weight. |
| `scopes` | `string` | Configured scope selector string. |
| `healthy` | `bool` | Current health flag. |

289
docs/CONFIG_PARAMS.en.md Normal file
View File

@@ -0,0 +1,289 @@
# Telemt Config Parameters Reference
This document lists all configuration keys accepted by `config.toml`.
> [!WARNING]
>
> The configuration parameters detailed in this document are intended for advanced users and fine-tuning purposes. Modifying these settings without a clear understanding of their function may lead to application instability or other unexpected behavior. Please proceed with caution and at your own risk.
## Top-level keys
| Parameter | Type | Description |
|---|---|---|
| include | `String` (special directive) | Includes another TOML file with `include = "relative/or/absolute/path.toml"`; includes are processed recursively before parsing. |
| show_link | `"*" \| String[]` | Legacy top-level link visibility selector (`"*"` for all users or explicit usernames list). |
| dc_overrides | `Map<String, String[]>` | Overrides DC endpoints for non-standard DCs; key is DC id string, value is `ip:port` list. |
| default_dc | `u8` | Default DC index used for unmapped non-standard DCs. |
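For orientation, a minimal top-level sketch; the include path, DC id, and endpoint below are placeholders chosen for illustration, not shipped defaults:
```toml
# Hypothetical values for illustration only.
include = "extra/users.toml"   # merged recursively before parsing
show_link = ["alice", "bob"]
default_dc = 2

[dc_overrides]
"203" = ["203.0.113.5:8888"]   # DC id -> ip:port list
```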
## [general]
| Parameter | Type | Description |
|---|---|---|
| data_path | `String` | Optional runtime data directory path. |
| prefer_ipv6 | `bool` | Prefer IPv6 where applicable in runtime logic. |
| fast_mode | `bool` | Enables fast-path optimizations for traffic processing. |
| use_middle_proxy | `bool` | Enables Middle Proxy mode. |
| proxy_secret_path | `String` | Path to proxy secret binary; can be auto-downloaded if absent. |
| proxy_config_v4_cache_path | `String` | Optional cache path for raw `getProxyConfig` (IPv4) snapshot. |
| proxy_config_v6_cache_path | `String` | Optional cache path for raw `getProxyConfigV6` (IPv6) snapshot. |
| ad_tag | `String` | Global fallback ad tag (32 hex characters). |
| middle_proxy_nat_ip | `IpAddr` | Explicit public IP override for NAT environments. |
| middle_proxy_nat_probe | `bool` | Enables NAT probing for Middle Proxy KDF/public address discovery. |
| middle_proxy_nat_stun | `String` | Deprecated legacy single STUN server for NAT probing. |
| middle_proxy_nat_stun_servers | `String[]` | Deprecated legacy STUN list for NAT probing fallback. |
| stun_nat_probe_concurrency | `usize` | Maximum concurrent STUN probes during NAT detection. |
| middle_proxy_pool_size | `usize` | Target size of active Middle Proxy writer pool. |
| middle_proxy_warm_standby | `usize` | Number of warm standby Middle-End connections. |
| me_init_retry_attempts | `u32` | Startup retries for ME pool initialization (`0` means unlimited). |
| me2dc_fallback | `bool` | Allows fallback from ME mode to direct DC when ME startup fails. |
| me_keepalive_enabled | `bool` | Enables ME keepalive padding frames. |
| me_keepalive_interval_secs | `u64` | Keepalive interval in seconds. |
| me_keepalive_jitter_secs | `u64` | Keepalive jitter in seconds. |
| me_keepalive_payload_random | `bool` | Randomizes keepalive payload bytes instead of zero payload. |
| rpc_proxy_req_every | `u64` | Interval for service `RPC_PROXY_REQ` activity signals (`0` disables). |
| me_writer_cmd_channel_capacity | `usize` | Capacity of per-writer command channel. |
| me_route_channel_capacity | `usize` | Capacity of per-connection ME response route channel. |
| me_c2me_channel_capacity | `usize` | Capacity of per-client command queue (client reader -> ME sender). |
| me_reader_route_data_wait_ms | `u64` | Bounded wait for routing ME DATA to per-connection queue (`0` = no wait). |
| me_d2c_flush_batch_max_frames | `usize` | Max ME->client frames coalesced before flush. |
| me_d2c_flush_batch_max_bytes | `usize` | Max ME->client payload bytes coalesced before flush. |
| me_d2c_flush_batch_max_delay_us | `u64` | Max microsecond wait for coalescing more ME->client frames (`0` disables timed coalescing). |
| me_d2c_ack_flush_immediate | `bool` | Flushes client writer immediately after quick-ack write. |
| direct_relay_copy_buf_c2s_bytes | `usize` | Copy buffer size for client->DC direction in direct relay. |
| direct_relay_copy_buf_s2c_bytes | `usize` | Copy buffer size for DC->client direction in direct relay. |
| crypto_pending_buffer | `usize` | Max pending ciphertext buffer per client writer (bytes). |
| max_client_frame | `usize` | Maximum allowed client MTProto frame size (bytes). |
| desync_all_full | `bool` | Emits full crypto-desync forensic logs for every event. |
| beobachten | `bool` | Enables per-IP forensic observation buckets. |
| beobachten_minutes | `u64` | Retention window (minutes) for per-IP observation buckets. |
| beobachten_flush_secs | `u64` | Snapshot flush interval (seconds) for observation output file. |
| beobachten_file | `String` | Observation snapshot output file path. |
| hardswap | `bool` | Enables hard-swap generation switching for ME pool updates. |
| me_warmup_stagger_enabled | `bool` | Enables staggered warmup for extra ME writers. |
| me_warmup_step_delay_ms | `u64` | Base delay between warmup connections (ms). |
| me_warmup_step_jitter_ms | `u64` | Jitter for warmup delay (ms). |
| me_reconnect_max_concurrent_per_dc | `u32` | Max concurrent reconnect attempts per DC. |
| me_reconnect_backoff_base_ms | `u64` | Base reconnect backoff in ms. |
| me_reconnect_backoff_cap_ms | `u64` | Cap reconnect backoff in ms. |
| me_reconnect_fast_retry_count | `u32` | Number of fast retry attempts before backoff. |
| me_single_endpoint_shadow_writers | `u8` | Additional reserve writers for one-endpoint DC groups. |
| me_single_endpoint_outage_mode_enabled | `bool` | Enables aggressive outage recovery for one-endpoint DC groups. |
| me_single_endpoint_outage_disable_quarantine | `bool` | Ignores endpoint quarantine in one-endpoint outage mode. |
| me_single_endpoint_outage_backoff_min_ms | `u64` | Minimum reconnect backoff in outage mode (ms). |
| me_single_endpoint_outage_backoff_max_ms | `u64` | Maximum reconnect backoff in outage mode (ms). |
| me_single_endpoint_shadow_rotate_every_secs | `u64` | Periodic shadow writer rotation interval (`0` disables). |
| me_floor_mode | `"static" \| "adaptive"` | Writer floor policy mode. |
| me_adaptive_floor_idle_secs | `u64` | Idle time before adaptive floor may reduce one-endpoint target. |
| me_adaptive_floor_min_writers_single_endpoint | `u8` | Minimum adaptive writer target for one-endpoint DC groups. |
| me_adaptive_floor_min_writers_multi_endpoint | `u8` | Minimum adaptive writer target for multi-endpoint DC groups. |
| me_adaptive_floor_recover_grace_secs | `u64` | Grace period to hold static floor after activity. |
| me_adaptive_floor_writers_per_core_total | `u16` | Global writer budget per logical CPU core in adaptive mode. |
| me_adaptive_floor_cpu_cores_override | `u16` | Manual CPU core count override (`0` uses auto-detection). |
| me_adaptive_floor_max_extra_writers_single_per_core | `u16` | Per-core max extra writers above base floor for one-endpoint DCs. |
| me_adaptive_floor_max_extra_writers_multi_per_core | `u16` | Per-core max extra writers above base floor for multi-endpoint DCs. |
| me_adaptive_floor_max_active_writers_per_core | `u16` | Hard cap for active ME writers per logical CPU core. |
| me_adaptive_floor_max_warm_writers_per_core | `u16` | Hard cap for warm ME writers per logical CPU core. |
| me_adaptive_floor_max_active_writers_global | `u32` | Hard global cap for active ME writers. |
| me_adaptive_floor_max_warm_writers_global | `u32` | Hard global cap for warm ME writers. |
| upstream_connect_retry_attempts | `u32` | Connect attempts for selected upstream before error/fallback. |
| upstream_connect_retry_backoff_ms | `u64` | Delay between upstream connect attempts (ms). |
| upstream_connect_budget_ms | `u64` | Total wall-clock budget for one upstream connect request (ms). |
| upstream_unhealthy_fail_threshold | `u32` | Consecutive failed requests before upstream is marked unhealthy. |
| upstream_connect_failfast_hard_errors | `bool` | Skips additional retries for hard non-transient connect errors. |
| stun_iface_mismatch_ignore | `bool` | Ignores STUN/interface mismatch and keeps Middle Proxy mode. |
| unknown_dc_log_path | `String` | File path for unknown-DC request logging (`null` disables file path). |
| unknown_dc_file_log_enabled | `bool` | Enables unknown-DC file logging. |
| log_level | `"debug" \| "verbose" \| "normal" \| "silent"` | Runtime logging verbosity. |
| disable_colors | `bool` | Disables ANSI colors in logs. |
| me_socks_kdf_policy | `"strict" \| "compat"` | SOCKS-bound KDF fallback policy for ME handshake. |
| me_route_backpressure_base_timeout_ms | `u64` | Base backpressure timeout for route-channel send (ms). |
| me_route_backpressure_high_timeout_ms | `u64` | High backpressure timeout when queue occupancy exceeds watermark (ms). |
| me_route_backpressure_high_watermark_pct | `u8` | Queue occupancy threshold (%) for high timeout mode. |
| me_health_interval_ms_unhealthy | `u64` | Health monitor interval while writer coverage is degraded (ms). |
| me_health_interval_ms_healthy | `u64` | Health monitor interval while writer coverage is healthy (ms). |
| me_admission_poll_ms | `u64` | Poll interval for conditional-admission checks (ms). |
| me_warn_rate_limit_ms | `u64` | Cooldown for repetitive ME warning logs (ms). |
| me_route_no_writer_mode | `"async_recovery_failfast" \| "inline_recovery_legacy" \| "hybrid_async_persistent"` | Route behavior when no writer is immediately available. |
| me_route_no_writer_wait_ms | `u64` | Max wait in async-recovery failfast mode (ms). |
| me_route_inline_recovery_attempts | `u32` | Inline recovery attempts in legacy mode. |
| me_route_inline_recovery_wait_ms | `u64` | Max inline recovery wait in legacy mode (ms). |
| fast_mode_min_tls_record | `usize` | Minimum TLS record size when fast-mode coalescing is enabled (`0` disables). |
| update_every | `u64` | Unified interval for config/secret updater tasks. |
| me_reinit_every_secs | `u64` | Periodic ME pool reinitialization interval (seconds). |
| me_hardswap_warmup_delay_min_ms | `u64` | Minimum delay between hardswap warmup connects (ms). |
| me_hardswap_warmup_delay_max_ms | `u64` | Maximum delay between hardswap warmup connects (ms). |
| me_hardswap_warmup_extra_passes | `u8` | Additional warmup passes per hardswap cycle. |
| me_hardswap_warmup_pass_backoff_base_ms | `u64` | Base backoff between hardswap warmup passes (ms). |
| me_config_stable_snapshots | `u8` | Number of identical config snapshots required before apply. |
| me_config_apply_cooldown_secs | `u64` | Cooldown between applied ME map updates (seconds). |
| me_snapshot_require_http_2xx | `bool` | Requires 2xx HTTP responses for applying config snapshots. |
| me_snapshot_reject_empty_map | `bool` | Rejects empty config snapshots. |
| me_snapshot_min_proxy_for_lines | `u32` | Minimum parsed `proxy_for` rows required to accept snapshot. |
| proxy_secret_stable_snapshots | `u8` | Number of identical secret snapshots required before runtime rotation. |
| proxy_secret_rotate_runtime | `bool` | Enables runtime proxy-secret rotation from remote source. |
| me_secret_atomic_snapshot | `bool` | Keeps selector and secret bytes from the same snapshot atomically. |
| proxy_secret_len_max | `usize` | Maximum allowed proxy-secret length (bytes). |
| me_pool_drain_ttl_secs | `u64` | Drain TTL for stale ME writers after endpoint-map changes (seconds). |
| me_pool_drain_threshold | `u64` | Max draining stale writers before batch force-close (`0` disables threshold cleanup). |
| me_bind_stale_mode | `"never" \| "ttl" \| "always"` | Policy for new binds on stale draining writers. |
| me_bind_stale_ttl_secs | `u64` | TTL for stale bind allowance when stale mode is `ttl`. |
| me_pool_min_fresh_ratio | `f32` | Minimum desired-DC fresh coverage ratio before draining stale writers. |
| me_reinit_drain_timeout_secs | `u64` | Force-close timeout for stale writers after endpoint-map changes (`0` disables force-close). |
| proxy_secret_auto_reload_secs | `u64` | Deprecated legacy secret reload interval (fallback when `update_every` is not set). |
| proxy_config_auto_reload_secs | `u64` | Deprecated legacy config reload interval (fallback when `update_every` is not set). |
| me_reinit_singleflight | `bool` | Serializes ME reinit cycles across trigger sources. |
| me_reinit_trigger_channel | `usize` | Trigger queue capacity for reinit scheduler. |
| me_reinit_coalesce_window_ms | `u64` | Trigger coalescing window before starting reinit (ms). |
| me_deterministic_writer_sort | `bool` | Enables deterministic candidate sort for writer binding path. |
| me_writer_pick_mode | `"sorted_rr" \| "p2c"` | Writer selection mode for route bind path. |
| me_writer_pick_sample_size | `u8` | Number of candidates sampled by picker in `p2c` mode. |
| ntp_check | `bool` | Enables NTP drift check at startup. |
| ntp_servers | `String[]` | NTP servers used for drift check. |
| auto_degradation_enabled | `bool` | Enables automatic degradation from ME to direct DC. |
| degradation_min_unavailable_dc_groups | `u8` | Minimum unavailable ME DC groups required before degrading. |
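A short illustrative `[general]` excerpt showing how a few of these keys combine; the values are examples, not tuning recommendations:
```toml
# Illustrative excerpt; not tuning advice.
[general]
prefer_ipv6 = false
fast_mode = true
use_middle_proxy = true
proxy_secret_path = "data/proxy-secret"      # auto-downloaded if absent
ad_tag = "00000000000000000000000000000000"  # 32 hex characters
log_level = "normal"
```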
## [general.modes]
| Parameter | Type | Description |
|---|---|---|
| classic | `bool` | Enables classic MTProxy mode. |
| secure | `bool` | Enables secure mode. |
| tls | `bool` | Enables TLS mode. |
## [general.links]
| Parameter | Type | Description |
|---|---|---|
| show | `"*" \| String[]` | Selects users whose tg:// links are shown at startup. |
| public_host | `String` | Public hostname/IP override for generated tg:// links. |
| public_port | `u16` | Public port override for generated tg:// links. |
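Example, with placeholder hostname and port:
```toml
[general.links]
show = "*"                        # or an explicit user list
public_host = "proxy.example.com" # placeholder
public_port = 443
```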
## [general.telemetry]
| Parameter | Type | Description |
|---|---|---|
| core_enabled | `bool` | Enables core hot-path telemetry counters. |
| user_enabled | `bool` | Enables per-user telemetry counters. |
| me_level | `"silent" \| "normal" \| "debug"` | Middle-End telemetry verbosity level. |
## [network]
| Parameter | Type | Description |
|---|---|---|
| ipv4 | `bool` | Enables IPv4 networking. |
| ipv6 | `bool` | Enables/disables IPv6 (`null` = auto-detect availability). |
| prefer | `u8` | Preferred IP family for selection (`4` or `6`). |
| multipath | `bool` | Enables multipath behavior where supported. |
| stun_use | `bool` | Global switch for STUN probing. |
| stun_servers | `String[]` | STUN server list for public IP detection. |
| stun_tcp_fallback | `bool` | Enables TCP STUN fallback when UDP STUN is blocked. |
| http_ip_detect_urls | `String[]` | HTTP endpoints used as fallback public IP detectors. |
| cache_public_ip_path | `String` | File path for caching detected public IP. |
| dns_overrides | `String[]` | Runtime DNS overrides in `host:port:ip` format. |
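Illustrative `[network]` excerpt; the STUN server and the override entry are placeholders demonstrating the expected formats:
```toml
[network]
ipv4 = true
prefer = 4
stun_use = true
stun_servers = ["stun.example.net:3478"]               # placeholder server
dns_overrides = ["core.telegram.org:443:203.0.113.9"]  # host:port:ip format
```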
## [server]
| Parameter | Type | Description |
|---|---|---|
| port | `u16` | Main proxy listen port. |
| listen_addr_ipv4 | `String` | IPv4 bind address for TCP listener. |
| listen_addr_ipv6 | `String` | IPv6 bind address for TCP listener. |
| listen_unix_sock | `String` | Unix socket path for listener. |
| listen_unix_sock_perm | `String` | Unix socket permissions in octal string (e.g., `"0666"`). |
| listen_tcp | `bool` | Explicit TCP listener enable/disable override. |
| proxy_protocol | `bool` | Enables HAProxy PROXY protocol parsing on incoming client connections. |
| proxy_protocol_header_timeout_ms | `u64` | Timeout for PROXY protocol header read/parse (ms). |
| metrics_port | `u16` | Metrics endpoint port (enables metrics listener). |
| metrics_listen | `String` | Full metrics bind address (`IP:PORT`), overrides `metrics_port`. |
| metrics_whitelist | `IpNetwork[]` | CIDR whitelist for metrics endpoint access. |
| max_connections | `u32` | Max concurrent client connections (`0` = unlimited). |
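Example `[server]` block consistent with the sample config shown earlier in this compare; values are illustrative:
```toml
[server]
port = 443
proxy_protocol = false
metrics_listen = "127.0.0.1:9090"  # takes precedence over metrics_port
metrics_whitelist = ["127.0.0.1/32", "::1/128"]
max_connections = 10000            # 0 = unlimited
```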
## [server.api]
| Parameter | Type | Description |
|---|---|---|
| enabled | `bool` | Enables control-plane REST API. |
| listen | `String` | API bind address in `IP:PORT` format. |
| whitelist | `IpNetwork[]` | CIDR whitelist allowed to access API. |
| auth_header | `String` | Exact expected `Authorization` header value (empty = disabled). |
| request_body_limit_bytes | `usize` | Maximum accepted HTTP request body size. |
| minimal_runtime_enabled | `bool` | Enables minimal runtime snapshots endpoint logic. |
| minimal_runtime_cache_ttl_ms | `u64` | Cache TTL for minimal runtime snapshots (ms; `0` disables cache). |
| runtime_edge_enabled | `bool` | Enables runtime edge endpoints. |
| runtime_edge_cache_ttl_ms | `u64` | Cache TTL for runtime edge aggregation payloads (ms). |
| runtime_edge_top_n | `usize` | Top-N size for edge connection leaderboard. |
| runtime_edge_events_capacity | `usize` | Ring-buffer capacity for runtime edge events. |
| read_only | `bool` | Rejects mutating API endpoints when enabled. |
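A minimal `[server.api]` sketch; the listen address, CIDR, and header value are placeholders:
```toml
[server.api]
enabled = true
listen = "127.0.0.1:8080"          # placeholder bind address
whitelist = ["127.0.0.1/32"]
auth_header = "Bearer change-me"   # empty string disables the check
read_only = false
```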
## [[server.listeners]]
| Parameter | Type | Description |
|---|---|---|
| ip | `IpAddr` | Listener bind IP. |
| announce | `String` | Public IP/domain announced in proxy links (priority over `announce_ip`). |
| announce_ip | `IpAddr` | Deprecated legacy announce IP (migrated to `announce` if needed). |
| proxy_protocol | `bool` | Per-listener override for PROXY protocol enable flag. |
| reuse_allow | `bool` | Enables `SO_REUSEPORT` for multi-instance bind sharing. |
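Example listener entry with placeholder addresses:
```toml
[[server.listeners]]
ip = "0.0.0.0"
announce = "proxy.example.com"     # placeholder public name
reuse_allow = true
```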
## [timeouts]
| Parameter | Type | Description |
|---|---|---|
| client_handshake | `u64` | Client handshake timeout. |
| tg_connect | `u64` | Upstream Telegram connect timeout. |
| client_keepalive | `u64` | Client keepalive timeout. |
| client_ack | `u64` | Client ACK timeout. |
| me_one_retry | `u8` | Quick ME reconnect attempts for single-address DC. |
| me_one_timeout_ms | `u64` | Timeout per quick attempt for single-address DC (ms). |
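Illustrative `[timeouts]` excerpt; the numbers are placeholders, and the first three are assumed to be in seconds (only the `_ms` key is explicitly milliseconds):
```toml
# Placeholder numbers, not recommended settings.
[timeouts]
client_handshake = 15     # assumed seconds
tg_connect = 10           # assumed seconds
client_keepalive = 60     # assumed seconds
me_one_retry = 2
me_one_timeout_ms = 500   # milliseconds per key name
```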
## [censorship]
| Parameter | Type | Description |
|---|---|---|
| tls_domain | `String` | Primary TLS domain used in fake TLS handshake profile. |
| tls_domains | `String[]` | Additional TLS domains for generating multiple links. |
| mask | `bool` | Enables masking/fronting relay mode. |
| mask_host | `String` | Upstream mask host for TLS fronting relay. |
| mask_port | `u16` | Upstream mask port for TLS fronting relay. |
| mask_unix_sock | `String` | Unix socket path for mask backend instead of TCP host/port. |
| fake_cert_len | `usize` | Length of synthetic certificate payload when emulation data is unavailable. |
| tls_emulation | `bool` | Enables certificate/TLS behavior emulation from cached real fronts. |
| tls_front_dir | `String` | Directory path for TLS front cache storage. |
| server_hello_delay_min_ms | `u64` | Minimum server_hello delay for anti-fingerprint behavior (ms). |
| server_hello_delay_max_ms | `u64` | Maximum server_hello delay for anti-fingerprint behavior (ms). |
| tls_new_session_tickets | `u8` | Number of `NewSessionTicket` messages to emit after handshake. |
| tls_full_cert_ttl_secs | `u64` | TTL for sending full cert payload per (domain, client IP) tuple. |
| alpn_enforce | `bool` | Enforces ALPN echo behavior based on client preference. |
| mask_proxy_protocol | `u8` | PROXY protocol mode for mask backend (`0` disabled, `1` v1, `2` v2). |
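A small `[censorship]` sketch for TLS fronting; domains and port are placeholders:
```toml
[censorship]
tls_domain = "www.example.com"     # placeholder front domain
mask = true
mask_host = "www.example.com"
mask_port = 443
tls_new_session_tickets = 1
```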
## [access]
| Parameter | Type | Description |
|---|---|---|
| users | `Map<String, String>` | Username -> 32-hex secret mapping. |
| user_ad_tags | `Map<String, String>` | Per-user ad tags (32 hex chars). |
| user_max_tcp_conns | `Map<String, usize>` | Per-user maximum concurrent TCP connections. |
| user_expirations | `Map<String, DateTime<Utc>>` | Per-user account expiration timestamps. |
| user_data_quota | `Map<String, u64>` | Per-user data quota limits. |
| user_max_unique_ips | `Map<String, usize>` | Per-user unique source IP limits. |
| user_max_unique_ips_global_each | `usize` | Global fallback per-user unique IP limit when no per-user override exists. |
| user_max_unique_ips_mode | `"active_window" \| "time_window" \| "combined"` | Unique source IP limit accounting mode. |
| user_max_unique_ips_window_secs | `u64` | Recent-window size for unique IP accounting (seconds). |
| replay_check_len | `usize` | Replay check storage length. |
| replay_window_secs | `u64` | Replay protection time window in seconds. |
| ignore_time_skew | `bool` | Ignores client/server timestamp skew in replay validation. |
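One possible TOML spelling of the `[access]` maps (sub-tables are assumed here; the username, secret, and limits are placeholders, and secrets must be 32 hex characters):
```toml
[access]
replay_window_secs = 300                     # placeholder value

[access.users]
alice = "0123456789abcdef0123456789abcdef"   # placeholder 32-hex secret

[access.user_max_tcp_conns]
alice = 32                                   # placeholder limit
```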
## [[upstreams]]
| Parameter | Type | Description |
|---|---|---|
| type | `"direct" \| "socks4" \| "socks5"` | Upstream transport type selector. |
| weight | `u16` | Weighted selection coefficient for this upstream. |
| enabled | `bool` | Enables/disables this upstream entry. |
| scopes | `String` | Comma-separated scope tags for routing. |
| interface | `String` | Optional outgoing interface name (`direct`, `socks4`, `socks5`). |
| bind_addresses | `String[]` | Optional source bind addresses for `direct` upstream. |
| address | `String` | Upstream proxy address (`host:port`) for SOCKS upstreams. |
| user_id | `String` | SOCKS4 user ID (only for `type = "socks4"`). |
| username | `String` | SOCKS5 username (only for `type = "socks5"`). |
| password | `String` | SOCKS5 password (only for `type = "socks5"`). |
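Since `[[upstreams]]` is an array-of-tables, several entries can coexist and are chosen by weight. A sketch mixing the documented types; addresses and credentials are placeholders:
```toml
[[upstreams]]
type = "direct"
weight = 2
enabled = true

[[upstreams]]
type = "socks5"
address = "127.0.0.1:1080"
username = "user"
password = "pass"
weight = 1
enabled = true
```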

View File

@@ -83,6 +83,13 @@ To specify a domain in the links, add to the `[general.links]` section of the co
 public_host = "proxy.example.com"
 ```
+
+### Server connection limit
+Limits the total number of open connections to the server:
+```toml
+[server]
+max_connections = 10000 # 0 - unlimited, 10000 - default
+```
 ### Upstream Manager
 To specify an upstream, add to the `[[upstreams]]` section of the config.toml file:
 #### Binding to IP
@@ -113,3 +120,17 @@ password = "pass" # Password for Auth on SOCKS-server
 weight = 1 # Set Weight for Scenarios
 enabled = true
 ```
+
+#### Shadowsocks as Upstream
+Requires `use_middle_proxy = false`.
+```toml
+[general]
+use_middle_proxy = false
+
+[[upstreams]]
+type = "shadowsocks"
+url = "ss://2022-blake3-aes-256-gcm:BASE64_KEY@1.2.3.4:8388"
+weight = 1
+enabled = true
+```

View File

@@ -83,6 +83,13 @@ metrics_whitelist = ["127.0.0.1/32", "::1/128", "0.0.0.0/0"]
 public_host = "proxy.example.com"
 ```
+
+### Total server connection limit
+Limits the total number of open connections to the server:
+```toml
+[server]
+max_connections = 10000 # 0 - unlimited, 10000 - default
+```
 ### Upstream Manager
 To specify an upstream, add to the `[[upstreams]]` section of the config.toml file:
 #### Binding to IP
@@ -113,3 +120,17 @@ password = "pass" # Password for Auth on SOCKS-server
 weight = 1 # Set Weight for Scenarios
 enabled = true
 ```
+
+#### Shadowsocks as Upstream
+Requires `use_middle_proxy = false`.
+```toml
+[general]
+use_middle_proxy = false
+
+[[upstreams]]
+type = "shadowsocks"
+url = "ss://2022-blake3-aes-256-gcm:BASE64_KEY@1.2.3.4:8388"
+weight = 1
+enabled = true
+```

View File

@@ -82,7 +82,7 @@ The `Default` values below are code defaults (used when a key is omitted), not n
 | Field | Applies to | Type | Required | Default | Meaning |
 |---|---|---|---|---|---|
-| `[[upstreams]].type` | all upstreams | `"direct" \| "socks4" \| "socks5"` | yes | n/a | Upstream transport type. |
+| `[[upstreams]].type` | all upstreams | `"direct" \| "socks4" \| "socks5" \| "shadowsocks"` | yes | n/a | Upstream transport type. |
 | `[[upstreams]].weight` | all upstreams | `u16` | no | `1` | Base weight for weighted-random selection. |
 | `[[upstreams]].enabled` | all upstreams | `bool` | no | `true` | Disabled entries are ignored at startup. |
 | `[[upstreams]].scopes` | all upstreams | `String` | no | `""` | Comma-separated scope tags for request routing. |
@@ -95,6 +95,8 @@ The `Default` values below are code defaults (used when a key is omitted), not n
 | `interface` | `socks5` | `Option<String>` | no | `null` | Used only when `address` is given as `ip:port`. |
 | `username` | `socks5` | `Option<String>` | no | `null` | SOCKS5 username. |
 | `password` | `socks5` | `Option<String>` | no | `null` | SOCKS5 password. |
+| `url` | `shadowsocks` | `String` | yes | n/a | Shadowsocks SIP002 URL (`ss://...`). Only `host:port` is exposed in runtime APIs. |
+| `interface` | `shadowsocks` | `Option<String>` | no | `null` | Optional outgoing bind interface or literal local IP. |
 ### Runtime rules (important)
@@ -115,6 +117,7 @@ The `Default` values below are code defaults (used when a key is omitted), not n
 8. In ME mode, the selected upstream is also used for the ME TCP dial path.
 9. In ME mode, for `direct` with bind/interface, STUN reflection is bind-aware for KDF address material.
 10. In ME mode with a SOCKS upstream, `BND.ADDR/BND.PORT` are used for KDF when valid/public and of the same IP family.
+11. `shadowsocks` upstreams require `general.use_middle_proxy = false`. With ME mode enabled, loading the config fails immediately.
 ## Upstream Configuration Examples
@@ -150,7 +153,20 @@ weight = 2
 enabled = true
 ```
-### Example 4: Mixed upstreams with scopes
+### Example 4: Shadowsocks upstream
+```toml
+[general]
+use_middle_proxy = false
+
+[[upstreams]]
+type = "shadowsocks"
+url = "ss://2022-blake3-aes-256-gcm:BASE64_KEY@198.51.100.50:8388"
+weight = 2
+enabled = true
+```
+
+### Example 5: Mixed upstreams with scopes
 ```toml
 [[upstreams]]

View File

@@ -82,7 +82,7 @@ Defaults below are code defaults (used when a key is omitted), not necessarily v
 | Field | Applies to | Type | Required | Default | Meaning |
 |---|---|---|---|---|---|
-| `[[upstreams]].type` | all upstreams | `"direct" \| "socks4" \| "socks5"` | yes | n/a | Upstream transport type. |
+| `[[upstreams]].type` | all upstreams | `"direct" \| "socks4" \| "socks5" \| "shadowsocks"` | yes | n/a | Upstream transport type. |
 | `[[upstreams]].weight` | all upstreams | `u16` | no | `1` | Base weight for weighted-random selection. |
 | `[[upstreams]].enabled` | all upstreams | `bool` | no | `true` | Disabled entries are ignored at startup. |
 | `[[upstreams]].scopes` | all upstreams | `String` | no | `""` | Comma-separated scope tags for request-level routing. |
@@ -95,6 +95,8 @@ Defaults below are code defaults (used when a key is omitted), not necessarily v
 | `interface` | `socks5` | `Option<String>` | no | `null` | Used only for SOCKS server `ip:port` dial path. |
 | `username` | `socks5` | `Option<String>` | no | `null` | SOCKS5 username auth. |
 | `password` | `socks5` | `Option<String>` | no | `null` | SOCKS5 password auth. |
+| `url` | `shadowsocks` | `String` | yes | n/a | Shadowsocks SIP002 URL (`ss://...`). Only `host:port` is exposed in runtime APIs. |
+| `interface` | `shadowsocks` | `Option<String>` | no | `null` | Optional outgoing bind interface or literal local IP. |
 ### Runtime rules (important)
@@ -115,6 +117,7 @@ Defaults below are code defaults (used when a key is omitted), not necessarily v
 8. In ME mode, the selected upstream is also used for ME TCP dial path.
 9. In ME mode for `direct` upstream with bind/interface, STUN reflection logic is bind-aware for KDF source material.
 10. In ME mode for SOCKS upstream, SOCKS `BND.ADDR/BND.PORT` is used for KDF when it is valid/public for the same family.
+11. `shadowsocks` upstreams require `general.use_middle_proxy = false`. Config load fails fast if ME mode is enabled.
 ## Upstream Configuration Examples
@@ -150,7 +153,20 @@ weight = 2
 enabled = true
 ```
-### Example 4: Mixed upstreams with scopes
+### Example 4: Shadowsocks upstream
+```toml
+[general]
+use_middle_proxy = false
+
+[[upstreams]]
+type = "shadowsocks"
+url = "ss://2022-blake3-aes-256-gcm:BASE64_KEY@198.51.100.50:8388"
+weight = 2
+enabled = true
+```
+
+### Example 5: Mixed upstreams with scopes
 ```toml
 [[upstreams]]

View File

@@ -82,7 +82,7 @@
 | Field | Applies to | Type | Required | Default | Meaning |
 |---|---|---|---|---|---|
-| `[[upstreams]].type` | all upstreams | `"direct" \| "socks4" \| "socks5"` | yes | n/a | Upstream transport type. |
+| `[[upstreams]].type` | all upstreams | `"direct" \| "socks4" \| "socks5" \| "shadowsocks"` | yes | n/a | Upstream transport type. |
 | `[[upstreams]].weight` | all upstreams | `u16` | no | `1` | Base weight in weighted-random selection. |
 | `[[upstreams]].enabled` | all upstreams | `bool` | no | `true` | Disabled entries are ignored at startup. |
 | `[[upstreams]].scopes` | all upstreams | `String` | no | `""` | Comma-separated list of scope tokens for routing. |
@@ -95,6 +95,8 @@
 | `interface` | `socks5` | `Option<String>` | no | `null` | Used only if `address` is set as `ip:port`. |
 | `username` | `socks5` | `Option<String>` | no | `null` | SOCKS5 auth username. |
 | `password` | `socks5` | `Option<String>` | no | `null` | SOCKS5 auth password. |
+| `url` | `shadowsocks` | `String` | yes | n/a | Shadowsocks SIP002 URL (`ss://...`). Only `host:port` is exposed in the runtime API. |
+| `interface` | `shadowsocks` | `Option<String>` | no | `null` | Optional outgoing bind interface or literal local IP. |
 ### Runtime rules
@@ -115,6 +117,7 @@
 8. In ME mode, the selected upstream is also used for the ME TCP dial path.
 9. In ME mode, for a `direct` upstream with bind/interface, STUN reflection is performed bind-aware for the KDF material.
 10. In ME mode, for a SOCKS upstream, `BND.ADDR/BND.PORT` are used for KDF if the address is valid/public and matches the IP family.
+11. A `shadowsocks` upstream requires `general.use_middle_proxy = false`. With ME mode enabled, the config is rejected at load.
 ## Upstream Configuration Examples
@@ -150,7 +153,20 @@ weight = 2
 enabled = true
 ```
-### Example 4: Mixed upstreams with scopes
+### Example 4: Shadowsocks upstream
+```toml
+[general]
+use_middle_proxy = false
+
+[[upstreams]]
+type = "shadowsocks"
+url = "ss://2022-blake3-aes-256-gcm:BASE64_KEY@198.51.100.50:8388"
+weight = 2
+enabled = true
+```
+
+### Example 5: Mixed upstreams with scopes
 ```toml
 [[upstreams]]

View File

@@ -38,8 +38,9 @@ umweltschutz.de -> A record 198.18.88.88
 In the Telemt configuration:
-```
-tls_domain = umweltschutz.de
+```toml
+[censorship]
+tls_domain = "umweltschutz.de"
 ```
 This domain is used by the client as the SNI in the ClientHello
@@ -56,8 +57,9 @@ tls_domain = umweltschutz.de
 In the Telemt configuration:
-```
-mask_host = 127.0.0.1
+```toml
+[censorship]
+mask_host = "127.0.0.1"
 mask_port = 8443
 ```
@@ -151,16 +153,18 @@ mask_host:mask_port
 For example:
-```
-tls_domain = github.com
-mask_host = github.com
+```toml
+[censorship]
+tls_domain = "github.com"
+mask_host = "github.com"
 mask_port = 443
 ```
 or
-```
-mask_host = 140.82.121.4
+```toml
+[censorship]
+mask_host = "140.82.121.4"
 ```
 In this case:

View File

@@ -134,6 +134,7 @@ pub(super) struct UpstreamSummaryData {
     pub(super) direct_total: usize,
     pub(super) socks4_total: usize,
     pub(super) socks5_total: usize,
+    pub(super) shadowsocks_total: usize,
 }
 #[derive(Serialize, Clone)]
@@ -195,6 +196,8 @@ pub(super) struct ZeroPoolData {
     pub(super) pool_swap_total: u64,
     pub(super) pool_drain_active: u64,
     pub(super) pool_force_close_total: u64,
+    pub(super) pool_drain_soft_evict_total: u64,
+    pub(super) pool_drain_soft_evict_writer_total: u64,
     pub(super) pool_stale_pick_total: u64,
     pub(super) writer_removed_total: u64,
     pub(super) writer_removed_unexpected_total: u64,
@@ -235,6 +238,7 @@ pub(super) struct MeWritersSummary {
     pub(super) available_pct: f64,
     pub(super) required_writers: usize,
     pub(super) alive_writers: usize,
+    pub(super) coverage_ratio: f64,
     pub(super) coverage_pct: f64,
     pub(super) fresh_alive_writers: usize,
     pub(super) fresh_coverage_pct: f64,
@@ -283,6 +287,7 @@ pub(super) struct DcStatus {
     pub(super) floor_max: usize,
     pub(super) floor_capped: bool,
     pub(super) alive_writers: usize,
+    pub(super) coverage_ratio: f64,
     pub(super) coverage_pct: f64,
     pub(super) fresh_alive_writers: usize,
     pub(super) fresh_coverage_pct: f64,
@@ -360,6 +365,11 @@ pub(super) struct MinimalMeRuntimeData {
     pub(super) me_reconnect_backoff_cap_ms: u64,
     pub(super) me_reconnect_fast_retry_count: u32,
     pub(super) me_pool_drain_ttl_secs: u64,
+    pub(super) me_pool_drain_soft_evict_enabled: bool,
+    pub(super) me_pool_drain_soft_evict_grace_secs: u64,
+    pub(super) me_pool_drain_soft_evict_per_writer: u8,
+    pub(super) me_pool_drain_soft_evict_budget_per_core: u16,
+    pub(super) me_pool_drain_soft_evict_cooldown_ms: u64,
     pub(super) me_pool_force_close_secs: u64,
     pub(super) me_pool_min_fresh_ratio: f32,
     pub(super) me_bind_stale_mode: &'static str,

View File

@@ -113,6 +113,7 @@ pub(super) struct RuntimeMeQualityDcRttData {
     pub(super) rtt_ema_ms: Option<f64>,
     pub(super) alive_writers: usize,
     pub(super) required_writers: usize,
+    pub(super) coverage_ratio: f64,
     pub(super) coverage_pct: f64,
 }
@@ -158,6 +159,7 @@ pub(super) struct RuntimeUpstreamQualitySummaryData {
     pub(super) direct_total: usize,
     pub(super) socks4_total: usize,
     pub(super) socks5_total: usize,
+    pub(super) shadowsocks_total: usize,
 }
 #[derive(Serialize)]
@@ -388,6 +390,7 @@ pub(super) async fn build_runtime_me_quality_data(shared: &ApiShared) -> Runtime
                 rtt_ema_ms: dc.rtt_ms,
                 alive_writers: dc.alive_writers,
                 required_writers: dc.required_writers,
+                coverage_ratio: dc.coverage_ratio,
                 coverage_pct: dc.coverage_pct,
             })
             .collect(),
@@ -404,7 +407,9 @@ pub(super) async fn build_runtime_upstream_quality_data(
         connect_attempt_total: shared.stats.get_upstream_connect_attempt_total(),
         connect_success_total: shared.stats.get_upstream_connect_success_total(),
         connect_fail_total: shared.stats.get_upstream_connect_fail_total(),
-        connect_failfast_hard_error_total: shared.stats.get_upstream_connect_failfast_hard_error_total(),
+        connect_failfast_hard_error_total: shared
+            .stats
+            .get_upstream_connect_failfast_hard_error_total(),
     };
     let Some(snapshot) = shared.upstream_manager.try_api_snapshot() else {
@@ -444,6 +449,7 @@ pub(super) async fn build_runtime_upstream_quality_data(
             direct_total: snapshot.summary.direct_total,
             socks4_total: snapshot.summary.socks4_total,
             socks5_total: snapshot.summary.socks5_total,
+            shadowsocks_total: snapshot.summary.shadowsocks_total,
         }),
         upstreams: Some(
             snapshot
@@ -455,6 +461,7 @@ pub(super) async fn build_runtime_upstream_quality_data(
                     crate::transport::UpstreamRouteKind::Direct => "direct",
                     crate::transport::UpstreamRouteKind::Socks4 => "socks4",
                     crate::transport::UpstreamRouteKind::Socks5 => "socks5",
+                    crate::transport::UpstreamRouteKind::Shadowsocks => "shadowsocks",
                 },
                 address: upstream.address,
                 weight: upstream.weight,
@@ -474,7 +481,9 @@ pub(super) async fn build_runtime_upstream_quality_data(
                     crate::transport::upstream::IpPreference::PreferV6 => "prefer_v6",
                     crate::transport::upstream::IpPreference::PreferV4 => "prefer_v4",
                     crate::transport::upstream::IpPreference::BothWork => "both_work",
-                    crate::transport::upstream::IpPreference::Unavailable => "unavailable",
+                    crate::transport::upstream::IpPreference::Unavailable => {
+                        "unavailable"
+                    }
                 },
             })
             .collect(),
@@ -512,14 +521,18 @@ pub(super) async fn build_runtime_nat_stun_data(shared: &ApiShared) -> RuntimeNa
                 live_total: snapshot.live_servers.len(),
             },
             reflection: RuntimeNatStunReflectionBlockData {
-                v4: snapshot.reflection_v4.map(|entry| RuntimeNatStunReflectionData {
-                    addr: entry.addr.to_string(),
-                    age_secs: entry.age_secs,
-                }),
-                v6: snapshot.reflection_v6.map(|entry| RuntimeNatStunReflectionData {
-                    addr: entry.addr.to_string(),
-                    age_secs: entry.age_secs,
-                }),
+                v4: snapshot
+                    .reflection_v4
+                    .map(|entry| RuntimeNatStunReflectionData {
+                        addr: entry.addr.to_string(),
+                        age_secs: entry.age_secs,
+                    }),
+                v6: snapshot
+                    .reflection_v6
+                    .map(|entry| RuntimeNatStunReflectionData {
+                        addr: entry.addr.to_string(),
+                        age_secs: entry.age_secs,
+                    }),
             },
             stun_backoff_remaining_ms: snapshot.stun_backoff_remaining_ms,
         }),

View File

@@ -1,5 +1,5 @@
-use std::net::IpAddr;
 use std::collections::HashMap;
+use std::net::IpAddr;
 use std::sync::{Mutex, OnceLock};
 use std::time::{SystemTime, UNIX_EPOCH};
@@ -7,8 +7,8 @@ use serde::Serialize;
 use crate::config::{ProxyConfig, UpstreamType};
 use crate::network::probe::{detect_interface_ipv4, detect_interface_ipv6, is_bogon};
-use crate::transport::middle_proxy::{bnd_snapshot, timeskew_snapshot, upstream_bnd_snapshots};
 use crate::transport::UpstreamRouteKind;
+use crate::transport::middle_proxy::{bnd_snapshot, timeskew_snapshot, upstream_bnd_snapshots};
 use super::ApiShared;
@@ -262,8 +262,8 @@ fn update_kdf_ewma(now_epoch_secs: u64, total_errors: u64) -> f64 {
     let delta_errors = total_errors.saturating_sub(guard.last_total_errors);
     let instant_rate_per_min = (delta_errors as f64) * 60.0 / (dt_secs as f64);
     let alpha = 1.0 - f64::exp(-(dt_secs as f64) / KDF_EWMA_TAU_SECS);
-    guard.ewma_errors_per_min = guard.ewma_errors_per_min
-        + alpha * (instant_rate_per_min - guard.ewma_errors_per_min);
+    guard.ewma_errors_per_min =
+        guard.ewma_errors_per_min + alpha * (instant_rate_per_min - guard.ewma_errors_per_min);
     guard.last_epoch_secs = now_epoch_secs;
     guard.last_total_errors = total_errors;
     guard.ewma_errors_per_min
@@ -284,6 +284,7 @@ fn map_route_kind(value: UpstreamRouteKind) -> &'static str {
         UpstreamRouteKind::Direct => "direct",
         UpstreamRouteKind::Socks4 => "socks4",
         UpstreamRouteKind::Socks5 => "socks5",
+        UpstreamRouteKind::Shadowsocks => "shadowsocks",
     }
 }

View File

@@ -2,8 +2,8 @@ use std::time::{Duration, Instant, SystemTime, UNIX_EPOCH};
 use crate::config::ApiConfig;
 use crate::stats::Stats;
-use crate::transport::upstream::IpPreference;
 use crate::transport::UpstreamRouteKind;
+use crate::transport::upstream::IpPreference;
 use super::ApiShared;
 use super::model::{
@@ -96,6 +96,8 @@ pub(super) fn build_zero_all_data(stats: &Stats, configured_users: usize) -> Zer
         pool_swap_total: stats.get_pool_swap_total(),
         pool_drain_active: stats.get_pool_drain_active(),
         pool_force_close_total: stats.get_pool_force_close_total(),
+        pool_drain_soft_evict_total: stats.get_pool_drain_soft_evict_total(),
+        pool_drain_soft_evict_writer_total: stats.get_pool_drain_soft_evict_writer_total(),
         pool_stale_pick_total: stats.get_pool_stale_pick_total(),
         writer_removed_total: stats.get_me_writer_removed_total(),
         writer_removed_unexpected_total: stats.get_me_writer_removed_unexpected_total(),
@@ -136,7 +138,8 @@ fn build_zero_upstream_data(stats: &Stats) -> ZeroUpstreamData {
             .get_upstream_connect_duration_success_bucket_501_1000ms(),
         connect_duration_success_bucket_gt_1000ms: stats
            .get_upstream_connect_duration_success_bucket_gt_1000ms(),
-        connect_duration_fail_bucket_le_100ms: stats.get_upstream_connect_duration_fail_bucket_le_100ms(),
+        connect_duration_fail_bucket_le_100ms: stats
+            .get_upstream_connect_duration_fail_bucket_le_100ms(),
         connect_duration_fail_bucket_101_500ms: stats
             .get_upstream_connect_duration_fail_bucket_101_500ms(),
         connect_duration_fail_bucket_501_1000ms: stats
@@ -178,6 +181,7 @@ pub(super) fn build_upstreams_data(shared: &ApiShared, api_cfg: &ApiConfig) -> U
         direct_total: snapshot.summary.direct_total,
         socks4_total: snapshot.summary.socks4_total,
         socks5_total: snapshot.summary.socks5_total,
+        shadowsocks_total: snapshot.summary.shadowsocks_total,
     };
     let upstreams = snapshot
         .upstreams
@@ -313,6 +317,7 @@ async fn get_minimal_payload_cached(
             available_pct: status.available_pct,
             required_writers: status.required_writers,
             alive_writers: status.alive_writers,
+            coverage_ratio: status.coverage_ratio,
             coverage_pct: status.coverage_pct,
             fresh_alive_writers: status.fresh_alive_writers,
             fresh_coverage_pct: status.fresh_coverage_pct,
@@ -370,6 +375,7 @@ async fn get_minimal_payload_cached(
                 floor_max: entry.floor_max,
                 floor_capped: entry.floor_capped,
                 alive_writers: entry.alive_writers,
+                coverage_ratio: entry.coverage_ratio,
                 coverage_pct: entry.coverage_pct,
                 fresh_alive_writers: entry.fresh_alive_writers,
                 fresh_coverage_pct: entry.fresh_coverage_pct,
@@ -391,8 +397,7 @@ async fn get_minimal_payload_cached(
             adaptive_floor_min_writers_multi_endpoint: runtime
                 .adaptive_floor_min_writers_multi_endpoint,
             adaptive_floor_recover_grace_secs: runtime.adaptive_floor_recover_grace_secs,
-            adaptive_floor_writers_per_core_total: runtime
-                .adaptive_floor_writers_per_core_total,
+            adaptive_floor_writers_per_core_total: runtime.adaptive_floor_writers_per_core_total,
             adaptive_floor_cpu_cores_override: runtime.adaptive_floor_cpu_cores_override,
             adaptive_floor_max_extra_writers_single_per_core: runtime
                 .adaptive_floor_max_extra_writers_single_per_core,
@@ -400,12 +405,9 @@ async fn get_minimal_payload_cached(
                 .adaptive_floor_max_extra_writers_multi_per_core,
             adaptive_floor_max_active_writers_per_core: runtime
                 .adaptive_floor_max_active_writers_per_core,
-            adaptive_floor_max_warm_writers_per_core: runtime
-                .adaptive_floor_max_warm_writers_per_core,
-            adaptive_floor_max_active_writers_global: runtime
-                .adaptive_floor_max_active_writers_global,
-            adaptive_floor_max_warm_writers_global: runtime
-                .adaptive_floor_max_warm_writers_global,
+            adaptive_floor_max_warm_writers_per_core: runtime.adaptive_floor_max_warm_writers_per_core,
+            adaptive_floor_max_active_writers_global: runtime.adaptive_floor_max_active_writers_global,
+            adaptive_floor_max_warm_writers_global: runtime.adaptive_floor_max_warm_writers_global,
             adaptive_floor_cpu_cores_detected: runtime.adaptive_floor_cpu_cores_detected,
             adaptive_floor_cpu_cores_effective: runtime.adaptive_floor_cpu_cores_effective,
             adaptive_floor_global_cap_raw: runtime.adaptive_floor_global_cap_raw,
@@ -427,6 +429,11 @@ async fn get_minimal_payload_cached(
             me_reconnect_backoff_cap_ms: runtime.me_reconnect_backoff_cap_ms,
             me_reconnect_fast_retry_count: runtime.me_reconnect_fast_retry_count,
             me_pool_drain_ttl_secs: runtime.me_pool_drain_ttl_secs,
+            me_pool_drain_soft_evict_enabled: runtime.me_pool_drain_soft_evict_enabled,
+            me_pool_drain_soft_evict_grace_secs: runtime.me_pool_drain_soft_evict_grace_secs,
+            me_pool_drain_soft_evict_per_writer: runtime.me_pool_drain_soft_evict_per_writer,
+            me_pool_drain_soft_evict_budget_per_core: runtime.me_pool_drain_soft_evict_budget_per_core,
+            me_pool_drain_soft_evict_cooldown_ms: runtime.me_pool_drain_soft_evict_cooldown_ms,
             me_pool_force_close_secs: runtime.me_pool_force_close_secs,
             me_pool_min_fresh_ratio: runtime.me_pool_min_fresh_ratio,
             me_bind_stale_mode: runtime.me_bind_stale_mode,
@@ -495,6 +502,7 @@ fn disabled_me_writers(now_epoch_secs: u64, reason: &'static str) -> MeWritersDa
         available_pct: 0.0,
         required_writers: 0,
         alive_writers: 0,
+        coverage_ratio: 0.0,
         coverage_pct: 0.0,
         fresh_alive_writers: 0,
         fresh_coverage_pct: 0.0,
@@ -517,6 +525,7 @@ fn map_route_kind(value: UpstreamRouteKind) -> &'static str {
         UpstreamRouteKind::Direct => "direct",
         UpstreamRouteKind::Socks4 => "socks4",
         UpstreamRouteKind::Socks5 => "socks5",
+        UpstreamRouteKind::Shadowsocks => "shadowsocks",
     }
 }

View File

@@ -27,8 +27,8 @@ const DEFAULT_ME_C2ME_CHANNEL_CAPACITY: usize = 1024;
 const DEFAULT_ME_READER_ROUTE_DATA_WAIT_MS: u64 = 2;
 const DEFAULT_ME_D2C_FLUSH_BATCH_MAX_FRAMES: usize = 32;
 const DEFAULT_ME_D2C_FLUSH_BATCH_MAX_BYTES: usize = 128 * 1024;
-const DEFAULT_ME_D2C_FLUSH_BATCH_MAX_DELAY_US: u64 = 1500;
+const DEFAULT_ME_D2C_FLUSH_BATCH_MAX_DELAY_US: u64 = 500;
-const DEFAULT_ME_D2C_ACK_FLUSH_IMMEDIATE: bool = false;
+const DEFAULT_ME_D2C_ACK_FLUSH_IMMEDIATE: bool = true;
 const DEFAULT_DIRECT_RELAY_COPY_BUF_C2S_BYTES: usize = 64 * 1024;
 const DEFAULT_DIRECT_RELAY_COPY_BUF_S2C_BYTES: usize = 256 * 1024;
 const DEFAULT_ME_WRITER_PICK_SAMPLE_SIZE: u8 = 3;
@@ -36,6 +36,11 @@ const DEFAULT_ME_HEALTH_INTERVAL_MS_UNHEALTHY: u64 = 1000;
 const DEFAULT_ME_HEALTH_INTERVAL_MS_HEALTHY: u64 = 3000;
 const DEFAULT_ME_ADMISSION_POLL_MS: u64 = 1000;
 const DEFAULT_ME_WARN_RATE_LIMIT_MS: u64 = 5000;
+const DEFAULT_ME_POOL_DRAIN_SOFT_EVICT_ENABLED: bool = true;
+const DEFAULT_ME_POOL_DRAIN_SOFT_EVICT_GRACE_SECS: u64 = 30;
+const DEFAULT_ME_POOL_DRAIN_SOFT_EVICT_PER_WRITER: u8 = 1;
+const DEFAULT_ME_POOL_DRAIN_SOFT_EVICT_BUDGET_PER_CORE: u16 = 8;
+const DEFAULT_ME_POOL_DRAIN_SOFT_EVICT_COOLDOWN_MS: u64 = 5000;
 const DEFAULT_USER_MAX_UNIQUE_IPS_WINDOW_SECS: u64 = 30;
 const DEFAULT_UPSTREAM_CONNECT_RETRY_ATTEMPTS: u32 = 2;
 const DEFAULT_UPSTREAM_UNHEALTHY_FAIL_THRESHOLD: u32 = 5;
@@ -85,11 +90,11 @@ pub(crate) fn default_connect_timeout() -> u64 {
 }
 pub(crate) fn default_keepalive() -> u64 {
-    60
+    15
 }
 pub(crate) fn default_ack_timeout() -> u64 {
-    300
+    90
 }
 pub(crate) fn default_me_one_retry() -> u8 {
     12
@@ -592,6 +597,26 @@ pub(crate) fn default_me_pool_drain_threshold() -> u64 {
     128
 }
+
+pub(crate) fn default_me_pool_drain_soft_evict_enabled() -> bool {
+    DEFAULT_ME_POOL_DRAIN_SOFT_EVICT_ENABLED
+}
+
+pub(crate) fn default_me_pool_drain_soft_evict_grace_secs() -> u64 {
+    DEFAULT_ME_POOL_DRAIN_SOFT_EVICT_GRACE_SECS
+}
+
+pub(crate) fn default_me_pool_drain_soft_evict_per_writer() -> u8 {
+    DEFAULT_ME_POOL_DRAIN_SOFT_EVICT_PER_WRITER
+}
+
+pub(crate) fn default_me_pool_drain_soft_evict_budget_per_core() -> u16 {
+    DEFAULT_ME_POOL_DRAIN_SOFT_EVICT_BUDGET_PER_CORE
+}
+
+pub(crate) fn default_me_pool_drain_soft_evict_cooldown_ms() -> u64 {
+    DEFAULT_ME_POOL_DRAIN_SOFT_EVICT_COOLDOWN_MS
+}
 pub(crate) fn default_me_bind_stale_ttl_secs() -> u64 {
     default_me_pool_drain_ttl_secs()
 }
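Taken together, the new defaults above correspond to this `[general]` snippet. A sketch only: the TOML keys are assumed to match the serde field names one-to-one, as the `general.*` validation messages elsewhere in this compare suggest.
```toml
[general]
me_pool_drain_soft_evict_enabled = true      # staged eviction for over-TTL draining writers
me_pool_drain_soft_evict_grace_secs = 30     # extra grace after the drain TTL
me_pool_drain_soft_evict_per_writer = 1      # sessions evicted per writer per health tick
me_pool_drain_soft_evict_budget_per_core = 8 # per-core eviction budget per tick
me_pool_drain_soft_evict_cooldown_ms = 5000  # cooldown before re-evicting the same writer
```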

View File

@@ -56,6 +56,11 @@ pub struct HotFields {
     pub hardswap: bool,
     pub me_pool_drain_ttl_secs: u64,
     pub me_pool_drain_threshold: u64,
+    pub me_pool_drain_soft_evict_enabled: bool,
+    pub me_pool_drain_soft_evict_grace_secs: u64,
+    pub me_pool_drain_soft_evict_per_writer: u8,
+    pub me_pool_drain_soft_evict_budget_per_core: u16,
+    pub me_pool_drain_soft_evict_cooldown_ms: u64,
     pub me_pool_min_fresh_ratio: f32,
     pub me_reinit_drain_timeout_secs: u64,
     pub me_hardswap_warmup_delay_min_ms: u64,
@@ -138,6 +143,15 @@ impl HotFields {
             hardswap: cfg.general.hardswap,
             me_pool_drain_ttl_secs: cfg.general.me_pool_drain_ttl_secs,
             me_pool_drain_threshold: cfg.general.me_pool_drain_threshold,
+            me_pool_drain_soft_evict_enabled: cfg.general.me_pool_drain_soft_evict_enabled,
+            me_pool_drain_soft_evict_grace_secs: cfg.general.me_pool_drain_soft_evict_grace_secs,
+            me_pool_drain_soft_evict_per_writer: cfg.general.me_pool_drain_soft_evict_per_writer,
+            me_pool_drain_soft_evict_budget_per_core: cfg
+                .general
+                .me_pool_drain_soft_evict_budget_per_core,
+            me_pool_drain_soft_evict_cooldown_ms: cfg
+                .general
+                .me_pool_drain_soft_evict_cooldown_ms,
             me_pool_min_fresh_ratio: cfg.general.me_pool_min_fresh_ratio,
             me_reinit_drain_timeout_secs: cfg.general.me_reinit_drain_timeout_secs,
             me_hardswap_warmup_delay_min_ms: cfg.general.me_hardswap_warmup_delay_min_ms,
@@ -455,6 +469,15 @@ fn overlay_hot_fields(old: &ProxyConfig, new: &ProxyConfig) -> ProxyConfig {
     cfg.general.hardswap = new.general.hardswap;
     cfg.general.me_pool_drain_ttl_secs = new.general.me_pool_drain_ttl_secs;
     cfg.general.me_pool_drain_threshold = new.general.me_pool_drain_threshold;
+    cfg.general.me_pool_drain_soft_evict_enabled = new.general.me_pool_drain_soft_evict_enabled;
+    cfg.general.me_pool_drain_soft_evict_grace_secs =
+        new.general.me_pool_drain_soft_evict_grace_secs;
+    cfg.general.me_pool_drain_soft_evict_per_writer =
+        new.general.me_pool_drain_soft_evict_per_writer;
+    cfg.general.me_pool_drain_soft_evict_budget_per_core =
+        new.general.me_pool_drain_soft_evict_budget_per_core;
+    cfg.general.me_pool_drain_soft_evict_cooldown_ms =
+        new.general.me_pool_drain_soft_evict_cooldown_ms;
     cfg.general.me_pool_min_fresh_ratio = new.general.me_pool_min_fresh_ratio;
     cfg.general.me_reinit_drain_timeout_secs = new.general.me_reinit_drain_timeout_secs;
     cfg.general.me_hardswap_warmup_delay_min_ms = new.general.me_hardswap_warmup_delay_min_ms;
@@ -835,6 +858,25 @@ fn log_changes(
             old_hot.me_pool_drain_threshold, new_hot.me_pool_drain_threshold,
         );
     }
+    if old_hot.me_pool_drain_soft_evict_enabled != new_hot.me_pool_drain_soft_evict_enabled
+        || old_hot.me_pool_drain_soft_evict_grace_secs
+            != new_hot.me_pool_drain_soft_evict_grace_secs
+        || old_hot.me_pool_drain_soft_evict_per_writer
+            != new_hot.me_pool_drain_soft_evict_per_writer
+        || old_hot.me_pool_drain_soft_evict_budget_per_core
+            != new_hot.me_pool_drain_soft_evict_budget_per_core
+        || old_hot.me_pool_drain_soft_evict_cooldown_ms
+            != new_hot.me_pool_drain_soft_evict_cooldown_ms
+    {
+        info!(
+            "config reload: me_pool_drain_soft_evict: enabled={} grace={}s per_writer={} budget_per_core={} cooldown={}ms",
+            new_hot.me_pool_drain_soft_evict_enabled,
+            new_hot.me_pool_drain_soft_evict_grace_secs,
+            new_hot.me_pool_drain_soft_evict_per_writer,
+            new_hot.me_pool_drain_soft_evict_budget_per_core,
+            new_hot.me_pool_drain_soft_evict_cooldown_ms
+        );
+    }
     if (old_hot.me_pool_min_fresh_ratio - new_hot.me_pool_min_fresh_ratio).abs() > f32::EPSILON {
         info!(

View File

@@ -6,8 +6,9 @@ use std::net::{IpAddr, SocketAddr};
 use std::path::{Path, PathBuf};
 use rand::Rng;
+use serde::{Deserialize, Serialize};
+use shadowsocks::config::ServerConfig as ShadowsocksServerConfig;
 use tracing::warn;
-use serde::{Serialize, Deserialize};
 use crate::error::{ProxyError, Result};
@@ -122,13 +123,37 @@ fn sanitize_ad_tag(ad_tag: &mut Option<String>) {
     };
     if !is_valid_ad_tag(tag) {
-        warn!(
-            "Invalid general.ad_tag value, expected exactly 32 hex chars; ad_tag is disabled"
-        );
+        warn!("Invalid general.ad_tag value, expected exactly 32 hex chars; ad_tag is disabled");
         *ad_tag = None;
     }
 }
+
+fn validate_upstreams(config: &ProxyConfig) -> Result<()> {
+    let has_enabled_shadowsocks = config.upstreams.iter().any(|upstream| {
+        upstream.enabled && matches!(upstream.upstream_type, UpstreamType::Shadowsocks { .. })
+    });
+    if has_enabled_shadowsocks && config.general.use_middle_proxy {
+        return Err(ProxyError::Config(
+            "shadowsocks upstreams require general.use_middle_proxy = false".to_string(),
+        ));
+    }
+
+    for upstream in &config.upstreams {
+        if let UpstreamType::Shadowsocks { url, .. } = &upstream.upstream_type {
+            let parsed = ShadowsocksServerConfig::from_url(url)
+                .map_err(|error| ProxyError::Config(format!("invalid shadowsocks url: {error}")))?;
+            if parsed.plugin().is_some() {
+                return Err(ProxyError::Config(
+                    "shadowsocks plugins are not supported".to_string(),
+                ));
+            }
+        }
+    }
+
+    Ok(())
+}
 // ============= Main Config =============
 #[derive(Debug, Clone, Serialize, Deserialize, Default)]
@@ -180,7 +205,8 @@ impl ProxyConfig {
     pub(crate) fn load_with_metadata<P: AsRef<Path>>(path: P) -> Result<LoadedConfig> {
         let path = path.as_ref();
-        let content = std::fs::read_to_string(path).map_err(|e| ProxyError::Config(e.to_string()))?;
+        let content =
+            std::fs::read_to_string(path).map_err(|e| ProxyError::Config(e.to_string()))?;
         let base_dir = path.parent().unwrap_or(Path::new("."));
         let mut source_files = BTreeSet::new();
         source_files.insert(normalize_config_path(path));
@@ -207,15 +233,17 @@ impl ProxyConfig {
             .map(|table| table.contains_key("stun_servers"))
             .unwrap_or(false);
-        let mut config: ProxyConfig =
-            parsed_toml.try_into().map_err(|e| ProxyError::Config(e.to_string()))?;
+        let mut config: ProxyConfig = parsed_toml
+            .try_into()
+            .map_err(|e| ProxyError::Config(e.to_string()))?;
         if !update_every_is_explicit && (legacy_secret_is_explicit || legacy_config_is_explicit) {
             config.general.update_every = None;
         }
         let legacy_nat_stun = config.general.middle_proxy_nat_stun.take();
-        let legacy_nat_stun_servers = std::mem::take(&mut config.general.middle_proxy_nat_stun_servers);
+        let legacy_nat_stun_servers =
+            std::mem::take(&mut config.general.middle_proxy_nat_stun_servers);
         let legacy_nat_stun_used = legacy_nat_stun.is_some() || !legacy_nat_stun_servers.is_empty();
         if stun_servers_is_explicit {
             let mut explicit_stun_servers = Vec::new();
@@ -225,7 +253,9 @@ impl ProxyConfig {
             config.network.stun_servers = explicit_stun_servers;
             if legacy_nat_stun_used {
-                warn!("general.middle_proxy_nat_stun and general.middle_proxy_nat_stun_servers are ignored because network.stun_servers is explicitly set");
+                warn!(
+                    "general.middle_proxy_nat_stun and general.middle_proxy_nat_stun_servers are ignored because network.stun_servers is explicitly set"
+                );
             }
         } else {
             // Keep the default STUN pool unless network.stun_servers is explicitly overridden.
@@ -240,7 +270,9 @@ impl ProxyConfig {
             config.network.stun_servers = unified_stun_servers;
             if legacy_nat_stun_used {
-                warn!("general.middle_proxy_nat_stun and general.middle_proxy_nat_stun_servers are deprecated; use network.stun_servers");
+                warn!(
+                    "general.middle_proxy_nat_stun and general.middle_proxy_nat_stun_servers are deprecated; use network.stun_servers"
+                );
             }
         }
@@ -372,13 +404,15 @@ impl ProxyConfig {
         if !(4096..=1024 * 1024).contains(&config.general.direct_relay_copy_buf_c2s_bytes) {
             return Err(ProxyError::Config(
-                "general.direct_relay_copy_buf_c2s_bytes must be within [4096, 1048576]".to_string(),
+                "general.direct_relay_copy_buf_c2s_bytes must be within [4096, 1048576]"
+                    .to_string(),
             ));
         }
         if !(8192..=2 * 1024 * 1024).contains(&config.general.direct_relay_copy_buf_s2c_bytes) {
             return Err(ProxyError::Config(
-                "general.direct_relay_copy_buf_s2c_bytes must be within [8192, 2097152]".to_string(),
+                "general.direct_relay_copy_buf_s2c_bytes must be within [8192, 2097152]"
+                    .to_string(),
             ));
         }
@@ -406,6 +440,35 @@ impl ProxyConfig {
             ));
         }
+
+        if config.general.me_pool_drain_soft_evict_grace_secs > 3600 {
+            return Err(ProxyError::Config(
+                "general.me_pool_drain_soft_evict_grace_secs must be within [0, 3600]".to_string(),
+            ));
+        }
+
+        if config.general.me_pool_drain_soft_evict_per_writer == 0
+            || config.general.me_pool_drain_soft_evict_per_writer > 16
+        {
+            return Err(ProxyError::Config(
+                "general.me_pool_drain_soft_evict_per_writer must be within [1, 16]".to_string(),
+            ));
+        }
+
+        if config.general.me_pool_drain_soft_evict_budget_per_core == 0
+            || config.general.me_pool_drain_soft_evict_budget_per_core > 64
+        {
+            return Err(ProxyError::Config(
+                "general.me_pool_drain_soft_evict_budget_per_core must be within [1, 64]"
+                    .to_string(),
+            ));
+        }
+
+        if config.general.me_pool_drain_soft_evict_cooldown_ms == 0 {
+            return Err(ProxyError::Config(
+                "general.me_pool_drain_soft_evict_cooldown_ms must be > 0".to_string(),
+            ));
+        }
         if config.access.user_max_unique_ips_window_secs == 0 {
             return Err(ProxyError::Config(
                 "access.user_max_unique_ips_window_secs must be > 0".to_string(),
@@ -588,7 +651,8 @@ impl ProxyConfig {
         if !(1..=100).contains(&config.general.me_route_backpressure_high_watermark_pct) {
             return Err(ProxyError::Config(
-                "general.me_route_backpressure_high_watermark_pct must be within [1, 100]".to_string(),
+                "general.me_route_backpressure_high_watermark_pct must be within [1, 100]"
+                    .to_string(),
             ));
         }
@@ -750,11 +814,15 @@ impl ProxyConfig {
         crate::network::dns_overrides::validate_entries(&config.network.dns_overrides)?;
         if config.general.use_middle_proxy && config.network.ipv6 == Some(true) {
-            warn!("IPv6 with Middle Proxy is experimental and may cause KDF address mismatch; consider disabling IPv6 or ME");
+            warn!(
+                "IPv6 with Middle Proxy is experimental and may cause KDF address mismatch; consider disabling IPv6 or ME"
+            );
         }
         // Random fake_cert_len only when default is in use.
-        if !config.censorship.tls_emulation && config.censorship.fake_cert_len == default_fake_cert_len() {
+        if !config.censorship.tls_emulation
+            && config.censorship.fake_cert_len == default_fake_cert_len()
+        {
             config.censorship.fake_cert_len = rand::rng().gen_range(1024..4096);
         }
@@ -764,8 +832,7 @@ impl ProxyConfig {
         let listen_tcp = config.server.listen_tcp.unwrap_or_else(|| {
             if config.server.listen_unix_sock.is_some() {
                 // Unix socket present: TCP only if user explicitly set addresses or listeners.
-                config.server.listen_addr_ipv4.is_some()
-                    || !config.server.listeners.is_empty()
+                config.server.listen_addr_ipv4.is_some() || !config.server.listeners.is_empty()
             } else {
                 true
             }
@@ -773,7 +840,9 @@ impl ProxyConfig {
         // Migration: Populate listeners if empty (skip when listen_tcp = false).
         if config.server.listeners.is_empty() && listen_tcp {
-            let ipv4_str = config.server.listen_addr_ipv4
+            let ipv4_str = config
+                .server
+                .listen_addr_ipv4
                 .as_deref()
                 .unwrap_or("0.0.0.0");
             if let Ok(ipv4) = ipv4_str.parse::<IpAddr>() {
@@ -815,7 +884,10 @@ impl ProxyConfig {
         // Migration: Populate upstreams if empty (Default Direct).
         if config.upstreams.is_empty() {
             config.upstreams.push(UpstreamConfig {
-                upstream_type: UpstreamType::Direct { interface: None, bind_addresses: None },
+                upstream_type: UpstreamType::Direct {
+                    interface: None,
+                    bind_addresses: None,
+                },
                 weight: 1,
                 enabled: true,
                 scopes: String::new(),
@@ -829,6 +901,8 @@ impl ProxyConfig {
             .entry("203".to_string())
             .or_insert_with(|| vec!["91.105.192.100:443".to_string()]);
+
+        validate_upstreams(&config)?;
         Ok(LoadedConfig {
             config,
             source_files: source_files.into_iter().collect(),
@@ -875,6 +949,9 @@ impl ProxyConfig {
 mod tests {
     use super::*;
+
+    const TEST_SHADOWSOCKS_URL: &str =
+        "ss://2022-blake3-aes-256-gcm:MDEyMzQ1Njc4OTAxMjM0NTY3ODkwMTIzNDU2Nzg5MDE=@127.0.0.1:8388";
     #[test]
     fn serde_defaults_remain_unchanged_for_present_sections() {
         let toml = r#"
@@ -904,10 +981,7 @@ mod tests {
             cfg.general.me_init_retry_attempts,
             default_me_init_retry_attempts()
         );
-        assert_eq!(
-            cfg.general.me2dc_fallback,
-            default_me2dc_fallback()
-        );
+        assert_eq!(cfg.general.me2dc_fallback, default_me2dc_fallback());
         assert_eq!(
             cfg.general.proxy_config_v4_cache_path,
             default_proxy_config_v4_cache_path()
@@ -1216,11 +1290,12 @@ mod tests {
         let path = dir.join("telemt_dc_override_test.toml");
         std::fs::write(&path, toml).unwrap();
         let cfg = ProxyConfig::load(&path).unwrap();
-        assert!(cfg
-            .dc_overrides
-            .get("203")
-            .map(|v| v.contains(&"91.105.192.100:443".to_string()))
-            .unwrap_or(false));
+        assert!(
+            cfg.dc_overrides
+                .get("203")
+                .map(|v| v.contains(&"91.105.192.100:443".to_string()))
+                .unwrap_or(false)
+        );
         let _ = std::fs::remove_file(path);
     }
@@ -1407,11 +1482,9 @@ mod tests {
         let path = dir.join("telemt_me_adaptive_floor_min_writers_out_of_range_test.toml");
         std::fs::write(&path, toml).unwrap();
         let err = ProxyConfig::load(&path).unwrap_err().to_string();
-        assert!(
-            err.contains(
-                "general.me_adaptive_floor_min_writers_single_endpoint must be within [1, 32]"
-            )
-        );
+        assert!(err.contains(
+            "general.me_adaptive_floor_min_writers_single_endpoint must be within [1, 32]"
+        ));
         let _ = std::fs::remove_file(path);
     }
@@ -1997,6 +2070,124 @@ mod tests {
         let _ = std::fs::remove_file(path);
     }
+
+    #[test]
+    fn shadowsocks_upstream_url_loads_successfully() {
+        let toml = format!(
+            r#"
+[general]
+use_middle_proxy = false
+[censorship]
+tls_domain = "example.com"
+[access.users]
+user = "00000000000000000000000000000000"
+[[upstreams]]
+type = "shadowsocks"
+url = "{url}"
+interface = "127.0.0.2"
+"#,
+            url = TEST_SHADOWSOCKS_URL,
+        );
+        let dir = std::env::temp_dir();
+        let path = dir.join("telemt_shadowsocks_valid_test.toml");
+        std::fs::write(&path, toml).unwrap();
+        let cfg = ProxyConfig::load(&path).unwrap();
+        assert!(matches!(
+            &cfg.upstreams[0].upstream_type,
+            UpstreamType::Shadowsocks { url, interface }
+                if url == TEST_SHADOWSOCKS_URL && interface.as_deref() == Some("127.0.0.2")
+        ));
+        let _ = std::fs::remove_file(path);
+    }
+
+    #[test]
+    fn shadowsocks_requires_direct_mode() {
+        let toml = format!(
+            r#"
+[general]
+use_middle_proxy = true
+[censorship]
+tls_domain = "example.com"
+[access.users]
+user = "00000000000000000000000000000000"
+[[upstreams]]
+type = "shadowsocks"
+url = "{url}"
+"#,
+            url = TEST_SHADOWSOCKS_URL,
+        );
+        let dir = std::env::temp_dir();
+        let path = dir.join("telemt_shadowsocks_me_reject_test.toml");
+        std::fs::write(&path, toml).unwrap();
+        let err = ProxyConfig::load(&path).unwrap_err().to_string();
+        assert!(err.contains("shadowsocks upstreams require general.use_middle_proxy = false"));
+        let _ = std::fs::remove_file(path);
+    }
+
+    #[test]
+    fn invalid_shadowsocks_url_is_rejected() {
+        let toml = r#"
+[general]
+use_middle_proxy = false
+[censorship]
+tls_domain = "example.com"
+[access.users]
+user = "00000000000000000000000000000000"
+[[upstreams]]
+type = "shadowsocks"
+url = "not-a-valid-ss-url"
+"#;
+        let dir = std::env::temp_dir();
+        let path = dir.join("telemt_shadowsocks_invalid_url_test.toml");
+        std::fs::write(&path, toml).unwrap();
+        let err = ProxyConfig::load(&path).unwrap_err().to_string();
+        assert!(err.contains("invalid shadowsocks url"));
+        let _ = std::fs::remove_file(path);
+    }
+
+    #[test]
+    fn shadowsocks_plugins_are_rejected() {
+        let toml = format!(
+            r#"
+[general]
+use_middle_proxy = false
+[censorship]
+tls_domain = "example.com"
+[access.users]
+user = "00000000000000000000000000000000"
+[[upstreams]]
+type = "shadowsocks"
+url = "{url}?plugin=obfs-local%3Bobfs%3Dhttp"
+"#,
+            url = TEST_SHADOWSOCKS_URL,
+        );
+        let dir = std::env::temp_dir();
+        let path = dir.join("telemt_shadowsocks_plugin_reject_test.toml");
+        std::fs::write(&path, toml).unwrap();
+        let err = ProxyConfig::load(&path).unwrap_err().to_string();
+        assert!(err.contains("shadowsocks plugins are not supported"));
+        let _ = std::fs::remove_file(path);
+    }
     #[test]
     fn invalid_user_ad_tag_reports_access_user_ad_tags_key() {
         let toml = r#"

View File

@ -803,6 +803,26 @@ pub struct GeneralConfig {
#[serde(default = "default_me_pool_drain_threshold")] #[serde(default = "default_me_pool_drain_threshold")]
pub me_pool_drain_threshold: u64, pub me_pool_drain_threshold: u64,
/// Enable staged client eviction for draining ME writers that remain non-empty past TTL.
#[serde(default = "default_me_pool_drain_soft_evict_enabled")]
pub me_pool_drain_soft_evict_enabled: bool,
/// Extra grace in seconds after drain TTL before soft-eviction stage starts.
#[serde(default = "default_me_pool_drain_soft_evict_grace_secs")]
pub me_pool_drain_soft_evict_grace_secs: u64,
/// Maximum number of client sessions to evict from one draining writer per health tick.
#[serde(default = "default_me_pool_drain_soft_evict_per_writer")]
pub me_pool_drain_soft_evict_per_writer: u8,
/// Soft-eviction budget per CPU core for one health tick.
#[serde(default = "default_me_pool_drain_soft_evict_budget_per_core")]
pub me_pool_drain_soft_evict_budget_per_core: u16,
/// Cooldown for repetitive soft-eviction on the same writer in milliseconds.
#[serde(default = "default_me_pool_drain_soft_evict_cooldown_ms")]
pub me_pool_drain_soft_evict_cooldown_ms: u64,
/// Policy for new binds on stale draining writers. /// Policy for new binds on stale draining writers.
#[serde(default)] #[serde(default)]
pub me_bind_stale_mode: MeBindStaleMode, pub me_bind_stale_mode: MeBindStaleMode,
@ -916,24 +936,38 @@ impl Default for GeneralConfig {
me_reconnect_backoff_cap_ms: default_reconnect_backoff_cap_ms(),
me_reconnect_fast_retry_count: default_me_reconnect_fast_retry_count(),
me_single_endpoint_shadow_writers: default_me_single_endpoint_shadow_writers(),
me_single_endpoint_outage_mode_enabled: default_me_single_endpoint_outage_mode_enabled(),
me_single_endpoint_outage_disable_quarantine: default_me_single_endpoint_outage_disable_quarantine(),
me_single_endpoint_outage_backoff_min_ms: default_me_single_endpoint_outage_backoff_min_ms(),
me_single_endpoint_outage_backoff_max_ms: default_me_single_endpoint_outage_backoff_max_ms(),
me_single_endpoint_shadow_rotate_every_secs: default_me_single_endpoint_shadow_rotate_every_secs(),
me_floor_mode: MeFloorMode::default(),
me_adaptive_floor_idle_secs: default_me_adaptive_floor_idle_secs(),
me_adaptive_floor_min_writers_single_endpoint: default_me_adaptive_floor_min_writers_single_endpoint(),
me_adaptive_floor_min_writers_multi_endpoint: default_me_adaptive_floor_min_writers_multi_endpoint(),
me_adaptive_floor_recover_grace_secs: default_me_adaptive_floor_recover_grace_secs(),
me_adaptive_floor_writers_per_core_total: default_me_adaptive_floor_writers_per_core_total(),
me_adaptive_floor_cpu_cores_override: default_me_adaptive_floor_cpu_cores_override(),
me_adaptive_floor_max_extra_writers_single_per_core: default_me_adaptive_floor_max_extra_writers_single_per_core(),
me_adaptive_floor_max_extra_writers_multi_per_core: default_me_adaptive_floor_max_extra_writers_multi_per_core(),
me_adaptive_floor_max_active_writers_per_core: default_me_adaptive_floor_max_active_writers_per_core(),
me_adaptive_floor_max_warm_writers_per_core: default_me_adaptive_floor_max_warm_writers_per_core(),
me_adaptive_floor_max_active_writers_global: default_me_adaptive_floor_max_active_writers_global(),
me_adaptive_floor_max_warm_writers_global: default_me_adaptive_floor_max_warm_writers_global(),
upstream_connect_retry_attempts: default_upstream_connect_retry_attempts(),
upstream_connect_retry_backoff_ms: default_upstream_connect_retry_backoff_ms(),
upstream_connect_budget_ms: default_upstream_connect_budget_ms(),
@ -948,7 +982,8 @@ impl Default for GeneralConfig {
me_socks_kdf_policy: MeSocksKdfPolicy::Strict,
me_route_backpressure_base_timeout_ms: default_me_route_backpressure_base_timeout_ms(),
me_route_backpressure_high_timeout_ms: default_me_route_backpressure_high_timeout_ms(),
me_route_backpressure_high_watermark_pct: default_me_route_backpressure_high_watermark_pct(),
me_health_interval_ms_unhealthy: default_me_health_interval_ms_unhealthy(),
me_health_interval_ms_healthy: default_me_health_interval_ms_healthy(),
me_admission_poll_ms: default_me_admission_poll_ms(),
@ -972,7 +1007,8 @@ impl Default for GeneralConfig {
me_hardswap_warmup_delay_min_ms: default_me_hardswap_warmup_delay_min_ms(),
me_hardswap_warmup_delay_max_ms: default_me_hardswap_warmup_delay_max_ms(),
me_hardswap_warmup_extra_passes: default_me_hardswap_warmup_extra_passes(),
me_hardswap_warmup_pass_backoff_base_ms: default_me_hardswap_warmup_pass_backoff_base_ms(),
me_config_stable_snapshots: default_me_config_stable_snapshots(),
me_config_apply_cooldown_secs: default_me_config_apply_cooldown_secs(),
me_snapshot_require_http_2xx: default_me_snapshot_require_http_2xx(),
@ -984,6 +1020,13 @@ impl Default for GeneralConfig {
proxy_secret_len_max: default_proxy_secret_len_max(),
me_pool_drain_ttl_secs: default_me_pool_drain_ttl_secs(),
me_pool_drain_threshold: default_me_pool_drain_threshold(),
me_pool_drain_soft_evict_enabled: default_me_pool_drain_soft_evict_enabled(),
me_pool_drain_soft_evict_grace_secs: default_me_pool_drain_soft_evict_grace_secs(),
me_pool_drain_soft_evict_per_writer: default_me_pool_drain_soft_evict_per_writer(),
me_pool_drain_soft_evict_budget_per_core: default_me_pool_drain_soft_evict_budget_per_core(),
me_pool_drain_soft_evict_cooldown_ms: default_me_pool_drain_soft_evict_cooldown_ms(),
me_bind_stale_mode: MeBindStaleMode::default(),
me_bind_stale_ttl_secs: default_me_bind_stale_ttl_secs(),
me_pool_min_fresh_ratio: default_me_pool_min_fresh_ratio(),
@ -1008,8 +1051,10 @@ impl GeneralConfig {
/// Resolve the active updater interval for ME infrastructure refresh tasks.
/// `update_every` has priority, otherwise legacy proxy_*_auto_reload_secs are used.
pub fn effective_update_every_secs(&self) -> u64 {
self.update_every.unwrap_or_else(|| self.proxy_secret_auto_reload_secs.min(self.proxy_config_auto_reload_secs))
}
/// Resolve periodic zero-downtime reinit interval for ME writers.
@ -1156,9 +1201,17 @@ pub struct ServerConfig {
#[serde(default = "default_proxy_protocol_header_timeout_ms")]
pub proxy_protocol_header_timeout_ms: u64,
/// Port for the Prometheus-compatible metrics endpoint.
/// Enables metrics when set; binds on all interfaces (dual-stack) by default.
#[serde(default)]
pub metrics_port: Option<u16>,
/// Listen address for metrics in `IP:PORT` format (e.g. `"127.0.0.1:9090"`).
/// When set, takes precedence over `metrics_port` and binds on the specified address only.
#[serde(default)]
pub metrics_listen: Option<String>,
/// CIDR whitelist for the metrics endpoint.
#[serde(default = "default_metrics_whitelist")]
pub metrics_whitelist: Vec<IpNetwork>,
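To spell out the precedence described above: metrics_listen pins the exporter to one explicit address and wins when both fields are set, while metrics_port alone keeps the old dual-stack wildcard bind. A sketch with illustrative addresses:

[server]
# old style: dual-stack bind on 0.0.0.0 and [::] at this port
# metrics_port = 9090
# new style: single explicit address, takes precedence over metrics_port
metrics_listen = "127.0.0.1:9090"
metrics_whitelist = ["127.0.0.0/8"]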
@ -1186,6 +1239,7 @@ impl Default for ServerConfig {
proxy_protocol: false,
proxy_protocol_header_timeout_ms: default_proxy_protocol_header_timeout_ms(),
metrics_port: None,
metrics_listen: None,
metrics_whitelist: default_metrics_whitelist(),
api: ApiConfig::default(),
listeners: Vec::new(),
@ -1401,6 +1455,11 @@ pub enum UpstreamType {
#[serde(default)]
password: Option<String>,
},
Shadowsocks {
url: String,
#[serde(default)]
interface: Option<String>,
},
}
#[derive(Debug, Clone, Serialize, Deserialize)]
@ -1481,7 +1540,10 @@ impl ShowLink {
}
impl Serialize for ShowLink {
fn serialize<S: serde::Serializer>(&self, serializer: S) -> std::result::Result<S::Ok, S::Error> {
match self {
ShowLink::None => Vec::<String>::new().serialize(serializer),
ShowLink::All => serializer.serialize_str("*"),
@ -1491,7 +1553,9 @@ impl Serialize for ShowLink {
}
impl<'de> Deserialize<'de> for ShowLink {
fn deserialize<D: serde::Deserializer<'de>>(deserializer: D) -> std::result::Result<Self, D::Error> {
use serde::de;
struct ShowLinkVisitor;
@ -1507,14 +1571,14 @@ impl<'de> Deserialize<'de> for ShowLink {
if v == "*" {
Ok(ShowLink::All)
} else {
Err(de::Error::invalid_value(de::Unexpected::Str(v), &r#""*""#))
}
}
fn visit_seq<A: de::SeqAccess<'de>>(self, mut seq: A) -> std::result::Result<ShowLink, A::Error> {
let mut names = Vec::new();
while let Some(name) = seq.next_element::<String>()? {
names.push(name);

View File

@ -0,0 +1,450 @@
use std::collections::HashMap;
use std::net::{IpAddr, Ipv4Addr};
use std::sync::Arc;
use std::time::Duration;
use crate::config::UserMaxUniqueIpsMode;
use crate::ip_tracker::UserIpTracker;
fn ip_from_idx(idx: u32) -> IpAddr {
let a = 10u8;
let b = ((idx / 65_536) % 256) as u8;
let c = ((idx / 256) % 256) as u8;
let d = (idx % 256) as u8;
IpAddr::V4(Ipv4Addr::new(a, b, c, d))
}
#[tokio::test]
async fn active_window_enforces_large_unique_ip_burst() {
let tracker = UserIpTracker::new();
tracker.set_user_limit("burst_user", 64).await;
tracker
.set_limit_policy(UserMaxUniqueIpsMode::ActiveWindow, 30)
.await;
for idx in 0..64 {
assert!(tracker.check_and_add("burst_user", ip_from_idx(idx)).await.is_ok());
}
assert!(tracker.check_and_add("burst_user", ip_from_idx(9_999)).await.is_err());
assert_eq!(tracker.get_active_ip_count("burst_user").await, 64);
}
#[tokio::test]
async fn global_limit_applies_across_many_users() {
let tracker = UserIpTracker::new();
tracker.load_limits(3, &HashMap::new()).await;
for user_idx in 0..150u32 {
let user = format!("u{}", user_idx);
assert!(tracker.check_and_add(&user, ip_from_idx(user_idx * 10)).await.is_ok());
assert!(tracker
.check_and_add(&user, ip_from_idx(user_idx * 10 + 1))
.await
.is_ok());
assert!(tracker
.check_and_add(&user, ip_from_idx(user_idx * 10 + 2))
.await
.is_ok());
assert!(tracker
.check_and_add(&user, ip_from_idx(user_idx * 10 + 3))
.await
.is_err());
}
assert_eq!(tracker.get_stats().await.len(), 150);
}
#[tokio::test]
async fn user_zero_override_falls_back_to_global_limit() {
let tracker = UserIpTracker::new();
let mut limits = HashMap::new();
limits.insert("target".to_string(), 0);
tracker.load_limits(2, &limits).await;
assert!(tracker.check_and_add("target", ip_from_idx(1)).await.is_ok());
assert!(tracker.check_and_add("target", ip_from_idx(2)).await.is_ok());
assert!(tracker.check_and_add("target", ip_from_idx(3)).await.is_err());
assert_eq!(tracker.get_user_limit("target").await, Some(2));
}
#[tokio::test]
async fn remove_ip_is_idempotent_after_counter_reaches_zero() {
let tracker = UserIpTracker::new();
tracker.set_user_limit("u", 2).await;
let ip = ip_from_idx(42);
tracker.check_and_add("u", ip).await.unwrap();
tracker.remove_ip("u", ip).await;
tracker.remove_ip("u", ip).await;
tracker.remove_ip("u", ip).await;
assert_eq!(tracker.get_active_ip_count("u").await, 0);
assert!(!tracker.is_ip_active("u", ip).await);
}
#[tokio::test]
async fn clear_user_ips_resets_active_and_recent() {
let tracker = UserIpTracker::new();
tracker.set_user_limit("u", 10).await;
for idx in 0..6 {
tracker.check_and_add("u", ip_from_idx(idx)).await.unwrap();
}
tracker.clear_user_ips("u").await;
assert_eq!(tracker.get_active_ip_count("u").await, 0);
let counts = tracker
.get_recent_counts_for_users(&["u".to_string()])
.await;
assert_eq!(counts.get("u").copied().unwrap_or(0), 0);
}
#[tokio::test]
async fn clear_all_resets_multi_user_state() {
let tracker = UserIpTracker::new();
for user_idx in 0..80u32 {
let user = format!("u{}", user_idx);
for ip_idx in 0..3 {
tracker
.check_and_add(&user, ip_from_idx(user_idx * 100 + ip_idx))
.await
.unwrap();
}
}
tracker.clear_all().await;
assert!(tracker.get_stats().await.is_empty());
let users = (0..80u32)
.map(|idx| format!("u{}", idx))
.collect::<Vec<_>>();
let recent = tracker.get_recent_counts_for_users(&users).await;
assert!(recent.values().all(|count| *count == 0));
}
#[tokio::test]
async fn get_active_ips_for_users_are_sorted() {
let tracker = UserIpTracker::new();
tracker.set_user_limit("user", 10).await;
tracker
.check_and_add("user", IpAddr::V4(Ipv4Addr::new(10, 0, 0, 9)))
.await
.unwrap();
tracker
.check_and_add("user", IpAddr::V4(Ipv4Addr::new(10, 0, 0, 1)))
.await
.unwrap();
tracker
.check_and_add("user", IpAddr::V4(Ipv4Addr::new(10, 0, 0, 5)))
.await
.unwrap();
let map = tracker
.get_active_ips_for_users(&["user".to_string()])
.await;
let ips = map.get("user").cloned().unwrap_or_default();
assert_eq!(
ips,
vec![
IpAddr::V4(Ipv4Addr::new(10, 0, 0, 1)),
IpAddr::V4(Ipv4Addr::new(10, 0, 0, 5)),
IpAddr::V4(Ipv4Addr::new(10, 0, 0, 9)),
]
);
}
#[tokio::test]
async fn get_recent_ips_for_users_are_sorted() {
let tracker = UserIpTracker::new();
tracker.set_user_limit("user", 10).await;
tracker
.check_and_add("user", IpAddr::V4(Ipv4Addr::new(10, 1, 0, 9)))
.await
.unwrap();
tracker
.check_and_add("user", IpAddr::V4(Ipv4Addr::new(10, 1, 0, 1)))
.await
.unwrap();
tracker
.check_and_add("user", IpAddr::V4(Ipv4Addr::new(10, 1, 0, 5)))
.await
.unwrap();
let map = tracker
.get_recent_ips_for_users(&["user".to_string()])
.await;
let ips = map.get("user").cloned().unwrap_or_default();
assert_eq!(
ips,
vec![
IpAddr::V4(Ipv4Addr::new(10, 1, 0, 1)),
IpAddr::V4(Ipv4Addr::new(10, 1, 0, 5)),
IpAddr::V4(Ipv4Addr::new(10, 1, 0, 9)),
]
);
}
#[tokio::test]
async fn time_window_expires_for_large_rotation() {
let tracker = UserIpTracker::new();
tracker.set_user_limit("tw", 1).await;
tracker
.set_limit_policy(UserMaxUniqueIpsMode::TimeWindow, 1)
.await;
tracker.check_and_add("tw", ip_from_idx(1)).await.unwrap();
tracker.remove_ip("tw", ip_from_idx(1)).await;
assert!(tracker.check_and_add("tw", ip_from_idx(2)).await.is_err());
tokio::time::sleep(Duration::from_millis(1_100)).await;
assert!(tracker.check_and_add("tw", ip_from_idx(2)).await.is_ok());
}
#[tokio::test]
async fn combined_mode_blocks_recent_after_disconnect() {
let tracker = UserIpTracker::new();
tracker.set_user_limit("cmb", 1).await;
tracker
.set_limit_policy(UserMaxUniqueIpsMode::Combined, 2)
.await;
tracker.check_and_add("cmb", ip_from_idx(11)).await.unwrap();
tracker.remove_ip("cmb", ip_from_idx(11)).await;
assert!(tracker.check_and_add("cmb", ip_from_idx(12)).await.is_err());
}
#[tokio::test]
async fn load_limits_replaces_large_limit_map() {
let tracker = UserIpTracker::new();
let mut first = HashMap::new();
let mut second = HashMap::new();
for idx in 0..300usize {
first.insert(format!("u{}", idx), 2usize);
}
for idx in 150..450usize {
second.insert(format!("u{}", idx), 4usize);
}
tracker.load_limits(0, &first).await;
tracker.load_limits(0, &second).await;
assert_eq!(tracker.get_user_limit("u20").await, None);
assert_eq!(tracker.get_user_limit("u200").await, Some(4));
assert_eq!(tracker.get_user_limit("u420").await, Some(4));
}
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
async fn concurrent_same_user_unique_ip_pressure_stays_bounded() {
let tracker = Arc::new(UserIpTracker::new());
tracker.set_user_limit("hot", 32).await;
tracker
.set_limit_policy(UserMaxUniqueIpsMode::ActiveWindow, 30)
.await;
let mut handles = Vec::new();
for worker in 0..16u32 {
let tracker_cloned = tracker.clone();
handles.push(tokio::spawn(async move {
let base = worker * 200;
for step in 0..200u32 {
let _ = tracker_cloned
.check_and_add("hot", ip_from_idx(base + step))
.await;
}
}));
}
for handle in handles {
handle.await.unwrap();
}
assert!(tracker.get_active_ip_count("hot").await <= 32);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
async fn concurrent_many_users_isolate_limits() {
let tracker = Arc::new(UserIpTracker::new());
tracker.load_limits(4, &HashMap::new()).await;
let mut handles = Vec::new();
for user_idx in 0..120u32 {
let tracker_cloned = tracker.clone();
handles.push(tokio::spawn(async move {
let user = format!("u{}", user_idx);
for ip_idx in 0..10u32 {
let _ = tracker_cloned
.check_and_add(&user, ip_from_idx(user_idx * 1_000 + ip_idx))
.await;
}
}));
}
for handle in handles {
handle.await.unwrap();
}
let stats = tracker.get_stats().await;
assert_eq!(stats.len(), 120);
assert!(stats.iter().all(|(_, active, limit)| *active <= 4 && *limit == 4));
}
#[tokio::test]
async fn same_ip_reconnect_high_frequency_keeps_single_unique() {
let tracker = UserIpTracker::new();
tracker.set_user_limit("same", 2).await;
let ip = ip_from_idx(9);
for _ in 0..2_000 {
tracker.check_and_add("same", ip).await.unwrap();
}
assert_eq!(tracker.get_active_ip_count("same").await, 1);
assert!(tracker.is_ip_active("same", ip).await);
}
#[tokio::test]
async fn format_stats_contains_expected_limited_and_unlimited_markers() {
let tracker = UserIpTracker::new();
tracker.set_user_limit("limited", 2).await;
tracker.check_and_add("limited", ip_from_idx(1)).await.unwrap();
tracker.check_and_add("open", ip_from_idx(2)).await.unwrap();
let text = tracker.format_stats().await;
assert!(text.contains("limited"));
assert!(text.contains("open"));
assert!(text.contains("unlimited"));
}
#[tokio::test]
async fn stats_report_global_default_for_users_without_override() {
let tracker = UserIpTracker::new();
tracker.load_limits(5, &HashMap::new()).await;
tracker.check_and_add("a", ip_from_idx(1)).await.unwrap();
tracker.check_and_add("b", ip_from_idx(2)).await.unwrap();
let stats = tracker.get_stats().await;
assert!(stats.iter().any(|(user, _, limit)| user == "a" && *limit == 5));
assert!(stats.iter().any(|(user, _, limit)| user == "b" && *limit == 5));
}
#[tokio::test]
async fn stress_cycle_add_remove_clear_preserves_empty_end_state() {
let tracker = UserIpTracker::new();
for cycle in 0..50u32 {
let user = format!("cycle{}", cycle);
tracker.set_user_limit(&user, 128).await;
for ip_idx in 0..128u32 {
tracker
.check_and_add(&user, ip_from_idx(cycle * 10_000 + ip_idx))
.await
.unwrap();
}
for ip_idx in 0..128u32 {
tracker
.remove_ip(&user, ip_from_idx(cycle * 10_000 + ip_idx))
.await;
}
tracker.clear_user_ips(&user).await;
}
assert!(tracker.get_stats().await.is_empty());
}
#[tokio::test]
async fn remove_unknown_user_or_ip_does_not_corrupt_state() {
let tracker = UserIpTracker::new();
tracker.remove_ip("no_user", ip_from_idx(1)).await;
tracker.check_and_add("x", ip_from_idx(2)).await.unwrap();
tracker.remove_ip("x", ip_from_idx(3)).await;
assert_eq!(tracker.get_active_ip_count("x").await, 1);
assert!(tracker.is_ip_active("x", ip_from_idx(2)).await);
}
#[tokio::test]
async fn active_and_recent_views_match_after_mixed_workload() {
let tracker = UserIpTracker::new();
tracker.set_user_limit("mix", 16).await;
for ip_idx in 0..12u32 {
tracker.check_and_add("mix", ip_from_idx(ip_idx)).await.unwrap();
}
for ip_idx in 0..6u32 {
tracker.remove_ip("mix", ip_from_idx(ip_idx)).await;
}
let active = tracker
.get_active_ips_for_users(&["mix".to_string()])
.await
.get("mix")
.cloned()
.unwrap_or_default();
let recent_count = tracker
.get_recent_counts_for_users(&["mix".to_string()])
.await
.get("mix")
.copied()
.unwrap_or(0);
assert_eq!(active.len(), 6);
assert!(recent_count >= active.len());
assert!(recent_count <= 12);
}
#[tokio::test]
async fn global_limit_switch_updates_enforcement_immediately() {
let tracker = UserIpTracker::new();
tracker.load_limits(2, &HashMap::new()).await;
assert!(tracker.check_and_add("u", ip_from_idx(1)).await.is_ok());
assert!(tracker.check_and_add("u", ip_from_idx(2)).await.is_ok());
assert!(tracker.check_and_add("u", ip_from_idx(3)).await.is_err());
tracker.clear_user_ips("u").await;
tracker.load_limits(4, &HashMap::new()).await;
assert!(tracker.check_and_add("u", ip_from_idx(1)).await.is_ok());
assert!(tracker.check_and_add("u", ip_from_idx(2)).await.is_ok());
assert!(tracker.check_and_add("u", ip_from_idx(3)).await.is_ok());
assert!(tracker.check_and_add("u", ip_from_idx(4)).await.is_ok());
assert!(tracker.check_and_add("u", ip_from_idx(5)).await.is_err());
}
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
async fn concurrent_reconnect_and_disconnect_preserves_non_negative_counts() {
let tracker = Arc::new(UserIpTracker::new());
tracker.set_user_limit("cc", 8).await;
let mut handles = Vec::new();
for worker in 0..8u32 {
let tracker_cloned = tracker.clone();
handles.push(tokio::spawn(async move {
let ip = ip_from_idx(50 + worker);
for _ in 0..500u32 {
let _ = tracker_cloned.check_and_add("cc", ip).await;
tracker_cloned.remove_ip("cc", ip).await;
}
}));
}
for handle in handles {
handle.await.unwrap();
}
assert!(tracker.get_active_ip_count("cc").await <= 8);
}

View File

@ -238,6 +238,11 @@ pub(crate) async fn initialize_me_pool(
config.general.hardswap,
config.general.me_pool_drain_ttl_secs,
config.general.me_pool_drain_threshold,
config.general.me_pool_drain_soft_evict_enabled,
config.general.me_pool_drain_soft_evict_grace_secs,
config.general.me_pool_drain_soft_evict_per_writer,
config.general.me_pool_drain_soft_evict_budget_per_core,
config.general.me_pool_drain_soft_evict_cooldown_ms,
config.general.effective_me_pool_force_close_secs(),
config.general.me_pool_min_fresh_ratio,
config.general.me_hardswap_warmup_delay_min_ms,

View File

@ -476,7 +476,7 @@ pub async fn run() -> std::result::Result<(), Box<dyn std::error::Error>> {
Duration::from_secs(config.access.replay_window_secs),
));
let buffer_pool = Arc::new(BufferPool::with_config(64 * 1024, 4096)); // was 16 * 1024
connectivity::run_startup_connectivity(
&config,

View File

@ -279,11 +279,32 @@ pub(crate) async fn spawn_metrics_if_configured(
ip_tracker: Arc<UserIpTracker>,
config_rx: watch::Receiver<Arc<ProxyConfig>>,
) {
// metrics_listen takes precedence; fall back to metrics_port for backward compat.
let metrics_target: Option<(u16, Option<String>)> =
if let Some(ref listen) = config.server.metrics_listen {
match listen.parse::<std::net::SocketAddr>() {
Ok(addr) => Some((addr.port(), Some(listen.clone()))),
Err(e) => {
startup_tracker
.skip_component(
COMPONENT_METRICS_START,
Some(format!("invalid metrics_listen \"{}\": {}", listen, e)),
)
.await;
None
}
}
} else {
config.server.metrics_port.map(|p| (p, None))
};
if let Some((port, listen)) = metrics_target {
let fallback_label = format!("port {}", port);
let label = listen.as_deref().unwrap_or(&fallback_label);
startup_tracker
.start_component(
COMPONENT_METRICS_START,
Some(format!("spawn metrics endpoint on {}", label)),
)
.await;
let stats = stats.clone();
@ -294,6 +315,7 @@ pub(crate) async fn spawn_metrics_if_configured(
tokio::spawn(async move {
metrics::serve(
port,
listen,
stats,
beobachten,
ip_tracker_metrics,
@ -308,7 +330,7 @@ pub(crate) async fn spawn_metrics_if_configured(
Some("metrics task spawned".to_string()),
)
.await;
} else if config.server.metrics_listen.is_none() {
startup_tracker
.skip_component(
COMPONENT_METRICS_START,

View File

@ -6,6 +6,8 @@ mod config;
mod crypto;
mod error;
mod ip_tracker;
#[cfg(test)]
mod ip_tracker_regression_tests;
mod maestro;
mod metrics;
mod network;

View File

@ -21,6 +21,7 @@ use crate::transport::{ListenOptions, create_listener};
pub async fn serve(
port: u16,
listen: Option<String>,
stats: Arc<Stats>,
beobachten: Arc<BeobachtenStore>,
ip_tracker: Arc<UserIpTracker>,
@ -28,6 +29,33 @@ pub async fn serve(
whitelist: Vec<IpNetwork>,
) {
let whitelist = Arc::new(whitelist);
// If `metrics_listen` is set, bind on that single address only.
if let Some(ref listen_addr) = listen {
let addr: SocketAddr = match listen_addr.parse() {
Ok(a) => a,
Err(e) => {
warn!(error = %e, "Invalid metrics_listen address: {}", listen_addr);
return;
}
};
let is_ipv6 = addr.is_ipv6();
match bind_metrics_listener(addr, is_ipv6) {
Ok(listener) => {
info!("Metrics endpoint: http://{}/metrics and /beobachten", addr);
serve_listener(
listener, stats, beobachten, ip_tracker, config_rx, whitelist,
)
.await;
}
Err(e) => {
warn!(error = %e, "Failed to bind metrics on {}", addr);
}
}
return;
}
// Fallback: bind on 0.0.0.0 and [::] using metrics_port.
let mut listener_v4 = None;
let mut listener_v6 = None;
@ -264,6 +292,109 @@ async fn render_metrics(stats: &Stats, config: &ProxyConfig, ip_tracker: &UserIp
"telemt_connections_bad_total {}",
if core_enabled { stats.get_connects_bad() } else { 0 }
);
let _ = writeln!(out, "# HELP telemt_connections_current Current active connections");
let _ = writeln!(out, "# TYPE telemt_connections_current gauge");
let _ = writeln!(
out,
"telemt_connections_current {}",
if core_enabled {
stats.get_current_connections_total()
} else {
0
}
);
let _ = writeln!(out, "# HELP telemt_connections_direct_current Current active direct connections");
let _ = writeln!(out, "# TYPE telemt_connections_direct_current gauge");
let _ = writeln!(
out,
"telemt_connections_direct_current {}",
if core_enabled {
stats.get_current_connections_direct()
} else {
0
}
);
let _ = writeln!(out, "# HELP telemt_connections_me_current Current active middle-end connections");
let _ = writeln!(out, "# TYPE telemt_connections_me_current gauge");
let _ = writeln!(
out,
"telemt_connections_me_current {}",
if core_enabled {
stats.get_current_connections_me()
} else {
0
}
);
let _ = writeln!(
out,
"# HELP telemt_relay_adaptive_promotions_total Adaptive relay tier promotions"
);
let _ = writeln!(out, "# TYPE telemt_relay_adaptive_promotions_total counter");
let _ = writeln!(
out,
"telemt_relay_adaptive_promotions_total {}",
if core_enabled {
stats.get_relay_adaptive_promotions_total()
} else {
0
}
);
let _ = writeln!(
out,
"# HELP telemt_relay_adaptive_demotions_total Adaptive relay tier demotions"
);
let _ = writeln!(out, "# TYPE telemt_relay_adaptive_demotions_total counter");
let _ = writeln!(
out,
"telemt_relay_adaptive_demotions_total {}",
if core_enabled {
stats.get_relay_adaptive_demotions_total()
} else {
0
}
);
let _ = writeln!(
out,
"# HELP telemt_relay_adaptive_hard_promotions_total Adaptive relay hard promotions triggered by write pressure"
);
let _ = writeln!(
out,
"# TYPE telemt_relay_adaptive_hard_promotions_total counter"
);
let _ = writeln!(
out,
"telemt_relay_adaptive_hard_promotions_total {}",
if core_enabled {
stats.get_relay_adaptive_hard_promotions_total()
} else {
0
}
);
let _ = writeln!(out, "# HELP telemt_reconnect_evict_total Reconnect-driven session evictions");
let _ = writeln!(out, "# TYPE telemt_reconnect_evict_total counter");
let _ = writeln!(
out,
"telemt_reconnect_evict_total {}",
if core_enabled {
stats.get_reconnect_evict_total()
} else {
0
}
);
let _ = writeln!(
out,
"# HELP telemt_reconnect_stale_close_total Sessions closed because they became stale after reconnect"
);
let _ = writeln!(out, "# TYPE telemt_reconnect_stale_close_total counter");
let _ = writeln!(
out,
"telemt_reconnect_stale_close_total {}",
if core_enabled {
stats.get_reconnect_stale_close_total()
} else {
0
}
);
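// For reference, the blocks above emit standard Prometheus text exposition,
// so a scrape will contain lines of this shape (values illustrative):
//   # TYPE telemt_connections_current gauge
//   telemt_connections_current 42
//   # TYPE telemt_reconnect_evict_total counter
//   telemt_reconnect_evict_total 3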
let _ = writeln!(out, "# HELP telemt_handshake_timeouts_total Handshake timeouts");
let _ = writeln!(out, "# TYPE telemt_handshake_timeouts_total counter");
@ -1519,6 +1650,36 @@ async fn render_metrics(stats: &Stats, config: &ProxyConfig, ip_tracker: &UserIp
}
);
let _ = writeln!(
out,
"# HELP telemt_pool_drain_soft_evict_total Soft-evicted client sessions on stuck draining writers"
);
let _ = writeln!(out, "# TYPE telemt_pool_drain_soft_evict_total counter");
let _ = writeln!(
out,
"telemt_pool_drain_soft_evict_total {}",
if me_allows_normal {
stats.get_pool_drain_soft_evict_total()
} else {
0
}
);
let _ = writeln!(
out,
"# HELP telemt_pool_drain_soft_evict_writer_total Draining writers with at least one soft eviction"
);
let _ = writeln!(out, "# TYPE telemt_pool_drain_soft_evict_writer_total counter");
let _ = writeln!(
out,
"telemt_pool_drain_soft_evict_writer_total {}",
if me_allows_normal {
stats.get_pool_drain_soft_evict_writer_total()
} else {
0
}
);
let _ = writeln!(out, "# HELP telemt_pool_stale_pick_total Stale writer fallback picks for new binds");
let _ = writeln!(out, "# TYPE telemt_pool_stale_pick_total counter");
let _ = writeln!(
@ -1836,6 +1997,8 @@ mod tests {
stats.increment_connects_all();
stats.increment_connects_all();
stats.increment_connects_bad();
stats.increment_current_connections_direct();
stats.increment_current_connections_me();
stats.increment_handshake_timeouts();
stats.increment_upstream_connect_attempt_total();
stats.increment_upstream_connect_attempt_total();
@ -1867,6 +2030,9 @@ mod tests {
assert!(output.contains("telemt_connections_total 2"));
assert!(output.contains("telemt_connections_bad_total 1"));
assert!(output.contains("telemt_connections_current 2"));
assert!(output.contains("telemt_connections_direct_current 1"));
assert!(output.contains("telemt_connections_me_current 1"));
assert!(output.contains("telemt_handshake_timeouts_total 1"));
assert!(output.contains("telemt_upstream_connect_attempt_total 2"));
assert!(output.contains("telemt_upstream_connect_success_total 1"));
@ -1909,6 +2075,9 @@ mod tests {
let output = render_metrics(&stats, &config, &tracker).await;
assert!(output.contains("telemt_connections_total 0"));
assert!(output.contains("telemt_connections_bad_total 0"));
assert!(output.contains("telemt_connections_current 0"));
assert!(output.contains("telemt_connections_direct_current 0"));
assert!(output.contains("telemt_connections_me_current 0"));
assert!(output.contains("telemt_handshake_timeouts_total 0"));
assert!(output.contains("telemt_user_unique_ips_current{user="));
assert!(output.contains("telemt_user_unique_ips_recent_window{user="));
@ -1942,11 +2111,21 @@ mod tests {
assert!(output.contains("# TYPE telemt_uptime_seconds gauge"));
assert!(output.contains("# TYPE telemt_connections_total counter"));
assert!(output.contains("# TYPE telemt_connections_bad_total counter"));
assert!(output.contains("# TYPE telemt_connections_current gauge"));
assert!(output.contains("# TYPE telemt_connections_direct_current gauge"));
assert!(output.contains("# TYPE telemt_connections_me_current gauge"));
assert!(output.contains("# TYPE telemt_relay_adaptive_promotions_total counter"));
assert!(output.contains("# TYPE telemt_relay_adaptive_demotions_total counter"));
assert!(output.contains("# TYPE telemt_relay_adaptive_hard_promotions_total counter"));
assert!(output.contains("# TYPE telemt_reconnect_evict_total counter"));
assert!(output.contains("# TYPE telemt_reconnect_stale_close_total counter"));
assert!(output.contains("# TYPE telemt_handshake_timeouts_total counter"));
assert!(output.contains("# TYPE telemt_upstream_connect_attempt_total counter"));
assert!(output.contains("# TYPE telemt_me_rpc_proxy_req_signal_sent_total counter"));
assert!(output.contains("# TYPE telemt_me_idle_close_by_peer_total counter"));
assert!(output.contains("# TYPE telemt_me_writer_removed_total counter"));
assert!(output.contains("# TYPE telemt_pool_drain_soft_evict_total counter"));
assert!(output.contains("# TYPE telemt_pool_drain_soft_evict_writer_total counter"));
assert!(output.contains(
"# TYPE telemt_me_writer_removed_unexpected_minus_restored_total gauge"
));

View File

@ -0,0 +1,383 @@
use dashmap::DashMap;
use std::cmp::max;
use std::sync::OnceLock;
use std::time::{Duration, Instant};
const EMA_ALPHA: f64 = 0.2;
const PROFILE_TTL: Duration = Duration::from_secs(300);
const THROUGHPUT_UP_BPS: f64 = 8_000_000.0;
const THROUGHPUT_DOWN_BPS: f64 = 2_000_000.0;
const RATIO_CONFIRM_THRESHOLD: f64 = 1.12;
const TIER1_HOLD_TICKS: u32 = 8;
const TIER2_HOLD_TICKS: u32 = 4;
const QUIET_DEMOTE_TICKS: u32 = 480;
const HARD_COOLDOWN_TICKS: u32 = 20;
const HARD_PENDING_THRESHOLD: u32 = 3;
const HARD_PARTIAL_RATIO_THRESHOLD: f64 = 0.25;
const DIRECT_C2S_CAP_BYTES: usize = 128 * 1024;
const DIRECT_S2C_CAP_BYTES: usize = 512 * 1024;
const ME_FRAMES_CAP: usize = 96;
const ME_BYTES_CAP: usize = 384 * 1024;
const ME_DELAY_MIN_US: u64 = 150;
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub enum AdaptiveTier {
Base = 0,
Tier1 = 1,
Tier2 = 2,
Tier3 = 3,
}
impl AdaptiveTier {
pub fn promote(self) -> Self {
match self {
Self::Base => Self::Tier1,
Self::Tier1 => Self::Tier2,
Self::Tier2 => Self::Tier3,
Self::Tier3 => Self::Tier3,
}
}
pub fn demote(self) -> Self {
match self {
Self::Base => Self::Base,
Self::Tier1 => Self::Base,
Self::Tier2 => Self::Tier1,
Self::Tier3 => Self::Tier2,
}
}
fn ratio(self) -> (usize, usize) {
match self {
Self::Base => (1, 1),
Self::Tier1 => (5, 4),
Self::Tier2 => (3, 2),
Self::Tier3 => (2, 1),
}
}
pub fn as_u8(self) -> u8 {
self as u8
}
}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum TierTransitionReason {
SoftConfirmed,
HardPressure,
QuietDemotion,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct TierTransition {
pub from: AdaptiveTier,
pub to: AdaptiveTier,
pub reason: TierTransitionReason,
}
#[derive(Debug, Clone, Copy, Default)]
pub struct RelaySignalSample {
pub c2s_bytes: u64,
pub s2c_requested_bytes: u64,
pub s2c_written_bytes: u64,
pub s2c_write_ops: u64,
pub s2c_partial_writes: u64,
pub s2c_consecutive_pending_writes: u32,
}
#[derive(Debug, Clone, Copy)]
pub struct SessionAdaptiveController {
tier: AdaptiveTier,
max_tier_seen: AdaptiveTier,
throughput_ema_bps: f64,
incoming_ema_bps: f64,
outgoing_ema_bps: f64,
tier1_hold_ticks: u32,
tier2_hold_ticks: u32,
quiet_ticks: u32,
hard_cooldown_ticks: u32,
}
impl SessionAdaptiveController {
pub fn new(initial_tier: AdaptiveTier) -> Self {
Self {
tier: initial_tier,
max_tier_seen: initial_tier,
throughput_ema_bps: 0.0,
incoming_ema_bps: 0.0,
outgoing_ema_bps: 0.0,
tier1_hold_ticks: 0,
tier2_hold_ticks: 0,
quiet_ticks: 0,
hard_cooldown_ticks: 0,
}
}
pub fn max_tier_seen(&self) -> AdaptiveTier {
self.max_tier_seen
}
pub fn observe(&mut self, sample: RelaySignalSample, tick_secs: f64) -> Option<TierTransition> {
if tick_secs <= f64::EPSILON {
return None;
}
if self.hard_cooldown_ticks > 0 {
self.hard_cooldown_ticks -= 1;
}
let c2s_bps = (sample.c2s_bytes as f64 * 8.0) / tick_secs;
let incoming_bps = (sample.s2c_requested_bytes as f64 * 8.0) / tick_secs;
let outgoing_bps = (sample.s2c_written_bytes as f64 * 8.0) / tick_secs;
let throughput = c2s_bps.max(outgoing_bps);
self.throughput_ema_bps = ema(self.throughput_ema_bps, throughput);
self.incoming_ema_bps = ema(self.incoming_ema_bps, incoming_bps);
self.outgoing_ema_bps = ema(self.outgoing_ema_bps, outgoing_bps);
let tier1_now = self.throughput_ema_bps >= THROUGHPUT_UP_BPS;
if tier1_now {
self.tier1_hold_ticks = self.tier1_hold_ticks.saturating_add(1);
} else {
self.tier1_hold_ticks = 0;
}
let ratio = if self.outgoing_ema_bps <= f64::EPSILON {
0.0
} else {
self.incoming_ema_bps / self.outgoing_ema_bps
};
let tier2_now = ratio >= RATIO_CONFIRM_THRESHOLD;
if tier2_now {
self.tier2_hold_ticks = self.tier2_hold_ticks.saturating_add(1);
} else {
self.tier2_hold_ticks = 0;
}
let partial_ratio = if sample.s2c_write_ops == 0 {
0.0
} else {
sample.s2c_partial_writes as f64 / sample.s2c_write_ops as f64
};
let hard_now = sample.s2c_consecutive_pending_writes >= HARD_PENDING_THRESHOLD
|| partial_ratio >= HARD_PARTIAL_RATIO_THRESHOLD;
if hard_now && self.hard_cooldown_ticks == 0 {
return self.promote(TierTransitionReason::HardPressure, HARD_COOLDOWN_TICKS);
}
if self.tier1_hold_ticks >= TIER1_HOLD_TICKS && self.tier2_hold_ticks >= TIER2_HOLD_TICKS {
return self.promote(TierTransitionReason::SoftConfirmed, 0);
}
let demote_candidate = self.throughput_ema_bps < THROUGHPUT_DOWN_BPS && !tier2_now && !hard_now;
if demote_candidate {
self.quiet_ticks = self.quiet_ticks.saturating_add(1);
if self.quiet_ticks >= QUIET_DEMOTE_TICKS {
self.quiet_ticks = 0;
return self.demote(TierTransitionReason::QuietDemotion);
}
} else {
self.quiet_ticks = 0;
}
None
}
fn promote(
&mut self,
reason: TierTransitionReason,
hard_cooldown_ticks: u32,
) -> Option<TierTransition> {
let from = self.tier;
let to = from.promote();
if from == to {
return None;
}
self.tier = to;
self.max_tier_seen = max(self.max_tier_seen, to);
self.hard_cooldown_ticks = hard_cooldown_ticks;
self.tier1_hold_ticks = 0;
self.tier2_hold_ticks = 0;
self.quiet_ticks = 0;
Some(TierTransition { from, to, reason })
}
fn demote(&mut self, reason: TierTransitionReason) -> Option<TierTransition> {
let from = self.tier;
let to = from.demote();
if from == to {
return None;
}
self.tier = to;
self.tier1_hold_ticks = 0;
self.tier2_hold_ticks = 0;
Some(TierTransition { from, to, reason })
}
}
#[derive(Debug, Clone, Copy)]
struct UserAdaptiveProfile {
tier: AdaptiveTier,
seen_at: Instant,
}
fn profiles() -> &'static DashMap<String, UserAdaptiveProfile> {
static USER_PROFILES: OnceLock<DashMap<String, UserAdaptiveProfile>> = OnceLock::new();
USER_PROFILES.get_or_init(DashMap::new)
}
pub fn seed_tier_for_user(user: &str) -> AdaptiveTier {
let now = Instant::now();
if let Some(entry) = profiles().get(user) {
let value = entry.value();
if now.duration_since(value.seen_at) <= PROFILE_TTL {
return value.tier;
}
}
AdaptiveTier::Base
}
pub fn record_user_tier(user: &str, tier: AdaptiveTier) {
let now = Instant::now();
if let Some(mut entry) = profiles().get_mut(user) {
let existing = *entry;
let effective = if now.duration_since(existing.seen_at) > PROFILE_TTL {
tier
} else {
max(existing.tier, tier)
};
*entry = UserAdaptiveProfile {
tier: effective,
seen_at: now,
};
return;
}
profiles().insert(
user.to_string(),
UserAdaptiveProfile { tier, seen_at: now },
);
}
pub fn direct_copy_buffers_for_tier(
tier: AdaptiveTier,
base_c2s: usize,
base_s2c: usize,
) -> (usize, usize) {
let (num, den) = tier.ratio();
(
scale(base_c2s, num, den, DIRECT_C2S_CAP_BYTES),
scale(base_s2c, num, den, DIRECT_S2C_CAP_BYTES),
)
}
pub fn me_flush_policy_for_tier(
tier: AdaptiveTier,
base_frames: usize,
base_bytes: usize,
base_delay: Duration,
) -> (usize, usize, Duration) {
let (num, den) = tier.ratio();
let frames = scale(base_frames, num, den, ME_FRAMES_CAP).max(1);
let bytes = scale(base_bytes, num, den, ME_BYTES_CAP).max(4096);
let delay_us = base_delay.as_micros() as u64;
let adjusted_delay_us = match tier {
AdaptiveTier::Base => delay_us,
AdaptiveTier::Tier1 => (delay_us.saturating_mul(7)).saturating_div(10),
AdaptiveTier::Tier2 => delay_us.saturating_div(2),
AdaptiveTier::Tier3 => (delay_us.saturating_mul(3)).saturating_div(10),
}
.max(ME_DELAY_MIN_US)
.min(delay_us.max(ME_DELAY_MIN_US));
(frames, bytes, Duration::from_micros(adjusted_delay_us))
}
fn ema(prev: f64, value: f64) -> f64 {
if prev <= f64::EPSILON {
value
} else {
(prev * (1.0 - EMA_ALPHA)) + (value * EMA_ALPHA)
}
}
fn scale(base: usize, numerator: usize, denominator: usize, cap: usize) -> usize {
let scaled = base
.saturating_mul(numerator)
.saturating_div(denominator.max(1));
scaled.min(cap).max(1)
}
#[cfg(test)]
mod tests {
use super::*;
fn sample(
c2s_bytes: u64,
s2c_requested_bytes: u64,
s2c_written_bytes: u64,
s2c_write_ops: u64,
s2c_partial_writes: u64,
s2c_consecutive_pending_writes: u32,
) -> RelaySignalSample {
RelaySignalSample {
c2s_bytes,
s2c_requested_bytes,
s2c_written_bytes,
s2c_write_ops,
s2c_partial_writes,
s2c_consecutive_pending_writes,
}
}
#[test]
fn test_soft_promotion_requires_tier1_and_tier2() {
let mut ctrl = SessionAdaptiveController::new(AdaptiveTier::Base);
let tick_secs = 0.25;
let mut promoted = None;
for _ in 0..8 {
promoted = ctrl.observe(
sample(
300_000, // ~9.6 Mbps
320_000, // incoming > outgoing to confirm tier2
250_000,
10,
0,
0,
),
tick_secs,
);
}
let transition = promoted.expect("expected soft promotion");
assert_eq!(transition.from, AdaptiveTier::Base);
assert_eq!(transition.to, AdaptiveTier::Tier1);
assert_eq!(transition.reason, TierTransitionReason::SoftConfirmed);
}
#[test]
fn test_hard_promotion_on_pending_pressure() {
let mut ctrl = SessionAdaptiveController::new(AdaptiveTier::Base);
let transition = ctrl
.observe(
sample(10_000, 20_000, 10_000, 4, 1, 3),
0.25,
)
.expect("expected hard promotion");
assert_eq!(transition.reason, TierTransitionReason::HardPressure);
assert_eq!(transition.to, AdaptiveTier::Tier1);
}
#[test]
fn test_quiet_demotion_is_slow_and_stepwise() {
let mut ctrl = SessionAdaptiveController::new(AdaptiveTier::Tier2);
let mut demotion = None;
for _ in 0..QUIET_DEMOTE_TICKS {
demotion = ctrl.observe(sample(1, 1, 1, 1, 0, 0), 0.25);
}
let transition = demotion.expect("expected quiet demotion");
assert_eq!(transition.from, AdaptiveTier::Tier2);
assert_eq!(transition.to, AdaptiveTier::Tier1);
assert_eq!(transition.reason, TierTransitionReason::QuietDemotion);
}
}
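Worked numbers make the tier ratios concrete: Tier2 scales by 3:2 and Tier3 by 2:1, clamped to the 128 KiB c2s and 512 KiB s2c direct-path caps. A sketch of an extra test that could sit alongside the ones above; the 16 KiB and 64 KiB bases here are hypothetical inputs, not the config defaults:

#[test]
fn test_tier_ratio_worked_example() {
// Tier2 (3:2): 16 KiB -> 24 KiB and 64 KiB -> 96 KiB, both under the caps.
assert_eq!(
direct_copy_buffers_for_tier(AdaptiveTier::Tier2, 16 * 1024, 64 * 1024),
(24 * 1024, 96 * 1024)
);
// Tier3 (2:1): 32 KiB / 128 KiB, still clamped only by the caps.
assert_eq!(
direct_copy_buffers_for_tier(AdaptiveTier::Tier3, 16 * 1024, 64 * 1024),
(32 * 1024, 128 * 1024)
);
}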

View File

@ -40,6 +40,7 @@ use crate::proxy::handshake::{HandshakeSuccess, handle_mtproto_handshake, handle
use crate::proxy::masking::handle_bad_client;
use crate::proxy::middle_relay::handle_via_middle_proxy;
use crate::proxy::route_mode::{RelayRouteMode, RouteRuntimeController};
use crate::proxy::session_eviction::register_session;
fn beobachten_ttl(config: &ProxyConfig) -> Duration {
Duration::from_secs(config.general.beobachten_minutes.saturating_mul(60))
@ -731,6 +732,17 @@ impl RunningClientHandler {
return Err(e);
}
let registration = register_session(&user, success.dc_idx);
if registration.replaced_existing {
stats.increment_reconnect_evict_total();
warn!(
user = %user,
dc = success.dc_idx,
"Reconnect detected: replacing active session for user+dc"
);
}
let session_lease = registration.lease;
let route_snapshot = route_runtime.snapshot();
let session_id = rng.u64();
let relay_result = if config.general.use_middle_proxy
@ -750,6 +762,7 @@ impl RunningClientHandler {
route_runtime.subscribe(),
route_snapshot,
session_id,
session_lease.clone(),
)
.await
} else {
@ -766,6 +779,7 @@ impl RunningClientHandler {
route_runtime.subscribe(),
route_snapshot,
session_id,
session_lease.clone(),
)
.await
}
@ -783,6 +797,7 @@ impl RunningClientHandler {
route_runtime.subscribe(),
route_snapshot,
session_id,
session_lease.clone(),
)
.await
};
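register_session and SessionLease come from the new proxy::session_eviction module, whose body this diff view does not show. Below is a minimal sketch of the shape the call sites imply, assuming one generation counter per (user, dc) key where each new registration bumps the generation and strands every older lease. It is an illustration under those assumptions, not the PR's actual implementation:

use dashmap::DashMap;
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::{Arc, OnceLock};

pub struct SessionRegistration {
pub replaced_existing: bool,
pub lease: SessionLease,
}

// Clones share one inner value, so the slot is released exactly once when the
// last clone for this session drops.
#[derive(Clone)]
pub struct SessionLease {
inner: Arc<LeaseInner>,
}

struct LeaseInner {
slot: Arc<AtomicU64>,
generation: u64,
}

impl SessionLease {
// Stale as soon as a newer registration has bumped the shared slot.
pub fn is_stale(&self) -> bool {
self.inner.slot.load(Ordering::Acquire) != self.inner.generation
}
}

impl Drop for LeaseInner {
// Clear the slot only if this lease is still the current one.
fn drop(&mut self) {
let _ = self
.slot
.compare_exchange(self.generation, 0, Ordering::AcqRel, Ordering::Relaxed);
}
}

fn registry() -> &'static DashMap<(String, i16), Arc<AtomicU64>> {
static REGISTRY: OnceLock<DashMap<(String, i16), Arc<AtomicU64>>> = OnceLock::new();
REGISTRY.get_or_init(DashMap::new)
}

pub fn register_session(user: &str, dc_idx: i16) -> SessionRegistration {
let slot = registry()
.entry((user.to_string(), dc_idx))
.or_insert_with(|| Arc::new(AtomicU64::new(0)))
.clone();
// Generation 0 means "no live session"; any nonzero previous value was a
// live lease that this registration displaces.
let prev = slot.fetch_add(1, Ordering::AcqRel);
SessionRegistration {
replaced_existing: prev != 0,
lease: SessionLease {
inner: Arc::new(LeaseInner { slot, generation: prev + 1 }),
},
}
}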

View File

@ -3,8 +3,7 @@ use std::io::Write;
use std::net::SocketAddr;
use std::sync::Arc;
use tokio::io::{AsyncRead, AsyncWrite, AsyncWriteExt, ReadHalf, WriteHalf, split};
use tokio::net::TcpStream;
use tokio::sync::watch;
use tracing::{debug, info, warn};
@ -15,9 +14,11 @@ use crate::protocol::constants::*;
use crate::proxy::handshake::{HandshakeSuccess, encrypt_tg_nonce_with_ciphers, generate_tg_nonce};
use crate::proxy::relay::relay_bidirectional;
use crate::proxy::route_mode::{
ROUTE_SWITCH_ERROR_MSG, RelayRouteMode, RouteCutoverState, affected_cutover_state,
cutover_stagger_delay,
};
use crate::proxy::adaptive_buffers;
use crate::proxy::session_eviction::SessionLease;
use crate::stats::Stats;
use crate::stream::{BufferPool, CryptoReader, CryptoWriter};
use crate::transport::UpstreamManager;
@ -34,6 +35,7 @@ pub(crate) async fn handle_via_direct<R, W>(
mut route_rx: watch::Receiver<RouteCutoverState>,
route_snapshot: RouteCutoverState,
session_id: u64,
session_lease: SessionLease,
) -> Result<()>
where
R: AsyncRead + Unpin + Send + 'static,
@ -53,7 +55,11 @@ where
);
let tg_stream = upstream_manager
.connect(dc_addr, Some(success.dc_idx), user.strip_prefix("scope_").filter(|s| !s.is_empty()))
.await?;
debug!(peer = %success.peer, dc_addr = %dc_addr, "Connected, performing TG handshake");
@ -67,24 +73,32 @@ where
stats.increment_user_curr_connects(user);
stats.increment_current_connections_direct();
let seed_tier = adaptive_buffers::seed_tier_for_user(user);
let (c2s_copy_buf, s2c_copy_buf) = adaptive_buffers::direct_copy_buffers_for_tier(
seed_tier,
config.general.direct_relay_copy_buf_c2s_bytes,
config.general.direct_relay_copy_buf_s2c_bytes,
);
let relay_result = relay_bidirectional(
client_reader,
client_writer,
tg_reader,
tg_writer,
c2s_copy_buf,
s2c_copy_buf,
user,
success.dc_idx,
Arc::clone(&stats),
buffer_pool,
session_lease,
seed_tier,
);
tokio::pin!(relay_result);
let relay_result = loop {
if let Some(cutover) = affected_cutover_state(&route_rx, RelayRouteMode::Direct, route_snapshot.generation) {
let delay = cutover_stagger_delay(session_id, cutover.generation);
warn!(
user = %user,
@ -135,7 +149,9 @@ fn get_dc_addr_static(dc_idx: i16, config: &ProxyConfig) -> Result<SocketAddr> {
for addr_str in addrs {
match addr_str.parse::<SocketAddr>() {
Ok(addr) => parsed.push(addr),
Err(_) => warn!(dc_idx = dc_idx, addr_str = %addr_str, "Invalid DC override address in config, ignoring"),
}
}
@ -157,7 +173,10 @@ fn get_dc_addr_static(dc_idx: i16, config: &ProxyConfig) -> Result<SocketAddr> {
// Unknown DC requested by client without override: log and fall back.
if !config.dc_overrides.contains_key(&dc_key) {
warn!(dc_idx = dc_idx, "Requested non-standard DC with no override; falling back to default cluster");
if config.general.unknown_dc_file_log_enabled
&& let Some(path) = &config.general.unknown_dc_log_path
&& let Ok(handle) = tokio::runtime::Handle::try_current()
@ -191,15 +210,15 @@ fn get_dc_addr_static(dc_idx: i16, config: &ProxyConfig) -> Result<SocketAddr> {
))
}
async fn do_tg_handshake_static<S>(
mut stream: S,
success: &HandshakeSuccess,
config: &ProxyConfig,
rng: &SecureRandom,
) -> Result<(CryptoReader<ReadHalf<S>>, CryptoWriter<WriteHalf<S>>)>
where
S: AsyncRead + AsyncWrite + Unpin,
{
let (nonce, _tg_enc_key, _tg_enc_iv, _tg_dec_key, _tg_dec_iv) = generate_tg_nonce(
success.proto_tag,
success.dc_idx,
@ -222,7 +241,7 @@ async fn do_tg_handshake_static(
stream.write_all(&encrypted_nonce).await?;
stream.flush().await?;
let (read_half, write_half) = split(stream);
let max_pending = config.general.crypto_pending_buffer;
Ok((

View File

@ -20,6 +20,8 @@ use crate::proxy::route_mode::{
RelayRouteMode, RouteCutoverState, ROUTE_SWITCH_ERROR_MSG, affected_cutover_state,
cutover_stagger_delay,
};
use crate::proxy::adaptive_buffers::{self, AdaptiveTier};
use crate::proxy::session_eviction::SessionLease;
use crate::stats::Stats;
use crate::stream::{BufferPool, CryptoReader, CryptoWriter};
use crate::transport::middle_proxy::{MePool, MeResponse, proto_flags_for_tag};
@ -59,8 +61,8 @@ struct MeD2cFlushPolicy {
}
impl MeD2cFlushPolicy {
fn from_config(config: &ProxyConfig, tier: AdaptiveTier) -> Self {
let base = Self {
max_frames: config
.general
.me_d2c_flush_batch_max_frames
@ -71,6 +73,18 @@ impl MeD2cFlushPolicy {
.max(ME_D2C_FLUSH_BATCH_MAX_BYTES_MIN),
max_delay: Duration::from_micros(config.general.me_d2c_flush_batch_max_delay_us),
ack_flush_immediate: config.general.me_d2c_ack_flush_immediate,
};
let (max_frames, max_bytes, max_delay) = adaptive_buffers::me_flush_policy_for_tier(
tier,
base.max_frames,
base.max_bytes,
base.max_delay,
);
Self {
max_frames,
max_bytes,
max_delay,
ack_flush_immediate: base.ack_flush_immediate,
}
}
}
@ -235,6 +249,7 @@ pub(crate) async fn handle_via_middle_proxy<R, W>(
mut route_rx: watch::Receiver<RouteCutoverState>,
route_snapshot: RouteCutoverState,
session_id: u64,
session_lease: SessionLease,
) -> Result<()>
where
R: AsyncRead + Unpin + Send + 'static,
@ -244,6 +259,7 @@ where
let peer = success.peer;
let proto_tag = success.proto_tag;
let pool_generation = me_pool.current_generation();
let seed_tier = adaptive_buffers::seed_tier_for_user(&user);
debug!( debug!(
user = %user, user = %user,
@@ -295,6 +311,15 @@ where
        return Err(ProxyError::Proxy(ROUTE_SWITCH_ERROR_MSG.to_string()));
    }
if session_lease.is_stale() {
stats.increment_reconnect_stale_close_total();
let _ = me_pool.send_close(conn_id).await;
me_pool.registry().unregister(conn_id).await;
stats.decrement_current_connections_me();
stats.decrement_user_curr_connects(&user);
return Err(ProxyError::Proxy("Session evicted by reconnect".to_string()));
}
    // Per-user ad_tag from access.user_ad_tags; fallback to general.ad_tag (hot-reloadable)
    let user_tag: Option<Vec<u8>> = config
        .access
@@ -368,7 +393,7 @@ where
    let rng_clone = rng.clone();
    let user_clone = user.clone();
    let bytes_me2c_clone = bytes_me2c.clone();
-    let d2c_flush_policy = MeD2cFlushPolicy::from_config(&config);
+    let d2c_flush_policy = MeD2cFlushPolicy::from_config(&config, seed_tier);
    let me_writer = tokio::spawn(async move {
        let mut writer = crypto_writer;
        let mut frame_buf = Vec::with_capacity(16 * 1024);
@@ -528,6 +553,12 @@ where
    let mut frame_counter: u64 = 0;
    let mut route_watch_open = true;
    loop {
if session_lease.is_stale() {
stats.increment_reconnect_stale_close_total();
let _ = enqueue_c2me_command(&c2me_tx, C2MeCommand::Close).await;
main_result = Err(ProxyError::Proxy("Session evicted by reconnect".to_string()));
break;
}
        if let Some(cutover) = affected_cutover_state(
            &route_rx,
            RelayRouteMode::Middle,
@@ -636,6 +667,7 @@ where
        frames_ok = frame_counter,
        "ME relay cleanup"
    );
adaptive_buffers::record_user_tier(&user, seed_tier);
    me_pool.registry().unregister(conn_id).await;
    stats.decrement_current_connections_me();
    stats.decrement_user_curr_connects(&user);
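
seed_tier_for_user at session start and record_user_tier at cleanup suggest a per-user tier memory: a returning user's next session starts at the tier the previous one earned instead of re-learning from the bottom. Neither function's body appears in this diff; a minimal sketch under that assumption, using a process-global map:

use std::collections::HashMap;
use std::sync::{Mutex, OnceLock};

fn tier_cache() -> &'static Mutex<HashMap<String, u8>> {
    static CACHE: OnceLock<Mutex<HashMap<String, u8>>> = OnceLock::new();
    CACHE.get_or_init(|| Mutex::new(HashMap::new()))
}

pub fn seed_tier_for_user(user: &str) -> u8 {
    // Unknown users start at the lowest tier.
    tier_cache().lock().unwrap().get(user).copied().unwrap_or(0)
}

pub fn record_user_tier(user: &str, tier: u8) {
    let mut map = tier_cache().lock().unwrap();
    let entry = map.entry(user.to_string()).or_insert(0);
    *entry = (*entry).max(tier); // remember the peak tier observed
}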

View File

@@ -1,5 +1,6 @@
//! Proxy Defs

+pub mod adaptive_buffers;
pub mod client;
pub mod direct_relay;
pub mod handshake;
@@ -7,6 +8,7 @@ pub mod masking;
pub mod middle_relay;
pub mod route_mode;
pub mod relay;
+pub mod session_eviction;

pub use client::ClientHandler;
#[allow(unused_imports)]

View File

@@ -63,6 +63,10 @@ use tokio::io::{
use tokio::time::Instant;
use tracing::{debug, trace, warn};

use crate::error::Result;
+use crate::proxy::adaptive_buffers::{
+    self, AdaptiveTier, RelaySignalSample, SessionAdaptiveController, TierTransitionReason,
+};
+use crate::proxy::session_eviction::SessionLease;
use crate::stats::Stats;
use crate::stream::BufferPool;
@@ -79,6 +83,7 @@ const ACTIVITY_TIMEOUT: Duration = Duration::from_secs(1800);
/// 10 seconds gives responsive timeout detection (±10s accuracy)
/// without measurable overhead from atomic reads.
const WATCHDOG_INTERVAL: Duration = Duration::from_secs(10);
+const ADAPTIVE_TICK: Duration = Duration::from_millis(250);

// ============= CombinedStream =============
@@ -155,6 +160,16 @@ struct SharedCounters {
    s2c_ops: AtomicU64,
    /// Milliseconds since relay epoch of last I/O activity
    last_activity_ms: AtomicU64,
/// Bytes requested to write to client (S→C direction).
s2c_requested_bytes: AtomicU64,
/// Total write operations for S→C direction.
s2c_write_ops: AtomicU64,
/// Number of partial writes to client.
s2c_partial_writes: AtomicU64,
/// Number of times S→C poll_write returned Pending.
s2c_pending_writes: AtomicU64,
/// Consecutive pending writes in S→C direction.
s2c_consecutive_pending_writes: AtomicU64,
}

impl SharedCounters {
@@ -165,6 +180,11 @@ impl SharedCounters {
            c2s_ops: AtomicU64::new(0),
            s2c_ops: AtomicU64::new(0),
            last_activity_ms: AtomicU64::new(0),
s2c_requested_bytes: AtomicU64::new(0),
s2c_write_ops: AtomicU64::new(0),
s2c_partial_writes: AtomicU64::new(0),
s2c_pending_writes: AtomicU64::new(0),
s2c_consecutive_pending_writes: AtomicU64::new(0),
        }
    }
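
These five counters feed the adaptive controller: requested vs. written bytes expose short writes, and the pending counters expose client-side backpressure. An illustrative way to fold them into a single signal (thresholds invented for the example, not taken from the diff):

use std::sync::atomic::Ordering;

fn s2c_under_pressure(c: &SharedCounters) -> bool {
    let ops = c.s2c_write_ops.load(Ordering::Relaxed).max(1);
    let partial = c.s2c_partial_writes.load(Ordering::Relaxed);
    let streak = c.s2c_consecutive_pending_writes.load(Ordering::Relaxed);
    // Example policy: more than 25% partial writes, or 4+ Pendings in a row.
    partial * 4 > ops || streak >= 4
}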
@@ -259,9 +279,21 @@ impl<S: AsyncWrite + Unpin> AsyncWrite for StatsIo<S> {
        buf: &[u8],
    ) -> Poll<io::Result<usize>> {
        let this = self.get_mut();
this.counters
.s2c_requested_bytes
.fetch_add(buf.len() as u64, Ordering::Relaxed);
        match Pin::new(&mut this.inner).poll_write(cx, buf) {
            Poll::Ready(Ok(n)) => {
this.counters.s2c_write_ops.fetch_add(1, Ordering::Relaxed);
this.counters
.s2c_consecutive_pending_writes
.store(0, Ordering::Relaxed);
if n < buf.len() {
this.counters
.s2c_partial_writes
.fetch_add(1, Ordering::Relaxed);
}
                if n > 0 {
                    // S→C: data written to client
                    this.counters.s2c_bytes.fetch_add(n as u64, Ordering::Relaxed);
@@ -275,6 +307,15 @@ impl<S: AsyncWrite + Unpin> AsyncWrite for StatsIo<S> {
                }
                Poll::Ready(Ok(n))
            }
Poll::Pending => {
this.counters
.s2c_pending_writes
.fetch_add(1, Ordering::Relaxed);
this.counters
.s2c_consecutive_pending_writes
.fetch_add(1, Ordering::Relaxed);
Poll::Pending
}
            other => other,
        }
    }
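
The poll_write instrumentation above is a general pattern: wrap an AsyncWrite, record how much was asked for versus accepted, and count Pending polls as backpressure events. A self-contained sketch with illustrative names (not the project's types):

use std::io;
use std::pin::Pin;
use std::sync::atomic::{AtomicU64, Ordering};
use std::task::{Context, Poll};
use tokio::io::AsyncWrite;

struct CountingWriter<W> {
    inner: W,
    requested: AtomicU64,
    written: AtomicU64,
    partial: AtomicU64,
    pending: AtomicU64,
}

impl<W: AsyncWrite + Unpin> AsyncWrite for CountingWriter<W> {
    fn poll_write(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &[u8],
    ) -> Poll<io::Result<usize>> {
        let this = self.get_mut();
        this.requested.fetch_add(buf.len() as u64, Ordering::Relaxed);
        match Pin::new(&mut this.inner).poll_write(cx, buf) {
            Poll::Ready(Ok(n)) => {
                this.written.fetch_add(n as u64, Ordering::Relaxed);
                if n < buf.len() {
                    this.partial.fetch_add(1, Ordering::Relaxed); // short write
                }
                Poll::Ready(Ok(n))
            }
            Poll::Pending => {
                this.pending.fetch_add(1, Ordering::Relaxed); // backpressure
                Poll::Pending
            }
            other => other, // Ready(Err(_)) passes through untouched
        }
    }

    fn poll_flush(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<io::Result<()>> {
        Pin::new(&mut self.get_mut().inner).poll_flush(cx)
    }

    fn poll_shutdown(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<io::Result<()>> {
        Pin::new(&mut self.get_mut().inner).poll_shutdown(cx)
    }
}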
@@ -316,8 +357,11 @@ pub async fn relay_bidirectional<CR, CW, SR, SW>(
    c2s_buf_size: usize,
    s2c_buf_size: usize,
    user: &str,
+    dc_idx: i16,
    stats: Arc<Stats>,
    _buffer_pool: Arc<BufferPool>,
+    session_lease: SessionLease,
+    seed_tier: AdaptiveTier,
) -> Result<()>
where
    CR: AsyncRead + Unpin + Send + 'static,
@@ -345,13 +389,33 @@ where
    // ── Watchdog: activity timeout + periodic rate logging ──────────
    let wd_counters = Arc::clone(&counters);
    let wd_user = user_owned.clone();
let wd_dc = dc_idx;
let wd_stats = Arc::clone(&stats);
let wd_session = session_lease.clone();
    let watchdog = async {
-        let mut prev_c2s: u64 = 0;
-        let mut prev_s2c: u64 = 0;
+        let mut prev_c2s_log: u64 = 0;
+        let mut prev_s2c_log: u64 = 0;
let mut prev_c2s_sample: u64 = 0;
let mut prev_s2c_requested_sample: u64 = 0;
let mut prev_s2c_written_sample: u64 = 0;
let mut prev_s2c_write_ops_sample: u64 = 0;
let mut prev_s2c_partial_sample: u64 = 0;
let mut accumulated_log = Duration::ZERO;
let mut adaptive = SessionAdaptiveController::new(seed_tier);
        loop {
-            tokio::time::sleep(WATCHDOG_INTERVAL).await;
+            tokio::time::sleep(ADAPTIVE_TICK).await;
if wd_session.is_stale() {
wd_stats.increment_reconnect_stale_close_total();
warn!(
user = %wd_user,
dc = wd_dc,
"Session evicted by reconnect"
);
return;
}
            let now = Instant::now();
            let idle = wd_counters.idle_duration(now, epoch);
@@ -370,11 +434,80 @@ where
                return; // Causes select! to cancel copy_bidirectional
            }
let c2s_total = wd_counters.c2s_bytes.load(Ordering::Relaxed);
let s2c_requested_total = wd_counters
.s2c_requested_bytes
.load(Ordering::Relaxed);
let s2c_written_total = wd_counters.s2c_bytes.load(Ordering::Relaxed);
let s2c_write_ops_total = wd_counters
.s2c_write_ops
.load(Ordering::Relaxed);
let s2c_partial_total = wd_counters
.s2c_partial_writes
.load(Ordering::Relaxed);
let consecutive_pending = wd_counters
.s2c_consecutive_pending_writes
.load(Ordering::Relaxed) as u32;
let sample = RelaySignalSample {
c2s_bytes: c2s_total.saturating_sub(prev_c2s_sample),
s2c_requested_bytes: s2c_requested_total
.saturating_sub(prev_s2c_requested_sample),
s2c_written_bytes: s2c_written_total
.saturating_sub(prev_s2c_written_sample),
s2c_write_ops: s2c_write_ops_total
.saturating_sub(prev_s2c_write_ops_sample),
s2c_partial_writes: s2c_partial_total
.saturating_sub(prev_s2c_partial_sample),
s2c_consecutive_pending_writes: consecutive_pending,
};
if let Some(transition) = adaptive.observe(sample, ADAPTIVE_TICK.as_secs_f64()) {
match transition.reason {
TierTransitionReason::SoftConfirmed => {
wd_stats.increment_relay_adaptive_promotions_total();
}
TierTransitionReason::HardPressure => {
wd_stats.increment_relay_adaptive_promotions_total();
wd_stats.increment_relay_adaptive_hard_promotions_total();
}
TierTransitionReason::QuietDemotion => {
wd_stats.increment_relay_adaptive_demotions_total();
}
}
adaptive_buffers::record_user_tier(&wd_user, adaptive.max_tier_seen());
debug!(
user = %wd_user,
dc = wd_dc,
from_tier = transition.from.as_u8(),
to_tier = transition.to.as_u8(),
reason = ?transition.reason,
throughput_ema_bps = sample
.c2s_bytes
.max(sample.s2c_written_bytes)
.saturating_mul(8)
.saturating_mul(4),
"Adaptive relay tier transition"
);
}
prev_c2s_sample = c2s_total;
prev_s2c_requested_sample = s2c_requested_total;
prev_s2c_written_sample = s2c_written_total;
prev_s2c_write_ops_sample = s2c_write_ops_total;
prev_s2c_partial_sample = s2c_partial_total;
accumulated_log = accumulated_log.saturating_add(ADAPTIVE_TICK);
if accumulated_log < WATCHDOG_INTERVAL {
continue;
}
accumulated_log = Duration::ZERO;
            // ── Periodic rate logging ───────────────────────────────
            let c2s = wd_counters.c2s_bytes.load(Ordering::Relaxed);
            let s2c = wd_counters.s2c_bytes.load(Ordering::Relaxed);
-            let c2s_delta = c2s - prev_c2s;
-            let s2c_delta = s2c - prev_s2c;
+            let c2s_delta = c2s.saturating_sub(prev_c2s_log);
+            let s2c_delta = s2c.saturating_sub(prev_s2c_log);

            if c2s_delta > 0 || s2c_delta > 0 {
                let secs = WATCHDOG_INTERVAL.as_secs_f64();
@@ -388,8 +521,8 @@ where
                );
            }

-            prev_c2s = c2s;
-            prev_s2c = s2c;
+            prev_c2s_log = c2s;
+            prev_s2c_log = s2c;
        }
    };
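
The watchdog now wakes every ADAPTIVE_TICK (250 ms) to sample counters and drive tier transitions, but keeps its rate log at the old WATCHDOG_INTERVAL cadence by accumulating elapsed ticks. The pattern in isolation (intervals illustrative):

use std::time::Duration;

async fn sample_and_log_loop() {
    const TICK: Duration = Duration::from_millis(250);
    const LOG_EVERY: Duration = Duration::from_secs(10);
    let mut accumulated = Duration::ZERO;
    loop {
        tokio::time::sleep(TICK).await;
        // Fast path: cheap sampling on every tick, e.g. atomic loads.
        accumulated += TICK;
        if accumulated < LOG_EVERY {
            continue;
        }
        accumulated = Duration::ZERO;
        // Slow path: rate computation and logging, once per LOG_EVERY.
    }
}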
@@ -424,6 +557,7 @@ where
    let c2s_ops = counters.c2s_ops.load(Ordering::Relaxed);
    let s2c_ops = counters.s2c_ops.load(Ordering::Relaxed);
    let duration = epoch.elapsed();
+    adaptive_buffers::record_user_tier(&user_owned, seed_tier);
    match copy_result {
        Some(Ok((c2s, s2c))) => {

View File

@@ -0,0 +1,46 @@
/// Session eviction is intentionally disabled in runtime.
///
/// The initial `user+dc` single-lease model caused valid parallel client
/// connections to evict each other. Keep the API shape for compatibility,
/// but make it a no-op until a safer policy is introduced.
#[derive(Debug, Clone, Default)]
pub struct SessionLease;
impl SessionLease {
pub fn is_stale(&self) -> bool {
false
}
#[allow(dead_code)]
pub fn release(&self) {}
}
pub struct RegistrationResult {
pub lease: SessionLease,
pub replaced_existing: bool,
}
pub fn register_session(_user: &str, _dc_idx: i16) -> RegistrationResult {
RegistrationResult {
lease: SessionLease,
replaced_existing: false,
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_session_eviction_disabled_behavior() {
let first = register_session("alice", 2);
let second = register_session("alice", 2);
assert!(!first.replaced_existing);
assert!(!second.replaced_existing);
assert!(!first.lease.is_stale());
assert!(!second.lease.is_stale());
first.lease.release();
second.lease.release();
}
}
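
The call pattern this no-op module preserves, sketched here rather than taken from the diff: callers still register and thread the lease through, and with eviction disabled every is_stale() check stays false, so the early-exit branches in both relays are dead until a safer policy lands.

fn start_session(user: &str, dc_idx: i16) {
    let reg = register_session(user, dc_idx);
    if reg.replaced_existing {
        // A real policy would count an eviction here,
        // e.g. stats.increment_reconnect_evict_total().
    }
    let lease = reg.lease;
    // Hand `lease` to the relay; it polls lease.is_stale() each loop tick.
    assert!(!lease.is_stale()); // always false while eviction is disabled
    lease.release();
}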

View File

@@ -120,6 +120,8 @@ pub struct Stats {
    pool_swap_total: AtomicU64,
    pool_drain_active: AtomicU64,
    pool_force_close_total: AtomicU64,
+    pool_drain_soft_evict_total: AtomicU64,
+    pool_drain_soft_evict_writer_total: AtomicU64,
    pool_stale_pick_total: AtomicU64,
    me_writer_removed_total: AtomicU64,
    me_writer_removed_unexpected_total: AtomicU64,
@@ -133,6 +135,11 @@ pub struct Stats {
    me_inline_recovery_total: AtomicU64,
    ip_reservation_rollback_tcp_limit_total: AtomicU64,
    ip_reservation_rollback_quota_limit_total: AtomicU64,
+    relay_adaptive_promotions_total: AtomicU64,
+    relay_adaptive_demotions_total: AtomicU64,
+    relay_adaptive_hard_promotions_total: AtomicU64,
+    reconnect_evict_total: AtomicU64,
+    reconnect_stale_close_total: AtomicU64,
    telemetry_core_enabled: AtomicBool,
    telemetry_user_enabled: AtomicBool,
    telemetry_me_level: AtomicU8,
@@ -285,6 +292,36 @@ impl Stats {
    pub fn decrement_current_connections_me(&self) {
        Self::decrement_atomic_saturating(&self.current_connections_me);
    }
pub fn increment_relay_adaptive_promotions_total(&self) {
if self.telemetry_core_enabled() {
self.relay_adaptive_promotions_total
.fetch_add(1, Ordering::Relaxed);
}
}
pub fn increment_relay_adaptive_demotions_total(&self) {
if self.telemetry_core_enabled() {
self.relay_adaptive_demotions_total
.fetch_add(1, Ordering::Relaxed);
}
}
pub fn increment_relay_adaptive_hard_promotions_total(&self) {
if self.telemetry_core_enabled() {
self.relay_adaptive_hard_promotions_total
.fetch_add(1, Ordering::Relaxed);
}
}
pub fn increment_reconnect_evict_total(&self) {
if self.telemetry_core_enabled() {
self.reconnect_evict_total
.fetch_add(1, Ordering::Relaxed);
}
}
pub fn increment_reconnect_stale_close_total(&self) {
if self.telemetry_core_enabled() {
self.reconnect_stale_close_total
.fetch_add(1, Ordering::Relaxed);
}
}
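
All of the new counter methods share one shape: a relaxed load of the telemetry gate guards the fetch_add, so a disabled-telemetry hot path pays only a read and never dirties the counter's cache line with a read-modify-write. Distilled (names illustrative):

use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};

struct GatedCounter {
    enabled: AtomicBool,
    value: AtomicU64,
}

impl GatedCounter {
    fn increment(&self) {
        if self.enabled.load(Ordering::Relaxed) {
            // The RMW happens only when telemetry is actually on.
            self.value.fetch_add(1, Ordering::Relaxed);
        }
    }
}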
    pub fn increment_handshake_timeouts(&self) {
        if self.telemetry_core_enabled() {
            self.handshake_timeouts.fetch_add(1, Ordering::Relaxed);
@@ -680,6 +717,18 @@ impl Stats {
            self.pool_force_close_total.fetch_add(1, Ordering::Relaxed);
        }
    }
pub fn increment_pool_drain_soft_evict_total(&self) {
if self.telemetry_me_allows_normal() {
self.pool_drain_soft_evict_total
.fetch_add(1, Ordering::Relaxed);
}
}
pub fn increment_pool_drain_soft_evict_writer_total(&self) {
if self.telemetry_me_allows_normal() {
self.pool_drain_soft_evict_writer_total
.fetch_add(1, Ordering::Relaxed);
}
}
    pub fn increment_pool_stale_pick_total(&self) {
        if self.telemetry_me_allows_normal() {
            self.pool_stale_pick_total.fetch_add(1, Ordering::Relaxed);
@@ -933,6 +982,22 @@ impl Stats {
        self.get_current_connections_direct()
            .saturating_add(self.get_current_connections_me())
    }
pub fn get_relay_adaptive_promotions_total(&self) -> u64 {
self.relay_adaptive_promotions_total.load(Ordering::Relaxed)
}
pub fn get_relay_adaptive_demotions_total(&self) -> u64 {
self.relay_adaptive_demotions_total.load(Ordering::Relaxed)
}
pub fn get_relay_adaptive_hard_promotions_total(&self) -> u64 {
self.relay_adaptive_hard_promotions_total
.load(Ordering::Relaxed)
}
pub fn get_reconnect_evict_total(&self) -> u64 {
self.reconnect_evict_total.load(Ordering::Relaxed)
}
pub fn get_reconnect_stale_close_total(&self) -> u64 {
self.reconnect_stale_close_total.load(Ordering::Relaxed)
}
    pub fn get_me_keepalive_sent(&self) -> u64 { self.me_keepalive_sent.load(Ordering::Relaxed) }
    pub fn get_me_keepalive_failed(&self) -> u64 { self.me_keepalive_failed.load(Ordering::Relaxed) }
    pub fn get_me_keepalive_pong(&self) -> u64 { self.me_keepalive_pong.load(Ordering::Relaxed) }
@@ -1185,6 +1250,12 @@ impl Stats {
    pub fn get_pool_force_close_total(&self) -> u64 {
        self.pool_force_close_total.load(Ordering::Relaxed)
    }
pub fn get_pool_drain_soft_evict_total(&self) -> u64 {
self.pool_drain_soft_evict_total.load(Ordering::Relaxed)
}
pub fn get_pool_drain_soft_evict_writer_total(&self) -> u64 {
self.pool_drain_soft_evict_writer_total.load(Ordering::Relaxed)
}
    pub fn get_pool_stale_pick_total(&self) -> u64 {
        self.pool_stale_pick_total.load(Ordering::Relaxed)
    }
@@ -1258,6 +1329,9 @@ impl Stats {
    }

    pub fn decrement_user_curr_connects(&self, user: &str) {
if !self.telemetry_user_enabled() {
return;
}
        self.maybe_cleanup_user_stats();
        if let Some(stats) = self.user_stats.get(user) {
            Self::touch_user_stats(stats.value());

View File

@@ -14,8 +14,7 @@ use std::sync::Arc;
// ============= Configuration =============

/// Default buffer size
-/// CHANGED: Reduced from 64KB to 16KB to match TLS record size and prevent bufferbloat.
-pub const DEFAULT_BUFFER_SIZE: usize = 16 * 1024;
+pub const DEFAULT_BUFFER_SIZE: usize = 64 * 1024;

/// Default maximum number of pooled buffers
pub const DEFAULT_MAX_BUFFERS: usize = 1024;

View File

@@ -7,33 +7,29 @@ use tokio::net::TcpStream;
#[cfg(unix)]
use tokio::net::UnixStream;
use tokio::time::timeout;
-use tokio_rustls::client::TlsStream;
use tokio_rustls::TlsConnector;
+use tokio_rustls::client::TlsStream;
use tracing::{debug, warn};

-use rustls::client::danger::{HandshakeSignatureValid, ServerCertVerified, ServerCertVerifier};
use rustls::client::ClientConfig;
+use rustls::client::danger::{HandshakeSignatureValid, ServerCertVerified, ServerCertVerifier};
use rustls::pki_types::{CertificateDer, ServerName, UnixTime};
use rustls::{DigitallySignedStruct, Error as RustlsError};
-use x509_parser::prelude::FromDer;
use x509_parser::certificate::X509Certificate;
+use x509_parser::prelude::FromDer;

use crate::crypto::SecureRandom;
use crate::network::dns_overrides::resolve_socket_addr;
use crate::protocol::constants::{
    TLS_RECORD_APPLICATION, TLS_RECORD_CHANGE_CIPHER, TLS_RECORD_HANDSHAKE,
};
-use crate::transport::proxy_protocol::{ProxyProtocolV1Builder, ProxyProtocolV2Builder};
use crate::tls_front::types::{
-    ParsedCertificateInfo,
-    ParsedServerHello,
-    TlsBehaviorProfile,
-    TlsCertPayload,
-    TlsExtension,
-    TlsFetchResult,
-    TlsProfileSource,
+    ParsedCertificateInfo, ParsedServerHello, TlsBehaviorProfile, TlsCertPayload, TlsExtension,
+    TlsFetchResult, TlsProfileSource,
};
+use crate::transport::UpstreamStream;
+use crate::transport::proxy_protocol::{ProxyProtocolV1Builder, ProxyProtocolV2Builder};
/// No-op verifier: accept any certificate (we only need lengths and metadata).
#[derive(Debug)]
@@ -144,21 +140,27 @@ fn build_client_hello(sni: &str, rng: &SecureRandom) -> Vec<u8> {
    exts.extend_from_slice(&0x000au16.to_be_bytes());
    exts.extend_from_slice(&((2 + groups.len() * 2) as u16).to_be_bytes());
    exts.extend_from_slice(&(groups.len() as u16 * 2).to_be_bytes());
-    for g in groups { exts.extend_from_slice(&g.to_be_bytes()); }
+    for g in groups {
+        exts.extend_from_slice(&g.to_be_bytes());
+    }

    // signature_algorithms
    let sig_algs: [u16; 4] = [0x0804, 0x0805, 0x0403, 0x0503]; // rsa_pss_rsae_sha256/384, ecdsa_secp256r1_sha256, rsa_pkcs1_sha256
    exts.extend_from_slice(&0x000du16.to_be_bytes());
    exts.extend_from_slice(&((2 + sig_algs.len() * 2) as u16).to_be_bytes());
    exts.extend_from_slice(&(sig_algs.len() as u16 * 2).to_be_bytes());
-    for a in sig_algs { exts.extend_from_slice(&a.to_be_bytes()); }
+    for a in sig_algs {
+        exts.extend_from_slice(&a.to_be_bytes());
+    }

    // supported_versions (TLS1.3 + TLS1.2)
    let versions: [u16; 2] = [0x0304, 0x0303];
    exts.extend_from_slice(&0x002bu16.to_be_bytes());
    exts.extend_from_slice(&((1 + versions.len() * 2) as u16).to_be_bytes());
    exts.push((versions.len() * 2) as u8);
-    for v in versions { exts.extend_from_slice(&v.to_be_bytes()); }
+    for v in versions {
+        exts.extend_from_slice(&v.to_be_bytes());
+    }

    // key_share (x25519)
    let key = gen_key_share(rng);
@@ -273,7 +275,10 @@ fn parse_server_hello(body: &[u8]) -> Option<ParsedServerHello> {
        pos += 4;
        let data = body.get(pos..pos + elen)?.to_vec();
        pos += elen;
-        extensions.push(TlsExtension { ext_type: etype, data });
+        extensions.push(TlsExtension {
+            ext_type: etype,
+            data,
+        });
    }

    Some(ParsedServerHello {
@@ -394,7 +399,7 @@ async fn connect_tcp_with_upstream(
    port: u16,
    connect_timeout: Duration,
    upstream: Option<std::sync::Arc<crate::transport::UpstreamManager>>,
-) -> Result<TcpStream> {
+) -> Result<UpstreamStream> {
    if let Some(manager) = upstream {
        if let Some(addr) = resolve_socket_addr(host, port) {
            match manager.connect(addr, None, None).await {
@@ -408,23 +413,25 @@ async fn connect_tcp_with_upstream(
                );
            }
        }
-        } else if let Ok(mut addrs) = tokio::net::lookup_host((host, port)).await {
-            if let Some(addr) = addrs.find(|a| a.is_ipv4()) {
-                match manager.connect(addr, None, None).await {
-                    Ok(stream) => return Ok(stream),
-                    Err(e) => {
-                        warn!(
-                            host = %host,
-                            port = port,
-                            error = %e,
-                            "Upstream connect failed, using direct connect"
-                        );
-                    }
-                }
-            }
-        }
+        } else if let Ok(mut addrs) = tokio::net::lookup_host((host, port)).await
+            && let Some(addr) = addrs.find(|a| a.is_ipv4())
+        {
+            match manager.connect(addr, None, None).await {
+                Ok(stream) => return Ok(stream),
+                Err(e) => {
+                    warn!(
+                        host = %host,
+                        port = port,
+                        error = %e,
+                        "Upstream connect failed, using direct connect"
+                    );
+                }
+            }
+        }
    }
-    connect_with_dns_override(host, port, connect_timeout).await
+    Ok(UpstreamStream::Tcp(
+        connect_with_dns_override(host, port, connect_timeout).await?,
+    ))
}
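
The nested if let collapses into a let-chain (if let … && let …), stable on the 2024 edition (Rust 1.88+): several fallible bindings guard one block without extra nesting. A minimal standalone example of the form:

fn first_even_digit(s: &str) -> Option<u32> {
    if let Some(c) = s.chars().next()
        && let Some(d) = c.to_digit(10)
        && d % 2 == 0
    {
        return Some(d);
    }
    None
}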
fn encode_tls13_certificate_message(cert_chain_der: &[Vec<u8>]) -> Option<Vec<u8>> {
@@ -443,9 +450,7 @@ fn encode_tls13_certificate_message(cert_chain_der: &[Vec<u8>
    }

    // Certificate = context_len(1) + certificate_list_len(3) + entries
-    let body_len = 1usize
-        .checked_add(3)?
-        .checked_add(certificate_list.len())?;
+    let body_len = 1usize.checked_add(3)?.checked_add(certificate_list.len())?;

    let mut message = Vec::with_capacity(4 + body_len);
    message.push(0x0b); // HandshakeType::certificate
@@ -549,7 +554,8 @@ async fn fetch_via_raw_tls(
                    sock = %sock_path,
                    "Raw TLS fetch using mask unix socket"
                );
-                return fetch_via_raw_tls_stream(stream, sni, connect_timeout, proxy_protocol).await;
+                return fetch_via_raw_tls_stream(stream, sni, connect_timeout, proxy_protocol)
+                    .await;
            }
            Ok(Err(e)) => {
                warn!(
@@ -616,12 +622,13 @@ where
        .map(|slice| slice.to_vec())
        .unwrap_or_default();
    let cert_chain_der: Vec<Vec<u8>> = certs.iter().map(|c| c.as_ref().to_vec()).collect();
-    let cert_payload = encode_tls13_certificate_message(&cert_chain_der).map(|certificate_message| {
-        TlsCertPayload {
-            cert_chain_der: cert_chain_der.clone(),
-            certificate_message,
-        }
-    });
+    let cert_payload =
+        encode_tls13_certificate_message(&cert_chain_der).map(|certificate_message| {
+            TlsCertPayload {
+                cert_chain_der: cert_chain_der.clone(),
+                certificate_message,
+            }
+        });

    let total_cert_len = cert_payload
        .as_ref()

View File

@@ -299,6 +299,11 @@ async fn run_update_cycle(
        cfg.general.hardswap,
        cfg.general.me_pool_drain_ttl_secs,
        cfg.general.me_pool_drain_threshold,
+        cfg.general.me_pool_drain_soft_evict_enabled,
+        cfg.general.me_pool_drain_soft_evict_grace_secs,
+        cfg.general.me_pool_drain_soft_evict_per_writer,
+        cfg.general.me_pool_drain_soft_evict_budget_per_core,
+        cfg.general.me_pool_drain_soft_evict_cooldown_ms,
        cfg.general.effective_me_pool_force_close_secs(),
        cfg.general.me_pool_min_fresh_ratio,
        cfg.general.me_hardswap_warmup_delay_min_ms,
@@ -526,6 +531,11 @@ pub async fn me_config_updater(
        cfg.general.hardswap,
        cfg.general.me_pool_drain_ttl_secs,
        cfg.general.me_pool_drain_threshold,
+        cfg.general.me_pool_drain_soft_evict_enabled,
+        cfg.general.me_pool_drain_soft_evict_grace_secs,
+        cfg.general.me_pool_drain_soft_evict_per_writer,
+        cfg.general.me_pool_drain_soft_evict_budget_per_core,
+        cfg.general.me_pool_drain_soft_evict_cooldown_ms,
        cfg.general.effective_me_pool_force_close_secs(),
        cfg.general.me_pool_min_fresh_ratio,
        cfg.general.me_hardswap_warmup_delay_min_ms,

View File

@@ -25,6 +25,11 @@ const HEALTH_RECONNECT_BUDGET_PER_CORE: usize = 2;
const HEALTH_RECONNECT_BUDGET_PER_DC: usize = 1;
const HEALTH_RECONNECT_BUDGET_MIN: usize = 4;
const HEALTH_RECONNECT_BUDGET_MAX: usize = 128;
+const HEALTH_DRAIN_CLOSE_BUDGET_PER_CORE: usize = 16;
+const HEALTH_DRAIN_CLOSE_BUDGET_MIN: usize = 16;
+const HEALTH_DRAIN_CLOSE_BUDGET_MAX: usize = 256;
+const HEALTH_DRAIN_SOFT_EVICT_BUDGET_MIN: usize = 8;
+const HEALTH_DRAIN_SOFT_EVICT_BUDGET_MAX: usize = 256;

#[derive(Debug, Clone)]
struct DcFloorPlanEntry {
@@ -63,6 +68,7 @@ pub async fn me_health_monitor(pool: Arc<MePool>, rng: Arc<SecureRandom>, _min_c
    let mut adaptive_recover_until: HashMap<(i32, IpFamily), Instant> = HashMap::new();
    let mut floor_warn_next_allowed: HashMap<(i32, IpFamily), Instant> = HashMap::new();
    let mut drain_warn_next_allowed: HashMap<u64, Instant> = HashMap::new();
+    let mut drain_soft_evict_next_allowed: HashMap<u64, Instant> = HashMap::new();
    let mut degraded_interval = true;

    loop {
        let interval = if degraded_interval {
@@ -72,7 +78,12 @@ pub async fn me_health_monitor(pool: Arc<MePool>, rng: Arc<SecureRandom>, _min_c
        };
        tokio::time::sleep(interval).await;
        pool.prune_closed_writers().await;
-        reap_draining_writers(&pool, &mut drain_warn_next_allowed).await;
+        reap_draining_writers(
+            &pool,
+            &mut drain_warn_next_allowed,
+            &mut drain_soft_evict_next_allowed,
+        )
+        .await;
        let v4_degraded = check_family(
            IpFamily::V4,
            &pool,
@@ -111,9 +122,10 @@ pub async fn me_health_monitor(pool: Arc<MePool>, rng: Arc<SecureRandom>, _min_c
    }
}
-async fn reap_draining_writers(
+pub(super) async fn reap_draining_writers(
    pool: &Arc<MePool>,
    warn_next_allowed: &mut HashMap<u64, Instant>,
+    soft_evict_next_allowed: &mut HashMap<u64, Instant>,
) {
    let now_epoch_secs = MePool::now_epoch_secs();
    let now = Instant::now();
@@ -122,14 +134,22 @@ async fn reap_draining_writers(
        .me_pool_drain_threshold
        .load(std::sync::atomic::Ordering::Relaxed);
    let writers = pool.writers.read().await.clone();
+    let activity = pool.registry.writer_activity_snapshot().await;
    let mut draining_writers = Vec::new();
+    let mut empty_writer_ids = Vec::<u64>::new();
+    let mut force_close_writer_ids = Vec::<u64>::new();
    for writer in writers {
        if !writer.draining.load(std::sync::atomic::Ordering::Relaxed) {
            continue;
        }
-        let is_empty = pool.registry.is_writer_empty(writer.id).await;
-        if is_empty {
-            pool.remove_writer_and_close_clients(writer.id).await;
+        if activity
+            .bound_clients_by_writer
+            .get(&writer.id)
+            .copied()
+            .unwrap_or(0)
+            == 0
+        {
+            empty_writer_ids.push(writer.id);
            continue;
        }
        draining_writers.push(writer);
@@ -156,12 +176,13 @@ async fn reap_draining_writers(
            "ME draining writer threshold exceeded, force-closing oldest draining writers"
        );
        for writer in draining_writers.drain(..overflow) {
-            pool.stats.increment_pool_force_close_total();
-            pool.remove_writer_and_close_clients(writer.id).await;
+            force_close_writer_ids.push(writer.id);
        }
    }

-    for writer in draining_writers {
+    let mut active_draining_writer_ids = HashSet::with_capacity(draining_writers.len());
+    for writer in &draining_writers {
+        active_draining_writer_ids.insert(writer.id);
        let drain_started_at_epoch_secs = writer
            .draining_started_at_epoch_secs
            .load(std::sync::atomic::Ordering::Relaxed);
@@ -191,10 +212,152 @@ async fn reap_draining_writers(
            .load(std::sync::atomic::Ordering::Relaxed);
        if deadline_epoch_secs != 0 && now_epoch_secs >= deadline_epoch_secs {
            warn!(writer_id = writer.id, "Drain timeout, force-closing");
-            pool.stats.increment_pool_force_close_total();
-            pool.remove_writer_and_close_clients(writer.id).await;
+            force_close_writer_ids.push(writer.id);
+            active_draining_writer_ids.remove(&writer.id);
        }
    }
warn_next_allowed.retain(|writer_id, _| active_draining_writer_ids.contains(writer_id));
soft_evict_next_allowed.retain(|writer_id, _| active_draining_writer_ids.contains(writer_id));
if pool.drain_soft_evict_enabled() && drain_ttl_secs > 0 && !draining_writers.is_empty() {
let mut force_close_ids = HashSet::<u64>::with_capacity(force_close_writer_ids.len());
for writer_id in &force_close_writer_ids {
force_close_ids.insert(*writer_id);
}
let soft_grace_secs = pool.drain_soft_evict_grace_secs();
let soft_trigger_age_secs = drain_ttl_secs.saturating_add(soft_grace_secs);
let per_writer_limit = pool.drain_soft_evict_per_writer();
let soft_budget = health_drain_soft_evict_budget(pool);
let soft_cooldown = pool.drain_soft_evict_cooldown();
let mut soft_evicted_total = 0usize;
for writer in &draining_writers {
if soft_evicted_total >= soft_budget {
break;
}
if force_close_ids.contains(&writer.id) {
continue;
}
if pool.writer_accepts_new_binding(writer) {
continue;
}
let started_epoch_secs = writer
.draining_started_at_epoch_secs
.load(std::sync::atomic::Ordering::Relaxed);
if started_epoch_secs == 0
|| now_epoch_secs.saturating_sub(started_epoch_secs) < soft_trigger_age_secs
{
continue;
}
if !should_emit_writer_warn(
soft_evict_next_allowed,
writer.id,
now,
soft_cooldown,
) {
continue;
}
let remaining_budget = soft_budget.saturating_sub(soft_evicted_total);
let limit = per_writer_limit.min(remaining_budget);
if limit == 0 {
break;
}
let conn_ids = pool
.registry
.bound_conn_ids_for_writer_limited(writer.id, limit)
.await;
if conn_ids.is_empty() {
continue;
}
let mut evicted_for_writer = 0usize;
for conn_id in conn_ids {
if pool.registry.evict_bound_conn_if_writer(conn_id, writer.id).await {
evicted_for_writer = evicted_for_writer.saturating_add(1);
soft_evicted_total = soft_evicted_total.saturating_add(1);
pool.stats.increment_pool_drain_soft_evict_total();
if soft_evicted_total >= soft_budget {
break;
}
}
}
if evicted_for_writer > 0 {
pool.stats.increment_pool_drain_soft_evict_writer_total();
info!(
writer_id = writer.id,
writer_dc = writer.writer_dc,
endpoint = %writer.addr,
drained_connections = evicted_for_writer,
soft_budget,
soft_trigger_age_secs,
"ME draining writer soft-evicted bound clients"
);
}
}
}
let close_budget = health_drain_close_budget();
let requested_force_close = force_close_writer_ids.len();
let requested_empty_close = empty_writer_ids.len();
let requested_close_total = requested_force_close.saturating_add(requested_empty_close);
let mut closed_writer_ids = HashSet::<u64>::new();
let mut closed_total = 0usize;
for writer_id in force_close_writer_ids {
if closed_total >= close_budget {
break;
}
if !closed_writer_ids.insert(writer_id) {
continue;
}
pool.stats.increment_pool_force_close_total();
pool.remove_writer_and_close_clients(writer_id).await;
closed_total = closed_total.saturating_add(1);
}
for writer_id in empty_writer_ids {
if closed_total >= close_budget {
break;
}
if !closed_writer_ids.insert(writer_id) {
continue;
}
pool.remove_writer_and_close_clients(writer_id).await;
closed_total = closed_total.saturating_add(1);
}
let pending_close_total = requested_close_total.saturating_sub(closed_total);
if pending_close_total > 0 {
warn!(
close_budget,
closed_total,
pending_close_total,
"ME draining close backlog deferred to next health cycle"
);
}
}
pub(super) fn health_drain_close_budget() -> usize {
let cpu_cores = std::thread::available_parallelism()
.map(std::num::NonZeroUsize::get)
.unwrap_or(1);
cpu_cores
.saturating_mul(HEALTH_DRAIN_CLOSE_BUDGET_PER_CORE)
.clamp(HEALTH_DRAIN_CLOSE_BUDGET_MIN, HEALTH_DRAIN_CLOSE_BUDGET_MAX)
}
pub(super) fn health_drain_soft_evict_budget(pool: &MePool) -> usize {
let cpu_cores = std::thread::available_parallelism()
.map(std::num::NonZeroUsize::get)
.unwrap_or(1);
let per_core = pool.drain_soft_evict_budget_per_core();
cpu_cores
.saturating_mul(per_core)
.clamp(
HEALTH_DRAIN_SOFT_EVICT_BUDGET_MIN,
HEALTH_DRAIN_SOFT_EVICT_BUDGET_MAX,
)
} }
fn should_emit_writer_warn(
@@ -1382,6 +1545,11 @@ mod tests {
        general.hardswap,
        general.me_pool_drain_ttl_secs,
        general.me_pool_drain_threshold,
+        general.me_pool_drain_soft_evict_enabled,
+        general.me_pool_drain_soft_evict_grace_secs,
+        general.me_pool_drain_soft_evict_per_writer,
+        general.me_pool_drain_soft_evict_budget_per_core,
+        general.me_pool_drain_soft_evict_cooldown_ms,
        general.effective_me_pool_force_close_secs(),
        general.me_pool_min_fresh_ratio,
        general.me_hardswap_warmup_delay_min_ms,
@@ -1463,8 +1631,9 @@ mod tests {
        let conn_b = insert_draining_writer(&pool, 20, now_epoch_secs.saturating_sub(20)).await;
        let conn_c = insert_draining_writer(&pool, 30, now_epoch_secs.saturating_sub(10)).await;
        let mut warn_next_allowed = HashMap::new();
+        let mut soft_evict_next_allowed = HashMap::new();

-        reap_draining_writers(&pool, &mut warn_next_allowed).await;
+        reap_draining_writers(&pool, &mut warn_next_allowed, &mut soft_evict_next_allowed).await;

        let writer_ids: Vec<u64> = pool.writers.read().await.iter().map(|writer| writer.id).collect();
        assert_eq!(writer_ids, vec![20, 30]);
@@ -1481,8 +1650,9 @@ mod tests {
        let conn_b = insert_draining_writer(&pool, 20, now_epoch_secs.saturating_sub(20)).await;
        let conn_c = insert_draining_writer(&pool, 30, now_epoch_secs.saturating_sub(10)).await;
        let mut warn_next_allowed = HashMap::new();
+        let mut soft_evict_next_allowed = HashMap::new();

-        reap_draining_writers(&pool, &mut warn_next_allowed).await;
+        reap_draining_writers(&pool, &mut warn_next_allowed, &mut soft_evict_next_allowed).await;

        let writer_ids: Vec<u64> = pool.writers.read().await.iter().map(|writer| writer.id).collect();
        assert_eq!(writer_ids, vec![10, 20, 30]);

View File

@@ -0,0 +1,450 @@
use std::collections::HashMap;
use std::net::{IpAddr, Ipv4Addr, SocketAddr};
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, AtomicU8, AtomicU32, AtomicU64, Ordering};
use std::time::{Duration, Instant};
use tokio::sync::mpsc;
use tokio_util::sync::CancellationToken;
use super::codec::WriterCommand;
use super::health::{health_drain_close_budget, reap_draining_writers};
use super::pool::{MePool, MeWriter, WriterContour};
use super::registry::ConnMeta;
use super::me_health_monitor;
use crate::config::{GeneralConfig, MeRouteNoWriterMode, MeSocksKdfPolicy, MeWriterPickMode};
use crate::crypto::SecureRandom;
use crate::network::probe::NetworkDecision;
use crate::stats::Stats;
async fn make_pool(
me_pool_drain_threshold: u64,
me_health_interval_ms_unhealthy: u64,
me_health_interval_ms_healthy: u64,
) -> (Arc<MePool>, Arc<SecureRandom>) {
let general = GeneralConfig {
me_pool_drain_threshold,
me_health_interval_ms_unhealthy,
me_health_interval_ms_healthy,
..GeneralConfig::default()
};
let rng = Arc::new(SecureRandom::new());
let pool = MePool::new(
None,
vec![1u8; 32],
None,
false,
None,
Vec::new(),
1,
None,
12,
1200,
HashMap::new(),
HashMap::new(),
None,
NetworkDecision::default(),
None,
rng.clone(),
Arc::new(Stats::default()),
general.me_keepalive_enabled,
general.me_keepalive_interval_secs,
general.me_keepalive_jitter_secs,
general.me_keepalive_payload_random,
general.rpc_proxy_req_every,
general.me_warmup_stagger_enabled,
general.me_warmup_step_delay_ms,
general.me_warmup_step_jitter_ms,
general.me_reconnect_max_concurrent_per_dc,
general.me_reconnect_backoff_base_ms,
general.me_reconnect_backoff_cap_ms,
general.me_reconnect_fast_retry_count,
general.me_single_endpoint_shadow_writers,
general.me_single_endpoint_outage_mode_enabled,
general.me_single_endpoint_outage_disable_quarantine,
general.me_single_endpoint_outage_backoff_min_ms,
general.me_single_endpoint_outage_backoff_max_ms,
general.me_single_endpoint_shadow_rotate_every_secs,
general.me_floor_mode,
general.me_adaptive_floor_idle_secs,
general.me_adaptive_floor_min_writers_single_endpoint,
general.me_adaptive_floor_min_writers_multi_endpoint,
general.me_adaptive_floor_recover_grace_secs,
general.me_adaptive_floor_writers_per_core_total,
general.me_adaptive_floor_cpu_cores_override,
general.me_adaptive_floor_max_extra_writers_single_per_core,
general.me_adaptive_floor_max_extra_writers_multi_per_core,
general.me_adaptive_floor_max_active_writers_per_core,
general.me_adaptive_floor_max_warm_writers_per_core,
general.me_adaptive_floor_max_active_writers_global,
general.me_adaptive_floor_max_warm_writers_global,
general.hardswap,
general.me_pool_drain_ttl_secs,
general.me_pool_drain_threshold,
general.me_pool_drain_soft_evict_enabled,
general.me_pool_drain_soft_evict_grace_secs,
general.me_pool_drain_soft_evict_per_writer,
general.me_pool_drain_soft_evict_budget_per_core,
general.me_pool_drain_soft_evict_cooldown_ms,
general.effective_me_pool_force_close_secs(),
general.me_pool_min_fresh_ratio,
general.me_hardswap_warmup_delay_min_ms,
general.me_hardswap_warmup_delay_max_ms,
general.me_hardswap_warmup_extra_passes,
general.me_hardswap_warmup_pass_backoff_base_ms,
general.me_bind_stale_mode,
general.me_bind_stale_ttl_secs,
general.me_secret_atomic_snapshot,
general.me_deterministic_writer_sort,
MeWriterPickMode::default(),
general.me_writer_pick_sample_size,
MeSocksKdfPolicy::default(),
general.me_writer_cmd_channel_capacity,
general.me_route_channel_capacity,
general.me_route_backpressure_base_timeout_ms,
general.me_route_backpressure_high_timeout_ms,
general.me_route_backpressure_high_watermark_pct,
general.me_reader_route_data_wait_ms,
general.me_health_interval_ms_unhealthy,
general.me_health_interval_ms_healthy,
general.me_warn_rate_limit_ms,
MeRouteNoWriterMode::default(),
general.me_route_no_writer_wait_ms,
general.me_route_inline_recovery_attempts,
general.me_route_inline_recovery_wait_ms,
);
(pool, rng)
}
async fn insert_draining_writer(
pool: &Arc<MePool>,
writer_id: u64,
drain_started_at_epoch_secs: u64,
bound_clients: usize,
drain_deadline_epoch_secs: u64,
) {
let (tx, _writer_rx) = mpsc::channel::<WriterCommand>(8);
let writer = MeWriter {
id: writer_id,
addr: SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 6000 + writer_id as u16),
source_ip: IpAddr::V4(Ipv4Addr::LOCALHOST),
writer_dc: 2,
generation: 1,
contour: Arc::new(AtomicU8::new(WriterContour::Draining.as_u8())),
created_at: Instant::now() - Duration::from_secs(writer_id),
tx: tx.clone(),
cancel: CancellationToken::new(),
degraded: Arc::new(AtomicBool::new(false)),
rtt_ema_ms_x10: Arc::new(AtomicU32::new(0)),
draining: Arc::new(AtomicBool::new(true)),
draining_started_at_epoch_secs: Arc::new(AtomicU64::new(drain_started_at_epoch_secs)),
drain_deadline_epoch_secs: Arc::new(AtomicU64::new(drain_deadline_epoch_secs)),
allow_drain_fallback: Arc::new(AtomicBool::new(false)),
};
pool.writers.write().await.push(writer);
pool.registry.register_writer(writer_id, tx).await;
pool.conn_count.fetch_add(1, Ordering::Relaxed);
for idx in 0..bound_clients {
let (conn_id, _rx) = pool.registry.register().await;
assert!(
pool.registry
.bind_writer(
conn_id,
writer_id,
ConnMeta {
target_dc: 2,
client_addr: SocketAddr::new(
IpAddr::V4(Ipv4Addr::LOCALHOST),
8000 + idx as u16,
),
our_addr: SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 443),
proto_flags: 0,
},
)
.await
);
}
}
async fn writer_count(pool: &Arc<MePool>) -> usize {
pool.writers.read().await.len()
}
async fn sorted_writer_ids(pool: &Arc<MePool>) -> Vec<u64> {
let mut ids = pool
.writers
.read()
.await
.iter()
.map(|writer| writer.id)
.collect::<Vec<_>>();
ids.sort_unstable();
ids
}
#[tokio::test]
async fn reap_draining_writers_clears_warn_state_when_pool_empty() {
let (pool, _rng) = make_pool(128, 1, 1).await;
let mut warn_next_allowed = HashMap::new();
let mut soft_evict_next_allowed = HashMap::new();
warn_next_allowed.insert(11, Instant::now() + Duration::from_secs(5));
warn_next_allowed.insert(22, Instant::now() + Duration::from_secs(5));
reap_draining_writers(&pool, &mut warn_next_allowed, &mut soft_evict_next_allowed).await;
assert!(warn_next_allowed.is_empty());
}
#[tokio::test]
async fn reap_draining_writers_respects_threshold_across_multiple_overflow_cycles() {
let threshold = 3u64;
let (pool, _rng) = make_pool(threshold, 1, 1).await;
pool.me_pool_drain_soft_evict_enabled
.store(false, Ordering::Relaxed);
let now_epoch_secs = MePool::now_epoch_secs();
for writer_id in 1..=60u64 {
insert_draining_writer(
&pool,
writer_id,
now_epoch_secs.saturating_sub(600).saturating_add(writer_id),
1,
0,
)
.await;
}
let mut warn_next_allowed = HashMap::new();
let mut soft_evict_next_allowed = HashMap::new();
for _ in 0..64 {
reap_draining_writers(&pool, &mut warn_next_allowed, &mut soft_evict_next_allowed).await;
if writer_count(&pool).await <= threshold as usize {
break;
}
}
assert_eq!(writer_count(&pool).await, threshold as usize);
assert_eq!(sorted_writer_ids(&pool).await, vec![58, 59, 60]);
}
#[tokio::test]
async fn reap_draining_writers_handles_large_empty_writer_population() {
let (pool, _rng) = make_pool(128, 1, 1).await;
let now_epoch_secs = MePool::now_epoch_secs();
let total = health_drain_close_budget().saturating_mul(3).saturating_add(27);
for writer_id in 1..=total as u64 {
insert_draining_writer(
&pool,
writer_id,
now_epoch_secs.saturating_sub(120),
0,
0,
)
.await;
}
let mut warn_next_allowed = HashMap::new();
let mut soft_evict_next_allowed = HashMap::new();
for _ in 0..24 {
if writer_count(&pool).await == 0 {
break;
}
reap_draining_writers(&pool, &mut warn_next_allowed, &mut soft_evict_next_allowed).await;
}
assert_eq!(writer_count(&pool).await, 0);
}
#[tokio::test]
async fn reap_draining_writers_processes_mass_deadline_expiry_without_unbounded_growth() {
let (pool, _rng) = make_pool(128, 1, 1).await;
let now_epoch_secs = MePool::now_epoch_secs();
let total = health_drain_close_budget().saturating_mul(4).saturating_add(31);
for writer_id in 1..=total as u64 {
insert_draining_writer(
&pool,
writer_id,
now_epoch_secs.saturating_sub(180),
1,
now_epoch_secs.saturating_sub(1),
)
.await;
}
let mut warn_next_allowed = HashMap::new();
let mut soft_evict_next_allowed = HashMap::new();
for _ in 0..40 {
if writer_count(&pool).await == 0 {
break;
}
reap_draining_writers(&pool, &mut warn_next_allowed, &mut soft_evict_next_allowed).await;
}
assert_eq!(writer_count(&pool).await, 0);
}
#[tokio::test]
async fn reap_draining_writers_maintains_warn_state_subset_property_under_bulk_churn() {
let (pool, _rng) = make_pool(128, 1, 1).await;
let now_epoch_secs = MePool::now_epoch_secs();
let mut warn_next_allowed = HashMap::new();
let mut soft_evict_next_allowed = HashMap::new();
for wave in 0..40u64 {
for offset in 0..8u64 {
insert_draining_writer(
&pool,
wave * 100 + offset,
now_epoch_secs.saturating_sub(400 + offset),
1,
0,
)
.await;
}
reap_draining_writers(&pool, &mut warn_next_allowed, &mut soft_evict_next_allowed).await;
assert!(warn_next_allowed.len() <= writer_count(&pool).await);
let ids = sorted_writer_ids(&pool).await;
for writer_id in ids.into_iter().take(3) {
let _ = pool.remove_writer_and_close_clients(writer_id).await;
}
reap_draining_writers(&pool, &mut warn_next_allowed, &mut soft_evict_next_allowed).await;
assert!(warn_next_allowed.len() <= writer_count(&pool).await);
}
}
#[tokio::test]
async fn reap_draining_writers_budgeted_cleanup_never_increases_pool_size() {
let (pool, _rng) = make_pool(5, 1, 1).await;
let now_epoch_secs = MePool::now_epoch_secs();
for writer_id in 1..=200u64 {
insert_draining_writer(
&pool,
writer_id,
now_epoch_secs.saturating_sub(240).saturating_add(writer_id),
1,
0,
)
.await;
}
let mut warn_next_allowed = HashMap::new();
let mut soft_evict_next_allowed = HashMap::new();
let mut previous = writer_count(&pool).await;
for _ in 0..32 {
reap_draining_writers(&pool, &mut warn_next_allowed, &mut soft_evict_next_allowed).await;
let current = writer_count(&pool).await;
assert!(current <= previous);
previous = current;
}
}
#[tokio::test]
async fn me_health_monitor_converges_to_threshold_under_live_injection_churn() {
let threshold = 7u64;
let (pool, rng) = make_pool(threshold, 1, 1).await;
let now_epoch_secs = MePool::now_epoch_secs();
for writer_id in 1..=40u64 {
insert_draining_writer(
&pool,
writer_id,
now_epoch_secs.saturating_sub(300).saturating_add(writer_id),
1,
0,
)
.await;
}
let monitor = tokio::spawn(me_health_monitor(pool.clone(), rng, 0));
for wave in 0..8u64 {
for offset in 0..10u64 {
insert_draining_writer(
&pool,
1000 + wave * 100 + offset,
now_epoch_secs.saturating_sub(120).saturating_add(offset),
1,
0,
)
.await;
}
tokio::time::sleep(Duration::from_millis(5)).await;
}
tokio::time::sleep(Duration::from_millis(120)).await;
monitor.abort();
let _ = monitor.await;
assert!(writer_count(&pool).await <= threshold as usize);
}
#[tokio::test]
async fn me_health_monitor_drains_deadline_storm_with_budgeted_progress() {
let (pool, rng) = make_pool(128, 1, 1).await;
let now_epoch_secs = MePool::now_epoch_secs();
for writer_id in 1..=220u64 {
insert_draining_writer(
&pool,
writer_id,
now_epoch_secs.saturating_sub(120),
1,
now_epoch_secs.saturating_sub(1),
)
.await;
}
let monitor = tokio::spawn(me_health_monitor(pool.clone(), rng, 0));
tokio::time::sleep(Duration::from_millis(120)).await;
monitor.abort();
let _ = monitor.await;
assert_eq!(writer_count(&pool).await, 0);
}
#[tokio::test]
async fn me_health_monitor_eliminates_mixed_empty_and_deadline_backlog() {
let threshold = 12u64;
let (pool, rng) = make_pool(threshold, 1, 1).await;
let now_epoch_secs = MePool::now_epoch_secs();
for writer_id in 1..=180u64 {
let bound_clients = if writer_id % 3 == 0 { 0 } else { 1 };
let deadline = if writer_id % 2 == 0 {
now_epoch_secs.saturating_sub(1)
} else {
0
};
insert_draining_writer(
&pool,
writer_id,
now_epoch_secs.saturating_sub(250).saturating_add(writer_id),
bound_clients,
deadline,
)
.await;
}
let monitor = tokio::spawn(me_health_monitor(pool.clone(), rng, 0));
tokio::time::sleep(Duration::from_millis(140)).await;
monitor.abort();
let _ = monitor.await;
assert!(writer_count(&pool).await <= threshold as usize);
}
#[test]
fn health_drain_close_budget_is_within_expected_bounds() {
let budget = health_drain_close_budget();
assert!((16..=256).contains(&budget));
}

View File

@@ -0,0 +1,232 @@
use std::collections::HashMap;
use std::net::{IpAddr, Ipv4Addr, SocketAddr};
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, AtomicU8, AtomicU32, AtomicU64, Ordering};
use std::time::{Duration, Instant};
use tokio::sync::mpsc;
use tokio_util::sync::CancellationToken;
use super::codec::WriterCommand;
use super::health::health_drain_close_budget;
use super::pool::{MePool, MeWriter, WriterContour};
use super::registry::ConnMeta;
use super::me_health_monitor;
use crate::config::{GeneralConfig, MeRouteNoWriterMode, MeSocksKdfPolicy, MeWriterPickMode};
use crate::crypto::SecureRandom;
use crate::network::probe::NetworkDecision;
use crate::stats::Stats;
async fn make_pool(
me_pool_drain_threshold: u64,
me_health_interval_ms_unhealthy: u64,
me_health_interval_ms_healthy: u64,
) -> (Arc<MePool>, Arc<SecureRandom>) {
let general = GeneralConfig {
me_pool_drain_threshold,
me_health_interval_ms_unhealthy,
me_health_interval_ms_healthy,
..GeneralConfig::default()
};
let rng = Arc::new(SecureRandom::new());
let pool = MePool::new(
None,
vec![1u8; 32],
None,
false,
None,
Vec::new(),
1,
None,
12,
1200,
HashMap::new(),
HashMap::new(),
None,
NetworkDecision::default(),
None,
rng.clone(),
Arc::new(Stats::default()),
general.me_keepalive_enabled,
general.me_keepalive_interval_secs,
general.me_keepalive_jitter_secs,
general.me_keepalive_payload_random,
general.rpc_proxy_req_every,
general.me_warmup_stagger_enabled,
general.me_warmup_step_delay_ms,
general.me_warmup_step_jitter_ms,
general.me_reconnect_max_concurrent_per_dc,
general.me_reconnect_backoff_base_ms,
general.me_reconnect_backoff_cap_ms,
general.me_reconnect_fast_retry_count,
general.me_single_endpoint_shadow_writers,
general.me_single_endpoint_outage_mode_enabled,
general.me_single_endpoint_outage_disable_quarantine,
general.me_single_endpoint_outage_backoff_min_ms,
general.me_single_endpoint_outage_backoff_max_ms,
general.me_single_endpoint_shadow_rotate_every_secs,
general.me_floor_mode,
general.me_adaptive_floor_idle_secs,
general.me_adaptive_floor_min_writers_single_endpoint,
general.me_adaptive_floor_min_writers_multi_endpoint,
general.me_adaptive_floor_recover_grace_secs,
general.me_adaptive_floor_writers_per_core_total,
general.me_adaptive_floor_cpu_cores_override,
general.me_adaptive_floor_max_extra_writers_single_per_core,
general.me_adaptive_floor_max_extra_writers_multi_per_core,
general.me_adaptive_floor_max_active_writers_per_core,
general.me_adaptive_floor_max_warm_writers_per_core,
general.me_adaptive_floor_max_active_writers_global,
general.me_adaptive_floor_max_warm_writers_global,
general.hardswap,
general.me_pool_drain_ttl_secs,
general.me_pool_drain_threshold,
general.me_pool_drain_soft_evict_enabled,
general.me_pool_drain_soft_evict_grace_secs,
general.me_pool_drain_soft_evict_per_writer,
general.me_pool_drain_soft_evict_budget_per_core,
general.me_pool_drain_soft_evict_cooldown_ms,
general.effective_me_pool_force_close_secs(),
general.me_pool_min_fresh_ratio,
general.me_hardswap_warmup_delay_min_ms,
general.me_hardswap_warmup_delay_max_ms,
general.me_hardswap_warmup_extra_passes,
general.me_hardswap_warmup_pass_backoff_base_ms,
general.me_bind_stale_mode,
general.me_bind_stale_ttl_secs,
general.me_secret_atomic_snapshot,
general.me_deterministic_writer_sort,
MeWriterPickMode::default(),
general.me_writer_pick_sample_size,
MeSocksKdfPolicy::default(),
general.me_writer_cmd_channel_capacity,
general.me_route_channel_capacity,
general.me_route_backpressure_base_timeout_ms,
general.me_route_backpressure_high_timeout_ms,
general.me_route_backpressure_high_watermark_pct,
general.me_reader_route_data_wait_ms,
general.me_health_interval_ms_unhealthy,
general.me_health_interval_ms_healthy,
general.me_warn_rate_limit_ms,
MeRouteNoWriterMode::default(),
general.me_route_no_writer_wait_ms,
general.me_route_inline_recovery_attempts,
general.me_route_inline_recovery_wait_ms,
);
(pool, rng)
}
async fn insert_draining_writer(
pool: &Arc<MePool>,
writer_id: u64,
drain_started_at_epoch_secs: u64,
bound_clients: usize,
drain_deadline_epoch_secs: u64,
) {
let (tx, _writer_rx) = mpsc::channel::<WriterCommand>(8);
let writer = MeWriter {
id: writer_id,
addr: SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 5500 + writer_id as u16),
source_ip: IpAddr::V4(Ipv4Addr::LOCALHOST),
writer_dc: 2,
generation: 1,
contour: Arc::new(AtomicU8::new(WriterContour::Draining.as_u8())),
created_at: Instant::now() - Duration::from_secs(writer_id),
tx: tx.clone(),
cancel: CancellationToken::new(),
degraded: Arc::new(AtomicBool::new(false)),
rtt_ema_ms_x10: Arc::new(AtomicU32::new(0)),
draining: Arc::new(AtomicBool::new(true)),
draining_started_at_epoch_secs: Arc::new(AtomicU64::new(drain_started_at_epoch_secs)),
drain_deadline_epoch_secs: Arc::new(AtomicU64::new(drain_deadline_epoch_secs)),
allow_drain_fallback: Arc::new(AtomicBool::new(false)),
};
pool.writers.write().await.push(writer);
pool.registry.register_writer(writer_id, tx).await;
pool.conn_count.fetch_add(1, Ordering::Relaxed);
for idx in 0..bound_clients {
let (conn_id, _rx) = pool.registry.register().await;
assert!(
pool.registry
.bind_writer(
conn_id,
writer_id,
ConnMeta {
target_dc: 2,
client_addr: SocketAddr::new(
IpAddr::V4(Ipv4Addr::LOCALHOST),
7200 + idx as u16,
),
our_addr: SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 443),
proto_flags: 0,
},
)
.await
);
}
}
#[tokio::test]
async fn me_health_monitor_drains_expired_backlog_over_multiple_cycles() {
let (pool, rng) = make_pool(128, 1, 1).await;
let now_epoch_secs = MePool::now_epoch_secs();
let writer_total = health_drain_close_budget().saturating_mul(2).saturating_add(9);
for writer_id in 1..=writer_total as u64 {
insert_draining_writer(
&pool,
writer_id,
now_epoch_secs.saturating_sub(120),
1,
now_epoch_secs.saturating_sub(1),
)
.await;
}
let monitor = tokio::spawn(me_health_monitor(pool.clone(), rng, 0));
tokio::time::sleep(Duration::from_millis(60)).await;
monitor.abort();
let _ = monitor.await;
assert!(pool.writers.read().await.is_empty());
}
#[tokio::test]
async fn me_health_monitor_cleans_empty_draining_writers_without_force_close() {
let (pool, rng) = make_pool(128, 1, 1).await;
let now_epoch_secs = MePool::now_epoch_secs();
for writer_id in 1..=24u64 {
insert_draining_writer(&pool, writer_id, now_epoch_secs.saturating_sub(60), 0, 0).await;
}
let monitor = tokio::spawn(me_health_monitor(pool.clone(), rng, 0));
tokio::time::sleep(Duration::from_millis(30)).await;
monitor.abort();
let _ = monitor.await;
assert!(pool.writers.read().await.is_empty());
}
#[tokio::test]
async fn me_health_monitor_converges_retry_like_threshold_backlog_to_empty() {
let threshold = 4u64;
let (pool, rng) = make_pool(threshold, 1, 1).await;
let now_epoch_secs = MePool::now_epoch_secs();
let writer_total = threshold as usize + health_drain_close_budget().saturating_add(11);
for writer_id in 1..=writer_total as u64 {
insert_draining_writer(
&pool,
writer_id,
now_epoch_secs.saturating_sub(300).saturating_add(writer_id),
1,
0,
)
.await;
}
let monitor = tokio::spawn(me_health_monitor(pool.clone(), rng, 0));
tokio::time::sleep(Duration::from_millis(60)).await;
monitor.abort();
let _ = monitor.await;
assert!(pool.writers.read().await.is_empty());
}
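
The multi-cycle test sizes its backlog at twice the per-tick close budget plus nine, so the monitor has to spread the force-closes across several health ticks. A minimal sketch of that bound, assuming each tick retires at most health_drain_close_budget() writers; the helper and the example budget of 16 are illustrative, not crate internals:

// Ceiling division: each tick retires at most `budget_per_tick` writers.
fn ticks_to_drain(backlog: usize, budget_per_tick: usize) -> usize {
    backlog.div_ceil(budget_per_tick.max(1))
}

#[test]
fn backlog_of_two_budgets_plus_nine_needs_three_ticks() {
    // Mirrors the backlog shape above (budget * 2 + 9) with an assumed budget of 16.
    assert_eq!(ticks_to_drain(2 * 16 + 9, 16), 3);
}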


@ -0,0 +1,533 @@
use std::collections::HashMap;
use std::net::{IpAddr, Ipv4Addr, SocketAddr};
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, AtomicU8, AtomicU32, AtomicU64, Ordering};
use std::time::{Duration, Instant};
use tokio::sync::mpsc;
use tokio_util::sync::CancellationToken;
use super::codec::WriterCommand;
use super::health::{health_drain_close_budget, reap_draining_writers};
use super::pool::{MePool, MeWriter, WriterContour};
use super::registry::ConnMeta;
use crate::config::{GeneralConfig, MeRouteNoWriterMode, MeSocksKdfPolicy, MeWriterPickMode};
use crate::crypto::SecureRandom;
use crate::network::probe::NetworkDecision;
use crate::stats::Stats;
async fn make_pool(me_pool_drain_threshold: u64) -> Arc<MePool> {
let general = GeneralConfig {
me_pool_drain_threshold,
..GeneralConfig::default()
};
MePool::new(
None,
vec![1u8; 32],
None,
false,
None,
Vec::new(),
1,
None,
12,
1200,
HashMap::new(),
HashMap::new(),
None,
NetworkDecision::default(),
None,
Arc::new(SecureRandom::new()),
Arc::new(Stats::new()),
general.me_keepalive_enabled,
general.me_keepalive_interval_secs,
general.me_keepalive_jitter_secs,
general.me_keepalive_payload_random,
general.rpc_proxy_req_every,
general.me_warmup_stagger_enabled,
general.me_warmup_step_delay_ms,
general.me_warmup_step_jitter_ms,
general.me_reconnect_max_concurrent_per_dc,
general.me_reconnect_backoff_base_ms,
general.me_reconnect_backoff_cap_ms,
general.me_reconnect_fast_retry_count,
general.me_single_endpoint_shadow_writers,
general.me_single_endpoint_outage_mode_enabled,
general.me_single_endpoint_outage_disable_quarantine,
general.me_single_endpoint_outage_backoff_min_ms,
general.me_single_endpoint_outage_backoff_max_ms,
general.me_single_endpoint_shadow_rotate_every_secs,
general.me_floor_mode,
general.me_adaptive_floor_idle_secs,
general.me_adaptive_floor_min_writers_single_endpoint,
general.me_adaptive_floor_min_writers_multi_endpoint,
general.me_adaptive_floor_recover_grace_secs,
general.me_adaptive_floor_writers_per_core_total,
general.me_adaptive_floor_cpu_cores_override,
general.me_adaptive_floor_max_extra_writers_single_per_core,
general.me_adaptive_floor_max_extra_writers_multi_per_core,
general.me_adaptive_floor_max_active_writers_per_core,
general.me_adaptive_floor_max_warm_writers_per_core,
general.me_adaptive_floor_max_active_writers_global,
general.me_adaptive_floor_max_warm_writers_global,
general.hardswap,
general.me_pool_drain_ttl_secs,
general.me_pool_drain_threshold,
general.me_pool_drain_soft_evict_enabled,
general.me_pool_drain_soft_evict_grace_secs,
general.me_pool_drain_soft_evict_per_writer,
general.me_pool_drain_soft_evict_budget_per_core,
general.me_pool_drain_soft_evict_cooldown_ms,
general.effective_me_pool_force_close_secs(),
general.me_pool_min_fresh_ratio,
general.me_hardswap_warmup_delay_min_ms,
general.me_hardswap_warmup_delay_max_ms,
general.me_hardswap_warmup_extra_passes,
general.me_hardswap_warmup_pass_backoff_base_ms,
general.me_bind_stale_mode,
general.me_bind_stale_ttl_secs,
general.me_secret_atomic_snapshot,
general.me_deterministic_writer_sort,
MeWriterPickMode::default(),
general.me_writer_pick_sample_size,
MeSocksKdfPolicy::default(),
general.me_writer_cmd_channel_capacity,
general.me_route_channel_capacity,
general.me_route_backpressure_base_timeout_ms,
general.me_route_backpressure_high_timeout_ms,
general.me_route_backpressure_high_watermark_pct,
general.me_reader_route_data_wait_ms,
general.me_health_interval_ms_unhealthy,
general.me_health_interval_ms_healthy,
general.me_warn_rate_limit_ms,
MeRouteNoWriterMode::default(),
general.me_route_no_writer_wait_ms,
general.me_route_inline_recovery_attempts,
general.me_route_inline_recovery_wait_ms,
)
}
async fn insert_draining_writer(
pool: &Arc<MePool>,
writer_id: u64,
drain_started_at_epoch_secs: u64,
bound_clients: usize,
drain_deadline_epoch_secs: u64,
) -> Vec<u64> {
let mut conn_ids = Vec::with_capacity(bound_clients);
let (tx, _writer_rx) = mpsc::channel::<WriterCommand>(8);
let writer = MeWriter {
id: writer_id,
addr: SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 4500 + writer_id as u16),
source_ip: IpAddr::V4(Ipv4Addr::LOCALHOST),
writer_dc: 2,
generation: 1,
contour: Arc::new(AtomicU8::new(WriterContour::Draining.as_u8())),
created_at: Instant::now() - Duration::from_secs(writer_id),
tx: tx.clone(),
cancel: CancellationToken::new(),
degraded: Arc::new(AtomicBool::new(false)),
rtt_ema_ms_x10: Arc::new(AtomicU32::new(0)),
draining: Arc::new(AtomicBool::new(true)),
draining_started_at_epoch_secs: Arc::new(AtomicU64::new(drain_started_at_epoch_secs)),
drain_deadline_epoch_secs: Arc::new(AtomicU64::new(drain_deadline_epoch_secs)),
allow_drain_fallback: Arc::new(AtomicBool::new(false)),
};
pool.writers.write().await.push(writer);
pool.registry.register_writer(writer_id, tx).await;
pool.conn_count.fetch_add(1, Ordering::Relaxed);
for idx in 0..bound_clients {
let (conn_id, _rx) = pool.registry.register().await;
assert!(
pool.registry
.bind_writer(
conn_id,
writer_id,
ConnMeta {
target_dc: 2,
client_addr: SocketAddr::new(
IpAddr::V4(Ipv4Addr::LOCALHOST),
6200 + idx as u16,
),
our_addr: SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 443),
proto_flags: 0,
},
)
.await
);
conn_ids.push(conn_id);
}
conn_ids
}
async fn current_writer_ids(pool: &Arc<MePool>) -> Vec<u64> {
let mut writer_ids = pool
.writers
.read()
.await
.iter()
.map(|writer| writer.id)
.collect::<Vec<_>>();
writer_ids.sort_unstable();
writer_ids
}
#[tokio::test]
async fn reap_draining_writers_drops_warn_state_for_removed_writer() {
let pool = make_pool(128).await;
let now_epoch_secs = MePool::now_epoch_secs();
let conn_ids =
insert_draining_writer(&pool, 7, now_epoch_secs.saturating_sub(180), 1, 0).await;
let mut warn_next_allowed = HashMap::new();
let mut soft_evict_next_allowed = HashMap::new();
reap_draining_writers(&pool, &mut warn_next_allowed, &mut soft_evict_next_allowed).await;
assert!(warn_next_allowed.contains_key(&7));
let _ = pool.remove_writer_and_close_clients(7).await;
assert!(pool.registry.get_writer(conn_ids[0]).await.is_none());
reap_draining_writers(&pool, &mut warn_next_allowed, &mut soft_evict_next_allowed).await;
assert!(!warn_next_allowed.contains_key(&7));
}
#[tokio::test]
async fn reap_draining_writers_removes_empty_draining_writers() {
let pool = make_pool(128).await;
let now_epoch_secs = MePool::now_epoch_secs();
insert_draining_writer(&pool, 1, now_epoch_secs.saturating_sub(40), 0, 0).await;
insert_draining_writer(&pool, 2, now_epoch_secs.saturating_sub(30), 0, 0).await;
insert_draining_writer(&pool, 3, now_epoch_secs.saturating_sub(20), 1, 0).await;
let mut warn_next_allowed = HashMap::new();
let mut soft_evict_next_allowed = HashMap::new();
reap_draining_writers(&pool, &mut warn_next_allowed, &mut soft_evict_next_allowed).await;
assert_eq!(current_writer_ids(&pool).await, vec![3]);
}
#[tokio::test]
async fn reap_draining_writers_overflow_closes_oldest_non_empty_writers() {
let pool = make_pool(2).await;
let now_epoch_secs = MePool::now_epoch_secs();
insert_draining_writer(&pool, 11, now_epoch_secs.saturating_sub(40), 1, 0).await;
insert_draining_writer(&pool, 22, now_epoch_secs.saturating_sub(30), 1, 0).await;
insert_draining_writer(&pool, 33, now_epoch_secs.saturating_sub(20), 1, 0).await;
insert_draining_writer(&pool, 44, now_epoch_secs.saturating_sub(10), 1, 0).await;
let mut warn_next_allowed = HashMap::new();
let mut soft_evict_next_allowed = HashMap::new();
reap_draining_writers(&pool, &mut warn_next_allowed, &mut soft_evict_next_allowed).await;
assert_eq!(current_writer_ids(&pool).await, vec![33, 44]);
}
#[tokio::test]
async fn reap_draining_writers_deadline_force_close_applies_under_threshold() {
let pool = make_pool(128).await;
let now_epoch_secs = MePool::now_epoch_secs();
insert_draining_writer(
&pool,
50,
now_epoch_secs.saturating_sub(15),
1,
now_epoch_secs.saturating_sub(1),
)
.await;
let mut warn_next_allowed = HashMap::new();
let mut soft_evict_next_allowed = HashMap::new();
reap_draining_writers(&pool, &mut warn_next_allowed, &mut soft_evict_next_allowed).await;
assert!(current_writer_ids(&pool).await.is_empty());
}
#[tokio::test]
async fn reap_draining_writers_limits_closes_per_health_tick() {
let pool = make_pool(128).await;
let now_epoch_secs = MePool::now_epoch_secs();
let close_budget = health_drain_close_budget();
let writer_total = close_budget.saturating_add(19);
for writer_id in 1..=writer_total as u64 {
insert_draining_writer(
&pool,
writer_id,
now_epoch_secs.saturating_sub(20),
1,
now_epoch_secs.saturating_sub(1),
)
.await;
}
let mut warn_next_allowed = HashMap::new();
let mut soft_evict_next_allowed = HashMap::new();
reap_draining_writers(&pool, &mut warn_next_allowed, &mut soft_evict_next_allowed).await;
assert_eq!(pool.writers.read().await.len(), writer_total - close_budget);
}
#[tokio::test]
async fn reap_draining_writers_backlog_drains_across_ticks() {
let pool = make_pool(128).await;
let now_epoch_secs = MePool::now_epoch_secs();
let close_budget = health_drain_close_budget();
let writer_total = close_budget.saturating_mul(2).saturating_add(7);
for writer_id in 1..=writer_total as u64 {
insert_draining_writer(
&pool,
writer_id,
now_epoch_secs.saturating_sub(20),
1,
now_epoch_secs.saturating_sub(1),
)
.await;
}
let mut warn_next_allowed = HashMap::new();
let mut soft_evict_next_allowed = HashMap::new();
for _ in 0..8 {
if pool.writers.read().await.is_empty() {
break;
}
reap_draining_writers(&pool, &mut warn_next_allowed, &mut soft_evict_next_allowed).await;
}
assert!(pool.writers.read().await.is_empty());
}
#[tokio::test]
async fn reap_draining_writers_threshold_backlog_converges_to_threshold() {
let threshold = 5u64;
let pool = make_pool(threshold).await;
let now_epoch_secs = MePool::now_epoch_secs();
let close_budget = health_drain_close_budget();
let writer_total = threshold as usize + close_budget.saturating_add(12);
for writer_id in 1..=writer_total as u64 {
insert_draining_writer(
&pool,
writer_id,
now_epoch_secs.saturating_sub(200).saturating_add(writer_id),
1,
0,
)
.await;
}
let mut warn_next_allowed = HashMap::new();
let mut soft_evict_next_allowed = HashMap::new();
for _ in 0..16 {
reap_draining_writers(&pool, &mut warn_next_allowed, &mut soft_evict_next_allowed).await;
if pool.writers.read().await.len() <= threshold as usize {
break;
}
}
assert_eq!(pool.writers.read().await.len(), threshold as usize);
}
#[tokio::test]
async fn reap_draining_writers_threshold_zero_preserves_non_expired_non_empty_writers() {
let pool = make_pool(0).await;
let now_epoch_secs = MePool::now_epoch_secs();
insert_draining_writer(&pool, 10, now_epoch_secs.saturating_sub(40), 1, 0).await;
insert_draining_writer(&pool, 20, now_epoch_secs.saturating_sub(30), 1, 0).await;
insert_draining_writer(&pool, 30, now_epoch_secs.saturating_sub(20), 1, 0).await;
let mut warn_next_allowed = HashMap::new();
let mut soft_evict_next_allowed = HashMap::new();
reap_draining_writers(&pool, &mut warn_next_allowed, &mut soft_evict_next_allowed).await;
assert_eq!(current_writer_ids(&pool).await, vec![10, 20, 30]);
}
#[tokio::test]
async fn reap_draining_writers_prioritizes_force_close_before_empty_cleanup() {
let pool = make_pool(128).await;
let now_epoch_secs = MePool::now_epoch_secs();
let close_budget = health_drain_close_budget();
for writer_id in 1..=close_budget as u64 {
insert_draining_writer(
&pool,
writer_id,
now_epoch_secs.saturating_sub(20),
1,
now_epoch_secs.saturating_sub(1),
)
.await;
}
let empty_writer_id = close_budget as u64 + 1;
insert_draining_writer(&pool, empty_writer_id, now_epoch_secs.saturating_sub(20), 0, 0).await;
let mut warn_next_allowed = HashMap::new();
let mut soft_evict_next_allowed = HashMap::new();
reap_draining_writers(&pool, &mut warn_next_allowed, &mut soft_evict_next_allowed).await;
assert_eq!(current_writer_ids(&pool).await, vec![empty_writer_id]);
}
#[tokio::test]
async fn reap_draining_writers_empty_cleanup_does_not_increment_force_close_metric() {
let pool = make_pool(128).await;
let now_epoch_secs = MePool::now_epoch_secs();
insert_draining_writer(&pool, 1, now_epoch_secs.saturating_sub(60), 0, 0).await;
insert_draining_writer(&pool, 2, now_epoch_secs.saturating_sub(50), 0, 0).await;
let mut warn_next_allowed = HashMap::new();
let mut soft_evict_next_allowed = HashMap::new();
reap_draining_writers(&pool, &mut warn_next_allowed, &mut soft_evict_next_allowed).await;
assert!(current_writer_ids(&pool).await.is_empty());
assert_eq!(pool.stats.get_pool_force_close_total(), 0);
}
#[tokio::test]
async fn reap_draining_writers_handles_duplicate_force_close_requests_for_same_writer() {
let pool = make_pool(1).await;
let now_epoch_secs = MePool::now_epoch_secs();
insert_draining_writer(
&pool,
10,
now_epoch_secs.saturating_sub(30),
1,
now_epoch_secs.saturating_sub(1),
)
.await;
insert_draining_writer(
&pool,
20,
now_epoch_secs.saturating_sub(20),
1,
now_epoch_secs.saturating_sub(1),
)
.await;
let mut warn_next_allowed = HashMap::new();
let mut soft_evict_next_allowed = HashMap::new();
reap_draining_writers(&pool, &mut warn_next_allowed, &mut soft_evict_next_allowed).await;
assert!(current_writer_ids(&pool).await.is_empty());
}
#[tokio::test]
async fn reap_draining_writers_warn_state_never_exceeds_live_draining_population_under_churn() {
let pool = make_pool(128).await;
let now_epoch_secs = MePool::now_epoch_secs();
let mut warn_next_allowed = HashMap::new();
let mut soft_evict_next_allowed = HashMap::new();
for wave in 0..12u64 {
for offset in 0..9u64 {
insert_draining_writer(
&pool,
wave * 100 + offset,
now_epoch_secs.saturating_sub(120 + offset),
1,
0,
)
.await;
}
reap_draining_writers(&pool, &mut warn_next_allowed, &mut soft_evict_next_allowed).await;
assert!(warn_next_allowed.len() <= pool.writers.read().await.len());
let existing_writer_ids = current_writer_ids(&pool).await;
for writer_id in existing_writer_ids.into_iter().take(4) {
let _ = pool.remove_writer_and_close_clients(writer_id).await;
}
reap_draining_writers(&pool, &mut warn_next_allowed, &mut soft_evict_next_allowed).await;
assert!(warn_next_allowed.len() <= pool.writers.read().await.len());
}
}
#[tokio::test]
async fn reap_draining_writers_mixed_backlog_converges_without_leaking_warn_state() {
let pool = make_pool(6).await;
let now_epoch_secs = MePool::now_epoch_secs();
let mut warn_next_allowed = HashMap::new();
let mut soft_evict_next_allowed = HashMap::new();
for writer_id in 1..=18u64 {
let bound_clients = if writer_id % 3 == 0 { 0 } else { 1 };
let deadline = if writer_id % 2 == 0 {
now_epoch_secs.saturating_sub(1)
} else {
0
};
insert_draining_writer(
&pool,
writer_id,
now_epoch_secs.saturating_sub(300).saturating_add(writer_id),
bound_clients,
deadline,
)
.await;
}
for _ in 0..16 {
reap_draining_writers(&pool, &mut warn_next_allowed, &mut soft_evict_next_allowed).await;
if pool.writers.read().await.len() <= 6 {
break;
}
}
assert!(pool.writers.read().await.len() <= 6);
assert!(warn_next_allowed.len() <= pool.writers.read().await.len());
}
#[tokio::test]
async fn reap_draining_writers_soft_evicts_stuck_writer_with_per_writer_cap() {
let pool = make_pool(128).await;
pool.me_pool_drain_soft_evict_enabled.store(true, Ordering::Relaxed);
pool.me_pool_drain_soft_evict_grace_secs.store(0, Ordering::Relaxed);
pool.me_pool_drain_soft_evict_per_writer.store(1, Ordering::Relaxed);
pool.me_pool_drain_soft_evict_budget_per_core.store(8, Ordering::Relaxed);
pool.me_pool_drain_soft_evict_cooldown_ms
.store(1, Ordering::Relaxed);
let now_epoch_secs = MePool::now_epoch_secs();
insert_draining_writer(&pool, 77, now_epoch_secs.saturating_sub(240), 3, 0).await;
let mut warn_next_allowed = HashMap::new();
let mut soft_evict_next_allowed = HashMap::new();
reap_draining_writers(&pool, &mut warn_next_allowed, &mut soft_evict_next_allowed).await;
let activity = pool.registry.writer_activity_snapshot().await;
assert_eq!(activity.bound_clients_by_writer.get(&77), Some(&2));
assert_eq!(pool.stats.get_pool_drain_soft_evict_total(), 1);
assert_eq!(pool.stats.get_pool_drain_soft_evict_writer_total(), 1);
assert_eq!(current_writer_ids(&pool).await, vec![77]);
}
#[tokio::test]
async fn reap_draining_writers_soft_evict_respects_cooldown_per_writer() {
let pool = make_pool(128).await;
pool.me_pool_drain_soft_evict_enabled.store(true, Ordering::Relaxed);
pool.me_pool_drain_soft_evict_grace_secs.store(0, Ordering::Relaxed);
pool.me_pool_drain_soft_evict_per_writer.store(1, Ordering::Relaxed);
pool.me_pool_drain_soft_evict_budget_per_core.store(8, Ordering::Relaxed);
pool.me_pool_drain_soft_evict_cooldown_ms
.store(60_000, Ordering::Relaxed);
let now_epoch_secs = MePool::now_epoch_secs();
insert_draining_writer(&pool, 88, now_epoch_secs.saturating_sub(240), 3, 0).await;
let mut warn_next_allowed = HashMap::new();
let mut soft_evict_next_allowed = HashMap::new();
reap_draining_writers(&pool, &mut warn_next_allowed, &mut soft_evict_next_allowed).await;
reap_draining_writers(&pool, &mut warn_next_allowed, &mut soft_evict_next_allowed).await;
let activity = pool.registry.writer_activity_snapshot().await;
assert_eq!(activity.bound_clients_by_writer.get(&88), Some(&2));
assert_eq!(pool.stats.get_pool_drain_soft_evict_total(), 1);
assert_eq!(pool.stats.get_pool_drain_soft_evict_writer_total(), 1);
}
#[test]
fn general_config_default_drain_threshold_remains_enabled() {
assert_eq!(GeneralConfig::default().me_pool_drain_threshold, 128);
assert!(GeneralConfig::default().me_pool_drain_soft_evict_enabled);
assert_eq!(
GeneralConfig::default().me_pool_drain_soft_evict_per_writer,
1
);
}
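
The cooldown test expects the second reap pass to be a no-op while the 60 s per-writer cooldown is armed. A minimal sketch of that gating pattern, assuming the reaper tracks a next-allowed instant per writer id (names here are illustrative, not the crate's internals):

use std::collections::HashMap;
use std::time::{Duration, Instant};

// Returns true when a soft-evict pass may run for `writer_id`,
// arming the cooldown as a side effect.
fn soft_evict_allowed(
    next_allowed: &mut HashMap<u64, Instant>,
    writer_id: u64,
    cooldown: Duration,
) -> bool {
    let now = Instant::now();
    match next_allowed.get(&writer_id) {
        Some(at) if now < *at => false, // still cooling down: skip this tick
        _ => {
            next_allowed.insert(writer_id, now + cooldown);
            true
        }
    }
}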


@ -21,6 +21,12 @@ mod secret;
mod selftest;
mod wire;
mod pool_status;
#[cfg(test)]
mod health_regression_tests;
#[cfg(test)]
mod health_integration_tests;
#[cfg(test)]
mod health_adversarial_tests;
use bytes::Bytes;


@ -7,6 +7,7 @@ use tokio::net::UdpSocket;
use crate::config::{UpstreamConfig, UpstreamType};
use crate::crypto::SecureRandom;
use crate::error::ProxyError;
use crate::transport::shadowsocks::sanitize_shadowsocks_url;
use crate::transport::{UpstreamEgressInfo, UpstreamRouteKind};
use super::MePool;
@ -40,7 +41,11 @@ pub fn format_sample_line(sample: &MePingSample) -> String {
let sign = if sample.dc >= 0 { "+" } else { "-" };
let addr = format!("{}:{}", sample.addr.ip(), sample.addr.port());
- match (sample.connect_ms, sample.handshake_ms.as_ref(), sample.error.as_ref()) {
match (
sample.connect_ms,
sample.handshake_ms.as_ref(),
sample.error.as_ref(),
) {
(Some(conn), Some(hs), None) => format!(
" {sign} {addr}\tPing: {:.0} ms / RPC: {:.0} ms / OK",
conn, hs
@ -121,6 +126,7 @@ fn route_from_egress(egress: Option<UpstreamEgressInfo>) -> Option<String> {
None => route,
})
}
UpstreamRouteKind::Shadowsocks => Some("shadowsocks".to_string()),
}
}
@ -232,6 +238,9 @@ pub async fn format_me_route(
}
UpstreamType::Socks4 { address, .. } => format!("socks4://{address}"),
UpstreamType::Socks5 { address, .. } => format!("socks5://{address}"),
UpstreamType::Shadowsocks { url, .. } => sanitize_shadowsocks_url(url)
.map(|address| format!("shadowsocks://{address}"))
.unwrap_or_else(|_| "shadowsocks://invalid".to_string()),
};
}
@ -254,6 +263,12 @@ pub async fn format_me_route(
if has_socks5 {
kinds.push("socks5");
}
if enabled_upstreams
.iter()
.any(|u| matches!(u.upstream_type, UpstreamType::Shadowsocks { .. }))
{
kinds.push("shadowsocks");
}
format!("mixed upstreams ({})", kinds.join(", "))
}
@ -335,7 +350,10 @@ pub async fn run_me_ping(pool: &Arc<MePool>, rng: &SecureRandom) -> Vec<MePingRe
Ok((stream, conn_rtt, upstream_egress)) => {
connect_ms = Some(conn_rtt);
route = route_from_egress(upstream_egress);
- match pool.handshake_only(stream, addr, upstream_egress, rng).await {
match pool
.handshake_only(stream, addr, upstream_egress, rng)
.await
{
Ok(hs) => {
handshake_ms = Some(hs.handshake_ms);
// drop halves to close


@ -172,6 +172,11 @@ pub struct MePool {
pub(super) kdf_material_fingerprint: Arc<RwLock<HashMap<SocketAddr, (u64, u16)>>>,
pub(super) me_pool_drain_ttl_secs: AtomicU64,
pub(super) me_pool_drain_threshold: AtomicU64,
pub(super) me_pool_drain_soft_evict_enabled: AtomicBool,
pub(super) me_pool_drain_soft_evict_grace_secs: AtomicU64,
pub(super) me_pool_drain_soft_evict_per_writer: AtomicU8,
pub(super) me_pool_drain_soft_evict_budget_per_core: AtomicU32,
pub(super) me_pool_drain_soft_evict_cooldown_ms: AtomicU64,
pub(super) me_pool_force_close_secs: AtomicU64,
pub(super) me_pool_min_fresh_ratio_permille: AtomicU32,
pub(super) me_hardswap_warmup_delay_min_ms: AtomicU64,
@ -273,6 +278,11 @@ impl MePool {
hardswap: bool,
me_pool_drain_ttl_secs: u64,
me_pool_drain_threshold: u64,
me_pool_drain_soft_evict_enabled: bool,
me_pool_drain_soft_evict_grace_secs: u64,
me_pool_drain_soft_evict_per_writer: u8,
me_pool_drain_soft_evict_budget_per_core: u16,
me_pool_drain_soft_evict_cooldown_ms: u64,
me_pool_force_close_secs: u64,
me_pool_min_fresh_ratio: f32,
me_hardswap_warmup_delay_min_ms: u64,
@ -449,6 +459,17 @@ impl MePool {
kdf_material_fingerprint: Arc::new(RwLock::new(HashMap::new())),
me_pool_drain_ttl_secs: AtomicU64::new(me_pool_drain_ttl_secs),
me_pool_drain_threshold: AtomicU64::new(me_pool_drain_threshold),
me_pool_drain_soft_evict_enabled: AtomicBool::new(me_pool_drain_soft_evict_enabled),
me_pool_drain_soft_evict_grace_secs: AtomicU64::new(me_pool_drain_soft_evict_grace_secs),
me_pool_drain_soft_evict_per_writer: AtomicU8::new(
me_pool_drain_soft_evict_per_writer.max(1),
),
me_pool_drain_soft_evict_budget_per_core: AtomicU32::new(
me_pool_drain_soft_evict_budget_per_core.max(1) as u32,
),
me_pool_drain_soft_evict_cooldown_ms: AtomicU64::new(
me_pool_drain_soft_evict_cooldown_ms.max(1),
),
me_pool_force_close_secs: AtomicU64::new(me_pool_force_close_secs),
me_pool_min_fresh_ratio_permille: AtomicU32::new(Self::ratio_to_permille(
me_pool_min_fresh_ratio,
@ -496,6 +517,11 @@ impl MePool {
hardswap: bool,
drain_ttl_secs: u64,
pool_drain_threshold: u64,
pool_drain_soft_evict_enabled: bool,
pool_drain_soft_evict_grace_secs: u64,
pool_drain_soft_evict_per_writer: u8,
pool_drain_soft_evict_budget_per_core: u16,
pool_drain_soft_evict_cooldown_ms: u64,
force_close_secs: u64,
min_fresh_ratio: f32,
hardswap_warmup_delay_min_ms: u64,
@ -536,6 +562,18 @@ impl MePool {
.store(drain_ttl_secs, Ordering::Relaxed);
self.me_pool_drain_threshold
.store(pool_drain_threshold, Ordering::Relaxed);
self.me_pool_drain_soft_evict_enabled
.store(pool_drain_soft_evict_enabled, Ordering::Relaxed);
self.me_pool_drain_soft_evict_grace_secs
.store(pool_drain_soft_evict_grace_secs, Ordering::Relaxed);
self.me_pool_drain_soft_evict_per_writer
.store(pool_drain_soft_evict_per_writer.max(1), Ordering::Relaxed);
self.me_pool_drain_soft_evict_budget_per_core.store(
pool_drain_soft_evict_budget_per_core.max(1) as u32,
Ordering::Relaxed,
);
self.me_pool_drain_soft_evict_cooldown_ms
.store(pool_drain_soft_evict_cooldown_ms.max(1), Ordering::Relaxed);
self.me_pool_force_close_secs
.store(force_close_secs, Ordering::Relaxed);
self.me_pool_min_fresh_ratio_permille
@ -690,6 +728,36 @@ impl MePool {
}
}
pub(super) fn drain_soft_evict_enabled(&self) -> bool {
self.me_pool_drain_soft_evict_enabled
.load(Ordering::Relaxed)
}
pub(super) fn drain_soft_evict_grace_secs(&self) -> u64 {
self.me_pool_drain_soft_evict_grace_secs
.load(Ordering::Relaxed)
}
pub(super) fn drain_soft_evict_per_writer(&self) -> usize {
self.me_pool_drain_soft_evict_per_writer
.load(Ordering::Relaxed)
.max(1) as usize
}
pub(super) fn drain_soft_evict_budget_per_core(&self) -> usize {
self.me_pool_drain_soft_evict_budget_per_core
.load(Ordering::Relaxed)
.max(1) as usize
}
pub(super) fn drain_soft_evict_cooldown(&self) -> Duration {
Duration::from_millis(
self.me_pool_drain_soft_evict_cooldown_ms
.load(Ordering::Relaxed)
.max(1),
)
}
pub(super) async fn key_selector(&self) -> u32 {
self.proxy_secret.read().await.key_selector
}


@ -70,10 +70,12 @@ impl MePool {
let mut missing_dc = Vec::<i32>::new();
let mut covered = 0usize;
let mut total = 0usize;
for (dc, endpoints) in desired_by_dc {
if endpoints.is_empty() {
continue;
}
total += 1;
if endpoints
.iter()
.any(|addr| active_writer_addrs.contains(&(*dc, *addr)))
@ -85,7 +87,9 @@
}
missing_dc.sort_unstable();
- let total = desired_by_dc.len().max(1);
if total == 0 {
return (1.0, missing_dc);
}
let ratio = (covered as f32) / (total as f32);
(ratio, missing_dc)
}
@ -399,29 +403,21 @@
}
if hardswap {
- let mut fresh_missing_dc = Vec::<(i32, usize, usize)>::new();
- for (dc, endpoints) in &desired_by_dc {
- if endpoints.is_empty() {
- continue;
- }
- let required = self.required_writers_for_dc(endpoints.len());
- let fresh_count = writers
- .iter()
- .filter(|w| !w.draining.load(Ordering::Relaxed))
- .filter(|w| w.generation == generation)
- .filter(|w| w.writer_dc == *dc)
- .filter(|w| endpoints.contains(&w.addr))
- .count();
- if fresh_count < required {
- fresh_missing_dc.push((*dc, fresh_count, required));
- }
- }
let fresh_writer_addrs: HashSet<(i32, SocketAddr)> = writers
.iter()
.filter(|w| !w.draining.load(Ordering::Relaxed))
.filter(|w| w.generation == generation)
.map(|w| (w.writer_dc, w.addr))
.collect();
let (fresh_coverage_ratio, fresh_missing_dc) =
Self::coverage_ratio(&desired_by_dc, &fresh_writer_addrs);
if !fresh_missing_dc.is_empty() {
warn!(
previous_generation,
generation,
fresh_coverage_ratio = format_args!("{fresh_coverage_ratio:.3}"),
missing_dc = ?fresh_missing_dc,
- "ME hardswap pending: fresh generation coverage incomplete"
"ME hardswap pending: fresh generation DC coverage incomplete"
);
return;
}
@ -491,3 +487,61 @@ impl MePool {
self.zero_downtime_reinit_after_map_change(rng).await;
}
}
#[cfg(test)]
mod tests {
use std::collections::{HashMap, HashSet};
use std::net::{IpAddr, Ipv4Addr, SocketAddr};
use super::MePool;
fn addr(octet: u8, port: u16) -> SocketAddr {
SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, octet)), port)
}
#[test]
fn coverage_ratio_counts_dc_coverage_not_floor() {
let dc1 = addr(1, 2001);
let dc2 = addr(2, 2002);
let mut desired_by_dc = HashMap::<i32, HashSet<SocketAddr>>::new();
desired_by_dc.insert(1, HashSet::from([dc1]));
desired_by_dc.insert(2, HashSet::from([dc2]));
let active_writer_addrs = HashSet::from([(1, dc1)]);
let (ratio, missing_dc) = MePool::coverage_ratio(&desired_by_dc, &active_writer_addrs);
assert_eq!(ratio, 0.5);
assert_eq!(missing_dc, vec![2]);
}
#[test]
fn coverage_ratio_ignores_empty_dc_groups() {
let dc1 = addr(1, 2001);
let mut desired_by_dc = HashMap::<i32, HashSet<SocketAddr>>::new();
desired_by_dc.insert(1, HashSet::from([dc1]));
desired_by_dc.insert(2, HashSet::new());
let active_writer_addrs = HashSet::from([(1, dc1)]);
let (ratio, missing_dc) = MePool::coverage_ratio(&desired_by_dc, &active_writer_addrs);
assert_eq!(ratio, 1.0);
assert!(missing_dc.is_empty());
}
#[test]
fn coverage_ratio_reports_missing_dcs_sorted() {
let dc1 = addr(1, 2001);
let dc2 = addr(2, 2002);
let mut desired_by_dc = HashMap::<i32, HashSet<SocketAddr>>::new();
desired_by_dc.insert(2, HashSet::from([dc2]));
desired_by_dc.insert(1, HashSet::from([dc1]));
let (ratio, missing_dc) = MePool::coverage_ratio(&desired_by_dc, &HashSet::new());
assert_eq!(ratio, 0.0);
assert_eq!(missing_dc, vec![1, 2]);
}
}


@ -40,6 +40,7 @@ pub(crate) struct MeApiDcStatusSnapshot {
pub floor_max: usize,
pub floor_capped: bool,
pub alive_writers: usize,
pub coverage_ratio: f64,
pub coverage_pct: f64,
pub fresh_alive_writers: usize,
pub fresh_coverage_pct: f64,
@ -62,6 +63,7 @@ pub(crate) struct MeApiStatusSnapshot {
pub available_pct: f64,
pub required_writers: usize,
pub alive_writers: usize,
pub coverage_ratio: f64,
pub coverage_pct: f64,
pub fresh_alive_writers: usize,
pub fresh_coverage_pct: f64,
@ -124,6 +126,11 @@ pub(crate) struct MeApiRuntimeSnapshot {
pub me_reconnect_backoff_cap_ms: u64,
pub me_reconnect_fast_retry_count: u32,
pub me_pool_drain_ttl_secs: u64,
pub me_pool_drain_soft_evict_enabled: bool,
pub me_pool_drain_soft_evict_grace_secs: u64,
pub me_pool_drain_soft_evict_per_writer: u8,
pub me_pool_drain_soft_evict_budget_per_core: u16,
pub me_pool_drain_soft_evict_cooldown_ms: u64,
pub me_pool_force_close_secs: u64,
pub me_pool_min_fresh_ratio: f32,
pub me_bind_stale_mode: &'static str,
@ -337,6 +344,8 @@ impl MePool {
let mut available_endpoints = 0usize;
let mut alive_writers = 0usize;
let mut fresh_alive_writers = 0usize;
let mut coverage_ratio_dcs_total = 0usize;
let mut coverage_ratio_dcs_covered = 0usize;
let floor_mode = self.floor_mode();
let adaptive_cpu_cores = (self
.me_adaptive_floor_cpu_cores_effective
@ -388,6 +397,12 @@ impl MePool {
available_endpoints += dc_available_endpoints;
alive_writers += dc_alive_writers;
fresh_alive_writers += dc_fresh_alive_writers;
if endpoint_count > 0 {
coverage_ratio_dcs_total += 1;
if dc_alive_writers > 0 {
coverage_ratio_dcs_covered += 1;
}
}
dcs.push(MeApiDcStatusSnapshot {
dc,
@ -410,6 +425,11 @@ impl MePool {
floor_max,
floor_capped,
alive_writers: dc_alive_writers,
coverage_ratio: if endpoint_count > 0 && dc_alive_writers > 0 {
100.0
} else {
0.0
},
coverage_pct: ratio_pct(dc_alive_writers, dc_required_writers),
fresh_alive_writers: dc_fresh_alive_writers,
fresh_coverage_pct: ratio_pct(dc_fresh_alive_writers, dc_required_writers),
@ -426,6 +446,7 @@ impl MePool {
available_pct: ratio_pct(available_endpoints, configured_endpoints),
required_writers,
alive_writers,
coverage_ratio: ratio_pct(coverage_ratio_dcs_covered, coverage_ratio_dcs_total),
coverage_pct: ratio_pct(alive_writers, required_writers),
fresh_alive_writers,
fresh_coverage_pct: ratio_pct(fresh_alive_writers, required_writers),
@ -562,6 +583,22 @@ impl MePool {
me_reconnect_backoff_cap_ms: self.me_reconnect_backoff_cap.as_millis() as u64,
me_reconnect_fast_retry_count: self.me_reconnect_fast_retry_count,
me_pool_drain_ttl_secs: self.me_pool_drain_ttl_secs.load(Ordering::Relaxed),
me_pool_drain_soft_evict_enabled: self
.me_pool_drain_soft_evict_enabled
.load(Ordering::Relaxed),
me_pool_drain_soft_evict_grace_secs: self
.me_pool_drain_soft_evict_grace_secs
.load(Ordering::Relaxed),
me_pool_drain_soft_evict_per_writer: self
.me_pool_drain_soft_evict_per_writer
.load(Ordering::Relaxed),
me_pool_drain_soft_evict_budget_per_core: self
.me_pool_drain_soft_evict_budget_per_core
.load(Ordering::Relaxed)
.min(u16::MAX as u32) as u16,
me_pool_drain_soft_evict_cooldown_ms: self
.me_pool_drain_soft_evict_cooldown_ms
.load(Ordering::Relaxed),
me_pool_force_close_secs: self.me_pool_force_close_secs.load(Ordering::Relaxed),
me_pool_min_fresh_ratio: Self::permille_to_ratio(
self.me_pool_min_fresh_ratio_permille.load(Ordering::Relaxed),
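
Note the units: the new top-level coverage_ratio and the per-DC variant are both reported on a 0-100 scale, matching the existing *_pct fields. The sketch below shows the assumed shape of the ratio_pct helper they lean on; its actual body is not part of this diff:

// Assumed behavior of ratio_pct: percentage of `num` over `den`, 0.0 when `den` is 0.
fn ratio_pct(num: usize, den: usize) -> f64 {
    if den == 0 {
        0.0
    } else {
        num as f64 * 100.0 / den as f64
    }
}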


@ -394,6 +394,56 @@ impl ConnRegistry {
inner.writer_for_conn.keys().copied().collect() inner.writer_for_conn.keys().copied().collect()
} }
pub(super) async fn bound_conn_ids_for_writer_limited(
&self,
writer_id: u64,
limit: usize,
) -> Vec<u64> {
if limit == 0 {
return Vec::new();
}
let inner = self.inner.read().await;
let Some(conn_ids) = inner.conns_for_writer.get(&writer_id) else {
return Vec::new();
};
let mut out = conn_ids.iter().copied().collect::<Vec<_>>();
out.sort_unstable();
out.truncate(limit);
out
}
pub(super) async fn evict_bound_conn_if_writer(&self, conn_id: u64, writer_id: u64) -> bool {
let maybe_client_tx = {
let mut inner = self.inner.write().await;
if inner.writer_for_conn.get(&conn_id).copied() != Some(writer_id) {
return false;
}
let client_tx = inner.map.get(&conn_id).cloned();
inner.map.remove(&conn_id);
inner.meta.remove(&conn_id);
inner.writer_for_conn.remove(&conn_id);
let became_empty = if let Some(set) = inner.conns_for_writer.get_mut(&writer_id) {
set.remove(&conn_id);
set.is_empty()
} else {
false
};
if became_empty {
inner
.writer_idle_since_epoch_secs
.insert(writer_id, Self::now_epoch_secs());
}
client_tx
};
if let Some(client_tx) = maybe_client_tx {
let _ = client_tx.try_send(MeResponse::Close);
}
true
}
pub async fn writer_lost(&self, writer_id: u64) -> Vec<BoundConn> { pub async fn writer_lost(&self, writer_id: u64) -> Vec<BoundConn> {
let mut inner = self.inner.write().await; let mut inner = self.inner.write().await;
inner.writers.remove(&writer_id); inner.writers.remove(&writer_id);
@ -444,6 +494,7 @@ mod tests {
use super::ConnMeta;
use super::ConnRegistry;
use super::MeResponse;
#[tokio::test]
async fn writer_activity_snapshot_tracks_writer_and_dc_load() {
@ -634,4 +685,86 @@
);
assert!(registry.get_writer(conn_id).await.is_none());
}
#[tokio::test]
async fn bound_conn_ids_for_writer_limited_is_sorted_and_bounded() {
let registry = ConnRegistry::new();
let (writer_tx, _writer_rx) = tokio::sync::mpsc::channel(8);
registry.register_writer(10, writer_tx).await;
let addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 443);
let mut conn_ids = Vec::new();
for _ in 0..5 {
let (conn_id, _rx) = registry.register().await;
assert!(
registry
.bind_writer(
conn_id,
10,
ConnMeta {
target_dc: 2,
client_addr: addr,
our_addr: addr,
proto_flags: 0,
},
)
.await
);
conn_ids.push(conn_id);
}
conn_ids.sort_unstable();
let limited = registry.bound_conn_ids_for_writer_limited(10, 3).await;
assert_eq!(limited.len(), 3);
assert_eq!(limited, conn_ids.into_iter().take(3).collect::<Vec<_>>());
}
#[tokio::test]
async fn evict_bound_conn_if_writer_does_not_touch_rebound_conn() {
let registry = ConnRegistry::new();
let (conn_id, mut rx) = registry.register().await;
let (writer_tx_a, _writer_rx_a) = tokio::sync::mpsc::channel(8);
let (writer_tx_b, _writer_rx_b) = tokio::sync::mpsc::channel(8);
registry.register_writer(10, writer_tx_a).await;
registry.register_writer(20, writer_tx_b).await;
let addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 443);
assert!(
registry
.bind_writer(
conn_id,
10,
ConnMeta {
target_dc: 2,
client_addr: addr,
our_addr: addr,
proto_flags: 0,
},
)
.await
);
assert!(
registry
.bind_writer(
conn_id,
20,
ConnMeta {
target_dc: 2,
client_addr: addr,
our_addr: addr,
proto_flags: 1,
},
)
.await
);
let evicted = registry.evict_bound_conn_if_writer(conn_id, 10).await;
assert!(!evicted);
assert_eq!(registry.get_writer(conn_id).await.expect("writer").writer_id, 20);
assert!(rx.try_recv().is_err());
let evicted = registry.evict_bound_conn_if_writer(conn_id, 20).await;
assert!(evicted);
assert!(registry.get_writer(conn_id).await.is_none());
assert!(matches!(rx.try_recv(), Ok(MeResponse::Close)));
}
}
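
Together the two new methods give the reaper a bounded, race-safe eviction primitive: list a handful of bound connections, then evict each one only if it is still bound to the same writer, so a connection that rebound in the meantime is left alone. A minimal usage sketch under that reading (illustrative; it would have to live in the same module, since both methods are pub(super)):

// Evict at most `per_writer` clients from a stuck draining writer.
async fn soft_evict_some(registry: &ConnRegistry, writer_id: u64, per_writer: usize) -> usize {
    let mut evicted = 0;
    for conn_id in registry
        .bound_conn_ids_for_writer_limited(writer_id, per_writer)
        .await
    {
        // Skips connections that were rebound to another writer since the listing.
        if registry.evict_bound_conn_if_writer(conn_id, writer_id).await {
            evicted += 1;
        }
    }
    evicted
}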


@ -2,6 +2,7 @@
pub mod pool;
pub mod proxy_protocol;
pub mod shadowsocks;
pub mod socket;
pub mod socks;
pub mod upstream;
@ -14,5 +15,8 @@ pub use socket::*;
#[allow(unused_imports)]
pub use socks::*;
#[allow(unused_imports)]
- pub use upstream::{DcPingResult, StartupPingResult, UpstreamEgressInfo, UpstreamManager, UpstreamRouteKind};
pub use upstream::{
DcPingResult, StartupPingResult, UpstreamEgressInfo, UpstreamManager, UpstreamRouteKind,
UpstreamStream,
};
pub mod middle_proxy;


@ -0,0 +1,60 @@
use std::net::{IpAddr, SocketAddr};
use std::time::Duration;
use shadowsocks::{
ProxyClientStream,
config::{ServerConfig, ServerType},
context::Context,
net::ConnectOpts,
};
use crate::error::{ProxyError, Result};
pub(crate) type ShadowsocksStream = ProxyClientStream<shadowsocks::net::TcpStream>;
fn parse_server_config(url: &str, connect_timeout: Duration) -> Result<ServerConfig> {
let mut config = ServerConfig::from_url(url)
.map_err(|error| ProxyError::Config(format!("invalid shadowsocks url: {error}")))?;
if config.plugin().is_some() {
return Err(ProxyError::Config(
"shadowsocks plugins are not supported".to_string(),
));
}
config.set_timeout(connect_timeout);
Ok(config)
}
pub(crate) fn sanitize_shadowsocks_url(url: &str) -> Result<String> {
Ok(parse_server_config(url, Duration::from_secs(1))?
.addr()
.to_string())
}
fn connect_opts_for_interface(interface: &Option<String>) -> ConnectOpts {
let mut opts = ConnectOpts::default();
if let Some(interface) = interface {
if let Ok(ip) = interface.parse::<IpAddr>() {
opts.bind_local_addr = Some(SocketAddr::new(ip, 0));
} else {
opts.bind_interface = Some(interface.clone());
}
}
opts
}
pub(crate) async fn connect_shadowsocks(
url: &str,
interface: &Option<String>,
target: SocketAddr,
connect_timeout: Duration,
) -> Result<ShadowsocksStream> {
let config = parse_server_config(url, connect_timeout)?;
let context = Context::new_shared(ServerType::Local);
let opts = connect_opts_for_interface(interface);
ProxyClientStream::connect_with_opts(context, &config, target, &opts)
.await
.map_err(ProxyError::Io)
}
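
A minimal usage sketch for the new helper; the ss:// URL (method aes-256-gcm, password "test") and the Telegram DC target below are placeholders:

use std::net::SocketAddr;
use std::time::Duration;

async fn demo() -> crate::error::Result<()> {
    let target: SocketAddr = "149.154.167.51:443".parse().expect("valid addr");
    let stream = connect_shadowsocks(
        "ss://YWVzLTI1Ni1nY206dGVzdA@127.0.0.1:8388", // placeholder server URL
        &None,                                        // no bind interface or source IP
        target,
        Duration::from_secs(5),
    )
    .await?;
    drop(stream); // dropping the stream closes the proxied connection
    Ok(())
}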


@ -11,6 +11,8 @@ use tokio::net::TcpStream;
use socket2::{Socket, TcpKeepalive, Domain, Type, Protocol};
use tracing::debug;
const DEFAULT_SOCKET_BUFFER_BYTES: usize = 256 * 1024;
/// Configure TCP socket with recommended settings for proxy use
#[allow(dead_code)]
pub fn configure_tcp_socket(
@ -34,10 +36,10 @@ pub fn configure_tcp_socket(
socket.set_tcp_keepalive(&keepalive)?;
}
- // CHANGED: Removed manual buffer size setting (was 256KB).
- // Allowing the OS kernel to handle TCP window scaling (Autotuning) is critical
- // for mobile clients to avoid bufferbloat and stalled connections during uploads.
// Use explicit baseline buffers to reduce slow-start stalls on high RTT links.
socket.set_recv_buffer_size(DEFAULT_SOCKET_BUFFER_BYTES)?;
socket.set_send_buffer_size(DEFAULT_SOCKET_BUFFER_BYTES)?;
Ok(())
}
@ -62,6 +64,10 @@ pub fn configure_client_socket(
let keepalive = keepalive.with_interval(Duration::from_secs(keepalive_secs));
socket.set_tcp_keepalive(&keepalive)?;
// Keep explicit baseline buffers for predictable throughput across busy hosts.
socket.set_recv_buffer_size(DEFAULT_SOCKET_BUFFER_BYTES)?;
socket.set_send_buffer_size(DEFAULT_SOCKET_BUFFER_BYTES)?;
// Set TCP user timeout (Linux only)
// NOTE: iOS does not support TCP_USER_TIMEOUT - application-level timeout
@ -124,6 +130,8 @@ pub fn create_outgoing_socket_bound(addr: SocketAddr, bind_addr: Option<IpAddr>)
// Disable Nagle
socket.set_nodelay(true)?;
socket.set_recv_buffer_size(DEFAULT_SOCKET_BUFFER_BYTES)?;
socket.set_send_buffer_size(DEFAULT_SOCKET_BUFFER_BYTES)?;
if let Some(bind_ip) = bind_addr {
let bind_sock_addr = SocketAddr::new(bind_ip, 0);
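
The fixed 256 KiB baseline caps steady-state per-connection throughput at roughly window / RTT; a quick sanity check of that ceiling (illustrative arithmetic, not part of the patch):

// Max throughput a fixed window can sustain: window / RTT.
// 256 KiB at 100 ms RTT comes to ~2.6 MB/s (~21 Mbit/s) per connection.
fn max_throughput_bytes_per_sec(window_bytes: usize, rtt_ms: u64) -> f64 {
    window_bytes as f64 * 1000.0 / rtt_ms as f64
}

#[test]
fn baseline_window_at_100ms_rtt() {
    let bps = max_throughput_bytes_per_sec(256 * 1024, 100);
    assert!((bps - 2_621_440.0).abs() < f64::EPSILON);
}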


@ -4,22 +4,28 @@
#![allow(deprecated)]
use rand::Rng;
use std::collections::{BTreeSet, HashMap};
- use std::net::{SocketAddr, IpAddr};
use std::net::{IpAddr, SocketAddr};
use std::pin::Pin;
use std::sync::Arc;
use std::sync::atomic::{AtomicU64, AtomicUsize, Ordering};
use std::task::{Context, Poll};
use std::time::Duration;
use tokio::io::{AsyncRead, AsyncWrite, ReadBuf};
use tokio::net::TcpStream;
use tokio::sync::RwLock;
use tokio::time::Instant;
- use rand::Rng;
- use tracing::{debug, warn, info, trace};
use tracing::{debug, info, trace, warn};
use crate::config::{UpstreamConfig, UpstreamType};
- use crate::error::{Result, ProxyError};
use crate::error::{ProxyError, Result};
use crate::network::dns_overrides::{resolve_socket_addr, split_host_port};
- use crate::protocol::constants::{TG_DATACENTERS_V4, TG_DATACENTERS_V6, TG_DATACENTER_PORT};
use crate::protocol::constants::{TG_DATACENTER_PORT, TG_DATACENTERS_V4, TG_DATACENTERS_V6};
use crate::stats::Stats;
use crate::transport::shadowsocks::{
ShadowsocksStream, connect_shadowsocks, sanitize_shadowsocks_url,
};
use crate::transport::socket::{create_outgoing_socket_bound, resolve_interface_ip};
use crate::transport::socks::{connect_socks4, connect_socks5};
@ -47,7 +53,10 @@ struct LatencyEma {
impl LatencyEma {
const fn new(alpha: f64) -> Self {
- Self { value_ms: None, alpha }
Self {
value_ms: None,
alpha,
}
}
fn update(&mut self, sample_ms: f64) {
@ -131,11 +140,17 @@ impl UpstreamState {
return Some(ms);
}
- let (sum, count) = self.dc_latency.iter()
let (sum, count) = self
.dc_latency
.iter()
.filter_map(|l| l.get())
.fold((0.0, 0u32), |(s, c), v| (s + v, c + 1));
- if count > 0 { Some(sum / count as f64) } else { None }
if count > 0 {
Some(sum / count as f64)
} else {
None
}
}
}
@ -158,11 +173,78 @@ pub struct StartupPingResult {
pub both_available: bool,
}
pub enum UpstreamStream {
Tcp(TcpStream),
Shadowsocks(Box<ShadowsocksStream>),
}
impl std::fmt::Debug for UpstreamStream {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Self::Tcp(_) => f.write_str("UpstreamStream::Tcp(..)"),
Self::Shadowsocks(_) => f.write_str("UpstreamStream::Shadowsocks(..)"),
}
}
}
impl UpstreamStream {
pub fn into_tcp(self) -> Result<TcpStream> {
match self {
Self::Tcp(stream) => Ok(stream),
Self::Shadowsocks(_) => Err(ProxyError::Config(
"shadowsocks upstreams are not supported when general.use_middle_proxy = true"
.to_string(),
)),
}
}
}
impl AsyncRead for UpstreamStream {
fn poll_read(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &mut ReadBuf<'_>,
) -> Poll<std::io::Result<()>> {
match self.get_mut() {
Self::Tcp(stream) => Pin::new(stream).poll_read(cx, buf),
Self::Shadowsocks(stream) => Pin::new(stream.as_mut()).poll_read(cx, buf),
}
}
}
impl AsyncWrite for UpstreamStream {
fn poll_write(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &[u8],
) -> Poll<std::io::Result<usize>> {
match self.get_mut() {
Self::Tcp(stream) => Pin::new(stream).poll_write(cx, buf),
Self::Shadowsocks(stream) => Pin::new(stream.as_mut()).poll_write(cx, buf),
}
}
fn poll_flush(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<std::io::Result<()>> {
match self.get_mut() {
Self::Tcp(stream) => Pin::new(stream).poll_flush(cx),
Self::Shadowsocks(stream) => Pin::new(stream.as_mut()).poll_flush(cx),
}
}
fn poll_shutdown(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<std::io::Result<()>> {
match self.get_mut() {
Self::Tcp(stream) => Pin::new(stream).poll_shutdown(cx),
Self::Shadowsocks(stream) => Pin::new(stream.as_mut()).poll_shutdown(cx),
}
}
}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum UpstreamRouteKind {
Direct,
Socks4,
Socks5,
Shadowsocks,
}
#[derive(Debug, Clone)]
@ -194,6 +276,7 @@ pub struct UpstreamApiSummarySnapshot {
pub direct_total: usize,
pub socks4_total: usize,
pub socks5_total: usize,
pub shadowsocks_total: usize,
}
#[derive(Debug, Clone)]
@ -253,7 +336,8 @@ impl UpstreamManager {
connect_failfast_hard_errors: bool,
stats: Arc<Stats>,
) -> Self {
- let states = configs.into_iter()
let states = configs
.into_iter()
.filter(|c| c.enabled)
.map(UpstreamState::new)
.collect();
@ -311,20 +395,13 @@ impl UpstreamManager {
summary.unhealthy_total += 1;
}
- let (route_kind, address) = match &upstream.config.upstream_type {
- UpstreamType::Direct { .. } => {
- summary.direct_total += 1;
- (UpstreamRouteKind::Direct, "direct".to_string())
- }
- UpstreamType::Socks4 { address, .. } => {
- summary.socks4_total += 1;
- (UpstreamRouteKind::Socks4, address.clone())
- }
- UpstreamType::Socks5 { address, .. } => {
- summary.socks5_total += 1;
- (UpstreamRouteKind::Socks5, address.clone())
- }
- };
let (route_kind, address) = Self::describe_upstream(&upstream.config.upstream_type);
match route_kind {
UpstreamRouteKind::Direct => summary.direct_total += 1,
UpstreamRouteKind::Socks4 => summary.socks4_total += 1,
UpstreamRouteKind::Socks5 => summary.socks5_total += 1,
UpstreamRouteKind::Shadowsocks => summary.shadowsocks_total += 1,
}
let mut dc = Vec::with_capacity(NUM_DCS);
for dc_idx in 0..NUM_DCS {
@ -352,6 +429,18 @@ impl UpstreamManager {
Some(UpstreamApiSnapshot { summary, upstreams })
}
fn describe_upstream(upstream_type: &UpstreamType) -> (UpstreamRouteKind, String) {
match upstream_type {
UpstreamType::Direct { .. } => (UpstreamRouteKind::Direct, "direct".to_string()),
UpstreamType::Socks4 { address, .. } => (UpstreamRouteKind::Socks4, address.clone()),
UpstreamType::Socks5 { address, .. } => (UpstreamRouteKind::Socks5, address.clone()),
UpstreamType::Shadowsocks { url, .. } => (
UpstreamRouteKind::Shadowsocks,
sanitize_shadowsocks_url(url).unwrap_or_else(|_| "invalid".to_string()),
),
}
}
pub fn api_policy_snapshot(&self) -> UpstreamApiPolicySnapshot {
UpstreamApiPolicySnapshot {
connect_retry_attempts: self.connect_retry_attempts,
@ -539,44 +628,44 @@ impl UpstreamManager {
// Scope filter:
// If scope is set: only scoped and matched items
// If scope is not set: only unscoped items
- let filtered_upstreams : Vec<usize> = upstreams.iter()
let filtered_upstreams: Vec<usize> = upstreams
.iter()
.enumerate()
.filter(|(_, u)| {
- scope.map_or(
- u.config.scopes.is_empty(),
- |req_scope| {
- u.config.scopes
- .split(',')
- .map(str::trim)
- .any(|s| s == req_scope)
- }
- )
scope.map_or(u.config.scopes.is_empty(), |req_scope| {
u.config
.scopes
.split(',')
.map(str::trim)
.any(|s| s == req_scope)
})
})
.map(|(i, _)| i)
.collect();
// Healthy filter
- let healthy: Vec<usize> = filtered_upstreams.iter()
let healthy: Vec<usize> = filtered_upstreams
.iter()
.filter(|&&i| upstreams[i].healthy)
.copied()
.collect();
if filtered_upstreams.is_empty() {
- if Self::should_emit_warn(
- self.no_upstreams_warn_epoch_ms.as_ref(),
- 5_000,
- ) {
- warn!(scope = scope, "No upstreams available! Using first (direct?)");
if Self::should_emit_warn(self.no_upstreams_warn_epoch_ms.as_ref(), 5_000) {
warn!(
scope = scope,
"No upstreams available! Using first (direct?)"
);
}
return None;
}
if healthy.is_empty() {
- if Self::should_emit_warn(
- self.no_healthy_warn_epoch_ms.as_ref(),
- 5_000,
- ) {
- warn!(scope = scope, "No healthy upstreams available! Using random.");
if Self::should_emit_warn(self.no_healthy_warn_epoch_ms.as_ref(), 5_000) {
warn!(
scope = scope,
"No healthy upstreams available! Using random."
);
}
return Some(filtered_upstreams[rand::rng().gen_range(0..filtered_upstreams.len())]);
}
@ -585,14 +674,18 @@ impl UpstreamManager {
return Some(healthy[0]);
}
- let weights: Vec<(usize, f64)> = healthy.iter().map(|&i| {
- let base = upstreams[i].config.weight as f64;
- let latency_factor = upstreams[i].effective_latency(dc_idx)
- .map(|ms| if ms > 1.0 { 1000.0 / ms } else { 1000.0 })
- .unwrap_or(1.0);
let weights: Vec<(usize, f64)> = healthy
.iter()
.map(|&i| {
let base = upstreams[i].config.weight as f64;
let latency_factor = upstreams[i]
.effective_latency(dc_idx)
.map(|ms| if ms > 1.0 { 1000.0 / ms } else { 1000.0 })
.unwrap_or(1.0);
(i, base * latency_factor)
- }).collect();
})
.collect();
let total: f64 = weights.iter().map(|(_, w)| w).sum();
@ -620,8 +713,34 @@ impl UpstreamManager {
}
/// Connect to target through a selected upstream.
pub async fn connect(
    &self,
    target: SocketAddr,
    dc_idx: Option<i16>,
    scope: Option<&str>,
) -> Result<UpstreamStream> {
    let idx = self
        .select_upstream(dc_idx, scope)
        .await
        .ok_or_else(|| ProxyError::Config("No upstreams available".to_string()))?;
    let mut upstream = {
        let guard = self.upstreams.read().await;
        guard[idx].config.clone()
    };
    if let Some(s) = scope {
        upstream.selected_scope = s.to_string();
    }
    let bind_rr = {
        let guard = self.upstreams.read().await;
        guard.get(idx).map(|u| u.bind_rr.clone())
    };
    let (stream, _) = self
        .connect_selected_upstream(idx, upstream, target, dc_idx, bind_rr)
        .await?;
    Ok(stream)
}
@ -632,7 +751,9 @@ impl UpstreamManager {
dc_idx: Option<i16>,
scope: Option<&str>,
) -> Result<(TcpStream, UpstreamEgressInfo)> {
let idx = self
    .select_upstream(dc_idx, scope)
    .await
    .ok_or_else(|| ProxyError::Config("No upstreams available".to_string()))?;
let mut upstream = {
@ -650,6 +771,20 @@ impl UpstreamManager {
guard.get(idx).map(|u| u.bind_rr.clone())
};
let (stream, egress) = self
.connect_selected_upstream(idx, upstream, target, dc_idx, bind_rr)
.await?;
Ok((stream.into_tcp()?, egress))
}
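/// Dials `target` through the upstream already selected at `idx`, retrying
/// connect attempts within the shared connect budget.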
async fn connect_selected_upstream(
&self,
idx: usize,
upstream: UpstreamConfig,
target: SocketAddr,
dc_idx: Option<i16>,
bind_rr: Option<Arc<AtomicUsize>>,
) -> Result<(UpstreamStream, UpstreamEgressInfo)> {
let connect_started_at = Instant::now();
let mut last_error: Option<ProxyError> = None;
let mut attempts_used = 0u32;
@ -662,8 +797,8 @@ impl UpstreamManager {
break;
}
let remaining_budget = self.connect_budget.saturating_sub(elapsed);
let attempt_timeout =
    Duration::from_secs(DIRECT_CONNECT_TIMEOUT_SECS).min(remaining_budget);
if attempt_timeout.is_zero() {
last_error = Some(ProxyError::ConnectionTimeout {
addr: target.to_string(),
@ -786,9 +921,12 @@ impl UpstreamManager {
target: SocketAddr,
bind_rr: Option<Arc<AtomicUsize>>,
connect_timeout: Duration,
) -> Result<(UpstreamStream, UpstreamEgressInfo)> {
match &config.upstream_type {
UpstreamType::Direct {
    interface,
    bind_addresses,
} => {
let bind_ip = Self::resolve_bind_address(
interface,
bind_addresses,
@ -796,9 +934,7 @@ impl UpstreamManager {
bind_rr.as_deref(),
true,
);
if bind_ip.is_none() && bind_addresses.as_ref().is_some_and(|v| !v.is_empty()) {
return Err(ProxyError::Config(format!(
"No valid bind_addresses for target family {target}"
)));
@ -813,8 +949,10 @@ impl UpstreamManager {
socket.set_nonblocking(true)?;
match socket.connect(&target.into()) {
Ok(()) => {}
Err(err)
    if err.raw_os_error() == Some(libc::EINPROGRESS)
        || err.kind() == std::io::ErrorKind::WouldBlock => {}
Err(err) => return Err(ProxyError::Io(err)),
}
@ -836,7 +974,7 @@ impl UpstreamManager {
let local_addr = stream.local_addr().ok();
Ok((
    UpstreamStream::Tcp(stream),
UpstreamEgressInfo {
upstream_id,
route_kind: UpstreamRouteKind::Direct,
@ -846,8 +984,12 @@ impl UpstreamManager {
socks_proxy_addr: None,
},
))
}
UpstreamType::Socks4 {
    address,
    interface,
    user_id,
} => {
// Try to parse as SocketAddr first (IP:port), otherwise treat as hostname:port
let mut stream = if let Ok(proxy_addr) = address.parse::<SocketAddr>() {
// IP:port format - use socket with optional interface binding
@ -863,8 +1005,10 @@ impl UpstreamManager {
socket.set_nonblocking(true)?;
match socket.connect(&proxy_addr.into()) {
Ok(()) => {}
Err(err)
    if err.raw_os_error() == Some(libc::EINPROGRESS)
        || err.kind() == std::io::ErrorKind::WouldBlock => {}
Err(err) => return Err(ProxyError::Io(err)),
}
@ -888,14 +1032,16 @@ impl UpstreamManager {
// Hostname:port format - use tokio DNS resolution
// Note: interface binding is not supported for hostnames
if interface.is_some() {
    warn!(
        "SOCKS4 interface binding is not supported for hostname addresses, ignoring"
    );
}
Self::connect_hostname_with_dns_override(address, connect_timeout).await?
};
// replace socks user_id with config.selected_scope, if set
let scope: Option<&str> =
    Some(config.selected_scope.as_str()).filter(|s| !s.is_empty());
let _user_id: Option<&str> = scope.or(user_id.as_deref());
let bound = match tokio::time::timeout(
@ -915,7 +1061,7 @@ impl UpstreamManager {
let local_addr = stream.local_addr().ok();
let socks_proxy_addr = stream.peer_addr().ok();
Ok((
    UpstreamStream::Tcp(stream),
UpstreamEgressInfo {
upstream_id,
route_kind: UpstreamRouteKind::Socks4,
@ -925,8 +1071,13 @@ impl UpstreamManager {
socks_proxy_addr,
},
))
}
UpstreamType::Socks5 {
    address,
    interface,
    username,
    password,
} => {
// Try to parse as SocketAddr first (IP:port), otherwise treat as hostname:port
let mut stream = if let Ok(proxy_addr) = address.parse::<SocketAddr>() {
// IP:port format - use socket with optional interface binding
@ -942,8 +1093,10 @@ impl UpstreamManager {
socket.set_nonblocking(true)?;
match socket.connect(&proxy_addr.into()) {
Ok(()) => {}
Err(err)
    if err.raw_os_error() == Some(libc::EINPROGRESS)
        || err.kind() == std::io::ErrorKind::WouldBlock => {}
Err(err) => return Err(ProxyError::Io(err)),
}
@ -967,15 +1120,17 @@ impl UpstreamManager {
// Hostname:port format - use tokio DNS resolution
// Note: interface binding is not supported for hostnames
if interface.is_some() {
    warn!(
        "SOCKS5 interface binding is not supported for hostname addresses, ignoring"
    );
}
Self::connect_hostname_with_dns_override(address, connect_timeout).await?
};
debug!(config = ?config, "Socks5 connection");
// replace socks user:pass with config.selected_scope, if set
let scope: Option<&str> =
    Some(config.selected_scope.as_str()).filter(|s| !s.is_empty());
let _username: Option<&str> = scope.or(username.as_deref());
let _password: Option<&str> = scope.or(password.as_deref());
@ -996,7 +1151,7 @@ impl UpstreamManager {
let local_addr = stream.local_addr().ok();
let socks_proxy_addr = stream.peer_addr().ok();
Ok((
    UpstreamStream::Tcp(stream),
UpstreamEgressInfo {
upstream_id,
route_kind: UpstreamRouteKind::Socks5,
@ -1006,7 +1161,22 @@ impl UpstreamManager {
socks_proxy_addr,
},
))
}
UpstreamType::Shadowsocks { url, interface } => {
let stream = connect_shadowsocks(url, interface, target, connect_timeout).await?;
let local_addr = stream.get_ref().local_addr().ok();
Ok((
UpstreamStream::Shadowsocks(Box::new(stream)),
UpstreamEgressInfo {
upstream_id,
route_kind: UpstreamRouteKind::Shadowsocks,
local_addr,
direct_bind_ip: None,
socks_bound_addr: None,
socks_proxy_addr: None,
},
))
}
}
}
@ -1023,7 +1193,9 @@ impl UpstreamManager {
) -> Vec<StartupPingResult> {
let upstreams: Vec<(usize, UpstreamConfig, Arc<AtomicUsize>)> = {
let guard = self.upstreams.read().await;
guard
    .iter()
    .enumerate()
    .map(|(i, u)| (i, u.config.clone(), u.bind_rr.clone()))
    .collect()
};
@ -1051,6 +1223,11 @@ impl UpstreamManager {
}
UpstreamType::Socks4 { address, .. } => format!("socks4://{}", address),
UpstreamType::Socks5 { address, .. } => format!("socks5://{}", address),
UpstreamType::Shadowsocks { url, .. } => {
let address =
sanitize_shadowsocks_url(url).unwrap_or_else(|_| "invalid".to_string());
format!("shadowsocks://{address}")
}
};
let mut v6_results = Vec::with_capacity(NUM_DCS);
@ -1061,8 +1238,14 @@ impl UpstreamManager {
let result = tokio::time::timeout(
    Duration::from_secs(DC_PING_TIMEOUT_SECS),
    self.ping_single_dc(
        *upstream_idx,
        upstream_config,
        Some(bind_rr.clone()),
        addr_v6,
    ),
)
.await;
let ping_result = match result {
Ok(Ok(rtt_ms)) => {
@ -1112,8 +1295,14 @@ impl UpstreamManager {
let result = tokio::time::timeout(
    Duration::from_secs(DC_PING_TIMEOUT_SECS),
    self.ping_single_dc(
        *upstream_idx,
        upstream_config,
        Some(bind_rr.clone()),
        addr_v4,
    ),
)
.await;
let ping_result = match result {
Ok(Ok(rtt_ms)) => {
@ -1162,7 +1351,7 @@ impl UpstreamManager {
Err(_) => {
warn!(dc = %dc_key, "Invalid dc_overrides key, skipping");
continue;
}
_ => continue,
};
let dc_idx = dc_num as usize;
@ -1175,8 +1364,14 @@ impl UpstreamManager {
}
let result = tokio::time::timeout(
    Duration::from_secs(DC_PING_TIMEOUT_SECS),
    self.ping_single_dc(
        *upstream_idx,
        upstream_config,
        Some(bind_rr.clone()),
        addr,
    ),
)
.await;
let ping_result = match result {
Ok(Ok(rtt_ms)) => DcPingResult {
@ -1205,7 +1400,9 @@ impl UpstreamManager {
v4_results.push(ping_result);
}
}
Err(_) => {
    warn!(dc = %dc_idx, addr = %addr_str, "Invalid dc_overrides address, skipping")
}
}
}
}
@ -1381,12 +1578,8 @@ impl UpstreamManager {
ipv6_enabled: bool,
dc_overrides: HashMap<String, Vec<String>>,
) {
let groups =
    Self::build_health_check_groups(prefer_ipv6, ipv4_enabled, ipv6_enabled, &dc_overrides);
let required_healthy_groups = Self::required_healthy_group_count(groups.len());
let mut endpoint_rotation: HashMap<(usize, i16, bool), usize> = HashMap::new();
@ -1416,13 +1609,16 @@ impl UpstreamManager {
let mut group_ok = false;
let mut group_rtt_ms = None;
for (is_primary, endpoints) in
    [(true, &group.primary), (false, &group.fallback)]
{
if endpoints.is_empty() {
continue;
}
let rotation_key = (i, group.dc_idx, is_primary);
let start_idx =
    *endpoint_rotation.entry(rotation_key).or_insert(0) % endpoints.len();
let mut next_idx = (start_idx + 1) % endpoints.len();
for step in 0..endpoints.len() {
@ -1544,8 +1740,7 @@ impl UpstreamManager {
return None;
}
UpstreamState::dc_array_idx(dc_idx).map(|idx| guard[0].dc_ip_pref[idx])
}
/// Get preferred DC address based on config preference
@ -1566,6 +1761,12 @@ impl UpstreamManager {
#[cfg(test)]
mod tests {
use super::*;
use std::sync::Arc;
use crate::stats::Stats;
const TEST_SHADOWSOCKS_URL: &str =
"ss://2022-blake3-aes-256-gcm:MDEyMzQ1Njc4OTAxMjM0NTY3ODkwMTIzNDU2Nzg5MDE=@127.0.0.1:8388";
#[test]
fn required_healthy_group_count_applies_three_group_threshold() {
@ -1596,15 +1797,18 @@ mod tests {
assert!(dc2.primary.iter().all(|addr| addr.is_ipv6()));
assert!(dc2.fallback.iter().all(|addr| addr.is_ipv4()));
assert!(
    dc2.primary
        .contains(&"[2001:db8::10]:443".parse::<SocketAddr>().unwrap())
);
assert!(
    dc2.fallback
        .contains(&"203.0.113.10:443".parse::<SocketAddr>().unwrap())
);
assert!(
    dc2.fallback
        .contains(&"203.0.113.11:443".parse::<SocketAddr>().unwrap())
);
}
#[test]
@ -1626,12 +1830,14 @@ mod tests {
.expect("override-only dc group must be present"); .expect("override-only dc group must be present");
assert_eq!(dc9.primary.len(), 2); assert_eq!(dc9.primary.len(), 2);
assert!(
    dc9.primary
        .contains(&"198.51.100.1:443".parse::<SocketAddr>().unwrap())
);
assert!(
    dc9.primary
        .contains(&"198.51.100.2:443".parse::<SocketAddr>().unwrap())
);
assert!(dc9.fallback.is_empty());
}
@ -1678,4 +1884,36 @@ mod tests {
assert_eq!(bind, None);
}
#[test]
fn api_snapshot_reports_shadowsocks_as_sanitized_route() {
let manager = UpstreamManager::new(
vec![UpstreamConfig {
upstream_type: UpstreamType::Shadowsocks {
url: TEST_SHADOWSOCKS_URL.to_string(),
interface: None,
},
weight: 2,
enabled: true,
scopes: String::new(),
selected_scope: String::new(),
}],
1,
100,
1000,
1,
false,
Arc::new(Stats::new()),
);
let snapshot = manager.try_api_snapshot().expect("snapshot");
assert_eq!(snapshot.summary.configured_total, 1);
assert_eq!(snapshot.summary.shadowsocks_total, 1);
assert_eq!(snapshot.upstreams.len(), 1);
assert_eq!(
snapshot.upstreams[0].route_kind,
UpstreamRouteKind::Shadowsocks
);
assert_eq!(snapshot.upstreams[0].address, "127.0.0.1:8388");
}
}

tools/telemt_api.py Normal file

@ -0,0 +1,728 @@
"""
Telemt Control API Python Client
Full-coverage client for https://github.com/telemt/telemt
Usage:
client = TelemtAPI("http://127.0.0.1:9091", auth_header="your-secret")
client.health()
client.create_user("alice", max_tcp_conns=10)
client.patch_user("alice", data_quota_bytes=1_000_000_000)
client.delete_user("alice")
"""
from __future__ import annotations
import json
import secrets
from dataclasses import dataclass
from typing import Any, Dict
from urllib.error import HTTPError, URLError
from urllib.request import Request, urlopen
# ---------------------------------------------------------------------------
# Exceptions
# ---------------------------------------------------------------------------
class TelemtAPIError(Exception):
"""Raised when the API returns an error envelope or a transport error."""
def __init__(self, message: str, code: str | None = None,
http_status: int | None = None, request_id: int | None = None):
super().__init__(message)
self.code = code
self.http_status = http_status
self.request_id = request_id
def __repr__(self) -> str:
return (f"TememtAPIError(message={str(self)!r}, code={self.code!r}, "
f"http_status={self.http_status}, request_id={self.request_id})")
# ---------------------------------------------------------------------------
# Response wrapper
# ---------------------------------------------------------------------------
@dataclass
class APIResponse:
"""Wraps a successful API response envelope."""
ok: bool
data: Any
revision: str | None = None
def __repr__(self) -> str: # pragma: no cover
return f"APIResponse(ok={self.ok}, revision={self.revision!r}, data={self.data!r})"
# ---------------------------------------------------------------------------
# Main client
# ---------------------------------------------------------------------------
class TelemtAPI:
"""
HTTP client for the Telemt Control API.
Parameters
----------
base_url:
Scheme + host + port, e.g. ``"http://127.0.0.1:9091"``.
Trailing slash is stripped automatically.
auth_header:
Exact value for the ``Authorization`` header.
Leave *None* when ``auth_header`` is not configured server-side.
timeout:
Socket timeout in seconds for every request (default 10).
"""
def __init__(
self,
base_url: str = "http://127.0.0.1:9091",
auth_header: str | None = None,
timeout: int = 10,
) -> None:
self.base_url = base_url.rstrip("/")
self.auth_header = auth_header
self.timeout = timeout
# ------------------------------------------------------------------
# Low-level HTTP helpers
# ------------------------------------------------------------------
def _headers(self, extra: dict | None = None) -> dict:
h = {"Content-Type": "application/json; charset=utf-8",
"Accept": "application/json"}
if self.auth_header:
h["Authorization"] = self.auth_header
if extra:
h.update(extra)
return h
def _request(
self,
method: str,
path: str,
body: dict | None = None,
if_match: str | None = None,
query: dict | None = None,
) -> APIResponse:
url = self.base_url + path
if query:
qs = "&".join(f"{k}={v}" for k, v in query.items())
url = f"{url}?{qs}"
raw_body: bytes | None = None
if body is not None:
raw_body = json.dumps(body).encode()
extra_headers: dict = {}
if if_match is not None:
extra_headers["If-Match"] = if_match
req = Request(
url,
data=raw_body,
headers=self._headers(extra_headers),
method=method,
)
try:
with urlopen(req, timeout=self.timeout) as resp:
payload = json.loads(resp.read())
except HTTPError as exc:
raw = exc.read()
try:
payload = json.loads(raw)
except Exception:
raise TelemtAPIError(
str(exc), http_status=exc.code
) from exc
err = payload.get("error", {})
raise TelemtAPIError(
err.get("message", str(exc)),
code=err.get("code"),
http_status=exc.code,
request_id=payload.get("request_id"),
) from exc
except URLError as exc:
raise TelemtAPIError(str(exc)) from exc
if not payload.get("ok"):
err = payload.get("error", {})
raise TelemtAPIError(
err.get("message", "unknown error"),
code=err.get("code"),
request_id=payload.get("request_id"),
)
return APIResponse(
ok=True,
data=payload.get("data"),
revision=payload.get("revision"),
)
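# Envelope shapes implied by the parsing above (illustrative values):
#   success: {"ok": true,  "data": {...}, "revision": "abc123"}
#   error:   {"ok": false, "error": {"code": "...", "message": "..."},
#             "request_id": 42}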
def _get(self, path: str, query: dict | None = None) -> APIResponse:
return self._request("GET", path, query=query)
def _post(self, path: str, body: dict | None = None,
if_match: str | None = None) -> APIResponse:
return self._request("POST", path, body=body, if_match=if_match)
def _patch(self, path: str, body: dict,
if_match: str | None = None) -> APIResponse:
return self._request("PATCH", path, body=body, if_match=if_match)
def _delete(self, path: str, if_match: str | None = None) -> APIResponse:
return self._request("DELETE", path, if_match=if_match)
# ------------------------------------------------------------------
# Health & system
# ------------------------------------------------------------------
def health(self) -> APIResponse:
"""GET /v1/health — liveness probe."""
return self._get("/v1/health")
def system_info(self) -> APIResponse:
"""GET /v1/system/info — binary version, uptime, config hash."""
return self._get("/v1/system/info")
# ------------------------------------------------------------------
# Runtime gates & initialization
# ------------------------------------------------------------------
def runtime_gates(self) -> APIResponse:
"""GET /v1/runtime/gates — admission gates and startup progress."""
return self._get("/v1/runtime/gates")
def runtime_initialization(self) -> APIResponse:
"""GET /v1/runtime/initialization — detailed startup timeline."""
return self._get("/v1/runtime/initialization")
# ------------------------------------------------------------------
# Limits & security
# ------------------------------------------------------------------
def limits_effective(self) -> APIResponse:
"""GET /v1/limits/effective — effective timeout/upstream/ME limits."""
return self._get("/v1/limits/effective")
def security_posture(self) -> APIResponse:
"""GET /v1/security/posture — API auth, telemetry, log-level summary."""
return self._get("/v1/security/posture")
def security_whitelist(self) -> APIResponse:
"""GET /v1/security/whitelist — current IP whitelist CIDRs."""
return self._get("/v1/security/whitelist")
# ------------------------------------------------------------------
# Stats
# ------------------------------------------------------------------
def stats_summary(self) -> APIResponse:
"""GET /v1/stats/summary — uptime, connection totals, user count."""
return self._get("/v1/stats/summary")
def stats_zero_all(self) -> APIResponse:
"""GET /v1/stats/zero/all — zero-cost counters (core, upstream, ME, pool, desync)."""
return self._get("/v1/stats/zero/all")
def stats_upstreams(self) -> APIResponse:
"""GET /v1/stats/upstreams — upstream health + zero counters."""
return self._get("/v1/stats/upstreams")
def stats_minimal_all(self) -> APIResponse:
"""GET /v1/stats/minimal/all — ME writers + DC snapshot (requires minimal_runtime_enabled)."""
return self._get("/v1/stats/minimal/all")
def stats_me_writers(self) -> APIResponse:
"""GET /v1/stats/me-writers — per-writer ME status (requires minimal_runtime_enabled)."""
return self._get("/v1/stats/me-writers")
def stats_dcs(self) -> APIResponse:
"""GET /v1/stats/dcs — per-DC coverage and writer counts (requires minimal_runtime_enabled)."""
return self._get("/v1/stats/dcs")
# ------------------------------------------------------------------
# Runtime deep-dive
# ------------------------------------------------------------------
def runtime_me_pool_state(self) -> APIResponse:
"""GET /v1/runtime/me_pool_state — ME pool generation/writer/refill snapshot."""
return self._get("/v1/runtime/me_pool_state")
def runtime_me_quality(self) -> APIResponse:
"""GET /v1/runtime/me_quality — ME KDF, route-drop, and per-DC RTT counters."""
return self._get("/v1/runtime/me_quality")
def runtime_upstream_quality(self) -> APIResponse:
"""GET /v1/runtime/upstream_quality — per-upstream health, latency, DC preferences."""
return self._get("/v1/runtime/upstream_quality")
def runtime_nat_stun(self) -> APIResponse:
"""GET /v1/runtime/nat_stun — NAT probe state, STUN servers, reflected IPs."""
return self._get("/v1/runtime/nat_stun")
def runtime_me_selftest(self) -> APIResponse:
"""GET /v1/runtime/me-selftest — KDF/timeskew/IP/PID/BND health state."""
return self._get("/v1/runtime/me-selftest")
def runtime_connections_summary(self) -> APIResponse:
"""GET /v1/runtime/connections/summary — live connection totals + top-N users (requires runtime_edge_enabled)."""
return self._get("/v1/runtime/connections/summary")
def runtime_events_recent(self, limit: int | None = None) -> APIResponse:
"""GET /v1/runtime/events/recent — recent ring-buffer events (requires runtime_edge_enabled).
Parameters
----------
limit:
Optional cap on returned events (1-1000, server default 50).
"""
query = {"limit": str(limit)} if limit is not None else None
return self._get("/v1/runtime/events/recent", query=query)
# ------------------------------------------------------------------
# Users (read)
# ------------------------------------------------------------------
def list_users(self) -> APIResponse:
"""GET /v1/users — list all users with connection/traffic info."""
return self._get("/v1/users")
def get_user(self, username: str) -> APIResponse:
"""GET /v1/users/{username} — single user info."""
return self._get(f"/v1/users/{_safe(username)}")
# ------------------------------------------------------------------
# Users (write)
# ------------------------------------------------------------------
def create_user(
self,
username: str,
*,
secret: str | None = None,
user_ad_tag: str | None = None,
max_tcp_conns: int | None = None,
expiration_rfc3339: str | None = None,
data_quota_bytes: int | None = None,
max_unique_ips: int | None = None,
if_match: str | None = None,
) -> APIResponse:
"""POST /v1/users — create a new user.
Parameters
----------
username:
``[A-Za-z0-9_.-]``, length 1-64.
secret:
Exactly 32 hex chars. Auto-generated if omitted.
user_ad_tag:
Exactly 32 hex chars.
max_tcp_conns:
Per-user concurrent TCP limit.
expiration_rfc3339:
RFC3339 expiration timestamp, e.g. ``"2025-12-31T23:59:59Z"``.
data_quota_bytes:
Per-user traffic quota in bytes.
max_unique_ips:
Per-user unique source IP limit.
if_match:
Optional ``If-Match`` revision for optimistic concurrency.
"""
body: Dict[str, Any] = {"username": username}
_opt(body, "secret", secret)
_opt(body, "user_ad_tag", user_ad_tag)
_opt(body, "max_tcp_conns", max_tcp_conns)
_opt(body, "expiration_rfc3339", expiration_rfc3339)
_opt(body, "data_quota_bytes", data_quota_bytes)
_opt(body, "max_unique_ips", max_unique_ips)
return self._post("/v1/users", body=body, if_match=if_match)
def patch_user(
self,
username: str,
*,
secret: str | None = None,
user_ad_tag: str | None = None,
max_tcp_conns: int | None = None,
expiration_rfc3339: str | None = None,
data_quota_bytes: int | None = None,
max_unique_ips: int | None = None,
if_match: str | None = None,
) -> APIResponse:
"""PATCH /v1/users/{username} — partial update; only provided fields change.
Parameters
----------
username:
Existing username to update.
secret:
New secret (32 hex chars).
user_ad_tag:
New ad tag (32 hex chars).
max_tcp_conns:
New TCP concurrency limit.
expiration_rfc3339:
New expiration timestamp.
data_quota_bytes:
New quota in bytes.
max_unique_ips:
New unique IP limit.
if_match:
Optional ``If-Match`` revision.
"""
body: Dict[str, Any] = {}
_opt(body, "secret", secret)
_opt(body, "user_ad_tag", user_ad_tag)
_opt(body, "max_tcp_conns", max_tcp_conns)
_opt(body, "expiration_rfc3339", expiration_rfc3339)
_opt(body, "data_quota_bytes", data_quota_bytes)
_opt(body, "max_unique_ips", max_unique_ips)
if not body:
raise ValueError("patch_user: at least one field must be provided")
return self._patch(f"/v1/users/{_safe(username)}", body=body,
if_match=if_match)
def delete_user(
self,
username: str,
*,
if_match: str | None = None,
) -> APIResponse:
"""DELETE /v1/users/{username} — remove user; blocks deletion of last user.
Parameters
----------
if_match:
Optional ``If-Match`` revision for optimistic concurrency.
"""
return self._delete(f"/v1/users/{_safe(username)}", if_match=if_match)
# NOTE: POST /v1/users/{username}/rotate-secret currently returns 404
# in the route matcher (documented limitation). The method is provided
# for completeness and future compatibility.
def rotate_secret(
self,
username: str,
*,
secret: str | None = None,
if_match: str | None = None,
) -> APIResponse:
"""POST /v1/users/{username}/rotate-secret — rotate user secret.
.. warning::
This endpoint currently returns ``404 not_found`` in all released
versions (documented route matcher limitation). The method is
included for future compatibility.
Parameters
----------
secret:
New secret (32 hex chars). Auto-generated if omitted.
"""
body: Dict[str, Any] = {}
_opt(body, "secret", secret)
return self._post(f"/v1/users/{_safe(username)}/rotate-secret",
body=body or None, if_match=if_match)
# ------------------------------------------------------------------
# Convenience helpers
# ------------------------------------------------------------------
@staticmethod
def generate_secret() -> str:
"""Generate a random 32-character hex secret suitable for user creation."""
return secrets.token_hex(16) # 16 bytes → 32 hex chars
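# Usage sketch (hypothetical values) for If-Match optimistic concurrency:
#   api = TelemtAPI("http://127.0.0.1:9091", auth_header="secret")
#   rev = api.list_users().revision              # capture current revision
#   api.patch_user("alice", max_tcp_conns=5, if_match=rev)
#   # a concurrent change bumps the revision and the PATCH raises TelemtAPIError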
# ---------------------------------------------------------------------------
# Internal helpers
# ---------------------------------------------------------------------------
def _safe(username: str) -> str:
"""Minimal guard: reject obvious path-injection attempts."""
if "/" in username or "\\" in username:
raise ValueError(f"Invalid username: {username!r}")
return username
def _opt(d: dict, key: str, value: Any) -> None:
"""Add key to dict only when value is not None."""
if value is not None:
d[key] = value
# ---------------------------------------------------------------------------
# CLI
# ---------------------------------------------------------------------------
def _print(resp: APIResponse) -> None:
print(json.dumps(resp.data, indent=2))
if resp.revision:
print(f"# revision: {resp.revision}", flush=True)
def _build_parser():
import argparse
p = argparse.ArgumentParser(
prog="telemt_api.py",
description="Telemt Control API CLI",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
COMMANDS (read)
health Liveness check
info System info (version, uptime, config hash)
status Runtime gates + startup progress
init Runtime initialization timeline
limits Effective limits (timeouts, upstream, ME)
posture Security posture summary
whitelist IP whitelist entries
summary Stats summary (conns, uptime, users)
zero Zero-cost counters (core/upstream/ME/pool/desync)
upstreams Upstream health + zero counters
minimal ME writers + DC snapshot [minimal_runtime_enabled]
me-writers Per-writer ME status [minimal_runtime_enabled]
dcs Per-DC coverage [minimal_runtime_enabled]
me-pool ME pool generation/writer/refill snapshot
me-quality ME KDF, route-drops, per-DC RTT
upstream-quality Per-upstream health + latency
nat-stun NAT probe state + STUN servers
me-selftest KDF/timeskew/IP/PID/BND health
connections Live connection totals + top-N [runtime_edge_enabled]
events [--limit N] Recent ring-buffer events [runtime_edge_enabled]
COMMANDS (users)
users List all users
user <username> Get single user
create <username> [OPTIONS] Create user
patch <username> [OPTIONS] Partial update user
delete <username> Delete user
secret <username> [--secret S] Rotate secret (reserved; returns 404 in current release)
gen-secret Print a random 32-hex secret and exit
USER OPTIONS (for create / patch)
--secret S 32 hex chars
--ad-tag S 32 hex chars (ad tag)
--max-conns N Max concurrent TCP connections
--expires DATETIME RFC3339 expiration (e.g. 2026-12-31T23:59:59Z)
--quota N Data quota in bytes
--max-ips N Max unique source IPs
EXAMPLES
telemt_api.py health
telemt_api.py -u http://10.0.0.1:9091 -a mysecret users
telemt_api.py create alice --max-conns 5 --quota 10000000000
telemt_api.py patch alice --expires 2027-01-01T00:00:00Z
telemt_api.py delete alice
telemt_api.py events --limit 20
""",
)
p.add_argument("-u", "--url", default="http://127.0.0.1:9091",
metavar="URL", help="API base URL (default: http://127.0.0.1:9091)")
p.add_argument("-a", "--auth", default=None, metavar="TOKEN",
help="Authorization header value")
p.add_argument("-t", "--timeout", type=int, default=10, metavar="SEC",
help="Request timeout in seconds (default: 10)")
p.add_argument("command", nargs="?", default="help",
help="Command to run (see COMMANDS below)")
p.add_argument("arg", nargs="?", default=None, metavar="USERNAME",
help="Username for user commands")
# user create/patch fields
p.add_argument("--secret", default=None)
p.add_argument("--ad-tag", dest="ad_tag", default=None)
p.add_argument("--max-conns", dest="max_conns", type=int, default=None)
p.add_argument("--expires", default=None)
p.add_argument("--quota", type=int, default=None)
p.add_argument("--max-ips", dest="max_ips", type=int, default=None)
# events
p.add_argument("--limit", type=int, default=None,
help="Max events for `events` command")
# optimistic concurrency
p.add_argument("--if-match", dest="if_match", default=None,
metavar="REVISION", help="If-Match revision header")
return p
if __name__ == "__main__":
import sys
parser = _build_parser()
args = parser.parse_args()
cmd = (args.command or "help").lower()
if cmd in ("help", "--help"):
parser.print_help()
sys.exit(0)
if cmd == "gen-secret":
print(TelemtAPI.generate_secret())
sys.exit(0)
api = TelemtAPI(args.url, auth_header=args.auth, timeout=args.timeout)
try:
# -- read endpoints --------------------------------------------------
if cmd == "health":
_print(api.health())
elif cmd == "info":
_print(api.system_info())
elif cmd == "status":
_print(api.runtime_gates())
elif cmd == "init":
_print(api.runtime_initialization())
elif cmd == "limits":
_print(api.limits_effective())
elif cmd == "posture":
_print(api.security_posture())
elif cmd == "whitelist":
_print(api.security_whitelist())
elif cmd == "summary":
_print(api.stats_summary())
elif cmd == "zero":
_print(api.stats_zero_all())
elif cmd == "upstreams":
_print(api.stats_upstreams())
elif cmd == "minimal":
_print(api.stats_minimal_all())
elif cmd == "me-writers":
_print(api.stats_me_writers())
elif cmd == "dcs":
_print(api.stats_dcs())
elif cmd == "me-pool":
_print(api.runtime_me_pool_state())
elif cmd == "me-quality":
_print(api.runtime_me_quality())
elif cmd == "upstream-quality":
_print(api.runtime_upstream_quality())
elif cmd == "nat-stun":
_print(api.runtime_nat_stun())
elif cmd == "me-selftest":
_print(api.runtime_me_selftest())
elif cmd == "connections":
_print(api.runtime_connections_summary())
elif cmd == "events":
_print(api.runtime_events_recent(limit=args.limit))
# -- user read -------------------------------------------------------
elif cmd == "users":
resp = api.list_users()
users = resp.data or []
if not users:
print("No users configured.")
else:
fmt = "{:<24} {:>7} {:>14} {}"
print(fmt.format("USERNAME", "CONNS", "OCTETS", "LINKS"))
print("-" * 72)
for u in users:
links = (u.get("links") or {})
all_links = (links.get("classic") or []) + \
(links.get("secure") or []) + \
(links.get("tls") or [])
link_str = all_links[0] if all_links else "-"
print(fmt.format(
u["username"],
u.get("current_connections", 0),
u.get("total_octets", 0),
link_str,
))
if resp.revision:
print(f"# revision: {resp.revision}")
elif cmd == "user":
if not args.arg:
parser.error("user command requires <username>")
_print(api.get_user(args.arg))
# -- user write ------------------------------------------------------
elif cmd == "create":
if not args.arg:
parser.error("create command requires <username>")
resp = api.create_user(
args.arg,
secret=args.secret,
user_ad_tag=args.ad_tag,
max_tcp_conns=args.max_conns,
expiration_rfc3339=args.expires,
data_quota_bytes=args.quota,
max_unique_ips=args.max_ips,
if_match=args.if_match,
)
d = resp.data or {}
print(f"Created: {d.get('user', {}).get('username')}")
print(f"Secret: {d.get('secret')}")
links = (d.get("user") or {}).get("links") or {}
for kind, lst in links.items():
for link in (lst or []):
print(f"Link ({kind}): {link}")
if resp.revision:
print(f"# revision: {resp.revision}")
elif cmd == "patch":
if not args.arg:
parser.error("patch command requires <username>")
if not any([args.secret, args.ad_tag, args.max_conns,
args.expires, args.quota, args.max_ips]):
parser.error("patch requires at least one field (--secret, --max-conns, --expires, --quota, --max-ips, --ad-tag)")
_print(api.patch_user(
args.arg,
secret=args.secret,
user_ad_tag=args.ad_tag,
max_tcp_conns=args.max_conns,
expiration_rfc3339=args.expires,
data_quota_bytes=args.quota,
max_unique_ips=args.max_ips,
if_match=args.if_match,
))
elif cmd == "delete":
if not args.arg:
parser.error("delete command requires <username>")
resp = api.delete_user(args.arg, if_match=args.if_match)
print(f"Deleted: {resp.data}")
if resp.revision:
print(f"# revision: {resp.revision}")
elif cmd == "secret":
if not args.arg:
parser.error("secret command requires <username>")
_print(api.rotate_secret(args.arg, secret=args.secret,
if_match=args.if_match))
else:
print(f"Unknown command: {cmd!r}\nRun with 'help' to see available commands.",
file=sys.stderr)
sys.exit(1)
except TelemtAPIError as exc:
print(f"API error [{exc.http_status}] {exc.code}: {exc}", file=sys.stderr)
sys.exit(1)
except KeyboardInterrupt:
sys.exit(130)


@ -1165,6 +1165,60 @@ zabbix_export:
tags:
- tag: Application
value: 'Users connections'
graph_prototypes:
- uuid: 4199de3dcea943d8a1ec62dc297b2e9f
name: 'User {#TELEMT_USER}: Connections'
graph_items:
- color: 1A7C11
item:
host: Telemt
key: 'telemt.active_conn_[{#TELEMT_USER}]'
- color: F63100
sortorder: '1'
item:
host: Telemt
key: 'telemt.total_conn_[{#TELEMT_USER}]'
- uuid: 84b8f22d891e49768891f497cac12fb3
name: 'User {#TELEMT_USER}: IPs'
graph_items:
- color: 0080FF
item:
host: Telemt
key: 'telemt.ips_current_[{#TELEMT_USER}]'
- color: FF8000
sortorder: '1'
item:
host: Telemt
key: 'telemt.ips_limit_[{#TELEMT_USER}]'
- color: AA00FF
sortorder: '2'
item:
host: Telemt
key: 'telemt.ips_utilization_[{#TELEMT_USER}]'
- uuid: 09dabe7125114e36a6ce40788a7cb888
name: 'User {#TELEMT_USER}: Traffic'
graph_items:
- color: 00AA00
item:
host: Telemt
key: 'telemt.octets_from_[{#TELEMT_USER}]'
- color: AA0000
sortorder: '1'
item:
host: Telemt
key: 'telemt.octets_to_[{#TELEMT_USER}]'
- uuid: 367f458962574b0ab3c02278a4cd7ecb
name: 'User {#TELEMT_USER}: Messages'
graph_items:
- color: 00AAFF
item:
host: Telemt
key: 'telemt.msgs_from_[{#TELEMT_USER}]'
- color: FF5500
sortorder: '1'
item:
host: Telemt
key: 'telemt.msgs_to_[{#TELEMT_USER}]'
master_item:
key: telemt.prom_metrics
lld_macro_paths:
@ -1177,3 +1231,206 @@ zabbix_export:
tags:
- tag: target
value: Telemt
graphs:
- uuid: f162658049ca4f50893c5cc02515ff10
name: 'Telemt: Server Connections Overview'
graph_items:
- color: 1A7C11
item:
host: Telemt
key: telemt.conn_total
- color: F63100
sortorder: '1'
item:
host: Telemt
key: telemt.conn_bad_total
- color: FC6EA3
sortorder: '2'
item:
host: Telemt
key: telemt.handshake_timeouts_total
- uuid: 759eca5e687142f19248f9d9343e1adf
name: 'Telemt: Uptime'
graph_items:
- color: 0080FF
item:
host: Telemt
key: telemt.uptime
- uuid: 0a27dbd0490d4a508c03ed39fa18545d
name: 'Telemt: ME Keepalive'
graph_items:
- color: 1A7C11
item:
host: Telemt
key: telemt.me_keepalive_sent_total
- color: 00AA00
sortorder: '1'
item:
host: Telemt
key: telemt.me_keepalive_pong_total
- color: F63100
sortorder: '2'
item:
host: Telemt
key: telemt.me_keepalive_failed_total
- color: FF8000
sortorder: '3'
item:
host: Telemt
key: telemt.me_keepalive_timeout_total
- uuid: 4015e24ff70b49f484e884d1dde687c0
name: 'Telemt: ME Reconnects'
graph_items:
- color: 0080FF
item:
host: Telemt
key: telemt.me_reconnect_attempts_total
- color: 1A7C11
sortorder: '1'
item:
host: Telemt
key: telemt.me_reconnect_success_total
- uuid: f3e3eeb0663c471aa26cf4b6872b0c50
name: 'Telemt: ME Route Drops'
graph_items:
- color: F63100
item:
host: Telemt
key: telemt.me_route_drop_channel_closed_total
- color: FF8000
sortorder: '1'
item:
host: Telemt
key: telemt.me_route_drop_no_conn_total
- color: AA00FF
sortorder: '2'
item:
host: Telemt
key: telemt.me_route_drop_queue_full_total
- uuid: 49b51ed78a5943bdbd6d1d34fe28bf61
name: 'Telemt: ME Writer Pool'
graph_items:
- color: 0080FF
item:
host: Telemt
key: telemt.pool_drain_active
- color: F63100
sortorder: '1'
item:
host: Telemt
key: telemt.pool_force_close_total
- color: FF8000
sortorder: '2'
item:
host: Telemt
key: telemt.pool_stale_pick_total
- color: 1A7C11
sortorder: '3'
item:
host: Telemt
key: telemt.pool_swap_total
- uuid: a0779e6c979f4c1ab7ac4da7123a5ecb
name: 'Telemt: ME Writer Removals and Restores'
graph_items:
- color: F63100
item:
host: Telemt
key: telemt.me_writer_removed_total
- color: FF8000
sortorder: '1'
item:
host: Telemt
key: telemt.me_writer_removed_unexpected_total
- color: FFAA00
sortorder: '2'
item:
host: Telemt
key: telemt.me_writer_removed_unexpected_minus_restored_total
- color: 1A7C11
sortorder: '3'
item:
host: Telemt
key: telemt.me_writer_restored_same_endpoint_total
- color: 00AA00
sortorder: '4'
item:
host: Telemt
key: telemt.me_writer_restored_fallback_total
- uuid: 4fead70290664953b026a228108bee0e
name: 'Telemt: Desync Detections'
graph_items:
- color: F63100
item:
host: Telemt
key: telemt.desync_total
- color: 1A7C11
sortorder: '1'
item:
host: Telemt
key: telemt.desync_full_logged_total
- color: FF8000
sortorder: '2'
item:
host: Telemt
key: telemt.desync_suppressed_total
- uuid: 9f8c9f48cb534a66ac21b1bba1acb602
name: 'Telemt: Upstream Connect Cycles'
graph_items:
- color: 0080FF
item:
host: Telemt
key: telemt.upstream_connect_attempt_total
- color: 1A7C11
sortorder: '1'
item:
host: Telemt
key: telemt.upstream_connect_success_total
- color: F63100
sortorder: '2'
item:
host: Telemt
key: telemt.upstream_connect_fail_total
- color: FF8000
sortorder: '3'
item:
host: Telemt
key: telemt.upstream_connect_failfast_hard_error_total
- uuid: 05182057727547f8b8884b7e71e34f19
name: 'Telemt: ME Single-Endpoint Outages'
graph_items:
- color: F63100
item:
host: Telemt
key: telemt.me_single_endpoint_outage_enter_total
- color: 1A7C11
sortorder: '1'
item:
host: Telemt
key: telemt.me_single_endpoint_outage_exit_total
- color: 0080FF
sortorder: '2'
item:
host: Telemt
key: telemt.me_single_endpoint_outage_reconnect_attempt_total
- color: 00AA00
sortorder: '3'
item:
host: Telemt
key: telemt.me_single_endpoint_outage_reconnect_success_total
- uuid: 6892e8b7fbd2445d9ccc0574af58a354
name: 'Telemt: ME Refill Activity'
graph_items:
- color: 0080FF
item:
host: Telemt
key: telemt.me_refill_triggered_total
- color: F63100
sortorder: '1'
item:
host: Telemt
key: telemt.me_refill_failed_total
- color: FF8000
sortorder: '2'
item:
host: Telemt
key: telemt.me_refill_skipped_inflight_total