Compare commits


96 Commits
3.3.31 ... main

Author SHA1 Message Date
Alexey 4d87a790cc
Merge pull request #626 from vladon/fix/strip-release-binaries
[codex] Strip release binaries before packaging
2026-04-05 21:12:07 +03:00
Alexey 07fed8f871
Merge pull request #632 from SysAdminKo/main
Update CONFIG_PARAMS documentation
2026-04-05 21:10:58 +03:00
Alexey 407d686d49
Merge pull request #638 from Dimasssss/patch-1
Update install.sh - add port availability check and new CLI arguments + update QUICK_START_GUIDE - add CAP_NET_ADMIN Service
2026-04-05 21:06:29 +03:00
Dimasssss eac5cc81fb
Update QUICK_START_GUIDE.ru.md 2026-04-05 18:53:16 +03:00
Dimasssss c51d16f403
Update QUICK_START_GUIDE.en.md 2026-04-05 18:53:06 +03:00
Dimasssss b5146bba94
Update install.sh 2026-04-05 18:43:08 +03:00
SysAdminKo 5ed525fa48
Add server.conntrack_control configuration section with detailed parameters and descriptions
This update introduces a new section in the configuration documentation for `server.conntrack_control`, outlining various parameters such as `inline_conntrack_control`, `mode`, `backend`, `profile`, `hybrid_listener_ips`, `pressure_high_watermark_pct`, `pressure_low_watermark_pct`, and `delete_budget_per_sec`. Each parameter includes constraints, descriptions, and examples to assist users in configuring conntrack control effectively.
2026-04-05 18:05:13 +03:00
Олегсей Бреднев 9f7c1693ce
Merge branch 'telemt:main' into main 2026-04-05 17:42:08 +03:00
Dimasssss 1524396e10
Update install.sh
New command-line arguments:
-d, --domain : TLS domain (default: petrovich.ru)
-p, --port : Server port (default: 443)
-s, --secret : User secret (32 hex characters)
-a, --ad-tag : Set ad_tag

⚠️ If these flags are passed at launch, they replace the previously saved values.
2026-04-05 17:32:21 +03:00
Alexey e630ea0045
Bump 2026-04-05 17:31:48 +03:00
Alexey 4574e423c6
New Relay Methods + Conntrack Control + Cleanup Methods for Memory + Buffer Pool Trim + Shrink Session Vec + ME2DC Fast for unstoppable init + Config Fallback + Working Directory Setup + Logging fixes with --syslog: merge pull request #637 from telemt/flow
New Relay Methods + Conntrack Control + Cleanup Methods for Memory + Buffer Pool Trim + Shrink Session Vec + ME2DC Fast for unstoppable init + Config Fallback + Working Directory Setup + Logging fixes with --syslog
2026-04-05 17:30:43 +03:00
Alexey 5f5582865e
Rustfmt 2026-04-05 17:23:40 +03:00
Alexey 1f54e4a203
Logging fixes with --syslog
Co-Authored-By: brekotis <93345790+brekotis@users.noreply.github.com>
2026-04-05 17:21:47 +03:00
Alexey defa37da05
Merge pull request #636 from Dimasssss/patch-3
Update install.sh - add x86_64-v3 support + Add -d/--domain argument
2026-04-05 15:38:26 +03:00
Dimasssss 5fd058b6fd
Update install.sh - Add -d/--domain
**Example usage:**
`./install.sh -d example.com`
`./install.sh --domain example.com`
2026-04-05 14:49:31 +03:00
Alexey 977ee53b72
Config Fallback + Working Directory Setup
Co-Authored-By: brekotis <93345790+brekotis@users.noreply.github.com>
2026-04-05 14:40:17 +03:00
Dimasssss 5b11522620
Update install.sh 2026-04-05 13:26:52 +03:00
Alexey 8fe6fcb7eb
ME2DC Fast for unstoppable init 2026-04-05 13:10:35 +03:00
Alexey 486e439ae6
Update Cargo.toml + Cargo.lock 2026-04-05 12:19:24 +03:00
SysAdminKo 444a20672d
Refine CONFIG_PARAMS documentation by updating default values to use a dash (—) for optional parameters instead of null. Adjust constraints for clarity, ensuring all types are accurately represented as required. Enhance descriptions for better understanding of configuration options. 2026-04-04 21:56:24 +03:00
Alexey 8e7b27a16d
Deleting Kilocode 2026-04-04 18:10:09 +03:00
Alexey 7f0057acd7
Conntrack Control Method
Co-Authored-By: brekotis <93345790+brekotis@users.noreply.github.com>
2026-04-04 11:28:32 +03:00
Alexey 7fe38f1b9f
Merge pull request #627 from DavidOsipov/flow
Phases 1 and 2 fully completed
2026-04-04 18:40:03 +03:00
Alexey c2f16a343a
Update README.md 2026-04-03 19:13:57 +03:00
David Osipov 6ea867ce36
Phase 2 implemented with additional guards 2026-04-03 02:08:59 +04:00
Vlad Yaroslavlev d673935b6d
Merge branch 'main' into fix/strip-release-binaries 2026-04-03 00:35:20 +03:00
Vladislav Yaroslavlev 363b5014f7
Strip release binaries before packaging 2026-04-03 00:17:43 +03:00
Alexey bb6237151c
Update README.md 2026-04-03 00:06:34 +03:00
David Osipov a9f695623d
Implementation plan + Phase 1 finished 2026-04-02 20:08:47 +04:00
David Osipov 5c29870632
Update dependencies in Cargo.lock to latest versions 2026-04-02 13:21:56 +04:00
Alexey f6704d7d65
Update README.md 2026-04-02 10:59:19 +03:00
Alexey 3d20002e56
Update README.md 2026-04-02 10:58:50 +03:00
Alexey 8fcd0fa950
Merge pull request #618 from SysAdminKo/main
Rework of CONFIG_PARAMS documentation
2026-04-01 17:24:22 +03:00
SysAdminKo 645e968778
Enhance CONFIG_PARAMS documentation with AI-assisted notes and detailed parameter descriptions. Update formatting for clarity and include examples for key configuration options. 2026-04-01 16:04:11 +03:00
Alexey b46216d357
Update README.md 2026-04-01 11:52:13 +03:00
Alexey 8ac1a0017d
Update Cargo.toml 2026-03-31 23:17:30 +03:00
Alexey 3df274caa6
Rustfmt 2026-03-31 19:42:07 +03:00
Alexey 780546a680
Memory Consumption in Stats and Metrics
Co-Authored-By: brekotis <93345790+brekotis@users.noreply.github.com>
2026-03-31 19:37:29 +03:00
Alexey 729ffa0fcd
Shrink Session Vec
Co-Authored-By: brekotis <93345790+brekotis@users.noreply.github.com>
2026-03-31 19:29:47 +03:00
Alexey e594d6f079
Buffer Pool Trim
Co-Authored-By: brekotis <93345790+brekotis@users.noreply.github.com>
2026-03-31 19:22:36 +03:00
Alexey ecd6a19246
Cleanup Methods for Memory Consistency
Co-Authored-By: brekotis <93345790+brekotis@users.noreply.github.com>
2026-03-31 18:40:04 +03:00
Alexey 2df6b8704d
BSD Support + Active IP in API + Timeouts tuning + Apple/XNU Connectivity fixes + Admission-timeouts + Global Each TCP Connections: merge pull request #611 from telemt/flow
BSD Support + Active IP in API + Timeouts tuning + Apple/XNU Connectivity fixes + Admission-timeouts + Global Each TCP Connections
2026-03-31 13:10:31 +03:00
Alexey 5f5a046710
Update Cargo.toml + Cargo.lock 2026-03-31 13:04:24 +03:00
Alexey 2dc81ad0e0
API Consistency fixes
Co-Authored-By: brekotis <93345790+brekotis@users.noreply.github.com>
2026-03-31 13:03:05 +03:00
Alexey d8d8534cf8
Update masking_ab_envelope_blur_integration_security_tests.rs 2026-03-31 12:30:43 +03:00
Alexey 6c850e4150
Update Cargo.toml 2026-03-31 11:15:31 +03:00
Alexey b8cf596e7d
Admission-timeouts + Global Each TCP Connections
Co-Authored-By: brekotis <93345790+brekotis@users.noreply.github.com>
2026-03-31 11:14:55 +03:00
Alexey 5bf56b6dd8
Update Cargo.toml 2026-03-30 23:36:45 +03:00
Alexey 65da1f91ec
Drafting fixes for Apple/XNU Darwin Connectivity issues
Co-Authored-By: Aleksandr Kalashnikov <33665156+sleep3r@users.noreply.github.com>
2026-03-30 23:35:41 +03:00
Alexey f3e9d00132
Merge pull request #605 from telemt/readme
Readme
2026-03-29 11:52:44 +03:00
Alexey dee6e13fef
Update CONTRIBUTING.md 2026-03-29 01:51:51 +03:00
Alexey 07d774a82a
Merge pull request #595 from xaosproxy/fix/apply-tg-connect-timeout-upstream
Apply [timeouts] tg_connect to upstream DC TCP connect attempts
2026-03-28 21:14:51 +03:00
Roman Martynov 618bc7e0b6
Merge branch 'flow' into fix/apply-tg-connect-timeout-upstream 2026-03-28 14:27:47 +03:00
sintanial d06ac222d6
fix: move tg_connect to general, rustfmt upstream, fix UpstreamManager::new tests
- Relocate tg_connect from [timeouts] to [general] with validation and docs updates.
- Apply rustfmt to per-attempt upstream connect timeout expression in upstream.rs.
- Pass tg_connect_timeout_secs in all UpstreamManager::new test call sites.
- Wire hot reload and runtime snapshot to general.tg_connect.
2026-03-28 14:25:18 +03:00
Alexey 567453e0f8
Merge pull request #596 from xaosproxy/fix/listen_backlog
feat(server): configurable TCP listen_backlog
2026-03-28 12:28:19 +03:00
Alexey cba837745b
Merge pull request #599 from Dimasssss/main
Update FAQ
2026-03-28 12:28:04 +03:00
Dimasssss 876c8f1612
Update FAQ.en.md 2026-03-27 22:26:21 +03:00
Dimasssss ac8ad864be
Update FAQ.ru.md 2026-03-27 22:26:07 +03:00
Alexey fe56dc7c1a
Update README.md 2026-03-27 14:13:08 +03:00
sintanial 96ae01078c
feat(server): configurable TCP listen_backlog
Add [server].listen_backlog (default 1024) for client-facing listen(2)
queue size; use the same value for metrics HTTP listeners. Hot reload
logs restart-required when this field changes.
2026-03-27 12:49:53 +03:00
sintanial 3b9919fa4d
Apply [timeouts] tg_connect to upstream DC TCP connect attempts
Wire config.timeouts.tg_connect into UpstreamManager; per-attempt timeout uses
the same .max(1) pattern as connect_budget_ms.

Reject timeouts.tg_connect = 0 at config load (consistent with
general.upstream_connect_budget_ms and related checks). Default when the key
is omitted remains default_connect_timeout() via serde.

Fixes telemt/telemt#439
2026-03-27 12:45:19 +03:00
Alexey 6c4a3b59f9
Merge pull request #515 from vkrivopalov/daemonize
Support running TeleMT as a background system service
2026-03-27 11:36:02 +03:00
Alexey 01c3d0a707
Merge branch 'flow' into daemonize 2026-03-27 11:35:52 +03:00
Alexey fbee4631d6
Merge pull request #588 from amirotin/feat/active-ips-endpoint
feat(api): add GET /v1/stats/users/active-ips endpoint
2026-03-26 11:12:43 +03:00
Mirotin Artem d0b52ea299
Merge branch 'main' into feat/active-ips-endpoint 2026-03-26 10:00:47 +03:00
Mirotin Artem 677195e587
feat(api): add GET /v1/stats/users/active-ips endpoint
Lightweight endpoint that returns only users with active TCP connections
and their IP addresses. Calls only get_active_ips_for_users() without
collecting recent IPs or building full UserInfo, significantly reducing
CPU and memory overhead compared to /v1/stats/users.
2026-03-26 10:00:29 +03:00
Alexey a383efcb21
Bounded Hybrid Loop + Watch + Family ArcSwap Snapshots + Health in Parallel + ArcSwap Writers + Registry Split + Endpoint on ArcSwap + New Backpressure Model + ME Decomposition: merge pull request #586 from telemt/flow
Bounded Hybrid Loop + Watch + Family ArcSwap Snapshots + Health in Parallel + ArcSwap Writers + Registry Split + Endpoint on ArcSwap + New Backpressure Model + ME Decomposition
2026-03-26 02:31:18 +03:00
Alexey cb5753f77c
Update admission.rs
Co-Authored-By: brekotis <93345790+brekotis@users.noreply.github.com>
2026-03-26 02:19:35 +03:00
Alexey 7a075b2ffe
Middle Relay fixes
Co-Authored-By: brekotis <93345790+brekotis@users.noreply.github.com>
2026-03-26 02:18:39 +03:00
Alexey 7de822dd15
RPC Proxy-req fixes
Co-Authored-By: brekotis <93345790+brekotis@users.noreply.github.com>
2026-03-25 22:51:00 +03:00
Alexey 1bbf4584a6
Merge branch 'main' into flow 2026-03-25 22:25:58 +03:00
Alexey 70479c4094
Unexpected-only Quarantine
Co-Authored-By: brekotis <93345790+brekotis@users.noreply.github.com>
2026-03-25 22:25:39 +03:00
Alexey b94746a6e0
Dashmap-driven Routing + Health Parallel + Family Runtime State
Co-Authored-By: brekotis <93345790+brekotis@users.noreply.github.com>
2026-03-25 21:26:20 +03:00
Alexey ceae1564af
Floor Runtime + Writer Selection Policy + Reconnect/Warmup + TransportPolicy + NAT Runtime Cores
Co-Authored-By: brekotis <93345790+brekotis@users.noreply.github.com>
2026-03-25 20:55:20 +03:00
Alexey 7ce5fc66db
ME Reinit Core advancing + Binding Policy Core
Co-Authored-By: brekotis <93345790+brekotis@users.noreply.github.com>
2026-03-25 20:35:57 +03:00
Alexey 41493462a1
Drain + Single-Endpoint Runtime Cores
Co-Authored-By: brekotis <93345790+brekotis@users.noreply.github.com>
2026-03-25 20:29:22 +03:00
Alexey 6ee4d4648c
ME Health Core
Co-Authored-By: brekotis <93345790+brekotis@users.noreply.github.com>
2026-03-25 20:01:44 +03:00
Alexey 97f6649584
ME Route Runtime Core
Co-Authored-By: brekotis <93345790+brekotis@users.noreply.github.com>
2026-03-25 19:56:25 +03:00
Alexey dc6b6d3f9d
ME Writer Lifecycle Core
Co-Authored-By: brekotis <93345790+brekotis@users.noreply.github.com>
2026-03-25 19:47:41 +03:00
Alexey 1c3e0d4e46
ME Reinit Core
Co-Authored-By: brekotis <93345790+brekotis@users.noreply.github.com>
2026-03-25 19:43:02 +03:00
Alexey 0b78583cf5
ME Routing Core
Co-Authored-By: brekotis <93345790+brekotis@users.noreply.github.com>
2026-03-25 18:18:06 +03:00
Alexey 28d318d724
ME Writer Task Consolidation
Co-Authored-By: brekotis <93345790+brekotis@users.noreply.github.com>
2026-03-25 17:59:54 +03:00
Alexey 70c2f0f045
RoutingTable + BindingState
Co-Authored-By: brekotis <93345790+brekotis@users.noreply.github.com>
2026-03-25 17:50:44 +03:00
Alexey b9b1271f14
Merge pull request #584 from Dimasssss/patch-3
Update CONFIG_PARAMS, QUICK_START_GUIDE and FAQ
2026-03-25 17:44:59 +03:00
Dimasssss 3c734bd811
Update FAQ.en.md 2026-03-25 17:42:16 +03:00
Dimasssss 6391df0583
Update FAQ.ru.md 2026-03-25 17:42:07 +03:00
Dimasssss 6a781c8bc3
Update QUICK_START_GUIDE.en.md 2026-03-25 17:40:45 +03:00
Dimasssss 138652af8e
Update QUICK_START_GUIDE.ru.md 2026-03-25 17:40:16 +03:00
Dimasssss 59157d31a6
Update CONFIG_PARAMS.en.md 2026-03-25 17:37:01 +03:00
Alexey 8bab3f70e1
WritersState on ArcSwap + Preferred Endpoint on ArcSwap + Two-map Rotation for Desync Dedup
Co-Authored-By: brekotis <93345790+brekotis@users.noreply.github.com>
2026-03-25 17:25:35 +03:00
Alexey 41d786cc11
Safety Gates Invariants + HybridAsyncPersistent + Watch + Runtime Snapshots + ME Writer Ping Tracker + Parallel Recovery + Backpressure Guardrails
Co-Authored-By: brekotis <93345790+brekotis@users.noreply.github.com>
2026-03-25 16:29:35 +03:00
Vladimir Krivopalov 95685adba7
Add multi-destination logging: syslog and file support
Implement logging infrastructure for non-systemd platforms:

- Add src/logging.rs with syslog and file logging support
- New CLI flags: --syslog, --log-file, --log-file-daily
- Syslog uses libc directly with LOG_DAEMON facility
- File logging via tracing-appender with optional daily rotation

Update service scripts:
- OpenRC and FreeBSD rc.d now use --syslog by default
- Ensures logs are captured on platforms without journald

Default (stderr) behavior unchanged for systemd compatibility.
Log destination is selected at startup based on CLI flags.

Signed-off-by: Vladimir Krivopalov <argenet@yandex.ru>
2026-03-21 21:09:29 +02:00
Vladimir Krivopalov 909714af31
Add multi-platform service manager integration
Implement automatic init system detection and service file generation
for systemd, OpenRC (Alpine/Gentoo), and FreeBSD rc.d:

- Add src/service module with init system detection and generators
- Auto-detect init system via filesystem probes
- Generate platform-appropriate service files during --init

systemd enhancements:
- ExecReload for SIGHUP config reload
- PIDFile directive
- Comprehensive security hardening (ProtectKernelTunables,
  RestrictAddressFamilies, MemoryDenyWriteExecute, etc.)
- CAP_NET_BIND_SERVICE for privileged ports

OpenRC support:
- Standard openrc-run script with depend/reload functions
- Directory setup in start_pre

FreeBSD rc.d support:
- rc.subr integration with rc.conf variables
- reload extra command

The --init command now detects the init system and runs the
appropriate enable/start commands (systemctl, rc-update, sysrc).

Signed-off-by: Vladimir Krivopalov <argenet@yandex.ru>
2026-03-21 21:09:29 +02:00
Vladimir Krivopalov dc2b4395bd
Add daemon lifecycle subcommands: start, stop, reload, status
Implement CLI subcommands for managing telemt as a daemon:

- `start [config.toml]` - Start as background daemon (implies --daemon)
- `stop` - Stop running daemon by sending SIGTERM
- `reload` - Reload configuration by sending SIGHUP
- `status` - Check if daemon is running via PID file

Subcommands use the PID file (default /var/run/telemt.pid) to locate
the running daemon. Stop command waits up to 10 seconds for graceful
shutdown. Status cleans up stale PID files automatically.

Updated help text with subcommand documentation and usage examples.
Exit codes follow Unix convention: 0 for success, 1 for not running
or error.

Signed-off-by: Vladimir Krivopalov <argenet@yandex.ru>
2026-03-21 21:09:29 +02:00
Vladimir Krivopalov 39875afbff
Add comprehensive Unix signal handling for daemon mode
Enhance signal handling to support proper daemon operation:

- SIGTERM: Graceful shutdown (same behavior as SIGINT)
- SIGQUIT: Graceful shutdown with full statistics dump
- SIGUSR1: Log rotation acknowledgment for external tools
- SIGUSR2: Dump runtime status to log without stopping

Statistics dump includes connection counts, ME keepalive metrics,
and relay adaptive tuning counters. SIGHUP config reload unchanged
(handled in hot_reload.rs).

Signals are handled via tokio::signal::unix with async select!
to avoid blocking the runtime. Non-shutdown signals (USR1/USR2)
run in a background task spawned at startup.

Signed-off-by: Vladimir Krivopalov <argenet@yandex.ru>
2026-03-21 21:09:29 +02:00
Vladimir Krivopalov 2ea7813ed4
Add Unix daemon mode with PID file and privilege dropping
Implement core daemon infrastructure for running telemt as a background
  service on Unix platforms (Linux, FreeBSD, etc.):

  - Add src/daemon module with classic double-fork daemonization
  - Implement flock-based PID file management to prevent duplicate instances
  - Add privilege dropping (setuid/setgid) after socket binding
  - New CLI flags: --daemon, --foreground, --pid-file, --run-as-user,
    --run-as-group, --working-dir

  Daemonization occurs before tokio runtime starts to ensure clean fork.
  PID file uses exclusive locking to detect already-running instances.
  Privilege dropping happens after bind_listeners() to allow binding
  to privileged ports (< 1024) before switching to unprivileged user.

Signed-off-by: Vladimir Krivopalov <argenet@yandex.ru>
2026-03-21 21:09:29 +02:00
127 changed files with 18506 additions and 3374 deletions


@@ -0,0 +1,126 @@
# Architecture Directives
> Companion to `Agents.md`. These are **activation directives**, not tutorials.
> You already know these patterns — apply them. When making any structural or
> design decision, run the relevant section below as a checklist.
---
## 1. Active Principles (always on)
Apply these on every non-trivial change. No exceptions.
- **SRP** — one reason to change per component. If you can't name the responsibility in one noun phrase, split it.
- **OCP** — extend by adding, not by modifying. New variants/impls over patching existing logic.
- **ISP** — traits stay minimal. More than ~5 methods is a split signal.
- **DIP** — high-level modules depend on traits, not concrete types. Infrastructure implements domain traits; it does not own domain logic.
- **DRY** — one authoritative source per piece of knowledge. Copies are bugs that haven't diverged yet.
- **YAGNI** — generic parameters, extension hooks, and pluggable strategies require an *existing* concrete use case, not a hypothetical one.
- **KISS** — two equivalent designs: choose the one with fewer concepts. Justify complexity; never assume it.
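The DIP and SRP directives above can be sketched in a few lines. This is an illustrative example only — `SecretStore`, `InMemoryStore`, and `authorize` are hypothetical names, not types from the telemt codebase.

```rust
use std::collections::HashMap;

// Domain trait: high-level logic depends on this abstraction only (DIP).
pub trait SecretStore {
    fn secret_for(&self, user: &str) -> Option<String>;
}

// Infrastructure implements the domain trait at the boundary.
// One responsibility: storing secrets (SRP).
pub struct InMemoryStore {
    entries: HashMap<String, String>,
}

impl InMemoryStore {
    pub fn new() -> Self {
        Self { entries: HashMap::new() }
    }
    pub fn insert(&mut self, user: &str, secret: &str) {
        self.entries.insert(user.to_string(), secret.to_string());
    }
}

impl SecretStore for InMemoryStore {
    fn secret_for(&self, user: &str) -> Option<String> {
        self.entries.get(user).cloned()
    }
}

// Domain logic: no I/O, no knowledge of the concrete store type.
pub fn authorize(store: &dyn SecretStore, user: &str, presented: &str) -> bool {
    store.secret_for(user).map_or(false, |s| s == presented)
}
```

Swapping `InMemoryStore` for a file- or network-backed store is then an addition (OCP), not a modification of `authorize`.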
---
## 2. Layered Architecture
Dependencies point **inward only**: `Presentation → Application → Domain ← Infrastructure`.
- Domain layer: zero I/O. No network, no filesystem, no async runtime imports.
- Infrastructure: implements domain traits at the boundary. Never leaks SDK/wire types inward.
- Anti-Corruption Layer (ACL): all third-party and external-protocol types are translated here. If the external format changes, only the ACL changes.
- Presentation: translates wire/HTTP representations to domain types and back. Nothing else.
---
## 3. Design Pattern Selection
Apply the right pattern. Do not invent a new abstraction when a named pattern fits.
| Situation | Pattern to apply |
|---|---|
| Struct with 3+ optional/dependent fields | **Builder** — `build()` returns `Result`, never panics |
| Cross-cutting behavior (logging, retry, metrics) on a trait impl | **Decorator** — implements same trait, delegates all calls |
| Subsystem with multiple internal components | **Façade** — single public entry point, internals are `pub(crate)` |
| Swappable algorithm or policy | **Strategy** — trait injection; generics for compile-time, `dyn` for runtime |
| Component notifying decoupled consumers | **Observer** — typed channels (`broadcast`, `watch`), not callback `Vec<Box<dyn Fn>>` |
| Exclusive mutable state serving concurrent callers | **Actor** — `mpsc` command channel + `oneshot` reply; no lock needed on state |
| Finite state with invalid transition prevention | **Typestate** — distinct types per state; invalid ops are compile errors |
| Fixed process skeleton with overridable steps | **Template Method** — defaulted trait method calls required hooks |
| Request pipeline with independent handlers | **Chain/Middleware** — generic compile-time chain for hot paths, `dyn` for runtime assembly |
| Hiding a concrete type behind a trait | **Factory Function** — returns `Box<dyn Trait>` or `impl Trait` |
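The Actor row can be sketched with the standard library alone. The doc's `mpsc` + `oneshot` shape refers to tokio channels; this std-only analogue (hypothetical `Cmd`/`spawn_counter` names) replaces `oneshot` with a capacity-1 `sync_channel` per request.

```rust
use std::sync::mpsc;
use std::thread;

// Commands carry their own reply channel, mirroring mpsc + oneshot.
enum Cmd {
    Incr(u64),
    Get(mpsc::SyncSender<u64>),
}

fn spawn_counter() -> mpsc::SyncSender<Cmd> {
    // Bound rationale (illustrative): absorbs a small command burst
    // without letting a slow actor grow an unbounded queue.
    let (tx, rx) = mpsc::sync_channel::<Cmd>(64);
    thread::spawn(move || {
        // The actor owns its state exclusively; callers never take a lock.
        let mut count: u64 = 0;
        for cmd in rx {
            match cmd {
                Cmd::Incr(n) => count += n,
                Cmd::Get(reply) => {
                    let _ = reply.send(count);
                }
            }
        }
    });
    tx
}

fn get(tx: &mpsc::SyncSender<Cmd>) -> u64 {
    // Capacity 1: the caller blocks on recv, so send never blocks.
    let (reply_tx, reply_rx) = mpsc::sync_channel(1);
    tx.send(Cmd::Get(reply_tx)).expect("actor alive");
    reply_rx.recv().expect("actor replies")
}
```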
---
## 4. Data Modeling Rules
- **Make illegal states unrepresentable.** Type system enforces invariants; runtime validation is a second line, not the first.
- **Newtype every primitive** that carries domain meaning. `SessionId(u64)` vs `UserId(u64)` — the compiler enforces it.
- **Enums over booleans** for any parameter or field with two or more named states.
- **Typed error enums** with named variants carrying full diagnostic context. `anyhow` is application-layer only; never in library code.
- **Domain types carry no I/O concerns.** No `serde`, no codec, no DB derives on domain structs. Conversions via `From`/`TryFrom` at layer boundaries.
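A minimal sketch of the newtype, enum-over-boolean, and typed-error rules together — all names here are illustrative, not from the actual tree.

```rust
// Newtypes: the compiler rejects passing a UserId where a SessionId is expected.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct SessionId(pub u64);

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct UserId(pub u64);

// Enum over a boolean: call sites read Transport::Direct, not `true`.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Transport {
    Direct,
    MiddleProxy,
}

// Typed error with named variants carrying diagnostic context.
#[derive(Debug, PartialEq)]
pub enum SessionError {
    UnknownUser(UserId),
    Closed(SessionId),
}

pub fn route(t: Transport) -> &'static str {
    match t {
        Transport::Direct => "dc",
        Transport::MiddleProxy => "me",
    }
}
```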
---
## 5. Concurrency Rules
- Prefer message-passing over shared memory. Shared state is a fallback.
- All channels must be **bounded**. Document the bound's rationale inline.
- Never hold a lock across an `await` unless atomicity explicitly requires it — document why.
- Document lock acquisition order wherever two locks are taken together.
- Every `async fn` is cancellation-safe unless explicitly documented otherwise. Mutate shared state *after* the `await` that may be cancelled, not before.
- High-read/low-write state: use `arc-swap` or `watch` for lock-free reads.
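The high-read/low-write rule can be approximated with the standard library: readers clone an `Arc` snapshot under a brief read lock, writers swap the whole `Arc`. The `arc-swap` crate removes even that brief lock; this std-only `Snapshot` type is a hypothetical stand-in showing the same semantics.

```rust
use std::sync::{Arc, RwLock};

// Readers get an immutable snapshot; later stores never mutate it.
pub struct Snapshot<T> {
    inner: RwLock<Arc<T>>,
}

impl<T> Snapshot<T> {
    pub fn new(value: T) -> Self {
        Self { inner: RwLock::new(Arc::new(value)) }
    }

    // Cheap: clones the Arc, not the value.
    pub fn load(&self) -> Arc<T> {
        // unwrap: poisoned only if a writer panicked; fine for a sketch.
        self.inner.read().unwrap().clone()
    }

    // Writers replace the whole value atomically from the readers' view.
    pub fn store(&self, value: T) {
        *self.inner.write().unwrap() = Arc::new(value);
    }
}
```

A reader that loaded a snapshot before a `store` keeps seeing the old value — exactly the hot-reload behavior a `watch`/`arc-swap` setup provides.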
---
## 6. Error Handling Rules
- Errors translated at every layer boundary — low-level errors never surface unmodified.
- Add context at the propagation site: what operation failed and where.
- No `unwrap()`/`expect()` in production paths without a comment proving `None`/`Err` is impossible.
- Panics are only permitted in: tests, startup/init unrecoverable failure, and `unreachable!()` with an invariant comment.
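Boundary translation with added context can be sketched as follows (illustrative types; a real boundary would wrap `std::io::Error` rather than this toy `SocketError`):

```rust
use std::fmt;

// Low-level infrastructure error (stand-in for an io::Error).
#[derive(Debug)]
pub struct SocketError(pub &'static str);

// Domain-level error: translated at the boundary, with operation context.
#[derive(Debug)]
pub enum RelayError {
    Upstream { op: &'static str, cause: SocketError },
}

impl fmt::Display for RelayError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            RelayError::Upstream { op, cause } => {
                write!(f, "upstream {} failed: {}", op, cause.0)
            }
        }
    }
}

// The boundary adds what failed and where; the raw SocketError
// never surfaces unmodified.
pub fn connect_upstream(fail: bool) -> Result<(), RelayError> {
    let low_level: Result<(), SocketError> =
        if fail { Err(SocketError("connection refused")) } else { Ok(()) };
    low_level.map_err(|cause| RelayError::Upstream { op: "connect", cause })
}
```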
---
## 7. API Design Rules
- **CQS**: functions that return data must not mutate; functions that mutate return only `Result`.
- **Least surprise**: a function does exactly what its name implies. Side effects are documented.
- **Idempotency**: `close()`, `shutdown()`, `unregister()` called twice must not panic or error.
- **Fallibility at the type level**: failure → `Result<T, E>`. No sentinel values.
- **Minimal public surface**: default to `pub(crate)`. Mark `pub` only deliberate API. Re-export through a single surface in `mod.rs`.
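CQS and idempotency combine naturally: queries read, commands mutate and return `Result`, and a second `close()` is a silent no-op. A hypothetical `Listener` sketch (the uninhabited `CloseError` encodes "cannot fail" in the type):

```rust
// Uninhabited error type: close() can never actually fail,
// but the signature stays uniform with fallible commands.
#[derive(Debug)]
pub enum CloseError {}

pub struct Listener {
    open: bool,
}

impl Listener {
    pub fn new() -> Self {
        Self { open: true }
    }

    // Query: returns data, no side effects (CQS).
    pub fn is_open(&self) -> bool {
        self.open
    }

    // Command: mutates, returns only Result; calling twice is safe.
    pub fn close(&mut self) -> Result<(), CloseError> {
        self.open = false; // already-closed case: harmless no-op
        Ok(())
    }
}
```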
---
## 8. Performance Rules (hot paths)
- Annotate hot-path functions with `// HOT PATH: <throughput requirement>`.
- Zero allocations per operation in hot paths after initialization. Preallocate in constructors, reuse buffers.
- Pass `&[u8]` / `Bytes` slices — not `Vec<u8>`. Use `BytesMut` for reusable mutable buffers.
- No `String` formatting in hot paths. No logging without a rate-limit or sampling gate.
- Any allocation in a hot path gets a comment: `// ALLOC: <reason and size>`.
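A sketch of the preallocate-and-reuse discipline, using a plain `Vec<u8>` in place of `BytesMut` so it stays std-only (the 4 KiB size and `FrameScratch` name are illustrative assumptions):

```rust
// HOT PATH (illustrative): per-frame transform at relay throughput.
pub struct FrameScratch {
    // ALLOC: one-time in the constructor, sized to the max frame (4 KiB here).
    buf: Vec<u8>,
}

impl FrameScratch {
    pub fn new() -> Self {
        Self { buf: vec![0u8; 4096] }
    }

    // Takes a slice, not a Vec: the caller keeps ownership, and this
    // method performs zero allocations per operation.
    pub fn xor_frame(&mut self, input: &[u8], key: u8) -> &[u8] {
        let n = input.len().min(self.buf.len());
        for i in 0..n {
            self.buf[i] = input[i] ^ key;
        }
        &self.buf[..n]
    }
}
```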
---
## 9. Testing Rules
- Bug fixes require a regression test that is **red before the fix, green after**. Name it after the bug.
- Property tests for: codec round-trips, state machine invariants, cryptographic protocol correctness.
- No shared mutable state between tests. Each test constructs its own environment.
- Test doubles hierarchy (simplest first): Fake → Stub → Spy → Mock. Mocks couple to implementation, not behavior — use sparingly.
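The codec round-trip rule, shown on a toy length-prefixed codec (hypothetical `encode`/`decode`; the real tree would run a property-testing crate over its actual frame codecs):

```rust
pub fn encode(payload: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(4 + payload.len());
    out.extend_from_slice(&(payload.len() as u32).to_be_bytes());
    out.extend_from_slice(payload);
    out
}

pub fn decode(frame: &[u8]) -> Option<Vec<u8>> {
    let len_bytes: [u8; 4] = frame.get(..4)?.try_into().ok()?;
    let len = u32::from_be_bytes(len_bytes) as usize;
    let body = frame.get(4..4 + len)?;
    // Reject trailing garbage: the frame must be exactly header + body.
    if frame.len() != 4 + len {
        return None;
    }
    Some(body.to_vec())
}

// Round-trip invariant: decode(encode(p)) == p for every payload.
pub fn round_trips(payload: &[u8]) -> bool {
    decode(&encode(payload)).as_deref() == Some(payload)
}
```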
---
## 10. Pre-Change Checklist
Run this before proposing or implementing any structural decision:
- [ ] Responsibility nameable in one noun phrase?
- [ ] Layer dependencies point inward only?
- [ ] Invalid states unrepresentable in the type system?
- [ ] State transitions gated through a single interface?
- [ ] All channels bounded?
- [ ] No locks held across `await` (or documented)?
- [ ] Errors typed and translated at layer boundaries?
- [ ] No panics in production paths without invariant proof?
- [ ] Hot paths annotated and allocation-free?
- [ ] Public surface minimal — only deliberate API marked `pub`?
- [ ] Correct pattern chosen from Section 3 table?


@@ -151,6 +151,14 @@ jobs:
mkdir -p dist
cp "target/${{ matrix.target }}/release/${{ env.BINARY_NAME }}" dist/telemt
if [ "${{ matrix.target }}" = "aarch64-unknown-linux-gnu" ]; then
STRIP_BIN=aarch64-linux-gnu-strip
else
STRIP_BIN=strip
fi
"${STRIP_BIN}" dist/telemt
cd dist
tar -czf "${{ matrix.asset }}.tar.gz" \
--owner=0 --group=0 --numeric-owner \
@@ -279,6 +287,14 @@ jobs:
mkdir -p dist
cp "target/${{ matrix.target }}/release/${{ env.BINARY_NAME }}" dist/telemt
if [ "${{ matrix.target }}" = "aarch64-unknown-linux-musl" ]; then
STRIP_BIN=aarch64-linux-musl-strip
else
STRIP_BIN=strip
fi
"${STRIP_BIN}" dist/telemt
cd dist
tar -czf "${{ matrix.asset }}.tar.gz" \
--owner=0 --group=0 --numeric-owner \


@@ -1,58 +0,0 @@
# Architect Mode Rules for Telemt
## Architecture Overview
```mermaid
graph TB
subgraph Entry
Client[Clients] --> Listener[TCP/Unix Listener]
end
subgraph Proxy Layer
Listener --> ClientHandler[ClientHandler]
ClientHandler --> Handshake[Handshake Validator]
Handshake --> |Valid| Relay[Relay Layer]
Handshake --> |Invalid| Masking[Masking/TLS Fronting]
end
subgraph Transport
Relay --> MiddleProxy[Middle-End Proxy Pool]
Relay --> DirectRelay[Direct DC Relay]
MiddleProxy --> TelegramDC[Telegram DCs]
DirectRelay --> TelegramDC
end
```
## Module Dependencies
- [`src/main.rs`](src/main.rs) - Entry point, spawns all async tasks
- [`src/config/`](src/config/) - Configuration loading with auto-migration
- [`src/error.rs`](src/error.rs) - Error types, must be used by all modules
- [`src/crypto/`](src/crypto/) - AES, SHA, random number generation
- [`src/protocol/`](src/protocol/) - MTProto constants, frame encoding, obfuscation
- [`src/stream/`](src/stream/) - Stream wrappers, buffer pool, frame codecs
- [`src/proxy/`](src/proxy/) - Client handling, handshake, relay logic
- [`src/transport/`](src/transport/) - Upstream management, middle-proxy, SOCKS support
- [`src/stats/`](src/stats/) - Statistics and replay protection
- [`src/ip_tracker.rs`](src/ip_tracker.rs) - Per-user IP tracking
## Key Architectural Constraints
### Middle-End Proxy Mode
- Requires public IP on interface OR 1:1 NAT with STUN probing
- Uses separate `proxy-secret` from Telegram (NOT user secrets)
- Falls back to direct mode automatically on STUN mismatch
### TLS Fronting
- Invalid handshakes are transparently proxied to `mask_host`
- This is critical for DPI evasion - do not change this behavior
- `mask_unix_sock` and `mask_host` are mutually exclusive
### Stream Architecture
- Buffer pool is shared globally via Arc - prevents allocation storms
- Frame codecs implement tokio-util Encoder/Decoder traits
- State machine in [`src/stream/state.rs`](src/stream/state.rs) manages stream transitions
### Configuration Migration
- [`ProxyConfig::load()`](src/config/mod.rs:641) mutates config in-place
- New fields must have sensible defaults
- DC203 override is auto-injected for CDN/media support


@@ -1,23 +0,0 @@
# Code Mode Rules for Telemt
## Error Handling
- Always use [`ProxyError`](src/error.rs:168) from [`src/error.rs`](src/error.rs) for proxy operations
- [`HandshakeResult<T,R,W>`](src/error.rs:292) returns streams on bad client - these MUST be returned for masking, never dropped
- Use [`Recoverable`](src/error.rs:110) trait to check if errors are retryable
## Configuration Changes
- [`ProxyConfig::load()`](src/config/mod.rs:641) auto-mutates config - new fields should have defaults
- DC203 override is auto-injected if missing - do not remove this behavior
- When adding config fields, add migration logic in [`ProxyConfig::load()`](src/config/mod.rs:641)
## Crypto Code
- [`SecureRandom`](src/crypto/random.rs) from [`src/crypto/random.rs`](src/crypto/random.rs) must be used for all crypto operations
- Never use `rand::thread_rng()` directly - use the shared `Arc<SecureRandom>`
## Stream Handling
- Buffer pool [`BufferPool`](src/stream/buffer_pool.rs) is shared via Arc - always use it instead of allocating
- Frame codecs in [`src/stream/frame_codec.rs`](src/stream/frame_codec.rs) implement tokio-util's Encoder/Decoder traits
## Testing
- Tests are inline in modules using `#[cfg(test)]`
- Use `cargo test --lib <module_name>` to run tests for specific modules


@@ -1,27 +0,0 @@
# Debug Mode Rules for Telemt
## Logging
- `RUST_LOG` environment variable takes absolute priority over all config log levels
- Log levels: `trace`, `debug`, `info`, `warn`, `error`
- Use `RUST_LOG=debug cargo run` for detailed operational logs
- Use `RUST_LOG=trace cargo run` for full protocol-level debugging
## Middle-End Proxy Debugging
- Set `ME_DIAG=1` environment variable for high-precision cryptography diagnostics
- STUN probe results are logged at startup - check for mismatch between local and reflected IP
- If Middle-End fails, check `proxy_secret_path` points to valid file from https://core.telegram.org/getProxySecret
## Connection Issues
- DC connectivity is logged at startup with RTT measurements
- If DC ping fails, check `dc_overrides` for custom addresses
- Use `prefer_ipv6=false` in config if IPv6 is unreliable
## TLS Fronting Issues
- Invalid handshakes are proxied to `mask_host` - check this host is reachable
- `mask_unix_sock` and `mask_host` are mutually exclusive - only one can be set
- If `mask_unix_sock` is set, socket must exist before connections arrive
## Common Errors
- `ReplayAttack` - client replayed a handshake nonce, potential attack
- `TimeSkew` - client clock is off, can disable with `ignore_time_skew=true`
- `TgHandshakeTimeout` - upstream DC connection failed, check network


@@ -1,19 +1,82 @@
# Issues - Rules
# Issues
## Warning
Before opening an Issue, if it is more of a question than a problem or a bug, ask about it [in our chat](https://t.me/telemtrs)
## What it is not
- NOT Question and Answer
- NOT Helpdesk
# Pull Requests - Rules
***Each Issue you open triggers attempts to reproduce and analyze the problem, which are done manually by people***
---
# Pull Requests
## General
- ONLY signed and verified commits
- ONLY from your name
- DO NOT commit with `codex` or `claude` as author/commiter
- DO NOT commit with `codex`, `claude`, or other AI tools as author/committer
- PREFER `flow` branch for development, not `main`
## AI
We are not against modern tools such as AI, where you act as the principal or architect, but we consider it important that:
---
- you really understand what you're doing
- you understand the relationships and dependencies of the components being modified
- you understand the architecture of Telegram MTProto, MTProxy, Middle-End KDF at least generically
- you DO NOT commit for the sake of commits, but to help the community, core-developers and ordinary users
## Definition of Ready (MANDATORY)
A Pull Request WILL be ignored or closed if:
- it does NOT build
- it does NOT pass tests
- it does NOT follow formatting rules
- it contains unrelated or excessive changes
- the author cannot clearly explain the change
---
## Blessed Principles
- PR must build
- PR must pass tests
- PR must be understood by author
---
## AI Usage Policy
AI tools (Claude, ChatGPT, Codex, DeepSeek, etc.) are allowed as **assistants**, NOT as decision-makers.
By submitting a PR, you confirm that:
- you fully understand the code you submit
- you verified correctness manually
- you reviewed architecture and dependencies
- you take full responsibility for the change
AI-generated code is treated as **draft** and must be validated like any other external contribution.
PRs that look like unverified AI dumps WILL be closed
---
## Maintainer Policy
Maintainers reserve the right to:
- close PRs that do not meet basic quality requirements
- request explanations before review
- ignore low-effort contributions
Respect the reviewers' time
---
## Enforcement
Pull Requests that violate project standards may be closed without review.
This includes (but is not limited to):
- non-building code
- failing tests
- unverified or low-effort changes
- inability to explain the change
These actions follow the Code of Conduct and are intended to preserve signal, quality, and Telemt's integrity

Cargo.lock (generated)

@ -183,9 +183,9 @@ dependencies = [
[[package]]
name = "aws-lc-sys"
version = "0.39.0"
version = "0.39.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1fa7e52a4c5c547c741610a2c6f123f3881e409b714cd27e6798ef020c514f0a"
checksum = "83a25cf98105baa966497416dbd42565ce3a8cf8dbfd59803ec9ad46f3126399"
dependencies = [
"cc",
"cmake",
@ -234,16 +234,16 @@ checksum = "843867be96c8daad0d758b57df9392b6d8d271134fce549de6ce169ff98a92af"
[[package]]
name = "blake3"
version = "1.8.3"
version = "1.8.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2468ef7d57b3fb7e16b576e8377cdbde2320c60e1491e961d11da40fc4f02a2d"
checksum = "4d2d5991425dfd0785aed03aedcf0b321d61975c9b5b3689c774a2610ae0b51e"
dependencies = [
"arrayref",
"arrayvec",
"cc",
"cfg-if",
"constant_time_eq",
"cpufeatures 0.2.17",
"cpufeatures 0.3.0",
]
[[package]]
@ -299,9 +299,9 @@ dependencies = [
[[package]]
name = "cc"
version = "1.2.57"
version = "1.2.58"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7a0dd1ca384932ff3641c8718a02769f1698e7563dc6974ffd03346116310423"
checksum = "e1e928d4b69e3077709075a938a05ffbedfa53a84c8f766efbf8220bb1ff60e1"
dependencies = [
"find-msvc-tools",
"jobserver",
@ -441,9 +441,9 @@ checksum = "c8d4a3bb8b1e0c1050499d1815f5ab16d04f0959b233085fb31653fbfc9d98f9"
[[package]]
name = "cmake"
version = "0.1.57"
version = "0.1.58"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "75443c44cd6b379beb8c5b45d85d0773baf31cce901fe7bb252f4eff3008ef7d"
checksum = "c0f78a02292a74a88ac736019ab962ece0bc380e3f977bf72e376c5d78ff0678"
dependencies = [
"cc",
]
@ -1191,9 +1191,9 @@ checksum = "df3b46402a9d5adb4c86a0cf463f42e19994e3ee891101b1841f30a545cb49a9"
[[package]]
name = "hyper"
version = "1.8.1"
version = "1.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2ab2d4f250c3d7b1c9fcdff1cece94ea4e2dfbec68614f7b87cb205f24ca9d11"
checksum = "6299f016b246a94207e63da54dbe807655bf9e00044f73ded42c3ac5305fbcca"
dependencies = [
"atomic-waker",
"bytes",
@ -1206,7 +1206,6 @@ dependencies = [
"httpdate",
"itoa",
"pin-project-lite",
"pin-utils",
"smallvec",
"tokio",
"want",
@ -1245,7 +1244,7 @@ dependencies = [
"libc",
"percent-encoding",
"pin-project-lite",
"socket2 0.6.3",
"socket2",
"tokio",
"tower-service",
"tracing",
@ -1277,12 +1276,13 @@ dependencies = [
[[package]]
name = "icu_collections"
version = "2.1.1"
version = "2.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4c6b649701667bbe825c3b7e6388cb521c23d88644678e83c0c4d0a621a34b43"
checksum = "2984d1cd16c883d7935b9e07e44071dca8d917fd52ecc02c04d5fa0b5a3f191c"
dependencies = [
"displaydoc",
"potential_utf",
"utf8_iter",
"yoke",
"zerofrom",
"zerovec",
@ -1290,9 +1290,9 @@ dependencies = [
[[package]]
name = "icu_locale_core"
version = "2.1.1"
version = "2.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "edba7861004dd3714265b4db54a3c390e880ab658fec5f7db895fae2046b5bb6"
checksum = "92219b62b3e2b4d88ac5119f8904c10f8f61bf7e95b640d25ba3075e6cac2c29"
dependencies = [
"displaydoc",
"litemap",
@ -1303,9 +1303,9 @@ dependencies = [
[[package]]
name = "icu_normalizer"
version = "2.1.1"
version = "2.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5f6c8828b67bf8908d82127b2054ea1b4427ff0230ee9141c54251934ab1b599"
checksum = "c56e5ee99d6e3d33bd91c5d85458b6005a22140021cc324cea84dd0e72cff3b4"
dependencies = [
"icu_collections",
"icu_normalizer_data",
@ -1317,15 +1317,15 @@ dependencies = [
[[package]]
name = "icu_normalizer_data"
version = "2.1.1"
version = "2.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7aedcccd01fc5fe81e6b489c15b247b8b0690feb23304303a9e560f37efc560a"
checksum = "da3be0ae77ea334f4da67c12f149704f19f81d1adf7c51cf482943e84a2bad38"
[[package]]
name = "icu_properties"
version = "2.1.2"
version = "2.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "020bfc02fe870ec3a66d93e677ccca0562506e5872c650f893269e08615d74ec"
checksum = "bee3b67d0ea5c2cca5003417989af8996f8604e34fb9ddf96208a033901e70de"
dependencies = [
"icu_collections",
"icu_locale_core",
@ -1337,15 +1337,15 @@ dependencies = [
[[package]]
name = "icu_properties_data"
version = "2.1.2"
version = "2.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "616c294cf8d725c6afcd8f55abc17c56464ef6211f9ed59cccffe534129c77af"
checksum = "8e2bbb201e0c04f7b4b3e14382af113e17ba4f63e2c9d2ee626b720cbce54a14"
[[package]]
name = "icu_provider"
version = "2.1.1"
version = "2.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "85962cf0ce02e1e0a629cc34e7ca3e373ce20dda4c4d7294bbd0bf1fdb59e614"
checksum = "139c4cf31c8b5f33d7e199446eff9c1e02decfc2f0eec2c8d71f65befa45b421"
dependencies = [
"displaydoc",
"icu_locale_core",
@ -1427,14 +1427,15 @@ dependencies = [
[[package]]
name = "ipconfig"
version = "0.3.2"
version = "0.3.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b58db92f96b720de98181bbbe63c831e87005ab460c1bf306eb2622b4707997f"
checksum = "4d40460c0ce33d6ce4b0630ad68ff63d6661961c48b6dba35e5a4d81cfb48222"
dependencies = [
"socket2 0.5.10",
"socket2",
"widestring",
"windows-sys 0.48.0",
"winreg",
"windows-registry",
"windows-result",
"windows-sys 0.61.2",
]
[[package]]
@ -1454,9 +1455,9 @@ dependencies = [
[[package]]
name = "iri-string"
version = "0.7.11"
version = "0.7.12"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d8e7418f59cc01c88316161279a7f665217ae316b388e58a0d10e29f54f1e5eb"
checksum = "25e659a4bb38e810ebc252e53b5814ff908a8c58c2a9ce2fae1bbec24cbf4e20"
dependencies = [
"memchr",
"serde",
@ -1533,10 +1534,12 @@ dependencies = [
[[package]]
name = "js-sys"
version = "0.3.91"
version = "0.3.94"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b49715b7073f385ba4bc528e5747d02e66cb39c6146efb66b781f131f0fb399c"
checksum = "2e04e2ef80ce82e13552136fabeef8a5ed1f985a96805761cbb9a2c34e7664d9"
dependencies = [
"cfg-if",
"futures-util",
"once_cell",
"wasm-bindgen",
]
@ -1575,9 +1578,9 @@ checksum = "09edd9e8b54e49e587e4f6295a7d29c3ea94d469cb40ab8ca70b288248a81db2"
[[package]]
name = "libc"
version = "0.2.183"
version = "0.2.184"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b5b646652bf6661599e1da8901b3b9522896f01e736bad5f723fe7a3a27f899d"
checksum = "48f5d2a454e16a5ea0f4ced81bd44e4cfc7bd3a507b61887c99fd3538b28e4af"
[[package]]
name = "linux-raw-sys"
@ -1587,9 +1590,9 @@ checksum = "32a66949e030da00e8c7d4434b251670a91556f4144941d37452769c25d58a53"
[[package]]
name = "litemap"
version = "0.8.1"
version = "0.8.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6373607a59f0be73a39b6fe456b8192fcc3585f602af20751600e974dd455e77"
checksum = "92daf443525c4cce67b150400bc2316076100ce0b3686209eb8cf3c31612e6f0"
[[package]]
name = "lock_api"
@ -1669,9 +1672,9 @@ checksum = "68354c5c6bd36d73ff3feceb05efa59b6acb7626617f4962be322a825e61f79a"
[[package]]
name = "mio"
version = "1.1.1"
version = "1.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a69bcab0ad47271a0234d9422b131806bf3968021e5dc9328caf2d4cd58557fc"
checksum = "50b7e5b27aa02a74bac8c3f23f448f8d87ff11f92d3aac1a6ed369ee08cc56c1"
dependencies = [
"libc",
"log",
@ -1767,9 +1770,9 @@ dependencies = [
[[package]]
name = "num-conv"
version = "0.2.0"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cf97ec579c3c42f953ef76dbf8d55ac91fb219dde70e49aa4a6b7d74e9919050"
checksum = "c6673768db2d862beb9b39a78fdcb1a69439615d5794a1be50caa9bc92c81967"
[[package]]
name = "num-integer"
@ -1891,12 +1894,6 @@ version = "0.2.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a89322df9ebe1c1578d689c92318e070967d1042b512afbe49518723f4e6d5cd"
[[package]]
name = "pin-utils"
version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8b870d8c151b6f2fb93e84a13146138f05d02ed11c7e7c54f8826aaaf7c9f184"
[[package]]
name = "pkcs8"
version = "0.10.2"
@ -1966,9 +1963,9 @@ checksum = "c33a9471896f1c69cecef8d20cbe2f7accd12527ce60845ff44c153bb2a21b49"
[[package]]
name = "potential_utf"
version = "0.1.4"
version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b73949432f5e2a09657003c25bca5e19a0e9c84f8058ca374f49e0ebe605af77"
checksum = "0103b1cef7ec0cf76490e969665504990193874ea05c85ff9bab8b911d0a0564"
dependencies = [
"zerovec",
]
@ -2009,9 +2006,9 @@ dependencies = [
[[package]]
name = "proptest"
version = "1.10.0"
version = "1.11.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "37566cb3fdacef14c0737f9546df7cfeadbfbc9fef10991038bf5015d0c80532"
checksum = "4b45fcc2344c680f5025fe57779faef368840d0bd1f42f216291f0dc4ace4744"
dependencies = [
"bit-set",
"bit-vec",
@ -2045,7 +2042,7 @@ dependencies = [
"quinn-udp",
"rustc-hash",
"rustls",
"socket2 0.6.3",
"socket2",
"thiserror 2.0.18",
"tokio",
"tracing",
@ -2083,7 +2080,7 @@ dependencies = [
"cfg_aliases",
"libc",
"once_cell",
"socket2 0.6.3",
"socket2",
"tracing",
"windows-sys 0.60.2",
]
@ -2301,9 +2298,9 @@ dependencies = [
[[package]]
name = "rustc-hash"
version = "2.1.1"
version = "2.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "357703d41365b4b27c590e3ed91eabb1b663f07c4c084095e60cbed4362dff0d"
checksum = "94300abf3f1ae2e2b8ffb7b58043de3d399c73fa6f4b73826402a5c457614dbe"
[[package]]
name = "rustc_version"
@ -2555,9 +2552,9 @@ dependencies = [
[[package]]
name = "serde_spanned"
version = "1.0.4"
version = "1.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f8bbf91e5a4d6315eee45e704372590b30e260ee83af6639d64557f51b067776"
checksum = "6662b5879511e06e8999a8a235d848113e942c9124f211511b16466ee2995f26"
dependencies = [
"serde_core",
]
@ -2625,7 +2622,7 @@ dependencies = [
"serde_json",
"serde_urlencoded",
"shadowsocks-crypto",
"socket2 0.6.3",
"socket2",
"spin",
"thiserror 2.0.18",
"tokio",
@ -2697,16 +2694,6 @@ version = "1.15.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "67b1b7a3b5fe4f1376887184045fcf45c69e92af734b7aaddc05fb777b6fbd03"
[[package]]
name = "socket2"
version = "0.5.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e22376abed350d73dd1cd119b57ffccad95b4e585a7cda43e286245ce23c0678"
dependencies = [
"libc",
"windows-sys 0.52.0",
]
[[package]]
name = "socket2"
version = "0.6.3"
@ -2793,7 +2780,7 @@ checksum = "7b2093cf4c8eb1e67749a6762251bc9cd836b6fc171623bd0a9d324d37af2417"
[[package]]
name = "telemt"
version = "3.3.31"
version = "3.3.38"
dependencies = [
"aes",
"anyhow",
@ -2834,7 +2821,7 @@ dependencies = [
"sha1",
"sha2",
"shadowsocks",
"socket2 0.6.3",
"socket2",
"static_assertions",
"subtle",
"thiserror 2.0.18",
@ -2844,6 +2831,7 @@ dependencies = [
"tokio-util",
"toml",
"tracing",
"tracing-appender",
"tracing-subscriber",
"url",
"webpki-roots",
@ -2947,9 +2935,9 @@ dependencies = [
[[package]]
name = "tinystr"
version = "0.8.2"
version = "0.8.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "42d3e9c45c09de15d06dd8acf5f4e0e399e85927b7f00711024eb7ae10fa4869"
checksum = "c8323304221c2a851516f22236c5722a72eaa19749016521d6dff0824447d96d"
dependencies = [
"displaydoc",
"zerovec",
@ -2992,7 +2980,7 @@ dependencies = [
"parking_lot",
"pin-project-lite",
"signal-hook-registry",
"socket2 0.6.3",
"socket2",
"tokio-macros",
"tracing",
"windows-sys 0.61.2",
@ -3053,7 +3041,7 @@ dependencies = [
"log",
"once_cell",
"pin-project",
"socket2 0.6.3",
"socket2",
"tokio",
"windows-sys 0.60.2",
]
@ -3077,9 +3065,9 @@ dependencies = [
[[package]]
name = "toml"
version = "1.0.7+spec-1.1.0"
version = "1.1.2+spec-1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dd28d57d8a6f6e458bc0b8784f8fdcc4b99a437936056fa122cb234f18656a96"
checksum = "81f3d15e84cbcd896376e6730314d59fb5a87f31e4b038454184435cd57defee"
dependencies = [
"indexmap",
"serde_core",
@ -3092,27 +3080,27 @@ dependencies = [
[[package]]
name = "toml_datetime"
version = "1.0.1+spec-1.1.0"
version = "1.1.1+spec-1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9b320e741db58cac564e26c607d3cc1fdc4a88fd36c879568c07856ed83ff3e9"
checksum = "3165f65f62e28e0115a00b2ebdd37eb6f3b641855f9d636d3cd4103767159ad7"
dependencies = [
"serde_core",
]
[[package]]
name = "toml_parser"
version = "1.0.10+spec-1.1.0"
version = "1.1.2+spec-1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7df25b4befd31c4816df190124375d5a20c6b6921e2cad937316de3fccd63420"
checksum = "a2abe9b86193656635d2411dc43050282ca48aa31c2451210f4202550afb7526"
dependencies = [
"winnow",
]
[[package]]
name = "toml_writer"
version = "1.0.7+spec-1.1.0"
version = "1.1.1+spec-1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f17aaa1c6e3dc22b1da4b6bba97d066e354c7945cac2f7852d4e4e7ca7a6b56d"
checksum = "756daf9b1013ebe47a8776667b466417e2d4c5679d441c26230efd9ef78692db"
[[package]]
name = "tower"
@ -3170,6 +3158,18 @@ dependencies = [
"tracing-core",
]
[[package]]
name = "tracing-appender"
version = "0.2.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "786d480bce6247ab75f005b14ae1624ad978d3029d9113f0a22fa1ac773faeaf"
dependencies = [
"crossbeam-channel",
"thiserror 2.0.18",
"time",
"tracing-subscriber",
]
[[package]]
name = "tracing-attributes"
version = "0.1.31"
@ -3297,9 +3297,9 @@ checksum = "b6c140620e7ffbb22c2dee59cafe6084a59b5ffc27a8859a5f0d494b5d52b6be"
[[package]]
name = "uuid"
version = "1.22.0"
version = "1.23.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a68d3c8f01c0cfa54a75291d83601161799e4a89a39e0929f4b0354d88757a37"
checksum = "5ac8b6f42ead25368cf5b098aeb3dc8a1a2c05a3eee8a9a1a68c640edbfc79d9"
dependencies = [
"getrandom 0.4.2",
"js-sys",
@ -3372,9 +3372,9 @@ dependencies = [
[[package]]
name = "wasm-bindgen"
version = "0.2.114"
version = "0.2.117"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6532f9a5c1ece3798cb1c2cfdba640b9b3ba884f5db45973a6f442510a87d38e"
checksum = "0551fc1bb415591e3372d0bc4780db7e587d84e2a7e79da121051c5c4b89d0b0"
dependencies = [
"cfg-if",
"once_cell",
@ -3385,23 +3385,19 @@ dependencies = [
[[package]]
name = "wasm-bindgen-futures"
version = "0.4.64"
version = "0.4.67"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e9c5522b3a28661442748e09d40924dfb9ca614b21c00d3fd135720e48b67db8"
checksum = "03623de6905b7206edd0a75f69f747f134b7f0a2323392d664448bf2d3c5d87e"
dependencies = [
"cfg-if",
"futures-util",
"js-sys",
"once_cell",
"wasm-bindgen",
"web-sys",
]
[[package]]
name = "wasm-bindgen-macro"
version = "0.2.114"
version = "0.2.117"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "18a2d50fcf105fb33bb15f00e7a77b772945a2ee45dcf454961fd843e74c18e6"
checksum = "7fbdf9a35adf44786aecd5ff89b4563a90325f9da0923236f6104e603c7e86be"
dependencies = [
"quote",
"wasm-bindgen-macro-support",
@ -3409,9 +3405,9 @@ dependencies = [
[[package]]
name = "wasm-bindgen-macro-support"
version = "0.2.114"
version = "0.2.117"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "03ce4caeaac547cdf713d280eda22a730824dd11e6b8c3ca9e42247b25c631e3"
checksum = "dca9693ef2bab6d4e6707234500350d8dad079eb508dca05530c85dc3a529ff2"
dependencies = [
"bumpalo",
"proc-macro2",
@ -3422,9 +3418,9 @@ dependencies = [
[[package]]
name = "wasm-bindgen-shared"
version = "0.2.114"
version = "0.2.117"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "75a326b8c223ee17883a4251907455a2431acc2791c98c26279376490c378c16"
checksum = "39129a682a6d2d841b6c429d0c51e5cb0ed1a03829d8b3d1e69a011e62cb3d3b"
dependencies = [
"unicode-ident",
]
@ -3465,9 +3461,9 @@ dependencies = [
[[package]]
name = "web-sys"
version = "0.3.91"
version = "0.3.94"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "854ba17bb104abfb26ba36da9729addc7ce7f06f5c0f90f3c391f8461cca21f9"
checksum = "cd70027e39b12f0849461e08ffc50b9cd7688d942c1c8e3c7b22273236b4dd0a"
dependencies = [
"js-sys",
"wasm-bindgen",
@ -3579,6 +3575,17 @@ version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f0805222e57f7521d6a62e36fa9163bc891acd422f971defe97d64e70d0a4fe5"
[[package]]
name = "windows-registry"
version = "0.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "02752bf7fbdcce7f2a27a742f798510f3e5ad88dbe84871e5168e2120c3d5720"
dependencies = [
"windows-link",
"windows-result",
"windows-strings",
]
[[package]]
name = "windows-result"
version = "0.4.1"
@ -3606,15 +3613,6 @@ dependencies = [
"windows-targets 0.42.2",
]
[[package]]
name = "windows-sys"
version = "0.48.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "677d2418bec65e3338edb076e806bc1ec15693c5d0104683f2efe857f61056a9"
dependencies = [
"windows-targets 0.48.5",
]
[[package]]
name = "windows-sys"
version = "0.52.0"
@ -3657,21 +3655,6 @@ dependencies = [
"windows_x86_64_msvc 0.42.2",
]
[[package]]
name = "windows-targets"
version = "0.48.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9a2fa6e2155d7247be68c096456083145c183cbbbc2764150dda45a87197940c"
dependencies = [
"windows_aarch64_gnullvm 0.48.5",
"windows_aarch64_msvc 0.48.5",
"windows_i686_gnu 0.48.5",
"windows_i686_msvc 0.48.5",
"windows_x86_64_gnu 0.48.5",
"windows_x86_64_gnullvm 0.48.5",
"windows_x86_64_msvc 0.48.5",
]
[[package]]
name = "windows-targets"
version = "0.52.6"
@ -3711,12 +3694,6 @@ version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "597a5118570b68bc08d8d59125332c54f1ba9d9adeedeef5b99b02ba2b0698f8"
[[package]]
name = "windows_aarch64_gnullvm"
version = "0.48.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2b38e32f0abccf9987a4e3079dfb67dcd799fb61361e53e2882c3cbaf0d905d8"
[[package]]
name = "windows_aarch64_gnullvm"
version = "0.52.6"
@ -3735,12 +3712,6 @@ version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e08e8864a60f06ef0d0ff4ba04124db8b0fb3be5776a5cd47641e942e58c4d43"
[[package]]
name = "windows_aarch64_msvc"
version = "0.48.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dc35310971f3b2dbbf3f0690a219f40e2d9afcf64f9ab7cc1be722937c26b4bc"
[[package]]
name = "windows_aarch64_msvc"
version = "0.52.6"
@ -3759,12 +3730,6 @@ version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c61d927d8da41da96a81f029489353e68739737d3beca43145c8afec9a31a84f"
[[package]]
name = "windows_i686_gnu"
version = "0.48.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a75915e7def60c94dcef72200b9a8e58e5091744960da64ec734a6c6e9b3743e"
[[package]]
name = "windows_i686_gnu"
version = "0.52.6"
@ -3795,12 +3760,6 @@ version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "44d840b6ec649f480a41c8d80f9c65108b92d89345dd94027bfe06ac444d1060"
[[package]]
name = "windows_i686_msvc"
version = "0.48.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8f55c233f70c4b27f66c523580f78f1004e8b5a8b659e05a4eb49d4166cca406"
[[package]]
name = "windows_i686_msvc"
version = "0.52.6"
@ -3819,12 +3778,6 @@ version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8de912b8b8feb55c064867cf047dda097f92d51efad5b491dfb98f6bbb70cb36"
[[package]]
name = "windows_x86_64_gnu"
version = "0.48.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "53d40abd2583d23e4718fddf1ebec84dbff8381c07cae67ff7768bbf19c6718e"
[[package]]
name = "windows_x86_64_gnu"
version = "0.52.6"
@ -3843,12 +3796,6 @@ version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "26d41b46a36d453748aedef1486d5c7a85db22e56aff34643984ea85514e94a3"
[[package]]
name = "windows_x86_64_gnullvm"
version = "0.48.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0b7b52767868a23d5bab768e390dc5f5c55825b6d30b86c844ff2dc7414044cc"
[[package]]
name = "windows_x86_64_gnullvm"
version = "0.52.6"
@ -3867,12 +3814,6 @@ version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9aec5da331524158c6d1a4ac0ab1541149c0b9505fde06423b02f5ef0106b9f0"
[[package]]
name = "windows_x86_64_msvc"
version = "0.48.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ed94fce61571a4006852b7389a063ab983c02eb1bb37b47f8272ce92d06d9538"
[[package]]
name = "windows_x86_64_msvc"
version = "0.52.6"
@ -3887,19 +3828,9 @@ checksum = "d6bbff5f0aada427a1e5a6da5f1f98158182f26556f345ac9e04d36d0ebed650"
[[package]]
name = "winnow"
version = "1.0.0"
version = "1.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a90e88e4667264a994d34e6d1ab2d26d398dcdca8b7f52bec8668957517fc7d8"
[[package]]
name = "winreg"
version = "0.50.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "524e57b2c537c0f9b1e69f1965311ec12182b4122e45035b1508cd24d2adadb1"
dependencies = [
"cfg-if",
"windows-sys 0.48.0",
]
checksum = "09dac053f1cd375980747450bfc7250c264eaae0583872e845c0c7cd578872b5"
[[package]]
name = "wit-bindgen"
@ -4026,9 +3957,9 @@ dependencies = [
[[package]]
name = "yoke"
version = "0.8.1"
version = "0.8.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "72d6e5c6afb84d73944e5cedb052c4680d5657337201555f9f2a16b7406d4954"
checksum = "abe8c5fda708d9ca3df187cae8bfb9ceda00dd96231bed36e445a1a48e66f9ca"
dependencies = [
"stable_deref_trait",
"yoke-derive",
@ -4037,9 +3968,9 @@ dependencies = [
[[package]]
name = "yoke-derive"
version = "0.8.1"
version = "0.8.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b659052874eb698efe5b9e8cf382204678a0086ebf46982b79d6ca3182927e5d"
checksum = "de844c262c8848816172cef550288e7dc6c7b7814b4ee56b3e1553f275f1858e"
dependencies = [
"proc-macro2",
"quote",
@ -4049,18 +3980,18 @@ dependencies = [
[[package]]
name = "zerocopy"
version = "0.8.47"
version = "0.8.48"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "efbb2a062be311f2ba113ce66f697a4dc589f85e78a4aea276200804cea0ed87"
checksum = "eed437bf9d6692032087e337407a86f04cd8d6a16a37199ed57949d415bd68e9"
dependencies = [
"zerocopy-derive",
]
[[package]]
name = "zerocopy-derive"
version = "0.8.47"
version = "0.8.48"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0e8bc7269b54418e7aeeef514aa68f8690b8c0489a06b0136e5f57c4c5ccab89"
checksum = "70e3cd084b1788766f53af483dd21f93881ff30d7320490ec3ef7526d203bad4"
dependencies = [
"proc-macro2",
"quote",
@ -4069,18 +4000,18 @@ dependencies = [
[[package]]
name = "zerofrom"
version = "0.1.6"
version = "0.1.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "50cc42e0333e05660c3587f3bf9d0478688e15d870fab3346451ce7f8c9fbea5"
checksum = "69faa1f2a1ea75661980b013019ed6687ed0e83d069bc1114e2cc74c6c04c4df"
dependencies = [
"zerofrom-derive",
]
[[package]]
name = "zerofrom-derive"
version = "0.1.6"
version = "0.1.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d71e5d6e06ab090c67b5e44993ec16b72dcbaabc526db883a360057678b48502"
checksum = "11532158c46691caf0f2593ea8358fed6bbf68a0315e80aae9bd41fbade684a1"
dependencies = [
"proc-macro2",
"quote",
@ -4110,9 +4041,9 @@ dependencies = [
[[package]]
name = "zerotrie"
version = "0.2.3"
version = "0.2.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2a59c17a5562d507e4b54960e8569ebee33bee890c70aa3fe7b97e85a9fd7851"
checksum = "0f9152d31db0792fa83f70fb2f83148effb5c1f5b8c7686c3459e361d9bc20bf"
dependencies = [
"displaydoc",
"yoke",
@ -4121,9 +4052,9 @@ dependencies = [
[[package]]
name = "zerovec"
version = "0.11.5"
version = "0.11.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6c28719294829477f525be0186d13efa9a3c602f7ec202ca9e353d310fb9a002"
checksum = "90f911cbc359ab6af17377d242225f4d75119aec87ea711a880987b18cd7b239"
dependencies = [
"yoke",
"zerofrom",
@ -4132,9 +4063,9 @@ dependencies = [
[[package]]
name = "zerovec-derive"
version = "0.11.2"
version = "0.11.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "eadce39539ca5cb3985590102671f2567e659fca9666581ad3411d59207951f3"
checksum = "625dc425cab0dca6dc3c3319506e6593dcb08a9f387ea3b284dbd52a92c40555"
dependencies = [
"proc-macro2",
"quote",

Cargo.toml

@ -1,6 +1,6 @@
[package]
name = "telemt"
version = "3.3.31"
version = "3.3.38"
edition = "2024"
[features]
@ -30,7 +30,13 @@ static_assertions = "1.1"
# Network
socket2 = { version = "0.6", features = ["all"] }
nix = { version = "0.31", default-features = false, features = ["net", "fs"] }
nix = { version = "0.31", default-features = false, features = [
"net",
"user",
"process",
"fs",
"signal",
] }
shadowsocks = { version = "1.24", features = ["aead-cipher-2022"] }
# Serialization
@ -44,6 +50,7 @@ bytes = "1.9"
thiserror = "2.0"
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
tracing-appender = "0.2"
parking_lot = "0.12"
dashmap = "6.1"
arc-swap = "1.7"
@ -68,8 +75,14 @@ hyper = { version = "1", features = ["server", "http1"] }
hyper-util = { version = "0.1", features = ["tokio", "server-auto"] }
http-body-util = "0.1"
httpdate = "1.0"
tokio-rustls = { version = "0.26", default-features = false, features = ["tls12"] }
rustls = { version = "0.23", default-features = false, features = ["std", "tls12", "ring"] }
tokio-rustls = { version = "0.26", default-features = false, features = [
"tls12",
] }
rustls = { version = "0.23", default-features = false, features = [
"std",
"tls12",
"ring",
] }
webpki-roots = "1.0"
[dev-dependencies]

IMPLEMENTATION_PLAN.md (new file)

File diff suppressed because it is too large

README.md

@ -2,6 +2,11 @@
***Löst Probleme, bevor andere überhaupt wissen, dass sie existieren*** / ***It solves problems before others even realize they exist***
### [**Telemt Chat in Telegram**](https://t.me/telemtrs)
#### Fixed TLS ClientHello is now available in Telegram Desktop starting from version 6.7.2: to work with EE-MTProxy, please update your client.
#### Fixed TLS ClientHello for the Telegram Android client is available in [our chat](https://t.me/telemtrs/30234/36441); official releases for Android and iOS are a work in progress.
**Telemt** is a fast, secure, and feature-rich server written in Rust: it fully implements the official Telegram proxy algorithm and adds many production-ready improvements such as:
- [ME Pool + Reader/Writer + Registry + Refill + Adaptive Floor + Trio-State + Generation Lifecycle](https://github.com/telemt/telemt/blob/main/docs/model/MODEL.en.md)
- [Full-covered API w/ management](https://github.com/telemt/telemt/blob/main/docs/API.md)
@ -9,60 +14,6 @@
- Prometheus-format Metrics
- TLS-Fronting and TCP-Splicing for masking from "prying" eyes
[**Telemt Chat in Telegram**](https://t.me/telemtrs)
## NEWS and EMERGENCY
### ✈️ Telemt 3 is released!
<table>
<tr>
<td width="50%" valign="top">
### 🇷🇺 RU
#### О релизах
[3.3.27](https://github.com/telemt/telemt/releases/tag/3.3.27) даёт баланс стабильности и передового функционала, а так же последние исправления по безопасности и багам
Будем рады вашему фидбеку и предложениям по улучшению — особенно в части **API**, **статистики**, **UX**
---
Если у вас есть компетенции в:
- Асинхронных сетевых приложениях
- Анализе трафика
- Реверс-инжиниринге
- Сетевых расследованиях
Мы открыты к архитектурным предложениям, идеям и pull requests
</td>
<td width="50%" valign="top">
### 🇬🇧 EN
#### About releases
[3.3.27](https://github.com/telemt/telemt/releases/tag/3.3.27) provides a balance of stability and advanced functionality, as well as the latest security and bug fixes
We are looking forward to your feedback and improvement proposals — especially regarding **API**, **statistics**, **UX**
---
If you have expertise in:
- Asynchronous network applications
- Traffic analysis
- Reverse engineering
- Network forensics
We welcome ideas, architectural feedback, and pull requests.
</td>
</tr>
</table>
# Features
💥 The configuration structure has changed since version 1.1.0.0: update it in your environment!
⚓ Our implementation of **TLS-fronting** is one of the most deeply debugged, focused, advanced and *almost* **"behaviorally consistent to real"**: we are confident we have it right - [see evidence on our validation and traces](#recognizability-for-dpi-and-crawler)
⚓ Our ***Middle-End Pool*** is fastest by design in standard scenarios, compared to other implementations of connecting to the Middle-End Proxy: not dramatically, but consistently
@ -103,8 +54,12 @@ We welcome ideas, architectural feedback, and pull requests.
- [FAQ EN](docs/FAQ.en.md)
### Recognizability for DPI and crawler
Since version 1.1.0.0, masking has been thoroughly debugged: for all clients that do not "present" a key,
we transparently direct traffic to the target host!
On April 1, 2026, we became aware of a method for detecting MTProxy Fake-TLS
based on the ECH extension, the ordering of cipher suites,
and an overall unique JA3/JA4 fingerprint that does not occur in modern browsers.
We have already submitted initial changes to the Telegram Desktop developers and are working on updates for other clients.
- We consider this a breakthrough aspect that has no stable analogues today
- Based on this: if `telemt` is configured correctly, **TLS mode is completely identical to a real-life handshake + communication** with the specified host

File diff suppressed because it is too large


@ -1,110 +1,122 @@
## How to set up "proxy sponsor" channel and statistics via @MTProxybot bot
## How to set up a "proxy sponsor" channel and statistics via the @MTProxybot
1. Go to @MTProxybot bot.
2. Enter the command `/newproxy`
3. Send the server IP and port. For example: 1.2.3.4:443
4. Open the config `nano /etc/telemt/telemt.toml`.
5. Copy and send the user secret from the [access.users] section to the bot.
6. Copy the tag received from the bot. For example 1234567890abcdef1234567890abcdef.
1. Go to the @MTProxybot.
2. Enter the `/newproxy` command.
3. Send your server's IP address and port. For example: `1.2.3.4:443`.
4. Open the configuration file: `nano /etc/telemt/telemt.toml`.
5. Copy and send the user secret from the `[access.users]` section to the bot.
6. Copy the tag provided by the bot. For example: `1234567890abcdef1234567890abcdef`.
> [!WARNING]
> The link provided by the bot will not work. Do not copy or use it!
7. Uncomment the `ad_tag` parameter and enter the tag received from the bot.
8. Uncomment or add the `use_middle_proxy = true` parameter.
Configuration example:
```toml
[general]
ad_tag = "1234567890abcdef1234567890abcdef"
use_middle_proxy = true
```
9. Save the changes (in nano: Ctrl+S -> Ctrl+X).
10. Restart the telemt service: `systemctl restart telemt`.
11. Send the `/myproxies` command to the bot and select the added server.
12. Click the "Set promotion" button.
13. Send a **public link** to the channel. Private channels cannot be added!
14. Wait about 1 hour for the information to update on Telegram servers.
> [!WARNING]
> The sponsored channel will not be displayed to you if you are already subscribed to it.
**You can also configure different sponsored channels for different users:**
```toml
[access.user_ad_tags]
hello = "ad_tag"
hello2 = "ad_tag2"
```
## Why do you need a middle proxy (ME)
https://github.com/telemt/telemt/discussions/167
## How many people can use one link
By default, an unlimited number of people can use a single link.
However, you can limit the number of unique IP addresses for each user:
```toml
[access.user_max_unique_ips]
hello = 1
```
This parameter sets the maximum number of unique IP addresses from which a single link can be used simultaneously. If the first user disconnects, a second one can connect. At the same time, multiple users can connect from a single IP address simultaneously (for example, devices on the same Wi-Fi network).
## How to create multiple different links
1. Generate the required number of secrets using the command: `openssl rand -hex 16`.
2. Open the configuration file: `nano /etc/telemt/telemt.toml`.
3. Add new users to the `[access.users]` section:
```toml
[access.users]
user1 = "00000000000000000000000000000001"
user2 = "00000000000000000000000000000002"
user3 = "00000000000000000000000000000003"
```
4. Save the configuration (Ctrl+S -> Ctrl+X). There is no need to restart the telemt service.
5. Get the ready-to-use links using the command:
```bash
curl -s http://127.0.0.1:9091/v1/users | jq
```
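A secret must be exactly 16 bytes (32 hex characters). If you generate secrets some other way, you can sanity-check one before adding it to the config; a minimal sketch (the variable names and messages are illustrative, not part of telemt):

```shell
#!/bin/sh
# Validate that a secret is 32 hex characters, mirroring the constraint
# `openssl rand -hex 16` satisfies by construction.
secret="$(openssl rand -hex 16 2>/dev/null || echo 00000000000000000000000000000001)"
case "$secret" in
  *[!0-9a-fA-F]*) echo "invalid: non-hex characters in secret" ;;
  *)
    if [ "${#secret}" -eq 32 ]; then
      echo "ok"
    else
      echo "invalid: length ${#secret}, expected 32"
    fi
    ;;
esac
```

This prints `ok` for any output of `openssl rand -hex 16`.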
## "Unknown TLS SNI" error
Usually, this error occurs if you have changed the `tls_domain` parameter, but users continue to connect using old links with the previous domain.
If you need to allow connections with any domains (ignoring SNI mismatches), add the following parameters:
```toml
[censorship]
unknown_sni_action = "mask"
```
## How to view metrics
1. Open the configuration file: `nano /etc/telemt/telemt.toml`.
2. Add the following parameters:
```toml
[server]
metrics_port = 9090
metrics_whitelist = ["127.0.0.1/32", "::1/128", "0.0.0.0/0"]
```
3. Save the changes (Ctrl+S -> Ctrl+X).
4. After that, metrics will be available at: `SERVER_IP:9090/metrics`.
> [!WARNING]
> The value `"0.0.0.0/0"` in `metrics_whitelist` opens access to metrics from any IP address. It is recommended to replace it with your personal IP, for example: `"1.2.3.4/32"`.
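For example, a locked-down metrics section might look like this (the admin address `1.2.3.4/32` is a placeholder; substitute your own):

```toml
[server]
metrics_port = 9090
# Loopback plus a single trusted admin host:
metrics_whitelist = ["127.0.0.1/32", "::1/128", "1.2.3.4/32"]
```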
## Additional parameters
### Domain in the link instead of IP
To display a domain instead of an IP address in the connection links, add the following lines to the configuration file:
```toml
[general.links]
public_host = "proxy.example.com"
```
### Total server connection limit
This parameter limits the total number of active connections to the server:
```toml
[server]
max_connections = 10000 # 0 - unlimited, 10000 - default
```
### Upstream Manager
To configure outbound connections (upstreams), add the corresponding parameters to the `[[upstreams]]` section of the configuration file:
#### Binding to an outbound IP address
```toml
[[upstreams]]
type = "direct"
weight = 1
enabled = true
interface = "192.168.1.100" # Replace with your outbound IP
```
#### Using SOCKS4/5 as an Upstream
- Without authentication:
```toml
[[upstreams]]
type = "socks5" # Specify SOCKS4 or SOCKS5
@ -113,7 +125,7 @@ weight = 1 # Set Weight for Scenarios
enabled = true
```
- With authentication:
```toml
[[upstreams]]
type = "socks5" # Specify SOCKS4 or SOCKS5
@ -124,8 +136,8 @@ weight = 1 # Set Weight for Scenarios
enabled = true
```
#### Using Shadowsocks as an Upstream
For this method to work, the `use_middle_proxy = false` parameter must be set.
```toml
[general]


@ -1,32 +1,32 @@
## Как настроить канал "спонсор прокси" и статистику через бота @MTProxybot
1. Зайдите в бота @MTProxybot.
2. Введите команду `/newproxy`.
3. Отправьте IP-адрес и порт сервера. Например: `1.2.3.4:443`.
4. Откройте файл конфигурации: `nano /etc/telemt/telemt.toml`.
5. Скопируйте и отправьте боту секрет пользователя из раздела `[access.users]`.
6. Скопируйте тег (tag), который выдаст бот. Например: `1234567890abcdef1234567890abcdef`.
> [!WARNING]
> Ссылка, которую выдает бот, работать не будет. Не копируйте и не используйте её!
7. Раскомментируйте параметр `ad_tag` и впишите тег, полученный от бота.
8. Раскомментируйте или добавьте параметр `use_middle_proxy = true`.
Пример конфигурации:
```toml
[general]
ad_tag = "1234567890abcdef1234567890abcdef"
use_middle_proxy = true
```
9. Сохраните изменения (в nano: Ctrl+S -> Ctrl+X).
10. Перезапустите службу telemt: `systemctl restart telemt`.
11. В боте отправьте команду `/myproxies` и выберите добавленный сервер.
12. Нажмите кнопку «Set promotion».
13. Отправьте **публичную ссылку** на канал. Приватные каналы добавлять нельзя!
14. Подождите примерно 1 час, пока информация обновится на серверах Telegram.
> [!WARNING]
> Спонсорский канал не будет у вас отображаться, если вы уже на него подписаны.
**Вы также можете настроить разные спонсорские каналы для разных пользователей:**
```toml
[access.user_ad_tags]
hello = "ad_tag"
@ -37,74 +37,85 @@ hello2 = "ad_tag2"
https://github.com/telemt/telemt/discussions/167
## Сколько человек может пользоваться одной ссылкой
По умолчанию одной ссылкой может пользоваться неограниченное число людей.
Однако вы можете ограничить количество уникальных IP-адресов для каждого пользователя:
```toml
[access.user_max_unique_ips]
hello = 1
```
Этот параметр задает максимальное количество уникальных IP-адресов, с которых можно одновременно использовать одну ссылку. Если первый пользователь отключится, второй сможет подключиться. При этом с одного IP-адреса могут подключаться несколько пользователей одновременно (например, устройства в одной Wi-Fi сети).
## Как создать несколько разных ссылок
1. Сгенерируйте необходимое количество секретов с помощью команды: `openssl rand -hex 16`.
2. Откройте файл конфигурации: `nano /etc/telemt/telemt.toml`.
3. Добавьте новых пользователей в секцию `[access.users]`:
```toml
[access.users]
user1 = "00000000000000000000000000000001"
user2 = "00000000000000000000000000000002"
user3 = "00000000000000000000000000000003"
```
4. Сохраните конфигурацию (Ctrl+S -> Ctrl+X). Перезапускать службу telemt не нужно.
5. Получите готовые ссылки с помощью команды:
```bash
curl -s http://127.0.0.1:9091/v1/users | jq
```
## Ошибка "Unknown TLS SNI"
Обычно эта ошибка возникает, если вы изменили параметр `tls_domain`, но пользователи продолжают подключаться по старым ссылкам с прежним доменом.
Если необходимо разрешить подключение с любыми доменами (игнорируя несовпадения SNI), добавьте следующие параметры:
```toml
[censorship]
unknown_sni_action = "mask"
```
## Как посмотреть метрики
1. Откройте файл конфигурации: `nano /etc/telemt/telemt.toml`.
2. Добавьте следующие параметры:
```toml
[server]
metrics_port = 9090
metrics_whitelist = ["127.0.0.1/32", "::1/128", "0.0.0.0/0"]
```
3. Сохраните изменения (Ctrl+S -> Ctrl+X).
4. После этого метрики будут доступны по адресу: `SERVER_IP:9090/metrics`.
> [!WARNING]
> Значение `"0.0.0.0/0"` в `metrics_whitelist` открывает доступ к метрикам с любого IP-адреса. Рекомендуется заменить его на ваш личный IP, например: `"1.2.3.4/32"`.
## Дополнительные параметры
### Домен в ссылке вместо IP
Чтобы в ссылках для подключения отображался домен вместо IP-адреса, добавьте следующие строки в файл конфигурации:
```toml
[general.links]
public_host = "proxy.example.com"
```
### Общий лимит подключений к серверу
Этот параметр ограничивает общее количество активных подключений к серверу:
```toml
[server]
max_connections = 10000 # 0 - без ограничений, 10000 - по умолчанию
```
### Upstream Manager
Для настройки исходящих подключений (апстримов) добавьте соответствующие параметры в секцию `[[upstreams]]` файла конфигурации:
#### Привязка к исходящему IP-адресу
```toml
[[upstreams]]
type = "direct"
weight = 1
enabled = true
interface = "192.168.1.100" # Замените на ваш исходящий IP
```
#### Использование SOCKS4/5 в качестве Upstream
- Без авторизации:
```toml
[[upstreams]]
@ -125,8 +136,8 @@ weight = 1 # Set Weight for Scenarios
enabled = true
```
#### Использование Shadowsocks в качестве Upstream
Для работы этого метода требуется установить параметр `use_middle_proxy = false`.
```toml
[general]


@ -27,12 +27,12 @@ chmod +x /bin/telemt
**0. Check port and generate secrets**
The port you have selected for use should not be in the list:
```bash
netstat -lnp
```
Generate 16 bytes (32 hex characters) with OpenSSL or another tool:
```bash
openssl rand -hex 16
```
@ -50,7 +50,7 @@ Save the obtained result somewhere. You will need it later!
**1. Place your config to /etc/telemt/telemt.toml**
Create the config directory:
```bash
mkdir /etc/telemt
```
@ -59,7 +59,7 @@ Open nano
```bash
nano /etc/telemt/telemt.toml
```
Insert your configuration:
```toml
# === General Settings ===
@ -94,7 +94,8 @@ then Ctrl+S -> Ctrl+X to save
> [!WARNING]
> Replace the value of the hello parameter with the value you obtained in step 0.
> Additionally, change the value of the tls_domain parameter to a different website.
> Changing the tls_domain parameter will break all links that use the old domain!
---
@ -105,14 +106,14 @@ useradd -d /opt/telemt -m -r -U telemt
chown -R telemt:telemt /etc/telemt
```
**3. Create service in /etc/systemd/system/telemt.service**
Open nano
```bash
nano /etc/systemd/system/telemt.service
```
Insert this systemd unit:
```bash
[Unit]
Description=Telemt
@ -127,8 +128,8 @@ WorkingDirectory=/opt/telemt
ExecStart=/bin/telemt /etc/telemt/telemt.toml
Restart=on-failure
LimitNOFILE=65536
AmbientCapabilities=CAP_NET_ADMIN CAP_NET_BIND_SERVICE
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE
NoNewPrivileges=true
[Install]
@ -147,13 +148,16 @@ systemctl daemon-reload
**6.** For automatic startup at system boot, enter `systemctl enable telemt`
**7.** To get the link(s), enter:
```bash
curl -s http://127.0.0.1:9091/v1/users | jq
```
> Any number of people can use one link.
> [!WARNING]
> Only the command from step 7 can provide a working link. Do not try to create it yourself or copy it from anywhere if you are not sure what you are doing!
---
# Telemt via Docker Compose


@ -95,6 +95,7 @@ hello = "00000000000000000000000000000000"
> [!WARNING]
> Замените значение параметра hello на значение, которое вы получили в пункте 0.
> Также замените значение параметра tls_domain на другой сайт.
> Изменение параметра tls_domain сделает нерабочими все ссылки, использующие старый домен!
---
@ -127,8 +128,8 @@ WorkingDirectory=/opt/telemt
ExecStart=/bin/telemt /etc/telemt/telemt.toml
Restart=on-failure
LimitNOFILE=65536
AmbientCapabilities=CAP_NET_ADMIN CAP_NET_BIND_SERVICE
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE
NoNewPrivileges=true
[Install]


@ -8,18 +8,62 @@ CONFIG_DIR="${CONFIG_DIR:-/etc/telemt}"
CONFIG_FILE="${CONFIG_FILE:-${CONFIG_DIR}/telemt.toml}"
WORK_DIR="${WORK_DIR:-/opt/telemt}"
TLS_DOMAIN="${TLS_DOMAIN:-petrovich.ru}"
SERVER_PORT="${SERVER_PORT:-443}"
USER_SECRET=""
AD_TAG=""
SERVICE_NAME="telemt"
TEMP_DIR=""
SUDO=""
CONFIG_PARENT_DIR=""
SERVICE_START_FAILED=0
PORT_PROVIDED=0
SECRET_PROVIDED=0
AD_TAG_PROVIDED=0
DOMAIN_PROVIDED=0
ACTION="install"
TARGET_VERSION="${VERSION:-latest}"
while [ $# -gt 0 ]; do
case "$1" in
-h|--help) ACTION="help"; shift ;;
-d|--domain)
if [ "$#" -lt 2 ] || [ -z "$2" ]; then
printf '[ERROR] %s requires a domain argument.\n' "$1" >&2
exit 1
fi
TLS_DOMAIN="$2"; DOMAIN_PROVIDED=1; shift 2 ;;
-p|--port)
if [ "$#" -lt 2 ] || [ -z "$2" ]; then
printf '[ERROR] %s requires a port argument.\n' "$1" >&2; exit 1
fi
case "$2" in
*[!0-9]*) printf '[ERROR] Port must be a valid number.\n' >&2; exit 1 ;;
esac
port_num="$(printf '%s\n' "$2" | sed 's/^0*//')"
[ -z "$port_num" ] && port_num="0"
if [ "${#port_num}" -gt 5 ] || [ "$port_num" -lt 1 ] || [ "$port_num" -gt 65535 ]; then
printf '[ERROR] Port must be between 1 and 65535.\n' >&2; exit 1
fi
SERVER_PORT="$port_num"; PORT_PROVIDED=1; shift 2 ;;
-s|--secret)
if [ "$#" -lt 2 ] || [ -z "$2" ]; then
printf '[ERROR] %s requires a secret argument.\n' "$1" >&2; exit 1
fi
case "$2" in
*[!0-9a-fA-F]*)
printf '[ERROR] Secret must contain only hex characters.\n' >&2; exit 1 ;;
esac
if [ "${#2}" -ne 32 ]; then
printf '[ERROR] Secret must be exactly 32 chars.\n' >&2; exit 1
fi
USER_SECRET="$2"; SECRET_PROVIDED=1; shift 2 ;;
-a|--ad-tag|--ad_tag)
if [ "$#" -lt 2 ] || [ -z "$2" ]; then
printf '[ERROR] %s requires an ad_tag argument.\n' "$1" >&2; exit 1
fi
AD_TAG="$2"; AD_TAG_PROVIDED=1; shift 2 ;;
uninstall|--uninstall)
if [ "$ACTION" != "purge" ]; then ACTION="uninstall"; fi
shift ;;
@ -52,11 +96,17 @@ cleanup() {
trap cleanup EXIT INT TERM
show_help() {
say "Usage: $0 [ <version> | install | uninstall | purge ] [ options ]"
say " <version> Install specific version (e.g. 3.3.15, default: latest)"
say " install Install the latest version"
say " uninstall Remove the binary and service"
say " purge Remove everything including configuration, data, and user"
say ""
say "Options:"
say " -d, --domain Set TLS domain (default: petrovich.ru)"
say " -p, --port Set server port (default: 443)"
say " -s, --secret Set specific user secret (32 hex characters)"
say " -a, --ad-tag Set ad_tag"
exit 0
}
@ -112,6 +162,14 @@ get_svc_mgr() {
else echo "none"; fi
}
is_config_exists() {
if [ -n "$SUDO" ]; then
$SUDO sh -c '[ -f "$1" ]' _ "$CONFIG_FILE"
else
[ -f "$CONFIG_FILE" ]
fi
}
verify_common() {
[ -n "$BIN_NAME" ] || die "BIN_NAME cannot be empty."
[ -n "$INSTALL_DIR" ] || die "INSTALL_DIR cannot be empty."
@ -119,7 +177,7 @@ verify_common() {
[ -n "$CONFIG_FILE" ] || die "CONFIG_FILE cannot be empty."
case "${INSTALL_DIR}${CONFIG_DIR}${WORK_DIR}${CONFIG_FILE}" in
*[!a-zA-Z0-9_./-]*) die "Invalid characters in paths." ;;
esac
case "$TARGET_VERSION" in *[!a-zA-Z0-9_.-]*) die "Invalid characters in version." ;; esac
@ -137,11 +195,11 @@ verify_common() {
if [ "$(id -u)" -eq 0 ]; then
SUDO=""
else
command -v sudo >/dev/null 2>&1 || die "This script requires root or sudo."
SUDO="sudo"
if ! sudo -n true 2>/dev/null; then
if ! [ -t 0 ]; then
die "sudo requires a password, but no TTY detected."
fi
fi
fi
@ -154,21 +212,7 @@ verify_common() {
die "Safety check failed: CONFIG_FILE '$CONFIG_FILE' is a directory."
fi
for cmd in id uname awk grep find rm chown chmod mv mktemp mkdir tr dd sed ps head sleep cat tar gzip; do
command -v "$cmd" >/dev/null 2>&1 || die "Required command not found: $cmd"
done
}
@ -177,14 +221,41 @@ verify_install_deps() {
command -v curl >/dev/null 2>&1 || command -v wget >/dev/null 2>&1 || die "Neither curl nor wget is installed."
command -v cp >/dev/null 2>&1 || command -v install >/dev/null 2>&1 || die "Need cp or install"
if ! command -v setcap >/dev/null 2>&1 || ! command -v conntrack >/dev/null 2>&1; then
if command -v apk >/dev/null 2>&1; then
$SUDO apk add --no-cache libcap-utils libcap conntrack-tools >/dev/null 2>&1 || true
elif command -v apt-get >/dev/null 2>&1; then
$SUDO env DEBIAN_FRONTEND=noninteractive apt-get install -y -q libcap2-bin conntrack >/dev/null 2>&1 || {
$SUDO env DEBIAN_FRONTEND=noninteractive apt-get update -q >/dev/null 2>&1 || true
$SUDO env DEBIAN_FRONTEND=noninteractive apt-get install -y -q libcap2-bin conntrack >/dev/null 2>&1 || true
}
elif command -v dnf >/dev/null 2>&1; then $SUDO dnf install -y -q libcap conntrack-tools >/dev/null 2>&1 || true
elif command -v yum >/dev/null 2>&1; then $SUDO yum install -y -q libcap conntrack-tools >/dev/null 2>&1 || true
fi
fi
}
check_port_availability() {
port_info=""
if command -v ss >/dev/null 2>&1; then
port_info=$($SUDO ss -tulnp 2>/dev/null | grep -E ":${SERVER_PORT}([[:space:]]|$)" || true)
elif command -v netstat >/dev/null 2>&1; then
port_info=$($SUDO netstat -tulnp 2>/dev/null | grep -E ":${SERVER_PORT}([[:space:]]|$)" || true)
elif command -v lsof >/dev/null 2>&1; then
port_info=$($SUDO lsof -i :${SERVER_PORT} 2>/dev/null | grep LISTEN || true)
else
say "[WARNING] Network diagnostic tools (ss, netstat, lsof) not found. Skipping port check."
return 0
fi
if [ -n "$port_info" ]; then
if printf '%s\n' "$port_info" | grep -q "${BIN_NAME}"; then
say " -> Port ${SERVER_PORT} is in use by ${BIN_NAME}. Ignoring as it will be restarted."
else
say "[ERROR] Port ${SERVER_PORT} is already in use by another process:"
printf ' %s\n' "$port_info"
die "Please free the port ${SERVER_PORT} or change it and try again."
fi
fi
}
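The grep anchor used above (`:PORT` followed by whitespace or end of line) is what prevents false matches on longer port numbers; an illustrative, self-contained run of the same pattern against a canned `ss`-style line instead of live sockets:

```shell
#!/bin/sh
# Demonstrate the port-match pattern from check_port_availability on sample data.
port=443
sample='tcp   LISTEN 0      128    0.0.0.0:443    0.0.0.0:*'
if printf '%s\n' "$sample" | grep -qE ":${port}([[:space:]]|$)"; then
  echo "port ${port}: busy"     # matches ":443 " in the sample line
else
  echo "port ${port}: free"
fi
```

Note that with `port=44` the same pattern would not match `:443`, which is exactly the false positive the trailing `([[:space:]]|$)` alternation rules out.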
@ -192,7 +263,13 @@ verify_install_deps() {
detect_arch() {
sys_arch="$(uname -m)"
case "$sys_arch" in
x86_64|amd64)
if [ -r /proc/cpuinfo ] && grep -q "avx2" /proc/cpuinfo 2>/dev/null && grep -q "bmi2" /proc/cpuinfo 2>/dev/null; then
echo "x86_64-v3"
else
echo "x86_64"
fi
;;
aarch64|arm64) echo "aarch64" ;;
*) die "Unsupported architecture: $sys_arch" ;;
esac
@ -261,17 +338,19 @@ install_binary() {
fi
$SUDO mkdir -p "$INSTALL_DIR" || die "Failed to create install directory"
$SUDO rm -f "$bin_dst" 2>/dev/null || true
if command -v install >/dev/null 2>&1; then
$SUDO install -m 0755 "$bin_src" "$bin_dst" || die "Failed to install binary"
else
$SUDO cp "$bin_src" "$bin_dst" && $SUDO chmod 0755 "$bin_dst" || die "Failed to copy binary"
fi
$SUDO sh -c '[ -x "$1" ]' _ "$bin_dst" || die "Binary not executable: $bin_dst"
if command -v setcap >/dev/null 2>&1; then
$SUDO setcap cap_net_bind_service,cap_net_admin=+ep "$bin_dst" 2>/dev/null || true
fi
}
@ -287,11 +366,20 @@ generate_secret() {
}
generate_config_content() {
conf_secret="$1"
conf_tag="$2"
escaped_tls_domain="$(printf '%s\n' "$TLS_DOMAIN" | tr -d '[:cntrl:]' | sed 's/\\/\\\\/g; s/"/\\"/g')"
cat <<EOF
[general]
use_middle_proxy = true
EOF
if [ -n "$conf_tag" ]; then
echo "ad_tag = \"${conf_tag}\""
fi
cat <<EOF
[general.modes]
classic = false
@ -299,7 +387,7 @@ secure = false
tls = true
[server]
port = ${SERVER_PORT}
[server.api]
enabled = true
@ -310,28 +398,73 @@ whitelist = ["127.0.0.1/32"]
tls_domain = "${escaped_tls_domain}"
[access.users]
hello = "${conf_secret}"
EOF
}
install_config() {
if is_config_exists; then
say " -> Config already exists at $CONFIG_FILE. Updating parameters..."
tmp_conf="${TEMP_DIR}/config.tmp"
$SUDO cat "$CONFIG_FILE" > "$tmp_conf"
escaped_domain="$(printf '%s\n' "$TLS_DOMAIN" | tr -d '[:cntrl:]' | sed 's/\\/\\\\/g; s/"/\\"/g')"
export AWK_PORT="$SERVER_PORT"
export AWK_SECRET="$USER_SECRET"
export AWK_DOMAIN="$escaped_domain"
export AWK_AD_TAG="$AD_TAG"
export AWK_FLAG_P="$PORT_PROVIDED"
export AWK_FLAG_S="$SECRET_PROVIDED"
export AWK_FLAG_D="$DOMAIN_PROVIDED"
export AWK_FLAG_A="$AD_TAG_PROVIDED"
awk '
BEGIN { ad_tag_handled = 0 }
ENVIRON["AWK_FLAG_P"] == "1" && /^[ \t]*port[ \t]*=/ { print "port = " ENVIRON["AWK_PORT"]; next }
ENVIRON["AWK_FLAG_S"] == "1" && /^[ \t]*hello[ \t]*=/ { print "hello = \"" ENVIRON["AWK_SECRET"] "\""; next }
ENVIRON["AWK_FLAG_D"] == "1" && /^[ \t]*tls_domain[ \t]*=/ { print "tls_domain = \"" ENVIRON["AWK_DOMAIN"] "\""; next }
ENVIRON["AWK_FLAG_A"] == "1" && /^[ \t]*ad_tag[ \t]*=/ {
if (!ad_tag_handled) {
print "ad_tag = \"" ENVIRON["AWK_AD_TAG"] "\"";
ad_tag_handled = 1;
}
next
}
ENVIRON["AWK_FLAG_A"] == "1" && /^\[general\]/ {
print;
if (!ad_tag_handled) {
print "ad_tag = \"" ENVIRON["AWK_AD_TAG"] "\"";
ad_tag_handled = 1;
}
next
}
{ print }
' "$tmp_conf" > "${tmp_conf}.new" && mv "${tmp_conf}.new" "$tmp_conf"
[ "$PORT_PROVIDED" -eq 1 ] && say " -> Updated port: $SERVER_PORT"
[ "$SECRET_PROVIDED" -eq 1 ] && say " -> Updated secret for user 'hello'"
[ "$DOMAIN_PROVIDED" -eq 1 ] && say " -> Updated tls_domain: $TLS_DOMAIN"
[ "$AD_TAG_PROVIDED" -eq 1 ] && say " -> Updated ad_tag"
write_root "$CONFIG_FILE" < "$tmp_conf"
rm -f "$tmp_conf"
return 0
fi
if [ -z "$USER_SECRET" ]; then
USER_SECRET="$(generate_secret)" || die "Failed to generate secret."
fi
generate_config_content "$USER_SECRET" "$AD_TAG" | write_root "$CONFIG_FILE" || die "Failed to install config"
$SUDO chown root:telemt "$CONFIG_FILE" && $SUDO chmod 640 "$CONFIG_FILE"
say " -> Config created successfully."
say " -> Configured secret for user 'hello': $USER_SECRET"
}
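The awk pass in install_config avoids shell-quoting pitfalls by passing new values through the environment (`ENVIRON`) rather than interpolating them into the awk program text; a reduced illustration of the same technique on sample input (the `AWK_PORT` name matches the installer, the sample lines are made up):

```shell
#!/bin/sh
# Rewrite only the matching key; every other line passes through unchanged.
printf 'port = 443\ntls_domain = "old.example"\n' |
AWK_PORT=8443 awk '
  /^[ \t]*port[ \t]*=/ { print "port = " ENVIRON["AWK_PORT"]; next }
  { print }
'
```

This prints `port = 8443` followed by the untouched `tls_domain` line. Because the value never enters the awk source text, characters like quotes or backslashes in it cannot break the program.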
generate_systemd_content() {
@ -348,9 +481,10 @@ Group=telemt
WorkingDirectory=$WORK_DIR
ExecStart="${INSTALL_DIR}/${BIN_NAME}" "${CONFIG_FILE}"
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
AmbientCapabilities=CAP_NET_BIND_SERVICE CAP_NET_ADMIN
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_NET_ADMIN
[Install]
WantedBy=multi-user.target
@ -415,7 +549,8 @@ kill_user_procs() {
if command -v pgrep >/dev/null 2>&1; then
pids="$(pgrep -u telemt 2>/dev/null || true)"
else
pids="$(ps -ef 2>/dev/null | awk '$1=="telemt"{print $2}' || true)"
[ -z "$pids" ] && pids="$(ps 2>/dev/null | awk '$2=="telemt"{print $1}' || true)"
fi
if [ -n "$pids" ]; then
@ -457,11 +592,12 @@ uninstall() {
say ">>> Stage 5: Purging configuration, data, and user"
$SUDO rm -rf "$CONFIG_DIR" "$WORK_DIR"
$SUDO rm -f "$CONFIG_FILE"
sleep 1
$SUDO userdel telemt 2>/dev/null || $SUDO deluser telemt 2>/dev/null || true
if check_os_entity group telemt; then
$SUDO groupdel telemt 2>/dev/null || $SUDO delgroup telemt 2>/dev/null || true
fi
else
say "Note: Configuration and user kept. Run with 'purge' to remove completely."
fi
@ -479,7 +615,17 @@ case "$ACTION" in
say "Starting installation of $BIN_NAME (Version: $TARGET_VERSION)"
say ">>> Stage 1: Verifying environment and dependencies"
verify_common
verify_install_deps
if is_config_exists && [ "$PORT_PROVIDED" -eq 0 ]; then
ext_port="$($SUDO awk -F'=' '/^[ \t]*port[ \t]*=/ {gsub(/[^0-9]/, "", $2); print $2; exit}' "$CONFIG_FILE" 2>/dev/null || true)"
if [ -n "$ext_port" ]; then
SERVER_PORT="$ext_port"
fi
fi
check_port_availability
if [ "$TARGET_VERSION" != "latest" ]; then
TARGET_VERSION="${TARGET_VERSION#v}"
@ -500,7 +646,21 @@ case "$ACTION" in
die "Temp directory is invalid or was not created"
fi
if ! fetch_file "$DL_URL" "${TEMP_DIR}/${FILE_NAME}"; then
if [ "$ARCH" = "x86_64-v3" ]; then
say " -> x86_64-v3 build not found, falling back to standard x86_64..."
ARCH="x86_64"
FILE_NAME="${BIN_NAME}-${ARCH}-linux-${LIBC}.tar.gz"
if [ "$TARGET_VERSION" = "latest" ]; then
DL_URL="https://github.com/${REPO}/releases/latest/download/${FILE_NAME}"
else
DL_URL="https://github.com/${REPO}/releases/download/${TARGET_VERSION}/${FILE_NAME}"
fi
fetch_file "$DL_URL" "${TEMP_DIR}/${FILE_NAME}" || die "Download failed"
else
die "Download failed"
fi
fi
say ">>> Stage 3: Extracting archive"
if ! gzip -dc "${TEMP_DIR}/${FILE_NAME}" | tar -xf - -C "$TEMP_DIR" 2>/dev/null; then
@ -516,7 +676,7 @@ case "$ACTION" in
say ">>> Stage 5: Installing binary"
install_binary "$EXTRACTED_BIN" "${INSTALL_DIR}/${BIN_NAME}"
say ">>> Stage 6: Generating/Updating configuration"
install_config
say ">>> Stage 7: Installing and starting service"
@ -543,11 +703,14 @@ case "$ACTION" in
printf ' rc-service %s status\n\n' "$SERVICE_NAME"
fi
API_LISTEN="$($SUDO awk -F'"' '/^[ \t]*listen[ \t]*=/ {print $2; exit}' "$CONFIG_FILE" 2>/dev/null || true)"
API_LISTEN="${API_LISTEN:-127.0.0.1:9091}"
printf 'To get your user connection links (for Telegram), run:\n'
if command -v jq >/dev/null 2>&1; then
printf ' curl -s http://%s/v1/users | jq -r '\''.data[]? | "User: \\(.username)\\n\\(.links.tls[0] // empty)\\n"'\''\n' "$API_LISTEN"
else
printf ' curl -s http://%s/v1/users\n' "$API_LISTEN"
printf ' (Tip: Install '\''jq'\'' for a much cleaner output)\n'
fi


@ -37,11 +37,12 @@ mod runtime_watch;
mod runtime_zero;
mod users;
use config_store::{current_revision, load_config_from_disk, parse_if_match};
use events::ApiEventStore;
use http_utils::{error_response, read_json, read_optional_json, success_response};
use model::{
ApiFailure, CreateUserRequest, DeleteUserResponse, HealthData, PatchUserRequest,
RotateSecretRequest, SummaryData, UserActiveIps,
};
use runtime_edge::{
EdgeConnectionsCacheEntry, build_runtime_connections_summary_data,
@@ -362,15 +363,33 @@ async fn handle(
);
Ok(success_response(StatusCode::OK, data, revision))
}
("GET", "/v1/stats/users/active-ips") => {
let revision = current_revision(&shared.config_path).await?;
let usernames: Vec<_> = cfg.access.users.keys().cloned().collect();
let active_ips_map = shared.ip_tracker.get_active_ips_for_users(&usernames).await;
let mut data: Vec<UserActiveIps> = active_ips_map
.into_iter()
.filter(|(_, ips)| !ips.is_empty())
.map(|(username, active_ips)| UserActiveIps {
username,
active_ips,
})
.collect();
data.sort_by(|a, b| a.username.cmp(&b.username));
Ok(success_response(StatusCode::OK, data, revision))
}
("GET", "/v1/stats/users") | ("GET", "/v1/users") => {
let revision = current_revision(&shared.config_path).await?;
let disk_cfg = load_config_from_disk(&shared.config_path).await?;
let runtime_cfg = config_rx.borrow().clone();
let (detected_ip_v4, detected_ip_v6) = shared.detected_link_ips();
let users = users_from_config(
&cfg,
&disk_cfg,
&shared.stats,
&shared.ip_tracker,
detected_ip_v4,
detected_ip_v6,
Some(runtime_cfg.as_ref()),
)
.await;
Ok(success_response(StatusCode::OK, users, revision))
@@ -389,7 +408,7 @@ async fn handle(
let expected_revision = parse_if_match(req.headers());
let body = read_json::<CreateUserRequest>(req.into_body(), body_limit).await?;
let result = create_user(body, expected_revision, &shared).await;
let (data, revision) = match result {
let (mut data, revision) = match result {
Ok(ok) => ok,
Err(error) => {
shared
@@ -398,11 +417,18 @@
return Err(error);
}
};
let runtime_cfg = config_rx.borrow().clone();
data.user.in_runtime = runtime_cfg.access.users.contains_key(&data.user.username);
shared.runtime_events.record(
"api.user.create.ok",
format!("username={}", data.user.username),
);
Ok(success_response(StatusCode::CREATED, data, revision))
let status = if data.user.in_runtime {
StatusCode::CREATED
} else {
StatusCode::ACCEPTED
};
Ok(success_response(status, data, revision))
}
_ => {
if let Some(user) = path.strip_prefix("/v1/users/")
@@ -411,13 +437,16 @@
{
if method == Method::GET {
let revision = current_revision(&shared.config_path).await?;
let disk_cfg = load_config_from_disk(&shared.config_path).await?;
let runtime_cfg = config_rx.borrow().clone();
let (detected_ip_v4, detected_ip_v6) = shared.detected_link_ips();
let users = users_from_config(
&cfg,
&disk_cfg,
&shared.stats,
&shared.ip_tracker,
detected_ip_v4,
detected_ip_v6,
Some(runtime_cfg.as_ref()),
)
.await;
if let Some(user_info) =
@@ -445,7 +474,7 @@
let body =
read_json::<PatchUserRequest>(req.into_body(), body_limit).await?;
let result = patch_user(user, body, expected_revision, &shared).await;
let (data, revision) = match result {
let (mut data, revision) = match result {
Ok(ok) => ok,
Err(error) => {
shared.runtime_events.record(
@@ -455,10 +484,17 @@
return Err(error);
}
};
let runtime_cfg = config_rx.borrow().clone();
data.in_runtime = runtime_cfg.access.users.contains_key(&data.username);
shared
.runtime_events
.record("api.user.patch.ok", format!("username={}", data.username));
return Ok(success_response(StatusCode::OK, data, revision));
let status = if data.in_runtime {
StatusCode::OK
} else {
StatusCode::ACCEPTED
};
return Ok(success_response(status, data, revision));
}
if method == Method::DELETE {
if api_cfg.read_only {
@@ -486,7 +522,18 @@
shared
.runtime_events
.record("api.user.delete.ok", format!("username={}", deleted_user));
return Ok(success_response(StatusCode::OK, deleted_user, revision));
let runtime_cfg = config_rx.borrow().clone();
let in_runtime = runtime_cfg.access.users.contains_key(&deleted_user);
let response = DeleteUserResponse {
username: deleted_user,
in_runtime,
};
let status = if response.in_runtime {
StatusCode::ACCEPTED
} else {
StatusCode::OK
};
return Ok(success_response(status, response, revision));
}
if method == Method::POST
&& let Some(base_user) = user.strip_suffix("/rotate-secret")
@@ -514,7 +561,7 @@
&shared,
)
.await;
let (data, revision) = match result {
let (mut data, revision) = match result {
Ok(ok) => ok,
Err(error) => {
shared.runtime_events.record(
@@ -524,11 +571,19 @@
return Err(error);
}
};
let runtime_cfg = config_rx.borrow().clone();
data.user.in_runtime =
runtime_cfg.access.users.contains_key(&data.user.username);
shared.runtime_events.record(
"api.user.rotate_secret.ok",
format!("username={}", base_user),
);
return Ok(success_response(StatusCode::OK, data, revision));
let status = if data.user.in_runtime {
StatusCode::OK
} else {
StatusCode::ACCEPTED
};
return Ok(success_response(status, data, revision));
}
if method == Method::POST {
return Ok(error_response(

View File
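The handler hunks above repeat one convention: a mutation answers 200/201 when the change is already live in the runtime config snapshot, and 202 Accepted when it has only been persisted to disk; delete is inverted, answering 202 while the user still lingers in runtime. A minimal sketch of that mapping, with illustrative names rather than the crate's real types:

```rust
// Status selection for config mutations (illustrative sketch, not crate code).
#[derive(Debug, PartialEq)]
pub enum ApplyStatus {
    Created,  // 201: new user is live in the runtime snapshot
    Ok,       // 200: change (or removal) is fully applied
    Accepted, // 202: persisted to disk, runtime has not caught up yet
}

pub fn create_status(in_runtime: bool) -> ApplyStatus {
    if in_runtime { ApplyStatus::Created } else { ApplyStatus::Accepted }
}

pub fn mutate_status(in_runtime: bool) -> ApplyStatus {
    if in_runtime { ApplyStatus::Ok } else { ApplyStatus::Accepted }
}

// Delete is inverted: still-in-runtime means the removal is not finished.
pub fn delete_status(in_runtime: bool) -> ApplyStatus {
    if in_runtime { ApplyStatus::Accepted } else { ApplyStatus::Ok }
}
```

Clients polling the API can treat any 202 as "retry after the next runtime reload".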

@@ -81,10 +81,21 @@ pub(super) struct ZeroCoreData {
pub(super) connections_total: u64,
pub(super) connections_bad_total: u64,
pub(super) handshake_timeouts_total: u64,
pub(super) accept_permit_timeout_total: u64,
pub(super) configured_users: usize,
pub(super) telemetry_core_enabled: bool,
pub(super) telemetry_user_enabled: bool,
pub(super) telemetry_me_level: String,
pub(super) conntrack_control_enabled: bool,
pub(super) conntrack_control_available: bool,
pub(super) conntrack_pressure_active: bool,
pub(super) conntrack_event_queue_depth: u64,
pub(super) conntrack_rule_apply_ok: bool,
pub(super) conntrack_delete_attempt_total: u64,
pub(super) conntrack_delete_success_total: u64,
pub(super) conntrack_delete_not_found_total: u64,
pub(super) conntrack_delete_error_total: u64,
pub(super) conntrack_close_event_drop_total: u64,
}
#[derive(Serialize, Clone)]
@@ -428,6 +439,7 @@ pub(super) struct UserLinks {
#[derive(Serialize)]
pub(super) struct UserInfo {
pub(super) username: String,
pub(super) in_runtime: bool,
pub(super) user_ad_tag: Option<String>,
pub(super) max_tcp_conns: Option<usize>,
pub(super) expiration_rfc3339: Option<String>,
@@ -442,12 +454,24 @@ pub(super) struct UserInfo {
pub(super) links: UserLinks,
}
#[derive(Serialize)]
pub(super) struct UserActiveIps {
pub(super) username: String,
pub(super) active_ips: Vec<IpAddr>,
}
#[derive(Serialize)]
pub(super) struct CreateUserResponse {
pub(super) user: UserInfo,
pub(super) secret: String,
}
#[derive(Serialize)]
pub(super) struct DeleteUserResponse {
pub(super) username: String,
pub(super) in_runtime: bool,
}
#[derive(Deserialize)]
pub(super) struct CreateUserRequest {
pub(super) username: String,

View File

@@ -39,10 +39,21 @@ pub(super) fn build_zero_all_data(stats: &Stats, configured_users: usize) -> Zer
connections_total: stats.get_connects_all(),
connections_bad_total: stats.get_connects_bad(),
handshake_timeouts_total: stats.get_handshake_timeouts(),
accept_permit_timeout_total: stats.get_accept_permit_timeout_total(),
configured_users,
telemetry_core_enabled: telemetry.core_enabled,
telemetry_user_enabled: telemetry.user_enabled,
telemetry_me_level: telemetry.me_level.to_string(),
conntrack_control_enabled: stats.get_conntrack_control_enabled(),
conntrack_control_available: stats.get_conntrack_control_available(),
conntrack_pressure_active: stats.get_conntrack_pressure_active(),
conntrack_event_queue_depth: stats.get_conntrack_event_queue_depth(),
conntrack_rule_apply_ok: stats.get_conntrack_rule_apply_ok(),
conntrack_delete_attempt_total: stats.get_conntrack_delete_attempt_total(),
conntrack_delete_success_total: stats.get_conntrack_delete_success_total(),
conntrack_delete_not_found_total: stats.get_conntrack_delete_not_found_total(),
conntrack_delete_error_total: stats.get_conntrack_delete_error_total(),
conntrack_close_event_drop_total: stats.get_conntrack_close_event_drop_total(),
},
upstream: build_zero_upstream_data(stats),
middle_proxy: ZeroMiddleProxyData {

View File

@@ -35,11 +35,14 @@ pub(super) struct RuntimeGatesData {
pub(super) conditional_cast_enabled: bool,
pub(super) me_runtime_ready: bool,
pub(super) me2dc_fallback_enabled: bool,
pub(super) me2dc_fast_enabled: bool,
pub(super) use_middle_proxy: bool,
pub(super) route_mode: &'static str,
pub(super) reroute_active: bool,
#[serde(skip_serializing_if = "Option::is_none")]
pub(super) reroute_to_direct_at_epoch_secs: Option<u64>,
#[serde(skip_serializing_if = "Option::is_none")]
pub(super) reroute_reason: Option<&'static str>,
pub(super) startup_status: &'static str,
pub(super) startup_stage: String,
pub(super) startup_progress_pct: f64,
@@ -47,6 +50,7 @@
#[derive(Serialize)]
pub(super) struct EffectiveTimeoutLimits {
pub(super) client_first_byte_idle_secs: u64,
pub(super) client_handshake_secs: u64,
pub(super) tg_connect_secs: u64,
pub(super) client_keepalive_secs: u64,
@@ -86,6 +90,7 @@ pub(super) struct EffectiveMiddleProxyLimits {
pub(super) writer_pick_mode: &'static str,
pub(super) writer_pick_sample_size: u8,
pub(super) me2dc_fallback: bool,
pub(super) me2dc_fast: bool,
}
#[derive(Serialize)]
@@ -95,6 +100,11 @@ pub(super) struct EffectiveUserIpPolicyLimits {
pub(super) window_secs: u64,
}
#[derive(Serialize)]
pub(super) struct EffectiveUserTcpPolicyLimits {
pub(super) global_each: usize,
}
#[derive(Serialize)]
pub(super) struct EffectiveLimitsData {
pub(super) update_every_secs: u64,
@@ -104,6 +114,7 @@ pub(super) struct EffectiveLimitsData {
pub(super) upstream: EffectiveUpstreamLimits,
pub(super) middle_proxy: EffectiveMiddleProxyLimits,
pub(super) user_ip_policy: EffectiveUserIpPolicyLimits,
pub(super) user_tcp_policy: EffectiveUserTcpPolicyLimits,
}
#[derive(Serialize)]
@@ -169,6 +180,8 @@ pub(super) async fn build_runtime_gates_data(
let startup_summary = build_runtime_startup_summary(shared).await;
let route_state = shared.route_runtime.snapshot();
let route_mode = route_state.mode.as_str();
let fast_fallback_enabled =
cfg.general.use_middle_proxy && cfg.general.me2dc_fallback && cfg.general.me2dc_fast;
let reroute_active = cfg.general.use_middle_proxy
&& cfg.general.me2dc_fallback
&& matches!(route_state.mode, RelayRouteMode::Direct);
@@ -177,6 +190,15 @@
} else {
None
};
let reroute_reason = if reroute_active {
if fast_fallback_enabled {
Some("fast_not_ready_fallback")
} else {
Some("strict_grace_fallback")
}
} else {
None
};
let me_runtime_ready = if !cfg.general.use_middle_proxy {
true
} else {
@@ -194,10 +216,12 @@
conditional_cast_enabled: cfg.general.use_middle_proxy,
me_runtime_ready,
me2dc_fallback_enabled: cfg.general.me2dc_fallback,
me2dc_fast_enabled: fast_fallback_enabled,
use_middle_proxy: cfg.general.use_middle_proxy,
route_mode,
reroute_active,
reroute_to_direct_at_epoch_secs,
reroute_reason,
startup_status: startup_summary.status,
startup_stage: startup_summary.stage,
startup_progress_pct: startup_summary.progress_pct,
@@ -210,8 +234,9 @@ pub(super) fn build_limits_effective_data(cfg: &ProxyConfig) -> EffectiveLimitsD
me_reinit_every_secs: cfg.general.effective_me_reinit_every_secs(),
me_pool_force_close_secs: cfg.general.effective_me_pool_force_close_secs(),
timeouts: EffectiveTimeoutLimits {
client_first_byte_idle_secs: cfg.timeouts.client_first_byte_idle_secs,
client_handshake_secs: cfg.timeouts.client_handshake,
tg_connect_secs: cfg.timeouts.tg_connect,
tg_connect_secs: cfg.general.tg_connect,
client_keepalive_secs: cfg.timeouts.client_keepalive,
client_ack_secs: cfg.timeouts.client_ack,
me_one_retry: cfg.timeouts.me_one_retry,
@@ -263,12 +288,16 @@ pub(super) fn build_limits_effective_data(cfg: &ProxyConfig) -> EffectiveLimitsD
writer_pick_mode: me_writer_pick_mode_label(cfg.general.me_writer_pick_mode),
writer_pick_sample_size: cfg.general.me_writer_pick_sample_size,
me2dc_fallback: cfg.general.me2dc_fallback,
me2dc_fast: cfg.general.me2dc_fast,
},
user_ip_policy: EffectiveUserIpPolicyLimits {
global_each: cfg.access.user_max_unique_ips_global_each,
mode: user_max_unique_ips_mode_label(cfg.access.user_max_unique_ips_mode),
window_secs: cfg.access.user_max_unique_ips_window_secs,
},
user_tcp_policy: EffectiveUserTcpPolicyLimits {
global_each: cfg.access.user_max_tcp_conns_global_each,
},
}
}
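The `reroute_reason` hunk above distinguishes why traffic fell back to direct routing. Restated as one self-contained function (name and string values mirror the diff; the surrounding config and route-state types are stand-ins):

```rust
// Why the proxy is currently routing direct instead of via the middle-end (ME).
// Mirrors the reroute_reason selection in build_runtime_gates_data.
pub fn reroute_reason(
    reroute_active: bool,
    fast_fallback_enabled: bool,
) -> Option<&'static str> {
    if !reroute_active {
        return None;
    }
    Some(if fast_fallback_enabled {
        // me2dc_fast path: fell back immediately because ME was not ready yet
        "fast_not_ready_fallback"
    } else {
        // strict path: fell back only after the grace window expired
        "strict_grace_fallback"
    })
}
```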

View File

@@ -136,6 +136,7 @@ pub(super) async fn create_user(
&shared.ip_tracker,
detected_ip_v4,
detected_ip_v6,
None,
)
.await;
let user = users
@@ -143,8 +144,16 @@
.find(|entry| entry.username == body.username)
.unwrap_or(UserInfo {
username: body.username.clone(),
in_runtime: false,
user_ad_tag: None,
max_tcp_conns: None,
max_tcp_conns: cfg
.access
.user_max_tcp_conns
.get(&body.username)
.copied()
.filter(|limit| *limit > 0)
.or((cfg.access.user_max_tcp_conns_global_each > 0)
.then_some(cfg.access.user_max_tcp_conns_global_each)),
expiration_rfc3339: None,
data_quota_bytes: None,
max_unique_ips: updated_limit,
@@ -236,6 +245,7 @@ pub(super) async fn patch_user(
&shared.ip_tracker,
detected_ip_v4,
detected_ip_v6,
None,
)
.await;
let user_info = users
@@ -293,6 +303,7 @@ pub(super) async fn rotate_secret(
&shared.ip_tracker,
detected_ip_v4,
detected_ip_v6,
None,
)
.await;
let user_info = users
@@ -365,6 +376,7 @@ pub(super) async fn users_from_config(
ip_tracker: &UserIpTracker,
startup_detected_ip_v4: Option<IpAddr>,
startup_detected_ip_v6: Option<IpAddr>,
runtime_cfg: Option<&ProxyConfig>,
) -> Vec<UserInfo> {
let mut names = cfg.access.users.keys().cloned().collect::<Vec<_>>();
names.sort();
@@ -394,8 +406,18 @@
tls: Vec::new(),
});
users.push(UserInfo {
in_runtime: runtime_cfg
.map(|runtime| runtime.access.users.contains_key(&username))
.unwrap_or(false),
user_ad_tag: cfg.access.user_ad_tags.get(&username).cloned(),
max_tcp_conns: cfg.access.user_max_tcp_conns.get(&username).copied(),
max_tcp_conns: cfg
.access
.user_max_tcp_conns
.get(&username)
.copied()
.filter(|limit| *limit > 0)
.or((cfg.access.user_max_tcp_conns_global_each > 0)
.then_some(cfg.access.user_max_tcp_conns_global_each)),
expiration_rfc3339: cfg
.access
.user_expirations
@@ -572,3 +594,94 @@ fn resolve_tls_domains(cfg: &ProxyConfig) -> Vec<&str> {
}
domains
}
#[cfg(test)]
mod tests {
use super::*;
use crate::ip_tracker::UserIpTracker;
use crate::stats::Stats;
#[tokio::test]
async fn users_from_config_reports_effective_tcp_limit_with_global_fallback() {
let mut cfg = ProxyConfig::default();
cfg.access.users.insert(
"alice".to_string(),
"0123456789abcdef0123456789abcdef".to_string(),
);
cfg.access.user_max_tcp_conns_global_each = 7;
let stats = Stats::new();
let tracker = UserIpTracker::new();
let users = users_from_config(&cfg, &stats, &tracker, None, None, None).await;
let alice = users
.iter()
.find(|entry| entry.username == "alice")
.expect("alice must be present");
assert!(!alice.in_runtime);
assert_eq!(alice.max_tcp_conns, Some(7));
cfg.access.user_max_tcp_conns.insert("alice".to_string(), 5);
let users = users_from_config(&cfg, &stats, &tracker, None, None, None).await;
let alice = users
.iter()
.find(|entry| entry.username == "alice")
.expect("alice must be present");
assert!(!alice.in_runtime);
assert_eq!(alice.max_tcp_conns, Some(5));
cfg.access.user_max_tcp_conns.insert("alice".to_string(), 0);
let users = users_from_config(&cfg, &stats, &tracker, None, None, None).await;
let alice = users
.iter()
.find(|entry| entry.username == "alice")
.expect("alice must be present");
assert!(!alice.in_runtime);
assert_eq!(alice.max_tcp_conns, Some(7));
cfg.access.user_max_tcp_conns_global_each = 0;
let users = users_from_config(&cfg, &stats, &tracker, None, None, None).await;
let alice = users
.iter()
.find(|entry| entry.username == "alice")
.expect("alice must be present");
assert!(!alice.in_runtime);
assert_eq!(alice.max_tcp_conns, None);
}
#[tokio::test]
async fn users_from_config_marks_runtime_membership_when_snapshot_is_provided() {
let mut disk_cfg = ProxyConfig::default();
disk_cfg.access.users.insert(
"alice".to_string(),
"0123456789abcdef0123456789abcdef".to_string(),
);
disk_cfg.access.users.insert(
"bob".to_string(),
"fedcba9876543210fedcba9876543210".to_string(),
);
let mut runtime_cfg = ProxyConfig::default();
runtime_cfg.access.users.insert(
"alice".to_string(),
"0123456789abcdef0123456789abcdef".to_string(),
);
let stats = Stats::new();
let tracker = UserIpTracker::new();
let users =
users_from_config(&disk_cfg, &stats, &tracker, None, None, Some(&runtime_cfg)).await;
let alice = users
.iter()
.find(|entry| entry.username == "alice")
.expect("alice must be present");
let bob = users
.iter()
.find(|entry| entry.username == "bob")
.expect("bob must be present");
assert!(alice.in_runtime);
assert!(!bob.in_runtime);
}
}
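The tests above pin down the fallback chain for `max_tcp_conns`. Extracted into a single helper (hypothetical name; the diff inlines this logic with `filter`/`or`):

```rust
// Effective per-user TCP connection limit:
// a positive per-user entry wins; otherwise a positive global per-user
// default (user_max_tcp_conns_global_each) applies; otherwise unlimited.
pub fn effective_tcp_limit(per_user: Option<usize>, global_each: usize) -> Option<usize> {
    per_user
        .filter(|limit| *limit > 0)
        .or((global_each > 0).then_some(global_each))
}
```

A per-user value of 0 therefore does not mean "blocked" — it means "defer to the global default", matching the `Some(7)` expectation in the test.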

View File

@@ -1,11 +1,270 @@
//! CLI commands: --init (fire-and-forget setup)
//! CLI commands: --init (fire-and-forget setup), daemon options, subcommands
//!
//! Subcommands:
//! - `start [OPTIONS] [config.toml]` - Start the daemon
//! - `stop [--pid-file PATH]` - Stop a running daemon
//! - `reload [--pid-file PATH]` - Reload configuration (SIGHUP)
//! - `status [--pid-file PATH]` - Check daemon status
//! - `run [OPTIONS] [config.toml]` - Run in foreground (default behavior)
use rand::RngExt;
use std::fs;
use std::path::{Path, PathBuf};
use std::process::Command;
#[cfg(unix)]
use crate::daemon::{self, DEFAULT_PID_FILE, DaemonOptions};
/// CLI subcommand to execute.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum Subcommand {
/// Run the proxy (default, or explicit `run` subcommand).
Run,
/// Start as daemon (`start` subcommand).
Start,
/// Stop a running daemon (`stop` subcommand).
Stop,
/// Reload configuration (`reload` subcommand).
Reload,
/// Check daemon status (`status` subcommand).
Status,
/// Fire-and-forget setup (`--init`).
Init,
}
/// Parsed subcommand with its options.
#[derive(Debug)]
pub struct ParsedCommand {
pub subcommand: Subcommand,
pub pid_file: PathBuf,
pub config_path: String,
#[cfg(unix)]
pub daemon_opts: DaemonOptions,
pub init_opts: Option<InitOptions>,
}
impl Default for ParsedCommand {
fn default() -> Self {
Self {
subcommand: Subcommand::Run,
#[cfg(unix)]
pid_file: PathBuf::from(DEFAULT_PID_FILE),
#[cfg(not(unix))]
pid_file: PathBuf::from("/var/run/telemt.pid"),
config_path: "config.toml".to_string(),
#[cfg(unix)]
daemon_opts: DaemonOptions::default(),
init_opts: None,
}
}
}
/// Parse CLI arguments into a command structure.
pub fn parse_command(args: &[String]) -> ParsedCommand {
let mut cmd = ParsedCommand::default();
// Check for --init first (legacy form)
if args.iter().any(|a| a == "--init") {
cmd.subcommand = Subcommand::Init;
cmd.init_opts = parse_init_args(args);
return cmd;
}
// Check for subcommand as first argument
if let Some(first) = args.first() {
match first.as_str() {
"start" => {
cmd.subcommand = Subcommand::Start;
#[cfg(unix)]
{
cmd.daemon_opts = parse_daemon_args(args);
// Force daemonize for start command
cmd.daemon_opts.daemonize = true;
}
}
"stop" => {
cmd.subcommand = Subcommand::Stop;
}
"reload" => {
cmd.subcommand = Subcommand::Reload;
}
"status" => {
cmd.subcommand = Subcommand::Status;
}
"run" => {
cmd.subcommand = Subcommand::Run;
#[cfg(unix)]
{
cmd.daemon_opts = parse_daemon_args(args);
}
}
_ => {
// No subcommand, default to Run
#[cfg(unix)]
{
cmd.daemon_opts = parse_daemon_args(args);
}
}
}
}
// Parse remaining options
let mut i = 0;
while i < args.len() {
match args[i].as_str() {
// Skip subcommand names
"start" | "stop" | "reload" | "status" | "run" => {}
// PID file option (for stop/reload/status)
"--pid-file" => {
i += 1;
if i < args.len() {
cmd.pid_file = PathBuf::from(&args[i]);
#[cfg(unix)]
{
cmd.daemon_opts.pid_file = Some(cmd.pid_file.clone());
}
}
}
s if s.starts_with("--pid-file=") => {
cmd.pid_file = PathBuf::from(s.trim_start_matches("--pid-file="));
#[cfg(unix)]
{
cmd.daemon_opts.pid_file = Some(cmd.pid_file.clone());
}
}
// Config path (positional, non-flag argument)
s if !s.starts_with('-') => {
cmd.config_path = s.to_string();
}
_ => {}
}
i += 1;
}
cmd
}
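`parse_command` accepts both flag spellings: `--pid-file PATH` and `--pid-file=PATH`. The loop shape in isolation (hypothetical helper, same pattern as above):

```rust
use std::path::PathBuf;

// Extract the PID-file path from CLI args, accepting both
// `--pid-file PATH` and `--pid-file=PATH` spellings.
pub fn parse_pid_file(args: &[String]) -> Option<PathBuf> {
    let mut i = 0;
    while i < args.len() {
        match args[i].as_str() {
            "--pid-file" => {
                i += 1;
                if i < args.len() {
                    return Some(PathBuf::from(&args[i]));
                }
            }
            s if s.starts_with("--pid-file=") => {
                return Some(PathBuf::from(s.trim_start_matches("--pid-file=")));
            }
            _ => {}
        }
        i += 1;
    }
    None
}
```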
/// Execute a subcommand that doesn't require starting the server.
/// Returns `Some(exit_code)` if the command was handled, `None` if server should start.
#[cfg(unix)]
pub fn execute_subcommand(cmd: &ParsedCommand) -> Option<i32> {
match cmd.subcommand {
Subcommand::Stop => Some(cmd_stop(&cmd.pid_file)),
Subcommand::Reload => Some(cmd_reload(&cmd.pid_file)),
Subcommand::Status => Some(cmd_status(&cmd.pid_file)),
Subcommand::Init => {
if let Some(opts) = cmd.init_opts.clone() {
match run_init(opts) {
Ok(()) => Some(0),
Err(e) => {
eprintln!("[telemt] Init failed: {}", e);
Some(1)
}
}
} else {
Some(1)
}
}
// Run and Start need the server
Subcommand::Run | Subcommand::Start => None,
}
}
#[cfg(not(unix))]
pub fn execute_subcommand(cmd: &ParsedCommand) -> Option<i32> {
match cmd.subcommand {
Subcommand::Stop | Subcommand::Reload | Subcommand::Status => {
eprintln!("[telemt] Subcommand not supported on this platform");
Some(1)
}
Subcommand::Init => {
if let Some(opts) = cmd.init_opts.clone() {
match run_init(opts) {
Ok(()) => Some(0),
Err(e) => {
eprintln!("[telemt] Init failed: {}", e);
Some(1)
}
}
} else {
Some(1)
}
}
Subcommand::Run | Subcommand::Start => None,
}
}
/// Stop command: send SIGTERM to the running daemon.
#[cfg(unix)]
fn cmd_stop(pid_file: &Path) -> i32 {
use nix::sys::signal::Signal;
println!("Stopping telemt daemon...");
match daemon::signal_pid_file(pid_file, Signal::SIGTERM) {
Ok(()) => {
println!("Stop signal sent successfully");
// Wait for process to exit (up to 10 seconds)
for _ in 0..20 {
std::thread::sleep(std::time::Duration::from_millis(500));
if let daemon::DaemonStatus::NotRunning = daemon::check_status(pid_file) {
println!("Daemon stopped");
return 0;
}
}
println!("Daemon may still be shutting down");
0
}
Err(e) => {
eprintln!("Failed to stop daemon: {}", e);
1
}
}
}
/// Reload command: send SIGHUP to trigger config reload.
#[cfg(unix)]
fn cmd_reload(pid_file: &Path) -> i32 {
use nix::sys::signal::Signal;
println!("Reloading telemt configuration...");
match daemon::signal_pid_file(pid_file, Signal::SIGHUP) {
Ok(()) => {
println!("Reload signal sent successfully");
0
}
Err(e) => {
eprintln!("Failed to reload daemon: {}", e);
1
}
}
}
/// Status command: check if daemon is running.
#[cfg(unix)]
fn cmd_status(pid_file: &Path) -> i32 {
match daemon::check_status(pid_file) {
daemon::DaemonStatus::Running(pid) => {
println!("telemt is running (pid {})", pid);
0
}
daemon::DaemonStatus::Stale(pid) => {
println!("telemt is not running (stale pid file, was pid {})", pid);
// Clean up stale PID file
let _ = std::fs::remove_file(pid_file);
1
}
daemon::DaemonStatus::NotRunning => {
println!("telemt is not running");
1
}
}
}
/// Options for the init command
#[derive(Debug, Clone)]
pub struct InitOptions {
pub port: u16,
pub domain: String,
@@ -15,6 +274,64 @@ pub struct InitOptions {
pub no_start: bool,
}
/// Parse daemon-related options from CLI args.
#[cfg(unix)]
pub fn parse_daemon_args(args: &[String]) -> DaemonOptions {
let mut opts = DaemonOptions::default();
let mut i = 0;
while i < args.len() {
match args[i].as_str() {
"--daemon" | "-d" => {
opts.daemonize = true;
}
"--foreground" | "-f" => {
opts.foreground = true;
}
"--pid-file" => {
i += 1;
if i < args.len() {
opts.pid_file = Some(PathBuf::from(&args[i]));
}
}
s if s.starts_with("--pid-file=") => {
opts.pid_file = Some(PathBuf::from(s.trim_start_matches("--pid-file=")));
}
"--run-as-user" => {
i += 1;
if i < args.len() {
opts.user = Some(args[i].clone());
}
}
s if s.starts_with("--run-as-user=") => {
opts.user = Some(s.trim_start_matches("--run-as-user=").to_string());
}
"--run-as-group" => {
i += 1;
if i < args.len() {
opts.group = Some(args[i].clone());
}
}
s if s.starts_with("--run-as-group=") => {
opts.group = Some(s.trim_start_matches("--run-as-group=").to_string());
}
"--working-dir" => {
i += 1;
if i < args.len() {
opts.working_dir = Some(PathBuf::from(&args[i]));
}
}
s if s.starts_with("--working-dir=") => {
opts.working_dir = Some(PathBuf::from(s.trim_start_matches("--working-dir=")));
}
_ => {}
}
i += 1;
}
opts
}
impl Default for InitOptions {
fn default() -> Self {
Self {
@@ -84,10 +401,16 @@ pub fn parse_init_args(args: &[String]) -> Option<InitOptions> {
/// Run the fire-and-forget setup.
pub fn run_init(opts: InitOptions) -> Result<(), Box<dyn std::error::Error>> {
use crate::service::{self, InitSystem, ServiceOptions};
eprintln!("[telemt] Fire-and-forget setup");
eprintln!();
// 1. Generate or validate secret
// 1. Detect init system
let init_system = service::detect_init_system();
eprintln!("[+] Detected init system: {}", init_system);
// 2. Generate or validate secret
let secret = match opts.secret {
Some(s) => {
if s.len() != 32 || !s.chars().all(|c| c.is_ascii_hexdigit()) {
@@ -104,50 +427,74 @@ pub fn run_init(opts: InitOptions) -> Result<(), Box<dyn std::error::Error>> {
eprintln!("[+] Port: {}", opts.port);
eprintln!("[+] Domain: {}", opts.domain);
// 2. Create config directory
// 3. Create config directory
fs::create_dir_all(&opts.config_dir)?;
let config_path = opts.config_dir.join("config.toml");
// 3. Write config
// 4. Write config
let config_content = generate_config(&opts.username, &secret, opts.port, &opts.domain);
fs::write(&config_path, &config_content)?;
eprintln!("[+] Config written to {}", config_path.display());
// 4. Write systemd unit
// 5. Generate and write service file
let exe_path =
std::env::current_exe().unwrap_or_else(|_| PathBuf::from("/usr/local/bin/telemt"));
let unit_path = Path::new("/etc/systemd/system/telemt.service");
let unit_content = generate_systemd_unit(&exe_path, &config_path);
let service_opts = ServiceOptions {
exe_path: &exe_path,
config_path: &config_path,
user: None, // Let systemd/init handle user
group: None,
pid_file: "/var/run/telemt.pid",
working_dir: Some("/var/lib/telemt"),
description: "Telemt MTProxy - Telegram MTProto Proxy",
};
match fs::write(unit_path, &unit_content) {
let service_path = service::service_file_path(init_system);
let service_content = service::generate_service_file(init_system, &service_opts);
// Ensure parent directory exists
if let Some(parent) = Path::new(service_path).parent() {
let _ = fs::create_dir_all(parent);
}
match fs::write(service_path, &service_content) {
Ok(()) => {
eprintln!("[+] Systemd unit written to {}", unit_path.display());
eprintln!("[+] Service file written to {}", service_path);
// Make script executable for OpenRC/FreeBSD
#[cfg(unix)]
if init_system == InitSystem::OpenRC || init_system == InitSystem::FreeBSDRc {
use std::os::unix::fs::PermissionsExt;
let mut perms = fs::metadata(service_path)?.permissions();
perms.set_mode(0o755);
fs::set_permissions(service_path, perms)?;
}
}
Err(e) => {
eprintln!("[!] Cannot write systemd unit (run as root?): {}", e);
eprintln!("[!] Manual unit file content:");
eprintln!("{}", unit_content);
eprintln!("[!] Cannot write service file (run as root?): {}", e);
eprintln!("[!] Manual service file content:");
eprintln!("{}", service_content);
// Still print links and config
// Still print links and installation instructions
eprintln!();
eprintln!("{}", service::installation_instructions(init_system));
print_links(&opts.username, &secret, opts.port, &opts.domain);
return Ok(());
}
}
// 5. Reload systemd
// 6. Install and enable service based on init system
match init_system {
InitSystem::Systemd => {
run_cmd("systemctl", &["daemon-reload"]);
// 6. Enable service
run_cmd("systemctl", &["enable", "telemt.service"]);
eprintln!("[+] Service enabled");
// 7. Start service (unless --no-start)
if !opts.no_start {
run_cmd("systemctl", &["start", "telemt.service"]);
eprintln!("[+] Service started");
// Brief delay then check status
std::thread::sleep(std::time::Duration::from_secs(1));
let status = Command::new("systemctl")
.args(["is-active", "telemt.service"])
@@ -166,10 +513,40 @@ pub fn run_init(opts: InitOptions) -> Result<(), Box<dyn std::error::Error>> {
eprintln!("[+] Service not started (--no-start)");
eprintln!("[+] Start manually: systemctl start telemt.service");
}
}
InitSystem::OpenRC => {
run_cmd("rc-update", &["add", "telemt", "default"]);
eprintln!("[+] Service enabled");
if !opts.no_start {
run_cmd("rc-service", &["telemt", "start"]);
eprintln!("[+] Service started");
} else {
eprintln!("[+] Service not started (--no-start)");
eprintln!("[+] Start manually: rc-service telemt start");
}
}
InitSystem::FreeBSDRc => {
run_cmd("sysrc", &["telemt_enable=YES"]);
eprintln!("[+] Service enabled");
if !opts.no_start {
run_cmd("service", &["telemt", "start"]);
eprintln!("[+] Service started");
} else {
eprintln!("[+] Service not started (--no-start)");
eprintln!("[+] Start manually: service telemt start");
}
}
InitSystem::Unknown => {
eprintln!("[!] Unknown init system - service file written but not installed");
eprintln!("[!] You may need to install it manually");
}
}
eprintln!();
// 8. Print links
// 7. Print links
print_links(&opts.username, &secret, opts.port, &opts.domain);
Ok(())
@@ -207,6 +584,7 @@ me_pool_drain_soft_evict_cooldown_ms = 1000
me_bind_stale_mode = "never"
me_pool_min_fresh_ratio = 0.8
me_reinit_drain_timeout_secs = 90
tg_connect = 10
[network]
ipv4 = true
@@ -232,8 +610,8 @@ ip = "0.0.0.0"
ip = "::"
[timeouts]
client_handshake = 15
tg_connect = 10
client_first_byte_idle_secs = 300
client_handshake = 60
client_keepalive = 60
client_ack = 300
@@ -245,6 +623,7 @@ fake_cert_len = 2048
tls_full_cert_ttl_secs = 90
[access]
user_max_tcp_conns_global_each = 0
replay_check_len = 65536
replay_window_secs = 120
ignore_time_skew = false
@@ -264,35 +643,6 @@ weight = 10
)
}
fn generate_systemd_unit(exe_path: &Path, config_path: &Path) -> String {
format!(
r#"[Unit]
Description=Telemt MTProxy
Documentation=https://github.com/telemt/telemt
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
ExecStart={exe} {config}
Restart=always
RestartSec=5
LimitNOFILE=65535
# Security hardening
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/etc/telemt
PrivateTmp=true
[Install]
WantedBy=multi-user.target
"#,
exe = exe_path.display(),
config = config_path.display(),
)
}
fn run_cmd(cmd: &str, args: &[&str]) {
match Command::new(cmd).args(args).output() {
Ok(output) => {

View File

@@ -48,6 +48,10 @@ const DEFAULT_ME_POOL_DRAIN_SOFT_EVICT_BUDGET_PER_CORE: u16 = 16;
const DEFAULT_ME_POOL_DRAIN_SOFT_EVICT_COOLDOWN_MS: u64 = 1000;
const DEFAULT_USER_MAX_UNIQUE_IPS_WINDOW_SECS: u64 = 30;
const DEFAULT_ACCEPT_PERMIT_TIMEOUT_MS: u64 = 250;
const DEFAULT_CONNTRACK_CONTROL_ENABLED: bool = true;
const DEFAULT_CONNTRACK_PRESSURE_HIGH_WATERMARK_PCT: u8 = 85;
const DEFAULT_CONNTRACK_PRESSURE_LOW_WATERMARK_PCT: u8 = 70;
const DEFAULT_CONNTRACK_DELETE_BUDGET_PER_SEC: u64 = 4096;
const DEFAULT_UPSTREAM_CONNECT_RETRY_ATTEMPTS: u32 = 2;
const DEFAULT_UPSTREAM_UNHEALTHY_FAIL_THRESHOLD: u32 = 5;
const DEFAULT_UPSTREAM_CONNECT_BUDGET_MS: u64 = 3000;
@@ -96,7 +100,7 @@ pub(crate) fn default_fake_cert_len() -> usize {
}
pub(crate) fn default_tls_front_dir() -> String {
"tlsfront".to_string()
"/etc/telemt/tlsfront".to_string()
}
pub(crate) fn default_replay_check_len() -> usize {
@@ -110,7 +114,11 @@ pub(crate) fn default_replay_window_secs() -> u64 {
}
pub(crate) fn default_handshake_timeout() -> u64 {
30
60
}
pub(crate) fn default_client_first_byte_idle_secs() -> u64 {
300
}
pub(crate) fn default_relay_idle_policy_v2_enabled() -> bool {
@@ -209,10 +217,30 @@ pub(crate) fn default_server_max_connections() -> u32 {
10_000
}
pub(crate) fn default_listen_backlog() -> u32 {
1024
}
pub(crate) fn default_accept_permit_timeout_ms() -> u64 {
DEFAULT_ACCEPT_PERMIT_TIMEOUT_MS
}
pub(crate) fn default_conntrack_control_enabled() -> bool {
DEFAULT_CONNTRACK_CONTROL_ENABLED
}
pub(crate) fn default_conntrack_pressure_high_watermark_pct() -> u8 {
DEFAULT_CONNTRACK_PRESSURE_HIGH_WATERMARK_PCT
}
pub(crate) fn default_conntrack_pressure_low_watermark_pct() -> u8 {
DEFAULT_CONNTRACK_PRESSURE_LOW_WATERMARK_PCT
}
pub(crate) fn default_conntrack_delete_budget_per_sec() -> u64 {
DEFAULT_CONNTRACK_DELETE_BUDGET_PER_SEC
}
pub(crate) fn default_prefer_4() -> u8 {
4
}
@@ -273,6 +301,10 @@ pub(crate) fn default_me2dc_fallback() -> bool {
true
}
pub(crate) fn default_me2dc_fast() -> bool {
true
}
pub(crate) fn default_keepalive_interval() -> u64 {
8
}
@ -526,7 +558,7 @@ pub(crate) fn default_beobachten_flush_secs() -> u64 {
}
pub(crate) fn default_beobachten_file() -> String {
"cache/beobachten.txt".to_string()
"/etc/telemt/beobachten.txt".to_string()
}
pub(crate) fn default_tls_new_session_tickets() -> u8 {
@ -799,6 +831,10 @@ pub(crate) fn default_user_max_unique_ips_window_secs() -> u64 {
DEFAULT_USER_MAX_UNIQUE_IPS_WINDOW_SECS
}
pub(crate) fn default_user_max_tcp_conns_global_each() -> usize {
0
}
pub(crate) fn default_user_max_unique_ips_global_each() -> usize {
0
}

View File

@ -117,6 +117,7 @@ pub struct HotFields {
pub users: std::collections::HashMap<String, String>,
pub user_ad_tags: std::collections::HashMap<String, String>,
pub user_max_tcp_conns: std::collections::HashMap<String, usize>,
pub user_max_tcp_conns_global_each: usize,
pub user_expirations: std::collections::HashMap<String, chrono::DateTime<chrono::Utc>>,
pub user_data_quota: std::collections::HashMap<String, u64>,
pub user_max_unique_ips: std::collections::HashMap<String, usize>,
@ -240,6 +241,7 @@ impl HotFields {
users: cfg.access.users.clone(),
user_ad_tags: cfg.access.user_ad_tags.clone(),
user_max_tcp_conns: cfg.access.user_max_tcp_conns.clone(),
user_max_tcp_conns_global_each: cfg.access.user_max_tcp_conns_global_each,
user_expirations: cfg.access.user_expirations.clone(),
user_data_quota: cfg.access.user_data_quota.clone(),
user_max_unique_ips: cfg.access.user_max_unique_ips.clone(),
@ -530,6 +532,7 @@ fn overlay_hot_fields(old: &ProxyConfig, new: &ProxyConfig) -> ProxyConfig {
cfg.access.users = new.access.users.clone();
cfg.access.user_ad_tags = new.access.user_ad_tags.clone();
cfg.access.user_max_tcp_conns = new.access.user_max_tcp_conns.clone();
cfg.access.user_max_tcp_conns_global_each = new.access.user_max_tcp_conns_global_each;
cfg.access.user_expirations = new.access.user_expirations.clone();
cfg.access.user_data_quota = new.access.user_data_quota.clone();
cfg.access.user_max_unique_ips = new.access.user_max_unique_ips.clone();
@ -570,6 +573,7 @@ fn warn_non_hot_changes(old: &ProxyConfig, new: &ProxyConfig, non_hot_changed: b
}
if old.server.proxy_protocol != new.server.proxy_protocol
|| !listeners_equal(&old.server.listeners, &new.server.listeners)
|| old.server.listen_backlog != new.server.listen_backlog
|| old.server.listen_addr_ipv4 != new.server.listen_addr_ipv4
|| old.server.listen_addr_ipv6 != new.server.listen_addr_ipv6
|| old.server.listen_tcp != new.server.listen_tcp
@ -651,6 +655,9 @@ fn warn_non_hot_changes(old: &ProxyConfig, new: &ProxyConfig, non_hot_changed: b
}
if old.general.me_route_no_writer_mode != new.general.me_route_no_writer_mode
|| old.general.me_route_no_writer_wait_ms != new.general.me_route_no_writer_wait_ms
|| old.general.me_route_hybrid_max_wait_ms != new.general.me_route_hybrid_max_wait_ms
|| old.general.me_route_blocking_send_timeout_ms
!= new.general.me_route_blocking_send_timeout_ms
|| old.general.me_route_inline_recovery_attempts
!= new.general.me_route_inline_recovery_attempts
|| old.general.me_route_inline_recovery_wait_ms
@ -669,9 +676,11 @@ fn warn_non_hot_changes(old: &ProxyConfig, new: &ProxyConfig, non_hot_changed: b
warned = true;
warn!("config reload: general.me_init_retry_attempts changed; restart required");
}
if old.general.me2dc_fallback != new.general.me2dc_fallback {
if old.general.me2dc_fallback != new.general.me2dc_fallback
|| old.general.me2dc_fast != new.general.me2dc_fast
{
warned = true;
warn!("config reload: general.me2dc_fallback changed; restart required");
warn!("config reload: general.me2dc_fallback/me2dc_fast changed; restart required");
}
if old.general.proxy_config_v4_cache_path != new.general.proxy_config_v4_cache_path
|| old.general.proxy_config_v6_cache_path != new.general.proxy_config_v6_cache_path
@ -690,6 +699,7 @@ fn warn_non_hot_changes(old: &ProxyConfig, new: &ProxyConfig, non_hot_changed: b
if old.general.upstream_connect_retry_attempts != new.general.upstream_connect_retry_attempts
|| old.general.upstream_connect_retry_backoff_ms
!= new.general.upstream_connect_retry_backoff_ms
|| old.general.tg_connect != new.general.tg_connect
|| old.general.upstream_unhealthy_fail_threshold
!= new.general.upstream_unhealthy_fail_threshold
|| old.general.upstream_connect_failfast_hard_errors
@ -1138,6 +1148,12 @@ fn log_changes(
new_hot.user_max_tcp_conns.len()
);
}
if old_hot.user_max_tcp_conns_global_each != new_hot.user_max_tcp_conns_global_each {
info!(
"config reload: user_max_tcp_conns policy global_each={}",
new_hot.user_max_tcp_conns_global_each
);
}
if old_hot.user_expirations != new_hot.user_expirations {
info!(
"config reload: user_expirations updated ({} entries)",

View File

@ -346,6 +346,12 @@ impl ProxyConfig {
));
}
if config.general.tg_connect == 0 {
return Err(ProxyError::Config(
"general.tg_connect must be > 0".to_string(),
));
}
if config.general.upstream_unhealthy_fail_threshold == 0 {
return Err(ProxyError::Config(
"general.upstream_unhealthy_fail_threshold must be > 0".to_string(),
@ -916,6 +922,43 @@ impl ProxyConfig {
));
}
if config.server.conntrack_control.pressure_high_watermark_pct == 0
|| config.server.conntrack_control.pressure_high_watermark_pct > 100
{
return Err(ProxyError::Config(
"server.conntrack_control.pressure_high_watermark_pct must be within [1, 100]"
.to_string(),
));
}
if config.server.conntrack_control.pressure_low_watermark_pct
>= config.server.conntrack_control.pressure_high_watermark_pct
{
return Err(ProxyError::Config(
"server.conntrack_control.pressure_low_watermark_pct must be < pressure_high_watermark_pct"
.to_string(),
));
}
if config.server.conntrack_control.delete_budget_per_sec == 0 {
return Err(ProxyError::Config(
"server.conntrack_control.delete_budget_per_sec must be > 0".to_string(),
));
}
if matches!(config.server.conntrack_control.mode, ConntrackMode::Hybrid)
&& config
.server
.conntrack_control
.hybrid_listener_ips
.is_empty()
{
return Err(ProxyError::Config(
"server.conntrack_control.hybrid_listener_ips must be non-empty in mode=hybrid"
.to_string(),
));
}
if config.general.effective_me_pool_force_close_secs() > 0
&& config.general.effective_me_pool_force_close_secs()
< config.general.me_pool_drain_ttl_secs
@ -1217,6 +1260,7 @@ mod tests {
default_me_init_retry_attempts()
);
assert_eq!(cfg.general.me2dc_fallback, default_me2dc_fallback());
assert_eq!(cfg.general.me2dc_fast, default_me2dc_fast());
assert_eq!(
cfg.general.proxy_config_v4_cache_path,
default_proxy_config_v4_cache_path()
@ -1320,7 +1364,36 @@ mod tests {
cfg.server.api.runtime_edge_events_capacity,
default_api_runtime_edge_events_capacity()
);
assert_eq!(
cfg.server.conntrack_control.inline_conntrack_control,
default_conntrack_control_enabled()
);
assert_eq!(cfg.server.conntrack_control.mode, ConntrackMode::default());
assert_eq!(
cfg.server.conntrack_control.backend,
ConntrackBackend::default()
);
assert_eq!(
cfg.server.conntrack_control.profile,
ConntrackPressureProfile::default()
);
assert_eq!(
cfg.server.conntrack_control.pressure_high_watermark_pct,
default_conntrack_pressure_high_watermark_pct()
);
assert_eq!(
cfg.server.conntrack_control.pressure_low_watermark_pct,
default_conntrack_pressure_low_watermark_pct()
);
assert_eq!(
cfg.server.conntrack_control.delete_budget_per_sec,
default_conntrack_delete_budget_per_sec()
);
assert_eq!(cfg.access.users, default_access_users());
assert_eq!(
cfg.access.user_max_tcp_conns_global_each,
default_user_max_tcp_conns_global_each()
);
assert_eq!(
cfg.access.user_max_unique_ips_mode,
UserMaxUniqueIpsMode::default()
@ -1356,6 +1429,7 @@ mod tests {
default_me_init_retry_attempts()
);
assert_eq!(general.me2dc_fallback, default_me2dc_fallback());
assert_eq!(general.me2dc_fast, default_me2dc_fast());
assert_eq!(
general.proxy_config_v4_cache_path,
default_proxy_config_v4_cache_path()
@ -1460,9 +1534,38 @@ mod tests {
server.api.runtime_edge_events_capacity,
default_api_runtime_edge_events_capacity()
);
assert_eq!(
server.conntrack_control.inline_conntrack_control,
default_conntrack_control_enabled()
);
assert_eq!(server.conntrack_control.mode, ConntrackMode::default());
assert_eq!(
server.conntrack_control.backend,
ConntrackBackend::default()
);
assert_eq!(
server.conntrack_control.profile,
ConntrackPressureProfile::default()
);
assert_eq!(
server.conntrack_control.pressure_high_watermark_pct,
default_conntrack_pressure_high_watermark_pct()
);
assert_eq!(
server.conntrack_control.pressure_low_watermark_pct,
default_conntrack_pressure_low_watermark_pct()
);
assert_eq!(
server.conntrack_control.delete_budget_per_sec,
default_conntrack_delete_budget_per_sec()
);
let access = AccessConfig::default();
assert_eq!(access.users, default_access_users());
assert_eq!(
access.user_max_tcp_conns_global_each,
default_user_max_tcp_conns_global_each()
);
}
#[test]
@ -1905,6 +2008,26 @@ mod tests {
let _ = std::fs::remove_file(path);
}
#[test]
fn tg_connect_zero_is_rejected() {
let toml = r#"
[general]
tg_connect = 0
[censorship]
tls_domain = "example.com"
[access.users]
user = "00000000000000000000000000000000"
"#;
let dir = std::env::temp_dir();
let path = dir.join("telemt_tg_connect_zero_test.toml");
std::fs::write(&path, toml).unwrap();
let err = ProxyConfig::load(&path).unwrap_err().to_string();
assert!(err.contains("general.tg_connect must be > 0"));
let _ = std::fs::remove_file(path);
}
#[test]
fn rpc_proxy_req_every_out_of_range_is_rejected() {
let toml = r#"
@ -2368,6 +2491,118 @@ mod tests {
let _ = std::fs::remove_file(path);
}
#[test]
fn conntrack_pressure_high_watermark_out_of_range_is_rejected() {
let toml = r#"
[server.conntrack_control]
pressure_high_watermark_pct = 0
[censorship]
tls_domain = "example.com"
[access.users]
user = "00000000000000000000000000000000"
"#;
let dir = std::env::temp_dir();
let path = dir.join("telemt_conntrack_high_watermark_invalid_test.toml");
std::fs::write(&path, toml).unwrap();
let err = ProxyConfig::load(&path).unwrap_err().to_string();
assert!(err.contains(
"server.conntrack_control.pressure_high_watermark_pct must be within [1, 100]"
));
let _ = std::fs::remove_file(path);
}
#[test]
fn conntrack_pressure_low_watermark_must_be_below_high() {
let toml = r#"
[server.conntrack_control]
pressure_high_watermark_pct = 50
pressure_low_watermark_pct = 50
[censorship]
tls_domain = "example.com"
[access.users]
user = "00000000000000000000000000000000"
"#;
let dir = std::env::temp_dir();
let path = dir.join("telemt_conntrack_low_watermark_invalid_test.toml");
std::fs::write(&path, toml).unwrap();
let err = ProxyConfig::load(&path).unwrap_err().to_string();
assert!(
err.contains(
"server.conntrack_control.pressure_low_watermark_pct must be < pressure_high_watermark_pct"
)
);
let _ = std::fs::remove_file(path);
}
#[test]
fn conntrack_delete_budget_zero_is_rejected() {
let toml = r#"
[server.conntrack_control]
delete_budget_per_sec = 0
[censorship]
tls_domain = "example.com"
[access.users]
user = "00000000000000000000000000000000"
"#;
let dir = std::env::temp_dir();
let path = dir.join("telemt_conntrack_delete_budget_invalid_test.toml");
std::fs::write(&path, toml).unwrap();
let err = ProxyConfig::load(&path).unwrap_err().to_string();
assert!(err.contains("server.conntrack_control.delete_budget_per_sec must be > 0"));
let _ = std::fs::remove_file(path);
}
#[test]
fn conntrack_hybrid_mode_requires_listener_allow_list() {
let toml = r#"
[server.conntrack_control]
mode = "hybrid"
[censorship]
tls_domain = "example.com"
[access.users]
user = "00000000000000000000000000000000"
"#;
let dir = std::env::temp_dir();
let path = dir.join("telemt_conntrack_hybrid_requires_ips_test.toml");
std::fs::write(&path, toml).unwrap();
let err = ProxyConfig::load(&path).unwrap_err().to_string();
assert!(err.contains(
"server.conntrack_control.hybrid_listener_ips must be non-empty in mode=hybrid"
));
let _ = std::fs::remove_file(path);
}
#[test]
fn conntrack_profile_is_loaded_from_config() {
let toml = r#"
[server.conntrack_control]
profile = "aggressive"
[censorship]
tls_domain = "example.com"
[access.users]
user = "00000000000000000000000000000000"
"#;
let dir = std::env::temp_dir();
let path = dir.join("telemt_conntrack_profile_parse_test.toml");
std::fs::write(&path, toml).unwrap();
let cfg = ProxyConfig::load(&path).unwrap();
assert_eq!(
cfg.server.conntrack_control.profile,
ConntrackPressureProfile::Aggressive
);
let _ = std::fs::remove_file(path);
}
#[test]
fn force_close_default_matches_drain_ttl() {
let toml = r#"

View File

@ -17,6 +17,28 @@ fn remove_temp_config(path: &PathBuf) {
let _ = fs::remove_file(path);
}
#[test]
fn default_timeouts_enable_apple_compatible_handshake_profile() {
let cfg = ProxyConfig::default();
assert_eq!(cfg.timeouts.client_first_byte_idle_secs, 300);
assert_eq!(cfg.timeouts.client_handshake, 60);
}
#[test]
fn load_accepts_zero_first_byte_idle_timeout_as_legacy_opt_out() {
let path = write_temp_config(
r#"
[timeouts]
client_first_byte_idle_secs = 0
"#,
);
let cfg = ProxyConfig::load(&path).expect("config with zero first-byte idle timeout must load");
assert_eq!(cfg.timeouts.client_first_byte_idle_secs, 0);
remove_temp_config(&path);
}
#[test]
fn load_rejects_relay_hard_idle_smaller_than_soft_idle_with_clear_error() {
let path = write_temp_config(

View File

@ -429,6 +429,11 @@ pub struct GeneralConfig {
#[serde(default = "default_me2dc_fallback")]
pub me2dc_fallback: bool,
/// Fast ME->Direct fallback mode for new sessions.
/// Active only when both `use_middle_proxy=true` and `me2dc_fallback=true`.
#[serde(default = "default_me2dc_fast")]
pub me2dc_fast: bool,
/// Enable ME keepalive padding frames.
#[serde(default = "default_true")]
pub me_keepalive_enabled: bool,
@ -658,6 +663,10 @@ pub struct GeneralConfig {
#[serde(default = "default_upstream_connect_budget_ms")]
pub upstream_connect_budget_ms: u64,
/// Per-attempt TCP connect timeout to Telegram DC (seconds).
#[serde(default = "default_connect_timeout")]
pub tg_connect: u64,
/// Consecutive failed requests before upstream is marked unhealthy.
#[serde(default = "default_upstream_unhealthy_fail_threshold")]
pub upstream_unhealthy_fail_threshold: u32,
@ -939,6 +948,7 @@ impl Default for GeneralConfig {
middle_proxy_warm_standby: default_middle_proxy_warm_standby(),
me_init_retry_attempts: default_me_init_retry_attempts(),
me2dc_fallback: default_me2dc_fallback(),
me2dc_fast: default_me2dc_fast(),
me_keepalive_enabled: default_true(),
me_keepalive_interval_secs: default_keepalive_interval(),
me_keepalive_jitter_secs: default_keepalive_jitter(),
@ -1001,6 +1011,7 @@ impl Default for GeneralConfig {
upstream_connect_retry_attempts: default_upstream_connect_retry_attempts(),
upstream_connect_retry_backoff_ms: default_upstream_connect_retry_backoff_ms(),
upstream_connect_budget_ms: default_upstream_connect_budget_ms(),
tg_connect: default_connect_timeout(),
upstream_unhealthy_fail_threshold: default_upstream_unhealthy_fail_threshold(),
upstream_connect_failfast_hard_errors: default_upstream_connect_failfast_hard_errors(),
stun_iface_mismatch_ignore: false,
@ -1205,6 +1216,118 @@ impl Default for ApiConfig {
}
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize, Default)]
#[serde(rename_all = "lowercase")]
pub enum ConntrackMode {
#[default]
Tracked,
Notrack,
Hybrid,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize, Default)]
#[serde(rename_all = "lowercase")]
pub enum ConntrackBackend {
#[default]
Auto,
Nftables,
Iptables,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize, Default)]
#[serde(rename_all = "lowercase")]
pub enum ConntrackPressureProfile {
Conservative,
#[default]
Balanced,
Aggressive,
}
impl ConntrackPressureProfile {
pub fn client_first_byte_idle_cap_secs(self) -> u64 {
match self {
Self::Conservative => 30,
Self::Balanced => 20,
Self::Aggressive => 10,
}
}
pub fn direct_activity_timeout_secs(self) -> u64 {
match self {
Self::Conservative => 180,
Self::Balanced => 120,
Self::Aggressive => 60,
}
}
pub fn middle_soft_idle_cap_secs(self) -> u64 {
match self {
Self::Conservative => 60,
Self::Balanced => 30,
Self::Aggressive => 20,
}
}
pub fn middle_hard_idle_cap_secs(self) -> u64 {
match self {
Self::Conservative => 180,
Self::Balanced => 90,
Self::Aggressive => 60,
}
}
}
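The profile caps above are upper bounds, not replacement timeouts. A plausible way they combine with the configured values is a `min` clamp that only applies while pressure is active; the sketch below assumes that semantics (the `effective_middle_soft_idle` helper and the clamping call site are illustrations, not code from this diff, though the cap values mirror `middle_soft_idle_cap_secs` above):

```rust
// Hypothetical sketch: combining a ConntrackPressureProfile cap with a
// configured timeout. Cap values copied from the diff; the helper and the
// "clamp only under pressure" rule are assumptions for illustration.
#[derive(Clone, Copy)]
enum Profile {
    Conservative,
    Balanced,
    Aggressive,
}

impl Profile {
    // Same values as middle_soft_idle_cap_secs in the impl above.
    fn middle_soft_idle_cap_secs(self) -> u64 {
        match self {
            Self::Conservative => 60,
            Self::Balanced => 30,
            Self::Aggressive => 20,
        }
    }
}

// Under active pressure the configured idle timeout is clamped to the
// profile cap; otherwise the configured value is used unchanged.
fn effective_middle_soft_idle(configured_secs: u64, profile: Profile, pressure: bool) -> u64 {
    if pressure {
        configured_secs.min(profile.middle_soft_idle_cap_secs())
    } else {
        configured_secs
    }
}

fn main() {
    assert_eq!(effective_middle_soft_idle(300, Profile::Balanced, true), 30);
    assert_eq!(effective_middle_soft_idle(300, Profile::Conservative, true), 60);
    assert_eq!(effective_middle_soft_idle(300, Profile::Balanced, false), 300);
    // A configured value already below the cap is left alone.
    assert_eq!(effective_middle_soft_idle(10, Profile::Aggressive, true), 10);
    println!("ok");
}
```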
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ConntrackControlConfig {
/// Enables runtime conntrack-control worker for pressure mitigation.
#[serde(default = "default_conntrack_control_enabled")]
pub inline_conntrack_control: bool,
/// Conntrack mode for listener ingress traffic.
#[serde(default)]
pub mode: ConntrackMode,
/// Netfilter backend used to reconcile notrack rules.
#[serde(default)]
pub backend: ConntrackBackend,
/// Pressure profile for timeout caps under resource saturation.
#[serde(default)]
pub profile: ConntrackPressureProfile,
/// Listener IP allow-list for hybrid mode.
/// Ignored in tracked/notrack mode.
#[serde(default)]
pub hybrid_listener_ips: Vec<IpAddr>,
/// Pressure high watermark as percentage.
#[serde(default = "default_conntrack_pressure_high_watermark_pct")]
pub pressure_high_watermark_pct: u8,
/// Pressure low watermark as percentage.
#[serde(default = "default_conntrack_pressure_low_watermark_pct")]
pub pressure_low_watermark_pct: u8,
/// Maximum conntrack delete operations per second.
#[serde(default = "default_conntrack_delete_budget_per_sec")]
pub delete_budget_per_sec: u64,
}
impl Default for ConntrackControlConfig {
fn default() -> Self {
Self {
inline_conntrack_control: default_conntrack_control_enabled(),
mode: ConntrackMode::default(),
backend: ConntrackBackend::default(),
profile: ConntrackPressureProfile::default(),
hybrid_listener_ips: Vec::new(),
pressure_high_watermark_pct: default_conntrack_pressure_high_watermark_pct(),
pressure_low_watermark_pct: default_conntrack_pressure_low_watermark_pct(),
delete_budget_per_sec: default_conntrack_delete_budget_per_sec(),
}
}
}
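Taken together, the new `[server.conntrack_control]` section can be written out like this; a minimal TOML sketch whose keys follow `ConntrackControlConfig` above, with illustrative values (the defaults shown in the diff are `inline_conntrack_control = true`, watermarks 85/70, budget 4096):

```toml
# Illustrative fragment; keys follow the ConntrackControlConfig struct,
# values are examples rather than recommendations.
[server.conntrack_control]
inline_conntrack_control = true
mode = "hybrid"                        # "tracked" | "notrack" | "hybrid"
backend = "auto"                       # "auto" | "nftables" | "iptables"
profile = "balanced"                   # "conservative" | "balanced" | "aggressive"
hybrid_listener_ips = ["203.0.113.10"] # required (non-empty) when mode = "hybrid"
pressure_high_watermark_pct = 85
pressure_low_watermark_pct = 70        # must be strictly below the high watermark
delete_budget_per_sec = 4096           # must be > 0
```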
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ServerConfig {
#[serde(default = "default_port")]
@ -1266,6 +1389,11 @@ pub struct ServerConfig {
#[serde(default)]
pub listeners: Vec<ListenerConfig>,
/// TCP `listen(2)` backlog for client-facing sockets (also used for the metrics HTTP listener).
/// The effective queue is capped by the kernel (for example `somaxconn` on Linux).
#[serde(default = "default_listen_backlog")]
pub listen_backlog: u32,
/// Maximum number of concurrent client connections.
/// 0 means unlimited.
#[serde(default = "default_server_max_connections")]
@ -1275,6 +1403,10 @@ pub struct ServerConfig {
/// `0` keeps legacy unbounded wait behavior.
#[serde(default = "default_accept_permit_timeout_ms")]
pub accept_permit_timeout_ms: u64,
/// Runtime conntrack control and pressure policy.
#[serde(default)]
pub conntrack_control: ConntrackControlConfig,
}
impl Default for ServerConfig {
@ -1294,14 +1426,22 @@ impl Default for ServerConfig {
metrics_whitelist: default_metrics_whitelist(),
api: ApiConfig::default(),
listeners: Vec::new(),
listen_backlog: default_listen_backlog(),
max_connections: default_server_max_connections(),
accept_permit_timeout_ms: default_accept_permit_timeout_ms(),
conntrack_control: ConntrackControlConfig::default(),
}
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TimeoutsConfig {
/// Maximum idle wait in seconds for the first client byte before handshake parsing starts.
/// `0` disables the separate idle phase and keeps legacy timeout behavior.
#[serde(default = "default_client_first_byte_idle_secs")]
pub client_first_byte_idle_secs: u64,
/// Maximum active handshake duration in seconds after the first client byte is received.
#[serde(default = "default_handshake_timeout")]
pub client_handshake: u64,
@ -1323,9 +1463,6 @@ pub struct TimeoutsConfig {
#[serde(default = "default_relay_idle_grace_after_downstream_activity_secs")]
pub relay_idle_grace_after_downstream_activity_secs: u64,
#[serde(default = "default_connect_timeout")]
pub tg_connect: u64,
#[serde(default = "default_keepalive")]
pub client_keepalive: u64,
@ -1344,13 +1481,13 @@ pub struct TimeoutsConfig {
impl Default for TimeoutsConfig {
fn default() -> Self {
Self {
client_first_byte_idle_secs: default_client_first_byte_idle_secs(),
client_handshake: default_handshake_timeout(),
relay_idle_policy_v2_enabled: default_relay_idle_policy_v2_enabled(),
relay_client_idle_soft_secs: default_relay_client_idle_soft_secs(),
relay_client_idle_hard_secs: default_relay_client_idle_hard_secs(),
relay_idle_grace_after_downstream_activity_secs:
default_relay_idle_grace_after_downstream_activity_secs(),
tg_connect: default_connect_timeout(),
client_keepalive: default_keepalive(),
client_ack: default_ack_timeout(),
me_one_retry: default_me_one_retry(),
@ -1613,6 +1750,12 @@ pub struct AccessConfig {
#[serde(default)]
pub user_max_tcp_conns: HashMap<String, usize>,
/// Global per-user TCP connection limit applied when a user has no
/// positive individual override.
/// `0` disables the inherited limit.
#[serde(default = "default_user_max_tcp_conns_global_each")]
pub user_max_tcp_conns_global_each: usize,
#[serde(default)]
pub user_expirations: HashMap<String, DateTime<Utc>>,
@ -1649,6 +1792,7 @@ impl Default for AccessConfig {
users: default_access_users(),
user_ad_tags: HashMap::new(),
user_max_tcp_conns: HashMap::new(),
user_max_tcp_conns_global_each: default_user_max_tcp_conns_global_each(),
user_expirations: HashMap::new(),
user_data_quota: HashMap::new(),
user_max_unique_ips: HashMap::new(),
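The inheritance rule documented on `user_max_tcp_conns_global_each` (the global per-user cap applies only when a user has no positive individual override; `0` disables a limit at either level) can be sketched as follows. `resolve_tcp_conn_limit` is a hypothetical helper for illustration, not a function from this diff:

```rust
use std::collections::HashMap;

// Hypothetical helper sketching the documented inheritance rule:
// a positive per-user override wins; otherwise the global per-user value
// applies; 0 means "no limit" at either level.
fn resolve_tcp_conn_limit(
    user: &str,
    per_user: &HashMap<String, usize>,
    global_each: usize,
) -> Option<usize> {
    match per_user.get(user).copied() {
        Some(n) if n > 0 => Some(n), // explicit positive override
        _ => (global_each > 0).then_some(global_each), // inherited global limit
    }
}

fn main() {
    let mut per_user = HashMap::new();
    per_user.insert("alice".to_string(), 5usize);
    per_user.insert("bob".to_string(), 0usize); // no positive override

    assert_eq!(resolve_tcp_conn_limit("alice", &per_user, 100), Some(5));
    assert_eq!(resolve_tcp_conn_limit("bob", &per_user, 100), Some(100));
    assert_eq!(resolve_tcp_conn_limit("carol", &per_user, 0), None); // unlimited
    println!("ok");
}
```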

src/conntrack_control.rs (new file, 755 lines)
View File

@ -0,0 +1,755 @@
use std::collections::BTreeSet;
use std::net::IpAddr;
use std::path::PathBuf;
use std::sync::Arc;
use std::time::Duration;
use tokio::io::AsyncWriteExt;
use tokio::process::Command;
use tokio::sync::{mpsc, watch};
use tracing::{debug, info, warn};
use crate::config::{ConntrackBackend, ConntrackMode, ProxyConfig};
use crate::proxy::middle_relay::note_global_relay_pressure;
use crate::proxy::shared_state::{ConntrackCloseEvent, ConntrackCloseReason, ProxySharedState};
use crate::stats::Stats;
const CONNTRACK_EVENT_QUEUE_CAPACITY: usize = 32_768;
const PRESSURE_RELEASE_TICKS: u8 = 3;
const PRESSURE_SAMPLE_INTERVAL: Duration = Duration::from_secs(1);
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum NetfilterBackend {
Nftables,
Iptables,
}
#[derive(Clone, Copy)]
struct PressureSample {
conn_pct: Option<u8>,
fd_pct: Option<u8>,
accept_timeout_delta: u64,
me_queue_pressure_delta: u64,
}
struct PressureState {
active: bool,
low_streak: u8,
prev_accept_timeout_total: u64,
prev_me_queue_pressure_total: u64,
}
impl PressureState {
fn new(stats: &Stats) -> Self {
Self {
active: false,
low_streak: 0,
prev_accept_timeout_total: stats.get_accept_permit_timeout_total(),
prev_me_queue_pressure_total: stats.get_me_c2me_send_full_total(),
}
}
}
pub(crate) fn spawn_conntrack_controller(
config_rx: watch::Receiver<Arc<ProxyConfig>>,
stats: Arc<Stats>,
shared: Arc<ProxySharedState>,
) {
if !cfg!(target_os = "linux") {
let enabled = config_rx
.borrow()
.server
.conntrack_control
.inline_conntrack_control;
stats.set_conntrack_control_enabled(enabled);
stats.set_conntrack_control_available(false);
stats.set_conntrack_pressure_active(false);
stats.set_conntrack_event_queue_depth(0);
stats.set_conntrack_rule_apply_ok(false);
shared.disable_conntrack_close_sender();
shared.set_conntrack_pressure_active(false);
if enabled {
warn!(
"conntrack control is configured but unsupported on this OS; disabling runtime worker"
);
}
return;
}
let (tx, rx) = mpsc::channel(CONNTRACK_EVENT_QUEUE_CAPACITY);
shared.set_conntrack_close_sender(tx);
tokio::spawn(async move {
run_conntrack_controller(config_rx, stats, shared, rx).await;
});
}
async fn run_conntrack_controller(
mut config_rx: watch::Receiver<Arc<ProxyConfig>>,
stats: Arc<Stats>,
shared: Arc<ProxySharedState>,
mut close_rx: mpsc::Receiver<ConntrackCloseEvent>,
) {
let mut cfg = config_rx.borrow().clone();
let mut pressure_state = PressureState::new(stats.as_ref());
let mut delete_budget_tokens = cfg.server.conntrack_control.delete_budget_per_sec;
let mut backend = pick_backend(cfg.server.conntrack_control.backend);
apply_runtime_state(
stats.as_ref(),
shared.as_ref(),
&cfg,
backend.is_some(),
false,
);
reconcile_rules(&cfg, backend, stats.as_ref()).await;
loop {
tokio::select! {
changed = config_rx.changed() => {
if changed.is_err() {
break;
}
cfg = config_rx.borrow_and_update().clone();
backend = pick_backend(cfg.server.conntrack_control.backend);
delete_budget_tokens = cfg.server.conntrack_control.delete_budget_per_sec;
apply_runtime_state(stats.as_ref(), shared.as_ref(), &cfg, backend.is_some(), pressure_state.active);
reconcile_rules(&cfg, backend, stats.as_ref()).await;
}
event = close_rx.recv() => {
let Some(event) = event else {
break;
};
stats.set_conntrack_event_queue_depth(close_rx.len() as u64);
if !cfg.server.conntrack_control.inline_conntrack_control {
continue;
}
if !pressure_state.active {
continue;
}
if !matches!(event.reason, ConntrackCloseReason::Timeout | ConntrackCloseReason::Pressure | ConntrackCloseReason::Reset) {
continue;
}
if delete_budget_tokens == 0 {
continue;
}
stats.increment_conntrack_delete_attempt_total();
match delete_conntrack_entry(event).await {
DeleteOutcome::Deleted => {
delete_budget_tokens = delete_budget_tokens.saturating_sub(1);
stats.increment_conntrack_delete_success_total();
}
DeleteOutcome::NotFound => {
delete_budget_tokens = delete_budget_tokens.saturating_sub(1);
stats.increment_conntrack_delete_not_found_total();
}
DeleteOutcome::Error => {
delete_budget_tokens = delete_budget_tokens.saturating_sub(1);
stats.increment_conntrack_delete_error_total();
}
}
}
_ = tokio::time::sleep(PRESSURE_SAMPLE_INTERVAL) => {
delete_budget_tokens = cfg.server.conntrack_control.delete_budget_per_sec;
stats.set_conntrack_event_queue_depth(close_rx.len() as u64);
let sample = collect_pressure_sample(stats.as_ref(), &cfg, &mut pressure_state);
update_pressure_state(
stats.as_ref(),
shared.as_ref(),
&cfg,
&sample,
&mut pressure_state,
);
if pressure_state.active {
note_global_relay_pressure(shared.as_ref());
}
}
}
}
shared.disable_conntrack_close_sender();
shared.set_conntrack_pressure_active(false);
stats.set_conntrack_pressure_active(false);
}
fn apply_runtime_state(
stats: &Stats,
shared: &ProxySharedState,
cfg: &ProxyConfig,
backend_available: bool,
pressure_active: bool,
) {
let enabled = cfg.server.conntrack_control.inline_conntrack_control;
let available = enabled && backend_available && has_cap_net_admin();
if enabled && !available {
warn!(
"conntrack control enabled but unavailable (missing CAP_NET_ADMIN or backend binaries)"
);
}
stats.set_conntrack_control_enabled(enabled);
stats.set_conntrack_control_available(available);
shared.set_conntrack_pressure_active(enabled && pressure_active);
stats.set_conntrack_pressure_active(enabled && pressure_active);
}
fn collect_pressure_sample(
stats: &Stats,
cfg: &ProxyConfig,
state: &mut PressureState,
) -> PressureSample {
let current_connections = stats.get_current_connections_total();
let conn_pct = if cfg.server.max_connections == 0 {
None
} else {
Some(
((current_connections.saturating_mul(100)) / u64::from(cfg.server.max_connections))
.min(100) as u8,
)
};
let fd_pct = fd_usage_pct();
let accept_total = stats.get_accept_permit_timeout_total();
let accept_delta = accept_total.saturating_sub(state.prev_accept_timeout_total);
state.prev_accept_timeout_total = accept_total;
let me_total = stats.get_me_c2me_send_full_total();
let me_delta = me_total.saturating_sub(state.prev_me_queue_pressure_total);
state.prev_me_queue_pressure_total = me_total;
PressureSample {
conn_pct,
fd_pct,
accept_timeout_delta: accept_delta,
me_queue_pressure_delta: me_delta,
}
}
fn update_pressure_state(
stats: &Stats,
shared: &ProxySharedState,
cfg: &ProxyConfig,
sample: &PressureSample,
state: &mut PressureState,
) {
if !cfg.server.conntrack_control.inline_conntrack_control {
if state.active {
state.active = false;
state.low_streak = 0;
shared.set_conntrack_pressure_active(false);
stats.set_conntrack_pressure_active(false);
info!("Conntrack pressure mode deactivated (feature disabled)");
}
return;
}
let high = cfg.server.conntrack_control.pressure_high_watermark_pct;
let low = cfg.server.conntrack_control.pressure_low_watermark_pct;
let high_hit = sample.conn_pct.is_some_and(|v| v >= high)
|| sample.fd_pct.is_some_and(|v| v >= high)
|| sample.accept_timeout_delta > 0
|| sample.me_queue_pressure_delta > 0;
let low_clear = sample.conn_pct.is_none_or(|v| v <= low)
&& sample.fd_pct.is_none_or(|v| v <= low)
&& sample.accept_timeout_delta == 0
&& sample.me_queue_pressure_delta == 0;
if !state.active && high_hit {
state.active = true;
state.low_streak = 0;
shared.set_conntrack_pressure_active(true);
stats.set_conntrack_pressure_active(true);
info!(
conn_pct = ?sample.conn_pct,
fd_pct = ?sample.fd_pct,
accept_timeout_delta = sample.accept_timeout_delta,
me_queue_pressure_delta = sample.me_queue_pressure_delta,
"Conntrack pressure mode activated"
);
return;
}
if state.active && low_clear {
state.low_streak = state.low_streak.saturating_add(1);
if state.low_streak >= PRESSURE_RELEASE_TICKS {
state.active = false;
state.low_streak = 0;
shared.set_conntrack_pressure_active(false);
stats.set_conntrack_pressure_active(false);
info!("Conntrack pressure mode deactivated");
}
return;
}
state.low_streak = 0;
}
async fn reconcile_rules(cfg: &ProxyConfig, backend: Option<NetfilterBackend>, stats: &Stats) {
if !cfg.server.conntrack_control.inline_conntrack_control {
clear_notrack_rules_all_backends().await;
stats.set_conntrack_rule_apply_ok(true);
return;
}
if !has_cap_net_admin() {
stats.set_conntrack_rule_apply_ok(false);
return;
}
let Some(backend) = backend else {
stats.set_conntrack_rule_apply_ok(false);
return;
};
let apply_result = match backend {
NetfilterBackend::Nftables => apply_nft_rules(cfg).await,
NetfilterBackend::Iptables => apply_iptables_rules(cfg).await,
};
if let Err(error) = apply_result {
warn!(error = %error, "Failed to reconcile conntrack/notrack rules");
stats.set_conntrack_rule_apply_ok(false);
} else {
stats.set_conntrack_rule_apply_ok(true);
}
}
fn pick_backend(configured: ConntrackBackend) -> Option<NetfilterBackend> {
match configured {
ConntrackBackend::Auto => {
if command_exists("nft") {
Some(NetfilterBackend::Nftables)
} else if command_exists("iptables") {
Some(NetfilterBackend::Iptables)
} else {
None
}
}
ConntrackBackend::Nftables => command_exists("nft").then_some(NetfilterBackend::Nftables),
ConntrackBackend::Iptables => {
command_exists("iptables").then_some(NetfilterBackend::Iptables)
}
}
}
fn command_exists(binary: &str) -> bool {
let Some(path_var) = std::env::var_os("PATH") else {
return false;
};
std::env::split_paths(&path_var).any(|dir| {
let candidate: PathBuf = dir.join(binary);
candidate.exists() && candidate.is_file()
})
}
fn notrack_targets(cfg: &ProxyConfig) -> (Vec<Option<IpAddr>>, Vec<Option<IpAddr>>) {
let mode = cfg.server.conntrack_control.mode;
let mut v4_targets: BTreeSet<Option<IpAddr>> = BTreeSet::new();
let mut v6_targets: BTreeSet<Option<IpAddr>> = BTreeSet::new();
match mode {
ConntrackMode::Tracked => {}
ConntrackMode::Notrack => {
if cfg.server.listeners.is_empty() {
if let Some(ipv4) = cfg
.server
.listen_addr_ipv4
.as_ref()
.and_then(|s| s.parse::<IpAddr>().ok())
{
if ipv4.is_unspecified() {
v4_targets.insert(None);
} else {
v4_targets.insert(Some(ipv4));
}
}
if let Some(ipv6) = cfg
.server
.listen_addr_ipv6
.as_ref()
.and_then(|s| s.parse::<IpAddr>().ok())
{
if ipv6.is_unspecified() {
v6_targets.insert(None);
} else {
v6_targets.insert(Some(ipv6));
}
}
} else {
for listener in &cfg.server.listeners {
if listener.ip.is_ipv4() {
if listener.ip.is_unspecified() {
v4_targets.insert(None);
} else {
v4_targets.insert(Some(listener.ip));
}
} else if listener.ip.is_unspecified() {
v6_targets.insert(None);
} else {
v6_targets.insert(Some(listener.ip));
}
}
}
}
ConntrackMode::Hybrid => {
for ip in &cfg.server.conntrack_control.hybrid_listener_ips {
if ip.is_ipv4() {
v4_targets.insert(Some(*ip));
} else {
v6_targets.insert(Some(*ip));
}
}
}
}
(
v4_targets.into_iter().collect(),
v6_targets.into_iter().collect(),
)
}
async fn apply_nft_rules(cfg: &ProxyConfig) -> Result<(), String> {
let _ = run_command(
"nft",
&["delete", "table", "inet", "telemt_conntrack"],
None,
)
.await;
if matches!(cfg.server.conntrack_control.mode, ConntrackMode::Tracked) {
return Ok(());
}
let (v4_targets, v6_targets) = notrack_targets(cfg);
let mut rules = Vec::new();
for ip in v4_targets {
let rule = if let Some(ip) = ip {
format!("tcp dport {} ip daddr {} notrack", cfg.server.port, ip)
} else {
format!("tcp dport {} notrack", cfg.server.port)
};
rules.push(rule);
}
for ip in v6_targets {
let rule = if let Some(ip) = ip {
format!("tcp dport {} ip6 daddr {} notrack", cfg.server.port, ip)
} else {
format!("tcp dport {} notrack", cfg.server.port)
};
rules.push(rule);
}
let rule_blob = if rules.is_empty() {
String::new()
} else {
format!(" {}\n", rules.join("\n "))
};
let script = format!(
"table inet telemt_conntrack {{\n chain preraw {{\n type filter hook prerouting priority raw; policy accept;\n{rule_blob} }}\n}}\n"
);
run_command("nft", &["-f", "-"], Some(script)).await
}
async fn apply_iptables_rules(cfg: &ProxyConfig) -> Result<(), String> {
apply_iptables_rules_for_binary("iptables", cfg, true).await?;
apply_iptables_rules_for_binary("ip6tables", cfg, false).await?;
Ok(())
}
async fn apply_iptables_rules_for_binary(
binary: &str,
cfg: &ProxyConfig,
ipv4: bool,
) -> Result<(), String> {
if !command_exists(binary) {
return Ok(());
}
let chain = "TELEMT_NOTRACK";
let _ = run_command(
binary,
&["-t", "raw", "-D", "PREROUTING", "-j", chain],
None,
)
.await;
let _ = run_command(binary, &["-t", "raw", "-F", chain], None).await;
let _ = run_command(binary, &["-t", "raw", "-X", chain], None).await;
if matches!(cfg.server.conntrack_control.mode, ConntrackMode::Tracked) {
return Ok(());
}
run_command(binary, &["-t", "raw", "-N", chain], None).await?;
run_command(binary, &["-t", "raw", "-F", chain], None).await?;
if run_command(
binary,
&["-t", "raw", "-C", "PREROUTING", "-j", chain],
None,
)
.await
.is_err()
{
run_command(
binary,
&["-t", "raw", "-I", "PREROUTING", "1", "-j", chain],
None,
)
.await?;
}
let (v4_targets, v6_targets) = notrack_targets(cfg);
let selected = if ipv4 { v4_targets } else { v6_targets };
for ip in selected {
let mut args = vec![
"-t".to_string(),
"raw".to_string(),
"-A".to_string(),
chain.to_string(),
"-p".to_string(),
"tcp".to_string(),
"--dport".to_string(),
cfg.server.port.to_string(),
];
if let Some(ip) = ip {
args.push("-d".to_string());
args.push(ip.to_string());
}
args.push("-j".to_string());
args.push("CT".to_string());
args.push("--notrack".to_string());
let arg_refs: Vec<&str> = args.iter().map(String::as_str).collect();
run_command(binary, &arg_refs, None).await?;
}
Ok(())
}
async fn clear_notrack_rules_all_backends() {
let _ = run_command(
"nft",
&["delete", "table", "inet", "telemt_conntrack"],
None,
)
.await;
let _ = run_command(
"iptables",
&["-t", "raw", "-D", "PREROUTING", "-j", "TELEMT_NOTRACK"],
None,
)
.await;
let _ = run_command("iptables", &["-t", "raw", "-F", "TELEMT_NOTRACK"], None).await;
let _ = run_command("iptables", &["-t", "raw", "-X", "TELEMT_NOTRACK"], None).await;
let _ = run_command(
"ip6tables",
&["-t", "raw", "-D", "PREROUTING", "-j", "TELEMT_NOTRACK"],
None,
)
.await;
let _ = run_command("ip6tables", &["-t", "raw", "-F", "TELEMT_NOTRACK"], None).await;
let _ = run_command("ip6tables", &["-t", "raw", "-X", "TELEMT_NOTRACK"], None).await;
}
enum DeleteOutcome {
Deleted,
NotFound,
Error,
}
async fn delete_conntrack_entry(event: ConntrackCloseEvent) -> DeleteOutcome {
if !command_exists("conntrack") {
return DeleteOutcome::Error;
}
let args = vec![
"-D".to_string(),
"-p".to_string(),
"tcp".to_string(),
"-s".to_string(),
event.src.ip().to_string(),
"--sport".to_string(),
event.src.port().to_string(),
"-d".to_string(),
event.dst.ip().to_string(),
"--dport".to_string(),
event.dst.port().to_string(),
];
let arg_refs: Vec<&str> = args.iter().map(String::as_str).collect();
match run_command("conntrack", &arg_refs, None).await {
Ok(()) => DeleteOutcome::Deleted,
Err(error) => {
if error.contains("0 flow entries have been deleted") {
DeleteOutcome::NotFound
} else {
debug!(error = %error, "conntrack delete failed");
DeleteOutcome::Error
}
}
}
}
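The deletion above is equivalent to invoking the conntrack CLI directly (the addresses and ports in this sketch are hypothetical):

```shell
conntrack -D -p tcp -s 198.51.100.7 --sport 51234 -d 203.0.113.5 --dport 443
```

When no matching flow exists, conntrack exits non-zero and reports "0 flow entries have been deleted" on stderr, which is why the code maps that stderr substring to `DeleteOutcome::NotFound` rather than `Error`.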
async fn run_command(binary: &str, args: &[&str], stdin: Option<String>) -> Result<(), String> {
if !command_exists(binary) {
return Err(format!("{binary} is not available"));
}
let mut command = Command::new(binary);
command.args(args);
if stdin.is_some() {
command.stdin(std::process::Stdio::piped());
}
command.stdout(std::process::Stdio::null());
command.stderr(std::process::Stdio::piped());
let mut child = command
.spawn()
.map_err(|e| format!("spawn {binary} failed: {e}"))?;
if let Some(blob) = stdin
&& let Some(mut writer) = child.stdin.take()
{
writer
.write_all(blob.as_bytes())
.await
.map_err(|e| format!("stdin write {binary} failed: {e}"))?;
}
let output = child
.wait_with_output()
.await
.map_err(|e| format!("wait {binary} failed: {e}"))?;
if output.status.success() {
return Ok(());
}
let stderr = String::from_utf8_lossy(&output.stderr).trim().to_string();
Err(if stderr.is_empty() {
format!("{binary} exited with status {}", output.status)
} else {
stderr
})
}
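A minimal synchronous analogue of `run_command` using only `std::process` (the function above uses tokio's async `Command`; this sketch follows the same spawn / write-stdin / collect-stderr flow and is an assumption-free std-only restatement, not the project's API):

```rust
use std::io::Write;
use std::process::{Command, Stdio};

// Run a binary, optionally feeding it stdin, and surface stderr on failure.
fn run(binary: &str, args: &[&str], stdin: Option<&str>) -> Result<(), String> {
    let mut cmd = Command::new(binary);
    cmd.args(args).stdout(Stdio::null()).stderr(Stdio::piped());
    if stdin.is_some() {
        cmd.stdin(Stdio::piped());
    }
    let mut child = cmd
        .spawn()
        .map_err(|e| format!("spawn {binary} failed: {e}"))?;
    if let (Some(blob), Some(mut writer)) = (stdin, child.stdin.take()) {
        writer
            .write_all(blob.as_bytes())
            .map_err(|e| format!("stdin write {binary} failed: {e}"))?;
        // writer drops here, closing the pipe so the child sees EOF
    }
    let output = child
        .wait_with_output()
        .map_err(|e| format!("wait {binary} failed: {e}"))?;
    if output.status.success() {
        Ok(())
    } else {
        Err(String::from_utf8_lossy(&output.stderr).trim().to_string())
    }
}

fn main() {
    assert!(run("sh", &["-c", "exit 0"], None).is_ok());
    assert!(run("sh", &["-c", "cat > /dev/null"], Some("payload")).is_ok());
    assert!(run("sh", &["-c", "echo boom >&2; exit 1"], None).is_err());
    println!("ok");
}
```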
fn fd_usage_pct() -> Option<u8> {
let soft_limit = nofile_soft_limit()?;
if soft_limit == 0 {
return None;
}
let fd_count = std::fs::read_dir("/proc/self/fd").ok()?.count() as u64;
Some(((fd_count.saturating_mul(100)) / soft_limit).min(100) as u8)
}
fn nofile_soft_limit() -> Option<u64> {
#[cfg(target_os = "linux")]
{
let mut lim = libc::rlimit {
rlim_cur: 0,
rlim_max: 0,
};
let rc = unsafe { libc::getrlimit(libc::RLIMIT_NOFILE, &mut lim) };
if rc != 0 {
return None;
}
return Some(lim.rlim_cur);
}
#[cfg(not(target_os = "linux"))]
{
None
}
}
fn has_cap_net_admin() -> bool {
#[cfg(target_os = "linux")]
{
let Ok(status) = std::fs::read_to_string("/proc/self/status") else {
return false;
};
for line in status.lines() {
if let Some(raw) = line.strip_prefix("CapEff:") {
let caps = raw.trim();
if let Ok(bits) = u64::from_str_radix(caps, 16) {
const CAP_NET_ADMIN_BIT: u64 = 12;
return (bits & (1u64 << CAP_NET_ADMIN_BIT)) != 0;
}
}
}
false
}
#[cfg(not(target_os = "linux"))]
{
false
}
}
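A standalone sketch of the `CapEff` bit test above: parse the hex mask from a `/proc/self/status` `CapEff:` line and check bit 12 (`CAP_NET_ADMIN`). The sample masks in `main` are illustrative values, not taken from a real process:

```rust
// Returns true if the CAP_NET_ADMIN bit is set in a hex capability mask.
fn cap_net_admin_from_capeff(hex: &str) -> bool {
    const CAP_NET_ADMIN_BIT: u64 = 12;
    u64::from_str_radix(hex.trim(), 16)
        .map(|bits| bits & (1u64 << CAP_NET_ADMIN_BIT) != 0)
        .unwrap_or(false)
}

fn main() {
    // A full capability mask (as root would have) includes the bit...
    assert!(cap_net_admin_from_capeff("000001ffffffffff"));
    // ...an empty mask does not, and non-hex input fails closed.
    assert!(!cap_net_admin_from_capeff("0000000000000000"));
    assert!(!cap_net_admin_from_capeff("not-hex"));
    println!("ok");
}
```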
#[cfg(test)]
mod tests {
use super::*;
use crate::config::ProxyConfig;
#[test]
fn pressure_activates_on_accept_timeout_spike() {
let stats = Stats::new();
let shared = ProxySharedState::new();
let mut cfg = ProxyConfig::default();
cfg.server.conntrack_control.inline_conntrack_control = true;
let mut state = PressureState::new(&stats);
let sample = PressureSample {
conn_pct: Some(10),
fd_pct: Some(10),
accept_timeout_delta: 1,
me_queue_pressure_delta: 0,
};
update_pressure_state(&stats, shared.as_ref(), &cfg, &sample, &mut state);
assert!(state.active);
assert!(shared.conntrack_pressure_active());
assert!(stats.get_conntrack_pressure_active());
}
#[test]
fn pressure_releases_after_hysteresis_window() {
let stats = Stats::new();
let shared = ProxySharedState::new();
let mut cfg = ProxyConfig::default();
cfg.server.conntrack_control.inline_conntrack_control = true;
let mut state = PressureState::new(&stats);
let high_sample = PressureSample {
conn_pct: Some(95),
fd_pct: Some(95),
accept_timeout_delta: 0,
me_queue_pressure_delta: 0,
};
update_pressure_state(&stats, shared.as_ref(), &cfg, &high_sample, &mut state);
assert!(state.active);
let low_sample = PressureSample {
conn_pct: Some(10),
fd_pct: Some(10),
accept_timeout_delta: 0,
me_queue_pressure_delta: 0,
};
update_pressure_state(&stats, shared.as_ref(), &cfg, &low_sample, &mut state);
assert!(state.active);
update_pressure_state(&stats, shared.as_ref(), &cfg, &low_sample, &mut state);
assert!(state.active);
update_pressure_state(&stats, shared.as_ref(), &cfg, &low_sample, &mut state);
assert!(!state.active);
assert!(!shared.conntrack_pressure_active());
assert!(!stats.get_conntrack_pressure_active());
}
#[test]
fn pressure_does_not_activate_when_disabled() {
let stats = Stats::new();
let shared = ProxySharedState::new();
let mut cfg = ProxyConfig::default();
cfg.server.conntrack_control.inline_conntrack_control = false;
let mut state = PressureState::new(&stats);
let sample = PressureSample {
conn_pct: Some(100),
fd_pct: Some(100),
accept_timeout_delta: 10,
me_queue_pressure_delta: 10,
};
update_pressure_state(&stats, shared.as_ref(), &cfg, &sample, &mut state);
assert!(!state.active);
assert!(!shared.conntrack_pressure_active());
assert!(!stats.get_conntrack_pressure_active());
}
}

src/daemon/mod.rs Normal file

@ -0,0 +1,541 @@
//! Unix daemon support for telemt.
//!
//! Provides classic Unix daemonization (double-fork), PID file management,
//! and privilege dropping for running telemt as a background service.
use std::fs::{self, File, OpenOptions};
use std::io::{self, Read, Write};
use std::os::unix::fs::OpenOptionsExt;
use std::path::{Path, PathBuf};
use nix::fcntl::{Flock, FlockArg};
use nix::unistd::{self, ForkResult, Gid, Pid, Uid, chdir, close, fork, getpid, setsid};
use tracing::{debug, info, warn};
/// Default PID file location.
pub const DEFAULT_PID_FILE: &str = "/var/run/telemt.pid";
/// Daemon configuration options parsed from CLI.
#[derive(Debug, Clone, Default)]
pub struct DaemonOptions {
/// Run as daemon (fork to background).
pub daemonize: bool,
/// Path to PID file.
pub pid_file: Option<PathBuf>,
/// User to run as after binding sockets.
pub user: Option<String>,
/// Group to run as after binding sockets.
pub group: Option<String>,
/// Working directory for the daemon.
pub working_dir: Option<PathBuf>,
/// Explicit foreground mode (for systemd Type=simple).
pub foreground: bool,
}
impl DaemonOptions {
/// Returns the effective PID file path.
pub fn pid_file_path(&self) -> &Path {
self.pid_file
.as_deref()
.unwrap_or(Path::new(DEFAULT_PID_FILE))
}
/// Returns true if we should actually daemonize.
/// Foreground flag takes precedence.
pub fn should_daemonize(&self) -> bool {
self.daemonize && !self.foreground
}
}
/// Error types for daemon operations.
#[derive(Debug, thiserror::Error)]
pub enum DaemonError {
#[error("fork failed: {0}")]
ForkFailed(#[source] nix::Error),
#[error("setsid failed: {0}")]
SetsidFailed(#[source] nix::Error),
#[error("chdir failed: {0}")]
ChdirFailed(#[source] nix::Error),
#[error("failed to open /dev/null: {0}")]
DevNullFailed(#[source] io::Error),
#[error("failed to redirect stdio: {0}")]
RedirectFailed(#[source] nix::Error),
#[error("PID file error: {0}")]
PidFile(String),
#[error("another instance is already running (pid {0})")]
AlreadyRunning(i32),
#[error("user '{0}' not found")]
UserNotFound(String),
#[error("group '{0}' not found")]
GroupNotFound(String),
#[error("failed to set uid/gid: {0}")]
PrivilegeDrop(#[source] nix::Error),
#[error("io error: {0}")]
Io(#[from] io::Error),
}
/// Result of a successful daemonize() call.
#[derive(Debug)]
pub enum DaemonizeResult {
/// We are the parent process and should exit.
Parent,
/// We are the daemon child process and should continue.
Child,
}
/// Performs classic Unix double-fork daemonization.
///
/// This detaches the process from the controlling terminal:
/// 1. First fork - parent exits, child continues
/// 2. setsid() - become session leader
/// 3. Second fork - ensure we can never acquire a controlling terminal
/// 4. chdir to `working_dir` (default "/") - don't hold a removable directory open
/// 5. Redirect stdin/stdout/stderr to /dev/null
///
/// Returns `DaemonizeResult::Parent` in the original parent (which should exit),
/// or `DaemonizeResult::Child` in the final daemon child.
pub fn daemonize(working_dir: Option<&Path>) -> Result<DaemonizeResult, DaemonError> {
// First fork
match unsafe { fork() } {
Ok(ForkResult::Parent { .. }) => {
// Parent exits
return Ok(DaemonizeResult::Parent);
}
Ok(ForkResult::Child) => {
// Child continues
}
Err(e) => return Err(DaemonError::ForkFailed(e)),
}
// Create new session, become session leader
setsid().map_err(DaemonError::SetsidFailed)?;
// Second fork to ensure we can never acquire a controlling terminal
match unsafe { fork() } {
Ok(ForkResult::Parent { .. }) => {
// Intermediate parent exits
std::process::exit(0);
}
Ok(ForkResult::Child) => {
// Final daemon child continues
}
Err(e) => return Err(DaemonError::ForkFailed(e)),
}
// Change working directory
let target_dir = working_dir.unwrap_or(Path::new("/"));
chdir(target_dir).map_err(DaemonError::ChdirFailed)?;
// Redirect stdin, stdout, stderr to /dev/null
redirect_stdio_to_devnull()?;
Ok(DaemonizeResult::Child)
}
/// Redirects stdin, stdout, and stderr to /dev/null.
fn redirect_stdio_to_devnull() -> Result<(), DaemonError> {
let devnull = File::options()
.read(true)
.write(true)
.open("/dev/null")
.map_err(DaemonError::DevNullFailed)?;
let devnull_fd = std::os::unix::io::AsRawFd::as_raw_fd(&devnull);
// Use libc::dup2 directly for redirecting standard file descriptors
// nix 0.31's dup2 requires OwnedFd which doesn't work well with stdio fds
unsafe {
// Redirect stdin (fd 0)
if libc::dup2(devnull_fd, 0) < 0 {
return Err(DaemonError::RedirectFailed(nix::errno::Errno::last()));
}
// Redirect stdout (fd 1)
if libc::dup2(devnull_fd, 1) < 0 {
return Err(DaemonError::RedirectFailed(nix::errno::Errno::last()));
}
// Redirect stderr (fd 2)
if libc::dup2(devnull_fd, 2) < 0 {
return Err(DaemonError::RedirectFailed(nix::errno::Errno::last()));
}
}
// `devnull` is dropped at the end of this function, closing the original
// descriptor; closing it manually as well would double-close an fd the
// `File` still owns.
Ok(())
}
/// PID file manager with flock-based locking.
pub struct PidFile {
path: PathBuf,
file: Option<File>,
locked: bool,
}
impl PidFile {
/// Creates a new PID file manager for the given path.
pub fn new<P: AsRef<Path>>(path: P) -> Self {
Self {
path: path.as_ref().to_path_buf(),
file: None,
locked: false,
}
}
/// Checks if another instance is already running.
///
/// Returns the PID of the running instance if one exists.
pub fn check_running(&self) -> Result<Option<i32>, DaemonError> {
if !self.path.exists() {
return Ok(None);
}
// Try to read existing PID
let mut contents = String::new();
File::open(&self.path)
.and_then(|mut f| f.read_to_string(&mut contents))
.map_err(|e| {
DaemonError::PidFile(format!("cannot read {}: {}", self.path.display(), e))
})?;
let pid: i32 = contents
.trim()
.parse()
.map_err(|_| DaemonError::PidFile(format!("invalid PID in {}", self.path.display())))?;
// Check if process is still running
if is_process_running(pid) {
Ok(Some(pid))
} else {
// Stale PID file
debug!(pid, path = %self.path.display(), "Removing stale PID file");
let _ = fs::remove_file(&self.path);
Ok(None)
}
}
/// Acquires the PID file lock and writes the current PID.
///
/// Fails if another instance is already running.
pub fn acquire(&mut self) -> Result<(), DaemonError> {
// Check for running instance first
if let Some(pid) = self.check_running()? {
return Err(DaemonError::AlreadyRunning(pid));
}
// Ensure parent directory exists
if let Some(parent) = self.path.parent() {
if !parent.exists() {
fs::create_dir_all(parent).map_err(|e| {
DaemonError::PidFile(format!(
"cannot create directory {}: {}",
parent.display(),
e
))
})?;
}
}
// Open/create PID file with exclusive lock
let file = OpenOptions::new()
.write(true)
.create(true)
.truncate(true)
.mode(0o644)
.open(&self.path)
.map_err(|e| {
DaemonError::PidFile(format!("cannot open {}: {}", self.path.display(), e))
})?;
// Try to acquire exclusive lock (non-blocking)
let flock = Flock::lock(file, FlockArg::LockExclusiveNonblock).map_err(|(_, errno)| {
// Check if another instance grabbed the lock
if let Some(pid) = self.check_running().ok().flatten() {
DaemonError::AlreadyRunning(pid)
} else {
DaemonError::PidFile(format!("cannot lock {}: {}", self.path.display(), errno))
}
})?;
// Write our PID
let pid = getpid();
let mut file = flock
.unlock()
.map_err(|(_, errno)| DaemonError::PidFile(format!("unlock failed: {}", errno)))?;
writeln!(file, "{}", pid).map_err(|e| {
DaemonError::PidFile(format!(
"cannot write PID to {}: {}",
self.path.display(),
e
))
})?;
// Re-lock briefly, then store the plain handle; from here on, liveness is
// enforced by the PID check in check_running(), not by a held flock
let flock = Flock::lock(file, FlockArg::LockExclusiveNonblock).map_err(|(_, errno)| {
DaemonError::PidFile(format!("cannot re-lock {}: {}", self.path.display(), errno))
})?;
self.file = Some(flock.unlock().map_err(|(_, errno)| {
DaemonError::PidFile(format!("unlock for storage failed: {}", errno))
})?);
self.locked = true;
info!(pid = pid.as_raw(), path = %self.path.display(), "PID file created");
Ok(())
}
/// Releases the PID file lock and removes the file.
pub fn release(&mut self) -> Result<(), DaemonError> {
if let Some(file) = self.file.take() {
drop(file);
}
self.locked = false;
if self.path.exists() {
fs::remove_file(&self.path).map_err(|e| {
DaemonError::PidFile(format!("cannot remove {}: {}", self.path.display(), e))
})?;
debug!(path = %self.path.display(), "PID file removed");
}
Ok(())
}
/// Returns the path to this PID file.
#[allow(dead_code)]
pub fn path(&self) -> &Path {
&self.path
}
}
impl Drop for PidFile {
fn drop(&mut self) {
if self.locked {
if let Err(e) = self.release() {
warn!(error = %e, "Failed to clean up PID file on drop");
}
}
}
}
/// Checks if a process with the given PID is running.
fn is_process_running(pid: i32) -> bool {
// kill(pid, 0) checks if process exists without sending a signal
nix::sys::signal::kill(Pid::from_raw(pid), None).is_ok()
}
/// Drops privileges to the specified user and group.
///
/// This should be called after binding privileged ports but before
/// entering the main event loop.
pub fn drop_privileges(user: Option<&str>, group: Option<&str>) -> Result<(), DaemonError> {
// Look up group first (need to do this while still root)
let target_gid = if let Some(group_name) = group {
Some(lookup_group(group_name)?)
} else if let Some(user_name) = user {
// If no group specified but user is, use user's primary group
Some(lookup_user_primary_gid(user_name)?)
} else {
None
};
// Look up user
let target_uid = if let Some(user_name) = user {
Some(lookup_user(user_name)?)
} else {
None
};
// Drop privileges: set GID first, then UID
// (Setting UID first would prevent us from setting GID)
if let Some(gid) = target_gid {
unistd::setgid(gid).map_err(DaemonError::PrivilegeDrop)?;
// Also set supplementary groups to just this one
unistd::setgroups(&[gid]).map_err(DaemonError::PrivilegeDrop)?;
info!(gid = gid.as_raw(), "Dropped group privileges");
}
if let Some(uid) = target_uid {
unistd::setuid(uid).map_err(DaemonError::PrivilegeDrop)?;
info!(uid = uid.as_raw(), "Dropped user privileges");
}
Ok(())
}
/// Looks up a user by name and returns their UID.
fn lookup_user(name: &str) -> Result<Uid, DaemonError> {
// Use libc getpwnam
let c_name =
std::ffi::CString::new(name).map_err(|_| DaemonError::UserNotFound(name.to_string()))?;
unsafe {
let pwd = libc::getpwnam(c_name.as_ptr());
if pwd.is_null() {
Err(DaemonError::UserNotFound(name.to_string()))
} else {
Ok(Uid::from_raw((*pwd).pw_uid))
}
}
}
/// Looks up a user's primary GID by username.
fn lookup_user_primary_gid(name: &str) -> Result<Gid, DaemonError> {
let c_name =
std::ffi::CString::new(name).map_err(|_| DaemonError::UserNotFound(name.to_string()))?;
unsafe {
let pwd = libc::getpwnam(c_name.as_ptr());
if pwd.is_null() {
Err(DaemonError::UserNotFound(name.to_string()))
} else {
Ok(Gid::from_raw((*pwd).pw_gid))
}
}
}
/// Looks up a group by name and returns its GID.
fn lookup_group(name: &str) -> Result<Gid, DaemonError> {
let c_name =
std::ffi::CString::new(name).map_err(|_| DaemonError::GroupNotFound(name.to_string()))?;
unsafe {
let grp = libc::getgrnam(c_name.as_ptr());
if grp.is_null() {
Err(DaemonError::GroupNotFound(name.to_string()))
} else {
Ok(Gid::from_raw((*grp).gr_gid))
}
}
}
/// Reads PID from a PID file.
#[allow(dead_code)]
pub fn read_pid_file<P: AsRef<Path>>(path: P) -> Result<i32, DaemonError> {
let path = path.as_ref();
let mut contents = String::new();
File::open(path)
.and_then(|mut f| f.read_to_string(&mut contents))
.map_err(|e| DaemonError::PidFile(format!("cannot read {}: {}", path.display(), e)))?;
contents
.trim()
.parse()
.map_err(|_| DaemonError::PidFile(format!("invalid PID in {}", path.display())))
}
/// Sends a signal to the process specified in a PID file.
#[allow(dead_code)]
pub fn signal_pid_file<P: AsRef<Path>>(
path: P,
signal: nix::sys::signal::Signal,
) -> Result<(), DaemonError> {
let pid = read_pid_file(&path)?;
if !is_process_running(pid) {
return Err(DaemonError::PidFile(format!(
"process {} from {} is not running",
pid,
path.as_ref().display()
)));
}
nix::sys::signal::kill(Pid::from_raw(pid), signal)
.map_err(|e| DaemonError::PidFile(format!("cannot signal process {}: {}", pid, e)))?;
Ok(())
}
/// Returns the status of the daemon based on PID file.
#[allow(dead_code)]
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum DaemonStatus {
/// Daemon is running with the given PID.
Running(i32),
/// PID file exists but process is not running.
Stale(i32),
/// No PID file exists.
NotRunning,
}
/// Checks the daemon status from a PID file.
#[allow(dead_code)]
pub fn check_status<P: AsRef<Path>>(path: P) -> DaemonStatus {
let path = path.as_ref();
if !path.exists() {
return DaemonStatus::NotRunning;
}
match read_pid_file(path) {
Ok(pid) => {
if is_process_running(pid) {
DaemonStatus::Running(pid)
} else {
DaemonStatus::Stale(pid)
}
}
Err(_) => DaemonStatus::NotRunning,
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_daemon_options_default() {
let opts = DaemonOptions::default();
assert!(!opts.daemonize);
assert!(!opts.should_daemonize());
assert_eq!(opts.pid_file_path(), Path::new(DEFAULT_PID_FILE));
}
#[test]
fn test_daemon_options_foreground_overrides() {
let opts = DaemonOptions {
daemonize: true,
foreground: true,
..Default::default()
};
assert!(!opts.should_daemonize());
}
#[test]
fn test_check_status_not_running() {
let path = "/tmp/telemt_test_nonexistent.pid";
assert_eq!(check_status(path), DaemonStatus::NotRunning);
}
#[test]
fn test_pid_file_basic() {
let path = "/tmp/telemt_test_pidfile.pid";
let _ = fs::remove_file(path);
let mut pf = PidFile::new(path);
assert!(pf.check_running().unwrap().is_none());
pf.acquire().unwrap();
assert!(Path::new(path).exists());
// Read it back
let pid = read_pid_file(path).unwrap();
assert_eq!(pid, std::process::id() as i32);
pf.release().unwrap();
assert!(!Path::new(path).exists());
}
}


@ -26,6 +26,15 @@ pub struct UserIpTracker {
cleanup_drain_lock: Arc<AsyncMutex<()>>,
}
#[derive(Debug, Clone, Copy)]
pub struct UserIpTrackerMemoryStats {
pub active_users: usize,
pub recent_users: usize,
pub active_entries: usize,
pub recent_entries: usize,
pub cleanup_queue_len: usize,
}
impl UserIpTracker {
pub fn new() -> Self {
Self {
@ -141,6 +150,13 @@ impl UserIpTracker {
let mut active_ips = self.active_ips.write().await;
let mut recent_ips = self.recent_ips.write().await;
let window = *self.limit_window.read().await;
let now = Instant::now();
for user_recent in recent_ips.values_mut() {
Self::prune_recent(user_recent, now, window);
}
let mut users =
Vec::<String>::with_capacity(active_ips.len().saturating_add(recent_ips.len()));
users.extend(active_ips.keys().cloned());
@ -166,6 +182,26 @@ impl UserIpTracker {
}
}
pub async fn memory_stats(&self) -> UserIpTrackerMemoryStats {
let cleanup_queue_len = self
.cleanup_queue
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner())
.len();
let active_ips = self.active_ips.read().await;
let recent_ips = self.recent_ips.read().await;
let active_entries = active_ips.values().map(HashMap::len).sum();
let recent_entries = recent_ips.values().map(HashMap::len).sum();
UserIpTrackerMemoryStats {
active_users: active_ips.len(),
recent_users: recent_ips.len(),
active_entries,
recent_entries,
cleanup_queue_len,
}
}
pub async fn set_limit_policy(&self, mode: UserMaxUniqueIpsMode, window_secs: u64) {
{
let mut current_mode = self.limit_mode.write().await;
@ -451,6 +487,7 @@ impl Default for UserIpTracker {
mod tests {
use super::*;
use std::net::{IpAddr, Ipv4Addr, Ipv6Addr};
use std::sync::atomic::Ordering;
fn test_ipv4(oct1: u8, oct2: u8, oct3: u8, oct4: u8) -> IpAddr {
IpAddr::V4(Ipv4Addr::new(oct1, oct2, oct3, oct4))
@ -764,4 +801,54 @@ mod tests {
tokio::time::sleep(Duration::from_millis(1100)).await;
assert!(tracker.check_and_add("test_user", ip2).await.is_ok());
}
#[tokio::test]
async fn test_memory_stats_reports_queue_and_entry_counts() {
let tracker = UserIpTracker::new();
tracker.set_user_limit("test_user", 4).await;
let ip1 = test_ipv4(10, 2, 0, 1);
let ip2 = test_ipv4(10, 2, 0, 2);
tracker.check_and_add("test_user", ip1).await.unwrap();
tracker.check_and_add("test_user", ip2).await.unwrap();
tracker.enqueue_cleanup("test_user".to_string(), ip1);
let snapshot = tracker.memory_stats().await;
assert_eq!(snapshot.active_users, 1);
assert_eq!(snapshot.recent_users, 1);
assert_eq!(snapshot.active_entries, 2);
assert_eq!(snapshot.recent_entries, 2);
assert_eq!(snapshot.cleanup_queue_len, 1);
}
#[tokio::test]
async fn test_compact_prunes_stale_recent_entries() {
let tracker = UserIpTracker::new();
tracker
.set_limit_policy(UserMaxUniqueIpsMode::TimeWindow, 1)
.await;
let stale_user = "stale-user".to_string();
let stale_ip = test_ipv4(10, 3, 0, 1);
{
let mut recent_ips = tracker.recent_ips.write().await;
recent_ips
.entry(stale_user.clone())
.or_insert_with(HashMap::new)
.insert(stale_ip, Instant::now() - Duration::from_secs(5));
}
tracker.last_compact_epoch_secs.store(0, Ordering::Relaxed);
tracker
.check_and_add("trigger-user", test_ipv4(10, 3, 0, 2))
.await
.unwrap();
let recent_ips = tracker.recent_ips.read().await;
let stale_exists = recent_ips
.get(&stale_user)
.map(|ips| ips.contains_key(&stale_ip))
.unwrap_or(false);
assert!(!stale_exists);
}
}

src/logging.rs Normal file

@ -0,0 +1,343 @@
//! Logging configuration for telemt.
//!
//! Supports multiple log destinations:
//! - stderr (default, works with systemd journald)
//! - syslog (Unix only, for traditional init systems)
//! - file (with optional rotation)
#![allow(dead_code)] // Infrastructure module - used via CLI flags
use std::path::Path;
use tracing_subscriber::layer::SubscriberExt;
use tracing_subscriber::util::SubscriberInitExt;
use tracing_subscriber::{EnvFilter, fmt, reload};
/// Log destination configuration.
#[derive(Debug, Clone, Default)]
pub enum LogDestination {
/// Log to stderr (default, captured by systemd journald).
#[default]
Stderr,
/// Log to syslog (Unix only).
#[cfg(unix)]
Syslog,
/// Log to a file with optional rotation.
File {
path: String,
/// Rotate daily if true.
rotate_daily: bool,
},
}
/// Logging options parsed from CLI/config.
#[derive(Debug, Clone, Default)]
pub struct LoggingOptions {
/// Where to send logs.
pub destination: LogDestination,
/// Disable ANSI colors.
pub disable_colors: bool,
}
/// Guard that must be held to keep file logging active.
/// When dropped, flushes and closes log files.
pub struct LoggingGuard {
_guard: Option<tracing_appender::non_blocking::WorkerGuard>,
}
impl LoggingGuard {
fn new(guard: Option<tracing_appender::non_blocking::WorkerGuard>) -> Self {
Self { _guard: guard }
}
/// Creates a no-op guard for stderr/syslog logging.
pub fn noop() -> Self {
Self { _guard: None }
}
}
/// Initialize the tracing subscriber with the specified options.
///
/// Returns a reload handle for dynamic log level changes and a guard
/// that must be kept alive for file logging.
pub fn init_logging(
opts: &LoggingOptions,
initial_filter: &str,
) -> (
reload::Handle<EnvFilter, impl tracing::Subscriber + Send + Sync>,
LoggingGuard,
) {
let (filter_layer, filter_handle) = reload::Layer::new(EnvFilter::new(initial_filter));
match &opts.destination {
LogDestination::Stderr => {
let fmt_layer = fmt::Layer::default()
.with_ansi(!opts.disable_colors)
.with_target(true);
tracing_subscriber::registry()
.with(filter_layer)
.with(fmt_layer)
.init();
(filter_handle, LoggingGuard::noop())
}
#[cfg(unix)]
LogDestination::Syslog => {
// Use a custom fmt layer that writes to syslog
let fmt_layer = fmt::Layer::default()
.with_ansi(false)
.with_target(false)
.with_level(false)
.without_time()
.with_writer(SyslogMakeWriter::new());
tracing_subscriber::registry()
.with(filter_layer)
.with(fmt_layer)
.init();
(filter_handle, LoggingGuard::noop())
}
LogDestination::File { path, rotate_daily } => {
let (non_blocking, guard) = if *rotate_daily {
// Extract directory and filename prefix
let path = Path::new(path);
let dir = path.parent().unwrap_or(Path::new("/var/log"));
let prefix = path
.file_name()
.and_then(|s| s.to_str())
.unwrap_or("telemt");
let file_appender = tracing_appender::rolling::daily(dir, prefix);
tracing_appender::non_blocking(file_appender)
} else {
let file = std::fs::OpenOptions::new()
.create(true)
.append(true)
.open(path)
.expect("Failed to open log file");
tracing_appender::non_blocking(file)
};
let fmt_layer = fmt::Layer::default()
.with_ansi(false)
.with_target(true)
.with_writer(non_blocking);
tracing_subscriber::registry()
.with(filter_layer)
.with(fmt_layer)
.init();
(filter_handle, LoggingGuard::new(Some(guard)))
}
}
}
/// Syslog writer for tracing.
#[cfg(unix)]
#[derive(Clone, Copy)]
struct SyslogMakeWriter;
#[cfg(unix)]
#[derive(Clone, Copy)]
struct SyslogWriter {
priority: libc::c_int,
}
#[cfg(unix)]
impl SyslogMakeWriter {
fn new() -> Self {
// Open syslog connection on first use
static INIT: std::sync::Once = std::sync::Once::new();
INIT.call_once(|| {
unsafe {
// Open syslog with ident "telemt", LOG_PID, LOG_DAEMON facility
let ident = b"telemt\0".as_ptr() as *const libc::c_char;
libc::openlog(ident, libc::LOG_PID | libc::LOG_NDELAY, libc::LOG_DAEMON);
}
});
Self
}
}
#[cfg(unix)]
fn syslog_priority_for_level(level: &tracing::Level) -> libc::c_int {
match *level {
tracing::Level::ERROR => libc::LOG_ERR,
tracing::Level::WARN => libc::LOG_WARNING,
tracing::Level::INFO => libc::LOG_INFO,
tracing::Level::DEBUG => libc::LOG_DEBUG,
tracing::Level::TRACE => libc::LOG_DEBUG,
}
}
#[cfg(unix)]
impl std::io::Write for SyslogWriter {
fn write(&mut self, buf: &[u8]) -> std::io::Result<usize> {
// Convert to C string, stripping newlines
let msg = String::from_utf8_lossy(buf);
let msg = msg.trim_end();
if msg.is_empty() {
return Ok(buf.len());
}
// Write to syslog
// CString::new only fails on an embedded NUL (invalid UTF-8 was already
// replaced by from_utf8_lossy above)
let c_msg = std::ffi::CString::new(msg.as_bytes())
.unwrap_or_else(|_| std::ffi::CString::new("(message contained NUL)").unwrap());
unsafe {
libc::syslog(
self.priority,
b"%s\0".as_ptr() as *const libc::c_char,
c_msg.as_ptr(),
);
}
Ok(buf.len())
}
fn flush(&mut self) -> std::io::Result<()> {
Ok(())
}
}
#[cfg(unix)]
impl<'a> tracing_subscriber::fmt::MakeWriter<'a> for SyslogMakeWriter {
type Writer = SyslogWriter;
fn make_writer(&'a self) -> Self::Writer {
SyslogWriter {
priority: libc::LOG_INFO,
}
}
fn make_writer_for(&'a self, meta: &tracing::Metadata<'_>) -> Self::Writer {
SyslogWriter {
priority: syslog_priority_for_level(meta.level()),
}
}
}
/// Parse log destination from CLI arguments.
pub fn parse_log_destination(args: &[String]) -> LogDestination {
let mut i = 0;
while i < args.len() {
match args[i].as_str() {
#[cfg(unix)]
"--syslog" => {
return LogDestination::Syslog;
}
"--log-file" => {
i += 1;
if i < args.len() {
return LogDestination::File {
path: args[i].clone(),
rotate_daily: false,
};
}
}
s if s.starts_with("--log-file=") => {
return LogDestination::File {
path: s.trim_start_matches("--log-file=").to_string(),
rotate_daily: false,
};
}
"--log-file-daily" => {
i += 1;
if i < args.len() {
return LogDestination::File {
path: args[i].clone(),
rotate_daily: true,
};
}
}
s if s.starts_with("--log-file-daily=") => {
return LogDestination::File {
path: s.trim_start_matches("--log-file-daily=").to_string(),
rotate_daily: true,
};
}
_ => {}
}
i += 1;
}
LogDestination::Stderr
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_parse_log_destination_default() {
let args: Vec<String> = vec![];
assert!(matches!(
parse_log_destination(&args),
LogDestination::Stderr
));
}
#[test]
fn test_parse_log_destination_file() {
let args = vec!["--log-file".to_string(), "/var/log/telemt.log".to_string()];
match parse_log_destination(&args) {
LogDestination::File { path, rotate_daily } => {
assert_eq!(path, "/var/log/telemt.log");
assert!(!rotate_daily);
}
_ => panic!("Expected File destination"),
}
}
#[test]
fn test_parse_log_destination_file_daily() {
let args = vec!["--log-file-daily=/var/log/telemt".to_string()];
match parse_log_destination(&args) {
LogDestination::File { path, rotate_daily } => {
assert_eq!(path, "/var/log/telemt");
assert!(rotate_daily);
}
_ => panic!("Expected File destination"),
}
}
#[cfg(unix)]
#[test]
fn test_parse_log_destination_syslog() {
let args = vec!["--syslog".to_string()];
assert!(matches!(
parse_log_destination(&args),
LogDestination::Syslog
));
}
#[cfg(unix)]
#[test]
fn test_syslog_priority_for_level_mapping() {
assert_eq!(
syslog_priority_for_level(&tracing::Level::ERROR),
libc::LOG_ERR
);
assert_eq!(
syslog_priority_for_level(&tracing::Level::WARN),
libc::LOG_WARNING
);
assert_eq!(
syslog_priority_for_level(&tracing::Level::INFO),
libc::LOG_INFO
);
assert_eq!(
syslog_priority_for_level(&tracing::Level::DEBUG),
libc::LOG_DEBUG
);
assert_eq!(
syslog_priority_for_level(&tracing::Level::TRACE),
libc::LOG_DEBUG
);
}
}
@ -21,10 +21,29 @@ pub(crate) async fn configure_admission_gate(
if config.general.use_middle_proxy {
if let Some(pool) = me_pool.as_ref() {
let initial_ready = pool.admission_ready_conditional_cast().await;
admission_tx.send_replace(initial_ready);
let _ = route_runtime.set_mode(RelayRouteMode::Middle);
let mut fallback_enabled = config.general.me2dc_fallback;
let mut fast_fallback_enabled = fallback_enabled && config.general.me2dc_fast;
let (initial_gate_open, initial_route_mode, initial_fallback_reason) = if initial_ready
{
(true, RelayRouteMode::Middle, None)
} else if fast_fallback_enabled {
(
true,
RelayRouteMode::Direct,
Some("fast_not_ready_fallback"),
)
} else {
(false, RelayRouteMode::Middle, None)
};
admission_tx.send_replace(initial_gate_open);
let _ = route_runtime.set_mode(initial_route_mode);
if initial_ready {
info!("Conditional-admission gate: open / ME pool READY");
} else if let Some(reason) = initial_fallback_reason {
warn!(
fallback_reason = reason,
"Conditional-admission gate opened in ME fast fallback mode"
);
} else {
warn!("Conditional-admission gate: closed / ME pool is NOT ready");
}
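The diff above introduces a three-way initial decision: pool ready opens the gate on the Middle route, `me2dc_fast` (when `me2dc_fallback` is also on) opens it immediately on Direct, and otherwise the gate stays closed. A minimal sketch of that decision as a pure function (`initial_gate_decision` is a hypothetical name; the booleans stand in for the `me2dc_fallback`/`me2dc_fast` config fields):

```rust
/// Mirrors the initial admission-gate decision shown in the diff.
/// Returns (gate_open, route_is_middle, fallback_reason).
fn initial_gate_decision(
    ready: bool,
    fallback_enabled: bool,
    fast_enabled: bool,
) -> (bool, bool, Option<&'static str>) {
    // Fast fallback only applies when fallback itself is enabled.
    let fast_fallback = fallback_enabled && fast_enabled;
    if ready {
        (true, true, None)
    } else if fast_fallback {
        (true, false, Some("fast_not_ready_fallback"))
    } else {
        (false, true, None)
    }
}

fn main() {
    assert_eq!(initial_gate_decision(true, true, true), (true, true, None));
    assert_eq!(
        initial_gate_decision(false, true, true),
        (true, false, Some("fast_not_ready_fallback"))
    );
    // Fast mode alone is not enough: me2dc_fallback must also be on.
    assert_eq!(initial_gate_decision(false, false, true), (false, true, None));
}
```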
@ -34,10 +53,9 @@ pub(crate) async fn configure_admission_gate(
let route_runtime_gate = route_runtime.clone();
let mut config_rx_gate = config_rx.clone();
let mut admission_poll_ms = config.general.me_admission_poll_ms.max(1);
let mut fallback_enabled = config.general.me2dc_fallback;
tokio::spawn(async move {
let mut gate_open = initial_ready;
let mut route_mode = RelayRouteMode::Middle;
let mut gate_open = initial_gate_open;
let mut route_mode = initial_route_mode;
let mut ready_observed = initial_ready;
let mut not_ready_since = if initial_ready {
None
@ -53,16 +71,23 @@ pub(crate) async fn configure_admission_gate(
let cfg = config_rx_gate.borrow_and_update().clone();
admission_poll_ms = cfg.general.me_admission_poll_ms.max(1);
fallback_enabled = cfg.general.me2dc_fallback;
fast_fallback_enabled = cfg.general.me2dc_fallback && cfg.general.me2dc_fast;
continue;
}
_ = tokio::time::sleep(Duration::from_millis(admission_poll_ms)) => {}
}
let ready = pool_for_gate.admission_ready_conditional_cast().await;
let now = Instant::now();
let (next_gate_open, next_route_mode, next_fallback_active) = if ready {
let (next_gate_open, next_route_mode, next_fallback_reason) = if ready {
ready_observed = true;
not_ready_since = None;
(true, RelayRouteMode::Middle, false)
(true, RelayRouteMode::Middle, None)
} else if fast_fallback_enabled {
(
true,
RelayRouteMode::Direct,
Some("fast_not_ready_fallback"),
)
} else {
let not_ready_started_at = *not_ready_since.get_or_insert(now);
let not_ready_for = now.saturating_duration_since(not_ready_started_at);
@ -72,11 +97,12 @@ pub(crate) async fn configure_admission_gate(
STARTUP_FALLBACK_AFTER
};
if fallback_enabled && not_ready_for > fallback_after {
(true, RelayRouteMode::Direct, true)
(true, RelayRouteMode::Direct, Some("strict_grace_fallback"))
} else {
(false, RelayRouteMode::Middle, false)
(false, RelayRouteMode::Middle, None)
}
};
let next_fallback_active = next_fallback_reason.is_some();
if next_route_mode != route_mode {
route_mode = next_route_mode;
@ -88,6 +114,8 @@ pub(crate) async fn configure_admission_gate(
"Middle-End routing restored for new sessions"
);
} else {
let fallback_reason = next_fallback_reason.unwrap_or("unknown");
if fallback_reason == "strict_grace_fallback" {
let fallback_after = if ready_observed {
RUNTIME_FALLBACK_AFTER
} else {
@ -97,8 +125,17 @@ pub(crate) async fn configure_admission_gate(
target_mode = route_mode.as_str(),
cutover_generation = snapshot.generation,
grace_secs = fallback_after.as_secs(),
fallback_reason,
"ME pool stayed not-ready beyond grace; routing new sessions via Direct-DC"
);
} else {
warn!(
target_mode = route_mode.as_str(),
cutover_generation = snapshot.generation,
fallback_reason,
"ME pool not-ready; routing new sessions via Direct-DC (fast mode)"
);
}
}
}
}
@ -108,7 +145,10 @@ pub(crate) async fn configure_admission_gate(
admission_tx_gate.send_replace(gate_open);
if gate_open {
if next_fallback_active {
warn!("Conditional-admission gate opened in ME fallback mode");
warn!(
fallback_reason = next_fallback_reason.unwrap_or("unknown"),
"Conditional-admission gate opened in ME fallback mode"
);
} else {
info!("Conditional-admission gate opened / ME pool READY");
}
@ -8,6 +8,7 @@ use tracing::{debug, error, info, warn};
use crate::cli;
use crate::config::ProxyConfig;
use crate::logging::LogDestination;
use crate::transport::UpstreamManager;
use crate::transport::middle_proxy::{
ProxyConfigData, fetch_proxy_config_with_raw_via_upstream, load_proxy_config_cache,
@ -17,24 +18,56 @@ use crate::transport::middle_proxy::{
pub(crate) fn resolve_runtime_config_path(
config_path_cli: &str,
startup_cwd: &std::path::Path,
config_path_explicit: bool,
) -> PathBuf {
if config_path_explicit {
let raw = PathBuf::from(config_path_cli);
let absolute = if raw.is_absolute() {
raw
} else {
startup_cwd.join(raw)
};
absolute.canonicalize().unwrap_or(absolute)
return absolute.canonicalize().unwrap_or(absolute);
}
let etc_telemt = std::path::Path::new("/etc/telemt");
let candidates = [
startup_cwd.join("config.toml"),
startup_cwd.join("telemt.toml"),
etc_telemt.join("telemt.toml"),
etc_telemt.join("config.toml"),
];
for candidate in candidates {
if candidate.is_file() {
return candidate.canonicalize().unwrap_or(candidate);
}
}
startup_cwd.join("config.toml")
}
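When no config path is given explicitly, the function above probes four candidates in order and falls back to `<cwd>/config.toml`. A testable sketch of that search order, assuming an injected `exists` predicate in place of `Path::is_file` so the ordering can be checked without touching the filesystem (`pick_config` is a hypothetical name):

```rust
use std::path::{Path, PathBuf};

/// Simplified sketch of the implicit-config search order:
/// the first existing candidate wins, else <cwd>/config.toml.
fn pick_config(cwd: &Path, exists: impl Fn(&Path) -> bool) -> PathBuf {
    let etc = Path::new("/etc/telemt");
    let candidates = [
        cwd.join("config.toml"),
        cwd.join("telemt.toml"),
        etc.join("telemt.toml"),
        etc.join("config.toml"),
    ];
    candidates
        .iter()
        .find(|c| exists(c.as_path()))
        .cloned()
        .unwrap_or_else(|| cwd.join("config.toml"))
}

fn main() {
    let cwd = Path::new("/srv/app");
    // Only the system-wide telemt.toml exists.
    let got = pick_config(cwd, |p| p == Path::new("/etc/telemt/telemt.toml"));
    assert_eq!(got, PathBuf::from("/etc/telemt/telemt.toml"));
    // Nothing exists: default to <cwd>/config.toml.
    let got = pick_config(cwd, |_| false);
    assert_eq!(got, PathBuf::from("/srv/app/config.toml"));
}
```

Note that the startup-cwd candidates outrank `/etc/telemt`, which is what the `resolve_runtime_config_path_uses_startup_candidates_when_not_explicit` test below exercises.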
pub(crate) fn parse_cli() -> (String, Option<PathBuf>, bool, Option<String>) {
/// Parsed CLI arguments.
pub(crate) struct CliArgs {
pub config_path: String,
pub config_path_explicit: bool,
pub data_path: Option<PathBuf>,
pub silent: bool,
pub log_level: Option<String>,
pub log_destination: LogDestination,
}
pub(crate) fn parse_cli() -> CliArgs {
let mut config_path = "config.toml".to_string();
let mut config_path_explicit = false;
let mut data_path: Option<PathBuf> = None;
let mut silent = false;
let mut log_level: Option<String> = None;
let args: Vec<String> = std::env::args().skip(1).collect();
// Parse log destination
let log_destination = crate::logging::parse_log_destination(&args);
// Check for --init first (handled before tokio)
if let Some(init_opts) = cli::parse_init_args(&args) {
if let Err(e) = cli::run_init(init_opts) {
@ -61,6 +94,20 @@ pub(crate) fn parse_cli() -> (String, Option<PathBuf>, bool, Option<String>) {
s.trim_start_matches("--data-path=").to_string(),
));
}
"--working-dir" => {
i += 1;
if i < args.len() {
data_path = Some(PathBuf::from(args[i].clone()));
} else {
eprintln!("Missing value for --working-dir");
std::process::exit(1);
}
}
s if s.starts_with("--working-dir=") => {
data_path = Some(PathBuf::from(
s.trim_start_matches("--working-dir=").to_string(),
));
}
"--silent" | "-s" => {
silent = true;
}
@ -74,38 +121,35 @@ pub(crate) fn parse_cli() -> (String, Option<PathBuf>, bool, Option<String>) {
log_level = Some(s.trim_start_matches("--log-level=").to_string());
}
"--help" | "-h" => {
eprintln!("Usage: telemt [config.toml] [OPTIONS]");
eprintln!();
eprintln!("Options:");
eprintln!(
" --data-path <DIR> Set data directory (absolute path; overrides config value)"
);
eprintln!(" --silent, -s Suppress info logs");
eprintln!(" --log-level <LEVEL> debug|verbose|normal|silent");
eprintln!(" --help, -h Show this help");
eprintln!();
eprintln!("Setup (fire-and-forget):");
eprintln!(
" --init Generate config, install systemd service, start"
);
eprintln!(" --port <PORT> Listen port (default: 443)");
eprintln!(
" --domain <DOMAIN> TLS domain for masking (default: www.google.com)"
);
eprintln!(
" --secret <HEX> 32-char hex secret (auto-generated if omitted)"
);
eprintln!(" --user <NAME> Username (default: user)");
eprintln!(" --config-dir <DIR> Config directory (default: /etc/telemt)");
eprintln!(" --no-start Don't start the service after install");
print_help();
std::process::exit(0);
}
"--version" | "-V" => {
println!("telemt {}", env!("CARGO_PKG_VERSION"));
std::process::exit(0);
}
// Skip daemon-related flags (already parsed)
"--daemon" | "-d" | "--foreground" | "-f" => {}
s if s.starts_with("--pid-file") => {
if !s.contains('=') {
i += 1; // skip value
}
}
s if s.starts_with("--run-as-user") => {
if !s.contains('=') {
i += 1;
}
}
s if s.starts_with("--run-as-group") => {
if !s.contains('=') {
i += 1;
}
}
s if !s.starts_with('-') => {
if !matches!(s, "run" | "start" | "stop" | "reload" | "status") {
config_path = s.to_string();
config_path_explicit = true;
}
}
other => {
eprintln!("Unknown option: {}", other);
@ -114,7 +158,75 @@ pub(crate) fn parse_cli() -> (String, Option<PathBuf>, bool, Option<String>) {
i += 1;
}
(config_path, data_path, silent, log_level)
CliArgs {
config_path,
config_path_explicit,
data_path,
silent,
log_level,
log_destination,
}
}
fn print_help() {
eprintln!("Usage: telemt [COMMAND] [OPTIONS] [config.toml]");
eprintln!();
eprintln!("Commands:");
eprintln!(" run Run in foreground (default if no command given)");
#[cfg(unix)]
{
eprintln!(" start Start as background daemon");
eprintln!(" stop Stop a running daemon");
eprintln!(" reload Reload configuration (send SIGHUP)");
eprintln!(" status Check if daemon is running");
}
eprintln!();
eprintln!("Options:");
eprintln!(
" --data-path <DIR> Set data directory (absolute path; overrides config value)"
);
eprintln!(" --working-dir <DIR> Alias for --data-path");
eprintln!(" --silent, -s Suppress info logs");
eprintln!(" --log-level <LEVEL> debug|verbose|normal|silent");
eprintln!(" --help, -h Show this help");
eprintln!(" --version, -V Show version");
eprintln!();
eprintln!("Logging options:");
eprintln!(" --log-file <PATH> Log to file (default: stderr)");
eprintln!(" --log-file-daily <PATH> Log to file with daily rotation");
#[cfg(unix)]
eprintln!(" --syslog Log to syslog (Unix only)");
eprintln!();
#[cfg(unix)]
{
eprintln!("Daemon options (Unix only):");
eprintln!(" --daemon, -d Fork to background (daemonize)");
eprintln!(" --foreground, -f Explicit foreground mode (for systemd)");
eprintln!(" --pid-file <PATH> PID file path (default: /var/run/telemt.pid)");
eprintln!(" --run-as-user <USER> Drop privileges to this user after binding");
eprintln!(" --run-as-group <GROUP> Drop privileges to this group after binding");
eprintln!(" --working-dir <DIR> Working directory for daemon mode");
eprintln!();
}
eprintln!("Setup (fire-and-forget):");
eprintln!(" --init Generate config, install systemd service, start");
eprintln!(" --port <PORT> Listen port (default: 443)");
eprintln!(" --domain <DOMAIN> TLS domain for masking (default: www.google.com)");
eprintln!(" --secret <HEX> 32-char hex secret (auto-generated if omitted)");
eprintln!(" --user <NAME> Username (default: user)");
eprintln!(" --config-dir <DIR> Config directory (default: /etc/telemt)");
eprintln!(" --no-start Don't start the service after install");
#[cfg(unix)]
{
eprintln!();
eprintln!("Examples:");
eprintln!(" telemt config.toml Run in foreground");
eprintln!(" telemt start config.toml Start as daemon");
eprintln!(" telemt start --pid-file /tmp/t.pid Start with custom PID file");
eprintln!(" telemt stop Stop daemon");
eprintln!(" telemt reload Reload configuration");
eprintln!(" telemt status Check daemon status");
}
}
#[cfg(test)]
@ -132,7 +244,7 @@ mod tests {
let target = startup_cwd.join("config.toml");
std::fs::write(&target, " ").unwrap();
let resolved = resolve_runtime_config_path("config.toml", &startup_cwd);
let resolved = resolve_runtime_config_path("config.toml", &startup_cwd, true);
assert_eq!(resolved, target.canonicalize().unwrap());
let _ = std::fs::remove_file(&target);
@ -148,11 +260,45 @@ mod tests {
let startup_cwd = std::env::temp_dir().join(format!("telemt_cfg_path_missing_{nonce}"));
std::fs::create_dir_all(&startup_cwd).unwrap();
let resolved = resolve_runtime_config_path("missing.toml", &startup_cwd);
let resolved = resolve_runtime_config_path("missing.toml", &startup_cwd, true);
assert_eq!(resolved, startup_cwd.join("missing.toml"));
let _ = std::fs::remove_dir(&startup_cwd);
}
#[test]
fn resolve_runtime_config_path_uses_startup_candidates_when_not_explicit() {
let nonce = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap()
.as_nanos();
let startup_cwd =
std::env::temp_dir().join(format!("telemt_cfg_startup_candidates_{nonce}"));
std::fs::create_dir_all(&startup_cwd).unwrap();
let telemt = startup_cwd.join("telemt.toml");
std::fs::write(&telemt, " ").unwrap();
let resolved = resolve_runtime_config_path("config.toml", &startup_cwd, false);
assert_eq!(resolved, telemt.canonicalize().unwrap());
let _ = std::fs::remove_file(&telemt);
let _ = std::fs::remove_dir(&startup_cwd);
}
#[test]
fn resolve_runtime_config_path_defaults_to_startup_config_when_none_found() {
let nonce = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap()
.as_nanos();
let startup_cwd = std::env::temp_dir().join(format!("telemt_cfg_startup_default_{nonce}"));
std::fs::create_dir_all(&startup_cwd).unwrap();
let resolved = resolve_runtime_config_path("config.toml", &startup_cwd, false);
assert_eq!(resolved, startup_cwd.join("config.toml"));
let _ = std::fs::remove_dir(&startup_cwd);
}
}
pub(crate) fn print_proxy_links(host: &str, port: u16, config: &ProxyConfig) {
@ -14,6 +14,7 @@ use crate::crypto::SecureRandom;
use crate::ip_tracker::UserIpTracker;
use crate::proxy::ClientHandler;
use crate::proxy::route_mode::{ROUTE_SWITCH_ERROR_MSG, RouteRuntimeController};
use crate::proxy::shared_state::ProxySharedState;
use crate::startup::{COMPONENT_LISTENERS_BIND, StartupTracker};
use crate::stats::beobachten::BeobachtenStore;
use crate::stats::{ReplayChecker, Stats};
@ -49,6 +50,7 @@ pub(crate) async fn bind_listeners(
tls_cache: Option<Arc<TlsFrontCache>>,
ip_tracker: Arc<UserIpTracker>,
beobachten: Arc<BeobachtenStore>,
shared: Arc<ProxySharedState>,
max_connections: Arc<Semaphore>,
) -> Result<BoundListeners, Box<dyn Error>> {
startup_tracker
@ -72,6 +74,7 @@ pub(crate) async fn bind_listeners(
let options = ListenOptions {
reuse_port: listener_conf.reuse_allow,
ipv6_only: listener_conf.ip.is_ipv6(),
backlog: config.server.listen_backlog,
..Default::default()
};
@ -223,6 +226,7 @@ pub(crate) async fn bind_listeners(
let tls_cache = tls_cache.clone();
let ip_tracker = ip_tracker.clone();
let beobachten = beobachten.clone();
let shared = shared.clone();
let max_connections_unix = max_connections.clone();
tokio::spawn(async move {
@ -258,6 +262,7 @@ pub(crate) async fn bind_listeners(
break;
}
Err(_) => {
stats.increment_accept_permit_timeout_total();
debug!(
timeout_ms = accept_permit_timeout_ms,
"Dropping accepted unix connection: permit wait timeout"
@ -283,11 +288,12 @@ pub(crate) async fn bind_listeners(
let tls_cache = tls_cache.clone();
let ip_tracker = ip_tracker.clone();
let beobachten = beobachten.clone();
let shared = shared.clone();
let proxy_protocol_enabled = config.server.proxy_protocol;
tokio::spawn(async move {
let _permit = permit;
if let Err(e) = crate::proxy::client::handle_client_stream(
if let Err(e) = crate::proxy::client::handle_client_stream_with_shared(
stream,
fake_peer,
config,
@ -301,6 +307,7 @@ pub(crate) async fn bind_listeners(
tls_cache,
ip_tracker,
beobachten,
shared,
proxy_protocol_enabled,
)
.await
@ -350,6 +357,7 @@ pub(crate) fn spawn_tcp_accept_loops(
tls_cache: Option<Arc<TlsFrontCache>>,
ip_tracker: Arc<UserIpTracker>,
beobachten: Arc<BeobachtenStore>,
shared: Arc<ProxySharedState>,
max_connections: Arc<Semaphore>,
) {
for (listener, listener_proxy_protocol) in listeners {
@ -365,6 +373,7 @@ pub(crate) fn spawn_tcp_accept_loops(
let tls_cache = tls_cache.clone();
let ip_tracker = ip_tracker.clone();
let beobachten = beobachten.clone();
let shared = shared.clone();
let max_connections_tcp = max_connections.clone();
tokio::spawn(async move {
@ -399,6 +408,7 @@ pub(crate) fn spawn_tcp_accept_loops(
break;
}
Err(_) => {
stats.increment_accept_permit_timeout_total();
debug!(
peer = %peer_addr,
timeout_ms = accept_permit_timeout_ms,
@ -420,13 +430,14 @@ pub(crate) fn spawn_tcp_accept_loops(
let tls_cache = tls_cache.clone();
let ip_tracker = ip_tracker.clone();
let beobachten = beobachten.clone();
let shared = shared.clone();
let proxy_protocol_enabled = listener_proxy_protocol;
let real_peer_report = Arc::new(std::sync::Mutex::new(None));
let real_peer_report_for_handler = real_peer_report.clone();
tokio::spawn(async move {
let _permit = permit;
if let Err(e) = ClientHandler::new(
if let Err(e) = ClientHandler::new_with_shared(
stream,
peer_addr,
config,
@ -440,6 +451,7 @@ pub(crate) fn spawn_tcp_accept_loops(
tls_cache,
ip_tracker,
beobachten,
shared,
proxy_protocol_enabled,
real_peer_report_for_handler,
)
@ -277,6 +277,8 @@ pub(crate) async fn initialize_me_pool(
config.general.me_warn_rate_limit_ms,
config.general.me_route_no_writer_mode,
config.general.me_route_no_writer_wait_ms,
config.general.me_route_hybrid_max_wait_ms,
config.general.me_route_blocking_send_timeout_ms,
config.general.me_route_inline_recovery_attempts,
config.general.me_route_inline_recovery_wait_ms,
);
@ -29,10 +29,12 @@ use tracing_subscriber::{EnvFilter, fmt, prelude::*, reload};
use crate::api;
use crate::config::{LogLevel, ProxyConfig};
use crate::conntrack_control;
use crate::crypto::SecureRandom;
use crate::ip_tracker::UserIpTracker;
use crate::network::probe::{decide_network_capabilities, log_probe_result, run_probe};
use crate::proxy::route_mode::{RelayRouteMode, RouteRuntimeController};
use crate::proxy::shared_state::ProxySharedState;
use crate::startup::{
COMPONENT_API_BOOTSTRAP, COMPONENT_CONFIG_LOAD, COMPONENT_ME_POOL_CONSTRUCT,
COMPONENT_ME_POOL_INIT_STAGE1, COMPONENT_ME_PROXY_CONFIG_V4, COMPONENT_ME_PROXY_CONFIG_V6,
@ -47,8 +49,55 @@ use crate::transport::UpstreamManager;
use crate::transport::middle_proxy::MePool;
use helpers::{parse_cli, resolve_runtime_config_path};
#[cfg(unix)]
use crate::daemon::{DaemonOptions, PidFile, drop_privileges};
/// Runs the full telemt runtime startup pipeline and blocks until shutdown.
///
/// On Unix, daemon options should be handled before calling this function
/// (daemonization must happen before tokio runtime starts).
#[cfg(unix)]
pub async fn run_with_daemon(
daemon_opts: DaemonOptions,
) -> std::result::Result<(), Box<dyn std::error::Error>> {
run_inner(daemon_opts).await
}
/// Runs the full telemt runtime startup pipeline and blocks until shutdown.
///
/// This is the main entry point for non-daemon mode or when called as a library.
#[allow(dead_code)]
pub async fn run() -> std::result::Result<(), Box<dyn std::error::Error>> {
#[cfg(unix)]
{
// Parse CLI to get daemon options even in simple run() path
let args: Vec<String> = std::env::args().skip(1).collect();
let daemon_opts = crate::cli::parse_daemon_args(&args);
run_inner(daemon_opts).await
}
#[cfg(not(unix))]
{
run_inner().await
}
}
#[cfg(unix)]
async fn run_inner(
daemon_opts: DaemonOptions,
) -> std::result::Result<(), Box<dyn std::error::Error>> {
// Acquire PID file if daemonizing or if explicitly requested
// Keep it alive until shutdown (underscore prefix = intentionally kept for RAII cleanup)
let _pid_file = if daemon_opts.daemonize || daemon_opts.pid_file.is_some() {
let mut pf = PidFile::new(daemon_opts.pid_file_path());
if let Err(e) = pf.acquire() {
eprintln!("[telemt] {}", e);
std::process::exit(1);
}
Some(pf)
} else {
None
};
let process_started_at = Instant::now();
let process_started_at_epoch_secs = SystemTime::now()
.duration_since(UNIX_EPOCH)
@ -61,7 +110,13 @@ pub async fn run() -> std::result::Result<(), Box<dyn std::error::Error>> {
Some("load and validate config".to_string()),
)
.await;
let (config_path_cli, data_path, cli_silent, cli_log_level) = parse_cli();
let cli_args = parse_cli();
let config_path_cli = cli_args.config_path;
let config_path_explicit = cli_args.config_path_explicit;
let data_path = cli_args.data_path;
let cli_silent = cli_args.silent;
let cli_log_level = cli_args.log_level;
let log_destination = cli_args.log_destination;
let startup_cwd = match std::env::current_dir() {
Ok(cwd) => cwd,
Err(e) => {
@ -69,7 +124,8 @@ pub async fn run() -> std::result::Result<(), Box<dyn std::error::Error>> {
std::process::exit(1);
}
};
let config_path = resolve_runtime_config_path(&config_path_cli, &startup_cwd);
let mut config_path =
resolve_runtime_config_path(&config_path_cli, &startup_cwd, config_path_explicit);
let mut config = match ProxyConfig::load(&config_path) {
Ok(c) => c,
@ -79,11 +135,99 @@ pub async fn run() -> std::result::Result<(), Box<dyn std::error::Error>> {
std::process::exit(1);
} else {
let default = ProxyConfig::default();
std::fs::write(&config_path, toml::to_string_pretty(&default).unwrap()).unwrap();
let serialized =
match toml::to_string_pretty(&default).or_else(|_| toml::to_string(&default)) {
Ok(value) => Some(value),
Err(serialize_error) => {
eprintln!(
"[telemt] Warning: failed to serialize default config: {}",
serialize_error
);
None
}
};
if config_path_explicit {
if let Some(serialized) = serialized.as_ref() {
if let Err(write_error) = std::fs::write(&config_path, serialized) {
eprintln!(
"[telemt] Error: failed to create explicit config at {}: {}",
config_path.display(),
write_error
);
std::process::exit(1);
}
eprintln!(
"[telemt] Created default config at {}",
config_path.display()
);
} else {
eprintln!(
"[telemt] Warning: running with in-memory default config without writing to disk"
);
}
} else {
let system_dir = std::path::Path::new("/etc/telemt");
let system_config_path = system_dir.join("telemt.toml");
let startup_config_path = startup_cwd.join("config.toml");
let mut persisted = false;
if let Some(serialized) = serialized.as_ref() {
match std::fs::create_dir_all(system_dir) {
Ok(()) => match std::fs::write(&system_config_path, serialized) {
Ok(()) => {
config_path = system_config_path;
eprintln!(
"[telemt] Created default config at {}",
config_path.display()
);
persisted = true;
}
Err(write_error) => {
eprintln!(
"[telemt] Warning: failed to write default config at {}: {}",
system_config_path.display(),
write_error
);
}
},
Err(create_error) => {
eprintln!(
"[telemt] Warning: failed to create {}: {}",
system_dir.display(),
create_error
);
}
}
if !persisted {
match std::fs::write(&startup_config_path, serialized) {
Ok(()) => {
config_path = startup_config_path;
eprintln!(
"[telemt] Created default config at {}",
config_path.display()
);
persisted = true;
}
Err(write_error) => {
eprintln!(
"[telemt] Warning: failed to write default config at {}: {}",
startup_config_path.display(),
write_error
);
}
}
}
}
if !persisted {
eprintln!(
"[telemt] Warning: running with in-memory default config without writing to disk"
);
}
}
default
}
}
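The config-fallback branch above persists a freshly generated default config through a cascade: explicit path (fatal on write failure), then `/etc/telemt/telemt.toml`, then `<cwd>/config.toml`, and finally an in-memory-only default if nothing is writable. A minimal sketch of that cascade, with a `try_write` predicate standing in for `fs::write` so the fallback order is testable (`persist_default` is a hypothetical name):

```rust
/// Sketch of the default-config persistence cascade.
/// Returns the path the config was persisted to, or None if the
/// process should run with an in-memory default only.
fn persist_default(
    explicit: Option<&str>,
    cwd: &str,
    mut try_write: impl FnMut(&str) -> bool,
) -> Option<String> {
    if let Some(path) = explicit {
        // Explicit path: write there or give up (the real code exits).
        return try_write(path).then(|| path.to_string());
    }
    for path in [
        "/etc/telemt/telemt.toml".to_string(),
        format!("{cwd}/config.toml"),
    ] {
        if try_write(&path) {
            return Some(path);
        }
    }
    None // run with the in-memory default; nothing persisted
}

fn main() {
    // /etc/telemt is not writable; fall back to the startup cwd.
    let got = persist_default(None, "/srv/app", |p| !p.starts_with("/etc"));
    assert_eq!(got.as_deref(), Some("/srv/app/config.toml"));
    // Nothing writable: stay in memory.
    assert_eq!(persist_default(None, "/srv/app", |_| false), None);
}
```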
@ -115,8 +259,7 @@ pub async fn run() -> std::result::Result<(), Box<dyn std::error::Error>> {
);
std::process::exit(1);
}
} else {
if let Err(e) = std::fs::create_dir_all(data_path) {
} else if let Err(e) = std::fs::create_dir_all(data_path) {
eprintln!(
"[telemt] Can't create data_path {}: {}",
data_path.display(),
@ -124,7 +267,6 @@ pub async fn run() -> std::result::Result<(), Box<dyn std::error::Error>> {
);
std::process::exit(1);
}
}
if let Err(e) = std::env::set_current_dir(data_path) {
eprintln!(
@ -161,17 +303,43 @@ pub async fn run() -> std::result::Result<(), Box<dyn std::error::Error>> {
)
.await;
// Configure color output based on config
// Initialize logging based on destination
let _logging_guard: Option<crate::logging::LoggingGuard>;
match log_destination {
crate::logging::LogDestination::Stderr => {
// Default: log to stderr (works with systemd journald)
let fmt_layer = if config.general.disable_colors {
fmt::Layer::default().with_ansi(false)
} else {
fmt::Layer::default().with_ansi(true)
};
tracing_subscriber::registry()
.with(filter_layer)
.with(fmt_layer)
.init();
_logging_guard = None;
}
#[cfg(unix)]
crate::logging::LogDestination::Syslog => {
// Syslog: for OpenRC/FreeBSD
let logging_opts = crate::logging::LoggingOptions {
destination: log_destination,
disable_colors: true,
};
let (_, guard) = crate::logging::init_logging(&logging_opts, "info");
_logging_guard = Some(guard);
}
crate::logging::LogDestination::File { .. } => {
// File logging with optional rotation
let logging_opts = crate::logging::LoggingOptions {
destination: log_destination,
disable_colors: true,
};
let (_, guard) = crate::logging::init_logging(&logging_opts, "info");
_logging_guard = Some(guard);
}
}
startup_tracker
.complete_component(
COMPONENT_TRACING_INIT,
@ -225,6 +393,7 @@ pub async fn run() -> std::result::Result<(), Box<dyn std::error::Error>> {
config.general.upstream_connect_retry_attempts,
config.general.upstream_connect_retry_backoff_ms,
config.general.upstream_connect_budget_ms,
config.general.tg_connect,
config.general.upstream_unhealthy_fail_threshold,
config.general.upstream_connect_failfast_hard_errors,
stats.clone(),
@ -554,6 +723,12 @@ pub async fn run() -> std::result::Result<(), Box<dyn std::error::Error>> {
)
.await;
let _admission_tx_hold = admission_tx;
let shared_state = ProxySharedState::new();
conntrack_control::spawn_conntrack_controller(
config_rx.clone(),
stats.clone(),
shared_state.clone(),
);
let bound = listeners::bind_listeners(
&config,
@ -574,6 +749,7 @@ pub async fn run() -> std::result::Result<(), Box<dyn std::error::Error>> {
tls_cache.clone(),
ip_tracker.clone(),
beobachten.clone(),
shared_state.clone(),
max_connections.clone(),
)
.await?;
@ -585,6 +761,14 @@ pub async fn run() -> std::result::Result<(), Box<dyn std::error::Error>> {
std::process::exit(1);
}
// Drop privileges after binding sockets (which may require root for port < 1024)
if daemon_opts.user.is_some() || daemon_opts.group.is_some() {
if let Err(e) = drop_privileges(daemon_opts.user.as_deref(), daemon_opts.group.as_deref()) {
error!(error = %e, "Failed to drop privileges");
std::process::exit(1);
}
}
runtime_tasks::apply_runtime_log_filter(
has_rust_log,
&effective_log_level,
@ -605,6 +789,9 @@ pub async fn run() -> std::result::Result<(), Box<dyn std::error::Error>> {
runtime_tasks::mark_runtime_ready(&startup_tracker).await;
// Spawn signal handlers for SIGUSR1/SIGUSR2 (non-shutdown signals)
shutdown::spawn_signal_handlers(stats.clone(), process_started_at);
listeners::spawn_tcp_accept_loops(
listeners,
config_rx.clone(),
@ -619,10 +806,11 @@ pub async fn run() -> std::result::Result<(), Box<dyn std::error::Error>> {
tls_cache.clone(),
ip_tracker.clone(),
beobachten.clone(),
shared_state,
max_connections.clone(),
);
shutdown::wait_for_shutdown(process_started_at, me_pool).await;
shutdown::wait_for_shutdown(process_started_at, me_pool, stats).await;
Ok(())
}
@ -323,10 +323,12 @@ pub(crate) async fn spawn_metrics_if_configured(
let config_rx_metrics = config_rx.clone();
let ip_tracker_metrics = ip_tracker.clone();
let whitelist = config.server.metrics_whitelist.clone();
let listen_backlog = config.server.listen_backlog;
tokio::spawn(async move {
metrics::serve(
port,
listen,
listen_backlog,
stats,
beobachten,
ip_tracker_metrics,
@ -1,25 +1,100 @@
//! Shutdown and signal handling for telemt.
//!
//! Handles graceful shutdown on various signals:
//! - SIGINT (Ctrl+C) / SIGTERM: Graceful shutdown
//! - SIGQUIT: Graceful shutdown with stats dump
//! - SIGUSR1: Reserved for log rotation (logs acknowledgment)
//! - SIGUSR2: Dump runtime status to log
//!
//! SIGHUP is handled separately in config/hot_reload.rs for config reload.
use std::sync::Arc;
use std::time::{Duration, Instant};
#[cfg(not(unix))]
use tokio::signal;
use tracing::{error, info, warn};
#[cfg(unix)]
use tokio::signal::unix::{SignalKind, signal};
use tracing::{info, warn};
use crate::stats::Stats;
use crate::transport::middle_proxy::MePool;
use super::helpers::{format_uptime, unit_label};
pub(crate) async fn wait_for_shutdown(process_started_at: Instant, me_pool: Option<Arc<MePool>>) {
match signal::ctrl_c().await {
Ok(()) => {
/// Signal that triggered shutdown.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ShutdownSignal {
/// SIGINT (Ctrl+C)
Interrupt,
/// SIGTERM
Terminate,
/// SIGQUIT (with stats dump)
Quit,
}
impl std::fmt::Display for ShutdownSignal {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
ShutdownSignal::Interrupt => write!(f, "SIGINT"),
ShutdownSignal::Terminate => write!(f, "SIGTERM"),
ShutdownSignal::Quit => write!(f, "SIGQUIT"),
}
}
}
/// Waits for a shutdown signal and performs graceful shutdown.
pub(crate) async fn wait_for_shutdown(
process_started_at: Instant,
me_pool: Option<Arc<MePool>>,
stats: Arc<Stats>,
) {
let signal = wait_for_shutdown_signal().await;
perform_shutdown(signal, process_started_at, me_pool, &stats).await;
}
/// Waits for any shutdown signal (SIGINT, SIGTERM, SIGQUIT).
#[cfg(unix)]
async fn wait_for_shutdown_signal() -> ShutdownSignal {
let mut sigint = signal(SignalKind::interrupt()).expect("Failed to register SIGINT handler");
let mut sigterm = signal(SignalKind::terminate()).expect("Failed to register SIGTERM handler");
let mut sigquit = signal(SignalKind::quit()).expect("Failed to register SIGQUIT handler");
tokio::select! {
_ = sigint.recv() => ShutdownSignal::Interrupt,
_ = sigterm.recv() => ShutdownSignal::Terminate,
_ = sigquit.recv() => ShutdownSignal::Quit,
}
}
#[cfg(not(unix))]
async fn wait_for_shutdown_signal() -> ShutdownSignal {
signal::ctrl_c().await.expect("Failed to listen for Ctrl+C");
ShutdownSignal::Interrupt
}
/// Performs graceful shutdown sequence.
async fn perform_shutdown(
signal: ShutdownSignal,
process_started_at: Instant,
me_pool: Option<Arc<MePool>>,
stats: &Stats,
) {
let shutdown_started_at = Instant::now();
info!(signal = %signal, "Received shutdown signal");
// Dump stats if SIGQUIT
if signal == ShutdownSignal::Quit {
dump_stats(stats, process_started_at);
}
info!("Shutting down...");
let uptime_secs = process_started_at.elapsed().as_secs();
info!("Uptime: {}", format_uptime(uptime_secs));
// Graceful ME pool shutdown
if let Some(pool) = &me_pool {
match tokio::time::timeout(
Duration::from_secs(2),
pool.shutdown_send_close_conn_all(),
)
match tokio::time::timeout(Duration::from_secs(2), pool.shutdown_send_close_conn_all())
.await
{
Ok(total) => {
@ -33,13 +108,99 @@ pub(crate) async fn wait_for_shutdown(process_started_at: Instant, me_pool: Opti
}
}
}
let shutdown_secs = shutdown_started_at.elapsed().as_secs();
info!(
"Shutdown completed successfully in {} {}.",
shutdown_secs,
unit_label(shutdown_secs, "second", "seconds")
);
}
Err(e) => error!("Signal error: {}", e),
}
}
/// Dumps runtime statistics to the log.
fn dump_stats(stats: &Stats, process_started_at: Instant) {
let uptime_secs = process_started_at.elapsed().as_secs();
info!("=== Runtime Statistics Dump ===");
info!("Uptime: {}", format_uptime(uptime_secs));
// Connection stats
info!(
"Connections: total={}, current={} (direct={}, me={}), bad={}",
stats.get_connects_all(),
stats.get_current_connections_total(),
stats.get_current_connections_direct(),
stats.get_current_connections_me(),
stats.get_connects_bad(),
);
// ME pool stats
info!(
"ME keepalive: sent={}, pong={}, failed={}, timeout={}",
stats.get_me_keepalive_sent(),
stats.get_me_keepalive_pong(),
stats.get_me_keepalive_failed(),
stats.get_me_keepalive_timeout(),
);
// Relay stats
info!(
"Relay idle: soft_mark={}, hard_close={}, pressure_evict={}",
stats.get_relay_idle_soft_mark_total(),
stats.get_relay_idle_hard_close_total(),
stats.get_relay_pressure_evict_total(),
);
info!("=== End Statistics Dump ===");
}
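`format_uptime` and `unit_label` are helpers defined outside this hunk; a sketch consistent with their call sites (assumed output format, not the crate's actual implementation):

```rust
/// Pick a singular or plural label for a count (assumed helper shape).
fn unit_label(n: u64, singular: &'static str, plural: &'static str) -> &'static str {
    if n == 1 { singular } else { plural }
}

/// Render a duration in seconds as "Xd Yh Zm Ws" (assumed format).
fn format_uptime(mut secs: u64) -> String {
    let days = secs / 86_400;
    secs %= 86_400;
    let hours = secs / 3_600;
    secs %= 3_600;
    let mins = secs / 60;
    secs %= 60;
    format!("{}d {}h {}m {}s", days, hours, mins, secs)
}
```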
/// Spawns a background task to handle operational signals (SIGUSR1, SIGUSR2).
///
/// These signals don't trigger shutdown but perform specific actions:
/// - SIGUSR1: Log rotation acknowledgment (for external log rotation tools)
/// - SIGUSR2: Dump runtime status to log
#[cfg(unix)]
pub(crate) fn spawn_signal_handlers(stats: Arc<Stats>, process_started_at: Instant) {
tokio::spawn(async move {
let mut sigusr1 =
signal(SignalKind::user_defined1()).expect("Failed to register SIGUSR1 handler");
let mut sigusr2 =
signal(SignalKind::user_defined2()).expect("Failed to register SIGUSR2 handler");
loop {
tokio::select! {
_ = sigusr1.recv() => {
handle_sigusr1();
}
_ = sigusr2.recv() => {
handle_sigusr2(&stats, process_started_at);
}
}
}
});
}
/// No-op on non-Unix platforms.
#[cfg(not(unix))]
pub(crate) fn spawn_signal_handlers(_stats: Arc<Stats>, _process_started_at: Instant) {
// No SIGUSR1/SIGUSR2 on non-Unix
}
/// Handles SIGUSR1 - log rotation signal.
///
/// This signal is typically sent by logrotate or similar tools after
/// rotating log files. Since tracing-subscriber doesn't natively support
/// reopening files, we just acknowledge the signal. If file logging is
/// added in the future, this would reopen log file handles.
#[cfg(unix)]
fn handle_sigusr1() {
info!("SIGUSR1 received - log rotation acknowledged");
// Future: If using file-based logging, reopen file handles here
}
/// Handles SIGUSR2 - dump runtime status.
#[cfg(unix)]
fn handle_sigusr2(stats: &Stats, process_started_at: Instant) {
info!("SIGUSR2 received - dumping runtime status");
dump_stats(stats, process_started_at);
}

View File

@ -3,7 +3,10 @@
mod api;
mod cli;
mod config;
mod conntrack_control;
mod crypto;
#[cfg(unix)]
mod daemon;
mod error;
mod ip_tracker;
#[cfg(test)]
@ -15,11 +18,13 @@ mod ip_tracker_hotpath_adversarial_tests;
#[cfg(test)]
#[path = "tests/ip_tracker_regression_tests.rs"]
mod ip_tracker_regression_tests;
mod logging;
mod maestro;
mod metrics;
mod network;
mod protocol;
mod proxy;
mod service;
mod startup;
mod stats;
mod stream;
@ -27,8 +32,49 @@ mod tls_front;
mod transport;
mod util;
fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
// Install rustls crypto provider early
let _ = rustls::crypto::ring::default_provider().install_default();
let args: Vec<String> = std::env::args().skip(1).collect();
let cmd = cli::parse_command(&args);
// Handle subcommands that don't need the server (stop, reload, status, init)
if let Some(exit_code) = cli::execute_subcommand(&cmd) {
std::process::exit(exit_code);
}
#[cfg(unix)]
{
let daemon_opts = cmd.daemon_opts;
// Daemonize BEFORE runtime
if daemon_opts.should_daemonize() {
match daemon::daemonize(daemon_opts.working_dir.as_deref()) {
Ok(daemon::DaemonizeResult::Parent) => {
std::process::exit(0);
}
Ok(daemon::DaemonizeResult::Child) => {
// continue
}
Err(e) => {
eprintln!("[telemt] Daemonization failed: {}", e);
std::process::exit(1);
}
}
}
tokio::runtime::Builder::new_multi_thread()
.enable_all()
.build()?
.block_on(maestro::run_with_daemon(daemon_opts))
}
#[cfg(not(unix))]
{
tokio::runtime::Builder::new_multi_thread()
.enable_all()
.build()?
.block_on(maestro::run())
}
}
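`cli::execute_subcommand` returning `Some(exit_code)` for server-less subcommands is the pattern `main` relies on before any runtime is built; a std-only sketch of that dispatch shape (names and exit codes hypothetical):

```rust
/// Hypothetical dispatch: Some(code) means "subcommand handled, exit now",
/// None means "no subcommand, fall through and start the server".
fn execute_subcommand(args: &[String]) -> Option<i32> {
    match args.first().map(String::as_str) {
        Some("status") => Some(0),             // would query and print status here
        Some("stop") | Some("reload") => Some(0), // would signal a running daemon here
        _ => None,                              // run the server
    }
}
```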

View File

@ -22,6 +22,7 @@ use crate::transport::{ListenOptions, create_listener};
pub async fn serve(
port: u16,
listen: Option<String>,
listen_backlog: u32,
stats: Arc<Stats>,
beobachten: Arc<BeobachtenStore>,
ip_tracker: Arc<UserIpTracker>,
@ -40,7 +41,7 @@ pub async fn serve(
}
};
let is_ipv6 = addr.is_ipv6();
match bind_metrics_listener(addr, is_ipv6, listen_backlog) {
Ok(listener) => {
info!("Metrics endpoint: http://{}/metrics and /beobachten", addr);
serve_listener(
@ -60,7 +61,7 @@ pub async fn serve(
let mut listener_v6 = None;
let addr_v4 = SocketAddr::from(([0, 0, 0, 0], port));
match bind_metrics_listener(addr_v4, false, listen_backlog) {
Ok(listener) => {
info!(
"Metrics endpoint: http://{}/metrics and /beobachten",
@ -74,7 +75,7 @@ pub async fn serve(
}
let addr_v6 = SocketAddr::from(([0, 0, 0, 0, 0, 0, 0, 0], port));
match bind_metrics_listener(addr_v6, true, listen_backlog) {
Ok(listener) => {
info!(
"Metrics endpoint: http://[::]:{}/metrics and /beobachten",
@ -122,10 +123,15 @@ pub async fn serve(
}
}
fn bind_metrics_listener(
addr: SocketAddr,
ipv6_only: bool,
listen_backlog: u32,
) -> std::io::Result<TcpListener> {
let options = ListenOptions {
reuse_port: false,
ipv6_only,
backlog: listen_backlog,
..Default::default()
};
let socket = create_listener(addr, &options)?;
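The `..Default::default()` struct-update above sets only the fields this call cares about and lets `Default` fill the rest; a sketch of the pattern with the fields shown in this hunk (the extra `nodelay` field is hypothetical, added only to illustrate what the update syntax fills in):

```rust
#[derive(Debug, Clone, PartialEq)]
struct ListenOptions {
    reuse_port: bool,
    ipv6_only: bool,
    backlog: u32,
    nodelay: bool, // hypothetical extra field filled by ..Default::default()
}

impl Default for ListenOptions {
    fn default() -> Self {
        Self { reuse_port: false, ipv6_only: false, backlog: 1024, nodelay: true }
    }
}

/// Build options the way bind_metrics_listener does: override a few fields,
/// default the rest.
fn options_for(ipv6_only: bool, backlog: u32) -> ListenOptions {
    ListenOptions { reuse_port: false, ipv6_only, backlog, ..Default::default() }
}
```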
@ -287,6 +293,27 @@ async fn render_metrics(stats: &Stats, config: &ProxyConfig, ip_tracker: &UserIp
}
);
let _ = writeln!(
out,
"# HELP telemt_buffer_pool_buffers_total Snapshot of pooled and allocated buffers"
);
let _ = writeln!(out, "# TYPE telemt_buffer_pool_buffers_total gauge");
let _ = writeln!(
out,
"telemt_buffer_pool_buffers_total{{kind=\"pooled\"}} {}",
stats.get_buffer_pool_pooled_gauge()
);
let _ = writeln!(
out,
"telemt_buffer_pool_buffers_total{{kind=\"allocated\"}} {}",
stats.get_buffer_pool_allocated_gauge()
);
let _ = writeln!(
out,
"telemt_buffer_pool_buffers_total{{kind=\"in_use\"}} {}",
stats.get_buffer_pool_in_use_gauge()
);
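Each metric family emitted above follows the Prometheus text exposition format: one `# HELP` line, one `# TYPE` line, then one sample per label value. The repeated `writeln!` calls can be sketched as a small helper (names hypothetical; the actual code inlines the writes):

```rust
use std::fmt::Write;

/// Write one gauge family in Prometheus text exposition format,
/// mirroring the HELP/TYPE/sample shape used by render_metrics.
fn write_gauge_family(out: &mut String, name: &str, help: &str, samples: &[(&str, u64)]) {
    let _ = writeln!(out, "# HELP {} {}", name, help);
    let _ = writeln!(out, "# TYPE {} gauge", name);
    for (kind, value) in samples {
        let _ = writeln!(out, "{}{{kind=\"{}\"}} {}", name, kind, value);
    }
}
```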
let _ = writeln!(
out,
"# HELP telemt_connections_total Total accepted connections"
@ -332,6 +359,134 @@ async fn render_metrics(stats: &Stats, config: &ProxyConfig, ip_tracker: &UserIp
}
);
let _ = writeln!(
out,
"# HELP telemt_accept_permit_timeout_total Accepted connections dropped due to permit wait timeout"
);
let _ = writeln!(out, "# TYPE telemt_accept_permit_timeout_total counter");
let _ = writeln!(
out,
"telemt_accept_permit_timeout_total {}",
if core_enabled {
stats.get_accept_permit_timeout_total()
} else {
0
}
);
let _ = writeln!(
out,
"# HELP telemt_conntrack_control_state Runtime conntrack control state flags"
);
let _ = writeln!(out, "# TYPE telemt_conntrack_control_state gauge");
let _ = writeln!(
out,
"telemt_conntrack_control_state{{flag=\"enabled\"}} {}",
if stats.get_conntrack_control_enabled() {
1
} else {
0
}
);
let _ = writeln!(
out,
"telemt_conntrack_control_state{{flag=\"available\"}} {}",
if stats.get_conntrack_control_available() {
1
} else {
0
}
);
let _ = writeln!(
out,
"telemt_conntrack_control_state{{flag=\"pressure_active\"}} {}",
if stats.get_conntrack_pressure_active() {
1
} else {
0
}
);
let _ = writeln!(
out,
"telemt_conntrack_control_state{{flag=\"rule_apply_ok\"}} {}",
if stats.get_conntrack_rule_apply_ok() {
1
} else {
0
}
);
let _ = writeln!(
out,
"# HELP telemt_conntrack_event_queue_depth Pending close events in conntrack control queue"
);
let _ = writeln!(out, "# TYPE telemt_conntrack_event_queue_depth gauge");
let _ = writeln!(
out,
"telemt_conntrack_event_queue_depth {}",
stats.get_conntrack_event_queue_depth()
);
let _ = writeln!(
out,
"# HELP telemt_conntrack_delete_total Conntrack delete attempts by outcome"
);
let _ = writeln!(out, "# TYPE telemt_conntrack_delete_total counter");
let _ = writeln!(
out,
"telemt_conntrack_delete_total{{result=\"attempt\"}} {}",
if core_enabled {
stats.get_conntrack_delete_attempt_total()
} else {
0
}
);
let _ = writeln!(
out,
"telemt_conntrack_delete_total{{result=\"success\"}} {}",
if core_enabled {
stats.get_conntrack_delete_success_total()
} else {
0
}
);
let _ = writeln!(
out,
"telemt_conntrack_delete_total{{result=\"not_found\"}} {}",
if core_enabled {
stats.get_conntrack_delete_not_found_total()
} else {
0
}
);
let _ = writeln!(
out,
"telemt_conntrack_delete_total{{result=\"error\"}} {}",
if core_enabled {
stats.get_conntrack_delete_error_total()
} else {
0
}
);
let _ = writeln!(
out,
"# HELP telemt_conntrack_close_event_drop_total Dropped conntrack close events due to queue pressure or unavailable sender"
);
let _ = writeln!(
out,
"# TYPE telemt_conntrack_close_event_drop_total counter"
);
let _ = writeln!(
out,
"telemt_conntrack_close_event_drop_total {}",
if core_enabled {
stats.get_conntrack_close_event_drop_total()
} else {
0
}
);
let _ = writeln!(
out,
"# HELP telemt_upstream_connect_attempt_total Upstream connect attempts across all requests"
@ -935,6 +1090,39 @@ async fn render_metrics(stats: &Stats, config: &ProxyConfig, ip_tracker: &UserIp
}
);
let _ = writeln!(
out,
"# HELP telemt_me_c2me_enqueue_events_total ME client->ME enqueue outcomes"
);
let _ = writeln!(out, "# TYPE telemt_me_c2me_enqueue_events_total counter");
let _ = writeln!(
out,
"telemt_me_c2me_enqueue_events_total{{event=\"full\"}} {}",
if me_allows_normal {
stats.get_me_c2me_send_full_total()
} else {
0
}
);
let _ = writeln!(
out,
"telemt_me_c2me_enqueue_events_total{{event=\"high_water\"}} {}",
if me_allows_normal {
stats.get_me_c2me_send_high_water_total()
} else {
0
}
);
let _ = writeln!(
out,
"telemt_me_c2me_enqueue_events_total{{event=\"timeout\"}} {}",
if me_allows_normal {
stats.get_me_c2me_send_timeout_total()
} else {
0
}
);
let _ = writeln!(
out,
"# HELP telemt_me_d2c_batches_total Total DC->Client flush batches"
@ -1558,6 +1746,40 @@ async fn render_metrics(stats: &Stats, config: &ProxyConfig, ip_tracker: &UserIp
0
}
);
let _ = writeln!(
out,
"# HELP telemt_me_endpoint_quarantine_unexpected_total ME endpoint quarantines caused by unexpected writer removals"
);
let _ = writeln!(
out,
"# TYPE telemt_me_endpoint_quarantine_unexpected_total counter"
);
let _ = writeln!(
out,
"telemt_me_endpoint_quarantine_unexpected_total {}",
if me_allows_normal {
stats.get_me_endpoint_quarantine_unexpected_total()
} else {
0
}
);
let _ = writeln!(
out,
"# HELP telemt_me_endpoint_quarantine_draining_suppressed_total Draining writer removals that skipped endpoint quarantine"
);
let _ = writeln!(
out,
"# TYPE telemt_me_endpoint_quarantine_draining_suppressed_total counter"
);
let _ = writeln!(
out,
"telemt_me_endpoint_quarantine_draining_suppressed_total {}",
if me_allows_normal {
stats.get_me_endpoint_quarantine_draining_suppressed_total()
} else {
0
}
);
let _ = writeln!(
out,
@ -2318,6 +2540,20 @@ async fn render_metrics(stats: &Stats, config: &ProxyConfig, ip_tracker: &UserIp
0
}
);
let _ = writeln!(
out,
"# HELP telemt_me_hybrid_timeout_total ME hybrid route timeouts after bounded retry window"
);
let _ = writeln!(out, "# TYPE telemt_me_hybrid_timeout_total counter");
let _ = writeln!(
out,
"telemt_me_hybrid_timeout_total {}",
if me_allows_normal {
stats.get_me_hybrid_timeout_total()
} else {
0
}
);
let _ = writeln!(
out,
"# HELP telemt_me_async_recovery_trigger_total Async ME recovery trigger attempts from route path"
@ -2436,6 +2672,48 @@ async fn render_metrics(stats: &Stats, config: &ProxyConfig, ip_tracker: &UserIp
if user_enabled { 0 } else { 1 }
);
let ip_memory = ip_tracker.memory_stats().await;
let _ = writeln!(
out,
"# HELP telemt_ip_tracker_users Number of users tracked by IP limiter state"
);
let _ = writeln!(out, "# TYPE telemt_ip_tracker_users gauge");
let _ = writeln!(
out,
"telemt_ip_tracker_users{{scope=\"active\"}} {}",
ip_memory.active_users
);
let _ = writeln!(
out,
"telemt_ip_tracker_users{{scope=\"recent\"}} {}",
ip_memory.recent_users
);
let _ = writeln!(
out,
"# HELP telemt_ip_tracker_entries Number of IP entries tracked by limiter state"
);
let _ = writeln!(out, "# TYPE telemt_ip_tracker_entries gauge");
let _ = writeln!(
out,
"telemt_ip_tracker_entries{{scope=\"active\"}} {}",
ip_memory.active_entries
);
let _ = writeln!(
out,
"telemt_ip_tracker_entries{{scope=\"recent\"}} {}",
ip_memory.recent_entries
);
let _ = writeln!(
out,
"# HELP telemt_ip_tracker_cleanup_queue_len Deferred disconnect cleanup queue length"
);
let _ = writeln!(out, "# TYPE telemt_ip_tracker_cleanup_queue_len gauge");
let _ = writeln!(
out,
"telemt_ip_tracker_cleanup_queue_len {}",
ip_memory.cleanup_queue_len
);
if user_enabled {
for entry in stats.iter_user_stats() {
let user = entry.key();
@ -2608,6 +2886,9 @@ mod tests {
stats.increment_me_d2c_write_mode(crate::stats::MeD2cWriteMode::Coalesced);
stats.increment_me_d2c_quota_reject_total(crate::stats::MeD2cQuotaRejectStage::PostWrite);
stats.observe_me_d2c_frame_buf_shrink(4096);
stats.increment_me_endpoint_quarantine_total();
stats.increment_me_endpoint_quarantine_unexpected_total();
stats.increment_me_endpoint_quarantine_draining_suppressed_total();
stats.increment_user_connects("alice");
stats.increment_user_curr_connects("alice");
stats.add_user_octets_from("alice", 1024);
@ -2658,6 +2939,9 @@ mod tests {
assert!(output.contains("telemt_me_d2c_quota_reject_total{stage=\"post_write\"} 1"));
assert!(output.contains("telemt_me_d2c_frame_buf_shrink_total 1"));
assert!(output.contains("telemt_me_d2c_frame_buf_shrink_bytes_total 4096"));
assert!(output.contains("telemt_me_endpoint_quarantine_total 1"));
assert!(output.contains("telemt_me_endpoint_quarantine_unexpected_total 1"));
assert!(output.contains("telemt_me_endpoint_quarantine_draining_suppressed_total 1"));
assert!(output.contains("telemt_user_connections_total{user=\"alice\"} 1"));
assert!(output.contains("telemt_user_connections_current{user=\"alice\"} 1"));
assert!(output.contains("telemt_user_octets_from_client{user=\"alice\"} 1024"));
@ -2668,6 +2952,9 @@ mod tests {
assert!(output.contains("telemt_user_unique_ips_recent_window{user=\"alice\"} 1"));
assert!(output.contains("telemt_user_unique_ips_limit{user=\"alice\"} 4"));
assert!(output.contains("telemt_user_unique_ips_utilization{user=\"alice\"} 0.250000"));
assert!(output.contains("telemt_ip_tracker_users{scope=\"active\"} 1"));
assert!(output.contains("telemt_ip_tracker_entries{scope=\"active\"} 1"));
assert!(output.contains("telemt_ip_tracker_cleanup_queue_len 0"));
}
#[tokio::test]
@ -2724,6 +3011,12 @@ mod tests {
assert!(output.contains("# TYPE telemt_me_d2c_write_mode_total counter"));
assert!(output.contains("# TYPE telemt_me_d2c_batch_frames_bucket_total counter"));
assert!(output.contains("# TYPE telemt_me_d2c_flush_duration_us_bucket_total counter"));
assert!(output.contains("# TYPE telemt_me_endpoint_quarantine_total counter"));
assert!(output.contains("# TYPE telemt_me_endpoint_quarantine_unexpected_total counter"));
assert!(
output
.contains("# TYPE telemt_me_endpoint_quarantine_draining_suppressed_total counter")
);
assert!(output.contains("# TYPE telemt_me_writer_removed_total counter"));
assert!(
output
@ -2733,6 +3026,9 @@ mod tests {
assert!(output.contains("# TYPE telemt_user_unique_ips_recent_window gauge"));
assert!(output.contains("# TYPE telemt_user_unique_ips_limit gauge"));
assert!(output.contains("# TYPE telemt_user_unique_ips_utilization gauge"));
assert!(output.contains("# TYPE telemt_ip_tracker_users gauge"));
assert!(output.contains("# TYPE telemt_ip_tracker_entries gauge"));
assert!(output.contains("# TYPE telemt_ip_tracker_cleanup_queue_len gauge"));
}
#[tokio::test]

View File

@ -24,6 +24,8 @@ const DIRECT_S2C_CAP_BYTES: usize = 512 * 1024;
const ME_FRAMES_CAP: usize = 96;
const ME_BYTES_CAP: usize = 384 * 1024;
const ME_DELAY_MIN_US: u64 = 150;
const MAX_USER_PROFILES_ENTRIES: usize = 50_000;
const MAX_USER_KEY_BYTES: usize = 512;
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub enum AdaptiveTier {
@ -234,32 +236,50 @@ fn profiles() -> &'static DashMap<String, UserAdaptiveProfile> {
}
pub fn seed_tier_for_user(user: &str) -> AdaptiveTier {
if user.len() > MAX_USER_KEY_BYTES {
return AdaptiveTier::Base;
}
let now = Instant::now();
if let Some(entry) = profiles().get(user) {
let value = *entry.value();
drop(entry);
if now.saturating_duration_since(value.seen_at) <= PROFILE_TTL {
return value.tier;
}
profiles().remove_if(user, |_, v| {
now.saturating_duration_since(v.seen_at) > PROFILE_TTL
});
}
AdaptiveTier::Base
}
pub fn record_user_tier(user: &str, tier: AdaptiveTier) {
if user.len() > MAX_USER_KEY_BYTES {
return;
}
let now = Instant::now();
let mut was_vacant = false;
match profiles().entry(user.to_string()) {
dashmap::mapref::entry::Entry::Occupied(mut entry) => {
let existing = *entry.get();
let effective = if now.saturating_duration_since(existing.seen_at) > PROFILE_TTL {
tier
} else {
max(existing.tier, tier)
};
entry.insert(UserAdaptiveProfile {
tier: effective,
seen_at: now,
});
}
dashmap::mapref::entry::Entry::Vacant(slot) => {
slot.insert(UserAdaptiveProfile { tier, seen_at: now });
was_vacant = true;
}
}
if was_vacant && profiles().len() > MAX_USER_PROFILES_ENTRIES {
profiles().retain(|_, v| now.saturating_duration_since(v.seen_at) <= PROFILE_TTL);
}
}
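The entry-based upsert above (refresh on occupied, insert on vacant, TTL sweep only when a new key pushes the map past its cap) can be sketched with a plain `HashMap`; dashmap's entry API follows the same shape. A std-only analogue, with the tier reduced to a `u8`:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

const TTL: Duration = Duration::from_secs(60);
const MAX_ENTRIES: usize = 4; // tiny cap for illustration

/// Std-only analogue of record_user_tier's bounded TTL upsert.
fn record(map: &mut HashMap<String, (u8, Instant)>, user: &str, tier: u8, now: Instant) {
    let mut was_vacant = false;
    match map.entry(user.to_string()) {
        std::collections::hash_map::Entry::Occupied(mut e) => {
            let (old_tier, seen_at) = *e.get();
            // Stale entry: take the new tier; fresh entry: keep the max.
            let effective = if now.saturating_duration_since(seen_at) > TTL {
                tier
            } else {
                old_tier.max(tier)
            };
            e.insert((effective, now));
        }
        std::collections::hash_map::Entry::Vacant(slot) => {
            slot.insert((tier, now));
            was_vacant = true;
        }
    }
    // Sweep expired entries only when a brand-new key grew the map past its cap.
    if was_vacant && map.len() > MAX_ENTRIES {
        map.retain(|_, v| now.saturating_duration_since(v.1) <= TTL);
    }
}
```

Sweeping only on vacant inserts keeps the common refresh path O(1); the occasional full `retain` is amortized over the inserts that triggered it.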
pub fn direct_copy_buffers_for_tier(
@ -310,6 +330,14 @@ fn scale(base: usize, numerator: usize, denominator: usize, cap: usize) -> usize
scaled.min(cap).max(1)
}
#[cfg(test)]
#[path = "tests/adaptive_buffers_security_tests.rs"]
mod adaptive_buffers_security_tests;
#[cfg(test)]
#[path = "tests/adaptive_buffers_record_race_security_tests.rs"]
mod adaptive_buffers_record_race_security_tests;
#[cfg(test)]
mod tests {
use super::*;

View File

@ -80,11 +80,16 @@ use crate::transport::middle_proxy::MePool;
use crate::transport::socket::normalize_ip;
use crate::transport::{UpstreamManager, configure_client_socket, parse_proxy_protocol};
use crate::proxy::direct_relay::handle_via_direct_with_shared;
use crate::proxy::handshake::{
HandshakeSuccess, handle_mtproto_handshake_with_shared, handle_tls_handshake_with_shared,
};
#[cfg(test)]
use crate::proxy::handshake::{handle_mtproto_handshake, handle_tls_handshake};
use crate::proxy::masking::handle_bad_client;
use crate::proxy::middle_relay::handle_via_middle_proxy;
use crate::proxy::route_mode::{RelayRouteMode, RouteRuntimeController};
use crate::proxy::shared_state::ProxySharedState;
fn beobachten_ttl(config: &ProxyConfig) -> Duration {
const BEOBACHTEN_TTL_MAX_MINUTES: u64 = 24 * 60;
@ -186,6 +191,24 @@ fn handshake_timeout_with_mask_grace(config: &ProxyConfig) -> Duration {
}
}
fn effective_client_first_byte_idle_secs(config: &ProxyConfig, shared: &ProxySharedState) -> u64 {
let idle_secs = config.timeouts.client_first_byte_idle_secs;
if idle_secs == 0 {
return 0;
}
if shared.conntrack_pressure_active() {
idle_secs.min(
config
.server
.conntrack_control
.profile
.client_first_byte_idle_cap_secs(),
)
} else {
idle_secs
}
}
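Under conntrack pressure the configured idle window is clamped to the profile's cap; zero disables the timeout entirely. The clamping rule in isolation, with the config plumbing replaced by plain arguments (a sketch, not the crate's function):

```rust
/// 0 disables the first-byte idle timeout; under pressure the configured
/// value is clamped to the profile cap, otherwise it is used as-is.
fn effective_idle_secs(configured: u64, pressure_active: bool, cap_secs: u64) -> u64 {
    if configured == 0 {
        return 0;
    }
    if pressure_active { configured.min(cap_secs) } else { configured }
}
```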
const MASK_CLASSIFIER_PREFETCH_WINDOW: usize = 16;
#[cfg(test)]
const MASK_CLASSIFIER_PREFETCH_TIMEOUT: Duration = Duration::from_millis(5);
@ -342,7 +365,48 @@ fn synthetic_local_addr(port: u16) -> SocketAddr {
SocketAddr::from(([0, 0, 0, 0], port))
}
#[cfg(test)]
pub async fn handle_client_stream<S>(
stream: S,
peer: SocketAddr,
config: Arc<ProxyConfig>,
stats: Arc<Stats>,
upstream_manager: Arc<UpstreamManager>,
replay_checker: Arc<ReplayChecker>,
buffer_pool: Arc<BufferPool>,
rng: Arc<SecureRandom>,
me_pool: Option<Arc<MePool>>,
route_runtime: Arc<RouteRuntimeController>,
tls_cache: Option<Arc<TlsFrontCache>>,
ip_tracker: Arc<UserIpTracker>,
beobachten: Arc<BeobachtenStore>,
proxy_protocol_enabled: bool,
) -> Result<()>
where
S: AsyncRead + AsyncWrite + Unpin + Send + 'static,
{
handle_client_stream_with_shared(
stream,
peer,
config,
stats,
upstream_manager,
replay_checker,
buffer_pool,
rng,
me_pool,
route_runtime,
tls_cache,
ip_tracker,
beobachten,
ProxySharedState::new(),
proxy_protocol_enabled,
)
.await
}
#[allow(clippy::too_many_arguments)]
pub async fn handle_client_stream_with_shared<S>(
mut stream: S,
peer: SocketAddr,
config: Arc<ProxyConfig>,
@ -356,6 +420,7 @@ pub async fn handle_client_stream<S>(
tls_cache: Option<Arc<TlsFrontCache>>,
ip_tracker: Arc<UserIpTracker>,
beobachten: Arc<BeobachtenStore>,
shared: Arc<ProxySharedState>,
proxy_protocol_enabled: bool,
) -> Result<()>
where
@ -416,16 +481,69 @@ where
debug!(peer = %real_peer, "New connection (generic stream)");
let first_byte_idle_secs = effective_client_first_byte_idle_secs(&config, shared.as_ref());
let first_byte = if first_byte_idle_secs == 0 {
None
} else {
let idle_timeout = Duration::from_secs(first_byte_idle_secs);
let mut first_byte = [0u8; 1];
match timeout(idle_timeout, stream.read(&mut first_byte)).await {
Ok(Ok(0)) => {
debug!(peer = %real_peer, "Connection closed before first client byte");
return Ok(());
}
Ok(Ok(_)) => Some(first_byte[0]),
Ok(Err(e))
if matches!(
e.kind(),
std::io::ErrorKind::UnexpectedEof
| std::io::ErrorKind::ConnectionReset
| std::io::ErrorKind::ConnectionAborted
| std::io::ErrorKind::BrokenPipe
| std::io::ErrorKind::NotConnected
) =>
{
debug!(
peer = %real_peer,
error = %e,
"Connection closed before first client byte"
);
return Ok(());
}
Ok(Err(e)) => {
debug!(
peer = %real_peer,
error = %e,
"Failed while waiting for first client byte"
);
return Err(ProxyError::Io(e));
}
Err(_) => {
debug!(
peer = %real_peer,
idle_secs = first_byte_idle_secs,
"Closing idle pooled connection before first client byte"
);
return Ok(());
}
}
};
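The `ErrorKind` set treated above as a clean peer close (logged at debug and returned as `Ok`) rather than a fault can be factored into a predicate; the same set appears again in `do_handshake` below:

```rust
use std::io::ErrorKind;

/// True for error kinds the first-byte read treats as "peer went away"
/// rather than a reportable I/O failure (same set matched above).
fn is_peer_disconnect(kind: ErrorKind) -> bool {
    matches!(
        kind,
        ErrorKind::UnexpectedEof
            | ErrorKind::ConnectionReset
            | ErrorKind::ConnectionAborted
            | ErrorKind::BrokenPipe
            | ErrorKind::NotConnected
    )
}
```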
let handshake_timeout = handshake_timeout_with_mask_grace(&config);
let stats_for_timeout = stats.clone();
let config_for_timeout = config.clone();
let beobachten_for_timeout = beobachten.clone();
let peer_for_timeout = real_peer.ip();
// Phase 2: active handshake (with timeout after the first client byte)
let outcome = match timeout(handshake_timeout, async {
let mut first_bytes = [0u8; 5];
if let Some(first_byte) = first_byte {
first_bytes[0] = first_byte;
stream.read_exact(&mut first_bytes[1..]).await?;
} else {
stream.read_exact(&mut first_bytes).await?;
}
let is_tls = tls::is_tls_handshake(&first_bytes[..3]);
debug!(peer = %real_peer, is_tls = is_tls, "Handshake type detected");
@ -498,9 +616,10 @@ where
let (read_half, write_half) = tokio::io::split(stream);
let (mut tls_reader, tls_writer, tls_user) = match handle_tls_handshake_with_shared(
&handshake, read_half, write_half, real_peer,
&config, &replay_checker, &rng, tls_cache.clone(),
shared.as_ref(),
).await {
HandshakeResult::Success(result) => result,
HandshakeResult::BadClient { reader, writer } => {
@ -526,9 +645,10 @@ where
let mtproto_handshake: [u8; HANDSHAKE_LEN] = mtproto_data[..].try_into()
.map_err(|_| ProxyError::InvalidHandshake("Short MTProto handshake".into()))?;
let (crypto_reader, crypto_writer, success) = match handle_mtproto_handshake_with_shared(
&mtproto_handshake, tls_reader, tls_writer, real_peer,
&config, &replay_checker, true, Some(tls_user.as_str()),
shared.as_ref(),
).await {
HandshakeResult::Success(result) => result,
HandshakeResult::BadClient { reader, writer } => {
@ -562,11 +682,12 @@ where
};
Ok(HandshakeOutcome::NeedsRelay(Box::pin(
RunningClientHandler::handle_authenticated_static_with_shared(
crypto_reader, crypto_writer, success,
upstream_manager, stats, config, buffer_pool, rng, me_pool,
route_runtime.clone(),
local_addr, real_peer, ip_tracker.clone(),
shared.clone(),
),
)))
} else {
@ -592,9 +713,10 @@ where
let (read_half, write_half) = tokio::io::split(stream);
let (crypto_reader, crypto_writer, success) = match handle_mtproto_handshake_with_shared(
&handshake, read_half, write_half, real_peer,
&config, &replay_checker, false, None,
shared.as_ref(),
).await {
HandshakeResult::Success(result) => result,
HandshakeResult::BadClient { reader, writer } => {
@ -613,7 +735,7 @@ where
};
Ok(HandshakeOutcome::NeedsRelay(Box::pin(
RunningClientHandler::handle_authenticated_static_with_shared(
crypto_reader,
crypto_writer,
success,
@ -627,6 +749,7 @@ where
local_addr,
real_peer,
ip_tracker.clone(),
shared.clone(),
)
)))
}
@ -679,10 +802,12 @@ pub struct RunningClientHandler {
tls_cache: Option<Arc<TlsFrontCache>>,
ip_tracker: Arc<UserIpTracker>,
beobachten: Arc<BeobachtenStore>,
shared: Arc<ProxySharedState>,
proxy_protocol_enabled: bool,
}
impl ClientHandler {
#[cfg(test)]
pub fn new(
stream: TcpStream,
peer: SocketAddr,
@ -699,6 +824,45 @@ impl ClientHandler {
beobachten: Arc<BeobachtenStore>,
proxy_protocol_enabled: bool,
real_peer_report: Arc<std::sync::Mutex<Option<SocketAddr>>>,
) -> RunningClientHandler {
Self::new_with_shared(
stream,
peer,
config,
stats,
upstream_manager,
replay_checker,
buffer_pool,
rng,
me_pool,
route_runtime,
tls_cache,
ip_tracker,
beobachten,
ProxySharedState::new(),
proxy_protocol_enabled,
real_peer_report,
)
}
#[allow(clippy::too_many_arguments)]
pub fn new_with_shared(
stream: TcpStream,
peer: SocketAddr,
config: Arc<ProxyConfig>,
stats: Arc<Stats>,
upstream_manager: Arc<UpstreamManager>,
replay_checker: Arc<ReplayChecker>,
buffer_pool: Arc<BufferPool>,
rng: Arc<SecureRandom>,
me_pool: Option<Arc<MePool>>,
route_runtime: Arc<RouteRuntimeController>,
tls_cache: Option<Arc<TlsFrontCache>>,
ip_tracker: Arc<UserIpTracker>,
beobachten: Arc<BeobachtenStore>,
shared: Arc<ProxySharedState>,
proxy_protocol_enabled: bool,
real_peer_report: Arc<std::sync::Mutex<Option<SocketAddr>>>,
) -> RunningClientHandler {
let normalized_peer = normalize_ip(peer);
RunningClientHandler {
@ -717,6 +881,7 @@ impl ClientHandler {
tls_cache,
ip_tracker,
beobachten,
shared,
proxy_protocol_enabled,
}
}
@ -736,36 +901,9 @@ impl RunningClientHandler {
debug!(peer = %peer, error = %e, "Failed to configure client socket");
}
let outcome = match self.do_handshake().await? {
Some(outcome) => outcome,
None => return Ok(()),
};
// Phase 2: relay (WITHOUT handshake timeout — relay has its own activity timeouts)
@ -774,7 +912,7 @@ impl RunningClientHandler {
}
}
async fn do_handshake(mut self) -> Result<Option<HandshakeOutcome>> {
let mut local_addr = self.stream.local_addr().map_err(ProxyError::Io)?;
if self.proxy_protocol_enabled {
@ -849,8 +987,70 @@ impl RunningClientHandler {
}
}
let first_byte_idle_secs =
effective_client_first_byte_idle_secs(&self.config, self.shared.as_ref());
let first_byte = if first_byte_idle_secs == 0 {
None
} else {
let idle_timeout = Duration::from_secs(first_byte_idle_secs);
let mut first_byte = [0u8; 1];
match timeout(idle_timeout, self.stream.read(&mut first_byte)).await {
Ok(Ok(0)) => {
debug!(peer = %self.peer, "Connection closed before first client byte");
return Ok(None);
}
Ok(Ok(_)) => Some(first_byte[0]),
Ok(Err(e))
if matches!(
e.kind(),
std::io::ErrorKind::UnexpectedEof
| std::io::ErrorKind::ConnectionReset
| std::io::ErrorKind::ConnectionAborted
| std::io::ErrorKind::BrokenPipe
| std::io::ErrorKind::NotConnected
) =>
{
debug!(
peer = %self.peer,
error = %e,
"Connection closed before first client byte"
);
return Ok(None);
}
Ok(Err(e)) => {
debug!(
peer = %self.peer,
error = %e,
"Failed while waiting for first client byte"
);
return Err(ProxyError::Io(e));
}
Err(_) => {
debug!(
peer = %self.peer,
idle_secs = first_byte_idle_secs,
"Closing idle pooled connection before first client byte"
);
return Ok(None);
}
}
};
let handshake_timeout = handshake_timeout_with_mask_grace(&self.config);
let stats = self.stats.clone();
let config_for_timeout = self.config.clone();
let beobachten_for_timeout = self.beobachten.clone();
let peer_for_timeout = self.peer.ip();
let peer_for_log = self.peer;
let outcome = match timeout(handshake_timeout, async {
let mut first_bytes = [0u8; 5];
if let Some(first_byte) = first_byte {
first_bytes[0] = first_byte;
self.stream.read_exact(&mut first_bytes[1..]).await?;
} else {
self.stream.read_exact(&mut first_bytes).await?;
}
let is_tls = tls::is_tls_handshake(&first_bytes[..3]);
let peer = self.peer;
@ -862,6 +1062,34 @@ impl RunningClientHandler {
} else {
self.handle_direct_client(first_bytes, local_addr).await
}
})
.await
{
Ok(Ok(outcome)) => outcome,
Ok(Err(e)) => {
debug!(peer = %peer_for_log, error = %e, "Handshake failed");
record_handshake_failure_class(
&beobachten_for_timeout,
&config_for_timeout,
peer_for_timeout,
&e,
);
return Err(e);
}
Err(_) => {
stats.increment_handshake_timeouts();
debug!(peer = %peer_for_log, "Handshake timeout");
record_beobachten_class(
&beobachten_for_timeout,
&config_for_timeout,
peer_for_timeout,
"other",
);
return Err(ProxyError::TgHandshakeTimeout);
}
};
Ok(Some(outcome))
}
async fn handle_tls_client(
@ -944,7 +1172,7 @@ impl RunningClientHandler {
let (read_half, write_half) = self.stream.into_split();
let (mut tls_reader, tls_writer, tls_user) = match handle_tls_handshake_with_shared(
&handshake,
read_half,
write_half,
@ -953,6 +1181,7 @@ impl RunningClientHandler {
&replay_checker,
&self.rng,
self.tls_cache.clone(),
self.shared.as_ref(),
)
.await
{
@ -981,7 +1210,7 @@ impl RunningClientHandler {
.try_into()
.map_err(|_| ProxyError::InvalidHandshake("Short MTProto handshake".into()))?;
let (crypto_reader, crypto_writer, success) = match handle_mtproto_handshake_with_shared(
&mtproto_handshake,
tls_reader,
tls_writer,
@ -990,6 +1219,7 @@ impl RunningClientHandler {
&replay_checker,
true,
Some(tls_user.as_str()),
self.shared.as_ref(),
)
.await
{
@ -1026,7 +1256,7 @@ impl RunningClientHandler {
};
Ok(HandshakeOutcome::NeedsRelay(Box::pin(
Self::handle_authenticated_static_with_shared(
crypto_reader,
crypto_writer,
success,
@ -1040,6 +1270,7 @@ impl RunningClientHandler {
local_addr,
peer,
self.ip_tracker,
self.shared,
),
)))
}
@ -1078,7 +1309,7 @@ impl RunningClientHandler {
let (read_half, write_half) = self.stream.into_split();
let (crypto_reader, crypto_writer, success) = match handle_mtproto_handshake_with_shared(
&handshake,
read_half,
write_half,
@ -1087,6 +1318,7 @@ impl RunningClientHandler {
&replay_checker,
false,
None,
self.shared.as_ref(),
)
.await
{
@ -1107,7 +1339,7 @@ impl RunningClientHandler {
};
Ok(HandshakeOutcome::NeedsRelay(Box::pin(
Self::handle_authenticated_static(
Self::handle_authenticated_static_with_shared(
crypto_reader,
crypto_writer,
success,
@ -1121,6 +1353,7 @@ impl RunningClientHandler {
local_addr,
peer,
self.ip_tracker,
self.shared,
),
)))
}
@ -1129,6 +1362,7 @@ impl RunningClientHandler {
/// Two modes:
/// - Direct: TCP relay to TG DC (existing behavior)
/// - Middle Proxy: RPC multiplex through ME pool (new — supports CDN DCs)
#[cfg(test)]
async fn handle_authenticated_static<R, W>(
client_reader: CryptoReader<R>,
client_writer: CryptoWriter<W>,
@ -1144,6 +1378,45 @@ impl RunningClientHandler {
peer_addr: SocketAddr,
ip_tracker: Arc<UserIpTracker>,
) -> Result<()>
where
R: AsyncRead + Unpin + Send + 'static,
W: AsyncWrite + Unpin + Send + 'static,
{
Self::handle_authenticated_static_with_shared(
client_reader,
client_writer,
success,
upstream_manager,
stats,
config,
buffer_pool,
rng,
me_pool,
route_runtime,
local_addr,
peer_addr,
ip_tracker,
ProxySharedState::new(),
)
.await
}
async fn handle_authenticated_static_with_shared<R, W>(
client_reader: CryptoReader<R>,
client_writer: CryptoWriter<W>,
success: HandshakeSuccess,
upstream_manager: Arc<UpstreamManager>,
stats: Arc<Stats>,
config: Arc<ProxyConfig>,
buffer_pool: Arc<BufferPool>,
rng: Arc<SecureRandom>,
me_pool: Option<Arc<MePool>>,
route_runtime: Arc<RouteRuntimeController>,
local_addr: SocketAddr,
peer_addr: SocketAddr,
ip_tracker: Arc<UserIpTracker>,
shared: Arc<ProxySharedState>,
) -> Result<()>
where
R: AsyncRead + Unpin + Send + 'static,
W: AsyncWrite + Unpin + Send + 'static,
@ -1185,11 +1458,12 @@ impl RunningClientHandler {
route_runtime.subscribe(),
route_snapshot,
session_id,
shared.clone(),
)
.await
} else {
warn!("use_middle_proxy=true but MePool not initialized, falling back to direct");
handle_via_direct(
handle_via_direct_with_shared(
client_reader,
client_writer,
success,
@ -1201,12 +1475,14 @@ impl RunningClientHandler {
route_runtime.subscribe(),
route_snapshot,
session_id,
local_addr,
shared.clone(),
)
.await
}
} else {
// Direct mode (original behavior)
handle_via_direct(
handle_via_direct_with_shared(
client_reader,
client_writer,
success,
@ -1218,6 +1494,8 @@ impl RunningClientHandler {
route_runtime.subscribe(),
route_snapshot,
session_id,
local_addr,
shared.clone(),
)
.await
};
@ -1252,7 +1530,11 @@ impl RunningClientHandler {
.access
.user_max_tcp_conns
.get(user)
.map(|v| *v as u64);
.copied()
.filter(|limit| *limit > 0)
.or((config.access.user_max_tcp_conns_global_each > 0)
.then_some(config.access.user_max_tcp_conns_global_each))
.map(|v| v as u64);
if !stats.try_acquire_user_curr_connects(user, limit) {
return Err(ProxyError::ConnectionLimitExceeded {
user: user.to_string(),
@ -1311,7 +1593,11 @@ impl RunningClientHandler {
.access
.user_max_tcp_conns
.get(user)
.map(|v| *v as u64);
.copied()
.filter(|limit| *limit > 0)
.or((config.access.user_max_tcp_conns_global_each > 0)
.then_some(config.access.user_max_tcp_conns_global_each))
.map(|v| v as u64);
if !stats.try_acquire_user_curr_connects(user, limit) {
return Err(ProxyError::ConnectionLimitExceeded {
user: user.to_string(),
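The two hunks above change how the per-user TCP connection limit is resolved: an explicit per-user value of 0 is now treated as "unset" and falls back to `user_max_tcp_conns_global_each` when that global default is non-zero. A standalone sketch of that resolution chain, with simplified types (`resolve_limit` is illustrative, not the crate's API):

```rust
use std::collections::HashMap;

// Illustrative re-implementation of the chain above: an explicit non-zero
// per-user limit wins; otherwise fall back to a non-zero global default;
// otherwise the user is unlimited (None).
fn resolve_limit(per_user: &HashMap<&str, u32>, global_each: u32, user: &str) -> Option<u64> {
    per_user
        .get(user)
        .copied()
        .filter(|limit| *limit > 0)
        .or((global_each > 0).then_some(global_each))
        .map(u64::from)
}

fn main() {
    let limits = HashMap::from([("alice", 8u32), ("bob", 0u32)]);
    assert_eq!(resolve_limit(&limits, 4, "alice"), Some(8)); // explicit wins
    assert_eq!(resolve_limit(&limits, 4, "bob"), Some(4));   // 0 = unset, fallback
    assert_eq!(resolve_limit(&limits, 0, "bob"), None);      // no limit anywhere
    println!("limit resolution ok");
}
```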

@ -6,6 +6,7 @@ use std::net::SocketAddr;
use std::path::{Component, Path, PathBuf};
use std::sync::Arc;
use std::sync::{Mutex, OnceLock};
use std::time::Duration;
use tokio::io::{AsyncRead, AsyncWrite, AsyncWriteExt, ReadHalf, WriteHalf, split};
use tokio::sync::watch;
@ -16,11 +17,13 @@ use crate::crypto::SecureRandom;
use crate::error::{ProxyError, Result};
use crate::protocol::constants::*;
use crate::proxy::handshake::{HandshakeSuccess, encrypt_tg_nonce_with_ciphers, generate_tg_nonce};
use crate::proxy::relay::relay_bidirectional;
use crate::proxy::route_mode::{
ROUTE_SWITCH_ERROR_MSG, RelayRouteMode, RouteCutoverState, affected_cutover_state,
cutover_stagger_delay,
};
use crate::proxy::shared_state::{
ConntrackCloseEvent, ConntrackClosePublishResult, ConntrackCloseReason, ProxySharedState,
};
use crate::stats::Stats;
use crate::stream::{BufferPool, CryptoReader, CryptoWriter};
use crate::transport::UpstreamManager;
@ -225,7 +228,43 @@ fn unknown_dc_test_lock() -> &'static Mutex<()> {
TEST_LOCK.get_or_init(|| Mutex::new(()))
}
#[allow(dead_code)]
pub(crate) async fn handle_via_direct<R, W>(
client_reader: CryptoReader<R>,
client_writer: CryptoWriter<W>,
success: HandshakeSuccess,
upstream_manager: Arc<UpstreamManager>,
stats: Arc<Stats>,
config: Arc<ProxyConfig>,
buffer_pool: Arc<BufferPool>,
rng: Arc<SecureRandom>,
route_rx: watch::Receiver<RouteCutoverState>,
route_snapshot: RouteCutoverState,
session_id: u64,
) -> Result<()>
where
R: AsyncRead + Unpin + Send + 'static,
W: AsyncWrite + Unpin + Send + 'static,
{
handle_via_direct_with_shared(
client_reader,
client_writer,
success,
upstream_manager,
stats,
config.clone(),
buffer_pool,
rng,
route_rx,
route_snapshot,
session_id,
SocketAddr::from(([0, 0, 0, 0], config.server.port)),
ProxySharedState::new(),
)
.await
}
pub(crate) async fn handle_via_direct_with_shared<R, W>(
client_reader: CryptoReader<R>,
client_writer: CryptoWriter<W>,
success: HandshakeSuccess,
@ -237,6 +276,8 @@ pub(crate) async fn handle_via_direct<R, W>(
mut route_rx: watch::Receiver<RouteCutoverState>,
route_snapshot: RouteCutoverState,
session_id: u64,
local_addr: SocketAddr,
shared: Arc<ProxySharedState>,
) -> Result<()>
where
R: AsyncRead + Unpin + Send + 'static,
@ -276,7 +317,19 @@ where
stats.increment_user_connects(user);
let _direct_connection_lease = stats.acquire_direct_connection_lease();
let relay_result = relay_bidirectional(
let buffer_pool_trim = Arc::clone(&buffer_pool);
let relay_activity_timeout = if shared.conntrack_pressure_active() {
Duration::from_secs(
config
.server
.conntrack_control
.profile
.direct_activity_timeout_secs(),
)
} else {
Duration::from_secs(1800)
};
let relay_result = crate::proxy::relay::relay_bidirectional_with_activity_timeout(
client_reader,
client_writer,
tg_reader,
@ -287,6 +340,7 @@ where
Arc::clone(&stats),
config.access.user_data_quota.get(user).copied(),
buffer_pool,
relay_activity_timeout,
);
tokio::pin!(relay_result);
let relay_result = loop {
@ -321,9 +375,59 @@ where
Err(e) => debug!(user = %user, error = %e, "Direct relay ended with error"),
}
buffer_pool_trim.trim_to(buffer_pool_trim.max_buffers().min(64));
let pool_snapshot = buffer_pool_trim.stats();
stats.set_buffer_pool_gauges(
pool_snapshot.pooled,
pool_snapshot.allocated,
pool_snapshot.allocated.saturating_sub(pool_snapshot.pooled),
);
let close_reason = classify_conntrack_close_reason(&relay_result);
let publish_result = shared.publish_conntrack_close_event(ConntrackCloseEvent {
src: success.peer,
dst: local_addr,
reason: close_reason,
});
if !matches!(
publish_result,
ConntrackClosePublishResult::Sent | ConntrackClosePublishResult::Disabled
) {
stats.increment_conntrack_close_event_drop_total();
}
relay_result
}
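The hunk above also picks the relay activity timeout based on conntrack pressure: under pressure the relay adopts the shorter, profile-defined timeout; otherwise it keeps a 30-minute default. A minimal sketch of that selection (function name is illustrative):

```rust
use std::time::Duration;

// Default activity timeout when no conntrack pressure is reported (30 minutes,
// matching the hard-coded 1800 s in the hunk above).
const DEFAULT_ACTIVITY_TIMEOUT: Duration = Duration::from_secs(1800);

// Under pressure, switch to the shorter profile-defined timeout so idle
// sessions are reaped faster and conntrack entries are freed.
fn relay_activity_timeout(pressure_active: bool, profile_timeout_secs: u64) -> Duration {
    if pressure_active {
        Duration::from_secs(profile_timeout_secs)
    } else {
        DEFAULT_ACTIVITY_TIMEOUT
    }
}

fn main() {
    assert_eq!(relay_activity_timeout(true, 300), Duration::from_secs(300));
    assert_eq!(relay_activity_timeout(false, 300), Duration::from_secs(1800));
    println!("timeout selection ok");
}
```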
fn classify_conntrack_close_reason(result: &Result<()>) -> ConntrackCloseReason {
match result {
Ok(()) => ConntrackCloseReason::NormalEof,
Err(crate::error::ProxyError::Io(error))
if matches!(error.kind(), std::io::ErrorKind::TimedOut) =>
{
ConntrackCloseReason::Timeout
}
Err(crate::error::ProxyError::Io(error))
if matches!(
error.kind(),
std::io::ErrorKind::ConnectionReset
| std::io::ErrorKind::ConnectionAborted
| std::io::ErrorKind::BrokenPipe
| std::io::ErrorKind::NotConnected
| std::io::ErrorKind::UnexpectedEof
) =>
{
ConntrackCloseReason::Reset
}
Err(crate::error::ProxyError::Proxy(message))
if message.contains("pressure") || message.contains("evicted") =>
{
ConntrackCloseReason::Pressure
}
Err(_) => ConntrackCloseReason::Other,
}
}
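The classifier above maps a relay result onto a conntrack close reason. A standalone mirror of the same mapping, with the crate-specific `ProxyError` and `ConntrackCloseReason` replaced by `std::io::Error` and a local enum:

```rust
use std::io::{Error, ErrorKind};

// Simplified stand-in for ConntrackCloseReason (the Pressure arm, which keys
// off a crate-specific error message, is folded into Other here).
#[derive(Debug, PartialEq)]
enum CloseReason {
    NormalEof,
    Timeout,
    Reset,
    Other,
}

// Same decision order as classify_conntrack_close_reason above: clean EOF,
// then timeout, then the family of reset-like I/O errors, then everything else.
fn classify(result: &Result<(), Error>) -> CloseReason {
    match result {
        Ok(()) => CloseReason::NormalEof,
        Err(e) if e.kind() == ErrorKind::TimedOut => CloseReason::Timeout,
        Err(e) if matches!(
            e.kind(),
            ErrorKind::ConnectionReset
                | ErrorKind::ConnectionAborted
                | ErrorKind::BrokenPipe
                | ErrorKind::NotConnected
                | ErrorKind::UnexpectedEof
        ) => CloseReason::Reset,
        Err(_) => CloseReason::Other,
    }
}

fn main() {
    assert_eq!(classify(&Ok(())), CloseReason::NormalEof);
    assert_eq!(classify(&Err(ErrorKind::TimedOut.into())), CloseReason::Timeout);
    assert_eq!(classify(&Err(ErrorKind::BrokenPipe.into())), CloseReason::Reset);
    assert_eq!(classify(&Err(ErrorKind::Other.into())), CloseReason::Other);
    println!("close reason classification ok");
}
```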
fn get_dc_addr_static(dc_idx: i16, config: &ProxyConfig) -> Result<SocketAddr> {
let prefer_v6 = config.network.prefer == 6 && config.network.ipv6.unwrap_or(true);
let datacenters = if prefer_v6 {

@ -4,13 +4,16 @@
use dashmap::DashMap;
use dashmap::mapref::entry::Entry;
#[cfg(test)]
use std::collections::HashSet;
#[cfg(test)]
use std::collections::hash_map::RandomState;
use std::hash::{BuildHasher, Hash, Hasher};
use std::net::SocketAddr;
use std::net::{IpAddr, Ipv6Addr};
use std::sync::Arc;
use std::sync::{Mutex, OnceLock};
#[cfg(test)]
use std::sync::Mutex;
use std::time::{Duration, Instant};
use tokio::io::{AsyncRead, AsyncWrite, AsyncWriteExt};
use tracing::{debug, info, trace, warn};
@ -21,15 +24,15 @@ use crate::crypto::{AesCtr, SecureRandom, sha256};
use crate::error::{HandshakeResult, ProxyError};
use crate::protocol::constants::*;
use crate::protocol::tls;
use crate::proxy::shared_state::ProxySharedState;
use crate::stats::ReplayChecker;
use crate::stream::{CryptoReader, CryptoWriter, FakeTlsReader, FakeTlsWriter};
use crate::tls_front::{TlsFrontCache, emulator};
#[cfg(test)]
use rand::RngExt;
const ACCESS_SECRET_BYTES: usize = 16;
static INVALID_SECRET_WARNED: OnceLock<Mutex<HashSet<(String, String)>>> = OnceLock::new();
const UNKNOWN_SNI_WARN_COOLDOWN_SECS: u64 = 5;
static UNKNOWN_SNI_WARN_NEXT_ALLOWED: OnceLock<Mutex<Option<Instant>>> = OnceLock::new();
#[cfg(test)]
const WARNED_SECRET_MAX_ENTRIES: usize = 64;
#[cfg(not(test))]
@ -55,48 +58,30 @@ const AUTH_PROBE_BACKOFF_MAX_MS: u64 = 16;
const AUTH_PROBE_BACKOFF_MAX_MS: u64 = 1_000;
#[derive(Clone, Copy)]
struct AuthProbeState {
pub(crate) struct AuthProbeState {
fail_streak: u32,
blocked_until: Instant,
last_seen: Instant,
}
#[derive(Clone, Copy)]
struct AuthProbeSaturationState {
pub(crate) struct AuthProbeSaturationState {
fail_streak: u32,
blocked_until: Instant,
last_seen: Instant,
}
static AUTH_PROBE_STATE: OnceLock<DashMap<IpAddr, AuthProbeState>> = OnceLock::new();
static AUTH_PROBE_SATURATION_STATE: OnceLock<Mutex<Option<AuthProbeSaturationState>>> =
OnceLock::new();
static AUTH_PROBE_EVICTION_HASHER: OnceLock<RandomState> = OnceLock::new();
fn auth_probe_state_map() -> &'static DashMap<IpAddr, AuthProbeState> {
AUTH_PROBE_STATE.get_or_init(DashMap::new)
}
fn auth_probe_saturation_state() -> &'static Mutex<Option<AuthProbeSaturationState>> {
AUTH_PROBE_SATURATION_STATE.get_or_init(|| Mutex::new(None))
}
fn auth_probe_saturation_state_lock()
-> std::sync::MutexGuard<'static, Option<AuthProbeSaturationState>> {
auth_probe_saturation_state()
fn unknown_sni_warn_state_lock_in(
shared: &ProxySharedState,
) -> std::sync::MutexGuard<'_, Option<Instant>> {
shared
.handshake
.unknown_sni_warn_next_allowed
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner())
}
fn unknown_sni_warn_state_lock() -> std::sync::MutexGuard<'static, Option<Instant>> {
UNKNOWN_SNI_WARN_NEXT_ALLOWED
.get_or_init(|| Mutex::new(None))
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner())
}
fn should_emit_unknown_sni_warn(now: Instant) -> bool {
let mut guard = unknown_sni_warn_state_lock();
fn should_emit_unknown_sni_warn_in(shared: &ProxySharedState, now: Instant) -> bool {
let mut guard = unknown_sni_warn_state_lock_in(shared);
if let Some(next_allowed) = *guard
&& now < next_allowed
{
@ -133,15 +118,20 @@ fn auth_probe_state_expired(state: &AuthProbeState, now: Instant) -> bool {
now.duration_since(state.last_seen) > retention
}
fn auth_probe_eviction_offset(peer_ip: IpAddr, now: Instant) -> usize {
let hasher_state = AUTH_PROBE_EVICTION_HASHER.get_or_init(RandomState::new);
fn auth_probe_eviction_offset_in(
shared: &ProxySharedState,
peer_ip: IpAddr,
now: Instant,
) -> usize {
let hasher_state = &shared.handshake.auth_probe_eviction_hasher;
let mut hasher = hasher_state.build_hasher();
peer_ip.hash(&mut hasher);
now.hash(&mut hasher);
hasher.finish() as usize
}
fn auth_probe_scan_start_offset(
fn auth_probe_scan_start_offset_in(
shared: &ProxySharedState,
peer_ip: IpAddr,
now: Instant,
state_len: usize,
@ -151,12 +141,12 @@ fn auth_probe_scan_start_offset(
return 0;
}
auth_probe_eviction_offset(peer_ip, now) % state_len
auth_probe_eviction_offset_in(shared, peer_ip, now) % state_len
}
fn auth_probe_is_throttled(peer_ip: IpAddr, now: Instant) -> bool {
fn auth_probe_is_throttled_in(shared: &ProxySharedState, peer_ip: IpAddr, now: Instant) -> bool {
let peer_ip = normalize_auth_probe_ip(peer_ip);
let state = auth_probe_state_map();
let state = &shared.handshake.auth_probe;
let Some(entry) = state.get(&peer_ip) else {
return false;
};
@ -168,9 +158,13 @@ fn auth_probe_is_throttled(peer_ip: IpAddr, now: Instant) -> bool {
now < entry.blocked_until
}
fn auth_probe_saturation_grace_exhausted(peer_ip: IpAddr, now: Instant) -> bool {
fn auth_probe_saturation_grace_exhausted_in(
shared: &ProxySharedState,
peer_ip: IpAddr,
now: Instant,
) -> bool {
let peer_ip = normalize_auth_probe_ip(peer_ip);
let state = auth_probe_state_map();
let state = &shared.handshake.auth_probe;
let Some(entry) = state.get(&peer_ip) else {
return false;
};
@ -183,20 +177,28 @@ fn auth_probe_saturation_grace_exhausted(peer_ip: IpAddr, now: Instant) -> bool
entry.fail_streak >= AUTH_PROBE_BACKOFF_START_FAILS + AUTH_PROBE_SATURATION_GRACE_FAILS
}
fn auth_probe_should_apply_preauth_throttle(peer_ip: IpAddr, now: Instant) -> bool {
if !auth_probe_is_throttled(peer_ip, now) {
fn auth_probe_should_apply_preauth_throttle_in(
shared: &ProxySharedState,
peer_ip: IpAddr,
now: Instant,
) -> bool {
if !auth_probe_is_throttled_in(shared, peer_ip, now) {
return false;
}
if !auth_probe_saturation_is_throttled(now) {
if !auth_probe_saturation_is_throttled_in(shared, now) {
return true;
}
auth_probe_saturation_grace_exhausted(peer_ip, now)
auth_probe_saturation_grace_exhausted_in(shared, peer_ip, now)
}
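The pre-auth throttle decision above composes three predicates: the peer must be individually throttled, and when the probe map is globally saturated (so evictions may cause false positives), throttling additionally requires the per-peer grace window to be exhausted. A sketch of that decision table with the predicates abstracted to booleans:

```rust
// Pure decision function mirroring auth_probe_should_apply_preauth_throttle_in
// above, with the state lookups replaced by precomputed booleans.
fn should_apply_preauth_throttle(
    peer_throttled: bool,
    saturation_throttled: bool,
    grace_exhausted: bool,
) -> bool {
    if !peer_throttled {
        return false; // never throttle a peer with no blocked probe state
    }
    if !saturation_throttled {
        return true; // normal case: per-peer block is trusted
    }
    // Saturated map: evictions make fail streaks unreliable, so demand the
    // extra grace-window failures before acting.
    grace_exhausted
}

fn main() {
    assert!(!should_apply_preauth_throttle(false, false, false));
    assert!(should_apply_preauth_throttle(true, false, false));
    assert!(!should_apply_preauth_throttle(true, true, false));
    assert!(should_apply_preauth_throttle(true, true, true));
    println!("throttle decision ok");
}
```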
fn auth_probe_saturation_is_throttled(now: Instant) -> bool {
let mut guard = auth_probe_saturation_state_lock();
fn auth_probe_saturation_is_throttled_in(shared: &ProxySharedState, now: Instant) -> bool {
let mut guard = shared
.handshake
.auth_probe_saturation
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner());
let Some(state) = guard.as_mut() else {
return false;
@ -214,8 +216,12 @@ fn auth_probe_saturation_is_throttled(now: Instant) -> bool {
false
}
fn auth_probe_note_saturation(now: Instant) {
let mut guard = auth_probe_saturation_state_lock();
fn auth_probe_note_saturation_in(shared: &ProxySharedState, now: Instant) {
let mut guard = shared
.handshake
.auth_probe_saturation
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner());
match guard.as_mut() {
Some(state)
@ -237,13 +243,14 @@ fn auth_probe_note_saturation(now: Instant) {
}
}
fn auth_probe_record_failure(peer_ip: IpAddr, now: Instant) {
fn auth_probe_record_failure_in(shared: &ProxySharedState, peer_ip: IpAddr, now: Instant) {
let peer_ip = normalize_auth_probe_ip(peer_ip);
let state = auth_probe_state_map();
auth_probe_record_failure_with_state(state, peer_ip, now);
let state = &shared.handshake.auth_probe;
auth_probe_record_failure_with_state_in(shared, state, peer_ip, now);
}
fn auth_probe_record_failure_with_state(
fn auth_probe_record_failure_with_state_in(
shared: &ProxySharedState,
state: &DashMap<IpAddr, AuthProbeState>,
peer_ip: IpAddr,
now: Instant,
@ -277,7 +284,7 @@ fn auth_probe_record_failure_with_state(
while state.len() >= AUTH_PROBE_TRACK_MAX_ENTRIES {
rounds += 1;
if rounds > 8 {
auth_probe_note_saturation(now);
auth_probe_note_saturation_in(shared, now);
let mut eviction_candidate: Option<(IpAddr, u32, Instant)> = None;
for entry in state.iter().take(AUTH_PROBE_PRUNE_SCAN_LIMIT) {
let key = *entry.key();
@ -320,7 +327,7 @@ fn auth_probe_record_failure_with_state(
}
} else {
let start_offset =
auth_probe_scan_start_offset(peer_ip, now, state_len, scan_limit);
auth_probe_scan_start_offset_in(shared, peer_ip, now, state_len, scan_limit);
let mut scanned = 0usize;
for entry in state.iter().skip(start_offset) {
let key = *entry.key();
@ -369,11 +376,11 @@ fn auth_probe_record_failure_with_state(
}
let Some((evict_key, _, _)) = eviction_candidate else {
auth_probe_note_saturation(now);
auth_probe_note_saturation_in(shared, now);
return;
};
state.remove(&evict_key);
auth_probe_note_saturation(now);
auth_probe_note_saturation_in(shared, now);
}
}
@ -387,89 +394,58 @@ fn auth_probe_record_failure_with_state(
}
}
fn auth_probe_record_success(peer_ip: IpAddr) {
fn auth_probe_record_success_in(shared: &ProxySharedState, peer_ip: IpAddr) {
let peer_ip = normalize_auth_probe_ip(peer_ip);
let state = auth_probe_state_map();
let state = &shared.handshake.auth_probe;
state.remove(&peer_ip);
}
#[cfg(test)]
fn clear_auth_probe_state_for_testing() {
if let Some(state) = AUTH_PROBE_STATE.get() {
state.clear();
}
if AUTH_PROBE_SATURATION_STATE.get().is_some() {
let mut guard = auth_probe_saturation_state_lock();
*guard = None;
}
pub(crate) fn auth_probe_record_failure_for_testing(
shared: &ProxySharedState,
peer_ip: IpAddr,
now: Instant,
) {
auth_probe_record_failure_in(shared, peer_ip, now);
}
#[cfg(test)]
fn auth_probe_fail_streak_for_testing(peer_ip: IpAddr) -> Option<u32> {
pub(crate) fn auth_probe_fail_streak_for_testing_in_shared(
shared: &ProxySharedState,
peer_ip: IpAddr,
) -> Option<u32> {
let peer_ip = normalize_auth_probe_ip(peer_ip);
let state = AUTH_PROBE_STATE.get()?;
state.get(&peer_ip).map(|entry| entry.fail_streak)
shared
.handshake
.auth_probe
.get(&peer_ip)
.map(|entry| entry.fail_streak)
}
#[cfg(test)]
fn auth_probe_is_throttled_for_testing(peer_ip: IpAddr) -> bool {
auth_probe_is_throttled(peer_ip, Instant::now())
}
#[cfg(test)]
fn auth_probe_saturation_is_throttled_for_testing() -> bool {
auth_probe_saturation_is_throttled(Instant::now())
}
#[cfg(test)]
fn auth_probe_saturation_is_throttled_at_for_testing(now: Instant) -> bool {
auth_probe_saturation_is_throttled(now)
}
#[cfg(test)]
fn auth_probe_test_lock() -> &'static Mutex<()> {
static TEST_LOCK: OnceLock<Mutex<()>> = OnceLock::new();
TEST_LOCK.get_or_init(|| Mutex::new(()))
}
#[cfg(test)]
fn unknown_sni_warn_test_lock() -> &'static Mutex<()> {
static TEST_LOCK: OnceLock<Mutex<()>> = OnceLock::new();
TEST_LOCK.get_or_init(|| Mutex::new(()))
}
#[cfg(test)]
fn clear_unknown_sni_warn_state_for_testing() {
if UNKNOWN_SNI_WARN_NEXT_ALLOWED.get().is_some() {
let mut guard = unknown_sni_warn_state_lock();
*guard = None;
pub(crate) fn clear_auth_probe_state_for_testing_in_shared(shared: &ProxySharedState) {
shared.handshake.auth_probe.clear();
match shared.handshake.auth_probe_saturation.lock() {
Ok(mut saturation) => {
*saturation = None;
}
Err(poisoned) => {
let mut saturation = poisoned.into_inner();
*saturation = None;
shared.handshake.auth_probe_saturation.clear_poison();
}
}
}
#[cfg(test)]
fn should_emit_unknown_sni_warn_for_testing(now: Instant) -> bool {
should_emit_unknown_sni_warn(now)
}
#[cfg(test)]
fn clear_warned_secrets_for_testing() {
if let Some(warned) = INVALID_SECRET_WARNED.get()
&& let Ok(mut guard) = warned.lock()
{
guard.clear();
}
}
#[cfg(test)]
fn warned_secrets_test_lock() -> &'static Mutex<()> {
static TEST_LOCK: OnceLock<Mutex<()>> = OnceLock::new();
TEST_LOCK.get_or_init(|| Mutex::new(()))
}
fn warn_invalid_secret_once(name: &str, reason: &str, expected: usize, got: Option<usize>) {
fn warn_invalid_secret_once_in(
shared: &ProxySharedState,
name: &str,
reason: &str,
expected: usize,
got: Option<usize>,
) {
let key = (name.to_string(), reason.to_string());
let warned = INVALID_SECRET_WARNED.get_or_init(|| Mutex::new(HashSet::new()));
let should_warn = match warned.lock() {
let should_warn = match shared.handshake.invalid_secret_warned.lock() {
Ok(mut guard) => {
if !guard.contains(&key) && guard.len() >= WARNED_SECRET_MAX_ENTRIES {
false
@ -502,11 +478,12 @@ fn warn_invalid_secret_once(name: &str, reason: &str, expected: usize, got: Opti
}
}
fn decode_user_secret(name: &str, secret_hex: &str) -> Option<Vec<u8>> {
fn decode_user_secret(shared: &ProxySharedState, name: &str, secret_hex: &str) -> Option<Vec<u8>> {
match hex::decode(secret_hex) {
Ok(bytes) if bytes.len() == ACCESS_SECRET_BYTES => Some(bytes),
Ok(bytes) => {
warn_invalid_secret_once(
warn_invalid_secret_once_in(
shared,
name,
"invalid_length",
ACCESS_SECRET_BYTES,
@ -515,7 +492,7 @@ fn decode_user_secret(name: &str, secret_hex: &str) -> Option<Vec<u8>> {
None
}
Err(_) => {
warn_invalid_secret_once(name, "invalid_hex", ACCESS_SECRET_BYTES, None);
warn_invalid_secret_once_in(shared, name, "invalid_hex", ACCESS_SECRET_BYTES, None);
None
}
}
@ -543,7 +520,8 @@ fn mode_enabled_for_proto(config: &ProxyConfig, proto_tag: ProtoTag, is_tls: boo
}
}
fn decode_user_secrets(
fn decode_user_secrets_in(
shared: &ProxySharedState,
config: &ProxyConfig,
preferred_user: Option<&str>,
) -> Vec<(String, Vec<u8>)> {
@ -551,7 +529,7 @@ fn decode_user_secrets(
if let Some(preferred) = preferred_user
&& let Some(secret_hex) = config.access.users.get(preferred)
&& let Some(bytes) = decode_user_secret(preferred, secret_hex)
&& let Some(bytes) = decode_user_secret(shared, preferred, secret_hex)
{
secrets.push((preferred.to_string(), bytes));
}
@ -560,7 +538,7 @@ fn decode_user_secrets(
if preferred_user.is_some_and(|preferred| preferred == name.as_str()) {
continue;
}
if let Some(bytes) = decode_user_secret(name, secret_hex) {
if let Some(bytes) = decode_user_secret(shared, name, secret_hex) {
secrets.push((name.clone(), bytes));
}
}
@ -568,6 +546,86 @@ fn decode_user_secrets(
secrets
}
#[cfg(test)]
pub(crate) fn auth_probe_state_for_testing_in_shared(
shared: &ProxySharedState,
) -> &DashMap<IpAddr, AuthProbeState> {
&shared.handshake.auth_probe
}
#[cfg(test)]
pub(crate) fn auth_probe_saturation_state_for_testing_in_shared(
shared: &ProxySharedState,
) -> &Mutex<Option<AuthProbeSaturationState>> {
&shared.handshake.auth_probe_saturation
}
#[cfg(test)]
pub(crate) fn auth_probe_saturation_state_lock_for_testing_in_shared(
shared: &ProxySharedState,
) -> std::sync::MutexGuard<'_, Option<AuthProbeSaturationState>> {
shared
.handshake
.auth_probe_saturation
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner())
}
#[cfg(test)]
pub(crate) fn clear_unknown_sni_warn_state_for_testing_in_shared(shared: &ProxySharedState) {
let mut guard = shared
.handshake
.unknown_sni_warn_next_allowed
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner());
*guard = None;
}
#[cfg(test)]
pub(crate) fn should_emit_unknown_sni_warn_for_testing_in_shared(
shared: &ProxySharedState,
now: Instant,
) -> bool {
should_emit_unknown_sni_warn_in(shared, now)
}
#[cfg(test)]
pub(crate) fn clear_warned_secrets_for_testing_in_shared(shared: &ProxySharedState) {
if let Ok(mut guard) = shared.handshake.invalid_secret_warned.lock() {
guard.clear();
}
}
#[cfg(test)]
pub(crate) fn warned_secrets_for_testing_in_shared(
shared: &ProxySharedState,
) -> &Mutex<HashSet<(String, String)>> {
&shared.handshake.invalid_secret_warned
}
#[cfg(test)]
pub(crate) fn auth_probe_is_throttled_for_testing_in_shared(
shared: &ProxySharedState,
peer_ip: IpAddr,
) -> bool {
auth_probe_is_throttled_in(shared, peer_ip, Instant::now())
}
#[cfg(test)]
pub(crate) fn auth_probe_saturation_is_throttled_for_testing_in_shared(
shared: &ProxySharedState,
) -> bool {
auth_probe_saturation_is_throttled_in(shared, Instant::now())
}
#[cfg(test)]
pub(crate) fn auth_probe_saturation_is_throttled_at_for_testing_in_shared(
shared: &ProxySharedState,
now: Instant,
) -> bool {
auth_probe_saturation_is_throttled_in(shared, now)
}
#[inline]
fn find_matching_tls_domain<'a>(config: &'a ProxyConfig, sni: &str) -> Option<&'a str> {
if config.censorship.tls_domain.eq_ignore_ascii_case(sni) {
@ -593,7 +651,7 @@ async fn maybe_apply_server_hello_delay(config: &ProxyConfig) {
let delay_ms = if max == min {
max
} else {
rand::rng().random_range(min..=max)
crate::proxy::masking::sample_lognormal_percentile_bounded(min, max, &mut rand::rng())
};
if delay_ms > 0 {
@ -635,6 +693,7 @@ impl Drop for HandshakeSuccess {
}
/// Handle fake TLS handshake
#[cfg(test)]
pub async fn handle_tls_handshake<R, W>(
handshake: &[u8],
reader: R,
@ -645,6 +704,65 @@ pub async fn handle_tls_handshake<R, W>(
rng: &SecureRandom,
tls_cache: Option<Arc<TlsFrontCache>>,
) -> HandshakeResult<(FakeTlsReader<R>, FakeTlsWriter<W>, String), R, W>
where
R: AsyncRead + Unpin,
W: AsyncWrite + Unpin,
{
let shared = ProxySharedState::new();
handle_tls_handshake_impl(
handshake,
reader,
writer,
peer,
config,
replay_checker,
rng,
tls_cache,
shared.as_ref(),
)
.await
}
pub async fn handle_tls_handshake_with_shared<R, W>(
handshake: &[u8],
reader: R,
writer: W,
peer: SocketAddr,
config: &ProxyConfig,
replay_checker: &ReplayChecker,
rng: &SecureRandom,
tls_cache: Option<Arc<TlsFrontCache>>,
shared: &ProxySharedState,
) -> HandshakeResult<(FakeTlsReader<R>, FakeTlsWriter<W>, String), R, W>
where
R: AsyncRead + Unpin,
W: AsyncWrite + Unpin,
{
handle_tls_handshake_impl(
handshake,
reader,
writer,
peer,
config,
replay_checker,
rng,
tls_cache,
shared,
)
.await
}
async fn handle_tls_handshake_impl<R, W>(
handshake: &[u8],
reader: R,
mut writer: W,
peer: SocketAddr,
config: &ProxyConfig,
replay_checker: &ReplayChecker,
rng: &SecureRandom,
tls_cache: Option<Arc<TlsFrontCache>>,
shared: &ProxySharedState,
) -> HandshakeResult<(FakeTlsReader<R>, FakeTlsWriter<W>, String), R, W>
where
R: AsyncRead + Unpin,
W: AsyncWrite + Unpin,
@ -652,14 +770,14 @@ where
debug!(peer = %peer, handshake_len = handshake.len(), "Processing TLS handshake");
let throttle_now = Instant::now();
if auth_probe_should_apply_preauth_throttle(peer.ip(), throttle_now) {
if auth_probe_should_apply_preauth_throttle_in(shared, peer.ip(), throttle_now) {
maybe_apply_server_hello_delay(config).await;
debug!(peer = %peer, "TLS handshake rejected by pre-auth probe throttle");
return HandshakeResult::BadClient { reader, writer };
}
if handshake.len() < tls::TLS_DIGEST_POS + tls::TLS_DIGEST_LEN + 1 {
auth_probe_record_failure(peer.ip(), Instant::now());
auth_probe_record_failure_in(shared, peer.ip(), Instant::now());
maybe_apply_server_hello_delay(config).await;
debug!(peer = %peer, "TLS handshake too short");
return HandshakeResult::BadClient { reader, writer };
@ -695,11 +813,11 @@ where
};
if client_sni.is_some() && matched_tls_domain.is_none() && preferred_user_hint.is_none() {
auth_probe_record_failure(peer.ip(), Instant::now());
auth_probe_record_failure_in(shared, peer.ip(), Instant::now());
maybe_apply_server_hello_delay(config).await;
let sni = client_sni.as_deref().unwrap_or_default();
let log_now = Instant::now();
if should_emit_unknown_sni_warn(log_now) {
if should_emit_unknown_sni_warn_in(shared, log_now) {
warn!(
peer = %peer,
sni = %sni,
@ -722,7 +840,7 @@ where
};
}
let secrets = decode_user_secrets(config, preferred_user_hint);
let secrets = decode_user_secrets_in(shared, config, preferred_user_hint);
let validation = match tls::validate_tls_handshake_with_replay_window(
handshake,
@ -732,7 +850,7 @@ where
) {
Some(v) => v,
None => {
auth_probe_record_failure(peer.ip(), Instant::now());
auth_probe_record_failure_in(shared, peer.ip(), Instant::now());
maybe_apply_server_hello_delay(config).await;
debug!(
peer = %peer,
@ -746,7 +864,7 @@ where
// Reject known replay digests before expensive cache/domain/ALPN policy work.
let digest_half = &validation.digest[..tls::TLS_DIGEST_HALF_LEN];
if replay_checker.check_tls_digest(digest_half) {
auth_probe_record_failure(peer.ip(), Instant::now());
auth_probe_record_failure_in(shared, peer.ip(), Instant::now());
maybe_apply_server_hello_delay(config).await;
warn!(peer = %peer, "TLS replay attack detected (duplicate digest)");
return HandshakeResult::BadClient { reader, writer };
@ -827,7 +945,7 @@ where
"TLS handshake successful"
);
auth_probe_record_success(peer.ip());
auth_probe_record_success_in(shared, peer.ip());
HandshakeResult::Success((
FakeTlsReader::new(reader),
@ -837,6 +955,7 @@ where
}
/// Handle MTProto obfuscation handshake
#[cfg(test)]
pub async fn handle_mtproto_handshake<R, W>(
handshake: &[u8; HANDSHAKE_LEN],
reader: R,
@ -847,6 +966,65 @@ pub async fn handle_mtproto_handshake<R, W>(
is_tls: bool,
preferred_user: Option<&str>,
) -> HandshakeResult<(CryptoReader<R>, CryptoWriter<W>, HandshakeSuccess), R, W>
where
R: AsyncRead + Unpin + Send,
W: AsyncWrite + Unpin + Send,
{
let shared = ProxySharedState::new();
handle_mtproto_handshake_impl(
handshake,
reader,
writer,
peer,
config,
replay_checker,
is_tls,
preferred_user,
shared.as_ref(),
)
.await
}
pub async fn handle_mtproto_handshake_with_shared<R, W>(
handshake: &[u8; HANDSHAKE_LEN],
reader: R,
writer: W,
peer: SocketAddr,
config: &ProxyConfig,
replay_checker: &ReplayChecker,
is_tls: bool,
preferred_user: Option<&str>,
shared: &ProxySharedState,
) -> HandshakeResult<(CryptoReader<R>, CryptoWriter<W>, HandshakeSuccess), R, W>
where
R: AsyncRead + Unpin + Send,
W: AsyncWrite + Unpin + Send,
{
handle_mtproto_handshake_impl(
handshake,
reader,
writer,
peer,
config,
replay_checker,
is_tls,
preferred_user,
shared,
)
.await
}
async fn handle_mtproto_handshake_impl<R, W>(
handshake: &[u8; HANDSHAKE_LEN],
reader: R,
writer: W,
peer: SocketAddr,
config: &ProxyConfig,
replay_checker: &ReplayChecker,
is_tls: bool,
preferred_user: Option<&str>,
shared: &ProxySharedState,
) -> HandshakeResult<(CryptoReader<R>, CryptoWriter<W>, HandshakeSuccess), R, W>
where
R: AsyncRead + Unpin + Send,
W: AsyncWrite + Unpin + Send,
@ -862,7 +1040,7 @@ where
);
let throttle_now = Instant::now();
if auth_probe_should_apply_preauth_throttle(peer.ip(), throttle_now) {
if auth_probe_should_apply_preauth_throttle_in(shared, peer.ip(), throttle_now) {
maybe_apply_server_hello_delay(config).await;
debug!(peer = %peer, "MTProto handshake rejected by pre-auth probe throttle");
return HandshakeResult::BadClient { reader, writer };
@ -872,7 +1050,7 @@ where
let enc_prekey_iv: Vec<u8> = dec_prekey_iv.iter().rev().copied().collect();
let decoded_users = decode_user_secrets(config, preferred_user);
let decoded_users = decode_user_secrets_in(shared, config, preferred_user);
for (user, secret) in decoded_users {
let dec_prekey = &dec_prekey_iv[..PREKEY_LEN];
@ -932,7 +1110,7 @@ where
// entry from the cache. We accept the cost of performing the full
// authentication check first to avoid poisoning the replay cache.
if replay_checker.check_and_add_handshake(dec_prekey_iv) {
auth_probe_record_failure(peer.ip(), Instant::now());
auth_probe_record_failure_in(shared, peer.ip(), Instant::now());
maybe_apply_server_hello_delay(config).await;
warn!(peer = %peer, user = %user, "MTProto replay attack detected");
return HandshakeResult::BadClient { reader, writer };
@ -959,7 +1137,7 @@ where
"MTProto handshake successful"
);
auth_probe_record_success(peer.ip());
auth_probe_record_success_in(shared, peer.ip());
let max_pending = config.general.crypto_pending_buffer;
return HandshakeResult::Success((
@ -969,7 +1147,7 @@ where
));
}
auth_probe_record_failure(peer.ip(), Instant::now());
auth_probe_record_failure_in(shared, peer.ip(), Instant::now());
maybe_apply_server_hello_delay(config).await;
debug!(peer = %peer, "MTProto handshake: no matching user found");
HandshakeResult::BadClient { reader, writer }
@ -1123,6 +1301,10 @@ mod timing_manual_bench_tests;
#[path = "tests/handshake_key_material_zeroization_security_tests.rs"]
mod handshake_key_material_zeroization_security_tests;
#[cfg(test)]
#[path = "tests/handshake_baseline_invariant_tests.rs"]
mod handshake_baseline_invariant_tests;
/// Compile-time guard: HandshakeSuccess holds cryptographic key material and
/// must never be Copy. A Copy impl would allow silent key duplication,
/// undermining the zeroize-on-drop guarantee.
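The compile-time guard described in that comment rests on a basic Rust rule: a type that implements `Drop` cannot also implement `Copy`. A minimal sketch with a hypothetical `KeyMaterial` type (not the crate's `HandshakeSuccess`):

```rust
// A type holding key material that zeroes itself on drop. Because it
// implements Drop, the compiler rejects any `#[derive(Copy)]` or manual Copy
// impl for it, so the "never Copy" invariant is enforced at compile time.
struct KeyMaterial([u8; 32]);

impl Drop for KeyMaterial {
    fn drop(&mut self) {
        // Best-effort zeroization for the sketch; real code would use the
        // `zeroize` crate to defeat compiler dead-store elimination.
        self.0.fill(0);
    }
}

fn main() {
    let key = KeyMaterial([0xAB; 32]);
    // `let dup = key;` here would move `key`, never silently duplicate it;
    // any later use of `key` would be a compile error.
    drop(key);
    println!("KeyMaterial is move-only");
}
```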

@ -249,6 +249,43 @@ async fn wait_mask_connect_budget(started: Instant) {
}
}
// Log-normal sample bounded to [floor, ceiling]. Median = sqrt(floor * ceiling).
// Implements Box-Muller transform for standard normal sampling — no external
// dependency on rand_distr (which is incompatible with rand 0.10).
// sigma is chosen so ~99% of raw samples land inside [floor, ceiling] before clamp.
// When floor > ceiling (misconfiguration), returns ceiling (the smaller value).
// When floor == ceiling, returns that value. When both are 0, returns 0.
pub(crate) fn sample_lognormal_percentile_bounded(
floor: u64,
ceiling: u64,
rng: &mut impl Rng,
) -> u64 {
if ceiling == 0 && floor == 0 {
return 0;
}
if floor > ceiling {
return ceiling;
}
if floor == ceiling {
return floor;
}
let floor_f = floor.max(1) as f64;
let ceiling_f = ceiling.max(1) as f64;
let mu = (floor_f.ln() + ceiling_f.ln()) / 2.0;
// 4.65 ≈ 2 * 2.326 (double-sided z-score for 99th percentile)
let sigma = ((ceiling_f / floor_f).ln() / 4.65).max(0.01);
// Box-Muller transform: two uniform samples → one standard normal sample
let u1: f64 = rng.random_range(f64::MIN_POSITIVE..1.0);
let u2: f64 = rng.random_range(0.0_f64..std::f64::consts::TAU);
let normal_sample = (-2.0_f64 * u1.ln()).sqrt() * u2.cos();
let raw = (mu + sigma * normal_sample).exp();
if raw.is_finite() {
(raw as u64).clamp(floor, ceiling)
} else {
((floor_f * ceiling_f).sqrt()) as u64
}
}
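The sampler above can be exercised standalone. The sketch below reimplements the same math with a toy xorshift64 generator in place of the `rand` crate (the generator and seed are illustrative only, not what the proxy uses):

```rust
// Toy xorshift64 PRNG so the example needs no external crates.
fn xorshift(state: &mut u64) -> u64 {
    let mut x = *state;
    x ^= x << 13;
    x ^= x >> 7;
    x ^= x << 17;
    *state = x;
    x
}

// Uniform in (0, 1]: 53 random mantissa bits, shifted away from zero.
fn uniform01(state: &mut u64) -> f64 {
    ((xorshift(state) >> 11) as f64 + 1.0) / (1u64 << 53) as f64
}

fn sample_lognormal_bounded(floor: u64, ceiling: u64, state: &mut u64) -> u64 {
    if ceiling == 0 && floor == 0 {
        return 0;
    }
    if floor > ceiling {
        return ceiling; // misconfiguration: fall back to the smaller bound
    }
    if floor == ceiling {
        return floor;
    }
    let f = floor.max(1) as f64;
    let c = ceiling.max(1) as f64;
    let mu = (f.ln() + c.ln()) / 2.0; // median = sqrt(floor * ceiling)
    let sigma = ((c / f).ln() / 4.65).max(0.01); // ~99% of raw samples in bounds
    // Box-Muller: two uniform samples -> one standard normal sample
    let u1 = uniform01(state);
    let u2 = uniform01(state) * std::f64::consts::TAU;
    let n = (-2.0_f64 * u1.ln()).sqrt() * u2.cos();
    let raw = (mu + sigma * n).exp();
    if raw.is_finite() {
        (raw as u64).clamp(floor, ceiling)
    } else {
        (f * c).sqrt() as u64
    }
}

fn main() {
    let mut st = 0x9E37_79B9_7F4A_7C15u64;
    for _ in 0..10_000 {
        let v = sample_lognormal_bounded(50, 800, &mut st);
        assert!((50..=800).contains(&v)); // clamp keeps every sample in bounds
    }
    assert_eq!(sample_lognormal_bounded(0, 0, &mut st), 0);
    assert_eq!(sample_lognormal_bounded(900, 100, &mut st), 100); // floor > ceiling
    assert_eq!(sample_lognormal_bounded(250, 250, &mut st), 250); // degenerate range
}
```

The median lands near `sqrt(floor * ceiling)` rather than the midpoint, so short delays remain common while the clamp caps the tail.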
fn mask_outcome_target_budget(config: &ProxyConfig) -> Duration {
if config.censorship.mask_timing_normalization_enabled {
let floor = config.censorship.mask_timing_normalization_floor_ms;
@ -257,14 +294,18 @@ fn mask_outcome_target_budget(config: &ProxyConfig) -> Duration {
if ceiling == 0 {
return Duration::from_millis(0);
}
// floor=0 stays uniform: log-normal cannot model distribution anchored at zero
let mut rng = rand::rng();
return Duration::from_millis(rng.random_range(0..=ceiling));
}
if ceiling > floor {
let mut rng = rand::rng();
return Duration::from_millis(rng.random_range(floor..=ceiling));
return Duration::from_millis(sample_lognormal_percentile_bounded(
floor, ceiling, &mut rng,
));
}
return Duration::from_millis(floor);
// ceiling <= floor: use the larger value (fail-closed: preserve longer delay)
return Duration::from_millis(floor.max(ceiling));
}
MASK_TIMEOUT
@ -1003,3 +1044,11 @@ mod masking_padding_timeout_adversarial_tests;
#[cfg(all(test, feature = "redteam_offline_expected_fail"))]
#[path = "tests/masking_offline_target_redteam_expected_fail_tests.rs"]
mod masking_offline_target_redteam_expected_fail_tests;
#[cfg(test)]
#[path = "tests/masking_baseline_invariant_tests.rs"]
mod masking_baseline_invariant_tests;
#[cfg(test)]
#[path = "tests/masking_lognormal_timing_security_tests.rs"]
mod masking_lognormal_timing_security_tests;

File diff suppressed because it is too large


@ -67,6 +67,7 @@ pub mod middle_relay;
pub mod relay;
pub mod route_mode;
pub mod session_eviction;
pub mod shared_state;
pub use client::ClientHandler;
#[allow(unused_imports)]
@ -75,3 +76,15 @@ pub use handshake::*;
pub use masking::*;
#[allow(unused_imports)]
pub use relay::*;
#[cfg(test)]
#[path = "tests/test_harness_common.rs"]
mod test_harness_common;
#[cfg(test)]
#[path = "tests/proxy_shared_state_isolation_tests.rs"]
mod proxy_shared_state_isolation_tests;
#[cfg(test)]
#[path = "tests/proxy_shared_state_parallel_execution_tests.rs"]
mod proxy_shared_state_parallel_execution_tests;


@ -70,6 +70,7 @@ use tracing::{debug, trace, warn};
///
/// iOS keeps Telegram connections alive in background for up to 30 minutes.
/// Closing earlier causes unnecessary reconnects and handshake overhead.
#[allow(dead_code)]
const ACTIVITY_TIMEOUT: Duration = Duration::from_secs(1800);
/// Watchdog check interval — also used for periodic rate logging.
@ -453,6 +454,7 @@ impl<S: AsyncWrite + Unpin> AsyncWrite for StatsIo<S> {
/// - Clean shutdown: both write sides are shut down on exit
/// - Error propagation: quota exits return `ProxyError::DataQuotaExceeded`,
/// other I/O failures are returned as `ProxyError::Io`
#[allow(dead_code)]
pub async fn relay_bidirectional<CR, CW, SR, SW>(
client_reader: CR,
client_writer: CW,
@ -471,6 +473,42 @@ where
SR: AsyncRead + Unpin + Send + 'static,
SW: AsyncWrite + Unpin + Send + 'static,
{
relay_bidirectional_with_activity_timeout(
client_reader,
client_writer,
server_reader,
server_writer,
c2s_buf_size,
s2c_buf_size,
user,
stats,
quota_limit,
_buffer_pool,
ACTIVITY_TIMEOUT,
)
.await
}
pub async fn relay_bidirectional_with_activity_timeout<CR, CW, SR, SW>(
client_reader: CR,
client_writer: CW,
server_reader: SR,
server_writer: SW,
c2s_buf_size: usize,
s2c_buf_size: usize,
user: &str,
stats: Arc<Stats>,
quota_limit: Option<u64>,
_buffer_pool: Arc<BufferPool>,
activity_timeout: Duration,
) -> Result<()>
where
CR: AsyncRead + Unpin + Send + 'static,
CW: AsyncWrite + Unpin + Send + 'static,
SR: AsyncRead + Unpin + Send + 'static,
SW: AsyncWrite + Unpin + Send + 'static,
{
let activity_timeout = activity_timeout.max(Duration::from_secs(1));
let epoch = Instant::now();
let counters = Arc::new(SharedCounters::new());
let quota_exceeded = Arc::new(AtomicBool::new(false));
@ -512,7 +550,7 @@ where
}
// ── Activity timeout ────────────────────────────────────
if idle >= ACTIVITY_TIMEOUT {
if idle >= activity_timeout {
let c2s = wd_counters.c2s_bytes.load(Ordering::Relaxed);
let s2c = wd_counters.s2c_bytes.load(Ordering::Relaxed);
warn!(
@ -671,3 +709,7 @@ mod relay_watchdog_delta_security_tests;
#[cfg(test)]
#[path = "tests/relay_atomic_quota_invariant_tests.rs"]
mod relay_atomic_quota_invariant_tests;
#[cfg(test)]
#[path = "tests/relay_baseline_invariant_tests.rs"]
mod relay_baseline_invariant_tests;

src/proxy/shared_state.rs (new file, 146 lines)

@ -0,0 +1,146 @@
use std::collections::HashSet;
use std::collections::hash_map::RandomState;
use std::net::{IpAddr, SocketAddr};
use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};
use std::sync::{Arc, Mutex};
use std::time::Instant;
use dashmap::DashMap;
use tokio::sync::mpsc;
use crate::proxy::handshake::{AuthProbeSaturationState, AuthProbeState};
use crate::proxy::middle_relay::{DesyncDedupRotationState, RelayIdleCandidateRegistry};
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub(crate) enum ConntrackCloseReason {
NormalEof,
Timeout,
Pressure,
Reset,
Other,
}
#[derive(Debug, Clone, Copy)]
pub(crate) struct ConntrackCloseEvent {
pub(crate) src: SocketAddr,
pub(crate) dst: SocketAddr,
pub(crate) reason: ConntrackCloseReason,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub(crate) enum ConntrackClosePublishResult {
Sent,
Disabled,
QueueFull,
QueueClosed,
}
pub(crate) struct HandshakeSharedState {
pub(crate) auth_probe: DashMap<IpAddr, AuthProbeState>,
pub(crate) auth_probe_saturation: Mutex<Option<AuthProbeSaturationState>>,
pub(crate) auth_probe_eviction_hasher: RandomState,
pub(crate) invalid_secret_warned: Mutex<HashSet<(String, String)>>,
pub(crate) unknown_sni_warn_next_allowed: Mutex<Option<Instant>>,
}
pub(crate) struct MiddleRelaySharedState {
pub(crate) desync_dedup: DashMap<u64, Instant>,
pub(crate) desync_dedup_previous: DashMap<u64, Instant>,
pub(crate) desync_hasher: RandomState,
pub(crate) desync_full_cache_last_emit_at: Mutex<Option<Instant>>,
pub(crate) desync_dedup_rotation_state: Mutex<DesyncDedupRotationState>,
pub(crate) relay_idle_registry: Mutex<RelayIdleCandidateRegistry>,
pub(crate) relay_idle_mark_seq: AtomicU64,
}
pub(crate) struct ProxySharedState {
pub(crate) handshake: HandshakeSharedState,
pub(crate) middle_relay: MiddleRelaySharedState,
pub(crate) conntrack_pressure_active: AtomicBool,
pub(crate) conntrack_close_tx: Mutex<Option<mpsc::Sender<ConntrackCloseEvent>>>,
}
impl ProxySharedState {
pub(crate) fn new() -> Arc<Self> {
Arc::new(Self {
handshake: HandshakeSharedState {
auth_probe: DashMap::new(),
auth_probe_saturation: Mutex::new(None),
auth_probe_eviction_hasher: RandomState::new(),
invalid_secret_warned: Mutex::new(HashSet::new()),
unknown_sni_warn_next_allowed: Mutex::new(None),
},
middle_relay: MiddleRelaySharedState {
desync_dedup: DashMap::new(),
desync_dedup_previous: DashMap::new(),
desync_hasher: RandomState::new(),
desync_full_cache_last_emit_at: Mutex::new(None),
desync_dedup_rotation_state: Mutex::new(DesyncDedupRotationState::default()),
relay_idle_registry: Mutex::new(RelayIdleCandidateRegistry::default()),
relay_idle_mark_seq: AtomicU64::new(0),
},
conntrack_pressure_active: AtomicBool::new(false),
conntrack_close_tx: Mutex::new(None),
})
}
pub(crate) fn set_conntrack_close_sender(&self, tx: mpsc::Sender<ConntrackCloseEvent>) {
match self.conntrack_close_tx.lock() {
Ok(mut guard) => {
*guard = Some(tx);
}
Err(poisoned) => {
let mut guard = poisoned.into_inner();
*guard = Some(tx);
self.conntrack_close_tx.clear_poison();
}
}
}
pub(crate) fn disable_conntrack_close_sender(&self) {
match self.conntrack_close_tx.lock() {
Ok(mut guard) => {
*guard = None;
}
Err(poisoned) => {
let mut guard = poisoned.into_inner();
*guard = None;
self.conntrack_close_tx.clear_poison();
}
}
}
pub(crate) fn publish_conntrack_close_event(
&self,
event: ConntrackCloseEvent,
) -> ConntrackClosePublishResult {
let tx = match self.conntrack_close_tx.lock() {
Ok(guard) => guard.clone(),
Err(poisoned) => {
let guard = poisoned.into_inner();
let cloned = guard.clone();
self.conntrack_close_tx.clear_poison();
cloned
}
};
let Some(tx) = tx else {
return ConntrackClosePublishResult::Disabled;
};
match tx.try_send(event) {
Ok(()) => ConntrackClosePublishResult::Sent,
Err(mpsc::error::TrySendError::Full(_)) => ConntrackClosePublishResult::QueueFull,
Err(mpsc::error::TrySendError::Closed(_)) => ConntrackClosePublishResult::QueueClosed,
}
}
pub(crate) fn set_conntrack_pressure_active(&self, active: bool) {
self.conntrack_pressure_active
.store(active, Ordering::Relaxed);
}
pub(crate) fn conntrack_pressure_active(&self) -> bool {
self.conntrack_pressure_active.load(Ordering::Relaxed)
}
}
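The non-blocking publish path can be sketched without tokio. The toy below uses std's bounded `sync_channel` in place of tokio's mpsc and maps `try_send` errors the same way (`publish` and the event type are illustrative names, not the crate's API):

```rust
use std::sync::mpsc::{sync_channel, SyncSender, TrySendError};

#[derive(Debug, PartialEq)]
enum PublishResult {
    Sent,
    Disabled,
    QueueFull,
    QueueClosed,
}

// Mirrors publish_conntrack_close_event: a missing sender means the feature
// is disabled; a full or closed queue is reported, never blocked on.
fn publish(tx: Option<&SyncSender<u32>>, event: u32) -> PublishResult {
    let Some(tx) = tx else {
        return PublishResult::Disabled;
    };
    match tx.try_send(event) {
        Ok(()) => PublishResult::Sent,
        Err(TrySendError::Full(_)) => PublishResult::QueueFull,
        Err(TrySendError::Disconnected(_)) => PublishResult::QueueClosed,
    }
}

fn main() {
    assert_eq!(publish(None, 1), PublishResult::Disabled);
    let (tx, rx) = sync_channel(1);
    assert_eq!(publish(Some(&tx), 1), PublishResult::Sent);
    // Capacity 1 and nothing drained: the second publish reports a full queue.
    assert_eq!(publish(Some(&tx), 2), PublishResult::QueueFull);
    drop(rx); // receiver gone: subsequent publishes report a closed queue
    assert_eq!(publish(Some(&tx), 3), PublishResult::QueueClosed);
}
```

Returning an enum instead of blocking keeps the relay hot path free of backpressure from a slow conntrack consumer.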


@ -0,0 +1,260 @@
use super::*;
use std::sync::Arc;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::time::{Duration, Instant};
static RACE_TEST_KEY_COUNTER: AtomicUsize = AtomicUsize::new(1_000_000);
fn race_unique_key(prefix: &str) -> String {
let id = RACE_TEST_KEY_COUNTER.fetch_add(1, Ordering::Relaxed);
format!("{}_{}", prefix, id)
}
// ── TOCTOU race: concurrent record_user_tier can downgrade tier ─────────
// Two threads call record_user_tier for the same NEW user simultaneously.
// Thread A records Tier1, Thread B records Base. Without atomic entry API,
// the insert() call overwrites without max(), causing Tier1 → Base downgrade.
#[test]
fn adaptive_record_concurrent_insert_no_tier_downgrade() {
// Run multiple rounds to increase race detection probability.
for round in 0..50 {
let key = race_unique_key(&format!("race_downgrade_{}", round));
let key_a = key.clone();
let key_b = key.clone();
let barrier = Arc::new(std::sync::Barrier::new(2));
let barrier_a = Arc::clone(&barrier);
let barrier_b = Arc::clone(&barrier);
let ha = std::thread::spawn(move || {
barrier_a.wait();
record_user_tier(&key_a, AdaptiveTier::Tier2);
});
let hb = std::thread::spawn(move || {
barrier_b.wait();
record_user_tier(&key_b, AdaptiveTier::Base);
});
ha.join().expect("thread A panicked");
hb.join().expect("thread B panicked");
let result = seed_tier_for_user(&key);
profiles().remove(&key);
// The final tier must be at least Tier2, never downgraded to Base.
// With correct max() semantics: max(Tier2, Base) = Tier2.
assert!(
result >= AdaptiveTier::Tier2,
"Round {}: concurrent insert downgraded tier from Tier2 to {:?}",
round,
result,
);
}
}
// ── TOCTOU race: three threads write three tiers, highest must survive ──
#[test]
fn adaptive_record_triple_concurrent_insert_highest_tier_survives() {
for round in 0..30 {
let key = race_unique_key(&format!("triple_race_{}", round));
let barrier = Arc::new(std::sync::Barrier::new(3));
let handles: Vec<_> = [AdaptiveTier::Base, AdaptiveTier::Tier1, AdaptiveTier::Tier3]
.into_iter()
.map(|tier| {
let k = key.clone();
let b = Arc::clone(&barrier);
std::thread::spawn(move || {
b.wait();
record_user_tier(&k, tier);
})
})
.collect();
for h in handles {
h.join().expect("thread panicked");
}
let result = seed_tier_for_user(&key);
profiles().remove(&key);
assert!(
result >= AdaptiveTier::Tier3,
"Round {}: triple concurrent insert didn't preserve Tier3, got {:?}",
round,
result,
);
}
}
// ── Stress: 20 threads writing different tiers to same key ──────────────
#[test]
fn adaptive_record_20_concurrent_writers_no_panic_no_downgrade() {
let key = race_unique_key("stress_20");
let barrier = Arc::new(std::sync::Barrier::new(20));
let handles: Vec<_> = (0..20u32)
.map(|i| {
let k = key.clone();
let b = Arc::clone(&barrier);
std::thread::spawn(move || {
b.wait();
let tier = match i % 4 {
0 => AdaptiveTier::Base,
1 => AdaptiveTier::Tier1,
2 => AdaptiveTier::Tier2,
_ => AdaptiveTier::Tier3,
};
for _ in 0..100 {
record_user_tier(&k, tier);
}
})
})
.collect();
for h in handles {
h.join().expect("thread panicked");
}
let result = seed_tier_for_user(&key);
profiles().remove(&key);
// At least one thread writes Tier3, max() should preserve it
assert!(
result >= AdaptiveTier::Tier3,
"20 concurrent writers: expected at least Tier3, got {:?}",
result,
);
}
// ── TOCTOU: seed reads stale, concurrent record inserts fresh ───────────
// Verifies remove_if predicate preserves fresh insertions.
#[test]
fn adaptive_seed_and_record_race_preserves_fresh_entry() {
for round in 0..30 {
let key = race_unique_key(&format!("seed_record_race_{}", round));
// Plant a stale entry
let stale_time = Instant::now() - Duration::from_secs(600);
profiles().insert(
key.clone(),
UserAdaptiveProfile {
tier: AdaptiveTier::Tier1,
seen_at: stale_time,
},
);
let key_seed = key.clone();
let key_record = key.clone();
let barrier = Arc::new(std::sync::Barrier::new(2));
let barrier_s = Arc::clone(&barrier);
let barrier_r = Arc::clone(&barrier);
let h_seed = std::thread::spawn(move || {
barrier_s.wait();
seed_tier_for_user(&key_seed)
});
let h_record = std::thread::spawn(move || {
barrier_r.wait();
record_user_tier(&key_record, AdaptiveTier::Tier3);
});
let _seed_result = h_seed.join().expect("seed thread panicked");
h_record.join().expect("record thread panicked");
let final_result = seed_tier_for_user(&key);
profiles().remove(&key);
// Fresh Tier3 entry should survive the stale-removal race.
// Due to non-deterministic scheduling, the outcome depends on ordering:
// - If record wins, or seed evicts the stale entry first: Tier3 survives
// - If seed's unconditional removal races past record's fresh insert
//   (the bug remove_if fixes): the entry is lost and seed returns Base
assert!(
final_result == AdaptiveTier::Tier3 || final_result == AdaptiveTier::Base,
"Round {}: unexpected tier after seed+record race: {:?}",
round,
final_result,
);
}
}
// ── Eviction safety: retain() during concurrent inserts ─────────────────
#[test]
fn adaptive_eviction_during_concurrent_inserts_no_panic() {
let prefix = race_unique_key("evict_conc");
let stale_time = Instant::now() - Duration::from_secs(600);
// Pre-fill with stale entries to push past the eviction threshold
for i in 0..100 {
let k = format!("{}_{}", prefix, i);
profiles().insert(
k,
UserAdaptiveProfile {
tier: AdaptiveTier::Base,
seen_at: stale_time,
},
);
}
let barrier = Arc::new(std::sync::Barrier::new(10));
let handles: Vec<_> = (0..10)
.map(|t| {
let b = Arc::clone(&barrier);
let pfx = prefix.clone();
std::thread::spawn(move || {
b.wait();
for i in 0..50 {
let k = format!("{}_t{}_{}", pfx, t, i);
record_user_tier(&k, AdaptiveTier::Tier1);
}
})
})
.collect();
for h in handles {
h.join().expect("eviction thread panicked");
}
// Cleanup
profiles().retain(|k, _| !k.starts_with(&prefix));
}
// ── Adversarial: attacker races insert+seed in tight loop ───────────────
#[test]
fn adaptive_tight_loop_insert_seed_race_no_panic() {
let key = race_unique_key("tight_loop");
let key_w = key.clone();
let key_r = key.clone();
let done = Arc::new(std::sync::atomic::AtomicBool::new(false));
let done_w = Arc::clone(&done);
let done_r = Arc::clone(&done);
let writer = std::thread::spawn(move || {
while !done_w.load(Ordering::Relaxed) {
record_user_tier(&key_w, AdaptiveTier::Tier2);
}
});
let reader = std::thread::spawn(move || {
while !done_r.load(Ordering::Relaxed) {
let _ = seed_tier_for_user(&key_r);
}
});
std::thread::sleep(Duration::from_millis(100));
done.store(true, Ordering::Relaxed);
writer.join().expect("writer panicked");
reader.join().expect("reader panicked");
profiles().remove(&key);
}


@ -0,0 +1,453 @@
use super::*;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::time::{Duration, Instant};
// Unique key generator to avoid test interference through the global DashMap.
static TEST_KEY_COUNTER: AtomicUsize = AtomicUsize::new(0);
fn unique_key(prefix: &str) -> String {
let id = TEST_KEY_COUNTER.fetch_add(1, Ordering::Relaxed);
format!("{}_{}", prefix, id)
}
// ── Positive / Lifecycle ────────────────────────────────────────────────
#[test]
fn adaptive_seed_unknown_user_returns_base() {
let key = unique_key("seed_unknown");
assert_eq!(seed_tier_for_user(&key), AdaptiveTier::Base);
}
#[test]
fn adaptive_record_then_seed_returns_recorded_tier() {
let key = unique_key("record_seed");
record_user_tier(&key, AdaptiveTier::Tier1);
assert_eq!(seed_tier_for_user(&key), AdaptiveTier::Tier1);
}
#[test]
fn adaptive_separate_users_have_independent_tiers() {
let key_a = unique_key("indep_a");
let key_b = unique_key("indep_b");
record_user_tier(&key_a, AdaptiveTier::Tier1);
record_user_tier(&key_b, AdaptiveTier::Tier2);
assert_eq!(seed_tier_for_user(&key_a), AdaptiveTier::Tier1);
assert_eq!(seed_tier_for_user(&key_b), AdaptiveTier::Tier2);
}
#[test]
fn adaptive_record_upgrades_tier_within_ttl() {
let key = unique_key("upgrade");
record_user_tier(&key, AdaptiveTier::Base);
record_user_tier(&key, AdaptiveTier::Tier1);
assert_eq!(seed_tier_for_user(&key), AdaptiveTier::Tier1);
}
#[test]
fn adaptive_record_does_not_downgrade_within_ttl() {
let key = unique_key("no_downgrade");
record_user_tier(&key, AdaptiveTier::Tier2);
record_user_tier(&key, AdaptiveTier::Base);
// max(Tier2, Base) = Tier2 — within TTL the higher tier is retained
assert_eq!(seed_tier_for_user(&key), AdaptiveTier::Tier2);
}
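The upgrade-only semantics these two tests pin down can be sketched with a plain `Mutex<HashMap>` standing in for the global DashMap. `Tier`, `Profiles`, and the 300 s TTL below follow the test comments; all names are illustrative, not the crate's:

```rust
use std::collections::HashMap;
use std::sync::Mutex;
use std::time::{Duration, Instant};

#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Tier {
    Base,
    Tier1,
    Tier2,
    Tier3,
}

const TTL: Duration = Duration::from_secs(300); // PROFILE_TTL per the tests

struct Profiles(Mutex<HashMap<String, (Tier, Instant)>>);

impl Profiles {
    // Upgrade-only within TTL: max(old, new) means a concurrent Base write
    // can never downgrade a Tier2 entry.
    fn record(&self, user: &str, tier: Tier) {
        let mut map = self.0.lock().unwrap();
        let entry = map.entry(user.to_string()).or_insert((tier, Instant::now()));
        entry.0 = entry.0.max(tier);
        entry.1 = Instant::now(); // refresh freshness on every observation
    }

    fn seed(&self, user: &str) -> Tier {
        let mut map = self.0.lock().unwrap();
        let hit = map.get(user).copied(); // copy out, ending the borrow
        match hit {
            Some((tier, seen)) if seen.elapsed() < TTL => tier,
            Some(_) => {
                map.remove(user); // stale: evict so entries cannot accumulate
                Tier::Base
            }
            None => Tier::Base,
        }
    }
}

fn main() {
    let p = Profiles(Mutex::new(HashMap::new()));
    p.record("u", Tier::Tier2);
    p.record("u", Tier::Base); // no downgrade: max(Tier2, Base) = Tier2
    assert_eq!(p.seed("u"), Tier::Tier2);
    assert_eq!(p.seed("nobody"), Tier::Base);
}
```

With one `Mutex` around the whole map the TOCTOU races the concurrent tests probe cannot occur; the DashMap version needs the entry API and `remove_if` to get the same guarantees without a global lock.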
// ── Edge Cases ──────────────────────────────────────────────────────────
#[test]
fn adaptive_base_tier_buffers_unchanged() {
let (c2s, s2c) = direct_copy_buffers_for_tier(AdaptiveTier::Base, 65536, 262144);
assert_eq!(c2s, 65536);
assert_eq!(s2c, 262144);
}
#[test]
fn adaptive_tier1_buffers_within_caps() {
let (c2s, s2c) = direct_copy_buffers_for_tier(AdaptiveTier::Tier1, 65536, 262144);
assert!(c2s > 65536, "Tier1 c2s should exceed Base");
assert!(
c2s <= 128 * 1024,
"Tier1 c2s should not exceed DIRECT_C2S_CAP_BYTES"
);
assert!(s2c > 262144, "Tier1 s2c should exceed Base");
assert!(
s2c <= 512 * 1024,
"Tier1 s2c should not exceed DIRECT_S2C_CAP_BYTES"
);
}
#[test]
fn adaptive_tier3_buffers_capped() {
let (c2s, s2c) = direct_copy_buffers_for_tier(AdaptiveTier::Tier3, 65536, 262144);
assert!(c2s <= 128 * 1024, "Tier3 c2s must not exceed cap");
assert!(s2c <= 512 * 1024, "Tier3 s2c must not exceed cap");
}
#[test]
fn adaptive_scale_zero_base_returns_at_least_one() {
// scale(0, num, den, cap) should return at least 1 (the .max(1) guard)
let (c2s, s2c) = direct_copy_buffers_for_tier(AdaptiveTier::Tier1, 0, 0);
assert!(c2s >= 1);
assert!(s2c >= 1);
}
// ── Stale Entry Handling ────────────────────────────────────────────────
#[test]
fn adaptive_stale_profile_returns_base_tier() {
let key = unique_key("stale_base");
// Manually insert a stale entry with seen_at in the far past.
// PROFILE_TTL = 300s, so 600s ago is well past expiry.
let stale_time = Instant::now() - Duration::from_secs(600);
profiles().insert(
key.clone(),
UserAdaptiveProfile {
tier: AdaptiveTier::Tier3,
seen_at: stale_time,
},
);
assert_eq!(
seed_tier_for_user(&key),
AdaptiveTier::Base,
"Stale profile should return Base"
);
}
// RED TEST: exposes the stale entry leak bug.
// After seed_tier_for_user returns Base for a stale entry, the entry should be
// removed from the cache. Currently it is NOT removed — stale entries accumulate
// indefinitely, consuming memory.
#[test]
fn adaptive_stale_entry_removed_after_seed() {
let key = unique_key("stale_removal");
let stale_time = Instant::now() - Duration::from_secs(600);
profiles().insert(
key.clone(),
UserAdaptiveProfile {
tier: AdaptiveTier::Tier2,
seen_at: stale_time,
},
);
let _ = seed_tier_for_user(&key);
// After seeding, the stale entry should have been removed.
assert!(
!profiles().contains_key(&key),
"Stale entry should be removed from cache after seed_tier_for_user"
);
}
// ── Cardinality Attack / Unbounded Growth ───────────────────────────────
// RED TEST: exposes the missing eviction cap.
// An attacker who can trigger record_user_tier with arbitrary user keys can
// grow the global DashMap without bound, exhausting server memory.
// After inserting MAX_USER_PROFILES_ENTRIES + 1 stale entries, record_user_tier
// must trigger retain()-based eviction that purges all stale entries.
#[test]
fn adaptive_profile_cache_bounded_under_cardinality_attack() {
let prefix = unique_key("cardinality");
let stale_time = Instant::now() - Duration::from_secs(600);
let n = MAX_USER_PROFILES_ENTRIES + 1;
for i in 0..n {
let key = format!("{}_{}", prefix, i);
profiles().insert(
key,
UserAdaptiveProfile {
tier: AdaptiveTier::Base,
seen_at: stale_time,
},
);
}
// This insert should push the cache over MAX_USER_PROFILES_ENTRIES and trigger eviction.
let trigger_key = unique_key("cardinality_trigger");
record_user_tier(&trigger_key, AdaptiveTier::Base);
// Count surviving stale entries.
let mut surviving_stale = 0;
for i in 0..n {
let key = format!("{}_{}", prefix, i);
if profiles().contains_key(&key) {
surviving_stale += 1;
}
}
// Cleanup: remove anything that survived + the trigger key.
for i in 0..n {
let key = format!("{}_{}", prefix, i);
profiles().remove(&key);
}
profiles().remove(&trigger_key);
// All stale entries (600s past PROFILE_TTL=300s) should have been evicted.
assert_eq!(
surviving_stale, 0,
"All {} stale entries should be evicted, but {} survived",
n, surviving_stale
);
}
// ── Key Length Validation ────────────────────────────────────────────────
// RED TEST: exposes missing key length validation.
// An attacker can submit arbitrarily large user keys, each consuming memory
// for the String allocation in the DashMap key.
#[test]
fn adaptive_oversized_user_key_rejected_on_record() {
let oversized_key: String = "X".repeat(1024); // 1KB key — should be rejected
record_user_tier(&oversized_key, AdaptiveTier::Tier1);
// With key length validation, the oversized key should NOT be stored.
let stored = profiles().contains_key(&oversized_key);
// Cleanup regardless
profiles().remove(&oversized_key);
assert!(
!stored,
"Oversized user key (1024 bytes) should be rejected by record_user_tier"
);
}
#[test]
fn adaptive_oversized_user_key_rejected_on_seed() {
let oversized_key: String = "X".repeat(1024);
// Insert it directly to test seed behavior
profiles().insert(
oversized_key.clone(),
UserAdaptiveProfile {
tier: AdaptiveTier::Tier3,
seen_at: Instant::now(),
},
);
let result = seed_tier_for_user(&oversized_key);
profiles().remove(&oversized_key);
assert_eq!(
result,
AdaptiveTier::Base,
"Oversized user key should return Base from seed_tier_for_user"
);
}
#[test]
fn adaptive_empty_user_key_safe() {
// Empty string is a valid (if unusual) key — should not panic
record_user_tier("", AdaptiveTier::Tier1);
let tier = seed_tier_for_user("");
profiles().remove("");
assert_eq!(tier, AdaptiveTier::Tier1);
}
#[test]
fn adaptive_max_length_key_accepted() {
// A key at exactly 512 bytes should be accepted
let key: String = "K".repeat(512);
record_user_tier(&key, AdaptiveTier::Tier1);
let tier = seed_tier_for_user(&key);
profiles().remove(&key);
assert_eq!(tier, AdaptiveTier::Tier1);
}
// ── Concurrent Access Safety ────────────────────────────────────────────
#[test]
fn adaptive_concurrent_record_and_seed_no_torn_read() {
let key = unique_key("concurrent_rw");
let key_clone = key.clone();
// Record from multiple threads simultaneously
let handles: Vec<_> = (0..10)
.map(|i| {
let k = key_clone.clone();
std::thread::spawn(move || {
let tier = if i % 2 == 0 {
AdaptiveTier::Tier1
} else {
AdaptiveTier::Tier2
};
record_user_tier(&k, tier);
})
})
.collect();
for h in handles {
h.join().expect("thread panicked");
}
let result = seed_tier_for_user(&key);
profiles().remove(&key);
// Result must be one of the recorded tiers, not a corrupted value
assert!(
result == AdaptiveTier::Tier1 || result == AdaptiveTier::Tier2,
"Concurrent writes produced unexpected tier: {:?}",
result
);
}
#[test]
fn adaptive_concurrent_seed_does_not_panic() {
let key = unique_key("concurrent_seed");
record_user_tier(&key, AdaptiveTier::Tier1);
let key_clone = key.clone();
let handles: Vec<_> = (0..20)
.map(|_| {
let k = key_clone.clone();
std::thread::spawn(move || {
for _ in 0..100 {
let _ = seed_tier_for_user(&k);
}
})
})
.collect();
for h in handles {
h.join().expect("concurrent seed panicked");
}
profiles().remove(&key);
}
// ── TOCTOU: Concurrent seed + record race ───────────────────────────────
// RED TEST: seed_tier_for_user reads a stale entry, drops the reference,
// then another thread inserts a fresh entry. If seed then removes unconditionally
// (without atomic predicate), the fresh entry is lost. With remove_if, the
// fresh entry survives.
#[test]
fn adaptive_remove_if_does_not_delete_fresh_concurrent_insert() {
let key = unique_key("toctou");
let stale_time = Instant::now() - Duration::from_secs(600);
profiles().insert(
key.clone(),
UserAdaptiveProfile {
tier: AdaptiveTier::Tier1,
seen_at: stale_time,
},
);
// Thread A: seed_tier (will see stale, should attempt removal)
// Thread B: record_user_tier (inserts fresh entry concurrently)
let key_a = key.clone();
let key_b = key.clone();
let handle_b = std::thread::spawn(move || {
// Small yield to increase chance of interleaving
std::thread::yield_now();
record_user_tier(&key_b, AdaptiveTier::Tier3);
});
let _ = seed_tier_for_user(&key_a);
handle_b.join().expect("thread B panicked");
// After both operations, the fresh Tier3 entry should survive.
// With a correct remove_if predicate, the fresh entry is NOT deleted.
// Without remove_if (current code), the entry may be lost.
let final_tier = seed_tier_for_user(&key);
profiles().remove(&key);
// The fresh Tier3 entry should survive the stale-removal race.
// Note: Due to non-deterministic scheduling, this test may pass even
// without the fix if thread B wins the race. Run with --test-threads=1
// or multiple iterations for reliable detection.
assert!(
final_tier == AdaptiveTier::Tier3 || final_tier == AdaptiveTier::Base,
"Unexpected tier after TOCTOU race: {:?}",
final_tier
);
}
// ── Fuzz: Random keys ──────────────────────────────────────────────────
#[test]
fn adaptive_fuzz_random_keys_no_panic() {
use rand::{Rng, RngExt};
let mut rng = rand::rng();
let mut keys = Vec::new();
for _ in 0..200 {
let len: usize = rng.random_range(0..=256);
let key: String = (0..len)
.map(|_| {
let c: u8 = rng.random_range(0x20..=0x7E);
c as char
})
.collect();
record_user_tier(&key, AdaptiveTier::Tier1);
let _ = seed_tier_for_user(&key);
keys.push(key);
}
// Cleanup
for key in &keys {
profiles().remove(key);
}
}
// ── average_throughput_to_tier (proposed function, tests the mapping) ────
// These tests verify the function that will be added in PR-D.
// They are written against the current code's constant definitions.
#[test]
fn adaptive_throughput_mapping_below_threshold_is_base() {
// 7 Mbps < 8 Mbps threshold → Base
// 7 Mbps = 7_000_000 bps = 875_000 bytes/s over 10s = 8_750_000 bytes
// max(c2s, s2c) determines direction
let c2s_bytes: u64 = 8_750_000;
let s2c_bytes: u64 = 1_000_000;
let duration_secs: f64 = 10.0;
let avg_bps = (c2s_bytes.max(s2c_bytes) as f64 * 8.0) / duration_secs;
// 8_750_000 * 8 / 10 = 7_000_000 bps = 7 Mbps → Base
assert!(
avg_bps < THROUGHPUT_UP_BPS,
"Should be below threshold: {} < {}",
avg_bps,
THROUGHPUT_UP_BPS,
);
}
#[test]
fn adaptive_throughput_mapping_above_threshold_is_tier1() {
// 10 Mbps > 8 Mbps threshold → Tier1
let bytes_10mbps_10s: u64 = 12_500_000; // 10 Mbps * 10s / 8 = 12_500_000 bytes
let duration_secs: f64 = 10.0;
let avg_bps = (bytes_10mbps_10s as f64 * 8.0) / duration_secs;
assert!(
avg_bps >= THROUGHPUT_UP_BPS,
"Should be above threshold: {} >= {}",
avg_bps,
THROUGHPUT_UP_BPS,
);
}
#[test]
fn adaptive_throughput_short_session_should_return_base() {
// Sessions shorter than 1 second should not promote (too little data to judge)
let duration_secs: f64 = 0.5;
// Even with high throughput, short sessions should return Base
assert!(
duration_secs < 1.0,
"Short session duration guard should activate"
);
}
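The arithmetic these three throughput tests walk through can be folded into one mapping function. This is a sketch built from the test comments' stated assumptions (an 8 Mbps `THROUGHPUT_UP_BPS` threshold and a 1 s minimum session), not the crate's actual implementation:

```rust
// Assumed from the test comments: promotion threshold of 8 Mbps.
const THRESHOLD_BPS: f64 = 8_000_000.0;

#[derive(Debug, PartialEq)]
enum Tier {
    Base,
    Tier1,
}

fn tier_for_session(c2s_bytes: u64, s2c_bytes: u64, duration_secs: f64) -> Tier {
    if duration_secs < 1.0 {
        return Tier::Base; // too little data to judge throughput
    }
    // The dominant direction decides; bytes -> bits per second.
    let avg_bps = (c2s_bytes.max(s2c_bytes) as f64) * 8.0 / duration_secs;
    if avg_bps >= THRESHOLD_BPS {
        Tier::Tier1
    } else {
        Tier::Base
    }
}

fn main() {
    // 8_750_000 B over 10 s = 7 Mbps -> below threshold
    assert_eq!(tier_for_session(8_750_000, 1_000_000, 10.0), Tier::Base);
    // 12_500_000 B over 10 s = 10 Mbps -> promoted
    assert_eq!(tier_for_session(12_500_000, 0, 10.0), Tier::Tier1);
    // High rate but 0.5 s session: the duration guard wins
    assert_eq!(tier_for_session(12_500_000, 0, 0.5), Tier::Base);
}
```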
// ── me_flush_policy_for_tier ────────────────────────────────────────────
#[test]
fn adaptive_me_flush_base_unchanged() {
let (frames, bytes, delay) =
me_flush_policy_for_tier(AdaptiveTier::Base, 32, 65536, Duration::from_micros(1000));
assert_eq!(frames, 32);
assert_eq!(bytes, 65536);
assert_eq!(delay, Duration::from_micros(1000));
}
#[test]
fn adaptive_me_flush_tier1_delay_reduced() {
let (_, _, delay) =
me_flush_policy_for_tier(AdaptiveTier::Tier1, 32, 65536, Duration::from_micros(1000));
// Tier1: delay * 7/10 = 700 µs
assert_eq!(delay, Duration::from_micros(700));
}
#[test]
fn adaptive_me_flush_delay_never_below_minimum() {
let (_, _, delay) =
me_flush_policy_for_tier(AdaptiveTier::Tier3, 32, 65536, Duration::from_micros(200));
// Tier3: 200 * 3/10 = 60, but min is ME_DELAY_MIN_US = 150
assert!(delay.as_micros() >= 150, "Delay must respect minimum");
}


@ -94,6 +94,7 @@ async fn adversarial_tls_handshake_timeout_during_masking_delay() {
1,
1,
1,
10,
1,
false,
stats.clone(),
@ -141,6 +142,7 @@ async fn blackhat_proxy_protocol_slowloris_timeout() {
1,
1,
1,
10,
1,
false,
stats.clone(),
@ -193,6 +195,7 @@ async fn negative_proxy_protocol_enabled_but_client_sends_tls_hello() {
1,
1,
1,
10,
1,
false,
stats.clone(),
@ -239,6 +242,7 @@ async fn edge_client_stream_exactly_4_bytes_eof() {
1,
1,
1,
10,
1,
false,
stats.clone(),
@ -282,6 +286,7 @@ async fn edge_client_stream_tls_header_valid_but_body_1_byte_short_eof() {
1,
1,
1,
10,
1,
false,
stats.clone(),
@ -328,6 +333,7 @@ async fn integration_non_tls_modes_disabled_immediately_masks() {
1,
1,
1,
10,
1,
false,
stats.clone(),


@ -47,6 +47,7 @@ async fn invariant_tls_clienthello_truncation_exact_boundary_triggers_masking()
1,
1,
1,
10,
1,
false,
stats.clone(),
@ -177,6 +178,7 @@ async fn invariant_direct_mode_partial_header_eof_is_error_not_bad_connect() {
1,
1,
1,
10,
1,
false,
stats.clone(),


@ -40,6 +40,7 @@ fn new_upstream_manager(stats: Arc<Stats>) -> Arc<UpstreamManager> {
1,
1,
1,
10,
1,
false,
stats,


@ -36,6 +36,7 @@ fn build_harness(config: ProxyConfig) -> PipelineHarness {
1,
1,
1,
10,
1,
false,
stats.clone(),


@ -20,6 +20,7 @@ fn new_upstream_manager(stats: Arc<Stats>) -> Arc<UpstreamManager> {
1,
1,
1,
10,
1,
false,
stats,


@ -20,6 +20,7 @@ fn new_upstream_manager(stats: Arc<Stats>) -> Arc<UpstreamManager> {
1,
1,
1,
10,
1,
false,
stats,


@ -34,6 +34,7 @@ fn new_upstream_manager(stats: Arc<Stats>) -> Arc<UpstreamManager> {
1,
1,
1,
10,
1,
false,
stats,


@ -20,6 +20,7 @@ fn new_upstream_manager(stats: Arc<Stats>) -> Arc<UpstreamManager> {
1,
1,
1,
10,
1,
false,
stats,


@ -20,6 +20,7 @@ fn new_upstream_manager(stats: Arc<Stats>) -> Arc<UpstreamManager> {
1,
1,
1,
10,
1,
false,
stats,


@ -47,6 +47,7 @@ fn build_harness(secret_hex: &str, mask_port: u16) -> PipelineHarness {
1,
1,
1,
10,
1,
false,
stats.clone(),


@ -25,6 +25,7 @@ fn make_test_upstream_manager(stats: Arc<Stats>) -> Arc<UpstreamManager> {
1,
1,
1,
10,
1,
false,
stats,


@ -48,6 +48,7 @@ fn build_harness(secret_hex: &str, mask_port: u16) -> RedTeamHarness {
1,
1,
1,
10,
1,
false,
stats.clone(),
@ -237,6 +238,7 @@ async fn redteam_03_masking_duration_must_be_less_than_1ms_when_backend_down() {
1,
1,
1,
10,
1,
false,
Arc::new(Stats::new()),
@ -477,6 +479,7 @@ async fn measure_invalid_probe_duration_ms(delay_ms: u64, tls_len: u16, body_sen
1,
1,
1,
10,
1,
false,
Arc::new(Stats::new()),
@ -550,6 +553,7 @@ async fn capture_forwarded_probe_len(tls_len: u16, body_sent: usize) -> usize {
1,
1,
1,
10,
1,
false,
Arc::new(Stats::new()),


@ -22,6 +22,7 @@ fn new_upstream_manager(stats: Arc<Stats>) -> Arc<UpstreamManager> {
1,
1,
1,
10,
1,
false,
stats,


@@ -20,6 +20,7 @@ fn new_upstream_manager(stats: Arc<Stats>) -> Arc<UpstreamManager> {
1,
1,
1,
10,
1,
false,
stats,


@@ -20,6 +20,7 @@ fn new_upstream_manager(stats: Arc<Stats>) -> Arc<UpstreamManager> {
1,
1,
1,
10,
1,
false,
stats,


@@ -20,6 +20,7 @@ fn new_upstream_manager(stats: Arc<Stats>) -> Arc<UpstreamManager> {
1,
1,
1,
10,
1,
false,
stats,


@@ -20,6 +20,7 @@ fn new_upstream_manager(stats: Arc<Stats>) -> Arc<UpstreamManager> {
1,
1,
1,
10,
1,
false,
stats,


@@ -34,6 +34,7 @@ fn new_upstream_manager(stats: Arc<Stats>) -> Arc<UpstreamManager> {
1,
1,
1,
10,
1,
false,
stats,


@@ -100,6 +100,7 @@ async fn blackhat_proxy_protocol_massive_garbage_rejected_quickly() {
1,
1,
1,
10,
1,
false,
stats.clone(),
@@ -146,6 +147,7 @@ async fn edge_tls_body_immediate_eof_triggers_masking_and_bad_connect() {
1,
1,
1,
10,
1,
false,
stats.clone(),
@@ -195,6 +197,7 @@ async fn security_classic_mode_disabled_masks_valid_length_payload() {
1,
1,
1,
10,
1,
false,
stats.clone(),


@@ -1,8 +1,10 @@
use super::*;
use crate::config::{UpstreamConfig, UpstreamType};
use crate::crypto::AesCtr;
use crate::crypto::sha256_hmac;
use crate::protocol::constants::ProtoTag;
use crate::crypto::{AesCtr, sha256, sha256_hmac};
use crate::protocol::constants::{
DC_IDX_POS, HANDSHAKE_LEN, IV_LEN, PREKEY_LEN, PROTO_TAG_POS, ProtoTag, SKIP_LEN,
TLS_RECORD_CHANGE_CIPHER,
};
use crate::protocol::tls;
use crate::proxy::handshake::HandshakeSuccess;
use crate::stream::{CryptoReader, CryptoWriter};
@@ -339,6 +341,7 @@ async fn relay_task_abort_releases_user_gate_and_ip_reservation() {
1,
1,
1,
10,
1,
false,
stats.clone(),
@@ -452,6 +455,7 @@ async fn relay_cutover_releases_user_gate_and_ip_reservation() {
1,
1,
1,
10,
1,
false,
stats.clone(),
@@ -575,6 +579,7 @@ async fn integration_route_cutover_and_quota_overlap_fails_closed_and_releases_s
1,
1,
1,
10,
1,
false,
stats.clone(),
@@ -744,6 +749,7 @@ async fn proxy_protocol_header_is_rejected_when_trust_list_is_empty() {
1,
1,
1,
10,
1,
false,
stats.clone(),
@@ -820,6 +826,7 @@ async fn proxy_protocol_header_from_untrusted_peer_range_is_rejected_under_load(
1,
1,
1,
10,
1,
false,
stats.clone(),
@@ -979,6 +986,7 @@ async fn short_tls_probe_is_masked_through_client_pipeline() {
1,
1,
1,
10,
1,
false,
stats.clone(),
@@ -1066,6 +1074,7 @@ async fn tls12_record_probe_is_masked_through_client_pipeline() {
1,
1,
1,
10,
1,
false,
stats.clone(),
@@ -1151,6 +1160,7 @@ async fn handle_client_stream_increments_connects_all_exactly_once() {
1,
1,
1,
10,
1,
false,
stats.clone(),
@@ -1243,6 +1253,7 @@ async fn running_client_handler_increments_connects_all_exactly_once() {
1,
1,
1,
10,
1,
false,
stats.clone(),
@@ -1310,6 +1321,163 @@ async fn running_client_handler_increments_connects_all_exactly_once() {
);
}
#[tokio::test(start_paused = true)]
async fn idle_pooled_connection_closes_cleanly_in_generic_stream_path() {
let mut cfg = ProxyConfig::default();
cfg.general.beobachten = false;
cfg.timeouts.client_first_byte_idle_secs = 1;
let config = Arc::new(cfg);
let stats = Arc::new(Stats::new());
let upstream_manager = Arc::new(UpstreamManager::new(
vec![UpstreamConfig {
upstream_type: UpstreamType::Direct {
interface: None,
bind_addresses: None,
},
weight: 1,
enabled: true,
scopes: String::new(),
selected_scope: String::new(),
}],
1,
1,
1,
10,
1,
false,
stats.clone(),
));
let replay_checker = Arc::new(ReplayChecker::new(128, Duration::from_secs(60)));
let buffer_pool = Arc::new(BufferPool::new());
let rng = Arc::new(SecureRandom::new());
let route_runtime = Arc::new(RouteRuntimeController::new(RelayRouteMode::Direct));
let ip_tracker = Arc::new(UserIpTracker::new());
let beobachten = Arc::new(BeobachtenStore::new());
let (server_side, _client_side) = duplex(4096);
let peer: SocketAddr = "198.51.100.169:55200".parse().unwrap();
let handler = tokio::spawn(handle_client_stream(
server_side,
peer,
config,
stats.clone(),
upstream_manager,
replay_checker,
buffer_pool,
rng,
None,
route_runtime,
None,
ip_tracker,
beobachten,
false,
));
// Let the spawned handler arm the idle-phase timeout before advancing paused time.
tokio::task::yield_now().await;
tokio::time::advance(Duration::from_secs(2)).await;
tokio::task::yield_now().await;
let result = tokio::time::timeout(Duration::from_secs(1), handler)
.await
.unwrap()
.unwrap();
assert!(result.is_ok());
assert_eq!(stats.get_handshake_timeouts(), 0);
assert_eq!(stats.get_connects_bad(), 0);
}
#[tokio::test(start_paused = true)]
async fn idle_pooled_connection_closes_cleanly_in_client_handler_path() {
let front_listener = TcpListener::bind("127.0.0.1:0").await.unwrap();
let front_addr = front_listener.local_addr().unwrap();
let mut cfg = ProxyConfig::default();
cfg.general.beobachten = false;
cfg.timeouts.client_first_byte_idle_secs = 1;
let config = Arc::new(cfg);
let stats = Arc::new(Stats::new());
let upstream_manager = Arc::new(UpstreamManager::new(
vec![UpstreamConfig {
upstream_type: UpstreamType::Direct {
interface: None,
bind_addresses: None,
},
weight: 1,
enabled: true,
scopes: String::new(),
selected_scope: String::new(),
}],
1,
1,
1,
10,
1,
false,
stats.clone(),
));
let replay_checker = Arc::new(ReplayChecker::new(128, Duration::from_secs(60)));
let buffer_pool = Arc::new(BufferPool::new());
let rng = Arc::new(SecureRandom::new());
let route_runtime = Arc::new(RouteRuntimeController::new(RelayRouteMode::Direct));
let ip_tracker = Arc::new(UserIpTracker::new());
let beobachten = Arc::new(BeobachtenStore::new());
let server_task = {
let config = config.clone();
let stats = stats.clone();
let upstream_manager = upstream_manager.clone();
let replay_checker = replay_checker.clone();
let buffer_pool = buffer_pool.clone();
let rng = rng.clone();
let route_runtime = route_runtime.clone();
let ip_tracker = ip_tracker.clone();
let beobachten = beobachten.clone();
tokio::spawn(async move {
let (stream, peer) = front_listener.accept().await.unwrap();
let real_peer_report = Arc::new(std::sync::Mutex::new(None));
ClientHandler::new(
stream,
peer,
config,
stats,
upstream_manager,
replay_checker,
buffer_pool,
rng,
None,
route_runtime,
None,
ip_tracker,
beobachten,
false,
real_peer_report,
)
.run()
.await
})
};
let _client = TcpStream::connect(front_addr).await.unwrap();
// Let the accepted connection reach the idle wait before advancing paused time.
tokio::task::yield_now().await;
tokio::time::advance(Duration::from_secs(2)).await;
tokio::task::yield_now().await;
let result = tokio::time::timeout(Duration::from_secs(1), server_task)
.await
.unwrap()
.unwrap();
assert!(result.is_ok());
assert_eq!(stats.get_handshake_timeouts(), 0);
assert_eq!(stats.get_connects_bad(), 0);
}
#[tokio::test]
async fn partial_tls_header_stall_triggers_handshake_timeout() {
let mut cfg = ProxyConfig::default();
@@ -1332,6 +1500,7 @@ async fn partial_tls_header_stall_triggers_handshake_timeout() {
1,
1,
1,
10,
1,
false,
stats.clone(),
@@ -1477,6 +1646,148 @@ fn wrap_tls_application_data(payload: &[u8]) -> Vec<u8> {
record
}
fn wrap_tls_ccs_record() -> Vec<u8> {
let mut record = Vec::with_capacity(6);
record.push(TLS_RECORD_CHANGE_CIPHER);
record.extend_from_slice(&[0x03, 0x03]);
record.extend_from_slice(&1u16.to_be_bytes());
record.push(0x01);
record
}
fn make_valid_mtproto_handshake(
secret_hex: &str,
proto_tag: ProtoTag,
dc_idx: i16,
) -> [u8; HANDSHAKE_LEN] {
let secret = hex::decode(secret_hex).expect("secret hex must decode for mtproto test helper");
let mut handshake = [0x5Au8; HANDSHAKE_LEN];
for (idx, b) in handshake[SKIP_LEN..SKIP_LEN + PREKEY_LEN + IV_LEN]
.iter_mut()
.enumerate()
{
*b = (idx as u8).wrapping_add(1);
}
let dec_prekey = &handshake[SKIP_LEN..SKIP_LEN + PREKEY_LEN];
let dec_iv_bytes = &handshake[SKIP_LEN + PREKEY_LEN..SKIP_LEN + PREKEY_LEN + IV_LEN];
let mut dec_key_input = Vec::with_capacity(PREKEY_LEN + secret.len());
dec_key_input.extend_from_slice(dec_prekey);
dec_key_input.extend_from_slice(&secret);
let dec_key = sha256(&dec_key_input);
let mut dec_iv_arr = [0u8; IV_LEN];
dec_iv_arr.copy_from_slice(dec_iv_bytes);
let dec_iv = u128::from_be_bytes(dec_iv_arr);
let mut stream = AesCtr::new(&dec_key, dec_iv);
let keystream = stream.encrypt(&[0u8; HANDSHAKE_LEN]);
let mut target_plain = [0u8; HANDSHAKE_LEN];
target_plain[PROTO_TAG_POS..PROTO_TAG_POS + 4].copy_from_slice(&proto_tag.to_bytes());
target_plain[DC_IDX_POS..DC_IDX_POS + 2].copy_from_slice(&dc_idx.to_le_bytes());
for idx in PROTO_TAG_POS..HANDSHAKE_LEN {
handshake[idx] = target_plain[idx] ^ keystream[idx];
}
handshake
}
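make_valid_mtproto_handshake leans on a stream-cipher identity: encrypting an all-zero buffer with AES-CTR exposes the raw keystream, and XORing that keystream with the desired plaintext produces ciphertext that decrypts to exactly that plaintext. A toy sketch of the same construction, with a stand-in xorshift keystream instead of real AES (so it is illustrative only, not the proxy's cipher):

```rust
// Stand-in keystream generator (xorshift64), used here only to
// illustrate the CTR-style XOR construction; real code uses AES-CTR.
fn keystream(seed: u64, len: usize) -> Vec<u8> {
    let mut state = seed;
    (0..len)
        .map(|_| {
            state ^= state << 13;
            state ^= state >> 7;
            state ^= state << 17;
            (state & 0xff) as u8
        })
        .collect()
}

// XOR data against the keystream; applying it twice round-trips.
fn xor_with_keystream(data: &[u8], seed: u64) -> Vec<u8> {
    keystream(seed, data.len())
        .iter()
        .zip(data)
        .map(|(k, d)| k ^ d)
        .collect()
}

fn main() {
    // "Encrypting" zeros exposes the keystream, as in the helper above...
    assert_eq!(xor_with_keystream(&[0u8; 8], 42), keystream(42, 8));
    // ...so keystream XOR plaintext yields bytes that decrypt to exactly
    // the plaintext we chose, which is how the helper splices the proto
    // tag and dc index into the handshake.
    let plain = b"mtproto!";
    let cipher = xor_with_keystream(plain, 42);
    assert_eq!(xor_with_keystream(&cipher, 42), plain.to_vec());
}
```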
#[tokio::test]
async fn fragmented_tls_mtproto_with_interleaved_ccs_is_accepted() {
let secret_hex = "55555555555555555555555555555555";
let secret = [0x55u8; 16];
let client_hello = make_valid_tls_client_hello(&secret, 0);
let mtproto_handshake = make_valid_mtproto_handshake(secret_hex, ProtoTag::Secure, 2);
let mut cfg = ProxyConfig::default();
cfg.general.beobachten = false;
cfg.access.ignore_time_skew = true;
cfg.access
.users
.insert("user".to_string(), secret_hex.to_string());
let config = Arc::new(cfg);
let replay_checker = Arc::new(ReplayChecker::new(128, Duration::from_secs(60)));
let rng = SecureRandom::new();
let (server_side, mut client_side) = duplex(131072);
let peer: SocketAddr = "198.51.100.85:55007".parse().unwrap();
let (read_half, write_half) = tokio::io::split(server_side);
let (mut tls_reader, tls_writer, tls_user) = match handle_tls_handshake(
&client_hello,
read_half,
write_half,
peer,
&config,
&replay_checker,
&rng,
None,
)
.await
{
HandshakeResult::Success(result) => result,
_ => panic!("expected successful TLS handshake"),
};
let mut tls_response_head = [0u8; 5];
client_side
.read_exact(&mut tls_response_head)
.await
.unwrap();
assert_eq!(tls_response_head[0], 0x16);
let tls_response_len =
u16::from_be_bytes([tls_response_head[3], tls_response_head[4]]) as usize;
let mut tls_response_body = vec![0u8; tls_response_len];
client_side
.read_exact(&mut tls_response_body)
.await
.unwrap();
client_side
.write_all(&wrap_tls_application_data(&mtproto_handshake[..13]))
.await
.unwrap();
client_side.write_all(&wrap_tls_ccs_record()).await.unwrap();
client_side
.write_all(&wrap_tls_application_data(&mtproto_handshake[13..37]))
.await
.unwrap();
client_side.write_all(&wrap_tls_ccs_record()).await.unwrap();
client_side
.write_all(&wrap_tls_application_data(&mtproto_handshake[37..]))
.await
.unwrap();
let mtproto_data = tls_reader.read_exact(HANDSHAKE_LEN).await.unwrap();
assert_eq!(&mtproto_data[..], &mtproto_handshake);
let mtproto_handshake: [u8; HANDSHAKE_LEN] = mtproto_data[..].try_into().unwrap();
let (_, _, success) = match handle_mtproto_handshake(
&mtproto_handshake,
tls_reader,
tls_writer,
peer,
&config,
&replay_checker,
true,
Some(tls_user.as_str()),
)
.await
{
HandshakeResult::Success(result) => result,
_ => panic!("expected successful MTProto handshake"),
};
assert_eq!(success.user, "user");
assert_eq!(success.proto_tag, ProtoTag::Secure);
assert_eq!(success.dc_idx, 2);
}
#[tokio::test]
async fn valid_tls_path_does_not_fall_back_to_mask_backend() {
let listener = TcpListener::bind("127.0.0.1:0").await.unwrap();
@@ -1514,6 +1825,7 @@ async fn valid_tls_path_does_not_fall_back_to_mask_backend() {
1,
1,
1,
10,
1,
false,
stats.clone(),
@@ -1622,6 +1934,7 @@ async fn valid_tls_with_invalid_mtproto_falls_back_to_mask_backend() {
1,
1,
1,
10,
1,
false,
stats.clone(),
@@ -1728,6 +2041,7 @@ async fn client_handler_tls_bad_mtproto_is_forwarded_to_mask_backend() {
1,
1,
1,
10,
1,
false,
stats.clone(),
@@ -1849,6 +2163,7 @@ async fn alpn_mismatch_tls_probe_is_masked_through_client_pipeline() {
1,
1,
1,
10,
1,
false,
stats.clone(),
@@ -1941,6 +2256,7 @@ async fn invalid_hmac_tls_probe_is_masked_through_client_pipeline() {
1,
1,
1,
10,
1,
false,
stats.clone(),
@@ -2039,6 +2355,7 @@ async fn burst_invalid_tls_probes_are_masked_verbatim() {
1,
1,
1,
10,
1,
false,
stats.clone(),
@@ -2217,14 +2534,16 @@ async fn tcp_limit_rejection_does_not_reserve_ip_or_trigger_rollback() {
}
#[tokio::test]
async fn zero_tcp_limit_rejects_without_ip_or_counter_side_effects() {
async fn zero_tcp_limit_uses_global_fallback_and_rejects_without_side_effects() {
let mut config = ProxyConfig::default();
config
.access
.user_max_tcp_conns
.insert("user".to_string(), 0);
config.access.user_max_tcp_conns_global_each = 1;
let stats = Stats::new();
stats.increment_user_curr_connects("user");
let ip_tracker = UserIpTracker::new();
let peer_addr: SocketAddr = "198.51.100.211:50001".parse().unwrap();
@@ -2241,10 +2560,75 @@
result,
Err(ProxyError::ConnectionLimitExceeded { user }) if user == "user"
));
assert_eq!(
stats.get_user_curr_connects("user"),
1,
"TCP-limit rejection must keep pre-existing in-flight connection count unchanged"
);
assert_eq!(ip_tracker.get_active_ip_count("user").await, 0);
}
#[tokio::test]
async fn zero_tcp_limit_with_disabled_global_fallback_is_unlimited() {
let mut config = ProxyConfig::default();
config
.access
.user_max_tcp_conns
.insert("user".to_string(), 0);
config.access.user_max_tcp_conns_global_each = 0;
let stats = Stats::new();
let ip_tracker = UserIpTracker::new();
let peer_addr: SocketAddr = "198.51.100.212:50002".parse().unwrap();
let result = RunningClientHandler::check_user_limits_static(
"user",
&config,
&stats,
peer_addr,
&ip_tracker,
)
.await;
assert!(
result.is_ok(),
"per-user zero with global fallback disabled must not enforce a TCP limit"
);
assert_eq!(stats.get_user_curr_connects("user"), 0);
assert_eq!(ip_tracker.get_active_ip_count("user").await, 0);
}
#[tokio::test]
async fn global_tcp_fallback_applies_when_per_user_limit_is_missing() {
let mut config = ProxyConfig::default();
config.access.user_max_tcp_conns_global_each = 1;
let stats = Stats::new();
stats.increment_user_curr_connects("user");
let ip_tracker = UserIpTracker::new();
let peer_addr: SocketAddr = "198.51.100.213:50003".parse().unwrap();
let result = RunningClientHandler::check_user_limits_static(
"user",
&config,
&stats,
peer_addr,
&ip_tracker,
)
.await;
assert!(matches!(
result,
Err(ProxyError::ConnectionLimitExceeded { user }) if user == "user"
));
assert_eq!(
stats.get_user_curr_connects("user"),
1,
"Global fallback TCP-limit rejection must keep pre-existing counter unchanged"
);
assert_eq!(ip_tracker.get_active_ip_count("user").await, 0);
}
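Taken together, the three limit tests above pin down a fallback rule: a per-user TCP limit of 0 (or a missing per-user entry) defers to `user_max_tcp_conns_global_each`, and a global value of 0 disables the limit entirely. A hedged sketch of that resolution order, inferred from the assertions rather than copied from the proxy's implementation:

```rust
use std::collections::HashMap;

// Resolve the effective TCP connection limit for a user.
// None means no limit applies (unlimited). This is a sketch of the
// behavior the tests above assert, not the actual proxy code.
fn effective_tcp_limit(
    per_user: &HashMap<String, u32>,
    global_each: u32,
    user: &str,
) -> Option<u32> {
    match per_user.get(user).copied() {
        // An explicit positive per-user limit wins outright.
        Some(n) if n > 0 => Some(n),
        // Per-user 0 or no entry: fall back to the global per-user cap;
        // a global cap of 0 means the fallback is disabled (unlimited).
        _ => (global_each > 0).then_some(global_each),
    }
}

fn main() {
    let mut per_user = HashMap::new();
    per_user.insert("user".to_string(), 0u32);
    // per-user 0 + global 1: limit 1, so one in-flight connection rejects the next
    assert_eq!(effective_tcp_limit(&per_user, 1, "user"), Some(1));
    // per-user 0 + global 0: unlimited
    assert_eq!(effective_tcp_limit(&per_user, 0, "user"), None);
    // missing per-user entry + global 1: global fallback applies
    assert_eq!(effective_tcp_limit(&HashMap::new(), 1, "user"), Some(1));
}
```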
#[tokio::test]
async fn check_user_limits_static_success_does_not_leak_counter_or_ip_reservation() {
let user = "check-helper-user";
@@ -2876,6 +3260,7 @@ async fn relay_connect_error_releases_user_and_ip_before_return() {
1,
1,
1,
10,
1,
false,
stats.clone(),
@@ -3436,6 +3821,7 @@ async fn untrusted_proxy_header_source_is_rejected() {
1,
1,
1,
10,
1,
false,
stats.clone(),
@@ -3505,6 +3891,7 @@ async fn empty_proxy_trusted_cidrs_rejects_proxy_header_by_default() {
1,
1,
1,
10,
1,
false,
stats.clone(),
@@ -3601,6 +3988,7 @@ async fn oversized_tls_record_is_masked_in_generic_stream_pipeline() {
1,
1,
1,
10,
1,
false,
stats.clone(),
@@ -3703,6 +4091,7 @@ async fn oversized_tls_record_is_masked_in_client_handler_pipeline() {
1,
1,
1,
10,
1,
false,
stats.clone(),
@@ -3819,6 +4208,7 @@ async fn tls_record_len_min_minus_1_is_rejected_in_generic_stream_pipeline() {
1,
1,
1,
10,
1,
false,
stats.clone(),
@@ -3921,6 +4311,7 @@ async fn tls_record_len_min_minus_1_is_rejected_in_client_handler_pipeline() {
1,
1,
1,
10,
1,
false,
stats.clone(),
@@ -4026,6 +4417,7 @@ async fn tls_record_len_16384_is_accepted_in_generic_stream_pipeline() {
1,
1,
1,
10,
1,
false,
stats.clone(),
@@ -4126,6 +4518,7 @@ async fn tls_record_len_16384_is_accepted_in_client_handler_pipeline() {
1,
1,
1,
10,
1,
false,
stats.clone(),

View File

@@ -33,6 +33,7 @@ fn make_test_upstream_manager(stats: Arc<Stats>) -> Arc<UpstreamManager> {
1,
1,
1,
10,
1,
false,
stats,


@@ -35,6 +35,7 @@ fn make_test_upstream_manager(stats: Arc<Stats>) -> Arc<UpstreamManager> {
1,
1,
1,
10,
1,
false,
stats,


@@ -36,6 +36,7 @@ fn make_test_upstream_manager(stats: Arc<Stats>) -> Arc<UpstreamManager> {
1,
1,
1,
10,
1,
false,
stats,


@@ -50,6 +50,7 @@ fn build_harness(secret_hex: &str, mask_port: u16) -> PipelineHarness {
1,
1,
1,
10,
1,
false,
stats.clone(),


@@ -1302,6 +1302,7 @@ async fn direct_relay_abort_midflight_releases_route_gauge() {
1,
1,
1,
10,
1,
false,
stats.clone(),
@@ -1408,6 +1409,7 @@ async fn direct_relay_cutover_midflight_releases_route_gauge() {
1,
1,
1,
10,
1,
false,
stats.clone(),
@@ -1529,6 +1531,7 @@ async fn direct_relay_cutover_storm_multi_session_keeps_generic_errors_and_relea
1,
1,
1,
10,
1,
false,
stats.clone(),
@@ -1761,6 +1764,7 @@ async fn negative_direct_relay_dc_connection_refused_fails_fast() {
1,
100,
5000,
10,
3,
false,
stats.clone(),
@@ -1851,6 +1855,7 @@ async fn adversarial_direct_relay_cutover_integrity() {
1,
100,
5000,
10,
3,
false,
stats.clone(),


@@ -7,12 +7,6 @@ use std::time::{Duration, Instant};
// --- Helpers ---
fn auth_probe_test_guard() -> std::sync::MutexGuard<'static, ()> {
auth_probe_test_lock()
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner())
}
fn test_config_with_secret_hex(secret_hex: &str) -> ProxyConfig {
let mut cfg = ProxyConfig::default();
cfg.access.users.clear();
@@ -147,8 +141,8 @@ fn make_valid_tls_client_hello_with_alpn(
#[tokio::test]
async fn tls_minimum_viable_length_boundary() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let secret = [0x11u8; 16];
let config = test_config_with_secret_hex("11111111111111111111111111111111");
@@ -200,8 +194,8 @@ async fn tls_minimum_viable_length_boundary() {
#[tokio::test]
async fn mtproto_extreme_dc_index_serialization() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let secret_hex = "22222222222222222222222222222222";
let config = test_config_with_secret_hex(secret_hex);
@@ -241,8 +235,8 @@ async fn mtproto_extreme_dc_index_serialization() {
#[tokio::test]
async fn alpn_strict_case_and_padding_rejection() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let secret = [0x33u8; 16];
let mut config = test_config_with_secret_hex("33333333333333333333333333333333");
@@ -297,8 +291,8 @@ fn ipv4_mapped_ipv6_bucketing_anomaly() {
#[tokio::test]
async fn mtproto_invalid_ciphertext_does_not_poison_replay_cache() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let secret_hex = "55555555555555555555555555555555";
let config = test_config_with_secret_hex(secret_hex);
@@ -341,8 +335,8 @@ async fn mtproto_invalid_ciphertext_does_not_poison_replay_cache() {
#[tokio::test]
async fn tls_invalid_session_does_not_poison_replay_cache() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let secret = [0x66u8; 16];
let config = test_config_with_secret_hex("66666666666666666666666666666666");
@@ -387,8 +381,8 @@ async fn tls_invalid_session_does_not_poison_replay_cache() {
#[tokio::test]
async fn server_hello_delay_timing_neutrality_on_hmac_failure() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let secret = [0x77u8; 16];
let mut config = test_config_with_secret_hex("77777777777777777777777777777777");
@@ -425,8 +419,8 @@ async fn server_hello_delay_timing_neutrality_on_hmac_failure() {
#[tokio::test]
async fn server_hello_delay_inversion_resilience() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let secret = [0x88u8; 16];
let mut config = test_config_with_secret_hex("88888888888888888888888888888888");
@@ -462,10 +456,9 @@ async fn server_hello_delay_inversion_resilience() {
#[tokio::test]
async fn mixed_valid_and_invalid_user_secrets_configuration() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let _warn_guard = warned_secrets_test_lock().lock().unwrap();
clear_warned_secrets_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
clear_warned_secrets_for_testing_in_shared(shared.as_ref());
let mut config = ProxyConfig::default();
config.access.ignore_time_skew = true;
@@ -513,8 +506,8 @@ async fn mixed_valid_and_invalid_user_secrets_configuration() {
#[tokio::test]
async fn tls_emulation_fallback_when_cache_missing() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let secret = [0xAAu8; 16];
let mut config = test_config_with_secret_hex("aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa");
@@ -547,8 +540,8 @@ async fn tls_emulation_fallback_when_cache_missing() {
#[tokio::test]
async fn classic_mode_over_tls_transport_protocol_confusion() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let secret_hex = "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb";
let mut config = test_config_with_secret_hex(secret_hex);
@@ -608,8 +601,8 @@ fn generate_tg_nonce_never_emits_reserved_bytes() {
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
async fn dashmap_concurrent_saturation_stress() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let ip_a: IpAddr = "192.0.2.13".parse().unwrap();
let ip_b: IpAddr = "198.51.100.13".parse().unwrap();
@@ -617,9 +610,10 @@ async fn dashmap_concurrent_saturation_stress() {
for i in 0..100 {
let target_ip = if i % 2 == 0 { ip_a } else { ip_b };
let shared = shared.clone();
tasks.push(tokio::spawn(async move {
for _ in 0..50 {
auth_probe_record_failure(target_ip, Instant::now());
auth_probe_record_failure_in(shared.as_ref(), target_ip, Instant::now());
}
}));
}
@@ -630,11 +624,11 @@
}
assert!(
auth_probe_is_throttled_for_testing(ip_a),
auth_probe_is_throttled_for_testing_in_shared(shared.as_ref(), ip_a),
"IP A must be throttled after concurrent stress"
);
assert!(
auth_probe_is_throttled_for_testing(ip_b),
auth_probe_is_throttled_for_testing_in_shared(shared.as_ref(), ip_b),
"IP B must be throttled after concurrent stress"
);
}
@@ -661,15 +655,15 @@ fn prototag_invalid_bytes_fail_closed() {
#[test]
fn auth_probe_eviction_hash_collision_stress() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let state = auth_probe_state_map();
let state = auth_probe_state_for_testing_in_shared(shared.as_ref());
let now = Instant::now();
for i in 0..10_000u32 {
let ip = IpAddr::V4(Ipv4Addr::new(10, 0, (i >> 8) as u8, (i & 0xFF) as u8));
auth_probe_record_failure_with_state(state, ip, now);
auth_probe_record_failure_with_state_in(shared.as_ref(), state, ip, now);
}
assert!(


@@ -44,12 +44,6 @@ fn make_valid_mtproto_handshake(
handshake
}
fn auth_probe_test_guard() -> std::sync::MutexGuard<'static, ()> {
auth_probe_test_lock()
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner())
}
fn test_config_with_secret_hex(secret_hex: &str) -> ProxyConfig {
let mut cfg = ProxyConfig::default();
cfg.access.users.clear();
@@ -67,8 +61,8 @@ fn test_config_with_secret_hex(secret_hex: &str) -> ProxyConfig {
#[tokio::test]
async fn mtproto_handshake_bit_flip_anywhere_rejected() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let secret_hex = "11223344556677889900aabbccddeeff";
let base = make_valid_mtproto_handshake(secret_hex, ProtoTag::Secure, 2);
@@ -181,26 +175,26 @@ async fn mtproto_handshake_timing_neutrality_mocked() {
#[tokio::test]
async fn auth_probe_throttle_saturation_stress() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let now = Instant::now();
// Record enough failures for one IP to trigger backoff
let target_ip = IpAddr::V4(Ipv4Addr::new(1, 1, 1, 1));
for _ in 0..AUTH_PROBE_BACKOFF_START_FAILS {
auth_probe_record_failure(target_ip, now);
auth_probe_record_failure_in(shared.as_ref(), target_ip, now);
}
assert!(auth_probe_is_throttled(target_ip, now));
assert!(auth_probe_is_throttled_in(shared.as_ref(), target_ip, now));
// Stress test with many unique IPs
for i in 0..500u32 {
let ip = IpAddr::V4(Ipv4Addr::new(203, 0, 113, (i % 256) as u8));
auth_probe_record_failure(ip, now);
auth_probe_record_failure_in(shared.as_ref(), ip, now);
}
let tracked = AUTH_PROBE_STATE.get().map(|state| state.len()).unwrap_or(0);
let tracked = auth_probe_state_for_testing_in_shared(shared.as_ref()).len();
assert!(
tracked <= AUTH_PROBE_TRACK_MAX_ENTRIES,
"auth probe state grew past hard cap: {tracked} > {AUTH_PROBE_TRACK_MAX_ENTRIES}"
@@ -209,8 +203,8 @@ async fn auth_probe_throttle_saturation_stress() {
#[tokio::test]
async fn mtproto_handshake_abridged_prefix_rejected() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let mut handshake = [0x5Au8; HANDSHAKE_LEN];
handshake[0] = 0xef; // Abridged prefix
@@ -235,8 +229,8 @@ async fn mtproto_handshake_abridged_prefix_rejected() {
#[tokio::test]
async fn mtproto_handshake_preferred_user_mismatch_continues() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let secret1_hex = "11111111111111111111111111111111";
let secret2_hex = "22222222222222222222222222222222";
@@ -278,8 +272,8 @@ async fn mtproto_handshake_preferred_user_mismatch_continues() {
#[tokio::test]
async fn mtproto_handshake_concurrent_flood_stability() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let secret_hex = "00112233445566778899aabbccddeeff";
let base = make_valid_mtproto_handshake(secret_hex, ProtoTag::Secure, 1);
@@ -320,8 +314,8 @@ async fn mtproto_handshake_concurrent_flood_stability() {
#[tokio::test]
async fn mtproto_replay_is_rejected_across_distinct_peers() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let secret_hex = "0123456789abcdeffedcba9876543210";
let handshake = make_valid_mtproto_handshake(secret_hex, ProtoTag::Secure, 2);
@@ -360,8 +354,8 @@ async fn mtproto_replay_is_rejected_across_distinct_peers() {
#[tokio::test]
async fn mtproto_blackhat_mutation_corpus_never_panics_and_stays_fail_closed() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let secret_hex = "89abcdef012345670123456789abcdef";
let base = make_valid_mtproto_handshake(secret_hex, ProtoTag::Secure, 2);
@@ -405,27 +399,27 @@ async fn mtproto_blackhat_mutation_corpus_never_panics_and_stays_fail_closed() {
#[tokio::test]
async fn auth_probe_success_clears_throttled_peer_state() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let target_ip = IpAddr::V4(Ipv4Addr::new(203, 0, 113, 90));
let now = Instant::now();
for _ in 0..AUTH_PROBE_BACKOFF_START_FAILS {
auth_probe_record_failure(target_ip, now);
auth_probe_record_failure_in(shared.as_ref(), target_ip, now);
}
assert!(auth_probe_is_throttled(target_ip, now));
assert!(auth_probe_is_throttled_in(shared.as_ref(), target_ip, now));
auth_probe_record_success(target_ip);
auth_probe_record_success_in(shared.as_ref(), target_ip);
assert!(
!auth_probe_is_throttled(target_ip, now + Duration::from_millis(1)),
!auth_probe_is_throttled_in(shared.as_ref(), target_ip, now + Duration::from_millis(1)),
"successful auth must clear per-peer throttle state"
);
}
#[tokio::test]
async fn mtproto_invalid_storm_over_cap_keeps_probe_map_hard_bounded() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let secret_hex = "00112233445566778899aabbccddeeff";
let mut invalid = make_valid_mtproto_handshake(secret_hex, ProtoTag::Secure, 2);
@@ -458,7 +452,7 @@ async fn mtproto_invalid_storm_over_cap_keeps_probe_map_hard_bounded() {
assert!(matches!(res, HandshakeResult::BadClient { .. }));
}
let tracked = AUTH_PROBE_STATE.get().map(|state| state.len()).unwrap_or(0);
let tracked = auth_probe_state_for_testing_in_shared(shared.as_ref()).len();
assert!(
tracked <= AUTH_PROBE_TRACK_MAX_ENTRIES,
"probe map must remain bounded under invalid storm: {tracked}"
@@ -467,8 +461,8 @@ async fn mtproto_invalid_storm_over_cap_keeps_probe_map_hard_bounded() {
#[tokio::test]
async fn mtproto_property_style_multi_bit_mutations_fail_closed_or_auth_only() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let secret_hex = "f0e1d2c3b4a5968778695a4b3c2d1e0f";
let base = make_valid_mtproto_handshake(secret_hex, ProtoTag::Secure, 2);
@@ -520,8 +514,8 @@ async fn mtproto_property_style_multi_bit_mutations_fail_closed_or_auth_only() {
#[tokio::test]
#[ignore = "heavy soak; run manually"]
async fn mtproto_blackhat_20k_mutation_soak_never_panics() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let secret_hex = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
let base = make_valid_mtproto_handshake(secret_hex, ProtoTag::Secure, 2);


@@ -3,15 +3,9 @@ use std::collections::HashSet;
use std::net::{IpAddr, Ipv4Addr};
use std::time::{Duration, Instant};
fn auth_probe_test_guard() -> std::sync::MutexGuard<'static, ()> {
auth_probe_test_lock()
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner())
}
#[test]
fn adversarial_large_state_offsets_escape_first_scan_window() {
let _guard = auth_probe_test_guard();
let shared = ProxySharedState::new();
let base = Instant::now();
let state_len = 65_536usize;
let scan_limit = 1_024usize;
@@ -25,7 +19,8 @@ fn adversarial_large_state_offsets_escape_first_scan_window() {
((i.wrapping_mul(131)) & 0xff) as u8,
));
let now = base + Duration::from_nanos(i);
let start = auth_probe_scan_start_offset(ip, now, state_len, scan_limit);
let start =
auth_probe_scan_start_offset_in(shared.as_ref(), ip, now, state_len, scan_limit);
if start >= scan_limit {
saw_offset_outside_first_window = true;
break;
@@ -40,7 +35,7 @@ fn adversarial_large_state_offsets_escape_first_scan_window() {
#[test]
fn stress_large_state_offsets_cover_many_scan_windows() {
let _guard = auth_probe_test_guard();
let shared = ProxySharedState::new();
let base = Instant::now();
let state_len = 65_536usize;
let scan_limit = 1_024usize;
@@ -54,7 +49,8 @@ fn stress_large_state_offsets_cover_many_scan_windows() {
((i.wrapping_mul(17)) & 0xff) as u8,
));
let now = base + Duration::from_micros(i);
let start = auth_probe_scan_start_offset(ip, now, state_len, scan_limit);
let start =
auth_probe_scan_start_offset_in(shared.as_ref(), ip, now, state_len, scan_limit);
covered_windows.insert(start / scan_limit);
}
@ -68,7 +64,7 @@ fn stress_large_state_offsets_cover_many_scan_windows() {
#[test]
fn light_fuzz_offset_always_stays_inside_state_len() {
let _guard = auth_probe_test_guard();
let shared = ProxySharedState::new();
let mut seed = 0xC0FF_EE12_3456_789Au64;
let base = Instant::now();
@ -86,7 +82,8 @@ fn light_fuzz_offset_always_stays_inside_state_len() {
let state_len = ((seed >> 16) as usize % 200_000).saturating_add(1);
let scan_limit = ((seed >> 40) as usize % 2_048).saturating_add(1);
let now = base + Duration::from_nanos(seed & 0x0fff);
let start = auth_probe_scan_start_offset(ip, now, state_len, scan_limit);
let start =
auth_probe_scan_start_offset_in(shared.as_ref(), ip, now, state_len, scan_limit);
assert!(
start < state_len,


@ -2,68 +2,62 @@ use super::*;
use std::net::{IpAddr, Ipv4Addr};
use std::time::{Duration, Instant};
fn auth_probe_test_guard() -> std::sync::MutexGuard<'static, ()> {
auth_probe_test_lock()
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner())
}
#[test]
fn positive_preauth_throttle_activates_after_failure_threshold() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let ip = IpAddr::V4(Ipv4Addr::new(198, 51, 100, 20));
let now = Instant::now();
for _ in 0..AUTH_PROBE_BACKOFF_START_FAILS {
auth_probe_record_failure(ip, now);
auth_probe_record_failure_in(shared.as_ref(), ip, now);
}
assert!(
auth_probe_is_throttled(ip, now),
auth_probe_is_throttled_in(shared.as_ref(), ip, now),
"peer must be throttled once fail streak reaches threshold"
);
}
#[test]
fn negative_unrelated_peer_remains_unthrottled() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let attacker = IpAddr::V4(Ipv4Addr::new(203, 0, 113, 12));
let benign = IpAddr::V4(Ipv4Addr::new(203, 0, 113, 13));
let now = Instant::now();
for _ in 0..AUTH_PROBE_BACKOFF_START_FAILS {
auth_probe_record_failure(attacker, now);
auth_probe_record_failure_in(shared.as_ref(), attacker, now);
}
assert!(auth_probe_is_throttled(attacker, now));
assert!(auth_probe_is_throttled_in(shared.as_ref(), attacker, now));
assert!(
!auth_probe_is_throttled(benign, now),
!auth_probe_is_throttled_in(shared.as_ref(), benign, now),
"throttle state must stay scoped to normalized peer key"
);
}
#[test]
fn edge_expired_entry_is_pruned_and_no_longer_throttled() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let ip = IpAddr::V4(Ipv4Addr::new(192, 0, 2, 41));
let base = Instant::now();
for _ in 0..AUTH_PROBE_BACKOFF_START_FAILS {
auth_probe_record_failure(ip, base);
auth_probe_record_failure_in(shared.as_ref(), ip, base);
}
let expired_at = base + Duration::from_secs(AUTH_PROBE_TRACK_RETENTION_SECS + 1);
assert!(
!auth_probe_is_throttled(ip, expired_at),
!auth_probe_is_throttled_in(shared.as_ref(), ip, expired_at),
"expired entries must not keep throttling peers"
);
let state = auth_probe_state_map();
let state = auth_probe_state_for_testing_in_shared(shared.as_ref());
assert!(
state.get(&normalize_auth_probe_ip(ip)).is_none(),
"expired lookup should prune stale state"
@ -72,36 +66,40 @@ fn edge_expired_entry_is_pruned_and_no_longer_throttled() {
#[test]
fn adversarial_saturation_grace_requires_extra_failures_before_preauth_throttle() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let ip = IpAddr::V4(Ipv4Addr::new(198, 18, 0, 7));
let now = Instant::now();
for _ in 0..AUTH_PROBE_BACKOFF_START_FAILS {
auth_probe_record_failure(ip, now);
auth_probe_record_failure_in(shared.as_ref(), ip, now);
}
auth_probe_note_saturation(now);
auth_probe_note_saturation_in(shared.as_ref(), now);
assert!(
!auth_probe_should_apply_preauth_throttle(ip, now),
!auth_probe_should_apply_preauth_throttle_in(shared.as_ref(), ip, now),
"during global saturation, peer must receive configured grace window"
);
for _ in 0..AUTH_PROBE_SATURATION_GRACE_FAILS {
auth_probe_record_failure(ip, now + Duration::from_millis(1));
auth_probe_record_failure_in(shared.as_ref(), ip, now + Duration::from_millis(1));
}
assert!(
auth_probe_should_apply_preauth_throttle(ip, now + Duration::from_millis(1)),
auth_probe_should_apply_preauth_throttle_in(
shared.as_ref(),
ip,
now + Duration::from_millis(1)
),
"after grace failures are exhausted, preauth throttle must activate"
);
}
#[test]
fn integration_over_cap_insertion_keeps_probe_map_bounded() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let now = Instant::now();
for idx in 0..(AUTH_PROBE_TRACK_MAX_ENTRIES + 1024) {
@ -111,10 +109,10 @@ fn integration_over_cap_insertion_keeps_probe_map_bounded() {
((idx / 256) % 256) as u8,
(idx % 256) as u8,
));
auth_probe_record_failure(ip, now);
auth_probe_record_failure_in(shared.as_ref(), ip, now);
}
let tracked = auth_probe_state_map().len();
let tracked = auth_probe_state_for_testing_in_shared(shared.as_ref()).len();
assert!(
tracked <= AUTH_PROBE_TRACK_MAX_ENTRIES,
"probe map must remain hard bounded under insertion storm"
@ -123,8 +121,8 @@ fn integration_over_cap_insertion_keeps_probe_map_bounded() {
#[test]
fn light_fuzz_randomized_failures_preserve_cap_and_nonzero_streaks() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let mut seed = 0x4D53_5854_6F66_6175u64;
let now = Instant::now();
@ -140,10 +138,14 @@ fn light_fuzz_randomized_failures_preserve_cap_and_nonzero_streaks() {
(seed >> 8) as u8,
seed as u8,
));
auth_probe_record_failure(ip, now + Duration::from_millis((seed & 0x3f) as u64));
auth_probe_record_failure_in(
shared.as_ref(),
ip,
now + Duration::from_millis((seed & 0x3f) as u64),
);
}
let state = auth_probe_state_map();
let state = auth_probe_state_for_testing_in_shared(shared.as_ref());
assert!(state.len() <= AUTH_PROBE_TRACK_MAX_ENTRIES);
for entry in state.iter() {
assert!(entry.value().fail_streak > 0);
@ -152,13 +154,14 @@ fn light_fuzz_randomized_failures_preserve_cap_and_nonzero_streaks() {
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
async fn stress_parallel_failure_flood_keeps_state_hard_capped() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let start = Instant::now();
let mut tasks = Vec::new();
for worker in 0..8u8 {
let shared = shared.clone();
tasks.push(tokio::spawn(async move {
for i in 0..4096u32 {
let ip = IpAddr::V4(Ipv4Addr::new(
@ -167,7 +170,11 @@ async fn stress_parallel_failure_flood_keeps_state_hard_capped() {
((i >> 8) & 0xff) as u8,
(i & 0xff) as u8,
));
auth_probe_record_failure(ip, start + Duration::from_millis((i % 4) as u64));
auth_probe_record_failure_in(
shared.as_ref(),
ip,
start + Duration::from_millis((i % 4) as u64),
);
}
}));
}
@ -176,12 +183,12 @@ async fn stress_parallel_failure_flood_keeps_state_hard_capped() {
task.await.expect("stress worker must not panic");
}
let tracked = auth_probe_state_map().len();
let tracked = auth_probe_state_for_testing_in_shared(shared.as_ref()).len();
assert!(
tracked <= AUTH_PROBE_TRACK_MAX_ENTRIES,
"parallel failure flood must not exceed cap"
);
let probe = IpAddr::V4(Ipv4Addr::new(172, 3, 4, 5));
let _ = auth_probe_is_throttled(probe, start + Duration::from_millis(2));
let _ = auth_probe_is_throttled_in(shared.as_ref(), probe, start + Duration::from_millis(2));
}
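Across these hunks the tests migrate from process-global auth-probe state (serialized behind `auth_probe_test_lock()`) to a per-test `ProxySharedState`, with `_in`-suffixed variants that take the state explicitly. A minimal, self-contained sketch of that pattern, using hypothetical stand-in names rather than the crate's actual types:

```rust
use std::collections::HashMap;
use std::net::{IpAddr, Ipv4Addr};
use std::sync::{Arc, Mutex};

// Hypothetical stand-in for ProxySharedState: per-instance state that each
// test constructs fresh instead of mutating a process-wide static.
#[derive(Default)]
struct SharedState {
    fail_streaks: Mutex<HashMap<IpAddr, u32>>,
}

impl SharedState {
    fn new() -> Arc<Self> {
        Arc::new(Self::default())
    }
}

// The `_in` variants take the state as an argument rather than touching a
// global, so two tests can never observe each other's failures.
fn record_failure_in(state: &SharedState, ip: IpAddr) -> u32 {
    let mut map = state
        .fail_streaks
        .lock()
        .unwrap_or_else(|poisoned| poisoned.into_inner());
    let streak = map.entry(ip).or_insert(0);
    *streak += 1;
    *streak
}

fn fail_streak_in(state: &SharedState, ip: IpAddr) -> Option<u32> {
    state
        .fail_streaks
        .lock()
        .unwrap_or_else(|poisoned| poisoned.into_inner())
        .get(&ip)
        .copied()
}

fn main() {
    // Two independent states: no cross-test interference.
    let a = SharedState::new();
    let b = SharedState::new();
    let ip = IpAddr::V4(Ipv4Addr::new(198, 51, 100, 1));
    record_failure_in(a.as_ref(), ip);
    record_failure_in(a.as_ref(), ip);
    assert_eq!(fail_streak_in(a.as_ref(), ip), Some(2));
    assert_eq!(fail_streak_in(b.as_ref(), ip), None);
}
```

Because every test builds its own state, the old global mutex guard and the `clear_*_for_testing()` reset calls become unnecessary, which is exactly what the removed helper blocks in these diffs reflect.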


@ -2,20 +2,14 @@ use super::*;
use std::net::{IpAddr, Ipv4Addr};
use std::time::{Duration, Instant};
fn auth_probe_test_guard() -> std::sync::MutexGuard<'static, ()> {
auth_probe_test_lock()
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner())
}
#[test]
fn edge_zero_state_len_yields_zero_start_offset() {
let _guard = auth_probe_test_guard();
let shared = ProxySharedState::new();
let ip = IpAddr::V4(Ipv4Addr::new(198, 51, 100, 44));
let now = Instant::now();
assert_eq!(
auth_probe_scan_start_offset(ip, now, 0, 16),
auth_probe_scan_start_offset_in(shared.as_ref(), ip, now, 0, 16),
0,
"empty map must not produce non-zero scan offset"
);
@ -23,7 +17,7 @@ fn edge_zero_state_len_yields_zero_start_offset() {
#[test]
fn adversarial_large_state_must_allow_start_offset_outside_scan_budget_window() {
let _guard = auth_probe_test_guard();
let shared = ProxySharedState::new();
let base = Instant::now();
let scan_limit = 16usize;
let state_len = 65_536usize;
@ -37,7 +31,8 @@ fn adversarial_large_state_must_allow_start_offset_outside_scan_budget_window()
(i & 0xff) as u8,
));
let now = base + Duration::from_micros(i as u64);
let start = auth_probe_scan_start_offset(ip, now, state_len, scan_limit);
let start =
auth_probe_scan_start_offset_in(shared.as_ref(), ip, now, state_len, scan_limit);
assert!(
start < state_len,
"start offset must stay within state length; start={start}, len={state_len}"
@ -56,12 +51,12 @@ fn adversarial_large_state_must_allow_start_offset_outside_scan_budget_window()
#[test]
fn positive_state_smaller_than_scan_limit_caps_to_state_len() {
let _guard = auth_probe_test_guard();
let shared = ProxySharedState::new();
let ip = IpAddr::V4(Ipv4Addr::new(192, 0, 2, 17));
let now = Instant::now();
for state_len in 1..32usize {
let start = auth_probe_scan_start_offset(ip, now, state_len, 64);
let start = auth_probe_scan_start_offset_in(shared.as_ref(), ip, now, state_len, 64);
assert!(
start < state_len,
"start offset must never exceed state length when scan limit is larger"
@ -71,7 +66,7 @@ fn positive_state_smaller_than_scan_limit_caps_to_state_len() {
#[test]
fn light_fuzz_scan_offset_budget_never_exceeds_effective_window() {
let _guard = auth_probe_test_guard();
let shared = ProxySharedState::new();
let mut seed = 0x5A41_5356_4C32_3236u64;
let base = Instant::now();
@ -89,7 +84,8 @@ fn light_fuzz_scan_offset_budget_never_exceeds_effective_window() {
let state_len = ((seed >> 8) as usize % 131_072).saturating_add(1);
let scan_limit = ((seed >> 32) as usize % 512).saturating_add(1);
let now = base + Duration::from_nanos(seed & 0xffff);
let start = auth_probe_scan_start_offset(ip, now, state_len, scan_limit);
let start =
auth_probe_scan_start_offset_in(shared.as_ref(), ip, now, state_len, scan_limit);
assert!(
start < state_len,


@ -3,22 +3,16 @@ use std::collections::HashSet;
use std::net::{IpAddr, Ipv4Addr};
use std::time::{Duration, Instant};
fn auth_probe_test_guard() -> std::sync::MutexGuard<'static, ()> {
auth_probe_test_lock()
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner())
}
#[test]
fn positive_same_ip_moving_time_yields_diverse_scan_offsets() {
let _guard = auth_probe_test_guard();
let shared = ProxySharedState::new();
let ip = IpAddr::V4(Ipv4Addr::new(198, 51, 100, 77));
let base = Instant::now();
let mut uniq = HashSet::new();
for i in 0..512u64 {
let now = base + Duration::from_nanos(i);
let offset = auth_probe_scan_start_offset(ip, now, 65_536, 16);
let offset = auth_probe_scan_start_offset_in(shared.as_ref(), ip, now, 65_536, 16);
uniq.insert(offset);
}
@ -31,7 +25,7 @@ fn positive_same_ip_moving_time_yields_diverse_scan_offsets() {
#[test]
fn adversarial_many_ips_same_time_spreads_offsets_without_bias_collapse() {
let _guard = auth_probe_test_guard();
let shared = ProxySharedState::new();
let now = Instant::now();
let mut uniq = HashSet::new();
@ -42,7 +36,13 @@ fn adversarial_many_ips_same_time_spreads_offsets_without_bias_collapse() {
i as u8,
(255 - (i as u8)),
));
uniq.insert(auth_probe_scan_start_offset(ip, now, 65_536, 16));
uniq.insert(auth_probe_scan_start_offset_in(
shared.as_ref(),
ip,
now,
65_536,
16,
));
}
assert!(
@ -54,12 +54,13 @@ fn adversarial_many_ips_same_time_spreads_offsets_without_bias_collapse() {
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
async fn stress_parallel_failure_churn_under_saturation_remains_capped_and_live() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let start = Instant::now();
let mut workers = Vec::new();
for worker in 0..8u8 {
let shared = shared.clone();
workers.push(tokio::spawn(async move {
for i in 0..8192u32 {
let ip = IpAddr::V4(Ipv4Addr::new(
@ -68,7 +69,11 @@ async fn stress_parallel_failure_churn_under_saturation_remains_capped_and_live(
((i >> 8) & 0xff) as u8,
(i & 0xff) as u8,
));
auth_probe_record_failure(ip, start + Duration::from_micros((i % 128) as u64));
auth_probe_record_failure_in(
shared.as_ref(),
ip,
start + Duration::from_micros((i % 128) as u64),
);
}
}));
}
@ -78,17 +83,22 @@ async fn stress_parallel_failure_churn_under_saturation_remains_capped_and_live(
}
assert!(
auth_probe_state_map().len() <= AUTH_PROBE_TRACK_MAX_ENTRIES,
auth_probe_state_for_testing_in_shared(shared.as_ref()).len()
<= AUTH_PROBE_TRACK_MAX_ENTRIES,
"state must remain hard-capped under parallel saturation churn"
);
let probe = IpAddr::V4(Ipv4Addr::new(10, 4, 1, 1));
let _ = auth_probe_should_apply_preauth_throttle(probe, start + Duration::from_millis(1));
let _ = auth_probe_should_apply_preauth_throttle_in(
shared.as_ref(),
probe,
start + Duration::from_millis(1),
);
}
#[test]
fn light_fuzz_scan_offset_stays_within_window_for_randomized_inputs() {
let _guard = auth_probe_test_guard();
let shared = ProxySharedState::new();
let mut seed = 0xA55A_1357_2468_9BDFu64;
let base = Instant::now();
@ -107,7 +117,8 @@ fn light_fuzz_scan_offset_stays_within_window_for_randomized_inputs() {
let scan_limit = ((seed >> 40) as usize % 1024).saturating_add(1);
let now = base + Duration::from_nanos(seed & 0x1fff);
let offset = auth_probe_scan_start_offset(ip, now, state_len, scan_limit);
let offset =
auth_probe_scan_start_offset_in(shared.as_ref(), ip, now, state_len, scan_limit);
assert!(
offset < state_len,
"scan offset must always remain inside state length"


@ -0,0 +1,237 @@
use super::*;
use crate::crypto::sha256_hmac;
use crate::stats::ReplayChecker;
use std::net::{IpAddr, Ipv4Addr, SocketAddr};
use std::time::{Duration, Instant};
use tokio::time::timeout;
fn test_config_with_secret_hex(secret_hex: &str) -> ProxyConfig {
let mut cfg = ProxyConfig::default();
cfg.access.users.clear();
cfg.access
.users
.insert("user".to_string(), secret_hex.to_string());
cfg.access.ignore_time_skew = true;
cfg.censorship.mask = true;
cfg
}
fn make_valid_tls_handshake(secret: &[u8], timestamp: u32) -> Vec<u8> {
let session_id_len: usize = 32;
let len = tls::TLS_DIGEST_POS + tls::TLS_DIGEST_LEN + 1 + session_id_len;
let mut handshake = vec![0x42u8; len];
handshake[tls::TLS_DIGEST_POS + tls::TLS_DIGEST_LEN] = session_id_len as u8;
handshake[tls::TLS_DIGEST_POS..tls::TLS_DIGEST_POS + tls::TLS_DIGEST_LEN].fill(0);
let computed = sha256_hmac(secret, &handshake);
let mut digest = computed;
let ts = timestamp.to_le_bytes();
for i in 0..4 {
digest[28 + i] ^= ts[i];
}
handshake[tls::TLS_DIGEST_POS..tls::TLS_DIGEST_POS + tls::TLS_DIGEST_LEN]
.copy_from_slice(&digest);
handshake
}
#[tokio::test]
async fn handshake_baseline_probe_always_falls_back_to_masking() {
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let cfg = test_config_with_secret_hex("11111111111111111111111111111111");
let replay_checker = ReplayChecker::new(64, Duration::from_secs(60));
let rng = SecureRandom::new();
let peer: SocketAddr = "198.51.100.210:44321".parse().unwrap();
let probe = b"not-a-tls-clienthello";
let res = handle_tls_handshake(
probe,
tokio::io::empty(),
tokio::io::sink(),
peer,
&cfg,
&replay_checker,
&rng,
None,
)
.await;
assert!(matches!(res, HandshakeResult::BadClient { .. }));
}
#[tokio::test]
async fn handshake_baseline_invalid_secret_triggers_fallback_not_error_response() {
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let good_secret = [0x22u8; 16];
let bad_cfg = test_config_with_secret_hex("33333333333333333333333333333333");
let replay_checker = ReplayChecker::new(64, Duration::from_secs(60));
let rng = SecureRandom::new();
let peer: SocketAddr = "198.51.100.211:44322".parse().unwrap();
let handshake = make_valid_tls_handshake(&good_secret, 0);
let res = handle_tls_handshake(
&handshake,
tokio::io::empty(),
tokio::io::sink(),
peer,
&bad_cfg,
&replay_checker,
&rng,
None,
)
.await;
assert!(matches!(res, HandshakeResult::BadClient { .. }));
}
#[tokio::test]
async fn handshake_baseline_auth_probe_streak_increments_per_ip() {
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let cfg = test_config_with_secret_hex("44444444444444444444444444444444");
let replay_checker = ReplayChecker::new(64, Duration::from_secs(60));
let rng = SecureRandom::new();
let peer: SocketAddr = "203.0.113.10:5555".parse().unwrap();
let untouched_ip = IpAddr::V4(Ipv4Addr::new(203, 0, 113, 11));
let bad_probe = b"\x16\x03\x01\x00";
for expected in 1..=3 {
let res = handle_tls_handshake_with_shared(
bad_probe,
tokio::io::empty(),
tokio::io::sink(),
peer,
&cfg,
&replay_checker,
&rng,
None,
shared.as_ref(),
)
.await;
assert!(matches!(res, HandshakeResult::BadClient { .. }));
assert_eq!(
auth_probe_fail_streak_for_testing_in_shared(shared.as_ref(), peer.ip()),
Some(expected)
);
assert_eq!(
auth_probe_fail_streak_for_testing_in_shared(shared.as_ref(), untouched_ip),
None
);
}
}
#[test]
fn handshake_baseline_saturation_fires_at_compile_time_threshold() {
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let ip = IpAddr::V4(Ipv4Addr::new(198, 51, 100, 33));
let now = Instant::now();
for _ in 0..AUTH_PROBE_BACKOFF_START_FAILS.saturating_sub(1) {
auth_probe_record_failure_in(shared.as_ref(), ip, now);
}
assert!(!auth_probe_is_throttled_in(shared.as_ref(), ip, now));
auth_probe_record_failure_in(shared.as_ref(), ip, now);
assert!(auth_probe_is_throttled_in(shared.as_ref(), ip, now));
}
#[test]
fn handshake_baseline_repeated_probes_streak_monotonic() {
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let ip = IpAddr::V4(Ipv4Addr::new(203, 0, 113, 42));
let now = Instant::now();
let mut prev = 0u32;
for _ in 0..100 {
auth_probe_record_failure_in(shared.as_ref(), ip, now);
let current =
auth_probe_fail_streak_for_testing_in_shared(shared.as_ref(), ip).unwrap_or(0);
assert!(current >= prev, "streak must be monotonic");
prev = current;
}
}
#[test]
fn handshake_baseline_throttled_ip_incurs_backoff_delay() {
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let ip = IpAddr::V4(Ipv4Addr::new(198, 51, 100, 44));
let now = Instant::now();
for _ in 0..AUTH_PROBE_BACKOFF_START_FAILS {
auth_probe_record_failure_in(shared.as_ref(), ip, now);
}
let delay = auth_probe_backoff(AUTH_PROBE_BACKOFF_START_FAILS);
assert!(delay >= Duration::from_millis(AUTH_PROBE_BACKOFF_BASE_MS));
let before_expiry = now + delay.saturating_sub(Duration::from_millis(1));
let after_expiry = now + delay + Duration::from_millis(1);
assert!(auth_probe_is_throttled_in(
shared.as_ref(),
ip,
before_expiry
));
assert!(!auth_probe_is_throttled_in(
shared.as_ref(),
ip,
after_expiry
));
}
#[tokio::test]
async fn handshake_baseline_malformed_probe_frames_fail_closed_to_masking() {
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let cfg = test_config_with_secret_hex("55555555555555555555555555555555");
let replay_checker = ReplayChecker::new(64, Duration::from_secs(60));
let rng = SecureRandom::new();
let peer: SocketAddr = "198.51.100.212:44323".parse().unwrap();
let corpus: Vec<Vec<u8>> = vec![
vec![0x16, 0x03, 0x01],
vec![0x16, 0x03, 0x01, 0xFF, 0xFF],
vec![0x00; 128],
(0..64u8).collect(),
];
for probe in corpus {
let res = timeout(
Duration::from_millis(250),
handle_tls_handshake(
&probe,
tokio::io::empty(),
tokio::io::sink(),
peer,
&cfg,
&replay_checker,
&rng,
None,
),
)
.await
.expect("malformed probe handling must complete in bounded time");
assert!(
matches!(
res,
HandshakeResult::BadClient { .. } | HandshakeResult::Error(_)
),
"malformed probe must fail closed"
);
}
}
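The `handshake_baseline_throttled_ip_incurs_backoff_delay` test above only constrains `auth_probe_backoff` from below (at least `AUTH_PROBE_BACKOFF_BASE_MS` once the fail threshold is reached), while a later clamp test feeds it a `u32::MAX` streak. A hedged sketch of a clamped exponential backoff that satisfies both properties (the constants, exponent cap, and ceiling here are illustrative, not the crate's values):

```rust
use std::time::Duration;

// Illustrative constants mirroring the AUTH_PROBE_* names used in the tests.
const BACKOFF_START_FAILS: u32 = 3;
const BACKOFF_BASE_MS: u64 = 50;
const BACKOFF_MAX_MS: u64 = 5_000;

// Clamped exponential backoff: zero delay below the failure threshold, then
// base * 2^(extra fails), with the exponent capped so even a u32::MAX streak
// cannot overflow the shift or the multiply.
fn backoff(fail_streak: u32) -> Duration {
    if fail_streak < BACKOFF_START_FAILS {
        return Duration::ZERO;
    }
    let extra = (fail_streak - BACKOFF_START_FAILS).min(16);
    let ms = BACKOFF_BASE_MS
        .saturating_mul(1u64 << extra)
        .min(BACKOFF_MAX_MS);
    Duration::from_millis(ms)
}

fn main() {
    assert_eq!(backoff(0), Duration::ZERO);
    assert!(backoff(BACKOFF_START_FAILS) >= Duration::from_millis(BACKOFF_BASE_MS));
    assert!(backoff(u32::MAX) <= Duration::from_millis(BACKOFF_MAX_MS));
}
```

With this shape, the `before_expiry`/`after_expiry` assertions in the test follow directly: the throttle holds for exactly `backoff(streak)` past the last failure and releases afterwards.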


@ -67,16 +67,10 @@ fn test_config_with_secret_hex(secret_hex: &str) -> ProxyConfig {
cfg
}
fn auth_probe_test_guard() -> MutexGuard<'static, ()> {
auth_probe_test_lock()
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner())
}
#[tokio::test]
async fn mtproto_handshake_duplicate_digest_is_replayed_on_second_attempt() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let secret_hex = "11223344556677889900aabbccddeeff";
let base = make_valid_mtproto_handshake(secret_hex, ProtoTag::Secure, 2);
@ -110,13 +104,13 @@ async fn mtproto_handshake_duplicate_digest_is_replayed_on_second_attempt() {
.await;
assert!(matches!(second, HandshakeResult::BadClient { .. }));
clear_auth_probe_state_for_testing();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
}
#[tokio::test]
async fn mtproto_handshake_fuzz_corpus_never_panics_and_stays_fail_closed() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let secret_hex = "00112233445566778899aabbccddeeff";
let base = make_valid_mtproto_handshake(secret_hex, ProtoTag::Secure, 1);
@ -178,13 +172,13 @@ async fn mtproto_handshake_fuzz_corpus_never_panics_and_stays_fail_closed() {
);
}
clear_auth_probe_state_for_testing();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
}
#[tokio::test]
async fn mtproto_handshake_mixed_corpus_never_panics_and_exact_duplicates_are_rejected() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let secret_hex = "99887766554433221100ffeeddccbbaa";
let base = make_valid_mtproto_handshake(secret_hex, ProtoTag::Secure, 4);
@ -274,5 +268,5 @@ async fn mtproto_handshake_mixed_corpus_never_panics_and_exact_duplicates_are_re
);
}
clear_auth_probe_state_for_testing();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
}
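The duplicate-digest test above relies on `ReplayChecker::new(capacity, ttl)` accepting the first sighting of a handshake digest and rejecting an exact repeat within the TTL window. A rough sketch of such a bounded replay cache, assuming a simple FIFO eviction policy (the real implementation may differ):

```rust
use std::collections::{HashMap, VecDeque};
use std::time::{Duration, Instant};

// Hypothetical bounded replay cache: remembers recently seen digests up to
// `capacity` entries and rejects exact duplicates seen within `ttl`.
struct ReplayChecker {
    capacity: usize,
    ttl: Duration,
    seen: HashMap<Vec<u8>, Instant>,
    order: VecDeque<Vec<u8>>, // FIFO eviction order (illustrative choice)
}

impl ReplayChecker {
    fn new(capacity: usize, ttl: Duration) -> Self {
        Self {
            capacity,
            ttl,
            seen: HashMap::new(),
            order: VecDeque::new(),
        }
    }

    /// Returns true if the digest is fresh; false if it is a replay.
    fn check_and_insert(&mut self, digest: &[u8], now: Instant) -> bool {
        if let Some(&first_seen) = self.seen.get(digest) {
            if now.duration_since(first_seen) < self.ttl {
                return false; // exact duplicate inside the TTL window
            }
        }
        if self.seen.len() >= self.capacity {
            if let Some(oldest) = self.order.pop_front() {
                self.seen.remove(&oldest); // hard cap on tracked digests
            }
        }
        self.seen.insert(digest.to_vec(), now);
        self.order.push_back(digest.to_vec());
        true
    }
}

fn main() {
    let mut rc = ReplayChecker::new(64, Duration::from_secs(60));
    let now = Instant::now();
    assert!(rc.check_and_insert(b"digest-a", now));
    assert!(!rc.check_and_insert(b"digest-a", now + Duration::from_secs(1)));
    assert!(rc.check_and_insert(b"digest-b", now));
}
```

The hard capacity bound matters for the same reason as the auth-probe map cap exercised elsewhere in this diff: an attacker replaying unique digests must not be able to grow server-side state without limit.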


@ -11,12 +11,6 @@ use tokio::sync::Barrier;
// --- Helpers ---
fn auth_probe_test_guard() -> std::sync::MutexGuard<'static, ()> {
auth_probe_test_lock()
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner())
}
fn test_config_with_secret_hex(secret_hex: &str) -> ProxyConfig {
let mut cfg = ProxyConfig::default();
cfg.access.users.clear();
@ -164,8 +158,8 @@ fn make_valid_tls_client_hello_with_sni_and_alpn(
#[tokio::test]
async fn server_hello_delay_bypassed_if_max_is_zero_despite_high_min() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let secret = [0x1Au8; 16];
let mut config = test_config_with_secret_hex("1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a");
@ -201,10 +195,10 @@ async fn server_hello_delay_bypassed_if_max_is_zero_despite_high_min() {
#[test]
fn auth_probe_backoff_extreme_fail_streak_clamps_safely() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let state = auth_probe_state_map();
let state = auth_probe_state_for_testing_in_shared(shared.as_ref());
let peer_ip = IpAddr::V4(Ipv4Addr::new(198, 51, 100, 99));
let now = Instant::now();
@ -217,7 +211,7 @@ fn auth_probe_backoff_extreme_fail_streak_clamps_safely() {
},
);
auth_probe_record_failure_with_state(&state, peer_ip, now);
auth_probe_record_failure_with_state_in(shared.as_ref(), &state, peer_ip, now);
let updated = state.get(&peer_ip).unwrap();
assert_eq!(updated.fail_streak, u32::MAX);
@ -270,8 +264,8 @@ fn generate_tg_nonce_cryptographic_uniqueness_and_entropy() {
#[tokio::test]
async fn mtproto_multi_user_decryption_isolation() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let mut config = ProxyConfig::default();
config.general.modes.secure = true;
@ -323,10 +317,8 @@ async fn mtproto_multi_user_decryption_isolation() {
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
async fn invalid_secret_warning_lock_contention_and_bound() {
let _guard = warned_secrets_test_lock()
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner());
clear_warned_secrets_for_testing();
let shared = ProxySharedState::new();
clear_warned_secrets_for_testing_in_shared(shared.as_ref());
let tasks = 50;
let iterations_per_task = 100;
@ -335,11 +327,18 @@ async fn invalid_secret_warning_lock_contention_and_bound() {
for t in 0..tasks {
let b = barrier.clone();
let shared = shared.clone();
handles.push(tokio::spawn(async move {
b.wait().await;
for i in 0..iterations_per_task {
let user_name = format!("contention_user_{}_{}", t, i);
warn_invalid_secret_once(&user_name, "invalid_hex", ACCESS_SECRET_BYTES, None);
warn_invalid_secret_once_in(
shared.as_ref(),
&user_name,
"invalid_hex",
ACCESS_SECRET_BYTES,
None,
);
}
}));
}
@ -348,7 +347,7 @@ async fn invalid_secret_warning_lock_contention_and_bound() {
handle.await.unwrap();
}
let warned = INVALID_SECRET_WARNED.get().unwrap();
let warned = warned_secrets_for_testing_in_shared(shared.as_ref());
let guard = warned
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner());
@ -362,8 +361,8 @@ async fn invalid_secret_warning_lock_contention_and_bound() {
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
async fn mtproto_strict_concurrent_replay_race_condition() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let secret_hex = "4A4A4A4A4A4A4A4A4A4A4A4A4A4A4A4A";
let config = Arc::new(test_config_with_secret_hex(secret_hex));
@ -428,8 +427,8 @@ async fn mtproto_strict_concurrent_replay_race_condition() {
#[tokio::test]
async fn tls_alpn_zero_length_protocol_handled_safely() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let secret = [0x5Bu8; 16];
let mut config = test_config_with_secret_hex("5b5b5b5b5b5b5b5b5b5b5b5b5b5b5b5b");
@ -461,8 +460,8 @@ async fn tls_alpn_zero_length_protocol_handled_safely() {
#[tokio::test]
async fn tls_sni_massive_hostname_does_not_panic() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let secret = [0x6Cu8; 16];
let config = test_config_with_secret_hex("6c6c6c6c6c6c6c6c6c6c6c6c6c6c6c6c");
@ -497,8 +496,8 @@ async fn tls_sni_massive_hostname_does_not_panic() {
#[tokio::test]
async fn tls_progressive_truncation_fuzzing_no_panics() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let secret = [0x7Du8; 16];
let config = test_config_with_secret_hex("7d7d7d7d7d7d7d7d7d7d7d7d7d7d7d7d");
@ -535,8 +534,8 @@ async fn tls_progressive_truncation_fuzzing_no_panics() {
#[tokio::test]
async fn mtproto_pure_entropy_fuzzing_no_panics() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let config = test_config_with_secret_hex("8e8e8e8e8e8e8e8e8e8e8e8e8e8e8e8e");
let replay_checker = ReplayChecker::new(128, Duration::from_secs(60));
@ -569,10 +568,8 @@ async fn mtproto_pure_entropy_fuzzing_no_panics() {
#[test]
fn decode_user_secret_odd_length_hex_rejection() {
let _guard = warned_secrets_test_lock()
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner());
clear_warned_secrets_for_testing();
let shared = ProxySharedState::new();
clear_warned_secrets_for_testing_in_shared(shared.as_ref());
let mut config = ProxyConfig::default();
config.access.users.clear();
@ -581,7 +578,7 @@ fn decode_user_secret_odd_length_hex_rejection() {
"1234567890123456789012345678901".to_string(),
);
let decoded = decode_user_secrets(&config, None);
let decoded = decode_user_secrets_in(shared.as_ref(), &config, None);
assert!(
decoded.is_empty(),
"Odd-length hex string must be gracefully rejected by hex::decode without unwrapping"
@ -590,10 +587,10 @@ fn decode_user_secret_odd_length_hex_rejection() {
#[test]
fn saturation_grace_pre_existing_high_fail_streak_immediate_throttle() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let state = auth_probe_state_map();
let state = auth_probe_state_for_testing_in_shared(shared.as_ref());
let peer_ip = IpAddr::V4(Ipv4Addr::new(198, 51, 100, 112));
let now = Instant::now();
@ -608,7 +605,7 @@ fn saturation_grace_pre_existing_high_fail_streak_immediate_throttle() {
);
{
let mut guard = auth_probe_saturation_state_lock();
let mut guard = auth_probe_saturation_state_lock_for_testing_in_shared(shared.as_ref());
*guard = Some(AuthProbeSaturationState {
fail_streak: AUTH_PROBE_BACKOFF_START_FAILS,
blocked_until: now + Duration::from_secs(5),
@ -616,7 +613,7 @@ fn saturation_grace_pre_existing_high_fail_streak_immediate_throttle() {
});
}
let is_throttled = auth_probe_should_apply_preauth_throttle(peer_ip, now);
let is_throttled = auth_probe_should_apply_preauth_throttle_in(shared.as_ref(), peer_ip, now);
assert!(
is_throttled,
"A peer with a pre-existing high fail streak must be immediately throttled when saturation begins, receiving no unearned grace period"
@ -625,21 +622,22 @@ fn saturation_grace_pre_existing_high_fail_streak_immediate_throttle() {
#[test]
fn auth_probe_saturation_note_resets_retention_window() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let base_time = Instant::now();
auth_probe_note_saturation(base_time);
auth_probe_note_saturation_in(shared.as_ref(), base_time);
let later = base_time + Duration::from_secs(AUTH_PROBE_TRACK_RETENTION_SECS - 1);
auth_probe_note_saturation(later);
auth_probe_note_saturation_in(shared.as_ref(), later);
let check_time = base_time + Duration::from_secs(AUTH_PROBE_TRACK_RETENTION_SECS + 5);
// This call may return false if backoff has elapsed, but it must not clear
// the saturation state because `later` refreshed last_seen.
let _ = auth_probe_saturation_is_throttled_at_for_testing(check_time);
let guard = auth_probe_saturation_state_lock();
let _ =
auth_probe_saturation_is_throttled_at_for_testing_in_shared(shared.as_ref(), check_time);
let guard = auth_probe_saturation_state_lock_for_testing_in_shared(shared.as_ref());
assert!(
guard.is_some(),
"Ongoing saturation notes must refresh last_seen so saturation state remains retained past the original window"


@@ -6,12 +6,6 @@ use std::sync::Arc;
use std::time::{Duration, Instant};
use tokio::sync::Barrier;
fn auth_probe_test_guard() -> std::sync::MutexGuard<'static, ()> {
auth_probe_test_lock()
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner())
}
fn test_config_with_secret_hex(secret_hex: &str) -> ProxyConfig {
let mut cfg = ProxyConfig::default();
cfg.access.users.clear();
@@ -127,8 +121,8 @@ fn make_valid_mtproto_handshake(
#[tokio::test]
async fn tls_alpn_reject_does_not_pollute_replay_cache() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let secret = [0x11u8; 16];
let mut config = test_config_with_secret_hex("11111111111111111111111111111111");
@@ -164,8 +158,8 @@ async fn tls_alpn_reject_does_not_pollute_replay_cache() {
#[tokio::test]
async fn tls_truncated_session_id_len_fails_closed_without_panic() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let config = test_config_with_secret_hex("33333333333333333333333333333333");
let replay_checker = ReplayChecker::new(128, Duration::from_secs(60));
@@ -193,10 +187,10 @@ async fn tls_truncated_session_id_len_fails_closed_without_panic() {
#[test]
fn auth_probe_eviction_identical_timestamps_keeps_map_bounded() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let state = auth_probe_state_map();
let state = auth_probe_state_for_testing_in_shared(shared.as_ref());
let same = Instant::now();
for i in 0..AUTH_PROBE_TRACK_MAX_ENTRIES {
@@ -212,7 +206,12 @@ fn auth_probe_eviction_identical_timestamps_keeps_map_bounded() {
}
let new_ip = IpAddr::V4(Ipv4Addr::new(192, 168, 21, 21));
auth_probe_record_failure_with_state(state, new_ip, same + Duration::from_millis(1));
auth_probe_record_failure_with_state_in(
shared.as_ref(),
state,
new_ip,
same + Duration::from_millis(1),
);
assert_eq!(state.len(), AUTH_PROBE_TRACK_MAX_ENTRIES);
assert!(state.contains_key(&new_ip));
@@ -220,21 +219,21 @@ fn auth_probe_eviction_identical_timestamps_keeps_map_bounded() {
#[test]
fn clear_auth_probe_state_recovers_from_poisoned_saturation_lock() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let saturation = auth_probe_saturation_state();
let shared_for_poison = shared.clone();
let poison_thread = std::thread::spawn(move || {
let _hold = saturation
let _hold = auth_probe_saturation_state_for_testing_in_shared(shared_for_poison.as_ref())
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner());
panic!("intentional poison for regression coverage");
});
let _ = poison_thread.join();
clear_auth_probe_state_for_testing();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let guard = auth_probe_saturation_state()
let guard = auth_probe_saturation_state_for_testing_in_shared(shared.as_ref())
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner());
assert!(guard.is_none());
@@ -242,12 +241,9 @@ fn clear_auth_probe_state_recovers_from_poisoned_saturation_lock() {
#[tokio::test]
async fn mtproto_invalid_length_secret_is_ignored_and_valid_user_still_auths() {
let _probe_guard = auth_probe_test_guard();
let _warn_guard = warned_secrets_test_lock()
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner());
clear_auth_probe_state_for_testing();
clear_warned_secrets_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
clear_warned_secrets_for_testing_in_shared(shared.as_ref());
let mut config = ProxyConfig::default();
config.general.modes.secure = true;
@@ -285,14 +281,14 @@ async fn mtproto_invalid_length_secret_is_ignored_and_valid_user_still_auths() {
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
async fn saturation_grace_exhaustion_under_concurrency_keeps_peer_throttled() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let peer_ip = IpAddr::V4(Ipv4Addr::new(198, 51, 100, 80));
let now = Instant::now();
{
let mut guard = auth_probe_saturation_state()
let mut guard = auth_probe_saturation_state_for_testing_in_shared(shared.as_ref())
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner());
*guard = Some(AuthProbeSaturationState {
@@ -302,7 +298,7 @@ async fn saturation_grace_exhaustion_under_concurrency_keeps_peer_throttled() {
});
}
let state = auth_probe_state_map();
let state = auth_probe_state_for_testing_in_shared(shared.as_ref());
state.insert(
peer_ip,
AuthProbeState {
@@ -318,9 +314,10 @@ async fn saturation_grace_exhaustion_under_concurrency_keeps_peer_throttled() {
for _ in 0..tasks {
let b = barrier.clone();
let shared = shared.clone();
handles.push(tokio::spawn(async move {
b.wait().await;
auth_probe_record_failure(peer_ip, Instant::now());
auth_probe_record_failure_in(shared.as_ref(), peer_ip, Instant::now());
}));
}
@@ -333,7 +330,8 @@ async fn saturation_grace_exhaustion_under_concurrency_keeps_peer_throttled() {
final_state.fail_streak
>= AUTH_PROBE_BACKOFF_START_FAILS + AUTH_PROBE_SATURATION_GRACE_FAILS
);
assert!(auth_probe_should_apply_preauth_throttle(
assert!(auth_probe_should_apply_preauth_throttle_in(
shared.as_ref(),
peer_ip,
Instant::now()
));
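The eviction test above pins a contract on the failure-tracking map: at capacity, a new peer must still fit (one stale entry is evicted), and identical timestamps must not break eviction. A minimal sketch of that contract, with illustrative names only (the real map type and helpers are not shown in this diff):

```rust
use std::collections::HashMap;
use std::net::{IpAddr, Ipv4Addr};
use std::time::{Duration, Instant};

// Sketch of the bounded-eviction behavior asserted by
// `auth_probe_eviction_identical_timestamps_keeps_map_bounded`:
// at capacity, the stalest entry is evicted so the newcomer fits,
// and the map length never exceeds `max_entries`.
fn record_failure_bounded(
    map: &mut HashMap<IpAddr, Instant>,
    max_entries: usize,
    ip: IpAddr,
    now: Instant,
) {
    if !map.contains_key(&ip) && map.len() >= max_entries {
        // Identical timestamps are fine: min_by_key still yields one victim.
        if let Some(victim) = map.iter().min_by_key(|(_, t)| **t).map(|(k, _)| *k) {
            map.remove(&victim);
        }
    }
    map.insert(ip, now);
}

fn main() {
    let mut map = HashMap::new();
    let same = Instant::now();
    let max = 8;
    for i in 0..max {
        record_failure_bounded(&mut map, max, IpAddr::V4(Ipv4Addr::new(10, 0, 0, i as u8)), same);
    }
    let newcomer = IpAddr::V4(Ipv4Addr::new(192, 168, 21, 21));
    record_failure_bounded(&mut map, max, newcomer, same + Duration::from_millis(1));
    assert_eq!(map.len(), max); // still bounded
    assert!(map.contains_key(&newcomer)); // newcomer displaced a stale entry
}
```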


@@ -1,46 +1,39 @@
use super::*;
use std::time::{Duration, Instant};
fn auth_probe_test_guard() -> std::sync::MutexGuard<'static, ()> {
auth_probe_test_lock()
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner())
}
fn poison_saturation_mutex() {
let saturation = auth_probe_saturation_state();
let poison_thread = std::thread::spawn(move || {
fn poison_saturation_mutex(shared: &ProxySharedState) {
let saturation = auth_probe_saturation_state_for_testing_in_shared(shared);
let _ = std::panic::catch_unwind(std::panic::AssertUnwindSafe(|| {
let _guard = saturation
.lock()
.expect("saturation mutex must be lockable for poison setup");
panic!("intentional poison for saturation mutex resilience test");
});
let _ = poison_thread.join();
}));
}
#[test]
fn auth_probe_saturation_note_recovers_after_mutex_poison() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
poison_saturation_mutex();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
poison_saturation_mutex(shared.as_ref());
let now = Instant::now();
auth_probe_note_saturation(now);
auth_probe_note_saturation_in(shared.as_ref(), now);
assert!(
auth_probe_saturation_is_throttled_at_for_testing(now),
auth_probe_saturation_is_throttled_at_for_testing_in_shared(shared.as_ref(), now),
"poisoned saturation mutex must not disable saturation throttling"
);
}
#[test]
fn auth_probe_saturation_check_recovers_after_mutex_poison() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
poison_saturation_mutex();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
poison_saturation_mutex(shared.as_ref());
{
let mut guard = auth_probe_saturation_state_lock();
let mut guard = auth_probe_saturation_state_lock_for_testing_in_shared(shared.as_ref());
*guard = Some(AuthProbeSaturationState {
fail_streak: AUTH_PROBE_BACKOFF_START_FAILS,
blocked_until: Instant::now() + Duration::from_millis(10),
@@ -49,23 +42,25 @@ fn auth_probe_saturation_check_recovers_after_mutex_poison() {
}
assert!(
auth_probe_saturation_is_throttled_for_testing(),
auth_probe_saturation_is_throttled_for_testing_in_shared(shared.as_ref()),
"throttle check must recover poisoned saturation mutex and stay fail-closed"
);
}
#[test]
fn clear_auth_probe_state_clears_saturation_even_if_poisoned() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
poison_saturation_mutex();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
poison_saturation_mutex(shared.as_ref());
auth_probe_note_saturation(Instant::now());
assert!(auth_probe_saturation_is_throttled_for_testing());
auth_probe_note_saturation_in(shared.as_ref(), Instant::now());
assert!(auth_probe_saturation_is_throttled_for_testing_in_shared(
shared.as_ref()
));
clear_auth_probe_state_for_testing();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
assert!(
!auth_probe_saturation_is_throttled_for_testing(),
!auth_probe_saturation_is_throttled_for_testing_in_shared(shared.as_ref()),
"clear helper must clear saturation state even after poison"
);
}
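All three poison-resilience tests above lean on one idiom: a panic while a lock is held poisons the mutex, and `into_inner` recovers the guard so the saturation state stays reachable instead of failing open. A minimal, self-contained sketch of that idiom (the function name is illustrative):

```rust
use std::sync::{Arc, Mutex, MutexGuard};

// Poison-tolerant locking: recover the guard from a poisoned mutex so a
// panicking holder cannot disable throttling. This mirrors the
// `unwrap_or_else(|poisoned| poisoned.into_inner())` calls in the tests.
fn lock_recovering<T>(m: &Mutex<T>) -> MutexGuard<'_, T> {
    m.lock().unwrap_or_else(|poisoned| poisoned.into_inner())
}

fn main() {
    let m = Arc::new(Mutex::new(5i32));
    let m2 = Arc::clone(&m);
    // Poison the mutex by panicking while holding the guard.
    let _ = std::thread::spawn(move || {
        let _g = m2.lock().unwrap();
        panic!("intentional poison");
    })
    .join();
    assert!(m.is_poisoned());
    // The data is still reachable through the recovering helper.
    assert_eq!(*lock_recovering(&m), 5);
}
```

The data may be in a partially updated state after a poison, which is why the tests re-assert the throttle invariants after recovery rather than trusting the contents blindly.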

File diff suppressed because it is too large


@@ -4,12 +4,6 @@ use crate::protocol::constants::{ProtoTag, TLS_RECORD_HANDSHAKE, TLS_VERSION};
use std::net::SocketAddr;
use std::time::{Duration, Instant};
fn auth_probe_test_guard() -> std::sync::MutexGuard<'static, ()> {
auth_probe_test_lock()
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner())
}
fn make_valid_mtproto_handshake(
secret_hex: &str,
proto_tag: ProtoTag,
@@ -149,8 +143,8 @@ fn median_ns(samples: &mut [u128]) -> u128 {
#[tokio::test]
#[ignore = "manual benchmark: timing-sensitive and host-dependent"]
async fn mtproto_user_scan_timing_manual_benchmark() {
let _guard = auth_probe_test_guard();
clear_auth_probe_state_for_testing();
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
const DECOY_USERS: usize = 8_000;
const ITERATIONS: usize = 250;
@@ -243,7 +237,7 @@ async fn mtproto_user_scan_timing_manual_benchmark() {
#[tokio::test]
#[ignore = "manual benchmark: timing-sensitive and host-dependent"]
async fn tls_sni_preferred_vs_no_sni_fallback_manual_benchmark() {
let _guard = auth_probe_test_guard();
let shared = ProxySharedState::new();
const DECOY_USERS: usize = 8_000;
const ITERATIONS: usize = 250;
@@ -281,7 +275,7 @@ async fn tls_sni_preferred_vs_no_sni_fallback_manual_benchmark() {
let no_sni = make_valid_tls_handshake(&target_secret, (i as u32).wrapping_add(10_000));
let started_sni = Instant::now();
let sni_secrets = decode_user_secrets(&config, Some(preferred_user));
let sni_secrets = decode_user_secrets_in(shared.as_ref(), &config, Some(preferred_user));
let sni_result = tls::validate_tls_handshake_with_replay_window(
&with_sni,
&sni_secrets,
@@ -292,7 +286,7 @@ async fn tls_sni_preferred_vs_no_sni_fallback_manual_benchmark() {
assert!(sni_result.is_some());
let started_no_sni = Instant::now();
let no_sni_secrets = decode_user_secrets(&config, None);
let no_sni_secrets = decode_user_secrets_in(shared.as_ref(), &config, None);
let no_sni_result = tls::validate_tls_handshake_with_replay_window(
&no_sni,
&no_sni_secrets,


@@ -562,9 +562,10 @@ async fn timing_classifier_light_fuzz_pairwise_bucketed_accuracy_stays_bounded_u
if low_info_pair_count > 0 {
let low_info_baseline_avg = low_info_baseline_sum / low_info_pair_count as f64;
let low_info_hardened_avg = low_info_hardened_sum / low_info_pair_count as f64;
let low_info_avg_jitter_budget = 0.40 + acc_quant_step;
assert!(
low_info_hardened_avg <= low_info_baseline_avg + 0.40,
"normalization low-info average drift exceeded jitter budget: baseline_avg={low_info_baseline_avg:.3} hardened_avg={low_info_hardened_avg:.3}"
low_info_hardened_avg <= low_info_baseline_avg + low_info_avg_jitter_budget,
"normalization low-info average drift exceeded jitter budget: baseline_avg={low_info_baseline_avg:.3} hardened_avg={low_info_hardened_avg:.3} tolerated={low_info_avg_jitter_budget:.3}"
);
}


@@ -0,0 +1,156 @@
use super::*;
use tokio::io::duplex;
use tokio::net::TcpListener;
use tokio::time::{Duration, Instant, timeout};
#[test]
fn masking_baseline_timing_normalization_budget_within_bounds() {
let mut config = ProxyConfig::default();
config.censorship.mask_timing_normalization_enabled = true;
config.censorship.mask_timing_normalization_floor_ms = 120;
config.censorship.mask_timing_normalization_ceiling_ms = 180;
for _ in 0..256 {
let budget = mask_outcome_target_budget(&config);
assert!(budget >= Duration::from_millis(120));
assert!(budget <= Duration::from_millis(180));
}
}
#[tokio::test]
async fn masking_baseline_fallback_relays_to_mask_host() {
let listener = TcpListener::bind("127.0.0.1:0").await.unwrap();
let backend_addr = listener.local_addr().unwrap();
let initial = b"GET /baseline HTTP/1.1\r\nHost: x\r\n\r\n".to_vec();
let reply = b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nOK".to_vec();
let accept_task = tokio::spawn({
let initial = initial.clone();
let reply = reply.clone();
async move {
let (mut stream, _) = listener.accept().await.unwrap();
let mut seen = vec![0u8; initial.len()];
stream.read_exact(&mut seen).await.unwrap();
assert_eq!(seen, initial);
stream.write_all(&reply).await.unwrap();
}
});
let mut config = ProxyConfig::default();
config.general.beobachten = false;
config.censorship.mask = true;
config.censorship.mask_host = Some("127.0.0.1".to_string());
config.censorship.mask_port = backend_addr.port();
config.censorship.mask_unix_sock = None;
config.censorship.mask_proxy_protocol = 0;
let peer: SocketAddr = "203.0.113.70:55070".parse().unwrap();
let local_addr: SocketAddr = "127.0.0.1:443".parse().unwrap();
let (client_reader, _client_writer) = duplex(1024);
let (mut visible_reader, visible_writer) = duplex(2048);
let beobachten = BeobachtenStore::new();
handle_bad_client(
client_reader,
visible_writer,
&initial,
peer,
local_addr,
&config,
&beobachten,
)
.await;
let mut observed = vec![0u8; reply.len()];
visible_reader.read_exact(&mut observed).await.unwrap();
assert_eq!(observed, reply);
accept_task.await.unwrap();
}
#[test]
fn masking_baseline_no_normalization_returns_default_budget() {
let mut config = ProxyConfig::default();
config.censorship.mask_timing_normalization_enabled = false;
let budget = mask_outcome_target_budget(&config);
assert_eq!(budget, MASK_TIMEOUT);
}
#[tokio::test]
async fn masking_baseline_unreachable_mask_host_silent_failure() {
let mut config = ProxyConfig::default();
config.general.beobachten = false;
config.censorship.mask = true;
config.censorship.mask_unix_sock = None;
config.censorship.mask_host = Some("127.0.0.1".to_string());
config.censorship.mask_port = 1;
config.censorship.mask_timing_normalization_enabled = false;
let peer: SocketAddr = "203.0.113.71:55071".parse().unwrap();
let local_addr: SocketAddr = "127.0.0.1:443".parse().unwrap();
let beobachten = BeobachtenStore::new();
let (client_reader, _client_writer) = duplex(1024);
let (mut visible_reader, visible_writer) = duplex(1024);
let started = Instant::now();
handle_bad_client(
client_reader,
visible_writer,
b"GET / HTTP/1.1\r\n\r\n",
peer,
local_addr,
&config,
&beobachten,
)
.await;
let elapsed = started.elapsed();
assert!(elapsed < Duration::from_secs(1));
let mut buf = [0u8; 1];
let read_res = timeout(Duration::from_millis(50), visible_reader.read(&mut buf)).await;
match read_res {
Ok(Ok(0)) | Err(_) => {}
Ok(Ok(n)) => panic!("expected no response bytes, got {n}"),
Ok(Err(e)) => panic!("unexpected client-side read error: {e}"),
}
}
#[tokio::test]
async fn masking_baseline_light_fuzz_initial_data_no_panic() {
let mut config = ProxyConfig::default();
config.general.beobachten = false;
config.censorship.mask = false;
let peer: SocketAddr = "203.0.113.72:55072".parse().unwrap();
let local_addr: SocketAddr = "127.0.0.1:443".parse().unwrap();
let beobachten = BeobachtenStore::new();
let corpus: Vec<Vec<u8>> = vec![
vec![],
vec![0x00],
vec![0xFF; 1024],
(0..255u8).collect(),
b"\xF0\x28\x8C\x28".to_vec(),
];
for sample in corpus {
let (client_reader, _client_writer) = duplex(1024);
let (_visible_reader, visible_writer) = duplex(1024);
timeout(
Duration::from_millis(300),
handle_bad_client(
client_reader,
visible_writer,
&sample,
peer,
local_addr,
&config,
&beobachten,
),
)
.await
.expect("fuzz sample must complete in bounded time");
}
}
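The baseline tests above pin the branch shape of the budget selection: normalization disabled yields a fixed default budget, enabled yields a random budget that always lands inside the configured envelope, even if floor and ceiling are swapped. A sketch of that contract, with an injected sampler so it stays deterministic; the constant's value and all names here are assumptions, not the project's actual code:

```rust
use std::time::Duration;

// Assumed stand-in for the real MASK_TIMEOUT constant.
const MASK_TIMEOUT_SKETCH: Duration = Duration::from_secs(10);

// Budget contract sketch: off => fixed default; on => sampled budget
// clamped into [floor, ceiling], tolerant of an inverted range.
fn mask_budget_sketch(
    normalize: bool,
    floor_ms: u64,
    ceiling_ms: u64,
    sample_ms: impl Fn(u64, u64) -> u64, // injected RNG keeps the sketch testable
) -> Duration {
    if !normalize {
        return MASK_TIMEOUT_SKETCH;
    }
    let (lo, hi) = (floor_ms.min(ceiling_ms), floor_ms.max(ceiling_ms));
    Duration::from_millis(sample_ms(lo, hi).clamp(lo, hi))
}

fn main() {
    // Disabled: default budget, envelope ignored.
    assert_eq!(mask_budget_sketch(false, 120, 180, |lo, _| lo), MASK_TIMEOUT_SKETCH);
    // Enabled: result stays inside the envelope.
    let b = mask_budget_sketch(true, 120, 180, |lo, hi| (lo + hi) / 2);
    assert!(b >= Duration::from_millis(120) && b <= Duration::from_millis(180));
    // Inverted floor/ceiling misconfiguration is still bounded.
    let inv = mask_budget_sketch(true, 2000, 500, |lo, _| lo);
    assert!(inv >= Duration::from_millis(500) && inv <= Duration::from_millis(2000));
}
```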


@@ -0,0 +1,336 @@
use super::*;
use rand::SeedableRng;
use rand::rngs::StdRng;
fn seeded_rng(seed: u64) -> StdRng {
StdRng::seed_from_u64(seed)
}
// ── Positive: all samples within configured envelope ────────────────────
#[test]
fn masking_lognormal_all_samples_within_configured_envelope() {
let mut rng = seeded_rng(42);
let floor: u64 = 500;
let ceiling: u64 = 2000;
for _ in 0..10_000 {
let val = sample_lognormal_percentile_bounded(floor, ceiling, &mut rng);
assert!(
val >= floor && val <= ceiling,
"sample {} outside [{}, {}]",
val,
floor,
ceiling,
);
}
}
// ── Statistical: median near geometric mean ─────────────────────────────
#[test]
fn masking_lognormal_sample_median_near_geometric_mean_of_range() {
let mut rng = seeded_rng(42);
let floor: u64 = 500;
let ceiling: u64 = 2000;
let geometric_mean = ((floor as f64) * (ceiling as f64)).sqrt();
let mut samples: Vec<u64> = (0..10_000)
.map(|_| sample_lognormal_percentile_bounded(floor, ceiling, &mut rng))
.collect();
samples.sort();
let median = samples[samples.len() / 2] as f64;
let tolerance = geometric_mean * 0.10;
assert!(
(median - geometric_mean).abs() <= tolerance,
"median {} not within 10% of geometric mean {} (tolerance {})",
median,
geometric_mean,
tolerance,
);
}
// ── Edge: degenerate floor == ceiling returns exactly that value ─────────
#[test]
fn masking_lognormal_degenerate_floor_eq_ceiling_returns_floor() {
let mut rng = seeded_rng(99);
for _ in 0..100 {
let val = sample_lognormal_percentile_bounded(1000, 1000, &mut rng);
assert_eq!(
val, 1000,
"floor == ceiling must always return exactly that value"
);
}
}
// ── Edge: floor > ceiling (misconfiguration) clamps safely ──────────────
#[test]
fn masking_lognormal_floor_greater_than_ceiling_returns_ceiling() {
let mut rng = seeded_rng(77);
let val = sample_lognormal_percentile_bounded(2000, 500, &mut rng);
assert_eq!(
val, 500,
"floor > ceiling misconfiguration must return ceiling (the minimum)"
);
}
// ── Edge: floor == 1, ceiling == 1 ──────────────────────────────────────
#[test]
fn masking_lognormal_floor_1_ceiling_1_returns_1() {
let mut rng = seeded_rng(12);
let val = sample_lognormal_percentile_bounded(1, 1, &mut rng);
assert_eq!(val, 1);
}
// ── Edge: floor == 1, ceiling very large ────────────────────────────────
#[test]
fn masking_lognormal_wide_range_all_samples_within_bounds() {
let mut rng = seeded_rng(55);
let floor: u64 = 1;
let ceiling: u64 = 100_000;
for _ in 0..10_000 {
let val = sample_lognormal_percentile_bounded(floor, ceiling, &mut rng);
assert!(
val >= floor && val <= ceiling,
"sample {} outside [{}, {}]",
val,
floor,
ceiling,
);
}
}
// ── Adversarial: extreme sigma (floor very close to ceiling) ────────────
#[test]
fn masking_lognormal_narrow_range_does_not_panic() {
let mut rng = seeded_rng(88);
let floor: u64 = 999;
let ceiling: u64 = 1001;
for _ in 0..10_000 {
let val = sample_lognormal_percentile_bounded(floor, ceiling, &mut rng);
assert!(
val >= floor && val <= ceiling,
"narrow range sample {} outside [{}, {}]",
val,
floor,
ceiling,
);
}
}
// ── Adversarial: u64::MAX ceiling does not overflow ──────────────────────
#[test]
fn masking_lognormal_u64_max_ceiling_no_overflow() {
let mut rng = seeded_rng(123);
let floor: u64 = 1;
let ceiling: u64 = u64::MAX;
for _ in 0..1000 {
let val = sample_lognormal_percentile_bounded(floor, ceiling, &mut rng);
assert!(val >= floor, "sample {} below floor {}", val, floor);
// u64::MAX clamp ensures no overflow
}
}
// ── Adversarial: floor == 0 guard ───────────────────────────────────────
// The function should handle floor=0 gracefully even though callers
// should never pass it. Verifies no panic on ln(0).
#[test]
fn masking_lognormal_floor_zero_no_panic() {
let mut rng = seeded_rng(200);
let val = sample_lognormal_percentile_bounded(0, 1000, &mut rng);
assert!(val <= 1000, "sample {} exceeds ceiling 1000", val);
}
// ── Adversarial: both zero → returns 0 ──────────────────────────────────
#[test]
fn masking_lognormal_both_zero_returns_zero() {
let mut rng = seeded_rng(201);
let val = sample_lognormal_percentile_bounded(0, 0, &mut rng);
assert_eq!(val, 0, "floor=0 ceiling=0 must return 0");
}
// ── Distribution shape: not uniform ─────────────────────────────────────
// A DPI classifier trained on uniform delay samples should detect a
// distribution where > 60% of samples fall in the lower half of the range.
// Log-normal is right-skewed: more samples near floor than ceiling.
#[test]
fn masking_lognormal_distribution_is_right_skewed() {
let mut rng = seeded_rng(42);
let floor: u64 = 100;
let ceiling: u64 = 5000;
let midpoint = (floor + ceiling) / 2;
let samples: Vec<u64> = (0..10_000)
.map(|_| sample_lognormal_percentile_bounded(floor, ceiling, &mut rng))
.collect();
let below_mid = samples.iter().filter(|&&s| s < midpoint).count();
let ratio = below_mid as f64 / samples.len() as f64;
assert!(
ratio > 0.55,
"Log-normal should be right-skewed (>55% below midpoint), got {}%",
ratio * 100.0,
);
}
// ── Determinism: same seed produces same sequence ───────────────────────
#[test]
fn masking_lognormal_deterministic_with_same_seed() {
let mut rng1 = seeded_rng(42);
let mut rng2 = seeded_rng(42);
for _ in 0..100 {
let a = sample_lognormal_percentile_bounded(500, 2000, &mut rng1);
let b = sample_lognormal_percentile_bounded(500, 2000, &mut rng2);
assert_eq!(a, b, "Same seed must produce same output");
}
}
// ── Fuzz: 1000 random (floor, ceiling) pairs, no panics ─────────────────
#[test]
fn masking_lognormal_fuzz_random_params_no_panic() {
use rand::Rng;
let mut rng = seeded_rng(999);
for _ in 0..1000 {
let a: u64 = rng.random_range(0..=10_000);
let b: u64 = rng.random_range(0..=10_000);
let floor = a.min(b);
let ceiling = a.max(b);
let val = sample_lognormal_percentile_bounded(floor, ceiling, &mut rng);
assert!(
val >= floor && val <= ceiling,
"fuzz: sample {} outside [{}, {}]",
val,
floor,
ceiling,
);
}
}
// ── Fuzz: adversarial floor > ceiling pairs ──────────────────────────────
#[test]
fn masking_lognormal_fuzz_inverted_params_no_panic() {
use rand::Rng;
let mut rng = seeded_rng(777);
for _ in 0..500 {
let floor: u64 = rng.random_range(1..=10_000);
let ceiling: u64 = rng.random_range(0..floor);
// When floor > ceiling, must return ceiling (the smaller value)
let val = sample_lognormal_percentile_bounded(floor, ceiling, &mut rng);
assert_eq!(
val, ceiling,
"inverted: floor={} ceiling={} should return ceiling, got {}",
floor, ceiling, val,
);
}
}
// ── Security: clamp spike check ─────────────────────────────────────────
// With well-parameterized sigma, no more than 5% of samples should be
// at exactly floor or exactly ceiling (clamp spikes). A spike > 10%
// is detectable by DPI as bimodal.
#[test]
fn masking_lognormal_no_clamp_spike_at_boundaries() {
let mut rng = seeded_rng(42);
let floor: u64 = 500;
let ceiling: u64 = 2000;
let n = 10_000;
let samples: Vec<u64> = (0..n)
.map(|_| sample_lognormal_percentile_bounded(floor, ceiling, &mut rng))
.collect();
let at_floor = samples.iter().filter(|&&s| s == floor).count();
let at_ceiling = samples.iter().filter(|&&s| s == ceiling).count();
let floor_pct = at_floor as f64 / n as f64;
let ceiling_pct = at_ceiling as f64 / n as f64;
assert!(
floor_pct < 0.05,
"floor clamp spike: {}% of samples at exactly floor (max 5%)",
floor_pct * 100.0,
);
assert!(
ceiling_pct < 0.05,
"ceiling clamp spike: {}% of samples at exactly ceiling (max 5%)",
ceiling_pct * 100.0,
);
}
// ── Integration: mask_outcome_target_budget uses log-normal for path 3 ──
#[tokio::test]
async fn masking_lognormal_integration_budget_within_bounds() {
let mut config = ProxyConfig::default();
config.censorship.mask_timing_normalization_enabled = true;
config.censorship.mask_timing_normalization_floor_ms = 500;
config.censorship.mask_timing_normalization_ceiling_ms = 2000;
for _ in 0..100 {
let budget = mask_outcome_target_budget(&config);
let ms = budget.as_millis() as u64;
assert!(
ms >= 500 && ms <= 2000,
"budget {} ms outside [500, 2000]",
ms,
);
}
}
// ── Integration: floor == 0 path stays uniform (NOT log-normal) ─────────
#[tokio::test]
async fn masking_lognormal_floor_zero_path_stays_uniform() {
let mut config = ProxyConfig::default();
config.censorship.mask_timing_normalization_enabled = true;
config.censorship.mask_timing_normalization_floor_ms = 0;
config.censorship.mask_timing_normalization_ceiling_ms = 1000;
for _ in 0..100 {
let budget = mask_outcome_target_budget(&config);
let ms = budget.as_millis() as u64;
// floor=0 path uses uniform [0, ceiling], not log-normal
assert!(ms <= 1000, "budget {} ms exceeds ceiling 1000", ms);
}
}
// ── Integration: floor > ceiling misconfiguration is safe ───────────────
#[tokio::test]
async fn masking_lognormal_misconfigured_floor_gt_ceiling_safe() {
let mut config = ProxyConfig::default();
config.censorship.mask_timing_normalization_enabled = true;
config.censorship.mask_timing_normalization_floor_ms = 2000;
config.censorship.mask_timing_normalization_ceiling_ms = 500;
let budget = mask_outcome_target_budget(&config);
let ms = budget.as_millis() as u64;
// floor > ceiling: should not exceed the minimum of the two
assert!(
ms <= 2000,
"misconfigured budget {} ms should be bounded",
ms,
);
}
// ── Stress: rapid repeated calls do not panic or starve ─────────────────
#[test]
fn masking_lognormal_stress_rapid_calls_no_panic() {
let mut rng = seeded_rng(42);
for _ in 0..100_000 {
let _ = sample_lognormal_percentile_bounded(100, 5000, &mut rng);
}
}
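Taken together, the tests above specify a sampler whose median sits at the geometric mean of the range, whose distribution is right-skewed with rare boundary clamps, and which degrades safely for degenerate inputs. A self-contained sketch consistent with those properties; the percentile choice (z ≈ 2.575), the xorshift RNG, and Box-Muller are assumptions standing in for the project's actual RNG and distribution code:

```rust
// Percentile-bounded log-normal sampler sketch. Placing mu at the
// geometric mean makes the median exp(mu) = sqrt(floor * ceiling);
// sigma puts the bounds near the ~0.5th/99.5th percentiles so clamp
// spikes at the boundaries stay rare.
fn sample_lognormal_bounded_sketch(floor: u64, ceiling: u64, state: &mut u64) -> u64 {
    if floor >= ceiling {
        // floor == ceiling returns that value; an inverted range
        // returns the smaller bound, matching the misconfiguration tests.
        return floor.min(ceiling);
    }
    let lo = floor.max(1) as f64; // guard ln(0) on the floor == 0 path
    let hi = ceiling as f64;
    let mu = 0.5 * (lo.ln() + hi.ln());
    let sigma = (hi.ln() - lo.ln()) / (2.0 * 2.575);
    // xorshift64 + Box-Muller: std-only stand-ins for a real RNG/Normal.
    let mut uniform = || {
        *state ^= *state << 13;
        *state ^= *state >> 7;
        *state ^= *state << 17;
        (*state >> 11) as f64 / (1u64 << 53) as f64
    };
    let (u1, u2) = (uniform().max(f64::MIN_POSITIVE), uniform());
    let z = (-2.0 * u1.ln()).sqrt() * (std::f64::consts::TAU * u2).cos();
    ((mu + sigma * z).exp().round() as u64).clamp(floor, ceiling)
}

fn main() {
    let mut state = 0x9E37_79B9_7F4A_7C15u64;
    let mut below_mid = 0;
    for _ in 0..5000 {
        let v = sample_lognormal_bounded_sketch(500, 2000, &mut state);
        assert!(v >= 500 && v <= 2000); // envelope holds
        if v < 1250 {
            below_mid += 1;
        }
    }
    assert!(below_mid > 2750); // right-skewed: most mass below the midpoint
    assert_eq!(sample_lognormal_bounded_sketch(1000, 1000, &mut state), 1000);
    assert_eq!(sample_lognormal_bounded_sketch(2000, 500, &mut state), 500);
    assert_eq!(sample_lognormal_bounded_sketch(0, 0, &mut state), 0);
}
```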


@@ -0,0 +1,60 @@
use super::*;
use std::time::{Duration, Instant};
#[test]
fn middle_relay_baseline_public_api_idle_roundtrip_contract() {
let shared = ProxySharedState::new();
clear_relay_idle_pressure_state_for_testing_in_shared(shared.as_ref());
assert!(mark_relay_idle_candidate_for_testing(shared.as_ref(), 7001));
assert_eq!(
oldest_relay_idle_candidate_for_testing(shared.as_ref()),
Some(7001)
);
clear_relay_idle_candidate_for_testing(shared.as_ref(), 7001);
assert_ne!(
oldest_relay_idle_candidate_for_testing(shared.as_ref()),
Some(7001)
);
assert!(mark_relay_idle_candidate_for_testing(shared.as_ref(), 7001));
assert_eq!(
oldest_relay_idle_candidate_for_testing(shared.as_ref()),
Some(7001)
);
clear_relay_idle_pressure_state_for_testing_in_shared(shared.as_ref());
}
#[test]
fn middle_relay_baseline_public_api_desync_window_contract() {
let shared = ProxySharedState::new();
clear_desync_dedup_for_testing_in_shared(shared.as_ref());
let key = 0xDEAD_BEEF_0000_0001u64;
let t0 = Instant::now();
assert!(should_emit_full_desync_for_testing(
shared.as_ref(),
key,
false,
t0
));
assert!(!should_emit_full_desync_for_testing(
shared.as_ref(),
key,
false,
t0 + Duration::from_secs(1)
));
let t1 = t0 + DESYNC_DEDUP_WINDOW + Duration::from_millis(10);
assert!(should_emit_full_desync_for_testing(
shared.as_ref(),
key,
false,
t1
));
clear_desync_dedup_for_testing_in_shared(shared.as_ref());
}
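The desync-window contract above (and the bypass behavior the next file stresses) reduces to a small dedup state machine: the all_full flag always emits without touching the cache; otherwise a key emits once per window, refreshing its timestamp on each emit. A sketch under those assumed semantics, with illustrative names:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Dedup-window sketch of the `should_emit_full_desync` contract as the
// tests describe it; the real code uses shared state and a DashMap.
struct DesyncDedupSketch {
    window: Duration,
    seen: HashMap<u64, Instant>,
}

impl DesyncDedupSketch {
    fn should_emit(&mut self, key: u64, all_full: bool, now: Instant) -> bool {
        if all_full {
            return true; // bypass path: no allocation, no cache mutation
        }
        match self.seen.get(&key) {
            // Inside the window: suppress the duplicate.
            Some(&last) if now.duration_since(last) < self.window => false,
            // First sighting, or window elapsed: emit and refresh.
            _ => {
                self.seen.insert(key, now);
                true
            }
        }
    }
}

fn main() {
    let mut dedup = DesyncDedupSketch { window: Duration::from_secs(5), seen: HashMap::new() };
    let t0 = Instant::now();
    assert!(dedup.should_emit(1, false, t0)); // first event emits
    assert!(!dedup.should_emit(1, false, t0 + Duration::from_secs(1))); // deduped
    assert!(dedup.should_emit(1, false, t0 + Duration::from_secs(5) + Duration::from_millis(10)));
    let len_before = dedup.seen.len();
    assert!(dedup.should_emit(999, true, t0)); // bypass always emits...
    assert_eq!(dedup.seen.len(), len_before); // ...and never grows the cache
}
```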


@@ -5,22 +5,25 @@ use std::thread;
#[test]
fn desync_all_full_bypass_does_not_initialize_or_grow_dedup_cache() {
let _guard = desync_dedup_test_lock()
.lock()
.expect("desync dedup test lock must be available");
clear_desync_dedup_for_testing();
let shared = ProxySharedState::new();
clear_desync_dedup_for_testing_in_shared(shared.as_ref());
let initial_len = DESYNC_DEDUP.get().map(|dedup| dedup.len()).unwrap_or(0);
let initial_len = desync_dedup_len_for_testing(shared.as_ref());
let now = Instant::now();
for i in 0..20_000u64 {
assert!(
should_emit_full_desync(0xD35E_D000_0000_0000u64 ^ i, true, now),
should_emit_full_desync_for_testing(
shared.as_ref(),
0xD35E_D000_0000_0000u64 ^ i,
true,
now
),
"desync_all_full path must always emit"
);
}
let after_len = DESYNC_DEDUP.get().map(|dedup| dedup.len()).unwrap_or(0);
let after_len = desync_dedup_len_for_testing(shared.as_ref());
assert_eq!(
after_len, initial_len,
"desync_all_full bypass must not allocate or accumulate dedup entries"
@@ -29,39 +32,39 @@ fn desync_all_full_bypass_keeps_existing_dedup_entries_unchanged() {
#[test]
fn desync_all_full_bypass_keeps_existing_dedup_entries_unchanged() {
let _guard = desync_dedup_test_lock()
.lock()
.expect("desync dedup test lock must be available");
clear_desync_dedup_for_testing();
let shared = ProxySharedState::new();
clear_desync_dedup_for_testing_in_shared(shared.as_ref());
let dedup = DESYNC_DEDUP.get_or_init(DashMap::new);
let seed_time = Instant::now() - Duration::from_secs(7);
dedup.insert(0xAAAABBBBCCCCDDDD, seed_time);
dedup.insert(0x1111222233334444, seed_time);
desync_dedup_insert_for_testing(shared.as_ref(), 0xAAAABBBBCCCCDDDD, seed_time);
desync_dedup_insert_for_testing(shared.as_ref(), 0x1111222233334444, seed_time);
let now = Instant::now();
for i in 0..2048u64 {
assert!(
should_emit_full_desync(0xF011_F000_0000_0000u64 ^ i, true, now),
should_emit_full_desync_for_testing(
shared.as_ref(),
0xF011_F000_0000_0000u64 ^ i,
true,
now
),
"desync_all_full must bypass suppression and dedup refresh"
);
}
assert_eq!(
dedup.len(),
desync_dedup_len_for_testing(shared.as_ref()),
2,
"bypass path must not mutate dedup cardinality"
);
assert_eq!(
*dedup
.get(&0xAAAABBBBCCCCDDDD)
desync_dedup_get_for_testing(shared.as_ref(), 0xAAAABBBBCCCCDDDD)
.expect("seed key must remain"),
seed_time,
"bypass path must not refresh existing dedup timestamps"
);
assert_eq!(
*dedup
.get(&0x1111222233334444)
desync_dedup_get_for_testing(shared.as_ref(), 0x1111222233334444)
.expect("seed key must remain"),
seed_time,
"bypass path must not touch unrelated dedup entries"
@@ -70,14 +73,13 @@ fn desync_all_full_bypass_keeps_existing_dedup_entries_unchanged() {
#[test]
fn edge_all_full_burst_does_not_poison_later_false_path_tracking() {
let _guard = desync_dedup_test_lock()
.lock()
.expect("desync dedup test lock must be available");
clear_desync_dedup_for_testing();
let shared = ProxySharedState::new();
clear_desync_dedup_for_testing_in_shared(shared.as_ref());
let now = Instant::now();
for i in 0..8192u64 {
assert!(should_emit_full_desync(
assert!(should_emit_full_desync_for_testing(
shared.as_ref(),
0xABCD_0000_0000_0000 ^ i,
true,
now
@@ -86,26 +88,20 @@ fn edge_all_full_burst_does_not_poison_later_false_path_tracking() {
let tracked_key = 0xDEAD_BEEF_0000_0001u64;
assert!(
should_emit_full_desync(tracked_key, false, now),
should_emit_full_desync_for_testing(shared.as_ref(), tracked_key, false, now),
"first false-path event after all_full burst must still be tracked and emitted"
);
let dedup = DESYNC_DEDUP
.get()
.expect("false path should initialize dedup");
assert!(dedup.get(&tracked_key).is_some());
assert!(desync_dedup_get_for_testing(shared.as_ref(), tracked_key).is_some());
}
#[test]
fn adversarial_mixed_sequence_true_steps_never_change_cache_len() {
let _guard = desync_dedup_test_lock()
.lock()
.expect("desync dedup test lock must be available");
clear_desync_dedup_for_testing();
let shared = ProxySharedState::new();
clear_desync_dedup_for_testing_in_shared(shared.as_ref());
let dedup = DESYNC_DEDUP.get_or_init(DashMap::new);
for i in 0..256u64 {
dedup.insert(0x1000_0000_0000_0000 ^ i, Instant::now());
desync_dedup_insert_for_testing(shared.as_ref(), 0x1000_0000_0000_0000 ^ i, Instant::now());
}
let mut seed = 0xC0DE_CAFE_BAAD_F00Du64;
@@ -116,9 +112,14 @@ fn adversarial_mixed_sequence_true_steps_never_change_cache_len() {
let flag_all_full = (seed & 0x1) == 1;
let key = 0x7000_0000_0000_0000u64 ^ i ^ seed;
let before = dedup.len();
let _ = should_emit_full_desync(key, flag_all_full, Instant::now());
let after = dedup.len();
let before = desync_dedup_len_for_testing(shared.as_ref());
let _ = should_emit_full_desync_for_testing(
shared.as_ref(),
key,
flag_all_full,
Instant::now(),
);
let after = desync_dedup_len_for_testing(shared.as_ref());
if flag_all_full {
assert_eq!(after, before, "all_full step must not mutate dedup length");
@@ -128,50 +129,51 @@ fn light_fuzz_all_full_mode_always_emits_and_stays_bounded() {
#[test]
fn light_fuzz_all_full_mode_always_emits_and_stays_bounded() {
let _guard = desync_dedup_test_lock()
.lock()
.expect("desync dedup test lock must be available");
clear_desync_dedup_for_testing();
let shared = ProxySharedState::new();
clear_desync_dedup_for_testing_in_shared(shared.as_ref());
let mut seed = 0x1234_5678_9ABC_DEF0u64;
let before = DESYNC_DEDUP.get().map(|d| d.len()).unwrap_or(0);
let before = desync_dedup_len_for_testing(shared.as_ref());
for _ in 0..20_000 {
seed ^= seed << 7;
seed ^= seed >> 9;
seed ^= seed << 8;
let key = seed ^ 0x55AA_55AA_55AA_55AAu64;
assert!(should_emit_full_desync(key, true, Instant::now()));
assert!(should_emit_full_desync_for_testing(
shared.as_ref(),
key,
true,
Instant::now()
));
}
let after = DESYNC_DEDUP.get().map(|d| d.len()).unwrap_or(0);
let after = desync_dedup_len_for_testing(shared.as_ref());
assert_eq!(after, before);
assert!(after <= DESYNC_DEDUP_MAX_ENTRIES);
}
#[test]
fn stress_parallel_all_full_storm_does_not_grow_or_mutate_cache() {
let _guard = desync_dedup_test_lock()
.lock()
.expect("desync dedup test lock must be available");
clear_desync_dedup_for_testing();
let shared = ProxySharedState::new();
clear_desync_dedup_for_testing_in_shared(shared.as_ref());
let dedup = DESYNC_DEDUP.get_or_init(DashMap::new);
let seed_time = Instant::now() - Duration::from_secs(2);
for i in 0..1024u64 {
dedup.insert(0x8888_0000_0000_0000 ^ i, seed_time);
desync_dedup_insert_for_testing(shared.as_ref(), 0x8888_0000_0000_0000 ^ i, seed_time);
}
let before_len = dedup.len();
let before_len = desync_dedup_len_for_testing(shared.as_ref());
let emits = Arc::new(AtomicUsize::new(0));
let mut workers = Vec::new();
for worker in 0..16u64 {
let emits = Arc::clone(&emits);
let shared = shared.clone();
workers.push(thread::spawn(move || {
let now = Instant::now();
for i in 0..4096u64 {
let key = 0xFACE_0000_0000_0000u64 ^ (worker << 20) ^ i;
if should_emit_full_desync(key, true, now) {
if should_emit_full_desync_for_testing(shared.as_ref(), key, true, now) {
emits.fetch_add(1, Ordering::Relaxed);
}
}
@@ -184,7 +186,7 @@ fn stress_parallel_all_full_storm_does_not_grow_or_mutate_cache() {
assert_eq!(emits.load(Ordering::Relaxed), 16 * 4096);
assert_eq!(
dedup.len(),
desync_dedup_len_for_testing(shared.as_ref()),
before_len,
"parallel all_full storm must not mutate cache len"
);
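The dedup behaviour these tests pin down — first occurrence emits, duplicates inside the window are suppressed, a duplicate exactly at the window boundary re-emits, and the all-full path never mutates the map — can be sketched with plain std types. The constants, names, and cap policy below are illustrative, not the crate's actual implementation:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

const DEDUP_WINDOW: Duration = Duration::from_secs(30); // assumed window length
const DEDUP_MAX_ENTRIES: usize = 4;                     // tiny cap, for illustration only

/// Emit a full record only for the first occurrence of `key` inside the window.
/// When the caller signals the all-full condition, the map is bypassed entirely
/// so a burst can never poison later false-path tracking.
fn should_emit(map: &mut HashMap<u64, Instant>, key: u64, all_full: bool, now: Instant) -> bool {
    if all_full {
        return true; // bypass: never mutate the map under all-full bursts
    }
    match map.get(&key) {
        Some(&last) if now.duration_since(last) < DEDUP_WINDOW => false, // suppressed
        _ => {
            if map.len() < DEDUP_MAX_ENTRIES || map.contains_key(&key) {
                map.insert(key, now); // insert or refresh within the hard cap
            }
            true
        }
    }
}

fn main() {
    let mut map = HashMap::new();
    let base = Instant::now();
    assert!(should_emit(&mut map, 1, false, base)); // first occurrence emits
    assert!(!should_emit(&mut map, 1, false, base)); // duplicate suppressed
    assert!(should_emit(&mut map, 1, false, base + DEDUP_WINDOW)); // boundary re-emits
    let len_before = map.len();
    assert!(should_emit(&mut map, 99, true, base)); // all-full path always emits
    assert_eq!(map.len(), len_before); // ...without touching the cache
    println!("ok");
}
```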


@@ -360,73 +360,103 @@ async fn stress_many_idle_sessions_fail_closed_without_hang() {
#[test]
fn pressure_evicts_oldest_idle_candidate_with_deterministic_ordering() {
let _guard = relay_idle_pressure_test_scope();
clear_relay_idle_pressure_state_for_testing();
let shared = ProxySharedState::new();
clear_relay_idle_pressure_state_for_testing_in_shared(shared.as_ref());
let stats = Stats::new();
assert!(mark_relay_idle_candidate(10));
assert!(mark_relay_idle_candidate(11));
assert_eq!(oldest_relay_idle_candidate(), Some(10));
assert!(mark_relay_idle_candidate_for_testing(shared.as_ref(), 10));
assert!(mark_relay_idle_candidate_for_testing(shared.as_ref(), 11));
assert_eq!(
oldest_relay_idle_candidate_for_testing(shared.as_ref()),
Some(10)
);
note_relay_pressure_event();
note_relay_pressure_event_for_testing(shared.as_ref());
let mut seen_for_newer = 0u64;
assert!(
!maybe_evict_idle_candidate_on_pressure(11, &mut seen_for_newer, &stats),
!maybe_evict_idle_candidate_on_pressure_for_testing(
shared.as_ref(),
11,
&mut seen_for_newer,
&stats
),
"newer idle candidate must not be evicted while older candidate exists"
);
assert_eq!(oldest_relay_idle_candidate(), Some(10));
assert_eq!(
oldest_relay_idle_candidate_for_testing(shared.as_ref()),
Some(10)
);
let mut seen_for_oldest = 0u64;
assert!(
maybe_evict_idle_candidate_on_pressure(10, &mut seen_for_oldest, &stats),
maybe_evict_idle_candidate_on_pressure_for_testing(
shared.as_ref(),
10,
&mut seen_for_oldest,
&stats
),
"oldest idle candidate must be evicted first under pressure"
);
assert_eq!(oldest_relay_idle_candidate(), Some(11));
assert_eq!(
oldest_relay_idle_candidate_for_testing(shared.as_ref()),
Some(11)
);
assert_eq!(stats.get_relay_pressure_evict_total(), 1);
clear_relay_idle_pressure_state_for_testing();
clear_relay_idle_pressure_state_for_testing_in_shared(shared.as_ref());
}
#[test]
fn pressure_does_not_evict_without_new_pressure_signal() {
let _guard = relay_idle_pressure_test_scope();
clear_relay_idle_pressure_state_for_testing();
let shared = ProxySharedState::new();
clear_relay_idle_pressure_state_for_testing_in_shared(shared.as_ref());
let stats = Stats::new();
assert!(mark_relay_idle_candidate(21));
let mut seen = relay_pressure_event_seq();
assert!(mark_relay_idle_candidate_for_testing(shared.as_ref(), 21));
let mut seen = relay_pressure_event_seq_for_testing(shared.as_ref());
assert!(
!maybe_evict_idle_candidate_on_pressure(21, &mut seen, &stats),
!maybe_evict_idle_candidate_on_pressure_for_testing(shared.as_ref(), 21, &mut seen, &stats),
"without new pressure signal, candidate must stay"
);
assert_eq!(stats.get_relay_pressure_evict_total(), 0);
assert_eq!(oldest_relay_idle_candidate(), Some(21));
assert_eq!(
oldest_relay_idle_candidate_for_testing(shared.as_ref()),
Some(21)
);
clear_relay_idle_pressure_state_for_testing();
clear_relay_idle_pressure_state_for_testing_in_shared(shared.as_ref());
}
#[test]
fn stress_pressure_eviction_preserves_fifo_across_many_candidates() {
let _guard = relay_idle_pressure_test_scope();
clear_relay_idle_pressure_state_for_testing();
let shared = ProxySharedState::new();
clear_relay_idle_pressure_state_for_testing_in_shared(shared.as_ref());
let stats = Stats::new();
let mut seen_per_conn = std::collections::HashMap::new();
for conn_id in 1000u64..1064u64 {
assert!(mark_relay_idle_candidate(conn_id));
assert!(mark_relay_idle_candidate_for_testing(
shared.as_ref(),
conn_id
));
seen_per_conn.insert(conn_id, 0u64);
}
for expected in 1000u64..1064u64 {
note_relay_pressure_event();
note_relay_pressure_event_for_testing(shared.as_ref());
let mut seen = *seen_per_conn
.get(&expected)
.expect("per-conn pressure cursor must exist");
assert!(
maybe_evict_idle_candidate_on_pressure(expected, &mut seen, &stats),
maybe_evict_idle_candidate_on_pressure_for_testing(
shared.as_ref(),
expected,
&mut seen,
&stats
),
"expected conn_id {expected} must be evicted next by deterministic FIFO ordering"
);
seen_per_conn.insert(expected, seen);
@@ -436,33 +466,51 @@ fn stress_pressure_eviction_preserves_fifo_across_many_candidates() {
} else {
Some(expected + 1)
};
assert_eq!(oldest_relay_idle_candidate(), next);
assert_eq!(
oldest_relay_idle_candidate_for_testing(shared.as_ref()),
next
);
}
assert_eq!(stats.get_relay_pressure_evict_total(), 64);
clear_relay_idle_pressure_state_for_testing();
clear_relay_idle_pressure_state_for_testing_in_shared(shared.as_ref());
}
#[test]
fn blackhat_single_pressure_event_must_not_evict_more_than_one_candidate() {
let _guard = relay_idle_pressure_test_scope();
clear_relay_idle_pressure_state_for_testing();
let shared = ProxySharedState::new();
clear_relay_idle_pressure_state_for_testing_in_shared(shared.as_ref());
let stats = Stats::new();
assert!(mark_relay_idle_candidate(301));
assert!(mark_relay_idle_candidate(302));
assert!(mark_relay_idle_candidate(303));
assert!(mark_relay_idle_candidate_for_testing(shared.as_ref(), 301));
assert!(mark_relay_idle_candidate_for_testing(shared.as_ref(), 302));
assert!(mark_relay_idle_candidate_for_testing(shared.as_ref(), 303));
let mut seen_301 = 0u64;
let mut seen_302 = 0u64;
let mut seen_303 = 0u64;
// Single pressure event should authorize at most one eviction globally.
note_relay_pressure_event();
note_relay_pressure_event_for_testing(shared.as_ref());
let evicted_301 = maybe_evict_idle_candidate_on_pressure(301, &mut seen_301, &stats);
let evicted_302 = maybe_evict_idle_candidate_on_pressure(302, &mut seen_302, &stats);
let evicted_303 = maybe_evict_idle_candidate_on_pressure(303, &mut seen_303, &stats);
let evicted_301 = maybe_evict_idle_candidate_on_pressure_for_testing(
shared.as_ref(),
301,
&mut seen_301,
&stats,
);
let evicted_302 = maybe_evict_idle_candidate_on_pressure_for_testing(
shared.as_ref(),
302,
&mut seen_302,
&stats,
);
let evicted_303 = maybe_evict_idle_candidate_on_pressure_for_testing(
shared.as_ref(),
303,
&mut seen_303,
&stats,
);
let evicted_total = [evicted_301, evicted_302, evicted_303]
.iter()
@@ -474,30 +522,40 @@ fn blackhat_single_pressure_event_must_not_evict_more_than_one_candidate() {
"single pressure event must not cascade-evict multiple idle candidates"
);
clear_relay_idle_pressure_state_for_testing();
clear_relay_idle_pressure_state_for_testing_in_shared(shared.as_ref());
}
#[test]
fn blackhat_pressure_counter_must_track_global_budget_not_per_session_cursor() {
let _guard = relay_idle_pressure_test_scope();
clear_relay_idle_pressure_state_for_testing();
let shared = ProxySharedState::new();
clear_relay_idle_pressure_state_for_testing_in_shared(shared.as_ref());
let stats = Stats::new();
assert!(mark_relay_idle_candidate(401));
assert!(mark_relay_idle_candidate(402));
assert!(mark_relay_idle_candidate_for_testing(shared.as_ref(), 401));
assert!(mark_relay_idle_candidate_for_testing(shared.as_ref(), 402));
let mut seen_oldest = 0u64;
let mut seen_next = 0u64;
note_relay_pressure_event();
note_relay_pressure_event_for_testing(shared.as_ref());
assert!(
maybe_evict_idle_candidate_on_pressure(401, &mut seen_oldest, &stats),
maybe_evict_idle_candidate_on_pressure_for_testing(
shared.as_ref(),
401,
&mut seen_oldest,
&stats
),
"oldest candidate must consume pressure budget first"
);
assert!(
!maybe_evict_idle_candidate_on_pressure(402, &mut seen_next, &stats),
!maybe_evict_idle_candidate_on_pressure_for_testing(
shared.as_ref(),
402,
&mut seen_next,
&stats
),
"next candidate must not consume the same pressure budget"
);
@@ -507,47 +565,67 @@ fn blackhat_pressure_counter_must_track_global_budget_not_per_session_cursor() {
"single pressure budget must produce exactly one eviction"
);
clear_relay_idle_pressure_state_for_testing();
clear_relay_idle_pressure_state_for_testing_in_shared(shared.as_ref());
}
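The pressure-budget invariants asserted above — one pressure event authorizes at most one eviction globally, only the oldest FIFO candidate may spend it, and pressure observed while the candidate set is empty is invalidated rather than replayed later — can be sketched as two sequence counters guarding a FIFO. Names and types here are illustrative stand-ins for the crate's registry:

```rust
use std::collections::VecDeque;
use std::sync::Mutex;

// Illustrative registry: FIFO of idle candidates plus two sequence counters.
struct Registry {
    fifo: VecDeque<u64>, // idle candidates, oldest first
    event_seq: u64,      // bumped on every pressure event
    consumed_seq: u64,   // advanced when the budget is spent or invalidated
}

fn mark_idle(reg: &Mutex<Registry>, conn_id: u64) -> bool {
    let mut g = reg.lock().unwrap();
    if g.fifo.contains(&conn_id) {
        false
    } else {
        g.fifo.push_back(conn_id);
        true
    }
}

fn note_pressure(reg: &Mutex<Registry>) {
    let mut g = reg.lock().unwrap();
    g.event_seq += 1;
    if g.fifo.is_empty() {
        // Pressure seen with an empty candidate set is consumed immediately,
        // so it can never be replayed against candidates marked later.
        g.consumed_seq = g.event_seq;
    }
}

fn maybe_evict(reg: &Mutex<Registry>, conn_id: u64) -> bool {
    let mut g = reg.lock().unwrap();
    // Only the oldest candidate may spend an unconsumed pressure budget.
    if g.consumed_seq < g.event_seq && g.fifo.front() == Some(&conn_id) {
        g.consumed_seq += 1;
        g.fifo.pop_front();
        true
    } else {
        false
    }
}

fn main() {
    let reg = Mutex::new(Registry { fifo: VecDeque::new(), event_seq: 0, consumed_seq: 0 });
    note_pressure(&reg);              // stale: no candidates existed yet
    assert!(mark_idle(&reg, 501));
    assert!(!maybe_evict(&reg, 501)); // stale pressure must not evict
    assert!(mark_idle(&reg, 502));
    note_pressure(&reg);              // one live budget
    assert!(!maybe_evict(&reg, 502)); // newer candidate: not the FIFO front
    assert!(maybe_evict(&reg, 501));  // oldest consumes the single budget
    assert!(!maybe_evict(&reg, 502)); // budget already spent
    println!("ok");
}
```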
#[test]
fn blackhat_stale_pressure_before_idle_mark_must_not_trigger_eviction() {
let _guard = relay_idle_pressure_test_scope();
clear_relay_idle_pressure_state_for_testing();
let shared = ProxySharedState::new();
clear_relay_idle_pressure_state_for_testing_in_shared(shared.as_ref());
let stats = Stats::new();
// Pressure happened before any idle candidate existed.
note_relay_pressure_event();
assert!(mark_relay_idle_candidate(501));
note_relay_pressure_event_for_testing(shared.as_ref());
assert!(mark_relay_idle_candidate_for_testing(shared.as_ref(), 501));
let mut seen = 0u64;
assert!(
!maybe_evict_idle_candidate_on_pressure(501, &mut seen, &stats),
!maybe_evict_idle_candidate_on_pressure_for_testing(
shared.as_ref(),
501,
&mut seen,
&stats
),
"stale pressure (before soft-idle mark) must not evict newly marked candidate"
);
clear_relay_idle_pressure_state_for_testing();
clear_relay_idle_pressure_state_for_testing_in_shared(shared.as_ref());
}
#[test]
fn blackhat_stale_pressure_must_not_evict_any_of_newly_marked_batch() {
let _guard = relay_idle_pressure_test_scope();
clear_relay_idle_pressure_state_for_testing();
let shared = ProxySharedState::new();
clear_relay_idle_pressure_state_for_testing_in_shared(shared.as_ref());
let stats = Stats::new();
note_relay_pressure_event();
assert!(mark_relay_idle_candidate(511));
assert!(mark_relay_idle_candidate(512));
assert!(mark_relay_idle_candidate(513));
note_relay_pressure_event_for_testing(shared.as_ref());
assert!(mark_relay_idle_candidate_for_testing(shared.as_ref(), 511));
assert!(mark_relay_idle_candidate_for_testing(shared.as_ref(), 512));
assert!(mark_relay_idle_candidate_for_testing(shared.as_ref(), 513));
let mut seen_511 = 0u64;
let mut seen_512 = 0u64;
let mut seen_513 = 0u64;
let evicted = [
maybe_evict_idle_candidate_on_pressure(511, &mut seen_511, &stats),
maybe_evict_idle_candidate_on_pressure(512, &mut seen_512, &stats),
maybe_evict_idle_candidate_on_pressure(513, &mut seen_513, &stats),
maybe_evict_idle_candidate_on_pressure_for_testing(
shared.as_ref(),
511,
&mut seen_511,
&stats,
),
maybe_evict_idle_candidate_on_pressure_for_testing(
shared.as_ref(),
512,
&mut seen_512,
&stats,
),
maybe_evict_idle_candidate_on_pressure_for_testing(
shared.as_ref(),
513,
&mut seen_513,
&stats,
),
]
.iter()
.filter(|value| **value)
@@ -558,111 +636,118 @@ fn blackhat_stale_pressure_must_not_evict_any_of_newly_marked_batch() {
"stale pressure event must not evict any candidate from a newly marked batch"
);
clear_relay_idle_pressure_state_for_testing();
clear_relay_idle_pressure_state_for_testing_in_shared(shared.as_ref());
}
#[test]
fn blackhat_stale_pressure_seen_without_candidates_must_be_globally_invalidated() {
let _guard = relay_idle_pressure_test_scope();
clear_relay_idle_pressure_state_for_testing();
let shared = ProxySharedState::new();
clear_relay_idle_pressure_state_for_testing_in_shared(shared.as_ref());
let stats = Stats::new();
note_relay_pressure_event();
note_relay_pressure_event_for_testing(shared.as_ref());
// Session A observed pressure while there were no candidates.
let mut seen_a = 0u64;
assert!(
!maybe_evict_idle_candidate_on_pressure(999_001, &mut seen_a, &stats),
!maybe_evict_idle_candidate_on_pressure_for_testing(
shared.as_ref(),
999_001,
&mut seen_a,
&stats
),
"no candidate existed, so no eviction is possible"
);
// Candidate appears later; Session B must not be able to consume stale pressure.
assert!(mark_relay_idle_candidate(521));
assert!(mark_relay_idle_candidate_for_testing(shared.as_ref(), 521));
let mut seen_b = 0u64;
assert!(
!maybe_evict_idle_candidate_on_pressure(521, &mut seen_b, &stats),
!maybe_evict_idle_candidate_on_pressure_for_testing(
shared.as_ref(),
521,
&mut seen_b,
&stats
),
"once pressure is observed with empty candidate set, it must not be replayed later"
);
clear_relay_idle_pressure_state_for_testing();
clear_relay_idle_pressure_state_for_testing_in_shared(shared.as_ref());
}
#[test]
fn blackhat_stale_pressure_must_not_survive_candidate_churn() {
let _guard = relay_idle_pressure_test_scope();
clear_relay_idle_pressure_state_for_testing();
let shared = ProxySharedState::new();
clear_relay_idle_pressure_state_for_testing_in_shared(shared.as_ref());
let stats = Stats::new();
note_relay_pressure_event();
assert!(mark_relay_idle_candidate(531));
clear_relay_idle_candidate(531);
assert!(mark_relay_idle_candidate(532));
note_relay_pressure_event_for_testing(shared.as_ref());
assert!(mark_relay_idle_candidate_for_testing(shared.as_ref(), 531));
clear_relay_idle_candidate_for_testing(shared.as_ref(), 531);
assert!(mark_relay_idle_candidate_for_testing(shared.as_ref(), 532));
let mut seen = 0u64;
assert!(
!maybe_evict_idle_candidate_on_pressure(532, &mut seen, &stats),
!maybe_evict_idle_candidate_on_pressure_for_testing(
shared.as_ref(),
532,
&mut seen,
&stats
),
"stale pressure must not survive clear+remark churn cycles"
);
clear_relay_idle_pressure_state_for_testing();
clear_relay_idle_pressure_state_for_testing_in_shared(shared.as_ref());
}
#[test]
fn blackhat_pressure_seq_saturation_must_not_disable_future_pressure_accounting() {
let _guard = relay_idle_pressure_test_scope();
clear_relay_idle_pressure_state_for_testing();
let shared = ProxySharedState::new();
clear_relay_idle_pressure_state_for_testing_in_shared(shared.as_ref());
{
let mut guard = relay_idle_candidate_registry()
.lock()
.expect("registry lock must be available");
guard.pressure_event_seq = u64::MAX;
guard.pressure_consumed_seq = u64::MAX - 1;
set_relay_pressure_state_for_testing(shared.as_ref(), u64::MAX, u64::MAX - 1);
}
// A new pressure event should still be representable; saturating at MAX creates a permanent lockout.
note_relay_pressure_event();
let after = relay_pressure_event_seq();
note_relay_pressure_event_for_testing(shared.as_ref());
let after = relay_pressure_event_seq_for_testing(shared.as_ref());
assert_ne!(
after,
u64::MAX,
"pressure sequence saturation must not permanently freeze event progression"
);
clear_relay_idle_pressure_state_for_testing();
clear_relay_idle_pressure_state_for_testing_in_shared(shared.as_ref());
}
#[test]
fn blackhat_pressure_seq_saturation_must_not_break_multiple_distinct_events() {
let _guard = relay_idle_pressure_test_scope();
clear_relay_idle_pressure_state_for_testing();
let shared = ProxySharedState::new();
clear_relay_idle_pressure_state_for_testing_in_shared(shared.as_ref());
{
let mut guard = relay_idle_candidate_registry()
.lock()
.expect("registry lock must be available");
guard.pressure_event_seq = u64::MAX;
guard.pressure_consumed_seq = u64::MAX;
set_relay_pressure_state_for_testing(shared.as_ref(), u64::MAX, u64::MAX);
}
note_relay_pressure_event();
let first = relay_pressure_event_seq();
note_relay_pressure_event();
let second = relay_pressure_event_seq();
note_relay_pressure_event_for_testing(shared.as_ref());
let first = relay_pressure_event_seq_for_testing(shared.as_ref());
note_relay_pressure_event_for_testing(shared.as_ref());
let second = relay_pressure_event_seq_for_testing(shared.as_ref());
assert!(
second > first,
"distinct pressure events must remain distinguishable even at sequence boundary"
);
clear_relay_idle_pressure_state_for_testing();
clear_relay_idle_pressure_state_for_testing_in_shared(shared.as_ref());
}
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
async fn integration_race_single_pressure_event_allows_at_most_one_eviction_under_parallel_claims()
{
let _guard = relay_idle_pressure_test_scope();
clear_relay_idle_pressure_state_for_testing();
let shared = ProxySharedState::new();
clear_relay_idle_pressure_state_for_testing_in_shared(shared.as_ref());
let stats = Arc::new(Stats::new());
let sessions = 16usize;
@@ -671,20 +756,28 @@ async fn integration_race_single_pressure_event_allows_at_most_one_eviction_unde
let mut seen_per_session = vec![0u64; sessions];
for conn_id in &conn_ids {
assert!(mark_relay_idle_candidate(*conn_id));
assert!(mark_relay_idle_candidate_for_testing(
shared.as_ref(),
*conn_id
));
}
for round in 0..rounds {
note_relay_pressure_event();
note_relay_pressure_event_for_testing(shared.as_ref());
let mut joins = Vec::with_capacity(sessions);
for (idx, conn_id) in conn_ids.iter().enumerate() {
let mut seen = seen_per_session[idx];
let conn_id = *conn_id;
let stats = stats.clone();
let shared = shared.clone();
joins.push(tokio::spawn(async move {
let evicted =
maybe_evict_idle_candidate_on_pressure(conn_id, &mut seen, stats.as_ref());
let evicted = maybe_evict_idle_candidate_on_pressure_for_testing(
shared.as_ref(),
conn_id,
&mut seen,
stats.as_ref(),
);
(idx, conn_id, seen, evicted)
}));
}
@@ -706,7 +799,7 @@ async fn integration_race_single_pressure_event_allows_at_most_one_eviction_unde
);
if let Some(conn) = evicted_conn {
assert!(
mark_relay_idle_candidate(conn),
mark_relay_idle_candidate_for_testing(shared.as_ref(), conn),
"round {round}: evicted conn must be re-markable as idle candidate"
);
}
@@ -721,13 +814,13 @@ async fn integration_race_single_pressure_event_allows_at_most_one_eviction_unde
"parallel race must still observe at least one successful eviction"
);
clear_relay_idle_pressure_state_for_testing();
clear_relay_idle_pressure_state_for_testing_in_shared(shared.as_ref());
}
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
async fn integration_race_burst_pressure_with_churn_preserves_empty_set_invalidation_and_budget() {
let _guard = relay_idle_pressure_test_scope();
clear_relay_idle_pressure_state_for_testing();
let shared = ProxySharedState::new();
clear_relay_idle_pressure_state_for_testing_in_shared(shared.as_ref());
let stats = Arc::new(Stats::new());
let sessions = 12usize;
@@ -736,7 +829,10 @@ async fn integration_race_burst_pressure_with_churn_preserves_empty_set_invalida
let mut seen_per_session = vec![0u64; sessions];
for conn_id in &conn_ids {
assert!(mark_relay_idle_candidate(*conn_id));
assert!(mark_relay_idle_candidate_for_testing(
shared.as_ref(),
*conn_id
));
}
let mut expected_total_evictions = 0u64;
@@ -745,20 +841,25 @@ async fn integration_race_burst_pressure_with_churn_preserves_empty_set_invalida
let empty_phase = round % 5 == 0;
if empty_phase {
for conn_id in &conn_ids {
clear_relay_idle_candidate(*conn_id);
clear_relay_idle_candidate_for_testing(shared.as_ref(), *conn_id);
}
}
note_relay_pressure_event();
note_relay_pressure_event_for_testing(shared.as_ref());
let mut joins = Vec::with_capacity(sessions);
for (idx, conn_id) in conn_ids.iter().enumerate() {
let mut seen = seen_per_session[idx];
let conn_id = *conn_id;
let stats = stats.clone();
let shared = shared.clone();
joins.push(tokio::spawn(async move {
let evicted =
maybe_evict_idle_candidate_on_pressure(conn_id, &mut seen, stats.as_ref());
let evicted = maybe_evict_idle_candidate_on_pressure_for_testing(
shared.as_ref(),
conn_id,
&mut seen,
stats.as_ref(),
);
(idx, conn_id, seen, evicted)
}));
}
@@ -780,7 +881,10 @@ async fn integration_race_burst_pressure_with_churn_preserves_empty_set_invalida
"round {round}: empty candidate phase must not allow stale-pressure eviction"
);
for conn_id in &conn_ids {
assert!(mark_relay_idle_candidate(*conn_id));
assert!(mark_relay_idle_candidate_for_testing(
shared.as_ref(),
*conn_id
));
}
} else {
assert!(
@@ -789,7 +893,10 @@ async fn integration_race_burst_pressure_with_churn_preserves_empty_set_invalida
);
if let Some(conn_id) = evicted_conn {
expected_total_evictions = expected_total_evictions.saturating_add(1);
assert!(mark_relay_idle_candidate(conn_id));
assert!(mark_relay_idle_candidate_for_testing(
shared.as_ref(),
conn_id
));
}
}
}
@@ -800,5 +907,5 @@ async fn integration_race_burst_pressure_with_churn_preserves_empty_set_invalida
"global pressure eviction counter must match observed per-round successful consumes"
);
clear_relay_idle_pressure_state_for_testing();
clear_relay_idle_pressure_state_for_testing_in_shared(shared.as_ref());
}


@@ -3,12 +3,13 @@ use std::panic::{AssertUnwindSafe, catch_unwind};
#[test]
fn blackhat_registry_poison_recovers_with_fail_closed_reset_and_pressure_accounting() {
let _guard = relay_idle_pressure_test_scope();
clear_relay_idle_pressure_state_for_testing();
let shared = ProxySharedState::new();
clear_relay_idle_pressure_state_for_testing_in_shared(shared.as_ref());
let _ = catch_unwind(AssertUnwindSafe(|| {
let registry = relay_idle_candidate_registry();
let mut guard = registry
let mut guard = shared
.middle_relay
.relay_idle_registry
.lock()
.expect("registry lock must be acquired before poison");
guard.by_conn_id.insert(
@@ -23,40 +24,50 @@ fn blackhat_registry_poison_recovers_with_fail_closed_reset_and_pressure_account
}));
// Helper lock must recover from poison, reset stale state, and continue.
assert!(mark_relay_idle_candidate(42));
assert_eq!(oldest_relay_idle_candidate(), Some(42));
assert!(mark_relay_idle_candidate_for_testing(shared.as_ref(), 42));
assert_eq!(
oldest_relay_idle_candidate_for_testing(shared.as_ref()),
Some(42)
);
let before = relay_pressure_event_seq();
note_relay_pressure_event();
let after = relay_pressure_event_seq();
let before = relay_pressure_event_seq_for_testing(shared.as_ref());
note_relay_pressure_event_for_testing(shared.as_ref());
let after = relay_pressure_event_seq_for_testing(shared.as_ref());
assert!(
after > before,
"pressure accounting must still advance after poison"
);
clear_relay_idle_pressure_state_for_testing();
clear_relay_idle_pressure_state_for_testing_in_shared(shared.as_ref());
}
#[test]
fn clear_state_helper_must_reset_poisoned_registry_for_deterministic_fifo_tests() {
let _guard = relay_idle_pressure_test_scope();
clear_relay_idle_pressure_state_for_testing();
let shared = ProxySharedState::new();
clear_relay_idle_pressure_state_for_testing_in_shared(shared.as_ref());
let _ = catch_unwind(AssertUnwindSafe(|| {
let registry = relay_idle_candidate_registry();
let _guard = registry
let _guard = shared
.middle_relay
.relay_idle_registry
.lock()
.expect("registry lock must be acquired before poison");
panic!("intentional poison while lock held");
}));
clear_relay_idle_pressure_state_for_testing();
clear_relay_idle_pressure_state_for_testing_in_shared(shared.as_ref());
assert_eq!(oldest_relay_idle_candidate(), None);
assert_eq!(relay_pressure_event_seq(), 0);
assert_eq!(
oldest_relay_idle_candidate_for_testing(shared.as_ref()),
None
);
assert_eq!(relay_pressure_event_seq_for_testing(shared.as_ref()), 0);
assert!(mark_relay_idle_candidate(7));
assert_eq!(oldest_relay_idle_candidate(), Some(7));
assert!(mark_relay_idle_candidate_for_testing(shared.as_ref(), 7));
assert_eq!(
oldest_relay_idle_candidate_for_testing(shared.as_ref()),
Some(7)
);
clear_relay_idle_pressure_state_for_testing();
clear_relay_idle_pressure_state_for_testing_in_shared(shared.as_ref());
}
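The recovery behaviour these tests pin down — a poisoned registry lock resets to a fail-closed empty state instead of propagating the panic — follows the standard `PoisonError::into_inner` pattern. The `Vec<u64>` registry below is an illustrative stand-in for the real registry type:

```rust
use std::panic::{AssertUnwindSafe, catch_unwind};
use std::sync::{Mutex, MutexGuard};

/// Lock helper that recovers from poison by resetting to a fail-closed default,
/// mirroring the "recover, reset stale state, continue" behaviour the tests expect.
fn lock_or_reset(m: &Mutex<Vec<u64>>) -> MutexGuard<'_, Vec<u64>> {
    m.lock().unwrap_or_else(|poisoned| {
        let mut guard = poisoned.into_inner();
        guard.clear(); // fail closed: drop whatever state the panicking holder left behind
        guard
    })
}

fn main() {
    let m = Mutex::new(vec![42]);
    let _ = catch_unwind(AssertUnwindSafe(|| {
        let _g = m.lock().expect("lock must be acquired before poison");
        panic!("intentional poison while lock held");
    }));
    assert!(m.is_poisoned());
    let mut g = lock_or_reset(&m);
    assert!(g.is_empty()); // stale state was reset
    g.push(7);             // and the registry is usable again
    assert_eq!(g.last(), Some(&7));
    println!("ok");
}
```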


@@ -1,6 +1,6 @@
use super::*;
use crate::stats::Stats;
use crate::stream::BufferPool;
use std::collections::HashSet;
use std::sync::Arc;
use tokio::time::{Duration as TokioDuration, timeout};
@@ -15,32 +15,30 @@ fn make_pooled_payload(data: &[u8]) -> PooledBuffer {
#[test]
#[ignore = "Tracking for M-04: Verify should_emit_full_desync returns true on first occurrence and false on duplicate within window"]
fn should_emit_full_desync_filters_duplicates() {
let _guard = desync_dedup_test_lock()
.lock()
.expect("desync dedup test lock must be available");
clear_desync_dedup_for_testing();
let shared = ProxySharedState::new();
clear_desync_dedup_for_testing_in_shared(shared.as_ref());
let key = 0x4D04_0000_0000_0001_u64;
let base = Instant::now();
assert!(
should_emit_full_desync(key, false, base),
should_emit_full_desync_for_testing(shared.as_ref(), key, false, base),
"first occurrence must emit full forensic record"
);
assert!(
!should_emit_full_desync(key, false, base),
!should_emit_full_desync_for_testing(shared.as_ref(), key, false, base),
"duplicate at same timestamp must be suppressed"
);
let within_window = base + DESYNC_DEDUP_WINDOW - TokioDuration::from_millis(1);
assert!(
!should_emit_full_desync(key, false, within_window),
!should_emit_full_desync_for_testing(shared.as_ref(), key, false, within_window),
"duplicate strictly inside dedup window must stay suppressed"
);
let on_window_edge = base + DESYNC_DEDUP_WINDOW;
assert!(
should_emit_full_desync(key, false, on_window_edge),
should_emit_full_desync_for_testing(shared.as_ref(), key, false, on_window_edge),
"duplicate at window boundary must re-emit and refresh"
);
}
@@ -48,39 +46,34 @@ fn should_emit_full_desync_filters_duplicates() {
#[test]
#[ignore = "Tracking for M-04: Verify desync dedup eviction behaves correctly under map-full condition"]
fn desync_dedup_eviction_under_map_full_condition() {
let _guard = desync_dedup_test_lock()
.lock()
.expect("desync dedup test lock must be available");
clear_desync_dedup_for_testing();
let shared = ProxySharedState::new();
clear_desync_dedup_for_testing_in_shared(shared.as_ref());
let base = Instant::now();
for key in 0..DESYNC_DEDUP_MAX_ENTRIES as u64 {
assert!(
should_emit_full_desync(key, false, base),
should_emit_full_desync_for_testing(shared.as_ref(), key, false, base),
"unique key should be inserted while warming dedup cache"
);
}
let dedup = DESYNC_DEDUP
.get()
.expect("dedup map must exist after warm-up insertions");
assert_eq!(
dedup.len(),
desync_dedup_len_for_testing(shared.as_ref()),
DESYNC_DEDUP_MAX_ENTRIES,
"cache warm-up must reach exact hard cap"
);
let before_keys: HashSet<u64> = dedup.iter().map(|entry| *entry.key()).collect();
let before_keys = desync_dedup_keys_for_testing(shared.as_ref());
let newcomer_key = 0x4D04_FFFF_FFFF_0001_u64;
assert!(
should_emit_full_desync(newcomer_key, false, base),
should_emit_full_desync_for_testing(shared.as_ref(), newcomer_key, false, base),
"first newcomer at map-full must emit under bounded full-cache gate"
);
let after_keys: HashSet<u64> = dedup.iter().map(|entry| *entry.key()).collect();
let after_keys = desync_dedup_keys_for_testing(shared.as_ref());
assert_eq!(
dedup.len(),
desync_dedup_len_for_testing(shared.as_ref()),
DESYNC_DEDUP_MAX_ENTRIES,
"map-full insertion must preserve hard capacity bound"
);
@@ -101,7 +94,7 @@ fn desync_dedup_eviction_under_map_full_condition() {
);
assert!(
!should_emit_full_desync(newcomer_key, false, base),
!should_emit_full_desync_for_testing(shared.as_ref(), newcomer_key, false, base),
"immediate duplicate newcomer must remain suppressed"
);
}
@@ -119,6 +112,7 @@ async fn c2me_channel_full_path_yields_then_sends() {
.expect("priming queue with one frame must succeed");
let tx2 = tx.clone();
let stats = Stats::default();
let producer = tokio::spawn(async move {
enqueue_c2me_command(
&tx2,
@@ -126,6 +120,8 @@ async fn c2me_channel_full_path_yields_then_sends() {
payload: make_pooled_payload(&[0xBB, 0xCC]),
flags: 2,
},
None,
&stats,
)
.await
});


@@ -0,0 +1,674 @@
use crate::proxy::client::handle_client_stream_with_shared;
use crate::proxy::handshake::{
auth_probe_fail_streak_for_testing_in_shared, auth_probe_is_throttled_for_testing_in_shared,
auth_probe_record_failure_for_testing, clear_auth_probe_state_for_testing_in_shared,
clear_unknown_sni_warn_state_for_testing_in_shared, clear_warned_secrets_for_testing_in_shared,
should_emit_unknown_sni_warn_for_testing_in_shared, warned_secrets_for_testing_in_shared,
};
use crate::proxy::middle_relay::{
clear_desync_dedup_for_testing_in_shared, clear_relay_idle_candidate_for_testing,
clear_relay_idle_pressure_state_for_testing_in_shared, mark_relay_idle_candidate_for_testing,
maybe_evict_idle_candidate_on_pressure_for_testing, note_relay_pressure_event_for_testing,
oldest_relay_idle_candidate_for_testing, relay_idle_mark_seq_for_testing,
relay_pressure_event_seq_for_testing, should_emit_full_desync_for_testing,
};
use crate::proxy::route_mode::{RelayRouteMode, RouteRuntimeController};
use crate::proxy::shared_state::ProxySharedState;
use crate::{
config::{ProxyConfig, UpstreamConfig, UpstreamType},
crypto::SecureRandom,
ip_tracker::UserIpTracker,
stats::{ReplayChecker, Stats, beobachten::BeobachtenStore},
stream::BufferPool,
transport::UpstreamManager,
};
use std::net::{IpAddr, Ipv4Addr};
use std::sync::Arc;
use std::time::{Duration, Instant};
use tokio::io::{AsyncWriteExt, duplex};
use tokio::sync::Barrier;
struct ClientHarness {
config: Arc<ProxyConfig>,
stats: Arc<Stats>,
upstream_manager: Arc<UpstreamManager>,
replay_checker: Arc<ReplayChecker>,
buffer_pool: Arc<BufferPool>,
rng: Arc<SecureRandom>,
route_runtime: Arc<RouteRuntimeController>,
ip_tracker: Arc<UserIpTracker>,
beobachten: Arc<BeobachtenStore>,
}
fn new_client_harness() -> ClientHarness {
let mut cfg = ProxyConfig::default();
cfg.censorship.mask = false;
cfg.general.modes.classic = true;
cfg.general.modes.secure = true;
let config = Arc::new(cfg);
let stats = Arc::new(Stats::new());
let upstream_manager = Arc::new(UpstreamManager::new(
vec![UpstreamConfig {
upstream_type: UpstreamType::Direct {
interface: None,
bind_addresses: None,
},
weight: 1,
enabled: true,
scopes: String::new(),
selected_scope: String::new(),
}],
1,
1,
1,
10,
1,
false,
stats.clone(),
));
ClientHarness {
config,
stats,
upstream_manager,
replay_checker: Arc::new(ReplayChecker::new(128, Duration::from_secs(60))),
buffer_pool: Arc::new(BufferPool::new()),
rng: Arc::new(SecureRandom::new()),
route_runtime: Arc::new(RouteRuntimeController::new(RelayRouteMode::Direct)),
ip_tracker: Arc::new(UserIpTracker::new()),
beobachten: Arc::new(BeobachtenStore::new()),
}
}
async fn drive_invalid_mtproto_handshake(
shared: Arc<ProxySharedState>,
peer: std::net::SocketAddr,
) {
let harness = new_client_harness();
let (server_side, mut client_side) = duplex(4096);
let invalid = [0u8; 64];
let task = tokio::spawn(handle_client_stream_with_shared(
server_side,
peer,
harness.config,
harness.stats,
harness.upstream_manager,
harness.replay_checker,
harness.buffer_pool,
harness.rng,
None,
harness.route_runtime,
None,
harness.ip_tracker,
harness.beobachten,
shared,
false,
));
client_side
.write_all(&invalid)
.await
.expect("failed to write invalid handshake");
client_side
.shutdown()
.await
.expect("failed to shutdown client");
let _ = tokio::time::timeout(Duration::from_secs(3), task)
.await
.expect("client task timed out")
.expect("client task join failed");
}
#[test]
fn proxy_shared_state_two_instances_do_not_share_auth_probe_state() {
let a = ProxySharedState::new();
let b = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(a.as_ref());
let ip = IpAddr::V4(Ipv4Addr::new(198, 51, 100, 10));
auth_probe_record_failure_for_testing(a.as_ref(), ip, Instant::now());
assert_eq!(
auth_probe_fail_streak_for_testing_in_shared(a.as_ref(), ip),
Some(1)
);
assert_eq!(
auth_probe_fail_streak_for_testing_in_shared(b.as_ref(), ip),
None
);
}
#[test]
fn proxy_shared_state_two_instances_do_not_share_desync_dedup() {
let a = ProxySharedState::new();
let b = ProxySharedState::new();
clear_desync_dedup_for_testing_in_shared(a.as_ref());
let now = Instant::now();
let key = 0xA5A5_u64;
assert!(should_emit_full_desync_for_testing(
a.as_ref(),
key,
false,
now
));
assert!(should_emit_full_desync_for_testing(
b.as_ref(),
key,
false,
now
));
}
#[test]
fn proxy_shared_state_two_instances_do_not_share_idle_registry() {
let a = ProxySharedState::new();
let b = ProxySharedState::new();
clear_relay_idle_pressure_state_for_testing_in_shared(a.as_ref());
assert!(mark_relay_idle_candidate_for_testing(a.as_ref(), 111));
assert_eq!(
oldest_relay_idle_candidate_for_testing(a.as_ref()),
Some(111)
);
assert_eq!(oldest_relay_idle_candidate_for_testing(b.as_ref()), None);
}
#[test]
fn proxy_shared_state_reset_in_one_instance_does_not_affect_another() {
let a = ProxySharedState::new();
let b = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(a.as_ref());
let ip_a = IpAddr::V4(Ipv4Addr::new(203, 0, 113, 1));
let ip_b = IpAddr::V4(Ipv4Addr::new(203, 0, 113, 2));
let now = Instant::now();
auth_probe_record_failure_for_testing(a.as_ref(), ip_a, now);
auth_probe_record_failure_for_testing(b.as_ref(), ip_b, now);
clear_auth_probe_state_for_testing_in_shared(a.as_ref());
assert_eq!(
auth_probe_fail_streak_for_testing_in_shared(a.as_ref(), ip_a),
None
);
assert_eq!(
auth_probe_fail_streak_for_testing_in_shared(b.as_ref(), ip_b),
Some(1)
);
}
#[test]
fn proxy_shared_state_parallel_auth_probe_updates_stay_per_instance() {
let a = ProxySharedState::new();
let b = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(a.as_ref());
let ip = IpAddr::V4(Ipv4Addr::new(192, 0, 2, 77));
let now = Instant::now();
for _ in 0..5 {
auth_probe_record_failure_for_testing(a.as_ref(), ip, now);
}
for _ in 0..3 {
auth_probe_record_failure_for_testing(b.as_ref(), ip, now + Duration::from_millis(1));
}
assert_eq!(
auth_probe_fail_streak_for_testing_in_shared(a.as_ref(), ip),
Some(5)
);
assert_eq!(
auth_probe_fail_streak_for_testing_in_shared(b.as_ref(), ip),
Some(3)
);
}
#[tokio::test]
async fn proxy_shared_state_client_pipeline_records_probe_failures_in_instance_state() {
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let peer_ip = IpAddr::V4(Ipv4Addr::new(198, 51, 100, 200));
let peer = std::net::SocketAddr::new(peer_ip, 54001);
drive_invalid_mtproto_handshake(shared.clone(), peer).await;
assert_eq!(
auth_probe_fail_streak_for_testing_in_shared(shared.as_ref(), peer_ip),
Some(1),
"invalid handshake in client pipeline must update injected shared auth-probe state"
);
}
#[tokio::test]
async fn proxy_shared_state_client_pipeline_keeps_auth_probe_isolated_between_instances() {
let shared_a = ProxySharedState::new();
let shared_b = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared_a.as_ref());
clear_auth_probe_state_for_testing_in_shared(shared_b.as_ref());
let peer_a_ip = IpAddr::V4(Ipv4Addr::new(203, 0, 113, 210));
let peer_b_ip = IpAddr::V4(Ipv4Addr::new(203, 0, 113, 211));
drive_invalid_mtproto_handshake(
shared_a.clone(),
std::net::SocketAddr::new(peer_a_ip, 54110),
)
.await;
drive_invalid_mtproto_handshake(
shared_b.clone(),
std::net::SocketAddr::new(peer_b_ip, 54111),
)
.await;
assert_eq!(
auth_probe_fail_streak_for_testing_in_shared(shared_a.as_ref(), peer_a_ip),
Some(1)
);
assert_eq!(
auth_probe_fail_streak_for_testing_in_shared(shared_b.as_ref(), peer_b_ip),
Some(1)
);
assert_eq!(
auth_probe_fail_streak_for_testing_in_shared(shared_a.as_ref(), peer_b_ip),
None
);
assert_eq!(
auth_probe_fail_streak_for_testing_in_shared(shared_b.as_ref(), peer_a_ip),
None
);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
async fn proxy_shared_state_client_pipeline_high_contention_same_ip_stays_lossless_per_instance() {
let shared_a = ProxySharedState::new();
let shared_b = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared_a.as_ref());
clear_auth_probe_state_for_testing_in_shared(shared_b.as_ref());
let ip = IpAddr::V4(Ipv4Addr::new(198, 51, 100, 250));
let workers = 48u16;
let barrier = Arc::new(Barrier::new((workers as usize) * 2));
let mut tasks = Vec::new();
for i in 0..workers {
let shared_a = shared_a.clone();
let barrier_a = barrier.clone();
let peer_a = std::net::SocketAddr::new(ip, 56000 + i);
tasks.push(tokio::spawn(async move {
barrier_a.wait().await;
drive_invalid_mtproto_handshake(shared_a, peer_a).await;
}));
let shared_b = shared_b.clone();
let barrier_b = barrier.clone();
let peer_b = std::net::SocketAddr::new(ip, 56100 + i);
tasks.push(tokio::spawn(async move {
barrier_b.wait().await;
drive_invalid_mtproto_handshake(shared_b, peer_b).await;
}));
}
for task in tasks {
task.await.expect("pipeline task join failed");
}
let streak_a = auth_probe_fail_streak_for_testing_in_shared(shared_a.as_ref(), ip)
.expect("instance A must track probe failures");
let streak_b = auth_probe_fail_streak_for_testing_in_shared(shared_b.as_ref(), ip)
.expect("instance B must track probe failures");
assert!(streak_a > 0);
assert!(streak_b > 0);
clear_auth_probe_state_for_testing_in_shared(shared_a.as_ref());
assert_eq!(
auth_probe_fail_streak_for_testing_in_shared(shared_a.as_ref(), ip),
None,
"clearing one instance must reset only that instance"
);
assert!(
auth_probe_fail_streak_for_testing_in_shared(shared_b.as_ref(), ip).is_some(),
"clearing one instance must not clear the other instance"
);
}
#[test]
fn proxy_shared_state_auth_saturation_does_not_bleed_across_instances() {
let a = ProxySharedState::new();
let b = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(a.as_ref());
clear_auth_probe_state_for_testing_in_shared(b.as_ref());
let ip = IpAddr::V4(Ipv4Addr::new(203, 0, 113, 77));
let future_now = Instant::now() + Duration::from_secs(1);
for _ in 0..8 {
auth_probe_record_failure_for_testing(a.as_ref(), ip, future_now);
}
assert!(auth_probe_is_throttled_for_testing_in_shared(
a.as_ref(),
ip
));
assert!(!auth_probe_is_throttled_for_testing_in_shared(
b.as_ref(),
ip
));
}
#[test]
fn proxy_shared_state_poison_clear_in_one_instance_does_not_affect_other_instance() {
let a = ProxySharedState::new();
let b = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(a.as_ref());
clear_auth_probe_state_for_testing_in_shared(b.as_ref());
let ip_a = IpAddr::V4(Ipv4Addr::new(203, 0, 113, 31));
let ip_b = IpAddr::V4(Ipv4Addr::new(203, 0, 113, 32));
let now = Instant::now();
auth_probe_record_failure_for_testing(a.as_ref(), ip_a, now);
auth_probe_record_failure_for_testing(b.as_ref(), ip_b, now);
let a_for_poison = a.clone();
let _ = std::thread::spawn(move || {
let _hold = a_for_poison
.handshake
.auth_probe_saturation
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner());
panic!("intentional poison for per-instance isolation regression coverage");
})
.join();
clear_auth_probe_state_for_testing_in_shared(a.as_ref());
assert_eq!(
auth_probe_fail_streak_for_testing_in_shared(a.as_ref(), ip_a),
None
);
assert_eq!(
auth_probe_fail_streak_for_testing_in_shared(b.as_ref(), ip_b),
Some(1),
"poison recovery and clear in one instance must not touch other instance state"
);
}
#[test]
fn proxy_shared_state_unknown_sni_cooldown_does_not_bleed_across_instances() {
let a = ProxySharedState::new();
let b = ProxySharedState::new();
clear_unknown_sni_warn_state_for_testing_in_shared(a.as_ref());
clear_unknown_sni_warn_state_for_testing_in_shared(b.as_ref());
let now = Instant::now();
assert!(should_emit_unknown_sni_warn_for_testing_in_shared(
a.as_ref(),
now
));
assert!(should_emit_unknown_sni_warn_for_testing_in_shared(
b.as_ref(),
now
));
}
#[test]
fn proxy_shared_state_warned_secret_cache_does_not_bleed_across_instances() {
let a = ProxySharedState::new();
let b = ProxySharedState::new();
clear_warned_secrets_for_testing_in_shared(a.as_ref());
clear_warned_secrets_for_testing_in_shared(b.as_ref());
let key = ("isolation-user".to_string(), "invalid_hex".to_string());
{
let warned = warned_secrets_for_testing_in_shared(a.as_ref());
let mut guard = warned
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner());
guard.insert(key.clone());
}
let contains_in_a = {
let warned = warned_secrets_for_testing_in_shared(a.as_ref());
let guard = warned
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner());
guard.contains(&key)
};
let contains_in_b = {
let warned = warned_secrets_for_testing_in_shared(b.as_ref());
let guard = warned
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner());
guard.contains(&key)
};
assert!(contains_in_a);
assert!(!contains_in_b);
}
#[test]
fn proxy_shared_state_idle_mark_seq_is_per_instance() {
let a = ProxySharedState::new();
let b = ProxySharedState::new();
clear_relay_idle_pressure_state_for_testing_in_shared(a.as_ref());
clear_relay_idle_pressure_state_for_testing_in_shared(b.as_ref());
assert_eq!(relay_idle_mark_seq_for_testing(a.as_ref()), 0);
assert_eq!(relay_idle_mark_seq_for_testing(b.as_ref()), 0);
assert!(mark_relay_idle_candidate_for_testing(a.as_ref(), 9001));
assert_eq!(relay_idle_mark_seq_for_testing(a.as_ref()), 1);
assert_eq!(relay_idle_mark_seq_for_testing(b.as_ref()), 0);
assert!(mark_relay_idle_candidate_for_testing(b.as_ref(), 9002));
assert_eq!(relay_idle_mark_seq_for_testing(a.as_ref()), 1);
assert_eq!(relay_idle_mark_seq_for_testing(b.as_ref()), 1);
}
#[test]
fn proxy_shared_state_unknown_sni_clear_in_one_instance_does_not_reset_other() {
let a = ProxySharedState::new();
let b = ProxySharedState::new();
clear_unknown_sni_warn_state_for_testing_in_shared(a.as_ref());
clear_unknown_sni_warn_state_for_testing_in_shared(b.as_ref());
let now = Instant::now();
assert!(should_emit_unknown_sni_warn_for_testing_in_shared(
a.as_ref(),
now
));
assert!(should_emit_unknown_sni_warn_for_testing_in_shared(
b.as_ref(),
now
));
clear_unknown_sni_warn_state_for_testing_in_shared(a.as_ref());
assert!(should_emit_unknown_sni_warn_for_testing_in_shared(
a.as_ref(),
now + Duration::from_millis(1)
));
assert!(!should_emit_unknown_sni_warn_for_testing_in_shared(
b.as_ref(),
now + Duration::from_millis(1)
));
}
#[test]
fn proxy_shared_state_warned_secret_clear_in_one_instance_does_not_clear_other() {
let a = ProxySharedState::new();
let b = ProxySharedState::new();
clear_warned_secrets_for_testing_in_shared(a.as_ref());
clear_warned_secrets_for_testing_in_shared(b.as_ref());
let key = (
"clear-isolation-user".to_string(),
"invalid_length".to_string(),
);
{
let warned_a = warned_secrets_for_testing_in_shared(a.as_ref());
let mut guard_a = warned_a
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner());
guard_a.insert(key.clone());
let warned_b = warned_secrets_for_testing_in_shared(b.as_ref());
let mut guard_b = warned_b
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner());
guard_b.insert(key.clone());
}
clear_warned_secrets_for_testing_in_shared(a.as_ref());
let has_a = {
let warned = warned_secrets_for_testing_in_shared(a.as_ref());
let guard = warned
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner());
guard.contains(&key)
};
let has_b = {
let warned = warned_secrets_for_testing_in_shared(b.as_ref());
let guard = warned
.lock()
.unwrap_or_else(|poisoned| poisoned.into_inner());
guard.contains(&key)
};
assert!(!has_a);
assert!(has_b);
}
#[test]
fn proxy_shared_state_desync_duplicate_suppression_is_instance_scoped() {
let a = ProxySharedState::new();
let b = ProxySharedState::new();
clear_desync_dedup_for_testing_in_shared(a.as_ref());
clear_desync_dedup_for_testing_in_shared(b.as_ref());
let now = Instant::now();
let key = 0xBEEF_0000_0000_0001u64;
assert!(should_emit_full_desync_for_testing(
a.as_ref(),
key,
false,
now
));
assert!(!should_emit_full_desync_for_testing(
a.as_ref(),
key,
false,
now + Duration::from_millis(1)
));
assert!(should_emit_full_desync_for_testing(
b.as_ref(),
key,
false,
now
));
}
#[test]
fn proxy_shared_state_desync_clear_in_one_instance_does_not_clear_other() {
let a = ProxySharedState::new();
let b = ProxySharedState::new();
clear_desync_dedup_for_testing_in_shared(a.as_ref());
clear_desync_dedup_for_testing_in_shared(b.as_ref());
let now = Instant::now();
let key = 0xCAFE_0000_0000_0001u64;
assert!(should_emit_full_desync_for_testing(
a.as_ref(),
key,
false,
now
));
assert!(should_emit_full_desync_for_testing(
b.as_ref(),
key,
false,
now
));
clear_desync_dedup_for_testing_in_shared(a.as_ref());
assert!(should_emit_full_desync_for_testing(
a.as_ref(),
key,
false,
now + Duration::from_millis(2)
));
assert!(!should_emit_full_desync_for_testing(
b.as_ref(),
key,
false,
now + Duration::from_millis(2)
));
}
#[test]
fn proxy_shared_state_idle_candidate_clear_in_one_instance_does_not_affect_other() {
let a = ProxySharedState::new();
let b = ProxySharedState::new();
clear_relay_idle_pressure_state_for_testing_in_shared(a.as_ref());
clear_relay_idle_pressure_state_for_testing_in_shared(b.as_ref());
assert!(mark_relay_idle_candidate_for_testing(a.as_ref(), 1001));
assert!(mark_relay_idle_candidate_for_testing(b.as_ref(), 2002));
clear_relay_idle_candidate_for_testing(a.as_ref(), 1001);
assert_eq!(oldest_relay_idle_candidate_for_testing(a.as_ref()), None);
assert_eq!(
oldest_relay_idle_candidate_for_testing(b.as_ref()),
Some(2002)
);
}
#[test]
fn proxy_shared_state_pressure_seq_increments_are_instance_scoped() {
let a = ProxySharedState::new();
let b = ProxySharedState::new();
clear_relay_idle_pressure_state_for_testing_in_shared(a.as_ref());
clear_relay_idle_pressure_state_for_testing_in_shared(b.as_ref());
assert_eq!(relay_pressure_event_seq_for_testing(a.as_ref()), 0);
assert_eq!(relay_pressure_event_seq_for_testing(b.as_ref()), 0);
note_relay_pressure_event_for_testing(a.as_ref());
note_relay_pressure_event_for_testing(a.as_ref());
assert_eq!(relay_pressure_event_seq_for_testing(a.as_ref()), 2);
assert_eq!(relay_pressure_event_seq_for_testing(b.as_ref()), 0);
}
#[test]
fn proxy_shared_state_pressure_consumption_does_not_cross_instances() {
let a = ProxySharedState::new();
let b = ProxySharedState::new();
clear_relay_idle_pressure_state_for_testing_in_shared(a.as_ref());
clear_relay_idle_pressure_state_for_testing_in_shared(b.as_ref());
assert!(mark_relay_idle_candidate_for_testing(a.as_ref(), 7001));
assert!(mark_relay_idle_candidate_for_testing(b.as_ref(), 7001));
note_relay_pressure_event_for_testing(a.as_ref());
let stats = Stats::new();
let mut seen_a = 0u64;
let mut seen_b = 0u64;
assert!(maybe_evict_idle_candidate_on_pressure_for_testing(
a.as_ref(),
7001,
&mut seen_a,
&stats
));
assert!(!maybe_evict_idle_candidate_on_pressure_for_testing(
b.as_ref(),
7001,
&mut seen_b,
&stats
));
}


@@ -0,0 +1,265 @@
use crate::proxy::handshake::{
auth_probe_fail_streak_for_testing_in_shared, auth_probe_record_failure_for_testing,
clear_auth_probe_state_for_testing_in_shared,
clear_unknown_sni_warn_state_for_testing_in_shared,
should_emit_unknown_sni_warn_for_testing_in_shared,
};
use crate::proxy::middle_relay::{
clear_desync_dedup_for_testing_in_shared,
clear_relay_idle_pressure_state_for_testing_in_shared, mark_relay_idle_candidate_for_testing,
oldest_relay_idle_candidate_for_testing, should_emit_full_desync_for_testing,
};
use crate::proxy::shared_state::ProxySharedState;
use rand::Rng;
use rand::SeedableRng;
use rand::rngs::StdRng;
use std::net::{IpAddr, Ipv4Addr};
use std::sync::Arc;
use std::time::Instant;
use tokio::sync::Barrier;
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
async fn proxy_shared_state_50_concurrent_instances_no_counter_bleed() {
let mut handles = Vec::new();
for i in 0..50_u8 {
handles.push(tokio::spawn(async move {
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let ip = IpAddr::V4(Ipv4Addr::new(198, 51, 100, 200));
auth_probe_record_failure_for_testing(shared.as_ref(), ip, Instant::now());
auth_probe_fail_streak_for_testing_in_shared(shared.as_ref(), ip)
}));
}
for handle in handles {
let streak = handle.await.expect("task join failed");
assert_eq!(streak, Some(1));
}
}
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
async fn proxy_shared_state_desync_rotation_concurrent_20_instances() {
let now = Instant::now();
let key = 0xD35E_D35E_u64;
let mut handles = Vec::new();
for _ in 0..20_u64 {
handles.push(tokio::spawn(async move {
let shared = ProxySharedState::new();
clear_desync_dedup_for_testing_in_shared(shared.as_ref());
should_emit_full_desync_for_testing(shared.as_ref(), key, false, now)
}));
}
for handle in handles {
let emitted = handle.await.expect("task join failed");
assert!(emitted);
}
}
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
async fn proxy_shared_state_idle_registry_concurrent_10_instances() {
let mut handles = Vec::new();
let conn_id = 42_u64;
for _ in 1..=10_u64 {
handles.push(tokio::spawn(async move {
let shared = ProxySharedState::new();
clear_relay_idle_pressure_state_for_testing_in_shared(shared.as_ref());
let marked = mark_relay_idle_candidate_for_testing(shared.as_ref(), conn_id);
let oldest = oldest_relay_idle_candidate_for_testing(shared.as_ref());
(marked, oldest)
}));
}
for (i, handle) in handles.into_iter().enumerate() {
let (marked, oldest) = handle.await.expect("task join failed");
assert!(marked, "instance {} failed to mark", i);
assert_eq!(oldest, Some(conn_id));
}
}
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
async fn proxy_shared_state_dual_instance_same_ip_high_contention_no_counter_bleed() {
let a = ProxySharedState::new();
let b = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(a.as_ref());
clear_auth_probe_state_for_testing_in_shared(b.as_ref());
let ip = IpAddr::V4(Ipv4Addr::new(203, 0, 113, 200));
let mut handles = Vec::new();
for _ in 0..64 {
let a = a.clone();
let b = b.clone();
handles.push(tokio::spawn(async move {
auth_probe_record_failure_for_testing(a.as_ref(), ip, Instant::now());
auth_probe_record_failure_for_testing(b.as_ref(), ip, Instant::now());
}));
}
for handle in handles {
handle.await.expect("task join failed");
}
assert_eq!(
auth_probe_fail_streak_for_testing_in_shared(a.as_ref(), ip),
Some(64)
);
assert_eq!(
auth_probe_fail_streak_for_testing_in_shared(b.as_ref(), ip),
Some(64)
);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
async fn proxy_shared_state_unknown_sni_parallel_instances_no_cross_cooldown() {
let mut handles = Vec::new();
let now = Instant::now();
for _ in 0..32 {
handles.push(tokio::spawn(async move {
let shared = ProxySharedState::new();
clear_unknown_sni_warn_state_for_testing_in_shared(shared.as_ref());
let first = should_emit_unknown_sni_warn_for_testing_in_shared(shared.as_ref(), now);
let second = should_emit_unknown_sni_warn_for_testing_in_shared(
shared.as_ref(),
now + std::time::Duration::from_millis(1),
);
(first, second)
}));
}
for handle in handles {
let (first, second) = handle.await.expect("task join failed");
assert!(first);
assert!(!second);
}
}
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
async fn proxy_shared_state_auth_probe_high_contention_increments_are_lossless() {
let shared = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared.as_ref());
let ip = IpAddr::V4(Ipv4Addr::new(198, 51, 100, 33));
let workers = 128usize;
let rounds = 20usize;
for _ in 0..rounds {
let start = Arc::new(Barrier::new(workers));
let mut handles = Vec::with_capacity(workers);
for _ in 0..workers {
let shared = shared.clone();
let start = start.clone();
handles.push(tokio::spawn(async move {
start.wait().await;
auth_probe_record_failure_for_testing(shared.as_ref(), ip, Instant::now());
}));
}
for handle in handles {
handle.await.expect("task join failed");
}
}
let expected = (workers * rounds) as u32;
assert_eq!(
auth_probe_fail_streak_for_testing_in_shared(shared.as_ref(), ip),
Some(expected),
"auth probe fail streak must account for every concurrent update"
);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
async fn proxy_shared_state_seed_matrix_concurrency_isolation_no_counter_bleed() {
let seeds: [u64; 8] = [
0x0000_0000_0000_0001,
0x1111_1111_1111_1111,
0xA5A5_A5A5_A5A5_A5A5,
0xDEAD_BEEF_CAFE_BABE,
0x0123_4567_89AB_CDEF,
0xFEDC_BA98_7654_3210,
0x0F0F_F0F0_55AA_AA55,
0x1357_9BDF_2468_ACE0,
];
for seed in seeds {
let mut rng = StdRng::seed_from_u64(seed);
let shared_a = ProxySharedState::new();
let shared_b = ProxySharedState::new();
clear_auth_probe_state_for_testing_in_shared(shared_a.as_ref());
clear_auth_probe_state_for_testing_in_shared(shared_b.as_ref());
let ip = IpAddr::V4(Ipv4Addr::new(198, 51, 100, rng.random_range(1_u8..=250_u8)));
let workers = rng.random_range(16_usize..=48_usize);
let rounds = rng.random_range(4_usize..=10_usize);
let mut expected_a: u32 = 0;
let mut expected_b: u32 = 0;
for _ in 0..rounds {
let start = Arc::new(Barrier::new(workers * 2));
let mut handles = Vec::with_capacity(workers * 2);
for _ in 0..workers {
let a_ops = rng.random_range(1_u32..=3_u32);
let b_ops = rng.random_range(1_u32..=3_u32);
expected_a = expected_a.saturating_add(a_ops);
expected_b = expected_b.saturating_add(b_ops);
let shared_a = shared_a.clone();
let start_a = start.clone();
handles.push(tokio::spawn(async move {
start_a.wait().await;
for _ in 0..a_ops {
auth_probe_record_failure_for_testing(
shared_a.as_ref(),
ip,
Instant::now(),
);
}
}));
let shared_b = shared_b.clone();
let start_b = start.clone();
handles.push(tokio::spawn(async move {
start_b.wait().await;
for _ in 0..b_ops {
auth_probe_record_failure_for_testing(
shared_b.as_ref(),
ip,
Instant::now(),
);
}
}));
}
for handle in handles {
handle.await.expect("task join failed");
}
}
assert_eq!(
auth_probe_fail_streak_for_testing_in_shared(shared_a.as_ref(), ip),
Some(expected_a),
"seed {seed:#x}: instance A streak mismatch"
);
assert_eq!(
auth_probe_fail_streak_for_testing_in_shared(shared_b.as_ref(), ip),
Some(expected_b),
"seed {seed:#x}: instance B streak mismatch"
);
clear_auth_probe_state_for_testing_in_shared(shared_a.as_ref());
assert_eq!(
auth_probe_fail_streak_for_testing_in_shared(shared_a.as_ref(), ip),
None,
"seed {seed:#x}: clearing A must reset only A"
);
assert_eq!(
auth_probe_fail_streak_for_testing_in_shared(shared_b.as_ref(), ip),
Some(expected_b),
"seed {seed:#x}: clearing A must not mutate B"
);
}
}


@@ -0,0 +1,284 @@
use super::*;
use crate::error::ProxyError;
use crate::stats::Stats;
use crate::stream::BufferPool;
use std::io;
use std::sync::Arc;
use tokio::io::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt, ReadBuf, duplex};
use tokio::time::{Duration, timeout};
struct BrokenPipeWriter;
impl AsyncWrite for BrokenPipeWriter {
fn poll_write(
self: Pin<&mut Self>,
_cx: &mut Context<'_>,
_buf: &[u8],
) -> Poll<io::Result<usize>> {
Poll::Ready(Err(io::Error::new(
io::ErrorKind::BrokenPipe,
"forced broken pipe",
)))
}
fn poll_flush(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<io::Result<()>> {
Poll::Ready(Ok(()))
}
fn poll_shutdown(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<io::Result<()>> {
Poll::Ready(Ok(()))
}
}
#[tokio::test(start_paused = true)]
async fn relay_baseline_activity_timeout_fires_after_inactivity() {
let stats = Arc::new(Stats::new());
let user = "relay-baseline-idle-timeout";
let (_client_peer, relay_client) = duplex(1024);
let (_server_peer, relay_server) = duplex(1024);
let (client_reader, client_writer) = tokio::io::split(relay_client);
let (server_reader, server_writer) = tokio::io::split(relay_server);
let relay_task = tokio::spawn(relay_bidirectional(
client_reader,
client_writer,
server_reader,
server_writer,
1024,
1024,
user,
Arc::clone(&stats),
None,
Arc::new(BufferPool::new()),
));
tokio::task::yield_now().await;
tokio::time::advance(ACTIVITY_TIMEOUT.saturating_sub(Duration::from_secs(1))).await;
tokio::task::yield_now().await;
assert!(
!relay_task.is_finished(),
"relay must stay alive before inactivity timeout"
);
tokio::time::advance(WATCHDOG_INTERVAL + Duration::from_secs(2)).await;
let done = timeout(Duration::from_secs(1), relay_task)
.await
.expect("relay must complete after inactivity timeout")
.expect("relay task must not panic");
assert!(
done.is_ok(),
"relay must return Ok(()) after inactivity timeout"
);
}
#[tokio::test]
async fn relay_baseline_zero_bytes_returns_ok_and_counters_zero() {
let stats = Arc::new(Stats::new());
let user = "relay-baseline-zero-bytes";
let (client_peer, relay_client) = duplex(1024);
let (server_peer, relay_server) = duplex(1024);
let (client_reader, client_writer) = tokio::io::split(relay_client);
let (server_reader, server_writer) = tokio::io::split(relay_server);
let relay_task = tokio::spawn(relay_bidirectional(
client_reader,
client_writer,
server_reader,
server_writer,
1024,
1024,
user,
Arc::clone(&stats),
None,
Arc::new(BufferPool::new()),
));
drop(client_peer);
drop(server_peer);
let done = timeout(Duration::from_secs(2), relay_task)
.await
.expect("relay must stop after both peers close")
.expect("relay task must not panic");
assert!(done.is_ok(), "relay must return Ok(()) on immediate EOF");
assert_eq!(stats.get_user_total_octets(user), 0);
}
#[tokio::test]
async fn relay_baseline_bidirectional_bytes_counted_symmetrically() {
let stats = Arc::new(Stats::new());
let user = "relay-baseline-bidir-counters";
let (mut client_peer, relay_client) = duplex(16 * 1024);
let (relay_server, mut server_peer) = duplex(16 * 1024);
let (client_reader, client_writer) = tokio::io::split(relay_client);
let (server_reader, server_writer) = tokio::io::split(relay_server);
let relay_task = tokio::spawn(relay_bidirectional(
client_reader,
client_writer,
server_reader,
server_writer,
4096,
4096,
user,
Arc::clone(&stats),
None,
Arc::new(BufferPool::new()),
));
let c2s = vec![0xAA; 4096];
let s2c = vec![0xBB; 2048];
client_peer.write_all(&c2s).await.unwrap();
server_peer.write_all(&s2c).await.unwrap();
let mut seen_c2s = vec![0u8; c2s.len()];
let mut seen_s2c = vec![0u8; s2c.len()];
server_peer.read_exact(&mut seen_c2s).await.unwrap();
client_peer.read_exact(&mut seen_s2c).await.unwrap();
assert_eq!(seen_c2s, c2s);
assert_eq!(seen_s2c, s2c);
drop(client_peer);
drop(server_peer);
let done = timeout(Duration::from_secs(2), relay_task)
.await
.expect("relay must complete after both peers close")
.expect("relay task must not panic");
assert!(done.is_ok());
assert_eq!(
stats.get_user_total_octets(user),
(c2s.len() + s2c.len()) as u64
);
}
#[tokio::test]
async fn relay_baseline_both_sides_close_simultaneously_no_panic() {
let stats = Arc::new(Stats::new());
let (client_peer, relay_client) = duplex(1024);
let (relay_server, server_peer) = duplex(1024);
let (client_reader, client_writer) = tokio::io::split(relay_client);
let (server_reader, server_writer) = tokio::io::split(relay_server);
let relay_task = tokio::spawn(relay_bidirectional(
client_reader,
client_writer,
server_reader,
server_writer,
1024,
1024,
"relay-baseline-sim-close",
Arc::clone(&stats),
None,
Arc::new(BufferPool::new()),
));
drop(client_peer);
drop(server_peer);
let done = timeout(Duration::from_secs(2), relay_task)
.await
.expect("relay must complete")
.expect("relay task must not panic");
assert!(done.is_ok());
}
#[tokio::test]
async fn relay_baseline_broken_pipe_midtransfer_returns_error() {
let stats = Arc::new(Stats::new());
let user = "relay-baseline-broken-pipe";
let (mut client_peer, relay_client) = duplex(1024);
let (client_reader, client_writer) = tokio::io::split(relay_client);
let relay_task = tokio::spawn(relay_bidirectional(
client_reader,
client_writer,
tokio::io::empty(),
BrokenPipeWriter,
1024,
1024,
user,
Arc::clone(&stats),
None,
Arc::new(BufferPool::new()),
));
client_peer.write_all(b"trigger").await.unwrap();
let done = timeout(Duration::from_secs(2), relay_task)
.await
.expect("relay must return after broken pipe")
.expect("relay task must not panic");
match done {
Err(ProxyError::Io(err)) => {
assert!(
matches!(
err.kind(),
io::ErrorKind::BrokenPipe | io::ErrorKind::ConnectionReset
),
"expected BrokenPipe/ConnectionReset, got {:?}",
err.kind()
);
}
other => panic!("expected ProxyError::Io, got {other:?}"),
}
}
#[tokio::test]
async fn relay_baseline_many_small_writes_exact_counter() {
let stats = Arc::new(Stats::new());
let user = "relay-baseline-many-small";
let (mut client_peer, relay_client) = duplex(4096);
let (relay_server, mut server_peer) = duplex(4096);
let (client_reader, client_writer) = tokio::io::split(relay_client);
let (server_reader, server_writer) = tokio::io::split(relay_server);
let relay_task = tokio::spawn(relay_bidirectional(
client_reader,
client_writer,
server_reader,
server_writer,
1024,
1024,
user,
Arc::clone(&stats),
None,
Arc::new(BufferPool::new()),
));
for i in 0..10_000u32 {
let b = [(i & 0xFF) as u8];
client_peer.write_all(&b).await.unwrap();
let mut seen = [0u8; 1];
server_peer.read_exact(&mut seen).await.unwrap();
assert_eq!(seen, b);
}
drop(client_peer);
drop(server_peer);
let done = timeout(Duration::from_secs(3), relay_task)
.await
.expect("relay must complete for many small writes")
.expect("relay task must not panic");
assert!(done.is_ok());
assert_eq!(stats.get_user_total_octets(user), 10_000);
}

Some files were not shown because too many files have changed in this diff.