Compare commits


39 Commits
3.2.0 ... 3.3.0

Author SHA1 Message Date
Alexey
0494f8ac8b Merge pull request #325 from telemt/bump
Update Cargo.toml
2026-03-05 16:40:40 +03:00
Alexey
48ce59900e Update Cargo.toml 2026-03-05 16:40:28 +03:00
Alexey
84e95fd229 ME Pool Init fixes: merge pull request #324 from telemt/flow-fixes
ME Pool Init fixes
2026-03-05 16:35:00 +03:00
Alexey
a80be78345 DC writer floor is below required only in runtime 2026-03-05 16:32:31 +03:00
Alexey
64130dd02e MEP not ready only after 3 attempts 2026-03-05 16:13:40 +03:00
Alexey
d62a6e0417 Shutdown Timer fixes 2026-03-05 16:04:32 +03:00
Alexey
3260746785 Init + Uptime timers 2026-03-05 15:48:09 +03:00
Alexey
8066ea2163 ME Pool Init fixes 2026-03-05 15:31:36 +03:00
Alexey
813f1df63e Performance improvements: merge pull request #323 from telemt/flow-perf
Performance improvements
2026-03-05 14:43:10 +03:00
Alexey
09bdafa718 Performance improvements 2026-03-05 14:39:32 +03:00
Alexey
fb0f75df43 Merge pull request #322 from Dimasssss/patch-3
Update README.md
2026-03-05 14:10:01 +03:00
Alexey
39255df549 Unique IP always in Metrics+API: merge pull request #321 from telemt/flow-iplimit
Unique IP always in Metrics+API
2026-03-05 14:09:40 +03:00
Dimasssss
456495fd62 Update README.md 2026-03-05 13:59:58 +03:00
Alexey
83cadc0bf3 No lock-contention in ip-tracker 2026-03-05 13:52:27 +03:00
Alexey
0b1a8cd3f8 IP Limit fixes 2026-03-05 13:41:41 +03:00
Alexey
565b4ee923 Unique IP always in Metrics+API
Co-Authored-By: brekotis <93345790+brekotis@users.noreply.github.com>
2026-03-05 13:21:11 +03:00
Alexey
7a9c1e79c2 Merge pull request #320 from telemt/bump
Update Cargo.toml
2026-03-05 12:47:09 +03:00
Alexey
02c6af4912 Update Cargo.toml 2026-03-05 12:46:57 +03:00
Alexey
8ba4dea59f Merge pull request #319 from telemt/flow-api
New IP Limit + Hot-Reload fixes + API Docs + ME2DC Fallback + ME Init Retries
2026-03-05 12:46:34 +03:00
Alexey
ccfda10713 ME2DC Fallback + ME Init Retries
Co-Authored-By: brekotis <93345790+brekotis@users.noreply.github.com>
2026-03-05 12:43:07 +03:00
Alexey
bd1327592e Merge pull request #318 from telemt/readme
Update README.md
2026-03-05 12:40:34 +03:00
Alexey
30b22fe2bf Update README.md 2026-03-05 12:40:04 +03:00
Alexey
651f257a5d Update API.md
Co-Authored-By: brekotis <93345790+brekotis@users.noreply.github.com>
2026-03-05 12:30:29 +03:00
Alexey
a9209fd3c7 Hot-Reload fixes
Co-Authored-By: brekotis <93345790+brekotis@users.noreply.github.com>
2026-03-05 12:18:09 +03:00
Alexey
4ae4ca8ca8 New IP Limit Method
Co-Authored-By: brekotis <93345790+brekotis@users.noreply.github.com>
2026-03-05 02:28:19 +03:00
Alexey
8be1ddc0d8 Merge pull request #315 from telemt/contributing
Update CONTRIBUTING.md
2026-03-04 17:52:17 +03:00
Alexey
b55fa5ec8f Update CONTRIBUTING.md 2026-03-04 17:52:02 +03:00
Alexey
16c6ce850e Merge pull request #313 from badcdd/patch-2
Add new prometheus metrics to zabbix template
2026-03-04 16:46:21 +03:00
badcdd
12251e730f Add new prometheus metrics to zabbix template 2026-03-04 16:24:00 +03:00
Alexey
925b10f9fc Merge pull request #312 from Dimasssss/patch-2
Update README.md
2026-03-04 14:25:13 +03:00
Dimasssss
306b653318 Update README.md 2026-03-04 14:23:48 +03:00
Alexey
8791a52b7e Merge pull request #311 from Dimasssss/patch-6
Guide fixes
2026-03-04 14:19:48 +03:00
Dimasssss
0d9470a840 Update QUICK_START_GUIDE.en.md 2026-03-04 14:10:46 +03:00
Dimasssss
0d320c20e0 Update QUICK_START_GUIDE.ru.md 2026-03-04 14:10:12 +03:00
Alexey
9b3ba2e1c6 API for UpstreamManager: merge pull request #310 from telemt/flow-api
API for UpstreamManager
2026-03-04 11:46:07 +03:00
Alexey
dbadbf0221 Update config.toml 2026-03-04 11:45:32 +03:00
Alexey
173624c838 Update Cargo.toml 2026-03-04 11:44:50 +03:00
Alexey
de2047adf2 API UpstreamManager
Co-Authored-By: brekotis <93345790+brekotis@users.noreply.github.com>
2026-03-04 11:41:41 +03:00
Alexey
5df2fe9f97 Autodetect IP in API User-links
Co-Authored-By: brekotis <93345790+brekotis@users.noreply.github.com>
2026-03-04 11:04:54 +03:00
29 changed files with 3597 additions and 693 deletions

View File

@@ -1,3 +1,8 @@
+# Issues - Rules
+## What it is not
+- NOT Question and Answer
+- NOT Helpdesk
+
 # Pull Requests - Rules
 ## General
 - ONLY signed and verified commits

View File

@@ -1,6 +1,6 @@
 [package]
 name = "telemt"
-version = "3.2.0"
+version = "3.3.0"
 edition = "2024"
 
 [dependencies]

README.md
View File

@@ -2,7 +2,12 @@
 ***Löst Probleme, bevor andere überhaupt wissen, dass sie existieren*** / ***It solves problems before others even realize they exist***
-**Telemt** is a fast, secure, and feature-rich server written in Rust: it fully implements the official Telegram proxy algo and adds many production-ready improvements such as connection pooling, replay protection, detailed statistics, masking from "prying" eyes
+**Telemt** is a fast, secure, and feature-rich server written in Rust: it fully implements the official Telegram proxy algo and adds many production-ready improvements such as:
+- ME Pool + Reader/Writer + Registry + Refill + Adaptive Floor + Trio-State + Generation Lifecycle
+- [Full-covered API w/ management](https://github.com/telemt/telemt/blob/main/docs/API.md)
+- Anti-Replay on Sliding Window
+- Prometheus-format Metrics
+- TLS-Fronting and TCP-Splicing for masking from "prying" eyes
 [**Telemt Chat in Telegram**](https://t.me/telemtrs)
@@ -112,110 +117,11 @@ We welcome ideas, architectural feedback, and pull requests.
 - Extensive logging via `trace` and `debug` with `RUST_LOG` method
 ## Quick Start Guide
-**This software is designed for Debian-based OS: in addition to Debian, these are Ubuntu, Mint, Kali, MX and many other Linux**
-1. Download release
-```bash
-wget -qO- "https://github.com/telemt/telemt/releases/latest/download/telemt-$(uname -m)-linux-$(ldd --version 2>&1 | grep -iq musl && echo musl || echo gnu).tar.gz" | tar -xz
-```
-2. Move to Bin Folder
-```bash
-mv telemt /bin
-```
-4. Make Executable
-```bash
-chmod +x /bin/telemt
-```
-5. Go to [How to use?](#how-to-use) section for for further steps
-## How to use?
-### Telemt via Systemd
-**This instruction "assume" that you:**
-- logged in as root or executed `su -` / `sudo su`
-- you already have an assembled and executable `telemt` in /bin folder as a result of the [Quick Start Guide](#quick-start-guide) or [Build](#build)
-**0. Check port and generate secrets**
-The port you have selected for use should be MISSING from the list, when:
-```bash
-netstat -lnp
-```
-Generate 16 bytes/32 characters HEX with OpenSSL or another way:
-```bash
-openssl rand -hex 16
-```
-OR
-```bash
-xxd -l 16 -p /dev/urandom
-```
-OR
-```bash
-python3 -c 'import os; print(os.urandom(16).hex())'
-```
-**1. Place your config to /etc/telemt.toml**
-Open nano
-```bash
-nano /etc/telemt.toml
-```
-paste your config from [Configuration](#configuration) section
-then Ctrl+X -> Y -> Enter to save
-**2. Create service on /etc/systemd/system/telemt.service**
-Open nano
-```bash
-nano /etc/systemd/system/telemt.service
-```
-paste this Systemd Module
-```bash
-[Unit]
-Description=Telemt
-After=network.target
-[Service]
-Type=simple
-WorkingDirectory=/bin
-ExecStart=/bin/telemt /etc/telemt.toml
-Restart=on-failure
-LimitNOFILE=65536
-[Install]
-WantedBy=multi-user.target
-```
-then Ctrl+X -> Y -> Enter to save
-**3.** In Shell type `systemctl start telemt` - it must start with zero exit-code
-**4.** In Shell type `systemctl status telemt` - there you can reach info about current MTProxy status
-**5.** In Shell type `systemctl enable telemt` - then telemt will start with system startup, after the network is up
-**6.** In Shell type `journalctl -u telemt -n -g "links" --no-pager -o cat | tac` - get the connection links
-## Configuration
-### Minimal Configuration for First Start
-```toml
-# === General Settings ===
-[general]
-# ad_tag = "00000000000000000000000000000000"
-[general.modes]
-classic = false
-secure = false
-tls = true
-# === Anti-Censorship & Masking ===
-[censorship]
-tls_domain = "petrovich.ru"
-[access.users]
-# format: "username" = "32_hex_chars_secret"
-hello = "00000000000000000000000000000000"
-```
+### [Quick Start Guide RU](docs/QUICK_START_GUIDE.ru.md)
+### [Quick Start Guide EN](docs/QUICK_START_GUIDE.en.md)
 ### Advanced
 #### Adtag (per-user)
 To use channel advertising and usage statistics from Telegram, get an Adtag from [@mtproxybot](https://t.me/mtproxybot). Set it per user in `[access.user_ad_tags]` (32 hex chars):
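The configs throughout this release (user secrets, ad tags) use the same 16-byte / 32-hex-char format that the removed Quick Start section generated with `openssl rand -hex 16`. A minimal std-only Rust sketch of the same idea, reading `/dev/urandom` directly (Unix-specific; the function name is illustrative, not a telemt API):

```rust
use std::fs::File;
use std::io::Read;

// Generate a 16-byte secret and render it as 32 lowercase hex characters,
// matching the "32_hex_chars_secret" format used in telemt.toml.
fn generate_secret() -> std::io::Result<String> {
    let mut buf = [0u8; 16];
    File::open("/dev/urandom")?.read_exact(&mut buf)?;
    Ok(buf.iter().map(|b| format!("{b:02x}")).collect())
}
```

Any CSPRNG source works; the point is exactly 16 random bytes, hex-encoded.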

View File

@@ -34,6 +34,13 @@ port = 443
 # metrics_port = 9090
 # metrics_whitelist = ["127.0.0.1", "::1", "0.0.0.0/0"]
+
+[server.api]
+enabled = true
+listen = "0.0.0.0:9091"
+whitelist = ["127.0.0.0/8"]
+minimal_runtime_enabled = false
+minimal_runtime_cache_ttl_ms = 1000
 
 # Listen on multiple interfaces/IPs - IPv4
 [[server.listeners]]
 ip = "0.0.0.0"

View File

@@ -13,13 +13,18 @@ API runtime is configured in `[server.api]`.
 | `listen` | `string` (`IP:PORT`) | `127.0.0.1:9091` | API bind address. |
 | `whitelist` | `CIDR[]` | `127.0.0.1/32, ::1/128` | Source IP allowlist. Empty list means allow all. |
 | `auth_header` | `string` | `""` | Exact value for `Authorization` header. Empty disables header auth. |
-| `request_body_limit_bytes` | `usize` | `65536` | Maximum request body size. |
+| `request_body_limit_bytes` | `usize` | `65536` | Maximum request body size. Must be `> 0`. |
 | `minimal_runtime_enabled` | `bool` | `false` | Enables runtime snapshot endpoints requiring ME pool read-lock aggregation. |
-| `minimal_runtime_cache_ttl_ms` | `u64` | `1000` | Cache TTL for minimal snapshots. `0` disables cache. |
+| `minimal_runtime_cache_ttl_ms` | `u64` | `1000` | Cache TTL for minimal snapshots. `0` disables cache; valid range is `[0, 60000]`. |
 | `read_only` | `bool` | `false` | Disables mutating endpoints. |
 `server.admin_api` is accepted as an alias for backward compatibility.
+
+Runtime validation for API config:
+- `server.api.listen` must be a valid `IP:PORT`.
+- `server.api.request_body_limit_bytes` must be `> 0`.
+- `server.api.minimal_runtime_cache_ttl_ms` must be within `[0, 60000]`.
 ## Protocol Contract
 | Item | Value |
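The `minimal_runtime_cache_ttl_ms` rule documented in this diff (TTL of `0` disables the cache, and values are validated into `[0, 60000]`) can be condensed into a std-only sketch; the type and method names here are illustrative, not telemt's internals:

```rust
use std::time::{Duration, Instant};

// Snapshot cache with the documented TTL semantics: ttl 0 means "never
// serve from cache", and configured values above 60000 ms are rejected.
struct CacheEntry<T> { value: T, created: Instant }

struct TtlCache<T> { ttl_ms: u64, entry: Option<CacheEntry<T>> }

impl<T: Clone> TtlCache<T> {
    fn new(ttl_ms: u64) -> Result<Self, String> {
        if ttl_ms > 60_000 {
            return Err(format!("minimal_runtime_cache_ttl_ms out of range: {ttl_ms}"));
        }
        Ok(Self { ttl_ms, entry: None })
    }

    // Return a cached snapshot only while it is younger than the TTL.
    fn get(&self) -> Option<T> {
        if self.ttl_ms == 0 { return None; } // ttl 0 disables caching
        let e = self.entry.as_ref()?;
        (e.created.elapsed() < Duration::from_millis(self.ttl_ms)).then(|| e.value.clone())
    }

    fn put(&mut self, value: T) {
        self.entry = Some(CacheEntry { value, created: Instant::now() });
    }
}
```

A caller would try `get()` first and only rebuild the expensive ME-pool snapshot (then `put()` it) on a miss.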
@@ -51,6 +56,21 @@ API runtime is configured in `[server.api]`.
 }
 ```
+
+## Request Processing Order
+
+Requests are processed in this order:
+1. `api_enabled` gate (`503 api_disabled` if disabled).
+2. Source IP whitelist gate (`403 forbidden`).
+3. `Authorization` header gate when configured (`401 unauthorized`).
+4. Route and method matching (`404 not_found` or `405 method_not_allowed`).
+5. `read_only` gate for mutating routes (`403 read_only`).
+6. Request body read/limit/JSON decode (`413 payload_too_large`, `400 bad_request`).
+7. Business validation and config write path.
+
+Notes:
+- Whitelist is evaluated against the direct TCP peer IP (`SocketAddr::ip`), without `X-Forwarded-For` support.
+- `Authorization` check is exact string equality against configured `auth_header`.
 ## Endpoint Matrix
 | Method | Path | Body | Success | `data` contract |
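The gate ordering documented in this diff can be sketched as a single early-return chain; the `Cfg`/`Req` types and the two-route table below are hypothetical stand-ins, not telemt's actual handler:

```rust
// Illustrative sketch of the documented request-processing order.
struct Cfg { api_enabled: bool, auth_required: bool, read_only: bool }
struct Req<'a> { method: &'a str, path: &'a str, ip_allowed: bool, authorized: bool }

fn gate(cfg: &Cfg, req: &Req) -> Result<(), (u16, &'static str)> {
    // 1. api_enabled gate
    if !cfg.api_enabled { return Err((503, "api_disabled")); }
    // 2. Source IP whitelist gate
    if !req.ip_allowed { return Err((403, "forbidden")); }
    // 3. Authorization header gate (only when configured)
    if cfg.auth_required && !req.authorized { return Err((401, "unauthorized")); }
    // 4. Route matching (tiny stand-in route table, exact path match)
    if !matches!(req.path, "/v1/health" | "/v1/users") {
        return Err((404, "not_found"));
    }
    // 5. read_only gate for mutating routes
    if req.method != "GET" && cfg.read_only { return Err((403, "read_only")); }
    // 6./7. body decode and business validation would follow here
    Ok(())
}
```

Order matters: a source IP outside the whitelist sees `403 forbidden` even when the request would also fail auth or routing.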
@@ -58,6 +78,7 @@ API runtime is configured in `[server.api]`.
 | `GET` | `/v1/health` | none | `200` | `HealthData` |
 | `GET` | `/v1/stats/summary` | none | `200` | `SummaryData` |
 | `GET` | `/v1/stats/zero/all` | none | `200` | `ZeroAllData` |
+| `GET` | `/v1/stats/upstreams` | none | `200` | `UpstreamsData` |
 | `GET` | `/v1/stats/minimal/all` | none | `200` | `MinimalAllData` |
 | `GET` | `/v1/stats/me-writers` | none | `200` | `MeWritersData` |
 | `GET` | `/v1/stats/dcs` | none | `200` | `DcStatusData` |
@@ -67,7 +88,7 @@ API runtime is configured in `[server.api]`.
 | `GET` | `/v1/users/{username}` | none | `200` | `UserInfo` |
 | `PATCH` | `/v1/users/{username}` | `PatchUserRequest` | `200` | `UserInfo` |
 | `DELETE` | `/v1/users/{username}` | none | `200` | `string` (deleted username) |
-| `POST` | `/v1/users/{username}/rotate-secret` | `RotateSecretRequest` or empty body | `200` | `CreateUserResponse` |
+| `POST` | `/v1/users/{username}/rotate-secret` | `RotateSecretRequest` or empty body | `404` | `ErrorResponse` (`not_found`, current runtime behavior) |
 ## Common Error Codes
@@ -77,8 +98,8 @@ API runtime is configured in `[server.api]`.
 | `401` | `unauthorized` | Missing/invalid `Authorization` when `auth_header` is configured. |
 | `403` | `forbidden` | Source IP is not allowed by whitelist. |
 | `403` | `read_only` | Mutating endpoint called while `read_only=true`. |
-| `404` | `not_found` | Unknown route or unknown user. |
-| `405` | `method_not_allowed` | Unsupported method for an existing user route. |
+| `404` | `not_found` | Unknown route, unknown user, or unsupported sub-route (including current `rotate-secret` route). |
+| `405` | `method_not_allowed` | Unsupported method for `/v1/users/{username}` route shape. |
 | `409` | `revision_conflict` | `If-Match` revision mismatch. |
 | `409` | `user_exists` | User already exists on create. |
 | `409` | `last_user_forbidden` | Attempt to delete last configured user. |
@@ -86,6 +107,28 @@ API runtime is configured in `[server.api]`.
 | `500` | `internal_error` | Internal error (I/O, serialization, config load/save). |
 | `503` | `api_disabled` | API disabled in config. |
+
+## Routing and Method Edge Cases
+
+| Case | Behavior |
+| --- | --- |
+| Path matching | Exact match on `req.uri().path()`. Query string does not affect route matching. |
+| Trailing slash | Not normalized. Example: `/v1/users/` is `404`. |
+| Username route with extra slash | `/v1/users/{username}/...` is not treated as a user route and returns `404`. |
+| `PUT /v1/users/{username}` | `405 method_not_allowed`. |
+| `POST /v1/users/{username}` | `404 not_found`. |
+| `POST /v1/users/{username}/rotate-secret` | `404 not_found` in the current release due to a route matcher limitation. |
+
+## Body and JSON Semantics
+
+- Request body is read only for mutating routes that define a body contract.
+- Body size limit is enforced during streaming read (`413 payload_too_large`).
+- Invalid transport body frame returns `400 bad_request` (`Invalid request body`).
+- Invalid JSON returns `400 bad_request` (`Invalid JSON body`).
+- `Content-Type` is not required for JSON parsing.
+- Unknown JSON fields are ignored by deserialization.
+- `PATCH` updates only provided fields and does not support explicit clearing of optional fields.
+- `If-Match` supports both quoted and unquoted values; surrounding whitespace is trimmed.
 ## Request Contracts
 ### `CreateUserRequest`
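The documented `If-Match` handling (surrounding whitespace trimmed; revision accepted quoted or unquoted) amounts to a few lines; this helper is a sketch, not telemt's actual function:

```rust
// Normalize an If-Match header value: trim whitespace, then strip one
// matched pair of surrounding double quotes if present.
fn normalize_if_match(raw: &str) -> &str {
    let t = raw.trim();
    t.strip_prefix('"')
        .and_then(|s| s.strip_suffix('"'))
        .unwrap_or(t)
}
```

An unbalanced quote is left untouched, so only a properly quoted value loses its quotes.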
@@ -114,6 +157,8 @@ API runtime is configured in `[server.api]`.
 | --- | --- | --- | --- |
 | `secret` | `string` | no | Exactly 32 hex chars. If missing, generated automatically. |
+
+Note: the request contract is defined, but the corresponding route currently returns `404` (see routing edge cases).
 ## Response Data Contracts
 ### `HealthData`
@@ -173,6 +218,47 @@ API runtime is configured in `[server.api]`.
 | `connect_duration_fail_bucket_501_1000ms` | `u64` | Failed connects 501-1000 ms. |
 | `connect_duration_fail_bucket_gt_1000ms` | `u64` | Failed connects >1000 ms. |
+
+### `UpstreamsData`
+
+| Field | Type | Description |
+| --- | --- | --- |
+| `enabled` | `bool` | Runtime upstream snapshot availability according to API config. |
+| `reason` | `string?` | `feature_disabled` or `source_unavailable` when runtime snapshot is unavailable. |
+| `generated_at_epoch_secs` | `u64` | Snapshot generation time. |
+| `zero` | `ZeroUpstreamData` | Always available zero-cost upstream counters block. |
+| `summary` | `UpstreamSummaryData?` | Runtime upstream aggregate view, null when unavailable. |
+| `upstreams` | `UpstreamStatus[]?` | Per-upstream runtime status rows, null when unavailable. |
+
+#### `UpstreamSummaryData`
+
+| Field | Type | Description |
+| --- | --- | --- |
+| `configured_total` | `usize` | Total configured upstream entries. |
+| `healthy_total` | `usize` | Upstreams currently marked healthy. |
+| `unhealthy_total` | `usize` | Upstreams currently marked unhealthy. |
+| `direct_total` | `usize` | Number of direct upstream entries. |
+| `socks4_total` | `usize` | Number of SOCKS4 upstream entries. |
+| `socks5_total` | `usize` | Number of SOCKS5 upstream entries. |
+
+#### `UpstreamStatus`
+
+| Field | Type | Description |
+| --- | --- | --- |
+| `upstream_id` | `usize` | Runtime upstream index. |
+| `route_kind` | `string` | Upstream route kind: `direct`, `socks4`, `socks5`. |
+| `address` | `string` | Upstream address (`direct` for direct route kind). Authentication fields are intentionally omitted. |
+| `weight` | `u16` | Selection weight. |
+| `scopes` | `string` | Configured scope selector string. |
+| `healthy` | `bool` | Current health flag. |
+| `fails` | `u32` | Consecutive fail counter. |
+| `last_check_age_secs` | `u64` | Seconds since the last health-check update. |
+| `effective_latency_ms` | `f64?` | Effective upstream latency used by selector. |
+| `dc` | `UpstreamDcStatus[]` | Per-DC latency/IP preference snapshot. |
+
+#### `UpstreamDcStatus`
+
+| Field | Type | Description |
+| --- | --- | --- |
+| `dc` | `i16` | Telegram DC id. |
+| `latency_ema_ms` | `f64?` | Per-DC latency EMA value. |
+| `ip_preference` | `string` | Per-DC IP family preference: `unknown`, `prefer_v4`, `prefer_v6`, `both_work`, `unavailable`. |
 #### `ZeroMiddleProxyData`
 | Field | Type | Description |
 | --- | --- | --- |
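The `UpstreamSummaryData` totals follow directly from the per-upstream rows; a sketch of that aggregation (the enum and field names are hypothetical stand-ins for telemt's runtime types):

```rust
// Illustrative aggregation behind the documented summary counters.
#[derive(Clone, Copy, PartialEq)]
enum RouteKind { Direct, Socks4, Socks5 }

struct Upstream { kind: RouteKind, healthy: bool }

struct Summary {
    configured_total: usize,
    healthy_total: usize,
    unhealthy_total: usize,
    direct_total: usize,
    socks4_total: usize,
    socks5_total: usize,
}

fn summarize(ups: &[Upstream]) -> Summary {
    let count_kind = |k: RouteKind| ups.iter().filter(|u| u.kind == k).count();
    let healthy = ups.iter().filter(|u| u.healthy).count();
    Summary {
        configured_total: ups.len(),
        healthy_total: healthy,
        unhealthy_total: ups.len() - healthy, // healthy/unhealthy partition the set
        direct_total: count_kind(RouteKind::Direct),
        socks4_total: count_kind(RouteKind::Socks4),
        socks5_total: count_kind(RouteKind::Socks5),
    }
}
```

Note the invariants a consumer can rely on: `healthy_total + unhealthy_total == configured_total`, and the three route-kind totals also sum to `configured_total`.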
@@ -392,8 +478,11 @@
 Link generation uses active config and enabled modes:
 - `[general.links].public_host/public_port` have priority.
+- If `public_host` is not set, startup-detected public IPs are used (`IPv4`, `IPv6`, or both when available).
 - Fallback host sources: listener `announce`, `announce_ip`, explicit listener `ip`.
 - Legacy fallback: `listen_addr_ipv4` and `listen_addr_ipv6` when routable.
+- Startup-detected IPs are fixed for process lifetime and refreshed on restart.
+- User rows are sorted by `username` in ascending lexical order.
 ### `CreateUserResponse`
 | Field | Type | Description |
@@ -407,21 +496,53 @@
 | --- | --- |
 | `POST /v1/users` | Creates user and validates resulting config before atomic save. |
 | `PATCH /v1/users/{username}` | Partial update of provided fields only. Missing fields remain unchanged. |
-| `POST /v1/users/{username}/rotate-secret` | Replaces secret. Empty body is allowed and auto-generates secret. |
+| `POST /v1/users/{username}/rotate-secret` | Currently returns `404` in runtime route matcher; request schema is reserved for intended behavior. |
 | `DELETE /v1/users/{username}` | Deletes user and related optional settings. Last user deletion is blocked. |
 All mutating endpoints:
 - Respect `read_only` mode.
 - Accept optional `If-Match` for optimistic concurrency.
 - Return new `revision` after successful write.
+- Use process-local mutation lock + atomic write (`tmp + rename`) for config persistence.
+
+## Runtime State Matrix
+
+| Endpoint | `minimal_runtime_enabled=false` | `minimal_runtime_enabled=true` + source unavailable | `minimal_runtime_enabled=true` + source available |
+| --- | --- | --- | --- |
+| `/v1/stats/minimal/all` | `enabled=false`, `reason=feature_disabled`, `data=null` | `enabled=true`, `reason=source_unavailable`, fallback `data` with disabled ME blocks | `enabled=true`, `reason` omitted, full payload |
+| `/v1/stats/me-writers` | `middle_proxy_enabled=false`, `reason=feature_disabled` | `middle_proxy_enabled=false`, `reason=source_unavailable` | `middle_proxy_enabled=true`, runtime snapshot |
+| `/v1/stats/dcs` | `middle_proxy_enabled=false`, `reason=feature_disabled` | `middle_proxy_enabled=false`, `reason=source_unavailable` | `middle_proxy_enabled=true`, runtime snapshot |
+| `/v1/stats/upstreams` | `enabled=false`, `reason=feature_disabled`, `summary/upstreams` omitted, `zero` still present | `enabled=true`, `reason=source_unavailable`, `summary/upstreams` omitted, `zero` present | `enabled=true`, `reason` omitted, `summary/upstreams` present, `zero` present |
+
+`source_unavailable` conditions:
+- ME endpoints: ME pool is absent (for example direct-only mode or failed ME initialization).
+- Upstreams endpoint: non-blocking upstream snapshot lock is unavailable at request time.
+
+## Serialization Rules
+
+- Success responses always include `revision`.
+- Error responses never include `revision`; they include `request_id`.
+- Optional fields with `skip_serializing_if` are omitted when absent.
+- Nullable payload fields may still be `null` where contract uses `?` (for example `UserInfo` option fields).
+- For `/v1/stats/upstreams`, authentication details of SOCKS upstreams are intentionally omitted.
 ## Operational Notes
 | Topic | Details |
 | --- | --- |
-| API startup | API binds only when `[server.api].enabled=true`. |
-| Restart requirements | Changes in `server.api` settings require process restart. |
+| API startup | API listener is spawned only when `[server.api].enabled=true`. |
+| `listen` port `0` | API spawn is skipped when parsed listen port is `0` (treated as disabled bind target). |
+| Bind failure | Failed API bind logs warning and API task exits (no auto-retry loop). |
+| ME runtime status endpoints | `/v1/stats/me-writers`, `/v1/stats/dcs`, `/v1/stats/minimal/all` require `[server.api].minimal_runtime_enabled=true`; otherwise they return disabled payload with `reason=feature_disabled`. |
+| Upstream runtime endpoint | `/v1/stats/upstreams` always returns `zero`, but runtime fields (`summary`, `upstreams`) require `[server.api].minimal_runtime_enabled=true`. |
+| Restart requirements | `server.api` changes are restart-required for predictable behavior. |
+| Hot-reload nuance | A pure `server.api`-only config change may not propagate through watcher broadcast; a mixed change (with hot fields) may propagate API flags while still warning that restart is required. |
 | Runtime apply path | Successful writes are picked up by existing config watcher/hot-reload path. |
 | Exposure | Built-in TLS/mTLS is not provided. Use loopback bind + reverse proxy if needed. |
 | Pagination | User list currently has no pagination/filtering. |
 | Serialization side effect | Config comments/manual formatting are not preserved on write. |
+
+## Known Limitations (Current Release)
+
+- `POST /v1/users/{username}/rotate-secret` is currently unreachable in route matcher and returns `404`.
+- API runtime controls under `server.api` are documented as restart-required; hot-reload behavior for these fields is not strictly uniform in all change combinations.
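The "tmp + rename" persistence step the mutating endpoints rely on can be sketched in a few lines of std-only Rust; the temp-file naming and function name here are illustrative, not telemt's actual implementation:

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Write the whole new config to a sibling temp file, flush it to disk,
// then rename over the target so readers see either the old or the new
// file, never a partial write.
fn atomic_save(path: &Path, contents: &str) -> std::io::Result<()> {
    let tmp = path.with_extension("toml.tmp");
    {
        let mut f = fs::File::create(&tmp)?;
        f.write_all(contents.as_bytes())?;
        f.sync_all()?; // ensure bytes are durable before the rename
    }
    // On POSIX, rename within one filesystem atomically replaces the target.
    fs::rename(&tmp, path)
}
```

The process-local mutation lock mentioned above serializes concurrent API writers; the rename guarantees readers (including the config watcher) never observe a half-written file.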

View File

@@ -60,6 +60,7 @@ paste your config
 # === General Settings ===
 [general]
 # ad_tag = "00000000000000000000000000000000"
+use_middle_proxy = false
 [general.modes]
 classic = false

View File

@@ -60,6 +60,7 @@ nano /etc/telemt.toml
 # === General Settings ===
 [general]
 # ad_tag = "00000000000000000000000000000000"
+use_middle_proxy = false
 [general.modes]
 classic = false

View File

@@ -1,5 +1,5 @@
use std::convert::Infallible; use std::convert::Infallible;
use std::net::SocketAddr; use std::net::{IpAddr, SocketAddr};
use std::path::PathBuf; use std::path::PathBuf;
use std::sync::Arc; use std::sync::Arc;
use std::sync::atomic::{AtomicU64, Ordering}; use std::sync::atomic::{AtomicU64, Ordering};
@@ -20,6 +20,7 @@ use crate::config::ProxyConfig;
use crate::ip_tracker::UserIpTracker; use crate::ip_tracker::UserIpTracker;
use crate::stats::Stats; use crate::stats::Stats;
use crate::transport::middle_proxy::MePool; use crate::transport::middle_proxy::MePool;
use crate::transport::UpstreamManager;
mod config_store; mod config_store;
mod model; mod model;
@@ -33,7 +34,7 @@ use model::{
}; };
use runtime_stats::{ use runtime_stats::{
MinimalCacheEntry, build_dcs_data, build_me_writers_data, build_minimal_all_data, MinimalCacheEntry, build_dcs_data, build_me_writers_data, build_minimal_all_data,
build_zero_all_data, build_upstreams_data, build_zero_all_data,
}; };
use users::{create_user, delete_user, patch_user, rotate_secret, users_from_config}; use users::{create_user, delete_user, patch_user, rotate_secret, users_from_config};
@@ -42,7 +43,10 @@ pub(super) struct ApiShared {
pub(super) stats: Arc<Stats>, pub(super) stats: Arc<Stats>,
pub(super) ip_tracker: Arc<UserIpTracker>, pub(super) ip_tracker: Arc<UserIpTracker>,
pub(super) me_pool: Option<Arc<MePool>>, pub(super) me_pool: Option<Arc<MePool>>,
pub(super) upstream_manager: Arc<UpstreamManager>,
pub(super) config_path: PathBuf, pub(super) config_path: PathBuf,
pub(super) startup_detected_ip_v4: Option<IpAddr>,
pub(super) startup_detected_ip_v6: Option<IpAddr>,
pub(super) mutation_lock: Arc<Mutex<()>>, pub(super) mutation_lock: Arc<Mutex<()>>,
pub(super) minimal_cache: Arc<Mutex<Option<MinimalCacheEntry>>>, pub(super) minimal_cache: Arc<Mutex<Option<MinimalCacheEntry>>>,
pub(super) request_id: Arc<AtomicU64>, pub(super) request_id: Arc<AtomicU64>,
@@ -59,8 +63,11 @@ pub async fn serve(
stats: Arc<Stats>, stats: Arc<Stats>,
ip_tracker: Arc<UserIpTracker>, ip_tracker: Arc<UserIpTracker>,
me_pool: Option<Arc<MePool>>, me_pool: Option<Arc<MePool>>,
upstream_manager: Arc<UpstreamManager>,
config_rx: watch::Receiver<Arc<ProxyConfig>>, config_rx: watch::Receiver<Arc<ProxyConfig>>,
config_path: PathBuf, config_path: PathBuf,
startup_detected_ip_v4: Option<IpAddr>,
startup_detected_ip_v6: Option<IpAddr>,
) { ) {
let listener = match TcpListener::bind(listen).await { let listener = match TcpListener::bind(listen).await {
Ok(listener) => listener, Ok(listener) => listener,
@@ -80,7 +87,10 @@ pub async fn serve(
stats, stats,
ip_tracker, ip_tracker,
me_pool, me_pool,
upstream_manager,
config_path, config_path,
startup_detected_ip_v4,
startup_detected_ip_v6,
mutation_lock: Arc::new(Mutex::new(())), mutation_lock: Arc::new(Mutex::new(())),
minimal_cache: Arc::new(Mutex::new(None)), minimal_cache: Arc::new(Mutex::new(None)),
request_id: Arc::new(AtomicU64::new(1)), request_id: Arc::new(AtomicU64::new(1)),
@@ -195,6 +205,11 @@ async fn handle(
let data = build_zero_all_data(&shared.stats, cfg.access.users.len()); let data = build_zero_all_data(&shared.stats, cfg.access.users.len());
Ok(success_response(StatusCode::OK, data, revision)) Ok(success_response(StatusCode::OK, data, revision))
} }
("GET", "/v1/stats/upstreams") => {
let revision = current_revision(&shared.config_path).await?;
let data = build_upstreams_data(shared.as_ref(), api_cfg);
Ok(success_response(StatusCode::OK, data, revision))
}
("GET", "/v1/stats/minimal/all") => { ("GET", "/v1/stats/minimal/all") => {
let revision = current_revision(&shared.config_path).await?; let revision = current_revision(&shared.config_path).await?;
let data = build_minimal_all_data(shared.as_ref(), api_cfg).await; let data = build_minimal_all_data(shared.as_ref(), api_cfg).await;
@@ -212,7 +227,14 @@ async fn handle(
} }
("GET", "/v1/stats/users") | ("GET", "/v1/users") => { ("GET", "/v1/stats/users") | ("GET", "/v1/users") => {
let revision = current_revision(&shared.config_path).await?; let revision = current_revision(&shared.config_path).await?;
let users = users_from_config(&cfg, &shared.stats, &shared.ip_tracker).await; let users = users_from_config(
&cfg,
&shared.stats,
&shared.ip_tracker,
shared.startup_detected_ip_v4,
shared.startup_detected_ip_v6,
)
.await;
Ok(success_response(StatusCode::OK, users, revision)) Ok(success_response(StatusCode::OK, users, revision))
} }
("POST", "/v1/users") => { ("POST", "/v1/users") => {
@@ -238,7 +260,14 @@ async fn handle(
{ {
if method == Method::GET { if method == Method::GET {
let revision = current_revision(&shared.config_path).await?; let revision = current_revision(&shared.config_path).await?;
let users = users_from_config(&cfg, &shared.stats, &shared.ip_tracker).await; let users = users_from_config(
&cfg,
&shared.stats,
&shared.ip_tracker,
shared.startup_detected_ip_v4,
shared.startup_detected_ip_v6,
)
.await;
if let Some(user_info) = users.into_iter().find(|entry| entry.username == user)
{
return Ok(success_response(StatusCode::OK, user_info, revision));


@@ -1,3 +1,5 @@
use std::net::IpAddr;
use chrono::{DateTime, Utc};
use hyper::StatusCode;
use rand::Rng;
@@ -103,6 +105,50 @@ pub(super) struct ZeroUpstreamData {
pub(super) connect_duration_fail_bucket_gt_1000ms: u64,
}
#[derive(Serialize, Clone)]
pub(super) struct UpstreamDcStatus {
pub(super) dc: i16,
pub(super) latency_ema_ms: Option<f64>,
pub(super) ip_preference: &'static str,
}
#[derive(Serialize, Clone)]
pub(super) struct UpstreamStatus {
pub(super) upstream_id: usize,
pub(super) route_kind: &'static str,
pub(super) address: String,
pub(super) weight: u16,
pub(super) scopes: String,
pub(super) healthy: bool,
pub(super) fails: u32,
pub(super) last_check_age_secs: u64,
pub(super) effective_latency_ms: Option<f64>,
pub(super) dc: Vec<UpstreamDcStatus>,
}
#[derive(Serialize, Clone)]
pub(super) struct UpstreamSummaryData {
pub(super) configured_total: usize,
pub(super) healthy_total: usize,
pub(super) unhealthy_total: usize,
pub(super) direct_total: usize,
pub(super) socks4_total: usize,
pub(super) socks5_total: usize,
}
#[derive(Serialize, Clone)]
pub(super) struct UpstreamsData {
pub(super) enabled: bool,
#[serde(skip_serializing_if = "Option::is_none")]
pub(super) reason: Option<&'static str>,
pub(super) generated_at_epoch_secs: u64,
pub(super) zero: ZeroUpstreamData,
#[serde(skip_serializing_if = "Option::is_none")]
pub(super) summary: Option<UpstreamSummaryData>,
#[serde(skip_serializing_if = "Option::is_none")]
pub(super) upstreams: Option<Vec<UpstreamStatus>>,
}
#[derive(Serialize, Clone)]
pub(super) struct ZeroMiddleProxyData {
pub(super) keepalive_sent_total: u64,
@@ -325,6 +371,9 @@ pub(super) struct UserInfo {
pub(super) max_unique_ips: Option<usize>,
pub(super) current_connections: u64,
pub(super) active_unique_ips: usize,
pub(super) active_unique_ips_list: Vec<IpAddr>,
pub(super) recent_unique_ips: usize,
pub(super) recent_unique_ips_list: Vec<IpAddr>,
pub(super) total_octets: u64,
pub(super) links: UserLinks,
}


@@ -2,12 +2,15 @@ use std::time::{Duration, Instant, SystemTime, UNIX_EPOCH};
use crate::config::ApiConfig;
use crate::stats::Stats;
use crate::transport::upstream::IpPreference;
use crate::transport::UpstreamRouteKind;
use super::ApiShared;
use super::model::{
DcStatus, DcStatusData, MeWriterStatus, MeWritersData, MeWritersSummary, MinimalAllData,
MinimalAllPayload, MinimalDcPathData, MinimalMeRuntimeData, MinimalQuarantineData,
UpstreamDcStatus, UpstreamStatus, UpstreamSummaryData, UpstreamsData, ZeroAllData,
ZeroCodeCount, ZeroCoreData, ZeroDesyncData, ZeroMiddleProxyData, ZeroPoolData,
ZeroUpstreamData,
};
@@ -41,32 +44,7 @@ pub(super) fn build_zero_all_data(stats: &Stats, configured_users: usize) -> Zer
telemetry_user_enabled: telemetry.user_enabled,
telemetry_me_level: telemetry.me_level.to_string(),
},
upstream: build_zero_upstream_data(stats),
middle_proxy: ZeroMiddleProxyData {
keepalive_sent_total: stats.get_me_keepalive_sent(),
keepalive_failed_total: stats.get_me_keepalive_failed(),
@@ -140,6 +118,102 @@ pub(super) fn build_zero_all_data(stats: &Stats, configured_users: usize) -> Zer
}
}
fn build_zero_upstream_data(stats: &Stats) -> ZeroUpstreamData {
ZeroUpstreamData {
connect_attempt_total: stats.get_upstream_connect_attempt_total(),
connect_success_total: stats.get_upstream_connect_success_total(),
connect_fail_total: stats.get_upstream_connect_fail_total(),
connect_failfast_hard_error_total: stats.get_upstream_connect_failfast_hard_error_total(),
connect_attempts_bucket_1: stats.get_upstream_connect_attempts_bucket_1(),
connect_attempts_bucket_2: stats.get_upstream_connect_attempts_bucket_2(),
connect_attempts_bucket_3_4: stats.get_upstream_connect_attempts_bucket_3_4(),
connect_attempts_bucket_gt_4: stats.get_upstream_connect_attempts_bucket_gt_4(),
connect_duration_success_bucket_le_100ms: stats
.get_upstream_connect_duration_success_bucket_le_100ms(),
connect_duration_success_bucket_101_500ms: stats
.get_upstream_connect_duration_success_bucket_101_500ms(),
connect_duration_success_bucket_501_1000ms: stats
.get_upstream_connect_duration_success_bucket_501_1000ms(),
connect_duration_success_bucket_gt_1000ms: stats
.get_upstream_connect_duration_success_bucket_gt_1000ms(),
connect_duration_fail_bucket_le_100ms: stats.get_upstream_connect_duration_fail_bucket_le_100ms(),
connect_duration_fail_bucket_101_500ms: stats
.get_upstream_connect_duration_fail_bucket_101_500ms(),
connect_duration_fail_bucket_501_1000ms: stats
.get_upstream_connect_duration_fail_bucket_501_1000ms(),
connect_duration_fail_bucket_gt_1000ms: stats
.get_upstream_connect_duration_fail_bucket_gt_1000ms(),
}
}
pub(super) fn build_upstreams_data(shared: &ApiShared, api_cfg: &ApiConfig) -> UpstreamsData {
let generated_at_epoch_secs = now_epoch_secs();
let zero = build_zero_upstream_data(&shared.stats);
if !api_cfg.minimal_runtime_enabled {
return UpstreamsData {
enabled: false,
reason: Some(FEATURE_DISABLED_REASON),
generated_at_epoch_secs,
zero,
summary: None,
upstreams: None,
};
}
let Some(snapshot) = shared.upstream_manager.try_api_snapshot() else {
return UpstreamsData {
enabled: true,
reason: Some(SOURCE_UNAVAILABLE_REASON),
generated_at_epoch_secs,
zero,
summary: None,
upstreams: None,
};
};
let summary = UpstreamSummaryData {
configured_total: snapshot.summary.configured_total,
healthy_total: snapshot.summary.healthy_total,
unhealthy_total: snapshot.summary.unhealthy_total,
direct_total: snapshot.summary.direct_total,
socks4_total: snapshot.summary.socks4_total,
socks5_total: snapshot.summary.socks5_total,
};
let upstreams = snapshot
.upstreams
.into_iter()
.map(|upstream| UpstreamStatus {
upstream_id: upstream.upstream_id,
route_kind: map_route_kind(upstream.route_kind),
address: upstream.address,
weight: upstream.weight,
scopes: upstream.scopes,
healthy: upstream.healthy,
fails: upstream.fails,
last_check_age_secs: upstream.last_check_age_secs,
effective_latency_ms: upstream.effective_latency_ms,
dc: upstream
.dc
.into_iter()
.map(|dc| UpstreamDcStatus {
dc: dc.dc,
latency_ema_ms: dc.latency_ema_ms,
ip_preference: map_ip_preference(dc.ip_preference),
})
.collect(),
})
.collect();
UpstreamsData {
enabled: true,
reason: None,
generated_at_epoch_secs,
zero,
summary: Some(summary),
upstreams: Some(upstreams),
}
}
pub(super) async fn build_minimal_all_data(
shared: &ApiShared,
api_cfg: &ApiConfig,
@@ -384,6 +458,24 @@ fn disabled_dcs(now_epoch_secs: u64, reason: &'static str) -> DcStatusData {
}
}
fn map_route_kind(value: UpstreamRouteKind) -> &'static str {
match value {
UpstreamRouteKind::Direct => "direct",
UpstreamRouteKind::Socks4 => "socks4",
UpstreamRouteKind::Socks5 => "socks5",
}
}
fn map_ip_preference(value: IpPreference) -> &'static str {
match value {
IpPreference::Unknown => "unknown",
IpPreference::PreferV6 => "prefer_v6",
IpPreference::PreferV4 => "prefer_v4",
IpPreference::BothWork => "both_work",
IpPreference::Unavailable => "unavailable",
}
}
fn now_epoch_secs() -> u64 {
SystemTime::now()
.duration_since(UNIX_EPOCH)


@@ -1,4 +1,3 @@
use std::net::IpAddr;
use hyper::StatusCode;
@@ -92,7 +91,14 @@ pub(super) async fn create_user(
shared.ip_tracker.set_user_limit(&body.username, limit).await;
}
let users = users_from_config(
&cfg,
&shared.stats,
&shared.ip_tracker,
shared.startup_detected_ip_v4,
shared.startup_detected_ip_v6,
)
.await;
let user = users
.into_iter()
.find(|entry| entry.username == body.username)
@@ -105,8 +111,16 @@ pub(super) async fn create_user(
max_unique_ips: updated_limit,
current_connections: 0,
active_unique_ips: 0,
active_unique_ips_list: Vec::new(),
recent_unique_ips: 0,
recent_unique_ips_list: Vec::new(),
total_octets: 0,
links: build_user_links(
&cfg,
&secret,
shared.startup_detected_ip_v4,
shared.startup_detected_ip_v6,
),
});
Ok((CreateUserResponse { user, secret }, revision))
@@ -171,7 +185,14 @@ pub(super) async fn patch_user(
if let Some(limit) = updated_limit {
shared.ip_tracker.set_user_limit(user, limit).await;
}
let users = users_from_config(
&cfg,
&shared.stats,
&shared.ip_tracker,
shared.startup_detected_ip_v4,
shared.startup_detected_ip_v6,
)
.await;
let user_info = users
.into_iter()
.find(|entry| entry.username == user)
@@ -211,7 +232,14 @@ pub(super) async fn rotate_secret(
let revision = save_config_to_disk(&shared.config_path, &cfg).await?;
drop(_guard);
let users = users_from_config(
&cfg,
&shared.stats,
&shared.ip_tracker,
shared.startup_detected_ip_v4,
shared.startup_detected_ip_v6,
)
.await;
let user_info = users
.into_iter()
.find(|entry| entry.username == user)
@@ -261,6 +289,7 @@ pub(super) async fn delete_user(
.map_err(|e| ApiFailure::bad_request(format!("config validation failed: {}", e)))?;
let revision = save_config_to_disk(&shared.config_path, &cfg).await?;
drop(_guard);
shared.ip_tracker.remove_user_limit(user).await;
shared.ip_tracker.clear_user_ips(user).await;
Ok((user.to_string(), revision))
@@ -270,24 +299,36 @@ pub(super) async fn users_from_config(
cfg: &ProxyConfig,
stats: &Stats,
ip_tracker: &UserIpTracker,
startup_detected_ip_v4: Option<IpAddr>,
startup_detected_ip_v6: Option<IpAddr>,
) -> Vec<UserInfo> {
let mut names = cfg.access.users.keys().cloned().collect::<Vec<_>>();
names.sort();
let active_ip_lists = ip_tracker.get_active_ips_for_users(&names).await;
let recent_ip_lists = ip_tracker.get_recent_ips_for_users(&names).await;
let mut users = Vec::with_capacity(names.len());
for username in names {
let active_ip_list = active_ip_lists
.get(&username)
.cloned()
.unwrap_or_else(Vec::new);
let recent_ip_list = recent_ip_lists
.get(&username)
.cloned()
.unwrap_or_else(Vec::new);
let links = cfg
.access
.users
.get(&username)
.map(|secret| {
build_user_links(
cfg,
secret,
startup_detected_ip_v4,
startup_detected_ip_v6,
)
})
.unwrap_or(UserLinks {
classic: Vec::new(),
secure: Vec::new(),
@@ -304,7 +345,10 @@ pub(super) async fn users_from_config(
data_quota_bytes: cfg.access.user_data_quota.get(&username).copied(),
max_unique_ips: cfg.access.user_max_unique_ips.get(&username).copied(),
current_connections: stats.get_user_curr_connects(&username),
active_unique_ips: active_ip_list.len(),
active_unique_ips_list: active_ip_list,
recent_unique_ips: recent_ip_list.len(),
recent_unique_ips_list: recent_ip_list,
total_octets: stats.get_user_total_octets(&username),
links,
username,
@@ -313,8 +357,13 @@ pub(super) async fn users_from_config(
users
}
fn build_user_links(
cfg: &ProxyConfig,
secret: &str,
startup_detected_ip_v4: Option<IpAddr>,
startup_detected_ip_v6: Option<IpAddr>,
) -> UserLinks {
let hosts = resolve_link_hosts(cfg, startup_detected_ip_v4, startup_detected_ip_v6);
let port = cfg.general.links.public_port.unwrap_or(cfg.server.port);
let tls_domains = resolve_tls_domains(cfg);
@@ -353,7 +402,11 @@ fn build_user_links(cfg: &ProxyConfig, secret: &str) -> UserLinks {
}
}
fn resolve_link_hosts(
cfg: &ProxyConfig,
startup_detected_ip_v4: Option<IpAddr>,
startup_detected_ip_v6: Option<IpAddr>,
) -> Vec<String> {
if let Some(host) = cfg
.general
.links
@@ -365,6 +418,17 @@ fn resolve_link_hosts(cfg: &ProxyConfig) -> Vec<String> {
return vec![host.to_string()];
}
let mut startup_hosts = Vec::new();
if let Some(ip) = startup_detected_ip_v4 {
push_unique_host(&mut startup_hosts, &ip.to_string());
}
if let Some(ip) = startup_detected_ip_v6 {
push_unique_host(&mut startup_hosts, &ip.to_string());
}
if !startup_hosts.is_empty() {
return startup_hosts;
}
let mut hosts = Vec::new();
for listener in &cfg.server.listeners {
if let Some(host) = listener


@@ -12,6 +12,7 @@ const DEFAULT_ME_SINGLE_ENDPOINT_SHADOW_WRITERS: u8 = 2;
const DEFAULT_ME_ADAPTIVE_FLOOR_IDLE_SECS: u64 = 90;
const DEFAULT_ME_ADAPTIVE_FLOOR_MIN_WRITERS_SINGLE_ENDPOINT: u8 = 1;
const DEFAULT_ME_ADAPTIVE_FLOOR_RECOVER_GRACE_SECS: u64 = 180;
const DEFAULT_USER_MAX_UNIQUE_IPS_WINDOW_SECS: u64 = 30;
const DEFAULT_UPSTREAM_CONNECT_RETRY_ATTEMPTS: u32 = 2;
const DEFAULT_UPSTREAM_UNHEALTHY_FAIL_THRESHOLD: u32 = 5;
const DEFAULT_LISTEN_ADDR_IPV6: &str = "::";
@@ -128,6 +129,10 @@ pub(crate) fn default_unknown_dc_log_path() -> Option<String> {
Some("unknown-dc.txt".to_string())
}
pub(crate) fn default_unknown_dc_file_log_enabled() -> bool {
false
}
pub(crate) fn default_pool_size() -> usize {
8
}
@@ -136,6 +141,14 @@ pub(crate) fn default_proxy_secret_path() -> Option<String> {
Some("proxy-secret".to_string())
}
pub(crate) fn default_proxy_config_v4_cache_path() -> Option<String> {
Some("cache/proxy-config-v4.txt".to_string())
}
pub(crate) fn default_proxy_config_v6_cache_path() -> Option<String> {
Some("cache/proxy-config-v6.txt".to_string())
}
pub(crate) fn default_middle_proxy_nat_stun() -> Option<String> {
None
}
@@ -152,6 +165,14 @@ pub(crate) fn default_middle_proxy_warm_standby() -> usize {
DEFAULT_MIDDLE_PROXY_WARM_STANDBY
}
pub(crate) fn default_me_init_retry_attempts() -> u32 {
0
}
pub(crate) fn default_me2dc_fallback() -> bool {
true
}
pub(crate) fn default_keepalive_interval() -> u64 {
8
}
@@ -264,6 +285,18 @@ pub(crate) fn default_me_route_backpressure_high_watermark_pct() -> u8 {
80
}
pub(crate) fn default_me_route_no_writer_wait_ms() -> u64 {
250
}
pub(crate) fn default_me_route_inline_recovery_attempts() -> u32 {
3
}
pub(crate) fn default_me_route_inline_recovery_wait_ms() -> u64 {
3000
}
pub(crate) fn default_beobachten_minutes() -> u64 {
10
}
@@ -464,6 +497,10 @@ pub(crate) fn default_access_users() -> HashMap<String, String> {
)])
}
pub(crate) fn default_user_max_unique_ips_window_secs() -> u64 {
DEFAULT_USER_MAX_UNIQUE_IPS_WINDOW_SECS
}
// Custom deserializer helpers
#[derive(Deserialize)]


@@ -9,20 +9,17 @@
//! | `general` | `log_level` | Filter updated via `log_level_tx` |
//! | `access` | `user_ad_tags` | Passed on next connection |
//! | `general` | `ad_tag` | Passed on next connection (fallback per-user) |
//! | `general` | `desync_all_full` | Applied immediately |
//! | `general` | `update_every` | Applied to ME updater immediately |
//! | `general` | `me_reinit_*` | Applied to ME reinit scheduler immediately |
//! | `general` | `hardswap` / `me_*_reinit` | Applied on next ME map update |
//! | `general` | `telemetry` / `me_*_policy` | Applied immediately |
//! | `network` | `dns_overrides` | Applied immediately |
//! | `access` | All user/quota fields | Effective immediately |
//!
//! Fields that require re-binding sockets (`server.port`, `censorship.*`,
//! `network.*`, `use_middle_proxy`) are **not** applied; a warning is emitted.
//! Non-hot changes are never mixed into the runtime config snapshot.
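The doc comment above describes snapshot-style propagation: hot fields are copied into a new snapshot and swapped in whole, so readers never observe a half-applied config. A minimal std-only sketch of that pattern (illustrative; the real module propagates changes through tokio `watch` channels, and `HotSnapshot` here is a stand-in for `HotFields`):

```rust
use std::sync::{Arc, RwLock};

// Illustrative hot-field snapshot: readers clone an Arc, the reloader
// swaps in a whole new snapshot atomically under a short write lock.
#[derive(Clone, Debug, PartialEq)]
struct HotSnapshot {
    log_level: String,
    update_every_secs: u64,
}

struct HotConfig {
    current: RwLock<Arc<HotSnapshot>>,
}

impl HotConfig {
    fn new(initial: HotSnapshot) -> Self {
        Self { current: RwLock::new(Arc::new(initial)) }
    }

    /// Readers take a cheap Arc clone; they never hold the lock for long.
    fn snapshot(&self) -> Arc<HotSnapshot> {
        self.current.read().unwrap().clone()
    }

    /// The file watcher calls this after parsing and validating the new config.
    fn apply(&self, next: HotSnapshot) {
        *self.current.write().unwrap() = Arc::new(next);
    }
}

fn main() {
    let cfg = HotConfig::new(HotSnapshot { log_level: "info".into(), update_every_secs: 3600 });
    let before = cfg.snapshot();
    cfg.apply(HotSnapshot { log_level: "debug".into(), update_every_secs: 600 });
    // Old snapshots stay valid; new readers see the updated fields.
    assert_eq!(before.log_level, "info");
    assert_eq!(cfg.snapshot().update_every_secs, 600);
}
```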
use std::net::IpAddr;
use std::path::PathBuf;
@@ -32,7 +29,7 @@ use notify::{EventKind, RecursiveMode, Watcher, recommended_watcher};
use tokio::sync::{mpsc, watch};
use tracing::{error, info, warn};
use crate::config::{LogLevel, MeBindStaleMode, MeFloorMode, MeSocksKdfPolicy, MeTelemetryLevel};
use super::load::ProxyConfig;
// ── Hot fields ────────────────────────────────────────────────────────────────
@@ -43,17 +40,37 @@ pub struct HotFields {
pub log_level: LogLevel,
pub ad_tag: Option<String>,
pub dns_overrides: Vec<String>,
pub desync_all_full: bool,
pub update_every_secs: u64,
pub me_reinit_every_secs: u64,
pub me_reinit_singleflight: bool,
pub me_reinit_coalesce_window_ms: u64,
pub hardswap: bool,
pub me_pool_drain_ttl_secs: u64,
pub me_pool_min_fresh_ratio: f32,
pub me_reinit_drain_timeout_secs: u64,
pub me_hardswap_warmup_delay_min_ms: u64,
pub me_hardswap_warmup_delay_max_ms: u64,
pub me_hardswap_warmup_extra_passes: u8,
pub me_hardswap_warmup_pass_backoff_base_ms: u64,
pub me_bind_stale_mode: MeBindStaleMode,
pub me_bind_stale_ttl_secs: u64,
pub me_secret_atomic_snapshot: bool,
pub me_deterministic_writer_sort: bool,
pub me_single_endpoint_shadow_writers: u8,
pub me_single_endpoint_outage_mode_enabled: bool,
pub me_single_endpoint_outage_disable_quarantine: bool,
pub me_single_endpoint_outage_backoff_min_ms: u64,
pub me_single_endpoint_outage_backoff_max_ms: u64,
pub me_single_endpoint_shadow_rotate_every_secs: u64,
pub me_config_stable_snapshots: u8,
pub me_config_apply_cooldown_secs: u64,
pub me_snapshot_require_http_2xx: bool,
pub me_snapshot_reject_empty_map: bool,
pub me_snapshot_min_proxy_for_lines: u32,
pub proxy_secret_stable_snapshots: u8,
pub proxy_secret_rotate_runtime: bool,
pub proxy_secret_len_max: usize,
pub telemetry_core_enabled: bool,
pub telemetry_user_enabled: bool,
pub telemetry_me_level: MeTelemetryLevel,
@@ -65,7 +82,14 @@ pub struct HotFields {
pub me_route_backpressure_base_timeout_ms: u64,
pub me_route_backpressure_high_timeout_ms: u64,
pub me_route_backpressure_high_watermark_pct: u8,
pub users: std::collections::HashMap<String, String>,
pub user_ad_tags: std::collections::HashMap<String, String>,
pub user_max_tcp_conns: std::collections::HashMap<String, usize>,
pub user_expirations: std::collections::HashMap<String, chrono::DateTime<chrono::Utc>>,
pub user_data_quota: std::collections::HashMap<String, u64>,
pub user_max_unique_ips: std::collections::HashMap<String, usize>,
pub user_max_unique_ips_mode: crate::config::UserMaxUniqueIpsMode,
pub user_max_unique_ips_window_secs: u64,
}
impl HotFields {
@@ -74,17 +98,49 @@ impl HotFields {
log_level: cfg.general.log_level.clone(),
ad_tag: cfg.general.ad_tag.clone(),
dns_overrides: cfg.network.dns_overrides.clone(),
desync_all_full: cfg.general.desync_all_full,
update_every_secs: cfg.general.effective_update_every_secs(),
me_reinit_every_secs: cfg.general.me_reinit_every_secs,
me_reinit_singleflight: cfg.general.me_reinit_singleflight,
me_reinit_coalesce_window_ms: cfg.general.me_reinit_coalesce_window_ms,
hardswap: cfg.general.hardswap,
me_pool_drain_ttl_secs: cfg.general.me_pool_drain_ttl_secs,
me_pool_min_fresh_ratio: cfg.general.me_pool_min_fresh_ratio,
me_reinit_drain_timeout_secs: cfg.general.me_reinit_drain_timeout_secs,
me_hardswap_warmup_delay_min_ms: cfg.general.me_hardswap_warmup_delay_min_ms,
me_hardswap_warmup_delay_max_ms: cfg.general.me_hardswap_warmup_delay_max_ms,
me_hardswap_warmup_extra_passes: cfg.general.me_hardswap_warmup_extra_passes,
me_hardswap_warmup_pass_backoff_base_ms: cfg
.general
.me_hardswap_warmup_pass_backoff_base_ms,
me_bind_stale_mode: cfg.general.me_bind_stale_mode,
me_bind_stale_ttl_secs: cfg.general.me_bind_stale_ttl_secs,
me_secret_atomic_snapshot: cfg.general.me_secret_atomic_snapshot,
me_deterministic_writer_sort: cfg.general.me_deterministic_writer_sort,
me_single_endpoint_shadow_writers: cfg.general.me_single_endpoint_shadow_writers,
me_single_endpoint_outage_mode_enabled: cfg
.general
.me_single_endpoint_outage_mode_enabled,
me_single_endpoint_outage_disable_quarantine: cfg
.general
.me_single_endpoint_outage_disable_quarantine,
me_single_endpoint_outage_backoff_min_ms: cfg
.general
.me_single_endpoint_outage_backoff_min_ms,
me_single_endpoint_outage_backoff_max_ms: cfg
.general
.me_single_endpoint_outage_backoff_max_ms,
me_single_endpoint_shadow_rotate_every_secs: cfg
.general
.me_single_endpoint_shadow_rotate_every_secs,
me_config_stable_snapshots: cfg.general.me_config_stable_snapshots,
me_config_apply_cooldown_secs: cfg.general.me_config_apply_cooldown_secs,
me_snapshot_require_http_2xx: cfg.general.me_snapshot_require_http_2xx,
me_snapshot_reject_empty_map: cfg.general.me_snapshot_reject_empty_map,
me_snapshot_min_proxy_for_lines: cfg.general.me_snapshot_min_proxy_for_lines,
proxy_secret_stable_snapshots: cfg.general.proxy_secret_stable_snapshots,
proxy_secret_rotate_runtime: cfg.general.proxy_secret_rotate_runtime,
proxy_secret_len_max: cfg.general.proxy_secret_len_max,
telemetry_core_enabled: cfg.general.telemetry.core_enabled,
telemetry_user_enabled: cfg.general.telemetry.user_enabled,
telemetry_me_level: cfg.general.telemetry.me_level,
@@ -100,16 +156,149 @@ impl HotFields {
me_route_backpressure_base_timeout_ms: cfg.general.me_route_backpressure_base_timeout_ms,
me_route_backpressure_high_timeout_ms: cfg.general.me_route_backpressure_high_timeout_ms,
me_route_backpressure_high_watermark_pct: cfg.general.me_route_backpressure_high_watermark_pct,
users: cfg.access.users.clone(),
user_ad_tags: cfg.access.user_ad_tags.clone(),
user_max_tcp_conns: cfg.access.user_max_tcp_conns.clone(),
user_expirations: cfg.access.user_expirations.clone(),
user_data_quota: cfg.access.user_data_quota.clone(),
user_max_unique_ips: cfg.access.user_max_unique_ips.clone(),
user_max_unique_ips_mode: cfg.access.user_max_unique_ips_mode,
user_max_unique_ips_window_secs: cfg.access.user_max_unique_ips_window_secs,
}
}
}
// ── Helpers ───────────────────────────────────────────────────────────────────
fn canonicalize_json(value: &mut serde_json::Value) {
match value {
serde_json::Value::Object(map) => {
let mut pairs: Vec<(String, serde_json::Value)> =
std::mem::take(map).into_iter().collect();
pairs.sort_by(|a, b| a.0.cmp(&b.0));
for (_, item) in pairs.iter_mut() {
canonicalize_json(item);
}
for (key, item) in pairs {
map.insert(key, item);
}
}
serde_json::Value::Array(items) => {
for item in items {
canonicalize_json(item);
}
}
_ => {}
}
}
fn config_equal(lhs: &ProxyConfig, rhs: &ProxyConfig) -> bool {
let mut left = match serde_json::to_value(lhs) {
Ok(value) => value,
Err(_) => return false,
};
let mut right = match serde_json::to_value(rhs) {
Ok(value) => value,
Err(_) => return false,
};
canonicalize_json(&mut left);
canonicalize_json(&mut right);
left == right
}
fn listeners_equal(
lhs: &[crate::config::ListenerConfig],
rhs: &[crate::config::ListenerConfig],
) -> bool {
if lhs.len() != rhs.len() {
return false;
}
lhs.iter().zip(rhs.iter()).all(|(a, b)| {
a.ip == b.ip
&& a.announce == b.announce
&& a.announce_ip == b.announce_ip
&& a.proxy_protocol == b.proxy_protocol
&& a.reuse_allow == b.reuse_allow
})
}
fn overlay_hot_fields(old: &ProxyConfig, new: &ProxyConfig) -> ProxyConfig {
let mut cfg = old.clone();
cfg.general.log_level = new.general.log_level.clone();
cfg.general.ad_tag = new.general.ad_tag.clone();
cfg.network.dns_overrides = new.network.dns_overrides.clone();
cfg.general.desync_all_full = new.general.desync_all_full;
cfg.general.update_every = new.general.update_every;
cfg.general.proxy_secret_auto_reload_secs = new.general.proxy_secret_auto_reload_secs;
cfg.general.proxy_config_auto_reload_secs = new.general.proxy_config_auto_reload_secs;
cfg.general.me_reinit_every_secs = new.general.me_reinit_every_secs;
cfg.general.me_reinit_singleflight = new.general.me_reinit_singleflight;
cfg.general.me_reinit_coalesce_window_ms = new.general.me_reinit_coalesce_window_ms;
cfg.general.hardswap = new.general.hardswap;
cfg.general.me_pool_drain_ttl_secs = new.general.me_pool_drain_ttl_secs;
cfg.general.me_pool_min_fresh_ratio = new.general.me_pool_min_fresh_ratio;
cfg.general.me_reinit_drain_timeout_secs = new.general.me_reinit_drain_timeout_secs;
cfg.general.me_hardswap_warmup_delay_min_ms = new.general.me_hardswap_warmup_delay_min_ms;
cfg.general.me_hardswap_warmup_delay_max_ms = new.general.me_hardswap_warmup_delay_max_ms;
cfg.general.me_hardswap_warmup_extra_passes = new.general.me_hardswap_warmup_extra_passes;
cfg.general.me_hardswap_warmup_pass_backoff_base_ms =
new.general.me_hardswap_warmup_pass_backoff_base_ms;
cfg.general.me_bind_stale_mode = new.general.me_bind_stale_mode;
cfg.general.me_bind_stale_ttl_secs = new.general.me_bind_stale_ttl_secs;
cfg.general.me_secret_atomic_snapshot = new.general.me_secret_atomic_snapshot;
cfg.general.me_deterministic_writer_sort = new.general.me_deterministic_writer_sort;
cfg.general.me_single_endpoint_shadow_writers = new.general.me_single_endpoint_shadow_writers;
cfg.general.me_single_endpoint_outage_mode_enabled =
new.general.me_single_endpoint_outage_mode_enabled;
cfg.general.me_single_endpoint_outage_disable_quarantine =
new.general.me_single_endpoint_outage_disable_quarantine;
cfg.general.me_single_endpoint_outage_backoff_min_ms =
new.general.me_single_endpoint_outage_backoff_min_ms;
cfg.general.me_single_endpoint_outage_backoff_max_ms =
new.general.me_single_endpoint_outage_backoff_max_ms;
cfg.general.me_single_endpoint_shadow_rotate_every_secs =
new.general.me_single_endpoint_shadow_rotate_every_secs;
cfg.general.me_config_stable_snapshots = new.general.me_config_stable_snapshots;
cfg.general.me_config_apply_cooldown_secs = new.general.me_config_apply_cooldown_secs;
cfg.general.me_snapshot_require_http_2xx = new.general.me_snapshot_require_http_2xx;
cfg.general.me_snapshot_reject_empty_map = new.general.me_snapshot_reject_empty_map;
cfg.general.me_snapshot_min_proxy_for_lines = new.general.me_snapshot_min_proxy_for_lines;
cfg.general.proxy_secret_stable_snapshots = new.general.proxy_secret_stable_snapshots;
cfg.general.proxy_secret_rotate_runtime = new.general.proxy_secret_rotate_runtime;
cfg.general.proxy_secret_len_max = new.general.proxy_secret_len_max;
cfg.general.telemetry = new.general.telemetry.clone();
cfg.general.me_socks_kdf_policy = new.general.me_socks_kdf_policy;
cfg.general.me_floor_mode = new.general.me_floor_mode;
cfg.general.me_adaptive_floor_idle_secs = new.general.me_adaptive_floor_idle_secs;
cfg.general.me_adaptive_floor_min_writers_single_endpoint =
new.general.me_adaptive_floor_min_writers_single_endpoint;
cfg.general.me_adaptive_floor_recover_grace_secs =
new.general.me_adaptive_floor_recover_grace_secs;
cfg.general.me_route_backpressure_base_timeout_ms =
new.general.me_route_backpressure_base_timeout_ms;
cfg.general.me_route_backpressure_high_timeout_ms =
new.general.me_route_backpressure_high_timeout_ms;
cfg.general.me_route_backpressure_high_watermark_pct =
new.general.me_route_backpressure_high_watermark_pct;
cfg.access.users = new.access.users.clone();
cfg.access.user_ad_tags = new.access.user_ad_tags.clone();
cfg.access.user_max_tcp_conns = new.access.user_max_tcp_conns.clone();
cfg.access.user_expirations = new.access.user_expirations.clone();
cfg.access.user_data_quota = new.access.user_data_quota.clone();
cfg.access.user_max_unique_ips = new.access.user_max_unique_ips.clone();
cfg.access.user_max_unique_ips_mode = new.access.user_max_unique_ips_mode;
cfg.access.user_max_unique_ips_window_secs = new.access.user_max_unique_ips_window_secs;
cfg
}
/// Warn if any non-hot fields changed (require restart).
fn warn_non_hot_changes(old: &ProxyConfig, new: &ProxyConfig, non_hot_changed: bool) {
    let mut warned = false;
    if old.server.port != new.server.port {
        warned = true;
        warn!(
            "config reload: server.port changed ({} → {}); restart required",
            old.server.port, new.server.port
@@ -125,23 +314,111 @@ fn warn_non_hot_changes(old: &ProxyConfig, new: &ProxyConfig) {
        != new.server.api.minimal_runtime_cache_ttl_ms
        || old.server.api.read_only != new.server.api.read_only
    {
        warned = true;
        warn!("config reload: server.api changed; restart required");
    }
if old.server.proxy_protocol != new.server.proxy_protocol
|| !listeners_equal(&old.server.listeners, &new.server.listeners)
|| old.server.listen_addr_ipv4 != new.server.listen_addr_ipv4
|| old.server.listen_addr_ipv6 != new.server.listen_addr_ipv6
|| old.server.listen_tcp != new.server.listen_tcp
|| old.server.listen_unix_sock != new.server.listen_unix_sock
|| old.server.listen_unix_sock_perm != new.server.listen_unix_sock_perm
{
warned = true;
warn!("config reload: server listener settings changed; restart required");
}
if old.censorship.tls_domain != new.censorship.tls_domain
|| old.censorship.tls_domains != new.censorship.tls_domains
|| old.censorship.mask != new.censorship.mask
|| old.censorship.mask_host != new.censorship.mask_host
|| old.censorship.mask_port != new.censorship.mask_port
|| old.censorship.mask_unix_sock != new.censorship.mask_unix_sock
|| old.censorship.fake_cert_len != new.censorship.fake_cert_len
|| old.censorship.tls_emulation != new.censorship.tls_emulation
|| old.censorship.tls_front_dir != new.censorship.tls_front_dir
|| old.censorship.server_hello_delay_min_ms != new.censorship.server_hello_delay_min_ms
|| old.censorship.server_hello_delay_max_ms != new.censorship.server_hello_delay_max_ms
|| old.censorship.tls_new_session_tickets != new.censorship.tls_new_session_tickets
|| old.censorship.tls_full_cert_ttl_secs != new.censorship.tls_full_cert_ttl_secs
|| old.censorship.alpn_enforce != new.censorship.alpn_enforce
|| old.censorship.mask_proxy_protocol != new.censorship.mask_proxy_protocol
{
warned = true;
warn!("config reload: censorship settings changed; restart required");
}
    if old.censorship.tls_domain != new.censorship.tls_domain {
        warned = true;
        warn!(
            "config reload: censorship.tls_domain changed ('{}' → '{}'); restart required",
            old.censorship.tls_domain, new.censorship.tls_domain
        );
    }
    if old.network.ipv4 != new.network.ipv4 || old.network.ipv6 != new.network.ipv6 {
        warned = true;
        warn!("config reload: network.ipv4/ipv6 changed; restart required");
    }
if old.network.prefer != new.network.prefer
|| old.network.multipath != new.network.multipath
|| old.network.stun_use != new.network.stun_use
|| old.network.stun_servers != new.network.stun_servers
|| old.network.stun_tcp_fallback != new.network.stun_tcp_fallback
|| old.network.http_ip_detect_urls != new.network.http_ip_detect_urls
|| old.network.cache_public_ip_path != new.network.cache_public_ip_path
{
warned = true;
warn!("config reload: non-hot network settings changed; restart required");
}
    if old.general.use_middle_proxy != new.general.use_middle_proxy {
        warned = true;
        warn!("config reload: use_middle_proxy changed; restart required");
    }
    if old.general.stun_nat_probe_concurrency != new.general.stun_nat_probe_concurrency {
        warned = true;
        warn!("config reload: general.stun_nat_probe_concurrency changed; restart required");
    }
if old.general.middle_proxy_pool_size != new.general.middle_proxy_pool_size {
warned = true;
warn!("config reload: general.middle_proxy_pool_size changed; restart required");
}
if old.general.me_route_no_writer_mode != new.general.me_route_no_writer_mode
|| old.general.me_route_no_writer_wait_ms != new.general.me_route_no_writer_wait_ms
|| old.general.me_route_inline_recovery_attempts
!= new.general.me_route_inline_recovery_attempts
|| old.general.me_route_inline_recovery_wait_ms
!= new.general.me_route_inline_recovery_wait_ms
{
warned = true;
warn!("config reload: general.me_route_no_writer_* changed; restart required");
}
if old.general.unknown_dc_log_path != new.general.unknown_dc_log_path
|| old.general.unknown_dc_file_log_enabled != new.general.unknown_dc_file_log_enabled
{
warned = true;
warn!("config reload: general.unknown_dc_* changed; restart required");
}
if old.general.me_init_retry_attempts != new.general.me_init_retry_attempts {
warned = true;
warn!("config reload: general.me_init_retry_attempts changed; restart required");
}
if old.general.me2dc_fallback != new.general.me2dc_fallback {
warned = true;
warn!("config reload: general.me2dc_fallback changed; restart required");
}
if old.general.proxy_config_v4_cache_path != new.general.proxy_config_v4_cache_path
|| old.general.proxy_config_v6_cache_path != new.general.proxy_config_v6_cache_path
{
warned = true;
warn!("config reload: general.proxy_config_*_cache_path changed; restart required");
}
if old.general.me_keepalive_enabled != new.general.me_keepalive_enabled
|| old.general.me_keepalive_interval_secs != new.general.me_keepalive_interval_secs
|| old.general.me_keepalive_jitter_secs != new.general.me_keepalive_jitter_secs
|| old.general.me_keepalive_payload_random != new.general.me_keepalive_payload_random
{
warned = true;
warn!("config reload: general.me_keepalive_* changed; restart required");
}
    if old.general.upstream_connect_retry_attempts != new.general.upstream_connect_retry_attempts
        || old.general.upstream_connect_retry_backoff_ms
            != new.general.upstream_connect_retry_backoff_ms
@@ -151,8 +428,12 @@ fn warn_non_hot_changes(old: &ProxyConfig, new: &ProxyConfig) {
            != new.general.upstream_connect_failfast_hard_errors
        || old.general.rpc_proxy_req_every != new.general.rpc_proxy_req_every
    {
        warned = true;
        warn!("config reload: general.upstream_* changed; restart required");
    }
if non_hot_changed && !warned {
warn!("config reload: one or more non-hot fields changed; restart required");
}
}

/// Resolve the public host for link generation — mirrors the logic in main.rs.
@@ -235,10 +516,10 @@ fn log_changes(
        log_tx.send(new_hot.log_level.clone()).ok();
    }
    if old_hot.user_ad_tags != new_hot.user_ad_tags {
        info!(
            "config reload: user_ad_tags updated ({} entries)",
            new_hot.user_ad_tags.len(),
        );
    }
@@ -253,13 +534,6 @@ fn log_changes(
        );
    }
if old_hot.middle_proxy_pool_size != new_hot.middle_proxy_pool_size {
info!(
"config reload: middle_proxy_pool_size: {} → {}",
old_hot.middle_proxy_pool_size, new_hot.middle_proxy_pool_size,
);
}
    if old_hot.desync_all_full != new_hot.desync_all_full {
        info!(
            "config reload: desync_all_full: {} → {}",
@@ -273,6 +547,17 @@ fn log_changes(
            old_hot.update_every_secs, new_hot.update_every_secs,
        );
    }
if old_hot.me_reinit_every_secs != new_hot.me_reinit_every_secs
|| old_hot.me_reinit_singleflight != new_hot.me_reinit_singleflight
|| old_hot.me_reinit_coalesce_window_ms != new_hot.me_reinit_coalesce_window_ms
{
info!(
"config reload: me_reinit: interval={}s singleflight={} coalesce={}ms",
new_hot.me_reinit_every_secs,
new_hot.me_reinit_singleflight,
new_hot.me_reinit_coalesce_window_ms
);
}
    if old_hot.hardswap != new_hot.hardswap {
        info!(
@@ -301,18 +586,84 @@ fn log_changes(
            old_hot.me_reinit_drain_timeout_secs, new_hot.me_reinit_drain_timeout_secs,
        );
    }
    if old_hot.me_hardswap_warmup_delay_min_ms != new_hot.me_hardswap_warmup_delay_min_ms
        || old_hot.me_hardswap_warmup_delay_max_ms != new_hot.me_hardswap_warmup_delay_max_ms
        || old_hot.me_hardswap_warmup_extra_passes != new_hot.me_hardswap_warmup_extra_passes
        || old_hot.me_hardswap_warmup_pass_backoff_base_ms
            != new_hot.me_hardswap_warmup_pass_backoff_base_ms
    {
        info!(
            "config reload: me_hardswap_warmup: min={}ms max={}ms extra_passes={} pass_backoff={}ms",
            new_hot.me_hardswap_warmup_delay_min_ms,
            new_hot.me_hardswap_warmup_delay_max_ms,
            new_hot.me_hardswap_warmup_extra_passes,
            new_hot.me_hardswap_warmup_pass_backoff_base_ms
        );
    }
if old_hot.me_bind_stale_mode != new_hot.me_bind_stale_mode
|| old_hot.me_bind_stale_ttl_secs != new_hot.me_bind_stale_ttl_secs
{
info!(
"config reload: me_bind_stale: mode={:?} ttl={}s",
new_hot.me_bind_stale_mode,
new_hot.me_bind_stale_ttl_secs
);
}
if old_hot.me_secret_atomic_snapshot != new_hot.me_secret_atomic_snapshot
|| old_hot.me_deterministic_writer_sort != new_hot.me_deterministic_writer_sort
{
info!(
"config reload: me_runtime_flags: secret_atomic_snapshot={} deterministic_sort={}",
new_hot.me_secret_atomic_snapshot,
new_hot.me_deterministic_writer_sort
);
}
if old_hot.me_single_endpoint_shadow_writers != new_hot.me_single_endpoint_shadow_writers
|| old_hot.me_single_endpoint_outage_mode_enabled
!= new_hot.me_single_endpoint_outage_mode_enabled
|| old_hot.me_single_endpoint_outage_disable_quarantine
!= new_hot.me_single_endpoint_outage_disable_quarantine
|| old_hot.me_single_endpoint_outage_backoff_min_ms
!= new_hot.me_single_endpoint_outage_backoff_min_ms
|| old_hot.me_single_endpoint_outage_backoff_max_ms
!= new_hot.me_single_endpoint_outage_backoff_max_ms
|| old_hot.me_single_endpoint_shadow_rotate_every_secs
!= new_hot.me_single_endpoint_shadow_rotate_every_secs
{
info!(
"config reload: me_single_endpoint: shadow={} outage_enabled={} disable_quarantine={} backoff=[{}..{}]ms rotate={}s",
new_hot.me_single_endpoint_shadow_writers,
new_hot.me_single_endpoint_outage_mode_enabled,
new_hot.me_single_endpoint_outage_disable_quarantine,
new_hot.me_single_endpoint_outage_backoff_min_ms,
new_hot.me_single_endpoint_outage_backoff_max_ms,
new_hot.me_single_endpoint_shadow_rotate_every_secs
);
}
if old_hot.me_config_stable_snapshots != new_hot.me_config_stable_snapshots
|| old_hot.me_config_apply_cooldown_secs != new_hot.me_config_apply_cooldown_secs
|| old_hot.me_snapshot_require_http_2xx != new_hot.me_snapshot_require_http_2xx
|| old_hot.me_snapshot_reject_empty_map != new_hot.me_snapshot_reject_empty_map
|| old_hot.me_snapshot_min_proxy_for_lines != new_hot.me_snapshot_min_proxy_for_lines
{
info!(
"config reload: me_snapshot_guard: stable={} cooldown={}s require_2xx={} reject_empty={} min_proxy_for={}",
new_hot.me_config_stable_snapshots,
new_hot.me_config_apply_cooldown_secs,
new_hot.me_snapshot_require_http_2xx,
new_hot.me_snapshot_reject_empty_map,
new_hot.me_snapshot_min_proxy_for_lines
);
}
if old_hot.proxy_secret_stable_snapshots != new_hot.proxy_secret_stable_snapshots
|| old_hot.proxy_secret_rotate_runtime != new_hot.proxy_secret_rotate_runtime
|| old_hot.proxy_secret_len_max != new_hot.proxy_secret_len_max
{
info!(
"config reload: proxy_secret_runtime: stable={} rotate={} len_max={}",
new_hot.proxy_secret_stable_snapshots,
new_hot.proxy_secret_rotate_runtime,
new_hot.proxy_secret_len_max
        );
    }
@@ -367,21 +718,21 @@ fn log_changes(
        );
    }
    if old_hot.users != new_hot.users {
        let mut added: Vec<&String> = new_hot.users.keys()
            .filter(|u| !old_hot.users.contains_key(*u))
            .collect();
        added.sort();
        let mut removed: Vec<&String> = old_hot.users.keys()
            .filter(|u| !new_hot.users.contains_key(*u))
            .collect();
        removed.sort();
        let mut changed: Vec<&String> = new_hot.users.keys()
            .filter(|u| {
                old_hot.users.get(*u)
                    .map(|s| s != &new_hot.users[*u])
                    .unwrap_or(false)
            })
            .collect();
@@ -395,7 +746,7 @@ fn log_changes(
        let host = resolve_link_host(new_cfg, detected_ip_v4, detected_ip_v6);
        let port = new_cfg.general.links.public_port.unwrap_or(new_cfg.server.port);
        for user in &added {
            if let Some(secret) = new_hot.users.get(*user) {
                print_user_links(user, secret, &host, port, new_cfg);
            }
        }
@@ -414,28 +765,38 @@ fn log_changes(
        }
    }
    if old_hot.user_max_tcp_conns != new_hot.user_max_tcp_conns {
        info!(
            "config reload: user_max_tcp_conns updated ({} entries)",
            new_hot.user_max_tcp_conns.len()
        );
    }
    if old_hot.user_expirations != new_hot.user_expirations {
        info!(
            "config reload: user_expirations updated ({} entries)",
            new_hot.user_expirations.len()
        );
    }
    if old_hot.user_data_quota != new_hot.user_data_quota {
        info!(
            "config reload: user_data_quota updated ({} entries)",
            new_hot.user_data_quota.len()
        );
    }
    if old_hot.user_max_unique_ips != new_hot.user_max_unique_ips {
        info!(
            "config reload: user_max_unique_ips updated ({} entries)",
            new_hot.user_max_unique_ips.len()
);
}
if old_hot.user_max_unique_ips_mode != new_hot.user_max_unique_ips_mode
|| old_hot.user_max_unique_ips_window_secs
!= new_hot.user_max_unique_ips_window_secs
{
info!(
"config reload: user_max_unique_ips policy mode={:?} window={}s",
new_hot.user_max_unique_ips_mode,
new_hot.user_max_unique_ips_window_secs
        );
    }
}
@@ -462,15 +823,22 @@ fn reload_config(
    }
    let old_cfg = config_tx.borrow().clone();
let applied_cfg = overlay_hot_fields(&old_cfg, &new_cfg);
    let old_hot = HotFields::from_config(&old_cfg);
    let applied_hot = HotFields::from_config(&applied_cfg);
let non_hot_changed = !config_equal(&applied_cfg, &new_cfg);
let hot_changed = old_hot != applied_hot;
    if non_hot_changed {
warn_non_hot_changes(&old_cfg, &new_cfg, non_hot_changed);
}
if !hot_changed {
        return;
    }
    if old_hot.dns_overrides != applied_hot.dns_overrides
        && let Err(e) = crate::network::dns_overrides::install_entries(&applied_hot.dns_overrides)
    {
        error!(
            "config reload: invalid network.dns_overrides: {}; keeping old config",
@@ -479,9 +847,15 @@ fn reload_config(
        return;
    }
    log_changes(
        &old_hot,
        &applied_hot,
        &applied_cfg,
        log_tx,
        detected_ip_v4,
        detected_ip_v6,
    );
    config_tx.send(Arc::new(applied_cfg)).ok();
}

// ── Public API ────────────────────────────────────────────────────────────────
@@ -607,3 +981,80 @@ pub fn spawn_config_watcher(
    (config_rx, log_rx)
}
#[cfg(test)]
mod tests {
use super::*;
fn sample_config() -> ProxyConfig {
ProxyConfig::default()
}
#[test]
fn overlay_applies_hot_and_preserves_non_hot() {
let old = sample_config();
let mut new = old.clone();
new.general.hardswap = !old.general.hardswap;
new.server.port = old.server.port.saturating_add(1);
let applied = overlay_hot_fields(&old, &new);
assert_eq!(applied.general.hardswap, new.general.hardswap);
assert_eq!(applied.server.port, old.server.port);
}
#[test]
fn non_hot_only_change_does_not_change_hot_snapshot() {
let old = sample_config();
let mut new = old.clone();
new.server.port = old.server.port.saturating_add(1);
let applied = overlay_hot_fields(&old, &new);
assert_eq!(HotFields::from_config(&old), HotFields::from_config(&applied));
assert_eq!(applied.server.port, old.server.port);
}
#[test]
fn bind_stale_mode_is_hot() {
let old = sample_config();
let mut new = old.clone();
new.general.me_bind_stale_mode = match old.general.me_bind_stale_mode {
MeBindStaleMode::Never => MeBindStaleMode::Ttl,
MeBindStaleMode::Ttl => MeBindStaleMode::Always,
MeBindStaleMode::Always => MeBindStaleMode::Never,
};
let applied = overlay_hot_fields(&old, &new);
assert_eq!(
applied.general.me_bind_stale_mode,
new.general.me_bind_stale_mode
);
assert_ne!(HotFields::from_config(&old), HotFields::from_config(&applied));
}
#[test]
fn keepalive_is_not_hot() {
let old = sample_config();
let mut new = old.clone();
new.general.me_keepalive_interval_secs = old.general.me_keepalive_interval_secs + 5;
let applied = overlay_hot_fields(&old, &new);
assert_eq!(
applied.general.me_keepalive_interval_secs,
old.general.me_keepalive_interval_secs
);
assert_eq!(HotFields::from_config(&old), HotFields::from_config(&applied));
}
#[test]
fn mixed_hot_and_non_hot_change_applies_only_hot_subset() {
let old = sample_config();
let mut new = old.clone();
new.general.hardswap = !old.general.hardswap;
new.general.use_middle_proxy = !old.general.use_middle_proxy;
let applied = overlay_hot_fields(&old, &new);
assert_eq!(applied.general.hardswap, new.general.hardswap);
assert_eq!(applied.general.use_middle_proxy, old.general.use_middle_proxy);
assert!(!config_equal(&applied, &new));
}
}


@@ -203,6 +203,22 @@ impl ProxyConfig {
        sanitize_ad_tag(&mut config.general.ad_tag);
if let Some(path) = &config.general.proxy_config_v4_cache_path
&& path.trim().is_empty()
{
return Err(ProxyError::Config(
"general.proxy_config_v4_cache_path cannot be empty when provided".to_string(),
));
}
if let Some(path) = &config.general.proxy_config_v6_cache_path
&& path.trim().is_empty()
{
return Err(ProxyError::Config(
"general.proxy_config_v6_cache_path cannot be empty when provided".to_string(),
));
}
        if let Some(update_every) = config.general.update_every {
            if update_every == 0 {
                return Err(ProxyError::Config(
@@ -237,6 +253,12 @@ impl ProxyConfig {
                ));
            }
if config.general.me_init_retry_attempts > 1_000_000 {
return Err(ProxyError::Config(
"general.me_init_retry_attempts must be within [0, 1000000]".to_string(),
));
}
        if config.general.upstream_connect_retry_attempts == 0 {
            return Err(ProxyError::Config(
                "general.upstream_connect_retry_attempts must be > 0".to_string(),
@@ -257,6 +279,12 @@ impl ProxyConfig {
            ));
        }
if config.access.user_max_unique_ips_window_secs == 0 {
return Err(ProxyError::Config(
"access.user_max_unique_ips_window_secs must be > 0".to_string(),
));
}
        if config.general.me_reinit_every_secs == 0 {
            return Err(ProxyError::Config(
                "general.me_reinit_every_secs must be > 0".to_string(),
@@ -398,6 +426,24 @@ impl ProxyConfig {
            ));
        }
if !(10..=5000).contains(&config.general.me_route_no_writer_wait_ms) {
return Err(ProxyError::Config(
"general.me_route_no_writer_wait_ms must be within [10, 5000]".to_string(),
));
}
if config.general.me_route_inline_recovery_attempts == 0 {
return Err(ProxyError::Config(
"general.me_route_inline_recovery_attempts must be > 0".to_string(),
));
}
if !(10..=30000).contains(&config.general.me_route_inline_recovery_wait_ms) {
return Err(ProxyError::Config(
"general.me_route_inline_recovery_wait_ms must be within [10, 30000]".to_string(),
));
}
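Given the ranges enforced above, a config fragment staying within bounds might look like the following; the field paths come from the checks in this diff, while the concrete values are illustrative:

```toml
[general]
me_route_no_writer_mode = "async_recovery_failfast"  # or "inline_recovery_legacy"
me_route_no_writer_wait_ms = 250        # must be within [10, 5000]
me_route_inline_recovery_attempts = 3   # must be > 0
me_route_inline_recovery_wait_ms = 500  # must be within [10, 30000]
```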
        if config.server.api.request_body_limit_bytes == 0 {
            return Err(ProxyError::Config(
                "server.api.request_body_limit_bytes must be > 0".to_string(),
@@ -653,6 +699,22 @@ mod tests {
            cfg.general.me_reconnect_fast_retry_count,
            default_me_reconnect_fast_retry_count()
        );
assert_eq!(
cfg.general.me_init_retry_attempts,
default_me_init_retry_attempts()
);
assert_eq!(
cfg.general.me2dc_fallback,
default_me2dc_fallback()
);
assert_eq!(
cfg.general.proxy_config_v4_cache_path,
default_proxy_config_v4_cache_path()
);
assert_eq!(
cfg.general.proxy_config_v6_cache_path,
default_proxy_config_v6_cache_path()
);
        assert_eq!(
            cfg.general.me_single_endpoint_shadow_writers,
            default_me_single_endpoint_shadow_writers()
@@ -728,6 +790,14 @@ mod tests {
            default_api_minimal_runtime_cache_ttl_ms()
        );
        assert_eq!(cfg.access.users, default_access_users());
assert_eq!(
cfg.access.user_max_unique_ips_mode,
UserMaxUniqueIpsMode::default()
);
assert_eq!(
cfg.access.user_max_unique_ips_window_secs,
default_user_max_unique_ips_window_secs()
);
    }

    #[test]
@@ -750,6 +820,19 @@ mod tests {
            general.me_reconnect_fast_retry_count,
            default_me_reconnect_fast_retry_count()
        );
assert_eq!(
general.me_init_retry_attempts,
default_me_init_retry_attempts()
);
assert_eq!(general.me2dc_fallback, default_me2dc_fallback());
assert_eq!(
general.proxy_config_v4_cache_path,
default_proxy_config_v4_cache_path()
);
assert_eq!(
general.proxy_config_v6_cache_path,
default_proxy_config_v6_cache_path()
);
        assert_eq!(
            general.me_single_endpoint_shadow_writers,
            default_me_single_endpoint_shadow_writers()
@@ -1173,6 +1256,85 @@ mod tests {
        let _ = std::fs::remove_file(path_valid);
    }
#[test]
fn me_route_no_writer_wait_ms_out_of_range_is_rejected() {
let toml = r#"
[general]
me_route_no_writer_wait_ms = 5
[censorship]
tls_domain = "example.com"
[access.users]
user = "00000000000000000000000000000000"
"#;
let dir = std::env::temp_dir();
let path = dir.join("telemt_me_route_no_writer_wait_ms_out_of_range_test.toml");
std::fs::write(&path, toml).unwrap();
let err = ProxyConfig::load(&path).unwrap_err().to_string();
assert!(err.contains("general.me_route_no_writer_wait_ms must be within [10, 5000]"));
let _ = std::fs::remove_file(path);
}
#[test]
fn me_route_no_writer_mode_is_parsed() {
let toml = r#"
[general]
me_route_no_writer_mode = "inline_recovery_legacy"
[censorship]
tls_domain = "example.com"
[access.users]
user = "00000000000000000000000000000000"
"#;
let dir = std::env::temp_dir();
let path = dir.join("telemt_me_route_no_writer_mode_parse_test.toml");
std::fs::write(&path, toml).unwrap();
let cfg = ProxyConfig::load(&path).unwrap();
assert_eq!(
cfg.general.me_route_no_writer_mode,
crate::config::MeRouteNoWriterMode::InlineRecoveryLegacy
);
let _ = std::fs::remove_file(path);
}
#[test]
fn proxy_config_cache_paths_empty_are_rejected() {
let toml = r#"
[general]
proxy_config_v4_cache_path = " "
[censorship]
tls_domain = "example.com"
[access.users]
user = "00000000000000000000000000000000"
"#;
let dir = std::env::temp_dir();
let path = dir.join("telemt_proxy_config_v4_cache_path_empty_test.toml");
std::fs::write(&path, toml).unwrap();
let err = ProxyConfig::load(&path).unwrap_err().to_string();
assert!(err.contains("general.proxy_config_v4_cache_path cannot be empty"));
let _ = std::fs::remove_file(path);
let toml_v6 = r#"
[general]
proxy_config_v6_cache_path = ""
[censorship]
tls_domain = "example.com"
[access.users]
user = "00000000000000000000000000000000"
"#;
let path_v6 = dir.join("telemt_proxy_config_v6_cache_path_empty_test.toml");
std::fs::write(&path_v6, toml_v6).unwrap();
let err_v6 = ProxyConfig::load(&path_v6).unwrap_err().to_string();
assert!(err_v6.contains("general.proxy_config_v6_cache_path cannot be empty"));
let _ = std::fs::remove_file(path_v6);
}
    #[test]
    fn me_hardswap_warmup_defaults_are_set() {
        let toml = r#"


@@ -183,6 +183,44 @@ impl MeFloorMode {
    }
}
/// Middle-End route behavior when no writer is immediately available.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize, Default)]
#[serde(rename_all = "snake_case")]
pub enum MeRouteNoWriterMode {
#[default]
AsyncRecoveryFailfast,
InlineRecoveryLegacy,
}
impl MeRouteNoWriterMode {
pub fn as_u8(self) -> u8 {
match self {
MeRouteNoWriterMode::AsyncRecoveryFailfast => 0,
MeRouteNoWriterMode::InlineRecoveryLegacy => 1,
}
}
pub fn from_u8(raw: u8) -> Self {
match raw {
1 => MeRouteNoWriterMode::InlineRecoveryLegacy,
_ => MeRouteNoWriterMode::AsyncRecoveryFailfast,
}
}
}
/// Per-user unique source IP limit mode.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize, Default)]
#[serde(rename_all = "snake_case")]
pub enum UserMaxUniqueIpsMode {
/// Count only currently active source IPs.
#[default]
ActiveWindow,
/// Count source IPs seen within the recent time window.
TimeWindow,
/// Enforce both active and recent-window limits at the same time.
Combined,
}
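Given the `snake_case` serde renaming on the enum above, the new access-level keys would appear in config.toml roughly as follows. The key names come from the diff; the limit and window values are illustrative, not documented defaults.

```toml
[access]
# Enforce both the active-connection limit and the recent-window limit.
# Accepted modes: "active_window" | "time_window" | "combined"
user_max_unique_ips_mode = "combined"
user_max_unique_ips_window_secs = 300

[access.user_max_unique_ips]
alice = 3
bob = 1
```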
/// Telemetry controls for hot-path counters and ME diagnostics.
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct TelemetryConfig {
@@ -305,6 +343,14 @@ pub struct GeneralConfig {
#[serde(default = "default_proxy_secret_path")]
pub proxy_secret_path: Option<String>,
/// Optional path to cache raw getProxyConfig (IPv4) snapshot for startup fallback.
#[serde(default = "default_proxy_config_v4_cache_path")]
pub proxy_config_v4_cache_path: Option<String>,
/// Optional path to cache raw getProxyConfigV6 snapshot for startup fallback.
#[serde(default = "default_proxy_config_v6_cache_path")]
pub proxy_config_v6_cache_path: Option<String>,
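The two cache-path fields above would be set in config.toml like this. The paths are illustrative placeholders; as the validation test earlier in this diff shows, an empty or whitespace-only value is rejected at load time.

```toml
[general]
proxy_config_v4_cache_path = "/var/cache/telemt/proxy_config_v4.bin"
proxy_config_v6_cache_path = "/var/cache/telemt/proxy_config_v6.bin"
```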
/// Global ad_tag (32 hex chars from @MTProxybot). Fallback when user has no per-user tag in access.user_ad_tags.
#[serde(default)]
pub ad_tag: Option<String>,
@@ -340,6 +386,15 @@ pub struct GeneralConfig {
#[serde(default = "default_middle_proxy_warm_standby")]
pub middle_proxy_warm_standby: usize,
/// Startup retries for Middle-End pool initialization before ME→Direct fallback.
/// 0 means unlimited retries.
#[serde(default = "default_me_init_retry_attempts")]
pub me_init_retry_attempts: u32,
/// Allow fallback from Middle-End mode to direct DC when ME startup cannot be initialized.
#[serde(default = "default_me2dc_fallback")]
pub me2dc_fallback: bool,
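The two startup-fallback fields above combine like this in config.toml; the retry count here is illustrative, only the `0 = unlimited` semantics come from the doc comment in the diff.

```toml
[general]
# Retry Middle-End pool initialization up to 5 times (0 = unlimited retries),
# then, if me2dc_fallback is enabled, fall back to direct DC connections.
me_init_retry_attempts = 5
me2dc_fallback = true
```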
/// Enable ME keepalive padding frames.
#[serde(default = "default_true")]
pub me_keepalive_enabled: bool,
@@ -489,6 +544,10 @@ pub struct GeneralConfig {
#[serde(default = "default_unknown_dc_log_path")]
pub unknown_dc_log_path: Option<String>,
/// Enable unknown-DC file logging.
#[serde(default = "default_unknown_dc_file_log_enabled")]
pub unknown_dc_file_log_enabled: bool,
#[serde(default)]
pub log_level: LogLevel,
@@ -516,6 +575,22 @@ pub struct GeneralConfig {
#[serde(default = "default_me_route_backpressure_high_watermark_pct")]
pub me_route_backpressure_high_watermark_pct: u8,
/// ME route behavior when no writer is immediately available.
#[serde(default)]
pub me_route_no_writer_mode: MeRouteNoWriterMode,
/// Maximum wait time in milliseconds for async-recovery failfast mode.
#[serde(default = "default_me_route_no_writer_wait_ms")]
pub me_route_no_writer_wait_ms: u64,
/// Number of inline recovery attempts in legacy mode.
#[serde(default = "default_me_route_inline_recovery_attempts")]
pub me_route_inline_recovery_attempts: u32,
/// Maximum wait time in milliseconds for inline recovery in legacy mode.
#[serde(default = "default_me_route_inline_recovery_wait_ms")]
pub me_route_inline_recovery_wait_ms: u64,
/// [general.links] — proxy link generation overrides.
#[serde(default)]
pub links: LinksConfig,
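The four `me_route_no_writer_*` fields above select and tune the no-writer behavior. A hedged config.toml sketch, with illustrative timing values (the actual defaults live in the `default_*` functions, which this hunk does not show):

```toml
[general]
# Default mode: wait asynchronously up to the given budget, then fail fast.
me_route_no_writer_mode = "async_recovery_failfast"
me_route_no_writer_wait_ms = 250

# Legacy alternative: retry recovery inline on the connection path.
# me_route_no_writer_mode = "inline_recovery_legacy"
# me_route_inline_recovery_attempts = 3
# me_route_inline_recovery_wait_ms = 500
```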
@@ -660,6 +735,8 @@ impl Default for GeneralConfig {
use_middle_proxy: default_true(),
ad_tag: None,
proxy_secret_path: default_proxy_secret_path(),
proxy_config_v4_cache_path: default_proxy_config_v4_cache_path(),
proxy_config_v6_cache_path: default_proxy_config_v6_cache_path(),
middle_proxy_nat_ip: None,
middle_proxy_nat_probe: default_true(),
middle_proxy_nat_stun: default_middle_proxy_nat_stun(),
@@ -667,6 +744,8 @@ impl Default for GeneralConfig {
stun_nat_probe_concurrency: default_stun_nat_probe_concurrency(),
middle_proxy_pool_size: default_pool_size(),
middle_proxy_warm_standby: default_middle_proxy_warm_standby(),
me_init_retry_attempts: default_me_init_retry_attempts(),
me2dc_fallback: default_me2dc_fallback(),
me_keepalive_enabled: default_true(),
me_keepalive_interval_secs: default_keepalive_interval(),
me_keepalive_jitter_secs: default_keepalive_jitter(),
@@ -695,6 +774,7 @@ impl Default for GeneralConfig {
upstream_connect_failfast_hard_errors: default_upstream_connect_failfast_hard_errors(),
stun_iface_mismatch_ignore: false,
unknown_dc_log_path: default_unknown_dc_log_path(),
unknown_dc_file_log_enabled: default_unknown_dc_file_log_enabled(),
log_level: LogLevel::Normal,
disable_colors: false,
telemetry: TelemetryConfig::default(),
@@ -702,6 +782,10 @@ impl Default for GeneralConfig {
me_route_backpressure_base_timeout_ms: default_me_route_backpressure_base_timeout_ms(),
me_route_backpressure_high_timeout_ms: default_me_route_backpressure_high_timeout_ms(),
me_route_backpressure_high_watermark_pct: default_me_route_backpressure_high_watermark_pct(),
me_route_no_writer_mode: MeRouteNoWriterMode::default(),
me_route_no_writer_wait_ms: default_me_route_no_writer_wait_ms(),
me_route_inline_recovery_attempts: default_me_route_inline_recovery_attempts(),
me_route_inline_recovery_wait_ms: default_me_route_inline_recovery_wait_ms(),
links: LinksConfig::default(),
crypto_pending_buffer: default_crypto_pending_buffer(),
max_client_frame: default_max_client_frame(),
@@ -1045,6 +1129,12 @@ pub struct AccessConfig {
#[serde(default)]
pub user_max_unique_ips: HashMap<String, usize>,
#[serde(default)]
pub user_max_unique_ips_mode: UserMaxUniqueIpsMode,
#[serde(default = "default_user_max_unique_ips_window_secs")]
pub user_max_unique_ips_window_secs: u64,
#[serde(default = "default_replay_check_len")]
pub replay_check_len: usize,
@@ -1064,6 +1154,8 @@ impl Default for AccessConfig {
user_expirations: HashMap::new(),
user_data_quota: HashMap::new(),
user_max_unique_ips: HashMap::new(),
user_max_unique_ips_mode: UserMaxUniqueIpsMode::default(),
user_max_unique_ips_window_secs: default_user_max_unique_ips_window_secs(),
replay_check_len: default_replay_check_len(),
replay_window_secs: default_replay_window_secs(),
ignore_time_skew: false,

View File

@@ -1,252 +1,278 @@
// IP address tracking and per-user unique IP limiting.

#![allow(dead_code)]

use std::collections::HashMap;
use std::net::IpAddr;
use std::sync::Arc;
use std::time::{Duration, Instant};

use tokio::sync::RwLock;

use crate::config::UserMaxUniqueIpsMode;

/// Thread-safe tracker of unique source IPs per MTProxy user.
///
/// Tracks both currently active IPs (with a per-IP connection count) and
/// recently seen IPs inside a sliding time window, so limits can be
/// enforced in active-window, time-window, or combined mode.
#[derive(Debug, Clone)]
pub struct UserIpTracker {
    /// Username -> active IPs, with a live-connection count per IP.
    active_ips: Arc<RwLock<HashMap<String, HashMap<IpAddr, usize>>>>,
    /// Username -> recently seen IPs, with the last-seen instant.
    recent_ips: Arc<RwLock<HashMap<String, HashMap<IpAddr, Instant>>>>,
    /// Username -> maximum allowed number of unique IPs.
    max_ips: Arc<RwLock<HashMap<String, usize>>>,
    limit_mode: Arc<RwLock<UserMaxUniqueIpsMode>>,
    limit_window: Arc<RwLock<Duration>>,
}

impl UserIpTracker {
    pub fn new() -> Self {
        Self {
            active_ips: Arc::new(RwLock::new(HashMap::new())),
            recent_ips: Arc::new(RwLock::new(HashMap::new())),
            max_ips: Arc::new(RwLock::new(HashMap::new())),
            limit_mode: Arc::new(RwLock::new(UserMaxUniqueIpsMode::ActiveWindow)),
            limit_window: Arc::new(RwLock::new(Duration::from_secs(30))),
        }
    }

    /// Set the limit mode and the recent-window length (clamped to >= 1 s).
    pub async fn set_limit_policy(&self, mode: UserMaxUniqueIpsMode, window_secs: u64) {
        {
            let mut current_mode = self.limit_mode.write().await;
            *current_mode = mode;
        }
        let mut current_window = self.limit_window.write().await;
        *current_window = Duration::from_secs(window_secs.max(1));
    }

    /// Set the unique-IP limit for a single user.
    pub async fn set_user_limit(&self, username: &str, max_ips: usize) {
        let mut limits = self.max_ips.write().await;
        limits.insert(username.to_string(), max_ips);
    }

    pub async fn remove_user_limit(&self, username: &str) {
        let mut limits = self.max_ips.write().await;
        limits.remove(username);
    }

    /// Replace the whole limit map with the one loaded from config.toml.
    pub async fn load_limits(&self, limits: &HashMap<String, usize>) {
        let mut max_ips = self.max_ips.write().await;
        max_ips.clone_from(limits);
    }

    fn prune_recent(user_recent: &mut HashMap<IpAddr, Instant>, now: Instant, window: Duration) {
        if user_recent.is_empty() {
            return;
        }
        user_recent.retain(|_, seen_at| now.duration_since(*seen_at) <= window);
    }

    /// Check whether `username` may connect from `ip` and, if allowed,
    /// record the IP as active and recently seen.
    ///
    /// Returns `Err(reason)` when the connection must be rejected.
    pub async fn check_and_add(&self, username: &str, ip: IpAddr) -> Result<(), String> {
        let limit = {
            let max_ips = self.max_ips.read().await;
            max_ips.get(username).copied()
        };
        let mode = *self.limit_mode.read().await;
        let window = *self.limit_window.read().await;
        let now = Instant::now();

        let mut active_ips = self.active_ips.write().await;
        let user_active = active_ips
            .entry(username.to_string())
            .or_insert_with(HashMap::new);
        let mut recent_ips = self.recent_ips.write().await;
        let user_recent = recent_ips
            .entry(username.to_string())
            .or_insert_with(HashMap::new);
        Self::prune_recent(user_recent, now, window);

        // A repeat connection from an already-active IP is always allowed.
        if let Some(count) = user_active.get_mut(&ip) {
            *count = count.saturating_add(1);
            user_recent.insert(ip, now);
            return Ok(());
        }

        if let Some(limit) = limit {
            let active_limit_reached = user_active.len() >= limit;
            let recent_limit_reached = user_recent.len() >= limit;
            let deny = match mode {
                UserMaxUniqueIpsMode::ActiveWindow => active_limit_reached,
                UserMaxUniqueIpsMode::TimeWindow => recent_limit_reached,
                UserMaxUniqueIpsMode::Combined => active_limit_reached || recent_limit_reached,
            };
            if deny {
                return Err(format!(
                    "IP limit reached for user '{}': active={}/{} recent={}/{} mode={:?}",
                    username,
                    user_active.len(),
                    limit,
                    user_recent.len(),
                    limit,
                    mode
                ));
            }
        }

        user_active.insert(ip, 1);
        user_recent.insert(ip, now);
        Ok(())
    }

    /// Decrement the connection count for `ip` on disconnect; drop the IP
    /// (and the user entry, to save memory) once no connections remain.
    pub async fn remove_ip(&self, username: &str, ip: IpAddr) {
        let mut active_ips = self.active_ips.write().await;
        if let Some(user_ips) = active_ips.get_mut(username) {
            if let Some(count) = user_ips.get_mut(&ip) {
                if *count > 1 {
                    *count -= 1;
                } else {
                    user_ips.remove(&ip);
                }
            }
            if user_ips.is_empty() {
                active_ips.remove(username);
            }
        }
    }

    pub async fn get_recent_counts_for_users(&self, users: &[String]) -> HashMap<String, usize> {
        let window = *self.limit_window.read().await;
        let now = Instant::now();
        let recent_ips = self.recent_ips.read().await;

        let mut counts = HashMap::with_capacity(users.len());
        for user in users {
            let count = if let Some(user_recent) = recent_ips.get(user) {
                user_recent
                    .values()
                    .filter(|seen_at| now.duration_since(**seen_at) <= window)
                    .count()
            } else {
                0
            };
            counts.insert(user.clone(), count);
        }
        counts
    }

    pub async fn get_active_ips_for_users(&self, users: &[String]) -> HashMap<String, Vec<IpAddr>> {
        let active_ips = self.active_ips.read().await;
        let mut out = HashMap::with_capacity(users.len());
        for user in users {
            let mut ips = active_ips
                .get(user)
                .map(|per_ip| per_ip.keys().copied().collect::<Vec<_>>())
                .unwrap_or_else(Vec::new);
            ips.sort();
            out.insert(user.clone(), ips);
        }
        out
    }

    pub async fn get_recent_ips_for_users(&self, users: &[String]) -> HashMap<String, Vec<IpAddr>> {
        let window = *self.limit_window.read().await;
        let now = Instant::now();
        let recent_ips = self.recent_ips.read().await;
        let mut out = HashMap::with_capacity(users.len());
        for user in users {
            let mut ips = if let Some(user_recent) = recent_ips.get(user) {
                user_recent
                    .iter()
                    .filter(|(_, seen_at)| now.duration_since(**seen_at) <= window)
                    .map(|(ip, _)| *ip)
                    .collect::<Vec<_>>()
            } else {
                Vec::new()
            };
            ips.sort();
            out.insert(user.clone(), ips);
        }
        out
    }

    pub async fn get_active_ip_count(&self, username: &str) -> usize {
        let active_ips = self.active_ips.read().await;
        active_ips.get(username).map(|ips| ips.len()).unwrap_or(0)
    }

    pub async fn get_active_ips(&self, username: &str) -> Vec<IpAddr> {
        let active_ips = self.active_ips.read().await;
        active_ips
            .get(username)
            .map(|ips| ips.keys().copied().collect())
            .unwrap_or_else(Vec::new)
    }

    /// Per-user statistics as `(username, active_ip_count, limit)` tuples,
    /// sorted by username; a limit of 0 means unlimited.
    pub async fn get_stats(&self) -> Vec<(String, usize, usize)> {
        let active_ips = self.active_ips.read().await;
        let max_ips = self.max_ips.read().await;

        let mut stats = Vec::new();
        for (username, user_ips) in active_ips.iter() {
            let limit = max_ips.get(username).copied().unwrap_or(0);
            stats.push((username.clone(), user_ips.len(), limit));
        }
        stats.sort_by(|a, b| a.0.cmp(&b.0));
        stats
    }

    /// Drop all tracked IPs (active and recent) for a user.
    pub async fn clear_user_ips(&self, username: &str) {
        let mut active_ips = self.active_ips.write().await;
        active_ips.remove(username);
        drop(active_ips);
        let mut recent_ips = self.recent_ips.write().await;
        recent_ips.remove(username);
    }

    /// Clear all tracked state (use with caution).
    pub async fn clear_all(&self) {
        let mut active_ips = self.active_ips.write().await;
        active_ips.clear();
        drop(active_ips);
        let mut recent_ips = self.recent_ips.write().await;
        recent_ips.clear();
    }

    pub async fn is_ip_active(&self, username: &str, ip: IpAddr) -> bool {
        let active_ips = self.active_ips.read().await;
        active_ips
            .get(username)
            .map(|ips| ips.contains_key(&ip))
            .unwrap_or(false)
    }

    pub async fn get_user_limit(&self, username: &str) -> Option<usize> {
        let max_ips = self.max_ips.read().await;
        max_ips.get(username).copied()
    }

    /// Render the statistics as human-readable text for logs or monitoring.
    pub async fn format_stats(&self) -> String {
        let stats = self.get_stats().await;
        if stats.is_empty() {
            return String::from("No active users");
        }

        let mut output = String::from("User IP Statistics:\n");
        output.push_str("==================\n");
        for (username, active_count, limit) in stats {
            output.push_str(&format!(
                "User: {:<20} Active IPs: {}/{}\n",
                username,
                active_count,
                if limit > 0 {
                    limit.to_string()
                } else {
                    "unlimited".to_string()
                }
            ));
            let ips = self.get_active_ips(&username).await;
            for ip in ips {
                output.push_str(&format!("  - {}\n", ip));
            }
        }
        output
    }
}
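The deny decision in `check_and_add` can be separated from the locking: an already-active IP only bumps a refcount, while a *new* IP is checked against the active set, the pruned recent window, or both. A single-threaded sketch of just that decision (plain `HashMap`s standing in for the tracker's locked maps; all names here are local to the sketch):

```rust
use std::collections::HashMap;
use std::net::{IpAddr, Ipv4Addr};
use std::time::{Duration, Instant};

#[derive(Clone, Copy, Debug)]
enum Mode {
    ActiveWindow,
    TimeWindow,
    Combined,
}

// Decide whether a *new* unique IP is admitted under the given limit.
fn allow_new_ip(
    active: &HashMap<IpAddr, usize>,
    recent: &mut HashMap<IpAddr, Instant>,
    limit: usize,
    mode: Mode,
    window: Duration,
    now: Instant,
) -> bool {
    // Drop entries older than the window before counting, as prune_recent does.
    recent.retain(|_, seen| now.duration_since(*seen) <= window);
    let active_full = active.len() >= limit;
    let recent_full = recent.len() >= limit;
    match mode {
        Mode::ActiveWindow => !active_full,
        Mode::TimeWindow => !recent_full,
        Mode::Combined => !(active_full || recent_full),
    }
}

fn main() {
    let now = Instant::now();
    let window = Duration::from_secs(30);
    let ip1: IpAddr = Ipv4Addr::new(10, 0, 0, 1).into();

    let mut active = HashMap::new();
    active.insert(ip1, 1usize); // one live connection from ip1
    let mut recent = HashMap::new();
    recent.insert(ip1, now); // ip1 was just seen

    // With limit = 1 the active set is full, so a second IP is denied...
    assert!(!allow_new_ip(&active, &mut recent.clone(), 1, Mode::ActiveWindow, window, now));
    // ...and the recent window is full too, so TimeWindow also denies it.
    assert!(!allow_new_ip(&active, &mut recent.clone(), 1, Mode::TimeWindow, window, now));
    // With limit = 2 a second unique IP passes even the combined check.
    assert!(allow_new_ip(&active, &mut recent.clone(), 2, Mode::Combined, window, now));
}
```

This mirrors why the `test_time_window_mode_blocks_recent_ip_churn` test later in the diff still denies a new IP after `remove_ip`: removal shrinks the active map but the recent window keeps counting until entries age out.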
@@ -257,10 +283,6 @@ impl Default for UserIpTracker {
    }
}
#[cfg(test)]
mod tests {
    use super::*;
@@ -283,17 +305,33 @@ mod tests {
        let ip2 = test_ipv4(192, 168, 1, 2);
        let ip3 = test_ipv4(192, 168, 1, 3);

        assert!(tracker.check_and_add("test_user", ip1).await.is_ok());
        assert!(tracker.check_and_add("test_user", ip2).await.is_ok());
        assert!(tracker.check_and_add("test_user", ip3).await.is_err());
        assert_eq!(tracker.get_active_ip_count("test_user").await, 2);
    }

    #[tokio::test]
    async fn test_active_window_rejects_new_ip_and_keeps_existing_session() {
        let tracker = UserIpTracker::new();
        tracker.set_user_limit("test_user", 1).await;
        tracker
            .set_limit_policy(UserMaxUniqueIpsMode::ActiveWindow, 30)
            .await;

        let ip1 = test_ipv4(10, 10, 10, 1);
        let ip2 = test_ipv4(10, 10, 10, 2);

        assert!(tracker.check_and_add("test_user", ip1).await.is_ok());
        assert!(tracker.is_ip_active("test_user", ip1).await);
        assert!(tracker.check_and_add("test_user", ip2).await.is_err());

        // Existing session remains active; only new unique IP is denied.
        assert!(tracker.is_ip_active("test_user", ip1).await);
        assert_eq!(tracker.get_active_ip_count("test_user").await, 1);
    }

    #[tokio::test]
    async fn test_reconnection_from_same_ip() {
        let tracker = UserIpTracker::new();
@@ -301,16 +339,29 @@ mod tests {
        let ip1 = test_ipv4(192, 168, 1, 1);

        assert!(tracker.check_and_add("test_user", ip1).await.is_ok());
        assert!(tracker.check_and_add("test_user", ip1).await.is_ok());
        assert_eq!(tracker.get_active_ip_count("test_user").await, 1);
    }

    #[tokio::test]
    async fn test_same_ip_disconnect_keeps_active_while_other_session_alive() {
        let tracker = UserIpTracker::new();
        tracker.set_user_limit("test_user", 2).await;
        let ip1 = test_ipv4(192, 168, 1, 1);

        assert!(tracker.check_and_add("test_user", ip1).await.is_ok());
        assert!(tracker.check_and_add("test_user", ip1).await.is_ok());
        assert_eq!(tracker.get_active_ip_count("test_user").await, 1);

        tracker.remove_ip("test_user", ip1).await;
        assert_eq!(tracker.get_active_ip_count("test_user").await, 1);
        tracker.remove_ip("test_user", ip1).await;
        assert_eq!(tracker.get_active_ip_count("test_user").await, 0);
    }

    #[tokio::test]
    async fn test_ip_removal() {
        let tracker = UserIpTracker::new();
@@ -320,36 +371,28 @@ mod tests {
        let ip2 = test_ipv4(192, 168, 1, 2);
        let ip3 = test_ipv4(192, 168, 1, 3);

        assert!(tracker.check_and_add("test_user", ip1).await.is_ok());
        assert!(tracker.check_and_add("test_user", ip2).await.is_ok());
        assert!(tracker.check_and_add("test_user", ip3).await.is_err());

        tracker.remove_ip("test_user", ip1).await;
        assert!(tracker.check_and_add("test_user", ip3).await.is_ok());
        assert_eq!(tracker.get_active_ip_count("test_user").await, 2);
    }

    #[tokio::test]
    async fn test_no_limit() {
        let tracker = UserIpTracker::new();
        let ip1 = test_ipv4(192, 168, 1, 1);
        let ip2 = test_ipv4(192, 168, 1, 2);
        let ip3 = test_ipv4(192, 168, 1, 3);

        assert!(tracker.check_and_add("test_user", ip1).await.is_ok());
        assert!(tracker.check_and_add("test_user", ip2).await.is_ok());
        assert!(tracker.check_and_add("test_user", ip3).await.is_ok());
        assert_eq!(tracker.get_active_ip_count("test_user").await, 3);
    }
@@ -362,11 +405,9 @@ mod tests {
        let ip1 = test_ipv4(192, 168, 1, 1);
        let ip2 = test_ipv4(192, 168, 1, 2);

        assert!(tracker.check_and_add("user1", ip1).await.is_ok());
        assert!(tracker.check_and_add("user1", ip2).await.is_ok());

        assert!(tracker.check_and_add("user2", ip1).await.is_ok());
        assert!(tracker.check_and_add("user2", ip2).await.is_err());
    }
@@ -379,10 +420,9 @@ mod tests {
        let ipv4 = test_ipv4(192, 168, 1, 1);
        let ipv6 = test_ipv6();

        assert!(tracker.check_and_add("test_user", ipv4).await.is_ok());
        assert!(tracker.check_and_add("test_user", ipv6).await.is_ok());
        assert_eq!(tracker.get_active_ip_count("test_user").await, 2);
    }
@@ -417,8 +457,7 @@ mod tests {
        let stats = tracker.get_stats().await;
        assert_eq!(stats.len(), 2);

        assert!(stats.iter().any(|(name, _, _)| name == "user1"));
        assert!(stats.iter().any(|(name, _, _)| name == "user2"));
    }
@@ -427,10 +466,10 @@ mod tests {
    async fn test_clear_user_ips() {
        let tracker = UserIpTracker::new();
        let ip1 = test_ipv4(192, 168, 1, 1);

        tracker.check_and_add("test_user", ip1).await.unwrap();
        assert_eq!(tracker.get_active_ip_count("test_user").await, 1);

        tracker.clear_user_ips("test_user").await;
        assert_eq!(tracker.get_active_ip_count("test_user").await, 0);
    }
@@ -440,9 +479,9 @@ mod tests {
        let tracker = UserIpTracker::new();
        let ip1 = test_ipv4(192, 168, 1, 1);
        let ip2 = test_ipv4(192, 168, 1, 2);

        tracker.check_and_add("test_user", ip1).await.unwrap();
        assert!(tracker.is_ip_active("test_user", ip1).await);
        assert!(!tracker.is_ip_active("test_user", ip2).await);
    }
@@ -450,15 +489,85 @@ mod tests {
#[tokio::test] #[tokio::test]
async fn test_load_limits_from_config() { async fn test_load_limits_from_config() {
let tracker = UserIpTracker::new(); let tracker = UserIpTracker::new();
let mut config_limits = HashMap::new(); let mut config_limits = HashMap::new();
config_limits.insert("user1".to_string(), 5); config_limits.insert("user1".to_string(), 5);
config_limits.insert("user2".to_string(), 3); config_limits.insert("user2".to_string(), 3);
tracker.load_limits(&config_limits).await; tracker.load_limits(&config_limits).await;
assert_eq!(tracker.get_user_limit("user1").await, Some(5)); assert_eq!(tracker.get_user_limit("user1").await, Some(5));
assert_eq!(tracker.get_user_limit("user2").await, Some(3)); assert_eq!(tracker.get_user_limit("user2").await, Some(3));
assert_eq!(tracker.get_user_limit("user3").await, None); assert_eq!(tracker.get_user_limit("user3").await, None);
} }
#[tokio::test]
async fn test_load_limits_replaces_previous_map() {
let tracker = UserIpTracker::new();
let mut first = HashMap::new();
first.insert("user1".to_string(), 2);
first.insert("user2".to_string(), 3);
tracker.load_limits(&first).await;
let mut second = HashMap::new();
second.insert("user2".to_string(), 5);
tracker.load_limits(&second).await;
assert_eq!(tracker.get_user_limit("user1").await, None);
assert_eq!(tracker.get_user_limit("user2").await, Some(5));
}
#[tokio::test]
async fn test_time_window_mode_blocks_recent_ip_churn() {
let tracker = UserIpTracker::new();
tracker.set_user_limit("test_user", 1).await;
tracker
.set_limit_policy(UserMaxUniqueIpsMode::TimeWindow, 30)
.await;
let ip1 = test_ipv4(10, 0, 0, 1);
let ip2 = test_ipv4(10, 0, 0, 2);
assert!(tracker.check_and_add("test_user", ip1).await.is_ok());
tracker.remove_ip("test_user", ip1).await;
assert!(tracker.check_and_add("test_user", ip2).await.is_err());
}
#[tokio::test]
async fn test_combined_mode_enforces_active_and_recent_limits() {
let tracker = UserIpTracker::new();
tracker.set_user_limit("test_user", 1).await;
tracker
.set_limit_policy(UserMaxUniqueIpsMode::Combined, 30)
.await;
let ip1 = test_ipv4(10, 0, 1, 1);
let ip2 = test_ipv4(10, 0, 1, 2);
assert!(tracker.check_and_add("test_user", ip1).await.is_ok());
assert!(tracker.check_and_add("test_user", ip2).await.is_err());
tracker.remove_ip("test_user", ip1).await;
assert!(tracker.check_and_add("test_user", ip2).await.is_err());
}
#[tokio::test]
async fn test_time_window_expires() {
let tracker = UserIpTracker::new();
tracker.set_user_limit("test_user", 1).await;
tracker
.set_limit_policy(UserMaxUniqueIpsMode::TimeWindow, 1)
.await;
let ip1 = test_ipv4(10, 1, 0, 1);
let ip2 = test_ipv4(10, 1, 0, 2);
assert!(tracker.check_and_add("test_user", ip1).await.is_ok());
tracker.remove_ip("test_user", ip1).await;
assert!(tracker.check_and_add("test_user", ip2).await.is_err());
tokio::time::sleep(Duration::from_millis(1100)).await;
assert!(tracker.check_and_add("test_user", ip2).await.is_ok());
}
} }
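The TimeWindow and Combined tests above hinge on one rule: an IP that disconnects keeps counting against the user's limit until the configured window has elapsed since it was last seen. A minimal standalone sketch of that rule, under the assumption that this is how the tracker models "recent" IPs (`RecentIpWindow` is illustrative, not the actual `UserIpTracker` internals):

```rust
use std::collections::HashMap;
use std::net::IpAddr;
use std::time::{Duration, Instant};

// Hypothetical simplified model of the TimeWindow policy exercised above.
struct RecentIpWindow {
    window: Duration,
    last_seen: HashMap<IpAddr, Instant>,
}

impl RecentIpWindow {
    fn new(window: Duration) -> Self {
        Self { window, last_seen: HashMap::new() }
    }

    // Returns true if `ip` fits within `limit` unique IPs seen inside the window.
    fn check_and_add(&mut self, ip: IpAddr, limit: usize, now: Instant) -> bool {
        // Forget IPs whose window has fully elapsed.
        let window = self.window;
        self.last_seen.retain(|_, seen| now.duration_since(*seen) < window);
        if self.last_seen.contains_key(&ip) {
            self.last_seen.insert(ip, now); // refresh an already-counted IP
            return true;
        }
        if self.last_seen.len() >= limit {
            return false; // a recently disconnected IP still occupies the slot
        }
        self.last_seen.insert(ip, now);
        true
    }
}
```

Under this model, removing an active IP does not free its slot; only the passage of `window` does, which is exactly what `test_time_window_mode_blocks_recent_ip_churn` and `test_time_window_expires` assert.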


@@ -4,7 +4,7 @@
use std::net::SocketAddr;
use std::sync::Arc;
use std::time::{Duration, Instant};

use rand::Rng;
use tokio::net::TcpListener;
use tokio::signal;
@@ -41,8 +41,9 @@ use crate::stats::telemetry::TelemetryPolicy;
use crate::stats::{ReplayChecker, Stats};
use crate::stream::BufferPool;
use crate::transport::middle_proxy::{
    MePool, ProxyConfigData, fetch_proxy_config_with_raw, format_me_route, format_sample_line,
    load_proxy_config_cache, run_me_ping, save_proxy_config_cache, MePingFamily, MePingSample,
    MeReinitTrigger,
};
use crate::transport::{ListenOptions, UpstreamManager, create_listener, find_listener_processes};
use crate::tls_front::TlsFrontCache;
@@ -172,8 +173,191 @@ async fn write_beobachten_snapshot(path: &str, payload: &str) -> std::io::Result
    tokio::fs::write(path, payload).await
}

fn unit_label(value: u64, singular: &'static str, plural: &'static str) -> &'static str {
    if value == 1 { singular } else { plural }
}
fn format_uptime(total_secs: u64) -> String {
    const SECS_PER_MINUTE: u64 = 60;
    const SECS_PER_HOUR: u64 = 60 * SECS_PER_MINUTE;
    const SECS_PER_DAY: u64 = 24 * SECS_PER_HOUR;
    const SECS_PER_MONTH: u64 = 30 * SECS_PER_DAY;
    const SECS_PER_YEAR: u64 = 12 * SECS_PER_MONTH;

    let mut remaining = total_secs;
    let years = remaining / SECS_PER_YEAR;
    remaining %= SECS_PER_YEAR;
    let months = remaining / SECS_PER_MONTH;
    remaining %= SECS_PER_MONTH;
    let days = remaining / SECS_PER_DAY;
    remaining %= SECS_PER_DAY;
    let hours = remaining / SECS_PER_HOUR;
    remaining %= SECS_PER_HOUR;
    let minutes = remaining / SECS_PER_MINUTE;
    let seconds = remaining % SECS_PER_MINUTE;

    let mut parts = Vec::new();
    if total_secs > SECS_PER_YEAR {
        parts.push(format!("{} {}", years, unit_label(years, "year", "years")));
    }
    if total_secs > SECS_PER_MONTH {
        parts.push(format!("{} {}", months, unit_label(months, "month", "months")));
    }
    if total_secs > SECS_PER_DAY {
        parts.push(format!("{} {}", days, unit_label(days, "day", "days")));
    }
    if total_secs > SECS_PER_HOUR {
        parts.push(format!("{} {}", hours, unit_label(hours, "hour", "hours")));
    }
    if total_secs > SECS_PER_MINUTE {
        parts.push(format!("{} {}", minutes, unit_label(minutes, "minute", "minutes")));
    }
    parts.push(format!("{} {}", seconds, unit_label(seconds, "second", "seconds")));

    format!("{} / {} seconds", parts.join(", "), total_secs)
}
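The formatter above rolls the uptime down through fixed unit sizes (month approximated as 30 days, year as 12 such months) and only prints a unit once the total exceeds it. A condensed standalone sketch of the same roll-over logic (`format_uptime_sketch` is illustrative, not the function above):

```rust
// Condensed sketch of the roll-over logic, same calendar approximation:
// month = 30 days, year = 12 months (360 days).
fn format_uptime_sketch(total_secs: u64) -> String {
    const UNITS: [(u64, &str, &str); 5] = [
        (360 * 86_400, "year", "years"),
        (30 * 86_400, "month", "months"),
        (86_400, "day", "days"),
        (3_600, "hour", "hours"),
        (60, "minute", "minutes"),
    ];
    let mut parts = Vec::new();
    let mut remaining = total_secs;
    for (unit, one, many) in UNITS {
        let n = remaining / unit;
        remaining %= unit;
        // A unit appears only once total uptime strictly exceeds one of it,
        // mirroring the `total_secs > SECS_PER_*` guards above.
        if total_secs > unit {
            parts.push(format!("{} {}", n, if n == 1 { one } else { many }));
        }
    }
    parts.push(format!(
        "{} {}",
        remaining,
        if remaining == 1 { "second" } else { "seconds" }
    ));
    format!("{} / {} seconds", parts.join(", "), total_secs)
}
```

Note the strict `>` comparison: an uptime of exactly 60 seconds renders as "0 seconds / 60 seconds" rather than "1 minute", matching the guards in the original.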
async fn load_startup_proxy_config_snapshot(
    url: &str,
    cache_path: Option<&str>,
    me2dc_fallback: bool,
    label: &'static str,
) -> Option<ProxyConfigData> {
    loop {
        match fetch_proxy_config_with_raw(url).await {
            Ok((cfg, raw)) => {
                if !cfg.map.is_empty() {
                    if let Some(path) = cache_path
                        && let Err(e) = save_proxy_config_cache(path, &raw).await
                    {
                        warn!(error = %e, path, snapshot = label, "Failed to store startup proxy-config cache");
                    }
                    return Some(cfg);
                }

                warn!(snapshot = label, url, "Startup proxy-config is empty; trying disk cache");
                if let Some(path) = cache_path {
                    match load_proxy_config_cache(path).await {
                        Ok(cached) if !cached.map.is_empty() => {
                            info!(
                                snapshot = label,
                                path,
                                proxy_for_lines = cached.proxy_for_lines,
                                "Loaded startup proxy-config from disk cache"
                            );
                            return Some(cached);
                        }
                        Ok(_) => {
                            warn!(
                                snapshot = label,
                                path,
                                "Startup proxy-config cache is empty; ignoring cache file"
                            );
                        }
                        Err(cache_err) => {
                            debug!(
                                snapshot = label,
                                path,
                                error = %cache_err,
                                "Startup proxy-config cache unavailable"
                            );
                        }
                    }
                }

                if me2dc_fallback {
                    error!(
                        snapshot = label,
                        "Startup proxy-config unavailable and no saved config found; falling back to direct mode"
                    );
                    return None;
                }
                warn!(
                    snapshot = label,
                    retry_in_secs = 2,
                    "Startup proxy-config unavailable and no saved config found; retrying because me2dc_fallback=false"
                );
                tokio::time::sleep(Duration::from_secs(2)).await;
            }
            Err(fetch_err) => {
                if let Some(path) = cache_path {
                    match load_proxy_config_cache(path).await {
                        Ok(cached) if !cached.map.is_empty() => {
                            info!(
                                snapshot = label,
                                path,
                                proxy_for_lines = cached.proxy_for_lines,
                                "Loaded startup proxy-config from disk cache"
                            );
                            return Some(cached);
                        }
                        Ok(_) => {
                            warn!(
                                snapshot = label,
                                path,
                                "Startup proxy-config cache is empty; ignoring cache file"
                            );
                        }
                        Err(cache_err) => {
                            debug!(
                                snapshot = label,
                                path,
                                error = %cache_err,
                                "Startup proxy-config cache unavailable"
                            );
                        }
                    }
                }

                if me2dc_fallback {
                    error!(
                        snapshot = label,
                        error = %fetch_err,
                        "Startup proxy-config unavailable and no cached data; falling back to direct mode"
                    );
                    return None;
                }
                warn!(
                    snapshot = label,
                    error = %fetch_err,
                    retry_in_secs = 2,
                    "Startup proxy-config unavailable; retrying because me2dc_fallback=false"
                );
                tokio::time::sleep(Duration::from_secs(2)).await;
            }
        }
    }
}
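The loader above always prefers a fresh non-empty fetch, then a non-empty disk cache; only when both are unavailable does `me2dc_fallback` decide between giving up (direct mode) and retrying after two seconds. That decision order can be modeled as a pure function (the enum and function names are hypothetical, for illustration only):

```rust
// Hypothetical model of the decision order inside load_startup_proxy_config_snapshot.
#[derive(Debug, PartialEq)]
enum StartupOutcome {
    UseFetched,      // fresh fetch returned a non-empty map
    UseCached,       // fetch failed or was empty, disk cache had data
    FallBackToDirect, // no data anywhere and me2dc_fallback=true
    RetryAfterDelay, // no data anywhere and me2dc_fallback=false
}

fn startup_outcome(
    fetched_nonempty: bool,
    cached_nonempty: bool,
    me2dc_fallback: bool,
) -> StartupOutcome {
    if fetched_nonempty {
        StartupOutcome::UseFetched
    } else if cached_nonempty {
        StartupOutcome::UseCached
    } else if me2dc_fallback {
        StartupOutcome::FallBackToDirect
    } else {
        StartupOutcome::RetryAfterDelay
    }
}
```

This is why `me2dc_fallback=false` makes startup block indefinitely on an unreachable Telegram config endpoint: the only terminal outcomes require either data or permission to fall back.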
#[tokio::main]
async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
    let process_started_at = Instant::now();
    let (config_path, cli_silent, cli_log_level) = parse_cli();

    let mut config = match ProxyConfig::load(&config_path) {
@@ -416,13 +600,19 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
    log_probe_result(&probe, &decision);
    let prefer_ipv6 = decision.prefer_ipv6();
    let mut use_middle_proxy = config.general.use_middle_proxy;
    let beobachten = Arc::new(BeobachtenStore::new());
    let rng = Arc::new(SecureRandom::new());

    // IP Tracker initialization
    let ip_tracker = Arc::new(UserIpTracker::new());
    ip_tracker.load_limits(&config.access.user_max_unique_ips).await;
    ip_tracker
        .set_limit_policy(
            config.access.user_max_unique_ips_mode,
            config.access.user_max_unique_ips_window_secs,
        )
        .await;
    if !config.access.user_max_unique_ips.is_empty() {
        info!("IP limits configured for {} users", config.access.user_max_unique_ips.len());
@@ -437,9 +627,18 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
    // Connection concurrency limit
    let max_connections = Arc::new(Semaphore::new(10_000));

    let me2dc_fallback = config.general.me2dc_fallback;
    let me_init_retry_attempts = config.general.me_init_retry_attempts;
    let me_init_warn_after_attempts: u32 = 3;

    if use_middle_proxy && !decision.ipv4_me && !decision.ipv6_me {
        if me2dc_fallback {
            warn!("No usable IP family for Middle Proxy detected; falling back to direct DC");
            use_middle_proxy = false;
        } else {
            warn!(
                "No usable IP family for Middle Proxy detected; me2dc_fallback=false, ME init retries stay active"
            );
        }
    }

    // =====================================================================
@@ -469,13 +668,35 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
        // proxy-secret is from: https://core.telegram.org/getProxySecret
        // =============================================================
        let proxy_secret_path = config.general.proxy_secret_path.as_deref();
        let pool_size = config.general.middle_proxy_pool_size.max(1);
        let proxy_secret = loop {
            match crate::transport::middle_proxy::fetch_proxy_secret(
                proxy_secret_path,
                config.general.proxy_secret_len_max,
            )
            .await
            {
                Ok(proxy_secret) => break Some(proxy_secret),
                Err(e) => {
                    if me2dc_fallback {
                        error!(
                            error = %e,
                            "ME startup failed: proxy-secret is unavailable and no saved secret found; falling back to direct mode"
                        );
                        break None;
                    }
                    warn!(
                        error = %e,
                        retry_in_secs = 2,
                        "ME startup failed: proxy-secret is unavailable and no saved secret found; retrying because me2dc_fallback=false"
                    );
                    tokio::time::sleep(Duration::from_secs(2)).await;
                }
            }
        };
        match proxy_secret {
            Some(proxy_secret) => {
                info!(
                    secret_len = proxy_secret.len(),
                    key_sig = format_args!(
@@ -494,118 +715,153 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
"Proxy-secret loaded" "Proxy-secret loaded"
); );
// Load ME config (v4/v6) + default DC let cfg_v4 = load_startup_proxy_config_snapshot(
let mut cfg_v4 = fetch_proxy_config(
"https://core.telegram.org/getProxyConfig", "https://core.telegram.org/getProxyConfig",
config.general.proxy_config_v4_cache_path.as_deref(),
me2dc_fallback,
"getProxyConfig",
) )
.await .await;
.unwrap_or_default(); let cfg_v6 = load_startup_proxy_config_snapshot(
let mut cfg_v6 = fetch_proxy_config(
"https://core.telegram.org/getProxyConfigV6", "https://core.telegram.org/getProxyConfigV6",
config.general.proxy_config_v6_cache_path.as_deref(),
me2dc_fallback,
"getProxyConfigV6",
) )
.await .await;
.unwrap_or_default();
if cfg_v4.map.is_empty() { if let (Some(cfg_v4), Some(cfg_v6)) = (cfg_v4, cfg_v6) {
cfg_v4.map = crate::protocol::constants::TG_MIDDLE_PROXIES_V4.clone(); let pool = MePool::new(
} proxy_tag.clone(),
if cfg_v6.map.is_empty() { proxy_secret,
cfg_v6.map = crate::protocol::constants::TG_MIDDLE_PROXIES_V6.clone(); config.general.middle_proxy_nat_ip,
} me_nat_probe,
None,
config.network.stun_servers.clone(),
config.general.stun_nat_probe_concurrency,
probe.detected_ipv6,
config.timeouts.me_one_retry,
config.timeouts.me_one_timeout_ms,
cfg_v4.map.clone(),
cfg_v6.map.clone(),
cfg_v4.default_dc.or(cfg_v6.default_dc),
decision.clone(),
Some(upstream_manager.clone()),
rng.clone(),
stats.clone(),
config.general.me_keepalive_enabled,
config.general.me_keepalive_interval_secs,
config.general.me_keepalive_jitter_secs,
config.general.me_keepalive_payload_random,
config.general.rpc_proxy_req_every,
config.general.me_warmup_stagger_enabled,
config.general.me_warmup_step_delay_ms,
config.general.me_warmup_step_jitter_ms,
config.general.me_reconnect_max_concurrent_per_dc,
config.general.me_reconnect_backoff_base_ms,
config.general.me_reconnect_backoff_cap_ms,
config.general.me_reconnect_fast_retry_count,
config.general.me_single_endpoint_shadow_writers,
config.general.me_single_endpoint_outage_mode_enabled,
config.general.me_single_endpoint_outage_disable_quarantine,
config.general.me_single_endpoint_outage_backoff_min_ms,
config.general.me_single_endpoint_outage_backoff_max_ms,
config.general.me_single_endpoint_shadow_rotate_every_secs,
config.general.me_floor_mode,
config.general.me_adaptive_floor_idle_secs,
config.general.me_adaptive_floor_min_writers_single_endpoint,
config.general.me_adaptive_floor_recover_grace_secs,
config.general.hardswap,
config.general.me_pool_drain_ttl_secs,
config.general.effective_me_pool_force_close_secs(),
config.general.me_pool_min_fresh_ratio,
config.general.me_hardswap_warmup_delay_min_ms,
config.general.me_hardswap_warmup_delay_max_ms,
config.general.me_hardswap_warmup_extra_passes,
config.general.me_hardswap_warmup_pass_backoff_base_ms,
config.general.me_bind_stale_mode,
config.general.me_bind_stale_ttl_secs,
config.general.me_secret_atomic_snapshot,
config.general.me_deterministic_writer_sort,
config.general.me_socks_kdf_policy,
config.general.me_route_backpressure_base_timeout_ms,
config.general.me_route_backpressure_high_timeout_ms,
config.general.me_route_backpressure_high_watermark_pct,
config.general.me_route_no_writer_mode,
config.general.me_route_no_writer_wait_ms,
config.general.me_route_inline_recovery_attempts,
config.general.me_route_inline_recovery_wait_ms,
);
let pool = MePool::new( let mut init_attempt: u32 = 0;
proxy_tag, loop {
proxy_secret, init_attempt = init_attempt.saturating_add(1);
config.general.middle_proxy_nat_ip, match pool.init(pool_size, &rng).await {
me_nat_probe, Ok(()) => {
None, info!(
config.network.stun_servers.clone(), attempt = init_attempt,
config.general.stun_nat_probe_concurrency, "Middle-End pool initialized successfully"
probe.detected_ipv6, );
config.timeouts.me_one_retry,
config.timeouts.me_one_timeout_ms,
cfg_v4.map.clone(),
cfg_v6.map.clone(),
cfg_v4.default_dc.or(cfg_v6.default_dc),
decision.clone(),
Some(upstream_manager.clone()),
rng.clone(),
stats.clone(),
config.general.me_keepalive_enabled,
config.general.me_keepalive_interval_secs,
config.general.me_keepalive_jitter_secs,
config.general.me_keepalive_payload_random,
config.general.rpc_proxy_req_every,
config.general.me_warmup_stagger_enabled,
config.general.me_warmup_step_delay_ms,
config.general.me_warmup_step_jitter_ms,
config.general.me_reconnect_max_concurrent_per_dc,
config.general.me_reconnect_backoff_base_ms,
config.general.me_reconnect_backoff_cap_ms,
config.general.me_reconnect_fast_retry_count,
config.general.me_single_endpoint_shadow_writers,
config.general.me_single_endpoint_outage_mode_enabled,
config.general.me_single_endpoint_outage_disable_quarantine,
config.general.me_single_endpoint_outage_backoff_min_ms,
config.general.me_single_endpoint_outage_backoff_max_ms,
config.general.me_single_endpoint_shadow_rotate_every_secs,
config.general.me_floor_mode,
config.general.me_adaptive_floor_idle_secs,
config.general.me_adaptive_floor_min_writers_single_endpoint,
config.general.me_adaptive_floor_recover_grace_secs,
config.general.hardswap,
config.general.me_pool_drain_ttl_secs,
config.general.effective_me_pool_force_close_secs(),
config.general.me_pool_min_fresh_ratio,
config.general.me_hardswap_warmup_delay_min_ms,
config.general.me_hardswap_warmup_delay_max_ms,
config.general.me_hardswap_warmup_extra_passes,
config.general.me_hardswap_warmup_pass_backoff_base_ms,
config.general.me_bind_stale_mode,
config.general.me_bind_stale_ttl_secs,
config.general.me_secret_atomic_snapshot,
config.general.me_deterministic_writer_sort,
config.general.me_socks_kdf_policy,
config.general.me_route_backpressure_base_timeout_ms,
config.general.me_route_backpressure_high_timeout_ms,
config.general.me_route_backpressure_high_watermark_pct,
);
let pool_size = config.general.middle_proxy_pool_size.max(1); // Phase 4: Start health monitor
loop { let pool_clone = pool.clone();
match pool.init(pool_size, &rng).await { let rng_clone = rng.clone();
Ok(()) => { let min_conns = pool_size;
info!("Middle-End pool initialized successfully"); tokio::spawn(async move {
crate::transport::middle_proxy::me_health_monitor(
pool_clone, rng_clone, min_conns,
)
.await;
});
// Phase 4: Start health monitor break Some(pool);
let pool_clone = pool.clone(); }
let rng_clone = rng.clone(); Err(e) => {
let min_conns = pool_size; let retries_limited = me2dc_fallback && me_init_retry_attempts > 0;
tokio::spawn(async move { if retries_limited && init_attempt >= me_init_retry_attempts {
crate::transport::middle_proxy::me_health_monitor( error!(
pool_clone, rng_clone, min_conns, error = %e,
) attempt = init_attempt,
.await; retry_limit = me_init_retry_attempts,
}); "ME pool init retries exhausted; falling back to direct mode"
);
break None;
}
break Some(pool); let retry_limit = if !me2dc_fallback || me_init_retry_attempts == 0 {
} String::from("unlimited")
Err(e) => { } else {
warn!( me_init_retry_attempts.to_string()
error = %e, };
retry_in_secs = 2, if init_attempt >= me_init_warn_after_attempts {
"ME pool is not ready yet; retrying startup initialization" warn!(
); error = %e,
pool.reset_stun_state(); attempt = init_attempt,
tokio::time::sleep(Duration::from_secs(2)).await; retry_limit = retry_limit,
me2dc_fallback = me2dc_fallback,
retry_in_secs = 2,
"ME pool is not ready yet; retrying startup initialization"
);
} else {
info!(
error = %e,
attempt = init_attempt,
retry_limit = retry_limit,
me2dc_fallback = me2dc_fallback,
retry_in_secs = 2,
"ME pool startup warmup: retrying initialization"
);
}
pool.reset_stun_state();
tokio::time::sleep(Duration::from_secs(2)).await;
}
} }
} }
} else {
None
} }
} }
Err(e) => { None => None,
error!(error = %e, "Failed to fetch proxy-secret. Falling back to direct mode.");
None
}
} }
} else { } else {
None None
@@ -786,6 +1042,19 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
        }
    }

    let initialized_secs = process_started_at.elapsed().as_secs();
    let second_suffix = if initialized_secs == 1 { "" } else { "s" };
    info!("===================== Telegram Startup =====================");
    info!(
        " DC/ME Initialized in {} second{}",
        initialized_secs, second_suffix
    );
    info!("============================================================");
    if let Some(ref pool) = me_pool {
        pool.set_runtime_ready(true);
    }

    // Background tasks
    let um_clone = upstream_manager.clone();
    let decision_clone = decision.clone();
@@ -847,6 +1116,51 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
        }
    });

    let ip_tracker_policy = ip_tracker.clone();
    let mut config_rx_ip_limits = config_rx.clone();
    tokio::spawn(async move {
        let mut prev_limits = config_rx_ip_limits
            .borrow()
            .access
            .user_max_unique_ips
            .clone();
        let mut prev_mode = config_rx_ip_limits
            .borrow()
            .access
            .user_max_unique_ips_mode;
        let mut prev_window = config_rx_ip_limits
            .borrow()
            .access
            .user_max_unique_ips_window_secs;
        loop {
            if config_rx_ip_limits.changed().await.is_err() {
                break;
            }
            let cfg = config_rx_ip_limits.borrow_and_update().clone();
            if prev_limits != cfg.access.user_max_unique_ips {
                ip_tracker_policy
                    .load_limits(&cfg.access.user_max_unique_ips)
                    .await;
                prev_limits = cfg.access.user_max_unique_ips.clone();
            }
            if prev_mode != cfg.access.user_max_unique_ips_mode
                || prev_window != cfg.access.user_max_unique_ips_window_secs
            {
                ip_tracker_policy
                    .set_limit_policy(
                        cfg.access.user_max_unique_ips_mode,
                        cfg.access.user_max_unique_ips_window_secs,
                    )
                    .await;
                prev_mode = cfg.access.user_max_unique_ips_mode;
                prev_window = cfg.access.user_max_unique_ips_window_secs;
            }
        }
    });

    let beobachten_writer = beobachten.clone();
    let config_rx_beobachten = config_rx.clone();
    tokio::spawn(async move {
@@ -1169,16 +1483,22 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
    let stats = stats.clone();
    let ip_tracker_api = ip_tracker.clone();
    let me_pool_api = me_pool.clone();
    let upstream_manager_api = upstream_manager.clone();
    let config_rx_api = config_rx.clone();
    let config_path_api = std::path::PathBuf::from(&config_path);
    let startup_detected_ip_v4 = detected_ip_v4;
    let startup_detected_ip_v6 = detected_ip_v6;
    tokio::spawn(async move {
        api::serve(
            listen,
            stats,
            ip_tracker_api,
            me_pool_api,
            upstream_manager_api,
            config_rx_api,
            config_path_api,
            startup_detected_ip_v4,
            startup_detected_ip_v6,
        )
        .await;
    });
@@ -1288,7 +1608,36 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
    }

    match signal::ctrl_c().await {
        Ok(()) => {
            let shutdown_started_at = Instant::now();
            info!("Shutting down...");
            let uptime_secs = process_started_at.elapsed().as_secs();
            info!("Uptime: {}", format_uptime(uptime_secs));
            if let Some(pool) = &me_pool {
                match tokio::time::timeout(
                    Duration::from_secs(2),
                    pool.shutdown_send_close_conn_all(),
                )
                .await
                {
                    Ok(total) => {
                        info!(
                            close_conn_sent = total,
                            "ME shutdown: RPC_CLOSE_CONN broadcast completed"
                        );
                    }
                    Err(_) => {
                        warn!("ME shutdown: RPC_CLOSE_CONN broadcast timed out");
                    }
                }
            }
            let shutdown_secs = shutdown_started_at.elapsed().as_secs();
            info!(
                "Shutdown completed successfully in {} {}.",
                shutdown_secs,
                unit_label(shutdown_secs, "second", "seconds")
            );
        }
        Err(e) => error!("Signal error: {}", e),
    }


@@ -1199,6 +1199,48 @@ async fn render_metrics(stats: &Stats, config: &ProxyConfig, ip_tracker: &UserIp
            0
        }
    );

    let _ = writeln!(
        out,
        "# HELP telemt_me_no_writer_failfast_total ME route failfast errors due to missing writer in bounded wait window"
    );
    let _ = writeln!(out, "# TYPE telemt_me_no_writer_failfast_total counter");
    let _ = writeln!(
        out,
        "telemt_me_no_writer_failfast_total {}",
        if me_allows_normal {
            stats.get_me_no_writer_failfast_total()
        } else {
            0
        }
    );

    let _ = writeln!(
        out,
        "# HELP telemt_me_async_recovery_trigger_total Async ME recovery trigger attempts from route path"
    );
    let _ = writeln!(out, "# TYPE telemt_me_async_recovery_trigger_total counter");
    let _ = writeln!(
        out,
        "telemt_me_async_recovery_trigger_total {}",
        if me_allows_normal {
            stats.get_me_async_recovery_trigger_total()
        } else {
            0
        }
    );

    let _ = writeln!(
        out,
        "# HELP telemt_me_inline_recovery_total Legacy inline ME recovery attempts from route path"
    );
    let _ = writeln!(out, "# TYPE telemt_me_inline_recovery_total counter");
    let _ = writeln!(
        out,
        "telemt_me_inline_recovery_total {}",
        if me_allows_normal {
            stats.get_me_inline_recovery_total()
        } else {
            0
        }
    );

    let unresolved_writer_losses = if me_allows_normal {
        stats
@@ -1237,6 +1279,29 @@ async fn render_metrics(stats: &Stats, config: &ProxyConfig, ip_tracker: &UserIp
    let _ = writeln!(out, "# TYPE telemt_user_msgs_from_client counter");
    let _ = writeln!(out, "# HELP telemt_user_msgs_to_client Per-user messages sent");
    let _ = writeln!(out, "# TYPE telemt_user_msgs_to_client counter");

    let _ = writeln!(
        out,
        "# HELP telemt_ip_reservation_rollback_total IP reservation rollbacks caused by later limit checks"
    );
    let _ = writeln!(out, "# TYPE telemt_ip_reservation_rollback_total counter");
    let _ = writeln!(
        out,
        "telemt_ip_reservation_rollback_total{{reason=\"tcp_limit\"}} {}",
        if core_enabled {
            stats.get_ip_reservation_rollback_tcp_limit_total()
        } else {
            0
        }
    );
    let _ = writeln!(
        out,
        "telemt_ip_reservation_rollback_total{{reason=\"quota_limit\"}} {}",
        if core_enabled {
            stats.get_ip_reservation_rollback_quota_limit_total()
        } else {
            0
        }
    );

    let _ = writeln!(
        out,
        "# HELP telemt_telemetry_user_series_suppressed User-labeled metric series suppression flag"
@@ -1267,11 +1332,21 @@ async fn render_metrics(stats: &Stats, config: &ProxyConfig, ip_tracker: &UserIp
        .collect();

    let mut unique_users = BTreeSet::new();
    unique_users.extend(config.access.users.keys().cloned());
    unique_users.extend(config.access.user_max_unique_ips.keys().cloned());
    unique_users.extend(ip_counts.keys().cloned());
    let unique_users_vec: Vec<String> = unique_users.iter().cloned().collect();
    let recent_counts = ip_tracker
        .get_recent_counts_for_users(&unique_users_vec)
        .await;

    let _ = writeln!(out, "# HELP telemt_user_unique_ips_current Per-user current number of unique active IPs");
    let _ = writeln!(out, "# TYPE telemt_user_unique_ips_current gauge");
    let _ = writeln!(
        out,
        "# HELP telemt_user_unique_ips_recent_window Per-user unique IPs seen in configured observation window"
    );
    let _ = writeln!(out, "# TYPE telemt_user_unique_ips_recent_window gauge");
    let _ = writeln!(out, "# HELP telemt_user_unique_ips_limit Per-user configured unique IP limit (0 means unlimited)");
    let _ = writeln!(out, "# TYPE telemt_user_unique_ips_limit gauge");
    let _ = writeln!(out, "# HELP telemt_user_unique_ips_utilization Per-user unique IP usage ratio (0 for unlimited)");
@@ -1286,6 +1361,12 @@ async fn render_metrics(stats: &Stats, config: &ProxyConfig, ip_tracker: &UserIp
            0.0
        };
        let _ = writeln!(out, "telemt_user_unique_ips_current{{user=\"{}\"}} {}", user, current);
        let _ = writeln!(
            out,
            "telemt_user_unique_ips_recent_window{{user=\"{}\"}} {}",
            user,
            recent_counts.get(&user).copied().unwrap_or(0)
        );
        let _ = writeln!(out, "telemt_user_unique_ips_limit{{user=\"{}\"}} {}", user, limit);
        let _ = writeln!(
            out,
@@ -1378,6 +1459,7 @@ mod tests {
        assert!(output.contains("telemt_user_msgs_from_client{user=\"alice\"} 1"));
        assert!(output.contains("telemt_user_msgs_to_client{user=\"alice\"} 2"));
        assert!(output.contains("telemt_user_unique_ips_current{user=\"alice\"} 1"));
        assert!(output.contains("telemt_user_unique_ips_recent_window{user=\"alice\"} 1"));
        assert!(output.contains("telemt_user_unique_ips_limit{user=\"alice\"} 4"));
        assert!(output.contains("telemt_user_unique_ips_utilization{user=\"alice\"} 0.250000"));
    }
@@ -1391,7 +1473,8 @@ mod tests {
        assert!(output.contains("telemt_connections_total 0"));
        assert!(output.contains("telemt_connections_bad_total 0"));
        assert!(output.contains("telemt_handshake_timeouts_total 0"));
        assert!(output.contains("telemt_user_unique_ips_current{user="));
        assert!(output.contains("telemt_user_unique_ips_recent_window{user="));
    }

    #[tokio::test]
@@ -1412,6 +1495,7 @@ mod tests {
            "# TYPE telemt_me_writer_removed_unexpected_minus_restored_total gauge"
        ));
        assert!(output.contains("# TYPE telemt_user_unique_ips_current gauge"));
        assert!(output.contains("# TYPE telemt_user_unique_ips_recent_window gauge"));
        assert!(output.contains("# TYPE telemt_user_unique_ips_limit gauge"));
        assert!(output.contains("# TYPE telemt_user_unique_ips_utilization gauge"));
    }


@@ -672,42 +672,16 @@ impl RunningClientHandler {
        R: AsyncRead + Unpin + Send + 'static,
        W: AsyncWrite + Unpin + Send + 'static,
    {
        let user = success.user.clone();
        if let Err(e) = Self::check_user_limits_static(&user, &config, &stats, peer_addr, &ip_tracker).await {
            warn!(user = %user, error = %e, "User limit exceeded");
            return Err(e);
        }

        let relay_result = if config.general.use_middle_proxy {
            if let Some(ref pool) = me_pool {
                handle_via_middle_proxy(
                    client_reader,
                    client_writer,
                    success,
@@ -718,23 +692,38 @@ impl RunningClientHandler {
local_addr, local_addr,
rng, rng,
) )
.await; .await
} else {
warn!("use_middle_proxy=true but MePool not initialized, falling back to direct");
handle_via_direct(
client_reader,
client_writer,
success,
upstream_manager,
stats,
config,
buffer_pool,
rng,
)
.await
} }
warn!("use_middle_proxy=true but MePool not initialized, falling back to direct"); } else {
} // Direct mode (original behavior)
handle_via_direct(
client_reader,
client_writer,
success,
upstream_manager,
stats,
config,
buffer_pool,
rng,
)
.await
};
// Direct mode (original behavior) ip_tracker.remove_ip(&user, peer_addr.ip()).await;
handle_via_direct( relay_result
client_reader,
client_writer,
success,
upstream_manager,
stats,
config,
buffer_pool,
rng,
)
.await
} }
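The change above drops the `Drop`-based `IpCleanupGuard` (which spawned a task on drop) in favor of one explicit `remove_ip` call after the relay finishes, on both the success and the error path. A std-only sketch of the same shape, where the `(user, ip)` `HashSet` is a hypothetical stand-in for `UserIpTracker` and `relay_with_cleanup` is illustrative only:

```rust
use std::collections::HashSet;

// Std-only sketch: the IP slot is reserved up front (during the limit check)
// and removed exactly once after the relay returns, regardless of outcome.
fn relay_with_cleanup(
    tracker: &mut HashSet<(String, String)>,
    user: &str,
    ip: &str,
    relay: impl FnOnce() -> Result<u64, String>,
) -> Result<u64, String> {
    let result = relay();
    // Unconditional cleanup mirrors `ip_tracker.remove_ip(&user, peer_addr.ip()).await`.
    tracker.remove(&(user.to_string(), ip.to_string()));
    result
}

fn main() {
    let mut tracker = HashSet::new();
    tracker.insert(("alice".to_string(), "203.0.113.7".to_string()));
    assert_eq!(relay_with_cleanup(&mut tracker, "alice", "203.0.113.7", || Ok(42)), Ok(42));
    assert!(tracker.is_empty());
}
```

Compared with the old guard, this avoids spawning a task during runtime shutdown and keeps the cleanup on the same code path as the relay result.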
     async fn check_user_limits_static(
@@ -752,22 +741,32 @@ impl RunningClientHandler {
         });
     }
+    let mut ip_reserved = false;
     // IP limit check
-    if let Err(reason) = ip_tracker.check_and_add(user, peer_addr.ip()).await {
-        warn!(
-            user = %user,
-            ip = %peer_addr.ip(),
-            reason = %reason,
-            "IP limit exceeded"
-        );
-        return Err(ProxyError::ConnectionLimitExceeded {
-            user: user.to_string(),
-        });
+    match ip_tracker.check_and_add(user, peer_addr.ip()).await {
+        Ok(()) => {
+            ip_reserved = true;
+        }
+        Err(reason) => {
+            warn!(
+                user = %user,
+                ip = %peer_addr.ip(),
+                reason = %reason,
+                "IP limit exceeded"
+            );
+            return Err(ProxyError::ConnectionLimitExceeded {
+                user: user.to_string(),
+            });
+        }
     }
     if let Some(limit) = config.access.user_max_tcp_conns.get(user)
         && stats.get_user_curr_connects(user) >= *limit as u64
     {
+        if ip_reserved {
+            ip_tracker.remove_ip(user, peer_addr.ip()).await;
+            stats.increment_ip_reservation_rollback_tcp_limit_total();
+        }
         return Err(ProxyError::ConnectionLimitExceeded {
             user: user.to_string(),
         });
@@ -776,6 +775,10 @@ impl RunningClientHandler {
     if let Some(quota) = config.access.user_data_quota.get(user)
         && stats.get_user_total_octets(user) >= *quota
     {
+        if ip_reserved {
+            ip_tracker.remove_ip(user, peer_addr.ip()).await;
+            stats.increment_ip_reservation_rollback_quota_limit_total();
+        }
         return Err(ProxyError::DataQuotaExceeded {
             user: user.to_string(),
         });
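The reworked limit check reserves the IP first (`check_and_add`) and rolls the reservation back whenever a later check fails, so a rejected connection never leaks a unique-IP slot. A simplified, hypothetical illustration of the reserve-then-rollback pattern:

```rust
use std::collections::HashSet;

#[derive(Debug, PartialEq)]
enum LimitError {
    IpLimit,
    TcpLimit,
}

// Hypothetical simplified limiter: the IP is added first and rolled back if a
// later check fails, mirroring the new check_user_limits_static flow.
fn check_limits(
    ips: &mut HashSet<String>,
    max_ips: usize,
    ip: &str,
    curr_conns: u64,
    max_conns: u64,
) -> Result<(), LimitError> {
    let newly_added = ips.insert(ip.to_string());
    if newly_added && ips.len() > max_ips {
        ips.remove(ip);
        return Err(LimitError::IpLimit);
    }
    if curr_conns >= max_conns {
        if newly_added {
            // Rollback, the analogue of increment_ip_reservation_rollback_tcp_limit_total.
            ips.remove(ip);
        }
        return Err(LimitError::TcpLimit);
    }
    Ok(())
}

fn main() {
    let mut ips = HashSet::new();
    assert_eq!(check_limits(&mut ips, 2, "10.0.0.1", 0, 5), Ok(()));
    assert_eq!(check_limits(&mut ips, 2, "10.0.0.2", 5, 5), Err(LimitError::TcpLimit));
    assert_eq!(ips.len(), 1); // the rejected connection's reservation was rolled back
}
```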

View File

@@ -118,10 +118,16 @@ fn get_dc_addr_static(dc_idx: i16, config: &ProxyConfig) -> Result<SocketAddr> {
     // Unknown DC requested by client without override: log and fall back.
     if !config.dc_overrides.contains_key(&dc_key) {
         warn!(dc_idx = dc_idx, "Requested non-standard DC with no override; falling back to default cluster");
-        if let Some(path) = &config.general.unknown_dc_log_path
-            && let Ok(mut file) = OpenOptions::new().create(true).append(true).open(path)
+        if config.general.unknown_dc_file_log_enabled
+            && let Some(path) = &config.general.unknown_dc_log_path
+            && let Ok(handle) = tokio::runtime::Handle::try_current()
         {
-            let _ = writeln!(file, "dc_idx={dc_idx}");
+            let path = path.clone();
+            handle.spawn_blocking(move || {
+                if let Ok(mut file) = OpenOptions::new().create(true).append(true).open(path) {
+                    let _ = writeln!(file, "dc_idx={dc_idx}");
+                }
+            });
         }
     }
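The fix moves the blocking file append off the async runtime via `spawn_blocking`. A std-only analogue of the same idea, handing the append to a worker thread (`log_unknown_dc` and its path handling are illustrative, not the project's API):

```rust
use std::fs::OpenOptions;
use std::io::Write;
use std::thread;

// Sketch: clone the path and hand the blocking append to a worker thread so
// the caller never blocks on disk I/O (a spawn_blocking analogue in plain std).
fn log_unknown_dc(path: &str, dc_idx: i16) -> thread::JoinHandle<()> {
    let path = path.to_string();
    thread::spawn(move || {
        if let Ok(mut file) = OpenOptions::new().create(true).append(true).open(&path) {
            // Best-effort logging: errors are deliberately ignored, as in the diff.
            let _ = writeln!(file, "dc_idx={dc_idx}");
        }
    })
}

fn main() {
    let path = std::env::temp_dir().join(format!("telemt_dc_{}.log", std::process::id()));
    let _ = std::fs::remove_file(&path);
    log_unknown_dc(path.to_str().unwrap(), 5).join().unwrap();
    assert!(std::fs::read_to_string(&path).unwrap().contains("dc_idx=5"));
}
```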

View File

@@ -100,6 +100,11 @@ pub struct Stats {
     me_refill_failed_total: AtomicU64,
     me_writer_restored_same_endpoint_total: AtomicU64,
     me_writer_restored_fallback_total: AtomicU64,
+    me_no_writer_failfast_total: AtomicU64,
+    me_async_recovery_trigger_total: AtomicU64,
+    me_inline_recovery_total: AtomicU64,
+    ip_reservation_rollback_tcp_limit_total: AtomicU64,
+    ip_reservation_rollback_quota_limit_total: AtomicU64,
     telemetry_core_enabled: AtomicBool,
     telemetry_user_enabled: AtomicBool,
     telemetry_me_level: AtomicU8,
@@ -522,6 +527,34 @@ impl Stats {
             .fetch_add(1, Ordering::Relaxed);
         }
     }
+    pub fn increment_me_no_writer_failfast_total(&self) {
+        if self.telemetry_me_allows_normal() {
+            self.me_no_writer_failfast_total.fetch_add(1, Ordering::Relaxed);
+        }
+    }
+    pub fn increment_me_async_recovery_trigger_total(&self) {
+        if self.telemetry_me_allows_normal() {
+            self.me_async_recovery_trigger_total
+                .fetch_add(1, Ordering::Relaxed);
+        }
+    }
+    pub fn increment_me_inline_recovery_total(&self) {
+        if self.telemetry_me_allows_normal() {
+            self.me_inline_recovery_total.fetch_add(1, Ordering::Relaxed);
+        }
+    }
+    pub fn increment_ip_reservation_rollback_tcp_limit_total(&self) {
+        if self.telemetry_core_enabled() {
+            self.ip_reservation_rollback_tcp_limit_total
+                .fetch_add(1, Ordering::Relaxed);
+        }
+    }
+    pub fn increment_ip_reservation_rollback_quota_limit_total(&self) {
+        if self.telemetry_core_enabled() {
+            self.ip_reservation_rollback_quota_limit_total
+                .fetch_add(1, Ordering::Relaxed);
+        }
+    }
     pub fn increment_me_endpoint_quarantine_total(&self) {
         if self.telemetry_me_allows_normal() {
             self.me_endpoint_quarantine_total
@@ -791,6 +824,23 @@ impl Stats {
     pub fn get_me_writer_restored_fallback_total(&self) -> u64 {
         self.me_writer_restored_fallback_total.load(Ordering::Relaxed)
     }
+    pub fn get_me_no_writer_failfast_total(&self) -> u64 {
+        self.me_no_writer_failfast_total.load(Ordering::Relaxed)
+    }
+    pub fn get_me_async_recovery_trigger_total(&self) -> u64 {
+        self.me_async_recovery_trigger_total.load(Ordering::Relaxed)
+    }
+    pub fn get_me_inline_recovery_total(&self) -> u64 {
+        self.me_inline_recovery_total.load(Ordering::Relaxed)
+    }
+    pub fn get_ip_reservation_rollback_tcp_limit_total(&self) -> u64 {
+        self.ip_reservation_rollback_tcp_limit_total
+            .load(Ordering::Relaxed)
+    }
+    pub fn get_ip_reservation_rollback_quota_limit_total(&self) -> u64 {
+        self.ip_reservation_rollback_quota_limit_total
+            .load(Ordering::Relaxed)
+    }
     pub fn increment_user_connects(&self, user: &str) {
         if !self.telemetry_user_enabled() {
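All the new counters follow the same telemetry-gated pattern: check an atomic flag, then `fetch_add` with relaxed ordering. A minimal self-contained sketch of that pattern:

```rust
use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};

// Minimal sketch of the telemetry-gated counter pattern the new Stats fields
// use: the counter only advances while its gate is enabled, and both sides use
// relaxed ordering because only the count itself matters, not inter-thread order.
struct GatedCounter {
    enabled: AtomicBool,
    value: AtomicU64,
}

impl GatedCounter {
    fn new(enabled: bool) -> Self {
        Self {
            enabled: AtomicBool::new(enabled),
            value: AtomicU64::new(0),
        }
    }
    fn increment(&self) {
        if self.enabled.load(Ordering::Relaxed) {
            self.value.fetch_add(1, Ordering::Relaxed);
        }
    }
    fn get(&self) -> u64 {
        self.value.load(Ordering::Relaxed)
    }
}

fn main() {
    let on = GatedCounter::new(true);
    on.increment();
    on.increment();
    assert_eq!(on.get(), 2);
    let off = GatedCounter::new(false);
    off.increment();
    assert_eq!(off.get(), 0);
}
```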

View File

@@ -1,6 +1,7 @@
 use std::collections::HashMap;
 use std::hash::{DefaultHasher, Hash, Hasher};
 use std::net::IpAddr;
+use std::path::Path;
 use std::sync::Arc;
 use std::time::Duration;
@@ -42,6 +43,87 @@ pub struct ProxyConfigData {
     pub proxy_for_lines: u32,
 }
+pub fn parse_proxy_config_text(text: &str, http_status: u16) -> ProxyConfigData {
+    let mut map: HashMap<i32, Vec<(IpAddr, u16)>> = HashMap::new();
+    let mut proxy_for_lines: u32 = 0;
+    for line in text.lines() {
+        if let Some((dc, ip, port)) = parse_proxy_line(line) {
+            map.entry(dc).or_default().push((ip, port));
+            proxy_for_lines = proxy_for_lines.saturating_add(1);
+        }
+    }
+    let default_dc = text.lines().find_map(|l| {
+        let t = l.trim();
+        if let Some(rest) = t.strip_prefix("default") {
+            return rest.trim().trim_end_matches(';').parse::<i32>().ok();
+        }
+        None
+    });
+    ProxyConfigData {
+        map,
+        default_dc,
+        http_status,
+        proxy_for_lines,
+    }
+}
+pub async fn load_proxy_config_cache(path: &str) -> Result<ProxyConfigData> {
+    let text = tokio::fs::read_to_string(path).await.map_err(|e| {
+        crate::error::ProxyError::Proxy(format!("read proxy-config cache '{path}' failed: {e}"))
+    })?;
+    Ok(parse_proxy_config_text(&text, 200))
+}
+pub async fn save_proxy_config_cache(path: &str, raw_text: &str) -> Result<()> {
+    if let Some(parent) = Path::new(path).parent()
+        && !parent.as_os_str().is_empty()
+    {
+        tokio::fs::create_dir_all(parent).await.map_err(|e| {
+            crate::error::ProxyError::Proxy(format!(
+                "create proxy-config cache dir '{}' failed: {e}",
+                parent.display()
+            ))
+        })?;
+    }
+    tokio::fs::write(path, raw_text).await.map_err(|e| {
+        crate::error::ProxyError::Proxy(format!("write proxy-config cache '{path}' failed: {e}"))
+    })?;
+    Ok(())
+}
+pub async fn fetch_proxy_config_with_raw(url: &str) -> Result<(ProxyConfigData, String)> {
+    let resp = reqwest::get(url)
+        .await
+        .map_err(|e| crate::error::ProxyError::Proxy(format!("fetch_proxy_config GET failed: {e}")))?;
+    let http_status = resp.status().as_u16();
+    if let Some(date) = resp.headers().get(reqwest::header::DATE)
+        && let Ok(date_str) = date.to_str()
+        && let Ok(server_time) = httpdate::parse_http_date(date_str)
+        && let Ok(skew) = SystemTime::now().duration_since(server_time).or_else(|e| {
+            server_time.duration_since(SystemTime::now()).map_err(|_| e)
+        })
+    {
+        let skew_secs = skew.as_secs();
+        if skew_secs > 60 {
+            warn!(skew_secs, "Time skew >60s detected from fetch_proxy_config Date header");
+        } else if skew_secs > 30 {
+            warn!(skew_secs, "Time skew >30s detected from fetch_proxy_config Date header");
+        }
+    }
+    let text = resp
+        .text()
+        .await
+        .map_err(|e| crate::error::ProxyError::Proxy(format!("fetch_proxy_config read failed: {e}")))?;
+    let parsed = parse_proxy_config_text(&text, http_status);
+    Ok((parsed, text))
+}
 #[derive(Debug, Default)]
 struct StableSnapshot {
     candidate_hash: Option<u64>,
@@ -170,61 +252,9 @@ fn parse_proxy_line(line: &str) -> Option<(i32, IpAddr, u16)> {
 }
 pub async fn fetch_proxy_config(url: &str) -> Result<ProxyConfigData> {
-    let resp = reqwest::get(url)
-        .await
-        .map_err(|e| crate::error::ProxyError::Proxy(format!("fetch_proxy_config GET failed: {e}")))?;
-    let http_status = resp.status().as_u16();
-    if let Some(date) = resp.headers().get(reqwest::header::DATE)
-        && let Ok(date_str) = date.to_str()
-        && let Ok(server_time) = httpdate::parse_http_date(date_str)
-        && let Ok(skew) = SystemTime::now().duration_since(server_time).or_else(|e| {
-            server_time.duration_since(SystemTime::now()).map_err(|_| e)
-        })
-    {
-        let skew_secs = skew.as_secs();
-        if skew_secs > 60 {
-            warn!(skew_secs, "Time skew >60s detected from fetch_proxy_config Date header");
-        } else if skew_secs > 30 {
-            warn!(skew_secs, "Time skew >30s detected from fetch_proxy_config Date header");
-        }
-    }
-    let text = resp
-        .text()
-        .await
-        .map_err(|e| crate::error::ProxyError::Proxy(format!("fetch_proxy_config read failed: {e}")))?;
-    let mut map: HashMap<i32, Vec<(IpAddr, u16)>> = HashMap::new();
-    let mut proxy_for_lines: u32 = 0;
-    for line in text.lines() {
-        if let Some((dc, ip, port)) = parse_proxy_line(line) {
-            map.entry(dc).or_default().push((ip, port));
-            proxy_for_lines = proxy_for_lines.saturating_add(1);
-        }
-    }
-    let default_dc = text
-        .lines()
-        .find_map(|l| {
-            let t = l.trim();
-            if let Some(rest) = t.strip_prefix("default") {
-                return rest
-                    .trim()
-                    .trim_end_matches(';')
-                    .parse::<i32>()
-                    .ok();
-            }
-            None
-        });
-    Ok(ProxyConfigData {
-        map,
-        default_dc,
-        http_status,
-        proxy_for_lines,
-    })
+    fetch_proxy_config_with_raw(url)
+        .await
+        .map(|(parsed, _raw)| parsed)
 }
 fn snapshot_passes_guards(
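`parse_proxy_config_text` relies on `parse_proxy_line` (unchanged in this diff) to pull `(dc, ip, port)` out of each `proxy_for` line, and on a `default <dc>;` line for the default cluster. A hedged sketch of what such a line parser could look like; the exact grammar the real `parse_proxy_line` accepts is an assumption here:

```rust
use std::net::IpAddr;

// Hedged sketch of parsing one `proxy_for <dc> <ip>:<port>;` line into the
// (dc, ip, port) shape implied by parse_proxy_line. Illustrative only.
fn parse_proxy_line(line: &str) -> Option<(i32, IpAddr, u16)> {
    let t = line.trim().trim_end_matches(';');
    let rest = t.strip_prefix("proxy_for")?.trim();
    let (dc_str, addr) = rest.split_once(char::is_whitespace)?;
    let dc = dc_str.parse::<i32>().ok()?;
    // rsplit_once keeps this working for plain `ip:port`; bracketed IPv6
    // literals would need extra handling and are out of scope for the sketch.
    let (ip_str, port_str) = addr.trim().rsplit_once(':')?;
    let ip = ip_str.parse::<IpAddr>().ok()?;
    let port = port_str.parse::<u16>().ok()?;
    Some((dc, ip, port))
}

fn main() {
    let (dc, _ip, port) = parse_proxy_line("proxy_for 2 149.154.162.38:80;").unwrap();
    assert_eq!((dc, port), (2, 80));
    assert!(parse_proxy_line("# comment").is_none());
}
```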

View File

@@ -295,15 +295,27 @@ async fn check_family(
     let wait = Duration::from_millis(next_ms)
         + Duration::from_millis(rand::rng().random_range(0..=jitter.max(1)));
     next_attempt.insert(key, now + wait);
-    warn!(
-        dc = %dc,
-        ?family,
-        alive = now_alive,
-        required,
-        endpoint_count = endpoints.len(),
-        backoff_ms = next_ms,
-        "DC writer floor is below required level, scheduled reconnect"
-    );
+    if pool.is_runtime_ready() {
+        warn!(
+            dc = %dc,
+            ?family,
+            alive = now_alive,
+            required,
+            endpoint_count = endpoints.len(),
+            backoff_ms = next_ms,
+            "DC writer floor is below required level, scheduled reconnect"
+        );
+    } else {
+        info!(
+            dc = %dc,
+            ?family,
+            alive = now_alive,
+            required,
+            endpoint_count = endpoints.len(),
+            backoff_ms = next_ms,
+            "DC writer floor is below required level during startup, scheduled reconnect"
+        );
+    }
 }
 if let Some(v) = inflight.get_mut(&key) {
     *v = v.saturating_sub(1);

View File

@@ -30,7 +30,11 @@ pub use pool::MePool;
 pub use pool_nat::{stun_probe, detect_public_ip};
 pub use registry::ConnRegistry;
 pub use secret::fetch_proxy_secret;
-pub use config_updater::{fetch_proxy_config, me_config_updater};
+#[allow(unused_imports)]
+pub use config_updater::{
+    ProxyConfigData, fetch_proxy_config, fetch_proxy_config_with_raw, load_proxy_config_cache,
+    me_config_updater, save_proxy_config_cache,
+};
 pub use rotation::{MeReinitTrigger, me_reinit_scheduler, me_rotation_task};
 pub use wire::proto_flags_for_tag;

View File

@@ -7,7 +7,7 @@ use std::time::{Duration, Instant, SystemTime, UNIX_EPOCH};
 use tokio::sync::{Mutex, Notify, RwLock, mpsc};
 use tokio_util::sync::CancellationToken;
-use crate::config::{MeBindStaleMode, MeFloorMode, MeSocksKdfPolicy};
+use crate::config::{MeBindStaleMode, MeFloorMode, MeRouteNoWriterMode, MeSocksKdfPolicy};
 use crate::crypto::SecureRandom;
 use crate::network::IpFamily;
 use crate::network::probe::NetworkDecision;
@@ -145,6 +145,11 @@ pub struct MePool {
     pub(super) secret_atomic_snapshot: AtomicBool,
     pub(super) me_deterministic_writer_sort: AtomicBool,
     pub(super) me_socks_kdf_policy: AtomicU8,
+    pub(super) me_route_no_writer_mode: AtomicU8,
+    pub(super) me_route_no_writer_wait: Duration,
+    pub(super) me_route_inline_recovery_attempts: u32,
+    pub(super) me_route_inline_recovery_wait: Duration,
+    pub(super) runtime_ready: AtomicBool,
     pool_size: usize,
 }
@@ -227,6 +232,10 @@ impl MePool {
     me_route_backpressure_base_timeout_ms: u64,
     me_route_backpressure_high_timeout_ms: u64,
     me_route_backpressure_high_watermark_pct: u8,
+    me_route_no_writer_mode: MeRouteNoWriterMode,
+    me_route_no_writer_wait_ms: u64,
+    me_route_inline_recovery_attempts: u32,
+    me_route_inline_recovery_wait_ms: u64,
 ) -> Arc<Self> {
     let registry = Arc::new(ConnRegistry::new());
     registry.update_route_backpressure_policy(
@@ -343,6 +352,11 @@ impl MePool {
     secret_atomic_snapshot: AtomicBool::new(me_secret_atomic_snapshot),
     me_deterministic_writer_sort: AtomicBool::new(me_deterministic_writer_sort),
     me_socks_kdf_policy: AtomicU8::new(me_socks_kdf_policy.as_u8()),
+    me_route_no_writer_mode: AtomicU8::new(me_route_no_writer_mode.as_u8()),
+    me_route_no_writer_wait: Duration::from_millis(me_route_no_writer_wait_ms),
+    me_route_inline_recovery_attempts,
+    me_route_inline_recovery_wait: Duration::from_millis(me_route_inline_recovery_wait_ms),
+    runtime_ready: AtomicBool::new(false),
 })
 }
@@ -350,6 +364,14 @@ impl MePool {
     self.active_generation.load(Ordering::Relaxed)
 }
+pub fn set_runtime_ready(&self, ready: bool) {
+    self.runtime_ready.store(ready, Ordering::Relaxed);
+}
+pub fn is_runtime_ready(&self) -> bool {
+    self.runtime_ready.load(Ordering::Relaxed)
+}
 pub fn update_runtime_reinit_policy(
     &self,
     hardswap: bool,
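The new `me_route_no_writer_mode` field stores an enum as an `AtomicU8` so hot-reload can swap the mode without locks, round-tripping through `as_u8`/`from_u8`. An illustrative sketch of that pattern (the enum name here mirrors the diff, but the numeric encoding shown is an assumption):

```rust
use std::sync::atomic::{AtomicU8, Ordering};

// Sketch of the AtomicU8-backed mode pattern (MeRouteNoWriterMode style):
// the enum round-trips through a u8 so it can live in a lock-free field.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum NoWriterMode {
    AsyncRecoveryFailfast,
    InlineRecoveryLegacy,
}

impl NoWriterMode {
    fn as_u8(self) -> u8 {
        match self {
            NoWriterMode::AsyncRecoveryFailfast => 0,
            NoWriterMode::InlineRecoveryLegacy => 1,
        }
    }
    fn from_u8(v: u8) -> Self {
        // Unknown values fall back to the default mode rather than panicking.
        match v {
            1 => NoWriterMode::InlineRecoveryLegacy,
            _ => NoWriterMode::AsyncRecoveryFailfast,
        }
    }
}

// Hot-reload writes the new mode; routing reads it per request.
static MODE: AtomicU8 = AtomicU8::new(0);

fn set_mode(m: NoWriterMode) {
    MODE.store(m.as_u8(), Ordering::Relaxed);
}

fn get_mode() -> NoWriterMode {
    NoWriterMode::from_u8(MODE.load(Ordering::Relaxed))
}

fn main() {
    set_mode(NoWriterMode::InlineRecoveryLegacy);
    assert_eq!(get_mode(), NoWriterMode::InlineRecoveryLegacy);
    set_mode(NoWriterMode::AsyncRecoveryFailfast);
    assert_eq!(get_mode(), NoWriterMode::AsyncRecoveryFailfast);
}
```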

View File

@@ -278,6 +278,11 @@ impl ConnRegistry {
     Some(ConnWriter { writer_id, tx: writer })
 }
+pub async fn active_conn_ids(&self) -> Vec<u64> {
+    let inner = self.inner.read().await;
+    inner.writer_for_conn.keys().copied().collect()
+}
 pub async fn writer_lost(&self, writer_id: u64) -> Vec<BoundConn> {
     let mut inner = self.inner.write().await;
     inner.writers.remove(&writer_id);
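`active_conn_ids` copies the key set out under a short read lock so callers (here, shutdown) can iterate connection ids without holding the registry lock. A std sketch of the same snapshot pattern, where a `HashMap<u64, u64>` stands in for `writer_for_conn`:

```rust
use std::collections::HashMap;
use std::sync::RwLock;

// Copy the key set out under a brief read lock; the caller then iterates the
// Vec freely while other tasks keep registering/unregistering connections.
fn active_conn_ids(registry: &RwLock<HashMap<u64, u64>>) -> Vec<u64> {
    let inner = registry.read().unwrap();
    inner.keys().copied().collect()
}

fn main() {
    let registry = RwLock::new(HashMap::from([(1u64, 10u64), (2, 20)]));
    let mut ids = active_conn_ids(&registry);
    ids.sort_unstable();
    assert_eq!(ids, vec![1, 2]);
}
```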

View File

@@ -1,16 +1,17 @@
 use std::cmp::Reverse;
-use std::collections::HashMap;
+use std::collections::{HashMap, HashSet};
 use std::net::SocketAddr;
 use std::sync::Arc;
 use std::sync::atomic::Ordering;
-use std::time::Duration;
+use std::time::{Duration, Instant};
 use tokio::sync::mpsc::error::TrySendError;
 use tracing::{debug, warn};
+use crate::config::MeRouteNoWriterMode;
 use crate::error::{ProxyError, Result};
 use crate::network::IpFamily;
-use crate::protocol::constants::RPC_CLOSE_EXT_U32;
+use crate::protocol::constants::{RPC_CLOSE_CONN_U32, RPC_CLOSE_EXT_U32};
 use super::MePool;
 use super::codec::WriterCommand;
@@ -49,7 +50,11 @@ impl MePool {
     our_addr,
     proto_flags,
 };
-let mut emergency_attempts = 0;
+let no_writer_mode =
+    MeRouteNoWriterMode::from_u8(self.me_route_no_writer_mode.load(Ordering::Relaxed));
+let mut no_writer_deadline: Option<Instant> = None;
+let mut emergency_attempts = 0u32;
+let mut async_recovery_triggered = false;
 loop {
     if let Some(current) = self.registry.get_writer(conn_id).await {
@@ -74,34 +79,66 @@ impl MePool {
     let mut writers_snapshot = {
         let ws = self.writers.read().await;
         if ws.is_empty() {
-            // Create waiter before recovery attempts so notify_one permits are not missed.
-            let waiter = self.writer_available.notified();
             drop(ws);
-            for family in self.family_order() {
-                let map = match family {
-                    IpFamily::V4 => self.proxy_map_v4.read().await.clone(),
-                    IpFamily::V6 => self.proxy_map_v6.read().await.clone(),
-                };
-                for (_dc, addrs) in map.iter() {
-                    for (ip, port) in addrs {
-                        let addr = SocketAddr::new(*ip, *port);
-                        if self.connect_one(addr, self.rng.as_ref()).await.is_ok() {
-                            self.writer_available.notify_one();
-                            break;
-                        }
-                    }
-                }
-            }
-            if !self.writers.read().await.is_empty() {
-                continue;
-            }
-            if tokio::time::timeout(Duration::from_secs(3), waiter).await.is_err() {
-                if !self.writers.read().await.is_empty() {
-                    continue;
-                }
-                return Err(ProxyError::Proxy("All ME connections dead (waited 3s)".into()));
-            }
-            continue;
+            match no_writer_mode {
+                MeRouteNoWriterMode::AsyncRecoveryFailfast => {
+                    let deadline = *no_writer_deadline.get_or_insert_with(|| {
+                        Instant::now() + self.me_route_no_writer_wait
+                    });
+                    if !async_recovery_triggered {
+                        let triggered =
+                            self.trigger_async_recovery_for_target_dc(target_dc).await;
+                        if !triggered {
+                            self.trigger_async_recovery_global().await;
+                        }
+                        async_recovery_triggered = true;
+                    }
+                    if self.wait_for_writer_until(deadline).await {
+                        continue;
+                    }
+                    self.stats.increment_me_no_writer_failfast_total();
+                    return Err(ProxyError::Proxy(
+                        "No ME writer available in failfast window".into(),
+                    ));
+                }
+                MeRouteNoWriterMode::InlineRecoveryLegacy => {
+                    self.stats.increment_me_inline_recovery_total();
+                    for _ in 0..self.me_route_inline_recovery_attempts.max(1) {
+                        for family in self.family_order() {
+                            let map = match family {
+                                IpFamily::V4 => self.proxy_map_v4.read().await.clone(),
+                                IpFamily::V6 => self.proxy_map_v6.read().await.clone(),
+                            };
+                            for (_dc, addrs) in &map {
+                                for (ip, port) in addrs {
+                                    let addr = SocketAddr::new(*ip, *port);
+                                    let _ = self.connect_one(addr, self.rng.as_ref()).await;
+                                }
+                            }
+                        }
+                        if !self.writers.read().await.is_empty() {
+                            break;
+                        }
+                    }
+                    if !self.writers.read().await.is_empty() {
+                        continue;
+                    }
+                    let waiter = self.writer_available.notified();
+                    if tokio::time::timeout(self.me_route_inline_recovery_wait, waiter)
+                        .await
+                        .is_err()
+                    {
+                        if !self.writers.read().await.is_empty() {
+                            continue;
+                        }
+                        self.stats.increment_me_no_writer_failfast_total();
+                        return Err(ProxyError::Proxy(
+                            "All ME connections dead (legacy wait timeout)".into(),
+                        ));
+                    }
+                    continue;
+                }
+            }
+            continue;
         }
         ws.clone()
     };
@@ -115,46 +152,70 @@ impl MePool {
         .await;
     }
     if candidate_indices.is_empty() {
-        // Emergency connect-on-demand
-        if emergency_attempts >= 3 {
-            return Err(ProxyError::Proxy("No ME writers available for target DC".into()));
-        }
-        emergency_attempts += 1;
-        for family in self.family_order() {
-            let map_guard = match family {
-                IpFamily::V4 => self.proxy_map_v4.read().await,
-                IpFamily::V6 => self.proxy_map_v6.read().await,
-            };
-            if let Some(addrs) = map_guard.get(&(target_dc as i32)) {
-                let mut shuffled = addrs.clone();
-                shuffled.shuffle(&mut rand::rng());
-                drop(map_guard);
-                for (ip, port) in shuffled {
-                    let addr = SocketAddr::new(ip, port);
-                    if self.connect_one(addr, self.rng.as_ref()).await.is_ok() {
-                        break;
-                    }
-                }
-            }
-        }
-        tokio::time::sleep(Duration::from_millis(100 * emergency_attempts)).await;
-        let ws2 = self.writers.read().await;
-        writers_snapshot = ws2.clone();
-        drop(ws2);
-        candidate_indices = self
-            .candidate_indices_for_dc(&writers_snapshot, target_dc, false)
-            .await;
-        if candidate_indices.is_empty() {
-            candidate_indices = self
-                .candidate_indices_for_dc(&writers_snapshot, target_dc, true)
-                .await;
-        }
-        if !candidate_indices.is_empty() {
-            break;
-        }
-        if candidate_indices.is_empty() {
-            return Err(ProxyError::Proxy("No ME writers available for target DC".into()));
-        }
+        match no_writer_mode {
+            MeRouteNoWriterMode::AsyncRecoveryFailfast => {
+                let deadline = *no_writer_deadline.get_or_insert_with(|| {
+                    Instant::now() + self.me_route_no_writer_wait
+                });
+                if !async_recovery_triggered {
+                    let triggered = self.trigger_async_recovery_for_target_dc(target_dc).await;
+                    if !triggered {
+                        self.trigger_async_recovery_global().await;
+                    }
+                    async_recovery_triggered = true;
+                }
+                if self.wait_for_candidate_until(target_dc, deadline).await {
+                    continue;
+                }
+                self.stats.increment_me_no_writer_failfast_total();
+                return Err(ProxyError::Proxy(
+                    "No ME writers available for target DC in failfast window".into(),
+                ));
+            }
+            MeRouteNoWriterMode::InlineRecoveryLegacy => {
+                self.stats.increment_me_inline_recovery_total();
+                if emergency_attempts >= self.me_route_inline_recovery_attempts.max(1) {
+                    self.stats.increment_me_no_writer_failfast_total();
+                    return Err(ProxyError::Proxy("No ME writers available for target DC".into()));
+                }
+                emergency_attempts += 1;
+                for family in self.family_order() {
+                    let map_guard = match family {
+                        IpFamily::V4 => self.proxy_map_v4.read().await,
+                        IpFamily::V6 => self.proxy_map_v6.read().await,
+                    };
+                    if let Some(addrs) = map_guard.get(&(target_dc as i32)) {
+                        let mut shuffled = addrs.clone();
+                        shuffled.shuffle(&mut rand::rng());
+                        drop(map_guard);
+                        for (ip, port) in shuffled {
+                            let addr = SocketAddr::new(ip, port);
+                            if self.connect_one(addr, self.rng.as_ref()).await.is_ok() {
+                                break;
+                            }
+                        }
+                        tokio::time::sleep(Duration::from_millis(100 * emergency_attempts as u64)).await;
+                        let ws2 = self.writers.read().await;
+                        writers_snapshot = ws2.clone();
+                        drop(ws2);
+                        candidate_indices = self
+                            .candidate_indices_for_dc(&writers_snapshot, target_dc, false)
+                            .await;
+                        if candidate_indices.is_empty() {
+                            candidate_indices = self
+                                .candidate_indices_for_dc(&writers_snapshot, target_dc, true)
+                                .await;
+                        }
+                        if !candidate_indices.is_empty() {
+                            break;
+                        }
+                    }
+                }
+                if candidate_indices.is_empty() {
+                    return Err(ProxyError::Proxy("No ME writers available for target DC".into()));
+                }
+            }
+        }
     }
     let writer_idle_since = self.registry.writer_idle_since_snapshot().await;
     let now_epoch_secs = Self::now_epoch_secs();
@@ -275,6 +336,129 @@ impl MePool {
         }
     }
+    async fn wait_for_writer_until(&self, deadline: Instant) -> bool {
+        let waiter = self.writer_available.notified();
+        if !self.writers.read().await.is_empty() {
+            return true;
+        }
+        let now = Instant::now();
+        if now >= deadline {
+            return !self.writers.read().await.is_empty();
+        }
+        let timeout = deadline.saturating_duration_since(now);
+        if tokio::time::timeout(timeout, waiter).await.is_ok() {
+            return true;
+        }
+        !self.writers.read().await.is_empty()
+    }
+    async fn wait_for_candidate_until(&self, target_dc: i16, deadline: Instant) -> bool {
+        loop {
+            if self.has_candidate_for_target_dc(target_dc).await {
+                return true;
+            }
+            let now = Instant::now();
+            if now >= deadline {
+                return self.has_candidate_for_target_dc(target_dc).await;
+            }
+            let remaining = deadline.saturating_duration_since(now);
+            let sleep_for = remaining.min(Duration::from_millis(25));
+            let waiter = self.writer_available.notified();
+            tokio::select! {
+                _ = waiter => {}
+                _ = tokio::time::sleep(sleep_for) => {}
+            }
+        }
+    }
+    async fn has_candidate_for_target_dc(&self, target_dc: i16) -> bool {
+        let writers_snapshot = {
+            let ws = self.writers.read().await;
+            if ws.is_empty() {
+                return false;
+            }
+            ws.clone()
+        };
+        let mut candidate_indices = self
+            .candidate_indices_for_dc(&writers_snapshot, target_dc, false)
+            .await;
+        if candidate_indices.is_empty() {
+            candidate_indices = self
+                .candidate_indices_for_dc(&writers_snapshot, target_dc, true)
+                .await;
+        }
+        !candidate_indices.is_empty()
+    }
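`wait_for_candidate_until` re-checks its condition in a loop bounded by a deadline, capping each wait at 25 ms and doing one final check after the deadline passes. A synchronous, std-only sketch of that loop shape (the condition closure stands in for `has_candidate_for_target_dc`):

```rust
use std::time::{Duration, Instant};

// Deadline-bounded polling: re-check a condition until it holds or the
// deadline passes, capping each sleep so wakeups stay responsive.
fn wait_until(deadline: Instant, mut ready: impl FnMut() -> bool) -> bool {
    loop {
        if ready() {
            return true;
        }
        let now = Instant::now();
        if now >= deadline {
            // Final re-check after the deadline, as the pool code does.
            return ready();
        }
        let sleep_for = deadline
            .saturating_duration_since(now)
            .min(Duration::from_millis(25));
        std::thread::sleep(sleep_for);
    }
}

fn main() {
    let mut calls = 0;
    let ok = wait_until(Instant::now() + Duration::from_secs(2), || {
        calls += 1;
        calls >= 3
    });
    assert!(ok);
    assert!(!wait_until(Instant::now(), || false));
}
```

The async version replaces the sleep with a `tokio::select!` over a `Notify` waiter and a capped `sleep`, so a writer becoming available wakes the loop immediately.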
+    async fn trigger_async_recovery_for_target_dc(self: &Arc<Self>, target_dc: i16) -> bool {
+        let endpoints = self.endpoint_candidates_for_target_dc(target_dc).await;
+        if endpoints.is_empty() {
+            return false;
+        }
+        self.stats.increment_me_async_recovery_trigger_total();
+        for addr in endpoints.into_iter().take(8) {
+            self.trigger_immediate_refill(addr);
+        }
+        true
+    }
+    async fn trigger_async_recovery_global(self: &Arc<Self>) {
+        self.stats.increment_me_async_recovery_trigger_total();
+        let mut seen = HashSet::<SocketAddr>::new();
+        for family in self.family_order() {
+            let map = match family {
+                IpFamily::V4 => self.proxy_map_v4.read().await.clone(),
+                IpFamily::V6 => self.proxy_map_v6.read().await.clone(),
+            };
+            for addrs in map.values() {
+                for (ip, port) in addrs {
+                    let addr = SocketAddr::new(*ip, *port);
+                    if seen.insert(addr) {
+                        self.trigger_immediate_refill(addr);
+                    }
+                    if seen.len() >= 8 {
+                        return;
+                    }
+                }
+            }
+        }
+    }
+    async fn endpoint_candidates_for_target_dc(&self, target_dc: i16) -> Vec<SocketAddr> {
+        let key = target_dc as i32;
+        let mut preferred = Vec::<SocketAddr>::new();
+        let mut seen = HashSet::<SocketAddr>::new();
+        for family in self.family_order() {
+            let map = match family {
+                IpFamily::V4 => self.proxy_map_v4.read().await.clone(),
+                IpFamily::V6 => self.proxy_map_v6.read().await.clone(),
+            };
+            let mut lookup_keys = vec![key, key.abs(), -key.abs()];
+            let def = self.default_dc.load(Ordering::Relaxed);
+            if def != 0 {
+                lookup_keys.push(def);
+            }
+            for lookup in lookup_keys {
+                if let Some(addrs) = map.get(&lookup) {
+                    for (ip, port) in addrs {
+                        let addr = SocketAddr::new(*ip, *port);
+                        if seen.insert(addr) {
+                            preferred.push(addr);
+                        }
+                    }
+                }
+            }
+            if !preferred.is_empty() && !self.decision.effective_multipath {
+                break;
+            }
+        }
+        preferred
+    }
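`endpoint_candidates_for_target_dc` dedups addresses with a `HashSet` while keeping the first-seen (priority) order. The core trick in isolation:

```rust
use std::collections::HashSet;
use std::net::SocketAddr;

// Order-preserving dedup: `insert` returns false for repeats, so `filter`
// keeps only the first occurrence of each address.
fn dedup_preserving_order(addrs: Vec<SocketAddr>) -> Vec<SocketAddr> {
    let mut seen = HashSet::new();
    addrs.into_iter().filter(|a| seen.insert(*a)).collect()
}

fn main() {
    let a: SocketAddr = "127.0.0.1:80".parse().unwrap();
    let b: SocketAddr = "127.0.0.1:81".parse().unwrap();
    assert_eq!(dedup_preserving_order(vec![a, b, a, b]), vec![a, b]);
}
```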
     pub async fn send_close(self: &Arc<Self>, conn_id: u64) -> Result<()> {
         if let Some(w) = self.registry.get_writer(conn_id).await {
             let mut p = Vec::with_capacity(12);
@@ -292,6 +476,37 @@ impl MePool {
         Ok(())
     }
+    pub async fn send_close_conn(self: &Arc<Self>, conn_id: u64) -> Result<()> {
+        if let Some(w) = self.registry.get_writer(conn_id).await {
+            let mut p = Vec::with_capacity(12);
+            p.extend_from_slice(&RPC_CLOSE_CONN_U32.to_le_bytes());
+            p.extend_from_slice(&conn_id.to_le_bytes());
+            match w.tx.try_send(WriterCommand::DataAndFlush(p)) {
+                Ok(()) => {}
+                Err(TrySendError::Full(cmd)) => {
+                    let _ = tokio::time::timeout(Duration::from_millis(50), w.tx.send(cmd)).await;
+                }
+                Err(TrySendError::Closed(_)) => {
+                    debug!(conn_id, "ME close_conn skipped: writer channel closed");
+                }
+            }
+        } else {
+            debug!(conn_id, "ME close_conn skipped (writer missing)");
+        }
+        self.registry.unregister(conn_id).await;
+        Ok(())
+    }
+    pub async fn shutdown_send_close_conn_all(self: &Arc<Self>) -> usize {
+        let conn_ids = self.registry.active_conn_ids().await;
+        let total = conn_ids.len();
+        for conn_id in conn_ids {
+            let _ = self.send_close_conn(conn_id).await;
+        }
+        total
+    }
     pub fn connection_count(&self) -> usize {
         self.conn_count.load(Ordering::Relaxed)
     }

View File

@@ -165,6 +165,43 @@ pub enum UpstreamRouteKind {
     Socks5,
 }
+#[derive(Debug, Clone)]
+pub struct UpstreamApiDcSnapshot {
+    pub dc: i16,
+    pub latency_ema_ms: Option<f64>,
+    pub ip_preference: IpPreference,
+}
+#[derive(Debug, Clone)]
+pub struct UpstreamApiItemSnapshot {
+    pub upstream_id: usize,
+    pub route_kind: UpstreamRouteKind,
+    pub address: String,
+    pub weight: u16,
+    pub scopes: String,
+    pub healthy: bool,
+    pub fails: u32,
+    pub last_check_age_secs: u64,
+    pub effective_latency_ms: Option<f64>,
+    pub dc: Vec<UpstreamApiDcSnapshot>,
+}
+#[derive(Debug, Clone, Default)]
+pub struct UpstreamApiSummarySnapshot {
+    pub configured_total: usize,
+    pub healthy_total: usize,
+    pub unhealthy_total: usize,
+    pub direct_total: usize,
+    pub socks4_total: usize,
+    pub socks5_total: usize,
+}
+#[derive(Debug, Clone)]
+pub struct UpstreamApiSnapshot {
+    pub summary: UpstreamApiSummarySnapshot,
+    pub upstreams: Vec<UpstreamApiItemSnapshot>,
+}
 #[derive(Debug, Clone, Copy, PartialEq, Eq)]
 pub struct UpstreamEgressInfo {
     pub route_kind: UpstreamRouteKind,
@@ -217,6 +254,64 @@ impl UpstreamManager {
     }
 }
+    pub fn try_api_snapshot(&self) -> Option<UpstreamApiSnapshot> {
+        let guard = self.upstreams.try_read().ok()?;
+        let now = std::time::Instant::now();
+        let mut summary = UpstreamApiSummarySnapshot {
+            configured_total: guard.len(),
+            ..UpstreamApiSummarySnapshot::default()
+        };
+        let mut upstreams = Vec::with_capacity(guard.len());
+        for (idx, upstream) in guard.iter().enumerate() {
+            if upstream.healthy {
+                summary.healthy_total += 1;
+            } else {
+                summary.unhealthy_total += 1;
+            }
+            let (route_kind, address) = match &upstream.config.upstream_type {
+                UpstreamType::Direct { .. } => {
+                    summary.direct_total += 1;
+                    (UpstreamRouteKind::Direct, "direct".to_string())
+                }
+                UpstreamType::Socks4 { address, .. } => {
+                    summary.socks4_total += 1;
+                    (UpstreamRouteKind::Socks4, address.clone())
+                }
+                UpstreamType::Socks5 { address, .. } => {
+                    summary.socks5_total += 1;
+                    (UpstreamRouteKind::Socks5, address.clone())
+                }
+            };
+            let mut dc = Vec::with_capacity(NUM_DCS);
+            for dc_idx in 0..NUM_DCS {
+                dc.push(UpstreamApiDcSnapshot {
+                    dc: (dc_idx + 1) as i16,
+                    latency_ema_ms: upstream.dc_latency[dc_idx].get(),
+                    ip_preference: upstream.dc_ip_pref[dc_idx],
+                });
+            }
+            upstreams.push(UpstreamApiItemSnapshot {
+                upstream_id: idx,
+                route_kind,
+                address,
+                weight: upstream.config.weight,
+                scopes: upstream.config.scopes.clone(),
+                healthy: upstream.healthy,
+                fails: upstream.fails,
+                last_check_age_secs: now.saturating_duration_since(upstream.last_check).as_secs(),
+                effective_latency_ms: upstream.effective_latency(None),
+                dc,
+            });
+        }
+        Some(UpstreamApiSnapshot { summary, upstreams })
+    }
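`try_api_snapshot` uses `try_read` so the API handler returns `None` instead of blocking when the upstream list is being written. A std sketch of the non-blocking snapshot idea (a `Vec<u64>` stands in for the upstream list):

```rust
use std::sync::RwLock;

// Non-blocking snapshot: if the lock is currently held for writing, return
// None instead of stalling the caller; the API can report "busy" and retry.
fn try_snapshot(data: &RwLock<Vec<u64>>) -> Option<Vec<u64>> {
    let guard = data.try_read().ok()?;
    Some(guard.clone())
}

fn main() {
    let data = RwLock::new(vec![1, 2, 3]);
    // Uncontended: the snapshot succeeds.
    assert_eq!(try_snapshot(&data), Some(vec![1, 2, 3]));
    // While a writer holds the lock, try_read fails and we get None.
    let w = data.write().unwrap();
    assert_eq!(try_snapshot(&data), None);
    drop(w);
}
```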
 #[cfg(unix)]
 fn resolve_interface_addrs(name: &str, want_ipv6: bool) -> Vec<IpAddr> {
     use nix::ifaddrs::getifaddrs;

View File

@@ -47,6 +47,54 @@ zabbix_export:
           tags:
             - tag: Application
               value: 'Server connections'
+        - uuid: 2af8ff0f27e4408db3f9798dc3141457
+          name: 'Full forensic desync logs emitted'
+          type: DEPENDENT
+          key: telemt.desync_full_logged_total
+          delay: '0'
+          preprocessing:
+            - type: PROMETHEUS_PATTERN
+              parameters:
+                - telemt_desync_full_logged_total
+                - value
+                - ''
+          master_item:
+            key: telemt.prom_metrics
+          tags:
+            - tag: Application
+              value: 'Telemt other'
+        - uuid: f4439948a49f4b1d85c3eeee963259bc
+          name: 'Suppressed desync forensic events'
+          type: DEPENDENT
+          key: telemt.desync_suppressed_total
+          delay: '0'
+          preprocessing:
+            - type: PROMETHEUS_PATTERN
+              parameters:
+                - telemt_desync_suppressed_total
+                - value
+                - ''
+          master_item:
+            key: telemt.prom_metrics
+          tags:
+            - tag: Application
+              value: 'Telemt other'
+        - uuid: 721627b8c10a414a82be1e08873604c1
+          name: 'Total crypto-desync detections'
+          type: DEPENDENT
+          key: telemt.desync_total
+          delay: '0'
+          preprocessing:
+            - type: PROMETHEUS_PATTERN
+              parameters:
+                - telemt_desync_total
+                - value
+                - ''
+          master_item:
+            key: telemt.prom_metrics
+          tags:
+            - tag: Application
+              value: 'Telemt other'
- uuid: 1618272cf68e44509425f5fab029db7b
name: 'Handshake timeouts total'
type: DEPENDENT
@@ -64,6 +112,152 @@ zabbix_export:
tags:
- tag: Application
value: 'Server connections'
- uuid: 4e5c0d10a4494c959445b4cd7a2e696e
name: 'ME CRC mismatches'
type: DEPENDENT
key: telemt.me_crc_mismatch_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_crc_mismatch_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Middle-End connections'
- uuid: 21a4a48b6e98457d87c56c3ae7b56c55
name: 'ME endpoint quarantines due to rapid flaps'
type: DEPENDENT
key: telemt.me_endpoint_quarantine_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_endpoint_quarantine_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: c8ffc30dc3d94a6d9085ac79413fbdd6
name: 'Runtime ME writer floor policy mode'
type: DEPENDENT
key: telemt.me_floor_mode
delay: '0'
value_type: TEXT
trends: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- 'telemt_me_floor_mode == 1'
- label
- mode
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: 4814b52d5d184f63b64654e7635bdf6a
name: 'ME handshake rejects from upstream'
type: DEPENDENT
key: telemt.me_handshake_reject_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_handshake_reject_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: 72d11caecefb4472b6c3e07f1ee90053
name: 'Hardswap cycles that reused an existing pending generation'
type: DEPENDENT
key: telemt.me_hardswap_pending_reuse_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_hardswap_pending_reuse_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: 447030854e8840a393874f54e25861d5
name: 'Pending hardswap generations reset by TTL expiration'
type: DEPENDENT
key: telemt.me_hardswap_pending_ttl_expired_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_hardswap_pending_ttl_expired_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: 47f55dd7d9394405b1c0eba6e6eb3e5c
name: 'ME idle writers closed by peer'
type: DEPENDENT
key: telemt.me_idle_close_by_peer_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_idle_close_by_peer_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: 9e4598efbfe246fab9360270002b0cfa
name: 'ME KDF input drift detections'
type: DEPENDENT
key: telemt.me_kdf_drift_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_kdf_drift_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: 565cc9780c5541bfb7acbb1f4973b5fc
name: 'ME KDF client-port changes with stable non-port material'
type: DEPENDENT
key: telemt.me_kdf_port_only_drift_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_kdf_port_only_drift_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: fb95391c7f894e3eb6984b92885813d2
name: 'ME keepalive send failures'
type: DEPENDENT
@@ -81,6 +275,22 @@ zabbix_export:
tags:
- tag: Application
value: 'Middle-End connections'
- uuid: 7b5995401195430e9f9e02e5dd8c3313
name: 'ME keepalive pong replies'
type: DEPENDENT
key: telemt.me_keepalive_pong_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_keepalive_pong_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Middle-End connections'
- uuid: fb95391c7f894e3eb6984b92885813c2
name: 'ME keepalive frames sent'
type: DEPENDENT
@@ -98,6 +308,38 @@ zabbix_export:
tags:
- tag: Application
value: 'Middle-End connections'
- uuid: da5af5fd691d4f40bc6cad78b4758eac
name: 'ME keepalive ping timeouts'
type: DEPENDENT
key: telemt.me_keepalive_timeout_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_keepalive_timeout_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Middle-End connections'
- uuid: 50b45e494d584a7b86fca8b80c727411
name: 'ME reader EOF terminations'
type: DEPENDENT
key: telemt.me_reader_eof_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_reader_eof_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: fb95391c7f894e3eb6984b92885811a2
name: 'ME reconnect attempts'
type: DEPENDENT
@@ -132,6 +374,470 @@ zabbix_export:
tags:
- tag: Application
value: 'Middle-End connections'
- uuid: 6288b537b7964aadb8a483abd716855a
name: 'Immediate ME refill failures'
type: DEPENDENT
key: telemt.me_refill_failed_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_refill_failed_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: 8450bdb48f9b4505beb8fdfc665b37c5
name: 'Immediate ME refill skips due to inflight dedup'
type: DEPENDENT
key: telemt.me_refill_skipped_inflight_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_refill_skipped_inflight_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: cb192264c03a40578140863970333515
name: 'Immediate ME refill runs started'
type: DEPENDENT
key: telemt.me_refill_triggered_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_refill_triggered_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: 8f46b374332848fba0daba72e17eaad0
name: 'ME route drops: channel closed'
type: DEPENDENT
key: telemt.me_route_drop_channel_closed_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_route_drop_channel_closed_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Middle-End connections'
- uuid: de5fa7a316554d099bcf5e000b33bfed
name: 'ME route drops: no conn'
type: DEPENDENT
key: telemt.me_route_drop_no_conn_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_route_drop_no_conn_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Middle-End connections'
- uuid: d9e1630ce38946f7a8d179187793f12c
name: 'ME route drops: queue full by adaptive profile'
type: DEPENDENT
key: telemt.me_route_drop_queue_full_profile_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- 'telemt_me_route_drop_queue_full_profile_total == 1'
- label
- profile
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: d5caefb8978e4f3eac4dcdecd4655c46
name: 'ME route drops: queue full'
type: DEPENDENT
key: telemt.me_route_drop_queue_full_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_route_drop_queue_full_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: f682298c2dfc46dda45771a58faa9ffa
name: 'Service RPC_CLOSE_EXT sent after activity signals'
type: DEPENDENT
key: telemt.me_rpc_proxy_req_signal_close_sent_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_rpc_proxy_req_signal_close_sent_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: 5db4bdc93959473eade9281c221e34b6
name: 'Service RPC_PROXY_REQ activity signal failures'
type: DEPENDENT
key: telemt.me_rpc_proxy_req_signal_failed_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_rpc_proxy_req_signal_failed_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: 4e75611bc3854415b63a1863e9bf176f
name: 'Service RPC_PROXY_REQ responses observed'
type: DEPENDENT
key: telemt.me_rpc_proxy_req_signal_response_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_rpc_proxy_req_signal_response_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: ecbffb29f2784839bea0ce2a38393438
name: 'Service RPC_PROXY_REQ activity signals sent'
type: DEPENDENT
key: telemt.me_rpc_proxy_req_signal_sent_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_rpc_proxy_req_signal_sent_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: 078eff3deeec435597f0c531457bb906
name: 'Service RPC_PROXY_REQ skipped due to missing writer metadata'
type: DEPENDENT
key: telemt.me_rpc_proxy_req_signal_skipped_no_meta_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_rpc_proxy_req_signal_skipped_no_meta_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: 7429ffbd94a340d7a600bc1690eb57e7
name: 'ME sequence mismatches'
type: DEPENDENT
key: telemt.me_seq_mismatch_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_seq_mismatch_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: 0f1f77ae34df4a48b36ad263359b5ad3
name: 'Single-endpoint DC outage transitions to active state'
type: DEPENDENT
key: telemt.me_single_endpoint_outage_enter_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_single_endpoint_outage_enter_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: 63d44ef672ff4df288914eb98f6fa72c
name: 'Single-endpoint DC outage recovery transitions'
type: DEPENDENT
key: telemt.me_single_endpoint_outage_exit_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_single_endpoint_outage_exit_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: 1b72ff95f1ba4fb2924aa3a129b22f4d
name: 'Reconnect attempts performed during single-endpoint outages'
type: DEPENDENT
key: telemt.me_single_endpoint_outage_reconnect_attempt_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_single_endpoint_outage_reconnect_attempt_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: 466bb352d55946a0bb78efc63e1ed71e
name: 'Successful reconnect attempts during single-endpoint outages'
type: DEPENDENT
key: telemt.me_single_endpoint_outage_reconnect_success_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_single_endpoint_outage_reconnect_success_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: 295b4a519a4d46f7b1ddbdf5b5268751
name: 'Outage reconnect attempts that bypassed quarantine'
type: DEPENDENT
key: telemt.me_single_endpoint_quarantine_bypass_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_single_endpoint_quarantine_bypass_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: bffa4861f83f4445bb0b2259e100e04c
name: 'Shadow rotations skipped because endpoint is quarantined'
type: DEPENDENT
key: telemt.me_single_endpoint_shadow_rotate_skipped_quarantine_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_single_endpoint_shadow_rotate_skipped_quarantine_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: f80ce02b50824f8ea0ddabac9ff97757
name: 'Successful periodic shadow rotations for single-endpoint DC groups'
type: DEPENDENT
key: telemt.me_single_endpoint_shadow_rotate_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_single_endpoint_shadow_rotate_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: bf2a0ff89c314f78904aa43351601111
name: 'Total ME writer removals'
type: DEPENDENT
key: telemt.me_writer_removed_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_writer_removed_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: 0d12ea02187745eba55498dfb16daa5c
name: 'Unexpected writer removals not yet compensated by restore'
type: DEPENDENT
key: telemt.me_writer_removed_unexpected_minus_restored_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_writer_removed_unexpected_minus_restored_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: 644278e7f87947e1a49483ba4487e32b
name: 'Unexpected ME writer removals that triggered refill'
type: DEPENDENT
key: telemt.me_writer_removed_unexpected_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_writer_removed_unexpected_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: a6c24dfc85d643dab1c81fc1e63fe3cc
name: 'Refilled ME writer restored via fallback endpoint'
type: DEPENDENT
key: telemt.me_writer_restored_fallback_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_writer_restored_fallback_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: d7d0a78ca6da4bb9b4a0991fd83149cf
name: 'Refilled ME writer restored on the same endpoint'
type: DEPENDENT
key: telemt.me_writer_restored_same_endpoint_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_me_writer_restored_same_endpoint_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: beb906ab89564cf9adfbb7b1d4553c44
name: 'Active draining ME writers'
type: DEPENDENT
key: telemt.pool_drain_active
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_pool_drain_active
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: 2f0926e00d7a4e5aa1783cb33b1192ea
name: 'Forced close events for draining writers'
type: DEPENDENT
key: telemt.pool_force_close_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_pool_force_close_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: 70d0b4da6079435ebe978e99bda8f1d3
name: 'Stale writer fallback picks for new binds'
type: DEPENDENT
key: telemt.pool_stale_pick_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_pool_stale_pick_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: 8a1d240b9b554905a8add9bf730bf1f4
name: 'Successful ME pool swaps'
type: DEPENDENT
key: telemt.pool_swap_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_pool_swap_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: 991b1858e3f94b3098ff0f84859efc41
name: 'Prometheus metrics'
type: HTTP_AGENT
@@ -139,11 +845,158 @@ zabbix_export:
value_type: TEXT
trends: '0'
url: '{$TELEMT_URL}'
- uuid: cef2547bb9464d10b11b6c19beac089d
name: 'Invalid secure frame lengths'
type: DEPENDENT
key: telemt.secure_padding_invalid_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_secure_padding_invalid_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: c164d7b59bdc4429a23b908558de8cf4
name: 'Runtime core telemetry switch'
type: DEPENDENT
key: telemt.telemetry_core_enabled
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_telemetry_core_enabled
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: ff16438417d842178d26033d13520833
name: 'Runtime ME telemetry level flag'
type: DEPENDENT
key: telemt.telemetry_me_level
delay: '0'
value_type: TEXT
trends: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- 'telemt_telemetry_me_level == 1'
- label
- level
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: 9fec0bb7c3c84ada96668b74d5849556
name: 'Runtime per-user telemetry switch'
type: DEPENDENT
key: telemt.telemetry_user_enabled
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_telemetry_user_enabled
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: 378b765aa7bc4a4ea87d3bc876c50d12
name: 'User-labeled metric series suppression flag'
type: DEPENDENT
key: telemt.telemetry_user_series_suppressed
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_telemetry_user_series_suppressed
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: 17972d992fa84fc1b53fdefed123ccd8
name: 'Upstream connect attempts across all requests'
type: DEPENDENT
key: telemt.upstream_connect_attempt_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_upstream_connect_attempt_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: 38627dd1cb7145e180d111bdee1d2c23
name: 'Hard errors that triggered upstream connect failfast'
type: DEPENDENT
key: telemt.upstream_connect_failfast_hard_error_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_upstream_connect_failfast_hard_error_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: 0ffd4c35b6734c83bd77c59f30bf3246
name: 'Failed upstream connect request cycles'
type: DEPENDENT
key: telemt.upstream_connect_fail_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_upstream_connect_fail_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: 7da255f4f38c4095921bc876d16d3586
name: 'Successful upstream connect request cycles'
type: DEPENDENT
key: telemt.upstream_connect_success_total
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- telemt_upstream_connect_success_total
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Telemt other'
- uuid: fb95391c7f894e3eb6984b92885813b2
name: 'Telemt Uptime'
type: DEPENDENT
key: telemt.uptime
delay: '0'
value_type: FLOAT
trends: '0'
units: s
preprocessing:
@@ -180,6 +1033,56 @@ zabbix_export:
tags:
- tag: Application
value: 'Users connections'
- uuid: f7ad02d1635542b584bba5941375ae41
name: 'Current number of unique active IPs by {#TELEMT_USER}'
type: DEPENDENT
key: 'telemt.ips_current_[{#TELEMT_USER}]'
delay: '0'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- 'telemt_user_unique_ips_current{user="{#TELEMT_USER}"}'
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Users IPs'
- uuid: 100b09bf1cff420495c5c105bdb0af6c
name: 'Configured unique IP limit to {#TELEMT_USER}'
type: DEPENDENT
key: 'telemt.ips_limit_[{#TELEMT_USER}]'
delay: '0'
description: '0 means unlimited'
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- 'telemt_user_unique_ips_limit{user="{#TELEMT_USER}"}'
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Users IPs'
- uuid: ef3ac8f5c5d746bbaa4b0b698ba0d9f6
name: 'Unique IP usage ratio by {#TELEMT_USER}'
type: DEPENDENT
key: 'telemt.ips_utilization_[{#TELEMT_USER}]'
delay: '0'
value_type: FLOAT
preprocessing:
- type: PROMETHEUS_PATTERN
parameters:
- 'telemt_user_unique_ips_utilization{user="{#TELEMT_USER}"}'
- value
- ''
master_item:
key: telemt.prom_metrics
tags:
- tag: Application
value: 'Users IPs'
- uuid: 3ccce91ab5d54b4d972280c7b7bda910
name: 'Messages received from {#TELEMT_USER}'
type: DEPENDENT
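Several items in this template (for example `telemt.me_floor_mode` and `telemt.telemetry_me_level`) use a `PROMETHEUS_PATTERN` step with output `label`: the item matches the series whose sample equals 1 and returns the named label's value instead of the number. A minimal sketch of that extraction over text-format exposition (a hypothetical helper, not Zabbix's implementation):

```rust
/// Hypothetical sketch of a `PROMETHEUS_PATTERN` step with `label`
/// output: find the series of `metric` whose sample is 1 and return
/// the value of the requested label.
fn label_of_active_series(exposition: &str, metric: &str, label: &str) -> Option<String> {
    for raw in exposition.lines() {
        let line = raw.trim();
        // Skip comments/HELP/TYPE lines and other metrics.
        if line.starts_with('#') || !line.starts_with(metric) {
            continue;
        }
        // Locate the `{labels}` block; skip series without labels.
        let (open, close) = match (line.find('{'), line.find('}')) {
            (Some(o), Some(c)) if o < c => (o, c),
            _ => continue,
        };
        // Only the series whose sample value is exactly 1 is selected.
        if line[close + 1..].trim() != "1" {
            continue;
        }
        for pair in line[open + 1..close].split(',') {
            if let Some((k, v)) = pair.split_once('=') {
                if k.trim() == label {
                    return Some(v.trim().trim_matches('"').to_string());
                }
            }
        }
    }
    None
}

fn main() {
    let exposition = "\
telemt_me_floor_mode{mode=\"strict\"} 0
telemt_me_floor_mode{mode=\"runtime\"} 1";
    assert_eq!(
        label_of_active_series(exposition, "telemt_me_floor_mode", "mode"),
        Some("runtime".to_string())
    );
}
```

This one-hot encoding (one series per mode, value 1 on the active one) is why the template stores these items as `value_type: TEXT` with `trends: '0'`.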