Mirror of https://github.com/telemt/telemt.git, synced 2026-04-15 17:44:11 +03:00
Compare commits
58 Commits
| SHA1 |
| --- |
| 32eeb4a98c |
| a74cc14ed9 |
| 5f77f83b48 |
| d543dbca92 |
| 02f9d59f5a |
| 7b745bc7bc |
| 5ac0ef1ffd |
| e1f3efb619 |
| 508eea0131 |
| 9e7f80b9b3 |
| ee2def2e62 |
| 258191ab87 |
| 27e6dec018 |
| 26323dbebf |
| 484137793f |
| 24713feddc |
| 93f58524d1 |
| 0ff2e95e49 |
| 89222e7123 |
| 2468ee15e7 |
| 3440aa9fcd |
| ce9698d39b |
| ddfe7c5cfa |
| 01893f3712 |
| 8ae741ec72 |
| 6856466cef |
| 68292fbd26 |
| e90c42ae68 |
| 9f9a5dce0d |
| 6739cd8d01 |
| 6cc8d9cb00 |
| ce375b62e4 |
| 95971ac62c |
| 4ea2226dcd |
| d752a440e5 |
| 5ce2ee2dae |
| 6fd9f0595d |
| fcdd8a9796 |
| 640468d4e7 |
| 02fe89f7d0 |
| 24df865503 |
| e9f8c79498 |
| 24ff75701e |
| 4221230969 |
| d87196c105 |
| da89415961 |
| 2d98ebf3c3 |
| fb5e9947bd |
| 2ea85c00d3 |
| 2a3b6b917f |
| 83ed9065b0 |
| 44b825edf5 |
| 487e95a66e |
| c465c200c4 |
| d7716ad875 |
| edce194948 |
| 13fdff750d |
| bdcf110c87 |
Cargo.toml

```diff
@@ -1,6 +1,6 @@
 [package]
 name = "telemt"
-version = "3.3.4"
+version = "3.3.9"
 edition = "2024"

 [dependencies]
```
README.md (181)
```diff
@@ -19,18 +19,24 @@

 ### 🇷🇺 RU

-#### Релиз 3.0.15 — 25 февраля
+#### Релиз 3.3.5 LTS - 6 марта

-25 февраля мы выпустили версию **3.0.15**
+6 марта мы выпустили Telemt **3.3.5**

-Мы предполагаем, что она станет завершающей версией поколения 3.0 и уже сейчас мы рассматриваем её как **LTS-кандидата** для версии **3.1.0**!
+Это [3.3.5 - первая LTS-версия telemt](https://github.com/telemt/telemt/releases/tag/3.3.5)!

-После нескольких дней детального анализа особенностей работы Middle-End мы спроектировали и реализовали продуманный режим **ротации ME Writer**. Данный режим позволяет поддерживать стабильно высокую производительность в long-run сценариях без возникновения ошибок, связанных с некорректной конфигурацией прокси
+В ней используется:
+- новый алгоритм ME NoWait для непревзойдённо быстрого восстановления пула
+- Adaptive Floor, поддерживающий количество ME Writer на оптимальном уровне
+- модель усовершенствованного доступа к KDF Fingerprint на RwLock
+- строгая привязка Middle-End к DC-ID с предсказуемым алгоритмом деградации и самовосстановления

-Будем рады вашему фидбеку и предложениям по улучшению — особенно в части **статистики** и **UX**
+Telemt Control API V1 в 3.3.5 включает:
+- несколько режимов работы в зависимости от доступных ресурсов
+- снапшот-модель для живых метрик без вмешательства в hot-path
+- минималистичный набор запросов для управления пользователями

-Релиз:
-[3.0.15](https://github.com/telemt/telemt/releases/tag/3.0.15)
+Будем рады вашему фидбеку и предложениям по улучшению — особенно в части **API**, **статистики**, **UX**

 ---
```
```diff
@@ -47,18 +53,24 @@

 ### 🇬🇧 EN

-#### Release 3.0.15 — February 25
+#### Release 3.3.5 LTS - March 6

-On February 25, we released version **3.0.15**
+On March 6, we released Telemt **3.3.5**

-We expect this to become the final release of the 3.0 generation and at this point, we already see it as a strong **LTS candidate** for the upcoming **3.1.0** release!
+This is [3.3.5 - the first LTS release of telemt](https://github.com/telemt/telemt/releases/tag/3.3.5)

-After several days of deep analysis of Middle-End behavior, we designed and implemented a well-engineered **ME Writer rotation mode**. This mode enables sustained high throughput in long-run scenarios while preventing proxy misconfiguration errors
+It introduces:
+- the new ME NoWait algorithm for exceptionally fast pool recovery
+- Adaptive Floor, which maintains the number of ME Writers at an optimal level
+- an improved KDF Fingerprint access model based on RwLock
+- strict binding of Middle-End instances to DC-ID with a predictable degradation and self-recovery algorithm

-We are looking forward to your feedback and improvement proposals — especially regarding **statistics** and **UX**
+Telemt Control API V1 in version 3.3.5 includes:
+- multiple operating modes depending on available resources
+- a snapshot-based model for live metrics without interfering with the hot path
+- a minimalistic request set for user management

-Release:
-[3.0.15](https://github.com/telemt/telemt/releases/tag/3.0.15)
+We are looking forward to your feedback and improvement proposals — especially regarding **API**, **statistics**, **UX**

 ---
```
```diff
@@ -81,31 +93,6 @@ We welcome ideas, architectural feedback, and pull requests.

 ⚓ Our ***Middle-End Pool*** is fastest by design in standard scenarios, compared to other implementations of connecting to the Middle-End Proxy: not dramatically, but consistently

-# GOTO
-- [Features](#features)
-- [Quick Start Guide](#quick-start-guide)
-- [How to use?](#how-to-use)
-- [Systemd Method](#telemt-via-systemd)
-- [Configuration](#configuration)
-- [Minimal Configuration](#minimal-configuration-for-first-start)
-- [Advanced](#advanced)
-- [Adtag](#adtag)
-- [Listening and Announce IPs](#listening-and-announce-ips)
-- [Upstream Manager](#upstream-manager)
-- [IP](#bind-on-ip)
-- [SOCKS](#socks45-as-upstream)
-- [FAQ](#faq)
-- [Recognizability for DPI + crawler](#recognizability-for-dpi-and-crawler)
-- [Telegram Calls](#telegram-calls-via-mtproxy)
-- [DPI](#how-does-dpi-see-mtproxy-tls)
-- [Whitelist on Network Level](#whitelist-on-ip)
-- [Too many open files](#too-many-open-files)
-- [Build](#build)
-- [Docker](#docker)
-- [Why Rust?](#why-rust)

 ## Features

 - Full support for all official MTProto proxy modes:
   - Classic
   - Secure - with `dd` prefix
```
````diff
@@ -116,59 +103,40 @@ We welcome ideas, architectural feedback, and pull requests.
 - Graceful shutdown on Ctrl+C
 - Extensive logging via `trace` and `debug` with `RUST_LOG` method

+# GOTO
+- [Telemt - MTProxy on Rust + Tokio](#telemt---mtproxy-on-rust--tokio)
+- [NEWS and EMERGENCY](#news-and-emergency)
+- [✈️ Telemt 3 is released!](#️-telemt-3-is-released)
+- [🇷🇺 RU](#-ru)
+- [Релиз 3.3.5 LTS - 6 марта](#релиз-335-lts---6-марта)
+- [🇬🇧 EN](#-en)
+- [Release 3.3.5 LTS - March 6](#release-335-lts---march-6)
+- [Features](#features)
+- [GOTO](#goto)
+- [Quick Start Guide](#quick-start-guide)
+- [FAQ](#faq)
+- [Recognizability for DPI and crawler](#recognizability-for-dpi-and-crawler)
+- [Client WITH secret-key accesses the MTProxy resource:](#client-with-secret-key-accesses-the-mtproxy-resource)
+- [Client WITHOUT secret-key gets transparent access to the specified resource:](#client-without-secret-key-gets-transparent-access-to-the-specified-resource)
+- [Telegram Calls via MTProxy](#telegram-calls-via-mtproxy)
+- [How does DPI see MTProxy TLS?](#how-does-dpi-see-mtproxy-tls)
+- [Whitelist on IP](#whitelist-on-ip)
+- [Too many open files](#too-many-open-files)
+- [Build](#build)
+- [Why Rust?](#why-rust)
+- [Issues](#issues)
+- [Roadmap](#roadmap)

 ## Quick Start Guide

-### [Quick Start Guide RU](docs/QUICK_START_GUIDE.ru.md)
-### [Quick Start Guide EN](docs/QUICK_START_GUIDE.en.md)
-
-### Advanced
-#### Adtag (per-user)
-To use channel advertising and usage statistics from Telegram, get an Adtag from [@mtproxybot](https://t.me/mtproxybot). Set it per user in `[access.user_ad_tags]` (32 hex chars):
-```toml
-[access.user_ad_tags]
-username1 = "11111111111111111111111111111111" # Replace with your tag from @mtproxybot
-username2 = "22222222222222222222222222222222"
-```
-#### Listening and Announce IPs
-To specify listening address and/or address in links, add to section `[[server.listeners]]` of config.toml:
-```toml
-[[server.listeners]]
-ip = "0.0.0.0" # 0.0.0.0 = all IPs; your IP = specific listening
-announce_ip = "1.2.3.4" # IP in links; comment with # if not used
-```
-#### Upstream Manager
-To specify upstream, add to section `[[upstreams]]` of config.toml:
-##### Bind on IP
-```toml
-[[upstreams]]
-type = "direct"
-weight = 1
-enabled = true
-interface = "192.168.1.100" # Change to your outgoing IP
-```
-##### SOCKS4/5 as Upstream
-- Without Auth:
-```toml
-[[upstreams]]
-type = "socks5" # Specify SOCKS4 or SOCKS5
-address = "1.2.3.4:1234" # SOCKS-server Address
-weight = 1 # Set Weight for Scenarios
-enabled = true
-```
-
-- With Auth:
-```toml
-[[upstreams]]
-type = "socks5" # Specify SOCKS4 or SOCKS5
-address = "1.2.3.4:1234" # SOCKS-server Address
-username = "user" # Username for Auth on SOCKS-server
-password = "pass" # Password for Auth on SOCKS-server
-weight = 1 # Set Weight for Scenarios
-enabled = true
-```
+- [Quick Start Guide RU](docs/QUICK_START_GUIDE.ru.md)
+- [Quick Start Guide EN](docs/QUICK_START_GUIDE.en.md)

 ## FAQ

 - [FAQ RU](docs/FAQ.ru.md)
 - [FAQ EN](docs/FAQ.en.md)

 ### Recognizability for DPI and crawler
 Since version 1.1.0.0, we have debugged masking perfectly: for all clients without "presenting" a key,
 we transparently direct traffic to the target host!
````
````diff
@@ -313,41 +281,6 @@ chmod +x /bin/telemt
 telemt config.toml
 ```

-## Docker
-**Quick start (Docker Compose)**
-
-1. Edit `config.toml` in repo root (at least: port, users secrets, tls_domain)
-2. Start container:
-```bash
-docker compose up -d --build
-```
-3. Check logs:
-```bash
-docker compose logs -f telemt
-```
-4. Stop:
-```bash
-docker compose down
-```
-
-**Notes**
-- `docker-compose.yml` maps `./config.toml` to `/app/config.toml` (read-only)
-- By default it publishes `443:443` and runs with dropped capabilities (only `NET_BIND_SERVICE` is added)
-- If you really need host networking (usually only for some IPv6 setups) uncomment `network_mode: host`
-
-**Run without Compose**
-```bash
-docker build -t telemt:local .
-docker run --name telemt --restart unless-stopped \
-  -p 443:443 \
-  -e RUST_LOG=info \
-  -v "$PWD/config.toml:/app/config.toml:ro" \
-  --read-only \
-  --cap-drop ALL --cap-add NET_BIND_SERVICE \
-  --ulimit nofile=65536:65536 \
-  telemt:local
-```

 ## Why Rust?
 - Long-running reliability and idempotent behavior
 - Rust's deterministic resource management - RAII
````
docs/API.md (33)
```diff
@@ -16,6 +16,10 @@ API runtime is configured in `[server.api]`.
 | `request_body_limit_bytes` | `usize` | `65536` | Maximum request body size. Must be `> 0`. |
 | `minimal_runtime_enabled` | `bool` | `false` | Enables runtime snapshot endpoints requiring ME pool read-lock aggregation. |
 | `minimal_runtime_cache_ttl_ms` | `u64` | `1000` | Cache TTL for minimal snapshots. `0` disables cache; valid range is `[0, 60000]`. |
+| `runtime_edge_enabled` | `bool` | `false` | Enables runtime edge endpoints with cached aggregation payloads. |
+| `runtime_edge_cache_ttl_ms` | `u64` | `1000` | Cache TTL for runtime edge summary payloads. `0` disables cache. |
+| `runtime_edge_top_n` | `usize` | `10` | Top-N rows for runtime edge leaderboard payloads. |
+| `runtime_edge_events_capacity` | `usize` | `256` | Ring-buffer size for `/v1/runtime/events/recent`. |
 | `read_only` | `bool` | `false` | Disables mutating endpoints. |

 `server.admin_api` is accepted as an alias for backward compatibility.
```
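The option table above can be collected into a single `[server.api]` sketch. The keys and ranges come from the documentation in this diff; the concrete values below are illustrative examples, not recommended settings:

```toml
# Illustrative [server.api] block; every key and range is taken from the
# option table above, the concrete values are examples only.
[server.api]
enabled = true
listen = "127.0.0.1:9091"            # must be a valid IP:PORT
request_body_limit_bytes = 65536     # must be > 0
minimal_runtime_enabled = true
minimal_runtime_cache_ttl_ms = 1000  # [0, 60000]; 0 disables the cache
runtime_edge_enabled = true
runtime_edge_cache_ttl_ms = 1000     # [0, 60000]
runtime_edge_top_n = 10              # [1, 1000]
runtime_edge_events_capacity = 256   # [16, 4096]
read_only = false                    # true disables mutating endpoints
```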
```diff
@@ -24,6 +28,9 @@ Runtime validation for API config:
 - `server.api.listen` must be a valid `IP:PORT`.
 - `server.api.request_body_limit_bytes` must be `> 0`.
 - `server.api.minimal_runtime_cache_ttl_ms` must be within `[0, 60000]`.
+- `server.api.runtime_edge_cache_ttl_ms` must be within `[0, 60000]`.
+- `server.api.runtime_edge_top_n` must be within `[1, 1000]`.
+- `server.api.runtime_edge_events_capacity` must be within `[16, 4096]`.

 ## Protocol Contract
```
```diff
@@ -80,12 +87,19 @@ Notes:
 | `GET` | `/v1/runtime/gates` | none | `200` | `RuntimeGatesData` |
 | `GET` | `/v1/limits/effective` | none | `200` | `EffectiveLimitsData` |
 | `GET` | `/v1/security/posture` | none | `200` | `SecurityPostureData` |
+| `GET` | `/v1/security/whitelist` | none | `200` | `SecurityWhitelistData` |
 | `GET` | `/v1/stats/summary` | none | `200` | `SummaryData` |
 | `GET` | `/v1/stats/zero/all` | none | `200` | `ZeroAllData` |
 | `GET` | `/v1/stats/upstreams` | none | `200` | `UpstreamsData` |
 | `GET` | `/v1/stats/minimal/all` | none | `200` | `MinimalAllData` |
 | `GET` | `/v1/stats/me-writers` | none | `200` | `MeWritersData` |
 | `GET` | `/v1/stats/dcs` | none | `200` | `DcStatusData` |
+| `GET` | `/v1/runtime/me_pool_state` | none | `200` | `RuntimeMePoolStateData` |
+| `GET` | `/v1/runtime/me_quality` | none | `200` | `RuntimeMeQualityData` |
+| `GET` | `/v1/runtime/upstream_quality` | none | `200` | `RuntimeUpstreamQualityData` |
+| `GET` | `/v1/runtime/nat_stun` | none | `200` | `RuntimeNatStunData` |
+| `GET` | `/v1/runtime/connections/summary` | none | `200` | `RuntimeEdgeConnectionsSummaryData` |
+| `GET` | `/v1/runtime/events/recent` | none | `200` | `RuntimeEdgeEventsData` |
 | `GET` | `/v1/stats/users` | none | `200` | `UserInfo[]` |
 | `GET` | `/v1/users` | none | `200` | `UserInfo[]` |
 | `POST` | `/v1/users` | `CreateUserRequest` | `201` | `CreateUserResponse` |
```
```diff
@@ -268,6 +282,25 @@ Note: the request contract is defined, but the corresponding route currently ret
 | `telemetry_user_enabled` | `bool` | Per-user telemetry toggle. |
 | `telemetry_me_level` | `string` | ME telemetry level (`silent`, `normal`, `debug`). |

+### `SecurityWhitelistData`
+| Field | Type | Description |
+| --- | --- | --- |
+| `generated_at_epoch_secs` | `u64` | Snapshot generation timestamp. |
+| `enabled` | `bool` | `true` when whitelist has at least one CIDR entry. |
+| `entries_total` | `usize` | Number of whitelist CIDR entries. |
+| `entries` | `string[]` | Whitelist CIDR entries as strings. |
+
+### Runtime Min Endpoints
+- `/v1/runtime/me_pool_state`: generations, hardswap state, writer contour/health counts, refill inflight snapshot.
+- `/v1/runtime/me_quality`: ME error/drift/reconnect counters and per-DC RTT coverage snapshot.
+- `/v1/runtime/upstream_quality`: upstream runtime policy, connect counters, health summary and per-upstream DC latency/IP preference.
+- `/v1/runtime/nat_stun`: NAT/STUN runtime flags, server lists, reflection cache state and backoff remaining.
+
+### Runtime Edge Endpoints
+- `/v1/runtime/connections/summary`: cached connection totals (`total/me/direct`), active users and top-N users by connections/traffic.
+- `/v1/runtime/events/recent?limit=N`: bounded control-plane ring-buffer events (`limit` clamped to `[1, 1000]`).
+- If `server.api.runtime_edge_enabled=false`, runtime edge endpoints return `enabled=false` with `reason=feature_disabled`.

 ### `ZeroAllData`
 | Field | Type | Description |
 | --- | --- | --- |
```
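To illustrate the events payload, a `/v1/runtime/events/recent` response can be assembled from the `SuccessResponse` envelope and the `ApiEventSnapshot`/`ApiEventRecord` fields defined in `src/api/events.rs`. The `event_type`, `context`, and `revision` values here are made up for the example:

```json
{
  "ok": true,
  "data": {
    "capacity": 256,
    "dropped_total": 0,
    "events": [
      {
        "seq": 42,
        "ts_epoch_secs": 1741250000,
        "event_type": "user_created",
        "context": "name=hello"
      }
    ]
  },
  "revision": "1"
}
```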
docs/FAQ.en.md (new file, 112 lines)
## How to set up "proxy sponsor" channel and statistics via @MTProxybot bot

1. Go to the @MTProxybot bot.
2. Enter the command `/newproxy`
3. Send the server IP and port. For example: 1.2.3.4:443
4. Open the config `nano /etc/telemt.toml`.
5. Copy and send the user secret from the `[access.users]` section to the bot.
6. Copy the tag received from the bot. For example 1234567890abcdef1234567890abcdef.
> [!WARNING]
> The link provided by the bot will not work. Do not copy or use it!
7. Uncomment the ad_tag parameter and enter the tag received from the bot.
8. Uncomment/add the parameter `use_middle_proxy = true`.

Config example:
```toml
[general]
ad_tag = "1234567890abcdef1234567890abcdef"
use_middle_proxy = true
```
9. Save the config. Ctrl+S -> Ctrl+X.
10. Restart telemt `systemctl restart telemt`.
11. In the bot, send the command /myproxies and select the added server.
12. Click the "Set promotion" button.
13. Send a **public link** to the channel. Private channels cannot be added!
14. Wait approximately 1 hour for the information to update on Telegram servers.
> [!WARNING]
> You will not see the "proxy sponsor" if you are already subscribed to the channel.

**You can also set up different channels for different users.**
```toml
[access.user_ad_tags]
hello = "ad_tag"
hello2 = "ad_tag2"
```

## How many people can use 1 link

By default, 1 link can be used by any number of people.
You can limit the number of IPs using the proxy.
```toml
[access.user_max_unique_ips]
hello = 1
```
This parameter limits how many unique IPs can use 1 link simultaneously. If one user disconnects, a second user can connect. Also, multiple users can sit behind the same IP.

## How to create multiple different links

1. Generate the required number of secrets `openssl rand -hex 16`
2. Open the config `nano /etc/telemt.toml`
3. Add new users.
```toml
[access.users]
user1 = "00000000000000000000000000000001"
user2 = "00000000000000000000000000000002"
user3 = "00000000000000000000000000000003"
```
4. Save the config. Ctrl+S -> Ctrl+X. You don't need to restart telemt.
5. Get the links via `journalctl -u telemt -n -g "links" --no-pager -o cat | tac`

## How to view metrics

1. Open the config `nano /etc/telemt.toml`
2. Add the following parameters
```toml
[server]
metrics_port = 9090
metrics_whitelist = ["127.0.0.1/32", "::1/128", "0.0.0.0/0"]
```
3. Save the config. Ctrl+S -> Ctrl+X.
4. Metrics are available at SERVER_IP:9090/metrics.
> [!WARNING]
> "0.0.0.0/0" in metrics_whitelist opens access from any IP. Replace with your own IP. For example "1.2.3.4"

## Additional parameters

### Domain in link instead of IP
To specify a domain in the links, add to the `[general.links]` section of the config file.
```toml
[general.links]
public_host = "proxy.example.com"
```

### Upstream Manager
To specify an upstream, add to the `[[upstreams]]` section of the config.toml file:
#### Binding to IP
```toml
[[upstreams]]
type = "direct"
weight = 1
enabled = true
interface = "192.168.1.100" # Change to your outgoing IP
```
#### SOCKS4/5 as Upstream
- Without authentication:
```toml
[[upstreams]]
type = "socks5"          # Specify SOCKS4 or SOCKS5
address = "1.2.3.4:1234" # SOCKS-server Address
weight = 1               # Set Weight for Scenarios
enabled = true
```

- With authentication:
```toml
[[upstreams]]
type = "socks5"          # Specify SOCKS4 or SOCKS5
address = "1.2.3.4:1234" # SOCKS-server Address
username = "user"        # Username for Auth on SOCKS-server
password = "pass"        # Password for Auth on SOCKS-server
weight = 1               # Set Weight for Scenarios
enabled = true
```
docs/FAQ.ru.md

````diff
@@ -1,4 +1,4 @@
-## Как настроить канал "спонсор прокси"
+## Как настроить канал "спонсор прокси" и статистику через бота @MTProxybot

 1. Зайти в бота @MTProxybot.
 2. Ввести команду `/newproxy`
@@ -26,6 +26,13 @@ use_middle_proxy = true
 > [!WARNING]
 > У вас не будет отображаться "спонсор прокси" если вы уже подписаны на канал.

+**Также вы можете настроить разные каналы для разных пользователей.**
+```toml
+[access.user_ad_tags]
+hello = "ad_tag"
+hello2 = "ad_tag2"
+```
+
 ## Сколько человек может пользоваться 1 ссылкой

 По умолчанию 1 ссылкой может пользоваться сколько угодно человек.
@@ -63,3 +70,43 @@ metrics_whitelist = ["127.0.0.1/32", "::1/128", "0.0.0.0/0"]
 4. Метрики доступны по адресу SERVER_IP:9090/metrics.
 > [!WARNING]
 > "0.0.0.0/0" в metrics_whitelist открывает доступ с любого IP. Замените на свой ip. Например "1.2.3.4"
+
+## Дополнительные параметры
+
+### Домен в ссылке вместо IP
+Чтобы указать домен в ссылках, добавьте в секцию `[general.links]` файла config.
+```toml
+[general.links]
+public_host = "proxy.example.com"
+```
+
+### Upstream Manager
+Чтобы указать апстрим, добавьте в секцию `[[upstreams]]` файла config.toml:
+#### Привязка к IP
+```toml
+[[upstreams]]
+type = "direct"
+weight = 1
+enabled = true
+interface = "192.168.1.100" # Change to your outgoing IP
+```
+#### SOCKS4/5 как Upstream
+- Без авторизации:
+```toml
+[[upstreams]]
+type = "socks5"          # Specify SOCKS4 or SOCKS5
+address = "1.2.3.4:1234" # SOCKS-server Address
+weight = 1               # Set Weight for Scenarios
+enabled = true
+```
+
+- С авторизацией:
+```toml
+[[upstreams]]
+type = "socks5"          # Specify SOCKS4 or SOCKS5
+address = "1.2.3.4:1234" # SOCKS-server Address
+username = "user"        # Username for Auth on SOCKS-server
+password = "pass"        # Password for Auth on SOCKS-server
+weight = 1               # Set Weight for Scenarios
+enabled = true
+```
````
docs/QUICK_START_GUIDE.en.md

````diff
@@ -67,6 +67,12 @@ classic = false
 secure = false
 tls = true

+[server.api]
+enabled = true
+# listen = "127.0.0.1:9091"
+# whitelist = ["127.0.0.1/32"]
+# read_only = true
+
 # === Anti-Censorship & Masking ===
 [censorship]
 tls_domain = "petrovich.ru"
@@ -75,6 +81,7 @@ tls_domain = "petrovich.ru"
 # format: "username" = "32_hex_chars_secret"
 hello = "00000000000000000000000000000000"
 ```

+then Ctrl+S -> Ctrl+X to save

 > [!WARNING]
@@ -115,7 +122,12 @@ then Ctrl+S -> Ctrl+X to save

 **5.** For automatic startup at system boot, enter `systemctl enable telemt`

-**6.** To get the links, enter `journalctl -u telemt -n -g "links" --no-pager -o cat | tac`
+**6.** To get the link(s), enter
+```bash
+curl -s http://127.0.0.1:9091/v1/users | jq
+```
+
+> Any number of people can use one link.

 ---
````
docs/QUICK_START_GUIDE.ru.md

````diff
@@ -67,6 +67,12 @@ classic = false
 secure = false
 tls = true

+[server.api]
+enabled = true
+# listen = "127.0.0.1:9091"
+# whitelist = ["127.0.0.1/32"]
+# read_only = true
+
 # === Anti-Censorship & Masking ===
 [censorship]
 tls_domain = "petrovich.ru"
@@ -75,6 +81,7 @@ tls_domain = "petrovich.ru"
 # format: "username" = "32_hex_chars_secret"
 hello = "00000000000000000000000000000000"
 ```

+Затем нажмите Ctrl+S -> Ctrl+X, чтобы сохранить

 > [!WARNING]
@@ -115,9 +122,14 @@ WantedBy=multi-user.target

 **5.** Для автоматического запуска при запуске системы введите `systemctl enable telemt`

-**6.** Для получения ссылки введите `journalctl -u telemt -n -g "links" --no-pager -o cat | tac`
+**6.** Для получения ссылки/ссылок введите
+```bash
+curl -s http://127.0.0.1:9091/v1/users | jq
+```
+> Одной ссылкой может пользоваться сколько угодно человек.

 > [!WARNING]
-> Рабочую ссылку может выдать только команда из 6 пункта. Не пытайтесь делать ее самостоятельно или копировать откуда-либо!
+> Рабочую ссылку может выдать только команда из 6 пункта. Не пытайтесь делать ее самостоятельно или копировать откуда-либо если вы не уверены в том, что делаете!

 ---
````
src/api/events.rs (new file, 90 lines)
```rust
use std::collections::VecDeque;
use std::sync::Mutex;
use std::time::{SystemTime, UNIX_EPOCH};

use serde::Serialize;

#[derive(Clone, Serialize)]
pub(super) struct ApiEventRecord {
    pub(super) seq: u64,
    pub(super) ts_epoch_secs: u64,
    pub(super) event_type: String,
    pub(super) context: String,
}

#[derive(Clone, Serialize)]
pub(super) struct ApiEventSnapshot {
    pub(super) capacity: usize,
    pub(super) dropped_total: u64,
    pub(super) events: Vec<ApiEventRecord>,
}

struct ApiEventsInner {
    capacity: usize,
    dropped_total: u64,
    next_seq: u64,
    events: VecDeque<ApiEventRecord>,
}

/// Bounded ring-buffer for control-plane API/runtime events.
pub(crate) struct ApiEventStore {
    inner: Mutex<ApiEventsInner>,
}

impl ApiEventStore {
    pub(super) fn new(capacity: usize) -> Self {
        let bounded = capacity.max(16);
        Self {
            inner: Mutex::new(ApiEventsInner {
                capacity: bounded,
                dropped_total: 0,
                next_seq: 1,
                events: VecDeque::with_capacity(bounded),
            }),
        }
    }

    pub(super) fn record(&self, event_type: &str, context: impl Into<String>) {
        let now_epoch_secs = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .unwrap_or_default()
            .as_secs();
        let mut context = context.into();
        if context.len() > 256 {
            context.truncate(256);
        }

        let mut guard = self.inner.lock().expect("api event store mutex poisoned");
        if guard.events.len() == guard.capacity {
            guard.events.pop_front();
            guard.dropped_total = guard.dropped_total.saturating_add(1);
        }
        let seq = guard.next_seq;
        guard.next_seq = guard.next_seq.saturating_add(1);
        guard.events.push_back(ApiEventRecord {
            seq,
            ts_epoch_secs: now_epoch_secs,
            event_type: event_type.to_string(),
            context,
        });
    }

    pub(super) fn snapshot(&self, limit: usize) -> ApiEventSnapshot {
        let guard = self.inner.lock().expect("api event store mutex poisoned");
        let bounded_limit = limit.clamp(1, guard.capacity.max(1));
        let mut items: Vec<ApiEventRecord> = guard
            .events
            .iter()
            .rev()
            .take(bounded_limit)
            .cloned()
            .collect();
        items.reverse();

        ApiEventSnapshot {
            capacity: guard.capacity,
            dropped_total: guard.dropped_total,
            events: items,
        }
    }
}
```
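The store's eviction rule (once the buffer is full, drop the oldest record and count it, with capacity clamped to at least 16) can be exercised in isolation. The `RingBuffer` below is a simplified hypothetical stand-in for the store's inner state, not the real `ApiEventStore`:

```rust
use std::collections::VecDeque;

// Simplified stand-in for ApiEventStore's inner ring buffer (hypothetical
// type; the real store wraps this state in a Mutex and keeps full records).
struct RingBuffer {
    capacity: usize,
    dropped_total: u64,
    items: VecDeque<u64>,
}

impl RingBuffer {
    fn new(capacity: usize) -> Self {
        // Same lower bound as ApiEventStore::new: capacity is clamped to >= 16.
        let capacity = capacity.max(16);
        Self {
            capacity,
            dropped_total: 0,
            items: VecDeque::with_capacity(capacity),
        }
    }

    fn push(&mut self, seq: u64) {
        if self.items.len() == self.capacity {
            // Evict the oldest entry and count the drop, as record() does.
            self.items.pop_front();
            self.dropped_total = self.dropped_total.saturating_add(1);
        }
        self.items.push_back(seq);
    }
}
```

Pushing 20 sequence numbers into a buffer created with capacity 1 (clamped up to 16) drops the 4 oldest entries and leaves 4..=19 buffered.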
src/api/http_utils.rs (new file, 91 lines)
```rust
use http_body_util::{BodyExt, Full};
use hyper::StatusCode;
use hyper::body::{Bytes, Incoming};
use serde::Serialize;
use serde::de::DeserializeOwned;

use super::model::{ApiFailure, ErrorBody, ErrorResponse, SuccessResponse};

pub(super) fn success_response<T: Serialize>(
    status: StatusCode,
    data: T,
    revision: String,
) -> hyper::Response<Full<Bytes>> {
    let payload = SuccessResponse {
        ok: true,
        data,
        revision,
    };
    let body = serde_json::to_vec(&payload).unwrap_or_else(|_| b"{\"ok\":false}".to_vec());
    hyper::Response::builder()
        .status(status)
        .header("content-type", "application/json; charset=utf-8")
        .body(Full::new(Bytes::from(body)))
        .unwrap()
}

pub(super) fn error_response(
    request_id: u64,
    failure: ApiFailure,
) -> hyper::Response<Full<Bytes>> {
    let payload = ErrorResponse {
        ok: false,
        error: ErrorBody {
            code: failure.code,
            message: failure.message,
        },
        request_id,
    };
    let body = serde_json::to_vec(&payload).unwrap_or_else(|_| {
        format!(
            "{{\"ok\":false,\"error\":{{\"code\":\"internal_error\",\"message\":\"serialization failed\"}},\"request_id\":{}}}",
            request_id
        )
        .into_bytes()
    });
    hyper::Response::builder()
        .status(failure.status)
        .header("content-type", "application/json; charset=utf-8")
        .body(Full::new(Bytes::from(body)))
        .unwrap()
}

pub(super) async fn read_json<T: DeserializeOwned>(
    body: Incoming,
    limit: usize,
) -> Result<T, ApiFailure> {
    let bytes = read_body_with_limit(body, limit).await?;
    serde_json::from_slice(&bytes).map_err(|_| ApiFailure::bad_request("Invalid JSON body"))
}

pub(super) async fn read_optional_json<T: DeserializeOwned>(
    body: Incoming,
    limit: usize,
) -> Result<Option<T>, ApiFailure> {
    let bytes = read_body_with_limit(body, limit).await?;
    if bytes.is_empty() {
        return Ok(None);
    }
    serde_json::from_slice(&bytes)
        .map(Some)
        .map_err(|_| ApiFailure::bad_request("Invalid JSON body"))
}

async fn read_body_with_limit(body: Incoming, limit: usize) -> Result<Vec<u8>, ApiFailure> {
    let mut collected = Vec::new();
    let mut body = body;
    while let Some(frame_result) = body.frame().await {
        let frame = frame_result.map_err(|_| ApiFailure::bad_request("Invalid request body"))?;
        if let Some(chunk) = frame.data_ref() {
            if collected.len().saturating_add(chunk.len()) > limit {
                return Err(ApiFailure::new(
                    StatusCode::PAYLOAD_TOO_LARGE,
                    "payload_too_large",
                    format!("Body exceeds {} bytes", limit),
                ));
            }
            collected.extend_from_slice(chunk);
        }
    }
    Ok(collected)
}
```
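The limit check in `read_body_with_limit` rejects a request as soon as the accumulated size would exceed the configured cap, using `saturating_add` so the length comparison cannot overflow. The same logic can be sketched over plain in-memory chunks (`collect_with_limit` is a hypothetical free function, not part of the crate):

```rust
// Sketch of the accumulate-then-check logic from read_body_with_limit,
// applied to byte slices instead of hyper body frames.
fn collect_with_limit(chunks: &[&[u8]], limit: usize) -> Result<Vec<u8>, String> {
    let mut collected = Vec::new();
    for chunk in chunks {
        // Reject before copying if this chunk would push us past the limit;
        // saturating_add keeps the comparison safe near usize::MAX.
        if collected.len().saturating_add(chunk.len()) > limit {
            return Err(format!("Body exceeds {} bytes", limit));
        }
        collected.extend_from_slice(chunk);
    }
    Ok(collected)
}
```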
src/api/mod.rs (281)
@@ -3,36 +3,50 @@ use std::net::{IpAddr, SocketAddr};
use std::path::PathBuf;
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};
use std::time::{SystemTime, UNIX_EPOCH};

use http_body_util::{BodyExt, Full};
use http_body_util::Full;
use hyper::body::{Bytes, Incoming};
use hyper::header::AUTHORIZATION;
use hyper::server::conn::http1;
use hyper::service::service_fn;
use hyper::{Method, Request, Response, StatusCode};
use serde::Serialize;
use serde::de::DeserializeOwned;
use tokio::net::TcpListener;
use tokio::sync::{Mutex, watch};
use tokio::sync::{Mutex, RwLock, watch};
use tracing::{debug, info, warn};

use crate::config::ProxyConfig;
use crate::ip_tracker::UserIpTracker;
use crate::startup::StartupTracker;
use crate::stats::Stats;
use crate::transport::middle_proxy::MePool;
use crate::transport::UpstreamManager;

mod config_store;
mod events;
mod http_utils;
mod model;
mod runtime_edge;
mod runtime_init;
mod runtime_min;
mod runtime_stats;
mod runtime_watch;
mod runtime_zero;
mod users;

use config_store::{current_revision, parse_if_match};
use http_utils::{error_response, read_json, read_optional_json, success_response};
use events::ApiEventStore;
use model::{
    ApiFailure, CreateUserRequest, ErrorBody, ErrorResponse, HealthData, PatchUserRequest,
    RotateSecretRequest, SuccessResponse, SummaryData,
    ApiFailure, CreateUserRequest, HealthData, PatchUserRequest, RotateSecretRequest, SummaryData,
};
use runtime_edge::{
    EdgeConnectionsCacheEntry, build_runtime_connections_summary_data,
    build_runtime_events_recent_data,
};
use runtime_init::build_runtime_initialization_data;
use runtime_min::{
    build_runtime_me_pool_state_data, build_runtime_me_quality_data, build_runtime_nat_stun_data,
    build_runtime_upstream_quality_data, build_security_whitelist_data,
};
use runtime_stats::{
    MinimalCacheEntry, build_dcs_data, build_me_writers_data, build_minimal_all_data,
@@ -42,6 +56,7 @@ use runtime_zero::{
    build_limits_effective_data, build_runtime_gates_data, build_security_posture_data,
    build_system_info_data,
};
use runtime_watch::spawn_runtime_watchers;
use users::{create_user, delete_user, patch_user, rotate_secret, users_from_config};

pub(super) struct ApiRuntimeState {
@@ -55,15 +70,19 @@ pub(super) struct ApiRuntimeState {
pub(super) struct ApiShared {
    pub(super) stats: Arc<Stats>,
    pub(super) ip_tracker: Arc<UserIpTracker>,
    pub(super) me_pool: Option<Arc<MePool>>,
    pub(super) me_pool: Arc<RwLock<Option<Arc<MePool>>>>,
    pub(super) upstream_manager: Arc<UpstreamManager>,
    pub(super) config_path: PathBuf,
    pub(super) startup_detected_ip_v4: Option<IpAddr>,
    pub(super) startup_detected_ip_v6: Option<IpAddr>,
    pub(super) mutation_lock: Arc<Mutex<()>>,
    pub(super) minimal_cache: Arc<Mutex<Option<MinimalCacheEntry>>>,
    pub(super) runtime_edge_connections_cache: Arc<Mutex<Option<EdgeConnectionsCacheEntry>>>,
    pub(super) runtime_edge_recompute_lock: Arc<Mutex<()>>,
    pub(super) runtime_events: Arc<ApiEventStore>,
    pub(super) request_id: Arc<AtomicU64>,
    pub(super) runtime_state: Arc<ApiRuntimeState>,
    pub(super) startup_tracker: Arc<StartupTracker>,
}

impl ApiShared {
@@ -76,7 +95,7 @@ pub async fn serve(
    listen: SocketAddr,
    stats: Arc<Stats>,
    ip_tracker: Arc<UserIpTracker>,
    me_pool: Option<Arc<MePool>>,
    me_pool: Arc<RwLock<Option<Arc<MePool>>>>,
    upstream_manager: Arc<UpstreamManager>,
    config_rx: watch::Receiver<Arc<ProxyConfig>>,
    admission_rx: watch::Receiver<bool>,
@@ -84,6 +103,7 @@ pub async fn serve(
    startup_detected_ip_v4: Option<IpAddr>,
    startup_detected_ip_v6: Option<IpAddr>,
    process_started_at_epoch_secs: u64,
    startup_tracker: Arc<StartupTracker>,
) {
    let listener = match TcpListener::bind(listen).await {
        Ok(listener) => listener,
@@ -116,40 +136,22 @@ pub async fn serve(
        startup_detected_ip_v6,
        mutation_lock: Arc::new(Mutex::new(())),
        minimal_cache: Arc::new(Mutex::new(None)),
        runtime_edge_connections_cache: Arc::new(Mutex::new(None)),
        runtime_edge_recompute_lock: Arc::new(Mutex::new(())),
        runtime_events: Arc::new(ApiEventStore::new(
            config_rx.borrow().server.api.runtime_edge_events_capacity,
        )),
        request_id: Arc::new(AtomicU64::new(1)),
        runtime_state: runtime_state.clone(),
        startup_tracker,
    });

    let mut config_rx_reload = config_rx.clone();
    let runtime_state_reload = runtime_state.clone();
    tokio::spawn(async move {
        loop {
            if config_rx_reload.changed().await.is_err() {
                break;
            }
            runtime_state_reload
                .config_reload_count
                .fetch_add(1, Ordering::Relaxed);
            runtime_state_reload
                .last_config_reload_epoch_secs
                .store(now_epoch_secs(), Ordering::Relaxed);
        }
    });

    let mut admission_rx_watch = admission_rx.clone();
    tokio::spawn(async move {
        runtime_state
            .admission_open
            .store(*admission_rx_watch.borrow(), Ordering::Relaxed);
        loop {
            if admission_rx_watch.changed().await.is_err() {
                break;
            }
            runtime_state
                .admission_open
                .store(*admission_rx_watch.borrow(), Ordering::Relaxed);
        }
    });
    spawn_runtime_watchers(
        config_rx.clone(),
        admission_rx.clone(),
        runtime_state.clone(),
        shared.runtime_events.clone(),
    );

    loop {
        let (stream, peer) = match listener.accept().await {
@@ -232,6 +234,7 @@ async fn handle(

    let method = req.method().clone();
    let path = req.uri().path().to_string();
    let query = req.uri().query().map(str::to_string);
    let body_limit = api_cfg.request_body_limit_bytes;

    let result: Result<Response<Full<Bytes>>, ApiFailure> = async {
@@ -251,7 +254,12 @@ async fn handle(
            }
            ("GET", "/v1/runtime/gates") => {
                let revision = current_revision(&shared.config_path).await?;
                let data = build_runtime_gates_data(shared.as_ref(), cfg.as_ref());
                let data = build_runtime_gates_data(shared.as_ref(), cfg.as_ref()).await;
                Ok(success_response(StatusCode::OK, data, revision))
            }
            ("GET", "/v1/runtime/initialization") => {
                let revision = current_revision(&shared.config_path).await?;
                let data = build_runtime_initialization_data(shared.as_ref()).await;
                Ok(success_response(StatusCode::OK, data, revision))
            }
            ("GET", "/v1/limits/effective") => {
@@ -264,6 +272,11 @@ async fn handle(
                let data = build_security_posture_data(cfg.as_ref());
                Ok(success_response(StatusCode::OK, data, revision))
            }
            ("GET", "/v1/security/whitelist") => {
                let revision = current_revision(&shared.config_path).await?;
                let data = build_security_whitelist_data(cfg.as_ref());
                Ok(success_response(StatusCode::OK, data, revision))
            }
            ("GET", "/v1/stats/summary") => {
                let revision = current_revision(&shared.config_path).await?;
                let data = SummaryData {
@@ -300,6 +313,40 @@ async fn handle(
                let data = build_dcs_data(shared.as_ref(), api_cfg).await;
                Ok(success_response(StatusCode::OK, data, revision))
            }
            ("GET", "/v1/runtime/me_pool_state") => {
                let revision = current_revision(&shared.config_path).await?;
                let data = build_runtime_me_pool_state_data(shared.as_ref()).await;
                Ok(success_response(StatusCode::OK, data, revision))
            }
            ("GET", "/v1/runtime/me_quality") => {
                let revision = current_revision(&shared.config_path).await?;
                let data = build_runtime_me_quality_data(shared.as_ref()).await;
                Ok(success_response(StatusCode::OK, data, revision))
            }
            ("GET", "/v1/runtime/upstream_quality") => {
                let revision = current_revision(&shared.config_path).await?;
                let data = build_runtime_upstream_quality_data(shared.as_ref()).await;
                Ok(success_response(StatusCode::OK, data, revision))
            }
            ("GET", "/v1/runtime/nat_stun") => {
                let revision = current_revision(&shared.config_path).await?;
                let data = build_runtime_nat_stun_data(shared.as_ref()).await;
                Ok(success_response(StatusCode::OK, data, revision))
            }
            ("GET", "/v1/runtime/connections/summary") => {
                let revision = current_revision(&shared.config_path).await?;
                let data = build_runtime_connections_summary_data(shared.as_ref(), cfg.as_ref()).await;
                Ok(success_response(StatusCode::OK, data, revision))
            }
            ("GET", "/v1/runtime/events/recent") => {
                let revision = current_revision(&shared.config_path).await?;
                let data = build_runtime_events_recent_data(
                    shared.as_ref(),
                    cfg.as_ref(),
                    query.as_deref(),
                );
                Ok(success_response(StatusCode::OK, data, revision))
            }
            ("GET", "/v1/stats/users") | ("GET", "/v1/users") => {
                let revision = current_revision(&shared.config_path).await?;
                let users = users_from_config(
@@ -325,7 +372,17 @@ async fn handle(
            }
            let expected_revision = parse_if_match(req.headers());
            let body = read_json::<CreateUserRequest>(req.into_body(), body_limit).await?;
            let (data, revision) = create_user(body, expected_revision, &shared).await?;
            let result = create_user(body, expected_revision, &shared).await;
            let (data, revision) = match result {
                Ok(ok) => ok,
                Err(error) => {
                    shared.runtime_events.record("api.user.create.failed", error.code);
                    return Err(error);
                }
            };
            shared
                .runtime_events
                .record("api.user.create.ok", format!("username={}", data.user.username));
            Ok(success_response(StatusCode::CREATED, data, revision))
        }
        _ => {
@@ -365,8 +422,20 @@ async fn handle(
        }
        let expected_revision = parse_if_match(req.headers());
        let body = read_json::<PatchUserRequest>(req.into_body(), body_limit).await?;
        let (data, revision) =
            patch_user(user, body, expected_revision, &shared).await?;
        let result = patch_user(user, body, expected_revision, &shared).await;
        let (data, revision) = match result {
            Ok(ok) => ok,
            Err(error) => {
                shared.runtime_events.record(
                    "api.user.patch.failed",
                    format!("username={} code={}", user, error.code),
                );
                return Err(error);
            }
        };
        shared
            .runtime_events
            .record("api.user.patch.ok", format!("username={}", data.username));
        return Ok(success_response(StatusCode::OK, data, revision));
    }
    if method == Method::DELETE {
@@ -381,8 +450,21 @@ async fn handle(
        ));
    }
    let expected_revision = parse_if_match(req.headers());
    let (deleted_user, revision) =
        delete_user(user, expected_revision, &shared).await?;
    let result = delete_user(user, expected_revision, &shared).await;
    let (deleted_user, revision) = match result {
        Ok(ok) => ok,
        Err(error) => {
            shared.runtime_events.record(
                "api.user.delete.failed",
                format!("username={} code={}", user, error.code),
            );
            return Err(error);
        }
    };
    shared.runtime_events.record(
        "api.user.delete.ok",
        format!("username={}", deleted_user),
    );
    return Ok(success_response(StatusCode::OK, deleted_user, revision));
}
if method == Method::POST
@@ -404,9 +486,27 @@ async fn handle(
        let body =
            read_optional_json::<RotateSecretRequest>(req.into_body(), body_limit)
                .await?;
        let (data, revision) =
            rotate_secret(base_user, body.unwrap_or_default(), expected_revision, &shared)
                .await?;
        let result = rotate_secret(
            base_user,
            body.unwrap_or_default(),
            expected_revision,
            &shared,
        )
        .await;
        let (data, revision) = match result {
            Ok(ok) => ok,
            Err(error) => {
                shared.runtime_events.record(
                    "api.user.rotate_secret.failed",
                    format!("username={} code={}", base_user, error.code),
                );
                return Err(error);
            }
        };
        shared.runtime_events.record(
            "api.user.rotate_secret.ok",
            format!("username={}", base_user),
        );
        return Ok(success_response(StatusCode::OK, data, revision));
    }
    if method == Method::POST {
@@ -438,88 +538,3 @@ async fn handle(
        Err(error) => Ok(error_response(request_id, error)),
    }
}

fn success_response<T: Serialize>(
    status: StatusCode,
    data: T,
    revision: String,
) -> Response<Full<Bytes>> {
    let payload = SuccessResponse {
        ok: true,
        data,
        revision,
    };
    let body = serde_json::to_vec(&payload).unwrap_or_else(|_| b"{\"ok\":false}".to_vec());
    Response::builder()
        .status(status)
        .header("content-type", "application/json; charset=utf-8")
        .body(Full::new(Bytes::from(body)))
        .unwrap()
}

fn error_response(request_id: u64, failure: ApiFailure) -> Response<Full<Bytes>> {
    let payload = ErrorResponse {
        ok: false,
        error: ErrorBody {
            code: failure.code,
            message: failure.message,
        },
        request_id,
    };
    let body = serde_json::to_vec(&payload).unwrap_or_else(|_| {
        format!(
            "{{\"ok\":false,\"error\":{{\"code\":\"internal_error\",\"message\":\"serialization failed\"}},\"request_id\":{}}}",
            request_id
        )
        .into_bytes()
    });
    Response::builder()
        .status(failure.status)
        .header("content-type", "application/json; charset=utf-8")
        .body(Full::new(Bytes::from(body)))
        .unwrap()
}

async fn read_json<T: DeserializeOwned>(body: Incoming, limit: usize) -> Result<T, ApiFailure> {
    let bytes = read_body_with_limit(body, limit).await?;
    serde_json::from_slice(&bytes).map_err(|_| ApiFailure::bad_request("Invalid JSON body"))
}

async fn read_optional_json<T: DeserializeOwned>(
    body: Incoming,
    limit: usize,
) -> Result<Option<T>, ApiFailure> {
    let bytes = read_body_with_limit(body, limit).await?;
    if bytes.is_empty() {
        return Ok(None);
    }
    serde_json::from_slice(&bytes)
        .map(Some)
        .map_err(|_| ApiFailure::bad_request("Invalid JSON body"))
}

async fn read_body_with_limit(body: Incoming, limit: usize) -> Result<Vec<u8>, ApiFailure> {
    let mut collected = Vec::new();
    let mut body = body;
    while let Some(frame_result) = body.frame().await {
        let frame = frame_result.map_err(|_| ApiFailure::bad_request("Invalid request body"))?;
        if let Some(chunk) = frame.data_ref() {
            if collected.len().saturating_add(chunk.len()) > limit {
                return Err(ApiFailure::new(
                    StatusCode::PAYLOAD_TOO_LARGE,
                    "payload_too_large",
                    format!("Body exceeds {} bytes", limit),
                ));
            }
            collected.extend_from_slice(chunk);
        }
    }
    Ok(collected)
}

fn now_epoch_secs() -> u64 {
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap_or_default()
        .as_secs()
}

@@ -269,6 +269,10 @@ pub(super) struct DcStatus {
    pub(super) available_endpoints: usize,
    pub(super) available_pct: f64,
    pub(super) required_writers: usize,
    pub(super) floor_min: usize,
    pub(super) floor_target: usize,
    pub(super) floor_max: usize,
    pub(super) floor_capped: bool,
    pub(super) alive_writers: usize,
    pub(super) coverage_pct: f64,
    pub(super) rtt_ms: Option<f64>,
@@ -308,7 +312,27 @@ pub(super) struct MinimalMeRuntimeData {
    pub(super) floor_mode: &'static str,
    pub(super) adaptive_floor_idle_secs: u64,
    pub(super) adaptive_floor_min_writers_single_endpoint: u8,
    pub(super) adaptive_floor_min_writers_multi_endpoint: u8,
    pub(super) adaptive_floor_recover_grace_secs: u64,
    pub(super) adaptive_floor_writers_per_core_total: u16,
    pub(super) adaptive_floor_cpu_cores_override: u16,
    pub(super) adaptive_floor_max_extra_writers_single_per_core: u16,
    pub(super) adaptive_floor_max_extra_writers_multi_per_core: u16,
    pub(super) adaptive_floor_max_active_writers_per_core: u16,
    pub(super) adaptive_floor_max_warm_writers_per_core: u16,
    pub(super) adaptive_floor_max_active_writers_global: u32,
    pub(super) adaptive_floor_max_warm_writers_global: u32,
    pub(super) adaptive_floor_cpu_cores_detected: u32,
    pub(super) adaptive_floor_cpu_cores_effective: u32,
    pub(super) adaptive_floor_global_cap_raw: u64,
    pub(super) adaptive_floor_global_cap_effective: u64,
    pub(super) adaptive_floor_target_writers_total: u64,
    pub(super) adaptive_floor_active_cap_configured: u64,
    pub(super) adaptive_floor_active_cap_effective: u64,
    pub(super) adaptive_floor_warm_cap_configured: u64,
    pub(super) adaptive_floor_warm_cap_effective: u64,
    pub(super) adaptive_floor_active_writers_current: u64,
    pub(super) adaptive_floor_warm_writers_current: u64,
    pub(super) me_keepalive_enabled: bool,
    pub(super) me_keepalive_interval_secs: u64,
    pub(super) me_keepalive_jitter_secs: u64,
294
src/api/runtime_edge.rs
Normal file
@@ -0,0 +1,294 @@
use std::cmp::Reverse;
use std::time::{Duration, Instant, SystemTime, UNIX_EPOCH};

use serde::Serialize;

use crate::config::ProxyConfig;

use super::ApiShared;
use super::events::ApiEventRecord;

const FEATURE_DISABLED_REASON: &str = "feature_disabled";
const SOURCE_UNAVAILABLE_REASON: &str = "source_unavailable";
const EVENTS_DEFAULT_LIMIT: usize = 50;
const EVENTS_MAX_LIMIT: usize = 1000;

#[derive(Clone, Serialize)]
pub(super) struct RuntimeEdgeConnectionUserData {
    pub(super) username: String,
    pub(super) current_connections: u64,
    pub(super) total_octets: u64,
}

#[derive(Clone, Serialize)]
pub(super) struct RuntimeEdgeConnectionTotalsData {
    pub(super) current_connections: u64,
    pub(super) current_connections_me: u64,
    pub(super) current_connections_direct: u64,
    pub(super) active_users: usize,
}

#[derive(Clone, Serialize)]
pub(super) struct RuntimeEdgeConnectionTopData {
    pub(super) limit: usize,
    pub(super) by_connections: Vec<RuntimeEdgeConnectionUserData>,
    pub(super) by_throughput: Vec<RuntimeEdgeConnectionUserData>,
}

#[derive(Clone, Serialize)]
pub(super) struct RuntimeEdgeConnectionCacheData {
    pub(super) ttl_ms: u64,
    pub(super) served_from_cache: bool,
    pub(super) stale_cache_used: bool,
}

#[derive(Clone, Serialize)]
pub(super) struct RuntimeEdgeConnectionTelemetryData {
    pub(super) user_enabled: bool,
    pub(super) throughput_is_cumulative: bool,
}

#[derive(Clone, Serialize)]
pub(super) struct RuntimeEdgeConnectionsSummaryPayload {
    pub(super) cache: RuntimeEdgeConnectionCacheData,
    pub(super) totals: RuntimeEdgeConnectionTotalsData,
    pub(super) top: RuntimeEdgeConnectionTopData,
    pub(super) telemetry: RuntimeEdgeConnectionTelemetryData,
}

#[derive(Serialize)]
pub(super) struct RuntimeEdgeConnectionsSummaryData {
    pub(super) enabled: bool,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub(super) reason: Option<&'static str>,
    pub(super) generated_at_epoch_secs: u64,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub(super) data: Option<RuntimeEdgeConnectionsSummaryPayload>,
}

#[derive(Clone)]
pub(crate) struct EdgeConnectionsCacheEntry {
    pub(super) expires_at: Instant,
    pub(super) payload: RuntimeEdgeConnectionsSummaryPayload,
    pub(super) generated_at_epoch_secs: u64,
}

#[derive(Serialize)]
pub(super) struct RuntimeEdgeEventsPayload {
    pub(super) capacity: usize,
    pub(super) dropped_total: u64,
    pub(super) events: Vec<ApiEventRecord>,
}

#[derive(Serialize)]
pub(super) struct RuntimeEdgeEventsData {
    pub(super) enabled: bool,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub(super) reason: Option<&'static str>,
    pub(super) generated_at_epoch_secs: u64,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub(super) data: Option<RuntimeEdgeEventsPayload>,
}

pub(super) async fn build_runtime_connections_summary_data(
    shared: &ApiShared,
    cfg: &ProxyConfig,
) -> RuntimeEdgeConnectionsSummaryData {
    let now_epoch_secs = now_epoch_secs();
    let api_cfg = &cfg.server.api;
    if !api_cfg.runtime_edge_enabled {
        return RuntimeEdgeConnectionsSummaryData {
            enabled: false,
            reason: Some(FEATURE_DISABLED_REASON),
            generated_at_epoch_secs: now_epoch_secs,
            data: None,
        };
    }

    let (generated_at_epoch_secs, payload) = match get_connections_payload_cached(
        shared,
        api_cfg.runtime_edge_cache_ttl_ms,
        api_cfg.runtime_edge_top_n,
    )
    .await
    {
        Some(v) => v,
        None => {
            return RuntimeEdgeConnectionsSummaryData {
                enabled: true,
                reason: Some(SOURCE_UNAVAILABLE_REASON),
                generated_at_epoch_secs: now_epoch_secs,
                data: None,
            };
        }
    };

    RuntimeEdgeConnectionsSummaryData {
        enabled: true,
        reason: None,
        generated_at_epoch_secs,
        data: Some(payload),
    }
}

pub(super) fn build_runtime_events_recent_data(
    shared: &ApiShared,
    cfg: &ProxyConfig,
    query: Option<&str>,
) -> RuntimeEdgeEventsData {
    let now_epoch_secs = now_epoch_secs();
    let api_cfg = &cfg.server.api;
    if !api_cfg.runtime_edge_enabled {
        return RuntimeEdgeEventsData {
            enabled: false,
            reason: Some(FEATURE_DISABLED_REASON),
            generated_at_epoch_secs: now_epoch_secs,
            data: None,
        };
    }

    let limit = parse_recent_events_limit(query, EVENTS_DEFAULT_LIMIT, EVENTS_MAX_LIMIT);
    let snapshot = shared.runtime_events.snapshot(limit);

    RuntimeEdgeEventsData {
        enabled: true,
        reason: None,
        generated_at_epoch_secs: now_epoch_secs,
        data: Some(RuntimeEdgeEventsPayload {
            capacity: snapshot.capacity,
            dropped_total: snapshot.dropped_total,
            events: snapshot.events,
        }),
    }
}

async fn get_connections_payload_cached(
    shared: &ApiShared,
    cache_ttl_ms: u64,
    top_n: usize,
) -> Option<(u64, RuntimeEdgeConnectionsSummaryPayload)> {
    if cache_ttl_ms > 0 {
        let now = Instant::now();
        let cached = shared.runtime_edge_connections_cache.lock().await.clone();
        if let Some(entry) = cached
            && now < entry.expires_at
        {
            let mut payload = entry.payload;
            payload.cache.served_from_cache = true;
            payload.cache.stale_cache_used = false;
            return Some((entry.generated_at_epoch_secs, payload));
        }
    }

    let Ok(_guard) = shared.runtime_edge_recompute_lock.try_lock() else {
        let cached = shared.runtime_edge_connections_cache.lock().await.clone();
        if let Some(entry) = cached {
            let mut payload = entry.payload;
            payload.cache.served_from_cache = true;
            payload.cache.stale_cache_used = true;
            return Some((entry.generated_at_epoch_secs, payload));
        }
        return None;
    };

    let generated_at_epoch_secs = now_epoch_secs();
    let payload = recompute_connections_payload(shared, cache_ttl_ms, top_n).await;

    if cache_ttl_ms > 0 {
        let entry = EdgeConnectionsCacheEntry {
            expires_at: Instant::now() + Duration::from_millis(cache_ttl_ms),
            payload: payload.clone(),
            generated_at_epoch_secs,
        };
        *shared.runtime_edge_connections_cache.lock().await = Some(entry);
    }

    Some((generated_at_epoch_secs, payload))
}

async fn recompute_connections_payload(
    shared: &ApiShared,
    cache_ttl_ms: u64,
    top_n: usize,
) -> RuntimeEdgeConnectionsSummaryPayload {
    let mut rows = Vec::<RuntimeEdgeConnectionUserData>::new();
    let mut active_users = 0usize;
    for entry in shared.stats.iter_user_stats() {
        let user_stats = entry.value();
        let current_connections = user_stats
            .curr_connects
            .load(std::sync::atomic::Ordering::Relaxed);
        let total_octets = user_stats
            .octets_from_client
            .load(std::sync::atomic::Ordering::Relaxed)
            .saturating_add(
                user_stats
                    .octets_to_client
                    .load(std::sync::atomic::Ordering::Relaxed),
            );
        if current_connections > 0 {
            active_users = active_users.saturating_add(1);
        }
        rows.push(RuntimeEdgeConnectionUserData {
            username: entry.key().clone(),
            current_connections,
            total_octets,
        });
    }

    let limit = top_n.max(1);
    let mut by_connections = rows.clone();
    by_connections.sort_by_key(|row| (Reverse(row.current_connections), row.username.clone()));
    by_connections.truncate(limit);

    let mut by_throughput = rows;
    by_throughput.sort_by_key(|row| (Reverse(row.total_octets), row.username.clone()));
    by_throughput.truncate(limit);

    let telemetry = shared.stats.telemetry_policy();
    RuntimeEdgeConnectionsSummaryPayload {
        cache: RuntimeEdgeConnectionCacheData {
            ttl_ms: cache_ttl_ms,
            served_from_cache: false,
            stale_cache_used: false,
        },
        totals: RuntimeEdgeConnectionTotalsData {
            current_connections: shared.stats.get_current_connections_total(),
            current_connections_me: shared.stats.get_current_connections_me(),
            current_connections_direct: shared.stats.get_current_connections_direct(),
            active_users,
        },
        top: RuntimeEdgeConnectionTopData {
            limit,
            by_connections,
            by_throughput,
        },
        telemetry: RuntimeEdgeConnectionTelemetryData {
            user_enabled: telemetry.user_enabled,
            throughput_is_cumulative: true,
        },
    }
}

fn parse_recent_events_limit(query: Option<&str>, default_limit: usize, max_limit: usize) -> usize {
    let Some(query) = query else {
        return default_limit;
    };
    for pair in query.split('&') {
        let mut split = pair.splitn(2, '=');
        if split.next() == Some("limit")
            && let Some(raw) = split.next()
            && let Ok(parsed) = raw.parse::<usize>()
        {
            return parsed.clamp(1, max_limit);
        }
    }
    default_limit
}

fn now_epoch_secs() -> u64 {
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap_or_default()
        .as_secs()
}
186
src/api/runtime_init.rs
Normal file
@@ -0,0 +1,186 @@
use serde::Serialize;

use crate::startup::{
    COMPONENT_ME_CONNECTIVITY_PING, COMPONENT_ME_POOL_CONSTRUCT, COMPONENT_ME_POOL_INIT_STAGE1,
    COMPONENT_ME_PROXY_CONFIG_V4, COMPONENT_ME_PROXY_CONFIG_V6, COMPONENT_ME_SECRET_FETCH,
    StartupComponentStatus, StartupMeStatus, compute_progress_pct,
};

use super::ApiShared;

#[derive(Serialize)]
pub(super) struct RuntimeInitializationComponentData {
    pub(super) id: &'static str,
    pub(super) title: &'static str,
    pub(super) status: &'static str,
    pub(super) started_at_epoch_ms: Option<u64>,
    pub(super) finished_at_epoch_ms: Option<u64>,
    pub(super) duration_ms: Option<u64>,
    pub(super) attempts: u32,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub(super) details: Option<String>,
}

#[derive(Serialize)]
pub(super) struct RuntimeInitializationMeData {
    pub(super) status: &'static str,
    pub(super) current_stage: String,
    pub(super) progress_pct: f64,
    pub(super) init_attempt: u32,
    pub(super) retry_limit: String,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub(super) last_error: Option<String>,
}

#[derive(Serialize)]
pub(super) struct RuntimeInitializationData {
    pub(super) status: &'static str,
    pub(super) degraded: bool,
    pub(super) current_stage: String,
    pub(super) progress_pct: f64,
    pub(super) started_at_epoch_secs: u64,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub(super) ready_at_epoch_secs: Option<u64>,
    pub(super) total_elapsed_ms: u64,
    pub(super) transport_mode: String,
    pub(super) me: RuntimeInitializationMeData,
    pub(super) components: Vec<RuntimeInitializationComponentData>,
}

#[derive(Clone)]
pub(super) struct RuntimeStartupSummaryData {
    pub(super) status: &'static str,
    pub(super) stage: String,
    pub(super) progress_pct: f64,
}

pub(super) async fn build_runtime_startup_summary(shared: &ApiShared) -> RuntimeStartupSummaryData {
    let snapshot = shared.startup_tracker.snapshot().await;
    let me_pool_progress = current_me_pool_stage_progress(shared).await;
    let progress_pct = compute_progress_pct(&snapshot, me_pool_progress);
    RuntimeStartupSummaryData {
        status: snapshot.status.as_str(),
        stage: snapshot.current_stage,
        progress_pct,
    }
}

pub(super) async fn build_runtime_initialization_data(
    shared: &ApiShared,
) -> RuntimeInitializationData {
    let snapshot = shared.startup_tracker.snapshot().await;
    let me_pool_progress = current_me_pool_stage_progress(shared).await;
    let progress_pct = compute_progress_pct(&snapshot, me_pool_progress);
    let me_progress_pct = compute_me_progress_pct(&snapshot, me_pool_progress);

    RuntimeInitializationData {
        status: snapshot.status.as_str(),
        degraded: snapshot.degraded,
        current_stage: snapshot.current_stage,
        progress_pct,
        started_at_epoch_secs: snapshot.started_at_epoch_secs,
        ready_at_epoch_secs: snapshot.ready_at_epoch_secs,
        total_elapsed_ms: snapshot.total_elapsed_ms,
        transport_mode: snapshot.transport_mode,
        me: RuntimeInitializationMeData {
            status: snapshot.me.status.as_str(),
            current_stage: snapshot.me.current_stage,
            progress_pct: me_progress_pct,
            init_attempt: snapshot.me.init_attempt,
            retry_limit: snapshot.me.retry_limit,
            last_error: snapshot.me.last_error,
        },
        components: snapshot
            .components
            .into_iter()
            .map(|component| RuntimeInitializationComponentData {
                id: component.id,
                title: component.title,
                status: component.status.as_str(),
                started_at_epoch_ms: component.started_at_epoch_ms,
                finished_at_epoch_ms: component.finished_at_epoch_ms,
                duration_ms: component.duration_ms,
                attempts: component.attempts,
                details: component.details,
            })
            .collect(),
    }
}

fn compute_me_progress_pct(
    snapshot: &crate::startup::StartupSnapshot,
    me_pool_progress: Option<f64>,
) -> f64 {
    match snapshot.me.status {
        StartupMeStatus::Pending => 0.0,
        StartupMeStatus::Ready | StartupMeStatus::Failed | StartupMeStatus::Skipped => 100.0,
        StartupMeStatus::Initializing => {
            let mut total_weight = 0.0f64;
            let mut completed_weight = 0.0f64;
            for component in &snapshot.components {
                if !is_me_component(component.id) {
                    continue;
                }
                total_weight += component.weight;
                let unit_progress = match component.status {
                    StartupComponentStatus::Pending => 0.0,
                    StartupComponentStatus::Running => {
                        if component.id == COMPONENT_ME_POOL_INIT_STAGE1 {
                            me_pool_progress.unwrap_or(0.0).clamp(0.0, 1.0)
                        } else {
                            0.0
                        }
                    }
                    StartupComponentStatus::Ready
                    | StartupComponentStatus::Failed
                    | StartupComponentStatus::Skipped => 1.0,
                };
                completed_weight += component.weight * unit_progress;
            }
            if total_weight <= f64::EPSILON {
                0.0
            } else {
                ((completed_weight / total_weight) * 100.0).clamp(0.0, 100.0)
            }
        }
    }
}

fn is_me_component(component_id: &str) -> bool {
    matches!(
        component_id,
        COMPONENT_ME_SECRET_FETCH
            | COMPONENT_ME_PROXY_CONFIG_V4
            | COMPONENT_ME_PROXY_CONFIG_V6
            | COMPONENT_ME_POOL_CONSTRUCT
            | COMPONENT_ME_POOL_INIT_STAGE1
            | COMPONENT_ME_CONNECTIVITY_PING
    )
}

async fn current_me_pool_stage_progress(shared: &ApiShared) -> Option<f64> {
    let snapshot = shared.startup_tracker.snapshot().await;
    if snapshot.me.status != StartupMeStatus::Initializing {
        return None;
    }

    let pool = shared.me_pool.read().await.clone()?;
|
||||
let status = pool.api_status_snapshot().await;
|
||||
let configured_dc_groups = status.configured_dc_groups;
|
||||
let covered_dc_groups = status
|
||||
.dcs
|
||||
.iter()
|
||||
.filter(|dc| dc.alive_writers > 0)
|
||||
.count();
|
||||
|
||||
let dc_coverage = ratio_01(covered_dc_groups, configured_dc_groups);
|
||||
let writer_coverage = ratio_01(status.alive_writers, status.required_writers);
|
||||
Some((0.7 * dc_coverage + 0.3 * writer_coverage).clamp(0.0, 1.0))
|
||||
}
|
||||
|
||||
fn ratio_01(part: usize, total: usize) -> f64 {
|
||||
if total == 0 {
|
||||
return 0.0;
|
||||
}
|
||||
((part as f64) / (total as f64)).clamp(0.0, 1.0)
|
||||
}
|
||||
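The weight-based progress math used by compute_me_progress_pct above can be exercised in isolation. The sketch below is hypothetical (it is not part of the repo): each component contributes `weight * unit_progress`, and the sum is normalized into the 0..=100 range, exactly as the function does for ME components.

```rust
// Standalone sketch of the weighted-progress normalization:
// components are (weight, unit_progress) pairs, unit_progress in 0.0..=1.0.
fn weighted_progress_pct(components: &[(f64, f64)]) -> f64 {
    let total_weight: f64 = components.iter().map(|(w, _)| w).sum();
    if total_weight <= f64::EPSILON {
        // No weighted components: report 0%, mirroring the source's guard.
        return 0.0;
    }
    let completed_weight: f64 = components
        .iter()
        .map(|(w, p)| w * p.clamp(0.0, 1.0))
        .sum();
    ((completed_weight / total_weight) * 100.0).clamp(0.0, 100.0)
}

fn main() {
    // One finished component and one halfway, equal weights -> 75%.
    assert_eq!(weighted_progress_pct(&[(1.0, 1.0), (1.0, 0.5)]), 75.0);
    // No components at all -> 0%.
    assert_eq!(weighted_progress_pct(&[]), 0.0);
    println!("ok");
}
```

The `f64::EPSILON` guard avoids dividing by a zero (or effectively zero) total weight, which is why an empty component set reports 0 rather than NaN.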
src/api/runtime_min.rs — Normal file, 534 lines
@@ -0,0 +1,534 @@
use std::collections::BTreeSet;
use std::time::{SystemTime, UNIX_EPOCH};

use serde::Serialize;

use crate::config::ProxyConfig;

use super::ApiShared;

const SOURCE_UNAVAILABLE_REASON: &str = "source_unavailable";

#[derive(Serialize)]
pub(super) struct SecurityWhitelistData {
    pub(super) generated_at_epoch_secs: u64,
    pub(super) enabled: bool,
    pub(super) entries_total: usize,
    pub(super) entries: Vec<String>,
}

#[derive(Serialize)]
pub(super) struct RuntimeMePoolStateGenerationData {
    pub(super) active_generation: u64,
    pub(super) warm_generation: u64,
    pub(super) pending_hardswap_generation: u64,
    pub(super) pending_hardswap_age_secs: Option<u64>,
    pub(super) draining_generations: Vec<u64>,
}

#[derive(Serialize)]
pub(super) struct RuntimeMePoolStateHardswapData {
    pub(super) enabled: bool,
    pub(super) pending: bool,
}

#[derive(Serialize)]
pub(super) struct RuntimeMePoolStateWriterContourData {
    pub(super) warm: usize,
    pub(super) active: usize,
    pub(super) draining: usize,
}

#[derive(Serialize)]
pub(super) struct RuntimeMePoolStateWriterHealthData {
    pub(super) healthy: usize,
    pub(super) degraded: usize,
    pub(super) draining: usize,
}

#[derive(Serialize)]
pub(super) struct RuntimeMePoolStateWriterData {
    pub(super) total: usize,
    pub(super) alive_non_draining: usize,
    pub(super) draining: usize,
    pub(super) degraded: usize,
    pub(super) contour: RuntimeMePoolStateWriterContourData,
    pub(super) health: RuntimeMePoolStateWriterHealthData,
}

#[derive(Serialize)]
pub(super) struct RuntimeMePoolStateRefillDcData {
    pub(super) dc: i16,
    pub(super) family: &'static str,
    pub(super) inflight: usize,
}

#[derive(Serialize)]
pub(super) struct RuntimeMePoolStateRefillData {
    pub(super) inflight_endpoints_total: usize,
    pub(super) inflight_dc_total: usize,
    pub(super) by_dc: Vec<RuntimeMePoolStateRefillDcData>,
}

#[derive(Serialize)]
pub(super) struct RuntimeMePoolStatePayload {
    pub(super) generations: RuntimeMePoolStateGenerationData,
    pub(super) hardswap: RuntimeMePoolStateHardswapData,
    pub(super) writers: RuntimeMePoolStateWriterData,
    pub(super) refill: RuntimeMePoolStateRefillData,
}

#[derive(Serialize)]
pub(super) struct RuntimeMePoolStateData {
    pub(super) enabled: bool,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub(super) reason: Option<&'static str>,
    pub(super) generated_at_epoch_secs: u64,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub(super) data: Option<RuntimeMePoolStatePayload>,
}

#[derive(Serialize)]
pub(super) struct RuntimeMeQualityCountersData {
    pub(super) idle_close_by_peer_total: u64,
    pub(super) reader_eof_total: u64,
    pub(super) kdf_drift_total: u64,
    pub(super) kdf_port_only_drift_total: u64,
    pub(super) reconnect_attempt_total: u64,
    pub(super) reconnect_success_total: u64,
}

#[derive(Serialize)]
pub(super) struct RuntimeMeQualityRouteDropData {
    pub(super) no_conn_total: u64,
    pub(super) channel_closed_total: u64,
    pub(super) queue_full_total: u64,
    pub(super) queue_full_base_total: u64,
    pub(super) queue_full_high_total: u64,
}

#[derive(Serialize)]
pub(super) struct RuntimeMeQualityDcRttData {
    pub(super) dc: i16,
    pub(super) rtt_ema_ms: Option<f64>,
    pub(super) alive_writers: usize,
    pub(super) required_writers: usize,
    pub(super) coverage_pct: f64,
}

#[derive(Serialize)]
pub(super) struct RuntimeMeQualityPayload {
    pub(super) counters: RuntimeMeQualityCountersData,
    pub(super) route_drops: RuntimeMeQualityRouteDropData,
    pub(super) dc_rtt: Vec<RuntimeMeQualityDcRttData>,
}

#[derive(Serialize)]
pub(super) struct RuntimeMeQualityData {
    pub(super) enabled: bool,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub(super) reason: Option<&'static str>,
    pub(super) generated_at_epoch_secs: u64,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub(super) data: Option<RuntimeMeQualityPayload>,
}

#[derive(Serialize)]
pub(super) struct RuntimeUpstreamQualityPolicyData {
    pub(super) connect_retry_attempts: u32,
    pub(super) connect_retry_backoff_ms: u64,
    pub(super) connect_budget_ms: u64,
    pub(super) unhealthy_fail_threshold: u32,
    pub(super) connect_failfast_hard_errors: bool,
}

#[derive(Serialize)]
pub(super) struct RuntimeUpstreamQualityCountersData {
    pub(super) connect_attempt_total: u64,
    pub(super) connect_success_total: u64,
    pub(super) connect_fail_total: u64,
    pub(super) connect_failfast_hard_error_total: u64,
}

#[derive(Serialize)]
pub(super) struct RuntimeUpstreamQualitySummaryData {
    pub(super) configured_total: usize,
    pub(super) healthy_total: usize,
    pub(super) unhealthy_total: usize,
    pub(super) direct_total: usize,
    pub(super) socks4_total: usize,
    pub(super) socks5_total: usize,
}

#[derive(Serialize)]
pub(super) struct RuntimeUpstreamQualityDcData {
    pub(super) dc: i16,
    pub(super) latency_ema_ms: Option<f64>,
    pub(super) ip_preference: &'static str,
}

#[derive(Serialize)]
pub(super) struct RuntimeUpstreamQualityUpstreamData {
    pub(super) upstream_id: usize,
    pub(super) route_kind: &'static str,
    pub(super) address: String,
    pub(super) weight: u16,
    pub(super) scopes: String,
    pub(super) healthy: bool,
    pub(super) fails: u32,
    pub(super) last_check_age_secs: u64,
    pub(super) effective_latency_ms: Option<f64>,
    pub(super) dc: Vec<RuntimeUpstreamQualityDcData>,
}

#[derive(Serialize)]
pub(super) struct RuntimeUpstreamQualityData {
    pub(super) enabled: bool,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub(super) reason: Option<&'static str>,
    pub(super) generated_at_epoch_secs: u64,
    pub(super) policy: RuntimeUpstreamQualityPolicyData,
    pub(super) counters: RuntimeUpstreamQualityCountersData,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub(super) summary: Option<RuntimeUpstreamQualitySummaryData>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub(super) upstreams: Option<Vec<RuntimeUpstreamQualityUpstreamData>>,
}

#[derive(Serialize)]
pub(super) struct RuntimeNatStunReflectionData {
    pub(super) addr: String,
    pub(super) age_secs: u64,
}

#[derive(Serialize)]
pub(super) struct RuntimeNatStunFlagsData {
    pub(super) nat_probe_enabled: bool,
    pub(super) nat_probe_disabled_runtime: bool,
    pub(super) nat_probe_attempts: u8,
}

#[derive(Serialize)]
pub(super) struct RuntimeNatStunServersData {
    pub(super) configured: Vec<String>,
    pub(super) live: Vec<String>,
    pub(super) live_total: usize,
}

#[derive(Serialize)]
pub(super) struct RuntimeNatStunReflectionBlockData {
    #[serde(skip_serializing_if = "Option::is_none")]
    pub(super) v4: Option<RuntimeNatStunReflectionData>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub(super) v6: Option<RuntimeNatStunReflectionData>,
}

#[derive(Serialize)]
pub(super) struct RuntimeNatStunPayload {
    pub(super) flags: RuntimeNatStunFlagsData,
    pub(super) servers: RuntimeNatStunServersData,
    pub(super) reflection: RuntimeNatStunReflectionBlockData,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub(super) stun_backoff_remaining_ms: Option<u64>,
}

#[derive(Serialize)]
pub(super) struct RuntimeNatStunData {
    pub(super) enabled: bool,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub(super) reason: Option<&'static str>,
    pub(super) generated_at_epoch_secs: u64,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub(super) data: Option<RuntimeNatStunPayload>,
}

pub(super) fn build_security_whitelist_data(cfg: &ProxyConfig) -> SecurityWhitelistData {
    let entries = cfg
        .server
        .api
        .whitelist
        .iter()
        .map(ToString::to_string)
        .collect::<Vec<_>>();
    SecurityWhitelistData {
        generated_at_epoch_secs: now_epoch_secs(),
        enabled: !entries.is_empty(),
        entries_total: entries.len(),
        entries,
    }
}

pub(super) async fn build_runtime_me_pool_state_data(shared: &ApiShared) -> RuntimeMePoolStateData {
    let now_epoch_secs = now_epoch_secs();
    let Some(pool) = shared.me_pool.read().await.clone() else {
        return RuntimeMePoolStateData {
            enabled: false,
            reason: Some(SOURCE_UNAVAILABLE_REASON),
            generated_at_epoch_secs: now_epoch_secs,
            data: None,
        };
    };

    let status = pool.api_status_snapshot().await;
    let runtime = pool.api_runtime_snapshot().await;
    let refill = pool.api_refill_snapshot().await;

    let mut draining_generations = BTreeSet::<u64>::new();
    let mut contour_warm = 0usize;
    let mut contour_active = 0usize;
    let mut contour_draining = 0usize;
    let mut draining = 0usize;
    let mut degraded = 0usize;
    let mut healthy = 0usize;

    for writer in &status.writers {
        if writer.draining {
            draining_generations.insert(writer.generation);
            draining += 1;
        }
        if writer.degraded && !writer.draining {
            degraded += 1;
        }
        if !writer.degraded && !writer.draining {
            healthy += 1;
        }
        match writer.state {
            "warm" => contour_warm += 1,
            "active" => contour_active += 1,
            _ => contour_draining += 1,
        }
    }

    RuntimeMePoolStateData {
        enabled: true,
        reason: None,
        generated_at_epoch_secs: status.generated_at_epoch_secs,
        data: Some(RuntimeMePoolStatePayload {
            generations: RuntimeMePoolStateGenerationData {
                active_generation: runtime.active_generation,
                warm_generation: runtime.warm_generation,
                pending_hardswap_generation: runtime.pending_hardswap_generation,
                pending_hardswap_age_secs: runtime.pending_hardswap_age_secs,
                draining_generations: draining_generations.into_iter().collect(),
            },
            hardswap: RuntimeMePoolStateHardswapData {
                enabled: runtime.hardswap_enabled,
                pending: runtime.pending_hardswap_generation != 0,
            },
            writers: RuntimeMePoolStateWriterData {
                total: status.writers.len(),
                alive_non_draining: status.writers.len().saturating_sub(draining),
                draining,
                degraded,
                contour: RuntimeMePoolStateWriterContourData {
                    warm: contour_warm,
                    active: contour_active,
                    draining: contour_draining,
                },
                health: RuntimeMePoolStateWriterHealthData {
                    healthy,
                    degraded,
                    draining,
                },
            },
            refill: RuntimeMePoolStateRefillData {
                inflight_endpoints_total: refill.inflight_endpoints_total,
                inflight_dc_total: refill.inflight_dc_total,
                by_dc: refill
                    .by_dc
                    .into_iter()
                    .map(|entry| RuntimeMePoolStateRefillDcData {
                        dc: entry.dc,
                        family: entry.family,
                        inflight: entry.inflight,
                    })
                    .collect(),
            },
        }),
    }
}

pub(super) async fn build_runtime_me_quality_data(shared: &ApiShared) -> RuntimeMeQualityData {
    let now_epoch_secs = now_epoch_secs();
    let Some(pool) = shared.me_pool.read().await.clone() else {
        return RuntimeMeQualityData {
            enabled: false,
            reason: Some(SOURCE_UNAVAILABLE_REASON),
            generated_at_epoch_secs: now_epoch_secs,
            data: None,
        };
    };

    let status = pool.api_status_snapshot().await;
    RuntimeMeQualityData {
        enabled: true,
        reason: None,
        generated_at_epoch_secs: status.generated_at_epoch_secs,
        data: Some(RuntimeMeQualityPayload {
            counters: RuntimeMeQualityCountersData {
                idle_close_by_peer_total: shared.stats.get_me_idle_close_by_peer_total(),
                reader_eof_total: shared.stats.get_me_reader_eof_total(),
                kdf_drift_total: shared.stats.get_me_kdf_drift_total(),
                kdf_port_only_drift_total: shared.stats.get_me_kdf_port_only_drift_total(),
                reconnect_attempt_total: shared.stats.get_me_reconnect_attempts(),
                reconnect_success_total: shared.stats.get_me_reconnect_success(),
            },
            route_drops: RuntimeMeQualityRouteDropData {
                no_conn_total: shared.stats.get_me_route_drop_no_conn(),
                channel_closed_total: shared.stats.get_me_route_drop_channel_closed(),
                queue_full_total: shared.stats.get_me_route_drop_queue_full(),
                queue_full_base_total: shared.stats.get_me_route_drop_queue_full_base(),
                queue_full_high_total: shared.stats.get_me_route_drop_queue_full_high(),
            },
            dc_rtt: status
                .dcs
                .into_iter()
                .map(|dc| RuntimeMeQualityDcRttData {
                    dc: dc.dc,
                    rtt_ema_ms: dc.rtt_ms,
                    alive_writers: dc.alive_writers,
                    required_writers: dc.required_writers,
                    coverage_pct: dc.coverage_pct,
                })
                .collect(),
        }),
    }
}

pub(super) async fn build_runtime_upstream_quality_data(
    shared: &ApiShared,
) -> RuntimeUpstreamQualityData {
    let generated_at_epoch_secs = now_epoch_secs();
    let policy = shared.upstream_manager.api_policy_snapshot();
    let counters = RuntimeUpstreamQualityCountersData {
        connect_attempt_total: shared.stats.get_upstream_connect_attempt_total(),
        connect_success_total: shared.stats.get_upstream_connect_success_total(),
        connect_fail_total: shared.stats.get_upstream_connect_fail_total(),
        connect_failfast_hard_error_total: shared.stats.get_upstream_connect_failfast_hard_error_total(),
    };

    let Some(snapshot) = shared.upstream_manager.try_api_snapshot() else {
        return RuntimeUpstreamQualityData {
            enabled: false,
            reason: Some(SOURCE_UNAVAILABLE_REASON),
            generated_at_epoch_secs,
            policy: RuntimeUpstreamQualityPolicyData {
                connect_retry_attempts: policy.connect_retry_attempts,
                connect_retry_backoff_ms: policy.connect_retry_backoff_ms,
                connect_budget_ms: policy.connect_budget_ms,
                unhealthy_fail_threshold: policy.unhealthy_fail_threshold,
                connect_failfast_hard_errors: policy.connect_failfast_hard_errors,
            },
            counters,
            summary: None,
            upstreams: None,
        };
    };

    RuntimeUpstreamQualityData {
        enabled: true,
        reason: None,
        generated_at_epoch_secs,
        policy: RuntimeUpstreamQualityPolicyData {
            connect_retry_attempts: policy.connect_retry_attempts,
            connect_retry_backoff_ms: policy.connect_retry_backoff_ms,
            connect_budget_ms: policy.connect_budget_ms,
            unhealthy_fail_threshold: policy.unhealthy_fail_threshold,
            connect_failfast_hard_errors: policy.connect_failfast_hard_errors,
        },
        counters,
        summary: Some(RuntimeUpstreamQualitySummaryData {
            configured_total: snapshot.summary.configured_total,
            healthy_total: snapshot.summary.healthy_total,
            unhealthy_total: snapshot.summary.unhealthy_total,
            direct_total: snapshot.summary.direct_total,
            socks4_total: snapshot.summary.socks4_total,
            socks5_total: snapshot.summary.socks5_total,
        }),
        upstreams: Some(
            snapshot
                .upstreams
                .into_iter()
                .map(|upstream| RuntimeUpstreamQualityUpstreamData {
                    upstream_id: upstream.upstream_id,
                    route_kind: match upstream.route_kind {
                        crate::transport::UpstreamRouteKind::Direct => "direct",
                        crate::transport::UpstreamRouteKind::Socks4 => "socks4",
                        crate::transport::UpstreamRouteKind::Socks5 => "socks5",
                    },
                    address: upstream.address,
                    weight: upstream.weight,
                    scopes: upstream.scopes,
                    healthy: upstream.healthy,
                    fails: upstream.fails,
                    last_check_age_secs: upstream.last_check_age_secs,
                    effective_latency_ms: upstream.effective_latency_ms,
                    dc: upstream
                        .dc
                        .into_iter()
                        .map(|dc| RuntimeUpstreamQualityDcData {
                            dc: dc.dc,
                            latency_ema_ms: dc.latency_ema_ms,
                            ip_preference: match dc.ip_preference {
                                crate::transport::upstream::IpPreference::Unknown => "unknown",
                                crate::transport::upstream::IpPreference::PreferV6 => "prefer_v6",
                                crate::transport::upstream::IpPreference::PreferV4 => "prefer_v4",
                                crate::transport::upstream::IpPreference::BothWork => "both_work",
                                crate::transport::upstream::IpPreference::Unavailable => "unavailable",
                            },
                        })
                        .collect(),
                })
                .collect(),
        ),
    }
}

pub(super) async fn build_runtime_nat_stun_data(shared: &ApiShared) -> RuntimeNatStunData {
    let now_epoch_secs = now_epoch_secs();
    let Some(pool) = shared.me_pool.read().await.clone() else {
        return RuntimeNatStunData {
            enabled: false,
            reason: Some(SOURCE_UNAVAILABLE_REASON),
            generated_at_epoch_secs: now_epoch_secs,
            data: None,
        };
    };

    let snapshot = pool.api_nat_stun_snapshot().await;
    RuntimeNatStunData {
        enabled: true,
        reason: None,
        generated_at_epoch_secs: now_epoch_secs,
        data: Some(RuntimeNatStunPayload {
            flags: RuntimeNatStunFlagsData {
                nat_probe_enabled: snapshot.nat_probe_enabled,
                nat_probe_disabled_runtime: snapshot.nat_probe_disabled_runtime,
                nat_probe_attempts: snapshot.nat_probe_attempts,
            },
            servers: RuntimeNatStunServersData {
                configured: snapshot.configured_servers,
                live: snapshot.live_servers.clone(),
                live_total: snapshot.live_servers.len(),
            },
            reflection: RuntimeNatStunReflectionBlockData {
                v4: snapshot.reflection_v4.map(|entry| RuntimeNatStunReflectionData {
                    addr: entry.addr.to_string(),
                    age_secs: entry.age_secs,
                }),
                v6: snapshot.reflection_v6.map(|entry| RuntimeNatStunReflectionData {
                    addr: entry.addr.to_string(),
                    age_secs: entry.age_secs,
                }),
            },
            stun_backoff_remaining_ms: snapshot.stun_backoff_remaining_ms,
        }),
    }
}

fn now_epoch_secs() -> u64 {
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap_or_default()
        .as_secs()
}
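The writer tally inside build_runtime_me_pool_state_data classifies each writer into exactly one health bucket: draining wins, then degraded, then healthy. The sketch below is a hypothetical standalone version of that classification (the `Writer` struct here is simplified and not the repo's type), useful for seeing why the three counts always sum to the writer total.

```rust
// Simplified writer record; only the two flags the tally reads.
struct Writer {
    draining: bool,
    degraded: bool,
}

// Returns (healthy, degraded, draining). A draining writer is counted as
// draining even if it is also degraded, matching the source's conditions:
// degraded requires !draining, healthy requires !degraded && !draining.
fn tally(writers: &[Writer]) -> (usize, usize, usize) {
    let mut healthy = 0;
    let mut degraded = 0;
    let mut draining = 0;
    for w in writers {
        if w.draining {
            draining += 1;
        } else if w.degraded {
            degraded += 1;
        } else {
            healthy += 1;
        }
    }
    (healthy, degraded, draining)
}

fn main() {
    let writers = [
        Writer { draining: false, degraded: false }, // healthy
        Writer { draining: false, degraded: true },  // degraded
        Writer { draining: true, degraded: true },   // draining wins
    ];
    assert_eq!(tally(&writers), (1, 1, 1));
    println!("ok");
}
```

Because the buckets are mutually exclusive, `alive_non_draining` in the payload can be derived as `total - draining`, which is exactly what the source computes with `saturating_sub`.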
@@ -297,7 +297,7 @@ async fn get_minimal_payload_cached(
        }
    }

    let pool = shared.me_pool.as_ref()?;
    let pool = shared.me_pool.read().await.clone()?;
    let status = pool.api_status_snapshot().await;
    let runtime = pool.api_runtime_snapshot().await;
    let generated_at_epoch_secs = status.generated_at_epoch_secs;
@@ -349,6 +349,10 @@ async fn get_minimal_payload_cached(
            available_endpoints: entry.available_endpoints,
            available_pct: entry.available_pct,
            required_writers: entry.required_writers,
            floor_min: entry.floor_min,
            floor_target: entry.floor_target,
            floor_max: entry.floor_max,
            floor_capped: entry.floor_capped,
            alive_writers: entry.alive_writers,
            coverage_pct: entry.coverage_pct,
            rtt_ms: entry.rtt_ms,
@@ -366,7 +370,35 @@ async fn get_minimal_payload_cached(
        adaptive_floor_idle_secs: runtime.adaptive_floor_idle_secs,
        adaptive_floor_min_writers_single_endpoint: runtime
            .adaptive_floor_min_writers_single_endpoint,
        adaptive_floor_min_writers_multi_endpoint: runtime
            .adaptive_floor_min_writers_multi_endpoint,
        adaptive_floor_recover_grace_secs: runtime.adaptive_floor_recover_grace_secs,
        adaptive_floor_writers_per_core_total: runtime
            .adaptive_floor_writers_per_core_total,
        adaptive_floor_cpu_cores_override: runtime.adaptive_floor_cpu_cores_override,
        adaptive_floor_max_extra_writers_single_per_core: runtime
            .adaptive_floor_max_extra_writers_single_per_core,
        adaptive_floor_max_extra_writers_multi_per_core: runtime
            .adaptive_floor_max_extra_writers_multi_per_core,
        adaptive_floor_max_active_writers_per_core: runtime
            .adaptive_floor_max_active_writers_per_core,
        adaptive_floor_max_warm_writers_per_core: runtime
            .adaptive_floor_max_warm_writers_per_core,
        adaptive_floor_max_active_writers_global: runtime
            .adaptive_floor_max_active_writers_global,
        adaptive_floor_max_warm_writers_global: runtime
            .adaptive_floor_max_warm_writers_global,
        adaptive_floor_cpu_cores_detected: runtime.adaptive_floor_cpu_cores_detected,
        adaptive_floor_cpu_cores_effective: runtime.adaptive_floor_cpu_cores_effective,
        adaptive_floor_global_cap_raw: runtime.adaptive_floor_global_cap_raw,
        adaptive_floor_global_cap_effective: runtime.adaptive_floor_global_cap_effective,
        adaptive_floor_target_writers_total: runtime.adaptive_floor_target_writers_total,
        adaptive_floor_active_cap_configured: runtime.adaptive_floor_active_cap_configured,
        adaptive_floor_active_cap_effective: runtime.adaptive_floor_active_cap_effective,
        adaptive_floor_warm_cap_configured: runtime.adaptive_floor_warm_cap_configured,
        adaptive_floor_warm_cap_effective: runtime.adaptive_floor_warm_cap_effective,
        adaptive_floor_active_writers_current: runtime.adaptive_floor_active_writers_current,
        adaptive_floor_warm_writers_current: runtime.adaptive_floor_warm_writers_current,
        me_keepalive_enabled: runtime.me_keepalive_enabled,
        me_keepalive_interval_secs: runtime.me_keepalive_interval_secs,
        me_keepalive_jitter_secs: runtime.me_keepalive_jitter_secs,
src/api/runtime_watch.rs — Normal file, 66 lines
@@ -0,0 +1,66 @@
use std::sync::Arc;
use std::sync::atomic::Ordering;
use std::time::{SystemTime, UNIX_EPOCH};

use tokio::sync::watch;

use crate::config::ProxyConfig;

use super::ApiRuntimeState;
use super::events::ApiEventStore;

pub(super) fn spawn_runtime_watchers(
    config_rx: watch::Receiver<Arc<ProxyConfig>>,
    admission_rx: watch::Receiver<bool>,
    runtime_state: Arc<ApiRuntimeState>,
    runtime_events: Arc<ApiEventStore>,
) {
    let mut config_rx_reload = config_rx;
    let runtime_state_reload = runtime_state.clone();
    let runtime_events_reload = runtime_events.clone();
    tokio::spawn(async move {
        loop {
            if config_rx_reload.changed().await.is_err() {
                break;
            }
            runtime_state_reload
                .config_reload_count
                .fetch_add(1, Ordering::Relaxed);
            runtime_state_reload
                .last_config_reload_epoch_secs
                .store(now_epoch_secs(), Ordering::Relaxed);
            runtime_events_reload.record("config.reload.applied", "config receiver updated");
        }
    });

    let mut admission_rx_watch = admission_rx;
    tokio::spawn(async move {
        runtime_state
            .admission_open
            .store(*admission_rx_watch.borrow(), Ordering::Relaxed);
        runtime_events.record(
            "admission.state",
            format!("accepting_new_connections={}", *admission_rx_watch.borrow()),
        );
        loop {
            if admission_rx_watch.changed().await.is_err() {
                break;
            }
            let admission_open = *admission_rx_watch.borrow();
            runtime_state
                .admission_open
                .store(admission_open, Ordering::Relaxed);
            runtime_events.record(
                "admission.state",
                format!("accepting_new_connections={}", admission_open),
            );
        }
    });
}

fn now_epoch_secs() -> u64 {
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap_or_default()
        .as_secs()
}
@@ -5,6 +5,7 @@ use serde::Serialize;
|
||||
use crate::config::{MeFloorMode, ProxyConfig, UserMaxUniqueIpsMode};
|
||||
|
||||
use super::ApiShared;
|
||||
use super::runtime_init::build_runtime_startup_summary;
|
||||
|
||||
#[derive(Serialize)]
|
||||
pub(super) struct SystemInfoData {
|
||||
@@ -34,6 +35,9 @@ pub(super) struct RuntimeGatesData {
|
||||
pub(super) me_runtime_ready: bool,
|
||||
pub(super) me2dc_fallback_enabled: bool,
|
||||
pub(super) use_middle_proxy: bool,
|
||||
pub(super) startup_status: &'static str,
|
||||
pub(super) startup_stage: String,
|
||||
pub(super) startup_progress_pct: f64,
|
||||
}
|
||||
|
||||
#[derive(Serialize)]
|
||||
@@ -60,7 +64,16 @@ pub(super) struct EffectiveMiddleProxyLimits {
|
||||
pub(super) floor_mode: &'static str,
|
||||
pub(super) adaptive_floor_idle_secs: u64,
|
||||
pub(super) adaptive_floor_min_writers_single_endpoint: u8,
|
||||
pub(super) adaptive_floor_min_writers_multi_endpoint: u8,
|
||||
pub(super) adaptive_floor_recover_grace_secs: u64,
|
||||
pub(super) adaptive_floor_writers_per_core_total: u16,
|
||||
pub(super) adaptive_floor_cpu_cores_override: u16,
|
||||
pub(super) adaptive_floor_max_extra_writers_single_per_core: u16,
|
||||
pub(super) adaptive_floor_max_extra_writers_multi_per_core: u16,
|
||||
pub(super) adaptive_floor_max_active_writers_per_core: u16,
|
||||
pub(super) adaptive_floor_max_warm_writers_per_core: u16,
|
||||
pub(super) adaptive_floor_max_active_writers_global: u32,
|
||||
pub(super) adaptive_floor_max_warm_writers_global: u32,
|
||||
pub(super) reconnect_max_concurrent_per_dc: u32,
|
||||
pub(super) reconnect_backoff_base_ms: u64,
|
||||
pub(super) reconnect_backoff_cap_ms: u64,
|
||||
@@ -137,12 +150,18 @@ pub(super) fn build_system_info_data(
|
||||
}
|
||||
}
|
||||
|
||||
pub(super) fn build_runtime_gates_data(shared: &ApiShared, cfg: &ProxyConfig) -> RuntimeGatesData {
|
||||
pub(super) async fn build_runtime_gates_data(
|
||||
shared: &ApiShared,
|
||||
cfg: &ProxyConfig,
|
||||
) -> RuntimeGatesData {
|
||||
let startup_summary = build_runtime_startup_summary(shared).await;
|
||||
let me_runtime_ready = if !cfg.general.use_middle_proxy {
|
||||
true
|
||||
} else {
|
||||
shared
|
||||
.me_pool
|
||||
.read()
|
||||
.await
|
||||
.as_ref()
|
||||
        .map(|pool| pool.is_runtime_ready())
        .unwrap_or(false)

@@ -154,6 +173,9 @@ pub(super) fn build_runtime_gates_data(shared: &ApiShared, cfg: &ProxyConfig) ->
        me_runtime_ready,
        me2dc_fallback_enabled: cfg.general.me2dc_fallback,
        use_middle_proxy: cfg.general.use_middle_proxy,
        startup_status: startup_summary.status,
        startup_stage: startup_summary.stage,
        startup_progress_pct: startup_summary.progress_pct,
    }
}

@@ -183,7 +205,34 @@ pub(super) fn build_limits_effective_data(cfg: &ProxyConfig) -> EffectiveLimitsD
        adaptive_floor_min_writers_single_endpoint: cfg
            .general
            .me_adaptive_floor_min_writers_single_endpoint,
        adaptive_floor_min_writers_multi_endpoint: cfg
            .general
            .me_adaptive_floor_min_writers_multi_endpoint,
        adaptive_floor_recover_grace_secs: cfg.general.me_adaptive_floor_recover_grace_secs,
        adaptive_floor_writers_per_core_total: cfg
            .general
            .me_adaptive_floor_writers_per_core_total,
        adaptive_floor_cpu_cores_override: cfg
            .general
            .me_adaptive_floor_cpu_cores_override,
        adaptive_floor_max_extra_writers_single_per_core: cfg
            .general
            .me_adaptive_floor_max_extra_writers_single_per_core,
        adaptive_floor_max_extra_writers_multi_per_core: cfg
            .general
            .me_adaptive_floor_max_extra_writers_multi_per_core,
        adaptive_floor_max_active_writers_per_core: cfg
            .general
            .me_adaptive_floor_max_active_writers_per_core,
        adaptive_floor_max_warm_writers_per_core: cfg
            .general
            .me_adaptive_floor_max_warm_writers_per_core,
        adaptive_floor_max_active_writers_global: cfg
            .general
            .me_adaptive_floor_max_active_writers_global,
        adaptive_floor_max_warm_writers_global: cfg
            .general
            .me_adaptive_floor_max_warm_writers_global,
        reconnect_max_concurrent_per_dc: cfg.general.me_reconnect_max_concurrent_per_dc,
        reconnect_backoff_base_ms: cfg.general.me_reconnect_backoff_base_ms,
        reconnect_backoff_cap_ms: cfg.general.me_reconnect_backoff_cap_ms,
@@ -11,7 +11,16 @@ const DEFAULT_ME_RECONNECT_FAST_RETRY_COUNT: u32 = 16;
const DEFAULT_ME_SINGLE_ENDPOINT_SHADOW_WRITERS: u8 = 2;
const DEFAULT_ME_ADAPTIVE_FLOOR_IDLE_SECS: u64 = 90;
const DEFAULT_ME_ADAPTIVE_FLOOR_MIN_WRITERS_SINGLE_ENDPOINT: u8 = 1;
const DEFAULT_ME_ADAPTIVE_FLOOR_MIN_WRITERS_MULTI_ENDPOINT: u8 = 1;
const DEFAULT_ME_ADAPTIVE_FLOOR_RECOVER_GRACE_SECS: u64 = 180;
const DEFAULT_ME_ADAPTIVE_FLOOR_WRITERS_PER_CORE_TOTAL: u16 = 48;
const DEFAULT_ME_ADAPTIVE_FLOOR_CPU_CORES_OVERRIDE: u16 = 0;
const DEFAULT_ME_ADAPTIVE_FLOOR_MAX_EXTRA_WRITERS_SINGLE_PER_CORE: u16 = 1;
const DEFAULT_ME_ADAPTIVE_FLOOR_MAX_EXTRA_WRITERS_MULTI_PER_CORE: u16 = 2;
const DEFAULT_ME_ADAPTIVE_FLOOR_MAX_ACTIVE_WRITERS_PER_CORE: u16 = 64;
const DEFAULT_ME_ADAPTIVE_FLOOR_MAX_WARM_WRITERS_PER_CORE: u16 = 64;
const DEFAULT_ME_ADAPTIVE_FLOOR_MAX_ACTIVE_WRITERS_GLOBAL: u32 = 256;
const DEFAULT_ME_ADAPTIVE_FLOOR_MAX_WARM_WRITERS_GLOBAL: u32 = 256;
const DEFAULT_USER_MAX_UNIQUE_IPS_WINDOW_SECS: u64 = 30;
const DEFAULT_UPSTREAM_CONNECT_RETRY_ATTEMPTS: u32 = 2;
const DEFAULT_UPSTREAM_UNHEALTHY_FAIL_THRESHOLD: u32 = 5;

@@ -114,6 +123,11 @@ pub(crate) fn default_api_minimal_runtime_cache_ttl_ms() -> u64 {
    1000
}

pub(crate) fn default_api_runtime_edge_enabled() -> bool { false }
pub(crate) fn default_api_runtime_edge_cache_ttl_ms() -> u64 { 1000 }
pub(crate) fn default_api_runtime_edge_top_n() -> usize { 10 }
pub(crate) fn default_api_runtime_edge_events_capacity() -> usize { 256 }

pub(crate) fn default_proxy_protocol_header_timeout_ms() -> u64 {
    500
}

@@ -242,10 +256,46 @@ pub(crate) fn default_me_adaptive_floor_min_writers_single_endpoint() -> u8 {
    DEFAULT_ME_ADAPTIVE_FLOOR_MIN_WRITERS_SINGLE_ENDPOINT
}

pub(crate) fn default_me_adaptive_floor_min_writers_multi_endpoint() -> u8 {
    DEFAULT_ME_ADAPTIVE_FLOOR_MIN_WRITERS_MULTI_ENDPOINT
}

pub(crate) fn default_me_adaptive_floor_recover_grace_secs() -> u64 {
    DEFAULT_ME_ADAPTIVE_FLOOR_RECOVER_GRACE_SECS
}

pub(crate) fn default_me_adaptive_floor_writers_per_core_total() -> u16 {
    DEFAULT_ME_ADAPTIVE_FLOOR_WRITERS_PER_CORE_TOTAL
}

pub(crate) fn default_me_adaptive_floor_cpu_cores_override() -> u16 {
    DEFAULT_ME_ADAPTIVE_FLOOR_CPU_CORES_OVERRIDE
}

pub(crate) fn default_me_adaptive_floor_max_extra_writers_single_per_core() -> u16 {
    DEFAULT_ME_ADAPTIVE_FLOOR_MAX_EXTRA_WRITERS_SINGLE_PER_CORE
}

pub(crate) fn default_me_adaptive_floor_max_extra_writers_multi_per_core() -> u16 {
    DEFAULT_ME_ADAPTIVE_FLOOR_MAX_EXTRA_WRITERS_MULTI_PER_CORE
}

pub(crate) fn default_me_adaptive_floor_max_active_writers_per_core() -> u16 {
    DEFAULT_ME_ADAPTIVE_FLOOR_MAX_ACTIVE_WRITERS_PER_CORE
}

pub(crate) fn default_me_adaptive_floor_max_warm_writers_per_core() -> u16 {
    DEFAULT_ME_ADAPTIVE_FLOOR_MAX_WARM_WRITERS_PER_CORE
}

pub(crate) fn default_me_adaptive_floor_max_active_writers_global() -> u32 {
    DEFAULT_ME_ADAPTIVE_FLOOR_MAX_ACTIVE_WRITERS_GLOBAL
}

pub(crate) fn default_me_adaptive_floor_max_warm_writers_global() -> u32 {
    DEFAULT_ME_ADAPTIVE_FLOOR_MAX_WARM_WRITERS_GLOBAL
}

pub(crate) fn default_upstream_connect_retry_attempts() -> u32 {
    DEFAULT_UPSTREAM_CONNECT_RETRY_ATTEMPTS
}
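The per-core and global caps above (64 active writers per core, 256 global, etc.) suggest that the effective budget is the per-core allowance scaled by core count, clamped by the global ceiling. The sketch below illustrates that combination rule only; `effective_active_writer_cap` is a hypothetical name and the exact formula used by the pool is an assumption, not the crate's actual code:

```rust
// Hypothetical sketch: combine a per-core writer cap with a global cap,
// using the DEFAULT_ME_ADAPTIVE_FLOOR_* values above as inputs.
// The combination rule is an assumption, not the real pool logic.
fn effective_active_writer_cap(
    cores: u16,               // detected logical cores, or the override when non-zero
    max_active_per_core: u16, // e.g. DEFAULT_..._MAX_ACTIVE_WRITERS_PER_CORE = 64
    max_active_global: u32,   // e.g. DEFAULT_..._MAX_ACTIVE_WRITERS_GLOBAL = 256
) -> u32 {
    let per_core_total = u32::from(cores) * u32::from(max_active_per_core);
    per_core_total.min(max_active_global)
}

fn main() {
    // An 8-core host is bounded by the global ceiling:
    assert_eq!(effective_active_writer_cap(8, 64, 256), 256);
    // A 2-core host is bounded by the per-core budget instead:
    assert_eq!(effective_active_writer_cap(2, 64, 256), 128);
}
```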
@@ -78,7 +78,16 @@ pub struct HotFields {
    pub me_floor_mode: MeFloorMode,
    pub me_adaptive_floor_idle_secs: u64,
    pub me_adaptive_floor_min_writers_single_endpoint: u8,
    pub me_adaptive_floor_min_writers_multi_endpoint: u8,
    pub me_adaptive_floor_recover_grace_secs: u64,
    pub me_adaptive_floor_writers_per_core_total: u16,
    pub me_adaptive_floor_cpu_cores_override: u16,
    pub me_adaptive_floor_max_extra_writers_single_per_core: u16,
    pub me_adaptive_floor_max_extra_writers_multi_per_core: u16,
    pub me_adaptive_floor_max_active_writers_per_core: u16,
    pub me_adaptive_floor_max_warm_writers_per_core: u16,
    pub me_adaptive_floor_max_active_writers_global: u32,
    pub me_adaptive_floor_max_warm_writers_global: u32,
    pub me_route_backpressure_base_timeout_ms: u64,
    pub me_route_backpressure_high_timeout_ms: u64,
    pub me_route_backpressure_high_watermark_pct: u8,

@@ -150,9 +159,36 @@ impl HotFields {
            me_adaptive_floor_min_writers_single_endpoint: cfg
                .general
                .me_adaptive_floor_min_writers_single_endpoint,
            me_adaptive_floor_min_writers_multi_endpoint: cfg
                .general
                .me_adaptive_floor_min_writers_multi_endpoint,
            me_adaptive_floor_recover_grace_secs: cfg
                .general
                .me_adaptive_floor_recover_grace_secs,
            me_adaptive_floor_writers_per_core_total: cfg
                .general
                .me_adaptive_floor_writers_per_core_total,
            me_adaptive_floor_cpu_cores_override: cfg
                .general
                .me_adaptive_floor_cpu_cores_override,
            me_adaptive_floor_max_extra_writers_single_per_core: cfg
                .general
                .me_adaptive_floor_max_extra_writers_single_per_core,
            me_adaptive_floor_max_extra_writers_multi_per_core: cfg
                .general
                .me_adaptive_floor_max_extra_writers_multi_per_core,
            me_adaptive_floor_max_active_writers_per_core: cfg
                .general
                .me_adaptive_floor_max_active_writers_per_core,
            me_adaptive_floor_max_warm_writers_per_core: cfg
                .general
                .me_adaptive_floor_max_warm_writers_per_core,
            me_adaptive_floor_max_active_writers_global: cfg
                .general
                .me_adaptive_floor_max_active_writers_global,
            me_adaptive_floor_max_warm_writers_global: cfg
                .general
                .me_adaptive_floor_max_warm_writers_global,
            me_route_backpressure_base_timeout_ms: cfg.general.me_route_backpressure_base_timeout_ms,
            me_route_backpressure_high_timeout_ms: cfg.general.me_route_backpressure_high_timeout_ms,
            me_route_backpressure_high_watermark_pct: cfg.general.me_route_backpressure_high_watermark_pct,
@@ -273,8 +309,26 @@ fn overlay_hot_fields(old: &ProxyConfig, new: &ProxyConfig) -> ProxyConfig {
    cfg.general.me_adaptive_floor_idle_secs = new.general.me_adaptive_floor_idle_secs;
    cfg.general.me_adaptive_floor_min_writers_single_endpoint =
        new.general.me_adaptive_floor_min_writers_single_endpoint;
    cfg.general.me_adaptive_floor_min_writers_multi_endpoint =
        new.general.me_adaptive_floor_min_writers_multi_endpoint;
    cfg.general.me_adaptive_floor_recover_grace_secs =
        new.general.me_adaptive_floor_recover_grace_secs;
    cfg.general.me_adaptive_floor_writers_per_core_total =
        new.general.me_adaptive_floor_writers_per_core_total;
    cfg.general.me_adaptive_floor_cpu_cores_override =
        new.general.me_adaptive_floor_cpu_cores_override;
    cfg.general.me_adaptive_floor_max_extra_writers_single_per_core =
        new.general.me_adaptive_floor_max_extra_writers_single_per_core;
    cfg.general.me_adaptive_floor_max_extra_writers_multi_per_core =
        new.general.me_adaptive_floor_max_extra_writers_multi_per_core;
    cfg.general.me_adaptive_floor_max_active_writers_per_core =
        new.general.me_adaptive_floor_max_active_writers_per_core;
    cfg.general.me_adaptive_floor_max_warm_writers_per_core =
        new.general.me_adaptive_floor_max_warm_writers_per_core;
    cfg.general.me_adaptive_floor_max_active_writers_global =
        new.general.me_adaptive_floor_max_active_writers_global;
    cfg.general.me_adaptive_floor_max_warm_writers_global =
        new.general.me_adaptive_floor_max_warm_writers_global;
    cfg.general.me_route_backpressure_base_timeout_ms =
        new.general.me_route_backpressure_base_timeout_ms;
    cfg.general.me_route_backpressure_high_timeout_ms =

@@ -312,6 +366,12 @@ fn warn_non_hot_changes(old: &ProxyConfig, new: &ProxyConfig, non_hot_changed: b
        || old.server.api.minimal_runtime_enabled != new.server.api.minimal_runtime_enabled
        || old.server.api.minimal_runtime_cache_ttl_ms
            != new.server.api.minimal_runtime_cache_ttl_ms
        || old.server.api.runtime_edge_enabled != new.server.api.runtime_edge_enabled
        || old.server.api.runtime_edge_cache_ttl_ms
            != new.server.api.runtime_edge_cache_ttl_ms
        || old.server.api.runtime_edge_top_n != new.server.api.runtime_edge_top_n
        || old.server.api.runtime_edge_events_capacity
            != new.server.api.runtime_edge_events_capacity
        || old.server.api.read_only != new.server.api.read_only
    {
        warned = true;

@@ -691,15 +751,42 @@ fn log_changes(
        || old_hot.me_adaptive_floor_idle_secs != new_hot.me_adaptive_floor_idle_secs
        || old_hot.me_adaptive_floor_min_writers_single_endpoint
            != new_hot.me_adaptive_floor_min_writers_single_endpoint
        || old_hot.me_adaptive_floor_min_writers_multi_endpoint
            != new_hot.me_adaptive_floor_min_writers_multi_endpoint
        || old_hot.me_adaptive_floor_recover_grace_secs
            != new_hot.me_adaptive_floor_recover_grace_secs
        || old_hot.me_adaptive_floor_writers_per_core_total
            != new_hot.me_adaptive_floor_writers_per_core_total
        || old_hot.me_adaptive_floor_cpu_cores_override
            != new_hot.me_adaptive_floor_cpu_cores_override
        || old_hot.me_adaptive_floor_max_extra_writers_single_per_core
            != new_hot.me_adaptive_floor_max_extra_writers_single_per_core
        || old_hot.me_adaptive_floor_max_extra_writers_multi_per_core
            != new_hot.me_adaptive_floor_max_extra_writers_multi_per_core
        || old_hot.me_adaptive_floor_max_active_writers_per_core
            != new_hot.me_adaptive_floor_max_active_writers_per_core
        || old_hot.me_adaptive_floor_max_warm_writers_per_core
            != new_hot.me_adaptive_floor_max_warm_writers_per_core
        || old_hot.me_adaptive_floor_max_active_writers_global
            != new_hot.me_adaptive_floor_max_active_writers_global
        || old_hot.me_adaptive_floor_max_warm_writers_global
            != new_hot.me_adaptive_floor_max_warm_writers_global
    {
        info!(
-           "config reload: me_floor: mode={:?} idle={}s min_single={} recover_grace={}s",
+           "config reload: me_floor: mode={:?} idle={}s min_single={} min_multi={} recover_grace={}s per_core_total={} cores_override={} extra_single_per_core={} extra_multi_per_core={} max_active_per_core={} max_warm_per_core={} max_active_global={} max_warm_global={}",
            new_hot.me_floor_mode,
            new_hot.me_adaptive_floor_idle_secs,
            new_hot.me_adaptive_floor_min_writers_single_endpoint,
            new_hot.me_adaptive_floor_min_writers_multi_endpoint,
            new_hot.me_adaptive_floor_recover_grace_secs,
            new_hot.me_adaptive_floor_writers_per_core_total,
            new_hot.me_adaptive_floor_cpu_cores_override,
            new_hot.me_adaptive_floor_max_extra_writers_single_per_core,
            new_hot.me_adaptive_floor_max_extra_writers_multi_per_core,
            new_hot.me_adaptive_floor_max_active_writers_per_core,
            new_hot.me_adaptive_floor_max_warm_writers_per_core,
            new_hot.me_adaptive_floor_max_active_writers_global,
            new_hot.me_adaptive_floor_max_warm_writers_global,
        );
    }

@@ -312,6 +312,45 @@ impl ProxyConfig {
            ));
        }

        if config.general.me_adaptive_floor_min_writers_multi_endpoint == 0
            || config.general.me_adaptive_floor_min_writers_multi_endpoint > 32
        {
            return Err(ProxyError::Config(
                "general.me_adaptive_floor_min_writers_multi_endpoint must be within [1, 32]"
                    .to_string(),
            ));
        }

        if config.general.me_adaptive_floor_writers_per_core_total == 0 {
            return Err(ProxyError::Config(
                "general.me_adaptive_floor_writers_per_core_total must be > 0".to_string(),
            ));
        }

        if config.general.me_adaptive_floor_max_active_writers_per_core == 0 {
            return Err(ProxyError::Config(
                "general.me_adaptive_floor_max_active_writers_per_core must be > 0".to_string(),
            ));
        }

        if config.general.me_adaptive_floor_max_warm_writers_per_core == 0 {
            return Err(ProxyError::Config(
                "general.me_adaptive_floor_max_warm_writers_per_core must be > 0".to_string(),
            ));
        }

        if config.general.me_adaptive_floor_max_active_writers_global == 0 {
            return Err(ProxyError::Config(
                "general.me_adaptive_floor_max_active_writers_global must be > 0".to_string(),
            ));
        }

        if config.general.me_adaptive_floor_max_warm_writers_global == 0 {
            return Err(ProxyError::Config(
                "general.me_adaptive_floor_max_warm_writers_global must be > 0".to_string(),
            ));
        }

        if config.general.me_single_endpoint_outage_backoff_min_ms == 0 {
            return Err(ProxyError::Config(
                "general.me_single_endpoint_outage_backoff_min_ms must be > 0".to_string(),

@@ -462,6 +501,24 @@ impl ProxyConfig {
            ));
        }

        if config.server.api.runtime_edge_cache_ttl_ms > 60_000 {
            return Err(ProxyError::Config(
                "server.api.runtime_edge_cache_ttl_ms must be within [0, 60000]".to_string(),
            ));
        }

        if !(1..=1000).contains(&config.server.api.runtime_edge_top_n) {
            return Err(ProxyError::Config(
                "server.api.runtime_edge_top_n must be within [1, 1000]".to_string(),
            ));
        }

        if !(16..=4096).contains(&config.server.api.runtime_edge_events_capacity) {
            return Err(ProxyError::Config(
                "server.api.runtime_edge_events_capacity must be within [16, 4096]".to_string(),
            ));
        }

        if config.server.api.listen.parse::<SocketAddr>().is_err() {
            return Err(ProxyError::Config(
                "server.api.listen must be in IP:PORT format".to_string(),
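The validation hunks above all follow one pattern: test the value against an inclusive range (or a non-zero requirement) and return a `ProxyError::Config` whose message names the key and the allowed interval. A stripped-down, self-contained sketch of the same pattern using a plain `Result<(), String>`; the free function `check_runtime_edge_top_n` is illustrative, not part of the crate:

```rust
// Sketch of the range-validation style used above: one check per config key,
// with an error string that names both the key and the permitted range.
fn check_runtime_edge_top_n(top_n: usize) -> Result<(), String> {
    if !(1..=1000).contains(&top_n) {
        return Err("server.api.runtime_edge_top_n must be within [1, 1000]".to_string());
    }
    Ok(())
}

fn main() {
    // Boundary values are accepted; 0 and anything above 1000 are rejected.
    assert!(check_runtime_edge_top_n(1).is_ok());
    assert!(check_runtime_edge_top_n(1000).is_ok());
    assert!(check_runtime_edge_top_n(0).is_err());
    assert!(check_runtime_edge_top_n(1001).is_err());
}
```

The error strings double as stable test fixtures: the `*_is_rejected` tests further down assert on exactly these substrings, so changing a message here breaks a test there.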
@@ -802,6 +859,22 @@ mod tests {
            cfg.server.api.minimal_runtime_cache_ttl_ms,
            default_api_minimal_runtime_cache_ttl_ms()
        );
        assert_eq!(
            cfg.server.api.runtime_edge_enabled,
            default_api_runtime_edge_enabled()
        );
        assert_eq!(
            cfg.server.api.runtime_edge_cache_ttl_ms,
            default_api_runtime_edge_cache_ttl_ms()
        );
        assert_eq!(
            cfg.server.api.runtime_edge_top_n,
            default_api_runtime_edge_top_n()
        );
        assert_eq!(
            cfg.server.api.runtime_edge_events_capacity,
            default_api_runtime_edge_events_capacity()
        );
        assert_eq!(cfg.access.users, default_access_users());
        assert_eq!(
            cfg.access.user_max_unique_ips_mode,

@@ -918,6 +991,22 @@ mod tests {
            server.api.minimal_runtime_cache_ttl_ms,
            default_api_minimal_runtime_cache_ttl_ms()
        );
        assert_eq!(
            server.api.runtime_edge_enabled,
            default_api_runtime_edge_enabled()
        );
        assert_eq!(
            server.api.runtime_edge_cache_ttl_ms,
            default_api_runtime_edge_cache_ttl_ms()
        );
        assert_eq!(
            server.api.runtime_edge_top_n,
            default_api_runtime_edge_top_n()
        );
        assert_eq!(
            server.api.runtime_edge_events_capacity,
            default_api_runtime_edge_events_capacity()
        );

        let access = AccessConfig::default();
        assert_eq!(access.users, default_access_users());
@@ -1173,6 +1262,46 @@ mod tests {
        let _ = std::fs::remove_file(path);
    }

    #[test]
    fn me_adaptive_floor_max_active_writers_per_core_zero_is_rejected() {
        let toml = r#"
[general]
me_adaptive_floor_max_active_writers_per_core = 0

[censorship]
tls_domain = "example.com"

[access.users]
user = "00000000000000000000000000000000"
"#;
        let dir = std::env::temp_dir();
        let path = dir.join("telemt_me_adaptive_floor_max_active_per_core_zero_test.toml");
        std::fs::write(&path, toml).unwrap();
        let err = ProxyConfig::load(&path).unwrap_err().to_string();
        assert!(err.contains("general.me_adaptive_floor_max_active_writers_per_core must be > 0"));
        let _ = std::fs::remove_file(path);
    }

    #[test]
    fn me_adaptive_floor_max_warm_writers_global_zero_is_rejected() {
        let toml = r#"
[general]
me_adaptive_floor_max_warm_writers_global = 0

[censorship]
tls_domain = "example.com"

[access.users]
user = "00000000000000000000000000000000"
"#;
        let dir = std::env::temp_dir();
        let path = dir.join("telemt_me_adaptive_floor_max_warm_global_zero_test.toml");
        std::fs::write(&path, toml).unwrap();
        let err = ProxyConfig::load(&path).unwrap_err().to_string();
        assert!(err.contains("general.me_adaptive_floor_max_warm_writers_global must be > 0"));
        let _ = std::fs::remove_file(path);
    }

    #[test]
    fn upstream_connect_retry_attempts_zero_is_rejected() {
        let toml = r#"
@@ -1565,6 +1694,72 @@ mod tests {
        let _ = std::fs::remove_file(path);
    }

    #[test]
    fn api_runtime_edge_cache_ttl_out_of_range_is_rejected() {
        let toml = r#"
[server.api]
enabled = true
listen = "127.0.0.1:9091"
runtime_edge_cache_ttl_ms = 70000

[censorship]
tls_domain = "example.com"

[access.users]
user = "00000000000000000000000000000000"
"#;
        let dir = std::env::temp_dir();
        let path = dir.join("telemt_api_runtime_edge_cache_ttl_invalid_test.toml");
        std::fs::write(&path, toml).unwrap();
        let err = ProxyConfig::load(&path).unwrap_err().to_string();
        assert!(err.contains("server.api.runtime_edge_cache_ttl_ms must be within [0, 60000]"));
        let _ = std::fs::remove_file(path);
    }

    #[test]
    fn api_runtime_edge_top_n_out_of_range_is_rejected() {
        let toml = r#"
[server.api]
enabled = true
listen = "127.0.0.1:9091"
runtime_edge_top_n = 0

[censorship]
tls_domain = "example.com"

[access.users]
user = "00000000000000000000000000000000"
"#;
        let dir = std::env::temp_dir();
        let path = dir.join("telemt_api_runtime_edge_top_n_invalid_test.toml");
        std::fs::write(&path, toml).unwrap();
        let err = ProxyConfig::load(&path).unwrap_err().to_string();
        assert!(err.contains("server.api.runtime_edge_top_n must be within [1, 1000]"));
        let _ = std::fs::remove_file(path);
    }

    #[test]
    fn api_runtime_edge_events_capacity_out_of_range_is_rejected() {
        let toml = r#"
[server.api]
enabled = true
listen = "127.0.0.1:9091"
runtime_edge_events_capacity = 8

[censorship]
tls_domain = "example.com"

[access.users]
user = "00000000000000000000000000000000"
"#;
        let dir = std::env::temp_dir();
        let path = dir.join("telemt_api_runtime_edge_events_capacity_invalid_test.toml");
        std::fs::write(&path, toml).unwrap();
        let err = ProxyConfig::load(&path).unwrap_err().to_string();
        assert!(err.contains("server.api.runtime_edge_events_capacity must be within [16, 4096]"));
        let _ = std::fs::remove_file(path);
    }

    #[test]
    fn force_close_bumped_when_below_drain_ttl() {
        let toml = r#"
@@ -520,10 +520,47 @@ pub struct GeneralConfig {
    #[serde(default = "default_me_adaptive_floor_min_writers_single_endpoint")]
    pub me_adaptive_floor_min_writers_single_endpoint: u8,

    /// Minimum writer target for multi-endpoint DC groups in adaptive floor mode.
    #[serde(default = "default_me_adaptive_floor_min_writers_multi_endpoint")]
    pub me_adaptive_floor_min_writers_multi_endpoint: u8,

    /// Grace period in seconds to hold static floor after activity in adaptive mode.
    #[serde(default = "default_me_adaptive_floor_recover_grace_secs")]
    pub me_adaptive_floor_recover_grace_secs: u64,

    /// Global ME writer budget per logical CPU core in adaptive mode.
    #[serde(default = "default_me_adaptive_floor_writers_per_core_total")]
    pub me_adaptive_floor_writers_per_core_total: u16,

    /// Override logical CPU core count for adaptive floor calculations.
    /// Set to 0 to use runtime auto-detection.
    #[serde(default = "default_me_adaptive_floor_cpu_cores_override")]
    pub me_adaptive_floor_cpu_cores_override: u16,

    /// Per-core max extra writers above base required floor for single-endpoint DC groups.
    #[serde(default = "default_me_adaptive_floor_max_extra_writers_single_per_core")]
    pub me_adaptive_floor_max_extra_writers_single_per_core: u16,

    /// Per-core max extra writers above base required floor for multi-endpoint DC groups.
    #[serde(default = "default_me_adaptive_floor_max_extra_writers_multi_per_core")]
    pub me_adaptive_floor_max_extra_writers_multi_per_core: u16,

    /// Hard cap for active ME writers per logical CPU core.
    #[serde(default = "default_me_adaptive_floor_max_active_writers_per_core")]
    pub me_adaptive_floor_max_active_writers_per_core: u16,

    /// Hard cap for warm ME writers per logical CPU core.
    #[serde(default = "default_me_adaptive_floor_max_warm_writers_per_core")]
    pub me_adaptive_floor_max_warm_writers_per_core: u16,

    /// Hard global cap for active ME writers.
    #[serde(default = "default_me_adaptive_floor_max_active_writers_global")]
    pub me_adaptive_floor_max_active_writers_global: u32,

    /// Hard global cap for warm ME writers.
    #[serde(default = "default_me_adaptive_floor_max_warm_writers_global")]
    pub me_adaptive_floor_max_warm_writers_global: u32,

    /// Connect attempts for the selected upstream before returning error/fallback.
    #[serde(default = "default_upstream_connect_retry_attempts")]
    pub upstream_connect_retry_attempts: u32,
@@ -775,7 +812,16 @@ impl Default for GeneralConfig {
            me_floor_mode: MeFloorMode::default(),
            me_adaptive_floor_idle_secs: default_me_adaptive_floor_idle_secs(),
            me_adaptive_floor_min_writers_single_endpoint: default_me_adaptive_floor_min_writers_single_endpoint(),
            me_adaptive_floor_min_writers_multi_endpoint: default_me_adaptive_floor_min_writers_multi_endpoint(),
            me_adaptive_floor_recover_grace_secs: default_me_adaptive_floor_recover_grace_secs(),
            me_adaptive_floor_writers_per_core_total: default_me_adaptive_floor_writers_per_core_total(),
            me_adaptive_floor_cpu_cores_override: default_me_adaptive_floor_cpu_cores_override(),
            me_adaptive_floor_max_extra_writers_single_per_core: default_me_adaptive_floor_max_extra_writers_single_per_core(),
            me_adaptive_floor_max_extra_writers_multi_per_core: default_me_adaptive_floor_max_extra_writers_multi_per_core(),
            me_adaptive_floor_max_active_writers_per_core: default_me_adaptive_floor_max_active_writers_per_core(),
            me_adaptive_floor_max_warm_writers_per_core: default_me_adaptive_floor_max_warm_writers_per_core(),
            me_adaptive_floor_max_active_writers_global: default_me_adaptive_floor_max_active_writers_global(),
            me_adaptive_floor_max_warm_writers_global: default_me_adaptive_floor_max_warm_writers_global(),
            upstream_connect_retry_attempts: default_upstream_connect_retry_attempts(),
            upstream_connect_retry_backoff_ms: default_upstream_connect_retry_backoff_ms(),
            upstream_connect_budget_ms: default_upstream_connect_budget_ms(),
@@ -918,6 +964,22 @@ pub struct ApiConfig {
    #[serde(default = "default_api_minimal_runtime_cache_ttl_ms")]
    pub minimal_runtime_cache_ttl_ms: u64,

    /// Enables runtime edge endpoints with optional cached aggregation.
    #[serde(default = "default_api_runtime_edge_enabled")]
    pub runtime_edge_enabled: bool,

    /// Cache TTL for runtime edge aggregation payloads in milliseconds.
    #[serde(default = "default_api_runtime_edge_cache_ttl_ms")]
    pub runtime_edge_cache_ttl_ms: u64,

    /// Top-N limit for edge connection leaderboard payloads.
    #[serde(default = "default_api_runtime_edge_top_n")]
    pub runtime_edge_top_n: usize,

    /// Ring-buffer capacity for runtime edge control-plane events.
    #[serde(default = "default_api_runtime_edge_events_capacity")]
    pub runtime_edge_events_capacity: usize,

    /// Read-only mode: mutating endpoints are rejected.
    #[serde(default)]
    pub read_only: bool,

@@ -933,6 +995,10 @@ impl Default for ApiConfig {
            request_body_limit_bytes: default_api_request_body_limit_bytes(),
            minimal_runtime_enabled: default_api_minimal_runtime_enabled(),
            minimal_runtime_cache_ttl_ms: default_api_minimal_runtime_cache_ttl_ms(),
            runtime_edge_enabled: default_api_runtime_edge_enabled(),
            runtime_edge_cache_ttl_ms: default_api_runtime_edge_cache_ttl_ms(),
            runtime_edge_top_n: default_api_runtime_edge_top_n(),
            runtime_edge_events_capacity: default_api_runtime_edge_events_capacity(),
            read_only: false,
        }
    }
@@ -5,6 +5,7 @@
use std::collections::HashMap;
use std::net::IpAddr;
use std::sync::Arc;
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::{Duration, Instant};

use tokio::sync::RwLock;

@@ -18,6 +19,7 @@ pub struct UserIpTracker {
    max_ips: Arc<RwLock<HashMap<String, usize>>>,
    limit_mode: Arc<RwLock<UserMaxUniqueIpsMode>>,
    limit_window: Arc<RwLock<Duration>>,
    last_compact_epoch_secs: Arc<AtomicU64>,
}

impl UserIpTracker {

@@ -28,6 +30,54 @@ impl UserIpTracker {
            max_ips: Arc::new(RwLock::new(HashMap::new())),
            limit_mode: Arc::new(RwLock::new(UserMaxUniqueIpsMode::ActiveWindow)),
            limit_window: Arc::new(RwLock::new(Duration::from_secs(30))),
            last_compact_epoch_secs: Arc::new(AtomicU64::new(0)),
        }
    }

    fn now_epoch_secs() -> u64 {
        std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)
            .unwrap_or_default()
            .as_secs()
    }

    async fn maybe_compact_empty_users(&self) {
        const COMPACT_INTERVAL_SECS: u64 = 60;
        let now_epoch_secs = Self::now_epoch_secs();
        let last_compact_epoch_secs = self.last_compact_epoch_secs.load(Ordering::Relaxed);
        if now_epoch_secs.saturating_sub(last_compact_epoch_secs) < COMPACT_INTERVAL_SECS {
            return;
        }
        if self
            .last_compact_epoch_secs
            .compare_exchange(
                last_compact_epoch_secs,
                now_epoch_secs,
                Ordering::AcqRel,
                Ordering::Relaxed,
            )
            .is_err()
        {
            return;
        }

        let mut active_ips = self.active_ips.write().await;
        let mut recent_ips = self.recent_ips.write().await;
        let mut users = Vec::<String>::with_capacity(active_ips.len().saturating_add(recent_ips.len()));
        users.extend(active_ips.keys().cloned());
        for user in recent_ips.keys() {
            if !active_ips.contains_key(user) {
                users.push(user.clone());
            }
        }

        for user in users {
            let active_empty = active_ips.get(&user).map(|ips| ips.is_empty()).unwrap_or(true);
            let recent_empty = recent_ips.get(&user).map(|ips| ips.is_empty()).unwrap_or(true);
            if active_empty && recent_empty {
                active_ips.remove(&user);
                recent_ips.remove(&user);
            }
        }
    }
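The throttle at the top of `maybe_compact_empty_users` above can be isolated as a small primitive: a single `AtomicU64` timestamp where only the caller that wins the `compare_exchange` performs the sweep, and everyone else returns early. A synchronous, self-contained sketch of that gate (the `CompactGate` type is illustrative, not part of the crate; the real code then takes the tokio write locks):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Sketch of the compaction throttle: at most one caller per interval wins
// the compare_exchange and gets to run the (expensive) sweep.
struct CompactGate {
    last_epoch_secs: AtomicU64,
}

impl CompactGate {
    const INTERVAL_SECS: u64 = 60;

    fn try_acquire(&self, now_epoch_secs: u64) -> bool {
        let last = self.last_epoch_secs.load(Ordering::Relaxed);
        if now_epoch_secs.saturating_sub(last) < Self::INTERVAL_SECS {
            return false; // too soon since the last sweep
        }
        // Among concurrent callers past the interval, exactly one CAS succeeds.
        self.last_epoch_secs
            .compare_exchange(last, now_epoch_secs, Ordering::AcqRel, Ordering::Relaxed)
            .is_ok()
    }
}

fn main() {
    let gate = CompactGate { last_epoch_secs: AtomicU64::new(0) };
    assert!(gate.try_acquire(1_000));   // first caller past the interval wins
    assert!(!gate.try_acquire(1_030));  // 30s later: throttled
    assert!(gate.try_acquire(1_060));   // 60s after the win: open again
}
```

Losing the CAS (or being inside the interval) is treated as "someone else will compact", which keeps the hot `check_and_add`/`remove_ip` paths from ever blocking on cleanup.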
@@ -63,6 +113,7 @@ impl UserIpTracker {
    }

    pub async fn check_and_add(&self, username: &str, ip: IpAddr) -> Result<(), String> {
        self.maybe_compact_empty_users().await;
        let limit = {
            let max_ips = self.max_ips.read().await;
            max_ips.get(username).copied()

@@ -116,6 +167,7 @@ impl UserIpTracker {
    }

    pub async fn remove_ip(&self, username: &str, ip: IpAddr) {
        self.maybe_compact_empty_users().await;
        let mut active_ips = self.active_ips.write().await;
        if let Some(user_ips) = active_ips.get_mut(username) {
            if let Some(count) = user_ips.get_mut(&ip) {
568 src/main.rs
@@ -8,7 +8,7 @@ use std::time::{Duration, Instant, SystemTime, UNIX_EPOCH};
use rand::Rng;
use tokio::net::TcpListener;
use tokio::signal;
-use tokio::sync::{Semaphore, mpsc, watch};
+use tokio::sync::{RwLock, Semaphore, mpsc, watch};
use tracing::{debug, error, info, warn};
use tracing_subscriber::{EnvFilter, fmt, prelude::*, reload};
#[cfg(unix)]

@@ -26,6 +26,7 @@ mod protocol;
mod proxy;
mod stats;
mod stream;
mod startup;
mod transport;
mod tls_front;
mod util;
@@ -39,6 +40,14 @@ use crate::proxy::ClientHandler;
use crate::stats::beobachten::BeobachtenStore;
use crate::stats::telemetry::TelemetryPolicy;
use crate::stats::{ReplayChecker, Stats};
use crate::startup::{
    COMPONENT_API_BOOTSTRAP, COMPONENT_CONFIG_LOAD, COMPONENT_CONFIG_WATCHER_START,
    COMPONENT_DC_CONNECTIVITY_PING, COMPONENT_LISTENERS_BIND, COMPONENT_ME_CONNECTIVITY_PING,
    COMPONENT_ME_POOL_CONSTRUCT, COMPONENT_ME_POOL_INIT_STAGE1, COMPONENT_ME_PROXY_CONFIG_V4,
    COMPONENT_ME_PROXY_CONFIG_V6, COMPONENT_ME_SECRET_FETCH, COMPONENT_METRICS_START,
    COMPONENT_NETWORK_PROBE, COMPONENT_RUNTIME_READY, COMPONENT_TLS_FRONT_BOOTSTRAP,
    COMPONENT_TRACING_INIT, StartupMeStatus, StartupTracker,
};
use crate::stream::BufferPool;
use crate::transport::middle_proxy::{
    MePool, ProxyConfigData, fetch_proxy_config_with_raw, format_me_route, format_sample_line,
@@ -373,6 +382,10 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
        .duration_since(UNIX_EPOCH)
        .unwrap_or_default()
        .as_secs();
    let startup_tracker = Arc::new(StartupTracker::new(process_started_at_epoch_secs));
    startup_tracker
        .start_component(COMPONENT_CONFIG_LOAD, Some("load and validate config".to_string()))
        .await;
    let (config_path, cli_silent, cli_log_level) = parse_cli();

    let mut config = match ProxyConfig::load(&config_path) {

@@ -399,6 +412,9 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
        eprintln!("[telemt] Invalid network.dns_overrides: {}", e);
        std::process::exit(1);
    }
    startup_tracker
        .complete_component(COMPONENT_CONFIG_LOAD, Some("config is ready".to_string()))
        .await;

    let has_rust_log = std::env::var("RUST_LOG").is_ok();
    let effective_log_level = if cli_silent {

@@ -410,6 +426,9 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
    };

    let (filter_layer, filter_handle) = reload::Layer::new(EnvFilter::new("info"));
    startup_tracker
        .start_component(COMPONENT_TRACING_INIT, Some("initialize tracing subscriber".to_string()))
        .await;

    // Configure color output based on config
    let fmt_layer = if config.general.disable_colors {

@@ -422,6 +441,9 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
        .with(filter_layer)
        .with(fmt_layer)
        .init();
    startup_tracker
        .complete_component(COMPONENT_TRACING_INIT, Some("tracing initialized".to_string()))
        .await;

    info!("Telemt MTProxy v{}", env!("CARGO_PKG_VERSION"));
    info!("Log level: {}", effective_log_level);
@@ -473,6 +495,95 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
config.general.upstream_connect_failfast_hard_errors,
stats.clone(),
));
let ip_tracker = Arc::new(UserIpTracker::new());
ip_tracker.load_limits(&config.access.user_max_unique_ips).await;
ip_tracker
.set_limit_policy(
config.access.user_max_unique_ips_mode,
config.access.user_max_unique_ips_window_secs,
)
.await;
if !config.access.user_max_unique_ips.is_empty() {
info!(
"IP limits configured for {} users",
config.access.user_max_unique_ips.len()
);
}
if !config.network.dns_overrides.is_empty() {
info!(
"Runtime DNS overrides configured: {} entries",
config.network.dns_overrides.len()
);
}

let (api_config_tx, api_config_rx) = watch::channel(Arc::new(config.clone()));
let initial_admission_open = !config.general.use_middle_proxy;
let (admission_tx, admission_rx) = watch::channel(initial_admission_open);
let api_me_pool = Arc::new(RwLock::new(None::<Arc<MePool>>));
startup_tracker
.start_component(COMPONENT_API_BOOTSTRAP, Some("spawn API listener task".to_string()))
.await;

if config.server.api.enabled {
let listen = match config.server.api.listen.parse::<SocketAddr>() {
Ok(listen) => listen,
Err(error) => {
warn!(
error = %error,
listen = %config.server.api.listen,
"Invalid server.api.listen; API is disabled"
);
SocketAddr::from(([127, 0, 0, 1], 0))
}
};
if listen.port() != 0 {
let stats_api = stats.clone();
let ip_tracker_api = ip_tracker.clone();
let me_pool_api = api_me_pool.clone();
let upstream_manager_api = upstream_manager.clone();
let config_rx_api = api_config_rx.clone();
let admission_rx_api = admission_rx.clone();
let config_path_api = std::path::PathBuf::from(&config_path);
let startup_tracker_api = startup_tracker.clone();
tokio::spawn(async move {
api::serve(
listen,
stats_api,
ip_tracker_api,
me_pool_api,
upstream_manager_api,
config_rx_api,
admission_rx_api,
config_path_api,
None,
None,
process_started_at_epoch_secs,
startup_tracker_api,
)
.await;
});
startup_tracker
.complete_component(
COMPONENT_API_BOOTSTRAP,
Some(format!("api task spawned on {}", listen)),
)
.await;
} else {
startup_tracker
.skip_component(
COMPONENT_API_BOOTSTRAP,
Some("server.api.listen has zero port".to_string()),
)
.await;
}
} else {
startup_tracker
.skip_component(
COMPONENT_API_BOOTSTRAP,
Some("server.api.enabled is false".to_string()),
)
.await;
}

let mut tls_domains = Vec::with_capacity(1 + config.censorship.tls_domains.len());
tls_domains.push(config.censorship.tls_domain.clone());
@@ -483,6 +594,12 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
}

// Start TLS front fetching in background immediately, in parallel with STUN probing.
startup_tracker
.start_component(
COMPONENT_TLS_FRONT_BOOTSTRAP,
Some("initialize TLS front cache/bootstrap tasks".to_string()),
)
.await;
let tls_cache: Option<Arc<TlsFrontCache>> = if config.censorship.tls_emulation {
let cache = Arc::new(TlsFrontCache::new(
&tls_domains,
@@ -603,9 +720,26 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {

Some(cache)
} else {
startup_tracker
.skip_component(
COMPONENT_TLS_FRONT_BOOTSTRAP,
Some("censorship.tls_emulation is false".to_string()),
)
.await;
None
};
if tls_cache.is_some() {
startup_tracker
.complete_component(
COMPONENT_TLS_FRONT_BOOTSTRAP,
Some("tls front cache is initialized".to_string()),
)
.await;
}

startup_tracker
.start_component(COMPONENT_NETWORK_PROBE, Some("probe network capabilities".to_string()))
.await;
let probe = run_probe(
&config.network,
config.general.middle_proxy_nat_probe,
@@ -614,32 +748,18 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
.await?;
let decision = decide_network_capabilities(&config.network, &probe);
log_probe_result(&probe, &decision);
startup_tracker
.complete_component(
COMPONENT_NETWORK_PROBE,
Some("network capabilities determined".to_string()),
)
.await;

let prefer_ipv6 = decision.prefer_ipv6();
let mut use_middle_proxy = config.general.use_middle_proxy;
let beobachten = Arc::new(BeobachtenStore::new());
let rng = Arc::new(SecureRandom::new());

// IP Tracker initialization
let ip_tracker = Arc::new(UserIpTracker::new());
ip_tracker.load_limits(&config.access.user_max_unique_ips).await;
ip_tracker
.set_limit_policy(
config.access.user_max_unique_ips_mode,
config.access.user_max_unique_ips_window_secs,
)
.await;

if !config.access.user_max_unique_ips.is_empty() {
info!("IP limits configured for {} users", config.access.user_max_unique_ips.len());
}
if !config.network.dns_overrides.is_empty() {
info!(
"Runtime DNS overrides configured: {} entries",
config.network.dns_overrides.len()
);
}

// Connection concurrency limit
let max_connections = Arc::new(Semaphore::new(10_000));
@@ -657,6 +777,59 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
}
}

if use_middle_proxy {
startup_tracker
.set_me_status(StartupMeStatus::Initializing, COMPONENT_ME_SECRET_FETCH)
.await;
startup_tracker
.start_component(
COMPONENT_ME_SECRET_FETCH,
Some("fetch proxy-secret from source/cache".to_string()),
)
.await;
startup_tracker
.set_me_retry_limit(if !me2dc_fallback || me_init_retry_attempts == 0 {
"unlimited".to_string()
} else {
me_init_retry_attempts.to_string()
})
.await;
} else {
startup_tracker
.set_me_status(StartupMeStatus::Skipped, "skipped")
.await;
startup_tracker
.skip_component(
COMPONENT_ME_SECRET_FETCH,
Some("middle proxy mode disabled".to_string()),
)
.await;
startup_tracker
.skip_component(
COMPONENT_ME_PROXY_CONFIG_V4,
Some("middle proxy mode disabled".to_string()),
)
.await;
startup_tracker
.skip_component(
COMPONENT_ME_PROXY_CONFIG_V6,
Some("middle proxy mode disabled".to_string()),
)
.await;
startup_tracker
.skip_component(
COMPONENT_ME_POOL_CONSTRUCT,
Some("middle proxy mode disabled".to_string()),
)
.await;
startup_tracker
.skip_component(
COMPONENT_ME_POOL_INIT_STAGE1,
Some("middle proxy mode disabled".to_string()),
)
.await;
}

// =====================================================================
// Middle Proxy initialization (if enabled)
// =====================================================================
@@ -694,6 +867,9 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
{
Ok(proxy_secret) => break Some(proxy_secret),
Err(e) => {
startup_tracker
.set_me_last_error(Some(e.to_string()))
.await;
if me2dc_fallback {
error!(
error = %e,
@@ -713,6 +889,12 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
};
match proxy_secret {
Some(proxy_secret) => {
startup_tracker
.complete_component(
COMPONENT_ME_SECRET_FETCH,
Some("proxy-secret loaded".to_string()),
)
.await;
info!(
secret_len = proxy_secret.len(),
key_sig = format_args!(
@@ -731,6 +913,15 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
"Proxy-secret loaded"
);

startup_tracker
.start_component(
COMPONENT_ME_PROXY_CONFIG_V4,
Some("load startup proxy-config v4".to_string()),
)
.await;
startup_tracker
.set_me_status(StartupMeStatus::Initializing, COMPONENT_ME_PROXY_CONFIG_V4)
.await;
let cfg_v4 = load_startup_proxy_config_snapshot(
"https://core.telegram.org/getProxyConfig",
config.general.proxy_config_v4_cache_path.as_deref(),
@@ -738,6 +929,30 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
"getProxyConfig",
)
.await;
if cfg_v4.is_some() {
startup_tracker
.complete_component(
COMPONENT_ME_PROXY_CONFIG_V4,
Some("proxy-config v4 loaded".to_string()),
)
.await;
} else {
startup_tracker
.fail_component(
COMPONENT_ME_PROXY_CONFIG_V4,
Some("proxy-config v4 unavailable".to_string()),
)
.await;
}
startup_tracker
.start_component(
COMPONENT_ME_PROXY_CONFIG_V6,
Some("load startup proxy-config v6".to_string()),
)
.await;
startup_tracker
.set_me_status(StartupMeStatus::Initializing, COMPONENT_ME_PROXY_CONFIG_V6)
.await;
let cfg_v6 = load_startup_proxy_config_snapshot(
"https://core.telegram.org/getProxyConfigV6",
config.general.proxy_config_v6_cache_path.as_deref(),
@@ -745,8 +960,32 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
"getProxyConfigV6",
)
.await;
if cfg_v6.is_some() {
startup_tracker
.complete_component(
COMPONENT_ME_PROXY_CONFIG_V6,
Some("proxy-config v6 loaded".to_string()),
)
.await;
} else {
startup_tracker
.fail_component(
COMPONENT_ME_PROXY_CONFIG_V6,
Some("proxy-config v6 unavailable".to_string()),
)
.await;
}

if let (Some(cfg_v4), Some(cfg_v6)) = (cfg_v4, cfg_v6) {
startup_tracker
.start_component(
COMPONENT_ME_POOL_CONSTRUCT,
Some("construct ME pool".to_string()),
)
.await;
startup_tracker
.set_me_status(StartupMeStatus::Initializing, COMPONENT_ME_POOL_CONSTRUCT)
.await;
let pool = MePool::new(
proxy_tag.clone(),
proxy_secret,
@@ -786,7 +1025,16 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
config.general.me_floor_mode,
config.general.me_adaptive_floor_idle_secs,
config.general.me_adaptive_floor_min_writers_single_endpoint,
config.general.me_adaptive_floor_min_writers_multi_endpoint,
config.general.me_adaptive_floor_recover_grace_secs,
config.general.me_adaptive_floor_writers_per_core_total,
config.general.me_adaptive_floor_cpu_cores_override,
config.general.me_adaptive_floor_max_extra_writers_single_per_core,
config.general.me_adaptive_floor_max_extra_writers_multi_per_core,
config.general.me_adaptive_floor_max_active_writers_per_core,
config.general.me_adaptive_floor_max_warm_writers_per_core,
config.general.me_adaptive_floor_max_active_writers_global,
config.general.me_adaptive_floor_max_warm_writers_global,
config.general.hardswap,
config.general.me_pool_drain_ttl_secs,
config.general.effective_me_pool_force_close_secs(),
@@ -808,12 +1056,44 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
config.general.me_route_inline_recovery_attempts,
config.general.me_route_inline_recovery_wait_ms,
);
startup_tracker
.complete_component(
COMPONENT_ME_POOL_CONSTRUCT,
Some("ME pool object created".to_string()),
)
.await;
*api_me_pool.write().await = Some(pool.clone());
startup_tracker
.start_component(
COMPONENT_ME_POOL_INIT_STAGE1,
Some("initialize ME pool writers".to_string()),
)
.await;
startup_tracker
.set_me_status(
StartupMeStatus::Initializing,
COMPONENT_ME_POOL_INIT_STAGE1,
)
.await;

let mut init_attempt: u32 = 0;
loop {
init_attempt = init_attempt.saturating_add(1);
startup_tracker.set_me_init_attempt(init_attempt).await;
match pool.init(pool_size, &rng).await {
Ok(()) => {
startup_tracker
.set_me_last_error(None)
.await;
startup_tracker
.complete_component(
COMPONENT_ME_POOL_INIT_STAGE1,
Some("ME pool initialized".to_string()),
)
.await;
startup_tracker
.set_me_status(StartupMeStatus::Ready, "ready")
.await;
info!(
attempt = init_attempt,
"Middle-End pool initialized successfully"
@@ -833,8 +1113,20 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
break Some(pool);
}
Err(e) => {
startup_tracker
.set_me_last_error(Some(e.to_string()))
.await;
let retries_limited = me2dc_fallback && me_init_retry_attempts > 0;
if retries_limited && init_attempt >= me_init_retry_attempts {
startup_tracker
.fail_component(
COMPONENT_ME_POOL_INIT_STAGE1,
Some("ME init retry budget exhausted".to_string()),
)
.await;
startup_tracker
.set_me_status(StartupMeStatus::Failed, "failed")
.await;
error!(
error = %e,
attempt = init_attempt,
@@ -874,10 +1166,60 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
}
}
} else {
startup_tracker
.skip_component(
COMPONENT_ME_POOL_CONSTRUCT,
Some("ME configs are incomplete".to_string()),
)
.await;
startup_tracker
.fail_component(
COMPONENT_ME_POOL_INIT_STAGE1,
Some("ME configs are incomplete".to_string()),
)
.await;
startup_tracker
.set_me_status(StartupMeStatus::Failed, "failed")
.await;
None
}
}
None => None,
None => {
startup_tracker
.fail_component(
COMPONENT_ME_SECRET_FETCH,
Some("proxy-secret unavailable".to_string()),
)
.await;
startup_tracker
.skip_component(
COMPONENT_ME_PROXY_CONFIG_V4,
Some("proxy-secret unavailable".to_string()),
)
.await;
startup_tracker
.skip_component(
COMPONENT_ME_PROXY_CONFIG_V6,
Some("proxy-secret unavailable".to_string()),
)
.await;
startup_tracker
.skip_component(
COMPONENT_ME_POOL_CONSTRUCT,
Some("proxy-secret unavailable".to_string()),
)
.await;
startup_tracker
.fail_component(
COMPONENT_ME_POOL_INIT_STAGE1,
Some("proxy-secret unavailable".to_string()),
)
.await;
startup_tracker
.set_me_status(StartupMeStatus::Failed, "failed")
.await;
None
}
}
} else {
None
@@ -885,12 +1227,33 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {

// If ME failed to initialize, force direct-only mode.
if me_pool.is_some() {
startup_tracker
.set_transport_mode("middle_proxy")
.await;
startup_tracker
.set_degraded(false)
.await;
info!("Transport: Middle-End Proxy - all DC-over-RPC");
} else {
let _ = use_middle_proxy;
use_middle_proxy = false;
// Make runtime config reflect direct-only mode for handlers.
config.general.use_middle_proxy = false;
startup_tracker
.set_transport_mode("direct")
.await;
startup_tracker
.set_degraded(true)
.await;
if me2dc_fallback {
startup_tracker
.set_me_status(StartupMeStatus::Failed, "fallback_to_direct")
.await;
} else {
startup_tracker
.set_me_status(StartupMeStatus::Skipped, "skipped")
.await;
}
info!("Transport: Direct DC - TCP - standard DC-over-TCP");
}
@@ -905,6 +1268,21 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
let buffer_pool = Arc::new(BufferPool::with_config(16 * 1024, 4096));

// Middle-End ping before DC connectivity
if me_pool.is_some() {
startup_tracker
.start_component(
COMPONENT_ME_CONNECTIVITY_PING,
Some("run startup ME connectivity check".to_string()),
)
.await;
} else {
startup_tracker
.skip_component(
COMPONENT_ME_CONNECTIVITY_PING,
Some("ME pool is not available".to_string()),
)
.await;
}
if let Some(ref pool) = me_pool {
let me_results = run_me_ping(pool, &rng).await;

@@ -942,22 +1320,21 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
let mut grouped: BTreeMap<i32, Vec<MePingSample>> = BTreeMap::new();
for report in me_results {
for s in report.samples {
let key = s.dc.abs();
grouped.entry(key).or_default().push(s);
grouped.entry(s.dc).or_default().push(s);
}
}

let family_order = if prefer_ipv6 {
vec![(MePingFamily::V6, true), (MePingFamily::V6, false), (MePingFamily::V4, true), (MePingFamily::V4, false)]
vec![MePingFamily::V6, MePingFamily::V4]
} else {
vec![(MePingFamily::V4, true), (MePingFamily::V4, false), (MePingFamily::V6, true), (MePingFamily::V6, false)]
vec![MePingFamily::V4, MePingFamily::V6]
};

for (dc_abs, samples) in grouped {
for (family, is_pos) in &family_order {
for (dc, samples) in grouped {
for family in &family_order {
let fam_samples: Vec<&MePingSample> = samples
.iter()
.filter(|s| matches!(s.family, f if &f == family) && (s.dc >= 0) == *is_pos)
.filter(|s| matches!(s.family, f if &f == family))
.collect();
if fam_samples.is_empty() {
continue;
@@ -967,7 +1344,7 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
MePingFamily::V4 => "IPv4",
MePingFamily::V6 => "IPv6",
};
info!(" DC{} [{}]", dc_abs, fam_label);
info!(" DC{} [{}]", dc, fam_label);
for sample in fam_samples {
let line = format_sample_line(sample);
info!("{}", line);
@@ -975,9 +1352,21 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
}
}
info!("============================================================");
startup_tracker
.complete_component(
COMPONENT_ME_CONNECTIVITY_PING,
Some("startup ME connectivity check completed".to_string()),
)
.await;
}

info!("================= Telegram DC Connectivity =================");
startup_tracker
.start_component(
COMPONENT_DC_CONNECTIVITY_PING,
Some("run startup DC connectivity check".to_string()),
)
.await;

let ping_results = upstream_manager
.ping_all_dcs(
@@ -1057,9 +1446,21 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
info!("============================================================");
}
}
startup_tracker
.complete_component(
COMPONENT_DC_CONNECTIVITY_PING,
Some("startup DC connectivity check completed".to_string()),
)
.await;

let initialized_secs = process_started_at.elapsed().as_secs();
let second_suffix = if initialized_secs == 1 { "" } else { "s" };
startup_tracker
.start_component(
COMPONENT_RUNTIME_READY,
Some("finalize startup runtime state".to_string()),
)
.await;
info!("===================== Telegram Startup =====================");
info!(
" DC/ME Initialized in {} second{}",
@@ -1070,6 +1471,7 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
if let Some(ref pool) = me_pool {
pool.set_runtime_ready(true);
}
*api_me_pool.write().await = me_pool.clone();

// Background tasks
let um_clone = upstream_manager.clone();
@@ -1101,6 +1503,12 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
// ── Hot-reload watcher ────────────────────────────────────────────────
// Uses inotify to detect file changes instantly (SIGHUP also works).
// detected_ip_v4/v6 are passed so newly added users get correct TG links.
startup_tracker
.start_component(
COMPONENT_CONFIG_WATCHER_START,
Some("spawn config hot-reload watcher".to_string()),
)
.await;
let (config_rx, mut log_level_rx): (
tokio::sync::watch::Receiver<Arc<ProxyConfig>>,
tokio::sync::watch::Receiver<LogLevel>,
@@ -1110,6 +1518,23 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
detected_ip_v4,
detected_ip_v6,
);
startup_tracker
.complete_component(
COMPONENT_CONFIG_WATCHER_START,
Some("config hot-reload watcher started".to_string()),
)
.await;
let mut config_rx_api_bridge = config_rx.clone();
let api_config_tx_bridge = api_config_tx.clone();
tokio::spawn(async move {
loop {
if config_rx_api_bridge.changed().await.is_err() {
break;
}
let cfg = config_rx_api_bridge.borrow_and_update().clone();
api_config_tx_bridge.send_replace(cfg);
}
});

let stats_policy = stats.clone();
let mut config_rx_policy = config_rx.clone();
@@ -1240,6 +1665,12 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
});
}

startup_tracker
.start_component(
COMPONENT_LISTENERS_BIND,
Some("bind TCP/Unix listeners".to_string()),
)
.await;
let mut listeners = Vec::new();

for listener_conf in &config.server.listeners {
@@ -1341,7 +1772,6 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
print_proxy_links(&host, port, &config);
}

let (admission_tx, admission_rx) = watch::channel(true);
if config.general.use_middle_proxy {
if let Some(pool) = me_pool.as_ref() {
let initial_open = pool.admission_ready_conditional_cast().await;
@@ -1491,6 +1921,16 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
}
});
}
startup_tracker
.complete_component(
COMPONENT_LISTENERS_BIND,
Some(format!(
"listeners configured tcp={} unix={}",
listeners.len(),
has_unix_listener
)),
)
.await;

if listeners.is_empty() && !has_unix_listener {
error!("No listeners. Exiting.");
@@ -1524,6 +1964,12 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
});

if let Some(port) = config.server.metrics_port {
startup_tracker
.start_component(
COMPONENT_METRICS_START,
Some(format!("spawn metrics endpoint on {}", port)),
)
.await;
let stats = stats.clone();
let beobachten = beobachten.clone();
let config_rx_metrics = config_rx.clone();
@@ -1540,48 +1986,28 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
)
.await;
});
startup_tracker
.complete_component(
COMPONENT_METRICS_START,
Some("metrics task spawned".to_string()),
)
.await;
} else {
startup_tracker
.skip_component(
COMPONENT_METRICS_START,
Some("server.metrics_port is not configured".to_string()),
)
.await;
}

if config.server.api.enabled {
let listen = match config.server.api.listen.parse::<SocketAddr>() {
Ok(listen) => listen,
Err(error) => {
warn!(
error = %error,
listen = %config.server.api.listen,
"Invalid server.api.listen; API is disabled"
);
SocketAddr::from(([127, 0, 0, 1], 0))
}
};
if listen.port() != 0 {
let stats = stats.clone();
let ip_tracker_api = ip_tracker.clone();
let me_pool_api = me_pool.clone();
let upstream_manager_api = upstream_manager.clone();
let config_rx_api = config_rx.clone();
let admission_rx_api = admission_rx.clone();
let config_path_api = std::path::PathBuf::from(&config_path);
let startup_detected_ip_v4 = detected_ip_v4;
let startup_detected_ip_v6 = detected_ip_v6;
tokio::spawn(async move {
api::serve(
listen,
stats,
ip_tracker_api,
me_pool_api,
upstream_manager_api,
config_rx_api,
admission_rx_api,
config_path_api,
startup_detected_ip_v4,
startup_detected_ip_v6,
process_started_at_epoch_secs,
)
.await;
});
}
}
startup_tracker
.complete_component(
COMPONENT_RUNTIME_READY,
Some("startup pipeline is fully initialized".to_string()),
)
.await;
startup_tracker.mark_ready().await;

for (listener, listener_proxy_protocol) in listeners {
let mut config_rx: tokio::sync::watch::Receiver<Arc<ProxyConfig>> = config_rx.clone();
223
src/metrics.rs
@@ -968,6 +968,229 @@ async fn render_metrics(stats: &Stats, config: &ProxyConfig, ip_tracker: &UserIp
0
|
||||
}
|
||||
);
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"# HELP telemt_me_adaptive_floor_cpu_cores_detected Runtime detected logical CPU cores for adaptive floor"
|
||||
);
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"# TYPE telemt_me_adaptive_floor_cpu_cores_detected gauge"
|
||||
);
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"telemt_me_adaptive_floor_cpu_cores_detected {}",
|
||||
if me_allows_normal {
|
||||
stats.get_me_floor_cpu_cores_detected_gauge()
|
||||
} else {
|
||||
0
|
||||
}
|
||||
);
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"# HELP telemt_me_adaptive_floor_cpu_cores_effective Runtime effective logical CPU cores for adaptive floor"
|
||||
);
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"# TYPE telemt_me_adaptive_floor_cpu_cores_effective gauge"
|
||||
);
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"telemt_me_adaptive_floor_cpu_cores_effective {}",
|
||||
if me_allows_normal {
|
||||
stats.get_me_floor_cpu_cores_effective_gauge()
|
||||
} else {
|
||||
0
|
||||
}
|
||||
);
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"# HELP telemt_me_adaptive_floor_global_cap_raw Runtime raw global adaptive floor cap"
|
||||
);
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"# TYPE telemt_me_adaptive_floor_global_cap_raw gauge"
|
||||
);
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"telemt_me_adaptive_floor_global_cap_raw {}",
|
||||
if me_allows_normal {
|
||||
stats.get_me_floor_global_cap_raw_gauge()
|
||||
} else {
|
||||
0
|
||||
}
|
||||
);
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"# HELP telemt_me_adaptive_floor_global_cap_effective Runtime effective global adaptive floor cap"
|
||||
);
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"# TYPE telemt_me_adaptive_floor_global_cap_effective gauge"
|
||||
);
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"telemt_me_adaptive_floor_global_cap_effective {}",
|
||||
if me_allows_normal {
|
||||
stats.get_me_floor_global_cap_effective_gauge()
|
||||
} else {
|
||||
0
|
||||
}
|
||||
);
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"# HELP telemt_me_adaptive_floor_target_writers_total Runtime adaptive floor target writers total"
|
||||
);
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"# TYPE telemt_me_adaptive_floor_target_writers_total gauge"
|
||||
);
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"telemt_me_adaptive_floor_target_writers_total {}",
|
||||
if me_allows_normal {
|
||||
stats.get_me_floor_target_writers_total_gauge()
|
||||
} else {
|
||||
0
|
||||
}
|
||||
);
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"# HELP telemt_me_adaptive_floor_active_cap_configured Runtime configured active writer cap"
|
||||
);
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"# TYPE telemt_me_adaptive_floor_active_cap_configured gauge"
|
||||
);
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"telemt_me_adaptive_floor_active_cap_configured {}",
|
||||
if me_allows_normal {
|
||||
stats.get_me_floor_active_cap_configured_gauge()
|
||||
} else {
|
||||
0
|
||||
}
|
||||
);
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"# HELP telemt_me_adaptive_floor_active_cap_effective Runtime effective active writer cap"
|
||||
);
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"# TYPE telemt_me_adaptive_floor_active_cap_effective gauge"
|
||||
);
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"telemt_me_adaptive_floor_active_cap_effective {}",
|
||||
if me_allows_normal {
|
||||
stats.get_me_floor_active_cap_effective_gauge()
|
||||
} else {
|
||||
0
|
||||
}
|
||||
);
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"# HELP telemt_me_adaptive_floor_warm_cap_configured Runtime configured warm writer cap"
|
||||
);
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"# TYPE telemt_me_adaptive_floor_warm_cap_configured gauge"
|
||||
);
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"telemt_me_adaptive_floor_warm_cap_configured {}",
|
||||
if me_allows_normal {
|
||||
stats.get_me_floor_warm_cap_configured_gauge()
|
||||
} else {
|
||||
0
|
||||
}
|
||||
);
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"# HELP telemt_me_adaptive_floor_warm_cap_effective Runtime effective warm writer cap"
|
||||
);
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"# TYPE telemt_me_adaptive_floor_warm_cap_effective gauge"
|
||||
);
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"telemt_me_adaptive_floor_warm_cap_effective {}",
|
||||
if me_allows_normal {
|
||||
stats.get_me_floor_warm_cap_effective_gauge()
|
||||
} else {
|
||||
0
|
||||
}
|
||||
);
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"# HELP telemt_me_writers_active_current Current non-draining active ME writers"
|
||||
);
|
||||
let _ = writeln!(out, "# TYPE telemt_me_writers_active_current gauge");
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"telemt_me_writers_active_current {}",
|
||||
if me_allows_normal {
|
||||
stats.get_me_writers_active_current_gauge()
|
||||
} else {
|
||||
0
|
||||
}
|
||||
);
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"# HELP telemt_me_writers_warm_current Current non-draining warm ME writers"
|
||||
);
|
||||
let _ = writeln!(out, "# TYPE telemt_me_writers_warm_current gauge");
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"telemt_me_writers_warm_current {}",
|
||||
if me_allows_normal {
|
||||
stats.get_me_writers_warm_current_gauge()
|
||||
} else {
|
||||
0
|
||||
}
|
||||
);
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"# HELP telemt_me_floor_cap_block_total Reconnect attempts blocked by adaptive floor caps"
|
||||
);
|
||||
let _ = writeln!(out, "# TYPE telemt_me_floor_cap_block_total counter");
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"telemt_me_floor_cap_block_total {}",
|
||||
if me_allows_normal {
|
||||
stats.get_me_floor_cap_block_total()
|
||||
} else {
|
||||
0
|
||||
}
|
||||
);
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"# HELP telemt_me_floor_swap_idle_total Adaptive floor cap recovery via idle writer swap"
|
||||
);
|
||||
let _ = writeln!(out, "# TYPE telemt_me_floor_swap_idle_total counter");
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"telemt_me_floor_swap_idle_total {}",
|
||||
if me_allows_normal {
|
||||
stats.get_me_floor_swap_idle_total()
|
||||
} else {
|
||||
0
|
||||
}
|
||||
);
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"# HELP telemt_me_floor_swap_idle_failed_total Failed idle swap attempts under adaptive floor caps"
|
||||
);
|
||||
let _ = writeln!(out, "# TYPE telemt_me_floor_swap_idle_failed_total counter");
|
||||
let _ = writeln!(
|
||||
out,
|
||||
"telemt_me_floor_swap_idle_failed_total {}",
|
||||
if me_allows_normal {
|
||||
stats.get_me_floor_swap_idle_failed_total()
|
||||
} else {
|
||||
0
|
||||
}
|
||||
);
|
||||
|
||||
let _ = writeln!(out, "# HELP telemt_secure_padding_invalid_total Invalid secure frame lengths");
|
||||
let _ = writeln!(out, "# TYPE telemt_secure_padding_invalid_total counter");
|
||||
|
||||
@@ -57,6 +57,7 @@ where

    stats.increment_user_connects(user);
    stats.increment_user_curr_connects(user);
    stats.increment_current_connections_direct();

    let relay_result = relay_bidirectional(
        client_reader,
@@ -69,6 +70,7 @@ where
    )
    .await;

    stats.decrement_current_connections_direct();
    stats.decrement_user_curr_connects(user);

    match &relay_result {

@@ -6,6 +6,7 @@ use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::{Arc, Mutex, OnceLock};
use std::time::{Duration, Instant};

use bytes::Bytes;
use tokio::io::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt};
use tokio::sync::{mpsc, oneshot};
use tracing::{debug, trace, warn};
@@ -20,7 +21,7 @@ use crate::stream::{BufferPool, CryptoReader, CryptoWriter};
use crate::transport::middle_proxy::{MePool, MeResponse, proto_flags_for_tag};

enum C2MeCommand {
    Data { payload: Vec<u8>, flags: u32 },
    Data { payload: Bytes, flags: u32 },
    Close,
}

@@ -237,6 +238,7 @@ where

    stats.increment_user_connects(&user);
    stats.increment_user_curr_connects(&user);
    stats.increment_current_connections_me();

    // Per-user ad_tag from access.user_ad_tags; fallback to general.ad_tag (hot-reloadable)
    let user_tag: Option<Vec<u8>> = config
@@ -282,7 +284,7 @@ where
        success.dc_idx,
        peer,
        translated_local_addr,
        &payload,
        payload.as_ref(),
        flags,
        effective_tag.as_deref(),
    ).await?;
@@ -466,6 +468,7 @@ where
        "ME relay cleanup"
    );
    me_pool.registry().unregister(conn_id).await;
    stats.decrement_current_connections_me();
    stats.decrement_user_curr_connects(&user);
    result
}
@@ -477,7 +480,7 @@ async fn read_client_payload<R>(
    forensics: &RelayForensicsState,
    frame_counter: &mut u64,
    stats: &Stats,
) -> Result<Option<(Vec<u8>, bool)>>
) -> Result<Option<(Bytes, bool)>>
where
    R: AsyncRead + Unpin + Send + 'static,
{
@@ -576,7 +579,7 @@ where
            payload.truncate(secure_payload_len);
        }
        *frame_counter += 1;
        return Ok(Some((payload, quickack)));
        return Ok(Some((Bytes::from(payload), quickack)));
    }
}

@@ -713,7 +716,7 @@ mod tests {
    enqueue_c2me_command(
        &tx,
        C2MeCommand::Data {
            payload: vec![1, 2, 3],
            payload: Bytes::from_static(&[1, 2, 3]),
            flags: 0,
        },
    )
@@ -726,7 +729,7 @@ mod tests {
    .unwrap();
    match recv {
        C2MeCommand::Data { payload, flags } => {
            assert_eq!(payload, vec![1, 2, 3]);
            assert_eq!(payload.as_ref(), &[1, 2, 3]);
            assert_eq!(flags, 0);
        }
        C2MeCommand::Close => panic!("unexpected close command"),
@@ -737,7 +740,7 @@ mod tests {
    async fn enqueue_c2me_command_falls_back_to_send_when_queue_is_full() {
        let (tx, mut rx) = mpsc::channel::<C2MeCommand>(1);
        tx.send(C2MeCommand::Data {
            payload: vec![9],
            payload: Bytes::from_static(&[9]),
            flags: 9,
        })
        .await
@@ -748,7 +751,7 @@ mod tests {
    enqueue_c2me_command(
        &tx2,
        C2MeCommand::Data {
            payload: vec![7, 7],
            payload: Bytes::from_static(&[7, 7]),
            flags: 7,
        },
    )
@@ -767,7 +770,7 @@ mod tests {
    .unwrap();
    match recv {
        C2MeCommand::Data { payload, flags } => {
            assert_eq!(payload, vec![7, 7]);
            assert_eq!(payload.as_ref(), &[7, 7]);
            assert_eq!(flags, 7);
        }
        C2MeCommand::Close => panic!("unexpected close command"),

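The hunks above switch the C2Me payload type from `Vec<u8>` to `bytes::Bytes`, so queuing or retrying a frame no longer copies the buffer. As a stdlib-only sketch of the idea (the `bytes` crate itself is not used here; `Arc<[u8]>` plays the same role of a cheaply clonable, immutable byte buffer):

```rust
use std::sync::Arc;

// bytes::Bytes is, conceptually, a reference-counted byte buffer: cloning a
// handle is O(1) and never copies the payload. Arc<[u8]> shows the same
// sharing behaviour with only the standard library.
fn share(payload: Vec<u8>) -> (Arc<[u8]>, Arc<[u8]>) {
    let original: Arc<[u8]> = Arc::from(payload);
    let cheap_copy = Arc::clone(&original); // refcount bump, no memcpy
    (original, cheap_copy)
}

fn main() {
    let (a, b) = share(vec![1, 2, 3]);
    // Both handles alias the same allocation.
    assert_eq!(a.as_ref(), &[1, 2, 3]);
    assert_eq!(b.as_ref(), &[1, 2, 3]);
    assert_eq!(Arc::strong_count(&a), 2);
}
```

This is why the updated tests can use `Bytes::from_static(&[1, 2, 3])`: a static slice is wrapped without any allocation at all.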
373
src/startup.rs
Normal file
@@ -0,0 +1,373 @@
use std::time::{Instant, SystemTime, UNIX_EPOCH};

use tokio::sync::RwLock;

pub const COMPONENT_CONFIG_LOAD: &str = "config_load";
pub const COMPONENT_TRACING_INIT: &str = "tracing_init";
pub const COMPONENT_API_BOOTSTRAP: &str = "api_bootstrap";
pub const COMPONENT_TLS_FRONT_BOOTSTRAP: &str = "tls_front_bootstrap";
pub const COMPONENT_NETWORK_PROBE: &str = "network_probe";
pub const COMPONENT_ME_SECRET_FETCH: &str = "me_secret_fetch";
pub const COMPONENT_ME_PROXY_CONFIG_V4: &str = "me_proxy_config_fetch_v4";
pub const COMPONENT_ME_PROXY_CONFIG_V6: &str = "me_proxy_config_fetch_v6";
pub const COMPONENT_ME_POOL_CONSTRUCT: &str = "me_pool_construct";
pub const COMPONENT_ME_POOL_INIT_STAGE1: &str = "me_pool_init_stage1";
pub const COMPONENT_ME_CONNECTIVITY_PING: &str = "me_connectivity_ping";
pub const COMPONENT_DC_CONNECTIVITY_PING: &str = "dc_connectivity_ping";
pub const COMPONENT_LISTENERS_BIND: &str = "listeners_bind";
pub const COMPONENT_CONFIG_WATCHER_START: &str = "config_watcher_start";
pub const COMPONENT_METRICS_START: &str = "metrics_start";
pub const COMPONENT_RUNTIME_READY: &str = "runtime_ready";

#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum StartupStatus {
    Initializing,
    Ready,
}

impl StartupStatus {
    pub fn as_str(self) -> &'static str {
        match self {
            Self::Initializing => "initializing",
            Self::Ready => "ready",
        }
    }
}

#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum StartupComponentStatus {
    Pending,
    Running,
    Ready,
    Failed,
    Skipped,
}

impl StartupComponentStatus {
    pub fn as_str(self) -> &'static str {
        match self {
            Self::Pending => "pending",
            Self::Running => "running",
            Self::Ready => "ready",
            Self::Failed => "failed",
            Self::Skipped => "skipped",
        }
    }
}

#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum StartupMeStatus {
    Pending,
    Initializing,
    Ready,
    Failed,
    Skipped,
}

impl StartupMeStatus {
    pub fn as_str(self) -> &'static str {
        match self {
            Self::Pending => "pending",
            Self::Initializing => "initializing",
            Self::Ready => "ready",
            Self::Failed => "failed",
            Self::Skipped => "skipped",
        }
    }
}

#[derive(Clone, Debug)]
pub struct StartupComponentSnapshot {
    pub id: &'static str,
    pub title: &'static str,
    pub weight: f64,
    pub status: StartupComponentStatus,
    pub started_at_epoch_ms: Option<u64>,
    pub finished_at_epoch_ms: Option<u64>,
    pub duration_ms: Option<u64>,
    pub attempts: u32,
    pub details: Option<String>,
}

#[derive(Clone, Debug)]
pub struct StartupMeSnapshot {
    pub status: StartupMeStatus,
    pub current_stage: String,
    pub init_attempt: u32,
    pub retry_limit: String,
    pub last_error: Option<String>,
}

#[derive(Clone, Debug)]
pub struct StartupSnapshot {
    pub status: StartupStatus,
    pub degraded: bool,
    pub current_stage: String,
    pub started_at_epoch_secs: u64,
    pub ready_at_epoch_secs: Option<u64>,
    pub total_elapsed_ms: u64,
    pub transport_mode: String,
    pub me: StartupMeSnapshot,
    pub components: Vec<StartupComponentSnapshot>,
}

#[derive(Clone, Debug)]
struct StartupComponent {
    id: &'static str,
    title: &'static str,
    weight: f64,
    status: StartupComponentStatus,
    started_at_epoch_ms: Option<u64>,
    finished_at_epoch_ms: Option<u64>,
    duration_ms: Option<u64>,
    attempts: u32,
    details: Option<String>,
}

#[derive(Clone, Debug)]
struct StartupState {
    status: StartupStatus,
    degraded: bool,
    current_stage: String,
    started_at_epoch_secs: u64,
    ready_at_epoch_secs: Option<u64>,
    transport_mode: String,
    me: StartupMeSnapshot,
    components: Vec<StartupComponent>,
}

pub struct StartupTracker {
    started_at_instant: Instant,
    state: RwLock<StartupState>,
}

impl StartupTracker {
    pub fn new(started_at_epoch_secs: u64) -> Self {
        Self {
            started_at_instant: Instant::now(),
            state: RwLock::new(StartupState {
                status: StartupStatus::Initializing,
                degraded: false,
                current_stage: COMPONENT_CONFIG_LOAD.to_string(),
                started_at_epoch_secs,
                ready_at_epoch_secs: None,
                transport_mode: "unknown".to_string(),
                me: StartupMeSnapshot {
                    status: StartupMeStatus::Pending,
                    current_stage: "pending".to_string(),
                    init_attempt: 0,
                    retry_limit: "unlimited".to_string(),
                    last_error: None,
                },
                components: component_blueprint(),
            }),
        }
    }

    pub async fn set_transport_mode(&self, mode: &'static str) {
        self.state.write().await.transport_mode = mode.to_string();
    }

    pub async fn set_degraded(&self, degraded: bool) {
        self.state.write().await.degraded = degraded;
    }

    pub async fn start_component(&self, id: &'static str, details: Option<String>) {
        let mut guard = self.state.write().await;
        guard.current_stage = id.to_string();
        if let Some(component) = guard.components.iter_mut().find(|component| component.id == id) {
            if component.started_at_epoch_ms.is_none() {
                component.started_at_epoch_ms = Some(now_epoch_ms());
            }
            component.attempts = component.attempts.saturating_add(1);
            component.status = StartupComponentStatus::Running;
            component.details = normalize_details(details);
        }
    }

    pub async fn complete_component(&self, id: &'static str, details: Option<String>) {
        self.finish_component(id, StartupComponentStatus::Ready, details)
            .await;
    }

    pub async fn fail_component(&self, id: &'static str, details: Option<String>) {
        self.finish_component(id, StartupComponentStatus::Failed, details)
            .await;
    }

    pub async fn skip_component(&self, id: &'static str, details: Option<String>) {
        self.finish_component(id, StartupComponentStatus::Skipped, details)
            .await;
    }

    async fn finish_component(
        &self,
        id: &'static str,
        status: StartupComponentStatus,
        details: Option<String>,
    ) {
        let mut guard = self.state.write().await;
        let finished_at = now_epoch_ms();
        if let Some(component) = guard.components.iter_mut().find(|component| component.id == id) {
            if component.started_at_epoch_ms.is_none() {
                component.started_at_epoch_ms = Some(finished_at);
                component.attempts = component.attempts.saturating_add(1);
            }
            component.finished_at_epoch_ms = Some(finished_at);
            component.duration_ms = component
                .started_at_epoch_ms
                .map(|started_at| finished_at.saturating_sub(started_at));
            component.status = status;
            component.details = normalize_details(details);
        }
    }

    pub async fn set_me_status(&self, status: StartupMeStatus, stage: &'static str) {
        let mut guard = self.state.write().await;
        guard.me.status = status;
        guard.me.current_stage = stage.to_string();
    }

    pub async fn set_me_retry_limit(&self, retry_limit: String) {
        self.state.write().await.me.retry_limit = retry_limit;
    }

    pub async fn set_me_init_attempt(&self, attempt: u32) {
        self.state.write().await.me.init_attempt = attempt;
    }

    pub async fn set_me_last_error(&self, error: Option<String>) {
        self.state.write().await.me.last_error = normalize_details(error);
    }

    pub async fn mark_ready(&self) {
        let mut guard = self.state.write().await;
        if guard.status == StartupStatus::Ready {
            return;
        }
        guard.status = StartupStatus::Ready;
        guard.current_stage = "ready".to_string();
        guard.ready_at_epoch_secs = Some(now_epoch_secs());
    }

    pub async fn snapshot(&self) -> StartupSnapshot {
        let guard = self.state.read().await;
        StartupSnapshot {
            status: guard.status,
            degraded: guard.degraded,
            current_stage: guard.current_stage.clone(),
            started_at_epoch_secs: guard.started_at_epoch_secs,
            ready_at_epoch_secs: guard.ready_at_epoch_secs,
            total_elapsed_ms: self.started_at_instant.elapsed().as_millis() as u64,
            transport_mode: guard.transport_mode.clone(),
            me: guard.me.clone(),
            components: guard
                .components
                .iter()
                .map(|component| StartupComponentSnapshot {
                    id: component.id,
                    title: component.title,
                    weight: component.weight,
                    status: component.status,
                    started_at_epoch_ms: component.started_at_epoch_ms,
                    finished_at_epoch_ms: component.finished_at_epoch_ms,
                    duration_ms: component.duration_ms,
                    attempts: component.attempts,
                    details: component.details.clone(),
                })
                .collect(),
        }
    }
}

pub fn compute_progress_pct(snapshot: &StartupSnapshot, me_stage_progress: Option<f64>) -> f64 {
    if snapshot.status == StartupStatus::Ready {
        return 100.0;
    }

    let mut total_weight = 0.0f64;
    let mut completed_weight = 0.0f64;

    for component in &snapshot.components {
        total_weight += component.weight;
        let unit_progress = match component.status {
            StartupComponentStatus::Pending => 0.0,
            StartupComponentStatus::Running => {
                if component.id == COMPONENT_ME_POOL_INIT_STAGE1 {
                    me_stage_progress.unwrap_or(0.0).clamp(0.0, 1.0)
                } else {
                    0.0
                }
            }
            StartupComponentStatus::Ready
            | StartupComponentStatus::Failed
            | StartupComponentStatus::Skipped => 1.0,
        };
        completed_weight += component.weight * unit_progress;
    }

    if total_weight <= f64::EPSILON {
        0.0
    } else {
        ((completed_weight / total_weight) * 100.0).clamp(0.0, 100.0)
    }
}

fn component_blueprint() -> Vec<StartupComponent> {
    vec![
        component(COMPONENT_CONFIG_LOAD, "Config load", 5.0),
        component(COMPONENT_TRACING_INIT, "Tracing init", 3.0),
        component(COMPONENT_API_BOOTSTRAP, "API bootstrap", 5.0),
        component(COMPONENT_TLS_FRONT_BOOTSTRAP, "TLS front bootstrap", 5.0),
        component(COMPONENT_NETWORK_PROBE, "Network probe", 10.0),
        component(COMPONENT_ME_SECRET_FETCH, "ME secret fetch", 8.0),
        component(COMPONENT_ME_PROXY_CONFIG_V4, "ME config v4 fetch", 4.0),
        component(COMPONENT_ME_PROXY_CONFIG_V6, "ME config v6 fetch", 4.0),
        component(COMPONENT_ME_POOL_CONSTRUCT, "ME pool construct", 6.0),
        component(COMPONENT_ME_POOL_INIT_STAGE1, "ME pool init stage1", 24.0),
        component(COMPONENT_ME_CONNECTIVITY_PING, "ME connectivity ping", 6.0),
        component(COMPONENT_DC_CONNECTIVITY_PING, "DC connectivity ping", 8.0),
        component(COMPONENT_LISTENERS_BIND, "Listener bind", 8.0),
        component(COMPONENT_CONFIG_WATCHER_START, "Config watcher start", 2.0),
        component(COMPONENT_METRICS_START, "Metrics start", 1.0),
        component(COMPONENT_RUNTIME_READY, "Runtime ready", 1.0),
    ]
}

fn component(id: &'static str, title: &'static str, weight: f64) -> StartupComponent {
    StartupComponent {
        id,
        title,
        weight,
        status: StartupComponentStatus::Pending,
        started_at_epoch_ms: None,
        finished_at_epoch_ms: None,
        duration_ms: None,
        attempts: 0,
        details: None,
    }
}

fn normalize_details(details: Option<String>) -> Option<String> {
    details.map(|detail| {
        if detail.len() <= 256 {
            detail
        } else {
            detail[..256].to_string()
        }
    })
}

fn now_epoch_secs() -> u64 {
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap_or_default()
        .as_secs()
}

fn now_epoch_ms() -> u64 {
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap_or_default()
        .as_millis() as u64
}
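The new `src/startup.rs` above reports readiness as a weighted percentage: each component contributes `weight * unit_progress`, where finished components (ready, failed, or skipped) count as 1.0 and pending ones as 0.0. A standalone sketch of that math, with simplified hypothetical types rather than the file's own API:

```rust
// Simplified re-statement of the weighted startup-progress computation.
struct Component {
    weight: f64,
    unit_progress: f64, // 0.0 pending, 1.0 finished, fractional while running
}

fn progress_pct(components: &[Component]) -> f64 {
    let total: f64 = components.iter().map(|c| c.weight).sum();
    if total <= f64::EPSILON {
        return 0.0; // no components registered yet
    }
    let done: f64 = components
        .iter()
        .map(|c| c.weight * c.unit_progress.clamp(0.0, 1.0))
        .sum();
    (done / total * 100.0).clamp(0.0, 100.0)
}

fn main() {
    // Config load (5.0) done, ME pool init stage1 (24.0) half way, rest pending.
    let components = vec![
        Component { weight: 5.0, unit_progress: 1.0 },
        Component { weight: 24.0, unit_progress: 0.5 },
        Component { weight: 71.0, unit_progress: 0.0 },
    ];
    // 5.0 + 12.0 contributed out of 100.0 total weight.
    assert!((progress_pct(&components) - 17.0).abs() < 1e-9);
}
```

The blueprint weights in the file sum to 100, so with all-or-nothing progress each weight reads directly as a percentage point; only the ME pool init stage contributes fractional progress while running.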
304
src/stats/mod.rs
@@ -6,7 +6,7 @@ pub mod beobachten;
pub mod telemetry;

use std::sync::atomic::{AtomicBool, AtomicU8, AtomicU64, Ordering};
use std::time::{Instant, Duration};
use std::time::{Duration, Instant, SystemTime, UNIX_EPOCH};
use dashmap::DashMap;
use parking_lot::Mutex;
use lru::LruCache;
@@ -25,6 +25,8 @@ use self::telemetry::TelemetryPolicy;
pub struct Stats {
    connects_all: AtomicU64,
    connects_bad: AtomicU64,
    current_connections_direct: AtomicU64,
    current_connections_me: AtomicU64,
    handshake_timeouts: AtomicU64,
    upstream_connect_attempt_total: AtomicU64,
    upstream_connect_success_total: AtomicU64,
@@ -73,6 +75,20 @@ pub struct Stats {
    me_floor_mode_switch_total: AtomicU64,
    me_floor_mode_switch_static_to_adaptive_total: AtomicU64,
    me_floor_mode_switch_adaptive_to_static_total: AtomicU64,
    me_floor_cpu_cores_detected_gauge: AtomicU64,
    me_floor_cpu_cores_effective_gauge: AtomicU64,
    me_floor_global_cap_raw_gauge: AtomicU64,
    me_floor_global_cap_effective_gauge: AtomicU64,
    me_floor_target_writers_total_gauge: AtomicU64,
    me_floor_active_cap_configured_gauge: AtomicU64,
    me_floor_active_cap_effective_gauge: AtomicU64,
    me_floor_warm_cap_configured_gauge: AtomicU64,
    me_floor_warm_cap_effective_gauge: AtomicU64,
    me_writers_active_current_gauge: AtomicU64,
    me_writers_warm_current_gauge: AtomicU64,
    me_floor_cap_block_total: AtomicU64,
    me_floor_swap_idle_total: AtomicU64,
    me_floor_swap_idle_failed_total: AtomicU64,
    me_handshake_error_codes: DashMap<i32, AtomicU64>,
    me_route_drop_no_conn: AtomicU64,
    me_route_drop_channel_closed: AtomicU64,
@@ -109,6 +125,7 @@ pub struct Stats {
    telemetry_user_enabled: AtomicBool,
    telemetry_me_level: AtomicU8,
    user_stats: DashMap<String, UserStats>,
    user_stats_last_cleanup_epoch_secs: AtomicU64,
    start_time: parking_lot::RwLock<Option<Instant>>,
}

@@ -120,6 +137,7 @@ pub struct UserStats {
    pub octets_to_client: AtomicU64,
    pub msgs_from_client: AtomicU64,
    pub msgs_to_client: AtomicU64,
    pub last_seen_epoch_secs: AtomicU64,
}

impl Stats {
@@ -150,6 +168,72 @@ impl Stats {
        self.telemetry_me_level().allows_debug()
    }

    fn decrement_atomic_saturating(counter: &AtomicU64) {
        let mut current = counter.load(Ordering::Relaxed);
        loop {
            if current == 0 {
                break;
            }
            match counter.compare_exchange_weak(
                current,
                current - 1,
                Ordering::Relaxed,
                Ordering::Relaxed,
            ) {
                Ok(_) => break,
                Err(actual) => current = actual,
            }
        }
    }

    fn now_epoch_secs() -> u64 {
        SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .unwrap_or_default()
            .as_secs()
    }

    fn touch_user_stats(stats: &UserStats) {
        stats
            .last_seen_epoch_secs
            .store(Self::now_epoch_secs(), Ordering::Relaxed);
    }

    fn maybe_cleanup_user_stats(&self) {
        const USER_STATS_CLEANUP_INTERVAL_SECS: u64 = 60;
        const USER_STATS_IDLE_TTL_SECS: u64 = 24 * 60 * 60;

        let now_epoch_secs = Self::now_epoch_secs();
        let last_cleanup_epoch_secs = self
            .user_stats_last_cleanup_epoch_secs
            .load(Ordering::Relaxed);
        if now_epoch_secs.saturating_sub(last_cleanup_epoch_secs)
            < USER_STATS_CLEANUP_INTERVAL_SECS
        {
            return;
        }
        if self
            .user_stats_last_cleanup_epoch_secs
            .compare_exchange(
                last_cleanup_epoch_secs,
                now_epoch_secs,
                Ordering::AcqRel,
                Ordering::Relaxed,
            )
            .is_err()
        {
            return;
        }

        self.user_stats.retain(|_, stats| {
            if stats.curr_connects.load(Ordering::Relaxed) > 0 {
                return true;
            }
            let last_seen_epoch_secs = stats.last_seen_epoch_secs.load(Ordering::Relaxed);
            now_epoch_secs.saturating_sub(last_seen_epoch_secs) <= USER_STATS_IDLE_TTL_SECS
        });
    }

    pub fn apply_telemetry_policy(&self, policy: TelemetryPolicy) {
        self.telemetry_core_enabled
            .store(policy.core_enabled, Ordering::Relaxed);
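The `decrement_atomic_saturating` helper in the hunk above exists because a plain `fetch_sub` on a `u64` gauge that is already 0 would wrap to `u64::MAX`. A compact standalone version of the same CAS-loop shape (assuming nothing beyond the standard library):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Decrement a gauge without wrapping below zero: retry with
// compare_exchange_weak and stop once the counter reads 0.
fn decrement_saturating(counter: &AtomicU64) {
    let mut current = counter.load(Ordering::Relaxed);
    while current > 0 {
        match counter.compare_exchange_weak(
            current,
            current - 1,
            Ordering::Relaxed,
            Ordering::Relaxed,
        ) {
            Ok(_) => return,
            Err(actual) => current = actual, // lost the race (or spurious failure), retry
        }
    }
}

fn main() {
    let gauge = AtomicU64::new(1);
    decrement_saturating(&gauge);
    decrement_saturating(&gauge); // extra decrement is a no-op, not a wrap to u64::MAX
    assert_eq!(gauge.load(Ordering::Relaxed), 0);
}
```

`compare_exchange_weak` may fail spuriously on some architectures, which is fine here: the `Err` arm reloads the observed value and the loop retries.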
@@ -177,6 +261,18 @@ impl Stats {
|
||||
self.connects_bad.fetch_add(1, Ordering::Relaxed);
|
||||
}
|
||||
}
|
||||
pub fn increment_current_connections_direct(&self) {
|
||||
self.current_connections_direct.fetch_add(1, Ordering::Relaxed);
|
||||
}
|
||||
pub fn decrement_current_connections_direct(&self) {
|
||||
Self::decrement_atomic_saturating(&self.current_connections_direct);
|
||||
}
|
||||
pub fn increment_current_connections_me(&self) {
|
||||
self.current_connections_me.fetch_add(1, Ordering::Relaxed);
|
||||
}
|
||||
pub fn decrement_current_connections_me(&self) {
|
||||
Self::decrement_atomic_saturating(&self.current_connections_me);
|
||||
}
|
||||
pub fn increment_handshake_timeouts(&self) {
|
||||
if self.telemetry_core_enabled() {
|
||||
self.handshake_timeouts.fetch_add(1, Ordering::Relaxed);
|
||||
@@ -644,8 +740,100 @@ impl Stats {
|
||||
.fetch_add(1, Ordering::Relaxed);
|
||||
}
|
||||
}
|
||||
pub fn set_me_floor_cpu_cores_detected_gauge(&self, value: u64) {
|
||||
if self.telemetry_me_allows_normal() {
|
||||
self.me_floor_cpu_cores_detected_gauge
|
||||
.store(value, Ordering::Relaxed);
|
||||
}
|
||||
}
|
||||
pub fn set_me_floor_cpu_cores_effective_gauge(&self, value: u64) {
|
||||
if self.telemetry_me_allows_normal() {
|
||||
self.me_floor_cpu_cores_effective_gauge
|
||||
.store(value, Ordering::Relaxed);
|
||||
}
|
||||
}
|
||||
pub fn set_me_floor_global_cap_raw_gauge(&self, value: u64) {
|
||||
if self.telemetry_me_allows_normal() {
|
||||
self.me_floor_global_cap_raw_gauge
|
||||
.store(value, Ordering::Relaxed);
|
||||
}
|
||||
}
|
||||
pub fn set_me_floor_global_cap_effective_gauge(&self, value: u64) {
|
||||
if self.telemetry_me_allows_normal() {
|
||||
self.me_floor_global_cap_effective_gauge
|
||||
.store(value, Ordering::Relaxed);
|
||||
}
|
||||
}
|
||||
pub fn set_me_floor_target_writers_total_gauge(&self, value: u64) {
|
||||
if self.telemetry_me_allows_normal() {
|
||||
self.me_floor_target_writers_total_gauge
|
||||
.store(value, Ordering::Relaxed);
|
||||
}
|
||||
}
|
||||
pub fn set_me_floor_active_cap_configured_gauge(&self, value: u64) {
|
||||
if self.telemetry_me_allows_normal() {
|
||||
self.me_floor_active_cap_configured_gauge
|
||||
.store(value, Ordering::Relaxed);
|
||||
}
|
||||
}
|
||||
pub fn set_me_floor_active_cap_effective_gauge(&self, value: u64) {
|
||||
if self.telemetry_me_allows_normal() {
|
||||
self.me_floor_active_cap_effective_gauge
|
||||
.store(value, Ordering::Relaxed);
|
||||
}
|
||||
}
|
||||
pub fn set_me_floor_warm_cap_configured_gauge(&self, value: u64) {
|
||||
if self.telemetry_me_allows_normal() {
|
||||
self.me_floor_warm_cap_configured_gauge
|
||||
.store(value, Ordering::Relaxed);
|
||||
}
|
||||
}
|
||||
pub fn set_me_floor_warm_cap_effective_gauge(&self, value: u64) {
|
||||
if self.telemetry_me_allows_normal() {
|
||||
self.me_floor_warm_cap_effective_gauge
|
||||
.store(value, Ordering::Relaxed);
|
||||
}
|
||||
}
|
||||
pub fn set_me_writers_active_current_gauge(&self, value: u64) {
|
||||
if self.telemetry_me_allows_normal() {
|
||||
self.me_writers_active_current_gauge
|
||||
.store(value, Ordering::Relaxed);
|
||||
}
|
||||
}
|
||||
pub fn set_me_writers_warm_current_gauge(&self, value: u64) {
|
||||
if self.telemetry_me_allows_normal() {
|
||||
self.me_writers_warm_current_gauge
|
||||
.store(value, Ordering::Relaxed);
|
||||
}
|
||||
}
|
||||
pub fn increment_me_floor_cap_block_total(&self) {
|
||||
if self.telemetry_me_allows_normal() {
|
||||
self.me_floor_cap_block_total.fetch_add(1, Ordering::Relaxed);
|
||||
}
|
||||
}
|
||||
pub fn increment_me_floor_swap_idle_total(&self) {
|
||||
if self.telemetry_me_allows_normal() {
|
||||
self.me_floor_swap_idle_total.fetch_add(1, Ordering::Relaxed);
|
||||
}
|
||||
}
|
||||
pub fn increment_me_floor_swap_idle_failed_total(&self) {
|
||||
if self.telemetry_me_allows_normal() {
|
||||
self.me_floor_swap_idle_failed_total
|
||||
.fetch_add(1, Ordering::Relaxed);
|
||||
}
|
||||
}
|
||||
pub fn get_connects_all(&self) -> u64 { self.connects_all.load(Ordering::Relaxed) }
|
||||
pub fn get_connects_bad(&self) -> u64 { self.connects_bad.load(Ordering::Relaxed) }
|
||||
pub fn get_current_connections_direct(&self) -> u64 {
|
||||
self.current_connections_direct.load(Ordering::Relaxed)
|
||||
}
|
||||
pub fn get_current_connections_me(&self) -> u64 {
|
||||
self.current_connections_me.load(Ordering::Relaxed)
|
||||
}
|
||||
pub fn get_current_connections_total(&self) -> u64 {
|
||||
self.get_current_connections_direct()
|
||||
.saturating_add(self.get_current_connections_me())
|
||||
}
|
||||
pub fn get_me_keepalive_sent(&self) -> u64 { self.me_keepalive_sent.load(Ordering::Relaxed) }
|
||||
pub fn get_me_keepalive_failed(&self) -> u64 { self.me_keepalive_failed.load(Ordering::Relaxed) }
|
||||
pub fn get_me_keepalive_pong(&self) -> u64 { self.me_keepalive_pong.load(Ordering::Relaxed) }
|
||||
@@ -739,6 +927,58 @@ impl Stats {
|
||||
self.me_floor_mode_switch_adaptive_to_static_total
|
||||
.load(Ordering::Relaxed)
|
||||
}
|
||||
pub fn get_me_floor_cpu_cores_detected_gauge(&self) -> u64 {
|
||||
self.me_floor_cpu_cores_detected_gauge
|
||||
.load(Ordering::Relaxed)
|
||||
}
|
||||
pub fn get_me_floor_cpu_cores_effective_gauge(&self) -> u64 {
|
||||
self.me_floor_cpu_cores_effective_gauge
|
||||
.load(Ordering::Relaxed)
|
||||
}
|
||||
pub fn get_me_floor_global_cap_raw_gauge(&self) -> u64 {
|
||||
self.me_floor_global_cap_raw_gauge.load(Ordering::Relaxed)
|
||||
}
|
||||
pub fn get_me_floor_global_cap_effective_gauge(&self) -> u64 {
|
||||
self.me_floor_global_cap_effective_gauge
|
||||
.load(Ordering::Relaxed)
|
||||
}
|
||||
pub fn get_me_floor_target_writers_total_gauge(&self) -> u64 {
|
||||
self.me_floor_target_writers_total_gauge
|
||||
.load(Ordering::Relaxed)
|
||||
}
|
||||
pub fn get_me_floor_active_cap_configured_gauge(&self) -> u64 {
|
||||
self.me_floor_active_cap_configured_gauge
|
||||
.load(Ordering::Relaxed)
|
||||
}
|
||||
pub fn get_me_floor_active_cap_effective_gauge(&self) -> u64 {
|
||||
self.me_floor_active_cap_effective_gauge
|
||||
.load(Ordering::Relaxed)
|
||||
}
|
||||
pub fn get_me_floor_warm_cap_configured_gauge(&self) -> u64 {
|
||||
self.me_floor_warm_cap_configured_gauge
|
||||
.load(Ordering::Relaxed)
|
||||
}
|
||||
pub fn get_me_floor_warm_cap_effective_gauge(&self) -> u64 {
|
||||
self.me_floor_warm_cap_effective_gauge
|
||||
.load(Ordering::Relaxed)
|
||||
}
|
||||
pub fn get_me_writers_active_current_gauge(&self) -> u64 {
|
||||
self.me_writers_active_current_gauge
|
||||
.load(Ordering::Relaxed)
|
||||
}
|
||||
pub fn get_me_writers_warm_current_gauge(&self) -> u64 {
|
||||
self.me_writers_warm_current_gauge
|
||||
.load(Ordering::Relaxed)
|
||||
}
|
||||
pub fn get_me_floor_cap_block_total(&self) -> u64 {
|
||||
self.me_floor_cap_block_total.load(Ordering::Relaxed)
|
||||
}
|
||||
pub fn get_me_floor_swap_idle_total(&self) -> u64 {
|
||||
self.me_floor_swap_idle_total.load(Ordering::Relaxed)
|
||||
}
|
||||
pub fn get_me_floor_swap_idle_failed_total(&self) -> u64 {
|
||||
self.me_floor_swap_idle_failed_total.load(Ordering::Relaxed)
|
||||
}
|
||||
pub fn get_me_handshake_error_code_counts(&self) -> Vec<(i32, u64)> {
|
||||
let mut out: Vec<(i32, u64)> = self
|
||||
.me_handshake_error_codes
|
||||
@@ -846,34 +1086,36 @@ impl Stats {
|
||||
if !self.telemetry_user_enabled() {
|
||||
return;
|
||||
}
|
||||
self.maybe_cleanup_user_stats();
|
||||
if let Some(stats) = self.user_stats.get(user) {
|
||||
Self::touch_user_stats(stats.value());
|
||||
stats.connects.fetch_add(1, Ordering::Relaxed);
|
||||
return;
|
||||
}
|
||||
self.user_stats
|
||||
.entry(user.to_string())
|
||||
.or_default()
|
||||
.connects
|
||||
.fetch_add(1, Ordering::Relaxed);
|
||||
let stats = self.user_stats.entry(user.to_string()).or_default();
|
||||
Self::touch_user_stats(stats.value());
|
||||
stats.connects.fetch_add(1, Ordering::Relaxed);
|
||||
}
|
||||
|
||||
pub fn increment_user_curr_connects(&self, user: &str) {
|
||||
if !self.telemetry_user_enabled() {
|
||||
return;
|
||||
}
|
||||
self.maybe_cleanup_user_stats();
|
||||
if let Some(stats) = self.user_stats.get(user) {
|
||||
Self::touch_user_stats(stats.value());
|
||||
stats.curr_connects.fetch_add(1, Ordering::Relaxed);
|
||||
return;
|
||||
}
|
||||
self.user_stats
|
||||
.entry(user.to_string())
|
||||
.or_default()
|
||||
.curr_connects
|
||||
.fetch_add(1, Ordering::Relaxed);
|
||||
let stats = self.user_stats.entry(user.to_string()).or_default();
|
||||
Self::touch_user_stats(stats.value());
|
||||
stats.curr_connects.fetch_add(1, Ordering::Relaxed);
|
||||
}
|
||||
|
||||
pub fn decrement_user_curr_connects(&self, user: &str) {
|
||||
self.maybe_cleanup_user_stats();
|
||||
if let Some(stats) = self.user_stats.get(user) {
|
||||
Self::touch_user_stats(stats.value());
|
||||
let counter = &stats.curr_connects;
|
||||
let mut current = counter.load(Ordering::Relaxed);
|
||||
loop {
|
||||
@@ -903,60 +1145,60 @@ impl Stats {
if !self.telemetry_user_enabled() {
return;
}
self.maybe_cleanup_user_stats();
if let Some(stats) = self.user_stats.get(user) {
Self::touch_user_stats(stats.value());
stats.octets_from_client.fetch_add(bytes, Ordering::Relaxed);
return;
}
self.user_stats
.entry(user.to_string())
.or_default()
.octets_from_client
.fetch_add(bytes, Ordering::Relaxed);
let stats = self.user_stats.entry(user.to_string()).or_default();
Self::touch_user_stats(stats.value());
stats.octets_from_client.fetch_add(bytes, Ordering::Relaxed);
}

pub fn add_user_octets_to(&self, user: &str, bytes: u64) {
if !self.telemetry_user_enabled() {
return;
}
self.maybe_cleanup_user_stats();
if let Some(stats) = self.user_stats.get(user) {
Self::touch_user_stats(stats.value());
stats.octets_to_client.fetch_add(bytes, Ordering::Relaxed);
return;
}
self.user_stats
.entry(user.to_string())
.or_default()
.octets_to_client
.fetch_add(bytes, Ordering::Relaxed);
let stats = self.user_stats.entry(user.to_string()).or_default();
Self::touch_user_stats(stats.value());
stats.octets_to_client.fetch_add(bytes, Ordering::Relaxed);
}

pub fn increment_user_msgs_from(&self, user: &str) {
if !self.telemetry_user_enabled() {
return;
}
self.maybe_cleanup_user_stats();
if let Some(stats) = self.user_stats.get(user) {
Self::touch_user_stats(stats.value());
stats.msgs_from_client.fetch_add(1, Ordering::Relaxed);
return;
}
self.user_stats
.entry(user.to_string())
.or_default()
.msgs_from_client
.fetch_add(1, Ordering::Relaxed);
let stats = self.user_stats.entry(user.to_string()).or_default();
Self::touch_user_stats(stats.value());
stats.msgs_from_client.fetch_add(1, Ordering::Relaxed);
}

pub fn increment_user_msgs_to(&self, user: &str) {
if !self.telemetry_user_enabled() {
return;
}
self.maybe_cleanup_user_stats();
if let Some(stats) = self.user_stats.get(user) {
Self::touch_user_stats(stats.value());
stats.msgs_to_client.fetch_add(1, Ordering::Relaxed);
return;
}
self.user_stats
.entry(user.to_string())
.or_default()
.msgs_to_client
.fetch_add(1, Ordering::Relaxed);
let stats = self.user_stats.entry(user.to_string()).or_default();
Self::touch_user_stats(stats.value());
stats.msgs_to_client.fetch_add(1, Ordering::Relaxed);
}

pub fn get_user_total_octets(&self, user: &str) -> u64 {

@@ -1,4 +1,5 @@
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use bytes::Bytes;

use crate::crypto::{AesCbc, crc32, crc32c};
use crate::error::{ProxyError, Result};
@@ -6,8 +7,8 @@ use crate::protocol::constants::*;

/// Commands sent to dedicated writer tasks to avoid mutex contention on TCP writes.
pub(crate) enum WriterCommand {
Data(Vec<u8>),
DataAndFlush(Vec<u8>),
Data(Bytes),
DataAndFlush(Bytes),
Close,
}

@@ -315,7 +315,16 @@ async fn run_update_cycle(
cfg.general.me_floor_mode,
cfg.general.me_adaptive_floor_idle_secs,
cfg.general.me_adaptive_floor_min_writers_single_endpoint,
cfg.general.me_adaptive_floor_min_writers_multi_endpoint,
cfg.general.me_adaptive_floor_recover_grace_secs,
cfg.general.me_adaptive_floor_writers_per_core_total,
cfg.general.me_adaptive_floor_cpu_cores_override,
cfg.general.me_adaptive_floor_max_extra_writers_single_per_core,
cfg.general.me_adaptive_floor_max_extra_writers_multi_per_core,
cfg.general.me_adaptive_floor_max_active_writers_per_core,
cfg.general.me_adaptive_floor_max_warm_writers_per_core,
cfg.general.me_adaptive_floor_max_active_writers_global,
cfg.general.me_adaptive_floor_max_warm_writers_global,
);

let required_cfg_snapshots = cfg.general.me_config_stable_snapshots.max(1);
@@ -527,7 +536,16 @@ pub async fn me_config_updater(
cfg.general.me_floor_mode,
cfg.general.me_adaptive_floor_idle_secs,
cfg.general.me_adaptive_floor_min_writers_single_endpoint,
cfg.general.me_adaptive_floor_min_writers_multi_endpoint,
cfg.general.me_adaptive_floor_recover_grace_secs,
cfg.general.me_adaptive_floor_writers_per_core_total,
cfg.general.me_adaptive_floor_cpu_cores_override,
cfg.general.me_adaptive_floor_max_extra_writers_single_per_core,
cfg.general.me_adaptive_floor_max_extra_writers_multi_per_core,
cfg.general.me_adaptive_floor_max_active_writers_per_core,
cfg.general.me_adaptive_floor_max_warm_writers_per_core,
cfg.general.me_adaptive_floor_max_active_writers_global,
cfg.general.me_adaptive_floor_max_warm_writers_global,
);
let new_secs = cfg.general.effective_update_every_secs().max(1);
if new_secs == update_every_secs {

@@ -84,38 +84,7 @@ impl MePool {
}

async fn resolve_dc_idx_for_endpoint(&self, addr: SocketAddr) -> Option<i16> {
if addr.is_ipv4() {
let map = self.proxy_map_v4.read().await;
for (dc, addrs) in map.iter() {
if addrs
.iter()
.any(|(ip, port)| SocketAddr::new(*ip, *port) == addr)
{
let abs_dc = dc.abs();
if abs_dc > 0
&& let Ok(dc_idx) = i16::try_from(abs_dc)
{
return Some(dc_idx);
}
}
}
} else {
let map = self.proxy_map_v6.read().await;
for (dc, addrs) in map.iter() {
if addrs
.iter()
.any(|(ip, port)| SocketAddr::new(*ip, *port) == addr)
{
let abs_dc = dc.abs();
if abs_dc > 0
&& let Ok(dc_idx) = i16::try_from(abs_dc)
{
return Some(dc_idx);
}
}
}
}
None
i16::try_from(self.resolve_dc_for_endpoint(addr).await).ok()
}

fn direct_bind_ip_for_stun(
@@ -166,10 +135,15 @@ impl MePool {
pub(crate) async fn connect_tcp(
&self,
addr: SocketAddr,
dc_idx_override: Option<i16>,
) -> Result<(TcpStream, f64, Option<UpstreamEgressInfo>)> {
let start = Instant::now();
let (stream, upstream_egress) = if let Some(upstream) = &self.upstream {
let dc_idx = self.resolve_dc_idx_for_endpoint(addr).await;
let dc_idx = if let Some(dc_idx) = dc_idx_override {
Some(dc_idx)
} else {
self.resolve_dc_idx_for_endpoint(addr).await
};
let (stream, egress) = upstream.connect_with_details(addr, dc_idx, None).await?;
(stream, Some(egress))
} else {

@@ -22,6 +22,34 @@ const IDLE_REFRESH_TRIGGER_BASE_SECS: u64 = 45;
const IDLE_REFRESH_TRIGGER_JITTER_SECS: u64 = 5;
const IDLE_REFRESH_RETRY_SECS: u64 = 8;
const IDLE_REFRESH_SUCCESS_GUARD_SECS: u64 = 5;
const HEALTH_RECONNECT_BUDGET_PER_CORE: usize = 2;
const HEALTH_RECONNECT_BUDGET_PER_DC: usize = 1;
const HEALTH_RECONNECT_BUDGET_MIN: usize = 4;
const HEALTH_RECONNECT_BUDGET_MAX: usize = 128;

#[derive(Debug, Clone)]
struct DcFloorPlanEntry {
dc: i32,
endpoints: Vec<SocketAddr>,
alive: usize,
min_required: usize,
target_required: usize,
max_required: usize,
has_bound_clients: bool,
floor_capped: bool,
}

#[derive(Debug, Clone)]
struct FamilyFloorPlan {
by_dc: HashMap<i32, DcFloorPlanEntry>,
active_cap_configured_total: usize,
active_cap_effective_total: usize,
warm_cap_configured_total: usize,
warm_cap_effective_total: usize,
active_writers_current: usize,
warm_writers_current: usize,
target_writers_total: usize,
}

pub async fn me_health_monitor(pool: Arc<MePool>, rng: Arc<SecureRandom>, _min_connections: usize) {
let mut backoff: HashMap<(i32, IpFamily), u64> = HashMap::new();
@@ -37,6 +65,7 @@ pub async fn me_health_monitor(pool: Arc<MePool>, rng: Arc<SecureRandom>, _min_c
loop {
tokio::time::sleep(Duration::from_secs(HEALTH_INTERVAL_SECS)).await;
pool.prune_closed_writers().await;
reap_draining_writers(&pool).await;
check_family(
IpFamily::V4,
&pool,
@@ -72,6 +101,28 @@ pub async fn me_health_monitor(pool: Arc<MePool>, rng: Arc<SecureRandom>, _min_c
}
}

async fn reap_draining_writers(pool: &Arc<MePool>) {
let now_epoch_secs = MePool::now_epoch_secs();
let writers = pool.writers.read().await.clone();
for writer in writers {
if !writer.draining.load(std::sync::atomic::Ordering::Relaxed) {
continue;
}
if pool.registry.is_writer_empty(writer.id).await {
pool.remove_writer_and_close_clients(writer.id).await;
continue;
}
let deadline_epoch_secs = writer
.drain_deadline_epoch_secs
.load(std::sync::atomic::Ordering::Relaxed);
if deadline_epoch_secs != 0 && now_epoch_secs >= deadline_epoch_secs {
warn!(writer_id = writer.id, "Drain timeout, force-closing");
pool.stats.increment_pool_force_close_total();
pool.remove_writer_and_close_clients(writer.id).await;
}
}
}

async fn check_family(
family: IpFamily,
pool: &Arc<MePool>,
@@ -95,59 +146,91 @@ async fn check_family(
return;
}

let map = match family {
IpFamily::V4 => pool.proxy_map_v4.read().await.clone(),
IpFamily::V6 => pool.proxy_map_v6.read().await.clone(),
};

let mut dc_endpoints = HashMap::<i32, Vec<SocketAddr>>::new();
for (dc, addrs) in map {
let entry = dc_endpoints.entry(dc.abs()).or_default();
for (ip, port) in addrs {
let map_guard = match family {
IpFamily::V4 => pool.proxy_map_v4.read().await,
IpFamily::V6 => pool.proxy_map_v6.read().await,
};
for (dc, addrs) in map_guard.iter() {
let entry = dc_endpoints.entry(*dc).or_default();
for (ip, port) in addrs.iter().copied() {
entry.push(SocketAddr::new(ip, port));
}
}
drop(map_guard);
for endpoints in dc_endpoints.values_mut() {
endpoints.sort_unstable();
endpoints.dedup();
}
let mut reconnect_budget = health_reconnect_budget(pool, dc_endpoints.len());

if pool.floor_mode() == MeFloorMode::Static {
adaptive_idle_since.clear();
adaptive_recover_until.clear();
}

let mut live_addr_counts = HashMap::<SocketAddr, usize>::new();
let mut live_writer_ids_by_addr = HashMap::<SocketAddr, Vec<u64>>::new();
let mut live_addr_counts = HashMap::<(i32, SocketAddr), usize>::new();
let mut live_writer_ids_by_addr = HashMap::<(i32, SocketAddr), Vec<u64>>::new();
for writer in pool.writers.read().await.iter().filter(|w| {
!w.draining.load(std::sync::atomic::Ordering::Relaxed)
}) {
*live_addr_counts.entry(writer.addr).or_insert(0) += 1;
if !matches!(
super::pool::WriterContour::from_u8(
writer.contour.load(std::sync::atomic::Ordering::Relaxed),
),
super::pool::WriterContour::Active
) {
continue;
}
let key = (writer.writer_dc, writer.addr);
*live_addr_counts.entry(key).or_insert(0) += 1;
live_writer_ids_by_addr
.entry(writer.addr)
.entry(key)
.or_default()
.push(writer.id);
}
let writer_idle_since = pool.registry.writer_idle_since_snapshot().await;
let bound_clients_by_writer = pool
.registry
.writer_activity_snapshot()
.await
.bound_clients_by_writer;
let floor_plan = build_family_floor_plan(
pool,
family,
&dc_endpoints,
&live_addr_counts,
&live_writer_ids_by_addr,
&bound_clients_by_writer,
adaptive_idle_since,
adaptive_recover_until,
)
.await;
pool.set_adaptive_floor_runtime_caps(
floor_plan.active_cap_configured_total,
floor_plan.active_cap_effective_total,
floor_plan.warm_cap_configured_total,
floor_plan.warm_cap_effective_total,
floor_plan.target_writers_total,
floor_plan.active_writers_current,
floor_plan.warm_writers_current,
);

for (dc, endpoints) in dc_endpoints {
if endpoints.is_empty() {
continue;
}
let key = (dc, family);
let reduce_for_idle = should_reduce_floor_for_idle(
pool,
key,
&endpoints,
&live_writer_ids_by_addr,
adaptive_idle_since,
adaptive_recover_until,
)
.await;
let required = pool.required_writers_for_dc_with_floor_mode(endpoints.len(), reduce_for_idle);
let required = floor_plan
.by_dc
.get(&dc)
.map(|entry| entry.target_required)
.unwrap_or_else(|| {
pool.required_writers_for_dc_with_floor_mode(endpoints.len(), false)
});
let alive = endpoints
.iter()
.map(|addr| *live_addr_counts.get(addr).unwrap_or(&0))
.map(|addr| *live_addr_counts.get(&(dc, *addr)).unwrap_or(&0))
.sum::<usize>();

if endpoints.len() == 1 && pool.single_endpoint_outage_mode_enabled() && alive == 0 {
@@ -170,6 +253,7 @@ async fn check_family(
required,
outage_backoff,
outage_next_attempt,
&mut reconnect_budget,
)
.await;
continue;
@@ -205,6 +289,7 @@ async fn check_family(
required,
&live_writer_ids_by_addr,
&writer_idle_since,
&bound_clients_by_writer,
idle_refresh_next_attempt,
)
.await;
@@ -218,6 +303,7 @@ async fn check_family(
alive,
required,
&live_writer_ids_by_addr,
&bound_clients_by_writer,
shadow_rotate_deadline,
)
.await;
@@ -226,6 +312,24 @@ async fn check_family(
let missing = required - alive;

let now = Instant::now();
if reconnect_budget == 0 {
let base_ms = pool.me_reconnect_backoff_base.as_millis() as u64;
let next_ms = (*backoff.get(&key).unwrap_or(&base_ms)).max(base_ms);
let jitter = next_ms / JITTER_FRAC_NUM;
let wait = Duration::from_millis(next_ms)
+ Duration::from_millis(rand::rng().random_range(0..=jitter.max(1)));
next_attempt.insert(key, now + wait);
debug!(
dc = %dc,
?family,
alive,
required,
endpoint_count = endpoints.len(),
reconnect_budget,
"Skipping reconnect due to per-tick health reconnect budget"
);
continue;
}
if let Some(ts) = next_attempt.get(&key)
&& now < *ts
{
@@ -236,7 +340,10 @@ async fn check_family(
if *inflight.get(&key).unwrap_or(&0) >= max_concurrent {
continue;
}
if pool.has_refill_inflight_for_endpoints(&endpoints).await {
if pool
.has_refill_inflight_for_dc_key(super::pool::RefillDcKey { dc, family })
.await
{
debug!(
dc = %dc,
?family,
@@ -251,9 +358,44 @@ async fn check_family(

let mut restored = 0usize;
for _ in 0..missing {
if reconnect_budget == 0 {
break;
}
reconnect_budget = reconnect_budget.saturating_sub(1);
if pool.active_contour_writer_count_total().await
>= floor_plan.active_cap_effective_total
{
let swapped = maybe_swap_idle_writer_for_cap(
pool,
rng,
dc,
family,
&endpoints,
&live_writer_ids_by_addr,
&writer_idle_since,
&bound_clients_by_writer,
)
.await;
if swapped {
pool.stats.increment_me_floor_swap_idle_total();
restored += 1;
continue;
}
pool.stats.increment_me_floor_cap_block_total();
pool.stats.increment_me_floor_swap_idle_failed_total();
debug!(
dc = %dc,
?family,
alive,
required,
active_cap_effective_total = floor_plan.active_cap_effective_total,
"Adaptive floor cap reached, reconnect attempt blocked"
);
break;
}
let res = tokio::time::timeout(
pool.me_one_timeout,
pool.connect_endpoints_round_robin(&endpoints, rng.as_ref()),
pool.connect_endpoints_round_robin(dc, &endpoints, rng.as_ref()),
)
.await;
match res {
@@ -323,6 +465,320 @@ async fn check_family(
}
}

fn health_reconnect_budget(pool: &Arc<MePool>, dc_groups: usize) -> usize {
let cpu_cores = pool.adaptive_floor_effective_cpu_cores().max(1);
let by_cpu = cpu_cores.saturating_mul(HEALTH_RECONNECT_BUDGET_PER_CORE);
let by_dc = dc_groups.saturating_mul(HEALTH_RECONNECT_BUDGET_PER_DC);
by_cpu
.saturating_add(by_dc)
.clamp(HEALTH_RECONNECT_BUDGET_MIN, HEALTH_RECONNECT_BUDGET_MAX)
}

fn adaptive_floor_class_min(
pool: &Arc<MePool>,
endpoint_count: usize,
base_required: usize,
) -> usize {
if endpoint_count <= 1 {
let min_single = (pool
.me_adaptive_floor_min_writers_single_endpoint
.load(std::sync::atomic::Ordering::Relaxed) as usize)
.max(1);
min_single.min(base_required.max(1))
} else {
pool.adaptive_floor_min_writers_multi_endpoint()
.min(base_required.max(1))
}
}

fn adaptive_floor_class_max(
pool: &Arc<MePool>,
endpoint_count: usize,
base_required: usize,
cpu_cores: usize,
) -> usize {
let extra_per_core = if endpoint_count <= 1 {
pool.adaptive_floor_max_extra_single_per_core()
} else {
pool.adaptive_floor_max_extra_multi_per_core()
};
base_required.saturating_add(cpu_cores.saturating_mul(extra_per_core))
}

fn list_writer_ids_for_endpoints(
dc: i32,
endpoints: &[SocketAddr],
live_writer_ids_by_addr: &HashMap<(i32, SocketAddr), Vec<u64>>,
) -> Vec<u64> {
let mut out = Vec::<u64>::new();
for endpoint in endpoints {
if let Some(ids) = live_writer_ids_by_addr.get(&(dc, *endpoint)) {
out.extend(ids.iter().copied());
}
}
out
}

async fn build_family_floor_plan(
pool: &Arc<MePool>,
family: IpFamily,
dc_endpoints: &HashMap<i32, Vec<SocketAddr>>,
live_addr_counts: &HashMap<(i32, SocketAddr), usize>,
live_writer_ids_by_addr: &HashMap<(i32, SocketAddr), Vec<u64>>,
bound_clients_by_writer: &HashMap<u64, usize>,
adaptive_idle_since: &mut HashMap<(i32, IpFamily), Instant>,
adaptive_recover_until: &mut HashMap<(i32, IpFamily), Instant>,
) -> FamilyFloorPlan {
let mut entries = Vec::<DcFloorPlanEntry>::new();
let mut by_dc = HashMap::<i32, DcFloorPlanEntry>::new();
let mut family_active_total = 0usize;

let floor_mode = pool.floor_mode();
let is_adaptive = floor_mode == MeFloorMode::Adaptive;
let cpu_cores = pool.adaptive_floor_effective_cpu_cores().max(1);
let (active_writers_current, warm_writers_current, _) =
pool.non_draining_writer_counts_by_contour().await;

for (dc, endpoints) in dc_endpoints {
if endpoints.is_empty() {
continue;
}
let key = (*dc, family);
let reduce_for_idle = should_reduce_floor_for_idle(
pool,
key,
*dc,
endpoints,
live_writer_ids_by_addr,
bound_clients_by_writer,
adaptive_idle_since,
adaptive_recover_until,
)
.await;
let base_required = pool.required_writers_for_dc(endpoints.len()).max(1);
let min_required = if is_adaptive {
adaptive_floor_class_min(pool, endpoints.len(), base_required)
} else {
base_required
};
let mut max_required = if is_adaptive {
adaptive_floor_class_max(pool, endpoints.len(), base_required, cpu_cores)
} else {
base_required
};
if max_required < min_required {
max_required = min_required;
}
let desired_raw = if is_adaptive && reduce_for_idle {
min_required
} else {
base_required
};
let target_required = desired_raw.clamp(min_required, max_required);
let alive = endpoints
.iter()
.map(|endpoint| live_addr_counts.get(&(*dc, *endpoint)).copied().unwrap_or(0))
.sum::<usize>();
family_active_total = family_active_total.saturating_add(alive);
let writer_ids = list_writer_ids_for_endpoints(*dc, endpoints, live_writer_ids_by_addr);
let has_bound_clients = has_bound_clients_on_endpoint(&writer_ids, bound_clients_by_writer);

entries.push(DcFloorPlanEntry {
dc: *dc,
endpoints: endpoints.clone(),
alive,
min_required,
target_required,
max_required,
has_bound_clients,
floor_capped: false,
});
}

if entries.is_empty() {
let active_cap_configured_total = pool.adaptive_floor_active_cap_configured_total();
let warm_cap_configured_total = pool.adaptive_floor_warm_cap_configured_total();
return FamilyFloorPlan {
by_dc,
active_cap_configured_total,
active_cap_effective_total: active_cap_configured_total,
warm_cap_configured_total,
warm_cap_effective_total: warm_cap_configured_total,
active_writers_current,
warm_writers_current,
target_writers_total: 0,
};
}

if !is_adaptive {
let target_total = entries
.iter()
.map(|entry| entry.target_required)
.sum::<usize>();
let active_cap_configured_total = pool.adaptive_floor_active_cap_configured_total();
let warm_cap_configured_total = pool.adaptive_floor_warm_cap_configured_total();
for entry in entries {
by_dc.insert(entry.dc, entry);
}
return FamilyFloorPlan {
by_dc,
active_cap_configured_total,
active_cap_effective_total: active_cap_configured_total.max(target_total),
warm_cap_configured_total,
warm_cap_effective_total: warm_cap_configured_total,
active_writers_current,
warm_writers_current,
target_writers_total: target_total,
};
}

let active_cap_configured_total = pool.adaptive_floor_active_cap_configured_total();
let warm_cap_configured_total = pool.adaptive_floor_warm_cap_configured_total();
let other_active = active_writers_current.saturating_sub(family_active_total);
let min_sum = entries
.iter()
.map(|entry| entry.min_required)
.sum::<usize>();
let mut target_sum = entries
.iter()
.map(|entry| entry.target_required)
.sum::<usize>();
let family_cap = active_cap_configured_total
.saturating_sub(other_active)
.max(min_sum);
if target_sum > family_cap {
entries.sort_by_key(|entry| {
(
entry.has_bound_clients,
std::cmp::Reverse(entry.target_required.saturating_sub(entry.min_required)),
std::cmp::Reverse(entry.alive),
entry.dc.abs(),
entry.dc,
entry.endpoints.len(),
entry.max_required,
)
});
let mut changed = true;
while target_sum > family_cap && changed {
changed = false;
for entry in &mut entries {
if target_sum <= family_cap {
break;
}
if entry.target_required > entry.min_required {
entry.target_required -= 1;
entry.floor_capped = true;
target_sum -= 1;
changed = true;
}
}
}
}

for entry in entries {
by_dc.insert(entry.dc, entry);
}
let active_cap_effective_total =
active_cap_configured_total.max(other_active.saturating_add(min_sum));
let target_writers_total = other_active.saturating_add(target_sum);
FamilyFloorPlan {
by_dc,
active_cap_configured_total,
active_cap_effective_total,
warm_cap_configured_total,
warm_cap_effective_total: warm_cap_configured_total,
active_writers_current,
warm_writers_current,
target_writers_total,
}
}

async fn maybe_swap_idle_writer_for_cap(
pool: &Arc<MePool>,
rng: &Arc<SecureRandom>,
dc: i32,
family: IpFamily,
endpoints: &[SocketAddr],
live_writer_ids_by_addr: &HashMap<(i32, SocketAddr), Vec<u64>>,
writer_idle_since: &HashMap<u64, u64>,
bound_clients_by_writer: &HashMap<u64, usize>,
) -> bool {
let now_epoch_secs = MePool::now_epoch_secs();
let mut candidate: Option<(u64, SocketAddr, u64)> = None;
for endpoint in endpoints {
let Some(writer_ids) = live_writer_ids_by_addr.get(&(dc, *endpoint)) else {
continue;
};
for writer_id in writer_ids {
if bound_clients_by_writer.get(writer_id).copied().unwrap_or(0) > 0 {
continue;
}
let Some(idle_since_epoch_secs) = writer_idle_since.get(writer_id).copied() else {
continue;
};
let idle_age_secs = now_epoch_secs.saturating_sub(idle_since_epoch_secs);
if candidate
.as_ref()
.map(|(_, _, age)| idle_age_secs > *age)
.unwrap_or(true)
{
candidate = Some((*writer_id, *endpoint, idle_age_secs));
}
}
}

let Some((old_writer_id, endpoint, idle_age_secs)) = candidate else {
return false;
};

let connected = match tokio::time::timeout(
pool.me_one_timeout,
pool.connect_one_for_dc(endpoint, dc, rng.as_ref()),
)
.await
{
Ok(Ok(())) => true,
Ok(Err(error)) => {
debug!(
dc = %dc,
?family,
%endpoint,
old_writer_id,
idle_age_secs,
%error,
"Adaptive floor cap swap connect failed"
);
false
}
Err(_) => {
debug!(
dc = %dc,
?family,
%endpoint,
old_writer_id,
idle_age_secs,
"Adaptive floor cap swap connect timed out"
);
false
}
};
if !connected {
return false;
}

pool.mark_writer_draining_with_timeout(old_writer_id, pool.force_close_timeout(), false)
.await;
info!(
dc = %dc,
?family,
%endpoint,
old_writer_id,
idle_age_secs,
"Adaptive floor cap swap: idle writer rotated"
);
true
}

async fn maybe_refresh_idle_writer_for_dc(
pool: &Arc<MePool>,
rng: &Arc<SecureRandom>,
@@ -332,8 +788,9 @@ async fn maybe_refresh_idle_writer_for_dc(
endpoints: &[SocketAddr],
alive: usize,
required: usize,
live_writer_ids_by_addr: &HashMap<SocketAddr, Vec<u64>>,
live_writer_ids_by_addr: &HashMap<(i32, SocketAddr), Vec<u64>>,
writer_idle_since: &HashMap<u64, u64>,
bound_clients_by_writer: &HashMap<u64, usize>,
idle_refresh_next_attempt: &mut HashMap<(i32, IpFamily), Instant>,
) {
if alive < required {
@@ -350,10 +807,13 @@ async fn maybe_refresh_idle_writer_for_dc(
let now_epoch_secs = MePool::now_epoch_secs();
let mut candidate: Option<(u64, SocketAddr, u64, u64)> = None;
for endpoint in endpoints {
let Some(writer_ids) = live_writer_ids_by_addr.get(endpoint) else {
let Some(writer_ids) = live_writer_ids_by_addr.get(&(dc, *endpoint)) else {
continue;
};
for writer_id in writer_ids {
if bound_clients_by_writer.get(writer_id).copied().unwrap_or(0) > 0 {
continue;
}
let Some(idle_since_epoch_secs) = writer_idle_since.get(writer_id).copied() else {
continue;
};
@@ -377,7 +837,12 @@ async fn maybe_refresh_idle_writer_for_dc(
return;
};

let rotate_ok = match tokio::time::timeout(pool.me_one_timeout, pool.connect_one(endpoint, rng.as_ref())).await {
let rotate_ok = match tokio::time::timeout(
pool.me_one_timeout,
pool.connect_one_for_dc(endpoint, dc, rng.as_ref()),
)
.await
{
Ok(Ok(())) => true,
Ok(Err(error)) => {
debug!(
@@ -433,24 +898,22 @@ async fn maybe_refresh_idle_writer_for_dc(
async fn should_reduce_floor_for_idle(
pool: &Arc<MePool>,
key: (i32, IpFamily),
dc: i32,
endpoints: &[SocketAddr],
live_writer_ids_by_addr: &HashMap<SocketAddr, Vec<u64>>,
live_writer_ids_by_addr: &HashMap<(i32, SocketAddr), Vec<u64>>,
bound_clients_by_writer: &HashMap<u64, usize>,
adaptive_idle_since: &mut HashMap<(i32, IpFamily), Instant>,
adaptive_recover_until: &mut HashMap<(i32, IpFamily), Instant>,
) -> bool {
if endpoints.len() != 1 || pool.floor_mode() != MeFloorMode::Adaptive {
if pool.floor_mode() != MeFloorMode::Adaptive {
adaptive_idle_since.remove(&key);
adaptive_recover_until.remove(&key);
return false;
}

let now = Instant::now();
let endpoint = endpoints[0];
let writer_ids = live_writer_ids_by_addr
.get(&endpoint)
.map(Vec::as_slice)
.unwrap_or(&[]);
let has_bound_clients = has_bound_clients_on_endpoint(pool, writer_ids).await;
let writer_ids = list_writer_ids_for_endpoints(dc, endpoints, live_writer_ids_by_addr);
let has_bound_clients = has_bound_clients_on_endpoint(&writer_ids, bound_clients_by_writer);
if has_bound_clients {
adaptive_idle_since.remove(&key);
adaptive_recover_until.insert(key, now + pool.adaptive_floor_recover_grace_duration());
@@ -469,13 +932,13 @@ async fn should_reduce_floor_for_idle(
now.saturating_duration_since(*idle_since) >= pool.adaptive_floor_idle_duration()
}

async fn has_bound_clients_on_endpoint(pool: &Arc<MePool>, writer_ids: &[u64]) -> bool {
for writer_id in writer_ids {
if !pool.registry.is_writer_empty(*writer_id).await {
return true;
}
}
false
fn has_bound_clients_on_endpoint(
writer_ids: &[u64],
bound_clients_by_writer: &HashMap<u64, usize>,
) -> bool {
writer_ids
.iter()
.any(|writer_id| bound_clients_by_writer.get(writer_id).copied().unwrap_or(0) > 0)
}

async fn recover_single_endpoint_outage(
@@ -486,6 +949,7 @@ async fn recover_single_endpoint_outage(
required: usize,
outage_backoff: &mut HashMap<(i32, IpFamily), u64>,
outage_next_attempt: &mut HashMap<(i32, IpFamily), Instant>,
reconnect_budget: &mut usize,
) {
let now = Instant::now();
if let Some(ts) = outage_next_attempt.get(&key)
@@ -495,6 +959,18 @@ async fn recover_single_endpoint_outage(
}

let (min_backoff_ms, max_backoff_ms) = pool.single_endpoint_outage_backoff_bounds_ms();
if *reconnect_budget == 0 {
outage_next_attempt.insert(key, now + Duration::from_millis(min_backoff_ms.max(250)));
debug!(
dc = %key.0,
family = ?key.1,
%endpoint,
required,
"Single-endpoint outage reconnect deferred by health reconnect budget"
);
return;
}
*reconnect_budget = (*reconnect_budget).saturating_sub(1);
pool.stats
.increment_me_single_endpoint_outage_reconnect_attempt_total();

@@ -502,7 +978,12 @@ async fn recover_single_endpoint_outage(
let attempt_ok = if bypass_quarantine {
pool.stats
.increment_me_single_endpoint_quarantine_bypass_total();
match tokio::time::timeout(pool.me_one_timeout, pool.connect_one(endpoint, rng.as_ref())).await {
match tokio::time::timeout(
pool.me_one_timeout,
pool.connect_one_for_dc(endpoint, key.0, rng.as_ref()),
)
.await
{
Ok(Ok(())) => true,
Ok(Err(e)) => {
debug!(
@@ -528,7 +1009,7 @@ async fn recover_single_endpoint_outage(
let one_endpoint = [endpoint];
match tokio::time::timeout(
pool.me_one_timeout,
pool.connect_endpoints_round_robin(&one_endpoint, rng.as_ref()),
pool.connect_endpoints_round_robin(key.0, &one_endpoint, rng.as_ref()),
)
.await
{
@@ -592,7 +1073,8 @@ async fn maybe_rotate_single_endpoint_shadow(
endpoints: &[SocketAddr],
alive: usize,
required: usize,
live_writer_ids_by_addr: &HashMap<SocketAddr, Vec<u64>>,
live_writer_ids_by_addr: &HashMap<(i32, SocketAddr), Vec<u64>>,
bound_clients_by_writer: &HashMap<u64, usize>,
shadow_rotate_deadline: &mut HashMap<(i32, IpFamily), Instant>,
) {
if endpoints.len() != 1 || alive < required {
@@ -624,14 +1106,14 @@ async fn maybe_rotate_single_endpoint_shadow(
return;
}

let Some(writer_ids) = live_writer_ids_by_addr.get(&endpoint) else {
let Some(writer_ids) = live_writer_ids_by_addr.get(&(dc, endpoint)) else {
shadow_rotate_deadline.insert(key, now + Duration::from_secs(SHADOW_ROTATE_RETRY_SECS));
return;
};

let mut candidate_writer_id = None;
for writer_id in writer_ids {
|
||||
if pool.registry.is_writer_empty(*writer_id).await {
|
||||
if bound_clients_by_writer.get(writer_id).copied().unwrap_or(0) == 0 {
|
||||
candidate_writer_id = Some(*writer_id);
|
||||
break;
|
||||
}
|
||||
@@ -650,7 +1132,12 @@ async fn maybe_rotate_single_endpoint_shadow(
|
||||
return;
|
||||
};
|
||||
|
||||
let rotate_ok = match tokio::time::timeout(pool.me_one_timeout, pool.connect_one(endpoint, rng.as_ref())).await {
|
||||
let rotate_ok = match tokio::time::timeout(
|
||||
pool.me_one_timeout,
|
||||
pool.connect_one_for_dc(endpoint, dc, rng.as_ref()),
|
||||
)
|
||||
.await
|
||||
{
|
||||
Ok(Ok(())) => true,
|
||||
Ok(Err(e)) => {
|
||||
debug!(
|
||||
|
||||
@@ -10,6 +10,7 @@ mod pool_init;
mod pool_nat;
mod pool_refill;
mod pool_reinit;
mod pool_runtime_api;
mod pool_writer;
mod ping;
mod reader;

@@ -331,7 +331,7 @@ pub async fn run_me_ping(pool: &Arc<MePool>, rng: &SecureRandom) -> Vec<MePingRe
let mut error = None;
let mut route = None;

match pool.connect_tcp(addr).await {
match pool.connect_tcp(addr, None).await {
Ok((stream, conn_rtt, upstream_egress)) => {
connect_ms = Some(conn_rtt);
route = route_from_egress(upstream_egress);

@@ -22,10 +22,17 @@ pub(super) struct RefillDcKey {
pub family: IpFamily,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub(super) struct RefillEndpointKey {
pub dc: i32,
pub addr: SocketAddr,
}

#[derive(Clone)]
pub struct MeWriter {
pub id: u64,
pub addr: SocketAddr,
pub writer_dc: i32,
pub generation: u64,
pub contour: Arc<AtomicU8>,
pub created_at: Instant,
@@ -34,6 +41,7 @@ pub struct MeWriter {
pub degraded: Arc<AtomicBool>,
pub draining: Arc<AtomicBool>,
pub draining_started_at_epoch_secs: Arc<AtomicU64>,
pub drain_deadline_epoch_secs: Arc<AtomicU64>,
pub allow_drain_fallback: Arc<AtomicBool>,
}

@@ -111,18 +119,40 @@ pub struct MePool {
pub(super) me_floor_mode: AtomicU8,
pub(super) me_adaptive_floor_idle_secs: AtomicU64,
pub(super) me_adaptive_floor_min_writers_single_endpoint: AtomicU8,
pub(super) me_adaptive_floor_min_writers_multi_endpoint: AtomicU8,
pub(super) me_adaptive_floor_recover_grace_secs: AtomicU64,
pub(super) me_adaptive_floor_writers_per_core_total: AtomicU32,
pub(super) me_adaptive_floor_cpu_cores_override: AtomicU32,
pub(super) me_adaptive_floor_max_extra_writers_single_per_core: AtomicU32,
pub(super) me_adaptive_floor_max_extra_writers_multi_per_core: AtomicU32,
pub(super) me_adaptive_floor_max_active_writers_per_core: AtomicU32,
pub(super) me_adaptive_floor_max_warm_writers_per_core: AtomicU32,
pub(super) me_adaptive_floor_max_active_writers_global: AtomicU32,
pub(super) me_adaptive_floor_max_warm_writers_global: AtomicU32,
pub(super) me_adaptive_floor_cpu_cores_detected: AtomicU32,
pub(super) me_adaptive_floor_cpu_cores_effective: AtomicU32,
pub(super) me_adaptive_floor_global_cap_raw: AtomicU64,
pub(super) me_adaptive_floor_global_cap_effective: AtomicU64,
pub(super) me_adaptive_floor_target_writers_total: AtomicU64,
pub(super) me_adaptive_floor_active_cap_configured: AtomicU64,
pub(super) me_adaptive_floor_active_cap_effective: AtomicU64,
pub(super) me_adaptive_floor_warm_cap_configured: AtomicU64,
pub(super) me_adaptive_floor_warm_cap_effective: AtomicU64,
pub(super) me_adaptive_floor_active_writers_current: AtomicU64,
pub(super) me_adaptive_floor_warm_writers_current: AtomicU64,
pub(super) proxy_map_v4: Arc<RwLock<HashMap<i32, Vec<(IpAddr, u16)>>>>,
pub(super) proxy_map_v6: Arc<RwLock<HashMap<i32, Vec<(IpAddr, u16)>>>>,
pub(super) endpoint_dc_map: Arc<RwLock<HashMap<SocketAddr, Option<i32>>>>,
pub(super) default_dc: AtomicI32,
pub(super) next_writer_id: AtomicU64,
pub(super) ping_tracker: Arc<Mutex<HashMap<i64, (std::time::Instant, u64)>>>,
pub(super) ping_tracker_last_cleanup_epoch_ms: AtomicU64,
pub(super) rtt_stats: Arc<Mutex<HashMap<u64, (f64, f64)>>>,
pub(super) nat_reflection_cache: Arc<Mutex<NatReflectionCache>>,
pub(super) nat_reflection_singleflight_v4: Arc<Mutex<()>>,
pub(super) nat_reflection_singleflight_v6: Arc<Mutex<()>>,
pub(super) writer_available: Arc<Notify>,
pub(super) refill_inflight: Arc<Mutex<HashSet<SocketAddr>>>,
pub(super) refill_inflight: Arc<Mutex<HashSet<RefillEndpointKey>>>,
pub(super) refill_inflight_dc: Arc<Mutex<HashSet<RefillDcKey>>>,
pub(super) conn_count: AtomicUsize,
pub(super) stats: Arc<crate::stats::Stats>,
@@ -217,7 +247,16 @@ impl MePool {
me_floor_mode: MeFloorMode,
me_adaptive_floor_idle_secs: u64,
me_adaptive_floor_min_writers_single_endpoint: u8,
me_adaptive_floor_min_writers_multi_endpoint: u8,
me_adaptive_floor_recover_grace_secs: u64,
me_adaptive_floor_writers_per_core_total: u16,
me_adaptive_floor_cpu_cores_override: u16,
me_adaptive_floor_max_extra_writers_single_per_core: u16,
me_adaptive_floor_max_extra_writers_multi_per_core: u16,
me_adaptive_floor_max_active_writers_per_core: u16,
me_adaptive_floor_max_warm_writers_per_core: u16,
me_adaptive_floor_max_active_writers_global: u32,
me_adaptive_floor_max_warm_writers_global: u32,
hardswap: bool,
me_pool_drain_ttl_secs: u64,
me_pool_force_close_secs: u64,
@@ -239,6 +278,7 @@ impl MePool {
me_route_inline_recovery_attempts: u32,
me_route_inline_recovery_wait_ms: u64,
) -> Arc<Self> {
let endpoint_dc_map = Self::build_endpoint_dc_map_from_maps(&proxy_map_v4, &proxy_map_v6);
let registry = Arc::new(ConnRegistry::new());
registry.update_route_backpressure_policy(
me_route_backpressure_base_timeout_ms,
@@ -314,15 +354,55 @@ impl MePool {
me_adaptive_floor_min_writers_single_endpoint: AtomicU8::new(
me_adaptive_floor_min_writers_single_endpoint,
),
me_adaptive_floor_min_writers_multi_endpoint: AtomicU8::new(
me_adaptive_floor_min_writers_multi_endpoint,
),
me_adaptive_floor_recover_grace_secs: AtomicU64::new(
me_adaptive_floor_recover_grace_secs,
),
me_adaptive_floor_writers_per_core_total: AtomicU32::new(
me_adaptive_floor_writers_per_core_total as u32,
),
me_adaptive_floor_cpu_cores_override: AtomicU32::new(
me_adaptive_floor_cpu_cores_override as u32,
),
me_adaptive_floor_max_extra_writers_single_per_core: AtomicU32::new(
me_adaptive_floor_max_extra_writers_single_per_core as u32,
),
me_adaptive_floor_max_extra_writers_multi_per_core: AtomicU32::new(
me_adaptive_floor_max_extra_writers_multi_per_core as u32,
),
me_adaptive_floor_max_active_writers_per_core: AtomicU32::new(
me_adaptive_floor_max_active_writers_per_core as u32,
),
me_adaptive_floor_max_warm_writers_per_core: AtomicU32::new(
me_adaptive_floor_max_warm_writers_per_core as u32,
),
me_adaptive_floor_max_active_writers_global: AtomicU32::new(
me_adaptive_floor_max_active_writers_global,
),
me_adaptive_floor_max_warm_writers_global: AtomicU32::new(
me_adaptive_floor_max_warm_writers_global,
),
me_adaptive_floor_cpu_cores_detected: AtomicU32::new(1),
me_adaptive_floor_cpu_cores_effective: AtomicU32::new(1),
me_adaptive_floor_global_cap_raw: AtomicU64::new(0),
me_adaptive_floor_global_cap_effective: AtomicU64::new(0),
me_adaptive_floor_target_writers_total: AtomicU64::new(0),
me_adaptive_floor_active_cap_configured: AtomicU64::new(0),
me_adaptive_floor_active_cap_effective: AtomicU64::new(0),
me_adaptive_floor_warm_cap_configured: AtomicU64::new(0),
me_adaptive_floor_warm_cap_effective: AtomicU64::new(0),
me_adaptive_floor_active_writers_current: AtomicU64::new(0),
me_adaptive_floor_warm_writers_current: AtomicU64::new(0),
pool_size: 2,
proxy_map_v4: Arc::new(RwLock::new(proxy_map_v4)),
proxy_map_v6: Arc::new(RwLock::new(proxy_map_v6)),
default_dc: AtomicI32::new(default_dc.unwrap_or(0)),
endpoint_dc_map: Arc::new(RwLock::new(endpoint_dc_map)),
default_dc: AtomicI32::new(default_dc.unwrap_or(2)),
next_writer_id: AtomicU64::new(1),
ping_tracker: Arc::new(Mutex::new(HashMap::new())),
ping_tracker_last_cleanup_epoch_ms: AtomicU64::new(0),
rtt_stats: Arc::new(Mutex::new(HashMap::new())),
nat_reflection_cache: Arc::new(Mutex::new(NatReflectionCache::default())),
nat_reflection_singleflight_v4: Arc::new(Mutex::new(())),
@@ -399,7 +479,16 @@ impl MePool {
floor_mode: MeFloorMode,
adaptive_floor_idle_secs: u64,
adaptive_floor_min_writers_single_endpoint: u8,
adaptive_floor_min_writers_multi_endpoint: u8,
adaptive_floor_recover_grace_secs: u64,
adaptive_floor_writers_per_core_total: u16,
adaptive_floor_cpu_cores_override: u16,
adaptive_floor_max_extra_writers_single_per_core: u16,
adaptive_floor_max_extra_writers_multi_per_core: u16,
adaptive_floor_max_active_writers_per_core: u16,
adaptive_floor_max_warm_writers_per_core: u16,
adaptive_floor_max_active_writers_global: u32,
adaptive_floor_max_warm_writers_global: u32,
) {
self.hardswap.store(hardswap, Ordering::Relaxed);
self.me_pool_drain_ttl_secs
@@ -443,8 +532,38 @@ impl MePool {
.store(adaptive_floor_idle_secs, Ordering::Relaxed);
self.me_adaptive_floor_min_writers_single_endpoint
.store(adaptive_floor_min_writers_single_endpoint, Ordering::Relaxed);
self.me_adaptive_floor_min_writers_multi_endpoint
.store(adaptive_floor_min_writers_multi_endpoint, Ordering::Relaxed);
self.me_adaptive_floor_recover_grace_secs
.store(adaptive_floor_recover_grace_secs, Ordering::Relaxed);
self.me_adaptive_floor_writers_per_core_total
.store(adaptive_floor_writers_per_core_total as u32, Ordering::Relaxed);
self.me_adaptive_floor_cpu_cores_override
.store(adaptive_floor_cpu_cores_override as u32, Ordering::Relaxed);
self.me_adaptive_floor_max_extra_writers_single_per_core
.store(
adaptive_floor_max_extra_writers_single_per_core as u32,
Ordering::Relaxed,
);
self.me_adaptive_floor_max_extra_writers_multi_per_core
.store(
adaptive_floor_max_extra_writers_multi_per_core as u32,
Ordering::Relaxed,
);
self.me_adaptive_floor_max_active_writers_per_core
.store(
adaptive_floor_max_active_writers_per_core as u32,
Ordering::Relaxed,
);
self.me_adaptive_floor_max_warm_writers_per_core
.store(
adaptive_floor_max_warm_writers_per_core as u32,
Ordering::Relaxed,
);
self.me_adaptive_floor_max_active_writers_global
.store(adaptive_floor_max_active_writers_global, Ordering::Relaxed);
self.me_adaptive_floor_max_warm_writers_global
.store(adaptive_floor_max_warm_writers_global, Ordering::Relaxed);
if previous_floor_mode != floor_mode {
self.stats.increment_me_floor_mode_switch_total();
match (previous_floor_mode, floor_mode) {
@@ -515,6 +634,28 @@ impl MePool {
self.proxy_secret.read().await.key_selector
}

pub(super) async fn non_draining_writer_counts_by_contour(&self) -> (usize, usize, usize) {
let ws = self.writers.read().await;
let mut active = 0usize;
let mut warm = 0usize;
for writer in ws.iter() {
if writer.draining.load(Ordering::Relaxed) {
continue;
}
match WriterContour::from_u8(writer.contour.load(Ordering::Relaxed)) {
WriterContour::Active => active = active.saturating_add(1),
WriterContour::Warm => warm = warm.saturating_add(1),
WriterContour::Draining => {}
}
}
(active, warm, active.saturating_add(warm))
}

pub(super) async fn active_contour_writer_count_total(&self) -> usize {
let (active, _, _) = self.non_draining_writer_counts_by_contour().await;
active
}

pub(super) async fn secret_snapshot(&self) -> SecretSnapshot {
self.proxy_secret.read().await.clone()
}
@@ -551,6 +692,201 @@ impl MePool {
)
}

pub(super) fn adaptive_floor_min_writers_multi_endpoint(&self) -> usize {
(self
.me_adaptive_floor_min_writers_multi_endpoint
.load(Ordering::Relaxed) as usize)
.max(1)
}

pub(super) fn adaptive_floor_max_extra_single_per_core(&self) -> usize {
self.me_adaptive_floor_max_extra_writers_single_per_core
.load(Ordering::Relaxed) as usize
}

pub(super) fn adaptive_floor_max_extra_multi_per_core(&self) -> usize {
self.me_adaptive_floor_max_extra_writers_multi_per_core
.load(Ordering::Relaxed) as usize
}

pub(super) fn adaptive_floor_max_active_writers_per_core(&self) -> usize {
(self
.me_adaptive_floor_max_active_writers_per_core
.load(Ordering::Relaxed) as usize)
.max(1)
}

pub(super) fn adaptive_floor_max_warm_writers_per_core(&self) -> usize {
(self
.me_adaptive_floor_max_warm_writers_per_core
.load(Ordering::Relaxed) as usize)
.max(1)
}

pub(super) fn adaptive_floor_max_active_writers_global(&self) -> usize {
(self
.me_adaptive_floor_max_active_writers_global
.load(Ordering::Relaxed) as usize)
.max(1)
}

pub(super) fn adaptive_floor_max_warm_writers_global(&self) -> usize {
(self
.me_adaptive_floor_max_warm_writers_global
.load(Ordering::Relaxed) as usize)
.max(1)
}

pub(super) fn adaptive_floor_detected_cpu_cores(&self) -> usize {
std::thread::available_parallelism()
.map(|value| value.get())
.unwrap_or(1)
.max(1)
}

pub(super) fn adaptive_floor_effective_cpu_cores(&self) -> usize {
let detected = self.adaptive_floor_detected_cpu_cores();
let override_cores = self
.me_adaptive_floor_cpu_cores_override
.load(Ordering::Relaxed) as usize;
let effective = if override_cores == 0 {
detected
} else {
override_cores.max(1)
};
self.me_adaptive_floor_cpu_cores_detected
.store(detected as u32, Ordering::Relaxed);
self.me_adaptive_floor_cpu_cores_effective
.store(effective as u32, Ordering::Relaxed);
self.stats
.set_me_floor_cpu_cores_detected_gauge(detected as u64);
self.stats
.set_me_floor_cpu_cores_effective_gauge(effective as u64);
effective
}

pub(super) fn adaptive_floor_active_cap_configured_total(&self) -> usize {
let cores = self.adaptive_floor_effective_cpu_cores();
let per_core_cap = cores.saturating_mul(self.adaptive_floor_max_active_writers_per_core());
let configured = per_core_cap.min(self.adaptive_floor_max_active_writers_global());
self.me_adaptive_floor_active_cap_configured
.store(configured as u64, Ordering::Relaxed);
self.stats
.set_me_floor_active_cap_configured_gauge(configured as u64);
configured
}

pub(super) fn adaptive_floor_warm_cap_configured_total(&self) -> usize {
let cores = self.adaptive_floor_effective_cpu_cores();
let per_core_cap = cores.saturating_mul(self.adaptive_floor_max_warm_writers_per_core());
let configured = per_core_cap.min(self.adaptive_floor_max_warm_writers_global());
self.me_adaptive_floor_warm_cap_configured
.store(configured as u64, Ordering::Relaxed);
self.stats
.set_me_floor_warm_cap_configured_gauge(configured as u64);
configured
}

pub(super) fn set_adaptive_floor_runtime_caps(
&self,
active_cap_configured: usize,
active_cap_effective: usize,
warm_cap_configured: usize,
warm_cap_effective: usize,
target_writers_total: usize,
active_writers_current: usize,
warm_writers_current: usize,
) {
self.me_adaptive_floor_global_cap_raw
.store(active_cap_configured as u64, Ordering::Relaxed);
self.me_adaptive_floor_global_cap_effective
.store(active_cap_effective as u64, Ordering::Relaxed);
self.me_adaptive_floor_target_writers_total
.store(target_writers_total as u64, Ordering::Relaxed);
self.me_adaptive_floor_active_cap_configured
.store(active_cap_configured as u64, Ordering::Relaxed);
self.me_adaptive_floor_active_cap_effective
.store(active_cap_effective as u64, Ordering::Relaxed);
self.me_adaptive_floor_warm_cap_configured
.store(warm_cap_configured as u64, Ordering::Relaxed);
self.me_adaptive_floor_warm_cap_effective
.store(warm_cap_effective as u64, Ordering::Relaxed);
self.me_adaptive_floor_active_writers_current
.store(active_writers_current as u64, Ordering::Relaxed);
self.me_adaptive_floor_warm_writers_current
.store(warm_writers_current as u64, Ordering::Relaxed);
self.stats
.set_me_floor_global_cap_raw_gauge(active_cap_configured as u64);
self.stats
.set_me_floor_global_cap_effective_gauge(active_cap_effective as u64);
self.stats
.set_me_floor_target_writers_total_gauge(target_writers_total as u64);
self.stats
.set_me_floor_active_cap_configured_gauge(active_cap_configured as u64);
self.stats
.set_me_floor_active_cap_effective_gauge(active_cap_effective as u64);
self.stats
.set_me_floor_warm_cap_configured_gauge(warm_cap_configured as u64);
self.stats
.set_me_floor_warm_cap_effective_gauge(warm_cap_effective as u64);
self.stats
.set_me_writers_active_current_gauge(active_writers_current as u64);
self.stats
.set_me_writers_warm_current_gauge(warm_writers_current as u64);
}

pub(super) async fn active_coverage_required_total(&self) -> usize {
let mut endpoints_by_dc = HashMap::<i32, HashSet<SocketAddr>>::new();

if self.decision.ipv4_me {
let map = self.proxy_map_v4.read().await;
for (dc, addrs) in map.iter() {
let entry = endpoints_by_dc.entry(*dc).or_default();
for (ip, port) in addrs.iter().copied() {
entry.insert(SocketAddr::new(ip, port));
}
}
}

if self.decision.ipv6_me {
let map = self.proxy_map_v6.read().await;
for (dc, addrs) in map.iter() {
let entry = endpoints_by_dc.entry(*dc).or_default();
for (ip, port) in addrs.iter().copied() {
entry.insert(SocketAddr::new(ip, port));
}
}
}

endpoints_by_dc
.values()
.map(|endpoints| self.required_writers_for_dc_with_floor_mode(endpoints.len(), false))
.sum()
}

pub(super) async fn can_open_writer_for_contour(
&self,
contour: WriterContour,
allow_coverage_override: bool,
) -> bool {
let (active_writers, warm_writers, _) = self.non_draining_writer_counts_by_contour().await;
match contour {
WriterContour::Active => {
let active_cap = self.adaptive_floor_active_cap_configured_total();
if active_writers < active_cap {
return true;
}
if !allow_coverage_override {
return false;
}
let coverage_required = self.active_coverage_required_total().await;
active_writers < coverage_required
}
WriterContour::Warm => warm_writers < self.adaptive_floor_warm_cap_configured_total(),
WriterContour::Draining => true,
}
}

pub(super) fn required_writers_for_dc_with_floor_mode(
&self,
endpoint_count: usize,
@@ -560,13 +896,20 @@ impl MePool {
if !reduce_for_idle {
return base_required;
}
if endpoint_count != 1 || self.floor_mode() != MeFloorMode::Adaptive {
if self.floor_mode() != MeFloorMode::Adaptive {
return base_required;
}
let min_writers = (self
.me_adaptive_floor_min_writers_single_endpoint
.load(Ordering::Relaxed) as usize)
.max(1);
let min_writers = if endpoint_count == 1 {
(self
.me_adaptive_floor_min_writers_single_endpoint
.load(Ordering::Relaxed) as usize)
.max(1)
} else {
(self
.me_adaptive_floor_min_writers_multi_endpoint
.load(Ordering::Relaxed) as usize)
.max(1)
};
base_required.min(min_writers)
}
@@ -625,6 +968,51 @@ impl MePool {
order
}

pub(super) fn default_dc_for_routing(&self) -> i32 {
let dc = self.default_dc.load(Ordering::Relaxed);
if dc == 0 { 2 } else { dc }
}

pub(super) async fn has_configured_endpoints_for_dc(&self, dc: i32) -> bool {
if self.decision.ipv4_me {
let map = self.proxy_map_v4.read().await;
if map.get(&dc).is_some_and(|endpoints| !endpoints.is_empty()) {
return true;
}
}

if self.decision.ipv6_me {
let map = self.proxy_map_v6.read().await;
if map.get(&dc).is_some_and(|endpoints| !endpoints.is_empty()) {
return true;
}
}

false
}

pub(super) async fn resolve_target_dc_for_routing(&self, target_dc: i32) -> (i32, bool) {
if target_dc == 0 {
return (self.default_dc_for_routing(), true);
}

if self.has_configured_endpoints_for_dc(target_dc).await {
return (target_dc, false);
}

(self.default_dc_for_routing(), true)
}

pub(super) async fn resolve_dc_for_endpoint(&self, addr: SocketAddr) -> i32 {
if let Some(cached) = self.endpoint_dc_map.read().await.get(&addr).copied()
&& let Some(dc) = cached
{
return dc;
}

self.default_dc_for_routing()
}

pub(super) async fn proxy_map_for_family(
&self,
family: IpFamily,
@@ -634,4 +1022,48 @@ impl MePool {
IpFamily::V6 => self.proxy_map_v6.read().await.clone(),
}
}

fn merge_endpoint_dc(
endpoint_dc_map: &mut HashMap<SocketAddr, Option<i32>>,
dc: i32,
ip: IpAddr,
port: u16,
) {
let endpoint = SocketAddr::new(ip, port);
match endpoint_dc_map.get_mut(&endpoint) {
None => {
endpoint_dc_map.insert(endpoint, Some(dc));
}
Some(existing) => {
if existing.is_some_and(|existing_dc| existing_dc != dc) {
*existing = None;
}
}
}
}

fn build_endpoint_dc_map_from_maps(
map_v4: &HashMap<i32, Vec<(IpAddr, u16)>>,
map_v6: &HashMap<i32, Vec<(IpAddr, u16)>>,
) -> HashMap<SocketAddr, Option<i32>> {
let mut endpoint_dc_map = HashMap::<SocketAddr, Option<i32>>::new();
for (dc, endpoints) in map_v4 {
for (ip, port) in endpoints {
Self::merge_endpoint_dc(&mut endpoint_dc_map, *dc, *ip, *port);
}
}
for (dc, endpoints) in map_v6 {
for (ip, port) in endpoints {
Self::merge_endpoint_dc(&mut endpoint_dc_map, *dc, *ip, *port);
}
}
endpoint_dc_map
}

pub(super) async fn rebuild_endpoint_dc_map(&self) {
let map_v4 = self.proxy_map_v4.read().await.clone();
let map_v6 = self.proxy_map_v6.read().await.clone();
let rebuilt = Self::build_endpoint_dc_map_from_maps(&map_v4, &map_v6);
*self.endpoint_dc_map.write().await = rebuilt;
}
}

@@ -54,6 +54,7 @@ impl MePool {
&& let Some(addrs) = guard.get(&k).cloned()
{
guard.insert(-k, addrs);
changed = true;
}
}
}
@@ -65,9 +66,14 @@ impl MePool {
&& let Some(addrs) = guard.get(&k).cloned()
{
guard.insert(-k, addrs);
changed = true;
}
}
}
if changed {
self.rebuild_endpoint_dc_map().await;
self.writer_available.notify_waiters();
}
if changed {
SnapshotApplyOutcome::AppliedChanged
} else {
@@ -104,7 +110,10 @@ impl MePool {
pub async fn reconnect_all(self: &Arc<Self>) {
let ws = self.writers.read().await.clone();
for w in ws {
if let Ok(()) = self.connect_one(w.addr, self.rng.as_ref()).await {
if let Ok(()) = self
.connect_one_for_dc(w.addr, w.writer_dc, self.rng.as_ref())
.await
{
self.mark_writer_draining(w.id).await;
tokio::time::sleep(Duration::from_secs(2)).await;
}
@@ -1,4 +1,4 @@
use std::collections::{HashMap, HashSet};
use std::collections::HashSet;
use std::net::{IpAddr, SocketAddr};
use std::sync::Arc;

@@ -27,20 +27,14 @@ impl MePool {

for family in family_order {
let map = self.proxy_map_for_family(family).await;
let mut grouped_dc_addrs: HashMap<i32, Vec<(IpAddr, u16)>> = HashMap::new();
for (dc, addrs) in map {
if addrs.is_empty() {
continue;
}
grouped_dc_addrs.entry(dc.abs()).or_default().extend(addrs);
}
let mut dc_addrs: Vec<(i32, Vec<(IpAddr, u16)>)> = grouped_dc_addrs
let mut dc_addrs: Vec<(i32, Vec<(IpAddr, u16)>)> = map
.into_iter()
.map(|(dc, mut addrs)| {
addrs.sort_unstable();
addrs.dedup();
(dc, addrs)
})
.filter(|(_, addrs)| !addrs.is_empty())
.collect();
dc_addrs.sort_unstable_by_key(|(dc, _)| *dc);
dc_addrs.sort_by_key(|(_, addrs)| (addrs.len() != 1, addrs.len()));
@@ -61,7 +55,11 @@ impl MePool {
.iter()
.map(|(ip, port)| SocketAddr::new(*ip, *port))
.collect();
if self.active_writer_count_for_endpoints(&endpoints).await >= target_writers {
if self
.active_writer_count_for_dc_endpoints(dc, &endpoints)
.await
>= target_writers
{
continue;
}
let pool = Arc::clone(self);
@@ -73,6 +71,7 @@ impl MePool {
target_writers,
rng_clone,
connect_concurrency,
true,
)
.await
});
@@ -85,7 +84,7 @@ impl MePool {
.iter()
.map(|(ip, port)| SocketAddr::new(*ip, *port))
.collect();
if self.active_writer_count_for_endpoints(&endpoints).await == 0 {
if self.active_writer_count_for_dc_endpoints(*dc, &endpoints).await == 0 {
missing_dcs.push(*dc);
}
}
@@ -116,6 +115,7 @@ impl MePool {
target_writers,
rng_clone_local,
connect_concurrency,
false,
)
.await
});
@@ -149,6 +149,7 @@ impl MePool {
target_writers: usize,
rng: Arc<SecureRandom>,
connect_concurrency: usize,
allow_coverage_override: bool,
) -> bool {
if addrs.is_empty() {
return false;
@@ -162,7 +163,9 @@ impl MePool {
let endpoint_set: HashSet<SocketAddr> = endpoints.iter().copied().collect();

loop {
let alive = self.active_writer_count_for_endpoints(&endpoint_set).await;
let alive = self
.active_writer_count_for_dc_endpoints(dc, &endpoint_set)
.await;
if alive >= target_writers {
info!(
dc = %dc,
@@ -180,9 +183,17 @@ impl MePool {
let pool = Arc::clone(&self);
let rng_clone = Arc::clone(&rng);
let endpoints_clone = endpoints.clone();
let generation = self.current_generation();
join.spawn(async move {
pool.connect_endpoints_round_robin(&endpoints_clone, rng_clone.as_ref())
.await
pool.connect_endpoints_round_robin_with_generation_contour(
dc,
&endpoints_clone,
rng_clone.as_ref(),
generation,
super::pool::WriterContour::Active,
allow_coverage_override,
)
.await
});
}

@@ -199,7 +210,9 @@ impl MePool {
}
}

let alive_after = self.active_writer_count_for_endpoints(&endpoint_set).await;
let alive_after = self
.active_writer_count_for_dc_endpoints(dc, &endpoint_set)
.await;
if alive_after >= target_writers {
info!(
dc = %dc,
@@ -210,12 +223,25 @@ impl MePool {
return true;
}
if !progress {
warn!(
dc = %dc,
alive = alive_after,
target_writers,
"All ME servers for DC failed at init"
);
let active_writers_current = self.active_contour_writer_count_total().await;
let active_cap_configured = self.adaptive_floor_active_cap_configured_total();
if !allow_coverage_override && active_writers_current >= active_cap_configured {
info!(
dc = %dc,
alive = alive_after,
target_writers,
active_writers_current,
active_cap_configured,
"ME init saturation stopped by active writer cap"
);
} else {
warn!(
dc = %dc,
alive = alive_after,
target_writers,
"All ME servers for DC failed at init"
);
}
return false;
}
@@ -9,7 +9,7 @@ use tracing::{debug, info, warn};
use crate::crypto::SecureRandom;
use crate::network::IpFamily;

use super::pool::{MePool, RefillDcKey, WriterContour};
use super::pool::{MePool, RefillDcKey, RefillEndpointKey, WriterContour};

const ME_FLAP_UPTIME_THRESHOLD_SECS: u64 = 20;
const ME_FLAP_QUARANTINE_SECS: u64 = 25;
@@ -82,80 +82,36 @@ impl MePool {
        Vec::new()
    }

    pub(super) async fn has_refill_inflight_for_endpoints(&self, endpoints: &[SocketAddr]) -> bool {
        if endpoints.is_empty() {
            return false;
        }

        {
            let guard = self.refill_inflight.lock().await;
            if endpoints.iter().any(|addr| guard.contains(addr)) {
                return true;
            }
        }

        let dc_keys = self.resolve_refill_dc_keys_for_endpoints(endpoints).await;
        if dc_keys.is_empty() {
            return false;
        }
    pub(super) async fn has_refill_inflight_for_dc_key(&self, key: RefillDcKey) -> bool {
        let guard = self.refill_inflight_dc.lock().await;
        dc_keys.iter().any(|key| guard.contains(key))
    }

    async fn resolve_refill_dc_key_for_addr(&self, addr: SocketAddr) -> Option<RefillDcKey> {
        let family = if addr.is_ipv4() {
            IpFamily::V4
        } else {
            IpFamily::V6
        };
        let map = self.proxy_map_for_family(family).await;
        for (dc, endpoints) in map {
            if endpoints
                .into_iter()
                .any(|(ip, port)| SocketAddr::new(ip, port) == addr)
            {
                return Some(RefillDcKey {
                    dc: dc.abs(),
                    family,
                });
            }
        }
        None
    }

    async fn resolve_refill_dc_keys_for_endpoints(
        &self,
        endpoints: &[SocketAddr],
    ) -> HashSet<RefillDcKey> {
        let mut out = HashSet::<RefillDcKey>::new();
        for addr in endpoints {
            if let Some(key) = self.resolve_refill_dc_key_for_addr(*addr).await {
                out.insert(key);
            }
        }
        out
        guard.contains(&key)
    }

    pub(super) async fn connect_endpoints_round_robin(
        self: &Arc<Self>,
        dc: i32,
        endpoints: &[SocketAddr],
        rng: &SecureRandom,
    ) -> bool {
        self.connect_endpoints_round_robin_with_generation_contour(
            dc,
            endpoints,
            rng,
            self.current_generation(),
            WriterContour::Active,
            false,
        )
        .await
    }

    pub(super) async fn connect_endpoints_round_robin_with_generation_contour(
        self: &Arc<Self>,
        dc: i32,
        endpoints: &[SocketAddr],
        rng: &SecureRandom,
        generation: u64,
        contour: WriterContour,
        allow_coverage_override: bool,
    ) -> bool {
        let candidates = self.connectable_endpoints(endpoints).await;
        if candidates.is_empty() {
@@ -166,7 +122,14 @@ impl MePool {
            let idx = (start + offset) % candidates.len();
            let addr = candidates[idx];
            match self
                .connect_one_with_generation_contour(addr, rng, generation, contour)
                .connect_one_with_generation_contour_for_dc_with_cap_policy(
                    addr,
                    rng,
                    generation,
                    contour,
                    dc,
                    allow_coverage_override,
                )
                .await
            {
                Ok(()) => return true,
@@ -176,48 +139,23 @@ impl MePool {
        false
    }

    async fn endpoints_for_same_dc(&self, addr: SocketAddr) -> Vec<SocketAddr> {
        let mut target_dc = HashSet::<i32>::new();
    async fn endpoints_for_dc(&self, target_dc: i32) -> Vec<SocketAddr> {
        let mut endpoints = HashSet::<SocketAddr>::new();

        if self.decision.ipv4_me {
            let map = self.proxy_map_v4.read().await.clone();
            for (dc, addrs) in &map {
                if addrs
                    .iter()
                    .any(|(ip, port)| SocketAddr::new(*ip, *port) == addr)
                {
                    target_dc.insert(dc.abs());
                }
            }
            for dc in &target_dc {
                for key in [*dc, -*dc] {
                    if let Some(addrs) = map.get(&key) {
                        for (ip, port) in addrs {
                            endpoints.insert(SocketAddr::new(*ip, *port));
                        }
                    }
            let map = self.proxy_map_v4.read().await;
            if let Some(addrs) = map.get(&target_dc) {
                for (ip, port) in addrs {
                    endpoints.insert(SocketAddr::new(*ip, *port));
                }
            }
        }

        if self.decision.ipv6_me {
            let map = self.proxy_map_v6.read().await.clone();
            for (dc, addrs) in &map {
                if addrs
                    .iter()
                    .any(|(ip, port)| SocketAddr::new(*ip, *port) == addr)
                {
                    target_dc.insert(dc.abs());
                }
            }
            for dc in &target_dc {
                for key in [*dc, -*dc] {
                    if let Some(addrs) = map.get(&key) {
                        for (ip, port) in addrs {
                            endpoints.insert(SocketAddr::new(*ip, *port));
                        }
                    }
            let map = self.proxy_map_v6.read().await;
            if let Some(addrs) = map.get(&target_dc) {
                for (ip, port) in addrs {
                    endpoints.insert(SocketAddr::new(*ip, *port));
                }
            }
        }
@@ -227,14 +165,14 @@ impl MePool {
        sorted
    }

    async fn refill_writer_after_loss(self: &Arc<Self>, addr: SocketAddr) -> bool {
    async fn refill_writer_after_loss(self: &Arc<Self>, addr: SocketAddr, writer_dc: i32) -> bool {
        let fast_retries = self.me_reconnect_fast_retry_count.max(1);
        let same_endpoint_quarantined = self.is_endpoint_quarantined(addr).await;

        if !same_endpoint_quarantined {
            for attempt in 0..fast_retries {
                self.stats.increment_me_reconnect_attempt();
                match self.connect_one(addr, self.rng.as_ref()).await {
                match self.connect_one_for_dc(addr, writer_dc, self.rng.as_ref()).await {
                    Ok(()) => {
                        self.stats.increment_me_reconnect_success();
                        self.stats.increment_me_writer_restored_same_endpoint_total();
@@ -262,7 +200,7 @@ impl MePool {
            );
        }

        let dc_endpoints = self.endpoints_for_same_dc(addr).await;
        let dc_endpoints = self.endpoints_for_dc(writer_dc).await;
        if dc_endpoints.is_empty() {
            self.stats.increment_me_refill_failed_total();
            return false;
@@ -271,7 +209,7 @@ impl MePool {
        for attempt in 0..fast_retries {
            self.stats.increment_me_reconnect_attempt();
            if self
                .connect_endpoints_round_robin(&dc_endpoints, self.rng.as_ref())
                .connect_endpoints_round_robin(writer_dc, &dc_endpoints, self.rng.as_ref())
                .await
            {
                self.stats.increment_me_reconnect_success();
@@ -289,48 +227,63 @@ impl MePool {
        false
    }

    pub(crate) fn trigger_immediate_refill(self: &Arc<Self>, addr: SocketAddr) {
    pub(crate) fn trigger_immediate_refill_for_dc(self: &Arc<Self>, addr: SocketAddr, writer_dc: i32) {
        let endpoint_key = RefillEndpointKey {
            dc: writer_dc,
            addr,
        };
        let pre_inserted = if let Ok(mut guard) = self.refill_inflight.try_lock() {
            if !guard.insert(endpoint_key) {
                self.stats.increment_me_refill_skipped_inflight_total();
                return;
            }
            true
        } else {
            false
        };

        let pool = Arc::clone(self);
        tokio::spawn(async move {
            let dc_endpoints = pool.endpoints_for_same_dc(addr).await;
            let dc_keys = pool.resolve_refill_dc_keys_for_endpoints(&dc_endpoints).await;
            let dc_key = RefillDcKey {
                dc: writer_dc,
                family: if addr.is_ipv4() {
                    IpFamily::V4
                } else {
                    IpFamily::V6
                },
            };

            {
            if !pre_inserted {
                let mut guard = pool.refill_inflight.lock().await;
                if !guard.insert(addr) {
                if !guard.insert(endpoint_key) {
                    pool.stats.increment_me_refill_skipped_inflight_total();
                    return;
                }
            }

            if !dc_keys.is_empty() {
            {
                let mut dc_guard = pool.refill_inflight_dc.lock().await;
                if dc_keys.iter().any(|key| dc_guard.contains(key)) {
                if dc_guard.contains(&dc_key) {
                    pool.stats.increment_me_refill_skipped_inflight_total();
                    drop(dc_guard);
                    let mut guard = pool.refill_inflight.lock().await;
                    guard.remove(&addr);
                    guard.remove(&endpoint_key);
                    return;
                }
                dc_guard.extend(dc_keys.iter().copied());
                dc_guard.insert(dc_key);
            }

            pool.stats.increment_me_refill_triggered_total();

            let restored = pool.refill_writer_after_loss(addr).await;
            let restored = pool.refill_writer_after_loss(addr, writer_dc).await;
            if !restored {
                warn!(%addr, "ME immediate refill failed");
                warn!(%addr, dc = writer_dc, "ME immediate refill failed");
            }

            let mut guard = pool.refill_inflight.lock().await;
            guard.remove(&addr);
            guard.remove(&endpoint_key);
            drop(guard);
            if !dc_keys.is_empty() {
                let mut dc_guard = pool.refill_inflight_dc.lock().await;
                for key in &dc_keys {
                    dc_guard.remove(key);
                }
            }
            let mut dc_guard = pool.refill_inflight_dc.lock().await;
            dc_guard.remove(&dc_key);
        });
    }
}

@@ -62,7 +62,7 @@ impl MePool {

    fn coverage_ratio(
        desired_by_dc: &HashMap<i32, HashSet<SocketAddr>>,
        active_writer_addrs: &HashSet<SocketAddr>,
        active_writer_addrs: &HashSet<(i32, SocketAddr)>,
    ) -> (f32, Vec<i32>) {
        if desired_by_dc.is_empty() {
            return (1.0, Vec::new());
@@ -76,7 +76,7 @@ impl MePool {
            }
            if endpoints
                .iter()
                .any(|addr| active_writer_addrs.contains(addr))
                .any(|addr| active_writer_addrs.contains(&(*dc, *addr)))
            {
                covered += 1;
            } else {
@@ -91,32 +91,25 @@ impl MePool {
    }

    pub async fn reconcile_connections(self: &Arc<Self>, rng: &SecureRandom) {
        let writers = self.writers.read().await;
        let current: HashSet<SocketAddr> = writers
            .iter()
            .filter(|w| !w.draining.load(Ordering::Relaxed))
            .map(|w| w.addr)
            .collect();
        drop(writers);

        for family in self.family_order() {
            let map = self.proxy_map_for_family(family).await;
            for (_dc, addrs) in &map {
            for (dc, addrs) in &map {
                let dc_addrs: Vec<SocketAddr> = addrs
                    .iter()
                    .map(|(ip, port)| SocketAddr::new(*ip, *port))
                    .collect();
                if !dc_addrs.iter().any(|a| current.contains(a)) {
                let dc_endpoints: HashSet<SocketAddr> = dc_addrs.iter().copied().collect();
                if self.active_writer_count_for_dc_endpoints(*dc, &dc_endpoints).await == 0 {
                    let mut shuffled = dc_addrs.clone();
                    shuffled.shuffle(&mut rand::rng());
                    for addr in shuffled {
                        if self.connect_one(addr, rng).await.is_ok() {
                        if self.connect_one_for_dc(addr, *dc, rng).await.is_ok() {
                            break;
                        }
                    }
                }
            }
            if !self.decision.effective_multipath && !current.is_empty() {
            if !self.decision.effective_multipath && self.connection_count() > 0 {
                break;
            }
        }
@@ -128,7 +121,7 @@ impl MePool {
        if self.decision.ipv4_me {
            let map_v4 = self.proxy_map_v4.read().await.clone();
            for (dc, addrs) in map_v4 {
                let entry = out.entry(dc.abs()).or_default();
                let entry = out.entry(dc).or_default();
                for (ip, port) in addrs {
                    entry.insert(SocketAddr::new(ip, port));
                }
@@ -138,7 +131,7 @@ impl MePool {
        if self.decision.ipv6_me {
            let map_v6 = self.proxy_map_v6.read().await.clone();
            for (dc, addrs) in map_v6 {
                let entry = out.entry(dc.abs()).or_default();
                let entry = out.entry(dc).or_default();
                for (ip, port) in addrs {
                    entry.insert(SocketAddr::new(ip, port));
                }
@@ -174,26 +167,30 @@ impl MePool {
        core.saturating_add(rand::rng().random_range(0..=jitter))
    }

    async fn fresh_writer_count_for_endpoints(
    async fn fresh_writer_count_for_dc_endpoints(
        &self,
        generation: u64,
        dc: i32,
        endpoints: &HashSet<SocketAddr>,
    ) -> usize {
        let ws = self.writers.read().await;
        ws.iter()
            .filter(|w| !w.draining.load(Ordering::Relaxed))
            .filter(|w| w.generation == generation)
            .filter(|w| w.writer_dc == dc)
            .filter(|w| endpoints.contains(&w.addr))
            .count()
    }

    pub(super) async fn active_writer_count_for_endpoints(
    pub(super) async fn active_writer_count_for_dc_endpoints(
        &self,
        dc: i32,
        endpoints: &HashSet<SocketAddr>,
    ) -> usize {
        let ws = self.writers.read().await;
        ws.iter()
            .filter(|w| !w.draining.load(Ordering::Relaxed))
            .filter(|w| w.writer_dc == dc)
            .filter(|w| endpoints.contains(&w.addr))
            .count()
    }
@@ -220,7 +217,7 @@ impl MePool {
        let required = self.required_writers_for_dc(endpoint_list.len());
        let mut completed = false;
        let mut last_fresh_count = self
            .fresh_writer_count_for_endpoints(generation, endpoints)
            .fresh_writer_count_for_dc_endpoints(generation, *dc, endpoints)
            .await;

        for pass_idx in 0..total_passes {
@@ -247,10 +244,12 @@ impl MePool {

            let connected = self
                .connect_endpoints_round_robin_with_generation_contour(
                    *dc,
                    &endpoint_list,
                    rng,
                    generation,
                    WriterContour::Warm,
                    false,
                )
                .await;
            debug!(
@@ -265,7 +264,7 @@ impl MePool {
            }

            last_fresh_count = self
                .fresh_writer_count_for_endpoints(generation, endpoints)
                .fresh_writer_count_for_dc_endpoints(generation, *dc, endpoints)
                .await;
            if last_fresh_count >= required {
                completed = true;
@@ -377,10 +376,10 @@ impl MePool {
        }

        let writers = self.writers.read().await;
        let active_writer_addrs: HashSet<SocketAddr> = writers
        let active_writer_addrs: HashSet<(i32, SocketAddr)> = writers
            .iter()
            .filter(|w| !w.draining.load(Ordering::Relaxed))
            .map(|w| w.addr)
            .map(|w| (w.writer_dc, w.addr))
            .collect();
        let min_ratio = Self::permille_to_ratio(
            self.me_pool_min_fresh_ratio_permille
@@ -410,6 +409,7 @@ impl MePool {
            .iter()
            .filter(|w| !w.draining.load(Ordering::Relaxed))
            .filter(|w| w.generation == generation)
            .filter(|w| w.writer_dc == *dc)
            .filter(|w| endpoints.contains(&w.addr))
            .count();
        if fresh_count < required {
@@ -438,9 +438,9 @@ impl MePool {
            self.promote_warm_generation_to_active(generation).await;
        }

        let desired_addrs: HashSet<SocketAddr> = desired_by_dc
            .values()
            .flat_map(|set| set.iter().copied())
        let desired_addrs: HashSet<(i32, SocketAddr)> = desired_by_dc
            .iter()
            .flat_map(|(dc, set)| set.iter().copied().map(|addr| (*dc, addr)))
            .collect();

        let stale_writer_ids: Vec<u64> = writers
@@ -450,7 +450,7 @@ impl MePool {
                if hardswap {
                    w.generation < generation
                } else {
                    !desired_addrs.contains(&w.addr)
                    !desired_addrs.contains(&(w.writer_dc, w.addr))
                }
            })
            .map(|w| w.id)

128
src/transport/middle_proxy/pool_runtime_api.rs
Normal file
@@ -0,0 +1,128 @@
use std::collections::HashMap;
use std::time::Instant;

use super::pool::{MePool, RefillDcKey};
use crate::network::IpFamily;

#[derive(Clone, Debug)]
pub(crate) struct MeApiRefillDcSnapshot {
    pub dc: i16,
    pub family: &'static str,
    pub inflight: usize,
}

#[derive(Clone, Debug)]
pub(crate) struct MeApiRefillSnapshot {
    pub inflight_endpoints_total: usize,
    pub inflight_dc_total: usize,
    pub by_dc: Vec<MeApiRefillDcSnapshot>,
}

#[derive(Clone, Debug)]
pub(crate) struct MeApiNatReflectionSnapshot {
    pub addr: std::net::SocketAddr,
    pub age_secs: u64,
}

#[derive(Clone, Debug)]
pub(crate) struct MeApiNatStunSnapshot {
    pub nat_probe_enabled: bool,
    pub nat_probe_disabled_runtime: bool,
    pub nat_probe_attempts: u8,
    pub configured_servers: Vec<String>,
    pub live_servers: Vec<String>,
    pub reflection_v4: Option<MeApiNatReflectionSnapshot>,
    pub reflection_v6: Option<MeApiNatReflectionSnapshot>,
    pub stun_backoff_remaining_ms: Option<u64>,
}

impl MePool {
    pub(crate) async fn api_refill_snapshot(&self) -> MeApiRefillSnapshot {
        let inflight_endpoints_total = self.refill_inflight.lock().await.len();
        let inflight_dc_keys = self
            .refill_inflight_dc
            .lock()
            .await
            .iter()
            .copied()
            .collect::<Vec<RefillDcKey>>();

        let mut by_dc_map = HashMap::<(i16, &'static str), usize>::new();
        for key in inflight_dc_keys {
            let family = match key.family {
                IpFamily::V4 => "v4",
                IpFamily::V6 => "v6",
            };
            let dc = key.dc as i16;
            *by_dc_map.entry((dc, family)).or_insert(0) += 1;
        }

        let mut by_dc = by_dc_map
            .into_iter()
            .map(|((dc, family), inflight)| MeApiRefillDcSnapshot {
                dc,
                family,
                inflight,
            })
            .collect::<Vec<_>>();
        by_dc.sort_by_key(|entry| (entry.dc, entry.family));

        MeApiRefillSnapshot {
            inflight_endpoints_total,
            inflight_dc_total: by_dc.len(),
            by_dc,
        }
    }

    pub(crate) async fn api_nat_stun_snapshot(&self) -> MeApiNatStunSnapshot {
        let now = Instant::now();
        let mut configured_servers = if !self.nat_stun_servers.is_empty() {
            self.nat_stun_servers.clone()
        } else if let Some(stun) = &self.nat_stun {
            if stun.trim().is_empty() {
                Vec::new()
            } else {
                vec![stun.clone()]
            }
        } else {
            Vec::new()
        };
        configured_servers.sort();
        configured_servers.dedup();

        let mut live_servers = self.nat_stun_live_servers.read().await.clone();
        live_servers.sort();
        live_servers.dedup();

        let reflection = self.nat_reflection_cache.lock().await;
        let reflection_v4 = reflection.v4.map(|(ts, addr)| MeApiNatReflectionSnapshot {
            addr,
            age_secs: now.saturating_duration_since(ts).as_secs(),
        });
        let reflection_v6 = reflection.v6.map(|(ts, addr)| MeApiNatReflectionSnapshot {
            addr,
            age_secs: now.saturating_duration_since(ts).as_secs(),
        });
        drop(reflection);

        let backoff_until = *self.stun_backoff_until.read().await;
        let stun_backoff_remaining_ms = backoff_until.and_then(|until| {
            (until > now).then_some(until.duration_since(now).as_millis() as u64)
        });

        MeApiNatStunSnapshot {
            nat_probe_enabled: self.nat_probe,
            nat_probe_disabled_runtime: self
                .nat_probe_disabled
                .load(std::sync::atomic::Ordering::Relaxed),
            nat_probe_attempts: self
                .nat_probe_attempts
                .load(std::sync::atomic::Ordering::Relaxed),
            configured_servers,
            live_servers,
            reflection_v4,
            reflection_v6,
            stun_backoff_remaining_ms,
        }
    }
}

@@ -1,5 +1,5 @@
use std::collections::{BTreeMap, BTreeSet, HashMap};
use std::net::SocketAddr;
use std::net::{IpAddr, SocketAddr};
use std::sync::atomic::Ordering;
use std::time::Instant;

@@ -28,6 +28,10 @@ pub(crate) struct MeApiDcStatusSnapshot {
    pub available_endpoints: usize,
    pub available_pct: f64,
    pub required_writers: usize,
    pub floor_min: usize,
    pub floor_target: usize,
    pub floor_max: usize,
    pub floor_capped: bool,
    pub alive_writers: usize,
    pub coverage_pct: f64,
    pub rtt_ms: Option<f64>,
@@ -72,7 +76,27 @@ pub(crate) struct MeApiRuntimeSnapshot {
    pub floor_mode: &'static str,
    pub adaptive_floor_idle_secs: u64,
    pub adaptive_floor_min_writers_single_endpoint: u8,
    pub adaptive_floor_min_writers_multi_endpoint: u8,
    pub adaptive_floor_recover_grace_secs: u64,
    pub adaptive_floor_writers_per_core_total: u16,
    pub adaptive_floor_cpu_cores_override: u16,
    pub adaptive_floor_max_extra_writers_single_per_core: u16,
    pub adaptive_floor_max_extra_writers_multi_per_core: u16,
    pub adaptive_floor_max_active_writers_per_core: u16,
    pub adaptive_floor_max_warm_writers_per_core: u16,
    pub adaptive_floor_max_active_writers_global: u32,
    pub adaptive_floor_max_warm_writers_global: u32,
    pub adaptive_floor_cpu_cores_detected: u32,
    pub adaptive_floor_cpu_cores_effective: u32,
    pub adaptive_floor_global_cap_raw: u64,
    pub adaptive_floor_global_cap_effective: u64,
    pub adaptive_floor_target_writers_total: u64,
    pub adaptive_floor_active_cap_configured: u64,
    pub adaptive_floor_active_cap_effective: u64,
    pub adaptive_floor_warm_cap_configured: u64,
    pub adaptive_floor_warm_cap_effective: u64,
    pub adaptive_floor_active_writers_current: u64,
    pub adaptive_floor_warm_writers_current: u64,
    pub me_keepalive_enabled: bool,
    pub me_keepalive_interval_secs: u64,
    pub me_keepalive_jitter_secs: u64,
@@ -104,35 +128,11 @@ impl MePool {
        let mut endpoints_by_dc = BTreeMap::<i16, BTreeSet<SocketAddr>>::new();
        if self.decision.ipv4_me {
            let map = self.proxy_map_v4.read().await.clone();
            for (dc, addrs) in map {
                let abs_dc = dc.abs();
                if abs_dc == 0 {
                    continue;
                }
                let Ok(dc_idx) = i16::try_from(abs_dc) else {
                    continue;
                };
                let entry = endpoints_by_dc.entry(dc_idx).or_default();
                for (ip, port) in addrs {
                    entry.insert(SocketAddr::new(ip, port));
                }
            }
            extend_signed_endpoints(&mut endpoints_by_dc, map);
        }
        if self.decision.ipv6_me {
            let map = self.proxy_map_v6.read().await.clone();
            for (dc, addrs) in map {
                let abs_dc = dc.abs();
                if abs_dc == 0 {
                    continue;
                }
                let Ok(dc_idx) = i16::try_from(abs_dc) else {
                    continue;
                };
                let entry = endpoints_by_dc.entry(dc_idx).or_default();
                for (ip, port) in addrs {
                    entry.insert(SocketAddr::new(ip, port));
                }
            }
            extend_signed_endpoints(&mut endpoints_by_dc, map);
        }

        if endpoints_by_dc.is_empty() {
@@ -140,19 +140,18 @@ impl MePool {
        }

        let writers = self.writers.read().await.clone();
        let mut live_writers_by_endpoint = HashMap::<SocketAddr, usize>::new();
        let mut live_writers_by_dc = HashMap::<i16, usize>::new();
        for writer in writers {
            if writer.draining.load(Ordering::Relaxed) {
                continue;
            }
            *live_writers_by_endpoint.entry(writer.addr).or_insert(0) += 1;
            if let Ok(dc) = i16::try_from(writer.writer_dc) {
                *live_writers_by_dc.entry(dc).or_insert(0) += 1;
            }
        }

        for endpoints in endpoints_by_dc.values() {
            let alive: usize = endpoints
                .iter()
                .map(|endpoint| live_writers_by_endpoint.get(endpoint).copied().unwrap_or(0))
                .sum();
        for dc in endpoints_by_dc.keys() {
            let alive = live_writers_by_dc.get(dc).copied().unwrap_or(0);
            if alive == 0 {
                return false;
            }
@@ -166,35 +165,11 @@ impl MePool {
        let mut endpoints_by_dc = BTreeMap::<i16, BTreeSet<SocketAddr>>::new();
        if self.decision.ipv4_me {
            let map = self.proxy_map_v4.read().await.clone();
            for (dc, addrs) in map {
                let abs_dc = dc.abs();
                if abs_dc == 0 {
                    continue;
                }
                let Ok(dc_idx) = i16::try_from(abs_dc) else {
                    continue;
                };
                let entry = endpoints_by_dc.entry(dc_idx).or_default();
                for (ip, port) in addrs {
                    entry.insert(SocketAddr::new(ip, port));
                }
            }
            extend_signed_endpoints(&mut endpoints_by_dc, map);
        }
        if self.decision.ipv6_me {
            let map = self.proxy_map_v6.read().await.clone();
            for (dc, addrs) in map {
                let abs_dc = dc.abs();
                if abs_dc == 0 {
                    continue;
                }
                let Ok(dc_idx) = i16::try_from(abs_dc) else {
                    continue;
                };
                let entry = endpoints_by_dc.entry(dc_idx).or_default();
                for (ip, port) in addrs {
                    entry.insert(SocketAddr::new(ip, port));
                }
            }
            extend_signed_endpoints(&mut endpoints_by_dc, map);
        }

        if endpoints_by_dc.is_empty() {
@@ -202,24 +177,23 @@ impl MePool {
        }

        let writers = self.writers.read().await.clone();
        let mut live_writers_by_endpoint = HashMap::<SocketAddr, usize>::new();
        let mut live_writers_by_dc = HashMap::<i16, usize>::new();
        for writer in writers {
            if writer.draining.load(Ordering::Relaxed) {
                continue;
            }
            *live_writers_by_endpoint.entry(writer.addr).or_insert(0) += 1;
            if let Ok(dc) = i16::try_from(writer.writer_dc) {
                *live_writers_by_dc.entry(dc).or_insert(0) += 1;
            }
        }

        for endpoints in endpoints_by_dc.values() {
        for (dc, endpoints) in endpoints_by_dc {
            let endpoint_count = endpoints.len();
            if endpoint_count == 0 {
                return false;
            }
            let required = self.required_writers_for_dc_with_floor_mode(endpoint_count, false);
            let alive: usize = endpoints
                .iter()
                .map(|endpoint| live_writers_by_endpoint.get(endpoint).copied().unwrap_or(0))
                .sum();
            let alive = live_writers_by_dc.get(&dc).copied().unwrap_or(0);
            if alive < required {
                return false;
            }
@@ -234,42 +208,11 @@ impl MePool {
        let mut endpoints_by_dc = BTreeMap::<i16, BTreeSet<SocketAddr>>::new();
        if self.decision.ipv4_me {
            let map = self.proxy_map_v4.read().await.clone();
            for (dc, addrs) in map {
                let abs_dc = dc.abs();
                if abs_dc == 0 {
                    continue;
                }
                let Ok(dc_idx) = i16::try_from(abs_dc) else {
                    continue;
                };
                let entry = endpoints_by_dc.entry(dc_idx).or_default();
                for (ip, port) in addrs {
                    entry.insert(SocketAddr::new(ip, port));
                }
            }
            extend_signed_endpoints(&mut endpoints_by_dc, map);
        }
        if self.decision.ipv6_me {
            let map = self.proxy_map_v6.read().await.clone();
            for (dc, addrs) in map {
                let abs_dc = dc.abs();
                if abs_dc == 0 {
                    continue;
                }
                let Ok(dc_idx) = i16::try_from(abs_dc) else {
                    continue;
                };
                let entry = endpoints_by_dc.entry(dc_idx).or_default();
                for (ip, port) in addrs {
                    entry.insert(SocketAddr::new(ip, port));
                }
            }
        }

        let mut endpoint_to_dc = HashMap::<SocketAddr, i16>::new();
        for (dc, endpoints) in &endpoints_by_dc {
            for endpoint in endpoints {
                endpoint_to_dc.entry(*endpoint).or_insert(*dc);
            }
            extend_signed_endpoints(&mut endpoints_by_dc, map);
        }

        let configured_dc_groups = endpoints_by_dc.len();
@@ -285,14 +228,14 @@ impl MePool {
        let rtt = self.rtt_stats.lock().await.clone();
        let writers = self.writers.read().await.clone();

        let mut live_writers_by_endpoint = HashMap::<SocketAddr, usize>::new();
        let mut live_writers_by_dc_endpoint = HashMap::<(i16, SocketAddr), usize>::new();
        let mut live_writers_by_dc = HashMap::<i16, usize>::new();
        let mut dc_rtt_agg = HashMap::<i16, (f64, u64)>::new();
        let mut writer_rows = Vec::<MeApiWriterStatusSnapshot>::with_capacity(writers.len());

        for writer in writers {
            let endpoint = writer.addr;
            let dc = endpoint_to_dc.get(&endpoint).copied();
            let dc = i16::try_from(writer.writer_dc).ok();
            let draining = writer.draining.load(Ordering::Relaxed);
            let degraded = writer.degraded.load(Ordering::Relaxed);
            let bound_clients = activity
@@ -311,8 +254,10 @@ impl MePool {
            };

            if !draining {
                *live_writers_by_endpoint.entry(endpoint).or_insert(0) += 1;
                if let Some(dc_idx) = dc {
                    *live_writers_by_dc_endpoint
                        .entry((dc_idx, endpoint))
                        .or_insert(0) += 1;
                    *live_writers_by_dc.entry(dc_idx).or_insert(0) += 1;
                    if let Some(ema_ms) = rtt_ema_ms {
                        let entry = dc_rtt_agg.entry(dc_idx).or_insert((0.0, 0));
@@ -341,14 +286,43 @@ impl MePool {
        let mut dcs = Vec::<MeApiDcStatusSnapshot>::with_capacity(endpoints_by_dc.len());
        let mut available_endpoints = 0usize;
        let mut alive_writers = 0usize;
        let floor_mode = self.floor_mode();
        let adaptive_cpu_cores = (self
            .me_adaptive_floor_cpu_cores_effective
            .load(Ordering::Relaxed) as usize)
            .max(1);
        for (dc, endpoints) in endpoints_by_dc {
            let endpoint_count = endpoints.len();
            let dc_available_endpoints = endpoints
                .iter()
                .filter(|endpoint| live_writers_by_endpoint.contains_key(endpoint))
                .filter(|endpoint| live_writers_by_dc_endpoint.contains_key(&(dc, **endpoint)))
                .count();
            let base_required = self.required_writers_for_dc(endpoint_count);
            let dc_required_writers =
                self.required_writers_for_dc_with_floor_mode(endpoint_count, false);
            let floor_min = if endpoint_count <= 1 {
                (self
                    .me_adaptive_floor_min_writers_single_endpoint
                    .load(Ordering::Relaxed) as usize)
                    .max(1)
                    .min(base_required.max(1))
            } else {
                (self
                    .me_adaptive_floor_min_writers_multi_endpoint
                    .load(Ordering::Relaxed) as usize)
                    .max(1)
                    .min(base_required.max(1))
            };
            let extra_per_core = if endpoint_count <= 1 {
                self.me_adaptive_floor_max_extra_writers_single_per_core
                    .load(Ordering::Relaxed) as usize
            } else {
                self.me_adaptive_floor_max_extra_writers_multi_per_core
                    .load(Ordering::Relaxed) as usize
            };
            let floor_max = base_required.saturating_add(adaptive_cpu_cores.saturating_mul(extra_per_core));
            let floor_capped = matches!(floor_mode, MeFloorMode::Adaptive)
                && dc_required_writers < base_required;
            let dc_alive_writers = live_writers_by_dc.get(&dc).copied().unwrap_or(0);
            let dc_load = activity
                .active_sessions_by_target_dc
@@ -368,6 +342,10 @@ impl MePool {
                available_endpoints: dc_available_endpoints,
                available_pct: ratio_pct(dc_available_endpoints, endpoint_count),
                required_writers: dc_required_writers,
                floor_min,
                floor_target: dc_required_writers,
                floor_max,
                floor_capped,
                alive_writers: dc_alive_writers,
                coverage_pct: ratio_pct(dc_alive_writers, dc_required_writers),
                rtt_ms: dc_rtt_ms,
@@ -444,9 +422,69 @@ impl MePool {
            adaptive_floor_min_writers_single_endpoint: self
                .me_adaptive_floor_min_writers_single_endpoint
                .load(Ordering::Relaxed),
            adaptive_floor_min_writers_multi_endpoint: self
                .me_adaptive_floor_min_writers_multi_endpoint
                .load(Ordering::Relaxed),
            adaptive_floor_recover_grace_secs: self
                .me_adaptive_floor_recover_grace_secs
                .load(Ordering::Relaxed),
            adaptive_floor_writers_per_core_total: self
                .me_adaptive_floor_writers_per_core_total
                .load(Ordering::Relaxed) as u16,
            adaptive_floor_cpu_cores_override: self
                .me_adaptive_floor_cpu_cores_override
                .load(Ordering::Relaxed) as u16,
            adaptive_floor_max_extra_writers_single_per_core: self
                .me_adaptive_floor_max_extra_writers_single_per_core
                .load(Ordering::Relaxed) as u16,
            adaptive_floor_max_extra_writers_multi_per_core: self
                .me_adaptive_floor_max_extra_writers_multi_per_core
                .load(Ordering::Relaxed) as u16,
            adaptive_floor_max_active_writers_per_core: self
                .me_adaptive_floor_max_active_writers_per_core
                .load(Ordering::Relaxed) as u16,
            adaptive_floor_max_warm_writers_per_core: self
                .me_adaptive_floor_max_warm_writers_per_core
                .load(Ordering::Relaxed) as u16,
            adaptive_floor_max_active_writers_global: self
                .me_adaptive_floor_max_active_writers_global
                .load(Ordering::Relaxed),
            adaptive_floor_max_warm_writers_global: self
                .me_adaptive_floor_max_warm_writers_global
                .load(Ordering::Relaxed),
            adaptive_floor_cpu_cores_detected: self
                .me_adaptive_floor_cpu_cores_detected
                .load(Ordering::Relaxed),
            adaptive_floor_cpu_cores_effective: self
                .me_adaptive_floor_cpu_cores_effective
                .load(Ordering::Relaxed),
            adaptive_floor_global_cap_raw: self
                .me_adaptive_floor_global_cap_raw
                .load(Ordering::Relaxed),
            adaptive_floor_global_cap_effective: self
                .me_adaptive_floor_global_cap_effective
                .load(Ordering::Relaxed),
            adaptive_floor_target_writers_total: self
                .me_adaptive_floor_target_writers_total
                .load(Ordering::Relaxed),
            adaptive_floor_active_cap_configured: self
                .me_adaptive_floor_active_cap_configured
                .load(Ordering::Relaxed),
            adaptive_floor_active_cap_effective: self
                .me_adaptive_floor_active_cap_effective
|
||||
.load(Ordering::Relaxed),
|
||||
adaptive_floor_warm_cap_configured: self
|
||||
.me_adaptive_floor_warm_cap_configured
|
||||
.load(Ordering::Relaxed),
|
||||
adaptive_floor_warm_cap_effective: self
|
||||
.me_adaptive_floor_warm_cap_effective
|
||||
.load(Ordering::Relaxed),
|
||||
adaptive_floor_active_writers_current: self
|
||||
.me_adaptive_floor_active_writers_current
|
||||
.load(Ordering::Relaxed),
|
||||
adaptive_floor_warm_writers_current: self
|
||||
.me_adaptive_floor_warm_writers_current
|
||||
.load(Ordering::Relaxed),
|
||||
me_keepalive_enabled: self.me_keepalive_enabled,
|
||||
me_keepalive_interval_secs: self.me_keepalive_interval.as_secs(),
|
||||
me_keepalive_jitter_secs: self.me_keepalive_jitter.as_secs(),
|
||||
@@ -499,6 +537,24 @@ fn ratio_pct(part: usize, total: usize) -> f64 {
|
||||
pct.clamp(0.0, 100.0)
|
||||
}
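The `ratio_pct` helper that closes above is used for both endpoint availability and writer coverage; its tail clamp keeps ratios with a larger numerator than denominator from reporting above 100%. A minimal re-creation of its apparent contract (the guard for a zero denominator is an assumption inferred from its usize/f64 signature):

```rust
// Hypothetical sketch of ratio_pct as used in the snapshot code above:
// 0 when the denominator is 0, otherwise a percentage clamped to [0, 100].
fn ratio_pct(part: usize, total: usize) -> f64 {
    if total == 0 {
        return 0.0; // assumed guard: avoid NaN/inf from division by zero
    }
    let pct = (part as f64 / total as f64) * 100.0;
    pct.clamp(0.0, 100.0)
}

fn main() {
    assert_eq!(ratio_pct(1, 4), 25.0);
    assert_eq!(ratio_pct(5, 4), 100.0); // more alive writers than required: clamped
    assert_eq!(ratio_pct(3, 0), 0.0);
}
```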

fn extend_signed_endpoints(
    endpoints_by_dc: &mut BTreeMap<i16, BTreeSet<SocketAddr>>,
    map: HashMap<i32, Vec<(IpAddr, u16)>>,
) {
    for (dc, addrs) in map {
        if dc == 0 {
            continue;
        }
        let Ok(dc_idx) = i16::try_from(dc) else {
            continue;
        };
        let entry = endpoints_by_dc.entry(dc_idx).or_default();
        for (ip, port) in addrs {
            entry.insert(SocketAddr::new(ip, port));
        }
    }
}
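`extend_signed_endpoints` above merges a signed-DC endpoint map into per-DC sets, skipping DC 0 and any id that does not fit in `i16`, and letting the `BTreeSet` collapse duplicates. A self-contained sketch of the same shape, with a usage example:

```rust
use std::collections::{BTreeMap, BTreeSet, HashMap};
use std::net::{IpAddr, Ipv4Addr, SocketAddr};

// Same shape as extend_signed_endpoints in the hunk above.
fn extend_signed_endpoints(
    endpoints_by_dc: &mut BTreeMap<i16, BTreeSet<SocketAddr>>,
    map: HashMap<i32, Vec<(IpAddr, u16)>>,
) {
    for (dc, addrs) in map {
        if dc == 0 {
            continue; // DC 0 is not a routable datacenter id
        }
        let Ok(dc_idx) = i16::try_from(dc) else {
            continue; // out-of-range ids are skipped rather than wrapped
        };
        let entry = endpoints_by_dc.entry(dc_idx).or_default();
        for (ip, port) in addrs {
            entry.insert(SocketAddr::new(ip, port)); // set dedupes repeats
        }
    }
}

fn main() {
    let mut by_dc = BTreeMap::new();
    let mut map = HashMap::new();
    let ip = IpAddr::V4(Ipv4Addr::new(10, 0, 0, 1));
    map.insert(2, vec![(ip, 443), (ip, 443)]); // duplicate collapses
    map.insert(-2, vec![(ip, 8443)]);          // signed DC stays distinct
    map.insert(0, vec![(ip, 80)]);             // skipped
    extend_signed_endpoints(&mut by_dc, map);
    assert_eq!(by_dc.len(), 2);
    assert_eq!(by_dc[&2].len(), 1);
    assert_eq!(by_dc[&-2].len(), 1);
}
```

Note the signed key: `2` and `-2` land in different buckets, which matches the signed-DC handling elsewhere in this changeset.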

fn floor_mode_label(mode: MeFloorMode) -> &'static str {
    match mode {
        MeFloorMode::Static => "static",

@@ -4,6 +4,7 @@ use std::sync::atomic::{AtomicBool, AtomicU8, AtomicU64, Ordering};
use std::time::{Duration, Instant};
use std::io::ErrorKind;

use bytes::Bytes;
use bytes::BytesMut;
use rand::Rng;
use tokio::sync::mpsc;
@@ -49,12 +50,18 @@ impl MePool {
        }
    }

    pub(crate) async fn connect_one(self: &Arc<Self>, addr: SocketAddr, rng: &SecureRandom) -> Result<()> {
    pub(crate) async fn connect_one_for_dc(
        self: &Arc<Self>,
        addr: SocketAddr,
        writer_dc: i32,
        rng: &SecureRandom,
    ) -> Result<()> {
        self.connect_one_with_generation_contour(
            addr,
            rng,
            self.current_generation(),
            WriterContour::Active,
            writer_dc,
        )
        .await
    }
@@ -65,13 +72,56 @@ impl MePool {
        rng: &SecureRandom,
        generation: u64,
        contour: WriterContour,
        writer_dc: i32,
    ) -> Result<()> {
        self.connect_one_with_generation_contour_for_dc(addr, rng, generation, contour, writer_dc)
            .await
    }

    pub(super) async fn connect_one_with_generation_contour_for_dc(
        self: &Arc<Self>,
        addr: SocketAddr,
        rng: &SecureRandom,
        generation: u64,
        contour: WriterContour,
        writer_dc: i32,
    ) -> Result<()> {
        self.connect_one_with_generation_contour_for_dc_with_cap_policy(
            addr,
            rng,
            generation,
            contour,
            writer_dc,
            false,
        )
        .await
    }

    pub(super) async fn connect_one_with_generation_contour_for_dc_with_cap_policy(
        self: &Arc<Self>,
        addr: SocketAddr,
        rng: &SecureRandom,
        generation: u64,
        contour: WriterContour,
        writer_dc: i32,
        allow_coverage_override: bool,
    ) -> Result<()> {
        if !self
            .can_open_writer_for_contour(contour, allow_coverage_override)
            .await
        {
            return Err(ProxyError::Proxy(format!(
                "ME {contour:?} writer cap reached"
            )));
        }

        let secret_len = self.proxy_secret.read().await.secret.len();
        if secret_len < 32 {
            return Err(ProxyError::Proxy("proxy-secret too short for ME auth".into()));
        }

        let (stream, _connect_ms, upstream_egress) = self.connect_tcp(addr).await?;
        let dc_idx = i16::try_from(writer_dc).ok();
        let (stream, _connect_ms, upstream_egress) = self.connect_tcp(addr, dc_idx).await?;
        let hs = self.handshake_only(stream, addr, upstream_egress, rng).await?;

        let writer_id = self.next_writer_id.fetch_add(1, Ordering::Relaxed);
@@ -80,6 +130,7 @@ impl MePool {
        let degraded = Arc::new(AtomicBool::new(false));
        let draining = Arc::new(AtomicBool::new(false));
        let draining_started_at_epoch_secs = Arc::new(AtomicU64::new(0));
        let drain_deadline_epoch_secs = Arc::new(AtomicU64::new(0));
        let allow_drain_fallback = Arc::new(AtomicBool::new(false));
        let (tx, mut rx) = mpsc::channel::<WriterCommand>(4096);
        let mut rpc_writer = RpcWriter {
@@ -111,6 +162,7 @@ impl MePool {
        let writer = MeWriter {
            id: writer_id,
            addr,
            writer_dc,
            generation,
            contour: contour.clone(),
            created_at: Instant::now(),
@@ -119,6 +171,7 @@ impl MePool {
            degraded: degraded.clone(),
            draining: draining.clone(),
            draining_started_at_epoch_secs: draining_started_at_epoch_secs.clone(),
            drain_deadline_epoch_secs: drain_deadline_epoch_secs.clone(),
            allow_drain_fallback: allow_drain_fallback.clone(),
        };
        self.writers.write().await.push(writer.clone());
@@ -254,17 +307,47 @@ impl MePool {
                    p.extend_from_slice(&sent_id.to_le_bytes());
                    {
                        let mut tracker = ping_tracker_ping.lock().await;
                        let before = tracker.len();
                        tracker.retain(|_, (ts, _)| ts.elapsed() < Duration::from_secs(120));
                        let expired = before.saturating_sub(tracker.len());
                        if expired > 0 {
                            stats_ping.increment_me_keepalive_timeout_by(expired as u64);
                        let now_epoch_ms = std::time::SystemTime::now()
                            .duration_since(std::time::UNIX_EPOCH)
                            .unwrap_or_default()
                            .as_millis() as u64;
                        let mut run_cleanup = false;
                        if let Some(pool) = pool_ping.upgrade() {
                            let last_cleanup_ms = pool
                                .ping_tracker_last_cleanup_epoch_ms
                                .load(Ordering::Relaxed);
                            if now_epoch_ms.saturating_sub(last_cleanup_ms) >= 30_000
                                && pool
                                    .ping_tracker_last_cleanup_epoch_ms
                                    .compare_exchange(
                                        last_cleanup_ms,
                                        now_epoch_ms,
                                        Ordering::AcqRel,
                                        Ordering::Relaxed,
                                    )
                                    .is_ok()
                            {
                                run_cleanup = true;
                            }
                        }

                        if run_cleanup {
                            let before = tracker.len();
                            tracker.retain(|_, (ts, _)| ts.elapsed() < Duration::from_secs(120));
                            let expired = before.saturating_sub(tracker.len());
                            if expired > 0 {
                                stats_ping.increment_me_keepalive_timeout_by(expired as u64);
                            }
                        }
                        tracker.insert(sent_id, (std::time::Instant::now(), writer_id));
                    }
                    ping_id = ping_id.wrapping_add(1);
                    stats_ping.increment_me_keepalive_sent();
                    if tx_ping.send(WriterCommand::DataAndFlush(p)).await.is_err() {
                    if tx_ping
                        .send(WriterCommand::DataAndFlush(Bytes::from(p)))
                        .await
                        .is_err()
                    {
                        stats_ping.increment_me_keepalive_failed();
                        debug!("ME ping failed, removing dead writer");
                        cancel_ping.cancel();
@@ -338,7 +421,11 @@ impl MePool {
                        meta.proto_flags,
                    );

                    if tx_signal.send(WriterCommand::DataAndFlush(payload)).await.is_err() {
                    if tx_signal
                        .send(WriterCommand::DataAndFlush(payload))
                        .await
                        .is_err()
                    {
                        stats_signal.increment_me_rpc_proxy_req_signal_failed_total();
                        let _ = pool.registry.unregister(conn_id).await;
                        cancel_signal.cancel();
@@ -369,7 +456,7 @@ impl MePool {
                    close_payload.extend_from_slice(&conn_id.to_le_bytes());

                    if tx_signal
                        .send(WriterCommand::DataAndFlush(close_payload))
                        .send(WriterCommand::DataAndFlush(Bytes::from(close_payload)))
                        .await
                        .is_err()
                    {
@@ -404,6 +491,7 @@ impl MePool {
    async fn remove_writer_only(self: &Arc<Self>, writer_id: u64) -> Vec<BoundConn> {
        let mut close_tx: Option<mpsc::Sender<WriterCommand>> = None;
        let mut removed_addr: Option<SocketAddr> = None;
        let mut removed_dc: Option<i32> = None;
        let mut removed_uptime: Option<Duration> = None;
        let mut trigger_refill = false;
        {
@@ -417,6 +505,7 @@ impl MePool {
                self.stats.increment_me_writer_removed_total();
                w.cancel.cancel();
                removed_addr = Some(w.addr);
                removed_dc = Some(w.writer_dc);
                removed_uptime = Some(w.created_at.elapsed());
                trigger_refill = !was_draining;
                if trigger_refill {
@@ -431,11 +520,12 @@ impl MePool {
        }
        if trigger_refill
            && let Some(addr) = removed_addr
            && let Some(writer_dc) = removed_dc
        {
            if let Some(uptime) = removed_uptime {
                self.maybe_quarantine_flapping_endpoint(addr, uptime).await;
            }
            self.trigger_immediate_refill(addr);
            self.trigger_immediate_refill_for_dc(addr, writer_dc);
        }
        self.rtt_stats.lock().await.remove(&writer_id);
        self.registry.writer_lost(writer_id).await
@@ -454,8 +544,14 @@ impl MePool {
            let already_draining = w.draining.swap(true, Ordering::Relaxed);
            w.allow_drain_fallback
                .store(allow_drain_fallback, Ordering::Relaxed);
            let now_epoch_secs = Self::now_epoch_secs();
            w.draining_started_at_epoch_secs
                .store(Self::now_epoch_secs(), Ordering::Relaxed);
                .store(now_epoch_secs, Ordering::Relaxed);
            let drain_deadline_epoch_secs = timeout
                .map(|duration| now_epoch_secs.saturating_add(duration.as_secs()))
                .unwrap_or(0);
            w.drain_deadline_epoch_secs
                .store(drain_deadline_epoch_secs, Ordering::Relaxed);
            if !already_draining {
                self.stats.increment_pool_drain_active();
            }
@@ -479,26 +575,6 @@ impl MePool {
            allow_drain_fallback,
            "ME writer marked draining"
        );

        let pool = Arc::downgrade(self);
        tokio::spawn(async move {
            let deadline = timeout.map(|t| Instant::now() + t);
            while let Some(p) = pool.upgrade() {
                if let Some(deadline_at) = deadline
                    && Instant::now() >= deadline_at
                {
                    warn!(writer_id, "Drain timeout, force-closing");
                    p.stats.increment_pool_force_close_total();
                    let _ = p.remove_writer_and_close_clients(writer_id).await;
                    break;
                }
                if p.registry.is_writer_empty(writer_id).await {
                    let _ = p.remove_writer_only(writer_id).await;
                    break;
                }
                tokio::time::sleep(Duration::from_secs(1)).await;
            }
        });
    }

    pub(crate) async fn mark_writer_draining(self: &Arc<Self>, writer_id: u64) {

@@ -181,7 +181,11 @@ pub(crate) async fn reader_loop(
            let mut pong = Vec::with_capacity(12);
            pong.extend_from_slice(&RPC_PONG_U32.to_le_bytes());
            pong.extend_from_slice(&ping_id.to_le_bytes());
            if tx.send(WriterCommand::DataAndFlush(pong)).await.is_err() {
            if tx
                .send(WriterCommand::DataAndFlush(Bytes::from(pong)))
                .await
                .is_err()
            {
                warn!("PONG send failed");
                break;
            }
@@ -222,5 +226,5 @@ async fn send_close_conn(tx: &mpsc::Sender<WriterCommand>, conn_id: u64) {
    p.extend_from_slice(&RPC_CLOSE_CONN_U32.to_le_bytes());
    p.extend_from_slice(&conn_id.to_le_bytes());

    let _ = tx.send(WriterCommand::DataAndFlush(p)).await;
    let _ = tx.send(WriterCommand::DataAndFlush(Bytes::from(p))).await;
}
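Several hunks above change `WriterCommand::DataAndFlush` payloads from `Vec<u8>` to `bytes::Bytes` via `Bytes::from(p)`. `Bytes::from(Vec<u8>)` takes ownership of the existing allocation, and cloning the resulting handle afterwards only bumps a reference count instead of deep-copying the buffer. The same ownership idea can be sketched with std-only types (using `Arc<[u8]>` as a stand-in for `Bytes`; the tag constant below is a placeholder, not a real telemt RPC tag):

```rust
use std::sync::Arc;

// Std-only analogue of Bytes::from(Vec<u8>): one conversion freezes the
// buffer, later clones share it instead of copying.
fn freeze(buf: Vec<u8>) -> Arc<[u8]> {
    Arc::from(buf)
}

fn main() {
    let mut p = Vec::with_capacity(12);
    p.extend_from_slice(&0xDEAD_BEEFu32.to_le_bytes()); // placeholder RPC tag
    p.extend_from_slice(&42u64.to_le_bytes());          // placeholder conn id
    let frozen = freeze(p);
    let fanout = frozen.clone(); // cheap: refcount bump, same allocation
    assert_eq!(frozen.len(), 12);
    assert!(Arc::ptr_eq(&frozen, &fanout)); // both handles share one buffer
}
```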

@@ -264,6 +264,20 @@ impl ConnRegistry {
        inner.writer_idle_since_epoch_secs.clone()
    }

    pub async fn writer_idle_since_for_writer_ids(
        &self,
        writer_ids: &[u64],
    ) -> HashMap<u64, u64> {
        let inner = self.inner.read().await;
        let mut out = HashMap::<u64, u64>::with_capacity(writer_ids.len());
        for writer_id in writer_ids {
            if let Some(idle_since) = inner.writer_idle_since_epoch_secs.get(writer_id).copied() {
                out.insert(*writer_id, idle_since);
            }
        }
        out
    }

    pub(super) async fn writer_activity_snapshot(&self) -> WriterActivitySnapshot {
        let inner = self.inner.read().await;
        let mut bound_clients_by_writer = HashMap::<u64, usize>::new();
@@ -273,13 +287,12 @@ impl ConnRegistry {
            bound_clients_by_writer.insert(*writer_id, conn_ids.len());
        }
        for conn_meta in inner.meta.values() {
            let dc_u16 = conn_meta.target_dc.unsigned_abs();
            if dc_u16 == 0 {
            if conn_meta.target_dc == 0 {
                continue;
            }
            if let Ok(dc) = i16::try_from(dc_u16) {
                *active_sessions_by_target_dc.entry(dc).or_insert(0) += 1;
            }
            *active_sessions_by_target_dc
                .entry(conn_meta.target_dc)
                .or_insert(0) += 1;
        }

        WriterActivitySnapshot {
@@ -402,7 +415,8 @@ mod tests {
        let snapshot = registry.writer_activity_snapshot().await;
        assert_eq!(snapshot.bound_clients_by_writer.get(&10), Some(&2));
        assert_eq!(snapshot.bound_clients_by_writer.get(&20), Some(&1));
        assert_eq!(snapshot.active_sessions_by_target_dc.get(&2), Some(&2));
        assert_eq!(snapshot.active_sessions_by_target_dc.get(&2), Some(&1));
        assert_eq!(snapshot.active_sessions_by_target_dc.get(&-2), Some(&1));
        assert_eq!(snapshot.active_sessions_by_target_dc.get(&4), Some(&1));
    }
}

@@ -5,6 +5,7 @@ use std::sync::Arc;
use std::sync::atomic::Ordering;
use std::time::{Duration, Instant};

use bytes::Bytes;
use tokio::sync::mpsc::error::TrySendError;
use tracing::{debug, warn};

@@ -53,12 +54,16 @@ impl MePool {
        };
        let no_writer_mode =
            MeRouteNoWriterMode::from_u8(self.me_route_no_writer_mode.load(Ordering::Relaxed));
        let (routed_dc, unknown_target_dc) = self
            .resolve_target_dc_for_routing(target_dc as i32)
            .await;
        let mut no_writer_deadline: Option<Instant> = None;
        let mut emergency_attempts = 0u32;
        let mut async_recovery_triggered = false;
        let mut hybrid_recovery_round = 0u32;
        let mut hybrid_last_recovery_at: Option<Instant> = None;
        let hybrid_wait_step = self.me_route_no_writer_wait.max(Duration::from_millis(50));
        let mut hybrid_wait_current = hybrid_wait_step;

        loop {
            if let Some(current) = self.registry.get_writer(conn_id).await {
@@ -89,9 +94,9 @@ impl MePool {
                        let deadline = *no_writer_deadline.get_or_insert_with(|| {
                            Instant::now() + self.me_route_no_writer_wait
                        });
                        if !async_recovery_triggered {
                        if !async_recovery_triggered && !unknown_target_dc {
                            let triggered =
                                self.trigger_async_recovery_for_target_dc(target_dc).await;
                                self.trigger_async_recovery_for_target_dc(routed_dc).await;
                            if !triggered {
                                self.trigger_async_recovery_global().await;
                            }
@@ -107,31 +112,34 @@ impl MePool {
                    }
                    MeRouteNoWriterMode::InlineRecoveryLegacy => {
                        self.stats.increment_me_inline_recovery_total();
                        for _ in 0..self.me_route_inline_recovery_attempts.max(1) {
                            for family in self.family_order() {
                                let map = match family {
                                    IpFamily::V4 => self.proxy_map_v4.read().await.clone(),
                                    IpFamily::V6 => self.proxy_map_v6.read().await.clone(),
                                };
                                for (_dc, addrs) in &map {
                                    for (ip, port) in addrs {
                                        let addr = SocketAddr::new(*ip, *port);
                                        let _ = self.connect_one(addr, self.rng.as_ref()).await;
                        if !unknown_target_dc {
                            for _ in 0..self.me_route_inline_recovery_attempts.max(1) {
                                for family in self.family_order() {
                                    let map = match family {
                                        IpFamily::V4 => self.proxy_map_v4.read().await.clone(),
                                        IpFamily::V6 => self.proxy_map_v6.read().await.clone(),
                                    };
                                    for (dc, addrs) in &map {
                                        for (ip, port) in addrs {
                                            let addr = SocketAddr::new(*ip, *port);
                                            let _ = self
                                                .connect_one_for_dc(addr, *dc, self.rng.as_ref())
                                                .await;
                                        }
                                    }
                                }
                            }
                            if !self.writers.read().await.is_empty() {
                                break;
                                if !self.writers.read().await.is_empty() {
                                    break;
                                }
                            }
                        }

                        if !self.writers.read().await.is_empty() {
                            continue;
                        }
                        let waiter = self.writer_available.notified();
                        if tokio::time::timeout(self.me_route_inline_recovery_wait, waiter)
                            .await
                            .is_err()
                        {
                        let deadline = *no_writer_deadline
                            .get_or_insert_with(|| Instant::now() + self.me_route_inline_recovery_wait);
                        if !self.wait_for_writer_until(deadline).await {
                            if !self.writers.read().await.is_empty() {
                                continue;
                            }
@@ -143,15 +151,20 @@ impl MePool {
                        continue;
                    }
                    MeRouteNoWriterMode::HybridAsyncPersistent => {
                        self.maybe_trigger_hybrid_recovery(
                            target_dc,
                            &mut hybrid_recovery_round,
                            &mut hybrid_last_recovery_at,
                            hybrid_wait_step,
                        )
                        .await;
                        let deadline = Instant::now() + hybrid_wait_step;
                        if !unknown_target_dc {
                            self.maybe_trigger_hybrid_recovery(
                                routed_dc,
                                &mut hybrid_recovery_round,
                                &mut hybrid_last_recovery_at,
                                hybrid_wait_current,
                            )
                            .await;
                        }
                        let deadline = Instant::now() + hybrid_wait_current;
                        let _ = self.wait_for_writer_until(deadline).await;
                        hybrid_wait_current =
                            (hybrid_wait_current.saturating_mul(2))
                                .min(Duration::from_millis(400));
                        continue;
                    }
                }
@@ -160,11 +173,11 @@ impl MePool {
            };

            let mut candidate_indices = self
                .candidate_indices_for_dc(&writers_snapshot, target_dc, false)
                .candidate_indices_for_dc(&writers_snapshot, routed_dc, false)
                .await;
            if candidate_indices.is_empty() {
                candidate_indices = self
                    .candidate_indices_for_dc(&writers_snapshot, target_dc, true)
                    .candidate_indices_for_dc(&writers_snapshot, routed_dc, true)
                    .await;
            }
            if candidate_indices.is_empty() {
@@ -173,14 +186,14 @@ impl MePool {
                        let deadline = *no_writer_deadline.get_or_insert_with(|| {
                            Instant::now() + self.me_route_no_writer_wait
                        });
                        if !async_recovery_triggered {
                            let triggered = self.trigger_async_recovery_for_target_dc(target_dc).await;
                        if !async_recovery_triggered && !unknown_target_dc {
                            let triggered = self.trigger_async_recovery_for_target_dc(routed_dc).await;
                            if !triggered {
                                self.trigger_async_recovery_global().await;
                            }
                            async_recovery_triggered = true;
                        }
                        if self.wait_for_candidate_until(target_dc, deadline).await {
                        if self.wait_for_candidate_until(routed_dc, deadline).await {
                            continue;
                        }
                        self.stats.increment_me_no_writer_failfast_total();
@@ -190,62 +203,70 @@ impl MePool {
                    }
                    MeRouteNoWriterMode::InlineRecoveryLegacy => {
                        self.stats.increment_me_inline_recovery_total();
                        if unknown_target_dc {
                            let deadline = *no_writer_deadline
                                .get_or_insert_with(|| Instant::now() + self.me_route_inline_recovery_wait);
                            if self.wait_for_candidate_until(routed_dc, deadline).await {
                                continue;
                            }
                            self.stats.increment_me_no_writer_failfast_total();
                            return Err(ProxyError::Proxy("No ME writers available for target DC".into()));
                        }
                        if emergency_attempts >= self.me_route_inline_recovery_attempts.max(1) {
                            self.stats.increment_me_no_writer_failfast_total();
                            return Err(ProxyError::Proxy("No ME writers available for target DC".into()));
                        }
                        emergency_attempts += 1;
                        for family in self.family_order() {
                            let map_guard = match family {
                                IpFamily::V4 => self.proxy_map_v4.read().await,
                                IpFamily::V6 => self.proxy_map_v6.read().await,
                            };
                            if let Some(addrs) = map_guard.get(&(target_dc as i32)) {
                                let mut shuffled = addrs.clone();
                                shuffled.shuffle(&mut rand::rng());
                                drop(map_guard);
                                for (ip, port) in shuffled {
                                    let addr = SocketAddr::new(ip, port);
                                    if self.connect_one(addr, self.rng.as_ref()).await.is_ok() {
                                        break;
                                    }
                                }
                                tokio::time::sleep(Duration::from_millis(100 * emergency_attempts as u64)).await;
                                let ws2 = self.writers.read().await;
                                writers_snapshot = ws2.clone();
                                drop(ws2);
                                candidate_indices = self
                                    .candidate_indices_for_dc(&writers_snapshot, target_dc, false)
                                    .await;
                                if candidate_indices.is_empty() {
                                    candidate_indices = self
                                        .candidate_indices_for_dc(&writers_snapshot, target_dc, true)
                                        .await;
                                }
                                if !candidate_indices.is_empty() {
                                    break;
                                }
                        let mut endpoints = self.endpoint_candidates_for_target_dc(routed_dc).await;
                        endpoints.shuffle(&mut rand::rng());
                        for addr in endpoints {
                            if self.connect_one_for_dc(addr, routed_dc, self.rng.as_ref()).await.is_ok() {
                                break;
                            }
                        }
                        tokio::time::sleep(Duration::from_millis(100 * emergency_attempts as u64)).await;
                        let ws2 = self.writers.read().await;
                        writers_snapshot = ws2.clone();
                        drop(ws2);
                        candidate_indices = self
                            .candidate_indices_for_dc(&writers_snapshot, routed_dc, false)
                            .await;
                        if candidate_indices.is_empty() {
                            candidate_indices = self
                                .candidate_indices_for_dc(&writers_snapshot, routed_dc, true)
                                .await;
                        }
                        if candidate_indices.is_empty() {
                            return Err(ProxyError::Proxy("No ME writers available for target DC".into()));
                        }
                    }
                    MeRouteNoWriterMode::HybridAsyncPersistent => {
                        self.maybe_trigger_hybrid_recovery(
                            target_dc,
                            &mut hybrid_recovery_round,
                            &mut hybrid_last_recovery_at,
                            hybrid_wait_step,
                        )
                        .await;
                        let deadline = Instant::now() + hybrid_wait_step;
                        let _ = self.wait_for_candidate_until(target_dc, deadline).await;
                        if !unknown_target_dc {
                            self.maybe_trigger_hybrid_recovery(
                                routed_dc,
                                &mut hybrid_recovery_round,
                                &mut hybrid_last_recovery_at,
                                hybrid_wait_current,
                            )
                            .await;
                        }
                        let deadline = Instant::now() + hybrid_wait_current;
                        let _ = self.wait_for_candidate_until(routed_dc, deadline).await;
                        hybrid_wait_current = (hybrid_wait_current.saturating_mul(2))
                            .min(Duration::from_millis(400));
                        continue;
                    }
                }
            }
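The `HybridAsyncPersistent` arms above replace a fixed `hybrid_wait_step` with a `hybrid_wait_current` that doubles after each unsuccessful round and is capped at 400 ms, starting from a 50 ms floor. That backoff schedule can be sketched in isolation:

```rust
use std::time::Duration;

// Same schedule as the hybrid recovery wait above: double each round,
// never exceed 400 ms.
fn next_wait(current: Duration) -> Duration {
    current.saturating_mul(2).min(Duration::from_millis(400))
}

fn main() {
    let mut wait = Duration::from_millis(50); // hybrid_wait_step floor
    let mut schedule = Vec::new();
    for _ in 0..5 {
        schedule.push(wait.as_millis());
        wait = next_wait(wait);
    }
    // 50 → 100 → 200 → 400, then pinned at the cap
    assert_eq!(schedule, vec![50, 100, 200, 400, 400]);
}
```

The cap bounds worst-case routing latency per loop iteration while the doubling keeps recovery traffic from hammering endpoints during a long outage.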
            let writer_idle_since = self.registry.writer_idle_since_snapshot().await;
            hybrid_wait_current = hybrid_wait_step;
            let writer_ids: Vec<u64> = candidate_indices
                .iter()
                .map(|idx| writers_snapshot[*idx].id)
                .collect();
            let writer_idle_since = self
                .registry
                .writer_idle_since_for_writer_ids(&writer_ids)
                .await;
            let now_epoch_secs = Self::now_epoch_secs();

            if self.me_deterministic_writer_sort.load(Ordering::Relaxed) {
@@ -380,28 +401,32 @@ impl MePool {
        !self.writers.read().await.is_empty()
    }

    async fn wait_for_candidate_until(&self, target_dc: i16, deadline: Instant) -> bool {
    async fn wait_for_candidate_until(&self, routed_dc: i32, deadline: Instant) -> bool {
        loop {
            if self.has_candidate_for_target_dc(target_dc).await {
            if self.has_candidate_for_target_dc(routed_dc).await {
                return true;
            }

            let now = Instant::now();
            if now >= deadline {
                return self.has_candidate_for_target_dc(target_dc).await;
                return self.has_candidate_for_target_dc(routed_dc).await;
            }

            let remaining = deadline.saturating_duration_since(now);
            let sleep_for = remaining.min(Duration::from_millis(25));
            let waiter = self.writer_available.notified();
            tokio::select! {
                _ = waiter => {}
                _ = tokio::time::sleep(sleep_for) => {}
            if self.has_candidate_for_target_dc(routed_dc).await {
                return true;
            }
            let remaining = deadline.saturating_duration_since(Instant::now());
            if remaining.is_zero() {
                return self.has_candidate_for_target_dc(routed_dc).await;
            }
            if tokio::time::timeout(remaining, waiter).await.is_err() {
                return self.has_candidate_for_target_dc(routed_dc).await;
            }
        }
    }

    async fn has_candidate_for_target_dc(&self, target_dc: i16) -> bool {
    async fn has_candidate_for_target_dc(&self, routed_dc: i32) -> bool {
        let writers_snapshot = {
            let ws = self.writers.read().await;
            if ws.is_empty() {
@@ -410,41 +435,41 @@ impl MePool {
            ws.clone()
        };
        let mut candidate_indices = self
            .candidate_indices_for_dc(&writers_snapshot, target_dc, false)
            .candidate_indices_for_dc(&writers_snapshot, routed_dc, false)
            .await;
        if candidate_indices.is_empty() {
            candidate_indices = self
                .candidate_indices_for_dc(&writers_snapshot, target_dc, true)
                .candidate_indices_for_dc(&writers_snapshot, routed_dc, true)
                .await;
        }
        !candidate_indices.is_empty()
    }

    async fn trigger_async_recovery_for_target_dc(self: &Arc<Self>, target_dc: i16) -> bool {
        let endpoints = self.endpoint_candidates_for_target_dc(target_dc).await;
    async fn trigger_async_recovery_for_target_dc(self: &Arc<Self>, routed_dc: i32) -> bool {
        let endpoints = self.endpoint_candidates_for_target_dc(routed_dc).await;
        if endpoints.is_empty() {
            return false;
        }
        self.stats.increment_me_async_recovery_trigger_total();
        for addr in endpoints.into_iter().take(8) {
            self.trigger_immediate_refill(addr);
            self.trigger_immediate_refill_for_dc(addr, routed_dc);
        }
        true
    }
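The global recovery path that follows deduplicates refill triggers by the `(dc, addr)` pair instead of by address alone, so an endpoint that serves several DC ids can be refilled once per DC while the total budget stays at 8 slots. A standalone sketch of that selection (the function name and the fixed budget of 8 are taken from the hunk; everything else is illustrative):

```rust
use std::collections::HashSet;
use std::net::{IpAddr, Ipv4Addr, SocketAddr};

// Pick refill targets, deduplicating by (dc, addr) with a budget of 8 slots,
// mirroring the seen-set change in trigger_async_recovery_global above.
fn pick_refill_targets(candidates: &[(i32, SocketAddr)]) -> Vec<(i32, SocketAddr)> {
    let mut seen = HashSet::new();
    let mut picked = Vec::new();
    for &(dc, addr) in candidates {
        if seen.insert((dc, addr)) {
            picked.push((dc, addr)); // first sighting of this (dc, addr) pair
        }
        if seen.len() >= 8 {
            break; // global budget exhausted
        }
    }
    picked
}

fn main() {
    let a = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(10, 0, 0, 1)), 443);
    let picked = pick_refill_targets(&[(1, a), (1, a), (-1, a)]);
    // duplicate (1, a) dropped; same addr under DC -1 still refilled
    assert_eq!(picked.len(), 2);
}
```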

    async fn trigger_async_recovery_global(self: &Arc<Self>) {
        self.stats.increment_me_async_recovery_trigger_total();
        let mut seen = HashSet::<SocketAddr>::new();
        let mut seen = HashSet::<(i32, SocketAddr)>::new();
        for family in self.family_order() {
            let map = match family {
                IpFamily::V4 => self.proxy_map_v4.read().await.clone(),
                IpFamily::V6 => self.proxy_map_v6.read().await.clone(),
            let map_guard = match family {
                IpFamily::V4 => self.proxy_map_v4.read().await,
                IpFamily::V6 => self.proxy_map_v6.read().await,
            };
            for addrs in map.values() {
            for (dc, addrs) in map_guard.iter() {
                for (ip, port) in addrs {
                    let addr = SocketAddr::new(*ip, *port);
                    if seen.insert(addr) {
                        self.trigger_immediate_refill(addr);
                    if seen.insert((*dc, addr)) {
                        self.trigger_immediate_refill_for_dc(addr, *dc);
                    }
                    if seen.len() >= 8 {
                        return;
@@ -454,29 +479,24 @@ impl MePool {
            }
        }
    }

    async fn endpoint_candidates_for_target_dc(&self, target_dc: i16) -> Vec<SocketAddr> {
        let key = target_dc as i32;
    async fn endpoint_candidates_for_target_dc(&self, routed_dc: i32) -> Vec<SocketAddr> {
        let mut preferred = Vec::<SocketAddr>::new();
        let mut seen = HashSet::<SocketAddr>::new();

        for family in self.family_order() {
            let map = match family {
                IpFamily::V4 => self.proxy_map_v4.read().await.clone(),
                IpFamily::V6 => self.proxy_map_v6.read().await.clone(),
            let map_guard = match family {
                IpFamily::V4 => self.proxy_map_v4.read().await,
                IpFamily::V6 => self.proxy_map_v6.read().await,
            };
            let mut lookup_keys = vec![key, key.abs(), -key.abs()];
            let def = self.default_dc.load(Ordering::Relaxed);
            if def != 0 {
                lookup_keys.push(def);
            let mut family_selected = Vec::<SocketAddr>::new();
            if let Some(addrs) = map_guard.get(&routed_dc) {
                for (ip, port) in addrs {
                    family_selected.push(SocketAddr::new(*ip, *port));
                }
            }
            for lookup in lookup_keys {
                if let Some(addrs) = map.get(&lookup) {
                    for (ip, port) in addrs {
                        let addr = SocketAddr::new(*ip, *port);
                        if seen.insert(addr) {
                            preferred.push(addr);
                        }
                    }
            for addr in family_selected {
                if seen.insert(addr) {
                    preferred.push(addr);
                }
            }
            if !preferred.is_empty() && !self.decision.effective_multipath {
@@ -489,7 +509,7 @@ impl MePool {

    async fn maybe_trigger_hybrid_recovery(
        self: &Arc<Self>,
        target_dc: i16,
        routed_dc: i32,
        hybrid_recovery_round: &mut u32,
        hybrid_last_recovery_at: &mut Option<Instant>,
        hybrid_wait_step: Duration,
@@ -501,7 +521,7 @@ impl MePool {
        }

        let round = *hybrid_recovery_round;
        let target_triggered = self.trigger_async_recovery_for_target_dc(target_dc).await;
        let target_triggered = self.trigger_async_recovery_for_target_dc(routed_dc).await;
        if !target_triggered || round % HYBRID_GLOBAL_BURST_PERIOD_ROUNDS == 0 {
            self.trigger_async_recovery_global().await;
        }
@@ -514,7 +534,11 @@ impl MePool {
        let mut p = Vec::with_capacity(12);
        p.extend_from_slice(&RPC_CLOSE_EXT_U32.to_le_bytes());
        p.extend_from_slice(&conn_id.to_le_bytes());
        if w.tx.send(WriterCommand::DataAndFlush(p)).await.is_err() {
        if w.tx
            .send(WriterCommand::DataAndFlush(Bytes::from(p)))
            .await
            .is_err()
        {
            debug!("ME close write failed");
            self.remove_writer_and_close_clients(w.writer_id).await;
        }
@@ -531,7 +555,7 @@ impl MePool {
        let mut p = Vec::with_capacity(12);
        p.extend_from_slice(&RPC_CLOSE_CONN_U32.to_le_bytes());
        p.extend_from_slice(&conn_id.to_le_bytes());
        match w.tx.try_send(WriterCommand::DataAndFlush(p)) {
        match w.tx.try_send(WriterCommand::DataAndFlush(Bytes::from(p))) {
            Ok(()) => {}
            Err(TrySendError::Full(cmd)) => {
                let _ = tokio::time::timeout(Duration::from_millis(50), w.tx.send(cmd)).await;
@@ -564,40 +588,22 @@ impl MePool {
    pub(super) async fn candidate_indices_for_dc(
        &self,
        writers: &[super::pool::MeWriter],
        target_dc: i16,
        routed_dc: i32,
        include_warm: bool,
    ) -> Vec<usize> {
        let key = target_dc as i32;
        let mut preferred = Vec::<SocketAddr>::new();
        let mut preferred = HashSet::<SocketAddr>::new();

        for family in self.family_order() {
            let map_guard = match family {
                IpFamily::V4 => self.proxy_map_v4.read().await,
                IpFamily::V6 => self.proxy_map_v6.read().await,
            };

            if let Some(v) = map_guard.get(&key) {
                preferred.extend(v.iter().map(|(ip, port)| SocketAddr::new(*ip, *port)));
            let mut family_selected = Vec::<SocketAddr>::new();
            if let Some(v) = map_guard.get(&routed_dc) {
                family_selected.extend(v.iter().map(|(ip, port)| SocketAddr::new(*ip, *port)));
            }
            if preferred.is_empty() {
                let abs = key.abs();
                if let Some(v) = map_guard.get(&abs) {
                    preferred.extend(v.iter().map(|(ip, port)| SocketAddr::new(*ip, *port)));
                }
            }
            if preferred.is_empty() {
                let abs = key.abs();
                if let Some(v) = map_guard.get(&-abs) {
                    preferred.extend(v.iter().map(|(ip, port)| SocketAddr::new(*ip, *port)));
                }
            }
            if preferred.is_empty() {
                let def = self.default_dc.load(Ordering::Relaxed);
                if def != 0
                    && let Some(v) = map_guard.get(&def)
                {
                    preferred.extend(v.iter().map(|(ip, port)| SocketAddr::new(*ip, *port)));
||||
}
|
||||
for endpoint in family_selected {
|
||||
preferred.insert(endpoint);
|
||||
}
|
||||
|
||||
drop(map_guard);
|
||||
@@ -608,9 +614,7 @@ impl MePool {
|
||||
}
|
||||
|
||||
if preferred.is_empty() {
|
||||
return (0..writers.len())
|
||||
.filter(|i| self.writer_eligible_for_selection(&writers[*i], include_warm))
|
||||
.collect();
|
||||
return Vec::new();
|
||||
}
|
||||
|
||||
let mut out = Vec::new();
|
||||
@@ -618,15 +622,10 @@ impl MePool {
|
||||
if !self.writer_eligible_for_selection(w, include_warm) {
|
||||
continue;
|
||||
}
|
||||
if preferred.contains(&w.addr) {
|
||||
if w.writer_dc == routed_dc && preferred.contains(&w.addr) {
|
||||
out.push(idx);
|
||||
}
|
||||
}
|
||||
if out.is_empty() {
|
||||
return (0..writers.len())
|
||||
.filter(|i| self.writer_eligible_for_selection(&writers[*i], include_warm))
|
||||
.collect();
|
||||
}
|
||||
out
|
||||
}
|
||||
@@ -1,4 +1,5 @@
use std::net::{IpAddr, Ipv4Addr, SocketAddr};
use bytes::Bytes;

use crate::protocol::constants::*;

@@ -48,7 +49,7 @@ pub(crate) fn build_proxy_req_payload(
    data: &[u8],
    proxy_tag: Option<&[u8]>,
    proto_flags: u32,
) -> Vec<u8> {
) -> Bytes {
    let mut b = Vec::with_capacity(128 + data.len());

    b.extend_from_slice(&RPC_PROXY_REQ_U32.to_le_bytes());
@@ -85,7 +86,7 @@ pub(crate) fn build_proxy_req_payload(
    }

    b.extend_from_slice(data);
    b
    Bytes::from(b)
}

pub fn proto_flags_for_tag(tag: crate::protocol::constants::ProtoTag, has_proxy_tag: bool) -> u32 {
@@ -7,7 +7,7 @@
use std::collections::{BTreeSet, HashMap};
use std::net::{SocketAddr, IpAddr};
use std::sync::Arc;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::atomic::{AtomicU64, AtomicUsize, Ordering};
use std::time::Duration;
use tokio::net::TcpStream;
use tokio::sync::RwLock;
@@ -202,6 +202,15 @@ pub struct UpstreamApiSnapshot {
    pub upstreams: Vec<UpstreamApiItemSnapshot>,
}

#[derive(Debug, Clone, Copy)]
pub struct UpstreamApiPolicySnapshot {
    pub connect_retry_attempts: u32,
    pub connect_retry_backoff_ms: u64,
    pub connect_budget_ms: u64,
    pub unhealthy_fail_threshold: u32,
    pub connect_failfast_hard_errors: bool,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct UpstreamEgressInfo {
    pub route_kind: UpstreamRouteKind,
@@ -228,6 +237,8 @@ pub struct UpstreamManager {
    connect_budget: Duration,
    unhealthy_fail_threshold: u32,
    connect_failfast_hard_errors: bool,
    no_upstreams_warn_epoch_ms: Arc<AtomicU64>,
    no_healthy_warn_epoch_ms: Arc<AtomicU64>,
    stats: Arc<Stats>,
}

@@ -253,10 +264,35 @@ impl UpstreamManager {
            connect_budget: Duration::from_millis(connect_budget_ms.max(1)),
            unhealthy_fail_threshold: unhealthy_fail_threshold.max(1),
            connect_failfast_hard_errors,
            no_upstreams_warn_epoch_ms: Arc::new(AtomicU64::new(0)),
            no_healthy_warn_epoch_ms: Arc::new(AtomicU64::new(0)),
            stats,
        }
    }

    fn now_epoch_ms() -> u64 {
        std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)
            .unwrap_or_default()
            .as_millis() as u64
    }

    fn should_emit_warn(last_epoch_ms: &AtomicU64, cooldown_ms: u64) -> bool {
        let now_epoch_ms = Self::now_epoch_ms();
        let previous_epoch_ms = last_epoch_ms.load(Ordering::Relaxed);
        if now_epoch_ms.saturating_sub(previous_epoch_ms) < cooldown_ms {
            return false;
        }
        last_epoch_ms
            .compare_exchange(
                previous_epoch_ms,
                now_epoch_ms,
                Ordering::AcqRel,
                Ordering::Relaxed,
            )
            .is_ok()
    }

    pub fn try_api_snapshot(&self) -> Option<UpstreamApiSnapshot> {
        let guard = self.upstreams.try_read().ok()?;
        let now = std::time::Instant::now();
@@ -315,6 +351,16 @@ impl UpstreamManager {
        Some(UpstreamApiSnapshot { summary, upstreams })
    }

    pub fn api_policy_snapshot(&self) -> UpstreamApiPolicySnapshot {
        UpstreamApiPolicySnapshot {
            connect_retry_attempts: self.connect_retry_attempts,
            connect_retry_backoff_ms: self.connect_retry_backoff.as_millis() as u64,
            connect_budget_ms: self.connect_budget.as_millis() as u64,
            unhealthy_fail_threshold: self.unhealthy_fail_threshold,
            connect_failfast_hard_errors: self.connect_failfast_hard_errors,
        }
    }

    #[cfg(unix)]
    fn resolve_interface_addrs(name: &str, want_ipv6: bool) -> Vec<IpAddr> {
        use nix::ifaddrs::getifaddrs;
@@ -514,12 +560,22 @@ impl UpstreamManager {
            .collect();

        if filtered_upstreams.is_empty() {
            warn!(scope = scope, "No upstreams available! Using first (direct?)");
            if Self::should_emit_warn(
                self.no_upstreams_warn_epoch_ms.as_ref(),
                5_000,
            ) {
                warn!(scope = scope, "No upstreams available! Using first (direct?)");
            }
            return None;
        }

        if healthy.is_empty() {
            warn!(scope = scope, "No healthy upstreams available! Using random.");
            if Self::should_emit_warn(
                self.no_healthy_warn_epoch_ms.as_ref(),
                5_000,
            ) {
                warn!(scope = scope, "No healthy upstreams available! Using random.");
            }
            return Some(filtered_upstreams[rand::rng().gen_range(0..filtered_upstreams.len())]);
        }
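The `should_emit_warn` helper added above debounces repeated log warnings: it stores the last emission time as an atomic epoch timestamp and lets a warning through only when the cooldown has elapsed, using `compare_exchange` so that concurrent callers race for a single slot. A minimal Python sketch of the same idea (the class and its names are illustrative, not from the codebase; a lock stands in for the Rust compare-and-swap):

```python
import threading
import time


class WarnGate:
    """Emit at most one warning per cooldown window."""

    def __init__(self, cooldown_ms):
        self.cooldown_ms = cooldown_ms
        self._last_ms = 0
        # A lock stands in for the lock-free compare_exchange in the Rust code.
        self._lock = threading.Lock()

    def should_emit(self, now_ms=None):
        if now_ms is None:
            now_ms = int(time.time() * 1000)
        with self._lock:
            if now_ms - self._last_ms < self.cooldown_ms:
                return False  # still inside the cooldown window
            self._last_ms = now_ms
            return True       # this caller wins the slot


gate = WarnGate(cooldown_ms=5_000)
print(gate.should_emit(now_ms=10_000))  # True: first warning passes
print(gate.should_emit(now_ms=12_000))  # False: still inside the 5 s window
print(gate.should_emit(now_ms=16_000))  # True: window elapsed
```

The Rust version avoids the lock entirely: a failed `compare_exchange` simply means another thread already claimed this window, so the warning is suppressed rather than retried.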
403 tools/aesdiag.py Normal file
@@ -0,0 +1,403 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
AES-CBC validation tool for telemt middle proxy logs with support for noop padding.
|
||||
|
||||
Parses log lines containing:
|
||||
- "ME diag: derived keys and handshake plaintext" (provides write_key, write_iv, hs_plain)
|
||||
- "ME diag: handshake ciphertext" (provides hs_cipher)
|
||||
|
||||
For each pair it:
|
||||
- Decrypts the ciphertext using the provided key and IV.
|
||||
- Compares the beginning of the decrypted data with hs_plain.
|
||||
- Attempts to identify the actual padding scheme (PKCS#7, zero padding, noop padding).
|
||||
- Re-encrypts with different paddings and reports mismatches block by block.
|
||||
- Accumulates statistics for final summary.
|
||||
"""
|
||||
|
||||
import sys
|
||||
import re
|
||||
from collections import defaultdict
|
||||
from Crypto.Cipher import AES
|
||||
|
||||
# Constants
|
||||
NOOP_FRAME = bytes([0x04, 0x00, 0x00, 0x00]) # noop frame used for padding
|
||||
|
||||
def hex_str_to_bytes(hex_str):
|
||||
"""Convert a hex string like 'aa bb cc' to bytes."""
|
||||
return bytes.fromhex(hex_str.replace(' ', ''))
|
||||
|
||||
def parse_params(line):
|
||||
"""Extract key=value pairs where value is a space-separated hex string."""
|
||||
pattern = r'(\w+)=((?:[0-9a-f]{2} )*[0-9a-f]{2})'
|
||||
return {key: val for key, val in re.findall(pattern, line)}
|
||||
|
||||
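As a quick illustration, `parse_params` pulls every hex-valued field out of a diag line in one pass; the sample line below is made up but follows the format the regex expects:

```python
import re


def parse_params(line):
    """Same regex as in aesdiag.py: key=value pairs of space-separated hex bytes."""
    pattern = r'(\w+)=((?:[0-9a-f]{2} )*[0-9a-f]{2})'
    return {key: val for key, val in re.findall(pattern, line)}


# Hypothetical diag line in the format the tool consumes
sample = "ME diag: handshake ciphertext hs_cipher=de ad be ef"
params = parse_params(sample)
print(params)                                            # {'hs_cipher': 'de ad be ef'}
print(bytes.fromhex(params['hs_cipher'].replace(' ', '')))  # b'\xde\xad\xbe\xef'
```

Only tokens of the form `name=hh hh …` match, so surrounding prose like "ME diag:" is ignored automatically.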
def pkcs7_pad(data, block_size=16):
    """Apply PKCS#7 padding to the given data."""
    pad_len = block_size - (len(data) % block_size)
    if pad_len == 0:
        pad_len = block_size
    return data + bytes([pad_len]) * pad_len


def zero_pad(data, block_size=16):
    """Pad with zeros to the next block boundary."""
    pad_len = block_size - (len(data) % block_size)
    if pad_len == block_size:
        return data  # already full blocks, no zero padding needed
    return data + bytes(pad_len)
def noop_pad(data):
    """
    Pad with the minimal number of noop frames (b'\\x04\\x00\\x00\\x00')
    to reach a multiple of 16 bytes.
    """
    block_size = 16
    frame_len = len(NOOP_FRAME)  # 4
    remainder = len(data) % block_size
    if remainder == 0:
        return data  # no padding needed
    # We need k frames such that (len(data) + k*frame_len) % block_size == 0,
    # i.e. k*4 ≡ (16 - remainder) (mod 16). Since gcd(4, 16) = 4, a solution
    # exists only when (16 - remainder) is a multiple of 4, i.e. when the
    # plaintext length mod 16 is 4, 8 or 12. That matches the protocol: noop
    # frames are 4 bytes long, so the plaintext length is always a multiple
    # of 4. In our logs the handshake plaintext is always 44 bytes
    # (44 mod 16 == 12), so k = 1 suffices.
    need = block_size - remainder
    if need % frame_len != 0:
        # Shouldn't happen by protocol; fall back to ceil(need / frame_len)
        # frames so the result still reaches a block boundary.
        k = (need + frame_len - 1) // frame_len
    else:
        k = need // frame_len
    return data + NOOP_FRAME * k
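To make the three schemes concrete, here they are side by side on a 44-byte plaintext like the handshake mentioned in the comments (the padders are condensed restatements of the functions above):

```python
# Condensed restatements of the three padders for a side-by-side comparison.
NOOP_FRAME = bytes([0x04, 0x00, 0x00, 0x00])


def pkcs7_pad(data, block_size=16):
    pad_len = block_size - (len(data) % block_size)
    return data + bytes([pad_len]) * pad_len


def zero_pad(data, block_size=16):
    pad_len = block_size - (len(data) % block_size)
    return data if pad_len == block_size else data + bytes(pad_len)


def noop_pad(data):
    remainder = len(data) % 16
    if remainder == 0:
        return data
    return data + NOOP_FRAME * ((16 - remainder + 3) // 4)


plain = bytes(range(44))            # 44 bytes: 44 mod 16 == 12, so 4 pad bytes
print(pkcs7_pad(plain)[-4:].hex())  # '04040404' : four copies of the pad length
print(zero_pad(plain)[-4:].hex())   # '00000000' : zeros to the block boundary
print(noop_pad(plain)[-4:].hex())   # '04000000' : exactly one noop frame
print(len(pkcs7_pad(plain)), len(zero_pad(plain)), len(noop_pad(plain)))  # 48 48 48
```

Note that for this length PKCS#7 and noop padding share the leading 0x04 byte and differ only afterwards, which is one reason the tool re-encrypts with each scheme instead of eyeballing the tail of the decrypted buffer.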
def unpad_pkcs7(data):
    """Remove PKCS#7 padding if present; otherwise return the data unchanged."""
    if not data:
        return data
    pad_len = data[-1]
    if pad_len < 1 or pad_len > 16:
        return data  # not valid PKCS#7, return as is
    # Check that all padding bytes are equal to pad_len
    if all(b == pad_len for b in data[-pad_len:]):
        return data[:-pad_len]
    return data


def is_noop_padded(decrypted, plain_log):
    """
    Check whether the extra bytes after plain_log in decrypted consist of
    one or more NOOP_FRAMEs. Returns True if they do, False otherwise.
    """
    extra = decrypted[len(plain_log):]
    if len(extra) == 0:
        return False
    # Split into chunks of 4
    if len(extra) % 4 != 0:
        return False
    for i in range(0, len(extra), 4):
        if extra[i:i+4] != NOOP_FRAME:
            return False
    return True
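A quick sanity check of the two detectors against known paddings (the helpers are slightly condensed restatements of the functions above, behaviorally identical; the sample plaintext is made up):

```python
# Condensed restatements of the two detectors, checked against known paddings.
NOOP_FRAME = bytes([0x04, 0x00, 0x00, 0x00])


def unpad_pkcs7(data):
    if not data:
        return data
    pad_len = data[-1]
    if pad_len < 1 or pad_len > 16:
        return data
    if all(b == pad_len for b in data[-pad_len:]):
        return data[:-pad_len]
    return data


def is_noop_padded(decrypted, plain_log):
    extra = decrypted[len(plain_log):]
    if not extra or len(extra) % 4 != 0:
        return False
    return all(extra[i:i + 4] == NOOP_FRAME for i in range(0, len(extra), 4))


plain = b"handshake-plaintext"                 # 19 bytes, PKCS#7 pad length 13
padded = plain + bytes([13]) * 13
print(unpad_pkcs7(padded) == plain)                        # True
print(is_noop_padded(plain + NOOP_FRAME * 2, plain))       # True
print(is_noop_padded(plain + b"\x00\x00\x00\x00", plain))  # False: zeros, not a frame
```

Because `unpad_pkcs7` validates every padding byte before stripping, random trailing data is left untouched rather than truncated incorrectly.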
def main():
    derived_list = []  # entries from "derived keys and handshake plaintext"
    cipher_list = []   # entries from "handshake ciphertext"

    for line in sys.stdin:
        if 'ME diag: derived keys and handshake plaintext' in line:
            params = parse_params(line)
            if all(k in params for k in ('write_key', 'write_iv', 'hs_plain')):
                derived_list.append(params)
        elif 'ME diag: handshake ciphertext' in line:
            params = parse_params(line)
            if 'hs_cipher' in params:
                cipher_list.append(params)

    # Warn about count mismatch but process as many pairs as possible
    n_pairs = min(len(derived_list), len(cipher_list))
    if len(derived_list) != len(cipher_list):
        print(f"\n[WARN] Number of derived entries ({len(derived_list)}) "
              f"differs from cipher entries ({len(cipher_list)}). "
              f"Processing first {n_pairs} pairs.\n")

    # Statistics accumulators
    stats = {
        'total': n_pairs,
        'key_length_ok': 0,
        'iv_length_ok': 0,
        'cipher_aligned': 0,
        'decryption_match_start': 0,     # first bytes equal hs_plain
        'pkcs7_after_unpad_matches': 0,  # after removing PKCS7, equals hs_plain
        'extra_bytes_all_zero': 0,       # extra bytes after hs_plain are zero
        'extra_bytes_noop': 0,           # extra bytes are noop frames
        'pkcs7_encrypt_ok': 0,           # re-encryption with PKCS7 matches ciphertext
        'zero_encrypt_ok': 0,            # re-encryption with zero padding matches
        'noop_encrypt_ok': 0,            # re-encryption with noop padding matches
        'no_padding_encrypt_ok': 0,      # only if plaintext multiple of 16 and matches
        'no_padding_applicable': 0,      # number of tests where plaintext len % 16 == 0
    }

    detailed_results = []  # store per-test summary for the final heuristic

    for idx, (der, ciph) in enumerate(zip(derived_list[:n_pairs], cipher_list[:n_pairs]), 1):
        print(f"\n{'='*60}")
        print(f"Test #{idx}")
        print(f"{'='*60}")

        # Local stats for this test
        test_stats = defaultdict(bool)

        try:
            key = hex_str_to_bytes(der['write_key'])
            iv = hex_str_to_bytes(der['write_iv'])
            plain_log = hex_str_to_bytes(der['hs_plain'])
            ciphertext = hex_str_to_bytes(ciph['hs_cipher'])

            # Basic sanity checks
            print(f"[INFO] Key length        : {len(key)} bytes (expected 32)")
            print(f"[INFO] IV length         : {len(iv)} bytes (expected 16)")
            print(f"[INFO] hs_plain length   : {len(plain_log)} bytes")
            print(f"[INFO] hs_cipher length  : {len(ciphertext)} bytes")

            if len(key) == 32:
                stats['key_length_ok'] += 1
                test_stats['key_ok'] = True
            else:
                print("[WARN] Key length is not 32 bytes - AES-256 requires a 32-byte key.")

            if len(iv) == 16:
                stats['iv_length_ok'] += 1
                test_stats['iv_ok'] = True
            else:
                print("[WARN] IV length is not 16 bytes - AES-CBC requires a 16-byte IV.")

            if len(ciphertext) % 16 == 0:
                stats['cipher_aligned'] += 1
                test_stats['cipher_aligned'] = True
            else:
                print("[ERROR] Ciphertext length is not a multiple of 16 - invalid AES-CBC block alignment.")
                # Skip further processing for this test
                detailed_results.append(test_stats)
                continue

            # --- Decryption test ---
            cipher_dec = AES.new(key, AES.MODE_CBC, iv)
            decrypted = cipher_dec.decrypt(ciphertext)
            print(f"[INFO] Decrypted ({len(decrypted)} bytes): {decrypted.hex()}")

            # Compare the beginning with hs_plain
            match_len = min(len(plain_log), len(decrypted))
            if decrypted[:match_len] == plain_log[:match_len]:
                print(f"[OK] First {match_len} bytes match hs_plain.")
                stats['decryption_match_start'] += 1
                test_stats['decrypt_start_ok'] = True
            else:
                print("[FAIL] First bytes do NOT match hs_plain.")
                for i in range(match_len):
                    if decrypted[i] != plain_log[i]:
                        print(f"  First mismatch at byte {i}: hs_plain={plain_log[i]:02x}, decrypted={decrypted[i]:02x}")
                        break
                test_stats['decrypt_start_ok'] = False

            # --- Try to identify the actual padding ---
            # Remove possible PKCS#7 padding from the decrypted data
            decrypted_unpadded = unpad_pkcs7(decrypted)
            if decrypted_unpadded != decrypted:
                print(f"[INFO] After removing PKCS#7 padding: {len(decrypted_unpadded)} bytes left.")
                if decrypted_unpadded == plain_log:
                    print("[OK] Decrypted data with PKCS#7 removed exactly matches hs_plain.")
                    stats['pkcs7_after_unpad_matches'] += 1
                    test_stats['pkcs7_unpad_matches'] = True
                else:
                    print("[INFO] Decrypted (PKCS#7 removed) does NOT match hs_plain.")
                    test_stats['pkcs7_unpad_matches'] = False
            else:
                print("[INFO] No valid PKCS#7 padding detected in decrypted data.")
                test_stats['pkcs7_unpad_matches'] = False

            # Check if the extra bytes after hs_plain are all zero (zero padding)
            extra = decrypted[len(plain_log):]
            if extra and all(b == 0 for b in extra):
                print("[INFO] Extra bytes after hs_plain are all zeros - likely zero padding.")
                stats['extra_bytes_all_zero'] += 1
                test_stats['extra_zero'] = True
            else:
                test_stats['extra_zero'] = False

            # Check for noop padding in the extra bytes
            if is_noop_padded(decrypted, plain_log):
                print(f"[OK] Extra bytes after hs_plain consist of noop frames ({NOOP_FRAME.hex()}).")
                stats['extra_bytes_noop'] += 1
                test_stats['extra_noop'] = True
            else:
                test_stats['extra_noop'] = False
                if extra:
                    print(f"[INFO] Extra bytes after hs_plain (hex): {extra.hex()}")

            # --- Re-encryption tests ---
            # PKCS#7
            padded_pkcs7 = pkcs7_pad(plain_log)
            cipher_enc = AES.new(key, AES.MODE_CBC, iv)
            computed_pkcs7 = cipher_enc.encrypt(padded_pkcs7)
            if computed_pkcs7 == ciphertext:
                print("[OK] PKCS#7 padding produces the expected ciphertext.")
                stats['pkcs7_encrypt_ok'] += 1
                test_stats['pkcs7_enc_ok'] = True
            else:
                print("[FAIL] PKCS#7 padding does NOT match the ciphertext.")
                test_stats['pkcs7_enc_ok'] = False
                # Show the block where the first difference occurs
                block_size = 16
                for blk in range(len(ciphertext) // block_size):
                    start = blk * block_size
                    exp = ciphertext[start:start + block_size]
                    comp = computed_pkcs7[start:start + block_size]
                    if exp != comp:
                        print(f"  First difference in block {blk}:")
                        print(f"    expected : {exp.hex()}")
                        print(f"    computed : {comp.hex()}")
                        break

            # Zero padding
            padded_zero = zero_pad(plain_log)
            # Ensure multiple of 16
            if len(padded_zero) % 16 != 0:
                padded_zero += bytes(16 - (len(padded_zero) % 16))
            cipher_enc_zero = AES.new(key, AES.MODE_CBC, iv)
            computed_zero = cipher_enc_zero.encrypt(padded_zero)
            if computed_zero == ciphertext:
                print("[OK] Zero padding produces the expected ciphertext.")
                stats['zero_encrypt_ok'] += 1
                test_stats['zero_enc_ok'] = True
            else:
                print("[INFO] Zero padding does NOT match (expected, unless the log used PKCS#7).")
                test_stats['zero_enc_ok'] = False

            # Noop padding
            padded_noop = noop_pad(plain_log)
            # noop_pad already returns a multiple of 16
            cipher_enc_noop = AES.new(key, AES.MODE_CBC, iv)
            computed_noop = cipher_enc_noop.encrypt(padded_noop)
            if computed_noop == ciphertext:
                print("[OK] Noop padding produces the expected ciphertext.")
                stats['noop_encrypt_ok'] += 1
                test_stats['noop_enc_ok'] = True
            else:
                print("[FAIL] Noop padding does NOT match the ciphertext.")
                test_stats['noop_enc_ok'] = False
                # Show the block difference if needed
                for blk in range(len(ciphertext) // 16):
                    start = blk * 16
                    if computed_noop[start:start + 16] != ciphertext[start:start + 16]:
                        print(f"  First difference in block {blk}:")
                        print(f"    expected : {ciphertext[start:start + 16].hex()}")
                        print(f"    computed : {computed_noop[start:start + 16].hex()}")
                        break

            # No padding (only possible if the plaintext is already a multiple of 16)
            if len(plain_log) % 16 == 0:
                stats['no_padding_applicable'] += 1
                cipher_enc_nopad = AES.new(key, AES.MODE_CBC, iv)
                computed_nopad = cipher_enc_nopad.encrypt(plain_log)
                if computed_nopad == ciphertext:
                    print("[OK] No padding (plaintext multiple of 16) matches.")
                    stats['no_padding_encrypt_ok'] += 1
                    test_stats['no_pad_enc_ok'] = True
                else:
                    print("[INFO] No padding does NOT match.")
                    test_stats['no_pad_enc_ok'] = False
            else:
                print("[INFO] Skipping no-padding test because plaintext length is not a multiple of 16.")

        except Exception as e:
            print(f"[EXCEPTION] {e}")
            test_stats['exception'] = True

        detailed_results.append(test_stats)

    # --- Final statistics and heuristic summary ---
    print("\n" + "=" * 60)
    print("STATISTICS SUMMARY")
    print("=" * 60)
    print(f"Total tests processed            : {stats['total']}")
    print(f"Key length OK (32)               : {stats['key_length_ok']}/{stats['total']}")
    print(f"IV length OK (16)                : {stats['iv_length_ok']}/{stats['total']}")
    print(f"Ciphertext 16-byte aligned       : {stats['cipher_aligned']}/{stats['total']}")
    print(f"Decryption starts with hs_plain  : {stats['decryption_match_start']}/{stats['total']}")
    print(f"After PKCS#7 removal matches     : {stats['pkcs7_after_unpad_matches']}/{stats['total']}")
    print(f"Extra bytes after hs_plain are 0 : {stats['extra_bytes_all_zero']}/{stats['total']}")
    print(f"Extra bytes are noop frames      : {stats['extra_bytes_noop']}/{stats['total']}")
    print(f"PKCS#7 re-encryption OK          : {stats['pkcs7_encrypt_ok']}/{stats['total']}")
    print(f"Zero padding re-encryption OK    : {stats['zero_encrypt_ok']}/{stats['total']}")
    print(f"Noop padding re-encryption OK    : {stats['noop_encrypt_ok']}/{stats['total']}")
    if stats['no_padding_applicable'] > 0:
        print(f"No-padding applicable tests      : {stats['no_padding_applicable']}")
        print(f"No-padding re-encryption OK      : {stats['no_padding_encrypt_ok']}/{stats['no_padding_applicable']}")

    # Heuristic: determine the most likely padding
    print("\n" + "=" * 60)
    print("HEURISTIC CONCLUSION")
    print("=" * 60)

    if stats['decryption_match_start'] == stats['total']:
        print("✓ All tests: first bytes of decrypted data match hs_plain → keys and IV are correct.")
    else:
        print("✗ Some tests: first bytes mismatch → possible key/IV issues or corrupted ciphertext.")

    # Guess the padding based on re-encryption success and extra bytes
    candidates = []
    if stats['pkcs7_encrypt_ok'] == stats['total']:
        candidates.append("PKCS#7")
    if stats['zero_encrypt_ok'] == stats['total']:
        candidates.append("zero padding")
    if stats['noop_encrypt_ok'] == stats['total']:
        candidates.append("noop padding")
    if stats['no_padding_applicable'] == stats['total'] and stats['no_padding_encrypt_ok'] == stats['total']:
        candidates.append("no padding")

    if len(candidates) == 1:
        print(f"✓ All tests consistent with padding scheme: {candidates[0]}.")
    elif len(candidates) > 1:
        print(f"⚠ Multiple padding schemes succeed in all tests: {', '.join(candidates)}. This is unusual.")
    else:
        # No scheme succeeded in all tests - look at the ratios
        print("Mixed padding results:")
        total = stats['total']
        pkcs7_ratio = stats['pkcs7_encrypt_ok'] / total if total else 0
        zero_ratio = stats['zero_encrypt_ok'] / total if total else 0
        noop_ratio = stats['noop_encrypt_ok'] / total if total else 0
        print(f"  PKCS#7 success = {stats['pkcs7_encrypt_ok']}/{total} ({pkcs7_ratio * 100:.1f}%)")
        print(f"  Zero success   = {stats['zero_encrypt_ok']}/{total} ({zero_ratio * 100:.1f}%)")
        print(f"  Noop success   = {stats['noop_encrypt_ok']}/{total} ({noop_ratio * 100:.1f}%)")

        if noop_ratio > max(pkcs7_ratio, zero_ratio):
            print("→ Noop padding is most frequent. Check whether the extra bytes are indeed noop frames.")
        elif pkcs7_ratio > zero_ratio:
            print("→ PKCS#7 is most frequent, but fails in some tests.")
        elif zero_ratio > pkcs7_ratio:
            print("→ Zero padding is most frequent, but fails in some tests.")
        else:
            print("→ No clear winner; possibly a different padding scheme or random data.")

    # Additional heuristics based on the extra bytes
    if stats['extra_bytes_noop'] == stats['total']:
        print("✓ All tests: extra bytes after hs_plain are noop frames → strongly indicates noop padding.")
    if stats['extra_bytes_all_zero'] == stats['total']:
        print("✓ All tests: extra bytes are zeros → suggests zero padding.")

    # Final health check
    if (stats['decryption_match_start'] == stats['total'] and
            (stats['pkcs7_encrypt_ok'] == stats['total'] or
             stats['zero_encrypt_ok'] == stats['total'] or
             stats['noop_encrypt_ok'] == stats['total'] or
             stats['no_padding_encrypt_ok'] == stats['no_padding_applicable'] == stats['total'])):
        print("\n✅ OVERALL: All tests consistent. The encryption parameters and padding are correct.")
    else:
        print("\n⚠️ OVERALL: Inconsistencies detected. Review the detailed output for failing tests.")


if __name__ == '__main__':
    main()