mirror of
https://github.com/telemt/telemt.git
synced 2026-04-19 11:34:10 +03:00
Compare commits
11 Commits
| Author | SHA1 | Date |
|---|---|---|
|  | f1bf95a7de |  |
|  | 959a16af88 |  |
|  | a54f9ba719 |  |
|  | 2d5cd9c8e1 |  |
|  | 37b6f7b985 |  |
|  | 50e9e5cf32 |  |
|  | d72cfd6bc4 |  |
|  | fa3566a9cb |  |
|  | 4e59e52454 |  |
|  | 7b9b46291d |  |
|  | 2a168b2600 |  |
````diff
@@ -255,13 +255,22 @@ This document lists all configuration keys accepted by `config.toml`.
 ```

 ## proxy_secret_path

 - **Constraints / validation**: `String`. When omitted, the default path is `"proxy-secret"`. Empty values are accepted by TOML/serde but will likely fail at runtime (invalid file path).

-- **Description**: Path to Telegram infrastructure `proxy-secret` cache file used by ME handshake/RPC auth. Telemt always tries a fresh download from `https://core.telegram.org/getProxySecret` first, caches it to this path on success, and falls back to reading the cached file (any age) on download failure.
+- **Description**: Path to Telegram infrastructure `proxy-secret` cache file used by ME handshake/RPC auth. Telemt always tries a fresh download from `https://core.telegram.org/getProxySecret` first (unless `proxy_secret_url` is set), caches it to this path on success, and falls back to reading the cached file (any age) on download failure.

 - **Example**:

 ```toml
 [general]
 proxy_secret_path = "proxy-secret"
 ```
+
+## proxy_secret_url
+
+- **Constraints / validation**: `String`. When omitted, `"https://core.telegram.org/getProxySecret"` is used.
+
+- **Description**: Optional URL from which to obtain the `proxy-secret` file used by ME handshake/RPC auth. Telemt always tries a fresh download from this URL first (falling back to `https://core.telegram.org/getProxySecret` when the key is absent).
+
+- **Example**:
+
+```toml
+[general]
+proxy_secret_url = "https://core.telegram.org/getProxySecret"
+```

 ## proxy_config_v4_cache_path

 - **Constraints / validation**: `String`. When set, must not be empty/whitespace-only.

 - **Description**: Optional disk cache path for raw `getProxyConfig` (IPv4) snapshot. At startup Telemt tries to fetch a fresh snapshot first; on fetch failure or empty snapshot it falls back to this cache file when present and non-empty.
````
````diff
@@ -271,6 +280,15 @@ This document lists all configuration keys accepted by `config.toml`.
 [general]
 proxy_config_v4_cache_path = "cache/proxy-config-v4.txt"
 ```
+
+## proxy_config_v4_url
+
+- **Constraints / validation**: `String`. When omitted, `"https://core.telegram.org/getProxyConfig"` is used.
+
+- **Description**: Optional URL from which to obtain the raw `getProxyConfig` (IPv4) snapshot. Telemt always tries a fresh download from this URL first (falling back to `https://core.telegram.org/getProxyConfig` when the key is absent).
+
+- **Example**:
+
+```toml
+[general]
+proxy_config_v4_url = "https://core.telegram.org/getProxyConfig"
+```

 ## proxy_config_v6_cache_path

 - **Constraints / validation**: `String`. When set, must not be empty/whitespace-only.

 - **Description**: Optional disk cache path for raw `getProxyConfigV6` (IPv6) snapshot. At startup Telemt tries to fetch a fresh snapshot first; on fetch failure or empty snapshot it falls back to this cache file when present and non-empty.
````
````diff
@@ -280,6 +298,15 @@ This document lists all configuration keys accepted by `config.toml`.
 [general]
 proxy_config_v6_cache_path = "cache/proxy-config-v6.txt"
 ```
+
+## proxy_config_v6_url
+
+- **Constraints / validation**: `String`. When omitted, `"https://core.telegram.org/getProxyConfigV6"` is used.
+
+- **Description**: Optional URL from which to obtain the raw `getProxyConfigV6` (IPv6) snapshot. Telemt always tries a fresh download from this URL first (falling back to `https://core.telegram.org/getProxyConfigV6` when the key is absent).
+
+- **Example**:
+
+```toml
+[general]
+proxy_config_v6_url = "https://core.telegram.org/getProxyConfigV6"
+```

 ## ad_tag

 - **Constraints / validation**: `String` (optional). When set, must be exactly 32 hex characters; invalid values are disabled during config load.

 - **Description**: Global fallback sponsored-channel `ad_tag` (used when the user has no override in `access.user_ad_tags`). An all-zero tag is accepted but has no effect (and is warned about) until replaced with a real tag from `@MTProxybot`.
````
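The three new `proxy_*_url` keys documented in the hunks above can be combined in a single `[general]` block. A minimal sketch, with the mirror host being purely illustrative (any endpoint serving the same raw payloads as core.telegram.org would do):

```toml
[general]
# Hypothetical mirror for the Telegram infrastructure files.
proxy_secret_url = "https://mirror.example/getProxySecret"
proxy_config_v4_url = "https://mirror.example/getProxyConfig"
proxy_config_v6_url = "https://mirror.example/getProxyConfigV6"

# Cache locations keep their documented roles regardless of the source URL.
proxy_secret_path = "proxy-secret"
proxy_config_v4_cache_path = "cache/proxy-config-v4.txt"
proxy_config_v6_cache_path = "cache/proxy-config-v6.txt"
```

Omitting any of the `*_url` keys leaves the corresponding default core.telegram.org endpoint in effect.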
````diff
@@ -255,13 +255,22 @@
 ```

 ## proxy_secret_path

 - **Constraints / validation**: `String`. When omitted, the default path is `"proxy-secret"`. Empty values are accepted by TOML/serde but will fail at runtime (invalid file path).

-- **Description**: Path to the Telegram infrastructure `proxy-secret` cache file used by ME handshake/RPC auth. Telemt always tries a fresh download from https://core.telegram.org/getProxySecret first, caches it to this path on success, and falls back to reading the cached file on download failure.
+- **Description**: Path to the Telegram infrastructure `proxy-secret` cache file used by ME handshake/RPC auth. Telemt always tries a fresh download from https://core.telegram.org/getProxySecret first (unless `proxy_secret_url` is set), caches it to this path on success, and falls back to reading the cached file on download failure.

 - **Example**:

 ```toml
 [general]
 proxy_secret_path = "proxy-secret"
 ```
+
+## proxy_secret_url
+
+- **Constraints / validation**: `String`. When omitted, `"https://core.telegram.org/getProxySecret"` is used.
+
+- **Description**: Optional URL from which to obtain the `proxy-secret` file used by ME handshake/RPC auth. Telemt always tries a fresh download from this URL first (when not set, https://core.telegram.org/getProxySecret is used).
+
+- **Example**:
+
+```toml
+[general]
+proxy_secret_url = "https://core.telegram.org/getProxySecret"
+```

 ## proxy_config_v4_cache_path

 - **Constraints / validation**: `String`. When set, the value must not be empty or whitespace-only.

 - **Description**: Optional disk cache path for the raw getProxyConfig (IPv4) snapshot. At startup Telemt first tries to fetch a fresh snapshot; on fetch failure or an empty snapshot it falls back to this cache file when it is present and non-empty.
````

````diff
@@ -271,6 +280,15 @@
 [general]
 proxy_config_v4_cache_path = "cache/proxy-config-v4.txt"
 ```
+
+## proxy_config_v4_url
+
+- **Constraints / validation**: `String`. When omitted, `"https://core.telegram.org/getProxyConfig"` is used.
+
+- **Description**: Optional URL from which to obtain the raw `getProxyConfig` (IPv4) snapshot. Telemt always tries a fresh download from this URL first (when not set, `https://core.telegram.org/getProxyConfig` is used).
+
+- **Example**:
+
+```toml
+[general]
+proxy_config_v4_url = "https://core.telegram.org/getProxyConfig"
+```

 ## proxy_config_v6_cache_path

 - **Constraints / validation**: `String`. When set, the value must not be empty or whitespace-only.

 - **Description**: Optional disk cache path for the raw getProxyConfigV6 (IPv6) snapshot. At startup Telemt first tries to fetch a fresh snapshot; on fetch failure or an empty snapshot it falls back to this cache file when it is present and non-empty.
````

````diff
@@ -280,6 +298,15 @@
 [general]
 proxy_config_v6_cache_path = "cache/proxy-config-v6.txt"
 ```
+
+## proxy_config_v6_url
+
+- **Constraints / validation**: `String`. When omitted, `"https://core.telegram.org/getProxyConfigV6"` is used.
+
+- **Description**: Optional URL from which to obtain the raw `getProxyConfigV6` (IPv6) snapshot. Telemt always tries a fresh download from this URL first (when not set, `https://core.telegram.org/getProxyConfigV6` is used).
+
+- **Example**:
+
+```toml
+[general]
+proxy_config_v6_url = "https://core.telegram.org/getProxyConfigV6"
+```

 ## ad_tag

 - **Constraints / validation**: `String` (optional). When set, the value must be exactly 32 hex characters; invalid values are disabled during config load.

 - **Description**: Global fallback sponsored-channel `ad_tag` (used when the user has no override in `access.user_ad_tags`). An all-zero tag is accepted but has no effect until replaced with a real tag from `@MTProxybot`.
````
````diff
@@ -392,14 +392,26 @@ pub struct GeneralConfig {
     #[serde(default = "default_proxy_secret_path")]
     pub proxy_secret_path: Option<String>,

+    /// Optional custom URL for infrastructure secret (https://core.telegram.org/getProxySecret if absent).
+    #[serde(default)]
+    pub proxy_secret_url: Option<String>,
+
     /// Optional path to cache raw getProxyConfig (IPv4) snapshot for startup fallback.
     #[serde(default = "default_proxy_config_v4_cache_path")]
     pub proxy_config_v4_cache_path: Option<String>,

+    /// Optional custom URL for getProxyConfig (https://core.telegram.org/getProxyConfig if absent).
+    #[serde(default)]
+    pub proxy_config_v4_url: Option<String>,
+
     /// Optional path to cache raw getProxyConfigV6 snapshot for startup fallback.
     #[serde(default = "default_proxy_config_v6_cache_path")]
     pub proxy_config_v6_cache_path: Option<String>,

+    /// Optional custom URL for getProxyConfigV6 (https://core.telegram.org/getProxyConfigV6 if absent).
+    #[serde(default)]
+    pub proxy_config_v6_url: Option<String>,
+
     /// Global ad_tag (32 hex chars from @MTProxybot). Fallback when user has no per-user tag in access.user_ad_tags.
     #[serde(default)]
     pub ad_tag: Option<String>,
````
````diff
@@ -960,8 +972,11 @@ impl Default for GeneralConfig {
             use_middle_proxy: default_true(),
             ad_tag: None,
             proxy_secret_path: default_proxy_secret_path(),
+            proxy_secret_url: None,
             proxy_config_v4_cache_path: default_proxy_config_v4_cache_path(),
+            proxy_config_v4_url: None,
             proxy_config_v6_cache_path: default_proxy_config_v6_cache_path(),
+            proxy_config_v6_url: None,
             middle_proxy_nat_ip: None,
             middle_proxy_nat_probe: default_true(),
             middle_proxy_nat_stun: default_middle_proxy_nat_stun(),
````
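The config and call-site hunks above all follow one precedence rule: a configured URL wins, otherwise the hardcoded endpoint is used, via `Option::as_deref` plus `unwrap_or`. A standalone sketch of that rule (`resolve_url` is a hypothetical helper, not a function in the codebase):

```rust
// Configured URL takes priority; the compiled-in default is the fallback.
fn resolve_url<'a>(configured: Option<&'a str>, default_url: &'a str) -> &'a str {
    configured.unwrap_or(default_url)
}

fn main() {
    // As stored in config: Option<String>, borrowed with as_deref().
    let custom: Option<String> = Some("https://mirror.example/getProxySecret".to_string());
    assert_eq!(
        resolve_url(custom.as_deref(), "https://core.telegram.org/getProxySecret"),
        "https://mirror.example/getProxySecret"
    );
    assert_eq!(
        resolve_url(None, "https://core.telegram.org/getProxySecret"),
        "https://core.telegram.org/getProxySecret"
    );
}
```

Keeping the default inline at each call site (rather than in the struct) means an absent key never serializes a URL back out, which is why the fields stay `Option<String>` with `#[serde(default)]`.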
````diff
@@ -66,6 +66,7 @@ pub(crate) async fn initialize_me_pool(
     match crate::transport::middle_proxy::fetch_proxy_secret_with_upstream(
         proxy_secret_path,
         config.general.proxy_secret_len_max,
+        config.general.proxy_secret_url.as_deref(),
         Some(upstream_manager.clone()),
     )
     .await
````
````diff
@@ -126,7 +127,11 @@ pub(crate) async fn initialize_me_pool(
         .set_me_status(StartupMeStatus::Initializing, COMPONENT_ME_PROXY_CONFIG_V4)
         .await;
     let cfg_v4 = load_startup_proxy_config_snapshot(
-        "https://core.telegram.org/getProxyConfig",
+        config
+            .general
+            .proxy_config_v4_url
+            .as_deref()
+            .unwrap_or("https://core.telegram.org/getProxyConfig"),
         config.general.proxy_config_v4_cache_path.as_deref(),
         me2dc_fallback,
         "getProxyConfig",
````
````diff
@@ -158,7 +163,11 @@ pub(crate) async fn initialize_me_pool(
         .set_me_status(StartupMeStatus::Initializing, COMPONENT_ME_PROXY_CONFIG_V6)
         .await;
     let cfg_v6 = load_startup_proxy_config_snapshot(
-        "https://core.telegram.org/getProxyConfigV6",
+        config
+            .general
+            .proxy_config_v6_url
+            .as_deref()
+            .unwrap_or("https://core.telegram.org/getProxyConfigV6"),
         config.general.proxy_config_v6_cache_path.as_deref(),
         me2dc_fallback,
         "getProxyConfigV6",
````
````diff
@@ -321,7 +321,14 @@ async fn run_update_cycle(
     let mut maps_changed = false;

     let mut ready_v4: Option<(ProxyConfigData, u64)> = None;
-    let cfg_v4 = retry_fetch("https://core.telegram.org/getProxyConfig", upstream.clone()).await;
+    let cfg_v4 = retry_fetch(
+        cfg.general
+            .proxy_config_v4_url
+            .as_deref()
+            .unwrap_or("https://core.telegram.org/getProxyConfig"),
+        upstream.clone(),
+    )
+    .await;
     if let Some(cfg_v4) = cfg_v4
         && snapshot_passes_guards(cfg, &cfg_v4, "getProxyConfig")
     {
````
````diff
@@ -346,7 +353,10 @@ async fn run_update_cycle(

     let mut ready_v6: Option<(ProxyConfigData, u64)> = None;
     let cfg_v6 = retry_fetch(
-        "https://core.telegram.org/getProxyConfigV6",
+        cfg.general
+            .proxy_config_v6_url
+            .as_deref()
+            .unwrap_or("https://core.telegram.org/getProxyConfigV6"),
         upstream.clone(),
     )
     .await;
````
````diff
@@ -430,6 +440,7 @@ async fn run_update_cycle(
     match download_proxy_secret_with_max_len_via_upstream(
         cfg.general.proxy_secret_len_max,
         upstream,
+        cfg.general.proxy_secret_url.as_deref(),
     )
     .await
     {
````
````diff
@@ -7,7 +7,6 @@ mod model;
 mod pressure;
 mod scheduler;

-#[cfg(test)]
 pub(crate) use model::PressureState;
 pub(crate) use model::{AdmissionDecision, DispatchAction, DispatchFeedback, SchedulerDecision};
 pub(crate) use scheduler::{WorkerFairnessConfig, WorkerFairnessSnapshot, WorkerFairnessState};
````
````diff
@@ -77,11 +77,12 @@ pub(crate) struct FlowFairnessState {
     pub(crate) standing_state: StandingQueueState,
     pub(crate) scheduler_state: FlowSchedulerState,
     pub(crate) bucket_id: usize,
+    pub(crate) weight_quanta: u8,
     pub(crate) in_active_ring: bool,
 }

 impl FlowFairnessState {
-    pub(crate) fn new(flow_id: u64, worker_id: u16, bucket_id: usize) -> Self {
+    pub(crate) fn new(flow_id: u64, worker_id: u16, bucket_id: usize, weight_quanta: u8) -> Self {
         Self {
             _flow_id: flow_id,
             _worker_id: worker_id,
@@ -97,6 +98,7 @@ impl FlowFairnessState {
             standing_state: StandingQueueState::Transient,
             scheduler_state: FlowSchedulerState::Idle,
             bucket_id,
+            weight_quanta: weight_quanta.max(1),
             in_active_ring: false,
         }
     }
````
````diff
@@ -146,59 +146,55 @@ impl PressureEvaluator {
             ((signals.standing_flows.saturating_mul(100)) / signals.active_flows).min(100) as u8
         };

-        let mut pressure_score = 0u8;
+        let mut pressured = false;
+        let mut saturated = false;
+
+        let queue_saturated_pct = cfg.queue_ratio_shedding_pct.min(cfg.queue_ratio_saturated_pct);
         if queue_ratio_pct >= cfg.queue_ratio_pressured_pct {
-            pressure_score = pressure_score.max(1);
+            pressured = true;
         }
-        if queue_ratio_pct >= cfg.queue_ratio_shedding_pct {
-            pressure_score = pressure_score.max(2);
-        }
-        if queue_ratio_pct >= cfg.queue_ratio_saturated_pct {
-            pressure_score = pressure_score.max(3);
+        if queue_ratio_pct >= queue_saturated_pct {
+            saturated = true;
         }

+        let standing_saturated_pct = cfg
+            .standing_ratio_shedding_pct
+            .min(cfg.standing_ratio_saturated_pct);
         if standing_ratio_pct >= cfg.standing_ratio_pressured_pct {
-            pressure_score = pressure_score.max(1);
+            pressured = true;
         }
-        if standing_ratio_pct >= cfg.standing_ratio_shedding_pct {
-            pressure_score = pressure_score.max(2);
-        }
-        if standing_ratio_pct >= cfg.standing_ratio_saturated_pct {
-            pressure_score = pressure_score.max(3);
+        if standing_ratio_pct >= standing_saturated_pct {
+            saturated = true;
         }

+        let rejects_saturated = cfg.rejects_shedding.min(cfg.rejects_saturated);
         if self.admission_rejects_window >= cfg.rejects_pressured {
-            pressure_score = pressure_score.max(1);
+            pressured = true;
         }
-        if self.admission_rejects_window >= cfg.rejects_shedding {
-            pressure_score = pressure_score.max(2);
-        }
-        if self.admission_rejects_window >= cfg.rejects_saturated {
-            pressure_score = pressure_score.max(3);
+        if self.admission_rejects_window >= rejects_saturated {
+            saturated = true;
         }

+        let stalls_saturated = cfg.stalls_shedding.min(cfg.stalls_saturated);
         if self.route_stalls_window >= cfg.stalls_pressured {
-            pressure_score = pressure_score.max(1);
+            pressured = true;
         }
-        if self.route_stalls_window >= cfg.stalls_shedding {
-            pressure_score = pressure_score.max(2);
-        }
-        if self.route_stalls_window >= cfg.stalls_saturated {
-            pressure_score = pressure_score.max(3);
+        if self.route_stalls_window >= stalls_saturated {
+            saturated = true;
         }

         if signals.backpressured_flows > signals.active_flows.saturating_div(2)
             && signals.active_flows > 0
         {
-            pressure_score = pressure_score.max(2);
+            pressured = true;
         }

-        match pressure_score {
-            0 => PressureState::Normal,
-            1 => PressureState::Pressured,
-            2 => PressureState::Shedding,
-            _ => PressureState::Saturated,
+        if saturated {
+            PressureState::Saturated
+        } else if pressured {
+            PressureState::Pressured
+        } else {
+            PressureState::Normal
         }
     }
````
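The hunk above collapses the four-level `pressure_score` (Normal/Pressured/Shedding/Saturated) into two flags, with the former shedding threshold folded into saturation via `min`. A minimal standalone sketch of that decision shape for a single signal (threshold values are illustrative, not the crate's defaults):

```rust
#[derive(Debug, PartialEq)]
enum PressureState {
    Normal,
    Pressured,
    Saturated,
}

// One signal can mark the worker pressured and/or saturated; saturation
// fires at the lower of the old shedding and saturated thresholds.
fn evaluate(queue_pct: u8, pressured_pct: u8, shedding_pct: u8, saturated_pct: u8) -> PressureState {
    let mut pressured = false;
    let mut saturated = false;
    let saturated_cut = shedding_pct.min(saturated_pct);
    if queue_pct >= pressured_pct {
        pressured = true;
    }
    if queue_pct >= saturated_cut {
        saturated = true;
    }
    if saturated {
        PressureState::Saturated
    } else if pressured {
        PressureState::Pressured
    } else {
        PressureState::Normal
    }
}

fn main() {
    assert_eq!(evaluate(10, 50, 70, 90), PressureState::Normal);
    assert_eq!(evaluate(60, 50, 70, 90), PressureState::Pressured);
    // Crossing the old "shedding" threshold now lands directly in Saturated.
    assert_eq!(evaluate(75, 50, 70, 90), PressureState::Saturated);
}
```

The `min` guards also make the classification monotone even when a config sets `shedding` above `saturated`.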
````diff
@@ -1,7 +1,8 @@
-use std::collections::{HashMap, VecDeque};
+use std::collections::{HashMap, HashSet, VecDeque};
 use std::time::{Duration, Instant};

 use bytes::Bytes;
+use crate::protocol::constants::RPC_FLAG_QUICKACK;

 use super::model::{
     AdmissionDecision, DispatchAction, DispatchCandidate, DispatchFeedback, FlowFairnessState,
````
````diff
@@ -26,6 +27,8 @@ pub(crate) struct WorkerFairnessConfig {
     pub(crate) max_consecutive_stalls_before_close: u8,
     pub(crate) soft_bucket_count: usize,
     pub(crate) soft_bucket_share_pct: u8,
+    pub(crate) default_flow_weight: u8,
+    pub(crate) quickack_flow_weight: u8,
     pub(crate) pressure: PressureConfig,
 }
````
````diff
@@ -46,6 +49,8 @@ impl Default for WorkerFairnessConfig {
             max_consecutive_stalls_before_close: 16,
             soft_bucket_count: 64,
             soft_bucket_share_pct: 25,
+            default_flow_weight: 1,
+            quickack_flow_weight: 4,
             pressure: PressureConfig::default(),
         }
     }
 }
````
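The new `default_flow_weight`/`quickack_flow_weight` fields suggest a flag-based weight lookup (the diff later calls a `Self::weight_for_flags` helper whose body is not shown here). A sketch under those assumptions; the flag constant's value is assumed for this example, not taken from the crate:

```rust
// Assumed bit value for RPC_FLAG_QUICKACK; the real constant lives in
// crate::protocol::constants and may differ.
const RPC_FLAG_QUICKACK: u32 = 0x8000_0000;

// QUICKACK frames get a larger scheduling weight than ordinary frames.
fn weight_for_flags(default_weight: u8, quickack_weight: u8, flags: u32) -> u8 {
    if flags & RPC_FLAG_QUICKACK != 0 {
        quickack_weight
    } else {
        default_weight
    }
}

fn main() {
    // Defaults from the hunk above: 1 for ordinary flows, 4 for quickack.
    assert_eq!(weight_for_flags(1, 4, 0), 1);
    assert_eq!(weight_for_flags(1, 4, RPC_FLAG_QUICKACK), 4);
    assert_eq!(weight_for_flags(1, 4, RPC_FLAG_QUICKACK | 0x1), 4);
}
```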
````diff
@@ -57,9 +62,9 @@ struct FlowEntry {
 }

 impl FlowEntry {
-    fn new(flow_id: u64, worker_id: u16, bucket_id: usize) -> Self {
+    fn new(flow_id: u64, worker_id: u16, bucket_id: usize, weight_quanta: u8) -> Self {
         Self {
-            fairness: FlowFairnessState::new(flow_id, worker_id, bucket_id),
+            fairness: FlowFairnessState::new(flow_id, worker_id, bucket_id, weight_quanta),
             queue: VecDeque::new(),
         }
     }
````
````diff
@@ -86,6 +91,7 @@ pub(crate) struct WorkerFairnessState {
     pressure: PressureEvaluator,
     flows: HashMap<u64, FlowEntry>,
     active_ring: VecDeque<u64>,
+    active_ring_members: HashSet<u64>,
     total_queued_bytes: u64,
     bucket_queued_bytes: Vec<u64>,
     bucket_active_flows: Vec<usize>,
````
````diff
@@ -108,6 +114,7 @@ impl WorkerFairnessState {
             pressure: PressureEvaluator::new(now),
             flows: HashMap::new(),
             active_ring: VecDeque::new(),
+            active_ring_members: HashSet::new(),
             total_queued_bytes: 0,
             bucket_queued_bytes: vec![0; bucket_count],
             bucket_active_flows: vec![0; bucket_count],
````
````diff
@@ -184,6 +191,7 @@ impl WorkerFairnessState {
         }

         let bucket_id = self.bucket_for(conn_id);
+        let frame_weight = Self::weight_for_flags(&self.config, flags);
         let bucket_cap = self
             .config
             .max_total_queued_bytes
````
````diff
@@ -205,12 +213,13 @@ impl WorkerFairnessState {
             self.bucket_active_flows[bucket_id].saturating_add(1);
             self.flows.insert(
                 conn_id,
-                FlowEntry::new(conn_id, self.config.worker_id, bucket_id),
+                FlowEntry::new(conn_id, self.config.worker_id, bucket_id, frame_weight),
             );
             self.flows
                 .get_mut(&conn_id)
                 .expect("flow inserted must be retrievable")
         };
+        entry.fairness.weight_quanta = entry.fairness.weight_quanta.max(frame_weight);

         if entry.fairness.pending_bytes.saturating_add(frame_bytes)
             > self.config.max_flow_queued_bytes
````
````diff
@@ -242,11 +251,24 @@ impl WorkerFairnessState {
         self.bucket_queued_bytes[bucket_id] =
             self.bucket_queued_bytes[bucket_id].saturating_add(frame_bytes);

+        let mut enqueue_active = false;
         if !entry.fairness.in_active_ring {
             entry.fairness.in_active_ring = true;
-            self.active_ring.push_back(conn_id);
+            enqueue_active = true;
         }
+
+        let pressure_state = self.pressure.state();
+        let (before_membership, after_membership) = {
+            let before = Self::flow_membership(&entry.fairness);
+            Self::classify_flow(&self.config, pressure_state, now, &mut entry.fairness);
+            let after = Self::flow_membership(&entry.fairness);
+            (before, after)
+        };
+        if enqueue_active {
+            self.enqueue_active_conn(conn_id);
+        }
+        self.apply_flow_membership_delta(before_membership, after_membership);
+
         self.evaluate_pressure(now, true);
         AdmissionDecision::Admit
     }
````
```diff
@@ -260,32 +282,44 @@ impl WorkerFairnessState {
             let Some(conn_id) = self.active_ring.pop_front() else {
                 break;
             };
+            if !self.active_ring_members.remove(&conn_id) {
+                continue;
+            }

             let mut candidate = None;
             let mut requeue_active = false;
             let mut drained_bytes = 0u64;
             let mut bucket_id = 0usize;
+            let mut should_continue = false;
+            let mut enqueue_active = false;
+            let mut membership_delta = None;
             let pressure_state = self.pressure.state();

             if let Some(flow) = self.flows.get_mut(&conn_id) {
                 bucket_id = flow.fairness.bucket_id;
+                flow.fairness.in_active_ring = false;
+                let before_membership = Self::flow_membership(&flow.fairness);

                 if flow.queue.is_empty() {
                     flow.fairness.in_active_ring = false;
                     flow.fairness.scheduler_state = FlowSchedulerState::Idle;
                     flow.fairness.pending_bytes = 0;
+                    flow.fairness.deficit_bytes = 0;
                     flow.fairness.queue_started_at = None;
-                    continue;
-                }
+                    should_continue = true;
+                } else {

                 Self::classify_flow(&self.config, pressure_state, now, &mut flow.fairness);

-                let quantum =
-                    Self::effective_quantum_bytes(&self.config, pressure_state, &flow.fairness);
+                let quantum = Self::effective_quantum_bytes(
+                    &self.config,
+                    pressure_state,
+                    &flow.fairness,
+                );
                 flow.fairness.deficit_bytes = flow
                     .fairness
                     .deficit_bytes
                     .saturating_add(i64::from(quantum));
+                Self::clamp_deficit_bytes(&self.config, &mut flow.fairness);
                 self.deficit_grants = self.deficit_grants.saturating_add(1);

                 let front_len = flow.queue.front().map_or(0, |front| front.queued_bytes());
@@ -294,6 +328,8 @@ impl WorkerFairnessState {
                     flow.fairness.consecutive_skips.saturating_add(1);
                     self.deficit_skips = self.deficit_skips.saturating_add(1);
                     requeue_active = true;
+                    flow.fairness.in_active_ring = true;
+                    enqueue_active = true;
                 } else if let Some(frame) = flow.queue.pop_front() {
                     drained_bytes = frame.queued_bytes();
                     flow.fairness.pending_bytes =
@@ -302,6 +338,7 @@ impl WorkerFairnessState {
                         .fairness
                         .deficit_bytes
                         .saturating_sub(drained_bytes as i64);
+                    Self::clamp_deficit_bytes(&self.config, &mut flow.fairness);
                     flow.fairness.consecutive_skips = 0;
                     flow.fairness.queue_started_at =
                         flow.queue.front().map(|front| front.enqueued_at);
@@ -309,6 +346,10 @@ impl WorkerFairnessState {
                     if !requeue_active {
                         flow.fairness.scheduler_state = FlowSchedulerState::Idle;
                         flow.fairness.in_active_ring = false;
+                        flow.fairness.deficit_bytes = 0;
+                    } else {
+                        flow.fairness.in_active_ring = true;
+                        enqueue_active = true;
                     }
                     candidate = Some(DispatchCandidate {
                         pressure_state,
@@ -318,17 +359,25 @@ impl WorkerFairnessState {
                 }
             }

+                membership_delta = Some((before_membership, Self::flow_membership(&flow.fairness)));
+            }
+
+            if let Some((before_membership, after_membership)) = membership_delta {
+                self.apply_flow_membership_delta(before_membership, after_membership);
+            }
+
+            if should_continue {
+                continue;
+            }
+
             if drained_bytes > 0 {
                 self.total_queued_bytes = self.total_queued_bytes.saturating_sub(drained_bytes);
                 self.bucket_queued_bytes[bucket_id] =
                     self.bucket_queued_bytes[bucket_id].saturating_sub(drained_bytes);
             }

-            if requeue_active {
-                if let Some(flow) = self.flows.get_mut(&conn_id) {
-                    flow.fairness.in_active_ring = true;
-                }
-                self.active_ring.push_back(conn_id);
+            if requeue_active && enqueue_active {
+                self.enqueue_active_conn(conn_id);
             }

             if let Some(candidate) = candidate {
```
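The dispatch loop above pairs the `active_ring` deque with an `active_ring_members` set so a connection is queued at most once and entries for removed flows are skipped lazily when popped. A minimal standalone sketch of that pattern, using illustrative names rather than the crate's real types:

```rust
use std::collections::{HashSet, VecDeque};

/// Ring of runnable connection ids with O(1) duplicate suppression.
/// Illustrative stand-in for the `active_ring` + `active_ring_members` pair.
struct ActiveRing {
    ring: VecDeque<u64>,
    members: HashSet<u64>,
}

impl ActiveRing {
    fn new() -> Self {
        Self {
            ring: VecDeque::new(),
            members: HashSet::new(),
        }
    }

    /// Enqueue only if the connection is not already queued.
    fn enqueue(&mut self, conn_id: u64) {
        if self.members.insert(conn_id) {
            self.ring.push_back(conn_id);
        }
    }

    /// Revoke membership without scanning the ring; the stale entry is
    /// discarded the next time it reaches the front.
    fn remove(&mut self, conn_id: u64) {
        self.members.remove(&conn_id);
    }

    /// Pop the next live connection, skipping revoked entries.
    fn pop(&mut self) -> Option<u64> {
        while let Some(conn_id) = self.ring.pop_front() {
            if self.members.remove(&conn_id) {
                return Some(conn_id);
            }
        }
        None
    }
}
```

The lazy-deletion `pop` is what lets `remove_flow` stay cheap: it only clears the membership set, and the ring self-cleans during dispatch.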
```diff
@@ -348,7 +397,9 @@ impl WorkerFairnessState {
     ) -> DispatchAction {
         match feedback {
             DispatchFeedback::Routed => {
+                let mut membership_delta = None;
                 if let Some(flow) = self.flows.get_mut(&conn_id) {
+                    let before_membership = Self::flow_membership(&flow.fairness);
                     flow.fairness.last_drain_at = Some(now);
                     flow.fairness.recent_drain_bytes = flow
                         .fairness
@@ -358,6 +409,17 @@ impl WorkerFairnessState {
                     if flow.fairness.scheduler_state != FlowSchedulerState::Idle {
                         flow.fairness.scheduler_state = FlowSchedulerState::Active;
                     }
+                    Self::classify_flow(
+                        &self.config,
+                        self.pressure.state(),
+                        now,
+                        &mut flow.fairness,
+                    );
+                    membership_delta =
+                        Some((before_membership, Self::flow_membership(&flow.fairness)));
+                }
+                if let Some((before_membership, after_membership)) = membership_delta {
+                    self.apply_flow_membership_delta(before_membership, after_membership);
                 }
                 self.evaluate_pressure(now, false);
                 DispatchAction::Continue
@@ -365,17 +427,20 @@ impl WorkerFairnessState {
             DispatchFeedback::QueueFull => {
                 self.pressure.note_route_stall(now, &self.config.pressure);
                 self.downstream_stalls = self.downstream_stalls.saturating_add(1);
+                let state = self.pressure.state();
                 let Some(flow) = self.flows.get_mut(&conn_id) else {
                     self.evaluate_pressure(now, true);
                     return DispatchAction::Continue;
                 };
+                let (before_membership, after_membership, should_close_flow, enqueue_active) = {
+                    let before_membership = Self::flow_membership(&flow.fairness);
+                    let mut enqueue_active = false;

                 flow.fairness.consecutive_stalls =
                     flow.fairness.consecutive_stalls.saturating_add(1);
                 flow.fairness.scheduler_state = FlowSchedulerState::Backpressured;
                 flow.fairness.pressure_class = FlowPressureClass::Backpressured;

-                let state = self.pressure.state();
                 let should_shed_frame = matches!(state, PressureState::Saturated)
                     || (matches!(state, PressureState::Shedding)
                         && flow.fairness.standing_state == StandingQueueState::Standing
```
```diff
@@ -392,20 +457,35 @@ impl WorkerFairnessState {
                         flow.fairness.pending_bytes.saturating_add(frame_bytes);
                     flow.fairness.queue_started_at =
                         flow.queue.front().map(|front| front.enqueued_at);
-                    self.total_queued_bytes = self.total_queued_bytes.saturating_add(frame_bytes);
-                    self.bucket_queued_bytes[flow.fairness.bucket_id] = self.bucket_queued_bytes
-                        [flow.fairness.bucket_id]
+                    self.total_queued_bytes =
+                        self.total_queued_bytes.saturating_add(frame_bytes);
+                    self.bucket_queued_bytes[flow.fairness.bucket_id] = self
+                        .bucket_queued_bytes[flow.fairness.bucket_id]
                         .saturating_add(frame_bytes);
                     if !flow.fairness.in_active_ring {
                         flow.fairness.in_active_ring = true;
-                        self.active_ring.push_back(conn_id);
+                        enqueue_active = true;
                     }
                 }

-                if flow.fairness.consecutive_stalls
+                Self::classify_flow(&self.config, state, now, &mut flow.fairness);
+                let after_membership = Self::flow_membership(&flow.fairness);
+                let should_close_flow = flow.fairness.consecutive_stalls
                     >= self.config.max_consecutive_stalls_before_close
-                    && self.pressure.state() == PressureState::Saturated
-                {
+                    && self.pressure.state() == PressureState::Saturated;
+                    (
+                        before_membership,
+                        after_membership,
+                        should_close_flow,
+                        enqueue_active,
+                    )
+                };
+                if enqueue_active {
+                    self.enqueue_active_conn(conn_id);
+                }
+                self.apply_flow_membership_delta(before_membership, after_membership);
+
+                if should_close_flow {
                     self.remove_flow(conn_id);
                     self.evaluate_pressure(now, true);
                     return DispatchAction::CloseFlow;
```
```diff
@@ -426,6 +506,15 @@ impl WorkerFairnessState {
         let Some(entry) = self.flows.remove(&conn_id) else {
             return;
         };
+        self.active_ring_members.remove(&conn_id);
+        self.active_ring.retain(|queued_conn_id| *queued_conn_id != conn_id);
+        let (was_standing, was_backpressured) = Self::flow_membership(&entry.fairness);
+        if was_standing {
+            self.standing_flow_count = self.standing_flow_count.saturating_sub(1);
+        }
+        if was_backpressured {
+            self.backpressured_flow_count = self.backpressured_flow_count.saturating_sub(1);
+        }

         self.bucket_active_flows[entry.fairness.bucket_id] =
             self.bucket_active_flows[entry.fairness.bucket_id].saturating_sub(1);
```
```diff
@@ -440,27 +529,6 @@ impl WorkerFairnessState {
     }

     fn evaluate_pressure(&mut self, now: Instant, force: bool) {
-        let mut standing = 0usize;
-        let mut backpressured = 0usize;
-
-        for flow in self.flows.values_mut() {
-            Self::classify_flow(&self.config, self.pressure.state(), now, &mut flow.fairness);
-            if flow.fairness.standing_state == StandingQueueState::Standing {
-                standing = standing.saturating_add(1);
-            }
-            if matches!(
-                flow.fairness.scheduler_state,
-                FlowSchedulerState::Backpressured
-                    | FlowSchedulerState::Penalized
-                    | FlowSchedulerState::SheddingCandidate
-            ) {
-                backpressured = backpressured.saturating_add(1);
-            }
-        }
-
-        self.standing_flow_count = standing;
-        self.backpressured_flow_count = backpressured;
-
         let _ = self.pressure.maybe_evaluate(
             now,
             &self.config.pressure,
@@ -468,8 +536,8 @@ impl WorkerFairnessState {
             PressureSignals {
                 active_flows: self.flows.len(),
                 total_queued_bytes: self.total_queued_bytes,
-                standing_flows: standing,
-                backpressured_flows: backpressured,
+                standing_flows: self.standing_flow_count,
+                backpressured_flows: self.backpressured_flow_count,
             },
             force,
         );
```
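The change above removes the per-evaluation rescan of all flows and instead trusts counters that are adjusted incrementally whenever a single flow's (standing, backpressured) membership changes. A standalone sketch of that delta-counter shape, with illustrative names rather than the crate's exact types:

```rust
/// Aggregate flow counters kept in sync incrementally, mirroring the shape
/// of `apply_flow_membership_delta`: only the one flow whose classification
/// changed touches the totals, so pressure evaluation stays O(1).
#[derive(Default, Debug, PartialEq)]
struct FlowCounters {
    standing: usize,
    backpressured: usize,
}

impl FlowCounters {
    /// `before`/`after` are (is_standing, is_backpressured) memberships
    /// captured around a reclassification of a single flow.
    fn apply_delta(&mut self, before: (bool, bool), after: (bool, bool)) {
        if before.0 != after.0 {
            if after.0 {
                self.standing += 1;
            } else {
                self.standing = self.standing.saturating_sub(1);
            }
        }
        if before.1 != after.1 {
            if after.1 {
                self.backpressured += 1;
            } else {
                self.backpressured = self.backpressured.saturating_sub(1);
            }
        }
    }
}
```

The invariant is that every code path that reclassifies a flow captures the membership pair before and after and applies exactly one delta, which is why the diff threads `before_membership`/`after_membership` through admission, dispatch, and feedback.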
```diff
@@ -481,12 +549,39 @@ impl WorkerFairnessState {
         now: Instant,
         fairness: &mut FlowFairnessState,
     ) {
-        if fairness.pending_bytes == 0 {
-            fairness.pressure_class = FlowPressureClass::Healthy;
-            fairness.standing_state = StandingQueueState::Transient;
-            fairness.scheduler_state = FlowSchedulerState::Idle;
+        let (pressure_class, standing_state, scheduler_state, standing) =
+            Self::derive_flow_classification(config, pressure_state, now, fairness);
+        fairness.pressure_class = pressure_class;
+        fairness.standing_state = standing_state;
+        fairness.scheduler_state = scheduler_state;
+        if scheduler_state == FlowSchedulerState::Idle {
+            fairness.deficit_bytes = 0;
+        }
+        if standing {
+            fairness.penalty_score = fairness.penalty_score.saturating_add(1);
+        } else {
             fairness.penalty_score = fairness.penalty_score.saturating_sub(1);
-            return;
+        }
+    }
+
+    fn derive_flow_classification(
+        config: &WorkerFairnessConfig,
+        pressure_state: PressureState,
+        now: Instant,
+        fairness: &FlowFairnessState,
+    ) -> (
+        FlowPressureClass,
+        StandingQueueState,
+        FlowSchedulerState,
+        bool,
+    ) {
+        if fairness.pending_bytes == 0 {
+            return (
+                FlowPressureClass::Healthy,
+                StandingQueueState::Transient,
+                FlowSchedulerState::Idle,
+                false,
+            );
         }

         let queue_age = fairness
```
```diff
@@ -503,29 +598,165 @@ impl WorkerFairnessState {
             && (fairness.consecutive_stalls >= config.standing_stall_threshold || drain_stalled);

         if standing {
-            fairness.standing_state = StandingQueueState::Standing;
-            fairness.pressure_class = FlowPressureClass::Standing;
-            fairness.penalty_score = fairness.penalty_score.saturating_add(1);
-            fairness.scheduler_state = if pressure_state >= PressureState::Shedding {
+            let scheduler_state = if pressure_state >= PressureState::Shedding {
                 FlowSchedulerState::SheddingCandidate
             } else {
                 FlowSchedulerState::Penalized
             };
-            return;
+            return (
+                FlowPressureClass::Standing,
+                StandingQueueState::Standing,
+                scheduler_state,
+                true,
+            );
         }

-        fairness.standing_state = StandingQueueState::Transient;
         if fairness.consecutive_stalls > 0 {
-            fairness.pressure_class = FlowPressureClass::Backpressured;
-            fairness.scheduler_state = FlowSchedulerState::Backpressured;
-        } else if fairness.pending_bytes >= config.standing_queue_min_backlog_bytes {
-            fairness.pressure_class = FlowPressureClass::Bursty;
-            fairness.scheduler_state = FlowSchedulerState::Active;
-        } else {
-            fairness.pressure_class = FlowPressureClass::Healthy;
-            fairness.scheduler_state = FlowSchedulerState::Active;
+            return (
+                FlowPressureClass::Backpressured,
+                StandingQueueState::Transient,
+                FlowSchedulerState::Backpressured,
+                false,
+            );
         }
-        fairness.penalty_score = fairness.penalty_score.saturating_sub(1);
+
+        if fairness.pending_bytes >= config.standing_queue_min_backlog_bytes {
+            return (
+                FlowPressureClass::Bursty,
+                StandingQueueState::Transient,
+                FlowSchedulerState::Active,
+                false,
+            );
+        }
+
+        (
+            FlowPressureClass::Healthy,
+            StandingQueueState::Transient,
+            FlowSchedulerState::Active,
+            false,
+        )
+    }
+
+    #[inline]
+    fn flow_membership(fairness: &FlowFairnessState) -> (bool, bool) {
+        (
+            fairness.standing_state == StandingQueueState::Standing,
+            Self::scheduler_state_is_backpressured(fairness.scheduler_state),
+        )
+    }
+
+    #[inline]
+    fn scheduler_state_is_backpressured(state: FlowSchedulerState) -> bool {
+        matches!(
+            state,
+            FlowSchedulerState::Backpressured
+                | FlowSchedulerState::Penalized
+                | FlowSchedulerState::SheddingCandidate
+        )
+    }
+
+    fn apply_flow_membership_delta(
+        &mut self,
+        before_membership: (bool, bool),
+        after_membership: (bool, bool),
+    ) {
+        if before_membership.0 != after_membership.0 {
+            if after_membership.0 {
+                self.standing_flow_count = self.standing_flow_count.saturating_add(1);
+            } else {
+                self.standing_flow_count = self.standing_flow_count.saturating_sub(1);
+            }
+        }
+        if before_membership.1 != after_membership.1 {
+            if after_membership.1 {
+                self.backpressured_flow_count = self.backpressured_flow_count.saturating_add(1);
+            } else {
+                self.backpressured_flow_count = self.backpressured_flow_count.saturating_sub(1);
+            }
+        }
+    }
+
+    #[inline]
+    fn clamp_deficit_bytes(config: &WorkerFairnessConfig, fairness: &mut FlowFairnessState) {
+        let max_deficit = config.max_flow_queued_bytes.min(i64::MAX as u64) as i64;
+        fairness.deficit_bytes = fairness.deficit_bytes.clamp(0, max_deficit);
+    }
+
+    #[inline]
+    fn enqueue_active_conn(&mut self, conn_id: u64) {
+        if self.active_ring_members.insert(conn_id) {
+            self.active_ring.push_back(conn_id);
+        }
+    }
+
+    #[inline]
+    fn weight_for_flags(config: &WorkerFairnessConfig, flags: u32) -> u8 {
+        if (flags & RPC_FLAG_QUICKACK) != 0 {
+            return config.quickack_flow_weight.max(1);
+        }
+        config.default_flow_weight.max(1)
+    }
+
+    #[cfg(test)]
+    pub(crate) fn debug_recompute_flow_counters(&self, now: Instant) -> (usize, usize) {
+        let pressure_state = self.pressure.state();
+        let mut standing = 0usize;
+        let mut backpressured = 0usize;
+        for flow in self.flows.values() {
+            let (_, standing_state, scheduler_state, _) =
+                Self::derive_flow_classification(&self.config, pressure_state, now, &flow.fairness);
+            if standing_state == StandingQueueState::Standing {
+                standing = standing.saturating_add(1);
+            }
+            if Self::scheduler_state_is_backpressured(scheduler_state) {
+                backpressured = backpressured.saturating_add(1);
+            }
+        }
+        (standing, backpressured)
+    }
+
+    #[cfg(test)]
+    pub(crate) fn debug_check_active_ring_consistency(&self) -> bool {
+        if self.active_ring.len() != self.active_ring_members.len() {
+            return false;
+        }
+
+        let mut seen = HashSet::with_capacity(self.active_ring.len());
+        for conn_id in self.active_ring.iter().copied() {
+            if !seen.insert(conn_id) {
+                return false;
+            }
+            if !self.active_ring_members.contains(&conn_id) {
+                return false;
+            }
+            let Some(flow) = self.flows.get(&conn_id) else {
+                return false;
+            };
+            if !flow.fairness.in_active_ring || flow.queue.is_empty() {
+                return false;
+            }
+        }

+        for (conn_id, flow) in self.flows.iter() {
+            let in_ring = self.active_ring_members.contains(conn_id);
+            if flow.fairness.in_active_ring != in_ring {
+                return false;
+            }
+            if in_ring && flow.queue.is_empty() {
+                return false;
+            }
+        }
+
+        true
+    }
+
+    #[cfg(test)]
+    pub(crate) fn debug_max_deficit_bytes(&self) -> i64 {
+        self.flows
+            .values()
+            .map(|entry| entry.fairness.deficit_bytes)
+            .max()
+            .unwrap_or(0)
     }

     fn effective_quantum_bytes(
```
```diff
@@ -542,12 +773,14 @@ impl WorkerFairnessState {
             return config.penalized_quantum_bytes.max(1);
         }

-        match pressure_state {
+        let base_quantum = match pressure_state {
             PressureState::Normal => config.base_quantum_bytes.max(1),
             PressureState::Pressured => config.pressured_quantum_bytes.max(1),
             PressureState::Shedding => config.pressured_quantum_bytes.max(1),
             PressureState::Saturated => config.penalized_quantum_bytes.max(1),
-        }
+        };
+        let weighted_quantum = base_quantum.saturating_mul(fairness.weight_quanta.max(1) as u32);
+        weighted_quantum.max(1)
     }

     fn bucket_for(&self, conn_id: u64) -> usize {
```
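The quantum change above multiplies the pressure-derived base quantum by the flow's weight, which is how QUICKACK flows get a larger deficit grant per round. A minimal model of that arithmetic (the concrete byte values below are assumptions, not the crate's configured defaults):

```rust
/// Weighted DRR quantum: a flow earns `base_quantum * weight` bytes of
/// deficit credit per scheduling round, so a weight-2 flow drains roughly
/// twice as much as a weight-1 flow under contention. The `.max(1)` guards
/// mirror the clamping in the diff; saturating_mul avoids overflow.
fn effective_quantum(base_quantum: u32, weight: u8) -> u32 {
    base_quantum
        .saturating_mul(u32::from(weight.max(1)))
        .max(1)
}
```

Because both factors are clamped to at least 1, a zero-valued config can never produce a zero quantum, which would otherwise stall the deficit round-robin permanently.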
```diff
@@ -4,7 +4,7 @@ use std::collections::HashMap;
 use std::io::ErrorKind;
 use std::sync::Arc;
 use std::sync::atomic::{AtomicBool, AtomicU32, AtomicU64, Ordering};
-use std::time::Instant;
+use std::time::{Duration, Instant};

 use bytes::{Bytes, BytesMut};
 use tokio::io::AsyncReadExt;
@@ -21,8 +21,8 @@ use crate::stats::Stats;

 use super::codec::{RpcChecksumMode, WriterCommand, rpc_crc};
 use super::fairness::{
-    AdmissionDecision, DispatchAction, DispatchFeedback, SchedulerDecision, WorkerFairnessConfig,
-    WorkerFairnessSnapshot, WorkerFairnessState,
+    AdmissionDecision, DispatchAction, DispatchFeedback, PressureState, SchedulerDecision,
+    WorkerFairnessConfig, WorkerFairnessSnapshot, WorkerFairnessState,
 };
 use super::registry::RouteResult;
 use super::{ConnRegistry, MeResponse};
```
```diff
@@ -45,10 +45,22 @@ fn is_data_route_queue_full(result: RouteResult) -> bool {
     )
 }

-fn should_close_on_queue_full_streak(streak: u8) -> bool {
+fn should_close_on_queue_full_streak(streak: u8, pressure_state: PressureState) -> bool {
+    if pressure_state < PressureState::Shedding {
+        return false;
+    }
+
     streak >= DATA_ROUTE_QUEUE_FULL_STARVATION_THRESHOLD
 }

+fn should_schedule_fairness_retry(snapshot: &WorkerFairnessSnapshot) -> bool {
+    snapshot.total_queued_bytes > 0
+}
+
+fn fairness_retry_delay(route_wait_ms: u64) -> Duration {
+    Duration::from_millis(route_wait_ms.max(1))
+}
+
 async fn route_data_with_retry(
     reg: &ConnRegistry,
     conn_id: u64,
```
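The two helpers introduced above gate the reader's backlog retry timer: a retry tick is armed only while the fairness scheduler still holds queued bytes, and the sleep is clamped to at least one millisecond so a zero-valued wait setting cannot busy-spin. Restated as a dependency-free sketch:

```rust
use std::time::Duration;

// Retry gating as in the diff above: only re-arm the drain timer while
// the scheduler still holds queued bytes.
fn should_retry(total_queued_bytes: u64) -> bool {
    total_queued_bytes > 0
}

// Clamp the retry delay so a misconfigured 0 ms wait never yields a
// zero-length sleep in the select loop.
fn retry_delay(route_wait_ms: u64) -> Duration {
    Duration::from_millis(route_wait_ms.max(1))
}
```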
```diff
@@ -157,7 +169,7 @@ async fn drain_fairness_scheduler(
             break;
         };
         let cid = candidate.frame.conn_id;
-        let _pressure_state = candidate.pressure_state;
+        let pressure_state = candidate.pressure_state;
         let _flow_class = candidate.flow_class;
         let routed = route_data_with_retry(
             reg,
@@ -176,7 +188,7 @@ async fn drain_fairness_scheduler(
         if is_data_route_queue_full(routed) {
             let streak = data_route_queue_full_streak.entry(cid).or_insert(0);
             *streak = streak.saturating_add(1);
-            if should_close_on_queue_full_streak(*streak) {
+            if should_close_on_queue_full_streak(*streak, pressure_state) {
                 fairness.remove_flow(cid);
                 data_route_queue_full_streak.remove(&cid);
                 reg.unregister(cid).await;
```
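With the signature change above, a queue-full streak only closes a connection once the worker has reached at least the shedding pressure state. A self-contained model of that gate, using an illustrative enum in place of the crate's `PressureState` (the threshold of 3 matches the updated unit tests):

```rust
/// Pressure-gated close decision: below Shedding, queue-full streaks never
/// terminate a connection; at Shedding or Saturated, the starvation
/// threshold applies. Derived PartialOrd orders variants by declaration.
#[derive(Clone, Copy, PartialEq, PartialOrd)]
enum Pressure {
    Normal,
    Pressured,
    Shedding,
    Saturated,
}

const STARVATION_THRESHOLD: u8 = 3;

fn should_close(streak: u8, pressure: Pressure) -> bool {
    if pressure < Pressure::Shedding {
        return false;
    }
    streak >= STARVATION_THRESHOLD
}
```

The design choice is that transient downstream congestion under normal pressure should back-pressure the flow rather than kill it; only when the worker itself is shedding does a persistent streak justify a close.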
```diff
@@ -231,10 +243,33 @@ pub(crate) async fn reader_loop(
     let mut fairness_snapshot = fairness.snapshot();
     loop {
         let mut tmp = [0u8; 65_536];
+        let backlog_retry_enabled = should_schedule_fairness_retry(&fairness_snapshot);
+        let backlog_retry_delay =
+            fairness_retry_delay(reader_route_data_wait_ms.load(Ordering::Relaxed));
+        let mut retry_only = false;
         let n = tokio::select! {
             res = rd.read(&mut tmp) => res.map_err(ProxyError::Io)?,
+            _ = tokio::time::sleep(backlog_retry_delay), if backlog_retry_enabled => {
+                retry_only = true;
+                0usize
+            },
             _ = cancel.cancelled() => return Ok(()),
         };
+        if retry_only {
+            let route_wait_ms = reader_route_data_wait_ms.load(Ordering::Relaxed);
+            drain_fairness_scheduler(
+                &mut fairness,
+                reg.as_ref(),
+                &tx,
+                &mut data_route_queue_full_streak,
+                route_wait_ms,
+                stats.as_ref(),
+            )
+            .await;
+            let current_snapshot = fairness.snapshot();
+            apply_fairness_metrics_delta(stats.as_ref(), &mut fairness_snapshot, current_snapshot);
+            continue;
+        }
         if n == 0 {
             stats.increment_me_reader_eof_total();
             return Err(ProxyError::Io(std::io::Error::new(
```
```diff
@@ -317,12 +352,9 @@ pub(crate) async fn reader_loop(
                 stats.increment_me_route_drop_queue_full_high();
                 let streak = data_route_queue_full_streak.entry(cid).or_insert(0);
                 *streak = streak.saturating_add(1);
-                if should_close_on_queue_full_streak(*streak)
-                    || matches!(
-                        admission,
-                        AdmissionDecision::RejectSaturated
-                            | AdmissionDecision::RejectStandingFlow
-                    )
+                let pressure_state = fairness.pressure_state();
+                if should_close_on_queue_full_streak(*streak, pressure_state)
+                    || matches!(admission, AdmissionDecision::RejectSaturated)
                 {
                     fairness.remove_flow(cid);
                     data_route_queue_full_streak.remove(&cid);
```
@@ -445,14 +477,18 @@ pub(crate) async fn reader_loop(
|
|||||||
|
|
||||||
#[cfg(test)]
|
#[cfg(test)]
|
||||||
mod tests {
|
mod tests {
|
||||||
|
use std::time::Duration;
|
||||||
|
|
||||||
use bytes::Bytes;
|
use bytes::Bytes;
|
||||||
|
|
||||||
|
use super::PressureState;
|
||||||
use crate::transport::middle_proxy::ConnRegistry;
|
use crate::transport::middle_proxy::ConnRegistry;
|
```diff
 use super::{
-    MeResponse, RouteResult, is_data_route_queue_full, route_data_with_retry,
-    should_close_on_queue_full_streak, should_close_on_route_result_for_ack,
-    should_close_on_route_result_for_data,
+    MeResponse, RouteResult, WorkerFairnessSnapshot, fairness_retry_delay,
+    is_data_route_queue_full, route_data_with_retry, should_close_on_queue_full_streak,
+    should_close_on_route_result_for_ack, should_close_on_route_result_for_data,
+    should_schedule_fairness_retry,
 };
 
 #[test]
@@ -475,10 +511,29 @@ mod tests {
         assert!(is_data_route_queue_full(RouteResult::QueueFullBase));
         assert!(is_data_route_queue_full(RouteResult::QueueFullHigh));
         assert!(!is_data_route_queue_full(RouteResult::NoConn));
-        assert!(!should_close_on_queue_full_streak(1));
-        assert!(!should_close_on_queue_full_streak(2));
-        assert!(should_close_on_queue_full_streak(3));
-        assert!(should_close_on_queue_full_streak(u8::MAX));
+        assert!(!should_close_on_queue_full_streak(1, PressureState::Normal));
+        assert!(!should_close_on_queue_full_streak(2, PressureState::Pressured));
+        assert!(!should_close_on_queue_full_streak(3, PressureState::Pressured));
+        assert!(should_close_on_queue_full_streak(3, PressureState::Shedding));
+        assert!(should_close_on_queue_full_streak(
+            u8::MAX,
+            PressureState::Saturated
+        ));
+    }
+
+    #[test]
+    fn fairness_retry_is_scheduled_only_when_queue_has_pending_bytes() {
+        let mut snapshot = WorkerFairnessSnapshot::default();
+        assert!(!should_schedule_fairness_retry(&snapshot));
+
+        snapshot.total_queued_bytes = 1;
+        assert!(should_schedule_fairness_retry(&snapshot));
+    }
+
+    #[test]
+    fn fairness_retry_delay_never_drops_below_one_millisecond() {
+        assert_eq!(fairness_retry_delay(0), Duration::from_millis(1));
+        assert_eq!(fairness_retry_delay(2), Duration::from_millis(2));
     }
 
     #[test]
```
```diff
@@ -37,20 +37,26 @@ pub(super) fn validate_proxy_secret_len(data_len: usize, max_len: usize) -> Resu
 
 /// Fetch Telegram proxy-secret binary.
 #[allow(dead_code)]
-pub async fn fetch_proxy_secret(cache_path: Option<&str>, max_len: usize) -> Result<Vec<u8>> {
-    fetch_proxy_secret_with_upstream(cache_path, max_len, None).await
+pub async fn fetch_proxy_secret(
+    cache_path: Option<&str>,
+    max_len: usize,
+    proxy_secret_url: Option<&str>,
+) -> Result<Vec<u8>> {
+    fetch_proxy_secret_with_upstream(cache_path, max_len, proxy_secret_url, None).await
 }
 
 /// Fetch Telegram proxy-secret binary, optionally through upstream routing.
 pub async fn fetch_proxy_secret_with_upstream(
     cache_path: Option<&str>,
     max_len: usize,
+    proxy_secret_url: Option<&str>,
     upstream: Option<Arc<UpstreamManager>>,
 ) -> Result<Vec<u8>> {
     let cache = cache_path.unwrap_or("proxy-secret");
 
     // 1) Try fresh download first.
-    match download_proxy_secret_with_max_len_via_upstream(max_len, upstream).await {
+    match download_proxy_secret_with_max_len_via_upstream(max_len, upstream, proxy_secret_url).await
+    {
         Ok(data) => {
             if let Err(e) = tokio::fs::write(cache, &data).await {
                 warn!(error = %e, "Failed to cache proxy-secret (non-fatal)");
```
```diff
@@ -91,14 +97,19 @@ pub async fn fetch_proxy_secret_with_upstream(
 
 #[allow(dead_code)]
 pub async fn download_proxy_secret_with_max_len(max_len: usize) -> Result<Vec<u8>> {
-    download_proxy_secret_with_max_len_via_upstream(max_len, None).await
+    download_proxy_secret_with_max_len_via_upstream(max_len, None, None).await
 }
 
 pub async fn download_proxy_secret_with_max_len_via_upstream(
     max_len: usize,
     upstream: Option<Arc<UpstreamManager>>,
+    proxy_secret_url: Option<&str>,
 ) -> Result<Vec<u8>> {
-    let resp = https_get("https://core.telegram.org/getProxySecret", upstream).await?;
+    let resp = https_get(
+        proxy_secret_url.unwrap_or("https://core.telegram.org/getProxySecret"),
+        upstream,
+    )
+    .await?;
 
     if !(200..=299).contains(&resp.status) {
         return Err(ProxyError::Proxy(format!(
```
```diff
@@ -2,6 +2,7 @@ use std::time::{Duration, Instant};
 
 use bytes::Bytes;
 
+use crate::protocol::constants::RPC_FLAG_QUICKACK;
 use crate::transport::middle_proxy::fairness::{
     AdmissionDecision, DispatchAction, DispatchFeedback, PressureState, SchedulerDecision,
     WorkerFairnessConfig, WorkerFairnessState,
```
```diff
@@ -114,6 +115,62 @@ fn fairness_keeps_fast_flow_progress_under_slow_neighbor() {
     assert!(snapshot.total_queued_bytes <= 64 * 1024);
 }
 
+#[test]
+fn fairness_prioritizes_quickack_flow_when_weights_enabled() {
+    let mut now = Instant::now();
+    let mut fairness = WorkerFairnessState::new(
+        WorkerFairnessConfig {
+            max_total_queued_bytes: 256 * 1024,
+            max_flow_queued_bytes: 64 * 1024,
+            base_quantum_bytes: 8 * 1024,
+            pressured_quantum_bytes: 8 * 1024,
+            penalized_quantum_bytes: 8 * 1024,
+            default_flow_weight: 1,
+            quickack_flow_weight: 4,
+            ..WorkerFairnessConfig::default()
+        },
+        now,
+    );
+
+    for _ in 0..8 {
+        assert_eq!(
+            fairness.enqueue_data(10, RPC_FLAG_QUICKACK, enqueue_payload(16 * 1024), now),
+            AdmissionDecision::Admit
+        );
+        assert_eq!(
+            fairness.enqueue_data(20, 0, enqueue_payload(16 * 1024), now),
+            AdmissionDecision::Admit
+        );
+    }
+
+    let mut quickack_dispatched = 0u64;
+    let mut bulk_dispatched = 0u64;
+    for _ in 0..64 {
+        now += Duration::from_millis(1);
+        let SchedulerDecision::Dispatch(candidate) = fairness.next_decision(now) else {
+            break;
+        };
+
+        if candidate.frame.conn_id == 10 {
+            quickack_dispatched = quickack_dispatched.saturating_add(1);
+        } else if candidate.frame.conn_id == 20 {
+            bulk_dispatched = bulk_dispatched.saturating_add(1);
+        }
+
+        let _ = fairness.apply_dispatch_feedback(
+            candidate.frame.conn_id,
+            candidate,
+            DispatchFeedback::Routed,
+            now,
+        );
+    }
+
+    assert!(
+        quickack_dispatched > bulk_dispatched,
+        "quickack flow must receive higher dispatch rate with larger weight"
+    );
+}
+
 #[test]
 fn fairness_pressure_hysteresis_prevents_instant_flapping() {
     let mut now = Instant::now();
```
```diff
@@ -180,6 +237,12 @@ fn fairness_randomized_sequence_preserves_memory_bounds() {
     }
 
     let snapshot = fairness.snapshot();
+    let (standing_recomputed, backpressured_recomputed) =
+        fairness.debug_recompute_flow_counters(now);
     assert!(snapshot.total_queued_bytes <= 32 * 1024);
+    assert_eq!(snapshot.standing_flows, standing_recomputed);
+    assert_eq!(snapshot.backpressured_flows, backpressured_recomputed);
+    assert!(fairness.debug_check_active_ring_consistency());
+    assert!(fairness.debug_max_deficit_bytes() <= 4 * 1024);
 }
```