mirror of https://github.com/telemt/telemt.git

New Hot-Reload method + TLS-F New Methods + TLS-F/TCP-S Docs: merge pull request #420 from telemt/flow

commit dda31b3d2f

@@ -0,0 +1,274 @@
# TLS-F and TCP-S in Telemt

## Overall architecture

**Telemt** is first and foremost an implementation of **MTProxy**, through which the Telegram payload flows.

The **TLS-Fronting / TCP-Splitting** subsystem serves as a **camouflage transport layer**: its job is to make an MTProxy connection look, from the outside, like an ordinary TLS connection to a legitimate website.

In other words:

- **MTProxy** is the primary functional layer of Telemt, handling Telegram traffic
- **TLS-Fronting / TCP-Splitting** is the transport-camouflage subsystem

From the network's point of view Telemt behaves like a **TLS server**, but in fact:

- valid MTProxy clients stay inside the Telemt pipeline
- any other TLS client is proxied to an ordinary stub HTTPS server
# Basic scenario / best practice

Suppose you own a domain:

```
umweltschutz.de
```

### 1 DNS

Create an A record:

```
umweltschutz.de -> A record 198.18.88.88
```

where `198.18.88.88` is the IP address of your telemt server.
### 2 TLS domain

In the Telemt configuration:

```
tls_domain = umweltschutz.de
```

The client uses this domain as the SNI in its ClientHello.
### 3 Stub server

You run an ordinary HTTPS server, for example **nginx**, with a certificate for that domain.

It can run:

- on the same machine
- on a different machine
- on a different port

In the Telemt configuration:

```
mask_host = 127.0.0.1
mask_port = 8443
```

where `127.0.0.1` is the IP of the stub server and `8443` is the port it listens on.

This server exists **to handle any non-MTProxy requests**.
### 4 How Telemt operates

After startup Telemt proceeds as follows:

1) accepts an incoming TCP connection
2) inspects the TLS ClientHello
3) tries to determine whether the connection is a valid **MTProxy FakeTLS** session

From there, one of two logic paths runs:

---
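The accept/inspect/decide steps above can be sketched roughly as follows. This is an illustrative sketch, not the real telemt API: `looks_like_client_hello`, `route`, and `has_valid_key` are invented names, and the byte checks cover only the TLS record-header layout, not full parsing.

```rust
/// Rough check that a buffered prefix looks like a TLS ClientHello:
/// record type 0x16 (handshake), legacy version major byte 0x03, and
/// a first handshake message of type 0x01 (ClientHello).
fn looks_like_client_hello(buf: &[u8]) -> bool {
    buf.len() >= 6 && buf[0] == 0x16 && buf[1] == 0x03 && buf[5] == 0x01
}

#[derive(Debug, PartialEq)]
enum Route {
    MtProxy, // valid FakeTLS key: connection stays inside Telemt
    Relay,   // everything else: L4 relay to mask_host:mask_port
}

/// `has_valid_key` stands in for the real FakeTLS key check, which
/// verifies a secret-derived digest hidden in the ClientHello.
fn route(buf: &[u8], has_valid_key: bool) -> Route {
    if looks_like_client_hello(buf) && has_valid_key {
        Route::MtProxy
    } else {
        Route::Relay
    }
}

fn main() {
    // A minimal fake ClientHello prefix: 16 03 01 <len:2> 01 ...
    let hello = [0x16, 0x03, 0x01, 0x00, 0x10, 0x01];
    println!("{:?}", route(&hello, true));             // MtProxy
    println!("{:?}", route(&hello, false));            // Relay
    println!("{:?}", route(b"GET / HTTP/1.1", false)); // Relay
}
```

Note that an HTTP probe fails the very first byte check, so it is relayed to the stub without ever touching MTProxy code.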
# Scenario 1 - MTProxy client with a valid key

If the client presents a **valid MTProxy key**:

- the connection **stays inside Telemt**
- TLS is used only as **transport camouflage**
- the normal **MTProxy** logic then takes over

To an outside observer this looks like:

```
TLS connection -> umweltschutz.de
```

even though **Telegram MTProto traffic** is what actually flows inside.
# Scenario 2 - ordinary TLS client - crawler / scanner / browser

If Telemt does not detect a valid MTProxy key, the connection **switches into TCP-Splitting / TCP-Splicing mode**.

In this mode Telemt:

1. opens a new TCP connection to

```
mask_host:mask_port
```

2. starts **proxying the TCP traffic**

Important:

* the client's TLS request is **NOT modified**
* **the ClientHello is forwarded "as is", unchanged**
* **the SNI stays intact**
* Telemt **does not complete the TLS handshake**; it merely forwards it at a lower layer of the network stack - L4

The upstream server therefore receives the **client's original TLS connection**:

- if it is the nginx stub, it simply serves the regular site
- to an outside observer it looks like an ordinary HTTPS server
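A minimal sketch of such an L4 relay, using blocking std networking for brevity (telemt itself is async; real code also has to handle half-close, shutdown, and timeouts). The `pump` and `relay` names are illustrative, not telemt's:

```rust
use std::io::{self, Read, Write};
use std::net::TcpStream;
use std::thread;

/// Pump one direction until EOF; returns bytes copied. No TLS state is
/// touched - this is the essence of TCP splicing: raw bytes move through.
fn pump<R: Read, W: Write>(mut from: R, mut to: W) -> io::Result<u64> {
    io::copy(&mut from, &mut to)
}

/// Bidirectional relay between a client and the mask upstream.
fn relay(client: TcpStream, upstream: TcpStream) -> io::Result<u64> {
    let (c_read, u_write) = (client.try_clone()?, upstream.try_clone()?);
    let (u_read, c_write) = (upstream, client);

    // client -> upstream in a helper thread, upstream -> client here.
    let fwd = thread::spawn(move || pump(c_read, u_write));
    let back = pump(u_read, c_write)?;
    let _ = fwd.join().expect("relay thread panicked");
    Ok(back)
}

fn main() {
    // In-memory demo of one direction: the relay never inspects payload bytes.
    let mut out = Vec::new();
    let n = pump(&b"client hello"[..], &mut out).unwrap();
    println!("pumped {n} bytes"); // pumped 12 bytes
}
```

Because the handshake bytes are copied verbatim, the certificate the client sees is whatever `mask_host` serves.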
# TCP-S / TCP-Splitting / TCP-Splicing

Key properties of the mechanism:

**Telemt works as a TCP switch:**

1) accepts the connection
2) determines the client type
3) then either:

- handles MTProxy internally
- or proxies the TCP stream

When proxying, Telemt:

- **resolves `mask_host` to an IP**
- establishes a TCP connection
- starts a **bidirectional TCP relay**

In the process:

- the TLS handshake happens **between the client and `mask_host`**
- Telemt acts purely **at L4, as a TCP relay** - the same way HAProxy does in TCP mode
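The `mask_host` resolution step can be sketched with the standard library. This is a simplified stand-in, not telemt's actual resolver; the point is only that `mask_host` may be either a hostname or an IP literal, and both end up as dialable socket addresses:

```rust
use std::net::{SocketAddr, ToSocketAddrs};

/// Resolve mask_host:mask_port to concrete socket addresses.
/// An IP literal like "140.82.121.4" resolves without any DNS query;
/// a hostname like "github.com" goes through the system resolver.
fn resolve_mask(mask_host: &str, mask_port: u16) -> std::io::Result<Vec<SocketAddr>> {
    (mask_host, mask_port).to_socket_addrs().map(|addrs| addrs.collect())
}

fn main() {
    let addrs = resolve_mask("127.0.0.1", 8443).unwrap();
    println!("{:?}", addrs); // [127.0.0.1:8443]
}
```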
# Using someone else's domain

You can also front with an external site.

For example:

```
tls_domain = github.com
mask_host = github.com
mask_port = 443
```

or

```
mask_host = 140.82.121.4
```

In this case:

- the censor sees a **TLS connection to github.com**
- ordinary clients and crawlers really do reach the **genuine GitHub**

Telemt simply **proxies the TCP connection to GitHub**.
# What does a traffic analyzer see?

To DPI the connection looks like:

```
client -> TLS -> github.com
```

or

```
client -> TLS -> umweltschutz.de
```

The TLS handshake looks valid, the SNI matches the domain, and the certificate is genuine - it comes from the target `mask_host:mask_port`.

# What does a scanner / crawler see?

If a scanner tries to connect:

```
openssl s_client -connect 198.18.88.88:443 -servername umweltschutz.de
```

it gets an **ordinary stub HTTPS site**,

because:

- it presented no MTProxy key
- Telemt forwarded the connection to `mask_host:mask_port`, where nginx lives
# What problem does TLS-Fronting / TCP-Splitting solve?

This architecture addresses several censorship-circumvention problems at once.

## 1 Hiding the MTProxy surface from active scanning

Many censors:

- scan IP addresses
- probe for known proxy signatures

Telemt answers such probes with an **ordinary HTTPS site**, so the proxy cannot be discovered by simple scanning.

---

## 2 Disguising traffic as legitimate TLS

To DPI systems the connection looks like:

```
ordinary TLS traffic to a popular domain
```

This makes blocking considerably harder and less predictable for the censor.

---

## 3 Resistance to protocol analysis

MTProxy traffic travels **inside a TLS-like stream**, so:

- no characteristic MTProto signatures are visible
- the connection looks like plain HTTPS

---

## 4 Plausible server behavior

Even if a crawler:

- connects on its own
- completes a TLS handshake
- tries to fetch an HTTP response

it sees a **real website**, not telemt.

This removes one of the main tells used by mobile operators' anti-fraud crawlers.
# Diagram

```text
Client
  │
  │ TCP
  │
  V
Telemt
  │
  ├── valid MTProxy key
  │        │
  │        V
  │   MTProxy logic
  │
  └── ordinary TLS client
           │
           V
     TCP-Splitting
           │
           V
  mask_host:mask_port
```
@@ -21,9 +21,11 @@
//! `network.*`, `use_middle_proxy`) are **not** applied; a warning is emitted.
//! Non-hot changes are never mixed into the runtime config snapshot.

use std::collections::BTreeSet;
use std::net::IpAddr;
use std::path::PathBuf;
use std::sync::Arc;
use std::path::{Path, PathBuf};
use std::sync::{Arc, RwLock as StdRwLock};
use std::time::Duration;

use notify::{EventKind, RecursiveMode, Watcher, recommended_watcher};
use tokio::sync::{mpsc, watch};
@@ -33,7 +35,10 @@ use crate::config::{
    LogLevel, MeBindStaleMode, MeFloorMode, MeSocksKdfPolicy, MeTelemetryLevel,
    MeWriterPickMode,
};
use super::load::ProxyConfig;
use super::load::{LoadedConfig, ProxyConfig};

const HOT_RELOAD_STABLE_SNAPSHOTS: u8 = 2;
const HOT_RELOAD_DEBOUNCE: Duration = Duration::from_millis(50);

// ── Hot fields ────────────────────────────────────────────────────────────────
@@ -287,6 +292,149 @@ fn listeners_equal(
    })
}

#[derive(Debug, Clone, Default, PartialEq, Eq)]
struct WatchManifest {
    files: BTreeSet<PathBuf>,
    dirs: BTreeSet<PathBuf>,
}

impl WatchManifest {
    fn from_source_files(source_files: &[PathBuf]) -> Self {
        let mut files = BTreeSet::new();
        let mut dirs = BTreeSet::new();

        for path in source_files {
            let normalized = normalize_watch_path(path);
            files.insert(normalized.clone());
            if let Some(parent) = normalized.parent() {
                dirs.insert(parent.to_path_buf());
            }
        }

        Self { files, dirs }
    }

    fn matches_event_paths(&self, event_paths: &[PathBuf]) -> bool {
        event_paths
            .iter()
            .map(|path| normalize_watch_path(path))
            .any(|path| self.files.contains(&path))
    }
}

#[derive(Debug, Default)]
struct ReloadState {
    applied_snapshot_hash: Option<u64>,
    candidate_snapshot_hash: Option<u64>,
    candidate_hits: u8,
}

impl ReloadState {
    fn new(applied_snapshot_hash: Option<u64>) -> Self {
        Self {
            applied_snapshot_hash,
            candidate_snapshot_hash: None,
            candidate_hits: 0,
        }
    }

    fn is_applied(&self, hash: u64) -> bool {
        self.applied_snapshot_hash == Some(hash)
    }

    fn observe_candidate(&mut self, hash: u64) -> u8 {
        if self.candidate_snapshot_hash == Some(hash) {
            self.candidate_hits = self.candidate_hits.saturating_add(1);
        } else {
            self.candidate_snapshot_hash = Some(hash);
            self.candidate_hits = 1;
        }
        self.candidate_hits
    }

    fn reset_candidate(&mut self) {
        self.candidate_snapshot_hash = None;
        self.candidate_hits = 0;
    }

    fn mark_applied(&mut self, hash: u64) {
        self.applied_snapshot_hash = Some(hash);
        self.reset_candidate();
    }
}

fn normalize_watch_path(path: &Path) -> PathBuf {
    path.canonicalize().unwrap_or_else(|_| {
        if path.is_absolute() {
            path.to_path_buf()
        } else {
            std::env::current_dir()
                .map(|cwd| cwd.join(path))
                .unwrap_or_else(|_| path.to_path_buf())
        }
    })
}

fn sync_watch_paths<W: Watcher>(
    watcher: &mut W,
    current: &BTreeSet<PathBuf>,
    next: &BTreeSet<PathBuf>,
    recursive_mode: RecursiveMode,
    kind: &str,
) {
    for path in current.difference(next) {
        if let Err(e) = watcher.unwatch(path) {
            warn!(path = %path.display(), error = %e, "config watcher: failed to unwatch {kind}");
        }
    }

    for path in next.difference(current) {
        if let Err(e) = watcher.watch(path, recursive_mode) {
            warn!(path = %path.display(), error = %e, "config watcher: failed to watch {kind}");
        }
    }
}

fn apply_watch_manifest<W1: Watcher, W2: Watcher>(
    notify_watcher: Option<&mut W1>,
    poll_watcher: Option<&mut W2>,
    manifest_state: &Arc<StdRwLock<WatchManifest>>,
    next_manifest: WatchManifest,
) {
    let current_manifest = manifest_state
        .read()
        .map(|manifest| manifest.clone())
        .unwrap_or_default();

    if current_manifest == next_manifest {
        return;
    }

    if let Some(watcher) = notify_watcher {
        sync_watch_paths(
            watcher,
            &current_manifest.dirs,
            &next_manifest.dirs,
            RecursiveMode::NonRecursive,
            "config directory",
        );
    }

    if let Some(watcher) = poll_watcher {
        sync_watch_paths(
            watcher,
            &current_manifest.files,
            &next_manifest.files,
            RecursiveMode::NonRecursive,
            "config file",
        );
    }

    if let Ok(mut manifest) = manifest_state.write() {
        *manifest = next_manifest;
    }
}

fn overlay_hot_fields(old: &ProxyConfig, new: &ProxyConfig) -> ProxyConfig {
    let mut cfg = old.clone();
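The two-hit stability rule in `ReloadState` above (a snapshot hash must be observed `HOT_RELOAD_STABLE_SNAPSHOTS` = 2 times in a row before it is applied) can be exercised in isolation. The sketch below re-declares a trimmed-down state machine for illustration only; it is not the crate's type:

```rust
const STABLE_SNAPSHOTS: u8 = 2;

#[derive(Default)]
struct ReloadState {
    applied: Option<u64>,
    candidate: Option<u64>,
    hits: u8,
}

impl ReloadState {
    /// Count consecutive observations of the same snapshot hash;
    /// a different hash resets the counter.
    fn observe(&mut self, hash: u64) -> u8 {
        if self.candidate == Some(hash) {
            self.hits = self.hits.saturating_add(1);
        } else {
            self.candidate = Some(hash);
            self.hits = 1;
        }
        self.hits
    }

    /// A snapshot is applied only once it has been seen twice in a row.
    fn should_apply(&mut self, hash: u64) -> bool {
        if self.applied == Some(hash) {
            return false; // already live
        }
        if self.observe(hash) < STABLE_SNAPSHOTS {
            return false; // not stable yet
        }
        self.applied = Some(hash);
        self.candidate = None;
        self.hits = 0;
        true
    }
}

fn main() {
    let mut st = ReloadState::default();
    println!("{}", st.should_apply(0xA)); // false: first sighting
    println!("{}", st.should_apply(0xB)); // false: candidate changed, counter reset
    println!("{}", st.should_apply(0xB)); // true: second consecutive sighting
    println!("{}", st.should_apply(0xB)); // false: already applied
}
```

This is what protects hot reload against applying a config that was caught mid-write: a half-written file produces a hash that is never seen twice.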
@@ -970,18 +1118,42 @@ fn reload_config(
    log_tx: &watch::Sender<LogLevel>,
    detected_ip_v4: Option<IpAddr>,
    detected_ip_v6: Option<IpAddr>,
) {
    let new_cfg = match ProxyConfig::load(config_path) {
        Ok(c) => c,
    reload_state: &mut ReloadState,
) -> Option<WatchManifest> {
    let loaded = match ProxyConfig::load_with_metadata(config_path) {
        Ok(loaded) => loaded,
        Err(e) => {
            reload_state.reset_candidate();
            error!("config reload: failed to parse {:?}: {}", config_path, e);
            return;
            return None;
        }
    };
    let LoadedConfig {
        config: new_cfg,
        source_files,
        rendered_hash,
    } = loaded;
    let next_manifest = WatchManifest::from_source_files(&source_files);

    if let Err(e) = new_cfg.validate() {
        reload_state.reset_candidate();
        error!("config reload: validation failed: {}; keeping old config", e);
        return;
        return Some(next_manifest);
    }

    if reload_state.is_applied(rendered_hash) {
        return Some(next_manifest);
    }

    let candidate_hits = reload_state.observe_candidate(rendered_hash);
    if candidate_hits < HOT_RELOAD_STABLE_SNAPSHOTS {
        info!(
            snapshot_hash = rendered_hash,
            candidate_hits,
            required_hits = HOT_RELOAD_STABLE_SNAPSHOTS,
            "config reload: candidate snapshot observed but not stable yet"
        );
        return Some(next_manifest);
    }

    let old_cfg = config_tx.borrow().clone();
@@ -996,17 +1168,19 @@ fn reload_config(
    }

    if !hot_changed {
        return;
        reload_state.mark_applied(rendered_hash);
        return Some(next_manifest);
    }

    if old_hot.dns_overrides != applied_hot.dns_overrides
        && let Err(e) = crate::network::dns_overrides::install_entries(&applied_hot.dns_overrides)
    {
        reload_state.reset_candidate();
        error!(
            "config reload: invalid network.dns_overrides: {}; keeping old config",
            e
        );
        return;
        return Some(next_manifest);
    }

    log_changes(
@@ -1018,6 +1192,8 @@ fn reload_config(
        detected_ip_v6,
    );
    config_tx.send(Arc::new(applied_cfg)).ok();
    reload_state.mark_applied(rendered_hash);
    Some(next_manifest)
}

// ── Public API ────────────────────────────────────────────────────────────────
@@ -1040,80 +1216,86 @@ pub fn spawn_config_watcher(
    let (config_tx, config_rx) = watch::channel(initial);
    let (log_tx, log_rx) = watch::channel(initial_level);

    // Bridge: sync notify callbacks → async task via mpsc.
    let (notify_tx, mut notify_rx) = mpsc::channel::<()>(4);
    let config_path = normalize_watch_path(&config_path);
    let initial_loaded = ProxyConfig::load_with_metadata(&config_path).ok();
    let initial_manifest = initial_loaded
        .as_ref()
        .map(|loaded| WatchManifest::from_source_files(&loaded.source_files))
        .unwrap_or_else(|| WatchManifest::from_source_files(std::slice::from_ref(&config_path)));
    let initial_snapshot_hash = initial_loaded.as_ref().map(|loaded| loaded.rendered_hash);

    // Canonicalize so path matches what notify returns (absolute) in events.
    let config_path = match config_path.canonicalize() {
        Ok(p) => p,
        Err(_) => config_path.to_path_buf(),
    };

    // Watch the parent directory rather than the file itself, because many
    // editors (vim, nano) and systemd write via rename, which would cause
    // inotify to lose track of the original inode.
    let watch_dir = config_path
        .parent()
        .unwrap_or_else(|| std::path::Path::new("."))
        .to_path_buf();

    // ── inotify watcher (instant on local fs) ────────────────────────────
    let config_file = config_path.clone();
    let tx_inotify = notify_tx.clone();
    let inotify_ok = match recommended_watcher(move |res: notify::Result<notify::Event>| {
        let Ok(event) = res else { return };
        let is_our_file = event.paths.iter().any(|p| p == &config_file);
        if !is_our_file { return; }
        if matches!(event.kind, EventKind::Modify(_) | EventKind::Create(_) | EventKind::Remove(_)) {
            let _ = tx_inotify.try_send(());
        }
    }) {
        Ok(mut w) => match w.watch(&watch_dir, RecursiveMode::NonRecursive) {
            Ok(()) => {
                info!("config watcher: inotify active on {:?}", config_path);
                Box::leak(Box::new(w));
                true
            }
            Err(e) => { warn!("config watcher: inotify watch failed: {}", e); false }
        },
        Err(e) => { warn!("config watcher: inotify unavailable: {}", e); false }
    };

    // ── poll watcher (always active, fixes Docker bind mounts / NFS) ─────
    // inotify does not receive events for files mounted from the host into
    // a container. PollWatcher compares file contents every 3 s and fires
    // on any change regardless of the underlying fs.
    let config_file2 = config_path.clone();
    let tx_poll = notify_tx.clone();
    match notify::poll::PollWatcher::new(
        move |res: notify::Result<notify::Event>| {
            let Ok(event) = res else { return };
            let is_our_file = event.paths.iter().any(|p| p == &config_file2);
            if !is_our_file { return; }
            if matches!(event.kind, EventKind::Modify(_) | EventKind::Create(_) | EventKind::Remove(_)) {
                let _ = tx_poll.try_send(());
            }
        },
        notify::Config::default()
            .with_poll_interval(std::time::Duration::from_secs(3))
            .with_compare_contents(true),
    ) {
        Ok(mut w) => match w.watch(&config_path, RecursiveMode::NonRecursive) {
            Ok(()) => {
                if inotify_ok {
                    info!("config watcher: poll watcher also active (Docker/NFS safe)");
                } else {
                    info!("config watcher: poll watcher active on {:?} (3s interval)", config_path);
                }
                Box::leak(Box::new(w));
            }
            Err(e) => warn!("config watcher: poll watch failed: {}", e),
        },
        Err(e) => warn!("config watcher: poll watcher unavailable: {}", e),
    }

    // ── event loop ───────────────────────────────────────────────────────
    tokio::spawn(async move {
        let (notify_tx, mut notify_rx) = mpsc::channel::<()>(4);
        let manifest_state = Arc::new(StdRwLock::new(WatchManifest::default()));
        let mut reload_state = ReloadState::new(initial_snapshot_hash);

        let tx_inotify = notify_tx.clone();
        let manifest_for_inotify = manifest_state.clone();
        let mut inotify_watcher = match recommended_watcher(move |res: notify::Result<notify::Event>| {
            let Ok(event) = res else { return };
            if !matches!(event.kind, EventKind::Modify(_) | EventKind::Create(_) | EventKind::Remove(_)) {
                return;
            }
            let is_our_file = manifest_for_inotify
                .read()
                .map(|manifest| manifest.matches_event_paths(&event.paths))
                .unwrap_or(false);
            if is_our_file {
                let _ = tx_inotify.try_send(());
            }
        }) {
            Ok(watcher) => Some(watcher),
            Err(e) => {
                warn!("config watcher: inotify unavailable: {}", e);
                None
            }
        };
        apply_watch_manifest(
            inotify_watcher.as_mut(),
            Option::<&mut notify::poll::PollWatcher>::None,
            &manifest_state,
            initial_manifest.clone(),
        );
        if inotify_watcher.is_some() {
            info!("config watcher: inotify active on {:?}", config_path);
        }

        let tx_poll = notify_tx.clone();
        let manifest_for_poll = manifest_state.clone();
        let mut poll_watcher = match notify::poll::PollWatcher::new(
            move |res: notify::Result<notify::Event>| {
                let Ok(event) = res else { return };
                if !matches!(event.kind, EventKind::Modify(_) | EventKind::Create(_) | EventKind::Remove(_)) {
                    return;
                }
                let is_our_file = manifest_for_poll
                    .read()
                    .map(|manifest| manifest.matches_event_paths(&event.paths))
                    .unwrap_or(false);
                if is_our_file {
                    let _ = tx_poll.try_send(());
                }
            },
            notify::Config::default()
                .with_poll_interval(Duration::from_secs(3))
                .with_compare_contents(true),
        ) {
            Ok(watcher) => Some(watcher),
            Err(e) => {
                warn!("config watcher: poll watcher unavailable: {}", e);
                None
            }
        };
        apply_watch_manifest(
            Option::<&mut notify::RecommendedWatcher>::None,
            poll_watcher.as_mut(),
            &manifest_state,
            initial_manifest.clone(),
        );
        if poll_watcher.is_some() {
            info!("config watcher: poll watcher active (Docker/NFS safe)");
        }

        #[cfg(unix)]
        let mut sighup = {
            use tokio::signal::unix::{SignalKind, signal};
@@ -1133,11 +1315,25 @@ pub fn spawn_config_watcher(
            #[cfg(not(unix))]
            if notify_rx.recv().await.is_none() { break; }

            // Debounce: drain extra events that arrive within 50 ms.
            tokio::time::sleep(std::time::Duration::from_millis(50)).await;
            // Debounce: drain extra events that arrive within a short quiet window.
            tokio::time::sleep(HOT_RELOAD_DEBOUNCE).await;
            while notify_rx.try_recv().is_ok() {}

            reload_config(&config_path, &config_tx, &log_tx, detected_ip_v4, detected_ip_v6);
            if let Some(next_manifest) = reload_config(
                &config_path,
                &config_tx,
                &log_tx,
                detected_ip_v4,
                detected_ip_v6,
                &mut reload_state,
            ) {
                apply_watch_manifest(
                    inotify_watcher.as_mut(),
                    poll_watcher.as_mut(),
                    &manifest_state,
                    next_manifest,
                );
            }
        }
    });
@@ -1152,6 +1348,40 @@ mod tests {
        ProxyConfig::default()
    }

    fn write_reload_config(path: &Path, ad_tag: Option<&str>, server_port: Option<u16>) {
        let mut config = String::from(
            r#"
[censorship]
tls_domain = "example.com"

[access.users]
user = "00000000000000000000000000000000"
"#,
        );

        if ad_tag.is_some() {
            config.push_str("\n[general]\n");
            if let Some(tag) = ad_tag {
                config.push_str(&format!("ad_tag = \"{tag}\"\n"));
            }
        }

        if let Some(port) = server_port {
            config.push_str("\n[server]\n");
            config.push_str(&format!("port = {port}\n"));
        }

        std::fs::write(path, config).unwrap();
    }

    fn temp_config_path(prefix: &str) -> PathBuf {
        let nonce = std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)
            .unwrap()
            .as_nanos();
        std::env::temp_dir().join(format!("{prefix}_{nonce}.toml"))
    }

    #[test]
    fn overlay_applies_hot_and_preserves_non_hot() {
        let old = sample_config();
@@ -1219,4 +1449,61 @@ mod tests {
        assert_eq!(applied.general.use_middle_proxy, old.general.use_middle_proxy);
        assert!(!config_equal(&applied, &new));
    }

    #[test]
    fn reload_requires_stable_snapshot_before_hot_apply() {
        let initial_tag = "11111111111111111111111111111111";
        let final_tag = "22222222222222222222222222222222";
        let path = temp_config_path("telemt_hot_reload_stable");

        write_reload_config(&path, Some(initial_tag), None);
        let initial_cfg = Arc::new(ProxyConfig::load(&path).unwrap());
        let initial_hash = ProxyConfig::load_with_metadata(&path).unwrap().rendered_hash;
        let (config_tx, _config_rx) = watch::channel(initial_cfg.clone());
        let (log_tx, _log_rx) = watch::channel(initial_cfg.general.log_level.clone());
        let mut reload_state = ReloadState::new(Some(initial_hash));

        write_reload_config(&path, None, None);
        reload_config(&path, &config_tx, &log_tx, None, None, &mut reload_state).unwrap();
        assert_eq!(
            config_tx.borrow().general.ad_tag.as_deref(),
            Some(initial_tag)
        );

        write_reload_config(&path, Some(final_tag), None);
        reload_config(&path, &config_tx, &log_tx, None, None, &mut reload_state).unwrap();
        assert_eq!(
            config_tx.borrow().general.ad_tag.as_deref(),
            Some(initial_tag)
        );

        reload_config(&path, &config_tx, &log_tx, None, None, &mut reload_state).unwrap();
        assert_eq!(config_tx.borrow().general.ad_tag.as_deref(), Some(final_tag));

        let _ = std::fs::remove_file(path);
    }

    #[test]
    fn reload_keeps_hot_apply_when_non_hot_fields_change() {
        let initial_tag = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
        let final_tag = "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb";
        let path = temp_config_path("telemt_hot_reload_mixed");

        write_reload_config(&path, Some(initial_tag), None);
        let initial_cfg = Arc::new(ProxyConfig::load(&path).unwrap());
        let initial_hash = ProxyConfig::load_with_metadata(&path).unwrap().rendered_hash;
        let (config_tx, _config_rx) = watch::channel(initial_cfg.clone());
        let (log_tx, _log_rx) = watch::channel(initial_cfg.general.log_level.clone());
        let mut reload_state = ReloadState::new(Some(initial_hash));

        write_reload_config(&path, Some(final_tag), Some(initial_cfg.server.port + 1));
        reload_config(&path, &config_tx, &log_tx, None, None, &mut reload_state).unwrap();
        reload_config(&path, &config_tx, &log_tx, None, None, &mut reload_state).unwrap();

        let applied = config_tx.borrow().clone();
        assert_eq!(applied.general.ad_tag.as_deref(), Some(final_tag));
        assert_eq!(applied.server.port, initial_cfg.server.port);

        let _ = std::fs::remove_file(path);
    }
}
@@ -1,8 +1,9 @@
#![allow(deprecated)]

use std::collections::HashMap;
use std::collections::{BTreeSet, HashMap};
use std::hash::{DefaultHasher, Hash, Hasher};
use std::net::{IpAddr, SocketAddr};
use std::path::Path;
use std::path::{Path, PathBuf};

use rand::Rng;
use tracing::warn;
@@ -13,7 +14,37 @@ use crate::error::{ProxyError, Result};
use super::defaults::*;
use super::types::*;

fn preprocess_includes(content: &str, base_dir: &Path, depth: u8) -> Result<String> {
#[derive(Debug, Clone)]
pub(crate) struct LoadedConfig {
    pub(crate) config: ProxyConfig,
    pub(crate) source_files: Vec<PathBuf>,
    pub(crate) rendered_hash: u64,
}

fn normalize_config_path(path: &Path) -> PathBuf {
    path.canonicalize().unwrap_or_else(|_| {
        if path.is_absolute() {
            path.to_path_buf()
        } else {
            std::env::current_dir()
                .map(|cwd| cwd.join(path))
                .unwrap_or_else(|_| path.to_path_buf())
        }
    })
}

fn hash_rendered_snapshot(rendered: &str) -> u64 {
    let mut hasher = DefaultHasher::new();
    rendered.hash(&mut hasher);
    hasher.finish()
}

fn preprocess_includes(
    content: &str,
    base_dir: &Path,
    depth: u8,
    source_files: &mut BTreeSet<PathBuf>,
) -> Result<String> {
    if depth > 10 {
        return Err(ProxyError::Config("Include depth > 10".into()));
    }
@@ -25,10 +56,16 @@ fn preprocess_includes(content: &str, base_dir: &Path, depth: u8) -> Result<Stri
        if let Some(rest) = rest.strip_prefix('=') {
            let path_str = rest.trim().trim_matches('"');
            let resolved = base_dir.join(path_str);
            source_files.insert(normalize_config_path(&resolved));
            let included = std::fs::read_to_string(&resolved)
                .map_err(|e| ProxyError::Config(e.to_string()))?;
            let included_dir = resolved.parent().unwrap_or(base_dir);
            output.push_str(&preprocess_includes(&included, included_dir, depth + 1)?);
            output.push_str(&preprocess_includes(
                &included,
                included_dir,
                depth + 1,
                source_files,
            )?);
            output.push('\n');
            continue;
        }
@@ -138,10 +175,16 @@ pub struct ProxyConfig {

impl ProxyConfig {
    pub fn load<P: AsRef<Path>>(path: P) -> Result<Self> {
        let content =
            std::fs::read_to_string(&path).map_err(|e| ProxyError::Config(e.to_string()))?;
        let base_dir = path.as_ref().parent().unwrap_or(Path::new("."));
        let processed = preprocess_includes(&content, base_dir, 0)?;
        Self::load_with_metadata(path).map(|loaded| loaded.config)
    }

    pub(crate) fn load_with_metadata<P: AsRef<Path>>(path: P) -> Result<LoadedConfig> {
        let path = path.as_ref();
        let content = std::fs::read_to_string(path).map_err(|e| ProxyError::Config(e.to_string()))?;
        let base_dir = path.parent().unwrap_or(Path::new("."));
        let mut source_files = BTreeSet::new();
        source_files.insert(normalize_config_path(path));
        let processed = preprocess_includes(&content, base_dir, 0, &mut source_files)?;

        let parsed_toml: toml::Value =
            toml::from_str(&processed).map_err(|e| ProxyError::Config(e.to_string()))?;
@@ -786,7 +829,11 @@ impl ProxyConfig {
            .entry("203".to_string())
            .or_insert_with(|| vec!["91.105.192.100:443".to_string()]);

        Ok(config)
        Ok(LoadedConfig {
            config,
            source_files: source_files.into_iter().collect(),
            rendered_hash: hash_rendered_snapshot(&processed),
        })
    }

    pub fn validate(&self) -> Result<()> {
@@ -1111,6 +1158,48 @@ mod tests {
         );
     }

+    #[test]
+    fn load_with_metadata_collects_include_files() {
+        let nonce = std::time::SystemTime::now()
+            .duration_since(std::time::UNIX_EPOCH)
+            .unwrap()
+            .as_nanos();
+        let dir = std::env::temp_dir().join(format!("telemt_load_metadata_{nonce}"));
+        std::fs::create_dir_all(&dir).unwrap();
+        let main_path = dir.join("config.toml");
+        let include_path = dir.join("included.toml");
+
+        std::fs::write(
+            &include_path,
+            r#"
+[access.users]
+user = "00000000000000000000000000000000"
+"#,
+        )
+        .unwrap();
+        std::fs::write(
+            &main_path,
+            r#"
+include = "included.toml"
+
+[censorship]
+tls_domain = "example.com"
+"#,
+        )
+        .unwrap();
+
+        let loaded = ProxyConfig::load_with_metadata(&main_path).unwrap();
+        let main_normalized = normalize_config_path(&main_path);
+        let include_normalized = normalize_config_path(&include_path);
+
+        assert!(loaded.source_files.contains(&main_normalized));
+        assert!(loaded.source_files.contains(&include_normalized));
+
+        let _ = std::fs::remove_file(main_path);
+        let _ = std::fs::remove_file(include_path);
+        let _ = std::fs::remove_dir(dir);
+    }
+
     #[test]
     fn dc_overrides_inject_dc203_default() {
         let toml = r#"
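The test above exercises `preprocess_includes` end to end; its core idea is textual: an `include = "file"` line is replaced by that file's contents, resolved against the config's directory, while every visited path is recorded. A simplified in-memory sketch — the real preprocessor reads from disk and bounds recursion depth, and the names below are illustrative:

```rust
use std::collections::{BTreeMap, BTreeSet};
use std::path::{Path, PathBuf};

// Illustrative include expansion: `files` stands in for the filesystem so the
// example is self-contained; `seen` collects every config path that was used.
fn expand_includes(
    content: &str,
    base_dir: &Path,
    files: &BTreeMap<PathBuf, String>,
    seen: &mut BTreeSet<PathBuf>,
) -> String {
    let mut out = String::new();
    for line in content.lines() {
        if let Some(rest) = line.trim().strip_prefix("include = ") {
            let path = base_dir.join(rest.trim_matches('"'));
            // Skip already-visited files to avoid include cycles.
            if seen.insert(path.clone()) {
                if let Some(inner) = files.get(&path) {
                    out.push_str(&expand_includes(inner, base_dir, files, seen));
                }
            }
        } else {
            out.push_str(line);
            out.push('\n');
        }
    }
    out
}

fn main() {
    let base = Path::new("/etc/telemt");
    let mut files = BTreeMap::new();
    files.insert(base.join("included.toml"), "[access.users]\n".to_string());
    let mut seen = BTreeSet::new();
    seen.insert(base.join("config.toml"));
    let out =
        expand_includes("include = \"included.toml\"\n[censorship]", base, &files, &mut seen);
    assert!(out.contains("[access.users]"));
    assert_eq!(seen.len(), 2); // main config + one include
    println!("ok");
}
```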
@@ -8,7 +8,9 @@ use tokio::sync::RwLock;
 use tokio::time::sleep;
 use tracing::{debug, warn, info};

-use crate::tls_front::types::{CachedTlsData, ParsedServerHello, TlsFetchResult};
+use crate::tls_front::types::{
+    CachedTlsData, ParsedServerHello, TlsBehaviorProfile, TlsFetchResult,
+};

 /// Lightweight in-memory + optional on-disk cache for TLS fronting data.
 #[derive(Debug)]
@@ -37,6 +39,7 @@ impl TlsFrontCache {
             cert_payload: None,
             app_data_records_sizes: vec![default_len],
             total_app_data_len: default_len,
+            behavior_profile: TlsBehaviorProfile::default(),
             fetched_at: SystemTime::now(),
             domain: "default".to_string(),
         });
@@ -189,6 +192,7 @@ impl TlsFrontCache {
             cert_payload: fetched.cert_payload,
             app_data_records_sizes: fetched.app_data_records_sizes.clone(),
             total_app_data_len: fetched.total_app_data_len,
+            behavior_profile: fetched.behavior_profile,
             fetched_at: SystemTime::now(),
             domain: domain.to_string(),
         };
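The cache stamps `fetched_at: SystemTime::now()` on both the built-in default entry and every refreshed entry, which is the value a TTL-style expiry check compares against. A minimal sketch of such a check — the TTL value and helper name are assumptions, not this crate's API:

```rust
use std::time::{Duration, SystemTime};

// Hypothetical staleness check: an entry older than `ttl` (or observed under a
// clock that went backwards) is treated as stale and refetched.
fn is_stale(fetched_at: SystemTime, ttl: Duration) -> bool {
    fetched_at.elapsed().map(|age| age > ttl).unwrap_or(true)
}

fn main() {
    let fresh = SystemTime::now();
    let old = SystemTime::now() - Duration::from_secs(3600);
    assert!(!is_stale(fresh, Duration::from_secs(600)));
    assert!(is_stale(old, Duration::from_secs(600)));
    println!("ok");
}
```

Treating a failed `elapsed()` (clock moved backwards) as stale errs on the side of refetching, which is the safe default for cached TLS metadata.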
@@ -3,7 +3,7 @@ use crate::protocol::constants::{
     TLS_RECORD_APPLICATION, TLS_RECORD_CHANGE_CIPHER, TLS_RECORD_HANDSHAKE, TLS_VERSION,
 };
 use crate::protocol::tls::{TLS_DIGEST_LEN, TLS_DIGEST_POS, gen_fake_x25519_key};
-use crate::tls_front::types::{CachedTlsData, ParsedCertificateInfo};
+use crate::tls_front::types::{CachedTlsData, ParsedCertificateInfo, TlsProfileSource};

 const MIN_APP_DATA: usize = 64;
 const MAX_APP_DATA: usize = 16640; // RFC 8446 §5.2 allows up to 2^14 + 256
@@ -108,14 +108,12 @@ pub fn build_emulated_server_hello(
 ) -> Vec<u8> {
     // --- ServerHello ---
     let mut extensions = Vec::new();
     // KeyShare (x25519)
     let key = gen_fake_x25519_key(rng);
-    extensions.extend_from_slice(&0x0033u16.to_be_bytes()); // key_share
-    extensions.extend_from_slice(&(2 + 2 + 32u16).to_be_bytes()); // len
-    extensions.extend_from_slice(&0x001du16.to_be_bytes()); // X25519
+    extensions.extend_from_slice(&0x0033u16.to_be_bytes());
+    extensions.extend_from_slice(&(2 + 2 + 32u16).to_be_bytes());
+    extensions.extend_from_slice(&0x001du16.to_be_bytes());
     extensions.extend_from_slice(&(32u16).to_be_bytes());
     extensions.extend_from_slice(&key);
     // supported_versions (TLS1.3)
     extensions.extend_from_slice(&0x002bu16.to_be_bytes());
     extensions.extend_from_slice(&(2u16).to_be_bytes());
     extensions.extend_from_slice(&0x0304u16.to_be_bytes());
@@ -128,7 +126,6 @@ pub fn build_emulated_server_hello(
         extensions.push(alpn_proto.len() as u8);
         extensions.extend_from_slice(alpn_proto);
     }

     let extensions_len = extensions.len() as u16;

     let body_len = 2 + // version
@@ -173,11 +170,22 @@ pub fn build_emulated_server_hello(
     ];

     // --- ApplicationData (fake encrypted records) ---
-    // Use the same number and sizes of ApplicationData records as the cached server.
-    let mut sizes = cached.app_data_records_sizes.clone();
-    if sizes.is_empty() {
-        sizes.push(cached.total_app_data_len.max(1024));
-    }
+    let sizes = match cached.behavior_profile.source {
+        TlsProfileSource::Raw | TlsProfileSource::Merged => cached
+            .app_data_records_sizes
+            .first()
+            .copied()
+            .or_else(|| cached.behavior_profile.app_data_record_sizes.first().copied())
+            .map(|size| vec![size])
+            .unwrap_or_else(|| vec![cached.total_app_data_len.max(1024)]),
+        _ => {
+            let mut sizes = cached.app_data_records_sizes.clone();
+            if sizes.is_empty() {
+                sizes.push(cached.total_app_data_len.max(1024));
+            }
+            sizes
+        }
+    };
     let mut sizes = jitter_and_clamp_sizes(&sizes, rng);
     let compact_payload = cached
         .cert_info
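Whichever branch picks the record sizes, they then pass through `jitter_and_clamp_sizes`, with `MIN_APP_DATA`/`MAX_APP_DATA` bounding each fake record to a plausible TLS record length. The clamping half can be sketched as follows — the jitter half is omitted, and the real helper also randomizes sizes:

```rust
// Bounds taken from the constants in this file: a TLS 1.3 record carries at
// most 2^14 + 256 bytes of protected payload (RFC 8446 §5.2), and records
// smaller than 64 bytes would look implausible for certificate data.
const MIN_APP_DATA: usize = 64;
const MAX_APP_DATA: usize = 16640;

fn clamp_sizes(sizes: &[usize]) -> Vec<usize> {
    sizes
        .iter()
        .map(|s| (*s).clamp(MIN_APP_DATA, MAX_APP_DATA))
        .collect()
}

fn main() {
    assert_eq!(clamp_sizes(&[10, 1400, 20000]), vec![64, 1400, 16640]);
    println!("ok");
}
```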
@@ -269,7 +277,9 @@ pub fn build_emulated_server_hello(
 mod tests {
     use std::time::SystemTime;

-    use crate::tls_front::types::{CachedTlsData, ParsedServerHello, TlsCertPayload};
+    use crate::tls_front::types::{
+        CachedTlsData, ParsedServerHello, TlsBehaviorProfile, TlsCertPayload, TlsProfileSource,
+    };

     use super::build_emulated_server_hello;
     use crate::crypto::SecureRandom;
@@ -300,6 +310,7 @@ mod tests {
             cert_payload,
             app_data_records_sizes: vec![64],
             total_app_data_len: 64,
+            behavior_profile: TlsBehaviorProfile::default(),
             fetched_at: SystemTime::now(),
             domain: "example.com".to_string(),
         }
@@ -385,4 +396,34 @@ mod tests {
         let payload = first_app_data_payload(&response);
         assert!(payload.starts_with(b"CN=example.com"));
     }

+    #[test]
+    fn test_build_emulated_server_hello_ignores_tail_records_for_raw_profile() {
+        let mut cached = make_cached(None);
+        cached.app_data_records_sizes = vec![27, 3905, 537, 69];
+        cached.total_app_data_len = 4538;
+        cached.behavior_profile.source = TlsProfileSource::Merged;
+        cached.behavior_profile.app_data_record_sizes = vec![27, 3905, 537];
+        cached.behavior_profile.ticket_record_sizes = vec![69];
+
+        let rng = SecureRandom::new();
+        let response = build_emulated_server_hello(
+            b"secret",
+            &[0x12; 32],
+            &[0x34; 16],
+            &cached,
+            false,
+            &rng,
+            None,
+            0,
+        );
+
+        let hello_len = u16::from_be_bytes([response[3], response[4]]) as usize;
+        let ccs_start = 5 + hello_len;
+        let app_start = ccs_start + 6;
+        let app_len = u16::from_be_bytes([response[app_start + 3], response[app_start + 4]]) as usize;
+
+        assert_eq!(response[app_start], TLS_RECORD_APPLICATION);
+        assert_eq!(app_start + 5 + app_len, response.len());
+    }
 }
@@ -21,14 +21,18 @@ use x509_parser::certificate::X509Certificate;

 use crate::crypto::SecureRandom;
 use crate::network::dns_overrides::resolve_socket_addr;
-use crate::protocol::constants::{TLS_RECORD_APPLICATION, TLS_RECORD_HANDSHAKE};
+use crate::protocol::constants::{
+    TLS_RECORD_APPLICATION, TLS_RECORD_CHANGE_CIPHER, TLS_RECORD_HANDSHAKE,
+};
 use crate::transport::proxy_protocol::{ProxyProtocolV1Builder, ProxyProtocolV2Builder};
 use crate::tls_front::types::{
     ParsedCertificateInfo,
     ParsedServerHello,
+    TlsBehaviorProfile,
     TlsCertPayload,
     TlsExtension,
     TlsFetchResult,
+    TlsProfileSource,
 };

 /// No-op verifier: accept any certificate (we only need lengths and metadata).
@@ -282,6 +286,41 @@ fn parse_server_hello(body: &[u8]) -> Option<ParsedServerHello> {
     })
 }

+fn derive_behavior_profile(records: &[(u8, Vec<u8>)]) -> TlsBehaviorProfile {
+    let mut change_cipher_spec_count = 0u8;
+    let mut app_data_record_sizes = Vec::new();
+
+    for (record_type, body) in records {
+        match *record_type {
+            TLS_RECORD_CHANGE_CIPHER => {
+                change_cipher_spec_count = change_cipher_spec_count.saturating_add(1);
+            }
+            TLS_RECORD_APPLICATION => {
+                app_data_record_sizes.push(body.len());
+            }
+            _ => {}
+        }
+    }
+
+    let mut ticket_record_sizes = Vec::new();
+    while app_data_record_sizes
+        .last()
+        .is_some_and(|size| *size <= 256 && ticket_record_sizes.len() < 2)
+    {
+        if let Some(size) = app_data_record_sizes.pop() {
+            ticket_record_sizes.push(size);
+        }
+    }
+    ticket_record_sizes.reverse();
+
+    TlsBehaviorProfile {
+        change_cipher_spec_count: change_cipher_spec_count.max(1),
+        app_data_record_sizes,
+        ticket_record_sizes,
+        source: TlsProfileSource::Raw,
+    }
+}
+
 fn parse_cert_info(certs: &[CertificateDer<'static>]) -> Option<ParsedCertificateInfo> {
     let first = certs.first()?;
     let (_rem, cert) = X509Certificate::from_der(first.as_ref()).ok()?;
@@ -443,39 +482,50 @@ where
     .await??;

     let mut records = Vec::new();
-    // Read up to 4 records: ServerHello, CCS, and up to two ApplicationData.
-    for _ in 0..4 {
+    let mut app_records_seen = 0usize;
+    // Read a bounded encrypted flight: ServerHello, CCS, certificate-like data,
+    // and a small number of ticket-like tail records.
+    for _ in 0..8 {
         match timeout(connect_timeout, read_tls_record(&mut stream)).await {
-            Ok(Ok(rec)) => records.push(rec),
+            Ok(Ok(rec)) => {
+                if rec.0 == TLS_RECORD_APPLICATION {
+                    app_records_seen += 1;
+                }
+                records.push(rec);
+            }
             Ok(Err(e)) => return Err(e),
             Err(_) => break,
         }
-        if records.len() >= 3 && records.iter().any(|(t, _)| *t == TLS_RECORD_APPLICATION) {
+        if app_records_seen >= 4 {
             break;
         }
     }

-    let mut app_sizes = Vec::new();
     let mut server_hello = None;
     for (t, body) in &records {
         if *t == TLS_RECORD_HANDSHAKE && server_hello.is_none() {
             server_hello = parse_server_hello(body);
-        } else if *t == TLS_RECORD_APPLICATION {
-            app_sizes.push(body.len());
         }
     }

     let parsed = server_hello.ok_or_else(|| anyhow!("ServerHello not received"))?;
+    let behavior_profile = derive_behavior_profile(&records);
+    let mut app_sizes = behavior_profile.app_data_record_sizes.clone();
+    app_sizes.extend_from_slice(&behavior_profile.ticket_record_sizes);
     let total_app_data_len = app_sizes.iter().sum::<usize>().max(1024);
+    let app_data_records_sizes = behavior_profile
+        .app_data_record_sizes
+        .first()
+        .copied()
+        .or_else(|| behavior_profile.ticket_record_sizes.first().copied())
+        .map(|size| vec![size])
+        .unwrap_or_else(|| vec![total_app_data_len]);

     Ok(TlsFetchResult {
         server_hello_parsed: parsed,
-        app_data_records_sizes: if app_sizes.is_empty() {
-            vec![total_app_data_len]
-        } else {
-            app_sizes
-        },
+        app_data_records_sizes,
         total_app_data_len,
+        behavior_profile,
         cert_info: None,
         cert_payload: None,
     })
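The bounded read loop above leans on TLS record framing: every record is a 5-byte header — content type, legacy version, and a big-endian payload length — followed by exactly that many bytes, which is how `read_tls_record` can split the server's flight into `(type, body)` pairs. A minimal parser for one record from an in-memory buffer, as a sketch; the real helper reads from an async stream:

```rust
// Content type 0x17 = ApplicationData, 0x16 = Handshake, 0x14 = ChangeCipherSpec.
fn parse_record(buf: &[u8]) -> Option<(u8, &[u8])> {
    if buf.len() < 5 {
        return None; // incomplete header
    }
    let len = u16::from_be_bytes([buf[3], buf[4]]) as usize;
    if buf.len() < 5 + len {
        return None; // incomplete payload
    }
    Some((buf[0], &buf[5..5 + len]))
}

fn main() {
    // ApplicationData record, TLS 1.2 legacy version on the wire, 2-byte payload.
    let record = [0x17, 0x03, 0x03, 0x00, 0x02, 0xaa, 0xbb];
    let (record_type, body) = parse_record(&record).unwrap();
    assert_eq!(record_type, 0x17);
    assert_eq!(body, &[0xaa, 0xbb][..]);
    println!("ok");
}
```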
@@ -608,6 +658,12 @@ where
         server_hello_parsed: parsed,
         app_data_records_sizes: app_data_records_sizes.clone(),
         total_app_data_len: app_data_records_sizes.iter().sum(),
+        behavior_profile: TlsBehaviorProfile {
+            change_cipher_spec_count: 1,
+            app_data_record_sizes: app_data_records_sizes,
+            ticket_record_sizes: Vec::new(),
+            source: TlsProfileSource::Rustls,
+        },
         cert_info,
         cert_payload,
     })
@@ -706,6 +762,7 @@ pub async fn fetch_real_tls(
     if let Some(mut raw) = raw_result {
         raw.cert_info = rustls_result.cert_info;
         raw.cert_payload = rustls_result.cert_payload;
+        raw.behavior_profile.source = TlsProfileSource::Merged;
         debug!(sni = %sni, "Fetched TLS metadata via raw probe + rustls cert chain");
         Ok(raw)
     } else {
@@ -725,7 +782,11 @@ pub async fn fetch_real_tls(

 #[cfg(test)]
 mod tests {
-    use super::encode_tls13_certificate_message;
+    use super::{derive_behavior_profile, encode_tls13_certificate_message};
+    use crate::protocol::constants::{
+        TLS_RECORD_APPLICATION, TLS_RECORD_CHANGE_CIPHER, TLS_RECORD_HANDSHAKE,
+    };
+    use crate::tls_front::types::TlsProfileSource;

     fn read_u24(bytes: &[u8]) -> usize {
         ((bytes[0] as usize) << 16) | ((bytes[1] as usize) << 8) | (bytes[2] as usize)
@@ -753,4 +814,20 @@ mod tests {
     fn test_encode_tls13_certificate_message_empty_chain() {
         assert!(encode_tls13_certificate_message(&[]).is_none());
     }

+    #[test]
+    fn test_derive_behavior_profile_splits_ticket_like_tail_records() {
+        let profile = derive_behavior_profile(&[
+            (TLS_RECORD_HANDSHAKE, vec![0u8; 90]),
+            (TLS_RECORD_CHANGE_CIPHER, vec![0x01]),
+            (TLS_RECORD_APPLICATION, vec![0u8; 1400]),
+            (TLS_RECORD_APPLICATION, vec![0u8; 220]),
+            (TLS_RECORD_APPLICATION, vec![0u8; 180]),
+        ]);
+
+        assert_eq!(profile.change_cipher_spec_count, 1);
+        assert_eq!(profile.app_data_record_sizes, vec![1400]);
+        assert_eq!(profile.ticket_record_sizes, vec![220, 180]);
+        assert_eq!(profile.source, TlsProfileSource::Raw);
+    }
 }
@@ -39,6 +39,53 @@ pub struct TlsCertPayload {
     pub certificate_message: Vec<u8>,
 }

+/// Provenance of the cached TLS behavior profile.
+#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq, Default)]
+#[serde(rename_all = "snake_case")]
+pub enum TlsProfileSource {
+    /// Built from hardcoded defaults or legacy cache entries.
+    #[default]
+    Default,
+    /// Derived from raw TLS record capture only.
+    Raw,
+    /// Derived from rustls-only metadata fallback.
+    Rustls,
+    /// Merged from raw TLS capture and rustls certificate metadata.
+    Merged,
+}
+
+/// Coarse-grained TLS response behavior captured per SNI.
+#[derive(Debug, Clone, Serialize, Deserialize)]
+pub struct TlsBehaviorProfile {
+    /// Number of ChangeCipherSpec records observed before encrypted flight.
+    #[serde(default = "default_change_cipher_spec_count")]
+    pub change_cipher_spec_count: u8,
+    /// Sizes of the primary encrypted flight records carrying cert-like payload.
+    #[serde(default)]
+    pub app_data_record_sizes: Vec<usize>,
+    /// Sizes of small tail ApplicationData records that look like tickets.
+    #[serde(default)]
+    pub ticket_record_sizes: Vec<usize>,
+    /// Source of this behavior profile.
+    #[serde(default)]
+    pub source: TlsProfileSource,
+}
+
+fn default_change_cipher_spec_count() -> u8 {
+    1
+}
+
+impl Default for TlsBehaviorProfile {
+    fn default() -> Self {
+        Self {
+            change_cipher_spec_count: default_change_cipher_spec_count(),
+            app_data_record_sizes: Vec::new(),
+            ticket_record_sizes: Vec::new(),
+            source: TlsProfileSource::Default,
+        }
+    }
+}
+
 /// Cached data per SNI used by the emulator.
 #[derive(Debug, Clone, Serialize, Deserialize)]
 pub struct CachedTlsData {
@@ -48,6 +95,8 @@ pub struct CachedTlsData {
     pub cert_payload: Option<TlsCertPayload>,
     pub app_data_records_sizes: Vec<usize>,
     pub total_app_data_len: usize,
+    #[serde(default)]
+    pub behavior_profile: TlsBehaviorProfile,
     #[serde(default = "now_system_time", skip_serializing, skip_deserializing)]
     pub fetched_at: SystemTime,
     pub domain: String,
@@ -63,6 +112,40 @@ pub struct TlsFetchResult {
     pub server_hello_parsed: ParsedServerHello,
     pub app_data_records_sizes: Vec<usize>,
     pub total_app_data_len: usize,
+    #[serde(default)]
+    pub behavior_profile: TlsBehaviorProfile,
     pub cert_info: Option<ParsedCertificateInfo>,
     pub cert_payload: Option<TlsCertPayload>,
 }
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn cached_tls_data_deserializes_without_behavior_profile() {
+        let json = r#"
+        {
+            "server_hello_template": {
+                "version": [3, 3],
+                "random": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+                "session_id": [],
+                "cipher_suite": [19, 1],
+                "compression": 0,
+                "extensions": []
+            },
+            "cert_info": null,
+            "cert_payload": null,
+            "app_data_records_sizes": [1024],
+            "total_app_data_len": 1024,
+            "domain": "example.com"
+        }
+        "#;
+
+        let cached: CachedTlsData = serde_json::from_str(json).unwrap();
+        assert_eq!(cached.behavior_profile.change_cipher_spec_count, 1);
+        assert!(cached.behavior_profile.app_data_record_sizes.is_empty());
+        assert!(cached.behavior_profile.ticket_record_sizes.is_empty());
+        assert_eq!(cached.behavior_profile.source, TlsProfileSource::Default);
+    }
+}