Mirror of https://github.com/telemt/telemt.git
Synced 2026-04-17 10:34:11 +03:00

Compare commits (18 commits)
| SHA1 |
|---|
| 23af3cad5d |
| c1990d81c2 |
| 065cf21c66 |
| 4011812fda |
| b5d0564f2a |
| cfe8fc72a5 |
| 3e4b98b002 |
| 427d65627c |
| ae8124d6c6 |
| 06b9693cf0 |
| 869d1429ac |
| eaba926fe5 |
| 536e6417a0 |
| ecad96374a |
| 4895217828 |
| d0a8d31c3c |
| 4d83cc1f04 |
| c4c91863f0 |
@@ -1,6 +1,6 @@
 [package]
 name = "telemt"
-version = "3.0.9"
+version = "3.0.11"
 edition = "2024"
 
 [dependencies]
80 README.md

@@ -10,41 +10,77 @@
 ### 🇷🇺 RU
 
-18 февраля мы опубликовали `telemt 3.0.3`, он имеет:
+#### Драфтинг LTS и текущие улучшения
 
-- улучшенный механизм Middle-End Health Check
-- высокоскоростное восстановление инициализации Middle-End
-- меньше задержек на hot-path
-- более корректную работу в Dualstack, а именно - IPv6 Middle-End
-- аккуратное переподключение клиента без дрифта сессий между Middle-End
-- автоматическая деградация на Direct-DC при массовой (>2 ME-DC-групп) недоступности Middle-End
-- автодетект IP за NAT, при возможности - будет выполнен хендшейк с ME, при неудаче - автодеградация
-- единственный известный специальный DC=203 уже добавлен в код: медиа загружаются с CDN в Direct-DC режиме
+С 21 февраля мы начали подготовку LTS-версии.
 
-[Здесь вы можете найти релиз](https://github.com/telemt/telemt/releases/tag/3.0.3)
+Мы внимательно анализируем весь доступный фидбек.
+Наша цель — сделать LTS-кандидаты максимально стабильными, тщательно отлаженными и готовыми к long-run и highload production-сценариям.
 
-Если у вас есть компетенции в асинхронных сетевых приложениях, анализе трафика, реверс-инжиниринге или сетевых расследованиях - мы открыты к идеям и pull requests!
+---
+
+#### Улучшения от 23 февраля
+
+23 февраля были внесены улучшения производительности в режимах **DC** и **Middle-End (ME)**, с акцентом на обратный канал (путь клиент → DC / ME).
+
+Дополнительно реализован ряд изменений, направленных на повышение устойчивости системы:
+
+- Смягчение сетевой нестабильности
+- Повышение устойчивости к десинхронизации криптографии
+- Снижение дрейфа сессий при неблагоприятных условиях
+- Улучшение обработки ошибок в edge-case транспортных сценариях
+
+Релиз:
+[3.0.9](https://github.com/telemt/telemt/releases/tag/3.0.9)
+
+---
+
+Если у вас есть компетенции в:
+
+- Асинхронных сетевых приложениях
+- Анализе трафика
+- Реверс-инжиниринге
+- Сетевых расследованиях
+
+Мы открыты к архитектурным предложениям, идеям и pull requests
 </td>
 <td width="50%" valign="top">
 
 ### 🇬🇧 EN
 
-On February 18, we released `telemt 3.0.3`. This version introduces:
+#### LTS Drafting and Ongoing Improvements
 
-- improved Middle-End Health Check method
-- high-speed recovery of Middle-End init
-- reduced latency on the hot path
-- correct Dualstack support: proper handling of IPv6 Middle-End
-- *clean* client reconnection without session "drift" between Middle-End
-- automatic degradation to Direct-DC mode in case of large-scale (>2 ME-DC groups) Middle-End unavailability
-- automatic public IP detection behind NAT; first - Middle-End handshake is performed, otherwise automatic degradation is applied
-- known special DC=203 is now handled natively: media is delivered from the CDN via Direct-DC mode
+Starting February 21, we began drafting the upcoming LTS version.
 
-[Release is available here](https://github.com/telemt/telemt/releases/tag/3.0.3)
+We are carefully reviewing and analyzing all available feedback.
+The goal is to ensure that LTS candidates are as stable as possible, thoroughly debugged, and ready for long-run and high-load production scenarios.
 
-If you have expertise in asynchronous network applications, traffic analysis, reverse engineering, or network forensics - we welcome ideas and pull requests!
+---
+
+#### February 23 Improvements
+
+On February 23, we introduced performance improvements for both **DC** and **Middle-End (ME)** modes, specifically optimizing the reverse channel (client → DC / ME data path).
+
+Additionally, we implemented a set of robustness enhancements designed to:
+
+- Mitigate network-related instability
+- Improve resilience against cryptographic desynchronization
+- Reduce session drift under adverse conditions
+- Improve error handling in edge-case transport scenarios
+
+Release:
+[3.0.9](https://github.com/telemt/telemt/releases/tag/3.0.9)
+
+---
+
+If you have expertise in:
+
+- Asynchronous network applications
+- Traffic analysis
+- Reverse engineering
+- Network forensics
+
+We welcome ideas, architectural feedback, and pull requests.
 </td>
 </tr>
 </table>
@@ -229,6 +229,7 @@ tls_domain = "{domain}"
 mask = true
 mask_port = 443
 fake_cert_len = 2048
+tls_full_cert_ttl_secs = 90
 
 [access]
 replay_check_len = 65536
@@ -122,6 +122,10 @@ pub(crate) fn default_tls_new_session_tickets() -> u8 {
     0
 }
 
+pub(crate) fn default_tls_full_cert_ttl_secs() -> u64 {
+    90
+}
+
 pub(crate) fn default_server_hello_delay_min_ms() -> u64 {
     0
 }
@@ -474,6 +474,12 @@ pub struct AntiCensorshipConfig {
     #[serde(default = "default_tls_new_session_tickets")]
     pub tls_new_session_tickets: u8,
 
+    /// TTL in seconds for sending full certificate payload per client IP.
+    /// First client connection per (SNI domain, client IP) gets full cert payload.
+    /// Subsequent handshakes within TTL use compact cert metadata payload.
+    #[serde(default = "default_tls_full_cert_ttl_secs")]
+    pub tls_full_cert_ttl_secs: u64,
+
     /// Enforce ALPN echo of client preference.
     #[serde(default = "default_alpn_enforce")]
     pub alpn_enforce: bool,

@@ -494,6 +500,7 @@ impl Default for AntiCensorshipConfig {
     server_hello_delay_min_ms: default_server_hello_delay_min_ms(),
     server_hello_delay_max_ms: default_server_hello_delay_max_ms(),
     tls_new_session_tickets: default_tls_new_session_tickets(),
+    tls_full_cert_ttl_secs: default_tls_full_cert_ttl_secs(),
     alpn_enforce: default_alpn_enforce(),
     }
 }
@@ -49,19 +49,32 @@ impl SecureRandom {
         }
     }
 
-    /// Generate random bytes
-    pub fn bytes(&self, len: usize) -> Vec<u8> {
+    /// Fill a caller-provided buffer with random bytes.
+    pub fn fill(&self, out: &mut [u8]) {
         let mut inner = self.inner.lock();
         const CHUNK_SIZE: usize = 512;
 
-        while inner.buffer.len() < len {
+        let mut written = 0usize;
+        while written < out.len() {
+            if inner.buffer.is_empty() {
                 let mut chunk = vec![0u8; CHUNK_SIZE];
                 inner.rng.fill_bytes(&mut chunk);
                 inner.cipher.apply(&mut chunk);
                 inner.buffer.extend_from_slice(&chunk);
             }
 
-        inner.buffer.drain(..len).collect()
+            let take = (out.len() - written).min(inner.buffer.len());
+            out[written..written + take].copy_from_slice(&inner.buffer[..take]);
+            inner.buffer.drain(..take);
+            written += take;
+        }
+    }
+
+    /// Generate random bytes
+    pub fn bytes(&self, len: usize) -> Vec<u8> {
+        let mut out = vec![0u8; len];
+        self.fill(&mut out);
+        out
     }
 
     /// Generate random number in range [0, max)
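The new `fill` drains a pre-generated 512-byte buffer into the caller's slice instead of allocating a fresh `Vec` per request. A minimal std-only sketch of the same chunked-refill loop, with a deterministic counter standing in for the real ChaCha-based generator and cipher masking (the `SecureRandom` internals are assumptions here):

```rust
// Sketch of the chunked buffered-fill loop from the diff above.
// A wrapping counter stands in for rng.fill_bytes + cipher.apply.
struct BufferedRng {
    buffer: Vec<u8>,
    counter: u8,
}

impl BufferedRng {
    const CHUNK_SIZE: usize = 512;

    fn new() -> Self {
        Self { buffer: Vec::new(), counter: 0 }
    }

    /// Fill a caller-provided slice, refilling the internal buffer as needed.
    fn fill(&mut self, out: &mut [u8]) {
        let mut written = 0usize;
        while written < out.len() {
            if self.buffer.is_empty() {
                // Stand-in for the real RNG + cipher refill.
                for _ in 0..Self::CHUNK_SIZE {
                    self.counter = self.counter.wrapping_add(1);
                    self.buffer.push(self.counter);
                }
            }
            let take = (out.len() - written).min(self.buffer.len());
            out[written..written + take].copy_from_slice(&self.buffer[..take]);
            self.buffer.drain(..take);
            written += take;
        }
    }

    /// Allocating wrapper, mirroring `bytes()` built on top of `fill()`.
    fn bytes(&mut self, len: usize) -> Vec<u8> {
        let mut out = vec![0u8; len];
        self.fill(&mut out);
        out
    }
}

fn main() {
    let mut rng = BufferedRng::new();
    // Request more than one chunk to exercise the refill path.
    let data = rng.bytes(1300);
    assert_eq!(data.len(), 1300);
    println!("filled {} bytes", data.len());
}
```

Keeping `bytes()` as a thin wrapper over `fill()` lets hot paths (such as the padding write below in `write_client_payload`) fill a pre-sized region in place with no intermediate allocation.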
20 src/main.rs

@@ -265,7 +265,7 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
     }
 
     // Connection concurrency limit
-    let _max_connections = Arc::new(Semaphore::new(10_000));
+    let max_connections = Arc::new(Semaphore::new(10_000));
 
     if use_middle_proxy && !decision.ipv4_me && !decision.ipv6_me {
         warn!("No usable IP family for Middle Proxy detected; falling back to direct DC");

@@ -844,6 +844,7 @@ match crate::transport::middle_proxy::fetch_proxy_secret(proxy_secret_path).await {
     let me_pool = me_pool.clone();
     let tls_cache = tls_cache.clone();
     let ip_tracker = ip_tracker.clone();
+    let max_connections_unix = max_connections.clone();
 
     tokio::spawn(async move {
         let unix_conn_counter = std::sync::Arc::new(std::sync::atomic::AtomicU64::new(1));

@@ -851,6 +852,13 @@ match crate::transport::middle_proxy::fetch_proxy_secret(proxy_secret_path).await {
         loop {
             match unix_listener.accept().await {
                 Ok((stream, _)) => {
+                    let permit = match max_connections_unix.clone().acquire_owned().await {
+                        Ok(permit) => permit,
+                        Err(_) => {
+                            error!("Connection limiter is closed");
+                            break;
+                        }
+                    };
                     let conn_id = unix_conn_counter.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
                     let fake_peer = SocketAddr::from(([127, 0, 0, 1], (conn_id % 65535) as u16));

@@ -866,6 +874,7 @@ match crate::transport::middle_proxy::fetch_proxy_secret(proxy_secret_path).await {
                     let proxy_protocol_enabled = config.server.proxy_protocol;
 
                     tokio::spawn(async move {
+                        let _permit = permit;
                         if let Err(e) = crate::proxy::client::handle_client_stream(
                             stream, fake_peer, config, stats,
                             upstream_manager, replay_checker, buffer_pool, rng,

@@ -933,11 +942,19 @@ match crate::transport::middle_proxy::fetch_proxy_secret(proxy_secret_path).await {
     let me_pool = me_pool.clone();
     let tls_cache = tls_cache.clone();
     let ip_tracker = ip_tracker.clone();
+    let max_connections_tcp = max_connections.clone();
 
     tokio::spawn(async move {
         loop {
             match listener.accept().await {
                 Ok((stream, peer_addr)) => {
+                    let permit = match max_connections_tcp.clone().acquire_owned().await {
+                        Ok(permit) => permit,
+                        Err(_) => {
+                            error!("Connection limiter is closed");
+                            break;
+                        }
+                    };
                     let config = config_rx.borrow_and_update().clone();
                     let stats = stats.clone();
                     let upstream_manager = upstream_manager.clone();

@@ -950,6 +967,7 @@ match crate::transport::middle_proxy::fetch_proxy_secret(proxy_secret_path).await {
                     let proxy_protocol_enabled = listener_proxy_protocol;
 
                     tokio::spawn(async move {
+                        let _permit = permit;
                         if let Err(e) = ClientHandler::new(
                             stream,
                             peer_addr,
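The change above wires the previously unused `Semaphore` into both accept loops: each connection acquires an owned permit before spawning, moves it into the task as `_permit`, and releases it when the task ends. The same permit-guard idea can be sketched without tokio, using a std counting semaphore as a stand-alone analog (this is an illustrative substitute, not the tokio `Semaphore` API):

```rust
use std::sync::{Arc, Condvar, Mutex};

// Std-only analog of the tokio Semaphore pattern used in the accept loops:
// each accepted connection takes a permit; dropping the guard returns it.
struct ConnLimiter {
    avail: Mutex<usize>, // permits currently available
    cv: Condvar,
}

struct Permit {
    limiter: Arc<ConnLimiter>,
}

impl ConnLimiter {
    fn new(max: usize) -> Arc<Self> {
        Arc::new(Self { avail: Mutex::new(max), cv: Condvar::new() })
    }

    fn available(&self) -> usize {
        *self.avail.lock().unwrap()
    }
}

// Blocks until a permit is free, then hands back an RAII guard.
fn acquire(limiter: &Arc<ConnLimiter>) -> Permit {
    let mut avail = limiter.avail.lock().unwrap();
    while *avail == 0 {
        avail = limiter.cv.wait(avail).unwrap();
    }
    *avail -= 1;
    Permit { limiter: Arc::clone(limiter) }
}

impl Drop for Permit {
    fn drop(&mut self) {
        *self.limiter.avail.lock().unwrap() += 1;
        self.limiter.cv.notify_one();
    }
}

fn main() {
    let limiter = ConnLimiter::new(2);
    let p1 = acquire(&limiter);
    let _p2 = acquire(&limiter);
    assert_eq!(limiter.available(), 0); // pool exhausted: next accept would wait
    drop(p1); // connection handler finished; permit returns to the pool
    assert_eq!(limiter.available(), 1);
    println!("permits available: {}", limiter.available());
}
```

Binding the permit as `let _permit = permit;` inside the spawned task (rather than `_` alone, which would drop it immediately) is what ties the permit's lifetime to the connection handler's.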
@@ -2,6 +2,7 @@
 
 use std::net::SocketAddr;
 use std::sync::Arc;
+use std::time::Duration;
 use tokio::io::{AsyncRead, AsyncWrite, AsyncWriteExt};
 use tracing::{debug, warn, trace, info};
 use zeroize::Zeroize;

@@ -108,11 +109,23 @@ where
 
     let cached = if config.censorship.tls_emulation {
         if let Some(cache) = tls_cache.as_ref() {
-            if let Some(sni) = tls::extract_sni_from_client_hello(handshake) {
-                Some(cache.get(&sni).await)
+            let selected_domain = if let Some(sni) = tls::extract_sni_from_client_hello(handshake) {
+                if cache.contains_domain(&sni).await {
+                    sni
                 } else {
-                Some(cache.get(&config.censorship.tls_domain).await)
+                    config.censorship.tls_domain.clone()
                 }
+            } else {
+                config.censorship.tls_domain.clone()
+            };
+            let cached_entry = cache.get(&selected_domain).await;
+            let use_full_cert_payload = cache
+                .take_full_cert_budget_for_ip(
+                    peer.ip(),
+                    Duration::from_secs(config.censorship.tls_full_cert_ttl_secs),
+                )
+                .await;
+            Some((cached_entry, use_full_cert_payload))
         } else {
             None
         }

@@ -137,12 +150,13 @@ where
         None
     };
 
-    let response = if let Some(cached_entry) = cached {
+    let response = if let Some((cached_entry, use_full_cert_payload)) = cached {
         emulator::build_emulated_server_hello(
             secret,
             &validation.digest,
             &validation.session_id,
             &cached_entry,
+            use_full_cert_payload,
             rng,
             selected_alpn.clone(),
             config.censorship.tls_new_session_tickets,
@@ -95,6 +95,7 @@ where
     let user_clone = user.clone();
     let me_writer = tokio::spawn(async move {
         let mut writer = crypto_writer;
+        let mut frame_buf = Vec::with_capacity(16 * 1024);
         loop {
             tokio::select! {
                 msg = me_rx_task.recv() => {

@@ -102,7 +103,15 @@ where
                     Some(MeResponse::Data { flags, data }) => {
                         trace!(conn_id, bytes = data.len(), flags, "ME->C data");
                         stats_clone.add_user_octets_to(&user_clone, data.len() as u64);
-                        write_client_payload(&mut writer, proto_tag, flags, &data, rng_clone.as_ref()).await?;
+                        write_client_payload(
+                            &mut writer,
+                            proto_tag,
+                            flags,
+                            &data,
+                            rng_clone.as_ref(),
+                            &mut frame_buf,
+                        )
+                        .await?;
 
                         // Drain all immediately queued ME responses and flush once.
                         while let Ok(next) = me_rx_task.try_recv() {

@@ -116,6 +125,7 @@ where
                                 flags,
                                 &data,
                                 rng_clone.as_ref(),
+                                &mut frame_buf,
                             ).await?;
                         }
                         MeResponse::Ack(confirm) => {

@@ -363,6 +373,7 @@ async fn write_client_payload<W>(
     flags: u32,
     data: &[u8],
     rng: &SecureRandom,
+    frame_buf: &mut Vec<u8>,
 ) -> Result<()>
 where
     W: AsyncWrite + Unpin + Send + 'static,

@@ -384,7 +395,8 @@ where
     if quickack {
         first |= 0x80;
     }
-    let mut frame_buf = Vec::with_capacity(1 + data.len());
+    frame_buf.clear();
+    frame_buf.reserve(1 + data.len());
     frame_buf.push(first);
     frame_buf.extend_from_slice(data);
     client_writer

@@ -397,7 +409,8 @@ where
         first |= 0x80;
     }
     let lw = (len_words as u32).to_le_bytes();
-    let mut frame_buf = Vec::with_capacity(4 + data.len());
+    frame_buf.clear();
+    frame_buf.reserve(4 + data.len());
     frame_buf.extend_from_slice(&[first, lw[0], lw[1], lw[2]]);
     frame_buf.extend_from_slice(data);
     client_writer

@@ -428,11 +441,14 @@ where
         len_val |= 0x8000_0000;
     }
     let total = 4 + data.len() + padding_len;
-    let mut frame_buf = Vec::with_capacity(total);
+    frame_buf.clear();
+    frame_buf.reserve(total);
     frame_buf.extend_from_slice(&len_val.to_le_bytes());
     frame_buf.extend_from_slice(data);
     if padding_len > 0 {
-        frame_buf.extend_from_slice(&rng.bytes(padding_len));
+        let start = frame_buf.len();
+        frame_buf.resize(start + padding_len, 0);
+        rng.fill(&mut frame_buf[start..]);
     }
     client_writer
         .write_all(&frame_buf)
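The hunks above replace a per-call `Vec::with_capacity` with a caller-owned `frame_buf` that is cleared and re-reserved on every frame: `clear()` keeps the allocation, so the steady-state hot path stops allocating once the buffer has grown to the largest frame seen. A minimal sketch of the pattern (the `encode_frame` helper is hypothetical, simplified from `write_client_payload`):

```rust
// Reusable frame buffer: clear() keeps capacity, so repeated framing only
// allocates when a frame exceeds every previous frame's size.
fn encode_frame(frame_buf: &mut Vec<u8>, first: u8, data: &[u8]) {
    frame_buf.clear();                 // length -> 0, capacity retained
    frame_buf.reserve(1 + data.len()); // no-op once capacity is large enough
    frame_buf.push(first);
    frame_buf.extend_from_slice(data);
}

fn main() {
    let mut frame_buf = Vec::with_capacity(16 * 1024);
    encode_frame(&mut frame_buf, 0x80, b"hello");
    assert_eq!(frame_buf, [0x80, b'h', b'e', b'l', b'l', b'o']);
    let cap_before = frame_buf.capacity();

    encode_frame(&mut frame_buf, 0x01, b"world");
    // No reallocation: the second frame fits in the retained capacity.
    assert_eq!(frame_buf.capacity(), cap_before);
    assert_eq!(frame_buf[0], 0x01);
    println!("capacity retained: {}", frame_buf.capacity());
}
```

The padding change follows the same idea: `resize` + `rng.fill` writes random padding directly into the reserved tail instead of allocating a temporary `Vec` via `rng.bytes`.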
@@ -109,7 +109,22 @@ impl Stats {
 
     pub fn decrement_user_curr_connects(&self, user: &str) {
         if let Some(stats) = self.user_stats.get(user) {
-            stats.curr_connects.fetch_sub(1, Ordering::Relaxed);
+            let counter = &stats.curr_connects;
+            let mut current = counter.load(Ordering::Relaxed);
+            loop {
+                if current == 0 {
+                    break;
+                }
+                match counter.compare_exchange_weak(
+                    current,
+                    current - 1,
+                    Ordering::Relaxed,
+                    Ordering::Relaxed,
+                ) {
+                    Ok(_) => break,
+                    Err(actual) => current = actual,
+                }
+            }
         }
     }
 
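The hunk above swaps a bare `fetch_sub(1)` for a compare-exchange loop so the connection gauge saturates at zero: a stray extra decrement can no longer wrap a `u64` counter to `u64::MAX`. The loop in isolation:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Saturating decrement: a CAS loop that never wraps a zero counter,
// unlike a bare fetch_sub(1) on an already-zero gauge.
fn saturating_decrement(counter: &AtomicU64) {
    let mut current = counter.load(Ordering::Relaxed);
    loop {
        if current == 0 {
            break; // already at the floor; nothing to do
        }
        match counter.compare_exchange_weak(
            current,
            current - 1,
            Ordering::Relaxed,
            Ordering::Relaxed,
        ) {
            Ok(_) => break,
            Err(actual) => current = actual, // lost the race; retry with fresh value
        }
    }
}

fn main() {
    let gauge = AtomicU64::new(2);
    saturating_decrement(&gauge);
    assert_eq!(gauge.load(Ordering::Relaxed), 1);
    saturating_decrement(&gauge);
    saturating_decrement(&gauge); // extra decrement at zero is a no-op
    assert_eq!(gauge.load(Ordering::Relaxed), 0);
    println!("gauge = {}", gauge.load(Ordering::Relaxed));
}
```

`compare_exchange_weak` may fail spuriously on some architectures, which is fine inside a retry loop and can compile to cheaper instructions than the strong variant.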
@@ -1,7 +1,8 @@
|
|||||||
use std::collections::HashMap;
|
use std::collections::HashMap;
|
||||||
|
use std::net::IpAddr;
|
||||||
use std::path::{Path, PathBuf};
|
use std::path::{Path, PathBuf};
|
||||||
use std::sync::Arc;
|
use std::sync::Arc;
|
||||||
use std::time::{SystemTime, Duration};
|
use std::time::{Duration, Instant, SystemTime};
|
||||||
|
|
||||||
use tokio::sync::RwLock;
|
use tokio::sync::RwLock;
|
||||||
use tokio::time::sleep;
|
use tokio::time::sleep;
|
||||||
@@ -14,6 +15,7 @@ use crate::tls_front::types::{CachedTlsData, ParsedServerHello, TlsFetchResult};
|
|||||||
pub struct TlsFrontCache {
|
pub struct TlsFrontCache {
|
||||||
memory: RwLock<HashMap<String, Arc<CachedTlsData>>>,
|
memory: RwLock<HashMap<String, Arc<CachedTlsData>>>,
|
||||||
default: Arc<CachedTlsData>,
|
default: Arc<CachedTlsData>,
|
||||||
|
full_cert_sent: RwLock<HashMap<IpAddr, Instant>>,
|
||||||
disk_path: PathBuf,
|
disk_path: PathBuf,
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -31,6 +33,7 @@ impl TlsFrontCache {
|
|||||||
let default = Arc::new(CachedTlsData {
|
let default = Arc::new(CachedTlsData {
|
||||||
server_hello_template: default_template,
|
server_hello_template: default_template,
|
||||||
cert_info: None,
|
cert_info: None,
|
||||||
|
cert_payload: None,
|
||||||
app_data_records_sizes: vec![default_len],
|
app_data_records_sizes: vec![default_len],
|
||||||
total_app_data_len: default_len,
|
total_app_data_len: default_len,
|
||||||
fetched_at: SystemTime::now(),
|
fetched_at: SystemTime::now(),
|
||||||
@@ -45,6 +48,7 @@ impl TlsFrontCache {
|
|||||||
Self {
|
Self {
|
||||||
memory: RwLock::new(map),
|
memory: RwLock::new(map),
|
||||||
default,
|
default,
|
||||||
|
full_cert_sent: RwLock::new(HashMap::new()),
|
||||||
disk_path: disk_path.as_ref().to_path_buf(),
|
disk_path: disk_path.as_ref().to_path_buf(),
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -54,6 +58,45 @@ impl TlsFrontCache {
|
|||||||
guard.get(sni).cloned().unwrap_or_else(|| self.default.clone())
|
guard.get(sni).cloned().unwrap_or_else(|| self.default.clone())
|
||||||
}
|
}
|
||||||
|
|
||||||
|
pub async fn contains_domain(&self, domain: &str) -> bool {
|
||||||
|
self.memory.read().await.contains_key(domain)
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Returns true when full cert payload should be sent for client_ip
|
||||||
|
/// according to TTL policy.
|
||||||
|
pub async fn take_full_cert_budget_for_ip(
|
||||||
|
&self,
|
||||||
|
client_ip: IpAddr,
|
||||||
|
ttl: Duration,
|
||||||
|
) -> bool {
|
||||||
|
if ttl.is_zero() {
|
||||||
|
self.full_cert_sent
|
||||||
|
.write()
|
||||||
|
.await
|
||||||
|
.insert(client_ip, Instant::now());
|
||||||
|
return true;
|
||||||
|
}
|
||||||
|
|
||||||
|
let now = Instant::now();
|
||||||
|
let mut guard = self.full_cert_sent.write().await;
|
||||||
|
guard.retain(|_, seen_at| now.duration_since(*seen_at) < ttl);
|
||||||
|
|
||||||
|
match guard.get_mut(&client_ip) {
|
||||||
|
Some(seen_at) => {
|
||||||
|
if now.duration_since(*seen_at) >= ttl {
|
||||||
|
*seen_at = now;
|
||||||
|
true
|
||||||
|
} else {
|
||||||
|
false
|
||||||
|
}
|
||||||
|
}
|
||||||
|
None => {
|
||||||
|
guard.insert(client_ip, now);
|
||||||
|
true
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
pub async fn set(&self, domain: &str, data: CachedTlsData) {
|
pub async fn set(&self, domain: &str, data: CachedTlsData) {
|
||||||
let mut guard = self.memory.write().await;
|
let mut guard = self.memory.write().await;
|
||||||
guard.insert(domain.to_string(), Arc::new(data));
|
guard.insert(domain.to_string(), Arc::new(data));
|
||||||
@@ -142,6 +185,7 @@ impl TlsFrontCache {
|
|||||||
let data = CachedTlsData {
|
let data = CachedTlsData {
|
||||||
server_hello_template: fetched.server_hello_parsed,
|
server_hello_template: fetched.server_hello_parsed,
|
||||||
cert_info: fetched.cert_info,
|
cert_info: fetched.cert_info,
|
||||||
|
cert_payload: fetched.cert_payload,
|
||||||
app_data_records_sizes: fetched.app_data_records_sizes.clone(),
|
app_data_records_sizes: fetched.app_data_records_sizes.clone(),
|
||||||
total_app_data_len: fetched.total_app_data_len,
|
total_app_data_len: fetched.total_app_data_len,
|
||||||
fetched_at: SystemTime::now(),
|
fetched_at: SystemTime::now(),
|
||||||
@@ -161,3 +205,50 @@ impl TlsFrontCache {
|
|||||||
&self.disk_path
|
&self.disk_path
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
#[cfg(test)]
|
||||||
|
mod tests {
|
||||||
|
use super::*;
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_take_full_cert_budget_for_ip_uses_ttl() {
|
||||||
|
let cache = TlsFrontCache::new(
|
||||||
|
&["example.com".to_string()],
|
||||||
|
1024,
|
||||||
|
"tlsfront-test-cache",
|
||||||
|
);
|
||||||
|
let ip: IpAddr = "127.0.0.1".parse().expect("ip");
|
||||||
|
let ttl = Duration::from_millis(80);
|
||||||
|
|
||||||
|
assert!(cache
|
||||||
|
.take_full_cert_budget_for_ip(ip, ttl)
|
||||||
|
.await);
|
||||||
|
assert!(!cache
|
||||||
|
.take_full_cert_budget_for_ip(ip, ttl)
|
||||||
|
.await);
|
||||||
|
|
||||||
|
tokio::time::sleep(Duration::from_millis(90)).await;
|
||||||
|
|
||||||
|
assert!(cache
|
||||||
|
.take_full_cert_budget_for_ip(ip, ttl)
|
||||||
|
.await);
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_take_full_cert_budget_for_ip_zero_ttl_always_allows_full_payload() {
|
||||||
|
let cache = TlsFrontCache::new(
|
||||||
|
&["example.com".to_string()],
|
||||||
|
1024,
|
||||||
|
"tlsfront-test-cache",
|
||||||
|
);
|
||||||
|
let ip: IpAddr = "127.0.0.1".parse().expect("ip");
|
||||||
|
let ttl = Duration::ZERO;
|
||||||
|
|
||||||
|
assert!(cache
|
||||||
|
.take_full_cert_budget_for_ip(ip, ttl)
|
||||||
|
.await);
|
||||||
|
assert!(cache
|
||||||
|
.take_full_cert_budget_for_ip(ip, ttl)
|
||||||
|
.await);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|||||||
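The per-IP budget above prunes expired entries on every call and hands out one `true` per TTL window. The same logic can be followed in a synchronous std-only sketch (a `&str` key and a plain `HashMap` stand in for the `IpAddr`-keyed map behind the async `RwLock`):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Synchronous sketch of the per-IP full-cert budget: the first caller in
// each TTL window gets `true`; expired entries are pruned on every call.
struct FullCertBudget {
    seen: HashMap<String, Instant>,
}

impl FullCertBudget {
    fn new() -> Self {
        Self { seen: HashMap::new() }
    }

    fn take(&mut self, client: &str, ttl: Duration) -> bool {
        if ttl.is_zero() {
            // Zero TTL disables throttling: always send the full payload.
            self.seen.insert(client.to_string(), Instant::now());
            return true;
        }
        let now = Instant::now();
        // Drop entries whose window has elapsed so the map stays bounded.
        self.seen.retain(|_, seen_at| now.duration_since(*seen_at) < ttl);
        match self.seen.get_mut(client) {
            Some(seen_at) if now.duration_since(*seen_at) < ttl => false,
            Some(seen_at) => {
                *seen_at = now;
                true
            }
            None => {
                self.seen.insert(client.to_string(), now);
                true
            }
        }
    }
}

fn main() {
    let mut budget = FullCertBudget::new();
    let ttl = Duration::from_secs(90);
    assert!(budget.take("203.0.113.7", ttl));  // first hit: full cert payload
    assert!(!budget.take("203.0.113.7", ttl)); // within TTL: compact payload
    assert!(budget.take("203.0.113.7", Duration::ZERO)); // zero TTL always allows
    println!("entries tracked: {}", budget.seen.len());
}
```

Note the pruning via `retain` runs under the same write lock as the lookup in the real code, so a hit can never observe a stale, already-expired entry.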
@@ -3,7 +3,7 @@ use crate::protocol::constants::{
     TLS_RECORD_APPLICATION, TLS_RECORD_CHANGE_CIPHER, TLS_RECORD_HANDSHAKE, TLS_VERSION,
 };
 use crate::protocol::tls::{TLS_DIGEST_LEN, TLS_DIGEST_POS, gen_fake_x25519_key};
-use crate::tls_front::types::CachedTlsData;
+use crate::tls_front::types::{CachedTlsData, ParsedCertificateInfo};
 
 const MIN_APP_DATA: usize = 64;
 const MAX_APP_DATA: usize = 16640; // RFC 8446 §5.2 allows up to 2^14 + 256

@@ -27,12 +27,81 @@ fn jitter_and_clamp_sizes(sizes: &[usize], rng: &SecureRandom) -> Vec<usize> {
         .collect()
 }
 
+fn app_data_body_capacity(sizes: &[usize]) -> usize {
+    sizes.iter().map(|&size| size.saturating_sub(17)).sum()
+}
+
+fn ensure_payload_capacity(mut sizes: Vec<usize>, payload_len: usize) -> Vec<usize> {
+    if payload_len == 0 {
+        return sizes;
+    }
+
+    let mut body_total = app_data_body_capacity(&sizes);
+    if body_total >= payload_len {
+        return sizes;
+    }
+
+    if let Some(last) = sizes.last_mut() {
+        let free = MAX_APP_DATA.saturating_sub(*last);
+        let grow = free.min(payload_len - body_total);
+        *last += grow;
+        body_total += grow;
+    }
+
+    while body_total < payload_len {
+        let remaining = payload_len - body_total;
+        let chunk = (remaining + 17).min(MAX_APP_DATA).max(MIN_APP_DATA);
+        sizes.push(chunk);
+        body_total += chunk.saturating_sub(17);
+    }
+
+    sizes
+}
+
+fn build_compact_cert_info_payload(cert_info: &ParsedCertificateInfo) -> Option<Vec<u8>> {
+    let mut fields = Vec::new();
+
+    if let Some(subject) = cert_info.subject_cn.as_deref() {
+        fields.push(format!("CN={subject}"));
+    }
+    if let Some(issuer) = cert_info.issuer_cn.as_deref() {
+        fields.push(format!("ISSUER={issuer}"));
+    }
+    if let Some(not_before) = cert_info.not_before_unix {
+        fields.push(format!("NB={not_before}"));
+    }
+    if let Some(not_after) = cert_info.not_after_unix {
+        fields.push(format!("NA={not_after}"));
+    }
+    if !cert_info.san_names.is_empty() {
+        let san = cert_info
+            .san_names
+            .iter()
+            .take(8)
+            .map(String::as_str)
+            .collect::<Vec<_>>()
+            .join(",");
+        fields.push(format!("SAN={san}"));
+    }
+
+    if fields.is_empty() {
+        return None;
+    }
+
+    let mut payload = fields.join(";").into_bytes();
+    if payload.len() > 512 {
+        payload.truncate(512);
+    }
+    Some(payload)
+}
+
 /// Build a ServerHello + CCS + ApplicationData sequence using cached TLS metadata.
 pub fn build_emulated_server_hello(
     secret: &[u8],
     client_digest: &[u8; TLS_DIGEST_LEN],
     session_id: &[u8],
     cached: &CachedTlsData,
+    use_full_cert_payload: bool,
     rng: &SecureRandom,
     alpn: Option<Vec<u8>>,
     new_session_tickets: u8,

@@ -109,14 +178,52 @@ pub fn build_emulated_server_hello(
     if sizes.is_empty() {
         sizes.push(cached.total_app_data_len.max(1024));
     }
-    let sizes = jitter_and_clamp_sizes(&sizes, rng);
+    let mut sizes = jitter_and_clamp_sizes(&sizes, rng);
+    let compact_payload = cached
+        .cert_info
+        .as_ref()
+        .and_then(build_compact_cert_info_payload);
+    let selected_payload: Option<&[u8]> = if use_full_cert_payload {
+        cached
+            .cert_payload
+            .as_ref()
+            .map(|payload| payload.certificate_message.as_slice())
+            .filter(|payload| !payload.is_empty())
+            .or_else(|| compact_payload.as_deref())
+    } else {
+        compact_payload.as_deref()
+    };
+
+    if let Some(payload) = selected_payload {
+        sizes = ensure_payload_capacity(sizes, payload.len());
+    }
+
     let mut app_data = Vec::new();
+    let mut payload_offset = 0usize;
     for size in sizes {
         let mut rec = Vec::with_capacity(5 + size);
         rec.push(TLS_RECORD_APPLICATION);
         rec.extend_from_slice(&TLS_VERSION);
         rec.extend_from_slice(&(size as u16).to_be_bytes());
+
+        if let Some(payload) = selected_payload {
+            if size > 17 {
+                let body_len = size - 17;
+                let remaining = payload.len().saturating_sub(payload_offset);
+                let copy_len = remaining.min(body_len);
+                if copy_len > 0 {
+                    rec.extend_from_slice(&payload[payload_offset..payload_offset + copy_len]);
+                    payload_offset += copy_len;
+                }
+                if body_len > copy_len {
+                    rec.extend_from_slice(&rng.bytes(body_len - copy_len));
+                }
+                rec.push(0x16); // inner content type marker (handshake)
+                rec.extend_from_slice(&rng.bytes(16)); // AEAD-like tag
+            } else {
+                rec.extend_from_slice(&rng.bytes(size));
+            }
+        } else {
             if size > 17 {
                 let body_len = size - 17;
                 rec.extend_from_slice(&rng.bytes(body_len));

@@ -125,6 +232,7 @@ pub fn build_emulated_server_hello(
             } else {
                 rec.extend_from_slice(&rng.bytes(size));
             }
+        }
         app_data.extend_from_slice(&rec);
|
app_data.extend_from_slice(&rec);
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -158,3 +266,125 @@ pub fn build_emulated_server_hello(
|
|||||||
|
|
||||||
response
|
response
|
||||||
}
|
}
|
||||||
|
|
||||||
|
#[cfg(test)]
|
||||||
|
mod tests {
|
||||||
|
use std::time::SystemTime;
|
||||||
|
|
||||||
|
use crate::tls_front::types::{CachedTlsData, ParsedServerHello, TlsCertPayload};
|
||||||
|
|
||||||
|
use super::build_emulated_server_hello;
|
||||||
|
use crate::crypto::SecureRandom;
|
||||||
|
use crate::protocol::constants::{
|
||||||
|
TLS_RECORD_APPLICATION, TLS_RECORD_CHANGE_CIPHER, TLS_RECORD_HANDSHAKE,
|
||||||
|
};
|
||||||
|
|
||||||
|
fn first_app_data_payload(response: &[u8]) -> &[u8] {
|
||||||
|
let hello_len = u16::from_be_bytes([response[3], response[4]]) as usize;
|
||||||
|
let ccs_start = 5 + hello_len;
|
||||||
|
let ccs_len = u16::from_be_bytes([response[ccs_start + 3], response[ccs_start + 4]]) as usize;
|
||||||
|
let app_start = ccs_start + 5 + ccs_len;
|
||||||
|
let app_len = u16::from_be_bytes([response[app_start + 3], response[app_start + 4]]) as usize;
|
||||||
|
&response[app_start + 5..app_start + 5 + app_len]
|
||||||
|
}
|
||||||
|
|
||||||
|
fn make_cached(cert_payload: Option<TlsCertPayload>) -> CachedTlsData {
|
||||||
|
CachedTlsData {
|
||||||
|
server_hello_template: ParsedServerHello {
|
||||||
|
version: [0x03, 0x03],
|
||||||
|
random: [0u8; 32],
|
||||||
|
session_id: Vec::new(),
|
||||||
|
cipher_suite: [0x13, 0x01],
|
||||||
|
compression: 0,
|
||||||
|
extensions: Vec::new(),
|
||||||
|
},
|
||||||
|
cert_info: None,
|
||||||
|
cert_payload,
|
||||||
|
app_data_records_sizes: vec![64],
|
||||||
|
total_app_data_len: 64,
|
||||||
|
fetched_at: SystemTime::now(),
|
||||||
|
domain: "example.com".to_string(),
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn test_build_emulated_server_hello_uses_cached_cert_payload() {
|
||||||
|
let cert_msg = vec![0x0b, 0x00, 0x00, 0x05, 0x00, 0xaa, 0xbb, 0xcc, 0xdd];
|
||||||
|
let cached = make_cached(Some(TlsCertPayload {
|
||||||
|
cert_chain_der: vec![vec![0x30, 0x01, 0x00]],
|
||||||
|
certificate_message: cert_msg.clone(),
|
||||||
|
}));
|
||||||
|
let rng = SecureRandom::new();
|
||||||
|
let response = build_emulated_server_hello(
|
||||||
|
b"secret",
|
||||||
|
&[0x11; 32],
|
||||||
|
&[0x22; 16],
|
||||||
|
&cached,
|
||||||
|
true,
|
||||||
|
&rng,
|
||||||
|
None,
|
||||||
|
0,
|
||||||
|
);
|
||||||
|
|
||||||
|
assert_eq!(response[0], TLS_RECORD_HANDSHAKE);
|
||||||
|
let hello_len = u16::from_be_bytes([response[3], response[4]]) as usize;
|
||||||
|
let ccs_start = 5 + hello_len;
|
||||||
|
assert_eq!(response[ccs_start], TLS_RECORD_CHANGE_CIPHER);
|
||||||
|
let app_start = ccs_start + 6;
|
||||||
|
assert_eq!(response[app_start], TLS_RECORD_APPLICATION);
|
||||||
|
|
||||||
|
let payload = first_app_data_payload(&response);
|
||||||
|
assert!(payload.starts_with(&cert_msg));
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn test_build_emulated_server_hello_random_fallback_when_no_cert_payload() {
|
||||||
|
let cached = make_cached(None);
|
||||||
|
let rng = SecureRandom::new();
|
||||||
|
let response = build_emulated_server_hello(
|
||||||
|
b"secret",
|
||||||
|
&[0x22; 32],
|
||||||
|
&[0x33; 16],
|
||||||
|
&cached,
|
||||||
|
true,
|
||||||
|
&rng,
|
||||||
|
None,
|
||||||
|
0,
|
||||||
|
);
|
||||||
|
|
||||||
|
let payload = first_app_data_payload(&response);
|
||||||
|
assert!(payload.len() >= 64);
|
||||||
|
assert_eq!(payload[payload.len() - 17], 0x16);
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn test_build_emulated_server_hello_uses_compact_payload_after_first() {
|
||||||
|
let cert_msg = vec![0x0b, 0x00, 0x00, 0x05, 0x00, 0xaa, 0xbb, 0xcc, 0xdd];
|
||||||
|
let mut cached = make_cached(Some(TlsCertPayload {
|
||||||
|
cert_chain_der: vec![vec![0x30, 0x01, 0x00]],
|
||||||
|
certificate_message: cert_msg,
|
||||||
|
}));
|
||||||
|
cached.cert_info = Some(crate::tls_front::types::ParsedCertificateInfo {
|
||||||
|
not_after_unix: Some(1_900_000_000),
|
||||||
|
not_before_unix: Some(1_700_000_000),
|
||||||
|
issuer_cn: Some("Issuer".to_string()),
|
||||||
|
subject_cn: Some("example.com".to_string()),
|
||||||
|
san_names: vec!["example.com".to_string(), "www.example.com".to_string()],
|
||||||
|
});
|
||||||
|
|
||||||
|
let rng = SecureRandom::new();
|
||||||
|
let response = build_emulated_server_hello(
|
||||||
|
b"secret",
|
||||||
|
&[0x44; 32],
|
||||||
|
&[0x55; 16],
|
||||||
|
&cached,
|
||||||
|
false,
|
||||||
|
&rng,
|
||||||
|
None,
|
||||||
|
0,
|
||||||
|
);
|
||||||
|
|
||||||
|
let payload = first_app_data_payload(&response);
|
||||||
|
assert!(payload.starts_with(b"CN=example.com"));
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|||||||
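For readers following the emulator changes above: each fake ApplicationData record is a 5-byte TLS record header, an optional slice of the replayed certificate payload, filler up to the record size, a trailing `0x16` inner-content-type byte, and a 16-byte fake AEAD tag. A standalone sketch of that layout (hypothetical `build_record` helper, not the crate's API; filler is zeroed here instead of random so the output is deterministic):

```rust
// Sketch of the record layout used by the emulator above. `build_record` is a
// hypothetical standalone helper; the real code draws filler from SecureRandom.
fn build_record(size: usize, payload: &[u8], offset: &mut usize) -> Vec<u8> {
    let mut rec = Vec::with_capacity(5 + size);
    rec.push(0x17); // TLS record type: ApplicationData
    rec.extend_from_slice(&[0x03, 0x03]); // legacy record version (TLS 1.2)
    rec.extend_from_slice(&(size as u16).to_be_bytes());
    if size > 17 {
        let body_len = size - 17; // reserve 1 inner-type byte + 16-byte tag
        let copy_len = (payload.len() - *offset).min(body_len);
        rec.extend_from_slice(&payload[*offset..*offset + copy_len]);
        *offset += copy_len;
        rec.resize(5 + body_len, 0u8); // filler (random in the real code)
        rec.push(0x16); // inner content type: handshake
        rec.extend_from_slice(&[0u8; 16]); // fake AEAD tag
    } else {
        rec.resize(5 + size, 0u8);
    }
    rec
}

fn main() {
    let payload = vec![0x0b, 0x00, 0x00, 0x05, 0x00, 0xaa, 0xbb, 0xcc, 0xdd];
    let mut offset = 0usize;
    let rec = build_record(64, &payload, &mut offset);
    assert_eq!(rec.len(), 5 + 64);
    assert_eq!(&rec[..3], &[0x17, 0x03, 0x03]);
    assert!(rec[5..].starts_with(&payload));
    assert_eq!(rec[rec.len() - 17], 0x16);
    println!("record ok, consumed {offset} payload bytes");
}
```

Because the payload is split across records by a running `offset`, a chain of records reproduces the whole Certificate message while each record keeps the size distribution learned from the real server.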
@@ -1,7 +1,7 @@
 use std::sync::Arc;
 use std::time::Duration;
 
-use anyhow::{Context, Result, anyhow};
+use anyhow::{Result, anyhow};
 use tokio::io::{AsyncReadExt, AsyncWriteExt};
 use tokio::net::TcpStream;
 use tokio::time::timeout;
@@ -19,7 +19,13 @@ use x509_parser::certificate::X509Certificate;
 
 use crate::crypto::SecureRandom;
 use crate::protocol::constants::{TLS_RECORD_APPLICATION, TLS_RECORD_HANDSHAKE};
-use crate::tls_front::types::{ParsedServerHello, TlsExtension, TlsFetchResult, ParsedCertificateInfo};
+use crate::tls_front::types::{
+    ParsedCertificateInfo,
+    ParsedServerHello,
+    TlsCertPayload,
+    TlsExtension,
+    TlsFetchResult,
+};
 
 /// No-op verifier: accept any certificate (we only need lengths and metadata).
 #[derive(Debug)]
@@ -315,6 +321,46 @@ fn parse_cert_info(certs: &[CertificateDer<'static>]) -> Option<ParsedCertificat
     })
 }
 
+fn u24_bytes(value: usize) -> Option<[u8; 3]> {
+    if value > 0x00ff_ffff {
+        return None;
+    }
+    Some([
+        ((value >> 16) & 0xff) as u8,
+        ((value >> 8) & 0xff) as u8,
+        (value & 0xff) as u8,
+    ])
+}
+
+fn encode_tls13_certificate_message(cert_chain_der: &[Vec<u8>]) -> Option<Vec<u8>> {
+    if cert_chain_der.is_empty() {
+        return None;
+    }
+
+    let mut certificate_list = Vec::new();
+    for cert in cert_chain_der {
+        if cert.is_empty() {
+            return None;
+        }
+        certificate_list.extend_from_slice(&u24_bytes(cert.len())?);
+        certificate_list.extend_from_slice(cert);
+        certificate_list.extend_from_slice(&0u16.to_be_bytes()); // cert_entry extensions
+    }
+
+    // Certificate = context_len(1) + certificate_list_len(3) + entries
+    let body_len = 1usize
+        .checked_add(3)?
+        .checked_add(certificate_list.len())?;
+
+    let mut message = Vec::with_capacity(4 + body_len);
+    message.push(0x0b); // HandshakeType::certificate
+    message.extend_from_slice(&u24_bytes(body_len)?);
+    message.push(0x00); // certificate_request_context length
+    message.extend_from_slice(&u24_bytes(certificate_list.len())?);
+    message.extend_from_slice(&certificate_list);
+    Some(message)
+}
+
 async fn fetch_via_raw_tls(
     host: &str,
     port: u16,
@@ -368,26 +414,18 @@ async fn fetch_via_raw_tls(
         },
         total_app_data_len,
         cert_info: None,
+        cert_payload: None,
     })
 }
 
-/// Fetch real TLS metadata for the given SNI: negotiated cipher and cert lengths.
-pub async fn fetch_real_tls(
+async fn fetch_via_rustls(
     host: &str,
     port: u16,
     sni: &str,
     connect_timeout: Duration,
     upstream: Option<std::sync::Arc<crate::transport::UpstreamManager>>,
 ) -> Result<TlsFetchResult> {
-    // Preferred path: raw TLS probe for accurate record sizing
-    match fetch_via_raw_tls(host, port, sni, connect_timeout).await {
-        Ok(res) => return Ok(res),
-        Err(e) => {
-            warn!(sni = %sni, error = %e, "Raw TLS fetch failed, falling back to rustls");
-        }
-    }
-
-    // Fallback: rustls handshake to at least get certificate sizes
+    // rustls handshake path for certificate and basic negotiated metadata.
     let stream = if let Some(manager) = upstream {
         // Resolve host to SocketAddr
         if let Ok(mut addrs) = tokio::net::lookup_host((host, port)).await {
@@ -429,8 +467,19 @@ pub async fn fetch_real_tls(
         .peer_certificates()
         .map(|slice| slice.to_vec())
         .unwrap_or_default();
+    let cert_chain_der: Vec<Vec<u8>> = certs.iter().map(|c| c.as_ref().to_vec()).collect();
+    let cert_payload = encode_tls13_certificate_message(&cert_chain_der).map(|certificate_message| {
+        TlsCertPayload {
+            cert_chain_der: cert_chain_der.clone(),
+            certificate_message,
+        }
+    });
+
-    let total_cert_len: usize = certs.iter().map(|c| c.len()).sum::<usize>().max(1024);
+    let total_cert_len = cert_payload
+        .as_ref()
+        .map(|payload| payload.certificate_message.len())
+        .unwrap_or_else(|| cert_chain_der.iter().map(Vec::len).sum::<usize>())
+        .max(1024);
     let cert_info = parse_cert_info(&certs);
 
     // Heuristic: split across two records if large to mimic real servers a bit.
@@ -453,6 +502,7 @@ pub async fn fetch_real_tls(
         sni = %sni,
         len = total_cert_len,
         cipher = format!("0x{:04x}", u16::from_be_bytes(cipher_suite)),
+        has_cert_payload = cert_payload.is_some(),
         "Fetched TLS metadata via rustls"
     );
 
@@ -461,5 +511,81 @@ pub async fn fetch_real_tls(
         app_data_records_sizes: app_data_records_sizes.clone(),
         total_app_data_len: app_data_records_sizes.iter().sum(),
         cert_info,
+        cert_payload,
     })
 }
+
+/// Fetch real TLS metadata for the given SNI.
+///
+/// Strategy:
+/// 1) Probe raw TLS for realistic ServerHello and ApplicationData record sizes.
+/// 2) Fetch certificate chain via rustls to build cert payload.
+/// 3) Merge both when possible; otherwise auto-fallback to whichever succeeded.
+pub async fn fetch_real_tls(
+    host: &str,
+    port: u16,
+    sni: &str,
+    connect_timeout: Duration,
+    upstream: Option<std::sync::Arc<crate::transport::UpstreamManager>>,
+) -> Result<TlsFetchResult> {
+    let raw_result = match fetch_via_raw_tls(host, port, sni, connect_timeout).await {
+        Ok(res) => Some(res),
+        Err(e) => {
+            warn!(sni = %sni, error = %e, "Raw TLS fetch failed");
+            None
+        }
+    };
+
+    match fetch_via_rustls(host, port, sni, connect_timeout, upstream).await {
+        Ok(rustls_result) => {
+            if let Some(mut raw) = raw_result {
+                raw.cert_info = rustls_result.cert_info;
+                raw.cert_payload = rustls_result.cert_payload;
+                debug!(sni = %sni, "Fetched TLS metadata via raw probe + rustls cert chain");
+                Ok(raw)
+            } else {
+                Ok(rustls_result)
+            }
+        }
+        Err(e) => {
+            if let Some(raw) = raw_result {
+                warn!(sni = %sni, error = %e, "Rustls cert fetch failed, using raw TLS metadata only");
+                Ok(raw)
+            } else {
+                Err(e)
+            }
+        }
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::encode_tls13_certificate_message;
+
+    fn read_u24(bytes: &[u8]) -> usize {
+        ((bytes[0] as usize) << 16) | ((bytes[1] as usize) << 8) | (bytes[2] as usize)
+    }
+
+    #[test]
+    fn test_encode_tls13_certificate_message_single_cert() {
+        let cert = vec![0x30, 0x03, 0x02, 0x01, 0x01];
+        let message = encode_tls13_certificate_message(&[cert.clone()]).expect("message");
+
+        assert_eq!(message[0], 0x0b);
+        assert_eq!(read_u24(&message[1..4]), message.len() - 4);
+        assert_eq!(message[4], 0x00);
+
+        let cert_list_len = read_u24(&message[5..8]);
+        assert_eq!(cert_list_len, cert.len() + 5);
+
+        let cert_len = read_u24(&message[8..11]);
+        assert_eq!(cert_len, cert.len());
+        assert_eq!(&message[11..11 + cert.len()], cert.as_slice());
+        assert_eq!(&message[11 + cert.len()..13 + cert.len()], &[0x00, 0x00]);
+    }
+
+    #[test]
+    fn test_encode_tls13_certificate_message_empty_chain() {
+        assert!(encode_tls13_certificate_message(&[]).is_none());
+    }
+}
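The Certificate encoder added above leans on TLS 1.3's 24-bit big-endian length prefixes (one for the handshake body, one for the certificate list, one per certificate entry). A minimal standalone sketch of that prefix encoding, mirroring the patch's `u24_bytes`:

```rust
// 24-bit big-endian length prefix, as used by the TLS 1.3 Certificate message.
// Values that do not fit in 24 bits are rejected rather than truncated.
fn u24_bytes(value: usize) -> Option<[u8; 3]> {
    if value > 0x00ff_ffff {
        return None;
    }
    Some([
        ((value >> 16) & 0xff) as u8,
        ((value >> 8) & 0xff) as u8,
        (value & 0xff) as u8,
    ])
}

fn main() {
    // 0x010203 splits into its three big-endian bytes.
    assert_eq!(u24_bytes(0x0001_0203), Some([0x01, 0x02, 0x03]));
    // A 5-byte certificate entry is prefixed as 00 00 05.
    assert_eq!(u24_bytes(5), Some([0x00, 0x00, 0x05]));
    // 2^24 and above cannot be framed.
    assert_eq!(u24_bytes(0x0100_0000), None);
    println!("u24 framing ok");
}
```

The `Option` return is what lets the encoder bail out cleanly (via `?`) on absurdly large chains instead of emitting a corrupt message.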
@@ -29,11 +29,23 @@ pub struct ParsedCertificateInfo {
     pub san_names: Vec<String>,
 }
 
+/// TLS certificate payload captured from profiled upstream.
+///
+/// `certificate_message` stores an encoded TLS 1.3 Certificate handshake
+/// message body that can be replayed as opaque ApplicationData bytes in FakeTLS.
+#[derive(Debug, Clone, Serialize, Deserialize)]
+pub struct TlsCertPayload {
+    pub cert_chain_der: Vec<Vec<u8>>,
+    pub certificate_message: Vec<u8>,
+}
+
 /// Cached data per SNI used by the emulator.
 #[derive(Debug, Clone, Serialize, Deserialize)]
 pub struct CachedTlsData {
     pub server_hello_template: ParsedServerHello,
     pub cert_info: Option<ParsedCertificateInfo>,
+    #[serde(default)]
+    pub cert_payload: Option<TlsCertPayload>,
     pub app_data_records_sizes: Vec<usize>,
     pub total_app_data_len: usize,
     #[serde(default = "now_system_time", skip_serializing, skip_deserializing)]
@@ -52,4 +64,5 @@ pub struct TlsFetchResult {
     pub app_data_records_sizes: Vec<usize>,
     pub total_app_data_len: usize,
     pub cert_info: Option<ParsedCertificateInfo>,
+    pub cert_payload: Option<TlsCertPayload>,
 }
@@ -223,7 +223,7 @@ pub(crate) struct RpcWriter {
 impl RpcWriter {
     pub(crate) async fn send(&mut self, payload: &[u8]) -> Result<()> {
         let frame = build_rpc_frame(self.seq_no, payload, self.crc_mode);
-        self.seq_no += 1;
+        self.seq_no = self.seq_no.wrapping_add(1);
 
         let pad = (16 - (frame.len() % 16)) % 16;
         let mut buf = frame;
@@ -415,7 +415,6 @@ impl MePool {
         let degraded = Arc::new(AtomicBool::new(false));
         let draining = Arc::new(AtomicBool::new(false));
         let (tx, mut rx) = mpsc::channel::<WriterCommand>(4096);
-        let tx_for_keepalive = tx.clone();
         let mut rpc_writer = RpcWriter {
             writer: hs.wr,
             key: hs.write_key,
@@ -452,7 +451,7 @@ impl MePool {
         };
         self.writers.write().await.push(writer.clone());
         self.conn_count.fetch_add(1, Ordering::Relaxed);
-        self.writer_available.notify_waiters();
+        self.writer_available.notify_one();
 
         let reg = self.registry.clone();
         let writers_arc = self.writers_arc();
@@ -461,7 +460,6 @@ impl MePool {
         let rtt_stats = self.rtt_stats.clone();
         let stats_reader = self.stats.clone();
         let stats_ping = self.stats.clone();
-        let stats_keepalive = self.stats.clone();
         let pool = Arc::downgrade(self);
         let cancel_ping = cancel.clone();
         let tx_ping = tx.clone();
@@ -474,7 +472,6 @@ impl MePool {
         let keepalive_jitter = self.me_keepalive_jitter;
         let cancel_reader_token = cancel.clone();
         let cancel_ping_token = cancel_ping.clone();
-        let cancel_keepalive_token = cancel.clone();
 
         tokio::spawn(async move {
             let res = reader_loop(
@@ -513,15 +510,40 @@ impl MePool {
         let pool_ping = Arc::downgrade(self);
         tokio::spawn(async move {
            let mut ping_id: i64 = rand::random::<i64>();
-            loop {
+            // Per-writer jittered start to avoid phase sync.
+            let startup_jitter = if keepalive_enabled {
+                let jitter_cap_ms = keepalive_interval.as_millis() / 2;
+                let effective_jitter_ms = keepalive_jitter.as_millis().min(jitter_cap_ms).max(1);
+                Duration::from_millis(rand::rng().random_range(0..=effective_jitter_ms as u64))
+            } else {
                 let jitter = rand::rng()
                     .random_range(-ME_ACTIVE_PING_JITTER_SECS..=ME_ACTIVE_PING_JITTER_SECS);
                 let wait = (ME_ACTIVE_PING_SECS as i64 + jitter).max(5) as u64;
+                Duration::from_secs(wait)
+            };
+            tokio::select! {
+                _ = cancel_ping_token.cancelled() => return,
+                _ = tokio::time::sleep(startup_jitter) => {}
+            }
+            loop {
+                let wait = if keepalive_enabled {
+                    let jitter_cap_ms = keepalive_interval.as_millis() / 2;
+                    let effective_jitter_ms = keepalive_jitter.as_millis().min(jitter_cap_ms).max(1);
+                    keepalive_interval
+                        + Duration::from_millis(
+                            rand::rng().random_range(0..=effective_jitter_ms as u64)
+                        )
+                } else {
+                    let jitter = rand::rng()
+                        .random_range(-ME_ACTIVE_PING_JITTER_SECS..=ME_ACTIVE_PING_JITTER_SECS);
+                    let secs = (ME_ACTIVE_PING_SECS as i64 + jitter).max(5) as u64;
+                    Duration::from_secs(secs)
+                };
                 tokio::select! {
                     _ = cancel_ping_token.cancelled() => {
                         break;
                     }
-                    _ = tokio::time::sleep(Duration::from_secs(wait)) => {}
+                    _ = tokio::time::sleep(wait) => {}
                 }
                 let sent_id = ping_id;
                 let mut p = Vec::with_capacity(12);
@@ -538,8 +560,10 @@ impl MePool {
                     tracker.insert(sent_id, (std::time::Instant::now(), writer_id));
                 }
                 ping_id = ping_id.wrapping_add(1);
+                stats_ping.increment_me_keepalive_sent();
                 if tx_ping.send(WriterCommand::DataAndFlush(p)).await.is_err() {
-                    debug!("Active ME ping failed, removing dead writer");
+                    stats_ping.increment_me_keepalive_failed();
+                    debug!("ME ping failed, removing dead writer");
                     cancel_ping.cancel();
                     if let Some(pool) = pool_ping.upgrade() {
                         if cleanup_for_ping
@@ -554,46 +578,6 @@ impl MePool {
             }
         });
 
-        if keepalive_enabled {
-            let tx_keepalive = tx_for_keepalive;
-            let cancel_keepalive = cancel_keepalive_token;
-            let ping_tracker_keepalive = ping_tracker.clone();
-            tokio::spawn(async move {
-                // Per-writer jittered start to avoid phase sync.
-                let jitter_cap_ms = keepalive_interval.as_millis() / 2;
-                let effective_jitter_ms = keepalive_jitter.as_millis().min(jitter_cap_ms).max(1);
-                let initial_jitter_ms = rand::rng().random_range(0..=effective_jitter_ms as u64);
-                tokio::time::sleep(Duration::from_millis(initial_jitter_ms)).await;
-                let mut ping_id: i64 = rand::random::<i64>();
-                loop {
-                    tokio::select! {
-                        _ = cancel_keepalive.cancelled() => break,
-                        _ = tokio::time::sleep(keepalive_interval + Duration::from_millis(rand::rng().random_range(0..=effective_jitter_ms as u64))) => {}
-                    }
-                    let sent_id = ping_id;
-                    ping_id = ping_id.wrapping_add(1);
-                    let mut p = Vec::with_capacity(12);
-                    p.extend_from_slice(&RPC_PING_U32.to_le_bytes());
-                    p.extend_from_slice(&sent_id.to_le_bytes());
-                    {
-                        let mut tracker = ping_tracker_keepalive.lock().await;
-                        let before = tracker.len();
-                        tracker.retain(|_, (ts, _)| ts.elapsed() < Duration::from_secs(120));
-                        let expired = before.saturating_sub(tracker.len());
-                        if expired > 0 {
-                            stats_keepalive.increment_me_keepalive_timeout_by(expired as u64);
-                        }
-                        tracker.insert(sent_id, (std::time::Instant::now(), writer_id));
-                    }
-                    stats_keepalive.increment_me_keepalive_sent();
-                    if tx_keepalive.send(WriterCommand::DataAndFlush(p)).await.is_err() {
-                        stats_keepalive.increment_me_keepalive_failed();
-                        break;
-                    }
-                }
-            });
-        }
-
         Ok(())
     }
 
@@ -630,15 +614,19 @@ impl MePool {
     }
 
     async fn remove_writer_only(&self, writer_id: u64) -> Vec<BoundConn> {
+        let mut close_tx: Option<mpsc::Sender<WriterCommand>> = None;
         {
             let mut ws = self.writers.write().await;
             if let Some(pos) = ws.iter().position(|w| w.id == writer_id) {
                 let w = ws.remove(pos);
                 w.cancel.cancel();
-                let _ = w.tx.send(WriterCommand::Close).await;
+                close_tx = Some(w.tx.clone());
                 self.conn_count.fetch_sub(1, Ordering::Relaxed);
             }
         }
+        if let Some(tx) = close_tx {
+            let _ = tx.send(WriterCommand::Close).await;
+        }
         self.rtt_stats.lock().await.remove(&writer_id);
         self.registry.writer_lost(writer_id).await
     }
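The merged ping loop above computes its wait as "interval plus random jitter", with the configured jitter capped at half the interval and floored at 1 ms so writers never phase-lock. A deterministic sketch of that computation (hypothetical `keepalive_wait_ms` helper; `rand_val` stands in for the `rand::rng().random_range(..)` draw):

```rust
// Sketch of the unified keepalive wait from the patch above. Pure function so
// the jitter bounds can be checked without a real RNG.
fn keepalive_wait_ms(interval_ms: u128, jitter_ms: u128, rand_val: u128) -> u128 {
    let jitter_cap_ms = interval_ms / 2; // never jitter more than half the interval
    let effective_jitter_ms = jitter_ms.min(jitter_cap_ms).max(1);
    interval_ms + rand_val % (effective_jitter_ms + 1)
}

fn main() {
    // A 30 s configured jitter against a 10 s interval is capped to 5 s.
    let wait = keepalive_wait_ms(10_000, 30_000, 123_456);
    assert!((10_000..=15_000).contains(&wait));
    // Zero configured jitter is floored to 1 ms, so waits still decorrelate.
    assert!(keepalive_wait_ms(10_000, 0, 7) <= 10_001);
    println!("wait = {wait} ms");
}
```

The same cap-and-floor expression drives the per-writer startup jitter, which is what prevents all writers from pinging in lockstep right after a pool restart.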
@@ -1,6 +1,7 @@
 use std::collections::{HashMap, HashSet};
 use std::net::SocketAddr;
 use std::sync::atomic::{AtomicU64, Ordering};
+use std::time::Duration;
 
 use tokio::sync::{mpsc, RwLock};
 use tokio::sync::mpsc::error::TrySendError;
@@ -9,6 +10,7 @@ use super::codec::WriterCommand;
 use super::MeResponse;
 
 const ROUTE_CHANNEL_CAPACITY: usize = 4096;
+const ROUTE_BACKPRESSURE_TIMEOUT: Duration = Duration::from_millis(25);
 
 #[derive(Debug, Clone, Copy, PartialEq, Eq)]
 pub enum RouteResult {
@@ -94,15 +96,26 @@ impl ConnRegistry {
     }
 
     pub async fn route(&self, id: u64, resp: MeResponse) -> RouteResult {
+        let tx = {
         let inner = self.inner.read().await;
-        if let Some(tx) = inner.map.get(&id) {
+            inner.map.get(&id).cloned()
+        };
+
+        let Some(tx) = tx else {
+            return RouteResult::NoConn;
+        };
+
         match tx.try_send(resp) {
             Ok(()) => RouteResult::Routed,
             Err(TrySendError::Closed(_)) => RouteResult::ChannelClosed,
-            Err(TrySendError::Full(_)) => RouteResult::QueueFull,
+            Err(TrySendError::Full(resp)) => {
+                // Absorb short bursts without dropping/closing the session immediately.
+                match tokio::time::timeout(ROUTE_BACKPRESSURE_TIMEOUT, tx.send(resp)).await {
+                    Ok(Ok(())) => RouteResult::Routed,
+                    Ok(Err(_)) => RouteResult::ChannelClosed,
+                    Err(_) => RouteResult::QueueFull,
+                }
+            }
         }
-        } else {
-            RouteResult::NoConn
-        }
     }
 
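The `route()` change above is a two-stage send: a lock-free `try_send` first, then a short bounded wait before reporting `QueueFull`, so a momentary burst does not kill the session. The same idea can be sketched on the standard library's bounded channel (hypothetical `send_with_grace` helper; tokio's `timeout(tx.send(..))` is replaced by a retry loop with a deadline):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::{Duration, Instant};

// Try a non-blocking send first; on a full queue, retry briefly before giving
// up. Mirrors the backpressure pattern in the registry change above.
fn send_with_grace<T>(tx: &mpsc::SyncSender<T>, mut msg: T, grace: Duration) -> bool {
    let deadline = Instant::now() + grace;
    loop {
        match tx.try_send(msg) {
            Ok(()) => return true,
            Err(mpsc::TrySendError::Full(m)) if Instant::now() < deadline => {
                msg = m; // try_send gives the value back; retry after a pause
                thread::sleep(Duration::from_millis(1));
            }
            Err(_) => return false, // still full past the grace, or receiver gone
        }
    }
}

fn main() {
    let (tx, rx) = mpsc::sync_channel::<u32>(1);
    tx.try_send(1).unwrap(); // fill the one-slot queue
    // A slow consumer frees the slot shortly; the burst is absorbed.
    let drainer = thread::spawn(move || {
        thread::sleep(Duration::from_millis(5));
        let v = rx.recv().unwrap();
        (v, rx) // keep rx alive so the channel stays connected
    });
    assert!(send_with_grace(&tx, 2, Duration::from_millis(100)));
    let (first, _rx) = drainer.join().unwrap();
    assert_eq!(first, 1);
    println!("burst absorbed");
}
```

Cloning the sender out of the read lock before sending (as the patch does) also keeps the registry lock out of the wait path, so a slow consumer cannot stall unrelated routes.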
@@ -62,6 +62,8 @@ impl MePool {
         let mut writers_snapshot = {
             let ws = self.writers.read().await;
             if ws.is_empty() {
+                // Create waiter before recovery attempts so notify_one permits are not missed.
+                let waiter = self.writer_available.notified();
                 drop(ws);
                 for family in self.family_order() {
                     let map = match family {
@@ -72,13 +74,19 @@ impl MePool {
                     for (ip, port) in addrs {
                         let addr = SocketAddr::new(*ip, *port);
                         if self.connect_one(addr, self.rng.as_ref()).await.is_ok() {
-                            self.writer_available.notify_waiters();
+                            self.writer_available.notify_one();
                             break;
                         }
                     }
                 }
-                if tokio::time::timeout(Duration::from_secs(3), self.writer_available.notified()).await.is_err() {
+                if !self.writers.read().await.is_empty() {
+                    continue;
+                }
+                if tokio::time::timeout(Duration::from_secs(3), waiter).await.is_err() {
+                    if !self.writers.read().await.is_empty() {
+                        continue;
+                    }
                     return Err(ProxyError::Proxy("All ME connections dead (waited 3s)".into()));
                 }
                 continue;
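The fix above registers the `Notify` waiter *before* attempting reconnects, so a `notify_one` fired while the attempt runs is stored as a permit rather than lost. The underlying lost-wakeup hazard is the same one a plain flag + condvar has; a std-only sketch of the "store the permit in shared state" solution (hypothetical `Permit` type, not tokio's implementation):

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
use std::time::Duration;

// A one-permit notifier: notify_one stores the permit, so a waiter that
// arrives *after* the notification still wakes. This is the property the
// get-writer fix above relies on from tokio's Notify.
struct Permit {
    state: Mutex<bool>,
    cv: Condvar,
}

impl Permit {
    fn notify_one(&self) {
        *self.state.lock().unwrap() = true;
        self.cv.notify_one();
    }

    /// Wait until a permit exists, even one stored before we blocked.
    fn wait_timeout(&self, dur: Duration) -> bool {
        let guard = self.state.lock().unwrap();
        let (mut guard, _res) = self
            .cv
            .wait_timeout_while(guard, dur, |ready| !*ready)
            .unwrap();
        let got = *guard;
        *guard = false; // consume the permit
        got
    }
}

fn main() {
    let permit = Arc::new(Permit { state: Mutex::new(false), cv: Condvar::new() });
    let p2 = Arc::clone(&permit);
    // Notifier fires before the waiter blocks; the stored permit still wakes it.
    thread::spawn(move || p2.notify_one()).join().unwrap();
    assert!(permit.wait_timeout(Duration::from_millis(50)));
    // No pending permit: the wait times out.
    assert!(!permit.wait_timeout(Duration::from_millis(10)));
    println!("permit semantics ok");
}
```

The patch's extra re-checks of `self.writers` after the timeout cover the remaining race where the permit went to a different waiter.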
@@ -394,6 +394,7 @@ impl UpstreamManager {
                 Ok(stream)
             },
             UpstreamType::Socks4 { address, interface, user_id } => {
+                let connect_timeout = Duration::from_secs(DIRECT_CONNECT_TIMEOUT_SECS);
                 // Try to parse as SocketAddr first (IP:port), otherwise treat as hostname:port
                 let mut stream = if let Ok(proxy_addr) = address.parse::<SocketAddr>() {
                     // IP:port format - use socket with optional interface binding
@@ -416,7 +417,15 @@ impl UpstreamManager {
                     let std_stream: std::net::TcpStream = socket.into();
                     let stream = TcpStream::from_std(std_stream)?;
 
-                    stream.writable().await?;
+                    match tokio::time::timeout(connect_timeout, stream.writable()).await {
+                        Ok(Ok(())) => {}
+                        Ok(Err(e)) => return Err(ProxyError::Io(e)),
+                        Err(_) => {
+                            return Err(ProxyError::ConnectionTimeout {
+                                addr: proxy_addr.to_string(),
+                            });
+                        }
+                    }
                     if let Some(e) = stream.take_error()? {
                         return Err(ProxyError::Io(e));
                     }
@@ -427,8 +436,15 @@ impl UpstreamManager {
                     if interface.is_some() {
                         warn!("SOCKS4 interface binding is not supported for hostname addresses, ignoring");
                     }
-                    TcpStream::connect(address).await
-                        .map_err(ProxyError::Io)?
+                    match tokio::time::timeout(connect_timeout, TcpStream::connect(address)).await {
+                        Ok(Ok(stream)) => stream,
+                        Ok(Err(e)) => return Err(ProxyError::Io(e)),
+                        Err(_) => {
+                            return Err(ProxyError::ConnectionTimeout {
+                                addr: address.clone(),
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
};
|
};
|
||||||
|
|
||||||
// replace socks user_id with config.selected_scope, if set
|
// replace socks user_id with config.selected_scope, if set
|
||||||
@@ -436,10 +452,19 @@ impl UpstreamManager {
|
|||||||
.filter(|s| !s.is_empty());
|
.filter(|s| !s.is_empty());
|
||||||
let _user_id: Option<&str> = scope.or(user_id.as_deref());
|
let _user_id: Option<&str> = scope.or(user_id.as_deref());
|
||||||
|
|
||||||
connect_socks4(&mut stream, target, _user_id).await?;
|
match tokio::time::timeout(connect_timeout, connect_socks4(&mut stream, target, _user_id)).await {
|
||||||
|
Ok(Ok(())) => {}
|
||||||
|
Ok(Err(e)) => return Err(e),
|
||||||
|
Err(_) => {
|
||||||
|
return Err(ProxyError::ConnectionTimeout {
|
||||||
|
addr: target.to_string(),
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
Ok(stream)
|
Ok(stream)
|
||||||
},
|
},
|
||||||
UpstreamType::Socks5 { address, interface, username, password } => {
|
UpstreamType::Socks5 { address, interface, username, password } => {
|
||||||
|
let connect_timeout = Duration::from_secs(DIRECT_CONNECT_TIMEOUT_SECS);
|
||||||
// Try to parse as SocketAddr first (IP:port), otherwise treat as hostname:port
|
// Try to parse as SocketAddr first (IP:port), otherwise treat as hostname:port
|
||||||
let mut stream = if let Ok(proxy_addr) = address.parse::<SocketAddr>() {
|
let mut stream = if let Ok(proxy_addr) = address.parse::<SocketAddr>() {
|
||||||
// IP:port format - use socket with optional interface binding
|
// IP:port format - use socket with optional interface binding
|
||||||
@@ -462,7 +487,15 @@ impl UpstreamManager {
|
|||||||
let std_stream: std::net::TcpStream = socket.into();
|
let std_stream: std::net::TcpStream = socket.into();
|
||||||
let stream = TcpStream::from_std(std_stream)?;
|
let stream = TcpStream::from_std(std_stream)?;
|
||||||
|
|
||||||
stream.writable().await?;
|
match tokio::time::timeout(connect_timeout, stream.writable()).await {
|
||||||
|
Ok(Ok(())) => {}
|
||||||
|
Ok(Err(e)) => return Err(ProxyError::Io(e)),
|
||||||
|
Err(_) => {
|
||||||
|
return Err(ProxyError::ConnectionTimeout {
|
||||||
|
addr: proxy_addr.to_string(),
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
if let Some(e) = stream.take_error()? {
|
if let Some(e) = stream.take_error()? {
|
||||||
return Err(ProxyError::Io(e));
|
return Err(ProxyError::Io(e));
|
||||||
}
|
}
|
||||||
@@ -473,8 +506,15 @@ impl UpstreamManager {
|
|||||||
if interface.is_some() {
|
if interface.is_some() {
|
||||||
warn!("SOCKS5 interface binding is not supported for hostname addresses, ignoring");
|
warn!("SOCKS5 interface binding is not supported for hostname addresses, ignoring");
|
||||||
}
|
}
|
||||||
TcpStream::connect(address).await
|
match tokio::time::timeout(connect_timeout, TcpStream::connect(address)).await {
|
||||||
.map_err(ProxyError::Io)?
|
Ok(Ok(stream)) => stream,
|
||||||
|
Ok(Err(e)) => return Err(ProxyError::Io(e)),
|
||||||
|
Err(_) => {
|
||||||
|
return Err(ProxyError::ConnectionTimeout {
|
||||||
|
addr: address.clone(),
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
};
|
};
|
||||||
|
|
||||||
debug!(config = ?config, "Socks5 connection");
|
debug!(config = ?config, "Socks5 connection");
|
||||||
@@ -484,7 +524,20 @@ impl UpstreamManager {
|
|||||||
let _username: Option<&str> = scope.or(username.as_deref());
|
let _username: Option<&str> = scope.or(username.as_deref());
|
||||||
let _password: Option<&str> = scope.or(password.as_deref());
|
let _password: Option<&str> = scope.or(password.as_deref());
|
||||||
|
|
||||||
connect_socks5(&mut stream, target, _username, _password).await?;
|
match tokio::time::timeout(
|
||||||
|
connect_timeout,
|
||||||
|
connect_socks5(&mut stream, target, _username, _password),
|
||||||
|
)
|
||||||
|
.await
|
||||||
|
{
|
||||||
|
Ok(Ok(())) => {}
|
||||||
|
Ok(Err(e)) => return Err(e),
|
||||||
|
Err(_) => {
|
||||||
|
return Err(ProxyError::ConnectionTimeout {
|
||||||
|
addr: target.to_string(),
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
Ok(stream)
|
Ok(stream)
|
||||||
},
|
},
|
||||||
}
|
}
|
||||||
|
 396  tools/tlsearch.py  Normal file
@@ -0,0 +1,396 @@
+#!/usr/bin/env python3
+"""
+TLS Profile Inspector
+
+Usage:
+  python3 tools/tlsearch.py
+  python3 tools/tlsearch.py tlsfront
+  python3 tools/tlsearch.py tlsfront/petrovich.ru.json
+  python3 tools/tlsearch.py tlsfront --only-current
+"""
+
+from __future__ import annotations
+
+import argparse
+import datetime as dt
+import json
+from dataclasses import dataclass
+from pathlib import Path
+from typing import Any, Iterable
+
+
+TLS_VERSIONS = {
+    0x0301: "TLS 1.0",
+    0x0302: "TLS 1.1",
+    0x0303: "TLS 1.2",
+    0x0304: "TLS 1.3",
+}
+
+EXT_NAMES = {
+    0: "server_name",
+    5: "status_request",
+    10: "supported_groups",
+    11: "ec_point_formats",
+    13: "signature_algorithms",
+    16: "alpn",
+    18: "signed_certificate_timestamp",
+    21: "padding",
+    23: "extended_master_secret",
+    35: "session_ticket",
+    43: "supported_versions",
+    45: "psk_key_exchange_modes",
+    51: "key_share",
+}
+
+CIPHER_NAMES = {
+    0x1301: "TLS_AES_128_GCM_SHA256",
+    0x1302: "TLS_AES_256_GCM_SHA384",
+    0x1303: "TLS_CHACHA20_POLY1305_SHA256",
+    0x1304: "TLS_AES_128_CCM_SHA256",
+    0x1305: "TLS_AES_128_CCM_8_SHA256",
+    0x009C: "TLS_RSA_WITH_AES_128_GCM_SHA256",
+    0x009D: "TLS_RSA_WITH_AES_256_GCM_SHA384",
+    0xC02F: "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
+    0xC030: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
+    0xCCA8: "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256",
+    0xCCA9: "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256",
+}
+
+NAMED_GROUPS = {
+    0x001D: "x25519",
+    0x0017: "secp256r1",
+    0x0018: "secp384r1",
+    0x0019: "secp521r1",
+    0x0100: "ffdhe2048",
+    0x0101: "ffdhe3072",
+    0x0102: "ffdhe4096",
+}
+
+
+@dataclass
+class ProfileRecognition:
+    schema: str
+    mode: str
+    has_cert_info: bool
+    has_full_cert_payload: bool
+    cert_message_len: int
+    cert_chain_count: int
+    cert_chain_total_len: int
+    issues: list[str]
+
+
+def to_hex(data: Iterable[int]) -> str:
+    return "".join(f"{b:02x}" for b in data)
+
+
+def read_u16be(data: list[int], off: int = 0) -> int:
+    return (data[off] << 8) | data[off + 1]
+
+
+def normalize_u8_list(value: Any) -> list[int]:
+    if not isinstance(value, list):
+        return []
+    out: list[int] = []
+    for item in value:
+        if isinstance(item, int) and 0 <= item <= 0xFF:
+            out.append(item)
+        else:
+            return []
+    return out
+
+
+def as_dict(value: Any) -> dict[str, Any]:
+    return value if isinstance(value, dict) else {}
+
+
+def as_int(value: Any, default: int = 0) -> int:
+    return value if isinstance(value, int) else default
+
+
+def decode_version_pair(v: list[int]) -> str:
+    if len(v) != 2:
+        return f"invalid({v})"
+    ver = read_u16be(v)
+    return f"0x{ver:04x} ({TLS_VERSIONS.get(ver, 'unknown')})"
+
+
+def decode_cipher_suite(v: list[int]) -> str:
+    if len(v) != 2:
+        return f"invalid({v})"
+    cs = read_u16be(v)
+    name = CIPHER_NAMES.get(cs, "unknown")
+    return f"0x{cs:04x} ({name})"
+
+
+def decode_supported_versions(data: list[int]) -> str:
+    if len(data) == 2:
+        ver = read_u16be(data)
+        return f"selected=0x{ver:04x} ({TLS_VERSIONS.get(ver, 'unknown')})"
+    if not data:
+        return "empty"
+    if len(data) < 3:
+        return f"raw={to_hex(data)}"
+    vec_len = data[0]
+    versions: list[str] = []
+    for i in range(1, min(1 + vec_len, len(data)), 2):
+        if i + 1 >= len(data):
+            break
+        ver = read_u16be(data, i)
+        versions.append(f"0x{ver:04x}({TLS_VERSIONS.get(ver, 'unknown')})")
+    return "offered=[" + ", ".join(versions) + "]"
+
+
+def decode_key_share(data: list[int]) -> str:
+    if len(data) < 4:
+        return f"raw={to_hex(data)}"
+    group = read_u16be(data, 0)
+    key_len = read_u16be(data, 2)
+    key_hex = to_hex(data[4 : 4 + min(key_len, len(data) - 4)])
+    gname = NAMED_GROUPS.get(group, "unknown_group")
+    return f"group=0x{group:04x}({gname}), key_len={key_len}, key={key_hex}"
+
+
+def decode_alpn(data: list[int]) -> str:
+    if len(data) < 3:
+        return f"raw={to_hex(data)}"
+    total = read_u16be(data, 0)
+    pos = 2
+    vals: list[str] = []
+    limit = min(len(data), 2 + total)
+    while pos < limit:
+        ln = data[pos]
+        pos += 1
+        if pos + ln > limit:
+            break
+        raw = bytes(data[pos : pos + ln])
+        pos += ln
+        try:
+            vals.append(raw.decode("ascii"))
+        except UnicodeDecodeError:
+            vals.append(raw.hex())
+    return "protocols=[" + ", ".join(vals) + "]"
+
+
+def decode_extension(ext_type: int, data: list[int]) -> str:
+    if ext_type == 43:
+        return decode_supported_versions(data)
+    if ext_type == 51:
+        return decode_key_share(data)
+    if ext_type == 16:
+        return decode_alpn(data)
+    return f"raw={to_hex(data)}"
+
+
+def ts_to_iso(ts: Any) -> str:
+    if not isinstance(ts, int):
+        return "-"
+    return dt.datetime.fromtimestamp(ts, tz=dt.timezone.utc).isoformat()
+
+
+def recognize_profile(obj: dict[str, Any]) -> ProfileRecognition:
+    issues: list[str] = []
+
+    sh = as_dict(obj.get("server_hello_template"))
+    if not sh:
+        issues.append("missing server_hello_template")
+
+    version = normalize_u8_list(sh.get("version"))
+    if version and len(version) != 2:
+        issues.append("server_hello_template.version must have 2 bytes")
+
+    app_sizes = obj.get("app_data_records_sizes")
+    if not isinstance(app_sizes, list) or not app_sizes:
+        issues.append("missing app_data_records_sizes")
+    elif any((not isinstance(v, int) or v <= 0) for v in app_sizes):
+        issues.append("app_data_records_sizes contains invalid values")
+
+    if not isinstance(obj.get("total_app_data_len"), int):
+        issues.append("missing total_app_data_len")
+
+    cert_info = as_dict(obj.get("cert_info"))
+    has_cert_info = bool(
+        cert_info.get("subject_cn")
+        or cert_info.get("issuer_cn")
+        or cert_info.get("san_names")
+        or isinstance(cert_info.get("not_before_unix"), int)
+        or isinstance(cert_info.get("not_after_unix"), int)
+    )
+
+    cert_payload = as_dict(obj.get("cert_payload"))
+    cert_message_len = 0
+    cert_chain_count = 0
+    cert_chain_total_len = 0
+    has_full_cert_payload = False
+
+    if cert_payload:
+        cert_msg = normalize_u8_list(cert_payload.get("certificate_message"))
+        if not cert_msg:
+            issues.append("cert_payload.certificate_message is missing or invalid")
+        else:
+            cert_message_len = len(cert_msg)
+
+        chain_raw = cert_payload.get("cert_chain_der")
+        if not isinstance(chain_raw, list):
+            issues.append("cert_payload.cert_chain_der is missing or invalid")
+        else:
+            for entry in chain_raw:
+                cert = normalize_u8_list(entry)
+                if cert:
+                    cert_chain_count += 1
+                    cert_chain_total_len += len(cert)
+                else:
+                    issues.append("cert_payload.cert_chain_der has invalid certificate entry")
+                    break
+
+        has_full_cert_payload = cert_message_len > 0 and cert_chain_count > 0
+    elif obj.get("cert_payload") is not None:
+        issues.append("cert_payload is not an object")
+
+    if has_full_cert_payload:
+        schema = "current"
+        mode = "full-cert-payload"
+    elif has_cert_info:
+        schema = "current-compact"
+        mode = "compact-cert-info"
+    else:
+        schema = "legacy"
+        mode = "random-fallback"
+
+    if issues:
+        schema = f"{schema}+issues"
+
+    return ProfileRecognition(
+        schema=schema,
+        mode=mode,
+        has_cert_info=has_cert_info,
+        has_full_cert_payload=has_full_cert_payload,
+        cert_message_len=cert_message_len,
+        cert_chain_count=cert_chain_count,
+        cert_chain_total_len=cert_chain_total_len,
+        issues=issues,
+    )
+
+
+def decode_profile(path: Path) -> tuple[str, ProfileRecognition]:
+    obj: dict[str, Any] = json.loads(path.read_text(encoding="utf-8"))
+    recognition = recognize_profile(obj)
+
+    sh = as_dict(obj.get("server_hello_template"))
+    version = normalize_u8_list(sh.get("version"))
+    cipher = normalize_u8_list(sh.get("cipher_suite"))
+    random_bytes = normalize_u8_list(sh.get("random"))
+    session_id = normalize_u8_list(sh.get("session_id"))
+
+    lines: list[str] = []
+    lines.append(f"[{path.name}]")
+    lines.append(f"  domain: {obj.get('domain', '-')}")
+    lines.append(f"  profile.schema: {recognition.schema}")
+    lines.append(f"  profile.mode: {recognition.mode}")
+    lines.append(f"  profile.has_full_cert_payload: {recognition.has_full_cert_payload}")
+    lines.append(f"  profile.has_cert_info: {recognition.has_cert_info}")
+    if recognition.has_full_cert_payload:
+        lines.append(f"  profile.cert_message_len: {recognition.cert_message_len}")
+        lines.append(f"  profile.cert_chain_count: {recognition.cert_chain_count}")
+        lines.append(f"  profile.cert_chain_total_len: {recognition.cert_chain_total_len}")
+    if recognition.issues:
+        lines.append("  profile.issues:")
+        for issue in recognition.issues:
+            lines.append(f"    - {issue}")
+
+    lines.append(f"  tls.version: {decode_version_pair(version)}")
+    lines.append(f"  tls.cipher: {decode_cipher_suite(cipher)}")
+    lines.append(f"  tls.compression: {sh.get('compression', '-')}")
+    lines.append(f"  tls.random: {to_hex(random_bytes)}")
+    lines.append(f"  tls.session_id_len: {len(session_id)}")
+    if session_id:
+        lines.append(f"  tls.session_id: {to_hex(session_id)}")
+
+    app_sizes = obj.get("app_data_records_sizes", [])
+    if isinstance(app_sizes, list):
+        lines.append("  app_data_records_sizes: " + ", ".join(str(v) for v in app_sizes))
+    else:
+        lines.append("  app_data_records_sizes: -")
+    lines.append(f"  total_app_data_len: {obj.get('total_app_data_len', '-')}")
+
+    cert = as_dict(obj.get("cert_info"))
+    if cert:
+        lines.append("  cert_info:")
+        lines.append(f"    subject_cn: {cert.get('subject_cn') or '-'}")
+        lines.append(f"    issuer_cn: {cert.get('issuer_cn') or '-'}")
+        lines.append(f"    not_before: {ts_to_iso(cert.get('not_before_unix'))}")
+        lines.append(f"    not_after: {ts_to_iso(cert.get('not_after_unix'))}")
+        sans = cert.get("san_names")
+        if isinstance(sans, list) and sans:
+            lines.append("    san_names: " + ", ".join(str(v) for v in sans))
+        else:
+            lines.append("    san_names: -")
+    else:
+        lines.append("  cert_info: -")
+
+    exts = sh.get("extensions", [])
+    if not isinstance(exts, list):
+        exts = []
+    lines.append(f"  extensions[{len(exts)}]:")
+    for ext in exts:
+        ext_obj = as_dict(ext)
+        ext_type = as_int(ext_obj.get("ext_type"), -1)
+        data = normalize_u8_list(ext_obj.get("data"))
+        name = EXT_NAMES.get(ext_type, "unknown")
+        decoded = decode_extension(ext_type, data)
+        lines.append(f"    - type={ext_type} ({name}), len={len(data)}: {decoded}")
+
+    lines.append("")
+    return ("\n".join(lines), recognition)
+
+
+def collect_files(input_path: Path) -> list[Path]:
+    if input_path.is_file():
+        return [input_path]
+    return sorted(p for p in input_path.glob("*.json") if p.is_file())
+
+
+def main() -> int:
+    parser = argparse.ArgumentParser(
+        description="Decode TLS profile JSON files and recognize current schema."
+    )
+    parser.add_argument(
+        "path",
+        nargs="?",
+        default="tlsfront",
+        help="Path to tlsfront directory or a single JSON file.",
+    )
+    parser.add_argument(
+        "--only-current",
+        action="store_true",
+        help="Show only profiles recognized as current/full-cert-payload.",
+    )
+    args = parser.parse_args()
+
+    base = Path(args.path)
+    if not base.exists():
+        print(f"Path not found: {base}")
+        return 1
+
+    files = collect_files(base)
+    if not files:
+        print(f"No JSON files found in: {base}")
+        return 1
+
+    printed = 0
+    for path in files:
+        try:
+            rendered, recognition = decode_profile(path)
+            if args.only_current and recognition.schema != "current":
+                continue
+            print(rendered, end="")
+            printed += 1
+        except Exception as e:  # noqa: BLE001
+            print(f"[{path.name}] decode error: {e}\n")
+
+    if args.only_current and printed == 0:
+        print("No current profiles found.")
+    return 0
+
+
+if __name__ == "__main__":
+    raise SystemExit(main())
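The supported_versions (extension 43) decoder in `tools/tlsearch.py` distinguishes the two wire forms of the extension: a bare 2-byte selected version in a ServerHello, and a length-prefixed list of offered versions in a ClientHello. A trimmed standalone sketch of that same logic (the `empty`/`raw` fallback branches are dropped here for brevity):

```python
# Trimmed sketch of decode_supported_versions from tools/tlsearch.py.
TLS_VERSIONS = {0x0301: "TLS 1.0", 0x0302: "TLS 1.1", 0x0303: "TLS 1.2", 0x0304: "TLS 1.3"}

def read_u16be(data, off=0):
    # Big-endian u16 assembled from two bytes.
    return (data[off] << 8) | data[off + 1]

def decode_supported_versions(data):
    # ServerHello form: exactly 2 bytes, the negotiated version.
    if len(data) == 2:
        ver = read_u16be(data)
        return f"selected=0x{ver:04x} ({TLS_VERSIONS.get(ver, 'unknown')})"
    # ClientHello form: 1-byte vector length, then 2-byte version codes.
    vec_len = data[0]
    versions = []
    for i in range(1, min(1 + vec_len, len(data)), 2):
        if i + 1 >= len(data):
            break
        ver = read_u16be(data, i)
        versions.append(f"0x{ver:04x}({TLS_VERSIONS.get(ver, 'unknown')})")
    return "offered=[" + ", ".join(versions) + "]"

print(decode_supported_versions([0x03, 0x04]))                  # ServerHello form
print(decode_supported_versions([4, 0x03, 0x04, 0x03, 0x03]))   # ClientHello form
```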