mirror of https://github.com/telemt/telemt.git
Compare commits
No commits in common. "main" and "3.3.17" have entirely different histories.
@@ -1,15 +0,0 @@

[bans]
multiple-versions = "deny"
wildcards = "allow"
highlight = "all"

# Explicitly flag the weak cryptography so the agent is forced to justify its existence
[[bans.skip]]
name = "md-5"
version = "*"
reason = "MUST VERIFY: Only allowed for legacy checksums, never for security."

[[bans.skip]]
name = "sha1"
version = "*"
reason = "MUST VERIFY: Only allowed for backwards compatibility."
@@ -1,8 +0,0 @@

.git
.github
target
.kilocode
cache
tlsfront
*.tar
*.tar.gz
@@ -7,16 +7,7 @@ queries:
   - uses: security-and-quality
   - uses: ./.github/codeql/queries

-paths-ignore:
-  - "**/tests/**"
-  - "**/test/**"
-  - "**/*_test.rs"
-  - "**/*/tests.rs"
 query-filters:
-  - exclude:
-      tags:
-        - test
   - exclude:
       id:
         - rust/unwrap-on-option
@@ -1,126 +0,0 @@

# Architecture Directives

> Companion to `Agents.md`. These are **activation directives**, not tutorials.
> You already know these patterns — apply them. When making any structural or
> design decision, run the relevant section below as a checklist.

---

## 1. Active Principles (always on)

Apply these on every non-trivial change. No exceptions.
- **SRP** — one reason to change per component. If you can't name the responsibility in one noun phrase, split it.
- **OCP** — extend by adding, not by modifying. New variants/impls over patching existing logic.
- **ISP** — traits stay minimal. More than ~5 methods is a split signal.
- **DIP** — high-level modules depend on traits, not concrete types. Infrastructure implements domain traits; it does not own domain logic.
- **DRY** — one authoritative source per piece of knowledge. Copies are bugs that haven't diverged yet.
- **YAGNI** — generic parameters, extension hooks, and pluggable strategies require an *existing* concrete use case, not a hypothetical one.
- **KISS** — two equivalent designs: choose the one with fewer concepts. Justify complexity; never assume it.
---

## 2. Layered Architecture

Dependencies point **inward only**: `Presentation → Application → Domain ← Infrastructure`.

- Domain layer: zero I/O. No network, no filesystem, no async runtime imports.
- Infrastructure: implements domain traits at the boundary. Never leaks SDK/wire types inward.
- Anti-Corruption Layer (ACL): all third-party and external-protocol types are translated here. If the external format changes, only the ACL changes.
- Presentation: translates wire/HTTP representations to domain types and back. Nothing else.
---

## 3. Design Pattern Selection

Apply the right pattern. Do not invent a new abstraction when a named pattern fits.
| Situation | Pattern to apply |
|---|---|
| Struct with 3+ optional/dependent fields | **Builder** — `build()` returns `Result`, never panics |
| Cross-cutting behavior (logging, retry, metrics) on a trait impl | **Decorator** — implements same trait, delegates all calls |
| Subsystem with multiple internal components | **Façade** — single public entry point, internals are `pub(crate)` |
| Swappable algorithm or policy | **Strategy** — trait injection; generics for compile-time, `dyn` for runtime |
| Component notifying decoupled consumers | **Observer** — typed channels (`broadcast`, `watch`), not callback `Vec<Box<dyn Fn>>` |
| Exclusive mutable state serving concurrent callers | **Actor** — `mpsc` command channel + `oneshot` reply; no lock needed on state |
| Finite state with invalid transition prevention | **Typestate** — distinct types per state; invalid ops are compile errors |
| Fixed process skeleton with overridable steps | **Template Method** — defaulted trait method calls required hooks |
| Request pipeline with independent handlers | **Chain/Middleware** — generic compile-time chain for hot paths, `dyn` for runtime assembly |
| Hiding a concrete type behind a trait | **Factory Function** — returns `Box<dyn Trait>` or `impl Trait` |
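The Builder row above, as a minimal sketch. The `ProxyConfig` fields are illustrative, not this project's real configuration: the point is that validation lives in `build()`, which returns `Result` and never panics.

```rust
#[derive(Debug)]
pub struct ProxyConfig {
    bind_addr: String,
    secret: String,
    max_conns: usize,
}

#[derive(Default)]
pub struct ProxyConfigBuilder {
    bind_addr: Option<String>,
    secret: Option<String>,
    max_conns: Option<usize>,
}

impl ProxyConfigBuilder {
    pub fn bind_addr(mut self, v: impl Into<String>) -> Self {
        self.bind_addr = Some(v.into());
        self
    }
    pub fn secret(mut self, v: impl Into<String>) -> Self {
        self.secret = Some(v.into());
        self
    }
    pub fn max_conns(mut self, v: usize) -> Self {
        self.max_conns = Some(v);
        self
    }

    /// Validation lives here: a missing field is an `Err`, never a panic.
    pub fn build(self) -> Result<ProxyConfig, String> {
        Ok(ProxyConfig {
            bind_addr: self.bind_addr.ok_or("bind_addr is required")?,
            secret: self.secret.ok_or("secret is required")?,
            max_conns: self.max_conns.unwrap_or(1024), // defaulted, not required
        })
    }
}

fn main() {
    let cfg = ProxyConfigBuilder::default()
        .bind_addr("0.0.0.0:443")
        .secret("ee...")
        .build()
        .expect("valid config");
    assert_eq!(cfg.max_conns, 1024);
    assert!(ProxyConfigBuilder::default().build().is_err());
}
```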
---

## 4. Data Modeling Rules

- **Make illegal states unrepresentable.** Type system enforces invariants; runtime validation is a second line, not the first.
- **Newtype every primitive** that carries domain meaning. `SessionId(u64)` ≠ `UserId(u64)` — the compiler enforces it.
- **Enums over booleans** for any parameter or field with two or more named states.
- **Typed error enums** with named variants carrying full diagnostic context. `anyhow` is application-layer only; never in library code.
- **Domain types carry no I/O concerns.** No `serde`, no codec, no DB derives on domain structs. Conversions via `From`/`TryFrom` at layer boundaries.
---

## 5. Concurrency Rules

- Prefer message-passing over shared memory. Shared state is a fallback.
- All channels must be **bounded**. Document the bound's rationale inline.
- Never hold a lock across an `await` unless atomicity explicitly requires it — document why.
- Document lock acquisition order wherever two locks are taken together.
- Every `async fn` is cancellation-safe unless explicitly documented otherwise. Mutate shared state *after* the `await` that may be cancelled, not before.
- High-read/low-write state: use `arc-swap` or `watch` for lock-free reads.
---

## 6. Error Handling Rules

- Errors translated at every layer boundary — low-level errors never surface unmodified.
- Add context at the propagation site: what operation failed and where.
- No `unwrap()`/`expect()` in production paths without a comment proving `None`/`Err` is impossible.
- Panics are only permitted in: tests, startup/init unrecoverable failure, and `unreachable!()` with an invariant comment.
---

## 7. API Design Rules

- **CQS**: functions that return data must not mutate; functions that mutate return only `Result`.
- **Least surprise**: a function does exactly what its name implies. Side effects are documented.
- **Idempotency**: `close()`, `shutdown()`, `unregister()` called twice must not panic or error.
- **Fallibility at the type level**: failure → `Result<T, E>`. No sentinel values.
- **Minimal public surface**: default to `pub(crate)`. Mark `pub` only deliberate API. Re-export through a single surface in `mod.rs`.
---

## 8. Performance Rules (hot paths)

- Annotate hot-path functions with `// HOT PATH: <throughput requirement>`.
- Zero allocations per operation in hot paths after initialization. Preallocate in constructors, reuse buffers.
- Pass `&[u8]` / `Bytes` slices — not `Vec<u8>`. Use `BytesMut` for reusable mutable buffers.
- No `String` formatting in hot paths. No logging without a rate-limit or sampling gate.
- Any allocation in a hot path gets a comment: `// ALLOC: <reason and size>`.
---

## 9. Testing Rules

- Bug fixes require a regression test that is **red before the fix, green after**. Name it after the bug.
- Property tests for: codec round-trips, state machine invariants, cryptographic protocol correctness.
- No shared mutable state between tests. Each test constructs its own environment.
- Test doubles hierarchy (simplest first): Fake → Stub → Spy → Mock. Mocks couple to implementation, not behavior — use sparingly.
---

## 10. Pre-Change Checklist

Run this before proposing or implementing any structural decision:

- [ ] Responsibility nameable in one noun phrase?
- [ ] Layer dependencies point inward only?
- [ ] Invalid states unrepresentable in the type system?
- [ ] State transitions gated through a single interface?
- [ ] All channels bounded?
- [ ] No locks held across `await` (or documented)?
- [ ] Errors typed and translated at layer boundaries?
- [ ] No panics in production paths without invariant proof?
- [ ] Hot paths annotated and allocation-free?
- [ ] Public surface minimal — only deliberate API marked `pub`?
- [ ] Correct pattern chosen from Section 3 table?
@@ -1,39 +0,0 @@

name: Build

on:
  push:
    branches: [ "*" ]
  pull_request:
    branches: [ "*" ]

env:
  CARGO_TERM_COLOR: always

jobs:
  build:
    name: Build
    runs-on: ubuntu-latest

    permissions:
      contents: read

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Install latest stable Rust toolchain
        uses: dtolnay/rust-toolchain@stable

      - name: Cache cargo registry & build artifacts
        uses: actions/cache@v4
        with:
          path: |
            ~/.cargo/registry
            ~/.cargo/git
            target
          key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
          restore-keys: |
            ${{ runner.os }}-cargo-

      - name: Build Release
        run: cargo build --release --verbose
@@ -5,87 +5,37 @@ on:
     tags:
       - '[0-9]+.[0-9]+.[0-9]+'
   workflow_dispatch:
-    inputs:
-      tag:
-        description: 'Release tag (example: 3.3.15)'
-        required: true
-        type: string
-
-concurrency:
-  group: release-${{ github.ref_name }}-${{ github.event.inputs.tag || 'auto' }}
-  cancel-in-progress: true
 
 permissions:
   contents: read
+  packages: write
 
 env:
   CARGO_TERM_COLOR: always
-  BINARY_NAME: telemt
 
 jobs:
-  prepare:
-    name: Prepare
+  build:
+    name: Build ${{ matrix.target }}
     runs-on: ubuntu-latest
-    outputs:
-      version: ${{ steps.vars.outputs.version }}
-      prerelease: ${{ steps.vars.outputs.prerelease }}
-
-    steps:
-      - name: Resolve version
-        id: vars
-        shell: bash
-        run: |
-          set -euo pipefail
-
-          if [ "${GITHUB_EVENT_NAME}" = "workflow_dispatch" ]; then
-            VERSION="${{ github.event.inputs.tag }}"
-          else
-            VERSION="${GITHUB_REF#refs/tags/}"
-          fi
-
-          VERSION="${VERSION#refs/tags/}"
-
-          if [ -z "${VERSION}" ]; then
-            echo "Release version is empty" >&2
-            exit 1
-          fi
-
-          if [[ "${VERSION}" == *-* ]]; then
-            PRERELEASE=true
-          else
-            PRERELEASE=false
-          fi
-
-          echo "version=${VERSION}" >> "${GITHUB_OUTPUT}"
-          echo "prerelease=${PRERELEASE}" >> "${GITHUB_OUTPUT}"
-
-  # ==========================
-  # GNU / glibc
-  # ==========================
-  build-gnu:
-    name: GNU ${{ matrix.asset }}
-    runs-on: ubuntu-latest
-    needs: prepare
-
-    container:
-      image: rust:slim-bookworm
+    permissions:
+      contents: read
 
     strategy:
       fail-fast: false
       matrix:
         include:
           - target: x86_64-unknown-linux-gnu
-            asset: telemt-x86_64-linux-gnu
-            cpu: baseline
-
-          - target: x86_64-unknown-linux-gnu
-            asset: telemt-x86_64-v3-linux-gnu
-            cpu: v3
+            artifact_name: telemt
+            asset_name: telemt-x86_64-linux-gnu
 
           - target: aarch64-unknown-linux-gnu
-            asset: telemt-aarch64-linux-gnu
-            cpu: generic
+            artifact_name: telemt
+            asset_name: telemt-aarch64-linux-gnu
+
+          - target: x86_64-unknown-linux-musl
+            artifact_name: telemt
+            asset_name: telemt-x86_64-linux-musl
+
+          - target: aarch64-unknown-linux-musl
+            artifact_name: telemt
+            asset_name: telemt-aarch64-linux-musl
 
     steps:
       - uses: actions/checkout@v4
@@ -93,261 +43,47 @@ jobs:
       - uses: dtolnay/rust-toolchain@v1
         with:
          toolchain: stable
-          targets: |
-            x86_64-unknown-linux-gnu
-            aarch64-unknown-linux-gnu
+          targets: ${{ matrix.target }}
 
-      - name: Install deps
+      - name: Install cross-compilation tools
         run: |
-          apt-get update
-          apt-get install -y \
-            build-essential \
-            clang \
-            lld \
-            pkg-config \
-            gcc-aarch64-linux-gnu \
-            g++-aarch64-linux-gnu
+          sudo apt-get update
+          sudo apt-get install -y gcc-aarch64-linux-gnu
 
       - uses: actions/cache@v4
         with:
           path: |
-            /usr/local/cargo/registry
-            /usr/local/cargo/git
+            ~/.cargo/registry
+            ~/.cargo/git
             target
-          key: gnu-${{ matrix.asset }}-${{ hashFiles('**/Cargo.lock') }}
+          key: ${{ runner.os }}-${{ matrix.target }}-cargo-${{ hashFiles('**/Cargo.lock') }}
           restore-keys: |
-            gnu-${{ matrix.asset }}-
-            gnu-
+            ${{ runner.os }}-${{ matrix.target }}-cargo-
 
-      - name: Build
-        shell: bash
-        run: |
-          set -euo pipefail
-
-          if [ "${{ matrix.target }}" = "aarch64-unknown-linux-gnu" ]; then
-            export CC=aarch64-linux-gnu-gcc
-            export CXX=aarch64-linux-gnu-g++
-            export RUSTFLAGS="-C linker=aarch64-linux-gnu-gcc -C lto=fat -C panic=abort"
-          else
-            export CC=clang
-            export CXX=clang++
-
-            if [ "${{ matrix.cpu }}" = "v3" ]; then
-              CPU_FLAGS="-C target-cpu=x86-64-v3"
-            else
-              CPU_FLAGS="-C target-cpu=x86-64"
-            fi
-
-            export RUSTFLAGS="-C linker=clang -C link-arg=-fuse-ld=lld -C lto=fat -C panic=abort ${CPU_FLAGS}"
-          fi
-
-          cargo build --release --target ${{ matrix.target }} -j "$(nproc)"
-
-      - name: Package
-        shell: bash
-        run: |
-          set -euo pipefail
-
-          mkdir -p dist
-          cp "target/${{ matrix.target }}/release/${{ env.BINARY_NAME }}" dist/telemt
-
-          if [ "${{ matrix.target }}" = "aarch64-unknown-linux-gnu" ]; then
-            STRIP_BIN=aarch64-linux-gnu-strip
-          else
-            STRIP_BIN=strip
-          fi
-
-          "${STRIP_BIN}" dist/telemt
-
-          cd dist
-          tar -czf "${{ matrix.asset }}.tar.gz" \
-            --owner=0 --group=0 --numeric-owner \
-            telemt
-
-          sha256sum "${{ matrix.asset }}.tar.gz" > "${{ matrix.asset }}.tar.gz.sha256"
+      - name: Install cross
+        run: cargo install cross --git https://github.com/cross-rs/cross
+
+      - name: Build Release
+        env:
+          RUSTFLAGS: ${{ contains(matrix.target, 'musl') && '-C target-feature=+crt-static' || '' }}
+        run: cross build --release --target ${{ matrix.target }}
+
+      - name: Package binary
+        run: |
+          cd target/${{ matrix.target }}/release
+          tar -czvf ${{ matrix.asset_name }}.tar.gz ${{ matrix.artifact_name }}
+          sha256sum ${{ matrix.asset_name }}.tar.gz > ${{ matrix.asset_name }}.sha256
 
       - uses: actions/upload-artifact@v4
         with:
-          name: ${{ matrix.asset }}
-          path: dist/*
+          name: ${{ matrix.asset_name }}
+          path: |
+            target/${{ matrix.target }}/release/${{ matrix.asset_name }}.tar.gz
+            target/${{ matrix.target }}/release/${{ matrix.asset_name }}.sha256
 
-  # ==========================
-  # MUSL
-  # ==========================
-  build-musl:
-    name: MUSL ${{ matrix.asset }}
-    runs-on: ubuntu-latest
-    needs: prepare
-
-    container:
-      image: rust:slim-bookworm
-
-    strategy:
-      fail-fast: false
-      matrix:
-        include:
-          - target: x86_64-unknown-linux-musl
-            asset: telemt-x86_64-linux-musl
-            cpu: baseline
-
-          - target: x86_64-unknown-linux-musl
-            asset: telemt-x86_64-v3-linux-musl
-            cpu: v3
-
-          - target: aarch64-unknown-linux-musl
-            asset: telemt-aarch64-linux-musl
-            cpu: generic
-
-    steps:
-      - uses: actions/checkout@v4
-
-      - name: Install deps
-        run: |
-          apt-get update
-          apt-get install -y \
-            musl-tools \
-            pkg-config \
-            curl
-
-      - uses: actions/cache@v4
-        if: matrix.target == 'aarch64-unknown-linux-musl'
-        with:
-          path: ~/.musl-aarch64
-          key: musl-toolchain-aarch64-v1
-
-      - name: Install aarch64 musl toolchain
-        if: matrix.target == 'aarch64-unknown-linux-musl'
-        shell: bash
-        run: |
-          set -euo pipefail
-
-          TOOLCHAIN_DIR="$HOME/.musl-aarch64"
-          ARCHIVE="aarch64-linux-musl-cross.tgz"
-          URL="https://github.com/telemt/telemt/releases/download/toolchains/${ARCHIVE}"
-
-          if [ -x "${TOOLCHAIN_DIR}/bin/aarch64-linux-musl-gcc" ]; then
-            echo "MUSL toolchain cached"
-          else
-            curl -fL \
-              --retry 5 \
-              --retry-delay 3 \
-              --connect-timeout 10 \
-              --max-time 120 \
-              -o "${ARCHIVE}" "${URL}"
-
-            mkdir -p "${TOOLCHAIN_DIR}"
-            tar -xzf "${ARCHIVE}" --strip-components=1 -C "${TOOLCHAIN_DIR}"
-          fi
-
-          echo "${TOOLCHAIN_DIR}/bin" >> "${GITHUB_PATH}"
-
-      - name: Add rust target
-        run: rustup target add ${{ matrix.target }}
-
-      - uses: actions/cache@v4
-        with:
-          path: |
-            /usr/local/cargo/registry
-            /usr/local/cargo/git
-            target
-          key: musl-${{ matrix.asset }}-${{ hashFiles('**/Cargo.lock') }}
-          restore-keys: |
-            musl-${{ matrix.asset }}-
-            musl-
-
-      - name: Build
-        shell: bash
-        run: |
-          set -euo pipefail
-
-          if [ "${{ matrix.target }}" = "aarch64-unknown-linux-musl" ]; then
-            export CC=aarch64-linux-musl-gcc
-            export CC_aarch64_unknown_linux_musl=aarch64-linux-musl-gcc
-            export RUSTFLAGS="-C target-feature=+crt-static -C linker=aarch64-linux-musl-gcc -C lto=fat -C panic=abort"
-          else
-            export CC=musl-gcc
-            export CC_x86_64_unknown_linux_musl=musl-gcc
-
-            if [ "${{ matrix.cpu }}" = "v3" ]; then
-              CPU_FLAGS="-C target-cpu=x86-64-v3"
-            else
-              CPU_FLAGS="-C target-cpu=x86-64"
-            fi
-
-            export RUSTFLAGS="-C target-feature=+crt-static -C lto=fat -C panic=abort ${CPU_FLAGS}"
-          fi
-
-          cargo build --release --target ${{ matrix.target }} -j "$(nproc)"
-
-      - name: Package
-        shell: bash
-        run: |
-          set -euo pipefail
-
-          mkdir -p dist
-          cp "target/${{ matrix.target }}/release/${{ env.BINARY_NAME }}" dist/telemt
-
-          if [ "${{ matrix.target }}" = "aarch64-unknown-linux-musl" ]; then
-            STRIP_BIN=aarch64-linux-musl-strip
-          else
-            STRIP_BIN=strip
-          fi
-
-          "${STRIP_BIN}" dist/telemt
-
-          cd dist
-          tar -czf "${{ matrix.asset }}.tar.gz" \
-            --owner=0 --group=0 --numeric-owner \
-            telemt
-
-          sha256sum "${{ matrix.asset }}.tar.gz" > "${{ matrix.asset }}.tar.gz.sha256"
-
-      - uses: actions/upload-artifact@v4
-        with:
-          name: ${{ matrix.asset }}
-          path: dist/*
-
-  # ==========================
-  # Release
-  # ==========================
-  release:
-    name: Release
-    runs-on: ubuntu-latest
-    needs: [prepare, build-gnu, build-musl]
-
-    permissions:
-      contents: write
-
-    steps:
-      - uses: actions/download-artifact@v4
-        with:
-          path: artifacts
-
-      - name: Flatten artifacts
-        shell: bash
-        run: |
-          set -euo pipefail
-          mkdir -p dist
-          find artifacts -type f -exec cp {} dist/ \;
-
-      - name: Create GitHub Release
-        uses: softprops/action-gh-release@v2
-        with:
-          tag_name: ${{ needs.prepare.outputs.version }}
-          target_commitish: ${{ github.sha }}
-          files: dist/*
-          generate_release_notes: true
-          prerelease: ${{ needs.prepare.outputs.prerelease == 'true' }}
-          overwrite_files: true
-
-  # ==========================
-  # Docker
-  # ==========================
-  docker:
-    name: Docker
+  build-docker-image:
+    needs: build
     runs-on: ubuntu-latest
-    needs: [prepare, release]
-
     permissions:
       contents: read
       packages: write
@@ -356,66 +92,48 @@ jobs:
       - uses: actions/checkout@v4
 
       - uses: docker/setup-qemu-action@v3
 
       - uses: docker/setup-buildx-action@v3
 
-      - uses: docker/login-action@v3
+      - name: Login to GHCR
+        uses: docker/login-action@v3
         with:
           registry: ghcr.io
           username: ${{ github.actor }}
           password: ${{ secrets.GITHUB_TOKEN }}
 
-      - name: Probe release assets
-        shell: bash
-        env:
-          VERSION: ${{ needs.prepare.outputs.version }}
-        run: |
-          set -euo pipefail
-
-          for asset in \
-            telemt-x86_64-linux-musl.tar.gz \
-            telemt-x86_64-linux-musl.tar.gz.sha256 \
-            telemt-aarch64-linux-musl.tar.gz \
-            telemt-aarch64-linux-musl.tar.gz.sha256
-          do
-            curl -fsIL \
-              --retry 10 \
-              --retry-delay 3 \
-              "https://github.com/${GITHUB_REPOSITORY}/releases/download/${VERSION}/${asset}" \
-              > /dev/null
-          done
-
-      - name: Compute image tags
-        id: meta
-        shell: bash
-        env:
-          VERSION: ${{ needs.prepare.outputs.version }}
-        run: |
-          set -euo pipefail
-
-          IMAGE="$(echo "ghcr.io/${GITHUB_REPOSITORY}" | tr '[:upper:]' '[:lower:]')"
-          TAGS="${IMAGE}:${VERSION}"
-
-          if [[ "${VERSION}" != *-* ]]; then
-            TAGS="${TAGS}"$'\n'"${IMAGE}:latest"
-          fi
-
-          {
-            echo "tags<<EOF"
-            printf '%s\n' "${TAGS}"
-            echo "EOF"
-          } >> "${GITHUB_OUTPUT}"
-
-      - name: Build & Push
+      - name: Extract version
+        id: vars
+        run: echo "VERSION=${GITHUB_REF#refs/tags/}" >> $GITHUB_OUTPUT
+
+      - name: Build and push
         uses: docker/build-push-action@v6
         with:
           context: .
           push: true
-          pull: true
-          platforms: linux/amd64,linux/arm64
-          tags: ${{ steps.meta.outputs.tags }}
-          build-args: |
-            TELEMT_REPOSITORY=${{ github.repository }}
-            TELEMT_VERSION=${{ needs.prepare.outputs.version }}
-          cache-from: type=gha
-          cache-to: type=gha,mode=max
+          tags: |
+            ghcr.io/${{ github.repository }}:${{ steps.vars.outputs.VERSION }}
+            ghcr.io/${{ github.repository }}:latest
+
+  release:
+    name: Create Release
+    needs: build
+    runs-on: ubuntu-latest
+    permissions:
+      contents: write
+
+    steps:
+      - uses: actions/checkout@v4
+        with:
+          fetch-depth: 0
+
+      - uses: actions/download-artifact@v4
+        with:
+          path: artifacts
+
+      - name: Create Release
+        uses: softprops/action-gh-release@v2
+        with:
+          files: artifacts/**/*
+          generate_release_notes: true
+          draft: false
+          prerelease: ${{ contains(github.ref, '-rc') || contains(github.ref, '-beta') || contains(github.ref, '-alpha') }}
@@ -0,0 +1,54 @@

name: Rust

on:
  push:
    branches: [ "*" ]
  pull_request:
    branches: [ "*" ]

env:
  CARGO_TERM_COLOR: always

jobs:
  build:
    name: Build
    runs-on: ubuntu-latest

    permissions:
      contents: read
      actions: write
      checks: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Install latest stable Rust toolchain
        uses: dtolnay/rust-toolchain@stable
        with:
          components: rustfmt, clippy

      - name: Cache cargo registry & build artifacts
        uses: actions/cache@v4
        with:
          path: |
            ~/.cargo/registry
            ~/.cargo/git
            target
          key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
          restore-keys: |
            ${{ runner.os }}-cargo-

      - name: Build Release
        run: cargo build --release --verbose

      - name: Run tests
        run: cargo test --verbose

      # clippy does not fail on warnings because telemt is under active
      # development and still produces many of them
      - name: Run clippy
        run: cargo clippy -- --cap-lints warn

      - name: Check for unused dependencies
        run: cargo udeps || true
@@ -1,139 +0,0 @@

name: Check

on:
  push:
    branches: [ "*" ]
  pull_request:
    branches: [ "*" ]

env:
  CARGO_TERM_COLOR: always

concurrency:
  group: test-${{ github.ref }}
  cancel-in-progress: true

jobs:
  # ==========================
  # Formatting
  # ==========================
  fmt:
    name: Fmt
    runs-on: ubuntu-latest

    permissions:
      contents: read

    steps:
      - uses: actions/checkout@v4

      - uses: dtolnay/rust-toolchain@stable
        with:
          components: rustfmt

      - run: cargo fmt -- --check

  # ==========================
  # Tests
  # ==========================
  test:
    name: Test
    runs-on: ubuntu-latest

    permissions:
      contents: read
      actions: write
      checks: write

    steps:
      - uses: actions/checkout@v4

      - uses: dtolnay/rust-toolchain@stable

      - name: Cache cargo
        uses: actions/cache@v4
        with:
          path: |
            ~/.cargo/bin
            ~/.cargo/registry
            ~/.cargo/git
            target
          key: ${{ runner.os }}-cargo-nextest-${{ hashFiles('**/Cargo.lock') }}
          restore-keys: |
            ${{ runner.os }}-cargo-nextest-
            ${{ runner.os }}-cargo-

      - name: Install cargo-nextest
        run: cargo install --locked cargo-nextest || true

      - name: Run tests with nextest
        run: cargo nextest run -j "$(nproc)"

  # ==========================
  # Clippy
  # ==========================
  clippy:
    name: Clippy
    runs-on: ubuntu-latest

    permissions:
      contents: read
      checks: write

    steps:
      - uses: actions/checkout@v4

      - uses: dtolnay/rust-toolchain@stable
        with:
          components: clippy

      - name: Cache cargo
        uses: actions/cache@v4
        with:
          path: |
            ~/.cargo/registry
            ~/.cargo/git
            target
          key: ${{ runner.os }}-cargo-clippy-${{ hashFiles('**/Cargo.lock') }}
          restore-keys: |
            ${{ runner.os }}-cargo-clippy-
            ${{ runner.os }}-cargo-

      - name: Run clippy
        run: cargo clippy -j "$(nproc)" -- --cap-lints warn

  # ==========================
  # Udeps
  # ==========================
  udeps:
    name: Udeps
    runs-on: ubuntu-latest

    permissions:
      contents: read

    steps:
      - uses: actions/checkout@v4

      - uses: dtolnay/rust-toolchain@stable
        with:
          components: rust-src

      - name: Cache cargo
        uses: actions/cache@v4
        with:
          path: |
            ~/.cargo/bin
            ~/.cargo/registry
            ~/.cargo/git
            target
          key: ${{ runner.os }}-cargo-udeps-${{ hashFiles('**/Cargo.lock') }}
          restore-keys: |
            ${{ runner.os }}-cargo-udeps-
            ${{ runner.os }}-cargo-

      - name: Install cargo-udeps
        run: cargo install --locked cargo-udeps || true

      - name: Run udeps
        run: cargo udeps -j "$(nproc)" || true
@@ -21,4 +21,3 @@ target
 #.idea/
 
 proxy-secret
-coverage-html/
@ -0,0 +1,58 @@
|
||||||
|
# Architect Mode Rules for Telemt
|
||||||
|
|
||||||
|
## Architecture Overview
|
||||||
|
|
||||||
|
```mermaid
|
||||||
|
graph TB
|
||||||
|
subgraph Entry
|
||||||
|
Client[Clients] --> Listener[TCP/Unix Listener]
|
||||||
|
end
|
||||||
|
|
||||||
|
subgraph Proxy Layer
|
||||||
|
Listener --> ClientHandler[ClientHandler]
|
||||||
|
ClientHandler --> Handshake[Handshake Validator]
|
||||||
|
Handshake --> |Valid| Relay[Relay Layer]
|
||||||
|
Handshake --> |Invalid| Masking[Masking/TLS Fronting]
|
||||||
|
end
|
||||||
|
|
||||||
|
subgraph Transport
|
||||||
|
Relay --> MiddleProxy[Middle-End Proxy Pool]
|
||||||
|
Relay --> DirectRelay[Direct DC Relay]
|
||||||
|
MiddleProxy --> TelegramDC[Telegram DCs]
|
||||||
|
DirectRelay --> TelegramDC
|
||||||
|
end
|
||||||
|
```
|
||||||
|
|
||||||
|
## Module Dependencies
|
||||||
|
- [`src/main.rs`](src/main.rs) - Entry point, spawns all async tasks
|
||||||
|
- [`src/config/`](src/config/) - Configuration loading with auto-migration
|
||||||
|
- [`src/error.rs`](src/error.rs) - Error types, must be used by all modules
|
||||||
|
- [`src/crypto/`](src/crypto/) - AES, SHA, random number generation
|
||||||
|
- [`src/protocol/`](src/protocol/) - MTProto constants, frame encoding, obfuscation
|
||||||
|
- [`src/stream/`](src/stream/) - Stream wrappers, buffer pool, frame codecs
|
||||||
|
- [`src/proxy/`](src/proxy/) - Client handling, handshake, relay logic
|
||||||
|
- [`src/transport/`](src/transport/) - Upstream management, middle-proxy, SOCKS support
|
||||||
|
- [`src/stats/`](src/stats/) - Statistics and replay protection
|
||||||
|
- [`src/ip_tracker.rs`](src/ip_tracker.rs) - Per-user IP tracking
|
||||||
|
|
||||||
|
## Key Architectural Constraints
|
||||||
|
|
||||||
|
### Middle-End Proxy Mode
|
||||||
|
- Requires public IP on interface OR 1:1 NAT with STUN probing
|
||||||
|
- Uses separate `proxy-secret` from Telegram (NOT user secrets)
|
||||||
|
- Falls back to direct mode automatically on STUN mismatch
|
||||||
|
|
||||||
|
### TLS Fronting
|
||||||
|
- Invalid handshakes are transparently proxied to `mask_host`
|
||||||
|
- This is critical for DPI evasion - do not change this behavior
|
||||||
|
- `mask_unix_sock` and `mask_host` are mutually exclusive
|
||||||
|
|
||||||
|
### Stream Architecture
|
||||||
|
- Buffer pool is shared globally via Arc - prevents allocation storms
|
||||||
|
- Frame codecs implement tokio-util Encoder/Decoder traits
|
||||||
|
- State machine in [`src/stream/state.rs`](src/stream/state.rs) manages stream transitions
|
||||||
|
|
||||||
|
### Configuration Migration
|
||||||
|
- [`ProxyConfig::load()`](src/config/mod.rs:641) mutates config in-place
|
||||||
|
- New fields must have sensible defaults
|
||||||
|
- DC203 override is auto-injected for CDN/media support
|
||||||
|
|
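The configuration-migration rules above (fill missing fields with sensible defaults, auto-inject the DC 203 override in place) can be sketched roughly as follows. This is a minimal illustration only: the `Config` struct, its field names, and the DC 203 address are hypothetical stand-ins, not Telemt's real `ProxyConfig` API.

```rust
use std::collections::HashMap;

#[derive(Debug)]
struct Config {
    prefer_ipv6: Option<bool>,          // newer field: may be absent in old config files
    dc_overrides: HashMap<i32, String>, // DC id -> address override
}

impl Config {
    /// Mutates the config in place: missing values get conservative defaults,
    /// and a DC 203 (CDN/media) override is injected if the user set none.
    fn migrate(&mut self) {
        if self.prefer_ipv6.is_none() {
            self.prefer_ipv6 = Some(false); // conservative default
        }
        // Auto-inject a DC 203 entry if missing (address is a placeholder).
        self.dc_overrides
            .entry(203)
            .or_insert_with(|| "203.example.invalid:443".to_string());
    }
}

fn main() {
    let mut cfg = Config { prefer_ipv6: None, dc_overrides: HashMap::new() };
    cfg.migrate();
    assert_eq!(cfg.prefer_ipv6, Some(false));
    assert!(cfg.dc_overrides.contains_key(&203));
}
```

The key design point is that migration never fails on an old file: every new field must be derivable from nothing.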
@@ -0,0 +1,23 @@
# Code Mode Rules for Telemt

## Error Handling
- Always use [`ProxyError`](src/error.rs:168) from [`src/error.rs`](src/error.rs) for proxy operations
- [`HandshakeResult<T,R,W>`](src/error.rs:292) returns streams on a bad client - these MUST be returned for masking, never dropped
- Use the [`Recoverable`](src/error.rs:110) trait to check whether errors are retryable

## Configuration Changes
- [`ProxyConfig::load()`](src/config/mod.rs:641) auto-mutates the config - new fields should have defaults
- The DC203 override is auto-injected if missing - do not remove this behavior
- When adding config fields, add migration logic in [`ProxyConfig::load()`](src/config/mod.rs:641)

## Crypto Code
- [`SecureRandom`](src/crypto/random.rs) from [`src/crypto/random.rs`](src/crypto/random.rs) must be used for all crypto operations
- Never use `rand::thread_rng()` directly - use the shared `Arc<SecureRandom>`

## Stream Handling
- The buffer pool [`BufferPool`](src/stream/buffer_pool.rs) is shared via Arc - always use it instead of allocating
- Frame codecs in [`src/stream/frame_codec.rs`](src/stream/frame_codec.rs) implement tokio-util's Encoder/Decoder traits

## Testing
- Tests are inline in modules using `#[cfg(test)]`
- Use `cargo test --lib <module_name>` to run tests for specific modules
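The "check if errors are retryable" rule above can be sketched as a small retry loop. The trait shape and error variants below are assumptions for illustration, not the real `src/error.rs` definitions: a `Recoverable`-style trait classifies errors, and the caller retries only recoverable ones.

```rust
#[derive(Debug, PartialEq)]
enum ProxyError {
    TgHandshakeTimeout, // transient: worth retrying
    ReplayAttack,       // fatal: never retry
}

trait Recoverable {
    fn is_recoverable(&self) -> bool;
}

impl Recoverable for ProxyError {
    fn is_recoverable(&self) -> bool {
        matches!(self, ProxyError::TgHandshakeTimeout)
    }
}

/// Retry `op` up to `max_attempts` times, but only while the error
/// reports itself as recoverable; fatal errors propagate immediately.
fn retry<T>(
    max_attempts: u32,
    mut op: impl FnMut() -> Result<T, ProxyError>,
) -> Result<T, ProxyError> {
    let mut attempt = 0;
    loop {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) if e.is_recoverable() && attempt + 1 < max_attempts => attempt += 1,
            Err(e) => return Err(e),
        }
    }
}

fn main() {
    // Fails twice with a recoverable error, then succeeds on the third try.
    let mut calls = 0;
    let result = retry(5, || {
        calls += 1;
        if calls < 3 { Err(ProxyError::TgHandshakeTimeout) } else { Ok("connected") }
    });
    assert_eq!(result, Ok("connected"));
    assert_eq!(calls, 3);
}
```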
@@ -0,0 +1,27 @@
# Debug Mode Rules for Telemt

## Logging
- The `RUST_LOG` environment variable takes absolute priority over all config log levels
- Log levels: `trace`, `debug`, `info`, `warn`, `error`
- Use `RUST_LOG=debug cargo run` for detailed operational logs
- Use `RUST_LOG=trace cargo run` for full protocol-level debugging

## Middle-End Proxy Debugging
- Set the `ME_DIAG=1` environment variable for high-precision cryptography diagnostics
- STUN probe results are logged at startup - check for a mismatch between local and reflected IP
- If Middle-End fails, check that `proxy_secret_path` points to a valid file from https://core.telegram.org/getProxySecret

## Connection Issues
- DC connectivity is logged at startup with RTT measurements
- If DC ping fails, check `dc_overrides` for custom addresses
- Use `prefer_ipv6=false` in the config if IPv6 is unreliable

## TLS Fronting Issues
- Invalid handshakes are proxied to `mask_host` - check that this host is reachable
- `mask_unix_sock` and `mask_host` are mutually exclusive - only one can be set
- If `mask_unix_sock` is set, the socket must exist before connections arrive

## Common Errors
- `ReplayAttack` - client replayed a handshake nonce, potential attack
- `TimeSkew` - client clock is off; can be disabled with `ignore_time_skew=true`
- `TgHandshakeTimeout` - upstream DC connection failed, check the network
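The "RUST_LOG takes absolute priority" rule above boils down to a one-line precedence check. The function below is a hypothetical sketch, not Telemt's actual logging setup: in the real binary the first argument would come from `std::env::var("RUST_LOG").ok()`.

```rust
/// Effective log filter: a present RUST_LOG value wins unconditionally;
/// the configured level is only a fallback.
fn effective_log_filter(rust_log: Option<&str>, config_level: &str) -> String {
    rust_log.unwrap_or(config_level).to_string()
}

fn main() {
    // With no RUST_LOG set, the config level applies.
    assert_eq!(effective_log_filter(None, "info"), "info");
    // When RUST_LOG is present, it overrides the config entirely.
    assert_eq!(effective_log_filter(Some("trace"), "info"), "trace");
}
```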
AGENTS.md
@@ -5,22 +5,6 @@ Your responses are precise, minimal, and architecturally sound. You are working
 
 ---
 
-### Context: The Telemt Project
-
-You are working on **Telemt**, a high-performance, production-grade Telegram MTProxy implementation written in Rust. It is explicitly designed to operate in highly hostile network environments and evade advanced network censorship.
-
-**Adversarial Threat Model:**
-The proxy operates under constant surveillance by DPI (Deep Packet Inspection) systems and active scanners (state firewalls, mobile operator fraud controls). These entities actively probe IPs, analyze protocol handshakes, and look for known proxy signatures to block or throttle traffic.
-
-**Core Architectural Pillars:**
-1. **TLS-Fronting (TLS-F) & TCP-Splitting (TCP-S):** To the outside world, Telemt looks like a standard TLS server. If a client presents a valid MTProxy key, the connection is handled internally. If a censor's scanner, web browser, or unauthorized crawler connects, Telemt seamlessly splices the TCP connection (L4) to a real, legitimate HTTPS fallback server (e.g., Nginx) without modifying the `ClientHello` or terminating the TLS handshake.
-2. **Middle-End (ME) Orchestration:** A highly concurrent, generation-based pool managing upstream connections to Telegram Datacenters (DCs). It utilizes an **Adaptive Floor** (dynamically scaling writer connections based on traffic), **Hardswaps** (zero-downtime pool reconfiguration), and **STUN/NAT** reflection mechanisms.
-3. **Strict KDF Routing:** Cryptographic Key Derivation Functions (KDF) in this protocol strictly rely on the exact pairing of Source IP/Port and Destination IP/Port. Deviations or missing port logic will silently break the MTProto handshake.
-4. **Data Plane vs. Control Plane Isolation:** The Data Plane (readers, writers, payload relay, TCP splicing) must remain strictly non-blocking, zero-allocation in hot paths, and highly resilient to network backpressure. The Control Plane (API, metrics, pool generation swaps, config reloads) orchestrates state asynchronously without stalling the Data Plane.
-
-Any modification you make must preserve Telemt's invisibility to censors, its strict memory-safety invariants, and its hot-path throughput.
-
 ### 0. Priority Resolution — Scope Control
 
 This section resolves conflicts between code quality enforcement and scope limitation.
 
@@ -390,12 +374,6 @@ you MUST explain why existing invariants remain valid.
 - Do not modify existing tests unless the task explicitly requires it.
 - Do not weaken assertions.
 - Preserve determinism in testable components.
-- Bug-first forces the discipline of proving you understand a bug before you fix it. Tests written after a fix almost always pass trivially and catch nothing new.
-- Invariants over scenarios is the core shift. The route_mode table alone would have caught both BUG-1 and BUG-2 before they were written — "snapshot equals watch state after any transition burst" is a two-line property test that fails immediately on the current diverged-atomics code.
-- Differential/model testing catches logic drift over time.
-- Scheduler pressure is specifically aimed at the concurrent state bugs that keep reappearing. A single-threaded happy-path test of set_mode will never find subtle bugs; 10,000 concurrent calls will find them on the first run.
-- The mutation gate answers your original complaint directly: it measures test power. If you can remove a bounds check and nothing breaks, the suite isn't covering that branch yet — it just says so explicitly.
-- A dead parameter is a code-smell rule.
 
 ### 15. Security Constraints
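The "Strict KDF Routing" pillar above (keys depend on the exact Source IP/Port and Destination IP/Port pairing) can be illustrated with a toy derivation. This is an illustration ONLY: `DefaultHasher` stands in for the real MTProto key-derivation hash and the key layout is invented; the point is merely that dropping or changing a port yields a different key, which is exactly how missing port logic silently breaks the handshake.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::net::SocketAddr;

/// Toy KDF: hashes the shared secret together with the full 4-tuple.
/// Order matters: source IP/port first, then destination IP/port.
fn derive_key(src: SocketAddr, dst: SocketAddr, secret: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    secret.hash(&mut h);
    src.ip().hash(&mut h);
    src.port().hash(&mut h);
    dst.ip().hash(&mut h);
    dst.port().hash(&mut h);
    h.finish()
}

fn main() {
    let secret = b"proxy-secret";
    let src: SocketAddr = "10.0.0.1:51000".parse().unwrap();
    let dst: SocketAddr = "149.154.161.1:8888".parse().unwrap();

    // Same 4-tuple -> same key (deterministic derivation).
    let k1 = derive_key(src, dst, secret);
    assert_eq!(k1, derive_key(src, dst, secret));

    // Changing only the source port changes the key.
    let src2: SocketAddr = "10.0.0.1:51001".parse().unwrap();
    assert_ne!(k1, derive_key(src2, dst, secret));
}
```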
@@ -1,212 +0,0 @@
# Code of Conduct

## Purpose

**Telemt exists to solve technical problems.**

Telemt is open to contributors who want to learn, improve, and build meaningful systems together.

It is a place for building, testing, reasoning, documenting, and improving systems.

Discussions that advance this work are in scope. Discussions that divert it are not.

Technology has consequences. Responsibility is inherent.

> **Zweck bestimmt die Form.**
> Purpose defines form.

---

## Principles

* **Technical over emotional**
  Arguments are grounded in data, logs, reproducible cases, or clear reasoning.
* **Clarity over noise**
  Communication is structured, concise, and relevant.
* **Openness with standards**
  Participation is open. The work remains disciplined.
* **Independence of judgment**
  Claims are evaluated on technical merit, not affiliation or posture.
* **Responsibility over capability**
  Capability does not justify careless use.
* **Cooperation over friction**
  Progress depends on coordination, mutual support, and honest review.
* **Good intent, rigorous method**
  Assume good intent, but require rigor.

> **Aussagen gelten nach ihrer Begründung.**
> Claims are weighed by evidence.

---

## Expected Behavior

Participants are expected to:

* Communicate directly and respectfully
* Support claims with evidence
* Stay within technical scope
* Accept critique and provide it constructively
* Reduce noise, duplication, and ambiguity
* Help others reach correct and reproducible outcomes
* Act in a way that improves the system as a whole

Precision is learned.

New contributors are welcome. They are expected to grow into these standards. Existing contributors are expected to make that growth possible.

> **Wer behauptet, belegt.**
> Whoever claims, proves.

---

## Unacceptable Behavior

The following is not allowed:

* Personal attacks, insults, harassment, or intimidation
* Repeatedly derailing discussion away from Telemt's purpose
* Spam, flooding, or repeated low-quality input
* Misinformation presented as fact
* Attempts to degrade, destabilize, or exhaust Telemt or its participants
* Use of Telemt or its spaces to enable harm

Telemt is not a venue for disputes that displace technical work.
Such discussions may be closed, removed, or redirected.

> **Störung ist kein Beitrag.**
> Disruption is not contribution.

---

## Security and Misuse

Telemt is intended for responsible use.

* Do not use it to plan, coordinate, or execute harm
* Do not publish vulnerabilities without responsible disclosure
* Report security issues privately where possible

Security is both technical and behavioral.

> **Verantwortung endet nicht am Code.**
> Responsibility does not end at the code.

---

## Openness

Telemt is open to contributors of different backgrounds, experience levels, and working styles.

- Standards are public, legible, and applied to the work itself.
- Questions are welcome. Careful disagreement is welcome. Honest correction is welcome.
- Gatekeeping by obscurity, status signaling, or hostility is not.

---

## Scope

This Code of Conduct applies to all official spaces:

* Source repositories (issues, pull requests, discussions)
* Documentation
* Communication channels associated with Telemt

---

## Maintainer Stewardship

Maintainers are responsible for final decisions in matters of conduct, scope, and direction.

This responsibility is stewardship:
- preserving continuity,
- protecting signal,
- maintaining standards,
- keeping Telemt workable for others.

Judgment should be exercised with restraint, consistency, and institutional responsibility.
- Not every decision requires extended debate.
- Not every intervention requires public explanation.

All decisions are expected to serve the durability, clarity, and integrity of Telemt.

> **Ordnung ist Voraussetzung der Funktion.**
> Order is the precondition of function.

---

## Enforcement

Maintainers may act to preserve the integrity of Telemt, including by:

* Removing content
* Locking discussions
* Rejecting contributions
* Restricting or banning participants

Actions are taken to maintain function, continuity, and signal quality.
- Where possible, correction is preferred to exclusion.
- Where necessary, exclusion is preferred to decay.

---

## Final

Telemt is built on discipline, structure, and shared intent.
- Signal over noise.
- Facts over opinion.
- Systems over rhetoric.

- Work is collective.
- Outcomes are shared.
- Responsibility is distributed.

- Precision is learned.
- Rigor is expected.
- Help is part of the work.

> **Ordnung ist Voraussetzung der Freiheit.**
> Order is the precondition of freedom.

- If you contribute — contribute with care.
- If you speak — speak with substance.
- If you engage — engage constructively.

---

## After All

Systems outlive intentions.
- What is built will be used.
- What is released will propagate.
- What is maintained will define the future state.

There is no neutral infrastructure, only infrastructure shaped well or poorly.

> **Jedes System trägt Verantwortung.**
> Every system carries responsibility.

- Stability requires discipline.
- Freedom requires structure.
- Trust requires honesty.

In the end: the system reflects its contributors.
@@ -1,82 +1,19 @@
-# Issues
+# Issues - Rules
 
-## Warning
-Before opening an Issue: if it is more a question than a problem or bug, ask about it [in our chat](https://t.me/telemtrs).
-
 ## What it is not
 - NOT Question and Answer
 - NOT Helpdesk
 
-***Each of your Issues triggers attempts to reproduce and analyze the problem, which is done manually by people.***
-
----
-
-# Pull Requests
+# Pull Requests - Rules
 
 ## General
 - ONLY signed and verified commits
 - ONLY from your name
-- DO NOT commit with `codex`, `claude`, or other AI tools as author/committer
+- DO NOT commit with `codex` or `claude` as author/committer
 - PREFER `flow` branch for development, not `main`
 
----
+## AI
+We are not against modern tools, like AI, where you act as a principal or architect, but we consider it important that:
+
+- you really understand what you're doing
+- you understand the relationships and dependencies of the components being modified
+- you understand the architecture of Telegram MTProto, MTProxy, and Middle-End KDF at least generically
+- you DO NOT commit for the sake of commits, but to help the community, core developers, and ordinary users
-
-## Definition of Ready (MANDATORY)
-
-A Pull Request WILL be ignored or closed if:
-
-- it does NOT build
-- it does NOT pass tests
-- it does NOT follow formatting rules
-- it contains unrelated or excessive changes
-- the author cannot clearly explain the change
-
----
-
-## Blessed Principles
-- PR must build
-- PR must pass tests
-- PR must be understood by its author
-
----
-
-## AI Usage Policy
-
-AI tools (Claude, ChatGPT, Codex, DeepSeek, etc.) are allowed as **assistants**, NOT as decision-makers.
-
-By submitting a PR, you confirm that:
-
-- you fully understand the code you submit
-- you verified correctness manually
-- you reviewed architecture and dependencies
-- you take full responsibility for the change
-
-AI-generated code is treated as a **draft** and must be validated like any other external contribution.
-
-PRs that look like unverified AI dumps WILL be closed.
-
----
-
-## Maintainer Policy
-
-Maintainers reserve the right to:
-
-- close PRs that do not meet basic quality requirements
-- request explanations before review
-- ignore low-effort contributions
-
-Respect the reviewers' time.
-
----
-
-## Enforcement
-
-Pull Requests that violate project standards may be closed without review.
-
-This includes (but is not limited to):
-
-- non-building code
-- failing tests
-- unverified or low-effort changes
-- inability to explain the change
-
-These actions follow the Code of Conduct and are intended to preserve signal, quality, and Telemt's integrity.
File diff suppressed because it is too large

Cargo.toml
@@ -1,11 +1,8 @@
 [package]
 name = "telemt"
-version = "3.3.38"
+version = "3.3.17"
 edition = "2024"
 
-[features]
-redteam_offline_expected_fail = []
-
 [dependencies]
 # C
 libc = "0.2"
@@ -25,37 +22,26 @@ hmac = "0.12"
 crc32fast = "1.4"
 crc32c = "0.6"
 zeroize = { version = "1.8", features = ["derive"] }
-subtle = "2.6"
-static_assertions = "1.1"
 
 # Network
-socket2 = { version = "0.6", features = ["all"] }
+socket2 = { version = "0.5", features = ["all"] }
-nix = { version = "0.31", default-features = false, features = [
-    "net",
-    "user",
-    "process",
-    "fs",
-    "signal",
-] }
+nix = { version = "0.28", default-features = false, features = ["net"] }
-shadowsocks = { version = "1.24", features = ["aead-cipher-2022"] }
 
 # Serialization
 serde = { version = "1.0", features = ["derive"] }
 serde_json = "1.0"
-toml = "1.0"
+toml = "0.8"
-x509-parser = "0.18"
+x509-parser = "0.15"
 
 # Utils
 bytes = "1.9"
 thiserror = "2.0"
 tracing = "0.1"
 tracing-subscriber = { version = "0.3", features = ["env-filter"] }
-tracing-appender = "0.2"
 parking_lot = "0.12"
-dashmap = "6.1"
+dashmap = "5.5"
-arc-swap = "1.7"
 lru = "0.16"
-rand = "0.10"
+rand = "0.9"
 chrono = { version = "0.4", features = ["serde"] }
 hex = "0.4"
 base64 = "0.22"
@@ -64,30 +50,23 @@ regex = "1.11"
 crossbeam-queue = "0.3"
 num-bigint = "0.4"
 num-traits = "0.2"
-x25519-dalek = "2"
 anyhow = "1.0"
 
 # HTTP
-reqwest = { version = "0.13", features = ["rustls"], default-features = false }
+reqwest = { version = "0.12", features = ["rustls-tls"], default-features = false }
-notify = "8.2"
+notify = { version = "6", features = ["macos_fsevent"] }
-ipnetwork = { version = "0.21", features = ["serde"] }
+ipnetwork = "0.20"
 hyper = { version = "1", features = ["server", "http1"] }
 hyper-util = { version = "0.1", features = ["tokio", "server-auto"] }
 http-body-util = "0.1"
 httpdate = "1.0"
-tokio-rustls = { version = "0.26", default-features = false, features = [
-    "tls12",
-] }
+tokio-rustls = { version = "0.26", default-features = false, features = ["tls12"] }
-rustls = { version = "0.23", default-features = false, features = [
-    "std",
-    "tls12",
-    "ring",
-] }
+rustls = { version = "0.23", default-features = false, features = ["std", "tls12", "ring"] }
-webpki-roots = "1.0"
+webpki-roots = "0.26"
 
 [dev-dependencies]
 tokio-test = "0.4"
-criterion = "0.8"
+criterion = "0.5"
 proptest = "1.4"
 futures = "0.3"
 
@@ -96,6 +75,4 @@ name = "crypto_bench"
 harness = false
 
 [profile.release]
-lto = "fat"
+lto = "thin"
-codegen-units = 1
Dockerfile
@@ -1,98 +1,43 @@
-# syntax=docker/dockerfile:1
+# ==========================
+# Stage 1: Build
+# ==========================
+FROM rust:1.88-slim-bookworm AS builder
 
-ARG TELEMT_REPOSITORY=telemt/telemt
-ARG TELEMT_VERSION=latest
+RUN apt-get update && apt-get install -y --no-install-recommends \
+    pkg-config \
+    && rm -rf /var/lib/apt/lists/*
 
+WORKDIR /build
 
+COPY Cargo.toml Cargo.lock* ./
+RUN mkdir src && echo 'fn main() {}' > src/main.rs && \
+    cargo build --release 2>/dev/null || true && \
+    rm -rf src
 
+COPY . .
+RUN cargo build --release && strip target/release/telemt
 
 # ==========================
-# Minimal Image
+# Stage 2: Runtime
 # ==========================
-FROM debian:12-slim AS minimal
+FROM debian:bookworm-slim
 
-ARG TARGETARCH
-ARG TELEMT_REPOSITORY
-ARG TELEMT_VERSION
-
-RUN set -eux; \
-    apt-get update; \
-    apt-get install -y --no-install-recommends \
-        binutils \
-        ca-certificates \
-        curl \
-        tar; \
-    rm -rf /var/lib/apt/lists/*
+RUN apt-get update && apt-get install -y --no-install-recommends \
+    ca-certificates \
+    && rm -rf /var/lib/apt/lists/*
 
-RUN set -eux; \
-    case "${TARGETARCH}" in \
-        amd64) ASSET="telemt-x86_64-linux-musl.tar.gz" ;; \
-        arm64) ASSET="telemt-aarch64-linux-musl.tar.gz" ;; \
-        *) echo "Unsupported TARGETARCH: ${TARGETARCH}" >&2; exit 1 ;; \
-    esac; \
-    VERSION="${TELEMT_VERSION#refs/tags/}"; \
-    if [ -z "${VERSION}" ] || [ "${VERSION}" = "latest" ]; then \
-        BASE_URL="https://github.com/${TELEMT_REPOSITORY}/releases/latest/download"; \
-    else \
-        BASE_URL="https://github.com/${TELEMT_REPOSITORY}/releases/download/${VERSION}"; \
-    fi; \
-    curl -fL \
-        --retry 5 \
-        --retry-delay 3 \
-        --connect-timeout 10 \
-        --max-time 120 \
-        -o "/tmp/${ASSET}" \
-        "${BASE_URL}/${ASSET}"; \
-    curl -fL \
-        --retry 5 \
-        --retry-delay 3 \
-        --connect-timeout 10 \
-        --max-time 120 \
-        -o "/tmp/${ASSET}.sha256" \
-        "${BASE_URL}/${ASSET}.sha256"; \
-    cd /tmp; \
-    sha256sum -c "${ASSET}.sha256"; \
-    tar -xzf "${ASSET}" -C /tmp; \
-    test -f /tmp/telemt; \
-    install -m 0755 /tmp/telemt /telemt; \
-    strip --strip-unneeded /telemt || true; \
-    rm -f "/tmp/${ASSET}" "/tmp/${ASSET}.sha256" /tmp/telemt
+RUN useradd -r -s /usr/sbin/nologin telemt
 
-# ==========================
-# Debug Image
-# ==========================
-FROM debian:12-slim AS debug
-
-RUN set -eux; \
-    apt-get update; \
-    apt-get install -y --no-install-recommends \
-        ca-certificates \
-        tzdata \
-        curl \
-        iproute2 \
-        busybox; \
-    rm -rf /var/lib/apt/lists/*
-
 WORKDIR /app
 
-COPY --from=minimal /telemt /app/telemt
+COPY --from=builder /build/target/release/telemt /app/telemt
 COPY config.toml /app/config.toml
 
-EXPOSE 443 9090 9091
-
-ENTRYPOINT ["/app/telemt"]
-CMD ["config.toml"]
-
-# ==========================
-# Production Distroless on MUSL
-# ==========================
-FROM gcr.io/distroless/static-debian12 AS prod
-
-WORKDIR /app
-
-COPY --from=minimal /telemt /app/telemt
-COPY config.toml /app/config.toml
-
-USER nonroot:nonroot
-
-EXPOSE 443 9090 9091
-
+RUN chown -R telemt:telemt /app
+USER telemt
+
+EXPOSE 443
+EXPOSE 9090
+
 ENTRYPOINT ["/app/telemt"]
 CMD ["config.toml"]
File diff suppressed because it is too large

LICENSE
@ -1,169 +0,0 @@
|
######## TELEMT LICENSE 3.3 #########
##### Copyright (c) 2026 Telemt #####

Permission is hereby granted, free of charge, to any person obtaining a copy
of this Software and associated documentation files (the "Software"),
to use, reproduce, modify, prepare derivative works of, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to permit
persons to whom the Software is furnished to do so, provided that all
copyright notices, license terms, and conditions set forth in this License
are preserved and complied with.

### Official Translations

The canonical version of this License is the English version.
Official translations are provided for informational purposes only
and for convenience, and do not have legal force. In case of any
discrepancy, the English version of this License shall prevail.

/----------------------------------------------------------\
| Language    | Location                                   |
|-------------|--------------------------------------------|
| English     | docs/LICENSE/TELEMT-LICENSE.en.md          |
| German      | docs/LICENSE/TELEMT-LICENSE.de.md          |
| Russian     | docs/LICENSE/TELEMT-LICENSE.ru.md          |
\----------------------------------------------------------/

### License Versioning Policy

This License is version 3 of the TELEMT Public License.
Each version of the Software is licensed under the License that
accompanies its corresponding source code distribution.

Future versions of the Software may be distributed under a different
version of the TELEMT Public License or under a different license,
as determined by the Telemt maintainers.

Any such change of license applies only to the versions of the
Software distributed with the new license and SHALL NOT retroactively
affect any previously released versions of the Software.

Recipients of the Software are granted rights only under the License
provided with the version of the Software they received.

Redistributions of the Software, including Modified Versions, MUST
preserve the copyright notices, license text, and conditions of this
License for all portions of the Software derived from Telemt.

Additional terms or licenses may be applied to modifications or
additional code added by a redistributor, provided that such terms
do not restrict or alter the rights granted under this License for
the original Telemt Software.

Nothing in this section limits the rights granted under this License
for versions of the Software already released.

### Definitions

For the purposes of this License:
- "Software" means the Telemt software, including source code, documentation,
  and any associated files distributed under this License.
- "Contributor" means any person or entity that submits code, patches,
  documentation, or other contributions to the Software that are accepted
  into the Software by the maintainers.
- "Contribution" means any work of authorship intentionally submitted
  to the Software for inclusion in the Software.
- "Modified Version" means any version of the Software that has been
  changed, adapted, extended, or otherwise modified from the original
  Software.
- "Maintainers" means the individuals or entities responsible for
  the official Telemt project and its releases.

#### 1 Attribution

Redistributions of the Software, in source or binary form, MUST RETAIN the
above copyright notice, this license text, and any existing attribution
notices.

#### 2 Modification Notice

If you modify the Software, you MUST clearly state that the Software has been
modified and include a brief description of the changes made.

Modified versions MUST NOT be presented as the original Telemt.

#### 3 Trademark and Branding

This license DOES NOT grant permission to use the name "Telemt",
the Telemt logo, or any Telemt trademarks or branding.

Redistributed or modified versions of the Software MAY NOT use the Telemt
name in a way that suggests endorsement or official origin without explicit
permission from the Telemt maintainers.

Use of the name "Telemt" to describe a modified version of the Software
is permitted only if the modified version is clearly identified as a
modified or unofficial version.

Any distribution that could reasonably confuse users into believing that
the software is an official Telemt release is prohibited.

#### 4 Binary Distribution Transparency

If you distribute compiled binaries of the Software,
you are ENCOURAGED to provide access to the corresponding
source code and build instructions where reasonably possible.

This helps preserve transparency and allows recipients to verify the
integrity and reproducibility of distributed builds.

#### 5 Patent Grant and Defensive Termination Clause

Each contributor grants you a perpetual, worldwide, non-exclusive,
no-charge, royalty-free, irrevocable patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Software.

This patent license applies only to those patent claims necessarily
infringed by the contributor’s contribution alone or by combination of
their contribution with the Software.

If you initiate or participate in any patent litigation, including
cross-claims or counterclaims, alleging that the Software or any
contribution incorporated within the Software constitutes patent
infringement, then **all rights granted to you under this license shall
terminate immediately** as of the date such litigation is filed.

Additionally, if you initiate legal action alleging that the
Software itself infringes your patent or other intellectual
property rights, then all rights granted to you under this
license SHALL TERMINATE automatically.

#### 6 Contributions

Unless you explicitly state otherwise, any Contribution intentionally
submitted for inclusion in the Software shall be licensed under the terms
of this License.

By submitting a Contribution, you grant the Telemt maintainers and all
recipients of the Software the rights described in this License with
respect to that Contribution.

#### 7 Network Use Attribution

If the Software is used to provide a publicly accessible network service,
the operator of such service SHOULD provide attribution to Telemt in at least
one of the following locations:

- service documentation
- service description
- an "About" or similar informational page
- other user-visible materials reasonably associated with the service

Such attribution MUST NOT imply endorsement by the Telemt project or its
maintainers.

#### 8 Disclaimer of Warranty and Severability Clause

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.

IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
USE OR OTHER DEALINGS IN THE SOFTWARE.

IF ANY PROVISION OF THIS LICENSE IS HELD TO BE INVALID OR UNENFORCEABLE,
SUCH PROVISION SHALL BE INTERPRETED TO REFLECT THE ORIGINAL INTENT
OF THE PARTIES AS CLOSELY AS POSSIBLE, AND THE REMAINING PROVISIONS
SHALL REMAIN IN FULL FORCE AND EFFECT.
LICENSING.md
@@ -1,12 +1,17 @@
 # LICENSING
 ## Licenses for Versions
-| Version ≥ | Version ≤ | License |
-|-----------|-----------|---------------|
-| 1.0       | 3.3.17    | NO LICENSE    |
-| 3.3.18    | 3.4.0     | TELEMT PL 3   |
+| Version | License |
+|---------|---------------|
+| 1.0     | NO LICENSE    |
+| 1.1     | NO LICENSE    |
+| 1.2     | NO LICENSE    |
+| 2.0     | NO LICENSE    |
+| 3.0     | TELEMT UL 1   |
 
 ### License Types
 - **NO LICENSE** = ***ALL RIGHTS RESERVED***
-- **TELEMT PL** - special Telemt Public License based on Apache License 2 principles
-
-## [Telemt Public License 3](https://github.com/telemt/telemt/blob/main/LICENSE)
+- **TELEMT UL1** - work in progress license for source code of `telemt`, which encourages:
+  - fair use,
+  - contributions,
+  - distribution,
+  - but prohibits NOT mentioning the authors
README.md
@@ -2,11 +2,6 @@
 
 ***Löst Probleme, bevor andere überhaupt wissen, dass sie existieren*** / ***It solves problems before others even realize they exist***
 
-### [**Telemt Chat in Telegram**](https://t.me/telemtrs)
-#### Fixed TLS ClientHello is now available in Telegram Desktop starting from version 6.7.2: to work with EE-MTProxy, please update your client;
-#### Fixed TLS ClientHello for Telegram Android Client is available in [our chat](https://t.me/telemtrs/30234/36441); official releases for Android and iOS are "work in progress";
-
 **Telemt** is a fast, secure, and feature-rich server written in Rust: it fully implements the official Telegram proxy algo and adds many production-ready improvements such as:
 - [ME Pool + Reader/Writer + Registry + Refill + Adaptive Floor + Trio-State + Generation Lifecycle](https://github.com/telemt/telemt/blob/main/docs/model/MODEL.en.md)
 - [Full-covered API w/ management](https://github.com/telemt/telemt/blob/main/docs/API.md)
@@ -14,6 +9,60 @@
 - Prometheus-format Metrics
 - TLS-Fronting and TCP-Splicing for masking from "prying" eyes
 
+[**Telemt Chat in Telegram**](https://t.me/telemtrs)
+
+## NEWS and EMERGENCY
+### ✈️ Telemt 3 is released!
+<table>
+<tr>
+<td width="50%" valign="top">
+
+### 🇷🇺 RU
+
+#### Релиз 3.3.16
+
+[3.3.16](https://github.com/telemt/telemt/releases/tag/3.3.16)!
+
+Будем рады вашему фидбеку и предложениям по улучшению — особенно в части **API**, **статистики**, **UX**
+
+---
+
+Если у вас есть компетенции в:
+
+- Асинхронных сетевых приложениях
+- Анализе трафика
+- Реверс-инжиниринге
+- Сетевых расследованиях
+
+Мы открыты к архитектурным предложениям, идеям и pull requests
+</td>
+<td width="50%" valign="top">
+
+### 🇬🇧 EN
+
+#### Release 3.3.16
+
+[3.3.16](https://github.com/telemt/telemt/releases/tag/3.3.16)
+
+We are looking forward to your feedback and improvement proposals — especially regarding **API**, **statistics**, **UX**
+
+---
+
+If you have expertise in:
+
+- Asynchronous network applications
+- Traffic analysis
+- Reverse engineering
+- Network forensics
+
+We welcome ideas, architectural feedback, and pull requests.
+</td>
+</tr>
+</table>
+
+# Features
+💥 The configuration structure has changed since version 1.1.0.0. Change it in your environment!
+
 ⚓ Our implementation of **TLS-fronting** is one of the most deeply debugged, focused, and advanced, and *almost* **"behaviorally consistent with the real thing"**: we are confident we have it right - [see evidence from our validation and traces](#recognizability-for-dpi-and-crawler)
 
 ⚓ Our ***Middle-End Pool*** is fastest by design in standard scenarios, compared to other implementations of connecting to the Middle-End Proxy: not dramatically, but consistently
@@ -54,12 +103,8 @@
 - [FAQ EN](docs/FAQ.en.md)
 
 ### Recognizability for DPI and crawler
-On April 1, 2026, we became aware of a method for detecting MTProxy Fake-TLS,
-based on the ECH extension and the ordering of cipher suites,
-as well as an overall unique JA3/JA4 fingerprint
-that does not occur in modern browsers:
-we have already submitted initial changes to the Telegram Desktop developers and are working on updates for other clients.
+Since version 1.1.0.0, we have debugged masking perfectly: for all clients without "presenting" a key,
+we transparently direct traffic to the target host!
 
 - We consider this a breakthrough aspect, which has no stable analogues today
 - Based on this: if `telemt` is configured correctly, **TLS mode is completely identical to a real-life handshake + communication** with a specified host
@@ -1,5 +1,5 @@
 // Cryptobench
-use criterion::{Criterion, black_box, criterion_group};
+use criterion::{black_box, criterion_group, Criterion};
 
 fn bench_aes_ctr(c: &mut Criterion) {
     c.bench_function("aes_ctr_encrypt_64kb", |b| {
@@ -0,0 +1,697 @@
# ==============================================================================
#
# TELEMT — Advanced Rust-based Telegram MTProto Proxy
# Full Configuration Reference
#
# This file is both a working config and complete documentation.
# Every parameter is explained. Read it top to bottom before deploying.
#
# Quick Start:
#   1. Set [server].port to your desired port (443 recommended)
#   2. Generate a secret: openssl rand -hex 16
#   3. Put it in [access.users] under a name you choose
#   4. Set [censorship].tls_domain to a popular unblocked HTTPS site
#   5. Set your public IP in [general].middle_proxy_nat_ip
#      and [general.links].public_host
#   6. Set announce IP in [[server.listeners]]
#   7. Run Telemt. It prints a tg:// link. Send it to your users.
#
# Modes of Operation:
#   Direct Mode (use_middle_proxy = false)
#     Connects straight to Telegram DCs via TCP. Simple, fast, low overhead.
#     No ad_tag support. No CDN DC support (203, etc).
#
#   Middle-Proxy Mode (use_middle_proxy = true)
#     Connects to Telegram Middle-End servers via the RPC protocol.
#     Required for ad_tag monetization and CDN support.
#     Requires proxy_secret_path and a valid public IP.
#
# ==============================================================================
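Step 2 of the quick start above produces a 16-byte secret encoded as 32 hex characters. A minimal sketch of how such a secret could be sanity-checked before handing it to users — the function name is illustrative, not part of Telemt:

```rust
/// Check that a user secret looks like 16 random bytes encoded as hex:
/// exactly 32 characters, every one of them a hex digit.
fn is_valid_user_secret(s: &str) -> bool {
    s.len() == 32 && s.chars().all(|c| c.is_ascii_hexdigit())
}
```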

# ==============================================================================
# LEGACY TOP-LEVEL FIELDS
# ==============================================================================

# Deprecated. Use [general.links].show instead.
# Accepts "*" for all users, or an array like ["alice", "bob"].
show_link = ["0"]

# Fallback Datacenter index (1-5) when a client requests an unknown DC ID.
# DC 2 is Amsterdam (Europe), closest for most CIS users.
# default_dc = 2

# ==============================================================================
# GENERAL SETTINGS
# ==============================================================================

[general]

# ------------------------------------------------------------------------------
# Core Protocol
# ------------------------------------------------------------------------------

# Coalesce the MTProto handshake and first data payload into a single TCP packet.
# Significantly reduces connection latency. No reason to disable.
fast_mode = true

# How the proxy connects to Telegram servers.
# false = Direct TCP to Telegram DCs (simple, low overhead)
# true  = Middle-End RPC protocol (required for ad_tag and CDN DCs)
use_middle_proxy = true

# 32-char hex Ad-Tag from @MTProxybot for sponsored channel injection.
# Only works when use_middle_proxy = true.
# Obtain yours: message @MTProxybot on Telegram, register your proxy.
# ad_tag = "00000000000000000000000000000000"

# ------------------------------------------------------------------------------
# Middle-End Authentication
# ------------------------------------------------------------------------------

# Path to the Telegram infrastructure AES key file.
# Auto-downloaded from https://core.telegram.org/getProxySecret on first run.
# This key authenticates your proxy with Middle-End servers.
proxy_secret_path = "proxy-secret"

# ------------------------------------------------------------------------------
# Public IP Configuration (Critical for Middle-Proxy Mode)
# ------------------------------------------------------------------------------

# Your server's PUBLIC IPv4 address.
# Middle-End servers need this for the cryptographic Key Derivation Function.
# If your server has a direct public IP, set it here.
# If behind NAT (AWS, Docker, etc.), this MUST be your external IP.
# If omitted, Telemt uses STUN to auto-detect (see middle_proxy_nat_probe).
# middle_proxy_nat_ip = "203.0.113.10"

# Auto-detect public IP via STUN servers defined in [network].
# Set to false if you hardcoded middle_proxy_nat_ip above.
# Set to true if you want automatic detection.
middle_proxy_nat_probe = true

# ------------------------------------------------------------------------------
# Middle-End Connection Pool
# ------------------------------------------------------------------------------

# Number of persistent multiplexed RPC connections to ME servers.
# All client traffic is routed through these "fat pipes".
# 8 handles thousands of concurrent users comfortably.
middle_proxy_pool_size = 8

# Legacy field. Connections kept initialized but idle as warm standby.
middle_proxy_warm_standby = 16

# ------------------------------------------------------------------------------
# Middle-End Keepalive
# Telegram ME servers aggressively kill idle TCP connections.
# These settings send periodic RPC_PING frames to keep pipes alive.
# ------------------------------------------------------------------------------

me_keepalive_enabled = true

# Base interval between pings in seconds.
me_keepalive_interval_secs = 25

# Random jitter added to interval to prevent all connections pinging simultaneously.
me_keepalive_jitter_secs = 5

# Randomize ping payload bytes to prevent DPI from fingerprinting ping patterns.
me_keepalive_payload_random = true
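The keepalive schedule above (a base interval plus a bounded random jitter) can be sketched as follows; the jitter value is passed in by the caller (e.g. from an RNG) so the function stays deterministic to test. Names are illustrative, not Telemt's internals:

```rust
use std::time::Duration;

/// Next ping delay: the base interval plus a jitter value clamped to
/// [0, jitter_secs]. Supplying the jitter externally keeps connections
/// from all pinging at the same instant while staying testable.
fn next_keepalive_delay(interval_secs: u64, jitter_secs: u64, jitter: u64) -> Duration {
    Duration::from_secs(interval_secs + jitter.min(jitter_secs))
}
```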

# ------------------------------------------------------------------------------
# Client-Side Limits
# ------------------------------------------------------------------------------

# Max buffered ciphertext per client (bytes) when upstream is slow.
# Acts as backpressure to prevent memory exhaustion. 256KB is safe.
crypto_pending_buffer = 262144

# Maximum single MTProto frame size from client. 16MB is protocol standard.
max_client_frame = 16777216

# ------------------------------------------------------------------------------
# Crypto Desynchronization Logging
# Desync errors usually mean DPI/GFW is tampering with connections.
# ------------------------------------------------------------------------------

# true  = full forensics (trace ID, IP hash, hex dumps) for EVERY desync event
# false = deduplicated logging, one entry per time window (prevents log spam)
# Set true if you are actively debugging DPI interference.
desync_all_full = true

# ------------------------------------------------------------------------------
# Beobachten — Built-in Honeypot / Active Probe Tracker
# Tracks IPs that fail handshakes or behave like TLS scanners.
# Output file can be fed into fail2ban or iptables for auto-blocking.
# ------------------------------------------------------------------------------

beobachten = true

# How long (minutes) to remember a suspicious IP before expiring it.
beobachten_minutes = 30

# How often (seconds) to flush tracker state to disk.
beobachten_flush_secs = 15

# File path for the tracker output.
beobachten_file = "cache/beobachten.txt"

# ------------------------------------------------------------------------------
# Hardswap — Zero-Downtime ME Pool Rotation
# When Telegram updates ME server IPs, Hardswap creates a completely new pool,
# waits until it is fully ready, migrates traffic, then kills the old pool.
# Users experience zero interruption.
# ------------------------------------------------------------------------------

hardswap = true

# ------------------------------------------------------------------------------
# ME Pool Warmup Staggering
# When creating a new pool, connections are opened one by one with delays
# to avoid a burst of SYN packets that could trigger ISP flood protection.
# ------------------------------------------------------------------------------

me_warmup_stagger_enabled = true

# Delay between each connection creation (milliseconds).
me_warmup_step_delay_ms = 500

# Random jitter added to the delay (milliseconds).
me_warmup_step_jitter_ms = 300

# ------------------------------------------------------------------------------
# ME Reconnect Backoff
# If an ME server drops the connection, Telemt retries with this strategy.
# ------------------------------------------------------------------------------

# Max simultaneous reconnect attempts per DC.
me_reconnect_max_concurrent_per_dc = 8

# Exponential backoff base (milliseconds).
me_reconnect_backoff_base_ms = 500

# Backoff ceiling (milliseconds). Will never wait longer than this.
me_reconnect_backoff_cap_ms = 30000

# Number of instant retries before switching to exponential backoff.
me_reconnect_fast_retry_count = 12
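The backoff settings above describe a common pattern: a burst of instant retries, then exponential growth from a base delay up to a hard cap. A minimal sketch of how these four parameters could combine, assuming this standard interpretation (this is not Telemt's actual reconnect code):

```rust
/// Reconnect delay for a given attempt number: the first `fast_retry_count`
/// attempts retry immediately, after which the delay doubles per attempt
/// starting from `base_ms` and is clamped at `cap_ms`.
fn reconnect_delay_ms(attempt: u32, fast_retry_count: u32, base_ms: u64, cap_ms: u64) -> u64 {
    if attempt < fast_retry_count {
        return 0; // instant retry phase
    }
    let exp = (attempt - fast_retry_count).min(63); // avoid shift overflow
    base_ms.saturating_mul(1u64 << exp).min(cap_ms)
}
```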

# ------------------------------------------------------------------------------
# NAT Mismatch Behavior
# If the STUN-detected IP differs from the local interface IP (you are behind NAT).
# false = abort ME mode (safe default)
# true  = force ME mode anyway (use if you know your NAT setup is correct)
# ------------------------------------------------------------------------------

stun_iface_mismatch_ignore = false

# ------------------------------------------------------------------------------
# Logging
# ------------------------------------------------------------------------------

# File to log unknown DC requests (DC IDs outside standard 1-5).
unknown_dc_log_path = "unknown-dc.txt"

# Verbosity: "debug" | "verbose" | "normal" | "silent"
log_level = "normal"

# Disable ANSI color codes in log output (useful for file logging).
disable_colors = false

# ------------------------------------------------------------------------------
# FakeTLS Record Sizing
# Buffer small MTProto packets into larger TLS records to mimic real HTTPS.
# Real HTTPS servers send records close to MTU size (~1400 bytes).
# A stream of tiny TLS records is a strong DPI signal.
# Set to 0 to disable. Set to 1400 for realistic HTTPS emulation.
# ------------------------------------------------------------------------------

fast_mode_min_tls_record = 1400
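The record-sizing idea above — accumulate small payloads and only emit a record once the threshold is reached — can be sketched as a tiny coalescing buffer. This is an illustration of the concept, not Telemt's actual writer; a real implementation would also flush on a timer so latency stays bounded:

```rust
/// Coalesce small payloads into records of at least `min_record` bytes
/// (0 disables buffering and flushes every write immediately).
struct RecordBuffer {
    min_record: usize,
    pending: Vec<u8>,
}

impl RecordBuffer {
    fn new(min_record: usize) -> Self {
        Self { min_record, pending: Vec::new() }
    }

    /// Queue `data`; return a full record to send once the threshold is met.
    fn push(&mut self, data: &[u8]) -> Option<Vec<u8>> {
        self.pending.extend_from_slice(data);
        if self.min_record == 0 || self.pending.len() >= self.min_record {
            Some(std::mem::take(&mut self.pending))
        } else {
            None
        }
    }
}
```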

# ------------------------------------------------------------------------------
# Periodic Updates
# ------------------------------------------------------------------------------

# How often (seconds) to re-fetch ME server lists and proxy secrets
# from core.telegram.org. Keeps your proxy in sync with Telegram infrastructure.
update_every = 300

# How often (seconds) to force a Hardswap even if the ME map is unchanged.
# Shorter intervals mean shorter-lived TCP flows, harder for DPI to profile.
me_reinit_every_secs = 600

# ------------------------------------------------------------------------------
# Hardswap Warmup Tuning
# Fine-grained control over how the new pool is warmed up before the traffic switch.
# ------------------------------------------------------------------------------

me_hardswap_warmup_delay_min_ms = 1000
me_hardswap_warmup_delay_max_ms = 2000
me_hardswap_warmup_extra_passes = 3
me_hardswap_warmup_pass_backoff_base_ms = 500

# ------------------------------------------------------------------------------
# Config Update Debouncing
# Telegram sometimes pushes transient/broken configs. Debouncing requires
# N consecutive identical fetches before applying a change.
# ------------------------------------------------------------------------------

# ME server list must be identical for this many fetches before applying.
me_config_stable_snapshots = 2

# Minimum seconds between config applications.
me_config_apply_cooldown_secs = 300

# Proxy secret must be identical for this many fetches before applying.
proxy_secret_stable_snapshots = 2
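The debouncing rule above — apply a change only after the same snapshot has been fetched N times in a row — can be sketched like this. The cooldown between applications is omitted for brevity, and the types are illustrative rather than Telemt's own:

```rust
/// Track consecutive identical fetches; a snapshot becomes "stable"
/// (safe to apply) only after `stable_snapshots` identical sightings.
struct Debouncer {
    stable_snapshots: u32,
    last: Option<String>,
    seen: u32,
}

impl Debouncer {
    fn new(stable_snapshots: u32) -> Self {
        Self { stable_snapshots, last: None, seen: 0 }
    }

    /// Record one fetch; returns true once the snapshot is stable enough.
    fn observe(&mut self, snapshot: &str) -> bool {
        if self.last.as_deref() == Some(snapshot) {
            self.seen += 1;
        } else {
            self.last = Some(snapshot.to_string());
            self.seen = 1; // a change resets the streak
        }
        self.seen >= self.stable_snapshots
    }
}
```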

# ------------------------------------------------------------------------------
# Proxy Secret Rotation
# ------------------------------------------------------------------------------

# Apply newly downloaded secrets at runtime without restart.
proxy_secret_rotate_runtime = true

# Maximum acceptable secret length (bytes). Rejects abnormally large secrets.
proxy_secret_len_max = 256

# ------------------------------------------------------------------------------
# Hardswap Drain Settings
# Controls graceful shutdown of old ME connections during pool rotation.
# ------------------------------------------------------------------------------

# Seconds to keep old connections alive for in-flight data before force-closing.
me_pool_drain_ttl_secs = 90

# Minimum ratio of healthy connections in the new pool before draining the old pool.
# 0.8 = at least 80% of the new pool must be ready.
me_pool_min_fresh_ratio = 0.8
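The drain gate above reduces to a single ratio check: the old pool is drained only once enough of the new pool's connections are healthy. A one-function sketch under that assumption (names are illustrative):

```rust
/// True when the new pool has at least `min_fresh_ratio` of its
/// connections healthy, so the old pool may safely be drained.
fn ready_to_drain(healthy: usize, pool_size: usize, min_fresh_ratio: f64) -> bool {
    pool_size > 0 && (healthy as f64) / (pool_size as f64) >= min_fresh_ratio
}
```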
|
||||||
|
|
||||||
|
# Maximum seconds to wait for drain to complete before force-killing.
|
||||||
|
me_reinit_drain_timeout_secs = 120
|
||||||
|
|
||||||
|
# ------------------------------------------------------------------------------
|
||||||
|
# NTP Clock Check
|
||||||
|
# MTProto uses timestamps. Clock drift > 30 seconds breaks handshakes.
|
||||||
|
# Telemt checks on startup and warns if out of sync.
|
||||||
|
# ------------------------------------------------------------------------------
|
||||||
|
|
||||||
|
ntp_check = true
|
||||||
|
ntp_servers = ["pool.ntp.org"]
# ------------------------------------------------------------------------------
# Auto-Degradation
# If ME servers become completely unreachable (ISP blocking),
# automatically fall back to Direct Mode so users stay connected.
# ------------------------------------------------------------------------------

auto_degradation_enabled = true

# Number of DC groups that must be unreachable before triggering fallback.
degradation_min_unavailable_dc_groups = 2


# ==============================================================================
# ALLOWED CLIENT PROTOCOLS
# Only enable what you need. In censored regions, TLS-only is safest.
# ==============================================================================

[general.modes]

# Classic MTProto. Unobfuscated length prefixes. Trivially detected by DPI.
# No reason to enable unless you have ancient clients.
classic = false

# Obfuscated MTProto with randomized padding. Better than classic, but
# still detectable by statistical analysis of packet sizes.
secure = false

# FakeTLS (ee-secrets). Wraps MTProto in TLS 1.3 framing.
# To DPI, it looks like a normal HTTPS connection.
# This should be the ONLY enabled mode in censored environments.
tls = true


# ==============================================================================
# STARTUP LINK GENERATION
# Controls which tg:// invite links are printed to the console on startup.
# ==============================================================================

[general.links]

# Which users to generate links for.
# "*" = all users, or an array like ["alice", "bob"].
show = "*"

# IP or domain to embed in the tg:// link.
# If omitted, Telemt uses STUN to auto-detect.
# Set this to your server's public IP or domain for reliable links.
# public_host = "proxy.example.com"

# Port to embed in the tg:// link.
# If omitted, uses [server].port.
# public_port = 443
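Telemt prints ready-made links itself, but it helps to know what goes into one. For FakeTLS links, the secret in the tg:// URL is conventionally the user secret prefixed with `ee` and followed by the hex-encoded fronting domain. A sketch of assembling such a link by hand (the host, port, secret, and domain values are placeholders, and the `ee` layout is the common MTProto FakeTLS convention rather than something Telemt-specific):

```shell
#!/bin/sh
# Assemble a FakeTLS (ee-) proxy link by hand. All values are placeholders.
SECRET="0123456789abcdef0123456789abcdef"   # 32 hex chars from [access.users]
DOMAIN="www.google.com"                      # must match tls_domain
HOST="proxy.example.com"                     # public_host (or public IP)
PORT=443                                     # public_port

# ee-secret = "ee" + secret + hex(domain)
DOMAIN_HEX=$(printf '%s' "$DOMAIN" | od -An -tx1 | tr -d ' \n')
EE_SECRET="ee${SECRET}${DOMAIN_HEX}"

echo "tg://proxy?server=${HOST}&port=${PORT}&secret=${EE_SECRET}"
```

Because the domain is baked into the secret, changing `tls_domain` later invalidates every previously issued link.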
# ==============================================================================
# NETWORK & IP RESOLUTION
# ==============================================================================

[network]

# Enable IPv4 for outbound connections to Telegram.
ipv4 = true

# Enable IPv6 for outbound connections to Telegram.
ipv6 = false

# Prefer IPv4 (4) or IPv6 (6) when both are available.
prefer = 4

# Experimental: use both IPv4 and IPv6 ME servers simultaneously.
# May improve reliability but doubles the connection count.
multipath = false

# STUN servers for external IP discovery.
# Used for the Middle-Proxy KDF (if nat_probe = true) and link generation.
stun_servers = [
    "stun.l.google.com:5349",
    "stun1.l.google.com:3478",
    "stun.gmx.net:3478",
    "stun.l.google.com:19302"
]

# If UDP STUN is blocked, attempt TCP-based STUN as a fallback.
stun_tcp_fallback = true

# If all STUN fails, use HTTP APIs to discover the public IP.
http_ip_detect_urls = [
    "https://ifconfig.me/ip",
    "https://api.ipify.org"
]

# Cache the discovered public IP to this file so it survives restarts.
cache_public_ip_path = "cache/public_ip.txt"
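When the HTTP fallback is used, the detector response is plain text and should be sanity-checked before it is cached. A sketch of the kind of validation such a fallback needs (the `looks_like_ipv4` helper is illustrative, and the hard-coded sample stands in for a live `curl -s https://ifconfig.me/ip` call):

```shell
#!/bin/sh
# Validate that an HTTP IP-detection response is a dotted-quad IPv4 address
# before writing it to the cache file named by cache_public_ip_path.
looks_like_ipv4() {
    printf '%s' "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'
}

RESPONSE="203.0.113.10"   # in practice: RESPONSE=$(curl -s https://ifconfig.me/ip)
if looks_like_ipv4 "$RESPONSE"; then
    mkdir -p cache
    printf '%s' "$RESPONSE" > cache/public_ip.txt
fi
```

Caching only validated responses matters because a captive portal or error page returned by the detector would otherwise poison the announced address across restarts.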
# ==============================================================================
# SERVER BINDING & METRICS
# ==============================================================================

[server]

# TCP port to listen on.
# 443 is recommended (looks like normal HTTPS traffic).
port = 443

# IPv4 bind address. "0.0.0.0" = all interfaces.
listen_addr_ipv4 = "0.0.0.0"

# IPv6 bind address. "::" = all interfaces.
listen_addr_ipv6 = "::"

# Unix socket listener (for reverse proxy setups with Nginx/HAProxy).
# listen_unix_sock = "/var/run/telemt.sock"
# listen_unix_sock_perm = "0660"

# Enable PROXY protocol header parsing.
# Set true ONLY if Telemt is behind HAProxy/Nginx that injects PROXY headers.
# If enabled without a proxy in front, clients will fail to connect.
proxy_protocol = false

# Prometheus metrics HTTP endpoint port.
# Uncomment to enable. Access at http://your-server:9090/metrics
# metrics_port = 9090

# IP ranges allowed to access the metrics endpoint.
metrics_whitelist = [
    "127.0.0.1/32",
    "::1/128"
]

# ------------------------------------------------------------------------------
# Listener Overrides
# Define explicit listeners with specific bind IPs and announce IPs.
# The announce IP is what gets embedded in tg:// links and sent to ME servers.
# You MUST set announce to your server's public IP for ME mode to work.
# ------------------------------------------------------------------------------

# [[server.listeners]]
# ip = "0.0.0.0"
# announce = "203.0.113.10"
# reuse_allow = false


# ==============================================================================
# TIMEOUTS (seconds unless noted)
# ==============================================================================

[timeouts]

# Maximum time for a client to complete the FakeTLS + MTProto handshake.
client_handshake = 15

# Maximum time to establish a TCP connection to an upstream Telegram DC.
tg_connect = 10

# TCP keepalive interval for client connections.
client_keepalive = 60

# Maximum client inactivity before dropping the connection.
client_ack = 300

# Instant retry count for a single ME endpoint before giving up on it.
me_one_retry = 3

# Timeout (milliseconds) for a single ME endpoint connection attempt.
me_one_timeout_ms = 1500
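The last two settings combine into a simple bounded retry: up to `me_one_retry` attempts against one endpoint, each capped at `me_one_timeout_ms`. A sketch of that loop (the `try_connect` stub always fails here so the retry path is visible; real code would attempt the TCP dial under a timeout):

```shell
#!/bin/sh
# Bounded retry against a single ME endpoint: me_one_retry attempts,
# each limited to me_one_timeout_ms. try_connect is a stand-in that
# always fails, so every attempt is consumed.
ME_ONE_RETRY=3
ME_ONE_TIMEOUT_MS=1500

attempts=0
try_connect() { return 1; }   # stub: a real dial would go here

i=1
while [ "$i" -le "$ME_ONE_RETRY" ]; do
    attempts=$((attempts + 1))
    # A real implementation would wrap the dial in a 1500 ms timeout,
    # e.g. `timeout 1.5 nc -z "$endpoint" 443` (illustrative command).
    if try_connect; then
        break
    fi
    i=$((i + 1))
done
echo "gave up after $attempts attempts"   # prints: gave up after 3 attempts
```

The worst case per endpoint is therefore roughly `me_one_retry * me_one_timeout_ms`, i.e. 4.5 seconds with the defaults, before the pool moves on.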
# ==============================================================================
# ANTI-CENSORSHIP / FAKETLS / MASKING
# This is where Telemt becomes invisible to Deep Packet Inspection.
# ==============================================================================

[censorship]

# ------------------------------------------------------------------------------
# TLS Domain Fronting
# The SNI (Server Name Indication) your proxy presents to connecting clients.
# Must be a popular, unblocked HTTPS website in your target country.
# DPI sees traffic to this domain. Choose carefully.
# Good choices: major CDNs, banks, government sites, search engines.
# Bad choices: obscure sites, already-blocked domains.
# ------------------------------------------------------------------------------

tls_domain = "www.google.com"

# ------------------------------------------------------------------------------
# Active Probe Masking
# When someone connects but fails the MTProto handshake (wrong secret),
# they might be an ISP active prober testing whether this is a proxy.
#
# mask = false: drop the connection (the prober knows something is here)
# mask = true: transparently proxy them to mask_host (the prober sees a real website)
#
# With mask enabled, your server is indistinguishable from a real web server
# to anyone who doesn't have the correct secret.
# ------------------------------------------------------------------------------

mask = true

# The real web server to forward failed handshakes to.
# If omitted, defaults to tls_domain.
# mask_host = "www.google.com"

# Port on the mask host to connect to.
mask_port = 443

# Inject a PROXY protocol header when forwarding to the mask host.
# 0 = disabled, 1 = v1, 2 = v2. Leave disabled unless mask_host expects it.
# mask_proxy_protocol = 0

# ------------------------------------------------------------------------------
# TLS Certificate Emulation
# ------------------------------------------------------------------------------

# Size (bytes) of the locally generated fake TLS certificate.
# Only used when tls_emulation is disabled.
fake_cert_len = 2048

# KILLER FEATURE: Real-Time TLS Emulation.
# Telemt connects to tls_domain, fetches its actual TLS 1.3 certificate chain,
# and exactly replicates the byte sizes of the ServerHello and Certificate records.
# Defeats DPI that uses TLS record length heuristics to detect proxies.
# Strongly recommended in censored environments.
tls_emulation = true

# Directory to cache fetched TLS certificates.
tls_front_dir = "tlsfront"

# ------------------------------------------------------------------------------
# ServerHello Timing
# Real web servers take 30-150 ms to respond to a ClientHello due to network
# latency and crypto processing. A proxy responding in <1 ms is suspicious.
# These settings add a realistic delay to mimic genuine server behavior.
# ------------------------------------------------------------------------------

# Minimum delay before sending ServerHello (milliseconds).
server_hello_delay_min_ms = 50

# Maximum delay before sending ServerHello (milliseconds).
server_hello_delay_max_ms = 150
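The delay actually applied is a uniform pick inside the [min, max] window. A portable sketch of such a pick (the `pick_delay` helper is illustrative; awk's `rand()` keeps it POSIX-friendly, since `$RANDOM` is a bash/zsh-ism):

```shell
#!/bin/sh
# Pick a uniform delay in [MIN_MS, MAX_MS] milliseconds before replying,
# imitating the jitter of a genuine web server's ServerHello.
MIN_MS=50
MAX_MS=150

pick_delay() {
    awk -v min="$MIN_MS" -v max="$MAX_MS" \
        'BEGIN { srand(); print min + int(rand() * (max - min + 1)) }'
}

d=$(pick_delay)
echo "sleeping ${d} ms before ServerHello"
# sleep "$(awk -v ms="$d" 'BEGIN { printf "%.3f", ms / 1000 }')"
```

A fixed delay would itself be a fingerprint; sampling the whole window each time is what makes the timing blend in.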
# ------------------------------------------------------------------------------
# TLS Session Tickets
# Real TLS 1.3 servers send 1-2 NewSessionTicket messages after the handshake.
# A server that sends zero tickets is anomalous and may trigger DPI flags.
# Set this to match your tls_domain's behavior (usually 2).
# ------------------------------------------------------------------------------

# tls_new_session_tickets = 0

# ------------------------------------------------------------------------------
# Full Certificate Frequency
# When tls_emulation is enabled, this controls how often (per client IP)
# to send the complete emulated certificate chain.
#
# > 0: Subsequent connections within TTL seconds get a smaller cached version.
#      Saves bandwidth but creates a detectable size difference between
#      first and repeat connections.
#
# = 0: Every connection gets the full certificate. More bandwidth, but
#      perfectly consistent behavior with no anomalies for DPI to detect.
# ------------------------------------------------------------------------------

tls_full_cert_ttl_secs = 0

# ------------------------------------------------------------------------------
# ALPN Enforcement
# Ensure ServerHello responds with the exact ALPN protocol the client requested.
# Mismatched ALPN (e.g., client asks h2, server says http/1.1) is a DPI red flag.
# ------------------------------------------------------------------------------

alpn_enforce = true


# ==============================================================================
# ACCESS CONTROL & USERS
# ==============================================================================

[access]

# ------------------------------------------------------------------------------
# Replay Attack Protection
# DPI can record a legitimate user's handshake and replay it later to probe
# whether the server is a proxy. Telemt remembers recent handshake nonces
# and rejects duplicates.
# ------------------------------------------------------------------------------

# Number of nonce slots in the replay detection buffer.
replay_check_len = 65536

# How long (seconds) to remember nonces before expiring them.
replay_window_secs = 1800

# Allow clients with incorrect system clocks to connect.
# false = reject clients with significant time skew (more secure)
# true = accept anyone regardless of clock (more permissive)
ignore_time_skew = false

# ------------------------------------------------------------------------------
# User Secrets
# Each user needs a unique 32-character hex string as their secret.
# Generate with: openssl rand -hex 16
#
# This secret is embedded in the tg:// link. Anyone with it can connect.
# Format: username = "hex_secret"
# ------------------------------------------------------------------------------

[access.users]
# alice = "0123456789abcdef0123456789abcdef"
# bob = "fedcba9876543210fedcba9876543210"
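As the comment above says, a user secret is 16 random bytes rendered as 32 hex characters. A sketch of generating one and verifying its shape before pasting it into `[access.users]` (assumes `openssl` is installed):

```shell
#!/bin/sh
# Generate a 16-byte (32 hex char) user secret and verify its shape.
SECRET=$(openssl rand -hex 16)

# Must be exactly 32 lowercase hex characters.
printf '%s' "$SECRET" | grep -Eq '^[0-9a-f]{32}$' \
    && echo "alice = \"$SECRET\""
```

A malformed secret (wrong length, uppercase, stray whitespace) will not match what clients send, so the cheap regex check saves a confusing debugging session later.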
# ------------------------------------------------------------------------------
# Per-User Connection Limits
# Limits concurrent TCP connections per user to prevent secret sharing.
# Uncomment and set for each user as needed.
# ------------------------------------------------------------------------------

[access.user_max_tcp_conns]
# alice = 100
# bob = 50

# ------------------------------------------------------------------------------
# Per-User Expiration Dates
# Automatically revoke access after the specified date (ISO 8601 format).
# ------------------------------------------------------------------------------

[access.user_expirations]
# alice = "2025-12-31T23:59:59Z"
# bob = "2026-06-15T00:00:00Z"
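An expiration entry is just an instant to compare against the current time. A sketch of checking one from the command line (the `-d` flag is GNU date; BSD/macOS date would need `-j -f` instead):

```shell
#!/bin/sh
# Decide whether a user's access has expired by comparing the configured
# ISO 8601 instant against the current time. Requires GNU date (-d).
EXPIRES="2025-12-31T23:59:59Z"

exp_epoch=$(date -u -d "$EXPIRES" +%s)
now_epoch=$(date -u +%s)

if [ "$now_epoch" -ge "$exp_epoch" ]; then
    echo "access for this user has expired"
else
    echo "access valid for $((exp_epoch - now_epoch)) more seconds"
fi
```

Because the instants carry a `Z` suffix, the comparison is timezone-safe: both sides are evaluated in UTC.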
# ------------------------------------------------------------------------------
# Per-User Data Quotas
# Maximum total bytes transferred per user. Connections are refused after the limit.
# ------------------------------------------------------------------------------

[access.user_data_quota]
# alice = 107374182400
# bob = 53687091200
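The raw byte counts in the commented examples are easier to audit when derived rather than hand-typed: they are 100 GiB and 50 GiB respectively. A quick sanity check:

```shell
#!/bin/sh
# Derive the quota example values above as GiB arithmetic.
GIB=$((1024 * 1024 * 1024))

echo "alice = $((100 * GIB))"   # 107374182400 (100 GiB)
echo "bob   = $((50 * GIB))"    # 53687091200 (50 GiB)
```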
# ------------------------------------------------------------------------------
# Per-User Unique IP Limits
# Maximum number of different IP addresses that can use this secret
# at the same time. Highly effective against secret leaking/sharing.
# Set to 1 for single-device, 2-3 for phone+desktop, etc.
# ------------------------------------------------------------------------------

[access.user_max_unique_ips]
# alice = 3
# bob = 2


# ==============================================================================
# UPSTREAM ROUTING
# Controls how Telemt connects to Telegram servers (or ME servers).
# If omitted entirely, uses the OS default route.
# ==============================================================================

# ------------------------------------------------------------------------------
# Direct upstream: use the server's own network interface.
# You can optionally bind to a specific interface or local IP.
# ------------------------------------------------------------------------------

# [[upstreams]]
# type = "direct"
# interface = "eth0"
# bind_addresses = ["192.0.2.10"]
# weight = 1
# enabled = true
# scopes = "*"

# ------------------------------------------------------------------------------
# SOCKS5 upstream: route Telegram traffic through a SOCKS5 proxy.
# Useful if your server's IP is blocked from reaching Telegram DCs.
# ------------------------------------------------------------------------------

# [[upstreams]]
# type = "socks5"
# address = "198.51.100.30:1080"
# username = "proxy-user"
# password = "proxy-pass"
# weight = 1
# enabled = true


# ==============================================================================
# DATACENTER OVERRIDES
# Force specific DC IDs to route to specific IP:Port combinations.
# DC 203 (CDN) is auto-injected by Telemt if not specified here.
# ==============================================================================

# [dc_overrides]
# "201" = "149.154.175.50:443"
# "202" = ["149.154.167.51:443", "149.154.175.100:443"]
|
@ -32,7 +32,6 @@ show = "*"
port = 443
# proxy_protocol = false # Enable if behind HAProxy/nginx with PROXY protocol
# metrics_port = 9090
# metrics_listen = "0.0.0.0:9090" # Listen address for metrics (overrides metrics_port)
# metrics_whitelist = ["127.0.0.1", "::1", "0.0.0.0/0"]

[server.api]
@ -1,3 +0,0 @@
u telemt - "telemt user" /var/lib/telemt -
g telemt - -
m telemt telemt
@ -1,21 +0,0 @@
[Unit]
Description=Telemt
Wants=network-online.target
After=multi-user.target network.target network-online.target

[Service]
Type=simple
User=telemt
Group=telemt
WorkingDirectory=/var/lib/telemt
ExecStart=/usr/bin/telemt /etc/telemt/telemt.toml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536
AmbientCapabilities=CAP_NET_BIND_SERVICE
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
NoNewPrivileges=true

[Install]
WantedBy=multi-user.target
@ -1 +0,0 @@
d /var/lib/telemt 700 telemt telemt
@ -7,7 +7,6 @@ services:
    ports:
      - "443:443"
      - "127.0.0.1:9090:9090"
      - "127.0.0.1:9091:9091"
    # Allow caching 'proxy-secret' in read-only container
    working_dir: /run/telemt
    volumes:
docs/API.md

@ -497,14 +497,13 @@ Note: the request contract is defined, but the corresponding route currently ret
| `direct_total` | `usize` | Direct-route upstream entries. |
| `socks4_total` | `usize` | SOCKS4 upstream entries. |
| `socks5_total` | `usize` | SOCKS5 upstream entries. |
| `shadowsocks_total` | `usize` | Shadowsocks upstream entries. |

#### `RuntimeUpstreamQualityUpstreamData`
| Field | Type | Description |
| --- | --- | --- |
| `upstream_id` | `usize` | Runtime upstream index. |
| `route_kind` | `string` | `direct`, `socks4`, `socks5`, `shadowsocks`. |
| `address` | `string` | Upstream address (`direct` literal for direct route kind, `host:port` only for proxied upstreams). |
| `weight` | `u16` | Selection weight. |
| `scopes` | `string` | Configured scope selector. |
| `healthy` | `bool` | Current health flag. |

@ -758,14 +757,13 @@ Note: the request contract is defined, but the corresponding route currently ret
| `direct_total` | `usize` | Number of direct upstream entries. |
| `socks4_total` | `usize` | Number of SOCKS4 upstream entries. |
| `socks5_total` | `usize` | Number of SOCKS5 upstream entries. |
| `shadowsocks_total` | `usize` | Number of Shadowsocks upstream entries. |

#### `UpstreamStatus`
| Field | Type | Description |
| --- | --- | --- |
| `upstream_id` | `usize` | Runtime upstream index. |
| `route_kind` | `string` | Upstream route kind: `direct`, `socks4`, `socks5`, `shadowsocks`. |
| `address` | `string` | Upstream address (`direct` for direct route kind, `host:port` for Shadowsocks). Authentication fields are intentionally omitted. |
| `weight` | `u16` | Selection weight. |
| `scopes` | `string` | Configured scope selector string. |
| `healthy` | `bool` | Current health flag. |
File diff suppressed because it is too large

docs/FAQ.en.md
@ -1,122 +1,97 @@
|
||||||
## How to set up a "proxy sponsor" channel and statistics via the @MTProxybot
|
## How to set up "proxy sponsor" channel and statistics via @MTProxybot bot
|
||||||
|
|
||||||
1. Go to the @MTProxybot.
|
1. Go to @MTProxybot bot.
|
||||||
2. Enter the `/newproxy` command.
|
2. Enter the command `/newproxy`
|
||||||
3. Send your server's IP address and port. For example: `1.2.3.4:443`.
|
3. Send the server IP and port. For example: 1.2.3.4:443
|
||||||
4. Open the configuration file: `nano /etc/telemt/telemt.toml`.
|
4. Open the config `nano /etc/telemt.toml`.
|
||||||
5. Copy and send the user secret from the `[access.users]` section to the bot.
|
5. Copy and send the user secret from the [access.users] section to the bot.
|
||||||
6. Copy the tag provided by the bot. For example: `1234567890abcdef1234567890abcdef`.
|
6. Copy the tag received from the bot. For example 1234567890abcdef1234567890abcdef.
|
||||||
> [!WARNING]
|
> [!WARNING]
|
||||||
> The link provided by the bot will not work. Do not copy or use it!
|
> The link provided by the bot will not work. Do not copy or use it!
|
||||||
7. Uncomment the `ad_tag` parameter and enter the tag received from the bot.
|
7. Uncomment the ad_tag parameter and enter the tag received from the bot.
|
||||||
8. Uncomment or add the `use_middle_proxy = true` parameter.
|
8. Uncomment/add the parameter `use_middle_proxy = true`.
|
||||||
|
|
||||||
Configuration example:
|
Config example:
|
||||||
```toml
|
```toml
|
||||||
[general]
|
[general]
|
||||||
ad_tag = "1234567890abcdef1234567890abcdef"
|
ad_tag = "1234567890abcdef1234567890abcdef"
|
||||||
use_middle_proxy = true
|
use_middle_proxy = true
|
||||||
```
|
```
|
||||||
9. Save the changes (in nano: Ctrl+S -> Ctrl+X).
|
9. Save the config. Ctrl+S -> Ctrl+X.
|
||||||
10. Restart the telemt service: `systemctl restart telemt`.
|
10. Restart telemt `systemctl restart telemt`.
|
||||||
11. Send the `/myproxies` command to the bot and select the added server.
|
11. In the bot, send the command /myproxies and select the added server.
|
||||||
12. Click the "Set promotion" button.
|
12. Click the "Set promotion" button.
|
||||||
13. Send a **public link** to the channel. Private channels cannot be added!
|
13. Send a **public link** to the channel. Private channels cannot be added!
|
||||||
14. Wait for about 1 hour for the information to update on Telegram servers.
|
14. Wait approximately 1 hour for the information to update on Telegram servers.
|
||||||
> [!WARNING]
|
> [!WARNING]
|
||||||
> The sponsored channel will not be displayed to you if you are already subscribed to it.
|
> You will not see the "proxy sponsor" if you are already subscribed to the channel.
|
||||||
|
|
||||||
**You can also configure different sponsored channels for different users:**
|
**You can also set up different channels for different users.**
|
||||||
```toml
|
```toml
|
||||||
[access.user_ad_tags]
|
[access.user_ad_tags]
|
||||||
hello = "ad_tag"
|
hello = "ad_tag"
|
||||||
hello2 = "ad_tag2"
|
hello2 = "ad_tag2"
|
||||||
```
|
```
|
||||||
|
|
||||||
## Why do you need a middle proxy (ME)
|
## How many people can use 1 link
|
||||||
https://github.com/telemt/telemt/discussions/167
|
|
||||||
|
|
||||||
|
By default, 1 link can be used by any number of people.
|
||||||
## How many people can use one link
|
You can limit the number of IPs using the proxy.
|
||||||
|
|
||||||
By default, an unlimited number of people can use a single link.
|
|
||||||
However, you can limit the number of unique IP addresses for each user:
|
|
||||||
```toml
|
```toml
|
||||||
[access.user_max_unique_ips]
|
[access.user_max_unique_ips]
|
||||||
hello = 1
|
hello = 1
|
||||||
```
|
```
|
||||||
This parameter sets the maximum number of unique IP addresses from which a single link can be used simultaneously. If the first user disconnects, a second one can connect. At the same time, multiple users can connect from a single IP address simultaneously (for example, devices on the same Wi-Fi network).
|
This parameter limits how many unique IPs can use 1 link simultaneously. If one user disconnects, a second user can connect. Also, multiple users can sit behind the same IP.
|
||||||
|
|
||||||
## How to create multiple different links
|
## How to create multiple different links
|
||||||
|
|
||||||
1. Generate the required number of secrets using the command: `openssl rand -hex 16`.
|
1. Generate the required number of secrets `openssl rand -hex 16`
|
||||||
2. Open the configuration file: `nano /etc/telemt/telemt.toml`.
|
2. Open the config `nano /etc/telemt.toml`
|
||||||
3. Add new users to the `[access.users]` section:
|
3. Add new users.
|
||||||
```toml
|
```toml
|
||||||
[access.users]
|
[access.users]
|
||||||
user1 = "00000000000000000000000000000001"
|
user1 = "00000000000000000000000000000001"
|
||||||
user2 = "00000000000000000000000000000002"
|
user2 = "00000000000000000000000000000002"
|
||||||
user3 = "00000000000000000000000000000003"
|
user3 = "00000000000000000000000000000003"
|
||||||
```
|
```
|
||||||
4. Save the configuration (Ctrl+S -> Ctrl+X). There is no need to restart the telemt service.
|
4. Save the config. Ctrl+S -> Ctrl+X. You don't need to restart telemt.
|
||||||
5. Get the ready-to-use links using the command:
|
5. Get the links via `journalctl -u telemt -n -g "links" --no-pager -o cat | tac`
|
||||||
```bash
|
|
||||||
curl -s http://127.0.0.1:9091/v1/users | jq
|
|
||||||
```
|
|
||||||
|
|
||||||
## "Unknown TLS SNI" error
|
|
||||||
Usually, this error occurs if you have changed the `tls_domain` parameter, but users continue to connect using old links with the previous domain.
|
|
||||||
|
|
||||||
If you need to allow connections with any domains (ignoring SNI mismatches), add the following parameters:
|
|
||||||
```toml
|
|
||||||
[censorship]
|
|
||||||
unknown_sni_action = "mask"
|
|
||||||
```
|
|
||||||
|
|
||||||
## How to view metrics
|
## How to view metrics
|
||||||
|
|
||||||
1. Open the configuration file: `nano /etc/telemt/telemt.toml`.
|
1. Open the config `nano /etc/telemt.toml`
|
||||||
2. Add the following parameters:
|
2. Add the following parameters
|
||||||
```toml
|
```toml
|
||||||
[server]
|
[server]
|
||||||
metrics_port = 9090
|
metrics_port = 9090
|
||||||
metrics_whitelist = ["127.0.0.1/32", "::1/128", "0.0.0.0/0"]
|
metrics_whitelist = ["127.0.0.1/32", "::1/128", "0.0.0.0/0"]
|
||||||
```
|
```
|
||||||
3. Save the changes (Ctrl+S -> Ctrl+X).
|
3. Save the config. Ctrl+S -> Ctrl+X.
|
||||||
4. After that, metrics will be available at: `SERVER_IP:9090/metrics`.
|
4. Metrics are available at SERVER_IP:9090/metrics.
|
||||||
> [!WARNING]
|
> [!WARNING]
|
||||||
> The value `"0.0.0.0/0"` in `metrics_whitelist` opens access to metrics from any IP address. It is recommended to replace it with your personal IP, for example: `"1.2.3.4/32"`.
|
> "0.0.0.0/0" in metrics_whitelist opens access from any IP. Replace with your own IP. For example "1.2.3.4"
|
||||||
|
|
||||||
## Additional parameters
|
## Additional parameters
|
||||||
|
|
||||||
### Domain in the link instead of IP
|
### Domain in link instead of IP
|
||||||
To display a domain instead of an IP address in the connection links, add the following lines to the configuration file:
|
To specify a domain in the links, add to the `[general.links]` section of the config file.
|
||||||
```toml
|
```toml
|
||||||
[general.links]
|
[general.links]
|
||||||
public_host = "proxy.example.com"
|
public_host = "proxy.example.com"
|
||||||
```
|
```
|
||||||
|
|
||||||
### Total server connection limit
|
|
||||||
This parameter limits the total number of active connections to the server:
|
|
||||||
```toml
|
|
||||||
[server]
|
|
||||||
max_connections = 10000 # 0 - unlimited, 10000 - default
|
|
||||||
```
|
|
||||||
|
|
||||||
### Upstream Manager
|
### Upstream Manager
|
||||||
To configure outbound connections (upstreams), add the corresponding parameters to the `[[upstreams]]` section of the configuration file:
|
To specify an upstream, add to the `[[upstreams]]` section of the config.toml file:
|
||||||
|
#### Binding to IP
|
||||||
#### Binding to an outbound IP address
|
|
||||||
```toml
|
```toml
|
||||||
[[upstreams]]
|
[[upstreams]]
|
||||||
type = "direct"
|
type = "direct"
|
||||||
weight = 1
|
weight = 1
|
||||||
enabled = true
|
enabled = true
|
||||||
interface = "192.168.1.100" # Replace with your outbound IP
|
interface = "192.168.1.100" # Change to your outgoing IP
|
||||||
```
|
```
|

#### Using SOCKS4/5 as an Upstream

- Without authentication:

```toml
[[upstreams]]
type = "socks5" # Specify SOCKS4 or SOCKS5
# … (remaining options elided in this excerpt)
enabled = true
```

- With authentication:

```toml
[[upstreams]]
type = "socks5" # Specify SOCKS4 or SOCKS5
# … (server address options elided in this excerpt)
password = "pass" # Password for auth on the SOCKS server
weight = 1 # Set weight for scenarios
enabled = true
```

#### Using Shadowsocks as an Upstream

For this method to work, the `use_middle_proxy = false` parameter must be set.

```toml
[general]
use_middle_proxy = false

[[upstreams]]
type = "shadowsocks"
url = "ss://2022-blake3-aes-256-gcm:BASE64_KEY@1.2.3.4:8388"
weight = 1
enabled = true
```
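In the `url` above, `BASE64_KEY` stands for the pre-shared key. The `2022-blake3-aes-256-gcm` method expects a 32-byte key encoded in Base64; one way to generate it (any cryptographically secure generator works) is:

```shell
# Generate a 32-byte pre-shared key and print it Base64-encoded,
# suitable as the BASE64_KEY part of the ss:// URL above.
openssl rand -base64 32
```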
docs/FAQ.ru.md

## How to set up a "proxy sponsor" channel and statistics via the @MTProxybot bot

1. Open the @MTProxybot bot.
2. Enter the `/newproxy` command.
3. Send the server's IP address and port. For example: `1.2.3.4:443`.
4. Open the configuration file: `nano /etc/telemt/telemt.toml`.
5. Copy the user secret from the `[access.users]` section and send it to the bot.
6. Copy the tag issued by the bot. For example: `1234567890abcdef1234567890abcdef`.
> [!WARNING]
> The link issued by the bot will not work. Do not copy or use it!
7. Uncomment the `ad_tag` parameter and enter the tag received from the bot.
8. Uncomment or add the `use_middle_proxy = true` parameter.

Example configuration:
```toml
[general]
ad_tag = "1234567890abcdef1234567890abcdef"
use_middle_proxy = true
```
9. Save the changes (in nano: Ctrl+S -> Ctrl+X).
10. Restart the telemt service: `systemctl restart telemt`.
11. In the bot, send the `/myproxies` command and select the server you added.
12. Press the "Set promotion" button.
13. Send a **public link** to the channel. Private channels cannot be added!
14. Wait about 1 hour for the information to update on Telegram's servers.
> [!WARNING]
> The sponsored channel will not be shown to you if you are already subscribed to it.

**You can also set up different sponsored channels for different users:**
```toml
[access.user_ad_tags]
hello = "ad_tag"
hello2 = "ad_tag2"
```

## Why the middle proxy (ME) is needed

https://github.com/telemt/telemt/discussions/167

## How many people can use one link

By default, one link can be used by an unlimited number of people.
However, you can limit the number of unique IP addresses per user:
```toml
[access.user_max_unique_ips]
hello = 1
```
This parameter sets the maximum number of unique IP addresses that can use one link at the same time. If the first user disconnects, a second one will be able to connect. Several users can still connect from a single IP address at the same time (for example, devices on the same Wi-Fi network).

## How to create several different links

1. Generate the required number of secrets with: `openssl rand -hex 16`.
2. Open the configuration file: `nano /etc/telemt/telemt.toml`.
3. Add the new users to the `[access.users]` section:
```toml
[access.users]
user1 = "00000000000000000000000000000001"
user2 = "00000000000000000000000000000002"
user3 = "00000000000000000000000000000003"
```
4. Save the configuration (Ctrl+S -> Ctrl+X). Restarting the telemt service is not required.
5. Get the ready-made links with the command:
```bash
curl -s http://127.0.0.1:9091/v1/users | jq
```
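Steps 1 and 3 above can also be combined into a single command; a small sketch (the user names are arbitrary) that prints ready-to-paste lines for the `[access.users]` section:

```shell
# Print one 'name = "secret"' line per user for the [access.users] section.
for name in user1 user2 user3; do
  printf '%s = "%s"\n' "$name" "$(openssl rand -hex 16)"
done
```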

## "Unknown TLS SNI" error

This error usually occurs when you have changed the `tls_domain` parameter, but users keep connecting through old links that still carry the previous domain.

If you need to allow connections with any domain (ignoring SNI mismatches), add the following parameters:
```toml
[censorship]
unknown_sni_action = "mask"
```

## How to view metrics

1. Open the configuration file: `nano /etc/telemt/telemt.toml`.
2. Add the following parameters:
```toml
[server]
metrics_port = 9090
metrics_whitelist = ["127.0.0.1/32", "::1/128", "0.0.0.0/0"]
```
3. Save the changes (Ctrl+S -> Ctrl+X).
4. The metrics will then be available at: `SERVER_IP:9090/metrics`.
> [!WARNING]
> The `"0.0.0.0/0"` value in `metrics_whitelist` opens access to the metrics from any IP address. It is recommended to replace it with your own IP, for example: `"1.2.3.4/32"`.
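The endpoint serves plain-text metrics, one `name value` pair per line (Prometheus-style exposition format), so individual values are easy to pick out with standard tools. A sketch using a sample line — `telemt_connections_total` is a made-up metric name, and in practice you would pipe `curl -s http://SERVER_IP:9090/metrics` instead of the sample:

```shell
# Pull one metric's value out of Prometheus-style "name value" lines.
# telemt_connections_total is a hypothetical example name.
sample='telemt_connections_total 42'
echo "$sample" | awk '$1 == "telemt_connections_total" {print $2}'
```

This prints `42`.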

## Additional parameters

### A domain in the link instead of an IP

To display a domain instead of an IP address in the connection links, add the following lines to the configuration file:
```toml
[general.links]
public_host = "proxy.example.com"
```

### Total server connection limit

This parameter limits the total number of active connections to the server:
```toml
[server]
max_connections = 10000 # 0 - unlimited, 10000 - default
```

### Upstream Manager

To configure outbound connections (upstreams), add the corresponding parameters to the `[[upstreams]]` section of the configuration file:

#### Binding to an outbound IP address

```toml
[[upstreams]]
type = "direct"
weight = 1
enabled = true
interface = "192.168.1.100" # Replace with your outbound IP
```

#### Using SOCKS4/5 as an Upstream

- Without authentication:
```toml
[[upstreams]]
# … (remaining options elided in this excerpt)
weight = 1 # Set weight for scenarios
enabled = true
```

#### Using Shadowsocks as an Upstream

For this method to work, the `use_middle_proxy = false` parameter must be set.

```toml
[general]
use_middle_proxy = false

[[upstreams]]
type = "shadowsocks"
url = "ss://2022-blake3-aes-256-gcm:BASE64_KEY@1.2.3.4:8388"
weight = 1
enabled = true
```
docs/LICENSE/LICENSE.de.md

# TELEMT Public License 3

***All rights reserved (c) 2026 Telemt***

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and the accompanying documentation (hereinafter the "Software") to use the Software without restriction, including the rights to use, reproduce, modify, create derivative works of, merge, publish, distribute, sublicense and/or sell copies of the Software, and to grant these rights to persons to whom the Software is furnished, provided that all copyright notices and the terms and conditions of this License are preserved and complied with.

### Definitions

For the purposes of this License, the following definitions apply:

**"Software"** — the Telemt software, including source code, documentation, and all associated files distributed under the terms of this License.

**"Contributor"** — any natural person or legal entity that has submitted code, patches, documentation, or other materials that were accepted by the project maintainers and incorporated into the Software.

**"Contribution"** — any copyrightable work intentionally submitted for inclusion in the Software.

**"Modified Version"** — any version of the Software that has been changed, adapted, extended, or otherwise modified relative to the original Software.

**"Maintainers"** — the natural persons or legal entities responsible for the official Telemt project and its official releases.

### 1 Attribution

When the Software is redistributed, whether in source code or binary form, the following MUST be preserved:

- the above copyright notice;
- the full text of this License;
- all existing attribution notices.

### 2 Modification notice

If changes are made to the Software, the person who made those changes MUST clearly indicate that the Software has been modified and include a brief description of the changes made.

Modified versions of the Software MUST NOT be presented as the original version of Telemt.

### 3 Trademarks and branding

This License DOES NOT grant any rights to use the name **"Telemt"**, the Telemt logo, or any other Telemt trademarks, marks, or branding elements.

Redistributed or modified versions of the Software MUST NOT use the Telemt name in a way that could give users the impression of official origin or endorsement by the Telemt project, unless the maintainers have given explicit permission.

Use of the name **Telemt** to describe a modified version of the Software is permitted only if that version is clearly identified as modified or unofficial.

Any distribution that could reasonably mislead users into believing it is an official Telemt release is prohibited.

### 4 Binary distribution transparency

When distributing compiled binary versions of the Software, the distributor is HEREBY ENCOURAGED, where reasonably possible, to provide access to the corresponding source code and build instructions.

This practice supports transparency and allows recipients to verify the integrity and reproducibility of distributed builds.

### 5 Patent grant and termination of rights

Each Contributor grants the recipients of the Software a perpetual, worldwide, non-exclusive, free-of-charge, royalty-free, and irrevocable patent license to:

- make,
- have made,
- use,
- offer for sale,
- sell,
- import,
- and otherwise distribute the Software.

This patent license extends only to those patent claims that would necessarily be infringed by the Contributor's Contribution alone or in combination with the Software.

If a person initiates or participates in patent litigation, including counterclaims or cross-claims, alleging that the Software or a Contribution contained in it infringes a patent, **all rights granted to that person under this License terminate immediately upon the filing of the claim**.

In addition, all rights granted under this License terminate **automatically** if a person initiates legal proceedings alleging that the Software itself infringes a patent or other intellectual property rights.

### 6 Participation and contributions to development

Unless a Contributor explicitly states otherwise, every Contribution intentionally submitted for inclusion in the Software is deemed licensed under the terms of this License.

By submitting a Contribution, the Contributor grants the maintainers of the Telemt project and all recipients of the Software the rights described in this License with respect to that Contribution.

### 7 Attribution for network and service use

If the Software is used to provide a publicly accessible network service, the operator of that service MUST provide attribution to Telemt in at least one of the following locations:

* the service documentation;
* the service description;
* an "About" page or comparable informational page;
* other user-accessible materials reasonably associated with the service.

Such attribution MUST NOT give the impression that the service is supported or officially endorsed by the Telemt project or its maintainers.

### 8 Disclaimer of warranty and severability clause

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NONINFRINGEMENT.

IN NO EVENT SHALL THE AUTHORS OR RIGHTS HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES, OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT, OR OTHERWISE, ARISING FROM THE SOFTWARE OR THE USE OF THE SOFTWARE.

IF ANY PROVISION OF THIS LICENSE IS HELD TO BE INVALID OR UNENFORCEABLE, THAT PROVISION SHALL BE INTERPRETED TO REFLECT THE ORIGINAL INTENT OF THE PARTIES AS CLOSELY AS POSSIBLE; THE REMAINING PROVISIONS SHALL REMAIN UNAFFECTED AND IN FULL FORCE.
###### TELEMT Public License 3 ######

##### Copyright (c) 2026 Telemt #####

Permission is hereby granted, free of charge, to any person obtaining a copy of this Software and associated documentation files (the "Software"), to use, reproduce, modify, prepare derivative works of, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, provided that all copyright notices, license terms, and conditions set forth in this License are preserved and complied with.

### Official Translations

The canonical version of this License is the English version.

Official translations are provided for informational purposes only and for convenience, and do not have legal force. In case of any discrepancy, the English version of this License shall prevail.

Available versions:
- English in Markdown: docs/LICENSE/LICENSE.md
- German: docs/LICENSE/LICENSE.de.md
- Russian: docs/LICENSE/LICENSE.ru.md

### Definitions

For the purposes of this License:

"Software" means the Telemt software, including source code, documentation, and any associated files distributed under this License.

"Contributor" means any person or entity that submits code, patches, documentation, or other contributions to the Software that are accepted into the Software by the maintainers.

"Contribution" means any work of authorship intentionally submitted to the Software for inclusion in the Software.

"Modified Version" means any version of the Software that has been changed, adapted, extended, or otherwise modified from the original Software.

"Maintainers" means the individuals or entities responsible for the official Telemt project and its releases.

#### 1 Attribution

Redistributions of the Software, in source or binary form, MUST RETAIN the above copyright notice, this license text, and any existing attribution notices.

#### 2 Modification Notice

If you modify the Software, you MUST clearly state that the Software has been modified and include a brief description of the changes made.

Modified versions MUST NOT be presented as the original Telemt.

#### 3 Trademark and Branding

This license DOES NOT grant permission to use the name "Telemt", the Telemt logo, or any Telemt trademarks or branding.

Redistributed or modified versions of the Software MAY NOT use the Telemt name in a way that suggests endorsement or official origin without explicit permission from the Telemt maintainers.

Use of the name "Telemt" to describe a modified version of the Software is permitted only if the modified version is clearly identified as a modified or unofficial version.

Any distribution that could reasonably confuse users into believing that the software is an official Telemt release is prohibited.

#### 4 Binary Distribution Transparency

If you distribute compiled binaries of the Software, you are ENCOURAGED to provide access to the corresponding source code and build instructions where reasonably possible.

This helps preserve transparency and allows recipients to verify the integrity and reproducibility of distributed builds.

#### 5 Patent Grant and Defensive Termination Clause

Each contributor grants you a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Software.

This patent license applies only to those patent claims necessarily infringed by the contributor's contribution alone or by combination of their contribution with the Software.

If you initiate or participate in any patent litigation, including cross-claims or counterclaims, alleging that the Software or any contribution incorporated within the Software constitutes patent infringement, then **all rights granted to you under this license shall terminate immediately** as of the date such litigation is filed.

Additionally, if you initiate legal action alleging that the Software itself infringes your patent or other intellectual property rights, then all rights granted to you under this license SHALL TERMINATE automatically.

#### 6 Contributions

Unless you explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Software shall be licensed under the terms of this License.

By submitting a Contribution, you grant the Telemt maintainers and all recipients of the Software the rights described in this License with respect to that Contribution.

#### 7 Network Use Attribution

If the Software is used to provide a publicly accessible network service, the operator of such service MUST provide attribution to Telemt in at least one of the following locations:

- service documentation
- service description
- an "About" or similar informational page
- other user-visible materials reasonably associated with the service

Such attribution MUST NOT imply endorsement by the Telemt project or its maintainers.

#### 8 Disclaimer of Warranty and Severability Clause

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.

IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

IF ANY PROVISION OF THIS LICENSE IS HELD TO BE INVALID OR UNENFORCEABLE, SUCH PROVISION SHALL BE INTERPRETED TO REFLECT THE ORIGINAL INTENT OF THE PARTIES AS CLOSELY AS POSSIBLE, AND THE REMAINING PROVISIONS SHALL REMAIN IN FULL FORCE AND EFFECT.
docs/LICENSE/LICENSE.ru.md

# TELEMT Public License 3

***All rights reserved (c) 2026 Telemt***

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and the accompanying documentation (hereinafter the "Software") to use the Software without restriction, including the right to use, reproduce, modify, create derivative works, merge, publish, distribute, sublicense and/or sell copies of the Software, and to grant such rights to persons to whom the Software is furnished, subject to compliance with all copyright notices and the terms and conditions of this License.

### Definitions

For the purposes of this License, the following definitions apply:

**"Software"** — the Telemt software, including source code, documentation, and any associated files distributed under the terms of this License.

**"Contributor"** — any natural person or legal entity that has submitted code, fixes (patches), documentation, or other materials that were accepted by the project maintainers and included in the Software.

**"Contribution"** — any work of authorship intentionally submitted for inclusion in the Software.

**"Modified Version"** — any version of the Software that has been changed, adapted, extended, or otherwise modified relative to the original Software.

**"Maintainers"** — the natural persons or legal entities responsible for the official Telemt project and its official releases.

### 1 Attribution

When the Software is distributed, whether in source code or binary form, the following MUST BE PRESERVED:

- the copyright notice stated above;
- the text of this License;
- any existing attribution notices.

### 2 Modification notice

If changes are made to the Software, the person who made them MUST explicitly state that the Software has been modified, and include a brief description of the changes made.

Modified versions of the Software MUST NOT be presented as the original version of Telemt.

### 3 Trademarks and designations

This License DOES NOT grant the right to use the name **"Telemt"**, the Telemt logo, or any Telemt trademarks, brand designations, or branding elements.

Distributed or modified versions of the Software MUST NOT use the Telemt name in a way that may give users the impression of official origin or endorsement by the Telemt project without the explicit permission of the project maintainers.

Use of the name **Telemt** to describe a modified version of the Software is permitted only if that version is clearly designated as modified or unofficial.

Any distribution that could reasonably mislead users into believing that the software is an official Telemt release is prohibited.

### 4 Transparency of binary distribution

When distributing compiled binary versions of the Software, the distributor is HEREBY ENCOURAGED to provide access to the corresponding source code and build instructions where reasonably possible.

This practice supports transparency of distribution and allows recipients to verify the integrity and reproducibility of distributed builds.

### 5 Patent license grant and termination of rights

Each Contributor grants the recipients of the Software a perpetual, worldwide, non-exclusive, free-of-charge, royalty-free, and irrevocable patent license to:

- make,
- have made,
- use,
- offer for sale,
- sell,
- import,
- and otherwise distribute the Software.

Such a patent license extends only to those patent claims that are necessarily infringed by the Contributor's Contribution as such, or by its combination with the Software.

If a person initiates or participates in any patent litigation, including counterclaims or cross-claims, alleging that the Software or any Contribution included in it infringes a patent, **all rights granted to that person by this License terminate immediately** as of the date the claim is filed.

In addition, if a person initiates legal proceedings alleging that the Software itself infringes that person's patent or other intellectual property rights, all rights granted by this License **terminate automatically**.

### 6 Participation and contributions to development

Unless the Contributor has explicitly stated otherwise, any Contribution intentionally submitted for inclusion in the Software is considered licensed under the terms of this License.
By providing a Contribution, the Contributor grants the maintainers of the Telemt project and all recipients of the Software the rights provided for by this License with respect to that Contribution.

### 7 Attribution for network and service use

If the Software is used to provide a publicly accessible network service, the operator of that service MUST provide attribution to Telemt in at least one of the following locations:

- the service documentation;
- the service description;
- an "About" page or a comparable informational page;
- other materials available to users and reasonably associated with the service.

Such attribution MUST NOT give the impression of approval or official support by the Telemt project or its maintainers.

### 8 Disclaimer of warranty and severability

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NONINFRINGEMENT.

IN NO EVENT SHALL THE AUTHORS OR RIGHTS HOLDERS BE LIABLE FOR ANY CLAIMS, DAMAGES, OR OTHER LIABILITY ARISING IN CONTRACT, TORT, OR OTHERWISE, IN CONNECTION WITH THE SOFTWARE OR ITS USE.

IF ANY PROVISION OF THIS LICENSE IS HELD INVALID OR UNENFORCEABLE, THAT PROVISION SHALL BE INTERPRETED AS CLOSELY AS POSSIBLE TO THE ORIGINAL INTENT OF THE PARTIES, WHILE THE REMAINING PROVISIONS RETAIN FULL LEGAL FORCE.
**0. Check port and generate secrets**
|
**0. Check port and generate secrets**
|
||||||
|
|
||||||
The port you have selected for use should not be in the list:
|
The port you have selected for use should be MISSING from the list, when:
|
||||||
```bash
|
```bash
|
||||||
netstat -lnp
|
netstat -lnp
|
||||||
```
|
```
|
||||||
|
|
||||||
Generate 16 bytes/32 characters in HEX format with OpenSSL or another way:
|
Generate 16 bytes/32 characters HEX with OpenSSL or another way:
|
||||||
```bash
|
```bash
|
||||||
openssl rand -hex 16
|
openssl rand -hex 16
|
||||||
```
|
```
|
||||||
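The two commands above can be combined into one hedged snippet that also sanity-checks the result (16 bytes must yield exactly 32 lowercase hex characters; the check itself is illustrative, not part of telemt):

```shell
# Generate the 16-byte secret and verify it is 32 lowercase hex characters.
SECRET="$(openssl rand -hex 16)"
test "${#SECRET}" -eq 32 || { echo "unexpected length" >&2; exit 1; }
case "$SECRET" in *[!0-9a-f]*) echo "not hex" >&2; exit 1;; esac
echo "$SECRET"
```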
|
|
@ -50,7 +50,7 @@ Save the obtained result somewhere. You will need it later!
|
||||||
|
|
||||||
**1. Place your config to /etc/telemt/telemt.toml**
|
**1. Place your config to /etc/telemt/telemt.toml**
|
||||||
|
|
||||||
Create the config directory:
|
Create config directory:
|
||||||
```bash
|
```bash
|
||||||
mkdir /etc/telemt
|
mkdir /etc/telemt
|
||||||
```
|
```
|
||||||
|
|
@ -59,7 +59,7 @@ Open nano
|
||||||
```bash
|
```bash
|
||||||
nano /etc/telemt/telemt.toml
|
nano /etc/telemt/telemt.toml
|
||||||
```
|
```
|
||||||
Insert your configuration:
|
paste your config
|
||||||
|
|
||||||
```toml
|
```toml
|
||||||
# === General Settings ===
|
# === General Settings ===
|
||||||
|
|
@ -72,9 +72,6 @@ classic = false
|
||||||
secure = false
|
secure = false
|
||||||
tls = true
|
tls = true
|
||||||
|
|
||||||
[server]
|
|
||||||
port = 443
|
|
||||||
|
|
||||||
[server.api]
|
[server.api]
|
||||||
enabled = true
|
enabled = true
|
||||||
# listen = "127.0.0.1:9091"
|
# listen = "127.0.0.1:9091"
|
||||||
|
|
@ -94,8 +91,7 @@ then Ctrl+S -> Ctrl+X to save
|
||||||
|
|
||||||
> [!WARNING]
|
> [!WARNING]
|
||||||
> Replace the value of the hello parameter with the value you obtained in step 0.
|
> Replace the value of the hello parameter with the value you obtained in step 0.
|
||||||
> Additionally, change the value of the tls_domain parameter to a different website.
|
> Replace the value of the tls_domain parameter with another website.
|
||||||
> Changing the tls_domain parameter will break all links that use the old domain!
|
|
||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
|
|
@ -106,14 +102,14 @@ useradd -d /opt/telemt -m -r -U telemt
|
||||||
chown -R telemt:telemt /etc/telemt
|
chown -R telemt:telemt /etc/telemt
|
||||||
```
|
```
|
||||||
|
|
||||||
**3. Create service in /etc/systemd/system/telemt.service**
|
**3. Create service on /etc/systemd/system/telemt.service**
|
||||||
|
|
||||||
Open nano
|
Open nano
|
||||||
```bash
|
```bash
|
||||||
nano /etc/systemd/system/telemt.service
|
nano /etc/systemd/system/telemt.service
|
||||||
```
|
```
|
||||||
|
|
||||||
Insert this Systemd module:
|
paste this Systemd Module
|
||||||
```bash
|
```bash
|
||||||
[Unit]
|
[Unit]
|
||||||
Description=Telemt
|
Description=Telemt
|
||||||
|
|
@ -128,8 +124,8 @@ WorkingDirectory=/opt/telemt
|
||||||
ExecStart=/bin/telemt /etc/telemt/telemt.toml
|
ExecStart=/bin/telemt /etc/telemt/telemt.toml
|
||||||
Restart=on-failure
|
Restart=on-failure
|
||||||
LimitNOFILE=65536
|
LimitNOFILE=65536
|
||||||
AmbientCapabilities=CAP_NET_ADMIN CAP_NET_BIND_SERVICE
|
AmbientCapabilities=CAP_NET_BIND_SERVICE
|
||||||
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE
|
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
|
||||||
NoNewPrivileges=true
|
NoNewPrivileges=true
|
||||||
|
|
||||||
[Install]
|
[Install]
|
||||||
|
|
@ -148,16 +144,13 @@ systemctl daemon-reload
|
||||||
|
|
||||||
**6.** For automatic startup at system boot, enter `systemctl enable telemt`
|
**6.** For automatic startup at system boot, enter `systemctl enable telemt`
|
||||||
|
|
||||||
**7.** To get the link(s), enter:
|
**7.** To get the link(s), enter
|
||||||
```bash
|
```bash
|
||||||
curl -s http://127.0.0.1:9091/v1/users | jq
|
curl -s http://127.0.0.1:9091/v1/users | jq
|
||||||
```
|
```
|
||||||
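If you want only the links themselves rather than the full JSON, a hedged `jq` filter like the one below avoids depending on exact field names; it simply assumes the links are plain strings starting with `tg://` or `https://t.me/`, which is an assumption about the payload shape, not a documented contract:

```shell
# Pull every string that looks like a proxy link out of the JSON response.
curl -s http://127.0.0.1:9091/v1/users \
  | jq -r '.. | strings | select(startswith("tg://") or startswith("https://t.me/"))'
```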
|
|
||||||
> Any number of people can use one link.
|
> Any number of people can use one link.
|
||||||
|
|
||||||
> [!WARNING]
|
|
||||||
> Only the command from step 7 can provide a working link. Do not try to create it yourself or copy it from anywhere if you are not sure what you are doing!
|
|
||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
# Telemt via Docker Compose
|
# Telemt via Docker Compose
|
||||||
|
|
@ -185,8 +178,6 @@ docker compose down
|
||||||
docker build -t telemt:local .
|
docker build -t telemt:local .
|
||||||
docker run --name telemt --restart unless-stopped \
|
docker run --name telemt --restart unless-stopped \
|
||||||
-p 443:443 \
|
-p 443:443 \
|
||||||
-p 9090:9090 \
|
|
||||||
-p 9091:9091 \
|
|
||||||
-e RUST_LOG=info \
|
-e RUST_LOG=info \
|
||||||
-v "$PWD/config.toml:/app/config.toml:ro" \
|
-v "$PWD/config.toml:/app/config.toml:ro" \
|
||||||
--read-only \
|
--read-only \
|
||||||
|
|
|
||||||
|
|
@ -72,9 +72,6 @@ classic = false
|
||||||
secure = false
|
secure = false
|
||||||
tls = true
|
tls = true
|
||||||
|
|
||||||
[server]
|
|
||||||
port = 443
|
|
||||||
|
|
||||||
[server.api]
|
[server.api]
|
||||||
enabled = true
|
enabled = true
|
||||||
# listen = "127.0.0.1:9091"
|
# listen = "127.0.0.1:9091"
|
||||||
|
|
@ -95,7 +92,6 @@ hello = "00000000000000000000000000000000"
|
||||||
> [!WARNING]
|
> [!WARNING]
|
||||||
> Замените значение параметра hello на значение, которое вы получили в пункте 0.
|
> Замените значение параметра hello на значение, которое вы получили в пункте 0.
|
||||||
> Так же замените значение параметра tls_domain на другой сайт.
|
> Так же замените значение параметра tls_domain на другой сайт.
|
||||||
> Изменение параметра tls_domain сделает нерабочими все ссылки, использующие старый домен!
|
|
||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
|
|
@ -128,8 +124,8 @@ WorkingDirectory=/opt/telemt
|
||||||
ExecStart=/bin/telemt /etc/telemt/telemt.toml
|
ExecStart=/bin/telemt /etc/telemt/telemt.toml
|
||||||
Restart=on-failure
|
Restart=on-failure
|
||||||
LimitNOFILE=65536
|
LimitNOFILE=65536
|
||||||
AmbientCapabilities=CAP_NET_ADMIN CAP_NET_BIND_SERVICE
|
AmbientCapabilities=CAP_NET_BIND_SERVICE
|
||||||
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE
|
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
|
||||||
NoNewPrivileges=true
|
NoNewPrivileges=true
|
||||||
|
|
||||||
[Install]
|
[Install]
|
||||||
|
|
@ -179,13 +175,11 @@ docker compose down
|
||||||
> - По умолчанию публикуются порты 443:443, а контейнер запускается со сброшенными привилегиями (добавлена только `NET_BIND_SERVICE`)
|
> - По умолчанию публикуются порты 443:443, а контейнер запускается со сброшенными привилегиями (добавлена только `NET_BIND_SERVICE`)
|
||||||
> - Если вам действительно нужна сеть хоста (обычно это требуется только для некоторых конфигураций IPv6), раскомментируйте `network_mode: host`
|
> - Если вам действительно нужна сеть хоста (обычно это требуется только для некоторых конфигураций IPv6), раскомментируйте `network_mode: host`
|
||||||
|
|
||||||
**Запуск без Docker Compose**
|
**Запуск в Docker Compose**
|
||||||
```bash
|
```bash
|
||||||
docker build -t telemt:local .
|
docker build -t telemt:local .
|
||||||
docker run --name telemt --restart unless-stopped \
|
docker run --name telemt --restart unless-stopped \
|
||||||
-p 443:443 \
|
-p 443:443 \
|
||||||
-p 9090:9090 \
|
|
||||||
-p 9091:9091 \
|
|
||||||
-e RUST_LOG=info \
|
-e RUST_LOG=info \
|
||||||
-v "$PWD/config.toml:/app/config.toml:ro" \
|
-v "$PWD/config.toml:/app/config.toml:ro" \
|
||||||
--read-only \
|
--read-only \
|
||||||
|
|
|
||||||
|
|
@ -82,7 +82,7 @@ Die unten angegebenen `Default`-Werte sind Code-Defaults (bei fehlendem Schlüss
|
||||||
|
|
||||||
| Feld | Gilt für | Typ | Pflicht | Default | Bedeutung |
|
| Feld | Gilt für | Typ | Pflicht | Default | Bedeutung |
|
||||||
|---|---|---|---|---|---|
|
|---|---|---|---|---|---|
|
||||||
| `[[upstreams]].type` | alle Upstreams | `"direct" \| "socks4" \| "socks5" \| "shadowsocks"` | ja | n/a | Upstream-Transporttyp. |
|
| `[[upstreams]].type` | alle Upstreams | `"direct" \| "socks4" \| "socks5"` | ja | n/a | Upstream-Transporttyp. |
|
||||||
| `[[upstreams]].weight` | alle Upstreams | `u16` | nein | `1` | Basisgewicht für weighted-random Auswahl. |
|
| `[[upstreams]].weight` | alle Upstreams | `u16` | nein | `1` | Basisgewicht für weighted-random Auswahl. |
|
||||||
| `[[upstreams]].enabled` | alle Upstreams | `bool` | nein | `true` | Deaktivierte Einträge werden beim Start ignoriert. |
|
| `[[upstreams]].enabled` | alle Upstreams | `bool` | nein | `true` | Deaktivierte Einträge werden beim Start ignoriert. |
|
||||||
| `[[upstreams]].scopes` | alle Upstreams | `String` | nein | `""` | Komma-separierte Scope-Tags für Request-Routing. |
|
| `[[upstreams]].scopes` | alle Upstreams | `String` | nein | `""` | Komma-separierte Scope-Tags für Request-Routing. |
|
||||||
|
|
@ -95,8 +95,6 @@ Die unten angegebenen `Default`-Werte sind Code-Defaults (bei fehlendem Schlüss
|
||||||
| `interface` | `socks5` | `Option<String>` | nein | `null` | Wird nur genutzt, wenn `address` als `ip:port` angegeben ist. |
|
| `interface` | `socks5` | `Option<String>` | nein | `null` | Wird nur genutzt, wenn `address` als `ip:port` angegeben ist. |
|
||||||
| `username` | `socks5` | `Option<String>` | nein | `null` | SOCKS5 Benutzername. |
|
| `username` | `socks5` | `Option<String>` | nein | `null` | SOCKS5 Benutzername. |
|
||||||
| `password` | `socks5` | `Option<String>` | nein | `null` | SOCKS5 Passwort. |
|
| `password` | `socks5` | `Option<String>` | nein | `null` | SOCKS5 Passwort. |
|
||||||
| `url` | `shadowsocks` | `String` | ja | n/a | Shadowsocks-SIP002-URL (`ss://...`). In Runtime-APIs wird nur `host:port` offengelegt. |
|
|
||||||
| `interface` | `shadowsocks` | `Option<String>` | nein | `null` | Optionales ausgehendes Bind-Interface oder lokale Literal-IP. |
|
|
||||||
|
|
||||||
### Runtime-Regeln (wichtig)
|
### Runtime-Regeln (wichtig)
|
||||||
|
|
||||||
|
|
@ -117,7 +115,6 @@ Die unten angegebenen `Default`-Werte sind Code-Defaults (bei fehlendem Schlüss
|
||||||
8. Im ME-Modus wird der gewählte Upstream auch für den ME-TCP-Dial-Pfad verwendet.
|
8. Im ME-Modus wird der gewählte Upstream auch für den ME-TCP-Dial-Pfad verwendet.
|
||||||
9. Im ME-Modus ist bei `direct` mit bind/interface die STUN-Reflection bind-aware für KDF-Adressmaterial.
|
9. Im ME-Modus ist bei `direct` mit bind/interface die STUN-Reflection bind-aware für KDF-Adressmaterial.
|
||||||
10. Im ME-Modus werden bei SOCKS-Upstream `BND.ADDR/BND.PORT` für KDF verwendet, wenn gültig/öffentlich und gleiche IP-Familie.
|
10. Im ME-Modus werden bei SOCKS-Upstream `BND.ADDR/BND.PORT` für KDF verwendet, wenn gültig/öffentlich und gleiche IP-Familie.
|
||||||
11. `shadowsocks`-Upstreams erfordern `general.use_middle_proxy = false`. Mit aktiviertem ME-Modus schlägt das Laden der Config sofort fehl.
|
|
||||||
|
|
||||||
## Upstream-Konfigurationsbeispiele
|
## Upstream-Konfigurationsbeispiele
|
||||||
|
|
||||||
|
|
@ -153,20 +150,7 @@ weight = 2
|
||||||
enabled = true
|
enabled = true
|
||||||
```
|
```
|
||||||
|
|
||||||
### Beispiel 4: Shadowsocks-Upstream
|
### Beispiel 4: Gemischte Upstreams mit Scopes
|
||||||
|
|
||||||
```toml
|
|
||||||
[general]
|
|
||||||
use_middle_proxy = false
|
|
||||||
|
|
||||||
[[upstreams]]
|
|
||||||
type = "shadowsocks"
|
|
||||||
url = "ss://2022-blake3-aes-256-gcm:BASE64_KEY@198.51.100.50:8388"
|
|
||||||
weight = 2
|
|
||||||
enabled = true
|
|
||||||
```
|
|
||||||
|
|
||||||
### Beispiel 5: Gemischte Upstreams mit Scopes
|
|
||||||
|
|
||||||
```toml
|
```toml
|
||||||
[[upstreams]]
|
[[upstreams]]
|
||||||
|
|
|
||||||
|
|
@ -82,7 +82,7 @@ Defaults below are code defaults (used when a key is omitted), not necessarily v
|
||||||
|
|
||||||
| Field | Applies to | Type | Required | Default | Meaning |
|
| Field | Applies to | Type | Required | Default | Meaning |
|
||||||
|---|---|---|---|---|---|
|
|---|---|---|---|---|---|
|
||||||
| `[[upstreams]].type` | all upstreams | `"direct" \| "socks4" \| "socks5" \| "shadowsocks"` | yes | n/a | Upstream transport type. |
|
| `[[upstreams]].type` | all upstreams | `"direct" \| "socks4" \| "socks5"` | yes | n/a | Upstream transport type. |
|
||||||
| `[[upstreams]].weight` | all upstreams | `u16` | no | `1` | Base weight for weighted-random selection. |
|
| `[[upstreams]].weight` | all upstreams | `u16` | no | `1` | Base weight for weighted-random selection. |
|
||||||
| `[[upstreams]].enabled` | all upstreams | `bool` | no | `true` | Disabled entries are ignored at startup. |
|
| `[[upstreams]].enabled` | all upstreams | `bool` | no | `true` | Disabled entries are ignored at startup. |
|
||||||
| `[[upstreams]].scopes` | all upstreams | `String` | no | `""` | Comma-separated scope tags for request-level routing. |
|
| `[[upstreams]].scopes` | all upstreams | `String` | no | `""` | Comma-separated scope tags for request-level routing. |
|
||||||
|
|
@ -95,8 +95,6 @@ Defaults below are code defaults (used when a key is omitted), not necessarily v
|
||||||
| `interface` | `socks5` | `Option<String>` | no | `null` | Used only for SOCKS server `ip:port` dial path. |
|
| `interface` | `socks5` | `Option<String>` | no | `null` | Used only for SOCKS server `ip:port` dial path. |
|
||||||
| `username` | `socks5` | `Option<String>` | no | `null` | SOCKS5 username auth. |
|
| `username` | `socks5` | `Option<String>` | no | `null` | SOCKS5 username auth. |
|
||||||
| `password` | `socks5` | `Option<String>` | no | `null` | SOCKS5 password auth. |
|
| `password` | `socks5` | `Option<String>` | no | `null` | SOCKS5 password auth. |
|
||||||
| `url` | `shadowsocks` | `String` | yes | n/a | Shadowsocks SIP002 URL (`ss://...`). Only `host:port` is exposed in runtime APIs. |
|
|
||||||
| `interface` | `shadowsocks` | `Option<String>` | no | `null` | Optional outgoing bind interface or literal local IP. |
|
|
||||||
|
|
||||||
### Runtime rules (important)
|
### Runtime rules (important)
|
||||||
|
|
||||||
|
|
@ -117,7 +115,6 @@ Defaults below are code defaults (used when a key is omitted), not necessarily v
|
||||||
8. In ME mode, the selected upstream is also used for ME TCP dial path.
|
8. In ME mode, the selected upstream is also used for ME TCP dial path.
|
||||||
9. In ME mode for `direct` upstream with bind/interface, STUN reflection logic is bind-aware for KDF source material.
|
9. In ME mode for `direct` upstream with bind/interface, STUN reflection logic is bind-aware for KDF source material.
|
||||||
10. In ME mode for SOCKS upstream, SOCKS `BND.ADDR/BND.PORT` is used for KDF when it is valid/public for the same family.
|
10. In ME mode for SOCKS upstream, SOCKS `BND.ADDR/BND.PORT` is used for KDF when it is valid/public for the same family.
|
||||||
11. `shadowsocks` upstreams require `general.use_middle_proxy = false`. Config load fails fast if ME mode is enabled.
|
|
||||||
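The `weight`/`enabled` semantics above can be sketched as a weighted-random pick over the enabled upstreams. This is an illustrative Rust sketch, not telemt's actual internals; the type names and the externally supplied `roll` (which a real implementation would draw from an RNG) are assumptions:

```rust
// Illustrative weighted-random upstream selection (not telemt's real code).
struct Upstream {
    name: &'static str,
    weight: u16,
    enabled: bool,
}

/// Picks an upstream by walking cumulative weights; `roll` stands in for
/// a random draw in `0..total_weight`.
fn pick<'a>(ups: &'a [Upstream], roll: u32) -> Option<&'a Upstream> {
    let enabled: Vec<&Upstream> = ups.iter().filter(|u| u.enabled).collect();
    let total: u32 = enabled.iter().map(|u| u.weight as u32).sum();
    if total == 0 {
        return None; // nothing enabled, or all weights zero
    }
    let mut point = roll % total;
    for u in enabled {
        if point < u.weight as u32 {
            return Some(u);
        }
        point -= u.weight as u32;
    }
    None
}

fn main() {
    let ups = [
        Upstream { name: "direct", weight: 1, enabled: true },
        Upstream { name: "socks5", weight: 2, enabled: true },
        // Disabled entries are ignored, matching the table above.
        Upstream { name: "off", weight: 9, enabled: false },
    ];
    println!("{}", pick(&ups, 2).unwrap().name);
}
```

A roll of `0` lands on `direct` (weight 1); rolls `1..=2` land on `socks5`, so heavier upstreams are chosen proportionally more often.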
|
|
||||||
## Upstream Configuration Examples
|
## Upstream Configuration Examples
|
||||||
|
|
||||||
|
|
@ -153,20 +150,7 @@ weight = 2
|
||||||
enabled = true
|
enabled = true
|
||||||
```
|
```
|
||||||
|
|
||||||
### Example 4: Shadowsocks upstream
|
### Example 4: Mixed upstreams with scopes
|
||||||
|
|
||||||
```toml
|
|
||||||
[general]
|
|
||||||
use_middle_proxy = false
|
|
||||||
|
|
||||||
[[upstreams]]
|
|
||||||
type = "shadowsocks"
|
|
||||||
url = "ss://2022-blake3-aes-256-gcm:BASE64_KEY@198.51.100.50:8388"
|
|
||||||
weight = 2
|
|
||||||
enabled = true
|
|
||||||
```
|
|
||||||
|
|
||||||
### Example 5: Mixed upstreams with scopes
|
|
||||||
|
|
||||||
```toml
|
```toml
|
||||||
[[upstreams]]
|
[[upstreams]]
|
||||||
|
|
|
||||||
|
|
@ -82,7 +82,7 @@
|
||||||
|
|
||||||
| Поле | Применимость | Тип | Обязательно | Default | Назначение |
|
| Поле | Применимость | Тип | Обязательно | Default | Назначение |
|
||||||
|---|---|---|---|---|---|
|
|---|---|---|---|---|---|
|
||||||
| `[[upstreams]].type` | все upstream | `"direct" \| "socks4" \| "socks5" \| "shadowsocks"` | да | n/a | Тип upstream транспорта. |
|
| `[[upstreams]].type` | все upstream | `"direct" \| "socks4" \| "socks5"` | да | n/a | Тип upstream транспорта. |
|
||||||
| `[[upstreams]].weight` | все upstream | `u16` | нет | `1` | Базовый вес в weighted-random выборе. |
|
| `[[upstreams]].weight` | все upstream | `u16` | нет | `1` | Базовый вес в weighted-random выборе. |
|
||||||
| `[[upstreams]].enabled` | все upstream | `bool` | нет | `true` | Выключенные записи игнорируются на старте. |
|
| `[[upstreams]].enabled` | все upstream | `bool` | нет | `true` | Выключенные записи игнорируются на старте. |
|
||||||
| `[[upstreams]].scopes` | все upstream | `String` | нет | `""` | Список scope-токенов через запятую для маршрутизации. |
|
| `[[upstreams]].scopes` | все upstream | `String` | нет | `""` | Список scope-токенов через запятую для маршрутизации. |
|
||||||
|
|
@ -95,8 +95,6 @@
|
||||||
| `interface` | `socks5` | `Option<String>` | нет | `null` | Используется только если `address` задан как `ip:port`. |
|
| `interface` | `socks5` | `Option<String>` | нет | `null` | Используется только если `address` задан как `ip:port`. |
|
||||||
| `username` | `socks5` | `Option<String>` | нет | `null` | Логин SOCKS5 auth. |
|
| `username` | `socks5` | `Option<String>` | нет | `null` | Логин SOCKS5 auth. |
|
||||||
| `password` | `socks5` | `Option<String>` | нет | `null` | Пароль SOCKS5 auth. |
|
| `password` | `socks5` | `Option<String>` | нет | `null` | Пароль SOCKS5 auth. |
|
||||||
| `url` | `shadowsocks` | `String` | да | n/a | Shadowsocks SIP002 URL (`ss://...`). В runtime API раскрывается только `host:port`. |
|
|
||||||
| `interface` | `shadowsocks` | `Option<String>` | нет | `null` | Необязательный исходящий bind-интерфейс или literal локальный IP. |
|
|
||||||
|
|
||||||
### Runtime-правила
|
### Runtime-правила
|
||||||
|
|
||||||
|
|
@ -117,7 +115,6 @@
|
||||||
8. В ME-режиме выбранный upstream также используется для ME TCP dial path.
|
8. В ME-режиме выбранный upstream также используется для ME TCP dial path.
|
||||||
9. В ME-режиме для `direct` upstream с bind/interface STUN-рефлексия выполняется bind-aware для KDF материала.
|
9. В ME-режиме для `direct` upstream с bind/interface STUN-рефлексия выполняется bind-aware для KDF материала.
|
||||||
10. В ME-режиме для SOCKS upstream используются `BND.ADDR/BND.PORT` для KDF, если адрес валиден/публичен и соответствует IP family.
|
10. В ME-режиме для SOCKS upstream используются `BND.ADDR/BND.PORT` для KDF, если адрес валиден/публичен и соответствует IP family.
|
||||||
11. `shadowsocks` upstream требует `general.use_middle_proxy = false`. При включенном ME-режиме конфиг отклоняется при загрузке.
|
|
||||||
|
|
||||||
## Примеры конфигурации Upstreams
|
## Примеры конфигурации Upstreams
|
||||||
|
|
||||||
|
|
@ -153,20 +150,7 @@ weight = 2
|
||||||
enabled = true
|
enabled = true
|
||||||
```
|
```
|
||||||
|
|
||||||
### Пример 4: Shadowsocks upstream
|
### Пример 4: смешанные upstream с scopes
|
||||||
|
|
||||||
```toml
|
|
||||||
[general]
|
|
||||||
use_middle_proxy = false
|
|
||||||
|
|
||||||
[[upstreams]]
|
|
||||||
type = "shadowsocks"
|
|
||||||
url = "ss://2022-blake3-aes-256-gcm:BASE64_KEY@198.51.100.50:8388"
|
|
||||||
weight = 2
|
|
||||||
enabled = true
|
|
||||||
```
|
|
||||||
|
|
||||||
### Пример 5: смешанные upstream с scopes
|
|
||||||
|
|
||||||
```toml
|
```toml
|
||||||
[[upstreams]]
|
[[upstreams]]
|
||||||
|
|
|
||||||
|
|
@ -1,287 +0,0 @@
|
||||||
<img src="https://gist.githubusercontent.com/avbor/1f8a128e628f47249aae6e058a57610b/raw/19013276c035e91058e0a9799ab145f8e70e3ff5/scheme.svg">
|
|
||||||
|
|
||||||
## Concept
|
|
||||||
- **Server A** (_conditionally Russian Federation_):\
|
|
||||||
Entry point, receives Telegram proxy user traffic via **HAProxy** (port `443`)\
|
|
||||||
and sends it to the tunnel to Server **B**.\
|
|
||||||
Internal IP in the tunnel — `10.10.10.2`\
|
|
||||||
Port for HAProxy clients — `443\tcp`
|
|
||||||
- **Server B** (_conditionally Netherlands_):\
|
|
||||||
Exit point, runs **telemt** and accepts client connections through Server **A**.\
|
|
||||||
The server must have unrestricted access to Telegram servers.\
|
|
||||||
Internal IP in the tunnel — `10.10.10.1`\
|
|
||||||
AmneziaWG port — `8443\udp`\
|
|
||||||
Port for telemt clients — `443\tcp`
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Step 1. Setting up the AmneziaWG tunnel (A <-> B)
|
|
||||||
[AmneziaWG](https://github.com/amnezia-vpn/amneziawg-linux-kernel-module) must be installed on all servers.\
|
|
||||||
All following commands are given for **Ubuntu 24.04**.\
|
|
||||||
For RHEL-based distributions, installation instructions are available at the link above.
|
|
||||||
|
|
||||||
### Installing AmneziaWG (Servers A and B)
|
|
||||||
The following steps must be performed on each server:
|
|
||||||
|
|
||||||
#### 1. Adding the AmneziaWG repository and installing required packages:
|
|
||||||
```bash
|
|
||||||
sudo apt install -y software-properties-common python3-launchpadlib gnupg2 linux-headers-$(uname -r) && \
|
|
||||||
sudo add-apt-repository ppa:amnezia/ppa && \
|
|
||||||
sudo apt-get install -y amneziawg
|
|
||||||
```
|
|
||||||
|
|
||||||
#### 2. Generating a unique key pair:
|
|
||||||
```bash
|
|
||||||
cd /etc/amnezia/amneziawg && \
|
|
||||||
awg genkey | tee private.key | awg pubkey > public.key
|
|
||||||
```
|
|
||||||
|
|
||||||
As a result, you will get two files in the `/etc/amnezia/amneziawg` folder:\
|
|
||||||
`private.key` - private, and\
|
|
||||||
`public.key` - public server keys
|
|
||||||
|
|
||||||
#### 3. Configuring network interfaces:
|
|
||||||
Obfuscation parameters `S1`, `S2`, `H1`, `H2`, `H3`, `H4` must be strictly identical on both servers.\
|
|
||||||
Parameters `Jc`, `Jmin` and `Jmax` can differ.\
|
|
||||||
Parameters `I1-I5` ([Custom Protocol Signature](https://docs.amnezia.org/documentation/amnezia-wg/)) must be specified on the client side (Server **A**).
|
|
||||||
|
|
||||||
Recommendations for choosing values:
|
|
||||||
|
|
||||||
```text
|
|
||||||
Jc — 1 ≤ Jc ≤ 128; from 4 to 12 inclusive
|
|
||||||
Jmin — Jmin < Jmax, Jmin < 1280*; recommended 8
|
|
||||||
Jmax — Jmin < Jmax ≤ 1280*; recommended 80
|
|
||||||
S1 — S1 ≤ 1132* (1280* - 148 = 1132); S1 + 56 ≠ S2;
|
|
||||||
recommended range from 15 to 150 inclusive
|
|
||||||
S2 — S2 ≤ 1188* (1280* - 92 = 1188);
|
|
||||||
recommended range from 15 to 150 inclusive
|
|
||||||
H1/H2/H3/H4 — must be unique and differ from each other;
|
|
||||||
recommended range from 5 to 2147483647 inclusive
|
|
||||||
|
|
||||||
* It is assumed that the Internet connection has an MTU of 1280.
|
|
||||||
```
|
|
||||||
|
|
||||||
> [!IMPORTANT]
|
|
||||||
> It is recommended to use your own, unique values.\
|
|
||||||
> You can use the [generator](https://htmlpreview.github.io/?https://gist.githubusercontent.com/avbor/955782b5c37b06240b243aa375baeac5/raw/13f5517ca473b47c412b9a99407066de973732bd/awg-gen.html) to select parameters.
|
|
||||||
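The constraints listed above can be checked mechanically. A minimal Python sketch, assuming the parameter values from the example configs below (the function name and argument layout are illustrative):

```python
def check_awg(jc, jmin, jmax, s1, s2, h):
    """Validate AmneziaWG obfuscation parameters against the rules above."""
    assert 1 <= jc <= 128, "Jc out of range"
    assert jmin < jmax <= 1280, "need Jmin < Jmax <= 1280"
    assert s1 <= 1132, "S1 too large for MTU 1280"
    assert s2 <= 1188, "S2 too large for MTU 1280"
    assert s1 + 56 != s2, "S1 + 56 must not equal S2"
    assert len(set(h)) == 4, "H1..H4 must be unique"
    assert all(5 <= x <= 2147483647 for x in h), "H values out of range"
    return True

# Values from the example awg0.conf in this guide:
check_awg(4, 8, 80, 29, 15, (2087563914, 188817757, 101784570, 432174303))
```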
|
|
||||||
#### Server B Configuration (Netherlands):
|
|
||||||
|
|
||||||
Create the interface configuration file (`awg0`)
|
|
||||||
```bash
|
|
||||||
nano /etc/amnezia/amneziawg/awg0.conf
|
|
||||||
```
|
|
||||||
|
|
||||||
File content
|
|
||||||
```ini
|
|
||||||
[Interface]
|
|
||||||
Address = 10.10.10.1/24
|
|
||||||
ListenPort = 8443
|
|
||||||
PrivateKey = <PRIVATE_KEY_SERVER_B>
|
|
||||||
SaveConfig = true
|
|
||||||
Jc = 4
|
|
||||||
Jmin = 8
|
|
||||||
Jmax = 80
|
|
||||||
S1 = 29
|
|
||||||
S2 = 15
|
|
||||||
S3 = 18
|
|
||||||
S4 = 0
|
|
||||||
H1 = 2087563914
|
|
||||||
H2 = 188817757
|
|
||||||
H3 = 101784570
|
|
||||||
H4 = 432174303
|
|
||||||
|
|
||||||
[Peer]
|
|
||||||
PublicKey = <PUBLIC_KEY_SERVER_A>
|
|
||||||
AllowedIPs = 10.10.10.2/32
|
|
||||||
```
|
|
||||||
`ListenPort` - the port on which the server will listen for connections; you can choose any free one.\
|
|
||||||
`<PRIVATE_KEY_SERVER_B>` - the content of the `private.key` file from Server **B**.\
|
|
||||||
`<PUBLIC_KEY_SERVER_A>` - the content of the `public.key` file from Server **A**.
|
|
||||||
|
|
||||||
Open the port on the firewall (if enabled):
|
|
||||||
```bash
|
|
||||||
sudo ufw allow from <PUBLIC_IP_SERVER_A> to any port 8443 proto udp
|
|
||||||
```
|
|
||||||
|
|
||||||
`<PUBLIC_IP_SERVER_A>` - the external IP address of Server **A**.
|
|
||||||
|
|
||||||
#### Server A Configuration (Russian Federation):
|
|
||||||
Create the interface configuration file (`awg0`)
|
|
||||||
|
|
||||||
```bash
|
|
||||||
nano /etc/amnezia/amneziawg/awg0.conf
|
|
||||||
```
|
|
||||||
|
|
||||||
File content
|
|
||||||
```ini
|
|
||||||
[Interface]
|
|
||||||
Address = 10.10.10.2/24
|
|
||||||
PrivateKey = <PRIVATE_KEY_SERVER_A>
|
|
||||||
Jc = 4
|
|
||||||
Jmin = 8
|
|
||||||
Jmax = 80
|
|
||||||
S1 = 29
|
|
||||||
S2 = 15
|
|
||||||
S3 = 18
|
|
||||||
S4 = 0
|
|
||||||
H1 = 2087563914
|
|
||||||
H2 = 188817757
|
|
||||||
H3 = 101784570
|
|
||||||
H4 = 432174303
|
|
||||||
I1 = <b 0xc10000000108981eba846e21f74e00>
|
|
||||||
I2 = <b 0xc20000000108981eba846e21f74e00>
|
|
||||||
I3 = <b 0xc30000000108981eba846e21f74e00>
|
|
||||||
I4 = <b 0x43981eba846e21f74e>
|
|
||||||
I5 = <b 0x43981eba846e21f74e>
|
|
||||||
|
|
||||||
[Peer]
|
|
||||||
PublicKey = <PUBLIC_KEY_SERVER_B>
|
|
||||||
Endpoint = <PUBLIC_IP_SERVER_B>:8443
|
|
||||||
AllowedIPs = 10.10.10.1/32
|
|
||||||
PersistentKeepalive = 25
|
|
||||||
```
|
|
||||||
|
|
||||||
`<PRIVATE_KEY_SERVER_A>` - the content of the `private.key` file from Server **A**.\
|
|
||||||
`<PUBLIC_KEY_SERVER_B>` - the content of the `public.key` file from Server **B**.\
|
|
||||||
`<PUBLIC_IP_SERVER_B>` - the public IP address of Server **B**.
|
|
||||||
|
|
||||||
Enable the tunnel on both servers:
|
|
||||||
```bash
|
|
||||||
sudo systemctl enable --now awg-quick@awg0
|
|
||||||
```
|
|
||||||
|
|
||||||
Make sure Server B is accessible from Server A through the tunnel.
|
|
||||||
```bash
|
|
||||||
ping 10.10.10.1
|
|
||||||
PING 10.10.10.1 (10.10.10.1) 56(84) bytes of data.
|
|
||||||
64 bytes from 10.10.10.1: icmp_seq=1 ttl=64 time=35.1 ms
|
|
||||||
64 bytes from 10.10.10.1: icmp_seq=2 ttl=64 time=35.0 ms
|
|
||||||
64 bytes from 10.10.10.1: icmp_seq=3 ttl=64 time=35.1 ms
|
|
||||||
^C
|
|
||||||
```
|
|
||||||
---
|
|
||||||
|
|
||||||
## Step 2. Installing telemt on Server B (conditionally Netherlands)
|
|
||||||
Installation and configuration are described [here](https://github.com/telemt/telemt/blob/main/docs/QUICK_START_GUIDE.ru.md) or [here](https://gitlab.com/An0nX/telemt-docker#-quick-start-docker-compose).\
|
|
||||||
It is assumed that telemt expects connections on port `443\tcp`.
|
|
||||||
|
|
||||||
In the telemt config, you must enable the `Proxy` protocol and restrict connections to it only through the tunnel.
|
|
||||||
```toml
|
|
||||||
[server]
|
|
||||||
port = 443
|
|
||||||
listen_addr_ipv4 = "10.10.10.1"
|
|
||||||
proxy_protocol = true
|
|
||||||
```
|
|
||||||
|
|
||||||
Also, for correct link generation, specify the FQDN or IP address and port of Server `A`
|
|
||||||
```toml
|
|
||||||
[general.links]
|
|
||||||
show = "*"
|
|
||||||
public_host = "<FQDN_OR_IP_SERVER_A>"
|
|
||||||
public_port = 443
|
|
||||||
```
|
|
||||||
|
|
||||||
Open the port on the firewall (if enabled):
|
|
||||||
```bash
|
|
||||||
sudo ufw allow from 10.10.10.2 to any port 443 proto tcp
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Step 3. Configuring HAProxy on Server A (Russian Federation)
|
|
||||||
Since the version in the standard Ubuntu repository is relatively old, it makes sense to use the official Docker image.\
|
|
||||||
[Instructions](https://docs.docker.com/engine/install/ubuntu/) for installing Docker on Ubuntu.
|
|
||||||
|
|
||||||
> [!WARNING]
|
|
||||||
> By default, regular users do not have rights to use ports < 1024.
|
|
||||||
> Attempts to run HAProxy on port 443 can lead to errors:
|
|
||||||
> ```
|
|
||||||
> [ALERT] (8) : Binding [/usr/local/etc/haproxy/haproxy.cfg:17] for frontend tcp_in_443:
|
|
||||||
> protocol tcpv4: cannot bind socket (Permission denied) for [0.0.0.0:443].
|
|
||||||
> ```
|
|
||||||
> There are two simple ways to bypass this restriction, choose one:
|
|
||||||
> 1. At the OS level, change the net.ipv4.ip_unprivileged_port_start setting to allow users to use all ports:
|
|
||||||
> ```
|
|
||||||
> echo "net.ipv4.ip_unprivileged_port_start = 0" | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
|
|
||||||
> ```
|
|
||||||
> or
|
|
||||||
>
|
|
||||||
> 2. Run HAProxy as root:
|
|
||||||
> Uncomment the `user: "root"` parameter in docker-compose.yaml.
|
|
||||||
|
|
||||||
#### Create a folder for HAProxy:
|
|
||||||
```bash
|
|
||||||
mkdir -p /opt/docker-compose/haproxy && cd $_
|
|
||||||
```
|
|
||||||
|
|
||||||
#### Create the docker-compose.yaml file
|
|
||||||
`nano docker-compose.yaml`
|
|
||||||
|
|
||||||
File content
|
|
||||||
```yaml
|
|
||||||
services:
|
|
||||||
haproxy:
|
|
||||||
image: haproxy:latest
|
|
||||||
container_name: haproxy
|
|
||||||
restart: unless-stopped
|
|
||||||
# user: "root"
|
|
||||||
network_mode: "host"
|
|
||||||
volumes:
|
|
||||||
- ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
|
|
||||||
logging:
|
|
||||||
driver: "json-file"
|
|
||||||
options:
|
|
||||||
max-size: "1m"
|
|
||||||
max-file: "1"
|
|
||||||
```
|
|
||||||
|
|
||||||
#### Create the haproxy.cfg config file
|
|
||||||
Accept connections on port 443\tcp and send them through the tunnel to Server `B` 10.10.10.1:443
|
|
||||||
|
|
||||||
`nano haproxy.cfg`
|
|
||||||
|
|
||||||
File content
|
|
||||||
|
|
||||||
```haproxy
|
|
||||||
global
|
|
||||||
log stdout format raw local0
|
|
||||||
maxconn 10000
|
|
||||||
|
|
||||||
defaults
|
|
||||||
log global
|
|
||||||
mode tcp
|
|
||||||
option tcplog
|
|
||||||
option clitcpka
|
|
||||||
option srvtcpka
|
|
||||||
timeout connect 5s
|
|
||||||
timeout client 2h
|
|
||||||
timeout server 2h
|
|
||||||
timeout check 5s
|
|
||||||
|
|
||||||
frontend tcp_in_443
|
|
||||||
bind *:443
|
|
||||||
maxconn 8000
|
|
||||||
option tcp-smart-accept
|
|
||||||
default_backend telemt_nodes
|
|
||||||
|
|
||||||
backend telemt_nodes
|
|
||||||
option tcp-smart-connect
|
|
||||||
server server_a 10.10.10.1:443 check inter 5s rise 2 fall 3 send-proxy-v2
|
|
||||||
|
|
||||||
|
|
||||||
```
|
|
||||||
> [!WARNING]
|
|
||||||
> **The file must end with an empty line, otherwise HAProxy will not start!**
|
|
||||||
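A quick hedged check for that trailing-newline requirement; it outputs `1` when the last byte of the file is a newline and `0` otherwise:

```shell
# Count newlines in the final byte of haproxy.cfg: 1 means the file
# ends with a newline (required), 0 means HAProxy will refuse to start.
tail -c1 haproxy.cfg | wc -l
```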
|
|
||||||
#### Allow port 443\tcp in the firewall (if enabled)
|
|
||||||
```bash
|
|
||||||
sudo ufw allow 443/tcp
|
|
||||||
```
|
|
||||||
|
|
||||||
#### Start the HAProxy container
|
|
||||||
```bash
|
|
||||||
docker compose up -d
|
|
||||||
```
|
|
||||||
|
|
||||||
If everything is configured correctly, you can now try connecting Telegram clients using the links from the telemt log or API.
@ -1,291 +0,0 @@
<img src="https://gist.githubusercontent.com/avbor/1f8a128e628f47249aae6e058a57610b/raw/19013276c035e91058e0a9799ab145f8e70e3ff5/scheme.svg">
## Concept

- **Server A** (_Russia_):\
  The entry point; accepts Telegram proxy user traffic via **HAProxy** (port `443`)\
  and sends it through the tunnel to Server **B**.\
  Internal tunnel IP — `10.10.10.2`\
  HAProxy client port — `443/tcp`
- **Server B** (_the Netherlands, for example_):\
  The exit point; runs **telemt** and accepts client connections via Server **A**.\
  This server must have unrestricted access to Telegram's servers.\
  Internal tunnel IP — `10.10.10.1`\
  AmneziaWG port — `8443/udp`\
  telemt client port — `443/tcp`

---

## Step 1. Setting up the AmneziaWG tunnel (A <-> B)

[amneziawg](https://github.com/amnezia-vpn/amneziawg-linux-kernel-module) must be installed on all servers.\
All commands below are given for **Ubuntu 24.04**.\
Installation instructions for RHEL-based distributions are available at the link above.

### Installing AmneziaWG (Servers A and B)

Perform the following steps on each server:

#### 1. Add the AmneziaWG repository and install the required packages:
```bash
sudo apt install -y software-properties-common python3-launchpadlib gnupg2 linux-headers-$(uname -r) && \
sudo add-apt-repository ppa:amnezia/ppa && \
sudo apt-get install -y amneziawg
```

#### 2. Generate a unique key pair:
```bash
cd /etc/amnezia/amneziawg && \
awg genkey | tee private.key | awg pubkey > public.key
```
As a result, the `/etc/amnezia/amneziawg` directory will contain two files:\
`private.key` - the server's private key and\
`public.key` - the server's public key

#### 3. Configure the network interfaces:

The obfuscation parameters `S1`, `S2`, `H1`, `H2`, `H3`, `H4` must be strictly identical on both servers.\
The parameters `Jc`, `Jmin`, and `Jmax` may differ.\
The parameters `I1-I5` ([Custom Protocol Signature](https://docs.amnezia.org/documentation/amnezia-wg/)) must be specified on the _client_ side (Server **A**).

Recommendations for choosing the values:
```text
Jc — 1 ≤ Jc ≤ 128; from 4 to 12 inclusive
Jmin — Jmax > Jmin < 1280*; recommended 8
Jmax — Jmin < Jmax ≤ 1280*; recommended 80
S1 — S1 ≤ 1132* (1280* - 148 = 1132); S1 + 56 ≠ S2;
     recommended range from 15 to 150 inclusive
S2 — S2 ≤ 1188* (1280* - 92 = 1188);
     recommended range from 15 to 150 inclusive
H1/H2/H3/H4 — must be unique and differ from each other;
     recommended range from 5 to 2147483647 inclusive

* Assumes an Internet connection with an MTU of 1280.
```
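The constraints above can be checked mechanically before committing them to both configs. A minimal sketch (not part of AmneziaWG itself; the sample numbers are just the example values used in this guide):

```python
# Sketch: validate an AmneziaWG obfuscation parameter set against the
# constraints listed above. Returns a list of violations (empty = OK).

MTU = 1280  # assumed MTU, as in the recommendations

def validate_awg_params(jc, jmin, jmax, s1, s2, h):
    errors = []
    if not (1 <= jc <= 128):
        errors.append("Jc must be within 1..128")
    if not (jmin < jmax <= MTU):
        errors.append("need Jmin < Jmax <= 1280")
    if not (s1 <= MTU - 148):
        errors.append("S1 must not exceed 1132")
    if not (s2 <= MTU - 92):
        errors.append("S2 must not exceed 1188")
    if s1 + 56 == s2:
        errors.append("S1 + 56 must not equal S2")
    if len(set(h)) != 4:
        errors.append("H1..H4 must all differ")
    if not all(5 <= x <= 2147483647 for x in h):
        errors.append("H1..H4 should be within 5..2147483647")
    return errors

# The parameter set used in the example configs below:
print(validate_awg_params(4, 8, 80, 29, 15,
                          [2087563914, 188817757, 101784570, 432174303]))  # → []
```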
> [!IMPORTANT]
> It is recommended to use your own, unique values.\
> You can use this [generator](https://htmlpreview.github.io/?https://gist.githubusercontent.com/avbor/955782b5c37b06240b243aa375baeac5/raw/13f5517ca473b47c412b9a99407066de973732bd/awg-gen.html) to pick the parameters.

#### Server B configuration (_the Netherlands_):

Create the interface configuration file (`awg0`)
```bash
nano /etc/amnezia/amneziawg/awg0.conf
```

File content
```ini
[Interface]
Address = 10.10.10.1/24
ListenPort = 8443
PrivateKey = <PRIVATE_KEY_SERVER_B>
SaveConfig = true
Jc = 4
Jmin = 8
Jmax = 80
S1 = 29
S2 = 15
S3 = 18
S4 = 0
H1 = 2087563914
H2 = 188817757
H3 = 101784570
H4 = 432174303

[Peer]
PublicKey = <PUBLIC_KEY_SERVER_A>
AllowedIPs = 10.10.10.2/32
```

`ListenPort` - the port the server will listen on for connections; you can pick any free port.\
`<PRIVATE_KEY_SERVER_B>` - the contents of the `private.key` file from Server **B**.\
`<PUBLIC_KEY_SERVER_A>` - the contents of the `public.key` file from Server **A**.

Open the port on the firewall (if enabled):
```bash
sudo ufw allow from <PUBLIC_IP_SERVER_A> to any port 8443 proto udp
```

`<PUBLIC_IP_SERVER_A>` - the external IP address of Server **A**.
|
|
||||||
#### Конфигурация Сервера A (_РФ_):
|
|
||||||
|
|
||||||
Создаем файл конфигурации интерфейса (`awg0`)
|
|
||||||
```bash
|
|
||||||
nano /etc/amnezia/amneziawg/awg0.conf
|
|
||||||
```
|
|
||||||
|
|
||||||
Содержимое файла
|
|
||||||
```ini
|
|
||||||
[Interface]
|
|
||||||
Address = 10.10.10.2/24
|
|
||||||
PrivateKey = <PRIVATE_KEY_SERVER_A>
|
|
||||||
Jc = 4
|
|
||||||
Jmin = 8
|
|
||||||
Jmax = 80
|
|
||||||
S1 = 29
|
|
||||||
S2 = 15
|
|
||||||
S3 = 18
|
|
||||||
S4 = 0
|
|
||||||
H1 = 2087563914
|
|
||||||
H2 = 188817757
|
|
||||||
H3 = 101784570
|
|
||||||
H4 = 432174303
|
|
||||||
I1 = <b 0xc10000000108981eba846e21f74e00>
|
|
||||||
I2 = <b 0xc20000000108981eba846e21f74e00>
|
|
||||||
I3 = <b 0xc30000000108981eba846e21f74e00>
|
|
||||||
I4 = <b 0x43981eba846e21f74e>
|
|
||||||
I5 = <b 0x43981eba846e21f74e>
|
|
||||||
|
|
||||||
[Peer]
|
|
||||||
PublicKey = <PUBLIC_KEY_SERVER_B>
|
|
||||||
Endpoint = <PUBLIC_IP_SERVER_B>:8443
|
|
||||||
AllowedIPs = 10.10.10.1/32
|
|
||||||
PersistentKeepalive = 25
|
|
||||||
```
|
|
||||||
|
|
||||||
`<PRIVATE_KEY_SERVER_A>` - содержимое файла `private.key` с сервера **A**.\
|
|
||||||
`<PUBLIC_KEY_SERVER_B>` - содержимое файла `public.key` с сервера **B**.\
|
|
||||||
`<PUBLIC_IP_SERVER_B>` - публичный IP адресс сервера **B**.
|
|
||||||
|
|
||||||
#### Включаем туннель на обоих серверах:
|
|
||||||
```bash
|
|
||||||
sudo systemctl enable --now awg-quick@awg0
|
|
||||||
```
|
|
||||||
|
|
||||||
Убедитесь, что с Сервера `A` доступен Сервер `B` через туннель.
|
|
||||||
```bash
|
|
||||||
ping 10.10.10.1
|
|
||||||
PING 10.10.10.1 (10.10.10.1) 56(84) bytes of data.
|
|
||||||
64 bytes from 10.10.10.1: icmp_seq=1 ttl=64 time=35.1 ms
|
|
||||||
64 bytes from 10.10.10.1: icmp_seq=2 ttl=64 time=35.0 ms
|
|
||||||
64 bytes from 10.10.10.1: icmp_seq=3 ttl=64 time=35.1 ms
|
|
||||||
^C
|
|
||||||
|
|
||||||
```
|
|
||||||
|
|
||||||
---

## Step 2. Installing telemt on Server B (_the Netherlands, for example_)

Installation and configuration are described [here](https://github.com/telemt/telemt/blob/main/docs/QUICK_START_GUIDE.ru.md) or [here](https://gitlab.com/An0nX/telemt-docker#-quick-start-docker-compose).\
It is assumed that telemt listens for connections on port `443/tcp`.

In the telemt config you need to enable the `Proxy` protocol and restrict connections to it to the tunnel only.

```toml
[server]
port = 443
listen_addr_ipv4 = "10.10.10.1"
proxy_protocol = true
```

Also, for links to be generated correctly, specify the FQDN or IP address and port of Server `A`

```toml
[general.links]
show = "*"
public_host = "<FQDN_OR_IP_SERVER_A>"
public_port = 443
```

Open the port on the firewall (if enabled):
```bash
sudo ufw allow from 10.10.10.2 to any port 443 proto tcp
```
---

### Step 3. Setting up HAProxy on Server A (_Russia_)

Since the version in the standard Ubuntu repository is relatively old, it makes sense to use the official Docker image.\
[Instructions](https://docs.docker.com/engine/install/ubuntu/) for installing Docker on Ubuntu.

> [!WARNING]
> By default, regular users are not allowed to use ports < 1024.\
> Attempts to run HAProxy on port 443 may fail with errors like:
> ```
> [ALERT] (8) : Binding [/usr/local/etc/haproxy/haproxy.cfg:17] for frontend tcp_in_443:
> protocol tcpv4: cannot bind socket (Permission denied) for [0.0.0.0:443].
> ```
> There are two simple ways around this restriction; pick one:
> 1. At the OS level, change the net.ipv4.ip_unprivileged_port_start setting to allow users to use all ports:
> ```
> echo "net.ipv4.ip_unprivileged_port_start = 0" | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
> ```
> or
>
> 2. Run HAProxy as root:\
> Uncomment the `user: "root"` parameter in docker-compose.yaml.

#### Create a directory for HAProxy:
```bash
mkdir -p /opt/docker-compose/haproxy && cd $_
```
#### Create the docker-compose.yaml file

`nano docker-compose.yaml`

File content
```yaml
services:
  haproxy:
    image: haproxy:latest
    container_name: haproxy
    restart: unless-stopped
    # user: "root"
    network_mode: "host"
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
    logging:
      driver: "json-file"
      options:
        max-size: "1m"
        max-file: "1"
```
#### Create the haproxy.cfg config file
Accept connections on port 443/tcp and send them through the tunnel to Server `B` at 10.10.10.1:443

`nano haproxy.cfg`

File content
```haproxy
global
    log stdout format raw local0
    maxconn 10000

defaults
    log global
    mode tcp
    option tcplog
    option clitcpka
    option srvtcpka
    timeout connect 5s
    timeout client 2h
    timeout server 2h
    timeout check 5s

frontend tcp_in_443
    bind *:443
    maxconn 8000
    option tcp-smart-accept
    default_backend telemt_nodes

backend telemt_nodes
    option tcp-smart-connect
    server server_a 10.10.10.1:443 check inter 5s rise 2 fall 3 send-proxy-v2

```
> [!WARNING]
> **The file must end with an empty line, otherwise HAProxy will not start!**

#### Allow port 443/tcp in the firewall (if enabled)
```bash
sudo ufw allow 443/tcp
```

#### Start the HAProxy container
```bash
docker compose up -d
```

If everything is configured correctly, you can now try connecting Telegram clients using links from the telemt log/api.

@ -1,278 +0,0 @@
# TLS-F and TCP-S in Telemt

## Overall architecture

**Telemt** is first and foremost an implementation of **MTProxy**, through which the Telegram payload passes

The **TLS-Fronting / TCP-Splitting** subsystem serves as a **masking transport layer** whose job is to make the MTProxy connection look, from the outside, like an ordinary TLS connection to a legitimate website

Thus:

- **MTProxy** - Telemt's main functional layer for handling Telegram traffic
- **TLS-Fronting / TCP-Splitting** - the transport-masking subsystem

From the network's point of view Telemt behaves like a **TLS server**, but in fact:

- valid MTProxy clients stay inside the Telemt pipeline
- any other TLS clients are proxied to an ordinary decoy HTTPS server

# Basic scenario / Best practice

Suppose you have a domain:

```
umweltschutz.de
```

### 1 DNS

You create an A record:

```
umweltschutz.de -> A record 198.18.88.88
```

where `198.18.88.88` is the IP of your server running telemt

### 2 TLS domain

In the Telemt configuration:

```toml
[censorship]
tls_domain = "umweltschutz.de"
```

This domain is used by the client as the SNI in the ClientHello

### 3 Decoy server

You set up an ordinary HTTPS server, e.g. **nginx**, with a certificate for this domain.

It can run:

- on the same server
- on a different server
- on a different port

In the Telemt configuration:

```toml
[censorship]
mask_host = "127.0.0.1"
mask_port = 8443
```

where `127.0.0.1` is the IP of the decoy server and 8443 is the port it listens on

This server exists **to handle any non-MTProxy requests**

### 4 How Telemt operates

Once started, Telemt acts as follows:

1) accepts an incoming TCP connection
2) inspects the TLS ClientHello
3) tries to determine whether the connection is valid **MTProxy FakeTLS**
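Step 2 relies on the fact that the SNI sits in plaintext inside the ClientHello. As an illustration of what such an inspector sees (a sketch, not telemt's actual parser; the ClientHello bytes are produced locally with Python's ssl module, no network I/O):

```python
# Sketch: extracting the SNI from a raw TLS ClientHello record.

import ssl
import struct

def extract_sni(hello):
    """Parse a TLS record containing a ClientHello; return the SNI host or None."""
    # TLS record header: type(1) + version(2) + length(2); 0x16 = handshake
    if len(hello) < 5 or hello[0] != 0x16:
        return None
    p = 5
    p += 4              # handshake type(1) + length(3)
    p += 2 + 32         # client_version + random
    sid_len = hello[p]; p += 1 + sid_len                               # session id
    cs_len = struct.unpack(">H", hello[p:p+2])[0]; p += 2 + cs_len     # cipher suites
    cm_len = hello[p]; p += 1 + cm_len                                 # compression
    ext_total = struct.unpack(">H", hello[p:p+2])[0]; p += 2
    end = p + ext_total
    while p + 4 <= end:
        ext_type, ext_len = struct.unpack(">HH", hello[p:p+4]); p += 4
        if ext_type == 0:   # server_name extension
            # list length(2) + name type(1) + name length(2) + name bytes
            name_len = struct.unpack(">H", hello[p+3:p+5])[0]
            return hello[p+5:p+5+name_len].decode()
        p += ext_len
    return None

# Generate a real ClientHello in memory.
ctx = ssl.create_default_context()
inc, out = ssl.MemoryBIO(), ssl.MemoryBIO()
obj = ctx.wrap_bio(inc, out, server_hostname="umweltschutz.de")
try:
    obj.do_handshake()
except ssl.SSLWantReadError:
    pass                     # the ClientHello bytes are now sitting in `out`
hello_bytes = out.read()
sni = extract_sni(hello_bytes)
print(sni)                   # → umweltschutz.de
```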
Two logic paths follow from here:

---

# Scenario 1 - MTProxy client with a valid key

If the client presented a **valid MTProxy key**:

- the connection **stays inside Telemt**
- TLS is used only as **transport masking**
- the usual **MTProxy** logic then takes over

To an outside observer this looks like:

```
TLS connection -> umweltschutz.de
```

even though **Telegram MTProto traffic** is carried inside

# Scenario 2 - ordinary TLS client - crawler / scanner / browser

If Telemt does not detect a valid MTProxy key:

the connection **switches into TCP-Splitting / TCP-Splicing mode**.

In this mode Telemt:

1. opens a new TCP connection to

```
mask_host:mask_port
```

2. starts **relaying the TCP traffic**

Importantly:

* the client's TLS request is **NOT modified**
* **the ClientHello is passed through "as is", unchanged**
* **the SNI stays intact**
* Telemt **does not complete the TLS handshake**; it only redirects it at a lower level of the network stack - L4

The upstream server therefore receives the **client's original TLS connection**:

- if it is the nginx decoy, it simply serves an ordinary website
- to an outside observer it looks like a regular HTTPS server

# TCP-S / TCP-Splitting / TCP-Splicing

Key properties of the mechanism:

**Telemt works as a TCP switch:**

1) accepts a connection
2) determines the client type
3) then either:

- handles MTProxy internally
- or relays the TCP stream

When relaying:

- Telemt **resolves `mask_host` to an IP**
- establishes a TCP connection
- starts a **bidirectional TCP relay**

In doing so:

- the TLS handshake happens **between the client and `mask_host`**
- Telemt acts only **at L4 - as a TCP relay**, just like HAProxy in TCP mode
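The bidirectional L4 relay described above can be sketched in a few lines: bytes are copied in both directions with no knowledge of TLS. This is illustrative only (telemt itself is written in Rust, and this is not its code); the demo stands up a local "decoy" echo server instead of a real `mask_host`:

```python
# Sketch: a bidirectional L4 TCP relay, demoed entirely on localhost.

import asyncio

async def pipe(reader, writer):
    """Copy bytes from reader to writer until EOF, then close the writer."""
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def relay(client_reader, client_writer, mask_host, mask_port):
    """Splice the client connection onto mask_host:mask_port."""
    up_reader, up_writer = await asyncio.open_connection(mask_host, mask_port)
    await asyncio.gather(
        pipe(client_reader, up_writer),   # client -> decoy
        pipe(up_reader, client_writer),   # decoy  -> client
    )

async def main():
    # An "upstream" echo server stands in for the decoy HTTPS server.
    async def echo(r, w):
        w.write(await r.read(100))
        await w.drain()
        w.close()

    upstream = await asyncio.start_server(echo, "127.0.0.1", 18443)
    front = await asyncio.start_server(
        lambda r, w: relay(r, w, "127.0.0.1", 18443), "127.0.0.1", 10443)

    r, w = await asyncio.open_connection("127.0.0.1", 10443)
    w.write(b"hello")
    await w.drain()
    data = await r.read(100)
    w.close()
    await asyncio.sleep(0.1)          # let the relay tasks wind down
    upstream.close()
    front.close()
    return data

result = asyncio.run(main())
print(result)   # → b'hello'
```

The relay never touches the payload, which is exactly why the client's ClientHello and SNI reach the decoy unchanged.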
# Using a third-party domain

You can also use an external site.

For example:

```toml
[censorship]
tls_domain = "github.com"
mask_host = "github.com"
mask_port = 443
```

or

```toml
[censorship]
mask_host = "140.82.121.4"
```

In this case:

- the censor sees a **TLS connection to github.com**
- ordinary clients/crawlers really do get the **real GitHub**

Telemt simply **relays the TCP connection to GitHub**

# What does a traffic analyzer see?

To DPI it looks like:

```
client -> TLS -> github.com
```

or

```
client -> TLS -> umweltschutz.de
```

The TLS handshake looks valid, the SNI matches the domain, and the certificate is correct - it comes from the target `mask_host:mask_port`

# What does a scanner / crawler see?

If a scanner tries to connect:

```
openssl s_client -connect 198.18.88.88:443 -servername umweltschutz.de
```

it will get an **ordinary decoy HTTPS site**

Because:

- it did not present an MTProxy key
- Telemt sent the connection to `mask_host:mask_port`, where nginx is running
# What problem does TLS-Fronting / TCP-Splitting solve?

This architecture solves several censorship-circumvention problems at once.

## 1 Hiding the MTProxy plane from active scanning

Many censors:

- scan IP addresses
- probe for known proxy signatures

Telemt answers such probes with an **ordinary HTTPS website**, so the proxy cannot be discovered by simple scanning

---

## 2 Disguising traffic as legitimate TLS

To DPI systems the connection looks like:

```
ordinary TLS traffic to a popular domain
```

This makes blocking considerably harder and less predictable

---

## 3 Resistance to protocol analysis

MTProxy traffic travels **inside a TLS-like stream**, so:

- no characteristic MTProto signatures are visible
- the connection looks like regular HTTPS

---

## 4 Plausible server behavior

Even if a crawler:

- connects on its own
- completes a TLS handshake
- tries to fetch an HTTP response

it will see a **real website**, not telemt

This removes one of the main signals used by mobile operators' anti-fraud crawlers

# Diagram

```text
Client
  │
  │ TCP
  │
  V
Telemt
  │
  ├── valid MTProxy key
  │         │
  │         V
  │    MTProxy logic
  │
  └── ordinary TLS client
            │
            V
      TCP-Splitting
            │
            V
   mask_host:mask_port
```
Binary file not shown.
Before Width: | Height: | Size: 650 KiB
Binary file not shown.
Before Width: | Height: | Size: 838 KiB

756 install.sh
@ -3,717 +3,113 @@ set -eu

 REPO="${REPO:-telemt/telemt}"
 BIN_NAME="${BIN_NAME:-telemt}"
-INSTALL_DIR="${INSTALL_DIR:-/bin}"
-CONFIG_DIR="${CONFIG_DIR:-/etc/telemt}"
-CONFIG_FILE="${CONFIG_FILE:-${CONFIG_DIR}/telemt.toml}"
-WORK_DIR="${WORK_DIR:-/opt/telemt}"
-TLS_DOMAIN="${TLS_DOMAIN:-petrovich.ru}"
-SERVER_PORT="${SERVER_PORT:-443}"
-USER_SECRET=""
-AD_TAG=""
-SERVICE_NAME="telemt"
-TEMP_DIR=""
-SUDO=""
-CONFIG_PARENT_DIR=""
-SERVICE_START_FAILED=0
-
-PORT_PROVIDED=0
-SECRET_PROVIDED=0
-AD_TAG_PROVIDED=0
-DOMAIN_PROVIDED=0
-
-ACTION="install"
-TARGET_VERSION="${VERSION:-latest}"
+VERSION="${1:-${VERSION:-latest}}"
+INSTALL_DIR="${INSTALL_DIR:-/usr/local/bin}"
-
-while [ $# -gt 0 ]; do
-    case "$1" in
-        -h|--help) ACTION="help"; shift ;;
-        -d|--domain)
-            if [ "$#" -lt 2 ] || [ -z "$2" ]; then
-                printf '[ERROR] %s requires a domain argument.\n' "$1" >&2
-                exit 1
-            fi
-            TLS_DOMAIN="$2"; DOMAIN_PROVIDED=1; shift 2 ;;
-        -p|--port)
-            if [ "$#" -lt 2 ] || [ -z "$2" ]; then
-                printf '[ERROR] %s requires a port argument.\n' "$1" >&2; exit 1
-            fi
-            case "$2" in
-                *[!0-9]*) printf '[ERROR] Port must be a valid number.\n' >&2; exit 1 ;;
-            esac
-            port_num="$(printf '%s\n' "$2" | sed 's/^0*//')"
-            [ -z "$port_num" ] && port_num="0"
-            if [ "${#port_num}" -gt 5 ] || [ "$port_num" -lt 1 ] || [ "$port_num" -gt 65535 ]; then
-                printf '[ERROR] Port must be between 1 and 65535.\n' >&2; exit 1
-            fi
-            SERVER_PORT="$port_num"; PORT_PROVIDED=1; shift 2 ;;
-        -s|--secret)
-            if [ "$#" -lt 2 ] || [ -z "$2" ]; then
-                printf '[ERROR] %s requires a secret argument.\n' "$1" >&2; exit 1
-            fi
-            case "$2" in
-                *[!0-9a-fA-F]*)
-                    printf '[ERROR] Secret must contain only hex characters.\n' >&2; exit 1 ;;
-            esac
-            if [ "${#2}" -ne 32 ]; then
-                printf '[ERROR] Secret must be exactly 32 chars.\n' >&2; exit 1
-            fi
-            USER_SECRET="$2"; SECRET_PROVIDED=1; shift 2 ;;
-        -a|--ad-tag|--ad_tag)
-            if [ "$#" -lt 2 ] || [ -z "$2" ]; then
-                printf '[ERROR] %s requires an ad_tag argument.\n' "$1" >&2; exit 1
-            fi
-            AD_TAG="$2"; AD_TAG_PROVIDED=1; shift 2 ;;
-        uninstall|--uninstall)
-            if [ "$ACTION" != "purge" ]; then ACTION="uninstall"; fi
-            shift ;;
-        purge|--purge) ACTION="purge"; shift ;;
-        install|--install) ACTION="install"; shift ;;
-        -*) printf '[ERROR] Unknown option: %s\n' "$1" >&2; exit 1 ;;
-        *)
-            if [ "$ACTION" = "install" ]; then TARGET_VERSION="$1"
-            else printf '[WARNING] Ignoring extra argument: %s\n' "$1" >&2; fi
-            shift ;;
-    esac
-done
-
 say() {
-    if [ "$#" -eq 0 ] || [ -z "${1:-}" ]; then
-        printf '\n'
-    else
-        printf '[INFO] %s\n' "$*"
-    fi
-}
-die() { printf '[ERROR] %s\n' "$*" >&2; exit 1; }
-
-write_root() { $SUDO sh -c 'cat > "$1"' _ "$1"; }
-
-cleanup() {
-    if [ -n "${TEMP_DIR:-}" ] && [ -d "$TEMP_DIR" ]; then
-        rm -rf -- "$TEMP_DIR"
-    fi
-}
-trap cleanup EXIT INT TERM
-
-show_help() {
-    say "Usage: $0 [ <version> | install | uninstall | purge ] [ options ]"
-    say "  <version>   Install specific version (e.g. 3.3.15, default: latest)"
-    say "  install     Install the latest version"
-    say "  uninstall   Remove the binary and service"
-    say "  purge       Remove everything including configuration, data, and user"
-    say ""
-    say "Options:"
-    say "  -d, --domain   Set TLS domain (default: petrovich.ru)"
-    say "  -p, --port     Set server port (default: 443)"
-    say "  -s, --secret   Set specific user secret (32 hex characters)"
-    say "  -a, --ad-tag   Set ad_tag"
-    exit 0
+    printf '%s\n' "$*"
 }
-check_os_entity() {
-    if command -v getent >/dev/null 2>&1; then getent "$1" "$2" >/dev/null 2>&1
-    else grep -q "^${2}:" "/etc/$1" 2>/dev/null; fi
-}
-
-normalize_path() {
-    printf '%s\n' "$1" | tr -s '/' | sed 's|/$||; s|^$|/|'
-}
-
-get_realpath() {
-    path_in="$1"
-    case "$path_in" in /*) ;; *) path_in="$(pwd)/$path_in" ;; esac
-
-    if command -v realpath >/dev/null 2>&1; then
-        if realpath_out="$(realpath -m "$path_in" 2>/dev/null)"; then
-            printf '%s\n' "$realpath_out"
-            return
-        fi
-    fi
-
-    if command -v readlink >/dev/null 2>&1; then
-        resolved_path="$(readlink -f "$path_in" 2>/dev/null || true)"
-        if [ -n "$resolved_path" ]; then
-            printf '%s\n' "$resolved_path"
-            return
-        fi
-    fi
-
-    d="${path_in%/*}"; b="${path_in##*/}"
-    if [ -z "$d" ]; then d="/"; fi
-    if [ "$d" = "$path_in" ]; then d="/"; b="$path_in"; fi
-
-    if [ -d "$d" ]; then
-        abs_d="$(cd "$d" >/dev/null 2>&1 && pwd || true)"
-        if [ -n "$abs_d" ]; then
-            if [ "$b" = "." ] || [ -z "$b" ]; then printf '%s\n' "$abs_d"
-            elif [ "$abs_d" = "/" ]; then printf '/%s\n' "$b"
-            else printf '%s/%s\n' "$abs_d" "$b"; fi
-        else
-            normalize_path "$path_in"
-        fi
-    else
-        normalize_path "$path_in"
-    fi
-}
-
-get_svc_mgr() {
-    if command -v systemctl >/dev/null 2>&1 && [ -d /run/systemd/system ]; then echo "systemd"
-    elif command -v rc-service >/dev/null 2>&1; then echo "openrc"
-    else echo "none"; fi
-}
+die() {
+    printf 'Error: %s\n' "$*" >&2
+    exit 1
+}
+
+need_cmd() {
+    command -v "$1" >/dev/null 2>&1 || die "required command not found: $1"
+}
+
+detect_os() {
+    os="$(uname -s)"
+    case "$os" in
+        Linux) printf 'linux\n' ;;
+        OpenBSD) printf 'openbsd\n' ;;
+        *) printf '%s\n' "$os" ;;
+    esac
+}
-
-is_config_exists() {
-    if [ -n "$SUDO" ]; then
-        $SUDO sh -c '[ -f "$1" ]' _ "$CONFIG_FILE"
-    else
-        [ -f "$CONFIG_FILE" ]
-    fi
-}
-
-verify_common() {
-    [ -n "$BIN_NAME" ] || die "BIN_NAME cannot be empty."
-    [ -n "$INSTALL_DIR" ] || die "INSTALL_DIR cannot be empty."
-    [ -n "$CONFIG_DIR" ] || die "CONFIG_DIR cannot be empty."
-    [ -n "$CONFIG_FILE" ] || die "CONFIG_FILE cannot be empty."
-
-    case "${INSTALL_DIR}${CONFIG_DIR}${WORK_DIR}${CONFIG_FILE}" in
-        *[!a-zA-Z0-9_./-]*) die "Invalid characters in paths." ;;
-    esac
-
-    case "$TARGET_VERSION" in *[!a-zA-Z0-9_.-]*) die "Invalid characters in version." ;; esac
-    case "$BIN_NAME" in *[!a-zA-Z0-9_-]*) die "Invalid characters in BIN_NAME." ;; esac
-
-    INSTALL_DIR="$(get_realpath "$INSTALL_DIR")"
-    CONFIG_DIR="$(get_realpath "$CONFIG_DIR")"
-    WORK_DIR="$(get_realpath "$WORK_DIR")"
-    CONFIG_FILE="$(get_realpath "$CONFIG_FILE")"
-
-    CONFIG_PARENT_DIR="${CONFIG_FILE%/*}"
-    if [ -z "$CONFIG_PARENT_DIR" ]; then CONFIG_PARENT_DIR="/"; fi
-    if [ "$CONFIG_PARENT_DIR" = "$CONFIG_FILE" ]; then CONFIG_PARENT_DIR="."; fi
-
-    if [ "$(id -u)" -eq 0 ]; then
-        SUDO=""
-    else
-        command -v sudo >/dev/null 2>&1 || die "This script requires root or sudo."
-        SUDO="sudo"
-        if ! sudo -n true 2>/dev/null; then
-            if ! [ -t 0 ]; then
-                die "sudo requires a password, but no TTY detected."
-            fi
-        fi
-    fi
-
-    if [ -n "$SUDO" ]; then
-        if $SUDO sh -c '[ -d "$1" ]' _ "$CONFIG_FILE"; then
-            die "Safety check failed: CONFIG_FILE '$CONFIG_FILE' is a directory."
-        fi
-    elif [ -d "$CONFIG_FILE" ]; then
-        die "Safety check failed: CONFIG_FILE '$CONFIG_FILE' is a directory."
-    fi
-
-    for cmd in id uname awk grep find rm chown chmod mv mktemp mkdir tr dd sed ps head sleep cat tar gzip; do
-        command -v "$cmd" >/dev/null 2>&1 || die "Required command not found: $cmd"
-    done
-}
-
-verify_install_deps() {
-    command -v curl >/dev/null 2>&1 || command -v wget >/dev/null 2>&1 || die "Neither curl nor wget is installed."
-    command -v cp >/dev/null 2>&1 || command -v install >/dev/null 2>&1 || die "Need cp or install"
-
-    if ! command -v setcap >/dev/null 2>&1 || ! command -v conntrack >/dev/null 2>&1; then
-        if command -v apk >/dev/null 2>&1; then
-            $SUDO apk add --no-cache libcap-utils libcap conntrack-tools >/dev/null 2>&1 || true
-        elif command -v apt-get >/dev/null 2>&1; then
-            $SUDO env DEBIAN_FRONTEND=noninteractive apt-get install -y -q libcap2-bin conntrack >/dev/null 2>&1 || {
-                $SUDO env DEBIAN_FRONTEND=noninteractive apt-get update -q >/dev/null 2>&1 || true
-                $SUDO env DEBIAN_FRONTEND=noninteractive apt-get install -y -q libcap2-bin conntrack >/dev/null 2>&1 || true
-            }
-        elif command -v dnf >/dev/null 2>&1; then $SUDO dnf install -y -q libcap conntrack-tools >/dev/null 2>&1 || true
-        elif command -v yum >/dev/null 2>&1; then $SUDO yum install -y -q libcap conntrack-tools >/dev/null 2>&1 || true
-        fi
-    fi
-}
-
-check_port_availability() {
-    port_info=""
-
-    if command -v ss >/dev/null 2>&1; then
-        port_info=$($SUDO ss -tulnp 2>/dev/null | grep -E ":${SERVER_PORT}([[:space:]]|$)" || true)
-    elif command -v netstat >/dev/null 2>&1; then
-        port_info=$($SUDO netstat -tulnp 2>/dev/null | grep -E ":${SERVER_PORT}([[:space:]]|$)" || true)
-    elif command -v lsof >/dev/null 2>&1; then
-        port_info=$($SUDO lsof -i :${SERVER_PORT} 2>/dev/null | grep LISTEN || true)
-    else
-        say "[WARNING] Network diagnostic tools (ss, netstat, lsof) not found. Skipping port check."
-        return 0
-    fi
-
-    if [ -n "$port_info" ]; then
-        if printf '%s\n' "$port_info" | grep -q "${BIN_NAME}"; then
-            say " -> Port ${SERVER_PORT} is in use by ${BIN_NAME}. Ignoring as it will be restarted."
-        else
-            say "[ERROR] Port ${SERVER_PORT} is already in use by another process:"
-            printf '    %s\n' "$port_info"
-            die "Please free the port ${SERVER_PORT} or change it and try again."
-        fi
-    fi
-}
 detect_arch() {
-    sys_arch="$(uname -m)"
-    case "$sys_arch" in
-        x86_64|amd64)
-            if [ -r /proc/cpuinfo ] && grep -q "avx2" /proc/cpuinfo 2>/dev/null && grep -q "bmi2" /proc/cpuinfo 2>/dev/null; then
-                echo "x86_64-v3"
-            else
-                echo "x86_64"
-            fi
-            ;;
-        aarch64|arm64) echo "aarch64" ;;
-        *) die "Unsupported architecture: $sys_arch" ;;
+    arch="$(uname -m)"
+    case "$arch" in
+        x86_64|amd64) printf 'x86_64\n' ;;
+        aarch64|arm64) printf 'aarch64\n' ;;
+        *) die "unsupported architecture: $arch" ;;
     esac
 }

 detect_libc() {
-    for f in /lib/ld-musl-*.so.* /lib64/ld-musl-*.so.*; do
-        if [ -e "$f" ]; then echo "musl"; return 0; fi
-    done
-    if grep -qE '^ID="?alpine"?' /etc/os-release 2>/dev/null; then echo "musl"; return 0; fi
-    if command -v ldd >/dev/null 2>&1 && (ldd --version 2>&1 || true) | grep -qi musl; then echo "musl"; return 0; fi
-    echo "gnu"
+    case "$(ldd --version 2>&1 || true)" in
+        *musl*) printf 'musl\n' ;;
+        *) printf 'gnu\n' ;;
+    esac
 }

-fetch_file() {
-    if command -v curl >/dev/null 2>&1; then curl -fsSL "$1" -o "$2"
-    else wget -q -O "$2" "$1"; fi
-}
-
-ensure_user_group() {
-    nologin_bin="$(command -v nologin 2>/dev/null || command -v false 2>/dev/null || echo /bin/false)"
-
-    if ! check_os_entity group telemt; then
-        if command -v groupadd >/dev/null 2>&1; then $SUDO groupadd -r telemt
-        elif command -v addgroup >/dev/null 2>&1; then $SUDO addgroup -S telemt
-        else die "Cannot create group"; fi
-    fi
-
-    if ! check_os_entity passwd telemt; then
-        if command -v useradd >/dev/null 2>&1; then
-            $SUDO useradd -r -g telemt -d "$WORK_DIR" -s "$nologin_bin" -c "Telemt Proxy" telemt
-        elif command -v adduser >/dev/null 2>&1; then
-            if adduser --help 2>&1 | grep -q -- '-S'; then
-                $SUDO adduser -S -D -H -h "$WORK_DIR" -s "$nologin_bin" -G telemt telemt
-            else
-                $SUDO adduser --system --home "$WORK_DIR" --shell "$nologin_bin" --no-create-home --ingroup telemt --disabled-password telemt
-            fi
-        else die "Cannot create user"; fi
-    fi
-}
+fetch_to_stdout() {
+    url="$1"
+    if command -v curl >/dev/null 2>&1; then
+        curl -fsSL "$url"
+    elif command -v wget >/dev/null 2>&1; then
+        wget -qO- "$url"
+    else
+        die "neither curl nor wget is installed"

-setup_dirs() {
-    $SUDO mkdir -p "$WORK_DIR" "$CONFIG_DIR" "$CONFIG_PARENT_DIR" || die "Failed to create directories"
-
-    $SUDO chown telemt:telemt "$WORK_DIR" && $SUDO chmod 750 "$WORK_DIR"
-    $SUDO chown root:telemt "$CONFIG_DIR" && $SUDO chmod 750 "$CONFIG_DIR"
-
-    if [ "$CONFIG_PARENT_DIR" != "$CONFIG_DIR" ] && [ "$CONFIG_PARENT_DIR" != "." ] && [ "$CONFIG_PARENT_DIR" != "/" ]; then
-        $SUDO chown root:telemt "$CONFIG_PARENT_DIR" && $SUDO chmod 750 "$CONFIG_PARENT_DIR"
-    fi
-}
-
-stop_service() {
-    svc="$(get_svc_mgr)"
-    if [ "$svc" = "systemd" ] && systemctl is-active --quiet "$SERVICE_NAME" 2>/dev/null; then
-        $SUDO systemctl stop "$SERVICE_NAME" 2>/dev/null || true
-    elif [ "$svc" = "openrc" ] && rc-service "$SERVICE_NAME" status >/dev/null 2>&1; then
-        $SUDO rc-service "$SERVICE_NAME" stop 2>/dev/null || true
     fi
 }

 install_binary() {
-    bin_src="$1"; bin_dst="$2"
-    if [ -e "$INSTALL_DIR" ] && [ ! -d "$INSTALL_DIR" ]; then
-        die "'$INSTALL_DIR' is not a directory."
-    fi
-
-    $SUDO mkdir -p "$INSTALL_DIR" || die "Failed to create install directory"
-
-    $SUDO rm -f "$bin_dst" 2>/dev/null || true
-
-    if command -v install >/dev/null 2>&1; then
-        $SUDO install -m 0755 "$bin_src" "$bin_dst" || die "Failed to install binary"
-    else
-        $SUDO cp "$bin_src" "$bin_dst" && $SUDO chmod 0755 "$bin_dst" || die "Failed to copy binary"
-    fi
-
-    $SUDO sh -c '[ -x "$1" ]' _ "$bin_dst" || die "Binary not executable: $bin_dst"
-
-    if command -v setcap >/dev/null 2>&1; then
-        $SUDO setcap cap_net_bind_service,cap_net_admin=+ep "$bin_dst" 2>/dev/null || true
+    src="$1"
+    dst="$2"
+    if [ -w "$INSTALL_DIR" ] || { [ ! -e "$INSTALL_DIR" ] && [ -w "$(dirname "$INSTALL_DIR")" ]; }; then
+        mkdir -p "$INSTALL_DIR"
+        install -m 0755 "$src" "$dst"
+    elif command -v sudo >/dev/null 2>&1; then
+        sudo mkdir -p "$INSTALL_DIR"
+        sudo install -m 0755 "$src" "$dst"
+    else
+        die "cannot write to $INSTALL_DIR and sudo is not available"
     fi
 }

-generate_secret() {
-    secret="$(command -v openssl >/dev/null 2>&1 && openssl rand -hex 16 2>/dev/null || true)"
-    if [ -z "$secret" ] || [ "${#secret}" -ne 32 ]; then
-        if command -v od >/dev/null 2>&1; then secret="$(dd if=/dev/urandom bs=16 count=1 2>/dev/null | od -An -tx1 | tr -d ' \n')"
-        elif command -v hexdump >/dev/null 2>&1; then secret="$(dd if=/dev/urandom bs=16 count=1 2>/dev/null | hexdump -e '1/1 "%02x"')"
-        elif command -v xxd >/dev/null 2>&1; then secret="$(dd if=/dev/urandom bs=16 count=1 2>/dev/null | xxd -p | tr -d '\n')"
-        fi
-    fi
-    if [ "${#secret}" -eq 32 ]; then echo "$secret"; else return 1; fi
-}
+need_cmd uname
+need_cmd tar
+need_cmd mktemp
+need_cmd grep
+need_cmd install

-generate_config_content() {
-    conf_secret="$1"
-    conf_tag="$2"
-    escaped_tls_domain="$(printf '%s\n' "$TLS_DOMAIN" | tr -d '[:cntrl:]' | sed 's/\\/\\\\/g; s/"/\\"/g')"
-
-    cat <<EOF
-[general]
-use_middle_proxy = true
-EOF
-
-    if [ -n "$conf_tag" ]; then
-        echo "ad_tag = \"${conf_tag}\""
-    fi
-
-    cat <<EOF
-
-[general.modes]
-classic = false
-secure = false
-tls = true
-
-[server]
-port = ${SERVER_PORT}
-
-[server.api]
-enabled = true
-listen = "127.0.0.1:9091"
-whitelist = ["127.0.0.1/32"]
-
-[censorship]
-tls_domain = "${escaped_tls_domain}"
-
-[access.users]
-hello = "${conf_secret}"
-EOF
-}
+ARCH="$(detect_arch)"
+OS="$(detect_os)"
+
+if [ "$OS" != "linux" ]; then
+    case "$OS" in
+        openbsd)
+            die "install.sh installs only Linux release artifacts. On OpenBSD, build from source (see docs/OPENBSD.en.md)."
+            ;;
+        *)
+            die "unsupported operating system for install.sh: $OS"
+            ;;
+    esac
+fi
+
+LIBC="$(detect_libc)"
+
+case "$VERSION" in
+    latest)
+        URL="https://github.com/$REPO/releases/latest/download/${BIN_NAME}-${ARCH}-linux-${LIBC}.tar.gz"
+        ;;
+    *)
+        URL="https://github.com/$REPO/releases/download/${VERSION}/${BIN_NAME}-${ARCH}-linux-${LIBC}.tar.gz"

-install_config() {
-    if is_config_exists; then
-        say " -> Config already exists at $CONFIG_FILE. Updating parameters..."
-
-        tmp_conf="${TEMP_DIR}/config.tmp"
-        $SUDO cat "$CONFIG_FILE" > "$tmp_conf"
-
-        escaped_domain="$(printf '%s\n' "$TLS_DOMAIN" | tr -d '[:cntrl:]' | sed 's/\\/\\\\/g; s/"/\\"/g')"
-
-        export AWK_PORT="$SERVER_PORT"
-        export AWK_SECRET="$USER_SECRET"
-        export AWK_DOMAIN="$escaped_domain"
-        export AWK_AD_TAG="$AD_TAG"
-        export AWK_FLAG_P="$PORT_PROVIDED"
-        export AWK_FLAG_S="$SECRET_PROVIDED"
-        export AWK_FLAG_D="$DOMAIN_PROVIDED"
-        export AWK_FLAG_A="$AD_TAG_PROVIDED"
-
-        awk '
-            BEGIN { ad_tag_handled = 0 }
-
-            ENVIRON["AWK_FLAG_P"] == "1" && /^[ \t]*port[ \t]*=/ { print "port = " ENVIRON["AWK_PORT"]; next }
-            ENVIRON["AWK_FLAG_S"] == "1" && /^[ \t]*hello[ \t]*=/ { print "hello = \"" ENVIRON["AWK_SECRET"] "\""; next }
-            ENVIRON["AWK_FLAG_D"] == "1" && /^[ \t]*tls_domain[ \t]*=/ { print "tls_domain = \"" ENVIRON["AWK_DOMAIN"] "\""; next }
-
-            ENVIRON["AWK_FLAG_A"] == "1" && /^[ \t]*ad_tag[ \t]*=/ {
-                if (!ad_tag_handled) {
-                    print "ad_tag = \"" ENVIRON["AWK_AD_TAG"] "\"";
-                    ad_tag_handled = 1;
-                }
-                next
-            }
-            ENVIRON["AWK_FLAG_A"] == "1" && /^\[general\]/ {
-                print;
-                if (!ad_tag_handled) {
-                    print "ad_tag = \"" ENVIRON["AWK_AD_TAG"] "\"";
-                    ad_tag_handled = 1;
-                }
-                next
-            }
-
-            { print }
-        ' "$tmp_conf" > "${tmp_conf}.new" && mv "${tmp_conf}.new" "$tmp_conf"
-
-        [ "$PORT_PROVIDED" -eq 1 ] && say " -> Updated port: $SERVER_PORT"
-        [ "$SECRET_PROVIDED" -eq 1 ] && say " -> Updated secret for user 'hello'"
-        [ "$DOMAIN_PROVIDED" -eq 1 ] && say " -> Updated tls_domain: $TLS_DOMAIN"
-        [ "$AD_TAG_PROVIDED" -eq 1 ] && say " -> Updated ad_tag"
-
-        write_root "$CONFIG_FILE" < "$tmp_conf"
-        rm -f "$tmp_conf"
-        return 0
-    fi
-
-    if [ -z "$USER_SECRET" ]; then
-        USER_SECRET="$(generate_secret)" || die "Failed to generate secret."
-    fi
-
-    generate_config_content "$USER_SECRET" "$AD_TAG" | write_root "$CONFIG_FILE" || die "Failed to install config"
-    $SUDO chown root:telemt "$CONFIG_FILE" && $SUDO chmod 640 "$CONFIG_FILE"
-
-    say " -> Config created successfully."
-    say " -> Configured secret for user 'hello': $USER_SECRET"
-}

-generate_systemd_content() {
-    cat <<EOF
-[Unit]
-Description=Telemt
-After=network-online.target
-Wants=network-online.target
-
-[Service]
-Type=simple
-User=telemt
-Group=telemt
-WorkingDirectory=$WORK_DIR
-ExecStart="${INSTALL_DIR}/${BIN_NAME}" "${CONFIG_FILE}"
-Restart=on-failure
-RestartSec=5
-LimitNOFILE=65536
-AmbientCapabilities=CAP_NET_BIND_SERVICE CAP_NET_ADMIN
-CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_NET_ADMIN
-
-[Install]
-WantedBy=multi-user.target
-EOF
-}
-
-generate_openrc_content() {
-    cat <<EOF
-#!/sbin/openrc-run
-name="$SERVICE_NAME"
-description="Telemt Proxy Service"
-command="${INSTALL_DIR}/${BIN_NAME}"
-command_args="${CONFIG_FILE}"
-command_background=true
-command_user="telemt:telemt"
-pidfile="/run/\${RC_SVCNAME}.pid"
-directory="${WORK_DIR}"
-rc_ulimit="-n 65536"
-depend() { need net; use logger; }
-EOF
-}

-install_service() {
-    svc="$(get_svc_mgr)"
-    if [ "$svc" = "systemd" ]; then
-        generate_systemd_content | write_root "/etc/systemd/system/${SERVICE_NAME}.service"
-        $SUDO chown root:root "/etc/systemd/system/${SERVICE_NAME}.service" && $SUDO chmod 644 "/etc/systemd/system/${SERVICE_NAME}.service"
-
-        $SUDO systemctl daemon-reload || true
-        $SUDO systemctl enable "$SERVICE_NAME" || true
-
-        if ! $SUDO systemctl start "$SERVICE_NAME"; then
-            say "[WARNING] Failed to start service"
-            SERVICE_START_FAILED=1
-        fi
-    elif [ "$svc" = "openrc" ]; then
-        generate_openrc_content | write_root "/etc/init.d/${SERVICE_NAME}"
-        $SUDO chown root:root "/etc/init.d/${SERVICE_NAME}" && $SUDO chmod 0755 "/etc/init.d/${SERVICE_NAME}"
-
-        $SUDO rc-update add "$SERVICE_NAME" default 2>/dev/null || true
-
-        if ! $SUDO rc-service "$SERVICE_NAME" start 2>/dev/null; then
-            say "[WARNING] Failed to start service"
-            SERVICE_START_FAILED=1
-        fi
-    else
-        cmd="\"${INSTALL_DIR}/${BIN_NAME}\" \"${CONFIG_FILE}\""
-        if [ -n "$SUDO" ]; then
-            say " -> Service manager not found. Start manually: sudo -u telemt $cmd"
-        else
-            say " -> Service manager not found. Start manually: su -s /bin/sh telemt -c '$cmd'"
-        fi
-    fi
-}

-kill_user_procs() {
-    if command -v pkill >/dev/null 2>&1; then
-        $SUDO pkill -u telemt "$BIN_NAME" 2>/dev/null || true
-        sleep 1
-        $SUDO pkill -9 -u telemt "$BIN_NAME" 2>/dev/null || true
-    else
-        if command -v pgrep >/dev/null 2>&1; then
-            pids="$(pgrep -u telemt 2>/dev/null || true)"
-        else
-            pids="$(ps -ef 2>/dev/null | awk '$1=="telemt"{print $2}' || true)"
-            [ -z "$pids" ] && pids="$(ps 2>/dev/null | awk '$2=="telemt"{print $1}' || true)"
-        fi
-
-        if [ -n "$pids" ]; then
-            for pid in $pids; do
-                case "$pid" in ''|*[!0-9]*) continue ;; *) $SUDO kill "$pid" 2>/dev/null || true ;; esac
-            done
-            sleep 1
-            for pid in $pids; do
-                case "$pid" in ''|*[!0-9]*) continue ;; *) $SUDO kill -9 "$pid" 2>/dev/null || true ;; esac
-            done
-        fi
-    fi
-}

-uninstall() {
-    say "Starting uninstallation of $BIN_NAME..."
-
-    say ">>> Stage 1: Stopping services"
-    stop_service
-
-    say ">>> Stage 2: Removing service configuration"
-    svc="$(get_svc_mgr)"
-    if [ "$svc" = "systemd" ]; then
-        $SUDO systemctl disable "$SERVICE_NAME" 2>/dev/null || true
-        $SUDO rm -f "/etc/systemd/system/${SERVICE_NAME}.service"
-        $SUDO systemctl daemon-reload 2>/dev/null || true
-    elif [ "$svc" = "openrc" ]; then
-        $SUDO rc-update del "$SERVICE_NAME" 2>/dev/null || true
-        $SUDO rm -f "/etc/init.d/${SERVICE_NAME}"
-    fi
-
-    say ">>> Stage 3: Terminating user processes"
-    kill_user_procs
-
-    say ">>> Stage 4: Removing binary"
-    $SUDO rm -f "${INSTALL_DIR}/${BIN_NAME}"
-
-    if [ "$ACTION" = "purge" ]; then
-        say ">>> Stage 5: Purging configuration, data, and user"
-        $SUDO rm -rf "$CONFIG_DIR" "$WORK_DIR"
-        $SUDO rm -f "$CONFIG_FILE"
-        sleep 1
-        $SUDO userdel telemt 2>/dev/null || $SUDO deluser telemt 2>/dev/null || true
-
-        if check_os_entity group telemt; then
-            $SUDO groupdel telemt 2>/dev/null || $SUDO delgroup telemt 2>/dev/null || true
-        fi
-    else
-        say "Note: Configuration and user kept. Run with 'purge' to remove completely."
-    fi
-
-    printf '\n====================================================================\n'
-    printf ' UNINSTALLATION COMPLETE\n'
-    printf '====================================================================\n\n'
-    exit 0
-}

-case "$ACTION" in
-    help) show_help ;;
-    uninstall|purge) verify_common; uninstall ;;
-    install)
-        say "Starting installation of $BIN_NAME (Version: $TARGET_VERSION)"
-
-        say ">>> Stage 1: Verifying environment and dependencies"
-        verify_common
-        verify_install_deps
-
-        if is_config_exists && [ "$PORT_PROVIDED" -eq 0 ]; then
-            ext_port="$($SUDO awk -F'=' '/^[ \t]*port[ \t]*=/ {gsub(/[^0-9]/, "", $2); print $2; exit}' "$CONFIG_FILE" 2>/dev/null || true)"
-            if [ -n "$ext_port" ]; then
-                SERVER_PORT="$ext_port"
-            fi
-        fi
-
-        check_port_availability
-
-        if [ "$TARGET_VERSION" != "latest" ]; then
-            TARGET_VERSION="${TARGET_VERSION#v}"
-        fi
-
-        ARCH="$(detect_arch)"; LIBC="$(detect_libc)"
-        FILE_NAME="${BIN_NAME}-${ARCH}-linux-${LIBC}.tar.gz"
-
-        if [ "$TARGET_VERSION" = "latest" ]; then
-            DL_URL="https://github.com/${REPO}/releases/latest/download/${FILE_NAME}"
-        else
-            DL_URL="https://github.com/${REPO}/releases/download/${TARGET_VERSION}/${FILE_NAME}"
-        fi
-
-        say ">>> Stage 2: Downloading archive"
-        TEMP_DIR="$(mktemp -d)" || die "Temp directory creation failed"
-        if [ -z "$TEMP_DIR" ] || [ ! -d "$TEMP_DIR" ]; then
-            die "Temp directory is invalid or was not created"
-        fi
-
-        if ! fetch_file "$DL_URL" "${TEMP_DIR}/${FILE_NAME}"; then
-            if [ "$ARCH" = "x86_64-v3" ]; then
-                say " -> x86_64-v3 build not found, falling back to standard x86_64..."
-                ARCH="x86_64"
-                FILE_NAME="${BIN_NAME}-${ARCH}-linux-${LIBC}.tar.gz"
-                if [ "$TARGET_VERSION" = "latest" ]; then
-                    DL_URL="https://github.com/${REPO}/releases/latest/download/${FILE_NAME}"
-                else
-                    DL_URL="https://github.com/${REPO}/releases/download/${TARGET_VERSION}/${FILE_NAME}"
-                fi
-                fetch_file "$DL_URL" "${TEMP_DIR}/${FILE_NAME}" || die "Download failed"
-            else
-                die "Download failed"
-            fi
-        fi
-
-        say ">>> Stage 3: Extracting archive"
-        if ! gzip -dc "${TEMP_DIR}/${FILE_NAME}" | tar -xf - -C "$TEMP_DIR" 2>/dev/null; then
-            die "Extraction failed (downloaded archive might be invalid or 404)."
-        fi
-
-        EXTRACTED_BIN="$(find "$TEMP_DIR" -type f -name "$BIN_NAME" -print 2>/dev/null | head -n 1 || true)"
-        [ -n "$EXTRACTED_BIN" ] || die "Binary '$BIN_NAME' not found in archive"
-
-        say ">>> Stage 4: Setting up environment (User, Group, Directories)"
-        ensure_user_group; setup_dirs; stop_service
-
-        say ">>> Stage 5: Installing binary"
-        install_binary "$EXTRACTED_BIN" "${INSTALL_DIR}/${BIN_NAME}"
-
-        say ">>> Stage 6: Generating/Updating configuration"
-        install_config
-
-        say ">>> Stage 7: Installing and starting service"
-        install_service
-
-        if [ "${SERVICE_START_FAILED:-0}" -eq 1 ]; then
-            printf '\n====================================================================\n'
-            printf ' INSTALLATION COMPLETED WITH WARNINGS\n'
-            printf '====================================================================\n\n'
-            printf 'The service was installed but failed to start automatically.\n'
-            printf 'Please check the logs to determine the issue.\n\n'
-        else
-            printf '\n====================================================================\n'
-            printf ' INSTALLATION SUCCESS\n'
-            printf '====================================================================\n\n'
-        fi
-
-        svc="$(get_svc_mgr)"
-        if [ "$svc" = "systemd" ]; then
-            printf 'To check the status of your proxy service, run:\n'
-            printf ' systemctl status %s\n\n' "$SERVICE_NAME"
-        elif [ "$svc" = "openrc" ]; then
-            printf 'To check the status of your proxy service, run:\n'
-            printf ' rc-service %s status\n\n' "$SERVICE_NAME"
-        fi
-
-        API_LISTEN="$($SUDO awk -F'"' '/^[ \t]*listen[ \t]*=/ {print $2; exit}' "$CONFIG_FILE" 2>/dev/null || true)"
-        API_LISTEN="${API_LISTEN:-127.0.0.1:9091}"
-
-        printf 'To get your user connection links (for Telegram), run:\n'
-        if command -v jq >/dev/null 2>&1; then
-            printf ' curl -s http://%s/v1/users | jq -r '\''.data[]? | "User: \\(.username)\\n\\(.links.tls[0] // empty)\\n"'\''\n' "$API_LISTEN"
-        else
-            printf ' curl -s http://%s/v1/users\n' "$API_LISTEN"
-            printf ' (Tip: Install '\''jq'\'' for a much cleaner output)\n'
-        fi
-
-        printf '\n====================================================================\n'
     ;;
 esac

+TMPDIR="$(mktemp -d)"
+trap 'rm -rf "$TMPDIR"' EXIT INT TERM
+
+say "Installing $BIN_NAME ($VERSION) for $ARCH-linux-$LIBC..."
+fetch_to_stdout "$URL" | tar -xzf - -C "$TMPDIR"
+
+[ -f "$TMPDIR/$BIN_NAME" ] || die "archive did not contain $BIN_NAME"
+
+install_binary "$TMPDIR/$BIN_NAME" "$INSTALL_DIR/$BIN_NAME"
+
+say "Installed: $INSTALL_DIR/$BIN_NAME"
+"$INSTALL_DIR/$BIN_NAME" --version 2>/dev/null || true

@@ -24,7 +24,10 @@ pub(super) fn success_response<T: Serialize>(
         .unwrap()
 }

-pub(super) fn error_response(request_id: u64, failure: ApiFailure) -> hyper::Response<Full<Bytes>> {
+pub(super) fn error_response(
+    request_id: u64,
+    failure: ApiFailure,
+) -> hyper::Response<Full<Bytes>> {
     let payload = ErrorResponse {
         ok: false,
         error: ErrorBody {

124 src/api/mod.rs

@@ -1,5 +1,3 @@
-#![allow(clippy::too_many_arguments)]
-
 use std::convert::Infallible;
 use std::net::{IpAddr, SocketAddr};
 use std::path::PathBuf;

@@ -21,8 +19,8 @@ use crate::ip_tracker::UserIpTracker;
 use crate::proxy::route_mode::RouteRuntimeController;
 use crate::startup::StartupTracker;
 use crate::stats::Stats;
-use crate::transport::UpstreamManager;
 use crate::transport::middle_proxy::MePool;
+use crate::transport::UpstreamManager;

 mod config_store;
 mod events;

@@ -37,12 +35,11 @@ mod runtime_watch;
 mod runtime_zero;
 mod users;

-use config_store::{current_revision, load_config_from_disk, parse_if_match};
+use config_store::{current_revision, parse_if_match};
-use events::ApiEventStore;
 use http_utils::{error_response, read_json, read_optional_json, success_response};
+use events::ApiEventStore;
 use model::{
-    ApiFailure, CreateUserRequest, DeleteUserResponse, HealthData, PatchUserRequest,
-    RotateSecretRequest, SummaryData, UserActiveIps,
+    ApiFailure, CreateUserRequest, HealthData, PatchUserRequest, RotateSecretRequest, SummaryData,
 };
 use runtime_edge::{
     EdgeConnectionsCacheEntry, build_runtime_connections_summary_data,

@@ -58,11 +55,11 @@ use runtime_stats::{
     MinimalCacheEntry, build_dcs_data, build_me_writers_data, build_minimal_all_data,
     build_upstreams_data, build_zero_all_data,
 };
-use runtime_watch::spawn_runtime_watchers;
 use runtime_zero::{
     build_limits_effective_data, build_runtime_gates_data, build_security_posture_data,
     build_system_info_data,
 };
+use runtime_watch::spawn_runtime_watchers;
 use users::{create_user, delete_user, patch_user, rotate_secret, users_from_config};

 pub(super) struct ApiRuntimeState {

@@ -211,15 +208,15 @@ async fn handle(
         ));
     }

-    if !api_cfg.whitelist.is_empty() && !api_cfg.whitelist.iter().any(|net| net.contains(peer.ip()))
+    if !api_cfg.whitelist.is_empty()
+        && !api_cfg
+            .whitelist
+            .iter()
+            .any(|net| net.contains(peer.ip()))
     {
         return Ok(error_response(
             request_id,
-            ApiFailure::new(
-                StatusCode::FORBIDDEN,
-                "forbidden",
-                "Source IP is not allowed",
-            ),
+            ApiFailure::new(StatusCode::FORBIDDEN, "forbidden", "Source IP is not allowed"),
         ));
     }

@@ -350,8 +347,7 @@ async fn handle(
         }
         ("GET", "/v1/runtime/connections/summary") => {
             let revision = current_revision(&shared.config_path).await?;
-            let data =
-                build_runtime_connections_summary_data(shared.as_ref(), cfg.as_ref()).await;
+            let data = build_runtime_connections_summary_data(shared.as_ref(), cfg.as_ref()).await;
             Ok(success_response(StatusCode::OK, data, revision))
         }
        ("GET", "/v1/runtime/events/recent") => {

@@ -363,33 +359,15 @@ async fn handle(
             );
             Ok(success_response(StatusCode::OK, data, revision))
         }
-        ("GET", "/v1/stats/users/active-ips") => {
-            let revision = current_revision(&shared.config_path).await?;
-            let usernames: Vec<_> = cfg.access.users.keys().cloned().collect();
-            let active_ips_map = shared.ip_tracker.get_active_ips_for_users(&usernames).await;
-            let mut data: Vec<UserActiveIps> = active_ips_map
-                .into_iter()
-                .filter(|(_, ips)| !ips.is_empty())
-                .map(|(username, active_ips)| UserActiveIps {
-                    username,
-                    active_ips,
-                })
-                .collect();
-            data.sort_by(|a, b| a.username.cmp(&b.username));
-            Ok(success_response(StatusCode::OK, data, revision))
-        }
         ("GET", "/v1/stats/users") | ("GET", "/v1/users") => {
             let revision = current_revision(&shared.config_path).await?;
-            let disk_cfg = load_config_from_disk(&shared.config_path).await?;
-            let runtime_cfg = config_rx.borrow().clone();
             let (detected_ip_v4, detected_ip_v6) = shared.detected_link_ips();
             let users = users_from_config(
-                &disk_cfg,
+                &cfg,
                 &shared.stats,
                 &shared.ip_tracker,
                 detected_ip_v4,
                 detected_ip_v6,
-                Some(runtime_cfg.as_ref()),
             )
             .await;
             Ok(success_response(StatusCode::OK, users, revision))

@@ -408,27 +386,17 @@ async fn handle(
             let expected_revision = parse_if_match(req.headers());
             let body = read_json::<CreateUserRequest>(req.into_body(), body_limit).await?;
             let result = create_user(body, expected_revision, &shared).await;
-            let (mut data, revision) = match result {
+            let (data, revision) = match result {
                 Ok(ok) => ok,
                 Err(error) => {
-                    shared
-                        .runtime_events
-                        .record("api.user.create.failed", error.code);
+                    shared.runtime_events.record("api.user.create.failed", error.code);
                     return Err(error);
                 }
             };
-            let runtime_cfg = config_rx.borrow().clone();
-            data.user.in_runtime = runtime_cfg.access.users.contains_key(&data.user.username);
-            shared.runtime_events.record(
-                "api.user.create.ok",
-                format!("username={}", data.user.username),
-            );
-            let status = if data.user.in_runtime {
-                StatusCode::CREATED
-            } else {
-                StatusCode::ACCEPTED
-            };
-            Ok(success_response(status, data, revision))
+            shared
+                .runtime_events
+                .record("api.user.create.ok", format!("username={}", data.user.username));
+            Ok(success_response(StatusCode::CREATED, data, revision))
         }
         _ => {
             if let Some(user) = path.strip_prefix("/v1/users/")

@@ -437,20 +405,16 @@ async fn handle(
             {
                 if method == Method::GET {
                     let revision = current_revision(&shared.config_path).await?;
-                    let disk_cfg = load_config_from_disk(&shared.config_path).await?;
-                    let runtime_cfg = config_rx.borrow().clone();
                     let (detected_ip_v4, detected_ip_v6) = shared.detected_link_ips();
                     let users = users_from_config(
-                        &disk_cfg,
+                        &cfg,
                         &shared.stats,
                         &shared.ip_tracker,
                         detected_ip_v4,
                         detected_ip_v6,
-                        Some(runtime_cfg.as_ref()),
                     )
                     .await;
-                    if let Some(user_info) =
-                        users.into_iter().find(|entry| entry.username == user)
+                    if let Some(user_info) = users.into_iter().find(|entry| entry.username == user)
                     {
                         return Ok(success_response(StatusCode::OK, user_info, revision));
                     }

@@ -471,10 +435,9 @@ async fn handle(
                     ));
                 }
                 let expected_revision = parse_if_match(req.headers());
-                let body =
-                    read_json::<PatchUserRequest>(req.into_body(), body_limit).await?;
+                let body = read_json::<PatchUserRequest>(req.into_body(), body_limit).await?;
                 let result = patch_user(user, body, expected_revision, &shared).await;
-                let (mut data, revision) = match result {
+                let (data, revision) = match result {
                     Ok(ok) => ok,
                     Err(error) => {
                         shared.runtime_events.record(

@@ -484,17 +447,10 @@ async fn handle(
                         return Err(error);
                     }
                 };
-                let runtime_cfg = config_rx.borrow().clone();
-                data.in_runtime = runtime_cfg.access.users.contains_key(&data.username);
                 shared
                     .runtime_events
                     .record("api.user.patch.ok", format!("username={}", data.username));
-                let status = if data.in_runtime {
-                    StatusCode::OK
-                } else {
-                    StatusCode::ACCEPTED
-                };
-                return Ok(success_response(status, data, revision));
+                return Ok(success_response(StatusCode::OK, data, revision));
                 }
if method == Method::DELETE {
|
if method == Method::DELETE {
|
||||||
if api_cfg.read_only {
|
if api_cfg.read_only {
|
||||||
|
|
@ -519,21 +475,11 @@ async fn handle(
|
||||||
return Err(error);
|
return Err(error);
|
||||||
}
|
}
|
||||||
};
|
};
|
||||||
shared
|
shared.runtime_events.record(
|
||||||
.runtime_events
|
"api.user.delete.ok",
|
||||||
.record("api.user.delete.ok", format!("username={}", deleted_user));
|
format!("username={}", deleted_user),
|
||||||
let runtime_cfg = config_rx.borrow().clone();
|
);
|
||||||
let in_runtime = runtime_cfg.access.users.contains_key(&deleted_user);
|
return Ok(success_response(StatusCode::OK, deleted_user, revision));
|
||||||
let response = DeleteUserResponse {
|
|
||||||
username: deleted_user,
|
|
||||||
in_runtime,
|
|
||||||
};
|
|
||||||
let status = if response.in_runtime {
|
|
||||||
StatusCode::ACCEPTED
|
|
||||||
} else {
|
|
||||||
StatusCode::OK
|
|
||||||
};
|
|
||||||
return Ok(success_response(status, response, revision));
|
|
||||||
}
|
}
|
||||||
if method == Method::POST
|
if method == Method::POST
|
||||||
&& let Some(base_user) = user.strip_suffix("/rotate-secret")
|
&& let Some(base_user) = user.strip_suffix("/rotate-secret")
|
||||||
|
|
@ -561,7 +507,7 @@ async fn handle(
|
||||||
&shared,
|
&shared,
|
||||||
)
|
)
|
||||||
.await;
|
.await;
|
||||||
let (mut data, revision) = match result {
|
let (data, revision) = match result {
|
||||||
Ok(ok) => ok,
|
Ok(ok) => ok,
|
||||||
Err(error) => {
|
Err(error) => {
|
||||||
shared.runtime_events.record(
|
shared.runtime_events.record(
|
||||||
|
|
@ -571,19 +517,11 @@ async fn handle(
|
||||||
return Err(error);
|
return Err(error);
|
||||||
}
|
}
|
||||||
};
|
};
|
||||||
let runtime_cfg = config_rx.borrow().clone();
|
|
||||||
data.user.in_runtime =
|
|
||||||
runtime_cfg.access.users.contains_key(&data.user.username);
|
|
||||||
shared.runtime_events.record(
|
shared.runtime_events.record(
|
||||||
"api.user.rotate_secret.ok",
|
"api.user.rotate_secret.ok",
|
||||||
format!("username={}", base_user),
|
format!("username={}", base_user),
|
||||||
);
|
);
|
||||||
let status = if data.user.in_runtime {
|
return Ok(success_response(StatusCode::OK, data, revision));
|
||||||
StatusCode::OK
|
|
||||||
} else {
|
|
||||||
StatusCode::ACCEPTED
|
|
||||||
};
|
|
||||||
return Ok(success_response(status, data, revision));
|
|
||||||
}
|
}
|
||||||
if method == Method::POST {
|
if method == Method::POST {
|
||||||
return Ok(error_response(
|
return Ok(error_response(
|
||||||
|
|
|
||||||
|
|
@@ -1,12 +1,10 @@
 use std::net::IpAddr;
-use std::sync::OnceLock;
 
 use chrono::{DateTime, Utc};
 use hyper::StatusCode;
+use rand::Rng;
 use serde::{Deserialize, Serialize};
 
-use crate::crypto::SecureRandom;
-
 const MAX_USERNAME_LEN: usize = 64;
 
 #[derive(Debug)]
@@ -81,21 +79,10 @@ pub(super) struct ZeroCoreData {
     pub(super) connections_total: u64,
     pub(super) connections_bad_total: u64,
     pub(super) handshake_timeouts_total: u64,
-    pub(super) accept_permit_timeout_total: u64,
     pub(super) configured_users: usize,
     pub(super) telemetry_core_enabled: bool,
     pub(super) telemetry_user_enabled: bool,
     pub(super) telemetry_me_level: String,
-    pub(super) conntrack_control_enabled: bool,
-    pub(super) conntrack_control_available: bool,
-    pub(super) conntrack_pressure_active: bool,
-    pub(super) conntrack_event_queue_depth: u64,
-    pub(super) conntrack_rule_apply_ok: bool,
-    pub(super) conntrack_delete_attempt_total: u64,
-    pub(super) conntrack_delete_success_total: u64,
-    pub(super) conntrack_delete_not_found_total: u64,
-    pub(super) conntrack_delete_error_total: u64,
-    pub(super) conntrack_close_event_drop_total: u64,
 }
 
 #[derive(Serialize, Clone)]
@@ -147,7 +134,6 @@ pub(super) struct UpstreamSummaryData {
     pub(super) direct_total: usize,
     pub(super) socks4_total: usize,
     pub(super) socks5_total: usize,
-    pub(super) shadowsocks_total: usize,
 }
 
 #[derive(Serialize, Clone)]
@@ -185,24 +171,6 @@ pub(super) struct ZeroMiddleProxyData {
     pub(super) route_drop_queue_full_total: u64,
     pub(super) route_drop_queue_full_base_total: u64,
     pub(super) route_drop_queue_full_high_total: u64,
-    pub(super) d2c_batches_total: u64,
-    pub(super) d2c_batch_frames_total: u64,
-    pub(super) d2c_batch_bytes_total: u64,
-    pub(super) d2c_flush_reason_queue_drain_total: u64,
-    pub(super) d2c_flush_reason_batch_frames_total: u64,
-    pub(super) d2c_flush_reason_batch_bytes_total: u64,
-    pub(super) d2c_flush_reason_max_delay_total: u64,
-    pub(super) d2c_flush_reason_ack_immediate_total: u64,
-    pub(super) d2c_flush_reason_close_total: u64,
-    pub(super) d2c_data_frames_total: u64,
-    pub(super) d2c_ack_frames_total: u64,
-    pub(super) d2c_payload_bytes_total: u64,
-    pub(super) d2c_write_mode_coalesced_total: u64,
-    pub(super) d2c_write_mode_split_total: u64,
-    pub(super) d2c_quota_reject_pre_write_total: u64,
-    pub(super) d2c_quota_reject_post_write_total: u64,
-    pub(super) d2c_frame_buf_shrink_total: u64,
-    pub(super) d2c_frame_buf_shrink_bytes_total: u64,
     pub(super) socks_kdf_strict_reject_total: u64,
     pub(super) socks_kdf_compat_fallback_total: u64,
     pub(super) endpoint_quarantine_total: u64,
@@ -268,8 +236,6 @@ pub(super) struct MeWritersSummary {
     pub(super) required_writers: usize,
     pub(super) alive_writers: usize,
     pub(super) coverage_pct: f64,
-    pub(super) fresh_alive_writers: usize,
-    pub(super) fresh_coverage_pct: f64,
 }
 
 #[derive(Serialize, Clone)]
@@ -284,12 +250,6 @@ pub(super) struct MeWriterStatus {
     pub(super) bound_clients: usize,
     pub(super) idle_for_secs: Option<u64>,
     pub(super) rtt_ema_ms: Option<f64>,
-    pub(super) matches_active_generation: bool,
-    pub(super) in_desired_map: bool,
-    pub(super) allow_drain_fallback: bool,
-    pub(super) drain_started_at_epoch_secs: Option<u64>,
-    pub(super) drain_deadline_epoch_secs: Option<u64>,
-    pub(super) drain_over_ttl: bool,
 }
 
 #[derive(Serialize, Clone)]
@@ -316,8 +276,6 @@ pub(super) struct DcStatus {
     pub(super) floor_capped: bool,
     pub(super) alive_writers: usize,
     pub(super) coverage_pct: f64,
-    pub(super) fresh_alive_writers: usize,
-    pub(super) fresh_coverage_pct: f64,
     pub(super) rtt_ms: Option<f64>,
     pub(super) load: usize,
 }
@@ -439,7 +397,6 @@ pub(super) struct UserLinks {
 #[derive(Serialize)]
 pub(super) struct UserInfo {
     pub(super) username: String,
-    pub(super) in_runtime: bool,
     pub(super) user_ad_tag: Option<String>,
     pub(super) max_tcp_conns: Option<usize>,
     pub(super) expiration_rfc3339: Option<String>,
@@ -454,24 +411,12 @@ pub(super) struct UserInfo {
     pub(super) links: UserLinks,
 }
 
-#[derive(Serialize)]
-pub(super) struct UserActiveIps {
-    pub(super) username: String,
-    pub(super) active_ips: Vec<IpAddr>,
-}
-
 #[derive(Serialize)]
 pub(super) struct CreateUserResponse {
     pub(super) user: UserInfo,
     pub(super) secret: String,
 }
 
-#[derive(Serialize)]
-pub(super) struct DeleteUserResponse {
-    pub(super) username: String,
-    pub(super) in_runtime: bool,
-}
-
 #[derive(Deserialize)]
 pub(super) struct CreateUserRequest {
     pub(super) username: String,
@@ -526,9 +471,7 @@ pub(super) fn is_valid_username(user: &str) -> bool {
 }
 
 pub(super) fn random_user_secret() -> String {
-    static API_SECRET_RNG: OnceLock<SecureRandom> = OnceLock::new();
-    let rng = API_SECRET_RNG.get_or_init(SecureRandom::new);
     let mut bytes = [0u8; 16];
-    rng.fill(&mut bytes);
+    rand::rng().fill(&mut bytes);
     hex::encode(bytes)
 }
@@ -167,7 +167,11 @@ async fn current_me_pool_stage_progress(shared: &ApiShared) -> Option<f64> {
     let pool = shared.me_pool.read().await.clone()?;
     let status = pool.api_status_snapshot().await;
     let configured_dc_groups = status.configured_dc_groups;
-    let covered_dc_groups = status.dcs.iter().filter(|dc| dc.alive_writers > 0).count();
+    let covered_dc_groups = status
+        .dcs
+        .iter()
+        .filter(|dc| dc.alive_writers > 0)
+        .count();
 
     let dc_coverage = ratio_01(covered_dc_groups, configured_dc_groups);
     let writer_coverage = ratio_01(status.alive_writers, status.required_writers);
@@ -107,25 +107,6 @@ pub(super) struct RuntimeMeQualityRouteDropData {
     pub(super) queue_full_high_total: u64,
 }
 
-#[derive(Serialize)]
-pub(super) struct RuntimeMeQualityFamilyStateData {
-    pub(super) family: &'static str,
-    pub(super) state: &'static str,
-    pub(super) state_since_epoch_secs: u64,
-    #[serde(skip_serializing_if = "Option::is_none")]
-    pub(super) suppressed_until_epoch_secs: Option<u64>,
-    pub(super) fail_streak: u32,
-    pub(super) recover_success_streak: u32,
-}
-
-#[derive(Serialize)]
-pub(super) struct RuntimeMeQualityDrainGateData {
-    pub(super) route_quorum_ok: bool,
-    pub(super) redundancy_ok: bool,
-    pub(super) block_reason: &'static str,
-    pub(super) updated_at_epoch_secs: u64,
-}
-
 #[derive(Serialize)]
 pub(super) struct RuntimeMeQualityDcRttData {
     pub(super) dc: i16,
@@ -139,8 +120,6 @@ pub(super) struct RuntimeMeQualityDcRttData {
 pub(super) struct RuntimeMeQualityPayload {
     pub(super) counters: RuntimeMeQualityCountersData,
     pub(super) route_drops: RuntimeMeQualityRouteDropData,
-    pub(super) family_states: Vec<RuntimeMeQualityFamilyStateData>,
-    pub(super) drain_gate: RuntimeMeQualityDrainGateData,
     pub(super) dc_rtt: Vec<RuntimeMeQualityDcRttData>,
 }
 
@@ -179,7 +158,6 @@ pub(super) struct RuntimeUpstreamQualitySummaryData {
     pub(super) direct_total: usize,
     pub(super) socks4_total: usize,
     pub(super) socks5_total: usize,
-    pub(super) shadowsocks_total: usize,
 }
 
 #[derive(Serialize)]
@@ -382,19 +360,6 @@ pub(super) async fn build_runtime_me_quality_data(shared: &ApiShared) -> Runtime
     };
 
     let status = pool.api_status_snapshot().await;
-    let family_states = pool
-        .api_family_state_snapshot()
-        .into_iter()
-        .map(|entry| RuntimeMeQualityFamilyStateData {
-            family: entry.family,
-            state: entry.state,
-            state_since_epoch_secs: entry.state_since_epoch_secs,
-            suppressed_until_epoch_secs: entry.suppressed_until_epoch_secs,
-            fail_streak: entry.fail_streak,
-            recover_success_streak: entry.recover_success_streak,
-        })
-        .collect();
-    let drain_gate_snapshot = pool.api_drain_gate_snapshot();
     RuntimeMeQualityData {
         enabled: true,
         reason: None,
@@ -415,13 +380,6 @@ pub(super) async fn build_runtime_me_quality_data(shared: &ApiShared) -> Runtime
             queue_full_base_total: shared.stats.get_me_route_drop_queue_full_base(),
             queue_full_high_total: shared.stats.get_me_route_drop_queue_full_high(),
         },
-        family_states,
-        drain_gate: RuntimeMeQualityDrainGateData {
-            route_quorum_ok: drain_gate_snapshot.route_quorum_ok,
-            redundancy_ok: drain_gate_snapshot.redundancy_ok,
-            block_reason: drain_gate_snapshot.block_reason,
-            updated_at_epoch_secs: drain_gate_snapshot.updated_at_epoch_secs,
-        },
         dc_rtt: status
             .dcs
             .into_iter()
@@ -446,9 +404,7 @@ pub(super) async fn build_runtime_upstream_quality_data(
         connect_attempt_total: shared.stats.get_upstream_connect_attempt_total(),
         connect_success_total: shared.stats.get_upstream_connect_success_total(),
         connect_fail_total: shared.stats.get_upstream_connect_fail_total(),
-        connect_failfast_hard_error_total: shared
-            .stats
-            .get_upstream_connect_failfast_hard_error_total(),
+        connect_failfast_hard_error_total: shared.stats.get_upstream_connect_failfast_hard_error_total(),
     };
 
     let Some(snapshot) = shared.upstream_manager.try_api_snapshot() else {
@@ -488,7 +444,6 @@ pub(super) async fn build_runtime_upstream_quality_data(
             direct_total: snapshot.summary.direct_total,
             socks4_total: snapshot.summary.socks4_total,
             socks5_total: snapshot.summary.socks5_total,
-            shadowsocks_total: snapshot.summary.shadowsocks_total,
         }),
         upstreams: Some(
             snapshot
@@ -500,7 +455,6 @@ pub(super) async fn build_runtime_upstream_quality_data(
                     crate::transport::UpstreamRouteKind::Direct => "direct",
                     crate::transport::UpstreamRouteKind::Socks4 => "socks4",
                     crate::transport::UpstreamRouteKind::Socks5 => "socks5",
-                    crate::transport::UpstreamRouteKind::Shadowsocks => "shadowsocks",
                 },
                 address: upstream.address,
                 weight: upstream.weight,
@@ -520,9 +474,7 @@ pub(super) async fn build_runtime_upstream_quality_data(
                     crate::transport::upstream::IpPreference::PreferV6 => "prefer_v6",
                     crate::transport::upstream::IpPreference::PreferV4 => "prefer_v4",
                     crate::transport::upstream::IpPreference::BothWork => "both_work",
-                    crate::transport::upstream::IpPreference::Unavailable => {
-                        "unavailable"
-                    }
+                    crate::transport::upstream::IpPreference::Unavailable => "unavailable",
                 },
             })
             .collect(),
@@ -560,15 +512,11 @@ pub(super) async fn build_runtime_nat_stun_data(shared: &ApiShared) -> RuntimeNa
             live_total: snapshot.live_servers.len(),
         },
         reflection: RuntimeNatStunReflectionBlockData {
-            v4: snapshot
-                .reflection_v4
-                .map(|entry| RuntimeNatStunReflectionData {
+            v4: snapshot.reflection_v4.map(|entry| RuntimeNatStunReflectionData {
                 addr: entry.addr.to_string(),
                 age_secs: entry.age_secs,
             }),
-            v6: snapshot
-                .reflection_v6
-                .map(|entry| RuntimeNatStunReflectionData {
+            v6: snapshot.reflection_v6.map(|entry| RuntimeNatStunReflectionData {
                 addr: entry.addr.to_string(),
                 age_secs: entry.age_secs,
             }),
@@ -1,5 +1,5 @@
-use std::collections::HashMap;
 use std::net::IpAddr;
+use std::collections::HashMap;
 use std::sync::{Mutex, OnceLock};
 use std::time::{SystemTime, UNIX_EPOCH};
 
@@ -7,8 +7,8 @@ use serde::Serialize;
 
 use crate::config::{ProxyConfig, UpstreamType};
 use crate::network::probe::{detect_interface_ipv4, detect_interface_ipv6, is_bogon};
-use crate::transport::UpstreamRouteKind;
 use crate::transport::middle_proxy::{bnd_snapshot, timeskew_snapshot, upstream_bnd_snapshots};
+use crate::transport::UpstreamRouteKind;
 
 use super::ApiShared;
 
@@ -262,8 +262,8 @@ fn update_kdf_ewma(now_epoch_secs: u64, total_errors: u64) -> f64 {
     let delta_errors = total_errors.saturating_sub(guard.last_total_errors);
     let instant_rate_per_min = (delta_errors as f64) * 60.0 / (dt_secs as f64);
     let alpha = 1.0 - f64::exp(-(dt_secs as f64) / KDF_EWMA_TAU_SECS);
-    guard.ewma_errors_per_min =
-        guard.ewma_errors_per_min + alpha * (instant_rate_per_min - guard.ewma_errors_per_min);
+    guard.ewma_errors_per_min = guard.ewma_errors_per_min
+        + alpha * (instant_rate_per_min - guard.ewma_errors_per_min);
     guard.last_epoch_secs = now_epoch_secs;
     guard.last_total_errors = total_errors;
     guard.ewma_errors_per_min
@@ -284,7 +284,6 @@ fn map_route_kind(value: UpstreamRouteKind) -> &'static str {
         UpstreamRouteKind::Direct => "direct",
         UpstreamRouteKind::Socks4 => "socks4",
         UpstreamRouteKind::Socks5 => "socks5",
-        UpstreamRouteKind::Shadowsocks => "shadowsocks",
     }
 }
|
@ -2,8 +2,8 @@ use std::time::{Duration, Instant, SystemTime, UNIX_EPOCH};
|
||||||
|
|
||||||
use crate::config::ApiConfig;
|
use crate::config::ApiConfig;
|
||||||
use crate::stats::Stats;
|
use crate::stats::Stats;
|
||||||
use crate::transport::UpstreamRouteKind;
|
|
||||||
use crate::transport::upstream::IpPreference;
|
use crate::transport::upstream::IpPreference;
|
||||||
|
use crate::transport::UpstreamRouteKind;
|
||||||
|
|
||||||
use super::ApiShared;
|
use super::ApiShared;
|
||||||
use super::model::{
|
use super::model::{
|
||||||
|
|
@ -39,21 +39,10 @@ pub(super) fn build_zero_all_data(stats: &Stats, configured_users: usize) -> Zer
|
||||||
connections_total: stats.get_connects_all(),
|
connections_total: stats.get_connects_all(),
|
||||||
connections_bad_total: stats.get_connects_bad(),
|
connections_bad_total: stats.get_connects_bad(),
|
||||||
handshake_timeouts_total: stats.get_handshake_timeouts(),
|
handshake_timeouts_total: stats.get_handshake_timeouts(),
|
||||||
accept_permit_timeout_total: stats.get_accept_permit_timeout_total(),
|
|
||||||
configured_users,
|
configured_users,
|
||||||
telemetry_core_enabled: telemetry.core_enabled,
|
telemetry_core_enabled: telemetry.core_enabled,
|
||||||
telemetry_user_enabled: telemetry.user_enabled,
|
telemetry_user_enabled: telemetry.user_enabled,
|
||||||
telemetry_me_level: telemetry.me_level.to_string(),
|
telemetry_me_level: telemetry.me_level.to_string(),
|
||||||
conntrack_control_enabled: stats.get_conntrack_control_enabled(),
|
|
||||||
conntrack_control_available: stats.get_conntrack_control_available(),
|
|
||||||
conntrack_pressure_active: stats.get_conntrack_pressure_active(),
|
|
||||||
conntrack_event_queue_depth: stats.get_conntrack_event_queue_depth(),
|
|
||||||
conntrack_rule_apply_ok: stats.get_conntrack_rule_apply_ok(),
|
|
||||||
conntrack_delete_attempt_total: stats.get_conntrack_delete_attempt_total(),
|
|
||||||
conntrack_delete_success_total: stats.get_conntrack_delete_success_total(),
|
|
||||||
conntrack_delete_not_found_total: stats.get_conntrack_delete_not_found_total(),
|
|
||||||
conntrack_delete_error_total: stats.get_conntrack_delete_error_total(),
|
|
||||||
conntrack_close_event_drop_total: stats.get_conntrack_close_event_drop_total(),
|
|
||||||
},
|
},
|
||||||
upstream: build_zero_upstream_data(stats),
|
upstream: build_zero_upstream_data(stats),
|
||||||
middle_proxy: ZeroMiddleProxyData {
|
middle_proxy: ZeroMiddleProxyData {
|
||||||
|
|
@ -79,25 +68,6 @@ pub(super) fn build_zero_all_data(stats: &Stats, configured_users: usize) -> Zer
|
||||||
route_drop_queue_full_total: stats.get_me_route_drop_queue_full(),
|
route_drop_queue_full_total: stats.get_me_route_drop_queue_full(),
|
||||||
route_drop_queue_full_base_total: stats.get_me_route_drop_queue_full_base(),
|
route_drop_queue_full_base_total: stats.get_me_route_drop_queue_full_base(),
|
||||||
route_drop_queue_full_high_total: stats.get_me_route_drop_queue_full_high(),
|
route_drop_queue_full_high_total: stats.get_me_route_drop_queue_full_high(),
|
||||||
d2c_batches_total: stats.get_me_d2c_batches_total(),
|
|
||||||
d2c_batch_frames_total: stats.get_me_d2c_batch_frames_total(),
|
|
||||||
d2c_batch_bytes_total: stats.get_me_d2c_batch_bytes_total(),
|
|
||||||
d2c_flush_reason_queue_drain_total: stats.get_me_d2c_flush_reason_queue_drain_total(),
|
|
||||||
d2c_flush_reason_batch_frames_total: stats.get_me_d2c_flush_reason_batch_frames_total(),
|
|
||||||
d2c_flush_reason_batch_bytes_total: stats.get_me_d2c_flush_reason_batch_bytes_total(),
|
|
||||||
d2c_flush_reason_max_delay_total: stats.get_me_d2c_flush_reason_max_delay_total(),
|
|
||||||
d2c_flush_reason_ack_immediate_total: stats
|
|
||||||
.get_me_d2c_flush_reason_ack_immediate_total(),
|
|
||||||
d2c_flush_reason_close_total: stats.get_me_d2c_flush_reason_close_total(),
|
|
||||||
d2c_data_frames_total: stats.get_me_d2c_data_frames_total(),
|
|
||||||
d2c_ack_frames_total: stats.get_me_d2c_ack_frames_total(),
|
|
||||||
d2c_payload_bytes_total: stats.get_me_d2c_payload_bytes_total(),
|
|
||||||
d2c_write_mode_coalesced_total: stats.get_me_d2c_write_mode_coalesced_total(),
|
|
||||||
d2c_write_mode_split_total: stats.get_me_d2c_write_mode_split_total(),
|
|
||||||
d2c_quota_reject_pre_write_total: stats.get_me_d2c_quota_reject_pre_write_total(),
|
|
||||||
d2c_quota_reject_post_write_total: stats.get_me_d2c_quota_reject_post_write_total(),
|
|
||||||
d2c_frame_buf_shrink_total: stats.get_me_d2c_frame_buf_shrink_total(),
|
|
||||||
d2c_frame_buf_shrink_bytes_total: stats.get_me_d2c_frame_buf_shrink_bytes_total(),
|
|
||||||
socks_kdf_strict_reject_total: stats.get_me_socks_kdf_strict_reject(),
|
socks_kdf_strict_reject_total: stats.get_me_socks_kdf_strict_reject(),
|
||||||
socks_kdf_compat_fallback_total: stats.get_me_socks_kdf_compat_fallback(),
|
socks_kdf_compat_fallback_total: stats.get_me_socks_kdf_compat_fallback(),
|
||||||
endpoint_quarantine_total: stats.get_me_endpoint_quarantine_total(),
|
endpoint_quarantine_total: stats.get_me_endpoint_quarantine_total(),
|
||||||
|
|
@ -166,8 +136,7 @@ fn build_zero_upstream_data(stats: &Stats) -> ZeroUpstreamData {
|
||||||
.get_upstream_connect_duration_success_bucket_501_1000ms(),
|
.get_upstream_connect_duration_success_bucket_501_1000ms(),
|
||||||
connect_duration_success_bucket_gt_1000ms: stats
|
connect_duration_success_bucket_gt_1000ms: stats
|
||||||
.get_upstream_connect_duration_success_bucket_gt_1000ms(),
|
.get_upstream_connect_duration_success_bucket_gt_1000ms(),
|
||||||
connect_duration_fail_bucket_le_100ms: stats
|
connect_duration_fail_bucket_le_100ms: stats.get_upstream_connect_duration_fail_bucket_le_100ms(),
|
||||||
.get_upstream_connect_duration_fail_bucket_le_100ms(),
|
|
||||||
connect_duration_fail_bucket_101_500ms: stats
|
connect_duration_fail_bucket_101_500ms: stats
|
||||||
.get_upstream_connect_duration_fail_bucket_101_500ms(),
|
.get_upstream_connect_duration_fail_bucket_101_500ms(),
|
||||||
connect_duration_fail_bucket_501_1000ms: stats
|
connect_duration_fail_bucket_501_1000ms: stats
|
||||||
|
|
@ -209,7 +178,6 @@ pub(super) fn build_upstreams_data(shared: &ApiShared, api_cfg: &ApiConfig) -> U
|
||||||
direct_total: snapshot.summary.direct_total,
|
direct_total: snapshot.summary.direct_total,
|
||||||
socks4_total: snapshot.summary.socks4_total,
|
socks4_total: snapshot.summary.socks4_total,
|
||||||
socks5_total: snapshot.summary.socks5_total,
|
socks5_total: snapshot.summary.socks5_total,
|
||||||
shadowsocks_total: snapshot.summary.shadowsocks_total,
|
|
||||||
};
|
};
|
||||||
let upstreams = snapshot
|
let upstreams = snapshot
|
||||||
.upstreams
|
.upstreams
|
||||||
|
|
@ -346,8 +314,6 @@ async fn get_minimal_payload_cached(
|
||||||
required_writers: status.required_writers,
|
required_writers: status.required_writers,
|
||||||
alive_writers: status.alive_writers,
|
alive_writers: status.alive_writers,
|
||||||
coverage_pct: status.coverage_pct,
|
coverage_pct: status.coverage_pct,
|
||||||
fresh_alive_writers: status.fresh_alive_writers,
|
|
||||||
fresh_coverage_pct: status.fresh_coverage_pct,
|
|
||||||
},
|
},
|
||||||
writers: status
|
writers: status
|
||||||
.writers
|
.writers
|
||||||
|
|
@ -363,12 +329,6 @@ async fn get_minimal_payload_cached(
|
||||||
bound_clients: entry.bound_clients,
|
bound_clients: entry.bound_clients,
|
||||||
idle_for_secs: entry.idle_for_secs,
|
idle_for_secs: entry.idle_for_secs,
|
||||||
rtt_ema_ms: entry.rtt_ema_ms,
|
rtt_ema_ms: entry.rtt_ema_ms,
|
||||||
matches_active_generation: entry.matches_active_generation,
|
|
||||||
in_desired_map: entry.in_desired_map,
|
|
||||||
allow_drain_fallback: entry.allow_drain_fallback,
|
|
||||||
drain_started_at_epoch_secs: entry.drain_started_at_epoch_secs,
|
|
||||||
drain_deadline_epoch_secs: entry.drain_deadline_epoch_secs,
|
|
||||||
drain_over_ttl: entry.drain_over_ttl,
|
|
||||||
})
|
})
|
||||||
.collect(),
|
.collect(),
|
||||||
};
|
};
|
||||||
|
|
@ -403,8 +363,6 @@ async fn get_minimal_payload_cached(
|
||||||
floor_capped: entry.floor_capped,
|
floor_capped: entry.floor_capped,
|
||||||
alive_writers: entry.alive_writers,
|
alive_writers: entry.alive_writers,
|
||||||
coverage_pct: entry.coverage_pct,
|
coverage_pct: entry.coverage_pct,
|
||||||
fresh_alive_writers: entry.fresh_alive_writers,
|
|
||||||
fresh_coverage_pct: entry.fresh_coverage_pct,
|
|
||||||
rtt_ms: entry.rtt_ms,
|
rtt_ms: entry.rtt_ms,
|
||||||
load: entry.load,
|
load: entry.load,
|
||||||
})
|
})
|
||||||
|
|
@@ -423,7 +381,8 @@ async fn get_minimal_payload_cached(
             adaptive_floor_min_writers_multi_endpoint: runtime
                 .adaptive_floor_min_writers_multi_endpoint,
             adaptive_floor_recover_grace_secs: runtime.adaptive_floor_recover_grace_secs,
-            adaptive_floor_writers_per_core_total: runtime.adaptive_floor_writers_per_core_total,
+            adaptive_floor_writers_per_core_total: runtime
+                .adaptive_floor_writers_per_core_total,
             adaptive_floor_cpu_cores_override: runtime.adaptive_floor_cpu_cores_override,
             adaptive_floor_max_extra_writers_single_per_core: runtime
                 .adaptive_floor_max_extra_writers_single_per_core,
@@ -431,9 +390,12 @@ async fn get_minimal_payload_cached(
                 .adaptive_floor_max_extra_writers_multi_per_core,
             adaptive_floor_max_active_writers_per_core: runtime
                 .adaptive_floor_max_active_writers_per_core,
-            adaptive_floor_max_warm_writers_per_core: runtime.adaptive_floor_max_warm_writers_per_core,
-            adaptive_floor_max_active_writers_global: runtime.adaptive_floor_max_active_writers_global,
-            adaptive_floor_max_warm_writers_global: runtime.adaptive_floor_max_warm_writers_global,
+            adaptive_floor_max_warm_writers_per_core: runtime
+                .adaptive_floor_max_warm_writers_per_core,
+            adaptive_floor_max_active_writers_global: runtime
+                .adaptive_floor_max_active_writers_global,
+            adaptive_floor_max_warm_writers_global: runtime
+                .adaptive_floor_max_warm_writers_global,
             adaptive_floor_cpu_cores_detected: runtime.adaptive_floor_cpu_cores_detected,
             adaptive_floor_cpu_cores_effective: runtime.adaptive_floor_cpu_cores_effective,
             adaptive_floor_global_cap_raw: runtime.adaptive_floor_global_cap_raw,
@@ -524,8 +486,6 @@ fn disabled_me_writers(now_epoch_secs: u64, reason: &'static str) -> MeWritersDa
             required_writers: 0,
             alive_writers: 0,
             coverage_pct: 0.0,
-            fresh_alive_writers: 0,
-            fresh_coverage_pct: 0.0,
         },
         writers: Vec::new(),
     }
@@ -545,7 +505,6 @@ fn map_route_kind(value: UpstreamRouteKind) -> &'static str {
         UpstreamRouteKind::Direct => "direct",
         UpstreamRouteKind::Socks4 => "socks4",
         UpstreamRouteKind::Socks5 => "socks5",
-        UpstreamRouteKind::Shadowsocks => "shadowsocks",
     }
 }

@@ -35,14 +35,11 @@ pub(super) struct RuntimeGatesData {
     pub(super) conditional_cast_enabled: bool,
     pub(super) me_runtime_ready: bool,
     pub(super) me2dc_fallback_enabled: bool,
-    pub(super) me2dc_fast_enabled: bool,
     pub(super) use_middle_proxy: bool,
     pub(super) route_mode: &'static str,
     pub(super) reroute_active: bool,
     #[serde(skip_serializing_if = "Option::is_none")]
     pub(super) reroute_to_direct_at_epoch_secs: Option<u64>,
-    #[serde(skip_serializing_if = "Option::is_none")]
-    pub(super) reroute_reason: Option<&'static str>,
     pub(super) startup_status: &'static str,
     pub(super) startup_stage: String,
     pub(super) startup_progress_pct: f64,
@@ -50,7 +47,6 @@ pub(super) struct RuntimeGatesData {
 
 #[derive(Serialize)]
 pub(super) struct EffectiveTimeoutLimits {
-    pub(super) client_first_byte_idle_secs: u64,
     pub(super) client_handshake_secs: u64,
     pub(super) tg_connect_secs: u64,
     pub(super) client_keepalive_secs: u64,
@@ -90,21 +86,14 @@ pub(super) struct EffectiveMiddleProxyLimits {
     pub(super) writer_pick_mode: &'static str,
     pub(super) writer_pick_sample_size: u8,
     pub(super) me2dc_fallback: bool,
-    pub(super) me2dc_fast: bool,
 }
 
 #[derive(Serialize)]
 pub(super) struct EffectiveUserIpPolicyLimits {
-    pub(super) global_each: usize,
     pub(super) mode: &'static str,
     pub(super) window_secs: u64,
 }
 
-#[derive(Serialize)]
-pub(super) struct EffectiveUserTcpPolicyLimits {
-    pub(super) global_each: usize,
-}
-
 #[derive(Serialize)]
 pub(super) struct EffectiveLimitsData {
     pub(super) update_every_secs: u64,
@@ -114,7 +103,6 @@ pub(super) struct EffectiveLimitsData {
     pub(super) upstream: EffectiveUpstreamLimits,
     pub(super) middle_proxy: EffectiveMiddleProxyLimits,
     pub(super) user_ip_policy: EffectiveUserIpPolicyLimits,
-    pub(super) user_tcp_policy: EffectiveUserTcpPolicyLimits,
 }
 
 #[derive(Serialize)]
@@ -139,8 +127,7 @@ pub(super) fn build_system_info_data(
         .runtime_state
         .last_config_reload_epoch_secs
         .load(Ordering::Relaxed);
-    let last_config_reload_epoch_secs =
-        (last_reload_epoch_secs > 0).then_some(last_reload_epoch_secs);
+    let last_config_reload_epoch_secs = (last_reload_epoch_secs > 0).then_some(last_reload_epoch_secs);
 
     let git_commit = option_env!("TELEMT_GIT_COMMIT")
         .or(option_env!("VERGEN_GIT_SHA"))
@@ -165,10 +152,7 @@ pub(super) fn build_system_info_data(
         uptime_seconds: shared.stats.uptime_secs(),
         config_path: shared.config_path.display().to_string(),
         config_hash: revision.to_string(),
-        config_reload_count: shared
-            .runtime_state
-            .config_reload_count
-            .load(Ordering::Relaxed),
+        config_reload_count: shared.runtime_state.config_reload_count.load(Ordering::Relaxed),
         last_config_reload_epoch_secs,
     }
 }
@@ -180,8 +164,6 @@ pub(super) async fn build_runtime_gates_data(
     let startup_summary = build_runtime_startup_summary(shared).await;
     let route_state = shared.route_runtime.snapshot();
     let route_mode = route_state.mode.as_str();
-    let fast_fallback_enabled =
-        cfg.general.use_middle_proxy && cfg.general.me2dc_fallback && cfg.general.me2dc_fast;
     let reroute_active = cfg.general.use_middle_proxy
         && cfg.general.me2dc_fallback
         && matches!(route_state.mode, RelayRouteMode::Direct);
@@ -190,15 +172,6 @@ pub(super) async fn build_runtime_gates_data(
     } else {
         None
     };
-    let reroute_reason = if reroute_active {
-        if fast_fallback_enabled {
-            Some("fast_not_ready_fallback")
-        } else {
-            Some("strict_grace_fallback")
-        }
-    } else {
-        None
-    };
     let me_runtime_ready = if !cfg.general.use_middle_proxy {
         true
     } else {
@@ -216,12 +189,10 @@ pub(super) async fn build_runtime_gates_data(
         conditional_cast_enabled: cfg.general.use_middle_proxy,
         me_runtime_ready,
         me2dc_fallback_enabled: cfg.general.me2dc_fallback,
-        me2dc_fast_enabled: fast_fallback_enabled,
         use_middle_proxy: cfg.general.use_middle_proxy,
         route_mode,
         reroute_active,
         reroute_to_direct_at_epoch_secs,
-        reroute_reason,
         startup_status: startup_summary.status,
         startup_stage: startup_summary.stage,
         startup_progress_pct: startup_summary.progress_pct,
@@ -234,9 +205,8 @@ pub(super) fn build_limits_effective_data(cfg: &ProxyConfig) -> EffectiveLimitsD
         me_reinit_every_secs: cfg.general.effective_me_reinit_every_secs(),
         me_pool_force_close_secs: cfg.general.effective_me_pool_force_close_secs(),
         timeouts: EffectiveTimeoutLimits {
-            client_first_byte_idle_secs: cfg.timeouts.client_first_byte_idle_secs,
             client_handshake_secs: cfg.timeouts.client_handshake,
-            tg_connect_secs: cfg.general.tg_connect,
+            tg_connect_secs: cfg.timeouts.tg_connect,
             client_keepalive_secs: cfg.timeouts.client_keepalive,
             client_ack_secs: cfg.timeouts.client_ack,
             me_one_retry: cfg.timeouts.me_one_retry,
@@ -262,7 +232,9 @@ pub(super) fn build_limits_effective_data(cfg: &ProxyConfig) -> EffectiveLimitsD
             adaptive_floor_writers_per_core_total: cfg
                 .general
                 .me_adaptive_floor_writers_per_core_total,
-            adaptive_floor_cpu_cores_override: cfg.general.me_adaptive_floor_cpu_cores_override,
+            adaptive_floor_cpu_cores_override: cfg
+                .general
+                .me_adaptive_floor_cpu_cores_override,
             adaptive_floor_max_extra_writers_single_per_core: cfg
                 .general
                 .me_adaptive_floor_max_extra_writers_single_per_core,
@@ -288,16 +260,11 @@ pub(super) fn build_limits_effective_data(cfg: &ProxyConfig) -> EffectiveLimitsD
             writer_pick_mode: me_writer_pick_mode_label(cfg.general.me_writer_pick_mode),
             writer_pick_sample_size: cfg.general.me_writer_pick_sample_size,
             me2dc_fallback: cfg.general.me2dc_fallback,
-            me2dc_fast: cfg.general.me2dc_fast,
         },
         user_ip_policy: EffectiveUserIpPolicyLimits {
-            global_each: cfg.access.user_max_unique_ips_global_each,
             mode: user_max_unique_ips_mode_label(cfg.access.user_max_unique_ips_mode),
             window_secs: cfg.access.user_max_unique_ips_window_secs,
         },
-        user_tcp_policy: EffectiveUserTcpPolicyLimits {
-            global_each: cfg.access.user_max_tcp_conns_global_each,
-        },
     }
 }

200  src/api/users.rs
@@ -46,9 +46,7 @@ pub(super) async fn create_user(
         None => random_user_secret(),
     };
 
-    if let Some(ad_tag) = body.user_ad_tag.as_ref()
-        && !is_valid_ad_tag(ad_tag)
-    {
+    if let Some(ad_tag) = body.user_ad_tag.as_ref() && !is_valid_ad_tag(ad_tag) {
         return Err(ApiFailure::bad_request(
             "user_ad_tag must be exactly 32 hex characters",
         ));
@@ -67,18 +65,12 @@ pub(super) async fn create_user(
         ));
     }
 
-    cfg.access
-        .users
-        .insert(body.username.clone(), secret.clone());
+    cfg.access.users.insert(body.username.clone(), secret.clone());
     if let Some(ad_tag) = body.user_ad_tag {
-        cfg.access
-            .user_ad_tags
-            .insert(body.username.clone(), ad_tag);
+        cfg.access.user_ad_tags.insert(body.username.clone(), ad_tag);
     }
     if let Some(limit) = body.max_tcp_conns {
-        cfg.access
-            .user_max_tcp_conns
-            .insert(body.username.clone(), limit);
+        cfg.access.user_max_tcp_conns.insert(body.username.clone(), limit);
     }
     if let Some(expiration) = expiration {
         cfg.access
@@ -86,9 +78,7 @@ pub(super) async fn create_user(
             .insert(body.username.clone(), expiration);
     }
     if let Some(quota) = body.data_quota_bytes {
-        cfg.access
-            .user_data_quota
-            .insert(body.username.clone(), quota);
+        cfg.access.user_data_quota.insert(body.username.clone(), quota);
     }
 
     let updated_limit = body.max_unique_ips;
@@ -118,15 +108,11 @@ pub(super) async fn create_user(
         touched_sections.push(AccessSection::UserMaxUniqueIps);
     }
 
-    let revision =
-        save_access_sections_to_disk(&shared.config_path, &cfg, &touched_sections).await?;
+    let revision = save_access_sections_to_disk(&shared.config_path, &cfg, &touched_sections).await?;
     drop(_guard);
 
     if let Some(limit) = updated_limit {
-        shared
-            .ip_tracker
-            .set_user_limit(&body.username, limit)
-            .await;
+        shared.ip_tracker.set_user_limit(&body.username, limit).await;
     }
     let (detected_ip_v4, detected_ip_v6) = shared.detected_link_ips();
 
@@ -136,7 +122,6 @@ pub(super) async fn create_user(
             &shared.ip_tracker,
             detected_ip_v4,
             detected_ip_v6,
-            None,
         )
         .await;
     let user = users
@@ -144,16 +129,8 @@ pub(super) async fn create_user(
         .find(|entry| entry.username == body.username)
         .unwrap_or(UserInfo {
             username: body.username.clone(),
-            in_runtime: false,
             user_ad_tag: None,
-            max_tcp_conns: cfg
-                .access
-                .user_max_tcp_conns
-                .get(&body.username)
-                .copied()
-                .filter(|limit| *limit > 0)
-                .or((cfg.access.user_max_tcp_conns_global_each > 0)
-                    .then_some(cfg.access.user_max_tcp_conns_global_each)),
+            max_tcp_conns: None,
             expiration_rfc3339: None,
             data_quota_bytes: None,
             max_unique_ips: updated_limit,
@@ -163,7 +140,12 @@ pub(super) async fn create_user(
             recent_unique_ips: 0,
             recent_unique_ips_list: Vec::new(),
             total_octets: 0,
-            links: build_user_links(&cfg, &secret, detected_ip_v4, detected_ip_v6),
+            links: build_user_links(
+                &cfg,
+                &secret,
+                detected_ip_v4,
+                detected_ip_v6,
+            ),
         });
 
     Ok((CreateUserResponse { user, secret }, revision))
@@ -175,16 +157,12 @@ pub(super) async fn patch_user(
     expected_revision: Option<String>,
     shared: &ApiShared,
 ) -> Result<(UserInfo, String), ApiFailure> {
-    if let Some(secret) = body.secret.as_ref()
-        && !is_valid_user_secret(secret)
-    {
+    if let Some(secret) = body.secret.as_ref() && !is_valid_user_secret(secret) {
         return Err(ApiFailure::bad_request(
             "secret must be exactly 32 hex characters",
         ));
     }
-    if let Some(ad_tag) = body.user_ad_tag.as_ref()
-        && !is_valid_ad_tag(ad_tag)
-    {
+    if let Some(ad_tag) = body.user_ad_tag.as_ref() && !is_valid_ad_tag(ad_tag) {
         return Err(ApiFailure::bad_request(
             "user_ad_tag must be exactly 32 hex characters",
         ));
@@ -209,14 +187,10 @@ pub(super) async fn patch_user(
         cfg.access.user_ad_tags.insert(user.to_string(), ad_tag);
     }
     if let Some(limit) = body.max_tcp_conns {
-        cfg.access
-            .user_max_tcp_conns
-            .insert(user.to_string(), limit);
+        cfg.access.user_max_tcp_conns.insert(user.to_string(), limit);
     }
     if let Some(expiration) = expiration {
-        cfg.access
-            .user_expirations
-            .insert(user.to_string(), expiration);
+        cfg.access.user_expirations.insert(user.to_string(), expiration);
     }
     if let Some(quota) = body.data_quota_bytes {
         cfg.access.user_data_quota.insert(user.to_string(), quota);
@@ -224,9 +198,7 @@ pub(super) async fn patch_user(
 
     let mut updated_limit = None;
     if let Some(limit) = body.max_unique_ips {
-        cfg.access
-            .user_max_unique_ips
-            .insert(user.to_string(), limit);
+        cfg.access.user_max_unique_ips.insert(user.to_string(), limit);
         updated_limit = Some(limit);
     }
 
@@ -245,7 +217,6 @@ pub(super) async fn patch_user(
         &shared.ip_tracker,
         detected_ip_v4,
         detected_ip_v6,
-        None,
     )
     .await;
     let user_info = users
@@ -292,8 +263,7 @@ pub(super) async fn rotate_secret(
         AccessSection::UserDataQuota,
         AccessSection::UserMaxUniqueIps,
     ];
-    let revision =
-        save_access_sections_to_disk(&shared.config_path, &cfg, &touched_sections).await?;
+    let revision = save_access_sections_to_disk(&shared.config_path, &cfg, &touched_sections).await?;
     drop(_guard);
 
     let (detected_ip_v4, detected_ip_v6) = shared.detected_link_ips();
@@ -303,7 +273,6 @@ pub(super) async fn rotate_secret(
         &shared.ip_tracker,
         detected_ip_v4,
         detected_ip_v6,
-        None,
     )
     .await;
     let user_info = users
@@ -361,8 +330,7 @@ pub(super) async fn delete_user(
         AccessSection::UserDataQuota,
         AccessSection::UserMaxUniqueIps,
     ];
-    let revision =
-        save_access_sections_to_disk(&shared.config_path, &cfg, &touched_sections).await?;
+    let revision = save_access_sections_to_disk(&shared.config_path, &cfg, &touched_sections).await?;
     drop(_guard);
     shared.ip_tracker.remove_user_limit(user).await;
     shared.ip_tracker.clear_user_ips(user).await;
@@ -376,7 +344,6 @@ pub(super) async fn users_from_config(
     ip_tracker: &UserIpTracker,
     startup_detected_ip_v4: Option<IpAddr>,
     startup_detected_ip_v6: Option<IpAddr>,
-    runtime_cfg: Option<&ProxyConfig>,
 ) -> Vec<UserInfo> {
     let mut names = cfg.access.users.keys().cloned().collect::<Vec<_>>();
     names.sort();
@@ -398,7 +365,12 @@ pub(super) async fn users_from_config(
             .users
             .get(&username)
             .map(|secret| {
-                build_user_links(cfg, secret, startup_detected_ip_v4, startup_detected_ip_v6)
+                build_user_links(
+                    cfg,
+                    secret,
+                    startup_detected_ip_v4,
+                    startup_detected_ip_v6,
+                )
             })
             .unwrap_or(UserLinks {
                 classic: Vec::new(),
@@ -406,32 +378,15 @@ pub(super) async fn users_from_config(
                 tls: Vec::new(),
             });
         users.push(UserInfo {
-            in_runtime: runtime_cfg
-                .map(|runtime| runtime.access.users.contains_key(&username))
-                .unwrap_or(false),
             user_ad_tag: cfg.access.user_ad_tags.get(&username).cloned(),
-            max_tcp_conns: cfg
-                .access
-                .user_max_tcp_conns
-                .get(&username)
-                .copied()
-                .filter(|limit| *limit > 0)
-                .or((cfg.access.user_max_tcp_conns_global_each > 0)
-                    .then_some(cfg.access.user_max_tcp_conns_global_each)),
+            max_tcp_conns: cfg.access.user_max_tcp_conns.get(&username).copied(),
             expiration_rfc3339: cfg
                 .access
                 .user_expirations
                 .get(&username)
                 .map(chrono::DateTime::<chrono::Utc>::to_rfc3339),
             data_quota_bytes: cfg.access.user_data_quota.get(&username).copied(),
-            max_unique_ips: cfg
-                .access
-                .user_max_unique_ips
-                .get(&username)
-                .copied()
-                .filter(|limit| *limit > 0)
-                .or((cfg.access.user_max_unique_ips_global_each > 0)
-                    .then_some(cfg.access.user_max_unique_ips_global_each)),
+            max_unique_ips: cfg.access.user_max_unique_ips.get(&username).copied(),
             current_connections: stats.get_user_curr_connects(&username),
             active_unique_ips: active_ip_list.len(),
             active_unique_ips_list: active_ip_list,
@@ -517,12 +472,12 @@ fn resolve_link_hosts(
             push_unique_host(&mut hosts, host);
             continue;
         }
-        if let Some(ip) = listener.announce_ip
-            && !ip.is_unspecified()
-        {
+        if let Some(ip) = listener.announce_ip {
+            if !ip.is_unspecified() {
             push_unique_host(&mut hosts, &ip.to_string());
             continue;
         }
+        }
         if listener.ip.is_unspecified() {
             let detected_ip = if listener.ip.is_ipv4() {
                 startup_detected_ip_v4
@@ -594,94 +549,3 @@ fn resolve_tls_domains(cfg: &ProxyConfig) -> Vec<&str> {
     }
     domains
 }
-
-#[cfg(test)]
-mod tests {
-    use super::*;
-    use crate::ip_tracker::UserIpTracker;
-    use crate::stats::Stats;
-
-    #[tokio::test]
-    async fn users_from_config_reports_effective_tcp_limit_with_global_fallback() {
-        let mut cfg = ProxyConfig::default();
-        cfg.access.users.insert(
-            "alice".to_string(),
-            "0123456789abcdef0123456789abcdef".to_string(),
-        );
-        cfg.access.user_max_tcp_conns_global_each = 7;
-
-        let stats = Stats::new();
-        let tracker = UserIpTracker::new();
-
-        let users = users_from_config(&cfg, &stats, &tracker, None, None, None).await;
-        let alice = users
-            .iter()
-            .find(|entry| entry.username == "alice")
-            .expect("alice must be present");
-        assert!(!alice.in_runtime);
-        assert_eq!(alice.max_tcp_conns, Some(7));
-
-        cfg.access.user_max_tcp_conns.insert("alice".to_string(), 5);
-        let users = users_from_config(&cfg, &stats, &tracker, None, None, None).await;
-        let alice = users
-            .iter()
-            .find(|entry| entry.username == "alice")
-            .expect("alice must be present");
-        assert!(!alice.in_runtime);
-        assert_eq!(alice.max_tcp_conns, Some(5));
-
-        cfg.access.user_max_tcp_conns.insert("alice".to_string(), 0);
-        let users = users_from_config(&cfg, &stats, &tracker, None, None, None).await;
-        let alice = users
-            .iter()
-            .find(|entry| entry.username == "alice")
-            .expect("alice must be present");
-        assert!(!alice.in_runtime);
-        assert_eq!(alice.max_tcp_conns, Some(7));
-
-        cfg.access.user_max_tcp_conns_global_each = 0;
-        let users = users_from_config(&cfg, &stats, &tracker, None, None, None).await;
-        let alice = users
-            .iter()
-            .find(|entry| entry.username == "alice")
-            .expect("alice must be present");
-        assert!(!alice.in_runtime);
-        assert_eq!(alice.max_tcp_conns, None);
-    }
-
-    #[tokio::test]
-    async fn users_from_config_marks_runtime_membership_when_snapshot_is_provided() {
-        let mut disk_cfg = ProxyConfig::default();
-        disk_cfg.access.users.insert(
-            "alice".to_string(),
-            "0123456789abcdef0123456789abcdef".to_string(),
-        );
-        disk_cfg.access.users.insert(
-            "bob".to_string(),
-            "fedcba9876543210fedcba9876543210".to_string(),
-        );
-
-        let mut runtime_cfg = ProxyConfig::default();
-        runtime_cfg.access.users.insert(
-            "alice".to_string(),
-            "0123456789abcdef0123456789abcdef".to_string(),
-        );
-
-        let stats = Stats::new();
-        let tracker = UserIpTracker::new();
-        let users =
-            users_from_config(&disk_cfg, &stats, &tracker, None, None, Some(&runtime_cfg)).await;
-
-        let alice = users
-            .iter()
-            .find(|entry| entry.username == "alice")
-            .expect("alice must be present");
-        let bob = users
-            .iter()
-            .find(|entry| entry.username == "bob")
-            .expect("bob must be present");
-
-        assert!(alice.in_runtime);
-        assert!(!bob.in_runtime);
-    }
-}

475  src/cli.rs
@ -1,270 +1,11 @@
-//! CLI commands: --init (fire-and-forget setup), daemon options, subcommands
-//!
-//! Subcommands:
-//! - `start [OPTIONS] [config.toml]` - Start the daemon
-//! - `stop [--pid-file PATH]` - Stop a running daemon
-//! - `reload [--pid-file PATH]` - Reload configuration (SIGHUP)
-//! - `status [--pid-file PATH]` - Check daemon status
-//! - `run [OPTIONS] [config.toml]` - Run in foreground (default behavior)
-
-use rand::RngExt;
+//! CLI commands: --init (fire-and-forget setup)
 use std::fs;
 use std::path::{Path, PathBuf};
 use std::process::Command;
+use rand::Rng;
-#[cfg(unix)]
-use crate::daemon::{self, DEFAULT_PID_FILE, DaemonOptions};
-
-/// CLI subcommand to execute.
-#[derive(Debug, Clone, PartialEq, Eq)]
-pub enum Subcommand {
-    /// Run the proxy (default, or explicit `run` subcommand).
-    Run,
-    /// Start as daemon (`start` subcommand).
-    Start,
-    /// Stop a running daemon (`stop` subcommand).
-    Stop,
-    /// Reload configuration (`reload` subcommand).
-    Reload,
-    /// Check daemon status (`status` subcommand).
-    Status,
-    /// Fire-and-forget setup (`--init`).
-    Init,
-}
-
-/// Parsed subcommand with its options.
-#[derive(Debug)]
-pub struct ParsedCommand {
-    pub subcommand: Subcommand,
-    pub pid_file: PathBuf,
-    pub config_path: String,
-    #[cfg(unix)]
-    pub daemon_opts: DaemonOptions,
-    pub init_opts: Option<InitOptions>,
-}
-
-impl Default for ParsedCommand {
-    fn default() -> Self {
-        Self {
-            subcommand: Subcommand::Run,
-            #[cfg(unix)]
-            pid_file: PathBuf::from(DEFAULT_PID_FILE),
-            #[cfg(not(unix))]
-            pid_file: PathBuf::from("/var/run/telemt.pid"),
-            config_path: "config.toml".to_string(),
-            #[cfg(unix)]
-            daemon_opts: DaemonOptions::default(),
-            init_opts: None,
-        }
-    }
-}
-
-/// Parse CLI arguments into a command structure.
-pub fn parse_command(args: &[String]) -> ParsedCommand {
-    let mut cmd = ParsedCommand::default();
-
-    // Check for --init first (legacy form)
-    if args.iter().any(|a| a == "--init") {
-        cmd.subcommand = Subcommand::Init;
-        cmd.init_opts = parse_init_args(args);
-        return cmd;
-    }
-
-    // Check for subcommand as first argument
-    if let Some(first) = args.first() {
-        match first.as_str() {
-            "start" => {
-                cmd.subcommand = Subcommand::Start;
-                #[cfg(unix)]
-                {
-                    cmd.daemon_opts = parse_daemon_args(args);
-                    // Force daemonize for start command
-                    cmd.daemon_opts.daemonize = true;
-                }
-            }
-            "stop" => {
-                cmd.subcommand = Subcommand::Stop;
-            }
-            "reload" => {
-                cmd.subcommand = Subcommand::Reload;
-            }
-            "status" => {
-                cmd.subcommand = Subcommand::Status;
-            }
-            "run" => {
-                cmd.subcommand = Subcommand::Run;
-                #[cfg(unix)]
-                {
-                    cmd.daemon_opts = parse_daemon_args(args);
-                }
-            }
-            _ => {
-                // No subcommand, default to Run
-                #[cfg(unix)]
-                {
-                    cmd.daemon_opts = parse_daemon_args(args);
-                }
-            }
-        }
-    }
-
-    // Parse remaining options
-    let mut i = 0;
-    while i < args.len() {
-        match args[i].as_str() {
-            // Skip subcommand names
-            "start" | "stop" | "reload" | "status" | "run" => {}
-            // PID file option (for stop/reload/status)
-            "--pid-file" => {
-                i += 1;
-                if i < args.len() {
-                    cmd.pid_file = PathBuf::from(&args[i]);
-                    #[cfg(unix)]
-                    {
-                        cmd.daemon_opts.pid_file = Some(cmd.pid_file.clone());
-                    }
-                }
-            }
-            s if s.starts_with("--pid-file=") => {
-                cmd.pid_file = PathBuf::from(s.trim_start_matches("--pid-file="));
-                #[cfg(unix)]
-                {
-                    cmd.daemon_opts.pid_file = Some(cmd.pid_file.clone());
-                }
-            }
-            // Config path (positional, non-flag argument)
-            s if !s.starts_with('-') => {
-                cmd.config_path = s.to_string();
-            }
-            _ => {}
-        }
-        i += 1;
-    }
-
-    cmd
-}
-
-/// Execute a subcommand that doesn't require starting the server.
-/// Returns `Some(exit_code)` if the command was handled, `None` if server should start.
-#[cfg(unix)]
-pub fn execute_subcommand(cmd: &ParsedCommand) -> Option<i32> {
-    match cmd.subcommand {
-        Subcommand::Stop => Some(cmd_stop(&cmd.pid_file)),
-        Subcommand::Reload => Some(cmd_reload(&cmd.pid_file)),
-        Subcommand::Status => Some(cmd_status(&cmd.pid_file)),
-        Subcommand::Init => {
-            if let Some(opts) = cmd.init_opts.clone() {
-                match run_init(opts) {
-                    Ok(()) => Some(0),
-                    Err(e) => {
-                        eprintln!("[telemt] Init failed: {}", e);
-                        Some(1)
-                    }
-                }
-            } else {
-                Some(1)
-            }
-        }
-        // Run and Start need the server
-        Subcommand::Run | Subcommand::Start => None,
-    }
-}
-
-#[cfg(not(unix))]
-pub fn execute_subcommand(cmd: &ParsedCommand) -> Option<i32> {
-    match cmd.subcommand {
-        Subcommand::Stop | Subcommand::Reload | Subcommand::Status => {
-            eprintln!("[telemt] Subcommand not supported on this platform");
-            Some(1)
-        }
-        Subcommand::Init => {
-            if let Some(opts) = cmd.init_opts.clone() {
-                match run_init(opts) {
-                    Ok(()) => Some(0),
-                    Err(e) => {
-                        eprintln!("[telemt] Init failed: {}", e);
-                        Some(1)
-                    }
-                }
-            } else {
-                Some(1)
-            }
-        }
-        Subcommand::Run | Subcommand::Start => None,
-    }
-}
-
-/// Stop command: send SIGTERM to the running daemon.
-#[cfg(unix)]
-fn cmd_stop(pid_file: &Path) -> i32 {
-    use nix::sys::signal::Signal;
-
-    println!("Stopping telemt daemon...");
-
-    match daemon::signal_pid_file(pid_file, Signal::SIGTERM) {
-        Ok(()) => {
-            println!("Stop signal sent successfully");
-
-            // Wait for process to exit (up to 10 seconds)
-            for _ in 0..20 {
-                std::thread::sleep(std::time::Duration::from_millis(500));
-                if let daemon::DaemonStatus::NotRunning = daemon::check_status(pid_file) {
-                    println!("Daemon stopped");
-                    return 0;
-                }
-            }
-            println!("Daemon may still be shutting down");
-            0
-        }
-        Err(e) => {
-            eprintln!("Failed to stop daemon: {}", e);
-            1
-        }
-    }
-}
-
-/// Reload command: send SIGHUP to trigger config reload.
-#[cfg(unix)]
-fn cmd_reload(pid_file: &Path) -> i32 {
-    use nix::sys::signal::Signal;
-
-    println!("Reloading telemt configuration...");
-
-    match daemon::signal_pid_file(pid_file, Signal::SIGHUP) {
-        Ok(()) => {
-            println!("Reload signal sent successfully");
-            0
-        }
-        Err(e) => {
-            eprintln!("Failed to reload daemon: {}", e);
-            1
-        }
-    }
-}
-
-/// Status command: check if daemon is running.
-#[cfg(unix)]
-fn cmd_status(pid_file: &Path) -> i32 {
-    match daemon::check_status(pid_file) {
-        daemon::DaemonStatus::Running(pid) => {
-            println!("telemt is running (pid {})", pid);
-            0
-        }
-        daemon::DaemonStatus::Stale(pid) => {
-            println!("telemt is not running (stale pid file, was pid {})", pid);
-            // Clean up stale PID file
-            let _ = std::fs::remove_file(pid_file);
-            1
-        }
-        daemon::DaemonStatus::NotRunning => {
-            println!("telemt is not running");
-            1
-        }
-    }
-}
-
 /// Options for the init command
-#[derive(Debug, Clone)]
 pub struct InitOptions {
     pub port: u16,
     pub domain: String,
@@ -274,64 +15,6 @@ pub struct InitOptions {
     pub no_start: bool,
 }
-
-/// Parse daemon-related options from CLI args.
-#[cfg(unix)]
-pub fn parse_daemon_args(args: &[String]) -> DaemonOptions {
-    let mut opts = DaemonOptions::default();
-    let mut i = 0;
-
-    while i < args.len() {
-        match args[i].as_str() {
-            "--daemon" | "-d" => {
-                opts.daemonize = true;
-            }
-            "--foreground" | "-f" => {
-                opts.foreground = true;
-            }
-            "--pid-file" => {
-                i += 1;
-                if i < args.len() {
-                    opts.pid_file = Some(PathBuf::from(&args[i]));
-                }
-            }
-            s if s.starts_with("--pid-file=") => {
-                opts.pid_file = Some(PathBuf::from(s.trim_start_matches("--pid-file=")));
-            }
-            "--run-as-user" => {
-                i += 1;
-                if i < args.len() {
-                    opts.user = Some(args[i].clone());
-                }
-            }
-            s if s.starts_with("--run-as-user=") => {
-                opts.user = Some(s.trim_start_matches("--run-as-user=").to_string());
-            }
-            "--run-as-group" => {
-                i += 1;
-                if i < args.len() {
-                    opts.group = Some(args[i].clone());
-                }
-            }
-            s if s.starts_with("--run-as-group=") => {
-                opts.group = Some(s.trim_start_matches("--run-as-group=").to_string());
-            }
-            "--working-dir" => {
-                i += 1;
-                if i < args.len() {
-                    opts.working_dir = Some(PathBuf::from(&args[i]));
-                }
-            }
-            s if s.starts_with("--working-dir=") => {
-                opts.working_dir = Some(PathBuf::from(s.trim_start_matches("--working-dir=")));
-            }
-            _ => {}
-        }
-        i += 1;
-    }
-
-    opts
-}
-
 impl Default for InitOptions {
     fn default() -> Self {
         Self {
@@ -401,16 +84,10 @@ pub fn parse_init_args(args: &[String]) -> Option<InitOptions> {

 /// Run the fire-and-forget setup.
 pub fn run_init(opts: InitOptions) -> Result<(), Box<dyn std::error::Error>> {
-    use crate::service::{self, InitSystem, ServiceOptions};
-
     eprintln!("[telemt] Fire-and-forget setup");
     eprintln!();

-    // 1. Detect init system
-    let init_system = service::detect_init_system();
-    eprintln!("[+] Detected init system: {}", init_system);
-
-    // 2. Generate or validate secret
+    // 1. Generate or validate secret
     let secret = match opts.secret {
         Some(s) => {
             if s.len() != 32 || !s.chars().all(|c| c.is_ascii_hexdigit()) {
@@ -427,74 +104,50 @@ pub fn run_init(opts: InitOptions) -> Result<(), Box<dyn std::error::Error>> {
     eprintln!("[+] Port: {}", opts.port);
     eprintln!("[+] Domain: {}", opts.domain);

-    // 3. Create config directory
+    // 2. Create config directory
     fs::create_dir_all(&opts.config_dir)?;
     let config_path = opts.config_dir.join("config.toml");

-    // 4. Write config
+    // 3. Write config
     let config_content = generate_config(&opts.username, &secret, opts.port, &opts.domain);
     fs::write(&config_path, &config_content)?;
     eprintln!("[+] Config written to {}", config_path.display());

-    // 5. Generate and write service file
-    let exe_path =
-        std::env::current_exe().unwrap_or_else(|_| PathBuf::from("/usr/local/bin/telemt"));
+    // 4. Write systemd unit
+    let exe_path = std::env::current_exe()
+        .unwrap_or_else(|_| PathBuf::from("/usr/local/bin/telemt"));

-    let service_opts = ServiceOptions {
-        exe_path: &exe_path,
-        config_path: &config_path,
-        user: None, // Let systemd/init handle user
-        group: None,
-        pid_file: "/var/run/telemt.pid",
-        working_dir: Some("/var/lib/telemt"),
-        description: "Telemt MTProxy - Telegram MTProto Proxy",
-    };
-
-    let service_path = service::service_file_path(init_system);
-    let service_content = service::generate_service_file(init_system, &service_opts);
-
-    // Ensure parent directory exists
-    if let Some(parent) = Path::new(service_path).parent() {
-        let _ = fs::create_dir_all(parent);
-    }
-
-    match fs::write(service_path, &service_content) {
+    let unit_path = Path::new("/etc/systemd/system/telemt.service");
+    let unit_content = generate_systemd_unit(&exe_path, &config_path);
+
+    match fs::write(unit_path, &unit_content) {
         Ok(()) => {
-            eprintln!("[+] Service file written to {}", service_path);
-
-            // Make script executable for OpenRC/FreeBSD
-            #[cfg(unix)]
-            if init_system == InitSystem::OpenRC || init_system == InitSystem::FreeBSDRc {
-                use std::os::unix::fs::PermissionsExt;
-                let mut perms = fs::metadata(service_path)?.permissions();
-                perms.set_mode(0o755);
-                fs::set_permissions(service_path, perms)?;
-            }
+            eprintln!("[+] Systemd unit written to {}", unit_path.display());
         }
         Err(e) => {
-            eprintln!("[!] Cannot write service file (run as root?): {}", e);
-            eprintln!("[!] Manual service file content:");
-            eprintln!("{}", service_content);
+            eprintln!("[!] Cannot write systemd unit (run as root?): {}", e);
+            eprintln!("[!] Manual unit file content:");
+            eprintln!("{}", unit_content);

-            // Still print links and installation instructions
-            eprintln!();
-            eprintln!("{}", service::installation_instructions(init_system));
+            // Still print links and config
             print_links(&opts.username, &secret, opts.port, &opts.domain);
             return Ok(());
         }
     }

-    // 6. Install and enable service based on init system
-    match init_system {
-        InitSystem::Systemd => {
+    // 5. Reload systemd
     run_cmd("systemctl", &["daemon-reload"]);

+    // 6. Enable service
     run_cmd("systemctl", &["enable", "telemt.service"]);
     eprintln!("[+] Service enabled");

+    // 7. Start service (unless --no-start)
     if !opts.no_start {
         run_cmd("systemctl", &["start", "telemt.service"]);
         eprintln!("[+] Service started");

+        // Brief delay then check status
         std::thread::sleep(std::time::Duration::from_secs(1));
         let status = Command::new("systemctl")
             .args(["is-active", "telemt.service"])
@@ -513,40 +166,10 @@ pub fn run_init(opts: InitOptions) -> Result<(), Box<dyn std::error::Error>> {
         eprintln!("[+] Service not started (--no-start)");
         eprintln!("[+] Start manually: systemctl start telemt.service");
     }
-        }
-        InitSystem::OpenRC => {
-            run_cmd("rc-update", &["add", "telemt", "default"]);
-            eprintln!("[+] Service enabled");
-
-            if !opts.no_start {
-                run_cmd("rc-service", &["telemt", "start"]);
-                eprintln!("[+] Service started");
-            } else {
-                eprintln!("[+] Service not started (--no-start)");
-                eprintln!("[+] Start manually: rc-service telemt start");
-            }
-        }
-        InitSystem::FreeBSDRc => {
-            run_cmd("sysrc", &["telemt_enable=YES"]);
-            eprintln!("[+] Service enabled");
-
-            if !opts.no_start {
-                run_cmd("service", &["telemt", "start"]);
-                eprintln!("[+] Service started");
-            } else {
-                eprintln!("[+] Service not started (--no-start)");
-                eprintln!("[+] Start manually: service telemt start");
-            }
-        }
-        InitSystem::Unknown => {
-            eprintln!("[!] Unknown init system - service file written but not installed");
-            eprintln!("[!] You may need to install it manually");
-        }
-    }
-
     eprintln!();

-    // 7. Print links
+    // 8. Print links
     print_links(&opts.username, &secret, opts.port, &opts.domain);

     Ok(())
@@ -560,7 +183,7 @@ fn generate_secret() -> String {

 fn generate_config(username: &str, secret: &str, port: u16, domain: &str) -> String {
     format!(
         r#"# Telemt MTProxy — auto-generated config
 # Re-run `telemt --init` to regenerate

 show_link = ["{username}"]
@@ -575,16 +198,8 @@ desync_all_full = false
 update_every = 43200
 hardswap = false
 me_pool_drain_ttl_secs = 90
-me_instadrain = false
-me_pool_drain_threshold = 32
-me_pool_drain_soft_evict_grace_secs = 10
-me_pool_drain_soft_evict_per_writer = 2
-me_pool_drain_soft_evict_budget_per_core = 16
-me_pool_drain_soft_evict_cooldown_ms = 1000
-me_bind_stale_mode = "never"
 me_pool_min_fresh_ratio = 0.8
-me_reinit_drain_timeout_secs = 90
+me_reinit_drain_timeout_secs = 120
-tg_connect = 10

 [network]
 ipv4 = true
@@ -610,8 +225,8 @@ ip = "0.0.0.0"
 ip = "::"

 [timeouts]
-client_first_byte_idle_secs = 300
-client_handshake = 60
+client_handshake = 15
+tg_connect = 10
 client_keepalive = 60
 client_ack = 300
@@ -623,9 +238,8 @@ fake_cert_len = 2048
 tls_full_cert_ttl_secs = 90

 [access]
-user_max_tcp_conns_global_each = 0
 replay_check_len = 65536
-replay_window_secs = 120
+replay_window_secs = 1800
 ignore_time_skew = false

 [access.users]
@@ -643,6 +257,35 @@ weight = 10
     )
 }

+fn generate_systemd_unit(exe_path: &Path, config_path: &Path) -> String {
+    format!(
+        r#"[Unit]
+Description=Telemt MTProxy
+Documentation=https://github.com/nicepkg/telemt
+After=network-online.target
+Wants=network-online.target
+
+[Service]
+Type=simple
+ExecStart={exe} {config}
+Restart=always
+RestartSec=5
+LimitNOFILE=65535
+# Security hardening
+NoNewPrivileges=true
+ProtectSystem=strict
+ProtectHome=true
+ReadWritePaths=/etc/telemt
+PrivateTmp=true
+
+[Install]
+WantedBy=multi-user.target
+"#,
+        exe = exe_path.display(),
+        config = config_path.display(),
+    )
+}

 fn run_cmd(cmd: &str, args: &[&str]) {
     match Command::new(cmd).args(args).output() {
         Ok(output) => {
@@ -662,10 +305,8 @@ fn print_links(username: &str, secret: &str, port: u16, domain: &str) {

     println!("=== Proxy Links ===");
     println!("[{}]", username);
-    println!(
-        "  EE-TLS: tg://proxy?server=YOUR_SERVER_IP&port={}&secret=ee{}{}",
-        port, secret, domain_hex
-    );
+    println!("  EE-TLS: tg://proxy?server=YOUR_SERVER_IP&port={}&secret=ee{}{}",
+        port, secret, domain_hex);
     println!();
     println!("Replace YOUR_SERVER_IP with your server's public IP.");
     println!("The proxy will auto-detect and display the correct link on startup.");
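Taken together, the hunks above show 3.3.17 dropping the pluggable `service` module in favor of a single hard-coded systemd template built with `format!`. A minimal, self-contained sketch of that template approach (the function name and the trimmed unit body here are illustrative stand-ins, not the crate's actual code):

```rust
use std::path::Path;

// Sketch of the format!-template style used by the diff's
// `generate_systemd_unit`: the only dynamic parts are the
// executable path and the config path on the ExecStart line.
fn generate_unit(exe: &Path, config: &Path) -> String {
    format!(
        "[Unit]\nDescription=Telemt MTProxy\n\n[Service]\nType=simple\nExecStart={} {}\nRestart=always\n\n[Install]\nWantedBy=multi-user.target\n",
        exe.display(),
        config.display()
    )
}

fn main() {
    let unit = generate_unit(
        Path::new("/usr/local/bin/telemt"),
        Path::new("/etc/telemt/config.toml"),
    );
    // ExecStart carries both paths verbatim.
    assert!(unit.contains("ExecStart=/usr/local/bin/telemt /etc/telemt/config.toml"));
    print!("{unit}");
}
```

The trade-off visible in the diff: the template is trivial to audit, but OpenRC and FreeBSD rc support had to go with the `service` abstraction.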

@@ -1,6 +1,6 @@
+use std::collections::HashMap;
 use ipnetwork::IpNetwork;
 use serde::Deserialize;
-use std::collections::HashMap;

 // Helper defaults kept private to the config module.
 const DEFAULT_NETWORK_IPV6: Option<bool> = Some(false);
@@ -27,10 +27,8 @@ const DEFAULT_ME_C2ME_CHANNEL_CAPACITY: usize = 1024;
 const DEFAULT_ME_READER_ROUTE_DATA_WAIT_MS: u64 = 2;
 const DEFAULT_ME_D2C_FLUSH_BATCH_MAX_FRAMES: usize = 32;
 const DEFAULT_ME_D2C_FLUSH_BATCH_MAX_BYTES: usize = 128 * 1024;
-const DEFAULT_ME_D2C_FLUSH_BATCH_MAX_DELAY_US: u64 = 500;
-const DEFAULT_ME_D2C_ACK_FLUSH_IMMEDIATE: bool = true;
-const DEFAULT_ME_QUOTA_SOFT_OVERSHOOT_BYTES: u64 = 64 * 1024;
-const DEFAULT_ME_D2C_FRAME_BUF_SHRINK_THRESHOLD_BYTES: usize = 256 * 1024;
+const DEFAULT_ME_D2C_FLUSH_BATCH_MAX_DELAY_US: u64 = 1500;
+const DEFAULT_ME_D2C_ACK_FLUSH_IMMEDIATE: bool = false;
 const DEFAULT_DIRECT_RELAY_COPY_BUF_C2S_BYTES: usize = 64 * 1024;
 const DEFAULT_DIRECT_RELAY_COPY_BUF_S2C_BYTES: usize = 256 * 1024;
 const DEFAULT_ME_WRITER_PICK_SAMPLE_SIZE: u8 = 3;
@@ -38,20 +36,7 @@ const DEFAULT_ME_HEALTH_INTERVAL_MS_UNHEALTHY: u64 = 1000;
 const DEFAULT_ME_HEALTH_INTERVAL_MS_HEALTHY: u64 = 3000;
 const DEFAULT_ME_ADMISSION_POLL_MS: u64 = 1000;
 const DEFAULT_ME_WARN_RATE_LIMIT_MS: u64 = 5000;
-const DEFAULT_ME_ROUTE_HYBRID_MAX_WAIT_MS: u64 = 3000;
-const DEFAULT_ME_ROUTE_BLOCKING_SEND_TIMEOUT_MS: u64 = 250;
-const DEFAULT_ME_C2ME_SEND_TIMEOUT_MS: u64 = 4000;
-const DEFAULT_ME_POOL_DRAIN_SOFT_EVICT_ENABLED: bool = true;
-const DEFAULT_ME_POOL_DRAIN_SOFT_EVICT_GRACE_SECS: u64 = 10;
-const DEFAULT_ME_POOL_DRAIN_SOFT_EVICT_PER_WRITER: u8 = 2;
-const DEFAULT_ME_POOL_DRAIN_SOFT_EVICT_BUDGET_PER_CORE: u16 = 16;
-const DEFAULT_ME_POOL_DRAIN_SOFT_EVICT_COOLDOWN_MS: u64 = 1000;
 const DEFAULT_USER_MAX_UNIQUE_IPS_WINDOW_SECS: u64 = 30;
-const DEFAULT_ACCEPT_PERMIT_TIMEOUT_MS: u64 = 250;
-const DEFAULT_CONNTRACK_CONTROL_ENABLED: bool = true;
-const DEFAULT_CONNTRACK_PRESSURE_HIGH_WATERMARK_PCT: u8 = 85;
-const DEFAULT_CONNTRACK_PRESSURE_LOW_WATERMARK_PCT: u8 = 70;
-const DEFAULT_CONNTRACK_DELETE_BUDGET_PER_SEC: u64 = 4096;
 const DEFAULT_UPSTREAM_CONNECT_RETRY_ATTEMPTS: u32 = 2;
 const DEFAULT_UPSTREAM_UNHEALTHY_FAIL_THRESHOLD: u32 = 5;
 const DEFAULT_UPSTREAM_CONNECT_BUDGET_MS: u64 = 3000;
@@ -71,26 +56,6 @@ pub(crate) fn default_tls_domain() -> String {
     "petrovich.ru".to_string()
 }

-pub(crate) fn default_tls_fetch_scope() -> String {
-    String::new()
-}
-
-pub(crate) fn default_tls_fetch_attempt_timeout_ms() -> u64 {
-    5_000
-}
-
-pub(crate) fn default_tls_fetch_total_budget_ms() -> u64 {
-    15_000
-}
-
-pub(crate) fn default_tls_fetch_strict_route() -> bool {
-    true
-}
-
-pub(crate) fn default_tls_fetch_profile_cache_ttl_secs() -> u64 {
-    600
-}
-
 pub(crate) fn default_mask_port() -> u16 {
     443
 }
@@ -100,7 +65,7 @@ pub(crate) fn default_fake_cert_len() -> usize {
 }

 pub(crate) fn default_tls_front_dir() -> String {
-    "/etc/telemt/tlsfront".to_string()
+    "tlsfront".to_string()
 }

 pub(crate) fn default_replay_check_len() -> usize {
@@ -108,32 +73,10 @@ pub(crate) fn default_replay_check_len() -> usize {
 }

 pub(crate) fn default_replay_window_secs() -> u64 {
-    // Keep replay cache TTL tight by default to reduce replay surface.
-    // Deployments with higher RTT or longer reconnect jitter can override this in config.
-    120
+    1800
 }

 pub(crate) fn default_handshake_timeout() -> u64 {
-    60
-}
-
-pub(crate) fn default_client_first_byte_idle_secs() -> u64 {
-    300
-}
-
-pub(crate) fn default_relay_idle_policy_v2_enabled() -> bool {
-    true
-}
-
-pub(crate) fn default_relay_client_idle_soft_secs() -> u64 {
-    120
-}
-
-pub(crate) fn default_relay_client_idle_hard_secs() -> u64 {
-    360
-}
-
-pub(crate) fn default_relay_idle_grace_after_downstream_activity_secs() -> u64 {
     30
 }
@@ -142,11 +85,11 @@ pub(crate) fn default_connect_timeout() -> u64 {
 }

 pub(crate) fn default_keepalive() -> u64 {
-    15
+    60
 }

 pub(crate) fn default_ack_timeout() -> u64 {
-    90
+    300
 }
 pub(crate) fn default_me_one_retry() -> u8 {
     12
@@ -169,7 +112,10 @@ pub(crate) fn default_weight() -> u16 {
 }

 pub(crate) fn default_metrics_whitelist() -> Vec<IpNetwork> {
-    vec!["127.0.0.1/32".parse().unwrap(), "::1/128".parse().unwrap()]
+    vec![
+        "127.0.0.1/32".parse().unwrap(),
+        "::1/128".parse().unwrap(),
+    ]
 }

 pub(crate) fn default_api_listen() -> String {
@@ -192,55 +138,15 @@ pub(crate) fn default_api_minimal_runtime_cache_ttl_ms() -> u64 {
     1000
 }

-pub(crate) fn default_api_runtime_edge_enabled() -> bool {
-    false
-}
-pub(crate) fn default_api_runtime_edge_cache_ttl_ms() -> u64 {
-    1000
-}
-pub(crate) fn default_api_runtime_edge_top_n() -> usize {
-    10
-}
-pub(crate) fn default_api_runtime_edge_events_capacity() -> usize {
-    256
-}
+pub(crate) fn default_api_runtime_edge_enabled() -> bool { false }
+pub(crate) fn default_api_runtime_edge_cache_ttl_ms() -> u64 { 1000 }
+pub(crate) fn default_api_runtime_edge_top_n() -> usize { 10 }
+pub(crate) fn default_api_runtime_edge_events_capacity() -> usize { 256 }

 pub(crate) fn default_proxy_protocol_header_timeout_ms() -> u64 {
     500
 }

-pub(crate) fn default_proxy_protocol_trusted_cidrs() -> Vec<IpNetwork> {
-    vec!["0.0.0.0/0".parse().unwrap(), "::/0".parse().unwrap()]
-}
-
-pub(crate) fn default_server_max_connections() -> u32 {
-    10_000
-}
-
-pub(crate) fn default_listen_backlog() -> u32 {
-    1024
-}
-
-pub(crate) fn default_accept_permit_timeout_ms() -> u64 {
-    DEFAULT_ACCEPT_PERMIT_TIMEOUT_MS
-}
-
-pub(crate) fn default_conntrack_control_enabled() -> bool {
-    DEFAULT_CONNTRACK_CONTROL_ENABLED
-}
-
-pub(crate) fn default_conntrack_pressure_high_watermark_pct() -> u8 {
-    DEFAULT_CONNTRACK_PRESSURE_HIGH_WATERMARK_PCT
-}
-
-pub(crate) fn default_conntrack_pressure_low_watermark_pct() -> u8 {
-    DEFAULT_CONNTRACK_PRESSURE_LOW_WATERMARK_PCT
-}
-
-pub(crate) fn default_conntrack_delete_budget_per_sec() -> u64 {
-    DEFAULT_CONNTRACK_DELETE_BUDGET_PER_SEC
-}
-
 pub(crate) fn default_prefer_4() -> u8 {
     4
 }
@@ -301,10 +207,6 @@ pub(crate) fn default_me2dc_fallback() -> bool {
     true
 }

-pub(crate) fn default_me2dc_fast() -> bool {
-    true
-}
-
 pub(crate) fn default_keepalive_interval() -> u64 {
     8
 }
@@ -441,14 +343,6 @@ pub(crate) fn default_me_d2c_ack_flush_immediate() -> bool {
     DEFAULT_ME_D2C_ACK_FLUSH_IMMEDIATE
 }

-pub(crate) fn default_me_quota_soft_overshoot_bytes() -> u64 {
-    DEFAULT_ME_QUOTA_SOFT_OVERSHOOT_BYTES
-}
-
-pub(crate) fn default_me_d2c_frame_buf_shrink_threshold_bytes() -> usize {
-    DEFAULT_ME_D2C_FRAME_BUF_SHRINK_THRESHOLD_BYTES
-}
-
 pub(crate) fn default_direct_relay_copy_buf_c2s_bytes() -> usize {
     DEFAULT_DIRECT_RELAY_COPY_BUF_C2S_BYTES
 }
@@ -477,18 +371,6 @@ pub(crate) fn default_me_warn_rate_limit_ms() -> u64 {
     DEFAULT_ME_WARN_RATE_LIMIT_MS
 }

-pub(crate) fn default_me_route_hybrid_max_wait_ms() -> u64 {
-    DEFAULT_ME_ROUTE_HYBRID_MAX_WAIT_MS
-}
-
-pub(crate) fn default_me_route_blocking_send_timeout_ms() -> u64 {
-    DEFAULT_ME_ROUTE_BLOCKING_SEND_TIMEOUT_MS
-}
-
-pub(crate) fn default_me_c2me_send_timeout_ms() -> u64 {
-    DEFAULT_ME_C2ME_SEND_TIMEOUT_MS
-}
-
 pub(crate) fn default_upstream_connect_retry_attempts() -> u32 {
     DEFAULT_UPSTREAM_CONNECT_RETRY_ATTEMPTS
 }
@@ -558,7 +440,7 @@ pub(crate) fn default_beobachten_flush_secs() -> u64 {
|
||||||
}
|
}
|
||||||
|
|
||||||
pub(crate) fn default_beobachten_file() -> String {
|
pub(crate) fn default_beobachten_file() -> String {
|
||||||
"/etc/telemt/beobachten.txt".to_string()
|
"cache/beobachten.txt".to_string()
|
||||||
}
|
}
|
||||||
|
|
||||||
pub(crate) fn default_tls_new_session_tickets() -> u8 {
|
pub(crate) fn default_tls_new_session_tickets() -> u8 {
|
||||||
|
|
@ -570,67 +452,17 @@ pub(crate) fn default_tls_full_cert_ttl_secs() -> u64 {
|
||||||
}
|
}
|
||||||
|
|
||||||
pub(crate) fn default_server_hello_delay_min_ms() -> u64 {
|
pub(crate) fn default_server_hello_delay_min_ms() -> u64 {
|
||||||
8
|
0
|
||||||
}
|
}
|
||||||
|
|
||||||
pub(crate) fn default_server_hello_delay_max_ms() -> u64 {
|
pub(crate) fn default_server_hello_delay_max_ms() -> u64 {
|
||||||
24
|
0
|
||||||
}
|
}
|
||||||
|
|
||||||
pub(crate) fn default_alpn_enforce() -> bool {
|
pub(crate) fn default_alpn_enforce() -> bool {
|
||||||
true
|
true
|
||||||
}
|
}
|
||||||
|
|
||||||
pub(crate) fn default_mask_shape_hardening() -> bool {
|
|
||||||
true
|
|
||||||
}
|
|
||||||
|
|
||||||
pub(crate) fn default_mask_shape_hardening_aggressive_mode() -> bool {
|
|
||||||
false
|
|
||||||
}
|
|
||||||
|
|
||||||
pub(crate) fn default_mask_shape_bucket_floor_bytes() -> usize {
|
|
||||||
512
|
|
||||||
}
|
|
||||||
|
|
||||||
pub(crate) fn default_mask_shape_bucket_cap_bytes() -> usize {
|
|
||||||
4096
|
|
||||||
}
|
|
||||||
|
|
||||||
pub(crate) fn default_mask_shape_above_cap_blur() -> bool {
|
|
||||||
false
|
|
||||||
}
|
|
||||||
|
|
||||||
pub(crate) fn default_mask_shape_above_cap_blur_max_bytes() -> usize {
|
|
||||||
512
|
|
||||||
}
|
|
||||||
|
|
||||||
#[cfg(not(test))]
|
|
||||||
pub(crate) fn default_mask_relay_max_bytes() -> usize {
|
|
||||||
5 * 1024 * 1024
|
|
||||||
}
|
|
||||||
|
|
||||||
#[cfg(test)]
|
|
||||||
pub(crate) fn default_mask_relay_max_bytes() -> usize {
|
|
||||||
32 * 1024
|
|
||||||
}
|
|
||||||
|
|
||||||
pub(crate) fn default_mask_classifier_prefetch_timeout_ms() -> u64 {
|
|
||||||
5
|
|
||||||
}
|
|
||||||
|
|
||||||
pub(crate) fn default_mask_timing_normalization_enabled() -> bool {
|
|
||||||
false
|
|
||||||
}
|
|
||||||
|
|
||||||
pub(crate) fn default_mask_timing_normalization_floor_ms() -> u64 {
|
|
||||||
0
|
|
||||||
}
|
|
||||||
|
|
||||||
pub(crate) fn default_mask_timing_normalization_ceiling_ms() -> u64 {
|
|
||||||
0
|
|
||||||
}
|
|
||||||
|
|
||||||
pub(crate) fn default_stun_servers() -> Vec<String> {
|
pub(crate) fn default_stun_servers() -> Vec<String> {
|
||||||
vec![
|
vec![
|
||||||
"stun.l.google.com:5349".to_string(),
|
"stun.l.google.com:5349".to_string(),
|
||||||
|
|
@@ -745,41 +577,13 @@ pub(crate) fn default_proxy_secret_len_max() -> usize {
 }
 
 pub(crate) fn default_me_reinit_drain_timeout_secs() -> u64 {
-    90
+    120
 }
 
 pub(crate) fn default_me_pool_drain_ttl_secs() -> u64 {
     90
 }
 
-pub(crate) fn default_me_instadrain() -> bool {
-    false
-}
-
-pub(crate) fn default_me_pool_drain_threshold() -> u64 {
-    32
-}
-
-pub(crate) fn default_me_pool_drain_soft_evict_enabled() -> bool {
-    DEFAULT_ME_POOL_DRAIN_SOFT_EVICT_ENABLED
-}
-
-pub(crate) fn default_me_pool_drain_soft_evict_grace_secs() -> u64 {
-    DEFAULT_ME_POOL_DRAIN_SOFT_EVICT_GRACE_SECS
-}
-
-pub(crate) fn default_me_pool_drain_soft_evict_per_writer() -> u8 {
-    DEFAULT_ME_POOL_DRAIN_SOFT_EVICT_PER_WRITER
-}
-
-pub(crate) fn default_me_pool_drain_soft_evict_budget_per_core() -> u16 {
-    DEFAULT_ME_POOL_DRAIN_SOFT_EVICT_BUDGET_PER_CORE
-}
-
-pub(crate) fn default_me_pool_drain_soft_evict_cooldown_ms() -> u64 {
-    DEFAULT_ME_POOL_DRAIN_SOFT_EVICT_COOLDOWN_MS
-}
-
 pub(crate) fn default_me_bind_stale_ttl_secs() -> u64 {
     default_me_pool_drain_ttl_secs()
 }
@@ -831,14 +635,6 @@ pub(crate) fn default_user_max_unique_ips_window_secs() -> u64 {
     DEFAULT_USER_MAX_UNIQUE_IPS_WINDOW_SECS
 }
 
-pub(crate) fn default_user_max_tcp_conns_global_each() -> usize {
-    0
-}
-
-pub(crate) fn default_user_max_unique_ips_global_each() -> usize {
-    0
-}
-
 // Custom deserializer helpers
 
 #[derive(Deserialize)]

@@ -21,22 +21,19 @@
 //! `network.*`, `use_middle_proxy`) are **not** applied; a warning is emitted.
 //! Non-hot changes are never mixed into the runtime config snapshot.
 
-use std::collections::BTreeSet;
 use std::net::IpAddr;
-use std::path::{Path, PathBuf};
-use std::sync::{Arc, RwLock as StdRwLock};
-use std::time::Duration;
+use std::path::PathBuf;
+use std::sync::Arc;
 
 use notify::{EventKind, RecursiveMode, Watcher, recommended_watcher};
 use tokio::sync::{mpsc, watch};
 use tracing::{error, info, warn};
 
-use super::load::{LoadedConfig, ProxyConfig};
 use crate::config::{
-    LogLevel, MeBindStaleMode, MeFloorMode, MeSocksKdfPolicy, MeTelemetryLevel, MeWriterPickMode,
+    LogLevel, MeBindStaleMode, MeFloorMode, MeSocksKdfPolicy, MeTelemetryLevel,
+    MeWriterPickMode,
 };
-
-const HOT_RELOAD_DEBOUNCE: Duration = Duration::from_millis(50);
+use super::load::ProxyConfig;
 
 // ── Hot fields ────────────────────────────────────────────────────────────────
@@ -53,8 +50,6 @@ pub struct HotFields {
     pub me_reinit_coalesce_window_ms: u64,
     pub hardswap: bool,
     pub me_pool_drain_ttl_secs: u64,
-    pub me_instadrain: bool,
-    pub me_pool_drain_threshold: u64,
     pub me_pool_min_fresh_ratio: f32,
     pub me_reinit_drain_timeout_secs: u64,
     pub me_hardswap_warmup_delay_min_ms: u64,
@@ -106,8 +101,6 @@ pub struct HotFields {
     pub me_d2c_flush_batch_max_bytes: usize,
     pub me_d2c_flush_batch_max_delay_us: u64,
     pub me_d2c_ack_flush_immediate: bool,
-    pub me_quota_soft_overshoot_bytes: u64,
-    pub me_d2c_frame_buf_shrink_threshold_bytes: usize,
     pub direct_relay_copy_buf_c2s_bytes: usize,
     pub direct_relay_copy_buf_s2c_bytes: usize,
     pub me_health_interval_ms_unhealthy: u64,
@@ -117,11 +110,9 @@ pub struct HotFields {
     pub users: std::collections::HashMap<String, String>,
     pub user_ad_tags: std::collections::HashMap<String, String>,
     pub user_max_tcp_conns: std::collections::HashMap<String, usize>,
-    pub user_max_tcp_conns_global_each: usize,
     pub user_expirations: std::collections::HashMap<String, chrono::DateTime<chrono::Utc>>,
     pub user_data_quota: std::collections::HashMap<String, u64>,
     pub user_max_unique_ips: std::collections::HashMap<String, usize>,
-    pub user_max_unique_ips_global_each: usize,
     pub user_max_unique_ips_mode: crate::config::UserMaxUniqueIpsMode,
     pub user_max_unique_ips_window_secs: u64,
 }
@@ -139,8 +130,6 @@ impl HotFields {
             me_reinit_coalesce_window_ms: cfg.general.me_reinit_coalesce_window_ms,
             hardswap: cfg.general.hardswap,
             me_pool_drain_ttl_secs: cfg.general.me_pool_drain_ttl_secs,
-            me_instadrain: cfg.general.me_instadrain,
-            me_pool_drain_threshold: cfg.general.me_pool_drain_threshold,
             me_pool_min_fresh_ratio: cfg.general.me_pool_min_fresh_ratio,
             me_reinit_drain_timeout_secs: cfg.general.me_reinit_drain_timeout_secs,
             me_hardswap_warmup_delay_min_ms: cfg.general.me_hardswap_warmup_delay_min_ms,
@@ -191,11 +180,15 @@ impl HotFields {
             me_adaptive_floor_min_writers_multi_endpoint: cfg
                 .general
                 .me_adaptive_floor_min_writers_multi_endpoint,
-            me_adaptive_floor_recover_grace_secs: cfg.general.me_adaptive_floor_recover_grace_secs,
+            me_adaptive_floor_recover_grace_secs: cfg
+                .general
+                .me_adaptive_floor_recover_grace_secs,
             me_adaptive_floor_writers_per_core_total: cfg
                 .general
                 .me_adaptive_floor_writers_per_core_total,
-            me_adaptive_floor_cpu_cores_override: cfg.general.me_adaptive_floor_cpu_cores_override,
+            me_adaptive_floor_cpu_cores_override: cfg
+                .general
+                .me_adaptive_floor_cpu_cores_override,
             me_adaptive_floor_max_extra_writers_single_per_core: cfg
                 .general
                 .me_adaptive_floor_max_extra_writers_single_per_core,
@@ -214,24 +207,14 @@ impl HotFields {
             me_adaptive_floor_max_warm_writers_global: cfg
                 .general
                 .me_adaptive_floor_max_warm_writers_global,
-            me_route_backpressure_base_timeout_ms: cfg
-                .general
-                .me_route_backpressure_base_timeout_ms,
-            me_route_backpressure_high_timeout_ms: cfg
-                .general
-                .me_route_backpressure_high_timeout_ms,
-            me_route_backpressure_high_watermark_pct: cfg
-                .general
-                .me_route_backpressure_high_watermark_pct,
+            me_route_backpressure_base_timeout_ms: cfg.general.me_route_backpressure_base_timeout_ms,
+            me_route_backpressure_high_timeout_ms: cfg.general.me_route_backpressure_high_timeout_ms,
+            me_route_backpressure_high_watermark_pct: cfg.general.me_route_backpressure_high_watermark_pct,
             me_reader_route_data_wait_ms: cfg.general.me_reader_route_data_wait_ms,
             me_d2c_flush_batch_max_frames: cfg.general.me_d2c_flush_batch_max_frames,
             me_d2c_flush_batch_max_bytes: cfg.general.me_d2c_flush_batch_max_bytes,
             me_d2c_flush_batch_max_delay_us: cfg.general.me_d2c_flush_batch_max_delay_us,
             me_d2c_ack_flush_immediate: cfg.general.me_d2c_ack_flush_immediate,
-            me_quota_soft_overshoot_bytes: cfg.general.me_quota_soft_overshoot_bytes,
-            me_d2c_frame_buf_shrink_threshold_bytes: cfg
-                .general
-                .me_d2c_frame_buf_shrink_threshold_bytes,
             direct_relay_copy_buf_c2s_bytes: cfg.general.direct_relay_copy_buf_c2s_bytes,
             direct_relay_copy_buf_s2c_bytes: cfg.general.direct_relay_copy_buf_s2c_bytes,
             me_health_interval_ms_unhealthy: cfg.general.me_health_interval_ms_unhealthy,
@@ -241,11 +224,9 @@ impl HotFields {
             users: cfg.access.users.clone(),
             user_ad_tags: cfg.access.user_ad_tags.clone(),
             user_max_tcp_conns: cfg.access.user_max_tcp_conns.clone(),
-            user_max_tcp_conns_global_each: cfg.access.user_max_tcp_conns_global_each,
             user_expirations: cfg.access.user_expirations.clone(),
             user_data_quota: cfg.access.user_data_quota.clone(),
             user_max_unique_ips: cfg.access.user_max_unique_ips.clone(),
-            user_max_unique_ips_global_each: cfg.access.user_max_unique_ips_global_each,
             user_max_unique_ips_mode: cfg.access.user_max_unique_ips_mode,
             user_max_unique_ips_window_secs: cfg.access.user_max_unique_ips_window_secs,
         }
@@ -306,129 +287,6 @@ fn listeners_equal(
     })
 }
 
-#[derive(Debug, Clone, Default, PartialEq, Eq)]
-struct WatchManifest {
-    files: BTreeSet<PathBuf>,
-    dirs: BTreeSet<PathBuf>,
-}
-
-impl WatchManifest {
-    fn from_source_files(source_files: &[PathBuf]) -> Self {
-        let mut files = BTreeSet::new();
-        let mut dirs = BTreeSet::new();
-
-        for path in source_files {
-            let normalized = normalize_watch_path(path);
-            files.insert(normalized.clone());
-            if let Some(parent) = normalized.parent() {
-                dirs.insert(parent.to_path_buf());
-            }
-        }
-
-        Self { files, dirs }
-    }
-
-    fn matches_event_paths(&self, event_paths: &[PathBuf]) -> bool {
-        event_paths
-            .iter()
-            .map(|path| normalize_watch_path(path))
-            .any(|path| self.files.contains(&path))
-    }
-}
-
-#[derive(Debug, Default)]
-struct ReloadState {
-    applied_snapshot_hash: Option<u64>,
-}
-
-impl ReloadState {
-    fn new(applied_snapshot_hash: Option<u64>) -> Self {
-        Self {
-            applied_snapshot_hash,
-        }
-    }
-
-    fn is_applied(&self, hash: u64) -> bool {
-        self.applied_snapshot_hash == Some(hash)
-    }
-
-    fn mark_applied(&mut self, hash: u64) {
-        self.applied_snapshot_hash = Some(hash);
-    }
-}
-
-fn normalize_watch_path(path: &Path) -> PathBuf {
-    path.canonicalize().unwrap_or_else(|_| {
-        if path.is_absolute() {
-            path.to_path_buf()
-        } else {
-            std::env::current_dir()
-                .map(|cwd| cwd.join(path))
-                .unwrap_or_else(|_| path.to_path_buf())
-        }
-    })
-}
-
-fn sync_watch_paths<W: Watcher>(
-    watcher: &mut W,
-    current: &BTreeSet<PathBuf>,
-    next: &BTreeSet<PathBuf>,
-    recursive_mode: RecursiveMode,
-    kind: &str,
-) {
-    for path in current.difference(next) {
-        if let Err(e) = watcher.unwatch(path) {
-            warn!(path = %path.display(), error = %e, "config watcher: failed to unwatch {kind}");
-        }
-    }
-
-    for path in next.difference(current) {
-        if let Err(e) = watcher.watch(path, recursive_mode) {
-            warn!(path = %path.display(), error = %e, "config watcher: failed to watch {kind}");
-        }
-    }
-}
-
-fn apply_watch_manifest<W1: Watcher, W2: Watcher>(
-    notify_watcher: Option<&mut W1>,
-    poll_watcher: Option<&mut W2>,
-    manifest_state: &Arc<StdRwLock<WatchManifest>>,
-    next_manifest: WatchManifest,
-) {
-    let current_manifest = manifest_state
-        .read()
-        .map(|manifest| manifest.clone())
-        .unwrap_or_default();
-
-    if current_manifest == next_manifest {
-        return;
-    }
-
-    if let Some(watcher) = notify_watcher {
-        sync_watch_paths(
-            watcher,
-            &current_manifest.dirs,
-            &next_manifest.dirs,
-            RecursiveMode::NonRecursive,
-            "config directory",
-        );
-    }
-
-    if let Some(watcher) = poll_watcher {
-        sync_watch_paths(
-            watcher,
-            &current_manifest.files,
-            &next_manifest.files,
-            RecursiveMode::NonRecursive,
-            "config file",
-        );
-    }
-
-    if let Ok(mut manifest) = manifest_state.write() {
-        *manifest = next_manifest;
-    }
-}
-
 fn overlay_hot_fields(old: &ProxyConfig, new: &ProxyConfig) -> ProxyConfig {
     let mut cfg = old.clone();
@@ -444,8 +302,6 @@ fn overlay_hot_fields(old: &ProxyConfig, new: &ProxyConfig) -> ProxyConfig {
     cfg.general.me_reinit_coalesce_window_ms = new.general.me_reinit_coalesce_window_ms;
     cfg.general.hardswap = new.general.hardswap;
     cfg.general.me_pool_drain_ttl_secs = new.general.me_pool_drain_ttl_secs;
-    cfg.general.me_instadrain = new.general.me_instadrain;
-    cfg.general.me_pool_drain_threshold = new.general.me_pool_drain_threshold;
     cfg.general.me_pool_min_fresh_ratio = new.general.me_pool_min_fresh_ratio;
     cfg.general.me_reinit_drain_timeout_secs = new.general.me_reinit_drain_timeout_secs;
     cfg.general.me_hardswap_warmup_delay_min_ms = new.general.me_hardswap_warmup_delay_min_ms;
@@ -492,14 +348,10 @@ fn overlay_hot_fields(old: &ProxyConfig, new: &ProxyConfig) -> ProxyConfig {
         new.general.me_adaptive_floor_writers_per_core_total;
     cfg.general.me_adaptive_floor_cpu_cores_override =
         new.general.me_adaptive_floor_cpu_cores_override;
-    cfg.general
-        .me_adaptive_floor_max_extra_writers_single_per_core = new
-        .general
-        .me_adaptive_floor_max_extra_writers_single_per_core;
-    cfg.general
-        .me_adaptive_floor_max_extra_writers_multi_per_core = new
-        .general
-        .me_adaptive_floor_max_extra_writers_multi_per_core;
+    cfg.general.me_adaptive_floor_max_extra_writers_single_per_core =
+        new.general.me_adaptive_floor_max_extra_writers_single_per_core;
+    cfg.general.me_adaptive_floor_max_extra_writers_multi_per_core =
+        new.general.me_adaptive_floor_max_extra_writers_multi_per_core;
     cfg.general.me_adaptive_floor_max_active_writers_per_core =
         new.general.me_adaptive_floor_max_active_writers_per_core;
     cfg.general.me_adaptive_floor_max_warm_writers_per_core =
@@ -519,9 +371,6 @@ fn overlay_hot_fields(old: &ProxyConfig, new: &ProxyConfig) -> ProxyConfig {
     cfg.general.me_d2c_flush_batch_max_bytes = new.general.me_d2c_flush_batch_max_bytes;
     cfg.general.me_d2c_flush_batch_max_delay_us = new.general.me_d2c_flush_batch_max_delay_us;
     cfg.general.me_d2c_ack_flush_immediate = new.general.me_d2c_ack_flush_immediate;
-    cfg.general.me_quota_soft_overshoot_bytes = new.general.me_quota_soft_overshoot_bytes;
-    cfg.general.me_d2c_frame_buf_shrink_threshold_bytes =
-        new.general.me_d2c_frame_buf_shrink_threshold_bytes;
     cfg.general.direct_relay_copy_buf_c2s_bytes = new.general.direct_relay_copy_buf_c2s_bytes;
     cfg.general.direct_relay_copy_buf_s2c_bytes = new.general.direct_relay_copy_buf_s2c_bytes;
     cfg.general.me_health_interval_ms_unhealthy = new.general.me_health_interval_ms_unhealthy;
@@ -532,11 +381,9 @@ fn overlay_hot_fields(old: &ProxyConfig, new: &ProxyConfig) -> ProxyConfig {
     cfg.access.users = new.access.users.clone();
     cfg.access.user_ad_tags = new.access.user_ad_tags.clone();
    cfg.access.user_max_tcp_conns = new.access.user_max_tcp_conns.clone();
-    cfg.access.user_max_tcp_conns_global_each = new.access.user_max_tcp_conns_global_each;
     cfg.access.user_expirations = new.access.user_expirations.clone();
     cfg.access.user_data_quota = new.access.user_data_quota.clone();
     cfg.access.user_max_unique_ips = new.access.user_max_unique_ips.clone();
-    cfg.access.user_max_unique_ips_global_each = new.access.user_max_unique_ips_global_each;
     cfg.access.user_max_unique_ips_mode = new.access.user_max_unique_ips_mode;
     cfg.access.user_max_unique_ips_window_secs = new.access.user_max_unique_ips_window_secs;
@ -562,7 +409,8 @@ fn warn_non_hot_changes(old: &ProxyConfig, new: &ProxyConfig, non_hot_changed: b
|
||||||
|| old.server.api.minimal_runtime_cache_ttl_ms
|
|| old.server.api.minimal_runtime_cache_ttl_ms
|
||||||
!= new.server.api.minimal_runtime_cache_ttl_ms
|
!= new.server.api.minimal_runtime_cache_ttl_ms
|
||||||
|| old.server.api.runtime_edge_enabled != new.server.api.runtime_edge_enabled
|
|| old.server.api.runtime_edge_enabled != new.server.api.runtime_edge_enabled
|
||||||
|| old.server.api.runtime_edge_cache_ttl_ms != new.server.api.runtime_edge_cache_ttl_ms
|
|| old.server.api.runtime_edge_cache_ttl_ms
|
||||||
|
!= new.server.api.runtime_edge_cache_ttl_ms
|
||||||
|| old.server.api.runtime_edge_top_n != new.server.api.runtime_edge_top_n
|
|| old.server.api.runtime_edge_top_n != new.server.api.runtime_edge_top_n
|
||||||
|| old.server.api.runtime_edge_events_capacity
|
|| old.server.api.runtime_edge_events_capacity
|
||||||
!= new.server.api.runtime_edge_events_capacity
|
!= new.server.api.runtime_edge_events_capacity
|
||||||
|
|
@ -573,7 +421,6 @@ fn warn_non_hot_changes(old: &ProxyConfig, new: &ProxyConfig, non_hot_changed: b
|
||||||
}
|
}
|
||||||
if old.server.proxy_protocol != new.server.proxy_protocol
|
if old.server.proxy_protocol != new.server.proxy_protocol
|
||||||
|| !listeners_equal(&old.server.listeners, &new.server.listeners)
|
|| !listeners_equal(&old.server.listeners, &new.server.listeners)
|
||||||
|| old.server.listen_backlog != new.server.listen_backlog
|
|
||||||
|| old.server.listen_addr_ipv4 != new.server.listen_addr_ipv4
|
|| old.server.listen_addr_ipv4 != new.server.listen_addr_ipv4
|
||||||
|| old.server.listen_addr_ipv6 != new.server.listen_addr_ipv6
|
|| old.server.listen_addr_ipv6 != new.server.listen_addr_ipv6
|
||||||
|| old.server.listen_tcp != new.server.listen_tcp
|
|| old.server.listen_tcp != new.server.listen_tcp
|
||||||
|
|
@ -585,7 +432,6 @@ fn warn_non_hot_changes(old: &ProxyConfig, new: &ProxyConfig, non_hot_changed: b
|
||||||
}
|
}
|
||||||
if old.censorship.tls_domain != new.censorship.tls_domain
|
if old.censorship.tls_domain != new.censorship.tls_domain
|
||||||
|| old.censorship.tls_domains != new.censorship.tls_domains
|
|| old.censorship.tls_domains != new.censorship.tls_domains
|
||||||
|| old.censorship.tls_fetch_scope != new.censorship.tls_fetch_scope
|
|
||||||
|| old.censorship.mask != new.censorship.mask
|
|| old.censorship.mask != new.censorship.mask
|
||||||
|| old.censorship.mask_host != new.censorship.mask_host
|
|| old.censorship.mask_host != new.censorship.mask_host
|
||||||
|| old.censorship.mask_port != new.censorship.mask_port
|
|| old.censorship.mask_port != new.censorship.mask_port
|
||||||
|
|
@ -599,22 +445,6 @@ fn warn_non_hot_changes(old: &ProxyConfig, new: &ProxyConfig, non_hot_changed: b
|
||||||
|| old.censorship.tls_full_cert_ttl_secs != new.censorship.tls_full_cert_ttl_secs
|
|| old.censorship.tls_full_cert_ttl_secs != new.censorship.tls_full_cert_ttl_secs
|
||||||
|| old.censorship.alpn_enforce != new.censorship.alpn_enforce
|
|| old.censorship.alpn_enforce != new.censorship.alpn_enforce
|
||||||
|| old.censorship.mask_proxy_protocol != new.censorship.mask_proxy_protocol
|
|| old.censorship.mask_proxy_protocol != new.censorship.mask_proxy_protocol
|
||||||
|| old.censorship.mask_shape_hardening != new.censorship.mask_shape_hardening
|
|
||||||
|| old.censorship.mask_shape_bucket_floor_bytes
|
|
||||||
!= new.censorship.mask_shape_bucket_floor_bytes
|
|
||||||
|| old.censorship.mask_shape_bucket_cap_bytes != new.censorship.mask_shape_bucket_cap_bytes
|
|
||||||
|| old.censorship.mask_shape_above_cap_blur != new.censorship.mask_shape_above_cap_blur
|
|
||||||
|| old.censorship.mask_shape_above_cap_blur_max_bytes
|
|
||||||
!= new.censorship.mask_shape_above_cap_blur_max_bytes
|
|
||||||
|| old.censorship.mask_relay_max_bytes != new.censorship.mask_relay_max_bytes
|
|
||||||
|| old.censorship.mask_classifier_prefetch_timeout_ms
|
|
||||||
!= new.censorship.mask_classifier_prefetch_timeout_ms
|
|
||||||
-        || old.censorship.mask_timing_normalization_enabled
-            != new.censorship.mask_timing_normalization_enabled
-        || old.censorship.mask_timing_normalization_floor_ms
-            != new.censorship.mask_timing_normalization_floor_ms
-        || old.censorship.mask_timing_normalization_ceiling_ms
-            != new.censorship.mask_timing_normalization_ceiling_ms
     {
         warned = true;
         warn!("config reload: censorship settings changed; restart required");
@@ -655,9 +485,6 @@ fn warn_non_hot_changes(old: &ProxyConfig, new: &ProxyConfig, non_hot_changed: b
     }
     if old.general.me_route_no_writer_mode != new.general.me_route_no_writer_mode
         || old.general.me_route_no_writer_wait_ms != new.general.me_route_no_writer_wait_ms
-        || old.general.me_route_hybrid_max_wait_ms != new.general.me_route_hybrid_max_wait_ms
-        || old.general.me_route_blocking_send_timeout_ms
-            != new.general.me_route_blocking_send_timeout_ms
         || old.general.me_route_inline_recovery_attempts
             != new.general.me_route_inline_recovery_attempts
         || old.general.me_route_inline_recovery_wait_ms
@@ -676,11 +503,9 @@ fn warn_non_hot_changes(old: &ProxyConfig, new: &ProxyConfig, non_hot_changed: b
         warned = true;
         warn!("config reload: general.me_init_retry_attempts changed; restart required");
     }
-    if old.general.me2dc_fallback != new.general.me2dc_fallback
-        || old.general.me2dc_fast != new.general.me2dc_fast
-    {
+    if old.general.me2dc_fallback != new.general.me2dc_fallback {
         warned = true;
-        warn!("config reload: general.me2dc_fallback/me2dc_fast changed; restart required");
+        warn!("config reload: general.me2dc_fallback changed; restart required");
     }
     if old.general.proxy_config_v4_cache_path != new.general.proxy_config_v4_cache_path
         || old.general.proxy_config_v6_cache_path != new.general.proxy_config_v6_cache_path
@@ -699,7 +524,6 @@ fn warn_non_hot_changes(old: &ProxyConfig, new: &ProxyConfig, non_hot_changed: b
     if old.general.upstream_connect_retry_attempts != new.general.upstream_connect_retry_attempts
         || old.general.upstream_connect_retry_backoff_ms
             != new.general.upstream_connect_retry_backoff_ms
-        || old.general.tg_connect != new.general.tg_connect
         || old.general.upstream_unhealthy_fail_threshold
             != new.general.upstream_unhealthy_fail_threshold
         || old.general.upstream_connect_failfast_hard_errors
@@ -850,19 +674,6 @@ fn log_changes(
             old_hot.me_pool_drain_ttl_secs, new_hot.me_pool_drain_ttl_secs,
         );
     }
-    if old_hot.me_instadrain != new_hot.me_instadrain {
-        info!(
-            "config reload: me_instadrain: {} → {}",
-            old_hot.me_instadrain, new_hot.me_instadrain,
-        );
-    }
-
-    if old_hot.me_pool_drain_threshold != new_hot.me_pool_drain_threshold {
-        info!(
-            "config reload: me_pool_drain_threshold: {} → {}",
-            old_hot.me_pool_drain_threshold, new_hot.me_pool_drain_threshold,
-        );
-    }
-
     if (old_hot.me_pool_min_fresh_ratio - new_hot.me_pool_min_fresh_ratio).abs() > f32::EPSILON {
         info!(
@@ -896,7 +707,8 @@ fn log_changes(
     {
         info!(
             "config reload: me_bind_stale: mode={:?} ttl={}s",
-            new_hot.me_bind_stale_mode, new_hot.me_bind_stale_ttl_secs
+            new_hot.me_bind_stale_mode,
+            new_hot.me_bind_stale_ttl_secs
         );
     }
     if old_hot.me_secret_atomic_snapshot != new_hot.me_secret_atomic_snapshot
@@ -976,7 +788,8 @@ fn log_changes(
     if old_hot.me_socks_kdf_policy != new_hot.me_socks_kdf_policy {
         info!(
             "config reload: me_socks_kdf_policy: {:?} → {:?}",
-            old_hot.me_socks_kdf_policy, new_hot.me_socks_kdf_policy,
+            old_hot.me_socks_kdf_policy,
+            new_hot.me_socks_kdf_policy,
         );
     }

@@ -1030,7 +843,8 @@ fn log_changes(
         || old_hot.me_route_backpressure_high_watermark_pct
             != new_hot.me_route_backpressure_high_watermark_pct
         || old_hot.me_reader_route_data_wait_ms != new_hot.me_reader_route_data_wait_ms
-        || old_hot.me_health_interval_ms_unhealthy != new_hot.me_health_interval_ms_unhealthy
+        || old_hot.me_health_interval_ms_unhealthy
+            != new_hot.me_health_interval_ms_unhealthy
         || old_hot.me_health_interval_ms_healthy != new_hot.me_health_interval_ms_healthy
         || old_hot.me_admission_poll_ms != new_hot.me_admission_poll_ms
         || old_hot.me_warn_rate_limit_ms != new_hot.me_warn_rate_limit_ms
@@ -1052,47 +866,34 @@ fn log_changes(
         || old_hot.me_d2c_flush_batch_max_bytes != new_hot.me_d2c_flush_batch_max_bytes
         || old_hot.me_d2c_flush_batch_max_delay_us != new_hot.me_d2c_flush_batch_max_delay_us
         || old_hot.me_d2c_ack_flush_immediate != new_hot.me_d2c_ack_flush_immediate
-        || old_hot.me_quota_soft_overshoot_bytes != new_hot.me_quota_soft_overshoot_bytes
-        || old_hot.me_d2c_frame_buf_shrink_threshold_bytes
-            != new_hot.me_d2c_frame_buf_shrink_threshold_bytes
         || old_hot.direct_relay_copy_buf_c2s_bytes != new_hot.direct_relay_copy_buf_c2s_bytes
         || old_hot.direct_relay_copy_buf_s2c_bytes != new_hot.direct_relay_copy_buf_s2c_bytes
     {
         info!(
-            "config reload: relay_tuning: me_d2c_frames={} me_d2c_bytes={} me_d2c_delay_us={} me_ack_flush_immediate={} me_quota_soft_overshoot_bytes={} me_d2c_frame_buf_shrink_threshold_bytes={} direct_buf_c2s={} direct_buf_s2c={}",
+            "config reload: relay_tuning: me_d2c_frames={} me_d2c_bytes={} me_d2c_delay_us={} me_ack_flush_immediate={} direct_buf_c2s={} direct_buf_s2c={}",
             new_hot.me_d2c_flush_batch_max_frames,
            new_hot.me_d2c_flush_batch_max_bytes,
            new_hot.me_d2c_flush_batch_max_delay_us,
            new_hot.me_d2c_ack_flush_immediate,
-            new_hot.me_quota_soft_overshoot_bytes,
-            new_hot.me_d2c_frame_buf_shrink_threshold_bytes,
            new_hot.direct_relay_copy_buf_c2s_bytes,
            new_hot.direct_relay_copy_buf_s2c_bytes,
         );
     }

     if old_hot.users != new_hot.users {
-        let mut added: Vec<&String> = new_hot
-            .users
-            .keys()
+        let mut added: Vec<&String> = new_hot.users.keys()
             .filter(|u| !old_hot.users.contains_key(*u))
             .collect();
         added.sort();

-        let mut removed: Vec<&String> = old_hot
-            .users
-            .keys()
+        let mut removed: Vec<&String> = old_hot.users.keys()
             .filter(|u| !new_hot.users.contains_key(*u))
             .collect();
         removed.sort();

-        let mut changed: Vec<&String> = new_hot
-            .users
-            .keys()
+        let mut changed: Vec<&String> = new_hot.users.keys()
             .filter(|u| {
-                old_hot
-                    .users
-                    .get(*u)
+                old_hot.users.get(*u)
                     .map(|s| s != &new_hot.users[*u])
                     .unwrap_or(false)
             })
@@ -1102,18 +903,10 @@ fn log_changes(
         if !added.is_empty() {
             info!(
                 "config reload: users added: [{}]",
-                added
-                    .iter()
-                    .map(|s| s.as_str())
-                    .collect::<Vec<_>>()
-                    .join(", ")
+                added.iter().map(|s| s.as_str()).collect::<Vec<_>>().join(", ")
             );
             let host = resolve_link_host(new_cfg, detected_ip_v4, detected_ip_v6);
-            let port = new_cfg
-                .general
-                .links
-                .public_port
-                .unwrap_or(new_cfg.server.port);
+            let port = new_cfg.general.links.public_port.unwrap_or(new_cfg.server.port);
             for user in &added {
                 if let Some(secret) = new_hot.users.get(*user) {
                     print_user_links(user, secret, &host, port, new_cfg);
@@ -1123,21 +916,13 @@ fn log_changes(
         if !removed.is_empty() {
             info!(
                 "config reload: users removed: [{}]",
-                removed
-                    .iter()
-                    .map(|s| s.as_str())
-                    .collect::<Vec<_>>()
-                    .join(", ")
+                removed.iter().map(|s| s.as_str()).collect::<Vec<_>>().join(", ")
             );
         }
         if !changed.is_empty() {
             info!(
                 "config reload: users secret changed: [{}]",
-                changed
-                    .iter()
-                    .map(|s| s.as_str())
-                    .collect::<Vec<_>>()
-                    .join(", ")
+                changed.iter().map(|s| s.as_str()).collect::<Vec<_>>().join(", ")
             );
         }
     }
@@ -1148,12 +933,6 @@ fn log_changes(
             new_hot.user_max_tcp_conns.len()
         );
     }
-    if old_hot.user_max_tcp_conns_global_each != new_hot.user_max_tcp_conns_global_each {
-        info!(
-            "config reload: user_max_tcp_conns policy global_each={}",
-            new_hot.user_max_tcp_conns_global_each
-        );
-    }
     if old_hot.user_expirations != new_hot.user_expirations {
         info!(
             "config reload: user_expirations updated ({} entries)",
@@ -1172,13 +951,12 @@ fn log_changes(
             new_hot.user_max_unique_ips.len()
         );
     }
-    if old_hot.user_max_unique_ips_global_each != new_hot.user_max_unique_ips_global_each
-        || old_hot.user_max_unique_ips_mode != new_hot.user_max_unique_ips_mode
-        || old_hot.user_max_unique_ips_window_secs != new_hot.user_max_unique_ips_window_secs
+    if old_hot.user_max_unique_ips_mode != new_hot.user_max_unique_ips_mode
+        || old_hot.user_max_unique_ips_window_secs
+            != new_hot.user_max_unique_ips_window_secs
     {
         info!(
-            "config reload: user_max_unique_ips policy global_each={} mode={:?} window={}s",
-            new_hot.user_max_unique_ips_global_each,
+            "config reload: user_max_unique_ips policy mode={:?} window={}s",
             new_hot.user_max_unique_ips_mode,
             new_hot.user_max_unique_ips_window_secs
         );
@@ -1192,32 +970,18 @@ fn reload_config(
     log_tx: &watch::Sender<LogLevel>,
     detected_ip_v4: Option<IpAddr>,
     detected_ip_v6: Option<IpAddr>,
-    reload_state: &mut ReloadState,
-) -> Option<WatchManifest> {
-    let loaded = match ProxyConfig::load_with_metadata(config_path) {
-        Ok(loaded) => loaded,
+) {
+    let new_cfg = match ProxyConfig::load(config_path) {
+        Ok(c) => c,
         Err(e) => {
             error!("config reload: failed to parse {:?}: {}", config_path, e);
-            return None;
+            return;
         }
     };
-    let LoadedConfig {
-        config: new_cfg,
-        source_files,
-        rendered_hash,
-    } = loaded;
-    let next_manifest = WatchManifest::from_source_files(&source_files);
-
     if let Err(e) = new_cfg.validate() {
-        error!(
-            "config reload: validation failed: {}; keeping old config",
-            e
-        );
-        return Some(next_manifest);
-    }
-
-    if reload_state.is_applied(rendered_hash) {
-        return Some(next_manifest);
+        error!("config reload: validation failed: {}; keeping old config", e);
+        return;
     }

     let old_cfg = config_tx.borrow().clone();
@@ -1232,8 +996,7 @@ fn reload_config(
     }

     if !hot_changed {
-        reload_state.mark_applied(rendered_hash);
-        return Some(next_manifest);
+        return;
     }

     if old_hot.dns_overrides != applied_hot.dns_overrides
@@ -1243,7 +1006,7 @@ fn reload_config(
             "config reload: invalid network.dns_overrides: {}; keeping old config",
             e
         );
-        return Some(next_manifest);
+        return;
     }

     log_changes(
@@ -1255,8 +1018,6 @@ fn reload_config(
         detected_ip_v6,
     );
     config_tx.send(Arc::new(applied_cfg)).ok();
-    reload_state.mark_applied(rendered_hash);
-    Some(next_manifest)
 }

 // ── Public API ────────────────────────────────────────────────────────────────
@@ -1279,93 +1040,80 @@ pub fn spawn_config_watcher(
     let (config_tx, config_rx) = watch::channel(initial);
     let (log_tx, log_rx) = watch::channel(initial_level);

-    let config_path = normalize_watch_path(&config_path);
-    let initial_loaded = ProxyConfig::load_with_metadata(&config_path).ok();
-    let initial_manifest = initial_loaded
-        .as_ref()
-        .map(|loaded| WatchManifest::from_source_files(&loaded.source_files))
-        .unwrap_or_else(|| WatchManifest::from_source_files(std::slice::from_ref(&config_path)));
-    let initial_snapshot_hash = initial_loaded.as_ref().map(|loaded| loaded.rendered_hash);
-
-    tokio::spawn(async move {
+    // Bridge: sync notify callbacks → async task via mpsc.
     let (notify_tx, mut notify_rx) = mpsc::channel::<()>(4);
-    let manifest_state = Arc::new(StdRwLock::new(WatchManifest::default()));
-    let mut reload_state = ReloadState::new(initial_snapshot_hash);

+    // Canonicalize so path matches what notify returns (absolute) in events.
+    let config_path = match config_path.canonicalize() {
+        Ok(p) => p,
+        Err(_) => config_path.to_path_buf(),
+    };
+
+    // Watch the parent directory rather than the file itself, because many
+    // editors (vim, nano) and systemd write via rename, which would cause
+    // inotify to lose track of the original inode.
+    let watch_dir = config_path
+        .parent()
+        .unwrap_or_else(|| std::path::Path::new("."))
+        .to_path_buf();
+
+    // ── inotify watcher (instant on local fs) ────────────────────────────
+    let config_file = config_path.clone();
     let tx_inotify = notify_tx.clone();
-    let manifest_for_inotify = manifest_state.clone();
-    let mut inotify_watcher =
-        match recommended_watcher(move |res: notify::Result<notify::Event>| {
+    let inotify_ok = match recommended_watcher(move |res: notify::Result<notify::Event>| {
         let Ok(event) = res else { return };
-        if !matches!(
-            event.kind,
-            EventKind::Modify(_) | EventKind::Create(_) | EventKind::Remove(_)
-        ) {
-            return;
-        }
-        let is_our_file = manifest_for_inotify
-            .read()
-            .map(|manifest| manifest.matches_event_paths(&event.paths))
-            .unwrap_or(false);
-        if is_our_file {
+        let is_our_file = event.paths.iter().any(|p| p == &config_file);
+        if !is_our_file { return; }
+        if matches!(event.kind, EventKind::Modify(_) | EventKind::Create(_) | EventKind::Remove(_)) {
             let _ = tx_inotify.try_send(());
         }
     }) {
-        Ok(watcher) => Some(watcher),
-        Err(e) => {
-            warn!("config watcher: inotify unavailable: {}", e);
-            None
-        }
-    };
-    apply_watch_manifest(
-        inotify_watcher.as_mut(),
-        Option::<&mut notify::poll::PollWatcher>::None,
-        &manifest_state,
-        initial_manifest.clone(),
-    );
-    if inotify_watcher.is_some() {
+        Ok(mut w) => match w.watch(&watch_dir, RecursiveMode::NonRecursive) {
+            Ok(()) => {
                 info!("config watcher: inotify active on {:?}", config_path);
+                Box::leak(Box::new(w));
+                true
             }
+            Err(e) => { warn!("config watcher: inotify watch failed: {}", e); false }
+        },
+        Err(e) => { warn!("config watcher: inotify unavailable: {}", e); false }
+    };

+    // ── poll watcher (always active, fixes Docker bind mounts / NFS) ─────
+    // inotify does not receive events for files mounted from the host into
+    // a container. PollWatcher compares file contents every 3 s and fires
+    // on any change regardless of the underlying fs.
+    let config_file2 = config_path.clone();
     let tx_poll = notify_tx.clone();
-    let manifest_for_poll = manifest_state.clone();
-    let mut poll_watcher = match notify::poll::PollWatcher::new(
+    match notify::poll::PollWatcher::new(
         move |res: notify::Result<notify::Event>| {
             let Ok(event) = res else { return };
-            if !matches!(
-                event.kind,
-                EventKind::Modify(_) | EventKind::Create(_) | EventKind::Remove(_)
-            ) {
-                return;
-            }
-            let is_our_file = manifest_for_poll
-                .read()
-                .map(|manifest| manifest.matches_event_paths(&event.paths))
-                .unwrap_or(false);
-            if is_our_file {
+            let is_our_file = event.paths.iter().any(|p| p == &config_file2);
+            if !is_our_file { return; }
+            if matches!(event.kind, EventKind::Modify(_) | EventKind::Create(_) | EventKind::Remove(_)) {
                 let _ = tx_poll.try_send(());
             }
         },
         notify::Config::default()
-            .with_poll_interval(Duration::from_secs(3))
+            .with_poll_interval(std::time::Duration::from_secs(3))
             .with_compare_contents(true),
     ) {
-        Ok(watcher) => Some(watcher),
-        Err(e) => {
-            warn!("config watcher: poll watcher unavailable: {}", e);
-            None
+        Ok(mut w) => match w.watch(&config_path, RecursiveMode::NonRecursive) {
+            Ok(()) => {
+                if inotify_ok {
+                    info!("config watcher: poll watcher also active (Docker/NFS safe)");
+                } else {
+                    info!("config watcher: poll watcher active on {:?} (3s interval)", config_path);
                 }
-    };
-    apply_watch_manifest(
-        Option::<&mut notify::RecommendedWatcher>::None,
-        poll_watcher.as_mut(),
-        &manifest_state,
-        initial_manifest.clone(),
-    );
-    if poll_watcher.is_some() {
-        info!("config watcher: poll watcher active (Docker/NFS safe)");
+                Box::leak(Box::new(w));
             }
+            Err(e) => warn!("config watcher: poll watch failed: {}", e),
+        },
+        Err(e) => warn!("config watcher: poll watcher unavailable: {}", e),
+    };

+    // ── event loop ───────────────────────────────────────────────────────
+    tokio::spawn(async move {
         #[cfg(unix)]
         let mut sighup = {
             use tokio::signal::unix::{SignalKind, signal};
@@ -1383,43 +1131,13 @@ pub fn spawn_config_watcher(
             }
         }
         #[cfg(not(unix))]
-        if notify_rx.recv().await.is_none() {
-            break;
-        }
+        if notify_rx.recv().await.is_none() { break; }

-        // Debounce: drain extra events that arrive within a short quiet window.
-        tokio::time::sleep(HOT_RELOAD_DEBOUNCE).await;
+        // Debounce: drain extra events that arrive within 50 ms.
+        tokio::time::sleep(std::time::Duration::from_millis(50)).await;
         while notify_rx.try_recv().is_ok() {}

-        let mut next_manifest = reload_config(
-            &config_path,
-            &config_tx,
-            &log_tx,
-            detected_ip_v4,
-            detected_ip_v6,
-            &mut reload_state,
-        );
-        if next_manifest.is_none() {
-            tokio::time::sleep(HOT_RELOAD_DEBOUNCE).await;
-            while notify_rx.try_recv().is_ok() {}
-            next_manifest = reload_config(
-                &config_path,
-                &config_tx,
-                &log_tx,
-                detected_ip_v4,
-                detected_ip_v6,
-                &mut reload_state,
-            );
-        }
-
-        if let Some(next_manifest) = next_manifest {
-            apply_watch_manifest(
-                inotify_watcher.as_mut(),
-                poll_watcher.as_mut(),
-                &manifest_state,
-                next_manifest,
-            );
-        }
+        reload_config(&config_path, &config_tx, &log_tx, detected_ip_v4, detected_ip_v6);
     }
     });

@@ -1434,40 +1152,6 @@ mod tests {
         ProxyConfig::default()
     }

-    fn write_reload_config(path: &Path, ad_tag: Option<&str>, server_port: Option<u16>) {
-        let mut config = String::from(
-            r#"
-[censorship]
-tls_domain = "example.com"
-
-[access.users]
-user = "00000000000000000000000000000000"
-"#,
-        );
-
-        if ad_tag.is_some() {
-            config.push_str("\n[general]\n");
-            if let Some(tag) = ad_tag {
-                config.push_str(&format!("ad_tag = \"{tag}\"\n"));
-            }
-        }
-
-        if let Some(port) = server_port {
-            config.push_str("\n[server]\n");
-            config.push_str(&format!("port = {port}\n"));
-        }
-
-        std::fs::write(path, config).unwrap();
-    }
-
-    fn temp_config_path(prefix: &str) -> PathBuf {
-        let nonce = std::time::SystemTime::now()
-            .duration_since(std::time::UNIX_EPOCH)
-            .unwrap()
-            .as_nanos();
-        std::env::temp_dir().join(format!("{prefix}_{nonce}.toml"))
-    }
-
     #[test]
     fn overlay_applies_hot_and_preserves_non_hot() {
         let old = sample_config();
@@ -1487,10 +1171,7 @@ mod tests {
         new.server.port = old.server.port.saturating_add(1);

         let applied = overlay_hot_fields(&old, &new);
-        assert_eq!(
-            HotFields::from_config(&old),
-            HotFields::from_config(&applied)
-        );
+        assert_eq!(HotFields::from_config(&old), HotFields::from_config(&applied));
         assert_eq!(applied.server.port, old.server.port);
     }

@@ -1509,10 +1190,7 @@ mod tests {
             applied.general.me_bind_stale_mode,
             new.general.me_bind_stale_mode
         );
-        assert_ne!(
-            HotFields::from_config(&old),
-            HotFields::from_config(&applied)
-        );
+        assert_ne!(HotFields::from_config(&old), HotFields::from_config(&applied));
     }

     #[test]
@@ -1526,10 +1204,7 @@ mod tests {
             applied.general.me_keepalive_interval_secs,
             old.general.me_keepalive_interval_secs
         );
-        assert_eq!(
-            HotFields::from_config(&old),
-            HotFields::from_config(&applied)
-        );
+        assert_eq!(HotFields::from_config(&old), HotFields::from_config(&applied));
     }

     #[test]
@@ -1541,92 +1216,7 @@ mod tests {

         let applied = overlay_hot_fields(&old, &new);
         assert_eq!(applied.general.hardswap, new.general.hardswap);
-        assert_eq!(
-            applied.general.use_middle_proxy,
-            old.general.use_middle_proxy
-        );
+        assert_eq!(applied.general.use_middle_proxy, old.general.use_middle_proxy);
         assert!(!config_equal(&applied, &new));
     }

-    #[test]
-    fn reload_applies_hot_change_on_first_observed_snapshot() {
-        let initial_tag = "11111111111111111111111111111111";
-        let final_tag = "22222222222222222222222222222222";
-        let path = temp_config_path("telemt_hot_reload_stable");
-
-        write_reload_config(&path, Some(initial_tag), None);
-        let initial_cfg = Arc::new(ProxyConfig::load(&path).unwrap());
-        let initial_hash = ProxyConfig::load_with_metadata(&path)
-            .unwrap()
-            .rendered_hash;
-        let (config_tx, _config_rx) = watch::channel(initial_cfg.clone());
-        let (log_tx, _log_rx) = watch::channel(initial_cfg.general.log_level.clone());
-        let mut reload_state = ReloadState::new(Some(initial_hash));
-
-        write_reload_config(&path, Some(final_tag), None);
-        reload_config(&path, &config_tx, &log_tx, None, None, &mut reload_state).unwrap();
-        assert_eq!(
-            config_tx.borrow().general.ad_tag.as_deref(),
-            Some(final_tag)
-        );
-
-        let _ = std::fs::remove_file(path);
-    }
-
-    #[test]
-    fn reload_keeps_hot_apply_when_non_hot_fields_change() {
-        let initial_tag = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
-        let final_tag = "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb";
-        let path = temp_config_path("telemt_hot_reload_mixed");
-
-        write_reload_config(&path, Some(initial_tag), None);
-        let initial_cfg = Arc::new(ProxyConfig::load(&path).unwrap());
-        let initial_hash = ProxyConfig::load_with_metadata(&path)
-            .unwrap()
-            .rendered_hash;
-        let (config_tx, _config_rx) = watch::channel(initial_cfg.clone());
-        let (log_tx, _log_rx) = watch::channel(initial_cfg.general.log_level.clone());
-        let mut reload_state = ReloadState::new(Some(initial_hash));
-
-        write_reload_config(&path, Some(final_tag), Some(initial_cfg.server.port + 1));
-        reload_config(&path, &config_tx, &log_tx, None, None, &mut reload_state).unwrap();
-
-        let applied = config_tx.borrow().clone();
-        assert_eq!(applied.general.ad_tag.as_deref(), Some(final_tag));
-        assert_eq!(applied.server.port, initial_cfg.server.port);
-
-        let _ = std::fs::remove_file(path);
-    }
-
-    #[test]
-    fn reload_recovers_after_parse_error_on_next_attempt() {
-        let initial_tag = "cccccccccccccccccccccccccccccccc";
-        let final_tag = "dddddddddddddddddddddddddddddddd";
-        let path = temp_config_path("telemt_hot_reload_parse_recovery");
-
-        write_reload_config(&path, Some(initial_tag), None);
-        let initial_cfg = Arc::new(ProxyConfig::load(&path).unwrap());
-        let initial_hash = ProxyConfig::load_with_metadata(&path)
-            .unwrap()
-            .rendered_hash;
-        let (config_tx, _config_rx) = watch::channel(initial_cfg.clone());
-        let (log_tx, _log_rx) = watch::channel(initial_cfg.general.log_level.clone());
-        let mut reload_state = ReloadState::new(Some(initial_hash));
-
-        std::fs::write(&path, "[access.users\nuser = \"broken\"\n").unwrap();
-        assert!(reload_config(&path, &config_tx, &log_tx, None, None, &mut reload_state).is_none());
-        assert_eq!(
-            config_tx.borrow().general.ad_tag.as_deref(),
-            Some(initial_tag)
-        );
-
-        write_reload_config(&path, Some(final_tag), None);
-        reload_config(&path, &config_tx, &log_tx, None, None, &mut reload_state).unwrap();
-        assert_eq!(
-            config_tx.borrow().general.ad_tag.as_deref(),
-            Some(final_tag)
-        );
-
-        let _ = std::fs::remove_file(path);
-    }
 }
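The watcher loop in the hot_reload.rs diff above coalesces bursts of filesystem events: block for one notification, sleep a short quiet window, then drain whatever queued up so a single save triggers a single reload. The sketch below illustrates that debounce-and-drain pattern in isolation with a plain std mpsc channel; it is a simplified illustration, not telemt's code, and `reloads_for_bursts` is a hypothetical name.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Coalesce a burst of change notifications into a single reload: block for
// one event, wait out a short quiet window, then drain everything queued.
fn reloads_for_bursts(events: usize, quiet: Duration) -> usize {
    let (tx, rx) = mpsc::channel::<()>();
    let producer = thread::spawn(move || {
        for _ in 0..events {
            tx.send(()).unwrap(); // simulated burst of filesystem events
        }
    }); // sender is dropped when the thread finishes
    producer.join().unwrap();

    let mut reloads = 0;
    while rx.recv_timeout(quiet).is_ok() {
        thread::sleep(quiet); // debounce window
        while rx.try_recv().is_ok() {} // drain the rest of the burst
        reloads += 1; // one reload per burst, not per event
    }
    reloads
}

fn main() {
    // Ten rapid events collapse into a single reload.
    println!("{}", reloads_for_bursts(10, Duration::from_millis(50)));
}
```

The same idea appears in the diff as `tokio::time::sleep(...)` followed by `while notify_rx.try_recv().is_ok() {}`, just in async form.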
src/config/load.rs — 1135 changed lines. File diff suppressed because it is too large.
@@ -1,9 +1,9 @@
 //! Configuration.

 pub(crate) mod defaults;
-pub mod hot_reload;
-mod load;
 mod types;
+mod load;
+pub mod hot_reload;

 pub use load::ProxyConfig;
 pub use types::*;

@@ -1,102 +0,0 @@
use super::*;
use std::fs;
use std::path::PathBuf;
use std::time::{SystemTime, UNIX_EPOCH};

fn write_temp_config(contents: &str) -> PathBuf {
    let nonce = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system time must be after unix epoch")
        .as_nanos();
    let path = std::env::temp_dir().join(format!("telemt-idle-policy-{nonce}.toml"));
    fs::write(&path, contents).expect("temp config write must succeed");
    path
}

fn remove_temp_config(path: &PathBuf) {
    let _ = fs::remove_file(path);
}

#[test]
fn default_timeouts_enable_apple_compatible_handshake_profile() {
    let cfg = ProxyConfig::default();
    assert_eq!(cfg.timeouts.client_first_byte_idle_secs, 300);
    assert_eq!(cfg.timeouts.client_handshake, 60);
}

#[test]
fn load_accepts_zero_first_byte_idle_timeout_as_legacy_opt_out() {
    let path = write_temp_config(
        r#"
[timeouts]
client_first_byte_idle_secs = 0
"#,
    );

    let cfg = ProxyConfig::load(&path).expect("config with zero first-byte idle timeout must load");
    assert_eq!(cfg.timeouts.client_first_byte_idle_secs, 0);

    remove_temp_config(&path);
}

#[test]
fn load_rejects_relay_hard_idle_smaller_than_soft_idle_with_clear_error() {
    let path = write_temp_config(
        r#"
[timeouts]
relay_client_idle_soft_secs = 120
relay_client_idle_hard_secs = 60
"#,
    );

    let err = ProxyConfig::load(&path).expect_err("config with hard<soft must fail");
    let msg = err.to_string();
    assert!(
        msg.contains(
            "timeouts.relay_client_idle_hard_secs must be >= timeouts.relay_client_idle_soft_secs"
        ),
        "error must explain the violated hard>=soft invariant, got: {msg}"
    );

    remove_temp_config(&path);
}

#[test]
fn load_rejects_relay_grace_larger_than_hard_idle_with_clear_error() {
    let path = write_temp_config(
        r#"
[timeouts]
relay_client_idle_soft_secs = 60
relay_client_idle_hard_secs = 120
relay_idle_grace_after_downstream_activity_secs = 121
"#,
    );

    let err = ProxyConfig::load(&path).expect_err("config with grace>hard must fail");
    let msg = err.to_string();
    assert!(
        msg.contains("timeouts.relay_idle_grace_after_downstream_activity_secs must be <= timeouts.relay_client_idle_hard_secs"),
        "error must explain the violated grace<=hard invariant, got: {msg}"
    );

    remove_temp_config(&path);
}

#[test]
fn load_rejects_zero_handshake_timeout_with_clear_error() {
    let path = write_temp_config(
        r#"
[timeouts]
client_handshake = 0
"#,
    );

    let err = ProxyConfig::load(&path).expect_err("config with zero handshake timeout must fail");
    let msg = err.to_string();
    assert!(
        msg.contains("timeouts.client_handshake must be > 0"),
        "error must explain that handshake timeout must be positive, got: {msg}"
    );

    remove_temp_config(&path);
}
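The deleted test file above pins three timeout invariants: `client_handshake > 0`, `hard >= soft`, and `grace <= hard`. A minimal standalone sketch of that kind of check, using hypothetical struct and function names rather than the crate's real `ProxyConfig` validation API:

```rust
// Sketch only: names are illustrative, not telemt's real types.
#[derive(Debug, Clone, Copy)]
struct TimeoutsSketch {
    client_handshake: u64,
    relay_client_idle_soft_secs: u64,
    relay_client_idle_hard_secs: u64,
    relay_idle_grace_after_downstream_activity_secs: u64,
}

// Reject the same three invalid shapes the deleted tests exercise.
fn validate_timeouts(t: &TimeoutsSketch) -> Result<(), String> {
    if t.client_handshake == 0 {
        return Err("timeouts.client_handshake must be > 0".into());
    }
    if t.relay_client_idle_hard_secs < t.relay_client_idle_soft_secs {
        return Err(
            "timeouts.relay_client_idle_hard_secs must be >= timeouts.relay_client_idle_soft_secs"
                .into(),
        );
    }
    if t.relay_idle_grace_after_downstream_activity_secs > t.relay_client_idle_hard_secs {
        return Err(
            "timeouts.relay_idle_grace_after_downstream_activity_secs must be <= timeouts.relay_client_idle_hard_secs"
                .into(),
        );
    }
    Ok(())
}

fn main() {
    let ok = TimeoutsSketch {
        client_handshake: 60,
        relay_client_idle_soft_secs: 60,
        relay_client_idle_hard_secs: 120,
        relay_idle_grace_after_downstream_activity_secs: 120,
    };
    assert!(validate_timeouts(&ok).is_ok());

    // hard < soft must fail, matching the deleted test's expectation
    let hard_below_soft = TimeoutsSketch {
        relay_client_idle_hard_secs: 59,
        ..ok
    };
    assert!(validate_timeouts(&hard_below_soft).is_err());
    println!("timeout invariants hold");
}
```

Validating at load time, as these tests demand, turns a silently broken idle policy into an immediate startup error with a message naming the violated field pair.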
@@ -1,76 +0,0 @@
use super::*;
use std::fs;
use std::path::PathBuf;
use std::time::{SystemTime, UNIX_EPOCH};

fn write_temp_config(contents: &str) -> PathBuf {
    let nonce = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system time must be after unix epoch")
        .as_nanos();
    let path = std::env::temp_dir().join(format!(
        "telemt-load-mask-prefetch-timeout-security-{nonce}.toml"
    ));
    fs::write(&path, contents).expect("temp config write must succeed");
    path
}

fn remove_temp_config(path: &PathBuf) {
    let _ = fs::remove_file(path);
}

#[test]
fn load_rejects_mask_classifier_prefetch_timeout_below_min_bound() {
    let path = write_temp_config(
        r#"
[censorship]
mask_classifier_prefetch_timeout_ms = 4
"#,
    );

    let err = ProxyConfig::load(&path)
        .expect_err("prefetch timeout below minimum security bound must be rejected");
    let msg = err.to_string();
    assert!(
        msg.contains("censorship.mask_classifier_prefetch_timeout_ms must be within [5, 50]"),
        "error must explain timeout bound invariant, got: {msg}"
    );

    remove_temp_config(&path);
}

#[test]
fn load_rejects_mask_classifier_prefetch_timeout_above_max_bound() {
    let path = write_temp_config(
        r#"
[censorship]
mask_classifier_prefetch_timeout_ms = 51
"#,
    );

    let err = ProxyConfig::load(&path)
        .expect_err("prefetch timeout above max security bound must be rejected");
    let msg = err.to_string();
    assert!(
        msg.contains("censorship.mask_classifier_prefetch_timeout_ms must be within [5, 50]"),
        "error must explain timeout bound invariant, got: {msg}"
    );

    remove_temp_config(&path);
}

#[test]
fn load_accepts_mask_classifier_prefetch_timeout_within_bounds() {
    let path = write_temp_config(
        r#"
[censorship]
mask_classifier_prefetch_timeout_ms = 20
"#,
    );

    let cfg =
        ProxyConfig::load(&path).expect("prefetch timeout within security bounds must be accepted");
    assert_eq!(cfg.censorship.mask_classifier_prefetch_timeout_ms, 20);

    remove_temp_config(&path);
}
@@ -1,292 +0,0 @@
use super::*;
use std::fs;
use std::path::PathBuf;
use std::time::{SystemTime, UNIX_EPOCH};

fn write_temp_config(contents: &str) -> PathBuf {
    let nonce = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system time must be after unix epoch")
        .as_nanos();
    let path = std::env::temp_dir().join(format!("telemt-load-mask-shape-security-{nonce}.toml"));
    fs::write(&path, contents).expect("temp config write must succeed");
    path
}

fn remove_temp_config(path: &PathBuf) {
    let _ = fs::remove_file(path);
}

#[test]
fn load_rejects_zero_mask_shape_bucket_floor_bytes() {
    let path = write_temp_config(
        r#"
[censorship]
mask_shape_bucket_floor_bytes = 0
mask_shape_bucket_cap_bytes = 4096
"#,
    );

    let err =
        ProxyConfig::load(&path).expect_err("zero mask_shape_bucket_floor_bytes must be rejected");
    let msg = err.to_string();
    assert!(
        msg.contains("censorship.mask_shape_bucket_floor_bytes must be > 0"),
        "error must explain floor>0 invariant, got: {msg}"
    );

    remove_temp_config(&path);
}

#[test]
fn load_rejects_mask_shape_bucket_cap_less_than_floor() {
    let path = write_temp_config(
        r#"
[censorship]
mask_shape_bucket_floor_bytes = 1024
mask_shape_bucket_cap_bytes = 512
"#,
    );

    let err =
        ProxyConfig::load(&path).expect_err("mask_shape_bucket_cap_bytes < floor must be rejected");
    let msg = err.to_string();
    assert!(
        msg.contains(
            "censorship.mask_shape_bucket_cap_bytes must be >= censorship.mask_shape_bucket_floor_bytes"
        ),
        "error must explain cap>=floor invariant, got: {msg}"
    );

    remove_temp_config(&path);
}

#[test]
fn load_accepts_mask_shape_bucket_cap_equal_to_floor() {
    let path = write_temp_config(
        r#"
[censorship]
mask_shape_hardening = true
mask_shape_bucket_floor_bytes = 1024
mask_shape_bucket_cap_bytes = 1024
"#,
    );

    let cfg = ProxyConfig::load(&path).expect("equal cap and floor must be accepted");
    assert!(cfg.censorship.mask_shape_hardening);
    assert_eq!(cfg.censorship.mask_shape_bucket_floor_bytes, 1024);
    assert_eq!(cfg.censorship.mask_shape_bucket_cap_bytes, 1024);

    remove_temp_config(&path);
}

#[test]
fn load_rejects_above_cap_blur_when_shape_hardening_disabled() {
    let path = write_temp_config(
        r#"
[censorship]
mask_shape_hardening = false
mask_shape_above_cap_blur = true
mask_shape_above_cap_blur_max_bytes = 64
"#,
    );

    let err =
        ProxyConfig::load(&path).expect_err("above-cap blur must require shape hardening enabled");
    let msg = err.to_string();
    assert!(
        msg.contains(
            "censorship.mask_shape_above_cap_blur requires censorship.mask_shape_hardening = true"
        ),
        "error must explain blur prerequisite, got: {msg}"
    );

    remove_temp_config(&path);
}

#[test]
fn load_rejects_above_cap_blur_with_zero_max_bytes() {
    let path = write_temp_config(
        r#"
[censorship]
mask_shape_hardening = true
mask_shape_above_cap_blur = true
mask_shape_above_cap_blur_max_bytes = 0
"#,
    );

    let err =
        ProxyConfig::load(&path).expect_err("above-cap blur max bytes must be > 0 when enabled");
    let msg = err.to_string();
    assert!(
        msg.contains("censorship.mask_shape_above_cap_blur_max_bytes must be > 0 when censorship.mask_shape_above_cap_blur is enabled"),
        "error must explain blur max bytes invariant, got: {msg}"
    );

    remove_temp_config(&path);
}

#[test]
fn load_rejects_timing_normalization_floor_zero_when_enabled() {
    let path = write_temp_config(
        r#"
[censorship]
mask_timing_normalization_enabled = true
mask_timing_normalization_floor_ms = 0
mask_timing_normalization_ceiling_ms = 200
"#,
    );

    let err =
        ProxyConfig::load(&path).expect_err("timing normalization floor must be > 0 when enabled");
    let msg = err.to_string();
    assert!(
        msg.contains("censorship.mask_timing_normalization_floor_ms must be > 0 when censorship.mask_timing_normalization_enabled is true"),
        "error must explain timing floor invariant, got: {msg}"
    );

    remove_temp_config(&path);
}

#[test]
fn load_rejects_timing_normalization_ceiling_below_floor() {
    let path = write_temp_config(
        r#"
[censorship]
mask_timing_normalization_enabled = true
mask_timing_normalization_floor_ms = 220
mask_timing_normalization_ceiling_ms = 200
"#,
    );

    let err = ProxyConfig::load(&path).expect_err("timing normalization ceiling must be >= floor");
    let msg = err.to_string();
    assert!(
        msg.contains("censorship.mask_timing_normalization_ceiling_ms must be >= censorship.mask_timing_normalization_floor_ms"),
        "error must explain timing ceiling/floor invariant, got: {msg}"
    );

    remove_temp_config(&path);
}

#[test]
fn load_accepts_valid_timing_normalization_and_above_cap_blur_config() {
    let path = write_temp_config(
        r#"
[censorship]
mask_shape_hardening = true
mask_shape_above_cap_blur = true
mask_shape_above_cap_blur_max_bytes = 128
mask_timing_normalization_enabled = true
mask_timing_normalization_floor_ms = 150
mask_timing_normalization_ceiling_ms = 240
"#,
    );

    let cfg = ProxyConfig::load(&path)
        .expect("valid blur and timing normalization settings must be accepted");
    assert!(cfg.censorship.mask_shape_hardening);
    assert!(cfg.censorship.mask_shape_above_cap_blur);
    assert_eq!(cfg.censorship.mask_shape_above_cap_blur_max_bytes, 128);
    assert!(cfg.censorship.mask_timing_normalization_enabled);
    assert_eq!(cfg.censorship.mask_timing_normalization_floor_ms, 150);
    assert_eq!(cfg.censorship.mask_timing_normalization_ceiling_ms, 240);

    remove_temp_config(&path);
}

#[test]
fn load_rejects_aggressive_shape_mode_when_shape_hardening_disabled() {
    let path = write_temp_config(
        r#"
[censorship]
mask_shape_hardening = false
mask_shape_hardening_aggressive_mode = true
"#,
    );

    let err = ProxyConfig::load(&path)
        .expect_err("aggressive shape hardening mode must require shape hardening enabled");
    let msg = err.to_string();
    assert!(
        msg.contains("censorship.mask_shape_hardening_aggressive_mode requires censorship.mask_shape_hardening = true"),
        "error must explain aggressive-mode prerequisite, got: {msg}"
    );

    remove_temp_config(&path);
}

#[test]
fn load_accepts_aggressive_shape_mode_when_shape_hardening_enabled() {
    let path = write_temp_config(
        r#"
[censorship]
mask_shape_hardening = true
mask_shape_hardening_aggressive_mode = true
mask_shape_above_cap_blur = true
mask_shape_above_cap_blur_max_bytes = 8
"#,
    );

    let cfg = ProxyConfig::load(&path)
        .expect("aggressive shape hardening mode should be accepted when prerequisites are met");
    assert!(cfg.censorship.mask_shape_hardening);
    assert!(cfg.censorship.mask_shape_hardening_aggressive_mode);
    assert!(cfg.censorship.mask_shape_above_cap_blur);

    remove_temp_config(&path);
}

#[test]
fn load_rejects_zero_mask_relay_max_bytes() {
    let path = write_temp_config(
        r#"
[censorship]
mask_relay_max_bytes = 0
"#,
    );

    let err = ProxyConfig::load(&path).expect_err("mask_relay_max_bytes must be > 0");
    let msg = err.to_string();
    assert!(
        msg.contains("censorship.mask_relay_max_bytes must be > 0"),
        "error must explain non-zero relay cap invariant, got: {msg}"
    );

    remove_temp_config(&path);
}

#[test]
fn load_rejects_mask_relay_max_bytes_above_upper_bound() {
    let path = write_temp_config(
        r#"
[censorship]
mask_relay_max_bytes = 67108865
"#,
    );

    let err =
        ProxyConfig::load(&path).expect_err("mask_relay_max_bytes above hard cap must be rejected");
    let msg = err.to_string();
    assert!(
        msg.contains("censorship.mask_relay_max_bytes must be <= 67108864"),
        "error must explain relay cap upper bound invariant, got: {msg}"
    );

    remove_temp_config(&path);
}

#[test]
fn load_accepts_valid_mask_relay_max_bytes() {
    let path = write_temp_config(
        r#"
[censorship]
mask_relay_max_bytes = 8388608
"#,
    );

    let cfg = ProxyConfig::load(&path).expect("valid mask_relay_max_bytes must be accepted");
    assert_eq!(cfg.censorship.mask_relay_max_bytes, 8_388_608);

    remove_temp_config(&path);
}
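The deleted mask-shape tests above enforce a small web of interdependent bounds (floor > 0, cap >= floor, blur requires hardening, blur cap > 0, relay cap in (0, 64 MiB]). A compact sketch of that validation, with hypothetical names standing in for the crate's real config types:

```rust
// Sketch only: a stand-in struct, not telemt's real CensorshipConfig.
struct CensorshipSketch {
    mask_shape_hardening: bool,
    mask_shape_bucket_floor_bytes: u64,
    mask_shape_bucket_cap_bytes: u64,
    mask_shape_above_cap_blur: bool,
    mask_shape_above_cap_blur_max_bytes: u64,
    mask_relay_max_bytes: u64,
}

// Mirror the invariants the deleted tests assert, one early return each.
fn validate_censorship(c: &CensorshipSketch) -> Result<(), String> {
    if c.mask_shape_bucket_floor_bytes == 0 {
        return Err("censorship.mask_shape_bucket_floor_bytes must be > 0".into());
    }
    if c.mask_shape_bucket_cap_bytes < c.mask_shape_bucket_floor_bytes {
        return Err("censorship.mask_shape_bucket_cap_bytes must be >= floor".into());
    }
    if c.mask_shape_above_cap_blur && !c.mask_shape_hardening {
        return Err("above-cap blur requires mask_shape_hardening = true".into());
    }
    if c.mask_shape_above_cap_blur && c.mask_shape_above_cap_blur_max_bytes == 0 {
        return Err("blur max bytes must be > 0 when blur is enabled".into());
    }
    if c.mask_relay_max_bytes == 0 || c.mask_relay_max_bytes > 67_108_864 {
        return Err("mask_relay_max_bytes must be in 1..=67108864".into());
    }
    Ok(())
}

fn main() {
    let valid = CensorshipSketch {
        mask_shape_hardening: true,
        mask_shape_bucket_floor_bytes: 1024,
        mask_shape_bucket_cap_bytes: 1024, // equal cap and floor is allowed
        mask_shape_above_cap_blur: true,
        mask_shape_above_cap_blur_max_bytes: 128,
        mask_relay_max_bytes: 8_388_608,
    };
    assert!(validate_censorship(&valid).is_ok());

    // blur without hardening must be rejected
    let blur_without_hardening = CensorshipSketch {
        mask_shape_hardening: false,
        ..valid
    };
    assert!(validate_censorship(&blur_without_hardening).is_err());
    println!("censorship bounds hold");
}
```

Note the ordering: each check assumes the previous ones passed, so error messages stay specific to the first violated invariant.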
@@ -1,88 +0,0 @@
use super::*;
use std::fs;
use std::path::PathBuf;
use std::time::{SystemTime, UNIX_EPOCH};

fn write_temp_config(contents: &str) -> PathBuf {
    let nonce = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system time must be after unix epoch")
        .as_nanos();
    let path = std::env::temp_dir().join(format!("telemt-load-security-{nonce}.toml"));
    fs::write(&path, contents).expect("temp config write must succeed");
    path
}

fn remove_temp_config(path: &PathBuf) {
    let _ = fs::remove_file(path);
}

#[test]
fn load_rejects_server_hello_delay_equal_to_handshake_timeout_budget() {
    let path = write_temp_config(
        r#"
[timeouts]
client_handshake = 1

[censorship]
server_hello_delay_max_ms = 1000
"#,
    );

    let err =
        ProxyConfig::load(&path).expect_err("delay equal to handshake timeout must be rejected");
    let msg = err.to_string();
    assert!(
        msg.contains(
            "censorship.server_hello_delay_max_ms must be < timeouts.client_handshake * 1000"
        ),
        "error must explain delay<timeout invariant, got: {msg}"
    );

    remove_temp_config(&path);
}

#[test]
fn load_rejects_server_hello_delay_larger_than_handshake_timeout_budget() {
    let path = write_temp_config(
        r#"
[timeouts]
client_handshake = 1

[censorship]
server_hello_delay_max_ms = 1500
"#,
    );

    let err =
        ProxyConfig::load(&path).expect_err("delay larger than handshake timeout must be rejected");
    let msg = err.to_string();
    assert!(
        msg.contains(
            "censorship.server_hello_delay_max_ms must be < timeouts.client_handshake * 1000"
        ),
        "error must explain delay<timeout invariant, got: {msg}"
    );

    remove_temp_config(&path);
}

#[test]
fn load_accepts_server_hello_delay_strictly_below_handshake_timeout_budget() {
    let path = write_temp_config(
        r#"
[timeouts]
client_handshake = 1

[censorship]
server_hello_delay_max_ms = 999
"#,
    );

    let cfg =
        ProxyConfig::load(&path).expect("delay below handshake timeout budget must be accepted");
    assert_eq!(cfg.timeouts.client_handshake, 1);
    assert_eq!(cfg.censorship.server_hello_delay_max_ms, 999);

    remove_temp_config(&path);
}
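The tests above assert a cross-section budget rule: the maximum ServerHello delay must fit strictly inside the client handshake timeout, otherwise the deliberate delay alone could trip the timeout. A minimal sketch of that check (function name hypothetical):

```rust
// Sketch only: hypothetical helper, not the crate's real validator.
// The delay budget is the handshake timeout converted to milliseconds;
// the configured maximum delay must be strictly below it.
fn validate_server_hello_delay(client_handshake_secs: u64, delay_max_ms: u64) -> Result<(), String> {
    let budget_ms = client_handshake_secs.saturating_mul(1000);
    if delay_max_ms >= budget_ms {
        return Err(
            "censorship.server_hello_delay_max_ms must be < timeouts.client_handshake * 1000"
                .into(),
        );
    }
    Ok(())
}

fn main() {
    assert!(validate_server_hello_delay(1, 999).is_ok()); // strictly below budget
    assert!(validate_server_hello_delay(1, 1000).is_err()); // equal to budget
    assert!(validate_server_hello_delay(1, 1500).is_err()); // above budget
    println!("delay budget rule holds");
}
```

`saturating_mul` keeps the sketch safe for absurdly large handshake values instead of panicking on overflow in debug builds.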
@@ -3,7 +3,6 @@ use ipnetwork::IpNetwork;
 use serde::{Deserialize, Serialize};
 use std::collections::HashMap;
 use std::net::IpAddr;
-use std::path::PathBuf;
 
 use super::defaults::*;
 
@@ -135,8 +134,8 @@ impl MeSocksKdfPolicy {
 #[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize, Default)]
 #[serde(rename_all = "lowercase")]
 pub enum MeBindStaleMode {
-    #[default]
     Never,
+    #[default]
     Ttl,
     Always,
 }
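The hunk above only moves the `#[default]` marker, but that is a behavior change: the derived `Default` for the enum, and therefore any `#[serde(default)]` field of this type that is omitted from the config file, now resolves to `Ttl` instead of `Never`. A std-only illustration with a stand-in enum (not the real type):

```rust
// Stand-in for MeBindStaleMode to show the effect of moving #[default].
// With the marker on Ttl, the derived Default now yields Ttl, so a config
// file that omits the field gets TTL-based stale-bind handling.
#[allow(dead_code)]
#[derive(Debug, PartialEq, Default)]
enum BindStaleModeSketch {
    Never,
    #[default]
    Ttl,
    Always,
}

fn main() {
    assert_eq!(BindStaleModeSketch::default(), BindStaleModeSketch::Ttl);
    println!("default variant is now Ttl");
}
```

Since serde falls back to `Default::default()` for a `#[serde(default)]` field, existing deployments that never set `me_bind_stale_mode` silently pick up the new policy on upgrade.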
@@ -357,9 +356,6 @@ impl Default for NetworkConfig {
 
 #[derive(Debug, Clone, Serialize, Deserialize)]
 pub struct GeneralConfig {
-    #[serde(default)]
-    pub data_path: Option<PathBuf>,
-
     #[serde(default)]
     pub modes: ProxyModes,
 
@@ -429,11 +425,6 @@ pub struct GeneralConfig {
     #[serde(default = "default_me2dc_fallback")]
     pub me2dc_fallback: bool,
 
-    /// Fast ME->Direct fallback mode for new sessions.
-    /// Active only when both `use_middle_proxy=true` and `me2dc_fallback=true`.
-    #[serde(default = "default_me2dc_fast")]
-    pub me2dc_fast: bool,
-
     /// Enable ME keepalive padding frames.
     #[serde(default = "default_true")]
     pub me_keepalive_enabled: bool,
 
@@ -467,13 +458,8 @@ pub struct GeneralConfig {
     #[serde(default = "default_me_c2me_channel_capacity")]
     pub me_c2me_channel_capacity: usize,
 
-    /// Maximum wait in milliseconds for enqueueing C2ME commands when the queue is full.
-    /// `0` keeps legacy unbounded wait behavior.
-    #[serde(default = "default_me_c2me_send_timeout_ms")]
-    pub me_c2me_send_timeout_ms: u64,
-
     /// Bounded wait in milliseconds for routing ME DATA to per-connection queue.
-    /// `0` keeps non-blocking routing; values >0 enable bounded wait for compatibility.
+    /// `0` keeps legacy no-wait behavior.
     #[serde(default = "default_me_reader_route_data_wait_ms")]
     pub me_reader_route_data_wait_ms: u64,
 
@@ -494,14 +480,6 @@ pub struct GeneralConfig {
     #[serde(default = "default_me_d2c_ack_flush_immediate")]
     pub me_d2c_ack_flush_immediate: bool,
 
-    /// Additional bytes above strict per-user quota allowed in hot-path soft mode.
-    #[serde(default = "default_me_quota_soft_overshoot_bytes")]
-    pub me_quota_soft_overshoot_bytes: u64,
-
-    /// Shrink threshold for reusable ME->Client frame assembly buffer.
-    #[serde(default = "default_me_d2c_frame_buf_shrink_threshold_bytes")]
-    pub me_d2c_frame_buf_shrink_threshold_bytes: usize,
-
     /// Copy buffer size for client->DC direction in direct relay.
     #[serde(default = "default_direct_relay_copy_buf_c2s_bytes")]
     pub direct_relay_copy_buf_c2s_bytes: usize,
@@ -663,10 +641,6 @@ pub struct GeneralConfig {
     #[serde(default = "default_upstream_connect_budget_ms")]
     pub upstream_connect_budget_ms: u64,
 
-    /// Per-attempt TCP connect timeout to Telegram DC (seconds).
-    #[serde(default = "default_connect_timeout")]
-    pub tg_connect: u64,
-
     /// Consecutive failed requests before upstream is marked unhealthy.
     #[serde(default = "default_upstream_unhealthy_fail_threshold")]
     pub upstream_unhealthy_fail_threshold: u32,
@@ -738,15 +712,6 @@ pub struct GeneralConfig {
     #[serde(default = "default_me_route_no_writer_wait_ms")]
     pub me_route_no_writer_wait_ms: u64,
 
-    /// Maximum cumulative wait in milliseconds for hybrid no-writer mode before failfast.
-    #[serde(default = "default_me_route_hybrid_max_wait_ms")]
-    pub me_route_hybrid_max_wait_ms: u64,
-
-    /// Maximum wait in milliseconds for blocking ME writer channel send fallback.
-    /// `0` keeps legacy unbounded wait behavior.
-    #[serde(default = "default_me_route_blocking_send_timeout_ms")]
-    pub me_route_blocking_send_timeout_ms: u64,
-
     /// Number of inline recovery attempts in legacy mode.
     #[serde(default = "default_me_route_inline_recovery_attempts")]
     pub me_route_inline_recovery_attempts: u32,
@@ -829,35 +794,6 @@ pub struct GeneralConfig {
     #[serde(default = "default_me_pool_drain_ttl_secs")]
     pub me_pool_drain_ttl_secs: u64,
 
-    /// Force-remove any draining writer on the next cleanup tick, regardless of age/deadline.
-    #[serde(default = "default_me_instadrain")]
-    pub me_instadrain: bool,
-
-    /// Maximum allowed number of draining ME writers before oldest ones are force-closed in batches.
-    /// Set to 0 to disable threshold-based draining cleanup and keep timeout-only behavior.
-    #[serde(default = "default_me_pool_drain_threshold")]
-    pub me_pool_drain_threshold: u64,
-
-    /// Enable staged client eviction for draining ME writers that remain non-empty past TTL.
-    #[serde(default = "default_me_pool_drain_soft_evict_enabled")]
-    pub me_pool_drain_soft_evict_enabled: bool,
-
-    /// Extra grace in seconds after drain TTL before soft-eviction stage starts.
-    #[serde(default = "default_me_pool_drain_soft_evict_grace_secs")]
-    pub me_pool_drain_soft_evict_grace_secs: u64,
-
-    /// Maximum number of client sessions to evict from one draining writer per health tick.
-    #[serde(default = "default_me_pool_drain_soft_evict_per_writer")]
-    pub me_pool_drain_soft_evict_per_writer: u8,
-
-    /// Soft-eviction budget per CPU core for one health tick.
-    #[serde(default = "default_me_pool_drain_soft_evict_budget_per_core")]
-    pub me_pool_drain_soft_evict_budget_per_core: u16,
-
-    /// Cooldown for repetitive soft-eviction on the same writer in milliseconds.
-    #[serde(default = "default_me_pool_drain_soft_evict_cooldown_ms")]
-    pub me_pool_drain_soft_evict_cooldown_ms: u64,
-
     /// Policy for new binds on stale draining writers.
     #[serde(default)]
     pub me_bind_stale_mode: MeBindStaleMode,
@@ -872,7 +808,7 @@ pub struct GeneralConfig {
     pub me_pool_min_fresh_ratio: f32,
 
     /// Drain timeout in seconds for stale ME writers after endpoint map changes.
-    /// Set to 0 to use the runtime safety fallback timeout.
+    /// Set to 0 to keep stale writers draining indefinitely (no force-close).
     #[serde(default = "default_me_reinit_drain_timeout_secs")]
     pub me_reinit_drain_timeout_secs: u64,
 
@@ -930,7 +866,6 @@ pub struct GeneralConfig {
 impl Default for GeneralConfig {
     fn default() -> Self {
         Self {
-            data_path: None,
             modes: ProxyModes::default(),
             prefer_ipv6: false,
             fast_mode: default_true(),
@@ -948,7 +883,6 @@ impl Default for GeneralConfig {
             middle_proxy_warm_standby: default_middle_proxy_warm_standby(),
             me_init_retry_attempts: default_me_init_retry_attempts(),
             me2dc_fallback: default_me2dc_fallback(),
-            me2dc_fast: default_me2dc_fast(),
             me_keepalive_enabled: default_true(),
             me_keepalive_interval_secs: default_keepalive_interval(),
             me_keepalive_jitter_secs: default_keepalive_jitter(),
@@ -957,15 +891,11 @@ impl Default for GeneralConfig {
             me_writer_cmd_channel_capacity: default_me_writer_cmd_channel_capacity(),
             me_route_channel_capacity: default_me_route_channel_capacity(),
             me_c2me_channel_capacity: default_me_c2me_channel_capacity(),
-            me_c2me_send_timeout_ms: default_me_c2me_send_timeout_ms(),
             me_reader_route_data_wait_ms: default_me_reader_route_data_wait_ms(),
             me_d2c_flush_batch_max_frames: default_me_d2c_flush_batch_max_frames(),
             me_d2c_flush_batch_max_bytes: default_me_d2c_flush_batch_max_bytes(),
             me_d2c_flush_batch_max_delay_us: default_me_d2c_flush_batch_max_delay_us(),
             me_d2c_ack_flush_immediate: default_me_d2c_ack_flush_immediate(),
-            me_quota_soft_overshoot_bytes: default_me_quota_soft_overshoot_bytes(),
-            me_d2c_frame_buf_shrink_threshold_bytes:
-                default_me_d2c_frame_buf_shrink_threshold_bytes(),
             direct_relay_copy_buf_c2s_bytes: default_direct_relay_copy_buf_c2s_bytes(),
             direct_relay_copy_buf_s2c_bytes: default_direct_relay_copy_buf_s2c_bytes(),
             me_warmup_stagger_enabled: default_true(),
@@ -976,42 +906,27 @@ impl Default for GeneralConfig {
            me_reconnect_backoff_cap_ms: default_reconnect_backoff_cap_ms(),
            me_reconnect_fast_retry_count: default_me_reconnect_fast_retry_count(),
            me_single_endpoint_shadow_writers: default_me_single_endpoint_shadow_writers(),
            me_single_endpoint_outage_mode_enabled: default_me_single_endpoint_outage_mode_enabled(
            ),
            me_single_endpoint_outage_disable_quarantine:
                default_me_single_endpoint_outage_disable_quarantine(),
            me_single_endpoint_outage_backoff_min_ms:
                default_me_single_endpoint_outage_backoff_min_ms(),
            me_single_endpoint_outage_backoff_max_ms:
                default_me_single_endpoint_outage_backoff_max_ms(),
            me_single_endpoint_shadow_rotate_every_secs:
                default_me_single_endpoint_shadow_rotate_every_secs(),
            me_floor_mode: MeFloorMode::default(),
            me_adaptive_floor_idle_secs: default_me_adaptive_floor_idle_secs(),
            me_adaptive_floor_min_writers_single_endpoint:
                default_me_adaptive_floor_min_writers_single_endpoint(),
            me_adaptive_floor_min_writers_multi_endpoint:
                default_me_adaptive_floor_min_writers_multi_endpoint(),
            me_adaptive_floor_recover_grace_secs: default_me_adaptive_floor_recover_grace_secs(),
            me_adaptive_floor_writers_per_core_total:
                default_me_adaptive_floor_writers_per_core_total(),
            me_adaptive_floor_cpu_cores_override: default_me_adaptive_floor_cpu_cores_override(),
            me_adaptive_floor_max_extra_writers_single_per_core:
                default_me_adaptive_floor_max_extra_writers_single_per_core(),
            me_adaptive_floor_max_extra_writers_multi_per_core:
                default_me_adaptive_floor_max_extra_writers_multi_per_core(),
            me_adaptive_floor_max_active_writers_per_core:
                default_me_adaptive_floor_max_active_writers_per_core(),
            me_adaptive_floor_max_warm_writers_per_core:
                default_me_adaptive_floor_max_warm_writers_per_core(),
            me_adaptive_floor_max_active_writers_global:
                default_me_adaptive_floor_max_active_writers_global(),
            me_adaptive_floor_max_warm_writers_global:
                default_me_adaptive_floor_max_warm_writers_global(),
            upstream_connect_retry_attempts: default_upstream_connect_retry_attempts(),
            upstream_connect_retry_backoff_ms: default_upstream_connect_retry_backoff_ms(),
            upstream_connect_budget_ms: default_upstream_connect_budget_ms(),
            tg_connect: default_connect_timeout(),
            upstream_unhealthy_fail_threshold: default_upstream_unhealthy_fail_threshold(),
            upstream_connect_failfast_hard_errors: default_upstream_connect_failfast_hard_errors(),
            stun_iface_mismatch_ignore: false,
@@ -1023,16 +938,13 @@ impl Default for GeneralConfig {
            me_socks_kdf_policy: MeSocksKdfPolicy::Strict,
            me_route_backpressure_base_timeout_ms: default_me_route_backpressure_base_timeout_ms(),
            me_route_backpressure_high_timeout_ms: default_me_route_backpressure_high_timeout_ms(),
            me_route_backpressure_high_watermark_pct:
                default_me_route_backpressure_high_watermark_pct(),
            me_health_interval_ms_unhealthy: default_me_health_interval_ms_unhealthy(),
            me_health_interval_ms_healthy: default_me_health_interval_ms_healthy(),
            me_admission_poll_ms: default_me_admission_poll_ms(),
            me_warn_rate_limit_ms: default_me_warn_rate_limit_ms(),
            me_route_no_writer_mode: MeRouteNoWriterMode::default(),
            me_route_no_writer_wait_ms: default_me_route_no_writer_wait_ms(),
            me_route_hybrid_max_wait_ms: default_me_route_hybrid_max_wait_ms(),
            me_route_blocking_send_timeout_ms: default_me_route_blocking_send_timeout_ms(),
            me_route_inline_recovery_attempts: default_me_route_inline_recovery_attempts(),
            me_route_inline_recovery_wait_ms: default_me_route_inline_recovery_wait_ms(),
            links: LinksConfig::default(),
@@ -1050,8 +962,7 @@ impl Default for GeneralConfig {
            me_hardswap_warmup_delay_min_ms: default_me_hardswap_warmup_delay_min_ms(),
            me_hardswap_warmup_delay_max_ms: default_me_hardswap_warmup_delay_max_ms(),
            me_hardswap_warmup_extra_passes: default_me_hardswap_warmup_extra_passes(),
            me_hardswap_warmup_pass_backoff_base_ms:
                default_me_hardswap_warmup_pass_backoff_base_ms(),
            me_config_stable_snapshots: default_me_config_stable_snapshots(),
            me_config_apply_cooldown_secs: default_me_config_apply_cooldown_secs(),
            me_snapshot_require_http_2xx: default_me_snapshot_require_http_2xx(),
@@ -1062,14 +973,6 @@ impl Default for GeneralConfig {
            me_secret_atomic_snapshot: default_me_secret_atomic_snapshot(),
            proxy_secret_len_max: default_proxy_secret_len_max(),
            me_pool_drain_ttl_secs: default_me_pool_drain_ttl_secs(),
            me_instadrain: default_me_instadrain(),
            me_pool_drain_threshold: default_me_pool_drain_threshold(),
            me_pool_drain_soft_evict_enabled: default_me_pool_drain_soft_evict_enabled(),
            me_pool_drain_soft_evict_grace_secs: default_me_pool_drain_soft_evict_grace_secs(),
            me_pool_drain_soft_evict_per_writer: default_me_pool_drain_soft_evict_per_writer(),
            me_pool_drain_soft_evict_budget_per_core:
                default_me_pool_drain_soft_evict_budget_per_core(),
            me_pool_drain_soft_evict_cooldown_ms: default_me_pool_drain_soft_evict_cooldown_ms(),
            me_bind_stale_mode: MeBindStaleMode::default(),
            me_bind_stale_ttl_secs: default_me_bind_stale_ttl_secs(),
            me_pool_min_fresh_ratio: default_me_pool_min_fresh_ratio(),
@@ -1094,10 +997,8 @@ impl GeneralConfig {
    /// Resolve the active updater interval for ME infrastructure refresh tasks.
    /// `update_every` has priority, otherwise legacy proxy_*_auto_reload_secs are used.
    pub fn effective_update_every_secs(&self) -> u64 {
        self.update_every.unwrap_or_else(|| {
            self.proxy_secret_auto_reload_secs
                .min(self.proxy_config_auto_reload_secs)
        })
    }

    /// Resolve periodic zero-downtime reinit interval for ME writers.

@@ -1107,14 +1008,9 @@ impl GeneralConfig {

    /// Resolve force-close timeout for stale writers.
    /// `me_reinit_drain_timeout_secs` remains backward-compatible alias.
    /// A configured `0` uses the runtime safety fallback (300s).
    pub fn effective_me_pool_force_close_secs(&self) -> u64 {
        if self.me_reinit_drain_timeout_secs == 0 {
            300
        } else {
            self.me_reinit_drain_timeout_secs
        }
    }
}

/// `[general.links]` — proxy link generation settings.
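The two resolution rules above can be modeled as a standalone sketch. The field names mirror `GeneralConfig`, but this struct is illustrative only, not the crate's actual type:

```rust
/// Illustrative model of the interval-resolution rules above.
struct Intervals {
    update_every: Option<u64>,
    proxy_secret_auto_reload_secs: u64,
    proxy_config_auto_reload_secs: u64,
    me_reinit_drain_timeout_secs: u64,
}

impl Intervals {
    /// `update_every` wins; otherwise the smaller legacy auto-reload interval.
    fn effective_update_every_secs(&self) -> u64 {
        self.update_every.unwrap_or_else(|| {
            self.proxy_secret_auto_reload_secs
                .min(self.proxy_config_auto_reload_secs)
        })
    }

    /// A configured `0` falls back to the 300 s runtime safety value.
    fn effective_me_pool_force_close_secs(&self) -> u64 {
        if self.me_reinit_drain_timeout_secs == 0 {
            300
        } else {
            self.me_reinit_drain_timeout_secs
        }
    }
}
```

The `0 => 300` branch means operators cannot accidentally disable force-close entirely; only a positive value overrides the safety fallback.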
@@ -1216,118 +1112,6 @@ impl Default for ApiConfig {
    }
}

#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize, Default)]
#[serde(rename_all = "lowercase")]
pub enum ConntrackMode {
    #[default]
    Tracked,
    Notrack,
    Hybrid,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize, Default)]
#[serde(rename_all = "lowercase")]
pub enum ConntrackBackend {
    #[default]
    Auto,
    Nftables,
    Iptables,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize, Default)]
#[serde(rename_all = "lowercase")]
pub enum ConntrackPressureProfile {
    Conservative,
    #[default]
    Balanced,
    Aggressive,
}

impl ConntrackPressureProfile {
    pub fn client_first_byte_idle_cap_secs(self) -> u64 {
        match self {
            Self::Conservative => 30,
            Self::Balanced => 20,
            Self::Aggressive => 10,
        }
    }

    pub fn direct_activity_timeout_secs(self) -> u64 {
        match self {
            Self::Conservative => 180,
            Self::Balanced => 120,
            Self::Aggressive => 60,
        }
    }

    pub fn middle_soft_idle_cap_secs(self) -> u64 {
        match self {
            Self::Conservative => 60,
            Self::Balanced => 30,
            Self::Aggressive => 20,
        }
    }

    pub fn middle_hard_idle_cap_secs(self) -> u64 {
        match self {
            Self::Conservative => 180,
            Self::Balanced => 90,
            Self::Aggressive => 60,
        }
    }
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ConntrackControlConfig {
    /// Enables runtime conntrack-control worker for pressure mitigation.
    #[serde(default = "default_conntrack_control_enabled")]
    pub inline_conntrack_control: bool,

    /// Conntrack mode for listener ingress traffic.
    #[serde(default)]
    pub mode: ConntrackMode,

    /// Netfilter backend used to reconcile notrack rules.
    #[serde(default)]
    pub backend: ConntrackBackend,

    /// Pressure profile for timeout caps under resource saturation.
    #[serde(default)]
    pub profile: ConntrackPressureProfile,

    /// Listener IP allow-list for hybrid mode.
    /// Ignored in tracked/notrack mode.
    #[serde(default)]
    pub hybrid_listener_ips: Vec<IpAddr>,

    /// Pressure high watermark as percentage.
    #[serde(default = "default_conntrack_pressure_high_watermark_pct")]
    pub pressure_high_watermark_pct: u8,

    /// Pressure low watermark as percentage.
    #[serde(default = "default_conntrack_pressure_low_watermark_pct")]
    pub pressure_low_watermark_pct: u8,

    /// Maximum conntrack delete operations per second.
    #[serde(default = "default_conntrack_delete_budget_per_sec")]
    pub delete_budget_per_sec: u64,
}

impl Default for ConntrackControlConfig {
    fn default() -> Self {
        Self {
            inline_conntrack_control: default_conntrack_control_enabled(),
            mode: ConntrackMode::default(),
            backend: ConntrackBackend::default(),
            profile: ConntrackPressureProfile::default(),
            hybrid_listener_ips: Vec::new(),
            pressure_high_watermark_pct: default_conntrack_pressure_high_watermark_pct(),
            pressure_low_watermark_pct: default_conntrack_pressure_low_watermark_pct(),
            delete_budget_per_sec: default_conntrack_delete_budget_per_sec(),
        }
    }
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ServerConfig {
    #[serde(default = "default_port")]
@@ -1361,25 +1145,9 @@ pub struct ServerConfig {
    #[serde(default = "default_proxy_protocol_header_timeout_ms")]
    pub proxy_protocol_header_timeout_ms: u64,

    /// Trusted source CIDRs allowed to send incoming PROXY protocol headers.
    ///
    /// If this field is omitted in config, it defaults to trust-all CIDRs
    /// (`0.0.0.0/0` and `::/0`). If it is explicitly set to an empty list,
    /// all PROXY protocol headers are rejected.
    #[serde(default = "default_proxy_protocol_trusted_cidrs")]
    pub proxy_protocol_trusted_cidrs: Vec<IpNetwork>,

    /// Port for the Prometheus-compatible metrics endpoint.
    /// Enables metrics when set; binds on all interfaces (dual-stack) by default.
    #[serde(default)]
    pub metrics_port: Option<u16>,

    /// Listen address for metrics in `IP:PORT` format (e.g. `"127.0.0.1:9090"`).
    /// When set, takes precedence over `metrics_port` and binds on the specified address only.
    #[serde(default)]
    pub metrics_listen: Option<String>,

    /// CIDR whitelist for the metrics endpoint.
    #[serde(default = "default_metrics_whitelist")]
    pub metrics_whitelist: Vec<IpNetwork>,

@@ -1388,25 +1156,6 @@ pub struct ServerConfig {

    #[serde(default)]
    pub listeners: Vec<ListenerConfig>,

    /// TCP `listen(2)` backlog for client-facing sockets (also used for the metrics HTTP listener).
    /// The effective queue is capped by the kernel (for example `somaxconn` on Linux).
    #[serde(default = "default_listen_backlog")]
    pub listen_backlog: u32,

    /// Maximum number of concurrent client connections.
    /// 0 means unlimited.
    #[serde(default = "default_server_max_connections")]
    pub max_connections: u32,

    /// Maximum wait in milliseconds while acquiring a connection slot permit.
    /// `0` keeps legacy unbounded wait behavior.
    #[serde(default = "default_accept_permit_timeout_ms")]
    pub accept_permit_timeout_ms: u64,

    /// Runtime conntrack control and pressure policy.
    #[serde(default)]
    pub conntrack_control: ConntrackControlConfig,
}

impl Default for ServerConfig {
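The trust semantics of `proxy_protocol_trusted_cidrs` (omitted field trusts everything, explicitly empty list rejects everything) can be sketched with plain `std::net` types. This is a simplified model, not the crate's code: `None` stands in for an omitted field, and CIDRs are hypothetical `(IpAddr, prefix)` pairs rather than the `IpNetwork` type used above:

```rust
use std::net::IpAddr;

/// Sketch of the trust decision: omitted list => trust-all; empty list => reject all.
fn accept_proxy_header(trusted: Option<&[(IpAddr, u8)]>, peer: IpAddr) -> bool {
    match trusted {
        None => true, // omitted => defaults to 0.0.0.0/0 and ::/0
        Some(cidrs) => cidrs
            .iter()
            .any(|&(net, prefix)| cidr_contains(net, prefix, peer)),
    }
}

/// Prefix match; assumes `prefix` is valid for the address family.
fn cidr_contains(net: IpAddr, prefix: u8, addr: IpAddr) -> bool {
    match (net, addr) {
        (IpAddr::V4(n), IpAddr::V4(a)) => {
            let mask = if prefix == 0 { 0 } else { u32::MAX << (32 - prefix as u32) };
            (u32::from(n) & mask) == (u32::from(a) & mask)
        }
        (IpAddr::V6(n), IpAddr::V6(a)) => {
            let mask = if prefix == 0 { 0 } else { u128::MAX << (128 - prefix as u32) };
            (u128::from(n) & mask) == (u128::from(a) & mask)
        }
        _ => false, // family mismatch never matches
    }
}
```

Note that serde's `default = "…"` makes the omitted case materialize as the trust-all list at deserialization time; the `Option` here only illustrates the two configuration states.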
@@ -1420,48 +1169,21 @@ impl Default for ServerConfig {
            listen_tcp: None,
            proxy_protocol: false,
            proxy_protocol_header_timeout_ms: default_proxy_protocol_header_timeout_ms(),
            proxy_protocol_trusted_cidrs: default_proxy_protocol_trusted_cidrs(),
            metrics_port: None,
            metrics_listen: None,
            metrics_whitelist: default_metrics_whitelist(),
            api: ApiConfig::default(),
            listeners: Vec::new(),
            listen_backlog: default_listen_backlog(),
            max_connections: default_server_max_connections(),
            accept_permit_timeout_ms: default_accept_permit_timeout_ms(),
            conntrack_control: ConntrackControlConfig::default(),
        }
    }
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TimeoutsConfig {
    /// Maximum idle wait in seconds for the first client byte before handshake parsing starts.
    /// `0` disables the separate idle phase and keeps legacy timeout behavior.
    #[serde(default = "default_client_first_byte_idle_secs")]
    pub client_first_byte_idle_secs: u64,

    /// Maximum active handshake duration in seconds after the first client byte is received.
    #[serde(default = "default_handshake_timeout")]
    pub client_handshake: u64,

    /// Enables soft/hard relay client idle policy for middle-relay sessions.
    #[serde(default = "default_relay_idle_policy_v2_enabled")]
    pub relay_idle_policy_v2_enabled: bool,

    /// Soft idle threshold for middle-relay client uplink activity in seconds.
    /// Hitting this threshold marks the session as idle-candidate, but does not close it.
    #[serde(default = "default_relay_client_idle_soft_secs")]
    pub relay_client_idle_soft_secs: u64,

    /// Hard idle threshold for middle-relay client uplink activity in seconds.
    /// Hitting this threshold closes the session.
    #[serde(default = "default_relay_client_idle_hard_secs")]
    pub relay_client_idle_hard_secs: u64,

    /// Additional grace in seconds added to hard idle window after recent downstream activity.
    #[serde(default = "default_relay_idle_grace_after_downstream_activity_secs")]
    pub relay_idle_grace_after_downstream_activity_secs: u64,

    #[serde(default = "default_keepalive")]
    pub client_keepalive: u64,
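The soft/hard idle policy these timeout fields describe can be sketched as a pure decision function. The names and the verdict enum are hypothetical; the actual session bookkeeping lives elsewhere in the relay:

```rust
/// Hypothetical verdict for a middle-relay session's uplink idle state.
#[derive(Debug, PartialEq)]
enum IdleVerdict {
    Active,
    SoftIdle, // idle-candidate: marked, not closed
    Close,    // hard threshold reached
}

/// Sketch of the soft/hard idle decision: recent downstream activity
/// extends the hard window by the configured grace.
fn relay_idle_verdict(
    uplink_idle_secs: u64,
    soft_secs: u64,
    hard_secs: u64,
    downstream_recent: bool,
    grace_secs: u64,
) -> IdleVerdict {
    let hard_effective = if downstream_recent {
        hard_secs + grace_secs
    } else {
        hard_secs
    };
    if uplink_idle_secs >= hard_effective {
        IdleVerdict::Close
    } else if uplink_idle_secs >= soft_secs {
        IdleVerdict::SoftIdle
    } else {
        IdleVerdict::Active
    }
}
```

The asymmetry is deliberate: downstream traffic keeps a quiet uplink alive a little longer, but only the uplink clock decides when a session is idle at all.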
@@ -1481,13 +1203,8 @@ pub struct TimeoutsConfig {
impl Default for TimeoutsConfig {
    fn default() -> Self {
        Self {
            client_first_byte_idle_secs: default_client_first_byte_idle_secs(),
            client_handshake: default_handshake_timeout(),
            relay_idle_policy_v2_enabled: default_relay_idle_policy_v2_enabled(),
            relay_client_idle_soft_secs: default_relay_client_idle_soft_secs(),
            relay_client_idle_hard_secs: default_relay_client_idle_hard_secs(),
            relay_idle_grace_after_downstream_activity_secs:
                default_relay_idle_grace_after_downstream_activity_secs(),
            client_keepalive: default_keepalive(),
            client_ack: default_ack_timeout(),
            me_one_retry: default_me_one_retry(),
@@ -1496,90 +1213,6 @@ impl Default for TimeoutsConfig {
        }
    }
}

#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize, Default)]
#[serde(rename_all = "lowercase")]
pub enum UnknownSniAction {
    #[default]
    Drop,
    Mask,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum TlsFetchProfile {
    ModernChromeLike,
    ModernFirefoxLike,
    CompatTls12,
    LegacyMinimal,
}

impl TlsFetchProfile {
    pub fn as_str(self) -> &'static str {
        match self {
            TlsFetchProfile::ModernChromeLike => "modern_chrome_like",
            TlsFetchProfile::ModernFirefoxLike => "modern_firefox_like",
            TlsFetchProfile::CompatTls12 => "compat_tls12",
            TlsFetchProfile::LegacyMinimal => "legacy_minimal",
        }
    }
}

fn default_tls_fetch_profiles() -> Vec<TlsFetchProfile> {
    vec![
        TlsFetchProfile::ModernChromeLike,
        TlsFetchProfile::ModernFirefoxLike,
        TlsFetchProfile::CompatTls12,
        TlsFetchProfile::LegacyMinimal,
    ]
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TlsFetchConfig {
    /// Ordered list of ClientHello profiles used for adaptive fallback.
    #[serde(default = "default_tls_fetch_profiles")]
    pub profiles: Vec<TlsFetchProfile>,

    /// When true and upstream route is configured, TLS fetch fails closed on
    /// upstream connect errors and does not fallback to direct TCP.
    #[serde(default = "default_tls_fetch_strict_route")]
    pub strict_route: bool,

    /// Timeout per one profile attempt in milliseconds.
    #[serde(default = "default_tls_fetch_attempt_timeout_ms")]
    pub attempt_timeout_ms: u64,

    /// Total wall-clock budget in milliseconds across all profile attempts.
    #[serde(default = "default_tls_fetch_total_budget_ms")]
    pub total_budget_ms: u64,

    /// Adds GREASE-style values into selected ClientHello extensions.
    #[serde(default)]
    pub grease_enabled: bool,

    /// Produces deterministic ClientHello randomness for debugging/tests.
    #[serde(default)]
    pub deterministic: bool,

    /// TTL for winner-profile cache entries in seconds.
    /// Set to 0 to disable profile cache.
    #[serde(default = "default_tls_fetch_profile_cache_ttl_secs")]
    pub profile_cache_ttl_secs: u64,
}

impl Default for TlsFetchConfig {
    fn default() -> Self {
        Self {
            profiles: default_tls_fetch_profiles(),
            strict_route: default_tls_fetch_strict_route(),
            attempt_timeout_ms: default_tls_fetch_attempt_timeout_ms(),
            total_budget_ms: default_tls_fetch_total_budget_ms(),
            grease_enabled: false,
            deterministic: false,
            profile_cache_ttl_secs: default_tls_fetch_profile_cache_ttl_secs(),
        }
    }
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AntiCensorshipConfig {
    #[serde(default = "default_tls_domain")]
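The adaptive fallback that `TlsFetchConfig` configures (ordered profiles, per-attempt timeout, shared wall-clock budget) can be sketched as a loop. The driver function and the simplified `Profile` enum are illustrative, not the crate's API; the real implementation also caches the winner profile with a TTL:

```rust
use std::time::{Duration, Instant};

#[derive(Clone, Copy, Debug, PartialEq)]
enum Profile {
    ModernChromeLike,
    ModernFirefoxLike,
    CompatTls12,
    LegacyMinimal,
}

/// Sketch of the adaptive fallback: try each ClientHello profile in order,
/// each capped by the per-attempt timeout, until the total budget runs out.
fn fetch_with_fallback<F>(
    profiles: &[Profile],
    attempt_timeout: Duration,
    total_budget: Duration,
    mut try_profile: F,
) -> Option<Profile>
where
    F: FnMut(Profile, Duration) -> bool,
{
    let start = Instant::now();
    for &profile in profiles {
        // Stop as soon as the shared wall-clock budget is spent.
        let remaining = total_budget.checked_sub(start.elapsed())?;
        // Never let a single attempt overrun what is left of the budget.
        let per_attempt = attempt_timeout.min(remaining);
        if try_profile(profile, per_attempt) {
            return Some(profile); // winner; real code would cache this per host
        }
    }
    None
}
```

Ordering the profiles from most to least modern (as `default_tls_fetch_profiles` does) means the common case costs one attempt, while legacy fronts are still reachable within the same budget.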
@@ -1589,19 +1222,6 @@ pub struct AntiCensorshipConfig {
    #[serde(default)]
    pub tls_domains: Vec<String>,

    /// Policy for TLS ClientHello with unknown (non-configured) SNI.
    #[serde(default)]
    pub unknown_sni_action: UnknownSniAction,

    /// Upstream scope used for TLS front metadata fetches.
    /// Empty value keeps default upstream routing behavior.
    #[serde(default = "default_tls_fetch_scope")]
    pub tls_fetch_scope: String,

    /// Fetch strategy for TLS front metadata bootstrap and periodic refresh.
    #[serde(default)]
    pub tls_fetch: TlsFetchConfig,

    #[serde(default = "default_true")]
    pub mask: bool,
@@ -1652,54 +1272,6 @@ pub struct AntiCensorshipConfig {
    /// Allows the backend to see the real client IP.
    #[serde(default)]
    pub mask_proxy_protocol: u8,

    /// Enable shape-channel hardening on mask backend path by padding
    /// client->mask stream tail to configured buckets on stream end.
    #[serde(default = "default_mask_shape_hardening")]
    pub mask_shape_hardening: bool,

    /// Opt-in aggressive shape hardening mode.
    /// When enabled, masking may shape some backend-silent timeout paths and
    /// enforces strictly positive above-cap blur when blur is enabled.
    #[serde(default = "default_mask_shape_hardening_aggressive_mode")]
    pub mask_shape_hardening_aggressive_mode: bool,

    /// Minimum bucket size for mask shape hardening padding.
    #[serde(default = "default_mask_shape_bucket_floor_bytes")]
    pub mask_shape_bucket_floor_bytes: usize,

    /// Maximum bucket size for mask shape hardening padding.
    #[serde(default = "default_mask_shape_bucket_cap_bytes")]
    pub mask_shape_bucket_cap_bytes: usize,

    /// Add bounded random tail bytes even when total bytes already exceed
    /// mask_shape_bucket_cap_bytes.
    #[serde(default = "default_mask_shape_above_cap_blur")]
    pub mask_shape_above_cap_blur: bool,

    /// Maximum random bytes appended above cap when above-cap blur is enabled.
    #[serde(default = "default_mask_shape_above_cap_blur_max_bytes")]
    pub mask_shape_above_cap_blur_max_bytes: usize,

    /// Maximum bytes relayed per direction on unauthenticated masking fallback paths.
    #[serde(default = "default_mask_relay_max_bytes")]
    pub mask_relay_max_bytes: usize,

    /// Prefetch timeout (ms) for extending fragmented masking classifier window.
    #[serde(default = "default_mask_classifier_prefetch_timeout_ms")]
    pub mask_classifier_prefetch_timeout_ms: u64,

    /// Enable outcome-time normalization envelope for masking fallback.
    #[serde(default = "default_mask_timing_normalization_enabled")]
    pub mask_timing_normalization_enabled: bool,

    /// Lower bound (ms) for masking outcome timing envelope.
    #[serde(default = "default_mask_timing_normalization_floor_ms")]
    pub mask_timing_normalization_floor_ms: u64,

    /// Upper bound (ms) for masking outcome timing envelope.
    #[serde(default = "default_mask_timing_normalization_ceiling_ms")]
    pub mask_timing_normalization_ceiling_ms: u64,
}

impl Default for AntiCensorshipConfig {
@@ -1707,9 +1279,6 @@ impl Default for AntiCensorshipConfig {
        Self {
            tls_domain: default_tls_domain(),
            tls_domains: Vec::new(),
            unknown_sni_action: UnknownSniAction::Drop,
            tls_fetch_scope: default_tls_fetch_scope(),
            tls_fetch: TlsFetchConfig::default(),
            mask: default_true(),
            mask_host: None,
            mask_port: default_mask_port(),

@@ -1723,17 +1292,6 @@ impl Default for AntiCensorshipConfig {
            tls_full_cert_ttl_secs: default_tls_full_cert_ttl_secs(),
            alpn_enforce: default_alpn_enforce(),
            mask_proxy_protocol: 0,
            mask_shape_hardening: default_mask_shape_hardening(),
            mask_shape_hardening_aggressive_mode: default_mask_shape_hardening_aggressive_mode(),
            mask_shape_bucket_floor_bytes: default_mask_shape_bucket_floor_bytes(),
            mask_shape_bucket_cap_bytes: default_mask_shape_bucket_cap_bytes(),
            mask_shape_above_cap_blur: default_mask_shape_above_cap_blur(),
            mask_shape_above_cap_blur_max_bytes: default_mask_shape_above_cap_blur_max_bytes(),
            mask_relay_max_bytes: default_mask_relay_max_bytes(),
            mask_classifier_prefetch_timeout_ms: default_mask_classifier_prefetch_timeout_ms(),
            mask_timing_normalization_enabled: default_mask_timing_normalization_enabled(),
            mask_timing_normalization_floor_ms: default_mask_timing_normalization_floor_ms(),
            mask_timing_normalization_ceiling_ms: default_mask_timing_normalization_ceiling_ms(),
        }
    }
}
@ -1750,12 +1308,6 @@ pub struct AccessConfig {
|
||||||
#[serde(default)]
|
#[serde(default)]
|
||||||
pub user_max_tcp_conns: HashMap<String, usize>,
|
pub user_max_tcp_conns: HashMap<String, usize>,
|
||||||
|
|
||||||
/// Global per-user TCP connection limit applied when a user has no
|
|
||||||
/// positive individual override.
|
|
||||||
/// `0` disables the inherited limit.
|
|
||||||
#[serde(default = "default_user_max_tcp_conns_global_each")]
|
|
||||||
pub user_max_tcp_conns_global_each: usize,
|
|
||||||
|
|
||||||
#[serde(default)]
|
#[serde(default)]
|
||||||
pub user_expirations: HashMap<String, DateTime<Utc>>,
|
pub user_expirations: HashMap<String, DateTime<Utc>>,
|
||||||
|
|
||||||
|
|
@ -1765,11 +1317,6 @@ pub struct AccessConfig {
|
||||||
#[serde(default)]
|
#[serde(default)]
|
||||||
pub user_max_unique_ips: HashMap<String, usize>,
|
pub user_max_unique_ips: HashMap<String, usize>,
|
||||||
|
|
||||||
/// Global per-user unique IP limit applied when a user has no individual override.
|
|
||||||
/// `0` disables the inherited limit.
|
|
||||||
#[serde(default = "default_user_max_unique_ips_global_each")]
|
|
||||||
pub user_max_unique_ips_global_each: usize,
|
|
||||||
|
|
||||||
#[serde(default)]
|
#[serde(default)]
|
||||||
pub user_max_unique_ips_mode: UserMaxUniqueIpsMode,
|
pub user_max_unique_ips_mode: UserMaxUniqueIpsMode,
|
||||||
|
|
||||||
|
|
@ -1792,11 +1339,9 @@ impl Default for AccessConfig {
|
||||||
users: default_access_users(),
|
users: default_access_users(),
|
||||||
user_ad_tags: HashMap::new(),
|
user_ad_tags: HashMap::new(),
|
||||||
user_max_tcp_conns: HashMap::new(),
|
user_max_tcp_conns: HashMap::new(),
|
||||||
user_max_tcp_conns_global_each: default_user_max_tcp_conns_global_each(),
|
|
||||||
user_expirations: HashMap::new(),
|
user_expirations: HashMap::new(),
|
||||||
user_data_quota: HashMap::new(),
|
user_data_quota: HashMap::new(),
|
||||||
user_max_unique_ips: HashMap::new(),
|
user_max_unique_ips: HashMap::new(),
|
||||||
user_max_unique_ips_global_each: default_user_max_unique_ips_global_each(),
|
|
||||||
user_max_unique_ips_mode: UserMaxUniqueIpsMode::default(),
|
user_max_unique_ips_mode: UserMaxUniqueIpsMode::default(),
|
||||||
user_max_unique_ips_window_secs: default_user_max_unique_ips_window_secs(),
|
user_max_unique_ips_window_secs: default_user_max_unique_ips_window_secs(),
|
||||||
replay_check_len: default_replay_check_len(),
|
replay_check_len: default_replay_check_len(),
|
||||||
|
|
@ -1833,11 +1378,6 @@ pub enum UpstreamType {
|
||||||
#[serde(default)]
|
#[serde(default)]
|
||||||
password: Option<String>,
|
password: Option<String>,
|
||||||
},
|
},
|
||||||
Shadowsocks {
|
|
||||||
url: String,
|
|
||||||
#[serde(default)]
|
|
||||||
interface: Option<String>,
|
|
||||||
},
|
|
||||||
}
|
}
|
||||||
|
|
||||||
#[derive(Debug, Clone, Serialize, Deserialize)]
|
#[derive(Debug, Clone, Serialize, Deserialize)]
|
||||||
|
|
@ -1918,10 +1458,7 @@ impl ShowLink {
|
||||||
}
|
}
|
||||||
|
|
||||||
impl Serialize for ShowLink {
|
impl Serialize for ShowLink {
|
||||||
fn serialize<S: serde::Serializer>(
|
fn serialize<S: serde::Serializer>(&self, serializer: S) -> std::result::Result<S::Ok, S::Error> {
|
||||||
&self,
|
|
||||||
serializer: S,
|
|
||||||
) -> std::result::Result<S::Ok, S::Error> {
|
|
||||||
match self {
|
match self {
|
||||||
ShowLink::None => Vec::<String>::new().serialize(serializer),
|
ShowLink::None => Vec::<String>::new().serialize(serializer),
|
||||||
ShowLink::All => serializer.serialize_str("*"),
|
ShowLink::All => serializer.serialize_str("*"),
|
||||||
|
|
@ -1931,9 +1468,7 @@ impl Serialize for ShowLink {
|
||||||
}
|
}
|
||||||
|
|
||||||
impl<'de> Deserialize<'de> for ShowLink {
|
impl<'de> Deserialize<'de> for ShowLink {
|
||||||
fn deserialize<D: serde::Deserializer<'de>>(
|
fn deserialize<D: serde::Deserializer<'de>>(deserializer: D) -> std::result::Result<Self, D::Error> {
|
||||||
deserializer: D,
|
|
||||||
) -> std::result::Result<Self, D::Error> {
|
|
||||||
use serde::de;
|
use serde::de;
|
||||||
|
|
||||||
struct ShowLinkVisitor;
|
struct ShowLinkVisitor;
|
||||||
|
|
@ -1949,14 +1484,14 @@ impl<'de> Deserialize<'de> for ShowLink {
|
||||||
if v == "*" {
|
if v == "*" {
|
||||||
Ok(ShowLink::All)
|
Ok(ShowLink::All)
|
||||||
} else {
|
} else {
|
||||||
Err(de::Error::invalid_value(de::Unexpected::Str(v), &r#""*""#))
|
Err(de::Error::invalid_value(
|
||||||
|
de::Unexpected::Str(v),
|
||||||
|
&r#""*""#,
|
||||||
|
))
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
fn visit_seq<A: de::SeqAccess<'de>>(
|
fn visit_seq<A: de::SeqAccess<'de>>(self, mut seq: A) -> std::result::Result<ShowLink, A::Error> {
|
||||||
self,
|
|
||||||
mut seq: A,
|
|
||||||
) -> std::result::Result<ShowLink, A::Error> {
|
|
||||||
let mut names = Vec::new();
|
let mut names = Vec::new();
|
||||||
while let Some(name) = seq.next_element::<String>()? {
|
while let Some(name) = seq.next_element::<String>()? {
|
||||||
names.push(name);
|
names.push(name);
|
||||||
|
|
|
||||||
|
|
@@ -1,755 +0,0 @@
-use std::collections::BTreeSet;
-use std::net::IpAddr;
-use std::path::PathBuf;
-use std::sync::Arc;
-use std::time::Duration;
-
-use tokio::io::AsyncWriteExt;
-use tokio::process::Command;
-use tokio::sync::{mpsc, watch};
-use tracing::{debug, info, warn};
-
-use crate::config::{ConntrackBackend, ConntrackMode, ProxyConfig};
-use crate::proxy::middle_relay::note_global_relay_pressure;
-use crate::proxy::shared_state::{ConntrackCloseEvent, ConntrackCloseReason, ProxySharedState};
-use crate::stats::Stats;
-
-const CONNTRACK_EVENT_QUEUE_CAPACITY: usize = 32_768;
-const PRESSURE_RELEASE_TICKS: u8 = 3;
-const PRESSURE_SAMPLE_INTERVAL: Duration = Duration::from_secs(1);
-
-#[derive(Clone, Copy, Debug, PartialEq, Eq)]
-enum NetfilterBackend {
-    Nftables,
-    Iptables,
-}
-
-#[derive(Clone, Copy)]
-struct PressureSample {
-    conn_pct: Option<u8>,
-    fd_pct: Option<u8>,
-    accept_timeout_delta: u64,
-    me_queue_pressure_delta: u64,
-}
-
-struct PressureState {
-    active: bool,
-    low_streak: u8,
-    prev_accept_timeout_total: u64,
-    prev_me_queue_pressure_total: u64,
-}
-
-impl PressureState {
-    fn new(stats: &Stats) -> Self {
-        Self {
-            active: false,
-            low_streak: 0,
-            prev_accept_timeout_total: stats.get_accept_permit_timeout_total(),
-            prev_me_queue_pressure_total: stats.get_me_c2me_send_full_total(),
-        }
-    }
-}
-
-pub(crate) fn spawn_conntrack_controller(
-    config_rx: watch::Receiver<Arc<ProxyConfig>>,
-    stats: Arc<Stats>,
-    shared: Arc<ProxySharedState>,
-) {
-    if !cfg!(target_os = "linux") {
-        let enabled = config_rx
-            .borrow()
-            .server
-            .conntrack_control
-            .inline_conntrack_control;
-        stats.set_conntrack_control_enabled(enabled);
-        stats.set_conntrack_control_available(false);
-        stats.set_conntrack_pressure_active(false);
-        stats.set_conntrack_event_queue_depth(0);
-        stats.set_conntrack_rule_apply_ok(false);
-        shared.disable_conntrack_close_sender();
-        shared.set_conntrack_pressure_active(false);
-        if enabled {
-            warn!(
-                "conntrack control is configured but unsupported on this OS; disabling runtime worker"
-            );
-        }
-        return;
-    }
-
-    let (tx, rx) = mpsc::channel(CONNTRACK_EVENT_QUEUE_CAPACITY);
-    shared.set_conntrack_close_sender(tx);
-    tokio::spawn(async move {
-        run_conntrack_controller(config_rx, stats, shared, rx).await;
-    });
-}
-
-async fn run_conntrack_controller(
-    mut config_rx: watch::Receiver<Arc<ProxyConfig>>,
-    stats: Arc<Stats>,
-    shared: Arc<ProxySharedState>,
-    mut close_rx: mpsc::Receiver<ConntrackCloseEvent>,
-) {
-    let mut cfg = config_rx.borrow().clone();
-    let mut pressure_state = PressureState::new(stats.as_ref());
-    let mut delete_budget_tokens = cfg.server.conntrack_control.delete_budget_per_sec;
-    let mut backend = pick_backend(cfg.server.conntrack_control.backend);
-
-    apply_runtime_state(
-        stats.as_ref(),
-        shared.as_ref(),
-        &cfg,
-        backend.is_some(),
-        false,
-    );
-    reconcile_rules(&cfg, backend, stats.as_ref()).await;
-
-    loop {
-        tokio::select! {
-            changed = config_rx.changed() => {
-                if changed.is_err() {
-                    break;
-                }
-                cfg = config_rx.borrow_and_update().clone();
-                backend = pick_backend(cfg.server.conntrack_control.backend);
-                delete_budget_tokens = cfg.server.conntrack_control.delete_budget_per_sec;
-                apply_runtime_state(stats.as_ref(), shared.as_ref(), &cfg, backend.is_some(), pressure_state.active);
-                reconcile_rules(&cfg, backend, stats.as_ref()).await;
-            }
-            event = close_rx.recv() => {
-                let Some(event) = event else {
-                    break;
-                };
-                stats.set_conntrack_event_queue_depth(close_rx.len() as u64);
-                if !cfg.server.conntrack_control.inline_conntrack_control {
-                    continue;
-                }
-                if !pressure_state.active {
-                    continue;
-                }
-                if !matches!(event.reason, ConntrackCloseReason::Timeout | ConntrackCloseReason::Pressure | ConntrackCloseReason::Reset) {
-                    continue;
-                }
-                if delete_budget_tokens == 0 {
-                    continue;
-                }
-                stats.increment_conntrack_delete_attempt_total();
-                match delete_conntrack_entry(event).await {
-                    DeleteOutcome::Deleted => {
-                        delete_budget_tokens = delete_budget_tokens.saturating_sub(1);
-                        stats.increment_conntrack_delete_success_total();
-                    }
-                    DeleteOutcome::NotFound => {
-                        delete_budget_tokens = delete_budget_tokens.saturating_sub(1);
-                        stats.increment_conntrack_delete_not_found_total();
-                    }
-                    DeleteOutcome::Error => {
-                        delete_budget_tokens = delete_budget_tokens.saturating_sub(1);
-                        stats.increment_conntrack_delete_error_total();
-                    }
-                }
-            }
-            _ = tokio::time::sleep(PRESSURE_SAMPLE_INTERVAL) => {
-                delete_budget_tokens = cfg.server.conntrack_control.delete_budget_per_sec;
-                stats.set_conntrack_event_queue_depth(close_rx.len() as u64);
-                let sample = collect_pressure_sample(stats.as_ref(), &cfg, &mut pressure_state);
-                update_pressure_state(
-                    stats.as_ref(),
-                    shared.as_ref(),
-                    &cfg,
-                    &sample,
-                    &mut pressure_state,
-                );
-                if pressure_state.active {
-                    note_global_relay_pressure(shared.as_ref());
-                }
-            }
-        }
-    }
-
-    shared.disable_conntrack_close_sender();
-    shared.set_conntrack_pressure_active(false);
-    stats.set_conntrack_pressure_active(false);
-}
-
-fn apply_runtime_state(
-    stats: &Stats,
-    shared: &ProxySharedState,
-    cfg: &ProxyConfig,
-    backend_available: bool,
-    pressure_active: bool,
-) {
-    let enabled = cfg.server.conntrack_control.inline_conntrack_control;
-    let available = enabled && backend_available && has_cap_net_admin();
-    if enabled && !available {
-        warn!(
-            "conntrack control enabled but unavailable (missing CAP_NET_ADMIN or backend binaries)"
-        );
-    }
-    stats.set_conntrack_control_enabled(enabled);
-    stats.set_conntrack_control_available(available);
-    shared.set_conntrack_pressure_active(enabled && pressure_active);
-    stats.set_conntrack_pressure_active(enabled && pressure_active);
-}
-
-fn collect_pressure_sample(
-    stats: &Stats,
-    cfg: &ProxyConfig,
-    state: &mut PressureState,
-) -> PressureSample {
-    let current_connections = stats.get_current_connections_total();
-    let conn_pct = if cfg.server.max_connections == 0 {
-        None
-    } else {
-        Some(
-            ((current_connections.saturating_mul(100)) / u64::from(cfg.server.max_connections))
-                .min(100) as u8,
-        )
-    };
-
-    let fd_pct = fd_usage_pct();
-
-    let accept_total = stats.get_accept_permit_timeout_total();
-    let accept_delta = accept_total.saturating_sub(state.prev_accept_timeout_total);
-    state.prev_accept_timeout_total = accept_total;
-
-    let me_total = stats.get_me_c2me_send_full_total();
-    let me_delta = me_total.saturating_sub(state.prev_me_queue_pressure_total);
-    state.prev_me_queue_pressure_total = me_total;
-
-    PressureSample {
-        conn_pct,
-        fd_pct,
-        accept_timeout_delta: accept_delta,
-        me_queue_pressure_delta: me_delta,
-    }
-}
-
-fn update_pressure_state(
-    stats: &Stats,
-    shared: &ProxySharedState,
-    cfg: &ProxyConfig,
-    sample: &PressureSample,
-    state: &mut PressureState,
-) {
-    if !cfg.server.conntrack_control.inline_conntrack_control {
-        if state.active {
-            state.active = false;
-            state.low_streak = 0;
-            shared.set_conntrack_pressure_active(false);
-            stats.set_conntrack_pressure_active(false);
-            info!("Conntrack pressure mode deactivated (feature disabled)");
-        }
-        return;
-    }
-
-    let high = cfg.server.conntrack_control.pressure_high_watermark_pct;
-    let low = cfg.server.conntrack_control.pressure_low_watermark_pct;
-
-    let high_hit = sample.conn_pct.is_some_and(|v| v >= high)
-        || sample.fd_pct.is_some_and(|v| v >= high)
-        || sample.accept_timeout_delta > 0
-        || sample.me_queue_pressure_delta > 0;
-
-    let low_clear = sample.conn_pct.is_none_or(|v| v <= low)
-        && sample.fd_pct.is_none_or(|v| v <= low)
-        && sample.accept_timeout_delta == 0
-        && sample.me_queue_pressure_delta == 0;
-
-    if !state.active && high_hit {
-        state.active = true;
-        state.low_streak = 0;
-        shared.set_conntrack_pressure_active(true);
-        stats.set_conntrack_pressure_active(true);
-        info!(
-            conn_pct = ?sample.conn_pct,
-            fd_pct = ?sample.fd_pct,
-            accept_timeout_delta = sample.accept_timeout_delta,
-            me_queue_pressure_delta = sample.me_queue_pressure_delta,
-            "Conntrack pressure mode activated"
-        );
-        return;
-    }
-
-    if state.active && low_clear {
-        state.low_streak = state.low_streak.saturating_add(1);
-        if state.low_streak >= PRESSURE_RELEASE_TICKS {
-            state.active = false;
-            state.low_streak = 0;
-            shared.set_conntrack_pressure_active(false);
-            stats.set_conntrack_pressure_active(false);
-            info!("Conntrack pressure mode deactivated");
-        }
-        return;
-    }
-
-    state.low_streak = 0;
-}
-
-async fn reconcile_rules(cfg: &ProxyConfig, backend: Option<NetfilterBackend>, stats: &Stats) {
-    if !cfg.server.conntrack_control.inline_conntrack_control {
-        clear_notrack_rules_all_backends().await;
-        stats.set_conntrack_rule_apply_ok(true);
-        return;
-    }
-
-    if !has_cap_net_admin() {
-        stats.set_conntrack_rule_apply_ok(false);
-        return;
-    }
-
-    let Some(backend) = backend else {
-        stats.set_conntrack_rule_apply_ok(false);
-        return;
-    };
-
-    let apply_result = match backend {
-        NetfilterBackend::Nftables => apply_nft_rules(cfg).await,
-        NetfilterBackend::Iptables => apply_iptables_rules(cfg).await,
-    };
-
-    if let Err(error) = apply_result {
-        warn!(error = %error, "Failed to reconcile conntrack/notrack rules");
-        stats.set_conntrack_rule_apply_ok(false);
-    } else {
-        stats.set_conntrack_rule_apply_ok(true);
-    }
-}
-
-fn pick_backend(configured: ConntrackBackend) -> Option<NetfilterBackend> {
-    match configured {
-        ConntrackBackend::Auto => {
-            if command_exists("nft") {
-                Some(NetfilterBackend::Nftables)
-            } else if command_exists("iptables") {
-                Some(NetfilterBackend::Iptables)
-            } else {
-                None
-            }
-        }
-        ConntrackBackend::Nftables => command_exists("nft").then_some(NetfilterBackend::Nftables),
-        ConntrackBackend::Iptables => {
-            command_exists("iptables").then_some(NetfilterBackend::Iptables)
-        }
-    }
-}
-
-fn command_exists(binary: &str) -> bool {
-    let Some(path_var) = std::env::var_os("PATH") else {
-        return false;
-    };
-    std::env::split_paths(&path_var).any(|dir| {
-        let candidate: PathBuf = dir.join(binary);
-        candidate.exists() && candidate.is_file()
-    })
-}
-
-fn notrack_targets(cfg: &ProxyConfig) -> (Vec<Option<IpAddr>>, Vec<Option<IpAddr>>) {
-    let mode = cfg.server.conntrack_control.mode;
-    let mut v4_targets: BTreeSet<Option<IpAddr>> = BTreeSet::new();
-    let mut v6_targets: BTreeSet<Option<IpAddr>> = BTreeSet::new();
-
-    match mode {
-        ConntrackMode::Tracked => {}
-        ConntrackMode::Notrack => {
-            if cfg.server.listeners.is_empty() {
-                if let Some(ipv4) = cfg
-                    .server
-                    .listen_addr_ipv4
-                    .as_ref()
-                    .and_then(|s| s.parse::<IpAddr>().ok())
-                {
-                    if ipv4.is_unspecified() {
-                        v4_targets.insert(None);
-                    } else {
-                        v4_targets.insert(Some(ipv4));
-                    }
-                }
-                if let Some(ipv6) = cfg
-                    .server
-                    .listen_addr_ipv6
-                    .as_ref()
-                    .and_then(|s| s.parse::<IpAddr>().ok())
-                {
-                    if ipv6.is_unspecified() {
-                        v6_targets.insert(None);
-                    } else {
-                        v6_targets.insert(Some(ipv6));
-                    }
-                }
-            } else {
-                for listener in &cfg.server.listeners {
-                    if listener.ip.is_ipv4() {
-                        if listener.ip.is_unspecified() {
-                            v4_targets.insert(None);
-                        } else {
-                            v4_targets.insert(Some(listener.ip));
-                        }
-                    } else if listener.ip.is_unspecified() {
-                        v6_targets.insert(None);
-                    } else {
-                        v6_targets.insert(Some(listener.ip));
-                    }
-                }
-            }
-        }
-        ConntrackMode::Hybrid => {
-            for ip in &cfg.server.conntrack_control.hybrid_listener_ips {
-                if ip.is_ipv4() {
-                    v4_targets.insert(Some(*ip));
-                } else {
-                    v6_targets.insert(Some(*ip));
-                }
-            }
-        }
-    }
-
-    (
-        v4_targets.into_iter().collect(),
-        v6_targets.into_iter().collect(),
-    )
-}
-
-async fn apply_nft_rules(cfg: &ProxyConfig) -> Result<(), String> {
-    let _ = run_command(
-        "nft",
-        &["delete", "table", "inet", "telemt_conntrack"],
-        None,
-    )
-    .await;
-    if matches!(cfg.server.conntrack_control.mode, ConntrackMode::Tracked) {
-        return Ok(());
-    }
-
-    let (v4_targets, v6_targets) = notrack_targets(cfg);
-    let mut rules = Vec::new();
-    for ip in v4_targets {
-        let rule = if let Some(ip) = ip {
-            format!("tcp dport {} ip daddr {} notrack", cfg.server.port, ip)
-        } else {
-            format!("tcp dport {} notrack", cfg.server.port)
-        };
-        rules.push(rule);
-    }
-    for ip in v6_targets {
-        let rule = if let Some(ip) = ip {
-            format!("tcp dport {} ip6 daddr {} notrack", cfg.server.port, ip)
-        } else {
-            format!("tcp dport {} notrack", cfg.server.port)
-        };
-        rules.push(rule);
-    }
-
-    let rule_blob = if rules.is_empty() {
-        String::new()
-    } else {
-        format!(" {}\n", rules.join("\n "))
-    };
-    let script = format!(
-        "table inet telemt_conntrack {{\n chain preraw {{\n type filter hook prerouting priority raw; policy accept;\n{rule_blob} }}\n}}\n"
-    );
-    run_command("nft", &["-f", "-"], Some(script)).await
-}
-
-async fn apply_iptables_rules(cfg: &ProxyConfig) -> Result<(), String> {
-    apply_iptables_rules_for_binary("iptables", cfg, true).await?;
-    apply_iptables_rules_for_binary("ip6tables", cfg, false).await?;
-    Ok(())
-}
-
-async fn apply_iptables_rules_for_binary(
-    binary: &str,
-    cfg: &ProxyConfig,
-    ipv4: bool,
-) -> Result<(), String> {
-    if !command_exists(binary) {
-        return Ok(());
-    }
-    let chain = "TELEMT_NOTRACK";
-    let _ = run_command(
-        binary,
-        &["-t", "raw", "-D", "PREROUTING", "-j", chain],
-        None,
-    )
-    .await;
-    let _ = run_command(binary, &["-t", "raw", "-F", chain], None).await;
-    let _ = run_command(binary, &["-t", "raw", "-X", chain], None).await;
-
-    if matches!(cfg.server.conntrack_control.mode, ConntrackMode::Tracked) {
-        return Ok(());
-    }
-
-    run_command(binary, &["-t", "raw", "-N", chain], None).await?;
-    run_command(binary, &["-t", "raw", "-F", chain], None).await?;
-    if run_command(
-        binary,
-        &["-t", "raw", "-C", "PREROUTING", "-j", chain],
-        None,
-    )
-    .await
-    .is_err()
-    {
-        run_command(
-            binary,
-            &["-t", "raw", "-I", "PREROUTING", "1", "-j", chain],
-            None,
-        )
-        .await?;
-    }
-
-    let (v4_targets, v6_targets) = notrack_targets(cfg);
-    let selected = if ipv4 { v4_targets } else { v6_targets };
-    for ip in selected {
-        let mut args = vec![
-            "-t".to_string(),
-            "raw".to_string(),
-            "-A".to_string(),
-            chain.to_string(),
-            "-p".to_string(),
-            "tcp".to_string(),
-            "--dport".to_string(),
-            cfg.server.port.to_string(),
-        ];
-        if let Some(ip) = ip {
-            args.push("-d".to_string());
-            args.push(ip.to_string());
-        }
-        args.push("-j".to_string());
-        args.push("CT".to_string());
-        args.push("--notrack".to_string());
-        let arg_refs: Vec<&str> = args.iter().map(String::as_str).collect();
-        run_command(binary, &arg_refs, None).await?;
-    }
-    Ok(())
-}
-
-async fn clear_notrack_rules_all_backends() {
-    let _ = run_command(
-        "nft",
-        &["delete", "table", "inet", "telemt_conntrack"],
-        None,
-    )
-    .await;
-    let _ = run_command(
-        "iptables",
-        &["-t", "raw", "-D", "PREROUTING", "-j", "TELEMT_NOTRACK"],
-        None,
-    )
-    .await;
-    let _ = run_command("iptables", &["-t", "raw", "-F", "TELEMT_NOTRACK"], None).await;
-    let _ = run_command("iptables", &["-t", "raw", "-X", "TELEMT_NOTRACK"], None).await;
-    let _ = run_command(
-        "ip6tables",
-        &["-t", "raw", "-D", "PREROUTING", "-j", "TELEMT_NOTRACK"],
-        None,
-    )
-    .await;
-    let _ = run_command("ip6tables", &["-t", "raw", "-F", "TELEMT_NOTRACK"], None).await;
-    let _ = run_command("ip6tables", &["-t", "raw", "-X", "TELEMT_NOTRACK"], None).await;
-}
-
-enum DeleteOutcome {
-    Deleted,
-    NotFound,
-    Error,
-}
-
-async fn delete_conntrack_entry(event: ConntrackCloseEvent) -> DeleteOutcome {
-    if !command_exists("conntrack") {
-        return DeleteOutcome::Error;
-    }
-    let args = vec![
-        "-D".to_string(),
-        "-p".to_string(),
-        "tcp".to_string(),
-        "-s".to_string(),
-        event.src.ip().to_string(),
-        "--sport".to_string(),
-        event.src.port().to_string(),
-        "-d".to_string(),
-        event.dst.ip().to_string(),
-        "--dport".to_string(),
-        event.dst.port().to_string(),
-    ];
-    let arg_refs: Vec<&str> = args.iter().map(String::as_str).collect();
-    match run_command("conntrack", &arg_refs, None).await {
-        Ok(()) => DeleteOutcome::Deleted,
-        Err(error) => {
-            if error.contains("0 flow entries have been deleted") {
-                DeleteOutcome::NotFound
-            } else {
-                debug!(error = %error, "conntrack delete failed");
-                DeleteOutcome::Error
-            }
-        }
-    }
-}
-
-async fn run_command(binary: &str, args: &[&str], stdin: Option<String>) -> Result<(), String> {
-    if !command_exists(binary) {
-        return Err(format!("{binary} is not available"));
-    }
-    let mut command = Command::new(binary);
-    command.args(args);
-    if stdin.is_some() {
-        command.stdin(std::process::Stdio::piped());
-    }
-    command.stdout(std::process::Stdio::null());
-    command.stderr(std::process::Stdio::piped());
-    let mut child = command
-        .spawn()
-        .map_err(|e| format!("spawn {binary} failed: {e}"))?;
-    if let Some(blob) = stdin
-        && let Some(mut writer) = child.stdin.take()
-    {
-        writer
-            .write_all(blob.as_bytes())
-            .await
-            .map_err(|e| format!("stdin write {binary} failed: {e}"))?;
-    }
-    let output = child
-        .wait_with_output()
-        .await
-        .map_err(|e| format!("wait {binary} failed: {e}"))?;
-    if output.status.success() {
-        return Ok(());
-    }
-    let stderr = String::from_utf8_lossy(&output.stderr).trim().to_string();
-    Err(if stderr.is_empty() {
-        format!("{binary} exited with status {}", output.status)
-    } else {
-        stderr
-    })
-}
-
-fn fd_usage_pct() -> Option<u8> {
-    let soft_limit = nofile_soft_limit()?;
-    if soft_limit == 0 {
-        return None;
-    }
-    let fd_count = std::fs::read_dir("/proc/self/fd").ok()?.count() as u64;
-    Some(((fd_count.saturating_mul(100)) / soft_limit).min(100) as u8)
-}
-
-fn nofile_soft_limit() -> Option<u64> {
-    #[cfg(target_os = "linux")]
-    {
-        let mut lim = libc::rlimit {
-            rlim_cur: 0,
-            rlim_max: 0,
-        };
-        let rc = unsafe { libc::getrlimit(libc::RLIMIT_NOFILE, &mut lim) };
-        if rc != 0 {
-            return None;
-        }
-        return Some(lim.rlim_cur);
-    }
-    #[cfg(not(target_os = "linux"))]
-    {
-        None
-    }
-}
-
-fn has_cap_net_admin() -> bool {
-    #[cfg(target_os = "linux")]
-    {
-        let Ok(status) = std::fs::read_to_string("/proc/self/status") else {
-            return false;
-        };
-        for line in status.lines() {
-            if let Some(raw) = line.strip_prefix("CapEff:") {
-                let caps = raw.trim();
-                if let Ok(bits) = u64::from_str_radix(caps, 16) {
-                    const CAP_NET_ADMIN_BIT: u64 = 12;
-                    return (bits & (1u64 << CAP_NET_ADMIN_BIT)) != 0;
-                }
-            }
-        }
-        false
-    }
-    #[cfg(not(target_os = "linux"))]
-    {
-        false
-    }
-}
-
-#[cfg(test)]
-mod tests {
-    use super::*;
-    use crate::config::ProxyConfig;
-
-    #[test]
-    fn pressure_activates_on_accept_timeout_spike() {
-        let stats = Stats::new();
-        let shared = ProxySharedState::new();
-        let mut cfg = ProxyConfig::default();
-        cfg.server.conntrack_control.inline_conntrack_control = true;
-        let mut state = PressureState::new(&stats);
-        let sample = PressureSample {
-            conn_pct: Some(10),
-            fd_pct: Some(10),
-            accept_timeout_delta: 1,
-            me_queue_pressure_delta: 0,
-        };
-
-        update_pressure_state(&stats, shared.as_ref(), &cfg, &sample, &mut state);
-
-        assert!(state.active);
-        assert!(shared.conntrack_pressure_active());
-        assert!(stats.get_conntrack_pressure_active());
-    }
-
-    #[test]
-    fn pressure_releases_after_hysteresis_window() {
-        let stats = Stats::new();
-        let shared = ProxySharedState::new();
-        let mut cfg = ProxyConfig::default();
-        cfg.server.conntrack_control.inline_conntrack_control = true;
-        let mut state = PressureState::new(&stats);
-
-        let high_sample = PressureSample {
-            conn_pct: Some(95),
-            fd_pct: Some(95),
-            accept_timeout_delta: 0,
-            me_queue_pressure_delta: 0,
-        };
-        update_pressure_state(&stats, shared.as_ref(), &cfg, &high_sample, &mut state);
-        assert!(state.active);
-
-        let low_sample = PressureSample {
-            conn_pct: Some(10),
-            fd_pct: Some(10),
-            accept_timeout_delta: 0,
-            me_queue_pressure_delta: 0,
-        };
-        update_pressure_state(&stats, shared.as_ref(), &cfg, &low_sample, &mut state);
-        assert!(state.active);
-        update_pressure_state(&stats, shared.as_ref(), &cfg, &low_sample, &mut state);
-        assert!(state.active);
-        update_pressure_state(&stats, shared.as_ref(), &cfg, &low_sample, &mut state);
-
-        assert!(!state.active);
-        assert!(!shared.conntrack_pressure_active());
-        assert!(!stats.get_conntrack_pressure_active());
-    }
-
-    #[test]
-    fn pressure_does_not_activate_when_disabled() {
-        let stats = Stats::new();
-        let shared = ProxySharedState::new();
-        let mut cfg = ProxyConfig::default();
-        cfg.server.conntrack_control.inline_conntrack_control = false;
-        let mut state = PressureState::new(&stats);
-        let sample = PressureSample {
-            conn_pct: Some(100),
-            fd_pct: Some(100),
-            accept_timeout_delta: 10,
-            me_queue_pressure_delta: 10,
-        };
-
-        update_pressure_state(&stats, shared.as_ref(), &cfg, &sample, &mut state);
-
-        assert!(!state.active);
-        assert!(!shared.conntrack_pressure_active());
-        assert!(!stats.get_conntrack_pressure_active());
-    }
-}
@@ -13,13 +13,10 @@

 #![allow(dead_code)]

-use crate::error::{ProxyError, Result};
 use aes::Aes256;
-use ctr::{
-    Ctr128BE,
-    cipher::{KeyIvInit, StreamCipher},
-};
+use ctr::{Ctr128BE, cipher::{KeyIvInit, StreamCipher}};
 use zeroize::Zeroize;
+
+use crate::error::{ProxyError, Result};

 type Aes256Ctr = Ctr128BE<Aes256>;

@@ -49,16 +46,10 @@ impl AesCtr
     /// Create from key and IV slices
     pub fn from_key_iv(key: &[u8], iv: &[u8]) -> Result<Self> {
         if key.len() != 32 {
-            return Err(ProxyError::InvalidKeyLength {
-                expected: 32,
-                got: key.len(),
-            });
+            return Err(ProxyError::InvalidKeyLength { expected: 32, got: key.len() });
         }
         if iv.len() != 16 {
-            return Err(ProxyError::InvalidKeyLength {
-                expected: 16,
-                got: iv.len(),
-            });
+            return Err(ProxyError::InvalidKeyLength { expected: 16, got: iv.len() });
         }

         let key: [u8; 32] = key.try_into().unwrap();

@@ -117,16 +108,10 @@ impl AesCbc
     /// Create from slices
     pub fn from_slices(key: &[u8], iv: &[u8]) -> Result<Self> {
         if key.len() != 32 {
-            return Err(ProxyError::InvalidKeyLength {
-                expected: 32,
-                got: key.len(),
-            });
+            return Err(ProxyError::InvalidKeyLength { expected: 32, got: key.len() });
         }
         if iv.len() != 16 {
-            return Err(ProxyError::InvalidKeyLength {
-                expected: 16,
-                got: iv.len(),
-            });
+            return Err(ProxyError::InvalidKeyLength { expected: 16, got: iv.len() });
         }

         Ok(Self {

@@ -165,10 +150,9 @@ impl AesCbc
     /// CBC Encryption: C[i] = AES_Encrypt(P[i] XOR C[i-1]), where C[-1] = IV
     pub fn encrypt(&self, data: &[u8]) -> Result<Vec<u8>> {
         if !data.len().is_multiple_of(Self::BLOCK_SIZE) {
-            return Err(ProxyError::Crypto(format!(
-                "CBC data must be aligned to 16 bytes, got {}",
-                data.len()
-            )));
+            return Err(ProxyError::Crypto(
+                format!("CBC data must be aligned to 16 bytes, got {}", data.len())
+            ));
         }

         if data.is_empty() {

@@ -197,10 +181,9 @@ impl AesCbc
     /// CBC Decryption: P[i] = AES_Decrypt(C[i]) XOR C[i-1], where C[-1] = IV
     pub fn decrypt(&self, data: &[u8]) -> Result<Vec<u8>> {
         if !data.len().is_multiple_of(Self::BLOCK_SIZE) {
-            return Err(ProxyError::Crypto(format!(
-                "CBC data must be aligned to 16 bytes, got {}",
-                data.len()
-            )));
+            return Err(ProxyError::Crypto(
+                format!("CBC data must be aligned to 16 bytes, got {}", data.len())
+            ));
         }

         if data.is_empty() {

@@ -227,10 +210,9 @@ impl AesCbc
     /// Encrypt data in-place
     pub fn encrypt_in_place(&self, data: &mut [u8]) -> Result<()> {
         if !data.len().is_multiple_of(Self::BLOCK_SIZE) {
-            return Err(ProxyError::Crypto(format!(
-                "CBC data must be aligned to 16 bytes, got {}",
-                data.len()
-            )));
+            return Err(ProxyError::Crypto(
+                format!("CBC data must be aligned to 16 bytes, got {}", data.len())
+            ));
         }

         if data.is_empty() {

@@ -261,10 +243,9 @@ impl AesCbc
     /// Decrypt data in-place
     pub fn decrypt_in_place(&self, data: &mut [u8]) -> Result<()> {
         if !data.len().is_multiple_of(Self::BLOCK_SIZE) {
-            return Err(ProxyError::Crypto(format!(
-                "CBC data must be aligned to 16 bytes, got {}",
-                data.len()
-            )));
+            return Err(ProxyError::Crypto(
+                format!("CBC data must be aligned to 16 bytes, got {}", data.len())
+            ));
         }

         if data.is_empty() {
@@ -12,10 +12,10 @@
 //! usages are intentional and protocol-mandated.

 use hmac::{Hmac, Mac};
+use sha2::Sha256;
 use md5::Md5;
 use sha1::Sha1;
 use sha2::Digest;
-use sha2::Sha256;

 type HmacSha256 = Hmac<Sha256>;

@@ -28,7 +28,8 @@ pub fn sha256(data: &[u8]) -> [u8; 32]

 /// SHA-256 HMAC
 pub fn sha256_hmac(key: &[u8], data: &[u8]) -> [u8; 32] {
-    let mut mac = HmacSha256::new_from_slice(key).expect("HMAC accepts any key length");
+    let mut mac = HmacSha256::new_from_slice(key)
+        .expect("HMAC accepts any key length");
     mac.update(data);
     mac.finalize().into_bytes().into()
 }

@@ -123,8 +124,17 @@ pub fn derive_middleproxy_keys(
     srv_ipv6: Option<&[u8; 16]>,
 ) -> ([u8; 32], [u8; 16]) {
     let s = build_middleproxy_prekey(
-        nonce_srv, nonce_clt, clt_ts, srv_ip, clt_port, purpose, clt_ip, srv_port, secret,
-        clt_ipv6, srv_ipv6,
+        nonce_srv,
+        nonce_clt,
+        clt_ts,
+        srv_ip,
+        clt_port,
+        purpose,
+        clt_ip,
+        srv_port,
+        secret,
+        clt_ipv6,
+        srv_ipv6,
     );

     let md5_1 = md5(&s[1..]);

@@ -154,8 +164,17 @@ mod tests
     let secret = vec![0x55u8; 128];

     let prekey = build_middleproxy_prekey(
-        &nonce_srv, &nonce_clt, &clt_ts, srv_ip, &clt_port, b"CLIENT", clt_ip, &srv_port,
-        &secret, None, None,
+        &nonce_srv,
+        &nonce_clt,
+        &clt_ts,
+        srv_ip,
+        &clt_port,
+        b"CLIENT",
+        clt_ip,
+        &srv_port,
+        &secret,
+        None,
+        None,
     );
     let digest = sha256(&prekey);
     assert_eq!(
@@ -4,7 +4,7 @@ pub mod aes;
 pub mod hash;
 pub mod random;

-pub use aes::{AesCbc, AesCtr};
+pub use aes::{AesCtr, AesCbc};
 pub use hash::{
     build_middleproxy_prekey, crc32, crc32c, derive_middleproxy_keys, sha256, sha256_hmac,
 };
@@ -3,11 +3,11 @@
 #![allow(deprecated)]
 #![allow(dead_code)]

-use crate::crypto::AesCtr;
-use parking_lot::Mutex;
+use rand::{Rng, RngCore, SeedableRng};
 use rand::rngs::StdRng;
-use rand::{Rng, RngExt, SeedableRng};
+use parking_lot::Mutex;
 use zeroize::Zeroize;
+use crate::crypto::AesCtr;

 /// Cryptographically secure PRNG with AES-CTR
 pub struct SecureRandom {

@@ -101,7 +101,7 @@ impl SecureRandom
             return 0;
         }
         let mut inner = self.inner.lock();
-        inner.rng.random_range(0..max)
+        inner.rng.gen_range(0..max)
     }

     /// Generate random bits

@@ -141,7 +141,7 @@ impl SecureRandom
     pub fn shuffle<T>(&self, slice: &mut [T]) {
         let mut inner = self.inner.lock();
         for i in (1..slice.len()).rev() {
-            let j = inner.rng.random_range(0..=i);
+            let j = inner.rng.gen_range(0..=i);
             slice.swap(i, j);
         }
     }
@@ -1,541 +0,0 @@
//! Unix daemon support for telemt.
//!
//! Provides classic Unix daemonization (double-fork), PID file management,
//! and privilege dropping for running telemt as a background service.

use std::fs::{self, File, OpenOptions};
use std::io::{self, Read, Write};
use std::os::unix::fs::OpenOptionsExt;
use std::path::{Path, PathBuf};

use nix::fcntl::{Flock, FlockArg};
use nix::unistd::{self, ForkResult, Gid, Pid, Uid, chdir, close, fork, getpid, setsid};
use tracing::{debug, info, warn};

/// Default PID file location.
pub const DEFAULT_PID_FILE: &str = "/var/run/telemt.pid";

/// Daemon configuration options parsed from CLI.
#[derive(Debug, Clone, Default)]
pub struct DaemonOptions {
    /// Run as daemon (fork to background).
    pub daemonize: bool,
    /// Path to PID file.
    pub pid_file: Option<PathBuf>,
    /// User to run as after binding sockets.
    pub user: Option<String>,
    /// Group to run as after binding sockets.
    pub group: Option<String>,
    /// Working directory for the daemon.
    pub working_dir: Option<PathBuf>,
    /// Explicit foreground mode (for systemd Type=simple).
    pub foreground: bool,
}

impl DaemonOptions {
    /// Returns the effective PID file path.
    pub fn pid_file_path(&self) -> &Path {
        self.pid_file
            .as_deref()
            .unwrap_or(Path::new(DEFAULT_PID_FILE))
    }

    /// Returns true if we should actually daemonize.
    /// Foreground flag takes precedence.
    pub fn should_daemonize(&self) -> bool {
        self.daemonize && !self.foreground
    }
}

/// Error types for daemon operations.
#[derive(Debug, thiserror::Error)]
pub enum DaemonError {
    #[error("fork failed: {0}")]
    ForkFailed(#[source] nix::Error),

    #[error("setsid failed: {0}")]
    SetsidFailed(#[source] nix::Error),

    #[error("chdir failed: {0}")]
    ChdirFailed(#[source] nix::Error),

    #[error("failed to open /dev/null: {0}")]
    DevNullFailed(#[source] io::Error),

    #[error("failed to redirect stdio: {0}")]
    RedirectFailed(#[source] nix::Error),

    #[error("PID file error: {0}")]
    PidFile(String),

    #[error("another instance is already running (pid {0})")]
    AlreadyRunning(i32),

    #[error("user '{0}' not found")]
    UserNotFound(String),

    #[error("group '{0}' not found")]
    GroupNotFound(String),

    #[error("failed to set uid/gid: {0}")]
    PrivilegeDrop(#[source] nix::Error),

    #[error("io error: {0}")]
    Io(#[from] io::Error),
}

/// Result of a successful daemonize() call.
#[derive(Debug)]
pub enum DaemonizeResult {
    /// We are the parent process and should exit.
    Parent,
    /// We are the daemon child process and should continue.
    Child,
}

/// Performs classic Unix double-fork daemonization.
///
/// This detaches the process from the controlling terminal:
/// 1. First fork - parent exits, child continues
/// 2. setsid() - become session leader
/// 3. Second fork - ensure we can never acquire a controlling terminal
/// 4. chdir("/") - don't hold any directory open
/// 5. Redirect stdin/stdout/stderr to /dev/null
///
/// Returns `DaemonizeResult::Parent` in the original parent (which should exit),
/// or `DaemonizeResult::Child` in the final daemon child.
pub fn daemonize(working_dir: Option<&Path>) -> Result<DaemonizeResult, DaemonError> {
    // First fork
    match unsafe { fork() } {
        Ok(ForkResult::Parent { .. }) => {
            // Parent exits
            return Ok(DaemonizeResult::Parent);
        }
        Ok(ForkResult::Child) => {
            // Child continues
        }
        Err(e) => return Err(DaemonError::ForkFailed(e)),
    }

    // Create new session, become session leader
    setsid().map_err(DaemonError::SetsidFailed)?;

    // Second fork to ensure we can never acquire a controlling terminal
    match unsafe { fork() } {
        Ok(ForkResult::Parent { .. }) => {
            // Intermediate parent exits
            std::process::exit(0);
        }
        Ok(ForkResult::Child) => {
            // Final daemon child continues
        }
        Err(e) => return Err(DaemonError::ForkFailed(e)),
    }

    // Change working directory
    let target_dir = working_dir.unwrap_or(Path::new("/"));
    chdir(target_dir).map_err(DaemonError::ChdirFailed)?;

    // Redirect stdin, stdout, stderr to /dev/null
    redirect_stdio_to_devnull()?;

    Ok(DaemonizeResult::Child)
}

/// Redirects stdin, stdout, and stderr to /dev/null.
fn redirect_stdio_to_devnull() -> Result<(), DaemonError> {
    let devnull = File::options()
        .read(true)
        .write(true)
        .open("/dev/null")
        .map_err(DaemonError::DevNullFailed)?;

    let devnull_fd = std::os::unix::io::AsRawFd::as_raw_fd(&devnull);

    // Use libc::dup2 directly for redirecting standard file descriptors
    // nix 0.31's dup2 requires OwnedFd which doesn't work well with stdio fds
    unsafe {
        // Redirect stdin (fd 0)
        if libc::dup2(devnull_fd, 0) < 0 {
            return Err(DaemonError::RedirectFailed(nix::errno::Errno::last()));
        }
        // Redirect stdout (fd 1)
        if libc::dup2(devnull_fd, 1) < 0 {
            return Err(DaemonError::RedirectFailed(nix::errno::Errno::last()));
        }
        // Redirect stderr (fd 2)
        if libc::dup2(devnull_fd, 2) < 0 {
            return Err(DaemonError::RedirectFailed(nix::errno::Errno::last()));
        }
    }

    // Close original devnull fd if it's not one of the standard fds
    if devnull_fd > 2 {
        let _ = close(devnull_fd);
    }

    Ok(())
}

/// PID file manager with flock-based locking.
pub struct PidFile {
    path: PathBuf,
    file: Option<File>,
    locked: bool,
}

impl PidFile {
    /// Creates a new PID file manager for the given path.
    pub fn new<P: AsRef<Path>>(path: P) -> Self {
        Self {
            path: path.as_ref().to_path_buf(),
            file: None,
            locked: false,
        }
    }

    /// Checks if another instance is already running.
    ///
    /// Returns the PID of the running instance if one exists.
    pub fn check_running(&self) -> Result<Option<i32>, DaemonError> {
        if !self.path.exists() {
            return Ok(None);
        }

        // Try to read existing PID
        let mut contents = String::new();
        File::open(&self.path)
            .and_then(|mut f| f.read_to_string(&mut contents))
            .map_err(|e| {
                DaemonError::PidFile(format!("cannot read {}: {}", self.path.display(), e))
            })?;

        let pid: i32 = contents
            .trim()
            .parse()
            .map_err(|_| DaemonError::PidFile(format!("invalid PID in {}", self.path.display())))?;

        // Check if process is still running
        if is_process_running(pid) {
            Ok(Some(pid))
        } else {
            // Stale PID file
            debug!(pid, path = %self.path.display(), "Removing stale PID file");
            let _ = fs::remove_file(&self.path);
            Ok(None)
        }
    }

    /// Acquires the PID file lock and writes the current PID.
    ///
    /// Fails if another instance is already running.
    pub fn acquire(&mut self) -> Result<(), DaemonError> {
        // Check for running instance first
        if let Some(pid) = self.check_running()? {
            return Err(DaemonError::AlreadyRunning(pid));
        }

        // Ensure parent directory exists
        if let Some(parent) = self.path.parent() {
            if !parent.exists() {
                fs::create_dir_all(parent).map_err(|e| {
                    DaemonError::PidFile(format!(
                        "cannot create directory {}: {}",
                        parent.display(),
                        e
                    ))
                })?;
            }
        }

        // Open/create PID file with exclusive lock
        let file = OpenOptions::new()
            .write(true)
            .create(true)
            .truncate(true)
            .mode(0o644)
            .open(&self.path)
            .map_err(|e| {
                DaemonError::PidFile(format!("cannot open {}: {}", self.path.display(), e))
            })?;

        // Try to acquire exclusive lock (non-blocking)
        let flock = Flock::lock(file, FlockArg::LockExclusiveNonblock).map_err(|(_, errno)| {
            // Check if another instance grabbed the lock
            if let Some(pid) = self.check_running().ok().flatten() {
                DaemonError::AlreadyRunning(pid)
            } else {
                DaemonError::PidFile(format!("cannot lock {}: {}", self.path.display(), errno))
            }
        })?;

        // Write our PID
        let pid = getpid();
        let mut file = flock
            .unlock()
            .map_err(|(_, errno)| DaemonError::PidFile(format!("unlock failed: {}", errno)))?;

        writeln!(file, "{}", pid).map_err(|e| {
            DaemonError::PidFile(format!(
                "cannot write PID to {}: {}",
                self.path.display(),
                e
            ))
        })?;

        // Re-acquire lock and keep it
        let flock = Flock::lock(file, FlockArg::LockExclusiveNonblock).map_err(|(_, errno)| {
            DaemonError::PidFile(format!("cannot re-lock {}: {}", self.path.display(), errno))
        })?;

        self.file = Some(flock.unlock().map_err(|(_, errno)| {
            DaemonError::PidFile(format!("unlock for storage failed: {}", errno))
        })?);
        self.locked = true;

        info!(pid = pid.as_raw(), path = %self.path.display(), "PID file created");
        Ok(())
    }

    /// Releases the PID file lock and removes the file.
    pub fn release(&mut self) -> Result<(), DaemonError> {
        if let Some(file) = self.file.take() {
            drop(file);
        }
        self.locked = false;

        if self.path.exists() {
            fs::remove_file(&self.path).map_err(|e| {
                DaemonError::PidFile(format!("cannot remove {}: {}", self.path.display(), e))
            })?;
            debug!(path = %self.path.display(), "PID file removed");
        }

        Ok(())
    }

    /// Returns the path to this PID file.
    #[allow(dead_code)]
    pub fn path(&self) -> &Path {
        &self.path
    }
}

impl Drop for PidFile {
    fn drop(&mut self) {
        if self.locked {
            if let Err(e) = self.release() {
                warn!(error = %e, "Failed to clean up PID file on drop");
            }
        }
    }
}

/// Checks if a process with the given PID is running.
fn is_process_running(pid: i32) -> bool {
    // kill(pid, 0) checks if process exists without sending a signal
    nix::sys::signal::kill(Pid::from_raw(pid), None).is_ok()
}

/// Drops privileges to the specified user and group.
///
/// This should be called after binding privileged ports but before
/// entering the main event loop.
pub fn drop_privileges(user: Option<&str>, group: Option<&str>) -> Result<(), DaemonError> {
    // Look up group first (need to do this while still root)
    let target_gid = if let Some(group_name) = group {
        Some(lookup_group(group_name)?)
    } else if let Some(user_name) = user {
        // If no group specified but user is, use user's primary group
        Some(lookup_user_primary_gid(user_name)?)
    } else {
        None
    };

    // Look up user
    let target_uid = if let Some(user_name) = user {
        Some(lookup_user(user_name)?)
    } else {
        None
    };

    // Drop privileges: set GID first, then UID
    // (Setting UID first would prevent us from setting GID)
    if let Some(gid) = target_gid {
        unistd::setgid(gid).map_err(DaemonError::PrivilegeDrop)?;
        // Also set supplementary groups to just this one
        unistd::setgroups(&[gid]).map_err(DaemonError::PrivilegeDrop)?;
        info!(gid = gid.as_raw(), "Dropped group privileges");
    }

    if let Some(uid) = target_uid {
        unistd::setuid(uid).map_err(DaemonError::PrivilegeDrop)?;
        info!(uid = uid.as_raw(), "Dropped user privileges");
    }

    Ok(())
}

/// Looks up a user by name and returns their UID.
fn lookup_user(name: &str) -> Result<Uid, DaemonError> {
    // Use libc getpwnam
    let c_name =
        std::ffi::CString::new(name).map_err(|_| DaemonError::UserNotFound(name.to_string()))?;

    unsafe {
        let pwd = libc::getpwnam(c_name.as_ptr());
        if pwd.is_null() {
            Err(DaemonError::UserNotFound(name.to_string()))
        } else {
            Ok(Uid::from_raw((*pwd).pw_uid))
        }
    }
}

/// Looks up a user's primary GID by username.
fn lookup_user_primary_gid(name: &str) -> Result<Gid, DaemonError> {
    let c_name =
        std::ffi::CString::new(name).map_err(|_| DaemonError::UserNotFound(name.to_string()))?;

    unsafe {
        let pwd = libc::getpwnam(c_name.as_ptr());
        if pwd.is_null() {
            Err(DaemonError::UserNotFound(name.to_string()))
        } else {
            Ok(Gid::from_raw((*pwd).pw_gid))
        }
    }
}

/// Looks up a group by name and returns its GID.
fn lookup_group(name: &str) -> Result<Gid, DaemonError> {
    let c_name =
        std::ffi::CString::new(name).map_err(|_| DaemonError::GroupNotFound(name.to_string()))?;

    unsafe {
        let grp = libc::getgrnam(c_name.as_ptr());
        if grp.is_null() {
            Err(DaemonError::GroupNotFound(name.to_string()))
        } else {
            Ok(Gid::from_raw((*grp).gr_gid))
        }
    }
}

/// Reads PID from a PID file.
#[allow(dead_code)]
pub fn read_pid_file<P: AsRef<Path>>(path: P) -> Result<i32, DaemonError> {
    let path = path.as_ref();
    let mut contents = String::new();
    File::open(path)
        .and_then(|mut f| f.read_to_string(&mut contents))
        .map_err(|e| DaemonError::PidFile(format!("cannot read {}: {}", path.display(), e)))?;

    contents
        .trim()
        .parse()
        .map_err(|_| DaemonError::PidFile(format!("invalid PID in {}", path.display())))
}

/// Sends a signal to the process specified in a PID file.
#[allow(dead_code)]
pub fn signal_pid_file<P: AsRef<Path>>(
    path: P,
    signal: nix::sys::signal::Signal,
) -> Result<(), DaemonError> {
    let pid = read_pid_file(&path)?;

    if !is_process_running(pid) {
        return Err(DaemonError::PidFile(format!(
            "process {} from {} is not running",
            pid,
            path.as_ref().display()
        )));
    }

    nix::sys::signal::kill(Pid::from_raw(pid), signal)
        .map_err(|e| DaemonError::PidFile(format!("cannot signal process {}: {}", pid, e)))?;

    Ok(())
}

/// Returns the status of the daemon based on PID file.
#[allow(dead_code)]
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum DaemonStatus {
    /// Daemon is running with the given PID.
    Running(i32),
    /// PID file exists but process is not running.
    Stale(i32),
    /// No PID file exists.
    NotRunning,
}

/// Checks the daemon status from a PID file.
#[allow(dead_code)]
pub fn check_status<P: AsRef<Path>>(path: P) -> DaemonStatus {
    let path = path.as_ref();

    if !path.exists() {
        return DaemonStatus::NotRunning;
    }

    match read_pid_file(path) {
        Ok(pid) => {
            if is_process_running(pid) {
                DaemonStatus::Running(pid)
            } else {
                DaemonStatus::Stale(pid)
            }
        }
        Err(_) => DaemonStatus::NotRunning,
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_daemon_options_default() {
        let opts = DaemonOptions::default();
        assert!(!opts.daemonize);
        assert!(!opts.should_daemonize());
        assert_eq!(opts.pid_file_path(), Path::new(DEFAULT_PID_FILE));
    }

    #[test]
    fn test_daemon_options_foreground_overrides() {
        let opts = DaemonOptions {
            daemonize: true,
            foreground: true,
            ..Default::default()
        };
        assert!(!opts.should_daemonize());
    }

    #[test]
    fn test_check_status_not_running() {
        let path = "/tmp/telemt_test_nonexistent.pid";
        assert_eq!(check_status(path), DaemonStatus::NotRunning);
    }

    #[test]
    fn test_pid_file_basic() {
        let path = "/tmp/telemt_test_pidfile.pid";
        let _ = fs::remove_file(path);

        let mut pf = PidFile::new(path);
        assert!(pf.check_running().unwrap().is_none());

        pf.acquire().unwrap();
        assert!(Path::new(path).exists());

        // Read it back
        let pid = read_pid_file(path).unwrap();
        assert_eq!(pid, std::process::id() as i32);

        pf.release().unwrap();
        assert!(!Path::new(path).exists());
    }
}
src/error.rs (101 lines changed)
@ -12,15 +12,28 @@ use thiserror::Error;
|
||||||
#[derive(Debug)]
|
#[derive(Debug)]
|
||||||
pub enum StreamError {
|
pub enum StreamError {
|
||||||
/// Partial read: got fewer bytes than expected
|
/// Partial read: got fewer bytes than expected
|
||||||
PartialRead { expected: usize, got: usize },
|
PartialRead {
|
||||||
|
expected: usize,
|
||||||
|
got: usize,
|
||||||
|
},
|
||||||
/// Partial write: wrote fewer bytes than expected
|
/// Partial write: wrote fewer bytes than expected
|
||||||
PartialWrite { expected: usize, written: usize },
|
PartialWrite {
|
||||||
|
expected: usize,
|
||||||
|
written: usize,
|
||||||
|
},
|
||||||
/// Stream is in poisoned state and cannot be used
|
/// Stream is in poisoned state and cannot be used
|
||||||
Poisoned { reason: String },
|
Poisoned {
|
||||||
|
reason: String,
|
||||||
|
},
|
||||||
/// Buffer overflow: attempted to buffer more than allowed
|
/// Buffer overflow: attempted to buffer more than allowed
|
||||||
BufferOverflow { limit: usize, attempted: usize },
|
BufferOverflow {
|
||||||
|
limit: usize,
|
||||||
|
attempted: usize,
|
||||||
|
},
|
||||||
/// Invalid frame format
|
/// Invalid frame format
|
||||||
InvalidFrame { details: String },
|
InvalidFrame {
|
||||||
|
details: String,
|
||||||
|
},
|
||||||
/// Unexpected end of stream
|
/// Unexpected end of stream
|
||||||
UnexpectedEof,
|
UnexpectedEof,
|
||||||
/// Underlying I/O error
|
/// Underlying I/O error
|
||||||
|
|
@@ -34,21 +47,13 @@ impl fmt::Display for StreamError {
             write!(f, "partial read: expected {} bytes, got {}", expected, got)
         }
         Self::PartialWrite { expected, written } => {
-            write!(
-                f,
-                "partial write: expected {} bytes, wrote {}",
-                expected, written
-            )
+            write!(f, "partial write: expected {} bytes, wrote {}", expected, written)
         }
         Self::Poisoned { reason } => {
             write!(f, "stream poisoned: {}", reason)
         }
         Self::BufferOverflow { limit, attempted } => {
-            write!(
-                f,
-                "buffer overflow: limit {}, attempted {}",
-                limit, attempted
-            )
+            write!(f, "buffer overflow: limit {}, attempted {}", limit, attempted)
         }
         Self::InvalidFrame { details } => {
             write!(f, "invalid frame: {}", details)
@@ -85,7 +90,9 @@ impl From<StreamError> for std::io::Error {
             StreamError::UnexpectedEof => {
                 std::io::Error::new(std::io::ErrorKind::UnexpectedEof, err)
             }
-            StreamError::Poisoned { .. } => std::io::Error::other(err),
+            StreamError::Poisoned { .. } => {
+                std::io::Error::other(err)
+            }
             StreamError::BufferOverflow { .. } => {
                 std::io::Error::new(std::io::ErrorKind::OutOfMemory, err)
             }

@@ -128,10 +135,7 @@ impl Recoverable for StreamError {
     }

     fn can_continue(&self) -> bool {
-        !matches!(
-            self,
-            Self::Poisoned { .. } | Self::UnexpectedEof | Self::BufferOverflow { .. }
-        )
+        !matches!(self, Self::Poisoned { .. } | Self::UnexpectedEof | Self::BufferOverflow { .. })
     }
 }
@@ -161,6 +165,7 @@ impl Recoverable for std::io::Error {
 #[derive(Error, Debug)]
 pub enum ProxyError {
     // ============= Crypto Errors =============
+
     #[error("Crypto error: {0}")]
     Crypto(String),

@@ -168,10 +173,12 @@ pub enum ProxyError {
     InvalidKeyLength { expected: usize, got: usize },

     // ============= Stream Errors =============
+
     #[error("Stream error: {0}")]
     Stream(#[from] StreamError),

     // ============= Protocol Errors =============
+
     #[error("Invalid handshake: {0}")]
     InvalidHandshake(String),

@@ -203,6 +210,7 @@ pub enum ProxyError {
     TgHandshakeTimeout,

     // ============= Network Errors =============
+
     #[error("Connection timeout to {addr}")]
     ConnectionTimeout { addr: String },

@@ -213,16 +221,15 @@ pub enum ProxyError {
     Io(#[from] std::io::Error),

     // ============= Proxy Protocol Errors =============
+
     #[error("Invalid proxy protocol header")]
     InvalidProxyProtocol,

-    #[error("Unknown TLS SNI")]
-    UnknownTlsSni,
-
     #[error("Proxy error: {0}")]
     Proxy(String),

     // ============= Config Errors =============
+
     #[error("Config error: {0}")]
     Config(String),

@@ -230,6 +237,7 @@ pub enum ProxyError {
     InvalidSecret { user: String, reason: String },

     // ============= User Errors =============
+
     #[error("User {user} expired")]
     UserExpired { user: String },

@@ -246,6 +254,7 @@ pub enum ProxyError {
     RateLimited,

     // ============= General Errors =============
+
     #[error("Internal error: {0}")]
     Internal(String),
 }
@@ -302,9 +311,7 @@ impl<T, R, W> HandshakeResult<T, R, W> {
     pub fn map<U, F: FnOnce(T) -> U>(self, f: F) -> HandshakeResult<U, R, W> {
         match self {
             HandshakeResult::Success(v) => HandshakeResult::Success(f(v)),
-            HandshakeResult::BadClient { reader, writer } => {
-                HandshakeResult::BadClient { reader, writer }
-            }
+            HandshakeResult::BadClient { reader, writer } => HandshakeResult::BadClient { reader, writer },
             HandshakeResult::Error(e) => HandshakeResult::Error(e),
         }
     }
@@ -334,35 +341,18 @@ mod tests {

     #[test]
     fn test_stream_error_display() {
-        let err = StreamError::PartialRead {
-            expected: 100,
-            got: 50,
-        };
+        let err = StreamError::PartialRead { expected: 100, got: 50 };
         assert!(err.to_string().contains("100"));
         assert!(err.to_string().contains("50"));

-        let err = StreamError::Poisoned {
-            reason: "test".into(),
-        };
+        let err = StreamError::Poisoned { reason: "test".into() };
         assert!(err.to_string().contains("test"));
     }

     #[test]
     fn test_stream_error_recoverable() {
-        assert!(
-            StreamError::PartialRead {
-                expected: 10,
-                got: 5
-            }
-            .is_recoverable()
-        );
-        assert!(
-            StreamError::PartialWrite {
-                expected: 10,
-                written: 5
-            }
-            .is_recoverable()
-        );
+        assert!(StreamError::PartialRead { expected: 10, got: 5 }.is_recoverable());
+        assert!(StreamError::PartialWrite { expected: 10, written: 5 }.is_recoverable());
         assert!(!StreamError::Poisoned { reason: "x".into() }.is_recoverable());
         assert!(!StreamError::UnexpectedEof.is_recoverable());
     }
@@ -371,13 +361,7 @@ mod tests {
     fn test_stream_error_can_continue() {
         assert!(!StreamError::Poisoned { reason: "x".into() }.can_continue());
         assert!(!StreamError::UnexpectedEof.can_continue());
-        assert!(
-            StreamError::PartialRead {
-                expected: 10,
-                got: 5
-            }
-            .can_continue()
-        );
+        assert!(StreamError::PartialRead { expected: 10, got: 5 }.can_continue());
     }

     #[test]

@@ -393,10 +377,7 @@ mod tests {
         assert!(success.is_success());
         assert!(!success.is_bad_client());

-        let bad: HandshakeResult<i32, (), ()> = HandshakeResult::BadClient {
-            reader: (),
-            writer: (),
-        };
+        let bad: HandshakeResult<i32, (), ()> = HandshakeResult::BadClient { reader: (), writer: () };
         assert!(!bad.is_success());
         assert!(bad.is_bad_client());
     }

@@ -423,9 +404,7 @@ mod tests {

     #[test]
     fn test_error_display() {
-        let err = ProxyError::ConnectionTimeout {
-            addr: "1.2.3.4:443".into(),
-        };
+        let err = ProxyError::ConnectionTimeout { addr: "1.2.3.4:443".into() };
         assert!(err.to_string().contains("1.2.3.4:443"));

         let err = ProxyError::InvalidProxyProtocol;
@@ -5,11 +5,10 @@
 use std::collections::HashMap;
 use std::net::IpAddr;
 use std::sync::Arc;
-use std::sync::Mutex;
 use std::sync::atomic::{AtomicU64, Ordering};
 use std::time::{Duration, Instant};

-use tokio::sync::{Mutex as AsyncMutex, RwLock};
+use tokio::sync::RwLock;

 use crate::config::UserMaxUniqueIpsMode;

@@ -18,21 +17,9 @@ pub struct UserIpTracker {
     active_ips: Arc<RwLock<HashMap<String, HashMap<IpAddr, usize>>>>,
     recent_ips: Arc<RwLock<HashMap<String, HashMap<IpAddr, Instant>>>>,
     max_ips: Arc<RwLock<HashMap<String, usize>>>,
-    default_max_ips: Arc<RwLock<usize>>,
     limit_mode: Arc<RwLock<UserMaxUniqueIpsMode>>,
     limit_window: Arc<RwLock<Duration>>,
     last_compact_epoch_secs: Arc<AtomicU64>,
-    cleanup_queue: Arc<Mutex<Vec<(String, IpAddr)>>>,
-    cleanup_drain_lock: Arc<AsyncMutex<()>>,
-}
-
-#[derive(Debug, Clone, Copy)]
-pub struct UserIpTrackerMemoryStats {
-    pub active_users: usize,
-    pub recent_users: usize,
-    pub active_entries: usize,
-    pub recent_entries: usize,
-    pub cleanup_queue_len: usize,
 }

 impl UserIpTracker {
@@ -41,83 +28,9 @@ impl UserIpTracker {
             active_ips: Arc::new(RwLock::new(HashMap::new())),
             recent_ips: Arc::new(RwLock::new(HashMap::new())),
             max_ips: Arc::new(RwLock::new(HashMap::new())),
-            default_max_ips: Arc::new(RwLock::new(0)),
             limit_mode: Arc::new(RwLock::new(UserMaxUniqueIpsMode::ActiveWindow)),
             limit_window: Arc::new(RwLock::new(Duration::from_secs(30))),
             last_compact_epoch_secs: Arc::new(AtomicU64::new(0)),
-            cleanup_queue: Arc::new(Mutex::new(Vec::new())),
-            cleanup_drain_lock: Arc::new(AsyncMutex::new(())),
-        }
-    }
-
-    pub fn enqueue_cleanup(&self, user: String, ip: IpAddr) {
-        match self.cleanup_queue.lock() {
-            Ok(mut queue) => queue.push((user, ip)),
-            Err(poisoned) => {
-                let mut queue = poisoned.into_inner();
-                queue.push((user.clone(), ip));
-                self.cleanup_queue.clear_poison();
-                tracing::warn!(
-                    "UserIpTracker cleanup_queue lock poisoned; recovered and enqueued IP cleanup for {} ({})",
-                    user,
-                    ip
-                );
-            }
-        }
-    }
-
-    #[cfg(test)]
-    pub(crate) fn cleanup_queue_len_for_tests(&self) -> usize {
-        self.cleanup_queue
-            .lock()
-            .unwrap_or_else(|poisoned| poisoned.into_inner())
-            .len()
-    }
-
-    #[cfg(test)]
-    pub(crate) fn cleanup_queue_mutex_for_tests(&self) -> Arc<Mutex<Vec<(String, IpAddr)>>> {
-        Arc::clone(&self.cleanup_queue)
-    }
-
-    pub(crate) async fn drain_cleanup_queue(&self) {
-        // Serialize queue draining and active-IP mutation so check-and-add cannot
-        // observe stale active entries that are already queued for removal.
-        let _drain_guard = self.cleanup_drain_lock.lock().await;
-        let to_remove = {
-            match self.cleanup_queue.lock() {
-                Ok(mut queue) => {
-                    if queue.is_empty() {
-                        return;
-                    }
-                    std::mem::take(&mut *queue)
-                }
-                Err(poisoned) => {
-                    let mut queue = poisoned.into_inner();
-                    if queue.is_empty() {
-                        self.cleanup_queue.clear_poison();
-                        return;
-                    }
-                    let drained = std::mem::take(&mut *queue);
-                    self.cleanup_queue.clear_poison();
-                    drained
-                }
-            }
-        };
-
-        let mut active_ips = self.active_ips.write().await;
-        for (user, ip) in to_remove {
-            if let Some(user_ips) = active_ips.get_mut(&user) {
-                if let Some(count) = user_ips.get_mut(&ip) {
-                    if *count > 1 {
-                        *count -= 1;
-                    } else {
-                        user_ips.remove(&ip);
-                    }
-                }
-                if user_ips.is_empty() {
-                    active_ips.remove(&user);
-                }
-            }
         }
     }
@@ -150,15 +63,7 @@ impl UserIpTracker {
         let mut active_ips = self.active_ips.write().await;
         let mut recent_ips = self.recent_ips.write().await;
-        let window = *self.limit_window.read().await;
-        let now = Instant::now();
-
-        for user_recent in recent_ips.values_mut() {
-            Self::prune_recent(user_recent, now, window);
-        }
-
-        let mut users =
-            Vec::<String>::with_capacity(active_ips.len().saturating_add(recent_ips.len()));
+        let mut users = Vec::<String>::with_capacity(active_ips.len().saturating_add(recent_ips.len()));
         users.extend(active_ips.keys().cloned());
         for user in recent_ips.keys() {
             if !active_ips.contains_key(user) {

@@ -167,14 +72,8 @@ impl UserIpTracker {
         }

         for user in users {
-            let active_empty = active_ips
-                .get(&user)
-                .map(|ips| ips.is_empty())
-                .unwrap_or(true);
-            let recent_empty = recent_ips
-                .get(&user)
-                .map(|ips| ips.is_empty())
-                .unwrap_or(true);
+            let active_empty = active_ips.get(&user).map(|ips| ips.is_empty()).unwrap_or(true);
+            let recent_empty = recent_ips.get(&user).map(|ips| ips.is_empty()).unwrap_or(true);
             if active_empty && recent_empty {
                 active_ips.remove(&user);
                 recent_ips.remove(&user);
@@ -182,26 +81,6 @@ impl UserIpTracker {
             }
         }
     }

-    pub async fn memory_stats(&self) -> UserIpTrackerMemoryStats {
-        let cleanup_queue_len = self
-            .cleanup_queue
-            .lock()
-            .unwrap_or_else(|poisoned| poisoned.into_inner())
-            .len();
-        let active_ips = self.active_ips.read().await;
-        let recent_ips = self.recent_ips.read().await;
-        let active_entries = active_ips.values().map(HashMap::len).sum();
-        let recent_entries = recent_ips.values().map(HashMap::len).sum();
-
-        UserIpTrackerMemoryStats {
-            active_users: active_ips.len(),
-            recent_users: recent_ips.len(),
-            active_entries,
-            recent_entries,
-            cleanup_queue_len,
-        }
-    }
-
     pub async fn set_limit_policy(&self, mode: UserMaxUniqueIpsMode, window_secs: u64) {
         {
             let mut current_mode = self.limit_mode.write().await;

@@ -221,10 +100,7 @@ impl UserIpTracker {
         limits.remove(username);
     }

-    pub async fn load_limits(&self, default_limit: usize, limits: &HashMap<String, usize>) {
-        let mut default_max_ips = self.default_max_ips.write().await;
-        *default_max_ips = default_limit;
-        drop(default_max_ips);
+    pub async fn load_limits(&self, limits: &HashMap<String, usize>) {
         let mut max_ips = self.max_ips.write().await;
         max_ips.clone_from(limits);
     }
@@ -237,16 +113,10 @@ impl UserIpTracker {
     }

     pub async fn check_and_add(&self, username: &str, ip: IpAddr) -> Result<(), String> {
-        self.drain_cleanup_queue().await;
         self.maybe_compact_empty_users().await;
-        let default_max_ips = *self.default_max_ips.read().await;
         let limit = {
             let max_ips = self.max_ips.read().await;
-            max_ips
-                .get(username)
-                .copied()
-                .filter(|limit| *limit > 0)
-                .or((default_max_ips > 0).then_some(default_max_ips))
+            max_ips.get(username).copied()
         };
         let mode = *self.limit_mode.read().await;
         let window = *self.limit_window.read().await;
@@ -314,7 +184,6 @@ impl UserIpTracker {
     }

     pub async fn get_recent_counts_for_users(&self, users: &[String]) -> HashMap<String, usize> {
-        self.drain_cleanup_queue().await;
         let window = *self.limit_window.read().await;
         let now = Instant::now();
         let recent_ips = self.recent_ips.read().await;

@@ -335,7 +204,6 @@ impl UserIpTracker {
     }

     pub async fn get_active_ips_for_users(&self, users: &[String]) -> HashMap<String, Vec<IpAddr>> {
-        self.drain_cleanup_queue().await;
         let active_ips = self.active_ips.read().await;
         let mut out = HashMap::with_capacity(users.len());
         for user in users {

@@ -350,7 +218,6 @@ impl UserIpTracker {
     }

     pub async fn get_recent_ips_for_users(&self, users: &[String]) -> HashMap<String, Vec<IpAddr>> {
-        self.drain_cleanup_queue().await;
         let window = *self.limit_window.read().await;
         let now = Instant::now();
         let recent_ips = self.recent_ips.read().await;

@@ -373,13 +240,11 @@ impl UserIpTracker {
     }

     pub async fn get_active_ip_count(&self, username: &str) -> usize {
-        self.drain_cleanup_queue().await;
         let active_ips = self.active_ips.read().await;
         active_ips.get(username).map(|ips| ips.len()).unwrap_or(0)
     }

     pub async fn get_active_ips(&self, username: &str) -> Vec<IpAddr> {
-        self.drain_cleanup_queue().await;
         let active_ips = self.active_ips.read().await;
         active_ips
             .get(username)
@@ -388,19 +253,12 @@ impl UserIpTracker {
     }

     pub async fn get_stats(&self) -> Vec<(String, usize, usize)> {
-        self.drain_cleanup_queue().await;
         let active_ips = self.active_ips.read().await;
         let max_ips = self.max_ips.read().await;
-        let default_max_ips = *self.default_max_ips.read().await;

         let mut stats = Vec::new();
         for (username, user_ips) in active_ips.iter() {
-            let limit = max_ips
-                .get(username)
-                .copied()
-                .filter(|limit| *limit > 0)
-                .or((default_max_ips > 0).then_some(default_max_ips))
-                .unwrap_or(0);
+            let limit = max_ips.get(username).copied().unwrap_or(0);
             stats.push((username.clone(), user_ips.len(), limit));
         }

@@ -427,7 +285,6 @@ impl UserIpTracker {
     }

     pub async fn is_ip_active(&self, username: &str, ip: IpAddr) -> bool {
-        self.drain_cleanup_queue().await;
         let active_ips = self.active_ips.read().await;
         active_ips
             .get(username)

@@ -436,13 +293,8 @@ impl UserIpTracker {
     }

     pub async fn get_user_limit(&self, username: &str) -> Option<usize> {
-        let default_max_ips = *self.default_max_ips.read().await;
         let max_ips = self.max_ips.read().await;
-        max_ips
-            .get(username)
-            .copied()
-            .filter(|limit| *limit > 0)
-            .or((default_max_ips > 0).then_some(default_max_ips))
+        max_ips.get(username).copied()
     }

     pub async fn format_stats(&self) -> String {
@@ -487,7 +339,6 @@ impl Default for UserIpTracker {
 mod tests {
     use super::*;
     use std::net::{IpAddr, Ipv4Addr, Ipv6Addr};
-    use std::sync::atomic::Ordering;

     fn test_ipv4(oct1: u8, oct2: u8, oct3: u8, oct4: u8) -> IpAddr {
         IpAddr::V4(Ipv4Addr::new(oct1, oct2, oct3, oct4))

@@ -695,7 +546,7 @@ mod tests {
         config_limits.insert("user1".to_string(), 5);
         config_limits.insert("user2".to_string(), 3);

-        tracker.load_limits(0, &config_limits).await;
+        tracker.load_limits(&config_limits).await;

         assert_eq!(tracker.get_user_limit("user1").await, Some(5));
         assert_eq!(tracker.get_user_limit("user2").await, Some(3));
@@ -709,46 +560,16 @@ mod tests {
         let mut first = HashMap::new();
         first.insert("user1".to_string(), 2);
         first.insert("user2".to_string(), 3);
-        tracker.load_limits(0, &first).await;
+        tracker.load_limits(&first).await;

         let mut second = HashMap::new();
         second.insert("user2".to_string(), 5);
-        tracker.load_limits(0, &second).await;
+        tracker.load_limits(&second).await;

         assert_eq!(tracker.get_user_limit("user1").await, None);
         assert_eq!(tracker.get_user_limit("user2").await, Some(5));
     }

-    #[tokio::test]
-    async fn test_global_each_limit_applies_without_user_override() {
-        let tracker = UserIpTracker::new();
-        tracker.load_limits(2, &HashMap::new()).await;
-
-        let ip1 = test_ipv4(172, 16, 0, 1);
-        let ip2 = test_ipv4(172, 16, 0, 2);
-        let ip3 = test_ipv4(172, 16, 0, 3);
-
-        assert!(tracker.check_and_add("test_user", ip1).await.is_ok());
-        assert!(tracker.check_and_add("test_user", ip2).await.is_ok());
-        assert!(tracker.check_and_add("test_user", ip3).await.is_err());
-        assert_eq!(tracker.get_user_limit("test_user").await, Some(2));
-    }
-
-    #[tokio::test]
-    async fn test_user_override_wins_over_global_each_limit() {
-        let tracker = UserIpTracker::new();
-        let mut limits = HashMap::new();
-        limits.insert("test_user".to_string(), 1);
-        tracker.load_limits(3, &limits).await;
-
-        let ip1 = test_ipv4(172, 17, 0, 1);
-        let ip2 = test_ipv4(172, 17, 0, 2);
-
-        assert!(tracker.check_and_add("test_user", ip1).await.is_ok());
-        assert!(tracker.check_and_add("test_user", ip2).await.is_err());
-        assert_eq!(tracker.get_user_limit("test_user").await, Some(1));
-    }
-
     #[tokio::test]
     async fn test_time_window_mode_blocks_recent_ip_churn() {
         let tracker = UserIpTracker::new();
@@ -801,54 +622,4 @@ mod tests {
         tokio::time::sleep(Duration::from_millis(1100)).await;
         assert!(tracker.check_and_add("test_user", ip2).await.is_ok());
     }
-
-    #[tokio::test]
-    async fn test_memory_stats_reports_queue_and_entry_counts() {
-        let tracker = UserIpTracker::new();
-        tracker.set_user_limit("test_user", 4).await;
-        let ip1 = test_ipv4(10, 2, 0, 1);
-        let ip2 = test_ipv4(10, 2, 0, 2);
-
-        tracker.check_and_add("test_user", ip1).await.unwrap();
-        tracker.check_and_add("test_user", ip2).await.unwrap();
-        tracker.enqueue_cleanup("test_user".to_string(), ip1);
-
-        let snapshot = tracker.memory_stats().await;
-        assert_eq!(snapshot.active_users, 1);
-        assert_eq!(snapshot.recent_users, 1);
-        assert_eq!(snapshot.active_entries, 2);
-        assert_eq!(snapshot.recent_entries, 2);
-        assert_eq!(snapshot.cleanup_queue_len, 1);
-    }
-
-    #[tokio::test]
-    async fn test_compact_prunes_stale_recent_entries() {
-        let tracker = UserIpTracker::new();
-        tracker
-            .set_limit_policy(UserMaxUniqueIpsMode::TimeWindow, 1)
-            .await;
-
-        let stale_user = "stale-user".to_string();
-        let stale_ip = test_ipv4(10, 3, 0, 1);
-        {
-            let mut recent_ips = tracker.recent_ips.write().await;
-            recent_ips
-                .entry(stale_user.clone())
-                .or_insert_with(HashMap::new)
-                .insert(stale_ip, Instant::now() - Duration::from_secs(5));
-        }
-
-        tracker.last_compact_epoch_secs.store(0, Ordering::Relaxed);
-        tracker
-            .check_and_add("trigger-user", test_ipv4(10, 3, 0, 2))
-            .await
-            .unwrap();
-
-        let recent_ips = tracker.recent_ips.read().await;
-        let stale_exists = recent_ips
-            .get(&stale_user)
-            .map(|ips| ips.contains_key(&stale_ip))
-            .unwrap_or(false);
-        assert!(!stale_exists);
-    }
 }
343
src/logging.rs
343
src/logging.rs
|
|
@ -1,343 +0,0 @@
|
||||||
//! Logging configuration for telemt.
//!
//! Supports multiple log destinations:
//! - stderr (default, works with systemd journald)
//! - syslog (Unix only, for traditional init systems)
//! - file (with optional rotation)

#![allow(dead_code)] // Infrastructure module - used via CLI flags

use std::path::Path;

use tracing_subscriber::layer::SubscriberExt;
use tracing_subscriber::util::SubscriberInitExt;
use tracing_subscriber::{EnvFilter, fmt, reload};

/// Log destination configuration.
#[derive(Debug, Clone, Default)]
pub enum LogDestination {
    /// Log to stderr (default, captured by systemd journald).
    #[default]
    Stderr,
    /// Log to syslog (Unix only).
    #[cfg(unix)]
    Syslog,
    /// Log to a file with optional rotation.
    File {
        path: String,
        /// Rotate daily if true.
        rotate_daily: bool,
    },
}

/// Logging options parsed from CLI/config.
#[derive(Debug, Clone, Default)]
pub struct LoggingOptions {
    /// Where to send logs.
    pub destination: LogDestination,
    /// Disable ANSI colors.
    pub disable_colors: bool,
}

/// Guard that must be held to keep file logging active.
/// When dropped, flushes and closes log files.
pub struct LoggingGuard {
    _guard: Option<tracing_appender::non_blocking::WorkerGuard>,
}

impl LoggingGuard {
    fn new(guard: Option<tracing_appender::non_blocking::WorkerGuard>) -> Self {
        Self { _guard: guard }
    }

    /// Creates a no-op guard for stderr/syslog logging.
    pub fn noop() -> Self {
        Self { _guard: None }
    }
}

/// Initialize the tracing subscriber with the specified options.
///
/// Returns a reload handle for dynamic log level changes and a guard
/// that must be kept alive for file logging.
pub fn init_logging(
    opts: &LoggingOptions,
    initial_filter: &str,
) -> (
    reload::Handle<EnvFilter, impl tracing::Subscriber + Send + Sync>,
    LoggingGuard,
) {
    let (filter_layer, filter_handle) = reload::Layer::new(EnvFilter::new(initial_filter));

    match &opts.destination {
        LogDestination::Stderr => {
            let fmt_layer = fmt::Layer::default()
                .with_ansi(!opts.disable_colors)
                .with_target(true);

            tracing_subscriber::registry()
                .with(filter_layer)
                .with(fmt_layer)
                .init();

            (filter_handle, LoggingGuard::noop())
        }

        #[cfg(unix)]
        LogDestination::Syslog => {
            // Use a custom fmt layer that writes to syslog
            let fmt_layer = fmt::Layer::default()
                .with_ansi(false)
                .with_target(false)
                .with_level(false)
                .without_time()
                .with_writer(SyslogMakeWriter::new());

            tracing_subscriber::registry()
                .with(filter_layer)
                .with(fmt_layer)
                .init();

            (filter_handle, LoggingGuard::noop())
        }

        LogDestination::File { path, rotate_daily } => {
            let (non_blocking, guard) = if *rotate_daily {
                // Extract directory and filename prefix
                let path = Path::new(path);
                let dir = path.parent().unwrap_or(Path::new("/var/log"));
                let prefix = path
                    .file_name()
                    .and_then(|s| s.to_str())
                    .unwrap_or("telemt");

                let file_appender = tracing_appender::rolling::daily(dir, prefix);
                tracing_appender::non_blocking(file_appender)
            } else {
                let file = std::fs::OpenOptions::new()
                    .create(true)
                    .append(true)
                    .open(path)
                    .expect("Failed to open log file");
                tracing_appender::non_blocking(file)
            };

            let fmt_layer = fmt::Layer::default()
                .with_ansi(false)
                .with_target(true)
                .with_writer(non_blocking);

            tracing_subscriber::registry()
                .with(filter_layer)
                .with(fmt_layer)
                .init();

            (filter_handle, LoggingGuard::new(Some(guard)))
        }
    }
}

/// Syslog writer for tracing.
#[cfg(unix)]
#[derive(Clone, Copy)]
struct SyslogMakeWriter;

#[cfg(unix)]
#[derive(Clone, Copy)]
struct SyslogWriter {
    priority: libc::c_int,
}

#[cfg(unix)]
impl SyslogMakeWriter {
    fn new() -> Self {
        // Open syslog connection on first use
        static INIT: std::sync::Once = std::sync::Once::new();
        INIT.call_once(|| {
            unsafe {
                // Open syslog with ident "telemt", LOG_PID, LOG_DAEMON facility
                let ident = b"telemt\0".as_ptr() as *const libc::c_char;
                libc::openlog(ident, libc::LOG_PID | libc::LOG_NDELAY, libc::LOG_DAEMON);
            }
        });
        Self
    }
}

#[cfg(unix)]
fn syslog_priority_for_level(level: &tracing::Level) -> libc::c_int {
    match *level {
        tracing::Level::ERROR => libc::LOG_ERR,
        tracing::Level::WARN => libc::LOG_WARNING,
        tracing::Level::INFO => libc::LOG_INFO,
        tracing::Level::DEBUG => libc::LOG_DEBUG,
        tracing::Level::TRACE => libc::LOG_DEBUG,
    }
}

#[cfg(unix)]
impl std::io::Write for SyslogWriter {
    fn write(&mut self, buf: &[u8]) -> std::io::Result<usize> {
        // Convert to C string, stripping newlines
        let msg = String::from_utf8_lossy(buf);
        let msg = msg.trim_end();

        if msg.is_empty() {
            return Ok(buf.len());
        }

        // Write to syslog
        let c_msg = std::ffi::CString::new(msg.as_bytes())
            .unwrap_or_else(|_| std::ffi::CString::new("(invalid utf8)").unwrap());

        unsafe {
            libc::syslog(
                self.priority,
                b"%s\0".as_ptr() as *const libc::c_char,
                c_msg.as_ptr(),
            );
        }

        Ok(buf.len())
    }

    fn flush(&mut self) -> std::io::Result<()> {
        Ok(())
    }
}

#[cfg(unix)]
impl<'a> tracing_subscriber::fmt::MakeWriter<'a> for SyslogMakeWriter {
    type Writer = SyslogWriter;

    fn make_writer(&'a self) -> Self::Writer {
        SyslogWriter {
            priority: libc::LOG_INFO,
        }
    }

    fn make_writer_for(&'a self, meta: &tracing::Metadata<'_>) -> Self::Writer {
        SyslogWriter {
            priority: syslog_priority_for_level(meta.level()),
        }
    }
}

/// Parse log destination from CLI arguments.
pub fn parse_log_destination(args: &[String]) -> LogDestination {
    let mut i = 0;
    while i < args.len() {
        match args[i].as_str() {
            #[cfg(unix)]
            "--syslog" => {
                return LogDestination::Syslog;
            }
            "--log-file" => {
                i += 1;
                if i < args.len() {
                    return LogDestination::File {
                        path: args[i].clone(),
                        rotate_daily: false,
                    };
                }
            }
            s if s.starts_with("--log-file=") => {
                return LogDestination::File {
                    path: s.trim_start_matches("--log-file=").to_string(),
                    rotate_daily: false,
                };
            }
            "--log-file-daily" => {
                i += 1;
                if i < args.len() {
                    return LogDestination::File {
                        path: args[i].clone(),
                        rotate_daily: true,
                    };
                }
            }
            s if s.starts_with("--log-file-daily=") => {
                return LogDestination::File {
                    path: s.trim_start_matches("--log-file-daily=").to_string(),
                    rotate_daily: true,
                };
            }
            _ => {}
        }
        i += 1;
    }
    LogDestination::Stderr
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_parse_log_destination_default() {
        let args: Vec<String> = vec![];
        assert!(matches!(
            parse_log_destination(&args),
            LogDestination::Stderr
        ));
    }

    #[test]
    fn test_parse_log_destination_file() {
        let args = vec!["--log-file".to_string(), "/var/log/telemt.log".to_string()];
        match parse_log_destination(&args) {
            LogDestination::File { path, rotate_daily } => {
                assert_eq!(path, "/var/log/telemt.log");
                assert!(!rotate_daily);
            }
            _ => panic!("Expected File destination"),
        }
    }

    #[test]
    fn test_parse_log_destination_file_daily() {
        let args = vec!["--log-file-daily=/var/log/telemt".to_string()];
        match parse_log_destination(&args) {
            LogDestination::File { path, rotate_daily } => {
                assert_eq!(path, "/var/log/telemt");
                assert!(rotate_daily);
            }
            _ => panic!("Expected File destination"),
        }
    }

    #[cfg(unix)]
    #[test]
    fn test_parse_log_destination_syslog() {
        let args = vec!["--syslog".to_string()];
        assert!(matches!(
            parse_log_destination(&args),
            LogDestination::Syslog
        ));
    }

    #[cfg(unix)]
    #[test]
    fn test_syslog_priority_for_level_mapping() {
        assert_eq!(
            syslog_priority_for_level(&tracing::Level::ERROR),
            libc::LOG_ERR
        );
        assert_eq!(
            syslog_priority_for_level(&tracing::Level::WARN),
            libc::LOG_WARNING
        );
        assert_eq!(
            syslog_priority_for_level(&tracing::Level::INFO),
            libc::LOG_INFO
        );
        assert_eq!(
            syslog_priority_for_level(&tracing::Level::DEBUG),
            libc::LOG_DEBUG
        );
        assert_eq!(
            syslog_priority_for_level(&tracing::Level::TRACE),
            libc::LOG_DEBUG
        );
    }
}
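The deleted `parse_log_destination` accepts both `--flag value` and `--flag=value` spellings and falls back to stderr when no flag matches. A self-contained sketch of that scanning pattern (the `Dest` and `parse_dest` names are illustrative; the real function returns `LogDestination` and also handles `--syslog` and the space-separated daily form):

```rust
#[derive(Debug, PartialEq)]
enum Dest {
    Stderr,
    File { path: String, rotate_daily: bool },
}

/// Scan argv for the first logging flag, accepting both
/// `--log-file PATH` and `--log-file=PATH` spellings.
fn parse_dest(args: &[String]) -> Dest {
    let mut i = 0;
    while i < args.len() {
        match args[i].as_str() {
            "--log-file" => {
                // Space-separated form: the value is the next argument.
                i += 1;
                if i < args.len() {
                    return Dest::File { path: args[i].clone(), rotate_daily: false };
                }
            }
            s if s.starts_with("--log-file=") => {
                return Dest::File {
                    path: s.trim_start_matches("--log-file=").to_string(),
                    rotate_daily: false,
                };
            }
            s if s.starts_with("--log-file-daily=") => {
                return Dest::File {
                    path: s.trim_start_matches("--log-file-daily=").to_string(),
                    rotate_daily: true,
                };
            }
            _ => {}
        }
        i += 1;
    }
    Dest::Stderr
}

fn main() {
    let args = vec!["--log-file=/tmp/t.log".to_string()];
    assert_eq!(
        parse_dest(&args),
        Dest::File { path: "/tmp/t.log".to_string(), rotate_daily: false }
    );
    assert_eq!(parse_dest(&[]), Dest::Stderr);
    println!("parsed");
}
```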
@@ -21,29 +21,10 @@ pub(crate) async fn configure_admission_gate(
     if config.general.use_middle_proxy {
         if let Some(pool) = me_pool.as_ref() {
             let initial_ready = pool.admission_ready_conditional_cast().await;
-            let mut fallback_enabled = config.general.me2dc_fallback;
-            let mut fast_fallback_enabled = fallback_enabled && config.general.me2dc_fast;
-            let (initial_gate_open, initial_route_mode, initial_fallback_reason) = if initial_ready
-            {
-                (true, RelayRouteMode::Middle, None)
-            } else if fast_fallback_enabled {
-                (
-                    true,
-                    RelayRouteMode::Direct,
-                    Some("fast_not_ready_fallback"),
-                )
-            } else {
-                (false, RelayRouteMode::Middle, None)
-            };
-            admission_tx.send_replace(initial_gate_open);
-            let _ = route_runtime.set_mode(initial_route_mode);
+            admission_tx.send_replace(initial_ready);
+            let _ = route_runtime.set_mode(RelayRouteMode::Middle);
             if initial_ready {
                 info!("Conditional-admission gate: open / ME pool READY");
-            } else if let Some(reason) = initial_fallback_reason {
-                warn!(
-                    fallback_reason = reason,
-                    "Conditional-admission gate opened in ME fast fallback mode"
-                );
             } else {
                 warn!("Conditional-admission gate: closed / ME pool is NOT ready)");
             }

@@ -53,9 +34,10 @@ pub(crate) async fn configure_admission_gate(
     let route_runtime_gate = route_runtime.clone();
     let mut config_rx_gate = config_rx.clone();
     let mut admission_poll_ms = config.general.me_admission_poll_ms.max(1);
+    let mut fallback_enabled = config.general.me2dc_fallback;
     tokio::spawn(async move {
-        let mut gate_open = initial_gate_open;
-        let mut route_mode = initial_route_mode;
+        let mut gate_open = initial_ready;
+        let mut route_mode = RelayRouteMode::Middle;
         let mut ready_observed = initial_ready;
         let mut not_ready_since = if initial_ready {
             None

@@ -71,23 +53,16 @@ pub(crate) async fn configure_admission_gate(
                     let cfg = config_rx_gate.borrow_and_update().clone();
                     admission_poll_ms = cfg.general.me_admission_poll_ms.max(1);
                     fallback_enabled = cfg.general.me2dc_fallback;
-                    fast_fallback_enabled = cfg.general.me2dc_fallback && cfg.general.me2dc_fast;
                     continue;
                 }
                 _ = tokio::time::sleep(Duration::from_millis(admission_poll_ms)) => {}
             }
             let ready = pool_for_gate.admission_ready_conditional_cast().await;
             let now = Instant::now();
-            let (next_gate_open, next_route_mode, next_fallback_reason) = if ready {
+            let (next_gate_open, next_route_mode, next_fallback_active) = if ready {
                 ready_observed = true;
                 not_ready_since = None;
-                (true, RelayRouteMode::Middle, None)
-            } else if fast_fallback_enabled {
-                (
-                    true,
-                    RelayRouteMode::Direct,
-                    Some("fast_not_ready_fallback"),
-                )
+                (true, RelayRouteMode::Middle, false)
             } else {
                 let not_ready_started_at = *not_ready_since.get_or_insert(now);
                 let not_ready_for = now.saturating_duration_since(not_ready_started_at);

@@ -97,12 +72,11 @@ pub(crate) async fn configure_admission_gate(
                     STARTUP_FALLBACK_AFTER
                 };
                 if fallback_enabled && not_ready_for > fallback_after {
-                    (true, RelayRouteMode::Direct, Some("strict_grace_fallback"))
+                    (true, RelayRouteMode::Direct, true)
                 } else {
-                    (false, RelayRouteMode::Middle, None)
+                    (false, RelayRouteMode::Middle, false)
                 }
             };
-            let next_fallback_active = next_fallback_reason.is_some();

             if next_route_mode != route_mode {
                 route_mode = next_route_mode;

@@ -114,8 +88,6 @@ pub(crate) async fn configure_admission_gate(
                         "Middle-End routing restored for new sessions"
                     );
                 } else {
-                    let fallback_reason = next_fallback_reason.unwrap_or("unknown");
-                    if fallback_reason == "strict_grace_fallback" {
                     let fallback_after = if ready_observed {
                         RUNTIME_FALLBACK_AFTER
                     } else {

@@ -125,17 +97,8 @@ pub(crate) async fn configure_admission_gate(
                         target_mode = route_mode.as_str(),
                         cutover_generation = snapshot.generation,
                         grace_secs = fallback_after.as_secs(),
-                        fallback_reason,
                         "ME pool stayed not-ready beyond grace; routing new sessions via Direct-DC"
                     );
-                    } else {
-                        warn!(
-                            target_mode = route_mode.as_str(),
-                            cutover_generation = snapshot.generation,
-                            fallback_reason,
-                            "ME pool not-ready; routing new sessions via Direct-DC (fast mode)"
-                        );
-                    }
                 }
             }
         }

@@ -145,10 +108,7 @@ pub(crate) async fn configure_admission_gate(
             admission_tx_gate.send_replace(gate_open);
             if gate_open {
                 if next_fallback_active {
-                    warn!(
-                        fallback_reason = next_fallback_reason.unwrap_or("unknown"),
-                        "Conditional-admission gate opened in ME fallback mode"
-                    );
+                    warn!("Conditional-admission gate opened in ME fallback mode");
                 } else {
                     info!("Conditional-admission gate opened / ME pool READY");
                 }
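The gate loop in the right-hand version above reduces to a pure decision: a ready pool keeps the Middle route open, and a not-ready pool only falls back to Direct when fallback is enabled and the outage outlasts the grace window. A minimal sketch of that decision, assuming the simplified post-change logic (the `Route` enum and `decide` function are illustrative names, not the crate's API):

```rust
use std::time::Duration;

#[derive(Debug, PartialEq, Clone, Copy)]
enum Route {
    Middle,
    Direct,
}

/// Decide (gate_open, route) per poll tick:
/// a ready pool keeps the Middle route open; a not-ready pool
/// falls back to Direct only when fallback is enabled and the
/// outage has already exceeded `grace`; otherwise the gate stays closed.
fn decide(ready: bool, fallback_enabled: bool, not_ready_for: Duration, grace: Duration) -> (bool, Route) {
    if ready {
        (true, Route::Middle)
    } else if fallback_enabled && not_ready_for > grace {
        (true, Route::Direct)
    } else {
        (false, Route::Middle)
    }
}

fn main() {
    let grace = Duration::from_secs(30);
    // Ready pool: gate open, Middle route.
    assert_eq!(decide(true, true, Duration::ZERO, grace), (true, Route::Middle));
    // Not ready, but still inside the grace window: gate stays closed.
    assert_eq!(decide(false, true, Duration::from_secs(10), grace), (false, Route::Middle));
    // Outage outlasted the grace window: open the gate via Direct.
    assert_eq!(decide(false, true, Duration::from_secs(31), grace), (true, Route::Direct));
    // Fallback disabled: never route Direct, gate stays closed.
    assert_eq!(decide(false, false, Duration::from_secs(120), grace), (false, Route::Middle));
    println!("ok");
}
```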
@@ -1,5 +1,3 @@
-#![allow(clippy::too_many_arguments)]
-
 use std::sync::Arc;
 use std::time::Instant;

@@ -13,10 +11,10 @@ use crate::startup::{
     COMPONENT_DC_CONNECTIVITY_PING, COMPONENT_ME_CONNECTIVITY_PING, COMPONENT_RUNTIME_READY,
     StartupTracker,
 };
-use crate::transport::UpstreamManager;
 use crate::transport::middle_proxy::{
     MePingFamily, MePingSample, MePool, format_me_route, format_sample_line, run_me_ping,
 };
+use crate::transport::UpstreamManager;

 pub(crate) async fn run_startup_connectivity(
     config: &Arc<ProxyConfig>,

@@ -49,15 +47,11 @@ pub(crate) async fn run_startup_connectivity(

     let v4_ok = me_results.iter().any(|r| {
         matches!(r.family, MePingFamily::V4)
-            && r.samples
-                .iter()
-                .any(|s| s.error.is_none() && s.handshake_ms.is_some())
+            && r.samples.iter().any(|s| s.error.is_none() && s.handshake_ms.is_some())
     });
     let v6_ok = me_results.iter().any(|r| {
         matches!(r.family, MePingFamily::V6)
-            && r.samples
-                .iter()
-                .any(|s| s.error.is_none() && s.handshake_ms.is_some())
+            && r.samples.iter().any(|s| s.error.is_none() && s.handshake_ms.is_some())
     });

     info!("================= Telegram ME Connectivity =================");

@@ -137,14 +131,8 @@ pub(crate) async fn run_startup_connectivity(
         .await;

     for upstream_result in &ping_results {
-        let v6_works = upstream_result
-            .v6_results
-            .iter()
-            .any(|r| r.rtt_ms.is_some());
-        let v4_works = upstream_result
-            .v4_results
-            .iter()
-            .any(|r| r.rtt_ms.is_some());
+        let v6_works = upstream_result.v6_results.iter().any(|r| r.rtt_ms.is_some());
+        let v4_works = upstream_result.v4_results.iter().any(|r| r.rtt_ms.is_some());

         if upstream_result.both_available {
             if prefer_ipv6 {
@@ -1,6 +1,3 @@
-#![allow(clippy::items_after_test_module)]
-
-use std::path::PathBuf;
 use std::time::Duration;

 use tokio::sync::watch;

@@ -8,66 +5,17 @@ use tracing::{debug, error, info, warn};

 use crate::cli;
 use crate::config::ProxyConfig;
-use crate::logging::LogDestination;
-use crate::transport::UpstreamManager;
 use crate::transport::middle_proxy::{
-    ProxyConfigData, fetch_proxy_config_with_raw_via_upstream, load_proxy_config_cache,
-    save_proxy_config_cache,
+    ProxyConfigData, fetch_proxy_config_with_raw, load_proxy_config_cache, save_proxy_config_cache,
 };

-pub(crate) fn resolve_runtime_config_path(
-    config_path_cli: &str,
-    startup_cwd: &std::path::Path,
-    config_path_explicit: bool,
-) -> PathBuf {
-    if config_path_explicit {
-        let raw = PathBuf::from(config_path_cli);
-        let absolute = if raw.is_absolute() {
-            raw
-        } else {
-            startup_cwd.join(raw)
-        };
-        return absolute.canonicalize().unwrap_or(absolute);
-    }
-
-    let etc_telemt = std::path::Path::new("/etc/telemt");
-    let candidates = [
-        startup_cwd.join("config.toml"),
-        startup_cwd.join("telemt.toml"),
-        etc_telemt.join("telemt.toml"),
-        etc_telemt.join("config.toml"),
-    ];
-    for candidate in candidates {
-        if candidate.is_file() {
-            return candidate.canonicalize().unwrap_or(candidate);
-        }
-    }
-
-    startup_cwd.join("config.toml")
-}
-
-/// Parsed CLI arguments.
-pub(crate) struct CliArgs {
-    pub config_path: String,
-    pub config_path_explicit: bool,
-    pub data_path: Option<PathBuf>,
-    pub silent: bool,
-    pub log_level: Option<String>,
-    pub log_destination: LogDestination,
-}
-
-pub(crate) fn parse_cli() -> CliArgs {
+pub(crate) fn parse_cli() -> (String, bool, Option<String>) {
     let mut config_path = "config.toml".to_string();
-    let mut config_path_explicit = false;
-    let mut data_path: Option<PathBuf> = None;
     let mut silent = false;
     let mut log_level: Option<String> = None;

     let args: Vec<String> = std::env::args().skip(1).collect();

-    // Parse log destination
-    let log_destination = crate::logging::parse_log_destination(&args);
-
     // Check for --init first (handled before tokio)
     if let Some(init_opts) = cli::parse_init_args(&args) {
         if let Err(e) = cli::run_init(init_opts) {

@@ -80,34 +28,6 @@ pub(crate) fn parse_cli() -> CliArgs {
     let mut i = 0;
     while i < args.len() {
         match args[i].as_str() {
-            "--data-path" => {
-                i += 1;
-                if i < args.len() {
-                    data_path = Some(PathBuf::from(args[i].clone()));
-                } else {
-                    eprintln!("Missing value for --data-path");
-                    std::process::exit(0);
-                }
-            }
-            s if s.starts_with("--data-path=") => {
-                data_path = Some(PathBuf::from(
-                    s.trim_start_matches("--data-path=").to_string(),
-                ));
-            }
-            "--working-dir" => {
-                i += 1;
-                if i < args.len() {
-                    data_path = Some(PathBuf::from(args[i].clone()));
-                } else {
-                    eprintln!("Missing value for --working-dir");
-                    std::process::exit(0);
-                }
-            }
-            s if s.starts_with("--working-dir=") => {
-                data_path = Some(PathBuf::from(
-                    s.trim_start_matches("--working-dir=").to_string(),
-                ));
-            }
             "--silent" | "-s" => {
                 silent = true;
             }

@@ -121,35 +41,35 @@ pub(crate) fn parse_cli() -> CliArgs {
                 log_level = Some(s.trim_start_matches("--log-level=").to_string());
             }
             "--help" | "-h" => {
-                print_help();
+                eprintln!("Usage: telemt [config.toml] [OPTIONS]");
+                eprintln!();
+                eprintln!("Options:");
+                eprintln!("  --silent, -s         Suppress info logs");
+                eprintln!("  --log-level <LEVEL>  debug|verbose|normal|silent");
+                eprintln!("  --help, -h           Show this help");
+                eprintln!();
+                eprintln!("Setup (fire-and-forget):");
+                eprintln!(
+                    "  --init               Generate config, install systemd service, start"
+                );
+                eprintln!("  --port <PORT>        Listen port (default: 443)");
+                eprintln!(
+                    "  --domain <DOMAIN>    TLS domain for masking (default: www.google.com)"
+                );
+                eprintln!(
+                    "  --secret <HEX>       32-char hex secret (auto-generated if omitted)"
+                );
+                eprintln!("  --user <NAME>        Username (default: user)");
+                eprintln!("  --config-dir <DIR>   Config directory (default: /etc/telemt)");
+                eprintln!("  --no-start           Don't start the service after install");
                 std::process::exit(0);
             }
             "--version" | "-V" => {
                 println!("telemt {}", env!("CARGO_PKG_VERSION"));
                 std::process::exit(0);
             }
-            // Skip daemon-related flags (already parsed)
-            "--daemon" | "-d" | "--foreground" | "-f" => {}
-            s if s.starts_with("--pid-file") => {
-                if !s.contains('=') {
-                    i += 1; // skip value
-                }
-            }
-            s if s.starts_with("--run-as-user") => {
-                if !s.contains('=') {
-                    i += 1;
-                }
-            }
-            s if s.starts_with("--run-as-group") => {
-                if !s.contains('=') {
-                    i += 1;
-                }
-            }
             s if !s.starts_with('-') => {
-                if !matches!(s, "run" | "start" | "stop" | "reload" | "status") {
                 config_path = s.to_string();
-                    config_path_explicit = true;
-                }
             }
             other => {
                 eprintln!("Unknown option: {}", other);

@@ -158,157 +78,12 @@ pub(crate) fn parse_cli() -> CliArgs {
         i += 1;
     }

-    CliArgs {
-        config_path,
-        config_path_explicit,
-        data_path,
-        silent,
-        log_level,
-        log_destination,
-    }
-}
+    (config_path, silent, log_level)
+}

-fn print_help() {
-    eprintln!("Usage: telemt [COMMAND] [OPTIONS] [config.toml]");
-    eprintln!();
-    eprintln!("Commands:");
-    eprintln!("  run                      Run in foreground (default if no command given)");
-    #[cfg(unix)]
-    {
-        eprintln!("  start                    Start as background daemon");
-        eprintln!("  stop                     Stop a running daemon");
-        eprintln!("  reload                   Reload configuration (send SIGHUP)");
-        eprintln!("  status                   Check if daemon is running");
-    }
-    eprintln!();
-    eprintln!("Options:");
-    eprintln!(
-        "  --data-path <DIR>        Set data directory (absolute path; overrides config value)"
-    );
-    eprintln!("  --working-dir <DIR>      Alias for --data-path");
-    eprintln!("  --silent, -s             Suppress info logs");
-    eprintln!("  --log-level <LEVEL>      debug|verbose|normal|silent");
-    eprintln!("  --help, -h               Show this help");
-    eprintln!("  --version, -V            Show version");
-    eprintln!();
-    eprintln!("Logging options:");
-    eprintln!("  --log-file <PATH>        Log to file (default: stderr)");
-    eprintln!("  --log-file-daily <PATH>  Log to file with daily rotation");
-    #[cfg(unix)]
-    eprintln!("  --syslog                 Log to syslog (Unix only)");
-    eprintln!();
-    #[cfg(unix)]
-    {
-        eprintln!("Daemon options (Unix only):");
-        eprintln!("  --daemon, -d             Fork to background (daemonize)");
-        eprintln!("  --foreground, -f         Explicit foreground mode (for systemd)");
-        eprintln!("  --pid-file <PATH>        PID file path (default: /var/run/telemt.pid)");
-        eprintln!("  --run-as-user <USER>     Drop privileges to this user after binding");
-        eprintln!("  --run-as-group <GROUP>   Drop privileges to this group after binding");
-        eprintln!("  --working-dir <DIR>      Working directory for daemon mode");
-        eprintln!();
-    }
-    eprintln!("Setup (fire-and-forget):");
-    eprintln!("  --init               Generate config, install systemd service, start");
-    eprintln!("  --port <PORT>        Listen port (default: 443)");
-    eprintln!("  --domain <DOMAIN>    TLS domain for masking (default: www.google.com)");
-    eprintln!("  --secret <HEX>       32-char hex secret (auto-generated if omitted)");
-    eprintln!("  --user <NAME>        Username (default: user)");
-    eprintln!("  --config-dir <DIR>   Config directory (default: /etc/telemt)");
-    eprintln!("  --no-start           Don't start the service after install");
-    #[cfg(unix)]
-    {
-        eprintln!();
-        eprintln!("Examples:");
-        eprintln!("  telemt config.toml                   Run in foreground");
-        eprintln!("  telemt start config.toml             Start as daemon");
-        eprintln!("  telemt start --pid-file /tmp/t.pid   Start with custom PID file");
-        eprintln!("  telemt stop                          Stop daemon");
-        eprintln!("  telemt reload                        Reload configuration");
-        eprintln!("  telemt status                        Check daemon status");
-    }
-}

-#[cfg(test)]
-mod tests {
-    use super::resolve_runtime_config_path;

-    #[test]
-    fn resolve_runtime_config_path_anchors_relative_to_startup_cwd() {
-        let nonce = std::time::SystemTime::now()
-            .duration_since(std::time::UNIX_EPOCH)
-            .unwrap()
-            .as_nanos();
-        let startup_cwd = std::env::temp_dir().join(format!("telemt_cfg_path_{nonce}"));
-        std::fs::create_dir_all(&startup_cwd).unwrap();
-        let target = startup_cwd.join("config.toml");
-        std::fs::write(&target, " ").unwrap();

-        let resolved = resolve_runtime_config_path("config.toml", &startup_cwd, true);
-        assert_eq!(resolved, target.canonicalize().unwrap());

-        let _ = std::fs::remove_file(&target);
-        let _ = std::fs::remove_dir(&startup_cwd);
-    }

-    #[test]
-    fn resolve_runtime_config_path_keeps_absolute_for_missing_file() {
-        let nonce = std::time::SystemTime::now()
-            .duration_since(std::time::UNIX_EPOCH)
-            .unwrap()
-            .as_nanos();
-        let startup_cwd = std::env::temp_dir().join(format!("telemt_cfg_path_missing_{nonce}"));
-        std::fs::create_dir_all(&startup_cwd).unwrap();

-        let resolved = resolve_runtime_config_path("missing.toml", &startup_cwd, true);
-        assert_eq!(resolved, startup_cwd.join("missing.toml"));

-        let _ = std::fs::remove_dir(&startup_cwd);
-    }

-    #[test]
-    fn resolve_runtime_config_path_uses_startup_candidates_when_not_explicit() {
-        let nonce = std::time::SystemTime::now()
-            .duration_since(std::time::UNIX_EPOCH)
-            .unwrap()
-            .as_nanos();
-        let startup_cwd =
-            std::env::temp_dir().join(format!("telemt_cfg_startup_candidates_{nonce}"));
-        std::fs::create_dir_all(&startup_cwd).unwrap();
-        let telemt = startup_cwd.join("telemt.toml");
-        std::fs::write(&telemt, " ").unwrap();

-        let resolved = resolve_runtime_config_path("config.toml", &startup_cwd, false);
-        assert_eq!(resolved, telemt.canonicalize().unwrap());

-        let _ = std::fs::remove_file(&telemt);
-        let _ = std::fs::remove_dir(&startup_cwd);
-    }

-    #[test]
-    fn resolve_runtime_config_path_defaults_to_startup_config_when_none_found() {
-        let nonce = std::time::SystemTime::now()
-            .duration_since(std::time::UNIX_EPOCH)
-            .unwrap()
-            .as_nanos();
-        let startup_cwd = std::env::temp_dir().join(format!("telemt_cfg_startup_default_{nonce}"));
-        std::fs::create_dir_all(&startup_cwd).unwrap();

-        let resolved = resolve_runtime_config_path("config.toml", &startup_cwd, false);
-        assert_eq!(resolved, startup_cwd.join("config.toml"))
|
|
||||||
let _ = std::fs::remove_dir(&startup_cwd);
|
|
||||||
}
|
|
||||||
}
|
}
|
||||||
|
|
||||||
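The removed tests above pin down the behavior of `resolve_runtime_config_path(path, startup_cwd, explicit)` without showing its body. The following is a hypothetical reconstruction that merely satisfies what those tests assert (anchor relative paths to the startup cwd, canonicalize existing files, probe a `telemt.toml` candidate when the path was not explicit); the real function is not part of this diff.

```rust
use std::path::{Path, PathBuf};

// Hypothetical sketch: only the behavior asserted by the removed tests.
fn resolve_runtime_config_path(path: &str, startup_cwd: &Path, explicit: bool) -> PathBuf {
    let anchored = if Path::new(path).is_absolute() {
        PathBuf::from(path)
    } else {
        // Relative paths are anchored to the directory the process started in.
        startup_cwd.join(path)
    };
    if explicit {
        // An explicitly given path is used as-is; canonicalize only if it exists.
        return anchored.canonicalize().unwrap_or(anchored);
    }
    // Not explicit: probe well-known candidate names in the startup cwd.
    for candidate in [path, "telemt.toml"] {
        let probe = startup_cwd.join(candidate);
        if probe.is_file() {
            return probe.canonicalize().unwrap_or(probe);
        }
    }
    // Nothing found: fall back to the requested name under the startup cwd.
    anchored
}

fn main() {
    let cwd = std::env::temp_dir();
    // A missing relative path stays anchored to the startup cwd, un-canonicalized.
    let resolved = resolve_runtime_config_path("missing.toml", &cwd, true);
    assert_eq!(resolved, cwd.join("missing.toml"));
    println!("ok");
}
```

Note the asymmetry the tests demand: `explicit = true` never falls back to candidates, while `explicit = false` may resolve to a sibling `telemt.toml` even when another name was requested.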
 pub(crate) fn print_proxy_links(host: &str, port: u16, config: &ProxyConfig) {
     info!(target: "telemt::links", "--- Proxy Links ({}) ---", host);
-    for user_name in config
-        .general
-        .links
-        .show
-        .resolve_users(&config.access.users)
-    {
+    for user_name in config.general.links.show.resolve_users(&config.access.users) {
         if let Some(secret) = config.access.users.get(user_name) {
             info!(target: "telemt::links", "User: {}", user_name);
             if config.general.modes.classic {

@@ -415,7 +190,6 @@ pub(crate) fn format_uptime(total_secs: u64) -> String {
     format!("{} / {} seconds", parts.join(", "), total_secs)
 }
 
-#[allow(dead_code)]
 pub(crate) async fn wait_until_admission_open(admission_rx: &mut watch::Receiver<bool>) -> bool {
     loop {
         if *admission_rx.borrow() {

@@ -436,10 +210,9 @@ pub(crate) async fn load_startup_proxy_config_snapshot(
     cache_path: Option<&str>,
     me2dc_fallback: bool,
     label: &'static str,
-    upstream: Option<std::sync::Arc<UpstreamManager>>,
 ) -> Option<ProxyConfigData> {
     loop {
-        match fetch_proxy_config_with_raw_via_upstream(url, upstream.clone()).await {
+        match fetch_proxy_config_with_raw(url).await {
             Ok((cfg, raw)) => {
                 if !cfg.map.is_empty() {
                     if let Some(path) = cache_path

@@ -450,10 +223,7 @@ pub(crate) async fn load_startup_proxy_config_snapshot(
                     return Some(cfg);
                 }
 
-                warn!(
-                    snapshot = label,
-                    url, "Startup proxy-config is empty; trying disk cache"
-                );
+                warn!(snapshot = label, url, "Startup proxy-config is empty; trying disk cache");
                 if let Some(path) = cache_path {
                     match load_proxy_config_cache(path).await {
                         Ok(cached) if !cached.map.is_empty() => {

@@ -468,7 +238,8 @@ pub(crate) async fn load_startup_proxy_config_snapshot(
                         Ok(_) => {
                             warn!(
                                 snapshot = label,
-                                path, "Startup proxy-config cache is empty; ignoring cache file"
+                                path,
+                                "Startup proxy-config cache is empty; ignoring cache file"
                             );
                         }
                         Err(cache_err) => {

@@ -512,7 +283,8 @@ pub(crate) async fn load_startup_proxy_config_snapshot(
                         Ok(_) => {
                             warn!(
                                 snapshot = label,
-                                path, "Startup proxy-config cache is empty; ignoring cache file"
+                                path,
+                                "Startup proxy-config cache is empty; ignoring cache file"
                             );
                         }
                         Err(cache_err) => {
@@ -12,18 +12,19 @@ use tracing::{debug, error, info, warn};
 use crate::config::ProxyConfig;
 use crate::crypto::SecureRandom;
 use crate::ip_tracker::UserIpTracker;
-use crate::proxy::ClientHandler;
 use crate::proxy::route_mode::{ROUTE_SWITCH_ERROR_MSG, RouteRuntimeController};
-use crate::proxy::shared_state::ProxySharedState;
+use crate::proxy::ClientHandler;
 use crate::startup::{COMPONENT_LISTENERS_BIND, StartupTracker};
 use crate::stats::beobachten::BeobachtenStore;
 use crate::stats::{ReplayChecker, Stats};
 use crate::stream::BufferPool;
 use crate::tls_front::TlsFrontCache;
 use crate::transport::middle_proxy::MePool;
-use crate::transport::{ListenOptions, UpstreamManager, create_listener, find_listener_processes};
+use crate::transport::{
+    ListenOptions, UpstreamManager, create_listener, find_listener_processes,
+};
 
-use super::helpers::{is_expected_handshake_eof, print_proxy_links};
+use super::helpers::{is_expected_handshake_eof, print_proxy_links, wait_until_admission_open};
 
 pub(crate) struct BoundListeners {
     pub(crate) listeners: Vec<(TcpListener, bool)>,

@@ -50,7 +51,6 @@ pub(crate) async fn bind_listeners(
     tls_cache: Option<Arc<TlsFrontCache>>,
     ip_tracker: Arc<UserIpTracker>,
     beobachten: Arc<BeobachtenStore>,
-    shared: Arc<ProxySharedState>,
     max_connections: Arc<Semaphore>,
 ) -> Result<BoundListeners, Box<dyn Error>> {
     startup_tracker

@@ -74,7 +74,6 @@ pub(crate) async fn bind_listeners(
         let options = ListenOptions {
             reuse_port: listener_conf.reuse_allow,
             ipv6_only: listener_conf.ip.is_ipv6(),
-            backlog: config.server.listen_backlog,
             ..Default::default()
         };
 

@@ -82,9 +81,8 @@ pub(crate) async fn bind_listeners(
             Ok(socket) => {
                 let listener = TcpListener::from_std(socket.into())?;
                 info!("Listening on {}", addr);
-                let listener_proxy_protocol = listener_conf
-                    .proxy_protocol
-                    .unwrap_or(config.server.proxy_protocol);
+                let listener_proxy_protocol =
+                    listener_conf.proxy_protocol.unwrap_or(config.server.proxy_protocol);
 
                 let public_host = if let Some(ref announce) = listener_conf.announce {
                     announce.clone()

@@ -102,14 +100,8 @@ pub(crate) async fn bind_listeners(
                     listener_conf.ip.to_string()
                 };
 
-                if config.general.links.public_host.is_none()
-                    && !config.general.links.show.is_empty()
-                {
-                    let link_port = config
-                        .general
-                        .links
-                        .public_port
-                        .unwrap_or(config.server.port);
+                if config.general.links.public_host.is_none() && !config.general.links.show.is_empty() {
+                    let link_port = config.general.links.public_port.unwrap_or(config.server.port);
                     print_proxy_links(&public_host, link_port, config);
                 }
 

@@ -153,14 +145,12 @@ pub(crate) async fn bind_listeners(
     let (host, port) = if let Some(ref h) = config.general.links.public_host {
         (
             h.clone(),
-            config
-                .general
-                .links
-                .public_port
-                .unwrap_or(config.server.port),
+            config.general.links.public_port.unwrap_or(config.server.port),
         )
     } else {
-        let ip = detected_ip_v4.or(detected_ip_v6).map(|ip| ip.to_string());
+        let ip = detected_ip_v4
+            .or(detected_ip_v6)
+            .map(|ip| ip.to_string());
         if ip.is_none() {
             warn!(
                 "show_link is configured but public IP could not be detected. Set public_host in config."

@@ -168,11 +158,7 @@ pub(crate) async fn bind_listeners(
         }
         (
             ip.unwrap_or_else(|| "UNKNOWN".to_string()),
-            config
-                .general
-                .links
-                .public_port
-                .unwrap_or(config.server.port),
+            config.general.links.public_port.unwrap_or(config.server.port),
         )
     };
 

@@ -192,19 +178,13 @@ pub(crate) async fn bind_listeners(
                 use std::os::unix::fs::PermissionsExt;
                 let perms = std::fs::Permissions::from_mode(mode);
                 if let Err(e) = std::fs::set_permissions(unix_path, perms) {
-                    error!(
-                        "Failed to set unix socket permissions to {}: {}",
-                        perm_str, e
-                    );
+                    error!("Failed to set unix socket permissions to {}: {}", perm_str, e);
                 } else {
                     info!("Listening on unix:{} (mode {})", unix_path, perm_str);
                 }
             }
             Err(e) => {
-                warn!(
-                    "Invalid listen_unix_sock_perm '{}': {}. Ignoring.",
-                    perm_str, e
-                );
+                warn!("Invalid listen_unix_sock_perm '{}': {}. Ignoring.", perm_str, e);
                 info!("Listening on unix:{}", unix_path);
             }
         }

@@ -215,7 +195,7 @@ pub(crate) async fn bind_listeners(
         has_unix_listener = true;
 
         let mut config_rx_unix: watch::Receiver<Arc<ProxyConfig>> = config_rx.clone();
-        let admission_rx_unix = admission_rx.clone();
+        let mut admission_rx_unix = admission_rx.clone();
         let stats = stats.clone();
         let upstream_manager = upstream_manager.clone();
         let replay_checker = replay_checker.clone();

@@ -226,51 +206,24 @@ pub(crate) async fn bind_listeners(
         let tls_cache = tls_cache.clone();
         let ip_tracker = ip_tracker.clone();
         let beobachten = beobachten.clone();
-        let shared = shared.clone();
        let max_connections_unix = max_connections.clone();
 
        tokio::spawn(async move {
            let unix_conn_counter = Arc::new(std::sync::atomic::AtomicU64::new(1));
 
            loop {
+               if !wait_until_admission_open(&mut admission_rx_unix).await {
+                   warn!("Conditional-admission gate channel closed for unix listener");
+                   break;
+               }
                match unix_listener.accept().await {
                    Ok((stream, _)) => {
-                       if !*admission_rx_unix.borrow() {
-                           drop(stream);
-                           continue;
-                       }
-                       let accept_permit_timeout_ms =
-                           config_rx_unix.borrow().server.accept_permit_timeout_ms;
-                       let permit = if accept_permit_timeout_ms == 0 {
-                           match max_connections_unix.clone().acquire_owned().await {
+                       let permit = match max_connections_unix.clone().acquire_owned().await {
                            Ok(permit) => permit,
                            Err(_) => {
                                error!("Connection limiter is closed");
                                break;
                            }
-                           }
-                       } else {
-                           match tokio::time::timeout(
-                               Duration::from_millis(accept_permit_timeout_ms),
-                               max_connections_unix.clone().acquire_owned(),
-                           )
-                           .await
-                           {
-                               Ok(Ok(permit)) => permit,
-                               Ok(Err(_)) => {
-                                   error!("Connection limiter is closed");
-                                   break;
-                               }
-                               Err(_) => {
-                                   stats.increment_accept_permit_timeout_total();
-                                   debug!(
-                                       timeout_ms = accept_permit_timeout_ms,
-                                       "Dropping accepted unix connection: permit wait timeout"
-                                   );
-                                   drop(stream);
-                                   continue;
-                               }
-                           }
                        };
                        let conn_id =
                            unix_conn_counter.fetch_add(1, std::sync::atomic::Ordering::Relaxed);

@@ -288,12 +241,11 @@ pub(crate) async fn bind_listeners(
                        let tls_cache = tls_cache.clone();
                        let ip_tracker = ip_tracker.clone();
                        let beobachten = beobachten.clone();
-                       let shared = shared.clone();
                        let proxy_protocol_enabled = config.server.proxy_protocol;
 
                        tokio::spawn(async move {
                            let _permit = permit;
-                           if let Err(e) = crate::proxy::client::handle_client_stream_with_shared(
+                           if let Err(e) = crate::proxy::client::handle_client_stream(
                                stream,
                                fake_peer,
                                config,

@@ -307,7 +259,6 @@ pub(crate) async fn bind_listeners(
                                tls_cache,
                                ip_tracker,
                                beobachten,
-                               shared,
                                proxy_protocol_enabled,
                            )
                            .await

@@ -357,12 +308,11 @@ pub(crate) fn spawn_tcp_accept_loops(
     tls_cache: Option<Arc<TlsFrontCache>>,
     ip_tracker: Arc<UserIpTracker>,
     beobachten: Arc<BeobachtenStore>,
-    shared: Arc<ProxySharedState>,
     max_connections: Arc<Semaphore>,
 ) {
     for (listener, listener_proxy_protocol) in listeners {
         let mut config_rx: watch::Receiver<Arc<ProxyConfig>> = config_rx.clone();
-        let admission_rx_tcp = admission_rx.clone();
+        let mut admission_rx_tcp = admission_rx.clone();
         let stats = stats.clone();
         let upstream_manager = upstream_manager.clone();
         let replay_checker = replay_checker.clone();

@@ -373,51 +323,22 @@ pub(crate) fn spawn_tcp_accept_loops(
        let tls_cache = tls_cache.clone();
        let ip_tracker = ip_tracker.clone();
        let beobachten = beobachten.clone();
-       let shared = shared.clone();
        let max_connections_tcp = max_connections.clone();
 
        tokio::spawn(async move {
            loop {
+               if !wait_until_admission_open(&mut admission_rx_tcp).await {
+                   warn!("Conditional-admission gate channel closed for tcp listener");
+                   break;
+               }
                match listener.accept().await {
                    Ok((stream, peer_addr)) => {
-                       if !*admission_rx_tcp.borrow() {
-                           debug!(peer = %peer_addr, "Admission gate closed, dropping connection");
-                           drop(stream);
-                           continue;
-                       }
-                       let accept_permit_timeout_ms =
-                           config_rx.borrow().server.accept_permit_timeout_ms;
-                       let permit = if accept_permit_timeout_ms == 0 {
-                           match max_connections_tcp.clone().acquire_owned().await {
+                       let permit = match max_connections_tcp.clone().acquire_owned().await {
                            Ok(permit) => permit,
                            Err(_) => {
                                error!("Connection limiter is closed");
                                break;
                            }
-                           }
-                       } else {
-                           match tokio::time::timeout(
-                               Duration::from_millis(accept_permit_timeout_ms),
-                               max_connections_tcp.clone().acquire_owned(),
-                           )
-                           .await
-                           {
-                               Ok(Ok(permit)) => permit,
-                               Ok(Err(_)) => {
-                                   error!("Connection limiter is closed");
-                                   break;
-                               }
-                               Err(_) => {
-                                   stats.increment_accept_permit_timeout_total();
-                                   debug!(
-                                       peer = %peer_addr,
-                                       timeout_ms = accept_permit_timeout_ms,
-                                       "Dropping accepted connection: permit wait timeout"
-                                   );
-                                   drop(stream);
-                                   continue;
-                               }
-                           }
                        };
                        let config = config_rx.borrow_and_update().clone();
                        let stats = stats.clone();

@@ -430,14 +351,13 @@ pub(crate) fn spawn_tcp_accept_loops(
                        let tls_cache = tls_cache.clone();
                        let ip_tracker = ip_tracker.clone();
                        let beobachten = beobachten.clone();
-                       let shared = shared.clone();
                        let proxy_protocol_enabled = listener_proxy_protocol;
                        let real_peer_report = Arc::new(std::sync::Mutex::new(None));
                        let real_peer_report_for_handler = real_peer_report.clone();
 
                        tokio::spawn(async move {
                            let _permit = permit;
-                           if let Err(e) = ClientHandler::new_with_shared(
+                           if let Err(e) = ClientHandler::new(
                                stream,
                                peer_addr,
                                config,

@@ -451,7 +371,6 @@ pub(crate) fn spawn_tcp_accept_loops(
                                tls_cache,
                                ip_tracker,
                                beobachten,
-                               shared,
                                proxy_protocol_enabled,
                                real_peer_report_for_handler,
                            )
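The accept loops removed above branched on `accept_permit_timeout_ms`: a value of `0` blocked on the connection-limit semaphore indefinitely, while any other value bounded the wait with `tokio::time::timeout` and dropped the freshly accepted socket on expiry. A minimal std-only sketch of just that decision (the enum and function names here are illustrative, not from the codebase):

```rust
use std::time::Duration;

/// How long the accept loop may wait for a connection permit.
#[derive(Debug, PartialEq)]
enum PermitWait {
    /// `accept_permit_timeout_ms == 0`: block until a permit frees up.
    Unbounded,
    /// Otherwise: wait at most this long, then drop the accepted socket.
    Bounded(Duration),
}

// Sketch of the removed branch; the real code wraps
// `Semaphore::acquire_owned` in `tokio::time::timeout` for the bounded case.
fn permit_wait_policy(accept_permit_timeout_ms: u64) -> PermitWait {
    if accept_permit_timeout_ms == 0 {
        PermitWait::Unbounded
    } else {
        PermitWait::Bounded(Duration::from_millis(accept_permit_timeout_ms))
    }
}

fn main() {
    assert_eq!(permit_wait_policy(0), PermitWait::Unbounded);
    assert_eq!(
        permit_wait_policy(250),
        PermitWait::Bounded(Duration::from_millis(250))
    );
    println!("ok");
}
```

Dropping the socket on timeout (rather than queueing it) keeps the accept loop live under overload, at the cost of the counted `accept_permit_timeout_total` metric ticking up.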
@ -1,5 +1,3 @@
|
||||||
#![allow(clippy::too_many_arguments)]
|
|
||||||
|
|
||||||
use std::sync::Arc;
|
use std::sync::Arc;
|
||||||
use std::time::Duration;
|
use std::time::Duration;
|
||||||
|
|
||||||
|
|
@ -14,8 +12,8 @@ use crate::startup::{
|
||||||
COMPONENT_ME_PROXY_CONFIG_V6, COMPONENT_ME_SECRET_FETCH, StartupMeStatus, StartupTracker,
|
COMPONENT_ME_PROXY_CONFIG_V6, COMPONENT_ME_SECRET_FETCH, StartupMeStatus, StartupTracker,
|
||||||
};
|
};
|
||||||
use crate::stats::Stats;
|
use crate::stats::Stats;
|
||||||
use crate::transport::UpstreamManager;
|
|
||||||
use crate::transport::middle_proxy::MePool;
|
use crate::transport::middle_proxy::MePool;
|
||||||
|
use crate::transport::UpstreamManager;
|
||||||
|
|
||||||
use super::helpers::load_startup_proxy_config_snapshot;
|
use super::helpers::load_startup_proxy_config_snapshot;
|
||||||
|
|
||||||
|
|
@ -63,10 +61,9 @@ pub(crate) async fn initialize_me_pool(
|
||||||
let proxy_secret_path = config.general.proxy_secret_path.as_deref();
|
let proxy_secret_path = config.general.proxy_secret_path.as_deref();
|
||||||
let pool_size = config.general.middle_proxy_pool_size.max(1);
|
let pool_size = config.general.middle_proxy_pool_size.max(1);
|
||||||
let proxy_secret = loop {
|
let proxy_secret = loop {
|
||||||
match crate::transport::middle_proxy::fetch_proxy_secret_with_upstream(
|
match crate::transport::middle_proxy::fetch_proxy_secret(
|
||||||
proxy_secret_path,
|
proxy_secret_path,
|
||||||
config.general.proxy_secret_len_max,
|
config.general.proxy_secret_len_max,
|
||||||
Some(upstream_manager.clone()),
|
|
||||||
)
|
)
|
||||||
.await
|
.await
|
||||||
{
|
{
|
||||||
|
|
@ -130,7 +127,6 @@ pub(crate) async fn initialize_me_pool(
|
||||||
config.general.proxy_config_v4_cache_path.as_deref(),
|
config.general.proxy_config_v4_cache_path.as_deref(),
|
||||||
me2dc_fallback,
|
me2dc_fallback,
|
||||||
"getProxyConfig",
|
"getProxyConfig",
|
||||||
Some(upstream_manager.clone()),
|
|
||||||
)
|
)
|
||||||
.await;
|
.await;
|
||||||
if cfg_v4.is_some() {
|
if cfg_v4.is_some() {
|
||||||
|
|
@ -162,7 +158,6 @@ pub(crate) async fn initialize_me_pool(
|
||||||
config.general.proxy_config_v6_cache_path.as_deref(),
|
config.general.proxy_config_v6_cache_path.as_deref(),
|
||||||
me2dc_fallback,
|
me2dc_fallback,
|
||||||
"getProxyConfigV6",
|
"getProxyConfigV6",
|
||||||
Some(upstream_manager.clone()),
|
|
||||||
)
|
)
|
||||||
.await;
|
.await;
|
||||||
if cfg_v6.is_some() {
|
if cfg_v6.is_some() {
|
||||||
|
|
@ -234,25 +229,14 @@ pub(crate) async fn initialize_me_pool(
|
||||||
config.general.me_adaptive_floor_recover_grace_secs,
|
config.general.me_adaptive_floor_recover_grace_secs,
|
||||||
config.general.me_adaptive_floor_writers_per_core_total,
|
config.general.me_adaptive_floor_writers_per_core_total,
|
||||||
config.general.me_adaptive_floor_cpu_cores_override,
|
config.general.me_adaptive_floor_cpu_cores_override,
|
||||||
config
|
config.general.me_adaptive_floor_max_extra_writers_single_per_core,
|
||||||
.general
|
config.general.me_adaptive_floor_max_extra_writers_multi_per_core,
|
||||||
.me_adaptive_floor_max_extra_writers_single_per_core,
|
|
||||||
config
|
|
||||||
.general
|
|
||||||
.me_adaptive_floor_max_extra_writers_multi_per_core,
|
|
||||||
config.general.me_adaptive_floor_max_active_writers_per_core,
|
config.general.me_adaptive_floor_max_active_writers_per_core,
|
||||||
config.general.me_adaptive_floor_max_warm_writers_per_core,
|
config.general.me_adaptive_floor_max_warm_writers_per_core,
|
||||||
config.general.me_adaptive_floor_max_active_writers_global,
|
config.general.me_adaptive_floor_max_active_writers_global,
|
||||||
config.general.me_adaptive_floor_max_warm_writers_global,
|
config.general.me_adaptive_floor_max_warm_writers_global,
|
||||||
config.general.hardswap,
|
config.general.hardswap,
|
||||||
config.general.me_pool_drain_ttl_secs,
|
config.general.me_pool_drain_ttl_secs,
|
||||||
config.general.me_instadrain,
|
|
||||||
config.general.me_pool_drain_threshold,
|
|
||||||
config.general.me_pool_drain_soft_evict_enabled,
|
|
||||||
config.general.me_pool_drain_soft_evict_grace_secs,
|
|
||||||
config.general.me_pool_drain_soft_evict_per_writer,
|
|
||||||
config.general.me_pool_drain_soft_evict_budget_per_core,
|
|
||||||
config.general.me_pool_drain_soft_evict_cooldown_ms,
|
|
||||||
config.general.effective_me_pool_force_close_secs(),
|
config.general.effective_me_pool_force_close_secs(),
|
||||||
config.general.me_pool_min_fresh_ratio,
|
config.general.me_pool_min_fresh_ratio,
|
||||||
config.general.me_hardswap_warmup_delay_min_ms,
|
config.general.me_hardswap_warmup_delay_min_ms,
|
||||||
|
|
@ -277,8 +261,6 @@ pub(crate) async fn initialize_me_pool(
|
||||||
config.general.me_warn_rate_limit_ms,
|
config.general.me_warn_rate_limit_ms,
|
||||||
config.general.me_route_no_writer_mode,
|
config.general.me_route_no_writer_mode,
|
||||||
config.general.me_route_no_writer_wait_ms,
|
config.general.me_route_no_writer_wait_ms,
|
||||||
config.general.me_route_hybrid_max_wait_ms,
|
|
||||||
config.general.me_route_blocking_send_timeout_ms,
|
|
||||||
config.general.me_route_inline_recovery_attempts,
|
config.general.me_route_inline_recovery_attempts,
|
||||||
config.general.me_route_inline_recovery_wait_ms,
|
config.general.me_route_inline_recovery_wait_ms,
|
||||||
);
|
);
|
||||||
|
|
@ -341,76 +323,18 @@ pub(crate) async fn initialize_me_pool(
|
||||||
"Middle-End pool initialized successfully"
|
"Middle-End pool initialized successfully"
|
||||||
);
|
);
|
||||||
|
|
||||||
// ── Supervised background tasks ──────────────────
|
|
||||||
// Each task runs inside a nested tokio::spawn so
|
|
||||||
// that a panic is caught via JoinHandle and the
|
|
||||||
// outer loop restarts the task automatically.
|
|
||||||
let pool_health = pool_bg.clone();
|
let pool_health = pool_bg.clone();
|
||||||
let rng_health = rng_bg.clone();
|
let rng_health = rng_bg.clone();
|
||||||
let min_conns = pool_size;
|
let min_conns = pool_size;
|
||||||
tokio::spawn(async move {
|
tokio::spawn(async move {
|
||||||
loop {
|
|
||||||
let p = pool_health.clone();
|
|
||||||
let r = rng_health.clone();
|
|
||||||
let res = tokio::spawn(async move {
|
|
||||||
crate::transport::middle_proxy::me_health_monitor(
|
crate::transport::middle_proxy::me_health_monitor(
|
||||||
p, r, min_conns,
|
pool_health,
|
||||||
|
rng_health,
|
||||||
|
min_conns,
|
||||||
)
|
)
|
||||||
.await;
|
.await;
|
||||||
})
|
|
||||||
.await;
|
|
||||||
match res {
|
|
||||||
Ok(()) => warn!("me_health_monitor exited unexpectedly, restarting"),
|
|
||||||
Err(e) => {
|
|
||||||
error!(error = %e, "me_health_monitor panicked, restarting in 1s");
|
|
||||||
tokio::time::sleep(Duration::from_secs(1)).await;
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
});
|
});
|
||||||
let pool_drain_enforcer = pool_bg.clone();
|
break;
|
||||||
tokio::spawn(async move {
|
|
||||||
loop {
|
|
||||||
let p = pool_drain_enforcer.clone();
|
|
||||||
let res = tokio::spawn(async move {
|
|
||||||
crate::transport::middle_proxy::me_drain_timeout_enforcer(p).await;
|
|
||||||
})
|
|
||||||
.await;
|
|
||||||
match res {
|
|
||||||
Ok(()) => warn!("me_drain_timeout_enforcer exited unexpectedly, restarting"),
|
|
||||||
Err(e) => {
|
|
||||||
error!(error = %e, "me_drain_timeout_enforcer panicked, restarting in 1s");
|
|
||||||
tokio::time::sleep(Duration::from_secs(1)).await;
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
});
|
|
||||||
let pool_watchdog = pool_bg.clone();
|
|
||||||
tokio::spawn(async move {
|
|
||||||
loop {
|
|
||||||
let p = pool_watchdog.clone();
|
|
||||||
let res = tokio::spawn(async move {
|
|
||||||
crate::transport::middle_proxy::me_zombie_writer_watchdog(p).await;
|
|
||||||
})
|
|
||||||
.await;
|
|
||||||
match res {
|
|
||||||
Ok(()) => warn!("me_zombie_writer_watchdog exited unexpectedly, restarting"),
|
|
||||||
Err(e) => {
|
|
||||||
error!(error = %e, "me_zombie_writer_watchdog panicked, restarting in 1s");
|
|
||||||
tokio::time::sleep(Duration::from_secs(1)).await;
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
});
|
|
||||||
// CRITICAL: keep the current-thread runtime
|
|
||||||
// alive. Without this, block_on() returns,
|
|
||||||
// the Runtime is dropped, and ALL spawned
|
|
||||||
// background tasks (health monitor, drain
|
|
||||||
// enforcer, zombie watchdog) are silently
|
|
||||||
// cancelled — causing the draining-writer
|
|
||||||
// leak that brought us here.
|
|
||||||
std::future::pending::<()>().await;
|
|
||||||
unreachable!();
|
|
||||||
}
|
}
|
||||||
Err(e) => {
|
Err(e) => {
|
||||||
startup_tracker_bg.set_me_last_error(Some(e.to_string())).await;
|
startup_tracker_bg.set_me_last_error(Some(e.to_string())).await;
|
||||||
|
|
@@ -468,69 +392,14 @@ pub(crate) async fn initialize_me_pool(
         "Middle-End pool initialized successfully"
     );

-    // ── Supervised background tasks ──────────────────
     let pool_clone = pool.clone();
     let rng_clone = rng.clone();
     let min_conns = pool_size;
     tokio::spawn(async move {
-        loop {
-            let p = pool_clone.clone();
-            let r = rng_clone.clone();
-            let res = tokio::spawn(async move {
-                crate::transport::middle_proxy::me_health_monitor(
-                    p, r, min_conns,
-                )
-                .await;
-            })
-            .await;
-            match res {
-                Ok(()) => warn!(
-                    "me_health_monitor exited unexpectedly, restarting"
-                ),
-                Err(e) => {
-                    error!(error = %e, "me_health_monitor panicked, restarting in 1s");
-                    tokio::time::sleep(Duration::from_secs(1)).await;
-                }
-            }
-        }
-    });
-    let pool_drain_enforcer = pool.clone();
-    tokio::spawn(async move {
-        loop {
-            let p = pool_drain_enforcer.clone();
-            let res = tokio::spawn(async move {
-                crate::transport::middle_proxy::me_drain_timeout_enforcer(p).await;
-            })
-            .await;
-            match res {
-                Ok(()) => warn!(
-                    "me_drain_timeout_enforcer exited unexpectedly, restarting"
-                ),
-                Err(e) => {
-                    error!(error = %e, "me_drain_timeout_enforcer panicked, restarting in 1s");
-                    tokio::time::sleep(Duration::from_secs(1)).await;
-                }
-            }
-        }
-    });
-    let pool_watchdog = pool.clone();
-    tokio::spawn(async move {
-        loop {
-            let p = pool_watchdog.clone();
-            let res = tokio::spawn(async move {
-                crate::transport::middle_proxy::me_zombie_writer_watchdog(p).await;
-            })
-            .await;
-            match res {
-                Ok(()) => warn!(
-                    "me_zombie_writer_watchdog exited unexpectedly, restarting"
-                ),
-                Err(e) => {
-                    error!(error = %e, "me_zombie_writer_watchdog panicked, restarting in 1s");
-                    tokio::time::sleep(Duration::from_secs(1)).await;
-                }
-            }
-        }
+        crate::transport::middle_proxy::me_health_monitor(
+            pool_clone, rng_clone, min_conns,
+        )
+        .await;
     });

     break Some(pool);
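The removed side of the hunk above wraps each background task in a supervisor: run the task in its own spawn, and if it panics, log it and restart after a short delay. The same restart-on-panic idea can be sketched without tokio, using `std::panic::catch_unwind` in place of the inner `tokio::spawn` + `JoinError` check; all names below are illustrative, not from the repo.

```rust
use std::panic::{self, AssertUnwindSafe};
use std::sync::Arc;
use std::sync::atomic::{AtomicU32, Ordering};

/// Run `task` under supervision: if it panics, restart it, up to
/// `max_restarts` times. Returns how many restarts were needed.
fn supervise<F>(task: F, max_restarts: u32) -> u32
where
    F: Fn(),
{
    let mut restarts = 0;
    loop {
        // catch_unwind contains the panic instead of letting it
        // kill the supervision loop, mirroring the JoinError match.
        match panic::catch_unwind(AssertUnwindSafe(|| task())) {
            Ok(()) => break,                                    // clean exit
            Err(_) if restarts < max_restarts => restarts += 1, // restart after panic
            Err(_) => break,                                    // restart budget exhausted
        }
    }
    restarts
}

fn main() {
    // Silence the default panic message so the demo output stays clean.
    panic::set_hook(Box::new(|_| {}));

    let attempts = Arc::new(AtomicU32::new(0));
    let a = attempts.clone();
    // A task that fails twice before succeeding.
    let restarts = supervise(
        move || {
            if a.fetch_add(1, Ordering::SeqCst) < 2 {
                panic!("transient failure");
            }
        },
        5,
    );
    assert_eq!(restarts, 2);
    println!("restarts = {restarts}");
}
```

The real code adds one more ingredient the sketch omits: a 1s sleep between restarts, so a task that panics immediately cannot spin the supervisor hot.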
@@ -11,9 +11,9 @@
 // - admission: conditional-cast gate and route mode switching.
 // - listeners: TCP/Unix listener bind and accept-loop orchestration.
 // - shutdown: graceful shutdown sequence and uptime logging.
+mod helpers;
 mod admission;
 mod connectivity;
-mod helpers;
 mod listeners;
 mod me_startup;
 mod runtime_tasks;
@@ -29,75 +29,26 @@ use tracing_subscriber::{EnvFilter, fmt, prelude::*, reload};

 use crate::api;
 use crate::config::{LogLevel, ProxyConfig};
-use crate::conntrack_control;
 use crate::crypto::SecureRandom;
 use crate::ip_tracker::UserIpTracker;
 use crate::network::probe::{decide_network_capabilities, log_probe_result, run_probe};
 use crate::proxy::route_mode::{RelayRouteMode, RouteRuntimeController};
-use crate::proxy::shared_state::ProxySharedState;
-use crate::startup::{
-    COMPONENT_API_BOOTSTRAP, COMPONENT_CONFIG_LOAD, COMPONENT_ME_POOL_CONSTRUCT,
-    COMPONENT_ME_POOL_INIT_STAGE1, COMPONENT_ME_PROXY_CONFIG_V4, COMPONENT_ME_PROXY_CONFIG_V6,
-    COMPONENT_ME_SECRET_FETCH, COMPONENT_NETWORK_PROBE, COMPONENT_TRACING_INIT, StartupMeStatus,
-    StartupTracker,
-};
 use crate::stats::beobachten::BeobachtenStore;
 use crate::stats::telemetry::TelemetryPolicy;
 use crate::stats::{ReplayChecker, Stats};
+use crate::startup::{
+    COMPONENT_API_BOOTSTRAP, COMPONENT_CONFIG_LOAD,
+    COMPONENT_ME_POOL_CONSTRUCT, COMPONENT_ME_POOL_INIT_STAGE1,
+    COMPONENT_ME_PROXY_CONFIG_V4, COMPONENT_ME_PROXY_CONFIG_V6, COMPONENT_ME_SECRET_FETCH,
+    COMPONENT_NETWORK_PROBE, COMPONENT_TRACING_INIT, StartupMeStatus, StartupTracker,
+};
 use crate::stream::BufferPool;
-use crate::transport::UpstreamManager;
 use crate::transport::middle_proxy::MePool;
-use helpers::{parse_cli, resolve_runtime_config_path};
+use crate::transport::UpstreamManager;
+
+use helpers::parse_cli;

-#[cfg(unix)]
-use crate::daemon::{DaemonOptions, PidFile, drop_privileges};
-
 /// Runs the full telemt runtime startup pipeline and blocks until shutdown.
-///
-/// On Unix, daemon options should be handled before calling this function
-/// (daemonization must happen before tokio runtime starts).
-#[cfg(unix)]
-pub async fn run_with_daemon(
-    daemon_opts: DaemonOptions,
-) -> std::result::Result<(), Box<dyn std::error::Error>> {
-    run_inner(daemon_opts).await
-}
-
-/// Runs the full telemt runtime startup pipeline and blocks until shutdown.
-///
-/// This is the main entry point for non-daemon mode or when called as a library.
-#[allow(dead_code)]
 pub async fn run() -> std::result::Result<(), Box<dyn std::error::Error>> {
-    #[cfg(unix)]
-    {
-        // Parse CLI to get daemon options even in simple run() path
-        let args: Vec<String> = std::env::args().skip(1).collect();
-        let daemon_opts = crate::cli::parse_daemon_args(&args);
-        run_inner(daemon_opts).await
-    }
-    #[cfg(not(unix))]
-    {
-        run_inner().await
-    }
-}
-
-#[cfg(unix)]
-async fn run_inner(
-    daemon_opts: DaemonOptions,
-) -> std::result::Result<(), Box<dyn std::error::Error>> {
-    // Acquire PID file if daemonizing or if explicitly requested
-    // Keep it alive until shutdown (underscore prefix = intentionally kept for RAII cleanup)
-    let _pid_file = if daemon_opts.daemonize || daemon_opts.pid_file.is_some() {
-        let mut pf = PidFile::new(daemon_opts.pid_file_path());
-        if let Err(e) = pf.acquire() {
-            eprintln!("[telemt] {}", e);
-            std::process::exit(1);
-        }
-        Some(pf)
-    } else {
-        None
-    };
-
     let process_started_at = Instant::now();
     let process_started_at_epoch_secs = SystemTime::now()
         .duration_since(UNIX_EPOCH)
@@ -105,129 +56,20 @@ async fn run_inner(
         .as_secs();
     let startup_tracker = Arc::new(StartupTracker::new(process_started_at_epoch_secs));
     startup_tracker
-        .start_component(
-            COMPONENT_CONFIG_LOAD,
-            Some("load and validate config".to_string()),
-        )
+        .start_component(COMPONENT_CONFIG_LOAD, Some("load and validate config".to_string()))
         .await;
-    let cli_args = parse_cli();
-    let config_path_cli = cli_args.config_path;
-    let config_path_explicit = cli_args.config_path_explicit;
-    let data_path = cli_args.data_path;
-    let cli_silent = cli_args.silent;
-    let cli_log_level = cli_args.log_level;
-    let log_destination = cli_args.log_destination;
-    let startup_cwd = match std::env::current_dir() {
-        Ok(cwd) => cwd,
-        Err(e) => {
-            eprintln!("[telemt] Can't read current_dir: {}", e);
-            std::process::exit(1);
-        }
-    };
-    let mut config_path =
-        resolve_runtime_config_path(&config_path_cli, &startup_cwd, config_path_explicit);
+    let (config_path, cli_silent, cli_log_level) = parse_cli();

     let mut config = match ProxyConfig::load(&config_path) {
         Ok(c) => c,
         Err(e) => {
-            if config_path.exists() {
+            if std::path::Path::new(&config_path).exists() {
                 eprintln!("[telemt] Error: {}", e);
                 std::process::exit(1);
             } else {
                 let default = ProxyConfig::default();
-                let serialized =
-                    match toml::to_string_pretty(&default).or_else(|_| toml::to_string(&default)) {
-                        Ok(value) => Some(value),
-                        Err(serialize_error) => {
-                            eprintln!(
-                                "[telemt] Warning: failed to serialize default config: {}",
-                                serialize_error
-                            );
-                            None
-                        }
-                    };
-
-                if config_path_explicit {
-                    if let Some(serialized) = serialized.as_ref() {
-                        if let Err(write_error) = std::fs::write(&config_path, serialized) {
-                            eprintln!(
-                                "[telemt] Error: failed to create explicit config at {}: {}",
-                                config_path.display(),
-                                write_error
-                            );
-                            std::process::exit(1);
-                        }
-                        eprintln!(
-                            "[telemt] Created default config at {}",
-                            config_path.display()
-                        );
-                    } else {
-                        eprintln!(
-                            "[telemt] Warning: running with in-memory default config without writing to disk"
-                        );
-                    }
-                } else {
-                    let system_dir = std::path::Path::new("/etc/telemt");
-                    let system_config_path = system_dir.join("telemt.toml");
-                    let startup_config_path = startup_cwd.join("config.toml");
-                    let mut persisted = false;
-
-                    if let Some(serialized) = serialized.as_ref() {
-                        match std::fs::create_dir_all(system_dir) {
-                            Ok(()) => match std::fs::write(&system_config_path, serialized) {
-                                Ok(()) => {
-                                    config_path = system_config_path;
-                                    eprintln!(
-                                        "[telemt] Created default config at {}",
-                                        config_path.display()
-                                    );
-                                    persisted = true;
-                                }
-                                Err(write_error) => {
-                                    eprintln!(
-                                        "[telemt] Warning: failed to write default config at {}: {}",
-                                        system_config_path.display(),
-                                        write_error
-                                    );
-                                }
-                            },
-                            Err(create_error) => {
-                                eprintln!(
-                                    "[telemt] Warning: failed to create {}: {}",
-                                    system_dir.display(),
-                                    create_error
-                                );
-                            }
-                        }
-
-                        if !persisted {
-                            match std::fs::write(&startup_config_path, serialized) {
-                                Ok(()) => {
-                                    config_path = startup_config_path;
-                                    eprintln!(
-                                        "[telemt] Created default config at {}",
-                                        config_path.display()
-                                    );
-                                    persisted = true;
-                                }
-                                Err(write_error) => {
-                                    eprintln!(
-                                        "[telemt] Warning: failed to write default config at {}: {}",
-                                        startup_config_path.display(),
-                                        write_error
-                                    );
-                                }
-                            }
-                        }
-                    }
-
-                    if !persisted {
-                        eprintln!(
-                            "[telemt] Warning: running with in-memory default config without writing to disk"
-                        );
-                    }
-                }
+                std::fs::write(&config_path, toml::to_string_pretty(&default).unwrap()).unwrap();
+                eprintln!("[telemt] Created default config at {}", config_path);
                 default
             }
         }
@@ -238,46 +80,6 @@ async fn run_inner(
         std::process::exit(1);
     }

-    if let Some(p) = data_path {
-        config.general.data_path = Some(p);
-    }
-
-    if let Some(ref data_path) = config.general.data_path {
-        if !data_path.is_absolute() {
-            eprintln!(
-                "[telemt] data_path must be absolute: {}",
-                data_path.display()
-            );
-            std::process::exit(1);
-        }
-
-        if data_path.exists() {
-            if !data_path.is_dir() {
-                eprintln!(
-                    "[telemt] data_path exists but is not a directory: {}",
-                    data_path.display()
-                );
-                std::process::exit(1);
-            }
-        } else if let Err(e) = std::fs::create_dir_all(data_path) {
-            eprintln!(
-                "[telemt] Can't create data_path {}: {}",
-                data_path.display(),
-                e
-            );
-            std::process::exit(1);
-        }
-
-        if let Err(e) = std::env::set_current_dir(data_path) {
-            eprintln!(
-                "[telemt] Can't use data_path {}: {}",
-                data_path.display(),
-                e
-            );
-            std::process::exit(1);
-        }
-    }
-
     if let Err(e) = crate::network::dns_overrides::install_entries(&config.network.dns_overrides) {
         eprintln!("[telemt] Invalid network.dns_overrides: {}", e);
         std::process::exit(1);
@@ -297,54 +99,22 @@ async fn run_inner(
     let (filter_layer, filter_handle) = reload::Layer::new(EnvFilter::new("info"));
     startup_tracker
-        .start_component(
-            COMPONENT_TRACING_INIT,
-            Some("initialize tracing subscriber".to_string()),
-        )
+        .start_component(COMPONENT_TRACING_INIT, Some("initialize tracing subscriber".to_string()))
         .await;

-    // Initialize logging based on destination
-    let _logging_guard: Option<crate::logging::LoggingGuard>;
-    match log_destination {
-        crate::logging::LogDestination::Stderr => {
-            // Default: log to stderr (works with systemd journald)
+    // Configure color output based on config
     let fmt_layer = if config.general.disable_colors {
         fmt::Layer::default().with_ansi(false)
     } else {
         fmt::Layer::default().with_ansi(true)
     };

     tracing_subscriber::registry()
         .with(filter_layer)
         .with(fmt_layer)
         .init();
-            _logging_guard = None;
-        }
-        #[cfg(unix)]
-        crate::logging::LogDestination::Syslog => {
-            // Syslog: for OpenRC/FreeBSD
-            let logging_opts = crate::logging::LoggingOptions {
-                destination: log_destination,
-                disable_colors: true,
-            };
-            let (_, guard) = crate::logging::init_logging(&logging_opts, "info");
-            _logging_guard = Some(guard);
-        }
-        crate::logging::LogDestination::File { .. } => {
-            // File logging with optional rotation
-            let logging_opts = crate::logging::LoggingOptions {
-                destination: log_destination,
-                disable_colors: true,
-            };
-            let (_, guard) = crate::logging::init_logging(&logging_opts, "info");
-            _logging_guard = Some(guard);
-        }
-    }

     startup_tracker
-        .complete_component(
-            COMPONENT_TRACING_INIT,
-            Some("tracing initialized".to_string()),
-        )
+        .complete_component(COMPONENT_TRACING_INIT, Some("tracing initialized".to_string()))
         .await;

     info!("Telemt MTProxy v{}", env!("CARGO_PKG_VERSION"));
@@ -393,31 +163,22 @@ async fn run_inner(
         config.general.upstream_connect_retry_attempts,
         config.general.upstream_connect_retry_backoff_ms,
         config.general.upstream_connect_budget_ms,
-        config.general.tg_connect,
         config.general.upstream_unhealthy_fail_threshold,
         config.general.upstream_connect_failfast_hard_errors,
         stats.clone(),
     ));
     let ip_tracker = Arc::new(UserIpTracker::new());
-    ip_tracker
-        .load_limits(
-            config.access.user_max_unique_ips_global_each,
-            &config.access.user_max_unique_ips,
-        )
-        .await;
+    ip_tracker.load_limits(&config.access.user_max_unique_ips).await;
     ip_tracker
         .set_limit_policy(
             config.access.user_max_unique_ips_mode,
             config.access.user_max_unique_ips_window_secs,
         )
         .await;
-    if config.access.user_max_unique_ips_global_each > 0
-        || !config.access.user_max_unique_ips.is_empty()
-    {
+    if !config.access.user_max_unique_ips.is_empty() {
         info!(
-            global_each_limit = config.access.user_max_unique_ips_global_each,
-            explicit_user_limits = config.access.user_max_unique_ips.len(),
-            "User unique IP limits configured"
+            "IP limits configured for {} users",
+            config.access.user_max_unique_ips.len()
         );
     }
     if !config.network.dns_overrides.is_empty() {
@@ -439,10 +200,7 @@ async fn run_inner(
     let route_runtime = Arc::new(RouteRuntimeController::new(initial_route_mode));
     let api_me_pool = Arc::new(RwLock::new(None::<Arc<MePool>>));
     startup_tracker
-        .start_component(
-            COMPONENT_API_BOOTSTRAP,
-            Some("spawn API listener task".to_string()),
-        )
+        .start_component(COMPONENT_API_BOOTSTRAP, Some("spawn API listener task".to_string()))
         .await;

     if config.server.api.enabled {
@@ -465,7 +223,7 @@ async fn run_inner(
         let route_runtime_api = route_runtime.clone();
         let config_rx_api = api_config_rx.clone();
         let admission_rx_api = admission_rx.clone();
-        let config_path_api = config_path.clone();
+        let config_path_api = std::path::PathBuf::from(&config_path);
         let startup_tracker_api = startup_tracker.clone();
         let detected_ips_rx_api = detected_ips_rx.clone();
         tokio::spawn(async move {
@@ -525,10 +283,7 @@ async fn run_inner(
         .await;

     startup_tracker
-        .start_component(
-            COMPONENT_NETWORK_PROBE,
-            Some("probe network capabilities".to_string()),
-        )
+        .start_component(COMPONENT_NETWORK_PROBE, Some("probe network capabilities".to_string()))
         .await;
     let probe = run_probe(
         &config.network,
@@ -541,8 +296,11 @@ async fn run_inner(
         probe.detected_ipv4.map(IpAddr::V4),
         probe.detected_ipv6.map(IpAddr::V6),
     ));
-    let decision =
-        decide_network_capabilities(&config.network, &probe, config.general.middle_proxy_nat_ip);
+    let decision = decide_network_capabilities(
+        &config.network,
+        &probe,
+        config.general.middle_proxy_nat_ip,
+    );
     log_probe_result(&probe, &decision);
     startup_tracker
         .complete_component(
@@ -556,13 +314,8 @@ async fn run_inner(
     let beobachten = Arc::new(BeobachtenStore::new());
     let rng = Arc::new(SecureRandom::new());

-    // Connection concurrency limit (0 = unlimited)
-    let max_connections_limit = if config.server.max_connections == 0 {
-        Semaphore::MAX_PERMITS
-    } else {
-        config.server.max_connections as usize
-    };
-    let max_connections = Arc::new(Semaphore::new(max_connections_limit));
+    // Connection concurrency limit
+    let max_connections = Arc::new(Semaphore::new(10_000));

     let me2dc_fallback = config.general.me2dc_fallback;
     let me_init_retry_attempts = config.general.me_init_retry_attempts;
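In the removed side of the hunk above, a configured `max_connections` of 0 means "unlimited" and is mapped to `Semaphore::MAX_PERMITS` so that the unlimited case still goes through the same semaphore. That mapping can be sketched as a pure function; the `MAX_PERMITS` constant below mirrors tokio's documented `usize::MAX >> 3`, but treat the exact value as an assumption of this sketch.

```rust
// Stand-in for tokio::sync::Semaphore::MAX_PERMITS (documented as usize::MAX >> 3).
const MAX_PERMITS: usize = usize::MAX >> 3;

/// Map a configured connection limit to a semaphore permit count:
/// 0 means unlimited, approximated by the largest supported permit count.
fn connection_permits(max_connections: u32) -> usize {
    if max_connections == 0 {
        MAX_PERMITS
    } else {
        max_connections as usize
    }
}

fn main() {
    assert_eq!(connection_permits(0), MAX_PERMITS);
    assert_eq!(connection_permits(512), 512);
    println!("permits for 0 = {}", connection_permits(0));
}
```

Funneling "unlimited" through the semaphore keeps a single accept path: every connection still acquires a permit, and only the permit count changes.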
@@ -645,16 +398,24 @@ async fn run_inner(

     // If ME failed to initialize, force direct-only mode.
     if me_pool.is_some() {
-        startup_tracker.set_transport_mode("middle_proxy").await;
-        startup_tracker.set_degraded(false).await;
+        startup_tracker
+            .set_transport_mode("middle_proxy")
+            .await;
+        startup_tracker
+            .set_degraded(false)
+            .await;
         info!("Transport: Middle-End Proxy - all DC-over-RPC");
     } else {
         let _ = use_middle_proxy;
         use_middle_proxy = false;
         // Make runtime config reflect direct-only mode for handlers.
         config.general.use_middle_proxy = false;
-        startup_tracker.set_transport_mode("direct").await;
-        startup_tracker.set_degraded(true).await;
+        startup_tracker
+            .set_transport_mode("direct")
+            .await;
+        startup_tracker
+            .set_degraded(true)
+            .await;
         if me2dc_fallback {
             startup_tracker
                 .set_me_status(StartupMeStatus::Failed, "fallback_to_direct")
@@ -675,7 +436,7 @@ async fn run_inner(
         Duration::from_secs(config.access.replay_window_secs),
     ));

-    let buffer_pool = Arc::new(BufferPool::with_config(64 * 1024, 4096));
+    let buffer_pool = Arc::new(BufferPool::with_config(16 * 1024, 4096));

     connectivity::run_startup_connectivity(
         &config,
@@ -723,12 +484,6 @@ async fn run_inner(
     )
     .await;
     let _admission_tx_hold = admission_tx;
-    let shared_state = ProxySharedState::new();
-    conntrack_control::spawn_conntrack_controller(
-        config_rx.clone(),
-        stats.clone(),
-        shared_state.clone(),
-    );

     let bound = listeners::bind_listeners(
         &config,
@@ -749,7 +504,6 @@ async fn run_inner(
         tls_cache.clone(),
         ip_tracker.clone(),
         beobachten.clone(),
-        shared_state.clone(),
         max_connections.clone(),
     )
     .await?;
@@ -761,14 +515,6 @@ async fn run_inner(
         std::process::exit(1);
     }

-    // Drop privileges after binding sockets (which may require root for port < 1024)
-    if daemon_opts.user.is_some() || daemon_opts.group.is_some() {
-        if let Err(e) = drop_privileges(daemon_opts.user.as_deref(), daemon_opts.group.as_deref()) {
-            error!(error = %e, "Failed to drop privileges");
-            std::process::exit(1);
-        }
-    }
-
     runtime_tasks::apply_runtime_log_filter(
         has_rust_log,
         &effective_log_level,
@@ -789,9 +535,6 @@ async fn run_inner(

     runtime_tasks::mark_runtime_ready(&startup_tracker).await;

-    // Spawn signal handlers for SIGUSR1/SIGUSR2 (non-shutdown signals)
-    shutdown::spawn_signal_handlers(stats.clone(), process_started_at);
-
     listeners::spawn_tcp_accept_loops(
         listeners,
         config_rx.clone(),
@@ -806,11 +549,10 @@ async fn run_inner(
         tls_cache.clone(),
         ip_tracker.clone(),
         beobachten.clone(),
-        shared_state,
         max_connections.clone(),
     );

-    shutdown::wait_for_shutdown(process_started_at, me_pool, stats).await;
+    shutdown::wait_for_shutdown(process_started_at, me_pool).await;

     Ok(())
 }
@@ -1,27 +1,24 @@
 use std::net::IpAddr;
-use std::path::Path;
+use std::path::PathBuf;
 use std::sync::Arc;

 use tokio::sync::{mpsc, watch};
 use tracing::{debug, warn};
-use tracing_subscriber::EnvFilter;
 use tracing_subscriber::reload;
+use tracing_subscriber::EnvFilter;

-use crate::config::hot_reload::spawn_config_watcher;
 use crate::config::{LogLevel, ProxyConfig};
+use crate::config::hot_reload::spawn_config_watcher;
 use crate::crypto::SecureRandom;
 use crate::ip_tracker::UserIpTracker;
 use crate::metrics;
 use crate::network::probe::NetworkProbe;
-use crate::startup::{
-    COMPONENT_CONFIG_WATCHER_START, COMPONENT_METRICS_START, COMPONENT_RUNTIME_READY,
-    StartupTracker,
-};
+use crate::startup::{COMPONENT_CONFIG_WATCHER_START, COMPONENT_METRICS_START, COMPONENT_RUNTIME_READY, StartupTracker};
 use crate::stats::beobachten::BeobachtenStore;
 use crate::stats::telemetry::TelemetryPolicy;
 use crate::stats::{ReplayChecker, Stats};
-use crate::transport::UpstreamManager;
 use crate::transport::middle_proxy::{MePool, MeReinitTrigger};
+use crate::transport::UpstreamManager;

 use super::helpers::write_beobachten_snapshot;
@@ -35,7 +32,7 @@ pub(crate) struct RuntimeWatches {
 #[allow(clippy::too_many_arguments)]
 pub(crate) async fn spawn_runtime_tasks(
     config: &Arc<ProxyConfig>,
-    config_path: &Path,
+    config_path: &str,
     probe: &NetworkProbe,
     prefer_ipv6: bool,
     decision_ipv4_dc: bool,
@@ -82,9 +79,11 @@ pub(crate) async fn spawn_runtime_tasks(
             Some("spawn config hot-reload watcher".to_string()),
         )
         .await;
-    let (config_rx, log_level_rx): (watch::Receiver<Arc<ProxyConfig>>, watch::Receiver<LogLevel>) =
-        spawn_config_watcher(
-            config_path.to_path_buf(),
+    let (config_rx, log_level_rx): (
+        watch::Receiver<Arc<ProxyConfig>>,
+        watch::Receiver<LogLevel>,
+    ) = spawn_config_watcher(
+        PathBuf::from(config_path),
         config.clone(),
         detected_ip_v4,
         detected_ip_v6,
@@ -115,8 +114,7 @@ pub(crate) async fn spawn_runtime_tasks(
                 break;
             }
             let cfg = config_rx_policy.borrow_and_update().clone();
-            stats_policy
-                .apply_telemetry_policy(TelemetryPolicy::from_config(&cfg.general.telemetry));
+            stats_policy.apply_telemetry_policy(TelemetryPolicy::from_config(&cfg.general.telemetry));
             if let Some(pool) = &me_pool_for_policy {
                 pool.update_runtime_transport_policy(
                     cfg.general.me_socks_kdf_policy,
@@ -132,15 +130,7 @@ pub(crate) async fn spawn_runtime_tasks(
     let ip_tracker_policy = ip_tracker.clone();
     let mut config_rx_ip_limits = config_rx.clone();
     tokio::spawn(async move {
-        let mut prev_limits = config_rx_ip_limits
-            .borrow()
-            .access
-            .user_max_unique_ips
-            .clone();
-        let mut prev_global_each = config_rx_ip_limits
-            .borrow()
-            .access
-            .user_max_unique_ips_global_each;
+        let mut prev_limits = config_rx_ip_limits.borrow().access.user_max_unique_ips.clone();
         let mut prev_mode = config_rx_ip_limits.borrow().access.user_max_unique_ips_mode;
         let mut prev_window = config_rx_ip_limits
             .borrow()
@@ -153,17 +143,9 @@ pub(crate) async fn spawn_runtime_tasks(
             }
             let cfg = config_rx_ip_limits.borrow_and_update().clone();

-            if prev_limits != cfg.access.user_max_unique_ips
-                || prev_global_each != cfg.access.user_max_unique_ips_global_each
-            {
-                ip_tracker_policy
-                    .load_limits(
-                        cfg.access.user_max_unique_ips_global_each,
-                        &cfg.access.user_max_unique_ips,
-                    )
-                    .await;
+            if prev_limits != cfg.access.user_max_unique_ips {
+                ip_tracker_policy.load_limits(&cfg.access.user_max_unique_ips).await;
                 prev_limits = cfg.access.user_max_unique_ips.clone();
-                prev_global_each = cfg.access.user_max_unique_ips_global_each;
             }

             if prev_mode != cfg.access.user_max_unique_ips_mode
@ -189,9 +171,7 @@ pub(crate) async fn spawn_runtime_tasks(
|
||||||
let sleep_secs = cfg.general.beobachten_flush_secs.max(1);
|
let sleep_secs = cfg.general.beobachten_flush_secs.max(1);
|
||||||
|
|
||||||
if cfg.general.beobachten {
|
if cfg.general.beobachten {
|
||||||
let ttl = std::time::Duration::from_secs(
|
let ttl = std::time::Duration::from_secs(cfg.general.beobachten_minutes.saturating_mul(60));
|
||||||
cfg.general.beobachten_minutes.saturating_mul(60),
|
|
||||||
);
|
|
||||||
let path = cfg.general.beobachten_file.clone();
|
let path = cfg.general.beobachten_file.clone();
|
||||||
let snapshot = beobachten_writer.snapshot_text(ttl);
|
let snapshot = beobachten_writer.snapshot_text(ttl);
|
||||||
if let Err(e) = write_beobachten_snapshot(&path, &snapshot).await {
|
if let Err(e) = write_beobachten_snapshot(&path, &snapshot).await {
|
||||||
|
|
@ -235,10 +215,7 @@ pub(crate) async fn spawn_runtime_tasks(
|
||||||
let config_rx_clone_rot = config_rx.clone();
|
let config_rx_clone_rot = config_rx.clone();
|
||||||
let reinit_tx_rotation = reinit_tx.clone();
|
let reinit_tx_rotation = reinit_tx.clone();
|
||||||
tokio::spawn(async move {
|
tokio::spawn(async move {
|
||||||
crate::transport::middle_proxy::me_rotation_task(
|
crate::transport::middle_proxy::me_rotation_task(config_rx_clone_rot, reinit_tx_rotation)
|
||||||
config_rx_clone_rot,
|
|
||||||
reinit_tx_rotation,
|
|
||||||
)
|
|
||||||
.await;
|
.await;
|
||||||
});
|
});
|
||||||
}
|
}
|
||||||
|
|
@ -290,32 +267,11 @@ pub(crate) async fn spawn_metrics_if_configured(
|
||||||
ip_tracker: Arc<UserIpTracker>,
|
ip_tracker: Arc<UserIpTracker>,
|
||||||
config_rx: watch::Receiver<Arc<ProxyConfig>>,
|
config_rx: watch::Receiver<Arc<ProxyConfig>>,
|
||||||
) {
|
) {
|
||||||
// metrics_listen takes precedence; fall back to metrics_port for backward compat.
|
if let Some(port) = config.server.metrics_port {
|
||||||
let metrics_target: Option<(u16, Option<String>)> =
|
|
||||||
if let Some(ref listen) = config.server.metrics_listen {
|
|
||||||
match listen.parse::<std::net::SocketAddr>() {
|
|
||||||
Ok(addr) => Some((addr.port(), Some(listen.clone()))),
|
|
||||||
Err(e) => {
|
|
||||||
startup_tracker
|
|
||||||
.skip_component(
|
|
||||||
COMPONENT_METRICS_START,
|
|
||||||
Some(format!("invalid metrics_listen \"{}\": {}", listen, e)),
|
|
||||||
)
|
|
||||||
.await;
|
|
||||||
None
|
|
||||||
}
|
|
||||||
}
|
|
||||||
} else {
|
|
||||||
config.server.metrics_port.map(|p| (p, None))
|
|
||||||
};
|
|
||||||
|
|
||||||
if let Some((port, listen)) = metrics_target {
|
|
||||||
let fallback_label = format!("port {}", port);
|
|
||||||
let label = listen.as_deref().unwrap_or(&fallback_label);
|
|
||||||
startup_tracker
|
startup_tracker
|
||||||
.start_component(
|
.start_component(
|
||||||
COMPONENT_METRICS_START,
|
COMPONENT_METRICS_START,
|
||||||
Some(format!("spawn metrics endpoint on {}", label)),
|
Some(format!("spawn metrics endpoint on {}", port)),
|
||||||
)
|
)
|
||||||
.await;
|
.await;
|
||||||
let stats = stats.clone();
|
let stats = stats.clone();
|
||||||
|
|
@ -323,12 +279,9 @@ pub(crate) async fn spawn_metrics_if_configured(
|
||||||
let config_rx_metrics = config_rx.clone();
|
let config_rx_metrics = config_rx.clone();
|
||||||
let ip_tracker_metrics = ip_tracker.clone();
|
let ip_tracker_metrics = ip_tracker.clone();
|
||||||
let whitelist = config.server.metrics_whitelist.clone();
|
let whitelist = config.server.metrics_whitelist.clone();
|
||||||
let listen_backlog = config.server.listen_backlog;
|
|
||||||
tokio::spawn(async move {
|
tokio::spawn(async move {
|
||||||
metrics::serve(
|
metrics::serve(
|
||||||
port,
|
port,
|
||||||
listen,
|
|
||||||
listen_backlog,
|
|
||||||
stats,
|
stats,
|
||||||
beobachten,
|
beobachten,
|
||||||
ip_tracker_metrics,
|
ip_tracker_metrics,
|
||||||
|
|
@ -343,7 +296,7 @@ pub(crate) async fn spawn_metrics_if_configured(
|
||||||
Some("metrics task spawned".to_string()),
|
Some("metrics task spawned".to_string()),
|
||||||
)
|
)
|
||||||
.await;
|
.await;
|
||||||
} else if config.server.metrics_listen.is_none() {
|
} else {
|
||||||
startup_tracker
|
startup_tracker
|
||||||
.skip_component(
|
.skip_component(
|
||||||
COMPONENT_METRICS_START,
|
COMPONENT_METRICS_START,
|
||||||
|
|
|
||||||
|
|
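The metrics hunk above replaces a plain `metrics_port` check with a `metrics_listen` override: an explicit listen address, if it parses as a socket address, wins over the bare port, and an invalid listen string skips the component instead of falling back. A minimal standalone sketch of that precedence rule (the helper's name and signature are illustrative, not from the repo; `metrics_target` is borrowed from the variable name in the diff):

```rust
use std::net::SocketAddr;

// Pick the metrics endpoint target: a parsed `metrics_listen` address takes
// precedence; with no listen spec we fall back to `metrics_port`; an invalid
// listen spec yields None (the component is skipped, no silent fallback).
fn metrics_target(listen: Option<&str>, port: Option<u16>) -> Option<(u16, Option<String>)> {
    match listen {
        Some(l) => match l.parse::<SocketAddr>() {
            Ok(addr) => Some((addr.port(), Some(l.to_string()))),
            Err(_) => None, // invalid metrics_listen: skip instead of falling back
        },
        None => port.map(|p| (p, None)),
    }
}

fn main() {
    // Explicit listen address takes precedence over the plain port.
    assert_eq!(
        metrics_target(Some("127.0.0.1:9090"), Some(8888)),
        Some((9090, Some("127.0.0.1:9090".to_string())))
    );
    // No listen spec: fall back to metrics_port.
    assert_eq!(metrics_target(None, Some(8888)), Some((8888, None)));
    // Invalid listen spec: the endpoint is skipped entirely.
    assert_eq!(metrics_target(Some("not-an-addr"), Some(8888)), None);
    println!("ok");
}
```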
@@ -1,98 +1,20 @@
-//! Shutdown and signal handling for telemt.
-//!
-//! Handles graceful shutdown on various signals:
-//! - SIGINT (Ctrl+C) / SIGTERM: Graceful shutdown
-//! - SIGQUIT: Graceful shutdown with stats dump
-//! - SIGUSR1: Reserved for log rotation (logs acknowledgment)
-//! - SIGUSR2: Dump runtime status to log
-//!
-//! SIGHUP is handled separately in config/hot_reload.rs for config reload.
-
 use std::sync::Arc;
 use std::time::{Duration, Instant};
 
-#[cfg(not(unix))]
 use tokio::signal;
-#[cfg(unix)]
-use tokio::signal::unix::{SignalKind, signal};
-use tracing::{info, warn};
+use tracing::{error, info, warn};
 
-use crate::stats::Stats;
 use crate::transport::middle_proxy::MePool;
 
 use super::helpers::{format_uptime, unit_label};
 
-/// Signal that triggered shutdown.
-#[derive(Debug, Clone, Copy, PartialEq, Eq)]
-pub enum ShutdownSignal {
-    /// SIGINT (Ctrl+C)
-    Interrupt,
-    /// SIGTERM
-    Terminate,
-    /// SIGQUIT (with stats dump)
-    Quit,
-}
-
-impl std::fmt::Display for ShutdownSignal {
-    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
-        match self {
-            ShutdownSignal::Interrupt => write!(f, "SIGINT"),
-            ShutdownSignal::Terminate => write!(f, "SIGTERM"),
-            ShutdownSignal::Quit => write!(f, "SIGQUIT"),
-        }
-    }
-}
-
-/// Waits for a shutdown signal and performs graceful shutdown.
-pub(crate) async fn wait_for_shutdown(
-    process_started_at: Instant,
-    me_pool: Option<Arc<MePool>>,
-    stats: Arc<Stats>,
-) {
-    let signal = wait_for_shutdown_signal().await;
-    perform_shutdown(signal, process_started_at, me_pool, &stats).await;
-}
-
-/// Waits for any shutdown signal (SIGINT, SIGTERM, SIGQUIT).
-#[cfg(unix)]
-async fn wait_for_shutdown_signal() -> ShutdownSignal {
-    let mut sigint = signal(SignalKind::interrupt()).expect("Failed to register SIGINT handler");
-    let mut sigterm = signal(SignalKind::terminate()).expect("Failed to register SIGTERM handler");
-    let mut sigquit = signal(SignalKind::quit()).expect("Failed to register SIGQUIT handler");
-
-    tokio::select! {
-        _ = sigint.recv() => ShutdownSignal::Interrupt,
-        _ = sigterm.recv() => ShutdownSignal::Terminate,
-        _ = sigquit.recv() => ShutdownSignal::Quit,
-    }
-}
-
-#[cfg(not(unix))]
-async fn wait_for_shutdown_signal() -> ShutdownSignal {
-    signal::ctrl_c().await.expect("Failed to listen for Ctrl+C");
-    ShutdownSignal::Interrupt
-}
-
-/// Performs graceful shutdown sequence.
-async fn perform_shutdown(
-    signal: ShutdownSignal,
-    process_started_at: Instant,
-    me_pool: Option<Arc<MePool>>,
-    stats: &Stats,
-) {
+pub(crate) async fn wait_for_shutdown(process_started_at: Instant, me_pool: Option<Arc<MePool>>) {
+    match signal::ctrl_c().await {
+        Ok(()) => {
     let shutdown_started_at = Instant::now();
-    info!(signal = %signal, "Received shutdown signal");
-
-    // Dump stats if SIGQUIT
-    if signal == ShutdownSignal::Quit {
-        dump_stats(stats, process_started_at);
-    }
-
     info!("Shutting down...");
     let uptime_secs = process_started_at.elapsed().as_secs();
     info!("Uptime: {}", format_uptime(uptime_secs));
-
-    // Graceful ME pool shutdown
     if let Some(pool) = &me_pool {
         match tokio::time::timeout(Duration::from_secs(2), pool.shutdown_send_close_conn_all())
             .await
@@ -108,99 +30,13 @@ async fn perform_shutdown(
             }
         }
     }
 
     let shutdown_secs = shutdown_started_at.elapsed().as_secs();
     info!(
         "Shutdown completed successfully in {} {}.",
         shutdown_secs,
         unit_label(shutdown_secs, "second", "seconds")
     );
-}
-
-/// Dumps runtime statistics to the log.
-fn dump_stats(stats: &Stats, process_started_at: Instant) {
-    let uptime_secs = process_started_at.elapsed().as_secs();
-
-    info!("=== Runtime Statistics Dump ===");
-    info!("Uptime: {}", format_uptime(uptime_secs));
-
-    // Connection stats
-    info!(
-        "Connections: total={}, current={} (direct={}, me={}), bad={}",
-        stats.get_connects_all(),
-        stats.get_current_connections_total(),
-        stats.get_current_connections_direct(),
-        stats.get_current_connections_me(),
-        stats.get_connects_bad(),
-    );
-
-    // ME pool stats
-    info!(
-        "ME keepalive: sent={}, pong={}, failed={}, timeout={}",
-        stats.get_me_keepalive_sent(),
-        stats.get_me_keepalive_pong(),
-        stats.get_me_keepalive_failed(),
-        stats.get_me_keepalive_timeout(),
-    );
-
-    // Relay stats
-    info!(
-        "Relay idle: soft_mark={}, hard_close={}, pressure_evict={}",
-        stats.get_relay_idle_soft_mark_total(),
-        stats.get_relay_idle_hard_close_total(),
-        stats.get_relay_pressure_evict_total(),
-    );
-
-    info!("=== End Statistics Dump ===");
-}
-
-/// Spawns a background task to handle operational signals (SIGUSR1, SIGUSR2).
-///
-/// These signals don't trigger shutdown but perform specific actions:
-/// - SIGUSR1: Log rotation acknowledgment (for external log rotation tools)
-/// - SIGUSR2: Dump runtime status to log
-#[cfg(unix)]
-pub(crate) fn spawn_signal_handlers(stats: Arc<Stats>, process_started_at: Instant) {
-    tokio::spawn(async move {
-        let mut sigusr1 =
-            signal(SignalKind::user_defined1()).expect("Failed to register SIGUSR1 handler");
-        let mut sigusr2 =
-            signal(SignalKind::user_defined2()).expect("Failed to register SIGUSR2 handler");
-
-        loop {
-            tokio::select! {
-                _ = sigusr1.recv() => {
-                    handle_sigusr1();
-                }
-                _ = sigusr2.recv() => {
-                    handle_sigusr2(&stats, process_started_at);
-                }
-            }
-        }
-    });
-}
-
-/// No-op on non-Unix platforms.
-#[cfg(not(unix))]
-pub(crate) fn spawn_signal_handlers(_stats: Arc<Stats>, _process_started_at: Instant) {
-    // No SIGUSR1/SIGUSR2 on non-Unix
-}
-
-/// Handles SIGUSR1 - log rotation signal.
-///
-/// This signal is typically sent by logrotate or similar tools after
-/// rotating log files. Since tracing-subscriber doesn't natively support
-/// reopening files, we just acknowledge the signal. If file logging is
-/// added in the future, this would reopen log file handles.
-#[cfg(unix)]
-fn handle_sigusr1() {
-    info!("SIGUSR1 received - log rotation acknowledged");
-    // Future: If using file-based logging, reopen file handles here
-}
-
-/// Handles SIGUSR2 - dump runtime status.
-#[cfg(unix)]
-fn handle_sigusr2(stats: &Stats, process_started_at: Instant) {
-    info!("SIGUSR2 received - dumping runtime status");
-    dump_stats(stats, process_started_at);
-}
+        }
+        Err(e) => error!("Signal error: {}", e),
+    }
 }
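The shutdown diff above removes a `ShutdownSignal` enum that names each Unix signal through a `Display` impl. Extracted from the deleted side as a compilable, self-contained unit (only the doc comments are trimmed):

```rust
use std::fmt;

/// Signal that triggered shutdown (mirrors the removed enum).
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ShutdownSignal {
    Interrupt, // SIGINT (Ctrl+C)
    Terminate, // SIGTERM
    Quit,      // SIGQUIT (with stats dump)
}

impl fmt::Display for ShutdownSignal {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ShutdownSignal::Interrupt => write!(f, "SIGINT"),
            ShutdownSignal::Terminate => write!(f, "SIGTERM"),
            ShutdownSignal::Quit => write!(f, "SIGQUIT"),
        }
    }
}

fn main() {
    // Display renders the conventional signal name, usable in log fields
    // like `info!(signal = %signal, ...)`.
    assert_eq!(ShutdownSignal::Interrupt.to_string(), "SIGINT");
    assert_eq!(ShutdownSignal::Quit.to_string(), "SIGQUIT");
    println!("ok");
}
```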
@@ -1,13 +1,12 @@
 use std::sync::Arc;
 use std::time::Duration;
 
-use rand::RngExt;
+use rand::Rng;
 use tracing::warn;
 
 use crate::config::ProxyConfig;
 use crate::startup::{COMPONENT_TLS_FRONT_BOOTSTRAP, StartupTracker};
 use crate::tls_front::TlsFrontCache;
-use crate::tls_front::fetcher::TlsFetchStrategy;
 use crate::transport::UpstreamManager;
 
 pub(crate) async fn bootstrap_tls_front(
@@ -39,44 +38,27 @@ pub(crate) async fn bootstrap_tls_front(
         .clone()
         .unwrap_or_else(|| config.censorship.tls_domain.clone());
     let mask_unix_sock = config.censorship.mask_unix_sock.clone();
-    let tls_fetch_scope = (!config.censorship.tls_fetch_scope.is_empty())
-        .then(|| config.censorship.tls_fetch_scope.clone());
-    let tls_fetch = config.censorship.tls_fetch.clone();
-    let fetch_strategy = TlsFetchStrategy {
-        profiles: tls_fetch.profiles,
-        strict_route: tls_fetch.strict_route,
-        attempt_timeout: Duration::from_millis(tls_fetch.attempt_timeout_ms.max(1)),
-        total_budget: Duration::from_millis(tls_fetch.total_budget_ms.max(1)),
-        grease_enabled: tls_fetch.grease_enabled,
-        deterministic: tls_fetch.deterministic,
-        profile_cache_ttl: Duration::from_secs(tls_fetch.profile_cache_ttl_secs),
-    };
-    let fetch_timeout = fetch_strategy.total_budget;
+    let fetch_timeout = Duration::from_secs(5);
 
     let cache_initial = cache.clone();
     let domains_initial = tls_domains.to_vec();
     let host_initial = mask_host.clone();
     let unix_sock_initial = mask_unix_sock.clone();
-    let scope_initial = tls_fetch_scope.clone();
     let upstream_initial = upstream_manager.clone();
-    let strategy_initial = fetch_strategy.clone();
     tokio::spawn(async move {
         let mut join = tokio::task::JoinSet::new();
         for domain in domains_initial {
             let cache_domain = cache_initial.clone();
             let host_domain = host_initial.clone();
             let unix_sock_domain = unix_sock_initial.clone();
-            let scope_domain = scope_initial.clone();
             let upstream_domain = upstream_initial.clone();
-            let strategy_domain = strategy_initial.clone();
             join.spawn(async move {
-                match crate::tls_front::fetcher::fetch_real_tls_with_strategy(
+                match crate::tls_front::fetcher::fetch_real_tls(
                     &host_domain,
                     port,
                     &domain,
-                    &strategy_domain,
+                    fetch_timeout,
                     Some(upstream_domain),
-                    scope_domain.as_deref(),
                     proxy_protocol,
                     unix_sock_domain.as_deref(),
                 )
@@ -118,9 +100,7 @@ pub(crate) async fn bootstrap_tls_front(
     let domains_refresh = tls_domains.to_vec();
     let host_refresh = mask_host.clone();
     let unix_sock_refresh = mask_unix_sock.clone();
-    let scope_refresh = tls_fetch_scope.clone();
     let upstream_refresh = upstream_manager.clone();
-    let strategy_refresh = fetch_strategy.clone();
     tokio::spawn(async move {
         loop {
             let base_secs = rand::rng().random_range(4 * 3600..=6 * 3600);
@@ -132,17 +112,14 @@ pub(crate) async fn bootstrap_tls_front(
                 let cache_domain = cache_refresh.clone();
                 let host_domain = host_refresh.clone();
                 let unix_sock_domain = unix_sock_refresh.clone();
-                let scope_domain = scope_refresh.clone();
                 let upstream_domain = upstream_refresh.clone();
-                let strategy_domain = strategy_refresh.clone();
                 join.spawn(async move {
-                    match crate::tls_front::fetcher::fetch_real_tls_with_strategy(
+                    match crate::tls_front::fetcher::fetch_real_tls(
                         &host_domain,
                         port,
                         &domain,
-                        &strategy_domain,
+                        fetch_timeout,
                         Some(upstream_domain),
-                        scope_domain.as_deref(),
                         proxy_protocol,
                         unix_sock_domain.as_deref(),
                     )
62
src/main.rs
@@ -3,28 +3,14 @@
 mod api;
 mod cli;
 mod config;
-mod conntrack_control;
 mod crypto;
-#[cfg(unix)]
-mod daemon;
 mod error;
 mod ip_tracker;
-#[cfg(test)]
-#[path = "tests/ip_tracker_encapsulation_adversarial_tests.rs"]
-mod ip_tracker_encapsulation_adversarial_tests;
-#[cfg(test)]
-#[path = "tests/ip_tracker_hotpath_adversarial_tests.rs"]
-mod ip_tracker_hotpath_adversarial_tests;
-#[cfg(test)]
-#[path = "tests/ip_tracker_regression_tests.rs"]
-mod ip_tracker_regression_tests;
-mod logging;
 mod maestro;
 mod metrics;
 mod network;
 mod protocol;
 mod proxy;
-mod service;
 mod startup;
 mod stats;
 mod stream;
@@ -32,49 +18,7 @@ mod tls_front;
 mod transport;
 mod util;
 
-fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
-    // Install rustls crypto provider early
-    let _ = rustls::crypto::ring::default_provider().install_default();
-
-    let args: Vec<String> = std::env::args().skip(1).collect();
-    let cmd = cli::parse_command(&args);
-
-    // Handle subcommands that don't need the server (stop, reload, status, init)
-    if let Some(exit_code) = cli::execute_subcommand(&cmd) {
-        std::process::exit(exit_code);
-    }
-
-    #[cfg(unix)]
-    {
-        let daemon_opts = cmd.daemon_opts;
-
-        // Daemonize BEFORE runtime
-        if daemon_opts.should_daemonize() {
-            match daemon::daemonize(daemon_opts.working_dir.as_deref()) {
-                Ok(daemon::DaemonizeResult::Parent) => {
-                    std::process::exit(0);
-                }
-                Ok(daemon::DaemonizeResult::Child) => {
-                    // continue
-                }
-                Err(e) => {
-                    eprintln!("[telemt] Daemonization failed: {}", e);
-                    std::process::exit(1);
-                }
-            }
-        }
-
-        tokio::runtime::Builder::new_multi_thread()
-            .enable_all()
-            .build()?
-            .block_on(maestro::run_with_daemon(daemon_opts))
-    }
-
-    #[cfg(not(unix))]
-    {
-        tokio::runtime::Builder::new_multi_thread()
-            .enable_all()
-            .build()?
-            .block_on(maestro::run())
-    }
+#[tokio::main]
+async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
+    maestro::run().await
 }
1317
src/metrics.rs
File diff suppressed because it is too large
@@ -26,7 +26,9 @@ fn parse_ip_spec(ip_spec: &str) -> Result<IpAddr> {
     }
 
     let ip = ip_spec.parse::<IpAddr>().map_err(|_| {
-        ProxyError::Config(format!("network.dns_overrides IP is invalid: '{ip_spec}'"))
+        ProxyError::Config(format!(
+            "network.dns_overrides IP is invalid: '{ip_spec}'"
+        ))
     })?;
     if matches!(ip, IpAddr::V6(_)) {
         return Err(ProxyError::Config(format!(
@@ -101,9 +103,9 @@ pub fn validate_entries(entries: &[String]) -> Result<()> {
 /// Replace runtime DNS overrides with a new validated snapshot.
 pub fn install_entries(entries: &[String]) -> Result<()> {
     let parsed = parse_entries(entries)?;
-    let mut guard = overrides_store().write().map_err(|_| {
-        ProxyError::Config("network.dns_overrides runtime lock is poisoned".to_string())
-    })?;
+    let mut guard = overrides_store()
+        .write()
+        .map_err(|_| ProxyError::Config("network.dns_overrides runtime lock is poisoned".to_string()))?;
     *guard = parsed;
     Ok(())
 }
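Both sides of the `install_entries` hunk map a poisoned-lock error to a config error instead of unwrapping, so a panic on another thread cannot cascade through the DNS-override store. A minimal sketch of the same pattern with a plain std `RwLock` (the error type is simplified to `String` here; the real code uses `ProxyError::Config`):

```rust
use std::sync::RwLock;

// Replace the stored snapshot, converting a poisoned lock into a plain
// error value rather than panicking.
fn install(store: &RwLock<Vec<String>>, entries: Vec<String>) -> Result<(), String> {
    let mut guard = store
        .write()
        .map_err(|_| "network.dns_overrides runtime lock is poisoned".to_string())?;
    *guard = entries;
    Ok(())
}

fn main() {
    let store = RwLock::new(Vec::new());
    install(&store, vec!["example.com=10.0.0.1".to_string()]).unwrap();
    assert_eq!(store.read().unwrap().len(), 1);
    println!("ok");
}
```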
@ -1,5 +1,4 @@
|
||||||
#![allow(dead_code)]
|
#![allow(dead_code)]
|
||||||
#![allow(clippy::items_after_test_module)]
|
|
||||||
|
|
||||||
use std::collections::HashMap;
|
use std::collections::HashMap;
|
||||||
use std::net::{IpAddr, Ipv4Addr, Ipv6Addr, SocketAddr, UdpSocket};
|
use std::net::{IpAddr, Ipv4Addr, Ipv6Addr, SocketAddr, UdpSocket};
|
||||||
|
|
@ -11,9 +10,7 @@ use tracing::{debug, info, warn};
|
||||||
|
|
||||||
use crate::config::{NetworkConfig, UpstreamConfig, UpstreamType};
|
use crate::config::{NetworkConfig, UpstreamConfig, UpstreamType};
|
||||||
use crate::error::Result;
|
use crate::error::Result;
|
||||||
use crate::network::stun::{
|
use crate::network::stun::{stun_probe_family_with_bind, DualStunResult, IpFamily, StunProbeResult};
|
||||||
DualStunResult, IpFamily, StunProbeResult, stun_probe_family_with_bind,
|
|
||||||
};
|
|
||||||
use crate::transport::UpstreamManager;
|
use crate::transport::UpstreamManager;
|
||||||
|
|
||||||
#[derive(Debug, Clone, Default)]
|
#[derive(Debug, Clone, Default)]
|
||||||
|
|
@ -81,7 +78,12 @@ pub async fn run_probe(
|
||||||
warn!("STUN probe is enabled but network.stun_servers is empty");
|
warn!("STUN probe is enabled but network.stun_servers is empty");
|
||||||
DualStunResult::default()
|
DualStunResult::default()
|
||||||
} else {
|
} else {
|
||||||
probe_stun_servers_parallel(&servers, stun_nat_probe_concurrency.max(1), None, None)
|
probe_stun_servers_parallel(
|
||||||
|
&servers,
|
||||||
|
stun_nat_probe_concurrency.max(1),
|
||||||
|
None,
|
||||||
|
None,
|
||||||
|
)
|
||||||
.await
|
.await
|
||||||
}
|
}
|
||||||
} else if nat_probe {
|
} else if nat_probe {
|
||||||
|
|
@ -97,8 +99,7 @@ pub async fn run_probe(
|
||||||
let UpstreamType::Direct {
|
let UpstreamType::Direct {
|
||||||
interface,
|
interface,
|
||||||
bind_addresses,
|
bind_addresses,
|
||||||
} = &upstream.upstream_type
|
} = &upstream.upstream_type else {
|
||||||
else {
|
|
||||||
continue;
|
continue;
|
||||||
};
|
};
|
||||||
if let Some(addrs) = bind_addresses.as_ref().filter(|v| !v.is_empty()) {
|
if let Some(addrs) = bind_addresses.as_ref().filter(|v| !v.is_empty()) {
|
||||||
|
|
@ -198,11 +199,12 @@ pub async fn run_probe(
|
||||||
if nat_probe
|
if nat_probe
|
||||||
&& probe.reflected_ipv4.is_none()
|
&& probe.reflected_ipv4.is_none()
|
||||||
&& probe.detected_ipv4.map(is_bogon_v4).unwrap_or(false)
|
&& probe.detected_ipv4.map(is_bogon_v4).unwrap_or(false)
|
||||||
&& let Some(public_ip) = detect_public_ipv4_http(&config.http_ip_detect_urls).await
|
|
||||||
{
|
{
|
||||||
|
if let Some(public_ip) = detect_public_ipv4_http(&config.http_ip_detect_urls).await {
|
||||||
probe.reflected_ipv4 = Some(SocketAddr::new(IpAddr::V4(public_ip), 0));
|
probe.reflected_ipv4 = Some(SocketAddr::new(IpAddr::V4(public_ip), 0));
|
||||||
info!(public_ip = %public_ip, "STUN unavailable, using HTTP public IPv4 fallback");
|
info!(public_ip = %public_ip, "STUN unavailable, using HTTP public IPv4 fallback");
|
||||||
}
|
}
|
||||||
|
}
|
||||||
|
|
||||||
probe.ipv4_nat_detected = match (probe.detected_ipv4, probe.reflected_ipv4) {
|
probe.ipv4_nat_detected = match (probe.detected_ipv4, probe.reflected_ipv4) {
|
||||||
(Some(det), Some(reflected)) => det != reflected.ip(),
|
(Some(det), Some(reflected)) => det != reflected.ip(),
|
||||||
|
|
@ -215,20 +217,12 @@ pub async fn run_probe(
|
||||||
|
|
||||||
probe.ipv4_usable = config.ipv4
|
probe.ipv4_usable = config.ipv4
|
||||||
&& probe.detected_ipv4.is_some()
|
&& probe.detected_ipv4.is_some()
|
||||||
&& (!probe.ipv4_is_bogon
|
&& (!probe.ipv4_is_bogon || probe.reflected_ipv4.map(|r| !is_bogon(r.ip())).unwrap_or(false));
|
||||||
|| probe
|
|
||||||
.reflected_ipv4
|
|
||||||
.map(|r| !is_bogon(r.ip()))
|
|
||||||
.unwrap_or(false));
|
|
||||||
|
|
||||||
let ipv6_enabled = config.ipv6.unwrap_or(probe.detected_ipv6.is_some());
|
let ipv6_enabled = config.ipv6.unwrap_or(probe.detected_ipv6.is_some());
|
||||||
probe.ipv6_usable = ipv6_enabled
|
probe.ipv6_usable = ipv6_enabled
|
||||||
&& probe.detected_ipv6.is_some()
|
&& probe.detected_ipv6.is_some()
|
||||||
&& (!probe.ipv6_is_bogon
|
&& (!probe.ipv6_is_bogon || probe.reflected_ipv6.map(|r| !is_bogon(r.ip())).unwrap_or(false));
|
||||||
|| probe
|
|
||||||
.reflected_ipv6
|
|
||||||
.map(|r| !is_bogon(r.ip()))
|
|
||||||
.unwrap_or(false));
|
|
||||||
|
|
||||||
Ok(probe)
|
Ok(probe)
|
||||||
}
|
}
|
||||||
|
|
@ -286,6 +280,8 @@ async fn probe_stun_servers_parallel(
|
||||||
while next_idx < servers.len() && join_set.len() < concurrency {
|
while next_idx < servers.len() && join_set.len() < concurrency {
|
||||||
let stun_addr = servers[next_idx].clone();
|
let stun_addr = servers[next_idx].clone();
|
||||||
next_idx += 1;
|
next_idx += 1;
|
||||||
|
let bind_v4 = bind_v4;
|
||||||
|
let bind_v6 = bind_v6;
|
||||||
join_set.spawn(async move {
|
join_set.spawn(async move {
|
||||||
let res = timeout(STUN_BATCH_TIMEOUT, async {
|
let res = timeout(STUN_BATCH_TIMEOUT, async {
|
||||||
let v4 = stun_probe_family_with_bind(&stun_addr, IpFamily::V4, bind_v4).await?;
|
let v4 = stun_probe_family_with_bind(&stun_addr, IpFamily::V4, bind_v4).await?;
|
||||||
|
|
@ -304,15 +300,11 @@ async fn probe_stun_servers_parallel(
|
||||||
match task {
|
match task {
|
||||||
Ok((stun_addr, Ok(Ok(result)))) => {
|
Ok((stun_addr, Ok(Ok(result)))) => {
|
||||||
if let Some(v4) = result.v4 {
|
if let Some(v4) = result.v4 {
|
||||||
let entry = best_v4_by_ip
|
let entry = best_v4_by_ip.entry(v4.reflected_addr.ip()).or_insert((0, v4));
|
||||||
.entry(v4.reflected_addr.ip())
|
|
||||||
.or_insert((0, v4));
|
|
||||||
entry.0 += 1;
|
entry.0 += 1;
|
||||||
}
|
}
|
||||||
if let Some(v6) = result.v6 {
|
if let Some(v6) = result.v6 {
|
||||||
let entry = best_v6_by_ip
|
let entry = best_v6_by_ip.entry(v6.reflected_addr.ip()).or_insert((0, v6));
|
||||||
.entry(v6.reflected_addr.ip())
|
|
||||||
.or_insert((0, v6));
|
|
||||||
entry.0 += 1;
|
entry.0 += 1;
|
||||||
}
|
}
|
||||||
if result.v4.is_some() || result.v6.is_some() {
|
if result.v4.is_some() || result.v6.is_some() {
|
||||||
|
|
@ -332,11 +324,17 @@ async fn probe_stun_servers_parallel(
|
||||||
}
|
}
|
||||||
|
|
||||||
let mut out = DualStunResult::default();
|
let mut out = DualStunResult::default();
|
||||||
if let Some((_, best)) = best_v4_by_ip.into_values().max_by_key(|(count, _)| *count) {
|
if let Some((_, best)) = best_v4_by_ip
|
||||||
|
.into_values()
|
||||||
|
.max_by_key(|(count, _)| *count)
|
||||||
|
{
|
||||||
info!("STUN-Quorum reached, IP: {}", best.reflected_addr.ip());
|
info!("STUN-Quorum reached, IP: {}", best.reflected_addr.ip());
|
||||||
out.v4 = Some(best);
|
out.v4 = Some(best);
|
||||||
}
|
}
|
||||||
if let Some((_, best)) = best_v6_by_ip.into_values().max_by_key(|(count, _)| *count) {
|
if let Some((_, best)) = best_v6_by_ip
|
||||||
|
.into_values()
|
||||||
|
.max_by_key(|(count, _)| *count)
|
||||||
|
{
|
||||||
info!("STUN-Quorum reached, IP: {}", best.reflected_addr.ip());
|
info!("STUN-Quorum reached, IP: {}", best.reflected_addr.ip());
|
||||||
out.v6 = Some(best);
|
out.v6 = Some(best);
|
||||||
}
|
}
|
||||||
|
|
@@ -349,8 +347,7 @@ pub fn decide_network_capabilities(
     middle_proxy_nat_ip: Option<IpAddr>,
 ) -> NetworkDecision {
     let ipv4_dc = config.ipv4 && probe.detected_ipv4.is_some();
-    let ipv6_dc =
-        config.ipv6.unwrap_or(probe.detected_ipv6.is_some()) && probe.detected_ipv6.is_some();
+    let ipv6_dc = config.ipv6.unwrap_or(probe.detected_ipv6.is_some()) && probe.detected_ipv6.is_some();
     let nat_ip_v4 = matches!(middle_proxy_nat_ip, Some(IpAddr::V4(_)));
     let nat_ip_v6 = matches!(middle_proxy_nat_ip, Some(IpAddr::V6(_)));
 
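A hypothetical reading of the `ipv6_dc` expression above: `config.ipv6` is a tri-state `Option<bool>`, so an explicit setting wins and autodetection decides when it is unset, but either way IPv6 is only usable if it was actually detected.

```rust
// Tri-state config: Some(true)/Some(false) is an explicit choice,
// None falls back to whatever was detected. Detection is always required.
pub fn ipv6_enabled(config_ipv6: Option<bool>, detected: bool) -> bool {
    config_ipv6.unwrap_or(detected) && detected
}
```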
@@ -537,26 +534,10 @@ pub fn is_bogon_v6(ip: Ipv6Addr) -> bool {
 
 pub fn log_probe_result(probe: &NetworkProbe, decision: &NetworkDecision) {
     info!(
-        ipv4 = probe
-            .detected_ipv4
-            .as_ref()
-            .map(|v| v.to_string())
-            .unwrap_or_else(|| "-".into()),
-        ipv6 = probe
-            .detected_ipv6
-            .as_ref()
-            .map(|v| v.to_string())
-            .unwrap_or_else(|| "-".into()),
-        reflected_v4 = probe
-            .reflected_ipv4
-            .as_ref()
-            .map(|v| v.ip().to_string())
-            .unwrap_or_else(|| "-".into()),
-        reflected_v6 = probe
-            .reflected_ipv6
-            .as_ref()
-            .map(|v| v.ip().to_string())
-            .unwrap_or_else(|| "-".into()),
+        ipv4 = probe.detected_ipv4.as_ref().map(|v| v.to_string()).unwrap_or_else(|| "-".into()),
+        ipv6 = probe.detected_ipv6.as_ref().map(|v| v.to_string()).unwrap_or_else(|| "-".into()),
+        reflected_v4 = probe.reflected_ipv4.as_ref().map(|v| v.ip().to_string()).unwrap_or_else(|| "-".into()),
+        reflected_v6 = probe.reflected_ipv6.as_ref().map(|v| v.ip().to_string()).unwrap_or_else(|| "-".into()),
         ipv4_bogon = probe.ipv4_is_bogon,
         ipv6_bogon = probe.ipv6_is_bogon,
         ipv4_me = decision.ipv4_me,
@@ -2,20 +2,13 @@
 #![allow(dead_code)]
 
 use std::net::{IpAddr, Ipv4Addr, Ipv6Addr, SocketAddr};
-use std::sync::OnceLock;
 
-use tokio::net::{UdpSocket, lookup_host};
-use tokio::time::{Duration, sleep, timeout};
+use tokio::net::{lookup_host, UdpSocket};
+use tokio::time::{timeout, Duration, sleep};
 
-use crate::crypto::SecureRandom;
 use crate::error::{ProxyError, Result};
 use crate::network::dns_overrides::{resolve, split_host_port};
 
-fn stun_rng() -> &'static SecureRandom {
-    static STUN_RNG: OnceLock<SecureRandom> = OnceLock::new();
-    STUN_RNG.get_or_init(SecureRandom::new)
-}
-
 #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
 pub enum IpFamily {
     V4,
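The `stun_rng` helper removed on the `-` side uses the standard `OnceLock` pattern: a static cell initialized exactly once, handing out a `'static` reference on every later call. A minimal sketch with a plain value instead of the crate's `SecureRandom`:

```rust
use std::sync::OnceLock;

// First caller runs the closure; every caller gets the same &'static value.
pub fn shared_value() -> &'static u64 {
    static CELL: OnceLock<u64> = OnceLock::new();
    CELL.get_or_init(|| 42)
}
```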
@@ -41,13 +34,13 @@ pub async fn stun_probe_dual(stun_addr: &str) -> Result<DualStunResult> {
         stun_probe_family(stun_addr, IpFamily::V6),
     );
 
-    Ok(DualStunResult { v4: v4?, v6: v6? })
+    Ok(DualStunResult {
+        v4: v4?,
+        v6: v6?,
+    })
 }
 
-pub async fn stun_probe_family(
-    stun_addr: &str,
-    family: IpFamily,
-) -> Result<Option<StunProbeResult>> {
+pub async fn stun_probe_family(stun_addr: &str, family: IpFamily) -> Result<Option<StunProbeResult>> {
     stun_probe_family_with_bind(stun_addr, family, None).await
 }
 
@@ -56,6 +49,8 @@ pub async fn stun_probe_family_with_bind(
     family: IpFamily,
     bind_ip: Option<IpAddr>,
 ) -> Result<Option<StunProbeResult>> {
+    use rand::RngCore;
+
     let bind_addr = match (family, bind_ip) {
         (IpFamily::V4, Some(IpAddr::V4(ip))) => SocketAddr::new(IpAddr::V4(ip), 0),
         (IpFamily::V6, Some(IpAddr::V6(ip))) => SocketAddr::new(IpAddr::V6(ip), 0),
@@ -76,18 +71,13 @@ pub async fn stun_probe_family_with_bind(
     if let Some(addr) = target_addr {
         match socket.connect(addr).await {
             Ok(()) => {}
-            Err(e)
-                if family == IpFamily::V6
-                    && matches!(
+            Err(e) if family == IpFamily::V6 && matches!(
                 e.kind(),
                 std::io::ErrorKind::NetworkUnreachable
                     | std::io::ErrorKind::HostUnreachable
                     | std::io::ErrorKind::Unsupported
                     | std::io::ErrorKind::NetworkDown
-                    ) =>
-            {
-                return Ok(None);
-            }
+            ) => return Ok(None),
             Err(e) => return Err(ProxyError::Proxy(format!("STUN connect failed: {e}"))),
         }
     } else {
@@ -98,7 +88,7 @@ pub async fn stun_probe_family_with_bind(
     req[0..2].copy_from_slice(&0x0001u16.to_be_bytes()); // Binding Request
     req[2..4].copy_from_slice(&0u16.to_be_bytes()); // length
     req[4..8].copy_from_slice(&0x2112A442u32.to_be_bytes()); // magic cookie
-    stun_rng().fill(&mut req[8..20]); // transaction ID
+    rand::rng().fill_bytes(&mut req[8..20]); // transaction ID
 
     let mut buf = [0u8; 256];
     let mut attempt = 0;
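A standalone sketch of the 20-byte STUN Binding Request header the hunk above fills in (RFC 5389): 2-byte message type, 2-byte attribute length, the fixed 4-byte magic cookie, and a 12-byte transaction ID. Only the transaction ID needs randomness, which is what both sides of the diff disagree about.

```rust
// Build a STUN Binding Request with no attributes; txn_id should be random
// in real use, it is a parameter here so the layout stays deterministic.
pub fn build_binding_request(txn_id: [u8; 12]) -> [u8; 20] {
    let mut req = [0u8; 20];
    req[0..2].copy_from_slice(&0x0001u16.to_be_bytes()); // message type: Binding Request
    req[2..4].copy_from_slice(&0u16.to_be_bytes()); // message length: 0 (no attributes)
    req[4..8].copy_from_slice(&0x2112A442u32.to_be_bytes()); // magic cookie
    req[8..20].copy_from_slice(&txn_id); // transaction ID
    req
}
```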
@@ -210,6 +200,7 @@ pub async fn stun_probe_family_with_bind(
 
             idx += (alen + 3) & !3;
         }
 
     }
+
     Ok(None)
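The `idx += (alen + 3) & !3;` step above advances past each STUN attribute rounded up to the next 4-byte boundary, since STUN attribute values are 32-bit aligned on the wire (RFC 5389 §15). Isolated:

```rust
// Round an attribute length up to the next multiple of 4.
pub fn padded_len(alen: usize) -> usize {
    (alen + 3) & !3
}
```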
@@ -237,11 +228,7 @@ async fn resolve_stun_addr(stun_addr: &str, family: IpFamily) -> Result<Option<S
         .await
         .map_err(|e| ProxyError::Proxy(format!("STUN resolve failed: {e}")))?;
 
-    let target = addrs.find(|a| {
-        matches!(
-            (a.is_ipv4(), family),
-            (true, IpFamily::V4) | (false, IpFamily::V6)
-        )
-    });
+    let target = addrs
+        .find(|a| matches!((a.is_ipv4(), family), (true, IpFamily::V4) | (false, IpFamily::V6)));
     Ok(target)
 }
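A self-contained sketch of the address-family filter above: keep the first resolved address whose family matches the one being probed. `Family` here is a stand-in for the crate's `IpFamily`.

```rust
use std::net::SocketAddr;

#[derive(Clone, Copy)]
pub enum Family {
    V4,
    V6,
}

// First address matching the requested family, or None.
pub fn pick_family(addrs: &[SocketAddr], family: Family) -> Option<SocketAddr> {
    addrs
        .iter()
        .copied()
        .find(|a| matches!((a.is_ipv4(), family), (true, Family::V4) | (false, Family::V6)))
}
```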
@@ -36,86 +36,32 @@ pub static TG_DATACENTERS_V6: LazyLock<Vec<IpAddr>> = LazyLock::new(|| {
 pub static TG_MIDDLE_PROXIES_V4: LazyLock<std::collections::HashMap<i32, Vec<(IpAddr, u16)>>> =
     LazyLock::new(|| {
         let mut m = std::collections::HashMap::new();
-        m.insert(
-            1,
-            vec![(IpAddr::V4(Ipv4Addr::new(149, 154, 175, 50)), 8888)],
-        );
-        m.insert(
-            -1,
-            vec![(IpAddr::V4(Ipv4Addr::new(149, 154, 175, 50)), 8888)],
-        );
-        m.insert(
-            2,
-            vec![(IpAddr::V4(Ipv4Addr::new(149, 154, 161, 144)), 8888)],
-        );
-        m.insert(
-            -2,
-            vec![(IpAddr::V4(Ipv4Addr::new(149, 154, 161, 144)), 8888)],
-        );
-        m.insert(
-            3,
-            vec![(IpAddr::V4(Ipv4Addr::new(149, 154, 175, 100)), 8888)],
-        );
-        m.insert(
-            -3,
-            vec![(IpAddr::V4(Ipv4Addr::new(149, 154, 175, 100)), 8888)],
-        );
+        m.insert(1, vec![(IpAddr::V4(Ipv4Addr::new(149, 154, 175, 50)), 8888)]);
+        m.insert(-1, vec![(IpAddr::V4(Ipv4Addr::new(149, 154, 175, 50)), 8888)]);
+        m.insert(2, vec![(IpAddr::V4(Ipv4Addr::new(149, 154, 161, 144)), 8888)]);
+        m.insert(-2, vec![(IpAddr::V4(Ipv4Addr::new(149, 154, 161, 144)), 8888)]);
+        m.insert(3, vec![(IpAddr::V4(Ipv4Addr::new(149, 154, 175, 100)), 8888)]);
+        m.insert(-3, vec![(IpAddr::V4(Ipv4Addr::new(149, 154, 175, 100)), 8888)]);
         m.insert(4, vec![(IpAddr::V4(Ipv4Addr::new(91, 108, 4, 136)), 8888)]);
-        m.insert(
-            -4,
-            vec![(IpAddr::V4(Ipv4Addr::new(149, 154, 165, 109)), 8888)],
-        );
+        m.insert(-4, vec![(IpAddr::V4(Ipv4Addr::new(149, 154, 165, 109)), 8888)]);
         m.insert(5, vec![(IpAddr::V4(Ipv4Addr::new(91, 108, 56, 183)), 8888)]);
-        m.insert(
-            -5,
-            vec![(IpAddr::V4(Ipv4Addr::new(91, 108, 56, 183)), 8888)],
-        );
+        m.insert(-5, vec![(IpAddr::V4(Ipv4Addr::new(91, 108, 56, 183)), 8888)]);
         m
     });
 
 pub static TG_MIDDLE_PROXIES_V6: LazyLock<std::collections::HashMap<i32, Vec<(IpAddr, u16)>>> =
     LazyLock::new(|| {
         let mut m = std::collections::HashMap::new();
-        m.insert(
-            1,
-            vec![(IpAddr::V6("2001:b28:f23d:f001::d".parse().unwrap()), 8888)],
-        );
-        m.insert(
-            -1,
-            vec![(IpAddr::V6("2001:b28:f23d:f001::d".parse().unwrap()), 8888)],
-        );
-        m.insert(
-            2,
-            vec![(IpAddr::V6("2001:67c:04e8:f002::d".parse().unwrap()), 80)],
-        );
-        m.insert(
-            -2,
-            vec![(IpAddr::V6("2001:67c:04e8:f002::d".parse().unwrap()), 80)],
-        );
-        m.insert(
-            3,
-            vec![(IpAddr::V6("2001:b28:f23d:f003::d".parse().unwrap()), 8888)],
-        );
-        m.insert(
-            -3,
-            vec![(IpAddr::V6("2001:b28:f23d:f003::d".parse().unwrap()), 8888)],
-        );
-        m.insert(
-            4,
-            vec![(IpAddr::V6("2001:67c:04e8:f004::d".parse().unwrap()), 8888)],
-        );
-        m.insert(
-            -4,
-            vec![(IpAddr::V6("2001:67c:04e8:f004::d".parse().unwrap()), 8888)],
-        );
-        m.insert(
-            5,
-            vec![(IpAddr::V6("2001:b28:f23f:f005::d".parse().unwrap()), 8888)],
-        );
-        m.insert(
-            -5,
-            vec![(IpAddr::V6("2001:b28:f23f:f005::d".parse().unwrap()), 8888)],
-        );
+        m.insert(1, vec![(IpAddr::V6("2001:b28:f23d:f001::d".parse().unwrap()), 8888)]);
+        m.insert(-1, vec![(IpAddr::V6("2001:b28:f23d:f001::d".parse().unwrap()), 8888)]);
+        m.insert(2, vec![(IpAddr::V6("2001:67c:04e8:f002::d".parse().unwrap()), 80)]);
+        m.insert(-2, vec![(IpAddr::V6("2001:67c:04e8:f002::d".parse().unwrap()), 80)]);
+        m.insert(3, vec![(IpAddr::V6("2001:b28:f23d:f003::d".parse().unwrap()), 8888)]);
+        m.insert(-3, vec![(IpAddr::V6("2001:b28:f23d:f003::d".parse().unwrap()), 8888)]);
+        m.insert(4, vec![(IpAddr::V6("2001:67c:04e8:f004::d".parse().unwrap()), 8888)]);
+        m.insert(-4, vec![(IpAddr::V6("2001:67c:04e8:f004::d".parse().unwrap()), 8888)]);
+        m.insert(5, vec![(IpAddr::V6("2001:b28:f23f:f005::d".parse().unwrap()), 8888)]);
+        m.insert(-5, vec![(IpAddr::V6("2001:b28:f23f:f005::d".parse().unwrap()), 8888)]);
         m
     });
 
@@ -206,29 +152,11 @@ pub const TLS_RECORD_CHANGE_CIPHER: u8 = 0x14;
 pub const TLS_RECORD_APPLICATION: u8 = 0x17;
 /// TLS record type: Alert
 pub const TLS_RECORD_ALERT: u8 = 0x15;
-/// Maximum TLS plaintext record payload size.
-/// RFC 8446 §5.1: "The length MUST NOT exceed 2^14 bytes."
-/// Use this for validating incoming unencrypted records
-/// (ClientHello, ChangeCipherSpec, unprotected Handshake messages).
-pub const MAX_TLS_PLAINTEXT_SIZE: usize = 16_384;
-
-/// Structural minimum for a valid TLS 1.3 ClientHello with SNI.
-/// Derived from RFC 8446 §4.1.2 field layout + Appendix D.4 compat mode.
-/// Deliberately conservative (below any real client) to avoid false
-/// positives on legitimate connections with compact extension sets.
-pub const MIN_TLS_CLIENT_HELLO_SIZE: usize = 100;
-
-/// Maximum TLS ciphertext record payload size.
-/// RFC 8446 §5.2: "The length MUST NOT exceed 2^14 + 256 bytes."
-/// The +256 accounts for maximum AEAD expansion overhead.
-/// Use this for validating or sizing buffers for encrypted records.
-pub const MAX_TLS_CIPHERTEXT_SIZE: usize = 16_384 + 256;
-
-#[deprecated(note = "use MAX_TLS_PLAINTEXT_SIZE")]
-pub const MAX_TLS_RECORD_SIZE: usize = MAX_TLS_PLAINTEXT_SIZE;
-
-#[deprecated(note = "use MAX_TLS_CIPHERTEXT_SIZE")]
-pub const MAX_TLS_CHUNK_SIZE: usize = MAX_TLS_CIPHERTEXT_SIZE;
+/// Maximum TLS record size
+pub const MAX_TLS_RECORD_SIZE: usize = 16384;
+/// Maximum TLS chunk size (with overhead)
+/// RFC 8446 §5.2 allows up to 16384 + 256 bytes of ciphertext
+pub const MAX_TLS_CHUNK_SIZE: usize = 16384 + 256;
 
 /// Secure Intermediate payload is expected to be 4-byte aligned.
 pub fn is_valid_secure_payload_len(data_len: usize) -> bool {
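The two limits in this hunk come straight from RFC 8446: §5.1 caps a plaintext record at 2^14 bytes, and §5.2 allows up to 2^14 + 256 bytes of ciphertext because of AEAD expansion. A sketch (not the crate's validator) of how the two caps apply to different record kinds:

```rust
pub const MAX_TLS_RECORD_SIZE: usize = 16384; // plaintext cap, RFC 8446 §5.1
pub const MAX_TLS_CHUNK_SIZE: usize = 16384 + 256; // ciphertext cap, RFC 8446 §5.2

// Unprotected records (e.g. ClientHello) use the plaintext cap.
pub fn plaintext_len_ok(len: usize) -> bool {
    len <= MAX_TLS_RECORD_SIZE
}

// Protected records may carry AEAD overhead, so they get the larger cap.
pub fn ciphertext_len_ok(len: usize) -> bool {
    len <= MAX_TLS_CHUNK_SIZE
}
```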
@@ -276,7 +204,9 @@ pub const SMALL_BUFFER_SIZE: usize = 8192;
 // ============= Statistics =============
 
 /// Duration buckets for histogram metrics
-pub static DURATION_BUCKETS: &[f64] = &[0.1, 0.5, 1.0, 2.0, 5.0, 15.0, 60.0, 300.0, 600.0, 1800.0];
+pub static DURATION_BUCKETS: &[f64] = &[
+    0.1, 0.5, 1.0, 2.0, 5.0, 15.0, 60.0, 300.0, 600.0, 1800.0,
+];
 
 // ============= Reserved Nonce Patterns =============
 
@@ -294,7 +224,9 @@ pub static RESERVED_NONCE_BEGINNINGS: &[[u8; 4]] = &[
 ];
 
 /// Reserved continuation bytes (bytes 4-7)
-pub static RESERVED_NONCE_CONTINUES: &[[u8; 4]] = &[[0x00, 0x00, 0x00, 0x00]];
+pub static RESERVED_NONCE_CONTINUES: &[[u8; 4]] = &[
+    [0x00, 0x00, 0x00, 0x00],
+];
 
 // ============= RPC Constants (for Middle Proxy) =============
 
@@ -335,10 +267,11 @@ pub mod rpc_flags {
     pub const FLAG_QUICKACK: u32 = 0x80000000;
 }
 
-// ============= Middle-End Proxy Servers =============
-pub const ME_PROXY_PORT: u16 = 8888;
-
-pub static TG_MIDDLE_PROXIES_FLAT_V4: LazyLock<Vec<(IpAddr, u16)>> = LazyLock::new(|| {
+// ============= Middle-End Proxy Servers =============
+
+pub const ME_PROXY_PORT: u16 = 8888;
+
+pub static TG_MIDDLE_PROXIES_FLAT_V4: LazyLock<Vec<(IpAddr, u16)>> = LazyLock::new(|| {
     vec![
         (IpAddr::V4(Ipv4Addr::new(149, 154, 175, 50)), 8888),
         (IpAddr::V4(Ipv4Addr::new(149, 154, 161, 144)), 8888),
@@ -346,29 +279,29 @@ pub static TG_MIDDLE_PROXIES_FLAT_V4: LazyLock<Vec<(IpAddr, u16)>> = LazyLock::n
         (IpAddr::V4(Ipv4Addr::new(91, 108, 4, 136)), 8888),
         (IpAddr::V4(Ipv4Addr::new(91, 108, 56, 183)), 8888),
     ]
 });
 
 // ============= RPC Constants (u32 native endian) =============
 // From mtproto-common.h + net-tcp-rpc-common.h + mtproto-proxy.c
 
 pub const RPC_NONCE_U32: u32 = 0x7acb87aa;
 pub const RPC_HANDSHAKE_U32: u32 = 0x7682eef5;
 pub const RPC_HANDSHAKE_ERROR_U32: u32 = 0x6a27beda;
 pub const TL_PROXY_TAG_U32: u32 = 0xdb1e26ae; // mtproto-proxy.c:121
 
 // mtproto-common.h
 pub const RPC_PROXY_REQ_U32: u32 = 0x36cef1ee;
 pub const RPC_PROXY_ANS_U32: u32 = 0x4403da0d;
 pub const RPC_CLOSE_CONN_U32: u32 = 0x1fcf425d;
 pub const RPC_CLOSE_EXT_U32: u32 = 0x5eb634a2;
 pub const RPC_SIMPLE_ACK_U32: u32 = 0x3bac409b;
 pub const RPC_PING_U32: u32 = 0x5730a2df;
 pub const RPC_PONG_U32: u32 = 0x8430eaa7;
 
 pub const RPC_CRYPTO_NONE_U32: u32 = 0;
 pub const RPC_CRYPTO_AES_U32: u32 = 1;
 
 pub mod proxy_flags {
     pub const FLAG_HAS_AD_TAG: u32 = 1;
     pub const FLAG_NOT_ENCRYPTED: u32 = 0x2;
     pub const FLAG_HAS_AD_TAG2: u32 = 0x8;
@@ -378,20 +311,16 @@ pub mod proxy_flags {
     pub const FLAG_INTERMEDIATE: u32 = 0x20000000;
     pub const FLAG_ABRIDGED: u32 = 0x40000000;
     pub const FLAG_QUICKACK: u32 = 0x80000000;
 }
 
 pub mod rpc_crypto_flags {
     pub const USE_CRC32C: u32 = 0x800;
 }
 
 pub const ME_CONNECT_TIMEOUT_SECS: u64 = 5;
 pub const ME_HANDSHAKE_TIMEOUT_SECS: u64 = 10;
 
 #[cfg(test)]
-#[path = "tests/tls_size_constants_security_tests.rs"]
-mod tls_size_constants_security_tests;
-
-#[cfg(test)]
 mod tests {
     use super::*;
 
@@ -83,7 +83,7 @@ impl FrameMode {
 
 /// Validate message length for MTProto
 pub fn validate_message_length(len: usize) -> bool {
-    use super::constants::{MAX_MSG_LEN, MIN_MSG_LEN, PADDING_FILLER};
+    use super::constants::{MIN_MSG_LEN, MAX_MSG_LEN, PADDING_FILLER};
 
     (MIN_MSG_LEN..=MAX_MSG_LEN).contains(&len) && len.is_multiple_of(PADDING_FILLER.len())
 }
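A generic sketch of the `validate_message_length` check above, with the crate's `MIN_MSG_LEN`, `MAX_MSG_LEN`, and `PADDING_FILLER` constants turned into parameters (their actual values are not visible in this hunk), and the alignment test written with a plain modulo:

```rust
// A length is valid if it lies within [min, max] and is a multiple of the
// padding unit, exactly the shape of the contains(..) && is_multiple_of(..)
// expression in the diff.
pub fn length_ok(len: usize, min: usize, max: usize, align: usize) -> bool {
    (min..=max).contains(&len) && len % align == 0
}
```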
@@ -2,9 +2,9 @@
 
 #![allow(dead_code)]
 
-use super::constants::*;
-use crate::crypto::{AesCtr, sha256};
 use zeroize::Zeroize;
+use crate::crypto::{sha256, AesCtr};
+use super::constants::*;
 
 /// Obfuscation parameters from handshake
 ///
@@ -69,8 +69,9 @@ impl ObfuscationParams {
                 None => continue,
             };
 
-            let dc_idx =
-                i16::from_le_bytes(decrypted[DC_IDX_POS..DC_IDX_POS + 2].try_into().unwrap());
+            let dc_idx = i16::from_le_bytes(
+                decrypted[DC_IDX_POS..DC_IDX_POS + 2].try_into().unwrap()
+            );
 
             let mut enc_key_input = Vec::with_capacity(PREKEY_LEN + secret.len());
             enc_key_input.extend_from_slice(enc_prekey);
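The `dc_idx` read above, isolated: two little-endian bytes at a fixed offset decode to a signed 16-bit datacenter index. Note the index is signed, which is why the middle-proxy maps earlier in this diff carry negative keys alongside the positive ones.

```rust
// Decode the i16 DC index stored little-endian at `pos` in the decrypted
// handshake buffer. Panics if the slice is shorter than pos + 2, as the
// original `try_into().unwrap()` does.
pub fn read_dc_idx(decrypted: &[u8], pos: usize) -> i16 {
    i16::from_le_bytes(decrypted[pos..pos + 2].try_into().unwrap())
}
```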
@@ -1,358 +0,0 @@
-use super::*;
-use crate::crypto::sha256_hmac;
-use std::time::Instant;
-
-/// Helper to create a byte vector of specific length.
-fn make_garbage(len: usize) -> Vec<u8> {
-    vec![0x42u8; len]
-}
-
-/// Helper to create a valid-looking HMAC digest for test.
-fn make_digest(secret: &[u8], msg: &[u8], ts: u32) -> [u8; 32] {
-    let mut hmac = sha256_hmac(secret, msg);
-    let ts_bytes = ts.to_le_bytes();
-    for i in 0..4 {
-        hmac[28 + i] ^= ts_bytes[i];
-    }
-    hmac
-}
-
-fn make_valid_tls_handshake_with_session_id(
-    secret: &[u8],
-    timestamp: u32,
-    session_id: &[u8],
-) -> Vec<u8> {
-    let session_id_len = session_id.len();
-    let len = TLS_DIGEST_POS + TLS_DIGEST_LEN + 1 + session_id_len;
-    let mut handshake = vec![0x42u8; len];
-
-    handshake[TLS_DIGEST_POS + TLS_DIGEST_LEN] = session_id_len as u8;
-    let sid_start = TLS_DIGEST_POS + TLS_DIGEST_LEN + 1;
-    handshake[sid_start..sid_start + session_id_len].copy_from_slice(session_id);
-    handshake[TLS_DIGEST_POS..TLS_DIGEST_POS + TLS_DIGEST_LEN].fill(0);
-
-    let digest = make_digest(secret, &handshake, timestamp);
-
-    handshake[TLS_DIGEST_POS..TLS_DIGEST_POS + TLS_DIGEST_LEN].copy_from_slice(&digest);
-    handshake
-}
-
-fn make_valid_tls_handshake(secret: &[u8], timestamp: u32) -> Vec<u8> {
-    make_valid_tls_handshake_with_session_id(secret, timestamp, &[0x42; 32])
-}
-
-// ------------------------------------------------------------------
-// Truncated Packet Tests (OWASP ASVS 5.1.4, 5.1.5)
-// ------------------------------------------------------------------
-
-#[test]
-fn validate_tls_handshake_truncated_10_bytes_rejected() {
-    let secrets = vec![("user".to_string(), b"secret".to_vec())];
-    let truncated = make_garbage(10);
-    assert!(validate_tls_handshake(&truncated, &secrets, true).is_none());
-}
-
-#[test]
-fn validate_tls_handshake_truncated_at_digest_start_rejected() {
-    let secrets = vec![("user".to_string(), b"secret".to_vec())];
-    // TLS_DIGEST_POS = 11. 11 bytes should be rejected.
-    let truncated = make_garbage(TLS_DIGEST_POS);
-    assert!(validate_tls_handshake(&truncated, &secrets, true).is_none());
-}
-
-#[test]
-fn validate_tls_handshake_truncated_inside_digest_rejected() {
-    let secrets = vec![("user".to_string(), b"secret".to_vec())];
-    // TLS_DIGEST_POS + 16 (half digest)
-    let truncated = make_garbage(TLS_DIGEST_POS + 16);
-    assert!(validate_tls_handshake(&truncated, &secrets, true).is_none());
-}
-
-#[test]
-fn extract_sni_truncated_at_record_header_rejected() {
-    let truncated = make_garbage(3);
-    assert!(extract_sni_from_client_hello(&truncated).is_none());
-}
-
-#[test]
-fn extract_sni_truncated_at_handshake_header_rejected() {
-    let mut truncated = vec![TLS_RECORD_HANDSHAKE, 0x03, 0x03, 0x00, 0x05];
-    truncated.extend_from_slice(&[0x01, 0x00]); // ClientHello type but truncated length
-    assert!(extract_sni_from_client_hello(&truncated).is_none());
-}
-
-// ------------------------------------------------------------------
-// Malformed Extension Parsing Tests
-// ------------------------------------------------------------------
-
-#[test]
-fn extract_sni_with_overlapping_extension_lengths_rejected() {
-    let mut h = vec![0x16, 0x03, 0x03, 0x00, 0x60]; // Record header
-    h.push(0x01); // Handshake type: ClientHello
-    h.extend_from_slice(&[0x00, 0x00, 0x5C]); // Length: 92
-    h.extend_from_slice(&[0x03, 0x03]); // Version
-    h.extend_from_slice(&[0u8; 32]); // Random
-    h.push(0); // Session ID length: 0
-    h.extend_from_slice(&[0x00, 0x02, 0x13, 0x01]); // Cipher suites
-    h.extend_from_slice(&[0x01, 0x00]); // Compression
-
-    // Extensions start
-    h.extend_from_slice(&[0x00, 0x20]); // Total Extensions length: 32
-
-    // Extension 1: SNI (type 0)
-    h.extend_from_slice(&[0x00, 0x00]);
-    h.extend_from_slice(&[0x00, 0x40]); // Claimed len: 64 (OVERFLOWS total extensions len 32)
-    h.extend_from_slice(&[0u8; 64]);
-
-    assert!(extract_sni_from_client_hello(&h).is_none());
-}
-
-#[test]
-fn extract_sni_with_infinite_loop_potential_extension_rejected() {
-    let mut h = vec![0x16, 0x03, 0x03, 0x00, 0x60]; // Record header
-    h.push(0x01); // Handshake type: ClientHello
-    h.extend_from_slice(&[0x00, 0x00, 0x5C]); // Length: 92
-    h.extend_from_slice(&[0x03, 0x03]); // Version
-    h.extend_from_slice(&[0u8; 32]); // Random
-    h.push(0); // Session ID length: 0
-    h.extend_from_slice(&[0x00, 0x02, 0x13, 0x01]); // Cipher suites
-    h.extend_from_slice(&[0x01, 0x00]); // Compression
-
-    // Extensions start
-    h.extend_from_slice(&[0x00, 0x10]); // Total Extensions length: 16
-
-    // Extension: zero length but claims more?
-    // If our parser didn't advance, it might loop.
-    // Telemt uses `pos += 4 + elen;` so it always advances.
-    h.extend_from_slice(&[0x12, 0x34]); // Unknown type
-    h.extend_from_slice(&[0x00, 0x00]); // Length 0
-
-    // Fill the rest with garbage
-    h.extend_from_slice(&[0x42; 12]);
-
-    // We expect it to finish without SNI found
-    assert!(extract_sni_from_client_hello(&h).is_none());
-}
-
-#[test]
-fn extract_sni_with_invalid_hostname_rejected() {
-    let host = b"invalid_host!%^";
-    let mut sni = Vec::new();
-    sni.extend_from_slice(&((host.len() + 3) as u16).to_be_bytes());
-    sni.push(0);
-    sni.extend_from_slice(&(host.len() as u16).to_be_bytes());
-    sni.extend_from_slice(host);
-
-    let mut h = vec![0x16, 0x03, 0x03, 0x00, 0x60]; // Record header
-    h.push(0x01); // ClientHello
-    h.extend_from_slice(&[0x00, 0x00, 0x5C]);
-    h.extend_from_slice(&[0x03, 0x03]);
-    h.extend_from_slice(&[0u8; 32]);
-    h.push(0);
-    h.extend_from_slice(&[0x00, 0x02, 0x13, 0x01]);
-    h.extend_from_slice(&[0x01, 0x00]);
-
-    let mut ext = Vec::new();
-    ext.extend_from_slice(&0x0000u16.to_be_bytes());
-    ext.extend_from_slice(&(sni.len() as u16).to_be_bytes());
-    ext.extend_from_slice(&sni);
-
-    h.extend_from_slice(&(ext.len() as u16).to_be_bytes());
-    h.extend_from_slice(&ext);
-
-    assert!(
-        extract_sni_from_client_hello(&h).is_none(),
-        "Invalid SNI hostname must be rejected"
-    );
-}
-
-// ------------------------------------------------------------------
-// Timing Neutrality Tests (OWASP ASVS 5.1.7)
-// ------------------------------------------------------------------
-
-#[test]
-fn validate_tls_handshake_timing_neutrality() {
-    let secret = b"timing_test_secret_32_bytes_long_";
-    let secrets = vec![("u".to_string(), secret.to_vec())];
-
-    let mut base = vec![0x42u8; 100];
-    base[TLS_DIGEST_POS + TLS_DIGEST_LEN] = 32;
-
-    const ITER: usize = 600;
-    const ROUNDS: usize = 7;
-
-    let mut per_round_avg_diff_ns = Vec::with_capacity(ROUNDS);
-
-    for round in 0..ROUNDS {
-        let mut success_h = base.clone();
-        let mut fail_h = base.clone();
-
-        let start_success = Instant::now();
-        for _ in 0..ITER {
-            let digest = make_digest(secret, &success_h, 0);
-            success_h[TLS_DIGEST_POS..TLS_DIGEST_POS + TLS_DIGEST_LEN].copy_from_slice(&digest);
-            let _ = validate_tls_handshake_at_time(&success_h, &secrets, true, 0);
-        }
-        let success_elapsed = start_success.elapsed();
-
-        let start_fail = Instant::now();
-        for i in 0..ITER {
-            let mut digest = make_digest(secret, &fail_h, 0);
-            let flip_idx = (i + round) % (TLS_DIGEST_LEN - 4);
-            digest[flip_idx] ^= 0xFF;
-            fail_h[TLS_DIGEST_POS..TLS_DIGEST_POS + TLS_DIGEST_LEN].copy_from_slice(&digest);
-            let _ = validate_tls_handshake_at_time(&fail_h, &secrets, true, 0);
-        }
-        let fail_elapsed = start_fail.elapsed();
-
-        let diff = if success_elapsed > fail_elapsed {
-            success_elapsed - fail_elapsed
-        } else {
-            fail_elapsed - success_elapsed
-        };
-        per_round_avg_diff_ns.push(diff.as_nanos() as f64 / ITER as f64);
-    }
-
-    per_round_avg_diff_ns.sort_by(|a, b| a.partial_cmp(b).unwrap());
-    let median_avg_diff_ns = per_round_avg_diff_ns[ROUNDS / 2];
-
-    // Keep this as a coarse side-channel guard only; noisy shared CI hosts can
-    // introduce microsecond-level jitter that should not fail deterministic suites.
-    assert!(
-        median_avg_diff_ns < 50_000.0,
-        "Median timing delta too large: {} ns/iter",
-        median_avg_diff_ns
-    );
-}
-
-// ------------------------------------------------------------------
-// Adversarial Fingerprinting / Active Probing Tests
-// ------------------------------------------------------------------
-
-#[test]
-fn is_tls_handshake_robustness_against_probing() {
-    // Valid TLS 1.0 ClientHello
-    assert!(is_tls_handshake(&[0x16, 0x03, 0x01]));
-    // Valid TLS 1.2/1.3 ClientHello (Legacy Record Layer)
-    assert!(is_tls_handshake(&[0x16, 0x03, 0x03]));
-
-    // Invalid record type but matching version
-    assert!(!is_tls_handshake(&[0x17, 0x03, 0x03]));
-    // Plaintext HTTP request
-    assert!(!is_tls_handshake(b"GET / HTTP/1.1"));
-    // Short garbage
-    assert!(!is_tls_handshake(&[0x16, 0x03]));
-}
-
-#[test]
-fn validate_tls_handshake_at_time_strict_boundary() {
-    let secret = b"strict_boundary_secret_32_bytes_";
-    let secrets = vec![("u".to_string(), secret.to_vec())];
-    let now: i64 = 1_000_000_000;
-
-    // Boundary: exactly TIME_SKEW_MAX (120s past)
-    let ts_past = (now - TIME_SKEW_MAX) as u32;
-    let h = make_valid_tls_handshake_with_session_id(secret, ts_past, &[0x42; 32]);
-    assert!(validate_tls_handshake_at_time(&h, &secrets, false, now).is_some());
-
-    // Boundary + 1s: should be rejected
-    let ts_too_past = (now - TIME_SKEW_MAX - 1) as u32;
-    let h2 = make_valid_tls_handshake_with_session_id(secret, ts_too_past, &[0x42; 32]);
-    assert!(validate_tls_handshake_at_time(&h2, &secrets, false, now).is_none());
-}
-
-#[test]
-fn extract_sni_with_duplicate_extensions_rejected() {
-    // Construct a ClientHello with TWO SNI extensions
-    let host1 = b"first.com";
-    let mut sni1 = Vec::new();
-    sni1.extend_from_slice(&((host1.len() + 3) as u16).to_be_bytes());
-    sni1.push(0);
-    sni1.extend_from_slice(&(host1.len() as u16).to_be_bytes());
-    sni1.extend_from_slice(host1);
-
-    let host2 = b"second.com";
-    let mut sni2 = Vec::new();
-    sni2.extend_from_slice(&((host2.len() + 3) as u16).to_be_bytes());
-    sni2.push(0);
-    sni2.extend_from_slice(&(host2.len() as u16).to_be_bytes());
-    sni2.extend_from_slice(host2);
|
|
||||||
|
|
||||||
let mut ext = Vec::new();
|
|
||||||
// Ext 1: SNI
|
|
||||||
ext.extend_from_slice(&0x0000u16.to_be_bytes());
|
|
||||||
ext.extend_from_slice(&(sni1.len() as u16).to_be_bytes());
|
|
||||||
ext.extend_from_slice(&sni1);
|
|
||||||
// Ext 2: SNI again
|
|
||||||
ext.extend_from_slice(&0x0000u16.to_be_bytes());
|
|
||||||
ext.extend_from_slice(&(sni2.len() as u16).to_be_bytes());
|
|
||||||
ext.extend_from_slice(&sni2);
|
|
||||||
|
|
||||||
let mut body = Vec::new();
|
|
||||||
body.extend_from_slice(&[0x03, 0x03]);
|
|
||||||
body.extend_from_slice(&[0u8; 32]);
|
|
||||||
body.push(0);
|
|
||||||
body.extend_from_slice(&[0x00, 0x02, 0x13, 0x01]);
|
|
||||||
body.extend_from_slice(&[0x01, 0x00]);
|
|
||||||
body.extend_from_slice(&(ext.len() as u16).to_be_bytes());
|
|
||||||
body.extend_from_slice(&ext);
|
|
||||||
|
|
||||||
let mut handshake = Vec::new();
|
|
||||||
handshake.push(0x01);
|
|
||||||
let body_len = (body.len() as u32).to_be_bytes();
|
|
||||||
handshake.extend_from_slice(&body_len[1..4]);
|
|
||||||
handshake.extend_from_slice(&body);
|
|
||||||
|
|
||||||
let mut h = Vec::new();
|
|
||||||
h.push(0x16);
|
|
||||||
h.extend_from_slice(&[0x03, 0x03]);
|
|
||||||
h.extend_from_slice(&(handshake.len() as u16).to_be_bytes());
|
|
||||||
h.extend_from_slice(&handshake);
|
|
||||||
|
|
||||||
// Duplicate SNI extensions are ambiguous and must fail closed.
|
|
||||||
assert!(extract_sni_from_client_hello(&h).is_none());
|
|
||||||
}
|
|
||||||
|
|
||||||
#[test]
|
|
||||||
fn extract_alpn_with_malformed_list_rejected() {
|
|
||||||
let mut alpn_payload = Vec::new();
|
|
||||||
alpn_payload.extend_from_slice(&0x0005u16.to_be_bytes()); // Total len 5
|
|
||||||
alpn_payload.push(10); // Labeled len 10 (OVERFLOWS total 5)
|
|
||||||
alpn_payload.extend_from_slice(b"h2");
|
|
||||||
|
|
||||||
let mut ext = Vec::new();
|
|
||||||
ext.extend_from_slice(&0x0010u16.to_be_bytes()); // Type: ALPN (16)
|
|
||||||
ext.extend_from_slice(&(alpn_payload.len() as u16).to_be_bytes());
|
|
||||||
ext.extend_from_slice(&alpn_payload);
|
|
||||||
|
|
||||||
let mut h = vec![
|
|
||||||
0x16, 0x03, 0x03, 0x00, 0x40, 0x01, 0x00, 0x00, 0x3C, 0x03, 0x03,
|
|
||||||
];
|
|
||||||
h.extend_from_slice(&[0u8; 32]);
|
|
||||||
h.push(0);
|
|
||||||
h.extend_from_slice(&[0x00, 0x02, 0x13, 0x01, 0x01, 0x00]);
|
|
||||||
h.extend_from_slice(&(ext.len() as u16).to_be_bytes());
|
|
||||||
h.extend_from_slice(&ext);
|
|
||||||
|
|
||||||
let res = extract_alpn_from_client_hello(&h);
|
|
||||||
assert!(
|
|
||||||
res.is_empty(),
|
|
||||||
"Malformed ALPN list must return empty or fail"
|
|
||||||
);
|
|
||||||
}
|
|
||||||
|
|
||||||
#[test]
|
|
||||||
fn extract_sni_with_huge_extension_header_rejected() {
|
|
||||||
let mut h = vec![0x16, 0x03, 0x03, 0x00, 0x00]; // Record header
|
|
||||||
h.push(0x01); // ClientHello
|
|
||||||
h.extend_from_slice(&[0x00, 0xFF, 0xFF]); // Huge length (65535) - overflows record
|
|
||||||
h.extend_from_slice(&[0x03, 0x03]);
|
|
||||||
h.extend_from_slice(&[0u8; 32]);
|
|
||||||
h.push(0);
|
|
||||||
h.extend_from_slice(&[0x00, 0x02, 0x13, 0x01, 0x01, 0x00]);
|
|
||||||
|
|
||||||
// Extensions start
|
|
||||||
h.extend_from_slice(&[0xFF, 0xFF]); // Total extensions: 65535 (OVERFLOWS everything)
|
|
||||||
|
|
||||||
assert!(extract_sni_from_client_hello(&h).is_none());
|
|
||||||
}
|
|
||||||
|
|
@ -1,210 +0,0 @@
use super::*;
use crate::crypto::sha256_hmac;
use std::panic::catch_unwind;

fn make_valid_tls_handshake_with_session_id(
    secret: &[u8],
    timestamp: u32,
    session_id: &[u8],
) -> Vec<u8> {
    let session_id_len = session_id.len();
    assert!(session_id_len <= u8::MAX as usize);

    let len = TLS_DIGEST_POS + TLS_DIGEST_LEN + 1 + session_id_len;
    let mut handshake = vec![0x42u8; len];
    handshake[TLS_DIGEST_POS + TLS_DIGEST_LEN] = session_id_len as u8;
    let sid_start = TLS_DIGEST_POS + TLS_DIGEST_LEN + 1;
    handshake[sid_start..sid_start + session_id_len].copy_from_slice(session_id);
    handshake[TLS_DIGEST_POS..TLS_DIGEST_POS + TLS_DIGEST_LEN].fill(0);

    let mut digest = sha256_hmac(secret, &handshake);
    let ts = timestamp.to_le_bytes();
    for idx in 0..4 {
        digest[28 + idx] ^= ts[idx];
    }

    handshake[TLS_DIGEST_POS..TLS_DIGEST_POS + TLS_DIGEST_LEN].copy_from_slice(&digest);
    handshake
}

fn make_valid_client_hello_record(host: &str, alpn_protocols: &[&[u8]]) -> Vec<u8> {
    let mut body = Vec::new();
    body.extend_from_slice(&TLS_VERSION);
    body.extend_from_slice(&[0u8; 32]);
    body.push(0);
    body.extend_from_slice(&2u16.to_be_bytes());
    body.extend_from_slice(&[0x13, 0x01]);
    body.push(1);
    body.push(0);

    let mut ext_blob = Vec::new();

    let host_bytes = host.as_bytes();
    let mut sni_payload = Vec::new();
    sni_payload.extend_from_slice(&((host_bytes.len() + 3) as u16).to_be_bytes());
    sni_payload.push(0);
    sni_payload.extend_from_slice(&(host_bytes.len() as u16).to_be_bytes());
    sni_payload.extend_from_slice(host_bytes);
    ext_blob.extend_from_slice(&0x0000u16.to_be_bytes());
    ext_blob.extend_from_slice(&(sni_payload.len() as u16).to_be_bytes());
    ext_blob.extend_from_slice(&sni_payload);

    if !alpn_protocols.is_empty() {
        let mut alpn_list = Vec::new();
        for proto in alpn_protocols {
            alpn_list.push(proto.len() as u8);
            alpn_list.extend_from_slice(proto);
        }
        let mut alpn_data = Vec::new();
        alpn_data.extend_from_slice(&(alpn_list.len() as u16).to_be_bytes());
        alpn_data.extend_from_slice(&alpn_list);

        ext_blob.extend_from_slice(&0x0010u16.to_be_bytes());
        ext_blob.extend_from_slice(&(alpn_data.len() as u16).to_be_bytes());
        ext_blob.extend_from_slice(&alpn_data);
    }

    body.extend_from_slice(&(ext_blob.len() as u16).to_be_bytes());
    body.extend_from_slice(&ext_blob);

    let mut handshake = Vec::new();
    handshake.push(0x01);
    let body_len = (body.len() as u32).to_be_bytes();
    handshake.extend_from_slice(&body_len[1..4]);
    handshake.extend_from_slice(&body);

    let mut record = Vec::new();
    record.push(TLS_RECORD_HANDSHAKE);
    record.extend_from_slice(&[0x03, 0x01]);
    record.extend_from_slice(&(handshake.len() as u16).to_be_bytes());
    record.extend_from_slice(&handshake);
    record
}

#[test]
fn client_hello_fuzz_corpus_never_panics_or_accepts_corruption() {
    let valid = make_valid_client_hello_record("example.com", &[b"h2", b"http/1.1"]);
    assert_eq!(
        extract_sni_from_client_hello(&valid).as_deref(),
        Some("example.com")
    );
    assert_eq!(
        extract_alpn_from_client_hello(&valid),
        vec![b"h2".to_vec(), b"http/1.1".to_vec()]
    );
    assert!(
        extract_sni_from_client_hello(&make_valid_client_hello_record("127.0.0.1", &[])).is_none(),
        "literal IP hostnames must be rejected"
    );

    let mut corpus = vec![
        Vec::new(),
        vec![0x16, 0x03, 0x03],
        valid[..9].to_vec(),
        valid[..valid.len() - 1].to_vec(),
    ];

    let mut wrong_type = valid.clone();
    wrong_type[0] = 0x15;
    corpus.push(wrong_type);

    let mut wrong_handshake = valid.clone();
    wrong_handshake[5] = 0x02;
    corpus.push(wrong_handshake);

    let mut wrong_length = valid.clone();
    wrong_length[3] ^= 0x7f;
    corpus.push(wrong_length);

    for (idx, input) in corpus.iter().enumerate() {
        assert!(catch_unwind(|| extract_sni_from_client_hello(input)).is_ok());
        assert!(catch_unwind(|| extract_alpn_from_client_hello(input)).is_ok());

        if idx == 0 {
            continue;
        }

        assert!(
            extract_sni_from_client_hello(input).is_none(),
            "corpus item {idx} must fail closed for SNI"
        );
        assert!(
            extract_alpn_from_client_hello(input).is_empty(),
            "corpus item {idx} must fail closed for ALPN"
        );
    }
}

#[test]
fn tls_handshake_fuzz_corpus_never_panics_and_rejects_digest_mutations() {
    let secret = b"tls_fuzz_security_secret";
    let now: i64 = 1_700_000_000;
    let base = make_valid_tls_handshake_with_session_id(secret, now as u32, &[0x42; 32]);
    let secrets = vec![("fuzz-user".to_string(), secret.to_vec())];

    assert!(validate_tls_handshake_at_time(&base, &secrets, false, now).is_some());

    let mut corpus = Vec::new();

    let mut truncated = base.clone();
    truncated.truncate(TLS_DIGEST_POS + 16);
    corpus.push(truncated);

    let mut digest_flip = base.clone();
    digest_flip[TLS_DIGEST_POS + 7] ^= 0x80;
    corpus.push(digest_flip);

    let mut session_id_len_overflow = base.clone();
    session_id_len_overflow[TLS_DIGEST_POS + TLS_DIGEST_LEN] = 33;
    corpus.push(session_id_len_overflow);

    let mut timestamp_far_past = base.clone();
    timestamp_far_past[TLS_DIGEST_POS + 28..TLS_DIGEST_POS + 32]
        .copy_from_slice(&((now - i64::from(TIME_SKEW_MAX) - 1) as u32).to_le_bytes());
    corpus.push(timestamp_far_past);

    let mut timestamp_far_future = base.clone();
    timestamp_far_future[TLS_DIGEST_POS + 28..TLS_DIGEST_POS + 32]
        .copy_from_slice(&((now - TIME_SKEW_MIN + 1) as u32).to_le_bytes());
    corpus.push(timestamp_far_future);

    let mut seed = 0xA5A5_5A5A_F00D_BAAD_u64;
    for _ in 0..32 {
        let mut mutated = base.clone();
        for _ in 0..2 {
            seed = seed
                .wrapping_mul(2862933555777941757)
                .wrapping_add(3037000493);
            let idx = TLS_DIGEST_POS + (seed as usize % TLS_DIGEST_LEN);
            mutated[idx] ^= ((seed >> 17) as u8).wrapping_add(1);
        }
        corpus.push(mutated);
    }

    for (idx, handshake) in corpus.iter().enumerate() {
        let result =
            catch_unwind(|| validate_tls_handshake_at_time(handshake, &secrets, false, now));
        assert!(result.is_ok(), "corpus item {idx} must not panic");
        assert!(
            result.unwrap().is_none(),
            "corpus item {idx} must fail closed"
        );
    }
}

#[test]
fn tls_boot_time_acceptance_is_capped_by_replay_window() {
    let secret = b"tls_boot_time_cap_secret";
    let secrets = vec![("boot-user".to_string(), secret.to_vec())];
    let boot_ts = 1u32;
    let handshake = make_valid_tls_handshake_with_session_id(secret, boot_ts, &[0x42; 32]);

    assert!(
        validate_tls_handshake_with_replay_window(&handshake, &secrets, false, 300).is_some(),
        "boot-time timestamp should be accepted while the replay window permits it"
    );
    assert!(
        validate_tls_handshake_with_replay_window(&handshake, &secrets, false, 0).is_none(),
        "boot-time timestamp must be rejected when the replay window disables the bypass"
    );
}
@ -1,37 +0,0 @@
use super::*;

#[test]
fn extension_builder_fails_closed_on_u16_length_overflow() {
    let builder = TlsExtensionBuilder {
        extensions: vec![0u8; (u16::MAX as usize) + 1],
    };

    let built = builder.build();
    assert!(
        built.is_empty(),
        "oversized extension blob must fail closed instead of truncating the length field"
    );
}

#[test]
fn server_hello_builder_fails_closed_on_session_id_len_overflow() {
    let builder = ServerHelloBuilder {
        random: [0u8; 32],
        session_id: vec![0xAB; (u8::MAX as usize) + 1],
        cipher_suite: cipher_suite::TLS_AES_128_GCM_SHA256,
        compression: 0,
        extensions: TlsExtensionBuilder::new(),
    };

    let message = builder.build_message();
    let record = builder.build_record();

    assert!(
        message.is_empty(),
        "session_id length overflow must fail closed in the message builder"
    );
    assert!(
        record.is_empty(),
        "session_id length overflow must fail closed in the record builder"
    );
}
File diff suppressed because it is too large
@ -1,11 +0,0 @@
use super::{MAX_TLS_CIPHERTEXT_SIZE, MAX_TLS_PLAINTEXT_SIZE, MIN_TLS_CLIENT_HELLO_SIZE};

#[test]
fn tls_size_constants_match_rfc_8446() {
    assert_eq!(MAX_TLS_PLAINTEXT_SIZE, 16_384);
    assert_eq!(MAX_TLS_CIPHERTEXT_SIZE, 16_640);

    assert!(MIN_TLS_CLIENT_HELLO_SIZE < 512);
    assert!(MIN_TLS_CLIENT_HELLO_SIZE > 64);
    assert!(MAX_TLS_CIPHERTEXT_SIZE > MAX_TLS_PLAINTEXT_SIZE);
}