Network Protocol
Comprehensive specification of network communication protocols, connection management, and peer-to-peer networking in Zentalk.
Protocol Stack Overview
Zentalk uses a layered protocol stack designed for security, privacy, and resilience:
Zentalk Protocol Stack
Layer Responsibilities
| Layer | Responsibility | Protocols |
|---|---|---|
| Application | Message semantics, user data | JSON-RPC, Protobuf |
| Session | Cryptographic sessions, key management | X3DH, Double Ratchet |
| Transport | Reliable delivery, multiplexing | WebSocket, Binary Frames |
| Security | Authentication, encryption | Noise NK, TLS 1.3 |
| Network | Routing, peer discovery | Kademlia DHT, 3-Hop Relay |
| Physical | Network connectivity | TCP, UDP, QUIC |
Connection Management
Connection Lifecycle
┌────────────────────────────────────────────────────────────────┐
│ Connection State Machine │
├────────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────┐ │
│ │ IDLE │ │
│ └────┬─────┘ │
│ │ connect() │
│ ▼ │
│ ┌──────────┐ timeout ┌──────────┐ │
│ │CONNECTING├─────────────────►│ FAILED │ │
│ └────┬─────┘ └────┬─────┘ │
│ │ TCP connected │ retry() │
│ ▼ │ │
│ ┌──────────┐ │ │
│ │HANDSHAKE ├───────────────────────┤ │
│ └────┬─────┘ auth failed │ │
│ │ authenticated │ │
│ ▼ │ │
│ ┌──────────┐ │ │
│ │ ACTIVE │◄──────────────────────┘ │
│ └────┬─────┘ │
│ │ close() / error │
│ ▼ │
│ ┌──────────┐ │
│ │ CLOSING │ │
│ └────┬─────┘ │
│ │ closed │
│ ▼ │
│ ┌──────────┐ │
│ │ CLOSED │ │
│ └──────────┘ │
│ │
└────────────────────────────────────────────────────────────────┘
Lifecycle Phases
| Phase | Duration | Actions |
|---|---|---|
| Connect | 0-5s | TCP handshake, TLS negotiation |
| Handshake | 0-10s | Protocol negotiation, authentication |
| Active | Indefinite | Message exchange, keepalive |
| Closing | 0-5s | Graceful shutdown, pending message flush |
| Closed | N/A | Resources released, connection terminated |
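The lifecycle above can be sketched as a transition table (Python is used illustratively here and in the sketches that follow; reading the diagram's retry() edge as starting a fresh connection attempt, i.e. FAILED → CONNECTING, is an interpretation):

```python
from enum import Enum, auto

class ConnState(Enum):
    IDLE = auto()
    CONNECTING = auto()
    HANDSHAKE = auto()
    ACTIVE = auto()
    FAILED = auto()
    CLOSING = auto()
    CLOSED = auto()

# Legal transitions, read off the state diagram. The retry() edge is
# modeled as FAILED -> CONNECTING (a fresh connection attempt).
TRANSITIONS = {
    ConnState.IDLE:       {ConnState.CONNECTING},
    ConnState.CONNECTING: {ConnState.HANDSHAKE, ConnState.FAILED},
    ConnState.HANDSHAKE:  {ConnState.ACTIVE, ConnState.FAILED},
    ConnState.FAILED:     {ConnState.CONNECTING},
    ConnState.ACTIVE:     {ConnState.CLOSING},
    ConnState.CLOSING:    {ConnState.CLOSED},
    ConnState.CLOSED:     set(),
}

def transition(current: ConnState, target: ConnState) -> ConnState:
    """Advance the state machine, rejecting edges the diagram forbids."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.name} -> {target.name}")
    return target
```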
Connection Establishment Sequence
Client Server
│ │
├──────── TCP SYN ───────────────────────►│
│ │
│◄─────── TCP SYN+ACK ────────────────────┤
│ │
├──────── TCP ACK ───────────────────────►│
│ │
│ [TLS 1.3 Handshake] │
│ │
├──────── ClientHello ───────────────────►│
│ (supported ciphers, keys) │
│ │
│◄─────── ServerHello ────────────────────┤
│ (selected cipher, cert) │
│ │
├──────── Finished ──────────────────────►│
│ │
│ [Protocol Handshake] │
│ │
├──────── HELLO ─────────────────────────►│
│ (version, capabilities) │
│ │
│◄─────── HELLO_ACK ──────────────────────┤
│ (version, capabilities) │
│ │
│ [Connection Active] │
└─────────────────────────────────────────┘
Connection Pooling
Zentalk maintains connection pools for efficient peer communication:
| Pool Type | Max Connections | Purpose |
|---|---|---|
| Relay Pool | 10 | Connections to relay nodes |
| DHT Pool | 50 | Kademlia routing table peers |
| Direct Pool | 100 | Direct peer-to-peer connections |
Pool Management Strategy:
Pool Eviction Policy:
1. Priority: Keep frequently used connections
2. LRU: Evict least recently used when full
3. Health: Evict connections with high latency
Pool Warmup:
1. On startup: Connect to bootstrap nodes
2. Background: Maintain minimum connections per pool
3. Lazy: Create connections on first request
Connection Limits
| Limit | Value | Scope |
|---|---|---|
| Max connections per peer | 3 | Per remote peer ID |
| Max total connections | 200 | Per client instance |
| Max connections per IP | 5 | Rate limiting |
| Max pending connections | 20 | Connection queue |
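The LRU part of the pool eviction policy can be sketched with an ordered map (a minimal sketch; the priority and health criteria from the policy above are omitted):

```python
from collections import OrderedDict

class ConnectionPool:
    """LRU pool sketch: keeps at most `max_size` connections,
    evicting the least recently used when full."""
    def __init__(self, max_size: int):
        self.max_size = max_size
        self._conns: "OrderedDict[str, object]" = OrderedDict()

    def get(self, peer_id: str):
        conn = self._conns.get(peer_id)
        if conn is not None:
            self._conns.move_to_end(peer_id)   # mark as recently used
        return conn

    def add(self, peer_id: str, conn) -> None:
        if peer_id in self._conns:
            self._conns.move_to_end(peer_id)
        elif len(self._conns) >= self.max_size:
            self._conns.popitem(last=False)    # evict least recently used
        self._conns[peer_id] = conn
```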
Idle Connection Handling
| Parameter | Value | Description |
|---|---|---|
| Idle timeout | 5 minutes | Time before marking idle |
| Idle check interval | 30 seconds | How often to check |
| Max idle connections | 20 | Per pool |
| Idle grace period | 2 minutes | Before closing idle |
Idle Connection Lifecycle:
1. No activity for 5 minutes → Mark as IDLE
2. Idle for 2 minutes → Send PING
3. No PONG within 10s → Close connection
4. PONG received → Reset idle timer
NAT Traversal
Overview
NAT traversal enables peer-to-peer connections even when peers are behind NAT devices or firewalls.
┌───────────────────────────────────────────────────────────────┐
│ NAT Traversal Flow │
├───────────────────────────────────────────────────────────────┤
│ │
│ Peer A STUN/TURN Server Peer B │
│ │ │ │ │
│ ├── STUN Binding ──────►│ │ │
│ │ Request │ │ │
│ │ │◄── STUN Binding ────────┤ │
│ │◄── Reflexive ─────────┤ Request │ │
│ │ Address │ │ │
│ │ ├── Reflexive ───────────►│ │
│ │ │ Address │ │
│ │ │ │ │
│ │ [Exchange candidates via signaling] │ │
│ │ │ │ │
│ ├── UDP Hole ──────────────────────────────────► │ │
│ │ Punch │ │ │
│ │ ◄────────────────────────────── UDP Hole ──────┤ │
│ │ Punch │ │
│ │ │ │ │
│ │◄═══════════ Direct P2P Connection ═════════════►│ │
│ │
└───────────────────────────────────────────────────────────────┘
STUN Server Usage
STUN (Session Traversal Utilities for NAT) discovers public IP addresses:
| STUN Server | Region | Purpose |
|---|---|---|
| stun.zentalk.network:3478 | Global | Primary STUN |
| stun1.zentalk.network:3478 | EU | Fallback |
| stun2.zentalk.network:3478 | US | Fallback |
| stun3.zentalk.network:3478 | APAC | Fallback |
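The Binding exchange can be sketched with a minimal codec, assuming standard RFC 5389 framing (the 0x2112A442 magic cookie and XOR-MAPPED-ADDRESS encoding); IPv4 only for brevity:

```python
import os
import struct

MAGIC_COOKIE = 0x2112A442  # fixed RFC 5389 magic cookie

def build_binding_request() -> bytes:
    """STUN Binding Request: type 0x0001, zero-length body,
    magic cookie, 96-bit random transaction ID."""
    return struct.pack("!HHI12s", 0x0001, 0, MAGIC_COOKIE, os.urandom(12))

def parse_xor_mapped_address(attr_value: bytes) -> tuple:
    """Decode an IPv4 XOR-MAPPED-ADDRESS attribute value to (ip, port)."""
    _, family, xport = struct.unpack("!BBH", attr_value[:4])
    assert family == 0x01, "IPv4 only in this sketch"
    port = xport ^ (MAGIC_COOKIE >> 16)        # un-XOR the port
    xaddr = struct.unpack("!I", attr_value[4:8])[0]
    addr = xaddr ^ MAGIC_COOKIE                # un-XOR the address
    ip = ".".join(str((addr >> s) & 0xFF) for s in (24, 16, 8, 0))
    return ip, port
```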
STUN Binding Request:
┌────────────────────────────────────────┐
│ STUN Message Header │
├────────────────────────────────────────┤
│ Type: 0x0001 (Binding Request) │
│ Length: Variable │
│ Transaction ID: 96-bit random │
├────────────────────────────────────────┤
│ Attributes (optional): │
│ - SOFTWARE │
│ - FINGERPRINT │
└────────────────────────────────────────┘
STUN Binding Response:
┌────────────────────────────────────────┐
│ Type: 0x0101 (Binding Response) │
├────────────────────────────────────────┤
│ XOR-MAPPED-ADDRESS: │
│ - Family: IPv4/IPv6 │
│ - Port: Public port │
│ - Address: Public IP │
├────────────────────────────────────────┤
│ MAPPED-ADDRESS (legacy): │
│ - Same format as above │
└────────────────────────────────────────┘
UDP Hole Punching
UDP hole punching creates direct peer connections through NAT:
Algorithm:
UDP Hole Punch Procedure:
1. Both peers discover their reflexive addresses via STUN
2. Exchange candidates through signaling channel:
- Host candidates (local IPs)
- Server-reflexive candidates (from STUN)
- Relay candidates (from TURN, as fallback)
3. Connectivity checks (ICE):
For each candidate pair:
a. Send STUN Binding Request to peer
b. Wait for response
c. If success: mark as valid pair
4. Select best valid pair (lowest RTT)
5. Establish data channel on selected pair
Hole Punch Timing:
| Phase | Timeout | Retries |
|---|---|---|
| STUN discovery | 3s | 3 |
| Candidate exchange | 5s | N/A |
| Connectivity check | 500ms | 10 |
| Total process | 30s | N/A |
TCP Hole Punching
For networks that block UDP, TCP hole punching is attempted:
TCP Simultaneous Open:
Peer A Peer B
│ │
├── SYN (dst: B:port) ─────────────────────────►│
│ ◄────────────────────────┤
│ SYN (dst: A:port) ───┤
│ │
│ [NAT creates mapping for outbound SYN] │
│ │
│◄─ SYN+ACK ────────────────────────────────────┤
├─────────────────────────────────── SYN+ACK ──►│
│ │
│ [TCP connection established] │
└───────────────────────────────────────────────┘
TCP Hole Punch Success Rate:
| NAT Type | Success Rate |
|---|---|
| Full Cone | 95% |
| Restricted Cone | 85% |
| Port Restricted | 60% |
| Symmetric | 20% |
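Classifying a NAT from STUN test results follows the analysis rules given under NAT Type Detection below; a sketch (how the "port same" condition maps to a parameter is an assumption, since the analysis leaves it implicit):

```python
def classify_nat(mapped_a, mapped_b, test3_responded: bool,
                 port_unchanged: bool) -> str:
    """Apply the NAT-type analysis rules.

    mapped_a / mapped_b: mapped addresses from STUN Tests I and II.
    test3_responded: whether the change-request test (Test III) got a reply.
    port_unchanged: the 'port same' condition from the analysis (assumed
    to mean the mapped port matched across tests).
    """
    if mapped_a != mapped_b:
        return "Symmetric"
    if test3_responded:
        return "Full Cone"
    return "Restricted Cone" if port_unchanged else "Port Restricted"
```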
TURN Relay Fallback
When a direct connection fails, TURN relays traffic:
┌─────────────────────────────────────────────────────────────┐
│ TURN Relay Mode │
├─────────────────────────────────────────────────────────────┤
│ │
│ Peer A ◄───────► TURN Server ◄───────► Peer B │
│ │
│ All traffic routed through relay │
│ Higher latency, but guaranteed connectivity │
│ │
└─────────────────────────────────────────────────────────────┘
TURN Allocation:
| Parameter | Value |
|---|---|
| Allocation lifetime | 10 minutes |
| Refresh interval | 5 minutes |
| Max bandwidth | 1 Mbps per allocation |
| Max allocations per user | 5 |
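TURN allocations are authenticated with STUN long-term credentials: per RFC 5389, the MESSAGE-INTEGRITY key is MD5 of "username:realm:password" and the attribute value is an HMAC-SHA1 over the preceding message bytes. A sketch (function names illustrative):

```python
import hashlib
import hmac

def long_term_key(username: str, realm: str, password: str) -> bytes:
    """RFC 5389 long-term credential key:
    MD5(username ":" realm ":" password)."""
    return hashlib.md5(f"{username}:{realm}:{password}".encode()).digest()

def message_integrity(key: bytes, stun_message: bytes) -> bytes:
    """MESSAGE-INTEGRITY attribute value: HMAC-SHA1 over the STUN
    message up to (but not including) this attribute."""
    return hmac.new(key, stun_message, hashlib.sha1).digest()
```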
TURN Authentication:
1. Client requests allocation with credentials
2. Server challenges with nonce
3. Client responds with MESSAGE-INTEGRITY
4. Server validates and allocates relay address
NAT Type Detection
NAT Type Detection Algorithm:
1. Test I: STUN to primary server
- Get mapped address A
2. Test II: STUN to alternate IP/port
- Get mapped address B
3. Test III: STUN with change-request flag
- Check if response received
Analysis:
- A == B, Test III success → Full Cone
- A == B, Test III fail, port same → Restricted Cone
- A == B, port different → Port Restricted
- A != B → Symmetric NAT
Peer Exchange Protocol (PEX)
Overview
PEX enables nodes to share peer information, helping the network grow organically.
PEX Message Format
┌────────────────────────────────────────────────────────────┐
│ PEX Message │
├────────────────────────────────────────────────────────────┤
│ Type: 0x20 (PEX) │
├────────────────────────────────────────────────────────────┤
│ Subtype (1 byte): │
│ 0x01 = PEX_REQUEST │
│ 0x02 = PEX_RESPONSE │
├────────────────────────────────────────────────────────────┤
│ Request ID (4 bytes) │
├────────────────────────────────────────────────────────────┤
│ Peer Count (2 bytes) │
├────────────────────────────────────────────────────────────┤
│ Peer Entries (variable): │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ Node ID (32 bytes) │ │
│ │ IP Address (4 or 16 bytes) │ │
│ │ Port (2 bytes) │ │
│ │ Capabilities (4 bytes) │ │
│ │ Last Seen (8 bytes, timestamp) │ │
│ │ Latency (2 bytes, milliseconds) │ │
│ └──────────────────────────────────────────────────────┘ │
└────────────────────────────────────────────────────────────┘
Peer Sharing Rules
| Rule | Description |
|---|---|
| Max peers per response | 50 |
| Min uptime for sharing | 10 minutes |
| Freshness requirement | Seen within last hour |
| Geographic diversity | Include peers from multiple regions |
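The IPv4 form of the peer-entry layout above (32-byte node ID, 4-byte address, 2-byte port, 4-byte capabilities, 8-byte timestamp, 2-byte latency) packs into 52 bytes; a sketch:

```python
import struct

# node_id, ipv4, port, capabilities, last_seen (ms), latency (ms)
PEER_ENTRY_V4 = struct.Struct("!32s4sHIQH")

def pack_peer_entry(node_id: bytes, ip: str, port: int,
                    caps: int, last_seen_ms: int, latency_ms: int) -> bytes:
    ip_bytes = bytes(int(octet) for octet in ip.split("."))
    return PEER_ENTRY_V4.pack(node_id, ip_bytes, port, caps,
                              last_seen_ms, latency_ms)

def unpack_peer_entry(data: bytes) -> tuple:
    node_id, ip_bytes, port, caps, last_seen_ms, latency_ms = \
        PEER_ENTRY_V4.unpack(data)
    ip = ".".join(str(b) for b in ip_bytes)
    return node_id, ip, port, caps, last_seen_ms, latency_ms
```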
Privacy Considerations
PEX includes privacy protections:
| Protection | Implementation |
|---|---|
| IP obfuscation | Share only node IDs, not IPs for private peers |
| Rate limiting | Max 1 PEX request per minute per peer |
| Verification | Peers must be verified before sharing |
| Opt-out | Nodes can disable being shared via PEX |
Privacy-Preserving PEX:
1. Node A requests peers from Node B
2. Node B filters its peer list:
- Exclude peers with privacy flag set
- Exclude peers not seen recently
- Exclude peers that failed verification
3. Node B returns filtered list
4. Node A verifies each peer before connecting
Handshake Protocol
Protocol Version Negotiation
┌────────────────────────────────────────────────────────────┐
│ Version Negotiation │
├────────────────────────────────────────────────────────────┤
│ │
│ Client Server │
│ │ │ │
│ ├──── HELLO ───────────────────────────────►│ │
│ │ versions: [3, 2, 1] │ │
│ │ min_version: 1 │ │
│ │ │ │
│ │◄─── HELLO_ACK ────────────────────────────┤ │
│ │ selected_version: 3 │ │
│ │ status: OK │ │
│ │ │ │
│ │ [Proceed with version 3 protocol] │ │
│ │
└────────────────────────────────────────────────────────────┘
Version Selection Algorithm:
select_version(client_versions, server_versions):
common = intersection(client_versions, server_versions)
if common is empty:
return VERSION_MISMATCH error
return max(common)
Capability Exchange
┌────────────────────────────────────────┐
│ Capabilities Bitfield (4 bytes) │
├────────────────────────────────────────┤
│ Bit 0: RELAY - Can relay msgs │
│ Bit 1: EXIT - Can be exit node │
│ Bit 2: GUARD - Can be guard │
│ Bit 3: KYBER - Post-quantum │
│ Bit 4: PADDING - Traffic padding │
│ Bit 5: PEX - Peer exchange │
│ Bit 6: DHT - DHT participation│
│ Bit 7: STORAGE - Mesh storage │
│ Bit 8: VOICE - Voice calls │
│ Bit 9: VIDEO - Video calls │
│ Bit 10: GROUPS - Group messaging │
│ Bit 11-31: Reserved │
└────────────────────────────────────────┘
Capability Negotiation:
effective_capabilities = client_caps AND server_caps
Example:
Client: 0x000007FF (all features)
Server: 0x0000001F (basic features)
Result: 0x0000001F (basic features only)
Noise Protocol Handshake
Zentalk uses the Noise NK pattern for connection encryption:
Noise_NK_25519_ChaChaPoly_SHA256
NK = the client uses No static key and Knows the server's static key in advance (one-way authentication: only the server is authenticated at this layer)
Handshake Pattern:
← s (server static key known to client)
...
→ e (client sends ephemeral)
← e, ee, es (server sends ephemeral, does DH)
Handshake Message Flow:
Client Server
│ │
│ [Client knows server's static public key] │
│ │
├──── e ───────────────────────────────────────►│
│ (client ephemeral public key) │
│ │
│◄─── e, ee, es ────────────────────────────────┤
│ e = server ephemeral │
│ ee = DH(client_e, server_e) │
│ es = DH(client_e, server_s) │
│ │
│ [Both derive symmetric keys] │
│ │
│ encrypt_key = HKDF(ee || es, "encrypt") │
│ decrypt_key = HKDF(ee || es, "decrypt") │
│ │
│═══════════ Encrypted Channel ════════════════►│
│                                               │
Key Derivation:
Handshake completes with:
- ChaCha20-Poly1305 for encryption
- Separate keys for each direction
- SHA256 for transcript hashing
Keep-Alive and Heartbeat
Ping/Pong Mechanism
┌────────────────────────────────────────┐
│ PING Message (0x06) │
├────────────────────────────────────────┤
│ Timestamp (8 bytes, Unix ms) │
│ Nonce (8 bytes, random) │
│ Sequence (4 bytes) │
└────────────────────────────────────────┘
Total: 20 bytes
┌────────────────────────────────────────┐
│ PONG Message (0x07) │
├────────────────────────────────────────┤
│ Timestamp (8 bytes, original) │
│ Nonce (8 bytes, from PING) │
│ Sequence (4 bytes, from PING) │
│ Receive Time (8 bytes, local time) │
└────────────────────────────────────────┘
Total: 28 bytes
Timing Parameters
| Parameter | Value | Description |
|---|---|---|
| PING interval | 30 seconds | Time between PINGs |
| PONG timeout | 10 seconds | Max wait for PONG |
| Max missed PINGs | 3 | Before disconnect |
| Jitter | 0-5 seconds | Random delay to prevent thundering herd |
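The 20-byte PING and 28-byte PONG layouts can be packed with a short sketch (function names illustrative):

```python
import os
import struct
import time

PING = struct.Struct("!QQI")    # timestamp_ms, nonce, sequence -> 20 bytes
PONG = struct.Struct("!QQIQ")   # ... + receive_time_ms         -> 28 bytes

def make_ping(seq: int) -> bytes:
    """PING: current Unix-ms timestamp, random 8-byte nonce, sequence."""
    nonce = int.from_bytes(os.urandom(8), "big")
    return PING.pack(int(time.time() * 1000), nonce, seq)

def make_pong(ping: bytes) -> bytes:
    """PONG echoes the PING fields and appends the local receive time."""
    ts, nonce, seq = PING.unpack(ping)
    return PONG.pack(ts, nonce, seq, int(time.time() * 1000))

def rtt_ms(pong: bytes, now_ms: int) -> int:
    """Raw round-trip time: local now minus the echoed send timestamp."""
    ts, _nonce, _seq, _recv = PONG.unpack(pong)
    return now_ms - ts
```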
RTT Calculation
RTT Measurement:
ping_time = current_time()
send(PING{timestamp: ping_time, nonce: random()})
receive(PONG)
pong_time = current_time()
rtt = pong_time - ping_time
server_processing = pong.receive_time - pong.timestamp
network_rtt = rtt - server_processing
Dead Peer Detection
Dead Peer Detection Algorithm:
1. Send PING at regular intervals (30s)
2. Track pending PINGs with timestamps
3. On PONG: remove from pending, update RTT
4. On timeout (10s): increment miss_count
5. If miss_count >= 3:
- Mark connection as DEAD
- Close connection
- Trigger reconnection if needed
Connection Quality Assessment
| RTT Range | Quality | Action |
|---|---|---|
| 0-50ms | Excellent | Normal operation |
| 50-150ms | Good | Normal operation |
| 150-500ms | Fair | Consider backup route |
| 500ms-1s | Poor | Warn user, try alternate |
| > 1s | Critical | Force reconnection |
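The quality bands above reduce to a simple classifier (boundary values are assigned to the better band, an assumption since the table's ranges overlap at their edges):

```python
def link_quality(rtt_ms: float) -> str:
    """Map a measured RTT (ms) onto the quality bands from the table."""
    if rtt_ms <= 50:
        return "Excellent"
    if rtt_ms <= 150:
        return "Good"
    if rtt_ms <= 500:
        return "Fair"
    if rtt_ms <= 1000:
        return "Poor"
    return "Critical"
```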
Backpressure Handling
Flow Control
┌────────────────────────────────────────────────────────────┐
│ Flow Control Window │
├────────────────────────────────────────────────────────────┤
│ │
│ Sender Receiver │
│ │ │ │
│ │ [Window Size: 1000 messages] │ │
│ │ │ │
│ ├── MSG 1 ─────────────────────────────────────►│ │
│ ├── MSG 2 ─────────────────────────────────────►│ │
│ ├── ... ─────────────────────────────────────►│ │
│ ├── MSG 1000 ──────────────────────────────────►│ │
│ │ [Window exhausted, pause sending] │ │
│ │ │ │
│ │◄─ WINDOW_UPDATE (consumed: 500) ──────────────┤ │
│ │ [Window restored: 500 slots] │ │
│ │ │ │
│ ├── MSG 1001 ──────────────────────────────────►│ │
│ └───────────────────────────────────────────────┘ │
│ │
└────────────────────────────────────────────────────────────┘
Window Parameters
| Parameter | Value | Description |
|---|---|---|
| Initial window | 1000 messages | Starting window size |
| Min window | 100 messages | Floor for congestion |
| Max window | 10000 messages | Ceiling |
| Update threshold | 50% consumed | When to send update |
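The credit-based window can be sketched as follows (the halve-on-congestion rule is an assumption; the document specifies only the floor and ceiling):

```python
class SendWindow:
    """Credit-based flow control sketch: the sender transmits while
    credits remain; WINDOW_UPDATE restores consumed credits."""
    def __init__(self, initial: int = 1000, floor: int = 100,
                 ceiling: int = 10000):
        self.credits = initial
        self.floor = floor
        self.ceiling = ceiling

    def try_send(self) -> bool:
        """Consume one credit, or report that the window is exhausted."""
        if self.credits == 0:
            return False            # window exhausted: queue locally
        self.credits -= 1
        return True

    def window_update(self, consumed: int) -> None:
        """Receiver acknowledged `consumed` messages; restore credits."""
        self.credits = min(self.credits + consumed, self.ceiling)

    def shrink(self) -> None:
        """On congestion, halve credits but never below the floor
        (the halving rule itself is illustrative)."""
        self.credits = max(self.credits // 2, self.floor)
```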
Message Queuing
Priority Queue Structure:
┌─────────────────────────────────────┐
│ Priority 0 (Highest): Control │
│ - PING/PONG │
│ - HANDSHAKE │
│ - CLOSE │
├─────────────────────────────────────┤
│ Priority 1: Key Exchange │
│ - X3DH initiation │
│ - Prekey requests │
├─────────────────────────────────────┤
│ Priority 2: Messages │
│ - User messages │
│ - Read receipts │
├─────────────────────────────────────┤
│ Priority 3: Background │
│ - Presence updates │
│ - Profile sync │
│ - PEX │
└─────────────────────────────────────┘
Rate Limiting per Connection
| Operation | Limit | Window | Burst |
|---|---|---|---|
| Messages | 100 | 60 seconds | 20 |
| Key requests | 20 | 60 seconds | 5 |
| PEX requests | 1 | 60 seconds | 1 |
| Presence updates | 10 | 60 seconds | 3 |
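Limits of the form "N per window with a burst ceiling" are commonly enforced with a token bucket; a sketch under that assumption (the document does not name the algorithm):

```python
class TokenBucket:
    """Token-bucket sketch for the per-connection limits above:
    `rate` tokens per `window` seconds, with a `burst` ceiling."""
    def __init__(self, rate: int, window: float, burst: int):
        self.fill_rate = rate / window      # tokens per second
        self.capacity = burst
        self.tokens = float(burst)
        self.last = 0.0

    def allow(self, now: float) -> bool:
        """Refill based on elapsed time, then spend one token if possible."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.fill_rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```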
Backpressure Signals
Backpressure Mechanism:
1. Receiver nearing capacity:
- Send FLOW_CONTROL with reduced window
2. Sender receiving backpressure:
- Reduce send rate
- Queue messages locally
- Apply exponential backoff
3. Recovery:
- Receiver sends WINDOW_UPDATE
- Sender resumes at controlled rate
FLOW_CONTROL Message:
┌────────────────────────────────────────┐
│ Type: 0x10 (FLOW_CONTROL) │
├────────────────────────────────────────┤
│ New Window Size (4 bytes) │
│ Consumed Count (4 bytes) │
│ Reason (1 byte): │
│ 0x01 = BUFFER_FULL │
│ 0x02 = RATE_LIMIT │
│ 0x03 = SYSTEM_LOAD │
│ Retry After (4 bytes, milliseconds) │
└────────────────────────────────────────┘
Protocol Versioning
Version Format
Version: MAJOR.MINOR.PATCH
MAJOR: Breaking changes (incompatible)
MINOR: New features (backward compatible)
PATCH: Bug fixes (backward compatible)
Current: 3.2.1
Wire format: 2 bytes (MAJOR << 8 | MINOR)
Example: 0x0302 = version 3.2
Version Negotiation
Version Selection Rules:
1. Client sends supported versions list
2. Server selects highest mutually supported
3. MAJOR version must match exactly
4. MINOR version: server may select lower if needed
Example:
Client supports: [3.2, 3.1, 3.0, 2.5]
Server supports: [3.1, 3.0, 2.5, 2.4]
Selected: 3.1 (highest common MAJOR.MINOR)
Backward Compatibility
| Change Type | Compatibility |
|---|---|
| New optional field | Compatible |
| New message type | Compatible (ignored if unknown) |
| Changed field type | Incompatible |
| Removed field | Incompatible |
| Changed semantics | Incompatible |
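The negotiation rules and wire encoding from Version Negotiation above can be sketched directly (versions modeled as (MAJOR, MINOR) tuples, which is an assumption about representation):

```python
def select_version(client_versions, server_versions):
    """Pick the highest (MAJOR, MINOR) pair supported by both sides;
    None signals a VERSION_MISMATCH error. Because only exact pairs
    are in the intersection, MAJOR always matches exactly."""
    common = set(client_versions) & set(server_versions)
    return max(common) if common else None

def encode_version(major: int, minor: int) -> int:
    """2-byte wire format: MAJOR << 8 | MINOR."""
    return (major << 8) | minor
```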
Deprecation Policy
Deprecation Timeline:
Version N released:
- Version N-1: Supported
- Version N-2: Deprecated (6 months support)
- Version N-3: End of life
Example with current v3:
- v3: Current
- v2: Supported
- v1: Deprecated, EOL in 6 months
Feature Flags
Feature Flag System:
1. New features gated by flags
2. Server announces supported features
3. Client enables based on negotiation
Feature Announcement:
┌────────────────────────────────────────┐
│ Feature Flags (variable): │
│ - "kyber_pq": true │
│ - "video_v2": true │
│ - "group_encryption_v3": false │
└────────────────────────────────────────┘
Connection Security
TLS 1.3 Configuration
| Parameter | Value |
|---|---|
| Minimum version | TLS 1.3 |
| Certificate validation | Required |
| SNI | Required |
| ALPN | "zentalk/3" |
Cipher Suites (in order of preference):
| Priority | Cipher Suite | Key Exchange |
|---|---|---|
| 1 | TLS_AES_256_GCM_SHA384 | X25519 |
| 2 | TLS_CHACHA20_POLY1305_SHA256 | X25519 |
| 3 | TLS_AES_128_GCM_SHA256 | X25519 |
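A client-side TLS context matching the configuration table can be sketched with the standard `ssl` module (the function name is illustrative; TLS 1.3 cipher-suite ordering is left to the library):

```python
import ssl

def make_client_context(hostname_check: bool = True) -> ssl.SSLContext:
    """TLS 1.3-only client context with ALPN "zentalk/3"."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3   # refuse anything older
    ctx.set_alpn_protocols(["zentalk/3"])          # ALPN from the table
    ctx.check_hostname = hostname_check            # SNI + hostname validation
    return ctx
```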
Certificate Pinning
Certificate Pin Verification:
1. Extract server certificate
2. Compute SHA256 of SubjectPublicKeyInfo
3. Compare against pinned hashes
4. If match: proceed
5. If no match: reject connection
Pinned Certificates:
Primary: sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
Backup 1: sha256/BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=
Backup 2: sha256/CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC=
Peer Authentication
Peer Authentication Flow:
1. TLS Handshake (server authenticated)
2. Application-Layer Authentication:
┌─────────────────────────────────────────┐
│ AUTH_CHALLENGE from server: │
│ - random_bytes: 32 bytes │
│ - timestamp: current time │
│ - server_node_id: 32 bytes │
└─────────────────────────────────────────┘
3. Client Response:
┌─────────────────────────────────────────┐
│ AUTH_RESPONSE: │
│ - client_node_id: 32 bytes │
│ - signature: Ed25519(random || ts) │
│ - public_key: Ed25519 public key │
└─────────────────────────────────────────┘
4. Server Verification:
- Verify signature
- Check node_id = SHA256(public_key)
- Validate timestamp freshness
Man-in-the-Middle Prevention
| Protection | Implementation |
|---|---|
| Certificate transparency | CT logs monitored |
| Public key pinning | HPKP headers |
| Certificate pinning | In-app pin verification |
| Channel binding | TLS-Unique in auth |
| Key verification | Safety number comparison |
Channel Binding:
Channel Binding for MITM Prevention:
tls_unique = TLS session unique identifier
auth_signature = Sign(
private_key,
challenge || timestamp || tls_unique
)
Server verifies signature includes tls_unique
→ Proves auth over THIS TLS session
→ MITM cannot replay on different session
Key Verification (Safety Numbers)
Safety Number Generation:
1. Concatenate identity keys (sorted):
combined = sort(alice_ik, bob_ik)
2. Generate fingerprint:
fingerprint = SHA512(version || combined)
3. Format as numbers:
safety_number = format_as_numbers(fingerprint)
Example: "37291 84756 19283 47561 92837 46529"
Users compare out-of-band to verify no MITM
Error Recovery
Connection Recovery
Connection Recovery Strategy:
1. Connection Lost:
- Save pending message queue
- Store connection state
2. Reconnection Attempt:
- Exponential backoff: 1s, 2s, 4s, 8s... max 30s
- Try alternate endpoints if available
- Use session resumption if possible
3. Session Resumption:
- Send RESUME with session_token
- Server validates and restores state
- Replay missed messages since disconnect
4. Full Reconnection:
- If resumption fails, full handshake
- Re-authenticate
- Request missed messages via sync
Message Recovery
Message Recovery Protocol:
1. On reconnect, client sends:
SYNC_REQUEST {
last_received_id: "msg_xyz",
last_received_timestamp: 1704067200000
}
2. Server responds with:
SYNC_RESPONSE {
messages: [...missed messages...],
has_more: true/false,
next_cursor: "cursor_abc"
}
3. Client processes and acknowledges
4. Repeat if has_more == true
Related Documentation
- Wire Protocol - Binary message formats
- WebSocket API - Real-time communication
- DHT and Kademlia - Peer discovery
- Onion Routing - Anonymous routing
- Rate Limiting - Abuse prevention
- Error Handling - Error codes and recovery