Message Delivery
Technical specification for reliable message delivery in Zentalk’s decentralized network.
Overview
Zentalk implements a store-and-forward message delivery system optimized for privacy-preserving, decentralized communication. Messages traverse the network through 3-hop relay routing, are stored temporarily on the mesh network, and are delivered to recipients with strong reliability guarantees.
Delivery Pipeline
1. Sender creates message
2. Message queued in local outbox
3. E2EE encryption (Double Ratchet)
4. 3-hop relay routing
5. Storage on mesh DHT (multiple nodes)
6. Recipient polls/notified
7. Message retrieved and decrypted
8. ACK sent back through mesh
9. Sender removes from outbox
Key Properties
| Property | Guarantee |
|---|---|
| Delivery semantics | At-least-once |
| Ordering | Causal ordering within session |
| Durability | Mesh-replicated storage |
| Privacy | End-to-end encrypted, onion routed |
| Offline support | Store-and-forward up to 30 days |
Delivery Guarantees
At-Least-Once Delivery
Zentalk provides at-least-once delivery semantics. This means:
| Scenario | Outcome |
|---|---|
| Normal delivery | Message delivered exactly once |
| Network failure during send | Message retried, may deliver multiple times |
| Network failure during ACK | Message retried, may deliver multiple times |
| Recipient offline | Message stored, delivered when online |
Why not exactly-once?
True exactly-once delivery in a distributed system requires either:
- Centralized coordination (violates decentralization)
- Two-phase commit (requires both parties online simultaneously)
- Expensive consensus protocols (high latency, low throughput)
Instead, Zentalk achieves effectively-once delivery through client-side deduplication.
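The effectively-once behavior can be illustrated in a few lines: the receiver remembers which message IDs it has already processed and silently drops redeliveries (a minimal sketch; `receive` and the message IDs are illustrative, not part of the protocol API):

```typescript
// Effectively-once on top of at-least-once: drop message IDs
// that have already been processed.
const seen = new Set<string>();
const delivered: string[] = [];

function receive(messageId: string, body: string): void {
  if (seen.has(messageId)) return; // redelivery after a lost ACK — ignore
  seen.add(messageId);
  delivered.push(body);
}

// The network redelivers m1 (e.g. its ACK was lost):
receive("m1", "hello");
receive("m1", "hello"); // duplicate, dropped
receive("m2", "world");
```

The real client keys its cache on more than the message ID alone, as described under Deduplication Mechanism below.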
Delivery Semantics Comparison
| Semantic | Guarantee | Use Case | Zentalk |
|---|---|---|---|
| At-most-once | May lose messages | Fire-and-forget analytics | No |
| At-least-once | May duplicate messages | Critical messaging | Yes |
| Exactly-once | Neither loss nor duplication | Financial transactions | Effectively (via dedup) |
Causal Ordering
Messages within a session maintain causal ordering:
If Alice sends M1 then M2:
- Bob receives M1 before M2 (or holds M2 until M1 arrives)
- This is enforced by the Double Ratchet message counters
Cross-session ordering:
- Not guaranteed between different conversations
- Not guaranteed across device boundaries
Message Queue Structure
Outbox Queue (Sender Side)
Messages awaiting delivery are stored in a local priority queue.
Outbox Structure:
┌─────────────────────────────────────────────────────────────┐
│ Queue Entry │
├─────────────────────────────────────────────────────────────┤
│ message_id (16 bytes) │ UUID v4 │
│ recipient (32 bytes) │ Recipient wallet address │
│ encrypted_payload (variable) │ E2EE ciphertext │
│ priority (1 byte) │ 0=low, 1=normal, 2=high │
│ created_at (8 bytes) │ Unix timestamp (ms) │
│ retry_count (2 bytes) │ Number of send attempts │
│ next_retry_at (8 bytes) │ Scheduled retry time │
│ expires_at (8 bytes) │ Message TTL deadline │
│ status (1 byte) │ pending, sending, sent │
└─────────────────────────────────────────────────────────────┘
Queue Organization
| Property | Implementation |
|---|---|
| Order | Priority queue with FIFO within priority |
| Persistence | IndexedDB with WAL |
| Max size | 10,000 messages |
| Memory limit | 50MB encrypted payloads |
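The queue-entry layout above maps naturally onto a TypeScript interface, with a comparator implementing the priority-queue order (highest priority first, FIFO within a priority). A sketch; field names are illustrative:

```typescript
type OutboxStatus = "pending" | "sending" | "sent";

interface OutboxEntry {
  messageId: string;            // UUID v4
  recipient: string;            // recipient wallet address
  encryptedPayload: Uint8Array; // E2EE ciphertext
  priority: 0 | 1 | 2;          // 0=low, 1=normal, 2=high
  createdAt: number;            // Unix timestamp (ms)
  retryCount: number;           // number of send attempts
  nextRetryAt: number;          // scheduled retry time
  expiresAt: number;            // message TTL deadline
  status: OutboxStatus;
}

// Highest priority first; FIFO (oldest createdAt first) within a priority.
function compareEntries(a: OutboxEntry, b: OutboxEntry): number {
  if (a.priority !== b.priority) return b.priority - a.priority;
  return a.createdAt - b.createdAt;
}
```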
Priority Levels
| Priority | Value | Use Case | Max Queue Time |
|---|---|---|---|
| High | 2 | Real-time messages, read receipts | 1 hour |
| Normal | 1 | Regular messages | 24 hours |
| Low | 0 | Media chunks, backfill | 72 hours |
Queue Processing Algorithm
process_outbox():
while true:
// Get highest priority pending message
msg = outbox.peek_highest_priority()
if msg == null:
sleep(100ms)
continue
if msg.next_retry_at > now():
sleep(min(msg.next_retry_at - now(), 100ms))
continue
if msg.expires_at < now():
outbox.remove(msg.message_id)
emit_event("message_expired", msg.message_id)
continue
// Attempt delivery
msg.status = "sending"
msg.retry_count += 1
result = attempt_delivery(msg)
if result.success:
// Keep in outbox until ACK received
msg.status = "sent"
msg.sent_at = now()
else:
msg.status = "pending"
msg.next_retry_at = calculate_backoff(msg.retry_count)
if msg.retry_count >= MAX_RETRIES:
outbox.move_to_dead_letter(msg)
Inbox Queue (Recipient Side)
Inbox Structure:
┌─────────────────────────────────────────────────────────────┐
│ Inbox Entry │
├─────────────────────────────────────────────────────────────┤
│ message_id (16 bytes) │ Sender-assigned UUID │
│ sender (32 bytes) │ Sender wallet address │
│ decrypted_payload (variable) │ Plaintext message │
│ received_at (8 bytes) │ Local receipt time │
│ server_timestamp (8 bytes) │ Mesh storage time │
│ sequence_number (4 bytes) │ DR message counter │
│ session_id (32 bytes) │ Double Ratchet session │
│ ack_sent (1 byte) │ Whether ACK was sent │
└─────────────────────────────────────────────────────────────┘
Delivery Retry Logic
Exponential Backoff Strategy
When message delivery fails, Zentalk uses exponential backoff with jitter to prevent thundering herd problems.
Backoff Formula:
base_delay = 1000ms (1 second)
max_delay = 3600000ms (1 hour)
jitter_factor = 0.2 (20%)
delay = min(base_delay * 2^(retry_count - 1), max_delay)
jitter = delay * random(-jitter_factor, +jitter_factor)
next_retry = now() + delay + jitter
Retry Schedule
| Attempt | Base Delay | With Jitter Range |
|---|---|---|
| 1 | 1s | 0.8s - 1.2s |
| 2 | 2s | 1.6s - 2.4s |
| 3 | 4s | 3.2s - 4.8s |
| 4 | 8s | 6.4s - 9.6s |
| 5 | 16s | 12.8s - 19.2s |
| 6 | 32s | 25.6s - 38.4s |
| 7 | 64s | 51.2s - 76.8s |
| 8 | 128s | 102.4s - 153.6s |
| 9 | 256s | 204.8s - 307.2s |
| 10 | 512s | 409.6s - 614.4s |
| 11+ | 1 hour | 48m - 72m |
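The backoff formula can be sketched as follows (assuming `retryCount` counts completed attempts, so attempt 1 corresponds to `retryCount = 1`, matching the schedule table):

```typescript
const BASE_DELAY_MS = 1_000;      // 1 second
const MAX_DELAY_MS = 3_600_000;   // 1 hour
const JITTER_FACTOR = 0.2;        // ±20%

// Exponential backoff with symmetric jitter, capped at MAX_DELAY_MS.
function calculateBackoff(retryCount: number): number {
  const delay = Math.min(BASE_DELAY_MS * 2 ** (retryCount - 1), MAX_DELAY_MS);
  const jitter = delay * (Math.random() * 2 - 1) * JITTER_FACTOR;
  return delay + jitter;
}
```

The caller schedules the next attempt at `now() + calculateBackoff(msg.retryCount)`.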
Retry Configuration
| Parameter | Default | Range |
|---|---|---|
| Max retries | 15 | 5-50 |
| Base delay | 1s | 100ms-10s |
| Max delay | 1 hour | 1m-24h |
| Jitter factor | 0.2 | 0-0.5 |
Failure Classification
Different failure types trigger different retry behaviors:
| Failure Type | Retry | Backoff | Notes |
|---|---|---|---|
| Network timeout | Yes | Exponential | Temporary connectivity issue |
| Connection refused | Yes | Exponential | Node may be down |
| TLS handshake failed | Yes | Linear (5min) | Certificate issue |
| DHT lookup failed | Yes | Exponential | Try different nodes |
| Circuit build failed | Yes | Exponential | Rebuild circuit |
| Recipient key not found | Delayed | 1 hour | Recipient may not be registered |
| Invalid message format | No | N/A | Bug in sender, log error |
| Authentication failed | No | N/A | Session corruption, trigger reset |
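The classification table can be encoded as a small lookup (a sketch; the failure-type strings and the `RetryPolicy` shape are illustrative, not protocol constants):

```typescript
type RetryPolicy =
  | { retry: true; backoff: "exponential" | "linear"; delayMs?: number }
  | { retry: false };

// Map a failure type to its retry behavior, per the table above.
function classifyFailure(kind: string): RetryPolicy {
  switch (kind) {
    case "network_timeout":
    case "connection_refused":
    case "dht_lookup_failed":
    case "circuit_build_failed":
      return { retry: true, backoff: "exponential" };
    case "tls_handshake_failed":
      return { retry: true, backoff: "linear", delayMs: 5 * 60_000 };   // 5 min
    case "recipient_key_not_found":
      return { retry: true, backoff: "linear", delayMs: 60 * 60_000 };  // 1 hour
    case "invalid_message_format":
    case "authentication_failed":
    default:
      return { retry: false }; // permanent: log, or trigger session reset
  }
}
```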
Retry Algorithm Implementation
attempt_delivery(msg):
try:
// Build onion circuit
circuit = build_circuit()
if circuit.failed:
return { success: false, error: "circuit_build_failed" }
// Find storage nodes for recipient
storage_nodes = dht.find_nodes(
hash("inbox:" + msg.recipient),
replication_factor=3
)
if storage_nodes.empty:
return { success: false, error: "dht_lookup_failed" }
// Store on mesh with replication
stored_count = 0
for node in storage_nodes:
result = circuit.send(node, STORE_MESSAGE, msg.encrypted_payload)
if result.success:
stored_count += 1
// Require majority success
if stored_count >= 2:
return { success: true, stored_nodes: stored_count }
else:
return { success: false, error: "insufficient_replication" }
catch NetworkError as e:
return { success: false, error: "network_error", details: e }
catch TimeoutError as e:
return { success: false, error: "timeout", details: e }
Dead Letter Handling
Messages that exceed maximum retry attempts are moved to a dead letter queue for manual review or automatic expiration.
Dead Letter Queue Structure
Dead Letter Entry:
┌─────────────────────────────────────────────────────────────┐
│ original_message │ Full message data │
│ failure_reason (string) │ Last error message │
│ total_attempts (2 bytes) │ Number of delivery tries │
│ first_attempt_at (8 bytes) │ Initial send time │
│ last_attempt_at (8 bytes) │ Final retry time │
│ moved_to_dlq_at (8 bytes) │ When moved to DLQ │
│ expires_at (8 bytes) │ Auto-deletion time │
└─────────────────────────────────────────────────────────────┘
Dead Letter Policies
| Policy | Action | Trigger |
|---|---|---|
| Max retries exceeded | Move to DLQ | 15 attempts |
| Permanent failure | Move to DLQ | Auth failure, invalid format |
| User action | Move to DLQ | Manual cancel |
| TTL expired | Delete | Message expired before delivery |
Dead Letter Retention
| Category | Retention | Auto-action |
|---|---|---|
| Retriable failures | 7 days | Auto-retry once daily |
| Permanent failures | 30 days | Manual review required |
| Expired messages | 24 hours | Auto-delete |
Dead Letter Recovery
recover_from_dlq(message_id):
1. Fetch message from DLQ
2. Reset retry count to 0
3. Recalculate expiration (if not already expired)
4. Move back to outbox
5. Trigger immediate delivery attempt
// User can also:
- Manually delete from DLQ
- Export failed messages for debugging
- View failure history
Message Expiration (TTL)
TTL Configuration
Messages have configurable time-to-live values:
| Message Type | Default TTL | Max TTL | Min TTL |
|---|---|---|---|
| Text message | 30 days | 90 days | 1 hour |
| Media message | 7 days | 30 days | 1 hour |
| Read receipt | 24 hours | 7 days | 1 hour |
| Typing indicator | 30 seconds | 60 seconds | 10 seconds |
| Disappearing message | User-defined | 30 days | 5 seconds |
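TTL resolution against this table amounts to clamping a requested value into the allowed range, falling back to the default when none is given (a sketch with illustrative names; values in seconds, from the table above):

```typescript
// Default/min/max TTLs per message type, in seconds.
const TTL_BOUNDS = {
  text:    { def: 30 * 86_400, min: 3_600, max: 90 * 86_400 },
  media:   { def: 7 * 86_400,  min: 3_600, max: 30 * 86_400 },
  receipt: { def: 86_400,      min: 3_600, max: 7 * 86_400 },
  typing:  { def: 30,          min: 10,    max: 60 },
} as const;

type MessageType = keyof typeof TTL_BOUNDS;

// Clamp a requested TTL into [min, max]; use the default if unspecified.
function resolveTtl(type: MessageType, requestedSeconds?: number): number {
  const b = TTL_BOUNDS[type];
  if (requestedSeconds === undefined) return b.def;
  return Math.min(Math.max(requestedSeconds, b.min), b.max);
}
```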
TTL Enforcement Points
| Point | Action |
|---|---|
| Sender outbox | Delete if expired before send |
| Mesh storage | Nodes refuse storage if TTL < 1 hour remaining |
| Mesh retrieval | Nodes delete expired messages on access |
| Recipient inbox | Delete on expiration check |
TTL Wire Format
TTL Header (included in encrypted payload):
┌─────────────────────────────────────────────────────────────┐
│ Field │ Size │ Description │
├─────────────────────────────────────────────────────────────┤
│ created_at │ 8 bytes │ Message creation │
│ ttl_seconds │ 4 bytes │ Lifetime in secs │
│ ttl_type │ 1 byte │ absolute/relative│
└─────────────────────────────────────────────────────────────┘
TTL Types:
0x00 = absolute: expires at created_at + ttl_seconds
0x01 = relative: expires ttl_seconds after first read
Expiration Algorithm
is_expired(message):
if message.ttl_type == ABSOLUTE:
return now() > message.created_at + message.ttl_seconds
if message.ttl_type == RELATIVE:
if message.first_read_at == null:
return false // Not yet read
return now() > message.first_read_at + message.ttl_seconds
Deduplication Mechanism
Since Zentalk provides at-least-once delivery, clients must handle potential duplicates.
Deduplication Strategy
Deduplication is performed at the recipient using:
1. Message ID (UUID)
2. Sender address
3. Session ID
4. Double Ratchet message counter
Deduplication Cache
Dedup Cache Structure:
┌─────────────────────────────────────────────────────────────┐
│ Key: (sender || session_id || message_id) │
│ Value: { │
│ received_at: timestamp, │
│ message_counter: uint32, │
│ hash: SHA256(encrypted_payload) │
│ } │
│ TTL: 7 days │
└─────────────────────────────────────────────────────────────┘
Deduplication Algorithm
process_incoming_message(msg):
// Generate dedup key
dedup_key = hash(msg.sender || msg.session_id || msg.message_id)
// Check cache
existing = dedup_cache.get(dedup_key)
if existing != null:
// Duplicate detected
if existing.hash == hash(msg.encrypted_payload):
// Exact duplicate - safe to ignore
log("Duplicate message ignored", msg.message_id)
return DUPLICATE
else:
// Same ID but different content - potential attack
log_security_event("Message ID collision", msg)
return COLLISION_ERROR
// New message - decrypt and process
decrypted = decrypt(msg)
// Verify message counter (Double Ratchet provides this)
if decrypted.counter <= session.last_received_counter:
// Out of order or replay
if decrypted.counter in session.skipped_keys:
// Legitimate out-of-order message
process_out_of_order(decrypted)
else:
// Replay attack
log_security_event("Replay attack detected", msg)
return REPLAY_ERROR
// Valid new message
dedup_cache.set(dedup_key, {
received_at: now(),
message_counter: decrypted.counter,
hash: hash(msg.encrypted_payload)
})
return SUCCESS
Cache Size Management
| Parameter | Value |
|---|---|
| Max entries | 100,000 |
| Max memory | 10MB |
| Eviction policy | LRU with TTL |
| Persistence | IndexedDB |
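A minimal LRU-with-TTL cache along these lines can be built on a `Map`, which iterates in insertion order (a sketch, not the production implementation; the production cache also persists to IndexedDB):

```typescript
// Dedup cache: deleting and re-inserting on access keeps the Map in
// least-recently-used order; entries also expire after ttlMs.
class DedupCache<V> {
  private entries = new Map<string, { value: V; storedAt: number }>();
  constructor(private maxEntries: number, private ttlMs: number) {}

  get(key: string, now: number = Date.now()): V | undefined {
    const e = this.entries.get(key);
    if (!e) return undefined;
    if (now - e.storedAt > this.ttlMs) {
      this.entries.delete(key); // expired
      return undefined;
    }
    this.entries.delete(key);   // refresh LRU position
    this.entries.set(key, e);
    return e.value;
  }

  set(key: string, value: V, now: number = Date.now()): void {
    if (this.entries.size >= this.maxEntries && !this.entries.has(key)) {
      // Evict the least recently used entry (first in iteration order).
      const oldest = this.entries.keys().next().value;
      if (oldest !== undefined) this.entries.delete(oldest);
    }
    this.entries.set(key, { value, storedAt: now });
  }
}
```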
Out-of-Order Message Handling
Message Ordering Guarantees
| Scope | Guarantee |
|---|---|
| Same session, same sender | Causal order (via DR counters) |
| Same session, bidirectional | No ordering between directions |
| Different sessions | No ordering guarantee |
| Different devices | No ordering guarantee |
Reordering Buffer
When messages arrive out of order, they are buffered until predecessors arrive:
Reorder Buffer:
┌─────────────────────────────────────────────────────────────┐
│ Session: (sender, session_id) │
│ Expected counter: 42 │
│ Buffer: { │
│ 43: { message: M43, received_at: ... }, │
│ 45: { message: M45, received_at: ... }, │
│ 44: { message: M44, received_at: ... } │
│ } │
│ Buffer limit: 1000 messages │
│ Buffer timeout: 5 minutes │
└─────────────────────────────────────────────────────────────┘
Reordering Algorithm
handle_message(session, msg):
counter = msg.message_counter
if counter == session.expected_counter:
// In order - deliver immediately
deliver(msg)
session.expected_counter += 1
// Check buffer for subsequent messages
while session.buffer.has(session.expected_counter):
buffered = session.buffer.remove(session.expected_counter)
deliver(buffered)
session.expected_counter += 1
elif counter > session.expected_counter:
// Future message - buffer it
gap = counter - session.expected_counter
if gap > MAX_GAP (1000):
// Too large gap - likely session corruption
trigger_session_reset(session)
return
session.buffer.add(counter, msg)
// Set timeout for missing messages
schedule_timeout(session, counter, 5 minutes)
else:
// Past message - check skipped keys
if session.has_skipped_key(counter):
deliver_with_skipped_key(session, msg, counter)
else:
// Already processed or replay
log("Duplicate or replay", msg.message_id)
Gap Timeout Handling
on_gap_timeout(session, expected_counter):
// Missing messages didn't arrive in time
// Deliver buffered messages anyway to prevent UI stall
if session.buffer.has(expected_counter + 1):
log("Gap timeout, delivering subsequent messages")
// Mark the gap
session.gaps.add({
missing_counter: expected_counter,
detected_at: now()
})
// Skip and deliver
session.expected_counter = find_next_available(session.buffer)
flush_buffer(session)
// Request retransmission if possible
request_message_range(session.peer, expected_counter, expected_counter + 10)
Sequence Number vs Timestamp Ordering
| Method | Pros | Cons | Use in Zentalk |
|---|---|---|---|
| Sequence numbers | Reliable, gap detection | Sender-specific | Primary |
| Timestamps | Cross-sender ordering | Clock skew, spoofing | Display hint |
| Vector clocks | Causal ordering | Complex, overhead | Not used |
Zentalk approach:
- Use Double Ratchet message counters for ordering (sequence numbers)
- Use timestamps for UI display order
- Never trust timestamps for security decisions
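This policy can be shown in miniature: counters decide order even when sender timestamps disagree, for example under clock skew (illustrative types; not a protocol API):

```typescript
interface Delivered {
  counter: number;          // Double Ratchet message counter
  senderTimestamp: number;  // display hint only — may be skewed or spoofed
  body: string;
}

// Order by counter; the timestamp never overrides it.
function displayOrder(msgs: Delivered[]): Delivered[] {
  return [...msgs].sort((a, b) => a.counter - b.counter);
}
```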
Acknowledgment Protocol (ACKs)
ACK Types
| Type | Code | Purpose |
|---|---|---|
| DELIVERY_ACK | 0x01 | Message stored on mesh |
| RECEIPT_ACK | 0x02 | Recipient retrieved message |
| READ_ACK | 0x03 | Recipient read message (optional) |
ACK Message Format
ACK Structure:
┌─────────────────────────────────────────────────────────────┐
│ ack_type (1 byte) │ Type of acknowledgment │
│ message_id (16 bytes) │ Original message UUID │
│ sender (32 bytes) │ Original sender address │
│ timestamp (8 bytes) │ ACK generation time │
│ signature (64 bytes) │ Ed25519 signature │
└─────────────────────────────────────────────────────────────┘
Total: 121 bytes
ACK Flow
Sender Mesh DHT Recipient
│ │ │
│──── Message ─────────────►│ │
│◄─── DELIVERY_ACK ─────────│ │
│ │ │
│ │◄──── Poll/Notify ─────────│
│ │───── Message ────────────►│
│ │ │
│ │◄──── RECEIPT_ACK ─────────│
│◄─── RECEIPT_ACK ──────────│ │
│ │ │
│ │ (User reads message) │
│ │◄──── READ_ACK ────────────│
│◄─── READ_ACK ─────────────│ │
│ │ │
ACK Processing (Sender)
on_ack_received(ack):
msg = outbox.find(ack.message_id)
if msg == null:
return // Already removed or unknown
switch ack.ack_type:
case DELIVERY_ACK:
// Message stored on mesh
msg.delivery_confirmed = true
msg.delivered_at = ack.timestamp
// Keep in outbox until RECEIPT_ACK
case RECEIPT_ACK:
// Recipient has the message
outbox.remove(msg.message_id)
emit_event("message_delivered", msg.message_id)
case READ_ACK:
// Recipient read the message
emit_event("message_read", msg.message_id, ack.timestamp)
ACK Reliability
ACKs themselves may be lost. Handling strategies:
| Lost ACK | Consequence | Mitigation |
|---|---|---|
| DELIVERY_ACK lost | Sender retries unnecessarily | Mesh deduplicates |
| RECEIPT_ACK lost | Sender keeps in outbox longer | Periodic re-poll |
| READ_ACK lost | No read receipt shown | Acceptable (optional feature) |
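For concreteness, the fixed 121-byte ACK layout described earlier can be serialized with a `DataView`. This is a sketch: the field order and widths come from the structure diagram, but big-endian byte order is an assumption not stated in the spec:

```typescript
// Serialize an ACK into its 121-byte wire layout:
// type (1) | message_id (16) | sender (32) | timestamp (8) | signature (64)
function encodeAck(
  ackType: number,          // 0x01 DELIVERY_ACK | 0x02 RECEIPT_ACK | 0x03 READ_ACK
  messageId: Uint8Array,    // 16 bytes (UUID)
  sender: Uint8Array,       // 32 bytes (wallet address)
  timestampMs: bigint,      // 8 bytes
  signature: Uint8Array,    // 64 bytes (Ed25519)
): Uint8Array {
  if (messageId.length !== 16 || sender.length !== 32 || signature.length !== 64)
    throw new Error("bad field length");
  const buf = new Uint8Array(121);
  const view = new DataView(buf.buffer);
  buf[0] = ackType;
  buf.set(messageId, 1);
  buf.set(sender, 17);
  view.setBigUint64(49, timestampMs); // big-endian (assumed)
  buf.set(signature, 57);
  return buf;
}
```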
ACK Aggregation
To reduce overhead, ACKs can be batched:
Batch ACK Structure:
┌─────────────────────────────────────────────────────────────┐
│ ack_type (1 byte) │ Type (same for all) │
│ count (2 bytes) │ Number of ACKs │
│ For each ACK: │
│ message_id (16 bytes) │ Original message UUID │
│ timestamp (8 bytes) │ Batch timestamp │
│ signature (64 bytes) │ Ed25519 signature │
└─────────────────────────────────────────────────────────────┘
Offline Message Storage on Mesh
Storage Architecture
When recipients are offline, messages are stored on the mesh DHT network.
Storage Location:
key = hash("inbox:" || recipient_wallet_address || timestamp_bucket)
Timestamp Bucketing:
bucket = floor(timestamp / 3600) // 1-hour buckets
This prevents timing analysis of message patterns.
Replication Strategy
| Parameter | Value |
|---|---|
| Replication factor | 3 nodes |
| Consistency model | Eventual |
| Read quorum | 1 |
| Write quorum | 2 |
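The hour-bucketed inbox key derivation above can be sketched as follows (the `hash` here is a stand-in for the network's real DHT key hash, and the key format is illustrative):

```typescript
// Placeholder for the DHT key hash — illustration only.
const hash = (s: string): string => s;

// Inbox keys are bucketed by hour so storage keys don't reveal
// exact send times (timestamps in seconds).
function inboxKey(recipient: string, unixSeconds: number): string {
  const bucket = Math.floor(unixSeconds / 3600); // 1-hour buckets
  return hash(`inbox:${recipient}:${bucket}`);
}
```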
Storage Node Selection
select_storage_nodes(recipient):
// Use DHT to find nodes close to recipient's inbox key
inbox_key = hash("inbox:" || recipient)
// Find k closest nodes
nodes = dht.find_closest(inbox_key, k=5)
// Select top 3 by reliability score
scored = nodes.map(n => {
uptime: n.uptime_percentage,
bandwidth: n.available_bandwidth,
latency: n.average_latency,
score: 0.4 * uptime + 0.3 * bandwidth + 0.3 * (1/latency)
})
return scored.sort_by(score).take(3)
Message Retrieval
retrieve_messages(my_address):
// Calculate recent inbox keys (last 30 days)
buckets = []
for day in range(0, 30):
for hour in range(0, 24):
bucket = floor((now() - day*86400 - hour*3600) / 3600)
buckets.append(hash("inbox:" || my_address || bucket))
// Query DHT for each bucket
messages = []
for bucket_key in buckets:
nodes = dht.find_closest(bucket_key, k=3)
for node in nodes:
result = node.get(bucket_key)
if result.success:
messages.extend(result.messages)
break // Got from one node, move to next bucket
// Deduplicate and sort
return deduplicate(messages).sort_by(timestamp)
Storage Quotas
| Quota | Limit |
|---|---|
| Per-recipient storage | 100MB |
| Max messages per recipient | 10,000 |
| Max message size | 256KB |
| Storage duration | 30 days |
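A storage node's admission check against these quotas might look like this (a sketch with illustrative names; real nodes also enforce the minimum-remaining-TTL rule from the TTL section):

```typescript
const MAX_MESSAGE_BYTES = 256 * 1024;              // 256KB per message
const MAX_RECIPIENT_BYTES = 100 * 1024 * 1024;     // 100MB per recipient
const MAX_MESSAGES_PER_RECIPIENT = 10_000;

// Accept a message only if all per-recipient quotas still hold.
function canStore(payloadBytes: number, usedBytes: number, usedCount: number): boolean {
  return payloadBytes <= MAX_MESSAGE_BYTES
      && usedBytes + payloadBytes <= MAX_RECIPIENT_BYTES
      && usedCount < MAX_MESSAGES_PER_RECIPIENT;
}
```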
Garbage Collection
Storage nodes periodically clean expired messages:
gc_interval: 1 hour
gc_algorithm:
1. Scan all stored messages
2. Delete if:
- TTL expired
- Storage duration > 30 days
- ACK received (confirmed delivery)
3. Report storage metrics
Message Routing through Onion Network
Circuit Selection for Messages
select_circuit(destination):
// Check for existing circuit to destination region
existing = circuit_pool.find_for_destination(destination)
if existing && existing.is_healthy():
return existing
// Build new circuit
guard = select_guard()
middle = select_middle(exclude: [guard.family])
exit = select_exit(exclude: [guard.family, middle.family])
circuit = build_circuit(guard, middle, exit)
circuit_pool.add(circuit)
return circuit
Message Encapsulation
send_message_through_circuit(circuit, msg):
// Wrap message for exit node
exit_cell = {
command: RELAY_DATA,
destination: msg.storage_node,
payload: msg.encrypted_payload
}
layer3 = encrypt(circuit.exit_key, exit_cell)
// Wrap for middle node
middle_cell = {
command: RELAY,
next_hop: circuit.exit_node,
payload: layer3
}
layer2 = encrypt(circuit.middle_key, middle_cell)
// Wrap for guard node
guard_cell = {
command: RELAY,
next_hop: circuit.middle_node,
payload: layer2
}
layer1 = encrypt(circuit.guard_key, guard_cell)
// Send to guard
circuit.guard_connection.send(layer1)
Circuit Lifecycle for Messaging
| Phase | Duration | Action |
|---|---|---|
| Build | 500ms-2s | Create 3-hop circuit |
| Active | Up to 10min | Send/receive messages |
| Idle timeout | 60s | Destroy if unused |
| Max lifetime | 10min | Force rotation |
| Failure | Immediate | Rebuild and retry |
Performance Characteristics
Latency Breakdown
| Operation | Typical | Worst Case |
|---|---|---|
| Local encryption | 1-5ms | 20ms |
| Circuit build | 500ms | 3s |
| 3-hop relay routing | 100-300ms | 1s |
| DHT lookup | 200-500ms | 2s |
| Mesh storage | 50-100ms | 500ms |
| Total send latency | 0.85s-1.4s | 7s |
Throughput Limits
| Constraint | Limit |
|---|---|
| Messages per circuit | 100/minute |
| Circuits per client | 10 concurrent |
| Max message rate | 1000/minute |
| Bandwidth per client | 1MB/s |
Scalability Characteristics
DHT Lookup: O(log n) where n = network size
Storage Lookup: O(1) with k replicas
Message Routing: O(3) fixed hops
For 1M node network:
- DHT lookup: ~20 hops max
- Effective lookup time: ~1s
- Storage availability: 99.9%+ with k=3
Reliability Metrics
| Metric | Target | Mechanism |
|---|---|---|
| Message delivery rate | 99.9% | Retries, replication |
| Delivery latency P50 | 1s | Circuit pooling |
| Delivery latency P99 | 10s | Aggressive retry |
| Duplicate rate | < 0.1% | Client deduplication |
Error Handling
Error Codes
| Code | Name | Description | Retry |
|---|---|---|---|
| E001 | NETWORK_UNREACHABLE | No network connectivity | Yes |
| E002 | CIRCUIT_BUILD_FAILED | Cannot establish onion circuit | Yes |
| E003 | DHT_LOOKUP_FAILED | Cannot find storage nodes | Yes |
| E004 | STORAGE_FAILED | Mesh storage rejected message | Yes |
| E005 | RECIPIENT_NOT_FOUND | Recipient not registered | Delayed |
| E006 | MESSAGE_TOO_LARGE | Exceeds size limit | No |
| E007 | RATE_LIMITED | Too many messages | Delayed |
| E008 | ENCRYPTION_FAILED | Cryptographic error | No |
| E009 | SESSION_INVALID | DR session corrupted | Reset session |
| E010 | TTL_EXPIRED | Message expired | No |
Error Recovery Flows
on_delivery_error(msg, error):
switch error.code:
case E001, E002, E003, E004:
// Transient error - schedule retry
schedule_retry(msg)
case E005:
// Recipient not registered
// May be new user, delay retry
schedule_delayed_retry(msg, 1 hour)
notify_user("Recipient may not have Zentalk yet")
case E006:
// Message too large - cannot recover
move_to_dlq(msg, "message_too_large")
notify_user("Message too large to send")
case E007:
// Rate limited - back off
schedule_delayed_retry(msg, error.retry_after)
case E008, E009:
// Crypto error - session issue
trigger_session_reset(msg.recipient)
schedule_retry(msg)
case E010:
// Expired - remove
outbox.remove(msg)
notify_user("Message expired before delivery")
Implementation Examples
Sending a Message (TypeScript)
interface Message {
id: string;
recipient: string;
content: Uint8Array;
ttl: number;
priority: 'low' | 'normal' | 'high';
}
async function sendMessage(msg: Message): Promise<void> {
// 1. Encrypt with Double Ratchet
const session = await getOrCreateSession(msg.recipient);
const encrypted = await session.encrypt(msg.content);
// 2. Add to outbox
await outbox.add({
messageId: msg.id,
recipient: msg.recipient,
encryptedPayload: encrypted,
priority: msg.priority,
createdAt: Date.now(),
expiresAt: Date.now() + msg.ttl * 1000,
retryCount: 0,
status: 'pending'
});
// 3. Trigger delivery (non-blocking)
deliveryWorker.notify();
}
Processing Incoming Messages (TypeScript)
interface IncomingMessage {
messageId: string;
sender: string;
sessionId: string;
encryptedPayload: Uint8Array;
serverTimestamp: number;
}
async function processIncoming(msg: IncomingMessage): Promise<void> {
// 1. Check deduplication cache
const dedupKey = hash(msg.sender, msg.sessionId, msg.messageId);
if (dedupCache.has(dedupKey)) {
console.log('Duplicate message ignored');
return;
}
// 2. Decrypt
const session = await getSession(msg.sender, msg.sessionId);
const decrypted = await session.decrypt(msg.encryptedPayload);
// 3. Handle ordering
if (decrypted.counter > session.expectedCounter) {
// Buffer for reordering
reorderBuffer.add(session.id, decrypted);
return;
}
// 4. Deliver to UI
await inbox.add(decrypted);
eventEmitter.emit('newMessage', decrypted);
// 5. Update dedup cache
dedupCache.set(dedupKey, {
receivedAt: Date.now(),
counter: decrypted.counter
});
// 6. Send receipt ACK
await sendAck(msg.sender, msg.messageId, 'RECEIPT_ACK');
}
Related Documentation
- Protocol Specification - E2EE and Double Ratchet details
- Wire Protocol - Binary message formats
- DHT and Kademlia - Mesh storage architecture
- Onion Routing - Anonymous routing
- Error Handling - Error codes and recovery