Scanning Protocol
Comprehensive documentation of the scanning mechanism for detecting stealth address transactions in Zentalk.
Why Scanning Is Necessary
Stealth addresses provide strong unlinkability guarantees, but they create a fundamental challenge: how does a recipient detect messages intended for them when each message uses a unique, unpredictable address?
The Detection Problem
| Traditional Addressing | Stealth Addressing |
|---|---|
| Recipient publishes fixed address | Recipient publishes meta-address (V, S) |
| Sender uses known address | Sender derives unique one-time address |
| Recipient monitors single address | Recipient must scan ALL announcements |
| O(1) detection | O(n) detection per announcement |
The core challenge: Without scanning, recipients have no way to know which of the millions of network announcements are intended for them.
Mathematical Basis
For each announcement (R, P) on the network, the recipient must:
1. Compute shared secret: sharedPoint = v * R
2. Derive expected key: P' = S + H(sharedPoint || R) * G
3. Compare: if P' == P, this announcement is for us
This ECDH operation must be performed for EVERY announcement to determine ownership.
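The three steps above can be exercised numerically. The sketch below uses modular exponentiation in a toy prime-modulus group as a stand-in for elliptic-curve points: exponentiation plays the role of scalar multiplication and modular multiplication the role of point addition, so `S + h*G` becomes `S * G^h mod P`. All constants are illustrative, not cryptographically sound.

```python
# Toy discrete-log group demo of stealth-address detection.
import hashlib
import secrets

P = 2**127 - 1          # toy prime modulus (NOT a real curve)
G = 3                   # toy generator

def h_scalar(shared_point: int, R: int) -> int:
    """H(sharedPoint || R) interpreted as a scalar."""
    data = shared_point.to_bytes(16, "big") + R.to_bytes(16, "big")
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

# Recipient meta-address: view keypair (v, V) and spend keypair (s, S)
v = secrets.randbelow(P - 2) + 1
s = secrets.randbelow(P - 2) + 1
V = pow(G, v, P)
S = pow(G, s, P)

# Sender: ephemeral r, announce (R, P_stealth) where P_stealth = S + h*G
r = secrets.randbelow(P - 2) + 1
R = pow(G, r, P)
shared_sender = pow(V, r, P)                     # r * V
P_stealth = (S * pow(G, h_scalar(shared_sender, R), P)) % P

# Recipient scanning: only v and S are needed to recognize the announcement
shared_recipient = pow(R, v, P)                  # v * R == r * V (ECDH symmetry)
expected = (S * pow(G, h_scalar(shared_recipient, R), P)) % P
is_ours = expected == P_stealth
```

The symmetry `v*R == r*V` is what lets the recipient reproduce the sender's shared secret without knowing `r`.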
Full Scanning Algorithm
Algorithm Overview
The scanning algorithm iterates through network announcements, attempting to derive each potential stealth address and comparing against the announced destination.
Core Data Structures
STRUCT ScannerState:
view_private_key: bytes[32] // Recipient's view key (v)
spend_public_key: Point // Recipient's spend key (S)
last_scanned_block: uint64 // Resume point for incremental scanning
last_scanned_timestamp: uint64 // Alternative resume point
detected_payments: Map<bytes32, DetectedPayment>
bloom_filter: BloomFilter // Dedup filter: announcements already processed
STRUCT DetectedPayment:
ephemeral_public_key: Point // R from announcement
stealth_public_key: Point // P (destination)
shared_secret: bytes[32] // For spending key derivation
announcement_id: bytes[32] // Unique identifier
timestamp: uint64 // Detection time
payload_hash: bytes[32] // Hash of encrypted payload
status: enum { pending, processed, spent }
STRUCT Announcement:
ephemeral_key: Point // R
stealth_address: Point // P
encrypted_payload: bytes // Message/transaction data
timestamp: uint64 // Network timestamp
block_height: uint64 // If blockchain-backed
Main Scanning Loop
FUNCTION scan_announcements(state: ScannerState, announcements: List<Announcement>):
v = state.view_private_key
S = state.spend_public_key
detected = []
FOR EACH announcement IN announcements:
// Skip announcements already processed (Bloom dedup; a rare false positive may skip a new announcement)
IF state.bloom_filter.possibly_contains(announcement.id):
CONTINUE
R = announcement.ephemeral_key
P = announcement.stealth_address
// Core ECDH computation
shared_point = scalar_multiply(R, v)
shared_secret = SHA256(encode_point(shared_point) || encode_point(R))
// Derive expected stealth public key
shared_secret_scalar = reduce_to_scalar(shared_secret)
expected_P = point_add(S, scalar_multiply(G, shared_secret_scalar))
// Check for match
IF point_equals(expected_P, P):
payment = DetectedPayment {
ephemeral_public_key: R,
stealth_public_key: P,
shared_secret: shared_secret,
announcement_id: announcement.id,
timestamp: current_timestamp(),
payload_hash: SHA256(announcement.encrypted_payload),
status: pending
}
detected.append(payment)
state.detected_payments.put(announcement.id, payment)
// Mark announcement as processed so future passes skip it
state.bloom_filter.add(announcement.id)
// Update scan position
IF announcements.length > 0:
state.last_scanned_block = announcements.last().block_height
state.last_scanned_timestamp = announcements.last().timestamp
RETURN detected
Spending Key Derivation
Once a payment is detected, the recipient derives the spending private key:
FUNCTION derive_spending_key(spend_private_key: bytes[32], shared_secret: bytes[32]):
// spend_private_key = s
// shared_secret from detection
s = interpret_as_scalar(spend_private_key)
h = reduce_to_scalar(shared_secret)
// p = s + h (mod curve_order)
spending_key = scalar_add(s, h)
RETURN spending_key
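The additive relation `p = s + h` has a direct public-side counterpart: `p*G = s*G + h*G`, i.e. the derived public key must equal the announced stealth key. A numeric check in the same toy prime-modulus group (exponentiation standing in for scalar multiplication; illustrative constants only):

```python
# Public-side check of the spending-key relation p = s + h.
import hashlib
import secrets

P = 2**127 - 1
G = 3

s = secrets.randbelow(P - 2) + 1                 # spend private key
S = pow(G, s, P)                                 # spend public key S = s*G
shared_secret = hashlib.sha256(b"example shared secret").digest()
h = int.from_bytes(shared_secret, "big")         # reduce_to_scalar analogue

spending_key = s + h                             # scalar_add(s, h)
stealth_public = (S * pow(G, h, P)) % P          # S + h*G

# verify_spending_key: derived public key must equal the stealth public key
spend_matches = pow(G, spending_key, P) == stealth_public
```

On a real curve the sum would be reduced modulo the curve order; here the identity `G^(s+h) = G^s * G^h` holds without reduction.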
// Verification
FUNCTION verify_spending_key(spending_key: bytes[32], expected_public: Point):
derived_public = scalar_multiply(G, spending_key)
RETURN point_equals(derived_public, expected_public)
State Persistence
FUNCTION save_scanner_state(state: ScannerState, storage: SecureStorage):
serialized = {
"version": 1,
"last_scanned_block": state.last_scanned_block,
"last_scanned_timestamp": state.last_scanned_timestamp,
"detected_payments": serialize_payments(state.detected_payments),
"bloom_filter": state.bloom_filter.serialize()
}
// Encrypt with storage key
encrypted = encrypt_with_storage_key(serialized, storage.key)
storage.write("scanner_state", encrypted)
FUNCTION load_scanner_state(storage: SecureStorage, view_key, spend_pub):
encrypted = storage.read("scanner_state")
IF encrypted IS NULL:
RETURN new_scanner_state(view_key, spend_pub)
serialized = decrypt_with_storage_key(encrypted, storage.key)
RETURN ScannerState {
view_private_key: view_key,
spend_public_key: spend_pub,
last_scanned_block: serialized.last_scanned_block,
last_scanned_timestamp: serialized.last_scanned_timestamp,
detected_payments: deserialize_payments(serialized.detected_payments),
bloom_filter: BloomFilter.deserialize(serialized.bloom_filter)
}
Bloom Filter Optimization
Why Bloom Filters
Scanning every announcement requires expensive elliptic curve operations. Bloom filters provide a probabilistic “definitely not” test to skip announcements quickly.
| Operation | Without Bloom | With Bloom |
|---|---|---|
| Per-announcement cost | 1 ECDH (~0.3ms) | 7 hash lookups (~0.001ms) |
| False negatives | 0% | 0% (guaranteed) |
| False positives | 0% | ~1% (configurable) |
| Memory overhead | None | ~10 bits per element |
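The figures in the table follow from the standard Bloom-filter formulas. The sketch below derives them and pairs them with a minimal filter whose hash functions use deterministic seeds, as in the pseudocode further down (HMAC-SHA256 keyed by a per-index seed):

```python
import hashlib
import hmac
import math

def bloom_params(n: int, fp_rate: float) -> tuple[int, int]:
    """Return (num_bits, num_hashes) for n elements at the target FP rate."""
    bits_per_element = -math.log(fp_rate) / (math.log(2) ** 2)
    num_bits = math.ceil(n * bits_per_element)
    num_hashes = math.ceil((num_bits / n) * math.log(2))
    return num_bits, num_hashes

class Bloom:
    def __init__(self, n: int, fp_rate: float):
        self.m, self.k = bloom_params(n, fp_rate)
        # Deterministic per-index seeds, as in the pseudocode below
        self.seeds = [hashlib.sha256(b"bloom_seed_" + i.to_bytes(4, "big")).digest()
                      for i in range(self.k)]
        self.bits = bytearray((self.m + 7) // 8)

    def _indexes(self, element: bytes):
        for seed in self.seeds:
            h = int.from_bytes(hmac.new(seed, element, hashlib.sha256).digest(), "big")
            yield h % self.m

    def add(self, element: bytes) -> None:
        for i in self._indexes(element):
            self.bits[i // 8] |= 1 << (i % 8)

    def possibly_contains(self, element: bytes) -> bool:
        return all(self.bits[i // 8] & (1 << (i % 8)) for i in self._indexes(element))
```

For 1M elements at a 1% target, `bloom_params` yields roughly 9.6 bits per element and 7 hash functions, matching the defaults below.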
Bloom Filter Parameters
STRUCT BloomFilterConfig:
expected_elements: uint64 // Expected announcements to track
false_positive_rate: float // Target FP rate (e.g., 0.01)
// Derived parameters
bits_per_element: float // -ln(fp_rate) / (ln(2)^2)
num_bits: uint64 // expected_elements * bits_per_element
num_hashes: uint32 // (num_bits / expected_elements) * ln(2)
// Zentalk defaults
DEFAULT_BLOOM_CONFIG = BloomFilterConfig {
expected_elements: 1_000_000,
false_positive_rate: 0.01,
// Derived: ~9.6 bits per element, 7 hash functions
// Total size: ~1.2 MB for 1M elements
}
Bloom Filter Implementation
STRUCT BloomFilter:
bits: BitArray
num_hashes: uint32
seeds: List<bytes[32]>
FUNCTION bloom_create(config: BloomFilterConfig):
num_bits = ceil(config.expected_elements * config.bits_per_element)
num_hashes = ceil((num_bits / config.expected_elements) * LN2)
// Generate deterministic seeds for hash functions
seeds = []
FOR i IN 0..num_hashes:
seeds.append(SHA256("bloom_seed_" || i.to_bytes()))
RETURN BloomFilter {
bits: BitArray(num_bits),
num_hashes: num_hashes,
seeds: seeds
}
FUNCTION bloom_add(filter: BloomFilter, element: bytes):
FOR i IN 0..filter.num_hashes:
hash_value = HMAC_SHA256(filter.seeds[i], element)
bit_index = hash_value % filter.bits.length
filter.bits.set(bit_index, true)
FUNCTION bloom_possibly_contains(filter: BloomFilter, element: bytes):
FOR i IN 0..filter.num_hashes:
hash_value = HMAC_SHA256(filter.seeds[i], element)
bit_index = hash_value % filter.bits.length
IF NOT filter.bits.get(bit_index):
RETURN false // Definitely not in set
RETURN true // Possibly in set
FUNCTION bloom_definitely_not_contains(filter: BloomFilter, element: bytes):
RETURN NOT bloom_possibly_contains(filter, element)
Multi-Stage Filtering
For high-volume scanning, use cascaded filters:
STRUCT CascadedBloomFilter:
stage1: BloomFilter // Coarse filter (high FP rate, small)
stage2: BloomFilter // Fine filter (low FP rate, larger)
stage3: HashSet // Exact set for confirmed matches
FUNCTION cascaded_check(filter: CascadedBloomFilter, element: bytes):
// Stage 1: Quick rejection (99% of non-matches)
IF filter.stage1.definitely_not_contains(element):
RETURN DEFINITELY_NOT
// Stage 2: Secondary check (catches 99% of remaining FPs)
IF filter.stage2.definitely_not_contains(element):
RETURN DEFINITELY_NOT
// Stage 3: Exact check for known matches
IF filter.stage3.contains(element):
RETURN DEFINITELY_YES
RETURN POSSIBLY_YES // Needs full ECDH verification
Network Announcement Format
Announcement Structure
STRUCT NetworkAnnouncement:
// Header (unencrypted, for routing)
version: uint8 // Protocol version
announcement_type: uint8 // 0x01 = stealth payment
timestamp: uint64 // Unix timestamp (seconds)
ttl: uint32 // Time-to-live in seconds
// Stealth address components
ephemeral_public_key: bytes[33] // Compressed point R
stealth_public_key: bytes[33] // Compressed point P
// Encrypted payload
payload_nonce: bytes[12] // AES-GCM nonce
encrypted_payload: bytes // Variable length
payload_tag: bytes[16] // AES-GCM auth tag
// Metadata
announcement_id: bytes[32] // SHA256(full announcement)
// Optional fields (version 2+)
view_tag: bytes[1] // First byte of shared secret (optimization)
chain_id: uint32 // For multi-chain support
// Announcement types
ENUM AnnouncementType:
STEALTH_PAYMENT = 0x01
KEY_BUNDLE_UPDATE = 0x02
GROUP_ANNOUNCEMENT = 0x03
RELAY_ADVERTISEMENT = 0x04
Wire Format
Network Announcement Wire Format (Version 1):
Offset Size Field
------ ---- -----
0 1 version (0x01)
1 1 announcement_type
2 8 timestamp (big-endian)
10 4 ttl (big-endian)
14 33 ephemeral_public_key (compressed)
47 33 stealth_public_key (compressed)
80 12 payload_nonce
92 var encrypted_payload
var 16 payload_tag
var 32 announcement_id
Total: 140 + payload_length bytes minimum
View Tag Optimization
View tags let the scanner reject roughly 99.6% of non-matching announcements after a single byte comparison, skipping the remaining key derivation:
FUNCTION generate_view_tag(shared_point: Point, R: Point):
// View tag is first byte of shared secret
shared_secret = SHA256(encode_point(shared_point) || encode_point(R))
RETURN shared_secret[0]
// Sender includes view tag in announcement
FUNCTION create_announcement_with_view_tag(recipient_meta_address, payload):
V = recipient_meta_address.view_public_key
S = recipient_meta_address.spend_public_key
// Generate ephemeral key
r = secure_random(32)
R = scalar_multiply(G, r)
// Compute shared secret
shared_point = scalar_multiply(V, r)
shared_secret = SHA256(encode_point(shared_point) || encode_point(R))
// View tag for quick filtering
view_tag = shared_secret[0]
// Derive stealth address
P = point_add(S, scalar_multiply(G, reduce_to_scalar(shared_secret)))
RETURN Announcement {
ephemeral_public_key: R,
stealth_public_key: P,
view_tag: view_tag,
// ... encrypt payload
}
// Scanner uses view tag for quick rejection
FUNCTION scan_with_view_tag(state: ScannerState, announcement: Announcement):
R = announcement.ephemeral_key
// Compute view tag candidate
shared_point = scalar_multiply(R, state.view_private_key)
expected_view_tag = SHA256(encode_point(shared_point) || encode_point(R))[0]
// Quick rejection (99.6% of non-matches)
IF expected_view_tag != announcement.view_tag:
RETURN null
// Full verification only for view tag matches
// ... continue with full ECDH
Scanning Performance with View Tags
| Announcements/day | Without View Tag | With View Tag |
|---|---|---|
| 1,000 | 0.3 seconds | 0.0042 seconds |
| 100,000 | 30 seconds | 0.42 seconds |
| 1,000,000 | 5 minutes | 4.2 seconds |
| 10,000,000 | 50 minutes | 42 seconds |
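The rejection rate behind these figures is simple arithmetic: a 1-byte tag matches a non-intended recipient with probability 1/256, so about 99.6% of non-matching announcements are filtered before the full derivation. A sketch of that calculation:

```python
# Expected filtering behavior of a 1-byte view tag.
TAG_BITS = 8
reject_rate = 1 - 2 ** -TAG_BITS              # fraction of non-matches rejected

def expected_full_checks(announcements: int) -> int:
    """Expected number of announcements surviving the tag filter."""
    return round(announcements * 2 ** -TAG_BITS)
```

For 1,000,000 announcements, only about 3,906 survive to full verification.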
Announcement Dissemination Mechanism
DHT-Based Announcement Storage
Announcements are distributed through the Kademlia DHT using a time-bucketed approach:
FUNCTION store_announcement(announcement: Announcement, dht: KademliaNode):
// Time bucket for efficient retrieval
time_bucket = announcement.timestamp / BUCKET_INTERVAL // e.g., 1 hour buckets
// DHT key combines announcement type and time bucket
dht_key = SHA256("announcement:" || announcement.type || ":" || time_bucket)
// Store in DHT with TTL
dht.store(dht_key, announcement, ttl=announcement.ttl)
// Also store by recipient hint (if available)
IF announcement.has_recipient_hint:
hint_key = SHA256("hint:" || announcement.recipient_hint)
dht.store(hint_key, announcement.id, ttl=announcement.ttl)
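The time-bucketed DHT keys above can be computed concretely. A sketch assuming 1-hour buckets and the `"announcement:<type>:<bucket>"` preimage shown in the pseudocode (the exact string encoding is an assumption):

```python
import hashlib

BUCKET_INTERVAL = 3600          # 1-hour buckets, per the pseudocode comment
STEALTH_PAYMENT = 0x01

def announcement_dht_key(timestamp: int) -> bytes:
    """DHT key for the bucket containing the given timestamp."""
    bucket = timestamp // BUCKET_INTERVAL
    preimage = f"announcement:{STEALTH_PAYMENT}:{bucket}".encode()
    return hashlib.sha256(preimage).digest()

def buckets_to_query(since: int, now: int) -> list[int]:
    """Inclusive range of buckets a scanner must fetch to catch up."""
    return list(range(since // BUCKET_INTERVAL, now // BUCKET_INTERVAL + 1))
```

All announcements within the same hour hash to the same key, so catch-up scanning is one DHT lookup per hour of downtime.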
FUNCTION retrieve_announcements(scanner: Scanner, dht: KademliaNode, since: uint64):
announcements = []
// Calculate time buckets to query
start_bucket = since / BUCKET_INTERVAL
end_bucket = current_time() / BUCKET_INTERVAL
FOR bucket IN start_bucket..end_bucket:
dht_key = SHA256("announcement:" || STEALTH_PAYMENT || ":" || bucket)
bucket_announcements = dht.find_value(dht_key)
announcements.extend(bucket_announcements)
// Remove duplicates and sort by timestamp
RETURN deduplicate_and_sort(announcements)
Gossip Protocol for Real-Time Propagation
STRUCT GossipMessage:
announcement: Announcement
hops_remaining: uint8 // TTL in hops
seen_by: BloomFilter // Nodes that have seen this
FUNCTION propagate_announcement(node: NetworkNode, announcement: Announcement):
gossip = GossipMessage {
announcement: announcement,
hops_remaining: MAX_GOSSIP_HOPS, // e.g., 6
seen_by: new_bloom_filter(1000)
}
// Mark self as seen
gossip.seen_by.add(node.id)
// Forward to random subset of peers
peers = node.select_gossip_peers(GOSSIP_FANOUT) // e.g., 4 peers
FOR EACH peer IN peers:
IF NOT gossip.seen_by.possibly_contains(peer.id):
peer.send(gossip)
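A quick bound on how far these parameters carry a message: with fanout f and h hops, at most f^h nodes are newly reached at the last hop, and the geometric sum bounds total contacts (overlap is what the seen_by filter prunes in practice). Using the example values from the comments:

```python
# Back-of-envelope gossip reach for the example parameters.
GOSSIP_FANOUT = 4        # example value from the pseudocode comment
MAX_GOSSIP_HOPS = 6      # example value from the pseudocode comment

final_hop_reach = GOSSIP_FANOUT ** MAX_GOSSIP_HOPS
total_contacted = sum(GOSSIP_FANOUT ** h for h in range(1, MAX_GOSSIP_HOPS + 1))
```

With fanout 4 and 6 hops, up to 4096 nodes are reached at the final hop and at most 5460 sends occur overall, which is why small fanouts suffice for network-wide propagation.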
FUNCTION receive_gossip(node: NetworkNode, gossip: GossipMessage):
// Check if already seen
IF node.announcement_cache.contains(gossip.announcement.id):
RETURN
// Validate announcement
IF NOT validate_announcement(gossip.announcement):
RETURN
// Store locally
node.announcement_cache.put(gossip.announcement.id, gossip.announcement)
// Notify local scanners
node.notify_scanners(gossip.announcement)
// Continue propagation if TTL allows
IF gossip.hops_remaining > 0:
gossip.seen_by.add(node.id)
gossip.hops_remaining -= 1
propagate_to_peers(node, gossip)
Announcement Validation
FUNCTION validate_announcement(announcement: Announcement):
// Check version compatibility
IF announcement.version > SUPPORTED_VERSION:
RETURN false
// Validate timestamp (not too old, not in future)
now = current_timestamp()
IF announcement.timestamp > now + MAX_FUTURE_DRIFT: // 5 minutes
RETURN false
IF announcement.timestamp < now - MAX_AGE: // 72 hours
RETURN false
// Validate TTL
IF announcement.ttl > MAX_TTL: // 72 hours
RETURN false
// Validate ephemeral key is on curve
IF NOT is_valid_curve_point(announcement.ephemeral_public_key):
RETURN false
// Validate stealth key is on curve
IF NOT is_valid_curve_point(announcement.stealth_public_key):
RETURN false
// Validate payload size
IF announcement.encrypted_payload.length > MAX_PAYLOAD_SIZE: // 64KB
RETURN false
// Verify announcement ID
expected_id = compute_announcement_id(announcement)
IF expected_id != announcement.announcement_id:
RETURN false
RETURN true
Scanning Service Architecture
Why Delegated Scanning
Mobile devices and light clients cannot efficiently scan all network announcements:
| Constraint | Mobile Device | Dedicated Scanner |
|---|---|---|
| CPU availability | Limited, battery drain | Always available |
| Network | Intermittent, metered | Always connected |
| Storage | Limited | Ample |
| Latency | High (must sync first) | Real-time |
Service Components
ARCHITECTURE ScanningService:
Components:
┌─────────────────────────────────────────────────────────┐
│ Scanning Service │
├─────────────────────────────────────────────────────────┤
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Announcement │ │ Scanner │ │ Notification │ │
│ │ Ingester │ │ Pool │ │ Service │ │
│ └──────┬──────┘ └──────┬──────┘ └──────┬──────┘ │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ ┌─────────────────────────────────────────────────┐ │
│ │ Announcement Database │ │
│ └─────────────────────────────────────────────────┘ │
│ │ │
│ ┌────────────────┼────────────────┐ │
│ ▼ ▼ ▼ │
│ ┌───────────┐ ┌───────────┐ ┌───────────┐ │
│ │ User A │ │ User B │ │ User C │ │
│ │View Key A │ │View Key B │ │View Key C │ │
│ └───────────┘ └───────────┘ └───────────┘ │
└─────────────────────────────────────────────────────────┘
Service API
SERVICE ScanningServiceAPI:
// Register for scanning
RPC register_view_key(request: RegisterRequest) -> RegisterResponse
// Get detected payments
RPC get_detected_payments(request: PaymentsRequest) -> PaymentsResponse
// Real-time notification stream
STREAM payment_notifications(request: NotifyRequest) -> PaymentNotification
STRUCT RegisterRequest:
view_public_key: bytes[33] // V = v*G (NOT the private key!)
spend_public_key: bytes[33] // S
auth_signature: bytes[64] // Proves ownership
notification_preferences: NotificationConfig
STRUCT RegisterResponse:
registration_id: bytes[32]
scan_start_block: uint64
estimated_catch_up_time: uint32
STRUCT PaymentsRequest:
registration_id: bytes[32]
since_timestamp: uint64
limit: uint32
auth_token: bytes[32]
STRUCT PaymentsResponse:
payments: List<DetectedPaymentInfo>
next_cursor: bytes[32]
scan_progress: ScanProgress
STRUCT DetectedPaymentInfo:
announcement_id: bytes[32]
ephemeral_public_key: bytes[33]
stealth_public_key: bytes[33]
encrypted_payload: bytes
timestamp: uint64
// NOTE: shared_secret NOT included - client derives locally
Service-Side Scanning
FUNCTION service_scan_for_user(user: RegisteredUser, announcements: List<Announcement>):
// Service knows: V (view public key), S (spend public key)
// Service does NOT know: v (view private key)
// IMPORTANT: Service cannot perform full scanning!
// Must use alternative approach:
// Option 1: User provides encrypted view key
// Option 2: User delegates actual view key (privacy tradeoff)
// Option 3: Use view tag filtering only (partial privacy)
IF user.delegation_type == FULL_VIEW_KEY:
// User has delegated view private key
v = decrypt_delegated_key(user.encrypted_view_key, service_key)
RETURN full_scan(v, user.S, announcements)
ELSE IF user.delegation_type == VIEW_TAG_ONLY:
// Service can only filter by view tag
// User must do final verification locally
candidates = []
FOR EACH announcement IN announcements:
IF matches_view_tag_pattern(announcement, user.view_tag_hint):
candidates.append(announcement)
RETURN candidates // User verifies these locally
Privacy Tradeoffs
| Delegation Level | Service Learns | Privacy | Convenience |
|---|---|---|---|
| No delegation | Nothing | Maximum | Must scan locally |
| View tag hint | Possible recipient patterns | High | Filters ~99% |
| Full view key | All incoming payments | Medium | Real-time notifications |
| Spend key | Never delegate! | N/A | N/A |
View Key Delegation Implementation
Secure Key Sharing
View keys can be shared with scanning services or trusted parties without compromising spending authority.
STRUCT ViewKeyDelegation:
delegation_id: bytes[32] // Unique identifier
delegate_public_key: bytes[33] // Recipient of delegation
encrypted_view_key: bytes // View key encrypted to delegate
grant_timestamp: uint64 // When delegation was created
expiry_timestamp: uint64 // When delegation expires (0 = no expiry)
permissions: DelegationPermissions
revocation_status: bool
audit_log_commitment: bytes[32] // For verifiable audit trail
STRUCT DelegationPermissions:
can_scan: bool // Detect incoming payments
can_notify: bool // Send notifications
can_export_history: bool // Export detected payments
rate_limit: uint32 // Max scans per hour
allowed_time_ranges: List<TimeRange> // Restrict to specific periods
Key Delegation Protocol
FUNCTION create_view_key_delegation(
view_private_key: bytes[32],
delegate_public_key: bytes[33],
permissions: DelegationPermissions,
expiry: uint64
):
// Generate delegation ID
delegation_id = secure_random(32)
// Encrypt view key to delegate using ECIES
ephemeral_key = generate_keypair()
shared_secret = ECDH(ephemeral_key.private, delegate_public_key)
encryption_key = HKDF(shared_secret, "view-key-delegation")
encrypted_view_key = AES_GCM_encrypt(
key: encryption_key,
plaintext: view_private_key,
aad: delegation_id || permissions.serialize()
)
// Create delegation record
delegation = ViewKeyDelegation {
delegation_id: delegation_id,
delegate_public_key: delegate_public_key,
encrypted_view_key: ephemeral_key.public || encrypted_view_key,
grant_timestamp: current_timestamp(),
expiry_timestamp: expiry,
permissions: permissions,
revocation_status: false,
audit_log_commitment: compute_audit_commitment(delegation_id)
}
// Sign the delegation
delegation_signature = sign(owner_identity_key, delegation.serialize())
RETURN (delegation, delegation_signature)
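The HKDF step above turns the raw ECDH shared secret into a proper encryption key. A sketch of HKDF-SHA256 per RFC 5869, with the `"view-key-delegation"` string mapped to the `info` parameter (the pseudocode leaves the salt/info split unspecified, so treat that mapping as an assumption):

```python
import hashlib
import hmac

def hkdf_sha256(ikm: bytes, info: bytes, length: int = 32, salt: bytes = b"") -> bytes:
    """RFC 5869 HKDF with HMAC-SHA256: extract then expand."""
    if not salt:
        salt = bytes(32)                                # default zero-filled salt
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()  # extract
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                            # expand
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

shared_secret = b"\x11" * 32                            # placeholder ECDH output
encryption_key = hkdf_sha256(shared_secret, b"view-key-delegation")
```

Binding the context string into the derivation ensures a key derived for delegation cannot be reused in another protocol role, even from the same ECDH output.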
FUNCTION accept_view_key_delegation(
delegation: ViewKeyDelegation,
delegation_signature: bytes[64],
delegate_private_key: bytes[32],
owner_public_key: bytes[33]
):
// Verify signature from owner
IF NOT verify(owner_public_key, delegation.serialize(), delegation_signature):
RETURN error("Invalid delegation signature")
// Check expiry
IF delegation.expiry_timestamp != 0 AND delegation.expiry_timestamp < current_timestamp():
RETURN error("Delegation expired")
// Decrypt view key
ephemeral_public = delegation.encrypted_view_key[0:33]
ciphertext = delegation.encrypted_view_key[33:]
shared_secret = ECDH(delegate_private_key, ephemeral_public)
decryption_key = HKDF(shared_secret, "view-key-delegation")
view_private_key = AES_GCM_decrypt(
key: decryption_key,
ciphertext: ciphertext,
aad: delegation.delegation_id || delegation.permissions.serialize()
)
RETURN view_private_key
Key Revocation Mechanism
STRUCT RevocationRecord:
delegation_id: bytes[32]
revocation_timestamp: uint64
reason: string
revocation_signature: bytes[64]
FUNCTION revoke_delegation(
delegation_id: bytes[32],
owner_private_key: bytes[32],
reason: string
):
revocation = RevocationRecord {
delegation_id: delegation_id,
revocation_timestamp: current_timestamp(),
reason: reason
}
revocation.revocation_signature = sign(owner_private_key, revocation.serialize())
// Publish to revocation registry (DHT or dedicated service)
publish_revocation(revocation)
RETURN revocation
FUNCTION check_revocation_status(delegation_id: bytes[32]):
// Check local cache first
IF revocation_cache.contains(delegation_id):
RETURN revocation_cache.get(delegation_id)
// Query revocation registry
revocation = query_revocation_registry(delegation_id)
IF revocation IS NOT NULL:
// Verify revocation signature
delegation = get_delegation(delegation_id)
IF verify(delegation.owner_public_key, revocation.serialize(), revocation.signature):
revocation_cache.put(delegation_id, revocation)
RETURN revocation
RETURN null // Not revoked
// Delegate must check revocation before each scan
FUNCTION scan_with_delegation_check(delegation: ViewKeyDelegation, announcements):
// Check if delegation is still valid
revocation = check_revocation_status(delegation.delegation_id)
IF revocation IS NOT NULL:
THROW DelegationRevokedException(revocation)
// Check expiry
IF delegation.expiry_timestamp != 0 AND delegation.expiry_timestamp < current_timestamp():
THROW DelegationExpiredException()
// Proceed with scanning
RETURN scan_announcements(...)
Time-Limited Grants
FUNCTION create_time_limited_grant(
view_private_key: bytes[32],
delegate_public_key: bytes[33],
start_time: uint64,
end_time: uint64
):
// Create delegation with strict time bounds
permissions = DelegationPermissions {
can_scan: true,
can_notify: true,
can_export_history: false,
allowed_time_ranges: [TimeRange(start_time, end_time)]
}
delegation = create_view_key_delegation(
view_private_key,
delegate_public_key,
permissions,
expiry: end_time + GRACE_PERIOD // Small grace for clock skew
)
RETURN delegation
// Example: Grant scanning access for next 24 hours
grant = create_time_limited_grant(
my_view_key,
scanning_service_pubkey,
start_time: now(),
end_time: now() + 24 * 3600
)
Audit Trail
STRUCT AuditLogEntry:
delegation_id: bytes[32]
action: AuditAction
timestamp: uint64
details: Map<string, any>
entry_hash: bytes[32]
previous_hash: bytes[32] // Chain entries together
ENUM AuditAction:
DELEGATION_CREATED
DELEGATION_ACCEPTED
SCAN_PERFORMED
PAYMENT_DETECTED
NOTIFICATION_SENT
DELEGATION_REVOKED
DELEGATION_EXPIRED
FUNCTION log_audit_entry(
delegation: ViewKeyDelegation,
action: AuditAction,
details: Map<string, any>
):
previous_entry = get_latest_audit_entry(delegation.delegation_id)
previous_hash = previous_entry?.entry_hash ?? ZERO_HASH
entry = AuditLogEntry {
delegation_id: delegation.delegation_id,
action: action,
timestamp: current_timestamp(),
details: details,
previous_hash: previous_hash
}
// Compute hash chain (the hash covers all fields except entry_hash itself)
entry.entry_hash = SHA256(entry.serialize_without_hash())
// Store entry
store_audit_entry(entry)
// Update commitment in delegation
update_audit_commitment(delegation.delegation_id, entry.entry_hash)
RETURN entry
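The hash chain above can be sketched directly: each entry hashes its own content plus the previous entry's hash, so modifying any entry breaks verification from that point on. JSON serialization with sorted keys is an assumption standing in for the unspecified `serialize()`:

```python
import hashlib
import json

ZERO_HASH = "00" * 32

def append_entry(log: list, action: str, details: dict) -> dict:
    """Append a hash-chained audit entry, as in log_audit_entry."""
    previous_hash = log[-1]["entry_hash"] if log else ZERO_HASH
    entry = {"action": action, "details": details, "previous_hash": previous_hash}
    serialized = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(serialized).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log: list) -> bool:
    """Recompute every hash and link, as in verify_audit_trail."""
    previous_hash = ZERO_HASH
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if body["previous_hash"] != previous_hash:
            return False
        serialized = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(serialized).hexdigest() != entry["entry_hash"]:
            return False
        previous_hash = entry["entry_hash"]
    return True

log: list = []
append_entry(log, "DELEGATION_CREATED", {})
append_entry(log, "SCAN_PERFORMED", {"count": 3})
```

The final entry's hash is exactly the commitment stored in the delegation record, so the owner can spot a truncated or rewritten log.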
FUNCTION verify_audit_trail(delegation_id: bytes[32]):
entries = get_all_audit_entries(delegation_id)
// Verify hash chain integrity
FOR i IN 1..entries.length:
IF entries[i].previous_hash != entries[i-1].entry_hash:
RETURN (false, "Hash chain broken at entry " + i)
IF SHA256(entries[i].serialize_without_hash()) != entries[i].entry_hash:
RETURN (false, "Entry hash mismatch at entry " + i)
// Verify against commitment
expected_commitment = entries.last().entry_hash
delegation = get_delegation(delegation_id)
IF delegation.audit_log_commitment != expected_commitment:
RETURN (false, "Audit commitment mismatch")
RETURN (true, entries)
Batch Processing Strategies
Chunked Scanning
For large backlogs, process announcements in manageable chunks:
FUNCTION batch_scan(scanner: Scanner, start: uint64, end: uint64):
CHUNK_SIZE = 10000 // Announcements per batch
results = []
current = start
WHILE current < end:
chunk_end = min(current + CHUNK_SIZE, end)
// Fetch chunk from DHT/database
announcements = fetch_announcements(current, chunk_end)
// Scan chunk
detected = scanner.scan_announcements(announcements)
results.extend(detected)
// Update progress
scanner.last_scanned_timestamp = chunk_end
save_scanner_state(scanner)
// Yield to other tasks
yield_execution()
current = chunk_end
RETURN results
Priority Queuing
STRUCT PriorityAnnouncement:
announcement: Announcement
priority: uint8 // 0 = highest priority
FUNCTION prioritize_announcements(announcements: List<Announcement>, user_context: UserContext):
prioritized = PriorityQueue()
FOR EACH a IN announcements:
priority = calculate_priority(a, user_context)
prioritized.push(PriorityAnnouncement(a, priority))
RETURN prioritized
FUNCTION calculate_priority(announcement: Announcement, context: UserContext):
base_priority = 128 // Middle priority
// Recent announcements are higher priority
age = current_timestamp() - announcement.timestamp
IF age < 3600: // Less than 1 hour
base_priority -= 64
ELSE IF age < 86400: // Less than 1 day
base_priority -= 32
// Known sender patterns (if view tag matches known contacts)
IF matches_known_contact_pattern(announcement, context):
base_priority -= 32
// Large payloads might be more important
IF announcement.payload_size > 1000:
base_priority -= 16
RETURN max(0, min(255, base_priority))
Parallel Batch Processing
FUNCTION parallel_batch_scan(
scanner: Scanner,
announcements: List<Announcement>,
worker_count: uint32
):
// Split into worker batches
batch_size = ceil(announcements.length / worker_count)
batches = split_into_batches(announcements, batch_size)
// Create worker-specific scanner states
workers = []
FOR i IN 0..worker_count:
worker_scanner = scanner.clone()
workers.append(spawn_worker(worker_scanner, batches[i]))
// Collect results
all_detected = []
FOR EACH worker IN workers:
detected = worker.await()
all_detected.extend(detected)
// Merge scanner states
merge_scanner_states(scanner, workers)
RETURN all_detected
FUNCTION worker_scan(scanner: Scanner, batch: List<Announcement>):
detected = []
FOR EACH announcement IN batch:
result = scan_single_announcement(scanner, announcement)
IF result IS NOT NULL:
detected.append(result)
RETURN detected
Multi-Scanner Synchronization
Distributed Scanner State
When multiple devices or services scan for the same user:
STRUCT DistributedScannerState:
user_id: bytes[32]
scanner_instances: Map<bytes[32], ScannerInstance>
global_last_scanned: uint64
conflict_resolution_policy: ConflictPolicy
STRUCT ScannerInstance:
instance_id: bytes[32]
device_type: string
last_active: uint64
scan_range: (uint64, uint64) // (start, end) of assigned range
detected_payments: Set<bytes[32]> // announcement IDs
ENUM ConflictPolicy:
MERGE_ALL // Combine all detected payments
LATEST_WINS // Most recent scan takes precedence
PRIMARY_AUTHORITATIVE // Primary scanner is source of truth
Range Assignment
FUNCTION assign_scan_ranges(
distributed_state: DistributedScannerState,
total_range: (uint64, uint64)
):
active_instances = distributed_state.scanner_instances
.filter(instance => instance.last_active > now() - ACTIVE_THRESHOLD)
IF active_instances.length == 0:
RETURN null // No active scanners
range_size = (total_range.end - total_range.start) / active_instances.length
assignments = []
current_start = total_range.start
FOR i, instance IN enumerate(active_instances):
range_end = current_start + range_size
IF i == active_instances.length - 1:
range_end = total_range.end // Last scanner gets remainder
assignments.append({
instance_id: instance.instance_id,
range: (current_start, range_end)
})
current_start = range_end
RETURN assignments
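The range-splitting logic above is easy to check with concrete numbers: the range is divided evenly and the last instance absorbs the remainder, so coverage is contiguous with no gaps. A minimal sketch:

```python
def assign_scan_ranges(start: int, end: int, instance_ids: list):
    """Divide [start, end) among instances; last one takes the remainder."""
    if not instance_ids:
        return None                                   # no active scanners
    range_size = (end - start) // len(instance_ids)
    assignments, current = [], start
    for i, instance_id in enumerate(instance_ids):
        range_end = end if i == len(instance_ids) - 1 else current + range_size
        assignments.append({"instance_id": instance_id, "range": (current, range_end)})
        current = range_end
    return assignments
```

For a range of 100 blocks across 3 scanners, the first two get 33 blocks each and the last gets 34, with boundaries lining up exactly.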
FUNCTION sync_scanner_states(distributed_state: DistributedScannerState):
all_detected = Set()
latest_timestamp = 0
FOR EACH instance IN distributed_state.scanner_instances.values():
// Merge detected payments
all_detected.union(instance.detected_payments)
// Track latest scan position
latest_timestamp = max(latest_timestamp, instance.scan_range.end)
// Update global state
distributed_state.global_last_scanned = latest_timestamp
// Reconcile conflicts
reconcile_conflicts(distributed_state, all_detected)
RETURN all_detected
Conflict Resolution
FUNCTION reconcile_conflicts(
distributed_state: DistributedScannerState,
all_detected: Set<bytes[32]>
):
SWITCH distributed_state.conflict_resolution_policy:
CASE MERGE_ALL:
// All detected payments are valid
// May have duplicates from overlapping ranges
RETURN deduplicate(all_detected)
CASE LATEST_WINS:
// For each payment, keep version from most recent scan
payments = Map()
FOR EACH instance IN distributed_state.scanner_instances.values():
FOR EACH payment_id IN instance.detected_payments:
IF NOT payments.contains(payment_id) OR
instance.last_active > payments[payment_id].detected_by_time:
payments[payment_id] = {
detected_by: instance.instance_id,
detected_by_time: instance.last_active
}
RETURN payments.keys()
CASE PRIMARY_AUTHORITATIVE:
primary = distributed_state.scanner_instances.get("primary")
IF primary IS NOT NULL:
RETURN primary.detected_payments
ELSE:
// Fallback to merge
RETURN deduplicate(all_detected)
State Synchronization Protocol
FUNCTION sync_with_peer(local_scanner: Scanner, peer_scanner: RemoteScanner):
// Exchange state summaries
local_summary = {
last_scanned: local_scanner.last_scanned_timestamp,
detected_count: local_scanner.detected_payments.size(),
bloom_filter_hash: hash(local_scanner.bloom_filter)
}
peer_summary = peer_scanner.get_summary()
// Determine sync direction
IF local_summary.last_scanned > peer_summary.last_scanned:
// We have newer data - push to peer
delta = compute_delta(local_scanner, peer_summary.last_scanned)
peer_scanner.apply_delta(delta)
ELSE IF peer_summary.last_scanned > local_summary.last_scanned:
// Peer has newer data - pull from peer
delta = peer_scanner.compute_delta(local_summary.last_scanned)
apply_delta(local_scanner, delta)
// Reconcile detected payments
reconcile_detected_payments(local_scanner, peer_scanner)
FUNCTION compute_delta(scanner: Scanner, since: uint64):
RETURN {
detected_payments: scanner.detected_payments.filter(p => p.timestamp > since),
bloom_filter_updates: scanner.bloom_filter.entries_since(since),
timestamp: scanner.last_scanned_timestamp
}
FUNCTION apply_delta(scanner: Scanner, delta: ScannerDelta):
FOR EACH payment IN delta.detected_payments:
scanner.detected_payments.put(payment.id, payment)
FOR EACH entry IN delta.bloom_filter_updates:
scanner.bloom_filter.add(entry)
scanner.last_scanned_timestamp = max(
scanner.last_scanned_timestamp,
delta.timestamp
)
Performance Optimization
Parallel Scanning Architecture
ARCHITECTURE ParallelScanner:
┌────────────────────────────────────────────────────────┐
│ Announcement Stream │
└────────────────────────┬───────────────────────────────┘
│
▼
┌────────────────────────────────────────────────────────┐
│ Dispatcher Thread │
│ - Receives announcements from network │
│ - Applies view tag filter (first pass) │
│ - Distributes to worker pool │
└────────────────────────┬───────────────────────────────┘
│
┌────────────────┼────────────────┐
▼ ▼ ▼
┌──────────────┐ ┌──────────────┐ ┌──────────────┐
│ Worker 1 │ │ Worker 2 │ │ Worker N │
│ - Full ECDH │ │ - Full ECDH │ │ - Full ECDH │
│ - Verify P │ │ - Verify P │ │ - Verify P │
└──────┬───────┘ └──────┬───────┘ └──────┬───────┘
│ │ │
└────────────────┴────────────────┘
│
▼
┌────────────────────────────────────────────────────────┐
│ Result Aggregator │
│ - Collects detected payments │
│ - Updates scanner state │
│ - Triggers notifications │
└────────────────────────────────────────────────────────┘

Worker Pool Implementation
STRUCT WorkerPool:
workers: List<Worker>
job_queue: BlockingQueue<ScanJob>
result_queue: BlockingQueue<ScanResult>
shutdown_flag: AtomicBool
FUNCTION create_worker_pool(worker_count: uint32, scanner_state: ScannerState):
pool = WorkerPool {
workers: [],
job_queue: BlockingQueue(capacity=10000),
result_queue: BlockingQueue(capacity=10000),
shutdown_flag: AtomicBool(false)
}
FOR i IN 0..worker_count:
worker = spawn_thread(worker_loop, pool, scanner_state.clone())
pool.workers.append(worker)
RETURN pool
FUNCTION worker_loop(pool: WorkerPool, scanner_state: ScannerState):
WHILE NOT pool.shutdown_flag.get():
job = pool.job_queue.take(timeout=1000ms)
IF job IS NULL:
CONTINUE
result = process_scan_job(job, scanner_state)
pool.result_queue.put(result)
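The dispatcher/worker structure above maps naturally onto threads sharing blocking queues. A minimal runnable sketch in Python follows; the `is_match` predicate is a stand-in for the full ECDH check performed by `process_scan_job` below, and a sentinel object replaces the timeout-based shutdown flag.

```python
import queue
import threading

_SENTINEL = object()  # shutdown marker, one per worker

def _worker_loop(jobs, results, is_match):
    # Each worker drains jobs until it sees the shutdown sentinel.
    while True:
        job = jobs.get()
        if job is _SENTINEL:
            break
        results.put((job, is_match(job)))

def scan_parallel(announcements, is_match, workers=4):
    jobs, results = queue.Queue(), queue.Queue()
    threads = [
        threading.Thread(target=_worker_loop, args=(jobs, results, is_match))
        for _ in range(workers)
    ]
    for t in threads:
        t.start()
    for a in announcements:
        jobs.put(a)
    for _ in threads:          # one sentinel per worker
        jobs.put(_SENTINEL)
    for t in threads:
        t.join()
    matches = []
    while not results.empty():  # safe: all workers have exited
        job, ok = results.get()
        if ok:
            matches.append(job)
    return matches
```

The sentinel-per-worker pattern avoids polling a shutdown flag; each worker consumes exactly one sentinel and exits.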
FUNCTION process_scan_job(job: ScanJob, state: ScannerState):
v = state.view_private_key
S = state.spend_public_key
R = job.announcement.ephemeral_key
P = job.announcement.stealth_address
// Full ECDH computation
shared_point = scalar_multiply(R, v)
shared_secret = SHA256(encode_point(shared_point) || encode_point(R))
expected_P = point_add(S, scalar_multiply(G, reduce_to_scalar(shared_secret)))
IF point_equals(expected_P, P):
RETURN ScanResult.match(job.announcement, shared_secret)
ELSE:
RETURN ScanResult.no_match(job.announcement.id)

Caching Strategies
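The full ECDH check in `process_scan_job` above can be exercised end to end with a toy discrete-log analogue: the elliptic-curve group is replaced by exponentiation mod a Mersenne prime so the sketch runs without a curve library. Point addition becomes multiplication and scalar multiplication becomes exponentiation; a real implementation uses curve point operations and must not use this construction.

```python
import hashlib

# Toy stand-ins for the curve: Z_p^* with a fixed base. NOT for real keys.
P_MOD = 2**127 - 1   # group modulus (a Mersenne prime)
G_BASE = 3           # generator stand-in for G

def _h(shared, R):
    # H(sharedPoint || R), reduced into the exponent group.
    data = shared.to_bytes(16, "big") + R.to_bytes(16, "big")
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % P_MOD

def make_announcement(r, V, S):
    """Sender side: ephemeral scalar r, recipient meta-address (V, S)."""
    R = pow(G_BASE, r, P_MOD)
    shared = pow(V, r, P_MOD)                       # r * V in the EC scheme
    P = S * pow(G_BASE, _h(shared, R), P_MOD) % P_MOD   # S + H(...)*G
    return R, P

def is_ours(R, P, v, S):
    """Recipient side: the ownership check from process_scan_job."""
    shared = pow(R, v, P_MOD)                       # v * R == r * V
    expected = S * pow(G_BASE, _h(shared, R), P_MOD) % P_MOD
    return expected == P
```

Because `v * R` and `r * V` land on the same group element, only the holder of `v` can recompute the hash input and pass the comparison.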
STRUCT ScannerCache:
// LRU cache for recently scanned announcements
recent_scans: LRUCache<bytes[32], bool> // announcement_id -> is_match
// Precomputed point multiplication cache
// Cache G*i for common scalar values
point_cache: Map<bytes[32], Point>
// Announcement metadata cache
metadata_cache: LRUCache<bytes[32], AnnouncementMetadata>
FUNCTION scan_with_cache(
cache: ScannerCache,
scanner: Scanner,
announcement: Announcement
):
// Check if already scanned
IF cache.recent_scans.contains(announcement.id):
IF cache.recent_scans.get(announcement.id):
RETURN cache.metadata_cache.get(announcement.id)
ELSE:
RETURN null
// Perform scan
result = scanner.scan_single(announcement)
// Update cache
cache.recent_scans.put(announcement.id, result IS NOT NULL)
IF result IS NOT NULL:
cache.metadata_cache.put(announcement.id, result)
RETURN result
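The `recent_scans` LRU used above can be sketched with an `OrderedDict`, which keeps insertion order and supports moving a key to the end in O(1). The interface mirrors the pseudocode's `contains`/`get`/`put`.

```python
from collections import OrderedDict

# Minimal LRU map for the recent_scans cache: get/put refresh recency;
# inserting past capacity evicts the least recently used entry.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def contains(self, key):
        return key in self._data

    def get(self, key):
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict the oldest entry
```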
// Precompute common point multiplications
FUNCTION initialize_point_cache(cache: ScannerCache):
// Cache small multiples of G for optimization
FOR i IN 1..256:
scalar = bytes_from_int(i)
cache.point_cache.put(scalar, scalar_multiply(G, scalar))
// Cache powers of 2
FOR i IN 0..256:
scalar = bytes_from_int(1 << i)
cache.point_cache.put(scalar, scalar_multiply(G, scalar))

Memory-Efficient Processing
FUNCTION memory_efficient_scan(
scanner: Scanner,
announcement_stream: Stream<Announcement>,
max_memory_mb: uint32
):
BATCH_SIZE = estimate_batch_size(max_memory_mb)
batch = []
FOR EACH announcement IN announcement_stream:
batch.append(announcement)
IF batch.length >= BATCH_SIZE:
// Process batch
results = scanner.scan_batch(batch)
// Handle results immediately
FOR EACH result IN results:
IF result.is_match:
yield result
// Clear batch to free memory
batch.clear()
// Hint the garbage collector to reclaim the freed batch memory
schedule_gc()
// Process remaining
IF batch.length > 0:
results = scanner.scan_batch(batch)
FOR EACH result IN results:
IF result.is_match:
yield result
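The batched loop above translates directly into a Python generator. Here `scan_batch` is any callable returning `(announcement, is_match)` pairs, and the 200 B payload plus 1 KB working-memory figure is the document's own estimate from `estimate_batch_size` below.

```python
# Sketch of the memory-bounded batch loop: fill a batch, scan it, yield
# matches, then clear the batch before the next fill.
BYTES_PER_ANNOUNCEMENT = 200 + 1024  # payload + per-scan working memory

def estimate_batch_size(max_memory_mb):
    return (max_memory_mb * 1024 * 1024) // BYTES_PER_ANNOUNCEMENT

def memory_efficient_scan(scan_batch, announcements, max_memory_mb):
    batch_size = estimate_batch_size(max_memory_mb)
    batch = []
    for ann in announcements:
        batch.append(ann)
        if len(batch) >= batch_size:
            yield from (a for a, hit in scan_batch(batch) if hit)
            batch.clear()  # release the batch before refilling
    if batch:              # process the final partial batch
        yield from (a for a, hit in scan_batch(batch) if hit)
```

At 1 MB of budget the estimate works out to 1,048,576 / 1,224 ≈ 856 announcements per batch.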
FUNCTION estimate_batch_size(max_memory_mb: uint32):
// Each announcement is ~200 bytes
// Each scan operation needs ~1KB working memory
BYTES_PER_ANNOUNCEMENT = 200 + 1024
RETURN (max_memory_mb * 1024 * 1024) / BYTES_PER_ANNOUNCEMENT

Privacy Considerations
Information Leaked by Scanning
| Information | Who Learns | Risk Level | Mitigation |
|---|---|---|---|
| Scanning activity | Network observers | Low | 3-hop relay routing |
| View key (if delegated) | Scanning service | Medium | Time-limited grants |
| Payment timing | Service with view key | Medium | Batch notifications |
| Payment amounts | Only recipient | None | E2E encryption |
| Sender identity | Only recipient | None | Stealth addresses |
Protecting Scanner Privacy
FUNCTION privacy_preserving_scan(scanner: Scanner, network: OnionNetwork):
// Route all DHT queries through onion network
announcements = []
// Use multiple circuits for different time ranges
FOR EACH time_bucket IN get_time_buckets_to_scan():
circuit = network.create_new_circuit()
bucket_announcements = fetch_via_circuit(circuit, time_bucket)
announcements.extend(bucket_announcements)
// Random delay between queries
sleep(random(100ms, 500ms))
circuit.close()
// Scan locally (no network exposure)
RETURN scanner.scan_announcements(announcements)
FUNCTION obfuscate_scanning_pattern(scanner: Scanner):
// Add dummy queries to hide real scanning
real_queries = generate_real_queries(scanner)
dummy_queries = generate_dummy_queries(len(real_queries) * DUMMY_RATIO)
all_queries = shuffle(real_queries + dummy_queries)
FOR EACH query IN all_queries:
result = execute_query(query)
IF query.is_real:
process_result(result)
// Dummy results are discarded

Delegated Scanning Privacy
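The query-obfuscation step from `obfuscate_scanning_pattern` above can be sketched as follows. The query shape and `DUMMY_RATIO` value are assumptions for illustration.

```python
import random

# Sketch of pattern obfuscation: real queries are mixed with DUMMY_RATIO
# times as many dummies and shuffled, so an observer sees one uniform
# stream. Dummy targets are random and their results are discarded.
DUMMY_RATIO = 2

def build_query_schedule(real_targets, rng=random):
    schedule = [{"is_real": True, "target": t} for t in real_targets]
    schedule += [
        {"is_real": False, "target": rng.getrandbits(64)}
        for _ in range(len(real_targets) * DUMMY_RATIO)
    ]
    rng.shuffle(schedule)
    return schedule

def execute_schedule(schedule, run_query):
    # Every query is executed, but only real results are kept.
    return [run_query(q["target"]) for q in schedule if q["is_real"]]
```

A real deployment would also execute the dummy queries over the network (not just skip their results) so traffic volume, not only ordering, is uniform; this sketch keeps only real results to show the filtering.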
FUNCTION privacy_aware_delegation(
view_key: bytes[32],
service_pubkey: bytes[33],
privacy_level: PrivacyLevel
):
SWITCH privacy_level:
CASE MAXIMUM:
// No delegation - scan locally only
RETURN null
CASE HIGH:
// View tag hints only
view_tag_hint = derive_view_tag_hint(view_key)
RETURN ViewTagDelegation(view_tag_hint)
CASE MEDIUM:
// Full view key with time limits and audit
RETURN create_time_limited_grant(
view_key,
service_pubkey,
duration: 24_HOURS,
audit_required: true
)
CASE CONVENIENCE:
// Full view key, longer duration
RETURN create_view_key_delegation(
view_key,
service_pubkey,
expiry: 30_DAYS
)

Scanning Service Trust Model
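The privacy-level dispatch in `privacy_aware_delegation` above can be sketched as a small enum-driven function; the grant dicts are illustrative stand-ins for the pseudocode's delegation objects.

```python
from datetime import timedelta
from enum import Enum, auto

# Sketch of the privacy-level switch: stricter levels delegate less.
class PrivacyLevel(Enum):
    MAXIMUM = auto()
    HIGH = auto()
    MEDIUM = auto()
    CONVENIENCE = auto()

def delegation_for(level):
    if level is PrivacyLevel.MAXIMUM:
        return None  # no delegation: scan locally only
    if level is PrivacyLevel.HIGH:
        return {"kind": "view_tag_hint"}
    if level is PrivacyLevel.MEDIUM:
        return {"kind": "view_key", "duration": timedelta(hours=24),
                "audit_required": True}
    # CONVENIENCE: full view key, longer expiry, no audit requirement
    return {"kind": "view_key", "duration": timedelta(days=30),
            "audit_required": False}
```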
THREAT MODEL for Scanning Services:
Honest-but-curious service:
LEARNS:
- When payments arrive (timing)
- Approximate payment frequency
- View tag matches (not amounts)
CANNOT LEARN:
- Payment amounts
- Sender identities
- Message contents
- Spending capability
Malicious service:
ADDITIONAL RISKS:
- May delay notifications
- May selectively report payments
- May log all activity for later analysis
MITIGATIONS:
- Multiple redundant services
- Local scanning verification
- Cryptographic receipts
- Audit trail verification
FUNCTION verify_service_honesty(
local_scanner: Scanner,
service_reports: List<DetectedPayment>
):
// Periodically verify service is reporting all payments
sample_range = select_random_time_range()
local_results = local_scanner.scan_range(sample_range)
service_results = filter_to_range(service_reports, sample_range)
missing = local_results - service_results
IF missing.length > 0:
report_service_misbehavior(missing)
revoke_service_delegation()

Related Documentation
- Wallet-Based Identity - Stealth address fundamentals
- Cryptography Fundamentals - ECDH and key derivation
- DHT and Kademlia - Announcement distribution
- Multi-Device Support - Cross-device scanning coordination
- Threat Model - Security assumptions