01 Executive Summary
JIL Sovereign implements a peer-to-peer settlement architecture where each validator node subscribes only to the compliance zone topics it is authorized to process. Settlement messages are published to zone-specific RedPanda topics, and validators consume messages using manual offset commitment (autoCommit disabled) to achieve exactly-once processing semantics. Failed messages are classified as transient or permanent, with transient failures routed back to the same partition via key-based routing for retry by the same validator, and permanent failures routed to a dead letter queue for manual investigation.
The architecture has been validated with 1 million settlement messages and zero message loss. Partition-affinity retry ensures that the same validator retries a failed message, maintaining local state consistency. Consumer groups enable horizontal scaling within a zone by distributing partitions across multiple validators authorized for the same compliance zone.
02 Problem Statement
Settlement processing in a multi-jurisdictional blockchain network must satisfy three competing requirements: compliance zone isolation (only authorized validators process settlements for their zone), exactly-once semantics (no duplicate or missed settlements), and horizontal scalability (adding validators increases throughput). Traditional centralized routing layers satisfy the first requirement but create single points of failure and scalability bottlenecks.
2.1 Settlement Processing Challenges
- Zone Authorization: A settlement for a DE_BAFIN-regulated entity must only be processed by a validator authorized for the DE_BAFIN zone, not by a SG_MAS or US_FINCEN validator.
- Exactly-Once Semantics: A settlement that is partially processed and then fails must be retried completely, not skipped or double-processed. Auto-commit consumers risk both scenarios.
- Partition Affinity: When a settlement fails and is retried, the retry should be handled by the same validator that started processing it, preserving any local state (database transactions, pending proofs) from the initial attempt.
- Graceful Degradation: If a validator goes offline, its zone's settlements should queue naturally and be processed when the validator recovers or when another authorized validator joins the zone's consumer group.
2.2 Why Existing Approaches Fail
| Approach | Zone Isolation | Exactly-Once | Partition Affinity |
|---|---|---|---|
| Central Router | Yes - router filters | Depends on router | No - any worker processes |
| Auto-Commit Consumer | Yes - topic subscription | No - can skip or duplicate on crash | No - rebalance reassigns |
| Database Queue | Yes - WHERE clause | Yes - transaction based | No - any poller picks up |
| Shared Nothing | Yes - isolated instances | Depends on implementation | Yes - single processor |
03 Technical Architecture
The settlement architecture uses zone-specific RedPanda topics, per-zone consumer groups, manual offset management, and error classification to achieve exactly-once, zone-isolated, partition-affine settlement processing.
3.1 Zone Topic Mapping
| Topic | Zone | Authorized Validators | Partitions |
|---|---|---|---|
| jil.settlement.DE_BAFIN | Germany | Genesis, DE validator | 4 |
| jil.settlement.US_FINCEN | United States | Genesis, US validator | 4 |
| jil.settlement.EU_ESMA | European Union | Genesis, EU validator | 4 |
| jil.settlement.SG_MAS | Singapore | Genesis, SG validator | 4 |
| jil.settlement.CH_FINMA | Switzerland | Genesis, CH validator | 4 |
| jil.settlement.GB_FCA | United Kingdom | Genesis, GB validator | 4 |
| jil.settlement.JP_JFSA | Japan | Genesis, JP validator | 4 |
| jil.settlement.AE_FSRA | UAE | Genesis, AE validator | 4 |
| jil.settlement.BR_CVM | Brazil | Genesis, BR validator | 4 |
| jil.settlement.GLOBAL_FATF | Global fallback | Genesis | 4 |
| jil.settlement.dlq | Dead letter | All (monitoring only) | 1 |
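The zone-to-topic mapping above can be sketched as a small lookup with the GLOBAL_FATF fallback. The function and set names below are illustrative, not the production schema:

```python
# Zone-to-topic mapping, following the naming scheme in the table above.
# ZONES and settlement_topic() are hypothetical names for illustration.
ZONES = {
    "DE_BAFIN", "US_FINCEN", "EU_ESMA", "SG_MAS", "CH_FINMA",
    "GB_FCA", "JP_JFSA", "AE_FSRA", "BR_CVM",
}

def settlement_topic(zone: str) -> str:
    """Return the RedPanda topic for a compliance zone.

    Unknown zones fall back to the global topic, mirroring the
    GLOBAL_FATF fallback row in the table.
    """
    if zone in ZONES:
        return f"jil.settlement.{zone}"
    return "jil.settlement.GLOBAL_FATF"

assert settlement_topic("DE_BAFIN") == "jil.settlement.DE_BAFIN"
assert settlement_topic("XX_UNKNOWN") == "jil.settlement.GLOBAL_FATF"
```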
3.2 Error Classification
| Error Type | Examples | Action | Routing |
|---|---|---|---|
| Transient | Database timeout, network error, lock contention | Retry (same partition) | Key-based re-publish to same topic |
| Permanent | Invalid data, compliance rejection, schema violation | Dead letter queue | Publish to jil.settlement.dlq |
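The classification in the table maps naturally onto an exception hierarchy. A minimal sketch, assuming exception types stand in for the failure categories (all names here are hypothetical):

```python
# Transient failures are retried on the same partition; permanent
# failures route to the DLQ. These classes are illustrative stand-ins
# for the real error taxonomy.
class TransientError(Exception): ...
class DatabaseTimeout(TransientError): ...
class LockContention(TransientError): ...

class PermanentError(Exception): ...
class ComplianceRejection(PermanentError): ...
class SchemaViolation(PermanentError): ...

def classify(err: Exception) -> str:
    """Map a failure to its routing action."""
    if isinstance(err, TransientError):
        return "retry_same_partition"  # key-based re-publish, same topic
    if isinstance(err, PermanentError):
        return "dead_letter"           # publish to jil.settlement.dlq
    # Unclassified failures are treated as transient so they are
    # retried rather than silently dropped (a conservative assumption).
    return "retry_same_partition"
```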
3.3 Offset Commitment Model
Auto-commit is disabled on all settlement consumers. The consumer commits the offset only after the settlement has been fully processed: compliance check passed, ledger transaction committed, proof generated, and all database writes confirmed. If any step fails, the offset is not committed, and the message will be re-delivered on the next consumer poll. This guarantees that no settlement is ever marked as processed without actually being processed.
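The commit-after-process discipline can be simulated without a broker. In this sketch the offset advances only when processing fully succeeds, so a failed message is re-delivered on the next poll; all names are illustrative:

```python
# Minimal simulation of manual offset commitment: commit only after
# full success, otherwise the message is re-delivered on the next poll.
def consume(messages, process, committed_offset=0):
    """Process messages from committed_offset; return the new offset."""
    for offset in range(committed_offset, len(messages)):
        try:
            process(messages[offset])
        except Exception:
            return offset              # offset NOT committed -> re-delivery
        committed_offset = offset + 1  # commit only after full success
    return committed_offset

attempts = []
def flaky(msg):
    attempts.append(msg)
    if msg == "s2" and attempts.count("s2") == 1:
        raise RuntimeError("db timeout")  # transient failure, first try only

msgs = ["s1", "s2", "s3"]
off = consume(msgs, flaky)        # fails at s2; offset stays at 1
off = consume(msgs, flaky, off)   # re-delivers s2, then processes s3
assert off == 3
assert attempts == ["s1", "s2", "s2", "s3"]  # no skip, no double-commit
```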
04 Implementation
4.1 Settlement Processing Pipeline
- Message consumption: The consumer polls its authorized zone topics. Each message contains the full settlement payload including parties, amounts, compliance zone, and settlement type.
- Compliance check: The settlement is validated against the zone's compliance rules (transaction limits, sanctions screening, velocity checks, asset restrictions).
- Ledger commit: The validated settlement is committed to the L1 ledger as a PaymentTx, signed by the validator's consensus key.
- Proof generation: A settlement proof (ZK receipt) is generated attesting to the settlement's validity and compliance status.
- Offset commit: Only after all preceding steps complete successfully is the consumer offset committed, marking the message as processed.
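The five steps above can be condensed into one handler in which the offset commit is strictly last. The step functions passed in are placeholders for the real compliance, ledger, and proof services:

```python
# Sketch of the settlement pipeline; the injected callables stand in
# for the real services, and commit_offset runs only after every
# preceding step has succeeded.
def handle_settlement(msg, compliance_check, ledger_commit,
                      generate_proof, commit_offset):
    compliance_check(msg)         # zone rules: limits, sanctions, velocity
    tx = ledger_commit(msg)       # PaymentTx signed with the consensus key
    proof = generate_proof(tx)    # ZK receipt for validity + compliance
    commit_offset(msg["offset"])  # mark processed only after all steps
    return proof
```

Because any exception propagates before `commit_offset` runs, a failure at any step leaves the offset uncommitted and the message eligible for re-delivery.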
4.2 Partition-Affinity Retry
When a transient error occurs, the message is re-published to the same zone topic using the settlement ID as the partition key. Because Kafka/RedPanda uses consistent hashing for partition assignment, the same partition key always routes to the same partition, and the same consumer in the consumer group always reads from the same partition (sticky assignment). This means the same validator retries the settlement, preserving any local state from the initial processing attempt.
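The affinity property rests on deterministic key hashing. Kafka's default partitioner hashes keys with murmur2; in this sketch a stable stdlib hash (md5) stands in for it, which is an assumption, since the property that matters here is only determinism:

```python
# Key-based partition routing: the same key always maps to the same
# partition. md5 is an illustrative stand-in for Kafka's murmur2.
import hashlib

def partition_for(key: str, num_partitions: int = 4) -> int:
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# Retrying with the settlement ID as the key lands on the same
# partition, hence the same validator under sticky assignment.
assert partition_for("settlement-42") == partition_for("settlement-42")
assert 0 <= partition_for("settlement-42") < 4
```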
4.3 Dead Letter Queue
Permanent failures are published to the dead letter queue (jil.settlement.dlq) with the original message payload, the error details, the zone, and the processing validator ID. DLQ messages are monitored by the AI Fleet Inspector and appear in the ops dashboard alerts tile. Manual investigation and re-processing are handled through the ops console.
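A DLQ record carrying the fields named above might look like the following. The exact schema of jil.settlement.dlq is not specified here, so this envelope is illustrative:

```python
# Hypothetical DLQ envelope: original payload, error details, zone,
# and processing validator ID, serialized for publishing.
import json
import datetime

def dlq_record(original: dict, error: Exception, zone: str,
               validator_id: str) -> bytes:
    envelope = {
        "original": original,  # full settlement payload
        "error": {
            "type": type(error).__name__,
            "message": str(error),
        },
        "zone": zone,
        "validator_id": validator_id,
        "failed_at": datetime.datetime.now(
            datetime.timezone.utc).isoformat(),
    }
    return json.dumps(envelope).encode()
```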
4.4 Backpressure
Queue depth serves as a natural backpressure signal. If a zone's validators cannot keep up with settlement volume, messages accumulate in the topic. The AI Fleet Inspector monitors per-zone queue depth via the PERF_SETTLEMENT_LAG rule and can trigger auto-remediation (cycle the consumer) or alert operators when lag exceeds thresholds.
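Per-zone lag is simply the gap between each partition's high watermark and its committed offset, summed across partitions. A minimal sketch; the threshold value below is illustrative, not the configured PERF_SETTLEMENT_LAG threshold:

```python
# Lag as a backpressure signal: high watermark minus committed offset,
# summed over a zone's partitions.
def zone_lag(high_watermarks: dict, committed: dict) -> int:
    return sum(high_watermarks[p] - committed.get(p, 0)
               for p in high_watermarks)

def lag_status(lag: int, threshold: int = 10_000) -> str:
    return "alert" if lag > threshold else "ok"

lag = zone_lag({0: 5000, 1: 4800, 2: 5100, 3: 4900},
               {0: 4000, 1: 4800, 2: 3000, 3: 4900})
assert lag == 1000 + 0 + 2100 + 0
assert lag_status(lag) == "ok"
assert lag_status(lag, threshold=2000) == "alert"
```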
05 Integration with JIL Ecosystem
5.1 Settlement API
The settlement-api service publishes settlement messages to zone-specific topics based on the transaction's compliance zone. The zone is determined during compliance evaluation and encoded in the settlement payload. The settlement-api does not route or process settlements; it only produces messages for the appropriate zone topic.
5.2 AI Fleet Inspector
The inspector monitors settlement processing metrics per validator and per zone. Three dedicated rules (PERF_SETTLEMENT_LAG, PERF_SETTLEMENT_ERRORS, FLEET_SETTLEMENT_STOPPED) track consumption rates, error rates, and zone-level health. Auto-remediation can cycle a consumer that has stopped processing or is falling behind.
5.3 Compliance API
Each settlement is validated against the compliance-api's zone-specific rules before ledger commitment. The compliance check enforces jurisdiction-specific transaction limits, sanctions lists, velocity controls, allowed assets, and corridor restrictions. A compliance rejection is classified as a permanent error and routes the settlement to the DLQ.
5.4 Horizontal Scaling
Adding a second validator authorized for a zone doubles that zone's processing capacity. The new validator joins the zone's consumer group, and RedPanda automatically rebalances partitions across the group members. Each validator processes a subset of the zone's partitions, with the total throughput scaling linearly with the number of group members.
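The scaling behavior follows from partition assignment: a zone's 4 partitions are spread across group members as validators join. Round-robin here stands in for the broker's actual assignor, so the exact distribution is illustrative:

```python
# Sketch of partition distribution across consumer-group members.
# Round-robin approximates the broker's assignor for illustration.
def assign(partitions: list, members: list) -> dict:
    out = {m: [] for m in members}
    for i, p in enumerate(partitions):
        out[members[i % len(members)]].append(p)
    return out

one = assign([0, 1, 2, 3], ["val-DE-1"])
two = assign([0, 1, 2, 3], ["val-DE-1", "val-DE-2"])
assert one == {"val-DE-1": [0, 1, 2, 3]}          # one member: all 4
assert two == {"val-DE-1": [0, 2], "val-DE-2": [1, 3]}  # split 2/2
```

With one validator per zone, that member owns all 4 partitions; adding a second halves each member's share, which is why a second authorized validator roughly doubles the zone's capacity.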
06 Prior Art Differentiation
| System | Architecture | Zone Isolation | Exactly-Once | JIL Advantage |
|---|---|---|---|---|
| Traditional RTGS | Central bank clearing | Single jurisdiction | Transaction-based | JIL supports 10 zones P2P without central authority |
| Ripple/XRP | Validator consensus | No zone concept | Consensus-based | JIL enforces zone-specific compliance |
| Kafka Streams | Stream processing | Topic-based | Exactly-once via transactions | JIL adds compliance zone authorization layer |
| Cosmos IBC | Inter-chain messaging | Chain-level | Packet acknowledgment | JIL provides intra-chain zone settlement |
| Lightning Network | Payment channels | None | Channel-based | JIL processes on-chain with full compliance |
07 Implementation Roadmap
Core Settlement Consumer
Deploy zone-specific topic consumers with manual offset commitment. Implement error classification (transient vs permanent). Build partition-affinity retry with key-based routing. Create dead letter queue with monitoring integration.
Compliance Integration
Wire zone-specific compliance checks into settlement pipeline. Implement jurisdiction-specific transaction limits and sanctions screening. Build velocity controls and corridor restrictions per zone. Deploy compliance audit logging for each settlement.
Horizontal Scaling
Multi-validator consumer groups per zone with automatic partition rebalancing. Sticky partition assignment for optimal affinity. Cross-zone settlement coordination for multi-jurisdiction transactions. Load testing at 10x projected volume.
Advanced Recovery
Automated DLQ reprocessing with policy-based retry. Cross-datacenter topic replication for disaster recovery. Settlement finality proofs published on-chain. Real-time settlement analytics dashboard with per-zone metrics.