Solutions for Distributed Systems
ATOMiK replaces consensus protocols, CRDTs, and event sourcing with XOR delta-state algebra. Every node converges to the same state without leaders, quorums, or conflict resolution — mathematically guaranteed by 92 Lean4 proofs.
Every mainstream approach to distributed state pays a tax — in latency, bandwidth, complexity, or all three.
Raft and Paxos require leader election, log replication, and quorum agreement for every state change. Latency scales with cluster size. A single slow node stalls the entire pipeline.
Delta commutativity eliminates ordering requirements. Every node accumulates XOR deltas independently — no leader, no quorum, no round-trips. Latency is O(1) regardless of cluster size.
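Order independence is easy to check directly. The sketch below is a minimal pure-Python model (not the `atomik_core` SDK API), assuming a node's state is a 32-bit word and a delta merges by XOR: every possible arrival order of three deltas yields the same final state.

```python
from functools import reduce
from itertools import permutations

# Illustrative model (not the ATOMiK SDK): state is a 32-bit word,
# and a delta is merged by XOR-ing it into the state.
S_INIT = 0xCAFEBABE
deltas = [0x000000FF, 0x0000FF00, 0x00FF0000]

# Apply the three deltas in every possible arrival order.
results = {
    reduce(lambda s, d: s ^ d, order, S_INIT)
    for order in permutations(deltas)
}

# All 6 orderings collapse to one final state -- no leader or quorum
# was needed to agree on delivery order.
assert len(results) == 1
```

Because XOR is commutative and associative, the set of distinct results always has exactly one element, which is why no coordination round-trips are required.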
G-Counters, LWW-Registers, OR-Sets — each data type needs its own merge function, metadata overhead, and correctness proof. State bloat grows unbounded as tombstones accumulate.
One algebraic operation (XOR) handles all merge semantics. Self-inverse property means no tombstones: applying a delta twice cancels it. Zero metadata overhead per operation.
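The self-inverse property can be sketched in a few lines of plain Python (an illustrative model, not the SDK API): applying the same delta twice restores the prior state, so an undo or removal is just a re-send of the original delta, with no tombstone record left behind.

```python
# Illustrative model (not the ATOMiK SDK): deltas merge by XOR.
state = 0xCAFEBABE
delta = 0x0000FF00

state ^= delta  # apply the delta
state ^= delta  # apply it again: d XOR d = 0, so it cancels

# Back to the original state -- nothing to garbage-collect.
assert state == 0xCAFEBABE
```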
Event sourcing stores every mutation forever. Replaying 10M events to reconstruct current state takes minutes. Compaction is fragile. Storage costs grow linearly with history.
O(1) state reconstruction: current_state = initial_state XOR accumulator. No log replay. No compaction. The accumulator is a fixed-size summary of all deltas ever applied.
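The reconstruction identity can be demonstrated with a small pure-Python model (not the `atomik_core` API), assuming the accumulator is simply the XOR of every delta ever applied: a single fixed-size word replaces the entire mutation log.

```python
from functools import reduce

# Illustrative model (not the ATOMiK SDK): the accumulator is the XOR
# of all deltas ever applied -- fixed size, regardless of history length.
S_INIT = 0xCAFEBABE
history = [0x000000FF, 0x0000FF00, 0x00FF0000, 0x0000FF00]  # 4 mutations

accumulator = reduce(lambda a, d: a ^ d, history, 0)

# O(1) reconstruction: one XOR, no log replay.
current_state = S_INIT ^ accumulator

# Cross-check against an O(n) event-sourcing style replay.
replayed = reduce(lambda s, d: s ^ d, history, S_INIT)
assert current_state == replayed
```

Note that the two `0x0000FF00` mutations cancel inside the accumulator, so repeated or reverted writes never grow the stored summary.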
Network partitions create divergent state. Rejoining requires conflict resolution, manual intervention, or data loss. Vector clocks add per-node metadata to every message.
XOR is associative and commutative — partitioned nodes accumulate deltas locally, then exchange them in any order upon reconnection. States converge automatically. No vector clocks needed.
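Partition healing can be simulated in a short pure-Python sketch (again a model, not the SDK API): each side of a split accumulates its own deltas, then on reconnection applies the other side's deltas in an arbitrary shuffled order, and both sides converge with no vector clocks or merge logic.

```python
import random
from functools import reduce

# Illustrative model (not the ATOMiK SDK): during a partition each side
# accumulates its own deltas; on heal, delta sets are exchanged in
# arbitrary network order.
S_INIT = 0xCAFEBABE
side_a = [0x00000011, 0x00002200]  # deltas applied on A's side of the split
side_b = [0x00330000, 0x44000000]  # deltas applied on B's side

def apply_all(state, deltas):
    return reduce(lambda s, d: s ^ d, deltas, state)

# Divergent states while partitioned
state_a = apply_all(S_INIT, side_a)
state_b = apply_all(S_INIT, side_b)
assert state_a != state_b

# Heal: each side applies the other's deltas, in shuffled order
missing_a, missing_b = side_b[:], side_a[:]
random.shuffle(missing_a)
random.shuffle(missing_b)
state_a = apply_all(state_a, missing_a)
state_b = apply_all(state_b, missing_b)
assert state_a == state_b  # automatic convergence
```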
Three nodes send deltas in any order. XOR commutativity guarantees identical final state — no coordination required.
```
     Node A                Node B                Node C
  ┌──────────┐          ┌──────────┐          ┌──────────┐
  │ state(0) │          │ state(0) │          │ state(0) │
  │ = S_init │          │ = S_init │          │ = S_init │
  └────┬─────┘          └────┬─────┘          └────┬─────┘
       │                     │                     │
   accum(d_A)            accum(d_B)            accum(d_C)
       │                     │                     │
       ├────── send d_A ────►├────── send d_B ────►│
       │◄───── send d_B ─────┤◄───── send d_C ─────┤
       │◄──────────────── send d_C ────────────────┤
       ├──────────────── send d_A ────────────────►│
       │                     │                     │
  ┌────┴─────┐          ┌────┴─────┐          ┌────┴─────┐
  │ S_init   │          │ S_init   │          │ S_init   │
  │  ⊕ d_A   │          │  ⊕ d_A   │          │  ⊕ d_A   │
  │  ⊕ d_B   │  ══════  │  ⊕ d_B   │  ══════  │  ⊕ d_B   │
  │  ⊕ d_C   │          │  ⊕ d_C   │          │  ⊕ d_C   │
  └──────────┘          └──────────┘          └──────────┘
```
Order doesn't matter: d_A ⊕ d_B ⊕ d_C = d_C ⊕ d_A ⊕ d_B
Algebraic guarantee: Abelian group (commutative, associative, self-inverse).

Two nodes start with the same reference state. Each accumulates a different delta. Exchange deltas in any order — the result is always identical.
```python
from atomik_core import DeltaStream

# Node A and Node B converge without coordination
stream_a = DeltaStream()
stream_b = DeltaStream()

# Both start from the same reference state
stream_a.load(addr=0, initial_state=0xCAFEBABE)
stream_b.load(addr=0, initial_state=0xCAFEBABE)

# Each node accumulates different deltas
stream_a.accum(addr=0, delta=0x000000FF)
stream_b.accum(addr=0, delta=0x0000FF00)

# Exchange deltas in any order -- result is identical
stream_a.accum(addr=0, delta=0x0000FF00)  # B's delta applied to A
stream_b.accum(addr=0, delta=0x000000FF)  # A's delta applied to B

assert stream_a.read(0) == stream_b.read(0)  # Always true
```

Side-by-side comparison across the dimensions that matter for distributed state management.
| Metric | ATOMiK | Raft / Paxos | CRDTs | Event Sourcing |
|---|---|---|---|---|
| Write Latency | O(1) local | O(n) quorum RTT | O(1) local | O(1) append |
| Sync Bandwidth | Fixed-size delta | Full log entries | State + metadata | Full event stream |
| Conflict Resolution | None needed | Leader decides | Type-specific merge | Manual / LWW |
| State Reconstruction | O(1) XOR | Log replay | Merge all replicas | Replay all events |
| Metadata Overhead | Zero | Term + index/entry | Vector clocks / dots | Sequence numbers |
| Partition Recovery | Automatic | Re-election + catch-up | Automatic (slow) | Conflict detection |
| Formal Proofs | 92 Lean4 theorems | TLA+ spec | Per-type proofs | None standard |
Start with the free Python SDK. Scale to kernel-level optimization or FPGA hardware when you need 69.7 Gops/s throughput.