Migration Guide

Migrate from Event Sourcing to ATOMiK

Event sourcing gives you a complete history. ATOMiK gives you instant current state. This guide shows how to replace event replay with O(1) state reconstruction, eliminate snapshots, and simplify undo -- and when to keep your event log.

The Fundamental Shift

Event sourcing stores every change and replays them to reconstruct state. ATOMiK accumulates changes into a single value and reconstructs state in one operation: current_state = initial_state ⊕ accumulator. You trade history for speed: O(1) instead of O(n), constant storage instead of linear growth, and free undo via self-inverse instead of compensating events.
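The core algebra can be sketched in a few lines of plain Python. This is a toy model of the accumulator idea, not the atomik_core API; the delta values are arbitrary illustrations:

```python
# Toy model of the ATOMiK algebra: a 64-bit XOR accumulator.
# (Illustrative only -- the real library wraps this in a context object.)
MASK = (1 << 64) - 1

initial_state = 0
accumulator = 0

# Event sourcing would append each delta to a log and replay it later.
# ATOMiK folds each delta into one fixed-size value as it arrives.
for delta in [0x1F, 0x2A00, 0x3C0000]:
    accumulator = (accumulator ^ delta) & MASK

# O(1) reconstruction: one XOR, regardless of how many deltas arrived.
current_state = initial_state ^ accumulator
print(hex(current_state))  # -> 0x3c2a1f
```

Storage stays constant: three deltas or three million, the accumulator is still one machine word.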

Event Sourcing to ATOMiK Mapping

Each event sourcing concept has a direct (or eliminated) counterpart in ATOMiK.

Event Sourcing          | ATOMiK              | Notes
------------------------|---------------------|------
Event Log               | Accumulator         | Event sourcing appends every event to an ordered log (linear growth). ATOMiK XOR-accumulates every delta into a single fixed-size value (constant space).
Event Replay            | ctx.read()          | ES replays all events since the last snapshot to reconstruct state: O(n). ATOMiK reconstructs in O(1): initial_state XOR accumulator.
Snapshot                | ctx.swap()          | ES periodically snapshots to bound replay cost. ATOMiK's swap() atomically captures current state and resets the accumulator -- no serialization needed.
Compensating Event      | Re-apply same delta | ES undo requires publishing a compensating event with reversal logic. ATOMiK undo is algebraic: XOR is self-inverse, so re-applying the same delta reverses it.
Projection / Read Model | ctx.read()          | ES projects events into read-optimized views (CQRS). ATOMiK's read() returns current state directly -- no separate read model needed.
Event Schema Evolution  | Fixed-size delta    | ES events require versioning and upcasting as schemas evolve. ATOMiK deltas are fixed-size integers -- no schema to evolve.
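The snapshot row is the least obvious mapping, so here is a sketch of swap() semantics as a hypothetical toy class in plain Python. ToyContext is not the real AtomikContext; it only mirrors the load/accum/read/swap method names used in this guide:

```python
class ToyContext:
    """Toy stand-in for an ATOMiK context: load / accum / read / swap."""

    def __init__(self):
        self._initial = 0
        self._acc = 0

    def load(self, value):
        # Set the baseline state the accumulator is relative to.
        self._initial = value

    def accum(self, delta):
        # Fold one delta into the fixed-size accumulator.
        self._acc ^= delta

    def read(self):
        # O(1): one XOR instead of replaying an event log.
        return self._initial ^ self._acc

    def swap(self):
        # Snapshot analogue: capture current state, reset the accumulator.
        state = self.read()
        self._initial, self._acc = state, 0
        return state

ctx = ToyContext()
ctx.load(0)
ctx.accum(0b1010)
ctx.accum(0b0110)
snapshot = ctx.swap()          # captures state, resets accumulator
print(snapshot, ctx.read())    # -> 12 12
```

Unlike an event-sourcing snapshot, nothing is serialized or scheduled: the "snapshot" is just the current word-sized value becoming the new baseline.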

Pattern 1: Eliminate Event Replay

The most impactful migration: replace O(n) event replay with O(1) state reconstruction. No more snapshot management, no more replay lag on startup.

Before: Event Sourcing

class Account:
    def rebuild_from_events(self, event_store):
        # Load latest snapshot (if any)
        snap = event_store.latest_snapshot(self.id)
        if snap:
            self.state = snap.state
            start = snap.version
        else:
            self.state = 0
            start = 0

        # Replay all events since snapshot
        events = event_store.load(
            self.id, since=start
        )
        for e in events: # O(n)!
            self.apply(e)

        # Maybe snapshot for next time
        if len(events) > 1000:
            event_store.save_snapshot(
                self.id, self.state
            )

After: ATOMiK

from atomik_core import AtomikContext

class Account:
    def __init__(self):
        self.ctx = AtomikContext()
        self.ctx.load(0)

    def apply_change(self, delta):
        self.ctx.accum(delta)

    def current_state(self):
        # O(1). Always. No replay.
        return self.ctx.read()

    def checkpoint(self):
        # Atomic snapshot + reset
        return self.ctx.swap()

# No event store. No snapshots.
# No replay. No snapshot scheduling.

Pattern 2: Eliminate Compensating Events

In event sourcing, undo requires designing and publishing a compensating event with reversal semantics. In ATOMiK, undo is algebraic and free.

Before: Compensating Events

# To undo a transfer, publish a reversal
def cancel_transfer(original_event):
    # Must design reversal logic
    compensation = TransferReversed(
        from_acct=original_event.to_acct,
        to_acct=original_event.from_acct,
        amount=original_event.amount,
        reason="cancellation",
        ref=original_event.id
    )
    event_store.publish(compensation)

    # Log grows. Complexity grows.
    # Every event type needs a reversal.

After: ATOMiK Self-Inverse

# To undo: apply the same delta again
def cancel_transfer(ctx, original_delta):
    ctx.accum(original_delta)
    # That's it.
    #
    # XOR is self-inverse:
    # state XOR delta XOR delta = state
    #
    # No reversal logic to design.
    # No compensating event schema.
    # No additional storage.
    # Works for every delta type.

# Mathematical guarantee:
# a XOR a = 0 (proven in Lean4)
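The self-inverse property is easy to check directly. This is plain Python illustrating the algebra, not the atomik_core API, and the hex values are arbitrary:

```python
state = 0xBEEF
delta = 0x00F0   # some change to apply

state ^= delta   # apply the change
applied = state  # state after the change
state ^= delta   # "undo": apply the SAME delta again

assert state == 0xBEEF            # original state restored exactly
assert applied ^ delta == 0xBEEF  # equivalent statement of the same fact
# No compensating event, no reversal schema: a ^ a == 0 for any a.
```

The guarantee holds for every possible delta value, which is why no per-event-type reversal logic exists in the ATOMiK model.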

Trade-off Comparison

Event sourcing and ATOMiK optimize for different things. This table shows where each approach wins.

Dimension                 | Event Sourcing                                   | ATOMiK
--------------------------|--------------------------------------------------|-------
State reconstruction      | O(n) -- replay from last snapshot                | O(1) -- initial XOR accumulator
Storage growth            | Linear -- every event stored forever             | Constant -- single accumulator value
Undo / compensation       | Manual compensating events                       | Free -- self-inverse (XOR twice = identity)
Full audit trail          | Yes -- every event preserved with ordering       | No -- deltas are accumulated, not stored individually
Temporal queries          | Yes -- replay to any point in time               | No -- only current state is available
CQRS pattern              | Native -- separate command and query models      | Not needed -- read() returns current state directly
Ordering requirements     | Strict -- events must be totally ordered         | None -- deltas commute (any order, same result)
Infrastructure complexity | High -- event store, projectors, snapshot store  | Minimal -- single context object, pip install
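The "no ordering" row can be verified directly: XOR accumulation is order-independent. A plain-Python check (illustrative values, not the library API):

```python
import random

deltas = [0x01, 0x40, 0x2200, 0x9000, 0x050000]

def accumulate(seq):
    # Fold a sequence of deltas into a single accumulator value.
    acc = 0
    for d in seq:
        acc ^= d
    return acc

in_order = accumulate(deltas)

shuffled = deltas[:]
random.shuffle(shuffled)
# Deltas commute: any arrival order yields the same accumulator,
# so no total ordering (and no ordering infrastructure) is required.
assert accumulate(shuffled) == in_order
print(hex(in_order))  # -> 0x5b241
```

This is the property that lets concurrent writers skip the sequencing machinery an event log requires.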

When Event Sourcing Is Still Better

ATOMiK is not a universal replacement. These are legitimate reasons to keep your event log.

Regulatory audit trails

Financial services, healthcare, and compliance domains often require a complete, ordered, tamper-evident record of every state change. ATOMiK accumulates deltas into a single value -- individual events are not preserved. If you must answer "what happened at 14:32:07 on March 3rd?", keep your event log.

Temporal queries / time travel

Event sourcing lets you reconstruct state at any historical point by replaying events up to that timestamp. ATOMiK provides only the current state. If your domain requires "show me the account balance as of last Tuesday", event sourcing is the right tool.

Complex domain event choreography

If your system relies on event-driven sagas, process managers, or cross-aggregate reactions where the semantics of individual events matter (OrderPlaced triggers InventoryReserved triggers PaymentCharged), ATOMiK's opaque deltas cannot replace meaningful domain events.

Existing projections that work well

If your CQRS read models are performant and your event store handles production load without issues, migration cost may exceed benefit. ATOMiK is most valuable when replay latency, snapshot overhead, or storage growth are actual pain points.

The Hybrid Approach

You do not have to choose exclusively. Many teams keep event sourcing for audit-critical aggregates while using ATOMiK for hot-path state where performance matters most.

# Hybrid: event sourcing for audit trail,
# ATOMiK for real-time state queries

from atomik_core import AtomikContext

class HybridAggregate:
    def __init__(self):
        self.event_store = EventStore()  # your existing audit-trail store
        self.ctx = AtomikContext()       # hot-path state
        self.ctx.load(0)

    def apply(self, event, delta):
        # Write path: both systems
        self.event_store.append(event) # for compliance
        self.ctx.accum(delta)          # for speed

    def current_state(self):
        # Read path: O(1) via ATOMiK
        return self.ctx.read() # no replay

    def audit_at(self, timestamp):
        # Time travel: via event store
        return self.event_store.replay_to(timestamp)

Key Takeaway

Event sourcing optimizes for understanding the past. ATOMiK optimizes for knowing the present. If you need both, use both -- ATOMiK on the hot read path, event sourcing for audit and temporal queries. The write path (one accum() call) adds negligible overhead.