The Shift
Traditional systems treat clients as request senders. The client says what it wants. The server interprets, parses, routes, deserialises, queries, serialises, and responds.
This model is different.
Clients are pointer generators into a structured data space. They don't describe what they want; they state where it is.
That single shift changes everything downstream.
The Core Model
The system is built from simple primitives:
WAL: append-only log (the truth)
idmap: key → offset (the index)
schema: field ordinals (the type system)
jump table: ordinal → handler (the dispatch)
span<byte>: the universal representation
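These primitives can be sketched in a few lines. This is illustrative only: names like `Wal` and the length-prefixed record layout are assumptions, and Python's `memoryview` stands in for span<byte>.

```python
class Wal:
    """Append-only log: the byte buffer is the truth; offsets never move."""

    def __init__(self):
        self._buf = bytearray()

    def append(self, payload: bytes) -> int:
        """Append one record (4-byte length prefix + payload), return its offset."""
        offset = len(self._buf)
        self._buf += len(payload).to_bytes(4, "little")
        self._buf += payload
        return offset

    def read(self, offset: int) -> memoryview:
        """Zero-copy span over one record's payload."""
        n = int.from_bytes(self._buf[offset:offset + 4], "little")
        return memoryview(self._buf)[offset + 4 : offset + 4 + n]


wal = Wal()
idmap: dict[int, int] = {}           # key → WAL offset (the index)
idmap[123] = wal.append(b"alice")    # the idmap points into the log
```

Reading is then `wal.read(idmap[123])`: one dictionary lookup, one span, no copies.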
Compile-time access and metadata-driven access are equivalent when dispatch goes through a jump table. Runtime metadata is just delayed compile-time resolution.
There is no fundamental difference between a compiled field accessor and a runtime ordinal lookup. One was resolved early. The other was resolved late. The mechanism is identical.
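A sketch of that equivalence, using a hypothetical fixed record layout (the field offsets and sizes here are made up for illustration):

```python
# Hypothetical fixed-layout record: [id: 8 bytes][age: 4 bytes][flags: 1 byte]
record = bytes.fromhex("0102030405060708" "20000000" "01")

# "Compiled" accessor: the offset was resolved when this function was written.
def age(rec: bytes) -> int:
    return int.from_bytes(rec[8:12], "little")

# Metadata-driven accessor: the same offsets, resolved late through a table.
LAYOUT = [(0, 8), (8, 4), (12, 1)]        # ordinal → (offset, size)

def field(rec: bytes, ordinal: int) -> bytes:
    off, size = LAYOUT[ordinal]
    return rec[off:off + size]
```

Both paths perform the same slice; only the moment of resolution differs.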
The Protocol Collapse
REST / JSON
String parsing
Object materialisation
Route matching
Serialisation overhead
Allocations everywhere
Ordinal Protocol
[route][id][field]
No parsing
No routing
No serialisation
Just pointer → bytes → stream
The entire HTTP/JSON stack collapses into three integers and a span read.
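The server side of that collapse fits in a dozen lines. The wire layout (`u16` route, `u64` id, `u16` field) and the store shape are assumptions made for the sketch:

```python
import struct

# Hypothetical wire format: [route: u16][id: u64][field: u16] — 12 bytes total.
REQUEST = struct.Struct("<HQH")

# Stand-in storage: key → list of pre-encoded field bytes (in place of WAL offsets).
STORE = {123: [b"\x7b\x00\x00\x00\x00\x00\x00\x00", b"alice"]}

def get_field(key: int, field: int) -> bytes:
    return STORE[key][field]

JUMP = [None, None, get_field]           # route ordinal → handler

def serve(request: bytes) -> bytes:
    route, key, field = REQUEST.unpack(request)  # fixed-width reads, no parsing
    return JUMP[route](key, field)               # table dispatch, no route matching
```

`serve(REQUEST.pack(2, 123, 1))` returns the raw field bytes: three integers in, a span out.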
Clients Become Pointers
Clients hold route ordinals, field ordinals, and keys. They generate compact requests that effectively address data directly.
Traditional
GET /users/123?fields=name
Parse URL. Match route. Parse query string. Deserialise. Filter. Serialise. Respond.
Ordinal
[route=2][id=123][field=1]
Jump table[2]. Idmap[123]. Read span at field offset 1. Return bytes. Done.
The client isn't asking a question. It's dereferencing a pointer.
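The client's whole job reduces to packing three integers. A sketch, reusing the same assumed `u16`/`u64`/`u16` layout:

```python
import struct

def make_request(route: int, key: int, field: int) -> bytes:
    """A client is a pointer generator: it emits an address, not a question."""
    return struct.pack("<HQH", route, key, field)  # hypothetical wire layout

req = make_request(2, 123, 1)   # [route=2][id=123][field=1]
```

Twelve bytes, versus the string "GET /users/123?fields=name" plus everything required to parse it.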
Real World Parallels
This pattern isn't invented. It's how fast systems already work:
Memory-mapped files: direct addressing, no parsing
CPU page tables: virtual → physical mapping via an idmap
GPU buffers: structured data, no object layer
NIC descriptor rings: precomputed buffers handed directly to hardware
Kafka / log systems: append-only truth, replayable state
Filesystems: inode → block mapping
Database execution engines: query plan as execution, not interpretation
None of this is new. We're just applying it end-to-end.
Performance Characteristics
O(1) lookup + sequential read
Zero-copy via spans
No allocations in hot path
Branch predictability via jump tables
Compaction as the only heavy operation
Performance comes from removing work, not adding optimisation.
Schema as Instruction Set
Field ordinals act like opcodes. The jump table is the execution engine. Data + schema = program.
This is a virtual machine where:
The query is the execution plan
The schema is the instruction set
The handlers are compiled behaviour
The data is the operand
You're not "querying a database." You're executing a program against a structured memory space.
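That framing can be made literal: a query is a list of field ordinals, and the jump table executes it. The record shape and handler set below are hypothetical:

```python
# A hypothetical record with three fields, keyed by ordinal.
RECORD = {0: b"123", 1: b"alice", 2: b"alice@example.com"}

# The jump table is the execution engine: each ordinal is an opcode.
HANDLERS = [
    lambda rec: rec[0],   # opcode 0: read id
    lambda rec: rec[1],   # opcode 1: read name
    lambda rec: rec[2],   # opcode 2: read email
]

def execute(program: list[int], rec: dict[int, bytes]) -> list[bytes]:
    """The query IS the execution plan: dispatch each ordinal through the table."""
    return [HANDLERS[op](rec) for op in program]
```

`execute([1, 0], RECORD)` "runs" a two-instruction program against the record and returns the name and id bytes.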
Distributed Memory Model
Frame the whole system:
WAL = physical memory
idmap = page table
schema = type system
client = pointer generator
This behaves like a distributed, permissioned memory space backed by a log.
Trade-offs and Constraints
Be honest about what this costs:
Ordinals must be stable: renumbering breaks clients
Schema versioning is critical: evolution must be managed
Debugging is lower-level: no friendly JSON to inspect
Less forgiving than traditional APIs: precision is required
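One way to manage the versioning constraint is to refuse any request whose schema version doesn't match. This is purely illustrative; the text doesn't prescribe a mechanism:

```python
SCHEMA_VERSION = 3   # bumped whenever any ordinal changes meaning

def check_version(request_version: int) -> None:
    """Ordinals are only valid pointers if both sides agree on the layout."""
    if request_version != SCHEMA_VERSION:
        raise ValueError(
            f"schema mismatch: client={request_version} server={SCHEMA_VERSION}"
        )
```

A matching version passes silently; a stale client fails loudly instead of reading the wrong bytes.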
This is not easier. It's faster. Those are different things.