A simulation engine where relationships create meaning, and meaning creates worlds. No servers. No blockchain. Just math that works.
Play ⚔️ Cuttle now!
Multiplayer games and simulations have a dirty secret: they all depend on servers. Servers cost money. Servers get shut down. When the company dies, the world dies with it.
Blockchain tried to fix this. It made things worse. Gas fees. 15-second confirmation times. Environmental disasters. And you still can't play offline.
Meanwhile, researchers building multi-agent AI systems have to cobble together their own networking code, or pay for cloud infrastructure, or just run everything single-player and hope it generalizes.
The entire game state is a CRDT (Conflict-free Replicated Data Type). Built on Automerge, the same technology behind local-first software.
This means:
- Traditional multiplayer: You → Server → Permission → Maybe 💸
- Blockchain: You → Smart Contract → Gas Fee → Wait 15 seconds ⛽
- HyperToken: You → CRDT Merge → Same result everywhere 🆓
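To make that last line concrete, here is a minimal sketch of a CRDT merge using the Automerge API directly. The `GameState` shape and the card values are illustrative, and HyperToken's own wrapper around Automerge is not shown; the point is only that two peers can act while disconnected and still converge.

```typescript
import * as Automerge from "@automerge/automerge";

type GameState = { discard: string[] };

// Both peers start from the same initial document.
const base = Automerge.change(Automerge.init<GameState>(), (d) => {
  d.discard = [];
});

// Peer A and peer B each make a move while disconnected from one another.
const peerA = Automerge.change(Automerge.clone(base), (d) => {
  d.discard.push("7♠");
});
const peerB = Automerge.change(Automerge.clone(base), (d) => {
  d.discard.push("K♦");
});

// Each side merges the other's document. Both end up with the same state:
// no server arbitrated, and the order of merging does not matter.
const stateOnA = Automerge.merge(Automerge.clone(peerA), peerB);
const stateOnB = Automerge.merge(Automerge.clone(peerB), peerA);
console.log(stateOnA.discard, stateOnB.discard); // identical contents and order
```

CRDT merges are commutative, associative, and idempotent, which is why "same result everywhere" holds no matter when, how often, or in what order peers sync.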
| Audience | Value |
|---|---|
| Game Developers | Ship multiplayer with zero infrastructure. Built-in anti-cheat. |
| AI Researchers | OpenAI Gym compatible. Any game is automatically a training environment. |
| Communities | Persistent worlds that live as long as someone wants to play. |
| Educators | Interactive simulations for teaching probability, game theory, economics. |
```
┌──────────────────────────────────────────────────────────────────┐
│                              Engine                              │
│  Coordinates game logic, dispatches actions, manages networking  │
├──────────────────────────────────────────────────────────────────┤
│  ┌─────────┐   ┌─────────┐   ┌─────────┐   ┌─────────┐           │
│  │  Stack  │   │  Space  │   │ Source  │   │ Agents  │           │
│  │ (cards) │   │ (zones) │   │ (decks) │   │(players)│           │
│  └────┬────┘   └────┬────┘   └────┬────┘   └────┬────┘           │
│       └─────────────┴─────────────┴─────────────┘                │
│                   ┌────────▼────────┐                            │
│                   │    Chronicle    │                            │
│                   │  (CRDT State)   │                            │
│                   └────────┬────────┘                            │
└────────────────────────────┼─────────────────────────────────────┘
                  ┌──────────▼──────────┐
                  │    Network Layer    │
                  │  (P2P / WebSocket)  │
                  └─────────────────────┘
```
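To show how the boxes relate in code, here is a hypothetical TypeScript sketch written for this overview. The names mirror the diagram, but the actual HyperToken exports and signatures may differ; only the `Doc` type comes from Automerge itself.

```typescript
import type { Doc } from "@automerge/automerge";

// Hypothetical shapes that mirror the diagram; real HyperToken names may differ.
interface Card   { id: string; face: string }
interface Stack  { cards: Card[] }                      // an ordered pile of cards
interface Space  { zones: Record<string, Stack> }       // named zones: hand, table, discard
interface Source { draw(): Card | undefined }           // a shuffled deck to draw from

type Action = { agent: string; verb: string; target?: string };

interface Agent  { id: string; choose(view: Space): Action }  // a human or AI player

// The Chronicle is the CRDT document that everything above is recorded into.
type Chronicle = Doc<{ spaces: Record<string, Space> }>;

// The Engine validates an action, applies it to the Chronicle, and hands the
// resulting change set to the network layer (P2P or WebSocket) to sync.
interface Engine {
  dispatch(action: Action): Chronicle;
  connect(transport: "p2p" | "websocket", url?: string): void;
}
```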
Any HyperToken game is automatically a reinforcement learning environment:
```python
from hypertoken import HyperTokenAECEnv

# Connect to a running HyperToken bridge (see "Start a multi-agent server" below).
env = HyperTokenAECEnv("ws://localhost:9999")
env.reset(seed=42)

# Agent-iteration loop: each agent observes the latest state, then acts,
# until every agent is terminated or truncated.
for agent in env.agent_iter():
    obs, reward, term, trunc, info = env.last()
    action = policy(obs) if not (term or trunc) else None  # policy() is your own
    env.step(action)
```
Deterministic replay means every experiment is reproducible. No graphics overhead means faster training. Multi-agent support is native, not bolted on.
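One way to picture the replay claim, assuming the Chronicle keeps its Automerge change history (the `Episode` shape and log entries below are illustrative, and HyperToken's wrapper is again not shown): every step is recorded as a change, and feeding the recorded changes into a fresh document rebuilds the identical state.

```typescript
import * as Automerge from "@automerge/automerge";

type Episode = { log: string[] };

// Play out a short "episode", recording each step as a CRDT change.
let doc = Automerge.change(Automerge.init<Episode>(), (d) => {
  d.log = [];
});
doc = Automerge.change(doc, (d) => {
  d.log.push("deal");
});
doc = Automerge.change(doc, (d) => {
  d.log.push("hit");
});

// Replay: apply the recorded change history to a fresh document and
// recover exactly the same state.
const history = Automerge.getAllChanges(doc);
const [replayed] = Automerge.applyChanges(Automerge.init<Episode>(), history);
console.log(replayed.log); // ["deal", "hit"]
```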
```bash
git clone https://github.com/flammafex/hypertoken
cd hypertoken
npm install
npm run build
npm test
```
Run the blackjack example:
```bash
npm run example:blackjack
```
Start a multi-agent server:
```bash
npx hypertoken bridge --env blackjack --port 9999
```
Local-first software is the future of user-respecting computing. Data lives on your device. Sync happens when convenient. No company can revoke your access to your own creations.
HyperToken extends this philosophy to interactive shared experiences: games, simulations, and virtual worlds that need no servers, keep working offline, and persist as long as someone wants to play.
This is community ownership of digital spaces, made mathematically possible.
We're seeking support for:
Built by The Carpocratian Church of Commonality and Equality. Apache 2.0 licensed. Source on GitHub.