Lesson 1: The Graph Laplacian — Diffusion's Discrete Twin
You know $\nabla^2 f$. On a regular grid, it computes the sum of differences from neighbors. The graph Laplacian is exactly this object, freed from the grid.
Given a graph $G = (V, E)$, define:
- $A$ — the adjacency matrix: $A_{ij} = 1$ if $(i,j) \in E$, else $0$
- $D$ — the degree matrix: $D_{ii} = \deg(i)$, zero off-diagonal
- $L = D - A$ — the combinatorial Laplacian
The quadratic form tells you everything: $x^T L x = \sum_{(i,j) \in E} (x_i - x_j)^2$. This measures total variation across edges. Signals that are constant on each connected component have zero energy; spiky signals (large differences across edges) have high energy.
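A few lines of NumPy confirm the identity on a toy path graph (the graph and signal here are arbitrary examples):

```python
import numpy as np

# Toy path graph 0-1-2-3 with an arbitrary signal x.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))            # degree matrix
L = D - A                             # combinatorial Laplacian

x = np.array([3.0, 1.0, 4.0, 1.0])
edges = [(0, 1), (1, 2), (2, 3)]

quad = x @ L @ x                      # the quadratic form...
energy = sum((x[i] - x[j]) ** 2 for i, j in edges)
assert np.isclose(quad, energy)       # ...equals the edge-difference energy

# A constant signal is maximally smooth: zero energy
assert np.isclose(np.ones(4) @ L @ np.ones(4), 0.0)
```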
Play with it
Click any node to deposit heat. Watch it diffuse along edges via $dp/dt = -\alpha L p$. The sidebar shows the Laplacian matrix and the current state vector in real time.
Notice what happens: total heat is conserved under pure diffusion (no decay). The constant eigenmode (eigenvalue $\lambda_0 = 0$) is never damped. Heat spreads but never disappears. This changes in Lesson 2.
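A quick sketch of the conservation claim, using explicit Euler on the same toy path graph ($\alpha$ and $\Delta t$ are illustrative and well inside the stability region):

```python
import numpy as np

# Path graph 0-1-2-3; illustrative parameters.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
alpha, dt = 0.5, 0.05

p = np.array([1.0, 0.0, 0.0, 0.0])    # deposit all heat on node 0
total0 = p.sum()
for _ in range(1000):
    p = p + dt * (-alpha * L @ p)      # explicit Euler on dp/dt = -alpha L p

assert np.isclose(p.sum(), total0)               # total heat is conserved
assert np.allclose(p, total0 / 4, atol=1e-3)     # and spreads out evenly
```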
Lesson 2: The SOMA Equation — Diffusion, Decay, Stability
SOMA's Medium evolves under three forces: diffusion along edges, exponential decay, and external sources:

$$\frac{dp}{dt} = -\alpha L p - \gamma p + S(t)$$
The decay term $-\gamma p$ breaks conservation. Without persistent sources, all traces fade to zero — this is biologically necessary. Pheromone trails that lasted forever would make the system rigid. The source term $S(t)$ represents agents depositing traces as they work.
Equilibrium
At steady state, $dp/dt = 0$:

$$p^* = (\alpha L + \gamma I)^{-1} S$$
The equilibrium exists and is unique because $\alpha L + \gamma I$ has all eigenvalues $\alpha \lambda_i + \gamma > 0$, hence is invertible; it is also globally attracting.
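This can be checked numerically: solve $(\alpha L + \gamma I)\,p^* = S$ directly, then confirm that explicit Euler integration converges to the same vector (toy graph, illustrative parameters):

```python
import numpy as np

# Toy path graph and illustrative parameters.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
alpha, gamma, dt = 0.5, 0.2, 0.05
S = np.array([1.0, 0.0, 0.0, 0.0])     # persistent source at node 0

# Unique equilibrium: (alpha L + gamma I) p* = S
p_star = np.linalg.solve(alpha * L + gamma * np.eye(4), S)

# Explicit Euler converges to it from an empty medium
p = np.zeros(4)
for _ in range(2000):
    p = p + dt * (-alpha * L @ p - gamma * p + S)

assert np.allclose(p, p_star, atol=1e-6)
```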
Numerical stability
SOMA uses explicit Euler: $p(t+\Delta t) = p(t) + \Delta t \cdot f(p)$. The Jacobian is $J = -(\alpha L + \gamma I)$ with eigenvalues $-(\alpha \lambda_i + \gamma)$. Explicit Euler is stable when $|1 + \Delta t \cdot \mu| < 1$ for all eigenvalues $\mu$ of $J$.
By Gershgorin, $\lambda_{\max}(L) \leq 2 d_{\max}$. So a sufficient condition for stability is:

$$\Delta t < \frac{2}{2\alpha d_{\max} + \gamma} \;\leq\; \frac{2}{\alpha \lambda_{\max}(L) + \gamma}$$
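The limit is easy to probe numerically: compute $\lambda_{\max}(L)$ exactly, then step just below and just above $\Delta t = 2/(\alpha \lambda_{\max} + \gamma)$ and watch the iterate decay or explode (toy graph, illustrative parameters):

```python
import numpy as np

# Toy path graph and illustrative parameters.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
alpha, gamma = 0.5, 0.2

lam_max = np.linalg.eigvalsh(L).max()       # exact largest eigenvalue
dt_crit = 2.0 / (alpha * lam_max + gamma)   # explicit-Euler stability limit

def simulate(dt, steps=500):
    p = np.array([1.0, 0.0, 0.0, 0.0])
    for _ in range(steps):
        p = p + dt * (-alpha * L @ p - gamma * p)
    return p

assert np.abs(simulate(0.9 * dt_crit)).max() < 1.0    # decays: stable
assert np.abs(simulate(1.1 * dt_crit)).max() > 1e6    # explodes: unstable
```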
Play with it
Click to deposit. Adjust $\alpha$, $\gamma$, and $\Delta t$. The "destabilize" button cranks $\Delta t$ past the stability bound — watch what happens. Toggle the stability clamp to see SOMA's fix.
Lesson 3: Stigmergic Agents — Coordination Without Communication
Termites build cathedrals without architects. The mechanism: indirect coordination through environmental modification. An agent modifies the shared medium; other agents respond to the modification, not to each other.
In SOMA, each agent at node $v$ computes the pheromone gradient along every incident edge:

$$\partial p(v, u) = p(u) - p(v) \quad \text{for each neighbor } u \text{ of } v$$
Movement follows an $\varepsilon$-greedy policy: with probability $\varepsilon$, move to a random neighbor (explore). Otherwise, select a neighbor with probability proportional to $\max(0, \partial p(v, u))$ (exploit).
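A minimal sketch of this policy, assuming the edge gradient $\partial p(v, u) = p(u) - p(v)$; the adjacency list and pheromone values are hypothetical:

```python
import random

# Hypothetical adjacency list and pheromone field.
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
p = {0: 0.1, 1: 0.0, 2: 0.8, 3: 0.3}

def step(v, eps=0.1, rng=random):
    """One move of an epsilon-greedy stigmergic agent sitting at node v."""
    if rng.random() < eps:                        # explore
        return rng.choice(neighbors[v])
    # exploit: weight each neighbor by the positive part of the gradient
    weights = {u: max(0.0, p[u] - p[v]) for u in neighbors[v]}
    total = sum(weights.values())
    if total == 0.0:                              # no uphill direction: wander
        return rng.choice(neighbors[v])
    r, acc = rng.random() * total, 0.0
    for u, w in weights.items():                  # roulette-wheel selection
        acc += w
        if r < acc:
            return u
    return u                                      # float-edge fallback
```

With `eps=0`, an agent at node 0 always climbs to node 2, the pheromone peak.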
The baseline: coupon collector
Expected time for $m$ agents doing independent random walks to find $k$ bugs among $n$ nodes, in the well-mixed limit where each step behaves like a uniform sample:

$$\mathbb{E}[T] \approx \frac{n}{m} H_k = \frac{n}{m} \sum_{j=1}^{k} \frac{1}{j}$$
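Under the well-mixed assumption (each agent's step is roughly a uniform node sample), the coupon-collector estimate $\mathbb{E}[T] \approx (n/m)\sum_{j=1}^{k} 1/j$ can be checked by simulation; the sizes below are arbitrary:

```python
import random

def trial(n, k, m, rng=random):
    """Steps until m uniformly-sampling agents have hit all k bug nodes."""
    bugs = set(range(k))          # place the bugs on nodes 0..k-1, wlog
    steps = 0
    while bugs:
        steps += 1
        for _ in range(m):        # each agent samples one node per step
            bugs.discard(rng.randrange(n))
    return steps

random.seed(0)
n, k, m = 100, 5, 3               # arbitrary sizes
avg = sum(trial(n, k, m) for _ in range(2000)) / 2000
predicted = (n / m) * sum(1.0 / j for j in range(1, k + 1))
print(f"simulated {avg:.1f} vs predicted {predicted:.1f}")
```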
Play with it
Run Random Walk (no pheromone, no gradient) and Stigmergic (agents deposit pheromone on discovery, follow gradients) side by side. The counter shows steps to find all bugs. Run multiple trials to see the distribution.
Lesson 4: Resolution & Homeostasis — The Anti-Inflammatory
Here's a problem: stigmergic agents pile on. Once a bug is found, its pheromone attracts more agents. They arrive, confirm the finding, deposit more pheromone, attracting even more agents. In immunology, this is a cytokine storm — an inflammatory cascade that damages the host.
SOMA's fix: resolution traces. After discovery, agents deposit a second, anti-inflammatory trace $r$ that locally suppresses the attracting pheromone, yielding an effective field $p_{\text{eff}}$.
Agents sense $p_{\text{eff}}$, not raw $p$. Resolved areas become invisible even if they still have pheromone. Agents move on to unexplored territory.
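A sketch of the masking idea. The specific rule $p_{\text{eff}} = \max(0,\, p - r)$ is an assumed form chosen for illustration, not necessarily SOMA's exact formula:

```python
import numpy as np

# Assumed masking rule (illustrative): effective pheromone is the
# attractant minus the resolution trace, floored at zero.
def p_eff(p, r):
    return np.maximum(0.0, p - r)

p = np.array([0.0, 0.9, 0.1])     # strong trace at node 1 (a found bug)
r = np.zeros(3)
assert p_eff(p, r).argmax() == 1  # before resolution, node 1 dominates

r[1] = 1.0                        # agents resolve the finding
assert p_eff(p, r)[1] == 0.0      # node 1 goes dark despite its pheromone
assert p_eff(p, r).argmax() == 2  # attention shifts to unexplored territory
```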
Play with it
Toggle resolution on and off. Without it, agents cluster at the first discovery. With it, they spread out and find everything.
Lesson 5: From Scalars to Sheaves — and the Road Ahead
Everything so far used a constant sheaf: each node carries a scalar (pheromone level), and the restriction maps are the identity. The full SOMA framework assigns each node a typed stalk: a whole space of structured data rather than a single number.
The sheaf Laplacian $L_{\mathcal{F}}$ generalizes the graph Laplacian. Where $L$ compares scalar values across edges, $L_{\mathcal{F}}$ compares stalks through restriction maps and measures disagreement via the coboundary operator. The spectral gap of $L_{\mathcal{F}}$ governs convergence speed — Hansen & Ghrist (2021) proved exponential convergence to global sections.
Think of it this way: the constant sheaf says "every node speaks the same language." The full sheaf says "every node has its own dialect, and the restriction maps are the translation dictionaries." Sheaf cohomology $H^1(K, \mathcal{F})$ detects obstructions — local agreements that can't be made globally consistent. When $H^1 \neq 0$, the system has structural inconsistencies. This is your debuggability tool.
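The disagreement machinery fits in a few lines. Below is a toy cellular sheaf on a triangle graph with 2-dimensional stalks; the restriction maps are illustrative choices, not SOMA's. The kernel of the coboundary $\delta$ is the space of global sections ($H^0$); twisting one "translation dictionary" destroys every nonzero global section, the kind of structural inconsistency the cohomological diagnostics are meant to flag:

```python
import numpy as np

# Toy cellular sheaf on the triangle graph, 2-dimensional stalks.
d, nodes, edges = 2, [0, 1, 2], [(0, 1), (1, 2), (0, 2)]
rot = np.array([[0.0, -1.0], [1.0, 0.0]])   # a mismatched "dialect"

def sheaf_laplacian(F):
    """Coboundary delta and L_F = delta^T delta for restriction maps F."""
    delta = np.zeros((d * len(edges), d * len(nodes)))
    for k, (u, v) in enumerate(edges):
        delta[d*k:d*(k+1), d*u:d*(u+1)] = F[(k, u)]
        delta[d*k:d*(k+1), d*v:d*(v+1)] = -F[(k, v)]
    return delta, delta.T @ delta

# Constant sheaf: every restriction map is the identity.
F_const = {(k, v): np.eye(d) for k, e in enumerate(edges) for v in e}
delta, L_F = sheaf_laplacian(F_const)
# Global sections (kernel of delta) = the d-dimensional constant signals.
assert d * len(nodes) - np.linalg.matrix_rank(delta) == d

# Twist one map: no nonzero signal satisfies every edge constraint anymore.
F_twist = dict(F_const)
F_twist[(2, 2)] = rot
delta_t, _ = sheaf_laplacian(F_twist)
assert d * len(nodes) - np.linalg.matrix_rank(delta_t) == 0
```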
The Roadmap
What follows is the mathematical territory ahead. Each item becomes a future interactive lesson.
Active Inference: Agents That Minimize Surprise
Replace $\varepsilon$-greedy exploration with free-energy-driven epistemic foraging. Each agent carries a generative model $p(o, s; \theta)$ and minimizes variational free energy:

$$F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s; \theta)\right]$$
High uncertainty at a node means high epistemic value, so agents are drawn to explore it.
This solves the utils/crypto.py isolation problem: even with zero pheromone, the node's uncertainty attracts curious agents.
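For intuition, here is the free-energy computation $F = \mathbb{E}_{q(s)}[\ln q(s) - \ln p(o, s)]$ for the smallest possible generative model (one binary hidden state, one observation; all probabilities are made up). It shows that $F$ is minimized by the exact posterior, where it equals the surprise $-\ln p(o)$:

```python
import numpy as np

# Smallest possible generative model: binary hidden state s, observation o.
# All probabilities are illustrative.
p_s = np.array([0.5, 0.5])                 # prior over s
p_o_given_s = np.array([[0.9, 0.1],        # row s=0: distribution over o
                        [0.2, 0.8]])       # row s=1: distribution over o

def free_energy(q, o):
    """F = E_q[ln q(s) - ln p(o, s)] for a belief q over hidden states."""
    joint = p_o_given_s[:, o] * p_s        # p(o, s) for this observation
    return float(np.sum(q * (np.log(q) - np.log(joint))))

o = 0
posterior = p_o_given_s[:, o] * p_s
evidence = posterior.sum()                 # p(o)
posterior /= evidence

F_post = free_energy(posterior, o)
F_flat = free_energy(np.array([0.5, 0.5]), o)
assert np.isclose(F_post, -np.log(evidence))   # F at the posterior = surprise
assert F_flat > F_post                         # any other belief scores worse
```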
Interactive lesson concept: Two-panel comparison. Left: epsilon-greedy agents ignore isolated nodes. Right: Active Inference agents are drawn to high-uncertainty regions. Watch the coverage difference in real time.
Belief Markets: Truth via Price Discovery
Agents deposit belief traces with confidence stakes. Conflicting beliefs trigger a tatonnement auction, with the standard multiplicative price update:

$$p_{i,t+1} = p_{i,t}\,\big(1 + \lambda\, z_i(p_t)\big)$$

where $z_i(p)$ is the excess demand for claim $i$ at prices $p$.
Cole & Fleischer (2008) proved polynomial convergence for weak gross substitutes. Market prices converge to collective credence — a 10% accuracy gain over single-shot baselines (Gho et al., 2025).
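A toy tatonnement run, using a Fisher market with Cobb-Douglas buyers (a weak-gross-substitutes case) and the multiplicative update $p \leftarrow p\,(1 + \lambda z(p))$; budgets and weights are illustrative stand-ins for confidence stakes:

```python
import numpy as np

# Fisher market: 2 agents, 2 goods, unit supply; numbers are illustrative.
B = np.array([1.0, 2.0])                       # budget (stake) of each agent
b = np.array([[0.7, 0.3],                      # agent 0's preference weights
              [0.2, 0.8]])                     # agent 1's preference weights
supply = np.ones(2)

p = np.ones(2)                                 # initial prices
for _ in range(500):
    demand = (B[:, None] * b / p).sum(axis=0)  # Cobb-Douglas demand
    z = demand - supply                        # excess demand
    p = p * (1.0 + 0.2 * z)                    # multiplicative tatonnement

# Cobb-Douglas equilibrium prices: p_i = sum_a B_a * b_ai
p_star = (B[:, None] * b).sum(axis=0)
assert np.allclose(p, p_star, atol=1e-6)
```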
Interactive lesson concept: A market simulator. Multiple agents submit beliefs with stakes. Watch the tatonnement iterate. See prices converge. Compare market consensus to individual agent accuracy.
Immune Selection: Evolving the Agent Population
Successful agents clone with mutation; failed agents are culled. The population evolves toward problem-solving fitness.
This is CLONALG (de Castro, 2000) applied to agent configurations. Affinity maturation = accelerated mutation near solutions. Negative selection = kill agents that react to "self" (the system's normal operating state). Memory cells = dormant solution templates.
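A minimal CLONALG-style loop, evolving a single scalar parameter (an "exploration rate") toward a hypothetical optimum; the fitness function and all constants are illustrative, not SOMA's actual agent configuration:

```python
import random

# CLONALG-style clonal selection on one scalar "exploration rate".
OPT = 0.3                            # hypothetical best exploration rate
def fitness(x):
    return -abs(x - OPT)             # affinity: higher is better

random.seed(1)
pop = [random.random() for _ in range(20)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:5]                  # successful agents survive
    clones = []
    for rank, x in enumerate(elite):
        for _ in range(4):           # clone each elite agent...
            sigma = 0.05 * (rank + 1)    # ...mutating the best ones least
            clones.append(min(1.0, max(0.0, x + random.gauss(0.0, sigma))))
    pop = elite + clones[:15]        # cull the rest; population stays at 20

best = max(pop, key=fitness)
print(f"converged exploration rate: {best:.3f}")
```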
Interactive lesson concept: A population dynamics visualization. Watch agent parameter distributions shift over generations. Successful exploration rates and deposit intensities propagate. Failed configurations die out. See the population converge.
Cohomological Diagnostics
The full SOMA Medium is a cellular sheaf $(K, \mathcal{F}, L_{\mathcal{F}})$ where $K$ is a dynamic simplicial complex and $\mathcal{F}$ assigns typed stalks. The master equation generalizes the scalar dynamics of Lesson 2:

$$\frac{dp}{dt} = -\alpha L_{\mathcal{F}}\, p - \gamma p + u(t)\, S(t)$$

where $L_{\mathcal{F}}$ is the sheaf Laplacian and $u(t)$ is urgency amplification from deadline-aware traces. The urgency function is exponential: $u(t) = \alpha \cdot e^{\beta(t - t_{\text{deadline}})}$.
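The urgency term is simple enough to sketch directly (constants are illustrative):

```python
import math

# u(t) = a * exp(b * (t - t_deadline)); constants are illustrative.
a, b, t_deadline = 1.0, 0.5, 10.0

def urgency(t):
    return a * math.exp(b * (t - t_deadline))

assert urgency(0.0) < 0.01                    # far from deadline: negligible
assert math.isclose(urgency(t_deadline), a)   # at the deadline: baseline a
assert urgency(14.0) > 7.0                    # past it: exponential blow-up
```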
When this is fully implemented, sheaf cohomology provides the diagnostic layer: $H^1 \neq 0$ pinpoints where agents locally agree but globally contradict each other.
Interactive lesson concept: A graph where each node has a vector-valued stalk (not just a scalar). Restriction maps enforce consistency across edges. Visualize the cohomology: highlight edges where local agreement breaks down. Watch the sheaf Laplacian drive the system toward consistency.