How we measure bi-temporal resolver latency.
Last updated: 2026-05-05
Benchmark methodology
Latency is the question every AI engineer on a buying committee asks. Mycelium publishes the methodology first, the harness second, the numbers third. The methodology is on this page. The harness is in the open-core ai-brain-starter repository under benchmarks/resolver. The first public run lands when memory-runtime-pro v1.0 ships.
What we measure
Wall-clock latency of a bi-temporal query against the typed-memory graph. Bi-temporal means the query carries two time axes: when the fact was true (valid time) and when the system learned about it (transaction time). Sub-200ms p99 against an enterprise-scale graph is the public target, set to match Zep's published claim. Latency is measured at the resolver boundary, after authentication and before agent-runtime serialization, so the number is the substrate's, not the surrounding stack's.
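A bi-temporal point query of the kind measured here can be sketched as parameterized SQL. The table and column names below (`memory_records`, `valid_from`, `tx_from`, and so on) are illustrative assumptions, not the actual ai-brain-starter schema:

```python
# Sketch of a bi-temporal point query. Table and column names are
# assumptions for illustration, not the shipped ai-brain-starter schema.

def bitemporal_query(entity_id: str, valid_at: str, known_at: str):
    """Build SQL asking: what did the system believe at `known_at`
    (transaction time) about facts true at `valid_at` (valid time)?"""
    sql = """
        SELECT fact_id, predicate, value
        FROM memory_records
        WHERE entity_id = %s
          AND valid_from <= %s AND %s < valid_to   -- valid-time axis
          AND tx_from    <= %s AND %s < tx_to      -- transaction-time axis
    """
    return sql, (entity_id, valid_at, valid_at, known_at, known_at)

sql, params = bitemporal_query("entity-42", "2025-11-01", "2026-01-15")
```

The two half-open interval checks are what make the query bi-temporal: the first pair selects facts true at the requested valid time, the second restricts to what had been recorded by the requested transaction time.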
Test corpus
The public benchmark runs against the ai-brain-starter test fixtures: a synthetic enterprise graph with 50,000 typed memory records, 5,000 entities, 12,000 decisions, and 8,000 events spanning 24 months of valid time. The fixtures ship with the repository so the methodology is reproducible by anyone with a git clone and a Postgres install.
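As a rough illustration, a synthetic corpus of this shape can be generated from a fixed seed so every clone produces identical fixtures. The record layout below is a hypothetical sketch, not the shipped fixture format:

```python
import random
from datetime import datetime, timedelta

# Hypothetical sketch of synthetic fixture generation. The record shape
# is an assumption for illustration, not the shipped fixture format.

def make_fixture(n_records=50_000, months=24, seed=7):
    rng = random.Random(seed)  # fixed seed keeps the corpus reproducible
    start = datetime(2024, 5, 1)
    records = []
    for i in range(n_records):
        # Spread valid time across the 24-month window.
        valid_from = start + timedelta(days=rng.uniform(0, months * 30))
        # The system learns about a fact some time after it became true.
        tx_from = valid_from + timedelta(hours=rng.uniform(0, 72))
        records.append({
            "id": i,
            "entity_id": f"entity-{rng.randrange(5_000)}",
            "valid_from": valid_from,
            "tx_from": tx_from,
        })
    return records
```

The fixed seed is the property that matters: reproducibility of the corpus is what lets a third-party run be compared against the published numbers.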
How the harness works
- Step 1. Load the 50,000-record fixture into a clean Postgres database.
- Step 2. Warm the cache with one read pass over every record (eliminates cold-start variance).
- Step 3. Run 1,000 bi-temporal queries against the warmed cache, randomly sampled across entity, time, and predicate dimensions.
- Step 4. Record p50, p95, p99, and max latency at the resolver boundary.
- Step 5. Repeat the run on three different machine sizes (4-core / 16-core / 64-core) and publish all three series.
- Step 6. Re-run on every tagged release of memory-runtime-pro and append to the public benchmark history.
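Steps 2 through 4 can be sketched as a warm-then-measure loop. The `run_query` callable is a stand-in for the real resolver call in benchmarks/resolver, and the nearest-rank percentile here is one reasonable choice; the shipped harness may compute percentiles differently:

```python
import time

def percentile(samples, p):
    """Nearest-rank percentile over a list of latency samples."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

def run_benchmark(run_query, queries, warmup):
    # Step 2: one untimed pass to eliminate cold-start variance.
    for q in warmup:
        run_query(q)
    # Step 3: timed bi-temporal queries against the warmed cache.
    latencies = []
    for q in queries:
        t0 = time.perf_counter()
        run_query(q)
        latencies.append((time.perf_counter() - t0) * 1000)  # milliseconds
    # Step 4: the latency series recorded at the resolver boundary.
    return {
        "p50": percentile(latencies, 50),
        "p95": percentile(latencies, 95),
        "p99": percentile(latencies, 99),
        "max": max(latencies),
    }
```

`time.perf_counter` is used because it is monotonic and high-resolution; wall-clock time (`time.time`) can jump under NTP adjustment and would corrupt the series.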
How you reproduce it
Clone github.com/adelaidasofia/ai-brain-starter, follow benchmarks/resolver/README.md to install the resolver harness, and run it on your own machine. Your numbers are yours. Send anomalies to contact@myceliumai.co; we publish reproducible third-party runs alongside our own.
Current status
| Item | Status |
| --- | --- |
| Methodology version | v1, published 2026-05-05 |
| Public harness | In development; ships in ai-brain-starter v0.5 |
| First public run | Scheduled with the memory-runtime-pro v1.0 release |
| Target | Sub-200ms p99 across all three machine sizes |
| Verification standard | Reproducible third-party runs from the public harness against the public fixtures |
What we will not do
Mycelium will not publish latency numbers from a private internal harness against private fixtures. Numbers without a public harness and public corpus are unverifiable claims, and the procurement reader is right to ignore them. The methodology lands first; the numbers follow when both the harness and the corpus are public.
Mycelium · founded 2026