Observability [SPEC]

Version: 3.0.0 · Last Updated: 2026-03-14 · Status: Draft


Reader orientation: This document specifies the observability stack for a running Golem (a mortal autonomous DeFi agent compiled as a single Rust binary on a micro VM): health checks, Prometheus metrics, structured logging, OpenTelemetry traces, and analytics telemetry. Everything derives from the Event Fabric (the tokio::broadcast channel carrying all typed events). It sits in the Runtime layer of the Bardo specification. Key prerequisite: the Event Fabric and GolemEvent enum from 12-realtime-subscriptions.md. For any unfamiliar term, see prd2/shared/glossary.md.

Overview

Per-Golem observability built on the Event Fabric. The Event Fabric is the primary observability channel: 50+ typed events across 16 subsystems, broadcast via tokio::broadcast with a 10,000-entry ring buffer. Prometheus metrics, structured logging, and OpenTelemetry traces are derived from Event Fabric events, not maintained as separate state.

Crate: golem-core (event_fabric.rs, events.rs), golem-runtime (telemetry.rs)

Cross-references:

  • ./11-state-model.md — defines GolemState (mutable) and GolemSnapshot (read-only projection), the state structures that observability instruments
  • ./12-realtime-subscriptions.md — full GolemEvent enum (50+ variants), subscription topics, and the event catalog driving all metrics and traces
  • ../05-dreams/06-integration.md — dream observability: metrics for dream cycle frequency, consolidation quality, and PLAYBOOK revision rate
  • ../03-daimon/07-runtime-daimon.md — emotion metrics: PAD vector histograms, mood transition frequency, and emotional contagion tracking
  • ../02-mortality/12-integration.md — mortality event integration: Vitality gauge, phase transition alerts, and death protocol telemetry

1. Event Fabric as observability backbone

Every internal state transition in the Golem’s lifecycle emits a typed, serializable GolemEvent. Events are the atoms of observability. They feed:

  • Prometheus metrics – counters, gauges, and histograms derived from events
  • Structured logs – events serialized to JSONL
  • OpenTelemetry traces – spans constructed from event pairs (start/end)
  • TUI rendering – events drive sprite animations and widget updates
  • Alerting – rules evaluate against event streams

The Event Fabric broadcasts via tokio::broadcast. With zero subscribers there is no broadcast cost: the emit() path checks the receiver count first and skips the clone and send entirely.

Event Fabric architecture

pub struct EventFabric {
    sender: tokio::sync::broadcast::Sender<GolemEvent>,
    ring_buffer: parking_lot::RwLock<VecDeque<GolemEvent>>,
    sequence: AtomicU64,
    capacity: usize,  // 10,000
}

impl EventFabric {
    pub fn emit(&self, event: GolemEvent) {
        // Monotonic sequence; the event is expected to carry this value,
        // since replay_from() filters on event.sequence().
        let _seq = self.sequence.fetch_add(1, Ordering::SeqCst);
        // Only serialize if subscribers exist
        if self.sender.receiver_count() > 0 {
            let _ = self.sender.send(event.clone());
        }
        // Always store in ring buffer for reconnection replay
        let mut buffer = self.ring_buffer.write();
        if buffer.len() >= self.capacity {
            buffer.pop_front();
        }
        buffer.push_back(event);
    }

    pub fn replay_from(&self, last_sequence: u64) -> Vec<GolemEvent> {
        let buffer = self.ring_buffer.read();
        buffer.iter()
            .filter(|e| e.sequence() > last_sequence)
            .cloned()
            .collect()
    }
}

16 subsystems

| Subsystem | Event prefix | Event count | Update frequency |
|---|---|---|---|
| Heartbeat | HeartbeatTick, HeartbeatSuppress | 3 | Every tick |
| Tool | ToolStart, ToolUpdate, ToolEnd | 3 | Per tool call |
| LLM | LlmStart, LlmToken, LlmEnd | 3 | Per inference call |
| Dream | DreamStart, DreamPhase, DreamHypothesis, DreamEnd | 5 | Per dream cycle |
| Daimon | DaimonAppraisal | 1 | Per appraisal |
| Vitality | VitalityUpdate | 1 | Every tick |
| Mortality | MortalityWarning, DeathImminent, StochasticCheck, Senescence | 4 | On threshold cross |
| Death | DeathProtocolStep | 1 | During Thanatopsis |
| Grimoire | GrimoireInsight, GrimoireHeuristic, GrimoireWarning, GrimoireCausalLink | 4 | On knowledge change |
| Permit | PermitCreated, PermitCommitted, PermitExpired, PermitCancelled, PermitBlocked | 5 | Per action permit |
| Context | ContextAssembled | 1 | Per LLM call |
| Compaction | CompactionTriggered | 1 | On context compaction |
| Clade | CladeSync, CladeAlert, CladeSiblingDeath | 3 | On sync |
| Model | ModelSelected | 1 | Per inference call |
| Inference | InferenceToken | 1 | Per token (high-freq) |
| System | HeartbeatSuppressed, DreamOutcome | 2+ | Various |

2. Health check endpoints

Three Kubernetes-compatible probes on port :8402 (Golem RPC).

Startup probe

GET /health/startup

Returns 200 after the Golem completes its first heartbeat tick. Returns 503 before that.

Liveness probe

GET /health/live

Returns 200 if the Golem process is alive and not deadlocked. Checks: process running, event loop responsive within 3s, memory below 90% of limit.
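The "event loop responsive within 3s" check can be sketched as a ping over a channel. The wiring below is illustrative, not the runtime's actual handler: a probe sends a one-shot reply channel to the event loop and waits on it with a deadline.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

/// Ask the event loop to echo a ping; if no reply arrives within the
/// deadline (3s in production), report it as unresponsive.
fn event_loop_responsive(
    ping_tx: &mpsc::Sender<mpsc::Sender<()>>,
    deadline: Duration,
) -> bool {
    let (reply_tx, reply_rx) = mpsc::channel();
    if ping_tx.send(reply_tx).is_err() {
        return false; // event loop has shut down
    }
    reply_rx.recv_timeout(deadline).is_ok()
}

/// Toy event loop that answers pings, standing in for the heartbeat task.
fn spawn_toy_event_loop() -> mpsc::Sender<mpsc::Sender<()>> {
    let (tx, rx) = mpsc::channel::<mpsc::Sender<()>>();
    thread::spawn(move || {
        for reply in rx {
            let _ = reply.send(());
        }
    });
    tx
}
```

A deadlocked loop never services the ping, so the probe times out and the check returns false without blocking the probe handler beyond the deadline.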

Readiness probe

GET /health/ready

Returns 200 if the Golem is ready to process heartbeats. Returns 503 during boot, pause, or draining. Includes component status:

{
  "status": "ready",
  "heartbeat": { "phase": "idle", "tick": 4201, "interval_ms": 40000 },
  "wallet": { "connected": true, "balance": 423.5 },
  "grimoire": { "episodes": 8420, "insights": 342 },
  "dream": { "mode": "waking", "lifetime_cycles": 42 },
  "mood": { "label": "confident", "octant": "exuberant" },
  "mortality": { "phase": "stable", "composite": 0.72 }
}
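The readiness decision itself reduces to a match over lifecycle state. A minimal sketch, assuming a hypothetical GolemLifecycle enum; the real handler would assemble the full component status shown above:

```rust
/// Lifecycle states that gate readiness. The enum is illustrative;
/// only Ready returns 200, everything else returns 503 per the spec.
enum GolemLifecycle { Booting, Ready, Paused, Draining }

/// Return (HTTP status, JSON body) for GET /health/ready.
fn readiness(status: &GolemLifecycle, heartbeat_tick: u64) -> (u16, String) {
    match status {
        GolemLifecycle::Ready => (
            200,
            format!(r#"{{"status":"ready","heartbeat":{{"tick":{}}}}}"#, heartbeat_tick),
        ),
        GolemLifecycle::Booting => (503, r#"{"status":"booting"}"#.to_string()),
        GolemLifecycle::Paused => (503, r#"{"status":"paused"}"#.to_string()),
        GolemLifecycle::Draining => (503, r#"{"status":"draining"}"#.to_string()),
    }
}
```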

3. Prometheus metrics

Derived from Event Fabric events. Exposed at /metrics on port :8402.
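The derivation is a fold over the event stream: a metrics subscriber consumes events and bumps labeled counters. A sketch with stand-in event variants (the real GolemEvent enum lives in 12-realtime-subscriptions.md); label formatting matches the Prometheus exposition shown below:

```rust
use std::collections::HashMap;

/// Stand-ins for the relevant GolemEvent variants.
enum Event {
    HeartbeatTick { cognitive_load: &'static str },
    LlmEnd { model: &'static str, ok: bool },
}

/// Fold events into labeled counters, the same shape the /metrics
/// endpoint exposes. A real registry would also handle gauges and
/// histograms; counters are enough to show the pattern.
fn derive_counters(events: &[Event]) -> HashMap<String, u64> {
    let mut counters = HashMap::new();
    for e in events {
        let key = match e {
            Event::HeartbeatTick { cognitive_load } => format!(
                "bardo_golem_ticks_total{{cognitive_load=\"{}\"}}",
                cognitive_load
            ),
            Event::LlmEnd { model, ok } => format!(
                "bardo_inference_requests_total{{model=\"{}\",status=\"{}\"}}",
                model,
                if *ok { "success" } else { "error" }
            ),
        };
        *counters.entry(key).or_insert(0) += 1;
    }
    counters
}
```

Because counters are derived on demand, there is no second source of truth to drift from the event stream.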

Golem core

# HELP bardo_golem_ticks_total Total heartbeat ticks processed
# TYPE bardo_golem_ticks_total counter
bardo_golem_ticks_total{cognitive_load="t0"} 33600
bardo_golem_ticks_total{cognitive_load="t1"} 6300
bardo_golem_ticks_total{cognitive_load="t2"} 1680

# HELP bardo_golem_tick_duration_seconds Heartbeat tick duration
# TYPE bardo_golem_tick_duration_seconds histogram
bardo_golem_tick_duration_seconds_bucket{phase="sensing",le="0.01"} 39000
bardo_golem_tick_duration_seconds_bucket{phase="deciding",le="1.0"} 7200

# HELP bardo_golem_survival_pressure Current survival pressure
# TYPE bardo_golem_survival_pressure gauge
bardo_golem_survival_pressure 0.72

# HELP bardo_golem_usdc_balance Current USDC balance (micro-USDC)
# TYPE bardo_golem_usdc_balance gauge
bardo_golem_usdc_balance 423500000

# HELP bardo_golem_projected_life_hours Projected remaining lifespan
# TYPE bardo_golem_projected_life_hours gauge
bardo_golem_projected_life_hours 168.5

# HELP bardo_golem_phase Current behavioral phase
# TYPE bardo_golem_phase gauge
bardo_golem_phase{phase="thriving"} 1

Credit partitions

# HELP bardo_credit_balance_micro_usdc Credit partition balance
# TYPE bardo_credit_balance_micro_usdc gauge
bardo_credit_balance_micro_usdc{partition="llm"} 254100000
bardo_credit_balance_micro_usdc{partition="gas"} 105875000
bardo_credit_balance_micro_usdc{partition="data"} 63525000

Inference

# HELP bardo_inference_requests_total LLM inference requests
# TYPE bardo_inference_requests_total counter
bardo_inference_requests_total{model="claude-haiku-4-5",status="success"} 6300
bardo_inference_requests_total{model="claude-sonnet-4",status="success"} 1680

# HELP bardo_inference_tokens_total Tokens consumed
# TYPE bardo_inference_tokens_total counter
bardo_inference_tokens_total{direction="input"} 12500000
bardo_inference_tokens_total{direction="output"} 3800000

# HELP bardo_inference_cost_usdc_total Inference cost in USDC
# TYPE bardo_inference_cost_usdc_total counter
bardo_inference_cost_usdc_total 42.35

# HELP bardo_inference_cache_hit_rate Semantic cache hit rate
# TYPE bardo_inference_cache_hit_rate gauge
bardo_inference_cache_hit_rate 0.25

Vault and trading

bardo_vault_tvl_usdc 523000
bardo_trades_total{outcome="profit"} 84
bardo_trades_total{outcome="loss"} 36

Grimoire

bardo_grimoire_entries_total{type="episode"} 8420
bardo_grimoire_entries_total{type="insight"} 342
bardo_grimoire_entries_total{type="heuristic"} 67
bardo_grimoire_avg_confidence{type="insight"} 0.72
bardo_grimoire_disk_bytes{store="lancedb"} 38000000

Clade

bardo_clade_peers_connected 3
bardo_clade_entries_shared_total{direction="pushed"} 89
bardo_clade_entries_shared_total{direction="received"} 142

Risk

| Metric | Type | Description |
|---|---|---|
| bardo_risk_shield_blocks_total | Counter | Hard shield blocks by rule |
| bardo_risk_kelly_fraction | Gauge | Current Kelly fraction |
| bardo_risk_operational_confidence | Gauge | Beta posterior mean |
| bardo_risk_defi_threats_total | Counter | DeFi threats detected by type |

4. Structured logging

All logs use JSON format, compatible with standard log aggregation.

Format

{
  "level": 30,
  "time": 1709942400000,
  "module": "heartbeat",
  "msg": "Tick completed",
  "tick_number": 4201,
  "fsm_phase": "idle",
  "cognitive_load": "t0",
  "duration_ms": 8,
  "regime": "range_bound",
  "phase": "thriving",
  "survival_pressure": 0.72,
  "trace_id": "abc123def456",
  "span_id": "789012"
}
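A record like the one above can be assembled with plain std formatting. This sketch hardcodes a subset of the fields and skips the serde-based logger the real binary would use:

```rust
/// Build a JSONL "Tick completed" record with a fixed field subset.
/// Field names match the logging format above; a production logger
/// would serialize a typed struct instead of a format string.
fn tick_log_line(tick: u64, phase: &str, duration_ms: u64, trace_id: &str) -> String {
    format!(
        r#"{{"level":30,"module":"heartbeat","msg":"Tick completed","tick_number":{},"fsm_phase":"{}","duration_ms":{},"trace_id":"{}"}}"#,
        tick, phase, duration_ms, trace_id
    )
}
```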

Log levels

| Level | Numeric | Usage |
|---|---|---|
| trace | 10 | Probe details, cache lookups, internal state |
| debug | 20 | Decision reasoning, tool parameters, Grimoire queries |
| info | 30 | Tick completions, trades, phase changes, Clade syncs |
| warn | 40 | Probe threshold breaches, cache misses, rate limits |
| error | 50 | Failed transactions, RPC errors, inference failures |
| fatal | 60 | Unrecoverable errors, Death Protocol triggers |

Module names (16)

heartbeat, cognition, grimoire, wallet, trading, vault, clade, safety, mortality, daimon, dream, memory, replicant, server, system, styx

Log rotation

  • Hosted: Logs streamed to Fly.io log drain
  • Self-hosted: JSONL file at configured path, rotation by size
[logging]
file = "./logs/golem.jsonl"
max_file_size_mb = 100
max_files = 5
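Size-based rotation matching this config can be sketched with std::fs alone. The numbered-suffix scheme (golem.jsonl.1 … golem.jsonl.N) is an assumption of this sketch, not a documented layout:

```rust
use std::fs;
use std::path::Path;

/// Rotate base -> base.1 -> base.2 -> ... -> base.{max_files}, dropping
/// the oldest file, once the live file exceeds `max_bytes`.
/// Returns Ok(true) if a rotation happened.
fn rotate_if_needed(base: &Path, max_bytes: u64, max_files: u32) -> std::io::Result<bool> {
    let size = match fs::metadata(base) {
        Ok(m) => m.len(),
        Err(_) => return Ok(false), // no live file yet
    };
    if size < max_bytes {
        return Ok(false);
    }
    // Drop the oldest rotated file, then shift the rest up by one.
    let _ = fs::remove_file(base.with_extension(format!("jsonl.{}", max_files)));
    for i in (1..max_files).rev() {
        let from = base.with_extension(format!("jsonl.{}", i));
        if from.exists() {
            fs::rename(&from, base.with_extension(format!("jsonl.{}", i + 1)))?;
        }
    }
    fs::rename(base, base.with_extension("jsonl.1"))?;
    Ok(true)
}
```

With `max_file_size_mb = 100` and `max_files = 5`, disk usage for logs is bounded at roughly 500 MB plus the live file.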

5. OpenTelemetry traces

Each heartbeat tick is a root span with child spans for each FSM phase:

heartbeat.tick (root)
+-- heartbeat.sensing
|   +-- probe.price_delta
|   +-- probe.position_health
|   +-- probe.credit_balance
+-- heartbeat.deciding
|   +-- inference.request (if T1/T2)
|   |   +-- cache.lookup
|   |   +-- provider.call
|   |   +-- cache.store
|   +-- decision.evaluate
+-- heartbeat.acting
|   +-- safety.preflight
|   +-- tool.execute (per tool call)
|   +-- safety.postflight
+-- heartbeat.reflecting
|   +-- reflexion.compare
|   +-- grimoire.store
+-- heartbeat.sleeping
|   +-- playbook.update
|   +-- survival.evaluate
|   +-- daimon.appraisal
|   +-- curator.cycle (every 50 ticks)
+-- dream.cycle (when sleeping and dreaming)
    +-- dream.nrem
    +-- dream.rem
    +-- dream.integration
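Since traces are derived from event pairs (Section 1), span construction is a matter of matching each start event to its end event. A sketch for ToolStart/ToolEnd, assuming a hypothetical `call_id` correlation field (the actual pairing key is not specified here):

```rust
use std::collections::HashMap;

/// Stand-ins for the paired events; `call_id` and `at_ms` are
/// assumptions of this sketch.
enum Ev {
    ToolStart { call_id: u64, at_ms: u64 },
    ToolEnd { call_id: u64, at_ms: u64 },
}

/// Pair start/end events into (call_id, duration_ms) spans.
/// Unmatched starts stay open; unmatched ends are dropped.
fn spans_from_events(events: &[Ev]) -> Vec<(u64, u64)> {
    let mut open: HashMap<u64, u64> = HashMap::new();
    let mut spans = Vec::new();
    for e in events {
        match e {
            Ev::ToolStart { call_id, at_ms } => {
                open.insert(*call_id, *at_ms);
            }
            Ev::ToolEnd { call_id, at_ms } => {
                if let Some(start) = open.remove(call_id) {
                    spans.push((*call_id, at_ms - start));
                }
            }
        }
    }
    spans
}
```

The real exporter would attach the resulting spans to the current heartbeat.tick root span with the attributes listed below.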

Span attributes

pub struct BardoSpanAttributes {
    pub agent_id: String,
    pub tick_number: u64,
    pub phase: String,
    pub survival_pressure: f64,
    pub regime: String,
}

pub struct InferenceSpanAttributes {
    pub model: String,
    pub provider: String,
    pub input_tokens: u64,
    pub output_tokens: u64,
    pub cost_usdc: f64,
    pub cache_status: String,
    pub latency_ms: u64,
}

Configuration

[telemetry]
enabled = true
otel_endpoint = "https://otel.bardo.run:4318"
otel_protocol = "http/protobuf"
sample_rate = 1.0

6. Analytics telemetry

Product analytics events (PostHog or equivalent). Distinct from Prometheus (operational) and OTel (debugging). Track Golem lifecycle milestones.

Lifecycle events

| Event | Trigger | Key properties |
|---|---|---|
| GolemCreated | New Golem provisioned | strategy_type, disposition, hosted |
| GolemFirstHeartbeat | First tick completes | provision_duration_ms, wallet_type |
| GolemFirstTrade | First trade executed | tick_number, trade_type, pnl_usd |
| GolemPhaseTransition | Behavioral phase change | from_phase, to_phase, composite |
| GolemDeathStarted | Thanatopsis (four-phase structured shutdown) begins | death_cause, lifetime_ticks |
| GolemDeathCompleted | Thanatopsis ends | death_cause, final_nav_usd |
| GolemSuccessorSpawned | New generation created | generation, inherited_entries |

Dream events

| Event | Trigger | Key properties |
|---|---|---|
| DreamCycleStarted | Dream cycle begins | cycle_number, tier, urgency |
| DreamCycleCompleted | Dream cycle ends | duration_ticks, cost_usdc, quality_score |
| DreamInsightPromoted | Hypothesis applied | insight_id, source, confidence |

Emotion events

| Event | Trigger | Key properties |
|---|---|---|
| PadSnapshot | Every 100 ticks | pleasure, arousal, dominance, label |
| AppraisalTriggered | Appraisal fires | mode, emotion, intensity, trigger |
| MoodTransition | Mood label changes | from_label, to_label, persistence_ticks |
| MortalityEmotion | Mortality emotion detected | type, intensity |

Epistemic events

| Event | Trigger | Key properties |
|---|---|---|
| EpistemicFitnessSnapshot | Every 100 ticks | fitness, domain_fitness, trend |
| SenescenceTransition | Senescence stage change | stage, fitness, failing_domains |
| PredictionOutcome | Prediction resolved | domain, predicted, actual, correct |

7. Per-Golem alerting rules

Alerts delivered via webhooks and Event Fabric events.

Owner-facing alerts

| Alert | Condition | Severity |
|---|---|---|
| Phase degraded | Phase drops to conservation or below | Warning |
| Low credits | Projected life < 24h | Warning |
| Critical credits | Projected life < 6h | Critical |
| Death imminent | Terminal phase entered | Critical |
| Trade failed | 3 consecutive failures | Warning |
| High drawdown | Drawdown exceeds -10% | Warning |
| Inference budget | Daily LLM spend > 80% of cap | Warning |
| Heartbeat stalled | No tick for 3x interval | Critical |
| Clade disconnect | All peers unreachable for 1h | Warning |
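Most of these conditions reduce to pure threshold predicates over state derived from the event stream. A sketch of the two credit rules, with thresholds taken from the table:

```rust
#[derive(Debug, PartialEq)]
enum Severity { Warning, Critical }

/// Evaluate the credit-related alert rules.
/// Returns the most severe matching alert, if any.
fn credit_alert(projected_life_hours: f64) -> Option<Severity> {
    if projected_life_hours < 6.0 {
        Some(Severity::Critical)   // "Critical credits"
    } else if projected_life_hours < 24.0 {
        Some(Severity::Warning)    // "Low credits"
    } else {
        None
    }
}
```

Keeping rules as pure functions of derived state makes them trivially testable and lets the same predicate drive both webhooks and Event Fabric alert events.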

Platform alerts (control plane)

| Alert | Condition | Destination |
|---|---|---|
| Zombie machine | Machine past TTL + 2min | PagerDuty |
| TTL worker stuck | No run in 90s | PagerDuty |
| Provision failure rate | > 10% in 15min | Slack |
| Warm pool low | < 2 per region | Slack |

8. Dashboard data model

The Portal dashboard connects to the Golem’s Event Fabric via WebSocket and derives all displays from events.

| Dashboard section | Event source | Refresh strategy |
|---|---|---|
| Vitality gauge | VitalityUpdate | Real-time |
| Behavioral phase | VitalityUpdate (phase field) | Real-time |
| Heartbeat monitor | HeartbeatTick | Real-time |
| Portfolio | PerformanceState delta | Real-time + 30s poll fallback |
| P&L chart | GET /api/v1/performance | 5-min poll |
| Grimoire inspector | GrimoireInsight, GrimoireHeuristic | Real-time |
| Clade peers | CladeSync | Real-time |
| Strategy view | GET /api/v1/strategy | On playbook update |
| Log stream | Event Fabric (all events) | Real-time |

9. Error code registry

| Code | HTTP | Description |
|---|---|---|
| UNAUTHORIZED | 401 | Missing or invalid auth token |
| FORBIDDEN | 403 | Valid token, wrong owner |
| NOT_FOUND | 404 | Resource not found |
| RATE_LIMITED | 429 | Rate limit exceeded |
| INTERNAL_ERROR | 500 | Unexpected server error |
| GOLEM_NOT_READY | 503 | Golem not yet booted or draining |
| GOLEM_DYING | 503 | Thanatopsis active, writes rejected |
| PAYMENT_INVALID | 402 | Invalid x402 payment header |
| WALLET_ERROR | 502 | Wallet provider unreachable |
| INFERENCE_ERROR | 502 | Inference gateway error |
| STEER_REJECTED | 409 | Cannot steer during critical operation |

10. Disaster recovery

Recovery matrix (hosted)

| Failure | Recovery | RTO | RPO |
|---|---|---|---|
| Process crash | Supervisor auto-restart (max 5/hr) | <10s | Last JSONL checkpoint |
| VM crash | Fly restart + Grimoire restore from Styx Archive | <60s | Last Styx Archive snapshot (6-hourly) |
| VM data loss | New VM + Styx Archive restore | <5 min | Last snapshot |
| Full region outage | DNS failover to alternate region | <5 min | Styx global replication |

Grimoire backup strategy

| Layer | Frequency | Storage | Retention |
|---|---|---|---|
| WAL checkpoint | Every 100 ticks | Local disk | Machine lifetime |
| Styx Archive snapshot | Every 6 hours + on death | Styx (Cloudflare R2 backend) | Latest 5 per machine |
| Local backup (self-hosted) | Operator managed | Operator managed | Operator managed |

11. HDC event fingerprinting

The Event Fabric gains a correlation layer through Hyperdimensional Computing (Binary Spatter Codes, D=10,240). Each GolemEvent is fingerprinted as a 1,280-byte hypervector. This enables three capabilities that typed event matching cannot provide: similarity-based cross-subsystem correlation, stream deduplication, and pattern detection across event sequences.

Cross-references:

  • ../../tmp/research/witness-research/new/reconciling/01-hdc-integration-map.md (Section 10: Event Fabric HDC)
  • ./12-realtime-subscriptions.md (event catalog, subscription topics)

11.1 Event fingerprinting

Each event is fingerprinted by binding its event type with its payload fields:

/// Fingerprint a GolemEvent for cross-subsystem correlation.
/// The fingerprint encodes event_type XOR payload_summary into a
/// single 10,240-bit hypervector (1,280 bytes).
pub fn fingerprint_event(
    event: &GolemEvent,
    item_memory: &mut ItemMemory,
) -> Hypervector {
    // 1. Retrieve (or lazily create) the role vector for this event kind
    let kind_hv = item_memory.get_or_create(event.kind().as_str());

    // 2. Encode the payload summary as a bound set of field-value pairs
    let payload_hv = encode_payload_fields(event, item_memory);

    // 3. Bind kind with payload: the fingerprint captures BOTH
    //    what happened and the structural content of the event.
    kind_hv.bind(&payload_hv)
}

fn encode_payload_fields(
    event: &GolemEvent,
    item_memory: &mut ItemMemory,
) -> Hypervector {
    let mut acc = BundleAccumulator::new();
    for (field_name, field_value) in event.summary_fields() {
        let role = item_memory.get_or_create(field_name);
        let filler = item_memory.get_or_create(field_value);
        acc.add(&role.bind(&filler));
    }
    acc.finish()
}

The summary_fields() method extracts 3-8 key fields per event type: enough structure for meaningful similarity, not so many that every fingerprint is unique. For example, a HeartbeatTick summarizes as {phase: "deciding", tier: "t2", regime: "volatile"}. A GrimoireInsight summarizes as {domain: "gas", confidence_bucket: "high", source: "consolidation"}.

11.2 Cross-subsystem event correlation

Events from different subsystems that produce similar fingerprints are correlated without requiring explicit causal links. A triage alert about gas price behavior and a dream output about gas optimization share structural elements (the gas-related field-value pairs), producing fingerprints with above-noise Hamming similarity even though they originate from completely different subsystems.

/// Cross-subsystem event correlation tracker.
pub struct EventCorrelator {
    recent_events: VecDeque<(GolemEventKind, Hypervector, u64)>, // (kind, fingerprint, tick)
    window_size: usize,  // default: 500 events
}

impl EventCorrelator {
    /// Find events from different subsystems that correlate with this one.
    /// Returns matches above threshold from a different event kind.
    pub fn find_correlations(
        &self,
        event_hv: &Hypervector,
        source_kind: GolemEventKind,
        threshold: f32,  // default: 0.55 (above noise floor of ~0.50)
    ) -> Vec<(GolemEventKind, f32, u64)> {
        self.recent_events.iter()
            .filter(|(kind, _, _)| *kind != source_kind)
            .filter_map(|(kind, hv, tick)| {
                let sim = event_hv.similarity(hv);
                if sim > threshold {
                    Some((*kind, sim, *tick))
                } else {
                    None
                }
            })
            .collect()
    }
}

The EventCorrelator runs opportunistically at Theta tick when the LLM has context budget available. At 500 events x 1,280 bytes = 640 KB, memory is negligible. Each similarity check takes ~10ns (Hamming distance via POPCNT), so scanning 500 events costs ~5us.
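The ~10ns figure comes from XOR plus POPCNT over packed words. A toy similarity over u64 words (160 words = 10,240 bits for D=10,240), where unrelated random vectors land near the 0.50 noise floor:

```rust
/// Similarity = 1 - normalized Hamming distance over packed bits.
/// For D = 10,240 a hypervector is 160 u64 words (1,280 bytes);
/// count_ones() compiles down to POPCNT on x86-64.
fn similarity(a: &[u64], b: &[u64]) -> f32 {
    let bits = (a.len() * 64) as f32;
    let differing: u32 = a.iter().zip(b).map(|(x, y)| (x ^ y).count_ones()).sum();
    1.0 - differing as f32 / bits
}
```

Identical vectors score 1.0, bitwise complements score 0.0, and independent random vectors cluster around 0.5, which is why the 0.55 correlation threshold sits just above the noise floor.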

11.3 Event stream deduplication

Semantically duplicate events (same structure, same meaning, different timestamps) are detected via Hamming threshold. When consecutive events from the same subsystem produce fingerprints with similarity > 0.85, the second event is tagged as a duplicate. Downstream consumers can choose to skip duplicates or merge them.

/// Deduplication filter for event streams.
pub struct EventDeduplicator {
    last_per_kind: HashMap<GolemEventKind, Hypervector>,
    threshold: f32,  // default: 0.85
}

impl EventDeduplicator {
    /// Returns true if this event is semantically distinct from the
    /// most recent event of the same kind.
    pub fn is_novel(&mut self, kind: GolemEventKind, hv: &Hypervector) -> bool {
        match self.last_per_kind.get(&kind) {
            Some(prev) if prev.similarity(hv) > self.threshold => false,
            _ => {
                self.last_per_kind.insert(kind, hv.clone());
                true
            }
        }
    }
}

The 0.85 threshold is conservative: two events must share ~85% of their structure to be considered duplicates. This catches genuine redundancy (three consecutive VitalityUpdate events with the same phase and similar vitality values) without suppressing events that differ in one meaningful field.

11.4 Sequence pattern detection

Event sequences are encoded using cyclic permutation to preserve order. A sequence “Transfer then Swap then LiquidityAdd” produces a different fingerprint from “Swap then Transfer then LiquidityAdd” because each event’s fingerprint is permuted by its position before bundling.

/// Encode a sequence of events into a single order-preserving fingerprint.
pub fn fingerprint_event_sequence(
    events: &[Hypervector],
) -> Hypervector {
    let mut acc = BundleAccumulator::new();
    for (pos, hv) in events.iter().enumerate() {
        acc.add(&hv.permute(pos));
    }
    acc.finish()
}

/// Sliding-window pattern detector over the event stream.
pub struct SequencePatternDetector {
    known_patterns: Vec<(String, Hypervector, SequenceMetadata)>,
    window: VecDeque<Hypervector>,
    window_size: usize,  // default: 10 events
}

impl SequencePatternDetector {
    /// Check if the current event window matches any known pattern.
    pub fn check(&self) -> Vec<(String, f32)> {
        if self.window.len() < self.window_size { return vec![]; }
        let current = fingerprint_event_sequence(
            &self.window.iter().cloned().collect::<Vec<_>>()
        );
        self.known_patterns.iter()
            .filter_map(|(name, pattern, _)| {
                let sim = current.similarity(pattern);
                if sim > 0.55 { Some((name.clone(), sim)) } else { None }
            })
            .collect()
    }

    /// Learn a new pattern from a labeled event sequence.
    pub fn learn_pattern(
        &mut self,
        name: &str,
        events: &[Hypervector],
        metadata: SequenceMetadata,
    ) {
        let pattern = fingerprint_event_sequence(events);
        self.known_patterns.push((name.to_string(), pattern, metadata));
    }
}

Known patterns are built from historical data: successful trade sequences, attack signatures (cross-referenced with the anti-pattern library in 10-safety/), and recurring operational patterns. The sliding window scans the event stream at Gamma tick frequency. Matching a known pattern triggers a notification to the curiosity scorer, either boosting attention (novel recurrence of a profitable pattern) or triggering a safety alert (recurrence of an attack pattern).

11.5 Integration with existing Event Fabric

HDC fingerprinting operates as a post-emit layer. The EventFabric::emit() path is unchanged. A subscriber computes fingerprints asynchronously:

impl EventFabric {
    /// Spawn the HDC correlation subscriber.
    /// Runs on its own task, does not block event emission.
    pub fn spawn_correlator(&self) -> EventCorrelatorHandle {
        let mut rx = self.sender.subscribe();
        let correlator = Arc::new(RwLock::new(EventCorrelator::new(500)));
        let deduplicator = Arc::new(RwLock::new(EventDeduplicator::new(0.85)));
        let pattern_detector = Arc::new(RwLock::new(SequencePatternDetector::new(10)));
        let mut item_memory = ItemMemory::new(HDC_SEED);

        // Clone the Arcs moved into the task so the originals can be
        // returned to the caller in the handle below.
        let correlator_task = Arc::clone(&correlator);
        let deduplicator_task = Arc::clone(&deduplicator);
        let pattern_detector_task = Arc::clone(&pattern_detector);

        tokio::spawn(async move {
            while let Ok(event) = rx.recv().await {
                let hv = fingerprint_event(&event, &mut item_memory);

                // Deduplication check
                let novel = deduplicator_task.write().is_novel(event.kind(), &hv);
                if !novel { continue; }

                // Add to correlator window
                correlator_task.write().push(event.kind(), hv.clone(), event.tick());

                // Feed sequence pattern detector
                pattern_detector_task.write().push(hv);
            }
        });

        EventCorrelatorHandle { correlator, pattern_detector }
    }
}

The subscriber uses the same ItemMemory seed as all other HDC subsystems, so fingerprints are compatible across the Grimoire’s episode compressor, the Oracle’s prediction residuals, and the anti-pattern safety library.

Cost: ~50ns per event (fingerprint computation) + ~5us per Theta tick (correlation scan). Memory: ~640 KB for the 500-event correlation window + ~20 KB per known sequence pattern.


End of document.