Hippocampus Architecture — Client-Side Cognitive Partner
Status: Research & Ideation (12-hour deep think cycle)
Origin: Jeff, 2026-03-21, Ideas thread
Parent concept: ACI (Adaptive Context Injection)
Concept
A lightweight local process that watches LLM conversations from the user's side, detects relevance signals, and injects context on demand — mimicking how the human hippocampus surfaces memories without conscious effort.
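The loop described above can be sketched minimally. This is an illustrative sketch only: the note store, trigger keywords, and function names are all hypothetical, and the relevance scorer is deliberately the dumbest thing that could work (keyword matching), since one of the open questions below is what truly needs a model versus pure code.

```python
import re

# Hypothetical in-memory note store; a real version would index the
# user's local files, past sessions, etc.
NOTES = {
    "deploy": "Last deploy failed on 2026-03-18 due to a missing env var.",
    "hippocampus": "Hippocampus = client-side context surfacing; see ACI doc.",
}

def relevance_signals(message: str) -> list[str]:
    """Return note keys whose trigger word appears in the message."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    return [key for key in NOTES if key in words]

def inject_context(message: str) -> str:
    """Prepend surfaced notes to the outgoing message (next-turn injection)."""
    hits = relevance_signals(message)
    if not hits:
        return message
    context = "\n".join(f"[memory] {NOTES[k]}" for k in hits)
    return f"{context}\n\n{message}"
```

A message like "Why did the deploy break?" would go out with the deploy note prepended; a message with no trigger words passes through untouched. All compute stays on the user's device, consistent with the core principle below.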
Core Principle
Put the compute load on the machine, not the LLM. The user's device is already paid for. Token processing isn't.
Key Questions to Resolve
- Timing: Can context be injected before the model responds, or only next-turn?
- Access: How does a local process tap into the conversation stream?
- Minimum viable version: What's the simplest thing that proves the concept?
- LLM vs program: What truly needs a model vs what's pure code?
- OpenClaw integration: Hooks, plugins, or something else?
- Scalability: Could this work for any OpenClaw user, not just us?
Directory Structure
- research/ — docs, references, OpenClaw internals research
- prototypes/ — code experiments
- thinking/ — hourly session notes from the deep think cycle
Relationship to ACI
- ACI = static optimization (lighter file injection)
- Hippocampus = dynamic intelligence (context surfacing mid-conversation)

They complement each other: ACI may become Layer 0-1 of a system where Hippocampus handles Layer 2-3.