Hippocampus Architecture — Client-Side Cognitive Partner

Status: Research & Ideation (12-hour deep think cycle)
Origin: Jeff, 2026-03-21, Ideas thread
Parent concept: ACI (Adaptive Context Injection)

Concept

A lightweight local process that watches LLM conversations from the user's side, detects relevance signals, and injects context on demand — mimicking how the human hippocampus surfaces memories without conscious effort.
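The watch → detect → inject loop can be sketched in a few lines. This is a hedged illustration only: the names (`MemoryStore`, `watch_turn`) are invented for this sketch, it assumes the local process can read each turn as plain text, and it uses bare keyword matching as a stand-in for real relevance detection.

```python
# Illustrative sketch, not an existing API: a local store of context
# snippets keyed by trigger terms, surfaced when a turn mentions them.
import re

class MemoryStore:
    """Local snippet store keyed by trigger terms (hypothetical)."""
    def __init__(self):
        self._snippets = {}  # term -> context snippet

    def add(self, term, snippet):
        self._snippets[term.lower()] = snippet

    def recall(self, text):
        """Return snippets whose trigger term appears as a word in the text."""
        words = set(re.findall(r"[a-z']+", text.lower()))
        return [s for term, s in self._snippets.items() if term in words]

def watch_turn(store, user_message):
    """One pass of the loop: detect relevance, inject context if any."""
    hits = store.recall(user_message)
    if not hits:
        return user_message  # nothing relevant; pass the turn through
    context = "\n".join(f"[memory] {h}" for h in hits)
    return f"{context}\n\n{user_message}"
```

The point of the sketch is the shape, not the matcher: detection runs entirely on the user's device, and the only thing that ever reaches the LLM is the (possibly augmented) turn text.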

Core Principle

Put the compute load on the machine, not the LLM. The user's device is already paid for. Token processing isn't.

Key Questions to Resolve

  1. Timing: Can context be injected before the model responds, or only next-turn?
  2. Access: How does a local process tap into the conversation stream?
  3. Minimum viable version: What's the simplest thing that proves the concept?
  4. LLM vs program: What truly needs a model vs what's pure code?
  5. OpenClaw integration: Hooks, plugins, or something else?
  6. Scalability: Could this work for any OpenClaw user, not just us?
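One way to ground question 3 (and part of question 4): the simplest proof might be pure code with no model in the loop at all — a filter that rewrites the outgoing prompt before it reaches the LLM client. A hedged sketch; the "term: snippet" notes format and both function names are assumptions made up for illustration:

```python
# Minimum-viable sketch: pure string matching, zero tokens spent on
# retrieval. Notes format ("term: snippet" per line) is an assumption.

def load_notes(lines):
    """Parse "term: snippet" lines into (term, snippet) pairs."""
    notes = []
    for line in lines:
        if ":" in line:
            term, snippet = line.split(":", 1)
            notes.append((term.strip().lower(), snippet.strip()))
    return notes

def augment(prompt, notes):
    """Prepend every snippet whose term occurs in the prompt."""
    lowered = prompt.lower()
    hits = [snippet for term, snippet in notes if term in lowered]
    return ("\n".join(hits) + "\n\n" + prompt) if hits else prompt
```

Piped between the user and whatever client they use, this would demonstrate next-turn injection (question 1's fallback case) with no model, no hooks, and no token cost — everything smarter than substring matching becomes a later layer.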

Directory Structure

  • research/ — docs, references, OpenClaw internals research
  • prototypes/ — code experiments
  • thinking/ — hourly session notes from the deep think cycle

Relationship to ACI

ACI = static optimization (lighter file injection)
Hippocampus = dynamic intelligence (context surfacing mid-conversation)

They complement each other. ACI may become Layer 0-1 of a system where Hippocampus handles Layer 2-3.