AI Systems Memory — Persistent Knowledge and Agent Memory
Persistent knowledge beyond a single chat thread.
This section collects guides on persistent knowledge and memory for AI systems — how assistants keep facts, preferences, and distilled context across sessions without stuffing every token into one prompt. Here, memory means intentional retention (user facts, summaries, plugin-backed stores), not GPU RAM or model weights.
It complements the broader AI Systems cluster — OpenClaw, Hermes, orchestration — and sits beside RAG for retrieval mechanics and LLM Hosting for running models.
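The core idea above — facts outliving a single chat thread by living somewhere other than the prompt — can be sketched in a few lines. This is a toy illustration, not any framework's real API; the class and file names are invented for the example.

```python
import json
from pathlib import Path

class SessionMemory:
    """Toy persistent memory: user facts survive across chat sessions
    by living in a JSON file instead of being re-stuffed into every
    prompt. Layout and names are illustrative only."""

    def __init__(self, path="memory.json"):
        self.path = Path(path)
        self.facts = json.loads(self.path.read_text()) if self.path.exists() else {}

    def remember(self, key, value):
        self.facts[key] = value
        self.path.write_text(json.dumps(self.facts))  # persist immediately

    def recall(self, key, default=None):
        return self.facts.get(key, default)

# Session 1 writes a fact; a later session reloads it from disk.
SessionMemory("demo_mem.json").remember("preferred_language", "Python")
later_session = SessionMemory("demo_mem.json")
print(later_session.recall("preferred_language"))  # → Python
```

Real providers swap the JSON file for embeddings, databases, or graphs, but the contract — write once, recall in any later session — is the same.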
Agent memory providers
Drop-in backends exposed by frameworks such as Hermes Agent and OpenClaw — Honcho, OpenViking, Mem0, Hindsight, and others — with different LLM, embedding, and database trade-offs.
- Agent memory providers compared — full table, dependency notes, and Hermes memory setup flows
For Hermes-only bounded core memory (MEMORY.md / USER.md), see Hermes Agent Memory System.
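What makes these backends "drop-in" is that the agent framework wires against a small common surface rather than any one store. The interface below is a hypothetical sketch of that shape — Honcho, Mem0, and the others each expose their own real APIs — with a trivial keyword-overlap backend standing in for embeddings or a database.

```python
from typing import Protocol

class MemoryProvider(Protocol):
    """Hypothetical common surface an agent framework might target.
    Method names are illustrative, not any provider's actual API."""
    def add(self, user_id: str, text: str) -> None: ...
    def search(self, user_id: str, query: str, k: int = 3) -> list[str]: ...

class InMemoryProvider:
    """Trivial backend: keyword-overlap ranking, no embeddings or DB."""
    def __init__(self):
        self.store: dict[str, list[str]] = {}

    def add(self, user_id, text):
        self.store.setdefault(user_id, []).append(text)

    def search(self, user_id, query, k=3):
        words = set(query.lower().split())
        # Rank stored facts by how many query words they share.
        ranked = sorted(
            self.store.get(user_id, []),
            key=lambda t: len(words & set(t.lower().split())),
            reverse=True,
        )
        return ranked[:k]

backend: MemoryProvider = InMemoryProvider()
backend.add("alice", "prefers dark mode in all editors")
backend.add("alice", "works mostly in Rust")
print(backend.search("alice", "dark editors", k=1))
```

Swapping providers then means swapping the class behind the `MemoryProvider` surface; the LLM, embedding, and database trade-offs in the comparison table live entirely inside the backend.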
Knowledge graphs and Cognee
Institutional and project knowledge extracted into graphs for retrieval-aware assistants.
- Self-Hosting Cognee with Ollama — hands-on Cognee quickstart with local models
- Choosing the Right LLM for Cognee — Local Ollama Setup — model comparison for graph quality vs hardware
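The graph-based approach the Cognee guides cover boils down to storing facts as (subject, predicate, object) triples and pulling an entity's neighborhood into context at retrieval time. The sketch below hand-writes triples for illustration; in practice an LLM extracts them from documents.

```python
from collections import defaultdict

class TripleGraph:
    """Toy knowledge graph in the spirit of what graph extractors build.
    Real systems extract triples with an LLM; these are hand-written."""

    def __init__(self):
        self.edges = defaultdict(list)

    def add(self, subj, pred, obj):
        self.edges[subj].append((pred, obj))

    def about(self, entity):
        """Everything the graph asserts about an entity — the kind of
        neighborhood a retrieval-aware assistant pulls into context."""
        return self.edges.get(entity, [])

g = TripleGraph()
g.add("Hermes", "is_a", "agent framework")
g.add("Hermes", "supports", "agent memory providers")
g.add("Cognee", "extracts", "knowledge graphs")
print(g.about("Hermes"))
```

Compared with flat vector search, the graph keeps relations explicit, so retrieval can follow edges (project → owner → preferences) instead of relying on embedding similarity alone.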