AI Systems Memory — Persistent Knowledge and Agent Memory

Persistent knowledge beyond a single chat thread.


This section collects guides on persistent knowledge and memory for AI systems — how assistants keep facts, preferences, and distilled context across sessions without stuffing every token into one prompt. Here, memory means intentional retention (user facts, summaries, plugin-backed stores), not GPU RAM or model weights.
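To make "intentional retention" concrete, here is a minimal sketch of a file-backed memory store: facts written in one session are readable in the next because they live on disk, not in the prompt. The class name and API are illustrative, not from any particular provider; real backends add embeddings, scoping, and summarization on top of a durable store like this.

```python
import json
from pathlib import Path

class FileMemory:
    """Toy persistent memory: facts survive across sessions via a JSON file.

    Hypothetical sketch -- production providers layer search and
    summarization over durable storage, but the core idea is the same.
    """

    def __init__(self, path):
        self.path = Path(path)
        self.facts = json.loads(self.path.read_text()) if self.path.exists() else {}

    def remember(self, key: str, value: str) -> None:
        self.facts[key] = value
        self.path.write_text(json.dumps(self.facts))  # persist immediately

    def recall(self, key: str):
        return self.facts.get(key)

import tempfile
path = Path(tempfile.mkdtemp()) / "memory.json"

# First "session" writes a fact; a fresh instance (a new session) still sees it.
FileMemory(path).remember("editor", "vim")
assert FileMemory(path).recall("editor") == "vim"
```

The point of the demo is the last two lines: the second `FileMemory` instance knows nothing from the first except what was deliberately persisted.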

It complements the broader AI Systems cluster (OpenClaw, Hermes, orchestration) and sits alongside RAG, which covers retrieval mechanics, and LLM Hosting, which covers running models.


Agent memory providers

Drop-in backends exposed by frameworks such as Hermes Agent and OpenClaw — Honcho, OpenViking, Mem0, Hindsight, and others — with different LLM, embedding, and database trade-offs.

For Hermes-only bounded core memory (MEMORY.md / USER.md), see Hermes Agent Memory System.
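What makes these backends "drop-in" is that the agent framework codes against a small shared surface and the provider supplies the storage and ranking behind it. A hedged sketch of that pattern, with a hypothetical interface and a naive keyword-overlap backend standing in for a real embedding-backed provider:

```python
from typing import Protocol

class MemoryProvider(Protocol):
    """Hypothetical common surface. Providers such as Honcho or Mem0 differ
    in LLM, embedding, and database choices, but an agent only needs
    add/search to swap one for another."""
    def add(self, text: str, user_id: str) -> None: ...
    def search(self, query: str, user_id: str, k: int = 3) -> list: ...

class KeywordMemory:
    """Stand-in backend: ranks stored facts by word overlap with the query
    instead of embedding similarity."""
    def __init__(self) -> None:
        self._store = {}

    def add(self, text: str, user_id: str) -> None:
        self._store.setdefault(user_id, []).append(text)

    def search(self, query: str, user_id: str, k: int = 3) -> list:
        words = set(query.lower().split())
        scored = sorted(
            self._store.get(user_id, []),
            key=lambda t: len(words & set(t.lower().split())),
            reverse=True,
        )
        return scored[:k]

mem: MemoryProvider = KeywordMemory()
mem.add("prefers dark mode", user_id="u1")
mem.add("works in Rust", user_id="u1")
assert mem.search("which mode", user_id="u1", k=1) == ["prefers dark mode"]
```

Because the agent depends only on the protocol, the trade-offs the guides compare (which LLM distills memories, which embedding model ranks them, which database holds them) stay inside the provider.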


Knowledge graphs and Cognee

Institutional and project knowledge extracted into graphs for retrieval-aware assistants.
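The core move behind graph-based memory (the idea tools like Cognee build on) is to turn prose facts into (subject, relation, object) triples so retrieval can walk edges instead of scanning text. A minimal, stdlib-only sketch with illustrative entity and relation names:

```python
from collections import defaultdict

# Adjacency map: entity -> list of (relation, object) edges.
graph = defaultdict(list)

def add_triple(subj: str, rel: str, obj: str) -> None:
    """Record one extracted fact as a directed, labeled edge."""
    graph[subj].append((rel, obj))

def neighbors(entity: str) -> list:
    """Everything the graph knows about an entity, one hop out."""
    return graph[entity]

# Facts distilled from project docs (hypothetical examples):
add_triple("billing-service", "depends_on", "postgres")
add_triple("billing-service", "owned_by", "payments-team")

# A retrieval-aware assistant answers "who owns billing?" by following
# the owned_by edge rather than re-reading the source documents.
assert ("owned_by", "payments-team") in neighbors("billing-service")
```

Real systems add entity resolution, multi-hop traversal, and hybrid graph-plus-vector retrieval, but the structure they query looks like this.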


Stack context
