Your AI Agent Is Smart. It Just Doesn't Remember Enough.
Why agentic coding keeps losing hard-won knowledge, and how Lore helps teams build continuity instead of repetition.
The early promise of agentic coding
Agentic coding is intoxicating.
You ask for a refactor, and the agent reasons about structure instead of just editing lines. You describe an outcome loosely, and something usable appears in minutes. For a while, it feels like you are finally collaborating with the system rather than supervising it.
Then a pattern quietly sets in.
You explain a constraint. You correct a decision. You clarify why something that looks wrong is actually intentional.
The agent responds well. You move on.
Later, often sooner than expected, the same explanation has to be given again.
Nothing is obviously broken. Yet progress starts to feel circular.
This is not really a context window problem
This failure mode is usually explained away as a context issue. The model did not see enough. The prompt was not structured well. The documentation was incomplete.
Those explanations are not wrong, but they miss the core issue.
What breaks down in agentic coding is not access to information. It is continuity of knowledge.
Real codebases are shaped by accumulated decisions:
- Trade-offs made under pressure
- Constraints imposed by production incidents
- Conventions that exist for reasons no longer visible in the code
Humans absorb this over time. Agents do not. Each run effectively starts clean unless you reconstruct that history for them.
That reconstruction does not scale.
Why documentation and prompts only go so far
Documentation is good at describing what a system does. It is much worse at preserving why it ended up this way.
The "why" lives in places like:
- "We tried the simpler version. It caused data corruption."
- "This service looks redundant but exists for regulatory reasons."
- "Do not refactor this path without checking X."
Teams respond by adding more context:
- Growing CLAUDE.md files
- Entire folders of AI instructions
- Repeated prompt rituals that restate the same caveats
Eventually, signal gets buried in noise. Agents either miss the important parts or treat everything as equally important.
Neither leads to good outcomes.
What agentic systems actually lack
Seen clearly, the gap is simple.
Agents have working memory. They have short-term context. They can retrieve documents.
What they lack is a way to retain selective, decision-level knowledge across time.
Without that, teams are forced into a bad trade-off:
- Either overload the agent with everything
- Or let it forget things that genuinely matter
Where Lore fits
Lore is built to address this specific failure mode.
It is not a documentation system. It is not a prompt manager. It is not an agent framework.
Lore treats project knowledge as institutional memory rather than reference material.
You store small, explicit entries such as:
- Architectural decisions
- Conventions
- Warnings and constraints
- Workflow rules
Each entry can be scoped to files or areas of the codebase. When a developer or agent works on a task, Lore surfaces only the knowledge that applies.
It has been tested with Claude Code, but it is intentionally not locked into any single agentic CLI. The knowledge lives with your repo, not with a tool, and survives model changes, agent changes, and workflow rewrites.
That portability is the point.
A simple example
Getting started with Lore is deliberately lightweight.
You initialize it once:
lore init

Then you capture knowledge at the moment it forms, instead of letting it vanish into Slack or memory:
lore add \
  --type decision \
  --files src/storage/* \
  --content "Dual storage is intentional. Simplifying this breaks offline guarantees."

Later, when an agent or developer works on that area:
$ lore query --task "simplify the storage layer" --limit 3

Lore returns something like:
[15%] ADR-001: Dual Storage (SQLite + JSONL)
architecture | 197 tokens
Status: Accepted
Context: We need persistent storage that survives restarts
while remaining inspectable and portable.

[12%] Dual Storage Strategy
reference | 50 tokens
SQLite is the working database. JSONL is the append-only log.

[12%] ADR-010: Local-First Architecture
architecture | 176 tokens
Status: Accepted
Context: Data must live locally first, sync second.
Before the agent attempts to "simplify" anything, it is warned that the complexity is intentional.
That moment is the value.
A concrete before and after
Before Lore:
- A 140–150 line CLAUDE.md trying to cover every edge case
- Instructions pasted repeatedly into prompts
- Agents missing the one paragraph that actually mattered
After Lore:
- Three scoped entries tied to the storage layer (sketched below)
- Retrieved only when storage is being touched
- No prompt bloat, no repetition
The difference is not dramatic in one run. It compounds over time.
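For illustration, those three scoped entries could be the decision shown earlier plus two companions tied to the same files. This is a hypothetical sketch, not taken from Lore's documentation: only the --type, --files, and --content flags appear in the example above, and the convention and warning type values and their wording are assumptions.

# hypothetical entries; the "convention" and "warning" type values are assumptions
lore add \
  --type convention \
  --files src/storage/* \
  --content "SQLite is the working database; JSONL is the append-only log."

lore add \
  --type warning \
  --files src/storage/* \
  --content "Do not refactor the sync path without checking the offline guarantees."

Because each entry is scoped to src/storage, none of this shows up in a prompt until a task actually touches that part of the codebase.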
Why this matters as agents become more autonomous
As agents improve, they act with more confidence and make broader changes. Without continuity, that confidence becomes risky.
Systems rarely fail loudly. They drift.
If agentic coding is going to move beyond experimentation into long-lived systems, knowledge has to accumulate instead of resetting.
Lore is a small, deliberate step in that direction.
If this resonates
Lore is open source and early.
If you are experimenting with agentic coding, using Claude Code, or simply tired of repeating the same explanations to increasingly capable tools, try it out.
If it helps:
- Star the repo
- Share it with someone running into the same friction
- Open issues with feedback or ideas
Agentic systems do not just need better reasoning. They need a past they can actually learn from.