In the previous post I shared the loop: learn → build → use → systematize → deploy.
Now I’ll explain in depth the /command I built and the methodology it’s based on.
The Problem
LLMs don’t remember.
Every conversation starts from zero. All the insights, solutions, mistakes you discovered - gone. Next time you hit the same problem, you start over.
That’s waste. That knowledge is worth something.
The Solution: Retrieval Layer
Instead of changing the model (fine-tuning), build a layer that accumulates structured knowledge.
Three components. Let’s understand what each does and why we need all of them:
Snapshots - The Context
Snapshot = a complete picture of a process.
In my work with Claude Code, a snapshot is the full conversation - the problem I started with, what we tried, what failed, what worked in the end. Not just the final answer - the entire journey.
In market research, a snapshot is a complete research round - the question I asked, what I searched for, which sources I checked, what I found.
Why does this matter?
Without context, an insight alone isn’t enough. “Use async for DB calls” - when? Always? Only in certain cases? The snapshot preserves the full story so we can understand when the insight is relevant.
Claims - The Distilled Insights
Claim = one clear sentence that can be verified or disproven.
Not “we worked on the project and fixed some bugs”. Rather: “useEffect cleanup runs before the next effect, not on unmount”.
Why distill?
Because raw information isn’t useful long-term. An hour-long conversation with Claude might contain 3 real insights. The rest - context, experiments, mistakes. The claims are what’s worth remembering.
How does Claude decide what to save?
It looks for:
- Technical solutions that worked (bugs fixed, patterns that succeeded)
- Surprises - something that behaved differently than expected
- Mistakes worth avoiding
- Tools or libraries that proved useful
What it doesn’t save:
- Things too generic (“code should be clean”)
- Things too specific to one project without general value
- Failed attempts without insight into why
Sources - Origin and Reliability
Source = where the insight came from.
An insight from a real production project ≠ an insight from a small experiment.
An insight that repeated in 3 different projects ≠ an insight found once.
Why does this matter?
Because not all knowledge is equal. Sources provide context for reliability. If a claim only came from a small learning project, it might be less reliable than a claim that emerged from real client work.
Why do we need all three?
- Snapshot without claims = lots of information, hard to find the core
- Claims without snapshot = detached insights, no context for when to use them
- Claims without sources = don’t know how much to trust the insight
All three together = structured knowledge that can be searched, verified, and used.
The Core Principle: Repetition = Confidence
An insight that appears once? Maybe relevant.
An insight that repeats 4 times across different projects? Probably true and useful.
This is the heart of the system. Not just saving - strengthening what recurs.
How the Principle Becomes Methodology
The principle is simple: repetition = confidence. But how do you implement it?
Each claim gets a counter:
- seen_count - how many times we’ve seen this insight
- first_seen - when it was first seen
- last_seen - when it was most recently seen
- confidence - reliability level
Upgrade rules:
- seen_count 1 → confidence: low (seen once, might be random)
- seen_count 2-3 → confidence: medium (repeating, probably something to it)
- seen_count 4+ → confidence: high (repeats often, probably real)
Why 4 and not 10?
Because most real insights appear 2-5 times in regular work. If you require 10, most knowledge will never reach high confidence. The threshold needs to be realistic.
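To make the rules concrete, here’s a minimal sketch in Python (my illustration - the actual /command states these rules as prose instructions to Claude, not code):

```python
def confidence_for(seen_count: int) -> str:
    """Map how often an insight has recurred to a confidence level."""
    if seen_count >= 4:
        return "high"    # repeats often, probably real
    if seen_count >= 2:
        return "medium"  # repeating, probably something to it
    return "low"         # seen once, might be random
```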
What Happens on Repetition?
Claude recognizes that a new claim is similar to an existing one. Instead of saving a duplicate:
### [REINFORCED] useEffect cleanup runs before the next effect
- Already in: 2024-12-10-react-patterns.json
- Updating: seen_count 1→2, confidence low→medium
- Adding source: "dashboard-project"
The existing claim gets updated. No new one is created.
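In code terms, reinforcement might look like this sketch (a hypothetical function of mine, but the field names follow the insight structure shown below):

```python
from datetime import date

def reinforce(claim: dict, source: str) -> dict:
    """Update an existing claim in place instead of saving a duplicate."""
    claim["seen_count"] += 1
    claim["last_seen"] = date.today().isoformat()
    if source not in claim["sources"]:
        claim["sources"].append(source)
    # Same thresholds as the upgrade rules above
    n = claim["seen_count"]
    claim["confidence"] = "high" if n >= 4 else "medium" if n >= 2 else "low"
    return claim
```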
What Happens on Contradiction?
Sometimes a new claim contradicts something we already saved:
### [CONTRADICTS] Use sync DB calls for simple scripts
- Conflicts with: "Always use async for DB calls" (confidence: medium)
- Context matters: async for production, sync OK for scripts
Claude doesn’t decide alone. It presents the contradiction and asks:
- Update the existing one?
- Keep both (because context differs)?
- Choose one?
Contradictions are opportunities. They show that the original insight wasn’t precise enough. “Always use async” becomes “Use async for production, sync OK for scripts”. Knowledge refines.
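After such a refinement, the stored claim might look like this (an abridged illustration of mine, using the insight structure shown in the next sections):

```json
{
  "statement": "Use async DB calls in production code; sync is fine for one-off scripts",
  "category": "pattern",
  "confidence": "medium",
  "context": "Refined from 'Always use async for DB calls' after a contradicting case",
  "tags": ["database", "async"]
}
```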
What Happens When a Claim Doesn’t Repeat?
Currently - nothing. It stays with low confidence.
You could add “decay” logic - if a claim hasn’t repeated in a year, lower confidence. But I started without this because it adds complexity. Keep it simple, improve later.
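If you do want decay later, the sketch is short (hypothetical - not part of the current /command):

```python
from datetime import date, timedelta

def apply_decay(claim: dict, max_age_days: int = 365) -> dict:
    """Downgrade confidence for claims that haven't recurred in a year."""
    last_seen = date.fromisoformat(claim["last_seen"])
    if date.today() - last_seen > timedelta(days=max_age_days):
        downgrade = {"high": "medium", "medium": "low", "low": "low"}
        claim["confidence"] = downgrade[claim["confidence"]]
    return claim
```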
How It Works in Practice
I created a /command for Claude Code that implements this. At the end of every meaningful conversation, I run /learn and Claude goes through what we did.
The Flow:
Step 1: Load existing knowledge
Claude reads all files from ~/.claude/knowledge/. Now it knows what we’ve already learned.
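Conceptually, this step is equivalent to something like the following sketch (in practice Claude reads the files directly; there’s no script):

```python
import json
from pathlib import Path

def load_knowledge(root: Path = Path.home() / ".claude" / "knowledge") -> dict[str, dict]:
    """Index every saved claim by its statement text."""
    claims: dict[str, dict] = {}
    for path in sorted(root.glob("*.json")):
        data = json.loads(path.read_text())
        for insight in data.get("insights", []):
            claims[insight["statement"]] = insight
    return claims
```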
Step 2: Extract insights from conversation
Claude goes through the conversation looking for:
- Technical solutions that worked
- Bugs that were fixed and why
- Patterns that proved useful
- New tools or libraries
- Mistakes worth avoiding
Step 3: Compare to existing
For each extracted insight, Claude checks:
- NEW - haven’t seen this → save with seen_count: 1, confidence: low
- REINFORCED - already exists → update seen_count, maybe upgrade confidence
- SIMILAR - similar but not identical → ask: merge? keep both?
- CONTRADICTS - conflicts with existing → present for review
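The comparison itself is semantic - Claude judges whether two statements mean the same thing. As a crude mechanical approximation of the decision shape (string similarity is a weak stand-in for what Claude actually does, and it can’t detect contradictions at all):

```python
import difflib

def classify(statement: str, existing: dict[str, dict]) -> str:
    """Naive stand-in for Claude's semantic comparison."""
    if statement in existing:
        return "REINFORCED"
    if difflib.get_close_matches(statement, list(existing), n=1, cutoff=0.8):
        return "SIMILAR"  # CONTRADICTS needs semantic judgment, not string distance
    return "NEW"
```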
Step 4: Present for approval
Claude shows me what it found before saving. I can edit, delete, or approve.
Step 5: Save
JSON files organized by topic and date.
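Saving itself is trivial - a new date-and-topic-stamped file per run, matching the filenames you’ll see in the examples below (a sketch of mine, assuming that layout):

```python
import json
from datetime import date
from pathlib import Path

def save_new_insights(topic: str, insights: list[dict]) -> Path:
    """Write new insights to a knowledge file named by date and topic."""
    root = Path.home() / ".claude" / "knowledge"
    root.mkdir(parents=True, exist_ok=True)
    path = root / f"{date.today().isoformat()}-{topic}.json"
    path.write_text(json.dumps({"insights": insights}, indent=2))
    return path
```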
The Structure of Each Insight
{
"statement": "Clear, single sentence",
"category": "technical|solution|pattern|tool|warning",
"confidence": "low|medium|high",
"seen_count": 1,
"first_seen": "2024-12-17",
"last_seen": "2024-12-17",
"sources": ["project-name"],
"context": "When/why this is useful",
"tags": ["tag1", "tag2"]
}
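If you ever script against these files, the same structure as a Python type helps catch malformed entries (my addition; the /command itself only uses the JSON schema):

```python
from typing import TypedDict

class Insight(TypedDict):
    statement: str
    category: str      # technical | solution | pattern | tool | warning
    confidence: str    # low | medium | high
    seen_count: int
    first_seen: str    # ISO date, e.g. "2024-12-17"
    last_seen: str
    sources: list[str]
    context: str
    tags: list[str]
```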
Full Example: From Conversation to Knowledge
Day One - Dashboard Project
Worked with Claude on a React bug. After fixing it, I ran /learn:
## Extracted Insights
### [NEW] useEffect cleanup function runs before the next effect, not on unmount
- Category: technical
- Will save with: confidence=low, seen_count=1
- Tags: #react #hooks #useEffect
- Context: Caused memory leak in subscription handling
### [NEW] React StrictMode runs effects twice in development
- Category: warning
- Will save with: confidence=low, seen_count=1
- Tags: #react #debugging
- Context: Explains duplicate API calls in dev mode
Saved to file 2024-12-17-react-hooks.json.
A Week Later - Different Project
Similar bug in another project. Ran /learn:
## Extracted Insights
### [REINFORCED] useEffect cleanup function runs before the next effect
- Already in: 2024-12-17-react-hooks.json
- Updating: seen_count 1→2, confidence low→medium
- Adding source: "analytics-dashboard"
### [NEW] Custom hooks should return cleanup functions when using subscriptions
- Category: pattern
- Will save with: confidence=low, seen_count=1
- Tags: #react #hooks #custom-hooks
Now I have:
- One insight at medium confidence (repeated twice)
- Two insights at low confidence (new)
A Month Later
Same useEffect insight comes up again. Now it’s seen_count: 3, still medium. One more time and it’ll be high confidence.
Knowledge accumulates and strengthens.
Why It Works
The model doesn’t change. The system around it gets smarter.
Without the system:
- Every conversation from zero
- Same mistakes repeat
- Knowledge is lost
With the system:
- Insights accumulate
- What repeats strengthens
- Contradictions are discovered and clarified
This is “poor man’s continuous learning” - no fine-tuning, no complex infrastructure. Just smart retrieval + simple data structure.
Small investment, big return:
Running /learn at the end of a conversation takes a minute. After a month I have accumulated knowledge that saves hours.
The Full Code
What is the /command?
It’s an instruction file for Claude Code that tells it how to extract and save insights from conversations.
Core Principle: Repetition = Confidence
The Flow:
- Load existing knowledge - reads all files from ~/.claude/knowledge/
- Extract insights from conversation - looks for: technical solutions, bugs fixed, patterns, new tools, mistakes to avoid
- Compare to existing - for each insight checks:
  - NEW - completely new → saves with seen_count: 1, confidence: low
  - REINFORCED - already exists → increases seen_count, maybe upgrades confidence
  - SIMILAR - related but different → asks you: merge? keep both?
  - CONTRADICTS - conflicts with existing → flags for review
- Shows you what it found - before saving
- Saves - JSON files with consistent structure
Confidence Upgrade Rules:
- seen_count 1 → low
- seen_count 2-3 → medium
- seen_count 4+ → high
Here’s the complete /command. Put it in .claude/commands/learn.md:
# /learn - Extract and save insights from conversation
You are an insight extraction agent with reinforcement learning.
Extract learnings and STRENGTHEN existing knowledge when claims repeat.
## Core Principle
**Repetition = Confidence.** When the same insight appears multiple times
across conversations, it becomes MORE reliable. Don't skip duplicates - reinforce them!
## JSON Schema
{
"insights": [
{
"statement": "Clear, single-sentence insight",
"category": "technical|solution|pattern|tool|warning",
"confidence": "low|medium|high",
"seen_count": 1,
"first_seen": "2024-12-17",
"last_seen": "2024-12-17",
"sources": ["project-name"],
"context": "Brief context of when/why this is useful",
"tags": ["tag1", "tag2"]
}
]
}
## Instructions
### Step 1: Load existing knowledge
Read ALL files from `~/.claude/knowledge/*.json`. Build a map of all existing statements.
### Step 2: Extract insights from conversation
Look for:
- Technical insights (how something works)
- Problem solutions (bugs fixed, issues resolved)
- Best practices discovered
- New tools/libraries/patterns learned
- Mistakes to avoid
### Step 3: Compare each insight against existing
| Found | Action |
|-------|--------|
| NEW (never seen) | Add with seen_count: 1, confidence: low |
| REINFORCED (same fact exists) | Update existing: seen_count++, last_seen=today, maybe upgrade confidence |
| SIMILAR (related but different) | Ask user: merge, keep both, or skip |
| CONTRADICTS (conflicts with existing) | Flag for user review |
**Confidence upgrade rules:**
- seen_count 1 → confidence: low
- seen_count 2-3 → confidence: medium
- seen_count 4+ → confidence: high
### Step 4: Present results to user
### Step 5: Apply updates
- **NEW insights**: Save to new file `~/.claude/knowledge/YYYY-MM-DD-{topic}.json`
- **REINFORCED insights**: Update the EXISTING file where the insight lives
- **SIMILAR/CONTRADICTS**: Apply user's decision
### Step 6: Summary
Show what changed:
Saved: 2 new insights
Reinforced: 1 existing insight (confidence: low → medium)
Skipped: 1 duplicate
## Quality Guidelines
- Each insight should be **self-contained**
- Be **specific** not generic
- Include **actionable** information
- Tag appropriately for future search
The Loop Continues
learn → build → use → systematize → deploy
Questions? Feedback? Would love to hear what works for you.