hellodb

Context is triage. Memory is engineering.

Anthropic just published a clear-eyed post on Claude Code session management: an honest acknowledgment that how you manage sessions, not raw context size, is what determines output quality. It names three problems. hellodb was built to make each of them go away.

Read the post. It’s worth reading in full — less for the announcement (the new /usage command is useful but modest) and more for the quiet concession underneath: the team building Claude Code is openly admitting what their heaviest users have been quietly patching around for months.

Here’s the thesis of the post, compressed to one sentence:

Context rot, lossy compaction, and manual session briefs are real; here are the primitives we give you to survive them.

Every word of that is true. None of those primitives are the solution.

same input · different retention

context·triage
1M window — 986K / 1M
  • pnpm over npm
  • oauth, not sessions
  • tabs over spaces
  • use OKLCH
  • dark by default
  • Rust workspaces
  • brain.toml @ 0.75
  • no force-push
/compact → summary

5 facts dropped. the model can’t tell you which.
next prompt referencing them misses.

memory·engineering
  • b3:a7f2 · pnpm over npm
  • b3:e4c8 · oauth, not sessions
  • b3:9d11 · tabs over spaces
  • b3:f03a · use OKLCH
  • b3:2b6e · dark by default
  • b3:5c44 · Rust workspaces
  • b3:1ab9 · brain.toml @ 0.75
  • b3:c3d7 · no force-push
  … immutable, signed, content-addressed
top-k → next session
  • b3:a7f2 · pnpm over npm
  • b3:f03a · use OKLCH
  • b3:1ab9 · brain.toml @ 0.75

window stays lean.
store is the source of truth.

Problem 1 — context rot

Direct quote:

“Context rot is the observation that model performance degrades as context grows, because attention gets spread across more tokens and older, irrelevant content starts to distract.”

This is a fundamental property of transformer attention. No amount of context-window expansion fixes it — a 1M-token window rots just as surely as a 200K one, only later. The post’s own advice follows: keep the window lean; start a new session when you start a new task.

Good advice. But it pushes a cost onto the user: every new session means re-loading the state you need. Your stack. Your conventions. The decision you made three sessions ago about why you’re not using Redux. The reason you picked pnpm over npm. Claude forgets all of it the moment you /clear.

hellodb’s answer: durable facts don’t live in your context window. They live in ~/.hellodb/local.db — SQLCipher-encrypted, content-addressed, Ed25519-signed. At session start, Claude calls hellodb_find_relevant_memories and gets the top-k memories your current task actually needs, ranked by semantic similarity times reinforcement decay. The window stays lean because the store is the source of truth.

Problem 2 — /compact is lossy

From the post:

“Autocompact fires after a long debugging session and summarizes the investigation, and your next message is ‘now fix that other warning we saw in bar.ts’ — the other warning might have been dropped from the summary.”

The team also concedes a harder constraint: the model is at its least intelligent point when compacting. At the moment you most need a careful summary, the model has the least capacity to produce one.

This is a structural problem with context-as-memory. A summary is a compression of a summary is a compression of a summary. Information entropy only moves in one direction.

hellodb’s answer: facts are immutable and content-addressed. A compact pass can’t drop a fact because the fact was never in the context — it’s in the store, under a BLAKE3 hash that will never change. Compact as aggressively as you want. Nothing load-bearing lives in the window long enough to lose.
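Content addressing is what makes “can’t drop a fact” true by construction. A minimal sketch, using stdlib BLAKE2 as a stand-in for BLAKE3 (the b3: prefix mirrors the hashes shown earlier; the store shape is an assumption):

```python
import hashlib

def address(content: bytes) -> str:
    # Stand-in hash: stdlib BLAKE2b; the real store uses BLAKE3.
    return "b3:" + hashlib.blake2b(content, digest_size=16).hexdigest()[:8]

class FactStore:
    """Append-only, content-addressed store: same bytes, same key, stored once."""
    def __init__(self) -> None:
        self._facts: dict[str, bytes] = {}

    def put(self, content: bytes) -> str:
        key = address(content)
        self._facts.setdefault(key, content)  # re-putting identical bytes is a no-op
        return key

    def get(self, key: str) -> bytes:
        return self._facts[key]
```

The key is derived from the content, so the fact can’t silently change under you, and storing it twice costs nothing.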

The digest runs out of band: after your session ends, a Haiku sub-agent reads the raw episode tail and writes consolidated facts to a draft branch (claude.facts/digest-<ts>). High-confidence facts auto-merge to main. Low-confidence or contradictory ones stay on the draft for you to review. Your primary session never sees the messy intermediate step.
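The confidence gate in that digest step can be sketched as follows; the 0.8 threshold and the {key, text, confidence} fact shape are assumptions for illustration, not hellodb’s documented schema:

```python
def merge_digest(facts, main, draft, threshold=0.8):
    """Route digested facts: high-confidence entries auto-merge to main;
    low-confidence or contradictory ones stay on the draft branch."""
    for fact in facts:
        # A fact contradicts main if the same key already holds different text.
        contradicts = any(f["key"] == fact["key"] and f["text"] != fact["text"]
                          for f in main)
        if fact["confidence"] >= threshold and not contradicts:
            main.append(fact)
        else:
            draft.append(fact)
    return main, draft
```

Contradictions never silently overwrite main; they wait on the draft for a human decision.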

Problem 3 — /clear demands a brief

The post’s recommendation for /clear:

“Start fresh session with user-written brief.”

Three words. An enormous amount of cognitive load hides inside “user-written brief.” Every time you /clear, you are the compaction function. You are sitting there typing out “we’re working on the auth refactor, remember we’re using oauth not sessions, and I need you to ...” That’s the state your last session spent an hour getting right.

hellodb’s answer: the brief already exists. Across past sessions, durable facts were captured automatically (via the memorize skill) or harvested from your existing CLAUDE.md files (via hellodb ingest --from-claudemd). You /clear; Claude invokes hellodb_find_relevant_memories; the top-8 relevant facts are re-seeded. No brief to write.

The retrieval tool mirrors Claude Code’s own memory-manifest shape — type (user / feedback / project / reference), description, source_path, decayed_score. It falls back gracefully: if you haven’t configured an embedding backend, it ranks by keyword overlap + reinforcement decay instead. Always returns something; never errors for missing config.
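The embedding-free fallback can be approximated with a token-overlap score. This is a sketch of the idea, not hellodb’s actual scorer (it omits the decay term for brevity):

```python
import re

def tokens(s: str) -> set[str]:
    """Lowercase alphanumeric tokens; punctuation-insensitive."""
    return set(re.findall(r"[a-z0-9]+", s.lower()))

def keyword_overlap(query: str, text: str) -> float:
    """Jaccard overlap between query tokens and memory tokens."""
    q, t = tokens(query), tokens(text)
    return len(q & t) / len(q | t) if (q | t) else 0.0

def find_relevant_fallback(query: str, memories: list[str], k: int = 8) -> list[str]:
    """Rank by keyword overlap; always returns a list, never raises."""
    ranked = sorted(memories, key=lambda m: keyword_overlap(query, m), reverse=True)
    return ranked[:k]
```

Crude, but it degrades gracefully: with no embedding backend configured, retrieval still returns something instead of erroring out.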

The 1M context tell

Near the end of the post:

“With one million context, you have more time to /compact proactively with a description.”

Read that carefully. 1M context doesn’t solve the compaction problem. It gives you more time before you hit it. The compaction is still coming. The summary is still lossy. The primitives are still triage.

Context is triage. Memory is engineering.

A context window is what fits in the model’s attention right now. Memory is what survives every session you’ve ever had, queryable, branchable, auditable, reinforceable. They are different categories of thing. Anthropic’s post is about surviving the first one; hellodb is about owning the second.

What we built, in one screen

hellodb ships as a Claude Code plugin plus a local MCP server. You install it with one command:

curl -fsSL hellodb.dev/install | sh

The installer drops three binaries into /usr/local/bin (hellodb, hellodb-mcp, hellodb-brain), generates an Ed25519 identity key, opens an encrypted SQLite database at ~/.hellodb/local.db, and registers the plugin with Claude Code if claude is on your PATH.

On first run:

hellodb ingest --from-claudemd

...scans ~/.claude/projects/*/memory/*.md and imports every memory file into a per-project namespace (claude.memory.<project-slug>). Each project is hard-isolated — memories from your auth refactor repo never leak into your side-project repo. Content-addressing dedupes, so re-running is a no-op on unchanged files.

From there, every session:

  1. Primary agent writes episodes as they happen (via the memorize skill or any hellodb_note / hellodb_remember call).
  2. Stop hook fires when the session ends, idempotent + cool-down-gated.
  3. Brain daemon tails the episode namespace and digests new material via a Haiku memory-digest sub-agent. High-confidence facts auto-merge; low-confidence or contradictory ones stay on a draft branch for review.
  4. Next session, on any topic, in any repo, calls hellodb_find_relevant_memories and pulls the top-k curated facts back into context — no window bloat, no /compact roulette.

That’s the loop. That’s the whole pitch.

On owning it

One more thing worth saying explicitly.

Everything above is open source, MIT-licensed, local-first. The DB lives on your machine, encrypted with a key derived from an identity you own. The signing key never leaves disk. Optional semantic search runs on your Cloudflare account via Workers AI — no shared service, no affiliate middleman, no API key you don’t control.

The reason I care about this: if I spend all day pair-programming with Claude, my memory of that work should live somewhere I own. Not in a context window that’s rented and then garbage-collected. Not in a cloud service that might change its retention policy. On my disk, under my key, in my format.

Anthropic’s post is honest about the limits of context. hellodb is what you build when you take that honesty seriously.