Summary
CKB already understands a project's architecture, symbols, ownership, decisions, and hotspots. This feature packages that intelligence into a Documentation Vault — a structured, always-current knowledge base that follows the Cognitive Vault spec v1.1.
The idea: instead of documentation that rots the moment it's written, CKB continuously generates and updates vault entries from what it actually knows about the codebase.
The Problem
- Project docs go stale within days of being written
- Architecture knowledge lives in people's heads or scattered READMEs
- AI tools need structured, up-to-date context to be effective
- There's no single source of truth for "what does this project look like right now?"
Proposed Feature
`ckb vault generate` / `ckb vault sync`
Generate a Cognitive Vault structure (Markdown files with frontmatter) from CKB's existing intelligence:
```
.ckb/vault/
├── architecture/
│   ├── module-overview.md        # from getArchitecture
│   ├── entry-points.md           # from listEntrypoints
│   └── key-concepts.md           # from listKeyConcepts
├── modules/
│   ├── internal-query.md         # per-module: responsibilities, symbols, ownership
│   ├── internal-mcp.md
│   └── ...
├── decisions/
│   ├── 2026-01-15-symbol-identity.md  # from existing ADRs
│   └── ...
├── health/
│   ├── hotspots.md               # from getHotspots
│   ├── coupling.md               # from analyzeCoupling
│   └── risk-audit.md             # from auditRisk
└── api/
    ├── public-api-surface.md     # from compareAPI
    └── contracts.md              # from listContracts
```
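The mapping above suggests a simple generator shape: for each data source, render one Markdown entry with v1.1 frontmatter and a hash of the data it was built from. A minimal Python sketch (the function name and parameters are hypothetical, not CKB's actual API; only the frontmatter keys come from the spec example below):

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def write_vault_entry(vault_dir: Path, rel_path: str, title: str,
                      summary: str, tags: list[str], body: str,
                      source_data: dict) -> Path:
    """Write one vault entry as Markdown with Cognitive Vault frontmatter.

    source_data is the structural data the entry is generated from; its
    hash is recorded as source-hash so staleness can be detected later.
    """
    source_hash = hashlib.sha256(
        json.dumps(source_data, sort_keys=True).encode()
    ).hexdigest()[:12]
    now = datetime.now(timezone.utc)
    frontmatter = "\n".join([
        "---",
        f'title: "{title}"',
        f"date: {now.date().isoformat()}",
        f"tags: [{', '.join(tags)}]",
        f'summary: "{summary}"',
        "generated-by: ckb",
        f"generated-at: {now.strftime('%Y-%m-%dT%H:%M:%SZ')}",
        f"source-hash: {source_hash}",
        "---",
    ])
    path = vault_dir / rel_path
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(frontmatter + "\n\n" + body + "\n")
    return path
```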
Entry Format (Cognitive Vault v1.1 compliant)
```
---
title: "Query Engine Module"
date: 2026-03-22
tags: [type:module-doc, status:active, module:internal/query]
summary: "Central query engine — routes all operations through cache and backend orchestrator"
generated-by: ckb
generated-at: 2026-03-22T14:00:00Z
source-hash: abc123  # tracks when regeneration is needed
---
```
Freshness
- On-demand: `ckb vault sync` regenerates stale entries (detected via `source-hash` / index age)
- Watch mode: `ckb vault sync --watch` keeps the vault current during dev sessions
- Daemon: a background scheduler keeps the vault fresh
- Staleness detection: each entry tracks what data it was generated from; if the underlying data changes, the entry is marked stale
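Staleness detection can be as simple as comparing the entry's recorded `source-hash` against a fresh hash of the data it would be regenerated from. A hedged Python sketch (helper names are illustrative, not CKB's actual implementation):

```python
import hashlib
import json
import re
from pathlib import Path

def current_source_hash(source_data: dict) -> str:
    """Hash the structural data an entry is generated from."""
    blob = json.dumps(source_data, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:12]

def is_stale(entry_path: Path, source_data: dict) -> bool:
    """An entry is stale when its recorded source-hash no longer
    matches the hash of the data it would be regenerated from."""
    match = re.search(r"^source-hash:\s*(\S+)",
                      entry_path.read_text(), re.M)
    if match is None:
        return True  # no hash recorded: treat as stale
    return match.group(1) != current_source_hash(source_data)
```

Incremental sync then reduces to regenerating only the entries for which `is_stale` returns true.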
MCP Integration
New tools:
- `getVaultEntry` — retrieve a specific vault entry
- `searchVault` — full-text search across the vault
- `getVaultStatus` — freshness report (which entries are stale)
- `syncVault` — trigger regeneration from MCP
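As a rough illustration of what `searchVault` could do over the generated files, here is a naive Python sketch (the function name matches the proposed tool, but the result shape and matching strategy are assumptions, not a spec):

```python
from pathlib import Path

def search_vault(vault_dir: Path, query: str) -> list[dict]:
    """Naive full-text search: return entries whose text contains the
    query, with the first matching line as a snippet."""
    results = []
    q = query.lower()
    for path in sorted(vault_dir.rglob("*.md")):
        text = path.read_text()
        if q in text.lower():
            snippet = next((line.strip() for line in text.splitlines()
                            if q in line.lower()), "")
            results.append({"entry": str(path.relative_to(vault_dir)),
                            "snippet": snippet})
    return results
```

A real implementation would likely rank hits and filter on frontmatter tags (e.g. `type:module-doc`) rather than scan raw text.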
What Makes This Different from Just Running `explainFile`
- Structured dataset — follows Cognitive Vault spec, not freeform text
- Incremental updates — only regenerates what changed
- Cross-referenced — entries link to each other (module → decisions, hotspots → modules)
- Versioned — vault entries have source hashes, so you can tell when docs last reflected reality
- AI-optimized — frontmatter tags enable precise retrieval by AI tools
Open Questions
- Should the vault be committed to git or stay in `.ckb/` (gitignored)?
- Should users be able to add manual entries alongside generated ones? (Hybrid vault)
- What's the right granularity — one entry per module? per package? per file?
- Should we support exporting to other formats (Obsidian vault, Notion, etc.)?
- How do we handle LLM-generated summaries vs purely structural data? (cost, offline mode)
Implementation Phases
Phase 1: Structural vault from existing CKB data (no LLM needed — architecture, modules, hotspots, ownership)
Phase 2: LLM-enriched entries (summaries, explanations) behind a `--llm` flag
Phase 3: MCP tools for vault access and search
Phase 4: Watch/daemon integration for continuous freshness
Related
- Cognitive Vault Spec v1.1: `docs/cognitive-vault-structure-spec-v1.1.md`
- Existing features this builds on: `getArchitecture`, `listKeyConcepts`, `listEntrypoints`, `getHotspots`, `analyzeCoupling`, `auditRisk`, `getOwnership`, `getModuleResponsibilities`, `getDecisions`