GBrain is a personal knowledge brain and GStack mod for agent platforms. Pluggable engines: PGLite (embedded Postgres via WASM, zero-config default) or Postgres + pgvector hybrid search in a managed Supabase instance. `gbrain init` defaults to PGLite and suggests Supabase for 1000+ files.

GStack teaches agents how to code. GBrain teaches agents everything else: brain ops, signal detection, content ingestion, enrichment, cron scheduling, reports, identity, and access control.
Contract-first: `src/core/operations.ts` defines ~41 shared operations (`find_orphans` added in v0.12.3). The CLI and MCP server are both generated from this single source. The engine factory (`src/core/engine-factory.ts`) dynamically imports the configured engine (`'pglite'` or `'postgres'`). Skills are fat markdown files (tool-agnostic; they work in both CLI and plugin contexts).
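The contract-first pattern above can be sketched in miniature — a hypothetical illustration, not the real `src/core/operations.ts` contract, with invented operation shapes:

```typescript
// Hypothetical miniature of the contract-first pattern: one operation table
// drives both the CLI dispatcher and the MCP tool definitions.
type OpDef = {
  name: string;
  description: string;
  run: (params: Record<string, unknown>) => unknown;
};

const ops: OpDef[] = [
  { name: 'get_page', description: 'Fetch a page by slug', run: (p) => ({ slug: p.slug }) },
  { name: 'search', description: 'Hybrid search', run: (p) => ({ query: p.query }) },
];

// Both surfaces derive from the same array, so they cannot drift apart.
const toolDefs = ops.map((o) => ({ name: o.name, description: o.description }));
const cliDispatch = new Map(ops.map((o) => [o.name, o.run]));
```

Because every surface is generated from the one table, adding an operation in one place exposes it everywhere at once.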
Trust boundary: `OperationContext.remote` distinguishes trusted local CLI callers (`remote: false`, set by `src/cli.ts`) from untrusted agent-facing callers (`remote: true`, set by `src/mcp/server.ts`). Security-sensitive operations like `file_upload` tighten filesystem confinement when `remote` is true and default to strict behavior when it is unset.
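A minimal sketch of that fail-closed gating — the function names and confinement roots here are invented for illustration; only the `OperationContext.remote` flag and the "strict when unset" rule come from the text above:

```typescript
// Illustrative only: `remote` semantics per the trust-boundary description.
type OperationContext = { remote?: boolean };

// Fail-closed: an unset flag is treated the same as an untrusted remote caller.
function isTrustedLocal(ctx: OperationContext): boolean {
  return ctx.remote === false;
}

// Hypothetical helper: trusted CLI callers may upload from the working
// directory; remote (or unknown) callers are confined to the brain directory.
function resolveUploadRoot(ctx: OperationContext, cwd: string, brainDir: string): string {
  return isTrustedLocal(ctx) ? cwd : brainDir;
}
```

The key property is that forgetting to set the flag yields the strict path, never the permissive one.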
- `src/core/operations.ts` — Contract-first operation definitions (the foundation). Also exports upload validators: `validateUploadPath`, `validatePageSlug`, `validateFilename`. `OperationContext.remote` flags untrusted callers.
- `src/core/engine.ts` — Pluggable engine interface (`BrainEngine`). `clampSearchLimit(limit, default, cap)` takes an explicit cap so per-operation caps can be tighter than `MAX_SEARCH_LIMIT`. Exports `LinkBatchInput`/`TimelineBatchInput` for the v0.12.1 bulk-insert API (`addLinksBatch`/`addTimelineEntriesBatch`). As of v0.13.1, `BrainEngine` has a `readonly kind: 'postgres' | 'pglite'` discriminator so migrations (`src/core/migrate.ts`) and other consumers can branch on engine without `instanceof` + dynamic imports.
- `src/core/engine-factory.ts` — Engine factory with dynamic imports (`'pglite'` | `'postgres'`)
- `src/core/pglite-engine.ts` — PGLite (embedded Postgres 17.5 via WASM) implementation, all 40 `BrainEngine` methods. `addLinksBatch`/`addTimelineEntriesBatch` use multi-row `unnest()` with manual `$N` placeholders. As of v0.13.1, `connect()` wraps `PGlite.create()` in a try/catch that emits an actionable error naming the macOS 26.3 WASM bug (#223) and pointing at `gbrain doctor`; the lock is released on failure so the next process can retry cleanly.
- `src/core/pglite-schema.ts` — PGLite-specific DDL (pgvector, pg_trgm, triggers)
- `src/core/postgres-engine.ts` — Postgres + pgvector implementation (Supabase / self-hosted). `addLinksBatch`/`addTimelineEntriesBatch` use `INSERT ... SELECT FROM unnest($1::text[], ...) JOIN pages ON CONFLICT DO NOTHING RETURNING 1` — 4-5 array params regardless of batch size, sidestepping the 65535-parameter cap. As of v0.12.3, `searchKeyword`/`searchVector` scope `statement_timeout` via `sql.begin` + `SET LOCAL` so the GUC dies with the transaction instead of leaking across the pooled postgres.js connection (contributed by @garagon). `getEmbeddingsByChunkIds` uses `tryParseEmbedding` so one corrupt row skips + warns instead of killing the query.
- `src/core/utils.ts` — Shared SQL utilities extracted from postgres-engine.ts. Exports `parseEmbedding(value)` (throws on unknown input; used by migration + ingest paths where data integrity matters) and, as of v0.12.3, `tryParseEmbedding(value)` (returns `null` + warns once per process; used by search/rescore paths where availability matters more than strictness).
- `src/core/db.ts` — Connection management, schema initialization
- `src/commands/migrate-engine.ts` — Bidirectional engine migration (`gbrain migrate --to supabase/pglite`)
- `src/core/import-file.ts` — `importFromFile` + `importFromContent` (chunk + embed + tags)
- `src/core/sync.ts` — Pure sync functions (manifest parsing, filtering, slug conversion)
- `src/core/storage.ts` — Pluggable storage interface (S3, Supabase Storage, local)
- `src/core/supabase-admin.ts` — Supabase admin API (project discovery, pgvector check)
- `src/core/file-resolver.ts` — File resolution with fallback chain (local -> .redirect.yaml -> .redirect -> .supabase)
- `src/core/chunkers/` — 3-tier chunking (recursive, semantic, LLM-guided)
- `src/core/search/` — Hybrid search: vector + keyword + RRF + multi-query expansion + dedup
- `src/core/search/intent.ts` — Query intent classifier (entity/temporal/event/general → auto-selects detail level)
- `src/core/search/eval.ts` — Retrieval eval harness: P@k, R@k, MRR, nDCG@k metrics + `runEval()` orchestrator
- `src/commands/eval.ts` — `gbrain eval` command: single-run table + A/B config comparison
- `src/core/embedding.ts` — OpenAI text-embedding-3-large, batch, retry, backoff
- `src/core/check-resolvable.ts` — Resolver validation: reachability, MECE overlap, DRY checks, structured fix objects. v0.14.1: `CROSS_CUTTING_PATTERNS.conventions` is an array (notability gate accepts both `conventions/quality.md` and `_brain-filing-rules.md`). New `extractDelegationTargets()` parses `> **Convention:**`, `> **Filing rule:**`, and inline backtick references. DRY suppression is proximity-based via `DRY_PROXIMITY_LINES = 40`.
- `src/core/repo-root.ts` — Shared `findRepoRoot(startDir?)` (v0.16.4): walks up from `startDir` (default `process.cwd()`) looking for `skills/RESOLVER.md`. Zero-dependency module imported by both `doctor.ts` and `check-resolvable.ts`. Parameterized `startDir` makes tests hermetic.
- `src/commands/check-resolvable.ts` — Standalone CLI wrapper (v0.16.4) over `checkResolvable()`. Exports `parseFlags`, `resolveSkillsDir`, `DEFERRED`, `runCheckResolvable`. Exit rule: 1 on any issue (warnings OR errors), stricter than doctor's `ok` flag — honors README:259. Stable JSON envelope `{ok, skillsDir, report, autoFix, deferred, error, message}` — same shape on success and error paths. The `--fix` path runs `autoFixDryViolations` BEFORE `checkResolvable` (same ordering as doctor). `scripts/skillify-check.ts` subprocess-calls `gbrain check-resolvable --json` (cached per process) and fails loud on binary-missing — no silent false-pass. v0.19: AGENTS.md workspaces now resolve natively (see `src/core/resolver-filenames.ts`) — gbrain inspects the 107-skill OpenClaw deployment whether the routing file is `RESOLVER.md` or `AGENTS.md`. `DEFERRED[]` is empty — Checks 5 + 6 shipped as real code, not issue URLs.
- `src/core/resolver-filenames.ts` (v0.19) — central list of accepted routing filenames (`RESOLVER.md`, `AGENTS.md`). Shared by `findRepoRoot`, check-resolvable, and skillpack install so every code path walks the same fallback chain.
- `src/commands/skillify.ts` + `src/core/skillify/{generator,templates}.ts` (v0.19) — `gbrain skillify scaffold <name>` creates all stubs for a new skill in one command: SKILL.md, script, tests, routing-eval.jsonl, resolver entry, filing-rules pointer. `gbrain skillify check <script>` runs the 10-step checklist (LLM evals, routing evals, check-resolvable gate, filing audit) against a candidate skill before it lands.
- `src/commands/skillify-check.ts` (v0.19) — `gbrain skillpack-check` agent-readable health report. Exit 0/1/2 for CI pipeline gating; JSON for debugging. Wraps `check-resolvable --json`, `doctor --json`, and the migration ledger into one payload so agents can decide whether a human action is required.
- `src/commands/skillpack.ts` + `src/core/skillpack/{bundle,installer}.ts` (v0.19) — `gbrain skillpack install` drops gbrain's curated 25-skill bundle into a host workspace, managed-block style. Never clobbers local edits; tracks a skill manifest so subsequent `install --update` diffs cleanly. The bundle builder (`skillpack/bundle.ts`) packages the set from `skills/` into a versioned payload.
- `src/core/skill-manifest.ts` (v0.19) — parser for `skill-manifest.json` records. Used by the skillpack installer to detect drift between the shipped bundle and the user's local edits, so updates merge instead of overwriting.
- `src/commands/routing-eval.ts` + `src/core/routing-eval.ts` (v0.19) — `gbrain routing-eval` catches user phrasings that route to the wrong skill. Reads `skills/<name>/routing-eval.jsonl` fixtures (`{intent, expected_skill, ambiguous_with?}`). The structural layer runs in check-resolvable by default (zero API cost); `--llm` opts into a Haiku tie-break layer for CI. False positives surface before users hit them.
- `src/core/filing-audit.ts` + `skills/_brain-filing-rules.json` (v0.19) — Check 6 of check-resolvable. Parses the new `writes_pages:`/`writes_to:` frontmatter on skills and audits their filing claims against the filing-rules JSON. Warning-only in v0.19; upgrades to error in v0.20.
- `src/core/dry-fix.ts` — `gbrain doctor --fix` engine. `autoFixDryViolations(fixes, {dryRun})` rewrites inlined rules to `> **Convention:** see [path](path).` callouts via three shape-aware expanders (bullet / blockquote / paragraph). Guards: working-tree-dirty (`getWorkingTreeStatus()` returns 3-state `'clean' | 'dirty' | 'not_a_repo'`), no-git-backup, inside-code-fence, already-delegated (40-line proximity, consistent with the detector), ambiguous-multi-match, block-is-callout. `execFileSync` array args (no shell — no injection surface). EOF newline preserved.
- `src/core/backoff.ts` — Adaptive load-aware throttling: CPU/memory checks, exponential backoff, active-hours multiplier
- `src/core/fail-improve.ts` — Deterministic-first, LLM-fallback loop with JSONL failure logging and auto-test generation
- `src/core/transcription.ts` — Audio transcription: Groq Whisper (default), OpenAI fallback, ffmpeg segmentation for >25MB
- `src/core/enrichment-service.ts` — Global enrichment service: entity slug generation, tier auto-escalation, batch throttling
- `src/core/data-research.ts` — Recipe validation, field extraction (MRR/ARR regex), dedup, tracker parsing, HTML stripping
- `src/commands/extract.ts` — `gbrain extract links|timeline|all [--source fs|db]`: batch link/timeline extraction. `fs` walks markdown files; `db` walks pages from the engine (mutation-immune snapshot iteration; use this for live brains with no local checkout). As of v0.12.1 there is no in-memory dedup pre-load — candidates are buffered 100 at a time and flushed via `addLinksBatch`/`addTimelineEntriesBatch`; `ON CONFLICT DO NOTHING` enforces uniqueness at the DB layer, and the `created` counter returns real rows inserted (truthful on re-runs).
- `src/commands/graph-query.ts` — `gbrain graph-query <slug> [--type T] [--depth N] [--direction in|out|both]`: typed-edge relationship traversal (renders an indented tree)
- `src/core/link-extraction.ts` — shared library for the v0.12.0 graph layer. `extractEntityRefs` (canonical, replaces the backlinks.ts duplicate) matches both `[Name](people/slug)` markdown links and Obsidian `[[people/slug|Name]]` wikilinks as of v0.12.3. `extractPageLinks`, `inferLinkType` heuristics (attended/works_at/invested_in/founded/advises/source/mentions), `parseTimelineEntries`, `isAutoLinkEnabled` config helper. `DIR_PATTERN` covers people, companies, deals, topics, concepts, projects, entities, tech, finance, personal, openclaw. Used by extract.ts, the operations.ts auto-link post-hook, and backlinks.ts.
- `src/core/minions/` — Minions job queue: BullMQ-inspired, Postgres-native (queue, worker, backoff, types, protected-names, quiet-hours, stagger, handlers/shell).
- `src/core/minions/queue.ts` — `MinionQueue` class (submit, claim, complete, fail, stall detection, parent-child, depth/child-cap, per-job timeouts, cascade-kill, attachments, idempotency keys, child_done inbox, removeOnComplete/Fail). `add()` takes a 4th `trusted` arg (separate from `opts` to prevent spread leakage); protected names in `PROTECTED_JOB_NAMES` require `{allowProtectedSubmit: true}`, and the check runs trim-normalized (whitespace-bypass safe). v0.14.1 #219: `add()` plumbs `max_stalled` through with a `[1, 100]` clamp; omitted values let the schema DEFAULT (5) kick in. v0.19.0: `handleWallClockTimeouts(lockDurationMs)` is the Layer 3 kill shot for jobs where `FOR UPDATE SKIP LOCKED` stall detection and the timeout sweep both fail to evict (wedged worker holding a row lock via a pending transaction). v0.19.1: the `maxWaiting` coalesce path now uses `pg_advisory_xact_lock` keyed on `(name, queue)` to serialize concurrent submits for the same key, and filters on `queue` in addition to `name` so cross-queue same-name jobs don't suppress each other.
- `src/core/minions/worker.ts` — `MinionWorker` class (handler registry, lock renewal, graceful shutdown, timeout safety net). v0.14.0 abort-path fix: aborted jobs now call `failJob` with a reason (timeout/cancel/lock-lost/shutdown) instead of returning silently. `shutdownAbort` (instance field) fires on process SIGTERM/SIGINT and propagates to `ctx.shutdownSignal` — the shell handler listens to it; non-shell handlers don't.
- `src/core/minions/types.ts` — `MinionJobInput` + `MinionJobStatus` + handler context types. `MinionJobInput.max_stalled` (new in v0.14.1) is optional; omitted values let the schema DEFAULT (5) kick in, provided values are clamped to `[1, 100]`.
- `src/core/minions/protected-names.ts` — side-effect-free constant module exporting `PROTECTED_JOB_NAMES` + `isProtectedJobName()`. Kept pure so the queue core can import it without loading handler modules.
- `src/core/minions/handlers/shell.ts` — `shell` job handler. Spawns `/bin/sh -c cmd` (absolute path, PATH-override-safe) or `argv[0] argv[1..]` (no shell). Env allowlist: `PATH, HOME, USER, LANG, TZ, NODE_ENV` + caller `env:` overrides. UTF-8-safe stdout/stderr tail via `string_decoder.StringDecoder`. Abort (either `ctx.signal` or `ctx.shutdownSignal`) fires SIGTERM → 5s grace → SIGKILL on the child. Requires `GBRAIN_ALLOW_SHELL_JOBS=1` on the worker (gated by `registerBuiltinHandlers`).
- `src/core/minions/handlers/shell-audit.ts` — per-submission JSONL audit trail at `~/.gbrain/audit/shell-jobs-YYYY-Www.jsonl` (ISO-week rotation; override via `GBRAIN_AUDIT_DIR`). Best-effort: `mkdirSync(recursive)` + `appendFileSync`; failures are logged to stderr, submission not blocked. Logs cmd (first 80 chars) or argv (JSON array). Never logs env values.
- `src/core/minions/backpressure-audit.ts` (v0.19.1) — sibling of shell-audit.ts for `maxWaiting` coalesce events. JSONL at `~/.gbrain/audit/backpressure-YYYY-Www.jsonl`. Fires one line per coalesce with `(queue, name, waiting_count, max_waiting, returned_job_id, ts)`. Closes the silent-drop vector the v0.19.0 maxWaiting guard introduced.
- `src/core/minions/handlers/subagent.ts` (v0.15) — LLM-loop handler. Two-phase tool persistence (pending → complete/failed), replay reconciliation for mid-dispatch crashes, dual-signal abort (`ctx.signal` + `ctx.shutdownSignal`), Anthropic prompt caching on system + tool defs. `makeSubagentHandler({engine, client?, ...})` factory; `MessagesClient` is an injectable interface the real SDK implements structurally. Throws `RateLeaseUnavailableError` (renewable) when rate-lease capacity is full.
- `src/core/minions/handlers/subagent-aggregator.ts` (v0.15) — `subagent_aggregator` handler. Claims AFTER all children resolve (queue changes guarantee every terminal child posts a `child_done` inbox message with its outcome). Reads the inbox via `ctx.readInbox()`, builds a deterministic mixed-outcome markdown summary. No LLM call in v0.15.
- `src/core/minions/handlers/subagent-audit.ts` (v0.15) — JSONL audit + heartbeat writer at `~/.gbrain/audit/subagent-jobs-YYYY-Www.jsonl`. Events: `submission` (one line per submit) + `heartbeat` (per turn boundary: `llm_call_started | llm_call_completed | tool_called | tool_result | tool_failed`). Never logs prompts or tool inputs. `readSubagentAuditForJob(jobId, {sinceIso})` is the readback path for `gbrain agent logs`.
- `src/core/minions/rate-leases.ts` (v0.15) — lease-based concurrency cap for outbound providers (default key `anthropic:messages`, max via `GBRAIN_ANTHROPIC_MAX_INFLIGHT`). Owner-tagged rows with `expires_at` auto-prune on acquire; `pg_advisory_xact_lock` guards check-then-insert; CASCADE on owning-job deletion. `renewLeaseWithBackoff` retries 3x (250/500/1000ms).
- `src/core/minions/wait-for-completion.ts` (v0.15) — poll-until-terminal helper for CLI callers. `TimeoutError` does NOT cancel the job; `AbortSignal` exits without throwing. Default `pollMs`: 1000 on Postgres, 250 on PGLite inline.
- `src/core/minions/transcript.ts` (v0.15) — renders `subagent_messages` + `subagent_tool_executions` to markdown. Tool rows splice under their owning assistant `tool_use` by `tool_use_id`. UTF-8-safe truncation; unknown block types fall through to fenced JSON.
- `src/core/minions/plugin-loader.ts` (v0.15) — `GBRAIN_PLUGIN_PATH` discovery. Absolute paths only, left-wins collision, `gbrain.plugin.json` with `plugin_version: "gbrain-plugin-v1"`, plugins ship DEFS only (no new tools), `allowed_tools:` validated at load time against the derived registry.
- `src/core/minions/tools/brain-allowlist.ts` (v0.15) — derives the subagent tool registry from `src/core/operations.ts`. 11-name allow-list: `query`, `search`, `get_page`, `list_pages`, `file_list`, `file_url`, `get_backlinks`, `traverse_graph`, `resolve_slugs`, `get_ingest_log`, `put_page`. The `put_page` schema is namespace-wrapped per subagent (`^wiki/agents/<subagentId>/.+`); the `put_page` op's server-side check is the authoritative gate via `ctx.viaSubagent`, fail-closed.
- `src/mcp/tool-defs.ts` (v0.15) — extracted `buildToolDefs(ops)` helper. The MCP server and the subagent tool registry both call it; byte-for-byte equivalence is pinned by `test/mcp-tool-defs.test.ts`.
- `src/core/minions/attachments.ts` — Attachment validation (path traversal, null byte, oversize, base64, duplicate detection)
- `src/commands/agent.ts` (v0.16) — `gbrain agent run <prompt> [flags]` CLI. Submits `subagent` (or N children + 1 aggregator) under `{allowProtectedSubmit: true}`. Single-entry `--fanout-manifest` short-circuits. Children get `on_child_fail: 'continue'` + `max_stalled: 3`. `--follow` is the default on TTY; streams logs and polls `waitForCompletion` in parallel. Ctrl-C detaches, does not cancel.
- `src/commands/agent-logs.ts` (v0.16) — `gbrain agent logs <job> [--follow] [--since]`. Merges JSONL heartbeat audit + `subagent_messages` into a chronological timeline. `parseSince` accepts ISO-8601 or relative (`5m`, `1h`, `2d`). The transcript tail renders only for terminal jobs.
- `src/commands/jobs.ts` — `gbrain jobs` CLI subcommands + the `gbrain jobs work` daemon. v0.13.1 surfaces the full `MinionJobInput` retry/backoff/timeout/idempotency surface as first-class CLI flags on `jobs submit`: `--max-stalled`, `--backoff-type fixed|exponential`, `--backoff-delay`, `--backoff-jitter`, `--timeout-ms`, `--idempotency-key`. `jobs smoke --sigkill-rescue` is the opt-in regression guard for #219. v0.16 wires `registerBuiltinHandlers` to always register `subagent` + `subagent_aggregator` (no env flag — `ANTHROPIC_API_KEY` is the natural cost gate, trust is via `PROTECTED_JOB_NAMES`) and loads `GBRAIN_PLUGIN_PATH` plugins at worker startup with a loud startup line per plugin. The `shell` handler is still gated by `GBRAIN_ALLOW_SHELL_JOBS=1` (RCE surface, separate concern).
- `src/commands/features.ts` — `gbrain features --json --auto-fix`: usage scan + feature adoption salesman
- `src/commands/autopilot.ts` — `gbrain autopilot --install`: self-maintaining brain daemon (sync+extract+embed)
- `src/mcp/server.ts` — MCP stdio server (generated from operations)
- `src/commands/auth.ts` — Standalone token management (create/list/revoke/test)
- `src/commands/upgrade.ts` — Self-update CLI. `runPostUpgrade()` enumerates migrations from the TS registry (`src/commands/migrations/index.ts`) and tail-calls `runApplyMigrations(['--yes', '--non-interactive'])` so the mechanical side of every outstanding migration runs unconditionally.
- `src/commands/migrations/` — TS migration registry (compiled into the binary; no filesystem walk of `skills/migrations/*.md` needed at runtime). `index.ts` lists migrations in semver order. `v0_11_0.ts` = Minions adoption orchestrator (8 phases). `v0_12_0.ts` = Knowledge Graph auto-wire orchestrator (5 phases: schema → config check → backfill links → backfill timeline → verify). `phaseASchema` has a 600s timeout (bumped from 60s in v0.12.1 for duplicate-heavy brains). `v0_12_2.ts` = JSONB double-encode repair orchestrator (4 phases: schema → repair-jsonb → verify → record). `v0_14_0.ts` = shell-jobs + autopilot cooperative (2 phases: schema `ALTER minion_jobs.max_stalled SET DEFAULT 3` — superseded by v0.14.3's schema-level DEFAULT 5 + UPDATE backfill; pending-host-work ping for `skills/migrations/v0.14.0.md`). All orchestrators are idempotent and resumable from `partial` status. As of v0.14.2 (Bug 3), the RUNNER owns all ledger writes — orchestrators return `OrchestratorResult` and `apply-migrations.ts` persists a canonical `{version, status, phases}` shape after return. Orchestrators no longer call `appendCompletedMigration` directly. `statusForVersion` prefers `complete` over `partial` (never regresses). 3 consecutive partials → wedged → `--force-retry <version>` writes a `'retry'` reset marker. v0.14.3 (fix wave) ships schema-only migrations v14 (`pages_updated_at_index`) + v15 (`minion_jobs_max_stalled_default_5` with UPDATE backfill) via the `MIGRATIONS` array in `src/core/migrate.ts` — no orchestrator phases needed.
- `src/commands/repair-jsonb.ts` — `gbrain repair-jsonb [--dry-run] [--json]`: rewrites `jsonb_typeof='string'` rows in place across 5 affected columns (`pages.frontmatter`, `raw_data.data`, `ingest_log.pages_updated`, `files.metadata`, `page_versions.frontmatter`). Fixes the v0.12.0 double-encode bug on Postgres; PGLite no-ops. Idempotent.
- `src/commands/orphans.ts` — `gbrain orphans [--json] [--count] [--include-pseudo]`: surfaces pages with zero inbound wikilinks, grouped by domain. Auto-generated/raw/pseudo pages are filtered by default. Also exposed as the `find_orphans` MCP operation. Shipped in v0.12.3 (contributed by @knee5).
- `src/commands/doctor.ts` — `gbrain doctor [--json] [--fast] [--fix] [--dry-run] [--index-audit]`: health checks. v0.12.3 added the `jsonb_integrity` + `markdown_body_completeness` reliability checks. v0.14.1: `--fix` delegates inlined cross-cutting rules to `> **Convention:** see [path](path).` callouts (pipes DRY violations into `src/core/dry-fix.ts`); `--fix --dry-run` previews without writing. v0.14.2: the `schema_version` check fails loudly when `version=0` (migrations never ran — the #218 `bun install -g` signature) and routes users to `gbrain apply-migrations --yes`; new opt-in `--index-audit` flag (Postgres-only) reports zero-scan indexes from `pg_stat_user_indexes` (informational only, no auto-drop). v0.15.2: every DB check is wrapped in a progress phase; `markdown_body_completeness` runs under a 1s heartbeat timer so 10+ min scans are observable on 50K-page brains. v0.19.1 added `queue_health` (Postgres-only) with two subchecks: stalled-forever active jobs (`started_at` > 1h) and waiting-depth-per-name > threshold (default 10, override via `GBRAIN_QUEUE_WAITING_THRESHOLD`). The worker-heartbeat subcheck is intentionally deferred to follow-up B7 because it needs a `minion_workers` table to produce ground-truth signal. Fix hints point at `gbrain repair-jsonb`, `gbrain sync --force`, `gbrain apply-migrations`, and `gbrain jobs get/cancel <id>`.
- `src/core/migrate.ts` — schema-migration runner. Owns the `MIGRATIONS` array (source of truth for schema DDL). v0.14.2 extended the `Migration` interface with `sqlFor?: { postgres?, pglite? }` (engine-specific SQL overrides `sql`) and `transaction?: boolean` (set to false for `CREATE INDEX CONCURRENTLY`, which Postgres refuses inside a transaction; ignored on PGLite since it has no concurrent writers). Migration v14 (fix wave) uses a handler branching on `engine.kind` to run CONCURRENTLY on Postgres (with a pre-drop of any invalid remnant via `pg_index.indisvalid`) and plain `CREATE INDEX` on PGLite. v15 bumps the `minion_jobs.max_stalled` default 1→5 and backfills existing non-terminal rows.
- `src/core/progress.ts` — Shared bulk-action progress reporter. Writes to stderr. Modes: `auto` (TTY: `\r`-rewriting; non-TTY: plain lines), `human`, `json` (JSONL), `quiet`. Rate-gated by `minIntervalMs` and `minItems`. `startHeartbeat(reporter, note)` helper for single long queries. `child()` composes phase paths. A singleton SIGINT/SIGTERM coordinator emits `abort` events for every live phase. EPIPE defense on both sync throws and stream `'error'` events. Zero dependencies. Introduced in v0.15.2.
- `src/core/cli-options.ts` — Global CLI flag parser. `parseGlobalFlags(argv)` returns `{cliOpts, rest}` with `--quiet`/`--progress-json`/`--progress-interval=<ms>` stripped. `getCliOptions()`/`setCliOptions()` expose a module-level singleton so commands reach the resolved flags without parameter threading. `cliOptsToProgressOptions()` maps to reporter options. `childGlobalFlags()` returns the flag suffix to append to `execSync('gbrain ...')` calls in migration orchestrators. `OperationContext.cliOpts` extends shared-op dispatch for MCP callers.
- `src/core/cycle.ts` — v0.17 brain maintenance cycle primitive. `runCycle(engine: BrainEngine | null, opts: CycleOpts): Promise<CycleReport>` composes 6 phases in semantically driven order (lint → backlinks → sync → extract → embed → orphans). Three callers: the `gbrain dream` CLI, the `gbrain autopilot` daemon's inline path, and the Minions `autopilot-cycle` handler (`src/commands/jobs.ts`). One source of truth for what the brain does overnight. Coordination via the `gbrain_cycle_locks` DB table (TTL-based; works through PgBouncer transaction pooling, unlike session-scoped `pg_try_advisory_lock`) + a `~/.gbrain/cycle.lock` file lock with PID-liveness for PGLite / engine=null mode. `CycleReport.schema_version: "1"` is the stable agent-consumable shape. `PhaseResult.error: { class, code, message, hint?, docs_url? }` is Stripe-API-tier structured failure info. The `yieldBetweenPhases` hook is awaited between every phase — the Minions handler uses this to renew its job lock and prevent the v0.14 stall-death regression. Engine nullable: filesystem phases (lint, backlinks) run without DB; DB phases skip with `status: "skipped", reason: "no_database"`. Lock-skip: read-only phase selections (`--phase orphans`) bypass the cycle lock.
- `src/commands/dream.ts` — v0.17 `gbrain dream` CLI. ~80-line thin alias over `runCycle`. brainDir resolution requires an explicit `--dir` OR `sync.repo_path` config (no more walk-up-cwd-for-.git footgun). Flags: `--dry-run`, `--json`, `--phase <name>`, `--pull`, `--dir <path>`. Exit code 1 on status=failed (partial/warn not fatal — don't page on warnings).
- `scripts/check-progress-to-stdout.sh` — CI guard against regressing to `\r`-on-stdout progress. Wired into `bun run test` via `scripts/check-progress-to-stdout.sh && bun test` in package.json.
- `docs/progress-events.md` — Canonical JSON event schema reference. Stable from v0.15.2, additive only.
- `src/core/markdown.ts` — Frontmatter parsing + body splitter. `splitBody` requires an explicit timeline sentinel (`<!-- timeline -->`, `--- timeline ---`, or `---` immediately before `## Timeline`/`## History`). Plain `---` in body text is a markdown horizontal rule, not a separator. `inferType` auto-types `/wiki/analysis/` → analysis, `/wiki/guides/` → guide, `/wiki/hardware/` → hardware, `/wiki/architecture/` → architecture, `/writing/` → writing (plus the existing people/companies/deals/etc heuristics).
- `scripts/check-jsonb-pattern.sh` — CI grep guard. Fails the build if anyone reintroduces (a) the `${JSON.stringify(x)}::jsonb` interpolation pattern (postgres.js v3 double-encodes it), or (b) `max_stalled INTEGER NOT NULL DEFAULT 1` in any schema source file (v0.15.1 #219 regression guard — must be DEFAULT 5 to preserve SIGKILL-rescue). Wired into `bun test`.
- `scripts/llms-config.ts` + `scripts/build-llms.ts` — Generator for `llms.txt` (llmstxt.org-spec web index) + `llms-full.txt` (inlined single-fetch bundle). A curated config drives both. Run `bun run build:llms` after adding a new doc. The `LLMS_REPO_BASE` env var lets forks regenerate with their own URL base. `FULL_SIZE_BUDGET` (600KB) caps the inline bundle; the generator WARNs if exceeded. Committed output is not analogous to `schema-embedded.ts` (no runtime consumer); we commit for GitHub browsing and fork-safe fetching.
- `AGENTS.md` — Local-clone entry point for non-Claude agents (Codex, Cursor, OpenClaw, Aider). Mirrors `CLAUDE.md` intent via relative links. Claude Code keeps using `CLAUDE.md`.
- `docs/UPGRADING_DOWNSTREAM_AGENTS.md` — Patches for downstream agent skill forks to apply when upgrading. Each release appends a new section. v0.10.3 includes diffs for brain-ops, meeting-ingestion, signal-detector, enrich.
- `src/core/schema-embedded.ts` — AUTO-GENERATED from schema.sql (run `bun run build:schema`)
- `src/schema.sql` — Full Postgres + pgvector DDL (source of truth; generates schema-embedded.ts)
- `src/commands/integrations.ts` — Standalone integration recipe management (no DB needed). Exports `getRecipeDirs()` (trust-tagged recipe sources) and SSRF helpers (`isInternalUrl`, `parseOctet`, `hostnameToOctets`, `isPrivateIpv4`). Only package-bundled recipes are `embedded=true`; `$GBRAIN_RECIPES_DIR` and cwd `./recipes/` are untrusted and cannot run `command`/`http`/`string` health checks.
- `src/core/search/expansion.ts` — Multi-query expansion via Haiku. Exports `sanitizeQueryForPrompt` + `sanitizeExpansionOutput` (prompt-injection defense-in-depth). The sanitized query is only used for the LLM channel; the original query still drives search.
- `recipes/` — Integration recipe files (YAML frontmatter + markdown setup instructions)
- `docs/guides/` — Individual SKILLPACK guides (broken out from the monolith)
- `docs/integrations/` — "Getting Data In" guides and integration docs
- `docs/architecture/infra-layer.md` — Shared infrastructure documentation
- `docs/ethos/THIN_HARNESS_FAT_SKILLS.md` — Architecture philosophy essay
- `docs/ethos/MARKDOWN_SKILLS_AS_RECIPES.md` — "Homebrew for Personal AI" essay
- `docs/guides/repo-architecture.md` — Two-repo pattern (agent vs brain)
- `docs/guides/sub-agent-routing.md` — Model routing table for sub-agents
- `docs/guides/skill-development.md` — 5-step skill development cycle + MECE
- `docs/guides/idea-capture.md` — Originality distribution, depth test, cross-linking
- `docs/guides/quiet-hours.md` — Notification hold + timezone-aware delivery
- `docs/guides/diligence-ingestion.md` — Data room to brain pages pipeline
- `docs/designs/HOMEBREW_FOR_PERSONAL_AI.md` — 10-star vision for the integration system
- `docs/mcp/` — Per-client setup guides (Claude Desktop, Code, Cowork, Perplexity)
- BrainBench (benchmark suite + corpus): lives in the separate gbrain-evals repo. Not installed alongside gbrain.
- `skills/_brain-filing-rules.md` — Cross-cutting brain filing rules (referenced by all brain-writing skills)
- `skills/RESOLVER.md` — Skill routing table (based on the agent-fork AGENTS.md pattern)
- `skills/conventions/` — Cross-cutting rules (quality, brain-first, model-routing, test-before-bulk, cross-modal)
- `skills/_output-rules.md` — Output quality standards (deterministic links, no slop, exact phrasing)
- `skills/signal-detector/SKILL.md` — Always-on idea+entity capture on every message
- `skills/brain-ops/SKILL.md` — Brain-first lookup, read-enrich-write loop, source attribution
- `skills/idea-ingest/SKILL.md` — Links/articles/tweets with author people page mandatory
- `skills/media-ingest/SKILL.md` — Video/audio/PDF/book with entity extraction
- `skills/meeting-ingestion/SKILL.md` — Transcripts with attendee enrichment chaining
- `skills/citation-fixer/SKILL.md` — Citation format auditing and fixing
- `skills/repo-architecture/SKILL.md` — Filing rules by primary subject
- `skills/skill-creator/SKILL.md` — Create conforming skills with MECE check
- `skills/daily-task-manager/SKILL.md` — Task lifecycle with priority levels
- `skills/daily-task-prep/SKILL.md` — Morning prep with calendar context
- `skills/cross-modal-review/SKILL.md` — Quality gate via second model
- `skills/cron-scheduler/SKILL.md` — Schedule staggering, quiet hours, idempotency
- `skills/reports/SKILL.md` — Timestamped reports with keyword routing
- `skills/testing/SKILL.md` — Skill validation framework
- `skills/soul-audit/SKILL.md` — 6-phase interview for SOUL.md, USER.md, ACCESS_POLICY.md, HEARTBEAT.md
- `skills/webhook-transforms/SKILL.md` — External events to brain signals
- `skills/data-research/SKILL.md` — Structured data research: email-to-tracker pipeline with parameterized YAML recipes
- `skills/minion-orchestrator/SKILL.md` — Unified background-work skill (v0.20.4 consolidation of the former `minion-orchestrator` + `gbrain-jobs` split). Two lanes: shell jobs via `gbrain jobs submit shell --params '{"cmd":"..."}'` (operator/CLI only; MCP throws `permission_denied` for protected names) and LLM subagents via `gbrain agent run` (user-facing entrypoint). Shared Preconditions block, parent-child DAGs with depth/cap/timeouts, `child_done` inbox for fan-in, PGLite `--follow` inline path for dev. Triggers narrowed from bare "gbrain jobs" to "gbrain jobs submit" + "submit a gbrain job" so `stats`/`prune`/`retry` questions fall through to `gbrain --help`.
- `templates/` — SOUL.md, USER.md, ACCESS_POLICY.md, HEARTBEAT.md templates
- `skills/migrations/` — Version migration files with feature_pitch YAML frontmatter
- `src/commands/publish.ts` — Deterministic brain page publisher (code+skill pair, zero LLM calls)
- `src/commands/backlinks.ts` — Back-link checker and fixer (enforces the Iron Law)
- `src/commands/lint.ts` — Page quality linter (catches LLM artifacts, placeholder dates)
- `src/commands/report.ts` — Structured report saver (audit trail for maintenance/enrichment)
- `openclaw.plugin.json` — ClawHub bundle plugin manifest
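The `clampSearchLimit(limit, default, cap)` contract described for `src/core/engine.ts` above — an explicit per-call cap that can be tighter than the global `MAX_SEARCH_LIMIT` — can be sketched like this. This is an illustrative reimplementation of the stated contract, not the shipped code; the exact handling of non-finite or fractional input is an assumption:

```typescript
// Sketch: clamp a caller-supplied search limit against a per-operation cap.
// Assumed edge-case behavior: missing/invalid limits fall back to the default,
// and the default itself is still subject to the cap.
function clampSearchLimit(limit: number | undefined, fallback: number, cap: number): number {
  if (limit === undefined || !Number.isFinite(limit) || limit < 1) {
    return Math.min(fallback, cap);
  }
  return Math.min(Math.floor(limit), cap);
}
```

Passing the cap explicitly (rather than closing over one global constant) is what lets each operation choose a tighter ceiling.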
BrainBench — the public benchmark for personal-knowledge agent stacks — lives in github.com/garrytan/gbrain-evals. It depends on gbrain as a consumer; gbrain never pulls in the ~5MB eval corpus or the pdf-parse dev dep at install time.
gbrain's public API surface (the `exports` map in package.json) is what gbrain-evals consumes: `gbrain/engine`, `gbrain/types`, `gbrain/operations`, `gbrain/pglite-engine`, `gbrain/link-extraction`, `gbrain/import-file`, `gbrain/transcription`, `gbrain/embedding`, `gbrain/config`, `gbrain/markdown`, `gbrain/backoff`, `gbrain/search/hybrid`, `gbrain/search/expansion`, and `gbrain/extract`. Removing any of these is a breaking change for the gbrain-evals consumer.
Run `gbrain --help` or `gbrain --tools-json` for the full command reference.
Key commands added in v0.7:
- `gbrain init` — defaults to PGLite (no Supabase needed), scans repo size, suggests Supabase for 1000+ files
- `gbrain migrate --to supabase` / `gbrain migrate --to pglite` — bidirectional engine migration
Key commands added for Minions (job queue):
- `gbrain jobs submit <name> [--params JSON] [--follow] [--dry-run]` — submit a background job. v0.13.1 adds first-class flags for every `MinionJobInput` tuning knob: `--max-stalled N`, `--backoff-type fixed|exponential`, `--backoff-delay Nms`, `--backoff-jitter 0..1`, `--timeout-ms N`, `--idempotency-key K`.
- `gbrain jobs list [--status S] [--queue Q]` — list jobs with filters
- `gbrain jobs get <id>` — job details with attempt history
- `gbrain jobs cancel/retry/delete <id>` — manage job lifecycle
- `gbrain jobs prune [--older-than 30d]` — clean old completed/dead jobs
- `gbrain jobs stats` — job health dashboard
- `gbrain jobs smoke [--sigkill-rescue]` — health smoke test. `--sigkill-rescue` is the v0.13.1 regression guard for #219: simulates a killed worker and asserts the stalled job is requeued instead of dead-lettered on first stall.
- `gbrain jobs work [--queue Q] [--concurrency N]` — start worker daemon (Postgres only)
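For orientation, the three backoff knobs compose into a retry delay roughly as sketched below. This is an illustrative sketch only: `computeBackoff` and its signature are hypothetical, and the real Minions scheduler in gbrain's source may combine the flags differently.

```typescript
// Sketch of how --backoff-type, --backoff-delay, and --backoff-jitter
// could combine into a retry delay. Illustrative only; not gbrain's
// actual scheduler code.
type BackoffType = "fixed" | "exponential";

function computeBackoff(
  type: BackoffType,
  baseDelayMs: number, // --backoff-delay
  attempt: number,     // 1-based retry attempt
  jitter: number,      // --backoff-jitter, 0..1
  rand: () => number = Math.random,
): number {
  // fixed: constant delay; exponential: doubles each attempt
  const raw = type === "fixed" ? baseDelayMs : baseDelayMs * 2 ** (attempt - 1);
  // jitter 0.2 spreads the delay uniformly across [0.8*raw, 1.2*raw]
  const spread = raw * jitter * (2 * rand() - 1);
  return Math.round(raw + spread);
}
```

With jitter 0 and a 1000ms base, exponential retries land at 1000, 2000, 4000ms; jitter desynchronizes a herd of workers retrying at once.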
Key commands added in v0.12.2:
- `gbrain repair-jsonb [--dry-run] [--json]` — repair double-encoded JSONB rows left over from v0.12.0-and-earlier Postgres writes. Idempotent; PGLite no-ops. The `v0_12_2` migration runs this automatically on `gbrain upgrade`.
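The double-encode bug stores a JSON *string* where an object belongs (`"{\"a\":1}"` instead of `{"a":1}`). A minimal sketch of the detection idea follows; `isDoubleEncoded` and `repairValue` are hypothetical helpers, not the actual repair-jsonb implementation, which works in SQL against the affected tables.

```typescript
// A JSONB value is "double-encoded" when the driver hands back a string
// whose contents parse to an object or array. Hypothetical detector;
// gbrain's real repair logic lives in the repair-jsonb command.
function isDoubleEncoded(value: unknown): boolean {
  if (typeof value !== "string") return false; // already a real object
  try {
    const inner = JSON.parse(value);
    return typeof inner === "object" && inner !== null;
  } catch {
    return false; // a plain non-JSON string is fine as-is
  }
}

function repairValue(value: unknown): unknown {
  return isDoubleEncoded(value) ? JSON.parse(value as string) : value;
}
```

Running the repair twice is a no-op by construction, which is the idempotency property the command advertises.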
Key commands added in v0.12.3:
- `gbrain orphans [--json] [--count] [--include-pseudo]` — surface pages with zero inbound wikilinks, grouped by domain. Auto-generated/raw/pseudo pages are filtered by default. Also exposed as the `find_orphans` MCP operation. The natural consumer of the v0.12.0 knowledge graph layer: once edges are captured, find the gaps.
- `gbrain doctor` gains two new reliability detection checks: `jsonb_integrity` (v0.12.0 Postgres double-encode damage) and `markdown_body_completeness` (pages truncated by the old splitBody bug). Detection only; fix hints point at `gbrain repair-jsonb` and `gbrain sync --force`.
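Conceptually, an orphan is a page no wikilink edge points at. A minimal in-memory sketch of that check is below; the names (`findOrphans`, the `pseudo` flag) are illustrative, and the real command runs the equivalent query in SQL over the link table.

```typescript
interface Link { fromSlug: string; toSlug: string }

// Pages with zero inbound links. Pseudo/auto-generated pages are excluded
// by default, mirroring gbrain orphans' --include-pseudo flag.
function findOrphans(
  pages: { slug: string; pseudo?: boolean }[],
  links: Link[],
  includePseudo = false,
): string[] {
  const inbound = new Set(links.map((l) => l.toSlug));
  return pages
    .filter((p) => includePseudo || !p.pseudo)
    .filter((p) => !inbound.has(p.slug))
    .map((p) => p.slug);
}
```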
Key commands added in v0.14.2:
- `gbrain sync --skip-failed` — acknowledge the current set of failed-parse files recorded in `~/.gbrain/sync-failures.jsonl` so the sync bookmark advances past them. Doctor's `sync_failures` check shows previously-skipped as "all acknowledged" instead of warning.
- `gbrain sync --retry-failed` — re-walk the unacknowledged failures and re-attempt parsing. If the files now succeed, they clear from the set and the bookmark advances naturally.
- `gbrain apply-migrations --force-retry <version>` — reset a wedged migration (3 consecutive partials with no completion) by appending a `'retry'` marker. The next `apply-migrations --yes` treats the version as fresh. `complete` status never regresses to `partial` either before or after a retry marker.
- `GBRAIN_POOL_SIZE` env var — honored by both the singleton pool (`src/core/db.ts`) and the parallel-import worker pool (`src/commands/import.ts`). Default is 10; lower to 2 for the Supabase transaction pooler to avoid MaxClients crashes during `gbrain upgrade` subprocess spawns. Read at call time via `resolvePoolSize()`.
- `gbrain doctor` gains two new checks: `sync_failures` (surfaces unacknowledged parse failures with exact paths + fix hints) and `brain_score` (renders the 5-component breakdown when score < 100: embed coverage / 35, link density / 25, timeline coverage / 15, orphans / 15, dead links / 10 — sum equals total).
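The call-time env-var pattern looks roughly like this. A sketch only: the actual `resolvePoolSize()` in `src/core/db.ts` may validate or clamp differently; what matters is that the value is read when the pool is created, not cached at module load.

```typescript
// Call-time resolution of GBRAIN_POOL_SIZE, sketched. Defaults to 10;
// non-numeric or sub-1 values fall back so a typo cannot zero the pool.
// The real resolvePoolSize in src/core/db.ts may differ in details.
function resolvePoolSize(
  env: Record<string, string | undefined> = process.env,
): number {
  const raw = env.GBRAIN_POOL_SIZE;
  const n = raw ? Number.parseInt(raw, 10) : NaN;
  return Number.isFinite(n) && n >= 1 ? n : 10;
}
```

So `GBRAIN_POOL_SIZE=2 gbrain upgrade` keeps concurrent connections under the Supabase transaction pooler's client cap even when upgrade spawns subprocesses.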
Key commands added in v0.14.3 (fix wave):
- `gbrain doctor --index-audit` — opt-in Postgres-only check reporting zero-scan indexes from `pg_stat_user_indexes`. Informational only; never auto-drops.
- `gbrain doctor`'s schema_version check fails loudly when `version=0` — catches `bun install -g github:...` postinstall failures (#218) and routes users to `gbrain apply-migrations --yes`.
- `gbrain jobs submit` gains `--max-stalled`, `--backoff-type`, `--backoff-delay`, `--backoff-jitter`, `--timeout-ms`, `--idempotency-key` — exposing existing `MinionJobInput` fields as first-class CLI flags.
- `gbrain jobs smoke --sigkill-rescue` — opt-in regression smoke case simulating a killed worker; asserts the v0.14.3 schema default (max_stalled=5) actually rescues on first stall.
`bun test` runs all tests. After the v0.12.1 release: ~75 unit test files + 8 E2E test files (1412 unit pass, 119 E2E when DATABASE_URL is set). Unit tests run without a database; E2E tests skip gracefully when DATABASE_URL is not set.
Unit tests: test/markdown.test.ts (frontmatter parsing), test/chunkers/recursive.test.ts
(chunking), test/parity.test.ts (operations contract
parity), test/cli.test.ts (CLI structure), test/config.test.ts (config redaction),
test/files.test.ts (MIME/hash), test/import-file.test.ts (import pipeline),
test/upgrade.test.ts (schema migrations),
test/file-migration.test.ts (file migration), test/file-resolver.test.ts (file resolution),
test/import-resume.test.ts (import checkpoints), test/migrate.test.ts (migration; v8/v9 helper-btree-index SQL structural assertions + 1000-row wall-clock fixtures that guard the O(n²)→O(n log n) fix + v0.13.1 assertions on v12/v13 SQL shape, sqlFor + transaction:false runner semantics, and the max_stalled DEFAULT 1 regression guard),
test/setup-branching.test.ts (setup flow), test/slug-validation.test.ts (slug validation),
test/storage.test.ts (storage backends), test/supabase-admin.test.ts (Supabase admin),
test/yaml-lite.test.ts (YAML parsing), test/check-update.test.ts (version check + update CLI),
test/pglite-engine.test.ts (PGLite engine, all 40 BrainEngine methods including 11 cases for addLinksBatch / addTimelineEntriesBatch: empty batch, missing optionals, within-batch dedup via ON CONFLICT, missing-slug rows dropped by JOIN, half-existing batch, batch of 100 + v0.13.1 connect() error-wrap assertion (original error nested, #223 link in message, lock released)),
test/engine-factory.test.ts (engine factory + dynamic imports),
test/integrations.test.ts (recipe parsing, CLI routing, recipe validation),
test/publish.test.ts (content stripping, encryption, password generation, HTML output),
test/backlinks.test.ts (entity extraction, back-link detection, timeline entry generation),
test/lint.test.ts (LLM artifact detection, code fence stripping, frontmatter validation),
test/report.test.ts (report format, directory structure),
test/skills-conformance.test.ts (skill frontmatter + required sections validation),
test/resolver.test.ts (RESOLVER.md coverage, routing validation + v0.20.4 round-trip: every quoted RESOLVER.md trigger must match a frontmatter triggers: entry in the target skill, and every name="<word>" reference in any SKILL.md must resolve to a declared op in src/core/operations.ts or a Minions handler in PROTECTED_JOB_NAMES),
test/search.test.ts (RRF normalization, compiled truth boost, cosine similarity, dedup key),
test/dedup.test.ts (source-aware dedup, compiled truth guarantee, layer interactions),
test/intent.test.ts (query intent classification: entity/temporal/event/general),
test/eval.test.ts (retrieval metrics: precisionAtK, recallAtK, mrr, ndcgAtK, parseQrels),
test/check-resolvable.test.ts (resolver reachability, MECE overlap, gap detection, DRY checks + v0.14.1 proximity-based DRY detection + extractDelegationTargets coverage — 13 DRY cases),
test/dry-fix.test.ts (v0.14.1 auto-fix: three shape-aware expander pure-function tests, five guards — working-tree-dirty, no-git-backup, inside-code-fence, already-delegated within 40 lines, ambiguous-multi-match, block-is-callout — 28 cases),
test/doctor-fix.test.ts (v0.14.1 gbrain doctor --fix CLI integration: dry-run preview, apply path, JSON output shape — 3 cases),
test/backoff.test.ts (load-aware throttling, concurrency limits, active hours),
test/fail-improve.test.ts (deterministic/LLM cascade, JSONL logging, test generation, rotation),
test/transcription.test.ts (provider detection, format validation, API key errors),
test/enrichment-service.test.ts (entity slugification, extraction, tier escalation),
test/data-research.test.ts (recipe validation, MRR/ARR extraction, dedup, tracker parsing, HTML stripping),
test/minions.test.ts (Minions job queue v7: CRUD, state machine, backoff, stall detection, dependencies, worker lifecycle, lock management, claim mechanics, depth/child-cap, timeouts, cascade kill, idempotency, child_done inbox, attachments, removeOnComplete/Fail + v0.13.1 max_stalled clamp/default/plumbing coverage),
test/extract.test.ts (link extraction, timeline extraction, frontmatter parsing, directory type inference),
test/extract-db.test.ts (gbrain extract --source db: typed link inference, idempotency, --type filter, --dry-run JSON output),
test/extract-fs.test.ts (gbrain extract --source fs: first-run inserts + second-run reports zero, dry-run dedups candidates across files, second-run perf regression guard — the v0.12.1 N+1 dedup bug),
test/link-extraction.test.ts (canonical extractEntityRefs both formats, extractPageLinks dedup, inferLinkType heuristics, parseTimelineEntries date variants, isAutoLinkEnabled config),
test/graph-query.test.ts (direction in/out/both, type filter, indented tree output),
test/features.test.ts (feature scanning, brain_score calculation, CLI routing, persistence),
test/file-upload-security.test.ts (symlink traversal, cwd confinement, slug + filename allowlists, remote vs local trust),
test/query-sanitization.test.ts (prompt-injection stripping, output sanitization, structural boundary),
test/search-limit.test.ts (clampSearchLimit default/cap behavior across list_pages and get_ingest_log),
test/repair-jsonb.test.ts (v0.12.2 JSONB repair: TARGETS list, idempotency, engine-awareness),
test/migrations-v0_12_2.test.ts (v0.12.2 orchestrator phases: schema → repair → verify → record),
test/markdown.test.ts (splitBody sentinel precedence, horizontal-rule preservation, inferType wiki subtypes),
test/orphans.test.ts (v0.12.3 orphans command: detection, pseudo filtering, text/json/count outputs, MCP op),
test/postgres-engine.test.ts (v0.12.3 statement_timeout scoping: sql.begin + SET LOCAL shape, source-level grep guardrail against reintroduced bare SET statement_timeout),
test/sync.test.ts (sync logic + v0.12.3 regression guard asserting top-level engine.transaction is not called),
test/doctor.test.ts (doctor command + v0.12.3 assertions that jsonb_integrity scans the four v0.12.0 write sites and markdown_body_completeness is present),
test/utils.test.ts (shared SQL utilities + tryParseEmbedding null-return and single-warn semantics),
test/build-llms.test.ts (llms.txt/llms-full.txt generator: path resolution, idempotence, spec shape, regen-drift guard, content contract, AGENTS.md install-path mirror, size-budget enforcement — 7 cases),
test/check-resolvable-cli.test.ts (v0.19 CLI wrapper: exit codes, JSON envelope shape, AGENTS.md fallback chain),
test/regression-v0_16_4.test.ts (findRepoRoot regression guard — hermetic startDir parameterization),
test/filing-audit.test.ts (v0.19 Check 6: writes_pages / writes_to frontmatter, filing-rules JSON validation),
test/routing-eval.test.ts (v0.19 Check 5: fixture parsing, structural routing, ambiguous_with, Haiku tie-break layer),
test/skill-manifest.test.ts (v0.19 skill manifest parser: drift detection, managed-block markers),
test/skillify-scaffold.test.ts (v0.19 gbrain skillify scaffold stubs: SKILL.md, script, tests, routing-eval fixtures),
test/skillpack-install.test.ts (v0.19 gbrain skillpack install managed-block install / update / no-clobber semantics),
test/skillpack-sync-guard.test.ts (v0.19 sync-guard: bundled skills stay byte-identical to skills/ source).
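For orientation on what test/search.test.ts exercises: plain reciprocal rank fusion scores a document by summing 1/(k + rank) across result lists. The sketch below is the textbook formula, not necessarily gbrain's exact variant in `gbrain/search/hybrid`, which may weight or normalize differently.

```typescript
// Textbook reciprocal rank fusion: score(d) = sum over rankings of
// 1 / (k + rank_i(d)), with k=60 as the conventional damping constant.
// Sketch for orientation only; gbrain's hybrid search may differ.
function rrfFuse(rankings: string[][], k = 60): Map<string, number> {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((id, idx) => {
      const rank = idx + 1; // 1-based rank within this ranking
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank));
    });
  }
  // Return entries sorted by fused score, best first
  return new Map([...scores].sort((a, b) => b[1] - a[1]));
}
```

A document that appears in both the keyword and vector rankings outscores one that tops only a single list, which is the behavior hybrid search wants.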
E2E tests (test/e2e/): Run against real Postgres+pgvector. Require DATABASE_URL.
- `bun run test:e2e` runs Tier 1 (mechanical, all operations, no API keys). Includes 9 dedicated cases for the postgres-engine `addLinksBatch`/`addTimelineEntriesBatch` bind path — postgres-js's `unnest()` binding is structurally different from PGLite's and gets its own coverage.
- `test/e2e/search-quality.test.ts` runs search quality E2E against PGLite (no API keys, in-memory)
- `test/e2e/graph-quality.test.ts` runs the v0.10.3 knowledge graph pipeline (auto-link via put_page, reconciliation, traversePaths) against PGLite in-memory
- `test/e2e/postgres-jsonb.test.ts` — v0.12.2 regression test. Round-trips all 5 JSONB write sites (pages.frontmatter, raw_data.data, ingest_log.pages_updated, files.metadata, page_versions.frontmatter) against real Postgres and asserts `jsonb_typeof='object'` plus `->>'key'` returns the expected scalar. The test that should have caught the original double-encode bug.
- `test/e2e/jsonb-roundtrip.test.ts` — v0.12.3 companion regression against the 4 doctor-scanned JSONB sites. Assertion-level overlap with `postgres-jsonb.test.ts` is intentional defense-in-depth: if doctor's scan surface ever drifts from the actual write surface, one of these tests catches it.
- `test/e2e/upgrade.test.ts` runs check-update E2E against the real GitHub API (network required)
- `test/e2e/minions-shell-pglite.test.ts` (v0.20.4) exercises the PGLite `--follow` inline shell-job path (in-memory, no DATABASE_URL required) — the path the consolidated minion-orchestrator skill documents for dev use
- `test/e2e/openclaw-reference-compat.test.ts` (v0.19) — exercises `check-resolvable` + `skillpack install` against a minimal AGENTS.md workspace fixture (test/fixtures/openclaw-reference-minimal/), regression guard for the 107-skill OpenClaw deployment shape
- Tier 2 (`skills.test.ts`) requires OpenClaw + API keys, runs nightly in CI
- If `.env.testing` doesn't exist in this directory, check sibling worktrees for one: `find ../ -maxdepth 2 -name .env.testing -print -quit` and copy it here if found.
- Always run E2E tests when they exist. Do not skip them just because DATABASE_URL is not set. Start the test DB, run the tests, then tear it down.
ALWAYS source the user's shell profile before running tests:
`source ~/.zshrc 2>/dev/null || true`
This loads OPENAI_API_KEY and ANTHROPIC_API_KEY. Without these, Tier 2 tests skip silently. Do NOT skip Tier 2 tests just because they require API keys — load the keys and run them.
When asked to "run all E2E tests" or "run tests", that means ALL tiers:
- Tier 1: `bun run test:e2e` (mechanical, sync, upgrade — no API keys needed)
- Tier 2: `test/e2e/skills.test.ts` (requires OpenAI + Anthropic + openclaw CLI)
- Always spin up the test DB, source zshrc, run everything, tear down.
You are responsible for spinning up and tearing down the test Postgres container. Do not leave containers running after tests. Do not skip E2E tests.
- Check for `.env.testing` — if missing, copy from a sibling worktree. Read it to get the DATABASE_URL (it has the port number).
- Check if the port is free: `docker ps --filter "publish=PORT"` — if another container is on that port, pick a different port (try 5435, 5436, 5437) and start on that one instead.
- Start the test DB:
  ```bash
  docker run -d --name gbrain-test-pg \
    -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=postgres \
    -e POSTGRES_DB=gbrain_test \
    -p PORT:5432 pgvector/pgvector:pg16
  ```
- Wait for ready: `docker exec gbrain-test-pg pg_isready -U postgres`
- Run E2E tests: `DATABASE_URL=postgresql://postgres:postgres@localhost:PORT/gbrain_test bun run test:e2e`
- Tear down immediately after tests finish (pass or fail): `docker stop gbrain-test-pg && docker rm gbrain-test-pg`
Never leave gbrain-test-pg running. If you find a stale one from a previous run,
stop and remove it before starting a new one.
Read the skill files in skills/ before doing brain operations. GBrain ships 29 skills
organized by skills/RESOLVER.md (AGENTS.md is also accepted as of v0.19):
Original 8 (conformance-migrated): ingest (thin router), query, maintain, enrich, briefing, migrate, setup, publish.
Brain skills (ported from an upstream agent fork): signal-detector, brain-ops, idea-ingest, media-ingest, meeting-ingestion, citation-fixer, repo-architecture, skill-creator, daily-task-manager.
Operational + identity: daily-task-prep, cross-modal-review, cron-scheduler, reports,
testing, soul-audit, webhook-transforms, data-research, minion-orchestrator. As of
v0.20.4, minion-orchestrator is the single unified skill for both lanes of background
work (shell jobs via gbrain jobs submit shell, LLM subagents via gbrain agent run) ...
the prior gbrain-jobs skill was merged in, Preconditions are shared, and trigger
routing is narrowed to what the skill actually covers.
Skillify loop (v0.19): skillify (the markdown orchestration), skillpack-check (agent-readable health report).
Operational health (v0.19.1): smoke-test (8 post-restart health checks with auto-fix
for Bun, CLI, DB, worker, Zod CJS, gateway, API key, brain repo; user-extensible via
~/.gbrain/smoke-tests.d/*.sh).
Conventions: skills/conventions/ has cross-cutting rules (quality, brain-first,
model-routing, test-before-bulk, cross-modal). skills/_brain-filing-rules.md and
skills/_output-rules.md are shared references.
All bulk commands (doctor, embed, import, export, sync, extract, migrate,
repair-jsonb, orphans, check-backlinks, lint, integrity auto, eval, files
sync, and apply-migrations) stream progress through the shared reporter
at src/core/progress.ts. Agents get heartbeats within 1 second of every
iteration regardless of how slow the underlying work is.
Rules:
- Progress always writes to stderr. Stdout stays clean for data output (`--json` payloads, final summaries, JSON action events from `extract`).
- Non-TTY default: plain one-line-per-event human text. JSON requires the explicit `--progress-json` flag.
- Global flags (`--quiet`, `--progress-json`, `--progress-interval=<ms>`) are parsed by `src/core/cli-options.ts` BEFORE command dispatch.
- Phase names are machine-stable `snake_case.dot.path` (e.g. `doctor.db_checks`, `sync.imports`). Documented in `docs/progress-events.md`; additive changes only.
- `scripts/check-progress-to-stdout.sh` is a CI guard that fails the build if any new code writes `\r` progress to stdout. Wired into `bun run test`.
- Minion handlers pass `job.updateProgress` as the `onProgress` callback to core functions (DB-backed primary progress channel); stderr from `jobs work` stays coarse for daemon liveness only.
When wiring a new bulk command: `import { createProgress } from '../core/progress.ts'` and `import { getCliOptions, cliOptsToProgressOptions } from '../core/cli-options.ts'`. Create a reporter with `createProgress(cliOptsToProgressOptions(getCliOptions()))`, call `start(phase, total?)` before the loop, `tick()` inside it, and `finish()` after. For single long-running queries, use `startHeartbeat(reporter, note)` with a try/finally to guarantee cleanup. Never call `process.stdout.write('\r...')` in bulk paths; the CI guard will fail the build.
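The loop shape can be sketched with a stand-in reporter. The real imports are the ones named above (`src/core/progress.ts`, `src/core/cli-options.ts`); the mock below only mimics the start/tick/finish contract so the shape is visible and testable in isolation.

```typescript
// Stand-in illustrating the start/tick/finish reporter contract.
// The real createProgress writes events to stderr; this mock records
// them so the wiring pattern can be shown without gbrain's internals.
interface Reporter {
  start(phase: string, total?: number): void;
  tick(): void;
  finish(): void;
}

function createMockProgress(events: string[]): Reporter {
  return {
    start: (phase, total) =>
      events.push(`start ${phase}${total != null ? ` ${total}` : ""}`),
    tick: () => events.push("tick"),
    finish: () => events.push("finish"),
  };
}

// The bulk-command shape: start before the loop, tick each iteration,
// finish after, so agents get a heartbeat for every item processed.
function runBulk(items: string[], reporter: Reporter): void {
  reporter.start("example.bulk_phase", items.length);
  for (const _item of items) {
    reporter.tick();
  }
  reporter.finish();
}
```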
`bun build --compile --outfile bin/gbrain src/cli.ts`
Before shipping (/ship) or reviewing (/review), always run the full test suite:
- `bun test` — unit tests (no database required)
- Follow the "E2E test DB lifecycle" steps above to spin up the test DB, run `bun run test:e2e`, then tear it down.
Both must pass. Do not ship with failing E2E tests. Do not skip E2E tests.
After EVERY /ship, you MUST run /document-release. This is NOT optional. Do NOT skip it. Do NOT say "docs look fine" without running it. The skill reads every .md file in the project, cross-references the diff, and updates anything that drifted.
If /ship's Step 8.5 triggers document-release automatically, that counts. But if it gets skipped for ANY reason (timeout, error, oversight), you MUST run it manually before considering the ship complete.
Files that MUST be checked on every ship:
- README.md — does it reflect new features, commands, or setup steps?
- CLAUDE.md — does it reflect new files, test files, or architecture changes?
- CHANGELOG.md — does it cover every commit?
- TODOS.md — are completed items marked done?
- docs/ — do any guides need updating?
A ship without updated docs is an incomplete ship. Period.
VERSION and CHANGELOG describe what THIS branch adds vs master, not how we got here. Every feature branch that ships gets its own version bump and CHANGELOG entry. The entry is product release notes for users; it is not a log of internal decisions, review rounds, or codex findings.
Write the CHANGELOG entry at /ship time, not during development. Mid-branch
iterations, review rounds (CEO/Eng/Codex/DX), and implementation detours belong
in the plan file at ~/.claude/plans/, not in the CHANGELOG. One unified entry
per branch, covering what the branch added vs the base branch.
Never edit a CHANGELOG entry that already landed on master. If master has v0.18.2 and your branch adds features, bump to the next version (v0.19.0, not editing master's v0.18.2). When merging master into your branch, master may bring new CHANGELOG entries above yours — push your entry above master's latest and verify:
- Does CHANGELOG have your branch's own entry separate from master's entries?
- Is VERSION higher than master's VERSION?
- Is your entry the topmost `## [X.Y.Z]` entry?
- Does `grep "^## \[" CHANGELOG.md` show a contiguous version sequence?
If any answer is no, fix it before continuing.
CHANGELOG is for users, not contributors. Write like product release notes:
- Lead with what the user can now do that they couldn't before. Sell the capability.
- Plain language, not implementation details. "You can now..." not "Refactored the..."
- Never mention internal artifacts: plan file IDs, decision tags (D-CX-#, F-ENG-#), review rounds, codex findings, subcontractor credits. These are invisible to users.
- Put contributor-facing changes in a separate `### For contributors` section at the bottom.
- Every entry should make someone think "oh nice, I want to try that."
What to omit:
- "Codex caught X that the CEO review missed" — private process detail.
- "D-CX-3 split errors/warnings" — tag is meaningless to users; name the feature instead.
- "Fix-wave PR #N supersedes #M" — supersede chains belong in PR bodies, not release notes.
- "215 new cases, 3 decisions applied, 7 reviews cleared" — these are planning-mode metrics.
What to keep:
- The user-facing change: what commands exist now, what flag was added, what behavior fixed.
- Numbers that mean something to the user: TTHW, commands that timed out before, detection counts.
- Upgrade instructions:
gbrain upgrade+ any manual step if needed. - Credit to external contributors when a community PR was incorporated.
Every version entry in CHANGELOG.md MUST start with a release-summary section in
the GStack/Garry voice — one viewport's worth of prose + tables that lands like a
verdict, not marketing. The itemized changelog (subsections, bullets, files) goes
BELOW that summary, separated by a ### Itemized changes header.
The release-summary section gets read by humans, by the auto-update agent, and by anyone deciding whether to upgrade. The itemized list is for agents that need to know exactly what changed.
Use this structure for the top of every ## [X.Y.Z] entry:
- Two-line bold headline (10-14 words total) ... should land like a verdict, not marketing. Sound like someone who shipped today and cares whether it works.
- Lead paragraph (3-5 sentences) ... what shipped, what changed for the user. Specific, concrete, no AI vocabulary, no em dashes, no hype.
- A "The X numbers that matter" section with:
- One short setup paragraph naming the source of the numbers (real production deployment OR a reproducible benchmark ... name the file/command to run).
- A table of 3-6 key metrics with BEFORE / AFTER / Δ columns.
- A second optional table for per-category breakdown if relevant.
- 1-2 sentences interpreting the most striking number in concrete user terms.
- A "What this means for [audience]" closing paragraph (2-4 sentences) tying the metrics to a real workflow shift. End with what to do.
Voice rules:
- No em dashes (use commas, periods, "...").
- No AI vocabulary (delve, robust, comprehensive, nuanced, fundamental, etc.) or banned phrases ("here's the kicker", "the bottom line", etc.).
- Real numbers, real file names, real commands. Not "fast" but "~30s on 30K pages."
- Short paragraphs, mix one-sentence punches with 2-3 sentence runs.
- Connect to user outcomes: "the agent does ~3x less reading" beats "improved precision."
- Be direct about quality. "Well-designed" or "this is a mess." No dancing.
Source material to pull from:
- CHANGELOG.md previous entry for prior context
- Latest `gbrain-evals/docs/benchmarks/[latest].md` for headline numbers (sibling repo)
- Recent commits (`git log <prev-version>..HEAD --oneline`) for what shipped
- Don't make up numbers. If a metric isn't in a benchmark or production data, don't include it. Say "no measurement yet" if asked.
Target length: ~250-350 words for the summary. Should render as one viewport.
After the release-summary and BEFORE ### Itemized changes, every ## [X.Y.Z]
entry MUST include a human-readable self-repair block under the heading
## To take advantage of v[version].
Why: gbrain upgrade runs gbrain post-upgrade which runs gbrain apply-migrations.
This chain has a known weak link — upgrade.ts catches post-upgrade failures as
best-effort (so the binary still works). When that chain silently fails, users end
up with half-upgraded brains. The self-repair block gives them a paste-ready
recovery path; the v0.13+ ~/.gbrain/upgrade-errors.jsonl trail + gbrain doctor
integration close the loop.
Template (adapt the verify commands per release):
## To take advantage of v[version]
`gbrain upgrade` should do this automatically. If it didn't, or if `gbrain doctor`
warns about a partial migration:
1. **Run the orchestrator manually:**
```bash
gbrain apply-migrations --yes
```
2. **Your agent reads `skills/migrations/v[version].md`** the next time you interact with it. [One sentence on whether headless agents need manual action, or whether the orchestrator already handled the mechanical side.]
3. **Verify the outcome:**
```bash
[release-specific verify commands, e.g. gbrain graph ... --depth 2]
gbrain stats
```
4. **If any step fails or the numbers look wrong, please file an issue:** https://github.com/garrytan/gbrain/issues with:
   - output of `gbrain doctor`
   - contents of `~/.gbrain/upgrade-errors.jsonl` if it exists
   - which step broke

This feedback loop is how the gbrain maintainers find fragile upgrade paths. Thank you.
**Skip this block** for patches that are pure bug fixes with zero user-facing action
(rare). If the release has a schema migration, data backfill, or new feature the
user needs to verify, the block is required.
The v0.13.0 entry in CHANGELOG.md is the canonical example.
### Itemized changes (the existing rules)
Below the release summary, write `### Itemized changes` and continue with the
detailed subsections (Knowledge Graph Layer, Schema migrations, Security hardening,
Tests, etc.). Same rules as before:
- Lead with what the user can now DO that they couldn't before
- Frame as benefits and capabilities, not files changed or code written
- Make the user think "hell yeah, I want that"
- Bad: "Added GBRAIN_VERIFY.md installation verification runbook"
- Good: "Your agent now verifies the entire GBrain installation end-to-end, catching
silent sync failures and stale embeddings before they bite you"
- Bad: "Setup skill Phase H and Phase I added"
- Good: "New installs automatically set up live sync so your brain never falls behind"
- **Always credit community contributions.** When a CHANGELOG entry includes work from
a community PR, name the contributor with `Contributed by @username`. Contributors
did real work. Thank them publicly every time, no exceptions.
### Reference: v0.12.0 entry as canonical example
The v0.12.0 entry in CHANGELOG.md is the canonical example of the format. Match its
structure for every future version: bold headline, lead paragraph, "numbers that
matter" with BrainBench-style before/after table, "what this means" closer, then
`### Itemized changes` with the detailed sections below.
## Version migrations
Create a migration file at `skills/migrations/v[version].md` when a release
includes changes that existing users need to act on. The auto-update agent
reads these files post-upgrade (Section 17, Step 4) and executes them.
**You need a migration file when:**
- New setup step that existing installs don't have (e.g., v0.5.0 added live sync,
existing users need to set it up, not just new installs)
- New SKILLPACK section with a MUST ADD setup requirement
- Schema changes that require `gbrain init` or manual SQL
- Changed defaults that affect existing behavior
- Deprecated commands or flags that need replacement
- New verification steps that should run on existing installs
- New cron jobs or background processes that should be registered
**You do NOT need a migration file when:**
- Bug fixes with no behavior changes
- Documentation-only improvements (the agent re-reads docs automatically)
- New optional features that don't affect existing setups
- Performance improvements that are transparent
**The key test:** if an existing user upgrades and does nothing else, will their
brain work worse than before? If yes, migration file. If no, skip it.
Write migration files as agent instructions, not technical notes. Tell the agent
what to do, step by step, with exact commands. See `skills/migrations/v0.5.0.md`
for the pattern.
## Migration is canonical, not advisory
GBrain's job is to deliver a canonical, working setup to every user on upgrade.
Anything that looks like a "host-repo change" — AGENTS.md, cron manifests,
launchctl units, config files outside `~/.gbrain/` — is a GBrain migration
step, not a nudge we leave for the host-repo maintainer. Migrations edit host
files (with backups) to make the canonical setup real. Exceptions: changes
that require human judgment (content edits, renames that break semantics,
host-specific handler registration where shell-exec would be an RCE surface).
Everything mechanical ships in the migration.
**Test:** if shipping a feature requires a sentence that starts with "in
your AGENTS.md, add…" or "in your cron/jobs.json, rewrite…", the migration
orchestrator should be doing that edit, not the user.
**The exception is host-specific code.** For custom Minion handlers
(host-specific integrations like inbox sweeps or third-party API scanners), shipping them as a
data file the worker would exec is an RCE surface. Those get registered in
the host's own repo via the plugin contract (`docs/guides/plugin-handlers.md`);
the migration orchestrator emits a structured TODO to
`~/.gbrain/migrations/pending-host-work.jsonl` + the host agent walks the
TODOs using `skills/migrations/v0.11.0.md` — stays host-agnostic, still
canonical.
## Privacy rule: scrub real names from public docs
**Never reference real people, companies, funds, or private agent names in any
public-facing artifact.** Public artifacts include: `CHANGELOG.md`, `README.md`,
`docs/`, `skills/`, PR titles + bodies, commit messages, and comments in checked-in
code. Query examples, benchmark stories, and migration guides MUST use generic
placeholders.
Why: gbrain runs a personal knowledge brain containing notes on real people and
real companies (YC founders, portfolio companies, funds, investors, meeting
attendees). When a doc copies a query like `gbrain graph diana-hu --depth 2` or
names a specific agent fork like `Wintermute`, that real name gets indexed by
search engines, surfaced in cross-references, and distributed with every release.
**Name mapping** to use in examples:
- Agent forks → `your agent fork`, `a downstream agent`, or `agent-fork`
- Example person → `alice-example`, `charlie-example`, or `a-founder`
- Example company → `acme-example`, `widget-co`, or `a-company`
- Example fund → `fund-a`, `fund-b`, `fund-c`
- Example deal → `acme-seed`, `widget-series-a`
- Example meeting → `meetings/2026-04-03` (generic date is fine)
- Example user → `you` or `the user`, never a proper name
**Specific rule: never say `Wintermute` in any CHANGELOG, README, doc, PR, or
commit message.** When the temptation is to illustrate with the real fork name:
- Reader-facing copy → `your OpenClaw` (covers Wintermute, Hermes, AlphaClaw,
and any other downstream OpenClaw deployment in one term the reader already
recognizes).
- First-person / origin-story copy → `Garry's OpenClaw` (honest that this is
the production deployment driving the feature, without exposing the private
agent's name).
`Wintermute` may appear in private artifacts (scratch plans under
`~/.gstack/projects/…`, memory files, conversation transcripts, CEO-review
plans) — those aren't distributed. Anything checked into this repo or shipped
in a release must use the OpenClaw phrasing above. Sweeping a stale reference
is a small clean-up PR, not a debate.
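A sweep for stale references can be automated. This is a minimal sketch: the banned-name list and the set of public artifacts scanned are examples to extend, not a canonical pre-release check.

```shell
# Sweep public artifacts for private names before a release.
# Banned list and target paths are illustrative; extend per repo.
banned='Wintermute'
targets='CHANGELOG.md README.md docs skills'
if grep -rn "$banned" $targets 2>/dev/null; then
  echo "FAIL: private name found in a public artifact" >&2
  status=dirty
else
  status=clean
fi
echo "$status"
```

Run from the repo root; a nonzero grep (no match) means the tree is clean and safe to ship.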
**When in doubt, ask yourself:** "Would this query reveal private information
about the user's contacts, investments, or portfolio if it were read by a
stranger?" If yes, replace with generic placeholders.
**Illustrative API examples with household-brand companies** (Stripe, Brex, OpenAI,
GitHub, etc.) are fine — they're public entities, not contacts in anyone's brain.
Do not confuse illustrative API examples with queries that reveal real
relationships.
## Responsible-disclosure rule: don't broadcast attack surface in release notes
**When a release fixes a security gap or a user-impacting bug, describe the fix
functionally. Do not enumerate the attack surface, quantify the exposure window,
or highlight the most sensitive records by name in public-facing artifacts.**
Public-facing artifacts include: `CHANGELOG.md`, `README.md`, `docs/`, PR titles
and bodies, commit messages, GitHub issue titles and comments, release pages,
tweets, blog posts.
**Don't write:**
- "10 tables were publicly readable by the anon key for months, including X, Y, Z"
- "X and Y are the most sensitive ones"
- "N tables exposed. Fix: enable RLS on these specific tables: ..."
**Do write:**
- "Security hardening pass. Fresh installs secure by default. Existing brains
brought to the same bar automatically on upgrade."
- "If `gbrain doctor` still flags anything after upgrade, the message names each
table and gives the exact fix."
Why: anyone reading the release page before they've upgraded now has a directed
probe list for unpatched installs. The source code ships the specifics anyway
(`src/schema.sql`, `src/core/migrate.ts`, test fixtures) — reverse engineers can
get them. But the release page is a broadcast channel. Don't hand attackers a
curated list with a banner.
**The test:** if a reader with no prior context could read the release note and
walk away knowing "gbrain at version X has table Y readable by anon key until
they patch," the note is too specific. Rewrite until that's no longer possible.
**What IS fine in public artifacts:**
- The mechanism of the fix ("the check now scans every public table instead of
a hardcoded allowlist").
- User-facing operator ergonomics (the escape-hatch SQL template, the upgrade
commands, the breaking-change flag).
- Credit to contributors.
- Generic framing of severity ("security posture tightening pass") without
quantification.
**What stays in private artifacts (plan files, private memories, internal docs):**
- Specific table names, record counts, exposure duration.
- Which records stand out as highest-risk.
- Detailed before/after tables in the "numbers that matter" format.
If the CEO/Eng review of a plan produces a detailed exposure table, keep it in
the plan file under `~/.claude/plans/` or `~/.gstack/projects/`. Don't copy it
into the CHANGELOG or PR body.
Applies retroactively: if you see a prior CHANGELOG entry naming attack-surface
specifics, scrub it as a small cleanup commit, the same way a stale Wintermute
reference gets swept.
## Schema state tracking
`~/.gbrain/update-state.json` tracks which recommended schema directories the user
adopted, declined, or added as custom. The auto-update agent (SKILLPACK Section 17)
reads this during upgrades to suggest new schema additions without re-suggesting
things the user already declined. The setup skill writes the initial state during
Phase C/E. Never modify a user's custom directories or re-suggest declined ones.
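A plausible shape for `~/.gbrain/update-state.json` is sketched below. The keys are illustrative assumptions; the real schema is whatever the setup skill writes during Phase C/E.

```json
{
  "adopted": ["people", "meetings", "deals"],
  "declined": ["journal"],
  "custom": ["research-notes"],
  "lastSuggestedAt": "2026-04-03T00:00:00Z"
}
```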
## GitHub Actions SHA maintenance
All GitHub Actions in `.github/workflows/` are pinned to commit SHAs. Before shipping
(`/ship`) or reviewing (`/review`), check for stale pins and update them:
```bash
for action in actions/checkout oven-sh/setup-bun actions/upload-artifact actions/download-artifact softprops/action-gh-release gitleaks/gitleaks-action; do
  tag=$(grep -r "$action@" .github/workflows/ | head -1 | grep -o '#.*' | tr -d '# ')
  [ -n "$tag" ] && echo "$action@$tag: $(gh api repos/$action/git/ref/tags/$tag --jq .object.sha 2>/dev/null)"
done
```

If any SHA differs from what's in the workflow files, update the pin and the version comment.
## PR titles and bodies

Pull request titles and bodies must describe everything in the PR diff against the
base branch, not just the most recent commit you made. When you open or update a
PR, walk the full commit range with `git log --oneline <base>..<head>` and write the
body to cover all of it. Group by feature area (schema, code, tests, docs), not
chronologically by commit.

This matters because reviewers read the PR body to understand what's shipping. If
the body only covers your last commit, they miss everything else and can't review
properly. A 7-commit PR with a body that describes commit 7 is worse than no body
at all: it actively misleads.

When in doubt, run `gh pr view <N> --json commits --jq '[.commits[].messageHeadline]'`
to see what's actually in the PR before writing the body.
## External PRs: the fix-wave workflow

Never merge external PRs directly into master. Instead, use the "fix wave" workflow:

- Categorize: group PRs by theme (bug fixes, features, infra, docs).
- Deduplicate: if two PRs fix the same thing, pick the one that changes fewer
  lines. Close the other with a note pointing to the winner.
- Collector branch: create a feature branch (e.g. `garrytan/fix-wave-N`) and
  cherry-pick or manually re-implement the best fixes from each PR. Do NOT merge
  PR branches directly; read the diff, understand the fix, and write it yourself
  if needed.
- Test the wave: verify with `bun test && bun run test:e2e` (full E2E lifecycle).
  Every fix in the wave must have test coverage.
- Close with context: every closed PR gets a comment explaining why it was closed
  and what (if anything) supersedes it. Contributors did real work; respect that
  with clear communication and thank them.
- Ship as one PR: single PR to master with all attributions preserved via
  `Co-Authored-By:` trailers. Include a summary of what merged and what closed.
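The collector-branch step can be sketched with a throwaway repo. Everything here is hypothetical (branch names, commit messages, the pretend community PR); it only demonstrates the mechanic of cherry-picking a fix instead of merging a PR branch.

```shell
# Demonstrate the collector-branch flow in a temp repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b master
git config user.email fix-wave@example.com
git config user.name "Fix Wave"
echo base > file.txt && git add file.txt && git commit -qm "base"

# Pretend this branch is a community PR carrying one good fix.
git checkout -qb community-pr
echo fix >> file.txt && git commit -qam "fix: handle empty input"
fix_sha=$(git rev-parse HEAD)

# Collector branch: cherry-pick the fix rather than merging the PR branch.
git checkout -q master
git checkout -qb garrytan/fix-wave-1
git cherry-pick "$fix_sha" >/dev/null

# The wave now contains exactly the fixes that were picked.
git log --oneline master..HEAD
```

In practice you would re-read each picked diff (or re-implement it by hand) before running the wave's test suite.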
Community PR guardrails:
- Always AskUserQuestion before accepting commits that touch voice, tone, or promotional material (README intro, CHANGELOG voice, skill templates).
- Never auto-merge PRs that remove YC references or "neutralize" the founder perspective.
- Preserve contributor attribution in commit messages.
When the user's request matches an available skill, ALWAYS invoke it using the Skill tool as your FIRST action. Do NOT answer directly, do NOT use other tools first. The skill has specialized workflows that produce better results than ad-hoc answers.
NEVER hand-roll ship operations. Do not manually run `git commit` + `git push` +
`gh pr create` when /ship is available. /ship handles VERSION bump, CHANGELOG,
document-release, pre-landing review, test coverage audit, and adversarial review.
Manually creating a PR skips all of these. If the user says "commit and ship",
"push and ship", "bisect and ship", or any combination that ends with shipping,
invoke /ship and let it handle everything including the commits. If the branch
name contains a version (e.g. `v0.5-live-sync`), /ship should use that version
for the bump.
Key routing rules:
- Product ideas, "is this worth building", brainstorming → invoke office-hours
- Bugs, errors, "why is this broken", 500 errors → invoke investigate
- Ship, deploy, push, create PR, "commit and ship", "push and ship" → invoke ship
- QA, test the site, find bugs → invoke qa
- Code review, check my diff → invoke review
- Update docs after shipping → invoke document-release
- Weekly retro → invoke retro
- Design system, brand → invoke design-consultation
- Visual audit, design polish → invoke design-review
- Architecture review → invoke plan-eng-review
- Save progress, checkpoint, resume → invoke checkpoint
- Code quality, health check → invoke health