feat: add ops harbor dashboard and apps workspace #34
> **Note**
> Reviews paused: this branch appears to be under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings.
📝 Walkthrough

A new "Ops Harbor" system is introduced, consisting of a GitHub App control plane (…).

Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
Actionable comments posted: 15
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
apps/sdd-webapp/src/lib/status-store.test.ts (1)
19-31: ⚠️ Potential issue | 🟡 Minor — Restore `SDD_WEBAPP_CONFIG_DIR` after tests to avoid env bleed.

Line 19 mutates the global process env, but teardown does not restore the previous value. This can make other tests order-dependent within the same worker.
Suggested fix
```diff
 import { afterAll, afterEach, describe, expect, test } from "vitest";
 import { randomUUID } from "node:crypto";
 import { mkdir, readFile, writeFile } from "node:fs/promises";
 import { join } from "node:path";

 const CONFIG_DIR = join(process.cwd(), ".tmp", "vitest", "sdd-webapp-status", randomUUID());
 const STATUS_DIR = join(CONFIG_DIR, "status");
 const TEST_PROJECT_ID = "status-store-test";
+const PREV_CONFIG_DIR = process.env.SDD_WEBAPP_CONFIG_DIR;
 process.env.SDD_WEBAPP_CONFIG_DIR = CONFIG_DIR;

 afterAll(async () => {
   await deleteStatusStore(TEST_PROJECT_ID).catch(() => {});
+  if (PREV_CONFIG_DIR === undefined) {
+    delete process.env.SDD_WEBAPP_CONFIG_DIR;
+  } else {
+    process.env.SDD_WEBAPP_CONFIG_DIR = PREV_CONFIG_DIR;
+  }
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/sdd-webapp/src/lib/status-store.test.ts` around lines 19 - 31, You set process.env.SDD_WEBAPP_CONFIG_DIR = CONFIG_DIR in the test setup but never restore the original value, causing env bleed; save the original value (e.g. const ORIGINAL_SDD_WEBAPP_CONFIG_DIR = process.env.SDD_WEBAPP_CONFIG_DIR) before mutating, then restore it in teardown hooks (afterEach and/or afterAll) alongside the existing deleteStatusStore(TEST_PROJECT_ID) cleanup so the environment is returned to its prior state; reference the mutated symbol process.env.SDD_WEBAPP_CONFIG_DIR and the existing teardown helpers deleteStatusStore and TEST_PROJECT_ID when making the change.

AGENTS.md (1)
44-49: ⚠️ Potential issue | 🟡 Minor — Fill missing descriptions in generated package/app lists.

Empty description cells reduce the usability of this index. Add `description` fields in the corresponding `package.json` files so generated docs stay informative.

Also applies to: 61-63
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@AGENTS.md` around lines 44 - 49, The AGENTS.md table has empty description cells for the packages **@r_masseater/doc-engine**, **@r_masseater/ops-harbor-core**, and **@r_masseater/repository-lint** (and similarly for the entries referenced at lines 61-63); open each package's package.json and add a meaningful non-empty "description" field (e.g., a one-line summary of the package's purpose) inside the JSON for those package names, save, and re-run the documentation/generator that produces AGENTS.md so the table is regenerated with the new descriptions; ensure you update the package.json for `@r_masseater/doc-engine`, `@r_masseater/ops-harbor-core`, and `@r_masseater/repository-lint` (and any other packages corresponding to lines 61-63) rather than editing AGENTS.md directly.
🟡 Minor comments (6)
apps/sdd-webapp/src/server.test.ts-16-22 (1)
16-22: ⚠️ Potential issue | 🟡 Minor — Clean up the per-run config directory after tests.

This new test config path is unique per run, but it isn’t removed in `afterAll`, so `.tmp/vitest/sdd-webapp-server/*` will accumulate.

🧹 Proposed fix
```diff
-process.env.SDD_WEBAPP_CONFIG_DIR = join(
+const testConfigDir = join(
   process.cwd(),
   ".tmp",
   "vitest",
   "sdd-webapp-server",
   randomUUID(),
 );
+process.env.SDD_WEBAPP_CONFIG_DIR = testConfigDir;

 describe("Server API - Worktree Detection", () => {
   afterAll(async () => {
@@
     for (const dir of tempDirs) {
       await rm(dir, { recursive: true, force: true }).catch(() => {});
     }
+    await rm(testConfigDir, { recursive: true, force: true }).catch(() => {});
+    delete process.env.SDD_WEBAPP_CONFIG_DIR;
   });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/sdd-webapp/src/server.test.ts` around lines 16 - 22, Add an afterAll cleanup that removes the per-run config directory set by process.env.SDD_WEBAPP_CONFIG_DIR in server.test.ts: import fs (or use fs.promises) and in an afterAll hook check that process.env.SDD_WEBAPP_CONFIG_DIR exists and is under join(process.cwd(), ".tmp", "vitest", "sdd-webapp-server") to avoid accidental deletes, then remove it with fs.rmSync or fs.promises.rm using recursive/force options (or equivalent) so each test run’s randomUUID() directory is deleted.

apps/sdd-webapp/src/lib/status-parser.test.ts-11-13 (1)
11-13: ⚠️ Potential issue | 🟡 Minor — Module-level env var assignment may cause test pollution in parallel runs.

Setting `process.env.SDD_WEBAPP_CONFIG_DIR` at module scope (outside `beforeAll`) means it executes during module initialization. In parallel test execution, if another test file also modifies this env var, the two could interfere with each other. Consider moving the env var assignment inside `beforeAll` to ensure proper test isolation:

🛡️ Proposed fix
```diff
 const TEST_PROJECT_ID = "status-parser-test";
 const tempBase = join(tmpdir(), "sdd-webapp-status-parser-test", randomUUID());
-
-process.env.SDD_WEBAPP_CONFIG_DIR = join(tempBase, "config");

 beforeAll(async () => {
+  process.env.SDD_WEBAPP_CONFIG_DIR = join(tempBase, "config");
   await rm(tempBase, { recursive: true, force: true }).catch(() => {});
   await mkdir(tempBase, { recursive: true });
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/sdd-webapp/src/lib/status-parser.test.ts` around lines 11 - 13, The module-level assignment to process.env.SDD_WEBAPP_CONFIG_DIR using tempBase should be moved inside the test suite lifecycle (e.g., beforeAll) to avoid cross-test pollution; update the file so tempBase is still created at module scope or inside beforeAll, but ensure the line setting process.env.SDD_WEBAPP_CONFIG_DIR runs in beforeAll (and if needed reset or delete it in afterAll) and reference the existing tempBase variable and the process.env.SDD_WEBAPP_CONFIG_DIR assignment so tests run with isolated env per suite.

apps/sdd-webapp/src/lib/config.ts-11-13 (1)
11-13: ⚠️ Potential issue | 🟡 Minor — Harden empty override handling for `SDD_WEBAPP_CONFIG_DIR`.

When `SDD_WEBAPP_CONFIG_DIR=""`, Line 12 resolves to the empty string and writes `projects.json` in the current working directory instead of the default config path.

Based on learnings: Store project configuration in `~/.config/sdd-webapp/projects.json` using a dedicated config module.

Proposed fix
```diff
 function configDirPath(): string {
-  return process.env.SDD_WEBAPP_CONFIG_DIR ?? join(homedir(), ".config", "sdd-webapp");
+  const override = process.env.SDD_WEBAPP_CONFIG_DIR?.trim();
+  return override && override.length > 0
+    ? override
+    : join(homedir(), ".config", "sdd-webapp");
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/sdd-webapp/src/lib/config.ts` around lines 11 - 13, configDirPath() currently returns process.env.SDD_WEBAPP_CONFIG_DIR directly which allows an empty string to override the default; change it to treat empty/whitespace-only values as unset by reading process.env.SDD_WEBAPP_CONFIG_DIR into a local (e.g. env) and returning env (trimmed) only if it is non-empty, otherwise return join(homedir(), ".config", "sdd-webapp") so projects.json is written to the default path when SDD_WEBAPP_CONFIG_DIR is blank.

packages/ops-harbor-core/src/filters.ts-27-27 (1)
27-27: ⚠️ Potential issue | 🟡 Minor — Normalize `limit` to a positive integer.

`slice(0, options.limit ?? 50)` allows negative/zero/non-integer limits with surprising behavior. For example, negative values use JavaScript's negative index semantics (e.g., `slice(0, -5)` returns all but the last 5 elements), and zero returns an empty array. Since the function accepts `limit?: number` without constraints, these edge cases should be normalized.

Suggested patch
```diff
-  return filtered.slice(0, options.limit ?? 50);
+  const limit =
+    Number.isInteger(options.limit) && (options.limit as number) > 0 ? (options.limit as number) : 50;
+  return filtered.slice(0, limit);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/ops-harbor-core/src/filters.ts` at line 27, The current return uses slice(0, options.limit ?? 50) which permits negative, zero, or non-integer limits; replace it by normalizing options.limit into a positive integer defaulting to 50 (e.g., compute const limitValue = Number.isFinite(options.limit) ? Math.max(1, Math.floor(options.limit)) : 50) and then use filtered.slice(0, limitValue); this ensures negative/zero/non-integer inputs are clamped to a sensible positive integer before slicing.

apps/ops-harbor/src/lib/control-plane-client.ts-9-16 (1)
9-16: ⚠️ Potential issue | 🟡 Minor — Missing timeout on fetch requests.

The `fetchJson` helper has no timeout. If the control plane is unresponsive, HTTP calls will hang indefinitely, which could block the sync loop or automation worker. Consider adding `AbortSignal.timeout()`.

🛡️ Suggested fix
```diff
-async function fetchJson<T>(input: string, init?: RequestInit): Promise<T> {
-  const response = await fetch(input, init);
+async function fetchJson<T>(input: string, init?: RequestInit, timeoutMs = 30_000): Promise<T> {
+  const response = await fetch(input, {
+    ...init,
+    signal: AbortSignal.timeout(timeoutMs),
+  });
   if (!response.ok) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/ops-harbor/src/lib/control-plane-client.ts` around lines 9 - 16, The fetch helper fetchJson lacks a request timeout; update the async function fetchJson<T>(input: string, init?: RequestInit) to create or derive an AbortSignal with a timeout (e.g., using AbortSignal.timeout or an AbortController) and pass that signal to fetch so requests fail fast on unresponsive control plane; ensure you merge existing init.signal (prefer the caller signal but race with the timeout signal), handle the abort by throwing a clear Error (or rethrow AbortError) and still await/cleanup any response body if needed before throwing.

apps/ops-harbor/src/lib/automation.ts-246-259 (1)
246-259: ⚠️ Potential issue | 🟡 Minor — Silent failure if `completeJob` fails in catch block.

If `client.completeJob` throws during error handling (e.g., network failure), the error is not logged or propagated. The local DB record would be written, but the control plane would never know the job failed, potentially leaving it in a stuck "leased" state.

🛡️ Suggested improvement
```diff
   } catch (error) {
     const summary = error instanceof Error ? error.message : String(error);
-    await client.completeJob(leased.job.id, "failed", summary);
+    try {
+      await client.completeJob(leased.job.id, "failed", summary);
+    } catch (completeError) {
+      console.error("Failed to report job failure to control plane:", completeError);
+      // Continue to record locally even if control plane is unreachable
+    }
     recordAutomationRun(db, {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/ops-harbor/src/lib/automation.ts` around lines 246 - 259, The catch block currently calls client.completeJob(leased.job.id, "failed", summary) without guarding against it throwing, which can silently drop the failure; wrap the client.completeJob call in its own try/catch (inside the existing catch), capture and log the error (including error object and summary) so failures are visible, and decide whether to retry or rethrow—ensure recordAutomationRun(...) still executes (or move it after a successful completeJob if you want control-plane confirmation first); reference the client.completeJob call and the recordAutomationRun function in your change.
🧹 Nitpick comments (9)
plugins/ops-harbor/README.md (1)
3-5: Consider adding a quick-start section for prerequisites.

A short “how to run” block (commands for starting the `ops-harbor` app and MCP server) would make first-time setup much clearer.

Based on learnings: Maintain clear README or AGENTS.md documentation for agent definitions and usage.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/ops-harbor/README.md` around lines 3 - 5, Add a short "Quick Start / Prerequisites" section to the README that lists the required components (local ops-harbor app and MCP server), exact commands to start each service (e.g., how to run ops-harbor and how to start the MCP server), expected ports/endpoints, and any env vars or config files to set; also add a brief pointer to an AGENTS.md (or expand README) that documents agent definitions and usage examples for this plugin so first-time users can boot the stack and validate it quickly.

apps/ops-harbor/src/lib/port-finder.ts (1)
6-15: Normalize and validate `startPort` against the declared scan range.

Right now callers can pass values below `MIN_PORT` or above `MAX_PORT`. Clamping/validating upfront makes behavior explicit and avoids unexpected scans.

♻️ Proposed refactor
```diff
 export async function findAvailablePort(startPort = MIN_PORT): Promise<number> {
-  for (let port = startPort; port <= MAX_PORT; port++) {
+  const firstPort = Math.max(MIN_PORT, startPort);
+  if (firstPort > MAX_PORT) {
+    throw new Error(`startPort must be between ${MIN_PORT} and ${MAX_PORT}`);
+  }
+
+  for (let port = firstPort; port <= MAX_PORT; port++) {
     // Ports must be probed one by one to avoid racing the same listener logic.
     // eslint-disable-next-line no-await-in-loop
     if (await isPortAvailable(port)) {
       return port;
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/ops-harbor/src/lib/port-finder.ts` around lines 6 - 15, In findAvailablePort, validate and normalize the startPort against MIN_PORT and MAX_PORT before scanning: if startPort is less than MIN_PORT, set it to MIN_PORT; if startPort is greater than MAX_PORT, throw a clear Error indicating the allowed port range; then proceed with the existing loop that calls isPortAvailable(port). This makes behavior explicit, avoids scanning outside the declared range, and keeps references to the existing symbols (findAvailablePort, MIN_PORT, MAX_PORT, isPortAvailable).

apps/ops-harbor/src/lib/sync.ts (1)
12-18: Don’t silently swallow `triggerSync` failures.

Keeping best-effort behavior is fine, but add a warning log so control-plane failures are diagnosable.
Suggested patch
```diff
   if (config.authorLogin) {
     try {
       const result = await client.triggerSync(config.authorLogin);
       synchronized = result.synchronized;
-    } catch {
+    } catch (error) {
+      console.warn("[ops-harbor] triggerSync failed:", error);
       synchronized = undefined;
     }
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/ops-harbor/src/lib/sync.ts` around lines 12 - 18, The code currently swallows errors from client.triggerSync when config.authorLogin is set; change the catch to capture the error (e.g., catch (err)) and emit a warning including the error and context (authorLogin) instead of silently setting synchronized to undefined — reference the existing synchronized variable and call site of client.triggerSync and use the module's logger (e.g., processLogger or logger if available; if none exists, create a logger and use it) to log a concise warning like "triggerSync failed for authorLogin=<...>: <error>" before leaving synchronized as undefined.

packages/ops-harbor-core/src/alerts.test.ts (1)
57-70: Test assertion may be fragile to implementation changes.

The test asserts `blockers` has length 3 for `ci_failed`, `base_behind`, and `review_changes_requested`. The test uses `mergeStateStatus: "BEHIND"`, which produces a `base_behind` alert with `"warning"` severity; per the alerts.ts implementation, warnings are counted as blockers (`severity !== "info"`), so length 3 is correct. Consider using `expect.arrayContaining` for blockers too, making the test more explicit about which alerts are blockers:

♻️ More explicit blocker assertion
```diff
   expect(withAlerts.alerts.map((alert: WorkItemAlert) => alert.type)).toEqual(
     expect.arrayContaining(["ci_failed", "base_behind", "review_changes_requested"]),
   );
-  expect(withAlerts.blockers).toHaveLength(3);
+  expect(withAlerts.blockers.map((b: WorkItemAlert) => b.type)).toEqual(
+    expect.arrayContaining(["ci_failed", "base_behind", "review_changes_requested"]),
+  );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/ops-harbor-core/src/alerts.test.ts` around lines 57 - 70, The test is fragile because it only checks blockers length; update the assertion to explicitly verify which alert types are present in withAlerts.blockers by mapping blockers to their type and using expect.arrayContaining(["ci_failed","base_behind","review_changes_requested"]) (referencing deriveAlerts, buildWorkItem, WorkItemAlert and withAlerts.blockers) so the test asserts the specific blocker types rather than just a numeric length.

apps/ops-harbor-control-plane/src/lib/config.ts (1)
28-28: Port parsing doesn't validate range.

`Number.parseInt` will accept any numeric string, including values outside the valid port range (1-65535). Consider adding range validation or using a valibot transform with bounds checking.

♻️ Optional: Add port range validation
```diff
+function parsePort(value: string | undefined): number {
+  const port = Number.parseInt(value ?? "4130", 10);
+  if (port < 1 || port > 65535) {
+    throw new Error(`Invalid port number: ${port}`);
+  }
+  return port;
+}

 export function readConfig(env: NodeJS.ProcessEnv = process.env): OpsHarborControlPlaneConfig {
   const parsed = v.parse(EnvSchema, env);
   return {
-    port: Number.parseInt(parsed.OPS_HARBOR_CONTROL_PLANE_PORT ?? "4130", 10),
+    port: parsePort(parsed.OPS_HARBOR_CONTROL_PLANE_PORT),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/ops-harbor-control-plane/src/lib/config.ts` at line 28, The port assignment using Number.parseInt(parsed.OPS_HARBOR_CONTROL_PLANE_PORT ?? "4130", 10) does not validate that the parsed value falls within the valid TCP/UDP port range (1–65535); update the code that sets the port (the port property) to parse the input, then check the numeric result is an integer within 1..65535 and otherwise fall back to the default (4130) or throw a clear validation error; alternatively implement this as a valibot transform/bounds check in the config schema so parsed.OPS_HARBOR_CONTROL_PLANE_PORT is validated and normalized to a safe port number before being assigned to port.

apps/ops-harbor/package.json (1)
63-66: Redundant peerDependencies for an application package.

For an application (not a library), having `react` and `react-dom` in both `dependencies` and `peerDependencies` is typically unnecessary. `peerDependencies` are meant for libraries to signal version requirements to consumers. Since this is an app that bundles its own dependencies, consider removing the `peerDependencies` section.

♻️ Suggested fix
```diff
-  "peerDependencies": {
-    "react": "^19.0.0",
-    "react-dom": "^19.0.0"
-  },
   "engines": {
     "bun": ">=1.0.0"
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/ops-harbor/package.json` around lines 63 - 66, The package.json currently declares react and react-dom under "peerDependencies" even though this is an application; remove the entire "peerDependencies" section (or at minimum delete the "react" and "react-dom" entries) from apps/ops-harbor's package.json so the app only lists them under "dependencies" and does not advertise peer requirements as a library would; verify "react" and "react-dom" remain in "dependencies" if the app needs them.

apps/ops-harbor/src/lib/config.ts (2)
112-117: Consider atomic writes to prevent config corruption.

The current implementation writes directly to the config file, which could corrupt data if interrupted mid-write. Based on learnings, implement atomic writes using temporary-file creation and rename operations to prevent file corruption.
♻️ Atomic write implementation
```diff
+import { rename } from "node:fs/promises";
+import { randomUUID } from "node:crypto";

 export async function saveConfig(config: OpsHarborConfig): Promise<OpsHarborConfig> {
   const parsed = normalizeConfig(v.parse(ConfigSchema, config));
   await mkdir(dirname(CONFIG_PATH), { recursive: true });
-  await writeFile(CONFIG_PATH, `${JSON.stringify(parsed, null, 2)}\n`, "utf-8");
+  const tempPath = `${CONFIG_PATH}.${randomUUID()}.tmp`;
+  await writeFile(tempPath, `${JSON.stringify(parsed, null, 2)}\n`, "utf-8");
+  await rename(tempPath, CONFIG_PATH);
   return parsed;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/ops-harbor/src/lib/config.ts` around lines 112 - 117, The saveConfig function currently writes directly to CONFIG_PATH which can corrupt the file if interrupted; change it to perform an atomic write by writing the JSON to a temporary file in the same directory (e.g. a unique name based on CONFIG_PATH, process.pid and Date.now()), fsyncing/closing it, then renaming the temp file to CONFIG_PATH using fs.promises.rename so the final replace is atomic; update saveConfig to use mkdir(dirname(CONFIG_PATH)) as before, write to the temp path, ensure file handle is properly closed/synced, and then rename the temp file to CONFIG_PATH (clean up the temp file on error).
103-110: Silent fallback on load errors could hide configuration issues.

The `catch` block returns `defaultConfig()` for any error, including JSON syntax errors or schema validation failures. This could mask user configuration mistakes. Consider logging the error or distinguishing between "file not found" (expected on first run) and actual parse/validation errors.

♻️ Suggested improvement
```diff
 export async function loadConfig(): Promise<OpsHarborConfig> {
   try {
     const raw = await readFile(CONFIG_PATH, "utf-8");
     return normalizeConfig(v.parse(ConfigSchema, JSON.parse(raw)));
-  } catch {
+  } catch (error) {
+    // Only silently fall back if config doesn't exist yet
+    if (error instanceof Error && 'code' in error && error.code === 'ENOENT') {
+      return defaultConfig();
+    }
+    console.warn(`Failed to load config from ${CONFIG_PATH}:`, error);
     return defaultConfig();
   }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/ops-harbor/src/lib/config.ts` around lines 103 - 110, loadConfig currently swallows all errors and returns defaultConfig(), hiding JSON parse or schema validation issues; update loadConfig (which calls readFile(CONFIG_PATH), JSON.parse, v.parse(ConfigSchema, ...) and normalizeConfig) to only fallback to defaultConfig when the file is missing (e.g., error.code === 'ENOENT'), and for other errors (JSON.parse errors or v.parse validation errors) log the error (use the module's logger if available or console.error) and rethrow or return a failure so callers see the problem; specifically, change the catch to inspect the thrown error, return defaultConfig() for file-not-found, and otherwise log the error with context and rethrow.

packages/ops-harbor-core/src/alerts.ts (1)
142-156: Only the most recent `review_commented` event creates an alert.

The logic filters and sorts to get only the most recent `review_commented` event, so multiple comments result in only one alert. If this is intentional (to avoid alert spam), it's fine. Otherwise, consider whether multiple comment alerts might be useful.
Verify each finding against the current code and only fix it if needed. In `@packages/ops-harbor-core/src/alerts.ts` around lines 142 - 156, Currently only the most recent review_commented event is turned into an alert; change the logic so every review_commented event from recentEvents generates its own alert by iterating over the filtered/sorted array rather than taking [0]. Locate the recentEvents filtering block that produces reviewCommentEvent and replace the single-element selection with something like mapping/forEach over recentEvents.filter(e => e.type === "review_commented").toSorted(...) and push createAlert(...) for each event (using the same fields: "review_commented", level "info", title "Review comment received", event.message, event.createdAt, "event") so each comment produces an alert via alerts.push; alternatively, if deduping is desired, apply a time-based or author-based grouping before pushing.
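If per-event alerts are desired, the single-event selection can be replaced by a map over all matching events. A minimal sketch — the `ReviewEvent`/`Alert` shapes below are hypothetical stand-ins for the `recentEvents` and alert types named in the prompt, not the package's actual types:

```typescript
// Hypothetical shapes approximating the review's recentEvents / alerts symbols.
type ReviewEvent = { type: string; message: string; createdAt: string };
type Alert = {
  type: "review_commented";
  level: "info";
  title: string;
  detail: string;
  createdAt: string;
};

// One alert per review_commented event, newest first, instead of only the latest.
export function commentAlerts(recentEvents: ReviewEvent[]): Alert[] {
  return recentEvents
    .filter((event) => event.type === "review_commented")
    .sort((a, b) => b.createdAt.localeCompare(a.createdAt)) // ISO timestamps sort lexically
    .map((event) => ({
      type: "review_commented",
      level: "info",
      title: "Review comment received",
      detail: event.message,
      createdAt: event.createdAt,
    }));
}
```

If spam is a concern, the same map could be preceded by grouping per author or per time window before emitting alerts.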
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/ops-harbor-control-plane/src/lib/db.ts`:
- Around line 670-686: The current select-then-update race allows two workers to
lease the same automation_jobs row; make leasing atomic by performing the
guarded UPDATE inside a transaction and only returning the job if the UPDATE
affected exactly one row. In practice, begin a transaction, attempt an UPDATE on
automation_jobs that sets status='leased', lease_owner=?, lease_expires_at=?,
updated_at=? with a WHERE clause that matches id = ? AND (status = 'pending' OR
(status = 'leased' AND (lease_expires_at IS NULL OR lease_expires_at < ?)));
check the number of rows changed (must be 1), and only then SELECT the row to
return; otherwise rollback and return null; ensure you use the same identifiers
(leaseOwner, leaseTtlMs, jobRow.id/returned id, automation_jobs) so the code
replaces the current SELECT then UPDATE pattern with a single transactional
guarded update.
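The guarded-update leasing described above can be sketched as follows. This is an illustration, not the repo's code: the `Db` interface abstracts a better-sqlite3-style handle, and the table/column names follow the comment's SQL:

```typescript
// Minimal sketch of atomic job leasing via a single guarded UPDATE.
type JobRow = { id: string; status: string; lease_owner: string | null };

// Hypothetical minimal DB surface (mirrors better-sqlite3's run/get results).
interface Db {
  run(sql: string, ...params: unknown[]): { changes: number };
  get(sql: string, ...params: unknown[]): JobRow | undefined;
}

export function leaseJob(
  db: Db,
  jobId: string,
  leaseOwner: string,
  leaseTtlMs: number,
  now = Date.now(),
): JobRow | null {
  // The WHERE clause is the whole trick: only a pending job, or a leased job
  // whose lease has expired, can be claimed -- and only one UPDATE can win.
  const result = db.run(
    `UPDATE automation_jobs
        SET status = 'leased', lease_owner = ?, lease_expires_at = ?, updated_at = ?
      WHERE id = ?
        AND (status = 'pending'
             OR (status = 'leased' AND (lease_expires_at IS NULL OR lease_expires_at < ?)))`,
    leaseOwner, now + leaseTtlMs, now, jobId, now,
  );
  if (result.changes !== 1) return null; // someone else holds a live lease
  return db.get(`SELECT * FROM automation_jobs WHERE id = ?`, jobId) ?? null;
}
```

The row count check replaces the racy SELECT-then-UPDATE: the database serializes the two competing UPDATEs, so exactly one worker observes `changes === 1`.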
In `@apps/ops-harbor-control-plane/src/lib/github.ts`:
- Around line 283-302: The listInstallations function currently fetches only the
first page from the /app/installations endpoint; modify listInstallations to
iterate over all pages (either by incrementing a page query parameter or by
following the Link response header) and accumulate installations until no more
pages remain, preserving and merging rateLimit info between requests; update the
call site inside listInstallations (the githubRequest call and the local
installations variable) to request subsequent pages and concatenate results from
each response, ensuring the function still returns the final installations array
and combined rateLimit object.
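The page-following loop can be sketched generically. This is an assumption-laden illustration: `parseNextLink` handles a GitHub-style `Link` header, and `fetchAllPages` stands in for the accumulation the comment asks `listInstallations` to do (the real `githubRequest` shape is not shown here):

```typescript
// Extract the rel="next" URL from a GitHub-style Link response header, if any.
export function parseNextLink(linkHeader: string | null): string | null {
  if (!linkHeader) return null;
  for (const part of linkHeader.split(",")) {
    const match = part.match(/<([^>]+)>\s*;\s*rel="next"/);
    if (match) return match[1];
  }
  return null;
}

type Page<T> = { items: T[]; nextUrl: string | null };

// Accumulate all pages given a fetcher that returns one page at a time.
export async function fetchAllPages<T>(
  firstUrl: string,
  fetchPage: (url: string) => Promise<Page<T>>,
): Promise<T[]> {
  const all: T[] = [];
  let url: string | null = firstUrl;
  while (url) {
    const page = await fetchPage(url); // fetchPage would also merge rateLimit info
    all.push(...page.items);
    url = page.nextUrl;
  }
  return all;
}
```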
In `@apps/ops-harbor-control-plane/src/lib/sync.ts`:
- Around line 79-87: The inserted activity event may lack workItemId when
created by handleWebhookEvent, so before calling insertActivityEvent(db, event)
assign event.workItemId = item.id (using the hydrated item) so that
listActivity(db, { workItemId: item.id, ... }) will include the newly inserted
event; update the block around insertActivityEvent to backfill event.workItemId
from item before insertion (referencing the event and item variables and the
insertActivityEvent/listActivity calls).
In `@apps/ops-harbor-control-plane/src/server.ts`:
- Around line 115-126: Currently the code calls markWebhookDelivery(...) before
awaiting handleWebhookEvent(...), which causes deliveries to be deduplicated
even if processing fails; move the "mark as processed" operation to after a
successful await of handleWebhookEvent (or implement separate
markWebhookReceived(...) and markWebhookProcessed(...) functions and only set
processed after success), so that you still detect duplicates for already
processed deliveries (use deliveryId/eventName/payload to check) but do not
return deduplicated=true until handleWebhookEvent completes without throwing;
update the control flow around deliveryId/eventName/payload and the calls to
markWebhookDelivery/markWebhookProcessed accordingly.
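The ordering fix above can be illustrated with a small sketch. The `DeliveryStore` interface and `processDelivery` are hypothetical stand-ins for `markWebhookDelivery`/`handleWebhookEvent`, showing only the control flow:

```typescript
// Hypothetical minimal store for delivery dedup state.
interface DeliveryStore {
  hasProcessed(deliveryId: string): boolean;
  markProcessed(deliveryId: string): void;
}

export async function processDelivery(
  store: DeliveryStore,
  deliveryId: string,
  handle: () => Promise<void>, // stands in for handleWebhookEvent(...)
): Promise<{ deduplicated: boolean }> {
  // Duplicate check first: already-processed deliveries short-circuit.
  if (store.hasProcessed(deliveryId)) return { deduplicated: true };
  // Process BEFORE marking, so a thrown error leaves the delivery retryable.
  await handle();
  store.markProcessed(deliveryId);
  return { deduplicated: false };
}
```

The key property: a delivery whose handler throws is never marked processed, so GitHub's redelivery of the same `deliveryId` is not mistakenly deduplicated.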
- Around line 88-90: The internal/admin routes (GET "/api/internal/jobs", POST
"/api/admin/sync", POST "/api/internal/jobs/lease", POST
"/api/internal/jobs/:id/complete") are exposed without authentication; protect
them by adding auth middleware or restricting the listener to a private
interface. Locate the route registrations (the app.get/app.post calls that call
listAutomationJobs, leaseAutomationJobs, completeAutomationJob, and the admin
sync handler) and either (a) prepend an authentication/authorization middleware
(reuse existing auth middleware if available or implement a short middleware
that validates a shared secret/JWT and returns 401/403) or (b) ensure the server
binds those routes only on a private interface (localhost or internal network)
so they are not reachable from public webhooks; apply the same protection to all
listed routes (92-102 and 130-145) for consistent coverage.
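For option (a), the shared-secret comparison can be sketched as below. Only the check itself is shown; the header name and the middleware wiring into the route registrations are assumptions left to the implementer:

```typescript
import { timingSafeEqual } from "node:crypto";

// Constant-time comparison of a presented token against the configured secret.
// A length mismatch is an immediate failure (timingSafeEqual requires equal lengths).
export function isAuthorized(presented: string | undefined, secret: string): boolean {
  if (!presented) return false;
  const a = Buffer.from(presented);
  const b = Buffer.from(secret);
  return a.length === b.length && timingSafeEqual(a, b);
}
```

A middleware would read the token from a request header, call `isAuthorized`, and return 401/403 before the handler runs; option (b) (binding to localhost) avoids the check entirely but only works when the worker and control plane share a host or private network.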
In `@apps/ops-harbor/package.json`:
- Around line 19-21: The package exposes the CLI ("ops-harbor" in package.json)
but uses Bun-specific APIs (Bun.argv, Bun.serve) and is built with --target bun,
so add a clear runtime requirement: update package.json to declare Bun as
required (e.g., add an "engines" entry for bun and/or a "peerDependency" or
"optionalDependencies" note) and add a short README/README section telling users
to install Bun before using the ops-harbor CLI; ensure the change references the
existing "bin": {"ops-harbor": "./dist/cli.js"} entry so installers see the
runtime requirement when they install or view the package.
In `@apps/ops-harbor/src/cli.ts`:
- Around line 45-60: The code currently parses values.port with Number.parseInt
and may pass NaN or an out-of-range value into Bun.serve; update the logic
around the port variable to validate the CLI-provided port: if values.port is
set, parse it to a number, check Number.isInteger(parsed) && parsed >= 1 &&
parsed <= 65535, and if invalid print a clear CLI error (e.g., console.error)
and exit non-zero instead of continuing; only call findAvailablePort() when
values.port is not provided, and ensure the final port variable passed to
Bun.serve is a valid integer (reference symbols: values.port,
findAvailablePort(), port, Bun.serve, createServedApp()).
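The validation step can be isolated into a pure helper, sketched here. `parseCliPort` is a hypothetical name; the caller would print an error and exit non-zero on `null`, and fall back to `findAvailablePort()` when the flag is absent:

```typescript
// Returns: a valid port number; null for an invalid value (caller should
// error and exit non-zero); undefined when the flag was not provided
// (caller should auto-select via findAvailablePort()).
export function parseCliPort(raw: string | undefined): number | null | undefined {
  if (raw === undefined) return undefined; // flag not given
  const parsed = Number(raw);
  return Number.isInteger(parsed) && parsed >= 1 && parsed <= 65535 ? parsed : null;
}
```

Using `Number(...)` plus `Number.isInteger` rejects `NaN`, fractions, and empty strings in one check, which `Number.parseInt` alone would not (it silently truncates `"80.5"` to `80`).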
In `@apps/ops-harbor/src/client/App.tsx`:
- Around line 72-93: The mount effect currently fire-and-forgets the promise
returned by refresh(), so any rejection from Promise.all is unhandled; update
the component to explicitly handle errors by adding a load error state (e.g.,
loadError via useState) and changing the effect to call refresh().catch(err => {
setLoadError(err); console.error("initial refresh failed", err); }) and ensure
the UI reads loadError to surface a user-visible error message; locate and
modify the useEffect and refresh function references to implement this handling
and any small UI addition that displays loadError.
In `@apps/ops-harbor/src/lib/automation.ts`:
- Around line 63-95: The runCommand function currently has no timeout and can
hang; add a timeout mechanism by extending runCommand to accept a timeout (e.g.,
options.timeout ms) or create an AbortController via AbortSignal.timeout and
wire it to the child process; start a timer after spawning child (variable
child) that on expiry kills the child (child.kill('SIGTERM') and fallback to
SIGKILL) and rejects/resolve with a non-zero exit or timed-out error, ensure the
timer is cleared in the child.on("close") and child.on("error") handlers, and
preserve returning a Promise<CommandResult> with stdout/stderr trimmed when the
process ends normally.
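The timeout race can be sketched generically, separate from the spawn wiring. `withTimeout` is an illustrative helper: `work` stands in for the promise that resolves on the child's `close` event, and `kill` stands in for `child.kill("SIGTERM")` with a SIGKILL fallback:

```typescript
// Race a long-running promise against a timer; invoke `kill` on expiry.
export function withTimeout<T>(
  work: Promise<T>,
  timeoutMs: number,
  kill: () => void,
): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => {
      kill(); // e.g. child.kill("SIGTERM"), escalating to SIGKILL after a grace period
      reject(new Error(`command timed out after ${timeoutMs}ms`));
    }, timeoutMs);
  });
  // Clear the timer whichever branch wins, so nothing keeps the event loop alive.
  return Promise.race([work, timeout]).finally(() => {
    if (timer !== undefined) clearTimeout(timer);
  });
}
```

In `runCommand` this would wrap the close-event promise; the `close`/`error` handlers clearing the timer corresponds to the `finally` here.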
In `@apps/ops-harbor/src/mcp-server.ts`:
- Around line 88-146: The handler currently calls openLocalDb() inside
server.setRequestHandler which creates a new better-sqlite3 connection per
request and never closes it; either create a single shared DB connection by
calling openLocalDb() once outside the server.setRequestHandler and reuse that
connection for listWorkItems, getWorkItem, listActivity, listAlerts, or ensure
the per-request connection is closed by wrapping the handler logic in
try/finally and calling db.close() in finally; update references to db in the
ops_harbor_list_work_items, ops_harbor_get_work_item, ops_harbor_list_activity,
and ops_harbor_list_alerts cases accordingly.
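The try/finally variant of this fix generalizes to a small helper; a minimal sketch, with the `openLocalDb`/`db.close()` pair from the review shown only as a hypothetical usage comment:

```typescript
// Open a resource, hand it to a callback, and always close it, even when
// the callback throws. Prevents per-request connections from leaking.
function withResource<R, T>(
  open: () => R,
  close: (resource: R) => void,
  use: (resource: R) => T,
): T {
  const resource = open();
  try {
    return use(resource);
  } finally {
    close(resource);
  }
}

// Hypothetical usage inside a request handler:
//   withResource(openLocalDb, (db) => db.close(), (db) => listWorkItems(db));
```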
In `@apps/ops-harbor/src/server.ts`:
- Around line 42-50: The POST /api/sync handler currently calls
syncFromControlPlane directly and can overlap with the background sync; modify
the handler (app.post("/api/sync")) to check state.syncInFlight: if true return
a 409 response immediately, otherwise set state.syncInFlight = true, then await
loadConfig() and syncFromControlPlane(state.db, config) inside a try block and
clear state.syncInFlight = false in a finally block so the flag is always reset;
ensure errors still return the existing error JSON and status code.
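One way to express the 409 guard is a reusable single-flight wrapper; a sketch under the assumption that overlapping callers should get a sentinel (mapped to HTTP 409 by the route) rather than being queued:

```typescript
// Single-flight guard: at most one task runs at a time; concurrent callers
// get "busy" back immediately. The flag is always cleared in finally, even
// when the task throws.
function createInFlightGuard() {
  let inFlight = false;
  return async function run<T>(task: () => Promise<T>): Promise<T | "busy"> {
    if (inFlight) return "busy";
    inFlight = true;
    try {
      return await task();
    } finally {
      inFlight = false;
    }
  };
}
```

The route handler would return a 409 response whenever the wrapper yields `"busy"`.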
- Around line 147-153: The timers created after calling loadConfig() snapshot
config.syncIntervalSeconds and config.automationPollSeconds only once, so
updated values from PUT /api/config never take effect; change the design to
self-rescheduling loops that reload config each iteration: replace the
setInterval usage with two async loop functions (e.g., runSyncLoop and
runWorkerLoop) that call loadConfig() at the start of each cycle, await
runSync() / runWorker(), then await a delay based on the freshly loaded
config.syncIntervalSeconds or config.automationPollSeconds before looping again
(or alternatively expose a restartTimers() helper that cancels existing
intervals and re-reads config to schedule new setInterval calls whenever config
changes); refer to loadConfig, runSync, runWorker, config.syncIntervalSeconds
and config.automationPollSeconds to locate where to implement the change.
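The self-rescheduling design can be sketched as a dependency-injected loop; the `LoopDeps` shape and `pollSeconds` field are stand-ins for the real `loadConfig`/`runSync` pair:

```typescript
type LoopDeps = {
  loadConfig: () => Promise<{ pollSeconds: number }>;
  runOnce: () => Promise<void>;
};

// Self-rescheduling loop: config is re-read at the top of every cycle, so a
// changed interval takes effect on the next iteration without a restart.
// The loop exits when the provided signal aborts.
async function runLoop(deps: LoopDeps, signal: AbortSignal): Promise<void> {
  while (!signal.aborted) {
    const config = await deps.loadConfig();
    await deps.runOnce();
    await new Promise<void>((resolve) => {
      const timer = setTimeout(resolve, config.pollSeconds * 1000);
      signal.addEventListener(
        "abort",
        () => {
          clearTimeout(timer);
          resolve();
        },
        { once: true },
      );
    });
  }
}
```

Unlike `setInterval`, this also guarantees cycles never overlap, since the next delay starts only after `runOnce` resolves.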
- Around line 121-145: Both runSync and runWorker set their in-flight guards
before calling loadConfig(), but loadConfig() runs outside the try/finally so a
rejection leaves state.syncInFlight/state.workerInFlight stuck and can escape
the fire-and-forget caller; update both runSync and runWorker so the async IIFE
wraps the entire work in try/catch/finally: move the call to loadConfig() into
the try block, catch any error (log it) so it doesn't bubble out of the
background task, and always clear the guard in the finally block (referencing
state.syncInFlight, state.workerInFlight, loadConfig, runSync, runWorker,
processNextAutomationJob, and syncFromControlPlane to locate the changes).
In `@apps/sdd-webapp/src/lib/config.test.ts`:
- Around line 16-17: The test sets process.env.SDD_WEBAPP_CONFIG_DIR but never
restores it, risking test pollution; update the test file (config.test.ts) to
capture the original value (e.g., const original =
process.env.SDD_WEBAPP_CONFIG_DIR) before assigning CONFIG_DIR and restore it in
an appropriate teardown hook (afterEach or afterAll) by reassigning
process.env.SDD_WEBAPP_CONFIG_DIR = original (or deleting it if original was
undefined) so other tests are not affected.
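The save/restore pattern can be factored into a helper so individual tests cannot forget the teardown; a sketch (the helper name is illustrative):

```typescript
// Run a block with an environment variable temporarily set, restoring the
// previous value (or deleting the key if it was unset) afterwards, even if
// the block throws.
function withEnv(key: string, value: string, fn: () => void): void {
  const original = process.env[key];
  process.env[key] = value;
  try {
    fn();
  } finally {
    if (original === undefined) {
      delete process.env[key];
    } else {
      process.env[key] = original;
    }
  }
}
```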
In `@apps/sdd-webapp/tsconfig.json`:
- Around line 4-6: Add a "baseUrl" to the tsconfig so the "@/..." path mapping
resolves for TypeScript tooling: update the tsconfig where the "paths" mapping
"@/*": ["./src/client/*"] is defined to include "baseUrl": "." (paths entries
resolve relative to baseUrl, so "./src/client" here would double the src/client
prefix), keeping the existing "paths" entry intact.
---
Outside diff comments:
In `@AGENTS.md`:
- Around line 44-49: The AGENTS.md table has empty description cells for the
packages **@r_masseater/doc-engine**, **@r_masseater/ops-harbor-core**, and
**@r_masseater/repository-lint** (and similarly for the entries referenced at
lines 61-63); open each package's package.json and add a meaningful non-empty
"description" field (e.g., a one-line summary of the package's purpose) inside
the JSON for those package names, save, and re-run the documentation/generator
that produces AGENTS.md so the table is regenerated with the new descriptions;
ensure you update the package.json for `@r_masseater/doc-engine`,
`@r_masseater/ops-harbor-core`, and `@r_masseater/repository-lint` (and any other
packages corresponding to lines 61-63) rather than editing AGENTS.md directly.
In `@apps/sdd-webapp/src/lib/status-store.test.ts`:
- Around line 19-31: You set process.env.SDD_WEBAPP_CONFIG_DIR = CONFIG_DIR in
the test setup but never restore the original value, causing env bleed; save the
original value (e.g. const ORIGINAL_SDD_WEBAPP_CONFIG_DIR =
process.env.SDD_WEBAPP_CONFIG_DIR) before mutating, then restore it in teardown
hooks (afterEach and/or afterAll) alongside the existing
deleteStatusStore(TEST_PROJECT_ID) cleanup so the environment is returned to its
prior state; reference the mutated symbol process.env.SDD_WEBAPP_CONFIG_DIR and
the existing teardown helpers deleteStatusStore and TEST_PROJECT_ID when making
the change.
---
Minor comments:
In `@apps/ops-harbor/src/lib/automation.ts`:
- Around line 246-259: The catch block currently calls
client.completeJob(leased.job.id, "failed", summary) without guarding against it
throwing, which can silently drop the failure; wrap the client.completeJob call
in its own try/catch (inside the existing catch), capture and log the error
(including error object and summary) so failures are visible, and decide whether
to retry or rethrow—ensure recordAutomationRun(...) still executes (or move it
after a successful completeJob if you want control-plane confirmation first);
reference the client.completeJob call and the recordAutomationRun function in
your change.
In `@apps/ops-harbor/src/lib/control-plane-client.ts`:
- Around line 9-16: The fetch helper fetchJson lacks a request timeout; update
the async function fetchJson<T>(input: string, init?: RequestInit) to create or
derive an AbortSignal with a timeout (e.g., using AbortSignal.timeout or an
AbortController) and pass that signal to fetch so requests fail fast on
unresponsive control plane; ensure you merge existing init.signal (prefer the
caller signal but race with the timeout signal), handle the abort by throwing a
clear Error (or rethrow AbortError) and still await/cleanup any response body if
needed before throwing.
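A minimal sketch of merging a caller-supplied signal with a timeout, assuming a Node runtime where `AbortSignal.timeout` is available (17.3+); whichever signal fires first aborts the merged one:

```typescript
// Merge an optional caller signal with a timeout: whichever aborts first
// aborts the merged signal, preserving the original abort reason.
function withTimeoutSignal(timeoutMs: number, callerSignal?: AbortSignal): AbortSignal {
  const controller = new AbortController();
  const timeout = AbortSignal.timeout(timeoutMs);
  const forward = (source: AbortSignal) => {
    if (source.aborted) controller.abort(source.reason);
    else source.addEventListener("abort", () => controller.abort(source.reason), { once: true });
  };
  forward(timeout);
  if (callerSignal) forward(callerSignal);
  return controller.signal;
}
```

A `fetchJson` helper could then pass `signal: withTimeoutSignal(10_000, init?.signal)` into `fetch`; the 10-second budget is an arbitrary illustration, not a value from the PR.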
In `@apps/sdd-webapp/src/lib/config.ts`:
- Around line 11-13: configDirPath() currently returns
process.env.SDD_WEBAPP_CONFIG_DIR directly which allows an empty string to
override the default; change it to treat empty/whitespace-only values as unset
by reading process.env.SDD_WEBAPP_CONFIG_DIR into a local (e.g. env) and
returning env (trimmed) only if it is non-empty, otherwise return
join(homedir(), ".config", "sdd-webapp") so projects.json is written to the
default path when SDD_WEBAPP_CONFIG_DIR is blank.
In `@apps/sdd-webapp/src/lib/status-parser.test.ts`:
- Around line 11-13: The module-level assignment to
process.env.SDD_WEBAPP_CONFIG_DIR using tempBase should be moved inside the test
suite lifecycle (e.g., beforeAll) to avoid cross-test pollution; update the file
so tempBase is still created at module scope or inside beforeAll, but ensure the
line setting process.env.SDD_WEBAPP_CONFIG_DIR runs in beforeAll (and if needed
reset or delete it in afterAll) and reference the existing tempBase variable and
the process.env.SDD_WEBAPP_CONFIG_DIR assignment so tests run with isolated env
per suite.
In `@apps/sdd-webapp/src/server.test.ts`:
- Around line 16-22: Add an afterAll cleanup that removes the per-run config
directory set by process.env.SDD_WEBAPP_CONFIG_DIR in server.test.ts: import fs
(or use fs.promises) and in an afterAll hook check that
process.env.SDD_WEBAPP_CONFIG_DIR exists and is under join(process.cwd(),
".tmp", "vitest", "sdd-webapp-server") to avoid accidental deletes, then remove
it with fs.rmSync or fs.promises.rm using recursive/force options (or
equivalent) so each test run’s randomUUID() directory is deleted.
In `@packages/ops-harbor-core/src/filters.ts`:
- Line 27: The current return uses slice(0, options.limit ?? 50) which permits
negative, zero, or non-integer limits; replace it by normalizing options.limit
into a positive integer defaulting to 50 (e.g., compute const limitValue =
Number.isFinite(options.limit) ? Math.max(1, Math.floor(options.limit)) : 50)
and then use filtered.slice(0, limitValue); this ensures
negative/zero/non-integer inputs are clamped to a sensible positive integer
before slicing.
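The normalization described above, as a small pure function (the name `clampLimit` is illustrative):

```typescript
// Normalize a user-supplied limit to a positive integer, defaulting to 50.
// Rejects NaN, Infinity, and non-numbers; clamps zero/negative values to 1.
function clampLimit(limit: unknown, fallback = 50): number {
  return typeof limit === "number" && Number.isFinite(limit)
    ? Math.max(1, Math.floor(limit))
    : fallback;
}
```

The filter would then end with `filtered.slice(0, clampLimit(options.limit))`.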
---
Nitpick comments:
In `@apps/ops-harbor-control-plane/src/lib/config.ts`:
- Line 28: The port assignment using
Number.parseInt(parsed.OPS_HARBOR_CONTROL_PLANE_PORT ?? "4130", 10) does not
validate that the parsed value falls within the valid TCP/UDP port range
(1–65535); update the code that sets the port (the port property) to parse the
input, then check the numeric result is an integer within 1..65535 and otherwise
fall back to the default (4130) or throw a clear validation error; alternatively
implement this as a valibot transform/bounds check in the config schema so
parsed.OPS_HARBOR_CONTROL_PLANE_PORT is validated and normalized to a safe port
number before being assigned to port.
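A sketch of the bounds check; the default of 4130 is taken from the review and the function name is illustrative:

```typescript
// Parse an env-provided port string, falling back to the default when the
// value is missing, non-numeric, or outside the valid 1-65535 range.
function parsePort(raw: string | undefined, fallback = 4130): number {
  const parsed = Number.parseInt(raw ?? "", 10);
  return Number.isInteger(parsed) && parsed >= 1 && parsed <= 65535 ? parsed : fallback;
}
```

As the comment notes, the same bounds could instead live in the valibot schema as a transform so every consumer gets a validated number.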
In `@apps/ops-harbor/package.json`:
- Around line 63-66: The package.json currently declares react and react-dom
under "peerDependencies" even though this is an application; remove the entire
"peerDependencies" section (or at minimum delete the "react" and "react-dom"
entries) from apps/ops-harbor's package.json so the app only lists them under
"dependencies" and does not advertise peer requirements as a library would;
verify "react" and "react-dom" remain in "dependencies" if the app needs them.
In `@apps/ops-harbor/src/lib/config.ts`:
- Around line 112-117: The saveConfig function currently writes directly to
CONFIG_PATH which can corrupt the file if interrupted; change it to perform an
atomic write by writing the JSON to a temporary file in the same directory (e.g.
a unique name based on CONFIG_PATH, process.pid and Date.now()),
fsyncing/closing it, then renaming the temp file to CONFIG_PATH using
fs.promises.rename so the final replace is atomic; update saveConfig to use
mkdir(dirname(CONFIG_PATH)) as before, write to the temp path, ensure file
handle is properly closed/synced, and then rename the temp file to CONFIG_PATH
(clean up the temp file on error).
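The write-temp-then-rename sequence, sketched with the synchronous fs API for brevity; the real code uses `fs/promises`, and an explicit fsync is omitted here, so this is weaker than the full recipe above:

```typescript
import { mkdirSync, readFileSync, renameSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { dirname, join } from "node:path";

// Atomic save: write to a sibling temp file, then rename over the target.
// rename() within the same directory is atomic on POSIX filesystems, so a
// crash mid-write can never leave a half-written config behind.
function saveJsonAtomic(path: string, data: unknown): void {
  mkdirSync(dirname(path), { recursive: true });
  const tempPath = `${path}.${process.pid}.${Date.now()}.tmp`;
  writeFileSync(tempPath, `${JSON.stringify(data, null, 2)}\n`, "utf8");
  renameSync(tempPath, path);
}
```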
- Around line 103-110: loadConfig currently swallows all errors and returns
defaultConfig(), hiding JSON parse or schema validation issues; update
loadConfig (which calls readFile(CONFIG_PATH), JSON.parse, v.parse(ConfigSchema,
...) and normalizeConfig) to only fallback to defaultConfig when the file is
missing (e.g., error.code === 'ENOENT'), and for other errors (JSON.parse errors
or v.parse validation errors) log the error (use the module's logger if
available or console.error) and rethrow or return a failure so callers see the
problem; specifically, change the catch to inspect the thrown error, return
defaultConfig() for file-not-found, and otherwise log the error with context and
rethrow.
In `@apps/ops-harbor/src/lib/port-finder.ts`:
- Around line 6-15: In findAvailablePort, validate and normalize the startPort
against MIN_PORT and MAX_PORT before scanning: if startPort is less than
MIN_PORT, set it to MIN_PORT; if startPort is greater than MAX_PORT, throw a
clear Error indicating the allowed port range; then proceed with the existing
loop that calls isPortAvailable(port). This makes behavior explicit, avoids
scanning outside the declared range, and keeps references to the existing
symbols (findAvailablePort, MIN_PORT, MAX_PORT, isPortAvailable).
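The suggested validation as a standalone helper; the MIN_PORT/MAX_PORT values below are assumptions standing in for the module's actual constants:

```typescript
const MIN_PORT = 1024; // assumed lower bound; use the module's real constant
const MAX_PORT = 65535;

// Normalize a requested start port before scanning: raise values below
// MIN_PORT to it, and reject values above MAX_PORT with a clear error.
function normalizeStartPort(startPort: number): number {
  if (!Number.isInteger(startPort) || startPort > MAX_PORT) {
    throw new Error(`startPort must be an integer <= ${MAX_PORT}`);
  }
  return Math.max(startPort, MIN_PORT);
}
```

`findAvailablePort` would call this once at the top, then run its existing `isPortAvailable` scan from the normalized value.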
In `@apps/ops-harbor/src/lib/sync.ts`:
- Around line 12-18: The code currently swallows errors from client.triggerSync
when config.authorLogin is set; change the catch to capture the error (e.g.,
catch (err)) and emit a warning including the error and context (authorLogin)
instead of silently setting synchronized to undefined — reference the existing
synchronized variable and call site of client.triggerSync and use the module's
logger (e.g., processLogger or logger if available; if none exists, create a
logger and use it) to log a concise warning like "triggerSync failed for
authorLogin=<...>: <error>" before leaving synchronized as undefined.
In `@packages/ops-harbor-core/src/alerts.test.ts`:
- Around line 57-70: The test is fragile because it only checks blockers length;
update the assertion to explicitly verify which alert types are present in
withAlerts.blockers by mapping blockers to their type and using
expect.arrayContaining(["ci_failed","base_behind","review_changes_requested"])
(referencing deriveAlerts, buildWorkItem, WorkItemAlert and withAlerts.blockers)
so the test asserts the specific blocker types rather than just a numeric
length.
In `@packages/ops-harbor-core/src/alerts.ts`:
- Around line 142-156: Currently only the most recent review_commented event is
turned into an alert; change the logic so every review_commented event from
recentEvents generates its own alert by iterating over the filtered/sorted array
rather than taking [0]. Locate the recentEvents filtering block that produces
reviewCommentEvent and replace the single-element selection with something like
mapping/forEach over recentEvents.filter(e => e.type ===
"review_commented").toSorted(...) and push createAlert(...) for each event
(using the same fields: "review_commented", level "info", title "Review comment
received", event.message, event.createdAt, "event") so each comment produces an
alert via alerts.push; alternatively, if deduping is desired, apply a time-based
or author-based grouping before pushing.
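The iterate-over-all-events shape, sketched with a simplified event type; the real code would call `createAlert` and push into `alerts` instead of returning plain objects:

```typescript
type ReviewEvent = { type: string; message: string; createdAt: string };
type CommentAlert = {
  type: "review_commented";
  level: "info";
  title: string;
  message: string;
  createdAt: string;
};

// Build one alert per review_commented event (newest first), instead of
// alerting only on the single most recent comment.
function commentAlerts(recentEvents: ReviewEvent[]): CommentAlert[] {
  return recentEvents
    .filter((event) => event.type === "review_commented")
    .sort((a, b) => b.createdAt.localeCompare(a.createdAt))
    .map((event) => ({
      type: "review_commented",
      level: "info",
      title: "Review comment received",
      message: event.message,
      createdAt: event.createdAt,
    }));
}
```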
In `@plugins/ops-harbor/README.md`:
- Around line 3-5: Add a short "Quick Start / Prerequisites" section to the
README that lists the required components (local ops-harbor app and MCP server),
exact commands to start each service (e.g., how to run ops-harbor and how to
start the MCP server), expected ports/endpoints, and any env vars or config
files to set; also add a brief pointer to an AGENTS.md (or expand README) that
documents agent definitions and usage examples for this plugin so first-time
users can boot the stack and validate it quickly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 14cc9094-3ae1-44e7-b2e1-7cabb27ccebb
⛔ Files ignored due to path filters (2)
- `apps/sdd-webapp/bun.lock` is excluded by `!**/*.lock`
- `bun.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (115)
- .claude-plugin/marketplace.json
- .claude/hooks/gen-docs.ts
- .github/workflows/publish-package.yml
- AGENTS.md
- README.md
- apps/ops-harbor-control-plane/package.json
- apps/ops-harbor-control-plane/src/cli.ts
- apps/ops-harbor-control-plane/src/lib/config.ts
- apps/ops-harbor-control-plane/src/lib/db.ts
- apps/ops-harbor-control-plane/src/lib/github.ts
- apps/ops-harbor-control-plane/src/lib/sync.ts
- apps/ops-harbor-control-plane/src/server.ts
- apps/ops-harbor-control-plane/tsconfig.json
- apps/ops-harbor-control-plane/vitest.config.ts
- apps/ops-harbor/package.json
- apps/ops-harbor/src/cli.ts
- apps/ops-harbor/src/client/App.tsx
- apps/ops-harbor/src/client/env.d.ts
- apps/ops-harbor/src/client/index.html
- apps/ops-harbor/src/client/main.tsx
- apps/ops-harbor/src/client/styles.css
- apps/ops-harbor/src/lib/automation.ts
- apps/ops-harbor/src/lib/config.ts
- apps/ops-harbor/src/lib/control-plane-client.ts
- apps/ops-harbor/src/lib/local-db.ts
- apps/ops-harbor/src/lib/mcp.ts
- apps/ops-harbor/src/lib/port-finder.ts
- apps/ops-harbor/src/lib/sync.ts
- apps/ops-harbor/src/mcp-server.ts
- apps/ops-harbor/src/server.ts
- apps/ops-harbor/tsconfig.json
- apps/ops-harbor/vite.config.ts
- apps/ops-harbor/vitest.config.ts
- apps/sdd-webapp/.gitignore
- apps/sdd-webapp/.oxfmtrc.json
- apps/sdd-webapp/.oxlintrc.json
- apps/sdd-webapp/AGENTS.md
- apps/sdd-webapp/CHANGELOG.md
- apps/sdd-webapp/CLAUDE.md
- apps/sdd-webapp/README.md
- apps/sdd-webapp/components.json
- apps/sdd-webapp/package.json
- apps/sdd-webapp/src/cli.ts
- apps/sdd-webapp/src/client/App.tsx
- apps/sdd-webapp/src/client/components/ui/button.tsx
- apps/sdd-webapp/src/client/components/ui/input.tsx
- apps/sdd-webapp/src/client/components/ui/popover.tsx
- apps/sdd-webapp/src/client/env.d.ts
- apps/sdd-webapp/src/client/features/projects/ProjectSelector.tsx
- apps/sdd-webapp/src/client/features/projects/useProjects.ts
- apps/sdd-webapp/src/client/features/specs/Dashboard.tsx
- apps/sdd-webapp/src/client/features/specs/FileTree.tsx
- apps/sdd-webapp/src/client/features/specs/SpecViewer.tsx
- apps/sdd-webapp/src/client/features/specs/TaskDetailView.tsx
- apps/sdd-webapp/src/client/features/specs/status.ts
- apps/sdd-webapp/src/client/features/specs/tree.ts
- apps/sdd-webapp/src/client/features/specs/useSpecs.ts
- apps/sdd-webapp/src/client/index.html
- apps/sdd-webapp/src/client/lib/abort.ts
- apps/sdd-webapp/src/client/lib/api.ts
- apps/sdd-webapp/src/client/lib/error.ts
- apps/sdd-webapp/src/client/lib/routes.ts
- apps/sdd-webapp/src/client/lib/utils.ts
- apps/sdd-webapp/src/client/main.tsx
- apps/sdd-webapp/src/client/styles.css
- apps/sdd-webapp/src/client/tailwind.config.js
- apps/sdd-webapp/src/features/file-system/index.test.ts
- apps/sdd-webapp/src/features/file-system/index.ts
- apps/sdd-webapp/src/lib/config.test.ts
- apps/sdd-webapp/src/lib/config.ts
- apps/sdd-webapp/src/lib/markdown.test.ts
- apps/sdd-webapp/src/lib/markdown.ts
- apps/sdd-webapp/src/lib/mcp.ts
- apps/sdd-webapp/src/lib/port-finder.test.ts
- apps/sdd-webapp/src/lib/port-finder.ts
- apps/sdd-webapp/src/lib/project-path.test.ts
- apps/sdd-webapp/src/lib/project-path.ts
- apps/sdd-webapp/src/lib/specs-scanner.test.ts
- apps/sdd-webapp/src/lib/specs-scanner.ts
- apps/sdd-webapp/src/lib/status-parser.test.ts
- apps/sdd-webapp/src/lib/status-parser.ts
- apps/sdd-webapp/src/lib/status-store.test.ts
- apps/sdd-webapp/src/lib/status-store.ts
- apps/sdd-webapp/src/lib/worktree-detector.test.ts
- apps/sdd-webapp/src/lib/worktree-detector.ts
- apps/sdd-webapp/src/mcp-server.ts
- apps/sdd-webapp/src/server.test.ts
- apps/sdd-webapp/src/server.ts
- apps/sdd-webapp/src/types.ts
- apps/sdd-webapp/tsconfig.json
- apps/sdd-webapp/vite.config.ts
- apps/sdd-webapp/vitest.config.ts
- docs/dev-commands.md
- knip.json
- package.json
- packages/doc-engine/src/generator.ts
- packages/doc-engine/src/plugin-list.ts
- packages/ops-harbor-core/package.json
- packages/ops-harbor-core/src/alerts.test.ts
- packages/ops-harbor-core/src/alerts.ts
- packages/ops-harbor-core/src/filters.ts
- packages/ops-harbor-core/src/index.ts
- packages/ops-harbor-core/src/prompt.ts
- packages/ops-harbor-core/src/types.ts
- packages/ops-harbor-core/tsconfig.json
- packages/ops-harbor-core/vitest.config.ts
- plugins/ops-harbor/AGENTS.md
- plugins/ops-harbor/CLAUDE.md
- plugins/ops-harbor/README.md
- plugins/ops-harbor/package.json
- plugins/ops-harbor/plugin.json
- plugins/ops-harbor/skills/use-ops-harbor/SKILL.md
- plugins/ops-harbor/src/index.ts
- plugins/ops-harbor/tsconfig.json
- plugins/ops-harbor/vitest.config.ts
apps/ops-harbor/src/server.ts
Outdated
```ts
const config = await loadConfig();
setInterval(() => {
  void runSync();
}, config.syncIntervalSeconds * 1000);
setInterval(() => {
  void runWorker();
}, config.automationPollSeconds * 1000);
```
Saved poll intervals never take effect.
These timers snapshot syncIntervalSeconds and automationPollSeconds only once at startup. After PUT /api/config, the new values do nothing until process restart, which makes the config editor misleading. Prefer a self-rescheduling loop that reloads config each cycle, or restart the timers when config changes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/ops-harbor/src/server.ts` around lines 147 - 153, The timers created
after calling loadConfig() snapshot config.syncIntervalSeconds and
config.automationPollSeconds only once, so updated values from PUT /api/config
never take effect; change the design to self-rescheduling loops that reload
config each iteration: replace the setInterval usage with two async loop
functions (e.g., runSyncLoop and runWorkerLoop) that call loadConfig() at the
start of each cycle, await runSync() / runWorker(), then await a delay based on
the freshly loaded config.syncIntervalSeconds or config.automationPollSeconds
before looping again (or alternatively expose a restartTimers() helper that
cancels existing intervals and re-reads config to schedule new setInterval calls
whenever config changes); refer to loadConfig, runSync, runWorker,
config.syncIntervalSeconds and config.automationPollSeconds to locate where to
implement the change.
```json
"paths": {
  "@/*": ["./src/client/*"]
}
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "Locate shared app tsconfig:"
fd -i "app.json" | rg -n "@repo/ts-config|ts-config"
echo
echo "Print candidate shared app config files:"
fd -i "app.json" --exec sh -c 'echo "--- $1"; sed -n "1,200p" "$1"' sh {}
echo
echo "Print apps/sdd-webapp/tsconfig.json:"
sed -n "1,200p" apps/sdd-webapp/tsconfig.json
Repository: masseater/claude-code-plugin
Length of output: 726
🏁 Script executed:
cat -n packages/ts-config/base.json
Repository: masseater/claude-code-plugin
Length of output: 600
🏁 Script executed:
fd "tsconfig.json" | head -20
Repository: masseater/claude-code-plugin
Length of output: 695
🏁 Script executed:
# Check for root tsconfig.json
ls -la tsconfig.json 2>&1
# Check if any tsconfig.json in the repo defines baseUrl
rg "baseUrl" --type json
Repository: masseater/claude-code-plugin
Length of output: 132
🏁 Script executed:
# Verify the TypeScript spec: check if there's any way paths work without baseUrl
# Also check the actual usage in the codebase to see if the alias is actually used
rg "@/" --type ts --type tsx | head -20
Repository: masseater/claude-code-plugin
Length of output: 99
🏁 Script executed:
# Search for `@/` usage in ts and tsx files
rg "@/" -g "*.ts" -g "*.tsx" | head -30
Repository: masseater/claude-code-plugin
Length of output: 2128
🏁 Script executed:
# Check what bundler/build tool is being used
fd "vite.config|webpack.config|next.config|tsup.config" --type f
# Also check package.json scripts for the webapp
cat apps/sdd-webapp/package.json 2>/dev/null | head -50
Repository: masseater/claude-code-plugin
Length of output: 1527
🏁 Script executed:
# Check vite.config.ts for path/alias configuration
cat apps/sdd-webapp/vite.config.ts
Repository: masseater/claude-code-plugin
Length of output: 686
Add baseUrl to enable path alias resolution in type checking.
The @/* path mapping currently has no baseUrl anywhere in the configuration chain (apps/sdd-webapp/tsconfig.json → @repo/ts-config/app.json → @repo/ts-config/base.json). While Vite handles the alias resolution at build time, TypeScript tooling that still expects baseUrl will fail to resolve @/ imports, causing false errors and breaking IDE intellisense. Add "baseUrl": "." to apps/sdd-webapp/tsconfig.json. Note that paths entries resolve relative to baseUrl, so "baseUrl": "./src/client" would wrongly expand @/foo to src/client/src/client/foo; also, since TypeScript 4.1, paths without baseUrl resolves relative to the tsconfig file, so recent tsserver/tsgo versions may already work.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/sdd-webapp/tsconfig.json` around lines 4 - 6, Add a "baseUrl" to the
tsconfig so the "@/..." path mapping resolves for TypeScript tooling: update the
tsconfig where the "paths" mapping "@/*": ["./src/client/*"] is defined to
include "baseUrl": "." (paths entries resolve relative to baseUrl, so
"./src/client" would double the src/client prefix; with "." in place,
TypeScript/tsserver/tsc/tsgo can resolve the imports), keeping the existing
"paths" entry intact.
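One possible shape of the resulting tsconfig; the `extends` value is assumed from the config chain named above, and note `"baseUrl": "."` rather than `"./src/client"`, since `paths` entries are themselves resolved relative to `baseUrl` and already carry the `src/client` prefix:

```json
{
  "extends": "@repo/ts-config/app.json",
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@/*": ["./src/client/*"]
    }
  }
}
```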
🧹 Nitpick comments (2)
apps/ops-harbor/src/lib/port-finder.test.ts (1)
40-44: Consider a more resilient approach for the exact-port assertion. This test assumes port 4310 is always free, which could cause intermittent failures if another process happens to bind it. A more robust alternative is the same ephemeral-then-release pattern, though there's still a small race window.
If the goal is simply to verify the function returns the start port when available, you could first bind an ephemeral port, close it, and immediately call `findAvailablePort` with that port, though even that has a race. The current approach is acceptable for CI stability in practice, but adding a brief comment explaining the port choice would help future maintainers.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/ops-harbor/src/lib/port-finder.test.ts` around lines 40 - 44, The test that asserts findAvailablePort(4310) assumes port 4310 is free; replace it with an ephemeral-then-release pattern: create a server, listen(0) to obtain an available ephemeral port, close the server to release that port, then call findAvailablePort(ephemeralPort) and expect the returned port to equal ephemeralPort (or, if you prefer to keep the hardcoded 4310, add a one-line comment explaining the choice and the potential race). Target the test named "returns the requested port when it is free" and the findAvailablePort function when making this change.
packages/ops-harbor-core/src/utilities.test.ts (1)
11-43: Consider extracting a shared test helper. The `buildWorkItem` helper is duplicated between `alerts.test.ts` and this file. While test isolation can justify some duplication, extracting this to a shared `test-utils.ts` file would reduce the maintenance burden if the `WorkItem` shape changes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/ops-harbor-core/src/utilities.test.ts` around lines 11 - 43, The helper buildWorkItem used to create WorkItem test fixtures is duplicated; extract it into a shared test-utils.ts (export function buildWorkItem(partial?: Partial<WorkItem>): WorkItem) and move any dependent types/imports (WorkItem, summarizeStatus) into that module so both utilities.test.ts and alerts.test.ts import buildWorkItem from test-utils.ts; keep the implementation and default values unchanged and update the two test files to import the helper instead of duplicating it.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@apps/ops-harbor/src/lib/port-finder.test.ts`:
- Around line 40-44: The test that asserts findAvailablePort(4310) assumes port
4310 is free; replace it with an ephemeral-then-release pattern: create a
server, listen(0) to obtain an available ephemeral port, close the server to
release that port, then call findAvailablePort(ephemeralPort) and expect the
returned port to equal ephemeralPort (or, if you prefer to keep the hardcoded
4310, add a one-line comment explaining the choice and the potential race).
Target the test named "returns the requested port when it is free" and the
findAvailablePort function when making this change.
In `@packages/ops-harbor-core/src/utilities.test.ts`:
- Around line 11-43: The helper buildWorkItem used to create WorkItem test
fixtures is duplicated; extract it into a shared test-utils.ts (export function
buildWorkItem(partial?: Partial<WorkItem>): WorkItem) and move any dependent
types/imports (WorkItem, summarizeStatus) into that module so both
utilities.test.ts and alerts.test.ts import buildWorkItem from test-utils.ts;
keep the implementation and default values unchanged and update the two test
files to import the helper instead of duplicating it.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 9541d84d-c0c1-47c2-b651-5580327128fb
📒 Files selected for processing (17)
- .github/workflows/ci.yml
- apps/ops-harbor-control-plane/.oxlintrc.json
- apps/ops-harbor-control-plane/src/lib/config.test.ts
- apps/ops-harbor-control-plane/vitest.config.ts
- apps/ops-harbor/.oxlintrc.json
- apps/ops-harbor/src/lib/port-finder.test.ts
- apps/ops-harbor/vitest.config.ts
- eslint.config.ts
- lefthook.yml
- package.json
- packages/ops-harbor-core/.oxlintrc.json
- packages/ops-harbor-core/src/alerts.test.ts
- packages/ops-harbor-core/src/utilities.test.ts
- packages/ops-harbor-core/vitest.config.ts
- plugins/ops-harbor/.oxlintrc.json
- plugins/ops-harbor/src/index.test.ts
- plugins/ops-harbor/vitest.config.ts
✅ Files skipped from review due to trivial changes (6)
- lefthook.yml
- apps/ops-harbor/.oxlintrc.json
- packages/ops-harbor-core/.oxlintrc.json
- plugins/ops-harbor/.oxlintrc.json
- eslint.config.ts
- apps/ops-harbor-control-plane/src/lib/config.test.ts
🚧 Files skipped from review as they are similar to previous changes (5)
- apps/ops-harbor-control-plane/vitest.config.ts
- plugins/ops-harbor/vitest.config.ts
- apps/ops-harbor/vitest.config.ts
- packages/ops-harbor-core/vitest.config.ts
- package.json
Actionable comments posted: 5
♻️ Duplicate comments (1)
apps/ops-harbor-control-plane/src/server.ts (1)
54-65: ⚠️ Potential issue | 🔴 Critical
The admin/internal surface still fails open when no token is configured.
If `internalApiToken` is absent, every route under `/api/admin/*` and `/api/internal/*` is public again. On the same service that accepts GitHub webhooks, that leaves sync triggers, job leasing, job completion, and queued prompts reachable without authentication. Fail closed at startup or bind these routes to a private listener/interface.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/ops-harbor-control-plane/src/server.ts` around lines 54 - 65, Current middleware for the "/api/admin/*" and "/api/internal/*" routes uses a conditional that allows requests when config.internalApiToken is not set, leaving those surfaces public; change this to fail closed by either (a) validating at startup that config.internalApiToken is present and terminating/startup-failing with a clear error if it is missing, or (b) altering the middleware for the app.use handlers for "/api/admin/*" and "/api/internal/*" so that if config.internalApiToken is falsy the middleware immediately rejects requests (e.g., return unauthorized) rather than permitting them; reference config.internalApiToken and the two app.use route middlewares to locate where to implement the fix.
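The fail-closed check reduces to a pure predicate that can be unit-tested apart from the router; the names are illustrative, and the real middleware would translate `false` into a 401/403 response:

```typescript
// Fail-closed token check for admin/internal routes: a missing configured
// token rejects every request instead of leaving the surface open.
function authorizeInternal(
  configuredToken: string | undefined,
  authorizationHeader: string | null,
): boolean {
  if (!configuredToken) return false; // fail closed, never fail open
  return authorizationHeader === `Bearer ${configuredToken}`;
}
```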
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/ops-harbor-control-plane/src/lib/github.ts`:
- Around line 245-258: githubRequest and graphql use plain fetch without
timeouts which can hang; add an AbortController with a configurable timeout
(e.g., constant or from config) in both async functions (githubRequest and
graphql), wire controller.signal into the fetch init, start a timer that calls
controller.abort() after the timeout, and clear the timer on response; catch the
abort error and return or throw a distinct timeout-specific error (or include a
clear error shape in the returned result with rateLimit) so callers of
githubRequest and graphql (e.g., the handlers in server.ts) can detect timeouts
and avoid hanging the server.
- Around line 129-149: SEARCH_QUERY and PULL_REQUEST_QUERY use statusCheckRollup
{ contexts(first: 50) } which truncates check runs and status contexts; update
the queries (both occurrences) to paginate the contexts connection using
cursor-based pagination (add after: $afterContexts variable, edges/node or nodes
with pageInfo { hasNextPage endCursor }) and loop in the Github client code that
fetches these queries to fetch subsequent pages until hasNextPage is false, then
pass the full concatenated contexts to mapChecks() so summarizeStatus() receives
the complete set; alternatively, replace the query with a single API call that
returns all checks if available, but ensure mapChecks() and any callers consume
the fully paginated result from statusCheckRollup.
- Around line 537-544: The verifyWebhookSignature function currently compares
hex strings with === which is vulnerable to timing attacks; change it to use
crypto.timingSafeEqual by computing raw HMAC digest as a Buffer (use
createHmac(...).update(rawBody).digest()) and convert the incoming
signatureHeader (after stripping the "sha256=" prefix) into a Buffer from hex,
then if their lengths differ return false, otherwise return
timingSafeEqual(expectedBuf, signatureBuf); update references in
verifyWebhookSignature accordingly and ensure crypto.timingSafeEqual is
imported/available.
In `@apps/ops-harbor-control-plane/src/lib/sync.ts`:
- Around line 88-93: The current save path (listActivity -> deriveAlerts ->
upsertWorkItem) persists the WorkItem directly and loses the new mergeable
states; update the value stored before calling upsertWorkItem and any downstream
consumers (refreshWorkItemAlerts, enqueueJobsForAlerts) by mapping the GitHub
mergeable union ("mergeable" | "conflicting" | "unknown") to the DB
representation expected by the serializer (or update the serializer itself in
db.ts to accept the three-state value). Concretely, convert item.mergeable into
the DB-safe form (e.g., explicit enum/string or numeric code) and use that
mapped value when building enriched (the object passed to upsertWorkItem,
refreshWorkItemAlerts, enqueueJobsForAlerts) so the new states are preserved
across writes and reads.
In `@apps/ops-harbor/src/lib/automation.ts`:
- Around line 113-121: ensureExpectedBranch currently only checks the branch
name; modify it to also guard the expected PR head SHA by comparing the output
of "git rev-parse HEAD" to the expected SHA (e.g., leased.workItem.headSha)
before proceeding, and/or perform a fetch + explicit reset to that SHA if you
prefer to enforce sync; update the function ensureExpectedBranch to run "git
rev-parse HEAD" (or fetch+git reset --hard <sha>) and throw an error if the SHA
does not match the expected value so automation never runs against stale code.
---
Duplicate comments:
In `@apps/ops-harbor-control-plane/src/server.ts`:
- Around line 54-65: Current middleware for the "/api/admin/*" and
"/api/internal/*" routes uses a conditional that allows requests when
config.internalApiToken is not set, leaving those surfaces public; change this
to fail closed by either (a) validating at startup that config.internalApiToken
is present and terminating/startup-failing with a clear error if it is missing,
or (b) altering the middleware for the app.use handlers for "/api/admin/*" and
"/api/internal/*" so that if config.internalApiToken is falsy the middleware
immediately rejects requests (e.g., return unauthorized) rather than permitting
them; reference config.internalApiToken and the two app.use route middlewares to
locate where to implement the fix.
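The fail-closed check the prompt describes can be reduced to a small, testable decision function, separate from the HTTP framework. This is a minimal sketch: `authorizeInternalRequest` and the 503/401 split are illustrative assumptions, not names or behavior from the repo.

```typescript
type AuthDecision = { allowed: boolean; status: number };

// Fail closed: if no internal token is configured, the admin/internal surface
// is treated as disabled (503), never as open. A configured token must match
// the Authorization header exactly, otherwise the request is rejected (401).
function authorizeInternalRequest(
  configuredToken: string | undefined,
  authorizationHeader: string | undefined,
): AuthDecision {
  if (!configuredToken) return { allowed: false, status: 503 };
  if (authorizationHeader !== `Bearer ${configuredToken}`) {
    return { allowed: false, status: 401 };
  }
  return { allowed: true, status: 200 };
}
```

The middleware for `/api/admin/*` and `/api/internal/*` would then call this and short-circuit on `allowed === false`, which removes the "token unset means no auth" branch entirely.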
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 1dc50545-0bb6-44cf-a6d5-4cad58b3201f
⛔ Files ignored due to path filters (1)
`bun.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (29)
- .github/workflows/ci.yml
- apps/ops-harbor-control-plane/src/lib/config.test.ts
- apps/ops-harbor-control-plane/src/lib/config.ts
- apps/ops-harbor-control-plane/src/lib/db.ts
- apps/ops-harbor-control-plane/src/lib/github.test.ts
- apps/ops-harbor-control-plane/src/lib/github.ts
- apps/ops-harbor-control-plane/src/lib/sync.ts
- apps/ops-harbor-control-plane/src/server.ts
- apps/ops-harbor/README.md
- apps/ops-harbor/src/cli.ts
- apps/ops-harbor/src/client/App.tsx
- apps/ops-harbor/src/lib/automation.ts
- apps/ops-harbor/src/lib/config.ts
- apps/ops-harbor/src/lib/control-plane-client.ts
- apps/ops-harbor/src/mcp-server.ts
- apps/ops-harbor/src/server.ts
- apps/sdd-webapp/src/lib/config.test.ts
- eslint.config.ts
- lefthook.yml
- package.json
- plugins/agnix/plugin.json
- plugins/context/plugin.json
- plugins/debug/plugin.json
- plugins/devkit/plugin.json
- plugins/discord-notify/plugin.json
- plugins/eslint-lsp/plugin.json
- plugins/progress-tracker/plugin.json
- plugins/sdd/plugin.json
- plugins/ui-dev/plugin.json
✅ Files skipped from review due to trivial changes (11)
- plugins/sdd/plugin.json
- plugins/context/plugin.json
- plugins/devkit/plugin.json
- plugins/debug/plugin.json
- plugins/ui-dev/plugin.json
- plugins/agnix/plugin.json
- plugins/discord-notify/plugin.json
- plugins/eslint-lsp/plugin.json
- plugins/progress-tracker/plugin.json
- apps/ops-harbor/README.md
- apps/ops-harbor-control-plane/src/lib/config.test.ts
🚧 Files skipped from review as they are similar to previous changes (10)
- .github/workflows/ci.yml
- eslint.config.ts
- package.json
- lefthook.yml
- apps/sdd-webapp/src/lib/config.test.ts
- apps/ops-harbor/src/mcp-server.ts
- apps/ops-harbor/src/cli.ts
- apps/ops-harbor/src/lib/config.ts
- apps/ops-harbor-control-plane/src/lib/config.ts
- apps/ops-harbor-control-plane/src/lib/db.ts
```graphql
statusCheckRollup {
  contexts(first: 50) {
    nodes {
      __typename
      ... on CheckRun {
        name
        status
        conclusion
        detailsUrl
        startedAt
        completedAt
      }
      ... on StatusContext {
        context
        state
        targetUrl
        createdAt
        description
      }
    }
  }
```
🧩 Analysis chain

🏁 Script executed:
```shell
find . -type f -name "github.ts" | head -20
```
Repository: masseater/claude-code-plugin
Length of output: 121

🏁 Script executed:
```shell
wc -l ./apps/ops-harbor-control-plane/src/lib/github.ts
```
Length of output: 125

🏁 Script executed:
```shell
sed -n '120,160p' ./apps/ops-harbor-control-plane/src/lib/github.ts
```
Length of output: 1062

🏁 Script executed:
```shell
sed -n '185,220p' ./apps/ops-harbor-control-plane/src/lib/github.ts
```
Length of output: 913

🏁 Script executed:
```shell
grep -n "mapChecks\|summarizeStatus" ./apps/ops-harbor-control-plane/src/lib/github.ts
```
Length of output: 309

🏁 Script executed:
```shell
sed -n '399,435p' ./apps/ops-harbor-control-plane/src/lib/github.ts
```
Length of output: 1390

🏁 Script executed:
```shell
sed -n '435,450p' ./apps/ops-harbor-control-plane/src/lib/github.ts
```
Length of output: 576

🏁 Script executed:
```shell
find . -type f -name "*.ts" -o -name "*.tsx" | xargs grep -l "export.*summarizeStatus" | head -10
```
Length of output: 112

🏁 Script executed:
```shell
grep -n "summarizeStatus" ./packages/ops-harbor-core/src/alerts.ts | head -20
```
Length of output: 107

🏁 Script executed:
```shell
sed -n '53,120p' ./packages/ops-harbor-core/src/alerts.ts
```
Length of output: 1980

🏁 Script executed:
```shell
sed -n '1,52p' ./packages/ops-harbor-core/src/alerts.ts
```
Length of output: 1715

🏁 Script executed:
```shell
grep -n "pageInfo\|hasNextPage\|endCursor\|after" ./apps/ops-harbor-control-plane/src/lib/github.ts
```
Length of output: 481

🏁 Script executed:
```shell
sed -n '95,165p' ./apps/ops-harbor-control-plane/src/lib/github.ts
```
Length of output: 1916

🏁 Script executed:
```shell
sed -n '165,230p' ./apps/ops-harbor-control-plane/src/lib/github.ts
```
Length of output: 1579
Add pagination support for check contexts in statusCheckRollup queries.
Both SEARCH_QUERY and PULL_REQUEST_QUERY use contexts(first: 50) in the statusCheckRollup connection without pagination. If a PR has more than 50 combined check runs and status contexts, mapChecks() truncates the list and summarizeStatus() derives the status from the incomplete set. A failing or pending check beyond the first 50 is invisible, causing the dashboard to report false "healthy" status.
Implement cursor-based pagination for the contexts connection or switch to an API endpoint that returns the full check set in a single response.
Also applies to: 193-212 in PULL_REQUEST_QUERY
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/ops-harbor-control-plane/src/lib/github.ts` around lines 129 - 149,
SEARCH_QUERY and PULL_REQUEST_QUERY use statusCheckRollup { contexts(first: 50)
} which truncates check runs and status contexts; update the queries (both
occurrences) to paginate the contexts connection using cursor-based pagination
(add after: $afterContexts variable, edges/node or nodes with pageInfo {
hasNextPage endCursor }) and loop in the Github client code that fetches these
queries to fetch subsequent pages until hasNextPage is false, then pass the full
concatenated contexts to mapChecks() so summarizeStatus() receives the complete
set; alternatively, replace the query with a single API call that returns all
checks if available, but ensure mapChecks() and any callers consume the fully
paginated result from statusCheckRollup.
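The cursor loop the prompt asks for can be sketched independently of the GraphQL client. This is a minimal sketch, not the repo's code: `fetchPage` stands in for one execution of the paginated `statusCheckRollup` query (with an `after` variable), and the `Page` shape mirrors the `nodes`/`pageInfo` fields named above.

```typescript
type Page<T> = {
  nodes: T[];
  pageInfo: { hasNextPage: boolean; endCursor: string | null };
};

// Keep requesting pages until hasNextPage is false, concatenating all nodes
// so downstream consumers (e.g. mapChecks) see the complete contexts list.
async function collectAllContexts<T>(
  fetchPage: (after: string | null) => Promise<Page<T>>,
): Promise<T[]> {
  const all: T[] = [];
  let cursor: string | null = null;
  do {
    const page = await fetchPage(cursor);
    all.push(...page.nodes);
    cursor = page.pageInfo.hasNextPage ? page.pageInfo.endCursor : null;
  } while (cursor !== null);
  return all;
}
```

In practice the first page can come from the existing search/PR query and only PRs whose `contexts` report `hasNextPage: true` need follow-up requests, which keeps the common case at one round trip.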
```typescript
async function githubRequest<T>(
  config: OpsHarborControlPlaneConfig,
  path: string,
  init: RequestInit,
  scope: string,
): Promise<{ data: T; rateLimit: { remaining?: number; resetAt?: string | null } }> {
  const response = await fetch(`${config.githubApiUrl}${path}`, {
    ...init,
    headers: {
      Accept: "application/vnd.github+json",
      "User-Agent": "ops-harbor-control-plane",
      ...init.headers,
    },
  });
```
🧩 Analysis chain

🏁 Script executed:
```shell
# First, locate and read the github.ts file to understand the githubRequest function
cd apps/ops-harbor-control-plane/src/lib && wc -l github.ts
```
Length of output: 85

🏁 Script executed:
```shell
# Read the githubRequest function and surrounding context (lines 240-280)
cd apps/ops-harbor-control-plane/src/lib && sed -n '240,280p' github.ts
```
Length of output: 1439

🏁 Script executed:
```shell
# Read the area around lines 365-374 to identify the second function mentioned
cd apps/ops-harbor-control-plane/src/lib && sed -n '360,380p' github.ts
```
Length of output: 946

🏁 Script executed:
```shell
# Check the server.ts file to see how these helpers are called (lines 100-150)
cd apps/ops-harbor-control-plane/src && wc -l server.ts
```
Length of output: 85

🏁 Script executed:
```shell
# Read server.ts lines around 106-144 to see the request handlers
cd apps/ops-harbor-control-plane/src && sed -n '100,150p' server.ts
```
Length of output: 2123

🏁 Script executed:
```shell
# Find where syncAuthorWorkItems is defined
rg "syncAuthorWorkItems" apps/ops-harbor-control-plane/src --no-heading -n
```
Length of output: 396

🏁 Script executed:
```shell
# Find where handleWebhookEvent is defined
rg "handleWebhookEvent" apps/ops-harbor-control-plane/src --no-heading -n
```
Length of output: 392

🏁 Script executed:
```shell
# Check the entire graphql function to see if it has any timeout handling (around line 365-374)
cd apps/ops-harbor-control-plane/src/lib && sed -n '360,390p' github.ts
```
Length of output: 1242

🏁 Script executed:
```shell
# Let's see the full graphql function signature and any timeout code
cd apps/ops-harbor-control-plane/src/lib && grep -n "async function graphql" github.ts
```
Length of output: 102

🏁 Script executed:
```shell
# Look for any AbortController or timeout usage in github.ts
rg "AbortController|setTimeout|timeout" apps/ops-harbor-control-plane/src/lib/github.ts
```
Length of output: 54

🏁 Script executed:
```shell
# Get the exact line range for githubRequest function
cd apps/ops-harbor-control-plane/src/lib && sed -n '245,280p' github.ts | head -20
```
Length of output: 698

🏁 Script executed:
```shell
# Get the exact line range for graphql function (around 358-390)
cd apps/ops-harbor-control-plane/src/lib && sed -n '358,395p' github.ts
```
Length of output: 1467

🏁 Script executed:
```shell
# Check syncAuthorWorkItems to see if it calls these fetch functions
cd apps/ops-harbor-control-plane/src/lib && sed -n '96,150p' sync.ts
```
Length of output: 1914

🏁 Script executed:
```shell
# Check handleWebhookEvent to see if it calls these fetch functions
cd apps/ops-harbor-control-plane/src/lib && sed -n '161,210p' sync.ts
```
Length of output: 1845

🏁 Script executed:
```shell
# Find listInstallations definition
rg "listInstallations" apps/ops-harbor-control-plane/src/lib --no-heading -n | head -5
```
Length of output: 639

🏁 Script executed:
```shell
# Find fetchOpenPullRequestsForAuthor definition
rg "fetchOpenPullRequestsForAuthor" apps/ops-harbor-control-plane/src/lib --no-heading -n | head -5
```
Length of output: 462

🏁 Script executed:
```shell
# Find hydrateAndPersist definition
rg "hydrateAndPersist" apps/ops-harbor-control-plane/src/lib --no-heading -n | head -5
```
Length of output: 705

🏁 Script executed:
```shell
# Check if listInstallations calls githubRequest or graphql
cd apps/ops-harbor-control-plane/src/lib && grep -n "export.*listInstallations" *.ts
```
Length of output: 173

🏁 Script executed:
```shell
# Get the listInstallations function
cd apps/ops-harbor-control-plane/src/lib && sed -n '1,100p' github.ts | grep -A 20 "listInstallations"
```
Length of output: 54

🏁 Script executed:
```shell
# Search for listInstallations in github.ts
cd apps/ops-harbor-control-plane/src/lib && grep -n "listInstallations\|fetchOpenPullRequestsForAuthor" github.ts
```
Length of output: 221

🏁 Script executed:
```shell
# Read the listInstallations function
cd apps/ops-harbor-control-plane/src/lib && sed -n '305,330p' github.ts
```
Length of output: 1007

🏁 Script executed:
```shell
# Read the fetchOpenPullRequestsForAuthor function
cd apps/ops-harbor-control-plane/src/lib && sed -n '484,520p' github.ts
```
Length of output: 1589

🏁 Script executed:
```shell
# Read hydrateAndPersist in sync.ts
cd apps/ops-harbor-control-plane/src/lib && sed -n '147,170p' sync.ts
```
Length of output: 819

🏁 Script executed:
```shell
# Search for uses of graphql() in github.ts to see which functions call it
rg "graphql<|graphql\(" apps/ops-harbor-control-plane/src/lib/github.ts -A 2 -B 2
```
Length of output: 692
Add explicit timeouts to the GitHub fetch helpers.
Both githubRequest() at lines 245-258 and graphql() at lines 358-390 use bare fetch() calls without timeout or abort mechanisms. A stalled GitHub connection will block request handlers in server.ts (/api/admin/sync at line 111 and /api/webhooks/github at line 138) indefinitely, exhausting the server's connection pool. Add AbortController with explicit timeout configuration to both helpers and return a timeout-specific error on abort.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/ops-harbor-control-plane/src/lib/github.ts` around lines 245 - 258,
githubRequest and graphql use plain fetch without timeouts which can hang; add
an AbortController with a configurable timeout (e.g., constant or from config)
in both async functions (githubRequest and graphql), wire controller.signal into
the fetch init, start a timer that calls controller.abort() after the timeout,
and clear the timer on response; catch the abort error and return or throw a
distinct timeout-specific error (or include a clear error shape in the returned
result with rateLimit) so callers of githubRequest and graphql (e.g., the
handlers in server.ts) can detect timeouts and avoid hanging the server.
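The AbortController pattern the prompt describes can be factored into one wrapper shared by both helpers. This is a sketch under assumptions: `withAbortTimeout`, `TimeoutError`, and the 10-second default are illustrative names and values, not anything from the codebase.

```typescript
class TimeoutError extends Error {}

// Runs a signal-aware operation (e.g. fetch) and aborts it after timeoutMs.
// The underlying call must honor the AbortSignal for the abort to take effect;
// fetch does, rejecting with an AbortError once the controller fires.
async function withAbortTimeout<T>(
  run: (signal: AbortSignal) => Promise<T>,
  timeoutMs = 10_000,
): Promise<T> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await run(controller.signal);
  } catch (err) {
    // Surface aborts as a distinct error so callers can tell a timeout
    // apart from an API failure instead of hanging the request handler.
    if (controller.signal.aborted) {
      throw new TimeoutError(`GitHub request timed out after ${timeoutMs}ms`);
    }
    throw err;
  } finally {
    clearTimeout(timer);
  }
}
```

Inside `githubRequest`/`graphql` this would look roughly like `withAbortTimeout((signal) => fetch(url, { ...init, signal }))`, with the timeout sourced from config.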
```typescript
export function verifyWebhookSignature(
  secret: string,
  rawBody: string,
  signatureHeader: string | null,
): boolean {
  if (!signatureHeader?.startsWith("sha256=")) return false;
  const expected = `sha256=${createHmac("sha256", secret).update(rawBody).digest("hex")}`;
  return expected === signatureHeader;
```
🧩 Analysis chain

🏁 Script executed:
```shell
cat -n apps/ops-harbor-control-plane/src/lib/github.ts | head -50
```
Length of output: 1568

🏁 Script executed:
```shell
# Get the import section and the function in question
sed -n '1,20p' apps/ops-harbor-control-plane/src/lib/github.ts
```
Length of output: 576

🏁 Script executed:
```shell
# Get the specific function around lines 537-544
sed -n '530,550p' apps/ops-harbor-control-plane/src/lib/github.ts
```
Length of output: 531

🏁 Script executed:
```shell
# Verify Node.js crypto module exports timingSafeEqual
node -e "const crypto = require('crypto'); console.log(typeof crypto.timingSafeEqual);"
```
Length of output: 80

🏁 Script executed:
```shell
# Check the Node.js version and crypto exports
node --version
```
Length of output: 79

🌐 Web query:

Node.js crypto timingSafeEqual function signature and usage

💡 Result:

Function signature: `crypto.timingSafeEqual(a, b)`

Where:
- a: Buffer, TypedArray, or DataView
- b: Buffer, TypedArray, or DataView (must have the same byte length as a)

Returns: boolean (true if the underlying bytes are equal, using constant-time comparison to prevent timing attacks)

Usage: Import from 'node:crypto' or require('node:crypto'). Suitable for comparing HMAC digests, authentication tokens, etc., without leaking timing information.

Example (ESM):

```js
import { timingSafeEqual, createHmac } from 'node:crypto';
const secret = 'my-secret';
const expected = createHmac('sha256', secret).update('data').digest();
const actual = Buffer.from('some-computed-hash');
console.log(timingSafeEqual(expected, actual)); // true or false, constant time
```

Example (CJS):

```js
const crypto = require('node:crypto');
const expected = crypto.createHmac('sha256', 'secret').update('data').digest();
const actual = Buffer.from('hash');
console.log(crypto.timingSafeEqual(expected, actual));
```

Notes:
- Throws ERR_CRYPTO_TIMING_SAFE_EQUAL_LENGTH if byte lengths differ.
- For multi-byte TypedArrays (e.g., Uint16Array), uses platform byte order.
- Float32Array/Float64Array may give unexpected results due to IEEE 754.
- Added in v6.6.0, supports ArrayBuffer since v15.0.0.
- Surrounding code must also be timing-safe.
Citations:
- 1: https://nodejs.org/api/crypto.html
- 2: https://nodejs.cn/api/crypto/crypto_timingsafeequal_a_b.html
- 3: https://docs.deno.com/api/node/crypto/~/timingSafeEqual
- 4: https://www.geeksforgeeks.org/node-js/node-js-crypto-timingsafeequal-function/
- 5: nodejs/node@7be07c719c
- 6: nodejs/node@588b388181
- 7: https://bun.com/reference/node/crypto/timingSafeEqual
- 8: nodejs/node@bf4e778e50
- 9: nodejs/node@afb9917f16
Use a timing-safe compare for the webhook HMAC.
The final comparison uses === with strings, which is timing-dependent. In webhook handlers, use timingSafeEqual() with Buffer values to prevent timing attacks. This requires comparing the raw digest buffers instead of hex-encoded strings.
🔐 Possible fix

```diff
-import { createHmac, createSign } from "node:crypto";
+import { createHmac, createSign, timingSafeEqual } from "node:crypto";
 ...
 export function verifyWebhookSignature(
   secret: string,
   rawBody: string,
   signatureHeader: string | null,
 ): boolean {
   if (!signatureHeader?.startsWith("sha256=")) return false;
-  const expected = `sha256=${createHmac("sha256", secret).update(rawBody).digest("hex")}`;
-  return expected === signatureHeader;
+  const expected = createHmac("sha256", secret).update(rawBody).digest();
+  const received = Buffer.from(signatureHeader.slice("sha256=".length), "hex");
+  return expected.length === received.length && timingSafeEqual(expected, received);
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
import { createHmac, createSign, timingSafeEqual } from "node:crypto";

export function verifyWebhookSignature(
  secret: string,
  rawBody: string,
  signatureHeader: string | null,
): boolean {
  if (!signatureHeader?.startsWith("sha256=")) return false;
  const expected = createHmac("sha256", secret).update(rawBody).digest();
  const received = Buffer.from(signatureHeader.slice("sha256=".length), "hex");
  return expected.length === received.length && timingSafeEqual(expected, received);
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/ops-harbor-control-plane/src/lib/github.ts` around lines 537 - 544, The
verifyWebhookSignature function currently compares hex strings with === which is
vulnerable to timing attacks; change it to use crypto.timingSafeEqual by
computing raw HMAC digest as a Buffer (use
createHmac(...).update(rawBody).digest()) and convert the incoming
signatureHeader (after stripping the "sha256=" prefix) into a Buffer from hex,
then if their lengths differ return false, otherwise return
timingSafeEqual(expectedBuf, signatureBuf); update references in
verifyWebhookSignature accordingly and ensure crypto.timingSafeEqual is
imported/available.
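To make the suggested behavior concrete, here is a self-contained version of the timing-safe check with a demonstration payload. The payload, secret, and the standalone restatement of the function are illustrative only; the real function lives in `github.ts`.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Same shape as the suggested fix: compare raw digest buffers, never hex strings.
function verifyWebhookSignature(
  secret: string,
  rawBody: string,
  signatureHeader: string | null,
): boolean {
  if (!signatureHeader?.startsWith("sha256=")) return false;
  const expected = createHmac("sha256", secret).update(rawBody).digest();
  const received = Buffer.from(signatureHeader.slice("sha256=".length), "hex");
  // Length guard first: timingSafeEqual throws when lengths differ.
  return expected.length === received.length && timingSafeEqual(expected, received);
}

// Hypothetical payload and secret, for demonstration only.
const demoBody = '{"action":"opened"}';
const demoSignature = `sha256=${createHmac("sha256", "demo-secret").update(demoBody).digest("hex")}`;
```

A tampered body, a wrong secret, a missing header, or a non-`sha256=` prefix all return `false` without ever doing a variable-time string comparison.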
```typescript
const recentEvents = listActivity(db, { workItemId: item.id, limit: 20 });
const snapshot = deriveAlerts(item, recentEvents);
const enriched = { ...item, alerts: snapshot.alerts };
upsertWorkItem(db, enriched);
refreshWorkItemAlerts(db, enriched, recentEvents);
enqueueJobsForAlerts(db, { ...enriched, alerts: snapshot.alerts });
```
Persisting these items here will lose the new mergeable state.
This save path hands the GitHub WorkItem straight to upsertWorkItem(), and that serializer currently writes mergeable as workItem.mergeable ? 1 : 0 in apps/ops-harbor-control-plane/src/lib/db.ts:424-475. With the new "mergeable" | "conflicting" | "unknown" values from github.ts, both "conflicting" and "unknown" collapse on the first write, so later reads and status summaries are wrong. Please align the DB representation before persisting these records.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/ops-harbor-control-plane/src/lib/sync.ts` around lines 88 - 93, The
current save path (listActivity -> deriveAlerts -> upsertWorkItem) persists the
WorkItem directly and loses the new mergeable states; update the value stored
before calling upsertWorkItem and any downstream consumers
(refreshWorkItemAlerts, enqueueJobsForAlerts) by mapping the GitHub mergeable
union ("mergeable" | "conflicting" | "unknown") to the DB representation
expected by the serializer (or update the serializer itself in db.ts to accept
the three-state value). Concretely, convert item.mergeable into the DB-safe form
(e.g., explicit enum/string or numeric code) and use that mapped value when
building enriched (the object passed to upsertWorkItem, refreshWorkItemAlerts,
enqueueJobsForAlerts) so the new states are preserved across writes and reads.
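An explicit mapping table makes the three-state round trip loss-free. This is a sketch under assumptions: the numeric codes (1/0/2) and helper names are placeholders, not the actual column encoding in `db.ts`.

```typescript
type Mergeable = "mergeable" | "conflicting" | "unknown";

// Replaces the lossy `workItem.mergeable ? 1 : 0` with a bijective mapping so
// "conflicting" and "unknown" survive a write/read cycle. Codes are assumed.
const MERGEABLE_TO_DB: Record<Mergeable, number> = {
  mergeable: 1,
  conflicting: 0,
  unknown: 2,
};

function mergeableToDb(value: Mergeable): number {
  return MERGEABLE_TO_DB[value];
}

function mergeableFromDb(value: number): Mergeable {
  const entry = (Object.entries(MERGEABLE_TO_DB) as [Mergeable, number][]).find(
    ([, code]) => code === value,
  );
  if (!entry) throw new Error(`Unknown mergeable code: ${value}`);
  return entry[0];
}
```

Storing the string union directly in a TEXT column is an equally valid fix and avoids the lookup entirely; the key property is that serialization and deserialization are inverses.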
```typescript
async function ensureExpectedBranch(cwd: string, expectedBranch: string): Promise<void> {
  const result = await runCommand("git", ["rev-parse", "--abbrev-ref", "HEAD"], { cwd });
  if (result.exitCode !== 0) {
    throw new Error(result.stderr || "Failed to determine current branch.");
  }
  if (result.stdout !== expectedBranch) {
    throw new Error(`Repository is on branch ${result.stdout}, expected ${expectedBranch}.`);
  }
}
```
Guard on the expected PR head SHA, not only the branch name.
A local checkout can have the right branch name while still being behind or ahead of leased.workItem.headSha. In that case the worker generates changes against stale code and pushes them back to the PR. Verify git rev-parse HEAD matches the expected SHA before running automation, or fetch/reset explicitly first.
🧭 Possible fix

```diff
-async function ensureExpectedBranch(cwd: string, expectedBranch: string): Promise<void> {
+async function ensureExpectedRevision(
+  cwd: string,
+  expectedBranch: string,
+  expectedHeadSha: string,
+): Promise<void> {
   const result = await runCommand("git", ["rev-parse", "--abbrev-ref", "HEAD"], { cwd });
   if (result.exitCode !== 0) {
     throw new Error(result.stderr || "Failed to determine current branch.");
   }
   if (result.stdout !== expectedBranch) {
     throw new Error(`Repository is on branch ${result.stdout}, expected ${expectedBranch}.`);
   }
+
+  const head = await runCommand("git", ["rev-parse", "HEAD"], { cwd });
+  if (head.exitCode !== 0) {
+    throw new Error(head.stderr || "Failed to determine current HEAD.");
+  }
+  if (head.stdout !== expectedHeadSha) {
+    throw new Error(`Repository HEAD is ${head.stdout}, expected ${expectedHeadSha}.`);
+  }
 }
 ...
-  await ensureExpectedBranch(repository.path, leased.workItem.headBranch);
+  await ensureExpectedRevision(
+    repository.path,
+    leased.workItem.headBranch,
+    leased.workItem.headSha,
+  );
```

Also applies to: 201-201
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/ops-harbor/src/lib/automation.ts` around lines 113 - 121,
ensureExpectedBranch currently only checks the branch name; modify it to also
guard the expected PR head SHA by comparing the output of "git rev-parse HEAD"
to the expected SHA (e.g., leased.workItem.headSha) before proceeding, and/or
perform a fetch + explicit reset to that SHA if you prefer to enforce sync;
update the function ensureExpectedBranch to run "git rev-parse HEAD" (or
fetch+git reset --hard <sha>) and throw an error if the SHA does not match the
expected value so automation never runs against stale code.
… system

- Add CSS custom properties design system with dark/light theme toggle
- Add status dots with signal color mapping for all work item states
- Replace typography with Syne (display) + Outfit (body) + JetBrains Mono (data)
- Add panel fade-in animations and hover interactions
- Add FOUC prevention script for theme persistence via localStorage
- Update ops-harbor-control-plane server, config, and GitHub integration

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
🧹 Nitpick comments (2)
apps/ops-harbor-control-plane/src/lib/port-finder.ts (1)
5-14: Consider consolidating duplicate port-finder implementations.

This implementation is nearly identical to `apps/ops-harbor/src/lib/port-finder.ts` and `apps/sdd-webapp/src/lib/port-finder.ts`. Three copies with only cosmetic differences (MIN_PORT values, comments) create a maintenance burden and divergence risk. Consider extracting a shared utility into a common package.

Additionally, there's no validation that `startPort` is within the valid range (1–65535). If an invalid value is passed, the error message "No available port found between..." would be misleading.

🛡️ Proposed fix for input validation

```diff
 export async function findAvailablePort(startPort: number): Promise<number> {
+  if (startPort < 1 || startPort > MAX_PORT) {
+    throw new RangeError(`startPort must be between 1 and ${MAX_PORT}`);
+  }
   for (let port = startPort; port <= MAX_PORT; port += 1) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/ops-harbor-control-plane/src/lib/port-finder.ts` around lines 5 - 14, The findAvailablePort function duplicates logic across apps; extract the core logic (including isPortAvailable usage and MIN_PORT/MAX_PORT bounds) into a shared utility package and have this file call that shared function to avoid divergence, and add validation at the start of findAvailablePort to ensure startPort is an integer within 1..65535 (or between MIN_PORT and MAX_PORT if you keep module-level MIN_PORT) and throw a clear RangeError if invalid so the thrown message isn’t misleading; update references to MAX_PORT, MIN_PORT and isPortAvailable accordingly when moving logic to the shared helper.apps/ops-harbor-control-plane/src/lib/github.ts (1)
442-452: Clarify the update condition logic.

The condition at line 445 (`!config.githubWebhookSecret`) means the webhook config will always be updated when `githubWebhookSecret` is configured, even if the URL already matches. This appears intentional (to ensure the secret is always pushed), but a brief comment would clarify this behavior.

💡 Suggested clarification

```diff
 if (
   current.config.url === webhookUrl &&
-  current.config.contentType === "json" &&
-  !config.githubWebhookSecret
+  current.config.contentType === "json" &&
+  !config.githubWebhookSecret // Always update if secret is configured to ensure it's pushed
 ) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/ops-harbor-control-plane/src/lib/github.ts` around lines 442 - 452, The condition in the webhook update check (the if using current.config.url, current.config.contentType and !config.githubWebhookSecret) intentionally forces an update when config.githubWebhookSecret is set so the secret will be pushed even if the URL matches; add a concise inline comment above that if (or next to the boolean expression) explaining that behavior (i.e., "we only skip update when URL and contentType match and no webhook secret is configured — if githubWebhookSecret is present we must update to push the secret"), so future readers understand why the presence of config.githubWebhookSecret prevents this early-return in the function handling webhook comparison.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@apps/ops-harbor-control-plane/src/lib/github.ts`:
- Around line 442-452: The condition in the webhook update check (the if using
current.config.url, current.config.contentType and !config.githubWebhookSecret)
intentionally forces an update when config.githubWebhookSecret is set so the
secret will be pushed even if the URL matches; add a concise inline comment
above that if (or next to the boolean expression) explaining that behavior
(i.e., "we only skip update when URL and contentType match and no webhook secret
is configured — if githubWebhookSecret is present we must update to push the
secret"), so future readers understand why the presence of
config.githubWebhookSecret prevents this early-return in the function handling
webhook comparison.
In `@apps/ops-harbor-control-plane/src/lib/port-finder.ts`:
- Around line 5-14: The findAvailablePort function duplicates logic across apps;
extract the core logic (including isPortAvailable usage and MIN_PORT/MAX_PORT
bounds) into a shared utility package and have this file call that shared
function to avoid divergence, and add validation at the start of
findAvailablePort to ensure startPort is an integer within 1..65535 (or between
MIN_PORT and MAX_PORT if you keep module-level MIN_PORT) and throw a clear
RangeError if invalid so the thrown message isn’t misleading; update references
to MAX_PORT, MIN_PORT and isPortAvailable accordingly when moving logic to the
shared helper.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 32da36a6-1e9d-4074-8ba0-d956b43bde8d
⛔ Files ignored due to path filters (1)
`bun.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (17)
- README.md
- apps/ops-harbor-control-plane/README.md
- apps/ops-harbor-control-plane/package.json
- apps/ops-harbor-control-plane/src/cli.ts
- apps/ops-harbor-control-plane/src/lib/config.test.ts
- apps/ops-harbor-control-plane/src/lib/config.ts
- apps/ops-harbor-control-plane/src/lib/github.test.ts
- apps/ops-harbor-control-plane/src/lib/github.ts
- apps/ops-harbor-control-plane/src/lib/port-finder.ts
- apps/ops-harbor-control-plane/src/lib/tunnel.ts
- apps/ops-harbor-control-plane/src/localtunnel.d.ts
- apps/ops-harbor-control-plane/src/server.test.ts
- apps/ops-harbor-control-plane/src/server.ts
- apps/ops-harbor/src/client/App.tsx
- apps/ops-harbor/src/client/index.html
- apps/ops-harbor/src/client/styles.css
- docs/ops-harbor-local-github-app.md
✅ Files skipped from review due to trivial changes (10)
- README.md
- apps/ops-harbor-control-plane/README.md
- apps/ops-harbor/src/client/index.html
- apps/ops-harbor-control-plane/src/localtunnel.d.ts
- apps/ops-harbor-control-plane/package.json
- apps/ops-harbor-control-plane/src/cli.ts
- apps/ops-harbor-control-plane/src/lib/config.test.ts
- apps/ops-harbor/src/client/styles.css
- apps/ops-harbor/src/client/App.tsx
- docs/ops-harbor-local-github-app.md
🚧 Files skipped from review as they are similar to previous changes (2)
- apps/ops-harbor-control-plane/src/lib/github.test.ts
- apps/ops-harbor-control-plane/src/lib/config.ts
Actionable comments posted: 12
♻️ Duplicate comments (1)
apps/ops-harbor/src/lib/automation.ts (1)
137-145: ⚠️ Potential issue | 🟠 Major
Also verify the checked-out SHA before running the job.
Matching only the branch name still allows a stale or locally advanced checkout to pass. If `HEAD` no longer equals `leased.workItem.headSha`, the runner can commit on top of the wrong revision.
Also applies to: 225-225
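A minimal sketch of the combined branch-and-SHA check. The `run` parameter stands in for the module's command-runner helper (its exact signature here is an assumption), injected so the logic can be exercised without a real git checkout:

```typescript
type CommandResult = { exitCode: number; stdout: string; stderr: string };
type CommandRunner = (cmd: string, args: string[]) => Promise<CommandResult>;

// Hypothetical helper: fail fast unless the checkout matches both the expected
// branch name and the exact SHA recorded on the lease.
async function ensureExpectedCheckout(
  run: CommandRunner,
  expectedBranch: string,
  expectedSha: string,
): Promise<void> {
  const branch = await run("git", ["rev-parse", "--abbrev-ref", "HEAD"]);
  if (branch.exitCode !== 0) {
    throw new Error(branch.stderr || "Failed to resolve current branch.");
  }
  if (branch.stdout.trim() !== expectedBranch) {
    throw new Error(`Expected branch ${expectedBranch}, found ${branch.stdout.trim()}.`);
  }
  const head = await run("git", ["rev-parse", "HEAD"]);
  if (head.exitCode !== 0) {
    throw new Error(head.stderr || "Failed to resolve HEAD.");
  }
  if (head.stdout.trim() !== expectedSha) {
    throw new Error(`Expected HEAD ${expectedSha}, found ${head.stdout.trim()}.`);
  }
}
```

The caller would pass `leased.workItem.headSha` as `expectedSha`, so a checkout that advanced locally after the lease was taken fails before any commit is made.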
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/ops-harbor/src/lib/automation.ts` around lines 137 - 145, ensureExpectedBranch currently only checks branch name and misses verifying the exact checkout SHA; update it to also accept the expected SHA (e.g., add parameter expectedSha or pull leased.workItem.headSha) and run git rev-parse HEAD (via runCommand) to compare result.stdout to expectedSha, throwing a clear Error if they differ; keep existing branch-name check but add this SHA verification (use the same error handling pattern: check exitCode and include stderr in thrown errors) so the job will fail if HEAD doesn't equal the expected lease SHA.
🧹 Nitpick comments (5)
apps/ops-harbor-control-plane/src/server.ts (1)
82-93: Auth middleware is opt-in: requests pass through when `internalApiToken` is unset.
The conditional `if (config.internalApiToken && ...)` means that if no token is configured, all admin/internal routes are fully open. This is likely intentional for zero-config local development, but consider documenting this behavior or logging a warning at startup when the token is missing, so users are aware the internal API surface is unprotected.
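A sketch of what the startup warning and guard could look like (helper names are illustrative, not the actual server code):

```typescript
// Illustrative guard: with no token configured, every request passes, so the
// server should say so loudly at startup rather than fail silently open.
function isInternalRequestAllowed(
  configuredToken: string | undefined,
  presentedToken: string | undefined,
): boolean {
  if (!configuredToken) return true; // opt-in auth: unset token means open access
  return presentedToken === configuredToken;
}

function warnIfUnprotected(configuredToken: string | undefined): void {
  if (!configuredToken) {
    console.warn(
      "[ops-harbor] internalApiToken is unset; /api/admin/* and /api/internal/* are unprotected",
    );
  }
}
```

A middleware would call `isInternalRequestAllowed(config.internalApiToken, c.req.header("x-ops-harbor-token"))` and the server would call `warnIfUnprotected` once at startup.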
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/ops-harbor-control-plane/src/server.ts` around lines 82 - 93, The middleware for the admin/internal routes (app.use("/api/admin/*", ...) and app.use("/api/internal/*", ...)) currently lets requests through when config.internalApiToken is falsy, so make this opt-in behavior explicit by adding a startup warning or log when config.internalApiToken is missing; locate the config.internalApiToken check in server.ts and, during app/server initialization, emit a clear warning via the existing logger (or console) stating that internal/admin APIs are unprotected because internalApiToken is unset, so operators are aware of the insecure default.
apps/ops-harbor/src/client/pages/dashboard/ui/DashboardPage.tsx (1)
241-245: Minor: `trimToUndefined` is called twice with the same value.
`trimToUndefined(currentConfig.internalApiToken)` is evaluated once for the conditional and again in the object literal. Consider storing in a variable:
Suggested simplification
- const controlPlaneHeaders = trimToUndefined(currentConfig.internalApiToken)
-   ? {
-       "x-ops-harbor-token": trimToUndefined(currentConfig.internalApiToken) as string,
-     }
-   : undefined;
+ const token = trimToUndefined(currentConfig.internalApiToken);
+ const controlPlaneHeaders = token ? { "x-ops-harbor-token": token } : undefined;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/ops-harbor/src/client/pages/dashboard/ui/DashboardPage.tsx` around lines 241 - 245, The code calls trimToUndefined(currentConfig.internalApiToken) twice when building controlPlaneHeaders; store its result in a local variable (e.g., const token = trimToUndefined(currentConfig.internalApiToken)) and then use that variable for the conditional and for the header value so controlPlaneHeaders is set based on token and uses token for "x-ops-harbor-token".
apps/ops-harbor/src/lib/dev-runtime-config.ts (1)
33-35: Silent catch swallows all errors, not just "file not found".
The `.catch(() => undefined)` ignores any error from `unlink`, including permission errors or I/O failures. For a dev utility this is likely fine, but consider logging unexpected errors for debuggability:
Suggested refinement
  export async function clearDevRuntimeConfig(): Promise<void> {
-   await unlink(DEV_RUNTIME_CONFIG_PATH).catch(() => undefined);
+   await unlink(DEV_RUNTIME_CONFIG_PATH).catch((error: NodeJS.ErrnoException) => {
+     if (error.code !== "ENOENT") {
+       console.warn("clearDevRuntimeConfig: failed to remove config file", error.message);
+     }
+   });
  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/ops-harbor/src/lib/dev-runtime-config.ts` around lines 33 - 35, The clearDevRuntimeConfig function currently swallows all errors from unlink(DEV_RUNTIME_CONFIG_PATH) via .catch(() => undefined); change this so only "file not found" (ENOENT) is ignored and any other errors are surfaced or logged: await unlink(DEV_RUNTIME_CONFIG_PATH).catch(err => { if (err.code !== 'ENOENT') { /* log or rethrow */ } }); This keeps the intended silence for missing files while preserving debuggability for permission or I/O errors; update references in clearDevRuntimeConfig and use the existing logger or throw the error as appropriate.
apps/ops-harbor/src/client/pages/dashboard/ui/SettingsPanel.tsx (1)
160-175: Consider masking or confirming before displaying the private key.
The private key textarea displays the full PEM content in plain text. While this is a local dashboard, users might inadvertently expose sensitive data via screen sharing or screenshots. Consider:
- Using a `type="password"` equivalent (though not available for textarea)
- Showing a "reveal" toggle
- Only displaying a placeholder like `"••• (key configured)"` when a value exists
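The placeholder option can be reduced to a small pure helper (a sketch; `maskSecret` is not an existing function in this codebase):

```typescript
// Return the display value for a sensitive field: the fixed placeholder when a
// secret exists and reveal is off, otherwise the real value so it stays editable.
function maskSecret(value: string, revealed: boolean): string {
  if (revealed || value.trim() === "") {
    return value;
  }
  return "••• (key configured)";
}
```

The component would render the textarea bound to the real value only while `revealed` is true, and otherwise show the placeholder string.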
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/ops-harbor/src/client/pages/dashboard/ui/SettingsPanel.tsx` around lines 160 - 175, The private key textarea (controlPlaneSettings.githubPrivateKey in SettingsPanel.tsx) must not display the full PEM by default; change the UI to show a masked placeholder when a value exists (e.g. "••• (key configured)") and add a reveal toggle/button that, when activated, replaces the placeholder with the editable textarea bound to onChangeControlPlane so edits still update githubPrivateKey; ensure the real key is kept in component state and only rendered when the user explicitly toggles reveal, and keep a clear/remove action to unset the key if needed.
apps/ops-harbor-control-plane/src/lib/sync.ts (1)
91-93: Redundant `upsertWorkItem` call: `refreshWorkItemAlerts` already upserts.
Per `apps/ops-harbor-control-plane/src/lib/db.ts:638-653`, `refreshWorkItemAlerts` internally calls `deriveAlerts` and then `upsertWorkItem`. Since `saveHydratedWorkItem` already derives alerts (line 89) and upserts (line 91), calling `refreshWorkItemAlerts` on line 92 re-derives and re-upserts the same data.
Consider removing either the explicit `upsertWorkItem` call or the `refreshWorkItemAlerts` call to avoid the double write.
Suggested simplification
  const recentEvents = listActivity(db, { workItemId: item.id, limit: 20 });
  const snapshot = deriveAlerts(item, recentEvents);
  const enriched = { ...item, alerts: snapshot.alerts };
- upsertWorkItem(db, enriched);
- refreshWorkItemAlerts(db, enriched, recentEvents);
+ upsertWorkItem(db, enriched);
  enqueueJobsForAlerts(db, { ...enriched, alerts: snapshot.alerts });
Or simply rely on `refreshWorkItemAlerts` for both derive + upsert:
  const recentEvents = listActivity(db, { workItemId: item.id, limit: 20 });
- const snapshot = deriveAlerts(item, recentEvents);
- const enriched = { ...item, alerts: snapshot.alerts };
- upsertWorkItem(db, enriched);
- refreshWorkItemAlerts(db, enriched, recentEvents);
- enqueueJobsForAlerts(db, { ...enriched, alerts: snapshot.alerts });
+ const enriched = refreshWorkItemAlerts(db, item, recentEvents);
+ enqueueJobsForAlerts(db, enriched);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/ops-harbor-control-plane/src/lib/sync.ts` around lines 91 - 93, The code calls upsertWorkItem and then refreshWorkItemAlerts (which itself calls deriveAlerts and upsertWorkItem), causing a duplicate write; update saveHydratedWorkItem to remove the redundant call by either (A) deleting the explicit upsertWorkItem(db, enriched) and rely on refreshWorkItemAlerts(db, enriched, recentEvents) to derive + upsert, ensuring enqueueJobsForAlerts still receives the correct snapshot.alerts, or (B) keep the explicit upsertWorkItem and remove refreshWorkItemAlerts; choose one approach and remove the other so only a single derive/upsert occurs (referencing upsertWorkItem, refreshWorkItemAlerts, deriveAlerts, saveHydratedWorkItem, enqueueJobsForAlerts).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/ops-harbor-control-plane/src/lib/config.test.ts`:
- Around line 13-21: The test "uses defaults when env vars are missing" is not
hermetic because readConfig({}) can read a real config on the host; fix it by
creating an empty temp directory and passing it as XDG_CONFIG_HOME in the env
object to readConfig (e.g., const tmp = mkdtempSync(...); const config =
readConfig({ XDG_CONFIG_HOME: tmp });), then assert defaults as before and
remove the temp dir after the test; reference the readConfig call in the test to
locate where to inject the temp XDG_CONFIG_HOME.
In `@apps/ops-harbor/src/client/app/styles.css`:
- Around line 707-711: The CSS rule for .settings-facts dd uses the deprecated
declaration "word-break: break-word"; remove that declaration and replace it
with the supported overflow-wrap property (e.g., add "overflow-wrap: anywhere;"
or "overflow-wrap: break-word;" depending on desired break behavior) so long
values will wrap without stylelint errors; update the .settings-facts dd block
accordingly and delete the old word-break line.
- Around line 667-676: The .settings-status and .settings-block rules use an
undefined CSS variable var(--bg-panel) causing the color-mix background to be
invalid; replace var(--bg-panel) with the project’s defined surface token (e.g.,
var(--surface) or var(--bg-surface) depending on your design tokens) so the
background: color-mix(in srgb, <defined-surface-token> 88%, var(--bg-deep))
yields a valid surface color; update the selector block containing
.settings-status and .settings-block and verify the chosen token exists in the
global token file.
In `@apps/ops-harbor/src/client/pages/dashboard/ui/DashboardPage.tsx`:
- Around line 462-464: The displayed finishedAt can be empty for in-progress
runs causing "Invalid Date"; inside DashboardPage (where you render <time>{new
Date(run.finishedAt).toLocaleString()}</time>) guard against missing/invalid
timestamps by checking run.finishedAt (and/or Date.parse(run.finishedAt) !==
NaN) before creating a Date; if absent or invalid render a fallback like "In
progress" or "-" instead of calling toLocaleString() so leased/pending runs
don't show "Invalid Date".
In `@apps/ops-harbor/src/lib/automation.ts`:
- Around line 271-284: The current catch treats any error (including transport
errors from client.completeJob) as a job failure; change the flow so job
execution status is recorded atomically and transport failures to report that
status do not flip the job to "failed". Specifically, after successful run
create the recordAutomationRun entry with status "completed" (use runId,
leased.workItem, leased.job.triggerType, startedAt/finishedAt/summary) and then
call client.completeJob in a separate try/catch that logs or retries transport
errors without altering the recorded job status; likewise, in the main catch
only mark the job "failed" if the execution actually failed (use the original
error path) and avoid converting completeJob/reporting errors into execution
failures. Ensure you reference and adjust the code around completeJob,
recordAutomationRun, leased, runId, startedAt, finishedAt, and summary.
- Around line 169-173: The git staging step uses runCommand("git", ["add",
"-A"], { cwd: repository.path }) but its result isn't checked before committing,
so ensure you verify staging succeeded and there are changes to commit: capture
the result of runCommand for the "git add" call (and/or runCommand("git",
["status", "--porcelain"], { cwd: repository.path }) to detect staged changes)
and only invoke runCommand("git", ["commit", "-m", commitMessage], { cwd:
repository.path }) when the add succeeded and status shows changes; if no
changes are staged, skip the commit and handle/log appropriately using the same
repository.path and workItem context.
In `@apps/ops-harbor/src/lib/config.ts`:
- Around line 106-115: loadConfig currently always calls detectGitHubLogin()
which shells out to gh even when the stored config already includes authorLogin;
change the flow so you first read and parse the config (use readFile,
JSON.parse, v.parse with ConfigSchema, normalizeConfig) or fall back to
defaultConfig(), check the resulting config.authorLogin, and only call
detectGitHubLogin() if authorLogin is missing, then pass the detected value into
withDetectedAuthorLogin before returning; ensure error handling still returns
defaultConfig() and avoids calling detectGitHubLogin() when unnecessary.
In `@apps/ops-harbor/src/lib/github-auth.ts`:
- Around line 9-33: The runGhCommand helper can hang if the spawned gh process
stalls; add a timeout to the Promise in runGhCommand so the child is killed and
the Promise rejects (or resolves with a non-zero exit) after a configurable
timeout (e.g., 10s). Implement this by creating a timer with setTimeout when the
child is spawned, calling child.kill() on timeout, clearing the timer on 'close'
or 'error', and ensuring the Promise is settled exactly once (guard with a
settled flag) so detectGitHubLogin() won't remain pending; reference
runGhCommand, spawn, child.kill, and the child 'close'/'error' handlers when
making the change.
In `@apps/ops-harbor/src/lib/local-db.ts`:
- Around line 41-47: The parseJson function should not mask JSON parse failures
by returning a default domain object; change parseJson<T> to return T | null (or
undefined) and return null on catch, then update callers (e.g., listAlerts and
other places that currently do JSON.parse(...) via parseJson) to filter out
null/malformed rows before mapping to domain objects so malformed cache rows are
excluded rather than cast to {} as WorkItem/ActivityEvent; update types and any
downstream code to handle the nullable return (filter(Boolean) or equivalent)
and ensure APIs only receive fully parsed objects with required fields like
alerts and updatedAt.
In `@apps/ops-harbor/src/server.ts`:
- Around line 102-104: The parsed limit may be NaN or negative before being
passed into listActivity(), so validate and normalize it first: parse
c.req.query("limit") into a number, check Number.isInteger(value) && value > 0,
and only then set options.limit to that value; otherwise leave options.limit
undefined (or set to the default 100) so listActivity() uses a safe default.
Update the code around the options object (the limit parsing logic) where
options is constructed and where listActivity() is called to use this validated
value.
- Around line 33-43: Remove the global CORS middleware applied to API routes
(the app.use("/api/*", cors()) line) so the local API endpoints like GET
/api/config and PUT /api/config (handlers calling loadConfig and saveConfig) are
not exposed cross-origin; if cross-origin access is required for development,
replace that global middleware with a restricted CORS policy that only allows a
specific trusted origin (or add CORS to a specific dev-only route) rather than
enabling cors() for all origins.
In `@packages/ops-harbor-core/src/sqlite.ts`:
- Around line 4-5: The code creates two require functions (requireFromModule and
requireFromWorkingDirectory) and currently uses the working-directory anchored
require to load better-sqlite3, which makes resolution depend on process.cwd();
instead, always load better-sqlite3 via the module-anchored require
(requireFromModule) so the package resolves relative to this package, not the
caller's CWD—update any usage of requireFromWorkingDirectory to use
requireFromModule (and remove the unnecessary
createRequire(resolve(process.cwd(), "package.json")) if unused) when requiring
"better-sqlite3".
---
Duplicate comments:
In `@apps/ops-harbor/src/lib/automation.ts`:
- Around line 137-145: ensureExpectedBranch currently only checks branch name
and misses verifying the exact checkout SHA; update it to also accept the
expected SHA (e.g., add parameter expectedSha or pull leased.workItem.headSha)
and run git rev-parse HEAD (via runCommand) to compare result.stdout to
expectedSha, throwing a clear Error if they differ; keep existing branch-name
check but add this SHA verification (use the same error handling pattern: check
exitCode and include stderr in thrown errors) so the job will fail if HEAD
doesn't equal the expected lease SHA.
---
Nitpick comments:
In `@apps/ops-harbor-control-plane/src/lib/sync.ts`:
- Around line 91-93: The code calls upsertWorkItem and then
refreshWorkItemAlerts (which itself calls deriveAlerts and upsertWorkItem),
causing a duplicate write; update saveHydratedWorkItem to remove the redundant
call by either (A) deleting the explicit upsertWorkItem(db, enriched) and rely
on refreshWorkItemAlerts(db, enriched, recentEvents) to derive + upsert,
ensuring enqueueJobsForAlerts still receives the correct snapshot.alerts, or (B)
keep the explicit upsertWorkItem and remove refreshWorkItemAlerts; choose one
approach and remove the other so only a single derive/upsert occurs (referencing
upsertWorkItem, refreshWorkItemAlerts, deriveAlerts, saveHydratedWorkItem,
enqueueJobsForAlerts).
In `@apps/ops-harbor-control-plane/src/server.ts`:
- Around line 82-93: The middleware for the admin/internal routes
(app.use("/api/admin/*", ...) and app.use("/api/internal/*", ...)) currently
lets requests through when config.internalApiToken is falsy, so make this opt-in
behavior explicit by adding a startup warning or log when
config.internalApiToken is missing; locate the config.internalApiToken check in
server.ts and, during app/server initialization, emit a clear warning via the
existing logger (or console) stating that internal/admin APIs are unprotected
because internalApiToken is unset, so operators are aware of the insecure
default.
In `@apps/ops-harbor/src/client/pages/dashboard/ui/DashboardPage.tsx`:
- Around line 241-245: The code calls
trimToUndefined(currentConfig.internalApiToken) twice when building
controlPlaneHeaders; store its result in a local variable (e.g., const token =
trimToUndefined(currentConfig.internalApiToken)) and then use that variable for
the conditional and for the header value so controlPlaneHeaders is set based on
token and uses token for "x-ops-harbor-token".
In `@apps/ops-harbor/src/client/pages/dashboard/ui/SettingsPanel.tsx`:
- Around line 160-175: The private key textarea
(controlPlaneSettings.githubPrivateKey in SettingsPanel.tsx) must not display
the full PEM by default; change the UI to show a masked placeholder when a value
exists (e.g. "••• (key configured)") and add a reveal toggle/button that, when
activated, replaces the placeholder with the editable textarea bound to
onChangeControlPlane so edits still update githubPrivateKey; ensure the real key
is kept in component state and only rendered when the user explicitly toggles
reveal, and keep a clear/remove action to unset the key if needed.
In `@apps/ops-harbor/src/lib/dev-runtime-config.ts`:
- Around line 33-35: The clearDevRuntimeConfig function currently swallows all
errors from unlink(DEV_RUNTIME_CONFIG_PATH) via .catch(() => undefined); change
this so only "file not found" (ENOENT) is ignored and any other errors are
surfaced or logged: await unlink(DEV_RUNTIME_CONFIG_PATH).catch(err => { if
(err.code !== 'ENOENT') { /* log or rethrow */ } }); This keeps the intended
silence for missing files while preserving debuggability for permission or I/O
errors; update references in clearDevRuntimeConfig and use the existing logger
or throw the error as appropriate.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: f01630e2-0c4b-48e4-a0ab-f33d0dfdb7af
📒 Files selected for processing (29)
- .gitignore
- apps/ops-harbor-control-plane/README.md
- apps/ops-harbor-control-plane/src/cli.ts
- apps/ops-harbor-control-plane/src/lib/config.test.ts
- apps/ops-harbor-control-plane/src/lib/config.ts
- apps/ops-harbor-control-plane/src/lib/db.ts
- apps/ops-harbor-control-plane/src/lib/sync.ts
- apps/ops-harbor-control-plane/src/server.test.ts
- apps/ops-harbor-control-plane/src/server.ts
- apps/ops-harbor/package.json
- apps/ops-harbor/src/client/app/App.tsx
- apps/ops-harbor/src/client/app/styles.css
- apps/ops-harbor/src/client/main.tsx
- apps/ops-harbor/src/client/pages/dashboard/index.ts
- apps/ops-harbor/src/client/pages/dashboard/model/settings.ts
- apps/ops-harbor/src/client/pages/dashboard/ui/DashboardPage.tsx
- apps/ops-harbor/src/client/pages/dashboard/ui/SettingsPanel.tsx
- apps/ops-harbor/src/dev.ts
- apps/ops-harbor/src/lib/automation.ts
- apps/ops-harbor/src/lib/config.ts
- apps/ops-harbor/src/lib/dev-runtime-config.ts
- apps/ops-harbor/src/lib/github-auth.ts
- apps/ops-harbor/src/lib/local-db.ts
- apps/ops-harbor/src/lib/sync.ts
- apps/ops-harbor/src/server.ts
- apps/ops-harbor/vite.config.ts
- docs/ops-harbor-local-github-app.md
- packages/ops-harbor-core/src/index.ts
- packages/ops-harbor-core/src/sqlite.ts
✅ Files skipped from review due to trivial changes (9)
- .gitignore
- apps/ops-harbor/src/client/main.tsx
- apps/ops-harbor/src/client/pages/dashboard/index.ts
- apps/ops-harbor/vite.config.ts
- apps/ops-harbor-control-plane/src/cli.ts
- apps/ops-harbor/package.json
- docs/ops-harbor-local-github-app.md
- apps/ops-harbor/src/lib/sync.ts
- apps/ops-harbor-control-plane/README.md
🚧 Files skipped from review as they are similar to previous changes (4)
- packages/ops-harbor-core/src/index.ts
- apps/ops-harbor-control-plane/src/server.test.ts
- apps/ops-harbor-control-plane/src/lib/config.ts
- apps/ops-harbor-control-plane/src/lib/db.ts
.settings-status,
.settings-block {
  display: flex;
  flex-direction: column;
  gap: 14px;
  padding: 18px;
  border: 1px solid var(--border-subtle);
  border-radius: var(--radius-lg);
  background: color-mix(in srgb, var(--bg-panel) 88%, var(--bg-deep));
}
Use a defined surface token for these settings cards.
var(--bg-panel) is never declared in this stylesheet, so the whole background value is invalid and the block ends up without its intended surface color.
Possible fix
.settings-status,
.settings-block {
display: flex;
flex-direction: column;
gap: 14px;
padding: 18px;
border: 1px solid var(--border-subtle);
border-radius: var(--radius-lg);
- background: color-mix(in srgb, var(--bg-panel) 88%, var(--bg-deep));
+ background: color-mix(in srgb, var(--bg-surface) 88%, var(--bg-deep));
}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/ops-harbor/src/client/app/styles.css` around lines 667 - 676, The
.settings-status and .settings-block rules use an undefined CSS variable
var(--bg-panel) causing the color-mix background to be invalid; replace
var(--bg-panel) with the project’s defined surface token (e.g., var(--surface)
or var(--bg-surface) depending on your design tokens) so the background:
color-mix(in srgb, <defined-surface-token> 88%, var(--bg-deep)) yields a valid
surface color; update the selector block containing .settings-status and
.settings-block and verify the chosen token exists in the global token file.
.settings-facts dd {
  margin: 0;
  color: var(--text-primary);
  font-size: 0.9rem;
  word-break: break-word;
Replace the deprecated word-break value.
This is already tripping stylelint. overflow-wrap is the supported way to force long values to wrap here.
Possible fix
.settings-facts dd {
margin: 0;
color: var(--text-primary);
font-size: 0.9rem;
- word-break: break-word;
+ overflow-wrap: anywhere;
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
.settings-facts dd {
  margin: 0;
  color: var(--text-primary);
  font-size: 0.9rem;
  overflow-wrap: anywhere;
}
🧰 Tools
🪛 Stylelint (17.6.0)
[error] 711-711: Unexpected deprecated keyword "break-word" for property "word-break" (declaration-property-value-keyword-no-deprecated)
(declaration-property-value-keyword-no-deprecated)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/ops-harbor/src/client/app/styles.css` around lines 707 - 711, The CSS
rule for .settings-facts dd uses the deprecated declaration "word-break:
break-word"; remove that declaration and replace it with the supported
overflow-wrap property (e.g., add "overflow-wrap: anywhere;" or "overflow-wrap:
break-word;" depending on desired break behavior) so long values will wrap
without stylelint errors; update the .settings-facts dd block accordingly and
delete the old word-break line.
<span>{run.summary}</span>
<time>{new Date(run.finishedAt).toLocaleString()}</time>
</li>
finishedAt may be empty for in-progress runs, resulting in "Invalid Date".
The AutomationRun type declares finishedAt: string, but runs with status leased or pending likely have no finish timestamp. new Date("") or new Date(undefined) produces "Invalid Date" in toLocaleString().
Suggested guard
- <time>{new Date(run.finishedAt).toLocaleString()}</time>
+ <time>
+ {run.finishedAt ? new Date(run.finishedAt).toLocaleString() : "In progress"}
+ </time>📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
<span>{run.summary}</span>
<time>
  {run.finishedAt ? new Date(run.finishedAt).toLocaleString() : "In progress"}
</time>
</li>
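The guard can also be factored into a small helper so every timestamp cell shares it (a sketch; the helper name and fallback label are assumptions, not existing code):

```typescript
// Format a possibly-missing ISO timestamp; fall back instead of rendering
// the string "Invalid Date" for runs that have not finished yet.
function formatFinishedAt(finishedAt: string | undefined, fallback = "In progress"): string {
  if (!finishedAt || Number.isNaN(Date.parse(finishedAt))) {
    return fallback;
  }
  return new Date(finishedAt).toLocaleString();
}
```

The JSX would then read `<time>{formatFinishedAt(run.finishedAt)}</time>`.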
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/ops-harbor/src/client/pages/dashboard/ui/DashboardPage.tsx` around lines
462 - 464, The displayed finishedAt can be empty for in-progress runs causing
"Invalid Date"; inside DashboardPage (where you render <time>{new
Date(run.finishedAt).toLocaleString()}</time>) guard against missing/invalid
timestamps by checking run.finishedAt (and/or Date.parse(run.finishedAt) !==
NaN) before creating a Date; if absent or invalid render a fallback like "In
progress" or "-" instead of calling toLocaleString() so leased/pending runs
don't show "Invalid Date".
await runCommand("git", ["add", "-A"], { cwd: repository.path });
const commitMessage = `ops-harbor: automated update for ${workItem.repository}#${workItem.number}`;
const commit = await runCommand("git", ["commit", "-m", commitMessage], {
  cwd: repository.path,
});
Check git add before you commit.
If staging fails, this still falls through to git commit, which can commit previously staged changes or surface a misleading error.
Possible fix
- await runCommand("git", ["add", "-A"], { cwd: repository.path });
+ const add = await runCommand("git", ["add", "-A"], { cwd: repository.path });
+ if (add.exitCode !== 0) {
+ throw new Error(add.stderr || "Failed to stage automation changes.");
+ }
const commitMessage = `ops-harbor: automated update for ${workItem.repository}#${workItem.number}`;🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/ops-harbor/src/lib/automation.ts` around lines 169 - 173, The git
staging step uses runCommand("git", ["add", "-A"], { cwd: repository.path }) but
its result isn't checked before committing, so ensure you verify staging
succeeded and there are changes to commit: capture the result of runCommand for
the "git add" call (and/or runCommand("git", ["status", "--porcelain"], { cwd:
repository.path }) to detect staged changes) and only invoke runCommand("git",
["commit", "-m", commitMessage], { cwd: repository.path }) when the add
succeeded and status shows changes; if no changes are staged, skip the commit
and handle/log appropriately using the same repository.path and workItem
context.
function runGhCommand(args: string[]): Promise<{ exitCode: number; stdout: string; stderr: string }> {
  return new Promise((resolve, reject) => {
    const child = spawn("gh", args, {
      stdio: ["ignore", "pipe", "pipe"],
    });

    let stdout = "";
    let stderr = "";

    child.stdout.on("data", (chunk) => {
      stdout += String(chunk);
    });
    child.stderr.on("data", (chunk) => {
      stderr += String(chunk);
    });
    child.on("error", reject);
    child.on("close", (exitCode) => {
      resolve({
        exitCode: exitCode ?? 1,
        stdout: stdout.trim(),
        stderr: stderr.trim(),
      });
    });
  });
}
Add a timeout around the gh subprocess.
A stalled gh api user leaves detectGitHubLogin() pending forever, which then blocks loadConfig() and both background loops.
Possible fix
function runGhCommand(args: string[]): Promise<{ exitCode: number; stdout: string; stderr: string }> {
return new Promise((resolve, reject) => {
const child = spawn("gh", args, {
stdio: ["ignore", "pipe", "pipe"],
});
+ const timeout = setTimeout(() => {
+ child.kill("SIGTERM");
+ setTimeout(() => child.kill("SIGKILL"), 5_000).unref();
+ reject(new Error("gh command timed out"));
+ }, 10_000);
let stdout = "";
let stderr = "";
child.stdout.on("data", (chunk) => {
stdout += String(chunk);
});
child.stderr.on("data", (chunk) => {
stderr += String(chunk);
});
- child.on("error", reject);
+ child.on("error", (error) => {
+ clearTimeout(timeout);
+ reject(error);
+ });
child.on("close", (exitCode) => {
+ clearTimeout(timeout);
resolve({
exitCode: exitCode ?? 1,
stdout: stdout.trim(),
stderr: stderr.trim(),
});

📝 Committable suggestion
‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

function runGhCommand(args: string[]): Promise<{ exitCode: number; stdout: string; stderr: string }> {
  return new Promise((resolve, reject) => {
    const child = spawn("gh", args, {
      stdio: ["ignore", "pipe", "pipe"],
    });
    const timeout = setTimeout(() => {
      child.kill("SIGTERM");
      setTimeout(() => child.kill("SIGKILL"), 5_000).unref();
      reject(new Error("gh command timed out"));
    }, 10_000);
    let stdout = "";
    let stderr = "";
    child.stdout.on("data", (chunk) => {
      stdout += String(chunk);
    });
    child.stderr.on("data", (chunk) => {
      stderr += String(chunk);
    });
    child.on("error", (error) => {
      clearTimeout(timeout);
      reject(error);
    });
    child.on("close", (exitCode) => {
      clearTimeout(timeout);
      resolve({
        exitCode: exitCode ?? 1,
        stdout: stdout.trim(),
        stderr: stderr.trim(),
      });
    });
  });
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/ops-harbor/src/lib/github-auth.ts` around lines 9 - 33, The runGhCommand
helper can hang if the spawned gh process stalls; add a timeout to the Promise
in runGhCommand so the child is killed and the Promise rejects (or resolves with
a non-zero exit) after a configurable timeout (e.g., 10s). Implement this by
creating a timer with setTimeout when the child is spawned, calling child.kill()
on timeout, clearing the timer on 'close' or 'error', and ensuring the Promise
is settled exactly once (guard with a settled flag) so detectGitHubLogin() won't
remain pending; reference runGhCommand, spawn, child.kill, and the child
'close'/'error' handlers when making the change.
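A generic version of the timeout pattern suggested above can be sketched as a standalone helper. The `withTimeout` name is ours, not from the PR, and unlike the diff a generic wrapper cannot kill the child process, so the `child.kill` handling still belongs at the call site:

```typescript
// Race a promise against a timer; reject with a labeled error if the timer wins.
// Note: this does NOT cancel the underlying work (e.g. kill a subprocess);
// callers that spawn children still need their own kill handling.
function withTimeout<T>(promise: Promise<T>, ms: number, label: string): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(() => {
      reject(new Error(`${label} timed out after ${ms}ms`));
    }, ms);
    promise.then(
      (value) => {
        clearTimeout(timer);
        resolve(value);
      },
      (error) => {
        clearTimeout(timer);
        reject(error);
      },
    );
  });
}
```

With something like this, `detectGitHubLogin()` could wrap its call as `withTimeout(runGhCommand(["api", "user"]), 10_000, "gh api user")` and settle exactly once even if `gh` stalls.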
apps/ops-harbor/src/lib/local-db.ts
Outdated
function parseJson<T>(raw: string, fallback: T): T {
  try {
    return JSON.parse(raw) as T;
  } catch {
    return fallback;
  }
}
Filter out malformed cache rows instead of returning {} as a domain object.
The parse-failure path currently turns bad JSON into {} as WorkItem / ActivityEvent. That leaks invalid records through the API and can still blow up in callers like listAlerts() that expect fields such as alerts and updatedAt to exist.
Possible fix pattern
-function parseJson<T>(raw: string, fallback: T): T {
+function parseJson<T>(raw: string): T | null {
try {
return JSON.parse(raw) as T;
} catch {
- return fallback;
+ return null;
}
}
@@
export function listWorkItems(db: Db, filter: WorkItemFilter = {}): WorkItem[] {
const rows = db.prepare<[], ItemRow>(`SELECT * FROM work_items ORDER BY updated_at DESC`).all();
- const items = rows.map((row) => parseJson<WorkItem>(row.payload_json, {} as WorkItem));
+ const items = rows
+ .map((row) => parseJson<WorkItem>(row.payload_json))
+ .filter((item): item is WorkItem => item !== null);
return filterWorkItems(items, filter);
}Also applies to: 121-129, 147-152
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/ops-harbor/src/lib/local-db.ts` around lines 41 - 47, The parseJson
function should not mask JSON parse failures by returning a default domain
object; change parseJson<T> to return T | null (or undefined) and return null on
catch, then update callers (e.g., listAlerts and other places that currently do
JSON.parse(...) via parseJson) to filter out null/malformed rows before mapping
to domain objects so malformed cache rows are excluded rather than cast to {} as
WorkItem/ActivityEvent; update types and any downstream code to handle the
nullable return (filter(Boolean) or equivalent) and ensure APIs only receive
fully parsed objects with required fields like alerts and updatedAt.
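A minimal, self-contained sketch of the null-and-filter pattern described above, with the row shape simplified from the PR's `WorkItem` for illustration:

```typescript
// Parse JSON, returning null instead of a fake fallback object on failure.
function parseJsonOrNull<T>(raw: string): T | null {
  try {
    return JSON.parse(raw) as T;
  } catch {
    return null;
  }
}

// Simplified stand-in for the PR's WorkItem payload.
interface Item {
  id: string;
  updatedAt: string;
}

// Malformed rows are dropped via a type-guard filter rather than leaking `{}`.
function parseRows(payloads: string[]): Item[] {
  return payloads
    .map((raw) => parseJsonOrNull<Item>(raw))
    .filter((item): item is Item => item !== null);
}
```

The type-guard predicate (`item is Item`) is what lets TypeScript narrow the array from `(Item | null)[]` to `Item[]` without a cast.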
app.use("/api/*", cors());

app.get("/api/health", (_c) => {
  return Response.json({ ok: true });
});

app.get("/api/config", async (c) => c.json(await loadConfig()));
app.put("/api/config", async (c) => {
  const body = await c.req.json();
  return c.json(await saveConfig(body));
});
Don't expose the local API cross-origin.
The dashboard is same-origin, so cors() is unnecessary here. With it enabled, any website opened in the user's browser can read /api/config from 127.0.0.1 and exfiltrate internalApiToken or rewrite local settings.
Safer default
import { Hono } from "hono";
-import { cors } from "hono/cors";
@@
};
const app = new Hono();
- app.use("/api/*", cors());

If you still need cross-origin access for a dev client, add CORS only for that specific origin instead of enabling it globally.

📝 Committable suggestion
‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

app.get("/api/health", (_c) => {
  return Response.json({ ok: true });
});

app.get("/api/config", async (c) => c.json(await loadConfig()));
app.put("/api/config", async (c) => {
  const body = await c.req.json();
  return c.json(await saveConfig(body));
});
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/ops-harbor/src/server.ts` around lines 33 - 43, Remove the global CORS
middleware applied to API routes (the app.use("/api/*", cors()) line) so the
local API endpoints like GET /api/config and PUT /api/config (handlers calling
loadConfig and saveConfig) are not exposed cross-origin; if cross-origin access
is required for development, replace that global middleware with a restricted
CORS policy that only allows a specific trusted origin (or add CORS to a
specific dev-only route) rather than enabling cors() for all origins.
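One way to implement the restricted policy is an explicit origin allowlist; hono's `cors()` middleware accepts an `origin` option (a string, array, or callback), so a predicate of this shape could be plugged in. The allowlist contents below are hypothetical:

```typescript
// Only explicitly trusted origins receive CORS headers; everything else gets none.
const TRUSTED_ORIGINS = new Set(["http://localhost:5173"]); // hypothetical dev client

// Returns the origin to echo back in Access-Control-Allow-Origin, or null to deny.
function corsOriginFor(requestOrigin: string | undefined): string | null {
  return requestOrigin !== undefined && TRUSTED_ORIGINS.has(requestOrigin)
    ? requestOrigin
    : null;
}
```

Echoing back only the matched origin (rather than `*`) keeps the local API unreadable from arbitrary websites while still serving the one dev client.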
apps/ops-harbor/src/server.ts
Outdated
const options: { workItemId?: string; repository?: string; limit?: number } = {
  limit: Number.parseInt(c.req.query("limit") ?? "100", 10),
};
Validate limit before passing it to SQLite.
Number.parseInt() can yield NaN or a negative number, and listActivity() forwards that straight into LIMIT ?. Fall back to the default unless the parsed value is a positive integer.
Possible fix
- const options: { workItemId?: string; repository?: string; limit?: number } = {
- limit: Number.parseInt(c.req.query("limit") ?? "100", 10),
- };
+ const parsedLimit = Number.parseInt(c.req.query("limit") ?? "100", 10);
+ const options: { workItemId?: string; repository?: string; limit?: number } = {
+ limit: Number.isInteger(parsedLimit) && parsedLimit > 0 ? parsedLimit : 100,
+ };🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/ops-harbor/src/server.ts` around lines 102 - 104, The parsed limit may
be NaN or negative before being passed into listActivity(), so validate and
normalize it first: parse c.req.query("limit") into a number, check
Number.isInteger(value) && value > 0, and only then set options.limit to that
value; otherwise leave options.limit undefined (or set to the default 100) so
listActivity() uses a safe default. Update the code around the options object
(the limit parsing logic) where options is constructed and where listActivity()
is called to use this validated value.
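The validation the prompt asks for boils down to a small normalizer; the `normalizeLimit` name is ours, for illustration:

```typescript
// Coerce a raw query parameter into a positive integer LIMIT, defaulting to 100.
// NaN, zero, and negative values all fall back to the default.
function normalizeLimit(raw: string | undefined, fallback = 100): number {
  const parsed = Number.parseInt(raw ?? "", 10);
  return Number.isInteger(parsed) && parsed > 0 ? parsed : fallback;
}
```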
const requireFromModule = createRequire(import.meta.url);
const requireFromWorkingDirectory = createRequire(resolve(process.cwd(), "package.json"));
🧩 Analysis chain
🏁 Script executed:
# First, let's look at the file structure
git ls-files packages/ops-harbor-core/src/ | head -20

Repository: masseater/claude-code-plugin
Length of output: 395
🏁 Script executed:
# Now read the sqlite.ts file to see the actual code
cat -n packages/ops-harbor-core/src/sqlite.ts
Length of output: 2062
🏁 Script executed:
# Let's also check if there are any comments or documentation explaining why this was done
cd packages/ops-harbor-core && grep -r "process.cwd" --include="*.ts" --include="*.js"
Length of output: 176
🏁 Script executed:
cat -n packages/ops-harbor-core/package.json
Length of output: 644
🏁 Script executed:
# Let's also check the root package.json to understand the monorepo structure
head -50 package.json
Length of output: 1447
🏁 Script executed:
# Check if better-sqlite3 is listed anywhere in the codebase as a dependency
grep -r "better-sqlite3" packages/ops-harbor-core/ --include="*.json" --include="*.ts" --include="*.js"
Length of output: 178
🏁 Script executed:
# Check how this openSqliteDatabase function is used in the codebase
grep -r "openSqliteDatabase" . --include="*.ts" --include="*.js" -B 2 -A 2
Length of output: 1965
🏁 Script executed:
# Check if there's a README or documentation about this module
find packages/ops-harbor-core -name "README*" -o -name "*.md" | xargs cat 2>/dev/null
Length of output: 54
🏁 Script executed:
# Let's check if better-sqlite3 is perhaps a dependency in the apps that use this
grep -r "better-sqlite3" apps/ --include="package.json"
Length of output: 363
🏁 Script executed:
# Check the workspace root for better-sqlite3
grep "better-sqlite3" package.json
Length of output: 54
🏁 Script executed:
# Let's check if there are hoisted node_modules at the root or workspace level
ls -la | grep -E "(node_modules|.pnpm)"
Length of output: 54
🏁 Script executed:
# Check the package manager being used and how it installs dependencies
cat -n bun.lockb 2>/dev/null | head -5 || echo "Checking for yarn/pnpm..."
fd -name "pnpm-lock.yaml" -o -name "yarn.lock" 2>/dev/null | head -5
Length of output: 54
🏁 Script executed:
# Let's understand the actual Node.js createRequire behavior and verify the concern
# First, let's see how these apps are typically launched
find apps/ops-harbor* -name "*.ts" -o -name "*.js" | grep -E "(main|entry|bin)" | head -5
Length of output: 54
🏁 Script executed:
# Check the main entry points for these apps
cat apps/ops-harbor/package.json | grep -A 5 '"bin"'
cat apps/ops-harbor-control-plane/package.json | grep -A 5 '"bin"'
Length of output: 150
🏁 Script executed:
# Let's also check if there's any documentation about how these are deployed/run
find . -name "*.md" -o -name "Dockerfile" | xargs grep -l "ops-harbor" 2>/dev/null | head -3
Length of output: 137
🏁 Script executed:
# Let's verify Node.js module resolution semantics by checking Node documentation comments in the code
# Also check if there might be any comments explaining this pattern
grep -B 5 -A 5 "requireFromWorkingDirectory" packages/ops-harbor-core/src/sqlite.ts
Length of output: 805
🏁 Script executed:
# Let's check if there's any git history or comments that explain this pattern
git log --oneline -20 packages/ops-harbor-core/src/sqlite.ts 2>/dev/null || echo "Git history not available"
Length of output: 128
🏁 Script executed:
# Let's verify the actual behavior: check if the imports work correctly with the current setup
# by looking at how Node modules would be resolved. First, check the hoisted node_modules structure
fd -t d "node_modules" . --max-depth 3 2>/dev/null | head -10
Length of output: 54
🏁 Script executed:
# Let's understand the monorepo structure better and how bun handles workspace dependencies
cat apps/ops-harbor/package.json | head -30
Length of output: 658
🏁 Script executed:
# Check if better-sqlite3 is specified as a peer dependency anywhere
grep -r "peerDependencies" packages/ops-harbor-core/
Length of output: 54
🏁 Script executed:
# Let's verify what the actual module resolution behavior would be
# Create a test to understand how Node.js resolves modules with these two approaches
python3 << 'EOF'
import os
# Explain the two approaches
print("=" * 70)
print("Node.js createRequire() Module Resolution Analysis")
print("=" * 70)
print("\n1. createRequire(import.meta.url)")
print(" - Resolves modules relative to the file's directory")
print(" - In this case: packages/ops-harbor-core/src/")
print(" - Looks in: packages/ops-harbor-core/node_modules/")
print(" - Then walks up: node_modules/ at root")
print(" - PREDICTABLE - always resolves from package location")
print("\n2. createRequire(resolve(process.cwd(), 'package.json'))")
print(" - Resolves modules relative to the CURRENT WORKING DIRECTORY")
print(" - Example: If user runs from /path/to/project/apps/ops-harbor/")
print(" - Looks in: /path/to/project/apps/ops-harbor/node_modules/")
print(" - Example: If user runs from /path/to/project/")
print(" - Looks in: /path/to/project/node_modules/")
print(" - UNPREDICTABLE - depends on where CLI is launched from")
print("\n" + "=" * 70)
print("ISSUE: Dependency loading depends on CWD, not package location")
print("=" * 70)
print("\nScenario 1: User has local better-sqlite3 in their project root")
print(" $ cd /path/to/project && ops-harbor")
print(" → Loads /path/to/project/node_modules/better-sqlite3 ✓ (works)")
print("\nScenario 2: User runs from different directory")
print(" $ cd /tmp && ops-harbor")
print(" → Module resolution fails or loads unexpected version ✗")
print("\nScenario 3: User has better-sqlite3 in CWD but not in app dir")
print(" → App might fail to find the module ✗")
EOF
Length of output: 1570
Use requireFromModule to load better-sqlite3 instead of resolving from process.cwd().
The current code anchors createRequire() to the caller's working directory, making better-sqlite3 resolution depend on where the CLI is launched from. This is brittle: when a user runs the ops-harbor command from any directory, module resolution will look in that directory's package.json instead of the app's declared dependencies. This breaks predictability and can cause failures or load unintended versions of the module.
The module should always resolve relative to the package's location, not the runtime CWD.
Fix
import { createRequire } from "node:module";
-import { resolve } from "node:path";
const requireFromModule = createRequire(import.meta.url);
-const requireFromWorkingDirectory = createRequire(resolve(process.cwd(), "package.json"));
@@
? new (
requireFromModule("bun:sqlite") as {
Database: new (path?: string) => SqliteDatabase;
}
).Database(filename)
- : new (requireFromWorkingDirectory("better-sqlite3") as new (
+ : new (requireFromModule("better-sqlite3") as new (
path?: string,
) => SqliteDatabase)(filename);

📝 Committable suggestion
‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

const requireFromModule = createRequire(import.meta.url);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/ops-harbor-core/src/sqlite.ts` around lines 4 - 5, The code creates
two require functions (requireFromModule and requireFromWorkingDirectory) and
currently uses the working-directory anchored require to load better-sqlite3,
which makes resolution depend on process.cwd(); instead, always load
better-sqlite3 via the module-anchored require (requireFromModule) so the
package resolves relative to this package, not the caller's CWD—update any usage
of requireFromWorkingDirectory to use requireFromModule (and remove the
unnecessary createRequire(resolve(process.cwd(), "package.json")) if unused)
when requiring "better-sqlite3".
bun:sqlite requires a parameter prefix ($/@/:) in object keys, unlike better-sqlite3. Switch upsertWorkItem from @named to ? positional parameters so work items actually persist. Add change detection to syncAuthorWorkItems: compare updatedAt and headSha before creating "synchronized" activity events. Previously every sync cycle (60s) created duplicate events even when data was unchanged, causing ~35 identical events in 17 minutes per PR. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
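The change detection this commit describes can be sketched as a pure comparison; the field names follow the commit message, while the `hasChanged` helper and snapshot shape are illustrative, not the PR's actual code:

```typescript
// Minimal snapshot of the fields the commit message says are compared.
interface WorkItemSnapshot {
  updatedAt: string;
  headSha: string | null;
}

// Emit a "synchronized" activity event only when the item actually changed.
// An unknown previous state (first sync) always counts as changed.
function hasChanged(prev: WorkItemSnapshot | undefined, next: WorkItemSnapshot): boolean {
  if (prev === undefined) return true;
  return prev.updatedAt !== next.updatedAt || prev.headSha !== next.headSha;
}
```

Gating event creation on this predicate is what stops a 60-second sync loop from writing identical events every cycle.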
…descriptions Add descriptions to all package.json files that were missing them (doc-engine, ops-harbor-core, repository-lint, ts-config, vitest-config, ops-harbor-control-plane). These feed into the auto-generated AGENTS.md tables. Add a new ESLint rule `require-description` that errors when package.json lacks a non-empty description field, applied to packages/* and apps/*. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Replace `export *` with named re-exports in ops-harbor-core
- Extract shared utilities to core: parseJson, parseWorkItemFilterFromQuery, mapWorkItemsToAlertSummaries, buildActivityWhereClause, findAvailablePort
- Deduplicate filter construction (3 copies → 1), listAlerts (2 → 1), listActivity WHERE builder (2 → 1), port-finder (3 → 1)
- Fix double deriveAlerts + upsertWorkItem call in sync
- Fix N+1 queries in control-plane listWorkItems (batch load checks/reviews)
- Hoist db.prepare() outside loops in upsertWorkItem
- Push state/repository/updatedSince filters to SQL WHERE clauses
- Wrap dev.ts top-level await in main(), extract inflight guard pattern, add runCommand output buffer limit, simplify mergeRateLimitSnapshots
- Remove unused assignedRunner from WorkItemFilter, type Db aliases

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Merge master: husky migration (lefthook → husky), renovate config move
- Fix knip: remove unused better-sqlite3 deps, add unlisted @humanwhocodes/momoa, remove unused DEV_RUNTIME_CONFIG_PATH export, clean knip.json
- Fix coverage: add tests for db-helpers, filters, sqlite in ops-harbor-core

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Restore better-sqlite3 and @types/better-sqlite3 to ops-harbor apps (dynamically loaded via requireFromWorkingDirectory, invisible to knip)
- Add knip ignoreDependencies for better-sqlite3 in both apps
- Add tests for uncovered branches in control-plane config.ts

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Summary
Testing
Summary by CodeRabbit
Release Notes
New Features
Chores