feat(agent): MVE Experiment Designer #976
Conversation
feat(instructions): introduce MVE coaching conventions for Experiment Designer
chore(collections): include Experiment Designer in experimental collections
chore(collections): update experimental collection YAML to reference new agent and instructions

🔧 - Generated by Copilot
Codecov Report ✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main     #976      +/-   ##
==========================================
- Coverage   88.04%   86.94%   -1.10%
==========================================
  Files          45       31      -14
  Lines        7885     5408    -2477
==========================================
- Hits         6942     4702    -2240
+ Misses        943      706     -237
```

Flags with carried forward coverage won't be shown.
@mattdot ... can you look at the hi-fi and lo-fi prototype builders in design thinking and see if they cover your needs first?
@WilliamBerryiii not quite. It kind of proposes testing assumptions, but it doesn't really do it with the scientific rigor I'd expect from a true MVE. It feels more like it's proposing a vibe check of the assumptions rather than an experiment result that we have rock-solid confidence in.
One last set of questions (I should have asked earlier but had to think about it) ... where do you think this goes from a collections perspective after it's run in the experimental phase? More Coding focused? Data Science too? Should this agent's artifact (the experiment.md) be handed off to the PRD-builder and/or Task Researcher for the implementation phase? You've got more experience in this space; are the experiments you're running more of a "rough PRD" scale, or more of a "if we had enough tokens, we could probably get this through a task researcher run" 😂 ... This really comes down to: do you want the experiment to run PRD -> *-Backlog-Manager for entry into the backlog, or go right to coding (or both)?
The output of this is really a plan and hypothesis to go do an experiment on. Once you actually do the experiment, the results of the experiment would be used much like other research could be used, as inputs to PRD or ADR. For the collections, I could see this in the Data Science and Project Planning collections. |
@mattdot - should I update this to exit with a hand-off document for the ADO and GH backlog managers? Do you anticipate that the experiment generates work items, or do we go right to task researcher/planner/implementor/reviewer for workflow execution?
I kind of feel like backlog might be the way to go, since you could come out with several hypotheses to test and it would be good to track/work them independently.
- add optional Phase 6 generating backlog-brief.md from mve-plan.md
- add backlog-brief.md template to session artifacts and instructions
- add usage guide and end-to-end example for Phase 6 workflow
- enable experiment-to-backlog transition via bridge document

🔬 - Generated by Copilot
Changes Pushed: Backlog Bridge Phase

Hey @mattdot — I pushed a commit to your branch that adds Phase 6 (Backlog Bridge) to the Experiment Designer. Here's a summary of what changed and why. Let me know if you're ok with these changes and I'll get the merge going.

What's New

Phase 6: Backlog Bridge — an optional phase that converts completed MVE outputs into a backlog-brief.md bridge document.
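The bridge document's exact template isn't shown in this thread, so the following is only an illustrative sketch of what a backlog-brief.md might contain, inferred from the discovery-parsing changes (experiment requirements map to user stories, non-functional constraints map to tasks); the section names and fields are assumptions, not the actual template:

```markdown
# Backlog Brief: {experiment name}

## Source
<!-- assumed field: points back to the Phase 5 artifact -->
Generated from mve-plan.md (Experiment Designer, Phase 6).

## Experiment Requirements
<!-- parsed into User stories by Discovery Path B -->
- Requirement 1: ...
- Requirement 2: ...

## Non-Functional Constraints
<!-- parsed into Tasks by Discovery Path B -->
- Constraint 1: ...
```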
Files Changed (2 files, +148 / -24)
Prompt Builder Review

These changes went through a Prompt Builder evaluation pass (test + evaluate + fix cycle). Key findings addressed:
All linting checks pass.
- fix ADO backlog manager intent classification to route structured briefs to Discovery instead of PRD Planning
- add disambiguation heuristics separating PRDs from backlog-brief.md inputs
- add backlog brief keyword signal to GitHub backlog manager Discovery row
- add Backlog Brief document type to GitHub discovery parsing guidelines

🔗 - Generated by Copilot
Discovery Path B Alignment
| File | Change |
|---|---|
| `.github/agents/ado/ado-backlog-manager.agent.md` | Added "backlog brief" keyword and "structured requirement briefs" indicator to Discovery row; refined disambiguation heuristics to separate PRDs (→ PRD Planning) from structured briefs (→ Discovery Path B) |
| `.github/agents/github/github-backlog-manager.agent.md` | Added "backlog brief" to Discovery keyword signals and contextual indicators |
| `.github/instructions/github/github-backlog-discovery.instructions.md` | Added Backlog Brief rows to Document Parsing Guidelines table (experiment requirements → User story, non-functional constraints → Task) |
Design Note
ADO's ado-wit-discovery.instructions.md was intentionally not modified — it uses generic extraction that handles backlog briefs adequately. The GitHub version has a structured Document Parsing Guidelines table that needed explicit Backlog Brief entries.
This completes the end-to-end path: Experiment Designer → backlog-brief.md → Backlog Manager → Discovery Path B → work items.
Pull Request
Description
Adds a new conversational coaching agent that guides users through designing a Minimum Viable Experiment (MVE). The agent follows a structured, phase-based process — from problem discovery and hypothesis formation through viability vetting to a complete experiment plan. It helps users translate unknowns and assumptions into crisp, testable hypotheses, evaluates experiment feasibility, and produces actionable MVE plans with session tracking via .copilot-tracking. Includes the agent definition (experiment-designer.agent.md) and companion instructions (experiment-designer.instructions.md) covering MVE domain knowledge, vetting criteria, and experiment type reference.
Related Issue(s)
Closes #973
Type of Change
Select all that apply:
Code & Documentation:
Infrastructure & Configuration:
AI Artifacts:
- I ran the prompt-builder agent and addressed all feedback
- Instructions (.github/instructions/*.instructions.md)
- Prompts (.github/prompts/*.prompt.md)
- Agents (.github/agents/*.agent.md)
- Skills (.github/skills/*/SKILL.md)

Other:
- Scripts (.ps1, .sh, .py)

Sample Prompts (for AI Artifact Contributions)
User Request:
Execution Flow:
Phase 1 — Problem & Context Discovery: Agent asks probing questions about the problem statement, customer context, business case, unknowns, and constraints. Creates a tracking directory at .copilot-tracking/mve/{date}/{experiment-name}/ and writes context.md.
Phase 2 — Hypothesis Formation: Agent guides user to translate unknowns into testable hypotheses using the format "We believe [assumption]. We will test this by [method]. We will know we are right/wrong when [measurable outcome]." Prioritizes hypotheses by risk and impact. Writes hypotheses.md.
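To make the three-part format concrete, here is a hypothetical hypotheses.md entry; the scenario, priority label, and numbers are invented for illustration and are not from the agent's actual output:

```markdown
## H1 — Cached product metadata (Priority: High)
We believe reading product metadata from a cache will reduce checkout p95 latency.
We will test this by serving 5% of checkout traffic through a cached path for one week.
We will know we are right/wrong when p95 latency improves by ≥30% (right) or by <10% (wrong).
```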
Phase 3 — MVE Vetting & Red Flag Check: Agent applies four vetting criteria (business sense, crisp problem statement, Responsible AI, clear next steps) and checks against nine red flag patterns (demos, skipping ahead, solved problems, mini-MVP, etc.). Writes vetting.md. If fundamental problems found, returns to Phase 1 or 2.
Phase 4 — Experiment Design: Agent helps choose experiment type, define technical approach, set measurable success/failure criteria per hypothesis, scope timeline to weeks, and plan post-experiment evaluation. Writes experiment-design.md.
Phase 5 — MVE Plan Output: Agent consolidates all phase outputs into a single mve-plan.md document for stakeholder review. Iterates based on user feedback, returning to earlier phases if needed.
Output Artifacts:
context.md — Problem statement, customer context, business justification
hypotheses.md — Prioritized testable hypotheses with assumption/method/outcome
vetting.md — Vetting criteria results and red flag assessment
experiment-design.md — Approach, scope, timeline, resources, success criteria
mve-plan.md — Consolidated plan document for stakeholder review
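Putting the artifacts together, a completed session directory would look like the following sketch, where the date and experiment name are placeholders:

```text
.copilot-tracking/mve/2025-01-15/checkout-latency/
├── context.md
├── hypotheses.md
├── vetting.md
├── experiment-design.md
└── mve-plan.md
```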
Business Case
{Why this experiment matters, what decision it informs}
Success Indicators:
The .copilot-tracking/mve/{date}/{experiment-name}/ directory contains all five markdown artifacts (context.md, hypotheses.md, vetting.md, experiment-design.md, mve-plan.md)
Each hypothesis follows the three-part format: assumption, test method, measurable outcome
Hypotheses are prioritized by risk and impact with clear rationale
Vetting results explicitly address all four criteria and flag any red flags encountered
Success and failure criteria are defined per hypothesis with quantitative thresholds
The experiment is scoped to weeks (not months) with explicit out-of-scope boundaries
mve-plan.md includes next steps for both validated and invalidated outcomes
The agent challenged vague problem statements or untestable hypotheses rather than accepting them uncritically
For detailed contribution requirements, see:
Testing
I've used it for a few MVE opportunities to help refine our hypotheses and plan our MVEs.
Checklist
Required Checks
AI Artifact Contributions
- /prompt-analyze to review contribution
- prompt-builder review
The following validation commands must pass before merging:
- npm run lint:md
- npm run spell-check
- npm run lint:frontmatter
- npm run validate:skills
- npm run lint:md-links
- npm run lint:ps
- npm run plugin:generate

(can't run dev container, hoping ci/cd pipeline checks these :) )
Security Considerations
Additional Notes