Work in progress. This repo is evolving as I learn, and I share it in case others find it useful and would like to build upon it. Expect rough edges.
An open-source Claude Code scaffold for empirical economics research. Provides structured workflows from literature review to journal submission. Can be adapted to other fields (finance, accounting, marketing, management) by customizing the domain profile and journal profiles.
Live guide: hugosantanna.github.io/clo-author
Built on: Pedro Sant'Anna's claude-code-my-workflow
```shell
# 1. Fork and clone
gh repo fork hugosantanna/clo-author --clone
cd clo-author

# 2. Open Claude Code
claude
```

Then paste this prompt:
I am starting a new empirical research project in [YOUR FIELD] on [YOUR TOPIC]. Read CLAUDE.md and help me set up the project structure. Start with a literature review on [YOUR TOPIC].
Claude reads the configuration, fills in your project details, and plans the approach — you approve the plan, it implements and runs review agents, and you review the results.
Using VS Code? Open the Claude Code panel instead. Everything works the same.
You describe a task. Claude plans the approach (you approve), implements it, runs specialized review agents, fixes issues, re-verifies, and scores against quality gates. You review the output at each stage.
Every creator has a paired critic. Critics can't edit files; creators can't score themselves.
| Phase | Worker (Creates) | Critic (Reviews) |
|---|---|---|
| Discovery | Librarian | librarian-critic |
| Discovery | Explorer | explorer-critic |
| Strategy | Strategist | strategist-critic |
| Execution | Coder | coder-critic |
| Execution | Data-engineer | coder-critic |
| Paper | Writer | writer-critic |
| Peer Review | Editor → domain-referee + methods-referee | — |
| Presentation | Storyteller | storyteller-critic |
| Infrastructure | Orchestrator, Verifier | — |
`/review --peer [journal]` simulates a full journal submission:
- Editor desk review — reads your paper, verifies novelty claims via web search, decides: desk reject or send to referees
- Referee assignment — editor selects two referees with intellectual dispositions (Structuralist, Credibility, Measurement, Policy, Theory, Skeptic) weighted by journal culture
- Independent blind reports — each referee scores the paper on five dimensions and applies two pet peeves (one critical, one constructive); every major comment includes "what would change my mind"
- Editorial decision — editor classifies each concern as FATAL / ADDRESSABLE / TASTE, sides with one referee when they disagree, produces MUST / SHOULD / MAY action items
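The editorial-decision step can be sketched as a tiny data model. This is illustrative only: `Concern`, `action_items`, and `decision` are hypothetical names, and the actual agents are prompt-driven rather than coded.

```python
# Illustrative sketch only: the real editor agent is prompt-driven, not this code.
from dataclasses import dataclass

# Severity classes map to action-item priorities.
SEVERITY_TO_ACTION = {"FATAL": "MUST", "ADDRESSABLE": "SHOULD", "TASTE": "MAY"}

@dataclass
class Concern:
    referee: str
    text: str
    severity: str  # "FATAL" / "ADDRESSABLE" / "TASTE"

def action_items(concerns):
    """Turn each referee concern into a MUST/SHOULD/MAY action item."""
    return [(SEVERITY_TO_ACTION[c.severity], c.text) for c in concerns]

def decision(concerns):
    """Any FATAL concern sinks the paper; otherwise revise-and-resubmit."""
    return "reject" if any(c.severity == "FATAL" for c in concerns) else "R&R"
```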
Additional modes:
- `--stress [journal]` — adversarial referees for pre-submission battle testing
- `--peer --r2 [journal]` — R&R second round with referee memory (checks whether prior concerns were addressed)
- Max 3 rounds, then the editor's patience runs out — just like real life
30 journal profiles across economics and adjacent fields (all top-tier, A* in the Australian Business Deans Council ranking), each with configured referee pools based on published style guides and common review culture.
| Category | Commands |
|---|---|
| Research | `/new-project`, `/discover`, `/strategize`, `/analyze`, `/write` |
| Review | `/review`, `/revise` |
| Output | `/talk`, `/submit` |
| Tools | `/tools` (commit, compile, validate-bib, journal, learn, deploy, context) |
Weighted aggregate scoring with per-component minimums:
| Score | Gate | Applies To |
|---|---|---|
| 80 | Commit | Weighted aggregate (blocking) |
| 90 | PR | Weighted aggregate (blocking) |
| 95 | Submission | Aggregate + all components >= 80 |
| -- | Advisory | Talks (reported, non-blocking) |
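The gate logic in the table can be sketched as follows. This is a minimal sketch: `passes_gate`, the weights, and the component names are hypothetical examples, not the scaffold's actual rubric.

```python
# Minimal sketch of weighted-aggregate gating with per-component minimums.
# Weights and component names below are hypothetical examples.
def passes_gate(components, weights, gate, component_floor=None):
    """True if the weighted aggregate clears the gate and, when a floor is
    given, every individual component clears the floor too."""
    aggregate = sum(weights[k] * components[k] for k in components) / sum(weights.values())
    if component_floor is not None and min(components.values()) < component_floor:
        return False
    return aggregate >= gate

scores  = {"methods": 92, "writing": 88, "reproducibility": 95}
weights = {"methods": 0.5, "writing": 0.3, "reproducibility": 0.2}
passes_gate(scores, weights, gate=90)                      # PR gate -> True (aggregate 91.4)
passes_gate(scores, weights, gate=95, component_floor=80)  # Submission gate -> False
```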
```
your-project/
├── CLAUDE.md                    # Project configuration (fill in placeholders)
├── .claude/                     # Agents, skills, rules, references, hooks
├── Bibliography_base.bib        # Centralized bibliography
├── paper/                       # Main LaTeX manuscript (source of truth)
│   ├── main.tex
│   ├── sections/
│   ├── figures/
│   ├── tables/
│   ├── talks/                   # Beamer presentations
│   ├── quarto/                  # Quarto RevealJS presentations
│   ├── preambles/               # Shared LaTeX headers
│   ├── supplementary/           # Online appendix
│   └── replication/             # Replication package for deposit
├── data/                        # Raw and cleaned datasets
├── scripts/                     # Analysis code (R, Python, Julia)
├── quality_reports/             # Plans, session logs, reviews, scores
├── explorations/                # Research sandbox
└── master_supporting_docs/      # Reference papers and data docs
```
| Tool | Required For | Install |
|---|---|---|
| Claude Code | Everything | `npm install -g @anthropic-ai/claude-code` |
| XeLaTeX | Paper compilation | TeX Live or MacTeX |
| R | Analysis & figures | r-project.org |
| gh CLI | GitHub integration | `brew install gh` (macOS) |
Optional: Python, Julia (for multi-language analysis), Quarto (web slides).
- Fill in `CLAUDE.md` — replace `[BRACKETED PLACEHOLDERS]` with your project details
- Fill in the domain profile (`.claude/references/domain-profile.md`) — your journals, data sources, identification strategies, conventions, and seminal references. Use `/discover interview` to populate it interactively.
- Add journal profiles — 30 profiles are included (economics and adjacent fields). Add your own to `.claude/references/journal-profiles.md` using the template at the bottom of the file.
- Configure your language — R is the default; Python and Julia are also supported. Set your preference in `CLAUDE.md`.
Adapting to other fields: The pipeline assumes economics by default (causal inference methods, working paper format, AEA-style conventions). To adapt for finance, accounting, marketing, or management, customize the domain profile and journal profiles. The agents, rules, and section templates will follow the domain profile's field specification.
This project builds on Pedro Sant'Anna's claude-code-my-workflow, originally developed for Econ 730 at Emory University. Clo-Author reorients that infrastructure from lecture production to empirical economics research.
Maintained by Hugo Sant'Anna at UAB.
Your files are safe. The upgrade only touches `.claude/` (infrastructure). Your paper, scripts, data, and bibliography are never modified.
- Download the latest release or clone clo-author into a temp folder
- Delete your old `.claude/` directory
- Copy the new `.claude/` into your project
- Done — your CLAUDE.md, paper, scripts, and data are untouched
No git merge, no upstream remote, no conflicts. Once on 4.0, future upgrades can use `/tools upgrade`.
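The manual upgrade amounts to one delete-and-copy. A minimal sketch as a shell helper — the function name and all paths are hypothetical placeholders for your own setup:

```shell
# Hypothetical helper: swap in a freshly cloned .claude/ without touching
# anything else in the project. All paths are placeholders.
upgrade_clo_author() {
  local project="$1"   # your research project root
  local latest="$2"    # a fresh clone of clo-author (temp folder)
  rm -rf "$project/.claude"             # delete the old infrastructure
  cp -r "$latest/.claude" "$project/"   # copy in the new one
}

# Usage (adjust paths to your machine):
# upgrade_clo_author ~/research/my-paper /tmp/clo-author-latest
```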
The architecture keeps per-session token usage low by demand-loading reference files (journal profiles, domain profiles, coding standards) only when agents need them. Rules are path-scoped where possible.
- Scaffold, not autopilot. Every output — drafts, analysis, reviews — needs human review. Claude plans and executes; you decide what ships.
- Simulated peer review catches structural issues (missing robustness, identification gaps, notation errors) but does not replicate actual referee expertise or field-specific judgment.
- Journal profiles are based on published style guides and common review culture, not empirical calibration against actual editorial decisions.
- Quality scores are heuristic deduction rubrics. They flag problems reliably but do not measure publishability.
- The writer produces drafts. It does not replace your writing process — it gives you structured first drafts to revise.
MIT License. Fork it, customize it, make it yours.