MuscleMap is an MVP workout-analysis web app that maps exercise plans onto a browser-based 3D anatomy model.
- accepts free-text workout plans like `Bench Press - 4x8`
- sends raw workout text to the LLM for both parsing and analysis
- constrains inference to canonical body-part IDs derived from a real anatomy source
- validates LLM output strictly against those IDs and ordered training-load labels
- renders per-exercise and whole-workout activations on an interactive 3D body
- supports mock mode with no API key required
- supports multiple saved workouts in the UI and all-workouts aggregation
This MVP derives canonical body-part IDs from the open BodyExplorer asset set:
- `frontend/public/assets/anatomy.glb`
- `backend/app/data/mesh_mapping.json`
Source project: JohanBellander/BodyExplorer
Canonical IDs are generated from the source metadata:
- BodyParts3D-backed structures use `bp3d:<BP_ID>`
- Z-Anatomy-only structures use deterministic fallback IDs like `zanatomy:left_latissimus_dorsi`
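The fallback-ID scheme can be sketched as a deterministic slug of the structure name. This is a hypothetical helper for illustration, not the backend's actual generator:

```python
import re

def zanatomy_fallback_id(structure_name: str) -> str:
    """Derive a deterministic fallback ID for a Z-Anatomy-only structure
    by slugifying its display name (illustrative sketch)."""
    slug = re.sub(r"[^a-z0-9]+", "_", structure_name.lower()).strip("_")
    return f"zanatomy:{slug}"

def bp3d_id(bp_id: str) -> str:
    """BodyParts3D-backed structures keep their source BP ID."""
    return f"bp3d:{bp_id}"
```

Because the slug is derived only from the source name, regenerating the schema always yields the same IDs, which is what makes strict validation possible.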
Those IDs are used everywhere:
- `GET /api/body-schema`
- backend validation
- LLM prompt constraints
- frontend mesh activation mapping
Ordered labels used throughout the app:
`none` < `low` < `moderate` < `high`
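Since the labels are ordered, comparing and combining them reduces to index lookups. A minimal sketch with hypothetical helper names (the app's own implementation may differ):

```python
SEVERITY_ORDER = ["none", "low", "moderate", "high"]  # least to most

def severity_rank(label: str) -> int:
    # Raises ValueError for unknown labels, mirroring strict validation
    return SEVERITY_ORDER.index(label)

def peak_severity(labels: list[str]) -> str:
    # Whole-workout aggregation keeps the highest label seen
    return max(labels, key=severity_rank, default="none")
```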
The LLM now returns two parallel 0-3 dimensions for each exercise-to-muscle-group relationship:
- `load`: mechanical / strength / hypertrophy-style training stress
- `endurance`: repeated-effort / sustained-fatigue stress
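A sketch of what one exercise-to-muscle-group relationship might look like with these two 0-3 dimensions. The field names mirror the description above; the backend's actual Pydantic models may differ:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GroupActivation:
    """One exercise-to-muscle-group relationship (illustrative shape)."""
    muscle_group_id: str
    load: int       # 0-3: mechanical / strength / hypertrophy stress
    endurance: int  # 0-3: repeated-effort / sustained-fatigue stress

    def __post_init__(self):
        # Reject out-of-range values, mirroring strict validation
        for name in ("load", "endurance"):
            value = getattr(self, name)
            if not 0 <= value <= 3:
                raise ValueError(f"{name} must be in 0-3, got {value}")
```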
- frontend: React, TypeScript, Vite, React Three Fiber, drei, Three.js
- backend: FastAPI, Pydantic, SQLite cache
- `frontend/`: React app and 3D viewer
- `backend/`: FastAPI API, schema extraction, inference, cache, tests
- `GET /api/health`
- `GET /api/body-schema`
- `POST /api/parse-workout`
- `POST /api/infer-exercise`
- `POST /api/analyze-workout`
This repo is set up for a single-service Render deploy using Docker. The FastAPI app serves both the API and the built frontend from the same public URL.
Files involved:
- `render.yaml`
- `Dockerfile`
- `.dockerignore`
- Push the repo to GitHub
- In Render, create a new Blueprint and point it at the repo
- Render will detect `render.yaml` and create the `musclemap` web service
- In the Render dashboard, set `LLM_API_KEY` to your real key
- Deploy and open the generated `onrender.com` URL
The blueprint sets these defaults:

```
MUSCLEMAP_MOCK_MODE=false
LLM_PROVIDER=openai
LLM_BASE_URL=https://api.openai.com/v1
LLM_MODEL=gpt-5-mini
LLM_TIMEOUT_SECONDS=180
```

You only need to add:

```
LLM_API_KEY=your_api_key_here
```

If you prefer OpenRouter, change these in Render:

```
LLM_PROVIDER=openrouter
LLM_BASE_URL=https://openrouter.ai/api/v1
LLM_MODEL=openai/gpt-5-mini
```

Install root tooling for the single-command dev runner:
```
npm install
```

Then create the backend virtualenv and start the API:

```
python3 -m venv .venv
.venv/bin/pip install -r backend/requirements.txt
.venv/bin/uvicorn app.main:app --app-dir backend --reload
```

Backend runs at http://127.0.0.1:8000.
```
cd frontend
npm install
npm run dev
```

Frontend runs at http://127.0.0.1:5173 and proxies `/api` to the backend.
For deployed frontend builds hosted separately from the API, set:
```
VITE_API_BASE_URL=https://your-backend.example.com
```

If `VITE_API_BASE_URL` is unset, the frontend keeps using same-origin `/api` requests.
After installing backend deps, frontend deps, and root deps, run both servers together with:
```
npm run dev
```

The backend is already prepared for live inference through any OpenAI-compatible provider.
- Copy `.env.example` to `.env` in the repo root
- Fill in these values:
```
MUSCLEMAP_MOCK_MODE=false
LLM_PROVIDER=openai
LLM_API_KEY=your_api_key_here
LLM_BASE_URL=https://api.openai.com/v1
LLM_MODEL=gpt-4o-mini
LLM_TIMEOUT_SECONDS=180
```

The backend loads `.env` automatically.
This app currently expects an OpenAI-compatible chat/completions API. Good options include:
- `openai` with `https://api.openai.com/v1`
- `openrouter` with `https://openrouter.ai/api/v1`
- `groq` with `https://api.groq.com/openai/v1`
- `together` with `https://api.together.xyz/v1`
`LLM_PROVIDER` is mainly for configuration clarity and health reporting. The actual request target is `LLM_BASE_URL` + `/chat/completions`.
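The request-target rule above can be sketched in one line (hypothetical helper name):

```python
def chat_completions_url(base_url: str) -> str:
    # OpenAI-compatible providers all expose the same path suffix;
    # strip a trailing slash so concatenation stays clean.
    return base_url.rstrip("/") + "/chat/completions"
```

Swapping providers is then just a matter of changing `LLM_BASE_URL` (and the model name), which is why `LLM_PROVIDER` itself only affects reporting.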
OpenAI:

```
MUSCLEMAP_MOCK_MODE=false
LLM_PROVIDER=openai
LLM_API_KEY=sk-...
LLM_BASE_URL=https://api.openai.com/v1
LLM_MODEL=gpt-4o-mini
```

OpenRouter:

```
MUSCLEMAP_MOCK_MODE=false
LLM_PROVIDER=openrouter
LLM_API_KEY=sk-or-...
LLM_BASE_URL=https://openrouter.ai/api/v1
LLM_MODEL=openai/gpt-4o-mini
```

Groq:

```
MUSCLEMAP_MOCK_MODE=false
LLM_PROVIDER=groq
LLM_API_KEY=gsk_...
LLM_BASE_URL=https://api.groq.com/openai/v1
LLM_MODEL=llama-3.3-70b-versatile
```

After saving `.env`, run:

```
npm run dev
```

You can confirm the backend picked up your settings at `GET /api/health`.
Mock mode is enabled by default.
- no API key required
- randomized exercise inference on each analysis run, for demo purposes
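A minimal sketch of what mock-mode inference can look like, assuming a hypothetical `mock_activations` helper (the real mock logic may differ):

```python
import random

SEVERITY_ORDER = ["none", "low", "moderate", "high"]

def mock_activations(group_ids, seed=None):
    """Assign a random severity label to each muscle group.
    No API key or network call needed; a seed makes runs repeatable."""
    rng = random.Random(seed)
    return {gid: rng.choice(SEVERITY_ORDER) for gid in group_ids}
```

Because the output uses the same canonical IDs and ordered labels as live mode, the rest of the pipeline (validation, aggregation, 3D rendering) works unchanged.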
To force live mode:
```
MUSCLEMAP_MOCK_MODE=false LLM_API_KEY=your_key_here .venv/bin/uvicorn app.main:app --app-dir backend --reload
```

Optional env vars:
- `LLM_PROVIDER` default: `openai`
- `LLM_API_KEY`
- `LLM_BASE_URL` default: `https://api.openai.com/v1`
- `LLM_MODEL` default: `gpt-4o-mini`
- `LLM_TIMEOUT_SECONDS` default: `180`
- `MUSCLEMAP_MOCK_MODE` default: `true`
Backward-compatible aliases also work:
- `OPENAI_API_KEY`
- `OPENAI_BASE_URL`
- `OPENAI_MODEL`
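The alias behavior can be sketched as a simple fallback lookup (hypothetical helper, not the backend's actual code):

```python
import os

def env_with_alias(primary, alias, default=None):
    """Read the primary env var, falling back to its
    backward-compatible alias, then to a default."""
    return os.getenv(primary) or os.getenv(alias) or default
```

For example, `env_with_alias("LLM_API_KEY", "OPENAI_API_KEY")` would honor an existing `OPENAI_API_KEY` setup while preferring the newer name.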
Backend:
```
.venv/bin/pytest backend/tests
```

Frontend:

```
cd frontend
npm run test
npm run build
```

How inference works:

- the prompt includes only source-derived muscle-group IDs from `GET /api/body-schema`
- live mode uses one LLM call per workout and returns a structured hierarchy of recursive `section` nodes and `exercise` leaves
- those muscle-group activations are expanded back into real anatomy body-part IDs for rendering
- responses are parsed into strict Pydantic models
- unknown `muscle_group_id` values are rejected
- duplicate group activations for the same exercise are rejected
- whole-workout aggregation uses peak severity across exercises
- labels represent training load contribution, not just movement dominance
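The strict-validation rules above can be sketched in plain Python. This is a hypothetical helper; the backend enforces the same rules through its Pydantic models:

```python
def validate_activations(activations, allowed_ids):
    """Reject unknown muscle_group_id values and duplicate group
    activations for the same exercise (illustrative sketch)."""
    seen = set()
    for act in activations:
        gid = act["muscle_group_id"]
        if gid not in allowed_ids:
            raise ValueError(f"unknown muscle_group_id: {gid}")
        if gid in seen:
            raise ValueError(f"duplicate activation for group: {gid}")
        seen.add(gid)
```

Failing loudly here is what keeps the LLM constrained to the canonical ID set from `GET /api/body-schema`.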
The app accepts raw workout text and the LLM interprets structure, but the lightweight parser still exists for mock mode and debugging. It accepts more than one exercise per line when separators are present, for example:
```
Workout A: Bench Press - 4x8; Squat - 5x5 then Romanian Deadlift - 4x8
```
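A minimal sketch of such a separator-aware parser, assuming a hypothetical `parse_line` helper (the real parser likely handles more formats):

```python
import re

# Matches "<name> - <sets>x<reps>" entries like "Bench Press - 4x8"
ENTRY = re.compile(r"^\s*(?P<name>.+?)\s*-\s*(?P<sets>\d+)x(?P<reps>\d+)\s*$")

def parse_line(line: str):
    """Split one line on ';' and the word 'then', returning
    (name, sets, reps) tuples for each recognized entry."""
    line = line.split(":", 1)[-1]  # drop an optional "Workout A:"-style prefix
    out = []
    for part in re.split(r";|\bthen\b", line):
        m = ENTRY.match(part)
        if m:
            out.append((m["name"], int(m["sets"]), int(m["reps"])))
    return out
```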
The included anatomy source is derived from:
- BodyExplorer by Johan Bellander
- BodyParts3D, CC BY-SA 2.1 Japan
- Z-Anatomy, CC BY-SA 4.0
This MVP keeps the source-derived metadata and requires attribution for redistributed derivative assets. Review upstream licenses before shipping commercially.