---
title: RLM Interactive Console
emoji: 🚀
colorFrom: indigo
colorTo: purple
sdk: docker
pinned: false
---
The RLM Interactive Console is a full-stack application designed to demonstrate and interact with Reinforcement Learning Models (or similar agentic systems). It features a generic FastAPI backend for handling model inference and dataset management, coupled with a modern Next.js frontend for an interactive user experience.
- **Interactive Chat Interface**: A user-friendly chat UI to interact with models.
- **Dataset Integration**: Fetches and caches datasets from Hugging Face (e.g., `oolongbench/oolong-real`).
- **Response Caching**: Caches model responses to local JSON files to improve performance and avoid redundant computation.
- **Agentic Workflow**: Integrates with `smolagents` for agent-based reasoning.
- **Real-time Feedback**: Displays the agent's thinking process and final answers.
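The response-caching feature described above can be sketched with the standard library alone. This is a minimal illustration, not the project's actual implementation; the cache directory and function names are assumptions (the repo stores cached answers under `backend/answer/`):

```python
import hashlib
import json
from pathlib import Path

# Assumed cache location, mirroring the repo's backend/answer/ directory.
CACHE_DIR = Path("backend/answer")


def cache_key(prompt: str, model: str) -> str:
    """Derive a stable filename from the request parameters."""
    return hashlib.sha256(f"{model}:{prompt}".encode()).hexdigest()


def get_cached_response(prompt: str, model: str):
    """Return a previously stored response, or None on a cache miss."""
    path = CACHE_DIR / f"{cache_key(prompt, model)}.json"
    if path.exists():
        return json.loads(path.read_text())["response"]
    return None


def store_response(prompt: str, model: str, response: str) -> None:
    """Persist a model response to a local JSON file."""
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    path = CACHE_DIR / f"{cache_key(prompt, model)}.json"
    path.write_text(json.dumps({"prompt": prompt, "model": model, "response": response}))
```

Keying on a hash of the model and prompt means identical requests hit the same JSON file, which is what lets repeated runs skip redundant inference.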
- **Frontend**: Next.js 15, React 19, TailwindCSS, TypeScript.
- **Backend**: FastAPI, Python 3.12+, Uvicorn.
- **AI/ML**: `smolagents`, `openenv`, `datasets`, Hugging Face Hub.
- **Package Management**: `npm` (frontend), `uv` or `pip` (backend).
- Node.js (v18+ recommended)
- Python (v3.12+)
- Git
```bash
git clone <your-repo-url>
cd RLM-Demo
```

The backend is located in the `backend/` directory.
It is recommended to use `uv` for fast package management, but standard `pip` works as well.

**Using uv (Recommended):**

```bash
# Install uv if you haven't already
pip install uv

# Create a virtual environment and sync dependencies
uv venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
uv pip install -r backend/requirements.txt
```

**Using standard pip:**

```bash
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
pip install -r backend/requirements.txt
```

Create a `.env` file in the root (or `backend/`, depending on where you run the server) with the following variables:
```env
HF_TOKEN=your_hugging_face_token
SPACE_URL=optional_space_url
MODEL_NAME=meta-llama/Llama-3.1-70B-Instruct
DATASET_SUBSET=default
DATASET_SPLIT=test
EXAMPLE_INDEX=0
MAX_ITERATIONS=10
CUTOFF_INDEX=15
```

The frontend is located in the `frontend/` directory.
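The backend reads these variables from its environment at startup. How the project actually loads them is not shown here; one minimal stdlib-only sketch (the `Settings` class and defaults are assumptions for illustration) looks like this:

```python
import os
from dataclasses import dataclass


@dataclass
class Settings:
    """Hypothetical container for the backend's .env configuration."""
    hf_token: str
    model_name: str
    dataset_subset: str
    dataset_split: str
    max_iterations: int


def load_settings() -> Settings:
    # Fall back to the defaults listed in the .env example above.
    return Settings(
        hf_token=os.environ.get("HF_TOKEN", ""),
        model_name=os.environ.get("MODEL_NAME", "meta-llama/Llama-3.1-70B-Instruct"),
        dataset_subset=os.environ.get("DATASET_SUBSET", "default"),
        dataset_split=os.environ.get("DATASET_SPLIT", "test"),
        max_iterations=int(os.environ.get("MAX_ITERATIONS", "10")),
    )
```

Note that plain `os.environ` does not read `.env` files by itself; tools like `uvicorn` launched via `python-dotenv`, or Hugging Face Spaces secrets, are what actually populate the environment.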
```bash
cd frontend
npm install
```

Run the backend and frontend in separate terminals.
**Terminal 1: Backend**

```bash
# From the root directory
source .venv/bin/activate
uvicorn backend.main:app --reload --port 8000
```

**Terminal 2: Frontend**
```bash
cd frontend
npm run dev
```

Open http://localhost:3000 to view the application.

Note: In development, the Next.js app is configured to proxy API requests to http://localhost:8000; in production, it expects the backend to serve the frontend.
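Proxying like this is typically configured with Next.js rewrites. A sketch of what such a rule could look like; the `/api` route prefix is an assumption, so check `frontend/next.config.js` for the project's actual configuration:

```javascript
// next.config.js — hypothetical rewrite rule; the real config may differ.
/** @type {import('next').NextConfig} */
const nextConfig = {
  async rewrites() {
    return [
      {
        source: '/api/:path*',
        // Forward API calls to the FastAPI backend during development.
        destination: 'http://localhost:8000/api/:path*',
      },
    ];
  },
};

module.exports = nextConfig;
```

Rewrites keep the browser talking to a single origin (localhost:3000), which avoids CORS configuration on the backend during development.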
The project includes a Dockerfile for easy deployment, compatible with Hugging Face Spaces.
```bash
docker build -t rlm-demo .
docker run -p 7860:7860 rlm-demo
```

```
RLM-Demo/
├── backend/              # FastAPI backend
│   ├── main.py           # App entry point
│   ├── repl_process.py   # Agent logic
│   ├── data/             # Cached datasets
│   ├── answer/           # Cached answers
│   └── requirements.txt  # Python dependencies
├── frontend/             # Next.js frontend
│   ├── app/              # App router (pages & layouts)
│   ├── components/       # React components
│   └── package.json      # Frontend dependencies
├── Dockerfile            # Deployment configuration
└── README.md             # Project documentation
```