An AI-powered compliance monitoring dashboard for crypto trading that combines behavioral anomaly detection with regulatory intelligence across three jurisdictions — Malta, UAE, and Cayman Islands.
Built for the Deriv Hackathon 2026.
The application consists of two main parts:
- Backend — Python FastAPI server with 8 AI agents (2 agentic workflows)
- Frontend — Next.js 16 dashboard with two screens (Live Monitor + Regulatory Hub)
Workflow 1: Transaction Analysis (Behavioral Monitoring)
User injects transaction data
→ Profile Agent (local)
→ Preprocessor Agent (local) + Baseline Calculator Agent (LLM) [parallel]
→ Anomaly Detector Agent (LLM)
→ Returns: risk score, flags, reasoning, regulations violated
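The shape of this workflow, with the parallel preprocessing/baseline step, can be sketched with `asyncio`. This is a minimal illustration, not the actual backend code; every function here is a hypothetical stand-in for the real agent:

```python
import asyncio

# Hypothetical stand-ins for the real agents; each returns plain dicts/lists.
async def profile_agent(user_id):
    return {"user_id": user_id, "country": "MT"}

async def preprocessor_agent(profile, txns):
    return [{**t, "enriched": True} for t in txns]

async def baseline_agent(profile):
    return {"avg_amount": 250.0}  # real agent computes this via LLM or fallback

async def anomaly_agent(enriched, baseline):
    # Toy rule: flag anything more than 10x the baseline average.
    flags = [t for t in enriched if t["amount"] > 10 * baseline["avg_amount"]]
    return {"risk_score": min(100, 30 * len(flags)), "flags": flags}

async def analyze(user_id, txns):
    profile = await profile_agent(user_id)
    # Preprocessing and baseline calculation run in parallel, as in step 2.
    enriched, baseline = await asyncio.gather(
        preprocessor_agent(profile, txns),
        baseline_agent(profile),
    )
    return await anomaly_agent(enriched, baseline)

result = asyncio.run(analyze("u1", [{"amount": 55000}]))
print(result["risk_score"])  # 30
```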
Workflow 2: Compliance Update (Regulatory Intelligence)
User pushes a new regulation
→ Summarizer Agent (LLM)
→ Comparison Agent (LLM)
→ Analyzer Agent (LLM)
→ Rulebook Editor Agent (LLM) — autonomously modifies the rulebook
→ Returns: summary, comparison, impact analysis, updated rulebook
- Python 3.11+
- Node.js 18+ and npm
- OpenAI API key (or compatible LLM provider)
cd Deriv_Hackathon
cd backend
# Install Python dependencies
pip install -r requirements.txt
# Configure your LLM API key
# Edit the .env file and replace the placeholder key with your real one

Open backend/.env and set your API key:
LLM_API_KEY=sk-your-real-openai-api-key
LLM_MODEL=gpt-4o
LLM_BASE_URL=https://api.openai.com/v1

Note: The application works without a real API key — every LLM agent has a deterministic fallback. However, for the full AI-powered experience (LLM reasoning, dynamic baselines, intelligent rulebook editing), a valid key is required.
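The check that drives the fallback behaviour could look like this. A sketch only — the real backend's detection logic may differ, and `llm_configured` is an invented name:

```python
import os

def llm_configured() -> bool:
    """True only when a real-looking key is set, not the placeholder."""
    key = os.getenv("LLM_API_KEY", "")
    return bool(key) and "your-real" not in key

# Each LLM agent would branch on this: call the model when True,
# take its deterministic fallback path when False.
```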
Start the backend server:
python3 -m uvicorn main:app --host 0.0.0.0 --port 8000 --reload

The API will be available at http://localhost:8000.
Open a new terminal:
cd frontend
# Install Node dependencies
npm install
# Start the development server
npm run dev

The dashboard will be available at http://localhost:3000.
- View Users — The left panel shows all 10 users sorted by risk score (highest first). Click any user to view their intelligence profile.
- Inject Transaction Data — Click the red "Data Injection Flow" button at the bottom of the screen. This opens a drawer where you can:
  - Select a target user
  - Set the number of transactions to generate
  - Optionally override values to inject anomalies:
    - Amount — e.g., 55000 for a large transaction
    - Country — e.g., KP (North Korea) for a geo anomaly
    - Currency — e.g., USDT
  - Click "Inject Transaction Batch"
- View Results — After injection, the selected user's card updates with the new risk score, and the right panel shows:
  - Risk gauge (animated)
  - Statistical Brain (baseline vs current comparison)
  - Physics Brain (distance, speed, impossible travel detection)
  - AI Guardian (full agent chain log + LLM reasoning)
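The "Physics Brain" checks can be illustrated with a haversine sketch. The threshold is an assumption for illustration; the real logic lives in `utils/geo.py` and may differ:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0
MAX_PLAUSIBLE_SPEED_KMH = 1000.0  # assumed cutoff, roughly airliner speed

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometres."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def impossible_travel(lat1, lon1, lat2, lon2, hours_between):
    """Flag two transactions whose implied travel speed is implausible."""
    dist = haversine_km(lat1, lon1, lat2, lon2)
    return dist / max(hours_between, 1e-9) > MAX_PLAUSIBLE_SPEED_KMH

# Malta -> Dubai is roughly 4,000 km: one hour apart is flagged, six is not.
print(impossible_travel(35.9, 14.5, 25.2, 55.3, 1))  # True
print(impossible_travel(35.9, 14.5, 25.2, 55.3, 6))  # False
```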
- Select Jurisdiction — Click the Malta, UAE, or Cayman Islands tab.
- View Current State — See the compliance summary, active rulebook, and risk scoring table.
- Push New Regulation — In the "Available New Regulations" section, click "Push" on any regulation card. This triggers the 4-agent compliance workflow:
  - Summarizer Agent summarizes the regulation
  - Comparison Agent compares old vs new
  - Analyzer Agent assesses impact on users
  - Rulebook Editor Agent autonomously modifies the monitoring rules
- View Output — After the push completes, see the full agent chain, summary, comparison points, impact analysis, and rulebook changes.
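The four-agent push is a straight sequential chain ending in a rulebook mutation. A toy sketch — every function here is a hypothetical stand-in for the real LLM agent:

```python
# Hypothetical stand-ins for the four compliance agents.
def summarize(reg):
    return f"Summary of {reg['title']}"

def compare(old_rules, reg):
    return [f"{reg['title']} tightens existing thresholds"]

def analyze_impact(comparison, users=10):
    return f"{users} users re-scored against {len(comparison)} change(s)"

def edit_rulebook(rulebook, reg):
    # The real Rulebook Editor rewrites rules via the LLM; here we just append.
    return rulebook + [{"rule": f"monitor: {reg['title']}"}]

def push_regulation(rulebook, reg):
    summary = summarize(reg)
    comparison = compare(rulebook, reg)
    impact = analyze_impact(comparison)
    return summary, comparison, impact, edit_rulebook(rulebook, reg)

summary, comparison, impact, new_book = push_regulation([], {"title": "Travel Rule v2"})
print(len(new_book))  # 1
```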
| Layer | Technology |
|---|---|
| Frontend | Next.js 16, TypeScript, Tailwind CSS v4, Framer Motion, Recharts, Lucide Icons, next-themes |
| Backend | Python 3.11+, FastAPI, Pydantic v2, Uvicorn |
| Data Generation | Faker |
| LLM Integration | OpenAI-compatible SDK (see the model strategy below) with deterministic fallbacks |
| Database | None — JSON files (mutable at runtime) |
| Method | Endpoint | Description |
|---|---|---|
| GET | /api/init | Returns all users with profiles, baselines, and risk state |
| GET | /api/compliance/{code} | Returns compliance state for MT, AE, or KY |
| GET | /api/rules/{code} | Returns the current rulebook for a jurisdiction |
| POST | /api/ingest-batch | Triggers the transaction analysis workflow |
| POST | /api/compliance/{code}/push | Triggers the compliance update workflow |
| GET | /api/health | Health check |
When using Supabase (see DEPLOYMENT.md), the following tables are created by backend/scripts/create_tables.sql. Each table’s role:
| Table | Purpose |
|---|---|
| profiles | One row per user: identity (name, country, age, occupation), KYC status, risk profile, and historical countries. Used for the monitor roster and as input to the transaction analysis workflow. |
| baselines | Per-user behavioral baselines: average transaction amount, daily volume, transactions per day, standard deviation, normal hours, etc. Updated by the Baseline Agent and used by the Anomaly Agent to compare new transactions. |
| risk_state | Current risk score, risk band (HIGH/MEDIUM/LOW/CLEAN), and risk profile per user. Updated after each transaction analysis run. Powers the monitor’s risk gauge and user ordering. |
| transactions | All ingested transactions per user: amount, currency, country, city, timestamps, and preprocessed fields (distance, speed, daily totals, etc.). Used for history in the monitor and as input to the analysis pipeline. |
| compliance_state | One row per jurisdiction (MT, AE, KY): current version, old/new regulations JSON, and metadata. Tracks which rulebook version is active per jurisdiction. |
| rulebooks | Versioned rulebooks per jurisdiction. Each row is a version (v1, v2, …) with a JSON rulebook and an is_active flag. The active rulebook is used for anomaly scoring and the regulatory hub. |
| new_regulations | Regulation updates available to push: title, summary, effective date, jurisdiction, and is_pushed flag. The regulatory hub lists these; pushing runs the compliance workflow and creates a new rulebook version. |
| agent_traces | High-level log of each agent run: type (transaction_analysis or compliance_push), user or jurisdiction, status, result JSON, timestamps. Used to show “latest analysis” on the monitor and trace detail in the UI. |
| agent_steps | Per-step log within a trace: agent name, order, status, message, duration, retries, and output. Linked to agent_traces by trace_id. Powers the agent chain view in both monitor and regulatory screens. |
| compliance_drafts | Human-in-the-loop drafts produced by the compliance push workflow: proposed rulebook, comparison, impact analysis, status (pending/approved/rejected). Used for the regulatory hub’s draft review and approve/reject flow. |
Deriv_Hackathon/
├── README.md
├── IMPLEMENTATION_GUIDE.md
│
├── backend/
│ ├── main.py # FastAPI app with all endpoints
│ ├── requirements.txt # Python dependencies
│ ├── .env # LLM API key configuration
│ ├── models/ # Pydantic schemas
│ │ ├── user.py # UserProfile, UserBaseline
│ │ ├── transaction.py # RawTransaction, PreprocessedTransaction
│ │ ├── compliance.py # Regulation, Rulebook, JurisdictionCompliance
│ │ ├── risk.py # AnomalyResult, RiskBand
│ │ └── agent_log.py # AgentLogEntry, FullAnalysisResponse
│ ├── agents/ # 8 AI agents
│ │ ├── profile_agent.py # Local — loads user profile
│ │ ├── preprocessor_agent.py # Local — enriches transactions
│ │ ├── baseline_agent.py # LLM — computes user baselines
│ │ ├── anomaly_agent.py # LLM — detects anomalies
│ │ ├── summarizer_agent.py # LLM — summarizes regulations
│ │ ├── comparison_agent.py # LLM — compares old vs new
│ │ ├── analyzer_agent.py # LLM — analyzes impact
│ │ └── rulebook_editor_agent.py # LLM — modifies rulebook
│ ├── scripts/
│ │ └── faker_generator.py # Synthetic transaction generator
│ ├── utils/
│ │ ├── llm.py # LLM client wrapper
│ │ └── geo.py # Distance/speed calculations
│ └── data/
│ ├── users.json # 10 user profiles
│ ├── baselines.json # User baselines (updated at runtime)
│ └── compliance/
│ ├── malta.json # Malta compliance + rulebook
│ ├── uae.json # UAE compliance + rulebook
│ ├── cayman.json # Cayman compliance + rulebook
│ └── new_regulations/ # Regulations available to push
│
└── frontend/
├── package.json
├── app/
│ ├── layout.tsx # Root layout with sidebar
│ ├── monitor/page.tsx # Screen 1: Live Monitor
│ └── regulatory/page.tsx # Screen 2: Regulatory Hub
├── components/
│ ├── layout/ # Sidebar, ThemeToggle
│ ├── monitor/ # UserRoster, IntelligenceDetail, etc.
│ ├── regulatory/ # JurisdictionTabs, ActiveRulebook, etc.
│ └── shared/ # RiskBadge, RiskGauge, AgentChainLog
├── lib/
│ ├── api.ts # API client
│ ├── types.ts # TypeScript interfaces
│ └── utils.ts # Helper functions
└── hooks/ # useUsers, useCompliance, useInjectBatch
| Jurisdiction | Code | Users |
|---|---|---|
| Malta | MT | Marco Vella, Sofia Borg, Luca Camilleri |
| UAE | AE | Rashid Al-Maktoum, Aisha Khalifa, Omar Farooq, Fatima Noor |
| Cayman Islands | KY | Alex Johnson, Brianna Clarke, Derek Walters |
Each jurisdiction starts at v1 with foundational regulations and an original rulebook. Three new regulations are available to push per jurisdiction (up to v4).
The backend uses Gemini 2.0 Flash for fast, simple tasks and Gemini 2.5 Pro for complex reasoning. Each agent is wired to the model that fits its job:
| Agent | Model | Why |
|---|---|---|
| Baseline Calculator | gemini-2.0-flash | Simple averaging and structured JSON |
| Summarizer | gemini-2.0-flash | Short text summary of a regulation |
| Comparison | gemini-2.0-flash | Straightforward old vs new comparison |
| Anomaly Validator | gemini-2.0-flash | Quick consistency check on existing output |
| Anomaly Detector | gemini-2.5-pro | Deep reasoning, rule violations, regulation citing |
| Analyzer | gemini-2.5-pro | Nuanced impact analysis with numbers |
| Rulebook Editor | gemini-2.5-pro | Complex structured output and rulebook integrity |
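This routing amounts to a small agent-to-model map. The dictionary keys below are invented identifiers for illustration; the real wiring lives inside each agent module:

```python
FAST = "gemini-2.0-flash"  # cheap, structured tasks
DEEP = "gemini-2.5-pro"    # complex multi-step reasoning

MODEL_FOR_AGENT = {
    "baseline_calculator": FAST,
    "summarizer": FAST,
    "comparison": FAST,
    "anomaly_validator": FAST,
    "anomaly_detector": DEEP,
    "analyzer": DEEP,
    "rulebook_editor": DEEP,
}

def model_for(agent_name: str) -> str:
    # Default to the deep model so an unknown agent never loses reasoning quality.
    return MODEL_FOR_AGENT.get(agent_name, DEEP)
```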
If the LLM API is unavailable or the API key is not configured:
| Agent | Fallback |
|---|---|
| Baseline Calculator | Simple mathematical averages computed locally |
| Anomaly Detector | Deterministic point-based scoring from the rulebook |
| Summarizer | Returns the regulation's existing summary text |
| Comparison Agent | Returns generic comparison points |
| Analyzer Agent | Returns a template-based impact analysis |
| Rulebook Editor | Adds a generic monitoring rule without full analysis |
The application never breaks — every LLM agent has a working fallback path.
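The Anomaly Detector's deterministic fallback can be pictured as a point-based pass over the rulebook. The rules, thresholds, and band cutoffs below are invented for illustration; the real fallback reads the active jurisdiction rulebook:

```python
# Invented rules: (name, predicate over a preprocessed transaction, points).
RULES = [
    ("large_amount", lambda t: t["amount"] > 10_000, 40),
    ("sanctioned_country", lambda t: t["country"] in {"KP", "IR"}, 50),
    ("odd_hour", lambda t: t["hour"] < 5, 10),
]

def fallback_score(txn):
    """Deterministic point-based scoring used when no LLM is available."""
    flags = [name for name, pred, _ in RULES if pred(txn)]
    score = min(100, sum(pts for _, pred, pts in RULES if pred(txn)))
    band = ("HIGH" if score >= 70 else
            "MEDIUM" if score >= 40 else
            "LOW" if score > 0 else "CLEAN")
    return {"score": score, "band": band, "flags": flags}

print(fallback_score({"amount": 55000, "country": "KP", "hour": 3}))
# {'score': 100, 'band': 'HIGH', 'flags': ['large_amount', 'sanctioned_country', 'odd_hour']}
```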