feat: major platform upgrade with 16 new features #1
Open

fdciabdul wants to merge 9 commits into dalpan:main from
Conversation
- Multi-user registration with role-based access (admin/instructor/trainee)
- Gamification system: XP, levels, 12 badges, daily streaks
- Leaderboard with global and team rankings
- Analytics dashboard with Recharts (radar, line, bar, pie charts)
- Organization/team management with invite codes
- Campaign mode for multi-stage attack simulations
- Post-simulation debrief with Cialdini analysis and AI deep analysis
- Custom scenario builder UI with visual node editor
- Email simulation renderer (Gmail-style)
- Certificate generation (Platinum/Gold/Silver/Bronze)
- Adaptive difficulty AI based on user performance
- Notification system with real-time bell component
- Webhook/Slack integration for event callbacks
- Dark/light theme toggle with CSS variable system
- Voice simulation (vishing) via Web Speech API
- PWA support with manifest.json

Also includes:

- Backend refactored from monolithic server.py to a modular architecture
- Rate limiting on auth endpoints
- Docker healthchecks for all services
- GitHub Actions CI/CD pipeline
- Ruff linter configuration
- .env.example for contributor onboarding
- Bug fixes: missing sanitize_llm_output, unreachable dead code

Co-Authored-By: cp <cp@imtaqin.id>
- Add OpenRouter provider with 16+ models (free models available)
- Add OpenAI provider (GPT-4o, GPT-4o Mini, GPT-3.5 Turbo)
- Add local LLM support (Ollama, LM Studio, llama.cpp)
- Add /api/llm/providers endpoint with provider info and signup URLs
- Add /api/llm/models/{provider} with static + live model catalogs
- Add /api/llm/models/{provider}/refresh for dynamic model fetching
- OpenRouter: live model list fetch from API (200+ models)
- Local LLM: auto-detect models from Ollama/OpenAI-compatible endpoints
- LLMConfig model updated with base_url field for custom endpoints
- Settings page fully rewritten with:
  - Visual provider selector grid
  - Searchable model list with context length info
  - Free model indicators
  - Local server connection status indicator
  - Quick presets for Ollama/LM Studio/llama.cpp
  - Custom model ID input
  - Provider guide sidebar
Co-Authored-By: cp <cp@imtaqin.id>
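The model auto-detection described above can be sketched as a small parser over the two response shapes involved. The payload shapes below follow Ollama's `GET /api/tags` and the OpenAI-compatible `GET /v1/models` formats; the function name is illustrative, not the PR's actual code, and fetching the payload over HTTP is left to the caller:

```python
# Illustrative sketch of local-model auto-detection (not the PR's real code).
# Ollama's GET /api/tags returns {"models": [{"name": "..."}]}, while
# OpenAI-compatible servers (LM Studio, llama.cpp) expose GET /v1/models
# returning {"data": [{"id": "..."}]}.
def parse_model_list(payload: dict) -> list[str]:
    """Extract model identifiers from either response shape."""
    if "models" in payload:   # Ollama /api/tags shape
        return [m["name"] for m in payload["models"]]
    if "data" in payload:     # OpenAI-compatible /v1/models shape
        return [m["id"] for m in payload["data"]]
    return []                 # unknown shape: report no models
```

For example, `parse_model_list({"models": [{"name": "llama3:8b"}]})` yields `["llama3:8b"]`.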
- Fix all E501 line-too-long errors across 20+ files
- Run ruff format on the entire backend codebase
- Add B008, B904, S110 to the ruff ignore list (FastAPI patterns)
- Fix F841 unused variable in the debrief route
- ruff check and ruff format --check now pass clean

Co-Authored-By: cp <cp@imtaqin.id>
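A matching ruff configuration might look like the fragment below; this is a sketch in `pyproject.toml`, and the repo's actual file (or `ruff.toml`) may differ:

```toml
[tool.ruff]
line-length = 88          # width enforced by E501

[tool.ruff.lint]
ignore = [
    "B008",  # function call in default argument (FastAPI's Depends() pattern)
    "B904",  # raise without `from` inside except (FastAPI error handlers)
    "S110",  # try/except/pass (intentional best-effort blocks)
]
```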
- Complete README rewrite: professional structure, feature tables, architecture diagram, API overview, LLM provider guide
- Remove obsolete `version` key from docker-compose.yml
- Remove `env_file: .env` (caused failure on fresh clones)
- Auto-create .env from .env.example in `make build` and `make up`
- All env vars use defaults via ${VAR:-default} syntax
Co-Authored-By: cp <cp@imtaqin.id>
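The `${VAR:-default}` pattern mentioned above is Compose's standard fallback interpolation syntax. A hypothetical fragment (the `MONGO_VERSION` variable is an invented example, not from the PR):

```yaml
# Hypothetical docker-compose.yml fragment: ${VAR:-default} means
# "use $VAR if set, otherwise fall back to the literal default",
# so a fresh clone works before anyone edits .env.
services:
  mongo:
    image: "mongo:${MONGO_VERSION:-7}"
    ports:
      - "${MONGO_PORT:-27017}:27017"
```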
- Frontend: 3000 → 9443
- Backend: 8001 → 9442
- MongoDB: 27017 → 47017
- All ports configurable via FRONTEND_PORT, BACKEND_PORT, MONGO_PORT in .env
- Updated .env.example, Makefile, README with the new defaults

Co-Authored-By: cp <cp@imtaqin.id>
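For reference, a .env matching the new defaults would read:

```env
# Hypothetical .env based on the defaults above
FRONTEND_PORT=9443
BACKEND_PORT=9442
MONGO_PORT=47017
```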
React CRA bakes REACT_APP_* env vars into the bundle at build time (yarn build), not at runtime. Added an ARG in Dockerfile.frontend and build.args in docker-compose so the .env value is injected during docker build.

Co-Authored-By: cp <cp@imtaqin.id>
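The build-time injection can be sketched like this; the variable name `REACT_APP_BACKEND_URL` and the file contents are assumptions based on the description, not the PR's actual Dockerfile:

```dockerfile
# Hypothetical excerpt of Dockerfile.frontend: ARG makes the value
# available during `docker build`, and ENV exposes it to `yarn build`,
# where CRA bakes it into the static bundle.
FROM node:20 AS build
ARG REACT_APP_BACKEND_URL
ENV REACT_APP_BACKEND_URL=${REACT_APP_BACKEND_URL}
WORKDIR /app
COPY . .
RUN yarn install && yarn build
```

On the Compose side, the corresponding `build.args` entry would pass the value from .env into the build, since runtime `environment:` entries arrive too late for CRA.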
Captured via Playwright E2E test on lab.imtaqin.id:

- Installer, Login, Dashboard, AI Challenge selection
- AI Chat simulation (The Urgent CEO - BEC scenario)
- Settings page with OpenRouter model list (200+ models)
- Leaderboard with XP/levels/streaks
- Scenarios page

README updated with a collapsible screenshot gallery.

Co-Authored-By: cp <cp@imtaqin.id>
New AI Personas (16 total now):

- CEO Video Call (Deepfake) — AI-generated video impersonation
- Security Researcher — watering hole / technical phishing
- Conference Contact — reconnaissance pretexting
- Disgruntled Contractor — insider threat via sympathy
- Disaster Relief Coordinator — charity fraud
- Persistent Hacker — MFA fatigue + social engineering
- Family Emergency (AI Voice) — voice cloning vishing
- Parking Enforcement — QR code phishing (quishing)

New Challenge Scenarios:

- Deepfake CEO Video Call (hard, 3 decision points, 7 outcomes)
- MFA Fatigue Attack (hard, multi-stage with SOC impersonation)
- The Security Researcher (hard, watering hole with fake PoC)
- AI Voice Clone Family Emergency (hard, emotional manipulation)
- QR Code Parking Scam (easy, physical social engineering)
- Conference Contact Recon (medium, slow-burn intelligence gathering)

New Quizzes:

- Advanced Social Engineering Threats (8 questions, deepfake/MFA/quishing)
- Cialdini Principles Mastery (7 questions, principle identification)

All content is bilingual (EN/ID) with complex branching paths.

Co-Authored-By: cp <cp@imtaqin.id>
1. Auto-import: the backend now auto-imports all YAML data from data/sample/ and data/professionals/ on first startup when the database is empty. No more manual `make seed` needed.
2. Mobile dashboard: the stats grid is now 2-col on mobile, with responsive text sizes and tighter spacing on small screens.
3. Chat auto-focus: the input field automatically refocuses after the AI responds, so users can immediately type their reply without clicking the input again.

Co-Authored-By: cp <cp@imtaqin.id>
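The auto-import check can be sketched as follows. The directory layout comes from the commit description, while the function names and the dict-backed "database" are stand-ins for the real Motor/PyYAML code:

```python
# Illustrative sketch of seed-on-empty-database startup logic.
# The dict stands in for the Mongo database; real code would parse
# each YAML file and insert documents instead of appending names.
from pathlib import Path

def collect_seed_files(data_dir: Path) -> list[Path]:
    """Gather YAML seed files from the two sample directories."""
    files: list[Path] = []
    for sub in ("sample", "professionals"):
        files.extend(sorted((data_dir / sub).glob("*.yaml")))
    return files

def maybe_seed(db: dict, data_dir: Path) -> int:
    """Import seed files only when the 'scenarios' collection is empty."""
    if db.get("scenarios"):  # non-empty DB: data already imported, skip
        return 0
    imported = 0
    for path in collect_seed_files(data_dir):
        db.setdefault("scenarios", []).append(path.name)  # stand-in for parse + insert
        imported += 1
    return imported
```

Running the check twice shows the idempotence: the second call finds a non-empty collection and imports nothing.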