A modular, hardened NixOS configuration covering ML infrastructure, defense-in-depth security, custom package management, and automated CI/CD.
This repository contains the declarative configuration for a production NixOS workstation. It is structured as a Nix flake with 278 modules across 20 categories. Notable features:
- ML Infrastructure — GPU orchestration with llama.cpp, vLLM, and TabbyAPI backends.
- Defense-in-Depth Security — Kernel hardening, AIDE, ClamAV, AppArmor, and a full SOC stack (Wazuh, OpenSearch, Suricata).
- Custom Package System — Sandboxed package builders with Firejail/Bubblewrap isolation and audit trails.
- Developer Tooling — SecureLLM Bridge, MCP servers, AI-assisted CLI utilities.
- Observability — Prometheus, Grafana, Vector, and structured logging across the stack.
```mermaid
graph TB
    subgraph "Security Layer"
        AIDE[AIDE FIM]
        ClamAV[ClamAV]
        Wazuh[Wazuh EDR]
        Suricata[Suricata IDS]
    end
    subgraph "ML Infrastructure"
        LlamaCPP[llama.cpp]
        vLLM[vLLM]
        TabbyAPI[TabbyAPI]
        VRAM[VRAM Monitor]
        Registry[Model Registry]
    end
    subgraph "Dev Tools"
        SecureLLM[SecureLLM Bridge]
        MCP[MCP Servers]
        Phantom[Phantom AI]
        Swissknife[Swissknife Debug]
    end
    subgraph "Network Stack"
        Tailscale[Tailscale VPN]
        Firewall[nftables Zones]
        DNS[DNS Hardening]
        NordVPN[NordVPN]
    end
    LlamaCPP --> VRAM
    vLLM --> VRAM
    TabbyAPI --> VRAM
    VRAM --> Registry
    SecureLLM --> MCP
    Wazuh --> Suricata
    AIDE --> Wazuh
```
| Category | Modules | Notes |
|---|---|---|
| Security + SOC | 39 | AIDE, ClamAV, Wazuh, Suricata, hardening |
| ML | 36 | llama.cpp, vLLM, TabbyAPI, model registry |
| Packages | 36 | Sandboxed package builders |
| Shell | 35 | Aliases, rebuild system, utilities |
| Services | 16 | GPU orchestration, MCP servers |
| Network | 12 | Tailscale, VPN, DNS, firewall zones |
| Hardware | 12 | Thermal forensics, NVIDIA tuning |
Totals: 20 categories, 278 modules, ~48k lines of Nix.
GPU-accelerated LLM stack integrated as NixOS modules:
```nix
kernelcore.ml.offload.enable = true;
```

- Backends: llama.cpp (turbo + swap variants), vLLM, TabbyAPI.
- SQLite model registry with auto-discovery.
- Rust-based REST control API on port 9000.
- Real-time VRAM monitoring with automatic offloading under pressure.
- MCP protocol integration for IDE clients.
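A host enabling the stack might look roughly like this. Only `kernelcore.ml.offload.enable` is documented above; the nested `backend` and `api.port` options are illustrative assumptions about the module's shape, not its actual schema:

```nix
{
  kernelcore.ml = {
    offload.enable = true;   # documented toggle
    backend = "llama-cpp";   # hypothetical: llama-cpp | vllm | tabbyapi
    api.port = 9000;         # REST control API port mentioned above
  };
}
```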
Defense-in-depth with a complete SOC stack:
```nix
kernelcore.soc.enable = true;
kernelcore.security.hardening.enable = true;
```

- File integrity & AV: AIDE, ClamAV with scheduled scans.
- Endpoint & network: Wazuh EDR, Suricata IDS/IPS, AppArmor.
- Hardening: kernel sysctl/boot params, compiler hardening (PIE/RELRO/SSP), SSH hardening with key-only auth.
- SIEM/Logs: OpenSearch, Grafana, Vector, threat-intel feeds.
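To give a flavor of the kernel-hardening layer, here is the kind of sysctl set such a module typically applies via stock NixOS options (an illustrative subset; the exact settings chosen by `kernelcore.security.hardening.enable` in this repo are an assumption):

```nix
{
  boot.kernel.sysctl = {
    "kernel.kptr_restrict" = 2;      # hide kernel pointers from unprivileged users
    "kernel.yama.ptrace_scope" = 1;  # restrict ptrace to child processes
    "net.ipv4.tcp_syncookies" = 1;   # SYN-flood mitigation
  };
}
```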
Sandboxed package builders with audit logging:
- `.deb` packages under Firejail isolation.
- `.tar.gz` extraction with FHS environments.
- npm packages with sandbox profiles.
- Automatic hash verification and GitHub release tracking.
- Examples: AppFlowy, Gemini CLI, Proton Suite, Cursor.
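As a sketch of what a declarative sandboxed-package entry could look like, the `kernelcore.packages.sandboxed` schema below is hypothetical (invented for illustration, not this repo's actual interface):

```nix
{
  kernelcore.packages.sandboxed.example-app = {
    format = "deb";                    # built and run under Firejail isolation
    sandbox.profile = "firejail-default";
    verify.sha256 = lib.fakeSha256;    # hash verification is enforced
    github.releaseTracking = true;     # follow upstream releases
  };
}
```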
```nix
services.securellm-mcp.enable = true;
kernelcore.tools.enable = true;
kernelcore.swissknife.enable = true;
```

- SecureLLM Bridge — Multi-provider LLM orchestration (OpenAI, Anthropic, Bedrock, local) with rate limiting and fallback.
- Tools CLI — `nix-utils`, `secops`, `diagnostics`, `llm`, `mcp`.
- Swissknife — Thermal forensics, VRAM monitoring, emergency abort, build reproducibility analysis.
Dev shells:
```shell
nix develop .#python   # Python with ML libs
nix develop .#cuda     # CUDA toolchain
nix develop .#rust     # Rust toolchain
nix develop .#infra    # Infrastructure tools
```

- Tailscale mesh VPN (zero-config peer-to-peer).
- NordVPN with kill-switch and post-quantum encryption.
- nftables-based firewall zones.
- DNSCrypt + DNS-over-TLS with caching.
- NGINX reverse proxy for Tailscale-exposed services.
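In stock NixOS terms, the building blocks of this stack map onto options like the following (a minimal sketch; the firewall zone definitions and proxy rules themselves live in this repo's own modules):

```nix
{
  services.tailscale.enable = true;        # mesh VPN
  networking.nftables.enable = true;       # nftables backend for the firewall
  services.dnscrypt-proxy2.enable = true;  # encrypted DNS resolver
  services.nginx.enable = true;            # reverse proxy for exposed services
}
```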
- Hyprland (Wayland): custom v0.52.2 overlay, Waybar, Wofi, Wlogout.
- i3 (X11): Polybar, Rofi, Picom.
Thermal Forensics (760 lines)
```shell
thermal-forensics --duration 180
laptop-verdict /var/lib/thermal-evidence
```

A three-phase stress test collecting baseline/stress/rebuild thermal data for hardware warranty claims.
Advanced Rebuild (674 lines)
```shell
rebuild-advanced --profile workstation --check-thermal
```

Pre-flight checks, thermal monitoring, and binary cache integration during rebuilds.
GPU Orchestration (252 lines) — Unloads llama.cpp models when free VRAM drops below 2 GB; maintains service priority queues.
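The offload decision itself reduces to a threshold check. The sketch below models only that check (the real module reads live VRAM figures, e.g. via `nvidia-smi`, and also manages the priority queues; the function name is an illustration, not the module's actual code):

```shell
VRAM_THRESHOLD_MB=2048  # the 2 GB floor described above

# should_offload FREE_MB: succeed when free VRAM is below the threshold
should_offload() {
  [ "$1" -lt "$VRAM_THRESHOLD_MB" ]
}

if should_offload 1500; then
  echo "offload llama.cpp models"
fi
```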
SOC Stack — Full Wazuh + OpenSearch + Suricata deployment running on a workstation-class machine.
```
/etc/nixos/
├── flake.nix              # Flake entry point
├── modules/               # 278 modules / 20 categories
│   ├── ml/                # ML infrastructure (36)
│   ├── security/          # Security + SOC (39)
│   ├── packages/          # Custom packages (36)
│   ├── shell/             # Shell configuration (35)
│   ├── services/          # System services (16)
│   ├── network/           # Networking (12)
│   ├── hardware/          # Hardware tuning (12)
│   └── ...                # 13 more categories
├── hosts/kernelcore/      # Host configuration
├── overlays/              # Package overlays
├── lib/                   # Reusable functions
├── secrets/               # SOPS-encrypted secrets
└── docs/                  # Documentation
```
- NixOS 23.11+ or nixos-unstable
- NVIDIA GPU (optional, for ML features)
- Git
```shell
git clone https://github.com/VoidNxSEC/nixos.git /etc/nixos
cd /etc/nixos

# Review host configuration
cat hosts/kernelcore/configuration.nix

# Dry-run build
sudo nixos-rebuild build --flake .#kernelcore

# Apply
sudo nixos-rebuild switch --flake .#kernelcore
```

```nix
{
  kernelcore.ml.offload.enable = true;          # ML infrastructure
  kernelcore.soc.enable = true;                 # SOC/SIEM stack
  kernelcore.security.hardening.enable = true;  # Kernel/compiler hardening
  services.securellm-mcp.enable = true;         # SecureLLM Bridge
  kernelcore.tools.enable = true;               # Unified CLI suite
  kernelcore.swissknife.enable = true;          # Debug toolkit
}
```

CI runs on GitHub Actions (primary) with a GitLab CI mirror. The main `ci.yml` workflow runs `nix flake check` and builds the kernelcore closure on every push; additional workflows handle observability/debug (tmate), deployment, rollback, SOPS secret setup, and weekly `flake.lock` updates.
A Cachix binary cache (`marcosfpina`) is populated by CI so local rebuilds pull pre-built closures when available.
For the full workflow catalog, composite actions, required secrets, and reusable-workflow examples, see `.github/CI-CD.md`. The GitLab pipeline is defined in `.gitlab-ci.yml`.
- Technical Overview
- CI/CD Architecture
- GitHub Actions reference — composite actions, workflow catalog, secrets
- Workflow debugging guide — tmate, observability, notifications
- Environment: production workstation.
- Posture: hardened (kernel, compiler, network, filesystem).
- Secrets: encrypted with SOPS + age.
- Audit: AIDE + auditd + Wazuh logging across system surfaces.
Sensitive material (API keys, SSH keys, certificates) lives encrypted in secrets/. Decryption requires the appropriate age key.
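If the integration follows the common sops-nix pattern (an assumption; this repo's actual wiring may differ, and the file and secret names below are placeholders), declaring a secret looks roughly like:

```nix
{
  sops.defaultSopsFile = ./secrets/secrets.yaml;   # hypothetical file name
  sops.age.keyFile = "/var/lib/sops-nix/key.txt";  # age key used for decryption
  sops.secrets."example-api-key" = {
    owner = "root";  # in practice, the consuming service's user
  };
}
```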
- Modules: 278 across 20 categories
- Nix lines: ~47,800
- Security + SOC modules: 39
- ML modules: 36
- Custom packages: 36
- Shell modules: 35
Largest modules:
- `vmctl` — 959 lines (VM orchestration CLI)
- `thermal-forensics` — 760 lines (hardware evidence collection)
- `rebuild-advanced` — 674 lines (safe rebuild system)
Built on:
- NixOS — declarative Linux distribution
- Hyprland — Wayland compositor
- Wazuh — XDR/SIEM platform
- llama.cpp — LLM inference engine
- Maintained by: @VoidNxSEC
- Hardware: Lenovo ThinkPad + NVIDIA GPU
- Channel: nixos-unstable
- Status: production (daily driver)