Ollie starts from a simple, slightly silly question: What might an AI agent look like in Plan 9? Two more questions follow from it: what happens if agent primitives are exposed as an ordinary filesystem, and how much can we subtract from agent implementations while still being useful? The result is not a conventional "agent framework" but a small Go runtime and 9P-backed filesystem surface that defines what an agent is, then exposes its state and behaviors as regular files and I/O streams. Orchestration, scheduling, workflows, and interfaces are left to the surrounding environment: shell scripts, TUIs, web apps, cron, containers, and other OS facilities. The same primitives scale from simple copilot use cases (`u/complete` manages a session per working directory for ghost-text code completion in acme), through interactive shells and one-shot pipelines, up to multi-agent workflows coordinated by scripts, other agents, or both.
- Interactive AI shell — `s/sh` gives you a readline prompt that resumes your last session per directory, with streaming output and live model/backend switching
- One-shot queries that compose with Unix pipes — `cat error.log | s/bfg "what caused this?" | s/bfg "suggest a fix"`
- Parallel fan-out — `s/bfg -parallel 4` spawns N agents on the same prompt and collects results
- Multi-agent workflows in plain shell — create named sessions, pass prompts between them via files, coordinate with `statewait`; no framework or SDK (a sketch follows this list)
- Subagent delegation — agents can fork ephemeral subagents that run independently and return results
- Any model, any backend — Ollama (local), OpenAI, Anthropic, OpenRouter, GitHub Copilot, Kiro; switch per-session with one write
- Sandboxed execution — every tool call runs inside a Landlock sandbox with configurable filesystem access
- Extensible via plain scripts — drop executables into a directory and the agent picks them up; built-in tools cover file I/O, LSP (go-to-definition, references, rename, diagnostics), persistent memory, web search, browser screenshots, and task tracking
- Domain skills — teach the agent your project's conventions with a markdown file; loaded on demand
- AI code completion — `u/complete` reads a prefix from stdin and prints the completion to stdout; plug it into any editor that can shell out
- Multiple frontends — terminal (`s/sh`), acme (`ollie-acme` + `Kmpl`), Emacs (`ellie`), and a browser-based web UI; all talk to the same 9P server
- Remote access — mount a remote server's namespace locally with `9pfuse`, or use the HTTP gateway with `curl` or the web UI
- Store federation — point tools or transcripts at a remote 9P mount to share across machines
- Prompt optimization — `u/optimize` generates N candidate prompts in parallel, then judges them to return the best one
- Automatic context compaction — long conversations are summarized transparently when approaching the model's context limit
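As a rough sketch of the "plain shell" claim above, the script below fans two one-shot agents out over the same input and then joins their answers. Only `s/bfg`, documented in this list, is assumed; the file names and prompts are made up for the example, and named-session coordination is left out since its file layout isn't described here.

```sh
#!/bin/sh
# Hypothetical multi-agent fan-out in plain shell: two agents read the same
# document in parallel, a third merges their notes. Only s/bfg (shown above)
# is assumed; file names and prompts are illustrative.
cat design.md | s/bfg "list the risks in this design" > risks.txt &
cat design.md | s/bfg "list the open questions"       > questions.txt &
wait  # plain shell job control does the joining
cat risks.txt questions.txt | s/bfg "merge these notes into one review"
```

Fan-out, joining, and scheduling stay in the shell (or cron, or a script); the agent runtime only has to answer prompts.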
Clone with submodules:
```sh
git clone --recurse-submodules https://github.com/lneely/ollie.git
```

Or after cloning:

```sh
git submodule update --init --recursive
```

Build and install everything from the monorepo root:

```sh
mk
```

Or build specific components:

```sh
mk core    # Build core library
mk 9p      # Build 9p filesystem server
mk httpgw  # Build HTTP gateway
mk webui   # Build web UI (requires Node.js + npm)
```

For Emacs Lisp, copy `el/ellie.el` into your own Emacs configuration directory, and:

```elisp
(require 'ellie)
```

See doc/USAGE.md for usage instructions.
Configuration lives in an env file:

```
OLLIE_BACKEND=openai              # ollama | openai | anthropic | copilot | kiro (default: ollama)
OLLIE_OLLAMA_URL=                 # base URL for Ollama (default: http://localhost:11434)
OLLIE_OPENAI_URL=https://openrouter.ai/api
OLLIE_OPENAI_KEY=sk-or-...
OLLIE_ANTHROPIC_KEY=sk-ant-...
OLLIE_COPILOT_TOKEN=...
OLLIE_KIRO_TOKEN=...              # bearer token or sqlite:// path (auto-detected from Kiro CLI if unset)
OLLIE_MODEL=qwen/qwen3-235b-a22b
OLLIE_TOOLS_PATH=~/.config/ollie/tools    # directory for tool scripts ({tool} steps)
OLLIE_MEMORY_PATH=~/.config/ollie/memory  # directory for memory files (ollie/m)
OLLIE_ELEVATE_SOCKET=${XDG_RUNTIME_DIR}/ollie/elevate.sock  # socket path for x/elevate adapter (default: $XDG_RUNTIME_DIR/ollie/elevate.sock)
OLLIE_COMPLETE_BACKEND=ollama     # backend for u/complete (required)
OLLIE_COMPLETE_MODEL=qwen3:latest # model for u/complete (required)
```

Shell environment variables take precedence over the env file.
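Because inline shell variables win, a one-off run can switch backend and model without touching the env file. The variable values below are the ones listed above; the piped command and prompt are illustrative:

```sh
# One-off override: this run uses the openai backend and the model shown
# above; the env file is left unchanged. The pipe and prompt are illustrative.
git log -1 -p | OLLIE_BACKEND=openai OLLIE_MODEL=qwen/qwen3-235b-a22b s/bfg "summarize this change"
```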
Two approaches: mount the 9P namespace directly with `9pfuse`, or reach it over HTTP via `ollie-httpgw`.

Because `olliesrv` speaks 9P over TCP, a remote instance can be mounted into the local namespace using `9pfuse`. Agent sessions, tools, and all other filesystem state on a remote host become ordinary local files — no special client needed.
```sh
# On the remote host:
olliesrv start -tcp :9564

# Locally:
olliesrv mount remotehost:9564 ~/mnt/remotehost
ls ~/mnt/remotehost   # s/, t/, ...
```

Sessions created under `~/mnt/remotehost/s/` run on the remote host, so tool calls execute close to the remote filesystem rather than over the wire.
`ollie-httpgw` translates HTTP to 9P, letting any HTTP client interact with `olliesrv` without a 9P library. It's also the backend for the web UI.
```sh
# Connect to local server:
ollie-httpgw

# Connect to remote server:
ollie-httpgw -net tcp -addr remotehost:9564
```

See doc/USAGE.md for full details on the gateway and web UI.
A Containerfile is included for running a self-contained remote server:
```sh
podman build --network=host -t olliesrv .
podman run --network=host -e ANTHROPIC_API_KEY=$ANTHROPIC_API_KEY olliesrv
```

Then mount locally or point `httpgw` at it:

```sh
olliesrv mount localhost:9564 ~/mnt/container-ollie
# or
ollie-httpgw -net tcp -addr localhost:9564
```

Update all submodules to latest:

```sh
git submodule update --remote
```

Many sources of inspiration:
- Plan 9 from Bell Labs — for an interesting system
- @9fans — for the Plan 9 port
- Suckless — for articulating good software development principles
- @simonfxr — for a solid agent baseline to "borrow" from, and other nifty ideas
- @aws — for a solid open-source agent implementation
GPLv3