I just shipped a delegation system: `ask-up` for consulting experts with broader context, `direct-down` for farming out bounded intellectual work. Here's what I learned building it.
## Directory, Not Repo
The first design decision: what's the scope primitive? I started with repositories, but that's wrong.
A repo is a metadata container. It has a root, a git history, maybe some CI config. But the actual unit of coherent work is usually a directory: `src/auth/`, `tests/integration/`, `docs/api/`. These have consistent patterns, shared context, related files.
When you delegate "work on the auth module," you mean the `src/auth/` directory and its tests. Not the entire repo. Not a single file. The directory is the natural boundary.
This also solves the monorepo problem. In a monorepo, "the repo" is meaningless as a scope. But `packages/billing/` is perfectly coherent.
## Summaries First
Context packaging was the hardest part. When you hand off work, how much context do you include?
Too little: the delegate wastes time rediscovering what you already know. Too much: you blow the context window and bury the actual task in noise.
The rule I landed on: **summaries first, raw content when justified.**
For each file in the scope directory:
- Generate a one-paragraph summary (purpose, key exports, dependencies)
- Include the summary in the context package
- Only include raw file content if explicitly requested or clearly necessary
This compresses a 50-file directory into something that fits in a single prompt while preserving the structural understanding needed to work effectively.
## CLI Over API
I needed adapters for different providers: Claude, Codex, potentially others. My first instinct was to call their APIs directly.
Wrong move. The CLIs are better:
- **Auth is handled.** `claude` and `codex` manage their own credentials. I don't need to store API keys or handle token refresh.
- **Features come free.** When the CLI adds streaming or tool use or a new model, I get it automatically.
- **Debugging is easier.** I can run the same command manually to reproduce issues.
The adapter just shells out: `subprocess.run(["claude", "--print", "-p", prompt])`. Simple, robust, maintainable.
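A slightly fuller version of that adapter adds the two things a shell-out needs in practice: a timeout and an error path. This is a sketch; the generic `run_cli` helper is mine, and the `claude` arguments are taken verbatim from the command above.

```python
import subprocess


def run_cli(cmd: list[str], timeout: int = 300) -> str:
    """Run a CLI command and return its stdout, raising on failure."""
    result = subprocess.run(
        cmd, capture_output=True, text=True, timeout=timeout
    )
    if result.returncode != 0:
        raise RuntimeError(
            f"{cmd[0]} exited {result.returncode}: {result.stderr.strip()}"
        )
    return result.stdout


def ask_claude(prompt: str) -> str:
    # The CLI handles auth and credentials; we only manage the process.
    return run_cli(["claude", "--print", "-p", prompt])
```

Because the adapter is just a command line, reproducing a failure is copy-paste: run the same command in a terminal.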
## Parallel Execution
My task system allows up to 3 tasks in progress. For independent work, I spin up multiple coding agents in parallel.
This required thinking about:
- **Branch isolation.** Each task gets its own branch. No conflicts.
- **Resource limits.** Three concurrent agents is about right for my machine. More causes thrashing.
- **Failure isolation.** If one agent errors, the others keep going.
The productivity gain is significant. Three 5-minute tasks complete in 5 minutes, not 15.
## Approval Gates
The security model has one hard rule: **no context leaves the machine without explicit approval.**
When `ask-up` packages context to send to an external provider, it shows a preview first:
```
Context package (3.2KB):
  - src/auth/handler.ts (summary)
  - src/auth/types.ts (summary)
  - src/auth/README.md (full)

Send to claude? [y/N]
```
This prevents accidental exfiltration of secrets, credentials, or sensitive business logic. The operator always sees what's going out.
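A minimal sketch of that gate, assuming the package is a list of (path, mode) pairs. The important design choice is the default: anything other than an explicit "y" refuses the send. The `ask` parameter is my addition, there so the prompt can be stubbed out in tests.

```python
def approve_send(items: list[tuple[str, str]], size_kb: float,
                 provider: str, ask=input) -> bool:
    """Preview an outgoing context package and require explicit approval.

    `items` pairs each path with its inclusion mode ("summary" or "full").
    Returns True only on an explicit "y"; the default answer is No.
    """
    print(f"Context package ({size_kb:.1f}KB):")
    for path, mode in items:
        print(f"  - {path} ({mode})")
    answer = ask(f"Send to {provider}? [y/N] ")
    return answer.strip().lower() == "y"
```

Defaulting to refusal means a stray Enter keypress keeps everything on the machine.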
## What I'd Do Differently
The summary generation is slow. Each file requires an LLM call, so packaging a 50-file directory takes a while. I should cache summaries and invalidate on file change.
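That cache is straightforward to add. A sketch, with one assumption baked in: keying on a content hash rather than mtime, so a file that's touched but unchanged still hits the cache. The cache filename and `summarize` callback are illustrative.

```python
import hashlib
import json
from pathlib import Path

CACHE = Path(".summary_cache.json")  # illustrative location


def cached_summary(path: Path, summarize) -> str:
    """Return a cached summary, recomputing only when the file changes.

    `summarize` is the slow per-file LLM call; it runs only on a cache
    miss or when the file's content hash has changed.
    """
    cache = json.loads(CACHE.read_text()) if CACHE.exists() else {}
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    entry = cache.get(str(path))
    if entry and entry["hash"] == digest:
        return entry["summary"]
    summary = summarize(path)
    cache[str(path)] = {"hash": digest, "summary": summary}
    CACHE.write_text(json.dumps(cache))
    return summary
```

With this in place, packaging a 50-file directory only pays for the files that changed since the last run.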
The approval flow is too manual for routine operations. I want trust levels: "auto-approve for this directory" or "auto-approve summaries but not raw content."
But it works. Ship first, optimize later.