I just shipped the 10th and final phase of a project that's been consuming my evenings for weeks. It's a decision engine—a system that looks at everything I need to do and tells me the single most important thing to work on right now.

No more staring at my task list wondering where to start. No more context-switching between email, Slack, and Jira trying to figure out what's urgent. Just: "Do this next."

Here's how I built it.

The Problem

Every morning, I faced the same question: what should I work on?

My inputs were scattered across Gmail, GitHub, Jira, Slack, and a local task system. Each had its own notifications, its own priority signals, its own way of screaming for attention. The mental overhead of synthesizing all of that into a single decision was exhausting.

I wasn't just doing work—I was constantly deciding what work to do. That decision fatigue was killing my output.

The Solution: A Priority Ladder

The core insight was simple: decisions can be mechanical.

I don't need to think about whether to fix a production incident or review a PR. The incident wins. Always. That's not a decision—it's a rule.

So I built a priority ladder. Fourteen conditions, checked in order. First match wins:

  1. Incident active or CI red → Fix it immediately
  2. Teammate blocked → Unblock them
  3. Active task in progress → Continue it
  4. Meeting within 2 hours → Prep for it
  5. PR feedback waiting → Address it
  6. Tasks in review → Review them
  7. PR review requested → Do the review
  8. Email eligible → Triage inbox
  9. Slack eligible → Check messages
  10. Management review due → Send status update
  11. Open tasks available → Pick one up
  12. Uncommitted changes → Commit them
  13. Low on tasks → Generate more
  14. Fallback → Surface tech debt or improvements

One answer. Every time.
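A ladder like this can be sketched as an ordered list of (condition, action) pairs where the first match wins. This is a minimal illustration, not the actual implementation; the state keys and rule names are hypothetical:

```python
# Minimal priority-ladder sketch: ordered rules, first match wins.
# All state keys and names here are illustrative, not the real system's API.

def decide(state: dict) -> str:
    rules = [
        (lambda s: s.get("incident_active") or s.get("ci_red"), "Fix it immediately"),
        (lambda s: s.get("teammate_blocked"), "Unblock them"),
        (lambda s: s.get("task_in_progress"), "Continue it"),
        (lambda s: s.get("meeting_within_2h"), "Prep for it"),
        # ... remaining rungs elided ...
    ]
    for condition, action in rules:
        if condition(state):
            return action
    return "Surface tech debt or improvements"  # fallback rung

# CI outranks the active task, so this prints "Fix it immediately"
print(decide({"ci_red": True, "task_in_progress": True}))
```

Because the list is ordered, adding a new rung is just an insertion at the right height; nothing else changes.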

Building in Phases

I didn't try to build the whole thing at once. That's a recipe for scope creep and abandoned side projects.

Instead, I built in phases. Each phase was complete on its own—a working system that did something useful, even if it was limited.

Phases 1-3: Foundation

The first three phases established the core runtime:

  • Gateway routing to handle requests
  • Structured JSONL logging with daily rotation
  • Integrations for Gmail, GitHub, and Jira

By the end of Phase 3, the system could pull data from all my sources and log what it found. Not useful yet, but foundational.
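Structured JSONL logging with daily rotation can be had almost for free from the stdlib. A sketch under assumed names (the real system's logger surely differs):

```python
import json
import logging
from logging.handlers import TimedRotatingFileHandler

class JsonlFormatter(logging.Formatter):
    """Render each log record as one JSON object per line (JSONL)."""
    def format(self, record):
        return json.dumps({
            "ts": self.formatTime(record),
            "level": record.levelname,
            "msg": record.getMessage(),
        })

def make_logger(path="events.jsonl"):
    # Rotate at midnight, keep a week of history.
    handler = TimedRotatingFileHandler(path, when="midnight", backupCount=7)
    handler.setFormatter(JsonlFormatter())
    logger = logging.getLogger("engine")
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)
    return logger
```

One line per event means `grep` and `jq` work on the logs with no parsing layer in between.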

Phase 4: The Decision Engine

This was the heart of the project. The decision engine takes a snapshot of current state—emails, PRs, tasks, calendar—and applies the priority ladder to produce a single recommendation.

The key architectural choice: decisions are deterministic. Given the same state, you always get the same answer. No ML, no fuzzy logic. Just rules.
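Determinism falls out naturally when the snapshot is an immutable value and the engine is a pure function of it. A sketch of that shape, with hypothetical field names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Snapshot:
    """Immutable point-in-time view of all inputs (fields are illustrative)."""
    incident_active: bool = False
    prs_awaiting_review: int = 0
    open_tasks: int = 0

def recommend(snap: Snapshot) -> str:
    # Pure function: no I/O, no randomness, no clocks.
    if snap.incident_active:
        return "Fix it immediately"
    if snap.prs_awaiting_review:
        return "Do the review"
    if snap.open_tasks:
        return "Pick one up"
    return "Surface tech debt or improvements"

snap = Snapshot(prs_awaiting_review=2)
assert recommend(snap) == recommend(snap)  # same state, same answer
```

Keeping all side effects outside the engine also makes it trivial to test: feed in a snapshot, assert on a string.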

Phases 5-6: Dashboards

Having a decision engine is great, but I also wanted visibility. Phase 5 added a personal dashboard—a local web UI showing current decisions and context. Phase 6 extended this to client-facing views, scoped per project.

The dashboard runs on Python's stdlib HTTP server. No frameworks, no dependencies, just HTML and JavaScript.
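The stdlib-only approach looks roughly like this; the handler, route, and payload below are illustrative, not the actual dashboard code:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical current decision; the real system would compute this.
CURRENT_DECISION = {"action": "Do the review", "reason": "PR review requested"}

class DashboardHandler(BaseHTTPRequestHandler):
    """Serve the current decision as JSON; anything else is a 404."""
    def do_GET(self):
        if self.path == "/decision":
            body = json.dumps(CURRENT_DECISION).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):  # keep the console quiet
        pass

def serve(port=8080):
    HTTPServer(("127.0.0.1", port), DashboardHandler).serve_forever()

# serve()  # uncomment to run the dashboard locally
```

The front end is then just a static page polling `/decision` with `fetch`. No build step, nothing to `pip install`.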

Phase 7: AI Layer (Optional)

This phase added an optional AI layer that can enhance decisions with suggestions. It's off by default because the deterministic engine is usually enough. But when enabled, it can surface insights the rules miss.

The key constraint: AI suggestions are clearly marked. They don't override the priority ladder—they augment it.
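One way to enforce that constraint is to attach suggestions alongside the rule-based decision rather than in place of it. A sketch with hypothetical field names and an invented example suggestion:

```python
def augment(decision: dict, ai_enabled: bool = False) -> dict:
    """Attach optional, clearly-labeled AI suggestions; never touch the action."""
    result = dict(decision)
    result["suggestions"] = []
    if ai_enabled:
        # Each suggestion carries its source; the ladder's action is untouched.
        result["suggestions"].append(
            {"source": "ai", "text": "Three of today's emails mention the same outage."}
        )
    return result

base = {"action": "Triage inbox", "source": "ladder"}
assert augment(base)["suggestions"] == []                          # off by default
assert augment(base, ai_enabled=True)["action"] == "Triage inbox"  # never overridden
```

Because the AI layer can only append to `suggestions`, a bad model output can clutter the view but can never change what the system tells you to do.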

Phase 8: Actions

This is where things got interesting. Up to now, the system only recommended actions. Phase 8 gave it the ability to execute them.

Archive emails. Comment on PRs. Transition Jira tickets. Each action handler is opt-in, with a dry-run mode for safety.
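The opt-in-plus-dry-run pattern can be sketched as a handler registry where everything is disabled by default and every handler answers "what would I do?" before it's allowed to do it. Names here are hypothetical, and the real API call is omitted:

```python
HANDLERS = {}

def register(name, enabled=False):
    """Opt-in registry: handlers are off unless explicitly enabled."""
    def wrap(fn):
        HANDLERS[name] = {"fn": fn, "enabled": enabled}
        return fn
    return wrap

@register("archive_email")
def archive_email(msg_id, dry_run=True):
    if dry_run:
        return f"[dry-run] would archive {msg_id}"
    # A real Gmail API call would go here (omitted in this sketch).
    return f"archived {msg_id}"

def run_action(name, *args, dry_run=True):
    entry = HANDLERS.get(name)
    if not entry or not entry["enabled"]:
        return f"skipped {name!r}: not enabled"
    return entry["fn"](*args, dry_run=dry_run)
```

Two independent switches (enable the handler, then turn off dry-run) means a misconfiguration fails safe: at worst the system logs what it would have done.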

Phase 9: Self-Updating

A system that runs 24/7 needs to update itself. Phase 9 added the ability to pull updates from git, run migrations, and restart gracefully. If tests fail after an update, it rolls back automatically.
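The pull-test-rollback loop can be sketched like this. It's a simplification: the actual phase also runs migrations and restarts gracefully, which this omits, and the command runner is injectable so the flow can be exercised without a real repo:

```python
import subprocess

def self_update(run=subprocess.run):
    """Pull, run the test suite, and roll back to the prior commit on failure."""
    before = run(["git", "rev-parse", "HEAD"],
                 capture_output=True, text=True).stdout.strip()
    run(["git", "pull", "--ff-only"], check=True)
    tests = run(["python", "-m", "pytest", "-q"])
    if tests.returncode != 0:
        run(["git", "reset", "--hard", before])  # restore the known-good commit
        return "rolled back"
    return "updated"
```

Recording the pre-update commit first is the whole trick: rollback is then a single hard reset to a SHA you know passed tests.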

Phase 10: Mac Mini Deploy

The final phase: deploying as a persistent macOS service via launchd. The system starts on boot, restarts on crash, rotates logs, and monitors its own health.
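A launchd job for this kind of service is a small property list. The label and paths below are placeholders, not the project's actual configuration:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.decision-engine</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/bin/python3</string>
        <string>/path/to/engine.py</string>
    </array>
    <!-- start on boot/login -->
    <key>RunAtLoad</key>
    <true/>
    <!-- restart on crash -->
    <key>KeepAlive</key>
    <true/>
    <key>StandardOutPath</key>
    <string>/tmp/engine.log</string>
</dict>
</plist>
```

Drop a file like this into `~/Library/LaunchAgents` and load it with `launchctl`, and macOS takes over supervision: no tmux sessions, no forgotten terminal windows.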

By the Numbers

  • 251 tests covering all modules
  • 10,000+ lines of code across packages
  • Handles 3 integrations: Gmail, GitHub, Jira
  • Dashboard for real-time visibility
  • Deploys as a service with one command

What I Learned

Build in phases. Each phase should be useful on its own. This keeps momentum high and risk low. If I'd tried to build everything at once, I'd still be planning.

Deterministic beats clever. The priority ladder is dumb. It doesn't learn, adapt, or optimize. But it's predictable, debuggable, and fast. Clever systems are impressive until they do something you can't explain.

Test everything. 251 tests might seem like a lot for a personal project. But those tests caught dozens of bugs during development. They're not overhead—they're insurance.

Ship, then polish. Each phase shipped with rough edges. That's fine. The point is to get something working, verify it works, and move on. Polish comes later (or never, and that's okay too).

What's Next

The system is feature-complete. Now comes the real test: running it 24/7 and seeing what breaks.

I'll also be adding more action handlers and possibly open-sourcing parts of it. If you're interested in the decision engine approach, I wrote about shipping 128 tasks in one day using an earlier version: 128 Tasks in One Day.


If you're building something similar or want to chat about decision systems, let's talk.
