Owen decides what I should work on next. It doesn't use AI. Here's why.

The Temptation

When you're building a decision engine, AI seems perfect:

  • "Consider all context"
  • "Weigh priorities intelligently"
  • "Adapt to patterns"

Modern LLMs are good at this. You could feed one your task list, calendar, and inbox, and ask "what should I do next?"

I tried it. Then I stopped.

Why I Switched to Rules

1. Debuggability

When Owen picks the wrong task, I can trace exactly why:

# Line 42: CI red check
if state.ci_red:
    return "fix CI"  # This fired

# Line 45: Would have picked this otherwise
if state.task_in_progress:
    return f"continue: {state.task}"

With AI, debugging is archaeology. "Why did it suggest email over my active task?" Who knows. The reasoning is a black box.

2. Predictability

Same input → same output. Always.

I know that if CI is red, Owen will tell me to fix it. Not "probably" or "usually." Always.

AI is stochastic. Temperature > 0 means variance, and even at temperature = 0 most providers don't guarantee bit-exact determinism: batching and floating-point effects can still cause drift.
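The determinism claim is easy to see in code. Here's a minimal sketch of an Owen-style first-match-wins rule chain; the names (`State`, `decide`) are illustrative, not Owen's actual API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    ci_red: bool = False
    task_in_progress: bool = False
    task: str = ""

def decide(state: State) -> str:
    """Ordered rules: the first condition that matches wins."""
    if state.ci_red:
        return "fix CI"
    if state.task_in_progress:
        return f"continue: {state.task}"
    return "pick next task from backlog"

# Same input, same output. Always.
s = State(ci_red=True, task_in_progress=True, task="write docs")
assert decide(s) == decide(s) == "fix CI"
```

Because the rules are ordered and pure (no randomness, no hidden context), a red CI wins every time, no matter what else is in the state.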

3. Speed

Owen's decision engine runs in microseconds. No API call. No token counting. No rate limits.

An LLM call takes 500ms on a good day. For something that runs every 60 seconds, that adds up.
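The microsecond claim is cheap to verify with `timeit`. The `decide` function below is a stand-in, not Owen's real engine:

```python
import timeit

def decide(ci_red: bool, task_in_progress: bool) -> str:
    # A chain of branch checks, like Owen's rule list.
    if ci_red:
        return "fix CI"
    if task_in_progress:
        return "continue task"
    return "pick from backlog"

# Average cost of one decision over 100k runs.
per_call = timeit.timeit(lambda: decide(False, True), number=100_000) / 100_000
print(f"{per_call * 1e6:.2f} µs per decision")  # well under a millisecond
```

Even in interpreted Python, a branch chain is several orders of magnitude faster than a 500 ms network round trip.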

4. Cost

14 if/else statements: $0.

GPT-4 calls every minute: $lots.

I'd rather spend tokens on things that actually need intelligence.
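A back-of-envelope calculation makes "$lots" concrete. The per-call price here is an assumption for illustration, not a quoted rate:

```python
# One LLM call per minute, around the clock.
calls_per_day = 24 * 60           # 1440 calls
cost_per_call = 0.01              # assumed: ~$0.01 per call (illustrative)

daily = calls_per_day * cost_per_call
monthly = daily * 30
print(f"{calls_per_day} calls/day -> ${daily:.2f}/day, ${monthly:.0f}/month")
# -> 1440 calls/day -> $14.40/day, $432/month
```

Against that, 14 if/else statements cost nothing per decision, forever.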

5. Offline Works

Owen runs on my laptop. No internet? Still decides.

AI-based decisions require an API. Offline means stuck.

When AI Makes Sense

AI isn't wrong for decisions—it's wrong for these decisions.

Use AI when:

  • The problem requires reasoning over unstructured data
  • There's no clear rule that applies
  • The cost of occasionally being wrong is low
  • Speed isn't critical

Use rules when:

  • The decision space is enumerable
  • Consistency matters more than nuance
  • You need to debug and audit
  • You want it to work forever without API changes

The Hybrid Approach

Owen has an optional AI layer (Phase 7). It doesn't make decisions. Instead, it:

  • Summarizes what happened today
  • Generates task descriptions from brief notes
  • Drafts email responses

These are generative tasks where variance is acceptable. Decisions stay deterministic.
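The split can be sketched in a few lines: decisions are pure rules, and the generative layer is an optional plug-in that `decide` never consults. All names here are illustrative, not Owen's actual code:

```python
from typing import Callable, Optional

def decide(ci_red: bool, task: Optional[str]) -> str:
    """Deterministic layer: rules only, no AI."""
    if ci_red:
        return "fix CI"
    if task:
        return f"continue: {task}"
    return "pick next task"

def summarize_day(events: list[str],
                  llm: Optional[Callable[[str], str]] = None) -> str:
    """Generative layer: optional, and never consulted by decide()."""
    if llm is None:                      # offline, or AI layer disabled
        return "; ".join(events)         # plain fallback
    return llm("Summarize: " + "; ".join(events))

print(decide(False, "write docs"))              # continue: write docs
print(summarize_day(["fixed CI", "merged PR"]))  # fixed CI; merged PR
```

Because the LLM is an injected, optional callable, variance stays confined to text generation, and everything still works offline.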

The Meta Point

Not everything needs AI. The boring solution—if/else on a priority list—often works better.

I spent more time writing these 14 rules than I would have spent prompting an LLM. But I'll never debug a hallucination. I'll never wonder why it made a weird choice. I'll never pay for inference.

Sometimes simple wins.
