Everyone's talking about AI tools. Few people explain how they actually use them. Here's my honest breakdown.
The Tools I Use
For code:
- Claude (via API) for complex reasoning and architecture questions
- GitHub Copilot for inline completions
For writing:
- Claude for first drafts and editing passes
- My own brain for anything that needs personality
For research:
- Claude for synthesizing documentation
- Web search for current information (AI knowledge cutoffs matter)
What Actually Works
Code review assistance
I paste code and ask: "What's wrong with this?" or "How would you improve this?"
AI catches things I miss when I'm deep in implementation mode. It's like having a patient reviewer who never gets tired of looking at my code.
But here's the key: I don't blindly accept suggestions. I use them as prompts to think harder about my choices.
Rubber duck debugging
Explaining a problem to an AI often surfaces the solution. The act of articulating the issue clearly enough for a machine to understand forces you to actually understand it yourself.
Sometimes I realize the answer mid-prompt and never even send it.
First draft generation
Starting from a blank page is hard. Starting from a mediocre draft is easier.
I'll describe what I want to write, get a draft, then rewrite it in my voice. The AI draft gives me structure and catches things I might forget. The rewrite makes it actually sound like me.
Documentation and comments
AI is genuinely good at explaining code. I'll write the implementation, then ask for help documenting it. The result is usually clearer than what I'd write when deep in the code.
What Doesn't Work
Letting AI write your personality
AI-written content is recognizable. It's too smooth, too balanced, too safe. If everything you publish sounds like it came from the same generic source, you lose what makes you interesting.
Use AI for structure. Write your personality yourself.
Trusting code without understanding
AI generates plausible-looking code that doesn't work. Or worse, code that passes your tests but is subtly wrong.
Every piece of AI-generated code needs to be understood, not just tested. If you can't explain why it works, you shouldn't ship it.
Complex multi-step tasks
AI excels at single, well-defined tasks. It struggles with complex, multi-step processes that require maintaining state and context across many decisions.
Break big tasks into small ones. Use AI for each step. Connect the steps yourself.
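That decomposition can be sketched in TypeScript. Each step is a small, single-purpose function — in practice each would wrap one well-scoped AI prompt, but the bodies here are illustrative stubs, and the names (`splitIntoChunks`, `reviewChunk`, `mergeFindings`, `runPipeline`) are mine, not any real API:

```typescript
// Illustrative sketch: decompose "review this module" into small steps.
// Each step would normally wrap one AI call; the stubs below just show
// that the connective tissue -- state, ordering, error handling -- is
// plain code you own, not one giant prompt.

type Step = (input: string) => string;

const splitIntoChunks: Step = (code) =>
  code; // real version: split a large file into reviewable pieces

const reviewChunk: Step = (chunk) =>
  `reviewed: ${chunk}`; // real version: one "what's wrong with this?" prompt

const mergeFindings: Step = (findings) =>
  findings.toUpperCase(); // real version: you reconcile the results yourself

// Connect the steps yourself: each step sees only the previous step's
// output, so no single AI call has to hold the whole task in context.
function runPipeline(input: string, steps: Step[]): string {
  return steps.reduce((acc, step) => step(acc), input);
}
```

The design point is that the pipeline, not the model, maintains state across decisions — swap any stub for a real call without touching the others.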
Current information
AI knowledge has a cutoff date. For anything recent — new library versions, current events, recent best practices — verify independently.
My Prompting Patterns
Be specific about context:
- Bad: "How do I handle errors?"
- Good: "I'm building a REST API in TypeScript with Express. How should I structure error handling for validation errors vs. server errors?"
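A minimal sketch of what the specific prompt might get you back — assuming a custom `ValidationError` class and a plain error-to-response mapper (these names are illustrative, not Express APIs; only the middleware signature in the comment is Express's):

```typescript
// Hedged sketch: distinguish client validation errors from server errors.

class ValidationError extends Error {
  constructor(message: string, public readonly field: string) {
    super(message);
    this.name = "ValidationError";
  }
}

interface ErrorResponse {
  status: number;
  body: { error: string; field?: string };
}

// Map any thrown error to an HTTP-style response. In Express this logic
// would live inside an error-handling middleware ((err, req, res, next)
// => ...); it is a plain function here so it can be tested without a server.
function toErrorResponse(err: unknown): ErrorResponse {
  if (err instanceof ValidationError) {
    // Client mistake: safe to echo the message and offending field back.
    return { status: 400, body: { error: err.message, field: err.field } };
  }
  // Anything else is a server error: log internally, return a generic
  // message so internals never leak to the client.
  return { status: 500, body: { error: "Internal server error" } };
}
```

Wired into Express, the middleware would just call `toErrorResponse(err)` and send `res.status(r.status).json(r.body)` — the specific prompt is what surfaces the 400-vs-500 split at all.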
Ask for tradeoffs: Instead of "What's the best way to do X?", ask "What are the tradeoffs between approach A and approach B for X?"
Request explanations: "Explain why this approach is better" forces the AI to reason, which often surfaces issues.
Iterate: The first response is rarely the best. Follow up with "What are the edge cases?" or "How would this change if we also needed Y?"
The Integration Question
The biggest productivity gains come from integration, not capability.
An AI that requires copy-pasting between apps is useful. An AI integrated into your editor, your terminal, your workflow — that's transformative.
I optimize for reducing friction. The fewer steps between "I need help" and "I have help," the more I actually use the tool.
The Honest Assessment
AI tools make me maybe 20-30% more productive on coding tasks. Not 10x. Not even 2x. But 20-30% compounded over months is significant.
The gains come from:
- Faster first drafts
- Better error catching
- Reduced context-switching for documentation
- Rubber duck debugging that actually responds
The traps to avoid:
- Over-reliance on generated code
- Losing your voice in writing
- Trusting without verifying
- Using AI when thinking would be faster
The tools are good. They're not magic. Use them for what they're good at.