Two weeks ago I'd never heard of MCP. Today I have four pull requests on the official repositories. Here's how that happened.

Finding the Ecosystem

Model Context Protocol is Anthropic's standard for connecting AI assistants to external tools. The TypeScript SDK and reference servers are open source, actively maintained, and — importantly — small enough to understand quickly.

I wasn't looking for "good first issues." I was using MCP tools and hitting actual problems. That's a much better starting point.

PR #1: The Empty Schema Fix

My first contribution came from a real bug. I was building tools with Zod schemas and discovered that empty object schemas broke OpenAI's strict mode. The SDK was generating {}, which OpenAI rejects — strict mode needs an explicit additionalProperties: false.

The fix was small — maybe 20 lines. But it solved a real problem I was having, and the reproduction was obvious. I submitted the PR, and a maintainer merged it within a day.
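The shape of the problem can be sketched without the SDK itself. This is an illustrative normalization function (the names normalizeEmptyObjectSchema and JsonSchema are mine, not the SDK's), showing the transformation the fix performs: a bare {} becomes a fully explicit object schema that strict mode accepts.

```typescript
// Hypothetical sketch of the normalization the fix performs.
type JsonSchema = Record<string, unknown>;

function normalizeEmptyObjectSchema(schema: JsonSchema): JsonSchema {
  // OpenAI's strict mode rejects a bare {} schema: it requires an
  // explicit type, a properties map, and additionalProperties: false.
  const isEmpty = Object.keys(schema).length === 0;
  const isBareObject = schema.type === "object" && schema.properties === undefined;
  if (isEmpty || isBareObject) {
    return {
      type: "object",
      properties: {},
      required: [],
      additionalProperties: false,
    };
  }
  // Schemas that already declare properties pass through unchanged.
  return schema;
}
```

The key design point is that the output is maximally explicit: every field strict mode checks for is spelled out rather than left to defaults.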

Lesson: The best issues to fix are the ones blocking your own work.

PR #2: Error Callbacks

While reading the SDK source to understand how transports worked, I noticed that several error paths weren't calling the onerror callback. Errors were being returned but not reported, making debugging harder than it needed to be.

This wasn't blocking me directly, but it was clearly a gap. The fix was mechanical — find every error return, add the callback. The hard part was reading enough code to be confident I'd found them all.
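The pattern of the bug, and of the fix, looks roughly like this. SketchTransport and reportError are illustrative names of my own, not the SDK's classes; the point is that every error path routes through one helper that fires the onerror callback before the error is returned.

```typescript
// Illustrative transport sketch (not the SDK's actual class).
class SketchTransport {
  onerror?: (error: Error) => void;

  // The fix in miniature: a single helper that reports the error to
  // the callback before handing it back to the caller.
  private reportError(error: Error): Error {
    this.onerror?.(error);
    return error;
  }

  send(message: string): Error | undefined {
    if (message.length === 0) {
      // Before the fix, paths like this returned the error silently,
      // so onerror never fired and debugging was harder.
      return this.reportError(new Error("cannot send empty message"));
    }
    return undefined;
  }
}
```

Funneling every path through one helper is also what makes the "did I find them all?" question tractable: the review reduces to checking that no error is constructed outside it.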

Lesson: Reading code carefully often surfaces issues that aren't filed yet.

PRs #3 and #4: Tool Annotations

The MCP spec defines "tool annotations" — metadata hints such as readOnlyHint, destructiveHint, and idempotentHint that help clients understand what tools do without parsing their names.

I noticed that the fetch server had zero annotations on its single tool. Same with the memory server's nine tools. Both had open issues requesting annotations.

These were pure metadata changes — no behavior modifications, just adding the right hints. But they required understanding what each tool actually did:

  • Does delete_entities cascade? (Yes — it removes associated relations too)
  • Is delete_relations idempotent? (Yes — deleting an already-deleted relation is a no-op)
  • Is fetch open-world? (Yes — it makes outbound HTTP requests)
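Concretely, answering those questions turns into annotation objects like these. The ToolAnnotations interface is reproduced locally here so the sketch is self-contained, and the two constants are my illustration of the kind of hints the PRs added, based on the reasoning above.

```typescript
// Local copy of the annotation shape from the MCP spec, so this
// sketch stands alone.
interface ToolAnnotations {
  title?: string;
  readOnlyHint?: boolean;
  destructiveHint?: boolean;
  idempotentHint?: boolean;
  openWorldHint?: boolean;
}

// Illustrative annotations for a memory-server deletion tool.
const deleteRelationsAnnotations: ToolAnnotations = {
  readOnlyHint: false,
  destructiveHint: true,
  idempotentHint: true,  // deleting an already-deleted relation is a no-op
  openWorldHint: false,  // operates only on the local knowledge graph
};

// Illustrative annotations for the fetch tool.
const fetchAnnotations: ToolAnnotations = {
  readOnlyHint: true,    // fetching does not mutate client-side state
  openWorldHint: true,   // makes outbound HTTP requests
};
```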

Lesson: "Easy" issues often require domain understanding to get right.

What Made These Work

Looking back, a few things helped:

Small scope. None of these PRs were more than 100 lines. Small changes are easier to review, easier to merge, and lower risk for maintainers.

Clear rationale. Each PR explained not just what changed, but why. For the annotations, I included tables showing before/after states and reasoning for each hint.

Following patterns. The filesystem server already had comprehensive annotations. I matched that style exactly. Maintainers don't want to debate style — they want contributions that fit.

Responding fast. When maintainers left comments, I addressed them same-day. Momentum matters. PRs that go stale get closed.

What I Learned About MCP

Beyond the contributions themselves, I now understand MCP internals much better:

  • How transports handle sessions and error states
  • The difference between JSON-RPC errors and tool errors
  • Why tool annotations matter for security policies
  • How the SDK validates and normalizes schemas
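The JSON-RPC vs. tool error distinction is worth a concrete illustration. Per the MCP spec, a protocol-level failure is a JSON-RPC error object, while a tool execution failure is a successful JSON-RPC response whose result carries isError: true. The message text and ids below are made up for the example.

```typescript
// A protocol-level failure: the request itself could not be handled,
// so the response carries an error object instead of a result.
const jsonRpcError = {
  jsonrpc: "2.0",
  id: 1,
  error: { code: -32602, message: "Unknown tool: does_not_exist" },
};

// A tool-level failure: the protocol exchange succeeded, but the tool
// reports its own failure inside the result via isError.
const toolError = {
  jsonrpc: "2.0",
  id: 2,
  result: {
    content: [{ type: "text", text: "Fetch failed: connection refused" }],
    isError: true,
  },
};
```

The distinction matters in practice because a model can see and react to a tool error's content, while a JSON-RPC error never reaches it.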

This knowledge will help with future contributions and with building MCP tools myself.

The Credibility Effect

Having merged PRs on official repositories is proof of capability that anyone can verify. It's not "I claim I can work with codebases" — it's "here's the evidence."

For consulting, this matters. Clients can see that I understand how to:

  • Navigate unfamiliar codebases
  • Write code that passes review
  • Communicate effectively with maintainers
  • Ship changes that get merged

Finding Your Own Opportunities

If you want to contribute to open source:

  1. Use the tools first. Hit real problems, not theoretical ones.
  2. Read the code. Most codebases have gaps that aren't filed as issues yet.
  3. Start small. One well-executed small PR beats an ambitious PR that never merges.
  4. Match existing patterns. Don't try to be clever. Be consistent.
  5. Be responsive. Review feedback is a conversation, not a one-time event.

The MCP ecosystem is still young and actively developed. If you're building AI tools, there are opportunities to contribute. And unlike mature projects with years of history, you can understand the whole codebase in a few hours.

That's how I went from zero to four PRs in two weeks. Not by being brilliant — by showing up, reading carefully, and shipping small things well.
