Yesterday, dax announced that OpenCode 1.3.0 would no longer autoload the Claude Max plugin. The reason? Anthropic sent lawyers.

The tweet got 841,000 views. The community shipped a workaround plugin the same day.

What Actually Happened

OpenCode is a popular terminal-based AI coding assistant. It had a plugin that let users authenticate with their Claude Pro/Max subscriptions ($20-$200/month) directly, bypassing the separate API billing.
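For context on how such a plugin typically works: subscription-login tools generally use the standard OAuth 2.0 device-authorization flow (RFC 8628) to obtain a token tied to the user's account, then present that token where an API key would normally go. A minimal sketch of that generic pattern follows; the function names are mine, and nothing here reflects Anthropic's actual OAuth endpoints or field names:

```python
# Generic RFC 8628 device-authorization flow, as used by CLI tools that
# log in with a user account instead of an API key. Illustrative only.

def device_auth_payload(client_id: str, scope: str) -> dict:
    # Step 1: ask the authorization server for a device/user code pair
    # that the user approves in a browser.
    return {"client_id": client_id, "scope": scope}

def token_poll_payload(client_id: str, device_code: str) -> dict:
    # Step 2: poll the token endpoint with the device code until the
    # user has approved, then receive an access token.
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
        "device_code": device_code,
        "client_id": client_id,
    }

def auth_headers(access_token: str) -> dict:
    # Step 3: present the account-scoped token where an API key would
    # normally go -- this substitution is the crux of the dispute.
    return {"Authorization": f"Bearer {access_token}"}
```

The billing question turns entirely on step 3: the requests look like ordinary API traffic, but the credential is a subscription login rather than a metered API key.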

Anthropic's position: API access requires API billing. Using your subscription through third-party tools violates their terms.

OpenCode's position: they tried to convince Anthropic to "support developer choice." Anthropic sent lawyers instead.

The Tension

There are two reasonable positions here:

Anthropic's side: They sell subscriptions for their web interface. They sell API access separately. Mixing them breaks their business model. They have every right to enforce their terms of service.

Developer side: If I'm paying $200/month for Claude Max, shouldn't I be able to use it how I want? The model is the same. Why should the interface I use determine my billing?

Neither side is wrong, exactly. But the way this played out matters.

Platform Risk Is Real

What struck me wasn't the legal action itself—companies protect their business models. It's the contrast with how other players are handling the same tension.

OpenAI built official integrations. GitHub Copilot has a CLI. The trend has been toward opening up access, not locking it down.

One X user put it well: "OpenAI, GitHub, and GitLab are all building open integration surfaces. Anthropic is building moats."

If your toolchain depends on Claude, you now need to have a platform-risk conversation with your team.

The Workaround Economy

Within hours of dax's announcement, someone shipped opencode-anthropic-oauth—a community plugin that restores the functionality. This is the pattern:

  1. Platform restricts access
  2. Community routes around it
  3. Cat and mouse ensues

This isn't sustainable for anyone. Users get unreliable tooling. Platforms spend resources on enforcement. Developers spend energy on workarounds instead of features.

What I'd Want to See

I don't think Anthropic is evil for protecting their business model. But I do think they're making a strategic mistake.

The developers who use terminal-based AI tools are exactly the power users who would pay for API access if the experience were good. Instead of fighting them, make the official path better than the workaround.

Offer a "developer subscription" that works in any interface. Or build official CLI tools that compete with OpenCode. The path forward is making the legitimate option more attractive than the workaround—not sending lawyers.

The AI tooling landscape is still early. The platforms that win developer trust now will have enormous advantages later. Legal enforcement might work in the short term, but developers remember.
