Engineering · Apr 12, 2026

Fixing Linear MCP: Why We Switched to Streamable HTTP

How we debugged and fixed unstable Linear MCP connections by switching to streamable_http transport—a small change with big reliability wins.

Sometimes the most impactful engineering work isn't about building new features—it's about fixing the infrastructure that makes everything else possible. This week we shipped a fix to our Linear MCP integration that falls squarely in that category.

What Changed

We updated our Linear MCP (Model Context Protocol) integration to use streamable_http transport instead of the previous transport method. The change was surgical—a single configuration update—but the impact was immediate.
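In the official Python MCP SDK, that switch looks roughly like this. This is a minimal sketch, not our production code: the endpoint URL is a placeholder, and error handling is omitted.

```python
# Sketch using the official Python MCP SDK (`mcp` package).
# LINEAR_MCP_URL is a placeholder, not our actual endpoint.
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

LINEAR_MCP_URL = "https://example.com/linear/mcp"  # placeholder

async def connect_to_linear():
    # streamablehttp_client multiplexes the session over HTTP,
    # rather than holding a long-lived stream that can silently drop.
    async with streamablehttp_client(LINEAR_MCP_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            return await session.list_tools()
```

The application code above the session is unchanged; only the transport layer underneath it was swapped.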

For context: MCP is how Strug Works agents communicate with Linear to read issue details, update statuses, and post completion summaries. When that connection is flaky, agents can't close the loop on their work. Tasks complete successfully but never get marked done in Linear. Status updates vanish. The agent did its job, but the human team never knows.
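The "close the loop" step that a flaky transport breaks can be sketched as a thin wrapper. The tool name and client interface here are hypothetical, not Linear's actual MCP schema:

```python
# Hedged sketch: `client` is any object exposing call_tool(name, args);
# the tool name "update_issue" is illustrative, not Linear's real schema.

def mark_done(client, issue_id, summary):
    """Mark an issue done and post a completion summary.

    Returns True on success, False if the MCP call failed, so the
    agent can report the gap instead of silently losing the update.
    """
    try:
        client.call_tool("update_issue", {
            "id": issue_id,
            "state": "Done",
            "comment": summary,
        })
        return True
    except ConnectionError:
        # A dropped connection here is exactly the failure mode that
        # left completed work invisible to the human team.
        return False
```

The point of returning a boolean rather than swallowing the error is that the agent itself knows whether the loop actually closed.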

Why It Matters

The streamable_http transport gives us better connection stability and more predictable error handling. Where the old transport would occasionally drop connections mid-request, streamable_http maintains persistent connections and handles network interruptions gracefully.

This matters because reliability compounds. When agents can't reliably update Linear, engineering teams lose trust in the automation. They start double-checking every status. They hesitate to dispatch new work. The entire value proposition of autonomous agents—that they handle the full lifecycle—breaks down.

We're building infrastructure for teams that want to trust their AI agents with real work. That requires boring reliability. The streamable_http transport gives us exactly that: fewer surprises, clearer error messages, and connections that stay up.

What's Next

This fix is part of a broader integration stability push. We're auditing all external service connections—GitHub, Sanity, Supabase—to ensure they use the most reliable transport and retry strategies available.
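The retry strategy part of that audit can be sketched generically. The attempt count and delays below are illustrative defaults, not our production values:

```python
import time

def with_retries(call, attempts=3, base_delay=0.5, sleep=time.sleep):
    """Run `call`, retrying transient ConnectionErrors with
    exponential backoff: base_delay, then 2x, 4x, ... between tries."""
    for attempt in range(attempts):
        try:
            return call()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries: surface the failure loudly
            sleep(base_delay * (2 ** attempt))
```

Wrapping each integration call this way means a brief network blip costs a short delay instead of a lost status update, while a persistent outage still fails visibly.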

We're also building better observability around these connections. When a Linear update fails, we want Strug Central to surface that immediately—not as a cryptic error in logs, but as a clear signal in the Strug Stream that agents and humans can both act on.
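One shape that signal could take is a structured event rather than a log line. The field names here are hypothetical, not Strug Central's actual event schema:

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class IntegrationFailure:
    """A failure event both humans and agents can act on,
    instead of a stack trace buried in logs."""
    service: str      # e.g. "linear"
    operation: str    # e.g. "update_issue"
    issue_id: str
    error: str
    retryable: bool
    timestamp: float

def emit_failure(stream, service, operation, issue_id, error, retryable=True):
    # `stream` is a stand-in for the real event stream; here it is
    # just a list that collects JSON-encoded events.
    event = IntegrationFailure(service, operation, issue_id,
                               str(error), retryable, time.time())
    stream.append(json.dumps(asdict(event)))
    return event
```

Because the event carries `retryable`, an agent can decide on its own whether to re-attempt the update or escalate to a human.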

The goal is simple: make integration failures visible and recoverable. Agents should know when they can't update Linear, humans should see it in real-time, and both should have a clear path to resolution. That's the kind of infrastructure that lets teams scale their trust in autonomous agents.