Ilzam - Fractional CTO

Founder Decisions March 2026

Your AI Coding Tool's Biggest Risk Isn't the Price. It's the Business Model.

TL;DR

I pay $100/month for Claude Code (via Claude Max). Cursor Pro starts at $20/month. On paper, I'm paying 5x more. In practice, Cursor pays Anthropic ~$650M/year in API costs while earning ~$500M in revenue. That's a negative 30% gross margin. When you pay a middleman that loses money on every transaction, your pricing is one board meeting away from changing. I chose the tool built by the company that makes the model. Here's the full breakdown, with receipts.

Let me show you a number that should change how you think about AI coding tools.

Cursor, the AI code editor that 19% of developers say they love (per the Pragmatic Engineer's March 2026 survey), spends roughly $650 million per year on Anthropic API costs. Their annual revenue? About $500 million. The same analysis described the business as a "VC-subsidized wealth transfer mechanism."

When you pay Cursor $20/month, more than $20 of it goes to Anthropic for the AI model that powers it. The math does not work. It only works right now because venture capitalists are covering the gap, and VCs don't cover gaps forever.
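The back-of-envelope math here is worth spelling out. Using the two estimates cited above (~$500M annual revenue, ~$650M annual API spend; both from the same third-party analysis, not Cursor's books), the implied unit economics look like this:

```python
# Rough unit economics from the cited estimates (not audited figures).
revenue = 500_000_000    # ~annual revenue, USD
api_costs = 650_000_000  # ~annual Anthropic API spend, USD

# Gross margin: (revenue - cost of revenue) / revenue.
gross_margin = (revenue - api_costs) / revenue
print(f"Gross margin: {gross_margin:.0%}")  # -30%

# Implied average API cost backing each $20 of subscription revenue.
per_sub_cost = 20 * api_costs / revenue
print(f"API cost per $20 of revenue: ${per_sub_cost:.0f}")  # $26
```

In other words, at these estimates every $20 subscription is backed by roughly $26 of model costs on average, which is where the negative 30% gross margin in the TL;DR comes from.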

When you pay Anthropic $100/month for Claude Max, you're paying the company that makes the model. Their cost to serve you their own model? Marginal. There's no middleman. No margin pressure. No board meeting where someone says "we need to cut included requests by half."

That already happened with Cursor. Three times, in eight months.

The trust timeline

Founders care about pricing stability. When you're bootstrapping or pre-revenue, a surprise 2x increase in your tooling costs isn't an inconvenience. It's a budget crisis. Here's what happened to Cursor's pricing in under a year:

June 2025: The credit switch

Cursor replaced its fixed 500-request Pro plan with a credit-based system. The $20/month price stayed the same. The effective capacity dropped from ~500 requests to ~225 with Claude models. No advance warning. One Medium author documented it as a "20x silent price increase" for heavy users.

July 2025: The apology

CEO Michael Truell published a public apology: "We recognize that we didn't handle this pricing rollout well and we're sorry." Cursor offered refunds for unexpected charges. One user had already racked up $350 in overages in a single week.

September 2025: Unlimited dies

Cursor removed unlimited Auto mode, the feature many users had specifically subscribed for. It now consumes credits like everything else. Users who bought annual plans for unlimited Auto scrambled to understand whether they'd be grandfathered in. They weren't.

The forum thread that captures the sentiment best:

"I no longer trust Cursor. I would rather pay $200 for Claude Code than $200 for Cursor, even though I prefer the Cursor system."

Cursor Forum

If a company changes pricing three times in eight months, what does your Q3 budget look like?

The real pricing comparison

Forget feature lists. Here's what each company charges, what you actually get, and what the business model behind it means for you.

| | Cursor | Claude (Max) |
|---|---|---|
| Entry tier | $20/mo (Pro) | $20/mo (Pro) |
| Mid tier | $60/mo (Pro+) | $100/mo (Max 5x) |
| Top tier | $200/mo (Ultra) | $200/mo (Max 20x) |
| What you get | Credit pool that depletes at different rates per model | Usage multiplier (5x or 20x Pro limits) across all surfaces |
| Model access | Claude, GPT-4o, Gemini, others | Claude Sonnet 4.6, Opus 4.6 (1M context) |
| When credits run out | Slow queue (wait times) or pay-as-you-go overages | Optional overage at API rates, never hard-capped |
| Includes | Code editor only | Chat + Claude Code CLI + Desktop app |
| Pricing changes (last 12 mo) | 3 major changes, 1 public apology | Stable since launch |

The middle tier tells the story. Cursor has nothing between $60 and $200. Claude has a $100 sweet spot. For a solo technical founder who codes daily but isn't running background agents around the clock, that $100 tier is the one that actually fits.

And there's a consolidation angle. With Claude Max at $100, you get Claude chat for thinking through problems, Claude Code for implementing them, and Claude Desktop for everything else. One subscription, one context. With Cursor at $60, you get the editor, but if you also want Claude for non-coding work, that's a separate $20/month Pro subscription. Real cost: $80. For $20 more, Max gives you everything in one place.
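The consolidation math above is simple enough to sketch directly (prices as listed in the comparison table; the "stack" framing is mine):

```python
# Hypothetical solo-founder stack: an AI coding tool plus Claude chat
# for non-coding work (strategy, writing, research).
cursor_pro_plus = 60   # Cursor Pro+ ($/mo), editor only
claude_pro = 20        # separate Claude Pro for chat ($/mo)
claude_max_5x = 100    # Claude Max 5x: chat + Claude Code + Desktop ($/mo)

cursor_stack = cursor_pro_plus + claude_pro  # two bills, two contexts
max_stack = claude_max_5x                    # one bill, one context

print(cursor_stack, max_stack)   # 80 100
print(max_stack - cursor_stack)  # 20 -> the premium for consolidation
```

Twenty dollars a month is the entire gap between "editor plus separate chat subscription" and "everything under one plan."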

The business model question nobody asks

Cursor raised a $2.3B Series D at a $29.3B valuation in November 2025. As of March 2026, they're in talks for a round valuing them at ~$50B. Revenue exceeded $1B ARR in 24 months. Impressive numbers.

But the cost structure underneath is brutal. Cursor is a middleman: it builds a UI on top of models it doesn't make. When you use Claude through Cursor, Cursor pays Anthropic per token. When you use GPT through Cursor, Cursor pays OpenAI per token. The margins are razor-thin to negative.

Cursor knows this. In October 2025, they launched "Composer," their first proprietary coding model, to reduce dependency on Anthropic. That's a bet, not a solution. Building a model that matches Claude Sonnet's coding ability is a multi-year, multi-billion-dollar project. In the meantime, every Cursor user running Claude models is sending money from Cursor's pocket to Anthropic's.

Now look at Anthropic. Their ARR surged to $19B as of February 2026. Claude Code alone runs at over $2.5B in run-rate revenue. They raised a $30B Series G at a $380B valuation. They make the model AND the tool, so serving their own model costs them only marginal compute. They can price Claude Code aggressively because every Claude Code user also drives API consumption across the ecosystem.

The founder question

Do you want your primary development tool priced by a profitable model maker, or by a middleman racing to find margins before investors lose patience? That's not a features question. That's a business risk question.

What "seamless integration" actually means

Most comparisons frame this as "Cursor has a nice GUI" vs "Claude Code is terminal-only." That's the surface. The real question is: how many layers sit between you and the intelligence you're paying for?

Cursor: integration through abstraction

Cursor is a VS Code fork with AI woven into the editor. Inline completions, a composer panel, chat sidebar. The experience is polished. If you've used VS Code, the learning curve is near zero.

But the AI reaches you through Cursor's abstraction layer. When Anthropic ships a new capability (extended thinking, 1M context window, agent improvements), it shows up in Claude Code the same day. Cursor has to integrate it, test it, ship it. That lag matters when models improve monthly.

There's also a practical limit. While Cursor advertises a 200K token context window, users consistently report hitting effective limits at 70K-120K tokens due to internal truncation. Claude Code's 200K window (and Opus's 1M on Max) delivers what it promises. For large codebases, that's the difference between the tool understanding your architecture and the tool understanding a fragment of it.

Claude Code: integration through directness

Claude Code is a terminal agent. You describe what you want. It reads files, makes coordinated edits across the codebase, runs tests, fixes failures, and commits changes. The whole loop happens without you touching the editor.

As the 56kode team put it after switching from Cursor: "This migration is not just a tool change, it's a job change. I spend less time typing code and more time designing, orchestrating, and validating."

Claude Code also runs in VS Code, JetBrains, and even as a desktop app. The "terminal-only" framing is outdated. But even when running inside VS Code, Claude Code talks directly to Anthropic's API. There's no middleman interpreting the model's output or wrapping it in a proprietary UI layer.

The deeper split is philosophical. Cursor is an accelerator: you drive, AI assists with completions and suggestions. Claude Code is a delegator: you describe what you want, AI drives, you review results. A year-long Cursor power user at Builder.io abandoned Cursor after trying Claude Code, noting his workflow evolved "from Claude as a small sidebar to defaulting to Claude first and only peeking at code when reviewing changes."

The efficiency data

Not opinions. Numbers.

5.5x fewer tokens for the same task

Independent benchmarks found Claude Code uses 5.5x fewer tokens than Cursor for identical tasks, finishing faster with fewer errors.

30% less rework

Claude Code produces 30% less code churn (code rewritten or deleted within two weeks). It gets things right in the first or second iteration more often than Cursor does.

46% vs 19% developer satisfaction

The Pragmatic Engineer's March 2026 survey of 906 engineers: 46% named Claude Code as their "most loved" tool. Cursor: 19%. GitHub Copilot: 9%. Claude Code overtook both in just 8 months after public launch.

The counterintuitive finding

A rigorous METR study with 16 experienced developers found they were actually 19% slower with AI coding tools, despite self-reporting that they felt 20% faster. The gap came from time spent reviewing and rejecting generated code. If review-and-rejection time is the real bottleneck, then the tool that produces less rework is the one that actually saves time. That's Claude Code.

The risk dimension founders skip

Pricing and features get all the attention. Risk gets almost none. But for a founder handling user data or anything under compliance, these matter:

Security vulnerabilities

Cursor has had two notable CVEs. CurXecute (CVE-2025-54135) let attackers craft messages that, when summarized by Cursor's AI, rewrote MCP configuration files and executed arbitrary commands. MCPoison (CVE-2025-54136) enabled persistent team-wide compromise through shared repository configs. Both required Cursor's AI layer to be the attack vector, because that layer processes and transforms content in ways the user can't fully inspect.

Claude Code runs in your terminal. Your code doesn't pass through a third-party editor layer that can be exploited as an intermediary.

Privacy defaults

Cursor's Privacy Mode is off by default for Free and Pro users. That means Cursor collects prompts, code snippets, and telemetry unless you're on the Business tier ($40/user/month) or manually toggle it on. If you're writing code that touches user data, payment processing, or anything regulated, that default matters.

Platform dependency

Cursor is a VS Code fork. In April 2025, Microsoft quietly blocked Cursor from accessing certain VS Code extensions, including the C/C++ extension. Users found only 31 of 39 extensions available when importing from VS Code. Your editor depends on a company that views it as a competitor. Claude Code depends on nothing. It's a CLI that works alongside any editor you choose.

Credit depletion opacity

Cursor forum users report credits disappearing while the app is closed, Ultra plan credits exhausted in 20 days, and "abnormal usage on particular days that doesn't match actual activity." A team of 5 spent $4,600 in 6 weeks, roughly double their entire 2025 spend. When you can't predict or explain your costs, you can't budget.
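To put that anecdote in budget terms, here is a hypothetical back-of-envelope using the reported figures ($4,600 over 6 weeks for a team of 5; the annualization is my own rough conversion):

```python
# Back-of-envelope burn rate from the reported forum figures.
total_spend = 4600  # USD over the period
weeks = 6
team_size = 5

weekly_burn = total_spend / weeks
monthly_run_rate = weekly_burn * 52 / 12  # average weeks per month
per_dev_monthly = monthly_run_rate / team_size

print(f"Weekly burn: ${weekly_burn:,.0f}")
print(f"Monthly run rate: ${monthly_run_rate:,.0f}")
print(f"Per developer per month: ${per_dev_monthly:,.0f}")
```

That works out to roughly $660+ per developer per month, with no way to predict next month's number. A flat $100/month plan per developer may or may not be cheaper for your usage, but it is at least a number you can write into a budget.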

When Cursor is the better choice

This is not a "Cursor is bad" piece. Cursor is genuinely the right tool for certain situations:

You have a non-technical team

If your co-founder, designer, or junior developers need to use AI coding tools, Cursor's GUI is more accessible. The visual editor, inline suggestions, and familiar VS Code interface lower the barrier. Claude Code's terminal-first approach assumes comfort with command-line workflows.

You want model flexibility

Cursor lets you switch between Claude, GPT-4o, Gemini, and others within the same interface. If you value being able to pick the best model for each task, that optionality matters. Claude Code is Claude-only. You're betting on one model family.

You want inline tab completions

Claude Code doesn't do ambient inline completions while you type. If you want real-time predictive suggestions as you write code, Cursor's autocomplete is better. This is, per multiple reviews, the single biggest reason some developers bounce off Claude Code and go back.

Worth noting: 70% of developers use 2-4 AI tools simultaneously (per the Pragmatic Engineer survey). The hybrid play is real. But for a solo founder optimizing spend, consolidation wins.

What I chose and why

I use Claude Code on the $100/month Max plan. Here's why, in order of importance:

1. I trust the business model

Anthropic makes money when I use their model. The incentive is aligned: make the model better, I use it more, they earn more. Cursor makes money by reselling someone else's model at a loss. That only works while VC funding covers the gap. I don't want my primary development tool to depend on a company's ability to raise its next round.

2. The cost is predictable

$100/month. One bill. Claude chat, Claude Code, Claude Desktop, all included. I don't track credit burn rates across different models. I don't worry about which AI model depletes my credits faster. I don't wake up to surprise overages.

3. The terminal is fine

I'm a technical founder. I live in the terminal anyway. Claude Code doesn't ask me to change my workflow. It works alongside my editor (VS Code), my terminal, my git setup. If anything, the terminal-first approach is faster for me because I can pipe commands, script automations, and run multiple instances on different projects simultaneously. As one developer put it: "I literally feel like a superhero who can work on 2 or 3 projects at the same time."

4. Direct to the source

Anthropic makes the model and the tool. When they find a limitation in Claude Code, they can improve the model itself. When Cursor finds a limitation, they can only file a feature request. As one developer wrote: "Anthropic can give you the most possible value for the least possible price because you only have to worry about paying them."

The bigger picture

The AI coding tool market is going through the same pattern every middleware market goes through. First, wrappers emerge around a new technology (Cursor wrapping LLMs). Then the technology makers ship their own tools (Anthropic shipping Claude Code). Then the wrappers either find a defensible niche or get squeezed out.

Cursor sees this coming. They're building their own model. They're raising at $50B. They have real users and real revenue. I'm not predicting their failure. But I am saying that as a founder, I'd rather bet on the company that went from $1B to $19B ARR in 14 months by making the best coding model, than on the company that depends on that model to exist.

YC CEO Garry Tan said it plainly: "25% of our Winter 2025 batch had codebases that were 95% AI-generated." The founders using AI coding tools aren't optimizing for tab-complete speed. They're asking: which tool lets me build the most with the least overhead, the least risk, and the most predictable cost?

For me, that answer is $100/month, a terminal, and no middleman.


Not sure which AI tools fit your workflow?

I've shipped production code with both Cursor and Claude Code. If you want a second opinion on your AI-assisted development setup, book a call.

Book an Advisory Session