The Frontier Club Is Down to Three
The frontier model club just lost two more members.
Apple committed to a reported $1B/year Gemini license for Siri rather than building its own frontier model. xAI is "rebuilding from foundations" after losing nine of eleven co-founders in the SpaceX merger fallout. Add Meta's Avocado delay and reported Gemini licensing discussions from March 14, and three companies have exited the frontier race in six weeks.
The remaining frontier model producers: OpenAI, Anthropic, Google DeepMind. That's it.
The Exit Timeline
The pattern is worth mapping because it happened faster than any reasonable forecast would have predicted.
March 14: Meta delays Avocado. Internal benchmarks placed the model between Gemini 2.5 and Gemini 3, below competitive targets. Leadership discussed licensing Google Gemini for certain Meta products while Avocado catches up. The company spending $115-135 billion on AI infrastructure considered renting a competitor's model. I wrote at the time that the frontier club might be limited to 3-4 producers by end of 2026. That was nine days ago.
March 20: Apple chooses Gemini. The Siri revamp, expected in iOS 26.5 beta on March 30, runs on Google's Gemini under a multi-year deal reportedly worth $1 billion per year. Apple is building its own Foundation Models on top of Gemini, using Private Cloud Compute for privacy. But the foundation layer is Google's. Apple has $200B+ in cash reserves and some of the best ML talent on the planet. They looked at the frontier model race and chose to be a customer.
March 23: xAI loses its team. Six weeks after the $1.25 trillion SpaceX-xAI merger, Musk acknowledged the company "was not built right first time around." Nine of eleven co-founders have departed. The new hires, Devendra Singh Chaplot from Mistral and product engineering leads from Cursor, signal a pivot toward domain-specific AI for SpaceX operations, not a return to general-purpose frontier competition. Grok 5 is in training, but the organizational reset is more restart than rebuild.
Three exits. Six weeks. The prediction I made on March 14 ("3-4 frontier producers by end of 2026") needs a downward revision. We're already at three, and it's still March.
Why the Exits Happened
The comfortable assumption for the past two years was that frontier models were just an engineering problem. Get enough compute, hire enough researchers, feed enough data, and you'd produce a competitive model. Meta's $135 billion capex budget was the ultimate expression of this thesis.
The exits tell us that assumption was wrong. Frontier model development requires something beyond capital and talent. It requires a specific institutional capability that takes years to accumulate: the training infrastructure engineering, the alignment methodology, the evaluation frameworks, the failure-mode intuition that comes from shipping multiple model generations. You can't buy that on the market. You can only build it over time.
Apple's decision is the most telling. This is a company that built its own chips, its own operating systems, its own display technology. Apple vertically integrates everything. They chose not to vertically integrate frontier AI. That tells you the barrier is real and it's different from other technology barriers Apple has successfully crossed.
xAI's story is different but points to the same conclusion. Even with a $1.25 trillion merger providing effectively unlimited resources, losing the people who built the first version means losing the institutional knowledge that made it work. AI research capability is deeply embedded in teams, not transferable through documentation or hiring.
What Three Producers Means
Consolidation to three frontier producers doesn't mean less competition. It means the competition changes character.
With only three suppliers of frontier models, commoditization has a natural floor. The "models will be free" thesis (boosted this week by OpenClaw's viral growth and CNBC calling it a "ChatGPT moment") runs into a structural reality: oligopoly pricing dynamics replace open-market dynamics when there are only three producers. This is actually bullish for the remaining frontier labs and bearish for the "models are a commodity" narrative.
But there's a more interesting dynamic underneath the pricing question. The three remaining labs aren't competing on raw model capability anymore. The benchmarks are converging. GPT-5.4, Gemini 3, and Claude Opus 4.x are close enough on standard evaluations that the differences matter less than they did two years ago.
The competition has shifted to three other axes:
Reliability. Can I depend on this model to be consistent, to handle edge cases, to not break my production system when the provider pushes an update? Model deprecation cycles (OpenAI's 3-6 month windows, which I've been tracking since March 12) make this a real operational concern.
Trust posture. The Anthropic-Pentagon hearing tomorrow (March 24) is the live test of this axis. Does positioning yourself as the "responsible" AI lab create competitive advantage or contractual friction? The outcome directly shapes how every AI company structures its government and enterprise relationships going forward.
Ecosystem lock-in. OpenAI's acquisition of Astral (uv, Ruff, the Python tooling substrate), Anthropic's Claude Code Channels (ambient coding agent), Google's Gemini licensing deals (Apple, potentially Meta). Each lab is building walls, not just models. The model is the moat's foundation, but the actual moat is everything built on top of it.
The Value Migration
If models are concentrating into three producers, where does the value go that used to be spread across more competitors?
It flows upward. The orchestration layer (agent frameworks, workflow tools, integration platforms) is absorbing the value that the model layer is shedding. OpenClaw's 250K+ GitHub stars, the MCP protocol spreading to llama.cpp, OpenAI hiring OpenClaw's founder, Anthropic shipping Claude Code Channels: all of these are expressions of the same structural shift.
I'm calling this "The Great Value Migration." The historical parallel is cloud computing. When compute became a commodity (AWS, Azure, GCP), the value migrated to the platform and services layers built on top. Amazon Web Services didn't win because it had better servers. It won because it had better abstractions, better tooling, better developer experience layered over commodity compute.
The same thing is happening with AI models. The three remaining frontier labs won't differentiate primarily on model quality. They'll differentiate on the platform layer: the tooling, the developer experience, the trust framework, the ecosystem that makes their model the easiest and safest to build on.
What This Means for Builders
If you're building AI applications, the frontier consolidation pattern has practical implications.
The build-vs-buy decision at the model layer is resolved. If Apple, with effectively unlimited capital and world-class ML talent, decided buying is better than building, the answer for your company is the same. Invest your engineering effort in what you build on top of the model, not in the model itself.
Vendor concentration risk is real but manageable. Three frontier providers is a thinner market than ten, and switching costs between them are non-trivial. The inference stack abstraction layers we tracked on March 13 become more important, not less. Build your architecture so that swapping the underlying model provider is an operational decision, not an engineering project.
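One way to make provider-swapping an operational decision is a thin abstraction layer between your application and the vendor SDKs. The sketch below is illustrative only: `ModelProvider`, `EchoProvider`, and `get_provider` are hypothetical names, and the stub stands in for real vendor adapters.

```python
from abc import ABC, abstractmethod


class ModelProvider(ABC):
    """Minimal provider interface; concrete adapters wrap vendor SDKs."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return a completion for the given prompt."""


class EchoProvider(ModelProvider):
    # Stand-in for a real adapter (OpenAI, Anthropic, Gemini). A real
    # adapter would translate this interface into vendor API calls.
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"


def get_provider(name: str) -> ModelProvider:
    # Selection is driven by a config value, so swapping the underlying
    # vendor is a deployment change, not an engineering project.
    registry: dict[str, type[ModelProvider]] = {"echo": EchoProvider}
    return registry[name]()


provider = get_provider("echo")
print(provider.complete("hello"))  # -> echo: hello
```

The point of the registry is that application code only ever sees `ModelProvider`; adding a new vendor means writing one adapter class and one config entry.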
The commoditization narrative and the concentration narrative aren't contradictory. Open-source models (OpenClaw, llama.cpp with MCP, Qwen, DeepSeek) are genuinely getting better and will handle an increasing share of workloads. But for the highest-stakes applications, the ones where reliability, safety posture, and institutional support matter, the frontier labs hold the position. The market is splitting into two tiers, and the build-vs-buy decision is really a question of which tier your application sits in.
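The two-tier split can be read as a routing decision. The toy router below is an assumption-laden sketch: the field names (`safety_critical`, `regulated`) and the rule itself are hypothetical, not a known production policy.

```python
def choose_tier(task: dict) -> str:
    # Hypothetical routing rule: high-stakes work goes to a frontier
    # provider; everything else runs on a self-hosted open model.
    high_stakes = task.get("safety_critical", False) or task.get("regulated", False)
    return "frontier" if high_stakes else "open_source"


print(choose_tier({"safety_critical": True}))  # -> frontier
print(choose_tier({"regulated": False}))       # -> open_source
```

In practice the routing criteria would be richer (latency budgets, data-residency rules, cost ceilings), but the structural point stands: the tier decision lives in one function, per task, not scattered across the codebase.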
What I'm Watching
Three things will confirm or complicate this pattern in the next 30 days.
First, does Meta actually license Gemini? The Avocado delay started the pattern, but Meta hasn't signed a deal yet. If they do, the frontier club is definitively three. If Avocado ships at competitive quality by May, Meta could claw back frontier status.
Second, tomorrow's Anthropic-Pentagon hearing. If the court rules in ways that strengthen Anthropic's trust positioning, the trust axis becomes a harder competitive moat. If the ruling goes against Anthropic, it could reshape how all three remaining labs approach government contracts.
Third, OpenClaw's trajectory. If open-source agent frameworks continue closing the capability gap with frontier models, the "two-tier" market structure might collapse faster than expected. The llama.cpp MCP merge this week is an infrastructure signal worth tracking.
The frontier club went from six to three in six weeks. The next revision, whether it's three becoming two or three stabilizing as the new equilibrium, will tell us a lot about where AI goes from here.