The Frontier Club Is Shrinking: Meta's $135B Question
Meta delayed Avocado.
That's the headline, and on its own it's not particularly surprising. Models get delayed. Benchmarks disappoint. Timelines slip. The AI industry runs on optimistic schedules and quiet revisions.
But Meta didn't just delay Avocado. Internal benchmarks placed the model between Gemini 2.5 and Gemini 3, below competitive targets. And leadership reportedly discussed licensing Google's Gemini for certain Meta products while Avocado catches up.
Read that sequence again. The company spending $115-135 billion on AI infrastructure this year considered renting a competitor's model because its own wasn't good enough.
The Independence Strategy Cracks
Meta's AI strategy was built on independence. The Llama ecosystem gave them open-weight credibility. The 6GW AMD GPU deal (which we tracked on March 12) gave them supply chain diversification. The Avocado/Mango agent framework was designed to be the first mass-market agent deployment, optimized for Meta's advertising flywheel.
All of this was predicated on one assumption: Meta would have its own competitive frontier model. The agents would run on Meta's model. The ecosystem would train on Meta's weights. The advertising optimization would happen inside Meta's inference stack.
The Gemini licensing discussion breaks that chain. If Meta ships its agent framework on a licensed Gemini backbone, the economics change entirely. The margins change. The dependencies change. The strategic narrative of "we don't need anyone else's model" collapses.
And $135 billion starts looking like a very different kind of bet.
Counting the Frontier Producers
Here's the question Meta's delay forces: how many organizations can actually produce frontier models?
The comfortable answer has been "several." OpenAI, Google, Anthropic, Meta, xAI, maybe Mistral and a few Chinese labs. The market has been pricing in a world where 7-10 organizations compete at the frontier, each with their own models, each pushing the capability boundary.
Meta's Avocado delay suggests the real number is smaller. Possibly much smaller.
Consider what it takes. Not just compute, which Meta has in abundance. Not just data, which Meta has more of than almost anyone. Not just talent, which Meta can hire. It takes the specific combination of training infrastructure, data pipeline engineering, alignment methodology, and institutional knowledge that produces a model competitive with GPT-5.x, Gemini 3.x, and Claude Opus 4.x.
Meta has spent years and tens of billions building toward this. They have some of the best AI researchers in the world. And they're still potentially licensing Gemini.
If Meta can't keep pace, who else can?
Three or Four. Maybe.
The true frontier producers today look like this:
OpenAI has GPT-5.4 with thinking capabilities, 1M context, Excel integration at 1.5B user scale. They just released GPT-OSS to defend their ecosystem flank. Whatever you think of the company, they're producing frontier models.
Google DeepMind has Gemini 3.x powering Pentagon agents, Maps integration, and the model that Meta reportedly considered licensing. The research depth (D4RT, Agent Designer) suggests sustained capability. They're producing frontier models.
Anthropic has Claude Opus 4.x with the trust positioning and a partner network whose enterprise penetration has grown from 4% to 20%. Claude Code is their fastest-growing product. They're producing frontier models.
After that? The picture gets blurry.
xAI has Grok and access to X/Twitter data, but hasn't demonstrated sustained frontier capability across multiple generations. Mistral punches above its weight but operates at a fraction of the compute budget. Chinese labs (Baidu, Alibaba, ByteDance) are competitive in their markets but face export controls and data restrictions that complicate global frontier status.
And now Meta, with the largest AI capex commitment in history, is showing cracks.
What This Means for Builders
If the frontier club is 3-4 organizations rather than 7-10, the industry structure looks fundamentally different.
First, the build-vs-buy decision at the model layer resolves toward "buy" for almost everyone. If Meta can't justify building its own frontier model, your company certainly can't. The practical question becomes which frontier model to build on, not whether to build your own.
Second, differentiation moves up the stack. If everyone is consuming the same 3-4 frontier models, competitive advantage comes from what you build on top: fine-tuning, agent architectures, domain-specific data, user experience, distribution. Meta's actual advantage was never the model itself. It was the 3-billion-user distribution and the advertising optimization layer. A licensed Gemini underneath doesn't change that.
Third, the bargaining dynamics shift. Fewer frontier producers means more pricing power for those producers. If you're building critical infrastructure on Claude or GPT-5 and there are only 3-4 alternatives, switching costs are real and the provider knows it. The inference stack convergence we tracked on March 13 mitigates this through abstraction layers, but only partially.
The Capex Question
Here's where it gets uncomfortable. Meta is spending $115-135 billion on AI infrastructure. If they're licensing someone else's model, what is that money actually buying?
The answer matters for the entire AI investment thesis. If the largest AI spender in history can't translate capex into a competitive frontier model, it raises questions about the returns on AI infrastructure investment across the board. Larry Fink's bankruptcy warning, which we cover separately today, looks even more prescient in this context. Companies running "third, fourth, and fifth" in the race aren't just at risk of falling behind. They may be spending billions on infrastructure for models that never achieve frontier status.
Meta's capex will still buy inference capacity, training runs for specialized models, and infrastructure for their agent ecosystem. But the narrative of "spend enough and you'll reach the frontier" is breaking down. The frontier isn't just expensive. It requires something that money alone can't buy.
What I'm Watching
Three things will tell us whether this is a temporary setback or a structural shift.
First, does Avocado reach competitive benchmarks by May? If Meta hits its revised timeline and produces a model competitive with Gemini 3 and GPT-5.4, this is a delay, not a strategy change. If May comes and goes with another revision, the licensing discussion becomes a licensing decision.
Second, does Meta actually license Gemini for any product? The discussion was reportedly internal. If it becomes a real deal, that's the clearest signal yet that the frontier club has closed its membership.
Third, what does GTC reveal? Nvidia's Rubin GPUs (288GB HBM4, 5x Blackwell throughput) could change the compute economics that determine who can and can't reach the frontier. If the hardware gets dramatically cheaper, the club could open up again. If not, the current members are the members.