The $900B Question: When Does Valuation Become Detached from Reality?

Anthropic's reported $900 billion valuation represents the logical endpoint of a funding frenzy that has lost all contact with measurable outcomes. The company is reportedly asking investors to commit capital within 48 hours, a timeline designed to manufacture urgency rather than enable due diligence. This isn't fundraising; it's a game of musical chairs in which everyone pretends the music will never stop.

The valuation assumes Claude will eventually capture vast enterprise markets and command pricing power that rivals or exceeds OpenAI's. Yet the evidence remains thin. While OpenAI battles xAI in court over model distillation—a problem that only matters if your models are worth stealing—the market for frontier AI remains murky. Anthropic's $900 billion price tag is a bet that the next three to five years will dramatically clarify this picture. History suggests such bets rarely pay off at the price being asked.

The Adoption Paradox: Billions Spent, Users Increasingly Skeptical

May's data paints a damning portrait of consumer AI adoption. Meta lost 20 million users last quarter yet plans to spend billions more on AI infrastructure. Young people increasingly hate using AI products despite three years of aggressive marketing. ChatGPT Images 2.0 is finding traction only in India, a geographic exception that underscores how tepid demand is everywhere else. These aren't niche concerns; they're signals that the consumer AI narrative has fractured.

The enterprise story is more complex but equally revealing. Salesforce crowdsourcing its AI roadmap from customers suggests the company still lacks clarity on what enterprises actually need. Google rushing Gemini into vehicles feels less like customer pull and more like competitive desperation. The pattern repeats across the industry: deployment velocity increasing while product-market fit remains elusive. Billions of dollars chasing a market that hasn't yet formed.

The Commodity Arms Race: Infrastructure, Agents, and the Squeeze on Margins

SoftBank's plan to build a robotics company that constructs data centers reveals the AI industry's true obsession: not breakthrough capabilities, but infrastructure dominance. Meanwhile, Amazon's surging cloud business and massive capital spending show what real AI leverage looks like—not in models, but in the resources required to train them. The companies that win won't be the ones with the smartest models; they'll be the ones that own the hardware, the electricity, and the tooling.

The agentic AI layer is similarly commoditizing. Vercel's Open Agents, Cloudflare's Agent Memory, Stripe's wallet integration for AI spending—these are the infrastructure plays that matter. They're not attempting to build intelligence; they're building the plumbing that will route trillions of dollars through AI decision-making. The margins compress as the tools proliferate. By 2027, agents won't be a differentiator; they'll be table stakes.

Lawfare and Liability: The Musk v. Altman Trial Exposes Industry Rot

Elon Musk's testimony that xAI trained Grok on OpenAI's models crystallizes a question the industry has been dodging: who owns what, and what constitutes fair competition in an ecosystem built on opaque training practices? The trial transcripts—confusing even to observers—suggest a legal framework struggling to catch up with technological reality. But more importantly, they reveal that the industry's titans are increasingly willing to litigate over model ownership because the stakes are existential.

OpenAI's parallel move to restrict Cyber (its cybersecurity model) to "critical cyber defenders" and Anthropic's earlier limitations on Mythos show companies learning a hard lesson: open access creates liability. Every model in the wild is a potential vector for manipulation, misuse, or regulatory action. The age of freely available frontier models may be ending, not because of principled safety concerns, but because of legal and reputational risk. The gap between the industry's rhetoric of openness and its practice of restriction will define its next phase.

The Verification Problem: When Authenticity Becomes Scarce

Spotify's launch of artist verification badges, and Meta's acquisition of Manus to run increasingly desperate get-rich-quick ads, illustrate a problem no amount of compute can solve: trust erosion. As AI-generated content becomes indistinguishable from human creation, society's response isn't to celebrate the technical achievement but to demand proof of authenticity. We're moving from a world where AI is novel and attention-grabbing to one where AI output is a liability that must be labeled.

This shift has profound implications for the companies investing billions in generative AI. If the economic value of an AI service depends on cryptographic verification or trusted badges rather than raw capability, the moat around that service shrinks dramatically. The Verge's line, "all these smart glasses and nothing to do," captures the core problem: hardware and software keep advancing while the actual use cases remain thin and uncertain. Authentication and verification are the new battlegrounds.
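To make the point concrete: once value rests on attestation rather than capability, "verification" reduces to a small cryptographic check anyone can implement. Below is a minimal sketch using Python's standard library, assuming a simple HMAC-based scheme; the key name and functions are hypothetical illustrations, and a real provenance system (e.g. C2PA-style) would use asymmetric signatures so verifiers never hold the secret.

```python
import hashlib
import hmac

# Hypothetical platform signing key (illustration only; a production scheme
# would use asymmetric keys such as Ed25519).
PLATFORM_KEY = b"platform-signing-key"

def issue_badge(content: bytes) -> str:
    """Platform attests that this exact content came from a verified artist."""
    return hmac.new(PLATFORM_KEY, content, hashlib.sha256).hexdigest()

def verify_badge(content: bytes, badge: str) -> bool:
    """Check that the content was not altered after attestation."""
    expected = hmac.new(PLATFORM_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, badge)

track = b"...audio bytes..."
badge = issue_badge(track)
print(verify_badge(track, badge))         # unmodified content passes
print(verify_badge(track + b"x", badge))  # any tampering invalidates the badge
```

The uncomfortable implication for investors is visible in the code: nothing here depends on a frontier model. The scarce asset is the key and the institution trusted to hold it, not the intelligence that generated the content.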

The Recruitment Myth: Why Everyone Needs AI Skills But Nobody Wants AI Jobs

Amid the hype cycle, the skill acquisition narrative persists. Training courses promise to teach AI engineering, Python decorators, and stochastic programming. But May's hiring patterns tell a different story: the companies scaling fastest (Amazon, Google, SoftBank) are automating away the very skills these courses teach. The demand isn't for AI specialists; it's for infrastructure engineers, security experts, and compliance officers—the people managing risk rather than pushing capability.

The irony is bitter: the industry's pitch to the next generation of engineers is increasingly hollow. By the time a developer masters the latest agentic framework or learning technique, the industry has moved on or built abstractions that make specialization obsolete. The real career path in AI isn't becoming a researcher or engineer; it's becoming the person who manages the liability and regulatory exposure of AI systems deployed at scale. May 2026 shows us an industry consolidating around infrastructure and risk management, not innovation.

All Stories This Period