Infrastructure is shifting from reactive to preemptive, and that changes the risk profile.

This week, Anthropic committed more than $100 billion to AWS infrastructure built on Trainium chips, securing long-term compute capacity for training and deploying its AI models, including Claude, at global scale. The spend is expected to unfold over the next decade.

At the same time, Amazon said it will immediately invest $5 billion in Anthropic, with plans to commit up to another $20 billion tied to future milestones. The two companies have partnered since 2023, with Amazon having previously invested $8 billion in Anthropic.

Taken together, the collaboration highlights a broader shift in how AI is being scaled. Instead of reacting to demand, AI vendors are starting to lock in massive amounts of compute, power and infrastructure upfront, a move that raises both the ceiling for AI growth and the stakes if that demand doesn’t materialize as expected.


The developments come the same week engineering leaders from Oracle, Nvidia and Google took the stage at Data Center World 2026 and pointed to a broader shift toward infrastructure purpose-built for AI training and inference.

Overseas, the same pattern is taking shape. This week, Microsoft said it plans to spend $18 billion on AI infrastructure in Australia, part of a broader push to expand capacity across key regions.

Like Anthropic’s commitment to AWS, the investment reflects a shift toward building infrastructure in advance of demand rather than simply reacting to it. It also highlights how global that demand is expected to be, with companies racing to establish regional capacity to support both training and real-time AI workloads.

At the workload level, the picture is splitting in two. Data centers now handle both large-scale training and distributed inference: training runs on tightly coupled GPU clusters optimized for throughput, while inference demands low latency and high availability across geographically dispersed locations. That split is changing how infrastructure gets built.

Also in AI This Week:

Other coverage worth paying attention to this week ranges from how hyperscalers are rethinking AI infrastructure to how that approach is starting to show up in real-world deployments.

How Microsoft and Google Plan and Place AI Workloads

Microsoft and Google detailed how AI is forcing changes in data center design, from how workloads are placed to how power and infrastructure are managed at scale.

SpaceX Agrees to Potential $60B Deal to Acquire Cursor

SpaceX agreed to a potential $60 billion deal to acquire AI coding startup Cursor, signaling growing interest in AI-driven software development capabilities.


Accenture Showcases Humanoid Robot Warehouse Pilot

At the Hannover Messe 2026 show in Germany, Accenture showcased a warehouse pilot using humanoid robots, highlighting the progress and challenges of deploying robotics in real-world operations.

How Departing CEO Tim Cook Set Up Apple’s Enterprise Play

Executives at Apple outlined how the company is approaching enterprise growth, with a continued focus on integrating its hardware and ecosystem into business environments.

CIOs Caught in the Middle as AI Startups Disrupt Vertical SaaS

AI startups are reshaping enterprise software by disrupting workflows rather than core systems, leaving CIOs to navigate when and where to adopt them.