
Anthropic and Google officially announced their cloud partnership Thursday, a deal that gives the artificial intelligence company access to up to one million of Google’s custom-designed Tensor Processing Units, or TPUs.
The deal, worth tens of billions of dollars, is Anthropic's largest TPU commitment yet and is expected to bring well over a gigawatt of AI compute capacity online in 2026.
Industry estimates peg the cost of a 1-gigawatt data center at around $50 billion, with roughly $35 billion of that typically allocated to chips.
While competitors tout even loftier projections — OpenAI’s 33-gigawatt “Stargate” chief among them — Anthropic’s move is a quiet power play rooted in execution, not spectacle.
Founded by former OpenAI researchers, the company has deliberately adopted a slower, steadier ethos, one that is efficient, diversified, and laser-focused on the enterprise market.

A key to Anthropic’s infrastructure strategy is its multi-cloud architecture.
The company’s Claude family of language models runs across Google’s TPUs, Amazon’s custom Trainium chips, and Nvidia’s GPUs, with each platform assigned to specialized workloads like training, inference, and research.
Google said the TPUs offer Anthropic “strong price-performance and efficiency.”
“Anthropic and Google have a longstanding partnership and this latest expansion will help us continue to grow the compute we need to define the frontier of AI,” said Anthropic CFO Krishna Rao in a release.
Anthropic’s ability to spread workloads across vendors lets it fine-tune for price, performance, and power constraints.
According to a person familiar with the company's infrastructure strategy, every dollar of compute stretches further under this model than it would for companies locked into single-vendor architectures.
Google, for its part, is leaning into the partnership.
“Anthropic’s choice to significantly expand its usage of TPUs reflects the strong price-performance and efficiency its teams have seen with TPUs for several years,” said Google Cloud CEO Thomas Kurian in a release, touting the company’s seventh-generation “Ironwood” accelerator as part of a maturing portfolio.

Claude’s breakneck revenue growth
Anthropic’s escalating compute demand reflects its explosive business growth.
The company's annual revenue run rate is now approaching $7 billion, and Claude powers more than 300,000 businesses, a roughly 300-fold increase over the past two years. The number of large customers, each contributing more than $100,000 in run-rate revenue, has grown nearly sevenfold in the past year.
Claude Code, the company’s agentic coding assistant, generated $500 million in annualized revenue within just two months of launch, which Anthropic claims makes it the “fastest-growing product” in history.
While Google is powering Anthropic’s next phase of compute expansion, Amazon remains its most deeply embedded partner.
The retail and cloud giant has invested $8 billion in Anthropic to date, more than double Google’s confirmed $3 billion in equity.
AWS also remains Anthropic's chief cloud provider, making its influence structural, not just financial.
Amazon's custom-built supercomputer for Claude, known as Project Rainier, runs on the company's Trainium 2 chips. That choice matters not just for speed but for cost: Trainium avoids the premium margins of other chips, enabling more compute per dollar spent.

Wall Street is already seeing results.
Rothschild & Co Redburn analyst Alex Haissl estimated that Anthropic added one to two percentage points to AWS’s growth in last year’s fourth quarter and this year’s first, with its contribution expected to exceed five points in the second half of 2025.
Wedbush’s Scott Devitt previously told CNBC that once Claude becomes a default tool for enterprise developers, that usage flows directly into AWS revenue — a dynamic he believes will drive AWS growth for “many, many years.”
Google, meanwhile, continues to play a pivotal role. In January, the company agreed to a new $1 billion investment in Anthropic, adding to its previous $2 billion and 10% equity stake.
Critically, Anthropic's multicloud approach proved resilient during Monday's AWS outage: Claude was unaffected, thanks to its diversified architecture.
Still, Anthropic isn’t playing favorites. The company maintains control over model weights, pricing, and customer data — and has no exclusivity with any cloud provider. That neutral stance could prove key as competition among hyperscalers intensifies.

