
Nvidia has established itself as the undisputed leader in artificial intelligence chips, selling large quantities of silicon to most of the world’s biggest tech companies en route to a $4.5 trillion market cap.
One of Nvidia’s key clients is Google, which has been loading up on the chipmaker’s graphics processing units, or GPUs, to try to keep pace with soaring demand for AI compute power in the cloud.
While there’s no sign that Google will be slowing its purchases of Nvidia GPUs, the internet giant is increasingly showing that it’s not just a buyer of high-powered silicon. It’s also a developer.
On Thursday, Google announced that its most powerful chip yet, called Ironwood, is being made widely available in the coming weeks. It’s the seventh generation of Google’s Tensor Processing Unit, or TPU, the company’s custom silicon that’s been in the works for more than a decade.
TPUs are application-specific integrated circuits, or ASICs, which play a crucial role in AI by providing highly specialized and efficient hardware for particular tasks. Google says Ironwood is designed to handle the heaviest AI workloads, from training large models to powering real-time chatbots and AI agents, and is more than four times faster than its predecessor. AI startup Anthropic plans to use up to 1 million of them to run its Claude model.
For Google, TPUs offer a competitive edge at a time when all the hyperscalers are rushing to build mammoth data centers, and AI processors can’t get manufactured fast enough to meet demand. Other cloud companies are taking a similar approach, but are well behind in their efforts.
Amazon Web Services made its first cloud AI chip, Inferentia, available to customers in 2019, followed by Trainium three years later. Microsoft didn’t announce its first custom AI chip, Maia, until the end of 2023.
“Of the ASIC players, Google’s the only one that’s really deployed this stuff in huge volumes,” said Stacy Rasgon, an analyst covering semiconductors at Bernstein. “For other big players, it takes a long time and a lot of effort and a lot of money. They’re the furthest along among the other hyperscalers.”
Google didn’t provide a comment for this story.
Originally developed for internal workloads, Google’s TPUs have been available to cloud customers since 2018. Of late, Nvidia has shown some level of concern. When OpenAI signed its first cloud contract with Google earlier this year, the announcement spurred Nvidia CEO Jensen Huang to initiate further talks with the AI startup and its CEO, Sam Altman, according to reporting by The Wall Street Journal.
Unlike Nvidia, Google isn’t selling its chips as hardware, but rather providing access to TPUs as a service through its cloud, which has emerged as one of the company’s big growth drivers. In its third-quarter earnings report last week, Google parent Alphabet said cloud revenue increased 34% from a year earlier to $15.15 billion, beating analyst estimates. The company ended the quarter with a business backlog of $155 billion.
“We are seeing substantial demand for our AI infrastructure products, including TPU-based and GPU-based solutions,” CEO Sundar Pichai said on the earnings call. “It is one of the key drivers of our growth over the past year, and I think on a going-forward basis, I think we continue to see very strong demand, and we are investing to meet that.”
Google doesn’t break out the size of its TPU business within its cloud segment. Analysts at D.A. Davidson estimated in September that a “standalone” business consisting of TPUs and Google’s DeepMind AI division could be valued at about $900 billion, up from an estimate of $717 billion in January. Alphabet’s current market cap is more than $3.4 trillion.
‘Tightly targeted’ chips
Customization is a major differentiator for Google. One critical advantage, analysts say, is the efficiency TPUs offer customers relative to competitive products and services.
“They’re really making chips that are very tightly targeted for their workloads that they expect to have,” said James Sanders, an analyst at Tech Insights.
Rasgon said that efficiency is going to become increasingly important because with all the infrastructure that’s being built, the “likely bottleneck probably isn’t chip supply, it’s probably power.”
On Tuesday, Google announced Project Suncatcher, which explores “how an interconnected network of solar-powered satellites, equipped with our Tensor Processing Unit (TPU) AI chips, could harness the full power of the Sun.”
As a part of the project, Google said it plans to launch two prototype solar-powered satellites carrying TPUs by early 2027.
“This approach would have tremendous potential for scale, and also minimizes impact on terrestrial resources,” the company said in the announcement. “That will test our hardware in orbit, laying the groundwork for a future era of massively-scaled computation in space.”
Dario Amodei, co-founder and chief executive officer of Anthropic, at the World Economic Forum in 2025.
Google’s largest TPU deal on record landed late last month, when the company announced a massive expansion of its agreement with OpenAI rival Anthropic valued in the tens of billions of dollars. With the partnership, Google is expected to bring well over a gigawatt of AI compute capacity online in 2026.
“Anthropic’s choice to significantly expand its usage of TPUs reflects the strong price-performance and efficiency its teams have seen with TPUs for several years,” Google Cloud CEO Thomas Kurian said at the time of the announcement.
Google has invested $3 billion in Anthropic. And while Amazon remains Anthropic’s most deeply embedded cloud partner, Google is now providing the core infrastructure to support the next generation of Claude models.
“There is such demand for our models that I think the only way we would have been able to serve as much as we’ve been able to this year is this multi-chip strategy,” Anthropic Chief Product Officer Mike Krieger told CNBC.
That strategy spans TPUs, Amazon Trainium and Nvidia GPUs, allowing the company to optimize for cost, performance and redundancy. Krieger said Anthropic did a lot of up-front work to make sure its models can run equally well across the silicon providers.
“I’ve seen that investment pay off now that we’re able to come online with these massive data centers and meet customers where they are,” Krieger said.
Hefty spending is coming
Two months before the Anthropic deal, Google forged a six-year cloud agreement with Meta worth more than $10 billion, though it’s not clear how much of the arrangement includes use of TPUs. And while OpenAI said it will start using Google’s cloud as it diversifies away from Microsoft, the company told Reuters it’s not deploying TPUs.
Alphabet CFO Anat Ashkenazi attributed Google’s cloud momentum in the latest quarter to rising enterprise demand for Google’s full AI stack. The company said it signed more billion-dollar cloud deals in the first nine months of 2025 than in the previous two years combined.
“In GCP, we see strong demand for enterprise AI infrastructure, including TPUs and GPUs,” Ashkenazi said, adding that users are also flocking to the company’s latest Gemini offerings as well as services “such as cybersecurity and data analytics.”

Amazon, which reported 20% growth in its market-leading cloud infrastructure business last quarter, has expressed similar sentiment.
AWS CEO Matt Garman told CNBC in a recent interview that the company’s Trainium chip series is gaining momentum. He said “every Trainium 2 chip we land in our data centers today is getting sold and used,” and he promised further performance gains and efficiency improvements with Trainium 3.
Shareholders have shown a willingness to stomach hefty investments.
Google just raised the high end of its capital expenditures forecast for the year to $93 billion, up from prior guidance of $85 billion, with an even steeper ramp expected in 2026. The stock price soared 38% in the third quarter, its best quarterly performance in 20 years, and is up another 17% in the fourth quarter.
Mizuho recently pointed to Google’s distinct cost and performance advantage with TPUs, noting that while the chips were originally built for internal use, Google is now winning external customers and bigger workloads.
Morgan Stanley analysts wrote in a report in June that while Nvidia will likely remain the dominant provider of AI chips, growing developer familiarity with TPUs could become a meaningful driver of Google Cloud growth.
And analysts at D.A. Davidson said in September that they see so much demand for TPUs that Google should consider selling the systems “externally to customers,” including frontier AI labs.
“We continue to believe that Google’s TPUs remain the best alternative to Nvidia, with the gap between the two closing significantly over the past 9-12 months,” they wrote. “During this time, we’ve seen growing positive sentiment around TPUs.”

