June 13, 2025
AMD reveals next-generation AI chips with OpenAI CEO Sam Altman
The MI400 chips can be assembled into a full server rack called Helios.

Lisa Su, CEO of Advanced Micro Devices, testifies during the Senate Commerce, Science and Transportation Committee hearing titled “Winning the AI Race: Strengthening U.S. Capabilities in Computing and Innovation,” in the Hart Building on Thursday, May 8, 2025.

Tom Williams | CQ-Roll Call, Inc. | Getty Images

Advanced Micro Devices on Thursday unveiled new details about its next-generation AI chips, the Instinct MI400 series, which will ship next year.

AMD said the MI400 chips can be assembled into a full server rack called Helios, enabling thousands of the chips to be tied together and used as one “rack-scale” system.

“For the first time, we architected every part of the rack as a unified system,” AMD CEO Lisa Su said at a launch event in San Jose, California, on Thursday.

OpenAI CEO Sam Altman appeared onstage with Su and said his company would use the AMD chips.

“When you first started telling me about the specs, I was like, there’s no way, that just sounds totally crazy,” Altman said. “It’s gonna be an amazing thing.”

AMD’s rack-scale setup will make the chips look to a user like one system, which is important for most artificial intelligence customers like cloud providers and companies that develop large language models. Those customers want “hyperscale” clusters of AI computers that can span entire data centers and use massive amounts of power.

“Think of Helios as really a rack that functions like a single, massive compute engine,” said Su, comparing it against Nvidia’s Vera Rubin racks, which are expected to be released next year.

OpenAI CEO Sam Altman poses during the Artificial Intelligence (AI) Action Summit at the Grand Palais in Paris on February 11, 2025.

Joel Saget | AFP | Getty Images

AMD’s rack-scale technology also enables its latest chips to compete with Nvidia’s Blackwell chips, which already come in configurations with 72 graphics-processing units stitched together. Nvidia is AMD’s only major rival in big data center GPUs for developing and deploying AI applications.

OpenAI, a notable Nvidia customer, has been giving AMD feedback on its MI400 roadmap, the chip company said. With the MI400 chips and this year’s MI355X chips, AMD plans to compete with Nvidia on price. A company executive told reporters on Wednesday that the chips will cost less to operate thanks to lower power consumption and that AMD is undercutting Nvidia with “aggressive” prices.

So far, Nvidia has dominated the market for data center GPUs, partially because it was the first company to develop the kind of software needed for AI developers to take advantage of chips originally designed to display graphics for 3D games. Over the past decade, before the AI boom, AMD focused on competing against Intel in server CPUs.

Su said that AMD’s MI355X can outperform Nvidia’s Blackwell chips, despite Nvidia using its “proprietary” CUDA software.

“It says that we have really strong hardware, which we always knew, but it also shows that the open software frameworks have made tremendous progress,” Su said.

AMD shares are flat so far in 2025, signaling that Wall Street doesn’t yet see it as a major threat to Nvidia’s dominance.

Andrew Dieckmann, AMD’s general manager for data center GPUs, said Wednesday that AMD’s AI chips would cost less to operate and less to acquire.

“Across the board, there is a meaningful cost of acquisition delta that we then layer on our performance competitive advantage on top of, so significant double-digit percentage savings,” Dieckmann said.

Over the next few years, big cloud companies and countries alike are poised to spend hundreds of billions of dollars to build new data center clusters around GPUs in order to accelerate the development of cutting-edge AI models. That includes $300 billion this year alone in planned capital expenditures from megacap technology companies.

AMD is expecting the total market for AI chips to exceed $500 billion by 2028, although it hasn’t said how much of that market it can claim — Nvidia has over 90% of the market currently, according to analyst estimates.

Both companies have committed to releasing new AI chips on an annual basis, as opposed to once every two years, emphasizing how fierce competition has become and how important bleeding-edge AI chip technology is for companies like Microsoft, Oracle and Amazon.

AMD has bought or invested in 25 AI companies in the past year, Su said, including ZT Systems, a server maker it acquired earlier this year that developed the technology AMD needed to build its rack-sized systems.

“These AI systems are getting super complicated, and full-stack solutions are really critical,” Su said.

What AMD is selling now

Currently, the most advanced AMD AI chip being installed by cloud providers is its Instinct MI355X, which the company said started shipping in production last month. AMD said that it would be available for rent from cloud providers starting in the third quarter.

Companies building large data center clusters for AI want alternatives to Nvidia, not only to keep costs down and provide flexibility, but also to fill a growing need for “inference,” or the computing power needed for actually deploying a chatbot or generative AI application, which can use much more processing power than traditional server applications.

“What has really changed is the demand for inference has grown significantly,” Su said.

AMD officials said Thursday that they believe their new chips are superior to Nvidia’s for inference. That’s because AMD’s chips are equipped with more high-speed memory, which allows bigger AI models to run on a single GPU.
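
The reasoning behind that memory claim can be made concrete with some back-of-envelope arithmetic. The sketch below is a rough illustration, not vendor data: the parameter counts, the 2-bytes-per-parameter precision, and the 288 GB capacity are all assumptions chosen for the example.

```python
# Rough sketch: do a model's weights fit in a single GPU's memory?
# All figures here are illustrative assumptions, not published specs.

def weights_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """GB needed to hold the weights alone (2 bytes/param for FP16/BF16)."""
    # (params_billions * 1e9 params) * bytes_per_param / (1e9 bytes per GB)
    return params_billions * bytes_per_param

HBM_GB = 288  # assumed per-GPU high-bandwidth memory capacity

for params_b in (70, 140, 200):
    need = weights_gb(params_b)
    verdict = "fits on one GPU" if need <= HBM_GB else "must be split across GPUs"
    print(f"{params_b}B params -> ~{need:.0f} GB of weights: {verdict}")
```

The more memory on the package, the larger the model that fits on a single GPU, and the less cross-GPU communication inference requires.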

The MI355X has seven times the computing power of its predecessor, AMD said. Those chips will be able to compete with Nvidia’s B100 and B200 chips, which have been shipping since late last year.

AMD said that its Instinct chips have been adopted by seven of the 10 largest AI customers, including OpenAI, Tesla, xAI, and Cohere.

Oracle plans to offer clusters with over 131,000 MI355X chips to its customers, AMD said.

Officials from Meta said Thursday that the company was using clusters of AMD’s CPUs and GPUs to run inference for its Llama model, and that it plans to buy AMD’s next-generation servers.

A Microsoft representative said that it uses AMD chips to serve its Copilot AI features.

Competing on price

AMD declined to say how much its chips cost — it doesn’t sell chips by themselves, and end-users usually buy them through a hardware company like Dell or Super Micro Computer — but the company is planning for the MI400 chips to compete on price.

The Santa Clara company is pairing its GPUs with its CPUs and with networking chips from its 2022 acquisition of Pensando to build its Helios racks. That means greater adoption of its AI chips should also benefit the rest of AMD’s business. It’s also using an open networking technology called UALink to closely integrate its rack systems, versus Nvidia’s proprietary NVLink.

AMD claims its MI355X can deliver 40% more tokens — a measure of AI output — per dollar than Nvidia’s chips because its chips use less power than its rival’s.
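
The arithmetic behind a tokens-per-dollar claim like that is simple: at equal throughput, tokens per dollar of electricity scale inversely with power draw. The sketch below uses hypothetical throughput, wattage, and electricity prices, not measured figures for either company’s chips.

```python
# Illustrative tokens-per-dollar-of-electricity comparison.
# Throughput, power draw, and electricity price are all hypothetical.

def tokens_per_dollar(tokens_per_sec: float, watts: float,
                      usd_per_kwh: float = 0.10) -> float:
    """Tokens generated per dollar spent on electricity."""
    kwh_per_sec = watts / 1000.0 / 3600.0
    return tokens_per_sec / (kwh_per_sec * usd_per_kwh)

chip_a = tokens_per_dollar(tokens_per_sec=10_000, watts=1_000)  # hypothetical
chip_b = tokens_per_dollar(tokens_per_sec=10_000, watts=1_400)  # hypothetical

print(f"chip_a: {chip_a:,.0f} tokens/$  chip_b: {chip_b:,.0f} tokens/$")
print(f"chip_a advantage: {chip_a / chip_b - 1:.0%}")  # 40% at these numbers
```

Electricity is only part of the total cost; Dieckmann’s “double-digit percentage savings” figure also folds in the lower acquisition price.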

Data center GPUs can cost tens of thousands of dollars per chip, and cloud companies usually buy them in large quantities.

AMD’s AI chip business is still much smaller than Nvidia’s. AMD said it had $5 billion in AI sales in its fiscal 2024, but JPMorgan analysts expect 60% growth in the category this year.
