November 23, 2024
Meta Unveils New AI Chips to Build Generative AI Products and Services

Meta on Wednesday unveiled the next generation of its Meta Training and Inference Accelerator (MTIA), the company's family of custom chipsets for artificial intelligence (AI) workloads. The upgrade comes almost a year after the company introduced its first-generation AI chips. The new accelerators will power the tech giant's existing and future products and services, as well as the AI features embedded in its social media platforms. In particular, Meta highlighted that the chipset will be used to serve its ranking and recommendation models.

Announcing the chip in a blog post, Meta said, “The next generation of Meta’s large-scale infrastructure is being built with AI in mind, including supporting new generative AI (GenAI) products and services, recommendation systems, and advanced AI research. It’s an investment we expect will grow in the years ahead as the compute requirements to support AI models increase alongside the models’ sophistication.”

According to Meta, the new AI chip offers significant improvements in both performance and power efficiency thanks to changes in its architecture. The next generation of MTIA doubles the compute and memory bandwidth of its predecessor. It can also serve the recommendation models Meta uses to personalise content for users across its social media platforms.
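To make that scaling concrete, the short sketch below applies the stated 2x factor to a first-generation baseline. The baseline numbers are hypothetical placeholders, not Meta's published specifications; only the doubling itself comes from the announcement.

```python
# Illustrative only: the gen-1 baseline below is a hypothetical placeholder,
# not Meta's published spec; only the 2x scaling comes from the announcement.
gen1 = {"compute_tflops": 100.0, "memory_bandwidth_gbps": 800.0}  # hypothetical
gen2 = {k: v * 2 for k, v in gen1.items()}  # compute and memory bandwidth doubled

for metric, base in gen1.items():
    print(f"{metric}: {base:.0f} -> {gen2[metric]:.0f} ({gen2[metric] / base:.1f}x)")
```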

On the hardware side, Meta said the system has a rack-based design that holds up to 72 accelerators: three chassis each contain 12 boards, and each board houses two accelerators. The processor clocks at 1.35GHz, considerably faster than its predecessor's 800MHz, and runs at a higher power envelope of 90W. The fabric between the accelerators and the host has also been upgraded to PCIe Gen5.
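The arithmetic behind that rack design is straightforward; the sketch below works it out using only the figures cited in this article.

```python
# Rack topology as described: 3 chassis x 12 boards per chassis
# x 2 accelerators per board = 72 accelerators per rack.
chassis, boards_per_chassis, accels_per_board = 3, 12, 2
total_accelerators = chassis * boards_per_chassis * accels_per_board
assert total_accelerators == 72

# Clock uplift over the first generation: 1.35 GHz vs 800 MHz.
uplift = 1.35e9 / 800e6
print(f"{total_accelerators} accelerators per rack; ~{uplift:.2f}x clock uplift")
```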

The software stack is where the company has made major improvements. The chipset is designed to be fully integrated with PyTorch 2.0 and related features. “The lower level compiler for MTIA takes the outputs from the frontend and produces highly efficient and device-specific code,” the company explained.
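For readers familiar with PyTorch, the frontend/compiler split Meta describes mirrors the torch.compile flow introduced in PyTorch 2.0. The sketch below is illustrative only: it uses the stock open-source "inductor" backend, since the article does not name the MTIA backend, and the toy model merely stands in for a ranking workload.

```python
import torch
import torch.nn as nn

# A toy ranking-style model standing in for Meta's recommendation workloads.
model = nn.Sequential(nn.Linear(256, 512), nn.ReLU(), nn.Linear(512, 1))

# torch.compile captures the model graph with a frontend (TorchDynamo) and
# hands it to a backend compiler. "inductor" is the stock backend; per the
# article, Meta's lower-level MTIA compiler sits at this layer instead.
compiled_model = torch.compile(model, backend="inductor")

output = compiled_model(torch.randn(32, 256))
print(output.shape)  # torch.Size([32, 1])
```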

Early results, according to Meta, show that the chip can handle both the low-complexity (LC) and high-complexity (HC) ranking and recommendation models that are components of its products. Across these models, there can be a ~10x-100x difference in model size and in the amount of compute per input sample. “Because we control the whole stack, we can achieve greater efficiency compared to commercially available GPUs,” the company said, adding that realising these gains is an ongoing effort and that it continues to improve performance per watt as it builds up and deploys MTIA chips in its systems.
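To give a sense of the scale of that spread, the snippet below works through a hypothetical example. The parameter counts and the 2-FLOPs-per-parameter rule of thumb are assumptions; only the ~10x-100x range comes from Meta.

```python
# Hypothetical illustration of the ~10x-100x spread Meta cites between
# low-complexity (LC) and high-complexity (HC) ranking models.
lc_params = 10_000_000        # assumed LC model size (placeholder)
hc_params = lc_params * 100   # upper end of the cited ~10x-100x range

FLOPS_PER_PARAM = 2           # rough rule of thumb per input sample
for name, params in [("LC", lc_params), ("HC", hc_params)]:
    gflops = params * FLOPS_PER_PARAM / 1e9
    print(f"{name} model: ~{gflops:.1f} GFLOPs per input sample")
```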

With the rise of AI, many tech companies are now focusing on building customised AI chipsets that cater to their particular needs. Deployed across their server fleets, these processors deliver massive compute power, enabling the companies to offer products such as generalist AI chatbots and AI tools for specific tasks.

