December 26, 2024
Meta, Arm Said to Be Partnering to Get AI to Do More Tasks on Phones

Meta Connect 2024, the company’s developer conference, took place on Wednesday. During the event, the social media giant unveiled several new artificial intelligence (AI) features and wearable devices. Apart from that, Meta also reportedly announced a partnership with the chip designer Arm to build special small language models (SLMs). These AI models are said to power smartphones and other devices and to introduce new ways of using them. The idea is to offer on-device and edge computing options that keep AI inference fast.

According to a CNET report, Meta and Arm plan to build AI models that can carry out more advanced tasks on devices. For instance, the AI could act as the device’s virtual assistant, making a call or taking a photo on the user’s behalf. This is not far-fetched: AI tools can already perform a plethora of tasks, such as editing images and drafting emails.

However, the main difference is that users currently have to interact with an interface or type specific commands to get AI to perform these tasks. At the event, the two companies reportedly highlighted that they want to do away with this step and make AI models more intuitive and responsive.
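As a rough illustration of the shift, an agent-style assistant would map a free-form request to a device action rather than wait for an explicit command. The sketch below is purely hypothetical: the place_call and take_photo functions and the keyword-based router are illustrative stand-ins for what an SLM with tool-calling support might decide, not anything Meta or Arm have described.

```python
# Hypothetical sketch of an on-device assistant routing a free-form
# request to a device action. The action names and the keyword router
# are illustrative assumptions only.
from typing import Callable

def place_call(contact: str) -> str:
    return f"Calling {contact}..."   # would invoke the phone's telephony API

def take_photo() -> str:
    return "Photo captured."         # would invoke the camera API

ACTIONS: dict[str, Callable[[str], str]] = {
    "call": lambda text: place_call(text.split()[-1]),
    "photo": lambda text: take_photo(),
}

def route(request: str) -> str:
    """Stand-in for a small language model deciding which device tool to invoke."""
    for keyword, action in ACTIONS.items():
        if keyword in request.lower():
            return action(request)
    return "Sorry, I can't do that yet."

print(route("Call mum"))       # -> Calling mum...
print(route("Take a photo"))   # -> Photo captured.
```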

One way to do this would be to bring the AI models on-device, or to keep the servers very close to the devices. The latter approach, known as edge computing, is already used by research institutions and large enterprises. Ragavan Srinivasan, vice president of product management for generative AI at Meta, told the publication that developing these new AI models is a good way to tap into this opportunity.

For this, the AI models will have to be smaller. While Meta has developed large language models (LLMs) with as many as 90 billion parameters, models of that size are not suitable for smaller devices or for fast on-device processing. The company’s Llama 3.2 1B and 3B models, with roughly one billion and three billion parameters respectively, are believed to be ideal for this.
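For a sense of what running one of these smaller models looks like in practice, here is a minimal sketch using the Hugging Face transformers library and the publicly released Llama 3.2 1B Instruct checkpoint. The repo is gated behind Meta’s licence on the Hub, and on an actual phone the weights would typically be quantised and served through an on-device runtime rather than full-precision PyTorch.

```python
# Minimal sketch: text generation with Meta's 1-billion-parameter
# Llama 3.2 model via Hugging Face transformers. Access to the
# meta-llama repo requires accepting Meta's licence on the Hub.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-1B-Instruct",
    device_map="auto",  # picks GPU/CPU here; phones would use a quantised runtime instead
)

result = generator("On-device AI keeps inference fast because", max_new_tokens=40)
print(result[0]["generated_text"])
```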

Another challenge is that the AI models will have to be equipped with capabilities beyond simple text generation and computer vision. This is where Arm comes in. As per the report, Meta is working closely with the chip designer to develop processor-optimised AI models that can adapt to the workflows of devices such as smartphones, tablets, and even laptops. No other details about the SLMs have been shared so far.