March 30, 2025
Alibaba’s Qwen 2.5 Omni AI Model to Help Develop Cost-Effective AI Agents

Alibaba’s Qwen team released a new artificial intelligence (AI) model in the Qwen 2.5 family on Wednesday. Dubbed Qwen 2.5 Omni, it is a flagship-tier end-to-end multimodal model. The company claims it can process a wide range of inputs, including text, images, audio, and videos, while generating real-time text and natural speech responses. It is said to enable the building and deployment of cost-effective AI agents due to its diverse skill set. Alibaba has also employed a new “Thinker-Talker” architecture for the Qwen 2.5 Omni AI model.

Qwen 2.5 Omni AI Model Released

In a blog post, the Qwen team detailed the new Qwen 2.5 Omni AI model, a seven-billion-parameter system. Its most notable feature is real-time speech generation and video chat, which lets the large language model (LLM) answer queries and interact with users verbally in a humanlike manner. So far, this capability has been available only with models from Google and OpenAI, which are closed-source. Alibaba, on the other hand, has open-sourced the technology.

Coming to the features, the model accepts text, images, audio, and video as input, and generates output as text and natural speech. It supports real-time voice interaction and video chat, and the Qwen team highlights that speech is streamed in real time in a natural-sounding manner. Additionally, the model is claimed to deliver enhanced performance in end-to-end speech instruction following.

The Qwen team highlighted that the Omni model is built on a novel “Thinker-Talker” architecture. The Thinker component functions like a brain: it is responsible for processing and understanding inputs across modalities and generating the text output. It is essentially a Transformer decoder, paired with audio and image encoders that assist with information extraction.

[Image: Qwen 2.5 Omni benchmark results. Photo Credit: Alibaba]

The Talker component, on the other hand, operates like a human mouth, the researchers said. It consumes the information produced by the Thinker as a stream and generates speech output fluidly in real time. It is designed as a dual-track autoregressive Transformer decoder. The entire architecture operates as a single model, enabling end-to-end training and inference with simultaneous real-time text and speech generation.
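
A rough way to picture the Thinker-Talker split is as a streaming producer-consumer pipeline: the Thinker emits text tokens (along with internal representations) as a stream, and the Talker consumes that stream to produce speech concurrently, so audio can begin before the full text response is finished. The sketch below is purely illustrative; every name and the stand-in logic are invented for explanation and do not reflect Alibaba's actual implementation.

```python
from typing import Iterator, Tuple

# Hypothetical illustration of a Thinker-Talker style pipeline.
# Names and internals are invented; they do not mirror Qwen 2.5 Omni's code.

def thinker(prompt: str) -> Iterator[Tuple[str, list]]:
    """'Brain': decodes the input into a stream of text tokens,
    each paired with a stand-in hidden representation."""
    for word in prompt.split():                  # stand-in for token-by-token decoding
        hidden = [float(ord(c)) for c in word]   # stand-in for hidden states
        yield word, hidden

def talker(token_stream: Iterator[Tuple[str, list]]) -> Iterator[bytes]:
    """'Mouth': consumes the Thinker's stream as it arrives and
    emits speech chunks, so audio starts before decoding ends."""
    for token, hidden in token_stream:
        yield f"<audio:{token}>".encode()        # stand-in for speech-codec output

if __name__ == "__main__":
    stream = thinker("hello how can I help")
    for chunk in talker(stream):
        print(chunk)  # chunks stream out incrementally, not after full decoding
```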

Based on internal testing, the Qwen 2.5 Omni AI model is said to outperform Google's Gemini 1.5 Pro model on the OmniBench benchmark. It is also said to outperform Qwen2.5-VL-7B and Qwen2-Audio on single-modality tasks.

The AI model is now available via Alibaba's Hugging Face and GitHub listings. Additionally, users can test out the new model via Qwen Chat as well as the company's ModelScope community.
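
For developers who want to try the open weights, loading typically follows the standard Hugging Face transformers pattern. The snippet below is a minimal sketch: the repository ID and the generic Auto* classes are assumptions (omnimodal models usually ship a dedicated model class and processor for audio, image, and video inputs), so consult the model card on Hugging Face for exact usage.

```python
# Minimal, text-only loading sketch with Hugging Face transformers.
# The repo ID and Auto* classes are assumptions -- check the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Omni-7B"  # assumed repository ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # custom architectures often require this flag
    device_map="auto",       # spread weights across available devices
)

# Text-only round trip; audio/image/video inputs would go through the
# model's dedicated multimodal processor instead of a plain tokenizer.
inputs = tokenizer("What can you do?", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```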