July 23, 2025
Alibaba’s Qwen Team Releases New Coding AI Model With Agentic Capabilities
Alibaba’s Qwen team released a new artificial intelligence (AI) coding model on Tuesday. Dubbed Qwen 3 Coder, the model comes with several agentic capabilities, including agentic coding, agentic browser-use, and agentic tool-use. The researchers have released only one variant of the model so far, the Qwen3-Coder-480B-A35B-Instruct, which is the most powerful variant in the family. In terms of coding performance, Alibaba’s AI team claims the open-source model is comparable to Anthropic’s Claude Sonnet 4. The new large language model (LLM) can be downloaded and run locally.

Qwen 3 Coder Model With Agentic Capabilities Released

In a blog post, the researchers detailed the new agentic coding tool. The model is open source, and interested individuals can download its weights from Qwen’s Hugging Face and GitHub listings. It is released under the permissive Apache 2.0 licence, which allows both academic and commercial usage. Alongside the model, an open-source command-line tool dubbed Qwen Code is also available for agentic coding.
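
For those who want to try the released weights, below is a minimal sketch of loading the model with the Hugging Face transformers library. The repository ID is assumed to follow Qwen's usual naming on Hugging Face, and a 480-billion-parameter MoE model needs multi-GPU server hardware, so the snippet is illustrative rather than something that will run on a typical laptop.

```python
# Minimal sketch: loading Qwen 3 Coder with Hugging Face transformers.
# The repo ID below is an assumption based on the variant name in Qwen's
# announcement; requires the accelerate package for device_map="auto" and
# far more GPU memory than consumer hardware provides.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-Coder-480B-A35B-Instruct"  # assumed Hugging Face repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the checkpoint's native precision
    device_map="auto",    # shard across available GPUs
)

messages = [
    {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```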

Coming to the model, Qwen 3 Coder is a mixture-of-experts (MoE) model with 480 billion total parameters, of which 35 billion are active at a time. It natively supports a context length of 2,56,000 tokens, which can be expanded to one million tokens using extrapolation methods. The researchers highlighted that the model supports agentic coding, agentic browser-use, and agentic tool-use.

The company claims that Qwen 3 Coder achieved state-of-the-art (SOTA) performance among open-source models on the SWE-Bench Verified benchmark. Here, SOTA refers to the best result achieved so far, in this case among open-source models. The Qwen team said this score was achieved by building a scalable system on Alibaba Cloud’s infrastructure that could run 20,000 independent environments in parallel.

To enable agentic coding, the team has also released the Qwen Code command-line tool. Adapted from Google’s Gemini CLI, it has been equipped with custom prompts and function calling protocols. These functionalities allow the AI model to not only write and edit code but also to deploy and execute it in an integrated development environment (IDE).

While Qwen Code natively supports the Qwen 3 Coder AI model, the model can also be called through the OpenAI software development kit (SDK). The Qwen coding model can additionally be used with Claude Code, but developers will need to request an API key on the Alibaba Cloud Model Studio platform.
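
As a rough illustration, here is a minimal sketch of calling Qwen 3 Coder through the OpenAI Python SDK against Alibaba Cloud Model Studio’s OpenAI-compatible endpoint. The base URL and model identifier below are assumptions; developers should confirm the exact values in the platform’s documentation after obtaining an API key.

```python
# Minimal sketch: calling Qwen 3 Coder via the OpenAI Python SDK.
# The base_url and model name are assumptions based on Alibaba Cloud Model
# Studio's OpenAI-compatible endpoint; verify both against the official docs.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_MODEL_STUDIO_API_KEY",  # issued by Alibaba Cloud Model Studio
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",  # assumed endpoint
)

response = client.chat.completions.create(
    model="qwen3-coder-480b-a35b-instruct",  # assumed model identifier
    messages=[
        {"role": "user", "content": "Write a unit test for a function that reverses a linked list."}
    ],
)
print(response.choices[0].message.content)
```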