Mistral, the Paris-based artificial intelligence (AI) firm, released the Mistral Small 3 AI model on Thursday. The company, known for its open-source large language models (LLMs), has made the latest AI model available on Hugging Face as well as on several other platforms. Mistral claimed that the model was built with processing speed, efficiency, and performance in mind, and that it can outperform models double its size. The AI firm’s internal testing found the model to offer better performance than OpenAI’s GPT-4o mini.
Mistral Small 3 AI Model Released
In a newsroom post, the French AI firm detailed the new AI model. Mistral Small 3 is a latency-optimised model with 24 billion parameters. The LLM is being released with both a pre-trained and an instruction-tuned checkpoint to cater to a wide range of tasks. The AI model is available under the Apache 2.0 licence for both academic and commercial usage. Mistral highlighted that it is moving away from the Mistral Research Licence (MRL), which allows only academic and research-related usage.
The company stated that the AI model was neither trained with reinforcement learning (RL) nor does its training dataset include synthetic data (data generated by other AI models).
Based on internal tests, the AI firm claimed that Mistral Small 3 outperforms GPT-4o mini in terms of latency. It also performed better than the OpenAI LLM on the Massive Multitask Language Understanding (MMLU) Pro and Graduate-Level Google-Proof Q&A (GPQA) Main benchmarks. The developers further revealed that the model is competitive with the Llama 3.3 70B model despite being around three times smaller.
As per the company, the model is suited to use cases where efficiency or speed matters to developers. Suggested applications include fast-response conversational assistance, low-latency function calling, and fine-tuning the LLM into a subject-matter-expert chatbot.
The AI model can also be used by organisations that prefer local inference to safeguard sensitive or proprietary data. Notably, Mistral Small 3 can be run privately on a single Nvidia RTX 4090 GPU. Developers can access the model from its Hugging Face listing.
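As a rough illustration of how developers might try the instruction-tuned checkpoint from Hugging Face, here is a minimal sketch using the transformers library. The repository ID below is an assumption based on Mistral's usual naming and should be checked against the actual Hugging Face listing; running the model on a single RTX 4090 as described above would typically require a quantised variant, since the full-precision 24-billion-parameter weights exceed that card's 24 GB of memory.

```python
# Minimal sketch: loading Mistral Small 3 via Hugging Face transformers.
# MODEL_ID is an assumed repository name; confirm it on the official listing.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "mistralai/Mistral-Small-24B-Instruct-2501"  # assumed repo ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # half-precision weights to reduce memory use
    device_map="auto",           # spread layers across the available GPU(s)
)

# Build a chat-formatted prompt and generate a short reply.
messages = [{"role": "user", "content": "Explain low-latency function calling in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

This is a sketch of local inference only; production use for the latency-sensitive scenarios Mistral suggests would more likely go through an optimised serving stack or a quantised build of the model.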