December 5, 2024
Hugging Face Unveils Open-Source AI Model That Can Analyse Text, Images

Hugging Face, the artificial intelligence (AI) and machine learning (ML) platform, introduced a new vision-focused AI model last week. Dubbed SmolVLM (where VLM is an acronym for vision language model), it is a compact model focused on efficiency. The company claims that, due to its smaller size and high efficiency, it can be useful for enterprises and AI enthusiasts who want AI capabilities without investing heavily in infrastructure. Hugging Face has also open-sourced the SmolVLM vision model under the Apache 2.0 licence for both personal and commercial usage.

Hugging Face Introduces SmolVLM

In a blog post, Hugging Face detailed the new open-source vision model. The company called the AI model “state-of-the-art” for its efficient usage of memory and fast inference. Highlighting the usefulness of a small vision model, the company noted the recent trend of AI firms scaling down models to make them more efficient and cost-effective.

[Image: Small vision model ecosystem | Photo Credit: Hugging Face]

The SmolVLM family has three AI model variants, each with two billion parameters. The first is SmolVLM-Base, which is the standard base model. Apart from this, SmolVLM-Synthetic is a fine-tuned variant trained on synthetic data (data generated by AI or computer programs), and SmolVLM-Instruct is the instruction-tuned variant that can be used to build end-user-facing applications.

Coming to technical details, the vision model can operate with just 5.02GB of GPU RAM, significantly lower than the 13.7GB required by Qwen2-VL 2B and the 10.52GB required by InternVL2 2B. As a result, Hugging Face claims that the AI model can run on-device on a laptop.
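Those memory figures suggest the model fits comfortably on consumer hardware. As a rough illustration, the sketch below shows how such a model would typically be loaded in half precision with the transformers library; the repository name "HuggingFaceTB/SmolVLM-Instruct" and the loading options are assumptions based on standard Hugging Face conventions rather than details confirmed in this article.

```python
# Minimal sketch: loading SmolVLM locally with the transformers library.
# The model ID and bfloat16/device settings below are assumptions based on
# typical Hugging Face usage, not specifics from the article.
import torch
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "HuggingFaceTB/SmolVLM-Instruct"  # assumed repository name

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps GPU memory use low
    device_map="auto",           # falls back to CPU if no GPU is available
)
```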

SmolVLM can accept a sequence of text and images in any order and analyse them to generate responses to user queries. It encodes 384 x 384 pixel image patches into 81 visual data tokens. The company claimed that this enables the AI model to encode text prompts and a single image in 1,200 tokens, as opposed to the 16,000 tokens required by Qwen2-VL.
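To illustrate the interleaved text-and-image input described above, the following sketch continues from the earlier loading example and passes a single image plus a text query to the model; the chat-template message format is an assumption drawn from common vision-language usage in transformers, not from the article itself.

```python
# Hedged sketch: querying the model loaded above with one image and a text prompt.
# The message structure and file name are illustrative assumptions.
from PIL import Image

image = Image.open("example.jpg")  # any local image

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Describe what is shown in this image."},
        ],
    }
]

# Build the prompt, tokenise text and image together, and generate a response.
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```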

With these specifications, Hugging Face highlights that SmolVLM can easily be used by smaller enterprises and AI enthusiasts and deployed on local systems without major upgrades to the existing tech stack. Enterprises will also be able to run the AI model for text- and image-based inference without incurring significant costs.
