May 28, 2025
ByteDance Unveils Open Source Bagel AI Model With Image Generation Support

ByteDance released a new multimodal artificial intelligence (AI) model last week. Dubbed Bagel, it is a visual language model (VLM) capable of understanding, generating, and editing images. The Beijing-based tech giant has open-sourced the AI model, and it is available to download via popular AI repositories such as GitHub and Hugging Face. The company claims Bagel supports free-form visual manipulation, multiview synthesis, and world navigation, which it says makes the model better at image editing than existing open-source VLMs.

ByteDance’s Bagel Outperforms Gemini-2-exp in Image Editing

A GitHub listing page sheds more light on ByteDance’s Bagel AI model and links to its weights and datasets. However, the company did not provide details about the post-training process or the model’s architecture. Bagel is currently available under the permissive Apache 2.0 licence, which allows both academic and commercial usage.

Bagel is a multimodal AI model that accepts both text and images as input. The open-source VLM has a total of 14 billion parameters, of which seven billion are active at any given time. ByteDance claims the model was trained on large-scale interleaved multimodal data, meaning different types of data, such as text and images, were mixed within the same training sequences. As a result, the model learned from both modalities jointly, rather than from each separately.
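
ByteDance has not published the mechanism behind this split, but a 14-billion-parameter model with only seven billion parameters active at a time is characteristic of mixture-of-experts designs, where a router activates a subset of expert sub-networks per input. The sketch below is a generic, toy illustration of that idea, not Bagel’s actual architecture:

```python
# Toy sketch of sparse expert activation. NOT ByteDance's actual design:
# Bagel's internals are unpublished, but "14B total, 7B active" is typical
# of mixture-of-experts models, where a router picks a few experts per token.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 64, 4, 2  # 2 of 4 experts active -> half the weights

experts = [rng.standard_normal((d_model, d_model)) * 0.02 for _ in range(n_experts)]
router_w = rng.standard_normal((d_model, n_experts)) * 0.02

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route a single token vector through its top-k experts only."""
    logits = x @ router_w
    top = np.argsort(logits)[-top_k:]                           # chosen experts
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()   # softmax over chosen
    # Only the selected experts' weights participate in this forward pass,
    # so the "active" parameter count is a fraction of the total.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
print(moe_forward(token).shape)  # (64,)
```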

This method allows foundation models to build context across modalities. For instance, because Bagel saw images together with their captions during training, it is better able to ground what the text describes in the visual medium, which the company says leads to better output.
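
In practice, an "interleaved" training example is a single sequence in which text and image placeholders alternate in reading order. The sketch below shows one common way such data is laid out; the field names and token format are illustrative assumptions, not Bagel’s published data format:

```python
# Illustrative only: a common layout for interleaved multimodal training
# examples. Bagel's actual data format has not been published.
from dataclasses import dataclass

@dataclass
class Segment:
    kind: str      # "text" or "image"
    payload: str   # raw text, or a path/URL to the image

# One training example mixes modalities in reading order, so the model sees
# captions and images in context rather than as separate corpora.
example = [
    Segment("text", "A golden retriever catching a frisbee mid-air."),
    Segment("image", "images/retriever_frisbee.jpg"),
    Segment("text", "The same dog resting in the shade afterwards."),
    Segment("image", "images/retriever_shade.jpg"),
]

def to_token_stream(segments):
    """Flatten an interleaved example into one sequence, marking image slots."""
    stream = []
    for seg in segments:
        if seg.kind == "image":
            stream.append(f"<image:{seg.payload}>")  # stand-in for patch embeddings
        else:
            stream.extend(seg.payload.split())
    return stream

print(to_token_stream(example))
```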

ByteDance also claims that the AI model displays better image editing capabilities than existing open-source VLMs. It can perform complex tasks such as adding emotion to an image, removing, replacing, or adding elements, transferring styles, and making free-form edits. The company claims this ability makes Bagel significantly stronger at world-modelling.
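
To make those edit types concrete, instruction-driven image editors of this kind typically take a source image plus a natural-language command. The prompts below are invented examples mapping to the capabilities above; they are not taken from ByteDance’s materials:

```python
# Invented examples of instruction-style edit prompts; not from ByteDance.
edit_prompts = [
    "Make the subject look joyful",                    # adding emotion
    "Remove the car in the background",                # removing an element
    "Replace the apple on the table with an orange",   # replacing an element
    "Render the whole scene in watercolour style",     # style transfer
    "Extend the sky upward and add a flock of birds",  # free-form edit
]
```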

World-modelling refers to an AI system’s internal understanding of how the real world functions visually. This includes the relationships between objects, physical context, and the effects of physical factors such as light, wind, rain, and gravity.

Based on internal testing, ByteDance claims that Bagel outperforms Qwen2.5-VL-7B, a similarly sized model, in image understanding. It is also said to score higher than Janus-Pro-7B and Flux-1-dev on image generation benchmarks, and to beat Gemini-2-exp on GEdit-Bench for image editing.

Those who wish to try out the AI model without running it locally can head to Hugging Face, where ByteDance has set up a cloud-based interface to test its image analysis, generation, and editing capabilities.
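
Hosted demos like this can usually also be called programmatically with the gradio_client library. The sketch below is a hypothetical example: the Space id and endpoint name are assumptions, and the real values should be taken from the demo’s "Use via API" panel on Hugging Face:

```python
# Hypothetical sketch of querying the hosted demo with gradio_client.
# The Space id and api_name below are ASSUMPTIONS, not confirmed values;
# check the demo's "Use via API" panel for the real ones.
from gradio_client import Client

client = Client("ByteDance-Seed/BAGEL")  # assumed Space id
result = client.predict(
    "A lighthouse on a cliff at sunset, oil-painting style",  # text prompt
    api_name="/generate_image",  # assumed endpoint name
)
print(result)  # typically a local file path to the generated image
```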