
Google updated the Gemini 2.0 Flash artificial intelligence (AI) model last week. The update refines the model’s conversational capability and makes it better suited for creative tasks, according to the Mountain View-based tech giant. The company claims users will notice the difference when discussing certain topics and tasks with the AI. The update arrives at a time when the Gemini 2.5 Flash model is already available to users of the chatbot, albeit as an experimental preview.
Gemini 2.0 Flash Becomes More Conversational
On the Gemini update page, the tech giant added a new entry dated April 19 titled “Update to 2.0 Flash in Gemini.” With this update, Google says, the AI model will offer a “more natural, collaborative, and adaptive conversational style.” Its biggest impact will be felt during general interactions and when engaging with the chatbot on certain topics.
Google suggests talking to the AI model about personal interests or a problem at school or work, or asking it for a more creative perspective, to see the difference in output. Additionally, the company said the model has been updated with improved context awareness. This should make it easier to convey the intention behind a query, and the AI should generate more satisfactory responses.
Gadgets 360 staff members were able to access and test the updated AI model. While it does feel more interactive and its responses are slightly more conversational, we did not notice any major improvements. However, our testing was brief, and the difference may become more apparent with longer usage.
The tech giant’s decision to refine the Gemini 2.0 Flash model is an interesting one. While it is the default model on the free tier and is available to all Gemini users, 2.0 Flash is also nearing the end of its life cycle. Google has already released Gemini 2.5 Flash as an experimental preview, and a stable version will likely follow soon.
In recent times, Google’s focus has been on Gemini Live, the two-way, real-time voice conversation feature within the Gemini app. The company has rolled out live video and screen-sharing capabilities to paid users. Separately, it also showcased a prototype pair of AI glasses equipped with Gemini Live features at a TED Talk event.