
Google released the second artificial intelligence (AI) model in its Gemini 2.5 family on Thursday. Dubbed Gemini 2.5 Flash, it is a cost-efficient, low-latency model designed for tasks requiring real-time inference, conversations at scale, and general-purpose workloads. The Mountain View-based tech giant will soon make the AI model available on both Google AI Studio and Vertex AI, letting users and developers access Gemini 2.5 Flash and build applications and agents with it.
Gemini 2.5 Flash Is Now Available on Vertex AI
In a blog post, the tech giant detailed its latest large language model (LLM). Alongside announcing the debut of the Flash model, the post also confirmed that the Gemini 2.5 Pro model is now available on Vertex AI. Differentiating between the use cases of the two models, Google said the Pro model is ideal for tasks that require intricate knowledge, multi-step analysis, and nuanced decision-making.
On the other hand, the Flash model prioritises speed, low latency, and cost efficiency. Calling it a workhorse model, the tech giant said it is an “ideal engine for responsive virtual assistants and real-time summarisation tools where efficiency at scale is key.”
While launching the 2.5 Pro model, Google specified that all LLMs in this series would feature natively built reasoning or “thinking” capability. This means the 2.5 Flash also comes with “dynamic and controllable reasoning.” Developers can adjust how much processing time the model spends on a query based on its complexity, giving them granular control over response generation times.
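Google's post does not spell out the API surface for this control, but based on the thinking-budget parameter documented for the public Gemini API, a request that caps the model's reasoning effort might look like the sketch below. The field names (`generationConfig.thinkingConfig.thinkingBudget`) and the model identifier are assumptions drawn from that documentation, not details confirmed in the announcement.

```python
import json

def build_request(prompt: str, thinking_budget_tokens: int) -> str:
    """Build a Gemini REST request body with an explicit reasoning budget.

    Field names are assumptions based on Google's published Gemini API
    docs; the announcement itself does not specify the API shape.
    """
    payload = {
        "contents": [{"parts": [{"text": prompt}]}],
        "generationConfig": {
            # 0 would disable thinking entirely for fast, cheap responses;
            # larger values allow deeper reasoning at higher latency/cost.
            "thinkingConfig": {"thinkingBudget": thinking_budget_tokens}
        },
    }
    return json.dumps(payload)

# A latency-sensitive summarisation call might set the budget to zero:
body = build_request("Summarise today's stand-up notes.", 0)
```

The trade-off mirrors what Google describes: a small or zero budget suits real-time assistants, while a larger budget trades speed for more deliberate multi-step answers.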
For its enterprise clients, Google is also introducing the Vertex AI Model Optimiser tool. Available as an experimental feature within the platform, it removes the guesswork of picking a specific model: for each prompt, it automatically selects the option expected to deliver the highest-quality response while balancing cost.
Google did not release a technical paper or model information card alongside the launch, so details about the model's architecture, pre- and post-training processes, and benchmark scores are not known. The company might publish these later, when the model is made available to end consumers.
Meanwhile, the tech giant is also adding new tools to support agentic application building on Vertex AI. The company is introducing a new Live application programming interface (API) for Gemini models that will allow AI agents to process streaming audio, video, and text with low latency, letting them complete tasks in real time.
The Live API, which is powered by Gemini 2.5 Pro, also supports resumable sessions longer than 30 minutes, multilingual audio output, time-stamped transcripts for analysis, tool integration, and more.