In December, we kicked off the agentic era by releasing an experimental version of Gemini 2.0 Flash – our highly efficient workhorse model for developers, with low latency and enhanced performance. Earlier this year, we updated 2.0 Flash Thinking Experimental in Google AI Studio, which improved its performance by combining Flash's speed with the ability to reason through more complex problems.
And last week, we made an updated 2.0 Flash available to all users of the Gemini app on desktop and mobile, helping everyone discover new ways to create, interact and collaborate with Gemini.
Today, we're making the updated Gemini 2.0 Flash generally available via the Gemini API in Google AI Studio and Vertex AI. Developers can now build production applications with 2.0 Flash.
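For developers getting started, the snippet below is a minimal sketch of calling 2.0 Flash through the Gemini API. It assumes the google-genai Python SDK and a placeholder API key; the exact model name and package version you use may differ.

```python
# Minimal sketch: call Gemini 2.0 Flash via the Gemini API.
# Assumes the google-genai Python SDK (pip install google-genai) and a valid
# API key from Google AI Studio; replace "YOUR_API_KEY" with your own key.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Summarize the key trade-offs between latency and context window size.",
)
print(response.text)
```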
We're also releasing an experimental version of Gemini 2.0 Pro, our best model yet for coding performance and complex prompts. It is available in Google AI Studio and Vertex AI, and in the Gemini app for Gemini Advanced users.
We're releasing a new model, Gemini 2.0 Flash-Lite, our most cost-efficient model yet, in public preview in Google AI Studio and Vertex AI.
Finally, 2.0 Flash Thinking Experimental will be available to Gemini app users in the model dropdown on desktop and mobile.
All of these models will feature multimodal input with text output on release, with more modalities ready for general availability in the coming months. More information, including pricing details, can be found in the Google for Developers blog. Looking ahead, we're working on more updates and improved capabilities for the Gemini 2.0 family of models.
2.0 Flash: a new update for general availability
First introduced at I/O 2024, the Flash series of models is popular with developers as a powerful workhorse model, optimal for high-volume, high-frequency tasks at scale and highly capable of multimodal reasoning across vast amounts of information with a context window of 1 million tokens. We've been thrilled to see its reception by the developer community.
2.0 Flash is now generally available to more people across our AI products, along with improved performance in key benchmarks, with image generation and text-to-speech coming soon.
Try Gemini 2.0 Flash in the Gemini app, or via the Gemini API in Google AI Studio and Vertex AI. Pricing details can be found in the Google for Developers blog.
2.0 Pro Experimental: Our best model yet for coding performance and complex prompts
As we've continued to share experimental versions of Gemini 2.0, such as Gemini-Exp-1206, we've received excellent feedback from developers about their strengths and best use cases, like coding.
Today, we're releasing an experimental version of Gemini 2.0 Pro that responds to that feedback. It has the strongest coding performance and ability to handle complex prompts, with better understanding and reasoning of world knowledge, than any model we've released so far. It comes with our largest context window yet at 2 million tokens, which enables it to analyze and understand vast amounts of information, as well as the ability to call tools like Google Search and code execution.
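As an illustration of that tool calling, here is a hedged sketch of pairing the experimental Pro model with the Google Search tool through the google-genai Python SDK. The model identifier shown is an assumption and may differ from the one listed in Google AI Studio.

```python
# Sketch: ground a Gemini 2.0 Pro Experimental request with the Google Search tool.
# Assumes the google-genai Python SDK; the model ID below is illustrative and may
# differ from the current experimental release listed in Google AI Studio.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.0-pro-exp-02-05",  # assumed experimental Pro model ID
    contents="What changed in the latest stable release of the Linux kernel?",
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_search=types.GoogleSearch())],
    ),
)
print(response.text)
```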