After months of anticipation, Google has finally released its next-generation Gemini 2.0 model family. Google is first releasing the Gemini 2.0 Flash model, which, according to the company, outperforms its flagship Gemini 1.5 Pro model on key benchmarks. Google says it's up to 2x faster and more efficient than its larger models.
Beyond that, it supports native multimodal output, including native image generation and multilingual text-to-speech audio. It can also natively interact with Google Search and execute code. As for benchmarks, Gemini 2.0 Flash scores 62.1% on GPQA (Diamond), 76.4% on MMLU-Pro, and 92.9% on Natural2Code.
Gemini 2.0 Flash is now available in the web version of Gemini for all users, both free and paid. To use it, click the model drop-down menu and select "Gemini 2.0 Flash Experimental." Google says the Gemini 2.0 Flash model will soon be added to the Gemini app on Android and iOS.
On the paid side, Gemini Advanced users get access to a new feature called "Deep Research," which uses advanced reasoning to tackle complex questions and can compile reports for you. It looks like Gemini Advanced users now have access to something akin to OpenAI's o1 reasoning model, which can "think" step-by-step and work through difficult questions.
Google says Gemini 2.0 is coming to AI Overviews early next year. AI Overviews will then be able to handle complex questions, advanced math equations, coding questions, and multimodal input. Notably, the search giant says the Gemini 2.0 model was trained entirely on its custom TPUs, such as Trillium, and inference also runs on its in-house TPUs.