Google brings the potential of Gemini to the heart of Android, Workspace and Search

May 14. (Portaltic/EP) –

Google has presented the new features that its Gemini artificial intelligence (AI) brings to the Android operating system, where it acts as an assistant, and to its services, both to help users be more productive and to deliver more precise search results.

Android, Google’s mobile operating system, is not left out of the changes Gemini brings. For the company, it is the best place to put artificial intelligence to the test, and it is demonstrating this by reimagining the OS with AI at its core, as it shared this Tuesday at its annual Google I/O developer event.

An example of this is the ‘Circle to Search’ experience, which lets the user search for anything that appears on their phone screen by simply circling or tapping it. It is currently available on more than 100 million Pixel and Samsung Galaxy devices, and Google expects that figure to reach 200 million by the end of the year.

‘Circle to Search’ also gains new capabilities. Starting this Tuesday, it can help students solve math and physics problems, and it will later take on more complex problems involving symbolic formulas, diagrams and graphs.

Gemini on Android also offers a more capable assistant, which will soon be accessible on top of the application the user is running. In this way, users can, for example, ask for specific information about the video being played or about a PDF document, although the latter option is reserved for the Gemini Advanced subscription.

Android also integrates the Gemini Nano model, designed to run locally on the device, which helps keep information private since it is not sent to an external server.

Nano will soon gain multimodal capabilities on Pixel phones, so that in addition to processing text input it will be able to understand information such as images, sounds and spoken language.

These Nano capabilities will enhance TalkBack, the tool designed so that people who are blind or have low vision can interact with the device. They will also power a new security feature that will alert the user if a suspected scam is detected.

GOOGLE WORKSPACE

Artificial intelligence also brings new features to Google’s services, specifically Workspace, where the company has incorporated its Gemini model to power the next generation of smart features.

Specifically, it has added Gemini 1.5 Pro to the side panel of Gmail, Drive, Docs, Sheets and Slides; starting this Tuesday it begins rolling out to more users, who will be able to interact with it in a conversational format.

Gemini will be able to answer a wider variety of questions thanks to its improved reasoning and long-text comprehension, backed by a context window of up to one million tokens that allows it to process large amounts of information at once.

The aim of this integration is to connect the different Google services with AI-driven tasks and workflows, as the company has highlighted.

Gemini also expands the features available in Gmail, which in the coming weeks will gain three new tools: email summarization, to pull the most relevant information from a message thread; Contextual Smart Reply, which will suggest more nuanced, contextualized responses; and Gmail Q&A, for asking Gemini more specific questions.

These are joined by support for Spanish and Portuguese in the ‘Help me write’ tool in Gmail and Docs on desktop.

SEARCH AND PHOTOS

Beyond Workspace, Google has also brought Gemini to Photos to power the experimental ‘Ask Photos’ feature. With it, instead of searching the gallery with keywords, users will be able to simply ask for the images they want to find, even requesting specific details such as a car’s license plate number.

Search, Google’s flagship service launched 25 years ago, also gains new generative AI capabilities to make it a more useful tool for the user.

Along these lines, Google has presented AI Overviews, quick AI-generated summaries that begin rolling out in the United States this Tuesday and will later reach other countries.

It has also introduced multi-step reasoning, which lets the user pose a single complex question that combines everything they want to know. For example, instead of asking separately what Pilates is, what is needed to practice it and where to attend classes, the user can bring all of those questions together in one query.

AI will also be able to plan: the user can ask Search to create a meal plan, for example one for the next three days made up of vegan dishes. Beyond the recipes, the ingredients can also be added to a shopping list.

For cases in which the user is not sure what to ask, Gemini lets them explore a results page to get inspired. It will arrive first in English for searches about restaurants and recipes, and later for movies, songs, books, hotels and shopping.

Finally, with video questions, the user can start an interactive search through the phone’s camera: they can ask out loud while pointing it at something specific, such as a broken record player, and advanced visual processing will provide an answer.
