Science and Tech

Virtual models, the future of online shopping

New search experiences with AI and Lens

Google is aware that people are using generative AI and images in their searches. That is why it announced new features for Lens, its photo search tool, such as Multisearch, which lets users take a photo of, say, a dress and combine it with text to get more accurate results.

It also added a Near Me function: users can photograph a dish and ask for nearby restaurants that serve it. Google explained that the feature works for other types of products as well, but the results may not be as accurate as they are for food.

“Doctor Google” and Bard, assistants with more capabilities

Another innovation coming to Lens is the ability to take images of the skin and compare them against Google's database to flag a possible disease or condition, so that the person can then see a specialist. For now, this feature is only available in the United States.

Separately, Google also announced an integration between Bard and Lens, whose goal is to bring images into generative search. For example, a user will be able to upload a photo of a monument and Bard will identify where it is located.

In addition, the chatbot will be able to hold a conversation with the user: it can tell a story about that site, suggest activities to do there, or respond to any other request the person makes. This feature will roll out in the coming weeks in the countries where Bard is available.

On travel, Google said it is testing a new feature in Search Labs (its early testing program to which


