Google revealed how it is integrating artificial intelligence into its search tools and how it plans to do so in the future. In a live presentation from Paris, the company’s senior vice president, Prabhakar Raghavan, was in charge of making the announcements.
At the event, the company discussed the chatbot it will add to its search engine, something many have been waiting for. It confirmed the chatbot will be called Bard, a platform based on its LaMDA technology.
The company also took the opportunity to highlight that it has been working with artificial intelligence in its tools for some time, as with Lens, which combines it with augmented reality.
Artificial intelligence in Google searches
The first point discussed was multisearch, which allows users to make queries combining text and images. The feature has been available in the United States since 2021, but it is now rolling out worldwide.
With this option, people can open the Google app, use Lens to take a photo or select an image from their gallery, and complement it with text to make the results more precise.
Alongside this tool, it will be possible to search for things near the user’s location. For example, after taking a photo of a plate of food or choosing one from the gallery, the system will provide information about nearby restaurants where that dish can be enjoyed.
Turning to Maps, it was revealed that the Immersive View option is now available in Los Angeles, London, New York, San Francisco and Tokyo, and will soon come to Florence, Venice, Amsterdam and Dublin. With it, users can explore tourist sites in a more complete way, from their interiors to information about the surrounding neighborhood.
Artificial intelligence will also come to the translator, which according to Google is used by more than a billion people. The first addition will be contextual translation options, producing better translations “for single words, short phrases and phrases with multiple meanings” tailored to a specific situation, which will make them easier to understand.
There is also a related Lens feature: translated text will blend “perfectly” with the words being interpreted, so the result looks more natural.
There was great expectation around this announcement, and Google finally confirmed that Bard is on the way and showed how it will work.
Its use will be very similar to other tools of this type: the user interacts with a chat interface and makes queries or requests in text, which the AI then answers.
The company confirmed that Bard is still in the development phase and only ‘trusted testers’ have access to it, so the general public will have to wait a while longer to try it.
The objective is to reduce the margin of error as much as possible so that it works optimally, and to resolve a concern that weighs heavily on the platform: how the chatbot could affect traffic to web pages, which are the search engine’s main clients. An important balance to strike.