
Meta matches OpenAI and Google in the AI race


People in the United States will also be able to access the new multimodal AI features, meaning the assistant can respond not only to text input but also to photographs.

For example, if someone photographs a bird on a trip and does not know what species it is, they can ask the assistant in any of Meta's apps for more information about it; it will also be possible to edit the image.

Regarding social networks, the company said it will bring more AI into the content on its platforms, such as Reels, through automatic dubbing of videos into other languages and through AI-generated videos in Facebook and Instagram feeds.

As for its open-source AI model, Llama, Meta presented the new version 3.2, which is also multimodal and is available across different platforms, including AMD, AWS, Dell, Google Cloud, IBM, Intel, Microsoft Azure and Nvidia, among others.

Orion, Meta’s AR glasses

Another area in which Zuckerberg and Meta do not want to fall behind is augmented reality: in addition to presenting the affordable Quest 3S headset, they also showed further progress on their AR glasses, Orion.

The device combines ordinary glasses with new augmented reality capabilities, taking the concept behind the Ray-Ban smart glasses already on the market one step further.

According to the announcement, Orion consists of two devices: the glasses themselves, controlled by voice and gaze, and an EMG wristband that tracks hand movements, serving as a neural interface to control everything the user sees.

For now, the new gadget remains a prototype, but access has been expanded to more Meta employees as well as a select external audience, with the aim of continuing development and eventually launching it publicly on the market.
