Nov. 20 (Portaltic/EP) –
Niantic has announced that it is developing an AI-powered Large Geospatial Model (LGM), which will be built on data recorded by users of the company's services, such as Pokémon Go, and which will give models "spatial understanding" skills.
Currently, AI models have difficulty visualizing and inferring missing parts of a scene, as well as imagining what a place would look like from a new angle. This capability is known as "spatial understanding", a characteristic of human reasoning that draws on information from "countless similar scenes" viewed at different times.
However, as Niantic has pointed out, this task is "extraordinarily difficult" for machines. Against this backdrop, and with a view to advancing toward "the next frontier of AI models", the company has announced that it is developing a new Large Geospatial Model, which will have "spatial intelligence" capabilities.
Specifically, as explained in a statement on its website, this geospatial model will use large-scale machine learning to understand a scene and, from there, connect it with "millions of other scenes" around the world.
To do this, it will draw on the data collected by the company's Visual Positioning System (VPS), with which it has trained more than 50 million neural networks comprising more than 150 billion parameters, allowing, as detailed, operation in more than one million locations.
VPS is Niantic's proprietary technology, offering centimeter-level accuracy and the ability for digital content to persist and change based on user behavior. Specifically, it is the technology the company uses for its location-based games, such as Pokémon Go and its new experimental feature Pokémon Playgrounds.
Thus, from a single image captured with a smartphone, the system can determine the user's position and orientation, using a 3D map built from information shared by people who scan locations in the developer's games and in Scaniverse.
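Niantic has not published implementation details of VPS, but the retrieval step it describes (matching a query image against previously scanned locations) can be illustrated with a toy sketch. Everything below, including the location names and the use of averaged feature descriptors with cosine similarity, is a hypothetical simplification, not Niantic's actual method.

```python
import numpy as np

# Toy illustration (not Niantic's actual VPS): each scanned location is
# summarized by a single 128-dimensional feature descriptor; a query
# image is matched to the closest location by cosine similarity.
rng = np.random.default_rng(0)

# Hypothetical "map" of three scanned locations.
location_names = ["plaza", "fountain", "station"]
location_descriptors = rng.normal(size=(3, 128))

def localize(query_descriptor, descriptors, names):
    """Return the name of the location whose descriptor is most similar."""
    q = query_descriptor / np.linalg.norm(query_descriptor)
    d = descriptors / np.linalg.norm(descriptors, axis=1, keepdims=True)
    similarities = d @ q  # cosine similarity to each mapped location
    return names[int(np.argmax(similarities))]

# A query taken near the "fountain": its descriptor is a noisy copy
# of that location's map descriptor.
query = location_descriptors[1] + 0.1 * rng.normal(size=128)
print(localize(query, location_descriptors, location_names))
```

A real system would then refine this coarse match into a centimeter-accurate 6-degree-of-freedom pose by aligning image features with the location's 3D geometry, a step omitted here.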
Furthermore, this is a unique data set, since the images are taken from a pedestrian perspective and therefore include places inaccessible by other means, such as cars.
Taking all this into account, the company has clarified that each of the local networks in the VPS system would contribute to the large global model, "implementing a shared understanding of geographic locations and understanding places that have not yet been fully scanned."
With all this information, Niantic's LGM will allow computers to perceive and understand physical spaces, as well as interact with them "in new ways". As the company stressed, this represents a "critical component" for advancing technology in areas such as augmented-reality glasses, robotics, content creation, and autonomous systems.
"As we move from phones to wearable technology linked to the real world, spatial intelligence will become the world's future operating system," the technology company has stated.
Likewise, Niantic has specified that it is called an LGM because it will work similarly to large language models (LLMs), which are built using large amounts of raw data. In the case of the LGM, "billions" of images of the world, linked to precise locations, will be "distilled" into a large model that enables understanding based on location, structures, and physical interactions.