Video game designers are facing a revolution of their own: one in which artificial intelligence becomes a powerful tool for creating virtual worlds, characters and plots with far greater ease. We are beginning to glimpse what is coming.
AI, I want a snowy forest for my video game. Tools like ChatGPT or DALL-E have been showing for some time what generative AI models can do, but even more ambitious developments are emerging in the video game niche. One example is the technology of Opus.ai, which manages to create virtual worlds from a few simple written instructions.
Infinite and dynamic scenarios. As the people behind this startup explain, the artificial intelligence they use allows them to create dynamic, infinite worlds. Their text-to-video AI can generate not only scenarios through a computational production process, but also other elements of the video game, such as characters, dialogue or visual effects.
These NPCs look human. This technology is, as mentioned, applicable to another important element of many video games: NPCs (Non-Playable Characters), characters that in many cases were little more than filler to add atmosphere, but that can now take on far more powerful roles. For years, work has been under way on integrating the conversational capability of GPT-3 with synthesized voices such as those of Replica Studios.
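As a rough illustration of how such an NPC might be wired up, here is a minimal sketch: a persona prompt plus the running dialogue is assembled into a chat-completion request. The persona text, function name and model choice are our own assumptions, not from any actual game or from the article.

```python
# Hypothetical sketch of a GPT-backed NPC: keep a persona prompt plus the
# running dialogue, then ask a chat model for the character's next line.
# Persona text and function names are illustrative, not from any real game.

def build_npc_messages(persona, history, player_line):
    """Assemble the chat-completion message list for one NPC turn."""
    messages = [{"role": "system", "content": persona}]
    for speaker, text in history:
        role = "user" if speaker == "player" else "assistant"
        messages.append({"role": role, "content": text})
    messages.append({"role": "user", "content": player_line})
    return messages

persona = "You are Mira, a blacksmith NPC in a snowy mountain village. Stay in character."
history = [
    ("player", "Hello!"),
    ("npc", "Welcome, traveler. Cold out there, isn't it?"),
]
messages = build_npc_messages(persona, history, "Can you repair my sword?")

# Sending this to a model (requires the `openai` package and an API key):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
# print(reply.choices[0].message.content)
```

The point of keeping the whole history in the request is that the model's reply stays consistent with what the character has already said, which is what makes the NPC feel less repetitive.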
Smarter enemies. It is not just that an NPC can hold a conversation, and avoid becoming repetitive and boring, but that the NPCs who are our enemies in the game can genuinely challenge us. Artificial intelligence was already capable of "cheating" to win at video games, but Sony, for example, has for some time been developing AI so that the enemies in its games are especially difficult to overcome.
The Stanford experiment. Researchers from the prestigious Stanford University, in collaboration with Google, created a small simulator whose 25 characters were powered by ChatGPT. The simulation, which ran for two days, demonstrated (it can be seen in action here) how these AI bots were able to interact with each other in a human way: they planned a party, coordinated the event and attended it within the simulation.
Indistinguishable from humans? Errors did arise in that experiment: several bots entered the bathroom even though there was only room for one, for example. Even so, the experiment was revealing, and it lets us imagine a not-too-distant future in which the NPCs of a video game interact like human beings even when we are not nearby or interacting with them directly. The only problem: the cost. The researchers indicated that simulating those two days cost thousands of dollars due to the heavy use of tokens in ChatGPT.
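A back-of-the-envelope calculation shows why costs climb into the thousands of dollars. Every figure below is an illustrative assumption (call frequency, tokens per call, per-token price), not a number from the Stanford paper; prices vary by model and date.

```python
# Rough token-cost estimate for a 25-agent, two-day simulation.
# All numbers are illustrative assumptions, NOT figures from the paper.

AGENTS = 25
CALLS_PER_AGENT_PER_HOUR = 60   # assumed: one model call per simulated minute
SIM_HOURS = 48                  # the two simulated days
TOKENS_PER_CALL = 2_000         # assumed: prompt (memories, plans) plus reply
PRICE_PER_1K_TOKENS = 0.03      # assumed USD price for a GPT-class model

total_calls = AGENTS * CALLS_PER_AGENT_PER_HOUR * SIM_HOURS
total_tokens = total_calls * TOKENS_PER_CALL
cost_usd = total_tokens / 1_000 * PRICE_PER_1K_TOKENS

print(f"{total_calls:,} calls, {total_tokens:,} tokens, ~${cost_usd:,.0f}")
```

Even with these modest assumptions the bill lands in the thousands of dollars, because each agent's prompt must carry its memories and plans on every single call.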
Let’s talk about NeRF. Not those Nerf toys, mind you: we are talking about NeRFs (Neural Radiance Fields). These are fully connected neural networks capable of generating spectacular 3D scenes. The best part? Only a handful of 2D images are needed to generate those 3D scenarios.
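At the heart of NeRF is a volume-rendering step: the trained network maps a 3D position (and view direction) to a density and a color, and samples along each camera ray are alpha-composited into a pixel. The sketch below shows that compositing math with a stand-in function playing the role of the trained network; the stand-in's density and color values are our own assumptions.

```python
import numpy as np

# Minimal sketch of NeRF's volume-rendering step. A real NeRF trains an MLP
# mapping (3D position, view direction) -> (density, RGB); here a toy
# stand-in function plays that role so the compositing math can be shown.

def toy_radiance_field(points, view_dir):
    """Stand-in for the trained MLP: density and color per sample point."""
    # Assumed toy scene: a soft blob of density centered at (0, 0, 2).
    density = np.exp(-np.linalg.norm(points - np.array([0.0, 0.0, 2.0]), axis=1))
    color = np.tile([0.9, 0.6, 0.3], (len(points), 1))  # constant color
    return density, color

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    """Alpha-composite samples along one camera ray into a pixel color."""
    t = np.linspace(near, far, n_samples)
    delta = t[1] - t[0]                        # spacing between samples
    points = origin + t[:, None] * direction   # sample positions along the ray
    sigma, rgb = toy_radiance_field(points, direction)
    alpha = 1.0 - np.exp(-sigma * delta)       # opacity of each segment
    # Transmittance: how much light survives to reach each sample.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)  # final pixel color

pixel = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(pixel)  # an RGB triple with components in [0, 1]
```

Training replaces the stand-in with a network optimized so that rays rendered this way reproduce the handful of 2D input photos, which is what lets NeRF build a 3D scene from so few images.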
Give voice to your characters. The NPCs we were talking about can also benefit from technologies such as the OpenAI Whisper API, which transcribes speech to text and makes conversations with these characters, whether or not they are important to the story, much more versatile, convincing and realistic.
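Transcribing a player's spoken line with the Whisper API can be sketched as below, using the official `openai` Python SDK. The file path and function name are illustrative, and actually running the call requires an API key and network access, so it is only shown commented out.

```python
# Hedged sketch: transcribing a player's spoken line with OpenAI's hosted
# Whisper model via the official `openai` SDK (v1 interface).
# The audio path and function name are illustrative assumptions.

def transcribe_player_line(audio_path):
    """Send an audio file to the Whisper API and return the transcript text."""
    from openai import OpenAI        # third-party package: pip install openai
    client = OpenAI()                # reads OPENAI_API_KEY from the environment
    with open(audio_path, "rb") as audio_file:
        result = client.audio.transcriptions.create(
            model="whisper-1",       # hosted Whisper model
            file=audio_file,
        )
    return result.text

# Example (not run here, since it needs an API key and network access):
# text = transcribe_player_line("player_mic_capture.wav")
# The transcript can then be fed to the NPC's dialogue model as the
# player's turn, closing the loop between spoken input and NPC replies.
```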
In Xataka | There are now video games so hyper-realistic that it is hard to tell them apart from real video. ‘Unrecord’ is the best example