Gaming

Nvidia Unveils Avatar Cloud Engine, Bringing Non-Playable Video Game Characters to Life with Generative AI

May 29 (Portaltic/EP) –

Nvidia has presented its new technology Avatar Cloud Engine (ACE), which "brings to life" virtual non-playable characters (NPCs) using generative Artificial Intelligence (AI), allowing users to hold conversations with them and receive natural responses.

The tech company says that generative AI has the potential to "revolutionize the interactivity" players have with characters and thus "drastically increase the immersion" in video games. Along these lines, it continues to bring generative AI to new fields, in this case integrating it into video games, drawing on its "decades of experience working with developers."

Nvidia has introduced Avatar Cloud Engine (ACE) for Games, a new developer technology based on a custom "AI model foundry" service able to "provide intelligence" to video game NPCs through AI-powered language interactions.

This technology, as the company explains in a statement on its website, is aimed at tool and game developers, who can use it to build and deploy customized voice, conversation and animation AI models within their game software.

In this way, it enables players to hold a voice conversation with NPCs and receive natural, consistent responses, all of it AI-powered. In other words, speaking directly into a headset microphone, the player can chat with non-playable characters, which will answer the spoken questions aloud and naturally.

The company shows this in a demo called Kairos, featuring a scene from a video game developed in collaboration with Convai, a startup in the Nvidia Inception program, where the user, controlling a playable character, asks Jin, an NPC ramen shop vendor, various questions.

During the conversation, Jin responds in context and with information relevant to the game's progression. That is, the NPC is able to answer in natural language, realistically, and with information consistent with the narrative's backstory.

Along the same lines, Nvidia explains that ACE for Games is built on Nvidia Omniverse and offers several foundation AI models optimized for voice, conversation and character animation.

On the one hand, these models include Nvidia NeMo, a technology for building, customizing and deploying language models using a developer's own data. The language models can be tuned with the lore and backstories of the characters of the game in which they will be used. Likewise, NeMo Guardrails makes it possible to protect the model against counterproductive or unsafe conversations.
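
As an illustration, here is a minimal sketch of how a developer might wrap a lore-tuned character model with the open-source NeMo Guardrails Python API. The "jin_config" directory, its contents and the example question are hypothetical; the actual rails and model choice would depend on the game.

```python
# Minimal sketch using the open-source NeMo Guardrails Python API.
# The "jin_config" directory is hypothetical; a real setup would hold a
# config.yml (model choice) plus Colang rail files encoding the
# character's lore and the topics it must refuse to discuss.
from nemoguardrails import LLMRails, RailsConfig

# Load a rails configuration (YAML + Colang files) from disk.
config = RailsConfig.from_path("./jin_config")
rails = LLMRails(config)

# Ask the guarded character model a question, as a player might in-game.
response = rails.generate(messages=[
    {"role": "user", "content": "Jin, how is business at the ramen shop?"}
])
print(response["content"])
```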

Another technology ACE includes is Nvidia Riva, which handles automatic speech recognition and text-to-speech in order to enable the spoken conversation with the NPC.
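
A sketch of what that speech round trip could look like with Riva's published Python client (the nvidia-riva-client package) follows; the server address, file names and voice name are placeholders, and the exact parameters would depend on the deployed Riva server.

```python
# Sketch of a Riva speech round trip: player audio in, NPC audio out.
# Assumes a Riva server is reachable at localhost:50051; the file names
# and voice below are placeholders.
import riva.client

auth = riva.client.Auth(uri="localhost:50051")

# Speech recognition: turn the player's microphone capture into text.
asr = riva.client.ASRService(auth)
asr_config = riva.client.RecognitionConfig(
    language_code="en-US",
    max_alternatives=1,
    enable_automatic_punctuation=True,
)
with open("player_question.wav", "rb") as f:
    audio_bytes = f.read()
result = asr.offline_recognize(audio_bytes, asr_config)
question = result.results[0].alternatives[0].transcript

# ...the transcript would go to the NeMo-backed character model here...
answer = "Business is good, traveler. Care for a bowl?"

# Text-to-speech: give the NPC's reply a voice.
tts = riva.client.SpeechSynthesisService(auth)
reply = tts.synthesize(
    answer,
    voice_name="English-US.Female-1",
    language_code="en-US",
    sample_rate_hz=44100,
)
with open("npc_reply.wav", "wb") as f:
    f.write(reply.audio)
```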

Finally, ACE uses Nvidia Omniverse Audio2Face which, the company indicates, can "instantly" create "expressive" facial animation for an NPC to match what the voice track is saying. Specifically, it notes that this technology offers connectors for Unreal Engine 5, making it easy for developers to add facial animation directly to MetaHuman characters.

In addition, developers can adopt the entire ACE solution or pick only the components that are most useful to them.
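
Put together, the flow the article describes is a simple loop: speech in, guarded language model, speech and animation out. The sketch below shows how those pieces could chain together in a game; every function body is an illustrative stand-in, not part of any Nvidia API.

```python
# Hypothetical glue for the pipeline described above; each function body
# is a placeholder stub standing in for the real component.

def riva_speech_to_text(mic_audio: bytes) -> str:
    """Stand-in for Nvidia Riva ASR (see the Riva sketch above)."""
    return "How is business at the ramen shop?"

def guarded_character_reply(question: str) -> str:
    """Stand-in for a NeMo language model wrapped in NeMo Guardrails."""
    return "Business is good, traveler. Care for a bowl?"

def riva_text_to_speech(answer: str) -> bytes:
    """Stand-in for Nvidia Riva TTS."""
    return b"\x00\x00"  # placeholder audio samples

def drive_facial_animation(npc_audio: bytes) -> None:
    """Stand-in for Omniverse Audio2Face animating the NPC's face."""
    pass

def npc_dialogue_turn(mic_audio: bytes) -> bytes:
    """One spoken turn with an AI-driven NPC: hear, think, speak, animate."""
    question = riva_speech_to_text(mic_audio)
    answer = guarded_character_reply(question)
    npc_audio = riva_text_to_speech(answer)
    drive_facial_animation(npc_audio)
    return npc_audio

npc_dialogue_turn(b"")  # a game loop would call this once per player utterance
```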