
Nvidia moves toward rendering realistic hair and complex animations with AI, no motion capture required

Generative artificial intelligence is no longer a mere curiosity; it is taking shape as a technology whose potential we are only beginning to glimpse. Nvidia knows this better than anyone. The American firm has spent years exploiting the many possibilities of AI in its consumer and professional products, and now it is preparing to take a new leap: generating realistic animations in a semi-automated way, created solely from reference material.

The company’s latest advances were showcased at the Siggraph 2023 conference, where it offered a first look at the algorithms that could power the character-creation tools of the future. Among the demonstrations is a project, developed in collaboration with researchers from Stanford University, that can create 3D animations of a tennis player from 2D video.

As can be seen in the video, the character’s movements are skeleton-driven (no easy feat for an AI), quite realistically replicating the rotation and extension of the different parts of the body and imitating serves and backhands without the need for expensive motion-capture systems. In fact, Nvidia says this technology could be very attractive for reducing development costs, even if the final animations may require some manual adjustment.

Another exciting development is the use of generative AI to create hair with believable movement. The method developed by Nvidia puts an end to the usual helmet-like 3D hairpieces, simulating “tens of thousands of hairs” animated by a neural network capable of predicting how they would move in the real world. For reference, a human head has about 100,000 hairs.

Beyond its impressive visual effect, the most interesting thing about this technology is that it has been optimized for “modern GPUs” and runs in real time, so we could see it implemented in games sooner rather than later.

Finally, rounding out the AI-based technologies unveiled by Nvidia, there is another potentially interesting innovation: texture compression using neural networks.

This solution would help avoid the storage and video-memory problems that arise when using very high-resolution environment textures. Nvidia’s approach is novel in that it uses “small neural networks” tailored to each material, as opposed to a single general model along the lines of DLSS.
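To make the idea concrete, here is a toy sketch of what a per-material network looks like conceptually: a tiny function that maps a texel coordinate to a color, so the storage cost is the network's weights rather than a full-resolution image. This is purely an illustration with random weights and invented sizes, not Nvidia's actual architecture (which also relies on learned latent features and training per material).

```python
import numpy as np

# Toy illustration of per-material neural texture decoding.
# A tiny network stands in for a texture: storage = weights, not pixels.
# All sizes and weights here are arbitrary assumptions for demonstration.

rng = np.random.default_rng(0)

# Tiny two-layer MLP: (u, v) coordinate -> 16 hidden units -> RGB.
W1 = rng.normal(size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(size=(16, 3)); b2 = np.zeros(3)

def decode_texel(u, v):
    """Decode one texel color from normalized coordinates in [0, 1]."""
    h = np.tanh(np.array([u, v]) @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid -> RGB in [0, 1]

# The memory footprint is the parameter count, independent of resolution:
params = W1.size + b1.size + W2.size + b2.size
print(params)  # 99 parameters standing in for an entire texture
```

The point of keeping the network material-specific and this small is that it can be evaluated per texel during shading, which is what makes real-time use plausible.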

AI-compressed textures could be very useful for next-gen games.

Nvidia claims that its technique drastically reduces VRAM usage while delivering quality comparable to traditional methods. Conversely, if VRAM consumption is held constant, it can provide up to 16 times more texels, still running in real time.
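A quick back-of-the-envelope calculation shows what a 16× texel budget means in practice: since texel count grows with the square of resolution, 16 times more texels corresponds to quadrupling each axis (4 × 4 = 16). The texture sizes below are illustrative assumptions, not figures from Nvidia.

```python
# Illustrative arithmetic for the "16x more texels at the same VRAM" claim.
# The 4K baseline is an assumed example, not a figure quoted by Nvidia.

def texels(width, height):
    """Total texel count of a width x height texture."""
    return width * height

baseline = texels(4096, 4096)           # an assumed 4K material
neural = texels(4096 * 4, 4096 * 4)     # 4x per axis -> a 16K x 16K material

print(neural // baseline)  # -> 16
```

In other words, under this reading of the claim, a memory budget that holds a 4K texture today could hold the equivalent of a 16K texture with the neural scheme.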
