Science and Tech

Meta presents Movie Gen, to make videos with AI

Its features include generating personalized videos and sound, editing existing videos, and transforming still images into video.

“By providing a text prompt, we can leverage a joint model that has been optimized for both text-to-image and text-to-video generation to create high-quality, transformative images and videos,” the company noted.

The technology can generate videos of up to 16 seconds, while audio clips can last up to 45 seconds.

The announcement comes as Hollywood has spent the year debating how to leverage generative AI video technology, after Microsoft-backed OpenAI first showed in February how its Sora product could create feature-film-like videos in response to text prompts.

Some entertainment industry technologists are eager to use these tools to improve and streamline moviemaking, while others are wary of adopting systems that appear to have been trained on copyrighted works without permission.

Meta said the presentation is not intended to replace the work of artists and animators; rather, the company believes “in the power of this technology to help people express themselves in new ways and provide opportunities to people who otherwise would not have them.”

Meta spokespeople said the company was unlikely to make Movie Gen available to developers, as it has done with its Llama series of large language models, because it weighs the risks of each model individually. They declined to comment on Meta’s assessment of Movie Gen specifically.

Movie Gen is part of the third wave of AI work presented by Meta. The first came with the Make-A-Scene models, which allowed the creation of images, audio, video, and 3D animations.

It was followed by the Llama Image foundation model, which enabled higher-quality image and video generation as well as image editing.

“Movie Gen is our third wave, combining all of these modalities and allowing more granular control for people using the models in a way that has never been possible before,” Meta added.

Movie Gen works by having users provide a text prompt to generate a video, for example “imagine a baby hippopotamus swimming.” Based on the result the AI returns, edits can then be requested in text as well, with instructions to add, delete, or replace elements.

In the same way, it is possible to request the addition of audio, such as ambient sound or instrumental music.

With information from Reuters.
