Although AI-generated videos still have a way to go, it was only a matter of time before this technology made its way to video editing. Some editing tools, such as CapCut, already use AI to enhance images or reduce noise, but what Adobe has just announced goes a step further. In essence, the company has brought the Adobe Firefly technology we have already seen in Photoshop to Premiere.
The result is impressive.
Generative AI for video. As explained by Adobe, at the end of this year it will launch "new generative AI functions in Premiere Pro." The idea is not so much to create videos from scratch, although, as we will see, that is also coming, but to cut the time spent on incredibly tedious tasks, such as removing an unwanted object or modifying an element.
In fact, Adobe's vision is to let users "leverage video generation models from OpenAI and Runway, integrated into Premiere Pro, to generate B-roll to edit your projects," or use the Pika Labs tools "to add a few seconds to the end of the shot." In other words, Adobe's approach rests on two pillars: generative AI with Adobe Firefly, and third-party models.
Generative extension. This is the first of the functions and, as its name indicates, it lets you add frames to a clip to make it longer. This matters for editing and pacing, since sometimes we only need a few more frames to make an L-cut or add a transition. The same technology can be used to extend the silent parts of the audio, creating a kind of ambient sound that smooths over audio cuts.
Add objects. Another thing we can do is add or modify objects in the video. It is not that this is impossible to do by hand, far from it, but it is a long, tedious task and not within everyone's reach. With Firefly we can select a part of the video and add something (diamonds, for instance) or swap one object for another. Think, for example, of replacing a garbage container that has slipped into the background of the shot with a mailbox, or adding a vase of flowers to a table.
Delete objects. And in the same way that we can add, Premiere's AI will also allow us to remove objects. Here it is very easy to imagine a use case. At a home or semi-professional level, we can think of a stain on clothes or a dirty glass that we have forgotten in the setup. At a professional level, a microphone that has slipped into the shot, a logo that should not appear or a license plate.
Text to video. Adobe has also announced that B-roll can be generated on the fly from text alone, something many saw coming. Just as we can generate images today, it will soon be possible to ask Premiere (which will integrate other generative AI models) to generate a clip of whatever we want to illustrate. Stock video banks exist, both free and paid, but sometimes it is difficult, or downright impossible, to find exactly the clip we need. Thanks to AI, we will only need to describe it well in text.
User information. All of these are "first explorations" for the future, according to Adobe. The firm assures that users will be able to choose the model that "suits their use cases," but exactly how this technology will be deployed remains to be finalized. Be that as it may, Adobe says the integration of these third-party models will be "consistent" with the company's security standards, and that it is committed to "associating content credentials" with content produced with AI. With this, Adobe is referring to the Content Authenticity Initiative, a kind of "watermark" for AI-generated content.
Images | Adobe
In Xataka | What we talk about when we talk about artificial intelligence in household appliances. Not (yet) from ChatGPT