This week, NAB, the image and sound technology show, attracted more than 61,000 visitors to Las Vegas. Adobe, the publisher of software widely used for post-production, particularly in Hollywood, took the opportunity to announce the arrival of generative video AI in Premiere Pro, its professional editing application… A revolution in the service of creativity.
In Hollywood, as in the advertising world, which has begun to adopt it, generative artificial intelligence dedicated to video is eagerly awaited in Premiere Pro, the editing software that professionals already use. It will arrive later this year, as Adobe, its publisher, announced during NAB in Las Vegas.
It is a structured integration, designed to prevent the technology from being misused, with three main planned uses. Firefly – the name of this AI – will be able to extend a shot: if frames are missing during editing, Premiere Pro will generate them from the footage that was actually shot, ensuring a seamless cut to the next shot.
Even more impressive: if a transition shot is missing – a cutaway, for example – Firefly will again be able to “save” the director and editor, by creating, say, a reverse shot from scratch.
Tracking movements according to the visual environment
This technology also allows you to intervene directly in the edit, whether to delete an object in a few clicks – making it disappear as if it had never been in the image – or, conversely, to add an object that was never there: a watch or a tie on an actor who wasn’t wearing one, as Fred Rolland, Adobe’s head of professional video solutions for Europe, explains to franceinfo:
“In the example of the tie, it is a man in motion, dressed in a suit. The user will be able to select the area in which he wishes to insert the tie, and ask Firefly to generate it. The magic is that each object will follow the subject’s movements, frame by frame, integrating the colorimetry, the 3D environment and the lighting, so as to give a completely realistic rendering.”
Several models to choose from, including Sora from OpenAI
Generative AI needs models to work, and the choice of these models is another particularity of Adobe’s announcement: the company wants to give users the choice. Users will therefore be able to opt either for third-party generative AI models – offered by Runway, Pika Labs or even OpenAI, the publisher of ChatGPT, with its eagerly awaited Sora – or for Adobe’s own generative video AI model, which is currently being trained on its image bank, called Stock, which includes 450 million photo and video files.