Adobe Max 2024, the event where Adobe shows off all its cool new stuff, is all about AI.
The new AI features span Adobe’s creative suite, from Photoshop and Premiere Pro to Illustrator, InDesign, and Frame.io.
Let’s kick things off with the most insane one: Premiere Pro gets Generative Extend, which lets you extend a clip by up to two seconds. It’s meant for clips that run a little too short, and it’s aimed at removing the need for reshoots or capturing additional footage.
It can also generate up to ten seconds of audio to match the original video, though it won’t generate music or spoken dialogue. Generated extensions come out at either 720p or 1080p, at 24 frames per second.
It should be noted that this will launch first as a beta.
Staying on video, Adobe is also rolling out Text-to-Video and Image-to-Video, both of which will be available through the Firefly web app.
Text-to-Video lets you generate video content from a text prompt. You also get controls for shaping the result, such as camera angle, distance, and motion, and you can generate different video styles, including 3D, stop motion, and so on.
Image-to-Video works alongside text prompts: you give the AI a specific reference image to work from, which gives you a bit more control over the output. Adobe has said this can cover some reshoots, but it suggests the feature is better suited to creating b-roll.
Both features can generate videos of up to five seconds, at 720p and 24 frames per second.
These two features raise a very interesting question for the future of filmmaking. Will these AI generation tools actually be adopted by animators and video editors? Almost certainly yes. I know a few creatives who are willing to embrace this technology. But what about Hollywood itself? Will it help fix the increasingly bad CGI we get because studios aren’t properly paying visual artists and are giving them unrealistic, head-bashing deadlines, huge last-minute changes, and unreasonable last-last-second changes on top of that? I hope artists can actually use this to their advantage and, like, keep making good stuff without existential career threats.
Anyway, what were we talking about? Adobe Max, right.
Photoshop gets Distraction Removal. We’ve seen this trick before. It’s basically Google’s Magic Eraser: you can remove unwanted elements such as background people, wires, and other distracting objects. Of course, once the undesired objects have been removed, some editing is required to fill the gap and create a smoother image. You can do this yourself, or just have the AI do it for you, filling things in based on the surrounding background.
Illustrator gets Objects on Path, which lets you arrange a set of objects along a path.
InDesign gets Photoshop’s Generative Expand, a tool that extends an image beyond its original edges and generates new content to fill the extra space.
Adobe’s collaborative platform, Frame.io, has also gotten an upgrade that makes it easier for multiple users to work on a single project. Think of it as Slack, but with Adobe stuff in it.
When it comes to video generation, Adobe has the head start in releasing a product for more people to try. Other companies, such as OpenAI, Google, and Meta, are still working on Sora, Veo, and Movie Gen, respectively. But, as we all know, AI moves very fast, so we may start seeing more soon.