You no longer need a TV studio, millions of dollars of equipment, and a team of editors, animators, colorists, and artists to produce professional-looking video content. You do still need a bit of talent, but maybe not for long, as Adobe is introducing new video tools that let you enhance, manipulate, and even edit videos with just a few typed commands.
Back in March, Adobe finally threw its hat into the AI image generation ring with the reveal of Adobe Firefly, a tool that promised to make image editing easy without infringing on the existing work of photographers, artists, and other creators. Instead, Firefly was only trained using images from Adobe’s own stock image site, public domain content, and openly licensed work.
Apps like Photoshop and Lightroom still require quite a bit of artistic talent and skill to make an image look its best, but Firefly simplified that process through AI, and even allowed the photographer to be taken out of the equation entirely, with users simply having to type a description of the image they need. Adobe Firefly is still in beta, meaning it’s not available to everyone just yet, but ahead of a wider release, Adobe has already revealed that Firefly’s capabilities will also be available to those working with video, with a handful of specific skills being highlighted. The company promises the results will be “safe for commercial use.”
Filters have long made it easy to change the look and feel of a video clip, cooling color intensities to make them feel more somber, or warming the overall look to make it feel more upbeat, but finding the exact filter you need can require some trial and error. With Adobe Firefly, users can just describe exactly the look they’re trying to achieve through text prompts, and can even make more specific adjustments or improvements to a clip, such as brightening only their subject’s face. But with more complex footage, such as a room full of people, how specific these adjustments can get remains to be seen.
A well-executed text or title effect can help a video look like it was produced on a Hollywood-sized budget, even if it was only created in someone’s parents’ basement. But complex animations often require advanced animation skills and, these days, a working knowledge of 3D rendering tools. With Adobe Firefly, users can simply describe how they want a piece of on-screen text to look, and the AI will instantly spit out the desired results.
No documentary viewer wants to watch a talking head for minutes on end, so finding the appropriate b-roll footage to break up a long interview clip is crucial to keeping viewers interested. It’s usually a time-consuming task for editors, however, as they need to find a clip that correlates to what someone on-screen is saying. Adobe Firefly can automate the process by analyzing the text of a script, matching what’s being said with additional footage from the same project, and then dropping it into the timeline at the appropriate times.
Finding music that matches the tone of a clip, or a sound effect that perfectly matches the action on screen, is only half the challenge. You also have to ensure the music or sound effects are cleared for broadcast wherever a video will eventually be seen. It’s a challenge that keeps both editors and lawyers busy, but Adobe Firefly will be able to generate both “custom sounds and music to fit a specific mood and scene.” These would be completely original creations, already safe for use in a commercial capacity.
Thorough pre-planning can make a shoot run smoother and help keep a production on budget. But creating detailed storyboards and animated previsualizations isn’t always affordable for every production. With Adobe Firefly, a script can be analyzed and either 2D static images or low-quality 3D animations can be generated for every shot, helping everyone on a production visualize a scene and understand exactly what’s needed on set, where time and money disappear quickly.
Unfortunately, while Adobe has made Firefly’s image tools available to select users through a limited beta program, the new AI video tools won’t be available until later this year.