The new AI features in Adobe Firefly are intended to take the tedium out of editors' work.

Adobe said Monday that it is already working on a number of updates to its Creative Cloud video and audio applications, less than a month after launching its new suite of Firefly generative AI editing tools. The updates are expected to arrive in Firefly's beta program later this year.

Built on the company's long-running AI program, Sensei, Firefly is a collection of generative AI models that, like DALL-E and ChatGPT, can generate and alter audio, video, graphics, and 3D models from text input. Firefly's capabilities are already being folded into Adobe's suite of products, including Premiere Pro, Illustrator, After Effects, and Photoshop, though they won't be available until the end of the year through the limited beta program.

Professional editors will be able to use new features like color boosting, placeholder image insertion, effect addition, and automated b-roll recommendation for a given project by simply typing their requests into Firefly's AI text prompt and letting the algorithm take care of the rest. This will include "text to color enhancements," a feature with a wide range of applications that can change the time of day, the season, and even the brightness and saturation levels using natural language cues.

The generative AI functions will also extend to audio, letting editors add background music and sound effects simply by typing what they want. Soon we should also see the animated font features first shown at the premiere event last month, as well as an automatic b-roll tool that reads scripts to produce storyboards and recommend footage. Perhaps most notably, Firefly will even create customized how-to guides to help new users navigate these features.