November 23, 2024
Adobe Will Soon Let You Generate AI Videos From Text and Image Prompts
Adobe Firefly Video Model, the company's upcoming artificial intelligence (AI) model capable of video generation, was previewed on Wednesday. The software giant first announced the under-development video model in April and has now shared more details about it. The AI model will be able to generate videos from text prompts as well as image inputs. Users will also be able to generate videos with various camera angles, styles, and effects. The company also stated that the video model will be available in beta later this year.

Adobe Firefly Video Model Previewed

In a newsroom post, the company detailed the capabilities of the native AI video model. A YouTube video was also shared to showcase its features. Once launched, the Firefly Video Model will join Adobe’s existing generative models including the Image Model, Vector Model, and Design Model.

Based on the YouTube video, it appears the Adobe Firefly Video Model can generate videos from both text and image-based inputs. This means users will be able to write a detailed prompt or share an image as the reference for the output video.

Users will also be able to make complex requests involving multiple camera angles, lighting conditions, styles, zoom, and motion, the company claimed. Notably, the AI-generated videos shared by the company appeared to be on par with those teased by OpenAI for Sora.

Additionally, the company demonstrated the Generative Extend feature, which was first revealed (but not showcased) in April. The feature essentially allows users to extend the duration of a shot by adding extra frames. These frames are generated by AI, using the preceding and following frames as reference. This gives editors the option to lengthen a video or let the camera pan on a shot for a couple of seconds longer.

Citing Alexandru Costin, VP of generative AI at Adobe, The Verge reports that the maximum length of the AI-generated videos has been capped at five seconds, which is on par with similar tools available in the market. Notably, while the company said the Firefly Video Model will be available as a standalone app, it will also be integrated within the Creative Cloud, Experience Cloud, and Adobe Express workflows.

Further, the company claims that the AI video model is “commercially safe” and has only been trained on licensed content, data available in the public domain, and content from Adobe Stock. The software giant also highlighted that the AI model will not be trained on user data.

For the latest tech news and reviews, follow Gadgets 360 on X, Facebook, WhatsApp, Threads and Google News. For the latest videos on gadgets and tech, subscribe to our YouTube channel. If you want to know everything about top influencers, follow our in-house Who’sThat360 on Instagram and YouTube.
