Firefly: Adobe’s entry to the generative AI market

Yesterday I promised some thoughts about Adobe’s entry to the generative AI market, and although most of us are still waiting for access, here’s what we know about Firefly so far. As you’d expect from a company focused on image and video editing, Firefly targets image generation rather than text, so it will compete with the likes of DALL-E or Midjourney rather than ChatGPT/GPT-4.

There’s been controversy over existing ‘AI art’ tools repurposing images belonging to professional artists, but interestingly, Adobe’s initial offering sidesteps this by being trained only on Adobe Stock images, openly licensed content and public domain content whose copyright has expired. Adobe says this makes any generated content safe for commercial use.

The launch video shows some quite remarkable capabilities, but of course the big advantage for Adobe is that it can bring the tools directly to its huge worldwide base of Photoshop, Illustrator, Premiere and InDesign users. This is similar to Microsoft being able to offer generative AI tools to its Word, PowerPoint and Excel users, which the company is rushing to do.

Capabilities being promised include creating images from a prompt, replacing sections of an image, generating effects/designs to layer on top of text, generating custom vectors, brushes and textures, and editing video and 3D models.

The thing to notice when watching the video above is not the image manipulation capabilities themselves, some of which are already available, but the way they can be invoked by typing a text description of the desired action. This is what’s being referred to by those who talk about a ‘de-skilling revolution’ coming our way. The argument is: “Who needs an expensive Photoshop artist when you can just look at a photo of a guy on a screen and tell the computer to show him wearing a red jacket instead of a black shirt?”