Expand your creativity with Adobe's integrated AI capabilities
Many of the generative AI tools that you have come to know and love, like ChatGPT and DALL·E, are relative newcomers to the AI world. Then there’s Adobe, one of the biggest content creation companies on the market and a longtime player in the AI game. Let’s take a peek under the hood of Adobe AI, guided by Kevin Schmitt, a producer in the Content Strategy Group at Yes&.
Daisy 🌼: Hi Kevin! How long have you been working with Adobe products?
Kevin: I've been working with Adobe software since, well, let's just say an embarrassingly long time. Creative Cloud is my go-to suite of tools for anything creative I have to build. And now that AI tools are built inside, I don’t have to leave the cozy confines of familiar software surroundings.
Daisy 🌼: How are Adobe's AI features stacking up?
Kevin: For one thing, AI is now at my fingertips. Before, for every new generative AI tool, I would have to learn an entirely new layout and set of features. ChatGPT looks and functions very differently from image generators like Midjourney or Stable Diffusion, and both look and function very differently from Adobe's tools. That's a lot of learning curves to contend with. What Adobe has been slowly building is AI that is instantly available and directly integrated into applications like Photoshop and Illustrator. There are no formatting or importing issues, and no plugins required. It makes the workflow much easier.
Daisy 🌼: Why do you think Adobe has entered the crowded AI field?
Kevin: The interesting thing is that Adobe has been adding AI features to its software for a while now, owing to Adobe's Sensei AI initiative, which was introduced several years ago. For example, if you've used Photoshop's selection tools or After Effects' Roto Brush over the years, you may have noticed that those features have gotten much better at isolating subjects from backgrounds. That's the sort of thing Sensei has enabled to this point. We Adobe creators may have been using AI in our workflows without even realizing it.
Daisy 🌼: How has Adobe been rolling out these features?
Kevin: Adobe has been developing its generative AI model, named Firefly, and the first few features have begun making their way into Adobe software. Let's focus on Photoshop, which makes image generation as simple as making a selection and clicking the Generative Fill button. You can enter a prompt, such as "robot alien clown," or you can bypass the prompt and Photoshop will generate something completely random.
Result of "robot alien clown" prompt
Random no-prompt result
You can also select part of an existing image and use Generative Fill to add, for example, a lake to an existing mountain image simply by drawing a selection and typing the word “lake.”
Original image with selection
Generative fill "lake"
Or, when an existing image just doesn't give you enough to work with, you can use the Generative Expand feature to extend it beyond its original edges.
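For readers who want to script this kind of workflow rather than click through Photoshop, Adobe also exposes Firefly capabilities as web services. As a rough illustration only — the endpoint URL and JSON field names below are assumptions for the sake of example, not taken from Adobe's documentation, so check the official Firefly Services reference before relying on them — a "lake" fill request pairing a source image, a selection mask, and a prompt might be assembled like this in Python:

```python
import json

# Hypothetical Firefly-style Generative Fill endpoint (assumed, not documented here).
FILL_ENDPOINT = "https://firefly-api.adobe.io/v3/images/fill"


def build_fill_request(prompt, image_url, mask_url, num_variations=1):
    """Assemble a JSON payload that pairs a source image and a mask
    (the selection) with a text prompt such as "lake".

    All field names here are illustrative assumptions.
    """
    return {
        "prompt": prompt,
        "numVariations": num_variations,
        "image": {"source": {"url": image_url}},
        "mask": {"source": {"url": mask_url}},
    }


# Build the payload for the mountain-lake example from the article.
payload = build_fill_request(
    prompt="lake",
    image_url="https://example.com/mountains.png",
    mask_url="https://example.com/selection-mask.png",
)
print(json.dumps(payload, indent=2))
```

In a real script you would POST this payload to the service with your API key and bearer token in the request headers; the sketch stops at payload construction so nothing here depends on credentials.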
These are simplified examples, of course, that represent only the first step for generative AI features in Adobe’s Creative Cloud. It's worth a visit to the Adobe Firefly website, where you can preview or even experiment with upcoming features, such as the Firefly 2 image model or using text prompts to generate vector graphics in Illustrator.
Daisy 🌼: How handy, and practical! A lot of AI image generators make it easy to generate an image, but it's still hard to actually edit them.
Kevin: Totally. And even though they're still in their infancy, generative AI tools like Adobe Firefly are getting more and more robust with each software update. There's no time like the present to get in there and start seeing what you can come up with to aid in your designs.
Subscribe to The Ampersand newsletter for the next edition of Ask Daisy.