
Ask Daisy: The A.I. Revolution in the World of Design

Evangeline Weber

Introducing the latest hire at Yes&, Daisy. Daisy is an A.I. enthusiast and explorer, and the writer of our new A.I. blog series. She’s our guide through this exciting time. Whatever your organization or position, whether you’re an early or late adopter, Daisy will help you understand this massive shift. 

A.I. has started to change the way the marketing world does things: automating tasks, streamlining processes, and creating content. Content creation, known as “generative A.I.,” has our Digital Creative Lead Evangeline (Vanj) Weber extremely intrigued. Vanj, a UX/UI expert, is a digital innovator, always thinking of ways to create better customer and user experiences. I sat down with Vanj to get her thoughts about generative A.I. and how it’s affecting the world of design.

 

Daisy 🌼: Thanks for meeting with me, Vanj.
Sure thing, Daisy! As a digital designer and creator, I’ve found that A.I. has changed the way I approach my projects and solve problems. I was born with the “what does this button do?” gene, so I have a constant, insatiable curiosity, especially when it comes to technology. I’ve been using A.I. tools for more than a year now, and it’s not an exaggeration to say that A.I. is revolutionizing the creative process, no matter what field you’re in.

Daisy 🌼: That's a bold statement. How do you create with it?
Currently, using generative A.I. for the final creative output is up for debate. In fact, the question of who owns an A.I.-generated image is already the subject of a few lawsuits. However, there’s so much more to creativity than just a final output: creativity is ideation, iteration, and direction, too. In these cases, generative A.I. is simply another tool for the human imagination. 

Daisy 🌼: That's a unique way of bucketing the creative process. Can you walk us through it?
Sure! I start out with an idea. As I’m playing around with the tool, it almost always sparks an even better idea. And because so many text-to-image generators are at my disposal, I can try out multiple ideas in the same time frame, or try things that would have taken too long, required software or equipment I don’t have, or drained the budget. And once I narrow down a concept, I can create imagery that gives the client a crystallized reference of what it would look like, so there’s no room for misinterpretation. Think of it as a test kitchen with every possible ingredient already bought and prepped for you. You don’t have to spend time buying or prepping the ingredients; you just get to create.

Daisy 🌼: A.I. is like your test kitchen and your sous chef?
Exactly. So, say Buttercup Cafe asks you to create a new signature dish to bring in a morning crowd. In your test kitchen, you spot rose essence next to saffron strands, and lemons already zested. Suddenly you get an idea for an herbal omelet. That’s creative ideation. And as you’re separating your eggs, you see a strainer and want to see if straining your eggs makes a smoother omelet. And if none of your ideas work out, you can quickly start over because all of the ingredients self-replenish and the strainer self-rinses, so you don’t have to spend time preparing or cleaning. That’s creative iteration. And once you have settled on a flavor profile, you give the cafe a taste—literally—of how an aromatic morning dish will not only work with their menu but help them stand out. That’s creative direction.

Daisy 🌼: Now that you've made me hungry, can you bring the metaphor back to generative A.I.?
Haha — of course! In short, A.I. tools help spark my imagination, help me work out which ideas are best in a timely and efficient manner, and help me give clients an even more accurate sense of what they’re agreeing to. It doesn’t replace my creativity or creative output; in fact, it pushes me to create better, imagine higher, and dream bigger. 

Daisy 🌼: That's amazing. What is your favorite generative A.I. tool?
I typically play with text-to-image generators. Here’s how one of those generators works: you type in a text prompt describing what you want to see. You can be as vague or as detailed as you’d like. Within seconds, the generator “generates” a visual based on your prompt, in any style you choose, from photographic to fantasy art to 3D model to cinematic — the list is endless. The words you choose in your prompt yield different results, so the more you work with the tool, the better the results and the more imaginative you can get. I use these tools to explore concepts for user interfaces, storyboards, and mood boards. But these tools are so versatile they can help with any visual component. DALL-E and Stable Diffusion are some A.I. tools I like, but Midjourney is my personal preference. It’s come a long way from its first versions, which were working through some gnarly issues.
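
For readers who want to peek under the hood, the same prompt-to-image flow Vanj describes can also be scripted against an open model like Stable Diffusion. The sketch below is a minimal, illustrative example using Hugging Face’s diffusers library; the checkpoint name, prompt, and settings are assumptions for demonstration, not a description of how Midjourney itself works.

```python
# A minimal text-to-image sketch using the open-source Stable Diffusion model
# via Hugging Face's diffusers library. The checkpoint name, prompt, and
# settings are illustrative assumptions, not how Midjourney works internally.
import torch
from diffusers import StableDiffusionPipeline

# Load a public Stable Diffusion checkpoint (weights download on first run).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed model ID; any SD checkpoint works
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA GPU; omit float16 and use "cpu" otherwise

# The prompt is plain English; more specific wording steers the result,
# which is the iteration loop described above.
prompt = "photorealistic, student, paying for college, excited, happy"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("student.png")
```

Changing a few words in the prompt (adding “cinematic” or “3D render,” for example) and re-running it is the whole iteration loop; that speed is what makes the “test kitchen” comparison apt.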

Daisy 🌼: What kinds of issues?
There were a couple of hurdles. The first was how it generated hands and teeth. The earlier version tended to capture not a whole body, or even complete body parts, but fragments of body parts, leading to some rather peculiar results: bulbous hands with extra fingers or even multiple rows of jagged teeth. The second and more serious issue was the lack of diversity in the generated images. The results routinely favored young, light-skinned, model-esque figures. As a designer, I am adamant about diverse representation. Thankfully, the updated version has taken commendable strides toward inclusivity, generating images that reflect a more diverse group of people.

Daisy 🌼: Can you give an example of these issues and how they get resolved?
Take a look for yourself. Compare the images from the older version (V4, on the left) to the latest version (V5.2, on the right). For both I used the same simple text prompt: "photorealistic, student, paying for college, excited, happy." Here are the results:

[Image: side-by-side comparison of Midjourney V4 and V5.2 results for the same prompt]

Daisy 🌼: That's a huge difference.
It really is! And not only does the new version go a long way toward resolving those issues, the overall image quality is vastly improved. Beyond problem-solving, V5.2 has some really cool new features. My favorite is the "zoom out" feature, which lets you add background to any image. Check out this Barbie-inspired prompt: “photorealistic image of a Barbie-style home.” From left to right, Midjourney added increasingly more detail and context to my “dream” world, all from a single prompt.

[Images: Midjourney “zoom out” sequence expanding the Barbie-style home scene]

Daisy 🌼: That's really cool. What other features or improvements would you like to see happen? 
I would love to see more intuitive drag-and-drop functionality and more image-refinement tools that let me make edits directly within Midjourney. As for entirely new features, I’d love to be able to create 3D images, video, animation, and immersive environments. That would be so fun!

Daisy 🌼: We've talked about the positives of Midjourney, but many people have concerns about generative A.I., especially when it comes to authenticity and ownership.
Absolutely, and this topic deserves its own separate focus. Each piece of software handles these questions differently. Midjourney-generated images, for example, are an amalgam of millions and millions of other images, so each image is technically an original. A good rule is to always check the licensing agreement, check with your clients before using Midjourney at all, and credit Midjourney if you use the images in any public way (videos, ads, social media, website visuals, etc.).
 
The rule Yes& follows is the rule of good digital citizenry: always verify images that illustrate something factual, and don’t share anything, text or image, you wouldn’t want the world to have access to. Which brings me to my final thought: creating in the digital world always means making sure you and your clients are protected. Create safely, whatever tools you use.

Daisy 🌼: Wow, so much great advice, Vanj. Thank you.
Thank you so much for inviting me, Daisy. It’s a brave new world out there. Happy imagining, everyone! 

 

 

 

Subscribe to The Ampersand newsletter for the next edition of Ask Daisy.

 

Yes& is the Washington, DC-based marketing agency that brings commercial, association, and government clients the unlimited power of “&” – using a full suite of branding, digital, event, marketing, public relations, and creative capabilities to deliver meaningful and measurable results.

Let’s talk about what the power of "&" can do for you.

Evangeline Weber
Digital Creative Lead, UX/UI