Unlocking Creative Possibilities: Harness the Power of Generative AI for Brand Design

LK Senior Designer, Laura ángel, shares her thoughts on generative AI.

2023 has been the year of the AI explosion. We’ve seen new everything: text-to-image generators, video and image retouching tools, UX/UI builders, and even chatbots that seem to know what we want better than we do. It’s a lot to take in, especially when you feel like your job is being challenged. Here at Ludlow Kingsley, we spend a lot of time talking about AI, what it means for us as designers, and what it means for the industry as a whole. Here’s my take as a technology-loving senior brand designer.

Since last November, I’ve been playing around with text-to-image generators. At that time, I had only heard of one called Stable Diffusion. You could say it’s the gateway to all text-to-image models. It’s free, easy to use, and has no restrictions. So, I started with a simple prompt of the first thing that came to mind: a cat using a laptop.
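
If you want to try something like this yourself, here’s a minimal sketch of what running that prompt can look like with the open-source Hugging Face diffusers library in Python. The specific model checkpoint and settings below are just one reasonable setup, not necessarily the one I used:

```python
# A minimal text-to-image sketch using the Hugging Face diffusers library.
# The checkpoint and settings are assumptions, not the exact setup from this article.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a publicly available Stable Diffusion checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # runs much faster on a GPU

# The same kind of simple, first-thing-that-comes-to-mind prompt.
image = pipe(
    "a cat using a laptop",
    num_inference_steps=30,  # how many denoising steps to run
    guidance_scale=7.5,      # how strongly to follow the prompt
).images[0]
image.save("cat_with_laptop.png")
```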

ángel, Laura. "Cat with Laptop." 2022. Stable Diffusion.

I couldn’t believe it. Sure, it’s a bit wonky and unrealistic, but still better than anything I could do in Photoshop in just a couple of hours. The excitement you feel when your thoughts materialize so quickly is incomparable. As you can imagine, I couldn’t stop after this. I started trying different prompts: a cat surfing The Starry Night, the Mona Lisa in South Park style, a group of pups playing cards in a basement, a series of dogs wearing flower crowns on the cover of Vogue.

Can't stop, won't stop.

Needless to say, a new obsession was born. At this point I was just doing the obvious: typing whatever came to mind, mimicking artistic styles, creating funny scenarios. I hadn’t quite realized the enormous potential of these tools until I discovered Midjourney, and my perspective changed completely.

Midjourney, just like Stable Diffusion and DALL-E, is a text-to-image generator. These generators are built on diffusion models: they are trained on an enormous number of image–text pairs to predict and remove noise, and at generation time they start from pure random noise and gradually denoise it, guided by the prompt, until a new image emerges. Contrary to what many believe, diffusion models are not copying or modifying existing images; every image is created from scratch based on the patterns the model has learned.
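
If you’re curious what that “start from noise and refine” idea looks like in the abstract, here’s a purely illustrative Python skeleton. Every function in it is a stand-in rather than a real trained model or text encoder; it only shows the shape of the loop that diffusion models run at generation time:

```python
# Illustrative skeleton of a diffusion model's reverse (denoising) process.
# Everything here is a stand-in; no real model is involved.
import numpy as np

rng = np.random.default_rng(0)

def encode_text(prompt: str) -> np.ndarray:
    # Stand-in for a real text encoder (e.g. CLIP): turns the prompt into a
    # vector the model can condition on.
    return rng.standard_normal(8)

def predict_noise(x: np.ndarray, t: int, cond: np.ndarray) -> np.ndarray:
    # Stand-in for the trained neural network that, in a real model, predicts
    # the noise still present in x at timestep t, guided by the prompt.
    return 0.02 * x

def generate(prompt: str, steps: int = 50, size: int = 64) -> np.ndarray:
    cond = encode_text(prompt)
    x = rng.standard_normal((size, size, 3))   # start from pure random noise
    for t in reversed(range(steps)):           # walk the noise schedule backwards
        eps = predict_noise(x, t, cond)
        x = x - eps                            # remove a little predicted noise each step
    return x                                   # a real pipeline decodes this into pixels

image = generate("a cat using a laptop")
print(image.shape)  # (64, 64, 3)
```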

What sets Midjourney apart from other AI generators is the high fidelity and photorealism it is capable of producing. Here’s an example of the same “cat using a laptop” prompt (granted, this is using Midjourney’s latest version, V5.1).

ángel, Laura. "Slightly More Realistic Cat with Laptop." 2023. Midjourney.

I started making images every day. I even created my own AI Instagram account as a fun creative outlet for the ideas I conjured up (@wideopen.ai). The more time I spent creating images, the more I realized how hard it is to get the specific results you want. In a way, prompt crafting is similar to creative direction: you have an idea of what you want to create and just need some help with the output. Sure, a cat with a computer is easy, but when you want to convey emotions, thoughts, or concepts it starts becoming harder to put into words.

As a designer, I firmly believe AI shouldn’t replace the artistry of the work we do. But I did see the value of AI as a tool. Recently, at Ludlow Kingsley, we tried using AI to assist us in the initial phase of a branding project for a new candle company. Our client wanted to see two styles of illustrations incorporated into the packaging: one style that we could create in house, and another that just wasn’t in our toolbox. We discussed this with our client and decided that we could use AI to convey our vision for the style in the second concept. If the concept that utilized AI was chosen, we’d then help our client hire a professional illustrator to create the actual art they would use for their packaging.

The illustration needed to be botanical, including specific elements like oranges, grapefruits, herbs, flowers, and birds. Plus, it needed to fit the color palette and visual aesthetic we were going for. Here are a few examples of our creative process and what Midjourney thought we needed:


Disturbing, but we continue iterating.


Too busy and detailed, and not the right style. The illustration would work better in our mockup if it were contained in the center instead of bleeding to the edges.


The illustration is contained but the color palette and style are not there.


Much better composition and style. Still not quite there yet.


Getting close. Still a bit too busy and we don’t need a dark background.


Bingo.


It was perfect! We were able to use this illustration in the packaging mockup for our R1 Visual Identity presentation. It was really helpful for our client to see, but they ultimately chose the other direction (brand launching soon!). Even though our AI version wasn’t selected, we loved that we were able to use Midjourney to help us present a concept without spending unnecessary resources at that stage of the project.

Scenarios like this are becoming more and more frequent: these tools speed up tasks that were once intricate and time-consuming, and help us conceptualize and brainstorm more quickly.

AI is a fantastic tool, but it needs a human point of view to create something meaningful. It requires specificity and a great ability to translate thoughts into concrete visual cues regarding aesthetics, mood, color, lighting, subjects, etc. In my case, I see huge potential in AI to assist the mood boarding process required for a branding project, help create a storyboard for an advertisement, or build out an art direction brief.
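
To make that concrete, here’s a small, hypothetical example of how a brief like our botanical one could be spelled out as an explicit prompt. None of this wording comes from our actual project; it just shows how naming the subject, style, color, lighting, and composition gives the tool something specific to work with:

```python
# Hypothetical example of translating a creative brief into an explicit prompt.
# The wording below is illustrative only, not the prompt we actually used.
brief = {
    "subject": "botanical illustration with oranges, grapefruit, herbs, flowers, and a small bird",
    "style": "delicate vintage engraving with fine linework",
    "color": "warm citrus tones and sage green on a light cream background",
    "lighting": "soft, even lighting with no harsh shadows",
    "composition": "centered composition with generous negative space, no dark background",
}

prompt = ", ".join(brief.values())
print(prompt)
# Feed the resulting string to your text-to-image tool of choice; the more
# concrete each cue is, the closer the output tends to land to the idea in your head.
```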

All in all, AI is both amazing and a little bit scary. At LK, we believe it’s our responsibility as creatives (and people!) to keep its use ethical and strategic, and always favor human input wherever possible. Have you ever used AI in a design presentation? How did it go? Comment below and let us know!


Juan Gonzalez

Graphic Designer

1y

Awesome work and insight, Laura ángel! I recently explored AI and found myself in the same spot, cracking my brain trying to write a command clear enough that the AI would have enough information to recreate what I wanted. One simple rule to follow here is not to try to nail it from the beginning by being super specific; rather, start from the general and then narrow it down with each iteration until you get what you want. In one discussion I had with a friend about AI (before either of us had used it), we concluded that the general consensus and fear is that soon enough companies will no longer need graphic designers, and we joked that CEOs would be able to hire their inexperienced 18-year-old nephew to do all the AI-generated design for the company. But after trying the tool for myself, I’m not so sure that’s the case. The tool requires a level of expertise and conceptual thinking that a person with no training in design will find very hard to nail, not to mention the patience to keep iterating until the tool knows exactly what you are after. Anyway, some random thoughts about it. Kudos to you, friend, and keep up the good work!

Woodley B. Preucil, CFA

Senior Managing Director

1y

Ludlow Kingsley, thank you for sharing this insightful post. I found it to be very informative and thought-provoking.
