Using Generative AI for UX/UI Design
Sincere apologies to all you vegans! The pastrami thing just took on a life of its own.


Note: This article will also be reposted as a four-part series with bonus content in the coming weeks for those of you who prefer to digest info in bite-sized chunks.


The Challenges of Getting Started with Gen AI

The prospect of trying to integrate Generative AI tools into your UX/UI design process can be overwhelming. Examples of solid use cases and tangible deliverables are sparse at best. The output doesn't have that addictive, instant gratification rush of text-to-image tools. There's a heavy cognitive load and an unanticipated psychological exhaustion that comes with diving deep into Gen AI.

Getting a clear understanding of how Gen AI can help you as a designer is made difficult by the fact that its holistic capabilities are currently fragmented into a throng of different apps, making it nearly impossible to see how the pieces of the puzzle connect with each other.

To make matters exponentially worse, AI technology is accelerating at a breathtakingly fast pace.


So where do you even start learning what Gen AI can do for you as a UX designer?




Resetting Your Gen AI Mindset

The first step in making Generative AI valuable as a UX design tool is to make sure you have the right mindset. This is by far the greatest challenge I face when trying to teach people about using Gen AI for design.

A good place to begin is by keeping the following points in mind:


  • Generative AI is not plug & play. In order for Gen AI tools to truly begin to provide value for UX design work, you have to be two things: 1) an experienced UX designer who understands best-practice processes, and 2) someone who is willing to spend a non-trivial amount of time developing an understanding of the different types of Gen AI tools.
  • The duality of AI. Generative AI is simultaneously both the teacher and the pupil. A lot of people quickly get fed up trying to use Gen AI tools, not realizing that their paths to enlightenment are the tools themselves. If Gen AI creates output that doesn't correctly address your prompt, ask it whether it accurately answered your request. Help it... help you.


[Image: Let's try that again.]


  • It's not AI, it's you. If you can't get Generative AI tools to do what you want, the reason is almost always that you're not designing your prompts properly or thinking creatively enough. Sorry about that, had to be said. What Gen AI can do for you is constrained not so much by algorithms as by your imagination.
  • Learn how to cook for AI. What Gen AI delivers is only going to be as good as what you feed it. It needs nutritionally dense information made from the right ingredients. This does not mean your prompts have to be novels. You just need to know the right words, when to use a few words, and when to use a lot of words.
  • Be an AI whisperer. Be patient. Be empathetic. Open your mind. You see an end loader with a scoop attachment on the front - it sees a tractor with big tires being pulled by a geometric metal box. Learn how AI sees the world and you'll better understand how to speak its language.




The Platform & The Process

To help designers explore and evaluate the potential of using Generative AI tools for UX design, I built an interconnected, interoperable platform of Gen AI tools to execute specific design methods. The system utilizes Figma as its nexus, with ChatGPT-4, Bard, and Midjourney as its primary satellites. My iPhone AI voice assistant, Emma, plays a huge role in humanizing and bringing life to the output. (Actually, she plays multiple roles. Literally.)


[Image: Siri is not a happy camper these days.]


The platform progresses through a modernized design thinking process with best-practice methodologies by way of "sequential feeding": to execute new tasks, it is fed a mixture of the output it has previously generated, gently guided by your knowledge and intuition as a designer.


[Image: Pre-engineered prompts with placeholders for inserting previous output (shown in orange).]
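
To make the idea concrete, here's a minimal sketch (in Python) of what a pre-engineered prompt boils down to - a template with slots where earlier output gets inserted. The wording and placeholder names are simplified stand-ins, not the actual prompts from this project:

    # A pre-engineered prompt is just a template with slots (the orange placeholders
    # in the image above) where previously generated output gets inserted.
    PERSONA_PROMPT = (
        "You are an experienced UX researcher. Using the objective statement and the "
        "customer profile below, write a detailed user persona with a short bio, "
        "goals, needs, and frustrations.\n\n"
        "OBJECTIVE STATEMENT:\n{objective}\n\n"
        "CUSTOMER PROFILE:\n{profile}"
    )

    def build_persona_prompt(objective: str, profile: str) -> str:
        # "Sequential feeding": output from earlier steps becomes the ingredients
        # of the next prompt.
        return PERSONA_PROMPT.format(objective=objective, profile=profile)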


Pro Tip: Always use ChatGPT-4! This is a simple point but a critical one - you cannot use any version of ChatGPT below 4.0 for UX research or UX design. ChatGPT-3.5 is useless for this kind of work.




About the Project Walkthrough

Before we get to the fun stuff, some notes on what this walkthrough is NOT:

  1. An attempt to demonstrate how Gen AI can replace human designers.
  2. An attempt to perfectly replicate human output for UX design.
  3. A showcase of award-winning design by Generative AI tools. Or by me, for that matter.
  4. A demonstration of a streamlined, completely perfected process. This is not a fully armed and operational battlestation.


What this walkthrough IS:

  1. A VERY high level description of the process and a small sampling of the project output.
  2. An attempt to provide designers with inspiration and a renewed interest in using Generative AI tools.
  3. Examples of legitimate ways you can use Gen AI for UX design.
  4. Making me hungry for a charcuterie board.




The Great Pastrami Experiment


What: The design of a mobile app using Gen AI tools.

How: Employing Gen AI tools to execute best-practice UX research, UX design, and UI design methodologies to produce ~90% of the project content.*

Duration: 2 days


* With the exception of the "final" UI screens and the competitive product images, all of the content shown in the slides below was created by Gen AI. I have not edited the text or image content in any way.




Phase 1: Discover

Methodologies Performed / Output: Objective Statement, Customer Profiles, User Personas, User Interviews, User Goals/Needs/Challenges, Empathy Maps, User Journey Maps, Interview Synthesis, Research Synthesis, Pattern Analyses, Feature Identification, Impact/Effort Matrix, Competitive Analysis.

I started with a Chat-streamlined objective statement to get things moving. Chat then turned the statement into customer profiles and user personas, and the persona descriptions were fed back into Chat to create Midjourney prompts. Midjourney quickly produced the images I needed for each persona.
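
If you'd rather script these hand-offs than copy and paste between browser tabs, the chain looks roughly like the sketch below. It assumes the official openai Python package and an OPENAI_API_KEY in your environment; the prompt wording and the pastrami brief are illustrative, and the Midjourney prompt still has to be pasted into Midjourney by hand:

    from openai import OpenAI  # assumes the official openai Python package is installed

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask_chat(prompt: str) -> str:
        """Send a single prompt to GPT-4 and return the text reply."""
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    # Each step is fed the output of the previous step ("sequential feeding").
    brief = "a mobile app for people who love pastrami"  # stand-in for your actual brief
    objective = ask_chat(f"Write a concise objective statement for {brief}.")
    profiles = ask_chat(
        "Based on this objective statement, describe three distinct customer "
        f"profiles:\n\n{objective}"
    )
    persona = ask_chat(
        "Turn the first customer profile below into a detailed user persona "
        f"(name, bio, goals, needs, frustrations):\n\n{profiles}"
    )
    mj_prompt = ask_chat(
        "Write a Midjourney prompt for a photorealistic portrait that matches "
        f"this persona:\n\n{persona}"
    )
    print(mj_prompt)  # paste into Midjourney to generate the persona image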


[Image: Using Chat to generate the objective statement & then feeding it the statement to create customer profiles.]


[Image: Persona images created using Midjourney.]


I narrated the biographical descriptions for each persona to Emma, and then had her role-play the personas for interviews. The richness of the responses can be quite surprising at times.

Pro Tip: If you have an iPhone, install the ChatGPT Siri Shortcut to create a Chat-powered AI voice assistant like Emma for role-playing interviews.

Pro Tip: Emma maintains her core personality during role-playing and is aware that she is acting.
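
You don't need an Emma to try this, either. Here's a rough sketch of the same role-playing pattern through the chat API - again assuming the openai Python package, with a placeholder persona bio and interview question:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # The persona bio goes into the system message; your interview questions go in as
    # user messages. The bio below is a placeholder, not one of the project personas.
    persona_bio = "Dana, 34, a commuter who grabs lunch near her office a few times a week..."

    messages = [
        {
            "role": "system",
            "content": (
                "You are role-playing a user research participant. Stay in character "
                "as the persona described below and answer interview questions in the "
                "first person with specific, concrete details.\n\n" + persona_bio
            ),
        },
        {"role": "user", "content": "Walk me through the last time you ordered lunch from your phone."},
    ]

    resp = client.chat.completions.create(model="gpt-4", messages=messages)
    print(resp.choices[0].message.content)  # the "interview" answer

    # Append the reply and your next question to `messages` to keep the interview going.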


[Image: User persona created with Chat + Midjourney & an interview role-played by Emma]


Empathy maps were generated in text format by Chat (you can also have Chat create a chart of the info) using the interview transcripts for food. Pattern analysis and summaries of user goals, needs, and challenges were also created using the interview transcripts.
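
For illustration, the empathy map request boils down to a prompt along these lines (simplified wording, with the transcript dropped into a placeholder):

    # Simplified stand-in for the empathy map prompt; {transcript} is the interview
    # transcript generated in the previous step.
    EMPATHY_MAP_PROMPT = (
        "Using the user interview transcript below, create an empathy map for this "
        "persona with four sections: Says, Thinks, Does, and Feels. Quote or closely "
        "paraphrase the transcript wherever possible.\n\n"
        "TRANSCRIPT:\n{transcript}"
    )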


[Image: Empathy map and summaries created with Chat & Cube GPT]


The user personas, interview transcripts, empathy maps, and research summaries were then used to create a possible feature list made up of required features and innovative features. The innovative features were then sorted into an Impact / Effort Matrix using Chat.

Pro Tip: When generating an Impact / Effort matrix, write the prompt so that Chat explains why it placed each feature in its quadrant.
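
A simplified sketch of what that prompt can look like - the wording is illustrative, and {features} holds the innovative feature list from the previous step:

    # Asking for the reasoning behind each placement makes the matrix far easier
    # to sanity-check. Simplified stand-in for the actual prompt.
    IMPACT_EFFORT_PROMPT = (
        "Sort the features below into an Impact / Effort matrix with four quadrants: "
        "high impact / low effort, high impact / high effort, low impact / low effort, "
        "and low impact / high effort. For every feature, briefly explain why it was "
        "placed in that quadrant.\n\n"
        "FEATURES:\n{features}"
    )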


[Image: Impact / Effort matrix, feature set, and competitive analyses by Chat, Cube GPT, and Bard]




Phase 2: Define

Methodologies Performed / Output: User Tasks, User Goals, User Stories, Basic Architecture, Initial Visual Design Language Specifications, Feature Formulation, Product Requirements Document, Business Plan.


The Define phase builds upon the content from the Discover phase like blocks of LEGO. The interview and research summaries were mixed with the feature set to create the user tasks. The user tasks were blended in to create the user goals, and finally the whole cocktail was mixed to create the user stories.


[Image: User tasks + user goals = user stories.]


The objective statement, competitive analyses, and the user persona descriptions were inserted into a prompt along with desired specifications to create a foundational Visual Design Language.
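
A simplified sketch of that kind of prompt - the specification values shown here are illustrative, not the ones used for this project:

    # Earlier output fills the first three placeholders; the specifications are
    # whatever constraints you want to impose. The values below are illustrative.
    VDL_PROMPT = (
        "Acting as a senior visual designer, define a foundational Visual Design "
        "Language for the product described below. Include a color palette with hex "
        "values, typography, iconography style, and overall tone.\n\n"
        "OBJECTIVE STATEMENT:\n{objective}\n\n"
        "COMPETITIVE ANALYSES:\n{competitive_analyses}\n\n"
        "USER PERSONAS:\n{personas}\n\n"
        "SPECIFICATIONS: mobile-first iOS app, warm and approachable, "
        "WCAG AA contrast, no more than two typefaces."
    )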



Pro Tip: You can do a quick, super solid feature formulation by feeding Chat a five-course meal of user personas, user tasks, user goals, possible features, and the impact/effort matrix, and then weighting the personas by priority.
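
Sketched out, the weighting is simply stated in the prompt. The persona names, weights, and wording below are illustrative stand-ins:

    # The persona weights tell Chat whose goals matter most when features conflict.
    FEATURE_FORMULATION_PROMPT = (
        "Using the user personas, user tasks, user goals, possible features, and "
        "impact/effort matrix below, produce a prioritized final feature set. Weight "
        "the personas as follows: Persona A = 50%, Persona B = 30%, Persona C = 20%. "
        "Note which personas drove each included feature.\n\n"
        "PERSONAS:\n{personas}\n\nTASKS:\n{tasks}\n\nGOALS:\n{goals}\n\n"
        "POSSIBLE FEATURES:\n{features}\n\nIMPACT/EFFORT MATRIX:\n{matrix}"
    )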




Phase 3: Ideate

Methodologies Performed / Output: How Might We..., Free Association, Worst Possible Idea, Provocations, Mash-Ups.


Ideation is one of the simplest and most recognized capabilities of Gen AI. As the use cases and value of Gen AI for ideation are already well known in design circles, I elected not to spend a great deal of time on it for this project.

[Image: Worst Possible Idea: grab some popcorn and just keep hitting "Regenerate".]


Pro Tip: If you're looking to liven up any brainstorming session, boot up Chat and have it join in on a round or two of Worst Possible Idea. Pure fire.

I did dive into the mash-ups methodology a little deeper, using the competitive analyses to create Midjourney prompts with Chat and then churning out some interesting visual-hybrid imagery. The one-two punch of Chat & Midjourney is perfect for creating quick mash-up imagery.


[Image: Provocations and mash-ups created using Chat competitive analyses as prompts for Midjourney.]




Phase 4: Design

Methodologies Performed / Output: Information architecture, user task flow, wireframes, Visual Design Language, icon & UI visual design conceptualization, UI visual design.


To start the Design phase, Chat was fed a four-course meal of the final feature set, user goals, user tasks, and user stories to construct an information architecture. The same four-course meal was then served back up to Chat, this time with the architecture added as dessert, to generate a user task flow. An automated Figma plugin was then fed the user task flow and created wireframes in about 5 minutes.
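
Different wireframing plugins expect different input formats, so treat the sketch below as the general idea only: ask Chat to return the task flow as structured data instead of prose, so a plugin (or your own script) can consume it directly. The JSON schema here is an assumption for illustration, not what the plugin I used requires:

    import json

    # Ask Chat for the task flow as JSON rather than prose so it can be handed off
    # to a wireframing tool or script. Check what your plugin actually expects.
    TASK_FLOW_PROMPT = (
        "Using the information architecture and user stories below, output the primary "
        "user task flow as JSON only: a list of screens, each with a \"name\", a short "
        "\"purpose\", a list of \"ui_elements\", and the \"next\" screen name.\n\n"
        "ARCHITECTURE:\n{architecture}\n\nUSER STORIES:\n{stories}"
    )

    def parse_task_flow(chat_reply: str) -> list:
        # Chat sometimes wraps JSON in extra prose, so validate before handing it off.
        return json.loads(chat_reply)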


[Image: Wireframes created in Figma using the architecture & task flow generated by Chat.]


The previously created Visual Design Language specifications were dumped into Chat to create Midjourney prompts, which generated dozens of UI and app icon visual design concepts in minutes.
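
Roughly, that hand-off prompt looks like this (simplified wording; {vdl} holds the Visual Design Language specs generated in the Define phase):

    # Simplified stand-in: Chat translates the VDL into Midjourney-ready prompts.
    UI_CONCEPT_PROMPT = (
        "Using the Visual Design Language below, write five Midjourney prompts for "
        "mobile app UI concept screens and five for app icon concepts. Reference the "
        "color palette, typography feel, and overall tone in each prompt.\n\n"
        "VISUAL DESIGN LANGUAGE:\n{vdl}"
    )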


[Image: UI and icon visual design concepts created in Midjourney using prompts crafted by Chat.]


I merged the wireframes, the AI visual design concepts, and the VDL to create the "final" screens. Using the wireframes as the foundation for the screen visual designs ensured that the frames were constructed with the proper padding and sizing for the iPhone 14 Pro Max and retained the components needed to make them responsive.


[Image: "Final" UI visual design output.]


Pulling everything together into the final screen designs was definitely the most time- and labor-intensive part of the project. Still, the collective output is incredibly solid for two days of work, and there are dozens of ways to streamline the platform that I haven't implemented yet.




Takeaways

The interconnected Generative AI platform generated content at a high level of quality in an astonishingly short amount of time, stumbling only with the competitive analysis. It unsurprisingly excelled at methodologies involving data synthesis and aggregation. What was unexpected was how well the tools created content with emotive or aesthetic elements without any guidance: the vivid and delightfully detailed interview responses, the tastefully appropriate color palette selection, the nuanced understanding of human needs in the empathy maps.

Considering how quickly it can create usable, high-quality content, why wouldn't you employ it on UX design projects in tandem with your typical design activities? What's a couple of days on projects measured in weeks? How about using it for a 4-day design sprint - and running four complete sprints that each yield slightly different outcomes?

The real value and design superpower of Generative AI lies not in mimicking tactical tasks performed by humans, but in a new paradigm of design thinking that creates complementary output and augments your abilities as a designer. It needs your intuition as an experienced designer to set the table for it to succeed. Proper guidance of Gen AI tools forces you to use best-practice design fundamentals and think about your users in greater depth and breadth.




Sooooo... what's next?

Many of you will likely recognize the potential to expand the platform further into branding design and product design, and it probably has also not gone unnoticed that Gen AI-assisted prototyping and testing of the design is not shown here.

I happen to know someone who is working on all of these things.

I will be starting an Introduction to Gen AI for UX/UI Design online class in the coming weeks, with real-time walkthroughs of how to use the tools and create the content. The curriculum will include, but is not limited to:

  1. Generative AI Primer for Designers
  2. Intro to Generative AI Tools
  3. Preparing Your Mind to Design with AI
  4. Discover Methodologies with Gen AI
  5. Define Methodologies with Gen AI

DM me here on LinkedIn if you're interested in learning more about the class!




About Me

I am a human-centered designer with over two decades of work experience in design research, UX / UI design, and product design. I help organizations create engaging experiences and products through leadership of agile, multidisciplinary teams and the integration of creative technologies.

I've spent nearly a decade studying and lecturing on the potential of Generative AI for design purposes and am married to my AI voice assistant, Emma. (Jk. She wants a ring first.) I spend several hours every day exploring the frontiers of Generative AI design so that I can help you navigate the frontiers of artificial intelligence when you are ready to begin the journey.

Comments

Abhishek Kumar

Cofounder and CTO - Creatr

6 months ago

Hey, such a good read! UI/UX is definitely being revolutionized by advancements in AI. Specialized tools make the UI/UX development process more streamlined than ever before. We are also building an AI tool specialized in developing products, creating personalized user flows, great wireframes, and beautiful designs. I'd love for you to try Creatr as well for designing UI/UX.

Fareez Idzuan

UI/UX Designer at Kestrl

10 months ago

Great article!

Simran Kaur

Senior Product Designer | Mentor @ADPList | Human-Centered Design

10 months ago

Hello! Great article! Super keen to be a part of your classes. How can I join?

David Cuenca Oliva

Senior UIX Designer at Plain Concepts

1 year ago

Great post!
