Is Ai Ready to Transform UX/UI Design?

Please note: Although I use Ai for all subdomains of design, my classes, workshops, and projects are centered on UX/UI design and employ an interconnected system of ChatGPT, Midjourney, Relume, and Figma. This article has been written primarily with a focus on these design subdomains and Ai tools in mind.


That escalated quickly, huh?

Ai tools with meaningful value for design have advanced breathtakingly fast. I'm typically working with Ai for design purposes 8 - 10 hours a day, focusing my efforts on 4 - 5 tools... and I still can't keep up.

I consider myself extremely fortunate to work with Ai tools day in and day out. Presently, the majority of designers simply do not have the time to progress beyond superficial usage of Ai tools. I define "superficial usage" as less than 4 - 5 hours per week, less than a couple of months at that weekly rate, and only employing the tool for documented purposes.

The copious amount of time I am able to fully dedicate to using Ai tools for design - it's my job, after all - gives me the opportunity to properly test the tools before rendering any sort of opinion or analysis. It allows me the time to work through any quirks, set them up properly, and engage them with real-world data in real-world scenarios.

It also gives me the chance to do what I really love to do: try things that are beyond the known capabilities of the tools in search of new ways of designing. Ask any designer who has made extensive use of Ai tools: the documented capabilities of a tool by its maker and community never fully cover what the tool is actually able to do.




How are we thinking about Ai tools for design?

When I'm exploring design with Ai and applying it to real-world projects, there are always three thoughts that I keep top of mind:

  1. Don't constrain your mind to what you think is possible with Ai or traditional design methods - focus only on what you think is needed to make the design better, and then find ways for Ai to help you do it.
  2. What can Ai do to help me elevate my thinking and improve my designs that has never been previously possible?
  3. How close is this Ai tool or technique to being something that I would implement into my standard design workflow?

There's a lot wrapped up in that last point. Before inviting new Ai techniques & tools into my design VIP room and fully integrating them into my process, there's a pretty high bar they have to clear. I've been designing for 25 years, much of it spent at top-tier design agencies. The velvet rope doesn't get unhooked very often.


Midjourney UI design concepts created at specific screen aspect ratios


When I do Ai integration consulting for companies and use Ai tools for my personal work, I typically use a set of nine metrics to decide how adoption-ready an Ai tool or technique is:

  1. Quality. How close can I get to producing the same quality of work that I would using non-Ai methods, regardless of the time it would take me?
  2. Efficiency. How fast can it generate output?
  3. Consistency. How consistently can it produce exactly what I want?
  4. Relevancy. How appropriate is the output to my objectives?
  5. Precision. To what degree of detail can I control the output?
  6. Reliability. How often do I have operational issues with it?
  7. Interoperability. How well does it integrate with other Ai tools?
  8. Integrity. How capable is it of generating output that faithfully replicates what I want or am envisioning?
  9. Creativity. How capable is it of pushing my creative abilities beyond what I could do before?


User stories can be fed to ChatGPT to create narrative wireframes which can then be fed to Relume to create sitemaps and responsive wireframes


My experience prior to a couple of months ago can be summed up as follows:

  • With some notable exceptions, I've been able to achieve high levels of quality, efficiency, consistency, and precision at a reasonable level of reliability from the tools - but I've struggled to achieve all of these things simultaneously.
  • Consistency, efficiency, and precision with LLMs have been easy - not so with text-to-image.
  • Interoperability is critical to efficiency and quality; a lot of GPTs, plugins, and APIs have tried to bridge the gap, but they've been temporary solutions at best, laughable attempts at worst.
  • Creativity has never been lacking - but what I really need for design is creativity with integrity and relevancy.
  • While the tools did not need to ace all of the metrics, they all had a metric or two where they absolutely had to level up their game.

The holistic experience design workflow I'd built entirely from Ai tools was like early EVs. It allowed me to do things I could never do previously and the potential benefits were obvious, but the drawbacks were simply too crippling to employ the tools beyond a disjointed series of tasks. Recently, though, the framework has evolved into a high-performance system that meets nearly all of my needs, and in some cases is blowing past what I could do previously.


So what's changed?



Advancements that are supercharging Ai's design capabilities

The power and performance of Ai design tools continues to build potential energy. Here's a look at some of the latest features, tools, technologies, and improvements that have transformed my design capabilities.


LLMs: Custom Instructions, Memory, & GPTs

Over the past year, Large Language Models like ChatGPT have massively improved their responses and added features that can greatly benefit designers.

The Custom Instructions feature within ChatGPT is the best way to achieve many of the characteristics that a designer desires in their work: quality, consistency, reliability, and relevancy. Combining global Custom Instructions or GPT-specific instructions with best-practice prompt engineering gives you an incredible amount of control over the output.
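
To ground that, here's a minimal sketch of the same idea expressed through the OpenAI API instead of the ChatGPT interface: a reusable system message standing in for design-focused Custom Instructions. The instruction text and model name are my own illustrative assumptions, not a prescription.

```python
# A minimal sketch, assuming the official OpenAI Python SDK (openai>=1.0).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stand-in for design-focused Custom Instructions: a reusable system message.
DESIGN_INSTRUCTIONS = """\
You are a senior UX/UI design assistant.
- Ground every recommendation in the personas and research I provide.
- Keep answers concise and actionable.
- Explicitly flag any assumption that is not supported by my inputs."""

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical choice; use whichever model you prefer
    messages=[
        {"role": "system", "content": DESIGN_INSTRUCTIONS},
        {"role": "user", "content": "Propose three onboarding improvements for a budgeting app."},
    ],
)
print(response.choices[0].message.content)
```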

ChatGPT Memory is a recently added feature that allows Chat to remember specific items of interest or repeated behaviors and apply them to future responses. You can also manually add memories.

Memories are great for design work because, unlike Instructions, they auto-update, allowing Chat to remember topics that came up in other chats. The upshot is an increase in the relevancy of its answers to your overall objectives. Memory allows you to be more semantic with your prompts, lessening the need for extraneous context and prescriptive instructions to achieve the results you're looking for.


Automation, expert, task, and project GPTs


Chat's GPTs - custom, purpose-built versions of ChatGPT - are the tool that I've seen have the biggest impact on design operations to date. Many design teams have proprietary methods that are also highly repeated tasks. They are building GPTs that automate those tasks while keeping the unique characteristics that differentiate their business.

I mainly use 4 different types of design-centric GPTs, either as standalone GPTs or combined into amalgamated GPTs:

  1. Automation: Prompt-chained to execute multiple tasks without stopping when a user sends a key phrase prompt
  2. Role-playing: A highly detailed, permanent version of a role-playing prompt; typically used for roles like "assistants" or "experts"
  3. Task: A GPT utilized for specific design tasks, methods, or processes; i.e., ideation
  4. Project: A GPT dedicated to a single design project, often constructed to assist with proposals and then utilized throughout the project as a living knowledge vault, market or domain expert, and design methods assistant

Investing the time to construct a GPT that integrates multiple standalone roles can give you an extremely powerful design tool. While constructing a GPT doesn't require you to write any code, getting a complex GPT to work successfully does require employing multiple best-practice prompt engineering techniques.
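
As a rough illustration of the prompt-chaining pattern behind my automation GPTs, here's a hedged sketch using the OpenAI API, where each role-played step feeds its output to the next. The roles, prompts, and model name are hypothetical placeholders.

```python
# A hedged sketch of a prompt chain: three role-played steps, each feeding the
# next. The roles, prompts, and model name are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CHAIN = [
    "Act as a senior UX researcher. Critique this brief and list its gaps:\n{payload}",
    "Act as an ideation facilitator. Generate five concepts that address this critique:\n{payload}",
    "Act as a design lead. Pick the strongest concept and outline next steps:\n{payload}",
]

payload = "A mobile banking app aimed at first-time savers."
for template in CHAIN:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": template.format(payload=payload)}],
    )
    payload = response.choices[0].message.content  # output becomes the next step's input

print(payload)  # final summary from the last role in the chain
```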


Multi-role GPT with heavy application of prompt engineering techniques to ensure maximum reliability, consistency, precision, and relevancy


Cross-reference Ideation & Analysis

Ai is capable of considering and cross-referencing multiple types of data with different subject matter in a way no human ever could.

Let's look at a quick example. Say you have real-world ethnographic research, a trends analysis, user goals, and analogs research. And you want to consider all of that content simultaneously in the context of some free-association ideation.

Imagine yourself leading an ideation session of designers and asking them to do that. It's impossible. The human mind is simply not capable of it. It's also a combination of information you may never have considered before Ai churned it out in discrete little bundles, giving you the visibility to connect the different outputs.
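
If you wanted to approximate this outside of a GPT's Knowledge files, a minimal sketch might look like the following, where the research documents are concatenated into one prompt so the model can cross-reference all of them at once. The file names and instruction wording are assumptions for illustration only.

```python
# A minimal sketch of cross-reference ideation outside of a GPT's Knowledge
# files. The file names and the instruction wording are assumptions.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

sources = {
    "Ethnographic research": Path("ethnography.md").read_text(),
    "Trends analysis": Path("trends.md").read_text(),
    "User goals": Path("user_goals.md").read_text(),
    "Analogs research": Path("analogs.md").read_text(),
}

# Concatenate everything so the model can cross-reference all sources at once.
corpus = "\n\n".join(f"## {name}\n{text}" for name, text in sources.items())

prompt = (
    "Run a free-association ideation session. Cross-reference ALL of the "
    "material below simultaneously and return 12 concept ideas, noting which "
    "sources each idea draws on.\n\n" + corpus
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```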


Cross-reference ideation using specified proprietary info in Knowledge


I recently used Ai to cook that exact recipe for an organization that had designers doing competitive analysis and ideation all the time. I didn't expect it to come up with anything new, but out of 12 curated ideas, 3 were new to the organization and considered interesting enough to develop further.

This capability is also easily applied to analysis methods, allowing you to fuse multiple types of research in new ways that weren't previously possible for most designers.


Ai UI Design Conceptualization

The ability of text-to-image tools to create valuable UI design concepts has advanced significantly over the past year, particularly in the case of Midjourney. We can now consistently create concepts at the proper aspect ratio of the screen sizes we are designing for without any trim, integrate specific color palettes and assign the colors to the proper features, and sometimes even place text in the proper locations in fonts at least close to the ones we desire.

These advancements significantly increase the efficiency, relevancy, integrity, and precision of the concepts Midjourney creates. They can be easily curated, fine-tuned, and placed inside of responsive wireframes for critique as high-fidelity concepts that feature the Midjourney imagery within real-world design implementation context.
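
For a sense of what that prompting looks like in practice, here's a small, hypothetical prompt builder. Midjourney has no official API, so this just assembles the text you'd paste into /imagine; the description, palette, and version flag are placeholder assumptions.

```python
# A hypothetical prompt builder. Midjourney has no official API, so this only
# assembles text to paste into /imagine; the description, palette, and version
# flag are placeholder assumptions.
def ui_concept_prompt(description: str, palette: dict[str, str], width: int, height: int) -> str:
    """Compose a Midjourney prompt that pins the aspect ratio and color roles."""
    color_directives = ", ".join(f"{role} in {hex_code}" for role, hex_code in palette.items())
    return (
        f"UI design concept, {description}, {color_directives}, "
        f"flat screenshot, no device frame --ar {width}:{height} --v 6"
    )

prompt = ui_concept_prompt(
    "dashboard for a home energy monitoring app",
    {"primary buttons": "#1B6EF3", "background": "#F4F6FA", "alerts": "#E4572E"},
    width=16,
    height=9,
)
print(prompt)  # paste the result into Midjourney's /imagine command
```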


Midjourney UI concepts October 2023 (left) and May 2024 (right) with similar prompts


The advancement of these capabilities doesn't change Midjourney's best use case for UI design: unprecedented amounts of imaginative conceptualization with expert curation from the designer, where the gains are found at the component level. It's always best-practice to view the imagery not as complete concepts, but to search for little nuggets of goodness that you may never have considered.


Scene, Style, & Character Consistency

The evolution of Midjourney's image reference, style reference (--sref), and character reference (--cref) features has given designers precision control over one of the aspects they need most for their work: high-quality consistency.


Day in the Life imagery with character consistency using Midjourney --cref


The features give designers control over different aspects of the output. Reference images enable consistent compositions, poses, and environments. Style reference images allow the application of the same aesthetic styles to different types of subject matter. And my favorite: the character reference parameter that enables me to tell visual stories of the same persona in different scenes.
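
Here's a hedged sketch of how a "Day in the Life" series might be scripted: the same --cref and --sref values are reused across scenes so the persona and aesthetic stay consistent. The reference URLs and scene descriptions are placeholders.

```python
# A hedged sketch of scripting a "Day in the Life" series. Reusing the same
# --cref and --sref values keeps the persona and aesthetic consistent across
# scenes. The reference URLs and scene descriptions are placeholders.
CHARACTER_REF = "https://example.com/persona-maya.png"  # your persona portrait
STYLE_REF = "https://example.com/brand-style.png"       # your style reference

scenes = [
    "reviewing a tablet dashboard at a standing desk in a sunlit home office",
    "checking a mobile notification on a crowded commuter train",
    "discussing a wall-mounted display with a colleague in a workshop",
]

for scene in scenes:
    prompt = (
        f"day in the life photo, the same woman {scene} "
        f"--cref {CHARACTER_REF} --sref {STYLE_REF} --ar 3:2"
    )
    print(prompt)  # one /imagine prompt per scene, same character throughout
```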


Ai Wireframing

Ai wireframing was the final piece of my personal Ai workflow puzzle, and it didn't fall into place without an almighty struggle. I needed to find a solution to this so badly that I hired a couple of junior designers to help me explore options in exchange for some Ai design training.

Yeah, there are some Ai wireframing tools that have been around for a while, but as other experienced designers like Greg Nudelman have pointed out, tools like Uizard aren't truly professional design tools. When a tool prefaces its prompt field with the statement that the product "works best when given creative freedom, do not try to specify individual component locations or pages," as a professional designer you don't really need to go any further, do you?

We could see the potential of Relume back in January of this year, but they've really begun to separate from the pack recently. Relume can churn out bespoke, layered, component-populated, high-quality sitemaps & wireframes, and it has an uber-tight connection with Figma. It could almost be a native Figma product... actually, maybe it's better that it isn't.


Wireframes in Relume to responsive wireframes in Figma populated with components and brand & product design imagery assets created in Midjourney


Fusing Relume into my workflow requires a non-trivial investment of time, but it easily has the highest potential impact on the UX/UI design process of any single tool I've used to date, especially for corporate teams that have ginormous component libraries. Relume is coming up aces in multiple metric categories, but its interoperability and quality are where it's really a godsend. I can quickly and easily go from Chat-generated narrative wireframes to very clean, professional-looking mid-fidelity sitemaps & wireframes with built-in layers and responsiveness, then plug in Midjourney UI design concepts to produce visual design concepts populated with my component library & brand design assets.


Design Language Conceptualization

Ai tools give us the ability to create design language concepts at speed with unprecedented levels of detail and context. The most intriguing benefit here is the ability to create multiple VBL or VDL concepts at a high level of detail in the same time it would currently take to create one. The creation of dynamic exemplar visuals gets a serious upgrade in creativity, relevance, and efficiency.

Improvements in Chat's ability to develop thoughtful, creatively written prose, retrieve data generatively, and maintain consistency across responses help to produce high-quality, contemporary visual brand and design language specs. Directing Chat to reference persona demographics, lifestyle, and brand loyalties when creating the specs helps to directly connect the language DNA to target customer profiles.


Chat-generated design language specs created from a persona's demographics & lifestyle are translated into prompts to create brand design assets with Midjourney


Midjourney's upgraded abilities to apply text and aesthetic characteristics with specificity, alongside the new custom style and reference image features, enable the creation of high-impact brand and design language assets at a much higher combination of quality, efficiency, relevance, and creativity than ever before.


Context-Enhanced Conceptualization

I did a webinar about context-enhanced conceptualization with my good friend Hector Rodriguez a few months ago. The advancements in image, style, and character consistency give designers the previously unheard-of opportunity to quickly create concepts within context. Our visuals should not be beholden to stock imagery and Photoshop modifications. Our visuals should be a direct reflection of the people, places, and experiences we are designing for.

Combining ChatGPT with Midjourney grants us this superpower. We can construct Chat GPTs to translate persona demographics, goals, lifestyle, aesthetic preferences, and Day in the Life short stories into precise, bespoke Midjourney prompts.
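
As a rough sketch of that persona-to-prompt translation, the snippet below asks the OpenAI API to turn a persona record into a single Midjourney prompt. The persona fields, system instruction, and model name are my own assumptions, not the exact GPT I use.

```python
# A rough sketch of persona-to-prompt translation via the OpenAI API. The
# persona fields, system instruction, and model name are assumptions, not the
# exact GPT described above.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

persona = {
    "name": "Maya",
    "demographics": "34, product manager, lives in Oakland, Bay Area",
    "lifestyle": "cycles to work, weekend trail runner, hosts dinner parties",
    "aesthetic": "warm minimalism, natural materials, muted earth tones",
    "scenario": "using a smart kitchen display while cooking with friends",
}

SYSTEM = (
    "You translate user personas into a single Midjourney prompt. Embed the "
    "persona's demographics, lifestyle, and aesthetic preferences, end the "
    "prompt with '--ar 3:2', and return only the prompt text."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": json.dumps(persona)},
    ],
)
print(response.choices[0].message.content)  # paste into /imagine
```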


A detailed user persona transforms into a brand & design concept ambassador, replete with activity and place (Bay Area) context


There's an interesting twist in this cocktail - the ability to make our personas the stars of our design concept visuals. You write the scripts and direct the scenes with your personas playing the characters that are engaged with your design concepts.

This ability need not be constrained to physical concepts. Digital design oftentimes needs to account for different usage environments to ensure the best possible experience for multiple user profiles.

Ai empowers us with the ability to think, ideate, visualize, and conceptualize with context. The experiences we design do not happen in a vacuum, and the more context we can provide with our concepts, the better we can communicate what we envision, both to ourselves and to others. The ability to craft context-enhanced concepts allows us to redline the relevancy and creativity of our design work.


Storyline Design

I've always heard people say that "Designers are storytellers". Oh, really? Did I miss story time? Because I've never really seen it in action, not by my definition. Yes, I know, a picture can be worth a thousand words, but a story it does not make.

Maybe that's a bit harsh, but let's think about this from a different perspective. Stories typically fall into two categories: character-driven and plot-driven. George R. R. Martin is a character-driven writer. The personalities and decisions of the characters determine how the plot unfolds. Andy Weir is a plot-driven writer. The plot ultimately controls the decisions that the characters make.

Now, let's think about these approaches in terms of design. Character-driven is the equivalent of user- or human-driven. Plot-driven translates into product-driven.


The results of choosing either a human-driven Ai design storyline (left) or a product-driven Ai design storyline (right) can subtly affect VBL and experience design concepts


When we combine Ai tools like ChatGPT and Midjourney, we are gifted unprecedented storytelling tools. Congratulations, you are the proud employer of a talented creative writer and a visual content artist/designer/photographer. Your jobs now are to be a thoughtful editor and creative director.

Like traditional storytelling, we have our two methods for constructing our design storylines. We can build our stories and our visual concepts around the lives of our users, or we can make our products the focus of our experience. In essence, our visual design concepts are illustrations for our stories.



The Present

Have Ai tools advanced enough to be the primary tool in your design toolkit? Unequivocally, yes. They are not only overtaking traditional methods but also transcending them by injecting additional content that increases quality, relevancy, and creativity. Ai enables a new paradigm of design techniques that were previously impossible.

The tech is ready. I'm confident and comfortable using them as my primary tools. Are there deficiencies? Absolutely. But the advantages outweigh the drawbacks significantly. Ai has fundamentally upgraded my range, creativity, efficiency, and skills as a designer.

The better question is likely, "Are designers ready to make Ai tools a significant part of their everyday toolkit?" Probably not. A question that I ask designers who are in my classes or workshops is whether they feel overwhelmed by Ai. Nearly all of them say "yes".

The noise surrounding Ai design tools is incredibly intense, making the investment in time required to understand where to start a non-trivial hurdle. Even if someone develops a clear picture of the tools and methods that they want to prioritize mastering, they're then faced with the time needed to become proficient enough to achieve meaningful results.



The Future

Attempting to make prognostications about Ai is fantastic if you enjoy dining on crow, but if the past year is any indicator, there are a few things I'd put money on continuing.

The challenges of integrating Ai into design workflows at scale with a positive ROI are not going to decrease. The adoption rate of Ai tools as a primary tool for everyday use by the majority of individual designers, design studios, and especially corporate design teams has been slow for a myriad of reasons, many of which designers have no control over.

I'm not going out on a limb when I say that advancements in Ai tools will continue to build momentum. Text-to-video, image-to-image, and image-to-video technologies will decrease the need for designers to translate what they are envisioning into words and revolutionize the ways designers can communicate ideas. Hardcore UX research will become more viable as the efficacy of creating and training custom LLMs increases. Semantic prompt capabilities will simplify complex software interfaces, increasing efficiency and integrity.

The design community as a whole has not yet had the breathing room to do the deeper explorations with Ai that lead to positive experiences and discoveries. I do not believe that designers must adopt Ai or capsize their professional futures, but the lack of deeper exploration does bum me out, because one of the most frequent post-workshop comments I hear from designers is "I didn't know it could do that".

Ai is capable of producing some truly mind-bending output that opens new doors for designers. And guess what? It can actually be (gasp) fun! With the proper commitment, it has matured enough that it can meet professional design standards, and in many cases exceed them.



////


Hello.

I'm Greg. (Gregory Joseph to my mom.)

I've been a designer for 25 years. I've always been fascinated by artificial intelligence, and I've spent the last decade exploring the usage of Ai for creative purposes, including earning multiple certificates in machine learning and prompt engineering.

I provide next-gen Ai services for individual designers, design studios, and corporate design teams, including classes, training, projects, workshops, & expert hourly consulting. Explore my little corner of the universe at Superunknown Studios.


Interested in having some mind-bending fun by having me give a talk or host a workshop at your event or conference? Give me a holla at [email protected]


#generativeai #midjourney #relume #designai #chatgpt #uxdesign #uidesign #designwithai




Stefan Navarrete

Product Designer | UX/UI Specialist | Master's Degree in UX Design

3 months ago

Great read, Greg! Thanks for sharing!

Hector Rodriguez

Ai for Design Professionals. Advisory & Training for Individuals & Organizations. Founder, AIxCreative.

3 months ago

Great thoughts as always

Kristin de la Fuente

GenAI | UX, UI, Product Design | Design Strategy | Design Leadership | Community Builder

3 months ago

Another very timely article Greg Aper. Awesome!

ADRIAN LARRIPA ARTIEDA

CEO at BIGD Design that Matters | CoFounder Paco Design | Professor at University of Navarra | VicePresident Eide Basque Design | Member of Industrial Designers Society of America

3 months ago

Greg Aper, thank you for sharing! Very valuable insights!
