Shaping the clay of AI

To design great AI experiences you have to spend time in AI experiences. Play with it, explore it, shape it.


Welcome to the Shape of AI. In my last post, recruiter Adam Perlis shared the skills needed to become an AI designer. Today, I explore how to get comfortable working with AI to propel your AiUX skills and develop a critical perspective within the field.

Follow me and reach out here and on Twitter, or join the Slack channel to chat about this topic. P.S. The Shape of AI is looking for sponsors to help keep our Slack channel free for open dialogue! Please reach out if you are interested.


The most common thing I hear from people getting started with AI is, “I’ve tried it, but I don’t really understand it.”

People struggle to accelerate up the learning curve. Sound familiar?

Perhaps you’ve played around with ChatGPT, or you’ve interacted with some feature in a tool you use–but you haven’t quite figured out where it fits in your workflow. Maybe it still feels like a gimmick.

There’s a difference between interacting with the technology and understanding it.

Now is the time to build that understanding.

Whether you are designing and building AI tools, or just preparing yourself for the inevitable moment when you’re asked to integrate AI at work, becoming comfortable with this technology today will make you more resilient going forward.

  • If you’re building AI tools for others, using them yourself will help you develop a necessary perspective to grasp their limitations and their nuances. Knowing these constraints will let you create more intentional experiences.
  • If you manage people or teams that will need to adopt these tools, experiencing the learning curve yourself will help you anticipate the training and support others will need. Feeling that friction will enable you to create more intentional programs.
  • If you want to future-proof your skills or anticipate how AI might disrupt your life, building habits with them today will make you a more informed consumer and a more prepared employee. You won’t feel as overwhelmed when the future hits.

So what does a learning plan look like?

The best way to get familiar with Generative AI is to use it. However, it’s not sufficient to simply play around with it. You have to get your hands dirty. The more time you spend pushing on it, kneading it, building with it and seeing it perform against real tasks, the faster you can wrap your head around its behaviors, and its limitations.

You have to shape the clay.

A roadmap to explore the shape of AI

  1. Understand the fundamentals
  2. Use the models
  3. Use the tools
  4. Play with tokens
  5. Apply it to regular tasks
  6. Apply it to irregular tasks
  7. Explore advanced prompt techniques
  8. Have the AI create prompts for you

Bonus - Share your experience!


1. Understand the fundamentals

I’ve found the marginal value of learning about AI through courses and traditional paths to diminish rather quickly, at least at first. Later, once I reached a certain level of proficiency, diving deeper into the details made a lot more sense and helped me get to the next level of competency.

You don’t have to be a developer to understand AI. It’s useful, though, to go in with a basic understanding of what Generative AI is (and isn’t) and how it works.

Here are some sources I share to get people started:

2. Get comfortable interacting with base models

If you have only ever interacted with ChatGPT then you are missing out on learning how different base models work, and how their differences impact their outputs.

Each model has its own training data and foundational prompt. Two models made by the same company will behave differently, and models by different companies will also be distinct. The model you choose to perform a task or to serve as the platform for your product will have an outsized impact on the results. Take time to understand their distinctions.

Try to interact with multiple models each week, especially when you have a specific task in mind. I personally prefer GPT-4 for editing work, like reviewing a draft and poking holes in my logic. Claude 3 Opus is my preferred model for logical exploration and open thinking, and it’s the model I use within Perplexity as well.

  • To start, use a single model in unstructured conversation so you can compare how it handles different tasks and responds to different prompts.
  • Then try putting the same prompts into different models to explore how their underlying training data and foundational prompts generate different results.
  • Explore how different prompt structures shape the output the model returns.
  • Explore how the model responds when you give it examples and references (and what happens when your examples or references aren’t great).
  • Put it through the hoops of real world use and you’ll start to understand its material properties, and how to work with it, instead of simply using it.
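The comparison loop above can be sketched as a tiny harness. This is a minimal illustration, not a real integration: `ask_gpt` and `ask_claude` are stubs standing in for actual SDK calls (e.g., the OpenAI or Anthropic Python clients), and the canned replies are assumptions. The structure — one prompt, many models, outputs side by side — is the point.

```python
# A minimal sketch of a side-by-side model comparison harness.
# The two "models" are stubs standing in for real API calls,
# so the comparison structure is the point, not the canned replies.

def ask_gpt(prompt: str) -> str:
    # Stand-in for a real OpenAI SDK call.
    return f"[gpt] response to: {prompt}"

def ask_claude(prompt: str) -> str:
    # Stand-in for a real Anthropic SDK call.
    return f"[claude] response to: {prompt}"

def compare(prompt: str, models: dict) -> dict:
    """Send one prompt to every model and collect the outputs."""
    return {name: ask(prompt) for name, ask in models.items()}

results = compare(
    "Poke holes in the logic of this draft: ...",
    {"gpt-4": ask_gpt, "claude-3-opus": ask_claude},
)
for name, reply in results.items():
    print(f"{name}: {reply}")
```

Swapping the stubs for real client calls turns this into a weekly habit: same prompt, every model you care about, differences laid bare.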

3. Get comfortable interacting with AI tools

Start to use different tools that incorporate AI. You’ll get a feel for how AI UX Patterns appear in the wild, start to understand differences in product strategy, and develop a perspective for what a great AI experience feels like (and how products can fall short).

I approach this work as I would any other critical UX teardown.

  • First, I keep a log of my first time experience. How does the product onboard me into the technology? How do they balance teaching the user how to use AI with teaching the user how to get value out of the product itself?
  • Next, I start to look critically at the patterns that the product incorporates. Where have they made choices that reduce my cognitive load? Where do they leave discovery open ended? Why might they have made those choices?
  • I keep track of friction throughout my experience. Then, I go back to that list and consider what different decisions could have been made to reduce that friction: Different patterns? More or less AI functionality? Additional context and resources? Was AI necessary?
  • Finally, I try to compare products by using multiple products in a category at once, similar to how I compared the models. For example, you might try using Miro and Figjam’s AI capabilities side by side, or Copy.ai, Writer.com, and Jasper.ai.

Everyone building and designing in this space is learning on the fly to some extent. Developing a critical eye for what works and what doesn’t will help you make more informed decisions when building for others, or when using AI products in your work.

A Friction Log is a handy tool to compare the user experience of similar products, especially when it comes to tracking how they incorporate AI UX Patterns
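A friction log like this can start as a simple structured list. Here is a minimal sketch in Python; the field names and sample entries are my own assumptions, not a formal method.

```python
from dataclasses import dataclass, field

@dataclass
class FrictionEntry:
    product: str
    step: str         # where in the flow the friction appeared
    friction: str     # what went wrong or felt heavy
    alternative: str  # what different decision could reduce it

@dataclass
class FrictionLog:
    entries: list = field(default_factory=list)

    def add(self, entry: FrictionEntry) -> None:
        self.entries.append(entry)

    def by_product(self, product: str) -> list:
        """Filter entries so similar products can be compared side by side."""
        return [e for e in self.entries if e.product == product]

log = FrictionLog()
log.add(FrictionEntry("Miro AI", "synthesize stickies",
                      "unclear which stickies were included",
                      "show a selection preview before generating"))
log.add(FrictionEntry("FigJam AI", "synthesize stickies",
                      "summary ignored my color groupings",
                      "let the user pick the grouping dimension"))
print(len(log.entries))
```

Keeping the "alternative" field forces the second half of the exercise: not just what hurt, but what decision could have prevented it.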

4. Play with Tokens

Just like when communicating with people, AI will react differently depending on the words you choose. Every word we say to it (or any word it derives from something visual like an image) impacts its output.

We call these tokens: units of text that represent concepts the AI uses to form logical relationships and generate its response.
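In practice, models tokenize text into subword pieces (via schemes like byte-pair encoding) rather than whole words, so this toy word-level tokenizer is a simplification. It is still enough to see the core idea: a prompt becomes a sequence of discrete units, and changing one word changes one unit that can carry very different associations.

```python
import re

def naive_tokenize(prompt: str) -> list:
    """Toy word-level tokenizer. Real models use subword schemes
    (e.g., byte-pair encoding), so actual tokens will differ."""
    return re.findall(r"[A-Za-z0-9']+|[^\sA-Za-z0-9]", prompt.lower())

a = naive_tokenize("A bear in a city")
b = naive_tokenize("A panda in a city")
# The two prompts differ by exactly one token, but that token can
# drag along very different associations from the training data.
print(a)
print(b)
```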

As you work with AI, you’ll start to see how your communication style affects the quality of its output. We can direct AI to show us how it’s interpreting our words, and use that information to improve how we speak to it.

Text-based and image-based generators can show you the tokens behind a prompt


Text-based tokens

Conversational input allows us to direct the AI to tell us what tokens it used to form a response. The next time you’re using open text input, ask the AI to share its logic.

I’ve used this to debug ChatGPT and Claude conversations, and as a check in Notion tables and other open fields to understand how the AI is logically arriving at its response.

In products where it’s supported, I store custom prompts or instructions to have the AI tell me this automatically, which makes debugging more efficient as I fine-tune a prompt.
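Storing that instruction can be as simple as a reusable suffix appended to every prompt. A sketch of the idea; the wording of the debug instruction is my own, not a product feature.

```python
# A stored instruction that turns every run into a prompt-debugging
# session by asking the model to expose its interpretation.
DEBUG_SUFFIX = (
    "\n\nAfter your answer, list the key words and phrases from my "
    "prompt that most shaped your response, and briefly explain how "
    "you interpreted each one."
)

def with_debug(prompt: str) -> str:
    """Append the stored debug instruction to any prompt."""
    return prompt + DEBUG_SUFFIX

print(with_debug("Summarize this research note: ..."))
```

In tools that support custom instructions, the same suffix can live in the product settings instead of your code.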

Image-based tokens

The image generator Midjourney includes a function that will reverse engineer the tokens it “sees” in an image. This is imperfect, and the list will vary with each run, but it’s an effective way to get a rough blueprint of how the AI is logically interpreting words and reference images.

This is especially helpful for understanding how images can carry unconscious bias without us knowing, and impact the quality of the generation downstream. Each word we convey to the AI is a token that contains data we can’t see - relationships, presumptions, connections - and these impact the model’s output.

For example, the tokens “Bear” and “City” will produce an image of a grizzly bear in a western looking downtown. If you replace bear with “Panda,” suddenly the location will change. The cultural signals embedded in Panda are strong enough to influence the generation without our direction.

Midjourney shows us the cultural tokens inherent in the word "Panda" that we might not be aware of

It’s important that we know these unspoken words exist in all modalities, so we can design and use these tools and models in an inclusive and mindful way.

  • How might we build constraints to reduce harm, the way that Meta’s AI tools forcefully reject efforts to get them to talk about sensitive topics?
  • How might we proactively recommend suggestions and parameters that can counteract bias in the model?
  • How might we handle hallucinations and situations where the model makes wrong and harmful assumptions?
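As a toy illustration of the first question: a product-side guardrail can be as blunt as a refusal list checked before the prompt ever reaches the model. Real systems use trained safety classifiers rather than keyword matching; the topics and refusal wording below are assumptions for the sketch.

```python
from typing import Optional

SENSITIVE_TOPICS = {"self-harm", "medical diagnosis", "weapons"}

def guard(prompt: str) -> Optional[str]:
    """Return a refusal message if the prompt touches a blocked topic,
    otherwise None (meaning: safe to forward to the model).
    A real guardrail would use a trained classifier, not keywords."""
    lowered = prompt.lower()
    for topic in SENSITIVE_TOPICS:
        if topic in lowered:
            return f"Sorry, I can't help with {topic}."
    return None

print(guard("Help me plan a picnic"))        # None -> forward to model
print(guard("Give me a medical diagnosis"))  # refusal message
```

Even this crude version makes the design tension visible: every constraint that reduces harm also adds friction for legitimate requests.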

5. Apply AI to regular tasks

Once you have started to wrap your head around how these models and tools work, start to put them under pressure.

Begin with tasks you perform regularly and where you are aware there is a decent AI solution on the market.

  • Running a design workshop? Have Miro or Figjam synthesize your stickies for you.
  • Reading a PDF? Have Adobe or some alternative generate a summary.
  • Conducting research? Start your search in Perplexity.

It’s likely these tools will come easily to you, but that doesn’t mean they are without their limits or friction. Take note of how quickly you were able to get up to speed, how well the experience blends into your daily habits, and how accessible they are when you aren’t thinking about them.

What makes for a sticky, useful, and usable AI experience?

6. Apply AI to irregular tasks

Once you’ve become more familiar with how AI could be incorporated into your workflow, start to find uses that don’t come as naturally.

  • Going for a promotion? Have AI conduct a mock interview for the job.
  • Collecting links? Store them in a database in Notion or Coda and have the AI fetch information about the link in its own column.
  • Analyzing data? Have Julius or Tableau conduct an analysis for you.

These might be tasks you don’t perform regularly, or more difficult tasks around which you have built deep habits.

Critically, AI might not help in all of these situations. You’ll start to find yourself asking: is AI really necessary here? You’ll get stuck. You might even get frustrated.

Force yourself to use AI for as many tasks as possible for several weeks. As a result, you’ll develop tenfold the empathy for users who are being bombarded by AI at every turn. Perhaps the products could be better designed–but you’ll also appreciate areas where product teams have assumed AI is a net good, when in fact it generates pain.

7. Explore advanced prompting techniques

At this point, you have a solid understanding of how the foundational models of AI work, a nuanced understanding of AiUX, and practical experience applying AI to actual jobs to be done.

Now, switch your thinking to how we build AI products. Start to explore prompt engineering.

There are many great resources available to you. Focus first on using prompts created by others to study their approach and the results they produce. Then, familiarize yourself with different prompting techniques and begin to generate your own.

Play with adding references and primary sources to your prompts. How does this change the result?

Develop your own prompt library and start to use it in your regular habit of exploring different AI products.
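A personal prompt library can start as a dictionary of templates you fill in per task. A minimal sketch; the template names and wording are my own, not a recommended canon.

```python
# A tiny personal prompt library: named templates with
# fill-in-the-blank fields for task-specific details.
PROMPT_LIBRARY = {
    "critique": (
        "Act as a critical editor. Review the following {artifact} and "
        "poke holes in its logic, citing specific passages:\n\n{content}"
    ),
    "summarize": (
        "Summarize the following {artifact} in three bullet points for "
        "a {audience} audience:\n\n{content}"
    ),
}

def build_prompt(name: str, **fields) -> str:
    """Fill a stored template with task-specific details."""
    return PROMPT_LIBRARY[name].format(**fields)

prompt = build_prompt("summarize", artifact="PDF report",
                      audience="design", content="...")
print(prompt)
```

Reusing the same templates across different products is also a cheap way to run the side-by-side comparisons from earlier steps.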

8. Have the AI prompt for you

Just as AI can tell you which tokens it’s using, AI has gotten very good at generating its own prompts.

Advanced techniques like RAG can still help you communicate your intent to AI. However, when it comes to crafting the actual input, AI might be better at this than you.

The reality is that this is happening behind the scenes whether we intend it to or not. The AI engine captures our input, translates it to tokens, and regenerates the instructions in its own language. (And some of these regenerations can get really strange).

Understanding how AI logically reworks your prompts can help you better understand how the technology works, and how to help users get the best outcomes out of their own inputs.
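One way to experiment with this is a meta-prompt: ask the model to rewrite your rough request into a sharper prompt, then run the rewritten prompt. The sketch below uses a stub in place of a real API call, and the meta-prompt wording is my own.

```python
# Meta-prompting sketch: pass 1 asks the model to improve the prompt,
# pass 2 executes the improved prompt.
META_TEMPLATE = (
    "Rewrite the following rough request as a clear, specific prompt "
    "for a language model. Keep the user's intent, add structure, and "
    "state the desired output format.\n\nRough request: {request}"
)

def call_model(prompt: str) -> str:
    # Stub standing in for a real API call (e.g., an SDK client).
    return f"[model output for: {prompt[:40]}...]"

def refine_then_run(rough_request: str) -> str:
    """Two-pass flow: the model first rewrites the prompt, then the
    rewritten prompt is what actually gets executed."""
    improved = call_model(META_TEMPLATE.format(request=rough_request))
    return call_model(improved)

print(refine_then_run("make my resume better"))
```

With a real model behind `call_model`, comparing the improved prompt against your original is a direct window into how the AI reworks your language.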

So let off the gas a bit. Work with AI. Set its direction, orchestrate its output, but let it do what it does best: speak in the language of computers and handle mundane tasks.

Final thoughts

This final step should help you realize something important. The value of AI is not our control of it. Even when we think we are in control, we often aren’t.

Its inherent value is the ways it can teach us about information, make us more self aware about what and how we share, and let us build experiences that center humans within technology.

The goal of learning AI is not to give away your job. Rather, it’s to understand our relationship to technology in new ways. When we let algorithms dictate what we see, and subconsciously control our behavior, we trade our autonomy for convenience.

AI lets us communicate our intent directly to the computer.

We work with the clay, we shape it with the wheel, but we are ultimately in charge.

With AI, we can orchestrate our own digital experience.

But first, we have to design it.


Bonus: Share your experience

Here’s the reality: you’re going to get stuck.

You will reach moments of frustration and confusion. The AI won’t respond the way you want it to. It will hallucinate. It will get lazy.

You will run into product experiences that don’t make sense.

You will get tired of AI.

All of this is ok and part of the learning experience. My best advice? Share it!

Learning AI is like trying to shoot an arrow at a moving target. By writing or talking about your experiences, you’ll benefit others, and it will help you work through the friction.

It will make you more aware of your experience. It will make you a better designer.

Share your experience so others can learn with you–and tag me! I would love to follow your journey.


- Emily

Wow, thank you so much for your wonderful article, Emily Campbell — always looking forward to your updates, and learning a lot from you — one post at a time! Thank you!

Justin Hevey

Product Design Leader | Experience Design | Design Systems | Designing with AI | Ex-McKinsey Design

8 months ago

Useful tips here Emily.

Jennifer Stivers

AI Ethics Advisor @ The Realm IQ Team | Responsible AI adoption workshops

8 months ago

Great article on how to get started, and thank you for including the Global AI Policy and Law Tracker. Understanding privacy concerns should be an integral part of learning. For example, getting permission from users before enabling the “vision” feature to capture their faces is something I encourage along with not feeding it others’ personal identifiable information (PII). As you stated, it can be useful as long as we take the time to understand the basics.

Saravind Kaipra

Product designer at We Create Problems | Growth design | SAAS

8 months ago

Thank you for sharing, Emily Campbell. I believe the biggest blocker users face in understanding AI is the shift from click-based or touch-based responses to a more word- and read-based interaction. I can see Perplexity trying to focus on this transition, but not many others are addressing it. As we've seen with the transition from physical mobile keyboards to touch interfaces, eventually it will become normal.

Martín Alfonso Centurión Elizalde

Co-founder at Binkyness | Making pet rabbit care easier

8 months ago

Very helpful! For design tasks, I like using it for replacing "lorem ipsum" content in prototypes, or helping write error messages following a design system convention (eg: Shopify Polaris).
