Shaping the clay of AI
Emily Campbell
VP of Design | Advising on AI Product Design and Design Leadership | Reach out, let's chat!
To design great AI experiences you have to spend time in AI experiences. Play with it, explore it, shape it.
Welcome to the Shape of AI. In my last post, recruiter Adam Perlis shared the skills needed to become an AI designer. Today, I explore how to get comfortable working with AI to propel your AiUX skills and develop a critical perspective within the field.
Follow me and reach out here and on Twitter, or join the Slack Channel to chat about this topic. P.S. The Shape of AI is looking for sponsors to help keep our Slack channel free for open dialogue! Please reach out if you are interested.
The most common thing I hear from people getting started with AI is, “I’ve tried it, but I don’t really understand it.”
People struggle to accelerate up the learning curve. Sound familiar?
Perhaps you’ve played around with ChatGPT, or you’ve interacted with some feature in a tool you use–but you haven’t quite figured out where it fits in your workflow. Maybe it still feels like a gimmick.
There’s a difference between interacting with the technology and understanding it.
Now is the time to build that understanding.
Whether you are designing and building AI tools, or just preparing yourself for the inevitable moment when you’re asked to integrate AI at work, becoming comfortable with this technology today will make you more resilient going forward.
So what does a learning plan look like?
The best way to get familiar with Generative AI is to use it. However, it’s not sufficient to simply play around with it. You have to get your hands dirty. The more time you spend pushing on it, kneading it, building with it, and seeing it perform against real tasks, the faster you can wrap your head around its behaviors and limitations.
You have to shape the clay.
A roadmap to explore the shape of AI
1. Understand the fundamentals
I’ve found that courses and other traditional learning paths offer quickly diminishing returns, at least at first. Later, once I reached a certain level of proficiency, diving deeper into the details made a lot more sense and helped me get to the next level of competency.
You don’t have to be a developer to understand AI. It’s useful to go in with a basic understanding of what Generative AI is (and isn’t) and how it works.
Here are some sources I share to get people started:
2. Get comfortable interacting with base models
If you have only ever interacted with ChatGPT, then you are missing out on learning how different base models work, and how their differences impact their outputs.
Each model has its own training data and foundational (system) prompt. Two models made by the same company will behave differently, and models by different companies will also be distinct. The model you choose to perform a task or to serve as the platform for your product will have an outsized impact on the results. Take time to understand their distinctions.
Try to interact with multiple models each week, especially when you have a specific task in mind. I personally prefer GPT-4 for editing work, like reviewing a draft and poking holes in my logic. Claude 3 Opus is my preferred model for logical exploration and open thinking, and it’s the model I’ve set as the foundation for Perplexity as well.
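If you want to make those comparisons systematic, you can script them. Here is a minimal sketch using the OpenAI and Anthropic Python SDKs; treat the model names as placeholders, since they change often, and check each provider’s docs:

```python
# Minimal sketch: send the same prompt to two base models and compare answers.
# Assumes the `openai` and `anthropic` packages are installed and that
# OPENAI_API_KEY and ANTHROPIC_API_KEY are set in the environment.
from openai import OpenAI
from anthropic import Anthropic

PROMPT = "Review this claim and poke holes in its logic: 'All designers should learn AI.'"

gpt_reply = OpenAI().chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": PROMPT}],
)

claude_reply = Anthropic().messages.create(
    model="claude-3-opus-20240229",
    max_tokens=500,
    messages=[{"role": "user", "content": PROMPT}],
)

print("GPT-4:\n", gpt_reply.choices[0].message.content)
print("\nClaude 3 Opus:\n", claude_reply.content[0].text)
```

Running one prompt through both models side by side makes their differences in tone, structure, and reasoning much easier to see than switching between chat windows.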
3. Get comfortable interacting with AI tools
Start to use different tools that incorporate AI. You’ll get a feel for how AI UX Patterns appear in the wild, start to understand differences in product strategy, and develop a perspective for what a great AI experience feels like (and how products can fall short).
I approach this work as I would any other critical UX teardown.
Everyone building and designing in this space is learning on the fly to some extent. Developing a critical eye for what works and what doesn’t will help you make more informed decisions when building for others, or when using AI products in your work.
4. Play with tokens
Just like when communicating with people, AI will react differently depending on the words you choose. Every word we say to it (or any word it derives from something visual like an image) impacts its output.
We call these tokens: units of text, often words or pieces of words, that represent concepts the AI uses to form logical relationships and generate its response.
As you work with AI, you’ll start to see how your communication style affects the quality of the output. We can direct AI to show us how it’s interpreting our words, and use that information to improve how we speak to it.
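To see tokens for yourself, OpenAI publishes tiktoken, a small library that shows exactly how its models split text. A minimal sketch:

```python
# Minimal sketch: inspect how a model splits text into tokens.
# Assumes the `tiktoken` package is installed (pip install tiktoken).
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")

for text in ["Bear", "Panda", "Shape the clay of AI"]:
    token_ids = enc.encode(text)
    pieces = [enc.decode([t]) for t in token_ids]
    print(f"{text!r} -> {len(token_ids)} tokens: {pieces}")
```

You’ll notice tokens don’t map one-to-one onto words: longer or rarer words get split into several pieces.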
Text-based tokens
Conversational input allows us to direct the AI to tell us what tokens it used to form a response. The next time you’re using open text input, ask the AI to share its logic.
I’ve used this to debug ChatGPT and Claude conversations, and as a check in Notion tables and other open fields to understand how the AI is logically arriving at its response.
In products where it’s supported, I store custom prompts or instructions to have the AI tell me this automatically, which makes debugging more efficient as I fine-tune a prompt.
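As one illustration, here is a hypothetical instruction of that kind, written as an API call. The wording is mine, not a built-in feature of any product, and keep in mind the model’s self-report is an approximation of its process, not ground truth:

```python
# Minimal sketch: a stored instruction that asks the model to expose its
# interpretation alongside every answer. The wording is illustrative only.
from openai import OpenAI

client = OpenAI()

DEBUG_INSTRUCTION = (
    "After every response, add a section titled 'Interpretation' that lists "
    "the key words and phrases from my message that most influenced your "
    "answer, plus one sentence on how you read my intent."
)

reply = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": DEBUG_INSTRUCTION},
        {"role": "user", "content": "Suggest a name for a design newsletter about AI."},
    ],
)
print(reply.choices[0].message.content)
```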
Image-based tokens
The image generator Midjourney includes a function (the /describe command) that will reverse-engineer the tokens it “sees” in an image. This is imperfect, and the list will vary with each run, but it’s an effective way to get a rough blueprint of how the AI is logically interpreting words and reference images.
This is especially helpful for understanding how images can carry unconscious bias without us knowing, and impact the quality of the generation downstream. Each word we convey to the AI is a token that contains data we can’t see - relationships, presumptions, connections - and these impact the model’s output.
For example, the tokens “Bear” and “City” will produce an image of a grizzly bear in a western-looking downtown. If you replace bear with “Panda,” suddenly the location will change. The cultural signals embedded in Panda are strong enough to influence the generation without our direction.
It’s important that we know these unspoken words exist in all modalities, so we can design and use these tools and models in an inclusive and mindful way.
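Midjourney doesn’t expose a public API, but you can reproduce the Bear/Panda experiment with any image model you can script. A minimal sketch against OpenAI’s image endpoint, swapping a single token between runs:

```python
# Minimal sketch: change one token in an image prompt and compare the results.
# Uses OpenAI's image endpoint since Midjourney has no public API; the idea
# (cultural signals hidden inside a single word) is the same. Assumes the
# `openai` package and an OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

for animal in ["bear", "panda"]:
    result = client.images.generate(
        model="dall-e-3",
        prompt=f"A {animal} walking through a downtown city street",
        size="1024x1024",
        n=1,
    )
    print(animal, "->", result.data[0].url)
```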
5. Apply AI to regular tasks
Once you have started to wrap your head around how these models and tools work, start to put them under pressure.
Begin with tasks you perform regularly and for which you know there is a decent AI solution on the market.
It’s likely these tools will come easily to you, but that doesn’t mean they are without their limits or friction. Take note of how quickly you were able to get up to speed, how well the experience blends into your daily habits, and how accessible the tools are when you aren’t thinking about them.
What makes for a sticky, useful, and usable AI experience?
6. Apply AI to irregular tasks
Once you’ve become more familiar with how AI could be incorporated into your workflow, start to find uses that don’t come as naturally.
These might be tasks you don’t perform regularly, or more difficult tasks that you have built deep habits around solving.
Critically, AI might not help in all of these situations. You’ll start to find yourself asking, is AI really necessary here? You’ll get stuck. You might even get frustrated.
Force yourself to use AI for as many tasks as possible for several weeks. As a result, you’ll develop tenfold the empathy for users who are being bombarded by AI at every turn. Perhaps the products could be better designed–but you’ll also appreciate areas where product teams have assumed AI is a net good, when in fact it generates pain.
7. Explore advanced prompting techniques
At this point, you have a solid understanding of how the foundational models of AI work, a nuanced understanding of AiUX, and practical experience applying AI to actual jobs to be done.
Now, switch your thinking to how we build AI products. Start to explore prompt engineering.
There are many great resources available to you. Focus first on using prompts created by others to study their approach and the results they produce. Then, familiarize yourself with different prompting techniques and begin to generate your own.
Play with adding references and primary sources to your prompts. How does this change the result?
Develop your own prompt library and start to use it in your regular habit of exploring different AI products.
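A prompt library can live anywhere, even a plain document, but if you want it in code, here is a minimal sketch; the template names and wording are just illustrations:

```python
# Minimal sketch: a reusable prompt library of named templates with
# placeholders. Structure and names are illustrative, not a standard.
PROMPT_LIBRARY = {
    "critique": (
        "You are a blunt design critic. Review the following {artifact} "
        "and list three weaknesses, each with a suggested fix:\n\n{content}"
    ),
    "summarize_research": (
        "Summarize these interview notes into themes, with one verbatim "
        "quote per theme:\n\n{content}"
    ),
}

def build_prompt(name: str, **fields: str) -> str:
    """Fill a named template from the library with the given fields."""
    return PROMPT_LIBRARY[name].format(**fields)

print(build_prompt("critique", artifact="onboarding flow", content="..."))
```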
8. Have the AI prompt for you
Just as AI can tell you which tokens it’s using, AI has gotten very good at generating its own prompts.
Advanced techniques like RAG (retrieval-augmented generation) can still help you communicate your intent to AI. However, when it comes to crafting the actual input, AI might be better at this than you.
The reality is that this is happening behind the scenes whether we intend it to or not. The AI engine captures our input, translates it to tokens, and regenerates the instructions in its own language. (And some of these regenerations can get really strange).
Understanding how AI logically reworks your prompts can help you better understand how the technology works, and how to help users get the best outcomes out of their own inputs.
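You can try this “meta-prompting” loop yourself: ask the model to draft a stronger prompt from your rough request, then run the prompt it wrote. A minimal sketch, again using the OpenAI SDK:

```python
# Minimal sketch of meta-prompting: the model rewrites a rough request into
# a stronger prompt, then we run that model-authored prompt. Assumes the
# `openai` package and an OPENAI_API_KEY, as in the earlier examples.
from openai import OpenAI

client = OpenAI()
ROUGH_REQUEST = "make my landing page copy better"

# Step 1: have the model draft the prompt for us.
improved_prompt = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "Rewrite this rough request as a detailed, well-structured "
                   "prompt with a role, constraints, and an output format: "
                   + ROUGH_REQUEST,
    }],
).choices[0].message.content

# Step 2: run the prompt the model wrote.
result = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": improved_prompt}],
).choices[0].message.content

print(improved_prompt, "\n---\n", result)
```

Comparing the model-authored prompt to what you would have written yourself is a quick way to internalize what good prompts contain.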
So let off the gas a bit. Work with AI. Set its direction, orchestrate its output, but let it do what it does best: speak in the language of computers and handle mundane tasks.
Final thoughts
This final step should help you realize something important. The value of AI is not our control of it. Even when we think we are in control, we often aren’t.
Its inherent value is the ways it can teach us about information, make us more self-aware about what and how we share, and let us build experiences that center humans within technology.
The goal of learning AI is not to give away your job. Rather, it’s to understand our relationship to technology in new ways. When we let algorithms dictate what we see, and subconsciously control our behavior, we trade our autonomy for convenience.
AI lets us communicate our intent directly to the computer.
We work with the clay, we shape it with the wheel, but we are ultimately in charge.
With AI, we can orchestrate our own digital experience.
But first, we have to design it.
Bonus: Share your experience
Here’s the reality: you’re going to get stuck.
You will reach moments of frustration and confusion. The AI won’t respond the way you want it to. It will hallucinate. It will get lazy.
You will run into product experiences that don’t make sense.
You will get tired of AI.
All of this is ok and part of the learning experience. My best advice? Share it!
Learning AI is like trying to shoot an arrow at a moving target. By writing or talking about your experiences, you’ll benefit others, and it will help you work through the friction.
It will make you more aware of your experience. It will make you a better designer.
Share your experience so others can learn with you–and tag me! I would love to follow your journey.
– Emily
Wow, thank you so much for your wonderful article, Emily Campbell. Always looking forward to your updates, and learning a lot from you, one post at a time! Thank you!
Product Design Leader | Experience Design | Design Systems | Designing with AI | Ex-McKinsey Design
Useful tips here Emily.
AI Ethics Advisor @ The Realm IQ Team | Responsible AI adoption workshops
Great article on how to get started, and thank you for including the Global AI Policy and Law Tracker. Understanding privacy concerns should be an integral part of learning. For example, getting permission from users before enabling the “vision” feature to capture their faces is something I encourage, along with not feeding it others’ personally identifiable information (PII). As you stated, it can be useful as long as we take the time to understand the basics.
Product designer at We Create Problems | Growth design | SAAS
Thank you for sharing, Emily Campbell. I believe the biggest blocker users face in understanding AI is the shift from click- or touch-based interaction to more word- and reading-based interaction. I can see Perplexity trying to focus on this transition, but not many others are addressing it. As we saw with the transition from physical mobile keyboards to touch interfaces, eventually it will become normal.
Co-founder at Binkyness | Making pet rabbit care easier
Very helpful! For design tasks, I like using it for replacing “lorem ipsum” content in prototypes, or helping write error messages following a design system convention (e.g., Shopify Polaris).