A content designer’s complete intro to using ChatGPT for UX copy
Annelie and ChatGPT, holding hands, partners in design. This is a real photo, FYI. Credit: Ben Davies-Romano.


By Annelie Tinworth and Ben Davies-Romano.


We know what you’re all thinking: “Not another AI article”, right?

Well, this one is a little bit different and a lot more tangible. Grab a quintuple shot of espresso; we hope there’ll be some insights and inspiration for you here.


As Content Designers supporting many teams in a huge company, we know that there are many people responsible for writing UX content beyond just us. And indeed, we know that a lot of you out there don’t have access to a writer or a content designer in the first place. But this doesn’t mean your words are any less important, or have any less of an impact on your product’s UX.


Our aim here is to empower you all in using GenAI, and more specifically ChatGPT, to get the words done, at least to a level that is good enough from a UX perspective.


It’s important to make something very clear here: AI cannot replace the entire skillset of a content designer. Some people are worried at the minute that this is going to happen, and while management in some companies may mistakenly think that this is the case, it absolutely is not. AI is limited, and always will be.


That being said, while it can’t produce reams of stunning and flawless microcopy, it can help you get “good enough” words. (That’s right – “good enough” is our standard when using AI, not “perfect”.)

Let’s kick things off with our main takeaway…


GenAI is a tool.

Obviously. We just want to re-emphasise this.

AI’s interpretation of “a hammer”...


There are many different types of GenAI tools for generating visuals, video, copy – and the list goes on. The point is, we’re increasingly seeing GenAI used by different people throughout the design process.


That’s great! We’re here to say: use GenAI. But use it thoughtfully.


As Content Designers (AKA word nerds), the most obvious use case we encounter is people using ChatGPT to generate copy. We also know perfectly well that one word can make or break the entire user experience.


And chances are, if you’re reading this, whether you’re a designer, writer, product manager, engineer or marketeer, you have an impact on the UX of the product you work on. All of us in UX want to influence what people think, feel and do when they use our digital products or services.


We don’t want them to feel frustrated or like their needs aren’t being met. So, the entire design, from how it works to how it looks and feels, needs to be spot on.


How, then, during your AI-boosted design process – without a writer – do you get the words for UX contexts done, and make sure that they are good enough?

Simple.

You think before you prompt, and check before you ship.


If you look in the mirror and say this out loud 3 times, we’ll appear and give you a ChatGPT tip and a cupcake.


Let’s explore each of these stages in more detail. This encompasses:


A way of thinking. What fundamental UX work do you need to do before generating copy with ChatGPT?


Principles for prompting. How can you best use ChatGPT to get the result you want? Here you’ll get some solid copy and prompt examples to help you avoid some pitfalls.


Processes for checking. How do you know that the output you get is good enough? What should your post-generation QA include on a foundational level?


We’ll be focused mainly on generating UX copy for this article, but we believe that this approach can support you when using other GenAI tools as well.


Think.

If you didn’t read that title in Aretha Franklin’s voice, try again. Let’s briefly explain what we mean by think.


So, we refer to GenAI and ChatGPT as tools, like a hammer. A carpenter would use that hammer in a different way and for a different outcome than “the bad guy” in a scary movie. The hammer is simply a tool that helps you get a job done.


This is why we view ChatGPT and other AI tools not necessarily as artificial intelligence, per se, but rather as augmenting intelligence. They can augment the work we are already doing. (Check out this article for a more in-depth and thought-provoking take on this.)


Now, obviously, a hammer cannot think. We need people for that. And people, you need to know exactly what you want, before you use the hammer or the AI tool.


People are also needed to figure out the processes. When you know what you want, you need to ask for it in a structured way while you are using the tool, and then you need to check the output afterwards.


So, to be successful and efficient when using GenAI, and to get good enough output, you need people, processes and tools.

The white area outside the circle represents all your hopes, wishes, dreams and desires.


The Scope-Structure-Surface framework

For us over in Content Design, when we’re doing the thinking, and figuring out exactly what our users want, we follow a simple framework from Beth Dunn in her book “Cultivating Content Design”. (Check out this podcast interview with her for some absolute gems.)

Other stages considered but ultimately rejected for this framework: Size, Screaming, Silence, and Steve.


On the deepest level, Scope, we collaborate to understand and frame the problem we are trying to solve. What jobs to be done (JTBD) do our users have? Why are they hiring our product? What pains do we hope to solve?


On a Structure level, we investigate what the ideal order of information is to help users get that job done. How can we minimise the cognitive load for our users?


This is fundamental work we need to do before we can bring any words into the mix. And, of course, it’s not just the responsibility of a Content Designer. It’s a collaboration involving Product Designers, User Researchers, Product Managers and more. Each of us brings a different perspective to the table.


The Surface level is simply what the users see in the UI. Words. Colours. Icons. Images. White space.


Going deeper than the surface…

The output you get from ChatGPT represents part of that surface level. In itself, there is nothing wrong with that. This is how the tool works. It’s about patterns in language, no deeper thinking.


The danger to the UX rather arises if we all — the people — haven’t done that fundamental work of figuring out what we want before asking for it, i.e. all of that discovery on the scope and structure levels to guide us towards the right solution.


The thing is, AI won’t challenge you. It’s not going to give you a reminder that its output is only surface-level. It’s not going to ask for more context. It’s a machine. It won’t do anything unless prompted. Ultimately, that means that the output can only be as good as the skill set of the person using it.


What happens when we use AI as a shortcut to skip out on the necessary work on the scope and structure levels? Not to be overly dramatic, but we end up grossly oversimplifying the UX process and put our product at risk of not delivering on what it promises to help users get done.

The irony of Ben asking AI to generate an image of a possible dystopian future it could help shape.


This highlights a possible dystopian future we’re fast approaching if we continue looking to AI as a UX solution rather than a tool: AI telling AI what humans are like, based on linguistic and behavioural patterns and generalisations.


Of course, we are all aware that humans aren’t predictable, logical, or even rational, so this is a very surface-level approach to designing for real people.

A surface-level approach means that you’ll produce surface-level content that doesn’t serve user needs or wishes, and in turn, the whole user experience will be surface-level. It’s UX for statistical models, not for people.


So, with the thinking done, we know what we want and what the user needs. Now we’re ready to ask for it in a structured way.


Prompt.

Since the launch of ChatGPT, we’ve been experimenting with AI and incorporating it into our existing processes to augment and scale our approach to UX content.


So, we’re going to share with you some guiding principles that we believe will help you in utilising AI to help design good enough UX content.


Prompting principle #1: Size matters.

First principle: size does indeed matter. And of course, by size, we’re referring to the length of the prompts you write to generate output.


We’re going to tell you the story of Goldilocks and the three prompts. You see, in 2023, Goldilocks doesn’t want porridge. She wants to write a title for a screen as part of a checkout flow in which users need to confirm their card details to continue checking out. Obviously.


First, let’s try a short and sweet prompt for ChatGPT. We wrote:

Write a sentence telling users they need to verify their card to pay.


What’s the problem here? Well, for one, where’s the context? Where in the user journey does this appear? What kind of UI? Who will read it? What’s the character count? What’s the right tone of voice? We’re missing all those details that we’d learn about on a scope and structure level.

We get the following output:

Please verify your card details to proceed with the payment


Not only is this too wordy to be a title on a screen, but we’ve also got a bit of parroting going on. We used the phrase “verify their card” in our prompt, and the output contains that exact phrase, so it’s more of a rewrite of what we just wrote.


So, what if we try to include all context and details?


Bad things happen. To try it out, we included a company’s entire style guide in our prompt to describe exactly how it should be written. That may sound bananas, but we have absolutely seen individuals doing this!

The output we got was:

Let’s double-check your card details before you pay


It didn’t really have much of an impact on the output — not least because we’re giving it about 15,000 words to generate one sentence. The output is conversational, but that means it takes longer to read, which isn’t ideal for a title.


So too short? Not good. Too long? Even worse. What’s next for Goldilocks?

Goldilocks and the three problematic stakeholders, coming to a LinkedIn post near you in the near future.


Let’s introduce Goldilocks to the Swedish word “lagom”. It means just the right amount.


So, what is “lagom” for ChatGPT when it comes to UX content? It means you give just the right amount of context and background necessary to receive “good enough” output.


To help write a “lagom” prompt, we recommend the following simple framework (also from Beth Dunn). This covers the most important aspects of the content you want to generate, and what you will have learned during discovery on a scope and structure level.

  • Message: What is the meaning that you need to get across? What must the text communicate to the user?
  • Context: Where does the message appear on the screen and in the flow? What kind of UI element will it appear in? How may the user be feeling at this point of the journey?
  • Goal: What action can the user take?


We don’t write for people to read. We write for people to do. Here’s our prompt following this framework:

Message-Context-Goal framework in action.
Write a title for a screen informing users that they need to verify their card details before they pay.

We now get a very different output:

Confirm card for payment


The screen title is now clear, concise and actionable. It front-loads the user action — with an active verb — and the user can understand what to do to get their job done at a glance, without really having to think about it.


So, use the Message, Context, Goal approach both for giving your prompt a clear structure and for keeping the prompt at a “lagom” size.
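
If you’re generating copy at any kind of scale, the same “lagom” structure carries straight over to the API. Here’s a minimal sketch in Python, assuming the official openai package with an API key set in your environment; the model name and the 30-character limit are placeholders for illustration, not recommendations.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Message, context and goal folded into one "lagom" prompt.
prompt = (
    "Write a title for a screen informing users that they need to verify "
    "their card details before they pay. "          # message
    "The title sits at the top of a card-verification screen in a "
    "checkout flow; keep it under 30 characters. "  # context
    "The user's goal is to confirm their card details and continue "
    "checking out."                                 # goal
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)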


Prompting principle #2: Chain it, don’t stack it.

We like chaining, we don’t like stacking.


What exactly is chaining? Imagine we prompt ChatGPT to do the following:

Write a 2 sentence description of Klarna.

We like the output, but the tone isn’t quite right, so we continue and ask:

Make it sound friendlier.

Great! But it’s too long, so we follow up with:

Make it under 100 characters.


Here’s how it looks in ChatGPT:

Now make it sound like Stephen King wrote it.


We’re not mixing in a different idea in this conversation, like asking for a description of PayPal. That’s a different idea and a different conversation.


We kept to the original idea, which was to “describe Klarna”. When we refine the outputs, we keep referring to the previous output. So, we link our refinement prompts with each other in a conversational chain. Hence the name “chaining”. (Check out this piece for some more examples of chaining in action.)


But do we always want to chain? No. If we need copy for something completely different, we don’t want influence from our previous requests.


Imagine we are having a conversation together, and we ask you, “How are you?”, then ask “Can you give me a recipe for red velvet cake?”, and follow up immediately with “Could you write me a nihilistic reading of Kafka’s Metamorphosis with a focus on alienation as a theme?”


Firstly, you’d think we were a bit barmy. Secondly, you’d think we were rude. We’re not acknowledging your answers. In fact, we’re changing the topic completely.


It’s the same in ChatGPT. If we continue in that same conversation while changing topics, we end up stacking rather than chaining, i.e. forcing unrelated inputs and outputs together within one conversation and letting them influence the output. In that case, you should start a new conversation.

One idea, one conversation.
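
For the curious, here’s what chaining looks like against the API: refinements share one message history, while a new idea gets a fresh one. This is a minimal sketch assuming the openai Python package; the ask() helper and the model name are our own placeholders.

from openai import OpenAI

client = OpenAI()

def ask(history, prompt, model="gpt-4o-mini"):
    # Add the new prompt to the running conversation and return the reply.
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model=model, messages=history)
    content = reply.choices[0].message.content
    history.append({"role": "assistant", "content": content})
    return content

# One idea, one conversation: each refinement sees the previous output.
klarna_chat = []
ask(klarna_chat, "Write a 2 sentence description of Klarna.")
ask(klarna_chat, "Make it sound friendlier.")
print(ask(klarna_chat, "Make it under 100 characters."))

# A different idea gets a brand-new history, so nothing stacks.
paypal_chat = []
print(ask(paypal_chat, "Write a 2 sentence description of PayPal."))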


To give an example, a few months ago, Ben promised some friends a series of emails with product management tips.


Ben tried out ChatGPT to write these emails, prompting for each one with a list of the content and a request for a “friendly” and “energising” tone, all in one conversation.


The first few emails were great and indeed sounded friendly and energising, but as Ben continued, the emails started to sound, well, ridiculous. Why? Because the emails were unrelated to each other, yet each new request piled on top of the last: Ben was now amplifying the tone.

If ye arrrren’t speakin’ like a pirate mateyyy, arghhh ye even friendly. (ChatGPT wins this one.)


By the tenth email, Ben had asked ChatGPT to be friendly and energising not once, but ten times. And the result? Emails filled with pirate metaphors — Sail through the seas, seek the hidden treasure. We didn’t ask for pirate language, but this is what we got, because the tone isn’t remembered, it is stacked. And clearly, to sound mega-friendly, you should speak like a pirate.


To avoid this, we should have requested each email in a separate conversation.


Prompting principle #3: Be critical.

We’re going to alert you to a couple of possible deceptions and issues you need to be aware of when creating any kind of content with ChatGPT (or any GenAI tool).


Firstly, be aware of hallucinations! A hallucination is when you get something in your output that is nonsensical or unfaithful to the original prompt. The pirate tone was a good example. However, there are also hallucinations due to bias in the material used to train these tools.


For example, a lot of us use ChatGPT to write documentation. We asked ChatGPT to write us a press release about a CEO launching a new start-up, and we didn’t specify the gender of the CEO.


Perhaps predictably, the output assumed the CEO was male, using “he” and “him” to describe the CEO. We asked about this assumption, and ChatGPT apologised. It then regenerated the press release without any pronouns or references to the gender of the CEO.


We all know it’s highly biased and sexist to assume a CEO would be male, but the stereotype persists in society because, traditionally, most CEOs have been men. Our biases are reflected in these tools through the materials they’re trained on and can appear in the output.


To give another example, but this time from the world of image generation…

Midjourney lets you generate images from text prompts. We asked Midjourney to generate an image of a nurse 40 times, and predictably, all 40 nurses were female. Of course, the stereotype goes that nursing is a female job.

Not us critiquing Midjourney while using it copiously for all of Ben’s recent posts on Medium.


We were, however, a little surprised by the fact that these are all young, blond, slim, white women, with long hair. And none of them looked particularly tired.


In the UK and Sweden (i.e. where we’re from), stereotypes of nurses don’t align with all these characteristics. Our guess here is that this is a representation through the eyes of a stereotypical male (shout out to our fellow arts and humanities grads who know what the male gaze is). It’s an “idealised fantasy nurse”, which is not a shock when you consider where a lot of the training material comes from.


These tools may eventually be able to spot and avoid bias, but for now, they very much augment bias in society. When we’re writing prompts and checking output, this is something we need to account for, for example, by specifying “male nurse” or “nonbinary nurse” in your prompt or chaining to remove any potential bias. We recommend this article for a more in-depth look at tackling bias in AI.
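
One way to bake that check into your workflow is to chain a bias review onto the draft before you take it away. A minimal sketch, again assuming the openai Python package and a placeholder model name; the wording of the follow-up prompt is our own suggestion, so adapt it to the biases you need to watch for.

from openai import OpenAI

client = OpenAI()
history = [{"role": "user", "content":
            "Write a press release about a CEO launching a new start-up."}]
draft = client.chat.completions.create(model="gpt-4o-mini", messages=history)
history.append({"role": "assistant",
                "content": draft.choices[0].message.content})

# Chain a bias check onto the draft within the same conversation.
history.append({"role": "user", "content":
                "Review your draft for assumptions about gender, age or "
                "ethnicity that were not in my prompt, and rewrite it "
                "without them."})
checked = client.chat.completions.create(model="gpt-4o-mini", messages=history)
print(checked.choices[0].message.content)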


Prompting principle #4: Don’t roleplay.

ChatGPT can be very persuasive in its way of communicating, but it’s not a colleague, a friend, or a human with a specific skill set. It is a tool. Don’t ask it to be something else.


What do we mean by roleplay? With ChatGPT, we’ve seen that it is very easy to fall into the trap of our own bias and assumptions, particularly when it comes to what other people with different skill sets actually do.


As we’re not currently working as Product Managers, we asked ChatGPT to stand in. We prompted ChatGPT to “Act as a product manager and give us an ideal customer profile for a Gen Z online shopper that’s into fashion.”


The output we got was very generic and not especially useful.

Yeah... How am I gonna work with that?


For example, for location we got “Primarily urban areas, but also suburban and rural areas”… which basically means “everywhere and anywhere”, right? It’s generic to the point of uselessness — and it’s because that’s how ChatGPT works as a tool.


ChatGPT doesn’t understand the full skill set of a PM. It can’t apply the same type of thinking a PM would to a situation. And it certainly cannot carry out user research. After all, it’s a tool for generating text based on linguistic patterns, not a PM.


To confirm this, we tried the same prompt without the instruction to “Act as a PM”, and the output was identical. Sometimes with this instruction, there may be slight differences in tone, for example, if we ask it to “Act as a lawyer”, it might use more formal legal language, but the substance of it, i.e. the content produced, isn’t any different or better.


It’s an old lesson, but it applies here too: just because we want something to be easier to do doesn’t mean it can be done more easily.

ChatGPT trying to disguise itself as a real human being.


In the same vein, ChatGPT is not an analytics tool.


To give an example, we ran a simple survey and asked 65 people their favourite shoe colour. We counted up the results ourselves: green was the most common response with 21 people, while only 6 people said pink.


We pasted the full list of 65 responses into ChatGPT to see if it would give us the same result, and it didn’t. ChatGPT responded that red and green were the joint most popular, with 16 people having said each. The totals it reported were also lower than the 65 responses in the list.


We asked ChatGPT to be an analyst, and it failed because it’s a content generation tool, not an analytics tool. The problem was that it failed pretty convincingly, presenting the data in a nice clean table, and not giving any hint that this conclusion may not be accurate.
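
Counting, by contrast, is a job for deterministic code, not a language model. If you have the responses as a list, Python’s standard library does it reliably in a couple of lines; the sample responses below are made up for illustration.

from collections import Counter

# Made-up sample; in practice this would be your full list of 65 answers.
responses = ["green", "pink", "red", "green", "blue", "green"]

counts = Counter(responses)
print(counts.most_common())  # e.g. [('green', 3), ('red', 1), ...]
print(sum(counts.values()))  # sanity check: should equal len(responses)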

In short: don’t be tempted to ask a tool to replace a skill set, i.e. don’t ask it to roleplay.


Prompting TL;DR

Here’s a bite-sized summary covering all of our principles for you to reference:

  • Size matters. Aim for a “lagom” prompt with just enough context. Think: message, context, goal.
  • Chain it, don’t stack it. Refine one idea per conversation, and start a new conversation for each new idea.
  • Be critical. Watch out for hallucinations and for bias inherited from the training material.
  • Don’t roleplay. ChatGPT is a tool for generating text, not a stand-in for a colleague’s skill set.


Check.

The people have done the thinking. You know exactly what you want.


The tool has done its job. You asked for what you wanted in a very structured way, dodging some pitfalls along the way.


Now, how do you know that the output is good enough? This is where the processes come in. It’s time to do some checks.


What’s “good enough” in your case?

Just like you need to have a “definition of done” when working in agile teams, you need to have a definition of “good enough” when generating copy with ChatGPT.


There is no golden formula for this. It really depends on what success looks like for your screen, your flow, or your feature.


The first question you should ask yourself is:

Is “good enough” really enough for your specific situation? What are the trade-offs? Are there any risks? And is it worth it?


If the answer is “yes”, then apply the framework and principles we’ve shared with you today, and you are halfway there.

Get your magnifying glass out, friends, AI ain’t gonna do the human checking for you.


Basic post-prompt QA

Now it’s time to check that output, and we’ll share the bare minimum that this checking should involve. Obviously, depending on your team, context, and product, other steps may well be necessary, so please take this recommendation as a foundation for QA to build on before shipping.


Start by reflecting.

  • Is the output hitting the mark in terms of your message, context and goal?
  • Are the facts right?
  • What other critical thinking do you need to apply to avoid falling for ChatGPT’s persuasive tone?


Then, have it reviewed.

  • Ask for a second pair of eyes on your copy from a team member, a writer, a legal reviewer — whoever may be necessary. This helps you avoid blind spots.
  • If you’ve generated content in a foreign language, double-check with a native speaker, especially to make sure that it reads as authentic and idiomatic.


Last, but not least — spell-check it.

We have seen errors in ChatGPT output due to mistakes in the training materials. A simple typo or a strange sentence structure in your product can subconsciously start building distrust with people, just as a slightly misaligned button or a button that doesn’t work as expected would.


At the very least, paste your output into a Google or Word doc to use the built-in spelling and grammar checkers.
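
If you find yourself doing this often, the most mechanical checks can be automated. Here’s a minimal sketch in Python; it assumes the third-party pyspellchecker package, and both the basic_qa helper and the 30-character limit are illustrative placeholders, not a universal standard.

from spellchecker import SpellChecker  # pip install pyspellchecker

def basic_qa(copy, max_chars=30):
    # Return a list of issues; an empty list means these checks passed.
    issues = []
    if len(copy) > max_chars:
        issues.append(f"Too long: {len(copy)} chars (limit {max_chars})")
    words = [w.strip(".,!?").lower() for w in copy.split()]
    misspelled = SpellChecker().unknown(words)
    if misspelled:
        issues.append(f"Possible typos: {sorted(misspelled)}")
    return issues

print(basic_qa("Confirm card for payment"))  # [] means it passed

It won’t catch problems of tone, idiom or meaning, which is exactly why the human review steps above still matter.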

Prints out ChatGPT output, sits in an old church, and corrects it with two pens. Sure, let’s go with that.


To sum it all up…

Phew, that was a lot. We hope this has been insightful, and we’d love to hear some of the things you’ve learned too when it comes to using ChatGPT for UX content.


Whenever there’s new technology surrounded by hype, as we saw over a decade ago with machine translation in the localisation industry (a conversation for another day), it’s important not to overestimate the capabilities of said technology in its current state.


Yes, it’s going to evolve, indeed at some speed, but the aura of hype can push ideas for uses from the realistic into the realm of fantasy and legend. And while we certainly think good UX has a magical quality, it must always be firmly grounded in reality, not in the fantastical.


Our advice also comes with one big caveat: as things change, so too might this advice. We’ll certainly continue to share what we learn over time. And of course, check out different perspectives and sources for more inspiration, such as this intro to using AI for UX Writing from the UX Content Collective.

The main thing you really need to remember, however, will always be true: GenAI is a tool.


No matter if you work with code, colours or copy, GenAI needs you — the people — to direct it to be able to generate something:

  • useful...
  • maintainable…
  • scalable, and…
  • that meets the needs of our users.


So, remember:

Think. Prompt. Check.

Are we repeating this image? Yes. Is it the one thing we hope you’ll remember from this? Also yes.

