UX Hot Take on the OpenAI Keynote Announcements

After I listened to 45 minutes of Sam Altman’s keynote at the OpenAI conference on Nov. 6, one thing struck me: neither “UX” nor “usability” appeared anywhere among the almost 7,000 words of the keynote.

You can watch the keynote for yourself on YouTube.

The word “UI” was mentioned once, in the context of a demonstration of how to integrate AI features into a travel application that was being built on stage. The UI was indeed quite nice: it presented a map of Paris that updated dynamically to pinpoint tourist attractions as the user asked about them. Dynamic maps are nothing new; what’s interesting is that the AI took it upon itself to visualize the locations of the attractions without the programmer having to add this feature to the app. In many ways, this demo reminded me of Bill Gates’ demos at Comdex 30 years ago, where he would also show how “easy” it was to build cool features with whatever new tools he was hawking. It was never as easy for the developers back home.

It’s promising, but also scary, when AI is employed not just to answer users’ questions but to introduce features into an application that neither the designers nor the developers specified. In the demo, the AI-injected feature was useful and worked well, but one can also fear the consequences that rogue AI features will have for the user experience.

OpenAI was pushing a new type of custom-built bot, called a “GPT,” that you can train with specialized knowledge, for example, by uploading a corpus of your past content. This seems useful, but also potentially confusing, since there will likely be an overwhelming number of such bots in the bot store. I’m reminded of the mobile platforms’ app stores.

The GPT bot store (Midjourney).

An unconvincing demo showcased the use of a custom bot from Canva: first, the user asked the bot in natural language to design a poster for a certain event. This resulted in a range of options, similar to the current use of Midjourney. The user would pick a preferred design, which would then be transferred to Canva for further refinement with the normal Canva design tools. This disjointed user experience evidenced a stunning lack of integration and cannot be the road ahead.

Accessibility was mentioned in a discussion of ChatGPT’s ability to describe visuals. Blind users can use this to find out what’s in front of them, which is definitely a positive development. However, as a usability expert who has been involved in usability studies with disabled users, I’ll add that simply having a verbal description read aloud doesn’t provide the usability that people need when they can’t see. The most helpful description depends on the context: what the user is trying to do. A longwinded description that covers extensive irrelevant aspects of the scene will delay users and be more annoying than helpful. Knowing the context of use is critical for any good accessibility feature. I hope AI will learn to adjust its verbal descriptions to users’ needs, but this area needs work and won’t improve on its own.

My primary point has always been to treat disabled users as users first and foremost. The usability criteria are the same whether or not people can see: they must be able to accomplish their tasks quickly and easily. Converting visuals into speech is only helpful if the text is short and to the point relative to the user’s task.

The announcement of GPT-4 Turbo was greeted with enthusiastic applause when Altman mentioned that the different modes will now be integrated, so users won’t need the mode-switching menu to change between, say, generating images and working with text. While the dirty word “usability” wasn’t uttered in the presentation, this will surely be a usability improvement for the product. People who study the history of UX have known since Larry Tesler’s work to eradicate modes from the Apple Lisa around 1980 that UI modes are bad for usability.

ChatGPT generated this image for me to illustrate the launch of its own Turbo version. The conference audience did like it.

The more technical upgrades to GPT-4 will certainly have UX benefits. A bigger context window will allow us to work with more data without workarounds, which will be particularly helpful when analyzing qualitative user data. The lower prices for developers using GPT through the API will support the creation of a broad range of new innovative applications. Many will be bad, but some will be great, and experimentation will give us a new class of user experiences that we don’t envision today.

Sam Altman was quite aggressive in pushing AI agents as the next step. Agents will interact with the world rather than just answer questions about it, as ChatGPT does now. The demos were primitive and, as always with demos, worked perfectly. From my perspective, the broader the tasks we attempt with AI, the more we need both task analysis and good UX design to ensure a smooth workflow and user control.


An AI agent reaching out to the world, as envisioned by DALL-E.

About the Author

Jakob Nielsen, Ph.D., is a usability pioneer with 40 years of experience in UX. He founded the discount usability movement for fast and cheap iterative design, including heuristic evaluation and the 10 usability heuristics. He formulated the eponymous Jakob’s Law of the Internet User Experience. He was named “the king of usability” by Internet Magazine, “the guru of Web page usability” by The New York Times, and “the next best thing to a true time machine” by USA Today. Previously, Dr. Nielsen was a Sun Microsystems Distinguished Engineer and a Member of Research Staff at Bell Communications Research, the branch of Bell Labs owned by the Regional Bell Operating Companies. He is the author of 8 books, including the best-selling Designing Web Usability: The Practice of Simplicity (published in 22 languages), Usability Engineering (26,283 citations in Google Scholar), and the pioneering Hypertext and Hypermedia. Dr. Nielsen holds 79 United States patents, mainly on making the Internet easier to use. He received the Lifetime Achievement Award for Human–Computer Interaction Practice from ACM SIGCHI.

· Follow Jakob on LinkedIn.

· Subscribe to Jakob’s newsletter to get the full text of new articles emailed to you as soon as they are published.

Comments

Mauricio Angulo Sillas

Senior User Experience & Product Designer

10 months ago

Altman doesn't care about A11Y or UX, he only cares about the bottom line.

Peter Smyczek

Experience Lead at Publicis Sapient

11 months ago

haha, I love "Felt like a Bill Gates demo from the '90s". I would totally agree, I had a similar impression. But to be fair, that was their first "Apple-like" Keynote ;) Nevertheless their announcements were incredible!

Dinu Manns

Business success through human-centricity | CEO @ BLUPRNT | Visionary for the working world of tomorrow

11 months ago

Thanks for another great article! My question, though, is: what is the point of UX when we face a future where there is nothing left to actually use or experience? It is more about your AI bot sorting things out with mine. No UI, no product, no journey left to proceed. Or is it just my glass which is half empty now?

This is a great article and summary of where we are at with AI - thank you for your expert insights Mr Nielsen

Vikrant Kumar

User Experience Designer | Project Management | AI Enthusiast | Accessibility Advocate | Architect

11 months ago

Certainly, there is a prevailing practice in organizations where the UX team is either not involved or engaged at a later stage in AI discussions. This practice warrants a necessary shift.
