What can go wrong?
A still image from Cennydd Bowles' guest lecture at the University of Washington


For a while, I had wanted to write another article on design and AI, but I was reluctant to do so because it would look as if I were jumping on the clickbait AI bandwagon. But with developments moving quickly and the Artificial Intelligence Act coming into force yesterday, I’d like to follow up on the article I wrote in August 2023, ‘Understanding the lasting impact our designs leave behind’.

I know that AI is not an 'it' but rather a loose collection of diverse systems, interfaces and use cases. However, for the readability of this article, I will use AI as an umbrella term. I write this article as a layperson, as an observer, and as a designer.

Let me begin with a recording of last May’s guest lecture by Cennydd Bowles at the University of Washington. Cennydd is a technology ethicist and the author of ‘Future Ethics’. Together with Ariel Guersenzvaig, author of ‘The Goods of Design: Professional Ethics for Designers’, he leads the ‘Ethics in design’ training at our College. One of the topics covered in this training is, of course, AI. After joining the discussions during this training, I became more interested in the ethical aspects of AI. Watch this insightful recording here and then read on.

Governments are also slowly embracing AI. For instance, Canada, according to this article, can boast of being the first country in the world to publish a National AI strategy, which it launched in 2017. The Pan-Canadian Artificial Intelligence Strategy consists of three pillars: commercialisation, standards of practice, and talent and research. Canada is now in the process of developing an AI strategy to “accelerate responsible AI adoption by the government to enhance productivity, increase the government’s capacity for science and research, and deliver simpler and faster digital services for Canadians and businesses”.

So, like companies, governments are afraid of missing the boat. As stated in the AI Opportunities Action Plan by the UK Parliament: “Artificial intelligence has enormous potential to drive economic growth, through productivity improvements and technological innovation, and to stimulate more effective public service design and delivery. These are opportunities the United Kingdom cannot afford to miss and that is why AI, alongside other technologies, will support the delivery of our five national missions. Through targeted action this Government will support the growth of the AI sector, enable the safe adoption of AI across the economy and lead the way in deploying it responsibly in our public services to make them better.”

If I look at my own country, the Netherlands, this reminds me of the Dutch childcare benefits scandal: a political scandal concerning false allegations of fraud made by the Tax and Customs Administration while attempting to regulate the distribution of childcare benefits, which led to the collective resignation of the government in early 2021.

In 2022, Politico wrote an insightful article titled ‘Dutch scandal serves as a warning for Europe over risks of using algorithms’: “As governments around the world are turning to algorithms and AI to automate their systems, the Dutch scandal shows just how utterly devastating automated systems can be without the right safeguards. The European Union, which positions itself as the world’s leading tech regulator, is working on a bill that aims to curb algorithmic harms. But critics say the bill misses the mark and would fail to protect citizens from incidents such as what happened in the Netherlands. [...] As it is now, the AI Act will not protect citizens from similar dangers, said Dutch Green MEP Kim van Sparrentak, a member of the European Parliament’s AI Act negotiating team on the internal market committee.”

Designers and AI

Having said all this, let me now shift from the country-level perspective to the profession level and focus on design. Thousands of designers are challenged to critically reflect and take action regarding AI in their work and in the products and services they design. I intentionally mention these two sides of the coin: AI as part of the designer’s work process (for instance, to make the design process more efficient) and AI as part of the things being designed. For the former, think of AI-facilitated adaptive service blueprints; for the latter, think of a virtual reality nursing assistant.

Although they may seem unrelated, to me, as a designer, they are connected, as it all starts with the overarching question: “What is my opinion about AI in the first place?” If you accept using AI to make your work more productive or efficient, then by the same reasoning, you also may/will/should accept using AI in the products and services you offer your clients. It’s an informed decision that you make for everything you do, use and design.

In my article from a year ago, I mentioned an invitation I received for a well-known business design training on how to leverage AI tools as a designer. At that time, I was astonished, and today, I am even more concerned, especially after receiving invitations last month for (to mention just one) an “AI-Ready Service Design Framework”: a human-centred approach to using language, AI, and service design principles to create exceptional service experiences.

Responsible AI

You may think that being aware of risks, such as algorithmic bias, traceability, predictability and data privacy, allows you to start working with or designing AI-based services. You promise to prioritise care over efficiency, mindfully combine AI and human approaches to expedite the design process, and enable contestable and accountable decision-making. You convince yourself and your managers that leveraging AI can support this, especially when teams face challenges like time constraints and limited stakeholder access for research and co-creation. But aren't those just cop-out responses without critical reflection before jumping on the bandwagon? Will calling it ‘responsible AI’ make everything you do responsible?

As Cennydd states, it is easy to view AI in light of tech revolutions like the internet, smartphones and big data, and, as with any digital transition, there is inevitable tension between those who gladly adopt AI and those who don’t. But being in favour of or against it, or using it for good or evil, are not the only choices. You may also choose not to use it. Any technology can create problems, even in the hands of those who understand it, as Cameron Tonkinwise mentioned in one of his recent posts.

Banning AI

Last week, Ed Newton-Rex wrote in the Guardian: “The backlash, though, points out that we cannot ignore real harms today in order to take technological gambles on the future. This is why companies such as Nintendo have said they will not use generative AI. It is why users of Stack Overflow, a Q&A website for software engineers, rebelled en masse after the platform struck a deal to allow OpenAI to scrub its content to train its models: users deleted their posts or edited them to fill them with nonsense.” A growing number of companies are beginning to ban AI, and some experts say that this is not just a temporary trend.

Also worth sharing is Brian Merchant’s article, ‘AI is not "democratizing creativity." It's doing the opposite. Why Silicon Valley's favorite AI buzz phrase is so misleading and insulting’. He writes: “Few AI buzzphrases have stoked my anger as much as this one, given that AI companies, of course, are in fact doing something closer to the opposite—giving management a tool to attempt the automation of jobs and execs a chance to concentrate wealth while promising benefits for all. And it’s everywhere.”

Two weeks ago, I had an after-work drink and bite with a senior service design consultant. We discussed the role of AI, and he told me that many large corporations his consultancy works with are starting to use AI-empowered tools, particularly for design research. Although these tools are meant to support qualitative research and ethnography (or to serve as a starting point for it), they are now treated as the only source of research data. He told me that management (decision makers and budget owners) view AI as the holy grail and the ultimate truth. Combined with time pressure and this year’s limited design budgets, which you see everywhere, they rely exclusively on the data and outputs produced by AI-led tools. Remember, we already see AI applications that aim to fully replace research participants.

Dependency on AI

Wouldn’t you agree that with OpenAI’s announcement last Thursday of a prototype search engine called SearchGPT, which aims to give users "fast and timely answers with clear and relevant sources”, another step has been taken toward our dependency on AI, and toward systemic disinformation on the internet in general? And if you are already using AI as a source of information, are you blindly following the same path?

Additionally, I read about a Dutch entrepreneur who proudly announced that his “world’s first AI-generated newspaper has hit the market, generating and publishing news stories and articles entirely with AI, without any human editing.” And what about another news item in my mailbox, telling me that “new influencers are not only here to stay, but are growing in popularity and power. And hint: they are not human. AI-powered digital personas are influencing, messaging and sharing information. The insider scoop, and how to anticipate, leverage and be part of this trend - is yours for the taking.”

As Eryk Salvaggio states here on LinkedIn: “Accountability is not as challenging as AI companies would like us to believe. Flying a commercial airliner full of untested experimental fuel is negligence — and the rules asking airlines to tell us what’s in the fuel tank do not hamper innovation. Deploying models in the public sphere without any human oversight is negligence, too. Artificial Intelligence systems may be a black box, but the human decisions that go into building and deploying these unpredictable systems are crystal clear. Deploying and automating decisions through an unaccountable machine is a management and design decision — it is not a shield from the consequences of those decisions.”

Designers, including service designers, need to be an integral part of the conversation to define what responsible and ethical practices look like. Critically challenging or even saying ‘no’ is not a luxury but a necessity.


You may want to follow these people on this matter:

  • Cameron Tonkinwise, Professor of Design Studies at UTS
  • Dasha Simons, who recently started as a PhD candidate in AI ethics (I am looking forward to her research)
  • Cennydd Bowles, technology ethicist, interaction designer and Visiting Scholar at Elon University
  • Ariel Guersenzvaig, design and technology ethicist and Professor of Design at Elisava
  • Eryk Salvaggio, Writer, researcher & artist focused on understanding & mitigating harms of AI
  • (and if you have more names, feel free to share them in a comment!)

/ / /

Additional reading:

https://hbr.org/2024/06/research-using-ai-at-work-makes-us-lonelier-and-less-healthy

https://www.knowyourrightscamp.org/post/will-i-am-calls-for-regulation-of-artificial-intelligence

https://www.dhirubhai.net/pulse/recovering-lost-design-directions-cameron-tonkinwise-4vwac


Geert Christiaansen

Strategic Design | Business Innovation and Transformation | ESG Strategy and Implementation | Strategic Advisor

7 months

Thank you, Inge Keizer, for sharing your thoughts and this insightful talk by Cennydd Bowles (which is indeed worth the full 45 minutes!). It is crucial to ensure that AI systems are aligned with human values and have safeguards in place to prevent harm. Bowles discusses three important levers to steer what comes next: regulation, internal change, and public engagement. While all are significant, I'm particularly in favor of emphasizing public engagement. When considering value alignment, one common approach to mitigate risks is the "human-in-the-loop" system, where humans oversee AI operations. However, I would argue for an "AI-in-the-loop" approach instead. This means maintaining humans as the primary decision-makers, with AI serving as a tool to enhance our capabilities rather than replace our judgment. We have the opportunity to design these human-centric loops, and design services and user interfaces in a way that ensures everyone has a seat at the table.
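
To make that concrete: below is a minimal, hypothetical Python sketch of such an "AI-in-the-loop" flow, where the model may offer a suggestion but the human must make and own the final decision. All names (ReviewDecision, get_ai_suggestion, etc.) are illustrative, not an existing API.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ReviewDecision:
    outcome: str                  # the human's final call, e.g. "approve" / "reject"
    ai_suggestion: Optional[str]  # what the model proposed, kept for audit
    decided_by: str               # always a human identifier, never the model

def get_ai_suggestion(case: dict) -> Optional[str]:
    """Stand-in for a model call; it may abstain, and the loop still works."""
    risk = case.get("risk_score")
    if risk is None:
        return None  # no signal: the AI simply stays silent
    return "approve" if risk < 0.3 else "reject"

def decide(case: dict, reviewer: str) -> ReviewDecision:
    suggestion = get_ai_suggestion(case)
    # The suggestion is shown but never auto-applied: the human enters
    # their own outcome, which may contradict the model.
    print(f"AI suggests: {suggestion or 'no suggestion'}")
    outcome = input(f"{reviewer}, your decision (approve/reject): ").strip()
    return ReviewDecision(outcome=outcome, ai_suggestion=suggestion, decided_by=reviewer)

The point of this sketch is that the suggestion is optional and recorded, so decisions remain contestable and traceable to a person rather than to the model.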

Adam Haesler

Playing with software for 10+ years! Empathizing with users and using design thinking to create customer and employee experiences of delight for all!

7 months

Love your article, Inge Keizer! It says everything I have been thinking about and so much more. One big question your article raised in my mind: at what point of inclusion of AI in design thinking is it no longer design thinking, but just a synthesis of old ideas for a best guess at a new solution? I.e. skipping empathy, deciding on the right problem to solve, and ideation for the right problem. And more to the point, at what level of inclusion of AI outputs do we slip away from design thinking as a source of innovation and toward a source of copying what already exists? Yes, remixing old into new is a method for innovation, but if everyone does it, the lack of scarcity changes it from innovative to rudimentary. It could be a good thing to push us to find new innovation techniques, but as your article eloquently put it, we need to be asking why use it before we get started. Thank you!

Ed Axe

CEO, Axe Automation — Helping companies scale by automating and systematizing their operations with custom Automations, Scripts, and AI Models. Visit our website to learn more.

7 months

It's interesting how designers can shape the ethical use of AI. Their input really matters in this evolving landscape.
