Understanding the lasting impact our designs leave behind

As a designer at heart, when I first learned about AI and deep machine learning a while ago, I started thinking critically about the technology. The first articles I remember reading (a view on AI that persists) were about how it would replace people's jobs, including designers'. Although relevant, I find that a weak angle on a phenomenon that demands far more critical scrutiny than concern for our design jobs alone. But before jumping to conclusions, I also wanted to find out for myself, and selected ChatGPT to start my research.

Being aware of the ways my online data can be collected, shared, and (mis)used, the first thing I noticed about ChatGPT was the unnecessary collection of personal data at registration (and later, at every login). Conscious of what data I was willing to share, I couldn't bring myself to register. If you, like many others I know, have no issue with that, why not take the test developed by Sitra, and read Mozilla's article and one of their tests.

Anyway, let's not focus only on the collection of users' data. A designer's investigative mind must dive deeper, and that's what I did (and am still doing). Connecting the dots and finding patterns is one of our abilities as designers, so I first wanted to find out how 'open source' these models really are. Many articles have been written about that, but in an interview (in de Volkskrant, in Dutch), language scientist Mark Dingemanse summarised it well. Together with colleagues from Radboud University, Andreas Liesenfeld and Alianda Lopez, he compared these programs and asked how open and transparent they really are:

"ChatGPT's infrastructure, from the moment OpenAI targeted the general public last November, was set up to harvest as much of our collective intelligence as possible, without then sharing it with the rest of the world. The amount of fresh data they can rake in with it is unparalleled. For example, by default, OpenAI saves the chats you have with ChatGPT. They also ask you to give thumbs up or down as feedback. That's very cleverly done."

You may think: what is she talking about if she hasn't used it? One of our team members uses it for her writing (she's one of the many on the positive side regarding AI), and I was able to use her login details to try it out. I tested it, like she did. To be honest, I believe the quality of both her writing and mine decreased because of it. ChatGPT has a recognisable way of writing that doesn't stand out. And yes, I know that depends on how, and how often, you prompt it, but in essence it only saves you time; it doesn't deliver better quality, at least not yet. In six months it will have improved (more usage and input results in better output), but what if that makes its users, the writers, less creative and critical?

To learn and get better, a lot of data needs to be collected. ChatGPT is not intelligent in itself; it depends on its input (and on usage, feedback, and so on). The same goes for services like Alexa and for AI-powered social media platforms such as Facebook and YouTube. I could talk about the (mis)use of their users' data as well, but I would like to focus on the back side of these platforms: the people doing the work behind the scenes for the users.

At the beginning of August, I read about this in The Guardian. Let me quote two paragraphs of this must-read article:

"Bots like ChatGPT are examples of large language models, a type of AI algorithm that teaches computers to learn by example. To teach Bard, Bing or ChatGPT to recognise prompts that would generate harmful materials, algorithms must be fed examples of hate speech, violence and sexual abuse. The work of feeding the algorithms examples is a growing business, and the data collection and labelling industry is expected to grow to over $14bn by 2030, according to GlobalData, a data analytics and consultancy firm.
Much of that labelling work is performed thousands of miles from Silicon Valley, in east Africa, India, the Philippines, and even by refugees living in Kenya's Dadaab and Lebanon's Shatila – camps with a large pool of multilingual workers who are willing to do the work for a fraction of the cost, said Srravya Chandhiramowuli, a researcher of data annotation at the University of London."
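
To give a sense of what that annotation work involves: each piece of text is read and tagged by a person, producing the labelled examples the safety filters learn from. A minimal sketch of what one such record might look like, in Python (the schema and field names are my own illustration, not any vendor's actual format):

# Hypothetical shape of one content-moderation training example.
# Schema and field names are invented for illustration; real datasets differ.
harmful_content_example = {
    "text": "<a piece of user-generated text>",
    "labels": ["hate_speech"],       # category assigned by a human annotator
    "annotator_id": "worker_0412",   # the person who read and tagged it
    "seconds_spent": 37,             # labelling is slow, human work
}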

In the interview I mentioned earlier, Mark Dingemanse explains:

"RLHF stands for Reinforcement Learning from Human Feedback. Models still require a lot of human power. RLHF makes chatbots work fluidly. They could spit out coherent text for much longer, but in interaction, they felt rigid. Humans are needed for exactly that aspect. They are presented with different answers from a chatbot each time, and then have to indicate which answer is the best. Labour-intensive manual labour, but extremely important. Compare it to the spout that pastry chefs use to squirt cream. Without the spout, it becomes a mess. With all models, it is not clear enough exactly what this part looks like."?

This reminds me of us designers and the many products and services we have been involved in, now and in the past. Not all of them were, or are, relevant, truly need-based or sustainable. With our work as designers we may have created more harm than good, and that harm may be hard to undo. If we want to do a better job now and in the future, let's reflect on our role, involvement and decisions. That is what I want to point out in this article.

With this in mind, let's talk about the popularity of AI. Many companies feel the urge 'not to fall behind' and therefore set up AI teams. Some are purely tech-oriented; some involve designers too. For the sake of staying ahead of the curve, they do what everyone (or at least their competitors) is doing. They copy, they follow. This reminded me of the following cartoon by Marketoonist / Tom Fishburne, shared on LinkedIn a month ago under the title "We're all aligned":

[Cartoon by Marketoonist / Tom Fishburne: "We're all aligned"]

But let's use a simple metaphor. Would you still drink milk if you knew that, to keep producing milk, a cow must give birth to a calf every year, and that the majority of those calves live no more than six months, in isolation? Knowing that helps you see the system behind a fresh glass of milk or a hot cappuccino. You may not mind, but it gives you the possibility to make different ethical choices. The same goes for the use of AI. What is your organisation doing? Are you involved, part of the debate? Do you feel accountable for the decisions you make, and do you understand the lasting impact our designs leave behind?

I don't have the answers, but after discussing the potential positive sides of AI with others, I believe this technology needs to be controlled and regulated. Not killed, but well-guided, as with a child taking its first steps. You don't let that child leave the house, cross the street, or go near a lake on its own. You stay with the child until you know it's safe.

And I am not the only one stating this. Scientists, experts in the industry itself, and many others are asking for regulation. Have a read here.

Will (commercial) organisations make these responsible choices? I have a hard time believing it. Take ING and money laundering, as just one example: they only started taking it more seriously, and hiring more experts, after the government told them to keep a better eye on it.

This is not about being ahead of the curve but about making ethical decisions. This is not about following a trend that may be created and influenced by the industry itself (did you know that fashion trends were only introduced in the 19th century as a way to sell more clothes, and that in the 16th and 17th centuries women wore their clothes until they were worn out or no longer fit?). This is about seeing the larger picture, connecting the dots and having a clear systemic view of what influences what.

In the same Dutch newspaper I mentioned before, I read the column 'With AI, there is a new Oppenheimer moment: flirting with self-destruction' by journalist Arie Elshout. It's a thought-provoking read (in Dutch), and I will rephrase a few of his thoughts:

"Oppenheimer's distress of conscience is ours. For it is as humanity that we have created the weapons that can end our civilisation. That realisation must torment many in the silence and darkness of the movie theatre. Moreover, the story is not finished. With artificial intelligence, AI in its English acronym, we are dead set on repeating it. [..] According to Oppenheimer director Christopher Nolan, AI researchers are already speaking of their 'Oppenheimer moment'. There are widespread calls for 'restraint', for agreeing on rules and restrictions. With nuclear weapons, this has succeeded so far; with AI, the same can be done."

You may disagree with Elshout (please read his column first, as he makes interesting points), but in this article here on LinkedIn, I'd like to share my worries as well. I am reading about designers and commercial design schools sharing tips and tricks on how to use AI. Here is just one example of the invitations I recently received from them: "AI isn't just a scary invention that might make us obsolete. It's a tool that can help us unlock our full potential as a designer. And it's especially useful if you are trying to do more strategic work as a designer."

I do believe we should start by discussing whether and how we should use it, and be involved in the debate about its (regulated) growth, usage, commercial exploitation, and so on. Designers have several capabilities and attitudes. Two of them are critical thinking (an investigative mindset) and ethical thinking: weighing multiple perspectives to make informed decisions, and responding to a problem fairly, justly and responsibly. Let's use those first, and wisely, before jumping on the popular AI bandwagon.

#servicedesign #strategicdesign #designforgood #ethicalthinking

—

PS. I am not pretending to be holier than the Pope. I also use the newest technology, have used Facebook in the past, and so on. And I see many positive outcomes of AI, such as here and here. But my point is that we need to be aware of our involvement in new technology, and of the positive influence we can have, so we can make conscious decisions before following a trend or phenomenon. Especially when profit and commercial interests are involved.

Additional reading:

https://www.youtube.com/watch?v=L2sQRrf1Cd8
