Talking about AI and our responsibilities
Thanks to Cecilia for having this conversation with me


Below is a curated conversation that we, Cecilia Scolaro and I, Jeffrey, had about recent developments in AI technologies. Even while putting the conversation together over the past month and a half, much has changed, and continues to change. We welcome further discussion.

We hope what we’ve shared helps deepen your thoughtfulness about our responsibilities as users, builders, trainers, and caretakers of AI technologies.

Apologies in advance for any imprecisions. Thanks for taking a few minutes to read and reflect on what we wrote!


Jeffrey-

It seems like a lot of people lately are excited by developments in AI technologies. It feels like every week someone is saying to me, "What do you think about ChatGPT?"

"Have you heard about ChatGPT?"


Cecilia-

Yes, I think we’ve all heard about ChatGPT at this point.

ChatGPT, though, is only one of many AI technologies out there. In 2023, AI and machine learning are ubiquitous. They touch many more aspects of our lives than people realize: weather forecasting, banking transactions, automated customer service, and so much more.

More recently, releases of and updates to publicly accessible generative AI – like ChatGPT, HuggingChat, ImageBind and Midjourney – have given everybody access to what are turning out to be very powerful tools.


The exponential and seemingly uncontrollable development of these technologies has pushed many people in the field of AI to leave jobs in Big Tech, issue public warnings, or call for a total halt to the training of giant AI systems until better protocols can be established.

I find it really worrying that the people who have been building this tech are urging us to stop using it.


Jeffrey-

It can be quite concerning.

I think when anything feels unknown, especially when we sense that it will bring some great change to our lives, concern is a natural response.


In this case, the technologies are developing so fast, and globally we are already experiencing so many other changes that deserve our attention, that it's difficult to wrap our minds around what is happening.

What do you wish more people knew about generative AI tools such as ChatGPT?


Cecilia-

First of all, I would like people to understand what generative AI actually is, and to realize that it is not as intelligent as it might seem.

Generative AI is a prediction machine: it is very good at statistically guessing the next word (or whatever else it is generating) based on everything it has been trained to analyze. It simply replicates the most common patterns: not the best, not the most original, most definitely not the most ethical, just the most common. Generative AI tools replicate a version of whatever has been done enough times in the content their trainers feed them.
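This "prediction machine" idea can be sketched with a toy next-word model. This is a drastic simplification (a bigram counter, not a neural network, trained on an invented three-sentence corpus), but it shows the core behavior Cecilia describes: the model always emits the most common continuation it has seen.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word most often follows each word
# in a tiny corpus, then always emit the most common continuation.
corpus = (
    "the cat sat on the mat . "
    "the cat sat on the sofa . "
    "the dog sat on the mat ."
).split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most commonly seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("sat"))  # "on"  -- "sat" is always followed by "on"
print(predict_next("on"))   # "the" -- "on" is always followed by "the"
```

Real large language models work with learned probabilities over vast vocabularies rather than raw counts, but the principle is the same: the output is whatever the training data makes statistically most likely.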


So, if enough examples can be found of people doing something stupid – like jumping off a cliff – the generative AI would definitely jump off the cliff too. No questions asked.

What could go wrong? Of course, a generative AI cannot (yet) literally jump off a cliff, so I'm being metaphorical, and a little bit sarcastic.


J-

In other words, what you're saying is, when we understand how generative AI works, we can bring our visions and fears of it back down to earth (hopefully with more grace than if we jumped off a cliff). By understanding better what these new AI tools are actually designed to do, we can see that although they are powerful, they also have their limits.


For instance, you said that generative AI tools, when trained with enough data, are then able to imitate human behaviors. Of course, there are positive use cases. It seems there are also negative (perhaps unintended?) consequences… It would probably help if we talked about some specific examples.

Could you share any real examples of generative AI producing a negative consequence because of poor training or because of a limited data set?


C-

There are so many examples, unfortunately, that it is difficult to choose: spreading misinformation, discrimination in the assignment of resources, portrayal of stereotypes, just to name a few.


As mentioned before, AI is already everywhere, and it is already affecting people's lives in ways we don't fully understand. AI is being used to decide whether your loan will be approved, for example. And while it cannot use your race or ethnicity as a decision factor, the information on your passport or your mailing address may play against you in ways that are not easily explainable later on.
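The mechanism here is often called "proxy" bias, and a minimal sketch can make it concrete. The data below is entirely made up, and the "model" is just a per-postal-code approval rate learned from historical decisions, but it shows the point: even with race and ethnicity removed, a feature that correlates with them (here, postal code) reproduces the historical disparity.

```python
from collections import defaultdict

# Hypothetical applicants: (postal_code, income, historically_approved).
# The protected attribute has been dropped, but postal code correlates
# with it, so past discrimination is baked into the labels.
applicants = [
    ("10001", 52000, True),
    ("10001", 48000, True),
    ("10001", 45000, True),
    ("20002", 55000, False),
    ("20002", 60000, False),
    ("20002", 47000, False),
]

# Naive "model": approval rate learned per postal code from past decisions.
totals = defaultdict(lambda: [0, 0])  # postal_code -> [approved, seen]
for code, _, approved in applicants:
    totals[code][0] += approved
    totals[code][1] += 1

def predicted_approval(code):
    """Learned probability of approval for applicants from this area."""
    approved, seen = totals[code]
    return approved / seen

print(predicted_approval("10001"))  # 1.0 -- everyone from this area approved
print(predicted_approval("20002"))  # 0.0 -- no one, regardless of income
```

Notice that income, the factor that arguably should matter, plays no role in the learned pattern: the historical labels are what the model faithfully reproduces.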


In a more trivial example, when asked to name ten philosophers, ChatGPT has been shown to name only white men born in Europe. No women or non-Western philosophers are mentioned. Incidentally, these are the same results you will get by Googling 'who are the most important philosophers?'.

But Google Search won’t go ahead and use this information to write an essay or an article. The real danger with AI is how it can exponentially amplify a very problematic foundation.


Bias in data is not new, but with generative AI we are giving machines license to use biased data to generate more and more information, which then powers future decisions that are themselves inherently biased, and along the way we can forget where we started.
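This feedback loop can be sketched numerically. The numbers below are purely illustrative assumptions (a 55/45 starting split and a 10% per-cycle amplification factor), but they show the dynamic Cecilia describes: when each generation of content is produced by a model that over-represents the most common pattern, and that output becomes the next round's training data, a mild imbalance compounds until the minority pattern vanishes.

```python
# Toy model of bias amplification through retraining on generated output.
# Assumption: the generator favors the majority pattern by 10% per cycle
# (an arbitrary, illustrative factor).
share_a = 0.55                      # 55% of the original data shows pattern A
history = [share_a]
for _ in range(10):
    share_a = min(1.0, share_a * 1.10)   # majority gets amplified each cycle
    history.append(round(share_a, 3))

print(history[0], history[-1])  # starts at 0.55, ends at 1.0
```

After a handful of cycles the minority pattern has disappeared from the data entirely, and nothing in the final dataset records that it was ever there, which is exactly the "we can forget where we started" problem.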


Let me ask you: How do you feel about these developments in the world of AI?


J-

I feel it's very important we become aware of the limits and potential negative consequences of generative AI. Thank you for tempering our enthusiasm.


Hiding the bias in the data that informs the algorithms that, later down the line, inform important decisions about social welfare makes our systems more unjust than they already are. If we don't realize that, the dangers and risks increase. When we are aware, as you are helping us to be, we are able to take better-informed actions.


I’m understanding that generative AI tools often hide the costs of training them. The oversimplification invites an uncritical use of the tools. If we knew the costs, we would not be fearful but we would be extremely cautious. Would you agree?


C-

Absolutely.

And we could be more aware of the role we are playing as 'users'. In fact, I don't think this label is even accurate anymore. We are not simply using, or consuming, technological tools. With AI, we become 'trainers'.


In most cases, the questions or content we input into an AI tool will be used to train the machine further. And many popular AIs have already scraped the internet for content and used it in their training: content produced by humans, including our petty fights in social media comment threads. AI is literally learning from us and our behaviors at any given moment.

What are we teaching this technology?

And how are these learnings going to be used to produce more harm, or more help?


I am not saying that the whole responsibility lies with end users. Engineers, entrepreneurs, and regulators arguably have more power to decide the overall direction of this technology. But we are all involved by now, whether we want to be or not.

So what shall we, the 'users', do about it?


J-

We should pay more attention to investigating how AI works, what it can truly change, and what it will not. Knowing its limitations, we are better served educating one another in communities of trust rather than attempting to extract profit margins and power advantages using the technology. That is something I believe after having this conversation.


I believe our responsibility as humans is to create a healthier future. Training our minds to recognize biases, educating ourselves on the needs of our local community, building trust with people different from us, deepening our capacities for kindness, creativity, and grief … these are to me (rather than using generative AI tools uncritically) better ways to invest our time and energy right now.

