Three Fallacies in the “Age of ChatGPT”
[Image: The ‘Ethical Hierarchy of Needs’ pyramid by Aral Balkan and Laura Kalbag.]

After observing and participating in the public discourse since November 2022, when OpenAI launched its famous ChatGPT, I have come to recognize three recurrent fallacies in how the adoption of AI technologies is discussed. Here I elaborate briefly on each of them:

Fallacy 1: Artificial Intelligence is finally matching Human Intelligence.

I find the current discourse comparing AI to human intelligence absurd. As I see it, we are being sold a very fancy tool (i.e., software) that can “speak” (to be precise: it emits text in natural language that sounds like something a person might have written), and so people scream: “it must be human!”
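To make concrete how mundane that “speaking” is, here is a minimal sketch of text generation, assuming the Hugging Face transformers library and the small open gpt2 model as illustrative stand-ins for any large language model (my choices for the example, nothing specific to ChatGPT):

from transformers import pipeline

# Load a small language model; all it models is which token is likely to come next.
generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt with statistically plausible tokens.
result = generator("The meaning of life is", max_new_tokens=20)

# Fluent-sounding text comes out, but no understanding is involved.
print(result[0]["generated_text"])

The output reads fluently, and it is that fluency, not any inner understanding, that invites the leap to “it must be human!”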

Why is this problematic? The moment we attribute human characteristics (e.g., consciousness) to an object, we remove responsibility from ourselves as users and creators and shift it to the object itself, so we can “wash our hands” of any negative consequences. No matter how many people (or how much of the environment) the tool harms, no matter how absurd the uses that come up, it is never the fault of the “tool maker” or the “tool users”; it is the tool’s fault!

This is why BigTech likes that idea so much and pushes it actively into the public imagination, despite a wide community of independent scientists clearly stating that the cognitive abilities attributed to foundation models are enormously overstated. So, beware of the Eliza effect!

Fallacy 2: Artificial Intelligence can solve “all” our problems.

But perhaps worse than attributing consciousness to a tool is the Maslow’s hammer bias we are witnessing, often enough coming from people who do not have a clear or foundational understanding of how these models are built or how they operate. Maslow’s hammer, or the golden hammer, is a cognitive bias that involves over-reliance on a familiar tool: “if you have a hammer, you see everything as if it were a nail.”

So people at different levels in organisations (fueled by BigTech) are in a frenzy to use foundation models, generative algorithms, and platforms like ChatGPT for everything, ignoring the basic principle of matching problem, solution, and context.

But the fact is, there are different technologies and different socio-technical solutions that match different types of problems. We have known this throughout the entire history of humankind, and neither GenAI nor other forms of AI can solve more than a small part of those problems.

So it is our responsibility to involve a diverse pool of problem solvers in finding the most adequate solution to each problem, rather than jumping blindly to AI to solve everything.

Fallacy 3: We must choose between regulation/ethics and progress/innovation.

My final fallacy is the so-called “false choice”: the claim that we must choose between regulation/ethics and progress/innovation. This framing is very convenient for BigTech, since it implies that the end always justifies the means. The argument that “regulation will hamper innovation” not only lacks solid evidence; quite the opposite, we are seeing more and more evidence that initiatives grounded in ethics and participatory design are more successful (see how ProtonMail is positioning itself in the market while promoting and protecting journalists and democracies). Renowned figures like Dr. Rumman Chowdhury (who was recently consulted by the US House Committee on Science, Space, and Technology) have been vocal that regulation will not harm innovation but will in fact enable it. Here I quote her in a phrase that says it all:

“Brakes help you drive faster.”

A corollary of fallacy 3: We cannot stop progress (AI)

I am witnessing what is basically turning into a “race to the bottom”, where the consequences of failed implementations and negative side effects are seen as “a necessary evil” in the “name of progress”.

“We need to embrace AI; we cannot stop it.” I hear over and over all these disempowering phrases that cast us in the role of passive actors, stripping us of any form of agency.

Disempowering discourse is easy to use, and it needs no evidence because it is a self-reinforcing argument. It is also the most efficient way to impose a doctrine or establish an authoritarian regime, as we have also seen throughout the history of humankind.

When I hear that we “cannot stop AI”, I wonder: says who? Where is the proof? And anyway, what exactly do you mean by that?

Maybe (just maybe) what those who argue that we “cannot stop AI” are really trying to say is: “we cannot steer the development of AI toward the common good, but have to play by the rules dictated by them (BigTech)”.

And that is where we should take a firm stand. We have been able to steer technologies before: we have drawn up treaties and conventions for nuclear arms, established ethics boards for genetic research, written laws regulating its usage, and implemented and enforced privacy laws. So why is AI (including GenAI and all its flavours) any different from the other artifacts we have built before?

We see that the data behind AI models does not respect privacy laws or breaches intellectual property. We see AI models spreading misinformation and reinforcing damaging stereotypes. We see AI platforms disrupting democracies and harming vulnerable groups. We see the unsustainable financial and environmental costs we are incurring to create these models.

As a fellow data scientist said:

"We cannot burn forests (-or deplete natural resources) by developing AI, and then use AI to monitor the fires".

For us to really provide value with any technology, we need to make sure that the fundamental rights of individuals, the wellbeing of society, and planetary limits are respected and upheld.

In the same way as Maslow’s pyramid of needs, there is the ‘Ethical Hierarchy of Needs’ pyramid by Aral Balkan and Laura Kalbag (pictured above). Without these foundational values in place, we are just “putting lipstick on a pig”, and the benefits will always be overshadowed by the negative side effects.

The fallacies I have described only help us close our eyes and derail us from reaching the real, positive impact we all envision.

It is in our power to steer the development and usage of any technology to the benefit of our society and our planet. But we need to have these conversations and find solutions together.

If anyone happens to be in Arendal this week, I will be participating in a panel with an amazing lineup to discuss how to attain the positive impact we envision with AI. See you there! https://www.dhirubhai.net/events/b-rekraft-finansogai-erviforber7080553982108008448/about/
