Can we kill bad A.I. before it hurts education?
Adrien Bouillot
Founded Chalkboard Education, now Global Social Insights lead at Sanofi Consumer Healthcare
Chances are you have been wading through endless speculative takes on how artificial intelligence will change your work, your industry, or even your life. You might be excited about it, or indifferent; you might even be worried, or annoyed (I know I am).
I am specifically annoyed when A.I. tools like ChatGPT or Midjourney are presented as an unavoidable wave of change: almost sentient technologies that will revolutionize the world by themselves while we watch in admiration. No! They are not intelligent, and certainly not sentient. They will not revolutionize the world by themselves. Some humans, however, will very much revolutionize whole industries using them.
Big difference, and good news! Once we understand this, we realize we have a say in how technology is deployed and how it impacts our world. That is what this piece is about: let us not be passive.
What are A.I. and ChatGPT?
“Artificial Intelligence” is a very broad term. In essence, it covers any case where a machine behaves “intelligently”, that is, perceives and processes information to produce outputs that resemble human intelligence. It can be a computer speaking like a human, driving a car, or playing chess. It can be anything, although it is rarely that intelligent: autonomous vehicles are only just starting to drive correctly in the Bay Area, something most humans have long been able to do while also speaking, playing chess, and much more. In short: “A.I.” is good branding for machines getting less dumb, but humans are still the smartest, by far.
Now, what is ChatGPT? Essentially, it is a chatbot: you type a question, it gives an answer. The difference is that it is very good at acting like it is human: its very complex “language model”, based on billions of parameters or whatnots, compiled all the human-authored pieces of content it could find online and looked for patterns. This allows it, based on whatever prompt you type in, to guess what word (or fragment of a word, a “token”) a human would most probably type next.
This is essential to understand: ChatGPT is “A.I.” for sure, but you are never conversing with “an intelligence” when using it. It surely feels like it, but it is nothing more than a very powerful calculator, very good at mimicking what a human is most likely to be saying. It has no understanding of what it is saying: it does not know what a word, a sentence, or an idea is. It only generates its calculation outputs, token after token. This is true for every other generative A.I. tool out there. Don’t trust them.
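To make this concrete, here is a minimal sketch of that guessing game in Python. This is a toy bigram model of my own making, not ChatGPT’s actual architecture (real language models are neural networks with billions of parameters, and they predict “tokens” rather than whole words), but the principle is the same: output whatever continuation was statistically most common in the training text.

```python
# A toy next-word predictor: a deliberately naive illustration of the
# "powerful calculator" idea above, not how ChatGPT actually works.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat . the dog sat on the rug . the cat ate the fish ."

# Count, for each word, which words tend to follow it in the training data.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word. No meaning involved."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else "."

# "Generate" text by repeatedly appending the most probable next word.
output = ["the"]
for _ in range(6):
    output.append(predict_next(output[-1]))

print(" ".join(output))  # prints: the cat sat on the cat sat
```

Note how it happily loops back into “the cat sat”: the program is matching patterns, not thinking. Scale this up by a few billion parameters and you get something that sounds convincingly human, with exactly as much understanding.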
Artificial Intolerance
Let’s get political: if A.I. is essentially software that mimics humans, which humans is it mimicking? Well, as far as we know, it is trained on human content published online: public forums, publications, etc., maybe even this Medium post. Meaning, if you don’t partake in online conversations, or don’t publish content in whatever counts as an A.I.-friendly format, chances are you are not the type of human generative bots will try to mimic. And yes, if you are not male, not white, and/or not an English speaker, A.I. will find it much harder to mimic you.
A.I.’s “white guy problem” is nothing new (see this opinion from, *checks notes*, 2016!), and many industry players have actually tried to solve the issue. Nevertheless, representativeness is not a given when training data sources are so often undisclosed: you are probably not going to get ChatGPT to say the N word (nor should you try to), as it was manually blocked by the tool’s creators; but researchers are still uncovering real racism baked into the software. Why? Because for the most part, fighting the bot’s biases is an afterthought: a liability-mitigation measure rather than a must-have.
There is a lot to dislike about Silicon Valley moguls’ ideologies. Elon Musk, a despicable character, is increasingly displaying his sympathies with alt-right accounts on Twitter, but other very influential men like Peter Thiel or Sam Harris also spread frankly extremist ideas about government and religion that you might be uncomfortable with (I definitely am). Do you want to let this community of people build and define tools that might be used by your employer or local government? (I don’t.) Here is a listen (Dan McQuillan’s interview on the Tech Won’t Save Us podcast) and a read (David Golumbia’s summary of the ties between A.G.I. and white supremacy) to scare you, if you are into that.
The danger with A.I.
That is the real risk with A.I. tools: how are these unchecked pieces of software, falsely presented as ideologically neutral, going to impact how the world thinks and functions? For example, would you trust a libertarian, alt-right-infused chatbot to handle, say, your relationship with your local social benefits office? Sounds like a bad idea; yet the French government is already planning a trial for just that. Don’t worry though, these chatbots are not replacing humans (they say) (for now) (because they already tried and failed in the past).
The path is nonetheless clear: generative A.I. will increasingly be rolled out to handle interactions with the masses, because it is cheaper and scalable, while actual human interaction will come at a premium for the wealthy, the elite, the Gold status holders, or however else the happy few are selected. This will perhaps reduce delivery costs, but mostly it will degrade service for those who need it the most (poor people, isolated people, people with disabilities, immigrants, etc.), and probably discourage some from seeking help at all. The alignment between these outcomes and the dominant Silicon Valley ideology is no accident.
Don’t believe me? Look at my beloved country (France) again: “fraud probability” profiling algorithms are already in use to harass the poorest benefits recipients with recurring tax audits, at scale. Fail to understand what the robot is asking you, or to provide your documents in time and in the right format, and you lose your benefits. Will ChatGPT be calibrated to alleviate bias against people in need? We’ll see! In the meantime, wealthy tax evaders can call the tax office’s human hotline and discuss their case in peace.
Disaster for the education industry?
Let’s be clear: the same process playing out in education would be a disaster. We cannot massify access to education through lower-quality A.I.-authored materials and A.I.-powered interactions. As we saw, there is no way to ensure these materials are accurate and safe: at scale, this means some people will be taught things that are false, wrong, and dangerous. And by the time we realize it, it might be too late.
No one wants ChatGPT as their teacher: not me, not you, and not rural learners. We are already seeing a surge in seemingly A.I.-generated pseudo-educational videos published on YouTube and elsewhere, and these materials are being produced at an ever faster rate! Check for yourself how much content is made with ChatGPT and not even proofread: search for “Regenerate response” on Google or LinkedIn, and chances are you will find totally unchecked educational and medical materials.
I am not blaming those who publish A.I.-generated content without proofreading: when a technology feels so human, so smart, when you are told it is intelligence, it is naturally tempting to let it do your job. The problem is that by doing so, you are relying on very biased and imperfect software, and that is what is dangerous.
This is why we should not stay idle in the face of generative A.I.: we cannot build a dystopian education industry in which the masses (e.g., most of the global South) get lower-quality, unsafe, A.I.-generated training materials while real education is affordable only to a happy few. We need to aim higher.
Aiming higher: promoting good A.I. usage
Don’t get me wrong: Artificial Intelligence is an extraordinary tool, one we already use daily at Chalkboard Education and one that fuels some of the best features in our software. But as a purpose-driven company, we wish to treat it as what it is: a tool. A tool for, ultimately, humans to use. As the technology develops and its adoption widens, it is essential to learn and teach that crucial difference: A.I. is a seemingly intelligent tool, but never a consciousness. Meaning: it must always operate under human review.
There are a lot of good use cases for A.I., including generative A.I. and ChatGPT. They are the ones where these tools give pedagogues the means to produce faster and spread their knowledge and know-how like never before. Good A.I. allows vetted professionals and materials to inspire people who never had the chance to get training before, to turn some of them into trainers themselves, and to reproduce that positive cycle within their respective communities.
Bad A.I., on the contrary, means uncontrolled data sources being used to generate and market unverified curricula and fake news at scale. Bad A.I. cannibalizes actual academic and training institutions that did not catch up fast enough, and kills the market for proper training aimed at low-income and isolated individuals. Bad A.I. degrades the training experience and outcomes for those who are already the least advantaged.
How to kill bad A.I.
Use, learn, and teach A.I. tools! Play with ChatGPT, Midjourney, and the others; explore and embrace their capabilities! Obtaining a clear understanding of how these technologies function and what they can and cannot do is absolutely crucial. Train your peers and students as well! This is what will allow us, as an industry, to focus on the most fruitful use cases and warn against the most harmful ones.
Bad players are already producing poor-quality content, fake news, and money-grab training curricula: good players need to understand how they are doing it in order to fight back, push for regulation, and promote good A.I. Don’t mistake it for the “unavoidable wave” some are selling: A.I. is as political a topic as any other, and you have power over how it spreads in your community!