Tools and Materials: A Mental Model for AI
ChatGPT suggests a prompt for a featured image for this article.

"Language shapes the way we think, and determines what we can think about." - Benjamin Lee Whorf

Before we begin, I asked ChatGPT to rewrite this article at a 4th grade reading level. You can read the result here.

Artificial? Yes. Intelligent? Not even close. It is not without reason that things like ChatGPT are called "AI" or "Artificial Intelligence." We humans have a propensity for anthropomorphizing - attributing human characteristics to - things that are not human. So if we are told something is intelligent, say a very large computer system we can submit questions to and get answers from, we look for intelligence in that thing. And if that thing is trained on our own language and art and mathematics and code, it will appear intelligent to us because its training materials came from intelligent beings: us.

So, as we crash headfirst into the AI present and future, we need to reset our mental model before we start believing these things we call "Artificial Intelligences" are actually intelligent (again, they are not).

Tools and Materials

I propose we all start thinking of these things we call "AI" as tools and materials. Because that's what they are and that's how we'll end up using them.

Sometimes we'll use them as tools, the same way we use our phones and computers and the apps on them as tools. Sometimes we'll use them and what they produce as materials, the same way we use printed fabrics and code snippets to create things. And sometimes we'll use them as both tools and materials, the same way we use a word processing application first as a tool with which we write a body of text, and then as a material when its thesaurus function helps us find more fanciful words and phrases.

Here are some basic examples to help you build the mental model:

AI as a tool performs a task for us:

  • Fill out tax forms, write contracts and legal documents.
  • Summarize text, rewrite text to a specific reading level.
  • Write code.
  • Online shopping, including booking flights, hotels, etc.
  • Any interaction with a customer service representative (CSR).
  • Magic eraser for images, video, and audio.

AI as a material generates something for us:

  • Simple stories.
  • Plot lines for stories.
  • News articles and summaries.
  • Images and other art.
  • Variants of a layout, or a theme, or an image, or a painting.

Thinking of AI as tools and materials rather than intelligent things with magical human-like powers is an essential mental shift as we figure out how to fit these things into our lives and our world. We have to move away from the linguistic trick their creators foisted upon us with their naming, and move towards the practical realities of what these things really are:

AI are if-this-then-that machines using enormously complex decision trees generated by ingesting all available writings, imagery, and other human-made materials and filtering that data through pattern-matching algorithms.

They are regurgitation machines echoing our own works back to us.
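To make the "regurgitation" point concrete, here is a deliberately tiny sketch (my illustration, not the author's; real systems are incomparably larger and work differently in the details): a toy bigram model that, by construction, can only ever emit word pairs it has already ingested from its training text.

```python
import random
from collections import defaultdict

def train(text):
    """Map each word to the list of words that followed it in the text."""
    words = text.split()
    model = defaultdict(list)
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, start, length, seed=0):
    """Walk the model: repeatedly emit a word that has followed the current one."""
    rng = random.Random(seed)  # seeded for repeatable output
    output = [start]
    for _ in range(length - 1):
        candidates = model.get(output[-1])
        if not candidates:
            break  # dead end: this word never had a successor in training
        output.append(rng.choice(candidates))
    return " ".join(output)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train(corpus)
print(generate(model, "the", 6))
```

Every consecutive word pair in its output already exists somewhere in the training text; the machine recombines what it has seen and hands it back. Scaled up by many orders of magnitude, that is the "echo" the paragraph above describes.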

And just like we are drawn to our own image every time we pass a mirrored surface, we are drawn to the echoes of ourselves in the output of these machines.

Shallow Work and Human Creativity

Asked for one word to describe AIs, my immediate answer is "shallow." You've probably felt this yourself without being able to put your finger on it. Let me explain:

There is a bland uniformity to AI output. It's easiest to notice in generative AI images. Once you've been exposed to enough of them, they start taking on a very specific "AI-ness." For all their variety, there is something recognizable about them - some defining feature that sets them apart from what we recognize as human-made images. That thing is shallowness.

AIs are conservative in the sense they conserve and repeat what already exists. They don't come up with anything new. They are also elitist in the sense they lean towards what is predominant, what there is more of. They are swayed by trends and popularity and amplify whatever majority opinion they find in their training data.

This makes their output bland and uniform and shallow like a drunk first-year philosophy student at a bar: The initial conversation may be interesting, but after a few minutes you notice there's little substance behind the bravado. I've been that drunk first-year philosophy student so I know what I'm talking about.

This means while AIs are great at doing shallow rote work, they have no ability to bring anything new to the table. They lack creativity and ingenuity and lateral thinking skills because these skills require intelligence. And AIs are not intelligent; they just play intelligent on TV.

Will an AI take my job?

Our instinctual response to any new technology is "will it take my job?" It's a valid question: Jobs are essential for us to be able to make a living in this free-market capitalist delusion we call "modern society," yet job creators have a tendency to let go of expensive human workers if they can replace them with less expensive alternatives - like self-checkout kiosks that constantly need to be reset by a staff member because you put the banana in the bagging area before you chose whether to donate $2 to a children's charity, or automated "voice assistants" that never have the answers to your customer service questions and only pass you to an actual human once you've repeated the correct incantation of profanity (try it, it totally works!).

So now that we have these things some clever marketing people have told us to call "AI," are they coming for your job? Well, that depends:

If your job is shallow and constitutes mainly rote work, there's a good chance an AI will enter your life very soon - as in within months - and become part of the toolkit you use to get your job done quicker. And if it turns out that AI can be trained to do your job without your intervention (by having you use it and thereby training it), there's a non-zero chance it will eventually replace you. That chance hinges more on corporate greed than it does AI ability though.

If your job involves any type of creative, or deep, or lateral, or organizational, or original, or challenging, or novel thinking, AI will not take your job because AI can't do any of those things. You'll still work with AI - probably within months - and the AI may alleviate you of a lot of the rote work you are currently doing that takes your attention away from what you were actually hired to do - but the AI is unlikely to replace you. Unless corporate greed gets in the way. Which it often does because of the aforementioned free-market capitalist delusion we call "modern society."

What we all have to come to terms with today is that we're long past the point of no return when it comes to AI. While no technology is inevitable, technology often becomes so entrenched it is impossible to ... un-entrench it. That's where we are with AI. No matter where you live and what you do for work, for school, or in your own time, you're already interacting with AIs in more ways than you can imagine. And these AIs are going to become part of your work, your school, and your home life whether you want them or not.

Our job now is to talk to one another about what role these things called "AI" are going to play in our lives. How do we use them in ways that don't take jobs away from the humans who need them the most - the historically marginalized and excluded people who tend to hold jobs comprising mainly shallow rote work? How do we build them in ways that don't cannibalize the creative works of artists and writers and coders and teachers? How do we incorporate AI into education to improve learning outcomes for students and build a more informed and skilled populace? How do we wrench control over our AI future from the surveillance capitalists and longtermists currently building the world to their libertarian techno-utopian visions?

How do we use AI and all technology to create human flourishing and build futures in which we all have the capabilities to be and do what we have reason to value?

If we don't talk about the future, the future becomes something that happens to us. Let's have this conversation.



Cross-posted to mor10.com


Morten Rand-Hendriksen is a human person who makes video training about how to use technology to build better futures for ourselves and the people around us at LinkedIn Learning. He wrote this entire article by himself without any help from an AI because when he asked an AI to generate a list of examples of how an AI can be used as a material, the AI gave him a list of how an AI could be used to generate ideas for marketing articles about AI.

Marina Pardo Calderón

UX Designer en Grupo OSDE

2y

Loved the article! And the way that ChatGPT "translated" the part about the drunk first-year philosophy student was just perfect, because it is a meta example of what you are talking about.

Dan Brodnitz

Head of Global Content, LinkedIn Learning (he/him)

2y

Thank you/subscribe! Also: “And just like we are drawn to our own image every time we pass a mirrored surface, we are drawn to the echoes of ourselves in the output of these machines.”…..

Morten Rand-Hendriksen

AI & Ethics & Rights & Justice | Educator | TEDx Speaker | Neurodivergent System Thinker | Dad

2y

This article was brought forth by a request from my colleague Dan Brodnitz for me to not get mired in the ongoing disasters of the tech world but use my time to look towards the horizon and figure out where we're headed next. And by ongoing conversations with my colleagues Brandi Shailer and Natalie Pao who as we speak are mapping the future of AI. And by Casey Fiesler's ongoing coverage of the #AI revolution and its ethical implications.
