Is AI Going to Steal my Job?
I asked Midjourney for a “humanoid robot” and got a terrifying Terminator with pointlessly huge “muscles” and a pitiless stare. Plus a nice cup of tea


People are scared. But who are the real thieves and is it too late to change the narrative?

A shorter version of this article appeared in the most recent issue of my Discomfort Zone newsletter.


Right now, there’s an increasing flurry of think pieces, podcasts, and articles with names like “Will AI Take Your Job?” and “Future of Work.”

People in the workforce are concerned about their career prospects. Parents worry that their kids’ coding courses will soon be made irrelevant. Poets wonder whether LLM-generated verses will divert billions away from the poetry industry.

Okay, that last one is a joke. But individuals of all ages are currently experiencing genuine anxiety about the employment impact of artificial intelligence, whether or not they already use generative AI themselves.

We feel negative emotions such as fear when faced with unpredictable and potentially uncontrollable situations. And now a giant, stinking bag of unpredictapoop has been left at our doorsteps.

It’s understandable if all we want to do is to slam the door shut and ignore the stench. But even though we have no clue who left the bag there and why it’s suddenly our problem, we know in the back of our mind and the pit of our stomach that we’ll have to deal with it at some point.

What if the person who delivered the poop bag isn’t that AI robot with the cup of tea? Maybe we need to hold our nose, open the bag, and investigate what lurks inside.

The river of AI capital we are all swimming in

Newsflash: In the Global North we live in a capitalist society. The corporations that are developing AI products and services are fueled by capitalism. Should we be surprised that labor takes a backseat to profit as machine learning models mine, mill, and process the sum of human knowledge?

Now, I’m not a Marxist and I’m not an anarchist, so I don’t believe we need to re-occupy Wall Street tomorrow morning. (Forget the stock market; the redistribution of money away from the one percent should start with implementing a wealth tax and an inheritance tax, not destroying the wealth-creation machine itself.)

What is worth exploring, if we hope to navigate the river we have suddenly been plunged into, are the different types of capitalism.

The fact is that capitalism as a concept is too broad to be helpful when analyzing the impact of AI. There’s a difference between constructive capitalism and extraction capitalism.

The first kind can connect communities by building bridges; the second kind can harm the planet by extracting coal and transporting it over those bridges.

The physical bridges of the Industrial Revolution became the digital bridges of the Information Age. And now it feels like we’re entering another era, with AI being capitalism’s shiny new resource. So, when it comes to machine learning products, we first need to ask ourselves these questions:

What is being extracted? And what is being built?

I don’t have a full answer for you, although we all know that vast amounts of intellectual property have already been extracted. Scam tools, non-consensual porn, and deepfake videos are definitely being built. There’s probably some useful stuff too. But instead of burying our heads in the sand through fear, we need to examine AI corporations and their products with a keen and critical eye.

Welcome to the enshittocene

Canadian journalist and author Cory Doctorow coined the word enshittification around eighteen months ago to explain how platforms such as Facebook produce a progressively crappier user experience as they maximize profits.

Enshittification means that builders inevitably become extractors. This year, Doctorow has extended his thesis and encapsulated it in the term “enshittocene,” reflecting his belief that absolutely everything is turning to shit.

Does Doctorow’s concept have to apply to the products of machine learning? AI has the potential to improve medicine, education, and agriculture in the Global South. This might help lift millions of people out of poverty. But the majestic river of capitalism contains strong currents, unpredictable eddies, and stagnant pools. There’s no guarantee that blanket AI adoption will increase productivity or decrease inequality evenly across the board.

For example, if a US corporation closes its outsourced call center in India because AI can do a better, cheaper job, nobody in America has been made redundant. It’s the workforce in Asia that has to scramble to find new jobs.

“Follow the money” is a reasonable method for understanding what’s really happening in a capitalist system. So it leads to the next question we should be asking:

Who is winning and who is losing in the AI race to replace workers?

I don’t have an answer for you here either. The situation is evolving fast, so again, we need to keep our eyes on the ball.

Should we simply surrender to our tech overlords?

When a report emerges that Sam Altman, the CEO of OpenAI, is trying to raise $7 trillion to ensure the future of his industry, it’s not hard to see where some people’s priorities lie. Make no mistake, investments in AI will end up in the pockets of the richest people in the world as well as the shareholders of the world’s most valuable corporations.

But let’s be clear about one thing: the technology isn’t the problem.

Nobody is saying that machine learning shouldn’t be used to find a cure for cancer or develop new antibiotics. And to be fair, the existence of capitalism isn’t a prerequisite for negative outcomes from each new tech paradigm.

Human beings are perfectly capable of weaponizing scientific discoveries without the backing of VC funds — just look at the nuclear arms race, the use of electricity to execute prisoners, or the Chinese Communist Party’s surveillance systems.

AI is a new tech paradigm that will doubtless produce benefits as well as harms independently of how it is financed. But capitalism is turbo-charging its development, and although corporations are justified in wanting to profit from it, the critical issue facing us in 2024 is where we focus our energy as a society: on profits or on people.

Are we powerless in the battle to put people first?

We mere mortals who are not tech titans do have one significant lever of control over what is happening: storytelling. It’s up to us to interpret our shifting reality in a way that benefits our humanity.

The stories we tell in our culture give us a framework for social cohesion. They give us a mission to believe in. They enable us to trust that others will act towards similar goals. And they help us to hold shared values.

As historian Yuval Noah Harari noted in his book Sapiens, money is probably the most successful story ever told. The idea of money works not because of coins or pieces of paper, but because we all believe in it, believe that other people believe in it, and behave accordingly.

The current moment seems to be an inflection point in history and nothing is stopping us from coming up with a new, powerful story about AI that is just as successful as money.

The key thing when it comes to how we work with AI (and how it works with us) is to make sure that we are the ones writing the story.

So… what stories are being told right now about AI? And how do they fit into other stories?

Some think AI spells doom for the human race. Others see it as our savior. Last month, one MIT economist suggested that AI might rebuild the American middle class.

The tech companies have already concocted several different stories. Here they are, in no particular order:

1. We are only a couple of years away from AGI (Artificial General Intelligence) which will be either terrifyingly omnipotent and treat us like ants or save humanity with its god-like powers.

2. Open-source platforms will give anyone the ability to create new generative AI tools, whether those individuals or companies are benevolent, malevolent, or simply inexperienced. And that’s a good thing. Or maybe not.

3. AI will take care of the most mundane and repetitive knowledge work so that employees can concentrate on more valuable tasks.

4. Government regulation is necessary to protect the general public against this frightening technology. (Oh, and regulations might prevent potential competitors from entering the market, but never mind about that.)

5. AI will solve global challenges like climate change and superbugs, so we need to invest heavily in it as soon as possible to save lives.

What do these stories have in common? Hype.

Here’s something revealing: I was curious to see whether AI itself would come up with any additional stories, so I asked Google’s Gemini LLM to pretend it was a top tech journalist and gave it a short brief. I didn’t end up using any of its suggestions because it more or less repeated the list I had already compiled.

But then, completely unprompted, it spontaneously appended this text to the end of its response:

“It’s important to remember: These narratives are often self-serving. Tech companies frame AI discussions to promote their specific products or agendas. As a journalist, I urge readers to look beyond the hype and critically evaluate the potential benefits and risks of this evolving technology.”

So even the AI itself is recommending that we don’t buy into these narratives!

Maybe the hype machine will run out of gas

I explore a fictional end-game of extraction capitalism in the futuristic AI-run North America where my forthcoming novel, 2084, takes place. You can read what the book’s about here but let’s just say that there’s an incredibly important distinction between letting AI do something and letting AI do something on its own.

It’s worth restating that, as I noted earlier in this article, capitalism can be a positive force. Sometimes the market does magically sort things out for the better.

And guess what? Labor is also a market.

Maybe what will ultimately save us from an employment crisis is that business executives will reject the mass adoption of artificial intelligence out of fear of being replaced themselves. Is it possible that AI will induce so much anxiety-driven harmony between management and staff that they join hands from corner offices to cubicles, saying to each other, “We’re all in this together”?

That would truly be a KumbAIya moment.


Want more articles like this? Follow me on Medium, or read my Discomfort Zone newsletter on Substack every second Thursday. This article was written without artificial intelligence. Lol.

