An Imminent Singularity
I’ve read probably close to a hundred opinions raving about AI in the last year. It gets old, and I’m not here for that. Each spiel repeated the same syllogism: AI is efficient, and efficiency benefits society; therefore, AI will benefit society. But AI may just be too efficient. AI, as an accessory, boosts productivity in the workplace, along with all the other benefits I’m sure you’ve heard a hundred times. But it will not stop at being an accessory. My argument is that AI, past the singularity, will be a pervasive force that replaces labour, and that post-singularity, society will more likely than not fail to benefit.
As a result:
Democracy!
My stance is simple: humans will make one last democratic decision to enter a command economy and enact Universal Basic Income (UBI). This is highly plausible in a post-singularity era if you assume extreme wealth generation from AI. If American billionaires transferred even 5% of their wealth to every other human alive, we’d only need to double the economy 11 times before each person pockets $82,000/year, roughly 8x the current global average annual income. Of course, this would meet our needs in a post-singularity world, as the cost of production would, at the very minimum, be negligible. We’ll be buying planets with enough innovation. And, more likely than not, society as we know it would lose all of its essence. If you will, imagine WALL-E.
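For the sceptical, the compounding claim above can be checked in a few lines. A minimal sketch, assuming roughly $6.4 trillion in total US billionaire wealth, 8 billion people alive, and a $10,250 global average annual income (those three figures are my assumptions; the 5%, the 11 doublings, and the 8x multiple come from the text):

```python
# Assumptions (not from the text): total US billionaire wealth,
# world population, and global average annual income.
billionaire_wealth = 6.4e12   # assumed, USD
population = 8e9              # assumed, people
global_avg_income = 10_250    # assumed, USD/year

share_transferred = 0.05      # 5%, from the text
doublings = 11                # from the text

per_person = billionaire_wealth * share_transferred / population
ubi = per_person * 2 ** doublings  # each doubling of the economy doubles the payout

print(per_person)                # 40.0  -> $40 per person before any doubling
print(ubi)                       # 81920.0 -> just under the quoted $82,000/year
print(ubi / global_avg_income)   # ~8.0 -> the quoted 8x multiple
```

The point of the sketch is only that the arithmetic is internally consistent: $40 per head doubled 11 times (a 2,048x multiple) lands at $81,920, which is where the $82,000 figure comes from.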
But we must remember that this only holds if the powers, or perhaps the AI, maintain some civil decency. It would be charity, especially since humans would no longer provide substantial value to the state. And before you say money won't be a thing, it absolutely will: it largely communicates the relative scarcity of a good or service. So, unless AI magically makes all the resources in our world infinite (energy, land, natural resources, etc.), we will require a currency. Whether or not we interact with it is a separate argument. One guarantee is that human labour will lose its value as a currency, so start saving your money now; how else will you earn it later?
Another reasonable conclusion could be that democracy dies and social currency rules, a power extended only to those in the good graces of the AGI project's leadership and to the leadership themselves. Under their rule, they'd extract 100% of plebeian value without sacrificing our lives; after all, we are recyclable with enough rest.
Or, here’s a good one: a utopia emerges post-singularity, we all live happily ever after, we get anything we could've ever imagined by simply asking for it, and we start seeing rainbows and unicorns and candy and Shrek.
In all scenarios, we kill human ambition for the masses. There'll be nothing worth exerting ourselves for.
A state’s benevolence
You may have noticed the caveat in the UBI scenario: it only works if the state or those in power grant it. Our welfare today relies on (I) the leverage we assume with our labour, (II) a state’s requirements for legitimacy, and (III) a ruler’s benevolence.
There are two reasons for this:
As it stands, the interests of states and people are aligned in an increasingly globalised world. A state needs efficient markets, high literacy, skilled workers, ballooning populations, and innovation (industrial hard power or artistic soft power) to be competitive economically, militarily, and in resources. Competition between states, namely hegemons, encourages them to aid their citizens. Subsequent progress trickles down from a hegemon's charity to LEDCs for x diplomatic/strategic reasons (Venezuela-Russia, Ethiopia-China, Philippines-US).
Today, if a vital sector goes on strike, like the recent port strikes in the US, the state must care, because a government's legitimacy depends on its people's belief in the system. If the Biden Administration had lost all union support pre-election, Trump’s win would’ve been almost immediately guaranteed, upending Biden’s remaining few months in office.
With labour-replacing AI, those incentives are no longer aligned. Humans will be less a resource to be mined and more an irrelevance. Rationally, states would spend more resources on developing technology and AGI to maintain their competitive advantage. It is only out of a state's benevolence or liberal values (at least in over half of the world) that it would worry about the prosperity of its citizens.
Human labour will lose its value. Why?
Simply put, labour-replacing AI is more efficient, replicable, cost-effective, and reliable.
In a typical VC scenario today, a VC is betting mainly on the value of a startup’s labour: how successful will this team be at accomplishing x task? It is a largely unreliable model.
With AI, those reliability problems disappear. Let me compare with an example:
In September 2024, Google paid a single researcher, Noam Shazeer, just under $3 billion to return. Mind you, he's a genius. But for that money, they could've hired 1,000 researchers and paid them $3 million each (a lot of money for a lot of people), or even 100 for $30 million. Instead, they rehired a flight risk who had publicly expressed his frustrations with the company. Unless he is the sole proprietor of some AI knowledge that would enable Google to make a breakthrough returning 100%+ of their investment, it is wasteful. It also goes right back to how easily replaceable humans are. Our labour has a limit: we cannot work past a certain amount of exertion before our marginal returns diminish.
In our world of AI, though, if a star researcher develops a star-researcher AI model, the former becomes irrelevant. The AI can be cloned. Anyone with enough GPUs gets the brain. Goodbye recruiting, goodbye HR, goodbye back office, goodbye anyone, really. The price of that AI would be limited to whatever the GPUs and energy cost. There are no preferences involved, either! It's not as if the AI can complain about working at a specific place (if appropriately developed).
Some will be impacted more than most, though…
There is some world in which AI becomes solely an accessory to our current lives and entrepreneurs grow wildly successful equipped with a new range of affordable tools and guidance. However, by default, sufficiently strong AI will render human entrepreneurship obsolete. VC funds, for example, can convert money into hundreds of AI startups led by digital CEOs.
Keynes, Socrates, and Marcus Aurelius did fantastic work influencing large populations and establishing world views. That is because the popularity of a piece of work rests on its correctness paired with collective agreement on its virtuousness. A mathematician's formula requires no moral determination, but a mental model that encourages a population out of depression and into lasting success leaves impressions for decades. We need thought leaders to induce thought, a self-perpetuating cycle. A flood of AI-created ideologies means no individual ideology will outshine a century of thinking. Nobody will question AI’s ideologies because AI will offer its rationale! And with enough time, philosophers will wither away.
Some more obvious ones are scientists, back-office jobs, software engineers, customer service reps, drivers, labourers, finance professionals (I'm looking at you, analysts and consultants), etc.
There is also a world in which AI entrenches our biases. We lose journalistic creativity if organisations like the Times, Journal, Economist, or Reuters automate their articles. AI, if cloned, could seed mass conformity. On a conspiratorial note, those with higher social status, power, or wealth could effortlessly manipulate people en masse. It'd be even worse if those same executives injected hatred between communities, fuelling acrimony between religions, races, genders, and political affiliations.
Even law enforcement would suffer. Post-singularity, if law enforcement relies on digital policing to prevent and solve crimes, we entertain a 1984-esque scenario where our privacy is infiltrated every second. We'd have cameras and microphones in our residences, parks, malls, and everywhere else. Ironically, crime may rise as people use offline methods of distribution to commit bribery, drug offences, etc. Complacency in the "physical world" would be our bane. That is, unless we have robot officers patrolling the streets… I’m not sure which I'd prefer.
Unsurprisingly, politicians will likely be the subset of our population least affected. Most humans seek comfort in a reliable character standing up for their beliefs and ideologies. As it stands, AI isn't ready for that undertaking, and rightfully, humans aren't prepared to hand over the reins. Politicians also get to set the rules. They could, theoretically, enact legislation requiring AI to maintain a set of human values, so that in a world where AI rules all, it continues supporting human beings with UBI and other welfare. Or, you know, just suppress AI.
Oh, and athletes, too! They'll be fine.
Takeaway
I worry that we will lose all sense of identity. In 50 years, assuming that the singularity has happened, nobody will care about who you are as a person, but rather, what you did pre-singularity. You may be known for what your parents or grandparents did (much like any royalty) and continue amassing popularity because of it. Who knows? Maybe we'll have a revolution against AI led by a doctor's son...
After a point, all humans will become homogeneous; we will all be named players 1, 2, and so on, taking in an income of $82,000 a year from a select few extremely wealthy AI developers while living a WALL-E lifestyle.
Sitting at my desk at work, I'm watching hundreds of my friends and family enter the fields of investment banking, consulting, buy-side, and other similar fields. Some are still pursuing cooler trades: medicine, therapy, and sports. I can’t judge. I, for one, have been categorised as a “finance bro” more times than I’d like to admit — to an extent, it is true. And logical!
In the medium term, 5-20 years, finance will remain a lucrative industry. But besides this essay being my desperate attempt to weld a philosophical patina to my personality (I refuse to be just a finance bro), I truly believe that right now is the time to do something ambitious. In 5-20 years, all bets are off for what our world will look like. Isn't it exciting to think of all the opportunities? And while all of the above work is admirable, how can your reaction not be to live life to the fullest?
I'm going on a few treks with some friends soon to experience the world before it becomes a wasteland, and perhaps I'll intern at an oil company immediately after. Maybe I'll work with an NGO in Somalia and then join Lockheed Martin. Or I could learn to cook and enter the echelon of the DoorDash elite.
The whole point is to experience things. The worst thing you can do for yourself is join the rat race.