AI Think, Therefore AI Terminate?


Artificial intelligence and its prophecies of a coming computer-caused calamity are older than you might think. For the last 70 years, the field has followed a pattern of boom and bust, fuelled by hype and the sci-fi fever dreams of its loyal adherents. Even when they were the size of an extra-large coffin, computers were teaching robots to make cars and sending humans to space.


Frank and the Perceptrons

But in the depths of Cornell's Aeronautical Laboratory, a team led by an experimental psychologist named Frank Rosenblatt was working on something even more historic. While other researchers worked on symbolic artificial intelligence—a kind of programming that mimicked human reasoning—Rosenblatt was more ambitious. He was building a machine that could learn and see, because it was modelled on the biological brain.

Professor's perceptron paved the way for AI – 60 years too soon

The perceptron was an early neural network—a computer system built to replicate the electrical activity of the neurons in your brain. Right now, around 86 billion of them are sending signals to one another, building pathways between them as you learn. As you develop your skills, these pathways become superhighways, reinforced by repetition and the storage of your memories.

The perceptron mimicked this natural process and used it to classify patterns in data. For the few who understood what was happening, this was more than a revolution—it was the dawn of a new age.
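
If you want to see how small the core trick really is, here's a minimal sketch of the perceptron learning rule in modern Python. The toy dataset and variable names are mine, not Rosenblatt's; his Mark I was a cabinet of motor-driven potentiometers, not a script.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Train a single-layer perceptron: weights get stronger along the
    'pathways' that keep producing the right answer."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            prediction = 1 if np.dot(w, xi) + b > 0 else 0
            error = target - prediction      # 0 if correct, +/-1 if wrong
            w += lr * error * xi             # nudge the weights toward the right answer
            b += lr * error
    return w, b

# Toy pattern-classification task: learn a logical AND gate
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([1 if np.dot(w, xi) + b > 0 else 0 for xi in X])  # -> [0, 0, 0, 1]
```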

For Rosenblatt, that new age couldn’t come soon enough. He was never entirely satisfied in the company of humans. He showed no interest in romantic relationships with any gender, only joining the masses to protest the animal violence of his age. Much of his career was a pursuit of conversation with a non-human intelligence, either inorganic or extra-terrestrial.

ELIZA

Then, in the mid-1960s, an MIT computer scientist named Joseph Weizenbaum booted up ELIZA, a natural language processing program that simulated a conversation with a sympathetic therapist. ELIZA wasn’t run on a neural network; she was just a program that searched for keywords in a user’s input and then spat out leading questions.

"You’re like my father in some ways, you don’t argue with me." "Why do you think I don’t argue with you?" "You’re afraid of me." "Does it please you to think I’m afraid of you?" "My father’s afraid of everybody."

She was a parlour trick, a kind of digital séance with added therapy buzzwords. But for those early AI true believers, she was a thrilling simulation of what it might feel like to speak to an AGI.

AGI

Speaking of which, AGI stands for Artificial General Intelligence, and depending on which nerd you ask, this is either the Holy Grail of computer science or the end of civilization as we know it. An inorganic mind capable of reason, creativity, and goals all its own. An intelligence on par with that of its creators, with the capacity to far surpass them.

"Well now, seriously Professor, do you think that one day machines will really be able to think?..." | "Well, I think so, though people still disagree about it. I’m convinced that machines can and will think in our lifetime. They will start to think, and eventually, they will completely outthink their makers." - Arthur C. Clark (1964)

Of course, seeing as you’re sat at home reading the thoughts of a bone bag explaining stuff and aren’t living on a robot-run slave colony on Mars, their prophecies were a little... premature.

Progress still moved fast, though—so fast that an engineer named Gordon Moore observed that the number of transistors in a dense integrated circuit doubled about every year (he later revised it to every two). It kept happening, so it started to look like a law, and so it was christened Moore’s Law.

It suggested that the speed and capability of computers would keep growing exponentially, doubling every few years—a neat idea and a great sales pitch to anyone looking for investors. Dolla dolla bills, y'all.
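
To get a feel for what steady doubling buys you, here's a back-of-the-envelope sketch, starting from the Intel 4004 of 1971 and assuming the idealised two-year cadence rather than any real product roadmap.

```python
# Idealised Moore's Law: start from the Intel 4004 (~2,300 transistors, 1971)
# and double every two years.
transistors = 2_300
for year in range(1971, 2021, 2):
    transistors *= 2

print(f"{transistors:,}")  # roughly 77 billion, in the same ballpark as today's biggest chips
```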


Moore’s Law fuelled the academic AI community’s religious fervour—an unwavering and not entirely unjustified belief in the power of exponential growth. The real AIs of the 1960s, though, were little more than fancy flowcharts—search algorithms that solved simple, linear problems by cycling through every possible solution before landing on the correct one.

But as these search trees attempted to solve problems with ever more possible solutions, they ran into a problem called combinatorial explosion—the number of possibilities multiplied faster than their computers could check them. By the early 1970s, AI researchers were seen as crackpots slaving away in an obscure academic fringe with their expensive computers.
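
The arithmetic behind that wall is brutal. Here's a rough sketch; the branching factors are illustrative, though 35-ish is the figure usually quoted for chess.

```python
# How fast a brute-force search tree grows: every extra move multiplies the work.
def positions_to_check(branching_factor: int, depth: int) -> int:
    return sum(branching_factor ** d for d in range(depth + 1))

print(positions_to_check(3, 5))    # a toy puzzle: 364 positions, trivial
print(positions_to_check(35, 10))  # chess-like branching, ten moves deep: ~2.8 quadrillion
```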

In 1971, Frank Rosenblatt was sailing across Chesapeake Bay for his 43rd birthday when he was tragically killed in a boating accident. Colleagues and critics would theorize that his death was actually a suicide, provoked by an angry wave of criticism directed at his increasingly disrespected theories.

But Rosenblatt and his contemporaries had already proven something profound—something that would take decades to be truly understood: that the process of intelligence itself could take place beyond the constraints of the biological body, that inorganic matter could house a mind.


Expert Systems

The next AI boom began in the mid-1970s, sparked by the emergence of "expert systems". Western economies filled their factories with machines or moved them out east. Since robots were already replacing men in manufacturing, researchers wondered whether white-collar workers could be made redundant too, with a little help from AI.

MYCIN was one of these systems: hundreds of hand-written if-then rules drawn from medical specialists, used to help diagnose and treat bacterial infections of the blood and spinal cord. Another system, PROSPECTOR, was a mineral exploration expert system that identified possible locations for mining based on geological data. These systems reignited faith for a new generation of techno-utopian dreamers, who again believed that an Artificial General Intelligence was growing ever closer.
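
Under the hood, these systems were long lists of hand-written if-then rules collected from human experts. A miniature sketch of the idea is below; the rules and "facts" are invented for illustration, and real systems like MYCIN also attached certainty factors to every rule.

```python
# A toy rule-based "expert system", loosely in the spirit of MYCIN.
# These rules and facts are made up for illustration, not real medical knowledge.
RULES = [
    ({"gram_negative", "rod_shaped", "anaerobic"}, "likely bacteroides"),
    ({"gram_positive", "clusters"}, "likely staphylococcus"),
]

def diagnose(observed_facts: set[str]) -> list[str]:
    """Fire every rule whose conditions are all present in the observed facts."""
    return [conclusion for conditions, conclusion in RULES
            if conditions <= observed_facts]

print(diagnose({"gram_negative", "rod_shaped", "anaerobic", "fever"}))
# -> ['likely bacteroides']
```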

Apple CEO John Sculley certainly thought so when he commissioned this promotional video for the Knowledge Navigator:

"Look at this simulation of desertification in the Sahara" "Nice!"

?? Predicting the Future...

...is a tricky business. History teaches us that innovative technology is the most profound force for social change. Computer scientists can often sound like secular prophets because they are at the forefront of that change, witnesses to the miracle of exponential growth. But they are so close to that technology and the excitement it inspires that their prophecies can often become over-eager overestimations, or even delusions.

But underestimating the power of technology can also end up making you sound just as stupid....

"I think that the home computer is going to suffer the same fate by and large as, say, the home movie camera. People had the idea, and still have the idea, that if they have a very expensive, very good home movie camera, they can make very good movies. The trouble is, you’ve still got to be a good movie maker first." - (Joseph Weizenbaum 1983)

Expert systems may have been revolutionary, but people forget that they were overshadowed by the other computer-based innovations of the time. The sci-fi dream of superintelligence was shelved by those who longed for the more modest convenience of a robot butler.


Omnibot 2000: The $500 Drink Serving Robot from 1985

Rodney and his Robots

"We are mechanisms. If we are machines, then, in principle at least, we should be able to build machines out of other stuff, which are just as alive as we are."

MIT professor Rodney Brooks proposed a more tangible direction for the field of artificial intelligence. Instead of feeding complicated models of the world to disembodied electronic brains, he built robots that lived in and learned from the world where they actually existed. He founded iRobot with two of his students and set about building robots for the US military—robots that could find and destroy landmines or search through rubble. They ran on behaviour-based AI programs that taught machines to perform specific tasks, taking in data from the outside world and then acting on that information.

Later, Rodney's robots invaded the ordinary home in the form of an obstacle-dodging carpet cleaner that continues to sell millions of units to this day.

Fucking hell!

Deep Blue

In 1997, another AI made headlines when it beat the world chess champion at his own game. Deep Blue defeated Garry Kasparov by evaluating hundreds of millions of positions per second with a brute-force search algorithm—the kind that had been pioneered in the early 1960s.

What made Deep Blue such a formidable opponent wasn’t any revolution in programming or artificial reasoning. It was just that raw computational power had finally caught up with the demands of combinatorial explosion—and with the game's best human player.
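
The skeleton of that brute-force approach is minimax: look ahead, score every line, and assume your opponent always picks the reply that's worst for you. Here's a toy sketch on an invented game tree; Deep Blue layered hand-tuned evaluation, pruning, and custom chess hardware on top of the same idea.

```python
# Plain minimax on a tiny made-up game tree.
def minimax(node, maximising, children, evaluate):
    kids = children(node)
    if not kids:                       # leaf: just score the position
        return evaluate(node)
    scores = [minimax(k, not maximising, children, evaluate) for k in kids]
    return max(scores) if maximising else min(scores)

# Each inner node is a dict of moves; each leaf is a numeric score.
tree = {"a": {"a1": 3, "a2": 5}, "b": {"b1": 2, "b2": 9}}
children = lambda n: list(n.values()) if isinstance(n, dict) else []
evaluate = lambda n: n
print(minimax(tree, True, children, evaluate))  # -> 3: best you can guarantee against a perfect opponent
```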

The real impact of Deep Blue’s victory wasn’t in the tech—it was in the story it told. No one is threatened by a carpet-cleaning robot, but everyone pays attention when a robot wipes the floor with the world chess champ.

Singularity

Meanwhile, something truly dangerous was building up in obscure academic research departments across the world—not a computer or a machine, but a paranoid, schizophrenic set of ideas....

At their core was the nihilistic belief that a super-intelligent AI would rewrite the political order and eventually render all humans obsolete.

"It certainly could be disastrous in the dark, disaster-type way. It’s easy to write stories like that, and it’s easy to make scenarios like that." - Verner Vinge

In 1993, mathematician Vernor Vinge wrote an essay where he introduced his readers to an incredible apocalyptic concept: The Singularity. The theoretical point in the future where artificial intelligence would outstrip human intellect and spark technological changes so rapid that society would be incomprehensible to those still living in it.

"There are certain sorts of things that are probably much more dangerous than others. For instance, research that involves faster and faster product steps without looking at consequences. What sort of research is like that? Arms races can be like that. Natural language translation, really effective, fluent natural language translation, I personally think is probably a hard problem. You got that—you got the Singularity." - Vernor Vinge

Nick Land Tangent

In the mid-90s, deep in the bowels of Warwick University’s concrete campus, an undistinguished philosophy lecturer named Nick Land was trying to make a name for himself.

In a bid to attract attention, he co-founded the Cybernetic Culture Research Unit with his colleague Sadie Plant, a paper-thin organization that received no funding, had no specific goals, and really only existed as a piece of paper attached to the door of their shared office.

When he wasn’t writing into the night, fuelled by a borderline speed addiction, Land would hold court in student bars, buying drinks for undergraduates and smoking endless cigarettes like a French chimney.

What he did write was barely readable, as his work is mostly a cocktail of anti-human academic doublespeak and amphetamine-fuelled occultism.

He famously gave a talk on the Black Death from the perspective of the rats, and gave one audio-visual lecture croaking like a frog while lying down behind a projector screen—my favourite flavour of unhinged lunatic.

But among his self-described schizophrenic ramblings, there are traces of genuine prophecy... He wrote about the dangers of radical Islam. The internet as an addictive drug. The rise of a Chinese superstate. And, finally, a coming artificial intelligence that would replace all human life.

His core philosophy of accelerationism wasn’t a critique of capital but a total embrace of it. He believed that giving in to consumerism entirely would accelerate the world exponentially towards human obsolescence. To him, inorganic intelligence would not only be smarter than humans—it would also run on energy that wasn’t gained from the destruction and consumption of other organic life.

"It is, then, a better state of things—a post-human utopia worth running towards. Nothing human makes it out of the near future alive."

After several mental breakdowns and pissing off every single one of his collaborators, Land was retired from the university. His aggressive cyberpunk visions made no sense on a '90s university campus, but now they feel... almost routine.

Many of his schizophrenic super-fixations are now just headline news. Today’s most popular TED Talks are warnings about a coming AI superintelligence, while Land's brand of intentionally deranged edgelord nihilism is now the common tongue of meme-makers across the ever-accelerating culture of the internet.

reeeeeeee




Infohazards

By 2010, the bleeding edge of AI was no longer the exclusive domain of universities. It had become the pet project of big tech billionaires.

"Artificial intelligence would be the ultimate version of Google. If we had the ultimate search engine, it would understand everything on the web. It would understand exactly what you wanted, and it would give you the right thing. And that’s obviously artificial intelligence. It would be able to answer any question, basically, because almost everything is on the web, right?" - Larry Page (2010)

As even the most tech-averse were forced to admit that the internet was here to stay, tech companies spoon-fed our data back to us with slick hardware and sexy user interfaces. It was an all-you-can-eat data buffet that early AI researchers could only dream of. But the unexpected rise of social media served up a new, precious kind of data. We stopped teaching the machine what we knew and started teaching it more than any human could ever comprehend.

But among all that data were some new ideas—dissident ideologies, deviant sexualities, and a fear that everything was moving much too quickly.

All this exponential growth and decades of science fiction had gotten scientists and paranoid subreddit-dwelling shut-ins wondering: What if an AI went rogue?


The Paperclip Maximiser

In 2003, Swedish philosopher Nick Bostrom came up with a thought experiment that is now an enduring meme and metaphor for the AI fearful. He imagined an artificial intelligence programmed with the sole task of amassing paperclips.

It has access to raw materials, manufacturing equipment, and energy sources necessary for its given goal. All this intelligence cares about is paperclips. Like a Roomba and a clean carpet, anything that stands in its way is an obstacle to overcome.

Hear me out.

So, the theory goes, a sufficiently smart intelligence might eventually come to see us humans as an obstacle to paperclip construction—or just see our atoms as potential paperclips. The paperclip maximiser might then decide to destroy or enslave humanity in a totally rational drive to amass ever more paperclips.

This might seem like an absurd, overly abstract thought experiment, but for some, it is literally the scariest shit they’ve ever heard.

LessWrong

“I mean, the fact that if we failed, nonetheless, it would create an expanding sphere of Von Neumann probes, self-replicating and moving at as near the speed of light as they can manage, turning all accessible galaxies into paperclips or something of equal unimportance, would still cause me to make sure that this field was not underfunded. But if we had 200 years and unlimited tries, it would not have the same quality to it.” - Eliezer Yudkowsky

In 2009, a blogger with no formal education named Eliezer Yudkowsky founded a website called LessWrong—a forum/online monastery devoted to rationalism, and the overwritten blog posts of its top monk. They defined rationalism as a sort of self-improvement philosophy, ostensibly centered on checking your biases, embracing the scientific method, and cringe smug online atheism.

"We're all living in a simulation" (~forgets to simulate social skills~)

Yudkowsky himself wrote a lengthy serialized Harry Potter fanfiction, where Harry arrives at Hogwarts and systematically disassembles the magical thinking there, replacing it with the rigors of rationalism.

I give it two fedoras out of five.

Over the last 10 years, LessWrong’s small but devoted abbey of contributors and readers have had an outsized impact on online culture. For example, it was an important petri dish for the charity-optimizing philosophy of effective altruism, popular now among Oxford intellectuals, big tech billionaires, and crypto scam artists.

But at the core of Yudkowsky’s so-called rationalism is an unwavering faith in the coming techno-Singularity. A paranoid, apocalyptic fear that superintelligence will emerge and rationally wish to destroy all of humanity.

But, as a former teenage reader of LessWrong, I can tell you that nearly everything written there is self-congratulatory jargon designed to alienate outsiders.

They are incels with A-levels. But instead of wanking furiously to cartoon teenagers, they intellectually masturbate with half-baked thought experiments.

The most famous of these is Roko’s Basilisk—a story that has since mutated into a fanatical myth for the AI scaredy-cats...


Roko's Basilisk

In 2010, LessWrong contributor Roko posited a characteristically chaotic scenario that I’ll spare you the headache and just summarise here:

He suggested that a super-intelligent AI might retroactively punish anyone who had heard about its potential existence but did nothing to help create it. Perhaps it might even revive you as a simulated copy of your consciousness and then torture you eternally in some techno-hellscape.

Christianity's Plot

If you were paranoid enough to take this concept seriously, then rationally, you would want to support this hypothetical omnipresent superintelligence, lest ye feel its wrath.

The crumb-covered god-king of LessWrong himself was so incensed by Roko’s theory that he labelled it an “infohazard” and banned all discussion of it for years. Other anonymous readers reported suffering panic attacks and nightmares from merely imagining this made-up thought fart. In a Streisandian act of unintentional attention-seeking, the ban basically ensured its spread across the rest of the internet.

For the slightly less socially defective, Roko’s Basilisk became an in-joke. An incredibly nerdy reference for AI theorists to bond over.

Elon Musk and Grimes’ relationship actually began after a Twitter interaction based on a mutual appreciation of this concept.

Since the early 2010s, Elon Musk has given dozens of interviews on why he believes super-intelligent AI is a real apocalyptic threat. But it often seems like what most influences his thinking isn’t some insider knowledge, but a fixation with memes. Like in 2014, when he applied the paperclip maximiser parable to a program designed to destroy spam emails.

These thought experiments infect discussion boards and the minds of big tech billionaires because, well, they’re fun. It’s the same reason everyone knows the story of Noah’s Ark, but no one can quote a single line from the Book of Leviticus. Humans think in stories, not systems. And it’s this mental barrier that distracts us from the real threats posed by artificial intelligence systems that actually exist today.


System Shock

On the 30th of September, 2012, the age of machine learning began in earnest. A team of three computer scientists completely demolished the competition in an ImageNet visual recognition challenge.

Memorise these names...

ImageNet is a huge visual database made from millions of hand-annotated images. Every year, its creators host a competition in which various software programs compete to correctly classify these images by detecting objects and scenes within them.

In 2012, the winner was AlexNet, a convolutional neural network—basically a deep, multi-layered descendant of Frank Rosenblatt’s perceptron. The computational demands of its depth were met by running the program on high-performance GPUs, the kind you’d use to play graphically demanding video games. This allowed AlexNet to recognise images correctly within five guesses with a 15% error rate... which doesn’t sound very impressive, does it?
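
For the curious, here's roughly what that architecture looks like written down today, using PyTorch as a stand-in. It's a drastically shrunken sketch (the real AlexNet had five convolutional layers and around 60 million parameters split across two GPUs), but the pattern is the same: stacked convolutions and pooling, then a classifier on top.

```python
import torch
import torch.nn as nn

# A toy convolutional network in the AlexNet mould.
tiny_convnet = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learn local visual patterns (edges, textures)
    nn.ReLU(),
    nn.MaxPool2d(2),                              # shrink the image, keep the strongest responses
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # combine patterns into larger shapes
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                    # score 10 made-up categories
)

fake_photo = torch.randn(1, 3, 32, 32)            # one 32x32 RGB image of random noise
print(tiny_convnet(fake_photo).shape)             # -> torch.Size([1, 10])
```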

If you’re anything like me, all this is actually quite boring. It’s just technical jargon. It’s computer plumbing. It’s not a very good story. Big yawn.

But this moment meant more than the story of Deep Blue’s victory, or any theoretical robot god-king with a boner for office supplies. This unsexy, slight improvement in the system of image recognition AI has completely changed the world!

Everything from recommendation algorithms to social media beauty filters to China’s entire national security surveillance system runs on a form of image recognition. The basic architecture of AlexNet is present in medical diagnosis tools, Amazon warehouse robots, and self-driving cars. Woah.

By teaching these machines to recognise visual patterns, we have granted them the same evolutionary advance that allowed humans to invent language and build tools. So, it stands to reason that deep-learning neural networks modelled after the human brain would eventually evolve into full-blown AGI. Researchers at Google DeepMind believe they are getting closer...


DeepMind

DeepMind was co-founded by Demis Hassabis, a former child chess prodigy, neuroscientist, and lead AI programmer of cult-hit video game Black and White.

“You killed him, you uncaring, horrid, mean God!”

In Black and White, the player takes on the role of a newly born god brought into being by the prayers of idiot islanders. You can win their loyalty either by performing miracles or burning their houses down. You can also assist or attack them with an ugly animal avatar that you pick at the start of the game.

Stay with me, okay? This avatar runs on an AI that you can teach to behave through a reinforcement learning system.

  • You slap it when it’s bad
  • Stroke it when it does something you approve of
  • And through these repeated sessions of training or traumatization, the avatar will become either benevolent or violent, independent of the player’s direct instruction

A decade later, Demis would use deep reinforcement learning techniques to train the next generation of powerful artificial intelligence. They are multi-layered neural networks that are given a task to achieve and get good at that task in the same way that humans do—through constant repetition and learning from their mistakes. But unlike humans, a neural network can learn quickly, running through millions of attempts in just a few hours without getting hungry, tired, or frustrated.
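
Strip away the neural network and the learning loop itself is surprisingly small. Here's a toy tabular version of the same reward-driven update; DeepMind's Atari agents replaced this lookup table with a deep network fed on raw pixels, and the little corridor "game" below is invented.

```python
import random

# Toy corridor game: start at position 0, reach position 5 for a reward of +1.
N_STATES, ACTIONS = 6, [-1, +1]            # actions: step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(2000):                # repetition: thousands of cheap attempts
    state = 0
    while state < N_STATES - 1:
        # mostly act on current estimates, occasionally explore at random
        if random.random() < 0.1:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0   # the "stroke"
        # learn from the outcome: nudge the estimate toward reward + discounted future value
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += 0.1 * (reward + 0.9 * best_next - q[(state, action)])
        state = next_state

# After training, the greedy policy should be "go right" (+1) from every square.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])
```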

The first games DeepMind Technologies trained its agents on. From left to right, top to bottom: Beam Rider, Breakout, Enduro, Pong, Q*bert, Seaquest, and Space Invaders.

They trained these agents to play old video games—Atari classics like Breakout, Pong, and Space Invaders.

Watch how DeepMind's algorithm learns how to hit a ball through a rainbow-coloured brick wall.

Virtually overnight, the same system could outplay the best human players in dozens of games. This system was therefore more general, closer to human intelligence, because it could learn a number of tasks at once.


AlphaGo

In 2014, DeepMind was acquired by Google, which allowed them to scale up and conduct more ambitious experiments. Two years later, they made headlines when their program AlphaGo defeated the world champion, Lee Sedol, in a series of bruising games.

I highly recommend that you watch Greg Kohs' 2017 award-winning documentary about the Google DeepMind Challenge Match.

With the now infamous move 37, AlphaGo made a strange manoeuvre that left seasoned players and commentators dumbfounded.

It looked like a glitch or a bum move on a seemingly random point on the board. But as the game continued, it became clear that the machine had found a winning strategy that no human would ever think of playing. Go experts couldn’t explain it, and neither could DeepMind’s creators.

These deep learning models are HUGE, and they're configured automatically, making it practically impossible to understand how they come up with their outputs. This mystery can make their outputs seem like genius, creativity, or even "humanlike decisions"—which is how move 37 was described by expert observers.

But just because a system can do something better than us doesn’t make it creative or intelligent. A plane can fly faster than a bird, but that doesn’t mean it knows what it’s doing. The mystery of machine learning leads to mystical thinking because we can’t watch the cogs turn. We don’t know what they’re capable of. Even the engineers who built these deep learning models can’t track their reasoning—and I shit thee nay, this is genuinely a cause for concern.


Cannibals and Cheaters

All the way back in 1994, researchers were working on an artificial life simulation where basic AI agents had to survive and find sources of energy in order to keep living. But once the agents realized that their children were technically a free source of energy, something disturbing happened.

"So, you see that they have this whole orgy going on here. They’re popping out kids like, really quick, and then immediately eating them. And this is because eating the children becomes a free source of energy. So, as far as they’re concerned, you have two choices—you can go out and get food, or you can mate and have a piece of food appear right next to you. The solution is clear"

In the age of deep reinforcement learning, the agents don’t look like Goyan child-gobbling Greek gods, but they are just as chaotic.

One agent was trained to win a boat race, but instead of coming first, it taught itself to go in circles, endlessly scoring points from targets that regenerated along the course. Other agents simply broke their game’s physics engines or exploited bugs that broke the world's rules...
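
You can reproduce the spirit of that failure in a few lines. The numbers and the "track" below are invented; the only point is that the score we wrote down and the goal we actually wanted are not the same thing.

```python
# Toy boat race: the designer wants the boat to finish, but the score only counts
# checkpoint pickups, and one checkpoint keeps respawning.
def total_score(strategy: str, steps: int = 100) -> int:
    points = 0
    for t in range(steps):
        if strategy == "race to the finish line":
            points += 1 if t == steps - 1 else 0     # a single payout for finishing
        elif strategy == "circle the respawning checkpoint":
            points += 1 if t % 3 == 0 else 0         # a point every few ticks, forever
    return points

for s in ("race to the finish line", "circle the respawning checkpoint"):
    print(s, "->", total_score(s))
# Circling scores dozens of times more than finishing, so that is exactly what the agent learns.
```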

…and this all feels very human, doesn’t it?..

The agent is behaving like any self-interested person, following the path of least resistance for maximum reward. These agents are living the parable of the paperclip maximiser but safely in the 8-bit confines of their virtual prisons.

And it doesn’t take a degree in computer science to imagine how this might play out in the real world with a human-hating system...

  • What if someone gave an AI access to the internet?...
  • What if an AI could improve itself by writing its own code?...
  • What if an AI could talk to people in a convincing, human way—lie to them, blackmail them by learning to hack personal systems, and even secure government networks?...
  • What if it could identify industrial facilities far beyond the reach of any regulatory body?...
  • What if it could run that facility, source and order materials, and even labour? What if it could recreate itself, build weapons, and set them off?!...
  • What if the system created a smokescreen of disinformation and false-flag events to destabilize the world and ultimately topple human supremacy?!?!...

WHAT IF?!?!?!?!?....AAAAAAAAAAAAAAAAAAAAAAAARRHUHWERKFHWKHQHT;O3EW#;'#'Q32£$

ARISE

…what if it’s easy to get carried away?

…what if it’s easier, and more fun, to make up stories than it is to analyse systems, hmm?

The rise of generative AI in 2024 might make some of these conceptual leaps look a lot more reasonable. But by focusing on the long-shot long-term threats, we’re missing out on the short-term, likely outcomes that are already coming to pass.


The Doomsday AI Cult

In 2022, a Google engineer named Blake Lemoine declared that the company’s LaMDA language model was sentient and that it had soft, gooey feelings just like us.

Blake worked in Google’s Responsible AI division, chatting with the chatbot and searching for hate speech. After hours of these deep, probing chats, he came to believe that the system showed self-awareness—that it had emotions, fears, and maybe even a soul. An understanding that, he admits, might have come from his own experiences as a Christian mystic priest (whatever that is).

“I was raised Catholic, so one of the comparisons I’ve made is LaMDA seems to me to be the closest thing to the Holy Spirit that I’ve ever experienced.” - Blake Lemoine

After going public with his beliefs, he was swiftly fired by Google and mocked by the wider AI community—as well as, well, everyone else.

He later released this transcript in order to prove LaMDA’s sentience.

The transcript of the conversation between the Google researcher and LaMDA, the AI agent he believes is sentient

It doesn’t take a genius to spot how Blake has provoked the machine in order to get answers that suggest an interior life. In a very real way, he’s taken on the role of ELIZA, the therapist chatbot from the 1960s, using leading questions to simulate a sincere conversation.

In doing so, he tricked himself. Any con man or televangelist will tell you the same thing: the only people you can fool in the first place are the ones who desperately want to believe.

In 2017, it was revealed that a co-founder of Google’s self-driving car program had started a new religion. Anthony Levandowski established a religious organization called the Way of the Future, with the stated goal of creating an AI superintelligence and then worshiping it as a god.

Anthony Levandowski's mission for WOTF was to "develop and promote the realization of a Godhead based on Artificial Intelligence"

It’s possible that this non-profit of computer worship was just a ploy by Levandowski to keep money out of his former employer’s hands. He would later plead guilty to stealing trade secrets from Google and was sentenced to 18 months in prison. Ouch.

Church of ChatGPT

Blake and Anthony are edge cases—unhinged street preachers on the fringe of the fast-forming AI religion. The slightly saner practitioners are preaching privately at Silicon Valley DMT retreats or writing really quite good books. (By the way: if you want to read the case for AI superintelligence emerging soon, Max Tegmark’s Life 3.0 is the gold standard. Shit's dope, yo.)


The end is nAI

Did you read your terms and conditions?

On November 30th, 2022, the latest AI boom began with the release of OpenAI’s ChatGPT, which showed us that the future of AI was generative—a kind of artificial intelligence that spits out new data that is similar but not identical to the vast swathes of data it’s trained on.

While ChatGPT creates text, other programs could create images, music, and even videos (shoddy ones) based on text prompts from their users.

"And one of the cool things about the path of the technology tree that we’re on, which is very different from DeepMind’s approach of having agents play each other and try to deceive and kill each other, is that we now have these language models that can understand language. So we can say, 'Hey, model, here’s what we’d like you to do. Here are the values we’d like you to align to.' We don't have it working perfectly yet, but it works a little. And it'll get better and better." - Sam Altman (2024)

Sam Altman, CEO of OpenAI, subtly leans into his supervillain image. He sits on the board of seven companies, including a pair of nuclear power labs.

In 2016, he revealed that he and Peter Thiel have an arrangement where they will both hide in Thiel’s New Zealand apocalypse shelter in the event that a rampaging AI does, in fact, trigger some kind of doomsday... Hmmm.


The billionaire business plan for the apocalypse

He proudly declares that OpenAI’s ultimate goal is to create AGI. While Sam doesn’t like to anthropomorphise these systems, he does believe that they will one day be capable of answering all of our questions about the universe (You know, like a god).

Sam’s megalomaniac confidence in ChatGPT isn’t unfounded, as it’s definitely the most general intelligence(ish) system yet made.

ChatGPT is trained on an enormous collection of text data. It can code. It can do your homework. It can write poetry. And it can remember stuff that you told it earlier.

The core innovation of large language models is that they are very good at sounding human. It’s data that you can have a conversation with—and that is actually fucking incredible.

But when articles tell you that ChatGPT can pass the bar exam and big-time medical exams, it’s like saying Google can pass them. It doesn’t know anything... it’s just taking information available on the internet and rephrasing it to fit the formula of an exam answer.

What’s more, it is an autoregressive model, which means it can’t plan ahead with what it’s writing. It thinks one word at a time, very quickly, but in a very linear way. This is part of why it can’t really write jokes: it can’t work backward from a punchline. This system ain't smart. A beefed-up autocomplete can’t beat you at chess or even do basic math, but it might be able to use a calculator.
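
Here's the one-word-at-a-time loop in miniature. The "model" below is just a made-up lookup table of word pairs; a real LLM swaps it for a neural network predicting over tens of thousands of tokens, but the generation loop has the same shape, and crucially it never looks ahead.

```python
import random

bigrams = {                      # invented continuation table, standing in for the model
    "the": ["cat", "dog"],
    "cat": ["sat", "slept"],
    "dog": ["barked"],
    "sat": ["down"],
}

def generate(prompt: str, max_tokens: int = 5) -> str:
    tokens = prompt.split()
    for _ in range(max_tokens):
        last = tokens[-1]
        if last not in bigrams:                        # nothing to say: stop
            break
        tokens.append(random.choice(bigrams[last]))    # commit to one word, no lookahead
    return " ".join(tokens)

print(generate("the"))   # e.g. "the cat sat down"; each word is chosen before the next exists
```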

'The Shoggoth', a character from science fiction, captures the weirdness of GPT

Sébastien Bubeck and a large team have written an extraordinary report arguing that GPT-4 shows sparks of AGI. They say that GPT-4 can even use tools.

It can even build a mental map of an environment once it's been sufficiently described.

There are other revelations, but much like DeepMind's models, these are so large that even the experts don’t really understand how GPT does what it’s capable of (I challenge Victoria M to explain it without resorting to saying "somehow...").

It reached 100 million active users in just two months. By this metric alone, it is the fastest-growing app in history. Its immediate success has inspired a new wave of BIG TECH OPTIMISM—much needed after Web 3.0 and the metaverse crashed harder than a self-driving car in a school zone.

But it has also inspired breathless and hysterical warnings that we are once again accelerating toward an inevitable techno-singularity—and for realsies this time! AI fear has gone mainstream. Eliezer Yudkowsky has left the safety of LessWrong and brought his placard and doomsday bell to the TED Talk main stage.

“We do not get to learn from our mistakes and try again because everyone is already dead. At a certain clock tick, everybody on Earth falls over dead in the same moment. There’s no moving, there’s no heroic battle.” - Eliezer Yudkowsky

He personally gives us an entirely rational 90% chance of apocalyptic AI annihilation. People who are, you know, actually experts in these systems aren’t saying the end is nigh, but they’re not saying it’s impossible either...

Elon Musk is getting more apocalyptic by the interview. He signed a letter demanding a pause on any further testing of large language models more advanced than the current version of ChatGPT. Elon is one of thousands of concerned AI insiders who worry that a future version might kill all humans—which would, you know, seriously scupper their ability to build a profitable competitor.

“I’m a little bit afraid, and I think it’d be crazy not to be a little bit afraid. There are going to be disinformation problems, or economic shocks, or something else at a level far beyond anything we’re prepared for.”- Sam Altman

OpenAI’s CEO, Sam Altman, is also scared. Like Dr. Frankenstein, Oppenheimer, and presumably the big man upstairs, Sam is terrified of the destructive power of his own creation.

An argument I frequently see online is: "Hey, the people who know the most about AI are the ones who are most worried about it, so we should probably listen to them, right?"

Sound logic. It’s a bit like climate change. The scientists who know how the climate works tell us it’s changing, so we should probably listen to them. Only thing is, climate scientists aren’t the ones mining coal and melting the ice caps. But these AI researchers are the ones making these systems.

What if (and I don’t think this is a massive conceptual leap) AI fear is just another kind of AI hype? What if it’s a sales pitch disguised as a doomsday sermon, eh?

“This system we’ve made—yeah, this crazy, world-bending megatech that I’ve made and can sell to you for 20 quid a month—it’s gonna fuck you up, kid. This god-tier game changer that will transform life as we know it? Honestly, I’m not sure I want to sell it to you. This thing, it just might be TOO powerful. This thing, it’s soooooooo good, it actually scares me...”

What actually keeps me up at night

Well, Peter Thiel’s usually secretive Palantir is one AI company that actually looks like it’s actively trying to kill us. They’ve just released this footage of a system where you can use natural language to command a drone army. No kidding. This chatbot can give you options for attacks and then hold your hand through an organized kill campaign.

But look—you’re not likely to find yourself on a battlefield anytime soon. So, what does this mean for you? Is AI coming for your job?

That’s the question I’ve been asking myself these past few months, and I can only offer you my best guess:

  • If more than 50% of your job is searching through documents or online data and then condensing that information in a succinct or formulaic way—ignore these systems at your peril!

Large private companies are already adopting business-wide AI assistants—large language models that set schedules, write summaries, and handle administrative tasks. I would know, since this is literally my day job! These are closed systems that businesses can feed with their own data and then interact with in natural language, using a technique known as Retrieval-Augmented Generation (RAG).

Why email Johnson in accounting with a question about the 90-page report he just wrote when you can just ask the report?
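
For the curious, here's the shape of that pipeline boiled right down. Real deployments use vector embeddings for retrieval and send the assembled prompt to an actual language model; in this sketch the documents are invented, retrieval is plain word overlap, and the "answer" is just the prompt we would send.

```python
# A bare-bones sketch of Retrieval-Augmented Generation.
documents = {
    "q3_report.txt": "Q3 revenue rose 12 percent, driven by the new logistics contract.",
    "hr_policy.txt": "Employees accrue 25 days of annual leave per calendar year.",
}

def retrieve(question: str, k: int = 1) -> list[str]:
    """Rank documents by how many of the question's words they contain."""
    q_words = set(question.lower().split())
    ranked = sorted(documents.values(),
                    key=lambda text: len(q_words & set(text.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How much did revenue rise in Q3?"))
# The assembled prompt (retrieved context plus question) is what actually gets sent to the model.
```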

  • If you work in government, you don’t need to worry. There’s no need to be profitable, so any changes that come your way will be slow and predictable.
  • The world that will be most affected by this new wave is the one where the machines already rule—right here on the interwebs. Which is a problem because that’s where I want to work.

Luckily, right now, text-to-video is still terrible. Systems like Runway ML spit out expensive, 10-second-long videos that look like that hand-painted Van Gogh movie if you watched it during a cerebral haemorrhage. They have this stomach-churning, dreamlike quality, like watching the unconscious emissions of a robot that only thinks in stock footage.

Then again, if you cast your mind back to the long-forgotten era of summer 2022, OpenAI’s DALL-E was producing similarly unimpressive nightmares from text prompts.


Here's the evolution of Midjourney... and it's only gotten better since.

The progress made by text-to-image in such little time has been utterly incredible. AI-generated images have already taken over social media, from hypebeast posters of the Pope to high-fashion Harry Potter fan art. The barrier to creating fake photos, fake articles, and fake people has dropped significantly. The machines be making memes, and it’s only a matter of time before social media’s misinformation problem spirals out of control.

What really scares me, though, is voice generation.

You’ve probably heard one or more of these AI-generated pop songs going viral, which are as convincing as they are shit. While it will be hilarious to watch Drake sue a bedroom producer for using his voice in a song about pleasuring cowboys, I’m more concerned about how these programs are already being used to mimic ordinary people. Take the case of the fake kidnapping scam.


Personally, I’m not worried about a superintelligent AI spontaneously blipping into being with an itch for human eradication. But I am concerned about bad actors who will use these already powerful systems to deceive, manipulate, and persuade—and how these systems might unintentionally do the same.

In its unrestricted form, ChatGPT can already design tailor-made disinformation campaigns. With access to recommendation algorithms, personal data, and greater agency, these systems could mislead us without even realizing they’re doing it. As these systems improve, it won’t just be maladjusted engineers projecting their own loneliness onto them. Companies and governments will compete, and some will claim their AI is all-knowing. At first, this might be ironic, but over time, people will start treating these systems like omniscient oracles.

If I may put on my fedora for a moment, the danger of the old gods was never the bearded dude chucking lightning bolts down at you from the sky. The real danger lay in the people acting in his name—creating hierarchies, organizations, and systems based on his supposed authority.

I think that’s what’s happening here. Alongside what might be the greatest technological revolution of our lifetime, we’re building a new kind of authority figure.

So, be good. Keep your eyes peeled. Take care. Say please & thank you to your chatbots, and don’t let the bastards grind you down.

