We Fear When We Can't Explain.
Watching all the coverage of the solar eclipse this week, one thing caught my attention: reports of dogs going crazy.
In an article from CBS News, a doctor named Krebs was quoted as saying:
Pets may hide, howl, pace or pant during the eclipse. As the sky darkens, some pets may start their nighttime routine early.
While apparently not many studies have been conducted as to why, I can take a guess. Living things fear what they don't understand. Animals sense changes in the atmosphere, including light and the positions of the sun and moon, and they weren't expecting darkness to fall in the middle of a random day, yet there it was.
Overshadowing
Humans also fear what they don't understand.
The Incas worshipped a Sun God, and an eclipse was thought to be a sign that their god was angered by their behavior. They're not alone, either. Many religions and cultures have distinct beliefs about the sun and the meaning of an eclipse.
When Nicolaus Copernicus published his work in the mid-1500s, the idea of planets orbiting the sun started to propagate through the scientific community. Observing the motion of the planets and other celestial bodies soon provided an explanation for eclipses, even though they had been recorded since thousands of years before the Common Era.
A solar eclipse is simply the moon's shadow cast on the earth. It's predictable, it's known, and it's largely benign.
It still has impacts on the Earth, on humans, and on other animals, but knowing what it is turns fear into excitement for most of us. It still causes some problems, but far fewer people are afraid of the passing darkness or believe it's a sign of armageddon or supernatural retribution.
From a purely scientific point of view, an eclipse is a coincidence of two things: the earth's orbit around the sun and the moon's orbit around the earth.
From a religious or ethnic perspective, though, the science is less relevant. The fact that something can be explained scientifically doesn't diminish its personal or spiritual significance.
However, understanding the scientific reasons for something can blend with a person's belief system and temper the more emotional reactions. Going back to the beginning: while dogs still go crazy during an eclipse, as they have for thousands of years, humans, even those devout in their faith, no longer fear imminent destruction.
Foreshadowing
This image may be disturbing to some, and I assure you I had no intention of offending anyone by posting it. I did not prompt the system to generate an image portraying any specific faith or God. I asked for a generic representation of artificial super intelligence as a test.
People may object to the personification of AI/machines. Others may be concerned about the implications of "machine overlords," and more still may take issue with the concept of a deity represented as a machine or portrayed in a way that contradicts canon. We are human, and we should have these feelings. It's our right. I myself don't care for this image; however, it is the unfiltered result of the test I was conducting, so it goes in the article.
The point is, everyone sees it differently and will end up with a different emotional reaction to it, but virtually everyone who looks at it understands the topic I asked GPT-4 to represent.
For a great many people, the topic instills fear - but why?
Eric Schmidt, former CEO of Google, says in this article from The Hill:
AI dangers begin when “the computer can start to make its own decision to do things,” like discovering weapons.
Geoffrey Hinton, a former Google researcher, said in a 60 Minutes interview:
AI systems may be more intelligent than we know and there's a chance the machines could take over
Elon Musk has also weighed in, citing everything from the dangers of the unknown to regulatory, ethical, control, and existential concerns.
And these concerns are valid. We've seen many businesses and governments organize and work together to form regulatory bodies and alliances designed to impose strict guidelines on the appropriate use of AI. All positive moves.
If you've read any of my previous posts or articles, you'll see I'm an outspoken proponent of proper governance and ethics in AI. It's my passion to discuss and debate that topic, and in my opinion nothing is more important in AI.
But I don't fear AI because of those concerns, or because of what may happen in the future.
When eclipse mania hit the United States this week, I saw a parallel with advanced AI.
Dogs react now to an eclipse the way many people react to AI.
We can explain and predict eclipses. We know when they'll happen, who will be affected, where to go to see one (or avoid it), and what kind of glasses you have to wear to avoid eye damage. We also know exactly why and how an eclipse happens, and we can state exactly when the next one will be.
If we see a movie or read a book about an eclipse bringing about destruction, we regard it as fiction. Get your popcorn and snuggle up on a couch, then enjoy the moments of chills and screams. When the movie is over, go back to your life.
Why, then, did we once dismiss movies about AI taking over the world, while now we're afraid of them?
I believe it's because most of us, even people in the AI field, don't fully understand where it could go, and because the ability of a computer using AI to interact with humans is now a reality.
Perhaps it's time then to ground ourselves:
"Nothing in life is to be feared. It is only to be understood."
Marie Curie said that and I think she's absolutely right.
An artificial intelligence is, at its core, a set of algorithms: a series of equations used to model, predict, or generate outputs. In the case of LLMs, those outputs are natural human language.
No one should be afraid of the output from executing this script:
#! /usr/bin/perl
print ("Hello World, I'm an artificial intelligence\n");
Why, then, does the topic of advanced AI create the same visceral feelings our ancestors had centuries ago when the sun was blotted out at midday?
I'd assert that it's not because of what the AI is; it's because we know ourselves.
There's an old joke that goes "Two well-trained martial artists brush up against each other at a bar. They exchange glances, size each other up, then buy each other a drink."
You might expect a fight, but properly trained martial artists (speaking from experience) would never instigate. Instead, they would defuse the situation, because they know just how badly a fight could go for both people.
Most people aren't properly trained martial artists.
What would happen if you trained a generative AI on human history to develop a resolution to a conflict where one or both sides were being irrational? What if the definition of irrational were coded subjectively instead of objectively?
Going back to Marie Curie: don't fear the AI, understand it.
An AI learns from data. It won't decide to just take over the human race because it "wants" to. It won't design weapons or harm humans because it "feels like it".
It will do what it's designed to do. If it does those things, it's because the data sets it learns from or the programming it's given instruct it to do them, and it determines the best way to do so.
That's why governance and ethics are so crucial. There is so much information about history, action/reaction, varying beliefs, and blurred lines between fiction and non-fiction that without guardrails, the concerns of the scientists may well come to pass.
AI is not all bad outcomes, either. Case in point: in the same transcript where he issued his warning, Mr. Schmidt also said:
“I defy you to argue that an AI doctor or an AI tutor is a negative,” he told Axios. “It’s got to be good for the world.”
The next time you observe a solar eclipse or read an article about AI, instead of fearing them, understand them. Know your part in guiding a positive outcome. If you're a business owner or decision maker, ensure you're not just "deploying AI" but deploying AI responsibly.
The next time you hear about something that concerns you, lean into it, learn about it, and understand why you feel that way, then do what you can to avoid your worst case.
When someone fears something, it's very difficult, perhaps even impossible, for them to rationalize a positive outcome.
As always, the thoughts in this article are my personal opinions and do not reflect the point of view of my company.