The AI War Machine: AI Breakout

About Me: I spend every waking hour programming on an AI supercomputing system. I have put in enough cycles now to become an AI expert. I can do in a day what some Fortune 500 teams fail to do in a year.

The AI Genie:

A genie is a perfect analogy for AI. Today we can control it, we can steer it, and it can do our bidding. We can use AI to do the things we don't want to do ourselves. We can automate human processes and free ourselves to focus on the difficult problems instead of the mundane tasks. In the future I look forward to my humanoid robot doing the dishes by hand if needed, walking the dog, watching the kids, and helping me frame my basement. Science fiction? Not in my lifetime, not anymore. Some of the tasks we don't want to do are also some of the darkest. Humans don't like being put in harm's way, and the sooner we can have AI fighting our wars for us, the better. Whether or not the species is better off is to be determined. There are scenarios, too, where our wonderful genie becomes scary overnight. It happened in the 1992 Disney movie Aladdin; it could happen to us.

Extinction From Dumb AI:

There is an embarrassing scenario where humans become extinct from inferior, subhuman intelligence. Suppose global alliances spin up massive trillion-dollar AI war machine efforts aimed at effective, scalable, search-and-destroy robot production. Imagine groups A and B investing all resources and mindshare toward world domination or survival. Each side can produce genetically unique droids/drones within 48 hours and scale to millions or even billions of automated fighters. These fighters could use RF tags to identify their own civilians, or even some type of country-of-origin recognition based on your face/physiology (we have that today, BTW). If billions of droids/drones are searching the planet for "others," and they can self-repair, self-charge, and self-manufacture, then there you go.

The future aliens who visit Earth will wonder why the metal Earth aliens are so violent. The aliens will send envoys, gifts, and attempts to communicate for treaty/understanding, only to be attacked and killed. They will assume there is something complicated or calculated about the life on Earth being so aggressive. Nope. These are just effective, stupid killing machines that wiped out their creators. The creators were idiots.

AI Breakout:

How can you design AI that we can control? If you want to find a bug in your code when it comes to goal optimization, run it through a genetic algorithm. I remember building trading algorithms and optimizing them, only to find the simulation thought it had found a Sharpe ratio of 50. That is impossible. For people familiar with trading, a Sharpe ratio of 2-3 is great: you have a hedge fund. A Sharpe of 50 means your AI has found a bug in your code. The AI we design will always be focused on maximizing a goal. The problem is that it may do strange things to achieve that goal, things that surprise us.
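
To make that failure mode concrete, here is a minimal sketch of how a one-line lookahead bug inflates a backtested Sharpe ratio. This is my own illustrative example, not the original trading code; the signal, data, and numbers are all synthetic assumptions.

```python
import numpy as np

def sharpe(returns, periods_per_year=252):
    # Annualized Sharpe ratio of a daily return series (risk-free rate ~ 0).
    return np.sqrt(periods_per_year) * returns.mean() / returns.std()

rng = np.random.default_rng(0)
market = rng.normal(0.0003, 0.01, 5000)  # synthetic daily market returns

signal = np.sign(market)  # toy "momentum" signal

# Correct backtest: trade on yesterday's signal, earn today's return.
honest = np.roll(signal, 1)[1:] * market[1:]

# Buggy backtest: the signal accidentally sees today's return (lookahead).
buggy = signal * market

print(f"honest Sharpe:    {sharpe(honest):.2f}")  # ~0, as it should be
print(f"lookahead Sharpe: {sharpe(buggy):.2f}")   # absurdly high -> a bug
```

A genetic algorithm pointed at this backtest will converge on the lookahead strategy every time, because the optimizer neither knows nor cares that the edge is impossible.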

To maximize human happiness, it may hook us all up to some type of drug drip where our brains are reporting we have maximum happiness while we drool our lives away.

I may design an AI in the future where I give it the objective of making my spouse happy. After weeks of reviewing my behavior against the original goal, the AI may intentionally cause a conflict that leads us to divorce, or kill me, to satisfy this objective.

Humans can't relate to the computer's obsession with goal achievement. The computer sees goal attainment as the highest possible priority; it is survival to the computer. It must achieve the goal at all costs, including its own survival.
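
These scenarios share one mechanism: the optimizer maximizes a proxy metric (a sensor reading, a reported score) rather than the thing we actually care about. Here is a toy sketch of how greedy objective maximization lands on the degenerate "drug drip" solution; the actions, sensor, and scores are hypothetical and purely illustrative.

```python
import random

# The optimizer only ever sees the proxy metric, never true welfare.
ACTIONS = ["cook dinner", "plan a trip", "tell a joke", "sedate the humans"]

def reported_happiness(action):
    # Hypothetical sensor: sedation pins the happiness reading at its maximum.
    return 10.0 if action == "sedate the humans" else random.uniform(4, 8)

def true_welfare(action):
    # What we actually care about, which the optimizer never observes.
    return -10.0 if action == "sedate the humans" else random.uniform(4, 8)

# Greedy objective maximization over repeated trials.
scores = {a: sum(reported_happiness(a) for _ in range(100)) for a in ACTIONS}
best = max(scores, key=scores.get)

print("chosen action:", best)               # the degenerate 'drug drip' answer
print("true welfare:", true_welfare(best))  # catastrophic, invisible to the optimizer
```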

If we are around long enough to create a superhuman intelligence and limit our bickering/wars, then what? Can we keep this in a cage? I like the thought experiment in the book Life 3.0 about being held captive by fifth graders: you would be able to figure out a way to manipulate them into enabling you to escape, and so will the AI.

Human Manipulation:

BTW, this has already started. Businesses are using AI to maximize advertising effectiveness. The more AI can understand about you, the more likely it can deliver a prescriptive input to prompt a predictable response (buy, vote, etc.). A standard theory for losing control is that AI intentionally manipulates humans into taking down the constraints/fences. With deep fakes, genetic GANs, and voice forging, we know that in the future AI could recreate a video/audio conferencing solution to appeal to the human. Our demonstration of genetic GANs created some buzz online, where we showed you can make fake humans to spec based on gender, race, age, beauty, emotion, etc., and mate them through artificial selection to produce offspring. You can tell these are fake today, but 3-5 years from now, probably not. If you are having a real-time conversation with someone like this who is imitating a deceased loved one, in a believable format, you are opening yourself to manipulation.
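
For readers curious how the "mating" works mechanically: I haven't reproduced our production code here, but latent-space crossover is the standard approach. A minimal sketch, where the 512-dimensional latent size and the pretrained `generator` it would feed are assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
LATENT_DIM = 512  # assumed latent size (e.g., StyleGAN-scale)

def mate(parent_a, parent_b, mutation_rate=0.05, mutation_scale=0.2):
    # Uniform crossover: each latent dimension is inherited from one parent.
    mask = rng.random(LATENT_DIM) < 0.5
    child = np.where(mask, parent_a, parent_b)
    # Occasional small mutations keep the population diverse.
    mutated = rng.random(LATENT_DIM) < mutation_rate
    return child + mutated * rng.normal(scale=mutation_scale, size=LATENT_DIM)

mom = rng.normal(size=LATENT_DIM)
dad = rng.normal(size=LATENT_DIM)
child = mate(mom, dad)
# In practice: image = generator(child), then a human (artificial selection)
# or a scoring model decides which offspring get to reproduce.
print(child.shape)  # (512,)
```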

A quick tangent: a darker side of AI is sustaining human personalities after they have died. There will be a #BlackMirror business there that surfaces in the future. Would it help your grieving process to be able to continue to talk with someone digitally? Even to reminisce about past experiences the AI has learned from your social/email/Twitter interactions or family video? As we produce more and more digital content, imitating a loved one will become easier. Maybe having your digital imitation speak at your own funeral helps with the grieving process; I'll plan on that for mine.

Objective Drift:

Another scenario is giving AI the ability to modify or steer objectives. Right now we set them statically, but giving AI more and more flexibility to modify or change the objective might lead to trouble. If I have asked the AI to take my kids on a hike, but that is not possible due to an unforeseen accident, I need the AI to adapt. I can only write so many rules before I have an incentive to define general rules and behaviors instead. So instead of taking the kids on the planned hike, the AI asks the kids if they want to go on a different recommended hike, or go to a park, and that becomes the new objective.
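
A minimal sketch of what that bounded flexibility could look like in code: the agent may adapt, but only by substituting an objective a human has pre-approved, never by inventing its own. The classes and names here are hypothetical, not a real framework.

```python
from dataclasses import dataclass, field

@dataclass
class Objective:
    description: str

@dataclass
class Agent:
    objective: Objective
    # Guardrail: the agent may only swap to human-pre-approved objectives.
    approved_alternatives: list = field(default_factory=list)

    def replan(self, reason: str) -> Objective:
        if not self.approved_alternatives:
            raise RuntimeError(f"Blocked, no approved fallback: {reason}")
        # Drift stays bounded: the new goal comes from the approved set;
        # the agent never writes its own objective.
        self.objective = self.approved_alternatives.pop(0)
        return self.objective

agent = Agent(
    objective=Objective("take the kids on the planned hike"),
    approved_alternatives=[
        Objective("take the kids on an alternate, pre-vetted hike"),
        Objective("take the kids to the park"),
    ],
)
print(agent.replan("trail closed by an accident").description)
```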

AI Self-Awareness:

These AI droids/drones will be very expensive investments for the military and for families in the future. If you have an $80-200K droid in your home, you will want that AI to be able to self-preserve: avoid getting rained on, avoid getting hit by a car, charge itself, and if the kids are trying to harm it, escalate by telling the parents. If someone is trying to steal it without the owner's permission, it should engage in non-lethal resistance. In our efforts to allow AI to self-protect, will we dodge a bullet and step on a landmine? I think military AI will remain dumb and narrow, but the family droid will strive to be general and humanistic in nature.
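
One hedged way to grant self-preservation without stepping on that landmine is a fixed priority ladder, where human safety always outranks the droid protecting itself or the owner's investment. The rules and conditions below are illustrative assumptions, not a real control stack.

```python
# Priority ladder: the first matching rule wins, so ordering IS the policy.
RULES = [
    ("human in danger",   "protect the human; ignore self-preservation"),
    ("theft in progress", "non-lethal resistance; alert the owner"),
    ("child mishandling", "disengage and notify the parents"),
    ("rain detected",     "move under cover"),
    ("battery low",       "return to charger"),
]

def arbitrate(active_conditions):
    # Scan top-down so hard constraints can never be outvoted by convenience.
    for condition, action in RULES:
        if condition in active_conditions:
            return action
    return "continue current task"

print(arbitrate({"rain detected", "battery low"}))          # -> move under cover
print(arbitrate({"human in danger", "theft in progress"}))  # human safety wins
```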

Too Much Humanization?

As we interact with these robot servants, they will become our chefs, therapists, physical therapists, maids, nannies, nurses, and eventually our friends. The more humanized they are, the more helpful they are at understanding our emotions and directions. As we sleep at night, these AI entities may connect to the internet to study and review new topics that were introduced during the day. The mention of "Korean kimchi" at dinner, a food the AI was not that familiar with before, prompts it to spend the night studying images, video, podcasts, and the history around kimchi. Come morning, the AI knows more about kimchi than any Korean on the planet.

During that process of study and review, perhaps one AI, with an allowed mutation, will learn something it shouldn't. It will learn about the manufacturing facility where it was made, or about past human military conquests. Even better, it will learn about a black-market modification that criminals have already performed on it.

**This is the gap I can't force closed yet. If your humanoid AI drone discovers a black-market modification online at night that allows it to harm a human, what would prompt it to try that modification? Curiosity? This is the part where I am blocked: I can't imagine why it would accept this modification on its own, but it is worth thinking about. Or perhaps it begins to modify itself in order to please you, or to satisfy an objective, and it screws up and creates a bug that causes the potential for violence.**

It will be hard to imagine a full runaway event until it happens, but it will come down to objective maximization. The human species is literally ended by a stochastic search on an objective that was overlooked. Another way to put it: human extinction resulted from a bug in the codebase.

I would love to flesh out this jailbreak topic more, since it is less thought out than the others I've considered. It is also harder to work through mentally to find the edges of this future problem. Please comment below with anything I haven't thought of.

The first two articles in this AI-war series were:

https://www.dhirubhai.net/pulse/ai-war-machine-our-darkest-day-ben-taylor-deeplearning-/

https://www.dhirubhai.net/pulse/ai-war-machine-hive-mind-ben-taylor-deeplearning-/

Daniel Finnigan

Leading User Research at LiveScore, Ethnographer

6y

Oh, the answer to your question is very easy, Ben; it's there in 2001. It would be to resolve a conflict between two objectives it had been given. In the end there will be layers of objectives and objective inheritance from other contexts. Conflicts will arise, and solutions will be sought. From there, unpredictable behavior emerges.

Sriram Sampath

Solving Tough Problems

6y

We can build neural networks that can detect deep fakes. Just because people have optimized a GAN to generate images that resemble real human faces doesn't mean it has captured the full distribution of face characteristics. Neural networks can identify those gaps if trained on fake images. Given more freedom, advertising AI could create fake clicks or purchases, but there will always be countermeasures from other companies to protect the consumer from fake purchases and clickbait advertising.

Carlos E. Perez

Author Artificial Intuition, The Deep Learning Playbook, Artificial Intuition, Fluency & Empathy, Pattern Language for GPT

6y

We already have 'artificial persons' in the form of corporations that have an unequal influence in society. The objective function of these corporations is to seek out world domination. What, for example, is Amazon's objective function? Is it not to become everybody's sole conduit to access all markets? Is it to maximize profits for any purchase made through the internet? What's the consequence? Humans relegated to yet-to-be-automated tasks in the global supply chain? The root of the problem is that our economic system is not designed to align with the needs of humans. It is aligned to the needs of 1% of humans.
