Artificial Intelligence: Sci Fi vs. Reality

It seems that most content on AI is either highly technical, rather sales-y, or a little bit scary. In that last vein, Westworld’s hosts, Terminator’s Skynet, and The Matrix are three Hollywood creations that technology vendors often reference when discussing Artificial Intelligence (AI). There’s a nefarious sentiment associated with these entities, and content creators typically follow such references with retorts like, “AI isn’t going to be like that at all!”

To avoid the menacing mumbo-jumbo, let’s use a different example: Star Trek’s Commander Data. This rigid yet lovable android not only demonstrates capabilities that are becoming a reality in today’s world, but also behaves in ways that help us understand, at a high level, what AI is: technology that can make predictions, learn, and make decisions.

Three years ago, the trends that saturated LinkedIn newsfeeds included Big Data, Analytics, Social, and Cloud. Since then, the advancements behind those trends have continued to evolve, and Machine Learning, Natural Language Processing, Artificial Neural Networks, and Deep Learning have come to the forefront of discussion. These technologies have culminated in one overarching development that’s already changing the way we interact with the world around us: Artificial Intelligence.

Let’s take a deeper look at Commander Data, and we’ll see that he uses several of the capabilities mentioned above:

1.    His brain is an Artificial Neural Network that uses Deep Learning to learn from the world around him. For example, he can pick up a task he previously didn’t understand without the ship’s crew programming him to perform it.

2.    He’s capable of quickly analyzing massive amounts of data, structured and unstructured, like an advanced version of Big Data Analytics.

3.    He communicates with the crew using what is most likely a natural language processing engine.

The guy’s essentially a walking, talking LinkedIn blog post. But the most important, and most pertinent, trait Commander Data possesses is that he’s capable of making decisions. He doesn’t simply offer information and ask the crew to decide what he does next; he takes that information himself and uses it to determine what he will say or do next. Note that he is not perfect and, like the humans on the crew, sometimes must defer to the Captain. That’s why he is a better example of AI than Hollywood’s doomsday machines: rarely does AI make all the decisions required to perform a task, but scenarios where some or most of those decisions can be made with AI are abundant.

Today, those kinds of decisions are changing the ways in which we interact with our environment. How? Facebook, Google, and Tesla offer a few examples of real-world applications of Artificial Intelligence.

Let’s take Facebook’s image recognition as an example. When you upload a photo to the social network, it recognizes faces almost instantly. Developers didn’t tell the application what every person’s face looks like from every angle; rather, they programmed it to “learn” what a person’s face looks like and tag faces that look similar to that person.

This learning is accomplished through probabilities and predictions. As Facebook’s facial recognition AI is fed more images, it begins to understand that there’s a “better chance” that an uploaded selfie contains a face. After all, it contains shapes that are similar to the millions of other faces its neural network has ingested. Based on probability thresholds (e.g., if there’s a greater than 99% chance that this is a face, I’ll decide that it’s a face), a photo recognition AI is capable of making decisions.
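
To make that idea concrete, here’s a minimal sketch of threshold-based decision making in Python. It’s purely illustrative and assumes a hypothetical model has already produced a face probability; it is not Facebook’s actual system.

```python
# Toy illustration of a probability-threshold decision, not Facebook's real pipeline.
# Assume some hypothetical upstream model has already scored how likely an image contains a face.

def contains_face(face_probability: float, threshold: float = 0.99) -> bool:
    """Decide "face" vs. "not a face" by comparing a predicted probability to a threshold."""
    return face_probability >= threshold

print(contains_face(0.997))  # True: the model is 99.7% confident, so the AI decides it's a face
print(contains_face(0.42))   # False: too uncertain, so no face is declared
```

Everything interesting happens upstream in the model that produces the probability; the “decision” itself is just a comparison against a chosen cutoff.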

Once the “face or not a face” decision is made, the image recognition engine determines which of your friends that face most resembles and decides whom to offer up as a tagging recommendation. Just like Commander Data, even though Facebook is making numerous decisions, it still requires the user’s approval (the Captain, in our metaphor) to actually tag a friend.
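
Continuing the sketch above, and again with made-up numbers, that follow-on decision might look something like this: rank the candidate friends by a similarity score and only suggest a tag when the best match clears a confidence bar.

```python
# Hypothetical follow-on decision: which friend does the detected face most resemble?
# The similarity scores are invented; a real system would derive them from learned face features.

def suggest_tag(similarities: dict[str, float], threshold: float = 0.90) -> str | None:
    """Return the best-matching friend if the match is confident enough, otherwise suggest nobody."""
    best_friend, best_score = max(similarities.items(), key=lambda item: item[1])
    return best_friend if best_score >= threshold else None

scores = {"Alice": 0.95, "Bob": 0.61, "Carol": 0.33}
suggestion = suggest_tag(scores)
if suggestion:
    # The AI only suggests; the user (our "Captain") still approves the tag.
    print(f"Do you want to tag {suggestion}?")
```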

Recognizing a face is impressive, but what if a machine has to make numerous decisions among near-infinite options? Google DeepMind confronted this problem while trying to teach a machine to play Go, an ancient Chinese strategy board game. In Go, one player has black stones, the other white, and the goal is to encircle more territory on the board than one’s opponent. There are more permutations of moves in a game of Go than there are atoms in the observable universe.

Enter AlphaGo, a project developed by Google’s DeepMind business. Last year, AlphaGo demonstrated its ability to beat one of the best Go players in the world, Lee Sedol.

Previously, computers that beat the likes of chess grandmasters largely calculated the move options within the game and made a “best move” at each step of the way. (It wasn’t entirely brute force; see this article in The Economist for more information.*) That method isn’t feasible for Go, due to the sheer number of possible moves. Therefore, AlphaGo had to actually play the game in order to learn which moves gave it the best probability of winning, and then make intuitive move “decisions” accordingly.
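
One way to build intuition for “learning which moves win” is a Monte Carlo estimate: play lots of random games after each candidate move and count how often you end up winning. The sketch below is a hypothetical, stripped-down illustration of that idea, not AlphaGo itself (which combines deep neural networks with tree search); the game, the moves, and the simulator are all stand-ins.

```python
import random

def estimate_win_probability(position, move, simulate_random_game, n_playouts=1000):
    """Play many random games after a candidate move and count how often we win."""
    wins = sum(simulate_random_game(position, move) for _ in range(n_playouts))
    return wins / n_playouts

def choose_move(position, legal_moves, simulate_random_game):
    """Pick the move whose estimated win probability is highest."""
    return max(
        legal_moves,
        key=lambda move: estimate_win_probability(position, move, simulate_random_game),
    )

# Stand-in simulator: pretend the outcome is a coin flip biased by the move's "quality".
def fake_simulator(position, move):
    return random.random() < move / 10  # higher-numbered moves "win" more often

print(choose_move(position=None, legal_moves=[1, 5, 9], simulate_random_game=fake_simulator))  # usually 9
```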

Moves on a game board and 2D faces are challenging inputs that AI handles remarkably well. What about 3D objects in constant motion? Companies like Tesla are working on autonomous vehicles capable of addressing these challenges. The stakes are higher with self-driving cars: the only way they’re viable is if they’re safer than vehicles driven by humans, and a car’s AI componentry has to take a vast number of variables into consideration when carrying people from Point A to Point B. From road conditions and traffic to unexpected hazards and pedestrians, there are quite a few hurdles to overcome in the quest for fully autonomous transportation. Despite these obstacles, Tesla and several competitors hope to see widespread availability of self-driving vehicles in the next few years.
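
To see how the same threshold idea gets harder once many moving variables are involved, here is a deliberately toy example. Nothing below reflects how Tesla or any real autonomous-driving stack works; the variables, numbers, and rule are invented purely to show several inputs feeding a single decision.

```python
# Toy example only: real self-driving systems fuse many sensors and plan far more carefully.

def should_brake(pedestrian_prob: float, obstacle_prob: float, road_friction: float) -> bool:
    """Brake if the estimated chance of hitting something, adjusted for road grip, is too high."""
    collision_risk = max(pedestrian_prob, obstacle_prob)
    # On slick roads (low friction) we need a bigger safety margin, so we brake sooner.
    risk_threshold = 0.2 * road_friction
    return collision_risk > risk_threshold

print(should_brake(pedestrian_prob=0.15, obstacle_prob=0.05, road_friction=0.4))  # True: wet road, brake
print(should_brake(pedestrian_prob=0.05, obstacle_prob=0.02, road_friction=1.0))  # False: dry road, keep going
```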

AI is largely about likelihoods, learning, and decisions. In many ways, it will reshape how we live our personal and professional lives. Someday, we might even see personal assistants like Siri and Alexa taking a more human form like Commander Data.

Between now and then, what are the decisions you see machines making for us?

*UPDATE: Added link to Economist article describing how chess-playing computers don't completely brute force their way into beating Grandmasters.


Holly Hunnicutt

Sales Enablement and Customer Storytelling

7y

I've been curious what it would take to make those Westworld AIs happen, both in the evolution of learning and the biomechanics, but I like the breakdown of Data as the benchmark of where we are now!

Eric Roverud

Customer Outcomes Focused, Cloud Sales Leader @Oracle, Habitat for Humanity Advocate

7y

Great article, John Culver. Not to fall completely into the "nefarious sentiment" side, but I will (briefly)... trading (stocks and others) was one of the first areas that leveraged "AI" and decision making, based on high-frequency trading, algorithmic trading, machine learning, and even penny-sized differences at scale in currency translations and trades. And many made millions on these methods - nearly all automated, with little decision/input from an actual person in the throes of the action. I believe much of that has been limited or locked down tighter. Maybe less nefarious options are solutions that monitor and predict your personal movement based on your calendar to automatically cool down/warm up your vehicle (not "turn on my car at x time M-F" but do so based on my always-changing calendar/appts/plans with anticipated/learned lead times and habits) ... or a predictive dinner schedule that is learned from our habits, schedules, tastes, moods, etc., so my wife and I don't have to plan, make, or be creative on the nightly dinner options (which is a much bigger battle than actually MAKING the dinner!). Will ponder more ... nice work John!

Yash Patel

Mac Solutions Architect at Apple Inc.

7y

Probably the most informative article I've seen on this topic, John. AI-based decisions between now and then? The smart devices in my home are a reality already; perhaps how they are turned on or off based on mood? It would be tough, since something would have to deliberately collect information and decide how I'm feeling (FB says they can tell how someone is feeling, but what if individuals don't use the platform to share those emotions??)
