A human/machine alliance is the solution of the future

The Fourth Industrial Revolution, an age of automation, advanced AI, and human-machine collaboration, is by most measures well underway. In recent years we have seen rapid advances in machine intelligence, so much so that AI now matches or outperforms humans in many different areas.

This is not to say that humans are obsolete. We still have our strengths, and by recognizing what they are, we can take full advantage of machines and get the best of both worlds.

In recent years, deep learning has produced some truly staggering results

In image assessment, Google’s Neural Image Assessment (NIMA) model has been trained to accurately predict how we’re likely to respond to a particular image based on how aesthetically pleasing it is.

Google also claims almost human-level accuracy for its text-to-speech system, Tacotron 2, which can pronounce difficult words (like names) and change its enunciation depending on the punctuation of a sentence or stress of a particular word.  

Even more impressively, Google's speech recognition software has recently reached human parity in understanding speech, with an accuracy of up to 95%.

On the gaming world stage, narrow AI first proved its primacy in 1997, when IBM’s Deep Blue convincingly beat the reigning world chess champion, Garry Kasparov.

Off the back of this success, AI steamrolled through the next two decades, as design teams pitted their creations against the world champions of games from poker to Pac-Man, with the machines coming out ahead more and more often.

Twenty-one years after Kasparov’s defeat, the most recent chess-playing algorithm, Google’s AlphaZero, proved the awesome power of AI when it taught itself the game from scratch in just four hours of self-play and then beat the reigning computer champion, Stockfish.

Despite these successes, AI results can also be lacking

Cultural sensitivity certainly isn’t one of AI’s strong points, as New Zealander Richard Lee, who is of Asian descent, recently discovered when a facial recognition bot rejected his passport application because the “subject eyes are closed”.

“Common sense”, a distinctly human quality despite the fact that so many of us seem to lack it, is another frontier still out of AI’s reach.

Take the recent case of a little boy who asked Amazon’s Alexa to play a song that sounded like “digger digger” and was instead given an automated rendition of a long, graphic list of rather hardcore porn search terms, much to his father’s chagrin.

Despite our recent successes in the field of narrow, or “weak”, AI, we still haven’t broken through to artificial general intelligence and created a self-teaching system that surpasses human-level intelligence. Some believe it’s 30 years away; others think it could be centuries.

The reason for this is simple: the brain’s complex neural networks have so far proven impossible to decipher and replicate.

Whenever an outcome is deterministic and arrived at through a process involving repetition, machines will always be more effective than us 

When it comes to drawing on vast reserves of data to establish, say, a product’s optimal price, a machine’s memory capacity and the speed with which it can access that memory mean that AI simply outperforms us.

And because our own memories are slower and less reliable, a machine dramatically reduces the risk of error while doing the work in a significantly shorter time.
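
To make that concrete, here is a minimal, hypothetical sketch in Python of the kind of deterministic, repetitive job a machine races through: scanning every candidate price against historical sales data and returning the one with the highest estimated revenue. The demand model, the helper names, and all of the numbers are invented purely for illustration; they are not from any real pricing system.

    # A toy, purely illustrative example of a deterministic, repetitive task:
    # exhaustively scan candidate prices against past sales data and pick the
    # price with the highest estimated revenue. All figures are made up.

    def estimated_demand(price, history):
        """Average units sold at the three historical prices closest to `price`."""
        nearest = sorted(history, key=lambda record: abs(record[0] - price))[:3]
        return sum(units for _, units in nearest) / len(nearest)

    def optimal_price(candidates, history):
        """Return the candidate price with the highest estimated revenue."""
        return max(candidates, key=lambda p: p * estimated_demand(p, history))

    # Invented sales history: (price, units sold)
    history = [(9.99, 120), (12.49, 95), (14.99, 70), (19.99, 40)]
    candidates = [round(9.0 + 0.5 * i, 2) for i in range(25)]  # 9.00 through 21.00

    print(optimal_price(candidates, history))

Scale the price grid and the sales history up to millions of rows and the same exhaustive scan still finishes in moments, with none of the slips a tired human would make working through the table by hand.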

The superior speed and accuracy of AI were already perfectly apparent back in 2011, when IBM’s supercomputer Watson beat two former champions of the US game show "Jeopardy!", buzzing in for one answer in a mere 10 milliseconds. Seven years have since passed, and computers have only gotten quicker.

However, whenever an outcome is open-ended, only humans can provide the insights needed to succeed. While a machine limits itself only to what’s possible, we limit ourselves to what we consider reasonable. 

Critical thinking and innovation apply as much to the boardroom as they do to board games, which ensures that, for the foreseeable future, the human role will remain crucial in business. That role could involve making strategic decisions, like raising the price of an aspirational item purely because you know that aspiration is its main selling point.

Or it could be something far simpler, like using our reason to determine what we want a machine to do with the data we feed it, what questions we ask of it, and which outcomes we want from it.

We’re past learning how to answer questions. Our future success depends on how well we ask them. 

As of 2017, the AI Index showed that machines are rapidly catching up with human-level performance at searching through a document and finding the answer to a question. That is very encouraging for AI’s future potential, but the machine learning that produces it still depends on humans defining what question to ask of the data.
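
As one illustration of that division of labor, the short sketch below uses the open-source Hugging Face transformers library, one possible tool rather than anything named here: the human supplies both the document and the question, and the model merely locates the answer within the text.

    # A minimal sketch of extractive question answering: the human frames the
    # question and chooses the document; the machine only finds the answer span.
    # The Hugging Face `transformers` library is used as one example tool.
    from transformers import pipeline

    qa = pipeline("question-answering")  # loads a default pretrained QA model

    context = (
        "AlphaZero taught itself chess through self-play in roughly four hours "
        "before defeating the reigning computer champion, Stockfish."
    )
    question = "How long did AlphaZero train before beating Stockfish?"

    result = qa(question=question, context=context)
    print(result["answer"], result["score"])  # e.g. "roughly four hours" plus a confidence score

The machine is very good at the second step; only a human can decide that the first step, the question itself, is worth asking.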

Making sure the objectives that machines work towards are aligned with our values will be critical, whether this relates to optimal pricing or inventory allocation in the business sector, or diagnostic and curative capabilities in healthcare.

In the wake of his defeat to Deep Blue, Garry Kasparov immersed himself in the study of AI. The fruit of his labor was a book called Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins.

In the book, he outlines his concept of “advanced chess”, in which a human pairs up with a machine to play against another human-machine team. Kasparov describes the arrangement as mutually beneficial, with humans using their intuition to guide the machine’s capacity for calculation.

The ideal human-machine alliance combines the best of both worlds 

Machines bring enormous data-storage capacity, almost instantaneous responses, and unbiased predictions from the data we feed them.

In business, machines now clearly reign supreme at tasks humans have historically handled, such as determining pricing, allocating inventory, and analyzing market trends.
