What's Next in 2017: Artificial Intelligence

I expected to feel a little out of place at the swanky university event for bioethics. My wife, a professor with expertise in the field, had invited me. But when I introduced myself to the attendees as a software engineer, many wanted to talk about one thing: artificial intelligence (AI). How would it affect society? What are the benefits, and what are the risks?

Artificial intelligence has been popping up everywhere. At a recent family holiday gathering, a relative held a long conversation with my brother and me about how our companies, Google and Agolo, were destroying jobs by automating human tasks.

Artificial intelligence has entered the national conversation on a broad scale, and 2017 could be the year it starts becoming part of our daily lives and causing genuinely disruptive change.

Before we speculate on the potential downsides of increased automation due to AI, let’s consider some of the positives:

The U.S. Department of Transportation attributes 94% of the roughly 35,000 yearly U.S. vehicle deaths to human error — deaths that self-driving technology could help prevent. That toll exceeds the number of U.S. military casualties in Iraq and Afghanistan and victims of domestic terrorism combined.
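As a rough back-of-the-envelope check, here is what those two cited figures imply together. (The 94% and 35,000 numbers come from the paragraph above; the calculation itself is only illustrative, not an official DOT projection.)

```python
# Illustrative estimate of the annual lives at stake, using the
# figures cited above (not an official projection).
ANNUAL_US_VEHICLE_DEATHS = 35_000   # approximate yearly U.S. toll
HUMAN_ERROR_SHARE = 0.94            # share of deaths attributed to human error

potential_lives_saved = ANNUAL_US_VEHICLE_DEATHS * HUMAN_ERROR_SHARE
print(f"Up to {potential_lives_saved:,.0f} lives per year")
# → Up to 32,900 lives per year
```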

New consumer products that utilize AI can make us more efficient and potentially amplify our impact. Services such as a personal assistant or research assistant, which were previously available only to people in senior positions, are now available for a small price thanks to AI-powered services (see: X.ai and agolo.com).

AI may be able to shed light on problems that have so far been intractable: cancer treatment, virus control, how to optimize education, and more. An anecdote from the Go match between Fan Hui and Google's AlphaGo engine illustrates this.

After a close game, AlphaGo played a move that surprised many of those in attendance, including Fan Hui himself: "It's not a human move. So beautiful."

***

In the game of Go, the machine is given a simple goal: win the match. But what happens when AI faces complex decisions and trade-offs? That brings us to one of the chief concerns about AI: despite the sophistication of the algorithms, the biases and short-sightedness of the engineers who build them can cause unforeseen ramifications.

Looking back over the past year, we can see a few examples of this. Microsoft released an AI-powered chatbot, "Tay." Within a few hours, however, Twitter users had figured out how to get the bot to repeat Nazi propaganda and imagery. It seems the engineers hadn't considered this edge case, and the bot wasn't prepared for the malicious attack.

Tesla released its autonomous driving technology while insisting that it was not meant to be fully autonomous. However, the system's reliance on camera images may have contributed to a fatal accident that killed a driver. Newer models rely on radar in addition to cameras.

Besides the engineering errors that can seep into AI technology, there are also human concerns. What about the jobs that AI will replace? The self-driving truck startup Otto threatens to disrupt the livelihoods of tens of thousands of drivers. What will those workers do?

Every technical disruption has a darker side. The industrial revolution improved manufacturing efficiency as much as fifty-fold, but it also created an oversupply of workers, costing many their livelihoods. That, in fact, is where the term "Luddite" comes from: the Luddites were textile workers who revolted against the new machinery.

Many have begun proposing solutions to the coming wave of AI-driven automation. Anti-technologists would like to block autonomous technologies from being adopted in order to protect jobs. Some technologists, like Elon Musk, have suggested that governments may need to establish a universal basic income to support displaced workers.

Just like the complex problems AI itself faces, our national attitude toward AI involves trade-offs. If we embrace technological progress, we must also confront the job losses and the need for retraining. If we shut AI out, we risk falling behind as a leader in innovation and industry. The trick will be getting AI to play for us, not against us.

This article is part of the LinkedIn Top Voices list, a collection of the must-read writers of the year. Check out more #BigIdeas2017 here.

Georgiana S.

Digital Marketing Leader | Paid Search Expert | Marketing Data & Analytics | Army Veteran & Advocate

8y

Audit Intelligence... the next AI! That's my speculation at least. Each year calls for new regulatory measures. Which means we should audit smarter, not harder: https://www.gensuite.com/products-and-services/audit-management-software/

Omer Alvie

UAE, Pakistan, Startups | Coaching & Mentoring

8y

The same argument gets repeated with every major technological advance ... with net increase in employment with every major tech advance in the past ... the adjacent possibilities created by the new tech always drive exponential growth in employment

Alvin Ernest

Corporate Strategy | Commercial Strategy | Technology Strategy

8y

While I can relate to much said in this piece, I cannot help but think that there is a fundamental flaw in its message. You see, I don't think AI is a technology, put simply it is the output (intelligence) from data analytics. And yes, increasingly there are technologies that use AI to inform and improve performance. But these are two different things... Furthermore, let's be clear, robots have existed from the dawn of the industrial revolution... And yes, today robots increasingly have a mobility function and a UI that incorporates data transceivers to send and receive instructions (e.g. one does not need to turn the steering wheel, one simply sends GPS data coordinates to the vehicle). But none of these things constitutes AI! In my opinion, AI is a predefined set of narratives, that uses computer processing to speed up intelligence outcomes... Therefore its greatest risk is not the AI itself, (as these are ultimately created by us, humans) the greatest risks is a function of the speed at which computing creates these AI narratives... such that humans are unable to intervene or terminate its consequences in time, especially if a universal "kill" routine is not available at any point to halt AI algorithms/routines and the robots they inform... Moreover and very importantly, the "Big Data" that informs AI outcomes is proportional to the population, as AI affords us, each of us, our own unique answers, which will add value to our individual lives... and therein lies the pathways to our future jobs...

Emma Pearse

Psychosocial support worker & therapist in training

8y

Excellent piece--as someone with little clue about AI, a luddite even, I somehow feel both inspired to learn more and as though I now know something, at least enough to face a party full of swanky bioethicists. Thank you! Also, love the mantra: "It's not a human move. So beautiful."
