Got A Problem? Call the Machine!

Do you have a problem that needs resolution? Then dial 1-800-Who-Cares and talk to the machine! Algorithms increasingly determine many of the bigger decisions in our lives. They respond to us. They direct us. They are starting to manipulate us.

Algorithms are changing our lives by making decisions for us… directing our activities. Some are very explicit, others quite subtle. Driving into the city tonight… algorithms will tell you where to go and where not to go. Searching for a candidate's position on an issue… an algorithm will determine what results you see and don't see. While their value can be argued, algorithms are all designed to change us. Even worse, most are unaccountable to us.

Algorithms, a means to artificial intelligence through mathematics, don't just predict the future; they cause the future. They are not only separating winners from losers, they are creating winners… and losers. That is by design. Algorithms are not inherently fair, because their builders define what success is. They commoditize us, and they are getting better at it. But should they?

Humans have many characteristics, but we are often defined through our humanity. Whether we are compassionate or not, generous or not, even sympathetic or not, humanity makes us human. Humanity separates us from other species… other lifeforms. As more and more of our everyday decisions are taken over by algorithms, we degrade our humanity because we cede control. At what point could algorithms put all of us in danger of losing our humanity?

So, how will our society change in 2020… how will you be changed, as these algorithms (sets of rules and instructions) move silently to manipulate our behaviors? As an AI futurist, I find it can be hard to say clearly. It is difficult to make a personal case one way or another with any moral certainty. There are too many factors that come into play. Or are there?

The documentary "Algorithms Rule Us All" explores the consequences that result when we are blindly guided by the decisions of algorithms. While it raises more issues than it addresses, that is the goal of a good documentary… it shifts your point of view. This documentary might open your mind's eye to what happens when we become controlled by algorithms.

In the end, artificial intelligence is here to stay; that algorithmic genie cannot be put back in the business bottle. But that does not mean we humans cannot maintain control over it… maintain our humanity. We need to define how, where, and when algorithmic intelligence can be ethically used. We need to know when these systems are making decisions for us… when and even why they are manipulating us. Transparency in algorithmic intelligence is more than just knowing how something is made; it is having the ability to realize why it was made that way.

#algorithms #ai #artificialintelligence #algorithmicintelligence #machinelearning

Dr. Jerry A. Smith

Builder of Heuristic Systems leveraging LLM & Computational Neurosciences | AI & Data Sciences | Ethicist & Futurist | Author, Podcast Host | US Navy Pilot & Nuclear Engineer

4y

James... You are also a leader in the field of AI ethics, especially when it comes to transparency. I love the link you provided and hope others can learn from your insights as well.

Kapila Monga

Head of Data Science and IPA - Bon Secours Mercy Health

4y

An aspect parallel to "transparency" in algorithms is "transparency" in human decisions. The world is full of examples where decisions (both business and non-business) are taken because of the whims, fancies, hunches, and intuition of people. Limited 'explainability' in human decisions means limited usability of the underlying data (which is the debris of human activity). The rest follows… In this case, even if we are able to mathematically induce transparency in the algorithm, that transparency would only serve to conclude that the model is not worth using. Hence I believe AI/ML should only be applied (or relied upon) in situations where humans are objective today (both in their asks and in their responses), or where objectivity is needed and is possible! This, I believe, should be the first gate in deciding whether or not to use AI to solve a problem. Transparency in this case will be a by-product of the process. At the risk of sounding like an AI skeptic: for other problems, until we codify "humanity," "emotions," and "biases," AI is a far cry from usability.

James Jeude

Chief Data & Analytics Office Advisory Practice at Wipro Limited

4y

Dr. Jerry A. Smith, that element of 'explainability' (or transparency, as we call it) is important because there will and should be increased scrutiny of AI's choices and recommendations. As you and I were discussing recently, lawsuits have been filed in areas where this is affecting lives (such as sentencing and parole guidelines). I think your reminder that AI is creating the future by making its recommendations has an impact on how future AI sessions will find untainted training sets. That is, what will AI use for training if the training examples were, themselves, the result of AI? We run the risk of a type of infinite loop. At the risk of displaying bad manners by putting a link to my own article in your comments, this blog from Cognizant talks about the topic in a little more detail. Can we ensure we don't lose our human touch in the training sets of the future? https://digitally.cognizant.com/bots-fathers-judicial-advice-break-infinite-loop-codex3768/
