How Far Are We From Trusting Artificial Intelligence?
Artificial intelligence has always been in the limelight, first for the glorified use cases that could bring human-like intelligence to machines, and second for the threat it could pose to human existence if machines actually start thinking like humans.
Writers, mathematicians, scholars, and scientists have all expressed views both supporting and opposing the growth and transformation of artificial intelligence.
Alan Turing, an English mathematician and pioneer of theoretical computer science and artificial intelligence, once said:
“We like to believe that Man is in some subtle way superior to the rest of creation. It is best if he can be shown to be necessarily superior, for then there is no danger of him losing his commanding position.”
~Alan Turing
Humans have often turned against humans, so is it wrong to think that machines could behave similarly if they start thinking like us? Would it be right to say there is some probability that the Artificial Intelligence projects you work on today could one day turn against you, powering killer robots whose capabilities go far beyond our own? How far are we from trusting Artificial Intelligence?
We see a variety of thought-provoking and exciting AI applications and tools, often overhyped, that are believed to create a positive impact on the economy in many ways. We should not overlook that much of this hype rests on a hypothetical future in which AI acquires the same learning and planning abilities that humans have and becomes superintelligent.
Governments in many countries are spending billions of dollars to promote Artificial Intelligence and implement AI-based solutions. AI is evolving, and companies and organizations are investing heavily in the technology to make the most of it.
Though there have been advancements in Artificial Intelligence, the technology is not yet stable, for multiple reasons.
Let us analyze some real-life examples where AI has gone wrong.
I recently read an analysis by researchers from the Georgia Institute of Technology that highlights a potential threat in self-driving cars. The threat lay in the ML object detection models that use computer vision to help self-driving cars avoid accidents: in several tests, the systems were reported to be worse at recognizing dark-skinned individuals.
This is a troubling sign of how AI can inadvertently reproduce prejudices from the wider world. The research team at Georgia Tech suggests that we could be headed toward a future in which a world rife with autonomous cars is less safe for pedestrians with dark skin tones than for lighter-skinned pedestrians.
Image credits: Algorithmia
Though this could eventually be mitigated by feeding the system thousands of additional samples of dark-skinned individuals, fixing such defects only after they surface is not a safe practice. Nor is this the first study to find Machine Learning systems with varying predictive accuracy across demographics; researchers have found similar examples in the financial sector in the past.
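To make "varying predictive accuracy across demographics" concrete, here is a minimal sketch of a disaggregated evaluation, which reports accuracy per group instead of a single overall figure. The data, field names, and the toy rule-based "detector" below are invented placeholders, not any real system:

```python
# A minimal sketch of a disaggregated evaluation: instead of one overall
# accuracy figure, break the score down by demographic group.
from collections import defaultdict

def accuracy_by_group(examples, predict):
    """Compute accuracy separately for each group.

    `examples` is an iterable of (features, label, group) tuples;
    `predict` is any callable mapping features to a predicted label.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for features, label, group in examples:
        total[group] += 1
        if predict(features) == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy data and a toy "detector" standing in for a real model:
examples = [
    ({"brightness": 0.9}, 1, "lighter-skinned"),
    ({"brightness": 0.8}, 1, "lighter-skinned"),
    ({"brightness": 0.3}, 1, "darker-skinned"),
    ({"brightness": 0.2}, 1, "darker-skinned"),
]
detector = lambda x: 1 if x["brightness"] > 0.5 else 0  # biased toward bright inputs
print(accuracy_by_group(examples, detector))
# {'lighter-skinned': 1.0, 'darker-skinned': 0.0} -- a gap worth investigating
```

A single headline accuracy number would hide exactly the kind of gap the Georgia Tech researchers reported; breaking the metric down per group is what surfaces it.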
An article published by ACLU.org points out that Amazon's face recognition falsely matched 28 members of Congress with mugshots. When the ACLU tested Amazon's AI tool, called "Rekognition," the software incorrectly matched 28 members of Congress with individuals who had been arrested for a crime.
Image credits: aclu.org
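The ACLU reportedly ran its test at the tool's default 80% match-confidence threshold, while Amazon recommends a far higher threshold for law enforcement use. The sketch below, with invented similarity scores, illustrates only the general principle at work: the lower the threshold, the more false matches slip through:

```python
# A hedged illustration of why the match-confidence threshold matters in
# face recognition. The scores below are invented for the example.
def count_false_matches(scores, threshold):
    """Count non-matching pairs whose similarity score clears the threshold.

    `scores` is a list of (similarity, is_same_person) tuples.
    """
    return sum(1 for s, same in scores if s >= threshold and not same)

# Hypothetical similarity scores for pairs of *different* people:
impostor_scores = [(0.97, False), (0.86, False), (0.81, False), (0.62, False)]

print(count_false_matches(impostor_scores, 0.80))  # 3 false matches at 80%
print(count_false_matches(impostor_scores, 0.99))  # 0 false matches at 99%
```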
Yet another case:
According to a report by Stat News, IBM's Watson supercomputer (Watson for Oncology) was used to recommend treatments for cancer patients, and the project was canceled after USD 62 million had been spent.
Internal documents show that this AI-enabled system recommended "unsafe and incorrect" cancer treatments. Further reports say the software was trained on a small number of "synthetic" cancer cases, or hypothetical patients, rather than real patient data.
None of these cases show an efficient AI system, but neither do they show Artificial Intelligence malfunctioning intentionally. What they do show is the havoc AI can cause, and the effect on human lives, when things go wrong due to errors.
I am not doubting the good that Artificial Intelligence-enabled machines can bring to our lives. But according to AI expert Toby Walsh, the future of AI killer robots is near: if the drone bombers flying over the skies of Afghanistan are controlled by humans now, it is only a small technical step to make them autonomous.
“I think that technologies are morally neutral until we apply them. It's only when we use them for good or evil that they become good or evil.”
~William Gibson
Looking deeper into the future of Artificial Intelligence, why could it be a threat to human existence?
Let's explore the scenarios, understand the negative sides of AI, and consider what could be done to avoid issues in the future.
- AI replacing the human workforce:
AI is built to imitate the human way of thinking and working, and eventually to take over certain parts of your job responsibilities. In the near future, your entire role could perhaps be handled by an AI. The threat is not to your job, since you could move on to better prospects and new learning. The threat is in the job assigned to the AI, which could miscalculate and go wrong if it encounters a scenario that was not part of its training. AI should be monitored and controlled.
- Effects on social living:
As Toby Walsh's warning earlier in this post suggests, the side effects and threats of AI used autonomously could be lethal. Government organizations, decision-makers, and businesses should spend time formulating policies, rules, regulations, and responsibilities around AI usage. If not, there could be significant wrongdoing as AI continues to mature.
- Bias and prejudice:
AI is developed by humans, and humans are often subjective. AI behaves and learns based on what we teach it, and if there is bias in the data set used to train Artificial Intelligence-enabled machines, the actions the AI performs can reflect that prejudice. One common example we have seen is the US elections, where AI was used to manipulate social media. AI-enabled solutions should therefore be tested by several groups and run against test use cases before they go live, and monitored afterwards; a minimal sketch of one such pre-launch check follows this list.
- Forced misrepresentations:
AI facial recognition systems and computer-vision-enabled ML models are widely used, and they have the ability to fake someone's likeness and create misrepresentations on their behalf. This could be used to threaten individuals and businesses and to damage livelihoods and reputations. AI is getting better day by day, and it may become difficult to tell whether such content is fake or real.
- Surveillance systems:
AI, when used in surveillance, gains access to people's personal lives and intrudes on their privacy. Facial recognition access mechanisms can make work easier, but when coupled with the analysis of one's routines, conversations, and social media engagement, they could cause catastrophic harm to individuals.
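As promised in the bias and prejudice item above, here is a minimal sketch of one possible pre-launch check, assuming group labels are available in the training records. The field names and the thresholds are illustrative choices for this example, not a standard:

```python
# A minimal pre-launch check: flag demographic groups that are badly
# under-represented in the training set before the model goes live.
from collections import Counter

def underrepresented_groups(records, group_key, min_share=0.10):
    """Return groups whose share of the training data falls below min_share."""
    counts = Counter(r[group_key] for r in records)
    n = sum(counts.values())
    return {g: c / n for g, c in counts.items() if c / n < min_share}

# Toy training set: group B is only 10% of the data.
training_data = [
    {"group": "A"}, {"group": "A"}, {"group": "A"}, {"group": "A"},
    {"group": "A"}, {"group": "A"}, {"group": "A"}, {"group": "A"},
    {"group": "A"}, {"group": "B"},
]
print(underrepresented_groups(training_data, "group", min_share=0.20))
# {'B': 0.1} -- a warning sign to collect more data or re-weight before launch
```

A check like this cannot prove a model is fair, but it catches the most obvious route to prejudice early: training on data in which some groups are barely present.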
AI in robotics, and its applications that aim to be as human as possible, pose a larger threat than what we have discussed. Autonomous robotic weapons would be cheap to produce, and some could perhaps be made with a 3-D printer. If these fell into the hands of terrorists, they could cause another devastating war. With such weapons, it would also be difficult to know the source of an attack.
In an article published by The Guardian, a former Google software engineer who worked on a military drone project warns that killer robots could cause another war. Robotics, however, is a wide area to discuss and can't be covered in this article.
As a company embracing Artificial Intelligence, we build AI solutions that aid and support the workforce, making it more productive. However, we understand the positives and the negatives of this business equation, and we always advise our customers to choose AI for the betterment of their business, not as a complete replacement for their existing resources.
It is essential that the usage of AI is regulated, and that policies are made on how AI should be developed and deployed, to make this powerful technology safe and usable.