Smart Society: How to Trust Artificial Intelligence
Ronald van Loon
CEO and Principal Analyst, Intelligent World | Helping AI-Driven Companies Generate Success | Top 10 AI, Data, and IoT Influencer
The concept of a smart society has been around for a long time, but the progress made towards it in the last decade has been a giant leap for mankind. For those who are unaware, the smart society on the horizon is the future of mankind. We are about to enter a phase where living smart is the baseline, and everything else falls into place to complement that lifestyle. In smart societies, we have smart cities that run on smart accessories and smart buildings.
In smart societies, we have smart cars (also known as self-driving or autonomous vehicles). We expect a better flow of traffic, with traffic management propelled by the extensive, authentic data these vehicles provide and analyzed with smart algorithms (e.g. based on AI). The most prominent feature of smart societies as we know them now is the pervasiveness of the Internet of Things (IoT) down to the smallest level. The implementation of IoT at the micro level drives the need for self-learning algorithms, hence the emphasis on AI. Eventually it all comes together to form the bigger picture of a smart society.
AI and Algorithms
The smart society is far more than a figment of the imagination now, and it is high time we started answering the genuine questions pertaining to it. The role of Machine Learning in implementing the smart society stems from the simple fact that humans are incapable of analyzing all that data in a classical way. The words “Machine Learning” already indicate where the issue with these algorithms lies: the machine is learning, i.e. it doesn’t necessarily follow logic easily understood by humans. That leaves us with the interesting challenge of understanding how Machine Learning and algorithms impact us and the concept of the smart society. Technology philosopher Evgeny Morozov coined a term for this impact: the “Invisible Barbed Wire.”
The omnipresent systems in our smart society are driven by algorithms. That means they are directly impacting our decisions, our lives, so it would be worthwhile to understand exactly how that works. Here are a few examples:
- Data-centered applications are now heavily adopted by the general public. One mobile application, for example, gives its users the chance to detect certain types of cancer with high accuracy.
- “Simple” algorithms such as those predicting the weather have long influenced how you plan your day. Who hasn’t changed his or her plans to go to the beach at least once because the forecast warned of thunderstorms?
- Navigation systems in our cars and on our mobile phones decide the best route to take. The decision is optimized with fresh data coming in from all alternative routes available for our journey. By assessing traffic updates and other factors, these navigation systems come up with what is best for us (see the sketch below).
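To make that last example concrete, here is a minimal sketch of the kind of optimization a navigation system performs: a shortest-path search (Dijkstra's algorithm) over a road graph whose edge weights represent current travel times. The road network and the travel times below are invented for illustration; a real system would refresh these weights continuously from live traffic feeds.

```python
import heapq

def fastest_route(graph, start, goal):
    """Dijkstra's shortest-path search over travel times (in minutes)."""
    # Priority queue of (elapsed time, node, path taken so far).
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        elapsed, node, path = heapq.heappop(queue)
        if node == goal:
            return elapsed, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, minutes in graph[node].items():
            if neighbor not in visited:
                heapq.heappush(queue, (elapsed + minutes, neighbor, path + [neighbor]))
    return None  # no route exists

# Hypothetical road segments with current travel times in minutes;
# a live traffic feed would update these values continuously.
roads = {
    "home":     {"highway": 5, "backroad": 3},
    "highway":  {"office": 10},
    "backroad": {"office": 18},
    "office":   {},
}

print(fastest_route(roads, "home", "office"))
# -> (15.0, ['home', 'highway', 'office']): the highway wins despite
#    the shorter first leg of the backroad.
```

If a traffic jam pushed the highway segment from 10 to, say, 20 minutes, the same search would return the backroad instead; that is exactly how fresh data changes what the system decides is "best for us."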
Humans now depend heavily on these systems, often without realizing it, which is why algorithms are termed “invisible barbed wire.”
Can You Trust AI?
The question that arises once we are done evaluating the impact of algorithms on our lives is: can we trust AI? The first and foremost concern for most individuals and smartphone users is what companies across the world are doing with their data. Where does the collected data go, and how does the analysis of that data match your expectations? These are important questions for all users, and questions we would like to see answered.
Although the market for AI and analytics is developing rapidly, the trust deficit is not showing any signs of decreasing. Recent statistics highlight this trust gap:
(Figure: survey statistics on the trust gap in AI and analytics. Source: KPMG)
Now that we are aware of the current trust deficit, what can we do about it? What are the anchors that build trust between the stakeholders? KPMG identified three of them:
- Are the data analysis process and the data itself of top-notch quality? Requirements, of course, depend on the domain and application. Bank transactions and medical diagnostics put higher constraints on the quality of data and data analysis processes than marketing campaigns.
- Does the analysis do what it is intended to do? This becomes especially important when data or algorithms are reused. Data or algorithms collected or developed for one purpose are not by definition suitable for another.
- Is the use of the data and algorithms acceptable from an ethical and regulatory perspective? Gender-based or age-based discrimination is typically prohibited by law. Data analysis must obviously comply with all regulations, such as the GDPR. A simple check of this kind is sketched below.
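As a minimal sketch of how a data scientist might test that third anchor, the check below compares a model's approval rates across a protected attribute (demographic parity). The records and the 0.8 threshold (the well-known "four-fifths rule") are illustrative assumptions, not a complete fairness audit.

```python
def approval_rate(decisions, group):
    """Fraction of applicants in `group` that the model approved."""
    outcomes = [d["approved"] for d in decisions if d["gender"] == group]
    return sum(outcomes) / len(outcomes)

# Hypothetical model decisions on loan applications.
decisions = [
    {"gender": "F", "approved": True},
    {"gender": "F", "approved": False},
    {"gender": "F", "approved": False},
    {"gender": "M", "approved": True},
    {"gender": "M", "approved": True},
    {"gender": "M", "approved": False},
]

rate_f = approval_rate(decisions, "F")
rate_m = approval_rate(decisions, "M")
parity = min(rate_f, rate_m) / max(rate_f, rate_m)

print(f"approval F: {rate_f:.2f}, M: {rate_m:.2f}, parity ratio: {parity:.2f}")
if parity < 0.8:  # illustrative threshold
    print("Warning: possible disparate impact; review before deployment.")
```

On this toy data the parity ratio is 0.50, so the check fires; catching such a gap before deployment is far cheaper than discovering unintended bias in production.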
A trust deficit occurs when data analysis fails to adhere to the anchors mentioned above, and such failures are widespread and well-known. “Death by GPS” is a common term for people getting lost due to GPS interpretation mistakes, and the flash crashes caused by trading algorithms have created an aura of unreliability and unpredictable behavior. Interestingly enough, unpredictability is the last thing you would expect from a machine, and not meeting expectations is the fastest way to a trust gap. Other quick routes to dissatisfaction and reputation damage are security breaches and the discovery of unintended bias.
The smart society has some interesting knock-on effects, e.g. in the area of liability. Research shows that more than 62 percent of D&A professionals believe that accidents caused by self-driving cars are the responsibility of the organization that creates the software and algorithms.
How to Trust AI?
This last result indicates that the data scientist has a crucial role in establishing trust in AI. Data scientists are responsible for developing the data analysis and its metrics, and therefore play an important part in building trust in this ecosystem.
Numerous controls and frameworks already exist to regulate development: ISO standards for software quality and information security, for example, and FAST for financial modeling. The FACT principle has also been suggested for responsible data science, standing for Fairness, Accuracy, Confidentiality, and Transparency; all important and necessary attributes.
Another factor that can generate trust is ethics by design, an extension of privacy by design. Design a system in such a way that the correct processes have to be followed, i.e. make it technically impossible to circumvent them. Only the right combination of organizational and technical measures can ensure that data scientists do the right things in the right way.
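As an illustration of what ethics by design can look like in practice, here is a minimal sketch of a pipeline object that simply refuses to train until the required reviews are signed off, and that strips protected attributes before the data reaches the model. Skipping the process becomes a technical impossibility rather than a policy violation. All names here (the checks, the dataset fields, the stand-in training function) are hypothetical.

```python
PROTECTED_ATTRIBUTES = {"gender", "age", "ethnicity"}
REQUIRED_APPROVALS = {"privacy_review", "bias_audit"}

class GovernedPipeline:
    """A training pipeline that enforces its own governance process."""

    def __init__(self, purpose):
        self.purpose = purpose   # documented purpose limitation
        self.approvals = set()

    def approve(self, check):
        """Record a sign-off, e.g. by a privacy officer or reviewer."""
        self.approvals.add(check)

    def fit(self, records, train_fn):
        missing = REQUIRED_APPROVALS - self.approvals
        if missing:
            raise PermissionError(f"cannot train for '{self.purpose}': "
                                  f"missing approvals {missing}")
        # Strip protected attributes so the model cannot use them directly.
        cleaned = [{k: v for k, v in r.items() if k not in PROTECTED_ATTRIBUTES}
                   for r in records]
        return train_fn(cleaned)

pipeline = GovernedPipeline(purpose="credit scoring")
pipeline.approve("privacy_review")
pipeline.approve("bias_audit")
data = [{"income": 52000, "gender": "F", "defaulted": False}]
model = pipeline.fit(data, train_fn=lambda cleaned: cleaned)  # stand-in for real training
```

Removing either `approve` call makes `fit` raise immediately: the correct process is a precondition, not a recommendation. Stripping attributes is of course only a first step, since proxies can still leak protected information, which is exactly why the bias audit sign-off is also required.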
Finally, third-party review is crucial for building trust. This is where assurance firms such as KPMG come into the picture. KPMG has audited financial statements for the last 100 years and has now stepped into digital assurance. We can learn valuable lessons from how the audit of financial statements works. Social value is crucial, as is the willingness to work on a trusted relationship with society. This calls for a balanced approach to transparency. Opening the black box and creating trust among all stakeholders is what external auditors such as KPMG aim to achieve. The smart society is imminent, but we need auditors to keep a stringent check on all aspects of data analysis.
For a more professional take on this issue, you can listen to our webinar here.
About the Authors
Sander Klous is Professor in Big Data Ecosystems at the University of Amsterdam and Partner in charge of Data & Analytics at KPMG Nederland.
If you would like to read more from Ronald van Loon on the possibilities of Big Data and the Internet of Things (IoT), please click “Follow” and connect on YouTube, LinkedIn and Twitter.