Seeing is Believing: Why Full Transparency is Essential for Effective AI Models
Introduction
Most AI models today are built with machine learning (ML) or deep learning (DL) approaches, trained on large amounts of data using statistical methods. Because these approaches are not sufficiently transparent about how they arrive at answers to given tasks (Marcus, 2018), they are often called “black-box” AI.
The lack of transparency in AI models can lead to distrust, ethical concerns, and potential biases. Deep learning models can also be easily fooled. For example, MIT researchers developed a system to fool existing deep learning-based natural-language-processing models, including BERT. With only 10 percent of the words in the input changed, the accuracy of the models dropped from 90% to 20% (Gordon, 2020).
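The intuition behind such attacks can be sketched in a few lines. The toy classifier below is our own illustrative assumption, not the MIT system or BERT: it stands in for any brittle model that keys on surface words, and shows how swapping a single word for a synonym can flip the output while the meaning stays the same for a human reader.

```python
# Toy illustration of a synonym-substitution attack (not the actual
# MIT system): a brittle "classifier" that keys on surface trigger
# words can be flipped by one meaning-preserving word swap.
POSITIVE = {"great", "excellent", "wonderful"}

def toy_classifier(text: str) -> str:
    # Predict "positive" if any known trigger word appears.
    words = text.lower().split()
    return "positive" if any(w in POSITIVE for w in words) else "negative"

original = "The movie was great and the cast did a fine job"
# Swap roughly 10% of the words (here, one) for a synonym the toy
# model does not recognize; the meaning is unchanged for a human.
perturbed = original.replace("great", "superb")

print(toy_classifier(original))   # positive
print(toy_classifier(perturbed))  # negative: one synonym flipped the label
```

A transparent model would let us see that the decision hinged entirely on a handful of trigger words, which is exactly the kind of failure the paragraph above describes.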
For black-box AI, when something is wrong, it’s difficult to find out where and why the error occurs, making it risky to trust the model’s predictions or decisions. Without transparency, it is difficult to determine whether the decisions made by the AI systems are fair or if there are biases in the data and algorithms.
Transparency is the requirement or paradigm for AI models to be visible and examinable to users. Specifically, users will be able to see how the model makes decisions, which factors are taken into account, and how the results are obtained.
In fact, many believe such transparency is the future of the AI paradigm (BNY Mellon, 2021). For example, IBM has listed explainability/transparency as a key capability for next-generation AI (IBM Corporation). Large consulting firms and business media are also advocating transparency, many suggesting that transparent AI is the next advance and will have a huge impact on business (PwC, 2018; Burke & Smith, 2019; Deloitte, 2019; Paka, 2020).
Transparent AI can compensate for the weaknesses of deep learning. Increasing the transparency of a model can help researchers and engineers understand failures and debug errors (Artificial Intelligence Technology).
Meanwhile, transparency does not necessarily come at the expense of accuracy. For instance, in criminal justice and healthcare, transparent models performed just as well as black-box deep learning models (Rudin & Radin, 2019).
Use cases
With black-box models, users have access only to the input and output; they are blind to the intermediate stages by which the model arrives at its output. As AI permeates our everyday lives, the decisions made by AI will have a greater impact on our well-being, and even on life and death. Finding a way to reveal the complete logic behind AI models is thus crucial for everyone.
In healthcare, AI models can be used to assist in the diagnosis and treatment of various diseases. When diagnosing patients with a disease, transparent AI can show its full diagnosis process. It can help doctors examine their decisions and reduce the chance of incorrect diagnoses or treatments. Moreover, since the decisions by AI are also transparent to the patients, some potential ethical and accountability issues can be mitigated.
In HR, AI models can be used to screen resumes, rate interviews, and select candidates. Job applicants often want a reason if they are rejected (Chamorro, 2021). However, if the models are not transparent, they may not be able to explain why certain candidates were selected over others. This can result in biased decisions, unfairness, and discrimination.
In law and finance, AI models can be used to assist in decisions on legal cases and loan applications. In criminal justice, legal accountability is of great importance: most participants in a 2018 Citizens' Juries study believed explanations were more important than accuracy for AI decision systems in criminal justice (ICO, 2019). Customers may also ask why their loan application was rejected. Without transparency, it may be impossible to determine and explain how the AI arrived at its decision, which can result in financial losses or legal challenges.
Our work
At Mind AI, we pursue full transparency: transparency is available at every level.
Transparency comes naturally to Mind AI. Mind not only identifies what data it needs to perform its logic; it can also share and reveal all of its logical processes, which are completely human-readable and transparent (Mind AI, 2022).
The company's core technology is built on canonicals and natural-language reasoning. Data are stored in canonicals, a three-node structure consisting of a primary, a context, and a resultant. A canonical defines an entity or concept and serves as the unit of deductive, inductive, and abductive reasoning.
Because the system is based on symbolic reasoning, its definitions and functions are fully transparent, and every step of how the reasoning process reaches its conclusion can be inspected (Doe, et al.). In addition, studies have argued that a logic-based approach can be used to compute trustable explanations and validate heuristic explanations (Ignatiev, 2020).
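To make the idea concrete, here is a minimal sketch of what a canonical-style, fully inspectable inference might look like. The field names, the `Canonical` class, and the tiny rule-chaining function are our own illustrative assumptions, not Mind AI's actual implementation; the point is only that with a symbolic representation, every inference step can be printed and read by a human rather than being buried in network weights.

```python
from dataclasses import dataclass

# Hypothetical sketch of a canonical: a three-node unit with a
# primary, a context, and a resultant, loosely following the
# description above (names and structure are assumptions).
@dataclass(frozen=True)
class Canonical:
    primary: str    # the entity or concept being defined
    context: str    # the relation or condition it appears in
    resultant: str  # what follows from the primary in that context

def deduce(start: str, canonicals: list[Canonical]) -> list[str]:
    """Chain canonicals whose resultant matches the next primary,
    recording every step so the full reasoning path is readable."""
    trace, current, changed = [], start, True
    while changed:
        changed = False
        for c in canonicals:
            if c.primary == current:
                trace.append(f"{c.primary} --[{c.context}]--> {c.resultant}")
                current = c.resultant
                changed = True
                break
    return trace

kb = [
    Canonical("Socrates", "is a", "man"),
    Canonical("man", "is", "mortal"),
]

for step in deduce("Socrates", kb):
    print(step)  # each inference step is visible, not hidden in weights
```

Running this prints the complete chain from "Socrates" to "mortal", one labeled step per line, which is the kind of end-to-end visibility a black-box model cannot offer.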
Thank you for reading
Please subscribe to keep up with our exclusive AI technology and fun WIP stories. Plus, visit mind.ai to find out how our AI is transparent, accurate, and reliable.