Explainable AI (XAI)
Continuing from my last article on preventing bias in AI, this one looks at a possible antidote: Explainable AI, i.e. systems built to explain the features they have learnt and how those features drive their decisions.
There is no doubt that AI has started impacting and influencing our lives, in ways known and unknown. We are collectively creating systems so advanced and sophisticated that even we sometimes cannot understand them; in some cases the opacity is by design (neural networks, for instance). Questions such as how a system works, what reasoning it applies when making decisions, or how it uses existing data for predictive analytics can turn out to be inexplicable.
To build a stronger, more trusting relationship with AI systems, the need for Explainable Artificial Intelligence (XAI) was felt, which aims at making machines more understandable to humans. Explainability is all the more a societally important topic because it helps remove bias, ensure fairness and bring transparency.
Explainable AI contrasts with "black box" AIs that employ complex, opaque algorithms, where even their designers cannot explain why the AI arrived at a specific decision. Across the globe, people are growing wary of data-driven systems and their impact on society. One of the first formal research programs to attempt to crack open the AI “black box” is the Explainable AI (XAI) project, run by the Defense Advanced Research Projects Agency (DARPA), an organisation that does much of America’s military research.
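To make the “black box” contrast concrete, here is a minimal sketch of one common model-agnostic explanation technique, permutation feature importance: an opaque model is trained, and each feature is shuffled in turn to measure how much the model’s accuracy depends on it. The dataset, model and library choices below are purely illustrative (they are not mentioned in this article) and assume scikit-learn is installed.

```python
# Sketch: explaining an opaque model with permutation feature importance.
# Assumes scikit-learn; dataset and model choices are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An accurate but hard-to-inspect "black box" model.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score drops;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five features the model depends on most.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```

Techniques like this do not open the model itself, but they give a human-readable account of which inputs drove its decisions, which is the kind of explanation XAI aims to provide.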
Regulations such as the General Data Protection Regulation (GDPR), which has recently come into effect across the EU, require sweeping changes to how organizations handle personal data. GDPR recognizes what could be considered a “right to explanation” for all citizens, meaning that users can demand an explanation for any “legal or similarly significant” decisions made by machines. There is hope that the right to explanation will give the victims of “discrimination-by-algorithm” recourse to human authorities, thereby mitigating the effect of such biases.
In India, a recent paper by the policy think tank NITI Aayog (read here) has laid the groundwork for evolving the National Strategy for Artificial Intelligence and has identified potential sectors that can reap its benefits. Explainable AI has been mentioned as a focus area for a Centre of Excellence on AI. And then there are non-profit organizations such as OpenAI and the AI for Good Foundation that are discovering and enacting the path to safe artificial general intelligence.
We feel that although Explainable AI is a step in the right direction to overcome bias in AI systems, ultimately it is the responsibility of the organization that owns the data to collect, store and use that data wisely and fairly, and to be responsible for the decisions its systems take, just as in the human-driven world.
If you'd like a daily mailer of our Fintech Newsletter - FinTalk, you can subscribe here: @bankofbaroda https://lnkd.in/ffG2dfG