Auditing for Fair AI Algorithms
Rupa Singh
Founder and CEO at 'The AI Bodhi' and 'AI-Beehive' | Author of 'AI Ethics with Buddhist Perspective' | Top 20 Global AI Ethics Leader | Thought Leader | Expert Member at Global AI Ethics Institute
With the widespread deployment of AI systems, there have also been valid concerns about the effectiveness of these automated systems, especially given their tendency to replicate, reinforce, or amplify harmful biases.
It remains challenging for practitioners to identify the harmful repercussions of their own systems prior to deployment, and once a system is deployed, it becomes extremely difficult or impossible to trace harms back to their sources.
Algorithms need to be rigorously audited to gain public trust.
Therefore, audits are necessary tools for interrogating these complex systems and determining whether they comply with company policy, industry standards, or regulation.
Upcoming regulations that will monitor compliance with risk-management procedures around the development, implementation, and use of AI include:
1. EU Artificial Intelligence Act (European Commission, 2021)
2. Digital Markets Act (DMA)
3. Digital Services Act (DSA) (European Commission, 2022)
What makes AI different from previous technologies?
Several dimensions make AI algorithms impactful, but also unpredictable and difficult to control.
1. Black-box nature: As algorithms grow more complex, the accuracy of their outcomes often improves. However, this inherent opacity, the black-box nature of AI, makes it challenging to trace a problem back to its root cause and to understand why an algorithm made a certain decision.
2. Data-intensive: Another dimension of AI systems is their data-intensive nature, which can be a 'double-edged sword'. On one hand it allows much more fine-grained and precise modelling; on the other, even a subtle bias present in the training data can be perpetuated by the resulting model.
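To make the training-data point concrete, the fragment below sketches one simple pre-training check: comparing positive-label rates across demographic groups in a training set. The toy data, group names, and the idea of flagging a gap are illustrative assumptions, not part of any standard.

```python
# Illustrative sketch: checking a training set for a subtle label skew
# between demographic groups before any model is trained.

def positive_label_rate_by_group(labels, groups):
    """Fraction of positive labels (1s) for each group."""
    rates = {}
    for g in sorted(set(groups)):
        ys = [y for y, grp in zip(labels, groups) if grp == g]
        rates[g] = sum(ys) / len(ys)
    return rates

# Toy data: 60% positive labels for group "A", only 30% for group "B".
labels = [1] * 6 + [0] * 4 + [1] * 3 + [0] * 7
groups = ["A"] * 10 + ["B"] * 10

rates = positive_label_rate_by_group(labels, groups)
print(rates)  # {'A': 0.6, 'B': 0.3} -> a gap worth investigating
```

A model trained on such data can inherit the skew even if the protected attribute itself is never used as a feature, which is why audits look at the data as well as the model.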
3. Dynamic nature: As more and more data is generated and used to train AI algorithms, an algorithm's performance changes over time. Hence, it becomes increasingly difficult for organizations to control AI algorithms.
AI Regulation:
Types of AI techniques considered in the proposed Act:
The proposed Act (Annex I) distinguishes three types of AI techniques and approaches:
1. Machine-learning approaches, including supervised, unsupervised, reinforcement, and deep learning
2. Logic- and knowledge-based approaches
3. Statistical approaches, Bayesian estimation, and search and optimization methods
The AI Act proposes a risk-based approach with three classes:
Examples of unacceptable AI risks (prohibited outright): social scoring by public authorities and systems that manipulate human behaviour in ways that cause harm.
Examples of high-risk AI: remote biometric identification, credit scoring, recruitment and worker management, and safety components of critical infrastructure.
Examples of low-risk AI: spam filters, AI in video games, and chatbots, which are subject mainly to transparency obligations.
High-risk AI applications should undergo a conformity assessment, and they should have:
1. An established, implemented, documented, and maintained risk management system that is able to:
a. assess,
b. evaluate, and
c. mitigate risks
2.?A data governance approach to ensure the use of high-quality datasets for training and testing learning algorithms
3. Technical documentation to demonstrate compliance with the requirements of the AI Act, and automatic record-keeping to monitor events affecting the AI system.
4. Transparency capabilities that enable users to understand details about the functioning of the AI systems.
5. Human in the loop: human oversight such that a natural person can intervene to minimize the risks of the AI system.
6. An appropriate level of accuracy, robustness, and cybersecurity throughout the system's lifecycle.
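Requirement 3 above, automatic record-keeping, can be sketched as an append-only event log for decisions an AI system makes. The JSON-lines format and the field names below are illustrative assumptions, not something the AI Act prescribes.

```python
# Illustrative sketch of automatic record-keeping: an append-only,
# timestamped event log for decisions made by an AI system.

import io
import json
import time

def log_event(stream, event_type, payload):
    """Append one timestamped event as a single JSON line."""
    record = {"ts": time.time(), "type": event_type, "payload": payload}
    stream.write(json.dumps(record) + "\n")

log = io.StringIO()  # stand-in for a real append-only log file

log_event(log, "prediction", {"input_id": "a-123", "decision": "approve"})
log_event(log, "human_override", {"input_id": "a-123", "by": "reviewer-7"})

events = [json.loads(line) for line in log.getvalue().splitlines()]
print(len(events), events[1]["type"])  # 2 human_override
```

The design choice that matters for auditability is that records are only ever appended, never edited, so an auditor can reconstruct what the system did and when a human intervened.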
Internal Audits:
Internal audits are conducted by teams within the organization. Internal auditors have access to internal processes, intermediate models, and training data, and can intervene before a system is deployed, but their findings may be influenced by organizational considerations.
External Audits:
External audits are audits in which companies are accountable to a third party. They are conducted by credible experts and are less influenced by organizational considerations.
However, external auditors are limited by a lack of access to internal processes at the audited organizations.
They typically do not have access to intermediate models or training data, which are often protected as trade secrets.
An external audit is also a post-deployment process, implemented as a reactive measure only after the system has been released.
An Internal Audit Framework: SMACTR
The initial internal audit framework encompasses five distinct stages:
Scoping:
The goal of the scoping stage is to clarify the objective of the audit. At this stage, the motivation and intended impact of the investigated system are reviewed to confirm the principles and values meant to guide product development.
Risk analysis begins by mapping out the intended use cases and identifying analogous deployments. The main objective is to anticipate areas to investigate as potential sources of harm and social impact. Interaction with the system itself is limited at this stage.
The key artifacts developed by auditors at this stage include an ethical review of the system's use case and a social impact assessment.
Mapping:
At the mapping stage, the system is not yet actively tested; instead, auditors review what is already in place and the different perspectives involved in the audited system.
This stage involves mapping internal stakeholders, identifying key collaborators for the execution of the audit, and orchestrating the appropriate stakeholder buy-in required for execution. Risks are prioritized at the mapping stage, and FMEA (Failure Modes and Effects Analysis) begins.
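As a concrete illustration of the FMEA step, the sketch below ranks failure modes by Risk Priority Number (severity × occurrence × detectability), the standard FMEA prioritization. The failure modes and their 1-10 ratings are invented for illustration.

```python
# Illustrative FMEA sketch: rank failure modes by Risk Priority Number.
# RPN = severity x occurrence x detection, each rated 1 (best) to 10 (worst);
# a high detection score means the failure is hard to detect.

def rpn(mode):
    return mode["sev"] * mode["occ"] * mode["det"]

failure_modes = [
    {"mode": "biased training labels",  "sev": 9, "occ": 6, "det": 7},
    {"mode": "data drift after launch", "sev": 7, "occ": 5, "det": 4},
    {"mode": "adversarial inputs",      "sev": 8, "occ": 2, "det": 3},
]

# Highest-RPN failure modes are prioritized for deeper auditing.
ranked = sorted(failure_modes, key=rpn, reverse=True)
for m in ranked:
    print(m["mode"], rpn(m))
# biased training labels 378
# data drift after launch 140
# adversarial inputs 48
```

Ranking by RPN gives the audit team a defensible, documented order in which to spend their limited testing effort.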
Key artifacts at this stage include a map of internal stakeholders and the initial FMEA of the system.
Artifact Collection:
At this stage, auditors identify and collect all required documentation from the product development process in order to prioritize opportunities for testing.
Key artifacts collected during this stage include design documentation such as model cards and datasheets for the datasets used.
Collecting these artifacts advances adherence to the organization's declared AI principles, such as responsibility, accountability, and transparency.
Testing:
At this stage, the majority of the testing activity is performed by auditors. To gauge the system's compliance with the organization's prioritized ethical values, auditors execute a series of tests. To demonstrate the performance of the analysed system at the time of the audit, auditors engage with it in a variety of ways and produce a series of artifacts.
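One example of a test an auditor might run at this stage is a selection-rate comparison in the spirit of the "four-fifths rule". The numbers and the 0.8 threshold convention below are illustrative; a real audit would use the metrics prioritized during scoping.

```python
# Illustrative audit test: disparate impact ratio between two groups'
# selection rates, flagged when it falls below the 0.8 convention.

def disparate_impact_ratio(selected_a, total_a, selected_b, total_b):
    """Ratio of the lower selection rate to the higher one (0 to 1)."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Toy outcome: 45/100 of group A selected vs 30/100 of group B.
ratio = disparate_impact_ratio(45, 100, 30, 100)
verdict = "PASS" if ratio >= 0.8 else "FLAG"
print(round(ratio, 3), verdict)  # 0.667 FLAG
```

A flagged result does not by itself prove unfairness; it marks the system for the deeper examination that happens in the reflection stage.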
Reflection:
This phase of an audit is crucial for determining the ethical implications of the AI system and for ensuring that it aligns with the ethical expectations clarified during audit scoping. Here, auditors thoroughly examine the results of the tests performed in the testing stage and compare them to the ethical principles established in scoping.
The ethical implications of the AI system are thoroughly evaluated and documented. This documentation is then used to inform decision-makers and stakeholders about the ethical implications of the AI system, allowing them to make informed decisions about its deployment.
Post-Audit:
The post-audit phase is an ongoing process: the ethical implications of an AI system must be reassessed regularly as new information becomes available or as the system evolves over time.
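This ongoing monitoring can be sketched minimally as follows, under the assumption that a fairness gap measured at audit time serves as the baseline and a fixed tolerance triggers a re-audit; both numbers are illustrative.

```python
# Illustrative post-audit monitoring: re-measure a fairness metric on
# each new batch of production data and alert when it drifts past an
# agreed tolerance.

def drifted(baseline, current, tolerance=0.05):
    """True when the monitored metric moves beyond the tolerance."""
    return abs(current - baseline) > tolerance

baseline_gap = 0.02                 # fairness gap measured at audit time
monthly_gaps = [0.03, 0.04, 0.09]   # gap re-measured on each new batch

alerts = [g for g in monthly_gaps if drifted(baseline_gap, g)]
print(alerts)  # [0.09] -> the third month warrants a re-audit
```

The point is that an audit's conclusions have a shelf life: the same metric that passed at audit time must keep being checked as the data and the system change.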
Thank you for reading this article. I hope you found it informative.
Rupa Singh
Founder and CEO (AI-Beehive)
Author of 'AI Ethics with Buddhist Perspective'