Auditing for Fair AI Algorithms

With the widespread deployment of AI systems, there have also been valid concerns about the effectiveness of these automated systems, especially given their tendency to replicate, reinforce, or amplify harmful biases.

It remains challenging for practitioners to identify the harmful repercussions of their own systems before deployment. Once a system is deployed, tracing harms back to their sources becomes extremely difficult or impossible.


Algorithms need to be rigorously audited to gain public trust.

Audits are therefore necessary tools for interrogating these complex processes and determining whether AI systems comply with company policy, industry standards, or regulations.

The upcoming regulations designed to monitor compliance with risk management procedures around the development, implementation, and use of AI are:

  1. EU Artificial Intelligence Act (European Commission, 2021)
  2. Digital Markets Act (DMA)
  3. Digital Services Act (DSA) (European Commission, 2022)

What makes AI different from previous technologies?

Several dimensions make AI algorithms impactful but also unpredictable and difficult to control.

1. Black-box nature: Greater algorithmic complexity often brings improved accuracy of outcomes. However, the inherent opacity, or black-box nature, of AI algorithms makes it challenging to trace a problem back to its root cause and to understand why an AI algorithm made a particular decision.

2. Data-intensive: Another dimension of AI systems is their data-intensive nature, which can be a 'double-edged sword'. On one hand it allows for much more fine-grained and precise modelling; on the other, even a subtle bias present in the training data can be perpetuated by the resulting model (see the sketch after this list).

3. Dynamic nature: As more and more data is generated and used to train AI algorithms, an algorithm's performance changes over time. Hence, it becomes increasingly difficult for organizations to control AI algorithms.
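As a concrete illustration of the data-bias point above, the sketch below computes a disparate impact ratio on a toy dataset before any model is trained. This is a minimal sketch: the column names and the 0.8 'four-fifths rule' threshold are illustrative assumptions, not requirements from any of the regulations discussed here.

# A minimal pre-training bias check, assuming a pandas DataFrame with a
# binary outcome column and a protected attribute. Column names are
# illustrative.
import pandas as pd

def disparate_impact(df: pd.DataFrame, label: str, group: str) -> float:
    """Ratio of positive-outcome rates between the least- and
    most-favoured groups (1.0 = parity; below 0.8 is a common red flag)."""
    rates = df.groupby(group)[label].mean()
    return rates.min() / rates.max()

# Toy example: a small, deliberately skewed hiring dataset.
df = pd.DataFrame({
    "gender": ["f", "f", "f", "f", "m", "m", "m", "m"],
    "hired":  [0,   0,   0,   1,   1,   1,   1,   0],
})
print(f"Disparate impact ratio: {disparate_impact(df, 'hired', 'gender'):.2f}")
# ~0.33 here, far below the 0.8 rule of thumb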

AI Regulation:


  1. The European Commission's proposed AI Act (2021) is a landmark regulation aimed at governing AI algorithms.
  2. The significance of compliance with the AI Act is reflected in the formulated penalties.
  3. Non-compliance may result in fines of up to €30 million or 6% of worldwide annual turnover, whichever is greater.

Types of AI techniques considered for the Proposed Act:

The proposed Act distinguishes three types of AI techniques and approaches:

  1. Machine learning approaches (supervised, unsupervised, reinforcement, and deep learning)
  2. Logic- and knowledge-based approaches, including expert systems
  3. Statistical approaches, such as Bayesian estimation

The AI Act proposes a risk-based approach with three classes:

  1. Unacceptable risk
  2. High risk
  3. Low or minimal risk

Examples of unacceptable AI risks:

  1. Subliminal techniques deployed beyond a person's consciousness
  2. Social scoring techniques that are likely to cause physical or psychological harm
  3. Practices that exploit the vulnerabilities of specific groups
  4. Real-time remote biometric identification systems in publicly accessible spaces

Examples of high-risk AI:

  1. AI systems used as products or as safety components of products, e.g. machinery, personal protective equipment, radio equipment, medical devices, and transportation.
  2. Bias and discrimination: AI systems that perpetuate or amplify existing biases can harm marginalized communities and result in unfair outcomes.
  3. Misleading or false information: Using deepfake technologies and fake news to undermine public trust can have serious social and political consequences.
  4. Privacy violations: AI systems that collect, store, and use data in ways that violate individual privacy can lead to serious consequences.


Examples of low-risk AI:

Low-risk AI systems pose minimal danger to individuals, organizations, and society as a whole. Examples include:

  1. Recommender systems: products or services suggested to users based on their preferences and behaviour.
  2. Customer service chatbots: AI systems used to automate simple customer service tasks, such as answering frequently asked questions or directing customers to the right resources.
  3. Personal assistants: AI-powered virtual assistants such as Siri or Alexa, used for tasks like setting reminders, playing music, or providing information.

A high-risk AI application must undergo a conformity assessment, and it should have:

1. An established, implemented, documented, and maintained risk management system that is able to (a) assess, (b) evaluate, and (c) mitigate risks.

2. A data governance approach that ensures the use of high-quality datasets for training and testing learning algorithms.

3. Technical documentation to track compliance with the requirements of the AI Act, and automatic record-keeping to monitor events affecting the AI system (see the logging sketch after this list).

4. Transparency capabilities that enable users to understand how the AI system functions.

5. Human in the loop: human oversight such that a natural person can intervene to minimize the risks of the AI system.

6. An appropriate level of accuracy, robustness, and cybersecurity throughout the system's lifecycle.
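To make the record-keeping requirement in point 3 concrete, here is a minimal logging sketch: each model decision is appended to a JSON-lines audit log with a timestamp, model version, and a hash of the input. The schema, field names, and file format are illustrative assumptions; the AI Act does not prescribe a particular implementation.

# A minimal automatic record-keeping sketch using only the standard
# library. Hashing the raw input keeps personal data out of the log
# while still allowing records to be matched to inputs later.
import hashlib
import json
import time

def log_prediction(log_path: str, model_version: str,
                   features: dict, prediction) -> None:
    """Append one structured record per model decision."""
    payload = json.dumps(features, sort_keys=True)
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "input_sha256": hashlib.sha256(payload.encode()).hexdigest(),
        "prediction": prediction,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage for a credit-scoring model:
log_prediction("audit.jsonl", "credit-model-1.3",
               {"age": 41, "income": 52000}, prediction=1)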

Internal Audit:


  1. An internal algorithmic audit is a mechanism for checking that the engineering processes involved in creating and deploying an AI system meet declared ethical expectations and standards, such as organizational AI principles.
  2. Internal audits can be leveraged to anticipate potential negative consequences before they occur and are extremely useful for supporting decisions about the design of mitigations.
  3. Internal audits also help to clearly define and monitor potential adverse outcomes and to anticipate harmful feedback loops and system-level risks.
  4. When risks outweigh benefits, internal audits operating within the product development context can inform the ultimate decision to abandon the development of an AI technology.
  5. Internal audits are a pre-deployment process applied throughout development, enabling proactive ethical intervention.

External Audits:


External audits hold companies accountable to a third party. They are conducted by credible external experts and are less influenced by organizational considerations.

However, external auditors are limited by their lack of access to internal processes at the audited organizations.

They typically have no access to intermediate models or training data, which are often protected as trade secrets.

External auditing is a post-deployment process, implemented as a reactive measure only after the system is in use.

An Internal Audit Framework: SMACTR

The initial internal audit framework encompasses five distinct stages:

  1. Scoping
  2. Mapping
  3. Artifact Collection
  4. Testing
  5. Reflection

Scoping:

The goal of the scoping stage is to clarify the objective of the audit. At this stage, the motivation and intended impact of the system under investigation are reviewed, confirming the principles and values meant to guide product development.

Risk analysis begins by mapping out the intended use cases and identifying analogous deployments. The main objective is to anticipate areas to investigate as potential sources of harm and social impact. Interaction with the system itself is limited at this stage.

The key artefacts developed by auditors at this stage:

  1. Define Audit Scope
  2. Product Requirement Document (PRD)
  3. AI Principles
  4. Use Case Ethics Review
  5. Social Impact Assessment

Mapping:

At the mapping stage the system is not yet actively tested; instead, auditors review what is already in place and map the different perspectives involved in the audited system.

This stage involves mapping internal stakeholders, identifying key collaborators for the execution of the audit, and orchestrating the appropriate stakeholder buy-in required for execution. Risks are prioritized at the mapping stage, and the Failure Modes and Effects Analysis (FMEA) begins (a worked example follows the artifact list below).

Key artefacts at this stage:

  1. Stakeholder Buy-In
  2. Conduct Interviews
  3. Stakeholder Map
  4. Interview Transcripts
  5. Failure Modes and Effects Analysis (FMEA)
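A standard way to operationalize an FMEA is to score each failure mode on severity, occurrence, and detectability, then rank by the resulting Risk Priority Number (RPN = severity x occurrence x detection). The sketch below follows that convention; the failure modes and scores are illustrative.

# A minimal FMEA prioritization sketch. Scores use the conventional
# 1-10 scales; the failure modes are made-up examples.
from dataclasses import dataclass

@dataclass
class FailureMode:
    description: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (rare) .. 10 (near certain)
    detection: int   # 1 (easily caught) .. 10 (invisible pre-release)

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("Higher false-reject rate for one demographic", 8, 5, 7),
    FailureMode("Training data leaks into the test split", 6, 4, 8),
    FailureMode("Model degrades on out-of-date inputs", 5, 7, 4),
]
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN {m.rpn:4d}  {m.description}")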

Artifact Collection:

At this stage, auditors identify and collect all required documentation from the product development process in order to prioritize opportunities for testing.

Key artifacts from auditors during this stage:

  1. Audit Checklist
  2. Model Cards
  3. Datasheets
  4. FMEA

Collecting these artifacts advances adherence to the organization's declared AI principles, such as 'Responsibility', 'Accountability', and 'Transparency'. A model card, for instance, can be maintained as a structured artifact that is versioned alongside the model, as sketched below.
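This is a minimal sketch of a model card as a machine-readable artifact; the fields loosely follow common model-card practice, and the concrete values are hypothetical placeholders, not drawn from the article.

# A model card as a plain dataclass, serialized to JSON so it can be
# stored and versioned alongside the model it documents.
import json
from dataclasses import asdict, dataclass, field

@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_use: str
    out_of_scope_uses: list = field(default_factory=list)
    evaluation_groups: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)

card = ModelCard(
    model_name="loan-default-classifier",  # hypothetical system
    version="2.1.0",
    intended_use="Pre-screening of consumer loan applications",
    out_of_scope_uses=["Employment decisions"],
    evaluation_groups=["age band", "gender", "region"],
    known_limitations=["Not validated on applicants under 21"],
)
print(json.dumps(asdict(card), indent=2))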

Testing:

At this stage, the majority of the testing activity is performed by the auditors. To gauge the system's compliance with the organization's prioritized ethical values, auditors execute a series of tests. To demonstrate the performance of the analysed system at the time of the audit, auditors engage with the system in a variety of ways and produce a series of artefacts (an example adversarial test is sketched after the list below).

  1. Review Documentation
  2. Adversarial Testing
  3. Ethical Risk Analysis Chart
  4. FMEA
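One simple adversarial test is a counterfactual probe: flip only a protected attribute and check that the model's decision does not change. The sketch below assumes a model object exposing a predict method that takes a feature dict; that interface and the feature names are illustrative assumptions, not from the article.

# A counterfactual fairness probe: the decision should be invariant to
# the protected attribute when everything else is held fixed.
def counterfactual_flip_test(model, features: dict,
                             protected: str, alternative) -> bool:
    """Return True if the decision is unchanged after flipping the
    protected attribute."""
    original = model.predict(features)
    flipped = dict(features, **{protected: alternative})
    return model.predict(flipped) == original

# Hypothetical usage with any model exposing predict(dict) -> label:
# ok = counterfactual_flip_test(model,
#                               {"gender": "f", "income": 52000},
#                               protected="gender", alternative="m")
# assert ok, "decision changed when only protected attribute changed"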

Reflection:

This phase of an audit is crucial for determining the ethical implications of the AI system and for ensuring that it aligns with the ethical expectations clarified in the audit scoping. At this phase, auditors thoroughly examine the results of the tests performed at the execution stage and compare them to the ethical principles established during scoping.

The ethical implications of the AI system are thoroughly evaluated and documented. This documentation is then used to inform decision-makers and stakeholders about the ethical implications of the AI system, allowing them to make informed decisions about its deployment.

  1. Remediation Plan
  2. Algorithmic Design History File (ADHF)
  3. Ethical Risk Analysis Chart
  4. Summary Report
  5. FMEA

Post-Audit:

The post-audit phase is an ongoing process: the ethical implications of an AI system must be reassessed regularly as new information becomes available or as the system evolves over time (a simple monitoring sketch follows the list below).

  1. Go / No-Go Decisions
  2. Design Mitigations
  3. Track Implementation
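As a minimal sketch of that ongoing assessment, the function below compares the model's accuracy on recent labelled batches against the level recorded at audit time and flags drift beyond a tolerance. The metric, window, and threshold are illustrative assumptions; in practice the quantities tracked would come from the audit's summary report.

# Flag performance drift relative to the accuracy recorded at audit
# time; a flagged result would trigger re-audit or design mitigations.
def check_drift(audited_accuracy: float, live_accuracies: list,
                tolerance: float = 0.05) -> bool:
    """Return True if the recent average has drifted beyond tolerance."""
    latest = sum(live_accuracies) / len(live_accuracies)
    return (audited_accuracy - latest) > tolerance

# e.g. accuracy was 0.91 at audit time; recent weekly batches declined:
if check_drift(0.91, [0.88, 0.84, 0.82]):
    print("Drift detected: trigger re-audit / design mitigations")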



Thank you for reading this article. I hope you find it informative.

Rupa Singh

Founder and CEO (AI-Beehive)

Author of 'AI Ethics with Buddhist Perspective'
