Putting AI Ethics into Action
https://www.xenonstack.com


As AI decisions increasingly influence people’s lives at scale, so grows the enterprise’s responsibility to manage the ethical and socio-technical implications of AI adoption.

Accenture defines Responsible AI as the practice of designing, building, and deploying AI in a manner that empowers employees and businesses and fairly impacts customers and society.

  • Trust in AI is key to realizing value from this technology. But many companies struggle to overcome the perceived risks associated with it.
  • To create trust in AI, organizations must move beyond defining Responsible AI principles and put those principles into practice.

Primary concerns of AI today

  • Technological singularity
  • AI impact on jobs
  • Privacy
  • Bias and discrimination
  • Accountability

Principles of ethical artificial intelligence

[Image: https://ethical.institute/principles.html]

How to Operationalize Data and AI Ethics

[Image: https://www.accenture.com/in-en/insights/artificial-intelligence/responsible-ai-principles-practice]

1. Identify existing infrastructure that a data and AI ethics program can leverage. The key to a successful creation of a data and AI ethics program is using the power and authority of existing infrastructure, such as a data governance board that convenes to discuss privacy, cyber, compliance, and other data-related risks.

[Image: https://www.turing.ac.uk/]

2. Create a data and AI ethical risk framework that is tailored to your industry. A good framework comprises, at a minimum, an articulation of the company’s ethical standards (including its ethical nightmares), an identification of the relevant external and internal stakeholders, a recommended governance structure, and an articulation of how that structure will be maintained in the face of changing personnel and circumstances.
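One way to make such a framework concrete is to encode it as a versioned engineering artifact that can be reviewed like any other. The sketch below is purely illustrative: the class names, severity labels, and example risks are assumptions, not part of any standard framework.

```python
from dataclasses import dataclass

# Illustrative sketch: an ethics risk framework encoded as data, so the
# standards, stakeholders, and governance owner are explicit and reviewable.
# All names and categories here are hypothetical.

@dataclass
class EthicalRisk:
    name: str            # e.g. "biased credit scoring"
    severity: str        # "nightmare", "high", "medium", or "low"
    stakeholders: list   # internal and external parties affected

@dataclass
class EthicsFramework:
    standards: list            # articulated ethical standards
    risks: list                # identified risks, worst cases first
    governance_owner: str      # body that maintains the framework
    review_cycle_months: int   # cadence for revisiting as personnel change

    def nightmare_scenarios(self):
        """Return the risks articulated as the company's ethical nightmares."""
        return [r for r in self.risks if r.severity == "nightmare"]

framework = EthicsFramework(
    standards=["fairness", "privacy", "accountability"],
    risks=[
        EthicalRisk("biased credit scoring", "nightmare", ["applicants", "regulators"]),
        EthicalRisk("opaque model decisions", "high", ["customers", "support staff"]),
    ],
    governance_owner="Data Governance Board",
    review_cycle_months=6,
)
print([r.name for r in framework.nightmare_scenarios()])  # -> ['biased credit scoring']
```

Keeping the framework in code (or equivalent structured config) makes the "how it will be maintained" question concrete: changes go through review, and the governance owner is named in the artifact itself.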


3. Change how you think about ethics by taking cues from the successes in health care. Key concerns about what constitutes privacy, self-determination, and informed consent, for example, have been explored deeply by medical ethicists, health care practitioners, regulators, and lawyers. Those insights can be transferred to many ethical dilemmas around consumer data privacy and control.



4. Optimize guidance and tools for product managers. While your framework provides high-level guidance, it is essential that guidance at the product level is granular.
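Product-level granularity can mean turning principles into concrete, checkable items a product manager can act on. The following is a minimal sketch under assumed requirements; the checklist items and function names are hypothetical examples, not a prescribed standard.

```python
# Hypothetical sketch: a pre-launch gate that translates high-level ethics
# principles into concrete items a product team must complete.
# The checklist contents are illustrative assumptions.

PRODUCT_ETHICS_CHECKLIST = [
    "fairness_metrics_reported",
    "privacy_review_completed",
    "human_override_available",
    "model_card_published",
]

def launch_gate(completed: set) -> tuple:
    """Return (approved, missing_items) for a proposed product release."""
    missing = [item for item in PRODUCT_ETHICS_CHECKLIST if item not in completed]
    return (len(missing) == 0, missing)

ok, missing = launch_gate({"fairness_metrics_reported", "privacy_review_completed"})
print(ok, missing)  # -> False ['human_override_available', 'model_card_published']
```

A gate like this gives product managers an unambiguous answer to "are we allowed to ship?", which is exactly the granularity the high-level framework cannot provide on its own.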


5. Build organizational awareness. Ten years ago, corporations scarcely paid attention to cyber risks, but they certainly do now, and employees are expected to have a grasp of some of those risks. Anyone who touches data or AI products — be they in HR, marketing, or operations — should understand the company’s data and AI ethics framework.


6. Formally and informally incentivize employees to play a role in identifying AI ethical risks.


7. Monitor impacts and engage stakeholders. Creating organizational awareness, ethics committees, and informed product managers, owners, engineers, and data collectors is all part of the development and, ideally, procurement process.


Identify AI bias before you scale

The Algorithmic Assessment is a technical evaluation that helps identify and address potential risks and unintended consequences of AI systems across your business, engendering trust and building supportive systems around AI decision-making.

Use cases are first prioritized to ensure you are evaluating and remediating those that have the highest risk and impact.

Once priorities are defined, they are evaluated through the Algorithmic Assessment, a series of qualitative and quantitative checks that support the various stages of AI development.


[Image: https://www.accenture.com/in-en/insights/artificial-intelligence/responsible-ai-principles-practice]

  1. Set goals around your fairness objectives for the system, considering different end users.
  2. Measure and discover disparities in potential outcomes and sources of bias across various users or groups.
  3. Mitigate any unintended consequences using proposed remediation strategies.
  4. Monitor and control systems with processes that flag and resolve future disparities as the AI system evolves.
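The "measure and discover" step above can be sketched with a simple fairness metric. The example below computes a demographic-parity gap, the difference in favourable-outcome rates between groups; the data, group names, and the 0.2 tolerance are made-up illustrations, and real thresholds are policy decisions, not technical constants.

```python
# Hedged sketch of measuring outcome disparities across groups.
# A large gap in favourable-outcome rates is one possible signal of bias
# worth flagging for the mitigation step.

def positive_rate(outcomes):
    """Fraction of favourable decisions (1 = favourable, 0 = unfavourable)."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group):
    """Return (max rate - min rate, per-group rates) across groups."""
    rates = {g: positive_rate(o) for g, o in outcomes_by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# Illustrative data: e.g. loan approvals for two demographic groups
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 6/8 = 0.75 approval rate
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],   # 3/8 = 0.375 approval rate
}

gap, rates = demographic_parity_gap(outcomes)
print(round(gap, 3))   # -> 0.375
if gap > 0.2:          # example tolerance only; set by policy, not code
    print("flag for mitigation")
```

In practice this check would run both before deployment (measure and discover) and continuously afterwards (monitor and control), so that disparities introduced as the system evolves are caught and resolved.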


An organization’s board of directors and C-suite should view the ethical use of AI as an imperative, one that cannot be ignored. To do so, C-suite leaders should leverage an AI framework like the one below.

[Image: https://www.forbes.com/sites/insights-ibmai/2020/03/26/trust-at-the-center-building-an-ethical-ai-framework/?sh=5b6537a67bc7]
