Putting AI Ethics into Action
Dr. Jagreet Kaur
Researcher and author working at the intersection of AI and quantum, helping enterprises on their Responsible AI, AI governance, and data privacy journey
As AI decisions increasingly influence people’s lives at scale, enterprises bear a growing responsibility to manage the potential ethical and socio-technical implications of AI adoption.
Accenture defines Responsible AI as the practice of designing, building, and deploying AI in a manner that empowers employees and businesses and fairly impacts customers and society.
Primary concerns of AI today
Principles of ethical artificial intelligence
How to Operationalize Data and AI Ethics
1. Identify existing infrastructure that a data and AI ethics program can leverage. The key to successfully creating a data and AI ethics program is using the power and authority of existing infrastructure, such as a data governance board that convenes to discuss privacy, cyber, compliance, and other data-related risks.
2. Create a data and AI ethical risk framework that is tailored to your industry. A good framework comprises, at a minimum, an articulation of the company's ethical standards (including its ethical nightmares), an identification of the relevant external and internal stakeholders, a recommended governance structure, and an articulation of how that structure will be maintained in the face of changing personnel and circumstances.
3. Change how you think about ethics by taking cues from the successes in health care. Key concerns about what constitutes privacy, self-determination, and informed consent, for example, have been explored deeply by medical ethicists, health care practitioners, regulators, and lawyers. Those insights can be transferred to many ethical dilemmas around consumer data privacy and control.
4. Optimize guidance and tools for product managers. While your framework provides high-level guidance, it is essential that guidance at the product level is granular; a sketch of what such product-level guidance can look like follows this list.
5. Build organizational awareness. Ten years ago, corporations scarcely paid attention to cyber risks, but they certainly do now, and employees are expected to have a grasp of some of those risks. Anyone who touches data or AI products, be they in HR, marketing, or operations, should understand the company's data and AI ethics framework.
6. Formally and informally incentivize employees to play a role in identifying AI ethical risks.
7. Monitor impacts and engage stakeholders. Creating organizational awareness, ethics committees, and informed product managers, owners, engineers, and data collectors is all part of the development and, ideally, procurement process.
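To make the product-level guidance of step 4 concrete, below is a minimal sketch, not a prescribed standard, of how granular guidance could be encoded as a checklist that product managers complete before an AI feature ships. The check names, fields, and the example product are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class EthicsCheck:
    """One granular, product-level check derived from the company-wide framework."""
    name: str
    description: str
    passed: bool = False
    evidence: str = ""  # link or note documenting how the check was satisfied

@dataclass
class ProductEthicsChecklist:
    """Checklist a product manager completes before an AI feature is released."""
    product: str
    checks: list = field(default_factory=list)

    def add(self, name: str, description: str) -> None:
        self.checks.append(EthicsCheck(name, description))

    def outstanding(self):
        """Checks that still block release."""
        return [c for c in self.checks if not c.passed]

    def ready_to_ship(self) -> bool:
        return not self.outstanding()

# Illustrative usage: the product name and check names below are assumptions.
checklist = ProductEthicsChecklist(product="loan-approval-model")
checklist.add("data_provenance", "Training data sources documented and licensed.")
checklist.add("informed_consent", "Users consented to the data uses involved.")
checklist.add("bias_testing", "Quantitative fairness checks run and reviewed.")
checklist.add("explainability", "Decision explanations available to affected users.")

if not checklist.ready_to_ship():
    print("Blocked on:", [c.name for c in checklist.outstanding()])
```

Encoding the checklist in a machine-readable form makes it auditable and lets release pipelines block deployment until every check carries evidence, which is what keeps high-level principles from staying on paper.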
Identify AI bias before you scale
The Algorithmic Assessment is a technical evaluation that helps identify and address potential risks and unintended consequences of AI systems across your business, to engender trust and build supportive systems around AI decision-making.
Use cases are first prioritized to ensure you are evaluating and remediating those that have the highest risk and impact.
Once priorities are defined, they are evaluated through the Algorithmic Assessment, which involves a series of qualitative and quantitative checks to support various stages of AI development.
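As one example of the kind of quantitative check such an assessment might include, the sketch below computes selection rates per demographic group and flags groups that fall below the commonly cited four-fifths (80%) threshold relative to the best-off group. This is a generic disparate impact check, not Accenture's methodology; the sample data, group labels, and threshold are illustrative assumptions.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions, threshold=0.8):
    """Flag groups whose approval rate is below `threshold` times the highest group rate."""
    rates = selection_rates(decisions)
    reference = max(rates.values())
    flagged = {g: r / reference for g, r in rates.items() if r / reference < threshold}
    return rates, flagged

# Illustrative data: (demographic group, did the model approve the application?)
sample = [("A", True)] * 80 + [("A", False)] * 20 + [("B", True)] * 55 + [("B", False)] * 45

rates, flagged = disparate_impact(sample)
print("Approval rates:", rates)            # {'A': 0.8, 'B': 0.55}
print("Below 80% of reference:", flagged)  # {'B': 0.6875}
```

A check like this is only one signal; in practice it would sit alongside qualitative reviews of data provenance, consent, and intended use before any remediation decision is made.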
An organization’s board of directors and C-suite should view the ethical use of AI as an imperative, one that can’t be ignored. To do so, C-suite leaders should leverage an AI framework like the one below.