An easy to digest AI Ethics Framework
Photo by cottonbro studio: https://www.pexels.com/photo/bionic-hand-and-human-hand-finger-pointing-6153354/

You don't need to be a deep technologist to get a handle on how to assess the impact of AI on your organisation, including when you are responsible for platform or vendor selection.

The Australian Government's AI Ethics Framework, and the eight AI Ethics Principles that flow from it, offer an easy-to-follow mechanism for governance and for practically reviewing how you invest in AI technology, whether as an originator or a customer.

It's worth having a read regardless of how your role interacts with AI, so you have a better handle on the kinds of questions to ask when using, selecting, or creating any application of the technology in your organisation.

Accessibility, transparency, and data security are paramount, which means the responsibility doesn't sit squarely with your Chief Technology Officer to police its use.

In brief, the eight principles are:

  1. Human, societal and environmental wellbeing: AI systems should benefit individuals, society and the environment.
  2. Human-centred values: AI systems should respect human rights, diversity, and the autonomy of individuals.
  3. Fairness: AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.
  4. Privacy protection and security: AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.
  5. Reliability and safety: AI systems should reliably operate in accordance with their intended purpose.
  6. Transparency and explainability: There should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI, and can find out when an AI system is engaging with them.
  7. Contestability: When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or outcomes of the AI system.
  8. Accountability: People responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.

There's a quick explainer video too: