The Importance of Getting Your Organisation's AI Ethics Right

Introduction

Let it be known that I was something of an Artificial Intelligence (AI) sceptic until recently. Perhaps I watched Terminator too many times growing up?


Since I’ve been spending more time with our Chief AI Officer David Bartram-Shaw (DBS!) and picking his brains on the subject, I am becoming increasingly convinced that AI has the potential to revolutionise the way we live and work, offering solutions to some of the world's most pressing problems whilst improving our quality of life in countless ways. From self-driving cars to discovering cures for major diseases, all the way back to intelligent personal assistants, the applications of AI are numerous and varied.


However, as we move towards a future increasingly shaped by this technology, it is essential that we also consider the ethical implications that come with it. For centuries humankind has relied upon outside assistance (animals, machinery, technology) to do things faster, cheaper, safer and better. I genuinely believe AI sits in this bracket as well, though we must consider the socioeconomic impact it could have on society.


For me, AI will become the single most powerful tool an organisation can leverage to make money, save money and reduce risk in the years ahead.


Indeed, there comes a point where we must consider the implications of the divide between the “Haves” (organisations with the skills, data, technology and financial capital to fuel AI) and the “Have Nots” (organisations without them), and what this could mean for things like market competitiveness and fairness in the coming years.


Personally, I have not seen as rapid a pace of technological advancement as in the last six months, and with more people talking about AI, exploring its use in their business and increasingly understanding how it can be applied, this advancement will only accelerate. I think it’s fair to say that the AI systems of today are already far beyond what we could have imagined just a few years ago.


Therefore, as AI becomes increasingly sophisticated and embedded into everyday life, it is crucial that we ensure it is being used for the benefit of society and that its deployment by enterprise organisations is guided by clear ethical principles. In this blog, I will discuss the importance of having a well-governed AI ethics framework in place, the questions organisations should ask when adopting AI, and the key considerations to embed in their respective control policies.


Why do organisations need an AI ethics framework?


Over the last several years, there have been many examples of AI going wrong in industry, which illustrate the importance of having strong ethical frameworks in place to govern the use of AI technology. Some of these examples include:


Bias and Discrimination: One of the biggest challenges with AI is that it can perpetuate and amplify existing biases in society. For example, facial recognition systems have been found to be less accurate at recognising people of colour, and some hiring algorithms have been found to discriminate against women and minority groups.


Privacy Concerns: AI systems often rely on vast amounts of personal data to function, which can raise privacy concerns. For example, there have been instances where personal data collected by AI systems has been used for purposes other than those for which it was originally collected, such as targeted advertising.


Unintended Consequences: AI systems can sometimes have unintended consequences, such as when a chatbot designed to provide customer support ended up making racist and sexist remarks.


Lack of Explainability: Another challenge with AI is that it can be difficult to understand how AI enabled systems are making decisions. This lack of explainability can be problematic in fields such as healthcare, where AI systems are being used to make diagnoses or recommend treatments and it is important to understand the reasoning behind these decisions, particularly in litigious regions where an incorrect diagnosis or treatment could result in legal challenges by patients.


Security Risks: AI can also pose security risks, such as when AI enabled systems are used to automate cyberattacks or to manipulate public opinion. The widely publicised Cambridge Analytica scandal in the United Kingdom is a case in point.


These examples alone highlight the importance of having strong ethical frameworks in place to govern the use of AI, and to ensure that it is used in a manner that is transparent, accountable, fair, and non-discriminatory. So, if we unpack these challenges, what are some of the questions organisations should ask themselves when they are seeking to apply AI in their business?
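To make the bias challenge above concrete, here is a minimal sketch of the kind of fairness audit an ethics framework might mandate for a hiring or credit model. The group labels, the sample data and the use of the "four-fifths" selection-rate heuristic are my own illustrative assumptions, not a prescription from any particular standard:

```python
# Illustrative sketch: auditing a model's decisions for disparate impact
# across groups. Data and thresholds are hypothetical.

def selection_rates(decisions):
    """Compute the approval rate per group from (group, approved) pairs."""
    totals, approved = {}, {}
    for group, was_approved in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(was_approved)
    return {g: approved[g] / totals[g] for g in totals}

def passes_four_fifths_rule(rates):
    """Flag potential disparate impact when any group's selection rate
    falls below 80% of the highest group's rate."""
    highest = max(rates.values())
    return all(rate >= 0.8 * highest for rate in rates.values())

# Hypothetical audit sample: (group label, did the model approve?)
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

rates = selection_rates(decisions)
print(rates)                           # per-group selection rates
print(passes_four_fifths_rule(rates))  # does the model pass the check?
```

A check like this is cheap to run on every retrained model, which is exactly the kind of repeatable control an ethics framework should encode rather than leaving bias detection to ad hoc review.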


What questions must organisations ask themselves when looking to adopt AI?


As AI enabled applications play an increasingly embedded role in our lives, it is critical that their decision-making processes are transparent and explainable, to ensure they are being used ethically and responsibly. For instance, what could be the impact if an AI enabled credit decision engine wrongfully refuses a credit card to a customer based on factors such as gender, race or political disposition? Equally, what would be the material impact on the organisation if they could not explain the outcomes or decisions being generated by an AI enabled solution or business service to the customer or market regulators?


Ultimately, AI ethics are key to ensuring that these types of systems are designed with accountability, fairness, and human values in mind, and that their impact on society is understood and well-governed not just when they are initially deployed but in a cycle of continuous review and improvement. Accordingly, organisations should be asking themselves the following questions as they look to establish an ethics framework for AI.



  • Why do we want to leverage AI, and for what purpose?
  • What are the potential consequences of using AI in our business, both positive and negative?
  • How will AI systems be used to make decisions and what data will they use to inform these decisions?
  • How will we ensure that AI systems are transparent, explainable, and free from bias?
  • Who will be responsible for overseeing the ethical use of AI in our business, and how will they ensure that it aligns with our values, ethical principles and regulatory commitments?
  • How will we ensure that the privacy and security of individuals is protected when using AI systems, and what data protection regulations will we need to comply with?
  • How will our approach to using AI need to be tailored for products and services that operate across borders?
  • How will we engage with stakeholders, including employees, customers, and the wider industry, to ensure that their perspectives and concerns are taken into account when using AI?
  • How will we continuously monitor and evaluate the use of AI in our business, and how will we ensure that it remains aligned with our values and ethical principles over time?


By asking these questions, organisations can start to gain a deeper understanding of the implications of using AI in their business. The answers to these questions can then be used to form the backbone of an AI ethics framework. Let’s explore what I mean by that as well as the key considerations for enterprise organisations when looking to establish one.


What is an AI Ethics Framework?


We can simply define an AI ethics framework as a set of guidelines, principles and processes that govern the ethical use of artificial intelligence technology.


Typically, the purpose of any such framework is to ensure that AI is used in a manner that aligns with core human values such as transparency, accountability, fairness, and non-discrimination. Ultimately, any such framework should be established to ensure that the deployment of AI, for any specific use case, cannot cause harm to individuals or society as a whole. It should also consider the responsibility and sustainability of any AI enabled capability, and include pillars such as risk management and intervention and resolution during times of erroneous behaviour.


What capabilities does an AI Ethics Framework need to include?


When establishing an AI ethics framework, enterprise organisations require several key capabilities to ensure that AI systems are being used in a responsible and transparent manner. These capabilities usually include:


  • Policy Development: Enterprise organisations must have a well-defined set of policies and procedures for the ethical use of AI. Typically, these should be aligned with their corporate values and regulatory commitments.


  • Strong Governance: Organisations must have a clear governance structure in place, with roles and responsibilities defined, documented and allocated for ensuring that AI systems are used in a responsible and ethical manner. Indeed, this should include the allocation of a material risk taker for occasions when AI goes wrong. Equally, it should include operational procedures and runbooks for risk mitigation, communication and notifications when customers or the wider market may have been incorrectly impacted as a consequence of AI operating outside of explainable patterns.


  • Data Management & Governance: It is imperative that organisations have strong data management practices in place, to ensure that data is collected, stored, and used in a responsible and ethical manner. This is critical to support transparency and the explainability of the machine learning models that fuel AI driven actions. If you can’t trace the source data used to train an ML model, or explain why you chose certain data sets in the first instance, then you will very much be in a sticky situation. This is where implementing an MLOps approach can offset these types of risks.


  • Risk Assessment: Organisations must have a set of robust processes for assessing the risks associated with AI enabled applications, to ensure that they are being used in a manner that is safe, secure, and compliant with relevant regulations. These should be drafted by knowledgeable product subject matter experts (SMEs) from the business, in collaboration with AI aware engineers from technology functions. Ultimately, they should also be approved and signed off by material risk takers.


Each of these capabilities is of equal importance, and typically you can’t do one without answering another. Starting with policy definition, organisations can more easily define their operational procedures, governance needs and the potential impact of new and unknown risks to their businesses, customers and the wider markets when adopting AI.
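On the data governance point above, the "can you trace the source data?" question is one of the few that can be partly answered in code. Below is a minimal sketch of recording training-data lineage so a model's source data sets can be audited later; the field names and the in-memory registry are illustrative assumptions rather than the schema of any specific MLOps tool:

```python
# Illustrative sketch: an auditable lineage record linking each trained
# model to an exact snapshot of its training data.
import hashlib
import json
from datetime import datetime, timezone

def fingerprint(dataset_bytes):
    """Content hash that uniquely identifies an exact dataset snapshot."""
    return hashlib.sha256(dataset_bytes).hexdigest()

def register_training_run(registry, model_name, dataset_bytes, source, rationale):
    """Append an auditable record of which data trained which model, and why."""
    record = {
        "model": model_name,
        "dataset_sha256": fingerprint(dataset_bytes),
        "source_system": source,
        "why_this_data": rationale,  # the 'why did we use this data?' answer
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    registry.append(record)
    return record

registry = []
record = register_training_run(
    registry,
    model_name="credit-risk-v1",
    dataset_bytes=b"applicant_id,income,outcome\n",
    source="core-banking-extract",
    rationale="Approved features only; excludes protected attributes",
)
print(json.dumps(record, indent=2))
```

In practice a real MLOps platform would persist this registry and version the data itself, but even this level of record keeping answers the two audit questions the paragraph above raises: which data trained the model, and why that data was chosen.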


Let’s briefly consider the key inclusions for building a sound AI policy for enterprise organisations.


What needs to be included in an AI Policy?


For highly regulated enterprise organisations, an AI policy must include several key elements to ensure that the use of AI is compliant with relevant regulations and ethical principles. Some of these elements include:


  • Legal and Regulatory Compliance: The policy must ensure that the use of AI is compliant with relevant laws and regulations, including data protection regulations such as the EU General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).


  • Ethical Principles: AI policies must outline the ethical principles that will guide the use of AI, including principles such as transparency, accountability, fairness, and non-discrimination. Generally speaking, they should outline the acceptable use cases for AI and where in the business it can be deployed (e.g., customer facing vs internal operational processing).


  • Data Management: Organisations must outline the processes for collecting, storing, and using data in a responsible and ethical manner, and should ensure that the privacy and security of individuals are protected.


  • Engineering Practices: The policy should refer to the data and ML engineering standards that should be adhered to, and the stages of validation, testing and acceptance that must be followed prior to production deployment. Indeed, I’m keen to stress that I don’t believe this should be a one-size-fits-all approach for each use case. These standards and guardrails should vary based on the level of risk an organisation is willing to accept, given the potential impact of an AI enabled service causing harm to customers, the organisation or the wider market.


  • For instance, you may want to apply a more stringent set of policy standards to financial and customer impacting AI use cases that leverage highly sensitive data from key systems of record. Conversely, use cases that support internal business processes, and do not have a material impact on the business if disrupted, should generally have less stringent policies to adhere to. This is why it is imperative for organisations to ensure that their AI policies require impact or risk assessments prior to the development, training and deployment of the ML models that underpin AI use cases.
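The tiered approach described above can be made machine-checkable. Here is a minimal sketch that maps a use case's attributes to a risk tier and the controls that tier requires before deployment; the tier names, criteria and control lists are illustrative assumptions, not a published regulatory scheme:

```python
# Illustrative sketch: risk-tiering AI use cases so policy controls
# scale with potential impact. Tiers and controls are hypothetical.

REQUIRED_CONTROLS = {
    "high": ["impact assessment", "bias audit", "explainability report",
             "material-risk-taker sign-off", "continuous monitoring"],
    "medium": ["impact assessment", "bias audit", "periodic review"],
    "low": ["impact assessment"],
}

def risk_tier(customer_facing, uses_sensitive_data, business_critical):
    """Customer impact combined with sensitive data pushes the tier to the
    top; purely internal, non-critical workloads stay low risk."""
    if customer_facing and uses_sensitive_data:
        return "high"
    if customer_facing or uses_sensitive_data or business_critical:
        return "medium"
    return "low"

def controls_for(use_case):
    """Return the tier and the controls that must pass before deployment."""
    tier = risk_tier(**use_case)
    return tier, REQUIRED_CONTROLS[tier]

# A customer-facing credit decision engine on sensitive data:
tier, controls = controls_for({"customer_facing": True,
                               "uses_sensitive_data": True,
                               "business_critical": True})
print(tier, controls)
```

Encoding the tiers this way keeps the "stringent for customer impact, lighter for internal tooling" principle enforceable in a deployment pipeline rather than leaving it to interpretation.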


All in all, the policies must outline who, what, why, when and how the following seven criteria will be upheld and demonstrated:


  • Human agency and oversight
  • Technical robustness and safety
  • Privacy and data governance
  • Transparency
  • Diversity, non-discrimination and fairness
  • Societal and environmental wellbeing
  • Accountability



Ultimately, it’s important to make these policies easy to find, understand, apply, enforce and audit in a way that is friction free. Otherwise, the opportunity and value associated with leveraging AI could be lost if organisations tie themselves up in red tape and governance overheads.



What standards and guidance can your organisation refer to?


Whilst the adoption and application of AI is accelerating at an unprecedented rate, industry standards, best practice and regulatory protocols are catching up. The UK, US and Canadian governments are taking a proactive approach to discussing AI and formalising their respective responses around how they intend to govern its use. Links to each of these papers can be found below:


United States of America - White House: Blueprint for an AI Bill of Rights


Canada - An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts


United Kingdom - National AI Strategy


Furthermore, bodies like the European Commission have published a set of Ethics Guidelines for Trustworthy AI, whilst NIST (a dependable reference point for standard setting) published its AI Risk Management Framework in January 2023.


Hopefully the guidance set out in this blog, and the reference points from the editorials, consultation papers and standards outlined above, can be put to good use in your organisation's efforts to establish an AI ethics framework.


In Summary


Over the course of this blog I have tried to establish the importance of having a sound AI ethics framework. I’ve touched on how the use of AI has the potential to bring many benefits, but also poses significant ethical challenges, many of which we probably don’t yet fully understand in terms of how they could impact humankind. However, for organisations to ensure that AI is used in a responsible and sustainable manner, it is essential that they have a framework in place to guide its use.


Any such framework should be informed by the organisation's values and ethical principles, and should be reviewed and updated on a regular basis to ensure that it remains aligned with changing circumstances, regulatory developments and emerging risks.


A well-designed AI ethics framework is essential for ensuring that AI is used in a manner that is transparent, accountable, fair, and non-discriminatory. This means an AI ethics framework worth its weight in gold should establish clear policies and controls that demonstrate the following:


  • Human agency and oversight
  • Technical robustness and safety
  • Privacy and data governance
  • Transparency
  • Diversity, non-discrimination and fairness
  • Societal and environmental wellbeing
  • Accountability


“Good AI ethics” looks different for every organisation, but some common themes are consistent across industry. These include ensuring that AI enabled systems are transparent and explainable, and that data is collected, stored and used in a responsible and ethical manner.


By having a strong AI ethics framework in place, organisations can ensure that they are using AI in a way that benefits society, as well as driving their respective revenue targets and strategic objectives, all whilst respecting the rights of individuals and the communities they belong to.


