Implementing AI Ethics: Complex architectures
Matthew Newman MAICD
AI Governance | AI Safety | Tech Strategy | Change & Impact | Founder TechInnocens
According to 2019 research by the Health Ethics and Policy Lab, there were 84 AI ethics principles, frameworks and declarations at the time, with more on the way. It's not clear how many lists would be ideal, but with a growing consensus that implementation is overdue, and that we perhaps have enough lists of principles for now, the goals are set. The work of bringing the goal of responsible use to life, though, is a further level of challenge. We've decided the music we might like to play; now might be a good time to learn an instrument.
In this series of posts I'll look at some of the effort involved in implementation, covering key considerations for the board and C-suite, and some perhaps overlooked realities of applying AI ethics principles in real-world adoption.
One big Proof-of-Value
For much of industry and the public sector, implementation of AI and ML technology to date has been through limited, isolated projects or proofs-of-value. When leadership asks how to scale any demonstrated value, the recognition follows that in-house, linear development from business need to in-life operation is likely to be the exception rather than the rule. That is a challenge for many of the current frameworks for Ethical AI.
As the AI economy matures, new options for adoption become possible, such as AI-as-a-Service (e.g. GPT-3 and similar), accessed through APIs on a pay-for-use model. Similarly, organisations that DO invest in in-house creation want to sweat the data and models for as much value as possible. Single-use models are expensive; a 'model-repository' approach extracts more value from the work of developers and data scientists, whether staff or third party.
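As a purely illustrative sketch of what pay-for-use adoption implies, the service call can be wrapped so that consumption is metered at the point of use; the service itself is stubbed here, since the real API, its signature and its pricing are all assumptions rather than any specific vendor's interface.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class MeteredAIClient:
    """Wrap any text-completion callable (e.g. a hosted AIaaS endpoint)
    and record per-call usage, since pay-for-use pricing makes
    consumption a governance concern as well as a cost line."""
    complete: Callable[[str], str]  # underlying service call (stubbed below)
    calls: int = 0
    chars_sent: int = 0

    def __call__(self, prompt: str) -> str:
        # Count the call and the volume sent before delegating to the service.
        self.calls += 1
        self.chars_sent += len(prompt)
        return self.complete(prompt)

# Stand-in for the remote service; a real deployment would call the vendor API.
fake_service = lambda prompt: prompt.upper()
client = MeteredAIClient(complete=fake_service)
result = client("summarise this report")
```

The point of the wrapper is that usage data accrues wherever the capability is consumed, which is exactly the information an organisation needs once the model is no longer something it built or hosts itself.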
Handing users at the organisation's edge the ability to flex the models also derives more value from technical resources that are expensive and scarce. These users may be explicitly empowered to re-purpose tools, or may simply decide themselves to leverage that investment for more tasks.
Finally, an organisation may simply opt to buy something already integrated with its business software. Such vendors are rapidly adding higher/cognitive functionality to their products to remain competitive.
Not just the developers
This reality of AI tech adoption in our private and public institutions sits awkwardly with frameworks created to promote responsible and ethical use, which focus heavily on the development process as the locus for effecting positive outcomes.
What is clear is that the reality of AI technology adoption will produce a more complex landscape than the development of a set of discrete, stand-alone systems would. This demands a more mature approach to governance across a wider scope of the organisation: governance of an enterprise capability that uses data, in-house technology, staff knowledge and judgement, external services, purchased products and third-party delivery to realise business goals, with ethical and responsible intent baked in.

By setting the goals of ethical intent at the right level, every actor involved in realising a business outcome can play their part fully aware, and empowered to ensure their piece of the solution incorporates all the necessary steps.
Implementation of AI Ethics is a whole-org activity.
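One way to picture that whole-org framing is a single set of governance controls applied to every AI capability regardless of origin: in-house model, AI-as-a-Service or purchased product. The control names and record format below are hypothetical, a minimal sketch rather than any real framework:

```python
# Illustrative organisation-level gate: the same governance controls apply
# to any AI capability, whatever its origin. All names are hypothetical.
REQUIRED_CONTROLS = {"bias_assessment", "human_oversight", "data_provenance"}

def deployment_gaps(capability: dict) -> set:
    """Return the required controls a capability has not yet evidenced."""
    return REQUIRED_CONTROLS - set(capability.get("controls_evidenced", []))

# A purchased product is held to the same bar as an in-house model.
vendor_tool = {
    "name": "forecasting-addon",
    "origin": "purchased_product",
    "controls_evidenced": ["human_oversight"],
}
gaps = deployment_gaps(vendor_tool)
```

Because the check keys off what is evidenced rather than who built the system, the same gate works whether the "development process" happened in-house, at a vendor, or behind an API, which is the shift the frameworks above struggle with.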
TechInnocens provides C-Suite and board level advice and guidance on the implementation of AI Ethics frameworks using pragmatic, achievable interventions grounded in the reality of technology adoption.