Prudent Public-Sector Procurement of AI Products


This article was co-written with Muriam Fancy, AI Ethics Researcher at the Montreal AI Ethics Institute

Governments are increasingly turning to AI-enabled systems to streamline services that are labor- and time-intensive across multiple sectors. However, procuring these systems and deploying them carries significant implications. Because of gaps in understanding those implications and how to properly measure the risks, governments often procure and deploy solutions that are biased and risk-heavy, with the potential to cause significant harm to the public.

When things go horribly wrong ...


A recent example is the UK government’s deployment of a flawed AI-enabled system that disproportionately impacted students taking A-level courses: students were assigned grades based on historical data, and discrepancies between teachers’ assessments and the grades produced by the system led to university admission offers being rescinded for many students.

In this case, one cause for concern was that the AI-enabled system was not trained on representative data; in addition, the system’s operations were not transparent to the public, which further diminished trust. Despite these flaws and the resulting public outrage, there was no clear accountability framework for the students impacted by the system.

How can we avoid causing such negative consequences when deploying such systems for the public? 


One methodology is to empower regulators to ask the right questions during the procurement phases. Cases like this demonstrate that there is a significant gap in knowledge regarding risk, transparency, and the complexity of AI-enabled systems. This lack of understanding manifests through ill-informed policies that either over- or under-regulate such systems.

Specifically, the failure to demand transparency from the system, and the absence of accountability and recourse mechanisms, exacerbated the crisis in the UK.

How do we ask the right questions?


Asking the right questions involves a grounded understanding of the capabilities of AI-enabled systems that extends beyond dichotomies of under- and overestimation of what is possible to achieve with the systems.

Involving technical experts sourced from both research and industry is essential to creating a regulatory ecosystem with adequate capacity to serve public welfare. Currently, procurement officers are ill-equipped to evaluate the implications of systems that utilize AI. Recent guidelines from the NHSX provide some direction for officers procuring solutions in healthcare. While these are valuable in referencing existing legislation in the healthcare domain, similar guidelines are required for other domains, with explicit references to domain-specific regulations where they exist.

Even more important than a general awareness of how AI-enabled systems function is up-skilling in the specific uses of AI, and its limitations, within a given domain. We make this distinction because the same techniques may be applied in different domains with varying levels of success, so awareness of domain-specific applications is essential for procurement officers to make well-informed decisions.

I've done the above, am I ready?

This needs to be a continual effort since the pace of change in the field is quite rapid and the capabilities landscape is ever-evolving.

Supplementing this with accountability and liability requirements aligned with domain-specific regulations is also essential. Finally, it is important to push the developers of these systems away from using IP protection as a crutch for non-disclosure. Without the ability to audit the inner workings of a system, regulators risk becoming puppets of manufacturers who, for the most part, will lean towards minimal disclosure in order to maintain their competitive edge and abdicate responsibility towards their users.


One way to do this is for the federal government to develop a risk assessment framework, similar to the one the Canadian government created to measure the risks of AI systems deployed in the public sector. Especially where the state is focused on upholding democracy, human rights, and transparency, a risk framework helps regulators understand the public’s potential concerns and whether policies or laws exist to protect citizens from the fallout of deploying such systems.
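To make the idea concrete, here is a minimal sketch of how a questionnaire-style risk assessment can translate answers into an impact tier, loosely in the spirit of Canada's tool. Every question, weight, and threshold below is illustrative and invented for this example, not drawn from any official framework.

```python
# Toy sketch of a procurement risk-assessment score. All questions,
# weights, and thresholds are hypothetical, for illustration only.

QUESTIONS = {
    "affects_legal_rights": 3,       # output affects rights or benefits
    "fully_automated_decision": 2,   # no human in the loop
    "uses_personal_data": 2,
    "vendor_discloses_training_data": -1,  # disclosure lowers risk
    "recourse_mechanism_exists": -2,       # an appeal path lowers risk
}

def risk_score(answers: dict) -> int:
    """Sum the weights of every question answered True."""
    return sum(w for q, w in QUESTIONS.items() if answers.get(q, False))

def impact_level(score: int) -> str:
    """Map a raw score to a coarse impact tier."""
    if score >= 5:
        return "high"
    if score >= 2:
        return "moderate"
    return "low"

# Example: a fully automated system affecting legal rights scores 5 -> "high",
# triggering stricter procurement requirements (audits, human review, recourse).
```

The design choice worth noting is that mitigations (disclosure, recourse) carry negative weights, so the framework rewards vendors for transparency rather than only penalizing risk factors.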

When governments fail to ask the right technical questions about a solution’s risks, the burden falls on them rather than on the system’s manufacturers, as demonstrated by Allstate’s deployment of insurance-premium pricing models in the US. Regulators were not only woefully under-equipped but also ineffective in levying penalties and rendering decisions, which allowed Allstate to retry its obfuscation techniques in other states, where it slipped the system past regulatory scrutiny.

Working with the public can help!


The second step that can supplement this solution to empower regulators is to work directly with the public. This can include public consultations, having members of the public identify areas of application, and involving the public in collecting data on the system’s misuses and negative consequences. A proposal similar to ethics bug bounties can help regulators gain a better understanding of where systems go wrong and build a bank of evidence that pushes regulators to ask the hard questions and limit the market power of these firms.

Regulators are often not stakeholders who will be directly impacted by the use of these systems, making it difficult for them to predict potential risks. Public consultations can take the form of workshops, conferences, or working groups composed of people with lived experience, so that these impacts can be better identified.

Let's avoid participation theatre and tokenization

But tokenization of public consultations, dubbed “participation theatre”, must be avoided: feedback from these consultations needs to be meaningfully incorporated and tracked over the course of the design, development, and deployment of such systems. Public consultation matters because it builds trust and displays transparency. Building trust through public engagement makes citizens far more willing to use these systems, and it signals a level of government competency that will be important in the long term as regulators continue to procure AI-enabled systems for public use.

You can find out more about my work here: https://atg-abhishek.github.io

Todd D. Lyle

Former Cloud Computing and Health Care Entrepreneur | Veteran NATO Peacekeeper and U.S. Army Black Hawk Pilot.

3y

It’s everybody’s dance. Bring your moves and the “bug bounty.”

Allan Alter

Senior Manager, Data Governance at PwC

3y

The World Economic Forum's program in AI and Machine Learning has produced a tool to aid public sector procurement of AI systems. This guidebook is designed to unlock public-sector adoption of AI through government procurement and a set of complementary tools to demonstrate the emerging global consensus on the responsible deployment of AI technologies. It can be downloaded here: https://www.weforum.org/reports/ai-procurement-in-a-box

Lauren Maffeo

Product at the State of Maryland / Author of "Designing Data Governance from the Ground Up"

3y

Abhishek, this is great. Let me know if you'd like to co-write an op-ed on this subject for Springer's AI and Ethics journal. (I serve on the editorial board.)

Katrina Ingram

AI & Ethics. Privacy (CIPP/C). Diversity. Speaker. Founder, Ethically Aligned AI | LinkedIn Instructor | Adjunct Faculty

3y

Great article. Love the part about public consultation (and not participation theatre). Public education is also needed so that richer engagement can happen.

David C Martin

Simple smart home automation: github.com/ageathome

3y

The first question should be what’s your feedback loop to correct errors and improve accuracy and precision of inferences?
