Much Ado About Facial Recognition
Cole Cioran
Guides governments across Canada on how to build better IT organizations.
The tug-of-war between individual liberties and the common good played out in the news this past week. If you missed it, Peel and York Regional Police implemented Idemia’s Facial Recognition Technology to reduce the manual effort required for mug shot matching. Their goal is to increase the efficiency, effectiveness, and fairness of what is a slow and manual process. The response from critics of the technology and policing was to assert how unfair and biased the technology is, and they reiterated their calls for legislation and a moratorium. This highlights how polarized and over-focused on facial recognition the dialogue on artificial intelligence in policing has become.
The problem is that the debate has become a distraction from solving a very real problem. The Community-Oriented Policing Services model is the backbone of Canada’s approach to building better relationships between police and the communities they serve. This approach comes at a cost, and the Price of Policing in Canada is a bargain compared to comparable jurisdictions. With rapid population growth driving up demand, Canadian police services need to use AI technologies to sustainably serve the community. A moratorium is not the answer when police services are struggling with recruitment and retention due to the stresses of an often thankless job. It’s time to shift the dialogue and ask better questions while the legislative process unfolds. The question that matters is this:
Are Canadian police services using a responsible approach to govern the selection, implementation, and operations of facial recognition and other AI solutions that complies with the law of the land?
How did we get here?
On the individual liberties side, the Canadian Civil Liberties Association reiterated its position in a 2023 press release that called for immediate legislation on facial recognition technologies and a moratorium until legislation is in place. They hold that the government’s proposed Bill C-27, the Digital Charter Implementation Act, 2022, is inadequate to the task. In general, the tone of their message is that it’s the Wild West out there, and police are able to do whatever they want with whatever facial recognition tools they choose.
It’s worth saying that a few years ago that was closer to the truth. Multiple police services across Canada were piloting Clearview AI for facial recognition. Even worse, individual members of the services were downloading and using the software without authorization. The situation and solution were problematic on multiple fronts: privacy, public surveillance, and the sharing of Canadian data with a foreign company, to name just a few. The resulting investigation by the Privacy Commissioner of Canada and three provincial Privacy Commissioners led to a ban on the use of the software in Canada and recommendations to update Canada’s privacy regime.
A Little Bit of History Repeating
Given this, you might have a curious sense of déjà vu over Idemia. However, my team and I support police services across Canada. We have worked hand in hand with them to define principles and governance practices to ensure the ethical use of this and other AI technologies. In my conversations with Chiefs and police IT leaders from every province, one thing is consistent and clear: NONE of them want to see what happened with Clearview AI repeated. This is not just words, but a material investment of time and money by senior leadership to develop ethical standards for the use of AI by Canadian police forces. Peel and York engaged the community to ensure its concerns around Idemia are addressed. They conducted a public Privacy Impact Assessment in compliance with Ontario’s Municipal Freedom of Information and Protection of Privacy Act, working with the Information and Privacy Commissioner of Ontario to ensure the IPC’s guidelines on the use of facial recognition were met. Further, in Ontario the Inspectorate of Policing is a newly established agency that provides an unprecedented level of oversight of professional standards in policing. None of this prevented critics from repeating the assertion that there is no legislation governing facial recognition.
Is There Legislation or Not?
The common cry among critics is that facial recognition is unregulated in Canada. The situation is more nuanced than that, however. As can be seen from the Idemia implementation, there is a legislative framework (MFIPPA) and privacy guidelines that applied to the implementation. The disconnect is that critics want legislation on facial recognition software SPECIFICALLY. While there have been examples of governments banning or limiting specific technologies such as Huawei, TikTok, and Clearview AI, these have been exceptions where personal or public information was being accessed by foreign entities. The Government of Canada’s position on regulating specific technologies such as facial recognition is clearly spelled out in its response to the Ethics Committee report on AI:
“The Government also acknowledges the Committee’s recommendation for a moratorium on the use of FRT by Canadian industries. It is important that organizations be accountable for their handling of personal information and apply strong privacy protections. A strength of the current law, which would also be true of the proposed CPPA, is that it is technology-neutral: this ensures that the law remains relevant, and does not constrain innovation, in the face of rapidly changing technology and business practices.”
Note “a strength of the current law.” In the eyes of the government, there is legislation covering facial recognition, and that law is in the process of being updated by Bill C-27. Beyond that law, the Privacy Act and its provincial equivalents provide a robust framework for governing ALL technologies used by the police, including facial recognition. This includes:
Given all of this legislation, policy, and standards, it might leave you wondering what the debate is really about.
The Debate Is Really About Specific versus Technology-Neutral Legislation
The key factor here is that critics want the government to legislate on facial recognition specifically. However, every technologist knows that good practices for regulating technology should be technology-neutral. This is the fundamental problem here. Facial recognition is not one technology. It encompasses hundreds of different algorithms and approaches, and new methods are emerging at an exponential pace. The very real problem, as we’ve seen with attempts to regulate specific types of firearms, is that legislation is slow and innovation is fast. Legislation is out of date before it’s even passed. Even fast movers on AI legislation like the EU did not specifically legislate around facial recognition but adopted a technology-neutral approach that provides guidance classifying the risk of these technologies.
Here are a few examples of the pace of change. This past February I showed that you could use over-the-counter image generation algorithms to accelerate police sketches. It’s a new use case: facial generation based on verbal descriptions, as opposed to facial recognition that matches photos. If there were a directive or legislation specific to facial recognition, this new use case wouldn’t be covered. With a technology-neutral directive as it stands, the new use case requires the full application of the standard before implementation and use.
Another new release that caught the attention of police and privacy professionals at the Canadian Institute’s Police Tech Conference this spring is the automated transcription of body-worn camera video from Axon. This is another significant AI-powered technology that will dramatically reduce the effort required from police services. Peel uses Axon body-worn cameras (and specifies in its privacy disclosure that body-worn camera footage won’t be used for facial recognition). While Peel has done a Privacy Impact Assessment on its Axon camera solution, this new feature will trigger the requirement for a new assessment, as it introduces significant new functionality.
It is also worth saying that the rationale for the calls for a moratorium and for specific legislation on facial recognition is the bias and privacy risk that come with the technology. That Idemia has passed the gauntlet of the current regulatory regime is a fact. The question now becomes how fair and accurate Idemia is, to put those concerns to rest.
How Accurate Is Facial Recognition Technology Anyways?
The effectiveness of facial recognition has been heavily politicized, and there is no shortage of articles that show historical failings of the technology or cherry-pick results from less robust solutions to reinforce their point of view.
These concerns led the US National Institute of Standards and Technology (NIST) to establish a working group in 2017 that publishes reports on the effectiveness of facial recognition algorithms on a regular cadence: the NIST Face Recognition Technology Evaluation (FRTE). The latest FRTE report clearly shows that in the four years since the Clearview AI scandal the technology has advanced significantly. The top algorithms used in police solutions are continuously validated through these monthly evaluations, and the latest data shows a 99.6% accuracy rate for the top platforms in the marginalized communities we are justifiably concerned about.
Getting even more specific, Idemia itself is a consistent top performer in the NIST evaluation. The platform has a 99.88% accuracy rating and has been rated as the best algorithm in the world for fairness and accuracy. Both Peel and York have highlighted this, and Idemia’s commitment to continuously update and improve the platform, in their privacy disclosures.
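To put those headline rates in perspective, here is a back-of-the-envelope sketch. The 99.6% and 99.88% figures are the rates quoted above; the annual search volume is a purely hypothetical number chosen for illustration, not a statistic from any police service.

```python
# Illustrative only: what a headline accuracy rate implies at scale.
# The accuracy figures are those quoted in the text; the search volume
# below is a hypothetical example, not real operational data.

def expected_errors(accuracy: float, searches: int) -> float:
    """Expected number of erroneous match results for a given accuracy rate."""
    return searches * (1.0 - accuracy)

annual_searches = 10_000  # hypothetical volume for illustration

# Top-tier platforms overall (99.6%) versus Idemia's reported rating (99.88%)
print(round(expected_errors(0.996, annual_searches)))    # roughly 40 per year
print(round(expected_errors(0.9988, annual_searches)))   # roughly 12 per year
```

Even at 99.88%, errors are expected at scale, which is exactly why the governance framework described above requires human review rather than treating a match as conclusive.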
It’s Time to Change the Conversation Around Facial Recognition
The emergence of Generative AI and the ongoing improvement of machine learning and deep learning algorithms have propelled exponential improvement in the capabilities of AI solutions and platforms. We need to move on from tired generalizations about the effectiveness of past facial recognition solutions and into serious dialogues about how Canadian police services will leverage facial recognition, eNotes, automated video description of body-worn camera data, facial generation, and a host of other emerging AI-powered technologies to sustainably and responsibly support our communities.
We also need to break free from the us-versus-them mindset that plagues our relationships with the police. As Robert Peel said in his principles on policing, the police are the public, and the public are the police. If you have concerns, get engaged through community initiatives such as the Human Rights Project, volunteer, and work WITH the police to make the most of these technologies for the benefit of the community and the officers who use them, just like I do.
What do you think?
Post Script – Artificial Intelligence, AI, and the Media
The media plays a key role in this debate, and accurate reporting can be a challenge because the complexity and nuance of the problem require a deep knowledge of legislation, regulations, technology, and the politics of technology. It is all too easy for reporters to accept sources provided by key players in the debate. However, these are often selected to support existing positions. This strikes at the crux of the Accuracy principles in the Canadian Association of Journalists’ Ethical Guidelines, specifically around “seeking documentation to support the reliability of those sources and stories, and we carefully distinguish between assertions and facts,” and the commitment to correct mistakes promptly, “in fact or in context.” If you see these problems in reporting, you should reach out to the editor. I did in this case, and the article that got me started down this path is being amended.