It's Time to Take AI-Augmented Surveillance Away from Local Law Enforcement
[Photo: Downtown Detroit, 2023]


In February of this year, a Black woman in Detroit, Porcha Woodruff, was arrested in front of her children, taken to jail, and questioned for 11 hours in connection with a late-January carjacking. The impetus for her arrest? Facial recognition technology (FRT). Given facial recognition's poor reputation for accuracy when it comes to people with darker skin, particularly Black women, this seems like another story of glitchy tech. It's not. It's far worse. The absence of legal standards, transparency, and accountability demonstrated in this case illustrates why it's time to put the brakes not just on the use of FRT, but on virtually all AI-augmented surveillance by local law enforcement agencies.

To understand the systemic failure on display here, consider that when Ms. Woodruff was arrested in February, she – unlike the actual carjacker – was eight and a half months pregnant. Not baby-bump pregnant. Full-blown, bun-almost-done pregnant. In fact, the stress of the situation caused her to have contractions, and she was taken to the hospital after posting bond. Ms. Woodruff is the third innocent person wrongfully arrested by the Detroit Police Department (DPD) on the basis of information from facial recognition software; Michael Oliver was arrested in 2019 and Robert Williams in 2020. In each case, DPD claimed the technology worked exactly as intended. The issue in Ms. Woodruff's case, according to Detroit Police Chief James White, was the lead investigator's error of including the AI-selected photo of Ms. Woodruff in a photo lineup: "That is, literally, perhaps one of the most counterintuitive things you could do. Because it's going to already produce look-a-likes," White said. "In other words, you're not going to put a suspect's twin in a line up."

Chief White's defense of a technology that has repeatedly compelled the arrest of innocent people – that it worked as intended – is disconcerting. It should compel us to examine an oft-overlooked aspect of AI-augmented surveillance: not human error, but human nature. Specifically, the rapidly evolving human inclination to unquestioningly trust technology, and the lack of guidelines to tell us when we shouldn't.

This impulse is not unique to law enforcement. We've all been there. You're using GPS and suddenly you're wondering why you're going east when you know your destination is west. Some people go one or two exits before they do anything. Some shrug and follow the GPS to the destination, assuming the GPS knows something they don't. A very few will immediately pull over or exit to make sure they entered the correct destination. What virtually no one does is ignore the map, pay attention to road signs, and trust their instincts. We've been trained to trust technology more than ourselves. Our phone's autocorrect. GPS. The day and date. For most of us, trusting technology has become instinctive.

That instinct was on display here. Without the imprimatur of technology, it's highly likely the arresting officers would have used some damn common sense and realized the woman they were arresting could not possibly have been the perpetrator. Given society's reliance on technology in general, and its increasingly unrecognized reliance on AI in particular, the officers' actions were predictable. Human error is impossible to legislate out of existence. As the saying goes, "you can't make anything foolproof because fools are so damn ingenious." Human nature is even more stubborn. Perhaps this is why we haven't even tried.

There are no national standards or laws governing law enforcement's use of FRT or other AI-augmented surveillance methods. The applicability of existing constitutional protections is still evolving. While the Biden administration has presented a Blueprint for an AI Bill of Rights, intended to protect people from misuse, fraud, or errors as AI systems are deployed across sectors of our economy, it has yet to produce any tangible policies or legislation.

The regulation of AI is no better at the state level, where only Vermont has banned the use of FRT by law enforcement, while Illinois, Washington, and Texas have laws protecting biometric information. Similarly, only about two dozen localities (including San Francisco and Oakland, California, and Boston and Somerville, Massachusetts) have acted to limit or ban the use of facial recognition by city departments, including the police. (If you're interested in where FRT is banned or limited, check here.)

On the other side of the equation, AI-augmented surveillance technologies have rapidly become an integral part of modern policing. A 2021 report by the federal Government Accountability Office identified 20 federal agencies that employ law enforcement officers as using FRT. There is no comprehensive accounting of its use at the local level, but one FRT provider, Clearview AI, has contracts with more than 3,100 federal and local law enforcement agencies. And to be clear, the use of AI in surveillance is not limited to FRT:

  • AI-augmented surveillance is used to analyze live video to detect "suspicious" behavior and to recognize license plates, clothing, and other individual characteristics.
  • AI-powered predictive policing is used to forecast crime hotspots. These algorithms rely on historical data, however, which ultimately perpetuates the existing biases in policing.
  • Law enforcement agencies use AI to monitor social media platforms for potential threats or criminal activity, but frequently end up targeting individuals and groups engaged in lawful protests.
  • Ironically, in an attempt to avoid bias, AI algorithms are being used in bond and sentencing risk assessments to predict the likelihood that individuals will reoffend. As with predictive policing, these algorithms rely on data that contains the very biases they attempt to avoid.
  • Automated license plate recognition (ALPR) systems use AI to read license plates and cross-reference them with databases to check for stolen vehicles, warrants -- or engagement in completely lawful protests. These systems are capable of reading every license plate passing the system's camera, regardless of any probable cause for surveilling the car or driver.
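
The feedback loop behind the predictive-policing problem above can be sketched as a toy simulation. Everything here is invented for illustration – the district numbers, the patrol-allocation rule, the arrest model – and no real predictive-policing product works exactly this way, but it shows the basic mechanism: when patrols follow historical arrest counts, and recorded arrests depend on where patrols are sent, an initial disparity sustains itself even where underlying crime rates are identical.

```python
def allocate_patrols(arrest_history, total_patrols):
    """Split patrols across districts in proportion to past recorded arrests."""
    total = sum(arrest_history)
    return [total_patrols * a / total for a in arrest_history]

def simulate(arrest_history, true_crime_rate, rounds=5, total_patrols=100):
    """Run several rounds of patrol allocation followed by new arrests."""
    history = list(arrest_history)
    for _ in range(rounds):
        patrols = allocate_patrols(history, total_patrols)
        # Recorded arrests depend on both actual crime AND patrol presence:
        # identical crime rates yield more arrests where more officers look.
        new_arrests = [p * c for p, c in zip(patrols, true_crime_rate)]
        history = [h + n for h, n in zip(history, new_arrests)]
    return history

# Two districts with IDENTICAL underlying crime rates, but District A starts
# with more recorded arrests (e.g., from historically heavier policing).
final = simulate(arrest_history=[60, 40], true_crime_rate=[0.5, 0.5])
share_a = final[0] / sum(final)
print(f"District A's share of recorded arrests: {share_a:.0%}")  # 60%
```

Even after five rounds, District A still accounts for 60% of recorded arrests – the algorithm never "discovers" that the two districts have the same crime rate, because its only input is the arrest record its own allocations helped produce.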

In Detroit, law enforcement currently employs FRT, gunshot detection systems, video analytics, automated license plate readers (ALPR), real-time crime centers, Ring/Neighbors partnerships, drones, and body-worn cameras. DPD also participates in the Detroit and Southeast Michigan Information and Intelligence Center, one of 79 fusion centers in the US operating in partnership with the U.S. Department of Homeland Security to serve as command centers for gathering, analyzing, and disseminating intelligence. (If you're interested in the surveillance systems operating in your city, try searching the Atlas of Surveillance.)

In the absence of constitutional standards or federal, state, or local legislation, the constraints on AI-augmented surveillance are largely those set by law enforcement itself or imposed on it by local civilian authorities. There are more than 17,000 local police departments in the United States. Most of them are small; almost half employ fewer than 10 sworn officers. Civilian oversight varies widely, and federal monitoring depends largely on self-reporting. Little is actually known about how local departments are using or regulating FRT and other surveillance technologies. The US Department of Justice provides a template for local jurisdictions to develop FRT policies, but its use is purely optional.

Among the DOJ's recommendations are policies to ensure transparency and accountability. For example:

The [name of entity] will be open with the public with regard to face recognition information collection, receipt, access, use, dissemination, retention, and purging practices. The [name of entity]’s face recognition policy will be made available in printed copy upon request and posted prominently on the [name of entity]’s website [or web page] at [insert web address].

And

The [name of entity]’s personnel or other authorized users shall report errors, malfunctions, or deficiencies of face recognition information and suspected or confirmed violations of the [name of entity]’s face recognition policy to the [name of entity]’s [insert title of Face Recognition Administrator].

Voluntary disclosure by local law enforcement is not a firm foundation upon which to build a functioning democracy. Consider that in Detroit, the public was unaware of the incident until August, a full six months after Ms. Woodruff was arrested. The Detroit Board of Police Commissioners, the elected body charged with police oversight, was equally ignorant. In fact, nobody outside DPD seemed to know anything until Ms. Woodruff filed a lawsuit for false arrest, false imprisonment, and violation of her Fourth Amendment right to be protected from unreasonable seizures.

This is very much on brand for DPD, which used AI-augmented surveillance for more than a year with minimal public awareness and literally no specific policies or procedures in place to govern its use or punish misuse. DPD's initial proposed guidelines were a mere two pages long. Public protests compelled the department to create more comprehensive policies, but they are inadequate by almost every measure. Similarly, in 2021 the Detroit City Council finally passed an ordinance governing the acquisition of surveillance technology. The ordinance contains virtually nothing to ensure police accountability post-purchase, and recently the City Council has ignored it completely.

Further, the need for transparency goes deeper than how the technology is being deployed and used, to the more basic information of how it operates. For example, the parameters for an FRT match, the "match threshold," aren't objective or fixed. They are determined by the user or programmer, human beings, and can be raised or lowered to produce fewer or more "matches." The information supporting these decisions is not readily available to the public or subject to analysis by anyone outside the law enforcement agency using it. Without this information, it is impossible for those outside the police department to determine whether surveillance technologies are, among other problems, replicating existing biases in law enforcement or encouraging officers to act in ways that undermine community rights. (For example, a 2021 report by the Chicago Office of the Inspector General found that police responding to calls prompted by ShotSpotter gunshot detection alerts were more likely to be on high alert and search civilians they encountered, despite the fact that in the overwhelming majority of such runs, no evidence of a crime or gunshot was found.)

Right now, the cost-benefit analysis weighing the benefits of AI-augmented surveillance against the threats to civil liberties and public safety is exclusively in the hands of law enforcement. That is unacceptable in a healthy democracy. Good-faith errors aside, history has shown that deliberate government abuse of surveillance is inevitable: COINTELPRO. The Ghetto Informant Program. Surveillance of Muslims in the aftermath of the 9/11 attacks. Use of FRT on crowds protesting the death of Freddie Gray in police custody. To ignore predictable abuses is tantamount to endorsing them. Without protections for innocent Americans exercising their constitutional rights – or simply going about their lives – AI-augmented surveillance represents a dire threat to individual liberty and our democratic system. Rather than pretend the technology and the people using it are flawless, it's time to put the brakes on AI-augmented surveillance until we have developed national laws and regulations that protect both our democracy and public safety.
