All Your Face… Are Belong to Us

Last week I attended a fascinating talk by Prof. Daniel Solove and Kashmir Hill on the use of facial recognition, in the context of AI and privacy in public spaces.

Since then, the Canadian federal privacy regulator, the OPC, has issued its annual report. There is a lot to read and unpack there, but one investigation that really struck me was police use of the dreaded facial recognition software in Canada.

It is a very compelling report that brings to light how police forces in Canada can, and should, handle the use of novel technologies.

From the report:

  1. There is no specific legal framework for FR use in Canada. Rather, the legal framework comprises a patchwork of statutes and the common law. These include federal and provincial privacy laws, statutes regulating police powers and activities, and Charter jurisprudence.

But rest assured, there are other laws that can provide the necessary protections:

Section 44.

Section 487.01 of the Criminal Code provides for warrants that permit intrusion on individuals’ privacy when a judge is satisfied that there are reasonable grounds to believe that an offence has been or will be committed and that information concerning the offence will be obtained through the use of the technique or device; that it is in the best interests of the administration of justice to do so; and in instances where there is otherwise no statutory basis for doing so.

These authorizations are subject to the usual requirements for obtaining a warrant, in addition to any conditions or limitations imposed by courts when granting them.

Statutory Authority

  1. Police agencies may also find authority for their actions in specific statutes. For instance, the Identification of Criminals Act allows police agencies to fingerprint or photograph individuals charged with, or convicted of, certain crimes for the purposes of identification. It also permits these identifiers to be published for the purposes of providing information to officers and others engaged in the administration or execution of the law. The Identification of Criminals Act does not, however, authorize the indiscriminate collection of photographs of other individuals at the broader population level. Legal advice would be required to determine if - and in what circumstances - this Act provides a legal basis for a specific use of FR, including as applied to existing mugshot databases associated with the Act.

Common Law Authority

  1. Judicial consideration of police use of FR has so far been limited, and Canadian courts have not had an opportunity to determine whether FR use is permitted by common law. If FR use interferes with individuals’ reasonable expectations of privacy (“REP”) and is not enabled by a statute or common law, authorization under section 487.01 of the Criminal Code will generally be required for its use.

and finally

The Canadian Charter of Rights and Freedoms

  1. The Charter gives individuals the right to be secure against unreasonable searches and seizures by police.

Privacy Legislation

Privacy legislation sets out the conditions under which public agencies may collect, use, disclose, and retain individuals’ personal information… Under federal legislation, the collection of personal information must relate directly to an operating program or activity of the federal institution collecting the personal information.

This means that federal institutions must ensure that they have parliamentary authority for the program or activity for which the information is being collected.

Necessity and Proportionality

  1. The privacy principles of necessity and proportionality ensure that privacy-invasive practices are carried out for a sufficiently important objective, and that they are narrowly tailored so as not to intrude on privacy rights more than is necessary. In the case of law enforcement, there is a clear public interest in ensuring public safety, but also in protecting individuals’ fundamental right to privacy. While the right to privacy is not absolute, neither can the pursuit of public safety justify any form of rights violation. Therefore, police may only use means justifiable in a free and democratic society.

  • Necessary to meet a specific need: Rights are not absolute, and can be limited where necessary to achieve a sufficiently important objective. Necessary means more than useful.
  • Effectiveness: Police must be able to demonstrate that collection of personal information actually serves the purpose of the objective.
  • Minimal Impairment: Police agencies’ intrusion on individuals’ privacy must not extend beyond what is reasonably necessary to achieve the state’s legitimate objective. The scope of a program should be as narrow as possible.
  • Proportionality: This stage requires an assessment of whether the intrusion on privacy caused by the program is proportional to the benefit gained. In assessing proportionality, police agencies must be open to the possibility that, in a free and democratic society, a proposed FR system which has a substantial impact on privacy (such as via mass surveillance) may never be proportional to the benefits gained. Where the impact is substantial, police agencies should be particularly cautious about proceeding in the absence of clear, comprehensive legal safeguards and controls capable of protecting the privacy and human rights of members of the general public. Seeking warrants and court authorizations can assist with ensuring that a proposed FR use meets the proportionality standard.
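One way to operationalize this four-part test is to record each stage of the assessment in a structured, reviewable form. Below is a minimal Python sketch of such a record; the stage names come from the test above, but the schema and field names are my own illustration, not something the OPC report prescribes.

```python
from dataclasses import dataclass

@dataclass
class StageAssessment:
    satisfied: bool
    rationale: str  # written justification, reviewable after the fact

# Illustrative schema only; the OPC report does not prescribe one.
@dataclass
class NecessityProportionalityAssessment:
    initiative: str
    necessity: StageAssessment           # sufficiently important objective?
    effectiveness: StageAssessment       # does the collection actually serve it?
    minimal_impairment: StageAssessment  # is the program as narrow as possible?
    proportionality: StageAssessment     # is the intrusion worth the benefit?

    def passes(self) -> bool:
        # The test is conjunctive: failing any one stage fails the whole test.
        return all(
            stage.satisfied
            for stage in (self.necessity, self.effectiveness,
                          self.minimal_impairment, self.proportionality)
        )

assessment = NecessityProportionalityAssessment(
    initiative="FR search against existing mugshot database",
    necessity=StageAssessment(True, "identification of a suspect in a serious offence"),
    effectiveness=StageAssessment(True, "probe image quality verified before the search"),
    minimal_impairment=StageAssessment(True, "search limited to lawfully held mugshots"),
    proportionality=StageAssessment(True, "court authorization obtained"),
)
print(assessment.passes())  # True only if every stage is satisfied
```

Writing the rationale down at each stage is the point: it forces the agency to articulate why the intrusion is justified, and gives oversight bodies something concrete to review.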

Designing for privacy

From the report:

  1. Implementing privacy into the design of initiatives means police agencies must formally integrate privacy protections before engaging in any use of FR technology. Privacy protections must also be designed to protect all personal information involved in an initiative, including training data, faceprints, source images, face databases, and intelligence inferred from FR searches, in addition to any other personal information that may be collected, used, disclosed, or retained.

This last paragraph shows clearly what organizations need to consider for any such “blind” technologies, which include AI and algorithms: detail out the information and metadata lifecycle, and ask all the appropriate risk-related questions, above all about the risk of harm or the risk of serious prejudice. Under Quebec’s new Loi 25, the use of biometrics in these circumstances and at this volume would have to be reviewed by the CAI before approval.

“The face database and probe images are two other components that raise important issues regarding accuracy and fairness. One consideration is the quality and/or age of the images and the effects this may have on the accuracy of the FR system. For example, studies have shown that lower quality images lead to declines in accuracy and longer time elapses between images of the same individual increase false negative rates.”

Sections on Accuracy

  1. Regarding the FR algorithm, there are three key considerations to be aware of with respect to accuracy. The first is that accuracy is understood statistically. The output of a FR algorithm is a probabilistic inference as to the likelihood that two images are of the same person. It is not a verified fact about the individual. As such, accuracy is not a binary “true/false” measure, but rather is computed based on the observed error rates of the algorithm across searches. There are two types of errors to consider: false positives (also known as “type I” errors), where the algorithm returns a candidate match in the face database that is not of the individual in the probe image; and false negatives (also known as “type II” errors), where the algorithm fails to return a genuine match in the face database even though the database contains one.
  2. The second consideration is that there is generally a trade-off between the false positive and false negative rate of a FR algorithm. The reason for this has to do with another component, the threshold for a probable match. Depending on how high (or low) the threshold is set, a FR algorithm will generally return fewer (or more) candidate matches. However, how many results the algorithm returns has implications for its error rates. While a higher threshold will return only higher probability candidates and lead to fewer false positives, this same threshold will in general make the algorithm more likely to miss lower probability matches and potentially lead to more false negatives.
  3. Lastly, it is important to consider that the determination of an appropriate threshold will depend on the nature, scope, context and purpose of the FR initiative, taking into account the risks to the rights and freedoms of individuals. Strictly speaking, there is no single appropriate threshold. It is a matter of prioritizing the reduction of certain types of errors based on the nature and severity of risks they pose to individuals, while at the same time ensuring the overall effectiveness of the FR system.
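To make the threshold trade-off concrete, here is a minimal Python sketch using made-up similarity scores (not real FR output): raising the threshold drives false positives down and false negatives up, exactly as described above.

```python
# Illustrative only: synthetic similarity scores, not real FR output.
# Each pair is (similarity_score, is_genuine_match).
scored_pairs = [
    (0.95, True), (0.88, True), (0.72, True), (0.61, True),
    (0.81, False), (0.64, False), (0.40, False), (0.33, False),
]

def error_rates(pairs, threshold):
    """Return (false_positive_rate, false_negative_rate) at a given threshold."""
    genuine = [s for s, match in pairs if match]
    impostor = [s for s, match in pairs if not match]
    # False positive (type I): an impostor pair scores at or above the threshold.
    fpr = sum(s >= threshold for s in impostor) / len(impostor)
    # False negative (type II): a genuine pair scores below the threshold.
    fnr = sum(s < threshold for s in genuine) / len(genuine)
    return fpr, fnr

# Raising the threshold lowers false positives but raises false negatives.
for t in (0.5, 0.7, 0.9):
    fpr, fnr = error_rates(scored_pairs, t)
    print(f"threshold={t:.1f}  FPR={fpr:.2f}  FNR={fnr:.2f}")
```

On this toy data, moving the threshold from 0.5 to 0.9 drops the false positive rate from 0.50 to 0.00 while the false negative rate climbs from 0.00 to 0.75, which is exactly the prioritization decision the report says must be made based on the risks each error type poses to individuals.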

The report from the OPC concludes with the “Do’s” and “Don’ts” for police forces, but this really is a great list for any organization that is in the process of acquiring AI:

  • Human in the loop, or human review - while it reduces the risks of inaccuracy and bias, it may also inadvertently reintroduce those very same risks into the FR system
  • Effective human review - organizations need to define what that means
  • Have the algorithm tested by recognized and qualified independent third parties for bias, fairness, accuracy, etc.
  • Ensure testing follows recognized standards and technical specifications, including for performance metrics, recording of test data, reporting of test results, test protocols and methodologies as well as demographic variations
  • Quality and accuracy of the data (in this case, images) are paramount and need to be monitored to stay at a high level
  • Keep FR systems up to date as the technology for FR algorithms improves
  • Separation of roles, especially when we are talking about matching data sets (like matching photo images for recognition), to avoid bias and inaccuracies (and guessing, frankly speaking)
  • An accountable police agency (or organization undertaking such high risk AI initiatives) should have a privacy management program (PMP) in place, with clear structures, policies, systems and procedures to distribute privacy responsibilities, coordinate privacy work, manage privacy risks and ensure compliance with privacy laws.
  • Lots of excellent recommendations regarding what a Privacy Management Program should include: structures, clear roles and responsibilities, protocols to authorize and log searches of data, protocols to record incidents, investigations and testing, logs logs logs, oversight and more oversight, authorization of personnel to even access the system, and training (very important) on an ongoing basis - see the sketch after this list for what a search log entry might capture
  • Data minimization, security, retention, audit trails and data lineage are all very important and necessary, as are openness, transparency and individual access
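To make the logging recommendations above concrete, here is a minimal sketch of what a structured audit record for a single FR search might capture. The field names and schema are my own hypothetical illustration; the OPC report recommends logging and oversight but does not prescribe a format.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical schema for illustration; not prescribed by the OPC report.
@dataclass
class FRSearchAuditRecord:
    searched_at: str          # UTC timestamp of the search
    operator_id: str          # authorized person who ran the search
    authorization_ref: str    # warrant / authorization reference
    purpose: str              # documented investigative purpose
    probe_image_ref: str      # pointer to the probe image, not the image itself
    threshold: float          # match threshold used for this search
    candidates_returned: int  # how many candidate matches came back
    human_reviewed: bool      # whether a trained reviewer vetted the output

record = FRSearchAuditRecord(
    searched_at=datetime.now(timezone.utc).isoformat(),
    operator_id="analyst-042",
    authorization_ref="warrant-2023-0117",
    purpose="identification of suspect in file 2023-5561",
    probe_image_ref="evidence-store://case-5561/frame-0093",
    threshold=0.85,
    candidates_returned=3,
    human_reviewed=True,
)

# Append-only log line: easy to retain, review, and audit later.
print(json.dumps(asdict(record)))
```

An append-only record like this makes every search attributable, tied to an authorization, and reviewable after the fact, which supports the oversight, minimal impairment, and retention points above.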

Connect with me at DesigningPrivacy or [email protected] if you are unsure how to select and/or onboard an AI technology; I would be happy to guide you.

