Legislative Tracker - 001
Artificial Intelligence (“AI”)
Did you know? According to the Stanford AI Index report, “the number of bills on AI that were passed into law grew from just 1 in 2016 to 37 in 2022” across 127 countries.
In the US, the New York City Department of Consumer and Worker Protection (“DCWP”) adopted its final rule to implement Local Law 144, which regulates the use of “automated employment decision tools” (“AEDTs”) to screen applicants or employees in the city. Enforcement will begin as soon as July 5, 2023. The law imposes a number of requirements on AEDTs:
- Annual bias audit by an independent auditor before use
- Results of the most recent bias audit published on the employer’s or employment agency’s website.
- Notice provided to applicants and employees subject to screening at least 10 business days before the AEDT is used.
In Canada, Bill C-27 passed second reading in the federal House of Commons on April 24, 2023, and was referred to the committee that specializes in the subject matter, in this case the Standing Committee on Industry and Technology.
Among the least discussed aspects of Bill C-27’s Artificial Intelligence and Data Act (“AIDA”) is its criminal enforcement. AIDA, the first law that would regulate AI in Canada, includes the following new offences:
- Knowingly possessing or using unlawfully obtained personal information to design, develop, use, or make available for use an AI system. This could include knowingly using personal information obtained from a data breach to train an AI system.
- Making an AI system available for use, knowing, or being reckless as to whether, it is likely to cause serious harm or substantial damage to property, where its use causes such harm or damage.
- Making an AI system available for use with intent to defraud the public and to cause substantial economic loss to an individual, where its use actually causes that loss.
The Federal Crown, through the Public Prosecution Service of Canada, could seek fines of up to $25,000,000 or 5% of an organization’s gross global revenues upon conviction on indictment, or up to $20,000,000 or 4% of gross global revenues upon summary conviction, in each case whichever amount is greater.
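For illustration, here is a minimal sketch of how the “whichever is greater” cap would work in practice (the function name and revenue figure are hypothetical, not statutory language; the indictment-tier figures are taken from the summary above):

```python
def max_aida_fine(gross_global_revenue: float,
                  fixed_cap: float = 25_000_000,
                  revenue_pct: float = 0.05) -> float:
    """Maximum fine on conviction on indictment: the greater of the fixed
    cap and the stated percentage of gross global revenue."""
    return max(fixed_cap, revenue_pct * gross_global_revenue)

# Hypothetical organization with $1 billion in gross global revenue:
# 5% of revenue ($50M) exceeds the $25M floor, so the cap is $50M.
print(max_aida_fine(1_000_000_000))  # 50000000.0
```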
Across the ocean, the EU Parliament is currently fine-tuning the AI Act text ahead of a key committee vote planned for Thursday, May 11, 2023. The landmark legislation is expected to have a GDPR-like effect as a trailblazer in technology law. There is a lot to unpack, but here are the key points that you should know.
The initial version of the AI Act did not cover foundation models, but with the recent developments around ChatGPT, the fine-tuned text now specifically addresses foundation models (powerful models that can power other AI applications). These models are subject to a stricter regime.
- With regard to generative AI, MEPs agree that a summary of training data covered by copyright laws should be provided in a manner that is “sufficiently detailed.”
- Generative foundation models would have to ensure transparency by disclosing that their content is AI-generated rather than human-generated.
- Fines are up to €10 million or 2% of annual turnover.
The AI Act also covers “high-risk systems,” which are AI systems that are at high risk of causing harm.
- The definition of risk starts with the level of security of the system, bringing the notion of cybersecurity to the forefront of AI development.
- AI systems used to influence the outcome of elections or voting behavior are deemed high-risk, unless their output is not directly seen by the general public.
- Except for SMEs, a high-risk system must be subject to a fundamental rights impact assessment.
One of the most controversial aspects of the AI Act, however, remains the ban on biometric identification, a technology that some MEPs consider necessary for law enforcement purposes. The prohibition applies to both real-time and ex post use of biometrics, except in cases of serious crime and with prior judicial authorization.
Under the AI Act, the authorities will have the power to request access to both the trained and training models of the AI systems, including foundation models. Exceptions are still being debated.
Privacy Laws
In Canada, on March 28, 2023, the federal government tabled Bill C-47, a 408-page budget implementation bill. If you look more closely, you may notice that Bill C-47 also amends the Canada Elections Act to provide a national standard for how political parties handle voters’ personal information. The bill would give broad rights to political parties: a party participating in public affairs could collect, use, disclose, retain, and dispose of personal information in accordance with the party’s privacy policy. Bill C-47 states that the purpose of this provision is to establish a national, uniform, exclusive and complete regime applicable to federal political parties’ handling of personal information.
In Quebec, Bill 3 has recently been adopted. To use the words of the IAPP, “Bill 3 creates a unified framework for the protection of citizens’ health and social services information with the objective of improving the quality of services to citizens and the management of the health and social services system.” Still in Quebec, the Commission d’accès à l’information (“CAI”) announced that it will no longer provide the names of companies and public organizations that have reported privacy incidents. According to commentators, such disclosures would have been detrimental to the handling of certain incidents, which explains the change in the CAI’s practice.
Back in the United States, Iowa and Indiana became the sixth and seventh states to pass a comprehensive privacy law, joining Connecticut, Utah, Virginia, Colorado, and California. Indiana’s and Iowa’s privacy laws apply to businesses that control or process the personal data of at least 100,000 consumers in the respective state, or that sell the personal data of residents under certain specific circumstances.
Additionally, Tennessee’s and Montana’s privacy bills cleared their legislatures in April and are awaiting their respective governors’ signatures before becoming law. Montana also proposes to ban TikTok and even to extend this ban to any social media app that provides certain data to foreign adversaries. Like Virginia’s privacy law, the Tennessee Information Protection Act is seen as more business-friendly, with an application threshold starting at $25 million in revenue and the personal information of 175,000 or more Tennessee residents, or 25,000 residents where 50% of gross revenue results from the sale of data. Even more interestingly, this law allows companies to defend themselves by raising compliance with the NIST Privacy Framework.
In Washington, the My Health My Data Act is widely considered a potential tidal wave in the privacy world. Biometric data is defined as “data that is generated from the measurement or technical processing of an individual’s physiological, biological, or behavioral characteristics and that identifies a consumer, whether individually or in combination with other data.” Biometrics include, but are not limited to, (a) imagery of the iris, retina, fingerprints, face, hand, palm, vein patterns, and voice recordings, from which an identifier template can be extracted, or (b) keystroke patterns or rhythms and gait patterns or rhythms that contain identifying information. The exceptions are publicly available information, de-identified data, and information used for certain types of research. There are no applicability thresholds, and the Act covers Washington consumers wherever they are, not just within the state. The Act greatly expands the definition of “health data” and the reach of the Washington Attorney General’s office.
Finally, let’s not forget the Delete Act in California, which we are following closely. It advanced out of the Senate Judiciary Committee on April 25th, and it moves on to another vote. California already has a Data Broker Registry, and this bill seeks to create a one-stop shop for data deletion requests.
Supply Chain Management
The Office of the Superintendent of Financial Institutions (“OSFI”) issued the Third-Party Risk Management Guideline for federally regulated financial institutions (“FRFIs”), such as banks. The guideline provides a framework for a risk-based approach to managing third parties.
- The guideline applies in proportion to the context (i.e., risk and criticality)
- The following criteria are relevant to determine the risk and criticality:
- The degree to which the third party supports a critical operation.
- The impact on business operations if there is an exit or transition.
- The probability of the third party failing to meet expectations due to insolvency or operational disruption.
- The information management, data, cyber security, and privacy practices of the third party and its subcontractors.
- The third party’s use of subcontractors and the complexity of the supply chain.
- FRFIs will have to monitor supply chain risks, including by receiving updates and reporting on the third party’s use of subcontractors.
- Technology and cyber operations carried out by third parties must be transparent, reliable, and secure.
The guideline also encourages FRFIs to consider cloud portability, i.e., “the ability for data to be moved from one cloud to another or for applications to be ported and run on different cloud systems at an acceptable cost”.
Governance Standards
NIST finally published the draft of the Cybersecurity Framework (“CSF”) 2.0. As you may remember, the original CSF was developed by NIST in response to Executive Order 13636, which called for the development of a voluntary framework for improving cybersecurity across critical infrastructure sectors in the United States. It has since been adopted by government agencies as a baseline for assessing the cybersecurity posture of critical infrastructure organizations. The framework, at version 1.1, was based on five functions, as illustrated below.
Similarly to ISO/IEC 27001:2022, the new CSF 2.0 would include a distinct function for governance, grouping organizational context, risk management strategy, roles and responsibilities, as well as policies and procedures.
Source: NIST.
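For quick reference, here is a minimal sketch of the function structure described above (the Govern groupings follow the summary in the preceding paragraph; the names are paraphrased, not official category identifiers):

```python
# Core functions of CSF 1.1 and the new governance function in draft CSF 2.0.
CSF_1_1_FUNCTIONS = ["Identify", "Protect", "Detect", "Respond", "Recover"]

# Draft CSF 2.0 adds a distinct Govern function that groups organizational
# context, risk management strategy, roles and responsibilities, and
# policies and procedures; the other five functions carry over.
CSF_2_0_FUNCTIONS = {
    "Govern": [
        "Organizational context",
        "Risk management strategy",
        "Roles and responsibilities",
        "Policies and procedures",
    ],
    **{function: [] for function in CSF_1_1_FUNCTIONS},  # categories omitted
}

print(list(CSF_2_0_FUNCTIONS))  # ['Govern', 'Identify', 'Protect', ...]
```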
The European Union Agency for Cybersecurity (“ENISA”) published an assessment of standards for the cybersecurity of AI and issued recommendations on the implementation of EU policies on AI, including a certification framework. Click on the source for the report.
Source: ENISA.
Incident Response
On May 2, 2023, the expanded definition of “personal information” went into effect for Pennsylvania’s breach notification law. The definition now includes the following data elements when compromised in combination with a resident’s name: medical information, health insurance information, and a username or e-mail address in combination with a password or security question and answer that would permit access to an online account. Also, electronic notice is now permitted if the affected personal information consists of a username or email address and a password. The updated Act includes an exemption for covered entities and business associates subject to HIPAA.
Source: Senate Bill 696
Court Cases
In A.B. v. Google, 2023 QCCS 1167, the court considered the liability of an internet intermediary in connection with defamatory online publications made by third parties. The court held Google liable and granted an injunction ordering the removal of the publications in the province of Quebec. The case is another milestone in defining the liability of intermediaries in Canada in light of the digital trade provisions of NAFTA’s successor, the US-Mexico-Canada Agreement (“USMCA”), which prevents Canada from adopting or maintaining measures that treat a supplier or user of an interactive computer service as an information content provider in determining liability for harms related to information processed through the service, except if the supplier or user is involved in the creation or development of that information.
We hope that this legislative tracker was informative!