In a letter to the NTIA, US AGs weigh in on AI regulation. What does this tell us about their views on the subject?
- AI regulation should be risk-based (shout-out to the EU AI Act) and requires serious transparency. Without it, you may be doing something unfair and deceptive (even now)
- AI will need audits, and AI that uses personal data already requires a DPIA; a good model for one is in the Colorado CPA rules.
- New AI laws will need to align with existing privacy laws (including state laws); the AGs are already enforcing those laws and should be involved in enforcement under any new AI laws.
Transparency: [Do it now]
- The foundation of any effective AI governance framework is appropriate transparency
- Consumers must be told when they are interacting with an AI rather than a human being and whether the risks of using an AI system are negligible or considerable
- Organizations should publish public-facing policies that describe: which decisions are powered by AI; what human involvement there is in validating those decisions; what process individuals can use to appeal those decisions; what personal data is used in those decisions; and a method for individuals to access and correct any personal information the AI uses in making them (a machine-readable sketch of these elements follows this list).
- Requiring appropriate disclosure of key elements of high-risk AI systems empowers individuals to decide which systems are fair [Does this mean AGs think that without a disclosure right now, the use of the AI system could be unfair under the consumer protection laws?]
- These disclosures should be made pursuant to consistent criteria and standards that should be developed... like Energy Star and LEED [or kinda like the EU AI Act's CE mark]
- Moreover, for companies that adopt and commit to such practices, the Federal Trade Commission and State Attorneys General would possess the ability to enforce those commitments using their consumer protection authority (which sanctions deceptive trade practices)
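To make the policy elements above concrete, here is a minimal sketch of what such a public-facing disclosure could look like as a machine-readable record. The letter does not prescribe any format; every field name and value below is invented for illustration.

```python
# Hypothetical sketch of the public-facing AI policy elements the AGs
# describe. Nothing in the letter prescribes a schema; all field names
# and values here are invented.
ai_transparency_policy = {
    "ai_powered_decisions": [
        "credit-limit adjustments",
        "resume screening",
    ],
    "human_involvement": "a human reviewer validates every adverse decision",
    "appeal_process": "written appeal via the account portal, answered within 30 days",
    "personal_data_used": ["credit history", "employment history"],
    "access_and_correction": "https://example.com/privacy/ai-data-requests",
}
```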
Testing and Impact Assessment:
- To the extent organizations employ AI systems to handle critical functions, the best practice is to commit to ongoing cycles of testing, assessment, and external audits.
- The AGs specifically mention "privacy by design" in this context.
- Impact assessments should be ongoing, beginning as early as the design phase of a new AI system and even before the system is put into use
- Further periodic assessments and testing conducted once the system is operational can reveal unexpected results or unintended consequences that emerge over time
- The AGs specifically flag the CO CPA DPIA rules as the model to go by.
- An assessment or audit need not be overly complex. An AI system assessment might, for example (see the sketch at the end of this section):
1. Identify and describe the specific risks to safety or to individual or collective civil rights;
2. Identify risks associated with the data used by the system (e.g., biometric, financial, PII, or large-scale generalized data);
3. Document measures taken to avoid or offset those risks;
4. Document “grounded” tests or otherwise independent testing of the AI system to demonstrate efficacy without unintended bias, errors, or false outcomes;
5. Contemplate the benefits of the AI system; and
6. Demonstrate that the benefits of the system outweigh the risks, as offset by the safeguards in place.
- The DPIA need not be public by default, but enforcement authorities should have the right to request these assessments and audits at least annually to ensure that they take place with the appropriate vigilance and are not an empty promise
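The six elements above map cleanly onto a simple structured record. As a minimal sketch (the `AIImpactAssessment` class, its fields, and the benefit/risk check are all hypothetical, not anything prescribed by the AGs or the Colorado CPA rules), an organization might track each assessment like this:

```python
from dataclasses import dataclass, field

@dataclass
class AIImpactAssessment:
    """Hypothetical record mirroring the AGs' six-part example list."""
    system_name: str
    # 1. Specific risks to safety or civil rights posed by the system
    safety_and_civil_rights_risks: list[str] = field(default_factory=list)
    # 2. Risks tied to the data the system uses (e.g., biometric,
    #    financial, PII, or large-scale generalized data)
    data_risks: list[str] = field(default_factory=list)
    # 3. Measures taken to avoid or offset those risks
    mitigations: list[str] = field(default_factory=list)
    # 4. "Grounded" or independent testing showing efficacy without
    #    unintended bias, errors, or false outcomes
    independent_tests: list[str] = field(default_factory=list)
    # 5. Benefits the system is expected to deliver
    benefits: list[str] = field(default_factory=list)

    def benefits_outweigh_risks(self) -> bool:
        # 6. A crude stand-in for the required benefit/risk judgment:
        # some benefit is documented, and any identified risk is
        # accompanied by at least one documented mitigation.
        has_risks = bool(self.safety_and_civil_rights_risks or self.data_risks)
        return bool(self.benefits) and (not has_risks or bool(self.mitigations))

# Example usage (all values invented):
assessment = AIImpactAssessment(
    system_name="resume-screening-model",
    safety_and_civil_rights_risks=["potential disparate impact in hiring"],
    data_risks=["processes applicant PII"],
    mitigations=["quarterly disparate-impact testing", "human review of rejections"],
    independent_tests=["2024 external bias audit"],
    benefits=["faster, more consistent initial screening"],
)
print(assessment.benefits_outweigh_risks())  # True
```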
External third-party audits:
- Entities that use or develop high-risk AI should be required to engage in periodic external, third-party audits to ensure that any AI systems in use by the entity comply with a common set of criteria set by independent standards bodies
- To optimize the effectiveness of such audits, the NTIA, NIST, or another trusted standard-setting body could develop appropriate protocols for such audits
- The development of principle-based governmental oversight of AI will be imperative in the years ahead. This could be by way of a Federal law.
- A focus on high-risk AI systems aligns with existing AI regulatory frameworks, such as the EU AI Act, which prohibits the use of AI in certain high-risk contexts and imposes heightened obligations on other such systems
Privacy Legislation and AI:
- Any AI governance standards should align with existing state and federal data security requirements to ensure that personally identifiable information used to inform AI systems is adequately secured.
- Any legislation aimed at AI should contemplate the privacy rights impacted by the data required to power effective AI, including appropriate consent and transparency requirements both for the initial collection and use of sensitive data used by AI systems, as well as the repurposing of personal data collected by companies for other reasons to help inform AI systems
- State Attorneys General should have concurrent enforcement authority in any Federal regulatory regime governing AI. They are already active in regulating AI systems through statutes and rules which govern profiling and automated decisionmaking.
- Allowing for concurrent state Attorney General enforcement of any federal AI regulation would allow for continued enforcement of these and other important protections regarding AI
- Whatever accountability mechanism is developed, it should ensure avenues for legal redress against all participants within the chain of use and development of an AI System that causes legally cognizable harms