The Yanks are Coming (to an AI near you)
Darryl Carlton
AI Governance Thought Leader | Digital Transformation Expert | AI Pioneer since 1984 | Bestselling Author in Cybersecurity & AI Governance | Passionate about responsible AI use in Higher Education, Business & Government
The United States and the European Union (EU) approach the regulation of artificial intelligence (AI) and data privacy from fundamentally different governance frameworks, leading to distinct challenges and operational dynamics. In the U.S., the principle of "States' Rights" embedded in the federal system complicates the creation of unified federal legislation on AI and data privacy. In contrast, the EU benefits from a more harmonised approach due to its supranational legal structure.
U.S. Federal System and States' Rights
In the United States, the federal system of government allows states considerable autonomy to enact their own laws and regulations, including those governing technology, AI, and data privacy. This system stems from the Tenth Amendment of the U.S. Constitution, which reserves all powers not explicitly granted to the federal government to the states or the people. As a result:

- Individual states are free to legislate on AI and data privacy wherever federal law is silent, and several, including California, Washington, Illinois, Colorado, and Maryland, have done so.
- Businesses face a patchwork of state-level obligations rather than a single national standard.
- Any federal proposal must contend with questions of preemption and state prerogatives, slowing the path to comprehensive national legislation.
EU's Harmonized Regulatory Framework
By contrast, the EU operates under a supranational system in which directives and regulations are designed to harmonise laws across member states:

- Regulations, such as the General Data Protection Regulation (GDPR) and the EU AI Act, apply directly and uniformly in every member state.
- Directives set common objectives that each member state must transpose into national law, allowing only limited local variation.
- The result is a single, largely consistent set of compliance obligations across the bloc.
Implications of Divergent Approaches
The principle of States' Rights in the U.S. creates a challenging environment for the development of federal AI and data privacy legislation, reflecting broader issues of federalism in the country. In comparison, the EU's more centralised approach under its harmonised rules offers clarity and consistency but may sacrifice some degree of local tailoring and flexibility that state-specific regulations in the U.S. can provide. Each system has its strengths and weaknesses, influencing how effectively each can navigate the complexities of modern technological governance.
California: A Pioneer in Consumer Privacy and AI Transparency
California has long been at the forefront of technology regulation in the U.S., and its approach to AI legislation is no exception. The California Consumer Privacy Act (CCPA) is the most prominent example, setting a benchmark for privacy rights that has influenced discussions about federal privacy legislation. Enacted in 2018 and subsequently amended, the CCPA provides consumers with robust rights to access, delete, and control their personal information. It requires businesses to disclose their data collection and sharing practices and gives consumers the right to opt out of the sale of their personal information.
Moreover, the CCPA, as amended by the California Privacy Rights Act (CPRA), addresses AI through transparency measures that require businesses to disclose the logic involved in automated decision-making, ensuring that AI systems are not black boxes to the consumers they affect. This focus on transparency is intended to foster greater consumer trust and accountability in automated systems that make significant decisions affecting individuals' lives, such as credit scoring and personalised advertising.
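To make this concrete, the sketch below shows one way a business might generate the kind of plain-language disclosure such transparency rules contemplate. The factor names and descriptions are purely illustrative assumptions; no statutory format is implied.

```python
# Hypothetical sketch: rendering a consumer-readable disclosure of the
# factors an automated decision system considers. All factor names and
# descriptions below are illustrative, not drawn from any statute.

DECISION_FACTORS = {
    "payment_history": "Record of on-time payments over the past 24 months",
    "credit_utilisation": "Share of available credit currently in use",
    "account_age": "Age of the applicant's oldest credit account",
}

def build_disclosure(factors: dict[str, str]) -> str:
    """Summarise, in plain language, the logic behind an automated decision."""
    lines = ["This decision was made by an automated system that considers:"]
    lines += [f"  - {description}" for description in factors.values()]
    return "\n".join(lines)

print(build_disclosure(DECISION_FACTORS))
```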
Washington State: Regulating Government Use of Facial Recognition Technologies
Washington State's approach to AI regulation underscores a specific concern with privacy and civil liberties, particularly regarding surveillance technologies. In 2020, Washington passed Senate Bill 6280, which specifically regulates the use of facial recognition technologies by state and local government agencies. This law is significant because it addresses the ethical implications and potential abuses of facial recognition, a prevalent AI application.
Under the law, government agencies in Washington must test their facial recognition services to ensure they do not produce unfair performance differences across different demographic groups. The law also mandates public disclosure of such technologies, including accountability measures requiring agencies to produce transparency reports and implement oversight mechanisms. This legislative action reflects growing concerns about privacy, consent, and the potential for racial bias in automated facial recognition systems.
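To illustrate what such testing can look like in practice, here is a minimal sketch that compares a face-matching system's false-match rates across demographic groups and flags outliers. The data format, the synthetic results, and the tolerance threshold are all assumptions, not requirements of SB 6280.

```python
# A minimal demographic-performance test: compare false-match rates across
# groups and flag any group that performs markedly worse than the best one.
from collections import defaultdict

def false_match_rates(results):
    """results: iterable of (group, predicted_match, actual_match) tuples.
    Returns each group's false-match rate among genuinely non-matching pairs."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in results:
        if not actual:  # only non-matching pairs can produce false matches
            totals[group] += 1
            errors[group] += int(predicted)
    return {group: errors[group] / totals[group] for group in totals}

def flag_disparities(rates, tolerance=0.01):
    """Flag groups whose false-match rate exceeds the best group's by more
    than the tolerance (the 0.01 default is an assumption)."""
    best = min(rates.values())
    return [group for group, rate in rates.items() if rate - best > tolerance]

# Synthetic example: group B false-matches twice as often as group A.
results = ([("A", False, False)] * 95 + [("A", True, False)] * 5
           + [("B", False, False)] * 90 + [("B", True, False)] * 10)
print(flag_disparities(false_match_rates(results)))  # ['B']
```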
Illinois: Enhancing Employment Transparency with the Artificial Intelligence Video Interview Act
Illinois has targeted AI transparency within the employment sector through the Artificial Intelligence Video Interview Act, which came into effect in 2020. This legislation addresses the increasing use of AI in hiring processes, particularly the deployment of AI-driven video interview platforms that analyse applicants’ responses. The law mandates that employers disclose the use of AI in video interviews and provide information on how the technology works and what general types of characteristics it evaluates.
Furthermore, Illinois requires employers to obtain consent from applicants before using these AI systems, thereby empowering candidates to opt out of potentially biased automated evaluations. The act also stipulates that employers must delete any video, and all copies of it, within 30 days of an applicant's request unless the applicant consents to longer retention. This law underscores the importance of transparency and informed consent when AI technologies can significantly affect employment opportunities.
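As a sketch of how the deletion requirement might be operationalised, the toy store below purges recordings whenever a request is pending and flags any request that has already gone past the 30-day deadline. It is a deliberate simplification: a production system would also have to reach backups and every distributed copy.

```python
# Hedged sketch of a retention workflow for the 30-day deletion requirement.
from datetime import datetime, timedelta, timezone

DELETION_WINDOW = timedelta(days=30)  # the Act's "within 30 days" deadline

class InterviewStore:
    """Toy in-memory store; real systems must also purge backups and copies."""
    def __init__(self):
        self._videos = {}    # applicant_id -> recording blob(s)
        self._requests = {}  # applicant_id -> when deletion was requested

    def add_video(self, applicant_id: str, blob: bytes) -> None:
        self._videos.setdefault(applicant_id, []).append(blob)

    def request_deletion(self, applicant_id: str) -> None:
        self._requests[applicant_id] = datetime.now(timezone.utc)

    def purge_pending(self) -> list[str]:
        """Delete every recording with a pending request; return applicants
        whose requests were already past the deadline (a compliance breach)."""
        now = datetime.now(timezone.utc)
        overdue = [a for a, t in self._requests.items()
                   if now - t > DELETION_WINDOW]
        for applicant_id in list(self._requests):
            self._videos.pop(applicant_id, None)
            del self._requests[applicant_id]
        return overdue
```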
Colorado
Colorado is making significant strides in regulating artificial intelligence, reflecting a broader national and global trend towards establishing legal frameworks that govern the ethical use of AI technologies. The state's proactive approach is exemplified by Senate Bill 205, which addresses key areas of concern, including discrimination, transparency, and accountability in AI applications. This legislation highlights Colorado's commitment to ensuring that AI systems are used in ways that protect citizens and promote fairness.
One of the central aims of Senate Bill 205 is to safeguard consumers from AI systems that might discriminate based on various personal attributes. This is particularly pertinent in contexts where AI decision-making could affect individual rights and opportunities, such as employment, housing, healthcare, and access to services. The bill requires that AI systems used in such sensitive decision-making processes be designed and operated to prevent discriminatory outcomes. This involves setting legal standards for fairness and requiring regular audits and reporting to ensure compliance.
The focus on discrimination is driven by increasing awareness of the biases embedded in AI algorithms, whether through the data used to train them or the algorithms' design. By mandating safeguards against such biases, Colorado aims to foster an environment where technology enhances societal equity rather than undermining it.
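One common way to operationalise such an audit is the "four-fifths" selection-rate comparison long used in employment analysis, sketched below. Senate Bill 205 does not prescribe this particular test; it is simply one widely used heuristic for surfacing disparate outcomes.

```python
# Illustrative disparate-impact audit using the four-fifths heuristic:
# flag any group selected at less than 80% of the best group's rate.

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected: bool) pairs."""
    totals, selected = {}, {}
    for group, was_selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {group: selected[group] / totals[group] for group in totals}

def four_fifths_check(rates, threshold=0.8):
    """Return each flagged group with its ratio to the best group's rate."""
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()
            if rate / top < threshold}

# Example: group B is selected at half of group A's rate and is flagged.
outcomes = ([("A", True)] * 50 + [("A", False)] * 50
            + [("B", True)] * 25 + [("B", False)] * 75)
print(four_fifths_check(selection_rates(outcomes)))  # {'B': 0.5}
```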
Another pivotal aspect of Colorado's legislative approach under Senate Bill 205 is the requirement that AI developers be transparent about their systems' functionalities and potential risks. Developers must disclose how their systems operate, the decisions they can make, and the data they utilise. This transparency is crucial for several reasons:

- It lets consumers understand when and how AI influences decisions that affect them.
- It enables regulators and independent auditors to verify that systems meet fairness standards.
- It helps developers and deployers identify and correct sources of error or bias before harm occurs.
Such disclosures empower consumers and regulatory bodies and foster trust in AI technologies, which are essential for their sustainable integration into everyday life.
The Colorado AI Insurance Regulations represent a significant legislative step in tailoring the management of AI applications to a specific industry: insurance. Recognising the unique challenges and risks of deploying AI in insurance, Colorado has introduced regulations to ensure that insurers not only embrace the benefits of AI but also mitigate its risks effectively. The regulations are structured around two key pillars, risk management and data protection assessments, designed to foster a responsible, secure, and ethical approach to the use of AI in the insurance industry.
Under the new regulations, insurance companies operating in Colorado are required to establish comprehensive AI governance and risk management frameworks. These frameworks are not merely guidelines but mandatory measures that insurers must integrate into their daily operations. The aim is to ensure that all AI systems used by insurers are subject to close oversight and governed by clear policies that address potential risks throughout the AI lifecycle, from development and deployment through to continuous monitoring.
This proactive approach aims to integrate AI into the insurance sector safely and responsibly, ensuring that technology serves the best interests of both the insurers and the insured.
Another cornerstone of the Colorado AI Insurance Regulations is the requirement for regular data protection assessments. These assessments are crucial for processes involving sensitive data, which can include everything from personal health information to financial data in the context of insurance.
Maryland
Maryland has taken a significant step towards strengthening data privacy and consumer protection with the recent enactment of the Maryland Online Data Privacy Act of 2024 (MODPA). This legislation is a clear response to growing concerns about the pervasive reach of digital technologies into personal lives and the potential misuse of personal information in the digital age. MODPA introduces comprehensive measures aimed at putting control back into the hands of consumers and imposing stringent obligations on businesses handling personal data.
Data Minimisation
At the heart of MODPA is the principle of data minimisation, which compels companies to rethink how they collect and store personal information. The act mandates that businesses limit their collection of personal data to what is strictly necessary for delivering their services. This approach seeks to curb the excessive and often unnecessary data collection practices that have become commonplace in the digital economy, reducing the risk of exploiting personal data for unapproved purposes. By enforcing data minimisation, Maryland aims to ensure that the privacy of its residents is respected and that companies are more deliberate and transparent about the data they collect.
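In engineering terms, data minimisation often reduces to an explicit allowlist: only the fields tied to the service being delivered ever enter storage. The sketch below illustrates the pattern; the service names and field lists are hypothetical, not taken from MODPA.

```python
# Minimal data-minimisation pattern: an allowlist per service, applied
# before anything is persisted. Services and fields here are invented.

ALLOWED_FIELDS = {
    "shipping": {"name", "street", "city", "postcode"},
    "billing": {"name", "card_token", "billing_postcode"},
}

def minimise(raw: dict, service: str) -> dict:
    """Drop every submitted field not strictly necessary for the service."""
    allowed = ALLOWED_FIELDS.get(service, set())
    return {key: value for key, value in raw.items() if key in allowed}

# Extra fields submitted by a form never enter storage:
payload = {"name": "A. Consumer", "street": "1 Main St", "city": "Baltimore",
           "postcode": "21201", "birthdate": "1990-01-01"}
print(minimise(payload, "shipping"))  # birthdate is discarded
```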
Protection of Sensitive Data
MODPA sets even higher standards for handling sensitive data, which includes information about racial or ethnic origin, political opinions, religious beliefs, biometric data, health, sexual orientation, and more. The act prohibits the collection, processing, or sharing of sensitive data unless it is essential for the specific services requested by a consumer. This stringent requirement compels businesses to carefully assess their data practices and implement robust safeguards to protect sensitive information, thereby enhancing trust and security for consumers who may feel vulnerable about sharing their details.
Consumer Rights
Another cornerstone of MODPA is the broad set of rights granted to consumers regarding their data. These rights empower Maryland residents to take control of their personal information in several ways:

- The right to access the personal data a business holds about them.
- The right to correct inaccuracies in that data.
- The right to have their personal data deleted.
- The right to obtain a portable copy of their data.
- The right to opt out of targeted advertising, the sale of personal data, and certain forms of profiling.
To comply with MODPA, businesses must establish clear policies and practices that align with the act's principles. This includes updating privacy policies, implementing procedures to respond to consumer requests, and conducting regular data protection assessments to identify risks. Moreover, businesses must ensure adequate systems to protect the data they collect, both from external breaches and internal misuse.
USA Federal Initiatives
The proposed USA AI Act represents a pivotal development in the federal government's approach to managing the burgeoning field of artificial intelligence. As AI technologies become more integrated into various sectors of the economy and daily life, there is a growing need to address the potential security and safety risks associated with their deployment. The USA AI Act seeks to address these concerns through two primary initiatives: establishing a Voluntary AI Incident Database and strengthening the National Vulnerability Database to include AI security vulnerabilities.
Establishment of a Voluntary AI Incident Database
One of the cornerstone elements of the USA AI Act is the creation of a Voluntary AI Incident Database. This initiative would foster a collaborative environment in which public and private sector entities share information about AI security and safety incidents. By encouraging transparency and the exchange of information, the database aims to build a comprehensive resource that helps stakeholders understand and mitigate risks associated with AI systems.
The database would serve multiple purposes:

- Aggregating incident reports to reveal patterns and recurring failure modes across AI systems.
- Providing early warning of emerging risks so that operators of similar systems can act preemptively.
- Building a shared body of lessons learned to inform standards, best practices, and future regulation.
Participation in the database would be voluntary, and mechanisms would be in place to protect the confidentiality of the information shared and the identities of those sharing it. This approach encourages maximum participation by reducing the potential legal or reputational risks of disclosing sensitive incident information.
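The Act does not specify a reporting schema, but a voluntary incident record might look something like the sketch below; every field shown is an assumption offered purely for illustration.

```python
# Hypothetical structure for a voluntary AI incident report. The Act defines
# no schema, so all field names and choices here are assumptions.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AIIncidentReport:
    incident_id: str                  # assigned by the database operator
    occurred_at: datetime
    system_description: str           # what the system does, not who runs it
    harm_category: str                # e.g. "safety", "security", "bias"
    summary: str                      # narrative of what went wrong
    mitigations: list[str] = field(default_factory=list)
    reporter_anonymous: bool = True   # reflecting the Act's confidentiality aims
```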
Strengthening the National Vulnerability Database
The second major initiative under the USA AI Act is the enhancement of the National Vulnerability Database (NVD), a U.S. government repository of standards-based vulnerability management data. This database includes security checklist references, security-related software flaws, misconfigurations, product names, and impact metrics. The Act proposes expanding the NVD to specifically include AI security vulnerabilities, recognising the unique challenges AI technologies pose.
This enhancement would involve:

- Developing taxonomies and classification criteria suited to AI-specific flaws, such as model evasion, data poisoning, and model extraction.
- Updating reporting and scoring processes so that AI vulnerabilities can be catalogued alongside conventional software flaws.
- Coordinating with AI developers, researchers, and vendors so that vulnerabilities are disclosed and tracked responsibly.
By integrating AI security vulnerabilities into the NVD, the Act aims to create a more robust and dynamic framework for securing AI systems against known and potential threats. This would enhance national security and promote trust in AI applications by ensuring their safety and reliability.
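As a purely speculative illustration, an NVD entry extended for AI might carry fields like the ones below. The NVD's actual schema is CVE/CVSS-based; the AI-specific fields shown here are assumptions, not the proposed format.

```python
# Speculative sketch of an AI-extended vulnerability record. Only the
# cve_id/description/cvss_score fields mirror real NVD concepts; the
# "ai_specific" block is invented for illustration.
ai_vulnerability_entry = {
    "cve_id": "CVE-YYYY-NNNNN",  # placeholder identifier
    "description": "Image classifier susceptible to adversarial patch evasion",
    "cvss_score": 7.5,           # standard NVD impact metric
    "ai_specific": {
        "attack_class": "evasion",  # e.g. evasion, poisoning, extraction
        "affected_lifecycle_stage": "inference",
        "model_types": ["image classifier"],
        "known_mitigations": ["adversarial training", "input preprocessing"],
    },
}
```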
The Role of NIST at the Federal Level
The National Institute of Standards and Technology (NIST) plays a pivotal role in shaping the regulatory landscape for artificial intelligence in the United States. While NIST does not directly regulate AI, its work in developing standards and guidelines is crucial. These standards often serve as benchmarks for best practice in technology deployment, including AI, and influence public and private sector policies and strategies.
NIST's approach to AI involves a collaborative and open process with stakeholders from industry, academia, and government to ensure that the guidelines are comprehensive, practical, and forward-looking. The aim is to create a set of standards that address current technological capabilities and are adaptable to future advancements in AI technologies.
Although NIST's guidelines are not legally binding, their influence on AI regulation is profound. Policymakers often consider NIST standards a trusted source of technical expertise that can inform legislation and regulatory policies. This influence is evident in various state and federal AI initiatives referencing NIST standards as compliance and risk assessment benchmarks.
Implications and Consequences
As artificial intelligence becomes increasingly integral to crucial sectors of business and society, there is a global shift towards more formal oversight and improved governance of these technologies. Recognising this growing dependency, NIST is leading efforts to develop comprehensive AI standards. This initiative is crucial because it ensures that AI development adheres to principles of security, transparency, and equity, fostering an environment of trustworthy technological advancement.
NIST's proactive role in formulating these standards is instrumental in shaping the future landscape of AI regulation in the United States. By offering detailed and forward-thinking guidelines, NIST influences legislative measures and promotes consistency in AI practices across various states and sectors. This ensures that AI technologies are implemented in a manner that is both responsible and ethical.
In parallel, state legislatures in California, Washington, and Illinois are taking significant steps to integrate AI responsibly into societal frameworks. These states have enacted laws to safeguard individual rights and establish a clear accountability framework for organisations deploying AI technologies. The regulations focus on ethical usage and transparency, setting a standard that could guide future federal rules and serve as a benchmark for other states.
Colorado's legislative approach to AI in the insurance sector further exemplifies state-level innovation in AI governance. The Colorado AI Insurance Regulations mandate comprehensive risk management strategies and regular data protection assessments, creating a model that other states might follow. These measures aim to protect consumers and provide guidance on using AI to enhance service quality and operational efficiency while addressing associated risks.
The Maryland Online Data Privacy Act of 2024 is another exemplary state initiative that addresses data privacy challenges directly linked to AI and digital technologies. By implementing stringent standards for data minimisation and enhancing protections for sensitive data, Maryland is at the forefront of ensuring consumer privacy in the digital age.
The USA AI Act illustrates a significant step towards creating a robust framework for managing AI-related security and safety risks. Establishing a Voluntary AI Incident Database and enhancing the National Vulnerability Database to include AI vulnerabilities are key components of this act. These initiatives aim to build a secure digital infrastructure that supports innovation and ensures the safe deployment of AI technologies.
These state and national efforts underscore a comprehensive movement towards regulating AI more stringently. As AI technologies continue to evolve and expand their influence across various aspects of life, the frameworks developed by these legislative actions provide valuable lessons and models that could influence broader national and international AI policies. This collective legislative activity demonstrates a commitment to harnessing the benefits of AI while effectively mitigating its risks, ensuring that AI technologies contribute positively to society.
What this means to you (the "so what" effect)
When it comes to AI and data privacy legislation, the US cannot be approached as a single market. State initiatives impose divergent standards, each demanding conformance. This means that whether you are a US company operating exclusively within national borders or a foreign company intending to enter the very large and lucrative US market, you need to build your AI policies and practices to the highest applicable standard to be confident of meeting both federal and state obligations.
In practice, this means maintaining a focus on the European Union's ethical and regulatory frameworks and treating them as the most comprehensive benchmark, one that would ensure global compliance irrespective of the market in which you operate.
My new book, "AI Governance: Applying AI Policy and Ethics through Principles and Assessments", will provide you with effective guidance on what you need to do to meet these challenges.