Risks & Insurance Implications for Companies Leveraging AI

Artificial Intelligence (AI) is one of the hottest topics in today’s world. It has captured the spotlight due to a recent surge in generative AI capabilities, the public availability of such tools, and the rapid expansion of the Internet of Things (IoT), which connects everything from appliances and wearable devices to homes, security systems, and vehicles.

AI capabilities are transforming how organizations manage customer engagement, operations, and almost every other facet of business. Organizations across all industries are leveraging advanced technology to enhance efficiency and innovate in an effort to gain a competitive edge. Historically, new technologies have driven significant change, and AI’s integration with modern digital operations is expected to cause substantial economic disruption over time. It is therefore crucial for organizations to address the risks and secure the necessary insurance sooner rather than later.

While AI has been developing for decades, the rise of generative AI has presented new opportunities and risks across industries. In March 2023, OpenAI released GPT-4, its most sophisticated conversational AI model to date. Since then, the tech sector’s market capitalization has surged by 50%, adding $6 trillion in shareholder value. The AI boom has also elevated share prices for heavy hitters like Microsoft, Alphabet (Google’s parent company), and Amazon, which are investing heavily to develop the technology. (Source 2)

Businesses around the globe, including film studios, banks, and consulting firms, have rapidly adopted AI. Many large corporations are actively experimenting to determine what works. For example, JPMorgan Chase has implemented over 300 AI use cases, while consulting firm Capgemini has utilized Google Cloud’s generative AI to produce a library of over 500 industry-specific use cases. Similarly, the German chemicals giant Bayer has reported more than 700 use cases for generative AI. (Source 2)

Law firms are using generative AI to streamline tasks, such as due diligence and contract analysis. Investment banks are using AI to automate research processes, and other companies are using AI to build software, improve users’ search results, or enhance advertising. Despite these advancements, an IBM poll suggests that many companies are hesitant to disclose their use of AI because they still lack internal expertise on the subject. About 25% of American companies have banned the use of generative AI in the workplace entirely – possibly due to data privacy and security concerns. In their annual reports, Blackstone and Eli Lilly, leaders in private equity and pharmaceuticals respectively, cautioned investors about AI-related risks, including the potential for leakage of intellectual property. (Source 2) Consequently, many companies have wisely started asking more questions about how AI is incorporated into their tech stacks.

AI TECHNOLOGY RISKS

From a cybersecurity perspective, AI presents both benefits and expanded risk. Nonetheless, companies around the world are integrating AI into their products and operations, despite the emerging and varied regulations on AI safety and liability. The European Union is working to create a comprehensive AI Act, while Great Britain is taking more of a wait-and-see stance. In October 2023, President Biden issued an executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” to formulate, among other things, reporting requirements for AI developers in the U.S. (Source 11) As AI applications continue to expand, so does the potential risk. Key risks that are becoming more prevalent include:

Enhanced Social Engineering. Cyberattacks are only growing more sophisticated; however, many still start with social engineering and email. Phishing emails were previously easier to spot due to incorrect grammar, low sophistication, and other errors. When attackers use generative AI to create these emails, it removes those easy-to-spot flags, enabling more seamless social engineering.

DDoS Attacks. AI algorithms can be used to automate the control and coordination of distributed denial-of-service (DDoS) attacks. In a DDoS attack, compromised devices are used to flood a target with an overwhelming amount of traffic, effectively turning these devices into “zombies.”

Exploiting Vulnerabilities. Bad actors can use AI to automate the process of finding and exploiting vulnerabilities before they are patched. While the risks associated with such AI misuse are not yet completely understood, one can envision terrifying scenarios. Consider that modern cars are internet-enabled, allowing drivers access to roadside assistance, navigation, and other features. (Source 3) New vulnerabilities could arise if bad actors gained access to these connected vehicles’ systems or data. Connected vehicles gather massive amounts of sensitive data on drivers and passengers; interact with critical U.S. infrastructure; and can be piloted or disabled remotely. At the end of February 2024, President Biden took action to protect Americans from the security risks posed by connected vehicles from countries of concern, such as China. The Department of Commerce is now investigating the national security risks presented by connected vehicles that incorporate technology from countries of concern and considering regulations to address those risks. (Source 4)

Privacy Concerns. The ability of AI to process and analyze large volumes of data can undermine efforts at anonymization. By correlating and synthesizing information from multiple data points across datasets, AI can potentially identify individuals even when personal information is not directly included.
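
To make that mechanism concrete, consider the following minimal Python sketch of a classic “linkage attack,” built entirely on hypothetical data. It illustrates how records stripped of names can be re-identified by joining them against a public dataset on shared quasi-identifiers (here, ZIP code, birth year, and sex); AI simply automates this kind of correlation at far greater scale and across far messier data.

```python
# Hypothetical illustration of a linkage attack. All data below is invented.

# "Anonymized" records: direct identifiers removed, quasi-identifiers remain.
health_records = [
    {"zip": "75201", "birth_year": 1968, "sex": "F", "diagnosis": "hypertension"},
    {"zip": "75001", "birth_year": 1990, "sex": "M", "diagnosis": "asthma"},
]

# A separate, public dataset (e.g., a voter roll) that does include names.
public_roll = [
    {"name": "Jane Doe", "zip": "75201", "birth_year": 1968, "sex": "F"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def reidentify(records, roll):
    """Re-attach names to 'anonymized' records by joining on quasi-identifiers."""
    index = {tuple(p[k] for k in QUASI_IDENTIFIERS): p["name"] for p in roll}
    matches = []
    for record in records:
        key = tuple(record[k] for k in QUASI_IDENTIFIERS)
        if key in index:
            matches.append({**record, "name": index[key]})
    return matches

# The first "anonymized" record is now linked back to a named individual.
print(reidentify(health_records, public_roll))
```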

POTENTIAL INSURANCE IMPLICATIONS

As AI use cases increase, so do the potential implications for insurance. Both private and public sector stakeholders, particularly shareholders, are paying close attention. Governing bodies, tasked with overseeing investment and raising capital, often look to emerging technologies to attract funds. This trend was previously seen with non-fungible tokens (NFTs) and digital assets, and the focus has now shifted to AI.

AI Washing Allegations. The expansion of AI use cases sometimes serves merely as marketing or “window-dressing.” This phenomenon, known as “AI washing,” involves companies exaggerating the use of AI in their products and services to boost their market appeal. (Source 2) This practice not only misleads investors but also raises significant legal and insurance issues as these claims are scrutinized for deception and the true capabilities of the products and services are evaluated. (Source 5)

Regulatory & Plaintiff Lawsuits. Gary Gensler, chair of the U.S. Securities and Exchange Commission (SEC), is also focused on the risk to markets and investors when AI is utilized to make recommendations and trades. AI models can generate incorrect outputs known as “hallucinations”; if that occurred on a large scale, it could wreak havoc on financial markets. The SEC is now developing regulations for how brokers and investment advisors leverage AI and other predictive data analytics when interacting with customers. Generally, evolving regulatory oversight contributes to an uptick in claims and lawsuits. (Source 7)

Privacy Violations. The trend toward personalized insurance has substantial privacy implications that are likely to come under greater scrutiny. Take car insurance as an example. Many drivers may not be aware that activating AI-supported features in their vehicles allows the collection and use of data about their driving behaviors, which is often shared with third parties, including insurance companies and data brokers. Auto manufacturers and others claim to have permission to collect and use such personal information, asserting that consent is given through fine print in click-through agreements and privacy policies, but such practices are almost invisible to drivers, and certainly invisible to passengers. (Source 3) This situation highlights the need for more transparent data handling practices to ensure that consumers are truly informed and consenting.

Copyright Infringement Claims. AI models rely on large amounts of data sourced from third parties. However, there is often a lack of transparency regarding the source of that data or how it is stored within the AI itself, presenting substantial copyright issues that will only grow over time. (Source 11) Many of the key intellectual property issues brought to light by AI, ranging from the use of copyrighted material as training data for AI models to whether or not AI-generated works can be copyrighted, will likely only find resolution in the courtroom or through new legislation. (Source 10) The courts are already seeing cases emerge. In December 2023, The New York Times sued OpenAI and Microsoft for copyright infringement, starting an intense legal battle over the unauthorized use of published information to train AI models. (Source 8) In a similar case, four unidentified plaintiffs sued GitHub, OpenAI, and Microsoft over the reproduction of licensed open-source code. (Source 9)

Defamation & Discrimination Lawsuits. AI can generate inaccurate information or biased outputs, giving rise to defamation and discrimination claims. In December 2023, Rite Aid faced regulatory action when the Federal Trade Commission (FTC) imposed a 5-year prohibition on the company’s use of AI-based facial recognition technology. The FTC had alleged that the company used such technology without implementing reasonable safeguards, resulting in harm to consumers as the technology exhibited bias when tagging consumers, particularly women and people of color, as shoplifters. The FTC’s settlement with the company confirmed that preventing the misuse of biometric information is a high priority for the FTC, which issued a warning earlier in 2023 that the agency would be scrutinizing biometrics use. (Source 11) In another case, OpenAI is being sued for defamation due to a “hallucination” claiming that Mark Walters, a conservative radio host, had embezzled money from the Second Amendment Foundation, an entirely fabricated allegation. (Source 12)

Blurred Liability Lines. Use of AI can make it difficult to determine where liability begins or ends. Yet, when AI goes awry, there is potential for a broad range of losses, including reputational and financial harm, in addition to third-party liability. For example, if an AI algorithm causes a loss, where does that liability land: with the business utilizing it, the AI developer, or the licensor? Much depends on the contract. However, it is clear that companies using AI will be held responsible by regulators for that use. Companies cannot blame their vendors or their employees, even when potential employee negligence increases uncertainty for insurers. The lines between professional and product liability may continue to shift as regulations take hold. But even now, the relationship between users and developers is blurring as companies co-create AI systems and leverage proprietary information to train or refine AI models. (Source 13)

HOW CAN INSUREDS PROTECT THEMSELVES?

From a liability perspective, the wisest advice applies to both individuals and organizations: think critically about the technologies brought into your home, life, or company. Take the time to review all insurance coverages with a trusted retail agent and wholesale broker to ensure that the exposures noted above are covered. Policyholders must also have the right governance in place, as well as checks and balances around AI, to ensure it is not causing harm. (Source 13)

Read the fine print. Thoroughly review the privacy notice and terms to ensure you are comfortable with the parameters. When necessary, slow the procurement process down to confirm that vendors have appropriate security and protections in place around your data. It is also vital to employ a more rigorous procurement process that satisfactorily answers key questions, including:

  • Is my organization’s data encrypted? Encryption is foundational to data security, protecting data from being compromised, stolen, or altered (see the sketch following this list).
  • What data is retained, how is it stored, and how is the data used? Many companies do not realize that, on the back end, vendors often utilize the Cloud and AI together, which can mean data is stored or utilized in ways not clearly outlined. Data access and use restrictions should be clearly spelled out in contracts.
  • Are the limitations on use truly honored? The word “insights” should be treated as a yellow flag. If a vendor indicates that it can derive insights from a user’s data, that signals something is being done with that data that warrants further questioning.
  • What does the actual contract say vs. marketing materials? It is important that insureds require clarity around data use in the actual contract rather than relying on marketing materials. Vendors are moving quickly in this area because many existing contracts allow them to add new features, which can mean AI is integrated and rolled out without much advance notice to users.
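
As a point of reference for the encryption question above, here is a minimal Python sketch of symmetric encryption at rest. It assumes the widely used open-source cryptography package (installed via pip install cryptography); the data and key handling shown are purely illustrative, not a vendor’s actual implementation.

```python
# Illustrative only: symmetric encryption at rest with Fernet
# (authenticated encryption from the "cryptography" package).
from cryptography.fernet import Fernet

# In practice, the key should live in a secrets manager or HSM,
# never alongside the data it protects.
key = Fernet.generate_key()
f = Fernet(key)

# What a vendor should be storing: ciphertext, not plaintext.
ciphertext = f.encrypt(b"customer record: hypothetical sensitive data")

# Decryption is only possible with the key; tampered ciphertext
# raises an InvalidToken error instead of returning garbage.
plaintext = f.decrypt(ciphertext)
assert plaintext == b"customer record: hypothetical sensitive data"
```

If a vendor cannot describe its encryption and key management at roughly this level of specificity, that is a signal to keep asking questions.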

Focus on education across the organization. A company is only as strong as its awareness of evolving threats and what they can mean for its systems. Businesses should build out guidelines for the responsible use of AI across the organization, followed by training that helps people truly understand what can and cannot be shared, reducing the potential likelihood of harm. Insureds should also take reasonable steps to ensure cybersecurity and privacy through basic safeguards. Such policies should amend or extend acceptable use policies, data use policies, and others as applied to AI systems.

If concerns about data exposure persist after answering these questions, organizations should consider partitioning, keeping certain data on a separate network so that their most valuable information is not compromised.

BOTTOM LINE

In our digital age, AI is here to stay, but the emerging risks and insurance implications are complex, diverse, and constantly evolving. Many companies and vendors are scrambling to incorporate AI because it sells, yet new opportunities also bring new risks that must be confronted. The risk is even broader than liability. Losing control of data, a critical asset of any company, means losing value, and insurers will likely begin carving out AI exclusions as claims hit. Make it a point to review all current policies to determine if there are any gaps in coverage that should be addressed at renewal. Legislators and regulators around the globe are already considering how best to regulate AI, including generative AI.

It largely remains to be seen how various jurisdictions will respond (though Utah has passed a law that brings generative AI under its consumer protection statute). At this point, the court systems are beginning to review complex liability cases involving technology, and coming to different conclusions. When combined with third-party litigation funding, new products liability, and AI regulation, this could result in substantial liability shifts in the years ahead. (Source 11) Partnering with knowledgeable wholesale brokers well-versed in AI-related risks can help ensure your clients navigate this new world with the right safety nets in place. Reach out to your CRC Group producer today.

GUEST CONTRIBUTORS

KATHRYNE (KATE) M. MORRIS

Kathryne (Kate) M. Morris is a co-founder and member of Hosch & Morris, PLLC. Kate is a versatile, tech-savvy attorney with expertise in data privacy and cybersecurity and more than 15 years of experience in commercial transactions and litigation. She specializes in U.S. and E.U. data privacy and cybersecurity issues, focusing on transactional and governance matters related to data and technology.

RUSSELL (RUSS) B. PEARLMAN

Russell (Russ) B. Pearlman serves as Of Counsel to Hosch & Morris, PLLC, and acts as the firm’s Chief Technology Officer. Russ has a unique background that includes over 25 years of progressively senior positions in business and technology roles. His hands-on approach has included responsibility for both developing and operating technology strategy, including support for thousands of employees and multiple data centers. In addition, Russ obtained his law degree with a focus on the issues facing companies that utilize technology for competitive advantage, including intellectual property, trade secrets, cybersecurity, data privacy, e-discovery, and technology-based contracts.

ABOUT HOSCH & MORRIS

Hosch & Morris, PLLC is a boutique law firm dedicated to data privacy, information security, the Internet, and technology. The firm offers extensive transactional and litigation experience, depth in privacy and technology, and a breadth of perspective across many industries, including distribution, marketing, finance, employment, competition, intellectual property, and professional services. Hosch & Morris takes the time to learn clients’ businesses and focus on what they need, with the aim of growing, improving, and maximizing the value of the data and technologies on which those businesses are built.

END NOTES

