"Risks Ahead" - Generative Artificial Intelligence

Artificial Intelligence (AI) has been around since the 1980s. Companies have been using AI to analyze large volumes of data, identify patterns and trends, and automate processes.

Generative Artificial Intelligence takes it to a new level. Think of a search engine on steroids. Generative AI can create something that does not yet exist in the real world. It's no wonder it seems to be capturing every headline and earnings report these days.

Imagine this –

- Creating a term paper in seconds from a few keywords

- Automating self-driving cars and best-destination guidance

- Helping a biotech company predict adverse events with new drugs

- Creating medical images that show the future progression of a disease

- Generating audio, video, and imagery, not just text

- Producing AI-generated voices for marketing videos

- Answering legal questions; entire law libraries are being read and loaded as we speak

- Reading an insurance policy or contract and summarizing its pros and cons

- Simulating attacks and environments for threat intelligence

- Synthetically generating quarterly reports

These are just a few examples, most already in use today and others arriving in the very near future.

However, with advancements in productivity also come risks. Some governments have already begun taking steps to establish guidelines for AI's uses. The European Union, for example, is working on a "code of conduct" and privacy standards even stricter than the GDPR (General Data Protection Regulation).

Here are some examples:

Fraud / Disinformation – Remember there are "bad guys" out there who will use and embrace this technology. Imagine getting a phone call or video chat from what sounds and looks like a close friend or relative. In reality, criminals have used AI to clone that person's voice and likeness, known as a "deepfake," to obtain sensitive information or steal money. Further, cybercriminals are known to exploit and blackmail individuals by digitally manipulating images into explicit fakes and threatening to release them online unless a sum of money is paid.

I believe this will be the new wave of hacking / cyber-criminal fraud. Additionally, AI could be used to launch disinformation campaigns by creating fake news, coupled with believable (yet artificial) images and/or video. It would be difficult for the general public to determine what is real and what is fake, further fueling public division.

Privacy Issues – Recent reports of leaks of sensitive information and chat histories underline the urgent need for robust privacy and security measures in the development and deployment of generative AI technologies.

Generative AI models “scrape” data from public social media profiles, personal websites, public records, and even articles removed from search engine results.

Privacy problems may arise when "scraped" data from the web includes information that was meant to be password protected. Additionally, failure to protect personal data from scraping breaches website providers' obligation to safeguard user data and puts individuals at risk.

An employee may use AI to summarize meeting notes, only for the machine learning model to inadvertently ingest sensitive trade secrets or proprietary information.

Finally, AI may face big challenges when it comes to people's "right to be forgotten," which allows individuals to request that a company delete their personal information. While removing data from databases is comparatively easy, it is likely far more difficult to delete data from a trained machine learning model.

Inaccurate Information – Since AI draws on large sets of public data to interpret queries and generate responses, it will often answer inquiries with responses that sound or look plausible but are not factual. In some cases, generative AI has delivered answers that are false, irrelevant, or not even remotely logical.

Personal Injury / Copyright Infringement / Discrimination – Since I'm in the insurance industry, I can see a day when personal injury claims occur much more frequently, driven by the use of someone's likeness against their will.

You can easily see how words, images, music, etc. could be stolen – be it on purpose or not. If the AI is trained on copyrighted images or data for which prior approval was not obtained, the result could infringe on owners' copyrights.

There's also the risk of discrimination. For example, an AI algorithm used to evaluate a credit card application could deny an applicant based on their gender, which is against the law.


Many insurance products are ill-equipped to respond to these new technologies. While a cyber policy may respond to privacy issues, it may not respond to personal injury or discrimination claims, for example. I believe a deeper look at combined Errors & Omissions / Cyber policies will emerge as the best defense against this changing landscape.

Bottom Line: The full effect of generative AI is impossible to predict. The technology is growing exponentially, driven by advances in networks, machine learning, and CPU and graphics processor speeds. Generative AI has only really been in our lives for a few months, so there are many more unknowns. But the risks are real and perhaps even more severe than what we've experienced so far. It's a good idea to discuss these risks with your insurance agent and understand which of them are currently uninsured or underinsured.

Woodley B. Preucil, CFA

Senior Managing Director
