The risks associated with generative AI are significant and rapidly evolving. A wide array of threat actors have already used the technology to create “deep fakes” and product knockoffs, and to generate artifacts that support increasingly complex scams.
ChatGPT and other tools like it are trained on large amounts of publicly available data. They are not designed to comply with the General Data Protection Regulation (GDPR), copyright law or other regulatory regimes, so it’s imperative to pay close attention to your enterprise’s uses of the platforms.
Oversight risks to monitor include:
- Lack of transparency. Generative AI models such as ChatGPT are unpredictable, and even the companies behind them do not always understand everything about how they work.
- Accuracy. Generative AI systems sometimes produce inaccurate or fabricated answers. Assess all outputs for accuracy, appropriateness and actual usefulness before relying on them or distributing them publicly.
- Bias. You need policies or controls in place to detect biased outputs and deal with them in a manner consistent with company policy and any relevant legal requirements.
- Intellectual property (IP) and copyright. There are currently no verifiable data governance and protection assurances regarding confidential enterprise information. Users should assume that any data or queries they enter into ChatGPT and its competitors will become public information, and we advise enterprises to put controls in place to avoid inadvertently exposing IP (a minimal illustration of such a control follows this list).
- Cybersecurity and fraud. Enterprises must prepare for malicious actors’ use of generative AI systems for cyber and fraud attacks, such as those that use deep fakes for social engineering of personnel, and ensure mitigating controls are put in place. Confer with your cyber-insurance provider to verify the degree to which your existing policy covers AI-related breaches.
- Sustainability. Generative AI uses significant amounts of electricity. Choose vendors that reduce power consumption and leverage high-quality renewable energy to mitigate the impact on your sustainability goals.
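To make the IP-exposure control mentioned above concrete, the sketch below redacts sensitive patterns from a prompt before it leaves the enterprise boundary. This is a minimal illustration under stated assumptions, not a vetted implementation: the function and constant names (redact_prompt, REDACTION_PATTERNS), the example patterns and the internal.example.com hostname are all hypothetical; a production control would rely on an enterprise data-loss-prevention service and a maintained ruleset.

```python
import re

# Illustrative patterns only -- these are assumptions for demonstration,
# not a complete or vetted DLP ruleset.
REDACTION_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "internal_host": re.compile(r"\b[\w-]+\.internal\.example\.com\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace sensitive matches with placeholders before the prompt is
    sent to an external generative AI service; return the redacted text
    and a list of the pattern labels that fired."""
    findings = []
    for label, pattern in REDACTION_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt, findings

if __name__ == "__main__":
    raw = "Summarize the outage report from db1.internal.example.com for jane.doe@corp.com"
    safe, hits = redact_prompt(raw)
    print(safe)   # placeholders replace the hostname and email address
    print(hits)   # ['email', 'internal_host'] -- labels only, for audit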
Gartner also recommends considering the following questions:
- Who defines responsible use of generative AI, especially as cultural norms evolve and social engineering approaches vary across geographies? Who ensures compliance? What are the consequences for irresponsible use?
- In the event something goes wrong, how can individuals take action?
- How do users give and remove consent (opt in or opt out)? What can be learned from the privacy debate?
- Will using generative AI help or hurt trust in your organization — and institutions overall?
- How can we ensure that content creators and owners keep control of their IP and are compensated fairly? What should new economic models look like?
- Who will ensure proper functioning throughout the entire life cycle, and how will they do so? Do boards need an AI ethics lead, for example?
Finally, it’s important to continually monitor regulatory developments and litigation regarding generative AI. China and Singapore have already put in place new regulations regarding the use of generative AI, while Italy temporarily banned ChatGPT over privacy concerns. The U.S., Canada, India, the U.K. and the EU are currently shaping their regulatory environments.