When generative AI lies, who is liable?

Welcome to AI Decoded, Fast Company’s weekly LinkedIn newsletter that breaks down the most important news in the world of AI. I’m Mark Sullivan, a senior writer at Fast Company, covering emerging tech, AI, and tech policy.

This week, I’m focusing on the legal system’s looming challenge of determining whether a chatbot can be found guilty of defamation. I’m also looking at the thorny issues corporations face when considering the use of AI in key business functions.

If a friend or colleague shared this newsletter with you, you can sign up to receive it every week here. And if you have comments on this issue and/or ideas for future ones, drop me a line at [email protected].


ChatGPT-maker OpenAI sued for defamation in Georgia

In what appears to be the first defamation case against an AI chatbot, the nationally syndicated talk show host Mark Walters has sued OpenAI, claiming that the company’s ChatGPT tool generated false and harmful statements accusing him of embezzling money. He is seeking unspecified monetary damages from OpenAI.

According to the suit filed in Georgia state court, Fred Riehl, editor of the gun publication AmmoLand, had asked ChatGPT for information on Walters’ role in another, unrelated, lawsuit in Washington State. Per Walters’ filing, ChatGPT contrived a fictional part of the Washington lawsuit, saying that Walters had embezzled money from a special interest group for which he’d served as a financial officer. By doing so, ChatGPT “published libelous matter regarding Walters,” his lawsuit states. “OAI knew or should have known its communication to Riehl regarding Walters was false, or recklessly disregarded the falsity of the communication,” the suit reads.

There’s plenty of precedent for cases where humans defame humans, but precious little when an AI is causing the harm. “Defamation is kind of a new area,” says John Villafranco, a partner with the law firm Kelley Drye & Warren. “There are a lot of juicy issues to be worked out.” The Walters v. OpenAI suit may or may not prove to be a landmark test case for defamation-by-AI, but it likely will raise important legal questions that will be repeated in future cases involving generative AI tools.

Meta has open-sourced a new AI model that can reason and learn

Meta announced Tuesday that it is open-sourcing a new computer vision model that can better help machines interpret the visual world. Unlike other computer vision models that break down images pixel by pixel, the Image Joint Embedding Predictive Architecture (I-JEPA) understands and compares images as abstract representations that convey the meaning of the image. (While chatbots process words, computer vision AI interprets or classifies images.) It processes and compares millions of images in this way, and in doing so forms an internal model of how the world works, Meta says. This approach allows I-JEPA to learn much more quickly than other models, even while using less computing power, Meta says. The end result is a model that can accomplish complex tasks and more easily adapt to unfamiliar situations. Meta says the model is already turning in high scores across a number of computer vision tests.
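To make the pixel-versus-representation distinction concrete, here is a minimal, hypothetical sketch of the joint-embedding predictive idea, written in PyTorch. The encoder and predictor below are toy stand-ins (Meta’s actual architecture uses Vision Transformers operating on masked image blocks); the only point of the sketch is that the training loss is computed between abstract embeddings, not between pixels.

```python
# Minimal, illustrative sketch of the joint-embedding predictive idea behind
# I-JEPA, assuming PyTorch. Module names and shapes are simplified placeholders,
# not Meta's actual architecture.
import torch
import torch.nn as nn


class TinyEncoder(nn.Module):
    """Maps an image crop to an abstract embedding vector."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return self.net(x)


class Predictor(nn.Module):
    """Predicts the embedding of a hidden (target) region from a visible (context) region."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, z):
        return self.net(z)


context_encoder = TinyEncoder()
target_encoder = TinyEncoder()   # in I-JEPA this is a moving-average copy of the context encoder
predictor = Predictor()

# Two crops of the same image stand in for the "context" and "target" blocks.
context_crop = torch.randn(8, 3, 64, 64)
target_crop = torch.randn(8, 3, 64, 64)

# Key point: the loss compares *embeddings*, not pixels, so the model learns
# abstract representations of image content rather than reconstructing detail.
z_context = context_encoder(context_crop)
with torch.no_grad():
    z_target = target_encoder(target_crop)

loss = nn.functional.mse_loss(predictor(z_context), z_target)
loss.backward()
```

In the published I-JEPA design, the target encoder is an exponential-moving-average copy of the context encoder and several masked target blocks are predicted at once; this sketch collapses that to a single pair of crops.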

The model was developed by Meta’s large AI research organization, led by AI pioneer and Turing Award winner Yann LeCun. Because Meta’s foundational AI models are open source, they can be studied by other researchers and even used as the basis for other developers’ own models or apps.

Accenture is doubling the size of its AI practice

The consulting firm Accenture says it will invest $3 billion in AI over the next three years, and increase the size of its AI practice to 80,000 people. Accenture and other large consulting firms are seeing rapidly rising demand from Fortune 500 companies in a range of industries that need help comprehending and implementing AI tools. These companies are trying to understand how to ride the wave of new AI-model development. (Their curiosity is, in part, driven by fear of lagging behind competitors that might adopt AI products more quickly.) And they need the help of firms like Accenture and McKinsey to understand the risks of AI. For instance, many corporations fear that sending their proprietary data out to AI models hosted by third parties like OpenAI might present security or privacy risks, especially in heavily regulated industries, such as healthcare. Firms like Accenture can help corporations navigate the pros and cons of augmenting key business functions with AI and help them manage risk when they do.

Salesforce works to soothe customer worries over data security

Addressing security and privacy concerns has emerged as a major part of Salesforce’s AI pitch to large-enterprise customers. During a product event Monday, the company announced more details about how it will deliver customers the benefits of AI models while mitigating the risks. CEO Marc Benioff says corporations fear losing control of their proprietary data when they send it to AI models hosted by third parties, a condition he says is causing an “AI trust gap.”

To address this, Salesforce said its customers’ data will run through a “trust layer” in its new “AI Cloud” before it goes through any third-party AI models (a corporation might want to run its product data through an Anthropic or Cohere LLM to power an AI customer-service assistant, for example). Here, Salesforce secures the data, anonymizes sensitive or competitive data, and masks personal data for privacy. Salesforce executives stress that Salesforce doesn’t retain any of the data, and will even flag data that comes back from the model for toxicity or harmfulness.
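Salesforce hasn’t published the trust layer’s internals, but the general pattern it describes (mask sensitive data before the call, screen the response on the way back, store nothing in between) is straightforward to illustrate. Below is a hypothetical Python sketch; every function name, regex, and the toxicity check are illustrative placeholders, not Salesforce’s actual API.

```python
# Hypothetical sketch of a "trust layer" around a third-party LLM call.
# Nothing here comes from Salesforce; the names and rules are placeholders
# illustrating the pattern: mask, call, screen, retain nothing.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

BLOCKLIST = {"slur_example", "threat_example"}  # placeholder toxicity terms


def mask_pii(text: str) -> str:
    """Replace personal data with placeholders before the prompt leaves the platform."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}_MASKED]", text)
    return text


def looks_toxic(text: str) -> bool:
    """Crude stand-in for a real toxicity classifier run on the model's response."""
    return any(term in text.lower() for term in BLOCKLIST)


def call_third_party_model(prompt: str) -> str:
    """Placeholder for a request to an external LLM hosted by a vendor."""
    return f"Model response to: {prompt}"


def trusted_completion(raw_prompt: str) -> str:
    safe_prompt = mask_pii(raw_prompt)            # anonymize before sending
    response = call_third_party_model(safe_prompt)
    if looks_toxic(response):                     # flag harmful content coming back
        return "[Response withheld: flagged for review]"
    return response                               # nothing is stored along the way


if __name__ == "__main__":
    print(trusted_completion("Contact jane.doe@example.com about order 1234."))
```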

Amazon and Oracle are taking a similar model-agnostic and security-centric approach. Amazon announced in April that customers can access a number of popular generative AI models securely through the AWS cloud. Oracle announced this week that it will develop a number of new generative AI services in partnership with Cohere, which its SaaS customers can access securely through the Oracle cloud.



Nicole van Kuppeveld

Keynote Speaker, Author, Changemaker, Leadership Course Creator & Facilitator, Founder, Organizations by Design Inc. | MBA

1y

Seriously! It's up to whoever uses ChatGPT to review, correct, and edit the final response or product prior to publishing. Whoever authored the article (even if they used AI) is responsible for any libelous statements published. ChatGPT (which I use all the time) is still a ROBOT! Sheesh!

Vernon Bryce

Executive Development Consulting, Leadership & MBA Coach

1y

Yes, at last. Definitely it is advanced computing / advanced coding, with all its realities. One day there will be a breakthrough, when it will be both artificial and intelligent, but not yet. It's the "artificial" strand that is the most alarming. Is it time for reframing, or are we happy to continue the "stirring"? Is a better term something like "Breakthrough Coding" or "Breakthrough Apps," or is it more fun to keep fighting the old words and fears?

Dimitrios S. Dendrinos, Ph.D.

Emeritus Professor, the University of Kansas; Ph.D., University of Pennsylvania, Philadelphia, PA; Master's, Washington University, St. Louis, MO. Author, editor, researcher, teacher, thinker

1y

These are more or less benign, entertainment-related applications of the so-called AI field (I prefer the terms "advanced computing" in "software and programming") that go along with advanced hardware technologies. There are other areas of application of AI, along with fields of nanotechnology, weapons systems, cyberwarfare, corporate espionage, etc., that are not openly discussed for obvious security reasons. That is where malevolent and potentially pernicious applications of AI can make a significant difference in many people's lives.
