Is it safe to share business-critical information with AI tools providers?
This is a two-part question:
1. Does the AI provider say the right things, i.e., legally commit to protecting your critical information?
2. Is the provider technically capable of delivering on that commitment?
As an attorney, my focus is on the former. Infosec and technical teams focus on the latter, to determine the likelihood of information loss despite the best intentions of an AI provider that says the right things. Combining legal and technical input helps businesses make the best assessment of which tools are suitable to process their critical information.
In June 2023, I reviewed OpenAI's legal terms of service for ChatGPT to assess whether OpenAI was saying the right things about protecting critical information: e.g., do the terms say OpenAI will keep information confidential, not use it to train their models, and employ reasonable security measures? As noted in this post, the answer was "no," and I deemed ChatGPT not appropriate for processing critical business information (in-depth analysis here).
What a difference six months makes! On November 14, OpenAI published updated business terms, which I assess in depth in this Google Document with margin comments. The short take is OpenAI now says almost all of the things I wanted to hear as an attorney—that they:
- will keep business information confidential;
- will not use it to train their models; and
- will employ robust security measures to protect it.
Does this mean that it is absolutely safe to input your business-critical information? No, and their terms of service confirm this. While Section 5 of their terms describes robust security measures, it says those measures are "designed to" protect business information.
Those are critical words.
As with most, if not all, software, OpenAI's systems are "designed to" be secure, with no guarantee that OpenAI will achieve its design goal. Section 9.2 makes clear that customers cannot hold OpenAI legally responsible for security failures ("[w]e make no representations or warranties (a) that use of the Services will be uninterrupted, error free, or secure…").
In a world of imperfect security, it is neither reasonable nor customary to rely on a software provider's promises (words) alone to secure information. OpenAI is still saying the right things even as they disclaim liability for security incidents. Their stance is "market standard," and no other provider is likely to promise more. Their words set the correct intention, and they suffice to keep an honest provider honest.
This leaves their actions as the ultimate arbiter of whether it makes business sense to trust OpenAI with critical business information. How transparent are they about the actions they take to secure information? Do those actions add up to modern, layered security best practices? E.g., do they implement the principle of least privilege, perform employee background checks, structure infosec teams to report to a different line of management than engineering, carry appropriate insurance coverage, etc.?
Once a provider clears the first hurdle of saying the right things, all of these become critical questions for determining whether the provider is capable of doing—and likely to do—what they say.
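For readers less familiar with "least privilege," here is a minimal, hypothetical sketch in Python of what the principle looks like in practice (the roles and permissions are my own illustration, not anything drawn from OpenAI's systems): each role is granted only the permissions its duties require, and everything else is denied by default.

```python
# Hypothetical illustration of the principle of least privilege.
# Roles and permission names are invented for this example only.

from enum import Enum, auto


class Permission(Enum):
    READ_CUSTOMER_DATA = auto()
    WRITE_CUSTOMER_DATA = auto()
    DEPLOY_MODELS = auto()


# Each role is granted the smallest permission set its job requires.
ROLE_PERMISSIONS: dict[str, frozenset[Permission]] = {
    "support_engineer": frozenset({Permission.READ_CUSTOMER_DATA}),
    "ml_engineer": frozenset({Permission.DEPLOY_MODELS}),
}


def is_allowed(role: str, permission: Permission) -> bool:
    """Deny by default: only explicitly granted permissions pass."""
    return permission in ROLE_PERMISSIONS.get(role, frozenset())


if __name__ == "__main__":
    # A support engineer may read customer data but not deploy models.
    assert is_allowed("support_engineer", Permission.READ_CUSTOMER_DATA)
    assert not is_allowed("support_engineer", Permission.DEPLOY_MODELS)
    # An unknown role has no permissions at all.
    assert not is_allowed("contractor", Permission.WRITE_CUSTOMER_DATA)
```

The design choice worth noticing is the default: access is denied unless a permission was explicitly granted, which is the posture a diligence review would hope to find across a provider's internal systems.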
[This article is not legal advice. It's my personal opinion, and I'd love to hear yours too. Last updated 12.04.23.]