ENTERPRISE: Is it Safe to Use ChatGPT to Draft Patent Applications Yet?

Earlier this week OpenAI announced ChatGPT Enterprise, which "offers enterprise-grade security and privacy, unlimited higher-speed GPT-4 access, longer context windows for processing longer inputs, advanced data analysis capabilities, customization options, and much more." Could ChatGPT Enterprise solve the risks of disclosure or confidentiality presented when using generative AI in patent application drafting? Maybe.

Undoubtedly, some firms are already using AI for marketing, contract drafting, legal research, and even unbiased hiring. But what about IP? Colleagues are hard-pressed to find a single patent attorney who vocally touts consistent use of a true generative AI tool to draft patent applications. Folks have discussed performing prior art searches and using ChatGPT to describe the state of the art, but generative AI appears to be used minimally (or surreptitiously) in the online-visible IP attorney community right now.

Concerns about confidentiality and untimely disclosure have come to the forefront. Inventors share comprehensive details of their innovations to gain exclusive rights, but premature release of the wrong information is problematic. Other potential issues, such as inadvertent copyright infringement, "hallucinations" of fake facts, and/or AI potentially doing a bit too much "inventing" are also in need of consideration, but preventing early disclosure and maintaining confidentiality are the first and biggest hurdles.

No one doubts the power of a mature AI model in eventually helping to write a patent specification. There is much speculation, however, that interaction with any tool like ChatGPT during patent application preparation could be considered a public disclosure—either during input or output.

Sadly, it's a relatively low bar to qualify as a public disclosure. 35 U.S.C. § 102 states that "A person shall be entitled to a patent unless ... the claimed invention was ... in public use [] or otherwise available to the public before the effective filing date of the claimed invention." This differs from pre-AIA law, under which the relevant timing was the date of invention, but the pre-AIA court decisions are still considered applicable. For instance, an inventor may create a public use bar by showing the invention to, or allowing it to be used by, another person who is "under no limitation, restriction, or obligation of confidentiality" to the inventor. MPEP 2152.02(c) (citing American Seating Co. v. USSC Group, Inc., 514 F.3d 1262, 1267, 85 USPQ2d 1683, 1685 (Fed. Cir. 2008)).

If a drafting attorney inputs invention details into a generative AI platform where the input is accessible by another, it would likely constitute a public use bar, depending on who that person is and what confidentiality obligations she may have. Potentially, use of any cloud-based tool lacking adequate security and/or confidentiality obligations could be considered a public use.

Moreover, if the AI model absorbs the input invention details via retraining (or fine-tuning the model) and, when prompted by another, outputs some of that invention information, it might be considered public use or eventually a printed publication. While not everyone agrees it's so cut and dried, no one seems willing to challenge these ideas—or ask their professional liability insurers.

If a public use determination is based on a confidentiality obligation, consumer-level products likely fall short. In fact, several tech companies have banned use of ChatGPT to prevent breaches of confidentiality when using the AI tool. Using generative AI to draft a patent application would likely require revelation of confidential information, e.g., about an invention. Whether the AI platform stores this data or uses such information to retrain the model could be dangerous for confidentiality. Additionally, other protections such as attorney-client privilege and attorney work product could be in jeopardy, depending on how such AI tools are used and what data or files are input.

This is all to say that IP attorneys have yet to feel confident with generative AI's confidentiality and data security. That may soon change.

OpenAI's new ChatGPT Enterprise is apparently aiming to alleviate confidentiality concerns among businesses that have banned employees from using ChatGPT. ChatGPT Enterprise is new and distinct from ChatGPT Plus, the $20-per-month premium tier of the consumer web and mobile app, though both use GPT-4 as the underlying AI model. As for pricing, OpenAI will "work with everyone to figure out the best plan for them," COO Brad Lightcap told Bloomberg.

Many ostensible confidentiality concerns appear to be addressed with ChatGPT Enterprise. While never explicitly invoking "patents" or "intellectual property," OpenAI stresses that it refrains from training models on business data transmitted to ChatGPT Enterprise. Additionally, OpenAI assures that Enterprise usage data is off-limits for training, and it underscores that every interaction with ChatGPT Enterprise is both encrypted during transmission and stored in a SOC 2 compliant manner.

The privacy page for ChatGPT Enterprise reiterates that "You own your inputs and outputs (where allowed by law)," "You control how long your data is retained (ChatGPT Enterprise)," and "Custom models are yours alone to use, they are not shared with anyone else." These sound like very positive developments, though they also implicitly acknowledge the privacy shortcomings of the consumer-level ChatGPT.

It almost sounds like no one can access your data in ChatGPT Enterprise. OpenAI notes in a FAQ, however, that "[w]e may run any business data submitted to OpenAI's services through automated content classifiers" and that "[c]lassifiers are metadata about business data but do not contain any business data itself." Enterprise business data is said to be subject to human review only "for the purposes of resolving incidents, recovering end user conversations with your explicit permission, or where required by applicable law." This, too, seems positive, but a deeper dive into the case law may be warranted before assuming that such a review involves no confidentiality breach and nothing qualifying as a public use.

Moreover, care would still be needed to ensure that any private "fine-tuned" ChatGPT models trained on such input data do not regurgitate proprietary information that could be prematurely released, published, or otherwise disclosed, e.g., by someone else using that model. No one wants their invention details unintentionally leaked in a tweet generated by their colleague in marketing.

OpenAI says, "Your fine-tuned model is yours alone to use and is not served to or shared with other customers or used to train other models" and that "[d]ata submitted to fine-tune a model is retained until the customer deletes the file." While sharing a fine-tuned model within a company may seem advantageous and efficient, it may be too risky to share a fine-tuned model that has been trained on confidential data due to concerns of a later leak.

In another recent enterprise-level platform release, Microsoft has been offering Bing Chat Enterprise since July. Bing is built on a similar model to ChatGPT and pledges that "[y]ou can be confident that chat data is not saved, Microsoft has no eyes-on access to it, and it is not used to train the models." Microsoft further promises to not "retain prompts or responses from users in Bing Chat Enterprise," noting that "[p]rompts and responses are maintained for a short caching period for runtime purposes" and "[a]fter the browser is closed, the chat topic is reset, or the session times out, Microsoft discards all prompts and responses."

If data is never stored by the Bing platform, it would seem the input data may be innately maintained as confidential and not able to be disclosed. A quick look at the Bing Chat Enterprise interface reveals a reminder in green text saying, "Your personal and company data are protected in this chat" with a shield icon next to the input box.

A platform with immediate deletion and no model training may be the safest bet available to maintain confidentiality and avoid disclosures. Moreover, if clients and law firms come to regard Bing Chat Enterprise as a necessary tool akin to Office, SharePoint, or other workplace-friendly Microsoft products, it may indeed be trusted soon enough by IP attorneys to aid in patent application drafting. The price is expected to be an additional $5 (US) per user per month.

Microsoft is a key investor in OpenAI, yet their enterprise offerings may end up competing, and companies looking for assurances that their data will be kept private, secure, and confidential may be the biggest winners. In case those companies don't know any good IP attorneys, continued feedback from the platforms' law firm customers can only help.

While key steps toward safeguarding against public disclosure and breached confidentiality appear to have been made with these new enterprise solutions, they are probably still not secure enough for the IP attorney community to widely adopt. Perhaps patent practitioners will now start to focus more on other generative AI problems, like "hallucinations" or excessive "inventing," as the platforms' enterprise marketing about confidentiality may put clients at a bit more ease. Until there is some court ruling directly on this matter—or every firm gets an air-gapped room with a local-only AI model—attorneys handling intellectual property should probably avoid using generative AI to draft patent applications.

This is provided for informational purposes only and does not constitute legal or financial advice. To the extent there are any opinions in this article, they are the author’s alone. The strategies expressed are purely speculation based on publicly available information. The information expressed is subject to change at any time and should be checked for completeness, accuracy and current applicability. For advice, consult a suitably licensed attorney and/or patent professional.

Michael Schur, M.S.

Optics | Chemical Engineering| Patent Law

1 yr

Super interesting subject. While Microsoft "assures" that users' inputs and outputs are not recorded, one must also consider the data pathway through which the output is generated (aka the neural link). This is important because if the 'neural link' is stored, the output generated by the AI (in my opinion) is not truly proprietary; essentially, a pathway has been created that could potentially be applied to another user's input, even though ChatGPT did not save your input or its output. Can this also be a concern?

Ana H Kaur

Vice President Global IP Partners

1 yr

Thank you for sharing!!

David Speese

Venture Capital | IESE

1 yr

Curious to hear what Benjamin Le Tréquesser thinks here. Kevin Rieffel hope you’re well!

Sheetal Sharma

Business Manager at HLB HAMT| Audit | Tax (VAT/Corporate) | FSRA | Robotic Automation | SAP / SAGE Implementation | UAE Retail Expansion-B2B, B2C SME | ESG, Sustainability & Climate Change |

1 yr

Thanks for the article. I still believe we have a long way to go before we actually adapt to AI.

Woodley B. Preucil, CFA

Senior Managing Director

1 yr

Kevin Rieffel Very insightful. Thank you for sharing.
