AI Contract Review Tools: Are Law Firms Taking on Hidden Legal and Financial Risks?

Are You Really in Control of Your AI-Powered Legal Tech?

AI contract review platforms like ThoughtRiver and Sirion Labs are becoming popular in legal practice, with firms such as KPMG expecting lawyers to have experience using them to cut costs and boost productivity.

But are these tools truly protecting your firm’s interests, or are they shifting risks onto you? After reviewing the Terms & Conditions of these platforms, I found contractual loopholes that could expose law firms to liability, data privacy risks, and financial losses.

Hidden Risks in AI Contract Review Software

1. AI Shifts Legal Liability to the Customer

Most AI vendors limit their liability to the amount of fees paid. If the AI makes a critical mistake, your firm—not the provider—bears the legal and financial consequences.

Example from Sirion’s Terms & Conditions:

“The maximum amount that either we or you can be held liable for… will be limited to the Fees you paid us for the Services in the twelve (12) months before the occurrence of the first event giving rise to liability.”


Source: Sirion Labs Inc. T&C as of 18 Feb 2025, Version 2

Why This is a Risk

  • If you’re delivering $3 million in services annually but paying only $100,000 for the AI subscription, and an AI error triggers a $10 million lawsuit, is that $100,000 liability cap enough to protect your business? (A rough exposure calculation follows this list.)
  • Limitation of liability is standard, but firms must weigh the true cost of the risks they accept in the name of productivity.
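
To make the gap concrete, here is a minimal back-of-the-envelope sketch in Python. The figures are the hypothetical ones from the bullet above, not real contract values, and the fees-paid cap mirrors the quoted Sirion clause.

```python
# Hypothetical figures from the example above -- illustrative only.
annual_ai_fees = 100_000             # fees paid to the AI vendor over the prior 12 months
claim_value = 10_000_000             # size of a lawsuit triggered by an AI error
annual_service_revenue = 3_000_000   # value of legal services the firm delivers each year

# Under a fees-paid cap, the vendor's maximum liability is the fees received
# in the twelve months before the event giving rise to the claim.
vendor_liability_cap = annual_ai_fees

# Everything above the cap stays with the firm (or its insurer).
firm_exposure = max(claim_value - vendor_liability_cap, 0)

print(f"Vendor liability cap:    ${vendor_liability_cap:,}")                  # $100,000
print(f"Exposure left with firm: ${firm_exposure:,}")                          # $9,900,000
print(f"Cap as share of claim:   {vendor_liability_cap / claim_value:.1%}")    # 1.0%
```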

What You Should Do

  • Negotiate liability clauses to hold AI providers accountable for system failures.
  • Require financial assurances, such as a financial undertaking, or update your firm's malpractice insurance to cover AI-related negligence and IP infringement risks.
  • Maintain highly skilled human oversight—AI should assist, not replace, manual contract review.
  • Mandate a 24-hour disclosure requirement for any data breach or unauthorised data use.


Source: Sirion Labs Inc. T&C as of 18 Feb 2025, Version 2 (clause restricting the upload of “Protected Information”)

Key Issues & Contradictions:

  • Conflicts with Purpose – AI contract review tools require protected documents, such as NDAs with settlement amounts and merger agreements with pricing, yet this clause prohibits uploading “Protected Information”, which contracts inherently contain.

  • No Liability for Data Misuse – If protected data is uploaded and misused, the provider takes no responsibility.

  • Access Suspension Risk – A suspected breach could result in suspended access, disrupting operations.

Law firms must demand clarity on how this aligns with the platform’s intended use before relying on it.


2. Your Client Data (IP) May Be Used to Train AI Models

Uploading contracts for AI review may grant the provider a perpetual licence to use your data for benchmarking, analysis, and AI training, often without clear safeguards.

Example from ThoughtRiver’s Terms & Conditions (IP Ownership Clause):

“The Customer grants ThoughtRiver an irrevocable, perpetual, worldwide, royalty-free license to store, display, use, copy, maintain, customise and provide such data as part of the Platform…”


Source: ThoughtRiver (Terms & Conditions)

Why This is a Risk

  • Your client’s confidential contracts and legal strategies could be used to train AI models, potentially exposing sensitive information.
  • If AI extracts deal terms from a high-value pharma acquisition contract, a data leak could reveal financial details to competitors, leading to insider trading risks.
  • Without strong safeguards, anonymisation alone may not prevent future misuse. The only reliable protection is Federated Learning, where AI models learn locally without sharing raw data, ensuring true data security and privacy (a minimal illustration of the idea follows this list). AI providers must prove this capability before claiming compliance.
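
For readers unfamiliar with the concept, here is a minimal, purely illustrative sketch of federated averaging on a toy linear model. It assumes three participating firms and plain NumPy; it is not any vendor's actual implementation, but it shows the core idea that only model parameters, never raw documents, leave each firm's infrastructure.

```python
# Toy sketch of federated averaging (FedAvg): raw data never leaves each firm;
# only model parameters are shared with the coordinator and averaged.
# Illustrative only -- real legal-AI systems would use production FL frameworks.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])          # "ground truth" for the toy regression task

def make_local_dataset(n=200):
    # Each firm's data stays on its own infrastructure.
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

def local_update(w, X, y, lr=0.1, epochs=5):
    # Gradient descent run locally; only the updated weights are returned.
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

firms = [make_local_dataset() for _ in range(3)]   # three participating firms
w_global = np.zeros(2)

for _ in range(10):                                # federated training rounds
    local_weights = [local_update(w_global.copy(), X, y) for X, y in firms]
    w_global = np.mean(local_weights, axis=0)      # coordinator averages the updates

print("Global model weights:", np.round(w_global, 3))  # converges towards [2.0, -1.0]
```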

What You Should Do

  • Negotiate explicit restrictions to prevent AI providers from using your data for training.
  • Demand contractual guarantees that your documents remain private and confidential.
  • Add conflict-of-interest safeguards and trading restrictions, for example prohibiting all employees of the legal AI provider, and their relatives, from buying, selling, or holding financial interests in any company in which the client firm is investing or conducting business.

3. You Might Be Liable for Data Breaches—Even If It’s Not Your Fault

Some AI platforms require customers to share costs of mitigating data breaches, even when the breach occurs on their servers.

Example from ThoughtRiver’s Terms & Conditions:

“The Customer shall pay ThoughtRiver’s reasonable costs for investigation, mitigation and remediation of each such Personal Data Breach.”


Source: ThoughtRiver T&C (A)


Why This is a Risk

  • If a security breach happens, your firm could be billed for the investigation and remediation (see T&C screenshot A).
  • Data breaches can lead to regulatory fines, reputational damage, and client lawsuits.
  • Under clause 13, you cannot pursue any other action to claim damages from a third party (see T&C screenshot C).
  • If the provider takes over the defence of a claim, you cannot seek a remedy from third parties and must give them complete control of the claim.
  • You agree to bear the reasonable costs of assisting with any such claim.


Source: ThoughtRiver T&C (B)
Source: ThoughtRiver T&C (C)

What You Should Do

  • Push back against breach cost-sharing clauses—AI providers must bear full responsibility for security failures.
  • Request proof of robust cybersecurity measures before signing up.
  • Retain your right to bring or defend claims against third parties and to seek damages. Do not allow your AI provider to shift liability onto you or require cost contributions while avoiding its own responsibility.


Conclusion

Law firms and organisations must look beyond flashy interfaces and cost-saving promises and focus on what really matters—fair terms and accountability.

I fully support AI in the legal domain and recognise its evolving potential. However, providers cannot offer a digital product that lacks accuracy, reliability, and safety while disowning liability for negligence, economic losses, and infringements—clients must have fair protection for their legal and business interests.

Demand compliance with the EU AI Act to enforce responsibility.






Disclaimer

The information in this article is based on company website terms as of 18 February 2025. This content is for awareness and educational purposes only and does not constitute legal, financial, or professional advice.
