AI Terms of Service Exposed: How DeepSeek and Other AI Platforms Put U.S. Users at Risk


Introduction

As AI-powered platforms become a staple in our digital lives, understanding their privacy policies, data security measures, and legal implications is critical—especially for U.S. users who may be unknowingly exposing themselves to significant risks. This report examines the Terms of Service (TOS) of DeepSeek AI, a China-based AI platform, and contrasts it with leading U.S.-based AI services, including OpenAI's ChatGPT, Google Gemini, Meta AI, Perplexity AI, Claude AI, Google NotebookLM, and xAI’s Grok.

Key concerns include data collection, retention, government surveillance, legal exposure, and user rights under U.S. law. DeepSeek, in particular, raises red flags as it operates under Chinese jurisdiction, meaning U.S. users’ data could be subject to foreign government monitoring with little to no legal recourse. In comparison, U.S. platforms, while not perfect, operate under more established regulatory frameworks like the California Consumer Privacy Act (CCPA) and other consumer protection laws.

To ensure a thorough, well-supported analysis, this report was compiled using OpenAI’s Deep Research, leveraging legal, regulatory, and cybersecurity insights to break down the risks, legal limitations, and best practices for AI users. By highlighting the contrast in policies and protections, this report provides actionable steps for users to safeguard their privacy, understand their rights, and make informed decisions about which AI platforms to trust.


DISCLAIMER: This communication does not provide legal, financial, tax or investment advice. Always do your own due diligence and consult with an experienced professional in your state, region or country.

1. Privacy & Data Security

Data Collection & Storage: DeepSeek AI’s policies reveal aggressive data collection and offshore storage. All user queries and conversations are sent to servers in China (DeepSeek’s Popular AI App Is Explicitly Sending US Data to China | WIRED). DeepSeek scoops up not just your prompts and account info, but also detailed device data – even “keystroke patterns or rhythms” (DeepSeek’s Popular AI App Is Explicitly Sending US Data to China | WIRED) – effectively monitoring what and how you type. By contrast, U.S.-based platforms also collect extensive data (e.g. prompts, usage, device info, IP address), but generally store it on U.S. or regional servers. OpenAI and Anthropic log user prompts to improve their models, but OpenAI now offers controls (ChatGPT’s “Incognito” mode or data opt-out) so conversations aren’t retained long-term for training (Terms of use | OpenAI) (Privacy policy | OpenAI). Perplexity.ai similarly lets users opt out of having search queries used to train models via a privacy toggle (What data does Perplexity collect about me? | Perplexity Help Center). These controls give users some say in data retention, a feature notably absent in DeepSeek.
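
The “keystroke patterns or rhythms” mentioned above deserve a concrete illustration. Below is a minimal, illustrative Python sketch (an assumption for demonstration, not DeepSeek’s actual code) showing why key-timing telemetry is sensitive: the delays between key presses form a behavioral fingerprint that can help re-identify a typist across sessions, even without the content of what was typed.

```python
# Illustrative sketch only (not DeepSeek's actual code). "Keystroke
# rhythm" telemetry is sensitive because the delays between key presses
# form a timing profile that is fairly stable per person, so it can help
# re-identify a user even without the content of what they typed.
from statistics import mean, stdev

# Hypothetical (key, timestamp-in-seconds) events captured while typing.
events = [("d", 0.000), ("e", 0.142), ("e", 0.260), ("p", 0.395),
          ("s", 0.540), ("e", 0.655), ("e", 0.801), ("k", 0.930)]

# "Flight time" = delay between consecutive key presses.
flights = [t2 - t1 for (_, t1), (_, t2) in zip(events, events[1:])]

print(f"timing profile: mean={mean(flights) * 1000:.0f} ms, "
      f"stdev={stdev(flights) * 1000:.0f} ms")
```

A profile like this reveals nothing about what you typed, yet it can still link separate sessions to the same person, which is why privacy reviewers single the practice out.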


Retention & Deletion: DeepSeek’s privacy policy says you can delete your chat history in-app and even delete your account, but it’s unclear if this truly erases data from their servers. The Wired review confirms DeepSeek stores everything on PRC servers (DeepSeek’s Popular AI App Is Explicitly Sending US Data to China | WIRED), suggesting that deletion may only hide data from the user’s view. In contrast, OpenAI’s policy states that personal data is retained “only as long as we need… or for other legitimate business purposes”, and that ChatGPT chats can be auto-deleted after 30 days when history is off (Privacy policy | OpenAI). Users can also delete accounts or specific conversations. Perplexity’s FAQ notes data is retained while your account is active and deleted upon account deletion (with some delay) (What data does Perplexity collect about me? | Perplexity Help Center) (What data does Perplexity collect about me? | Perplexity Help Center). Google’s services (like Bard/Gemini) tie into Google’s global privacy policy – data may be stored indefinitely unless you delete it, but Google provides tools to export or erase your activity. Meta’s generative AI features are new, but Meta claims to build with “privacy safeguards” and has introduced a Generative AI Privacy Guide with transparency on data usage (Privacy Matters: Meta’s Generative AI Features | Meta). They emphasize not using private messages to train AI models (Privacy Matters: Meta’s Generative AI Features | Meta) and allow users to delete AI chat threads with special commands (Privacy Matters: Meta’s Generative AI Features | Meta).

Surveillance & Third-Party Sharing: One of the starkest differences is government access. DeepSeek explicitly warns it will “share information with law enforcement agencies, public authorities, and more when required to do so”, and all data is accessible by its corporate group in China (DeepSeek’s Popular AI App Is Explicitly Sending US Data to China | WIRED). Under China’s cybersecurity laws, companies must cooperate with state intelligence efforts (DeepSeek’s Popular AI App Is Explicitly Sending US Data to China | WIRED). This means U.S. user data on DeepSeek could be subject to Chinese government surveillance with few hurdles. U.S.-based AI providers also comply with law enforcement requests, but those are governed by U.S. law (requiring warrants, subpoenas, etc.). OpenAI, for example, says it may share personal data with government authorities only if legally required (Privacy policy | OpenAI) – a notable distinction from the blanket cooperation mandated in China.

All platforms share data with third-party service providers (for cloud hosting, analytics, payment processing, etc.), but DeepSeek again stands out. Researchers found DeepSeek’s app quietly sending analytics data to Chinese tech giants like Baidu and even to ByteDance (TikTok’s owner) (DeepSeek’s Popular AI App Is Explicitly Sending US Data to China | WIRED) (DeepSeek’s Popular AI App Is Explicitly Sending US Data to China | WIRED). DeepSeek also allows advertisers to feed it data (like mobile IDs and hashed emails) to help track users across the web (DeepSeek’s Popular AI App Is Explicitly Sending US Data to China | WIRED). By contrast, OpenAI’s privacy policy is explicit that it does not “sell” or “share” personal data for behavioral advertising (Privacy policy | OpenAI). Similarly, Perplexity states it “does not sell, trade, or share your personal information with third parties,” except for necessary service providers (What data does Perplexity collect about me? | Perplexity Help Center). Google is an outlier among U.S. AI platforms in that it does leverage user data for advertising: Google’s terms acknowledge they “may display targeted advertisements” and use your Bard/Gemini queries and profile info to personalize ads (unless you opt out of ad personalization) (Google Bard - Full Privacy Report). In other words, Google treats AI interactions like any other Google activity – part of the data stream feeding its ad ecosystem, though it emphasizes it doesn’t sell the data to outsiders (Google Bard - Full Privacy Report). Meta’s AI interactions are governed by Meta’s broader data policy, meaning your chats with “Meta AI” are used “consistent with Meta’s Privacy Policy” (Warning: If AI social media tools make a mistake, you're responsible). Meta has even indicated it might share certain AI chat queries with external partners (e.g. a search engine) to fetch real-time information (Privacy Matters: Meta’s Generative AI Features | Meta), raising additional monitoring concerns.

Security Measures: Most TOS have generic assurances about security, but DeepSeek’s execution appears shaky. Its TOS promises to take “necessary measures (not less than industry practices) to ensure cybersecurity” (DeepSeek Terms of Use). In practice, a security audit found “multiple security and privacy issues” in DeepSeek’s mobile app (NowSecure Uncovers Multiple Security and Privacy Flaws in ...). (For example, vulnerabilities reportedly exposed user data to interception.) U.S. AI providers generally invest heavily in security – employing encryption in transit and at rest, secure cloud infrastructure, and robust access controls – both to protect user data and to comply with laws like California’s data security requirements. OpenAI, Google, Anthropic, etc., publish security whitepapers and even bug bounty programs. Nonetheless, no system is immune: even ChatGPT suffered a bug that briefly exposed user chat titles and payment info to others in March 2023. Platforms are aware that a data breach could trigger liability under state data-breach notification laws or FTC enforcement. Thus, while all claim to secure user data, DeepSeek’s track record and Chinese jurisdiction heighten the risk for data security lapses compared to its U.S. counterparts.

2. Legal Rights & Exposure

Governing Law & Jurisdiction: The jurisdiction specified in the TOS can hugely affect a U.S. user’s legal rights. DeepSeek’s Terms make this crystal clear: any dispute is governed by the laws of the People’s Republic of China (mainland) (DeepSeek Terms of Use), and must be brought to a court in Hangzhou (where DeepSeek’s parent company is registered) (DeepSeek Terms of Use). This effectively places U.S. users under Chinese law for any legal issues with DeepSeek. By contrast, all the U.S.-based AI services choose U.S. law (often California) and local forums. OpenAI’s Terms are governed by California law with exclusive venue in San Francisco courts – except that OpenAI compels arbitration for most disputes (more on that shortly) (Terms of use | OpenAI). Anthropic (Claude) also chooses California law and explicitly says disputes will be resolved in state or federal courts in San Francisco (Consumer Terms of Service \ Anthropic). Google’s standard Terms of Service likewise invoke California law; Google has a known arbitration clause for U.S. consumers (with an opt-out window) and otherwise designates Santa Clara County, CA courts for litigation. Meta’s user terms (covering Facebook, Instagram, and thus their AI features) also use California law and require any lawsuits to be filed in California (historically the Northern District of CA or CA state court) – and in recent updates Meta has added clauses specific to AI. In sum, U.S. users of OpenAI, Google, Meta, Anthropic, Perplexity, or xAI are under U.S. law (generally California or Delaware) in the event of disputes, whereas DeepSeek users would be subject to Chinese law. This difference is monumental: Chinese law and courts offer none of the consumer protections U.S. users might expect (and proceedings would be in Chinese, under a legal system where the government or company may have home-field advantage).


Dispute Resolution & Class Actions: Many tech companies use arbitration clauses and class-action waivers to limit users’ ability to sue. OpenAI’s TOS includes a mandatory arbitration agreement and a class-action waiver (Terms of use | OpenAI). Users must attempt informal resolution, then if needed go to arbitration (likely under AAA rules), waiving any right to sue in court or to join a class action (Terms of use | OpenAI) (Terms of use | OpenAI). (OpenAI carves out small claims and injunctive relief for IP misuse as exceptions) (Terms of use | OpenAI). Similarly, xAI’s Grok consumer TOS “REQUIRES the parties to arbitrate their disputes and limits the manner in which you can seek relief” (Terms of Service - Consumer | xAI) – i.e. no class or representative actions, with only individual arbitration permitted. Meta’s various terms also prohibit class actions; Meta has arbitration on an individual basis in its supplemental terms, meaning you “may bring a claim only on your own behalf” (Supplemental Facebook View Terms of Service - Meta Store). Google’s terms (for consumers) have a binding arbitration clause with an opt-out and also disallow class proceedings, which is standard for Google services. Notably, Anthropic’s Claude does not force arbitration: their latest consumer terms say disputes will go to court in SF, implying users retain their right to sue in court individually (Consumer Terms of Service \ Anthropic). That also means class actions against Anthropic are not contractually barred (though the usual hurdles to class certification in court still apply). Perplexity follows the industry trend as well: the Terms of Service linked from its help center include a binding arbitration clause and class-action waiver, so Perplexity users, like those of OpenAI and xAI, are generally limited to individual arbitration.

For U.S. citizens, arbitration clauses mean waiving the right to a jury trial and usually limiting discovery, which can disadvantage individuals. On the flip side, Chinese jurisdiction (DeepSeek) effectively means no practical legal recourse – it’s unrealistic for an average U.S. user to pursue a lawsuit in China for a privacy or contract violation, and class relief is off the table entirely. This lack of remedy is a key risk: if DeepSeek mishandles your data or causes you harm, you likely have no effective way to hold them accountable. U.S. companies’ arbitration requirements are also restrictive, but at least an arbitrator could award damages (in theory) under U.S. law, and regulators like the FTC or state attorneys general can still intervene on the public’s behalf despite arbitration clauses.

User Rights Under U.S. Laws: U.S. users have certain statutory rights that TOS cannot waive – and some TOS explicitly acknowledge this. DeepSeek’s terms include a generic savings clause that “nothing in these terms shall affect any statutory rights that you cannot waive as a consumer” (DeepSeek Terms of Use). However, enforcing U.S. statutory rights (like rights under California’s Consumer Privacy Act, or unfair/deceptive practice laws) against a Chinese entity is extremely difficult in practice. U.S. services are directly subject to U.S. laws like the California Consumer Privacy Act (CCPA) or state AI/privacy laws. Indeed, OpenAI’s privacy policy enumerates user rights “depending on where you live”, listing the right to know, delete, and correct personal data, and freedom from discrimination for exercising these rights (Privacy policy | OpenAI) – this language aligns with CCPA/CPRA rights for California residents and GDPR rights for Europeans. OpenAI provides a Data Subject Access Request portal for users to exercise these rights (Privacy policy | OpenAI). Perplexity and Anthropic (and likely Google and Meta) offer similar processes for users to request data deletion or access. For example, OpenAI now allows anyone globally to opt out of having their content used to train models (Terms of use | OpenAI) – a response to regulatory pressure. DeepSeek’s policies, on the other hand, make no mention of CCPA or GDPR rights; they do provide an email for inquiries, but there’s little assurance that a U.S. user could successfully invoke rights to delete or access data held in China.

One area of U.S. law particularly relevant is surveillance and foreign intelligence. U.S. citizens’ data stored in the U.S. can be accessed by the U.S. government under laws like the CLOUD Act or FISA, but there are legal processes and some oversight. With DeepSeek in China, U.S. users expose themselves to a foreign government’s surveillance with none of the transparency or recourse they might have at home. This has raised national security flags – U.S. lawmakers warn that China’s DeepSeek app could expose U.S. users to surveillance and even censorship (DeepSeek AI raises national security concerns, U.S. officials say), likening it to the concerns around TikTok. In fact, the U.S. military and some federal agencies have already banned DeepSeek on government devices (House lawmakers push to ban AI app DeepSeek from US ... - CBS 42), and members of Congress have pushed to bar the app in sensitive networks (House lawmakers push to ban AI app DeepSeek from US ... - CBS 42).

Liability & Limitation of Remedies: All the AI platform TOS aggressively disclaim liability for the AI’s outputs and limit the remedies users can seek. DeepSeek’s terms bluntly state that users assume the “risks arising from reliance” on output accuracy or suitability (DeepSeek Terms of Use). DeepSeek provides its service “as is” with no warranties, just like the others (DeepSeek Terms of Use) (DeepSeek Terms of Use). Moreover, DeepSeek shifts legal risks to the user: if you misuse the service or violate laws, you are solely responsible for any third-party claims and must indemnify DeepSeek for any losses, including legal fees, that result (DeepSeek Terms of Use). This means if, for example, you generated content that infringes someone’s rights and they sue DeepSeek, DeepSeek can seek to recover all costs from you. U.S. AI providers have similar indemnity clauses (usually requiring users to indemnify the company for third-party claims stemming from the user’s content or misuse). For instance, OpenAI’s terms require users to hold OpenAI harmless from claims arising from use of the service in violation of the terms or law.

All platforms cap their liability heavily. Anthropic’s clause is typical: no indirect or consequential damages and total liability capped at the amount you paid (if any) for the service (Consumer Terms of Service \ Anthropic). OpenAI and others also exclude liability for lost profits, data loss, or punitive damages. Meta’s new AI terms explicitly tell users they – not Meta – are responsible for any actions taken based on AI outputs, and that the AI may be wrong or harmful (Warning: If AI social media tools make a mistake, you’re responsible | Technology | EL PAíS English) (Warning: If AI social media tools make a mistake, you’re responsible | Technology | EL PAíS English). Meta and LinkedIn both warn that generative AI content might be inaccurate or misleading and urge users to vet it before sharing (Warning: If AI social media tools make a mistake, you’re responsible | Technology | EL PAíS English). In effect, if an AI gives bad advice that causes you loss (say, financial or health-related), the companies have erected multiple legal shields to avoid responsibility. Users would face an uphill battle arguing product liability or negligence, given these contractual waivers and the novelty of AI services (and OpenAI’s arbitration clause would keep such disputes out of public courts anyway).

One emerging issue is defamation or privacy harms caused by AI outputs about individuals. OpenAI was recently hit with a complaint after ChatGPT falsely accused a radio host of embezzlement – but OpenAI will point to its disclaimers that outputs may be false and not to be relied upon (Terms of use | OpenAI). Unlike social media, where platforms cite Section 230 protections, AI companies may not have the same shield since they algorithmically generate content. This legal gray area is being tested, but for now, the user agreements place the burden on users to fact-check and use outputs responsibly. The upshot: U.S. users have very limited remedies if AI outputs are flawed or harmful, and for DeepSeek users, the practical remedies are virtually nil due to jurisdiction and strong liability waivers.

3. Comparison of AI Platform Policies

When comparing DeepSeek with OpenAI, Meta, Google’s Gemini (Bard), Perplexity, Claude, NotebookLM, and Grok, some clear patterns emerge:


  • Use of Data for Training/Improvement: Every AI platform reviewed uses user input content to improve the service by default. DeepSeek’s policy openly says it may “review, improve, and develop the service… by monitoring interactions…and by training and improving our technology” (DeepSeek’s Popular AI App Is Explicitly Sending US Data to China | WIRED). OpenAI likewise uses ChatGPT conversations to train models unless you opt out (Terms of use | OpenAI), and Meta uses interactions with its AI (like prompt feedback) to refine its systems (Privacy Matters: Meta’s Generative AI Features | Meta). Google uses Bard conversations to better its answers and undoubtedly folds that data into its broader AI research. Perplexity uses your searches and feedback to improve its AI, unless you disable the AI Data Usage setting (What data does Perplexity collect about me? | Perplexity Help Center). In this regard, DeepSeek is behaving similarly to the pack – except that the legal and geopolitical context makes the stakes higher. When OpenAI uses your prompt about, say, a medical question to improve its model, that data is kept internally (and you can now opt out or delete it). When DeepSeek does so, that prompt could reside in China and potentially be analyzed or even intercepted by actors beyond just the company’s R&D team.
  • Content Moderation & Censorship: While not directly a TOS privacy issue, it’s worth noting: DeepSeek, being based in China, adheres to Chinese content rules. Users have observed that it censors queries about sensitive topics (e.g. Tiananmen Square) and produces pro-China slanted answers (DeepSeek’s Popular AI App Is Explicitly Sending US Data to China | WIRED). U.S. AI platforms also have usage policies (no hate, no illegal behavior, etc.) and may refuse or filter certain prompts, but the scope is narrower (aimed at preventing abuse or disinformation, not enforcing state ideology). This hints at a broader risk for U.S. users of DeepSeek: not only is your data exposed, but the information you receive might be biased by a foreign government’s constraints, with no transparency or recourse.
  • Liability and Indemnity: All platforms disclaim warranties and limit liability, but DeepSeek’s terms are especially onerous in pushing all risk to the user. DeepSeek requires users to indemnify it for an exhaustive list of potential costs (from attorney fees to administrative fines) arising from any misuse (DeepSeek Terms of Use). U.S. companies’ indemnity clauses tend to be slightly more scoped (e.g. covering violations of IP or policies by the user, but not making the user pay for the company’s own regulatory fines). Also, DeepSeek’s lack of any promise that outputs won’t infringe others’ rights combined with its indemnity clause could put users in a nasty bind – imagine the AI inadvertently delivers text that violates someone’s copyright or defames someone; the user who shared or acted on it could be on the hook. Other AIs like OpenAI and Meta also say they “make no guarantees” about accuracy, non-infringement, or safety of outputs (DeepSeek Terms of Use) (Warning: If AI social media tools make a mistake, you’re responsible | Technology | EL PAíS English), but since they operate in the U.S., a user at least has consumer protection laws and potential tort law to turn to if something egregious happened (even if winning is unlikely). With DeepSeek, those U.S. legal avenues are contractually and practically closed.

In short, DeepSeek’s TOS is the most extreme in terms of jurisdiction (China), data export (all data to PRC), and lack of user remedy – which significantly heightens U.S. users’ exposure. The mainstream U.S. AI platforms have broadly similar terms to each other: California law, arbitration (except Anthropic), heavy disclaimers, and commitments to user privacy that are imperfect but evolving. One unique point: Anthropic’s choice not to compel arbitration and to operate as a Public Benefit Corporation might indicate a slightly more user-friendly stance legally. Meanwhile, Google’s integration of AI into its ad machine sets it apart in privacy impact (using your AI interactions for marketing profiling unless you opt out) (Google Bard - Full Privacy Report), whereas OpenAI/Anthropic/Perplexity currently do not use your data for advertising at all (Privacy policy | OpenAI). Meta lies somewhere in between – it doesn’t sell data, but it will leverage everything you do on its platforms (AI chats included) to keep you engaged and infer your interests, and its entire business is ad-driven.

4. Implications for U.S. Citizens

For Americans using these AI services, the terms translate to different levels of risk and rights:

  • DeepSeek AI (Chinese): Privacy risks are highest. Your personal data (queries, account info, device metadata) is immediately exported to China (DeepSeek’s Popular AI App Is Explicitly Sending US Data to China | WIRED) and can be accessed by the Chinese government without your knowledge (DeepSeek’s Popular AI App Is Explicitly Sending US Data to China | WIRED). U.S. laws like the CCPA or even HIPAA (if you asked health questions) can’t protect you; nor can U.S. agencies easily intervene. You effectively waive U.S. legal protections by agreeing to PRC jurisdiction (DeepSeek Terms of Use). If DeepSeek violates its own terms or suffers a breach, your only theoretical recourse is a lawsuit in China – an option out of reach for almost all individuals. There is also the personal safety angle: if you input politically sensitive information (even personal opinions), it’s conceivable that data could be linked to you. While that may not impact you unless you travel to a region where China has influence, it’s a consideration (echoing how dissidents worry about data TikTok collects). Additionally, as a user you have no guarantee of unbiased or uncensored information from DeepSeek, and you might even be unknowingly influenced by propaganda in its answers (DeepSeek’s Popular AI App Is Explicitly Sending US Data to China | WIRED). U.S. officials have explicitly warned that DeepSeek could be a channel for foreign surveillance and misinformation (DeepSeek AI raises national security concerns, U.S. officials say). From a liability standpoint, if something goes wrong (say DeepSeek’s advice causes a financial loss or it outputs illegal content you act on), you’ll find DeepSeek disclaims all responsibility and you have no U.S. legal grounds to stand on.


  • OpenAI, Claude, Perplexity, Google, Meta, Grok (U.S.-based): Privacy risks still exist but are easier to mitigate. Your data stays mostly within jurisdictions that have privacy oversight. For instance, if ChatGPT misuses your data, you could complain to the FTC or a state Attorney General under U.S. privacy or consumer protection laws. You also typically have some product features to control data (deleting chat history, etc.), and these companies publish transparency reports. However, U.S. users must remember these AIs are data-hungry by design – everything you type might be retained and later reviewed by human moderators or used to train models unless you opt out (Terms of use | OpenAI). And all of them warn you not to share sensitive personal information. In fact, privacy experts strongly advise against inputting any private data into such AI bots (DeepSeek’s Popular AI App Is Explicitly Sending US Data to China | WIRED), because once submitted, you can’t fully retract it. The OpenAI and Perplexity model of opt-out improves privacy, but it’s opt-out, not opt-in. On legal rights, while you theoretically could sue OpenAI or Meta in a U.S. court, in practice you’ll likely be steered into arbitration (where large-scale relief is limited). You also might be bound by liability caps – e.g. if a bug in Claude leaked your private chats and caused you harm, Anthropic’s terms cap liability to what you paid (often $0 for a free user) (Consumer Terms of Service \ Anthropic). Still, U.S. users have some remedies: data breach notification laws would force disclosure if your data got hacked; and if an AI platform engaged in unfair or deceptive practices, the FTC or state regulators could impose penalties. We’ve seen this with OpenAI – Italy’s regulator forced changes, and the U.S. FTC has opened an inquiry into OpenAI’s data practices under consumer protection law (no outcome yet).
  • Exposure to Foreign Surveillance: Only DeepSeek (among those compared) directly implicates a foreign government. Using it as a U.S. citizen could effectively extend Chinese surveillance to you. Even if you think “I have nothing to hide,” the mere fact your usage is visible to a foreign power is a loss of privacy. By comparison, using U.S. AI services keeps your data under U.S. jurisdiction – not immune to government access (the U.S. government can serve warrants on OpenAI, etc.), but with more legal process. Moreover, U.S. companies cannot simply hand data to foreign governments without legal process; if they did, they’d face significant backlash and legal liability.
  • Lack of Legal Recourse: With most AI TOS, you waive the right to sue in court or join a class action (except in limited scenarios). This means as an individual, your legal remedies for harm are limited to arbitration or small claims court. If a large number of users are wronged (say a data leak or a harmful AI glitch), a class action – typically a powerful tool for consumers – is likely waived by these terms (Terms of use | OpenAI). U.S. citizens should understand that by using, e.g., ChatGPT or Meta’s AI, they are agreeing not to band together in court if something goes awry. DeepSeek’s bar is even higher: you’d have to litigate abroad individually.
  • Contract Enforceability: One question is whether such TOS provisions would hold up. Extremely one-sided terms (especially those involving foreign law for a consumer service) might be deemed unconscionable or unenforceable in U.S. courts. However, since DeepSeek has no U.S. presence, it’s hard to even get a U.S. court to consider the issue. For U.S. companies, arbitration clauses are generally enforceable under the FAA, though there’s debate when it comes to harms that were not foreseeable when the user agreed (e.g., AI defamation cases may test the boundaries). But practically, users should assume these terms will be enforced as written.

Recommendations for Users: Given these implications, U.S. users should take proactive steps to protect themselves:

  • Avoid exposing sensitive data: Treat any AI chat like a public forum. Do not share personal, financial, or confidential information with the AI (DeepSeek’s Popular AI App Is Explicitly Sending US Data to China | WIRED). This includes refraining from inputting full names, addresses, social security numbers, company secrets, or anything you wouldn’t want leaked. Even if the company is U.S.-based and trustworthy, breaches happen and data could be repurposed in training datasets. (A minimal redaction sketch follows this list.)
  • Use privacy controls: Take advantage of settings to limit data use. For ChatGPT, turn off chat history when appropriate (OpenAI then retains data only 30 days for abuse monitoring) (Privacy policy | OpenAI). In Perplexity, toggle off AI Data Usage so your queries aren’t used to train models (What data does Perplexity collect about me? | Perplexity Help Center). Check if Meta’s AI has a “delete conversation” command (they have indicated such commands exist (Privacy Matters: Meta’s Generative AI Features | Meta)) and use it. If you use Google’s Bard/Gemini, review your Google Activity Panel and consider disabling ad personalization (Google Bard - Full Privacy Report). Always log out or clear your history if you’re using a public device.
  • Stay informed on policy changes: AI is evolving fast, and so are the terms. Companies often update their TOS with little notice (Meta and LinkedIn just updated theirs to add AI disclaimers on Jan 1, 2025 (Warning: If AI social media tools make a mistake, you’re responsible | Technology | EL PAíS English) (Warning: If AI social media tools make a mistake, you’re responsible | Technology | EL PAíS English)). Keep an eye on emails or banners announcing policy changes. It’s wise to at least skim these updates, since they might affect how your data is used. If a platform introduces an unfavorable term (like expanding data sharing), you might choose to stop using it. For instance, when X (Twitter) updated its privacy policy to allow using all user content to train AI, users concerned about that change could choose to opt out or leave the platform (Grok AI and Privacy: What You Should Know - Internxt Blog).
  • Leverage legal rights where possible: If you’re in California (or any state with similar laws), exercise your CCPA rights – request a copy of your data from the AI service, or ask for deletion, using the provided channels (Privacy policy | OpenAI). Even if you’re not in CA, many companies will honor deletion or access requests as a general practice. This can reduce your data footprint. Also, if you feel an AI company mishandled your data or violated its promises, report it to authorities – e.g., file a complaint with the FTC, your state attorney general, or consumer protection agency. Individual complaints can spur investigations (as seen by the Italian action against OpenAI which started due to privacy concerns).
  • Consider trusted and local alternatives: For sensitive tasks, consider using AI tools that run locally on your device (so data doesn’t leave your computer). There are open-source models you can use offline for certain applications. Lukasz Olejnik, a privacy researcher, noted that running models locally lets you “interact with them privately without your data going to the company” (DeepSeek’s Popular AI App Is Explicitly Sending US Data to China | WIRED). This isn’t practical for all use cases (since local models may be less powerful), but for highly sensitive data, it might be worth the trade-off. For example, if a business wants to use AI on confidential data, using an on-premises solution or an enterprise service with strong privacy commitments is preferable to a public chatbot. (A local-model sketch also follows this list.)
  • Be cautious with foreign services: Until or unless DeepSeek (or any foreign-based AI) demonstrates robust privacy safeguards and U.S.-friendly terms, it’s wise for U.S. citizens to refrain or limit use. The potential risks – data exposure and lack of recourse – outweigh the novelty benefits. There are plenty of U.S. or European AI services that, while not perfect, at least operate under laws that give users some leverage.
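
As promised above, here is a minimal sketch of pre-submission redaction in Python. The placeholder tags and regular expressions are illustrative assumptions covering only a few obvious identifiers; a real deployment would need much more thorough PII detection.

```python
# Minimal sketch: scrub a few obvious identifiers from a prompt before it
# leaves your machine. The patterns are illustrative assumptions, not a
# complete PII detector (names, addresses, etc. need far more handling).
import re

PATTERNS = {
    "[SSN]":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace each matched identifier with its placeholder tag."""
    for tag, pattern in PATTERNS.items():
        prompt = pattern.sub(tag, prompt)
    return prompt

print(redact("Reach jane.doe@example.com or 555-867-5309, SSN 123-45-6789."))
# -> "Reach [EMAIL] or [PHONE], SSN [SSN]."
```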
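And for the local-alternative recommendation, here is a minimal sketch using the Hugging Face transformers library, assuming the packages are installed and a small open-weight chat model has been downloaded. The model name is an illustrative assumption; any comparable open-weight model works the same way.

```python
# Minimal sketch: run an open-weight model entirely on your own machine,
# so prompts never leave the device. Assumes the `transformers` and
# `torch` packages are installed; the model name is illustrative.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # any local open-weight model
)

prompt = "List three questions to ask before trusting an AI app with data."
result = generator(prompt, max_new_tokens=150)
print(result[0]["generated_text"])
```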

5. Global Regulatory Considerations

Global privacy and AI regulations provide an important backdrop to these TOS differences. The European Union’s General Data Protection Regulation (GDPR) is the world’s strictest data privacy law, and it has indirectly improved privacy practices of AI platforms worldwide. GDPR grants EU residents rights to access, correct, delete, and restrict processing of personal data, and requires a legal basis for data collection. When Italy’s regulator found OpenAI had “processed users’ personal data to train ChatGPT without an adequate legal basis”, OpenAI had to quickly adjust its practices (Italy fines OpenAI over ChatGPT privacy rules breach | Reuters). They implemented tools for users to object to data use and improved privacy disclosures, allowing the service to resume in the EU (Italy fines OpenAI over ChatGPT privacy rules breach | Reuters). The Italian DPA also fined OpenAI €15 million in Dec 2024 for residual violations (Italy fines OpenAI over ChatGPT privacy rules breach | Reuters), showing that these laws have teeth. None of the U.S. companies want to be barred from the EU market, so they are adapting – e.g., OpenAI’s Privacy Policy and user rights section is clearly influenced by GDPR, even for U.S. users (Privacy policy | OpenAI). Anthropic’s and Google’s privacy notices similarly have GDPR-compliant language (Google’s privacy controls and export options were originally built to meet European demands).

China’s regulatory framework is very different. China has a relatively new Personal Information Protection Law (PIPL) and Data Security Law, which in theory impose consent requirements and data minimization principles not unlike GDPR. However, critically, Chinese law has broad carve-outs for national security and government access. Companies like DeepSeek must “cooperate with national intelligence efforts” by law (DeepSeek’s Popular AI App Is Explicitly Sending US Data to China | WIRED). Moreover, any data held in China can be considered under Chinese jurisdiction, making foreign access or oversight almost impossible. So while DeepSeek might claim to follow privacy principles, in practice Chinese authorities can override those at will. Global firms operating in China (like Apple or LinkedIn previously) have faced this dilemma of complying with government data requests versus user privacy – a tension that Chinese firms don’t publicly struggle with, as compliance is expected.

For U.S. users, GDPR vs. Chinese law is an easy choice – GDPR offers rights and recourse; Chinese law offers opacity. The U.S. does not yet have a comprehensive federal privacy law, but several state laws (California’s CPRA, Virginia’s CDPA, etc.) borrow from GDPR. These laws give U.S. users rights that mirror GDPR rights in some ways (access, deletion, no selling personal data without opt-out, etc.). OpenAI, Meta, Google, and others have had to incorporate these into their TOS and privacy policies for compliance. DeepSeek, focusing on rapid global expansion, appears not to tailor its policies to foreign laws, which is already limiting it. Italy’s data protection authority, for instance, ordered the DeepSeek app blocked in January 2025 after the company failed to adequately answer questions about its data practices, and other EU regulators have raised similar questions, largely because DeepSeek cannot easily comply with GDPR’s requirements (like keeping EU user data in Europe or handling EU data subject requests). By comparison, Google’s NotebookLM or Gemini would have to comply with GDPR if offered in Europe, meaning data might be stored on European servers or with appropriate safeguards and users informed of processing.

Another global regulatory trend is AI-specific regulation. The EU AI Act, which entered into force on August 1, 2024 (with obligations phasing in over the following years), classifies AI systems by risk and imposes obligations (e.g. transparency to users when they’re interacting with AI, and quality requirements for high-risk AI). Chatbots like ChatGPT and DeepSeek will likely be considered limited-risk but still have to meet certain transparency requirements (e.g. labeling AI-generated content, informing users that they’re chatting with an AI). We’re already seeing companies voluntarily label AI outputs (for instance, Bard and ChatGPT will remind users they are AI and not human). If DeepSeek wanted EU market entry, it would need to not only handle data differently but also adjust its content moderation and transparency to meet these rules – for example, its censorship of political topics could run afoul of Europe’s free expression standards or the Digital Services Act’s provisions on platform accountability.

From a best practices standpoint, global regulations suggest a few things U.S. users should look for (and demand) in AI platform terms:

  • Transparency about where your data is stored, how long it is retained, and who (including governments) can access it.
  • GDPR/CCPA-style rights: the ability to access, correct, and delete your data, and to opt out of its use for model training.
  • Clear disclosure when you are interacting with an AI, and labeling of AI-generated content.
  • An accountable jurisdiction, plus a commitment to notify users directly if a breach occurs.

At present, DeepSeek is more or less bypassing these global norms – which is why experts compare it to TikTok’s early days of unchecked data flow, calling for stricter scrutiny (DeepSeek's rise raises data privacy, national security concerns) (DeepSeek's Popular AI App Is Explicitly Sending US Data to China). U.S. users can take a cue from EU’s stance: Italy’s ban and fine on ChatGPT show that regulators will intervene when privacy is abused, even against popular AI. If DeepSeek or others were found violating U.S. laws (say, biometric data laws by collecting keystroke patterns without consent, which in Illinois could violate the Biometric Information Privacy Act), they could face legal challenges in the U.S. regardless of their TOS claims. However, such enforcement is reactive and slow. Until then, global best practices favor sticking with providers that are more accountable.

6. Case Studies & Examples

Real-world incidents shed light on these abstract TOS terms:

  • National Security & Data Privacy – DeepSeek vs. TikTok: The trajectory of TikTok in the U.S. is informative. TikTok, like DeepSeek, is a Chinese app that faced accusations of sending U.S. user data to China and censoring content unfavorable to Beijing. Ultimately, TikTok made concessions like routing U.S. data to U.S. servers (Project Texas) and fought bans. DeepSeek is now in the spotlight for similar reasons – The Hill reported that DeepSeek’s rise “echoes the worries that surrounded TikTok” and has prompted calls for oversight (DeepSeek's rise raises data privacy, national security concerns). Already, the U.S. military prohibits service members from using DeepSeek (Is DeepSeek a national security risk? - YouTube), and a group of House lawmakers in early 2025 pushed to ban DeepSeek for federal employees, citing the Chinese government’s ability to exploit it for surveillance and propaganda (House lawmakers push to ban AI app DeepSeek from US ... - CBS 42). If DeepSeek does not implement privacy controls comparable to other AI (encryption, onshore data storage for U.S. users, independent audits), we may see more aggressive moves – possibly Treasury/CFIUS action to block it, or FCC action if it’s on mobile networks. For U.S. citizens, this means the government might step in where individual power is limited, potentially restricting access to protect national security (which, conversely, could limit user choice).
  • Enforcement by Data Protection Authorities: We saw Italy’s Garante temporarily ban ChatGPT in March 2023 due to privacy concerns (Italy lifts ban on ChatGPT after data privacy improvements - DW). OpenAI had violated GDPR by collecting personal data (including from minors) without informing users and by having no legal basis for using conversations to train the AI (Italy fines OpenAI over ChatGPT privacy rules breach | Reuters). During the ban, OpenAI hurried to add age gating and a way for users to object to data use. By April 2023, ChatGPT was reinstated in Italy after these improvements (Italy fines OpenAI over ChatGPT privacy rules breach | Reuters). The case didn’t stop there: in Dec 2024 Italy fined OpenAI €15M and ordered a public education campaign about ChatGPT’s data use (Italy fines OpenAI over ChatGPT privacy rules breach | Reuters) (Italy fines OpenAI over ChatGPT privacy rules breach | Reuters). This case is a wake-up call – even U.S. companies can be penalized heavily for privacy lapses, and it set a precedent that training an AI on user data is considered “processing” that requires a lawful basis. We may see similar investigations in other countries (France and Spain also began looking into OpenAI’s practices in 2023). For users, this is somewhat encouraging: it means regulators are watching AI privacy and can force changes that give users more rights (for instance, the right to opt out of data training came out of this). It also means AI providers might get more cautious, perhaps retaining less data or anonymizing it more.
  • Consumer Litigation and Liability: A notable incident involved an attorney using ChatGPT for legal research, where ChatGPT fabricated case citations. The attorney faced sanctions for submitting fake information to a court, highlighting that relying on AI output can have real legal consequences – and the user bore the blame, not OpenAI (consistent with the TOS which disclaims accuracy) (Warning: If AI social media tools make a mistake, you’re responsible | Technology | EL PAíS English) (Warning: If AI social media tools make a mistake, you’re responsible | Technology | EL PAíS English). Another developing story: a Georgia radio host filed a defamation suit after ChatGPT erroneously named him in a legal complaint summary. Since OpenAI generated the false statement, this challenges the usual publisher immunity and tests whether the TOS disclaimer can shield OpenAI. If the case proceeds, a court might evaluate if such disclaimers are sufficient or if generating false statements about a person is a form of product defect or negligence. It’s an early example, but it underscores a risk: AI can and will fabricate personal data (so-called “hallucinations”), potentially harming reputations or privacy. Users who prompt the AI and then publish or act on such information could also be sued by third parties. For instance, if you used an AI to generate an article and it defamed someone, you could be held liable for republishing it. The platforms’ terms make clear you cannot blame them – Meta’s terms outright state you are responsible, not Meta, for content you create with its AI (Warning: If AI social media tools make a mistake, you’re responsible | Technology | EL PAíS English).
  • Data Breaches: While there haven’t been massive breaches reported yet for these AI platforms, there was a ChatGPT incident (March 2023) where a bug in an open-source library exposed snippets of other users’ chat histories and possibly payment info for ChatGPT Plus subscribers. OpenAI fixed it quickly and enhanced safeguards (DeepSeek’s Popular AI App Is Explicitly Sending US Data to China | WIRED). If a larger breach happened (say, hackers stealing DeepSeek’s user database or leaking millions of ChatGPT conversation logs), the fallout would depend on the laws: U.S. users could pursue class-action suits for negligence (though arbitration clauses complicate that), and regulators would investigate. A breach at DeepSeek affecting U.S. users would be tricky – Chinese law requires notifying authorities in China, but not necessarily the individuals. U.S. users might never be directly informed. This asymmetry in breach response is another reason many enterprises forbid use of services like DeepSeek – they simply can’t risk their data ending up in an opaque environment. In contrast, if Perplexity or Claude had a breach, they’d likely follow state breach notification laws, meaning affected users would get alerts and perhaps credit monitoring. We haven’t seen a high-profile AI breach case yet, but as use grows, it’s almost inevitable.
  • Intellectual Property Disputes: Several publishers and artists have sued AI companies for training on copyrighted data without permission (e.g. class actions against OpenAI and Meta for using books and code). These cases don’t directly implicate user privacy, but they could influence how AI platforms handle data. If courts require AI companies to be more selective or to honor data removal requests (as a settlement or judgment), users might see new tools to opt out of having their content (even public postings) included in training sets. Already, X (Twitter) updated its terms to declare a “worldwide license” to use all content for AI training (New X terms will allow all user data to train AI services) – essentially notifying users that anything they post can feed Musk’s xAI. This caused some backlash, but it highlights that your public posts on social media are likely feeding AI models unless you lock them down. So even if you never use an AI chatbot, your data might still be in their training data – an area regulators are increasingly scrutinizing.

In summary, these case studies show a landscape in flux: regulators are stepping in to protect privacy (Italy vs OpenAI), users and third parties are testing the legal accountability of AI (defamation, malpractice via AI), and companies are adjusting policies often in response to these pressures (X’s terms, OpenAI’s privacy pivots). U.S. users should follow these developments because they directly impact your rights. A win by plaintiffs in an AI lawsuit or an enforcement action by the FTC could force better practices industry-wide – for instance, if an AI company were held liable for a harmful output despite disclaimers, others would quickly revisit their own risk allocations.

Conclusion

U.S. citizens using AI platforms should go in with eyes open: understand that your interactions may be recorded and used in various ways, your legal recourse is limited by design, and the onus is largely on you to safeguard your privacy and interests. DeepSeek AI, in particular, poses outsized risks – by subjecting U.S. users to a foreign legal system and pervasive data collection, it leaves them with essentially no rights and high exposure. Its TOS reflects a “wild west” approach that most U.S. and EU-based companies could not get away with under current laws. In contrast, OpenAI, Google, Meta, Anthropic, Perplexity, and xAI – while not perfect – operate under frameworks that at least recognize user privacy rights (CCPA/GDPR) and offer some avenues for redress or control.

For now, the best protections for users are self-protection and regulatory action. Treat AI outputs as fallible and double-check critical information (Warning: If AI social media tools make a mistake, you’re responsible | Technology | EL PAíS English). Use privacy settings proactively. Keep informed about where your data goes and exercise your rights to it (Privacy policy | OpenAI). If an AI service doesn’t meet basic privacy standards (e.g. clear data practices, ability to delete data, trustworthy jurisdiction), think twice about using it for anything beyond casual experimentation. U.S. users can also look to global norms: if an AI app wouldn’t be allowed in Europe due to privacy issues, that’s a red flag to an American user as well.

The legal landscape will continue to evolve. We may see new U.S. federal laws or updated state laws addressing AI specifically, which could override some TOS provisions (for example, a law could ban certain liability waivers or require clear opt-in consent for AI data use). Until then, it’s “TOS buyer beware.” By understanding the contrasts between DeepSeek’s terms and those of other AI providers, users can make informed choices. Remember: when a product is free and novel like AI chatbots, often you and your data are the price (DeepSeek’s Popular AI App Is Explicitly Sending US Data to China | WIRED) (DeepSeek’s Popular AI App Is Explicitly Sending US Data to China | WIRED). Proceed accordingly – with caution and knowledge – to harness these AI tools while protecting your rights and privacy.

Mitch Jackson, Esq.

On LinkedIn

On Bluesky



