AI Terms of Service Exposed: How DeepSeek and Other AI Platforms Put U.S. Users at Risk
Mitch Jackson
Lawyer and entrepreneur (30+ years) - Breaking political news commentary on Substack at mitch-jackson.com/substack (it's free)
Introduction
As AI-powered platforms become a staple in our digital lives, understanding their privacy policies, data security measures, and legal implications is critical—especially for U.S. users who may be unknowingly exposing themselves to significant risks. This report examines the Terms of Service (TOS) of DeepSeek AI, a China-based AI platform, and contrasts them with those of leading U.S.-based AI services, including OpenAI's ChatGPT, Google Gemini, Meta AI, Perplexity AI, Claude AI, Google NotebookLM, and xAI's Grok.
Key concerns include data collection, retention, government surveillance, legal exposure, and user rights under U.S. law. DeepSeek, in particular, raises red flags as it operates under Chinese jurisdiction, meaning U.S. users’ data could be subject to foreign government monitoring with little to no legal recourse. In comparison, U.S. platforms, while not perfect, operate under more established regulatory frameworks like the California Consumer Privacy Act (CCPA) and other consumer protection laws.
To ensure a thorough, well-supported analysis, this report was compiled using OpenAI’s Deep Research, leveraging legal, regulatory, and cybersecurity insights to break down the risks, legal limitations, and best practices for AI users. By highlighting the contrast in policies and protections, this report provides actionable steps for users to safeguard their privacy, understand their rights, and make informed decisions about which AI platforms to trust.
DISCLAIMER: This communication does not provide legal, financial, tax or investment advice. Always do your own due diligence and consult with an experienced professional in your state, region or country.
1. Privacy & Data Security
Data Collection & Storage: DeepSeek AI’s policies reveal aggressive data collection and offshore storage. All user queries and conversations are sent to servers in China (DeepSeek’s Popular AI App Is Explicitly Sending US Data to China | WIRED). DeepSeek scoops up not just your prompts and account info, but also detailed device data – even “keystroke patterns or rhythms” (DeepSeek’s Popular AI App Is Explicitly Sending US Data to China | WIRED) – effectively monitoring what and how you type. By contrast, U.S.-based platforms also collect extensive data (e.g. prompts, usage, device info, IP address), but generally store it on U.S. or regional servers. OpenAI and Anthropic log user prompts to improve their models, but OpenAI now offers controls (ChatGPT’s “Incognito” mode or data opt-out) so conversations aren’t retained long-term for training (Terms of use | OpenAI) (Privacy policy | OpenAI). Perplexity.ai similarly lets users opt out of having search queries used to train models via a privacy toggle (What data does Perplexity collect about me? | Perplexity Help Center). These controls give users some say in data retention, a feature notably absent in DeepSeek.
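To make the "keystroke patterns or rhythms" disclosure concrete, here is a minimal, hypothetical sketch of what keystroke-dynamics telemetry can look like in a web client. This is illustrative only and is not DeepSeek's actual code; the event APIs are standard browser DOM, and the feature choices (dwell and flight time) are common in behavioral fingerprinting.

```typescript
// Hypothetical illustration of keystroke-dynamics telemetry in a browser.
// NOT DeepSeek's actual code; a generic sketch of the technique the
// privacy policy's disclosure would permit.

interface KeySample {
  key: string;     // which key was pressed (timing alone can fingerprint, too)
  downAt: number;  // high-resolution keydown timestamp (ms)
  upAt?: number;   // keyup timestamp, filled in later
}

const samples: KeySample[] = [];

document.addEventListener("keydown", (e) => {
  samples.push({ key: e.key, downAt: performance.now() });
});

document.addEventListener("keyup", (e) => {
  // Close out the most recent unmatched keydown for this key.
  const open = [...samples].reverse().find((s) => s.key === e.key && s.upAt === undefined);
  if (open) open.upAt = performance.now();
});

// Dwell time (how long each key is held) and flight time (gap between
// successive keydowns) together form a per-user behavioral signature.
function rhythmFeatures(s: KeySample[]): { dwellMs: number[]; flightMs: number[] } {
  const dwellMs = s.filter((x) => x.upAt !== undefined).map((x) => x.upAt! - x.downAt);
  const flightMs = s.slice(1).map((x, i) => x.downAt - s[i].downAt);
  return { dwellMs, flightMs };
}

// A client could periodically POST rhythmFeatures(samples) to an analytics
// endpoint; that is the kind of collection the quoted disclosure covers.
```

Dwell and flight times are stable enough per person that they can help re-identify a user across sessions even without account information, which is why this disclosure matters more than it may first appear.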
Retention & Deletion: DeepSeek’s privacy policy says you can delete your chat history in-app and even delete your account, but it’s unclear if this truly erases data from their servers. The Wired review confirms DeepSeek stores everything on PRC servers (DeepSeek’s Popular AI App Is Explicitly Sending US Data to China | WIRED), suggesting that deletion may only hide data from the user’s view. In contrast, OpenAI’s policy states that personal data is retained “only as long as we need… or for other legitimate business purposes”, and that ChatGPT chats can be auto-deleted after 30 days when history is off (Privacy policy | OpenAI). Users can also delete accounts or specific conversations. Perplexity’s FAQ notes data is retained while your account is active and deleted upon account deletion (with some delay) (What data does Perplexity collect about me? | Perplexity Help Center) (What data does Perplexity collect about me? | Perplexity Help Center). Google’s services (like Bard/Gemini) tie into Google’s global privacy policy – data may be stored indefinitely unless you delete it, but Google provides tools to export or erase your activity. Meta’s generative AI features are new, but Meta claims to build with “privacy safeguards” and has introduced a Generative AI Privacy Guide with transparency on data usage (Privacy Matters: Meta’s Generative AI Features | Meta). They emphasize not using private messages to train AI models (Privacy Matters: Meta’s Generative AI Features | Meta) and allow users to delete AI chat threads with special commands (Privacy Matters: Meta’s Generative AI Features | Meta).
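The gap between "delete in-app" and "erase from servers" usually comes down to soft deletion. The sketch below illustrates the generic pattern, assuming a service hides rather than destroys records; DeepSeek's actual backend behavior is not public.

```typescript
// Generic soft-delete vs. hard-delete pattern. Hypothetical sketch only:
// DeepSeek's actual backend behavior is not public.

interface ChatRecord {
  id: string;
  userId: string;
  content: string;
  deletedAt?: Date; // soft-delete marker: record is hidden, not erased
}

const store = new Map<string, ChatRecord>();

// What many "Delete" buttons do: flag the row and stop showing it.
function softDelete(id: string): void {
  const rec = store.get(id);
  if (rec) rec.deletedAt = new Date(); // still on disk, still readable internally
}

// What users usually assume is happening: the data is actually destroyed.
function hardDelete(id: string): void {
  store.delete(id); // gone from the primary store (backups and logs aside)
}

// User-facing queries filter out soft-deleted rows, so the app looks clean
// even though every "deleted" chat may still exist server-side.
function visibleChats(userId: string): ChatRecord[] {
  return [...store.values()].filter((r) => r.userId === userId && !r.deletedAt);
}
```

A policy that promises deletion without saying whether backups, logs, and training copies are also purged leaves exactly this ambiguity open.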
Surveillance & Third-Party Sharing: One of the starkest differences is government access. DeepSeek explicitly warns it will “share information with law enforcement agencies, public authorities, and more when required to do so”, and all data is accessible by its corporate group in China (DeepSeek’s Popular AI App Is Explicitly Sending US Data to China | WIRED). Under China’s cybersecurity laws, companies must cooperate with state intelligence efforts (DeepSeek’s Popular AI App Is Explicitly Sending US Data to China | WIRED). This means U.S. user data on DeepSeek could be subject to Chinese government surveillance with few hurdles. U.S.-based AI providers also comply with law enforcement requests, but those are governed by U.S. law (requiring warrants, subpoenas, etc.). OpenAI, for example, says it may share personal data with government authorities only if legally required (Privacy policy | OpenAI) – a notable distinction from the blanket cooperation mandated in China.
All platforms share data with third-party service providers (for cloud hosting, analytics, payment processing, etc.), but DeepSeek again stands out. Researchers found DeepSeek’s app quietly sending analytics data to Chinese tech giants like Baidu and even to ByteDance (TikTok’s owner) (DeepSeek’s Popular AI App Is Explicitly Sending US Data to China | WIRED) (DeepSeek’s Popular AI App Is Explicitly Sending US Data to China | WIRED). DeepSeek also allows advertisers to feed it data (like mobile IDs and hashed emails) to help track users across the web (DeepSeek’s Popular AI App Is Explicitly Sending US Data to China | WIRED). By contrast, OpenAI’s privacy policy is explicit that it does not “sell” or “share” personal data for behavioral advertising (Privacy policy | OpenAI). Similarly, Perplexity states it “does not sell, trade, or share your personal information with third parties,” except for necessary service providers (What data does Perplexity collect about me? | Perplexity Help Center). Google is an outlier among U.S. AI platforms in that it does leverage user data for advertising: Google’s terms acknowledge they “may display targeted advertisements” and use your Bard/Gemini queries and profile info to personalize ads (unless you opt out of ad personalization) (Google Bard - Full Privacy Report). In other words, Google treats AI interactions like any other Google activity – part of the data stream feeding its ad ecosystem, though it emphasizes it doesn’t sell the data to outsiders (Google Bard - Full Privacy Report). Meta’s AI interactions are governed by Meta’s broader data policy, meaning your chats with “Meta AI” are used “consistent with Meta’s Privacy Policy” (Warning: If AI social media tools make a mistake, you're responsible). Meta has even indicated it might share certain AI chat queries with external partners (e.g. a search engine) to fetch real-time information (Privacy Matters: Meta’s Generative AI Features | Meta), raising additional monitoring concerns.
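For readers unfamiliar with "hashed emails" as an ad-tracking input, here is a minimal sketch of how the matching technique generally works. This is the standard industry pattern (normalize, then hash), not DeepSeek's documented implementation.

```typescript
// Standard "hashed email" ad-matching pattern (normalize, then hash).
// Generic industry technique; not DeepSeek's documented implementation.
import { createHash } from "node:crypto";

function hashedEmail(raw: string): string {
  // Normalize first so both sides of the match compute identical digests.
  const normalized = raw.trim().toLowerCase();
  return createHash("sha256").update(normalized).digest("hex");
}

// An advertiser and a platform hash independently and compare digests:
console.log(hashedEmail("Jane.Doe@Example.com ")); // same output...
console.log(hashedEmail("jane.doe@example.com"));  // ...as this one
```

Because email addresses are guessable and the digest is deterministic, a hashed email works as a stable cross-site identifier rather than genuine anonymization; accepting such identifiers from advertisers enables exactly the cross-web tracking the Wired report flagged.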
Security Measures: Most TOS have generic assurances about security, but DeepSeek’s execution appears shaky. Its TOS promises to take “necessary measures (not less than industry practices) to ensure cybersecurity” (DeepSeek Terms of Use). In practice, a security audit found “multiple security and privacy issues” in DeepSeek’s mobile app (NowSecure Uncovers Multiple Security and Privacy Flaws in ...). (For example, vulnerabilities reportedly exposed user data to interception.) U.S. AI providers generally invest heavily in security – employing encryption in transit and at rest, secure cloud infrastructure, and robust access controls – both to protect user data and to comply with laws like California’s data security requirements. OpenAI, Google, Anthropic, etc., publish security whitepapers and even bug bounty programs. Nonetheless, no system is immune: even ChatGPT suffered a bug that briefly exposed user chat titles and payment info to others in March 2023. Platforms are aware that a data breach could trigger liability under state data-breach notification laws or FTC enforcement. Thus, while all claim to secure user data, DeepSeek’s track record and Chinese jurisdiction heighten the risk for data security lapses compared to its U.S. counterparts.
2. Legal Rights & Exposure
Governing Law & Jurisdiction: The jurisdiction specified in the TOS can hugely affect a U.S. user's legal rights. DeepSeek's Terms make this crystal clear: any dispute is governed by the laws of the People's Republic of China (mainland) (DeepSeek Terms of Use), and must be brought to a court in Hangzhou (where DeepSeek's parent company is registered) (DeepSeek Terms of Use). This effectively places U.S. users under Chinese law for any legal issues with DeepSeek. By contrast, all the U.S.-based AI services choose U.S. law (often California) and local forums. OpenAI's Terms are governed by California law with exclusive venue in San Francisco courts – except that OpenAI compels arbitration for most disputes (more on that shortly) (Terms of use | OpenAI). Anthropic (Claude) also chooses California law and explicitly says disputes will be resolved in state or federal courts in San Francisco (Consumer Terms of Service \ Anthropic). Google's standard Terms of Service likewise invoke California law and designate Santa Clara County, CA courts for litigation; binding arbitration (with an opt-out window) appears in certain Google product terms, such as device purchases, rather than in the general service terms. Meta's user terms (covering Facebook, Instagram, and thus their AI features) also use California law and require any lawsuits to be filed in California (historically the Northern District of CA or CA state court) – and in recent updates Meta has added clauses specific to AI. In sum, U.S. users of OpenAI, Google, Meta, Anthropic, Perplexity, or xAI are under U.S. law (generally California or Delaware) in the event of disputes, whereas DeepSeek users would be subject to Chinese law. This difference is monumental: Chinese law and courts offer none of the consumer protections U.S. users might expect (and proceedings would be in Chinese, under a legal system where the government or company may have home-field advantage).
Dispute Resolution & Class Actions: Many tech companies use arbitration clauses and class-action waivers to limit users' ability to sue. OpenAI's TOS includes a mandatory arbitration agreement and a class-action waiver (Terms of use | OpenAI). Users must attempt informal resolution, then if needed go to arbitration (likely under AAA rules), waiving any right to sue in court or to join a class action (Terms of use | OpenAI). (OpenAI carves out small claims and injunctive relief for IP misuse as exceptions) (Terms of use | OpenAI). Similarly, xAI's Grok consumer TOS "REQUIRES the parties to arbitrate their disputes and limits the manner in which you can seek relief" (Terms of Service - Consumer | xAI) – i.e. no class or representative actions, with only individual arbitration permitted. Meta's various terms also prohibit class actions; Meta has arbitration on an individual basis in its supplemental terms, meaning you "may bring a claim only on your own behalf" (Supplemental Facebook View Terms of Service - Meta Store). Google's general consumer terms, by contrast, route disputes to Santa Clara County courts rather than compelling arbitration, though certain Google product terms (such as device purchases) do impose individual arbitration with class waivers. Notably, Anthropic's Claude does not force arbitration: their latest consumer terms say disputes will go to court in SF, implying users retain their right to sue in court individually (Consumer Terms of Service \ Anthropic). That also means class actions against Anthropic are not contractually barred (though the usual hurdles to class certification in court still apply). Perplexity's help-center pages say little about dispute resolution, but the full Terms of Service linked from those pages do include an arbitration clause, putting Perplexity in line with the industry norm of individual arbitration.
For U.S. citizens, arbitration clauses mean waiving the right to a jury trial and usually limiting discovery, which can disadvantage individuals. On the flip side, Chinese jurisdiction (DeepSeek) effectively means no practical legal recourse – it’s unrealistic for an average U.S. user to pursue a lawsuit in China for a privacy or contract violation, and class relief is off the table entirely. This lack of remedy is a key risk: if DeepSeek mishandles your data or causes you harm, you likely have no effective way to hold them accountable. U.S. companies’ arbitration requirements are also restrictive, but at least an arbitrator could award damages (in theory) under U.S. law, and regulators like the FTC or state attorneys general can still intervene on the public’s behalf despite arbitration clauses.
User Rights Under U.S. Laws: U.S. users have certain statutory rights that TOS cannot waive – and some TOS explicitly acknowledge this. DeepSeek’s terms include a generic savings clause that “nothing in these terms shall affect any statutory rights that you cannot waive as a consumer” (DeepSeek Terms of Use). However, enforcing U.S. statutory rights (like rights under California’s Consumer Privacy Act, or unfair/deceptive practice laws) against a Chinese entity is extremely difficult in practice. U.S. services are directly subject to U.S. laws like the California Consumer Privacy Act (CCPA) or state AI/privacy laws. Indeed, OpenAI’s privacy policy enumerates user rights “depending on where you live”, listing the right to know, delete, and correct personal data, and freedom from discrimination for exercising these rights (Privacy policy | OpenAI) – this language aligns with CCPA/CPRA rights for California residents and GDPR rights for Europeans. OpenAI provides a Data Subject Access Request portal for users to exercise these rights (Privacy policy | OpenAI). Perplexity and Anthropic (and likely Google and Meta) offer similar processes for users to request data deletion or access. For example, OpenAI now allows anyone globally to opt out of having their content used to train models (Terms of use | OpenAI) – a response to regulatory pressure. DeepSeek’s policies, on the other hand, make no mention of CCPA or GDPR rights; they do provide an email for inquiries, but there’s little assurance that a U.S. user could successfully invoke rights to delete or access data held in China.
One area of U.S. law particularly relevant is surveillance and foreign intelligence. U.S. citizens’ data stored in the U.S. can be accessed by the U.S. government under laws like the CLOUD Act or FISA, but there are legal processes and some oversight. With DeepSeek in China, U.S. users expose themselves to a foreign government’s surveillance with none of the transparency or recourse they might have at home. This has raised national security flags – U.S. lawmakers warn that China’s DeepSeek app could expose U.S. users to surveillance and even censorship (DeepSeek AI raises national security concerns, U.S. officials say), likening it to the concerns around TikTok. In fact, the U.S. military and some federal agencies have already banned DeepSeek on government devices (House lawmakers push to ban AI app DeepSeek from US ... - CBS 42), and members of Congress have pushed to bar the app in sensitive networks (House lawmakers push to ban AI app DeepSeek from US ... - CBS 42).
Liability & Limitation of Remedies: All the AI platform TOS aggressively disclaim liability for the AI’s outputs and limit the remedies users can seek. DeepSeek’s terms bluntly state that users assume the “risks arising from reliance” on output accuracy or suitability (DeepSeek Terms of Use). DeepSeek provides its service “as is” with no warranties, just like the others (DeepSeek Terms of Use) (DeepSeek Terms of Use). Moreover, DeepSeek shifts legal risks to the user: if you misuse the service or violate laws, you are solely responsible for any third-party claims and must indemnify DeepSeek for any losses, including legal fees, that result (DeepSeek Terms of Use). This means if, for example, you generated content that infringes someone’s rights and they sue DeepSeek, DeepSeek can seek to recover all costs from you. U.S. AI providers have similar indemnity clauses (usually requiring users to indemnify the company for third-party claims stemming from the user’s content or misuse). For instance, OpenAI’s terms require users to hold OpenAI harmless from claims arising from use of the service in violation of the terms or law.
All platforms cap their liability heavily. Anthropic's clause is typical: no indirect or consequential damages and total liability capped at the amount you paid (if any) for the service (Consumer Terms of Service \ Anthropic). OpenAI and others also exclude liability for lost profits, data loss, or punitive damages. Meta's new AI terms explicitly tell users they – not Meta – are responsible for any actions taken based on AI outputs, and that the AI may be wrong or harmful (Warning: If AI social media tools make a mistake, you're responsible | Technology | EL PAÍS English). Meta and LinkedIn both warn that generative AI content might be inaccurate or misleading and urge users to vet it before sharing (Warning: If AI social media tools make a mistake, you're responsible | Technology | EL PAÍS English). In effect, if an AI gives bad advice that causes you loss (say, financial or health-related), the companies have erected multiple legal shields to avoid responsibility. Users would face an uphill battle arguing product liability or negligence, given these contractual waivers and the novelty of AI services (and OpenAI's arbitration clause would keep such disputes out of public courts anyway).
One emerging issue is defamation or privacy harms caused by AI outputs about individuals. OpenAI was recently hit with a complaint after ChatGPT falsely accused a radio host of embezzlement – but OpenAI will point to its disclaimers that outputs may be false and not to be relied upon (Terms of use | OpenAI). Unlike social media, where platforms cite Section 230 protections, AI companies may not have the same shield since they algorithmically generate content. This legal gray area is being tested, but for now, the user agreements place the burden on users to fact-check and use outputs responsibly. The upshot: U.S. users have very limited remedies if AI outputs are flawed or harmful, and for DeepSeek users, the practical remedies are virtually nil due to jurisdiction and strong liability waivers.
3. Comparison of AI Platform Policies
When comparing DeepSeek with OpenAI, Meta, Google's Gemini (Bard), Perplexity, Claude, NotebookLM, and Grok, some clear patterns emerge.
In short, DeepSeek’s TOS is the most extreme in terms of jurisdiction (China), data export (all data to PRC), and lack of user remedy – which significantly heightens U.S. users’ exposure. The mainstream U.S. AI platforms have broadly similar terms to each other: California law, arbitration (except Anthropic), heavy disclaimers, and commitments to user privacy that are imperfect but evolving. One unique point: Anthropic’s choice not to compel arbitration and to operate as a Public Benefit Corporation might indicate a slightly more user-friendly stance legally. Meanwhile, Google’s integration of AI into its ad machine sets it apart in privacy impact (using your AI interactions for marketing profiling unless you opt out) (Google Bard - Full Privacy Report), whereas OpenAI/Anthropic/Perplexity currently do not use your data for advertising at all (Privacy policy | OpenAI). Meta lies somewhere in between – it doesn’t sell data, but it will leverage everything you do on its platforms (AI chats included) to keep you engaged and infer your interests, and its entire business is ad-driven.
4. Implications for U.S. Citizens
For Americans using these AI services, the terms translate to different levels of risk and rights: with DeepSeek, effectively no enforceable rights, data stored under Chinese jurisdiction, and exposure to foreign surveillance; with the U.S. platforms, real but limited rights under laws like the CCPA, constrained mainly by arbitration clauses, class-action waivers, and sweeping liability disclaimers.
Recommendations for Users: Given these implications, U.S. users should take proactive steps to protect themselves: avoid sharing sensitive personal, financial, or proprietary information with any chatbot; use the privacy controls and training opt-outs that platforms provide; delete chat histories and dormant accounts; and weigh a platform's jurisdiction and security track record before trusting it with data.
5. Global Regulatory Considerations
Global privacy and AI regulations provide an important backdrop to these TOS differences. The European Union’s General Data Protection Regulation (GDPR) is the world’s strictest data privacy law, and it has indirectly improved privacy practices of AI platforms worldwide. GDPR grants EU residents rights to access, correct, delete, and restrict processing of personal data, and requires a legal basis for data collection. When Italy’s regulator found OpenAI had “processed users’ personal data to train ChatGPT without an adequate legal basis”, OpenAI had to quickly adjust its practices (Italy fines OpenAI over ChatGPT privacy rules breach | Reuters). They implemented tools for users to object to data use and improved privacy disclosures, allowing the service to resume in the EU (Italy fines OpenAI over ChatGPT privacy rules breach | Reuters). The Italian DPA also fined OpenAI €15 million in Dec 2024 for residual violations (Italy fines OpenAI over ChatGPT privacy rules breach | Reuters), showing that these laws have teeth. None of the U.S. companies want to be barred from the EU market, so they are adapting – e.g., OpenAI’s Privacy Policy and user rights section is clearly influenced by GDPR, even for U.S. users (Privacy policy | OpenAI). Anthropic’s and Google’s privacy notices similarly have GDPR-compliant language (Google’s privacy controls and export options were originally built to meet European demands).
China’s regulatory framework is very different. China has a relatively new Personal Information Protection Law (PIPL) and Data Security Law, which in theory impose consent requirements and data minimization principles not unlike GDPR. However, critically, Chinese law has broad carve-outs for national security and government access. Companies like DeepSeek must “cooperate with national intelligence efforts” by law (DeepSeek’s Popular AI App Is Explicitly Sending US Data to China | WIRED). Moreover, any data held in China can be considered under Chinese jurisdiction, making foreign access or oversight almost impossible. So while DeepSeek might claim to follow privacy principles, in practice Chinese authorities can override those at will. Global firms operating in China (like Apple or LinkedIn previously) have faced this dilemma of complying with government data requests versus user privacy – a tension that Chinese firms don’t publicly struggle with, as compliance is expected.
For U.S. users, GDPR vs. Chinese law is an easy choice – GDPR offers rights and recourse; Chinese law offers opacity. The U.S. does not yet have a comprehensive federal privacy law, but several state laws (California's CPRA, Virginia's CDPA, etc.) borrow from GDPR. These laws give U.S. users rights that mirror GDPR rights in some ways (access, deletion, no selling personal data without opt-out, etc.). OpenAI, Meta, Google, and others have had to incorporate these into their TOS and privacy policies for compliance. DeepSeek, focusing on rapid global expansion, appears not to tailor its policies to foreign laws, which could eventually limit it: Italy's data protection authority has already moved to block the app, and broader GDPR compliance (keeping EU user data in Europe, honoring EU data subject requests) would be difficult given DeepSeek's China-based storage. By comparison, Google's NotebookLM or Gemini would have to comply with GDPR if offered in Europe, meaning data might be stored on European servers or with appropriate safeguards and users informed of processing.
Another global regulatory trend is AI-specific regulation. The EU AI Act (in force since August 2024, with most obligations phasing in through 2026 and 2027) classifies AI systems by risk and imposes obligations (e.g. transparency to users when they're interacting with AI, and quality requirements for high-risk AI). Chatbots like ChatGPT and DeepSeek will likely be considered limited-risk but still have to meet certain transparency requirements (e.g. labeling AI-generated content, informing users that they're chatting with an AI). We're already seeing companies voluntarily disclose AI outputs (for instance, Bard and ChatGPT will remind users they are AI and not human). If DeepSeek wanted EU market entry, it would need to not only handle data differently but also adjust its content moderation and transparency to meet these rules: for example, its censorship of political topics could run afoul of Europe's free expression standards or the Digital Services Act's provisions on platform accountability.
From a best practices standpoint, global regulations suggest a few things U.S. users should look for (and demand) in AI platform terms: transparency about where data is stored and who can access it; workable rights to access, correct, and delete personal data; the ability to opt out of having content used for model training; and accountability to a legal regime with real enforcement power.
At present, DeepSeek is more or less bypassing these global norms – which is why experts compare it to TikTok’s early days of unchecked data flow, calling for stricter scrutiny (DeepSeek's rise raises data privacy, national security concerns) (DeepSeek's Popular AI App Is Explicitly Sending US Data to China). U.S. users can take a cue from EU’s stance: Italy’s ban and fine on ChatGPT show that regulators will intervene when privacy is abused, even against popular AI. If DeepSeek or others were found violating U.S. laws (say, biometric data laws by collecting keystroke patterns without consent, which in Illinois could violate the Biometric Information Privacy Act), they could face legal challenges in the U.S. regardless of their TOS claims. However, such enforcement is reactive and slow. Until then, global best practices favor sticking with providers that are more accountable.
6. Case Studies & Examples
Real-world incidents shed light on these abstract TOS terms: Italy's regulator temporarily banning and later fining OpenAI over ChatGPT's data practices; the defamation complaint filed after ChatGPT falsely accused a radio host of embezzlement; the March 2023 bug that briefly exposed ChatGPT users' chat titles and payment details; and X's updated terms asserting broad rights to train AI on user posts.
In summary, these case studies show a landscape in flux: regulators are stepping in to protect privacy (Italy vs OpenAI), users and third parties are testing the legal accountability of AI (defamation, malpractice via AI), and companies are adjusting policies often in response to these pressures (X’s terms, OpenAI’s privacy pivots). U.S. users should follow these developments because they directly impact your rights. A win by plaintiffs in an AI lawsuit or an enforcement action by the FTC could force better practices industry-wide – for instance, if an AI company were held liable for a harmful output despite disclaimers, others would quickly revisit their own risk allocations.
Conclusion
U.S. citizens using AI platforms should go in with eyes open: understand that your interactions may be recorded and used in various ways, your legal recourse is limited by design, and the onus is largely on you to safeguard your privacy and interests. DeepSeek AI, in particular, poses outsized risks – by subjecting U.S. users to a foreign legal system and pervasive data collection, it leaves them with essentially no rights and high exposure. Its TOS reflects a “wild west” approach that most U.S. and EU-based companies could not get away with under current laws. In contrast, OpenAI, Google, Meta, Anthropic, Perplexity, and xAI – while not perfect – operate under frameworks that at least recognize user privacy rights (CCPA/GDPR) and offer some avenues for redress or control.
For now, users must rely on self-protection and regulatory action. Treat AI outputs as fallible and double-check critical information (Warning: If AI social media tools make a mistake, you're responsible | Technology | EL PAÍS English). Use privacy settings proactively. Keep informed about where your data goes and exercise your rights to it (Privacy policy | OpenAI). If an AI service doesn't meet basic privacy standards (e.g. clear data practices, ability to delete data, trustworthy jurisdiction), think twice about using it for anything beyond casual experimentation. U.S. users can also look to global norms: if an AI app wouldn't be allowed in Europe due to privacy issues, that's a red flag to an American user as well.
The legal landscape will continue to evolve. We may see new U.S. federal laws or updated state laws addressing AI specifically, which could override some TOS provisions (for example, a law could ban certain liability waivers or require clear opt-in consent for AI data use). Until then, it’s “TOS buyer beware.” By understanding the contrasts between DeepSeek’s terms and those of other AI providers, users can make informed choices. Remember: when a product is free and novel like AI chatbots, often you and your data are the price (DeepSeek’s Popular AI App Is Explicitly Sending US Data to China | WIRED) (DeepSeek’s Popular AI App Is Explicitly Sending US Data to China | WIRED). Proceed accordingly – with caution and knowledge – to harness these AI tools while protecting your rights and privacy.
Mitch Jackson, Esq.