DeepSeek: A Turning Point or Another False Alarm?

The recent unveiling of China’s DeepSeek project has sent shockwaves through the global tech community and the U.S. stock market. With enormous potential value tied up in companies advancing artificial intelligence (AI), the ripple effects have been immediate and profound. Before we contribute to the contagion of fear and speculation, however, it is essential to step back and examine the situation in context.

A History of Hype and Misinterpretation

Major technological breakthroughs out of China should be met with a blend of excitement and skepticism. Just a few weeks ago, we witnessed a similar panic over quantum computing, when the world briefly believed researchers at Shanghai Jiao Tong University had built a quantum computer that could crack RSA encryption. That turned out to be a significant misinterpretation of the actual findings. While DeepSeek’s claims are undoubtedly impressive, there is a real chance that aspects of its capabilities or scale are being overstated or misunderstood. The history of tech advancements, particularly from geopolitical rivals, reminds us to critically evaluate groundbreaking announcements before fully committing to their implications.

The Democratization of Large Language Models

If DeepSeek is as revolutionary as it claims to be, and it really is 25X cheaper for a model that is "99% as good as OpenAI 4o", it could significantly accelerate the democratization of large language models. By lowering the costs associated with AI development and deployment, more companies—regardless of size—would have the ability to integrate advanced AI capabilities into their products and services. This could lead to increased innovation, new business models, and a more level playing field in the AI market, and end users would likely see an explosion of new LLM integrations in the products they use. On the other hand, while reduced costs may enhance accessibility, companies must also weigh the potential risks associated with data security, regulatory compliance, and the ethical implications of using AI technologies developed under different governance standards.

Is DeepSeek Riskier than ChatGPT or Microsoft Copilot?

When comparing a Chinese-based startup like DeepSeek to similar offerings from Microsoft, OpenAI, and Google, DeepSeek lacks the following:

  1. DeepSeek provides no assurance that a user’s data will not be used to train its AI models, and no way to opt out. This means data you enter may become available, with the appropriate prompts, to any user of the DeepSeek platform.
  2. DeepSeek does not provide data storage transparency or follow data localization and storage regulations for various jurisdictions. This means that any company seeking EU GDPR compliance, for example, could not store PII in DeepSeek and remain compliant.
  3. While it may in the future, DeepSeek has not achieved third-party security certifications such as SOC 2 or ISO 27001, nor does it offer compliance assurances for GDPR, CCPA, the NIST AI RMF, or other well-vetted security frameworks. This means that even if DeepSeek is not malicious in any way, it remains vulnerable to bad actors and cyber-attacks, as we have already seen occur.
  4. DeepSeek does not have clear data retention policies, so there is no way to know how long the data you’ve entered could remain vulnerable within its systems. Given this, it is fair to assume it will keep your data forever.
  5. For companies looking to comply with the EU AI Act, DeepSeek’s opaque model training makes it impossible to know how the models were trained, and therefore impossible to know whether you comply with the bias controls built into the world’s flagship AI regulation.

It's fair to say that as of today, largely because of its status as a startup, DeepSeek is not an enterprise-ready tool.

That said, even given these risks, DeepSeek may still be benign for some personal users, but it’s important to go into its use knowing the additional risks you’re taking. It should be assumed that any information entered into DeepSeek could be exploited in several ways. Chinese state-sponsored actors have a history of using fake social media accounts to spread disinformation, manipulate public opinion, and sow discord in Western societies. Additionally, the Chinese government has been known to track and intimidate dissidents through data collected from apps and digital platforms, targeting those advocating for Hong Kong democracy, Uyghur human rights, or Taiwanese independence. Even if you do not have a vested interest in these issues, the information you enter into DeepSeek could be used against you or your friends and family who do.

Espionage originating from China is rampant. Simply opening an app or entering seemingly harmless information can provide intelligence to state actors. You may not see yourself as part of a critical supply chain serving national interests, but even trivial data—such as knowing when pizzas are delivered to the Pentagon—has historically provided key insights into sensitive government operations. On a similar front, any information entered into DeepSeek could be used as part of social engineering campaigns. The ability to impersonate individuals or interests serves as a gateway for LinkedIn, text message, or email scams designed to extract further intelligence.

Corporate espionage is another well-documented concern. As a rule, never enter any sensitive intelligence from your company into services like DeepSeek. The risks of intellectual property theft, unauthorized data access, and competitive disadvantages far outweigh the potential benefits.

These threats are even more pronounced on services hosted on Chinese servers. Until thorough security assessments are conducted on open-source versions of DeepSeek, it is safest to assume they carry the same vulnerabilities.

Finally, DeepSeek does not have any known independent AI oversight. This means there are no built-in controls against bad actions in the future. Nothing stops it from selling your data, handing it to the Chinese government, or using it for its own questionable purposes.

Regardless of whether DeepSeek’s claims are fully substantiated, the risks associated with integrating such technology into your business operations are real. Companies and individuals must proactively address these concerns to safeguard sensitive and regulated data. While DeepSeek’s capabilities and pricing may make it an attractive option, organizations must tread carefully.

A Path Forward for Enterprises: Control Your Data

Nearly 75% of corporations today ask their employees not to use large language models as a matter of policy. At the same time, a majority of employees ignore these policies because of the significant advantages large language models bring, entering sensitive information into their personal ChatGPT accounts. The introduction of DeepSeek likely means more tools integrating generative AI into your SaaS ecosystem, and a reasonable chance your employees will subvert corporate policy and put your sensitive and regulated corporate data into services hosted in China. All of this means your security exposure can skyrocket almost overnight.

To mitigate these risks, you must enable the use of more trusted AI. To do this, we recommend implementing a robust AI governance program along the following lines.

A suite of generative AI detection tools has been released onto the market. These tools let you know which large language models are being introduced on your corporate infrastructure, and which SaaS applications are slipping generative AI into products you already use.

Once you understand which generative AI solutions are in use, you must begin the journey of safely enabling these tools. The first step is to understand what data you have: the structured and unstructured data, where it comes from, and whether it contains sensitive or regulated information.

Once you understand what data you have, you’re ready to classify it into shareable categories, based on business case or compliance policies. These classifications enable not only sharing but good information lifecycle management as well.
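The classification step can be sketched in a few lines of code. This is a minimal illustration, not a production classifier: the category names and regex patterns are hypothetical, and a real deployment would rely on dedicated DLP or data classification tooling rather than regexes alone.

```python
import re

# Hypothetical sharing categories; real labels would come from your
# compliance policies and data maps (e.g. for GDPR or CCPA).
PUBLIC, INTERNAL, RESTRICTED = "public", "internal", "restricted"

# Illustrative patterns only; production systems use vetted DLP engines.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def classify(text: str) -> str:
    """Assign a record to the most restrictive matching category."""
    if SSN_RE.search(text):
        return RESTRICTED   # regulated PII: never share with external AI
    if EMAIL_RE.search(text):
        return INTERNAL     # personal data: share only with vetted tools
    return PUBLIC           # no sensitive markers found

assert classify("Quarterly roadmap review notes") == "public"
assert classify("Contact jane.doe@example.com for access") == "internal"
assert classify("SSN 123-45-6789 on file") == "restricted"
```

The key design choice is that ambiguity resolves toward the most restrictive category, so unrecognized sensitive data fails safe rather than leaking into a shareable bucket.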

Now you are ready to share your well-governed information with the large language models in your ecosystem. To do this, put mechanisms in place to prevent sensitive customer or proprietary data from flowing into DeepSeek, and to minimize the data you expose to any AI at all. This process includes filtering sensitive and regulated data before it is fed into AI systems and applying appropriate security and access controls to any data that remains.
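The filtering mechanism described above can be sketched as a redaction gate that sits in front of every outbound LLM call. Again, this is a hedged sketch: the patterns, placeholder tokens, and `send_to_llm` stand-in are hypothetical, and a real gate would use a vetted DLP engine and your actual LLM client.

```python
import re

# Illustrative patterns for common regulated data types.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with placeholder tokens before the
    prompt leaves your infrastructure."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

def send_to_llm(prompt: str) -> str:
    # Stand-in for a real API call; only the redacted text is ever sent.
    safe_prompt = redact(prompt)
    return safe_prompt

print(send_to_llm("Summarize: card 4111 1111 1111 1111, owner bob@corp.com"))
```

Because redaction happens before the network call, this pattern works the same whether the destination is DeepSeek, ChatGPT, or an internally hosted model: the sensitive values simply never leave your environment.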

Even if you decide to use an open-source version of DeepSeek internally, these governance practices are essential. AI governance not only helps protect your organization but also positions you to leverage AI advancements responsibly and securely.

Conclusion

DeepSeek may represent a seismic shift in the AI landscape, or it could be another example of overhyped innovation. Either way, businesses must focus on securing their data and adopting governance practices that safeguard operations against potential risks. By taking a measured approach, you can remain competitive while protecting your most critical assets.
