Wald.ai: The Future of Secure AI Conversations

The Risks of Enterprise Chatbot Use

As more companies adopt AI tools, especially LLM-based chatbots, they put themselves at the mercy of their employees. Conversations with these AI assistants are insecure. Even with company investment in data privacy and security training, employees can slip up and include sensitive or private information in AI prompts, sending it out to a large language model owned by another company.

When you send out personally identifiable information (PII) to one of these LLMs, it's theirs to keep. OpenAI states in its privacy policy: "We collect personal data that you provide in the input to our services, including your prompts and other content you upload, such as files, images, and audio, depending on the features you use." Anthropic, maker of Claude, says that "if you include personal data in your inputs (prompts), we will collect that information and this information may be reproduced in your outputs (responses)." Google also "collects your Gemini Apps conversations, related product usage information, info about your location, and your feedback."

These data policies create a huge risk for companies whose employees regularly interact with these AI models. When sensitive data gets leaked to LLMs, these AI companies store that data for a variety of purposes. Their employees have access to your data, and oftentimes your data will be used to train their models unless you specifically opt out or pay for specific enterprise plans. Most importantly, this leakage creates another point of security failure: if, and when, the AI companies get hacked, your data (and your customers' data) will be vulnerable.

According to the annual Cost of a Data Breach Report from IBM, the global average cost of a data breach in 2024 is $4.88M, "a 10% increase over last year and the highest total ever." A breach on their end could mean huge losses on your end: financial, reputational, and intellectual-property losses alike.

So, what if there was a way to prevent the possibility of LLMs getting access to sensitive or private information at all?

How Wald.ai Works and What It Does

Wald.ai is a B2B SaaS platform that gives users access to all the world's leading AI assistants while ensuring none of their confidential data ever leaks outside of their organization.

Wald.ai gives companies a secure platform to allow their employees to interact safely with LLMs. When you enter Wald, you can select which model you would like to use from a comprehensive list. Then, you type in a prompt just like you're using a normal chatbot. This is where Wald adds a bit of magic.

Let's say you're a doctor who uses AI for help diagnosing and treating patients. You're treating a patient named Robert Vasquez and want to call on AI to hear another point of view. Using a regular chatbot, a prompt of this nature is dangerous: you must be careful to manually omit any PII or sensitive data. From the hospital's standpoint, relying on the mindfulness of the user as the primary line of defense is highly error-prone.

With Wald, however, your prompts get automatically sanitized of PII and sensitive data before they get sent out to the AI model.

Sanitizing process

In this case, Wald.ai sanitizes the prompt so that the patient's name and age are redacted, making the prompt safe to send out to GPT-4o, Claude, Gemini, Llama, and others without compromising doctor-patient confidentiality or HIPAA compliance. For all intents and purposes, the sanitized prompt will hold the same meaning and get you a meaningful response, as PII and sensitive information are usually not crucial to the meat of the prompt.

Prompt with redacted PII

Then, when Wald receives the response from the AI model based on the sanitized prompt, it intelligently re-inserts the redacted information so that the response makes sense in the full context of your original, unsanitized prompt.
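The redact-and-rehydrate round trip described above can be sketched in a few lines. This is purely illustrative: Wald.ai's actual implementation is proprietary (and presumably uses ML-based entity detection rather than a known-values lookup), but the sketch shows the core idea of mapping sensitive values to placeholder tokens before the model call and restoring them afterward.

```python
# Minimal sketch of prompt sanitization and response re-identification.
# Assumption: PII values are already known; real systems would detect
# them automatically with named-entity recognition.

def sanitize(prompt: str, pii_values: dict[str, str]) -> tuple[str, dict[str, str]]:
    """Replace each known PII value with a placeholder token.

    Returns the sanitized prompt and a token-to-value mapping
    used later to rehydrate the model's response.
    """
    mapping: dict[str, str] = {}
    sanitized = prompt
    for i, (label, value) in enumerate(pii_values.items()):
        token = f"[{label.upper()}_{i}]"
        sanitized = sanitized.replace(value, token)
        mapping[token] = value
    return sanitized, mapping


def rehydrate(response: str, mapping: dict[str, str]) -> str:
    """Re-insert the original values wherever the model echoed a token."""
    for token, value in mapping.items():
        response = response.replace(token, value)
    return response


prompt = "My patient Robert Vasquez, age 54, reports chest pain."
sanitized, mapping = sanitize(prompt, {"name": "Robert Vasquez", "age": "54"})
# sanitized: "My patient [NAME_0], age [AGE_1], reports chest pain."
# Only the sanitized prompt ever leaves the organization; the mapping
# stays local and is applied to the model's reply on the way back.
```

The key design point is that the token-to-value mapping never leaves the secure boundary, so the upstream LLM only ever sees placeholders.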

Who Is Wald.ai For?

Wald aims to serve SMEs in the U.S. in verticals that routinely deal with sensitive data. Initially, this will include Fintech, Technology, Healthcare, and Law firms. As one can extrapolate, though, many other verticals could benefit from this technology as well.

Who Else Is In This Space?

Other startups are trying to serve this market as well, like Private AI and Tonic.ai. Private AI offers a similar chat interface product called PrivateGPT, with a similar prompt de-identification and response re-identification methodology.

Private AI web demo

Tonic's product offering is different in that it offers a way for companies to create synthetic copies of their production data for staging and QA environments. With Tonic's database offerings, the PII in your data isn't just getting redacted, it's getting filled in with synthetic substitutes. Tonic also offers its 'Textual' product, which automatically identifies entities in text data to prevent potential privacy vulnerabilities, specifically for companies using data for internal AI development. The underlying technology is similar.

Tonic.ai web demo

Each implementation is slightly different, though, and each company augments the core product with different additional features.

Where Wald.ai Stands Apart

Wald offers more comprehensive document querying, allowing users to chat while drawing from internal documents for context and information without leaking any sensitive information from those documents to the AI model APIs.

Additionally, Wald supports creation of highly-tailored knowledge assistants that can be shared across users. Instead of always using the default general, all-purpose chatbot, users can build custom instances based on documents from multiple sources to use as context and information for each response.

For example, software teams in a company each working on a specific project can create their own custom assistant based on the project's PRD, design files, and technical architecture plan to get the best, most detailed responses.

Wald gives managers, IT heads, and department leads a comprehensive admin dashboard for overseeing usage statistics such as the number of requests being made and how many of them contained sensitive data (and were successfully kept secure).

With these features, Wald.ai makes a strong case for offering the most comprehensive product on the market for businesses that want a full-fledged corporate solution for their employees.

Next Steps for Wald.ai

Currently, Wald sells its product using a one-tier pricing model: $19.99 per user, per month. This could easily be expanded upon, potentially with higher tiers offering greater access to premium models (GPT-4o, o1-mini, Claude Opus, etc.) and lower tiers limited to standard models (GPT-4, Claude Sonnet, Gemini 1.5, etc.). As many B2B SaaS companies do, Wald could also look to offer larger-scale, batch enterprise contracts, although that may be farther down the line.

To provide a better experience, Wald could look to offer greater flexibility and customization options for its sanitization. For example, Private AI allows users to configure PrivateGPT extensively, including toggling the sanitization of certain entities on/off.
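Per-entity toggles of this kind might look something like the following. This is a hypothetical sketch of how such configuration could be exposed; the class and field names are invented for illustration and are not Wald.ai's or Private AI's actual API.

```python
# Hypothetical sanitization config: per-entity redaction toggles.
# All names here are illustrative, not any vendor's real interface.
from dataclasses import dataclass


@dataclass
class SanitizationConfig:
    redact_names: bool = True
    redact_ages: bool = True
    redact_locations: bool = True
    redact_dates: bool = False  # e.g. keep dates when they carry clinical meaning

    def enabled_entities(self) -> list[str]:
        """List the entity types currently being redacted."""
        return [attr.removeprefix("redact_")
                for attr, enabled in vars(self).items() if enabled]


# A user could relax redaction for entity types their use case needs:
config = SanitizationConfig(redact_locations=False)
print(config.enabled_entities())  # -> ['names', 'ages']
```

Exposing toggles like these lets teams trade off privacy strictness against prompt fidelity on a per-entity basis.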

As an additional offering for individuals or smaller organizations who don't need a full enterprise solution, Wald could offer a browser extension that works on top of existing chatbots. Rather than going through wald.ai, people could simply use ChatGPT, Claude, and other models like they always have, giving users secure interactions without requiring a full platform switch.

For its initial go-to-market strategy, Wald is sensibly targeting the finance, tech, healthcare, and legal verticals for a B2B play. Once it achieves strong product-market fit there, however, Wald may have a big opportunity to go B2G and serve the needs of government offices. Teams and departments in the public sector are generally slow to adopt new tech, and privacy concerns may be slowing their AI adoption even further. The often clerical and manual nature of government work, though, makes the space a prime candidate for AI, and Wald could help lower the barrier to adoption.

Conclusion

As companies adopt AI tools, securing sensitive data is no longer optional. Wald.ai positions itself at the forefront of this critical challenge. Their innovative approach, combined with a clear focus on regulated industries, makes them a standout choice for funding. By addressing a pressing need and delivering a robust solution, Wald.ai is well-poised to secure its place in the AI-driven enterprise landscape.