Polyguard

Computer and Network Security

New York, New York · 264 followers

Don’t catch fraud. Stop it.

About us

Polyguard, the industry’s first real-time defense against deepfakes and AI-powered fraud, delivers proactive protection against call spoofing, hijacking, and impersonation for voice, video, and messaging, in the call center and beyond. Polyguard prevents next-generation fraud before it happens, giving financial institutions, businesses, and individuals full control over their personal and critical communications. Founded in 2024 by Joshua McKenty (former NASA Chief Cloud Architect and OpenStack co-founder) and Khadem Badiyan (mathematician and pioneer in AI analysis of image semantics), and headquartered in New York, Polyguard is redefining anti-fraud technology for the AI era. Learn more at www.polyguard.ai.

Website
https://polyguard.ai
Industry
Computer and Network Security
Company size
2-10 employees
Headquarters
New York, New York
Type
Privately held
Founded
2024
Specialties
identity verification, cybersecurity, deepfake defence, and anti-fraud

Locations

Polyguard employees

Updates

  • View Polyguard's organization page

    As we've been saying for the past couple of years, deepfakes cannot be defeated with detection, whether by humans or by other AI tools. https://lnkd.in/gZyPpqUK There are three simple reasons for this:

    1. Detection has never kept up with simulation (and it's not going to start now).
    2. Any breakthrough approach to detection can actually be WEAPONIZED to make better synthetic content, content that cannot be caught with that approach. (This is the basic mechanism of GANs, one of the building blocks of modern AI systems.)
    3. While detection-based efforts COULD have positive effects in reducing the impact of "social attacks" such as disinformation campaigns and non-consensual pornography, they're a poor fit for "targeted impersonation" such as financial scams or phishing. This is because detection inherently requires a choice between "false positives" and "false negatives", a choice that Facebook can make, but JPMorganChase ought not to.

    This is not what you're going to hear from "deepfake detection researchers", for obvious reasons.
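The adversarial feedback loop mentioned in point 2 can be illustrated with a deliberately simplified toy in NumPy. This is not a real GAN; the "detector" and "generator" below are invented single-number stand-ins, chosen only to show how any differentiable detection signal hands the forger exactly the gradient needed to evade it:

```python
import numpy as np

# Toy illustration: a "detector" scores samples by distance from the mean of
# real data; a "generator" treats that score as a loss and descends it until
# the detector can no longer separate real from fake. Any working detector
# becomes training signal for a better forgery.

rng = np.random.default_rng(0)
real = rng.normal(loc=5.0, scale=1.0, size=1000)  # the "real" distribution
real_mean = real.mean()

def detector_score(x, mu):
    # Higher score = "looks fake" (far from what real data looks like).
    return (x - mu) ** 2

fake = 0.0  # generator's starting output, obviously fake

for _ in range(200):
    # Gradient of the detector's score: d/dx (x - mu)^2 = 2 (x - mu)
    grad = 2 * (fake - real_mean)
    fake -= 0.05 * grad  # generator update: descend the detector's score

print(detector_score(fake, real_mean))  # shrinks toward 0: detector defeated
```

After a few hundred updates the "fake" sample is indistinguishable to this detector, which is the whole point: publishing a stronger detector just speeds up this loop.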

  • View Polyguard's organization page

    No matter how sophisticated your training programs are, or how carefully you've integrated anti-deepfake technology, every organization needs a crisis comms plan in place for the CERTAINTY that a member of its executive team will be maliciously impersonated on social media.

    View Grace Tan's profile

    Head of Corporate Communication, Regional Southeast Asia (SMT) at Visa | Financial Services | Strategic and Crisis Comms | Content Creator

    As generative AI continues to evolve, deepfake technology is rapidly becoming a real and present danger for businesses, especially in the B2B space. Forrester's latest report reveals that a staggering 80% of companies are underprepared for this emerging threat. Deepfakes (AI-generated audio-visual impersonations) pose significant risks to corporate reputation, operations, and trust. From impersonating executives to authorising fraudulent activities, these threats can cause long-lasting damage. And the numbers don't lie: only 20% of businesses have a crisis communication plan in place, leaving most vulnerable. For CMOs and marketing leaders, ask yourself how prepared you are: do you have countermeasures or AI-driven detection available? Whether you work in finance or not, the financial and reputational consequences of this risk are real. It's crucial to build stronger crisis response strategies and implement cutting-edge AI solutions to stay ahead of the curve on behalf of our employees, our customers, and our brands. #AI #Cybersecurity #Deepfakes

  • View Polyguard's organization page

    For over a year we've been warning of the risk that deepfakes pose to evidence gathering and the legal process; that warning has begun to be echoed within the discipline.

    View Everlaw's organization page

    20,976 followers

    Seeing is no longer believing! The legal industry is facing a big problem with #deepfakes. Courtrooms are not yet flooded with a tsunami of deepfake evidence, but with this AI-generated technology playing with great success on social media and in fraud schemes, it's only a matter of time before deepfakes regularly drop into the exhibit list. "Deepfakes force us to confront an uncomfortable truth: Seeing is no longer believing. As forensic experts, we're not just authenticating evidence, we're trying to safeguard the integrity of the justice system in an era where digital manipulation can rewrite reality," says Jerry Bui. Read more in 'The End of Reality? How to combat deepfakes in our legal system' by Chuck Kellner in the ABA Journal: https://lnkd.in/g8ahJ8yf #AI #GenAI #LegalTech

  • View Polyguard's organization page

    As highlighted by Tech Radar's Craig Hale, workers are increasingly concerned about their exposure to AI-powered attacks. And well they should be, as their employers are still framing this as a problem for training and hyper-vigilance. No one can be trained to spot deepfakes. The sooner we embrace this, the sooner we can focus on more effective solutions. https://lnkd.in/g6Av3C2u

  • View Polyguard's organization page

    My friend James Bayer pointed this attack out to me this morning. While it looks a bit like the "xz" attack on the surface, it's actually an impersonation attack, intended to "frame and defame" the supposed attacker. GitHub, like most social networks, does not verify identity, which makes legitimate-looking or look-alike usernames as easy to spoof as... well, your phone number. Stay safe out there. https://lnkd.in/d2dftEeS

  • View Polyguard's organization page

    The FBI continues to provide timely warnings to the public about the dangers of AI-powered scams. Their latest PSA https://lnkd.in/eKpBzv4g has a ton of great tips... and one unfortunate error. If you receive a call from someone claiming to be your bank or credit card provider, hang up. But don't go and search the internet for their number to call back - we've seen a number of cases where Google and other search engines have recommended a scam number! Instead, dial the number on the back of your credit card, or look up the customer service line within your mobile banking app.

  • Polyguard reposted this

    View Vivek Ramaswami's profile

    Partner at Madrona

    What do the Indian elections, Taylor Swift, and the British engineering firm Arup have in common? They have all been recent targets of #deepfakes. The rise of #AI and advancements in Generative Adversarial Networks (GANs) have generated a tidal wave of deepfakes across the political, corporate, entertainment, and consumer worlds. Face swapping, voice synthesis, and body and object manipulation are becoming more common as AI makes it more difficult to distinguish real from fake. Luckily, a number of new startups are cropping up to detect and combat deepfakes, including Reality Defender, IdentifAI, Clarity, Polyguard.ai, and others. With the upcoming US elections, increasing consumer theft, and large-scale corporate frauds happening every month, the deepfake problem has never been so important. Sabrina Wu and I write more on this topic below. Let us know your thoughts! #security #artificialintelligence #startups #elections https://lnkd.in/gnu7PQnk

  • View Polyguard's organization page

    We're building software to keep you safe online, and occasionally we'll share notes on how that process is going. Today, let's talk about your data.

    There are two tough topics that naturally come up when you're grappling with deepfakes, particularly with AI-powered fraud: privacy, and proof-of-identity. We think you can have both. I'm sure we'll come up with some polished marketing terms for how we're doing this later (once we hire some marketing folks). But here's the engineer's explanation:

    First, we assume that we're going to be subpoenaed, by every government, and in every jurisdiction. So we simply don't collect any data that we would be uncomfortable surrendering. We tokenize EVERYTHING that's personally identifiable, *before* it leaves your local machine, and we avoid reusing tokens whenever possible to avoid leaking "correlation" of metadata. We've nicknamed this "subpoena-first design".

    While this first principle shares a common ethos with our compatriots in the decentralized / web3 community, our second principle is not only antithetical to them, it's actually impossible in their universe: all of our online data storage is ephemeral by default. And we're not talking "Google press release" ephemeral (with a lifetime of 1-3 years); we're talking a default lifetime that's the length of your current Zoom call. It's like Snapchat, but for every database row.

    There are a ton of important decisions to make when you're building security software. We're starting with a novel attitude shift.
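The two principles in the post, single-use local tokenization and ephemeral-by-default storage, can be sketched in a few lines of Python. Everything here (`LocalTokenizer`, `EphemeralStore`, the TTL values) is a hypothetical illustration of the described design, not Polyguard's actual implementation:

```python
import secrets
import time

class LocalTokenizer:
    """Replace PII with single-use random tokens before anything leaves the
    local machine. A fresh token is minted on every call, so two server-side
    records about the same person cannot be correlated."""
    def __init__(self):
        self._vault = {}  # token -> original value, kept only locally

    def tokenize(self, value: str) -> str:
        token = secrets.token_urlsafe(16)  # never reused
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]

class EphemeralStore:
    """Server-side storage that is ephemeral by default: every row carries a
    TTL (here, roughly the length of the current call) and then vanishes."""
    def __init__(self, default_ttl_s: float):
        self.default_ttl_s = default_ttl_s
        self._rows = {}  # key -> (expires_at, value)

    def put(self, key: str, value: str) -> None:
        self._rows[key] = (time.monotonic() + self.default_ttl_s, value)

    def get(self, key: str):
        expires_at, value = self._rows.get(key, (0.0, None))
        return value if time.monotonic() < expires_at else None

tok = LocalTokenizer()
store = EphemeralStore(default_ttl_s=0.05)  # tiny TTL for demonstration

t1 = tok.tokenize("alice@example.com")
t2 = tok.tokenize("alice@example.com")  # same PII, different token
store.put("caller", t1)
during_call = store.get("caller")       # token is available during the "call"
time.sleep(0.1)
after_call = store.get("caller")        # row has expired: returns None
```

A subpoena against this store surrenders only opaque, uncorrelatable tokens, and only those whose TTL has not yet elapsed, which is the "subpoena-first" property the post describes.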

Similar pages

View jobs