HiddenLayer

Computer and Network Security

Austin, TX · 10,270 followers

The Ultimate Security for AI Platform

About us

HiddenLayer is the leading provider of Security for AI. Its security platform helps enterprises safeguard the machine learning models behind their most important products. HiddenLayer is the only company to offer turnkey security for AI that does not add unnecessary complexity to models and does not require access to raw data and algorithms. Founded by a team with deep roots in security and ML, HiddenLayer aims to protect enterprises’ AI from inference, bypass, and extraction attacks, as well as model theft. The company is backed by a group of strategic investors, including M12 (Microsoft’s Venture Fund), Moore Strategic Ventures, Booz Allen Ventures, IBM Ventures, and Capital One Ventures.

Website
https://hiddenlayer.com/
Industry
Computer and Network Security
Company size
51-200 employees
Headquarters
Austin, TX
Type
Privately Held
Founded
2022
Specialties
Security for AI, Cyber Security, Gen AI Security, Adversarial ML Training, AI Detection & Response, Prompt Injection Security, PII Leakage Protection, Model Tampering Protection, Data Poisoning Security, AI Model Scanning, AI Threat Research, and AI Red Teaming

Locations

Employees at HiddenLayer

Updates

  • HiddenLayer has been named a Gartner® Cool Vendor in AI Security! To us, this recognition underscores the growing importance of securing AI systems in an era where they are increasingly woven into the fabric of industries worldwide. We are proud of this recognition, as Gartner evaluates vendors based on their innovation, transformative potential, and real-world impact. We believe being named a Gartner Cool Vendor reaffirms that our approach and solutions are on the right path. A huge thank you to our team, partners, and clients for supporting this journey. We’re committed to continuing our mission of making AI security a priority.
    Download the report here: https://lnkd.in/gMujJU5b
    Read our press release here: https://lnkd.in/grkFGaFZ
    #AIsafety #AIsecurity #GartnerCoolVendor #Cybersecurity #AIresilience #Innovation #HiddenLayer #Gartner #AITRiSM

  • Introducing the HiddenLayer Innovation Hub
    We’re thrilled to launch the HiddenLayer Innovation Hub, a new space dedicated to advancing security for AI knowledge: your one-stop destination for cutting-edge security for AI research, reports, insights, and more. Discover a curated collection of resources built to empower, inform, and inspire the security for AI community.
    Explore on your own today: https://lnkd.in/gBgaCxJy
    #SecurityforAI #AISecurity #InnovationHub #HiddenLayer

  • We are proud to announce Automated Red Teaming for AI. The addition of this new product extends HiddenLayer’s AISec platform capabilities to include Automated Red Teaming, Model Scanning, and GenAI Detection & Response – all under one platform. This innovative solution provides fast, reliable protection for AI deployments, helping businesses safeguard sensitive data and intellectual property and prevent malicious manipulation of AI models.
    Key Benefits of Automated Red Teaming:
    - Continuous Testing: Regular and ad hoc scans ensure new vulnerabilities are promptly identified and addressed.
    - Scalability: Easily scale testing across growing AI infrastructures and complex models.
    - Actionable Insights: Track security metrics, measure progress, and foster collaboration between red and blue teams.
    - Cost and Time Efficiency: Reduce labor costs and accelerate detection, empowering your team to focus on critical tasks.
    Discover how Automated Red Teaming for AI can transform your security strategy.
    Learn more today: https://lnkd.in/gsBjVzx6
    Read our press release here: https://lnkd.in/gAYjDQix
    #AIRedTeaming #AI #GenAI #RedTeaming #Product #AutomatedRedTeaming

  • We’re excited to partner with the OWASP® Foundation in unveiling the 2025 Top 10 Risks for Large Language Models (LLMs) and supporting their inaugural sponsorship program! This initiative reflects our shared commitment to advancing AI adoption through research, guidance, and education to secure AI technologies. As generative AI and LLMs are adopted across industries, it’s critical to address their unique security challenges. The 2025 Top 10 Risks provides actionable insights for proactive risk management, equipping organizations to build secure, trusted AI systems. By aligning with OWASP, we’re advancing transparency, community collaboration, and expertise to shape a safer AI future. Together, we’re building trust, mitigating risks, and ensuring the responsible development of these powerful tools.
    Read more about the initiative: https://lnkd.in/gq6RdmjU
    #AIsecurity #SecurityForAI #OWASP #LLMRisks #GenerativeAI #Cybersecurity

    OWASP Reveals Updated 2025 Top 10 Risks for LLMs, Announces New LLM Project Sponsorship Program and Inaugural Sponsors - OWASP Top 10 for LLM & Generative AI Security

    genai.owasp.org

  • HiddenLayer has been recognized in Fast Company’s 2024 Next Big Things in Tech Awards in the Security and Privacy category! Fast Company evaluated over 1,300 submissions, selecting just 138 honorees driving innovation and impact. This award validates our commitment to securing AI systems against rapidly evolving threats. Across sectors from finance to critical infrastructure, organizations trust HiddenLayer to protect their AI investments and build user confidence. Thank you to our team and supporters who believe in our mission to make security for AI safer, smarter, and stronger.
    Read the full article here: https://lnkd.in/gtqp6RSe
    Read our press release here: https://lnkd.in/g9_hBj7C
    #FCTechAwards #AIsecurity #SecurityForAI #FastCompany #NextBigThingsInTech #Innovation

  • Between the Layers: DHS Releases Security for AI Framework for Critical Infrastructure
    The Department of Homeland Security (DHS) has introduced the “Roles and Responsibilities Framework for AI in Critical Infrastructure”, a first-of-its-kind guide crafted with input from industry leaders, public sector experts, and civil society. This framework offers essential, actionable guidance for secure AI deployment across vital U.S. infrastructure like energy, water, and digital networks. Recognizing the complex vulnerabilities AI can introduce, the framework provides targeted recommendations for all key players:
    - Cloud Providers: Secure environments and monitor for threats.
    - AI Developers: Use Secure by Design principles and test for biases.
    - Infrastructure Operators: Maintain comprehensive cybersecurity.
    - Civil Society & Public Sector: Drive standards, research, and cross-sector collaboration for AI safety.
    The framework’s goal is clear: to ensure AI strengthens critical services while safeguarding security, privacy, and civil liberties. For more information on the “Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure,” please visit: https://lnkd.in/gCbcDqVp
    This post is part of our Between the Layers series. Tune in weekly as we share industry insight and thought leadership topics on #Security4AI.
    #AI #AIFramework #AIRegulation #AIPolicy #GenAI #LLM #AIGovernance

  • We’re proud to announce that SC Media has named HiddenLayer’s Marta Janus a Woman to Watch in Women in IT Security. After she earned a master’s degree in archaeology, a malware infection on her computer drove her to learn operating systems, network security, and reverse engineering. That early hands-on experience laid the groundwork for her current role as a principal researcher at HiddenLayer, where she focuses on defending AI-based systems from adversarial threats. Marta has also been a significant contributor to our Synaptic Adversarial Intelligence (SAI) team, helping to shape the innovations that earned HiddenLayer an Innovation Sandbox award at RSA Conference 2023. With over a decade in the field and more than 30 industry publications, she is a recognized leader in both adversarial machine learning and threat intelligence. Congratulations, Marta, on this well-deserved recognition. You set an example for us all, and we’re honored to have you on our team.
    You can read the whole article here: https://lnkd.in/gxmRj4cM
    #Cybersecurity #WomenInIT #AI #WomenInSTEM #AIResearcher

  • Join us tomorrow for an informative session on securing Gen AI workloads for financial analysts. Todd Cramer from Intel Corporation, Hiep Dang from HiddenLayer, and Darren Oberst from LLMWare (by Ai Bloks) will explore how Intel Core Ultra technology is helping to make AI PCs a secure choice for finance.
    If you’re interested in better understanding secure AI in finance, register now: https://lnkd.in/g_uaYypC

  • Explore the Future of Security for AI with Automated Red Teaming
    Curious about how automated red teaming can reshape your AI security strategy? Join us for a deep dive into one of the most advanced techniques for securing AI systems against emerging threats.
    In this HiddenLayer webinar, we’ll cover:
    - What Automated Red Teaming is
    - Why it’s essential for safeguarding AI environments
    - How HiddenLayer’s unique approach sets a new standard in AI defense
    Meet the Experts:
    - Malcolm Harkins – Chief Trust and Security Officer
    - Jason Martin – Principal AI Security Researcher
    - Travis Smith – VP of ML Threat Operations
    - Abigail Maines – Chief Revenue Officer
    Following the discussion, stick around for a Q&A session to get tailored insights from our panel. This session is perfect for both newcomers and experienced professionals looking to elevate their cybersecurity strategies with cutting-edge AI techniques. Don’t miss out on this opportunity to learn, engage, and enhance your defenses.
    Reserve your spot now: https://lnkd.in/gPwunb5g
    #AIThreatManagement #RedTeaming #AutomatedSecurity #Cybersecurity #HiddenLayerWebinar

  • In his latest blog, shared by RSA Conference, Malcolm Harkins, our Chief Security and Trust Officer at HiddenLayer and a CoSAI Project Governing Board Member, dives into the unique risks AI faces: data poisoning, adversarial attacks, and model inversion. Traditional cybersecurity controls like firewalls and encryption, while essential, weren’t built with these AI-specific threats in mind. That’s why the Coalition for Secure AI is leading the way with a security-by-design approach, developing frameworks that address these specialized vulnerabilities head-on. By bringing together insights from cybersecurity, industry, and academia, CoSAI empowers organizations to safeguard their AI investments proactively. We are proud to see our team and others in the space shaping the future of security for AI.
    Read the full blog here: https://lnkd.in/gmvmY9RZ
    #COSAI #SecurityForAI #AISecurity #PromptInjection #BackDoor #DataPoisoning #OASIS

    OASIS · 12,505 followers

    As AI’s influence grows across sectors, its unique vulnerabilities make it a prime target for cyberattacks - threats that traditional #cybersecurity controls simply can’t mitigate on their own. Standard defenses like firewalls, encryption, and intrusion detection remain critical, but they don’t account for the unique risks in AI, such as data poisoning, adversarial attacks, and model inversion. #CoSAI (Coalition for Secure AI) is tackling this gap with a security-by-design approach for #AI, creating frameworks to address these specialized threats. By collaborating across cybersecurity, industry, and academia, CoSAI empowers organizations to proactively safeguard their AI investments. Learn more about AI’s evolving threat landscape and how CoSAI is shaping new standards for #AIsecurity in the latest RSA Conference blog by Malcolm Harkins, Chief Security and Trust Officer at HiddenLayer and CoSAI Project Governing Board Member: https://lnkd.in/eZC2Jv4u

    Traditional Cybersecurity Controls DO NOT STOP Attacks Against AI

    prod-cd3.rsaconference.com

Similar pages

Funding