Safer, Built by Thorn

Software Development

El Segundo, California · 1,735 followers

Proactive child sexual abuse material (CSAM) detection built by experts in child safety technology

About us

Safer was built by Thorn to fill the need for a solution that could adequately tackle child sexual abuse material (CSAM) and online child sexual exploitation (CSE). With Safer, any platform with an upload button can access industry-leading tools for proactive CSAM and CSE detection. Safer detects verified CSAM using hash matching, detects novel image and video CSAM using machine learning (ML) classification models, and predicts potential text-based harms that include or could lead to child exploitation, such as sextortion, discussions of CSAM, and more. Platforms don't have to tackle this issue alone. We can take meaningful action together. With a relentless focus on CSAM and CSE detection strategies via state-of-the-art AI/ML models, proprietary research, and cutting-edge detection solutions, Safer enables digital platforms to create safer user experiences.
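
A rough sketch of the two-stage detection flow described above, hash matching for verified CSAM followed by ML classification for novel content, might look like the following. This is a minimal illustration under assumed names (KNOWN_CSAM_HASHES, classify, review_upload), not Safer's actual API; production systems typically pair cryptographic hashes such as MD5/SHA-1 with perceptual hashes so that visually similar files also match.

```python
import hashlib

# Illustrative stand-in for a database of verified CSAM hashes;
# real deployments match against millions of vetted entries.
KNOWN_CSAM_HASHES = {
    "placeholder-digest-1",
    "placeholder-digest-2",
}

def classify(content: bytes) -> float:
    """Stand-in for an ML classifier scoring the likelihood that
    an upload is novel (not-yet-hashed) CSAM."""
    return 0.0  # placeholder score

def review_upload(content: bytes, threshold: float = 0.9) -> str:
    """Route an upload: hash match first, classifier second."""
    digest = hashlib.md5(content).hexdigest()
    if digest in KNOWN_CSAM_HASHES:
        return "hash match: verified CSAM, queue for action and reporting"
    if classify(content) >= threshold:
        return "classifier flag: potential novel CSAM, queue for human review"
    return "clear"

print(review_upload(b"example upload bytes"))
```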

Website
https://bit.ly/3BNWroZ
Industry
Software Development
Company size
51-200 employees
Headquarters
El Segundo, California
Founded
2019
Specialties
Child safety, Platform safety, Online safety, Content identification, Image identification, Video identification, and Content moderation

Posts

  • From social work to social networks, Jerrel P. shares his unexpected journey into Trust & Safety, and why soft skills (and skepticism) matter just as much as technical ones. Spotify's Director of Content Policy sat down with host John Starr for an insightful episode in Safe Space's first season. Get ready for Safe Space's upcoming season two by revisiting Jerrel's take on:

    • How his background in social work shaped his approach to content policy
    • The misconceptions about Trust & Safety in tech
    • The balancing act between business goals and platform integrity
    • Why Trust & Safety isn't a silo, and how it's a cornerstone of success

    "So much research shows that, when folks are experiencing abuse online, when they witness it, they are less likely to log in…less likely to do all the things that the platforms want them to do… it's a business imperative to do trust and safety well." Listen to the full episode linked below in the comments.

  • Deepfake technology is evolving at an alarming rate, lowering the barrier for bad actors to create hyper-realistic explicit images in seconds, with no technical expertise required. Thorn's latest research reveals a troubling reality:

    • 1 in 8 teens personally knows someone who has been targeted with deepfake nudes.
    • 84% of teens believe deepfake nudes are harmful, citing emotional distress, reputational damage, and deception.
    • Misconceptions persist, however: 16% still believe this content is not harmful, often reasoning that because it is "not real," it is not a serious issue.

    For Trust & Safety teams, this is a critical moment to act. The responsibility goes beyond content detection. It requires a proactive approach to abuse prevention, product design, and policy enforcement. Read our latest blog to learn how platforms can take action. https://lnkd.in/eTU57Yhi

  • Safer, Built by Thorn reposted this

    Thrilled to announce that Dr. Rebecca Portnoff from Thorn will be speaking at our upcoming Safety by Design: A Proactive Approach to Online Safety gathering in NYC on April 23rd! Rebecca is a trailblazing leader in child safety who was recognized on MIT Technology Review's 2024 list of 35 Innovators Under 35. Child safety panelists from Google and Meta will be announced shortly. The conversation will be moderated by our Associate Director (and Trust & Safety lead) Sandra Khalil. Invitations will be going out later this week, so apply now if you'd like to participate. You can apply here: https://lnkd.in/ei3SsAX2 This is a curated gathering for individuals across civil society, government, industry, and academia committed to a safer online experience for all. Our gatherings surface important values, tensions, tradeoffs, and best practices as we collectively work towards a better tech future. #AllTechIsHuman #ResponsibleTech #SafetybyDesign

  • Safe Space is now available wherever you listen to podcasts! Our trust and safety podcast series, hosted by our very own John Starr, has until now only been available on YouTube, but you can now catch every episode on your favorite podcast platform. If you're new to Safe Space, we focus on the human side of trust and safety, featuring the leaders, thinkers, and builders working to protect online communities. Kick off with our inaugural episode featuring Yoel Roth, Head of Trust & Safety at Match Group. He shares his insights on everything from the rewards of this line of work to the industry shifts we should be focusing on.

  • Is your platform ready for new legislation targeting deepfake nudes and nonconsensual intimate images? AI-generated deepfakes are amplifying nonconsensual intimate image abuse, creating urgent new challenges for platforms. Our research reveals this threat to child safety is real: 1 in 8 teens already knows someone targeted by deepfake nudes. The Take It Down Act, recently passed by the Senate, would mandate:

    • A "notice and removal" process for taking down non-consensual intimate visual depictions
    • Removal of reported non-consensual intimate content within 48 hours
    • Criminal penalties for those distributing intimate images without consent

    Is your team ready for these potential changes? Our latest blog breaks down exactly what trust and safety teams need to know and how to prepare proactively.
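
As a back-of-the-envelope illustration of the 48-hour requirement above, a takedown queue needs little more than a deadline per report. The sketch below assumes hypothetical names (removal_deadline, is_overdue) and UTC timestamps; it is not drawn from the bill's text or any product.

```python
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=48)  # window cited in the Take It Down Act

def removal_deadline(reported_at: datetime) -> datetime:
    """Latest time by which reported content must be removed."""
    return reported_at + REMOVAL_WINDOW

def is_overdue(reported_at: datetime, now: datetime) -> bool:
    """True once the removal window for a report has lapsed."""
    return now > removal_deadline(reported_at)

report_time = datetime(2025, 3, 1, 9, 0, tzinfo=timezone.utc)
print(removal_deadline(report_time).isoformat())  # 2025-03-03T09:00:00+00:00
print(is_overdue(report_time, datetime.now(timezone.utc)))
```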

  • GIPHY reaches more than a billion users every day—serving more than 10 billion pieces of short-form content. As a widely used content-sharing platform, keeping users safe from child sexual abuse material (CSAM) is a top priority—but trust and safety teams can’t do it alone. In 2019, GIPHY’s proactive uploader application process helped prevent CSAM from appearing in search results. But risk still remained in private channels, where harmful content could be uploaded and shared out of public view. To stay ahead, GIPHY turned to Thorn for proactive detection—deploying Safer’s hash-matching service and CSAM classifier. Now, every new GIF—millions per month—undergoes automated review, ensuring a safer space for users everywhere.
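
In pipeline terms, the GIPHY deployment amounts to passing every new upload through an automated scan before it circulates, in public and private channels alike. A minimal sketch, assuming a hypothetical scan_upload wrapper around hash matching and classification (not GIPHY's or Safer's actual integration):

```python
from typing import Iterable, List

def scan_upload(gif_bytes: bytes) -> bool:
    """Hypothetical wrapper around a hash-matching service and CSAM
    classifier; returns True if the upload should be held for review."""
    return False  # placeholder decision

def review_new_uploads(uploads: Iterable[bytes]) -> List[int]:
    """Scan every new upload and return the indexes of flagged items."""
    return [i for i, gif in enumerate(uploads) if scan_upload(gif)]

flagged = review_new_uploads([b"gif-1", b"gif-2", b"gif-3"])
print(f"{len(flagged)} uploads held for review")
```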

  • The UK leads with landmark legislation targeting AI-generated CSAM while California proposes new chatbot safety measures for kids. More updates on the regulatory landscape for child safety, plus top lines with links and summaries for the latest child safety headlines:

    • Social Media: Lennon Torres speaks out on Meta's content moderation changes, a report finds low teen trust in Big Tech, and the Kids Off Social Media Act is reintroduced
    • Generative AI: Deepfake apps risk normalizing non-consensual content creation, and a teen's perspective on the societal impact of gen AI
    • AI Safety: Experts warn of rising risks as capabilities advance, and AI chatbots are weaponized for stalking

    The Digital Defender newsletter gathers the trust & safety news you should know into bite-size servings. Each month, we gather top headlines and give you a quick summary of the most consequential stories impacting online child safety. Thanks for reading and for being a champion for child safety! #trustandsafety #childsafety

  • Bad actors don't lurk in the shadows. They exploit everyday platform features to find, engage, and manipulate children online. From using location-sharing to identify victims to leveraging fake accounts and direct messaging to build trust, their tactics are calculated and evolving. With the rise of AI-generated content, the risks are only accelerating. Now more than ever, platforms must take action:

    • Review platform features through the lens of child safety.
    • Strengthen reporting tools that kids rely on.
    • Understand youth experiences to build effective safeguards.

    By prioritizing proactive security measures, we can create a digital world where children are safe, not preyed upon.

  • Safer, Built by Thorn reposted this

    Hive · 18,326 followers

    We are thrilled to introduce the speaker lineup for one of Hive's panels at the #TSSummit, "Harnessing AI to Detect Unknown CSAM: Innovations, Challenges, and the Path Forward." Hive's CEO, Kevin Guo, will be joined by experts in digital child safety and Hive partners Amanda H. Volz (Vice President, Global Customers and Strategic Partnerships, Thorn) and Derek Ray-Hill (Interim CEO, Internet Watch Foundation (IWF)). For years, hashing technology has helped platforms detect and remove known child sexual abuse material (CSAM), but the challenge of identifying new, unknown CSAM has persisted. Recent advancements in AI are transforming child safety efforts, enabling platforms to proactively detect previously unreported CSAM at scale. Join us at the Trust & Safety Summit, March 25-26 in London: https://lnkd.in/gsbMkQKn Learn more about Hive's partnerships with Thorn (https://lnkd.in/gX4D-XRF) and the Internet Watch Foundation (IWF) (https://lnkd.in/gYcrfJVy) #TSSummit #OnlineSafety #DigitalTrust #TrustAndSafety #SafeOnline #AIModeration #ContentModeration #BrandProtection #AI
