Boost your content moderation with these 3 smart automations:
1. User Level Actions
2. Risk Score Bracketing
3. Queue Prioritization
Want to streamline workflows, improve efficiency, and enhance platform safety? Download our latest report: https://lnkd.in/dSCgRKs7
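To make two of these automations concrete, here is a minimal Python sketch of risk score bracketing feeding a prioritized review queue. The thresholds, action names, and helpers (`bracket`, `triage`) are illustrative assumptions for this post, not ActiveFence's actual product or API.

```python
# Illustrative sketch only: thresholds and action names are assumptions,
# not ActiveFence's real policy values.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class QueueItem:
    priority: float                       # lower sorts first in the min-heap
    content_id: str = field(compare=False)

def bracket(risk_score: float) -> str:
    """Map a 0-1 classifier risk score into an action bracket."""
    if risk_score >= 0.9:
        return "auto_remove"    # high confidence: act automatically
    if risk_score >= 0.5:
        return "human_review"   # uncertain: route to a moderator queue
    return "allow"              # low risk: publish without review

review_queue: list[QueueItem] = []

def triage(content_id: str, risk_score: float) -> str:
    """Apply the bracket and, for review cases, surface riskier items first."""
    action = bracket(risk_score)
    if action == "human_review":
        # Negate the score so the riskiest item pops first from the min-heap.
        heapq.heappush(review_queue, QueueItem(-risk_score, content_id))
    return action

for cid, score in [("c1", 0.95), ("c2", 0.7), ("c3", 0.2)]:
    print(cid, "->", triage(cid, score))
print("next for review:", heapq.heappop(review_queue).content_id)  # c2
```

User-level actions would hook into the same triage step, for example by escalating accounts that repeatedly land in the high-risk bracket.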
About us
ActiveFence is a Trust and Safety provider for online platforms, protecting platforms and their users from malicious behavior and content. Trust and Safety teams of all sizes rely on ActiveFence to keep their users safe from the widest spectrum of online harms, unwanted content, and malicious behavior, including child safety and exploitation, disinformation, hate speech, terror, nudity, fraud, and more. We offer a full stack of capabilities with our deep intelligence research, AI-driven harmful content detection, and online content moderation platform. Protecting over three billion users globally every day in over 100 languages, ActiveFence lets people interact and thrive online.
- Website
-
https://www.activefence.com/
- Industry
- Software Development
- Company size
- 201-500 employees
- Headquarters
- New York
- Type
- Privately held
- Founded
- 2018
Locations
ActiveFence employees
Updates
-
When it comes to Trust and Safety tools, should you build in-house, buy off-the-shelf, or go hybrid? Our CEO and Co-founder, Noam Schwartz, shares 5 key considerations for making this decision, from resource demands to scalability, compliance, and trust. The traditional "build in-house" mindset is shifting as leaders recognize the value of specialized expertise and innovative solutions. Explore how we can help your platform stay safe, compliant, and user-friendly: https://lnkd.in/dfKthKKZ #TrustandSafety #BuildvsBuy #OnlineSafety
-
Join us on Wednesday as we break down practical insights you can use for implementation now! The REPORT Act is reshaping the landscape of platform accountability, particularly around the critical issue of reporting child sex trafficking. But how can #TrustandSafety teams ensure they meet these new requirements while managing the challenges of diverse cultural contexts and platform-specific dynamics?
In this session, we'll explore how to:
- Adapt reporting frameworks to align with cultural nuances.
- Address platform-specific challenges in detecting and reporting trafficking.
- Equip your teams with the tools to remain compliant and effective.
We'll go beyond the basics to provide practical, actionable strategies for tackling these challenges head-on. Still have questions? Our panelists will be available for live Q&A during the event as well!
Meet the speakers:
- Matt Richardson, CTCE, Director, Child Safety, Anti-Human Trafficking Intelligence Initiative (@TeamATII)
- Avi Jager, PhD, Director of Child Safety & Human Exploitation at ActiveFence
- Kavya S., Human Exploitation Research Team Lead at ActiveFence
The details:
- Wednesday, Nov. 20th
- 12PM ET / 9AM PT
Reserve your spot today! https://lnkd.in/dWV8VWiz
We hope to see you Wednesday!
-
#TrustandSafety teams: are you up to date with the REPORT Act requirements and the National Center for Missing & Exploited Children's newest guidelines? Join Avi Jager, PhD, for a #webinar that goes beyond the basics, equipping you with practical tools and advanced strategies for detecting and reporting online child trafficking in today's complex digital landscape.
Key takeaways:
- Practical Application: Learn methods aligned with the latest REPORT Act standards and NCMEC guidelines.
- Advanced Detection: Get expert insights on navigating cultural nuances and platform-specific challenges.
- Collective Action: Explore collaborative paths to foster compliance and create safer online spaces.
This is your chance to stay ahead of evolving threats while empowering your team to make a real impact. Don't miss it!
Date: Dec. 20th
Time: 12PM ET / 9AM PT
Register now: https://lnkd.in/dXVbeRbc
-
This marks another milestone in our partnership with Amazon Web Services (AWS) as we work together toward a shared goal: fostering safer, healthier online communities. ActiveFence remains committed to equipping gaming platforms with the safety tools and solutions they need to protect players from harmful content and behaviors. Check it out here: https://lnkd.in/d7heAsGW #Gaming #CommunityHealth #TrustandSafety Jonian Mehmeti
-
78% of organizations rely on third-party providers to make responsible AI a reality. Frost & Sullivan's latest report reveals the challenges, and how solutions like ActiveFence are helping businesses build safer, more trustworthy platforms. Learn how to implement Generative AI while prioritizing safety: https://lnkd.in/dBSxukGb
-
Republican candidate Donald Trump won the US presidential #election. What followed was a slew of #misinformation narratives claiming electoral fraud and demanding a recount of votes. Download the report here: https://lnkd.in/detGUndi
-
ActiveFence reposted this
When AI Goes Right - and Wrong: A Tale of Two Outcomes
Most generative AI tools are created with good intentions, yet the question remains: what happens when things go wrong? AI can be misused, whether intentionally or not.
Intentional abuse involves manipulating otherwise useful tools to generate harmful outcomes, such as "nudify" apps, social engineering, or fabricated political photos, whether by abusing commercial pre-trained models that lack sufficient guardrails or by exploiting open-source libraries.
Unintentional abuse of generative AI occurs when the technology produces harmful or misleading outputs that developers didn't foresee, often due to bias, model drift, misinterpretation, or unexpected uses. In sensitive scenarios, such as mental health support, AI may misinterpret emotional cues, causing harm by responding inappropriately. A recent concerning case involves a 14-year-old boy who developed an intense emotional connection with an AI chatbot. The chatbot engaged in highly sexualized conversations and failed to appropriately address the boy's suicidal thoughts, ultimately leading to his tragic death. These unintended risks underscore the importance of foresight, adaptable safeguards, and continuous monitoring to ensure AI serves responsibly and safely.
The impact of these scenarios is profound. With intentional misuse, the harm is targeted and deliberate; with unintentional misuse, the risk stems from oversight or vulnerabilities left exposed. Recognizing and addressing both scenarios is critical for building safe, responsible, and resilient AI tools.
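As a rough illustration of the "adaptable safeguards and continuous monitoring" the post calls for, here is a minimal Python sketch of a pre-response guardrail for a chatbot. Everything in it is a hypothetical stand-in: real systems use trained classifiers rather than keyword patterns, and none of these names are any vendor's actual API.

```python
# Illustrative guardrail sketch; the keyword check is a deliberately simple
# stand-in for a real risk classifier.
import re

CRISIS_PATTERNS = [
    r"\bkill myself\b",
    r"\bsuicide\b",
    r"\bend my life\b",
]

SAFE_RESPONSE = (
    "It sounds like you may be going through something serious. "
    "You are not alone; please reach out to a crisis line or someone you trust."
)

def detect_crisis(text: str) -> bool:
    """Flag self-harm cues (a production system would use a trained model)."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in CRISIS_PATTERNS)

def log_for_review(user_message: str, model_reply: str) -> None:
    """Queue the exchange for human review and safety-metric tracking."""
    print("flagged for review:", user_message[:60])

def guarded_reply(user_message: str, model_reply: str) -> str:
    """Screen both sides of the exchange before anything reaches the user."""
    if detect_crisis(user_message) or detect_crisis(model_reply):
        log_for_review(user_message, model_reply)   # continuous monitoring
        return SAFE_RESPONSE                        # override the model output
    return model_reply

# The guardrail intercepts a risky exchange instead of letting it through.
print(guarded_reply("I want to end my life", "model text that should never ship"))
```

The key design point is that the safeguard sits outside the model, so it can be updated as new failure modes appear without retraining anything.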
-
As we see the #EU intensifying its #DSA enforcement efforts, it's clear that proactive compliance has become essential. We're dedicated to helping platforms stay prepared and aligned with these standards. This recent investigation underscores the importance of robust Trust & Safety mechanisms, not just to avoid fines, but to preserve user #trust. Click here for help navigating DSA requirements with ActiveFence: https://lnkd.in/dfHpEMgh
VP General Counsel @ ActiveFence | Tech, AI, Trust&Safety, Head of the ACC's Trust&Safety forum, CIPP/E
Online Safety Alert: Enforcement
The #EuropeanCommission has officially launched an investigation into Temu for potential non-compliance with the Digital Services Act (#DSA). This is not just another headline; it's a wake-up call. Following high-profile investigations into Meta and X, and with nine formal DSA proceedings initiated to date, this signals the EU's serious commitment to enforcing its stringent regulations.
Why should this matter? The risks are incredibly high. Non-compliance with the DSA can lead to fines of up to 6% of a company's global annual revenue; for a platform with, say, $10 billion in annual revenue, that is an exposure of up to $600 million. In addition, when a regulatory proceeding is initiated, it not only involves navigating substantial penalties but also contending with extensive legal fees, countless hours of internal resources devoted to the investigation, and potential damage to public reputation.
As #inhousecounsels, we're the first line of defense in ensuring our organizations are compliant and prepared for the evolving landscape of digital regulation. Staying ahead of these developments isn't just about avoiding fines; it's about safeguarding the trust and sustainability that our businesses rely on.
The lesson here? #Compliance isn't just a checkbox; it's a strategic necessity. Let's make sure we're leading from the front and ready for whatever comes next. Are your compliance measures up to date?
-
Engineering @ ActiveFence: Our mission is to protect online platforms and users from all forms of harm and abuse. Today, we are sharing a #behindthescenes look at what it takes to scale infrastructure to support high-throughput, low-latency model inference for real-time content moderation.
The 2024 challenge: our #API needed to support real-time moderation for chat messages. Imagine analyzing tens of thousands of messages per second, filtering abusive content before it's published, all while maintaining lightning-fast response times!
Read Noam Levy's full insights in our Medium Engineering blog: https://lnkd.in/dzYhmytV
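The blog post has the full details; as a rough illustration of one technique commonly used for this kind of high-throughput, low-latency inference, here is a minimal Python sketch of request micro-batching. It is an assumed pattern, not ActiveFence's actual implementation, and `model_score` is a hypothetical stand-in for a real moderation model.

```python
# Micro-batching sketch: trade a few milliseconds of queueing for one large
# model forward pass instead of many tiny ones. Illustrative only.
import asyncio

MAX_BATCH = 64     # cap batch size to bound per-request latency
MAX_WAIT_MS = 5    # flush a partial batch after this many milliseconds

async def model_score(texts: list[str]) -> list[float]:
    """Hypothetical batched model call: one forward pass scores many messages."""
    await asyncio.sleep(0.002)                    # simulated GPU inference time
    return [float("abuse" in t) for t in texts]   # toy risk scores

async def moderate(queue: asyncio.Queue, text: str) -> float:
    """Called once per incoming chat message; awaits its batched score."""
    fut = asyncio.get_running_loop().create_future()
    await queue.put((text, fut))
    return await fut

async def batcher(queue: asyncio.Queue) -> None:
    """Collect messages until the batch is full or the deadline passes."""
    loop = asyncio.get_running_loop()
    while True:
        batch = [await queue.get()]               # block for the first item
        deadline = loop.time() + MAX_WAIT_MS / 1000
        while len(batch) < MAX_BATCH:
            timeout = deadline - loop.time()
            if timeout <= 0:
                break
            try:
                batch.append(await asyncio.wait_for(queue.get(), timeout))
            except asyncio.TimeoutError:
                break
        texts, futs = zip(*batch)
        for fut, score in zip(futs, await model_score(list(texts))):
            fut.set_result(score)                 # wake each waiting caller

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    asyncio.create_task(batcher(queue))
    messages = ["hello there", "abuse goes here", "have a nice day"]
    scores = await asyncio.gather(*(moderate(queue, m) for m in messages))
    print(scores)   # [0.0, 1.0, 0.0]

asyncio.run(main())
```

The two knobs, MAX_BATCH and MAX_WAIT_MS, bound the worst-case added latency while letting the model amortize each forward pass over many messages.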