How do you create trust when your business model centers on human interaction? In the latest episode of Click to Trust, we had the pleasure of speaking with Jane Yu, Head of Trust and Safety at Papa. We explored how platforms like Papa address the dual vulnerability of both members and service providers, and what it really takes to build safety measures that reach beyond the digital world and affect real lives. Listen in to learn how Jane and her team at Papa are leading the charge with their first transparency report.
Episode: https://lnkd.in/ef2Cyxii
Report: https://lnkd.in/d4e4zqX5
#trustandsafety #transparency
TrustLab
Software Development
Building a safer web through the power of harmful content understanding and bad actor detection at scale.
About us
TrustLab provides cutting-edge software and metrics to the world's largest social media platforms, online marketplaces, and apps, enabling them to protect their users against misinformation, hate speech, identity fraud, and other harmful content. Our customers range from large enterprises with complex Trust & Safety needs to small companies building out their internal policies and teams. With a founding team that has over 40 years of collective Trust & Safety experience at companies like Google, YouTube, Reddit, and TikTok, TrustLab is the trusted third-party solution for detecting and mitigating critical safety threats on the internet.
- Read more about our vision for the internet: https://www.trustlab.com/post/the-big-problem-that-big-tech-cannot-solve
- Join us if you or someone you know is interested in developing the next game-changing Trust & Safety technology: https://www.trustlab.com/careers
- Reach out if you or your company is experiencing challenges with Trust & Safety: https://www.trustlab.com/contact
- Website: https://www.trustlab.com/
- Industry: Software Development
- Company size: 11-50 employees
- Headquarters: San Francisco
- Type: Privately held
- Founded: 2019
- Specialties: Internet Safety, Trust & Safety, Online Content Safety, Content Moderation, Machine Learning, Misinformation, Hate Speech, B2B SaaS, and Identity Verification
Locations
- Primary: San Francisco, US
Updates
-
We're #hiring a new Sr. Policy Specialist | FTE | US Remote in Texas. Apply today or share this post with your network.
-
We're #hiring a new Technical Program Manager | FTE | US Remote in Texas. Apply today or share this post with your network.
-
In today's digital landscape, content moderation has become a critical aspect of keeping online platforms, and their users, safe. As the volume of user-generated content continues to grow exponentially, finding the right balance between manual and automated moderation techniques has never been more important. Our latest blog post, by Cecilia Rodriguez, explores the evolution of content moderation practices, comparing manual and automated approaches, and discussing the emergence of hybrid solutions that aim to combine the best of both worlds (a rough code sketch of the idea follows this post). Check it out here! https://lnkd.in/g_Sfzj8y
Finding the Balance: Manual vs. Automated Content Moderation | TrustLab Blog
trustlab.com
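To make the hybrid idea concrete, here is a minimal, hypothetical sketch (illustrative only, not TrustLab's actual system; all names and thresholds are assumptions): an automated classifier scores each item, confident calls are actioned automatically, and the gray area in between is routed to human reviewers.

```python
# Hypothetical sketch of hybrid (manual + automated) content moderation:
# an automated classifier handles clear-cut cases, and uncertain content
# is escalated to human reviewers. Thresholds and names are illustrative.

from dataclasses import dataclass

REMOVE_THRESHOLD = 0.95   # auto-remove above this model confidence
APPROVE_THRESHOLD = 0.05  # auto-approve below this model confidence

@dataclass
class Decision:
    action: str   # "remove", "approve", or "human_review"
    score: float  # model's estimated probability the content is harmful

def score_content(text: str) -> float:
    """Stand-in for a real harmful-content classifier or moderation API."""
    return 0.5  # placeholder score for illustration

def moderate(text: str) -> Decision:
    score = score_content(text)
    if score >= REMOVE_THRESHOLD:
        return Decision("remove", score)    # confident: automate removal
    if score <= APPROVE_THRESHOLD:
        return Decision("approve", score)   # confident: automate approval
    return Decision("human_review", score)  # gray area: escalate to a human

print(moderate("example user post"))  # Decision(action='human_review', score=0.5)
```

The two thresholds are the tuning knobs in a setup like this: widening the gap between them sends more content to human reviewers (more nuance, more cost), while narrowing it automates more of the queue.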
-
We're #hiring a new Sr. Policy Specialist in Texas. Apply today or share this post with your network.
-
The latest episode of Click to Trust is available on all streaming platforms! Listen to Sabrina Puls and Carmo Braga da Costa as they chat about the important role that content policies play in promoting safety online, including...
1. Why investing in Trust & Safety from the start is a strategic business decision
2. Tips and tricks for collaborating with cross-functional (XFN) stakeholders to develop content policies
3. Leveraging QA to navigate gray areas and ambiguity in content moderation
4. Mitigating bias in your content policies and enforcement mechanisms
5. Why Trust & Safety might be a great career for you
https://lnkd.in/du5QnDfy
Content Policies: An Inside Look at How Online Platforms Try to Keep You Safe
youtube.com
-
TrustLab's cofounder and CEO, Tom Siegel, will be sharing insights at today's Stanford Trust & Safety Research Conference. He's set to discuss the "Utility of Generative AI vs Discriminative AI for Content Moderation" in a lightning talk. Join us at the McCaw Hall Mainstage at 11:30 am PST for the session! For more details, check out the conference agenda here: https://lnkd.in/gUtr8NUG
-
We recently hosted an insightful interactive discussion on content policy development with the TrustLab team, led by Sabrina Puls. Sabrina has distilled the key learnings into a must-read blog post for T&S teams! Here are some highlights:
1/ Simplify to Scale: Clear, actionable guidelines trump complex policies.
2/ Cross-Functional Collaboration: Involve multiple departments for effective implementation.
3/ Cultural Context: Adapt policies globally to respect diverse norms.
4/ Misinformation Strategies: Ground policies in data and use QA for refinement.
5/ Continuous Iteration: Refine based on real-world application and emerging trends.
Pro Tip from Sabrina: "Consider creating a Policy Launch Standard Operating Procedure to align stakeholders and set clear expectations."
Sabrina also emphasizes: "The goal isn't to create a catch-all policy – focus on the most pressing issues impacting user safety."
Explore these insights and more in Sabrina's full blog post! https://lnkd.in/dV4TCBpp
Navigating the Complexities of Content Policies: Bridging the Gap Between Policy & Enforcement | TrustLab Blog
trustlab.com
-
Are the systems we've designed to protect online spaces inadvertently silencing marginalized communities? In an eye-opening blog post, Emma T. delves into a critical issue facing our digital world: how content moderation disproportionately affects marginalized voices online. Emma touches on a few key points:
- Automated systems often lack nuance in interpreting context
- Cultural sensitivity is crucial but often overlooked
- Marginalized communities face higher rates of content removal
As Trust & Safety professionals, it's our responsibility to advocate for fair and inclusive moderation practices. This conversation is vital as we strive to create more equitable online spaces. Read Emma's blog >> https://lnkd.in/dHrPETBF
#ContentModeration #DigitalInclusion #OnlineSafety #TechEthics
-
As online platforms face the Synthetic Content Era, content moderation is at a crossroads... And while AI seems like an obvious solution, it may not be the silver bullet many hope for. Instead, Tom Siegel proposes a "Co-Pilot Moderation" approach (see the sketch after this post):
1/ AI-powered initial screening
2/ Strategic human/AI intervention
3/ Continuous AI-human feedback loop
This symbiosis could bring Trust & Safety teams:
- Improved accuracy in content decisions
- Enhanced moderator well-being
- Increased efficiency and scalability
Tom explores how this approach can transform online safety, especially for platforms struggling with off-the-shelf AI solutions or resource constraints. Check out the full article on the blog! https://lnkd.in/dw6H5tsh
Redefining Content Moderation in the Era of Synthetic Content | TrustLab Blog
trustlab.com
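Below is a minimal, hypothetical sketch of what such a co-pilot loop could look like (names, thresholds, and the retraining queue are assumptions for illustration, not details from Tom's article): the AI screens first, a human decides the uncertain cases, and each human verdict is captured as labeled data so the model keeps improving.

```python
# Hypothetical sketch of a "co-pilot" moderation loop: AI screens first,
# humans handle uncertain cases, and human decisions are captured as
# labeled data to retrain the model. All names/thresholds are illustrative.

from typing import Callable

training_queue: list[tuple[str, bool]] = []  # (content, is_harmful) labels

def ai_screen(text: str) -> float:
    """Stand-in for an AI classifier returning P(harmful)."""
    return 0.6  # placeholder score for illustration

def copilot_moderate(text: str, human_review: Callable[[str], bool]) -> bool:
    """Returns True if the content should be removed."""
    score = ai_screen(text)
    if score >= 0.95 or score <= 0.05:
        return score >= 0.95               # AI is confident: act automatically
    verdict = human_review(text)            # gray area: human makes the call
    training_queue.append((text, verdict))  # feedback loop: label for retraining
    return verdict

# Usage: plug in any human-review callback (e.g., a moderation console).
removed = copilot_moderate("example post", human_review=lambda t: False)
print(removed, training_queue)  # False [('example post', False)]
```

The design choice this illustrates is that humans only see the cases where the model is unsure, and every one of those judgments feeds the continuous AI-human feedback loop rather than being discarded after the decision.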