AI as an Ally for Child Safety? Possibly, but We Can’t Ignore the Risks
Safer, Built by Thorn
Proactive child sexual abuse material (CSAM) detection built by experts in child safety technology
Each month we'll share news and insights to keep you up to date on the latest in the child safety technology sector. Hit that subscribe button to make sure you never miss an update.
AI: Potential Ally and Current Risk to Child Safety
What gives us hope:
AI can monitor and analyze online content at a scale and speed impossible for human moderators alone. By filtering harmful content, detecting predatory behavior, and surfacing educational resources, it helps keep the digital playground a safer place for children to learn and grow.
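For the Trust and Safety engineers among our readers, here is a minimal, purely illustrative sketch of the hash-matching idea behind much of this detection tooling. It is not Safer's implementation: production systems match against vetted hash databases maintained by organizations like NCMEC and use perceptual hashing (so edited copies of a known image still match) alongside machine learning classifiers. This toy example uses exact SHA-256 matching against a made-up hash list.

```python
import hashlib

# Hypothetical hash list for illustration only. Real systems match
# against vetted databases of known CSAM hashes and use perceptual
# hashes rather than cryptographic ones, so near-duplicates still hit.
KNOWN_HARMFUL_HASHES = {
    # SHA-256 of b"test", seeded so the example below produces a match.
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of raw file bytes."""
    return hashlib.sha256(data).hexdigest()

def flag_upload(data: bytes) -> bool:
    """Flag an upload for review if its hash appears on the known list."""
    return sha256_of(data) in KNOWN_HARMFUL_HASHES

print(flag_upload(b"test"))   # True: digest is on the list
print(flag_upload(b"other"))  # False: unknown content passes through
```

In a real deployment, a hit would route the upload to trained human reviewers and trigger reporting workflows rather than returning a boolean, and novel (never-before-hashed) material would be caught by classifiers instead of list lookups.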
What causes us concern:
The Internet Watch Foundation's July 2024 report highlights a significant increase in AI-generated child sexual abuse material (CSAM) since their October 2023 findings. This includes more than 3,500 new AI-generated images and the emergence of realistic AI-generated videos depicting severe abuse. The report underscores the rapid evolution and growing threat of AI tools being used to produce and disseminate CSAM, posing serious challenges for detection and enforcement.
Human Rights Watch (HRW) reports that AI models are being trained on photos of children found online, even when parents use strict privacy settings. HRW researcher Hye Jung Han discovered 190 photos of Australian children, including Indigenous children, in LAION-5B, a widely used AI training dataset, exposing those children to privacy and safety risks. Despite platform policies prohibiting such scraping, AI models have already been trained on these images, some of which reveal personal information like names and locations. HRW calls for stronger legal protections against the misuse of children's photos rather than expecting parents to remove them from the internet.
Access more child safety resources for Trust and Safety professionals in our Resource Library.
Other Child Safety News
Age Verification
Considering Age Verification and Impacts on LGBTQ+ Youth (Tech Policy Press)
Here, TPP explores how efforts to protect children online through age verification measures can inadvertently harm LGBTQ+ youth by limiting their access to essential resources and support networks. These measures, while intended to ensure safety, often raise significant privacy concerns and reinforce social disparities, disproportionately affecting marginalized groups.
Regulators and Legislation Impacting Digital Platforms
On July 30th, the U.S. Senate passed two landmark bills aimed at enhancing online safety for children and teenagers, marking the first significant attempt to regulate social media's impact on minors in over 25 years. The Kids Online Safety Act (KOSA) and the Children and Teens Online Privacy Protection Act (COPPA 2.0) both passed with strong bipartisan support.
KOSA would create a legal “duty of care” for platforms to prevent and mitigate various harms, including the online sexual exploitation and abuse of children, as well as introduce new requirements for transparency, reporting, and more. COPPA 2.0 would ban targeted advertising for minors and extend privacy protections to include teens aged 13-16. While the bills do have significant support, including within the tech industry, recent news coverage has amplified concerns about surveillance and censorship, particularly as it relates to LGBTQ+ youth.
However, many of the folks who have worked closely on KOSA for months, including survivors and child safety experts, have criticized the amplification of these concerns as scare tactics not aligned with the current bill text. The bills now move to the House, where their fate is uncertain. If the bills do pass the House and are enacted into law, they could significantly reshape the child safety technology landscape, potentially requiring substantial changes to platform design, algorithms, and data handling practices for companies operating in the U.S. market.
The FTC has banned the anonymous messaging app NGL: Ask Me Anything from hosting kids under 18 on its platform, settling allegations that the app was unfairly marketed to minors and exposed them to cyberbullying and harassment, the agency said Tuesday. This is the first time the agency has ordered a messaging app to stop hosting teens and kids online. The move comes after years of mounting pressure on lawmakers and regulators to hold tech platforms accountable for their impact on youth mental health.
Share your thoughts! Let us know in the comments what legislation you're keeping an eye on.
We'll be back next month with another edition of Digital Defender.