2022: Time to take algorithm-enhanced online abuse seriously
The report of the Joint Scrutiny Committee on the Online Safety Bill raises many issues, and the necessary revisions will take time. I am therefore of the opinion that HMG should move rapidly to implement the Age Checking provisions in the Digital Economy Act pending agreement on the Bill. Mandatory audit against PAS 1296 (pending its update as an International Standard) is an obvious interim measure, linked to enforcement of the Age Appropriate Design Code. I regard arguments over encryption as a red herring. Its active use (alongside measures to address vulnerabilities in the domain name system) to block unidentified access to vulnerable users, including children, should be part of online safety policy.
But the world has changed since my previous blogs (When IT Meets Politics) on this topic.
The algorithms used by dominant social media companies have compounded the risks to unsupervised children in their bedrooms by automating the processes predators use to find and groom potential victims - whether for child abuse or for recruitment into criminal or radical groups. Many, perhaps most, children are now more vulnerable online than on the streets outside.
I am therefore delighted to host a guest blog (reproduced here from a LinkedIn article) by Dr. Rachel O'Connell, Founder and CEO at TrustElevate.
= = =
2022: Time to take algorithm-enhanced online abuse seriously
It is the beginning of another new year - a year in which many will be looking for a brighter future as we continue to navigate a global pandemic. Children and young people in particular will be hoping to continue their education without being held back any further by Covid-19.
But for some children and young people, Covid-19 isn't the only threat out there. The NSPCC recently reported a record number of online grooming crimes in 2021, up 70% in the last three years.[1] The Internet Watch Foundation reported that experts are finding fifteen times as much child sexual abuse material online as they were 10 years ago.[2]
It's clear that Big Tech platforms are not doing enough. These alarming statistics show that existing government regulation is insufficient to keep children and young people safe online at a time when they are spending more time on the internet than ever before. If anything is certain about the year ahead, it is that 2022 must finally be the watershed moment for action to protect children and young people from online abuse.
The latest regulatory development came from the Joint Committee on the Draft Online Safety Bill, which published its report on Tuesday 14th December. Titled 'No longer the land of the lawless', the report includes potentially game-changing recommendations for the Online Safety Bill: specifically, that a platform's systems should be regulated, not just its content. This shift in focus is hugely significant, because it is platforms' design choices and data-driven operations that regulators should scrutinise. These include AI-driven recommendation engines that actively and seamlessly connect young people to adults with a sexual interest in children.
AI that connects adults with a sexual interest in children to children who livestream enables these adults to act as a pack, requesting, cajoling and coercively controlling children into increasingly sexually explicit acts. This has directly led to a spike in what is referred to as self-generated child sexual abuse material. Paedophiles no longer need to search extensively for children to groom online; they simply watch a few videos produced by children in the age band, gender and ethnicity that match their preferences. The algorithm then connects them both with children who match these criteria and with other adults who share their predilections. To be clear, Ofcom research highlights that around half of 12-year-olds have a social media profile. Most social media sites, including Facebook, Twitter, Instagram and Snapchat, have a minimum age requirement of 13, yet 21% of 10-year-olds, 34% of 11-year-olds and 48% of 12-year-olds say they have a profile.
AI also recommends harmful content to children based on content they have searched for or viewed while feeling depressed. These automated processes surface more and more self-harm- and suicide-related content to children whose behaviours indicate vulnerability.
Such insidious practices, even when unintentional, go straight to the heart of the dangers that automated processes on platforms can pose. These data-driven processes are choices that platforms make on behalf of users, in the absence of informed consent or any consideration of consequences. When deploying AI to enable frictionless interactions, the stated aim is to delight end-users by surfacing content and connections they may value; the reality is to keep eyeballs glued to screens so that more adverts are viewed and more revenue is generated. The failure to act on the growing body of evidence of the risks these algorithms pose to children's wellbeing raises questions about the extent, if any, to which these companies exercise a duty of care toward children, and about the scale of the liability for children's wellbeing to which they are exposed.
The Joint Committee recommends that these practices be regulated, and calls for increased powers to audit companies' decision-making and hold them to account. Policing platforms, enforcing these principles through audits, and ensuring accountability and transparency are key. If these recommendations are accepted and included in the Online Safety Bill, this - combined with a greater focus on age verification in content provision - could represent a significant step toward a safer internet for children and young people.
More broadly, the recognition that data rights are human rights is gaining traction. Data is central to the digital services children use every day. The processing of children's data - in fact, anyone's data - leads to the creation of psychographic profiles composed of between 5,000 and 8,000 data points. Psychographic data is information about a person's values, attitudes, interests and personality traits, used to build a profile of how an individual views the world, what interests them and the triggers that move them to action. These profiles are interrogated by AI and predictive analytics, informed by behaviour-modification measures, and the outputs shape both individual and collective behaviours and social norms. Negative impacts include increases in incidences of self-harm, disordered eating behaviours and the generation of child abuse material, as discussed above. But this data can also be used in ways that impede young people's access to university places, jobs and apprenticeships, or even affect car insurance premiums. Companies harvest data about young people online and sell it to insurance companies, claiming to compute a young person's appetite for risk or likelihood of engaging in criminal activity. There is zero oversight of these AI-driven practices, which can shape a young person's life chances.
An absence of oversight means companies have zero incentive to manage abuse reports correctly, and very little is understood - beyond anecdotal evidence and events highlighted by whistle-blowers - about the linkages between data-driven operations and the resultant harm to children and young people. Oversight of abuse management systems is a gap that pending regulation needs to bridge as a matter of urgency. The Joint Committee report recommends that individual users be able to complain to an ombudsperson when platforms fail to comply with the new law. For this to be effective, there must be an ability to audit the operation and efficacy of a platform's abuse management systems. Such audits should cover training materials and quality assurance processes, the outputs of the abuse management system, and how these feed back into decision-making about product features and data-driven operations, so that the ombudsperson can operate effectively.
Given the centrality of data-driven operations to the risks that children and young people encounter online, this falls firmly within the remit of the Online Safety Bill, working in collaboration with the data protection regulator.
The Joint Committee's report recommends that service providers be mandated to conduct internal risk assessments recording reasonably foreseeable threats to user safety, including the potential harmful impact of algorithms, not just content. A considerable body of policymaking and standards work already supports the Joint Committee's recommendations: for example, the ICO's Children's Code, and the IEEE standards body's recently published Age-Appropriate Digital Services Framework. A Child Rights Impact Assessment gives product developers the means to assess product feature sets and behaviour-modification techniques against the known risks, harms and mitigations for children in specific age bands.
Furthermore, the UK's data protection law requires companies to know their users' ages and to obtain parental consent before processing a young child's data.
In combination, these measures and recommendations can enable a safer internet for children and young people. All eyes now turn to Parliament to see whether these recommendations translate into legislation.