Why we should all be working on our DSA readiness

Don't think you need content moderation tech? Think again.

[Reposting as an article here for greater reach. For timely access to my writings please consider subscribing on Substack]


The week before Christmas was an interesting one in EU digital regulation, if only because of the made-for-clickbait announcement from the European Commission that three companies have been added to the list of designated Very Large Online Platforms (VLOPs): Xvideos, Pornhub and Stripchat.

That brings to 19 the number of companies that will face the most stringent requirements to police content under the far-reaching Digital Services Act (DSA). Not only will they have to report their user numbers, but also assess the systemic risks they create (especially to children), allow access to independent auditors and external researchers, and offer at least one recommender system that is not based on user profiling.[1]

The EU has been both commended and criticised for drawing a distinction between the Goliaths (most of which are US-based) and everyone else. This approach was a direct response to findings that GDPR had disproportionately impacted smaller companies and, in effect, made it harder for them to compete.

But with all the focus on VLOPs, we seem to have forgotten that most digital services of any size will face critical new obligations from 17 February this year. Especially around moderation of user-generated content (UGC).

(The following is most definitely not legal advice.)

Under the DSA’s rules, if there is any UGC on your platform—whether it’s comments or reviews, chat messages, content in files people are exchanging or posting, live voice communications, 3D creations—you have to implement means of detecting, flagging and removing illegal content.[2] If you are a marketplace for products, you must have a process to identify and remove illegal goods, including counterfeits.

This could prove a real headache for many growth-stage and midmarket companies that have users or sell products in the EU. While many of the technical components for content moderation and user reporting workflows exist, they still need to be stitched together in a way that covers all the DSA requirements. You’ll need at least these features, appropriately localised for the 27 EU member states (a rough sketch of how the pieces might fit together follows the list):

  • Moderation / filtering to detect illegal content (and a way to publish an annual transparency report on this process)
  • Mechanism for users to report illegal content
  • Ability to remove identified or reported content
  • Ability to provide a notice to users explaining why content has been removed, and to enable them to appeal your decision
  • A process in place to notify law enforcement if you become aware of a potential criminal offence or a threat to someone’s life or safety
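
The exact shape of these workflows will vary by platform, but a minimal picture of the moving parts helps. Below is a hypothetical TypeScript sketch of the data you would need to carry through a report-remove-notify-appeal flow; the type names, fields and the six-month appeal window are illustrative assumptions, not wording lifted from the DSA.

```typescript
// Hypothetical data model for a notice-and-action workflow of the kind the DSA
// expects. Names, fields and the appeal window are illustrative assumptions,
// not a compliance checklist.

type ReportReason =
  | "illegal_hate_speech"
  | "csam"
  | "counterfeit_goods"
  | "ip_infringement"
  | "other";

interface UserReport {
  id: string;
  contentId: string;      // the UGC item or listing being reported
  reporterId?: string;    // reporter may be an ordinary user or a trusted flagger
  reason: ReportReason;
  explanation: string;    // why the reporter believes the content is illegal
  createdAt: Date;
}

interface ModerationDecision {
  reportId: string;
  action: "remove" | "restrict" | "keep";
  basis: string;          // the law or policy clause relied on, for the statement of reasons
  decidedBy: "automated" | "human";
  decidedAt: Date;
}

interface StatementOfReasons {
  decision: ModerationDecision;
  userFacingText: string; // explanation sent to the affected user
  appealDeadline: Date;   // window during which the user can contest the decision
}

// Sketch of the happy path: report -> decision -> statement of reasons.
// `decide` and `removeContent` stand in for your own moderation logic and takedown call.
function handleReport(
  report: UserReport,
  decide: (r: UserReport) => ModerationDecision,
  removeContent: (contentId: string) => void
): StatementOfReasons {
  const decision = decide(report);
  if (decision.action === "remove") {
    removeContent(report.contentId);
  }
  return {
    decision,
    userFacingText: `Action taken: ${decision.action}. Basis: ${decision.basis}.`,
    appealDeadline: new Date(Date.now() + 180 * 24 * 60 * 60 * 1000), // e.g. a six-month window
  };
}
```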

That’s the technology & process bit. Of course you will first have to come up with a content policy that both satisfies the DSA definition[3] and matches the context of your UGC.

And that’s not all. You will also have to demonstrate that your privacy and security mechanisms were designed to protect minors specifically (including not serving them profile-based ads[4]), and that your interfaces are not deceptive and do not rely on ‘dark patterns’.
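
To make the minors point concrete, here is one way a service might gate profiling-based advertising behind its age signals. The `AgeSignal` type, the signal sources and the confidence threshold are all assumptions made for illustration; the DSA prescribes the outcome, not the mechanism.

```typescript
// Illustrative gate for the Article 28-style rule: no profiling-based ads when
// you are reasonably certain the user is a minor. Signal sources and the
// confidence threshold are assumptions, not values taken from the regulation.

interface AgeSignal {
  source: "self_declared" | "verified_id" | "inferred";
  isMinor: boolean;
  confidence: number; // 0..1, how certain this signal is
}

function canServeProfiledAds(signals: AgeSignal[], certaintyThreshold = 0.8): boolean {
  // If any sufficiently certain signal says the user is a minor,
  // fall back to contextual (non-profiled) advertising instead.
  const likelyMinor = signals.some(
    (s) => s.isMinor && (s.source === "verified_id" || s.confidence >= certaintyThreshold)
  );
  return !likelyMinor;
}

// A self-declared adult with a high-confidence inferred-minor signal is still
// treated as a minor for ad purposes.
const allowed = canServeProfiledAds([
  { source: "self_declared", isMinor: false, confidence: 0.5 },
  { source: "inferred", isMinor: true, confidence: 0.9 },
]); // -> false
```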

Today the market for tools and services that can help with content moderation is fragmented. There are plenty of vendors jumping on the double bandwagon of this new regulation and the technology-du-jour (AI). It can be hard to distinguish between those that provide services (based on humans and/or AI) and those that offer tools and components for building your own solution (which will likely also require some human moderators). In addition to technology, you’ll need to appoint someone who owns the policy and can evolve it, and to continuously manage a set of principles to help adjudicate disputes.

Finally, content moderation is uniquely complex in that it can be very specific to your service (eg, what is considered a threatening comment in a social community may not be a reportable offence in an adversarial video game chat). At the same time, every company needs to make use of generalised content moderation approaches (eg, how to identify and report on Child Sexual Abuse Material or CSAM). Getting the benefit of the best standards in the industry while optimising for your own service can be hard.
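
One pragmatic pattern is to layer bought-in, generalised detectors (hash matching for CSAM, off-the-shelf classifiers) underneath service-specific rules that you own. The sketch below is hypothetical; the detector and rule interfaces are placeholders, not any vendor’s real API.

```typescript
// Hypothetical layering of generalised detectors with service-specific policy rules.
// Detector and rule shapes are placeholders, not a real vendor API.

interface Detection {
  label: string; // e.g. "csam_hash_match", "hate_speech", "counterfeit_listing"
  score: number; // 0..1 confidence from the generic detector
}

type Detector = (content: string) => Detection[];

// A context rule lets the same detection mean different things on different
// surfaces, e.g. trash talk in a competitive game chat vs a parenting forum.
type ContextRule = (d: Detection, context: { surface: string }) => "remove" | "review" | "allow";

function moderate(content: string, detectors: Detector[], rule: ContextRule, surface: string): string {
  const detections = detectors.flatMap((d) => d(content));

  // Hash-matched CSAM is removed and reported regardless of context.
  if (detections.some((d) => d.label === "csam_hash_match")) return "remove_and_report";

  const outcomes = detections.map((d) => rule(d, { surface }));
  if (outcomes.includes("remove")) return "remove";
  if (outcomes.includes("review")) return "review";
  return "allow";
}
```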

Look out for lots of innovation and company pivots among content moderation solution providers as they try to address the DSA compliance challenge for the midmarket.

[This article first appeared on my Substack. If you like it please share, and consider subscribing.]


[1] In fact, recommender systems (ie, the personalisation algorithms that power your feed) are under attack from all sides in Europe, which will be interesting to watch given that they are by far the most effective driver of user growth; see TikTok's astonishing growth.

[2] The DSA does not create a new definition for what content is illegal – it simply points at existing EU and member state laws. Broadly, illegal content includes anything that incites terrorism, depicts the sexual exploitation of children, incites racism or xenophobia, infringes intellectual property rights or is considered disinformation. But there are also country-specific restrictions to be aware of, such as the prohibition on depicting Nazi symbols in Germany, or more stringent restrictions on racist content in France.

[3] Note that “where a content is illegal only in a given Member State, as a general rule it should only be removed in the territory where it is illegal.” (Questions and Answers: Digital Services Act).

[4] The DSA draws a very hard line here, directly barring digital ads based on profiling by using personal data of users “when [operators] are aware with reasonable certainty that the recipient of the service is a minor” (Article 28). This puts into much clearer language what had been implied until now by GDPR’s Recital 71 restriction on automated decision-making via profiling. Note that while the targeted advertising ban applies to any company in scope, VLOPs and VLOSEs face additional obligations to mitigate risks including “targeted measures to protect the rights of the child, including age verification and parental control tools.”
