Regulating AI and Data: How Much Do Compliance Asymmetries Cost European Research and SMEs?
I rarely write about regulatory questions, and I am framing this one in deliberately provocative terms. So let me be clear: I firmly believe that privacy and AI regulations are necessary. The principles underpinning EU regulations – from the GDPR to the EU AI Act – are fundamentally sound and reasonable.
The point I want to debate here has less to do with the legislation itself than with what has been developing around it. I mean in particular the perverse asymmetries it has been generating for European research and -- if we are to believe the Draghi Report -- for European SMEs.
EU regulations on AI and data protection have largely been designed to keep Big Tech -- predominantly US companies -- in check. Justifiably so. These firms have been voraciously collecting data from European citizens while deploying generative AI applications and popular chatbots. Yet the same stringent rules apply with equal force to scientific research, SMEs and even charities. Complying with these regulations is demanding, on top of being costly and time-consuming. And this is where we run into the first asymmetry. Big Tech can rely on armies of top-notch lawyers who understand not only the law but also the technology, and who excel at risk evaluation – yet they represent only a small overhead spread over a giant business operation. Academic researchers and SMEs cannot boast this level of resources, let alone the economies of scale. They often struggle to comply, securing GDPR approval with much pain and sweat. Jurists on the university bodies in charge of approving ethical and GDPR compliance can be surprisingly unfamiliar with the protocols, data-collection platforms and technologies now used in advanced research.
But this would not be so bad if these regulations had not spurred the emergence of a regulatory culture, promoted by a large EU commentariat, excessively focused on the dangers and threats associated with new technologies. Privacy is invoked at every turn, often legitimately, but sometimes in more questionable ways – with many adverse consequences for European scientists. Conducting legal AI research in Europe is very hard, if not outright impossible, because many judiciaries hide behind the GDPR to block access to the large corpora of decisions researchers need. The fear of facing complaints has made universities risk-averse. And, as if that weren't enough, there is the closest thing to the regulatory Taliban: the young, inexperienced jurist on the ethics board or working for the local regulator who has persuaded himself that the modest research proposal he is asked to review must be the next Cambridge Analytica.
It is not rare for even a very low-risk scientific proposal to take weeks or even months of wrangling to get approved. I have seen instances where researchers eventually had to ask anonymous participants to consent to data collection three times: once to join the online platform that recruited them in the UK; a second time to give their GDPR consent on the EU side of the study (on a form that is exactly the same); and a third time because they are also supposed to give their ethical consent to data collection (a nuance surely lost on 99.9% of participants).
The problem is not just that researchers end up devoting inordinate amounts of time to convincing university reviewers that their work is compliant (on top of all the academic bureaucracy – reporting, time-sheeting, submitting data management plans – that keeps inflating and keeps distracting scientists from the actual research). To avert problems with finicky reviewers, researchers often choose to do less. Don’t collect demographics if they are not strictly necessary for the study or if doing so is likely to delay approval! Master’s students interested in conducting experiments involving human participants must reckon with the real risk that securing GDPR and ethical approval will prevent them from graduating in a timely fashion.
There are no empirical studies of this phenomenon. But I do see scientific areas – such as psychological and behavioural studies – where this is clearly hurting European research. Because studies come with fewer covariates, there is less room for exploratory analysis and the generation of new hypotheses. The same may be occurring in medical research: as studies collect less information about patient characteristics, the resulting data will inevitably offer fewer possibilities to explore and detect interaction effects or to understand rare complications.
Meanwhile, Big Tech and multinationals are moving and processing tons of data, often in ways that raise red flags, but as part of complex and opaque business operations that tend to elude the attention of regulators. Talking with business people, I am often astonished at what Big Tech and multinationals dare to do with data. Which points to another asymmetry: the regulations we are talking about here are much easier to enforce on academics, small charities and companies running comparatively simple operations. The popularity of their apps and the attendant network effects give the Big Tech giants a huge bargaining chip to force you and me to hand over our data. Academics obviously don’t have this leverage, yet they can still feel at the mercy of frivolous accusations of breaching privacy rights.
The fixation on legislation in the European regulatory bubble, and the constant rhetoric about the dangers and threats of AI and data processing – some of it phantasmagorical (e.g. the claim, made in a petition circulating in Belgium last year, that people will commit suicide if we let them talk freely to LLM-based chatbots) – have overshadowed the need for a more sober discussion of enforcement and of the compliance cost asymmetries that are hurting Europe’s research communities as well as the continent's long-term economic prosperity.
What is the solution? What we should try to do first is change the culture and mindset that have developed on and around the legislation, prioritize enforcement over regulatory inflation, and focus on the real risks. Whether the EU AI Act gives rise to the same asymmetries as the GDPR remains to be seen, but I see comparatively little need to tinker with the current legislation itself, although some modifications might help avert frivolous privacy complaints and simplify compliance. This, I believe, would go some way towards making the lives of European scientists and researchers easier and more productive.
#AIRegulation #PrivacyLaws #GDPR #EUAIAct #BigTech #DataProtection #AIResearch #DigitalInnovation #TechPolicy #ResearchCompliance #AcademicResearch #TechEthics #FutureOfAI #DataRegulation