Should AI be regulated? Prohibitions, the EU AI Act and the failed promise of personalized learning.
March 22, 2023 will probably go down as one of the most surreal days in the recent history of technology. On that day, Elon Musk, who needs no introduction, Steve Wozniak, co-founder of Apple, bestselling author Yuval Noah Harari and a long list of distinguished tech experts, scientists, professors and corporate executives co-signed “Pause Giant AI Experiments: An Open Letter”, in which they called on “AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”. The letter also demanded “safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.”
That some of the foremost tech pioneers, whose own inherently risky innovations paved the way for these developments, were advocating a moratorium – one which, as we now know a year later, was never seriously considered – is almost as bizarre as the phrase “independent outside experts”, which borders on magical thinking. (Perhaps some extraterrestrials that we mere mortals are not aware of?)
Granted, there are risks intrinsic to the rapid development of AI. Unscrupulous use can have horrendous consequences, and we have already witnessed the painful reproduction of stereotypes and biases in AI-generated materials. However, as with any powerful technology – and as the history of technological revolutions has amply shown – potential misuse does not lead to prohibition, and perhaps not even to regulation.
Regulating AI involves considerations of privacy, security, accountability, bias, safety, and ethics. The case for regulation rests on ensuring robust security measures, enhancing transparency, mandating unbiased training data, setting safety standards, and guiding ethical use. It addresses concerns such as privacy violations, opaque decision-making, discrimination, reliability in critical applications, and potential misuse in areas like surveillance and autonomous weaponry.
The catch, however, is that, at this stage in global evolution, the very concept of those “independent experts” no longer makes sense, and every measure listed above is a double-edged sword: protection on one side, the stifling of innovation through costly and bureaucratic compliance on the other. Concerns about AI regulation include who gets to make the decisions, which risks bias and limits representation. Regulations may even favor powerful entities, inhibiting innovation and creating a fragmented global landscape. Rapid technological evolution challenges regulatory relevance, while over-regulation will, inevitably, hinder creativity.
There are other powerful arguments against regulation. Over-regulation might put smaller companies at a disadvantage by imposing burdensome compliance costs, reducing competition. The speed at which AI evolves makes enduring regulations hard to write, and the global nature of AI development complicates international regulatory coordination, risking competitive imbalances.
There is also concern that regulations might be based on overestimated fears about AI, focusing on unlikely risks while neglecting to consider its benefits and potential. Ethical and cultural variances further complicate the creation of universal regulations.
Regulation can inadvertently encourage the development of unethical AI applications by those outside regulatory frameworks, as strict rules might push innovation into unregulated spaces or jurisdictions with lax oversight. This scenario creates a "regulatory haven" for unethical development, where developers can exploit gaps in global enforcement. Moreover, overemphasis on compliance can divert attention from ethical considerations to legal loopholes, undermining the broader goal of fostering responsible AI development. Essentially, while aiming to control AI's risks, regulation may paradoxically expand the playground for unethical practices by those willing to circumvent the system.
An immediate example is the European Union's AI Act, which received final approval from the European Parliament on March 13, 2024. Once guidelines for its application are developed, each EU member state must adapt its legislation to the provisions of the act. It is the first binding, comprehensive AI law in the world. It sets out, with the best of intentions, to promote “Human Centric and Trustworthy AI”, and it addresses the level of risk posed by different categories of AI applications: unacceptable risk (practices prohibited outright), high risk (subject to strict obligations), limited risk (subject mainly to transparency requirements) and minimal risk (left essentially unregulated).
Quoting directly from the text of the act: among the many controversial references to education, the systems classified as high risk – and as such subjected to stringent, and consequently costly, regulation – include “AI systems intended to be used to evaluate learning outcomes, including when those outcomes are used to steer the learning process of natural persons in educational and vocational training institutions at all levels”.
There are many adaptive AI learning platforms that, after diagnostic testing, chart an optimal learning pathway for each student. Drawing on AI and big data, they compare each learner's profile with thousands of others in their databases and personalize exercises and interactions through an algorithm that attempts to guide every learner towards the best possible learning outcomes, along the lines of the sketch below.
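To make this less abstract, here is a minimal sketch of what such a pathway engine might look like under the hood. It is purely illustrative and not any vendor's actual method: the skill-score profiles, the `recommend_next_skill` function and the nearest-neighbour heuristic are all assumptions of mine, standing in for the far richer models real platforms use.

```python
import numpy as np

def recommend_next_skill(profile, past_profiles, past_outcomes, k=50):
    """Suggest which skill to practice next by comparing this learner's
    mastery vector against the k most similar past learners (hypothetical
    helper; all names and data here are illustrative)."""
    # Euclidean distance from this learner to every stored profile.
    dists = np.linalg.norm(past_profiles - profile, axis=1)
    nearest = np.argsort(dists)[:k]  # indices of the k most similar learners
    # Keep only the neighbours whose final outcomes were above the median;
    # they model where a similar learner who succeeded ended up.
    successful = nearest[past_outcomes[nearest] > np.median(past_outcomes)]
    if successful.size == 0:  # fall back if no neighbour cleared the bar
        successful = nearest
    target = past_profiles[successful].mean(axis=0)  # average successful profile
    # Recommend the skill with the largest mastery gap to that target.
    return int(np.argmax(target - profile))

# Toy data standing in for a platform's database of learner histories:
# each row is a learner, each column a per-skill mastery score in [0, 1].
rng = np.random.default_rng(0)
past_profiles = rng.random((5000, 8))   # 5000 past learners x 8 skills
past_outcomes = rng.random(5000)        # one final outcome score per learner

learner = rng.random(8)                 # current learner's diagnostic profile
print("Practice skill index:", recommend_next_skill(learner, past_profiles, past_outcomes))
```

Even this toy version makes the regulatory point: the moment such a system “steers the learning process” of a student, the act treats it as high risk.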
These systems, under the new EU AI Act, fall into this dreaded high-risk category and are, as such, guilty until proven otherwise – surely through a cumbersome and expensive process. Any teacher or small company wishing to innovate in this arena will find themselves under the scrutiny of the competent authority, which will inevitably act as a powerful deterrent to innovation.
I have painstakingly tried to discern who wrote the text of the act but, as is often the case with these complex multiyear processes engineered by successive committees, it is almost impossible to do so. Transparency problems are not, it seems, exclusive to neural networks…
Although I could be wrong, I would wager that no K12 classroom educator, past or present, had any significant input into how educational AI systems should be risk-categorized. The result is a major faux pas for AI-driven personalization of learning, and a precise example of the risk of regulation. Adaptive AI learning software is one of the best applications of AI in any field, and the new act severely curtails it.
I am sure that this misreading of AI-based adaptive systems is not exclusive to K12 education, but it stings most there, because AI can have a truly significant positive impact on schooling.
Should AI be regulated? My personal opinion is that, at this day and age, the general public can provide the best self-regulation through positive uses that will, as the history of civilization shows us, overwhelmingly outnumber the negative ones. What I think, however, is irrelevant. What matters is that we educators make our voices heard, as fast and as loud as possible, so that the inevitable regulation still leaves room for the long-awaited and much-needed breakthroughs in the personalization of education.