When reality is scarier than AI...
The other day I was presenting online to a group from various industries in Palo Alto, California, and one of the participants asked a very legitimate question, especially coming from one of the world's hottest areas of AI innovation: how were we in education addressing concerns about inappropriate content that might be generated by AI? I answered, only half jokingly, that we were more concerned with reality than with AI.
We strive, almost naively, to educate amid a strikingly paradoxical state of affairs. Our attempts at instilling values such as diversity, tolerance, respect for others and their opinions, and the seemingly hopelessly outdated value of civility fall flat in the face of innumerable examples where reality flagrantly contradicts each and every one of them.
One of the legitimate concerns associated with the development of generative AI applications stems from the fact that, although the major players have not explicitly disclosed the origin of their training data, it is self-evident that these systems have been trained on real-life data and, as such, might reproduce real-life biases.
However, companies have invested time and effort in debiasing their training data to avoid, inasmuch as possible, perpetuating these real-life biases. As a result, most of these systems are politically correct and properly sanitized, to the point where it is now hard to "provoke" them into straying from neutral answers and polite interactions, which, ironically, cannot be said of the media and social networks. Hard as it is to believe, an AI can be safer for students than real-life interactions, at least in the online world.
It might very well be that the development of the critical insight needed to become discerning users of generative AI will also serve us well in navigating the more perilous waters of real-life exchanges. We have stressed repeatedly that, given the power and inevitable future exponential growth of these systems, it is essential that students gain a deep understanding of their inner workings, including the crucial issue of bias.
Indeed, to this day, some of the mainstream AI image generators will still invariably depict women for the prompt "A nurse ministering to patients" and young white males for "Engineer working on a computer". This can lead to an interesting and relevant discussion of how this happens not through some collective confabulation of AI engineers but because, despite efforts to cleanse these systems, their opaque nature leaves them vulnerable to generating such biased images.
And, in the supercharged prevailing ideological environment, AI systems and their biases could be an ideal ground for implicitly discussing our human values and attempting to rekindle our sense of humanity.
This would not be a first in the history of technology, and it may ultimately prove to be the greatest benefit of the development of AI. In the sixties, the US and the USSR were locked in a battle to be first to the moon and consequently engaged in a mindless spending spree to achieve it, similar to the current race to attain AGI (Artificial General Intelligence).
Responding to a letter legitimately objecting to the money spent on space exploration, Ernst Stuhlinger, one of the German scientists developing rockets for NASA, attached the famous Earthrise picture and stated:
The photograph which I enclose with this letter shows a view of our Earth as seen from Apollo 8 when it orbited the moon at Christmas, 1968. Of all the many wonderful results of the space program so far, this picture may be the most important one. It opened our eyes to the fact that our Earth is a beautiful and most precious island in an unlimited void, and that there is no other place for us to live but the thin surface layer of our planet, bordered by the bleak nothingness of space. Never before did so many people recognize how limited our Earth really is, and how perilous it would be to tamper with its ecological balance.
In this day and age, when we educators collectively scratch our heads, futilely trying to make sense of conflicting stimuli in this crazy world of ours, it may be that AI is serendipitously providing us with a mirror that reflects our contradictions and frailties and, like the first full view of the Earth, alerts us to the risks not just of these models we are building, but of the real-life world they reflect.
We need to use this opportunity not only to educate ourselves on how best to ensure that AI systems are safe, but also to try to express in them the better nature of our humanity, which, at this baffling juncture in our history, is an even greater challenge than any of the technical issues involved. Stuhlinger famously concluded the letter with a phrase attributed to Albert Schweitzer: "I am looking at the future with concern, but with good hope." We can all definitely relate to the first part, but it is, once more, up to us educators to act on the second, being, as we have always been, architects of hope.