Who's Driving?
Who's driving? I love school buses. I find their existence profound and their journeys magical, especially the iconic yellow ones found all over the United States. The concept of transporting students to school predates what we see today, but did you know this recognizable yellow school bus was not introduced until the 20th century? The development of the modern-day school bus is attributed to Frank W. Cyr, a professor at Teachers College, Columbia University. In 1939, Cyr organized a conference that established design and safety standards for school buses. This led to the standardization of the iconic yellow color, size, and design of school buses in the United States, and eventually most parts of the modern world.
However, like any vehicle, school buses have been involved in incidents resulting in accidents, injuries, and, tragically, fatalities. And this is the most regulated, most standardized vehicle ever. School districts, transportation departments, law enforcement agencies, and regulatory authorities prioritize safety protocols, training programs, maintenance inspections, background checks for drivers, and stringent regulations to ensure the safe operation and use of school buses. And then we have the moving time bombs, the pickup wagons that transport students to school in countries like Pakistan. Despite several tragic incidents resulting in the loss of precious lives, we still see these wagons all around.
I also love AI. What a magical presence, profound indeed. And just like the school bus, it transports our future to a place of learning. Yet unlike the school bus, which transports only a fixed number of children and is still driven by someone with a heart and soul, AI is a self-driving car with a mind of its own, a mind learning at the speed of light, with all of us being transported to an unknown destination at the same time. And that begs the question… who’s driving?
Everybody, and nobody! This answer scares me. The tech giants, including Google, Microsoft, IBM, and OpenAI, have developed ‘ethical’ AI principles, guidelines, and frameworks addressing responsible AI development, deployment, governance, transparency, fairness, accountability, and societal impact. Governments and regulatory bodies worldwide are increasingly focusing on AI governance, ethics, laws, regulations, standards, compliance, oversight, and accountability to address the ethical, legal, social, economic, and security implications of AI technologies. Various countries and international organizations, such as the European Union, have proposed policy frameworks, guidelines, and initiatives on AI ethics, governance, human rights, data protection, privacy, security, and trust. And yet, here I am, petrified, thinking that’s not enough. The world stands more divided today than ever before. It doesn’t take a genius to figure out that the world’s leaders are incapable of achieving anything significantly ‘good’ by collaborating.
Ask yourself this: how did you feel when Twitter and/or Meta flagged one of your posts as a violation of their policy because it was content made in solidarity with the Palestinians? I am not trying to take a political stance here, just making a point. Three years ago, Adam Bensaid's TRT World report, titled "Workplace and Algorithm Bias Kill Palestine Content on Facebook and Twitter," shed light on events that signify a longstanding and institutionally accepted challenge to the cherished liberal values of free speech, integral to the foundation of democratic societies. While some contend that it's not acceptance but rather the unchecked growth of Big Tech that is the issue, the implications of social media companies neglecting what is now recognized as algorithmic bias, and maintaining a silent stance despite receiving complaints, are alarming enough. And these tech giants are the ones ‘regulating’ AI governance, fairness, transparency, and accountability?
Like Dr. Nekhorvich from Mission: Impossible II said:
“Every search for a hero must begin with something that every hero requires: A villain”.
Let’s assume AI is both the Chimera and the Bellerophon of our time. We need a public-private initiative that engages civil society stakeholders, AI experts, lawmakers, and regulatory bodies through consultations, dialogues, forums, surveys, and other participatory processes to gather insights, perspectives, values, concerns, expectations, and recommendations on AI ethics, governance, and societal impact.
And fashionably speaking, we need that to have happened yesterday! The future is already here, just not evenly spread yet. Can we ever align on a comprehensive, global moral code for AI? I don’t think so, but we desperately need to keep trying. This can only happen through a collaborative, multi-disciplinary, multi-stakeholder, and global effort aligned with human values, rights, dignity, well-being, and societal priorities.
Public trust is key. We need to invest in education and awareness about AI, its capabilities, and its potential risks. That empowers individuals to make informed decisions and advocate for responsible AI practices. The velocity of AI’s growth is unlike anything mankind has ever witnessed before. We need continuous engagement with key stakeholders to facilitate a dynamic and responsive governance framework, one that enables adjustments to policies and regulations based on evolving technologies and societal needs. And we need to do that without stifling technological advancement. Involving experts and industry stakeholders at every step will help strike a balance between fostering AI innovation and ensuring responsible use. Chaos and order, yin and yang... the slightest imbalance threatens our existence.
This is why I love the iconic yellow school bus; it transports the creative geniuses while mirroring the structured, regulated world we know. And I want to ride that bus on this unpredictable highway as we explore the realm of Artificial Intelligence. Accidents may still happen, but at least we’d be traveling with headlights and a compass – a comprehensive, global moral code. The urgency is intense; the journey profound… one that necessitates collaboration, awareness, and the continuous pursuit of equilibrium. In the words of Cyberdyne Systems Model 101 (or the T-800),
“Come with me if you want to live!”
Published in Dawn Aurora, Jan-Feb 2024 Issue: https://aurora.dawn.com/news/1145047