AI Beyond the Code: A Call for Universal AI Education, Digital Sovereignty, and Global Cultural Understanding
Herman Cheung
Helping you amplify your career through tech, not toil | MD@Corci | Business & Personal Growth Leader
Exactly 2 years ago today, I submitted the following essay for my MBA, arguing for three key development areas to improve the democratisation and accessibility of AI. Today, the three points on universal AI education, self-sovereign identities, and global cultural understanding are even more important as we venture into a world set to be dominated by GenAI, and as regulators struggle to find the balance between protecting us and encouraging AI innovation.
My essay has been slightly edited for the purposes of LinkedIn.
The proliferation of digital technology and Artificial Intelligence (AI) has accelerated under the COVID-19 pandemic: Adobe found that eCommerce, a leading sector in AI adoption, saw 4 to 6 years’ worth of growth as a direct result of the pandemic. But as technology and AI advance to provide consumers with more convenience and businesses with an operational edge, our education system, legal system, and our mental models around ethics have not yet caught up. This has resulted in growing debate around the ethics of how data is used and urgent calls for stronger governance of how private companies benefit from our data. I believe governments alone cannot regulate, let alone control, the fast-paced development of the technology and the varying interests and policies that private companies are implementing. Ultimately consumers, the generators and ethical owners of the data, will have the largest say and impact, as they have the collective economic power to force companies to take or reverse actions...
To achieve transparency and an uplift in understanding of AI, there are three key areas that I believe require prioritised attention:
1) Universal education on AI for all age groups
As AI and digital technology impact people of all ages and educational levels, it is important that AI education is included as a core subject within the schooling curriculum and on open-access platforms for adults. The growing presence of AI within our daily devices means the responsibility of understanding its impacts extends to every citizen. The stigma that AI is complex and inaccessible to those outside of higher education has deterred many from taking part in the conversation, despite every citizen being subject to the decisions and actions of AI (Marr, 2018). It is general awareness and accessibility that enable citizens to take part in collective action and in decisions on how society should be shaped by key societal changes; this extends beyond AI and has been seen in other key topics such as climate change and COVID policy. Even now, the conversation around AI’s progress and governance has been heavily dominated by academics and company representatives, an inadequate reflection of the sheer scale of people impacted by AI. Without doubt, AI will affect every citizen differently but to equally significant degrees; the debates and the drive for action cannot be delegated to a select few, and everyone must be empowered and encouraged to take part.
Key action on education: The priority is to implement universal education on AI so that all those who want to be part of the conversation are able to do so. This begins with demystifying what AI is by steering away from the complex language and formulas that can unintentionally alienate the general public. The education needs to first focus on what AI really is, what it can and cannot do, and the role of our data in AI applications; this will enable a common understanding of the technology beyond the sensationalist views and create a platform for debate to begin. The debates should focus on where and how AI is used, the motivations behind its adoption by companies and governments, and the short-term and lasting impacts of AI decisions.
2) Enabling self-sovereignty over our digital identity
While governments are progressively implementing laws that require companies to seek permission when data is collected and used, the overall manageability from a user’s point of view is often overlooked, with a great divide between what users perceive is being collected and analysed and what is happening in reality (Matsakis, 2019). This can be attributed to technological factors, such as the ability of machines to ingest inputs and correlate events that are often too subtle for humans to perceive, or to company actions, which are often downplayed and masked by the convenience they provide to users. Take for example Google Photos and Apple Photos: to the user, the ability to see and search the photos they took (a functionality driven by AI) is largely the same, and it is understandable that users would expect both apps to require similar data permissions. However, this could not be further from the truth; Google links and processes significantly more types of the user’s data.
This reflects the fundamentally different considerations and approaches each company takes towards data and data privacy: to Apple you are a customer of the iPhone, while to Google you are the product for its advertisers. This complexity grows exponentially as users increase the number of apps (and thus companies) they use. As each company has access to, and places different emphasis on, certain data points, each will have a different perception of who the user is, resulting in a multitude of unique digital identities of the user being stored online. As AI is applied to these identities, it can amplify these distorted and partial views, reducing the transparency and explainability of how each company builds its holistic view of the user. To increase transparency, it is not enough for governments to simply require user consent for the sharing of data; regulation must go further and enable a manageable yet decentralised way for users to view and control their digital identities, i.e. regulation that gives users full self-sovereignty of their digital selves.
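To make the idea more concrete, here is a minimal sketch in Python of what user-held, self-sovereign identity could look like. This is a toy model I have added purely for illustration (not any real standard such as W3C Decentralised Identifiers, and the names and attributes are hypothetical): the user keeps the attributes, and each app only ever sees a scoped, revocable grant.

```python
# Toy illustration of self-sovereign identity: the user, not the company,
# decides which slice of their identity each app can see, and can revoke it.
from dataclasses import dataclass, field


@dataclass
class DigitalIdentity:
    attributes: dict                               # e.g. {"name": ..., "photos": [...]}
    grants: dict = field(default_factory=dict)     # app name -> set of allowed attribute keys

    def grant(self, app: str, keys: set) -> None:
        """User explicitly decides which attributes an app may read."""
        self.grants[app] = set(keys) & self.attributes.keys()

    def revoke(self, app: str) -> None:
        """User can withdraw an app's access at any time."""
        self.grants.pop(app, None)

    def view_for(self, app: str) -> dict:
        """The app receives only the granted slice of the identity."""
        return {k: self.attributes[k] for k in self.grants.get(app, set())}


# Hypothetical usage: convenience (photo search still works) without oversharing.
me = DigitalIdentity({"name": "Herman", "photos": ["img01.jpg"], "location": "HK"})
me.grant("photos_app", {"photos"})
print(me.view_for("photos_app"))   # {'photos': ['img01.jpg']}
me.revoke("photos_app")            # sovereignty: access can be withdrawn
```

The point of the sketch is the direction of control: the identity lives with the user, and companies consume scoped views of it, rather than each company assembling its own shadow version of who we are.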
Key action on education: Even as young kids we were taught to safeguard our personal details to ensure our physical safety, yet this astuteness has not carried over into the digital world because the consequences of sharing information do not seem as concrete: there is no immediate and clear feedback when users overshare, and advanced techniques in user experience design make it all too simple to do so. As such, education needs to be made available on how users can proactively manage not just their data but also their digital identities, rather than blindly relying on private companies to do this on their behalf. While some sharing of data is inevitable to enable convenience, users should be educated enough to decide their personal trade-off point between privacy and convenience.
3) Reflection of cultural differences in a globalised setting
While the internet knows no international borders and lends itself to the acceleration of globalised business models, the ethics and norms that bind users, as well as governmental institutions, are nowhere near as universal. The overall effect has been the fragmentation of AI: up to now, the fear of falling behind in the AI race has driven companies and countries alike to hoard AI capabilities. Among companies, we have seen aggressive tactics by large technology firms to acquire smaller competitors whose technology and AI threaten their dominance (Esposito, Goh, & Tse, 2019); among countries, we see commentary on a ‘Second Cold War’ (Heath, 2020). This has led to blanket bans on applications by governments and extreme activist actions by concerned citizens. It has also driven irrational fear about how certain countries might exploit AI and technology more quickly because they are perceived to have a ‘lesser’ requirement for privacy or a ‘poorer’ set of ethical standards. While it is easy to pass judgement from our own entrenched (and subconsciously biased) views, companies and governments need to maximise the opportunities within their own confines to maintain progress, while also looking to harmonise differences to prevent a future where practices and knowledge are too divergent to be reconciled. A world fractured in this way would slow down global trade and human advancement, and further entrench nationalistic tendencies.

To resolve this, we must bring about the democratization of AI while acknowledging the differences between the contributors and players in the complex AI and geopolitical ecospheres. AI innovation itself can provide solutions, such as jointly developed AI used as a medium to translate between different standards: for example, AI that facilitates cross-border financial transactions between countries with differing financial crime standards while respecting and harmonising the legal requirements of the transaction in both countries. Another application could be the use of technology advances such as Generative Adversarial Networks (GANs) to create synthetic versions of sensitive training data, enabling countries to share and jointly advance AI when the original data sets cannot be shared due to differing privacy standards.
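To illustrate the GAN idea above, here is a minimal sketch assuming PyTorch is available, with toy dimensions and a stand-in "sensitive" dataset of my own choosing. It trains a small generator and discriminator against each other and then samples synthetic records that mimic the real distribution, which is the kind of output that could be shared in place of the original data.

```python
# Minimal GAN sketch: learn to produce synthetic records resembling a
# (hypothetical) sensitive 2-D dataset, so synthetic copies can be shared.
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM = 8, 2   # toy sizes chosen for illustration

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 32), nn.ReLU(),
    nn.Linear(32, DATA_DIM),
)
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 32), nn.ReLU(),
    nn.Linear(32, 1),         # raw score; the loss applies the sigmoid
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()


def sample_real(batch_size: int) -> torch.Tensor:
    # Stand-in for the real sensitive dataset (here: a shifted Gaussian).
    return torch.randn(batch_size, DATA_DIM) * 0.5 + torch.tensor([2.0, -1.0])


for step in range(2000):
    real = sample_real(64)
    fake = generator(torch.randn(64, LATENT_DIM))

    # Train the discriminator to tell real records from synthetic ones.
    opt_d.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()

    # Train the generator to produce records the discriminator accepts as real.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

# Synthetic records that mimic the real distribution, shareable in place of raw data.
synthetic = generator(torch.randn(1000, LATENT_DIM)).detach()
```

In practice, releasing GAN output as a privacy measure still requires care (synthetic data can leak information about the records it was trained on), but the sketch shows the basic mechanism the paragraph refers to.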
Key action on education: The priority is to democratize AI through wider education and by encouraging people from all cultures and perspectives to be involved. This is necessary to remove the current siloed nature of development and to reverse the fears of a next Cold War, which could otherwise lead to AI deeply entrenched in biases and further widen the distance between companies and countries.
Conclusion:
Without doubt, AI will have a revolutionary impact on the world, and while the speed of its development can cause fear and concern, it is through education, the lifting of baseline knowledge of AI, and the democratization of AI that we will be able to draw on and collaborate across our unique cultural differences in developing AI for the joint good of humanity.