Light with shadows: AI Bias
Kelvin Lwin
No technology has greater potential to either empower or disenfranchise people in the coming century than Artificial Intelligence (AI). Any organization interested in shaping the future needs a stake in the development of AI, especially one seeking to change societies through an engaged citizenry. The benefits of the technology are widespread; medical diagnostic software, for example, improves the accuracy and availability of specialist care. However, many AI and Deep Learning applications, often deployed as quickly as economic pressures demand, are built without a full understanding of their complexity or consideration of their impact. Deployed this way, they can, and often do, amplify the biases already present in their training data, reinforcing the disadvantages underprivileged groups already face. Several egregious examples have recently come to light: a voice assistant that cannot hear female voices; a chatbot that learned to spew racist vitriol; a tool used in criminal sentencing that overestimates the likelihood that black men will commit future crimes.
A number of solutions have been proposed to remedy this inequity, yet none seems fully adequate to the challenge. Democratic government regulation, one such proposal, is historically slow and prone to gridlock. Autocratic governments could act more quickly, but they can hardly be trusted to act in the best interest of their populations or of the world at large. Building AI applications with provably unbiased algorithms is not simple either: just as a person can offer an unbiased-sounding justification while actually holding biased views, a model can produce plausible rationales while still encoding bias. And even with the earnest efforts of diverse teams, delivery pressures rarely allow enough time to make these applications completely fair.
As the "psychopath" AI recently trained at MIT shows, an AI simply reflects the examples it is shown. Thus, to effectively address these issues, work must be done at the fundamental level of the data. Our proposal is a centralized, massive repository of datasets gathered from as many different sources as possible, maintained and overseen by a broad coalition of contributors, both individual and organizational, from all walks of life.
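To make the point concrete, here is a minimal sketch of how a model "reflects the examples it is shown." The loan-decision data, group labels, and majority-vote "model" below are all hypothetical illustrations, not anything from a real system: even the simplest possible learner, fit on historically skewed data, reproduces the skew.

```python
from collections import Counter

# Hypothetical training set: loan decisions labeled by demographic group.
# Group "A" is mostly approved, group "B" mostly denied -- a bias baked
# into the historical data, not into the algorithm.
training_data = (
    [("A", "approve")] * 90 + [("A", "deny")] * 10
    + [("B", "approve")] * 20 + [("B", "deny")] * 80
)

def fit_per_group_majority(rows):
    """Learn the most frequent label per group -- the simplest possible
    'model', yet it already encodes the bias present in the data."""
    by_group = {}
    for group, label in rows:
        by_group.setdefault(group, Counter())[label] += 1
    return {g: c.most_common(1)[0][0] for g, c in by_group.items()}

model = fit_per_group_majority(training_data)
print(model)  # {'A': 'approve', 'B': 'deny'} -- the historical bias, replicated
```

No amount of tuning this "algorithm" removes the disparity; only changing the data does, which is the motivation for intervening at the data level.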
Such a centralized repository provides a straightforward mechanism for working with data that carries bias. Multiple experts with varied experiences can be brought to bear on the data far more readily than, given current technical constraints, they can be at the level of the AI algorithms themselves. Special care can then be taken when biases against underrepresented demographics are found: the data can be augmented by representatives who come from those backgrounds. No matter how much empathy someone has, no one can fully replicate the way another person sees the world or notice every possible bias. Maintaining this vast repository through a broadly represented coalition ensures that sufficient energy and resources are available for its creation and upkeep, and avoids the temptation to stockpile data for the narrow benefit of a single stakeholder. This repository, which we call the Human Insight Project, should serve all of humanity. Whereas the Human Genome Project gave us a complete mapping of human genes, this project would give a complete mapping of human knowledge, permitting AI to better understand and help humans.
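As a sketch of the kind of check a shared repository could run routinely, the snippet below measures each demographic group's share of a dataset and flags groups that fall below a representation threshold, marking them as candidates for targeted augmentation. The "group" field, the 10% threshold, and the sample counts are illustrative assumptions, not part of the proposal itself.

```python
from collections import Counter

def audit_representation(records, key="group", threshold=0.10):
    """Return each group's share of the dataset and the list of
    groups falling below the representation threshold."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    shares = {g: n / total for g, n in counts.items()}
    flagged = [g for g, share in shares.items() if share < threshold]
    return shares, flagged

# Hypothetical dataset of 1,000 records, heavily skewed toward group A.
records = [{"group": "A"}] * 880 + [{"group": "B"}] * 90 + [{"group": "C"}] * 30

shares, flagged = audit_representation(records)
print(shares)   # {'A': 0.88, 'B': 0.09, 'C': 0.03}
print(flagged)  # ['B', 'C'] -- candidates for augmentation by representatives
```

An audit like this does not fix bias by itself; its value in the proposal is that the flagged gaps are handed to contributors from the affected groups, who are best placed to supply the missing perspective.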