Protecting Our Future: The Importance of Mitigating AI Bias
Feedback loops play a crucial role in shaping the capabilities of AI systems. Much as humans refine their abilities through a series of successes and failures, AI systems improve by learning from the outcomes of their own predictions. However, it's important to note that these same feedback loops can also introduce, perpetuate, or amplify biases, as the sketch below illustrates.
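To make this concrete, here is a minimal, illustrative sketch (all numbers and the reinforcement rule are hypothetical, not drawn from any real system) of how such a loop can amplify a tiny initial skew: a system allocates attention in proportion to past recorded "hits," over-weights the current leader, and only collects new data where it is already looking.

```python
# Hypothetical sketch of a bias-amplifying feedback loop.
# Two groups have the same true "hit" rate, but the system over-weights
# the current leader and only records hits where it is already looking,
# so a 51/49 starting skew snowballs toward one group.
import numpy as np

rng = np.random.default_rng(0)
hits = np.array([51.0, 49.0])  # equally deserving groups, tiny initial skew

for step in range(15):
    weight = hits ** 2                      # superlinear: favors the leader
    share = weight / weight.sum()           # fraction of attention per group
    reviews = rng.multinomial(1000, share)  # new data follows attention
    hits += 0.1 * reviews                   # identical true hit rate of 10%
    print(f"step {step:2d}: group A receives {share[0]:.1%} of attention")
```

The exponent stands in for any superlinear reinforcement rule, however it arises in practice: it turns a negligible difference into a runaway gap, whereas with linear reinforcement the initial skew would merely persist rather than grow.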
AI has already demonstrated its potential to revolutionize various fields through innovative applications such as art creation and fraud detection. Its ability to automate tedious tasks, analyze vast amounts of data, and enhance decision-making processes is widely recognized. As AI adoption grows, its impact on billions of people and on society as a whole will become more pronounced. AI will play a crucial role in areas such as finance, national security, healthcare, and criminal justice, making decisions that determine a person's job prospects, education opportunities, or insurance rates. It may also shape important societal matters such as education curricula, research funding, and social policy.
Before our future turns markedly dystopian, we need to recognize that bias can easily influence results, with profoundly detrimental implications. Bias can stem from many sources, including the algorithm itself, the training data, and the context in which a system is used, but it most often arises from deficient or skewed training data that encodes past human prejudices. To prevent dire consequences, we need to work proactively to eliminate bias in AI.
The following example highlights how decisions made by AI algorithms can have discriminatory consequences, even when attempts are made to exclude race as an input. In 2016, Amazon excluded certain neighborhoods from its same-day Prime delivery service, ostensibly without regard to race. The decision relied on whether a particular zip code had a sufficient number of Prime members, was near a warehouse, and had enough staff to service it. While the model did not rely on racial or economic data, it nevertheless excluded poor, predominantly African-American neighborhoods. Data bias is a similarly common problem, though not always as obvious as when Amazon introduced, and then abandoned, an AI recruitment system after it was found to be biased against women, likely because its training data was dominated by resumes from male applicants over a ten-year period.
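A small synthetic sketch (hypothetical data and features, not Amazon's actual model) shows how this kind of proxy bias arises: the classifier never sees the protected attribute, but because zip code correlates with it and the historical labels encode past discrimination, the trained model reproduces the disparity anyway.

```python
# Hypothetical, synthetic illustration of proxy bias: the model is trained
# without the protected attribute, yet zip code acts as a stand-in for it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 20_000

# Protected attribute (never shown to the model).
group = rng.integers(0, 2, n)

# Zip code correlates strongly with group membership (segregation).
zip_code = np.where(rng.random(n) < 0.9, group, 1 - group)

# Historical labels reflect past decisions that favored group 0.
income = rng.normal(50, 10, n)
approved = (income + 8 * (group == 0) + rng.normal(0, 5, n)) > 52

# Train on zip code and income only -- race is "excluded".
X = np.column_stack([zip_code, income])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: approval rate {pred[group == g].mean():.1%}")
```

Dropping the protected column changes nothing here; auditing for this kind of bias requires measuring outcomes by group, which ironically means knowing the very attribute the model was built to ignore.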
The advertisements and recommendations we receive for movies, books, and music are tailored to our individual tastes as technology "learns" to understand us over time. Similarly, effective search results from Google or responses from ChatGPT depend on the AI system's ability to quickly and accurately interpret our intended queries. Understanding the context behind our questions and the motivation underlying our interests is crucial for the continual evolution of AI systems; however, it also raises privacy concerns and the potential for harmful feedback loops to form.
Degenerate feedback loops, such as echo chambers and filter bubbles, are a major issue in social media and search results. These loops tend to reinforce users' existing biases by repeatedly exposing them to similar content. Filter bubbles, in particular, restrict the range of information users encounter, exacerbating the problem. In 2001, Harvard law professor Cass Sunstein predicted that the growth of the internet would polarize society and threaten the future of democracy. With the rise of AI, this threat is even more pronounced, as the biases these loops perpetuate may be more covert and difficult to address.
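The narrowing effect is easy to reproduce in a toy simulation (hypothetical parameters throughout, not any real platform's ranking logic): a feed that only ever shows a user's current top topics, combined with clicks that reinforce those topics, steadily collapses the diversity of what the user sees.

```python
# Hypothetical filter-bubble simulation: the feed shows only the user's
# current top 3 topics, clicks reinforce those topics, and the diversity
# of the click history (Shannon entropy) collapses over time.
import numpy as np

rng = np.random.default_rng(1)
clicks = np.ones(10)                      # ten topics, no initial preference

def entropy_bits(counts):
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

for day in range(1, 201):
    top3 = np.argsort(clicks)[-3:]        # the "filter": only top topics shown
    shown = rng.choice(top3)
    if rng.random() < 0.8:                # user usually clicks what is shown
        clicks[shown] += 1
    if day % 50 == 0:
        print(f"day {day:3d}: diversity = {entropy_bits(clicks):.2f} bits")
```

Diversity starts near log2(10) ≈ 3.3 bits with ten equally weighted topics and falls as three topics crowd out the rest. The "filter" line is the entire mechanism: topics outside the top three can never earn a click, so they can never re-enter the feed.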
The widespread adoption of AI, as evidenced by the rapid growth of ChatGPT and its 100 million users within two months of launch, has quickly brought the issue of AI's impact to the forefront. With over 12,000 AI startups in the US and 33% of US companies reporting AI usage within their operations, the race to leverage this innovation is on, with companies like Google and Microsoft rushing to release their respective AI tools this week, within a day of each other. However, amidst the rush to market, there is a growing need to balance the drive for innovation with concerns around governance and legality. The proliferation of new AI business use cases and the eager embrace of AI by consumers suggest that the adverse consequences of this technology may continue to be overshadowed by its perceived benefits.
While the application of AI in the private sector is rapidly expanding, from marketing and recruitment to optimization and decision-making, the implications of its use in the public sector are even more concerning. For example, the Department of Defense's Project Maven uses AI to analyze surveillance data for detecting suspicious activity. The city of Chicago tested an AI-powered "Strategic Subject List" that scores a person's likelihood of involvement in future crime. The Ukrainian government has recently deployed facial recognition technology to identify potential combatants and reunite separated families. While these uses clearly have the potential to provide benefits, they also raise serious concerns about misuse.
Many agencies in the US and abroad have outlined guidelines for trustworthy AI in recent years. The EU's "Ethics Guidelines for Trustworthy AI" comprises seven principles: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; environmental and societal well-being; and accountability. The National Institute of Standards and Technology (NIST) focuses on reliability, safety, privacy, and transparency, while the US Department of Defense's ethical guidelines for AI emphasize exercising appropriate judgment, minimizing bias, ensuring transparency and accountability, ensuring reliability and safety, and being able to detect and avoid unintended consequences.
The complex risks and biases inherent in AI underscore the need for rigorous guidelines governing its use. While companies may prioritize market share and profitability, it is imperative that we advocate for the responsible use of AI. By raising awareness of responsible AI practices, we can play an active role in shaping the future and ensuring that this technology serves the greater good.