You Cannot Program Empathy or Intuition - Regulators Express Concern over Industry Reliance on Artificial Intelligence
William M. Le Roy, J.D., LL.M.
Founder & Principal at PHOENIX Consulting, LLC.
Source: https://www.brookings.edu/research/an-ai-fair-lending-policy-agenda-for-the-federal-financial-regulators/
Excerpt: The risks posed by AI/ML in consumer finance
"While AI/ML models offer benefits, they also have the potential to perpetuate, amplify, and accelerate historical patterns of discrimination. For centuries, laws and policies enacted to create land, housing, and credit opportunities were race-based, denying critical opportunities to Black, Latino, Asian, and Native American individuals. Despite our founding principles of liberty and justice for all, these policies were developed and implemented in a racially discriminatory manner. Federal laws and policies created residential segregation, the dual credit market, institutionalized redlining, and other structural barriers. Families that received opportunities through prior federal investments in housing are some of America’s most economically secure citizens. For them, the nation’s housing policies served as a foundation of their financial stability and the pathway to future progress. Those who did not benefit from equitable federal investments in housing continue to be excluded.
Algorithmic systems often have disproportionately negative effects on people and communities of color, particularly with respect to credit, because they reflect the dual credit market that resulted from our country’s long history of discrimination. This risk is heightened by the aspects of AI/ML models that make them unique: the ability to use vast amounts of data, the ability to discover complex relationships between seemingly unrelated variables, and the fact that it can be difficult or impossible to understand how these models reach conclusions. Because models are trained on historical data that reflect and detect existing discriminatory patterns or biases, their outputs will reflect and perpetuate those same problems."
For the Research Paper: https://www.brookings.edu/wp-content/uploads/2021/12/Akinwumi_Merrill_Rice_Saleh_Yap_12-01-2021-1.pdf
Probably valid, but I do believe that today's AI continues to improve and can be trained to ignore those biases. I understand the point about learning from the past, and therefore the potential to perpetuate the same biases - I just think that, knowing this, these models can and should be further refined to look past those biases.