AI Needs Experts
Lauren Romero, MBA
AI Native GTM/Philanthropic Innovation | Creator of HAIven Ecosystem | Former Strategist: Coca-Cola, Mars, Whole Foods, Publicis (Nestlé), + Honda, UPS, AT&T | Most U.S. Grocery Retail
The irony of early technology adoption is that just as inventors and developers seize the spotlight and the funding, the people with the expertise to inform a technology's optimal inputs and uses are underutilized or sidelined from its development and real-world application.
A few years ago, I wrote a little article comparing emerging AI technology to a young human, pointing out the importance of guiding its learning with intention and care. Now, the inventors of AI have repeatedly raised the alarm about its potential to get out of control, to absorb irrelevant, untrue, or nefarious information and biases, and ultimately to hurt humans more than it helps. The genie is out of the bottle. Just as with humans, some AIs have been exposed to “bad childhoods” and act out in less-than-helpful ways.
Now, as corporate layoffs anticipate a world in which humans can be easily replaced by AI, we’re already discovering the shortsightedness of this assumption. Cast a wide view across the many industries and job categories being affected by AI mania, and it becomes evident that some are not suited to full AI replacement of human experts. In too many cases, those experts aren’t being sufficiently included in recommending AI input sources, screening and validating data, setting productive prompt/query guidelines, and sensibly evaluating and using the output.
In another irony, ChatGPT is surprisingly good at telling on its own kind. I prompted it for anecdotes, with sourcing, of situations across a variety of sectors where AI failures led to bad and costly outcomes.
ChatGPT:
Here are several anecdotes illustrating situations where AI systems led to bad or costly outcomes across different sectors:
1. Healthcare – Misdiagnosis by IBM Watson in Cancer Treatment
What happened: IBM’s Watson AI was deployed to assist oncologists in recommending cancer treatment plans. However, it was revealed in 2018 that Watson often gave erroneous or unsafe treatment suggestions. For example, it once recommended administering a treatment that could have caused severe bleeding to a cancer patient.
Outcome: Watson's errors led to distrust from doctors and patients, and the AI system was ultimately pulled back from commercial use in healthcare. The project’s failure cost IBM millions in investments and damaged its reputation as a leader in healthcare AI.
Source: Stat News
2. Financial Sector – Algorithmic Trading Leading to Flash Crashes
What happened: In 2010, an AI-driven algorithm used in high-frequency trading contributed to a “Flash Crash” where the U.S. stock market plummeted nearly 1,000 points within minutes, erasing $1 trillion in market value before recovering. Algorithms started selling off stock based on perceived risks, which created a cascading effect.
Outcome: Although the market rebounded, the crash caused massive temporary losses and raised concerns about the dangers of automated trading. It led to new regulatory scrutiny and highlighted how AI could amplify systemic risk.
Source: The New York Times
3. Government – UK A-Level Grading Algorithm Scandal
What happened: In 2020, during the COVID-19 pandemic, the UK government used an AI algorithm to replace standardized exams for A-Level students. The algorithm was meant to predict students' grades, but it disproportionately downgraded students from lower-income schools while favoring private school students.
Outcome: This sparked national protests and significant political fallout, and eventually the government had to scrap the system and rely on teacher-assigned grades. The incident eroded public trust in AI in the education system and led to substantial reputational damage for the government.
Source: BBC News
4. Transportation – Uber's Self-Driving Car Fatality
What happened: In 2018, an Uber self-driving car killed a pedestrian in Arizona. The AI system had been trained to distinguish objects like pedestrians but failed to react to a person pushing a bicycle across the road. A human safety driver was present but wasn’t paying attention at the critical moment. The AI system had disabled its emergency braking to avoid false positives.
Outcome: This tragedy halted Uber’s self-driving car program temporarily, led to public outrage, and prompted stricter regulations for autonomous vehicle testing. Uber eventually shut down its self-driving unit, and the incident revealed critical safety gaps in AI technology.
Source: The Verge
5. Retail – Amazon’s Biased Hiring Algorithm
What happened: In 2018, Amazon scrapped an AI tool it had been developing to help with recruitment. The AI was trained on resumes submitted over a 10-year period but was found to be biased against women. It downgraded resumes that contained words like "women's" (e.g., "women's chess club") and systematically favored male candidates for tech roles.
Outcome: Amazon had to abandon the project after failing to correct the bias, costing millions in development and revealing the limitations of AI in sensitive HR processes. It also raised broader concerns about AI perpetuating bias in hiring decisions.
Source: Reuters
[LRomero - This one relates to a passion pursuit of mine, which is to eliminate gender biases in hiring practice, culture, and technology.]
6. Insurance – Discriminatory AI in Health Insurance Premiums
What happened: A major health insurance company used AI algorithms to determine customer premiums. However, in 2019, a study found that the algorithm systematically underestimated the healthcare needs of Black patients. It prioritized cost-saving measures over patient well-being, leading to unequal healthcare access.
Outcome: The company had to redesign its system under public pressure. This incident led to increased scrutiny of AI’s role in perpetuating racial and socio-economic disparities in healthcare, as well as new calls for fairness and transparency in algorithmic decision-making.
Source: Science
7. Social Media – Facebook’s Misinformation Algorithm
What happened: Facebook’s AI was tasked with identifying and reducing misinformation and harmful content. However, during the 2020 U.S. election and the COVID-19 pandemic, the algorithm inadvertently promoted sensational or false content because these posts generated more engagement. This undermined efforts to curb misinformation.
Outcome: Facebook faced severe backlash, Congressional hearings, and widespread public criticism. The company had to revamp its approach to content moderation, which resulted in significant operational costs. The failure highlighted the challenge of balancing AI-driven engagement with ethical content management.
Source: The Washington Post
8. Law Enforcement – False Arrest Due to AI Facial Recognition
What happened: In 2020, Robert Williams, a Black man in Detroit, was wrongfully arrested because an AI facial recognition system incorrectly identified him as a suspect in a crime. AI facial recognition has been criticized for racial bias, particularly against darker-skinned individuals.
Outcome: This incident led to widespread calls for the ban or restriction of AI facial recognition in law enforcement, significant legal costs for the police department, and heightened public concern over privacy and civil rights violations due to flawed AI systems.
Source: The New York Times
These examples demonstrate the risks of using AI in complex, high-stakes areas where errors can have serious financial, social, and ethical consequences.
End ChatGPT content
We can’t leave everything to the machines. Subject matter experts are more critical than ever, particularly when it comes to the human experience of things: the nuances of how humans interact with the world and communicate with each other. An AI will default to the norm or the critical mass, excluding outliers unless specifically prompted for them by an expert who understands how and where they might exist. When it comes to problem-solving creativity and innovation, outliers tend to lead the way. Also, in the process of learning, we only see what we shine the light on. AI cannot identify what has not been included in its dataset. AI misses relationships among elements of experience that humans intuitively process and act upon, the most complex being the human element itself.
The more experience a human expert has, the more data points, by the millions, they have absorbed, and the more statistically valid their “intuitive” observations and conclusions become. As a marketing and social researcher for nearly 30 years with experience integrating a wide range of quantitative and qualitative study methodologies, I’ve learned that qualitative study by and with experts is a critical, and often more sophisticated, component of “research” than quantification for the purpose of gauging magnitude and significance. Unfortunately, those who claim to be highly data-driven are often referring only to quantification, dismissing the qualitative as “soft” and “subjective”.
How can you know the right thing to measure if you aren’t even looking in the right place? Talk to human experts. In each of the cases above, they were the ones who identified serious problems with AI inputs and outputs. They are the ones who can make AI deliver optimal ROI, however it is defined.
Last but not least, to the organizations casting out “old” experts in favor of “new” and supposedly less costly AI technicians with far less experience in the industry the technology serves: be prepared to spend more on the very experts you retired or overlooked when you need to hire them back as consultants to make your technology investments productive.