AI, Children, and Education: A Cautionary Tale from Nevada
(These opinions and this analysis are my own and are based upon facts reported in the New York Times article covering Nevada's initiative.)
Artificial Intelligence (AI) in Public Policy: Real Impacts on Children and Education
Artificial intelligence (AI) is transforming public policy and organizational operations, offering potential benefits such as consistent analysis and data-driven insights, while also presenting serious social and ethical challenges when misapplied.
This article highlights key lessons from Nevada's case, particularly in sensitive areas impacting human rights. It explores how AI governance applying the HUDERIA framework could have helped mitigate these issues while improving outcomes for children, families, communities, the state, and the AI service provider.
Nevada's AI Initiative and Its Consequences
Nevada’s recent AI initiative in education sought to improve funding allocation for 'at-risk' students, but instead sparked an outcry when the number of students identified as needing support drastically dropped.
The algorithm incorporated numerous factors such as attendance, language spoken at home, and guardian engagement. Originally, the AI was also going to process race, gender, and birth country as factors, though these were omitted from the final implementation.
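For illustration only, here is a minimal sketch of how a weighted risk-scoring model might combine such factors. The feature names, weights, and threshold below are assumptions invented for this example; the vendor's actual proprietary criteria were never disclosed.

```python
# Hypothetical sketch only: a simplified weighted risk score built from the kinds
# of factors reported in Nevada's case. Names, weights, and values are assumed.

def risk_score(student: dict, weights: dict) -> float:
    """Return a weighted sum of risk factors, each normalized to the range 0.0-1.0."""
    return sum(weights[factor] * student.get(factor, 0.0) for factor in weights)

# Assumed example weights (not the vendor's real, undisclosed values).
weights = {
    "absence_rate": 0.4,             # share of school days missed
    "non_english_home": 0.3,         # 1.0 if a language other than English is spoken at home
    "low_guardian_engagement": 0.3,  # 1.0 if guardian contact/engagement is recorded as low
}

student = {"absence_rate": 0.15, "non_english_home": 1.0, "low_guardian_engagement": 0.0}
score = risk_score(student, weights)
print(score >= 0.5, round(score, 2))  # compared against an assumed funding threshold
```

Even in this toy form, the design choice is visible: whoever sets the weights and the threshold effectively decides which children receive support, which is why those choices warrant scrutiny.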
As a consequence of the AI system's recommendations, the number of children predicted to be in need of financial support dropped by 200,000, without taking into account the broader context that teachers, local officials, and families could have provided, had they been asked. AI governance principles hold that these stakeholders should have been consulted to avoid negative outcomes for the people impacted.
The Role of Human Rights Assessment (e.g. HUDERIA) in AI Governance
Although not legally binding in the U.S., frameworks like the Human Rights, Democracy, and the Rule of Law Impact Assessment for AI Systems (HUDERIA), adopted by the Council of Europe, provide valuable guidelines for state entities navigating these challenges. These frameworks enable a thorough consideration of how AI impacts groups at a societal level, helping ensure projects align with human rights and provide the necessary support to vulnerable populations.
If Nevada had conducted a thorough impact assessment and involved stakeholders with knowledge of the meaning, accuracy, and lineage of the data, it could have addressed the risks and likely impacts. This might have prevented the sudden and seemingly arbitrary defunding of resources for vulnerable children who rely on educational support services.
Data integrity and lineage (sometimes summarized as 'garbage in, garbage out'): the algorithm took into account data points such as attendance, language spoken at home, and guardian engagement - but how good was this data? Such data is often reported inconsistently by family members, and when an AI acts upon incorrect data, the impact of the inaccuracies is magnified.
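As a minimal sketch of what basic data-quality checks could look like before records ever reach a scoring model, consider the following; the field names and plausibility rules are assumptions for illustration, not Nevada's actual pipeline.

```python
# Hypothetical input-validation sketch: flag missing or implausible values in a
# student record before it is scored. Field names and rules are assumed.

def validate_record(record: dict) -> list:
    """Return a list of data-quality issues found in a single student record."""
    issues = []
    rate = record.get("absence_rate")
    if rate is None:
        issues.append("missing absence_rate")
    elif not 0.0 <= rate <= 1.0:
        issues.append(f"absence_rate out of range: {rate}")
    if record.get("home_language") in (None, "", "unknown"):
        issues.append("home_language not reported or unknown")
    if record.get("guardian_contacts") is None:
        issues.append("guardian engagement never recorded")
    return issues

records = [
    {"absence_rate": 1.4, "home_language": "", "guardian_contacts": None},
    {"absence_rate": 0.05, "home_language": "Spanish", "guardian_contacts": 3},
]
for i, rec in enumerate(records):
    print(i, validate_record(rec))
```

Checks like these do not fix bad data, but they make its prevalence visible, which is exactly the kind of evidence a data-lineage review or impact assessment would surface.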
Lessons in Stakeholder Engagement and Transparency
Another risk was the lack of stakeholder engagement itself, and the fact that decisions impacting the futures and opportunities of children were being made by an algorithm, without a democratic process.
Nevada’s use of a proprietary AI model by a private company (where the criteria for at-risk/needs funding designations were undisclosed) led to misunderstandings and mistrust among stakeholders, particularly educators and advocates. The lack of transparency meant that stakeholders could not understand why specific students were excluded, making it difficult to address concerns or advocate for adjustments.
This directly impacted schools' ability to provide necessary resources to vulnerable students, leading to significant disruption in support services. HUDERIA advocates for a Stakeholder Engagement Process (SEP) to ensure accountability and representation, particularly when AI-driven decisions impact vulnerable populations.
Performing a HUDERIA assessment would have highlighted this gap and possibly recommended mitigations -- for example, community-led focus groups reviewing the fairness of the algorithm's scoring methods. People closer to the source of the data are more familiar with its accuracy.
If the community had been involved in reviewing the data, the weights, and the scoring, the state might have arrived at a fairer and more transparent implementation, avoiding the harm and controversy at launch and enhancing both the perceived and actual fairness of the system.
The Role of Proportionality and Context in Implementing AI Systems
The Nevada AI system’s method of categorizing “at-risk” students did not fully account for the economic disparity in education funding across districts, and it therefore lacked important context.
The HUDERIA model recommends a context-aware analysis to identify high risks to fundamental freedoms and human rights. Such an assessment would involve evaluating the specific social, economic, and cultural context in which an AI system would be implemented to ensure that its risks were understood and appropriately managed.
Had the context of the AI deployment been considered, teachers and school administrators would certainly have pointed out (as one key example) that students are already at risk and reeling from the effects of the COVID pandemic, impacted to varying degrees based upon how COVID affected their homes and communities. Instead, it appears there was a 'forest-for-the-trees' problem, in which a data-driven technical team (focused on the data points and the math) seemingly ignored the broader societal, historical, and economic context of the implementation.
The Black Box: Transparency and Accountability in Proprietary Algorithms
Nevada’s reliance on a proprietary algorithm led to critical issues stemming from the lack of transparency and explainability of the algorithm. To prevent these issues, Nevada could have chosen a more open model or implemented accountability measures, including regular audits, stakeholder consultations, and public disclosure of the algorithm's criteria. The absence of such measures left decision-makers and impacted communities unable to verify or understand the determinations made by the AI.
Public sector AI applications should avoid black-box models unless accountability and transparency mechanisms (such as audits and performance metrics) are integrated, enabling stakeholders to understand the system and suggest improvements.
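One concrete form such a mechanism could take is a decision log that records each determination along with per-factor contributions, so auditors and stakeholders can later see why a student was, or was not, flagged. The sketch below is an assumption-laden illustration, not any vendor's actual audit interface; it reuses the hypothetical weighted-score idea from earlier.

```python
# Hypothetical audit-trail sketch: append each scoring decision, with per-factor
# contributions, to a JSON Lines log that reviewers can inspect later.
import json
from datetime import datetime, timezone

def log_decision(student_id: str, student: dict, weights: dict,
                 threshold: float, path: str = "decisions.jsonl") -> None:
    # How much each factor added to the final score (assumed linear model).
    contributions = {f: weights[f] * student.get(f, 0.0) for f in weights}
    score = sum(contributions.values())
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "student_id": student_id,
        "contributions": contributions,
        "score": score,
        "threshold": threshold,
        "flagged_at_risk": score >= threshold,
    }
    with open(path, "a") as log_file:
        log_file.write(json.dumps(entry) + "\n")

# Example usage with assumed values.
log_decision("student-001",
             {"absence_rate": 0.15, "non_english_home": 1.0, "low_guardian_engagement": 0.0},
             {"absence_rate": 0.4, "non_english_home": 0.3, "low_guardian_engagement": 0.3},
             threshold=0.5)
```

A log of this kind supports the audits and public disclosures discussed above without requiring the vendor to open-source the model itself.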
Key Takeaways:
- Stakeholder Engagement: Involving affected communities and stakeholders in the planning stage is essential in AI initiatives -- particularly those impacting human rights and public welfare, where consequences to people and society can be severe. Specifically, community-based focus groups, public disclosure and comment periods, and other normal democratic processes could have avoided the negative impacts on children as well as the outcry, and allowed the project to succeed.
- Transparency: The implementers should have ensured the AI system was open and understandable to all stakeholders, for example by releasing the scoring methodology and data sources in advance for comment.
- Proportionality and Context: AI solutions should be implemented in a way that is balanced and contextually appropriate. Nevada could have taken into account much more context, had it consulted communities about the decision.
- AI serves people, not the other way around: AI can streamline many repetitive tasks and even surface new ideas and approaches, but it should not be approached as a substitute for human oversight or stakeholder engagement. AI should instead be implemented foremost to serve humans and society -- particularly in the public services context.
As more organizations adopt AI, especially in public-facing roles, Nevada’s experience serves as a cautionary tale about the importance of establishing ethical and legal guardrails to prevent unintended harm to individuals and society. Policymakers and organizations must proactively implement ethical guardrails to ensure AI systems are fair, transparent, and accountable—and ultimately beneficial to the communities they are designed to serve.