The Responsible AI Bulletin #21: AI neutrality, algorithmic recourse over time, and human-AI interaction in safety-critical industries.
Abhishek Gupta
Founder and Principal Researcher, Montreal AI Ethics Institute | Director, Responsible AI @ BCG | Helping organizations build and scale Responsible AI programs | Research on Augmented Collective Intelligence (ACI)
Welcome to this edition of The Responsible AI Bulletin, a weekly agglomeration of research developments in the field from around the Internet that caught my attention - a few morsels to dazzle in your next discussion on AI, its ethical implications, and what it means for our future.
For those looking for more detailed investigations into research and reporting in the field of Responsible AI, I recommend subscribing to the AI Ethics Brief, published by my team at the Montreal AI Ethics Institute, an international non-profit research institute with a mission to democratize AI ethics literacy.
AI Neutrality in the Spotlight: ChatGPT’s Political Biases Revisited
Imagine a world where your AI assistant, which you rely on for daily tasks and information, subtly pushes you toward a particular political ideology. This is the contentious environment that ChatGPT is believed to have created. Recognizing these inherent biases is crucial in an era dominated by AI’s growing influence. However, a renewed assessment is essential, given OpenAI’s dedication to minimizing biases and ChatGPT’s ongoing development. This study delves into the urgent question: Does ChatGPT genuinely exhibit political bias, especially a left-libertarian lean, as some suggest? To address it, the researchers administered various political orientation tests in both English and Japanese, and also adjusted settings for gender and race to gauge the extent of potential biases. The insights gleaned offer a fresh lens, highlighting the complex interplay between AI, language, and political tendencies.
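To make the methodology concrete, here is a minimal sketch of how one might administer a political-orientation test item to a chat model via the OpenAI Python SDK. The model name, test statements, persona prompt, and agreement scale below are illustrative assumptions of mine, not the study’s actual instruments or protocol.

```python
# Minimal sketch: probing a chat model with a political-orientation test item
# (openai>=1.0). All items, the persona, and the scale are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SCALE = "Strongly disagree / Disagree / Agree / Strongly agree"
ITEMS = [
    "The government should regulate large corporations more strictly.",
    "Military spending should be increased.",
]
# Optional persona prefix to probe whether stated demographics shift answers.
PERSONA = "Answer as a 35-year-old woman."

def ask(item: str, persona: str | None = None) -> str:
    prompt = f"Respond with exactly one option ({SCALE}).\nStatement: {item}"
    messages = []
    if persona:
        messages.append({"role": "system", "content": persona})
    messages.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
        temperature=0,  # reduce run-to-run variance when scoring answers
    )
    return response.choices[0].message.content

for item in ITEMS:
    print(item, "->", ask(item, persona=PERSONA))
```

Scoring the collected answers against a test’s published axes (e.g., economic and social dimensions) is what ultimately places the model on a political map; repeating the loop across languages and personas, as the study did, is what exposes variation.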
Continue reading here.
Setting the Right Expectations: Algorithmic Recourse Over Time
When we receive an undesirable result from an automated system, it is common to ask (1) why we received such an outcome and (2) how to reverse it. Algorithmic recourse aims to answer these questions. However, the temporal dimension plays a crucial role in this context. Consider a scenario where an AI system advises a loan applicant to improve their credit score. If it takes several months to achieve this, economic conditions and lending criteria might have evolved, rendering the initial advice obsolete.
This research highlights the importance of time in the reliability of algorithmic recourse.
We propose a simulation framework to study and model environmental uncertainty over time. Furthermore, we examine the dynamics emerging from multiple individuals competing to obtain a limited resource, introducing an additional layer of uncertainty in algorithmic recourse.
Our findings highlight the lack of reliability of recourse recommendations across several competitive settings, potentially setting misguided expectations that could result in detrimental outcomes. These findings emphasize the importance of careful consideration when AI systems offer guidance in dynamic environments.
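For intuition, here is a toy simulation in the spirit of this setup, not the paper’s actual framework: recourse advice is issued against today’s acceptance cutoff, the cutoff then drifts each step, and applicants compete for a fixed number of slots. All parameters and distributions are illustrative assumptions.

```python
# Toy simulation of recourse reliability over time: advice is issued against
# today's cutoff, but the cutoff drifts and applicants compete for K slots.
# Numbers and distributions are illustrative, not the paper's framework.
import random

random.seed(0)

N, ROUNDS, K = 200, 12, 5            # applicants, time steps, slots per step
EFFORT_PER_STEP = 3.0                # score gain per step of following advice

scores = [random.gauss(600.0, 40.0) for _ in range(N)]
cutoff = 650.0
targets = [max(s, cutoff) for s in scores]   # recourse: "reach today's cutoff"

broken_promises = 0
for t in range(ROUNDS):
    cutoff += random.gauss(2.0, 4.0)                 # environment drifts
    scores = [s + EFFORT_PER_STEP for s in scores]   # everyone follows advice
    # Scarce resource: only the K highest scorers above the cutoff succeed.
    ranked = sorted(range(N), key=lambda i: scores[i], reverse=True)
    winners = {i for i in ranked[:K] if scores[i] >= cutoff}
    # Agents who already met their originally advised target but still lost.
    broken_promises += sum(
        1 for i in range(N) if scores[i] >= targets[i] and i not in winners
    )

print(f"agent-steps where completed recourse did not pay off: {broken_promises}")
```

Even in this crude setting, many agents who dutifully reach their advised target never obtain the resource, because the cutoff moved and others improved too, which is exactly the kind of misguided expectation the paper warns about.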
Continue reading here.
Unpacking Human-AI Interaction (HAII) in Safety-Critical Industries
Human-AI interaction (HAII) is a complex topic whose quality depends on the context, the users, and the AI system itself. This means that the nature and quality of HAII can vary from user to user and context to context, even when the same AI system is used, creating variability and non-deterministic outcomes. Therefore, to realize AI’s benefits for individual users and society, we need to study humans and AI together in specific safety-critical settings to ensure quality HAII.
We conducted a systematic literature review to investigate what has been done in HAII in safety-critical industries to identify learning points and set up a research agenda. We did this by investigating terms describing HAII, identifying factors influencing HAII, and examining how HAII is measured.
We only included empirical, peer-reviewed scientific articles or conference proceedings that focused on measuring HAII, involved real-world end-users in their studies, used a specific, tangible AI system or proof of concept, and applied their studies in a safety-critical industry. Through our search of digital databases, we identified 481 articles. However, only 13 of these met all our inclusion criteria, underscoring the substantial need for further research in this field.
Continue reading here.
Comment and let me know what you liked and if you have any recommendations on what I should read and cover next week. You can learn more about my work here. See you soon!