The illusion of artificial inclusion in social-behavioral research

Preprint of a paper by William Agnew (Carnegie Mellon University), Stevie Bergman (Google DeepMind), Jennifer Chien (UC San Diego), Mark Díaz (Google Research), Seliem E. (Google DeepMind), Jaylen Pittman (Stanford University), Shakir Mohamed (Google DeepMind), and Kevin McKee (Google DeepMind), accepted for the Association for Computing Machinery's upcoming Conference on Human Factors in Computing Systems (CHI) in May 2024.

Abstract

Human participants play a central role in the development of modern artificial intelligence (AI) technology, in psychological science, and in user research. Recent advances in generative AI have attracted growing interest in the possibility of replacing human participants in these domains with AI surrogates. We survey several such "substitution proposals" (*) to better understand the arguments for and against substituting human participants with modern generative AI. Our scoping review indicates that the recent wave of these proposals is motivated by goals such as reducing the costs of research and development work and increasing the diversity of collected data. However, these proposals ignore and ultimately conflict with foundational values of work with human participants: representation, inclusion, and understanding. This paper critically examines the principles and goals underlying human participation to help chart out paths for future work that truly centers and empowers participants.

(*) The review by Agnew et al. covers 13 technical reports or research articles and three commercial products (OpinioAI, Synthetic Users, and User Persona).

Related

"Can AI Replace Human Research Participants? These Scientists See Risks." Review by Chris Stokel-Walker in Scientific American (March 22, 2024).
