The Responsible AI Bulletin #10: Fairness in recommender systems, ghost in the machine, and differences between humans and machines.
Generated using DALL-E 2


Welcome to this edition of The Responsible AI Bulletin, a weekly agglomeration of research developments in the field from around the Internet that caught my attention - a few morsels to dazzle in your next discussion on AI, its ethical implications, and what it means for our future.

For those looking for more detailed investigations into research and reporting in the field of Responsible AI, I recommend subscribing to the AI Ethics Brief, published by my team at the Montreal AI Ethics Institute, an international non-profit research institute with a mission to democratize AI ethics literacy.


Fair allocation of exposure in recommender systems

Generated using DALL-E 2

Machine learning algorithms are widely used in the recommender systems that drive marketplaces, streaming, and social networking platforms. Their main purpose is to provide users with personalized recommendations by predicting their preferences and sorting available content according to these predictions. However, by selecting content from some producers over others, recommendation algorithms decide who is visible and who is not. These decisions have real ethical and social implications, such as the risks of overlooking minority or disadvantaged groups when suggesting profiles to employers or the problems of over-representation of certain opinions on social networks. Our work aims to develop recommendation algorithms that limit exposure bias, taking into account both users and content producers.

We consider a classical model of the recommendation problem where the system observes users in sequential sessions and must choose K items (videos) to recommend from a set of items created by producers (video creators). The traditional solution comprises two steps: 1) Estimation: predicting a preference score for the current user for each item, based on a history of interactions via a learning model; 2) Ranking: ranking the items by their estimated scores and recommending the ordered list (or ranking) of the K best. This ranking step can produce “superstar” or “winner-take-all” effects, where certain groups of producers capture all the exposure, even with slightly higher scores. In addition, biases in estimated preferences due to learning stereotypes can be amplified by ranking.
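To make the “winner-take-all” effect concrete, here is a minimal, hypothetical sketch (not the paper's code) of the score-then-rank pipeline described above. The simulated scores and the size of the advantage are illustrative assumptions; the point is that producers whose items score only slightly higher can capture a disproportionate share of the top-K slots.

```python
# Hypothetical illustration (not the paper's code) of how top-K ranking can
# concentrate exposure on producers with only slightly higher estimated scores.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, K = 1000, 20, 3

# Step 1 - Estimation: simulated preference scores; items 0-9 get a small
# systematic advantage over items 10-19.
scores = rng.normal(loc=0.0, scale=0.1, size=(n_users, n_items))
scores[:, :10] += 0.05

# Step 2 - Ranking: recommend the K highest-scoring items to each user.
top_k = np.argsort(-scores, axis=1)[:, :K]

# Exposure: how often each item appears in a recommended list.
exposure = np.bincount(top_k.ravel(), minlength=n_items) / (n_users * K)
print("Exposure share of the slightly advantaged half:", exposure[:10].sum().round(3))
```

Even with a modest score gap, the advantaged items absorb most of the recommendation slots; the approach summarized here aims to limit exactly this kind of exposure concentration.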

Continue reading here.

Ghosting the Machine: Judicial Resistance to a Recidivism Risk Assessment Instrument

Generated using DALL-E 2

Amid the chaos of the pandemic’s early months, Pennsylvania criminal courts were instructed to begin consulting the Sentence Risk Assessment Instrument when sentencing crimes. The actuarial tool uses demographic factors like age and number of prior convictions to estimate the risk that an individual will “re-offend and be a threat to society” – that is, be reconvicted within three years of release from prison.

The instrument was developed to help judges identify candidates for alternative sentences, with the ultimate aim of reducing the prison population. However, through interviews with 23 criminal judges and other legal bureaucrats throughout the state, I found that this has not happened. In fact, judges routinely ignored the tool’s recommendations, which they disparaged as “useless,” “worthless,” “boring,” “a waste of time,” “a non-thing,” and simply “not helpful.” Others weren’t even aware that their courtrooms were supposed to be using it.

Recidivism risk assessment instruments are used in high-stakes pre-trial, sentencing, or parole decisions in nearly every US state. These algorithmic decision-making systems, which infer a defendant’s recidivism risk based on past data, are often presented as an ‘evidence-based’ strategy for criminal justice reform – a way to reduce human bias in sentencing, replace cash bail, and reduce mass incarceration. Yet there is remarkably little evidence that risk assessment instruments help advance these goals in practice.

The discourse around tools like the Sentence Risk Assessment Instrument has focused on their technical aspects, particularly racially biased predictions. Studies of risk assessment tools also tend to be conducted without the input or expertise of communities impacted by incarceration. By contrast, this research focuses on how judges actually use the tools, drawing on interview questions developed with input from the community organization Coalition to Abolish Death by Incarceration (CADBI). This work sheds new light on the important role of organizational influences in professional resistance to algorithms, which helps explain why AI-centric reforms can fail to have their desired effect.

Continue reading here.

On the Perception of Difficulty: Differences between Humans and AI

Generated using DALL-E 2

Integrating artificial intelligence (AI) into daily life has magnified the need to accurately determine the difficulty that humans and AI agents encounter in various scenarios. Assessing difficulty is essential to improving human-AI interaction and calls for a systematic comparison between human and AI agents. Based on an extensive review, the paper identifies inconsistencies in prevailing methodologies used to measure perceived difficulty and underscores the need for uniform metrics.

The paper presents an experimental design combining between-subject and within-subject paradigms to address this gap. It uses standard confidence metrics in conjunction with the Pointwise 𝒱-Information (PVI) score to evaluate the apparent difficulty each agent experiences on specific instances. This approach guarantees equal access to information for both agents, creating the basis for an in-depth primary investigation.
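For intuition, here is a minimal, hypothetical sketch of a PVI-style score, assuming two probabilistic models: one conditioned on the actual input and one conditioned on a null (empty) input. The function name and the numbers below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical PVI-style illustration: higher values mean the instance is
# easier for the model family; lower or negative values mean it is harder.
import math

def pvi(p_y_given_x: float, p_y_given_null: float) -> float:
    """PVI(x -> y) = log2 p(y | x) - log2 p(y | null input)."""
    return math.log2(p_y_given_x) - math.log2(p_y_given_null)

# Toy numbers: the label has probability 0.5 under the null model.
print(pvi(0.9, 0.5))  # ~0.85 bits: an easy instance for this model family
print(pvi(0.3, 0.5))  # ~-0.74 bits: a hard instance for this model family
```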

The potential implications of this research are manifold. By discerning the differing perceptions of difficulty between humans and AI agents, this study anticipates the development of enhanced and consistent frameworks for human-AI interaction. Such advancements would support more efficient collaboration in an increasingly AI-augmented scientific landscape.

Continue reading here.

Comment and let me know what you liked and if you have any recommendations on what I should read and cover next week. You can learn more about my work here. See you soon!
