The Responsible AI Bulletin #12: Open-source provisions in EU AIA, reasonableness and risk management, and demystifying global and local fairness.
Generated using DALL-E 2

Welcome to this edition of The Responsible AI Bulletin, a weekly agglomeration of research developments in the field from around the Internet that caught my attention - a few morsels to dazzle in your next discussion on AI, its ethical implications, and what it means for our future.

For those looking for more detailed investigations into research and reporting in the field of Responsible AI, I recommend subscribing to the AI Ethics Brief, published by my team at the Montreal AI Ethics Institute, an international non-profit research institute with a mission to democratize AI ethics literacy.


Open-source provisions for large models in the AI Act

Generated using DALL-E 2

The benefits of AI are seldom discussed without its risks. The potential for large models and their successors to do enormous good—from boosting economic growth to helping us live longer, richer lives—is often contrasted with the risks that their development and deployment present. This debate becomes especially interesting when it turns to access, which determines how and where the benefits of AI will be realized (as well as to whom they will accrue).

Widening access allows more people to use AI, build with it, and improve their lives and the lives of others. However, the very usefulness of large models is also what can make them dangerous: increasing the number of people who can use highly capable AI systems also increases the number of possible vectors for harm. One position within this debate, which focuses on maximalist interpretations of access, is often described as ‘open-source’ (though that label isn’t always appropriate due to issues related to transparency and licensing). Regardless, the term is often used as shorthand for a style of release that sees full models (sometimes including their weights) made available to anyone who wants them.

This debate has been simmering for quite some time, with some favoring approaches that minimize restrictions to drive growth, prevent the formalization of highly asymmetrical markets, and deliver maximum benefit to as many people as possible. Others, meanwhile, propose that this position will enable AI to cause harm by putting powerful systems in the hands of bad actors. We believe, however, that framing this as ‘access vs. no access’ overlooks and oversimplifies the true nature of the problem. Many in the latter camp argue for a more refined distinction between democratizing use and proliferating core system weights. This suggests that broader access doesn’t necessarily imply unrestricted sharing of the underlying technology.

This was the lens through which we viewed recent provisions to the European Union’s AI Act, which brought this question into sharp relief with efforts to regulate the deployment of open-source foundation models.

Continue reading here.


Acceptable Risks in Europe’s Proposed AI Act: Reasonableness and Other Principles for Deciding How Much Risk Management Is Enough

Generated using DALL-E 2

You are the developer of an AI system that will evaluate university applications throughout Europe. Under Article 9 of Europe’s draft AI Act, which may become law as early as 2024, you have an obligation to implement risk management because the system is “high-risk.” Risk management must ensure that any remaining risks are “acceptable.” What does that even mean? How do you decide when risks from high-risk AI systems (with potential impacts on safety, rights, health, or the environment) are acceptable?

The final text of the Act (formally a Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence) is currently under negotiation between the three main institutions of the European Union: the Commission, the Council, and the Parliament. Among many other thorny issues, negotiators must choose between two competing approaches to risk acceptability. One approach, proposed by the European Commission, would require risks to be reduced “as far as possible” (AFAP) in design and development, with remaining risks subject to further mitigation. The Parliament, by contrast, proposes to introduce considerations of what is “reasonably” acceptable.

This paper critically evaluates the two approaches, exploring how AFAP has been interpreted in other contexts and drawing on negligence and other laws to understand what “reasonably acceptable” risks might mean. It finds that the Parliament’s approach is more compatible with the AI Act’s overarching goals of promoting trustworthy AI with a proportionate regulatory burden.

Continue reading here.


Demystifying Local and Global Fairness Trade-offs in Federated Learning Using Partial Information Decomposition

Generated using DALL-E 2

As machine learning permeates high-stakes domains such as finance, healthcare, and recommendation systems, ensuring these algorithms are fair becomes increasingly crucial. This issue is especially complex in the federated learning setting, where several clients collaborate to develop a machine learning model without directly sharing their datasets. For instance, several banks may work together to develop a credit assessment model but are legally prohibited from sharing customer data amongst themselves.

In the traditional single-node machine learning scenario, numerous techniques exist to ensure group fairness (individuals are treated equally, irrespective of sensitive attributes such as race, gender, or age). However, these techniques falter in federated learning settings, mainly because both the data and the training process are decentralized: each client only has access to its own data. In real-world scenarios, local population demographics can differ significantly from global demographics (e.g., a bank branch whose customers are predominantly from a particular race). Despite this, most existing studies have primarily focused on either global fairness (the overall disparity of the model across all clients) or local fairness (the disparity at each client’s level), without delving much into their interplay or trade-offs.
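To make the global-versus-local distinction concrete, here is a minimal Python sketch (my own illustration, not code from the paper) that measures a demographic parity gap once over the pooled predictions of all clients and once per client. The toy client data, the variable names, and the choice of demographic parity as the disparity measure are all assumptions for illustration.

```python
import numpy as np

def parity_gap(y_pred, z):
    """Demographic parity gap: |P(y_hat=1 | z=1) - P(y_hat=1 | z=0)|."""
    y_pred, z = np.asarray(y_pred), np.asarray(z)
    rate_1 = y_pred[z == 1].mean() if (z == 1).any() else 0.0
    rate_0 = y_pred[z == 0].mean() if (z == 0).any() else 0.0
    return abs(rate_1 - rate_0)

# Hypothetical predictions (y_pred) and sensitive attributes (z) for three
# clients whose local demographics differ from the global population.
clients = {
    "bank_A": {"y_pred": [1, 1, 0, 1, 0, 1], "z": [1, 1, 1, 1, 0, 0]},
    "bank_B": {"y_pred": [0, 1, 0, 0, 1, 0], "z": [0, 0, 0, 1, 1, 0]},
    "bank_C": {"y_pred": [1, 0, 1, 1, 0, 0], "z": [1, 0, 1, 0, 1, 0]},
}

# Local fairness: disparity measured separately at each client.
for name, data in clients.items():
    print(f"local gap at {name}: {parity_gap(data['y_pred'], data['z']):.3f}")

# Global fairness: disparity measured over the pooled population.
all_pred = np.concatenate([d["y_pred"] for d in clients.values()])
all_z = np.concatenate([d["z"] for d in clients.values()])
print(f"global gap: {parity_gap(all_pred, all_z):.3f}")
```

With heterogeneous client demographics, the per-client gaps and the pooled gap can tell quite different stories, which is precisely the interplay the paper examines.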

This paper presents an information-theoretic perspective on group fairness trade-offs in federated learning (FL). We leverage a body of work in information theory called partial information decomposition (PID) to identify three distinct types of disparity that constitute the global and local disparity in FL: Unique, Redundant, and Masked Disparity. This decomposition helps us derive fundamental limits and trade-offs between global and local fairness, particularly under data heterogeneity, as well as derive conditions under which they would fundamentally agree or disagree with each other.
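For readers who want a feel for how such a decomposition can be written down, here is a hedged sketch. Suppose, purely for illustration, that global disparity is measured as the mutual information I(Z; Ŷ) between the sensitive attribute Z and the prediction Ŷ, and local disparity as the conditional mutual information I(Z; Ŷ | S) given the client identity S (the paper defines its own quantities, so treat these as stand-ins). Standard PID identities then split the joint information into unique, redundant, and synergistic parts:

```latex
% A hedged sketch, not necessarily the paper's exact formulation:
% Z = sensitive attribute, \hat{Y} = model prediction, S = client identity.
% Uni, Red, Syn are the unique, redundant, and synergistic PID terms.
\begin{align*}
I(Z; \hat{Y}, S) &= \mathrm{Uni}(Z : \hat{Y} \setminus S) + \mathrm{Uni}(Z : S \setminus \hat{Y})
                   + \mathrm{Red}(Z : \hat{Y}, S) + \mathrm{Syn}(Z : \hat{Y}, S) \\
\underbrace{I(Z; \hat{Y})}_{\text{global disparity}} &= \mathrm{Uni}(Z : \hat{Y} \setminus S) + \mathrm{Red}(Z : \hat{Y}, S) \\
\underbrace{I(Z; \hat{Y} \mid S)}_{\text{local disparity}} &= \mathrm{Uni}(Z : \hat{Y} \setminus S) + \mathrm{Syn}(Z : \hat{Y}, S)
\end{align*}
```

On this reading, the unique term contributes to both global and local disparity, the redundant term only to the global one, and the synergistic term only to the local one; this is one way to see why the two notions can agree or diverge as client populations become more heterogeneous.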

Continue reading here.


Comment and let me know what you liked and if you have any recommendations on what I should read and cover next week. You can learn more about my work here. See you soon!
