#artificialintelligence #94: How do we define Responsible AI by design?

Background

In a previous edition of this newsletter, I discussed our thinking on responsible AI by design. Led by the ethics researcher Maryann Faust, who has been working with us as part of the Erdos Research Institute, we have been expanding these ideas.

The plan is to launch it as a free but invite-only/registered community around the long form of my book "Mathematical Foundations of Data Science". If you are interested in being a part of this community, please let us know HERE.

Responsible AI by design is one of the interdisciplinary AI research themes that we have been developing at the Erdos Institute (in this case, led by Maryann Faust).

Privacy by design foundations

Today, privacy by design is a mature concept and is even part of the GDPR regulations. Even so, its exact definition is not clear; privacy by design (PBD) is better understood as an approach. The basic idea is that privacy considerations should be undertaken at the outset and should cover the whole engineering cycle (as opposed to retrospectively, as an afterthought). PBD itself is motivated by value sensitive design, i.e., taking human values into account in a well-defined manner throughout the process.
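To give a concrete flavour of what "privacy at the outset" can mean in code, here is a minimal sketch of data minimisation and pseudonymisation applied at the ingestion step. This is my own illustrative example, not from the article: the field names, allow-list, and salt are all assumptions.

```python
import hashlib

# Hypothetical illustration of privacy by design at data ingestion:
# only fields on an explicit allow-list enter the pipeline (data
# minimisation), and the raw user identifier is pseudonymised via a
# one-way hash before anything is stored.

ALLOWED_FIELDS = {"age_band", "region", "consent_given"}  # assumed schema

def pseudonymise(user_id: str, salt: str = "rotate-me") -> str:
    """One-way hash so records can be linked without storing the raw ID."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:16]

def ingest(record: dict) -> dict:
    """Drop everything not explicitly allowed; never pass raw IDs through."""
    minimised = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    minimised["uid"] = pseudonymise(record["user_id"])
    return minimised

raw = {"user_id": "alice@example.com", "age_band": "30-39",
       "region": "EU", "consent_given": True,
       "gps_trace": [(51.5, -0.1)]}
clean = ingest(raw)  # gps_trace and the raw email never enter the pipeline
```

The point of the sketch is architectural rather than cryptographic: the privacy decision is enforced at the boundary of the system, not patched on afterwards.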

With this background, let's consider responsible AI by design. The basic concept is similar, and there is some existing work, e.g. "Responsible AI by Design" by researchers at Telefonica. This paper also draws upon AI values from the EC. The principles include fairness, transparency, explainability, human centricity, privacy by design, security by design, and extending these to partners and third parties.

Implementing Responsible AI by design

Considering the limitations of PBD, responsible AI by design needs the following ideas to be practical:

  1. A definition (or at least a consensus on meaning)
  2. A foundation based on values (this needs more work)
  3. A methodology - "Responsible-AI-by-Design: a Pattern Collection for Designing Responsible AI Systems" provides a comprehensive approach to a design methodology
  4. And most importantly, an open source implementation of a responsible AI by design toolbox
  5. Domain-specific considerations - e.g. responsible AI by design in healthcare
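As a flavour of what one item in such a toolbox might automate, here is a minimal sketch of a fairness check run at design time: the demographic parity difference, i.e. the gap in positive-prediction rates across sensitive groups. The metric choice, names, and sample data are my own illustrative assumptions, not from the article.

```python
from collections import defaultdict

# Hypothetical sketch of one automated check a responsible-AI-by-design
# toolbox could run before a model ships: the demographic parity
# difference, i.e. the largest gap in positive-outcome rates between
# sensitive groups. A gap near 0 means the model flags all groups at
# similar rates.

def demographic_parity_difference(predictions, groups):
    """Max gap in positive-prediction rate across sensitive groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Toy example: group "a" is flagged positive at 3/4, group "b" at 1/4.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(preds, groups)  # 0.75 - 0.25 = 0.5
```

In a design-time pipeline, a check like this would gate deployment (e.g. fail the build if the gap exceeds an agreed threshold), which is exactly the "at the outset, not as an afterthought" stance the article argues for.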

We welcome thoughts if you are working on similar ideas.

To conclude

More than a specific definition of responsible AI by design, we need strategies for implementation based on a values framework, a methodology, and additional considerations for industry verticals.

If you want to develop your own ideas for interdisciplinary AI themes, please sign up HERE to learn more about the Erdos Research Institute as we develop it.


Image source: pixabay

Hassan Zamzam

Engineering in Electronic Systems as RADAR surveillance, Warning & PA Systems and Medical Electronics

2y

Ok folks, we can't ask an AI to give us the answers! But would it not be wise to do it now, before it can think and make decisions of its own by being aware of its self-existence as a bias? It gave me this! What do you think? Transparency, Fairness, Responsibility, Privacy, Security, Human Control, Sustainable Development

Alain Grijseels

Open for new opportunities : Freelance strategy consultant #strategy #innovator #keynotespeaker #besystemic #system-dynamics

2y

Hi, as a former CIO of regulatory entities in Belgium, I had to tackle the problems of privacy (GDPR), cybersecurity (hacking, ransomware, identity theft, etc.) and transparency of AI-based decisions (EU regulations and legal obligations). This domain still gets my full attention and cannot be emphasised enough. Keep up the good work.

Saeed Al Dhaheri

Commissioner| UNESCO co-Chair | AI Ethicist | Thought leader | Author | Certified Data Ethics Facilitator | LinkedIn Top Voice | Public Speaker

2y

Good article Ajit Jaokar. Big tech and AI developers need to adopt a responsible AI by design approach to produce AI systems that are unbiased, transparent and interpretable, and to address many of the ethical concerns that might arise from the data itself or from model performance. The case of Telefonica and the MS Responsible AI Toolbox are good examples here! Thank you

Nitin Malik

PhD | Professor | Data Science | Machine Learning | Deputy Dean (Research)

2y

Privacy is a human right. Intrusion is an ethical risk arising from AI applications. The European Parliament allows people the right to access the AI model that their data has been used to build. GDPR allows individuals control over their personal data. There are data protection principles employed and privacy rights recognised by GDPR. There are many issues which are not yet resolved. Who should be made accountable when AI causes harm? It's not easy to fix responsibility, as both man and machine are collectively involved in decision making.

Afzal Kazi

Director @ Precision Engineering Group

2y

Artificial intelligence is the future and it's very important to control it.

