#artificialintelligence #94: How do we define Responsible AI by design?
Background
In a previous edition of this newsletter, I discussed our thinking on responsible AI by design. Led by ethics researcher Maryann F., who has been working with us as part of the Erdos institute, we have been expanding the ideas behind the Erdos research institute.
The plan is to launch it as a free but invite-only/registered community for the long form of my book, "Mathematical Foundations of Data Science". If you are interested in being part of this community, please let us know HERE.
Responsible AI by design is one of the interdisciplinary AI research themes we have been developing at the Erdos institute (in this case, led by Maryann Faust).
Privacy by design foundations
Today, privacy by design is a mature concept and is even part of the GDPR. Even so, it lacks a single precise definition and is better understood as an approach: privacy by design (PBD) proposes that privacy considerations should be undertaken at the outset and cover the whole engineering cycle, rather than being addressed retrospectively as an afterthought. PBD itself is motivated by value sensitive design, i.e., taking human values into account in a well-defined manner throughout the process.
With this background, let's consider Responsible AI by design. The basic concept is similar, and there is some existing work, e.g., "Responsible AI by Design" by researchers at Telefonica. This paper also draws on AI values from the European Commission (EC). The principles include fairness, transparency, explainability, human centricity, privacy by design, security by design, and extending these practices to partners and third parties.
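To make the principles above concrete, here is a minimal sketch of how they might be tracked as a design-review checklist gating each stage of the engineering cycle. All names here (`ResponsibleAIReview`, `PRINCIPLES`, the reviewer names) are illustrative assumptions, not part of the Telefonica paper or any real library.

```python
from dataclasses import dataclass, field

# Illustrative list of the principles mentioned above.
PRINCIPLES = [
    "fairness",
    "transparency",
    "explainability",
    "human centricity",
    "privacy by design",
    "security by design",
    "third-party compliance",
]

@dataclass
class ResponsibleAIReview:
    """Tracks which principles have been reviewed at a given stage."""
    stage: str  # e.g. "data collection", "training", "deployment"
    signed_off: dict = field(default_factory=dict)

    def sign_off(self, principle: str, reviewer: str) -> None:
        # Record that a named reviewer has checked this principle.
        if principle not in PRINCIPLES:
            raise ValueError(f"unknown principle: {principle}")
        self.signed_off[principle] = reviewer

    def outstanding(self) -> list:
        # Principles not yet reviewed at this stage.
        return [p for p in PRINCIPLES if p not in self.signed_off]

review = ResponsibleAIReview(stage="data collection")
review.sign_off("fairness", "reviewer-a")
print(review.outstanding())  # remaining principles to review
```

The point of the sketch is the "by design" workflow: the checklist is attached to each engineering stage up front, rather than audited after the system ships.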
Implementing Responsible AI by design
Considering the limitations of PBD, responsible AI by design needs a further set of ideas to be practical. We welcome your thoughts if you are working on similar ideas.
To conclude
More than a specific definition of responsible AI by design, we need strategies for implementation based on a values framework, a methodology, and additional considerations for industry verticals.
If you want to develop your own ideas for interdisciplinary AI themes, please sign up HERE to learn more about the Erdos Research Institute as we develop it.
Image source: pixabay