Artificial Intelligence #41: A practical way to implement #AI ethics
https://www.turing.ac.uk/sites/default/files/2019-08/understanding_artificial_intelligence_ethics_and_safety.pdf

Like most practitioners in AI, I follow the discussion on AI ethics with interest. With the forthcoming EU regulation on AI, this space is likely to get even more complex (because it is a regulation, not a guideline).

However, I am concerned that an army of consultants and advisors is now emerging who see AI ethics as a business model, i.e. a way to sell their services.

There are various ways to obfuscate the issue:

a) Mix narrow AI threats with general AI threats.

b) Anthropomorphize AI (anthropomorphism is the attribution of human traits, emotions, or intentions to non-human entities), e.g. casting AI as a Godfather-like puppet master in every scenario.

c) Always frame the discussion as man vs. machine. Paint the future as a 'Planet of the Apes' scenario where, if we are not careful, bands of marauding AI will hunt the last remnants of humanity.

You get the picture!

Entertaining as this may sometimes be, it has a serious problem:

If we are not careful, we could end up stifling innovation.

If we are not progressive, our entrepreneurs will not be able to compete on a global scale.

AI ethics is a good idea, but we need a discussion on how to implement it at scale. These problems are often lost in the conversation when we articulate the problem of AI ethics but not the solution.

Hence, I was pleased to see the work of the Alan Turing Institute on artificial intelligence, ethics, and safety. You can download their report here.

For example, they define transparency of AI ethics as:

a) the workings of the object are clear (transparent), but also that

b) the quality of a situation or process can be clearly justified and explained because it is open to inspection and free from secrets.

The first part is the explainability/interpretability of a given AI system, i.e. the ability to know how and why a model performed the way it did in a specific context, and therefore to understand the rationale behind its decision or behaviour (overcoming the black-box model).
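As a concrete illustration (mine, not the report's), explainability in this first sense is often approximated post hoc. The sketch below, assuming scikit-learn is installed, uses permutation importance on a hypothetical random-forest classifier to surface which features actually drive its predictions:

```python
# Post-hoc explainability sketch: permutation importance estimates how much
# each feature contributes to a model's performance by shuffling that feature
# and measuring the resulting drop in score.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature 10 times on held-out data; a large mean score drop
# marks a feature the model genuinely relies on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

This does not open the black box itself, but it gives affected stakeholders a ranked, inspectable account of what the model attends to.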

The second part involves justifying the processes that go into its design, implementation and outcome i.e. justification of use. Thus, the design, implementation and outcomes should be ethically permissible, non-discriminatory/fair, and worthy of public trust/safety-securing.

Hence, they propose three critical tasks for designing and implementing transparent AI:

Justify Process: In offering an explanation to affected stakeholders, you should be able to demonstrate that considerations of ethical permissibility, non-discrimination/fairness, and safety/public trustworthiness were operative end-to-end in the design and implementation processes that lead to an automated decision or behaviour.

Clarify Content and Explain Outcome: In offering an explanation to affected stakeholders, you should be able to show in plain language that is understandable to non-specialists how and why a model performed the way it did in a specific decision-making or behavioural context. You should therefore be able to clarify and communicate the rationale behind its decision or behaviour.

Justify Outcome: In offering an explanation to affected stakeholders, you should be able to demonstrate that a specific decision or behaviour of your system is ethically permissible, non-discriminatory/fair, and worthy of public trust/safety-securing.
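To make the "Clarify Content and Explain Outcome" task concrete, here is a minimal, hypothetical sketch (the loan scenario, feature names, and `explain` helper are my own illustration, not from the Turing report). A linear model's score decomposes exactly into per-feature contributions, which can be rendered as a plain-language rationale for a single decision:

```python
# Hypothetical sketch: turn a model's per-feature contributions into a
# plain-language rationale for one decision.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "years_employed"]
X = np.array([[50, 0.3, 5], [20, 0.8, 1], [70, 0.2, 10], [25, 0.9, 0.5]])
y = np.array([1, 0, 1, 0])  # 1 = application approved

model = LogisticRegression().fit(X, y)

def explain(applicant):
    # For a linear model, score(x) - score(mean) = sum of coef * (x - mean),
    # so these contributions are an exact decomposition of the score shift.
    contrib = model.coef_[0] * (applicant - X.mean(axis=0))
    decision = "approved" if model.predict([applicant])[0] == 1 else "declined"
    reasons = sorted(zip(feature_names, contrib), key=lambda t: -abs(t[1]))
    top = ", ".join(f"{n} ({'+' if c >= 0 else '-'})" for n, c in reasons[:2])
    return f"Application {decision}; main factors: {top}"

print(explain(np.array([60, 0.25, 8])))
```

The point is not the model but the interface: each automated decision ships with a short, non-specialist rationale naming the factors that mattered most.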

The report presents the bigger picture in a diagram.

The point here is that transparency, defined in this way, could be adapted by every company to create its own set of values.

By being transparent both in mechanism (explainability) and process (justification of outcome), you could create a system that can be practically implemented at scale.

Dr Prabodh B Mistry (He/Him)

Engineer/Mathematician, Director at EHV Engineering, Humanitarian

Ajit Jaokar I am an engineer with a background in process modelling, so I 'can appreciate' AI and autonomous systems. I have been working on an ethics model which I would like to present to AI experts like you. It has a simple framework (using two key values of love and truth), but its structure would lend itself to mathematics or 'Programmable Ethics' in a flexible way. I presented key elements of that and it was captured in a video (https://youtu.be/PN5qun931wA). I would welcome a quick (or slow) chat with you or anyone working in this field so that I am pointed to relevant people/groups to explore its developments/applications ([email protected] or 07913 634258).

François Ortolan

Senior Telecom Standardisation Engineer

Hi Ajit, thanks for featuring the Alan Turing Institute. Please do not hesitate to participate in the future standard hub at Turing to discuss your views. https://www.turing.ac.uk/news/new-uk-initiative-shape-global-standards-artificial-intelligence

Varun Madiyal

I help insurers to build digital & data driven solutions | Analytics & Insights | ML & AI | HealthTech & InsureTech | Speaker & Author | Thought Leadership & Mentoring |

Detailed description, Ajit Jaokar. You simplified it.

Olaf de Leeuw

Data Scientist - Dataworkz

A first step would be to teach students how to deal with innovations and technology in an ethical and responsible way. When I studied Mathematics, there was one very small ethics course that didn't get much attention. Ethics should be an important part of a study so that it becomes part of your mindset. I think in NL most Data Science programmes nowadays provide an ethics course, but this doesn't help the many Data Scientists who have been working in the field for years. So yes, it is important to talk about how to implement AI ethics, but developers of ML models and all other stakeholders must be aware of the ethical risks when developing such models. If they aren't, they would never start thinking of implementing AI ethics.

Greg Nudelman

UX for AI ? 5 Books ? 24 Patents ? Upcoming TEDx Talk

Some of the most exciting and innovative implementations of AI, from industrial process monitoring and fraud detection to self-driving cars, are all black box. While GDPR is necessary in some cases, requiring that ALL AI models be explainable is what's going to stifle innovation. This approach is just not practical. A much better approach is to use the value matrix tool to "teach" the AI human values — I covered this in detail in my recent Web 3.0 conference talk.
