Augmented Intelligence Newsletter (AiN) #14: Trustworthy AI, Robustness (Pillar 4 of 5)
Photo by Anne Nygård on Unsplash


Trustworthy AI Pillar 4 of 5

Welcome to Augmented Intelligence Newsletter (AiN) by C. Naseeb.

AiN Issue #14

Thank you for reading my latest article on understanding #trustworthyai or #responsibleai, its key pillars, and what businesses need to consider to become trustworthy AI businesses.

Hey, in this issue I explain the key concepts around #Robustness: what it means for AI to be robust, i.e., Robust AI or AI being robust.

The other four pillars of Trustworthy AI, which I cover in separate articles, are Transparency, Explainability, Fairness, and Privacy.

Here is the fourth one: #robustness

Fourth, AI must be Robust. That is, it must be able to withstand attacks. As AI prevails in our daily lives and is increasingly employed for crucial decision-making, it becomes more vulnerable to attacks. AI systems must be actively defended, minimizing security risks and enabling confidence in system outcomes. All of that can be achieved only when you consider these issues from the moment you design your AI system. Robust AI handles exceptional conditions, such as abnormalities in input or malicious attacks, without causing unintentional harm. The AI must be built to withstand intentional and unintentional interference by protecting against exposed vulnerabilities, for example, when attackers poison training data to compromise the security of the system.
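To make the data-poisoning example concrete, here is a minimal sketch (my own illustration, not from the newsletter) that flips a fraction of training labels in a toy scikit-learn setup and measures how accuracy on clean test data degrades; the dataset, model, and flip fractions are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Illustrative label-flipping "poisoning": corrupt part of the training labels
# and measure how the model's accuracy on a clean test set degrades.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_after_poisoning(flip_fraction: float) -> float:
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip the chosen binary labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.1, 0.3):
    print(f"{frac:.0%} of labels flipped -> test accuracy {accuracy_after_poisoning(frac):.2f}")
```

Even this naive attack illustrates the point: a model trained on tampered data can quietly lose reliability while appearing to work as usual.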



Examples

The phenomenal progress of modern technologies such as AI has enabled and encouraged us to develop high-stakes applications such as self-driving cars, automated surgical assistants, AI hedge funds, and AI systems that control and protect power grids from cyber attacks. These sorts of high-stakes applications require Robust AI to be in place. The second thing that demands Robust AI is the need to act in the face of unknown unknowns, for two reasons:

  1. It is impossible to model everything in our world.
  2. It is undesirable to model everything.

Your AI system must be robust to several factors, such as:

a. Human error

b. Cyber attacks

c. Misspecified goals

d. Incorrect models

e. Unmodelled phenomena


I'll elaborate on these aspects at a later point in time.

IBM has released an open-source toolkit, Adversarial Robustness 360, that enables developers and researchers to defend, certify, and verify AI models against adversarial threats (including evasion, extraction, and poisoning) and helps make AI systems more secure and trustworthy. The main differences between this toolkit and similar projects are its focus on defence methods and its machine-learning-framework independence, which keeps users from being locked into a single framework while guarding against adversarial threats and potential incursions.
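As a rough sketch of how such a toolkit is used, the example below crafts evasion (adversarial) examples with ART's FastGradientMethod against a simple scikit-learn model; it assumes ART is installed (pip install adversarial-robustness-toolbox), and the dataset, model, and eps value are my own illustrative choices rather than anything from the newsletter.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# ART: pip install adversarial-robustness-toolbox
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Train an ordinary classifier on the Iris dataset.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Wrap the model in an ART estimator and craft evasion (adversarial) examples.
classifier = SklearnClassifier(model=model, clip_values=(X.min(), X.max()))
attack = FastGradientMethod(estimator=classifier, eps=0.3)
X_adv = attack.generate(x=X)

print("Accuracy on clean inputs:      ", np.mean(model.predict(X) == y))
print("Accuracy on adversarial inputs:", np.mean(model.predict(X_adv) == y))
```

ART also ships defences (for example, adversarial training and input preprocessing) that can be evaluated against attacks like this in the same workflow.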



In the next one, I'll talk about #Privacy: what does it mean for an AI system to be #privacypreserving?

Subscribe and view previous issues here.

Here at LinkedIn and on Medium, I regularly write about business, technology, digital transformation, and emerging trends. Subscribe to this newsletter or click 'Follow' to read my future articles.

Enjoy the newsletter! Help us make it even better by sharing it with your network.

Have a nice day! See you soon. - Chan
