An essential requirement for successful AI construction: Creating Trustworthy AI
By Keun-Tae Kim on SAS Korea Blog, December 8, 2023

Many people recognize AI's tremendous potential and are keenly interested in applying it. Recently, however, negative reporting on AI has increased, and concerns about AI-driven decision-making are growing. Organizations considering AI do not want their company's name making headlines because of a poorly applied AI system, nor do they want AI to encode discriminatory or unfair business practices. Using AI responsibly helps build trust with customers, partners, and employees, and can be a differentiator against competitors.

Recently, with the introduction of specific laws and guidelines related to AI applications in Korea, building trustworthy AI has become not just an option but a necessity.

The risks of AI decision-making are growing along with its value.

AI is already being used in various fields for the purpose of supporting decision-making. Compared to traditional human decision-making, AI-based decisions have the following characteristics:

  1. Acceleration – Leveraging exceptional computing performance enables more automated decisions in real-time.
  2. Amplification – Since AI can be replicated and quickly deployed across various environments, the impact of incorrect decisions can be amplified.
  3. Accountability – It raises the question of who is responsible when issues occur.

Thus, just as AI delivers tremendous value, the impact of wrong AI decisions can be amplified more rapidly and extensively, and questions of accountability can grow significantly.

In this context, the focus on trustworthy AI is quite natural. Trustworthy AI refers to AI that is developed and used ethically, does not harm people, and reflects societal values. This implies that considering what AI can do must go hand in hand with considering what safeguards the AI application process requires.

Why should we invest in trustworthy AI?

There are several reasons why organizations or companies should invest in trustworthy AI:

  1. Moral Duty – While AI can provide great value, it can also cause unintended massive damage, necessitating proactive efforts to mitigate such risks.
  2. Competitive Advantage – Companies that invest in trustworthy AI can differentiate themselves and potentially gain a competitive edge, whereas those that do not may face increased risks of poor business decisions.
  3. Regulatory Compliance Risk – Laws and regulations related to AI are being established both domestically and internationally.
  4. Reputation Risk – Consumers place increasing emphasis on 'ethics', raising the reputational risk for companies that do not adopt ethical practices.

Regulatory Response and Principle Establishment

Around the world, AI-related regulatory bills are emerging. Notably, the AI Act, aimed at human-centered, trustworthy use of AI, was passed by the European Parliament in June and is scheduled to take effect from 2026. It is expected to be a stringent law, with fines of up to 6% of annual global revenue for violating companies. In the United States, a security-focused approach is being taken at the federal level for AI use in administrative agencies, defense, and criminal investigations. At the state level, specific regulations are being proposed to prevent discrimination based on race, gender, disability, and the like, such as an "Anti-Discrimination by Algorithms Act" or rules requiring bias testing when AI is applied in hiring.

In Korea, financial institutions have for several years been the focus of AI guidelines issued by related organizations. A particularly noteworthy development is last September's amendment to the Personal Information Protection Act, which specifically stipulates the obligations that AI-based automated systems must meet. The amendment, which takes effect next March, includes the following key provisions:

Requirements of Automated Systems under the Revised Personal Information Protection Act

The legislation requires companies operating AI systems to 1) be prepared to explain AI decisions, and 2) ensure transparency in the criteria, procedures, and processing methods of automated decisions. Below, we explore what is needed to meet these requirements.

Analysis Process for Ensuring Explanation and Transparency in AI Decision-Making

First, an explanation of an AI decision might include information about the algorithm, the variables involved, and the model's operating principles. For explanations aimed at the general public, however, the focus will likely be on which key factors (variables) played the most significant role in the decision.

While a feature for offering such explanations is necessary, it is more fundamentally important to build fair, minimally biased AI models, so that the explanations themselves do not expose problematic decision criteria.

Fairness means eliminating prejudice or discrimination against any group, ensuring decisions are applied equitably and justly to everyone. Bias means a decision disproportionately harms certain groups or individuals based on attributes such as race, gender, or age. In general, bias in decision-making compromises fairness.
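The notion of bias above can be quantified. As a minimal sketch (the metric choice, group data, and the informal "80% rule" threshold are illustrative assumptions, not from the article), one common check is the disparate-impact ratio: the approval rate of a protected group divided by that of a reference group.

```python
# Hypothetical sketch: computing the disparate-impact ratio for a
# binary decision (1 = approved, 0 = denied). All data is invented.

def approval_rate(decisions):
    """Share of positive (approved) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def disparate_impact(protected, reference):
    """Ratio of approval rates between two groups. Values far below 1.0
    suggest bias against the protected group; the informal '80% rule'
    flags ratios under 0.8 for review."""
    return approval_rate(protected) / approval_rate(reference)

# Example: loan decisions for two age groups.
over_60  = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]   # 20% approved
under_60 = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]   # 70% approved

ratio = disparate_impact(over_60, under_60)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.29 -> well below 0.8
```

A ratio this far below 0.8 would prompt a closer look at how age-correlated variables enter the model, as in the loan example below.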

For example, if a loan approval decision is made by an AI model biased with respect to educational attainment and age, an explanation like the following could be given, raising issues of fairness and bias:

"Unfortunately, your loan has been denied. The main reason is your educational level being middle school and your age being over 60."

If, during modeling, variables that could induce bias, such as education and age, are replaced with alternative variables, such as a customer rating, that still deliver the desired level of predictive performance, the explanation could instead read:

"Unfortunately, your loan has been denied. The main reason is your customer rating. Purchasing additional products to improve your customer rating could increase the likelihood of approval."
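The pattern above, scoring on non-sensitive variables and reporting the most influential factor, can be sketched as follows. The weights, feature names, and approval threshold are all invented for illustration; a real system would derive them from a trained model.

```python
# Hypothetical sketch: a customer-facing "main reason" explanation from
# a simple linear scoring model that uses a customer rating instead of
# sensitive attributes like education or age. All numbers are invented.

WEIGHTS = {"customer_rating": 0.6, "income": 0.3, "existing_debt": -0.4}
THRESHOLD = 1.0  # minimum score required for approval

def explain_decision(applicant):
    """Return (approved, main_reason) for one applicant.

    The main reason is the feature whose contribution pushed the score
    furthest in the direction of the outcome -- a simple explanation
    a non-expert can act on."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    approved = score >= THRESHOLD
    if approved:
        main = max(contributions, key=contributions.get)
    else:
        main = min(contributions, key=contributions.get)
    return approved, main

applicant = {"customer_rating": 0.5, "income": 2.0, "existing_debt": 1.5}
print(explain_decision(applicant))  # -> (False, 'existing_debt')
```

Because no sensitive attribute appears in `WEIGHTS`, the explanation can only ever cite factors the customer can legitimately change, such as debt or rating.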

The second major requirement is transparency in automated decisions. In the real world, AI models are used for decisions tied to complex business rules. If a complex AI model and its decision logic exist as tens of thousands of lines of program code, it will be difficult to meet the legal requirement that stakeholders be able to easily verify the criteria, procedures, and processing methods behind a decision.

AI Model and Decision-Making Process in Program Code or Process Flow Form

Thus, when implementing AI systems, it is necessary to avoid treating AI models as opaque black boxes and to secure transparency by making the associated business rules easy to verify.

A Comprehensive Approach is Needed for Trustworthy AI

The various risks posed by AI are interconnected across social, operational, and regulatory aspects. Therefore, building trustworthy AI cannot be satisfied with the application of a few isolated features; it requires a comprehensive approach that includes diverse technologies and services.

Framework for Building a Trustworthy AI System

SAS Viya is an enterprise AI/ML platform that provides the analytics lifecycle needed to build trustworthy AI. Centered on automated machine learning (AutoML) capabilities, it supports data preparation, model development, and deployment and operations, offering the key functions needed to construct reliable AI systems.

SAS Viya Providing the Analysis Lifecycle Necessary for Building Trustworthy AI

Specifically, SAS Viya offers features for developing unbiased AI models, including 'fairness and bias analysis' of models. It also provides capabilities for transparently generating and managing AI-based decision-making in the form of process flows, as well as functions for explaining AI models and decisions.

Key Features of SAS Viya for Building a Trustworthy AI System

We have looked at how the SAS Viya platform provides the features needed to build systems that meet the social demands and regulations around trustworthy AI. With the revised Personal Information Protection Act taking effect early next year, little time remains to review and prepare for compliance. Organizations that operate, or are preparing to build, AI-based systems should act swiftly to apply trustworthy AI.

This article is a translation of the Korean original published on https://blogs.sas.com/ (Keun-Tae Kim, Source).
