AI value platform

AI ethics is a set of moral principles and techniques intended to guide the development and responsible use of artificial intelligence technology. Organizations are beginning to adopt AI codes of ethics as AI becomes more integrated into products and services.

An AI code of ethics, also known as an AI value platform, is a policy statement that formally specifies the role of artificial intelligence in advancing humanity. An AI code of ethics' objective is to offer stakeholders direction when faced with an ethical decision surrounding the use of artificial intelligence.

The science fiction writer Isaac Asimov recognized the possible perils of autonomous AI agents long before their emergence and devised the Three Laws of Robotics to mitigate such risks.

  • The first law of Asimov's code of ethics prohibits robots from deliberately harming humans, or from allowing harm to come to humans through inaction.
  • The second law requires robots to obey humans unless the orders would violate the first law.
  • The third law requires robots to protect their own existence, so long as doing so does not conflict with the first two laws (a minimal sketch of this priority ordering follows the list).
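
As a purely illustrative Python sketch (the `Action` fields and the scenario are invented, not drawn from Asimov), the three laws can be read as a lexicographic priority ordering in which a lower-numbered law always outweighs the ones below it:

```python
from dataclasses import dataclass

# Purely illustrative: Asimov's Three Laws read as a lexicographic priority
# ordering. The Action fields and the scenario are invented stand-ins for
# judgments a real system would somehow have to make.

@dataclass
class Action:
    name: str
    harms_human: bool = False      # violates the First Law
    disobeys_order: bool = False   # violates the Second Law
    endangers_robot: bool = False  # violates the Third Law

def violation_rank(action: Action) -> tuple:
    # False sorts before True, so any First Law violation outweighs any
    # Second Law violation, which outweighs any Third Law violation.
    return (action.harms_human, action.disobeys_order, action.endangers_robot)

# Given an order whose execution would harm a human, the least-bad choice
# is to refuse: the Second Law yields to the First.
options = [
    Action("obey the order", harms_human=True),
    Action("refuse the order", disobeys_order=True),
]
print(min(options, key=violation_rank).name)  # -> refuse the order
```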

The rapid growth of AI over the last five to ten years has prompted expert groups to design safeguards against the risks AI poses to humans. One such organization is the Future of Life Institute, the nonprofit founded by MIT cosmologist Max Tegmark, Skype co-founder Jaan Tallinn, and DeepMind research scientist Victoria Krakovna. The institute worked with AI researchers, developers, and scholars from many disciplines to develop the 23 guidelines now known as the Asilomar AI Principles.

Companies are fast discovering that AI scales not only solutions but also risk. In this setting, data and AI ethics are a corporate imperative rather than an academic curiosity. Companies must have a clear strategy for coping with the ethical quandaries this new technology introduces.

While some organizations have assembled teams to ensure algorithmic accountability and ethics, Reid Blackman, CEO of Virtue and former philosophy professor at Colgate University and the University of North Carolina, Chapel Hill, argues that most still fall short of ensuring their products behave ethically in the real world.

To operationalize data and AI ethics, companies should:

  1. Identify existing infrastructure that a data and AI ethics programme can leverage
  2. Develop a data and AI ethics programme
  3. Design an industry-specific data and AI ethical risk framework
  4. Modify thinking on ethics by taking cues from accomplishments in health care
  5. Improve guidance and tools for product managers
  6. Increase organizational awareness
  7. Incentivize staff, both formally and informally, to take a role in discovering AI ethical risks
  8. Monitor impacts and engage stakeholders

Businesses commonly ask, "Given that we will do this, how can we do it without exposing ourselves to ethical risks?" Academic discussions of AI ethics, by contrast, rarely engage with these particular, real-world applications of data and AI. The result is a lack of clear guidance for the developers on the ground and for the senior leaders who must identify and choose among risk mitigation techniques. What is needed is an "on-the-ground" approach.

Those asking these questions inside firms are typically eager engineers, data scientists, and product managers. Because they build the products to meet specific business goals, they know how to ask business-relevant, risk-related questions. But they have not received the kind of training in ethics that academics have, so they lack the skill, knowledge, and experience to address ethical problems systematically, thoroughly, and efficiently. They also lack an essential ingredient: institutional support.

Finally, companies (and countries) are establishing high-level AI ethics principles. Google and Microsoft, for example, have long trumpeted theirs. The difficulty lies in putting those principles into practice. What, concretely, does it mean to advocate for "fairness"?

Identify existing infrastructure that a data and AI ethics programme can use.

The key to creating a successful data and AI ethics programme is to leverage existing infrastructure, such as a data governance board that already meets to review privacy, cyber, compliance, and other data-related concerns. This allows issues raised by those "on the ground" (e.g., product owners and managers) to surface and, when necessary, to be escalated to the appropriate executives; a hypothetical sketch of such routing follows the list below. Governance-board buy-in works for several reasons:

  • The leadership level establishes the tone for how seriously employees take these challenges.
  • A data and AI ethics strategy must be aligned with the overall data and AI strategy, which is developed at the executive level.
  • Protecting the brand from reputational, regulatory, and legal risk is ultimately a C-suite responsibility, and they must be notified when high-stakes issues arise.
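
To make the escalation idea concrete, here is a hypothetical sketch in Python; all role names, severity levels, and routes are invented for illustration, and a real programme would define its own:

```python
# Hypothetical sketch of routing ethics concerns raised by product teams
# through existing governance infrastructure. All role names, severity
# levels, and routes are invented for illustration.

SEVERITY_ROUTES = {
    "low": "product owner",                # resolved on the ground
    "medium": "data governance board",     # reviewed at the next board meeting
    "high": "C-suite / ethics committee",  # reputational, regulatory, legal risk
}

def route_concern(description: str, severity: str) -> str:
    """Return a routing decision for a raised concern."""
    owner = SEVERITY_ROUTES.get(severity)
    if owner is None:
        raise ValueError(f"unknown severity: {severity!r}")
    return f"[{severity.upper()}] {description} -> escalate to {owner}"

print(route_concern("model appears to proxy a protected attribute", "high"))
```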

Create an industry-specific data and AI ethical risk framework.

A good framework includes, at the very least: an articulation of the company's ethical standards, including its ethical nightmares; an identification of the relevant external and internal stakeholders; a recommended governance structure; and an account of how that structure will be maintained as personnel and circumstances change.

It is critical to establish KPIs and a quality assurance programme to assess the ongoing efficacy of the tactics used to carry out the strategy.

A solid framework also demonstrates how ethical risk reduction is incorporated into operations.

It should, for example, specify the ethical norms that data collectors, product developers, and product managers and owners must follow. It should also define a precise process for escalating ethical concerns to senior management or an ethics committee. And every business should ask whether it has mechanisms in place to detect biased algorithms, privacy violations, and unexplainable outputs; a minimal example of one such check is sketched below.
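
As a minimal sketch of such a mechanism, assuming pandas, the check below compares positive-outcome rates across groups against the common four-fifths (disparate impact) rule of thumb; the column names and the 0.8 threshold are illustrative assumptions, not a compliance standard:

```python
import pandas as pd

# Minimal bias check: compare positive-outcome rates across groups using the
# "four-fifths" disparate-impact rule of thumb. Column names and the 0.8
# threshold are illustrative assumptions, not a compliance standard.

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})

ratio = disparate_impact_ratio(decisions, "group", "approved")
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("flag for review: selection rates differ materially across groups")
```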

Change perspective on ethics by learning from the accomplishments in health care.

Many senior leaders regard ethics in general, and data and AI ethics in particular, as "squishy" or "fuzzy", arguing that it is not "concrete" enough to be actionable. Leaders should instead take inspiration from the health care industry, which has focused methodically on ethical risk reduction for several decades. Key questions, such as what constitutes privacy, self-determination, and informed consent, have been investigated thoroughly by medical ethicists, health care practitioners, regulators, and lawyers.

These ideas can be applied to many ethical quandaries about customer data privacy and control, and the same rules carry over to how people's data is collected, used, and shared. One obvious lesson from health care is to ensure that consumers know how their data is being used, and that they are informed early and in a way that makes comprehension likely (for example, not buried in a long legal document). The broader lesson is to break core ethical notions like privacy, bias, and explainability down into the infrastructure, processes, and practices that implement them; one way such a practice might look in code is sketched below.
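
For instance, the health-care-style lesson of explicit informed consent might be implemented as a purpose-specific check before each use of a customer's data. The sketch below is hypothetical, and every field name is invented for illustration:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch: informed consent recorded explicitly and checked
# before each use of a customer's data, rather than buried in a legal
# document. All field names are invented for illustration.

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str                 # e.g. "personalization", "third-party sharing"
    granted: bool
    plain_language_notice: bool  # was the notice clear and not buried?
    granted_on: date

def may_use_data(record: ConsentRecord, purpose: str) -> bool:
    # Data may be used only for the purpose the customer actually agreed to,
    # and only if the notice itself was comprehensible.
    return record.granted and record.plain_language_notice and record.purpose == purpose

rec = ConsentRecord("u42", "personalization", True, True, date(2021, 3, 1))
print(may_use_data(rec, "personalization"))      # True
print(may_use_data(rec, "third-party sharing"))  # False: consent is purpose-specific
```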

Improve product managers' guidance and tools.

While the framework provides high-level guidance, product-level guidance must be granular. Consider the much-hailed importance of explainability in AI, a highly valued property of ML models that will almost certainly be part of the framework. The patterns that standard machine-learning algorithms pick up are often too complex for humans to grasp, yet people typically want, and sometimes demand, explanations for AI outputs, especially when those outputs have the potential to change their lives.

The issue is that there is frequently a trade-off between making outputs explainable, on the one hand, and making outputs (e.g., forecasts) accurate, on the other. Product managers must understand how to strike that balance, and tailored tools should be developed to help them. A company can, for example, build a tool that lets product managers assess how valuable explainability is for a particular product.

If explainability is desired because it helps detect bias in an algorithm, but biased outputs are not a concern for this particular ML application, then explainability loses value relative to accuracy. If, instead, the outputs are subject to regulations that require explanations (for example, banking regulations that compel institutions to explain why someone was denied a loan), then explainability is critical. The same analysis applies to any other essential value.
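
A minimal sketch of how that trade-off can be measured, assuming scikit-learn and a stock dataset: fit an interpretable model (logistic regression, whose coefficients can be read off directly) and a harder-to-explain ensemble, then compare test accuracy. The gap is one concrete input to the product decision, not the whole story:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Quantify the explainability/accuracy trade-off on a stock dataset:
# an interpretable linear model versus a harder-to-explain ensemble.

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

interpretable = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_tr, y_tr)
black_box = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

acc_i = interpretable.score(X_te, y_te)
acc_b = black_box.score(X_te, y_te)
print(f"interpretable model accuracy:     {acc_i:.3f}")
print(f"harder-to-explain model accuracy: {acc_b:.3f}")
print(f"accuracy cost of explainability:  {acc_b - acc_i:+.3f}")
```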

Increase organizational awareness.

Corporations did not pay much attention to cyber threats ten years ago, but they do now, and employees are required to understand some of them. Anyone who works with data or AI products, whether in HR, marketing, or operations, should be familiar with the company's data and AI ethics framework. Creating a culture in which a data and AI ethics strategy can be successfully deployed and maintained necessitates training and upskilling personnel and empowering them to raise critical questions and concerns to the proper deliberative body at critical junctures.

Incentivize staff, both formally and informally, to play a role in discovering AI ethical hazards.

In several infamous cases, ethical standards suffered because people were financially rewarded for acting unethically. Similarly, failing to reward ethical behaviour monetarily can push it to the bottom of people's priorities. A firm's values are revealed in part by how it allocates its financial resources.

When employees do not see a budget for developing and maintaining a reliable data and AI ethics programme, they will focus on what advances their careers. It is critical to recognize and reward people for their efforts in developing a data ethics programme.

Monitor impacts and engage stakeholders.

Organizational awareness, ethics committees, and knowledgeable product managers, owners, engineers, and data collectors are all essential parts of the development and procurement process. But because resources and time are limited, and because people generally fail to imagine all the ways things can go wrong, it is critical to monitor the impact of the data and AI products that are already on the market.

A car can be equipped with airbags and crumple zones, but that does not make it safe to drive down a side street at 100 mph. Similarly, AI products can be developed ethically yet deployed unethically. Qualitative and quantitative research should be conducted here, with an emphasis on engaging stakeholders to assess how the product has affected them; one simple quantitative monitor is sketched below.
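
One such monitor, again assuming pandas and reusing the illustrative four-fifths threshold from earlier, tracks the disparate impact ratio of a deployed model's decisions period by period and flags drift for stakeholder review. The columns and threshold are assumptions for illustration:

```python
import pandas as pd

# Hypothetical post-deployment monitor: track the disparate impact ratio of
# a live model's decisions month by month and flag drift for stakeholder
# review. Column names and the 0.8 threshold are illustrative assumptions.

def monthly_impact_ratios(log: pd.DataFrame) -> pd.Series:
    rates = log.groupby(["month", "group"])["approved"].mean().unstack()
    return rates.min(axis=1) / rates.max(axis=1)

decision_log = pd.DataFrame({
    "month":    ["2021-01"] * 4 + ["2021-02"] * 4,
    "group":    ["A", "A", "B", "B"] * 2,
    "approved": [1, 1, 1, 1,   1, 1, 0, 0],
})

for month, ratio in monthly_impact_ratios(decision_log).items():
    status = "OK" if ratio >= 0.8 else "REVIEW"
    print(f"{month}: impact ratio {ratio:.2f} [{status}]")
```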
