Recommended Practices for the Design of Robust, Transparent, and Fair AI Models
Murat Durmus

The evolution of AI is creating new opportunities to improve people's lives around the world, from business to healthcare to education; nearly every sector will be affected. At the same time, it raises new questions about how fairness, interpretability, privacy, and security can and must be built into these systems.

Listed below are some recommended practices.

1. Use a People-Centric Approach

  • First, check whether transparent and explainable algorithms or alternatives are available before choosing a black-box model (a minimal sketch of such a comparison follows this list).
  • The approach to development must be people-centric. The way users experience your system is essential for assessing the real impact of its predictions, recommendations, and decisions.
  • Transparency, Explainability, and Interpretability: Clarity and control are critical to a good user experience.
  • Returning a single answer may be appropriate if it is likely to apply across a variety of users and use cases. In other cases, suggesting a few possible options to the user may be the better choice. Weigh this trade-off deliberately.
  • Involve a diverse range of users and application scenarios, and ensure a constant feedback loop throughout project development. This incorporates diverse user perspectives into the project and increases the number of people who can benefit from the technology.
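
To make the first point concrete, here is a minimal sketch of such a comparison, assuming a generic tabular classification task with scikit-learn; the dataset and the 2% tolerance are illustrative assumptions, not a fixed rule:

```python
# Minimal sketch: check whether a transparent model is competitive before
# committing to a black box. Dataset and the 2% tolerance are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

transparent = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
black_box = GradientBoostingClassifier()

acc_transparent = cross_val_score(transparent, X, y, cv=5).mean()
acc_black_box = cross_val_score(black_box, X, y, cv=5).mean()
print(f"transparent: {acc_transparent:.3f}  black box: {acc_black_box:.3f}")

# If the interpretable model is within tolerance, prefer it: you gain
# explainability at little or no cost in accuracy.
if acc_black_box - acc_transparent <= 0.02:
    print("The transparent model is competitive -- prefer it.")
else:
    print("The black box wins clearly -- document why it is justified.")
```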

2. Understand the Limitations of your Dataset and Models

  • Examine the raw data directly, and check it for one-sidedness and possible biases (depending on the use case) right at the beginning: ML models reflect the data on which they were trained. Analyze the raw data carefully to ensure it is sufficiently understood. Where this is not possible (e.g., with sensitive raw data), try to understand the input data as well as possible while always respecting privacy, for instance by computing aggregated, anonymized summaries (see the sketches after this list).
  • Carefully monitor and analyze the difference between performance during training and performance in production.
  • Check whether features in your model are redundant or unnecessary, and use the simplest model that meets your performance goals (a simple redundancy screen is sketched after this list).
  • A model built to detect correlations should not be used to draw or imply causal conclusions. For example, your model may learn that people who buy basketball shoes are taller on average, but this does not mean that a user who buys basketball shoes is therefore tall.
  • Today's machine learning models largely reflect the patterns in their training data, so it is vital to know the scope, possibilities, and limitations of a model as well as possible.
  • Communicate limitations to users, if possible. For example, an app that uses ML to recognize certain bird species might indicate that the model was trained with a limited set of images from a specific region of the world. By better educating the user, you can also improve user feedback on your feature or app.
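
A minimal sketch of the raw-data check above, assuming a pandas DataFrame where `group` stands in for a sensitive attribute and `label` for the target (both column names are illustrative placeholders):

```python
# Minimal sketch: inspect raw data for skew before training.
# The column names `group` and `label` are illustrative placeholders.
import pandas as pd

df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "A", "B", "A"],
    "label": [1, 0, 1, 0, 0, 1, 0, 1],
})

# How is the data distributed across groups? Heavy imbalance here
# means the model will mostly reflect the majority group.
print(df["group"].value_counts(normalize=True))

# Does the label rate differ strongly by group? A large gap in the
# raw data is a bias signal worth investigating before modeling.
print(df.groupby("group")["label"].mean())

# Aggregated, anonymized summary: counts and rates only, no individual
# rows, which is one way to respect privacy with sensitive raw data.
summary = df.groupby("group").agg(n=("label", "size"),
                                  positive_rate=("label", "mean"))
print(summary)
```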
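
And for the redundancy point, a simple pairwise-correlation screen that flags potentially redundant feature pairs; the 0.95 cutoff is an illustrative assumption:

```python
# Minimal sketch: flag highly correlated (possibly redundant) feature pairs.
# The 0.95 cutoff is an illustrative assumption, not a universal rule.
import numpy as np
import pandas as pd
from sklearn.datasets import load_breast_cancer

X = load_breast_cancer(as_frame=True).data

corr = X.corr().abs()
# Keep only the upper triangle so each pair is reported once.
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))

for a in upper.index:
    for b in upper.columns:
        r = upper.loc[a, b]
        if pd.notna(r) and r > 0.95:
            print(f"{a} ~ {b}: |r| = {r:.2f}")
```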

3. Monitoring and Updating the System

  • Continue monitoring and updating the system after deployment. Ongoing monitoring ensures that your model keeps pace with real-world performance and user feedback (a minimal monitoring sketch follows this list).
  • Almost by definition, every model is imperfect. Build time into your product roadmap so problems can be addressed deliberately rather than under pressure.
  • Consider both short- and long-term solutions to problems. A simple fix (e.g., block listing or allow listing) may solve a problem quickly, but it is rarely optimal in the long run; pair such short-term fixes with longer-term learned solutions.
  • Updates to a deployed model should be carefully analyzed and monitored: how does the update affect overall system quality and user experience?
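
A minimal sketch of what such post-deployment monitoring could look like, assuming predictions and (possibly delayed) ground-truth labels are logged; the baseline, window size, and alert threshold are illustrative assumptions:

```python
# Minimal sketch: watch a rolling window of production outcomes and alert
# when accuracy drops well below the offline baseline. All numbers are
# illustrative assumptions.
from collections import deque

BASELINE_ACCURACY = 0.91   # measured on the offline validation set
WINDOW_SIZE = 500          # rolling window of labeled production examples
MAX_DROP = 0.05            # alert when production falls this far below baseline

window = deque(maxlen=WINDOW_SIZE)

def record_outcome(prediction, truth):
    """Call this whenever a ground-truth label arrives for a past prediction."""
    window.append(prediction == truth)
    if len(window) == WINDOW_SIZE:
        live_accuracy = sum(window) / len(window)
        if BASELINE_ACCURACY - live_accuracy > MAX_DROP:
            # Placeholder: in a real system, page someone or open a ticket.
            print(f"ALERT: production accuracy {live_accuracy:.3f} "
                  f"vs. baseline {BASELINE_ACCURACY:.3f}")
```

The same pattern extends beyond accuracy to monitoring input distributions, i.e., detecting data drift before it shows up in quality metrics.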

4. Testing (Robustness is one of the Key Pillars of Trustworthy AI)

  • Familiarize yourself with established testing and quality-engineering practices from software engineering to ensure that the AI system works as intended and is, in that sense, "trustworthy."
  • Test your model for bias and fairness (a minimal test sketch follows this list).
  • Perform rigorous unit testing to test each component of the system in isolation.
  • Perform integration testing to understand how individual ML components interact with other parts of the overall system.
  • Use a standard dataset to test the system and ensure that it behaves as expected. Update this test set regularly as users and use cases change.
  • Perform iterative user testing to incorporate different user needs into development cycles.
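
A minimal sketch of two of these checks as pytest-style tests; the model builder, the golden data, and all thresholds are illustrative assumptions, and the fairness metric shown is demographic parity difference:

```python
# Minimal sketch: fairness and golden-set regression checks as pytest tests.
# The model builder, data, and thresholds are all illustrative assumptions.
import numpy as np
from sklearn.dummy import DummyClassifier

def make_model():
    # Placeholder for however you build or load the model under test.
    X = np.random.rand(100, 3)
    y = np.array([0] * 60 + [1] * 40)  # majority class is 0
    return DummyClassifier(strategy="most_frequent").fit(X, y)

def demographic_parity_difference(y_pred, groups):
    """Absolute gap in positive-prediction rate between groups."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

def test_demographic_parity():
    model = make_model()
    X_test = np.random.rand(200, 3)
    groups = np.random.choice(["A", "B"], size=200)
    y_pred = model.predict(X_test)
    # Illustrative threshold: positive rates may differ by at most 10 points.
    assert demographic_parity_difference(y_pred, groups) <= 0.10

def test_golden_dataset_regression():
    # A small, versioned dataset with pinned expected outputs; update it
    # regularly as users and use cases change.
    model = make_model()
    X_golden = np.zeros((5, 3))
    expected = np.zeros(5)  # pinned when the current model was released
    assert (model.predict(X_golden) == expected).all()
```

The point is less the specific metric than that fairness and regression checks run automatically with the rest of the test suite.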

Last but not least:

One should only use AI if it creates added value.

"AI, for AI's sake, is a nearly guaranteed path to disaster." ~ (THE AI THOUGHT BOOK)

Murat
