The evolution of AI is creating new opportunities to improve people's lives around the world, from business to healthcare to education; nearly every sector will be affected. At the same time, it raises new questions about how fairness, interpretability, privacy, and security can and must be meaningfully built into these systems.
Listed below are some recommended practices.
- First, check whether transparent and explainable algorithms or alternatives exist before choosing a black-box model.
- Development should be people-centric. The way users experience your system is essential for assessing the real impact of its predictions, recommendations, and decisions.
- Prioritize transparency, explainability, and interpretability: clarity and control are critical to a good user experience.
- Presenting a single answer may be appropriate when it is likely to apply across a variety of users and use cases. In other cases, offering the user a few possible options may be the better choice. Always weigh this trade-off.
- Involve a variety of users and application scenarios, and ensure a continuous feedback loop throughout development. This incorporates diverse user perspectives into the project and increases the number of people who can benefit from the technology.
- Examine the raw data directly. Critically, check it for one-sidedness (depending on the use case) and possible biases right at the beginning: ML models reflect the data on which they were trained, so the raw data should be analyzed carefully enough to be well understood. Where this is not possible (e.g., sensitive raw data), try to understand the input data as well as you can while respecting privacy, for instance by computing aggregated, anonymized summaries (see the first sketch after this list).
- Carefully monitor and analyze the difference between performance during training and performance in production (see the monitoring sketch after this list).
- Are features in your model redundant or unnecessary? Use the simplest model that meets your performance goals (see the feature-pruning sketch after this list).
- A model built to detect correlations should not be used to draw or imply causal conclusions. For example, your model may learn that people who buy basketball shoes are taller on average, but this does not mean that a user who buys basketball shoes will be taller.
- Machine learning models today largely reflect the patterns in their training data. It is therefore vital to understand the scope, capabilities, and limitations of your models as well as possible.
- Communicate limitations to users, if possible. For example, an app that uses ML to recognize certain bird species might indicate that the model was trained with a limited set of images from a specific region of the world. By better educating the user, you can also improve user feedback on your feature or app.
- Continue monitoring and updating the system after deployment. Ongoing monitoring ensures that your model keeps up with real-world performance and user feedback.
- Every model is imperfect almost by definition. Build time into your product roadmap so that problems can be addressed as they arise.
- Consider both short- and long-term solutions to problems. A simple fix (e.g., block listing or allow listing) may solve a problem quickly but is rarely optimal in the long run. Pair simple short-term fixes with longer-term learned solutions.
- Carefully analyze and monitor updates to a deployed model (how does the update affect overall system quality and user experience?).
- Familiarize yourself with software engineering best practices for testing and quality engineering to ensure that the AI system works as intended and can be trusted.
- Test your model for bias and fairness (see the per-group metrics sketch after this list).
- Perform rigorous unit testing to test each component of the system in isolation.
- Perform integration testing to understand how individual ML components interact with other parts of the overall system.
- Use a standard dataset to test the system and ensure that it behaves as expected; update this test set regularly as users and use cases change (see the testing sketch after this list).
- Perform iterative user testing to incorporate different user needs into development cycles.
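As an illustration of the point about sensitive raw data, here is a minimal sketch of computing an aggregated, anonymized summary with pandas. The column names (`region`, `age_band`, `outcome`), the data, and the minimum group size are illustrative assumptions, not part of any specific pipeline.

```python
import pandas as pd

# Hypothetical sensitive dataset; column names and values are illustrative only.
df = pd.DataFrame({
    "region":   ["north", "north", "south", "south", "south", "east"],
    "age_band": ["18-25", "18-25", "26-40", "26-40", "26-40", "18-25"],
    "outcome":  [1, 0, 1, 1, 0, 1],
})

MIN_GROUP_SIZE = 2  # suppress groups too small to report safely

# Aggregate to group level instead of inspecting individual records.
summary = (
    df.groupby(["region", "age_band"])
      .agg(count=("outcome", "size"), positive_rate=("outcome", "mean"))
      .reset_index()
)

# Drop groups below the minimum size to reduce re-identification risk.
summary = summary[summary["count"] >= MIN_GROUP_SIZE]
print(summary)
```

Suppressing small groups is only a rough safeguard; the appropriate threshold (and whether stronger guarantees are needed) depends on the data and use case.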
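For the gap between training-time and production performance, a minimal sketch might compare a stored validation metric with the same metric computed on recently labeled production traffic. The function name `check_training_serving_gap`, the accuracy metric, the toy labels, and the 0.05 threshold are all illustrative assumptions.

```python
from sklearn.metrics import accuracy_score

def check_training_serving_gap(val_accuracy, prod_y_true, prod_y_pred,
                               max_gap=0.05):
    """Flag the model if production accuracy drops too far below validation."""
    prod_accuracy = accuracy_score(prod_y_true, prod_y_pred)
    gap = val_accuracy - prod_accuracy
    if gap > max_gap:
        print(f"WARNING: production accuracy {prod_accuracy:.3f} is "
              f"{gap:.3f} below validation accuracy {val_accuracy:.3f}")
    return prod_accuracy, gap

# Example with toy labels and predictions from recent production traffic.
prod_accuracy, gap = check_training_serving_gap(
    val_accuracy=0.92,
    prod_y_true=[1, 0, 1, 1, 0, 1, 0, 0],
    prod_y_pred=[1, 0, 0, 1, 0, 1, 1, 0],
)
```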
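One simple way to spot redundant features is to look at pairwise correlations and drop one feature from each highly correlated pair. The feature names and the 0.95 threshold below are illustrative assumptions; correlation only catches linear redundancy.

```python
import pandas as pd

# Hypothetical feature matrix; column names and values are illustrative.
X = pd.DataFrame({
    "height_cm": [170, 182, 165, 190, 175],
    "height_in": [66.9, 71.7, 65.0, 74.8, 68.9],  # redundant with height_cm
    "weight_kg": [65, 60, 58, 95, 72],
})

# Flag pairs of features whose absolute correlation exceeds a threshold,
# then drop one feature from each redundant pair.
corr = X.corr().abs()
threshold = 0.95
to_drop = set()
cols = list(corr.columns)
for i, a in enumerate(cols):
    for b in cols[i + 1:]:
        if corr.loc[a, b] > threshold and b not in to_drop:
            to_drop.add(b)

X_reduced = X.drop(columns=sorted(to_drop))
print("Dropped:", sorted(to_drop))
```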
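For basic bias and fairness testing, a sketch might compare simple metrics per sensitive group and flag large gaps. The column names, group labels, and the 0.1 accuracy-gap threshold are illustrative assumptions; which fairness criterion actually matters depends on the use case.

```python
import pandas as pd

# Hypothetical evaluation results with a sensitive attribute ("group"),
# true labels, and model predictions; all values are illustrative.
results = pd.DataFrame({
    "group":  ["a", "a", "a", "b", "b", "b", "b", "a"],
    "y_true": [1, 0, 1, 1, 0, 1, 0, 0],
    "y_pred": [1, 0, 1, 0, 0, 1, 1, 0],
})

# Compare simple per-group metrics; a large gap warrants closer investigation.
results["correct"] = results["y_true"] == results["y_pred"]
per_group = results.groupby("group").agg(
    accuracy=("correct", "mean"),
    positive_rate=("y_pred", "mean"),
)
print(per_group)

max_accuracy_gap = per_group["accuracy"].max() - per_group["accuracy"].min()
if max_accuracy_gap > 0.1:  # illustrative threshold, not a general standard
    print(f"WARNING: accuracy differs by {max_accuracy_gap:.2f} across groups")
```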
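Finally, the testing practices above can be as plain as pytest-style tests: one test per component in isolation, plus a regression test against a fixed, versioned "standard" dataset. `normalize_text` and `load_reference_cases` are hypothetical stand-ins for your own components and test set.

```python
# test_preprocessing.py -- pytest-style unit and regression tests.

def normalize_text(text: str) -> str:
    """Toy component under test: lowercases and strips whitespace."""
    return text.strip().lower()

def load_reference_cases():
    """Stand-in for loading a fixed, versioned 'standard' test set."""
    return [("  Hello World ", "hello world"), ("ML", "ml")]

def test_normalize_text_strips_and_lowercases():
    # Unit test: one component, in isolation, with a hand-written case.
    assert normalize_text("  MiXeD Case  ") == "mixed case"

def test_against_standard_dataset():
    # Regression test: the curated cases must keep passing; update them
    # as users and use cases change.
    for raw, expected in load_reference_cases():
        assert normalize_text(raw) == expected
```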
Finally, use AI only where it creates real added value.