AI in the Service of Humanity: Guidelines for Ethical AI
Patrick Bangert
As models made by artificial intelligence interact with human beings in their daily lives, we must ask whether those models are fair, and indeed whether the tasks they perform ever lent themselves to solution by AI. These questions form a new field known as AI ethics.
Some cases in point:
1. Assigning grades to students in British A-Levels or International Baccalaureates, where a model inaccuracy leads to enormous personal cost.
2. Applying facial recognition software trained primarily on white faces to non-white faces, where a model is used for a task it was never designed for.
3. Feeding new training input into natural language models without data quality checks, which can lead to racist output from the model.
In making these assessments, it is important to remember that AI models fail from time to time and have no context-awareness of the consequences of their decisions. When they assign a poor grade to an otherwise good student, the AI models do not know that this will have lifelong consequences for that person. This is a case in which a rare event (a false positive or false negative) carries a huge cost (loss of opportunity). When situations like this arise, the use of AI models may not be appropriate at all, regardless of which kind of model is used and regardless of how accurate it is.
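To make this concrete, here is a minimal sketch of an expected-cost comparison. All error rates and costs are invented for illustration; the point is only that a rare, very expensive error can dominate the calculation even for a highly accurate model.

```python
# Sketch: expected-cost analysis for a high-stakes classifier.
# All numbers are hypothetical, for illustration only.

# Hypothetical model error rates.
p_false_negative = 0.01   # e.g. a good student wrongly given a failing grade
p_false_positive = 0.02

# Hypothetical costs of each outcome (arbitrary units).
cost_false_negative = 1_000_000  # lifelong loss of opportunity
cost_false_positive = 1_000
cost_human_review = 50           # cost of a human making the decision instead

expected_model_cost = (p_false_negative * cost_false_negative
                       + p_false_positive * cost_false_positive)

print(f"Expected cost per decision (model): {expected_model_cost:,.0f}")
print(f"Cost per decision (human review):   {cost_human_review:,.0f}")
# Even a 99%-accurate model loses to human review when the rare
# error is this expensive, which signals that AI may not be appropriate.
```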
Furthermore, it is important to remember that models learn from data. The training algorithm treats the data as if it were the gospel truth. When racist texts are fed into a natural language processing engine, there is no context-awareness that scrubs the content. Indeed, the engine will learn the new content and thus appear to adopt the attitude displayed there. As data scientists, we must make sure that we only feed representative, good-quality data to a model. It helps to view models like toddlers: they need a nurturing environment with healthy nourishment and care to unfold their full potential.
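As a sketch of what even the crudest such quality gate might look like, the following filter drops training texts that contain blocklisted terms. The blocklist, threshold, and corpus here are placeholders; a real pipeline would rely on a trained toxicity classifier plus human review.

```python
# Sketch: a minimal pre-training data-quality gate for text.
# The blocklist and threshold are placeholders, not a real solution.

BLOCKLIST = {"slur1", "slur2"}  # stand-ins for genuinely harmful terms

def is_acceptable(text: str, max_hits: int = 0) -> bool:
    """Return True if the text passes the (very crude) quality gate."""
    tokens = text.lower().split()
    hits = sum(token in BLOCKLIST for token in tokens)
    return hits <= max_hits

corpus = ["a perfectly ordinary sentence", "a sentence with slur1 in it"]
clean_corpus = [t for t in corpus if is_acceptable(t)]
# Only the acceptable texts go on to training.
```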
Finally, the outcome of models must be presented in the right manner so that the initial problem is solved correctly. If a facial recognition system has been built for Caucasian faces, it should not be applied to African faces; doing so is a misuse of the model.
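One way to operationalize this, sketched below, is a guard that flags inputs lying far from the training distribution before the model is ever consulted. The summary statistics and threshold are placeholders for whatever actually describes the real training data.

```python
# Sketch: a crude guard against applying a model outside the data
# it was trained on. Statistics and threshold are placeholders.
import numpy as np

train_mean, train_std = 0.0, 1.0   # summary statistics of the training inputs

def in_training_domain(x, z_max=3.0):
    """Flag inputs that lie far outside the training distribution."""
    z = abs((np.asarray(x) - train_mean) / train_std)
    return bool(np.all(z < z_max))

if not in_training_domain([5.2]):
    print("Input looks out-of-domain; route to a human instead of the model.")
```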
Any modeling effort has three stages: data preparation, machine learning, and result communication. All three pose ethical challenges, as discussed above. As a community of practitioners, we must be careful to use our tools correctly and to ask the right questions of those who deliver the ingredients, such as the data.
Going beyond the project of making a model, there is a fourth stage: interaction with the user. In credit applications, models decide whether an applicant is given a loan. When the applicant disagrees or wants to know what can be done better next time, the model must provide a response. This desire is known as explainable AI. Providing such explanations is easier for certain kinds of models (e.g. Bayesian networks) and harder for others (e.g. multi-layer neural networks). The use case to which the model is put must define what is necessary.
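As an illustration of the easier end of that spectrum, here is a sketch using an inherently interpretable model for a credit-style decision. The data, feature names, and labels are all synthetic placeholders.

```python
# Sketch: explaining a credit decision with an interpretable model.
# Data and feature names are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_ratio", "years_employed"]
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                        # synthetic applicants
y = (X[:, 0] - X[:, 1] + rng.normal(size=200) > 0)   # synthetic approvals

model = LogisticRegression().fit(X, y)

applicant = X[0]
contributions = model.coef_[0] * applicant  # per-feature contribution to the logit
for name, c in sorted(zip(features, contributions), key=lambda p: -abs(p[1])):
    print(f"{name:>15}: {c:+.2f}")
# The signed contributions tell a rejected applicant which factors weighed
# against them, something a deep network cannot offer as directly.
```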
To achieve an ethical AI application, we therefore must watch several different areas and assess them one by one:
- Is this problem even suitable for AI models? If a wrong answer is extremely costly, perhaps not.
- Is the training data biased in some way? If this cannot be measured for the dataset as it is, then the dataset may need to be augmented. An objective measure of bias should be defined and measured; see the sketch after this list.
- Is the model used in the same context for which it was made? Models usually do not generalize beyond the kind of data they were trained on, and such use is therefore misuse.
- Is an explanation required, or perhaps should it be required?
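One candidate for such an objective measure of bias is the demographic parity difference: the gap in positive-prediction rates between groups. The sketch below uses synthetic predictions and group labels purely for illustration.

```python
# Sketch: one objective bias measure, demographic parity difference.
# Predictions and group labels here are synthetic placeholders.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == "A"].mean()
    rate_b = y_pred[group == "B"].mean()
    return abs(rate_a - rate_b)

y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(y_pred, group))  # 0.5 -> strong disparity
```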
As the above questions show, the solution to AI ethics is a human one. It would be difficult for AI to judge ethical bias, because existing AI models do not encompass any knowledge of the world or logical reasoning. They are extremely intricate mappings between the input data and known output data. If we could inject some knowledge, logic, and context-sensitivity into such models, some of these problems would partially go away. It is my vision to unify the connectionist approach of neural networks with the logical approach and to create a better AI future, as described elsewhere.
Studies that help in this endeavor should receive attention from the AI community, for example in The AI Ethics Journal. Many promising movements have started, particularly in the natural language processing part of AI, where ethics failures are perhaps most visible.
What is your opinion? Do you know of models that ran into an ethical issue? How was it solved? Thank you!