AI vs Human bias – Can a machine learn to have an opinion?
Jon Wiggins
Engineering automation solutions for end users and OEMs around the world, creating safer, smarter and greener operations.
With the rise of AI processing, it has become possible to compute and display vast amounts of data in a format which is easy for the human operator to digest.
Large language models are one example, alongside other data-processing models and image recognition. Recently, however, the machines themselves have been accused of bias.
An example is given here: https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/
What is Bias?
Bias is the distortion of data to fit a pre-determined result. It may also be prejudice for or against certain characteristics. In human behaviour these are often linked to heuristics, which the brain uses to process a large quantity of data quickly. These are referred to as cognitive biases and were first described by Kahneman and Tversky in 1972. Common examples are confirmation bias, availability bias and anchoring bias.
We are all subject to these biases; they are ultimately part of our human personality. So can bias be transferred to a machine, which lacks the human element of irrational reasoning?
Impact on AI
AI systems by and large operate on the principle of pattern recognition. The exact methods vary, and the results therefore may differ, but in general an AI is taught to recognise patterns in a data set and correlate them with outputs. For example, a language model is taught the basic rules of grammar and spelling, then trained to look for patterns in sentences whose meanings are similar to the input. A facial-recognition system is shown faces and taught to recognise the essential patterns of their features.
But say one were to omit a part of the whole picture from the AI's learning? In that case it would not recognise this part of the world as real: it has become biased, through omission, to ignore data. Conversely, say I feed in an over-large sample in one field. The AI will become more sensitive to this data and pick it out more easily. This will lead to a greater percentage of hits on this data: bias through increased sensitivity.
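The over-sampling effect can be seen in a minimal sketch. The two "classes", the cluster means, and the sample counts below are all illustrative assumptions, not data from any real system; the point is only that a simple pattern recogniser trained on a lopsided data set pulls ambiguous cases toward the over-represented class.

```python
import random

random.seed(0)

def make_samples(n, mean):
    # A 1-D feature clustered around each class mean (illustrative data)
    return [random.gauss(mean, 1.0) for _ in range(n)]

# Balanced training data: two classes, equal counts
balanced = [(x, "A") for x in make_samples(100, 0.0)] + \
           [(x, "B") for x in make_samples(100, 2.0)]

# Biased training data: class "A" heavily over-represented
biased = [(x, "A") for x in make_samples(500, 0.0)] + \
         [(x, "B") for x in make_samples(100, 2.0)]

def knn_predict(train, x, k=15):
    # Vote among the k training points nearest to x
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# A borderline point midway between the two class means
print(knn_predict(balanced, 1.0))  # either class is plausible
print(knn_predict(biased, 1.0))    # over-sampling pulls the vote toward "A"
```

With the biased set, the borderline point is swamped by "A" neighbours purely because "A" was sampled five times more heavily, not because the underlying world changed.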
Therefore, we should be careful that the data fed in is both suitable for the scenario we intend to analyse and unbiased by our own preconceptions or previous experiences.
A greater danger, though, may lie in the review of the data. If we feed an AI a biased data set, it is very likely that the biased answer we receive will confirm our view of the subject, strengthening our bias further. On the other hand, it is tempting to dismiss as erroneous any data presented which challenges our bias. This bias in the perceived accuracy of data may generate a loop in which accurate data is increasingly ignored and inaccurate data increasingly relied upon.
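That feedback loop can be simulated in a few lines. This is a toy sketch under assumed numbers (the true rate, starting belief, and update rate are all invented for illustration): one estimator accepts every observation, while the other dismisses observations that sit too far from its current view, exactly the "perceived accuracy" filter described above.

```python
import random

random.seed(1)

TRUE_RATE = 0.5  # actual frequency of the event in the world (assumed)

open_belief = 0.2    # updates on every observation
closed_belief = 0.2  # dismisses observations far from its current view

for _ in range(2000):
    obs = 1.0 if random.random() < TRUE_RATE else 0.0
    # Open-minded estimator: every observation nudges the belief
    open_belief += 0.01 * (obs - open_belief)
    # Confirmation-bias loop: data that contradicts the belief
    # is treated as erroneous and discarded
    if abs(obs - closed_belief) < 0.5:
        closed_belief += 0.01 * (obs - closed_belief)

print(round(open_belief, 2), round(closed_belief, 2))
```

The open estimator settles near the true rate of 0.5; the closed one, having started low, only ever accepts the observations that agree with it and drifts further from reality, never recovering.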
Application to safety
Within safety, these challenges generate potential issues at multiple levels throughout a product's lifecycle. AI may be used to analyse large data sets in the generation of a safety case or in the proving of the solution. These design-stage uses have the potential to produce a system which is either unsuitable for the intended purpose or built on false assumptions.
In operation, the most obvious example is the processing of data for presentation to an operator. This data may form the basis on which a safety decision is made, either by the human operator or automatically. In both cases the decision-making process needs to be considered for clarity and timeliness. In analysing the performance of a system during service, the use of AI presents a more difficult-to-detect hazard: the information fed in may be too limited to support effective judgements, or too biased to allow the AI to be accurate.
The latter case therefore needs close consideration when coupled with the use of AI in the design phase: the two methods must be suitably divergent and have data sets which allow for biases. When considering modifications based on the data, a factual baseline must be established, as it is too easy to take retrograde steps or to miss the whole picture.
AI in these contexts must be seen as a powerful tool, but not a replacement for good, sound safety-systems practice and established means of eliminating systematic faults.
Conclusion
In summary, AI is only as good as the information it receives. If that information is biased, the AI is likely to amplify the bias and feed it back. Therefore, there are three key checks we as humans can make.
1. Ensure the input data comes from diverse sources. Over-reliance on one source may allow bias by omission or bias by sensitivity.
2. Ensure there is an independent means to objectively measure the output. Create independent KPIs for the output data which, at the top level, test the broad tenets of the feedback.
3. Use diverse scenarios to test the whole system. Understand how the system will respond to differing scenarios. Ideally these are generated independently of the development or operation of the system.
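The second check can be sketched as a simple monitoring rule. The group names, baseline rate, and tolerance below are illustrative assumptions: an independently agreed baseline is compared against the system's actual positive-prediction rate per group, and any drift beyond tolerance is flagged for investigation rather than trusted.

```python
# Independent KPI sketch: flag any group whose positive-prediction
# rate drifts from an independently agreed baseline.
BASELINE_RATE = 0.30  # expected positive rate (assumed, agreed off-line)
TOLERANCE = 0.05      # acceptable drift before investigation (assumed)

# Hypothetical predictions from the AI, grouped by site
predictions_by_group = {
    "site_A": [1, 0, 0, 1, 0, 0, 0, 1, 0, 0],  # 30% positive
    "site_B": [1, 1, 1, 1, 0, 1, 1, 0, 1, 1],  # 80% positive
}

def kpi_check(preds, baseline=BASELINE_RATE, tol=TOLERANCE):
    # True if the observed positive rate is within tolerance of baseline
    rate = sum(preds) / len(preds)
    return abs(rate - baseline) <= tol

for group, preds in predictions_by_group.items():
    status = "OK" if kpi_check(preds) else "INVESTIGATE"
    print(group, status)
```

The key design point is independence: the baseline and tolerance come from outside the system being monitored, so a biased model cannot quietly redefine its own pass criteria.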