On Avoiding the Availability Heuristic Fallacy in Single and Complex Operation (Artificial or Natural) Systems: A Case for and against Self-Driving Cars
Heuristics are mental or neural (artificial or natural) shortcuts that aid in problem solving, value judgement, and the estimation of probable outcomes.
The availability heuristic fallacy occurs when one option or outcome is favoured over another, or over many others, simply because more information is available about it. The same fallacy can then recur at different stages of the value-judgement cycle. To avoid this risk, a system can be trained to withhold a definite outcome until a defined set of data has been received and processed before choosing one outcome over another.
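As a rough illustration of that idea, the sketch below shows one way such a withholding rule could look in code. It is a minimal sketch under assumptions of my own: the class name, the `min_observations` threshold, and the scoring scheme are all hypothetical, not part of any particular self-driving stack.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionGate:
    """Withholds a verdict until every candidate outcome has been
    observed a minimum number of times, so the merely most *available*
    option cannot win by default."""
    min_observations: int = 30            # hypothetical sufficiency threshold
    counts: dict = field(default_factory=dict)
    scores: dict = field(default_factory=dict)

    def observe(self, outcome: str, score: float) -> None:
        # Accumulate evidence for a candidate outcome.
        self.counts[outcome] = self.counts.get(outcome, 0) + 1
        self.scores[outcome] = self.scores.get(outcome, 0.0) + score

    def decide(self):
        # Refuse to decide while any candidate is under-sampled.
        if not self.counts or min(self.counts.values()) < self.min_observations:
            return None  # "not enough data yet"
        # Otherwise pick the outcome with the best average score.
        return max(self.scores, key=lambda k: self.scores[k] / self.counts[k])
```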
But imposing such a limit on the system can itself lead to fatal or near-fatal consequences in situations where a decision based on limited data is absolutely necessary for the system's ability to keep collecting data at all. Consider a self-driving car. A decision delayed on account of incomplete or insufficient data may lead to an accident, which will most likely remove any chance of collecting further data points on the surrounding traffic: humans, other cars, animals, and other natural and man-made occurrences. For ease of writing, everything except the car will be denoted <.
This notation might sound a little pejorative towards everything that is not the car itself. But it is a necessity if the aim is a self-driving artificial general intelligence system that is, by design, biased towards the passengers it carries, if not towards the system itself.
Choosing such a goal poses a different question, one of an ethical nature rather than one concerning the mathematics involved in creating such a system.
But as a general artificial rule, say the goal is to keep the 'car' in the > position at all times, while < stands for the set of equations that must be solved in order to hold the 'car' in that car > position at every moment.
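Read that way, the rule can be sketched as a check that must hold on every control tick. The sketch below is only an illustration under my own assumptions: the hypothetical safety_margin() scores the car against one element of < (another road user, an obstacle, the weather), and the rule is broken the moment any single constraint fails.

```python
# A minimal sketch of the "keep car > < at all times" rule. All names
# here are illustrative assumptions, not part of the author's notation.
from typing import Callable, Iterable

def car_stays_dominant(
    constraints: Iterable[dict],                # the set "<": everything that is not the car
    safety_margin: Callable[[dict], float],     # solves one constraint; > 0 means the car is safe
) -> bool:
    """Return True only if the car (">") out-ranks every element of "<"
    at this time step; one violated constraint breaks the rule."""
    return all(safety_margin(c) > 0.0 for c in constraints)

# Because the rule must hold "at all times", the check is re-run on
# every control tick of the driving loop, e.g.:
#   while driving:
#       ok = car_stays_dominant(sensed_constraints, margin_model)
```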
No smart system can produce absolute accuracy across all possible intervals of possible outcomes, especially in an activity as dynamic as driving, where the car is mostly surrounded by other cars and is continually exposed to hundreds, perhaps thousands, of uncertain human actions, all of which collectively influence the intended goal of an artificial self-driving system.
The fallacy presents itself when a decision must be made in order to keep the system going: the absence of a decision may lead to its partial or full termination. Must we, then, train the system to produce a decision even from data it knows to be inadequate or likely wrong? Or force it to avoid such a decision and so remove the chance of acting on limited data sets?
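One hedged way to frame that closing dilemma is as a comparison of expected costs: act now on thin data, or wait and accept the risk that the delay itself causes harm. The sketch below is a toy illustration with hypothetical probabilities and costs, not a proposed control policy.

```python
# Decide now if the expected cost of acting on thin data is lower than
# the expected cost of waiting for more evidence. All inputs are assumed
# estimates supplied by some upstream model.
def should_decide_now(
    p_wrong_now: float,           # probability the rushed decision is wrong
    cost_wrong: float,            # harm if that decision is wrong
    p_harm_while_waiting: float,  # probability the delay itself causes harm
    cost_of_delay_harm: float,    # harm caused by the delay (e.g. an accident)
) -> bool:
    expected_cost_now = p_wrong_now * cost_wrong
    expected_cost_wait = p_harm_while_waiting * cost_of_delay_harm
    return expected_cost_now <= expected_cost_wait

# Example: a 20% chance of a wrong swerve versus an 80% chance that
# freezing in traffic ends the drive entirely -> decide now.
print(should_decide_now(0.2, 100.0, 0.8, 100.0))  # True
```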