GOOD, BAD AND UGLY BIASES, BY ALGORITHMS AND HUMANS
Human bias is a source of error in understanding context that causes over-generalization. An example of bias is studying Italian eating habits only at breakfast: since in Italy the majority of people have sweet pastries for breakfast, one would conclude that Italians eat only pastries. By understanding the bias, which here would mean observing Italians during lunch and dinner too, the bias can be reduced. This has been widely researched, reaching the conclusion that there can be positive biases too, starting from how humans learn.
Humans do not need a large number of negative samples to learn a positive instance: infants do not need to see elephants to learn what hippos are. In other words, a human can generalize a new concept from samples of a single class, while a system such as machine learning (ML), a type of artificial intelligence (AI), requires large amounts of data and many labels. So there are good and bad biases, produced by both algorithms and humans. Algorithms have biases too: a source of error in the model causes it to over-generalize and underfit the data. For example, if an algorithm has to learn what men look like and is only shown pictures of white men, it will fail when shown a photo of a black man.

Now, thanks to unprecedented access to computing systems and data, human biases, and how they affect behaviour and decisions, can be studied by ML. But ML has biases too, because algorithms learn from human decisions and therefore also learn human mistakes, failing to notice certain details and mirroring existing human biases. Ignoring these facts will result in automating, and even magnifying, the problems. ML algorithms should therefore be considered thinking partners rather than human replacements. Algorithms can also learn from the complex dynamics of executives' decisions, based only on the symptoms resulting from their actions instead of the underlying state. As Kahneman noted, biases can be generated by a "hot and fast" System 1 type of processing, which is affected by emotion and is therefore less prone to optimal decisions than the slower and more analytical System 2 type of processing.
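The over-generalization failure described above can be made concrete with a minimal sketch (all feature values are hypothetical and purely illustrative): a model trained only on samples from one group learns a prototype that rejects valid samples from an unseen group.

```python
# Toy illustration (hypothetical data): a classifier trained on only one
# group over-generalizes and wrongly rejects an underrepresented group.

def train_centroid(samples):
    """Learn a single prototype (the mean feature vector) of the training set."""
    n = len(samples)
    dims = len(samples[0])
    return [sum(s[i] for s in samples) / n for i in range(dims)]

def is_match(sample, prototype, threshold=1.0):
    """Accept a sample if it lies within `threshold` of the learned prototype."""
    dist = sum((a - b) ** 2 for a, b in zip(sample, prototype)) ** 0.5
    return dist <= threshold

# Biased training set: features drawn from one group only.
group_a = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.15]]
prototype = train_centroid(group_a)

# An in-group sample is accepted, but a sample from a group the model
# never saw is rejected, even though it belongs to the same concept.
print(is_match([0.12, 0.18], prototype))  # → True
print(is_match([2.0, 2.1], prototype))    # → False
```

The fix, as in the breakfast example, is to make the training sample representative: include data from all groups rather than adjusting the threshold after the fact.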
This executives’ tendency towards endogenous actions could be enhanced daily by empowering leadership, through more proactive goal setting at the start of the workday, but the variability of those conditions in any human undermines the outcome of any human decision. A machine-learning decision-making tool is therefore less prone to variation, though still affected by human biases. This matters because classification algorithms that use human-generated input data suffering from human biases may exacerbate the errors stemming from those biases. In the presence of variability in the bias-induced error, the impacts of bias can be mitigated, but not eliminated, even if the algorithmic design is adjusted to account for the bias. Acknowledging biases may help generate bias-aware algorithms, significantly improving the expected outcome, but the magnitude of improvement depends critically on the relative discriminative abilities of the available information.
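The "mitigated but not eliminated" point can be sketched with a toy simulation (all numbers hypothetical): a bias-aware algorithm estimates and subtracts the systematic part of a labelling bias, but the variable, noise-like part of the bias-induced error remains.

```python
import random

random.seed(0)

# Hypothetical setup: human labellers add a systematic offset (the bias)
# plus random noise (the variability) to every score. A bias-aware
# algorithm estimates the offset from a small audited sample and
# subtracts it, shrinking the error but never removing the noise.

true_scores = [random.uniform(0, 1) for _ in range(1000)]
BIAS = 0.3  # assumed systematic human bias
observed = [s + BIAS + random.gauss(0, 0.05) for s in true_scores]

naive_error = sum(abs(o - t) for o, t in zip(observed, true_scores)) / len(true_scores)

# Bias-aware correction: estimate the offset from a 50-item audit sample
# where the ground truth is known.
audit = list(zip(observed[:50], true_scores[:50]))
estimated_bias = sum(o - t for o, t in audit) / len(audit)
corrected = [o - estimated_bias for o in observed]
aware_error = sum(abs(c - t) for c, t in zip(corrected, true_scores)) / len(true_scores)

print(round(naive_error, 3))  # close to 0.3: dominated by the systematic bias
print(round(aware_error, 3))  # much smaller, but non-zero: the noise remains
```

The residual error comes from the random component and from imperfect estimation of the offset, which is why bias-aware design mitigates rather than eliminates the problem.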
To understand the magnitude of the human and machine bias phenomenon, let us look at different kinds of biases, contributing negatively and, surprisingly, also positively to the outcome of the final application.
NEGATIVE CASES OF BIAS AUGMENTED BY MACHINE LEARNING
· “Tay”, the Twitter chatbot by Microsoft, became racist within 24 hours.
· COMPAS and PredPol, algorithms used in criminal sentencing and predictive policing, which appear to treat black people as more likely to break the law.
NEGATIVE CASES OF BIAS AUGMENTED BY HUMANS
· Court judges put less weight on cases in which minority defendants did not kill white victims.
· Wikipedia is much more biased than the Encyclopaedia Britannica, despite each Wikipedia page receiving 1,900 revisions on average.
UNBIASED VALUE CREATED BY MACHINES
· Elementum, an AI start-up that monitors incidents to provide real-time supply-chain visibility, secured its customers’ supply before prices rose after the 2014 fire at a DRAM chip factory in China.
· AI cut the cost of Berg Health’s cancer drug development from US$2.6 billion to US$1.3 billion.
· Tianyuan’s garment-making robots: 4 minutes to make a t-shirt.
· Amazon’s “Outfit Compare” tells you which garment fits you best.
· Grabit™ for Nike: 40 pieces of material assembled by robots in 50 seconds, instead of 20 minutes by a human.
· From HR to HAIR: implementing programmes like SAP SuccessFactors can synchronize legacy programs, offer employee collaboration platforms, and predict the impact of resource decisions on other business areas.
· Conatix’s semi-automated business research helps researchers work faster.
· MasterCard is experimenting with AI software that draws on the knowledge of experienced staff to help all workers become better sellers.
· Bosch is adopting a “thinking factory” approach in one of its German automotive plants.
· Affectiva, software that detects emotions.
· Enlitic, scanning medical images to detect cancer.
· Chevron’s AI-powered drilling increased production by 30%.
· Xinhua’s “Artificial Intelligence” anchors read the news on Chinese TV as real-life humans do.
VALUE ENHANCED BY MACHINES, DIRECTLY AFFECTING HUMANS NEGATIVELY
· Employees aware of STARA (Smart Technology, Artificial Intelligence, Robotics, and Algorithms) and its application to their job are more likely to have lower organisational commitment and career satisfaction. The advent of STARA may spell the end of successful career planning, and higher perceptions of STARA are likely to have stronger adverse effects on turnover intentions, depression, and cynicism.
VALUE ENHANCED BY HUMANS AND MACHINES WORKING SYNERGISTICALLY
· Mercedes is extending the cobot concept with exoskeletons, so humans and machines can personalise the car in real time.
· SEB, a Swedish bank, is using AIDA to handle natural-language conversations; Aida has access to vast stores of data and can answer many frequently asked questions.
· Unilever combined human and AI capabilities to scale individualized hiring. Time from application to hiring decision dropped from four months to just four weeks, the time recruiters spend reviewing applications fell by 75%, and the number of universities proposing candidates rose from 840 to 2,600.
· GE’s Predix, a digital twin–based system, alerts maintenance workers to potential problems before they become serious and puts the information they need at their fingertips to make good decisions, ones that can sometimes save GE millions of dollars.
· Carnival Corporation is applying AI to personalize the cruise experience for millions of vacationers through a wearable device called the Ocean Medallion and a network that allows smart devices to connect.
WORK-RELATED ALGORITHM PROBLEMS
· CYBERCRIME: companies are increasingly interconnected and rely on flows of data, making them vulnerable to malware attacks such as the WannaCry ransomware attack of May 2017.
· SUPERINTELLIGENT SYSTEMS: in high-frequency trading (HFT), algorithms competing with one another have ‘escaped’ human control; this affects not only a small community of investors but also future pensions.
· AUTONOMY VS CONTROL: the risk here is the lack of a clearly defined boundary between autonomy and control in the relationship between worker and robot, as in aircraft autopilot systems or AI-assisted surgical operations.
· AI UNDERMINING WORKPLACE CONDITIONS: misuse or abuse, particularly in workplace monitoring and surveillance or in discriminatory practices like scoring or profiling. Here data management is key, and the distinction between personal and non-personal data matters. Workers need to know how their personal data is collected, retained, processed, disseminated and possibly sold, and how data related to their behaviour at work can be used, potentially against them.
INTERPRETABILITY
· Deep learning is still a black box, exactly as humans are. Researchers are trying to create explainable artificial intelligence, because algorithms do make errors, almost inevitably, and diagnosing and correcting exactly what is going wrong can be difficult.
HUMAN POSITIVE BIASES CREATING VALUE
· Human biases can be good for learning “learning techniques”: humans have cognitive biases that promote fast learning, and a machine-learning model equipped with human cognitive biases is capable of learning from small and biased datasets.
· Human biases can also help machines achieve better results, for example by using crowdsourcing as a method for turning distant search into local search.
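The first bullet above can be sketched in a few lines (the features and labels are hypothetical): a strong inductive bias, here the assumption that members of a class cluster around a prototype, lets a learner generalise from a single example per class, much as an infant generalises a concept from one instance.

```python
# Sketch (illustrative data): one-shot learning via a prototype bias.
# With only ONE labelled example per class, a nearest-prototype rule
# can still classify new items, because the "classes cluster around
# prototypes" assumption stands in for the missing data.

def nearest_prototype(query, prototypes):
    """Return the label of the training example closest to `query`."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda item: sq_dist(query, item[1]))[0]

# One example per class: (label, feature vector). The two features are
# hypothetical (e.g. body size and snout length on a normalised scale).
one_shot_training = [
    ("elephant", [0.9, 0.8]),
    ("hippo", [0.7, 0.3]),
]

print(nearest_prototype([0.75, 0.35], one_shot_training))  # → hippo
print(nearest_prototype([0.95, 0.75], one_shot_training))  # → elephant
```

The bias is what makes the small dataset sufficient; a flexible model with no such assumption would need many more labelled samples to draw the same boundary.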
VALUE ENHANCED BY HUMANS
· Prediction by machines allows riskier decisions to be taken. Judgment by humans is exercised when the objective function for a particular set of decisions cannot be described (i.e., coded). Research is now analysing how to teach judgement, instead of prediction, to machines.
STATISTICAL TRUTHS
· Machine learning works on statistical truths rather than literal truths; life-or-death domains like nuclear power therefore remain an issue.
THE ADVANTAGE OF NO PRIVACY OR OF BIG SYSTEMS
· AI, sadly, will become stronger where fewer rules and bigger systems are in place, raising the subject of AI geography: the amount of data available for training, the limits of hardware, and how AI can correct AI.
POSSIBLE SOLUTIONS: AUDITING BIASES
· Algorithm auditing must be interdisciplinary in order to succeed. It should integrate professional scepticism with social-science methodology and concepts from fields such as psychology, behavioural economics, human-centred design, and ethics.
· Computing errors: because AI systems make decisions independently, they will have to be developed under auditing conditions.
POSSIBLE SOLUTIONS: DEBIASING THE BIASES
· A bias-aware algorithm can significantly improve the expected outcome, but the magnitude of improvement depends critically on the relative discriminative abilities of the available information.
POSSIBLE SOLUTIONS: COPYRIGHT
· Algorithms could be copyrighted and classified by their biases.
POSSIBLE SOLUTIONS: AI UNDERSTANDING HUMANS BETTER THAN HUMANS THEMSELVES
· AI systems that understand humans, and AI systems that help humans understand them, rather than treating them in a mechanistic way.
· Mechanistic assumptions simply do not work for biology or for humans. So machines may learn to understand humans better than humans do, simply by not simplifying.
POSSIBLE SOLUTIONS: RELYING ON HUMAN ABILITIES
· Considering that nearly 2,500 years ago Plato, in the Phaedrus, warned that “reading will reduce knowledge to mere data” and “writing will limit our memories”, we humans are not doing that badly and may behave better than expected.
AI is augmenting human capacities and, for a century, has been eliminating boring jobs: bowling-alley pinsetter, switchboard operator, lift operator, film projectionist, knocker-upper, bridge toll collector, check-out cashier, and railway station ticket seller. But AI is also deepening our dependence on machine-driven networks, eroding humans’ ability to think for themselves, an ability that is absolutely instrumental, especially when we must also face the further complication of algorithmic biases.