“ARTIFICIAL INTELLIGENCE IS INTRINSICALLY ETHICAL AND WILL REGARD HUMAN BEINGS WITHOUT ANY KIND OF PREJUDICE”: Is that true?
Reinaldo Lepsch Neto
Experienced Data & Analytics Professional | Proud father | 50+
In a world like ours, at a time when all sorts of prejudice are breaking loose – racism, ageism, sexism, chauvinism, xenophobia, etc. – failures of judgement are all around: hiring or firing someone, arresting a suspect or honoring a hero, allowing entry into a country, being barred from boarding a plane.
Human beings are trained and receive orders about what must be done in situations like those above, and in many others. Specific situations require generalization, which generates new procedures, new training and new orders. And prejudice permeates all of it. These times do not favor the optimists: a world without any kind of prejudice is one that even the grandchildren of our grandchildren will probably not see. Perhaps prejudice is part of the inner architecture of the human being. Or a cultural trait.
Anyway, whenever we conclude that something will never, ever change in mankind, we feel like creating a new species from scratch – a new species that would inherit only the good human characteristics. This is a very old wish, but it gained a lot of strength after artificial intelligence (the expression) was coined and became a serious subject of research and development, besides endless scientific papers, books and movies. That was a long time ago – it began almost at the same time the computer was born. So we are talking about several decades, not far from a whole century.
The truth is, most people outside the computer science club – I mean something close to 100% – have no idea what artificial intelligence is or, even worse, have lots of wrong ideas about it. This unfortunately happens to some people inside the club, too. I will not digress through this whole world of misunderstandings; I will just point out the worst one of all: confusing artificial intelligence with artificial GENERAL intelligence, which is the capability of thinking and of having self-consciousness. Something at the edge of top-advanced scientific research and science-based fiction.
The truth is, artificial general intelligence (let’s call it AGI) is not artificial intelligence (AI). The former is what we described above, and some very bright minds are working on it. There are lots of forecasts about when it will become reality; it could happen somewhere between 10 and 30 years from now.
Meanwhile, AI is already here, used every day, all the time, in the devices around us. Your smartphone is a little box full of AI. The subway and the airlines, the e-mail antispam, the computer antivirus. That funny app on your phone where you aged your picture. Social networks like Facebook, LinkedIn and Twitter. Specialized systems in engineering, medical imaging, law. The list is almost endless. I will not try to explain how all of it works; I will just say there are black boxes driven by lots of science – mathematics, statistics, physics – that seem to do magic, or to act like a human. But it’s just science.
So, let’s focus on one of the most important fields that benefit from AI. Image processing (pictures, video) has leveraged AI research and taken it to peaks of excellence. Human intelligence is primarily visual, so a good start was making the computer capable of watching – of having vision. Today, computer vision is taught in lots of online courses and as an advanced subject in graduate schools.
The computer does not “see” in the same sense a human being sees, but AI helps it interpret images through some very advanced algorithms and data structures, enclosed in a black box called Deep Learning (DL). Indeed, the machine “learns” because those data structures are updated as data flows through them (I will not go into the details of training/validation/testing as done in DL). Its “learning”, or “knowledge”, is a set of inner parameters – anywhere from a few to many billions – that emulate the human brain closely (or grossly, depending on the overall structure and the algorithms).
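To make that idea concrete, here is a deliberately tiny sketch – not a real deep-learning framework, just an illustration of the point that “learning” is nothing more than repeatedly adjusting numeric parameters to reduce error. The function name and the one-parameter model are invented for this example:

```python
def train_one_parameter(xs, ys, steps=200, lr=0.01):
    """Fit y ~ w * x by gradient descent on a single parameter w."""
    w = 0.0  # the model's only "knowledge" before training
    for _ in range(steps):
        # Gradient of the mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad  # the parameter update: this IS the "learning"
    return w

# Data generated by the rule y = 3x; training recovers w close to 3.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]
w = train_one_parameter(xs, ys)
```

A deep network does exactly this, only with billions of parameters instead of one – which is why everything it “knows” comes from the data it was shown.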
Now, let’s go back to the old-fashioned human brain – the brain of human beings in critical jobs that depend on their vision. This is no time to be naïve: on surveillance duty, human beings will not see all other human beings the same way. Prejudice has many names, but skin color, ethnicity or even social condition will bring heavy biases to this kind of job. To summarize: if you are black, you have a much higher chance of having your time wasted in a police check. This happens in the USA, in Brazil, in Europe. It happens mostly to Afro-Americans, but other kinds of police approach may happen to Muslims, for example. Well, the focus here is not whether this is right or wrong.
But the human approach to other humans has caused a lot of harm and injustice, so image processing and computer vision researchers have worked hard on intelligent systems to automate this task. These systems must learn – that is, adjust their parameters. Lots of images are fed into them, along with other data.
Then people get that utopian idea that inside the computer’s black boxes the magic will happen: actual bad guys will be labeled as bad guys, while the good ones will get a gold star. Nope.
This time, computers behave like children and learn whatever is taught to them. And how are they taught? As said above, by swallowing and crunching lots and lots of data. Images, numbers, old police records. And social condition (poverty). And skin color. And ethnic origin. And religion.
Let’s focus on race. A racist person is not born racist; it is not in the DNA. Kids are taught to be racist – by their parents, other relatives, schoolmates, teachers, bosses. Regarding computers, there is an old saying: “garbage in, garbage out”. If racist information, intrinsically connecting skin color or ethnicity to a criminal record, is fed into an image processing algorithm, the computer vision system will learn it like a child. And there will be garbage out.
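“Garbage in, garbage out” can be shown with a minimal, entirely hypothetical sketch: a naive frequency-based model trained on biased historical records will faithfully reproduce the bias. The group labels and numbers below are invented for illustration only:

```python
def train_rates(records):
    """'Learn' the flag rate per group by simple counting."""
    counts, flagged = {}, {}
    for group, label in records:
        counts[group] = counts.get(group, 0) + 1
        flagged[group] = flagged.get(group, 0) + label
    return {g: flagged[g] / counts[g] for g in counts}

# Biased historical data: group "B" was flagged far more often,
# regardless of actual behavior.
biased_records = ([("A", 0)] * 90 + [("A", 1)] * 10 +
                  [("B", 0)] * 50 + [("B", 1)] * 50)

rates = train_rates(biased_records)
# The "model" now flags group B five times as often as group A --
# it has learned the prejudice baked into its training data,
# exactly like a child would.
```

Real systems are vastly more complex, but the mechanism is the same: the parameters end up encoding whatever correlations the data contains, fair or not.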
The conclusion is: if the information pounded down the throat of computer vision systems is not properly pre-processed by human eyes and minds, the results will be as bad as if only human beings had done the whole job. And, remember, those systems are not cheap.
This, therefore, reminds us that computers are not intelligent yet. They do not have AGI. They are, yes, very fast, and can do the job of a lot of police officers – watching crowds, for example. They can work in parallel, and the output comes out very fast. But if the algorithms have bad initial parameters, or if bad or biased input data are provided, that fast output will probably be wrong, and lots of money and time will have been wasted.
All of this has already happened (see the example link below), and the proposals include not providing social/race information, besides taking a good look at the overall databases. A good human look. Just providing raw data and processing it as-is is a recipe for disaster, and lots of wrong actions may follow – human lives might be lost, or not saved, for example.
So, data science still has a lot of work to do on algorithms, data structures and the incoming data. They must be thoroughly tested. Processed, reprocessed, adjusted. This is science; it requires experiments, run again and again. Data must be checked – their format and, especially, their contents. Whatever might result in garbage out must be taken out of the input. After all of this, and only after it, can the machine training start. Otherwise, you will have a racist kid, millions of dollars thrown away and, besides that, probably an innocent person killed or arrested.
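The pre-processing step argued for above can be sketched as follows. This is only an illustration under stated assumptions – the field names are hypothetical, and in practice removing sensitive attributes is necessary but not sufficient, since other fields can act as proxies for them; the human audit the article calls for still matters:

```python
# Sensitive fields that must never reach the training step
# (hypothetical names for illustration).
SENSITIVE = {"race", "ethnicity", "religion", "social_condition"}

def sanitize(record):
    """Remove sensitive attributes from one raw record."""
    return {k: v for k, v in record.items() if k not in SENSITIVE}

def audit(records):
    """Fail loudly if any sensitive field survived pre-processing."""
    for r in records:
        leaked = SENSITIVE & set(r)
        if leaked:
            raise ValueError(f"sensitive fields leaked: {leaked}")
    return True

raw = [{"id": 1, "race": "X", "location": "downtown"},
       {"id": 2, "religion": "Y", "location": "suburb"}]

clean = [sanitize(r) for r in raw]
audit(clean)  # only now is the data ready for training
```

The point of the explicit `audit` step is the article’s point in miniature: the check is done by code a human wrote and reviewed, before the black box ever sees the data.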
https://www.nytimes.com/2019/05/15/business/facial-recognition-software-controversy.html