Digital Ruminations: AI Has No Bias
The picture and the background are from Pexels; by Mateus Souza and Markus Spiske, respectively. The design is my own.

Hello, my friend
I am here to tell you that you are wrong
AI has no BIAS, even if you name it 2bias


It is just data being received or sent
Patterns being matched to what was modelled
An answer fetched out of billions of bytes


The paths of code going from A to Z
Without knowledge or intent
Just going in there to answer your search


Looking in the warehouse, the data lake, the Big Data wave we made
Getting the match or the best one it can find
Giving it back, without even hoping or caring if they got it right


No, my friend
AI has no BIAS
Even if you name it 2bias        

In a world where too many people’s pay depends on other people clicking, liking, or sharing whatever is said, shown, or done, the way things get transmitted and communicated is becoming increasingly frustrating, and dangerous.

The wording of most (news) headlines today, no matter the source, is aimed at being “clickbait” rather than giving the reader an idea of what the text inside is about. They carry that “emotional tone” that appeals to our lower brain, the amygdala in this case, to trigger a mainly visceral (non-thinking) response.

One area where this is starting to get out of hand is technology, especially when it comes to AI and anything associated with it. Sadder still is that the articles themselves rarely provide the whole context or the complete idea; instead, they focus only on whatever has that “shareable” bit. That can be good for likes and clicks, but not for conveying the real circumstances and a proper understanding of what is happening, or of what is being done.

For example, there are many headlines around Face Recognition (one of the best-known technologies built on AI algorithms). However, very few of them sound truly “informative”. Most include words such as “bias” or “racist”. But, as the opening text of this article states, AI has no BIAS.

A quick search on Google defines bias as “prejudice in favour of or against one thing, person, or group compared with another, usually in a way considered to be unfair”. As that simple sentence demonstrates, a good level of comprehension of a variety of things is needed to actually be able to talk about any form of bias.

Human faces overlaid with a red mesh depicting the data points, which are all a Face Recognition AI algorithm cares about.

AI has no understanding, no context, no mechanism to compare and reach new conclusions, and conclusions that could be called “its own” are even further away. Like the meme comparing an ML model/solution (Machine “Learning”) to a parrot: at least the bird looks funny and cute when it talks all that nonsense.

But, of course, talking about coding errors or functional issues sounds too techie, and certainly not click-worthy, to most people, even when they are in a visceral state. So an honest headline like “Face Recognition AI algorithms cannot properly detect certain colour tones or particular shapes”, which is the actual truth of the matter, is one I haven’t read anywhere yet.
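To make that point concrete, here is a minimal sketch of what such a “detection problem” looks like from the inside. Everything in it is hypothetical: made-up match scores for two groups of test images and an assumed decision threshold, not any vendor’s real pipeline. The whole thing reduces to numbers and error rates; there is no intent anywhere in it.

```python
# Minimal sketch, with made-up numbers: an assumed face-matching system that
# outputs a similarity score for each genuine pair of photos, and a
# hypothetical threshold that turns the score into a yes/no decision.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical similarity scores for genuine matches under two image conditions
# (for example, well-lit versus poorly-lit photos).
scores_group_a = rng.normal(loc=0.82, scale=0.05, size=1000)
scores_group_b = rng.normal(loc=0.74, scale=0.08, size=1000)

THRESHOLD = 0.70  # assumed decision threshold: above it means "same person"

def false_non_match_rate(scores: np.ndarray, threshold: float) -> float:
    """Fraction of genuine pairs the system fails to match."""
    return float(np.mean(scores < threshold))

for name, scores in [("group A", scores_group_a), ("group B", scores_group_b)]:
    rate = false_non_match_rate(scores, THRESHOLD)
    print(f"{name}: false non-match rate = {rate:.1%}")
```

If the two printed rates differ, that is a functional issue to be fixed with better data, better features, or a better threshold; the code never knew, or cared, who was in the pictures.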

With software issues there is a need to be careful: they are never something introduced with intent. Bugs, as they are called in the industry, can happen for too many reasons, and none of them has to do with the author of the code, or with whoever defined the functionality, the requirement, or the user need.

However, talking about “bias”, “prejudice”, “discrimination”, or “racism” puts an emotional weight on something that shouldn’t have any. These terms, on the other hand, can become blame; they imply intent and point fingers at those doing their best to develop the technology in question and to solve the issues found along the way.

Next time a big (news) headline tells you something in an emotional tone, take it with a block of salt. AI algorithms have no BIAS. They couldn’t care less. And I haven’t heard of any called 2bias, as of yet. So, friend, don’t let the wording fool you, because, you should know, you are smarter than that.

Cover Image

Through the girl’s features, the possibility of hazarding a guess as to her background, culture, country of origin, place of birth, language(s) spoken, and such exists purely in a human mind. The Face Recognition software, the AI algorithm behind it, only sees the dots: the data it is designed to register and capture as input for its model and for the process that will work with it to give us a particular output.
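As a hedged illustration of that last sentence, here is a minimal sketch, assuming a generic face-recognition flow rather than any specific product: a photo becomes a vector of numbers (landmarks or an embedding), and “recognition” is nothing more than a distance between two such vectors.

```python
# Minimal sketch of a generic (assumed) face-recognition comparison:
# each face has already been reduced to a vector of numbers by some model,
# and the only "judgement" made is a similarity score between vectors.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 128-dimensional embeddings for two photos; the second is
# simulated as a slightly noisy copy, standing in for "the same person again".
face_a = rng.normal(size=128)
face_b = face_a + rng.normal(scale=0.1, size=128)

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

print(f"similarity = {cosine_similarity(face_a, face_b):.3f}")
# The result is a single number. Background, culture, country of origin,
# or language spoken do not exist anywhere at this level.
```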

Image With the Faces

The cultural, racial, age, and gender differences that we can pinpoint with relative ease in these images are meaningless to an AI algorithm. To it, they are unnecessary beyond the data points that might be represented by, or obtained from, the facial features displayed by each individual.

A small irony here, although totally unrelated to the topic at hand, is that those faces were artificially generated by another kind of AI algorithm.

The source of the image is a New York Times article (Lesson of the Day: ‘Designed to Deceive: Do These People Look Real to You?’). They use their own version of an algorithm that can be seen in action at “thispersondoesnotexist(dot)com” (“imagined” is not a word I would have used to describe them).

* * *
