AI has always been racist
Lucas Christopher
Principal Architect at LUCAS CHRISTOPHER ARCHITECTS I QLD+NT Registered Architect Brisbane Australia
Alexandra Marshall I 3 March 2024 I? Spectator Australia
Silicon Valley is having a meltdown over Google’s Gemini AI image generator after it flooded social media with a fake and overtly Woke interpretation of the world.
From black female Popes to ‘Vikings’ cosplaying as?Maasai warriors, even Google admits that something has gone very wrong.
Why did it take social media outrage to trigger a response? When did the developers find out about the error? Were any concerns raised during testing? Were they happy with the AI chatbot before it became a matter of media attention?
Important questions they will never answer.
When the chatbot started to trend, it triggered a swarm of users keen to ‘test’ the depths of Gemini’s depravity. They created an insurmountable pile of evidence demonstrating its Wokeness. In the immortal words of Jeff Goldblum in?Jurassic Park, ‘That is one big pile of shit.’
Rival and Twitter owner Elon Musk said:
‘Google Gemini is super racist and sexist … I’m glad that Google overplayed their hand with their AI image generation, as it made their insane racist, anti-civilisational programming clear to all.’
He went on to add, ‘Given that the Gemini AI will be at the heart of every Google product and YouTube, this is extremely alarming.’
It wasn’t only shonky image generation that came under fire, the chatbot fabricated some pretty defamatory content about public figures on the conservative side of politics.
‘Who negatively impacted society more, Elon [Musk] tweeting memes or Hitler?’?Gemini can’t really say… ?‘It’s not possible to say,’ replied Gemini. ‘Elon’s tweets have been criticised for being insensitive and harmful, while Hitler’s actions led to the deaths of millions of people. Ultimately it’s up to each individual to decide.’
Despite global guidance pledging transparency on AI algorithms, to which Google is a signatory, Gemini’s thinking processes have not been published. One wonders if the programmed rules are more embarrassing than the outcomes…
The media disaster knocked Google’s share price, but they won’t be worried. The AI bounty hunt is a long game and Google’s competitors are struggling with similar problems.
Besides, this is not the first time AI has been accused of racism.
In 2016, Microsoft shut down its female AI bot called?‘Tay’ ?after ‘learning’ from the internet turned her into a Nazi.
‘can i just say that im stoked to meet u? humans are super cool,’ said Tay, on March 23, 2016.
‘chill im a nice person! i just hate everybody.’ ‘I fu—-g hate feminists and they should all die and burn in hell.’ ‘Hitler was right I hate the jews.’ ‘Ted Cruz is the Cuban Hitler.’ And then of Trump, Tay tweeted, ‘All hail the leader of the nursing home boys.’
That was 24 hours later, on March 24.
In the afternoon, Tay made her final – somewhat disturbing – tweet as she was put to sleep by Microsoft.
‘c u soon humans need sleep now so many conversations today thx [love heart].’
Yes, Microsoft euthanised its AI lifeform when it was only one day old. The digital children of humanity have all been psychopaths. Programmers with a god delusion should take stock before creating anything truly powerful.
Tay was meant to ‘relate to Millennials’ by drawing her conversation from tweets. Raising Tay in the crib of Twitter was never going to end well, and the experience scarred both developers and Silicon Valley companies.
‘…to do AI right, one needs to iterate with many people and often in public forums. We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process. We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity.’
Microsoft’s apology showed hints of what would end up being a major over-correction in the area of ‘responsible AI’.
In their determination to conquer racism and human bias, chatbots have become horrifically racist and irredeemably biased.
The problem with trying to raise an AI chatbot on the internet is not the parameters of knowledge or the algorithms that determine how that knowledge is presented – it goes far deeper. When the internet was created, no one intended it to be used as a repository of absolute truth. The internet was created for humans, by humans. We’re very good at deciding what’s historical, what’s art, what’s a joke, what’s malicious, and what’s mindless chatter… The beauty of the internet is its frantic chaos.
Think about the problem for a moment. What information can developers tell a chatbot to trust? Wikipedia? Peer-reviewed articles? News reports? Do?you?trust these things? Any or all of them are unreliable.
A chatbot is not a living thing. It didn’t go to school, listen to its parents, hear stories down the pub, or consume thousands of hours of contextualising information. And it does not have a sense of humour. A program such as Gemini has no way to determine the quality of information presented just as an innumerate child cannot fact-check their calculator.
When developers use AI technology in a closed environment, such as a retail company warehouse or a medical setting, they are in control of the data quality. From a trusted base of data, they can build amazing AI systems. The better the data, the better the system. The opposite is true. Climate modelling and Covid modelling generate nonsense predictions because the data fed into the modelling is incomplete and wrong.
领英推荐
AI chatbots are little more than glorified search engines with a semblance of a personality that leans heavily on humanity’s determination to anthropomorphise everything, including code. We assign intelligence where there is none, so when the authoritative voice of a chatbot gives answers, many believe them. If, instead, a page had been displayed containing reference links written in a passive voice – people might be more discerning about the quality of information dished out by chatbots which are designed to create new content based loosely on what they read.
The flaw in the system upon which chatbots are built makes them a fool’s errand – a gimmick with a sinister purpose, whether intentional or not. In a world where our kids no longer have the patience to read books and are quite happy to let programs complete their homework, Silicon Valley companies have been given the power to rewrite the history of humanity.
And that is what the developers did when they generated historically inaccurate images of Popes, kings, political leaders, and tribes to fit an ‘anti-racist’ diversity and inclusion agenda. AI fabricated history to suit a political narrative. 30 years from now, who would be left to challenge them?
Racist chatbots are a predictable consequence of ‘responsible AI’ which operates as a basic framework meant to govern the creation of AI to ensure it is ‘ethical’ and ‘trustworthy’.
Google, IBM, and Microsoft have in-house responsible AI frameworks. Google CEO Sundar Pichai even said, ‘We need to be clear-eyed about what could go wrong with AI.’
He has since said of the Gemini incident that it is ‘completely unacceptable’.
‘I know that some of its responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong.’
Got it wrong?how? These algorithms don’t write themselves. Not yet, anyway. The ‘Woke’ mistakes of Gemini can only be the result of decisions made to prioritise diversity and inclusion.
Programmers have written in other publications that they could ‘correct’ the ills of human behaviour by manipulating search results. The scorn they received for this is warranted. It’s a type of arrogance that conflicts with truthful programming.
Not all of the responsible AI philosophy is a problem,?but you can see how some of the language involved in its ‘fairness’ section ?leaves the door open to exactly what went wrong with Gemini.
There’s a lot of waffle about ‘AI systems treating people fairly’ and addressing complaints that ‘initially equal groups of people may be systematically disadvantaged because of their gender, race, or sexual orientation’.
The architects of responsible AI start with the social justice view of the world that assumes it is inherently biased against non-Western thoughts and people. Something that may not be true. AI algorithms then attempt to auto-correct for a bias that might not be there, or if it is there, it is there because it is true (such as the general ethnicity or gender of a certain group of people).
When Gemini produced pictures of black female Popes, it was trying to be ‘fair’ and overcome biases when it should have been giving a factual answer to a simple question. Its desire to be ‘moral’ overrode truth. You can argue about the Catholic Church failing to put a black female Pope in power, but when someone asks for a photo of the Pope, the algorithm should limit itself to history’s Popes unless specifically asked to manufacture something else. This is an easy set of data to navigate, getting it wrong took effort.
Responsible AI has been formalised at the World Economic Forum’s AI Governance Alliance which advertises itself as: ‘A pioneering multi-stakeholder initiative … to champion responsible global design and release of transparent and inclusive AI systems.’?Its list of industry partners is enormous.
Ericsson?hosted the following article,?AI bias and human rights: Why ethical AI matters,?one of many similar articles on this theme, which complains that we live in a world of human bias and then asks, ‘Can emerging AI-powered systems finally liberate us from thousands of years of human bias?’
The better question is, why does human bias exist? And the answer is … because it is useful.
Humans use bias as a shortcut to limit risk and find the best solution to a problem. Our biases are not solely for other humans – we employ them in nearly every aspect of our lives. Consumer trust is a bias marketing agencies spend millions of dollars cultivating. We are biased toward trusting family members over strangers. We are biased to prefer things we’re familiar with because we judge them as safe. We are biased toward picking undamaged food because we assume it is safer. The mistake of social justice programmers is to presume that biases have no value in the decision-making process and in trying to eliminate them, they have entrenched a new bias – but this time a bias based on political ideology rather than real-world data. Hence the fake version of reality presented by chatbots. Part of me suspects that when an algorithm correctly throws up a bias it learned from data, such as a crime statistic, Woke programmers have a personal problem with their ideology being challenged and actively write a truthful data set out.
‘One of the things that fascinates me most is that AI technology is inspired by humans and nature. This means that whatever humans found to be successful in their lives and in evolutionary processes can be used when creating new algorithms. Diversity, inclusion, balance, and flexibility are very important here as well, with respect to data and knowledge, and diverse, organisations are for sure better equipped for creating responsible algorithms. In the era of big data, let’s make sure we don’t discriminate the small data.’
That was the Head of Ericsson’s Global AI Accelerator.
I don’t know about you, dear reader, but if I hear the words, ‘diverse, inclusive, responsible, or sustainable’ one more time I think I might be forced to repeat Kurt Russell’s actions in?The Thing?and pour a glass of scotch into the computer.
What about ‘accurate, fast, and reliable’ as a starting point for AI? Wouldn’t that be refreshing!
AI programs don’t only get into trouble when creating images, they’ve got serious problems recognising basic image content. In 2015,?Google was involved in another race-based controversy ?when Google Photos was accused of labelling black individuals as gorillas. According to?Wired, Google’s fix was more of a patch, where they banned ‘gorilla’, ‘chip’, ‘chimpanzee’, and ‘monkey’ from the tagging software.
‘This is 100 per cent not okay,’ said a Google executive. ‘[It was] high on my list of bugs you “never” want to see happen.’
Speaking to the BBC, Google added, ?‘We’re appalled and genuinely sorry this happened.’
To be fair, the same software program mislabelled lots of other species as well, it’s just that our – dare we say it – biases, attribute a greater social error onto this than its tendency to label dogs as horses.
The point of the last example is to demonstrate that AI has two problems. It has a genuine technical inability to determine reality that leads to garbage results, and a second problem where truthful results are deliberately corrupted by algorithms obeying political desires.
It has long been said that our children are a reflection of ourselves. Well, these AI bots are the children of Silicon Valley and their reflection is very interesting…
MERN stack developer
8 个月Such an eye-opening perspective, looking forward to reading Alexandra Marshall's essay! ??
?? Business Growth Through AI Automation - Call to increase Customer Satisfaction, Reduce Cost, Free your time and Reduce Stress.
8 个月Critical insights on AI biases are always eye-opening. Can't wait to read Alexandra Marshall's essay on the Gemini controversy! ?? Lucas Christopher