Don't let AI become an Old White Man
Andrew Parkinson
Senior Executive | MPA | Corporate Affairs | Policy | Media | Government Relations | Stakeholder Communications | AI Curious
Key Points:
The promise of artificial intelligence (AI) is undeniable, but, folks, there's a significant issue we need to address: the bias still informing AI systems.
Left unchecked, we risk creating technology straight from Margaret Atwood's Gilead - patriarchal and prejudiced, turning our AI into an "Old White Man."
It’s a critical flaw and communication leaders have a key role to play.
The Issue: Bias in AI
Bias in AI comes from the data used to train it. Think of AI like a 4-year-old with a crayon and an obsession with dinosaurs. You’ve asked your little one to draw a lovely picture of Mummy, but Mummy looks a lot like a velociraptor.
Human biases, both explicit and implicit, creep in, leading to skewed outcomes. Whether it's a hiring algorithm favouring male applicants or a facial recognition system misidentifying minority groups, the consequences can be devastating.
This Diffusion Bias Explorer shows how text-to-image models represent different professions and the words used to describe them. See what I mean?
Why Should We Care?
The answer is simple: fairness and trust. If AI systems are biased, they lead to unjust and discriminatory outcomes, affecting job opportunities, legal decisions, access to services and more.
This undermines trust in AI technology, hampering its adoption and stifling innovation. Organisations that rely on biased AI also risk significant reputational damage and potential litigation.
There was a case last year in the US where a major bank used AI to decide loan approvals. The algorithm was trained on historical loan data which, overlooked by the bank, identified men as more creditworthy. The result: women with similar financial profiles were denied loans more often than men. It's a clear example of why training data needs diverse, fair representation.
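To make that concrete, here's a minimal sketch of the kind of check that surfaces this problem: compare approval rates across groups. The applicants and numbers below are entirely made up for illustration - they are not the bank's data.

```python
# Hypothetical sketch - made-up applicants, not real loan data.
# The check: compare approval rates between groups of applicants.

def approval_rate(decisions, group):
    """Share of applicants in `group` who were approved."""
    subset = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in subset) / len(subset)

decisions = [
    {"group": "men",   "approved": True},
    {"group": "men",   "approved": True},
    {"group": "men",   "approved": False},
    {"group": "women", "approved": True},
    {"group": "women", "approved": False},
    {"group": "women", "approved": False},
]

men = approval_rate(decisions, "men")      # 2/3 approved
women = approval_rate(decisions, "women")  # 1/3 approved
gap = men - women
print(f"approval gap: {gap:.2f}")  # a large gap flags the model for human review
```

Nothing fancy - and that's the point. A comms or policy team doesn't need to build the model to ask for this number.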
There are plenty of reported examples in law enforcement, where facial recognition systems end up being racial recognition systems - the source data informing such systems produces higher misidentification rates for people of colour than for white people.
All this underlines the importance of recognising AI merely as an assistant to a human-led solution, not the solution in itself.
Writing in Media Collateral's "GenAI x Comms Impact Report" last year, Dr. Lisa Dethridge, a Senior Research Fellow at RMIT University, said:
“AI programs lack true intelligence or understanding. Generative AI merely mimics human intelligence and is capable of gross errors, needing constant oversight. AI is devoid of human and physical insight, context and cognition.”
She points to the critical role of communicators who understand better than most the value of getting the human inputs right:
“In any company, community or industry group, the AI conversation must extend beyond technologists, software engineers and the IT Department. Participants in the AI conversation represent a new kind of business culture. A diverse array of practitioners, users, customers and stakeholders will form the human narrative and experience central to AI deployment.”
The ‘GenAI x Comms' report shows more than 70% of the communication practitioners surveyed across the Asia Pacific are already applying generative AI tools in the workplace - so we have to put this issue of bias front and centre.
What can we do about it?
Here are a few steps we can take:
#1. Raise Public Awareness: We can lead efforts to make people aware of AI bias (I’m doing my bit today!). Let’s give clear, accessible information explaining the importance of fair AI systems.
#2. Policy Development: Let’s work across disciplines - IT, HR, Comms, Policy, Ethics - to inform and enforce policies for transparency and accountability in AI. We need standards for bias detection and mitigation, and regular audits of AI systems, but this should be multidisciplinary and collaborative - not just IT.
#3. Education Initiatives: Let’s lead training for government leaders and stakeholders to understand more about AI bias and its impacts, equipping decision-makers with the knowledge to promote fairness and equity in AI implementation.
#4. Inclusive Stakeholder Engagement: Let’s facilitate engagement with a broad range of stakeholders, including communities that are under-represented, to gather more diverse perspectives.
#5. Ethical Guidelines: Communication leaders should also inform ethical guidelines on the use of AI in public services, with a focus on fairness, accountability, and transparency. The pace of the tech means these would need regular reflection and updates. Organisations should also be transparent about how their AI systems make decisions and be accountable for addressing any bias.
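On step #2, one concrete check audit teams sometimes borrow from US employment-selection guidance is the "four-fifths" rule of thumb: flag a system when one group's outcome rate falls below 80% of another's. A hedged sketch, with purely illustrative numbers:

```python
# Illustrative numbers only - swap in your own system's outcomes.

def disparate_impact_ratio(rates):
    """Lowest group outcome rate divided by the highest."""
    return min(rates.values()) / max(rates.values())

# e.g. share of each (hypothetical) group approved, hired or shortlisted
outcome_rates = {"group_a": 0.60, "group_b": 0.42}

ratio = disparate_impact_ratio(outcome_rates)
print(f"impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the "four-fifths" threshold
    print("Below the 80% rule of thumb - flag for multidisciplinary review")
```

A single threshold is never the whole story, but a simple, repeatable metric like this gives the multidisciplinary group in step #2 something concrete to audit against.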
One more thing - communicators and public servants play a pivotal role in making sure AI systems are fair, equitable and trusted. Bias in AI is not just a technical or IT issue - it's a broad challenge requiring collective effort.
As we continue to bring AI into various aspects of our lives, we have to remain vigilant against old biases. By promoting diversity, implementing robust bias detection, and fostering collaboration, we can build systems that are fair and equitable.
The OECD.AI Policy Observatory has a cool Catalogue of Tools and Metrics for Trustworthy AI that's well worth a look if you're interested in this, along with an opportunity to deep-dive into a wide range of principles, policies and insights.
As a comms pro, I urge you to reflect on how bias in AI affects your organisation and what steps you can take to address it. How can you make sure the AI systems you rely on are just and fair rather than pale, male and stale? What can you do today to mitigate bias and promote trust in AI? Please share in the comments!
Thanks for reading! If you found it valuable, please like, share and comment.
[email protected] | 0404 615 596 | www.bureausydney.com