Allyship, Unconscious Bias and AI
Last week I had the opportunity to put some allyship training into practice within minutes of completing the course, because of my own error.
I wrote a blog post on LinkedIn in January summarising some data analysis I completed while commuting to and from the office on the train. Building a database is not a very exciting topic, so to create an image for the post and drive engagement I used an AI image creation tool to make a cartoon of me sitting on the train working away at my database. After a couple of adjustments to my prompt I came up with something I thought was acceptable. It was something along the lines of: “Please create a cartoon of a man in a suit with grey hair on a busy commuter train”. This took me less than 5 minutes, plus another 5 minutes in another AI-powered tool to add the title and colour scheme. The result is below.
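For anyone curious, the whole thing boils down to a single call to an image-generation API. I haven't named the tool I used, so the OpenAI Images API in this sketch is only an assumption – any text-to-image service with a prompt parameter works much the same way, and the model choice is illustrative.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The original prompt from the post, verbatim
prompt = ("Please create a cartoon of a man in a suit with grey hair "
          "on a busy commuter train")

result = client.images.generate(
    model="dall-e-3",  # assumed model; the post doesn't say which tool was used
    prompt=prompt,
    size="1024x1024",
    n=1,
)
print(result.data[0].url)  # URL of the generated cartoon
```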
Up went the post and I got good engagement and some interesting comments asking for specific analysis out of the database. I wrote another post, using the same base image with a new title block. I thought I had landed on a basic brand/logo for my database-related posts, which seemed like a good idea.
Fast forward to last week, a month after my initial post: I publish another short blog post based on analysis of my database, then head into allyship training by Danna Walker from BUILT BY US LIMITED, organised for everyone at Useful Simple Trust by Ailsa Roberts. We have a great two hours working through how we should respond to bias and discrimination; we roleplay scenarios and discuss how we would react to and resolve the situations. I note a couple of interesting points and pat myself on the back for thinking I have a good handle on my unconscious biases and the methods for mitigating them.
Then I come out of the training…
My post has had a good number of reactions, and I’ve received a message from Vittoria Danino firstly thanking me for the analysis and then asking me to change my post image due to its lack of diversity. I have a quick look and fire off a reply saying I’ll take a look for future posts.
Then I open up Teams and one of my colleagues, Eva MacNamara, lets me know that they feel uncomfortable about my post because the passengers in the background of the train are all white and male.
So now I take a good look at the image and ask some other people in the office and I get the same response – my radar has been well off on this one. I’ve been blind to the impact my image caused and now I need to decide how to correct it.
The simple solution was to recreate the image with diverse passengers. It turns out that the AI engine takes some persuasion to create a mix of passengers – I have to explicitly ask for ‘very diverse passengers’ to get fellow commuters who aren’t all white and male. Asking simply for ‘diverse passengers’ made no difference.
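To make that iteration concrete, here's a rough sketch of the same request with three wordings, so the outputs can be compared side by side. Again, the OpenAI Images API is an assumption, and the model and file names are purely illustrative.

```python
import urllib.request

from openai import OpenAI

client = OpenAI()

base = "A cartoon of a man in a suit with grey hair on a busy commuter train"
variants = {
    "baseline": base,
    "diverse": base + ", with diverse passengers",
    "very_diverse": base + ", with very diverse passengers",
}

# Generate one image per wording and save each locally for comparison
for name, prompt in variants.items():
    result = client.images.generate(
        model="dall-e-3", prompt=prompt, size="1024x1024", n=1
    )
    urllib.request.urlretrieve(result.data[0].url, f"commute_{name}.png")
    print(f"saved commute_{name}.png")
```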
Then I came to the next challenge – the prompt assumes that the man in the suit is the subject of the image, so it makes him much larger than everyone else in the cartoon, which still has patriarchal connotations.
The next attempt is to change the context of the image – take the passengers out entirely and have something interesting outside the train window – an apocalyptic climate emergency was suggested. A few prompts later, some with hilarious effects, a couple of suitable images pop out. But if I’m going to change the image on my old posts (it turns out I can’t change the images on a post, only on articles), I can’t just get rid of the other passengers – it would look like I’m ignoring the issue.
So I head back a couple of iterations and prompt the AI to make the man sit with the other passengers. The image looks partially realistic, and more like my actual commute, which was my original intention – at least in the amount of phone use! Which gets me to my final image – below. I wonder if my original prompt asking for a man (to represent me) skewed the AI to think I only wanted male passengers. The challenges Google’s latest LLM, Gemini, is having in trying to include diversity in its images demonstrate how hard this is.
What did I learn?
Firstly, that I work with some fantastic people, both at Expedition Engineering Ltd and in the water sector, who were happy to call out my error.
Secondly, that I should never be complacent about my own biases and how people will rightly react to poorly thought-through content.
Finally, content generated with AI needs to be checked just as thoroughly for issues of bias, discrimination and prejudice as a spreadsheet or report. I might be able to outsource the creativity, but I can’t outsource inclusivity – I have to reflect on and consider my actions. Checking for these types of issues is important.