Stamping out bias in AI-generated output
The challenge of deploying AI to improve equality and accessibility.
When it comes to consistent processing and the application of factual rules, it's easy to recognise the opportunity AI presents as a force for good in equality. Objective and evidence-driven by design, AI can be a powerful agent of positive change, moving us away from subjective opinion and assumption towards impartial, data-based decisions and balanced content.
AI certainly brings the capacity and capability to sift and process information in strict accordance with rules and principles. But there's also the possibility of AI embedding prejudice and bias by replicating insight from human and subjective sources, or because of flawed algorithms that don't take account of context.
As marketers, we need to be aware of the potential for bias and to take an active role in identifying risks and opportunities from using AI to power our strategy and operations. In this article, I'll look at some of the issues to consider and address.
Fairness and inclusion in AI applications
AI is being used in recruitment, criminal justice, healthcare, credit scoring and academic selection. In these sectors, the human impact of allowing bias is clear and the consequences are weighty. If machine-processed job applications or healthcare data embed inequality, the best candidates may never get the chance to take up roles, and people in poor health or at risk of disease may be excluded from life-saving care.
The Stanford Social Innovation Review (SSIR) reveals: "Men and male bodies have long been the standard for medical testing. Women are missing from medical trials, with female bodies deemed too complex and variable. Females aren’t even included in animal studies on female-prevalent diseases. This gap is reflected in medical data."
As marketing managers, we are generally not responsible for decisions with a directly life-changing impact. But it's vital that we take an active approach to fairness and inclusion as we adopt AI applications, platforms and solutions to improve efficiency and targeting, and to deliver richer insight and market predictions.
There are consequences when humans make bad decisions, but with AI the impact scales far faster, because of the sheer speed and volume of AI data processing. Even in consumer or B2B marketing, unsupervised AI decision-making could create bias and exclusion that harms individual customers or whole sectors of society. In commercial terms, this risks reputational damage, missed opportunities and customer dissatisfaction.
Here's an example. An AI solution might use patterns of historic profit performance or customer behaviour to apply variable pricing, delivery penalties for certain areas, or returns conditions that disadvantage some people due to factors beyond their control. This could perpetuate social inequalities, with wider ethical ramifications.
In sales and marketing, using AI to create content or targeting based on existing customer profiles or decontextualised market data could advance or embed the unwanted realities of sexist, racist or otherwise discriminatory environments. Just because most current customers of a product are white, female and over 60 doesn't mean it's acceptable to address only them in direct marketing campaigns. It may not be commercially desirable - there could well be a fruitful market opportunity amongst younger women, men, or people of other ethnicities - but it could also be discriminatory to exclude other audiences, explicitly or by inference. Generative AI can suggest creative executions, so it's key that language and images of people and settings are inclusive, ethical and compliant with equality legislation.
AI is indiscriminate about the data it ingests. It will use whatever we provide or whatever it can find to generate insight and evolve its algorithms. Without reviews and controls, initially subtle patterns of bias could feed a chain of bad decisions and actions. It might not be noticed until an extreme and obvious consequence occurs, such as social media commentators highlighting offensive imagery in ads or calling out sexist language.
Perpetuating human bias
Historically, many human-led decisions and outputs are biased. People sometimes deliberately but more often unknowingly (and perhaps more dangerously) perpetuate assumptions and subjective beliefs when they're selecting a creative or deciding how to position a product. That's why diversity and openness to debate are so important in marketing teams - a collective approach that involves different perspectives helps to weed out bias. Good marketers know that inclusion and representation matter.
But that's not always the case in the historic data that AI marketing solutions may be ingesting. AI can perpetuate bias when assessing and categorising humans against outcomes because, unless instructed otherwise, it treats protected characteristics like any other variable. Statistically valid correlations may still be unacceptable or illegal to act on, resulting in discrimination by age, sex, race, disability, sexual orientation or other factors.
The SSIR reports a good example: "When AI systems that determine creditworthiness learn from historical data, they pick up on the patterns of women receiving lower credit limits than men. They reproduce the same inequitable access to credit along gender (and race) lines."
Marketers have a responsibility to provide good data
More than for any automation technology before, generative AI depends on the accuracy, integrity and relevance of the data it works with. As marketing leaders, we're responsible for making sure of this.
It's important to make sure that demographic and behavioural data is widely and evenly sampled. For instance, basing analysis and decisions on voluntary responses to a survey will often skew towards people with stronger opinions, or people who are more digitally enabled, or people who have more time to answer surveys. We need to be suspicious of data that could reflect societal or historical inequities or be based on individual decisions and preferences.
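As a simple illustration of this kind of check, here's a minimal sketch in Python (the file name, column name and benchmark shares are all made up for the example) that compares a survey sample's demographic mix against a population benchmark to surface sampling skew:

```python
import pandas as pd

# Hypothetical survey export with an 'age_band' column per respondent.
survey = pd.read_csv("survey_responses.csv")

# Illustrative population shares (e.g. from census data) - invented here.
benchmark = {"18-34": 0.28, "35-54": 0.34, "55+": 0.38}

# Compare the share of each age band in the sample with the benchmark.
sample_shares = survey["age_band"].value_counts(normalize=True)
for band, expected in benchmark.items():
    observed = sample_shares.get(band, 0.0)
    print(f"{band}: sample {observed:.0%} vs population {expected:.0%}")
```

Large gaps between sample and population shares are a signal to reweight the data or broaden collection before it feeds any model.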
According to UN Women, "Digital access gaps mean women produce less data than men, and a lack of data disaggregation leads to unequal representation in data sets."
Practical ways to prevent and mitigate bias in AI decisions
Pre-processing data can help maintain balance. This means reviewing it to check whether protected characteristics correlate with the outcomes a model predicts. You can also strip sensitive characteristics out of people data to make sure they're not used directly in decision-making.
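Here's a minimal sketch of both steps, assuming a pandas DataFrame with illustrative column names ('converted' as the outcome; 'age', 'sex' and 'ethnicity' as the protected characteristics):

```python
import pandas as pd

df = pd.read_csv("customers.csv")  # hypothetical customer dataset
PROTECTED = ["age", "sex", "ethnicity"]

# 1. Review: does the outcome vary by protected characteristic?
#    Large gaps here warrant investigation before training anything.
for col in PROTECTED:
    print(df.groupby(col)["converted"].mean(), "\n")

# 2. Strip: remove protected columns so the model can't use them directly.
features = df.drop(columns=PROTECTED + ["converted"])
target = df["converted"]
```

One caveat worth noting: dropping columns doesn't remove proxies - variables such as postcode can still correlate strongly with protected characteristics, which is why the review step matters as much as the stripping.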
Post-processing means looking at the predictions the AI makes and adjusting them to make them fair. There's also the option to constrain the AI model with conditions that stop it disadvantaging people with sensitive characteristics. And you can ensure that the AI records its decision-making process, with a procedure in place to scrutinise it regularly and check that it isn't becoming distorted by anomalies or unwanted patterns of behaviour or performance.
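Here's a minimal sketch of what a simple post-processing adjustment plus audit trail might look like (the scores, group labels and target approval rate are invented for the example; this is a demographic-parity style approach, one option among several, not a prescription): each group gets its own score threshold so approval rates come out roughly equal, and every decision is logged with the factors behind it.

```python
import numpy as np

def group_thresholds(scores, groups, target_rate=0.2):
    """For each group, choose the score threshold that approves
    roughly the same share (target_rate) of that group - a simple
    demographic-parity style adjustment."""
    return {
        g: np.quantile(scores[groups == g], 1 - target_rate)
        for g in np.unique(groups)
    }

def decide_with_audit(scores, groups, thresholds):
    """Apply the per-group thresholds and keep an audit trail
    recording the inputs behind every decision."""
    decisions, audit = [], []
    for s, g in zip(scores, groups):
        approved = bool(s >= thresholds[g])
        decisions.append(approved)
        audit.append({"score": float(s), "group": g,
                      "threshold": float(thresholds[g]),
                      "approved": approved})
    return decisions, audit

# Illustrative data only: model scores and group labels.
scores = np.array([0.9, 0.4, 0.7, 0.3, 0.8, 0.5])
groups = np.array(["A", "A", "A", "B", "B", "B"])

thresholds = group_thresholds(scores, groups, target_rate=0.34)
decisions, audit = decide_with_audit(scores, groups, thresholds)
print(decisions)
```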
It's important to consider the context. AI decision-making may not be appropriate in all situations. As marketers, we must make conscious decisions. We may need to rule out instances and contexts where greater sensitivity or nuance is needed, or where we need to intervene to actively improve fairness.
Using AI to understand human bias and improve equality in outcomes
AI doesn't lie or deceive itself the way humans do. Decision-makers may claim to have used certain criteria while failing to report that they were influenced by other factors - they may not even be aware of it. In contrast, AI can produce a full audit trail that shows the precise factors and logic behind its decisions, recommendations and predictions.
In many situations, AI can do a better job of being objective than its human overlords. Ask a human to pick a winner 'at random' from a list by choosing a number or sticking a pin in the paper, and their unconscious fondness for numbers containing the digit '6' or their bias towards the top left corner of a page means the outcome isn't truly random. A digital winner-picker is genuinely random and avoids mistrust from competition entrants. In more sophisticated applications of AI, this ability to manage out inconsistency, omissions and inaccuracies is very helpful.
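For the simple winner-picking case, a genuinely uniform draw takes a few lines of Python using the standard secrets module (entrant names invented for the example):

```python
import secrets

def pick_winner(entrants: list[str]) -> str:
    # secrets.randbelow draws from a cryptographically strong source,
    # so no 'favourite number' or page-position bias can creep in.
    return entrants[secrets.randbelow(len(entrants))]

print(pick_winner(["Asha", "Ben", "Carmen", "Dev"]))
```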
AI can be a powerful tool for identifying and unpacking human bias. There's an opportunity to run AI alongside human decisions and compare the results. This can help us understand differences, notice anomalies and take action for positive change.
Justin Nihiser, CEO of Code Ninjas, says, "Looking right to the start of recruitment, to ensure equality, job postings must be inclusive and appeal to a diverse range of candidates. Using natural language processing, AI can analyse job descriptions and identify language that may be biased or discriminatory. As a resource, AI has the potential to significantly reduce bias in job recruitment." This capability can also be used for marketing content.
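As a toy sketch of the underlying idea (illustrative only - real tools use much richer lexicons and trained language models, and the word list and suggestions here are made up):

```python
import re

# Small illustrative list of coded terms and suggested alternatives.
GENDER_CODED = {
    "ninja": "consider 'specialist'",
    "rockstar": "consider 'high performer'",
    "aggressive": "consider 'proactive'",
    "dominant": "consider 'leading'",
}

def flag_coded_language(text: str) -> list[str]:
    """Return a note for each coded term found in the text."""
    return [
        f"'{word}' may read as gender-coded; {suggestion}"
        for word, suggestion in GENDER_CODED.items()
        if re.search(rf"\b{word}\b", text, re.IGNORECASE)
    ]

ad = "We're hiring an aggressive, dominant sales ninja."
for issue in flag_coded_language(ad):
    print(issue)
```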
If we find that AI is replicating patterns of bias, that doesn't just mean correcting the AI model. It may also mean looking at the original human decisions, and exploring the reasons they are biased. For example, why did the original recruitment ads use potentially discriminatory language? There may be educational or commercial actions we need to take to support an inclusive culture and mindset in our teams.
Defining fairness and balance can be an ethical challenge
There's another challenging aspect to managing bias in AI. If we are going to identify it and intervene, we need to know what we really mean by 'fair' or 'unbiased'.
When it comes to protected characteristics and discrimination legislation, it's easy to understand and apply the law. But communications, creative content and positioning can require more subtle definitions. In our marketing campaigns, does fairness always mean reflecting current reality and the state of the world? Is it possible or desirable instead to project a desired future state? Or is statistical precision the most important factor?
For example, in AI image searches for US CEOs in 2015, the proportion of women CEOs shown (11%) was lower than the actual percentage (28%). But what is the optimal correction? Is it fairest for the AI to reflect the actual current percentage (still unbalanced in terms of sex equality), or an aspirational 50%? The unmoderated result reveals underlying inequality in the web-wide content the image search algorithm feeds on. It may be accurate in reflecting popular representations of CEOs, but it is culturally undesirable in liberal nations.
Summary: Marketers are rightly accountable for generating unbiased output both from human and AI decisions
There's significant public suspicion of AI, albeit some of it based on limited knowledge and understanding. But in practice, for marketers, AI will very rapidly become discredited if it builds in bias. The risk of serious reputational damage for organisations means it's important to take the issue seriously, for both ethical and commercial reasons.
If you're adopting or have already adopted AI in your marketing operations, I recommend that you put in place a formal process to identify bias, to define fairness and to continually monitor your AI models to make sure they are inclusive and egalitarian. Here's a checklist of six priorities I've identified:
That's all...for now.
Ready to start discussing your AI strategy? Speak to a Six Degrees Expert - www.6dg.co.uk