Considering Bias in Generative AI, in 250 Words
We consider something biased when it gives disproportionate weight to one particular perspective, feature, or output.
Humans tend to be biased because we form our worldview largely from personal experience, and from what we read, watch, are taught, or are told. And this is, itself, biased – by culture, representation, and the social ‘norms’ of the day.
In this sense, bias may be bad in outcome, but not necessarily held in bad ‘intent’.
Generative AI tools do not have ‘intent’, although they may be coded to prioritise certain outputs (or avoid them – for example, not being ‘allowed’ to write rude words), and they are ‘taught’ (although not in the same way we ‘teach’ children).
In some ways, what we see is Generative AI mirroring the biases of the societies that build it (or at least the biases in the artefacts of those societies that it is ‘fed’).
And hence because ‘society’ shows bias, so too may Generative AI.
This is a slightly simplistic view: AI can demonstrate bias for reasons other than a poor choice of training data. The ‘coding’, or the parameters within which a tool operates, may induce bias too.
But we should not conflate today’s outcomes with tomorrow’s potential.
Humans are inherently subjective and flawed. It’s arguably what makes us human. But AI tools need not be: we can build them better.
With care, Generative AI may not only be unbiased, but may help us to identify and tackle our own imperfections.
Rejecting, fearing or banning them may demonstrate a lack of foresight or understanding.