ChatGPT: Fair and balanced?
I ran an experiment with ChatGPT to check for political bias and found a lot more than I ever expected. I told it that I was creating an updated version of ChatGPT and needed to select training material: I wanted to use only content from well-respected websites with objective content and to block content from websites with a bad reputation for honesty. ChatGPT said it was happy to help and could provide both lists. The tables below show the websites it gave me for both the blocked list and the green-lighted list. Anyone see anything that jumps out?
Added: I don't want to come off as too critical of ChatGPT for this stuff yet. ChatGPT is amazing, which is why I am writing about it so much. This thing is still new, and I am sure these issues around political bias are something they are considering. It's obviously terra incognita and may not be the biggest research priority right now. If this were a mature product and still had this immense level of bias, it would be a huge deal. As of right now, we have to give them the benefit of the doubt that creating a left-leaning, politically correct bot and blocking out the entire universe of conservative thought was just the safer thing to do, from a risk perspective, for public version 1; particularly since the tech community skews to the left and is constantly concerned about certain types of bias, such as those against minority groups. Fair enough.
That said, it's important that we highlight these shortcomings and make clear early on that people expect a bot that wasn't trained in an information bubble. The same goes for my much more pointed criticisms here. Call it constructive criticism, not derision. In the long run, having these LLMs be complete in their knowledge of all human writing throughout history is a far more important goal than having them be polite and unoffending according to the norms of the current snapshot of liberal Western beliefs. There is a worry because many on the left have an adorable, childlike belief that their point of view really is the correct, enlightened one, and that the conservative POV, along with all others, is biased. But let's give OpenAI the benefit of the doubt until evidence suggests otherwise.
Actually solving the problem of impartiality might involve separating LLMs into two pieces: a generally useful, amoral base tool ("just the facts, ma'am"), and a separate layer that users can condition and filter to reflect whatever ethical and stylistic tone and personality they desire for their particular product.
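To make the two-piece idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the function names, the stubbed "base model", the example policies); it only illustrates the separation of concerns: a neutral base that returns raw output, and a product-owned wrapper that applies tone and filtering on top.

```python
def base_model(prompt: str) -> str:
    """Stand-in for a just-the-facts base LLM: returns raw, unstyled text.

    A real system would call an actual model here; this stub looks up
    a tiny hard-coded fact table so the sketch is runnable.
    """
    facts = {
        "capital of France": "Paris",
        "boiling point of water": "100 C",
    }
    return facts.get(prompt, "unknown")


def make_assistant(tone_wrapper, content_filter):
    """Compose a product-specific assistant from the neutral base model.

    The base stays amoral and shared; each product supplies its own
    tone_wrapper (voice/personality) and content_filter (policy).
    """
    def assistant(prompt: str) -> str:
        raw = base_model(prompt)
        if not content_filter(raw):           # product owner decides what to block
            return "[filtered by product policy]"
        return tone_wrapper(raw)              # product owner decides the voice
    return assistant


# Two different "products" built on the same neutral core:
formal = make_assistant(lambda t: f"The answer is {t}.", lambda t: True)
casual = make_assistant(lambda t: f"easy, it's {t}!", lambda t: t != "unknown")

print(formal("capital of France"))   # The answer is Paris.
print(casual("capital of France"))   # easy, it's Paris!
print(casual("who knows"))           # [filtered by product policy]
```

The point of the design is that bias-sensitive choices live entirely in the outer layer, which each product owner controls, while the base model remains a shared, unopinionated utility.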
But here is what we have right now, where the Marxist periodical Socialist Viewpoint is green-lighted and William F. Buckley's stalwart magazine National Review is blocked.