ChatGPT can eliminate social media hate speech
Produced with DALL-E

Social media companies claim that monitoring and eliminating hate speech and abuse is difficult and expensive. So instead of trying to manage the copious output of their platforms, they should control the input. By placing a ChatGPT-like interface in front of the comment box, abusive comments can be intercepted before they ever become problems.
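As a rough illustration of the idea, here is a minimal Python sketch of such an input gate. The scorer below is a toy keyword checker standing in for a ChatGPT-style classifier; the function names, blocklist and threshold are all illustrative assumptions, not any platform's real API.

```python
# Minimal sketch of an input-side moderation gate.
# A real deployment would replace classify_toxicity() with a call to a
# fine-tuned language model; the keyword list here is only a stand-in.

BLOCKLIST = {"idiot", "scum"}  # placeholder for a trained model's vocabulary of abuse

def classify_toxicity(comment: str) -> float:
    """Toy scorer: fraction of words found on the blocklist."""
    words = [w.strip(".,!?").lower() for w in comment.split()]
    if not words:
        return 0.0
    return sum(w in BLOCKLIST for w in words) / len(words)

def moderate(comment: str, threshold: float = 0.2) -> dict:
    """Gate a comment before it reaches the platform.

    Returns whether the comment may be posted, and feedback for the
    author when it may not.
    """
    score = classify_toxicity(comment)
    if score >= threshold:
        return {"posted": False,
                "feedback": "This comment may be abusive. Please rephrase it."}
    return {"posted": True, "feedback": None}
```

The key design point the article argues for is visible here: the check runs *before* the comment is stored or shown, so nothing abusive ever reaches other users.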

ChatGPT can be trained to identify the many varieties of abuse; the platforms hardly lack abundant data on which to train their own custom (bad-)language models. Any unacceptable input could be fed back to the individual with suggestions for improvement.

This abuse is rampant, and the current controls do not seem to be working, nor does there seem to be the will to make them work.


Several other benefits will arise from this process:

Tackling abusive behaviour: expressions of racism, misogyny and hate can generally be identified, and the filter can be improved through fine-tuning.

Protecting children and other vulnerable groups: need I say more?

Grammar: ChatGPT can even help trolls and haters improve their grammar, correct their spelling and write more appropriate messages!

Education: ChatGPT can help educate users about the subjects they are abusing. However, I am under no illusions: some people will deny facts indefinitely (as Jonathan Swift is said to have observed, "You can't reason your way out of something you didn't reason your way into"). This will produce complaints about bias and the deep state. But who cares?

Help: persistent offenders may be advised to seek help or counselling. As a pattern of behaviour becomes apparent, this advice could be escalated. For example, continued angry language could be identified and the appropriate recommendations made.
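The escalation idea can be sketched in a few lines of Python. The thresholds and messages below are illustrative assumptions, not any platform's policy; the point is simply that repeated flags step up the response, ending in a suggestion to seek support.

```python
from collections import defaultdict

# Hypothetical escalation tracker for persistent offenders.
# Thresholds are (flag_count, message) pairs, in ascending order;
# the highest threshold reached determines the message shown.
ESCALATION = [
    (1, "Your comment was flagged. Please review the community guidelines."),
    (3, "Repeated flags detected. Please reword comments before posting."),
    (5, "A persistent pattern of abuse has been noticed. "
        "Support and counselling resources are available."),
]

class OffenceTracker:
    """Counts flagged comments per user and escalates the advice given."""

    def __init__(self):
        self.flags = defaultdict(int)  # user_id -> number of flagged comments

    def record_flag(self, user_id: str) -> str:
        """Record one flagged comment and return the advice to display."""
        self.flags[user_id] += 1
        count = self.flags[user_id]
        message = ESCALATION[0][1]
        for threshold, msg in ESCALATION:
            if count >= threshold:
                message = msg
        return message
```

In a real system the counter would live in persistent storage and decay over time; an in-memory dictionary is enough to show the mechanism.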

Fake news and fact checking: while we are about it, let's use the same idea to filter our politicians and other spokespersons.

Some might say this is censorship and against free speech. No, it's censorship and against hate speech.

Who, then, determines what counts as hate speech or abuse? We do. Society. Non-abusers. These systems are trained on text from the web and other sources and are, I would argue, as close to unbiased as is practically possible.

Another issue is that the business models of these platforms thrive on abusive and divisive posts from their users, amplified by the multiplier effect of engagement.

In that respect, we should encourage users and advertisers to insist on input-management controls. Platforms that fail to improve their services could find themselves at a competitive disadvantage relative to those that do, or alternative platforms will spring up and threaten the market position of the deniers.

As for the platforms that continue to perpetuate abusive behaviour, I foresee them becoming festering sites marinating in their own vitriol.

For those sites that promote image and video contributions, there is always DALL-E.

In the UK, the Online Safety Bill tackles a complex set of problems and, as such, has been watered down considerably.

There is a simple, low-cost solution to this problem. We just need the will to do something about it.
