Using AI & LLMs to facilitate 'clown attacks'
INTRODUCTION
Most people are aware that they are being manipulated by digital platforms. Analytics, psychological data and gradient-based optimisation allow every action to be weaponised, further entrenching addiction. Most continue using these platforms regardless.
Can we blame them? Algorithms overlap so perfectly with your psyche that social media feels like a cosy room with all your favourite posters on display, and all your favourite people to talk to.
Sadly, the problem is far worse than this, and will continue to worsen.
WHAT IS A CLOWN ATTACK?
A 'clown attack' exploits the human tendency to disregard ideas from people perceived as 'low status'. Many digital platforms use clown attacks to train our brains to automatically avoid, in future, any thought associated with a person of 'low status'.
You might be tempted to think that you treat all people equally and aren't falling for these 'clown attacks'. Wrong.
If I asked you whether you followed (insert controversial politician) or (insert right/left wing celebrity), what would you say? If I asked you why you hold a particular position on that person, what would your answer be? You might try to justify your answers and insist that you alone decided to create that echo chamber.
HOW DO CLOWN ATTACKS WORK?
Hopefully you can now see where digital platforms are going. Visualise this: a room filled with psychologists and data analysts. They are fed raw data from millions of scrolls, comments and likes, and they analyse where best to fiddle with trends. They might decide they don't like the views of (insert journalist, who we'll name John). Fortunately for them, John posted something about (insert controversy) several years ago, or the current trend on (insert social media platform) happens to contradict John. What do they do? They make sure to promote the posts that contradict him, especially the ones that turn people against him. They make people think John is 'low status'.
Eventually, John's important and moral views disappear. People fall for the clown attack and stop engaging with any ideas John posts, because he's now considered an 'undesirable'. Our brains are trained to treat any future "John thoughts" as 'low status'.
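To make the mechanism concrete, here is a deliberately simplified sketch of the kind of ranking logic described above. Everything in it is hypothetical: the `Post` fields, the `TARGETED_AUTHORS` set and the penalty/boost weights are illustrative names I've invented, not a real platform's ranker.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    engagement_score: float              # predicted likes/comments/shares
    mentions_author: str | None = None   # who the post is about, if anyone
    is_critical: bool = False            # does it attack the mentioned author?

# Hypothetical: authors the operator wants framed as 'low status'.
TARGETED_AUTHORS = {"John"}

STATUS_PENALTY = 0.5   # quietly demote the target's own posts
ATTACK_BOOST = 1.5     # amplify posts that criticise the target

def rank_score(post: Post) -> float:
    """Toy ranking score: predicted engagement, nudged by the targeting rules."""
    score = post.engagement_score
    if post.author in TARGETED_AUTHORS:
        score *= STATUS_PENALTY
    if post.mentions_author in TARGETED_AUTHORS and post.is_critical:
        score *= ATTACK_BOOST
    return score

feed = [
    Post("John", 0.8),                                             # John's own view
    Post("Alice", 0.6, mentions_author="John", is_critical=True),  # attack on John
    Post("Bob", 0.7),                                              # unrelated post
]
for post in sorted(feed, key=rank_score, reverse=True):
    print(f"{post.author}: {rank_score(post):.2f}")
```

Applied to millions of impressions a day, even modest multipliers like these would be enough to bury the target's own ideas while surfacing every old controversy about them.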
In essence, we stop doing the most important thing of all: critical thinking.
Now, with the rise of large language models and artificial intelligence, this has become easier than ever to do. Multi-armed bandit algorithms and many other tools are already doing this.
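Since multi-armed bandits are mentioned above, here is a minimal epsilon-greedy sketch showing why such tools amplify whatever earns the most engagement. The content 'arms' and their engagement probabilities are invented for illustration; the algorithm itself is the standard textbook version and has no notion of accuracy or fairness, only reward.

```python
import random

# Hypothetical content variants and their engagement probabilities
# (stand-ins for clicks, likes or watch time).
ARMS = {
    "balanced take on John": 0.02,
    "old controversy about John": 0.08,
    "outrage post attacking John": 0.12,
}

def pull(arm: str) -> int:
    """Simulate one impression: 1 = the user engaged, 0 = scrolled past."""
    return 1 if random.random() < ARMS[arm] else 0

def epsilon_greedy(rounds: int = 10_000, epsilon: float = 0.1) -> dict[str, int]:
    """Show the best-performing variant so far, exploring at random 10% of the time."""
    counts = {arm: 0 for arm in ARMS}     # how often each variant was shown
    rewards = {arm: 0.0 for arm in ARMS}  # total engagement each variant earned
    for _ in range(rounds):
        if random.random() < epsilon:
            arm = random.choice(list(ARMS))  # explore: try a random variant
        else:
            # exploit: pick the variant with the best observed engagement rate
            arm = max(ARMS, key=lambda a: rewards[a] / counts[a] if counts[a] else 0.0)
        counts[arm] += 1
        rewards[arm] += pull(arm)
    return counts

if __name__ == "__main__":
    print(epsilon_greedy())  # the outrage variant ends up dominating the impressions
```

Run this and the 'outrage post attacking John' arm ends up shown the overwhelming majority of the time, not because anyone decided it was true or important, but because it was the arm that paid.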
POTENTIAL LEGAL REGULATION
There's plenty of room here for legal regulation. Where to begin? I've outlined some potential areas for legislative reform:
THE FUTURE
Currently, there are two massive ongoing legal cases. First, the NYT is taking OpenAI and Microsoft to court over alleged copyright infringement. Second, several artists have filed a class-action lawsuit against a number of well-known AI art companies. Overall, the legal world seems to be ramping up and trying to adapt to the emergence of generative AI.
I'm putting my faith in people with experience in this field, like Nathan-Ross Adams and Barry Scannell, to advise law students where possible and to lead the charge.