Using AI & LLMs to facilitate 'clown attacks'

INTRODUCTION

Most people are aware that they are being manipulated by digital platforms. Analytics, psychological data and gradient-based optimisation allow every action to be weaponised, further entrenching addiction. Most continue using these platforms regardless.

Can we blame them? Algorithms overlap so perfectly with your psyche that social media feels like a cosy room with all your favourite posters on display, and all your favourite people to talk to.

Sadly, the problem is far worse than this, and will continue to worsen.

WHAT IS A CLOWN ATTACK?

A 'clown attack' exploits the human tendency to disregard ideas from people of 'low status'. Many digital platforms utilise this tendency, training our brains to automatically avoid any thought associated with a 'low status' person in future.

You might be tempted to think that you treat all people equally and aren't falling for these 'clown attacks'. Wrong.

If I asked you whether you followed (insert controversial politician) or (insert right- or left-wing celebrity), what would you say? If I asked why you hold a particular position on that person, what would your answer be? You might try to justify your answers and insist that you alone decided to build that echo chamber.

HOW DO CLOWN ATTACKS WORK?

Hopefully you now see where digital platforms are going. Visualise this: a room filled with psychologists and data analysts. They're fed raw data from millions of scrolls, comments and likes, and they analyse where best to fiddle with trends. Suppose they decide they don't like the views of (insert journalist, who we'll call John). Fortunately for them, John posted something about (insert controversy) several years ago, or the current trend on (insert social media platform) happens to contradict him. What do they do? They promote the posts that contradict him, especially the ones that turn people against him. They make people think John is 'low status'.
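
To make the mechanism concrete, here is a minimal Python sketch of what that kind of demotion could look like inside a feed-ranking function. Every name and weight in it is hypothetical and invented for illustration; it is not drawn from any real platform's code.

```python
# Illustrative sketch only: a toy feed ranker that demotes a target author
# and boosts posts that contradict them. All fields and weights are invented.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    engagement: float         # baseline engagement, e.g. likes + comments
    contradicts_target: bool  # does this post argue against the target?

def rank_score(post: Post, target_author: str) -> float:
    score = post.engagement
    if post.author == target_author:
        score *= 0.2  # quietly demote the target's own posts
    if post.contradicts_target:
        score *= 3.0  # heavily promote posts that attack the target
    return score

feed = [
    Post("John", 120.0, False),
    Post("Critic", 40.0, True),
    Post("Neutral", 80.0, False),
]
for p in sorted(feed, key=lambda p: rank_score(p, "John"), reverse=True):
    print(p.author, rank_score(p, "John"))
```

In this toy example, John's critics outrank him even though John started with the most engagement: a few multipliers are all it takes.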

Eventually, John's important and moral views disappear. People fall for the clown attack and stop following anything John posts, because he's now considered an 'undesirable'. Our brains are trained to file any future 'John thoughts' as 'low status'.

In essence, we stop doing the most important thing of all: critical thinking.

Now, with the rise of large language models and artificial intelligence, this has become easier than ever before. Multi-armed bandit algorithms and many other tools are already doing this.
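
For readers unfamiliar with the term, a multi-armed bandit is an algorithm that balances trying new options ('exploring') against repeating whatever already earns the most engagement ('exploiting'). Below is a minimal epsilon-greedy sketch; the click rates and names are invented, and real recommender bandits are contextual and far more elaborate.

```python
# A minimal epsilon-greedy multi-armed bandit, the family of algorithm the
# article alludes to. Each "arm" is a candidate post; the reward is a click.
# Purely a sketch: production recommender bandits are contextual and larger.

import random

class EpsilonGreedyBandit:
    def __init__(self, n_arms: int, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_arms    # times each arm has been shown
        self.values = [0.0] * n_arms  # running mean reward per arm

    def select_arm(self) -> int:
        # Explore with probability epsilon; otherwise exploit the best arm.
        if random.random() < self.epsilon:
            return random.randrange(len(self.counts))
        return max(range(len(self.values)), key=lambda i: self.values[i])

    def update(self, arm: int, reward: float) -> None:
        # Incremental update of the chosen arm's mean reward.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Hypothetical click rates: arm 1 (say, an outrage-bait post) performs best,
# so the bandit learns to show it almost exclusively.
true_click_rates = [0.05, 0.30, 0.10]
bandit = EpsilonGreedyBandit(n_arms=3)
for _ in range(5000):
    arm = bandit.select_arm()
    reward = 1.0 if random.random() < true_click_rates[arm] else 0.0
    bandit.update(arm, reward)
print(bandit.counts)  # arm 1 dominates after a few thousand impressions
```

The point is that no human needs to decide what you see: the algorithm converges on whatever maximises engagement, for better or worse.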

POTENTIAL LEGAL REGULATION

There's plenty of room here for legal regulation. Where to begin? I've outlined some potential areas for legislative reform:

  1. Expert committees and forums: we need to inform local municipalities, and then upper-echelon politicians, on the nuances of technology, LLMs and AI. They can't regulate something they aren't even aware of.
  2. Technology restrictions and data parsing: there need to be modifiers and tag words attached to certain kinds of data to prevent them from being openly manipulated and sold. Private data gathered from earbuds, Wi-Fi signals and handheld devices should carry 'gag orders' that protect it from use by multi-armed bandit algorithms (a toy sketch of how such tagging could work follows this list).
  3. Stay informed: the simplest method is to stop consuming the same kinds of information as everyone else, and to recognise when the posts you're being recommended are actively distorting how you view the world and the decisions you make. In essence, the age-old adage: take a break from social media, perhaps forever.
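
As promised in item 2, here is a purely hypothetical Python sketch of what a machine-readable 'gag order' tag might look like. The tag string, source names and filtering step are all assumptions made for illustration; no such standard currently exists.

```python
# Hypothetical sketch of the 'gag order' tagging idea from item 2: records
# from sensitive sources carry a tag that downstream optimisers must honour.
# The tag string, source names and API are invented for illustration only.

from dataclasses import dataclass, field

RESTRICTED_SOURCES = {"earbuds", "wifi_signal", "handheld_sensor"}
GAG_TAG = "gag_order:no_bandit_training"

@dataclass
class DataRecord:
    source: str   # where the data was captured
    payload: dict # the raw signal or measurement
    tags: set = field(default_factory=set)

def apply_gag_orders(records: list[DataRecord]) -> list[DataRecord]:
    """Tag records from restricted sources so optimisers must exclude them."""
    for record in records:
        if record.source in RESTRICTED_SOURCES:
            record.tags.add(GAG_TAG)
    return records

def bandit_training_feed(records: list[DataRecord]) -> list[DataRecord]:
    """Only records free of gag-order tags may reach the recommender."""
    return [r for r in records if GAG_TAG not in r.tags]

records = apply_gag_orders([
    DataRecord("earbuds", {"ambient_audio": "..."}),
    DataRecord("public_post", {"text": "hello"}),
])
print([r.source for r in bandit_training_feed(records)])  # ['public_post']
```

Of course, a tag is only as strong as the law compelling platforms to respect it, which is exactly why items 1 and 2 go together.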

THE FUTURE

Currently, there are two massive ongoing legal cases. First, the NYT is taking OpenAI and Microsoft to court over alleged copyright infringement. Second, several artists have filed a class-action lawsuit against several well-known AI art companies. Overall, the legal world seems to be ramping up and trying to adapt to the emergence of generative AI.

I'm putting my faith in people like Nathan-Ross Adams and Barry Scannell, who have experience in this field, to advise law students where possible and to lead the charge.

SOURCES

  1. For more on the concept of clown attacks, see the post that inspired this one: https://www.lesswrong.com/posts/mjSjPHCtbK6TA5tfW/ai-safety-is-dropping-the-ball-on-clown-attacks
  2. To understand the depths technology and AI have reached, see: Center for Humane Technology co-founders Tristan Harris and Aza Raskin discuss "The AI Dilemma" (YouTube).
  3. "Artists sue AI art generators over copyright infringement" (Polygon).
  4. "OpenAI and Microsoft sued by NY Times over copyright infringement" (MSN).

Mikhael Cain - CIPM

Data Privacy Associate at Vitality Global | Attorney

10 months ago

Insightful

Peter Vickers

Legal Technology Consultant

10 months ago

This was concise and informative, thanks Liam. It's definitely worth the quick read. These are concepts everyone needs to be aware of to have a hope of thinking critically going forward.

Nathan-Ross Adams

Founder & MD @ ITLawCo | My views are my own.

10 months ago

It’s great that you’re exploring these concepts, Liam. It’s worth going deeper into research and practice in AI law. At the university level, there’s also a need for updated courses in ICT law and lecturers who have practical experience in the field.

Fortunate Kirabo

Trainee Lawyer | International Dispute Resolution | Vis Moot Coach

10 months ago

Very insightful, Liam Bolton.
