AI and Populism: an utterly toxic combination
Facts and Data > Popularity and Belief


Populism brought us Trump, Brexit, the News of the World and The National Enquirer. How did that work out?

AI is drawn from populism. It looks at the content that is out there and forms a view based on patterns. If the patterns are biased, the AI is biased.

“As these models are trained on human language, this can introduce numerous potential ethical issues, including the misuse of language, and bias in race, gender, religion, and more.”

Source: https://developers.google.com/machine-learning/resources/intro-llms
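
As a toy illustration of how "patterns in the content that is out there" become a model's view, here is a minimal Python sketch (the corpus is invented, and real language models are vastly more complex): a completion model that simply picks the most frequent continuation will repeat whatever is said most often, whether or not it is true.

```python
from collections import Counter

# Invented toy corpus: the incorrect claim appears more often than the
# correct one (an example this article returns to later).
corpus = [
    "scrum has ceremonies",
    "scrum has ceremonies",
    "scrum has ceremonies",
    "scrum has events",
]

# A one-step "language model": count the continuations of a prefix and
# return the most frequent. Frequency, not truth, decides the answer.
def complete(prefix, corpus):
    continuations = Counter(
        line[len(prefix):].strip()
        for line in corpus
        if line.startswith(prefix)
    )
    return continuations.most_common(1)[0][0]

print(complete("scrum has", corpus))  # prints "ceremonies": popular, not correct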

and as an example of what could go wrong:

“Most basically, predictive models that could be used to filter job applications through techniques of supervised machine learning run the risk of replicating, or even augmenting, patterns of discrimination and structural inequities that could be baked into the datasets used to train them.”

Source: Dr David Leslie, director of ethics and responsible innovation research at The Alan Turing Institute https://www.theguardian.com/technology/2022/jul/14/uk-data-watchdog-investigates-whether-ai-systems-show-racial-bias

and Deloitte:

“AI systems are only as good as the data we put into them.”

Source: https://www2.deloitte.com/uk/en/pages/financial-services/articles/banking-on-the-bots-unintended-bias-in-ai.html
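
To make that concrete, here is a deliberately over-simplified Python sketch of the job-application risk Dr Leslie describes. The dataset, groups and hire rates are all invented; the point is only the mechanism: a model fitted to historically biased decisions reproduces the bias it was given.

```python
# Invented "historical" hiring data: recruiters hired group A far more
# often than group B, despite identical qualification scores.
historical = [
    # (group, qualification_score, hired)
    ("A", 7, True), ("A", 6, True), ("A", 5, True), ("A", 4, False),
    ("B", 7, False), ("B", 6, True), ("B", 5, False), ("B", 4, False),
]

# "Train": learn the historical hire rate per group.
def train(rows):
    counts = {}
    for group, _, hired in rows:
        hires, total = counts.get(group, (0, 0))
        counts[group] = (hires + hired, total + 1)
    return {g: hires / total for g, (hires, total) in counts.items()}

# "Predict": a frequency-driven model happily uses group as a feature.
model = train(historical)
for group in ("A", "B"):
    print(f"Equally qualified candidate from group {group}: "
          f"predicted hire probability {model[group]:.0%}")
# Output: group A 75%, group B 25% -- bias in, bias out.
```

Real systems are far more complex, but the failure mode is the same: if group membership correlates with historical outcomes, a model optimised to predict those outcomes will learn that correlation.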


We've seen this go badly wrong already. Several times:

“[HireVue’s] method massively discriminates against many people with disabilities that significantly affect facial expression and voice: disabilities such as deafness, blindness, speech disorders, and surviving a stroke.”

Source: https://ainowinstitute.org/publication/disabilitybiasai-2019


As well, of course, as inventing data just to look good:

Lawyer who cited cases concocted by AI asks judge to spare sanctions

Source: https://www.reuters.com/legal/transactional/lawyer-who-cited-cases-concocted-by-ai-asks-judge-spare-sanctions-2023-06-08/

It's an utterly toxic combination that leads to bad results. Bad data = bad outcomes. Populism (the frequency of a topic) is a poor proxy for truthfulness and utility. Authority and credibility, whilst still imperfect (even Einstein got it wrong with his cosmological constant), are likely to be more correct, including where credible experts differ and we're able to see emergent information from both sides.

We need to move away from frequency and back to authority. Frequency is what brought us link spam farms to game SEO. Authority is what happens when we reference peer-reviewed papers from a journal, or empirical, unbiased data (e.g. on whether people actually want to go back to the office), rather than puff pieces by people with a stake in the matter. Let's not fall into the frequency trap set by people who get lots of media coverage because they are well connected, have a marketing budget and thrive on being controversial to increase their exposure. These are the people who have little credible data to back up their arguments.


This is exactly what worked for Google when they developed the first version of their search engine, and what set them apart from their peers. Better results. Faster results. Simpler interface. The better results came about because the trustworthiness of a source was ranked by links from other pages, drawing on earlier work on citation analysis of academic papers.
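
As a rough sketch of that link-based authority idea, here is a minimal PageRank-style power iteration in Python. The link graph, damping factor and iteration count are my own illustrative assumptions, not Google's production algorithm.

```python
# Minimal PageRank-style power iteration: a page's authority comes from
# being linked to by other pages, weighted by those pages' own authority,
# rather than from raw frequency. The tiny link graph below is invented.

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with equal authority
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:  # dangling page: share its rank with everyone
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

# "journal" is cited by the others; the two "spam" pages only cite each other.
links = {
    "journal": ["blog"],
    "blog": ["journal", "news"],
    "news": ["journal"],
    "spam": ["spam2"],
    "spam2": ["spam"],
}
for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```

Even this toy graph hints at the weakness described next: the well-cited "journal" ranks top, but the two spam pages that only link to each other still capture a noticeable share of rank, which is exactly what link farms exploited.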

Which is fine until you get to populism and spamming. Truthfulness is not a democracy. The first person to discover an idea and validate it is correct, even though that opinion may only be held by a small number of people globally. Take Darwin's theory of evolution, or Copernicus's model in which the Earth is not at the centre of the universe. Both ideas met with considerable resistance at the time from religion-led thinking that did not avail itself of constructive, evidence-led deduction. Being in a minority does not make you wrong.


Yet we are plagued by populist, AI-led information which reinforces bias.

I see this a lot on LinkedIn as a top agile leadership voice:

[Image: LinkedIn profile badge]

I'm asked to collaborate on AI-led articles. Most of these typify the sort of fake, "cut and paste", populist agile that doesn't draw on referenced, authentic sources.

You can see my corrections in "scrum does not have ceremonies" and also in "agile is not a methodology". This incorrect information originally arose from biased data. When referencing Scrum, the definitive reference is the Scrum Guide's terminology of "events". Sure, as an organisation you can adopt your own local terms if that works for you, but those terms don't work as a global reference and it is misleading to present them as if they do.

AI needs to be better than this: it should give more weight to referenceable facts rather than running a popularity contest.

Remember: 17,410,742 people voted for Brexit and 61,201,031 people voted for Trump. Good outcomes are not a popularity contest! "Popularity" on the web, and the higher ranking it earns in AI algorithms, is no better than vote stuffing at the ballot box.

How do we get referenceable authority?

  1. Validate people to prevent fake identities - LinkedIn is making some headway by validating that profiles are genuine. Some other platforms (e.g. Twitter) require the upload of referenceable government ID.
  2. Look for patterns. I called this out here. Recruiters are on LinkedIn because it's a great place to find candidates. Why would a recruiter not be on LinkedIn? Why would a whole company have "recruiters" who don't show up on LinkedIn or on the company page, or who have pictures identical to those of people with different names? That looks fake to me (see the sketch after this list). Why would their profiles not be specific enough to validate against trustworthy sources such as university graduation records?
  3. Use practices that can overcome bias and heuristics. Red Team Thinking can help here.
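
As a small illustration of the pattern-matching in point 2, here is a minimal Python sketch that flags byte-identical profile photos reused under different names. The profile names and file paths are hypothetical, and real fraud detection would need perceptual hashing, metadata checks and much more; the principle is the same.

```python
import hashlib
from collections import defaultdict

# Hypothetical profile photos gathered for review. Byte-identical images
# attached to different names are a strong "looks fake" signal.
profiles = {
    "Alice Smith": "photos/alice.jpg",
    "Bob Jones": "photos/bob.jpg",
    "Carol White": "photos/carol.jpg",
}

def photo_fingerprint(path):
    """Hash the raw image bytes; identical files share a fingerprint."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Group profile names by the fingerprint of their photo.
by_photo = defaultdict(list)
for name, path in profiles.items():
    by_photo[photo_fingerprint(path)].append(name)

for fingerprint, names in by_photo.items():
    if len(names) > 1:
        print(f"Same photo, different names: {names}")
```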


Authenticity and patterns of credibility are necessary to validate whether the information you are dealing with is invented, AI-generated or even fraudulent.

Contact me - I may be able to help.

Craig (Merit in Artificial Intelligence, AI Dept, Edinburgh University)

Article written by hand, errors all mine!


Michael Wagener (He/Him)

Guiding Teams to Achieve Meaningful Results and Long-Term Success through Agile Leadership

1y

AI can be used for both good and bad, and there is a proliferation of fake news happening via social media that is deeply concerning. BTW, I watched a video from the Legal Eagle on YouTube about the lawyers that used ChatGPT for their research in a New York law case - they didn't even fact-check it before presenting it to the court, so I am not sure that sympathy is due: https://youtu.be/oqSYljRYDEM?si=otu5dQ6dmR5eoxJ8 Even apart from AI, it is becoming much harder to determine fact from fiction in a world where social media is frequently used as a primary source of news with no filters or checks for accuracy and source. This was discussed recently in a program I watched from ABC in Australia called Media Watch, presented by Paul Barry (YouTube). One solution might be to implement proof checkers through firmware, as suggested by Max Tegmark at a recent TED event: https://youtu.be/xUNx_PxNHrY?si=LKO2-Mi212Patsdo. Regulation, too, is a good idea. I am aware that Biden's recent Executive Order on AI regulation has drawn criticism for potentially favoring industry giants over the open-source community, so any legislation that is introduced will need to be well-formed and guided by trusted members of the scientific community.
