Using AI to try and find the truth

Unless you have been living under a rock, you no doubt read my gripping and exciting primer on the subject of Epistemology last week. Many people (albeit fewer) are also familiar with the Olympics and the various controversies that have been making the rounds. Both are very exciting subjects that provoke strong, uninformed opinions; let's mix 'em together!

The example I will use here is women's boxing. For a minute, there was a lot of outrage among those with little information and strong opinions about the sex/gender/gender expression/endocrinology/physical advantage/bathroom preferences/etc. of two Olympic women's boxing participants. For one participant, there were competing stories -- was she born female or male? Does she have an XY or XX chromosome pattern? Does she have an endocrine disorder? Was she doping with male hormones? What gender tests were performed on her in the past, and what were the outcomes? What informed the IOC's decision to let her participate in the 2024 Olympics? All of these questions were floating around, and the answers I found depended heavily on where they were coming from.

In fact, for a while, it seemed impossible to find a straight answer. I found plenty of opinions penned as facts stating that she was born a boy, others that she had XY chromosomes, others that she had failed several gender tests. A lot of outrage about the idea of a man entering a women's sport to beat up women. A lot of outrage about the Italian who dropped out after receiving just a few punches. I also found many articles with a similar foundation in reality arguing that she is whatever her truth is, that gender is fluid and complex, articles about some bug or fish that is hermaphroditic, and all sorts of gender-politics articles. It was a long time before any actual facts emerged, and even now the whole thing is very loaded.

And I think part of why it is loaded is (of course) the power of confirmation bias. Few things satisfy the human mind as much as a prediction that comes true. This is why movies like "Knives Out" are so popular -- they set the bar nice and low and make us feel smart because we could see it all coming.

Aren't we super smart that we saw this whole thing a mile away?

So we find all sorts of evidence to feel like we were right, ignore all sorts of evidence that might disconfirm our guess, and create an echo chamber for ourselves.

Currently, every AI algorithm in major public use does the same thing. This is the power of monetized clickbait and personalized social media streams. AI at work in Meta, X, and even Google is satisfying us. It is also making us awful people, and stupider.

Stupider because we get more entrenched in our core ideas with less and less foundation for them. We hold our opinions more strongly because learning works through repetition and reinforcement, and these algorithms keep showing us the same things and making us feel good about them. The result is predictable and obvious -- "alternative facts" and completely different views of the world.

None of what I'm saying here is an original thought any more.

But here is where AI might be able to help as well. In Intelligence (like, spy-stuff intelligence) they talk about "veracity" -- is the source of the information credible? The really good teams almost perform a sort of tensor analysis: they create a multivariate signal out of the elements of the information. This includes the vector information (what direction is the signal pointing, how strong is the signal) and information about the source of the signal (is the source generally believable, is the source unbiased on this topic, etc.). Put it all together and you have something like a tensor operator. If you take lots of these from lots of places, you can effectively analyze the signal that results.
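To make that concrete, here is a crude sketch of the idea (mine, not actual intelligence tradecraft -- every name and number below is invented for illustration): represent each report as a claim direction plus a strength, weight it by the source's general credibility and its bias on this topic, and aggregate everything into one signal.

```python
# Hypothetical sketch: weighting conflicting reports by source veracity.
# All sources, scores, and numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class Report:
    claim: float        # direction of the claim: +1 supports, -1 refutes
    credibility: float  # 0..1, how believable the source is in general
    bias: float         # 0..1, how biased the source is on this topic

def aggregate(reports):
    """Combine reports into one weighted signal in [-1, 1]."""
    num = den = 0.0
    for r in reports:
        weight = r.credibility * (1.0 - r.bias)  # biased sources count less
        num += weight * r.claim
        den += weight
    return num / den if den else 0.0

reports = [
    Report(claim=+1.0, credibility=0.9, bias=0.1),  # careful outlet, supports
    Report(claim=-1.0, credibility=0.3, bias=0.8),  # partisan source, refutes
    Report(claim=+0.5, credibility=0.6, bias=0.3),  # mixed report, weakly supports
]
print(aggregate(reports))  # a positive signal: the credible sources dominate
```

The point is not the particular formula -- it is that once you score each source's believability and bias separately from the claim itself, conflicting reports stop being a shouting match and become something you can actually compute over.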

This is something AI can do, and do really well. In fact, this is sort of the guts of AI. Most AI descriptions leave a big black box in the middle:

https://www.edushots.com/Machine-Learning/unsupervised-machine-learning-overview

A big part of what happens in that middle box is that the algorithm is testing a nearly infinite solution space for predictions that score higher than others, then narrowing and improving itself based on the data and the scoring approach.

Hypothetically, AI could do a very good job of identifying what is "true" if it has a) a concept of veracity and b) a better concept of "true". This brings us back to the Epistemology problem from my last article.
