How Much Can I Really Use AI, and Should I?

An FT article from 31 October discussed issues that may come with the widespread adoption of AI at work and how it could change the way we communicate, looking at the trade-off between efficiency for the user and the trustworthiness and impact of the output. One idea particularly resonated with me for the research I do: “computer generated words carry less weight”.

ChatGPT’s usefulness to my research is currently limited to helping me gather further context or definitions around a subject I already have some understanding of. The reason I don’t use it more often is that I can’t always trust its answers, and if I have to go back and double-check what it tells me against other sources, I may as well not have used it in the first place. To illustrate how fundamentally wrong it can be: I once experimented with asking it to summarise a client’s activities, and it tried to tell me that a client in the sports industry was actually a property company. I need to actually know information, not just think I know it.

One test of whether we really know something is to check whether it is a justified, true belief. For example, I know that the sun rose this morning: it is true that it rose, I believe that it rose, and I am justified in that belief because I saw it happen. You can argue that a ChatGPT response falls down on all three pillars. We know it does not always give us true answers, which in turn undermines our ability to believe them and to justify that belief. The difficulty of getting at the sources that feed into AI tools like ChatGPT 3.5 compounds the problem of justification, and the risk of bias in the responses adds a further troubling dimension [link]. That said, most ‘opinion’ questions I have put to it return a range of factors and a reminder that opinion is subjective, in lieu of an answer.

The rapid take-up of AI-generated content prompts a re-evaluation of the mainstream edited press as a source. Its value is not that I can uncritically adopt the views expressed there; rather, I trust the quality of the information and sources it provides far more than many of the alternatives. We have already seen how information that is processed, filtered and mutated through social media can lead to the spread of disinformation and fake news, and there is a risk that generative AI will compound the issue as a new, flawed and widely used source. If you ask ChatGPT, it states that it does not use social media posts as a source for its responses, but the National Cyber Security Centre suggests that some LLMs do, which risks creating a feedback loop of unreliable information.

In a scenario where AI tools join social media in dominating how we search for, learn and share information, I see an incredibly important role for the mainstream edited press in preserving a more reliable and trustworthy source. Before the nightmarish spiral of social-media-fuelled disinformation that has been growing worse since 2016, I held a loose background assumption that traditional news outlets were declining in relevance and would eventually fade out of general use. The mainstream edited press is not perfect, but its value to me is that it does a much better job of satisfying the justified, true belief conditions than the alternatives.
