Emily M. Bender's post

Emily M. Bender reposted

Professor, Linguistics at University of Washington // Doesn't read messages on LinkedIn -- see website for email

As OpenAI and Meta introduce LLM-driven searchbots (on top of Bing and Google already splashing "AI" all over search), I'd like to once again remind people that neither LLMs nor chatbots are good technology for information access.

Chirag Shah and I wrote about this in two academic papers:
2022: https://lnkd.in/g8Pswfe3
2024: https://lnkd.in/gNcSqQXZ
We also have an op-ed from Dec 2022: https://lnkd.in/grBHJRWB

Why are LLMs bad for search? Because LLMs are nothing more than statistical models of the distribution of word forms in text, set up to output plausible-sounding sequences of words. https://lnkd.in/gJXVDGf3

If someone uses an LLM as a replacement for search, and the output they get is correct, this is just by chance. Furthermore, a system that is right 95% of the time is arguably more dangerous than one that is right 50% of the time. People will be more likely to trust the output, and likely less able to fact check the 5%.

But even if the chatbots on offer were built around something other than LLMs, something that could reliably get the right answer, they'd still be a terrible technology for information access. Setting things up so that you get "the answer" to your question cuts off the user's ability to do the sense-making that is critical to information literacy. That sense-making includes refining the question, understanding how different sources speak to the question, and locating each source within the information landscape.

Imagine putting a medical query into a standard search engine and receiving a list of links including one to a local university medical center, one to WebMD, one to Dr. Oz, and one to an active forum for people with similar medical issues. If you have the underlying links, you have the opportunity to evaluate the reliability and relevance of the information for your current query --- and also to build up your understanding of those sources over time. If instead you get an answer from a chatbot, even if it is correct, you lose the opportunity for that growth in information literacy. The case of the discussion forum has a further twist: any given piece of information there is probably one you'd want to verify from other sources, but the opportunity to connect with people going through similar medical journeys is priceless.

Finally, the chatbots-as-search paradigm encourages us to just accept answers as given, especially when they are stated in terms that are both friendly and authoritative. But now more than ever we all need to level up our information access practices and hold high expectations regarding provenance --- i.e. citing of sources. The chatbot interface invites you to just sit back and take the appealing-looking AI slop as if it were "information". Don't be that guy.

Update 11/5: I created a newsletter post about this topic, with responses to some of the frequent replies: https://lnkd.in/gEMEZ2Tn
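To make the "statistical models of the distribution of word forms" point concrete, here is a minimal sketch of next-token sampling, the core loop of a language model. The context string and the probabilities below are invented for illustration and do not come from any real model.

```python
import random

# A language model assigns probabilities to candidate next word forms
# given the context, then samples from that distribution. Nothing in
# this loop consults a source of record; it only encodes plausibility.
# These numbers are made up for illustration, not taken from a model.
next_token_probs = {
    "Paris": 0.62,      # plausible and happens to be correct
    "Lyon": 0.21,       # plausible but wrong
    "Marseille": 0.12,  # plausible but wrong
    "pizza": 0.05,      # implausible
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Sample one continuation; truth is never checked, only likelihood."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

context = "The capital of France is"
print(context, sample_next_token(next_token_probs))
# A correct answer is the statistically likely outcome here, not a
# guaranteed one -- correctness "just by chance", as the post puts it.
```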

Situating Search | Proceedings of the 2022 Conference on Human Information Interaction and Retrieval


Tad Davis

Problem Solver

3 weeks

LLMs are incredibly useful for conversational responses, summaries, and creative tasks, but for fact-specific information searches, search engines often provide a clearer path to source-verifiable data. I use both. LLMs generate responses tailored to the specific question or prompt, typically without displaying unrelated or sponsored content. This focus on directly addressing the query can feel like a form of sensory gating, as the model works to provide a coherent, concise answer without external distractions. Search engines, on the other hand, index vast amounts of information and often surface ads, promoted links, and results that may not be directly relevant. Paid placements, SEO optimization, and ranking algorithms sometimes prioritize certain results, which can dilute relevance.

Olga Imas

Professor of Biomedical Engineering at Milwaukee School of Engineering

3 weeks

These search systems are no longer just LLMs. They are now RAG+LLM and are starting to work quite well for information retrieval. Just like with everything else, the old-school "check and verify" approach is needed. But hasn't that always been the case when it comes to information retrieval?

Douglas K.

UX & Product Design

3 weeks

I'm not an expert on LLMs, but I have been part of the human experience for quite some time now. I think your argument around "information literacy" is a relevant and important one. However, speaking from my own biased experience, I don't think information literacy is what most people value, unfortunately. I see people favoring speed and convenience over literacy, which is what AI/LLMs seem to provide. Literacy is being attacked on several fronts, and I feel that hitting rock bottom (whatever that is) is going to be necessary to wake us from this drunken stupor that consumes us. It could be decades before that happens...

Stefano Benatti

AI Innovation Leader | Building a new world with strategic technologies & team excellence

3 weeks

I really disagree here. The way that both Perplexity and OpenAI do search is not by simply using LLMs with trained data. Instead, they do the usual web crawl (the same one a Google search draws on) and leverage LLMs to summarize the results. This is much more like RAG (retrieval-augmented generation), and it lets them link the results to the actual source of the information, so the answer can be double-checked both automatically and by the human behind the wheel. To me this yields exactly what a "shallow Google search" would, but in far less time. And it is well known that anything on the second page of Google is already "dead" as far as information discovery goes; very few human beings venture there. Some studies put it at less than 1%, so what they launched is a better way to do >99% of information discovery in a fraction of the time. In particular, current SEO is badly broken, as it heavily favors large companies over correct information, as The Verge reported: https://www.theverge.com/2024/5/2/24147152/google-search-seo-publishing-housefresh-product-reviews I only hope that AI search will not EVER adopt an ad model that skews results away from accurate data.
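A minimal sketch of the retrieve-then-summarize pattern this comment describes. The corpus, the keyword-overlap scoring, and the summarizer below are stand-ins invented for illustration; a real system uses a web-scale index and an actual LLM call, not these toy functions.

```python
from dataclasses import dataclass

@dataclass
class Document:
    url: str
    text: str

# Toy corpus standing in for a web index (illustrative data, not real pages).
INDEX = [
    Document("https://example.edu/med-center", "university medical center guidance on treatment"),
    Document("https://example.com/forum", "patient forum discussion of symptoms and treatment"),
    Document("https://example.org/blog", "unrelated lifestyle blog post"),
]

def retrieve(query: str, top_k: int = 2) -> list[Document]:
    """Stand-in for the conventional crawl-and-rank step: naive keyword overlap."""
    words = query.lower().split()
    scored = [(sum(w in doc.text for w in words), doc) for doc in INDEX]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def summarize_with_llm(query: str, docs: list[Document]) -> str:
    """Stand-in for the LLM summarization call; a real system prompts a model here."""
    return f"Summary of {len(docs)} retrieved pages for {query!r}"

def rag_search(query: str) -> dict:
    """RAG: the answer is generated from retrieved pages, and the source URLs
    travel with it so the human behind the wheel can double-check them."""
    docs = retrieve(query)
    return {"answer": summarize_with_llm(query, docs), "sources": [doc.url for doc in docs]}

print(rag_search("medical treatment guidance"))
```

The design point at issue: whether returning the source URLs alongside a generated summary preserves the sense-making the original post argues for, or merely invites users to skip it.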

Armin Biermann

Armin Biermann Consulting

3 weeks

We are too focused on the question of what we can do with AI. We should be more focused on the question of what AI does with us. Losing (or no longer acquiring) information literacy is just another not-so-tiny step into what I call "Digital Invalidity", which is far more than the "Digital Dementia" that results from too-early and too-intense use of digital media by children and teenagers.

Unfortunately, enough people seem not to care about having their search results contaminated (weaponized, even) with garbage and outright falsehoods that "free search" seems likely to get worse before it gets better. High-quality information (like clean water and air) will be something we have to pay to consume, sadly.

Respectfully, while I appreciate your caution regarding LLMs and their role in search and information access, I think it's essential to consider the actual strides made in this space. OpenAI’s approach, among others, has evolved past the generalizations and limitations you’re referring to. Yes, early iterations had their challenges, but dismissing them outright overlooks significant advancements that have been well-documented in newer research and technological applications. To balance the discourse, have you explored OpenAI's latest search capabilities or tested their utility in practical scenarios? The results might surprise you. I’d be curious to know what you found if you have—and how that stacks against your published insights.

Graham Lovelace

Charting the impacts of generative AI on human-made media | Writer | Strategist | Keynote Speaker (on generative AI and its impacts) & Media Event Producer

3 weeks

This is brilliant. And love this quote: "A system that is right 95% of the time is arguably more dangerous than one that is right 50% of the time. People will be more likely to trust the output, and likely less able to fact check the 5%." It'll be used in my weekly newsletter, out this Friday!

Dave Mackenzie

Digital Leader driving transformation, growth and future ways of working through emerging tech, empowered teams and inclusivity.

3 weeks

The technology is the worst it will ever be right now, and I think it's worth acknowledging the step change that has occurred with LLMs and the like. I find this post sits at one extreme, which is as bad as the posts at the other end. We should not be focusing on the technology but on the problems that can be addressed; this framing feels so reductive. I recently spoke with someone who speaks English as a second language; they have used LLMs to improve their writing, help them get a new job, and more. What you describe as slop may actually be very helpful for some. It's all about perspective and the specific use case.
