A Factor Conversation About AI: Part Two

Part One of this conversation can be found here.

Bob: You just described this nuanced point of view, with which I think I agree, but the general populace is completely poisoned against all AI because of all the horrible LLM stuff that has happened. On social media, everyone is saying, "#$*% AI! I don't want your AI tools in my software. I don't want you anywhere near my generated art. #$*% your copyright infringement machine." The general mood doesn't have this nuanced view. It has been poisoned by the completely unasked-for distribution of LLMs into every product and piece of software we buy.

Sherrard: The general tenor is against AI. I think that's true.

Bob: And it's breaking search. General web search has gotten so much worse because half the results we get are AI-generated garbage. You and I, and the National Institute for AI Research, could have as many nuanced views as we like, but what gets pushed in our faces all the time terrifies me, and we're information science professionals. I just bought a new laptop. It's going to come with an AI system that I don't want. Can I turn it off? I don't know.

Sherrard: At the same time, if you look at the actual usage numbers and the stories that are out there, and I'm not saying this is a good thing, people are using LLMs in their work, often covertly, in large numbers. And not only that, but people are also placing broad trust in these LLMs. There's tons of research showing that just the interface of a chatbot lowers the user's skepticism and critical thinking in the process.

Bob: It takes advantage of our tendency to anthropomorphize everything. Just because it makes human-sounding sentences, people trust that it's human speech or human-curated speech. It's like a tire company making a mascot out of one of its tires by painting a smiley face on it. That's the degree of anthropomorphizing that's actually happening. We are so deeply programmed to trust and anthropomorphize anything. It's a huge problem if you have a chat interaction and you can't tell whether it's with a person or an LLM.

Sherrard: I think the answer to that is twofold. There's the concern you have about the sheer environmental consequence of the energy usage of LLMs, and the question of whether they are even the right solution to an information problem, just by virtue of that cost.

Bob: A known-item search query can be run for pennies by comparison.

Sherrard: Then there's the other question, about authenticity and/or authority control and accuracy. As for authority control, we'll need to be able to convincingly sell people on the value of proper maintenance and governance for a whole infrastructure of hybrid-architecture technologies that serve as guardrails to provide it.

Bob: Why do we base our actions around it, instead of standing up and just saying no to LLMs? Just say no! Don't be like, "Oh, we have to learn to coexist with the thing that is totally destroying all of us." Let's not do that. Let's embrace other kinds of AI that are useful at work and aren't complete disasters in environmental, ethical, and information spaces. Why the inevitability? Yes, they exist. Yes, they're out there. Maybe they're not going away. But saying, "We have to come to terms" doesn't seem right.

Sherrard: I think the reason I come at it from this perspective is that I've been steeped in the world of knowledge graphs, open data, and linked data for so long. And this may be personal bias, but it seems like there has never been a greater use case for linked data and open linked data, and for having large discussions about information governance. In my mind, we should be shifting the discussion away from the LLM as the end-all, be-all of AI. To me, it should serve mostly as an interface between people and structured content.

Bob: But it doesn't know how to extract information from structured content. It just makes up answer-y-looking stuff.

Sherrard: No, but you can. You can create guardrails around the original large language model, which is inflexibly huge and not something you can update easily. Those guardrails use a lot less energy than the LLM itself, but they serve to update the prompts so that the LLM can provide more accurate, useful information.
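A minimal sketch of the guardrail pattern Sherrard describes, assuming a public SPARQL endpoint (Wikidata's, here) as the structured source. The function call_llm is a hypothetical stand-in for whatever model client you use, not a real API:

import requests

WIKIDATA_SPARQL = "https://query.wikidata.org/sparql"

def fetch_facts(sparql_query):
    # Query a linked-data endpoint and return the result rows.
    response = requests.get(
        WIKIDATA_SPARQL,
        params={"query": sparql_query, "format": "json"},
        headers={"User-Agent": "guardrail-sketch/0.1"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["results"]["bindings"]

def call_llm(prompt):
    # Hypothetical stand-in: plug in your actual model client here.
    raise NotImplementedError

def answer_with_guardrails(user_question, sparql_query):
    # Pull authority-controlled facts first, then constrain the model
    # to answer only from those facts.
    facts = fetch_facts(sparql_query)
    prompt = (
        "Answer the question using ONLY the facts below. "
        "If the facts are insufficient, say so.\n\n"
        "Facts: " + repr(facts) + "\n\n"
        "Question: " + user_question
    )
    return call_llm(prompt)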

Bob: I think the query-parsing technology is the most interesting part of it. The natural-language query parsing is super impressive. If you could attach that to something that provides real information instead of statistically generated garbage, that would be great. But otherwise, it seems like strapping the Empire State Building to the front of your car because your bumper fell off.
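And a sketch of the inverse arrangement Bob points at, reusing the fetch_facts and call_llm helpers from the sketch above: the model acts only as a natural-language query parser, and the answer comes verbatim from the structured store.

def parse_to_sparql(user_question):
    # The LLM acts purely as a natural-language query parser.
    prompt = (
        "Translate this question into a single SPARQL query against "
        "Wikidata. Return only the query, with no prose.\n\n" + user_question
    )
    return call_llm(prompt)

def answer_from_store(user_question):
    # The model never writes the answer text; the facts come verbatim
    # from the authority-controlled endpoint.
    return fetch_facts(parse_to_sparql(user_question))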

Sherrard: I think you and I are actually coming from the same place. We have similar levels of concern, and I think we have similar ideas about how to approach it. We both see that the trajectory we're currently on is unsustainable for several reasons: not only is it energy intensive, it's also not particularly good! When I look at this situation, I see opportunities to build effective, common infrastructure to support these LLMs as a natural-language interface, rather than having them try to do it all.

Bob: Yeah, I don't know. I still don't like this narrative of inevitability.
