AI - Sometimes it just says things that aren't true.

How did this happen?

How did Bing just beat Google to the punch so dramatically at something that's so important and so core to Google's business?

Well, there's actually a really good reason for it. AI has been blowing up lately, both in the news and in real-life applications across a ton of industries. Years ago it showed up only in relatively narrow things, like helping doctors detect cancer early using advanced pattern recognition, and then a little more over the years with things like autonomous vehicles.

But now AI is everywhere. It's creating whole original pieces of art.

It's holding conversations with humans all over the place. It seems like we've just arrived at the beginning of the AI age. So seeing Microsoft at the forefront of it with this new Bing shouldn't really be a surprise.

People are already talking to these chatbots and asking them all sorts of questions. So it sort of feels natural having this chatbot act as your co-pilot for the web alongside search, instead of just a traditional search engine full of links. But there is one thing that's gonna follow this conversational AI everywhere it goes, everywhere you see it, which is that sometimes it's just wrong.

Sometimes it just says things that aren't true, because fundamentally the AI doesn't know if it's telling the truth or not. It doesn't understand truth; that's just not part of the model.

What we're seeing is that it takes our inputs and then creates outputs based on related words that are most likely to go together. It's not forming a sentence the way humans do; it's generating a statistically likely new sentence. So when you add it to a search engine like Bing, it scrapes all these relevant links and pieces of information and synthesizes new sentences based purely on how it thinks things should fit together.
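
To make that "words that are most likely to go together" idea concrete, here's a deliberately toy sketch in Python. The word table and its probabilities are completely made up for illustration (this is not anyone's actual model), but the core loop is the same one these systems run at an enormously larger scale:

```python
import random

# A toy "language model": for each word, the words that might follow it,
# with made-up probabilities. Real models learn statistics like these
# from huge amounts of text; the numbers here are invented.
NEXT_WORDS = {
    "cheetahs": [("live", 0.7), ("run", 0.3)],
    "live":     [("about", 0.6), ("in", 0.4)],
    "about":    [("ten", 0.5), ("eight", 0.5)],
    "ten":      [("years", 1.0)],
    "eight":    [("years", 1.0)],
    "in":       [("the", 1.0)],
    "the":      [("wild", 1.0)],
}

def generate(prompt: str, max_words: int = 6) -> str:
    """Extend a prompt by repeatedly sampling a likely next word."""
    words = prompt.split()
    for _ in range(max_words):
        options = NEXT_WORDS.get(words[-1])
        if not options:
            break  # no statistics for this word, so stop generating
        candidates, weights = zip(*options)
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate("cheetahs"))  # e.g. "cheetahs live about ten years"
```

Notice what's missing: there is no step anywhere in that loop that checks whether the output is true. The fluency comes entirely from the probabilities.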

It's not sentient, it doesn't understand what it's saying, and so it's definitely not fact-checking itself. We have to keep that in the back of our minds through all of this, every time we see a headline.

So it's really interesting with these search engines, because on one hand you have Bing, which has everything to gain, and on the other hand you have Google, which has everything to lose.

I've had access to this new Bing for a little bit. It's a limited preview before they push it live to the rest of the world. I've just been playing around with it. Basically, it adds this chat experience alongside regular Bing. It's essentially the same experience as talking to ChatGPT, but instead of being limited to a fixed data set that cuts off in 2021, it'll pull from the entire current web that Bing can scrape from.

So like I said, you can type in a question, flip it over to chat, and it'll give you a sort of nicely written summary that's synthesized based on what it finds for similar queries.

So if I ask it something kind of simple like what's the average lifespan of a cheetah in the wild?

It gives me a convincing bunch of sentences. It actually gives me more information than I asked for; it tells me about cheetahs in captivity too, which makes it feel very convincing. It also gives little footnotes and citations for some of its sources, and it gives links at the end if you want to dig in some more. It's really impressive, actually.
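
Under the hood, what's being described is roughly a "retrieve, then rewrite" pipeline. Here's a hand-wavy sketch of that pattern; to be clear, search_web and complete are invented stand-ins with canned fake data, not Bing's actual API:

```python
# A rough sketch of the search-plus-chat pattern described above.
# search_web() and complete() are invented stand-ins, NOT Bing's real API;
# the snippets and URLs below are fake example data.

def search_web(query: str) -> list[dict]:
    """Stand-in for a real search index: returns snippets with sources."""
    return [
        {"text": "Cheetahs live 10 to 12 years in the wild.",
         "url": "https://example.com/cheetah-lifespan"},
        {"text": "In captivity, cheetahs can live up to 17 years.",
         "url": "https://example.com/cheetahs-in-zoos"},
    ]

def complete(prompt: str) -> str:
    """Stand-in for a language model: returns fluent text for the prompt."""
    return ("In the wild, cheetahs live about 10 to 12 years [1], "
            "while in captivity they can reach 17 years [2].")

def chat_answer(question: str) -> str:
    snippets = search_web(question)
    # Number the snippets so the model can cite them as footnotes.
    context = "\n".join(f"[{i}] {s['text']}"
                        for i, s in enumerate(snippets, start=1))
    prompt = (f"Answer using only the sources below, citing them as [n].\n"
              f"{context}\n\nQuestion: {question}\nAnswer:")
    answer = complete(prompt)
    # Append source links, like the citations shown under the answer.
    links = "\n".join(f"[{i}] {s['url']}"
                      for i, s in enumerate(snippets, start=1))
    # Nothing in this pipeline verifies the snippets or the rewrite.
    return f"{answer}\n\nSources:\n{links}"

print(chat_answer("What's the average lifespan of a cheetah in the wild?"))
```

The citations make the output look rigorous, but the synthesis step is still the same next-word machinery from before; nothing in the pipeline checks that the model's rewrite actually matches the sources.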

This is a real product that's gonna ship to people all over the world in the next month or two. But this could only come from Bing right now. The natural language is super, super impressive. The fact that it gives me a convincing-sounding couple of sentences in a row, strung together based on my input, is awesome.

But the more you use it, the more you start to see these weird patterns and these habits and these shortcomings. Again, mostly in the fact that sometimes it's just gonna be wrong.

A little game I like to play is to ask it a question I already know the answer to, then read what it says and spot the error. So I asked, what are the best smartphone cameras right now? And it gave me the S23 Ultra, Pixel 7 Pro, and iPhone 14 Pro Max, with a nice little writeup and some specs for each. That's actually a pretty good list, but it is wrong about some of those numbers: the S23 Ultra actually has a 200-megapixel main camera and a 12-megapixel front-facing camera.

Like, basically, the answers it gives are really convincing to someone who doesn't already know anything about the subject. But if you're already an expert in the subject you ask about, you'll find that the answers are like C-plus, maybe B-plus sometimes at best. So when you're asking ChatGPT or Bing about a factual thing or something you need help with, you should probably also layer these questions on top.

Like, am I a complete newb in this topic that I'm asking about?

Am I just willing to blindly trust whatever this spits out without any further research?

Is a B plus answer gonna be good enough for me even if it might have some possible errors in it?

You know, that might be good enough for a basic question, but maybe not good enough for planning a trip or meal planning for someone with an allergy or something like that. And then if you look around the internet, people have gotten it to give increasingly unhinged answers over time as it tries to simulate conversations and stay in the flow with natural language. We've seen everything from arguing about simple corrections to spewing weird stories about how it's spied on its own developers or how it wants to be sentient, to gaslighting people and lying about its previous answers, to just saying some straight-up scary stuff.

Like, can you imagine if Google did this? If Google Search, right at the top of the results page, was just spewing out random stories and misinformation and all kinds of insane, unhinged things? That would not fly.

Now, to be fair, this version of Bing isn't out to the public yet, so it's still in a small-group testing phase. But even so, Microsoft knew that some of it was gonna get out there and potentially go viral. It feels like they basically programmed in lots of friendly emojis to try to soften the blow.

So when it knows it's giving an answer to maybe a more controversial topic or something that it doesn't have a super clear answer for, you might get a little smiley face at the end just so you don't, you know, take it too seriously.

Also, literally as of today when I'm testing this, it started completely bailing on a lot of topics that might just be the slightest bit existential or dangerous.

It just says, hmm, I prefer not to continue this conversation. And then it just stops.

It just refuses to answer any more questions on that topic until you reset it, which seems like a pretty good failsafe, a pretty good idea in hindsight. But we've already seen the other stuff. It's gotten out there; the damage has been done.

And the point still stands: this could have only come from Bing. A lot of people might have forgotten that Google has been working on conversational AI for years. We've seen Google Assistant.

But they also literally showed an AI chatbot demo on stage at Google I/O in 2021, where you could have a whole conversation with any person, object, or anything in the universe that you wanted. Their demo on stage was asking Pluto about itself, nice and friendly.

The difference with Google is this was never shipped as a product.

Like, this was an internal research project. But the idea of displacing their massive search and ads business with a chatbot that gets things wrong all the time is insane.

It can't happen, because search and ads make up more than half of Google's revenue as a company. That's what having everything to lose looks like.

Now, to be fair, Google did hold an event in Paris the day after Microsoft's, where they talked a little more about their plans for chat-based AI in search, and they did say they're planning on eventually putting a chatbot on top of Google Search. It's called Bard.

It was much more subdued, though, and yes, it also literally had a factual mistake in the promo for it. I actually like the idea. I obviously think it's smart, when you're on the precipice of this huge AI thing, to build this co-pilot for the web that helps you around the internet. The idea of it accurately summarizing a longer piece into some bullet points would be great.

Like, the fact that it could give you the SparkNotes for a longer book you haven't read yet. It could even help you build a healthy meal plan, plan a trip, or make a purchase decision. But it's clear that we're still at the beginning of this.

Like, there are so many unanswered questions, from the obvious fact-checking problem to how search engines keep sending traffic to the publishers whose work the chatbot is scraping.

In the current Bing AI chatbot, you get the links at the bottom, but a lot of people are not gonna click those anymore if you just give them the answer above the search results. So right now, at this stage, my take is that anything we do with any of these AI tools should be a collaboration with the human touch.

Like, you wouldn't just put a query into DALL-E, take whatever it generates, put it in a frame, and call that art, right? It's more like inspiration for your own paint and canvas.

So of course you shouldn't ask the Bing chatbot what TV you should buy and then just mindlessly click and buy the first one that comes up. I mean, it could be fine, but it could also be a C-plus answer.

You should use that as a springboard for your own more informed research, especially on topics that you don't already know much about.
