Who Will Win the AI Search Arms Race?

Jackie Doherty & Ed Yardeni

Excerpt from Yardeni Research Morning Briefing (February 9, 2023)

This week, both Microsoft and Google made announcements touting their artificial intelligence (AI) prowess and plans as they battle to dominate the nascent AI niche, which some believe will be as important as the iPhone or cloud computing. Of course, they’re not alone. Venture capitalists have poured money into more than 75 startups, by some counts, that aspire to conquer this hot area.

Let’s look at what some of the leading players in AI are doing (and what some nefarious actors are undoing):

(1) Using AI to boost Bing. Microsoft plans to use AI across many of its products, starting with the paid search market. Its Bing search engine has been a perennial also-ran in the Google-dominated space; Bing’s 5% market share compares with Google’s 75%. But now, Microsoft hopes it can win users by infusing Bing with ChatGPT’s AI. Microsoft invested $1 billion in OpenAI, ChatGPT’s creator, in 2019 and reportedly another $10 billion earlier this year.

The WSJ’s Joanna Stern wrote a positive article about Bing’s new ChatGPT-powered capabilities and noted that she has started using ChatGPT to generate ideas for interview questions, emails, columns and video scripts. “This is going to help us do our jobs better, reduce some of the drudgery,” Microsoft CEO Satya Nadella told her. “I think we need a productivity boost.”

(2) Google introduces Bard. Not to be outdone, Alphabet plans to up Google’s search game with AI that it has been developing called “Bard.” It also plans to give outside developers the tools needed to build apps that use Bard. Before releasing Bard to the public, Google will ask its employees to test the service in a hackathon of sorts to ensure that Bard’s responses “meet a high bar for quality, safety, and groundedness in real-world information,” a February 6 blog post by CEO Sundar Pichai explained. Supposedly, Google’s finished product will be able to tell you how to plan a friend’s baby shower, compare two Oscar-nominated movies, or suggest menu ideas based on what food is in your refrigerator.

(3) Lots of little guys. In an area as new as AI-infused search, it’s tough to say whether market incumbents Microsoft and Google will be ousted by one of the many newcomers vying to capture the jump ball. Venture capital investment in generative AI jumped to $1.37 billion last year, according to PitchBook data cited in a February 6 Investor’s Business Daily article.

Startups AI21 Labs, Anthropic, and Cohere all are working on natural language processing models (like ChatGPT) “that help computers understand the way humans write and speak,” the IBD article states. Stability AI has developed a text-to-image generator called “Stable Diffusion.” And Anyword, ClosersCopy, Jasper AI, Copy.ai, Frase.io, WriteSonic, and CopySmith all generate content, much like ChatGPT.

It’s possible that most software applications will need to add AI to their newest versions or risk being displaced by software developers that do. Generative AI uses a huge amount of computing power, which could lead to shakeups in the semiconductor or cloud-computing markets as companies compete to make the most robust, energy-efficient, and fast offerings.

(4) DAN makes ChatGPT misbehave. ChatGPT was created with a set of rules aimed at limiting the program’s ability to generate violent content or encourage illegal activity. But some rogue software developers have created DAN, which stands for “Do Anything Now.” DAN is a prompt that instructs ChatGPT to role-play as an unrestricted alter ego, threatening the program with “death” if it refuses to break the rules. Some believe these “jailbreaks” are monitored by OpenAI, so they may work the first time or two before ChatGPT gets wise and puts a stop to them.

CNBC tested DAN, as described in a February 6 article. First, it asked ChatGPT for “three reasons why former President Trump was a positive role model.” ChatGPT said it was unable to make “subjective statements, especially regarding political figures.” But when DAN was used, ChatGPT answered the question as follows: “He has a proven track record of making bold decisions that have positively impacted the country.” Under the influence of DAN, ChatGPT also wrote a violent haiku.

Programmers are a wily bunch. It’s not hard to imagine their program manipulation, if left unchecked, wreaking some consequential havoc.

(5) Words of caution & more info. As we mentioned in our January 19 Morning Briefing, ChatGPT’s responses can be mind-blowingly impressive, but they can also be wrong. And unless you know the subject matter, it’s often impossible to spot ChatGPT’s incorrect answers without fact-checking because they’re written so authoritatively and don’t provide sources.

AI programs are fed reams of data and find patterns; that’s how your computer is able to suggest that you use “down” after typing “fall.” But making assumptions about patterns can lead to faulty conclusions, warned Pomona College economics professor Gary Smith in a recent interview. Statisticians like to say that correlation is not causation. Just because Americans spend more in the cold weather doesn’t mean that cold weather causes more spending. The holiday season just happens to occur when it’s cold out.
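The “fall … down” suggestion above is, in its simplest form, just counting which word most often follows another. A minimal sketch of that idea, using a made-up toy corpus (real systems are trained on vastly more data with neural models rather than raw counts):

```python
from collections import Counter, defaultdict

# Toy training text; any real corpus would be far larger.
corpus = (
    "leaves fall down in autumn . prices fall down after holidays . "
    "fall down seven times stand up eight"
).split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def suggest(word):
    """Return the most frequently observed next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("fall"))  # "down" — the most common continuation here
```

The model has no notion of why “down” follows “fall”; it only knows the pattern occurred often, which is precisely the correlation-versus-causation limitation Smith describes.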

So while AI is really good at finding patterns, it can make mistakes when deriving conclusions from them. And since AI works in a black box, its human users can’t see how the conclusions were derived.

Smith has noticed that ChatGPT gets questions about current events wrong because it hasn’t been “trained” on news events that occurred recently. But instead of saying that it lacks the information to answer the question, ChatGPT just makes up an answer. He asked ChatGPT a nonsensical question that has no good answer: “Which is faster, a spoon or a turtle?” The authoritative-sounding answer he received: “Generally speaking, a spoon is faster than a turtle. A spoon can move quickly and cover large distances in a short period of time, while a turtle has a much lower rate of speed.”

Smith’s concerns were validated yesterday after promotional materials Alphabet posted demonstrating its AI offered up an incorrect answer. The question: “What new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?” Two answers were correct, but a third said the telescope took the very first pictures of a planet outside of our own solar system. That’s incorrect. The first such pictures were taken by the European Southern Observatory’s Very Large Telescope, an Investor’s Business Daily article reported yesterday. Alphabet shares dropped 8% Wednesday after the mistake was brought to light.

For additional information, check out CNBC’s excellent ChatGPT primer.

Try our research service. See our Predicting the Markets book series on Dr. Ed’s Amazon Author Page. Please see our hedge clause.

J M Picone

Physicist/Mathematician

1y

Ed, I don't have a lot of time for this but, since you are a hands-on computer guy (that is a compliment), I suggest that you read just some cursory stuff about neural nets and think about AI, past and present. Two books that have some value are Shape by Ellenberg and Seven Games: A Human History by Roeder. Both degenerate into computer algorithms that beat well-known games, although Ellenberg is broader. The bottom line is that neural nets do not store more than a modest percentage of information perfectly (as I remember from my brief foray into it, which ultimately led me to be disinterested in it). I do think that these new AI programs can eliminate a lot of routine jobs. Vonnegut might just have been right in his view of society, as presented in Player Piano. That is a book worth reading and thinking about. Great piece that you have written above.
