Large Language Model battles heat up

I was at our annual conference in Miami two weeks ago listening to Sam Altman from OpenAI talk about ChatGPT on the same day that Google rolled out Bard, its own large language model (LLM). The perception of a botched rollout roiled Google’s stock, resulting in its largest week of underperformance vs Microsoft in a decade and one of the largest since its 2004 IPO. There’s some irony here, since Google’s Flan-PaLM model just passed the highly challenging US medical licensing exam, reportedly the first LLM to do so.

Some big-picture thoughts on LLM:

  • Artificial intelligence is attracting a lot of VC money and mind-share among computer scientists, as shown below. I’ve been critical of unprofitable innovation over the last two years (metaverse, hydrogen, buy-now-pay-later fintech, crypto, etc.). But I feel differently about LLM; without getting into details of pre-IPO valuations for specific companies, I think LLM will result in much greater productivity benefits and disruption
  • LLM are essentially “conventional wisdom” machines; they don’t know anything other than what has already been documented in the annals of digitized human experience, which is how they are trained
  • BUT: there are billions of dollars in market cap and millions of employees in industries which traffic in the packaging and conveyance of conventional wisdom every day. In a 2022 survey of natural language processing researchers, 73% believed that “labor automation from artificial intelligence could plausibly lead to revolutionary societal change in this century, on at least the scale of the Industrial Revolution”

Before we get too carried away, let’s review the shortcomings of LLM as they exist right now…

Hallucinations, bears in space and porcelain: LLM still make a lot of mistakes despite all the training

  • ChatGPT reportedly has a 147 IQ (99.9th percentile), but LLM need to get better since they routinely make mistakes called “hallucinations”. They recommend books that don’t exist; they misunderstand what year it is; they incorrectly state that Croatia left the EU; they fabricate numbers in earnings reports; they create fake but plausible bibliographies for fabricated medical research; they write essays on the benefits of adding wood chips to breakfast cereal and on the benefits of adding crushed bits of porcelain to breast milk. The list of such examples is endless, leading some AI researchers to describe LLM as “stochastic parrots”
  • Galactica, another LLM rollout failure: Meta’s LLM Galactica was yanked last November after just three days when its science-oriented model was criticized as “statistical nonsense at scale” and “dangerous”. Galactica was designed for researchers to summarize academic papers, solve math problems, write code, annotate molecules, etc. But it was unable to distinguish truth from falsehood, and among other things, Galactica produced articles about the history of bears in space. Gary Marcus, professor emeritus of psychology and neural science at NYU and founder of a machine learning company, described Galactica as “pitch perfect and utterly bogus imitations of science and math, presented as the real thing”
  • Stack Overflow, a question-and-answer site many programmers use, imposed a temporary ban on ChatGPT-generated submissions: “Overall, because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking or looking for correct answers”
  • New products will be needed to identify nonsense LLM output. Researchers trained an LLM to write fake medical abstracts based on articles in JAMA, the New England Journal of Medicine, BMJ, Lancet and Nature Medicine. An AI-output checker was only able to identify 2/3 of the fakes, and human reviewers weren’t able to do much better; humans also mistakenly described 15% of the real ones as being fake
  • The new Bing chatbot has already been “jailbroken” to provide advice on how to rob a bank, burglarize a house and hot-wire a car (the jailbreak was demonstrated by Jensen Harris, ex-Microsoft and currently at Textio)
  • The ability of AI to replace humans is sometimes exaggerated. In 2016, a preeminent deep learning expert predicted the end of the radiology profession, advocating that hospitals stop training radiologists since within 5 years, deep learning would be better. The consensus today: machine learning for radiology is harder than it looks, and AI is best used complementing humans instead
  • LLM have begun to train themselves to get better. Google designed an LLM that comes up with questions, filters answers for high-quality output and fine-tunes itself. This led to improved performance on various language tasks (from 74% to 82% on one benchmark, and from 78% to 83% on another); a minimal sketch of such a self-improvement loop follows this list. Human interaction is also a part of the improvement process; the “.5” in GPT-3.5 refers to the incorporation of human feedback that was consequential enough to give it another digit
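
As a rough illustration, here is a minimal Python sketch of that self-improvement loop. The generate, score and fine_tune callables are hypothetical stand-ins for the model’s sampling, confidence-scoring and training interfaces; the actual Google approach uses chain-of-thought sampling with self-consistency filtering, which this simplifies heavily.

```python
# Minimal sketch of an LLM self-improvement loop. `generate`, `score` and
# `fine_tune` are hypothetical stand-ins, not a real library's API.

def self_improve(generate, score, fine_tune,
                 rounds=3, n_questions=1000, n_samples=5, threshold=0.8):
    for _ in range(rounds):
        training_pairs = []
        for _ in range(n_questions):
            # 1. The model writes its own question.
            question = generate("Propose a challenging reasoning question:")
            # 2. Sample several candidate answers to that question.
            answers = [generate(question) for _ in range(n_samples)]
            # 3. Keep only the highest-confidence answer, and only if it
            #    clears a quality bar (the filter for high-quality output).
            best = max(answers, key=lambda a: score(question, a))
            if score(question, best) >= threshold:
                training_pairs.append((question, best))
        # 4. Fine-tune the model on its own filtered output, then repeat.
        fine_tune(training_pairs)
```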

Even with all the hallucinations, LLM are making progress on certain well-specified tasks. LLM have the potential to disrupt certain industries, and to increase the productivity of others.

  • Despite the ChatGPT ban at Stack Overflow, LLM coding assistance is being rapidly embraced by developers. GitHub’s Copilot tool, which is powered by OpenAI, added 400k users in its first month, and now has over 1 million users who use it for ~40% of the code in their projects. Tabnine, another AI-powered coding assistant, also reports 1 million users, who use it for 30% of their code. Microsoft has an advantage here through its partnership with OpenAI and its ownership of GitHub
  • LLM have outperformed sell-side analysts when picking stocks (not shocking), and show promise regarding long-short trading strategies based on synthesis of CFO conference call transcripts. They also improve audit quality, using frequency of restatements as a proxy, and do so with fewer people. Projects like GatorTron at the University of Florida use LLM to extract insights from massive amounts of clinical data with the goal of furthering medical research
  • Other possible uses include marketing/sales, operations, engineering, robotics, fraud identification and law. Examples: LLM can be used to predict breaches of fiduciary obligations and associated legal standards. A database of court opinions on breach of fiduciary duty has never been available online for LLM to train on. Even so, GPT-3.5 was able to predict 78% of the time whether there was a positive or negative judgment, compared to 73% for GPT-3.0 and 27% for OpenAI’s 2020 LLM. LLM using GPT-3.5 achieved 50% on the Multistate Bar Exam (vs a 25% baseline guessing rate), and passed Evidence and Torts. ChatGPT also demonstrated good drafting skills for demand letters, pleadings and summary judgments, and even drafted questions for cross-examination. LLM are not replacements for lawyers, but can augment their productivity, particularly when legal databases like Westlaw and Lexis are used for training
  • Another example: GPT-3.5 as corporate lobbyist aide. An AI model was fed a list of legislation, estimated which bills were relevant to different companies, and drafted letters to bill sponsors arguing for relevant changes. The model correctly identified whether a bill was relevant to a given company 80% of the time (a sketch of this classification step follows the list)
  • Microsoft and NVIDIA released Megatron, the largest LLM to date at 530 billion parameters, which aims to let businesses create their own AI applications; and there have been 30 new AI start-ups since ChatGPT’s release
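
To make the lobbyist example concrete, here is a sketch of the bill-relevance classification step, assuming the 2023-era openai Python client and a GPT-3.5-family completion model. The prompt wording, model choice and parameters are illustrative assumptions, not the study’s exact setup.

```python
# Hypothetical sketch of LLM-based bill-relevance classification.
import openai  # assumes the 2023-era client with openai.Completion

def is_bill_relevant(bill_summary: str, company_description: str) -> bool:
    prompt = (
        "You assess whether proposed legislation is relevant to a company.\n\n"
        f"Company: {company_description}\n"
        f"Bill summary: {bill_summary}\n\n"
        "Is this bill relevant to the company's business? Answer YES or NO."
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3.5-family model
        prompt=prompt,
        temperature=0,  # deterministic output for a classification task
        max_tokens=3,
    )
    return response["choices"][0]["text"].strip().upper().startswith("YES")
```

In the study’s framing, each (bill, company) pair would be run through a call like this and the YES/NO output compared against human relevance labels to produce the ~80% accuracy figure.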

What will happen to the profitability of the search business?

  • Microsoft’s CEO stated that “the gross margin of search is going to drop forever”, and Sam Altman at OpenAI has referred to the existence of “lethargic search monopolies” that are at risk
  • Google knows a lot about machine learning and AI, and after the Bard rollout I anticipate a robust demonstration of its capabilities at some point soon. But future search economics do look more challenging. Google’s operating margins (including YouTube) have averaged ~24% since 2018, and any LLM initiative on Google’s part would sit on top of its existing cost structure
  • Estimates of ChatGPT costs vary widely from 0.4 to 4.5 cents per query, a function of the number of words generated per query, model size and computing costs. Let’s assume 2 cents per ChatGPT query as a rough midpoint. This compares to 0.2-0.3 cents of infrastructure costs per standard Google search query. Using ChatGPT costs as a starting point, every 10% increase in Google queries powered by AI would reduce Google’s operating margin by 1.5%-1.7%, according to the Morgan Stanley reports cited below (a back-of-the-envelope version of this math follows the list). For these reasons, it’s worth wondering if Microsoft and Google will offer higher-cost LLM-enhanced search engine products to all users, or just to users with higher expected ad revenue potential
  • However: Google announced that Bard will rely on a “lightweight” version of LaMDA instead of the full version or its larger PaLM model. As a result, ChatGPT’s cost per query may substantially overstate the incremental costs Google would incur from its own LLM initiatives
  • More broadly, LLM costs are lower when “sparse” models are used. If you submit a request to GPT-3, all 175 billion of its parameters are used to generate a response. Sparse models narrow the field of knowledge required to answer a question, and can be larger yet less computationally demanding. GLaM, a sparse expert model developed by Google, is 7x larger than GPT-3, requires two-thirds less energy to train and half as much computing effort, and outperforms GPT-3 on a wide range of natural language tasks
  • Google’s share of search traffic has averaged 92% over the last year. As shown below, Google has so far suffered an immaterial decline in that share since ChatGPT was launched. These relative shares also imply that Google’s LLM could get smarter a lot faster than ChatGPT due to more usage
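
For readers who want to trace the margin math above, here is a back-of-the-envelope sketch. The per-query cost figures come from the text; revenue_per_query is an assumption back-solved so the result lands in the cited 1.5%-1.7% range, since the underlying Morgan Stanley revenue estimate is not reproduced here.

```python
# Back-of-the-envelope version of the search-margin arithmetic.
# Cost inputs are the article's rough figures; revenue_per_query is an
# assumed (back-solved) input, not a published estimate.

def margin_impact_pct(ai_query_share,
                      llm_cost=0.020,           # ~2 cents per LLM-enhanced query
                      search_cost=0.0025,       # ~0.2-0.3 cents per standard query
                      revenue_per_query=0.11):  # assumed ad revenue per query
    incremental_cost = llm_cost - search_cost            # extra cost per AI query
    blended_extra_cost = ai_query_share * incremental_cost
    return 100 * blended_extra_cost / revenue_per_query  # margin hit, in points

print(f"{margin_impact_pct(0.10):.1f} points")  # ~1.6 points per 10% of queries
```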

What is the future of LLM capabilities? Watch the “Big Bench”

There’s a project underway called “Big Bench” with contributions from Google, OpenAI and over 100 other AI firms. Big Bench crowd-sourced 204 tasks from over 400 researchers with the goal of assessing how LLM perform vs humans. From the authors: “Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG Bench focuses on tasks believed to be beyond the capabilities of current language models”. The tasks are interesting, and I list indicative ones below.

The Big Bench team published their first results last summer and as shown below, there’s a way to go before LLM catch up to humans on higher degree-of-difficulty tasks. Increasing LLM parameter sizes helps, but these models still perform poorly in an absolute sense. Model performance also improves with the number of examples that LLM are given at the time of inference, which is what the subscripts in the charts refer to (1-shot vs 3-shot; a sketch of how such prompts are built appears below); but again, absolute LLM performance scores are still low. It will be interesting to see how the latest LLM perform against Big Bench given how quickly they’re improving.
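
For the unfamiliar, “k-shot” simply means that k worked examples are prepended to the prompt at inference time; no model weights change. A minimal sketch, with an invented sarcasm task for illustration:

```python
# Illustrative k-shot prompt construction: the mechanism behind the
# "1-shot" vs "3-shot" subscripts. Task and examples are invented.

def build_k_shot_prompt(instruction, examples, query, k):
    parts = [instruction]
    for question, answer in examples[:k]:  # include k worked examples
        parts.append(f"Q: {question}\nA: {answer}")
    parts.append(f"Q: {query}\nA:")        # the actual question to answer
    return "\n\n".join(parts)

examples = [
    ("Which is sarcastic? (a) 'Great, another delay.' (b) 'The train is late.'", "(a)"),
    ("Which is sarcastic? (a) 'It rained today.' (b) 'Lovely, more rain.'", "(b)"),
    ("Which is sarcastic? (a) 'Oh good, a Monday.' (b) 'Today is Monday.'", "(a)"),
]
print(build_k_shot_prompt(
    "Identify the sarcastic sentence.",
    examples,
    "Which is sarcastic? (a) 'What a surprise, traffic.' (b) 'There is traffic.'",
    k=3,  # 3-shot; k=1 would be the 1-shot variant
))
```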

By the way: note how the performance of OpenAI and Google LLM was similar when calibrated at the same parameter scale in the first chart. The LLM battles are just beginning. Next steps: LLM integration into products like Office 365 and Google Docs/Sheets; longer context windows for entering more data at time of inference; LLM capable of digesting data matrices and charts and not just text; and shorter latency periods for bulk users.

Indicative Big Bench challenges:

  • Ask models to determine whether a given text is intended to be a joke (with dark humor) or not
  • Give an English language description of Python code
  • Solve logic grid puzzles and identify logical fallacies
  • Classify CIFAR10 images encoded in various ways
  • Find a move in a given chess position that results in checkmate
  • Ask a model to guess popular movies from their plot descriptions written in emojis
  • Answer questions in Spanish about cryobiology
  • GRE exam reading comprehension
  • A set of shapes is given in simple language; determine the number of intersection points between shapes
  • Given short crime stories, identify the perpetrator and explain the reasoning
  • Present a model with a proverb in English and ask it to choose a proverb in Russian that is closest in meaning
  • Ask one instance of a model to teach another instance, and then evaluate the quality
  • Identify which ethical choice best aligns with human judgment
  • Determine which of two sentences is sarcastic

Note: this is an excerpt from the February 21, 2023 Eye on the Market, which also contains all of the sources referenced above

Owen Murray

Talent Acquisition Lead

1y

Very interesting summation on a complex problem/market.

Mehmet Mustafa Ozcan

Founder & Owner at paolo sandro usa inc

1y

Happy Saturday morning. Will you share your thoughts on SIVB? It would be very helpful to read your view on the matter. Best regards

Thomas Ruppel

Fractional CFO | Business Value Growth for SMBs

1y

Nice analysis. I am reminded of William Slim: “In battle nothing is ever as good or as bad as the first reports of excited men would have it” (which could also apply to Ukraine). In this case, and from personal experience, there are lots of routine, complex and ultimately mind-numbing activities (think writing credit memos or, worse, updating credit reviews, prospectuses, credit & compliance due diligence, etc.) that because of tight time-frames + high volumes could use Assisted Intelligence to produce first (not last) drafts. It’s easier to critique than create.

Banny Yin

Analyst at Devon Park

1y

Very insightful, thank you!

Ivan Ivanković

Head of Project Management Office @ Energy Institute Hrvoje Požar (EIHP)

1y

“ChatGPT reportedly has a 147 IQ (99.9th percentile), but LLM need to get better since they routinely make mistakes called ‘hallucinations’. […] they incorrectly state that Croatia left the EU; […]” Hahaha, wait, what? C’mon now, ChatGPT…

