ChatGPT - we need to talk
Intro
Every man and their dog seems to have an opinion on ChatGPT and what it may mean for businesses, and for society more broadly. As someone who speaks daily to clients about what AI can do for their business, it would be remiss of me not to chuck in my 2 cents. Perhaps in this inflationary environment I'd better make that 3 cents. So then, join me as I take a light-hearted but meaningful look at ChatGPT and what it may mean for us.
Clarifications
Let's start by identifying what people actually mean when they talk about ChatGPT. ChatGPT is what is known as a Large Language Model (LLM). Put simply, an LLM is a particular application of AI technologies where a system has been "trained" on vast amounts of text-based data and can "generate" new text in response to a prompt or question from a user. Think of it a bit like the Google search algorithm, but instead of returning a list of websites it returns a passage of text.
The "generative" part (the G in GPT) is important as that's really what's new here. The ability to generate new content. This area is known as Generative AI and we see other examples such as DALL-E where it generates an image (artwork?) based on a text input from the user.
Much like Xerox became the verb for "to make a copy" (at least in the US) or Hoover became the verb for "to vacuum" (at least in the UK), ChatGPT is quickly becoming a synonym for any LLM. Others, such as Google's Bard or Meta's LLaMA, are also LLMs with similar capabilities and architecture. While ChatGPT is arguably the leader at present, this doesn't mean it's the only LLM in town. So when we talk about ChatGPT we need to clarify whether we mean OpenAI's model specifically or LLMs more generally.
If we have an LLM, it raises the question: is there a Small Language Model? The short answer is yes, but for branding reasons nobody would want to call them "Small". These models are your now well-established Natural Language Processing (NLP) models, which differ from LLMs in that they are trained on a clearly and often narrowly defined set of language and give a pre-programmed response rather than a "generative" one. Think of all the online chatbots you interact with. You can try to ask them a broad-ranging question, but unless it triggers certain key words like "account balance" you get the typical "computer says no" response and have to type another question or, more often, ask to speak to a human.
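The keyword-triggered pattern described above can be sketched in a few lines. This is an illustrative toy, not any specific vendor's product: the trigger phrases and canned responses are made up, but the shape is the point. A known key phrase maps to a pre-programmed answer; anything else falls through to the "computer says no" fallback.

```python
# Pre-programmed responses keyed by trigger phrase (illustrative examples only).
RESPONSES = {
    "account balance": "Your account balance is available under Accounts > Overview.",
    "opening hours": "We're open 9am to 5pm, Monday to Friday.",
}

FALLBACK = "Sorry, I didn't understand that. Would you like to speak to a human?"

def narrow_chatbot(question: str) -> str:
    """Return a canned response if a known trigger phrase appears in the
    question, otherwise the fallback. No text is ever generated."""
    q = question.lower()
    for trigger, response in RESPONSES.items():
        if trigger in q:
            return response
    return FALLBACK

print(narrow_chatbot("What's my account balance?"))
print(narrow_chatbot("Can you write me a poem about finance?"))
```

Contrast this with an LLM: here every possible output was written by a human in advance, which is exactly why these systems feel rigid but are also predictable and controllable.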
Show Me The Money
Who's making money from LLMs? Nobody, yet.
Microsoft has reportedly invested $10 billion in ChatGPT to date, and for what return? They've integrated it with their Bing search engine, but as of March this year Bing still only has 2.8% of the global search market. That seems like a poor return on investment.
Not wanting to be outdone by Microsoft, Google quickly pushed out its LaMDA-powered "Bard" chatbot to disastrous results, destroying $100bn of shareholder value in a single day.
Stepping away from the digital-native technology companies, what about more traditional industries? Speaking from first-hand experience, I've yet to talk to one client who has implemented an LLM at production grade to transform an existing business function. At my current employer we have plenty of amazing use cases of standard NLP solutions delivering sizeable business benefits through things like transforming complex instruction manuals and procedures into simple-to-use chatbots, or saving hours in customer support functions by rapidly identifying, prioritising and routing customer queries over email.
Every client wants to talk about LLMs, but I've yet to find a client who is willing to commit money to them.
One thing we can't ignore with LLMs is that they only "know" what they've been trained on. Thus they know absolutely nothing about your company, processes, industry and so on that hasn't been made public and fed into the LLM. As they currently stand, LLMs are very good at general, publicly available research (think wikis and news articles) and even some very technical areas like computer programming (because they've been trained on GitHub), but ask them anything about what happened today, or about how to improve the performance of your particular inventory management function, and you get nothing. So using an LLM for anything but generic or commonly available knowledge will fail. To make these work, businesses would have to train their own models. This will not be cheap or fast. GPT-3 reportedly cost $12m to train and has a monthly run cost of $3m. You're going to need a pretty big business case benefit for those costs to stack up!
We've heard so much about how LLMs are going to transform service industries. Yes, they may be great for students or consultants trying to quickly research a new topic with a brief, consolidated piece of text, but are you going to court with a legal brief prepared by a computer? Sure, it could help lawyers and other service workers with ancillary tasks, but is that really "transforming" the industry? Arguably LexisNexis did more to transform the legal profession than ChatGPT may ever do.
There are some really great use cases, and I can see some very exciting applications in coding and marketing-style content creation, but LLMs are probably best seen as a new tool for workers rather than machines that will make them obsolete. Whether these use cases will transform industries is a question on which I am yet to be persuaded. It's more incremental change than revolution.
So yes, there will be a lot of potential use cases, but whether there is ROI is a whole other question.
This Sounds Familiar
If all this sounds familiar then you're right. A new technology emerges, and along with it comes hype that the Super Bowl half-time show would be jealous of. What actually happens next, though, is a Janet Jackson/Justin Timberlake moment.
What technology am I talking about? Blockchain. Remember that? If you rewind 5 years, Blockchain was ALL we heard about. It was this revolutionary technology that would change how many industries operate, or so the hype suggested. Where is it now? Honestly, hand on heart, when was the last time you heard someone talk about blockchain (outside of crypto)? What happened? Well, the good old hype cycle (as coined by Gartner).
Yes, Blockchain has found its niches, such as digital ID (check out "Service NSW", the New South Wales state government agency in Australia), but it's not like Blockchain has fundamentally transformed industries as was suggested at the time.
At the time I thought Blockchain sounded like the classic "solution looking for a problem", and I fear we could be in the same territory here with LLMs. Yes, there will be useful business applications, but just how many there will be and how transformational they will be remains to be seen.
Bring In The Lawyers
Taking a break whilst writing up this little piece, a fortuitous article popped up in my news feed. If ChatGPT is a system that generates content, then who is liable for that content? What if ChatGPT lies? That's probably against the law, isn't it? Yes. Yes it is.
The fact that ChatGPT simply regurgitates what it's been fed (trained on) opens up an absolute plethora of ethical questions that are already being asked of "standard" AI solutions. Is there bias? Is it invading privacy? In many cases the answer will be yes. Who is liable for that, then? We'll see, as the defamation lawyers (libel for our North American friends) battle it out in court.
Beyond this specific case there is already a growing body of legal discussion and jurisprudence on the topic of liability and AI. To (over)simplify it, the basic problem is that more advanced AI solutions, including ChatGPT, are "black boxes" when it comes to decision making. Basically, you can't say why a decision was made by the system. If that decision then leads to negative outcomes (or damage, for the lawyers), then who is responsible? For a great summary I encourage you to read this paper from the Harvard Journal of Law & Technology.
For all this discourse we essentially have the insurance industry to thank. Whenever you want to understand where liability lies, ask an insurer. For businesses, then, the question is: do you want to get involved with black-box decision systems? I'm pretty sure the Chief Risk Officer will have something to say on this matter. A key word I hear from every client is "control". Businesses LOVE to have control and predictability of outcomes. What happens when you have an AI system whose actions you can't predictably control? The C-Suite will get itchy trousers.
More anecdotally, I'm also hearing stories of corporate security teams blocking access to the popular public LLMs because sensitive, proprietary or confidential information "shared with them" is not secured. You can bet your bottom dollar people have been dumping text into these LLMs and asking them to rewrite it in "proper" business English.
This is all a bit doom and gloom, but the key point to get across is that with LLMs, and any new technology for that matter, you need to tread carefully and be aware of unintended consequences.
Fast and the (Furious) Followers
Given all of the above, how exactly should enterprises approach LLMs? It seems like a nightmare to navigate. Again, learning from previous technology waves, I would recommend a "fast follower" strategy. Unless your enterprise is already a true leader in the AI space, there are many simpler and proven AI applications you should be looking at before exploring LLMs. Why spend precious investment dollars on experimental technology when others will do that for you? Instead, observe and learn, and then be ready to press go on investment when (if?) a viable, proven use case eventuates.
1 年Good article Huw! I would say however that unlike blockchain—which smelt of hype from a mile away—I’d bet strongly on LLMs drastically changing everything. When it comes to hype bandwagons, I’m always the guy who is like “meh” while everyone else is losing their minds, but this is different. I agree that companies would be prudent to take a wait-and-see approach but not because LLMs are a solution looking for a problem, but rather because they’re going to completely rewrite the rules of solutions and problems.