How to embarrass yourself with Generative AI

How to embarrass yourself with Generative AI


As ChatGPT and the like have given us a lofty new platform to build on, many have taken the opportunity to pratfall off it. Sometimes it has been a spectacular dive, such as the lawyers for whom ChatGPT hallucinated citations , or the scientists who included Midjourney images of bizarrely-endowed rats in their paper.?

Yet they are far from alone. Every day people are making obvious and public mistakes with AI. This short post aims to help clear up some of the most common misconceptions.?

Generative AI is a generational change, and it is a scramble to keep your head above the rising water. Sink or swim. People with no prior AI experience are up to their eyeballs in jargon like mixture models, RAG, and sparse LMs. Many are doing an admirable job of tackling it, but you can’t blame anyone for getting confused.??

It doesn’t help that implausible magnitudes of “AI experts” and “thought leaders” have emerged from thin air to exploit the FOMO. You absolutely need to know about RLHF, they say. You absolutely need to know about multi-agent architectures. What’s your AI social video strategy, they ask? It goes on. The pressure to be seen as having a handle on AI is immense and growing. It’s quite wearing for someone who’s been in the field a while, and I can only imagine the load this is placing on people who weren’t steeped in machine learning prior to this.?

As a modest attempt at an antidote, I’ve compiled a short and accessible list of the five mistakes I’ve seen most often in the professional community. I hope it saves at least a few potential embarrassments.??

??

1. Thinking ChatGPT is a search engine?

This is such an understandable mistake. Google, Microsoft, and others are absolutely developing LLMs for search. And it’s certainly true that there is a subset of common search queries that ChatGPT can handle well. Any “How do I ...” query is generally a success.?

However, ChatGPT is designed to produce plausible text, not accurate text, and these two objectives are not always aligned. It’s certainly plausible that OpenAI has received $37bn in cumulative investment. However, it’s not true, I plucked a sensible-sounding number from thin air. That’s what LLMs do out of the box. Such hallucinations can be very subtle, and always relayed with a disarming confidence. ?

If retrieving facts from an LLM is your objective, you should be looking at Resource Augmented Generation (RAG). The basic concept here is that you provide the LLM a set of facts (e.g. as a document) and ask it to answer from the facts, not its preconceptions. This can get very sophisticated, for example using the LLM to look up the facts from a knowledge base before answering, or involving quantitative data. In this world, the LLM is actually the commodity, it’s the data sources that are the value.??

??

2. Not evaluating the LLM’s performance??

Here’s one that got me. LLMs are beguilingly capable of zero-shot classification. This is the ability to classify an example as belonging to a set of categories, without having seen the example or categories before. This negates the need to train your own model on your own data.??

??

At first glance it is mind blowing. You might think you’ll never need to train a classifier again! The data scientist’s workload has been cut to nothing! Mechanical Turk can be turned off. The interns can finally stop labelling data.??

Reality is not quite so sweet. If you measure the accuracy of LLM zero-shot classification, you may be surprised. It can be very variable, and something that appeared solid when you were experimenting with prompts is often far less robust and generalisable than it first seemed. For example, we measured zero-shot topic classification at a promising 95% accuracy for our initial POC set of topics, only to discover other topics were a much less exciting 70%.?

These aren’t new problems for machine learning engineers and the existing toolkit still works. The trap is forgetting you still need it.??

??

3. Betting the farm on today’s prices?

OpenAI is dirt cheap right now. Fractions of fractions of fractions of pennies per token. My OpenAI budget is in fact far less than my coffee budget. Yes, I drink a lot of coffee, obviously, but still.?

Meanwhile competitors like Anthropic are giving away free credits like nobody’s business. Almost everyone currently has free access to multiple GenerativeAI services.??

Yet behind the curtain, ChatGPT’s server costs are an eye-watering $700,000 a day , according to 2023 estimates. They are projected to make $1B in revenue in FY2023, which works out at less than half the infrastructure costs. It’s almost certainly a similar situation at Anthropic or Google.?

Seasoned tech observers are expecting the screws to tighten as OpenAI try to close that gap. After all we’ve seen this before many times. Lure the users in then collect the revenue is Silicon Valley 101.?

All that said, there are reasons to believe the landing will be soft.??

First and most simply: investment into AI chips. If more cost-efficient hardware is developed there will be much less of a gap to bridge. We’ve seen this before in cloud computing with AWS, who do an outstanding job of dropping prices by producing new server types every year.?

Second: commoditisation of AI. The quality difference between ChatGPT, Claude, Gemini, Mistral, and the many others, is often not noticeable. Simultaneously the academic community are working hard on shrinking models: there are already several open source models you can run yourself on modest hardware. We can expect the open source model performance to approach the proprietary models quickly. As such, competition for AI model providers will be intense, and we can expect market forces to keep prices low.??

Ultimately though, it is such a rapidly developing space that predicting even medium-term price movement is risky. Hedge your bets when planning.??

??

4. Forgetting that hackers are already playing?

In the early days of 2023, that distant age, a popular new sport emerged: jailbreaking ChatGPT. You could trick it into breaking its calm and prim persona and make it do things like produce fake news or tell you how to commit crimes.?

??

This was endlessly amusing, but there’s a serious point. If an LLM can be induced to do things it’s not supposed to, it is vulnerable to malicious actors. Hackers can use techniques like prompt hacking or prompt injection (the AI analogue to SQL injection) to extract hidden information or trick the LLM into performing other tasks.??

Imagine you are using GPT4 as a chat interface to your web app, and have used the function-calling capability to get it to call your API directly for more data. Given a request, the LLM figures out what parameters to put in that function call to get the data. It is surprisingly easy for a hacker to craft a request that either leaks secret parameters or uses parameters other than the subset they are allowed to use.?

On an even more basic level, recently an Air Canada chatbot hallucinated a refund policy and the airline was forced (in court) to honour it. In this instance the user was genuine and all fault was with the chatbot, but you can imagine the potential for chaos in situations where an AI has any level of responsibility. But what to do about it??

You’ll be happy to hear that the blue team are working as hard as the red team, and AI security techniques are being developed. The InfoSec fundamentals still hold - sanitise untrusted input, for example - but there are new ways of applying them, such as via cross-checking models that screen input for malicious requests and output for leaked secrets. As always, awareness of the risks and conscientious use of sensitive data is paramount. I recommend starting with the OWASP AI Security and Privacy Guide .?

?

?

5. My embarrassing mistake: being overly defensive?

In reaction to the over the top hype, some people have become very cynical. It’s easy to see why. After all, we are being inundated with AI mania beyond even what we saw when web3 and blockchain technology was the next hot thing.??

In fact it has reached such levels that many CTOs I’ve spoken to have had to abandon their natural tech evangelist position and adopt the AI skeptic’s stance within their organisation, just to avoid getting swept away in the current.?

The danger with this is that you act on the possibilities, be they for incremental productivity or disruption, later than you could have in a world that is moving very fast. There’s no point having our latest AI bubbling away in our data science team’s cauldron when it could be front-and-centre driving value for our customers. In the end, it’s a business risk balance decision – release too early and disaster can occur, release too late and it is already behind the curve. ?To get it right, I need to be less defensive.? Talking with fellow CTOs and tech leaders in Bristol recently I'm left wondering what other people would say on this.?

?

Conclusion?

As an AI language model I cannot write a conclusion that provides advice on surviving in the rapidly-developing world of AI. However, I suggest…?

Just kidding. This post is human-authored and any incorrect facts have been misremembered, not hallucinated. However, I do hope you’ll find some of the advice above useful. Generative AI is a steep learning curve and many of the people who claim to have it all figured out do not. My own take is to embrace the uncertainty, leverage what you do know and learn fast to identify the opportunities that are awaiting early movers.?

Yasmin Crowther

Author and Chief Insight Officer, Polecat

8 个月

Great post Chris Bowdon! Also sharing this as an accessible companion article on limitations of AI/LLMs etc. https://ethnographicmind.com/ai-and-the-untouchables/

Godwin Josh

Co-Founder of Altrosyn and DIrector at CDTECH | Inventor | Manufacturer

8 个月

You talked about the nuances and potential pitfalls in navigating generative AI, shedding light on the challenges even experienced professionals face. It's a reminder of the intricacies involved in harnessing AI's power. Considering the complexities, how do you envision enhancing AI systems' robustness against unforeseen errors and manipulations, especially when dealing with dynamic real-world scenarios? Additionally, given the evolving landscape, what are your thoughts on applying these insights to fortify generative AI in critical sectors like cybersecurity or healthcare, where precision and reliability are paramount?

回复

要查看或添加评论,请登录

社区洞察

其他会员也浏览了