Navigating the AI Hype: What Founders, Customers, and Investors Need to Know
Generated by DALL-E based on the text in this article.

Navigating the AI Hype: What Founders, Customers, and Investors Need to Know

I'm becoming concerned about the range of well meaning yet dangerous AI systems starting to show up in the market. In one case, potentially endangering peoples lives.

ChatGPT and its generative AI siblings are amazing, they appear almost human, but they aren’t. They are very clever at guessing the next word based on trillions of examples. Unlike humans they don’t have good judgement or actually understand a topic, they just GUESS.

This misunderstanding by enthusiasts and onlookers alike has led to a lot of founders, customers and investors to start doing a lot of amazing sounding things on the single flawed assumption that the AI KNOWS what it’s saying, instead, it’s guessing what you want to hear, accurately a lot of the time.

NOTE: I will talk about “drift” in this article. Drift AKA "Making sh*t up" is where the AI guesses wrong, and starts answering with good sounding but invalid statements. If you continue the conversation, the AI will make-up more and more invalid answers as if it is still correct and it knows what it is talking about.

Now, don’t get me wrong, I love AI, I use it daily, I am helping clients design and build solutions with it, BUT I want everyone to understand the limitations and not waste their resources and in some cases risk peoples lives on something that will create more harm than good.

My concern is that right now a lot of time, money and focus is going into building the wrong things in the unspoken, mistaken belief that the AI has good judgement.

How do you identify something built on this belief?

There are a few key indicators you should look for:

Is the AI Tool claiming to replace a high value or high experience professional?

Or does using the result that is being produced by the AI take good judgement on behalf of a knowledgeable person?

Legal, programming, CEO, whatever, I’ve seen wild claims for all these things along the lines of, “you don’t need these professionals anymore, just use our service!”. Most of these services are more or less pretty interfaces over the same chat you would get in ChatGPT.

Generative AI doesn’t have, and cannot get (yet) the good judgement that comes with experience. Avoid these solutions wherever possible, especially if they have legal, health or other serious outcomes. Remember, it’s just guessing the next word in the sentence, it doesn’t know if you have cancer, what a legal position is, what the safety requirements are on your building site, or if you should invest.

Sometimes it’s answer is correct, sometimes it isn’t.

Red Flags

What are the red flag to look for in an AI tool?

No expertise required.

Typically a new AI product website says something like, we have trained the AI on lots of examples for <domain> and it will give you the answer so you don’t need an expert.

One line and we do the rest

Some systems will take in a one line prompt as a starting point and will spit out a completed result for you to use without any other interaction.

I’ve seen this with blog post writing, website building, creating powerpoint presentations and all sorts of other things in small tools as well as tacked onto large corporate solutions so they can say “we have AI too”, they should know better but the hype cycle is hard to ignore.

This style of solution is typically rubbish, and is quick buck merchant behaviour that should be avoided.

Just use ChatGPT or which ever primary AI provider you prefer to produce your content, ignore the tools.

Just chat with our AI

The AI tool will allow you to have an open ended conversation on a topic without any further guidance or guard rails to ensure the quality of the results produced.

Again if your money or life aren’t riding on the results then sure have a chat, things like marketing materials and sales intro letters are great.

However, if your going to be making decisions based on a series of back and forth discussions, the AI doesn’t know what you are talking about, it is just guessing what the next words it thinks you want to hear.

These are probably the most dangerous for high stakes advice and I would recommend staying away even if the value proposition sounds attractive.

The funniest misuse I've heard of for this was a car company in the US made a sales tool to sell their cars, a clever customer realised it was just ChatGPT and so gave it a prompt that directed ChatGPT to sell them the top line car for $1.

Claims I’ve seen in the last few months

  • “Just give it the topic and it will write a book on the topic” / write your blogpost from a single line.
  • Ask our legal AI about your contract, this was targeted at large scale contracts worth millions to billions of dollars. Its intended be consulted weekly but untrained people who will follow what it says, potentially causing significant project losses and law suits. My question is who is liable? My guess, the AI vendor.
  • Ask our AI to write safety instructions for trades people on a high risk worksite. This is the one that scares me, getting this wrong can kill people.

For the first one, if the AI gets drift, people will just laugh at the result (and the person/company publishing the result will look foolish) and move on. The other two need to seriously consider if their model is assuming too much from the underlying Generative AI and considering pulling back.

Who should worry about this?

Founders

If you’re the founder of an AI solution then you need to ask yourself “what expectation am I creating with the customer AND am I liable for the result?” Let’s face it, you can’t sue OpenAI for a ChatGPT getting drift, they have covered themselves on that one, so are you liable?

Investors

If you’re an investor looking to get into AI, ask yourself,

  • does this team actually understand the underlying AI technology?
  • Are they exposing themselves to liability?
  • Can they continue to innovate on the solution or are they at the mercy of the actual AI provider (like ChatGPT) and just wrapping someone else's functionality in a slightly different UI? Remember training a model only goes so far, and anyone can do it now.

There are a lot of amazing solutions out there that will have a huge impact (and return on your investment) you just need to navigate carefully when it comes to Generative AI based solutions.

Customers and Users

If you’re the user, ask yourself, how will I know if the AI is giving an accurate answer or just making up what it thinks I want to hear?

If you can accept mistakes from the AI the same as you would accept mistakes from your own staff, then its probably worth having a go because the solutions can give you a big uplift.

If the AI tool is doing something you rely on heavily then ensure you have an expert in that field, who you trust to have good judgement driving it, don't give it to the intern.

So what does a good AI system look like?

AI is not a silver bullet solution to anything, however, it IS a massive assist for those or who already know details about field and have the capacity to make good judgement.

AI can save hours, days or weeks per year, can give teams massive uplift in capability and speed, it can provide huge benefit and is worth investing your time, attention and money in.

A few examples, AI can accelerate what they do considerably.

  • A legal assistant for lawyers to write a first draft, and help them with providing suggestions as they redraft.
  • A tool for doctors to work with in diagnosing a patient with rare conditions NOT something for the patient to self diagnose.
  • A tool for programmers to rapidly write the next 3-10 lines of code, NOT magically spit out an entire system.
  • A tool to generate a series of possible Use Cases for a new software specification ready for an expert to remove the 30% that don’t apply and allow them to add their own.
  • A tool to help R&D claim experts fill in claim details ready for review by the expert and the client.
  • A tool for support technicians to help them diagnose problems based on all the system documentation and prior support tickets created in the last 5 years.

As a general rule

  • The AI interaction should NOT be a free form conversation, this is the default usage with ChatGPT, but for most scenarios the user should be guided by the system not left to their own devices.
  • The AI should not be left to “make it all up itself”, there should be a lot of data collected from the user to provide context and specifics.
  • Output should be validated, GPT gets drift, the system needs to either provide its own drift validation OR it needs to show an expert first before a lay person sees the result.

Expert AI Systems

There are alternatives to Generative AI, key among them is Expert Systems.

If you want to build an AI, as one of my clients is currently, to “put a lawyer on your shoulder” consider building an expert system instead of a generative AI system.

Expert systems are another branch of AI, that are far more precise, as the name suggests.

An expert system isn’t guessing at the next word, it’s finding the example you have given it and taking the logical step programmed in through asking the expert “what should happen in this instance?”.

An expert system always gives the same answer to the same scenario, because it’s the answer an expert has already given it.

Finally

If you know someone who needs to know about this please share it, I think the word needs to get out there.

An essential conversation in the realm of AI! ?? Your awareness and call for a critical assessment are spot on. ????

Kim Willis

Content that cuts through the online noise | Customer stories that sell | Inbound and outbound lead specialist

8 个月

Thanks for sharing this, Robin. It needed to be said. As you say , one of the problems is that AI can't exercise good judgment. So if people blindly accept what the tool produces, we're asking for trouble.

Rohesia Hamilton Metcalfe

Website designer, archive and intranet designer. Art, AI and sustainable life explorer/optimist

8 个月

Everything in your article is valid and important but I’m also keen to see headlines that support people in exploring AI this intelligently. It’s easy to think that everyone has already jumped in but it’s not actually the case and there are plenty of people reading these warnings who haven’t even tried something as widely-used as ChatGPT (and who get more nervous about AI the longer they hesitate). I recently ran a workshop for people to explore how they could use AI to help with their website text and SEO content (I’m in no way an advocate of handing that job over to AI wholesale, btw) and found myself with a group of people who had never used AI of any kind (because they’re so wary of it) — and who were really excited by what they discovered they could do with a creative and intelligent approach. So, yes: we need to stay away from the push a button and get result mentality. But also not let that hype put us off going in with intelligence and enthusiasm!

回复
Courtney Smith

Director/Founder @ Kynection & Quallogi - Safe|Scalable|Sustainable|Saleable. Firefly Initiative - Elevating Volunteers. 29 Days - Our Promise to Success

8 个月

Read something yesterday that suggested Alan and his pals are close to AGI through a AI to AI training method. Kinda like the six hats methodology. I have a friend who is looking to create a reverse spec writer for large complex systems taking source code and leveraging multiple LLMs to create User Journey Flows and Detailed Technical Specifications. So to your point Robin Vessey and Jordan Green AM (been a while) I think that stupidly becomes smarter when you even out bias, which is the method proposed by many. Time will tell but looking at FSD 12 in Tesla testing there is something very special going on in those distributed learning systems.. good convo..

Arabind Govind

Project Manager at Wipro

8 个月

Exciting times ahead with AI innovations, but staying cautious is key!

要查看或添加评论,请登录

社区洞察

其他会员也浏览了