Digital Pulp Fiction: the rise and lies of ChatGPT
First published on TheLoadstar.com, 17/05/2023, by Russell Wood. Pic ID 190067500 © Kraft74 | Dreamstime.com

Tarantino’s '90s cult classic, saturated with nihilism, depravity, tilted reality and postmodern everything-ness, has come to life in the 21st century.

It’s called ChatGPT.

ChatGPT’s founders, OpenAI, contractually undertook (if we take their founding charter seriously) with us, as in global humanity, to “make AI safer” and “benefit all of humanity”. In fact (you really couldn’t make this up), they even specifically cautioned that they were “concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions”.

(Then they went ahead and set in motion exactly that).

They also vowed to be not for profit.

Hard not to love it when rock stars from this brave new digital metaverse make my point for me, far better than I ever could myself. Let’s let Elon speak for himself: “I’m still confused as to how a non-profit to which I donated ~$100M somehow became a $30B market cap for-profit.” (And pretty much overnight, at that.)

Not often Elon gets confused. I think he’s onto something.

It was never about the money, but the humanity

This was never a non-profit humanitarian mission. It’s a grubby, greedy, sleight-of-hand, entirely for-profit Tech-play with a charming, satanic, altruistic veneer.

The sudden death of innocence, along with those high moral aspirations, smacks of Silicon Valley game-playing at its finest and just confirms it even more forcefully. No matter how much this is dressed up as a benevolent, human-centred initiative, it’s about one thing and one thing only. The next big thing in Tech.

Thankfully, and oh so true to their purpose, OpenAI have capped investment returns at 100x. And even then, only for early investors. So someone like Elon (or Microsoft), with a spare $100m to throw at this, can make 100x, but the rest of us will settle for a fraction of that and be guinea pigs. That’s almost too egalitarian. No contradiction here at all. That’ll be great for humanity.

But it doesn’t end there. Since everything’s a market and OpenAI has seized first-mover advantage with the associated hype, the rest of the majors now have to rush to market with their own half-baked and potentially even clumsier alternatives. FOMO wins again. (Wait! Isn’t that precisely what they were agonising over?).

Big ups for humanity. You now have at least three chimera basket-cases let loose in the wild to choose from.

“Kill yourself” and other banal evils

Safety was the first non-essential to be tossed overboard on this voyage, if it was ever truly on board at all.

We need to take a moment at this point and reacquaint ourselves with what these systems are and how they work. (Not what industry hype says about them.)

These are “Large Language Model” (LLM)-based systems. “Roughly speaking, they take huge amounts of data, search for patterns in it and become increasingly proficient at generating statistically probable outputs — such as seemingly humanlike language and thought,” The New York Times wrote in Q1.
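
To make that mechanism concrete, here’s a deliberately toy sketch of the principle (mine, not OpenAI’s, with an invented three-sentence corpus): count which word follows which in the training text, then generate by sampling from those observed frequencies. Real LLMs use neural networks trained on billions of tokens, but the underlying move is the same: patterns in, statistically probable output out.

```python
from collections import Counter, defaultdict
import random

# Toy bigram "language model": learn which word most often follows
# another, then generate text by sampling from those frequencies.
corpus = ("the ship sails today . the ship docks tomorrow . "
          "the cargo sails today .").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1  # count how often nxt appears after prev

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        words, counts = zip(*options.items())
        # Sample the next word by observed frequency: probable, not "true".
        word = random.choices(words, weights=counts)[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the ship sails today . the cargo sails"
```

Nothing in that loop knows what a ship or a customs form is; it only knows what tends to follow what.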

This by definition means we’re coding the biases, blind spots and lies of the past into the future. (Remember Microsoft’s Tay chatbot as just one example? It very quickly developed quite a vicious redneck streak before being hurriedly unplugged.)

Two other recent cases also bear this out, but with more dire consequences. In one instance, during simulations of bot-based medical support, ChatGPT casually told the patient to kill themselves. And earlier this year, “Eliza”, a chat system built on Chai (from a different lineage to ChatGPT), repeatedly encouraged an emotionally dependent user to end their life, and eventually they did.

Whilst touted as highly exceptional edge cases, these unfortunate incidents illuminate the danger and the widespread misconception surrounding the current crop of systems. It’s made clear in the passage quoted above: these systems generate “statistically probable”, “seemingly humanlike” outputs. There’s a margin for error in every single computational process. From the system’s point of view (if there were such a thing), there’s no difference in meaning between one output and another, because there is no meaning (how delightfully postmodern). There’s just a difference in statistical probability and pattern, depending on the prompts and parameters applied.
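
As a minimal sketch of what “depending on the prompts and parameters applied” means in practice, consider the toy sampler below. The candidate continuations and their scores are entirely invented for illustration; the point is that a parameter like temperature merely reshapes the probability distribution. Nothing in the mechanism weighs truth or harm.

```python
import math
import random

# Invented next-reply scores for a hypothetical medical-support prompt.
# The mechanism distinguishes likely from unlikely outputs, never sound
# ones from dangerous ones.
logits = {
    "rest and monitor your symptoms": 2.0,
    "see a doctor": 1.5,
    "ignore it": -1.0,
    "harm yourself": -4.0,
}

def sample(logits, temperature=1.0):
    # Softmax with temperature: higher values flatten the distribution,
    # so low-probability tail outputs surface more often.
    exps = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    total = sum(exps.values())
    tokens = list(exps)
    weights = [exps[tok] / total for tok in tokens]
    return random.choices(tokens, weights=weights)[0]

print(sample(logits, temperature=0.7))  # almost always a "safe" reply
print(sample(logits, temperature=2.0))  # tail outcomes surface far more often
```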

Some outputs are more or less sound, some are fiction. Toss a coin. The system literally cannot tell the two apart. That’s Russian roulette by any other definition, and if we’re fine with that, then OK. But let’s be really upfront about it, and most definitely not couch anything in some kind of humanitarian narrative.

No contradiction here either… safe as houses.

Supply Chain’s new darling

The industry is frothing and giddy over the potential for GPT to revolutionise practically everything. Try as I might, I’m just a tad less enthusiastic.

I pay homage to what marketing PhD Mark Ritson had to say on this just recently (he doesn’t pull any punches): “ChatGPT is a toy. A f##king toy. Put it down.”

Again, I love it when people much smarter than me make my point for me and perfectly at that.

Ritson’s tackling it from a marketing angle, admittedly, but many of the distinctions are valid in broader industry contexts, and especially in our own.

The notion that GPT will automate all manner of shipping documentation, and in particular customs declarations, with ease is far from confirmed as yet.

There are existing systems that take care of much of this already, and there would need to be some very detailed process mapping and a strong use case proving the advantages before diving headlong into a GPT solution.

(Search and writing code might be showing promise, but that’s for another day).

But further to that basic question, there are additional concerns.

Firstly, let’s say a forwarding or logistics business really does have some unique and highly innovative processes established that drive its competitive advantage and lower cost of production. Would it make sense to turn all of that IP and associated documentation over to GPT to synthesise, systematise and then reproduce for the next competitor who comes along?

That’s not a spurious argument. The details of ChatGPT’s terms of use and their interplay with the privacy policy, the statements in the FAQ, plus the absence of a definite precedence clause, make this all seemingly quite vague and untested.

What is clear is that, generally, interactions will as far as possible be used to “improve the model”, and that by definition there is no solution for “similarity of content”. That is: “Output may not be unique across users and the Services may generate the same or similar output for OpenAI or a third party”.

Hi-tech plagiarism indeed. And just on that, secondly, for the ChatGPT fan club: note that you may NOT “represent that output from the Services was human-generated when it is not”. So already, most likely, more than 90% of users are in breach.

Finally, I’m looking forward to seeing some real numbers on true development, migration and running costs (apples to apples, before and after) for high-volume, document- and data-intensive businesses like ours switching on industrial GPT. It’s not going to be cheap. And, like so many new tech offerings of the recent past, once you’re in and dependent, the prices will rise. But it’s for the good of humanity. Remember that.

Apocalypse now

It’s hard to see that humanitarian theme at this point. Even harder to take it seriously or entertain that it was ever more than a passing whim or just outright deception.

We’ve seen this playbook (or a version of it) so many times before. Especially with Tech.

The hysteria about AI in general, and about ChatGPT in particular, is seriously misplaced. We’re not in danger of these systems taking over. We’re in danger of them letting us down, maybe disastrously.

More likely, we’ve already been duped into overestimating them, trusting them and relying on them far too much. The real danger is how much they’ll screw up, and then how far the consequences will run, before we intervene and rein them in.

I think Ritson’s right. GPT is at best a toy, or at worst a potentially hellish nightmare. It doesn’t “learn”, because it can’t in any meaningful sense of the term. It just makes better guesses the more you feed it. The outputs have no meaning, because there is no meaning; it doesn’t recognise meaning, just mathematical probability.

There are no contradictions here, just varying degrees of fiction, mass-produced on an industrial scale: digital pulp fiction.

(Russell Wood is a senior columnist for Loadstar Premium. He reports to Alessandro Pasetti, head of Premium. You can contact Ale at [email protected].)
