A key change on the ChatGPT journey
Source: guiatecnologica.com

Most of us have been using machine learning for years, though perhaps without being aware of it. For example, in everyday private use your mobile suggests travel routes, Siri plays your favourite song, your email filters out spam, you receive movie recommendations, and the like.

Of late there has been much chatter, and quite a bit of fun writing mock-Shakespearean sonnets with ChatGPT! Certainly, my efforts have been well above my usual abilities, all with a little help from ChatGPT. More recently still, you may have noticed comments about how silly or unreliable AI is, with the ‘proof’ that it sometimes comes up with wild ‘facts’ that are utter rubbish. That part is fair; these engines do at times give ‘factual’ answers that are rubbish. However, this misses the point, and that point is key to understanding this new phase in AI. The truth is, what matters is why the rubbish answer was arrived at in the first place.

Let’s go back to the start of this ‘new’ world. It’s the 1980s and the beginnings of the concept of ‘backpropagation’. If you consider what that term means, you’ll have arrived at the key to the first iterations of Machine Learning and its grown-up sibling, Artificial Intelligence: load up massive amounts of (past) data and, from that, the machine ‘learns’.
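For the technically curious, here is a minimal sketch of that idea, my own illustration rather than anything from the article: a single ‘neuron’ with one weight and one bias, fitted to a handful of hypothetical past data points by gradient descent, the mechanic that backpropagation extends to deep networks.

# Minimal, hypothetical illustration: learn y ≈ w*x + b from a few past examples.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]   # (input, target) pairs
w, b, lr = 0.0, 0.0, 0.05                     # weight, bias, learning rate

for epoch in range(200):
    for x, y in data:
        pred = w * x + b        # forward pass: make a prediction
        error = pred - y        # how wrong was it?
        w -= lr * error * x     # backward pass: gradient of 0.5*error**2 w.r.t. w
        b -= lr * error         # ...and w.r.t. b

print(f"learned w={w:.2f}, b={b:.2f}")        # settles near w ≈ 2, b ≈ 0

The takeaway is simply that everything such a model ‘knows’ has been squeezed out of the historical data it was shown.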

That’s the key people are missing with these new ‘rubbish’ answers. It’s a key change. In the main, these aren’t answers derived from relatively massive amounts of data; they’re arrived at by a whole new process, and it’s this change that will enable AI to keep getting smarter than us mere humans. In lieu of using relatively massive amounts of data, these new systems work on ‘few-shot learning’. It’s a system that, much like us, can take x and y and arrive at an answer of z. For ourselves, when it doesn’t work, we call it confabulation. In its AI form it’s being called ‘hallucination’: the rubbish answers. They’re the same thing. It is the same as us not meaning to lie, but taking a bit of knowledge or memory from somewhere and conflating it with something else, all to an incorrect conclusion, a rubbish answer. So, in a way, machines have become more human!
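Again for the technically curious, a hypothetical illustration of what ‘few-shot’ means in practice, my sketch rather than the author’s: instead of retraining on a mountain of labelled data, a handful of worked examples are placed directly in the prompt and the model is left to infer the pattern.

# Hypothetical few-shot prompt; the actual call to a language model API is omitted.
few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: "The battery lasts all day and the screen is gorgeous."
Sentiment: Positive

Review: "It broke after a week and the support team never replied."
Sentiment: Negative

Review: "Setup took five minutes and it just works."
Sentiment:"""

# A model such as GPT-4 completes the pattern from just these few examples,
# typically answering "Positive" -- no retraining, no large labelled dataset.
print(few_shot_prompt)

The contrast with the backpropagation sketch above is the point: here the pattern is picked up at question time from a handful of examples, not baked in through another round of mass training.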

OK, so why is that important, and what might it mean? Consider just these points:

  • The speed of learning has increased even further, as there is no longer the need for such a large quantity of data, among other things. Hence more rapid deployment is possible, and it can be built upon again and again.
  • The ability to analyse at depths and breadths we may not have considered, needing fewer ‘pointers’ while (usually) being able to draw greater insights.
  • How do we tell when a usual computer application gives us a rubbish answer, versus at this level, where it may not be as discernible? It is a trap we all know as humans. Some historical cases include: the world is flat, witches float (and the innocent sink), Ukraine isn’t really a country and hence its people won’t defend it to the death. All errors that we too are capable of making. So now that this risk is heightened, a new level of judgement, awareness and open debate will be rewarded.
  • The ability to have 1000x (or more) Alan Turings working for you. Or you could deploy 1000x Vladimir Putins. That should give you pause. Remember how they’re smarter than us?
  • The regulation debate. As much as some, though not all, sovereign interests will, I expect, introduce regulation, it won’t be the salve of all ills. Think of nuclear proliferation and how well regulation could realistically have controlled it if it were as accessible and as easy to hide as AI. Hence regulation will help, but in my expectation it won’t be sufficient.
  • And ultimately, how do you manage something smarter than yourself? Sit with that for a moment. That’s the discussion we’re really having now. Too often it is driven by fear. You may wish to remember the old adage: those who don’t learn from history are condemned to repeat it. Best we learn, then.
  • So what does history teach us? That difficult times will reward the original thinkers; those who can balance being bold and brave against considered, independent judgement. And with that judgement must come a new level of ethical questioning and discussion to meet this new world. Near the 80th anniversary of the ‘bouncing bomb’ that did much for the Allies during the war, it seems appropriate to note that this is another time for the bold and the brave, with the judgement and ethics to match.


Postscript: For those interested, the current version of ‘ChatGPT’ is GPT-4, released in March 2023. For more information see https://openai.com/research/gpt-4.

Andrew Gruskin

Investment Process | Data Insights | Process Automation | Generative AI

1 year

Paula, you had me at ‘backpropagation’. I see a parallel between hallucination and the proliferation of rubbish facts across social and alternative media. All around, humans are swamped by information we can no longer sift through. We need help, and AI could be the salve.
