Heard about ChatGPT?

Now you're wondering what the fuss is about? Here is a little help…

You might have a friend who recently told you about this great tool: it helps you write an essay on nearly any topic of your choice, produces limericks at will, helps you with your homework, and can even deliver computer code for you as a non-programmer.

Your friend was right. OpenAI (a company founded by Sam Altman, Elon Musk and others in 2015) has been building systems that can assimilate vast amounts of text, process it, and return it to you in a variety of forms. The model they built was called GPT-3 and became known as a great language model, but it came with some drawbacks.

One of the main drawbacks was that it was not good at understanding query prompts and then generating appropriate responses. It was also not good at handling follow-up questions.

The company then used another technique, called Reinforcement Learning, together with a crowd-sourced community of humans answering query prompts, to build an additional model of how people do just that: answer queries. Combining these two models produced a tool called InstructGPT, launched in early 2022.

From there it went fast. InstructGPT showed a much better capability to answer your queries, and it also learned how to deal with follow-up questions on those answers. What was lacking was a good user interface. In November 2022, a new interface was launched under the name ChatGPT, and indeed, it imitated a chat box as you know it. The answers come quickly, look relatively good at first glance for a wide variety of topics, and they even made it type character by character like a human!

And honestly, that was all that was needed to make an already existing technology (Large Language Models, GPT) fit for the masses. Within a month after its launch, “everybody” spoke about this new tool, played with it, blogged and videocast about it. The ChatGPT tool went from unknown to being discussed everywhere (including my daughter's philosophy classes at high school) within a few weeks. Kudos!

And now? Is it as intelligent as it seems to be?

Good question. No. It's not intelligent. The system matches extremely complex patterns in huge amounts of text, using statistical methods to figure out which words tend to follow each other in different contexts and cases. That is all. And by doing so, it can provide surprisingly reasonable answers to the questions you feed it.
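The core idea of predicting the next word from statistics over text can be illustrated with a toy sketch. This is only a minimal bigram counter, a vastly simplified stand-in for the neural networks behind GPT; the tiny corpus and the `next_word` helper are invented here for illustration:

```python
from collections import defaultdict, Counter

# A toy corpus; real models train on hundreds of billions of words.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count, for every word, which words follow it (a "bigram" table).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word(word):
    """Return the most frequently observed follower of `word`."""
    candidates = following[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(next_word("sat"))  # prints: on (both occurrences of "sat" precede "on")
```

Chaining such predictions word by word produces fluent-looking text with no model of truth behind it, which is exactly why the answers can sound confident while being wrong.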

At least at first sight. Many people have tried it and published results that seem amazing when you quick-read them, but turn out to be complete bogus when you dive into them: pregnant women should eat broken porcelain (it contains a lot of calcium, which is what babies need), medical and health issues are interpreted and discussed in strange ways, biographies of well-known people are incorrect, and all of a sudden a famous book has gotten a different ending.

Not surprising: as said, the system uses statistical methods to come to its answers, and such models are always approximations of the truth. And even if the system provides correct answers, you will still have to read them closely, as the system as such does not know what it doesn't know, and thus you will get no warning if results are uncertain. It will continue to deliver answers with confidence even if their quality cannot be guaranteed, producing misleading and error-prone answers. Even worse, if that fake information is re-injected into the system, the next generation of answers will be based on earlier errors and incorrect statements.

And there is another danger. This technology can be used to generate systems of fake-news facts and fake personalities, and to add these to the image generation we already have. Then abuse is around the corner: easy to read, looking authoritative, backed up by fake facts that make it all look real. Who is going to tell that these are not real facts provided to you by real people, when in fact everything is computer generated? In the future we might need a completely different way of identifying the people we interact with: not only to check whether we are communicating with a human or not, but also to check whether the provided identity is correct! Are you really who you say you are? And you want me to transfer my savings to another account? You can imagine the situations you do not want to end up in.

But I like it, so why so negative?

Another good question. Yes, it seems to work great. The answers look very trustworthy. For many queries reasonable answers are given, albeit a bit shallow in many cases. And let's be honest, having such a tool available for knowledge-intensive tasks might be just fantastic! This is probably where it will go first: providing quick insight into the topic you have to write an essay about. Or the lawyer getting some quick answers on his next case and previous rulings around it. Or a scientist getting initial results and ideas to kick off a new, large vaccine program. What about the tourist who just wants to learn a bit about the country he travels to next week? Or the school kid getting help with mathematics or history lessons. There are plenty of positive scenarios thinkable. And we should (and presumably will) investigate those, learn from them and produce even better tools.

But do not forget: with all the fun and interesting possibilities, with all the good and benefits of this new technology, all comes with an obligation. An obligation to make sure we do not (automatically) turn facts into fakes, and that errors, mistakes and purposely faked information stay detectable.

We need mechanisms to ensure that all the knowledge we have collected to improve and build our society and its technological advances stays attributable and useful. We will need it!

(This is a copy of the article published on https://dutchbob.medium.com/heard-about-chatgpt-1d02aa250689 )

Meghana Shenai

Engagement Manager and India Delivery Lead at Capgemini

1y

Thanks for sharing, very informative

Aruna Pattam

LinkedIn Top Voice AI | Head, Generative AI | Thought Leader | Speaker | Master Data Scientist | MBA | Australia's National AI Think Tank Member | Australian

1y

Thanks Robert H.P. Engels for sharing. ChatGPT could be a great tool, full of potential, but also with a lot of responsibility. We need to make sure that this technology is used responsibly and ethically. This technology should be treated with respect, and we should remain aware of the potential pitfalls.

Marijn Markus

AI Lead | Managing Data Scientist | Public Speaker

1y

In a way, AI-assisted writing is just the next logical step. After switching to keyboards and adding auto-correct, we now arrive at the stage where AI actively helps us write text. Of course, writing is easy. It's the re-writing that's hard. I'm sure anyone who has ever written a paper can confirm this. And that's where solutions like ChatGPT are still lacking.

Kjetil Kjernsmo

Digital Dissident

1y

Yes, indeed, I am very concerned about the disinformation potential. We already see that certain groups build networks of bogus claims to make it look like a coherent structure of knowledge. With this, deepfake movies, and so on, it becomes really cheap to build very extensive graphs of totally fake information, and only a very extensive analysis can reveal that. And even if we do (using knowledge graphs, of course!), with disinformation tightly integrated with identity-protective cognition, it will be near impossible to convince those exploited by such disinformation campaigns that they are being exploited. This is a very urgent problem to address. If only I knew how...

Jeroen Peijnenburg

Supply Chain | SAAS | Solution Sales | Account Management Logistics | Transport | Orchestration | Optimization | Visibility | Spanish

1y

Thanks for explaining this so clearly Robert, indeed I was wondering exactly that!
