Hype and Hubris
Hugh Bradlow
Past President at Australian Academy of Technology and Engineering (ATSE)
No one could have missed the hype surrounding ChatGPT over the past six months. It is reportedly the fastest-adopted technology of all time.
The media has become so carried away by so-called Large Language Models (LLMs, of which ChatGPT is one) that various luminaries of Artificial Intelligence have felt obliged to warn the world of 'existential threats' to humanity, equating them to nuclear war and pandemics[i]. At the same time, the CEO of OpenAI, one of the leaders in LLMs, has warned the US Congress of the need to regulate the technology.
I confess that initially I was really enthusiastic about ChatGPT. As I struggled with various diagnostic or software issues in computing projects, it felt like having an expert looking over my shoulder, giving good suggestions on how to solve problems. Instead of poring through links thrown up by search, I could ask a question and receive a direct answer. Over time, however, my enthusiasm has been tempered by more and more misleading answers, to the extent that weeding them out now takes longer than reading the links a search would have yielded.
To give a somewhat nerdy and obscure example, I was struggling with a timeout in a Home Assistant instance running in VirtualBox. I asked ChatGPT-4 how I could change the setting of the "Heartbeat flatline timer" in VirtualBox. It told me to go into the 'Advanced' tab in the system settings. That would have been great, except there is no 'Advanced' tab in the system settings. There is a way of doing what I wanted (I found it using Google eventually), but ChatGPT was not much help.
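For readers chasing the same timeout, community posts suggest this class of setting is reachable through VirtualBox's command line rather than its GUI. The following is a minimal sketch only: VBoxManage setextradata is a real VBoxManage subcommand, but the exact extradata key, the VM name and the nanosecond units below are assumptions to verify against your VirtualBox version, not a confirmed recipe.

```python
# Illustrative sketch: adjusting a low-level VirtualBox device setting from
# Python. "VBoxManage setextradata" is a real command; the specific extradata
# key and the nanosecond units are ASSUMPTIONS drawn from community posts,
# so check them against your VirtualBox version before relying on this.
import subprocess

VM_NAME = "HomeAssistant"  # hypothetical VM name: substitute your own
KEY = "VBoxInternal/Devices/VMMDev/0/Config/HeartbeatTimeout"  # assumed key
TIMEOUT_NS = str(60 * 10**9)  # 60 seconds, assuming the value is nanoseconds

subprocess.run(
    ["VBoxManage", "setextradata", VM_NAME, KEY, TIMEOUT_NS],
    check=True,  # raise an error if VBoxManage rejects the command
)
```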
Another example came from DALL-E 2, ChatGPT's companion app for generating images from a description. I asked it to "create a diagram that represents how ChatGPT works". It came up with the diagram below. I guess I can only say WTF, or maybe I am missing something!
So, my question is: why is this happening? I am completely baffled by the diagram, so I won't even attempt to posit what might be happening there. The more interesting and useful example, the VirtualBox setting above, is probably due to the fact that ChatGPT has been trained on data from the Internet. Like all of today's so-called "Artificial Intelligence", ChatGPT is pattern recognition. The magic lies in the fact that it has been trained on phrases and sentences (billions of them) as opposed to individual words, which appears to give it natural language understanding. If it finds information that appears to have the same meaning as the question you asked, it will interpret it as an answer. There is no fact checking. Take my VirtualBox example: if someone on the Internet had suggested that a feature in the VirtualBox GUI to change the Heartbeat flatline timer would be a good idea, ChatGPT would not necessarily know that this was just a suggestion that had never been implemented. Short answer: garbage in, garbage out.
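To make "pattern recognition with no fact checking" concrete, here is a deliberately tiny sketch of the underlying idea: a model that learns which word tends to follow which, then generates fluent-looking text purely from those statistics. Real LLMs use neural networks over vastly longer contexts, but the point carries over: nothing in the mechanism checks whether the output is true, only whether it is statistically plausible.

```python
# A toy bigram "language model": it learns word-to-word statistics from its
# training text and emits plausible-sounding continuations. Nothing in this
# mechanism checks truth, only frequency.
import random
from collections import defaultdict

training_text = (
    "you can change the timer in the advanced tab "
    "an advanced tab for the timer would be a good idea"
)

# Count which words follow which in the training text.
follows = defaultdict(list)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Produce a statistically plausible continuation, true or not."""
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the"))  # fluent, pattern-matched, and possibly false
```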
While I am in no way suggesting that ChatGPT is not useful, the notion that it presents an "existential threat" is clearly absurd. I can't help feeling that the scientists who are pushing this idea are suffering from hubris. It is as if they are saying "look at us, we have created something of God-like capability, so we must be superhuman".
Which brings me to the question of regulation. I am all in favour of regulation, but to be useful it must both prevent harm and be hard to circumvent. One of the biggest threats from systems like ChatGPT is the creation of disinformation and misinformation. The regulation required is therefore consumer protection, not dissimilar to truth-in-advertising laws. Any issue with machine learning is invariably about the training data: if you train ChatGPT on the cesspool that is social media, it will invariably be a source of nonsense and conspiracies. So, if we really want to regulate these new tools effectively, we need to certify that the data sources on which they have been trained have been validated. That is not an easy task, but nor is it impossible. At the very least, following this path would give our lawmakers and regulators something useful to pursue.
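What might certified training data look like in practice? One building block, sketched below under deliberately simplified assumptions, is provenance filtering: every document carries a source identifier, and only documents from an audited allowlist make it into the training corpus. The Document shape and the allowlist contents here are hypothetical illustrations, not a proposal for a specific standard.

```python
# Hypothetical illustration of provenance filtering for a training corpus:
# only documents whose source appears on an audited allowlist are kept.
# The Document shape and the allowlist entries are invented for this sketch.
from dataclasses import dataclass

@dataclass
class Document:
    source: str  # e.g. the publishing domain
    text: str

VALIDATED_SOURCES = {"gov.example", "university.example"}  # audited allowlist

def certified_corpus(docs: list[Document]) -> list[Document]:
    """Keep only documents whose provenance has been validated."""
    return [d for d in docs if d.source in VALIDATED_SOURCES]

docs = [
    Document("gov.example", "An official statistics release."),
    Document("conspiracy.example", "They are hiding the truth!"),
]
print([d.source for d in certified_corpus(docs)])  # ['gov.example']
```

The hard part, of course, is not the filter but the audit that puts a source on the list in the first place; the sketch only shows where such a certification would plug in.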
[i] https://www.safe.ai/statement-on-ai-risk
Senior Executive, Strategic Advisor and Board Director | Leverages Digital Technology to Drive Customer Value | Builds Cultures Where Everyone Can Bring Their Best
1y · Thanks for sharing, Hugh. I agree with your points on hubris, and garbage in, garbage out.
Global technology strategist and executive | CTO | Product Development | Innovation | Emerging tech | Architecture | Startups | Strategy | Advisory
1y · Legal advice seems to be an area that, in my experience, ChatGPT has a lot of trouble with. Since laws vary significantly across the world, ChatGPT would need to be carefully trained to ensure it knows which jurisdiction some random piece of legal advice on the Internet is referring to, but I'm not convinced this has been done. However, it is very confident when asserting its views about the law. Users need to be very careful here.
CEO at Fluffy Spider Technologies | NED & Board Member | Digital Health Interoperability Advocate | Mentor & Adviser
1y · This lines up with an earlier post by Richard Windsor. He believes the call for regulation is more about corporations protecting their market than safeguarding humanity. I tend to agree.