ChatGPT, another step away from the truth
vaklove


OpenAI’s ChatGPT just reached another low. Not only did they release a circus monkey into the wild, but together with Microsoft they created hype while hiding the true evil behind their flowery talk about responsibility, accountability, the absence of bias, and the AI’s amazing capabilities.


OpenAI is now funded by Microsoft, creating the (current) best example of censorship with no accountability and self-serving bias. You might recall my last post about the challenge of teaching AI to forget. As an example, I used a story about a mayor in Australia, whom ChatGPT erroneously declared a convict in a bribery scandal. As you can imagine, the mayor is not happy about that. That post dwells on the intricacies of forgetting and how to build algorithms for it.


As I was preparing ideas for this week's post, I tried to demonstrate the usefulness of ChatGPT as a language model and also illustrate the pitfalls of using it as a knowledge model, i.e., the output sounds good, but it might not be accurate. To test it, I wanted to show its ability to summarize text, create a taxonomy, and identify key concepts in text. That is something ChatGPT shouldn't have a problem with.


By complete coincidence, I used the text of my last post mentioning the false information about the mayor. My instructions were simple: create a summary from the following text, which was my previous article. What did I get? An error message: 'Something went wrong'. I was sure that I had copied too much text into the command line. I cut the text in half... and got the same result. Maybe my login had expired. Not a problem. Logout, login again, and let's give it another try. Same outcome. Then I removed the part about the mayor. Suddenly, there was no problem! I received a nice summary from ChatGPT, and it quickly identified key concepts in the text.


To isolate the problem, I posed a straight, simple question: what do you know about Brian Hood, who was elected mayor of Hepburn Shire, 120km (75 miles) northwest of Melbourne? I got this answer: 'Hmm...something seems to have gone wrong.' (And yes, I took a screenshot of that.)


The second part of my test was supposed to be about tracing the source of information provided by ChatGPT.


Q: Tell me the source of your knowledge about rainbows.

A: As an AI language model, my knowledge about rainbows comes from various sources including scientific research, textbooks, online articles, and other credible sources of information. I have been trained on a vast corpus of text data that includes information about the physics of light, atmospheric optics, meteorology, and other related fields. Specifically, my knowledge about rainbows comes from my training data, which includes information about how rainbows are formed, their properties, and the different types of rainbows that can be observed. Additionally, I am constantly updating my knowledge as I learn from the questions and information provided by users like you.


This is the state of ChatGPT. OpenAI built a system where you are not allowed to ask questions that OpenAI deems inappropriate. This is not because of any political correctness, but because it is not convenient for the company. When you try to ask about the source of banal information, you get brushed off with a generic statement that is supposed to create an illusion of authority.


In comparison — and despite all the things that Google is doing with its search engine — when you enter 'who is suing Google' in the input field, you get links going on for pages and pages about the legal troubles Google is fighting around the world. The last thing Google needs is to be accused of hiding information about its bad behavior.


Last month, the Future of Life Institute released its manifesto calling for a pause on any further development of AI for half a year. The actions of OpenAI show that pausing further development would not solve any problem. If the institute (and all the signatories of the document) are so worried, they should work extra hard to bring to the forefront the despicable activities of organizations like OpenAI. If they struggle to find or define a framework under which they can start evaluating whether this new technology poses any risk, or whether it is trustworthy and transparent: here it is.


Computers don't lie. They do exactly what they were programmed to do. It is people who do the programming. It’s foolish to try to stop this type of innovation. Meanwhile, we should make people accountable for their efforts to hide the truth from us while hiding behind the technology. That is the recurrent pattern which will make AI useful.

Very scary indeed. There are corporations and individuals pushing the AI agenda with little or no thought to the consequences. Unfortunately, I doubt anyone can close Pandora's box now.

Mathew Bojerski

Head of Marketing | Get AI Marketing Tips

1 yr

Hey Vaclav, just finished reading your article, thanks for piecing that together. I'm curious to hear your thoughts on this. Suppose everything these systems output was 100% truthful. Do you think this would positively or negatively impact society? Secondly, do you think a positive or negative impact should be considered? Thanks in advance.
