When ChatGPT tried airbrushing facts in broad daylight
The whirlwind ChatGPT has whipped up across cyberspace is far from over. While there is criticism and mockery from all corners of the concerned world, others are singing the tool's praises as an uber-fast content generator. And the ones favouring it are arguably far greater in number, and how!
OpenAI, the parent company, recently said it is close to pulling one billion unique visitors to its website every month. People are mining it to write code, generate music patterns, seek answers to relationship problems and whatnot. Even our very own Google has explicitly stated that if AI content “is useful, helpful, original and satisfies aspects of E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness), it might do well in Search,” no matter how the content is produced. A large pool of companies, big and small, are hiring people with knowledge of ChatGPT listed as one of the eligibility criteria.
With the scores of issues we come across in our lives, ChatGPT is turning out to be that companion you can call 24x7. But dive a little deeper, and things might begin to raise one of your eyebrows. Did you just raise one upon reading that sentence? If you did, then you must not stop reading here.
It would be a lie if I said I tried ChatGPT right after its launch. I did notice the internet bombarded with articles and posts about it, but in my case, the interest was piqued only two months ago (yes, a lot slower than Bunty’s saabun), and there I was, asking the tool what I had in mind. The first thing I did was check whether it could give me satisfying responses, because I wanted to see for myself how far the claims of the tool being impressive were true.
The initial questions I threw at it were about relationships and mental health. Here I tried to be falsely specific about my problem, exaggerating things to see how on-point the solutions could be. Once the responses began to appear on screen, little doubt was left in my mind about the humane touch it had been given. And with those apologies, the tool made an impression that it CAN be wrong. Yes, "to err is AI" is the modern addition to the adage we have known for ages. It also came with a built-in firewall that defended it from being treated as human: the ever-present "as an AI language model."
So far, my conversation was nothing short of a self-validation of the tool's capacities in that tiny sphere of things where I could ‘check’ how good it was. At this point, one of the most crucial elements of the internet world struck my mind: privacy. It was also a win-win for me, as I had found an apparently intelligent ‘person’ to talk to and argue with without being judged. I began by asking how private this very conversation was. It said the chat between us was encrypted and that the data generated over the course of the conversation would be deleted and could no longer be accessed.
Now, after reading this, my mind wanted to take a dip in cold water. I was like, "Bro, I just heaped praises on you. Don't make me eat my words." Turning my pro typing mode on, I asked how it was then possible even for me to read the conversation I had with it a week ago. It said it would never have access to any personal information unless that information was shared with the tool. Wait, what?
Moving on, I have no reason to disagree with ChatGPT when it mentioned the ever-present risk of data breaches. But when I dragged WhatsApp into the middle, citing the latter’s end-to-end encryption, under which even the company itself cannot access conversations, the AI brain got lost like crazy. While it feebly pointed out that WhatsApp can technically read chats if law enforcement agencies compel it to, it added that ChatGPT does the same “for the purposes of improving its machine learning algorithms and enhancing the quality of its services.” How did it find the comparison it drew contextually justified as a defence?
Now, on its opening page, ChatGPT does declare it can “occasionally generate incorrect information” and “produce harmful instructions or biased content.” However, the issue I mentioned probably didn’t fall within the purview of either. If you ask me, I would call it a deliberate attempt to hide facts.
The fact that every company puts rigorous effort into improving user experience is understandable. And truth be told, if OpenAI wants to do the same, it will have to access the conversations. But the question that still needs answering is: why did I have to interrogate the tool this much to get a straight answer, when this could easily have been clarified at the beginning itself? Should I call it another human trait, where we at times resort to divulging less initially to keep things ‘clean’ until more queries are thrown at us?
That said, ChatGPT still stands as cool as ever. But at the same time, there is no denying that the responsibility of confirming the credibility of its generated content lies entirely with the users.