If ChatGPT Goes Awry, Could Bard Capitalise On Microsoft/OpenAI Missteps?

It has been just weeks since Microsoft and OpenAI celebrated hitting several milestones with ChatGPT.

Their mini-celebration may have been made even sweeter by the schadenfreude of Google's failed Bard demo.

According to recent developments, though, the sentiment around ChatGPT's captivating debut may be turning.

Beta testers around the world are diving deeper into the chatbot. One key element of testing a chatbot is engaging it in longer conversations. This is where things become creepy.

According to a journalist at the NY Times, the longer the conversation with the chatbot, the more it tends to make mistakes and even exhibit weird behaviours.


Exactly how long a conversation must run before ChatGPT goes haywire is not explicitly stated.

But Microsoft has set parameters: no more than five questions per session for a user, and no more than 50 questions per day.

Another lesson rediscovered here is GIGO.

To be precise, this is not new.

Those who went through foundational computer education decades ago may still remember the concept of garbage in, garbage out, a principle from computer science and mathematics.

The GIGO principle applies just as readily in MS Excel and database applications.

If developers feed "garbage" into their machine learning app, the app would spit out garbage.
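As a toy illustration of the GIGO principle, consider the sketch below: a trivial 1-nearest-neighbour "model" trained once on clean labels and once on corrupted ones. All data, names, and thresholds here are invented for illustration and have nothing to do with ChatGPT's actual training.

```python
# Garbage in, garbage out: the same "model", two training sets.

def nearest_neighbour_predict(train, x):
    """Return the label of the training point whose input is closest to x."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

# Clean training data: values below 5 are "low", 5 and above are "high".
clean = [(1, "low"), (2, "low"), (8, "high"), (9, "high")]

# Garbage training data: identical inputs, but the labels are corrupted.
garbage = [(1, "high"), (2, "high"), (8, "low"), (9, "low")]

print(nearest_neighbour_predict(clean, 1.5))    # → low
print(nearest_neighbour_predict(garbage, 1.5))  # → high (garbage out)
```

Identical logic, identical query; only the quality of the input data changed, and so did the answer.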

In fact, this is not the first time Microsoft has faced an epic failure in its AI endeavours.

In 2016, Microsoft shuttered its AI chatbot project called Tay.

It got so bad that Microsoft had to kill it after it turned racist and started spewing vulgarities and Nazi rants.

(This fiasco could also be the reason Microsoft banked on OpenAI's technology instead.)

Microsoft's earlier AI chatbot endeavour, Tay, was developed internally.

Other peculiar issues beta testers and early users have reported include some egregious technical errors:

"Network error", "Error code 1020" or "Too many requests in 1 hour".

What does a network error actually mean? Could it be limited bandwidth where ChatGPT is hosted, namely Microsoft Azure?

The mainstream explanation of a network error is a timeout. ChatGPT is reportedly programmed with a 60-second window to reply to a question.

If the reply cannot be completed within 60 seconds, the error occurs.
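To make the timeout idea concrete, here is a minimal sketch of a client that gives up if a reply does not arrive within a fixed window. This is an assumption about the behaviour described above, not ChatGPT's actual code; the deadline is shortened and the functions are hypothetical.

```python
# Sketch: a fixed reply window surfacing as a "network error".
import concurrent.futures
import time

REPLY_DEADLINE = 0.5  # seconds; stand-in for the reported 60-second limit

def generate_reply(question, delay):
    """Simulated model call that takes `delay` seconds to answer."""
    time.sleep(delay)
    return f"Answer to: {question}"

def ask(question, delay):
    """Ask a question, but abandon the reply if it misses the deadline."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(generate_reply, question, delay)
        try:
            return future.result(timeout=REPLY_DEADLINE)
        except concurrent.futures.TimeoutError:
            return "network error"  # reply did not complete in time

print(ask("fast question", delay=0.1))  # → Answer to: fast question
print(ask("slow question", delay=1.0))  # → network error
```

The same question succeeds or fails purely on how long the reply takes, which matches the reported behaviour of long or complex prompts erroring out.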

Error code 1020 is also quite mysterious.

According to StealthOptional.com, error code 1020 points to Cloudflare and its WAF (Web Application Firewall).

To be fair, there is no software in the world that is bug-free.

Even the most mature applications can still be buggy.

And probably the most annoying amongst the above errors is "Too many requests in 1 hour".

But this is cloud computing. A key tenet of XaaS is the ability to scale up and down, which is precisely what lets developers and customers host their apps in data centres such as AWS and Azure.

Some experts have also commented that AI chatbot development is terra incognita.

While the development of ChatGPT may be uncharted territory, XaaS is not terra incognita.

One plausible explanation for users facing the "too many requests" error is that the free ChatGPT account provides only limited usage.

Thus, users can choose to upgrade and pay for ChatGPT Plus, which offers unlimited requests.

This makes sense for monetisation.

If this is the case, wouldn't it be better to state upfront that the user's quota has been reached and prompt them to pay for the unlimited version?
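One common way such a per-user quota is enforced is a fixed-window rate limiter. The sketch below is a generic illustration of that technique under the assumption above; the class name, limit, and window size are invented for this example and are not OpenAI's actual values.

```python
# Sketch: a fixed-window rate limiter that could produce a
# "Too many requests in 1 hour" style rejection.
import time

class FixedWindowLimiter:
    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self.window_start = time.monotonic()
        self.count = 0

    def allow(self):
        """Return True if this request fits the current window's quota."""
        now = time.monotonic()
        if now - self.window_start >= self.window_seconds:
            self.window_start = now  # new window: reset the counter
            self.count = 0
        if self.count < self.max_requests:
            self.count += 1
            return True
        return False  # quota exhausted → "Too many requests"

limiter = FixedWindowLimiter(max_requests=3, window_seconds=3600)
results = [limiter.allow() for _ in range(5)]
print(results)  # → [True, True, True, False, False]
```

A paid tier would simply be a larger `max_requests` (or no limiter at all) attached to the upgraded account, which is consistent with the monetisation argument above.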

The battle for AI supremacy will continue to evolve. Perhaps the latest setbacks for ChatGPT may give Bard a chance to redeem itself?

#chatgpt #microsoft #bard #google #machinelearning
