Google Takes a Swing at the Air
The technological battle of the century.
If this were a boxing match, it would be like watching Muhammad Ali vs. Mike Tyson. And Google just swung and missed.
Google introduced its AI chatbot Bard, built on its LaMDA language model, earlier this week with a promotional post.
However, the company made a major error in the very example it used to showcase the chatbot's capabilities: Bard claimed that the James Webb Space Telescope took the first images of an exoplanet, which is false. As a result, Google's stock fell about 8%, erasing roughly $100 billion in market value.
The real issue is not the mistake itself but what it highlights: Google's failure to meet expectations with its AI technology. The tech giant is already facing criticism for being late to the game and producing subpar results, and this slip only underscores those concerns.
It's important to remember that AI technology, including ChatGPT, is still in its early stages, constantly evolving, and prone to errors.
Recent studies have shown that ChatGPT's accuracy varies depending on the topic.
Another study estimated that ChatGPT answers about two-thirds of questions correctly.
While some may see this low success rate as concerning, AI technology is improving at a rapid pace. Just a year ago, chatbots had the cognitive capacity of a five-year-old child; now they can do the work of an 18-year-old intern. With continued advances, AI will likely become something like a guru: a trusted source of information and knowledge that patiently adapts its answers to our ignorance.
Google's error is highly symbolic, and in the long run it will do us good. It's crucial to understand AI's limitations, how it learns, and how to properly cross-examine and contextualize the information it provides. As AI becomes increasingly prevalent in the workplace, individuals should educate themselves on Deep Learning and how to train AI to minimize errors.
I remember a study arguing that one reason the US rating agencies failed to detect the subprime crisis in 2008 was "excessive use of PowerPoint": it simplified and compressed information so much that real control and serious evaluation of what was happening became impossible. Wow…
I don't think it was the main cause, just as the war in Ukraine is not the sole cause of global inflation. But I do believe that in the coming years, when generative AI is massively deployed in companies, thousands of workers will make this same mistake: trusting compressed, plausible-looking output without checking it.
Don't believe everything an AI tells you, just as you don't believe everything you read in the news or on social media.
And for those looking to secure a future in the labor market: study Deep Learning and how to train an AI so that it doesn't make mistakes.
Enjoy the show.
Yours in crypto and AI.