DeepSeek and why this AI shakeup was inevitable!

In the past few days, the world of AI has been shaken, and with it the assumption that hundreds of millions of dollars are the 'entry barrier' to building AI models. DeepSeek, a company from China, has developed an AI model that delivers results comparable to OpenAI's o1, and it has done so at a fraction of the cost, using techniques that do not require the enormous resources OpenAI consumes. What's more, DeepSeek has open-sourced its model, and distilled versions of it can even run locally on your laptop.
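For the curious, here is a minimal sketch of what "running it locally" might look like. It assumes you have Ollama installed, have already pulled one of the distilled DeepSeek-R1 models (the `deepseek-r1:8b` tag below is an assumption; use whichever size your laptop can handle), and that Ollama's OpenAI-compatible endpoint is listening on its default port. The `openai` Python package does the talking.

```python
# Minimal sketch: chat with a locally served DeepSeek-R1 distill through
# Ollama's OpenAI-compatible endpoint. Assumes `ollama pull deepseek-r1:8b`
# has been run and the Ollama server is on its default port (11434).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible API
    api_key="ollama",                      # placeholder; ignored for local use
)

response = client.chat.completions.create(
    model="deepseek-r1:8b",  # assumed local model tag; match whatever you pulled
    messages=[{"role": "user", "content": "Explain Jevons Paradox in two sentences."}],
)
print(response.choices[0].message.content)
```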

The news reminded me of a question that noted investor and tech professional Rajan Anandan asked Sam Altman during his visit to India last year.

Back then, Altman had suggested that cheaper attempts to build foundational AI models were hopeless. If what DeepSeek is claiming is true, Sam Altman will be forced to eat his words, and the crazy cash-guzzling of frontier AI companies will be questioned.

The real question is: will this kind of efficiency gain in training lead to a reduction in the usage of NVIDIA chips, for example? I think not!

Why? Because I think Jevons Paradox will play out.

Jevons Paradox: we constantly strive to develop more efficient technologies, hoping to reduce our environmental impact and enhance overall sustainability. However, a fascinating phenomenon, the Jevons Paradox, suggests that efficiency improvements can sometimes lead to increased consumption rather than the intended conservation.
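To make the paradox concrete with purely illustrative numbers (not DeepSeek's or anyone's actual economics): if the resource cost per AI task drops 10x, but the lower price unlocks 30x more usage, total consumption still triples.

```python
# Illustrative (made-up) numbers showing how Jevons Paradox can play out:
# a large efficiency gain per unit of AI work, swamped by a larger jump in demand.
cost_per_task_before = 100.0   # arbitrary resource units (say, GPU-hours) per task
cost_per_task_after = 10.0     # 10x more efficient

tasks_before = 1_000           # demand at the old price
tasks_after = 30_000           # assumed demand once cheap AI becomes commonplace (30x)

total_before = cost_per_task_before * tasks_before   # 100,000 units
total_after = cost_per_task_after * tasks_after      # 300,000 units

print(f"Total consumption before: {total_before:,.0f}")
print(f"Total consumption after:  {total_after:,.0f}")
print(f"Efficiency improved 10x, yet consumption grew {total_after / total_before:.1f}x")
```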

I really hope that DeepSeek creates a wave of AI-driven innovation across all fields, that we break the API glass ceilings that come with it, and that we make AI usage as commonplace as messaging or email.

Have you used DeepSeek? What are your first impressions? Do let me know in the comments below! I am not planning to install DeepSeek locally; I will try it out in the cloud instead (a rough sketch of how that might look is below).
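For trying it in the cloud, DeepSeek exposes an OpenAI-compatible hosted API. The sketch below assumes you have an API key in the `DEEPSEEK_API_KEY` environment variable and that the reasoning model is published under the name `deepseek-reasoner`; check the current DeepSeek documentation for the exact model name and endpoint.

```python
# Minimal sketch: query DeepSeek's hosted, OpenAI-compatible API.
# Assumes DEEPSEEK_API_KEY is set and that the reasoning model is exposed
# as "deepseek-reasoner" (verify against the official docs).
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepseek.com",
    api_key=os.environ["DEEPSEEK_API_KEY"],
)

response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": "Summarise Jevons Paradox in one paragraph."}],
)
print(response.choices[0].message.content)
```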
