AI: Not as Smart as You Think, Not as Dangerous as You Fear

Hey there, folks! You might have heard about the open letter signed by more than 1,000 technology experts, researchers, and investors asking for a six-month pause on the development of "giant AI systems". Well, let me tell you, as someone who works in technology: this request is pure fantasy. It's impossible to imagine a government or authority enforcing a ban on AI experiments. Technology cannot be stopped; it's like the discovery of fire, electricity, or any other innovation that changed the world forever.


Some people who are worried about machine learning often use the term "artificial intelligence" inappropriately. They believe we are dealing with a technology we cannot control, which is simply absurd. Even people working directly in the field can fall into this trap, like Blake Lemoine and his unhealthy obsession with the supposed self-awareness of the algorithm he was working on.


Let's get one thing straight: algorithms like LaMDA, GPT, and others do not possess self-awareness, even if we like to anthropomorphize them and attribute human qualities to them. They are simply statistical functions developed at enormous scale, with complex models and a huge number of parameters. They are not capable of self-awareness, much less of becoming Terminators that will one day turn against us.
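To make that concrete, here is a deliberately tiny sketch of what "a statistical function predicting the next token" means. The vocabulary and probabilities below are invented purely for illustration; models like GPT do essentially the same thing, except the table is replaced by billions of learned parameters.

```python
import random

# A toy "language model": nothing but a lookup table of next-token
# probabilities. Real LLMs do the same kind of prediction at enormous
# scale, with learned parameters instead of a hand-written table.
# (Illustrative sketch only; all values here are made up.)
next_token_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "robot": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"barked": 0.7, "slept": 0.3},
    "robot": {"computed": 1.0},
}

def generate(prompt: str, max_tokens: int = 5) -> str:
    """Sample one likely continuation, token by token."""
    tokens = prompt.split()
    for _ in range(max_tokens):
        candidates = next_token_probs.get(tokens[-1])
        if not candidates:
            break  # nothing learned for this context; stop generating
        words = list(candidates)
        weights = list(candidates.values())
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("the"))  # e.g. "the cat sat" -- statistics, not self-awareness
```

The output can look fluent, even surprising, but every word is just a sample drawn from probabilities. There is no "someone" inside the function.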


Recently, a company took the training of these algorithms to a new level with large language models and opened its conversational model to any user, exponentially increasing its input. Some people began attributing human qualities to it, even when it got things badly wrong. But that's just our brains playing tricks on us; it has nothing to do with the machine rebelling or harboring a hidden consciousness.


So, does machine learning carry "profound risks for society"? Well, there is a risk that it could replace up to three million jobs, but it could also add 7% to global GDP. The danger to society will have more to do with how that increase is distributed than with anything else. If it generates disparities and widens the gap between rich and poor, we will have a serious problem, but not because of the technology itself: it's the greed of some humans that causes these issues.


Technology has always replaced manual labor with machines, and that's unavoidable. As soon as the barriers to entry fall, adoption becomes mandatory, and whoever ignores the technology being used in their industry is soon out of the game. More regulation and responsibility? Sure, but politicians usually have no idea what they are talking about.


There has been an explosion of funding for large language models since November, creating fierce and unbridled competition and seemingly turning machine learning into a new religion. But let's face it, by now it's too late to call a halt, even a temporary one. It's just not going to happen.


In conclusion, there's no way we can put a "pause" on the development of AI. We should focus on how we can make the most of this technology to benefit society as a whole. It's important to remember that machine learning is just a tool, and it's up to us how we use it. So let's embrace this technology and work towards creating a better world for everyone.
