The dangers of artificial intelligence
IMAGE: Yauhen Korabau / 123RF

Elon Musk, undoubtedly one of the most influential and widely followed people on the planet, has called for the proactive regulation of artificial intelligence because, “by the time we are reactive in AI regulation, it’s too late”.

For such a call to come from somebody dealing with technological challenges as complex as electric vehicles, sustainable energy generation and space exploration is unsettling. Elon Musk is not just anybody. What’s more, he joins theoretical physicist Stephen Hawking and Microsoft founder Bill Gates. But the fact that none of them has specific experience in the research or development of this type of technology makes this a classic case of the argument from authority: being undoubtedly outstanding in other spheres of science or industry does not mean their concerns cannot be discussed or questioned.

I have long been writing about the enormous possibilities of machine learning, which I consider the most promising part of what has been called artificial intelligence, but which is still a loose set of technologies that some believe will lead to machines thinking like people. For the moment, machines are capable of many things: the fact that they are able to learn from a set of data when given clear and immutable rules is prompting hundreds of companies around the world to buy tools that allow them to optimize processes and convert them into savings or efficiency gains. Machines can recognize images, engage in conversations and, of course, beat humans at chess, Jeopardy, Go or poker. In all these cases, however, we are still talking about the same thing: programming a machine to carry out a specific task according to a set of fixed rules, within a context in which, in addition, it is possible to accumulate and analyze a large amount of data. Extrapolating from this to a “complete” intelligence, a robot capable of dealing intelligently with reality, is tempting, but makes no sense. Making the leap from algorithms to Skynet, the computer network of the Terminator films, would require a long series of conceptual leaps that are still very far in the future and, in all likelihood, will never happen.
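As an aside, the kind of narrow, task-specific learning described above can be sketched in a few lines. The following is a toy illustration only (a one-nearest-neighbour classifier on made-up data, not any system mentioned in the article): the program "learns" a single labelling task from examples, and can do nothing beyond that task.

```python
# A minimal sketch of "narrow" machine learning: a one-nearest-neighbour
# classifier learns one specific task (labelling 2-D points) from examples.
# All data below is invented for illustration.
import math

def nearest_neighbour(train, point):
    """Return the label of the training example closest to `point`."""
    best_label, best_dist = None, math.inf
    for (x, y), label in train:
        d = math.dist((x, y), point)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

# Toy training set: points near the origin are "inside", far ones "outside".
train = [((0.1, 0.2), "inside"), ((0.3, 0.1), "inside"),
         ((2.0, 2.1), "outside"), ((1.9, 2.3), "outside")]

print(nearest_neighbour(train, (0.2, 0.0)))   # -> inside
print(nearest_neighbour(train, (2.2, 2.0)))   # -> outside
```

However well it labels points, nothing here generalizes to any other task: exactly the gap between this and a "complete" intelligence that the paragraph above describes.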

Demanding regulation of a technology or set of technologies before it is developed is problematic. Regulation rarely develops properly, and tends to be based on restricting possibilities. It is usually impossible to regulate at a global level: attempts are few and compliance varies at best, meaning that some countries would press ahead anyway and gain a competitive advantage. Regulating (or rather restricting) the use of GMOs in Europe, for example, has allowed other countries to gain tangible advantages in productivity and scientific advancement.

Quite simply, it is dangerous to push for regulatory systems already proven to be inefficient, apply them to a set of technologies with so much potential, and at the same time spread alarm about hyper-intelligent robots taking over the world.

Furthermore, from the growing number of people I know working directly in machine learning, AI or robotics, it is clear that we are not heading toward a Skynet future. Regardless of how many sci-fi movies we binge-watch over the weekend, Skynet is not even close. Let’s hope that these calls for regulation or restriction are ignored by our politicians. In the meantime, let’s keep working.



(In Spanish, here)



Oscar Mateo

Senior SAP Consultant at DXC Technology

7 years ago

Hi. It would be terrible. You just have to look at the ridiculous laws they have made for browser cookies: they don't know how browsers and cookies work. Imagine how much harm they could do on a more complex subject.

Andrés Vegas

Chief Data and Analytics Officer | Data Analytics, Artificial Intelligence

7 years ago

A brief note: a Tesla is not merely an electric car. The advances it incorporates in the field of autonomous driving, for example, are outstanding and rest on the development of very notable AI capabilities, so I don't think it is fair to suggest that Musk is speaking without knowledge when he opines on machine learning.

Augustine Chimwanga

Academic Advisor | Research Consultant | Information Broker

7 years ago

To me, regulating AI is just as good as changing human values. Things such as AI are primarily driven by capitalist competition rather than how helpful the technology would be. If we, as a race, don't start reconciling our value based differences, no amount of regulation will control AI evolution.

Konstantina Sokratous

Ph.D Candidate | Computational Cognitive Modeling | AI

7 years ago

Brilliant!
