AI will change everything.
Fredrik Blix
Consultant, Strategic Cybersecurity & Associate Professor (Universitetslektor)
My name is Fredrik Blix. I am an associate professor of cybersecurity at Stockholm University and a principal at Cybercom, a leading Swedish IT company focused on sustainable cybersecurity. Thank you for reading this; please engage by commenting after reading.
AI can make the world a better place. Or rather, we can make the world a better place with AI. But there is a problem: AI has the same potential that nuclear technology had 80 years ago. It can destroy the world, or it can help make the world an even more beautiful place where we all live in harmony.
For a nation-state, and from a national security perspective, AI represents power: the power to analyze, to decide and to act – on anything. In conflict and war, in the economy, in the environment, in culture, in communications.
This power is also the reason many large corporations already have a strategy for AI. It is why the government of the United Kingdom recently decided to fund 1,000 doctoral candidates in AI. It is why the European Commission has a European strategy for AI.
AI will change everything.
Today, we humans are probably still in control of AI. But it is uncertain whether this will always be the case. Today, and for this discussion, we still have a choice: we can decide whether AI is going to be used for good or bad purposes. Employing AI for conflict de-escalation, as proposed by tech diplomat Aurore Belfrage, would be good – especially if the technology is in the hands of an independent peacemaker using it to find a way forward in a complex negotiation.
I often get the question: is there a real risk that AI will seize control of the world from us humans? To answer that question, let me quote a sentence from the European Commission's Ethics Guidelines for Trustworthy AI, developed by European AI experts: "AI systems should not unjustifiably subordinate, coerce, deceive, manipulate, condition or herd humans. Instead, they should be designed to augment, complement and empower human cognitive, social and cultural skills. The allocation of functions between humans and AI systems should follow human-centric design principles and leave meaningful opportunity for human choice". It seems the AI experts are worried. So am I.
How can we control AI entities when we have taught them how to learn anything without our supervision? When we have connected them to every corner of the planet through the Internet? When we have given them access to unlimited storage and computing power in the form of cloud services? When we have connected hundreds of thousands of control systems, robots and factories to them, systems they can potentially control to create effects in our physical world and to manufacture anything, including copies of themselves?
We need to think long and hard about this problem. AI entities do not have values or ethical principles, nor do they prefer humans to themselves. So how do we design all AI entities so that they obey us? How do we ensure that there is a kill switch we can use when things go wrong?
This is a matter of national – and international – security.
I look forward to the discussion in Almedalen and here in the comments.
Fredrik Blix
Digital transformation
5y We have laws; humans have to abide by them, and thus AI has to abide by them, because there are always people with responsibility behind every AI – just as there are people with responsibility behind every tool that has ever been used, whether for good or bad. The general question you ask is whether laws should be universal or unique to certain geographies. Generally it takes some time... and "bloodshed" before laws become universal. It is likely to be the same with laws on responsibilities regarding AI as it has been with any other law regarding responsibilities.
Information Security Specialist (Informationssäkerhetsspecialist)
5y Maybe we would need an agency like the IAEA to control the use of AI: to issue guidelines and carry out regular inspections of the way companies use their AI. Maybe have someone work the way Hans Blix :) did in Iraq.
Emeritus - Realm of freedom
5y Even the weather, as it did when this term was first used some 30 years ago.
LIFE TRANSFORMATION SKILLS FACILITATOR
5y Make sure you also change.
Principal & Chief architect, Insights & Data Capgemini Sverige
5y Is it a hard problem or an impossible problem? Consider how human values spread across societies and how they determine the variability of ethics across geographies. Implementing value-system boundaries in AI globally has a prerequisite: that all countries agree on AI ethics. Should it then be driven by the UN? An effort limited to the EU may end up being inconsequential.