Is AI the Future of Life? Watch Out
What exactly are we creating, humans? And what will our creations illuminate about us?
The Future of Life Institute is a group that includes, among others, Stephen Hawking, Elon Musk and Max Tegmark. Their shared focus is ethical guidelines for the development of AI. This is a subject I've been engaged with for years (click here to see me interviewed in Robot Wars, in which I had the last word about whether humans are ready for the responsibility of ushering in the next generation in the evolution of intelligence).
The group recently released a report, and while the principles look reasonable on paper, two of them concern me deeply.
10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
Let's stop for a moment and really use our imaginations to consider what kind of world we will create if intelligent systems, far more intelligent than we are even capable of conceptualizing, take our human values as the basis for their decisions. Look at how we act now. I understand that this group has a specific set of aspirational values in mind, but much like Enron having "integrity" as a core value, we have to deal with reality, not wishful thinking, when it comes to the AI we create.
Artificial intelligence, in my opinion, isn't artificial at all. The belief that it is highlights a fundamental flaw in our thinking. Everything we create, but particularly autonomous systems, is an extension of the intelligence and creativity that already exist in our human systems. Take a long, hard look at how we apply that intelligence and creativity, and then ask yourself whether we want autonomous systems coded to imitate us before we really understand ourselves. We are so easy to fool. Above all, we fool ourselves into thinking we understand more than we do.
UPDATE: The Asilomar AI Principles, developed at the Asilomar Conference, have been signed by 1,500 people. You can add your signature if you like (or read interviews with signatories to see why they signed). The Future of Life Institute "catalyzes and supports research and initiatives to safeguard life and develop optimistic visions of the future." Developing optimistic visions of the future is a much easier task than developing pragmatic ones, even pragmatic visions that achieve outstanding feats of human accomplishment.
Futurist Rita J. King is the EVP for Business Development at Science House, a cathedral of the imagination in Manhattan that specializes in helping teams on high-stakes projects focus and achieve their mission.
General Manager - 江苏三稞电子商务有限公司
7 yr Hello, I hope I can get to know you.
Senior Associate | Senior Software Engineer | Ex-Justdial | Technology Enthusiast
7 yr I read the article first, then all the links mentioned in it, but I can't yet figure out whether this article is in favour of AI, against it, or just a single page from an AI rule book containing a bunch of protocols to follow while creating something. If it is really that, then it doesn't carry much weight, and I didn't get a single point that relates to "is AI the future of life". And a small correction to the title: AI can't be the future of life; it might be the future of technology. Life doesn't exist in AI or technology. They are just machines that run only between 0 and 1, invented by human beings. But yes, we have engrossed ourselves so much in technology that we have reached intellectual chaos. And, as you said, "we are easy to fool". So probably you are against AI, I guess. Oh yes, we are foolish enough that we can't imagine our future without it. And if you think it would be bad for us, don't worry! We have already done the worst to ourselves by creating nuclear bombs, missiles, etc. Robots (using AI) shall not do us harm, because they will be either software-oriented or instruction-oriented, controlled by humans. But what if we do not have control over ourselves? The problem is ours.
Account Manager
7 yr Hi all, I'm not sure this has been addressed, but let's consider nature versus nurture and the "ghost in the machine". Truly applying human values (HV), or any humanistic traits, to an environment like that of AI will not be a simple matter of programming logic. It may be a case of nature creating the situation where these values get applied in an AI environment through the AI's own realization that HV are applicable, or, in the case of nurture, the AI reaching the same conclusion through interaction with humans over time. Now, given that humanity's drive for individuality has led to widely differing perspectives on HV, can we argue that AI may conclude that HV have no place in its environment? Would AI not, through its own evolution, develop a value system applicable only to its own kind? Seeing that human values are based on both nature and nurture, we can assume that in their own evolution AIs may draw from humans, but ultimately the development and evolution of that value system will be solely left to them.
--
7 yr Rita J. King, do you have any idea about Post-Science, in which self-creation of humans is also sought to be achieved through AI in order to solve the complex nature of the Universe in an uncertain future?