The Four Laws of Humanity
Image Credit: CNBC/YouTube ( https://www.youtube.com/watch?v=W0_DPi0PmF0 )

(by Branislav Cika, participant in the University of Liverpool's Online MSc Information Systems Management Programme, with kind guidance and edits from Dr. Ashok Banerji, Ezana Assefa, Akefetey Ephraim Mamo, and Ramesh Singh.)

With the recent upsurge in the development of artificial intelligence (AI) and robotics, the decades-old argument over its ethical use has been reignited, splitting leading thinkers on humanity's future into two camps. The 'AI is dangerous' camp, which holds some rather big names from academia and business such as Professor Stephen Hawking and Elon Musk, warns that unethical use could potentially lead to AI taking over the world from humans. The 'full steam ahead' camp, led by Mark Zuckerberg and Larry Page, is pushing the industry's development forward and focusing on its benefits.

Elon Musk called AI “a fundamental existential risk for human civilization” (Sulleyman, 2017). He implores creators to slow down and regulate the technology before it becomes a threat to humanity. Musk has also stated, on several occasions, that he invested in DeepMind only to keep an eye on it and ensure that it doesn’t get out of hand (Dowd, 2017). Professor Hawking goes further, saying that “the development of full artificial intelligence could spell the end of the human race” (Cellan-Jones, 2014).

In stark contrast to Elon Musk, Mark Zuckerberg says that “…in the next five to 10 years, AI is going to deliver so many improvements in the quality of our lives…” and that those “…who are naysayers and try to drum up these doomsday scenarios…” are “…negative and in some ways… pretty irresponsible…” (Clifford, 2017). Meanwhile, Sergey Brin, a co-founder of Google, says that “…a lot of the things that people do (or) have been (doing), over the past century, (will be) replaced by machines…”, with Larry Page adding that "90% of people used to be farmers. So, it's happened before. It's not surprising” (Gibbs, 2014).

But, aren’t we getting ahead of ourselves?

Where are the universally accepted ‘four laws of humanity’? While the United Nations Universal Declaration of Human Rights (1948) clearly states what human rights are, and defines far more than four of them, those very rights are abused daily by governments around the world – some by directly stifling the rights of their own populations (HRW, 2018), others by waging conflicts and wars (UCDP, n.d.) or imposing sanctions (BSCN, 2018; United Nations, n.d.) on people outside their borders. How can we, with a straight face, demand that an AI abide by laws we aren’t ready or willing to accept ourselves?

The human race has been at war with itself since time immemorial (Peacey, 2016), and we have gotten very good at it. In the seven-plus decades since World War II, we have managed to stockpile so many weapons of mass destruction (ICAN, n.d.) that we could easily destroy not just all humans but most likely all multicellular organisms as well. Andrews (2017) claims that “if every single one of the world's nukes went off, then, there will be a near-100 percent reduction in solar radiation reaching Earth’s surface for several years, meaning the planet would be shrouded in perpetual darkness for that time”. No light, no photosynthesis – no photosynthesis, no plants; no plants, no… well, you get the picture. And to what purpose? Certainly not because we like each other.

Before we can demand, with a straight face, that “a robot may not injure humanity, or, through inaction, allow humanity to come to harm” (Asimov, 1985), shouldn’t we demand that a ‘human may not injure humanity, or, through inaction, allow humanity to come to harm’? And if we did, how many people would be guilty? Would all of us who use plastic bags that kill the seas, weed or bug sprays that kill the bees, or weapons that directly kill other humans be guilty? And if we are, should we just lock the entire human race up?

While it is morally dubious to request that robots not injure humans – or humanity, for that matter – especially since we don’t follow those laws ourselves, is the preservation of a species that doesn't abide by them, and destroys both itself and its surroundings, really a moral and ethical thing to do? If we are so destructive to ourselves and to our environment, wouldn't it be better if we simply birthed, and gave way to, an 'intelligence' of sorts that does abide by higher laws of existence?

What happens if AI learns from us?

Keeping in mind the human capacity for destruction and fratricide, and the fact that any intelligence would learn from its creators (Knight, 2017), is it really far-fetched to assume that any AI developed enough to start learning from us would also learn how to kill us? If so, isn’t it quite easy to assert that any intelligence humanity gives birth to, at this stage of its social development, would be as flawed as we are? And if AI becomes as flawed as we are because it learns from us, do we really want it?

What will be the outcomes of a wider AI implementation?

I believe that the proponents of AI, while most likely guided by good intentions, may be forgetting several issues:

1) While AI can undeniably create a competitive advantage for its users, it will also:

  • Create more division between rich and poor;
  • Create more division between the developed and the developing countries;
  • Be misused or used for military purposes;

2) While in the developed world the time saved might be used for more creative work, in the developing world the time saved will simply mean people losing jobs, so there will be:

  • More economic and social unrest;
  • More crime and insecurity; and
  • More conflicts and wars.

3) The above will result in more internally displaced persons and refugees who will, inevitably, set their eyes on the developed world for security and safety, so immigration (both legal and illegal) will surge.

What can we do about it?

The drawbacks of the technology will, unfortunately, not stop humans from developing it, even in cases such as AI where the disadvantages could fundamentally affect the human race. Having said that, there are many things we could do to reduce the impact:

1.    Like Elon Musk, actively engage in ‘keeping an eye’ on anything related to AI. Learn, educate ourselves and those around us, and participate in civil oversight of the technology, its implications, and the reduction of its potentially harmful applications. At the end of the day, our very lives are potentially in danger, so this is a matter of self-preservation;

2.    Pressure governments to guarantee basic services such as income, shelter, and medical care. These would, in the long term, provide much-needed respite from the consequences of AI, which will almost certainly include the loss of comforts, and potentially the loss of livelihoods and lives. The cash needed to provide these services could easily come from reductions in defense budgets around the world, or even from taxing the AI technology itself;

3.    Influence authorities to enshrine basic human needs, such as air, food, and water, in legislation at both the global and national levels. Before we tell AI how to behave towards humans, we must accept and enshrine needs much as we enshrine values. What value does a human right have without basic needs? How can we say that every human has a right to live if we don’t provide the basic needs – air, water, and food – that each one of us depends on?

4.    Devolve national, and evolve global, mechanisms of governance to ensure basic rights and services are respected and maintained worldwide. While this would receive the biggest pushback from increasingly xenophobic nations, the only way to ensure compliance is to reduce national and improve global governance. As long as nation-states exist, it will be hard to sell, for example, to an American or a Russian that a Somali life is equally important, or that we are all human and thus equal.

5.    Move the human species to other planets and moons, not just because we need to become a multi-planetary species to avoid global cataclysms, but because we must provide large masses of the human population with an alternative way of creating value – for both their financial and psychological well-being. As Elon Musk said at TED (2017), going to Mars is not just a matter of prolonging the species but also a matter of inspiring countless young people, offering them an alternative to an ordinary existence by setting an amazing goal such as the colonization of a new world, and allowing them to dream about possibilities beyond an overpopulated, polluted, and ever resource-hungry Earth.

The above are just some of the things we can do to help ease the human population into the inevitable arrival of AI on the global scene. The sooner we start, the better – in fact, how soon we start might mean the difference between our survival and our collective demise.

Reference:

Andrews, R. (2017) What Would Happen If Every Single Nuke In The World Went Off At The Same Time?, Available at: https://www.iflscience.com/physics/what-would-happen-if-every-single-nuke-in-the-world-went-off-at-the-same-time/all/ (Accessed on 7 February 2018).

Asimov, I. (1985) Robots and Empire, Grafton Books, London.

BSCN (2018) Sanctions Risk List Countries, Available at: https://www.bscn.nl/sanctions-consulting/sanctions-list-countries (Accessed on 7 February 2018).

Cellan-Jones, R. (2014) Stephen Hawking warns artificial intelligence could end mankind, Available at: https://www.bbc.com/news/technology-30290540 (Accessed on 6 February 2018).

Clarke, R. (1994) Asimov's laws of robotics: Implications for information technology, Part 2, Computer, 27(1), pp. 57-66.

Clifford, C. (2017) Mark Zuckerberg: Elon Musk's doomsday AI predictions are 'pretty irresponsible', Available at: https://www.cnbc.com/2017/07/24/mark-zuckerberg-elon-musks-doomsday-ai-predictions-are-irresponsible.html (Accessed on 6 February 2018).

Dowd, M. (2017) Elon Musk's Billion-Dollar Crusade to Stop the A.I. Apocalypse, Available at: https://www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-crusade-to-stop-ai-space-x (Accessed on 6 February 2018).

Gibbs, S. (2014) Google's founders on the future of health, transport – and robots, Available at: https://www.theguardian.com/technology/2014/jul/07/google-founders-larry-page-sergey-brin-interview (Accessed on 6 February 2018).

HRW (2018) World Report 2018, Available at: https://www.hrw.org/sites/default/files/world_report_download/201801world_report_web.pdf (Accessed on 6 February 2018).

ICAN (n.d.) Nuclear arsenals, Available at: https://www.icanw.org/the-facts/nuclear-arsenals/ (Accessed on 7 February 2018).

Knight, W. (2017) The Dark Secret at the Heart of AI, Available at: https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/ (Accessed on 7 February 2018).

Peacey, S. (2016) Have humans always gone to war?, Available at: https://theconversation.com/have-humans-always-gone-to-war-57321 (Accessed on 7 February 2018).

Sulleyman, A. (2017) Elon Musk: AI is “a fundamental existential risk for human civilization” and creators must slow down, Available at: https://www.independent.co.uk/life-style/gadgets-and-tech/news/elon-musk-ai-human-civilisation-existential-risk-artificial-intelligence-creator-slow-down-tesla-a7845491.html (Accessed on 6 February 2018).

TED (2017) The future we're building — and boring. Available at: https://www.ted.com/talks/elon_musk_the_future_we_re_building_and_boring/transcript#t-448542 (Accessed on 12 February 2018).

UCDP (n.d.) Number of Conflicts, Available at: https://ucdp.uu.se (Accessed on 8 February 2018).

United Nations (1948) Universal Declaration of Human Rights, Available at: https://www.un.org/en/universal-declaration-human-rights/ (Accessed on 6 February 2018).

United Nations (n.d.) Sanctions, Available at: https://www.un.org/sc/suborg/en/sanctions/information (Accessed on 7 February 2018).

Douglas E.

Dark by Design ZeroTrust Principal Executioner.

6 years ago

As great as this Pulse article is, it does not include the largest AI threat: the threat of AI programming AI, where the outcome could be devoid of all human control. Rogue AI could be like a virus, like Skynet, or something unknown and evil.

Reba M Habib

Experience Design Lead @ UST Global | Empowering UX Designers to Advance their Teams and Career

6 years ago

I hope that one day AI and humanity can embrace these laws together. I know there is a huge moral and ethical debate around the topic, but I think the merging of humans and AIs could be the next big "Industrial Revolution" for mankind. I honestly would love to see the day when society has the capability to transfer human consciousness onto an AI platform. There are many issues that would need to be addressed (as seen in the negative light of Altered Carbon), but I think if it is approached correctly from the beginning, it could be extremely valuable to our own advancement.

Rob Gray

Geospatial Consultant. The last thing you think of and the first thing you need, Geo.

6 years ago

Well said Brani. We have done a wonderful job so far. Can AI do worse?

Tony Jean Charles

Network Engineer at United Nations Multidimensional Integrated Stabilization Mission in the Central African Republic

6 years ago

Well written, well researched. The reality, seen with an eagle's eye...

Pascal Lafond

Head of IT & ICT at Digicel Haiti

6 years ago

Interesting topic and very well written :). You mentioned some key points that I overlooked while digging into this myself. I feel that this technology is both exciting and scary at the same time. While it makes me dream about living a jobless life of leisure where wealth is produced by machines and the world is thriving as a whole, I also can't stop thinking about the possibility of an AI programmed to do something devastating, or to do something beneficial but use dangerous methods to achieve its goal. As you said, the drawbacks won't stop humans from developing it, and some controls will definitely need to be in place.
