The missing piece for Artificial Human Intelligence [1/6]

I have found the missing piece: the requirement for developing artificial general intelligence (AGI[1]) that will be good for humans. This is indeed the last piece of the puzzle needed to make sure our core human identity is integrated and understood by an AGI, the one thing that takes away the fear that AGI will harm us humans. Yes, it is complex to integrate, yet easy to understand. The missing piece is LOVE. You heard it right; it is love. Maybe my conclusion disappoints you. Maybe you expected a much more profound answer, but this is the core statement I want to make. The next 40 pages, which I will publish as a six-part series over the coming weeks, will walk you through the analysis and thoughts that led me to this conclusion. If you want to stop reading now, that’s okay. Hopefully I have at least planted the idea of thinking about why AGI must understand love. If you are interested and stay, I’m more than happy to lead you through the analysis.


About two years ago, in the summer of 2018, I started asking myself whether we needed regulations for the development of AI. It was a time when heroes of mine like Elon Musk and Stephen Hawking were warning about the potential dangers of AI, saying that it could be the last invention we humans ever make[2]&[3]. But AI fascinated me. Think of the amazing opportunity to build a future where robots take care of all the “boring” repetitive tasks and where androids like Commander Data of Star Trek come to life. What makes me really curious is to imagine going on a trek to the far reaches of the galaxy and learning more about Mother Earth and good living. But the utopia has a flip side called dystopia. AI could increase inequality in society if only a few benefit from it; or it could enable long-lasting dictatorships built on an instrument of control never seen before: AI.

I was betting that humans would welcome a utopian environment, as everybody wants a great future. But as always, a great future depends on perspective and context. Therefore, I’ve been hooked ever since on learning more about the requirements for developing a safe type of AI (in my interpretation, ‘safe’ means that AI does not take away human rights and freedom and that all humans are better off with AI than without AI).

Even though experts like Juergen Schmidhuber say that AI regulation is a good idea but will never be possible due to ever-widening scientific boundaries and growing military power[4], I still think it is more than a good idea: it is fundamentally important and required for a great future.

So, I researched for six months, read dozens of papers and spoke with experts about it. My objective was to find an answer to the basic question: is it possible to regulate AI for good?

Then I started working for a venture capital fund, focusing on investing in promising technologies that would make the world a better place to live in. And yes, make money out of it, which I think is important in our current capitalistic system (I’m open to discussing other systems if you want to challenge me on this). Anyway, the research into AI regulation got stuck for a while. In February, when the discussion about AI regulation peaked and the EU was close to releasing a catalogue of AI requirements, I wanted to share my research with the world, hoping that it might contain some good thoughts that would contribute to the discussion. Three days before I was about to publish the first article of the “Requirements for AI regulations” series, corona broke out. It was not that I wasn’t prepared for it, but I never imagined that it would turn into a pandemic, causing almost everything to come to a halt and be dominated by the virus for so long.

Don’t get me wrong, it is absolutely important to focus fully on this pandemic to make sure that this potentially dangerous virus does not engulf humankind.

But the ongoing debate about AI regulations has nearly stopped for the majority of the public.

In the pre-corona days, the tone around upcoming AI regulations was rather about protecting human data and about how to regulate not only the big Silicon Valley and Chinese tech companies but AI in general. Today, discussions in Europe about tracking people via smartphones have become more fashionable than ever before. Apple and Google have turned down France and Germany’s demands that they use smartphone technology to trace coronavirus infections. It is now Apple’s engineers who put privacy first, saying there will only be a decentralized solution in order to avoid a surveillance state, while Germany asks Apple to support the centralized app being developed by the research group Fraunhofer HHI for the Robert Koch Institute[5].

Of course, we may never have been better prepared for such a crisis, thanks to technology. Without it, most research, education and work would have come to a standstill, and remote working would have remained a figment of the imagination. Thanks to AI-driven research applications, our R&D is much faster than ever before. Great news, I thought; this is no time to worry about AI regulations. It even seems like the European Union is ready to give the tech companies a second chance in terms of data regulation and security requirements (as long as they don’t exploit the situation and put America first and Europe second in tech supply). So again, I thought the time was not right to publish anything about AI regulation, as tech and AI companies have become the backbone of communication through the corona crisis.


Then I came across Rutger Bregman’s book “Humankind: A Hopeful History”. Not only was it one of the best books I’d ever read, but it also gave me a big aha moment on AI regulations.

The core question, which Bregman poses in his unique way, is whether humans are, deep down, good or evil.

A large number of experiments in social psychology try to show the evil side of humans. Think about the Milgram Experiment, in which people followed orders to administer what they believed were dangerous electric shocks to another person; or the Stanford Prison Experiment, which staged the dynamics between prisoners and prison officers; or the Broken Window Theory, which states that visible signs of crime and disorder lead to further anti-social behaviour.

Do you know what all of these experiments and stories have in common? They are all wrong.

There is much criticism of the Milgram Experiment showing that Milgram staged a great play but certainly not a scientific experiment that proves the evil in man[6]; the same is true for the Stanford Prison Experiment[7], which was more good acting than anything else. As for the Broken Window Theory, many researchers find little evidence for it in real life and say that neighbourhood disorder does not in itself cause crime[8]. The list of such experiments that try to prove human evil without offering much evidence is long. For me, it is very alarming that these theories have shaped so many people’s opinions about how humans are. The book also shows how bad the influence of the news is on us. Almost everything you hear and read about the world in the news is filled with terrorist attacks, epidemics, or refugees who have killed a local inhabitant[9]. It is not very surprising to me that Microsoft’s AI chatbot Tay, taking its cue from existing Twitter messages, turned racist within an hour of its launch and started tweeting things like “Hitler was right I hate the Jews”[10]. Twitter is a social network of news and comments on the news.

But the truth is: people on planet Earth are essentially good in almost every situation. Take the corona crisis as an example. How many situations have you witnessed yourself where people fought in supermarkets? Of course, the situation is tense, but in most cases we are taking care of each other. Yes, people get edgy in edge scenarios; someone panics and starts screaming or even hitting. But it is also true that such situations are rare. Most of the time, society takes care of these aberrations.

What about edge cases like war? War is always an exceptional situation. In the light of this, the question is: are people fundamentally good even if you send them out as soldiers to wage a war that was decided politically, and put them in a situation where they have to kill people?

There is a lot of evidence that soldiers who confront their enemies in war deliberately shoot too high because they simply cannot and do not want to kill. Wars are won at a distance, not by confronting people and shooting them; the majority of people are killed by bombs and drones[11].

There are even stories from wars that would seem too corny if there weren’t so much evidence that they actually took place.

I want to share the story of the “Christmas truce” with you (for the sake of simplicity I reference the Wikipedia page, where you will find further sources; there are plenty of credible sources if you google “Christmas truce”[12]):

The war, which later became the First World War and is referred to as the “primordial catastrophe of the 20th century”, claimed an unimaginable number of human lives. By December 1914, one million soldiers had already died in less than five months, and almost nine million more were to follow by 1918. By the end of the war, almost 40 per cent of all German men born between 1892 and 1895 were killed. One of the reasons for this great number of deaths was the industrialization of killing with machine guns, hand grenades, flame throwers and, last but not least, poison gas.

By the winter of 1914, nothing was left of the initial war enthusiasm. The soldiers were stuck in the mud after weeks of rain. Positional warfare had set in: a battle in which neither side can recover lost ground. All attempts to beat back the Germans failed, as did their attacks westwards against the French, Belgians and British. So almost nothing moved on the Western Front in December a century ago; only the killing remained part of the soldiers’ daily routine. On both sides, they had dug in to be protected at least for a few hours. In between lay no man’s land, where countless dead or injured comrades lay. Rescuing the wounded and burying the fallen was impossible. Anyone who merely stuck his head out of the trench had to expect to be hit by a bullet.

The German troops’ certainty of victory had vanished. The imperial slogan that the war would be over by Christmas and the men back with their families had turned out to be a mere dream, without any hope of a quick end to the war. The longing for at least a bit of peace grew stronger on both sides.

Unimaginable, but a kind of miracle happened: peace returned in many places along the Western Front on Christmas Day. On both sides there had been a mood “that it should finally be over”, as Captain Reginald Thomas of the British Royal Artillery summed up the general feeling.

“Suddenly lights flashed up on the German trench walls,” Grenadier Graham Williams of the Fifth London Rifle Regiment described his experiences on Christmas Eve 1914. “There were candles on decorated Christmas trees, their light shining into the frosty clear air. The other sentries saw this, of course, and sounded the alarm. When our comrades came out of their shelters, drowsy with sleep, our opponents struck up ‘Silent Night, Holy Night’. We sang ‘The First Noel’. And finally, the Germans and English sang ‘O you merry’ together in their own languages. On other parts of the front, there was even fraternization with the enemy, which was strictly forbidden on both sides.”

Near Neuve Chapelle and Fleurbaix, for example, some Germans began quite unabashedly to make contact with the English. They shouted, as the British immediately reported home, “in really good English” (which was probably not meant entirely seriously): “You don’t shoot, we don’t shoot!” Soon the first men climbed out of their trenches into no man’s land. They were mostly simple soldiers. Rarely was a shot fired; instead, they talked, joked and laughed. And it was agreed that there would be no fighting the next day either, because they wanted to bury their dead comrades.



Figure 1: Enemy soldiers playing soccer in No-Man's Land during the Christmas Truce in 1914. (Getty image)

The fraternization continued over the following days. Private Josef Wenzl of the Royal Bavarian 16th Reserve Infantry Regiment wrote to his parents in Schwandorf on 28 December 1914: “No sooner had it begun to dawn than the English appeared and waved to us, which our people reciprocated. Gradually they went all the way out of the trenches, our people lit a Christmas tree they had brought with them, put it on the rampart and rang bells. … It was a very moving experience: between the trenches, the most hated and fierce enemies stood around the Christmas tree and sang Christmas carols. I will never forget this sight for the rest of my life. One soon sees that man lives on, even if he knows nothing more in this time than killing and murdering. … Christmas 1914 will be one I shall never forget.” Josef Wenzl was killed on 6 May 1917 in the battle of the Aisne, not far from Reims.

Not everyone shared the enthusiasm. Another corporal of the regiment, Adolf Hitler, was angry with his comrades and disapproved of fraternizing with the enemy: after all, one was at war with the other side, yet his comrades were giving the enemy presents. The soldiers had received “gifts of love” from home: hand-knitted goods, brandy, biscuits, but also lice powder and, of course, tobacco.

Fraternization with the enemy continued to occur in the following war years—despite all the prohibitions and punishments. Numerous officers were demoted for this offence. But never again did so many enemy soldiers spontaneously fraternize as on the occasion of Christmas 1914, especially along the approximately 50-kilometre-long line around Ypres, between Diksmuide and Neuve Chapelle. Only a few weeks later, thousands of soldiers were killed by the first use of poison gas in a war at the very place where young men had shaken hands across borders and played football together.[13]


What does this story teach us? The longing for peace is a fundamental human value, even in the most brutal situations. People are ready to shake hands if they are not being driven apart by the ideologies of a state. What would an AI do without any information or data about these core values, if it were fed only with “bad” data and political ideologies?

I don’t know whether you have watched “Star Trek: Picard”. Without spoiling it, the decisive factor that stopped an android from making a catastrophic decision and killing a great number of humans was that it understood the value of love.

Should we train AI based only on hateful tweets, or rather teach it more about human love, because that is what we are?

It is time to unbundle the requirements for AI regulations from “Love is the answer” and come back to love in the last article when I summarize the analysis.


To do so, let’s jump straight in and continue the discussions that dominated the news pre-corona and that were the tipping point for me to re-evaluate the general perspective on AI regulations. The Iranian military shot down a passenger jet, suspecting it to be a “terrorist fighter plane”. This happened at the peak of an ongoing dispute between Iran and the US. The result: 176 people were killed. The reason: human error in the kill chain (at least that was the last I heard).[14] This is an edge case, of course, and doesn’t happen every day, but it led me to think: wouldn’t it be smart to use artificial intelligence to reduce human errors? After all, there is already scientific evidence that AI can reduce human error in complex systems.[15]

Although I’m very enthusiastic about the capabilities of AI, I still feel a cold shiver running down my spine when I think about an AI machine telling me whom to kill. I do not want to consider another Terminator 2 scenario, nor do I want to start a political discussion on the topic. Nevertheless, we are making great progress on AI development and it is time to discuss and think about the implementation of AI regulations.

Let’s recap to get a full view of why the European Union (EU) was, in its pre-corona days, trying hard to put AI regulations in place and, as part of those efforts, committed €20 billion to support AI development. I will summarize the “Why We Need AI Regulations” discussion in this first episode and then dig deeper into the current state of AI in the next episode, next week.


Recap of the “Why We Need AI Regulations” discussion and the current state of AI

Let us have a look at the edge case again - AI for autonomous weapons.

Ask yourself for a second: “Could I decide whether a combat drone should kill a person or not?” Even Donald Trump said “[it is] a tough business” after giving the order to kill Qasem Soleimani, one of the highest-ranking Iranian generals. But no AI-controlled drone was used in the Soleimani case, even though the military spends a lot of money on developing unmanned drones that can decide whom to kill and act accordingly.[16] Maybe in the near future there will be a drone able to kill autonomously. If a military decision is taken based on recommendations given by an AI, regulations will be needed to understand the mechanism behind such decision-making and to check for potential violations of human rights; this goes beyond our current trust crisis.


But could we ever understand the complex mechanisms behind AI decision-making? Today, such decision-making follows established rules and procedures. Let us look at the Soleimani case once again.

When the decision was made to launch the attack, Trump was at his golf resort in Florida, controlling the operation from his phone.[17] The executor of the airstrike was the most feared combat drone in the world, the MQ-9 Reaper. The fact that Donald Trump was at his golf resort when the Soleimani operation was executed suggests that it was not a long-planned event; otherwise, he would have been in the control room of the White House. Probably the military had found a short “window of opportunity” to carry out the attack. At some point, someone must have reported that they had Soleimani in their sights and were waiting for the President’s order. I think this was a critical point, because at that moment the US Air Force had to be certain about executing the operation as well as about the possible collateral damage it could cause.

So, how did they arrive at the decision to shoot or not to shoot? A well-known approach in the US Air Force is the observation–orientation–decision–action (OODA) model. Created by US Air Force Colonel John Boyd, this model is used in cybersecurity and cyberwarfare. The main idea of the OODA loop is that whoever can cycle through observation and reaction faster than the opponent gains the advantage as events unfold. Today, the model is also used in commercial operations and learning processes (and probably as a decision framework for responding to the Covid-19 pandemic).[18]


Figure 2: Diagram of the OODA loop (Wikipedia)
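To make the loop more concrete, here is a minimal, purely illustrative sketch of an OODA cycle in Python. The function names, the fake “sensor” reading, and the threshold are my own assumptions for demonstration and are not part of Boyd’s model or of any real military or commercial system; the only point is the structure of the cycle: the faster and more accurately each pass through observe, orient, decide, and act completes, the faster the actor can respond to unfolding events.

```python
# Minimal, illustrative OODA loop. All data and thresholds are made up.
import random
import time


def observe():
    """Gather raw data from the environment (here: a fake sensor reading)."""
    return {"signal_strength": random.random()}


def orient(observation, context):
    """Interpret the observation in the light of prior context and experience."""
    threshold = context.get("threshold", 0.5)
    situation = "anomaly" if observation["signal_strength"] > threshold else "normal"
    return {"situation": situation}


def decide(orientation):
    """Choose an action based on the current interpretation of the situation."""
    return "investigate" if orientation["situation"] == "anomaly" else "stand by"


def act(decision):
    """Carry out the chosen action and report what was done."""
    print(f"Acting: {decision}")


def ooda_loop(cycles=5, context=None):
    """Run the observe-orient-decide-act cycle repeatedly."""
    context = context or {"threshold": 0.5}
    for _ in range(cycles):
        observation = observe()
        orientation = orient(observation, context)
        decision = decide(orientation)
        act(decision)
        time.sleep(0.1)  # stand-in for the real-world latency of each cycle


if __name__ == "__main__":
    ooda_loop()
```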

But what if something goes wrong in the OODA process? What if someone or something makes an incomplete or wrong observation? What happens if someone or something simply provides a bad orientation? Any of these could lead to catastrophic decisions and actions.

In reality, there are many examples of people making disastrous decisions under stress, especially when information is incomplete, and there is a natural human tendency to defer to authority in decision-making.[19][20] I do not know whether Iran was using the OODA model when it shot at the passenger airplane, but if so, it was a very bad observation. Arguments in favour of AI claim that it could make faster and better decisions based on billions of data points.

But whom do we blame if something goes wrong? When Iran apologized for shooting down the passenger airplane, it called the incident a human error. Of course, this is not satisfactory, especially for the affected families and friends, but what if an AI had recommended that the people involved shoot, or, even worse, had fired autonomously?

Feldman, Dant, and Massey published a paper titled “Integrating AI into Weapon Systems,” in which they examined the implications of AI in the kill chain and how systems should handle the complexities of a high-dimensional battlespace. They advocated regulating and prohibiting such systems in the same way as chemical and biological weapons, especially knowing that “simplistic AI implementations could be manipulated by an adversarial AI that identifies and exploits their weaknesses.”[21]

Today, AI is omnipresent, not only in military research but also in medicine, where it diagnoses diseases or provides support in operating rooms. In the financial sector, AI helps calculate clients’ credit scores. Social media companies use AI to personalize content[22], and voice assistants such as Alexa or Siri, powered by natural language processing, connect homes to the internet to control lighting or answer questions about the weather[23]. AI is thus penetrating more and more areas of everyday life. This is one of the reasons the EU is proposing AI rules for “high-risk applications,” such as in healthcare and transport, where AI systems should be transparent, traceable, and subject to human control.[24] So far, however, these are all just plans and not legislative proposals!


Although concerns about technological development are growing, why has comparatively little effort been made so far to push ahead with practical new legislation? Barack Obama, former president of the United States, once explained in an interview that AI was still at an early stage and that strict regulations were neither necessary nor desirable. He also pointed out that more investment in research and development was needed to support the transfer between basic and applied research in the field of AI.[25] The lack of AI regulations could also be attributed to its economic impact. As Vladimir Putin said in a speech to students about science: “[T]he one who becomes the leader in this [AI] sphere will be the ruler of the world.”[26] The economic potential of AI could be one of the biggest obstacles to introducing regulatory requirements for the industry. The result is a trade-off between protecting consumers, or humanity, on the one side and innovation and development in the field of AI on the other.

More regulation, and thus less innovation, could lead to an economic disadvantage, as potential competing economies such as China, India, or Russia have few restrictions on the development of AI and may thus gain a competitive advantage over countries with stricter regulations.[27] But to what extent should we allow innovation that is potentially dangerous?

For the above reasons, the public debate pre-corona had shifted from “Should we regulate AI?” to “Yes, we should regulate,” but without a clear idea of how. The EU had tightened its line by considering new legislation that would ban facial recognition in public spaces.[28]

"The regulatory framework for artificial intelligence has to be consistent with the overall objectives of the European approach to artificial intelligence," the draft states. It goes on to say that the aim is "to promote Europe's innovation capacity in this new and promising field, while simultaneously ensuring that this technology is developed and used in a way that respects European values and principles."[29]

But AI regulations are not only a response to the effects of autonomous driving or to factory workers being replaced by robots; they are also about how AI should be developed without risking human rights. Questions such as how to make an AI responsible for its own actions, how to regulate non-human behaviour, and how to fight the data-based monopolies of large companies are still unresolved, while the question of how to implement an ethical framework in an AI system remains a great challenge.[30]

To get a sense of how urgent regulations are, we must understand how mature AI systems are today and how quickly we need to act.

The following example refers to a study by Cognilytica. The Cognilytica team provides a benchmark for voice assistants’ dialogue-based interfaces, such as Alexa, Google Home, Siri, and Cortana, and the cloud-based backend intelligence behind them. The benchmark aims to evaluate the underlying intelligence of voice-assistant platforms. In the experiment, they asked the assistants 10 questions from 10 different categories. The responses were then graded into four categories:

- Category 0: “Did not understand or provide a link to a search in relation to the question asked and therefore humans had to do all the work.”

- Category 1: “Provided an irrelevant or incorrect answer.”

- Category 2: “Provided a relevant response, but with a long list of responses or references to an online site, thereby requiring humans to understand the proper answer. It was not a default search but rather a ‘guess’ conversational response, prompting humans to do some of the work.”

- Category 3: “Provided a correct answer conversationally (not a default search that would have required humans to do some work to determine the correct answer).”



Here are the results based on the 2019 test:


Figure 3: Voice-assistant benchmark results by category (Cognilytica 2019)

The results show that most of today’s voice assistants could answer less than 25% of the questions fully correctly, while only about 35% of the questions received an adequate response (one could also put it positively: they already answer 35% adequately!).[31] In Stanley Kubrick’s science fiction film “2001: A Space Odyssey,” a computer called HAL illustrated the aims of AI: it recognized spoken language, engaged in dialogue, played chess, and planned tasks.[32] Compared to HAL, current AI-controlled voice assistants look very immature and are far from becoming AGIs or super AIs (“A superintelligence is a type of AI that surpasses human knowledge and can, therefore, develop superhuman abilities”).[33]
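For readers who like to see the scoring spelled out, here is a small sketch of how such category counts can be turned into the percentages discussed above. The counts in the example are hypothetical placeholders chosen only to match the rough shares mentioned in the text; they are not Cognilytica’s actual 2019 results (those are in Figure 3 and the report[31]).

```python
# Turn per-category response counts into the percentages discussed above.
# The example counts are hypothetical, not the actual benchmark results.
from typing import Dict


def summarize_results(category_counts: Dict[int, int]) -> Dict[str, float]:
    """Share of fully correct (Category 3) and adequate (Category 2 or 3) answers."""
    total = sum(category_counts.values())
    correct = category_counts.get(3, 0)
    adequate = correct + category_counts.get(2, 0)
    return {
        "correct_pct": 100 * correct / total,
        "adequate_pct": 100 * adequate / total,
    }


example_counts = {0: 30, 1: 35, 2: 12, 3: 23}  # hypothetical 100-question run
print(summarize_results(example_counts))
# -> {'correct_pct': 23.0, 'adequate_pct': 35.0}
```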

So, why do we care so much about AI regulations if the current state of AI is a far cry from being as intelligent as humans? In my view, the development of AI capabilities has outpaced our understanding of AI’s impact on society. Besides processing power, the increasing availability of vast amounts of data has fueled the rise of AI. More than 2.5 quintillion bytes of data are generated every day (roughly the storage capacity of 36 million iPads), created by more than four billion internet users, more than half of the world’s population.[34]
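As a quick back-of-envelope check of the iPad comparison, assuming roughly 64 GB of storage per device (my assumption; the source does not state the capacity):

```python
# Sanity check of the "36 million iPads" comparison above.
daily_bytes = 2.5e18               # ~2.5 quintillion bytes generated per day
ipad_capacity_bytes = 64e9         # assumed 64 GB per iPad
print(f"{daily_bytes / ipad_capacity_bytes / 1e6:.0f} million iPads")
# -> ~39 million, in the same ballpark as the figure quoted above
```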

This data lets AI discover (historical) patterns and helps it make predictions or recommendations. Even though there are initiatives like OpenAI that aim to pave the way to safer AI and lower the coding effort by creating industry standards and increasing the number of available AI tools[35], it is, of course, difficult for most people to understand the impact of AI.

And there has long been a big discussion about whether we should even allow AI to enter our lives:

In 2000, Bill Joy criticized new developments in the fields of genetics, nanotechnology, and robotics based on AI. He argued that AI tools could lead to knowledge-enabled mass destruction and saw an urgent need for action to ensure that humans do not lose control over AI.[36] Kurzweil responded to Joy’s concerns in his book “The Singularity Is Near,” published in 2005, in which he presented some ways to avoid the risks. He also said that the only way to avoid the greater risks associated with AI is to implement human values in the system; his prediction was that some aspects of human intelligence would be emulated by AI by 2010.[37] Bostrom and Yudkowsky referred to Kurzweil and proposed that AI should be given ethics in order to become “friendly”.[38] Tegmark has argued from a physicist’s perspective, asking: What is meaning? What is life? In the world of physics, humans are not the optimal solution to any well-defined physics problem, which suggests that a super-intelligent AI with a rigorously defined goal could improve its chances of attaining that goal by eliminating us humans. Hence, from a physics point of view, it is important to answer philosophical questions such as “How should we strive to shape the future of our universe?” before developing any type of superintelligence that could come into conflict with human beings.[39]

A more technical approach was suggested by Eric Drexler, who has defined methods to reduce the risks associated with AI research. He has proposed using AI that is more intelligent than humans to resolve potential AI security problems.[40] Soares addresses the problem that an autonomous AI system often falls far short of the programmer’s expectations because it never knows the programmer’s exact intentions. To resolve this, he has proposed an inductive learning model that can learn from sparse data and relate labels in the training data to a particular model of reality.[41]

In his book “Superintelligence,” Bostrom also describes scenarios of an upcoming revolution in which AI would be far superior to human intelligence. Bostrom sees a rapid transition to an intelligence explosion as more likely than a moderate transition, and much more likely than a slow one. Concepts like whole-brain emulation, in which biological brains are scanned (including long-term memory and the “self”) and copied to a computer, are described as risky: the resulting AI could run such copied brain programs many times faster than a biological brain. According to Bostrom, although this would enormously increase work efficiency, such a rapid take-off carries the danger that humanity would lose control over the technology. Once the technology no longer requires human help, this could mean nothing less than the end of mankind.[42] Radical future visions like these contribute to the intense debate surrounding AI.


However, while such consequences belong to a dystopian future, narrow AI (NAI) already has a large positive impact on society, politics, industry, and the environment.

One example is Google, which uses DeepMind’s reinforcement learning algorithms to reduce the energy used to cool its data centres by up to 40%.[43] There is also the Ocean Data Intelligence Project, an open-source collaboration between leading tech companies, governments, and research institutes; together they collect and collate data to detect illegal fishing, disease outbreaks, or coral bleaching and use AI to predict and respond to changes. In smart agriculture, AI is used (combined with robotic labour) to optimize production and enable sustainable trade. In urban planning, AI could help minimize air pollution through global monitoring and decision support.[44] Cancer detection could potentially be simplified through AI[45], and the human biases described by behavioural economics could be circumvented (but also exploited)[46].


Summing up: AI is eating its way into the whole world, and regulating it will keep us busy for a long time. Implementing human values such as love is necessary, but difficult.

I hope you enjoyed the first episode of this series. In the next episode, I will go deeper into the current state of AI regulations and present a framework that illustrates the influence and dependencies of different factors of AI. The framework will serve as the basis for searching for interdependencies and solutions. I will also extend the analysis with solutions distilled from discussions with incredible people working in various fields of AI. If you want to follow this series on AI regulations, you can follow me on LinkedIn. Let’s spread love.

 



[1] A simple distinction is the division into Artificial General Intelligence (AGI), or strong AI, and Narrow Artificial Intelligence (NAI), or weak AI. NAI performs a specific task very well (for example, detecting plant diseases) but performs other tasks poorly (for example, holding a glass of water). AGIs are machines with the ability to apply intelligence to various problems, not just a specific one. They are also capable of experiencing consciousness and thus could potentially do the same work as humans (Kurzweil, R. (2018, September 15). Retrieved from https://www.forbes.com/home/free_forbes/2005/0815/030.html)

[2] https://www.cam.ac.uk/research/news/the-best-or-worst-thing-to-happen-to-humanity-stephen-hawking-launches-centre-for-the-future-of

[3] https://twitter.com/elonmusk/status/896166762361704450

[4] https://www.arte.tv/de/videos/081590-000-A/ihuman/

[5]https://www.reuters.com/article/us-health-coronavirus-europe-tech/france-germany-in-standoff-with-silicon-valley-on-contact-tracing-idUSKCN2262LM

[6]https://www.psypost.org/2019/11/unpublished-data-from-stanley-milgrams-experiments-casts-doubts-on-his-claims-about-obedience-54921

[7]https://www.livescience.com/62832-stanford-prison-experiment-flawed.html

[8]https://phys.org/news/2019-05-evidence-broken-windows-theory-neighborhood.html

[9] Bregman, R. (2020). Im Grunde Gut: Eine neue Geschichte der Menschheit. Hamburg: Rowohlt Verlag.

[10] Vincent, J. (2018, September 21). www.theverge.com. Retrieved from https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist

[11] Bregman, R. (2020). Im Grunde Gut: Eine neue Geschichte der Menschheit. Hamburg: Rowohlt Verlag (pp. 96-117).

[12] https://en.wikipedia.org/wiki/Christmas_truce

[13] https://www.faz.net/aktuell/politik/der-erste-weltkrieg/weihnachten-1914-im-schuetzengraben-ein-bisschen-frieden-im-ersten-welkrieg-13327096-p4.html

[14]https://www.businessinsider.com/the-us-army-is-developing-unmanned-drones-that-can-decide-who-to-kill-2018-4?r=DE&IR=T

[15] https://hellofuture.orange.com/en/ai-reduce-human-error-rate/.

[16]https://www.washingtonpost.com/national-security/how-trump-decided-to-kill-a-top-iranian-general/2020/01/03/77ce3cc4-2e62-11ea-bcd4-24597950008f_story.html.

[17] https://www.washingtonpost.com/national-security/how-trump-decided-to-kill-a-top-iranian-general/2020/01/03/77ce3cc4-2e62-11ea-bcd4-24597950008f_story.html.

[18] https://en.wikipedia.org/wiki/OODA_loop.

[19] C. Ferraris and R. Carveth, “NASA and the Columbia Disaster: Decision-making by Groupthink?” in Proceedings of the 2003 Association for Business Communication Annual Convention, 2003, p. 12.

[20] K. Clark, “The GPS: A fatally misleading travel companion,” Jul. 2011. [Online].

[21] Feldman, Philip & Dant, Aaron & Massey, Aaron. (2019). Integrating Artificial Intelligence into Weapon Systems.

[22] appliedai. (2018, September 15). Retrieved from https://appliedai.com/use-cases/1

[23] Kar, R., & Haldar, R. (2016). Applying Chatbots to the Internet of Things: Opportunities and Architectural Elements. International Journal of Advanced Computer Science and Applications, Vol. 7, No. 11, 147-154.

[24] https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf

[25] Dadich, S. (2016, 8 24). www.wired.com. (Wired) Retrieved 4 8, 2018, from https://www.wired.com/2016/10/president-obama-mit-joi-ito-interview/

[26] rt.com. (2018, 10 1). www.rt.com. Retrieved from https://www.rt.com/news/401731-ai-rule-world-putin/

[27] Gershkoff, A. (2017, 6 12). www.huffingtonpost.com. Retrieved 4 9, 2018, from https://www.huffingtonpost.com/entry/ai-regulation-is-coming-heres-how-to-do-it-right_us_5a27461ce4b0650db4d40ba5

[28] https://g8fip1kplyr33r3krz5b97d1-wpengine.netdna-ssl.com/wp-content/uploads/2020/01/AI-white-paper-CLEAN.pdf).

[29]https://www.theguardian.com/technology/2020/jan/17/eu-eyes-temporary-ban-on-facial-recognition-in-public-places

[30] Bathia, R. (2017, 7 27). Retrieved 4 7, 2018, from https://analyticsindiamag.com/time-regulatory-framework-artificial-intelligence/

 

[31] https://www.cognilytica.com/2019/09/05/report-voice-assistant-benchmark-2-0-2019/

[32] Chiasson, D. (2018, September 15). Retrieved from https://www.newyorker.com/magazine/2018/04/23/2001-a-space-odyssey-what-it-means-and-how-it-was-made

[33] Bostrom, N. (2017). Superintelligenz 2. Auflage. Berlin: Suhrkamp Verlag.

[34] Kroker, M. (2018, September 15). https://blog.wiwo.de/. Retrieved from https://blog.wiwo.de/look-at-it/2018/03/21/25-trillionen-

[35] World Economic Forum. (2018, August 10). Retrieved from https://www3.weforum.org: https://www3.weforum.org/docs/Harnessing_Artificial_Intelligence_for_the_Earth_report_2018.pdf

[36] Joy, B. (2018, September 20). Retrieved from https://www.wired.com/2000/04/joy-2/

[37] Kurzweil, R. (2005). The Singularity Is Near: When Humans Transcend Biology. Viking.

[38] Bostrom, N., & Yudkowsky, E. (2011). The Ethics of Artificial Intelligence. In W. Ramsey, & K. Frankish, Cambridge Handbook of Artificial Intelligence (pp. 316-334). Cambridge University Press.

[39] Tegmark, M. (2015). Friendly Artificial Intelligence: The Physics Challenge. Artificial Intelligence and Ethics: Papers from the 2015 AAAI Workshop (pp. 87-89). https://www.aaai.org/ocs/index.php/WS/AAAIW15/paper/download/10149/10138.

[40] Drexler, K. E. (2015). MDL Intelligence Distillation: Exploring strategies for safe access to superintelligent problem-solving capabilities. In Technical Report #2015-3, (pp. 1-17). Oxford University.

[41] Soares, N. (2016). The Value Learning Problem. Ethics for Artificial Intelligence Workshop at 25th International Joint Conference on Artificial Intelligence, (pp. 9–15). New York.

[42] Bostrom, N. (2017). Superintelligenz 2. Auflage. Berlin: Suhrkamp Verlag.

[43] Deepmind. (2018, September 20). Retrieved from https://deepmind.com/blog/deepmind-ai-reduces-google-data-centre-cooling-bill-40/

[44] World Economic Forum. (2018, August 10). Retrieved from https://www3.weforum.org: https://www3.weforum.org/docs/Harnessing_Artificial_Intelligence_for_the_Earth_report_2018.pdf

[45] Wilson, S., Fitzsimons, M., Ferguson, M., Heath, A., Jensen, M., Miller, J., . . . Grossman, R. (2017). Developing Cancer Informatics Applications and Tools Using the NCI Genomic Data Commons API . Cancer Res; 77(21), 15-18

[46] Lindborg, H. (2018, September 22). Retrieved from https://www.theseus.fi/bitstream/handle/10024/147893/Lindborg_Hugo.pdf;jsessionid=E8E39BB4E5C11B59FB4E36DB5A89A1C0?sequence=1

 
