We are in the future
Today we find ourselves in a period of transition from one age to another, as if one foot were in the future and the other in the past.


The world we live in today is in many ways a wonderland, not unlike the one the English mathematician Lewis Carroll described in his famous novels. Image processing, smart speakers, and self-driving cars are possible because of advances in artificial intelligence: a system's ability to correctly interpret external data, to learn from those data, and to use those learnings to achieve specific goals.

Artificial intelligence emerged as an academic discipline in the 1950s and remained in relative obscurity, with limited practical interest, for more than half a century. Today, thanks to the emergence of big data and advances in computing power, artificial intelligence has entered the business world and public discourse. It is commonly classified into narrow, general, and super artificial intelligence, according to the degree to which a system exhibits cognitive, emotional, and social intelligence. These types are interrelated, and the difference between them often lies in how the artificial intelligence is used (Kaplan and Haenlein, 2019).

But when the use of artificial intelligence becomes normal and everyday, sensitivity toward it tends to fade and it comes to be seen as just another tool. This phenomenon is described as the "artificial intelligence effect": observers trivialize the behavior of an artificial intelligence program by arguing that it is not real intelligence. As the science fiction writer Arthur C. Clarke put it, any sufficiently advanced technology is indistinguishable from magic, but once we understand the technology, the magic disappears (Carlton et al., 2020).

Since the 1950s, experts have been forecasting that artificial general intelligence, meaning systems that can act like humans in every respect and possess cognitive, emotional, and social intelligence, is only a matter of years away (McCarthy and Hayes, 1981).

In what follows, we will look at the historical rise and fall of artificial intelligence technology and explore the current situation and the difficulties we face.


The ups and downs of artificial intelligence

The Dartmouth Conference was followed by almost two decades of remarkable success in the field of artificial intelligence. One of the first examples was the famous computer program ELIZA, created by Joseph Weizenbaum at MIT between 1964 and 1966. ELIZA was a natural language processing tool that could simulate a conversation with a human and is often cited as one of the first programs capable of attempting the Turing test (Natalie and Simpson, 2019). Another early success was the General Problem Solver, built by Herbert Simon (later a Nobel Prize winner) and the RAND scientists Cliff Shaw and Allen Newell, which could automatically solve certain kinds of problems, such as the Tower of Hanoi puzzle (Banko and Lani, 2009). These inspiring achievements attracted substantial funding for artificial intelligence research and led to ever more projects. In 1970, Marvin Minsky told Life magazine that a machine with the general intelligence of an average human could be built within three to eight years.

Only three years later, in 1973, the US Congress voiced sharp criticism of the high cost of artificial intelligence research (Minsky and Papert, 1972). In the same year, the British mathematician James Lighthill published a report for the UK Science Research Council questioning the optimistic outlook of artificial intelligence researchers. Lighthill argued that machines would only ever reach the level of an experienced amateur in games such as chess and that common-sense reasoning would always be beyond their ability (Lighthill, 1973). Consequently, the UK government withdrew support for artificial intelligence research at all but three universities (Edinburgh, Sussex, and Essex), and the US government soon followed the UK model. This period marked the beginning of the artificial intelligence winter. Although the Japanese government began investing heavily in artificial intelligence research from 1980, to which DARPA in the US responded by increasing its own budget, little progress was made in the following years (Ronald et al., 1993).

One reason for the early lack of progress and unmet expectations in artificial intelligence was the way early systems such as ELIZA and the General Problem Solver tried to imitate human intelligence. They were all rule-based systems; that is, they simulated human intelligence top-down, as a collection of expert "if-then" statements (Kaplan and Haenlein, 2019). Expert systems could perform impressively: IBM's chess-playing program Deep Blue defeated the world champion Garry Kasparov in 1997, disproving one of James Lighthill's claims from 25 years earlier. Deep Blue is said to have processed 200 million hypothetical positions per second and to have used tree search to look as many as 20 moves ahead (Campbell et al., 2002). However, expert systems performed poorly on other tasks. An expert system could not, for example, easily be taught to recognize faces accurately (Hutson, 2018). For such tasks, a system needs to be able to interpret external data correctly, learn from those data, and use those learnings to achieve specific goals and tasks through flexible adaptation, i.e., the features that define artificial intelligence.
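
To illustrate the tree-search idea behind programs like Deep Blue, here is a minimal minimax sketch in Python. It is not Deep Blue's actual implementation, and the evaluate, legal_moves, and apply_move functions are hypothetical placeholders supplied by the caller:

def minimax(state, depth, maximizing, evaluate, legal_moves, apply_move):
    # Look `depth` moves ahead, assuming both sides play optimally, and
    # return the heuristic value of the best reachable position.
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)
    values = (minimax(apply_move(state, m), depth - 1, not maximizing,
                      evaluate, legal_moves, apply_move) for m in moves)
    return max(values) if maximizing else min(values)

def best_move(state, depth, evaluate, legal_moves, apply_move):
    # Pick the move whose minimax value is best for the side to move.
    return max(legal_moves(state),
               key=lambda m: minimax(apply_move(state, m), depth - 1, False,
                                     evaluate, legal_moves, apply_move))

Deep Blue combined a vastly deeper, hardware-accelerated search of this kind with hand-tuned evaluation functions.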

Statistical approaches to achieving real artificial intelligence were proposed in the late 1940s, when the Canadian psychologist Donald Hebb presented the learning theory now known as Hebbian learning, which mimics the way neurons in the human brain reinforce their connections (Hebb, 1949). This theory gave rise to research on artificial neural networks. However, that research stalled in 1969, when Marvin Minsky and Seymour Papert showed that computers of the time lacked the computational power required for such artificial neural networks.
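
The core of Hebb's rule, often summarized as "neurons that fire together wire together", can be sketched in a few lines of Python. This is an illustrative toy, not Hebb's own formulation, and the data are made up:

import numpy as np

# Hebbian learning: the connection between two units is strengthened in
# proportion to how often they are active at the same time:
#   delta_w = learning_rate * pre_activity * post_activity
rng = np.random.default_rng(0)
patterns = rng.integers(0, 2, size=(100, 4)).astype(float)  # 100 binary inputs
weights = rng.normal(0.0, 0.1, size=4)                       # small random start
learning_rate = 0.05

for x in patterns:
    post = 1.0 if x @ weights > 0 else 0.0   # thresholded output unit
    weights += learning_rate * x * post      # co-activity strengthens the weights

print("learned weights:", weights)

The plain rule grows weights without bound, which is why practical variants normalize them, but the principle of strengthening co-active connections is the same one that underlies modern neural networks.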

Artificial neural networks, in the form of deep learning, resurfaced prominently in 2015 with AlphaGo, a program developed by Google DeepMind that went on to defeat world-champion players at the game of Go. Go is far more complex than chess: at the start of a game, chess offers 20 possible moves, while Go offers 361. It had long been believed that computers could never beat humans at this game. AlphaGo achieved its performance by using a particular kind of artificial neural network known as deep learning (Silver et al., 2016). Today, artificial neural networks and deep learning underlie most of the programs we call artificial intelligence: the image recognition algorithms on Facebook, the speech recognition algorithms in smart speakers, and the navigation algorithms in self-driving cars. These capabilities are the result of the statistical advances in artificial intelligence that have brought us to where we find ourselves today.
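
As a toy illustration of what stacking layers of artificial neurons means, the sketch below trains a two-layer network on the XOR function using plain numpy. Real deep learning systems such as AlphaGo use far larger networks, specialized frameworks, and very different training pipelines:

import numpy as np

# A tiny feed-forward network (one hidden layer) trained with gradient descent.
# XOR is the classic function a single linear unit cannot learn but a layered
# network can.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    hidden = sigmoid(X @ W1 + b1)            # forward pass, layer 1
    output = sigmoid(hidden @ W2 + b2)       # forward pass, layer 2
    grad_out = (output - y) * output * (1 - output)       # backpropagation
    grad_hid = grad_out @ W2.T * hidden * (1 - hidden)
    W2 -= 1.0 * hidden.T @ grad_out
    b2 -= 1.0 * grad_out.sum(axis=0, keepdims=True)
    W1 -= 1.0 * X.T @ grad_hid
    b1 -= 1.0 * grad_hid.sum(axis=0, keepdims=True)

print(output.round(2))   # approaches [[0], [1], [1], [0]]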


Regulating algorithms

As artificial intelligence systems become more integrated into our everyday lives, we need to consider what kinds of rules and standards should apply to them. Artificial intelligence is supposed to be neutral and fair, but that does not mean it cannot produce biased or discriminatory outcomes. This can happen when the data used to train and evaluate the system reflect or reinforce existing prejudices. For instance, studies have found that object detection systems of the kind used in autonomous cars recognize people with light skin more reliably than people with dark skin, because of the images they were trained on (Wilson, Hoffman and Morgenstern, 2019), and that decision support systems used by judges can produce racially biased risk assessments (Angwin et al., 2016). Rather than trying to control artificial intelligence directly, a better solution might be to establish clear, agreed criteria for how artificial intelligence algorithms are trained and tested. There may also be a need for some form of guarantee, similar to the warranties given for physical devices, that ensures reliable performance even as artificial intelligence systems change over time. Another challenge is how to hold companies responsible for the errors of the algorithms they create, and whether artificial intelligence engineers should follow ethical codes, as lawyers and doctors do. Even with such regulations, there remain risks of artificial intelligence systems being hacked, misused, or abused for malicious purposes, such as creating fake news (Suwajanakorn et al., 2017).
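
One concrete form such testing criteria could take is an audit of a model's accuracy broken down by demographic group. The sketch below is a minimal illustration with made-up labels, predictions, and group memberships:

import numpy as np

# Report a classifier's accuracy separately for each group. Large gaps between
# groups are a warning sign of the kind of predictive inequity reported for
# pedestrian detection and recidivism risk scores.
def accuracy_by_group(y_true, y_pred, groups):
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {g: float((y_pred[groups == g] == y_true[groups == g]).mean())
            for g in np.unique(groups)}

# Hypothetical evaluation data.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(accuracy_by_group(y_true, y_pred, groups))   # {'A': 0.75, 'B': 0.5}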

One factor that adds to the complexity is that deep learning, the key technique in most artificial intelligence systems today, is inherently a black box. Quantitative evaluations of such systems' output are simple, for example measuring the proportion of correctly classified images, but the process by which a deep learning system arrives at its output remains largely opaque. The lack of transparency in artificial intelligence can also be intentional, for instance when a company wants to keep an algorithm secret, or it can stem from technical constraints that make explanation impossible (Burrell, 2016). In any case, few people may care exactly how Facebook's face recognition algorithm works; but when an artificial intelligence system is used to provide diagnostic suggestions for skin cancer based on image analysis (Hansel et al., 2018), understanding how such recommendations are derived becomes vital.


Employment regulations

Just as automation has eliminated some labor-intensive jobs, the increasing use of artificial intelligence will also reduce the demand for office workers and even highly qualified professionals. Image processing tools already perform better than doctors at diagnosing skin cancer, and the same technology has eliminated the need for large teams of lawyers to review millions of documents (Markov, 2011). Labor markets have gone through significant changes in the past, such as those brought about by earlier industrial revolutions, but it is unclear whether new jobs will necessarily be created in other areas to absorb displaced employees. The concern relates both to the number of possible new jobs, which may be far smaller than the number of jobs lost, and to the level of skill those new jobs will require.

Just as a short story like The Fork can be seen as a starting point for thinking about artificial intelligence, another story may do more to shape the public image of rising unemployment. Snow Crash, a novel by the American writer Neal Stephenson, depicts a world in which people spend their physical lives in storage units, kept going by technical equipment, while their real lives take place in a three-dimensional virtual world called the Metaverse, where they appear as avatars. As fictional as this may seem, recent advances in virtual reality, together with the earlier successes of virtual worlds (Kaplan and Haenlein, 2009), have made such a vision plausible to the general public and make Stephenson's story seem far less fantastical. The remarkable advances we witnessed in 2022 and 2023 only underline this.

Regulation may be one way to prevent unemployment. For example, companies could be required to spend a certain percentage of the costs saved through automation on retraining employees for new jobs that cannot be automated. Governments may decide to limit the use of automation: in France, for instance, self-service systems used by public administration bodies are only accessible during regular business hours. Or companies may limit the number of working hours per day so that the remaining work is distributed evenly across the workforce.

Artificial intelligence could either boost our intelligence, as Raymond Kurzweil of Google suggests, or trigger World War III, as Elon Musk warns. Either way, it will pose ethical, legal, and philosophical dilemmas that demand attention. Ethics, for instance, has long wrestled with the trolley problem, in which a hypothetical person must decide whether to do nothing and let many die, or to act and kill fewer (Thomson, 1976). That thought experiment is becoming a real decision for self-driving cars and the people who program them (Awad et al., 2018). It is therefore no surprise that influential figures such as Mark Zuckerberg have advocated various forms of regulation.

But how can we make policy for a technology that is constantly evolving and that only a few experts, let alone politicians, fully understand? How can we keep pace with global developments before artificial intelligence permeates every aspect of human life? One solution could be to follow the approach of the US Supreme Court justice Potter Stewart, who in 1964 defined obscenity by saying: I know it when I see it (Barnett and Levin, 1965). This brings us back to the artificial intelligence effect mentioned earlier: we now treat as normal what once seemed extraordinary. There are dozens of applications today that produce artworks for you, help you learn, and even take care of many of your daily tasks.

The big question is exactly when and at what stage legislation should take place. The pace of progress in artificial intelligence and the creation of new neural networks far outstrips laws and regulations, and governments will need to invest substantially every year in legislation and research on artificial intelligence. In general, if artificial intelligence is to become part of the foundations of our lives, we need to think about it much more deeply, and that thinking must come first of all from the experts who are able to explain artificial intelligence algorithms. It is possible that in the future our courts will need judges who can understand whether an algorithm is explainable or not and weigh that in their decisions in detail.


Sources:

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias: There's software used across the country to predict future criminals. And it's biased against blacks. ProPublica.

Kaplan, A. M., & Haenlein, M. (2009). The fairyland of Second Life: Virtual social worlds and how to use them. Business Horizons, 52, 563–572.

Lighthill, J. (1973). Artificial intelligence: A paper symposium. Science Research Council, London.

McCarthy, J., & Hayes, P. J. (1981). Some philosophical problems from the standpoint of artificial intelligence. In Readings in artificial intelligence (pp. 431–450). Elsevier.

Suwajanakorn, S., Seitz, S. M., & Kemelmacher-Shlizerman, I. (2017). Synthesizing Obama: Learning lip sync from audio. ACM Transactions on Graphics (TOG), 36, 1–13.

Tambe, P., Cappelli, P., & Yakubovich, V. (2019). Artificial intelligence in human resources management: Challenges and a path forward. California Management Review, 61, 15–42.

Thomson, J. J. (1976). Killing, letting die, and the trolley problem. The Monist, 59, 204–217.

Wilson, B., Hoffman, J., & Morgenstern, J. (2019). Predictive inequity in object detection. arXiv preprint arXiv:1902.11097.
