My little book summary and recommendation


New technologies and robots are here to stay. This sentence appears in both books, and it captures why we need to consider the consequences of new technologies that are evolving and changing our lives. We should not ignore these developments, because they are happening regardless. But how can we take part in and influence the development of robots and other digital technologies? We should understand these developments and get involved, so that the robots we create are ones we need and can use, not ones that harm us. Against this background, I will briefly introduce two books that deal with robots in our society.

  • Eve Herold: Robots and the People Who Love Them: Holding On to Our Humanity in an Age of Social Robots. St. Martin's Publishing (2023)
  • Mark Coeckelbergh: Robot Ethics. MIT Press (2022)

It was exciting to read them in this order since, beneath the surface, you can see different perspectives on social robots and their influence on humans and our society. This is not only because Eve Herold is a journalist and Mark Coeckelbergh is a professor of philosophy of media and technology. There is also a difference between the views of people who live and work in the US and those in Europe; the latter seem much more critical and raise many more questions.

Eve Herold is a well-known science writer with a particular focus on new technologies. Her book consists of 11 chapters, all of which integrate scientific findings on social robots with summaries of interviews and talks with researchers. At the beginning of the book, she surveys the areas in which robots are already being used and concludes that they are already here. It becomes clear that robots are also becoming more humanlike, with humanlike traits. She thinks that in a few years, robots will become more personal and social. The prerequisite for robots to be accepted in our lives will be that they are realistic enough to convince us that they are like us.

She further asks whether robots could make us more emotionally intelligent, which is an interesting question. To a certain extent, robots can provide the illusion of emotionally intelligent responses. However, a robot remains incapable of real emotional intelligence. Well-developed and well-programmed robots can fill gaps, for instance in social relationships or health care. She thinks companion robots can help address mental illness, social isolation, and gaps in health care. Nevertheless, robots fall short of the prerequisites for feeling empathy. They would need rich knowledge of themselves, including personal motivations, weaknesses, strengths, a history of successes and failures, and high points and lows, and the robot's self-identity would need to overlap with that of its human companion. Moreover, robots lack the capacity for the embodied nature of feelings. She summarizes this chapter by saying that social robots can be helpful for people who struggle with emotions. Further, by modeling socially appropriate behavior, social robots can help us learn such behavior ourselves. Chatbots have already shown that they can offer a kind of cognitive behavioral therapy for the depressed.
Eve Herold then asks whether robots will be smarter than humans. One of the greatest challenges for robots, for instance, is identifying patterns in unstructured environments, because this requires the ability to distinguish between signals and mere noise. Robots equipped with deep learning will learn with each task, but the downside is the black-box problem: the engineers who wrote the algorithm cannot say exactly how it reaches a decision. With the growth of processing power, some AI researchers predict we will experience an "intelligence explosion" in AI. In another chapter she describes how social robots can reduce loneliness and how people can develop very close relationships with robots, even love. Studies in nursing homes show that social robots can lead to greater social engagement; on the downside, however, the costs of robots are very high. She concludes that if social robots become a normal part of our lives, this will entail unprecedented social change and possibly transform intimate human relationships. She also covers the topic of love in the age of robots. So far, current AI robots can simulate some aspects of love, but they never feel love; the human partner must ignore the fact that the robot is only a machine. From a psychological point of view, it is a problem that a robot cannot reject the human, so the human never has to embark on personal growth or exercise empathy. Furthermore, she questions the effects of fembots, which entail the danger of objectifying women. They allow men (and some women) to avoid having empathy with women and girls, and could become a bedrock of sexism, misogyny, exploitation, and abuse. She states that research shows humans are driven to anthropomorphize robots, which leads them to think they are in a relationship with them.
But robots cannot enter a relationship of their own will; they have no say in it, cannot freely decide whether they want to be in it, and cannot complain about how they are treated. Herold warns that this could lead to more isolation, loneliness, and emotionally deprived people. Robots are already used in the care of children, and some tools have been on the market for years. Some childcare robots have "emotion management systems" that detect the child's emotions. According to Herold, research shows that childcare robots can benefit autistic children, for instance. The robot NAO can be used as a teaching assistant in science, technology, engineering, and math. An argument for using robots in education is that they have the potential to support teachers and that students learn from an early age to interact and collaborate with robots. The darker side of using robots with children is that some children could act out their inner bully and mistreat the robot, since it cannot complain or fight back. Another study shows that a robot cannot replace parents in teaching a language or reading a story, since its responsiveness differs from the intimate responsiveness of parents. She cites another account suggesting that the overuse of robots could lead to emotional alienation between parents and children. The conclusion is that robots can be a valuable educational tool, but they cannot make a child feel truly loved, validated, and valued. Eve Herold dedicates a chapter to robots as killing machines and to how they change war, since it makes a difference whether a soldier is in direct combat or sitting in an office steering a drone. Interestingly, even in the army, the military tends to humanize robots and feel empathy for them. Important questions remain, however: when an autonomous military robot gets out of control in an urban area and kills civilians, who is to blame? Who is responsible?

Herold seems convinced that robots will change our human culture. Social robots will emerge as a new ontological category that is neither human nor fully machine, and our relationships with these new beings will form a new category as well. She believes that the widespread presence of social robots in society will influence how we think and act; robots will join our social circles of friends, family, and work. She closes her book with a chapter titled "Good news: Humans are in control. The bad news: Humans are in control." She thinks that robots will one day become part of our daily lives, like smartphones and other digital tools, but they will not be able to meet people's fundamental needs, since they lack the skills and capabilities to form real relationships. On the other hand, robot relationships could help us feel and acknowledge our feelings, and therapy robots can be programmed to treat post-traumatic stress, anxiety, and depression. However, people who behave dysfunctionally will suffer no consequences when they mistreat robots. Nevertheless, she is convinced that robots will not change the centrality of socialness in human life. In her view, they are the second technology, after social media, to reshape it. In light of this, she warns that we need to be judicious in using robots to help us raise, entertain, or teach our children.

The book by Mark Coeckelbergh focuses on robot ethics and is, for this reason, much broader. Nevertheless, the question of robots' effects on humans and societies remains the central ethical question. The book contains eight chapters and, unlike the book discussed above, focuses solely on ethics. Another difference is that each chapter raises many questions, which seems typical for ethics, and you will only get some answers.

Already in the first few pages, he states that robots are here to stay and are more autonomous and intelligent than ever before. He points out that robots are going to change our lives, societies, and environment in fundamental ways, and for this reason we should consider the ethical consequences and reflect on them more critically.

He explains that the term "robot ethics" refers to the ethics of how humans should use, interact with, and develop robots in a way that leads to good for humans or other entities, such as animals, or perhaps even robots (humans are the ethical subjects; robots are then the means to reach the ends of human ethics). According to Coeckelbergh, robot ethics is linked to, and requires, nothing less than critical thinking about our societies' social and political order.

On page 2, he states that Marx's criticism is still relevant in the age of robots, since workers become part of the machine and the broader technosocial system. Autonomous, flexible, and cooperative robots raise questions of work safety. Security issues can also arise from the use of the Internet of Things and, more generally, from increasing interconnection in industrial production. Coeckelbergh asks whether robotics technology is a new instrument of capitalist exploitation, in which the machine, not the worker, once again determines how the worker does their work.

Naturally, he turns to the topic of the fourth industrial revolution, characterized by current social and economic changes and the transformation of industry by means of autonomous and intelligent robots. He states that there will be job substitution and that many jobs and workplaces will change; nevertheless, people find it hard to anticipate these changes. He argues that we should ask what humans can still do in the new automation economy and society, and what we want them to do. He also draws attention to the fact that our massive use of technology contributes to climate change and other environmental concerns.

The chapter on the robotic home companion discusses whether companion and social robots deceive people. Using more social robots, for instance in elderly care or child care, entails a risk of less human contact. People can be misled, since the robots merely simulate a social relationship; the question is whether human relationships are in danger. The deception social robots create is an illusion: the illusion of having a friend, a care worker, or a family member. Furthermore, he asks critically whether personal robots may also be instrumental in supporting surveillance capitalism, a new form of social and economic oppression based on data being captured and sold.

In Chapter 4, he discusses social robots and robots in health care. He asks whether care robots are a way of abandoning the elderly: will family members visit their elderly parents less often when a social robot is present? Robots in health care can be used to fetch medical supplies and food, or for telemedicine and telenursing, such as monitoring people and intervening if something goes wrong. He considers the use of robots in mental health controversial. The ethical issues of robots in health and elderly care are manifold: privacy, data protection, surveillance, how data is stored, and whether and to what extent robots take over tasks from humans. He asks further: what will the roles of robots be? Do they take over entire jobs or just specific tasks? What is the resulting role of the nurse, doctor, or surgeon? Not to mention the responsibility gap that opens when robots become more autonomous and take on more tasks while lacking the capacity for moral agency. He even tries to describe what good health care is. On page 97, he lists criteria for good health care based on Martha Nussbaum's capability approach, and on page 101 he finally formulates several criteria of good care, because he believes that thinking about robots in health care should not only address the specific ethical issues raised by care robots but also make us think about what good care is.

Chapter 5 is on self-driving cars, a topic that involves many ethical questions. For instance, whose life should a self-driving car prioritize: a child's, a pet's, or an adult's? When robots have more autonomy, should they have some morality? When an autonomous car causes an accident, who is responsible? The car cannot take responsibility. Even if it were possible to give robots the capacity for moral decision-making, robots lack the emotional capacities that are important for moral judgment, and without them, robots would become dangerous "psychopaths." One solution he considers is a responsibility hierarchy, in which humans remain on top and tasks are delegated to robots.

The next chapter is on uncanny androids, appearance, and moral patiency. Coeckelbergh starts this chapter with many questions: if AI gets better at talking to us, how should we portray and interpret such robots and our interactions with them? What exactly is going on when we interact with them? Is it acceptable to love or torture humanlike robots, which are machines? Is it problematic that we deceive people with humanlike robots? Should we forbid the use of such robots with vulnerable groups such as young children or the elderly? He then addresses the topic of indirect moral standing: the idea that we, as humans, should treat robots well because we are humans and have moral standing. Furthermore, the worry is that if someone "mistreats" a robot, this may lead to mistreating humans and/or to humans not exercising their duties towards other humans.

In Chapter 7, Coeckelbergh takes up killer drones, distance, and human existence. Advances in AI and robotics make it easier for weapons to identify faces, fly autonomously, and target and kill from a distance. Killing from a distance makes a psychological and moral difference, because killing becomes easier the farther soldiers are from the target; physical distance can thus influence moral distance. Furthermore, drones are used for criminal actions, terrorism, and so on. He asks: if weapons target and kill on their own and make their own decisions, should they be given the capacity for moral decision-making? Finally, he concludes that robots, as non-living entities with no experience of human existence and vulnerability, should not be allowed to take the life of a human being.

The last chapter ends the book by extending robot ethics into a form of environmental ethics. He states that robot ethics is always connected to philosophical anthropology and that robot ethics is ultimately about us as humans. In modern times, in his opinion, we need robots to define what is human; robots are, in this respect, technoanthropological. He pictures that, instead of living in fear of robots, humans and robots can relate to each other and even merge (as in the concept of transhumanism), and robots can be our friendly others. He motivates us to go beyond the dualist, binary category of humans and robots as two sides of a coin. Robot ethics could lead to finding a good way of relating to and living together with robots.



More articles by Prof. Dr. habil. Martina Hasseler
