ETHICS IN LIMBO – DILEMMAS IN OPERATING THE DIGITISED SOCIETY
Jens Christian Steenfos
Senior Project and Program Manager at Innocope Management Consulting
The application of modern technology in virtually any context reveals far more than the intellectual achievements and hegemony of modern human beings. It also exposes a paradox in how we approach ethics. On the one hand, we consider the leapfrogging speed of technological advance an indisputable consequence of our growing intellectual capabilities, which is hardly a sin; quite the contrary. On the other hand, the human race appears to be losing ground in figuring out exactly where ethics fits in and what its logical role should be, in putting together a formula that does not allow technology to turn the future into a zero-sum game favouring achievements over humans.
While debating ethics versus technology is far from a new phenomenon, having captured an ever-increasing part of philosophical discussion for centuries, few generations have had the debate so close and vibrant as those growing up in the post-WWII decades.
As a keen supporter of technological milestones, I found that the fourth industrial revolution added both a natural and a deeply fascinating perspective to my own worldview. Yet it also added some fearful expectations. Inasmuch as digitisation has grown to become the mantra of organisations in the private and public domains, the prospect of dealing with new market conditions, competition, effectiveness and innovation still disregards the fact that management teams, more or less unwittingly, risk turning their brave new ambitions against themselves.
Let me walk you through a few, yet very compelling examples.
Like most other trends that have shaped our way of doing business for at least four or five decades, data has grown into a resource that is key to virtually every business opportunity identified. Plenty of reasons exist for that, as do the tools to exploit this resource. In particular, this points not only to exploiting the unexplored data potential already available within numerous organisations, but also to harvesting profitability by collecting new data from the internet.
Still, business organisations increasingly acknowledge that escaping the ethical aspect of generating growth from data proves harder and harder, while they simultaneously struggle with the inherent vulnerability of internet-based business opportunities, as cyber-attacks and rogue web entrepreneurs constantly challenge legitimate platforms. These actors either hack and steal data or use ransomware to trap carelessly protected infrastructures. In turn, organisations that had not already applied strict data protection policies are further, and ethically, compromised. This includes governmental bodies as well as political parties [intentionally?] losing their grip on the consequences of their objectives when using data. As a result, foreign powers can potentially interfere with and change democratically run parliamentary elections.
In a very recent example, Mark Zuckerberg admitted the ‘loss’ of dozens of millions of profiles to Cambridge Analytica, forcing an entirely new approach to protecting the data of Facebook’s more than two billion users. Congressional hearings cemented that. While Cambridge Analytica allegedly used its ‘prey’ to influence the outcome of the 2016 US election, the issue ultimately runs much deeper, both in how we interpret ownership of data and in what you do with data as a responsible organisation, privately or publicly managed.
The obvious elevation of data exploitation through analytics, deep learning, machine learning and artificial intelligence seems only to exacerbate the aforementioned concerns. This is not to say that the example involving Cambridge Analytica did not allegedly rely on sophisticated tools in the first place to tailor ads striving to influence American voters. Rather, it is to emphasise that combinations of advanced software and loads of data, popularly known as artificial intelligence, increasingly integrate with everyday appliances and contexts. This reflects another dimension of technology and data exploitation, giving rise to far more compelling ethical considerations.
Consider the recent, tragic accident in Arizona, where a driverless car hit a woman crossing the street. The woman died from her injuries, raising serious questions. While the accident immediately called into question the general safety of autonomously driving automobiles (forcing Uber to suspend its driverless car service temporarily), it particularly exposed a serious challenge facing manufacturers of this kind of vehicle. What should the software manufacturer prioritise when enforcing safety in a potentially hazardous situation: the passengers in the car, or ensuring that the people involved are hurt the least while attempting to avoid a fatal collision?
If we apply a utilitarian perspective, thus subscribing to the latter option, how will that affect passengers’ incentive to use driverless cars in the first place? Subscribing to the former, we may reduce the number of accidents caused by alcohol and speeding, but we may not be safe as pedestrians, because the driverless car will always seek first to protect the passenger(s) inside the vehicle. This is very difficult to solve, even when defining a sophisticated algorithm.
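To see why even a sophisticated algorithm struggles, consider how explicit the choice becomes once written down. The sketch below is purely illustrative: the policy names, harm scores and manoeuvres are my own assumptions, not any manufacturer's actual logic.

```python
# Toy illustration only: a hypothetical collision-response policy.
# All names and numbers are invented for the sake of argument; no real
# autonomous-driving stack reduces the decision to a function like this.

from dataclasses import dataclass


@dataclass
class Outcome:
    """Predicted harm of one possible evasive manoeuvre."""
    passenger_harm: int   # estimated injury severity inside the car (0-10)
    bystander_harm: int   # estimated injury severity outside the car (0-10)


def choose(outcomes, policy):
    """Pick the manoeuvre a given ethical policy prefers."""
    if policy == "passenger_first":
        # Protect the occupants first; bystander harm only breaks ties.
        return min(outcomes, key=lambda o: (o.passenger_harm, o.bystander_harm))
    if policy == "utilitarian":
        # Minimise total harm, regardless of who bears it.
        return min(outcomes, key=lambda o: o.passenger_harm + o.bystander_harm)
    raise ValueError(f"unknown policy: {policy}")


# Two evasive options: swerving hurts the passengers, braking late hurts
# the pedestrian. The two policies pick opposite manoeuvres.
swerve = Outcome(passenger_harm=4, bystander_harm=1)
brake = Outcome(passenger_harm=1, bystander_harm=6)

print(choose([swerve, brake], "passenger_first"))  # picks brake
print(choose([swerve, brake], "utilitarian"))      # picks swerve
```

Whichever branch a manufacturer ships, it has silently answered the ethical question on behalf of every passenger and pedestrian.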
In my opinion, you should ask questions at several levels. At the macro level, the first question is what best suits society, from the perspective both of enabling driving and of minimising the costs of injuries and fatalities when people drive. Arguably, legislation ensures that drivers in traditional vehicles, like your car or my car, behave correctly and respect other road users. Otherwise, we get a ticket. If we hit someone and cause injuries, or even kill someone, this adds to sad statistics. Legislation does not prevent accidents, but it minimises their number.
Why then would society hold back the proliferation of self-driving cars? The immediate answer probably lies in the fact that we do not yet know enough, from a technological perspective, to draft legislative initiatives capturing all aspects of introducing driverless cars; at least, not in the short run. In addition, you cannot expect entire groups in the population to abandon their traditional cars overnight. The transition would probably take an entire generation, besides accumulating huge costs.
Finally, all this would require manufacturers of both software and cars to test the technology thoroughly over the course of several years. From an ethical perspective, society would also need to decide for itself what to tell manufacturers about how to abide by the law. Then we are back to square one: should politicians decide the outcome of a car accident, i.e. should the software in driverless cars protect most people from getting hurt, or protect the passengers inside the driverless car? At the end of the day, you can only protect irrational and irresponsible pedestrians and bicyclists outside the driverless car to the extent that mere physics also allows the collision to be prevented.
From a systems and capability perspective, the next question arises when you consider that the technology of a driverless car is introduced into an environment essentially controlled by game-theoretic rules, e.g. road traffic, or economics (business decisions, competition, profit making). While many other contexts in our daily life are governed by the need to optimise your own intentions and safety, driving responsibly and gaining economically likely constitute the situations with the greatest impact on maintaining an existence in modern society. Yet they also carry an element of dynamism, since you cannot expect the world to stand still while you make your decision(s). In that light, others are very likely to make similar decisions (irresponsibly based or not), outcompeting your choice. In effect, you lose money, or you do not get the job. You may even be killed in a car accident. The newest technology allows driverless cars, and even traditional cars, to brake and stop automatically if an object in front blocks continued driving. Does technology therefore override the mechanisms, and risks, of game-theoretic contexts like road traffic, and does that eliminate the need to consider ethical aspects?
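The automatic-braking capability mentioned above can be reduced to a back-of-the-envelope physics check. The reaction time and deceleration figures below are illustrative assumptions, not the parameters of any real vehicle.

```python
# Toy automatic-emergency-braking check: brake if the obstacle is closer
# than the distance needed to stop. Reaction time and deceleration are
# illustrative assumptions, not real vehicle parameters.

def stopping_distance(speed_ms, reaction_s=0.2, decel_ms2=8.0):
    """Metres travelled while reacting plus braking from speed_ms (m/s)."""
    return speed_ms * reaction_s + speed_ms ** 2 / (2 * decel_ms2)


def should_brake(obstacle_m, speed_ms):
    """Brake if the obstacle lies within the distance needed to stop."""
    return obstacle_m <= stopping_distance(speed_ms)


# At 50 km/h (about 13.9 m/s) the car needs roughly 15 m to come to a halt.
print(should_brake(10.0, 13.9))  # True: too close, brake now
print(should_brake(25.0, 13.9))  # False: still room to continue
```

The physics here is deterministic, but as the paragraph above argues, it only bounds what can be avoided; it does not decide who bears the residual risk.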
Cutting to the bone, I most certainly believe the answers to these questions are no, it only minimises them, and no: as long as people are involved, ethics always comes to play a role, as does the risk of getting hurt when moving from one point to another. Rather, the question is how to interpret the free choice, and consequence, of the individual in selecting, or deselecting, the implications of introducing a technology like the one controlling a driverless car. You can always decide to use a driverless car if it makes you feel safer. Still, does it make you feel better if you survive the accident while the pedestrian, who did not escape the car in time, is killed? Let us ask the woman behind the wheel in the video from the accident in Arizona. She does not look happy when the driverless car suddenly hits the crossing pedestrian.
Do ethics related to road driving change the way you decide what means of transportation to choose when going to work, or when visiting a friend or family living across the country? Honestly, I am not sure that selecting a driverless car makes your decision depend much more on ethical considerations than it already does. Igniting the engine of the car you currently own, or accepting someone’s invitation to a seat in his or her car, mostly triggers an expectation that you, personally, will arrive safely at the destination. Yet you always face a risk of being physically hurt in a car accident, even if you, or your friend at the steering wheel, drives safely. You cannot protect yourself against others driving irresponsibly. Maybe your friend owns a driverless car, or pays for a ride in one; does that change your set of ethical rules?
Potentially, in a not-so-distant future, we end up serving everybody’s needs by listening to the following message, gently addressing you as you enter the driverless car:
“This car is safe at all levels. Please select a level of safety. In case of an accident or collision, do you want the vehicle to protect you first and secondarily limit damage to people outside? Select level 1. In case of an accident or collision, do you want the vehicle to cause the least damage to you and to people outside the vehicle? Select level 2. The cost of the selected level will be charged to your account.”