Morality in Technology
Many of us forget that “technology” comes from the Greek technē, the idea of harnessing nature. It stood apart from epistēmē, which was knowing nature without acting upon or harnessing it. Playing the harp was once thought of as technology: it harnessed music in a vibrating string, nature brought out of itself and into an alternate form.
For millennia we have created tools that stand outside of ourselves. Yuval Noah Harari, author of the widely acclaimed Sapiens and Homo Deus (whom I had the pleasure of hearing speak at the 2018 Penguin Annual Lecture in New Delhi, as my book was also published by Penguin), argues that a distinct characteristic of human beings is our ability to imagine things into being outside of ourselves. We imagined tools and instruments, like the harp, to harness music in the vibrating string. In a sense, our very identity as humans is technē. And since then we have imagined a million other applications, taking technology to greater and greater abstractions away from nature, where we all came from.
He talks about how Peugeot, the car company, is a figment of our imagination. It is of course a car company, but what is a company but a series of legal fictions creating an entity that is an “other”: not any one particular person, but a representation of people cooperating, or perhaps disagreeing, outside of our individuality. In the creation of a company, an LLC, a trust, or any legal fiction, we instantiate into the world another incarnation, which we imbue with properties. We ascribe characteristics to this entity, this other. We state what it shall do, where it shall sit, how it shall be organized, and whom it shall represent.
Now imagine that this isn’t a harp, or a company, but a rather more “cutting-edge” idea like an algorithm, or a sequence of code we deem to be “artificially intelligent.” (Isn’t calling technological intelligence “artificial” redundant, if we think about the etymological origins of the word “tech”? Anything that is not pure nature is by definition technology, and therefore artificial. But I digress; let’s keep calling it AI for the sake of being “with the times,” redundant or not.) The same way we codify into contract the characteristics of a company, we codify into code the characteristics of an AI. We instantiate this third-party representation of people, ideas, and process into code, and those who write the code make the judgements as to how that code is sequenced, what it prioritizes, how it acts upon data, and how it ultimately “decides” (even if that eventuality is a statistical given).
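To make that concrete, here is a deliberately simplified, hypothetical sketch in Python. None of it comes from any real product; the field names, weights, and ordering are my own illustration of the point that a value hierarchy is literally written down by whoever writes the code.

# A hypothetical sketch, not any real product's code: the fields and weights below
# are my own illustration of how a developer's value judgements become "the algorithm."
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_harm: float   # estimated harm to users, 0.0 to 1.0
    fairness: float        # how evenly benefits are distributed, 0.0 to 1.0
    loyalty_value: float   # benefit to the platform's most engaged users, 0.0 to 1.0

# The ordering and relative magnitude of these weights IS the moral hierarchy.
FOUNDATION_WEIGHTS = {"care": 0.6, "fairness": 0.3, "loyalty": 0.1}

def score(action: Action) -> float:
    """Score an action according to the developers' chosen hierarchy of values."""
    return (
        -FOUNDATION_WEIGHTS["care"] * action.expected_harm
        + FOUNDATION_WEIGHTS["fairness"] * action.fairness
        + FOUNDATION_WEIGHTS["loyalty"] * action.loyalty_value
    )

def decide(actions: list[Action]) -> Action:
    # Once the weights are fixed, the "decision" is a statistical given.
    return max(actions, key=score)

Change the weights, or the order in which the foundations are considered, and the “intelligent” system decides differently. The morality lives in the authors, not in the math.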
Jonathan Haidt is a social psychologist who studies morality, now a professor at NYU Stern School of Business, where he speaks broadly on issues of morality in business and technology. In one experiment he asked participants a few very simple questions designed to measure whether, and how, people would act “morally.” The questions ask whether participants would perform a given task for $100, $10,000, or $1 million, or whether there is no amount of money for which they would perform it. Would you inject an unknown substance into the arm of a child? Would you state an unpopular political opinion that could get you in trouble? Would you crawl around naked, acting like an animal? His Moral Foundations Theory probes the ways in which we prioritize Care/Harm, Fairness, Loyalty, Authority, and Purity.
Most children know that there are gradations in authority. When children are asked whether one of their peers could violate the school dress code, for example, they treat it as far less important than protecting another child from physical harm. “Would it be okay to push your friend off the swing, even if the teacher told you to?” Perhaps we have an innate tendency to prioritize Care over Authority, or at least most of us in society are conditioned that way. We won’t get into nature versus nurture, but suffice it to say that individuals have a moral compass: most of us wouldn’t inject an unknown substance into a child’s arm, but we would state something politically unpopular, or defy authority if obeying it meant causing someone else explicit harm.
These gradations also inform our duties, and what we deem just or fair. Imagine a bouncer at a night club telling you that you can’t come in because there’s a fire hazard and he doesn’t want people to get hurt (Care). Or he might say, “this guy comes here all the time, so I’m letting him in first” (Loyalty). Or he might say, “you’re not dressed well enough” (In-Group/Belonging), or just puff out his chest and say, “you can’t come in because I say so” (Authority). In each of these cases your reaction will be vastly different. Your idea of justice will be different.
Going back to the legal fictions of companies, entities, algorithms, and AIs: these versions of technē, this harnessing of nature, do not ipso facto have their own morality. They are mere reflections of the priorities we codify into code. They do not, of their own accord, seek to do no harm, or prioritize Care over Authority.
We accept justice if we are values-aligned, and if we agree with the “sequence” of how those moral values are stacked in a hierarchy. The same goes for how we view the Facebook News Feed. The extent to which we view it as “just” depends on whether we agree or disagree with how the company sequences its hierarchy of moral foundations, and how it explains that sequence. Much of corporate communications is a values explanation. So who behind the Facebook News Feed is making the determination of what is Harmful and what is Caring, and when Authority is dictating the bounds of free speech? Which images stay, and which go? It is governed by an “algorithm,” but that is again just a technological fiction, a separating of values into a seemingly more objective “other,” the same way we might place assets in a trust, or IP in a corporation. Facebook might purport to assert Care above all, followed by its own Authority, above In-Groups or Fairness. After all, it takes a stand on editorial license, and its owners define Harm and Care in how they allow, or block, content. They inscribe values into company culture and code.
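As a purely hypothetical illustration (my own sketch in Python, not Facebook’s actual News Feed logic), a stated hierarchy of Care above Authority, above In-Groups or Fairness, can be written as nothing more exotic than an ordered series of checks, where the order of the checks is the moral hierarchy:

# A hypothetical moderation sketch -- my illustration, not any real platform's code.
def moderate(post: dict) -> str:
    """Return 'remove', 'demote', or 'allow' for a post described by a dict of signals."""
    if post.get("predicted_harm", 0.0) > 0.8:        # Care/Harm is checked first and overrides everything
        return "remove"
    if post.get("violates_platform_policy", False):  # Authority: the platform's own rules come next
        return "demote"
    if post.get("flagged_by_community", False):      # In-Group/Fairness signals are weighed last
        return "demote"
    return "allow"

Reorder the checks and you have a different company, and a different idea of justice, running on the same data.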
Uber famously rewards drivers for logging in more often. While the company might still prioritize good driving above all (Care), it then largely privileges Loyalty: drivers who do more for Uber have a better experience than those who split time with Lyft. Whether this is fair is beside the point. This is the moral hierarchy of Uber, like it or not. The founders and creators of that company made explicit choices about values, and about the hierarchy of morals, and they instantiated those choices into their company and their code.
That’s a relatively harmless example, but a laser-guided missile also makes calculations, based on data inputs, about what is a target and what is a non-combatant. Does this missile prioritize Care over Authority? Or is it programmed to do what it’s told, with Authority trumping any consideration of Care once it’s launched? This is a far more troubling example of technology being far more than mere ones and zeros, or management frameworks. It is deeply dependent on, and deeply reflective of, the morality and priorities of its human developers.
How we make moral determinations might also depend on a definition of comprehensiveness. Did we exhaust all the options before acting? If the missile performed one million calculations before making a determination to kill, are its creators any less culpable, or any more moral? Is a more comprehensive missile more moral than one that is haphazard, or less discerning in its algorithms? Both create equal Harm. What about an algorithm with more parameters, or more comprehensive training data? At what point is it no longer the creators’ fault?
Algorithms and AI are neither good nor bad, but neither are they agnostic. Code is not complicit; human beings are. So how do we build diverse teams that ask the right questions as they build, as they collect data, as they create the priorities and the sequences? How do we hold these debates as we compose contracts and compile code? These are questions not of tomorrow, but of today, and our institutions are lagging: leadership institutes revolve around Management theories when they need to reorient around Moral ones.
Despite my book, The Fuzzy and the Techie, being nearly seven years old this April 2024, these debates are only becoming more pointed and more urgent. I’m thankful to Virginia Tech’s Institute for Leadership in Technology for modeling a program on the themes of my book, and for allowing me to teach alongside my friend Professor Rishi Jaitly, former head of Twitter India and an advisor to OpenAI.
We just completed our inaugural Spring Module, which builds on the idea of the Full Stack Human, a framing that came out of my conversations with Virginia Tech President Tim Sands in 2023. It treats the Liberal Arts as the “Infrastructure Layer” in a stack, on top of which these lenses and modalities can be applied to any number of modern questions. Read more about why Leadership and the Humanities go together, and why the leadership institutes of today need to reorient around Moral frameworks, not Management ones.