Facebook, Cambridge Analytica, and the ethical reckoning for tech
Morten Rand-Hendriksen
At a conference two years ago, I sat at a lunch table with one of the pioneers of web standards. We were discussing the human impacts of web technologies and how the technologies we built for information and connection and entertainment are influencing our lives in increasingly problematic ways. At one point he looked down at the table, let out a long sigh, and said:
“I thought we were making world peace.”
The Reckoning
Over the weekend, the world responded with fury to revelations of how the data analytics firm Cambridge Analytica accessed the profiles of millions of Facebook users and used that information to influence their political views. The story of Christopher Wylie, the “Data War Whistleblower”, hit the airwaves on Saturday, March 17, 2018, one day after Facebook announced it was suspending Cambridge Analytica and SCL Group from its service.
Somehow overlooked in the outrage is the fact that this story originally broke in December 2016, in the Swiss magazine Das Magazin, and was republished by Vice’s Motherboard in January 2017. That means that from the moment the public was told what was going on, it took Facebook a full 15 months to suspend the company from its service.
I’ll let that sink in.
This is a moment of reckoning for the tech industry. We have spent the last decades moving fast and breaking things, and now those things are breaking us. What the tech industry lacks, and what most other industries whose work directly impacts people and society have, is a practice built on a solid foundation of ethics.
The tech community needs to start putting “ought” before “can”
Last night, Yonatan Zunger put together a Twitter thread outlining this situation and juxtaposing it with similar reckonings in other industries.
Allow me to be crass for a moment: For the tech industry, playtime is over. We, the people who make decisions for everyone else through our designs and digital services, need to take responsibility for the creations we put into the world. We can no longer sit back and say “it’s just design” or “I’m just writing code”. Every design decision is a decision made on behalf of the end-user, and every designer, developer, system architect, researcher, and content creator has a duty of care to that user to protect their best interests.
If we don’t establish an ethical framework for our industry, governments can and will impose restrictions on our work, as is already happening with the upcoming GDPR in Europe. And with good reason: the situation with Facebook and Cambridge Analytica is not unique. It is the story of technology used exactly as designed, producing outcomes its creators never considered. Behind it stands a long line of equally troublesome stories of technology unbound by ethical guidelines, laying the groundwork for future world-changing events we will read about in similarly shocking exposés.
Allow me to sketch one plausible example. I call it:
The Three Horsemen of the AI-Pocalypse
Part 1: Say Anything
In November 2016, Adobe demoed a new application called VoCo. It analyzes existing recordings of a person’s voice and lets the user make that voice say anything. In an on-stage showcase (video below), an engineer demonstrated how, using the software, he could type in a sentence and play it back in the voice of actor Keegan-Michael Key.
While Adobe may never ship VoCo to market, the science behind it will no doubt be picked up and used by another software vendor at some point.
Conclusion: Technology will make it close to impossible to know if a voice recording is genuine.
Part 2: Lip-sync Nightmare
In July 2017, researchers from the University of Washington released a paper and a video demo (below) of an algorithm that can turn audio clips into a realistic lip-synced video.
Conclusion: Technology will make it close to impossible to know whether the person you see talking in a video is actually saying the words their lips appear to form.
Part 3: Face/Off
Over the past year, several so-called “face-swap” apps powered by neural networks have emerged, allowing users to quite literally swap the faces of people in video clips for other people’s faces. While this technology has spawned an amusing meme in which people swap the faces of actors in famous movies for that of Nicolas Cage (video below), it has also resulted in the emergence of face-swap porn (Motherboard article).
Conclusion: Technology has already made it close to impossible to know whether the person in the video you are watching is really that person.
Combined, these three technologies create a scenario in which literally every salient part of a video can be faked by anyone with access to a computer. And what do all these technologies have in common? They were made with the best of intentions, with the aim of solving real technical problems.
Building an Ethics for Tech
The problem the tech community faces today is the problem countless other professions have faced in the past: While we want to be able to do anything and everything, our decisions have real-life consequences and we need to take responsibility for them. To turn a classic philosophical problem on its head, the tech community needs to start putting “ought” before “can” and consider whether something should be built before it is built. It may be time to retire “move fast and break things” and replace it with something slower and more mindful.
In 2009, Simon Sinek famously explained how great leaders succeed by asking “why.” His three concentric circles of what, how, and why have become a staple of design and management thinking and have transformed how we approach problems in the tech industry. But something is missing from Sinek’s diagram, something Aristotle and Heidegger pointed to in their studies of the Four Causes: at the center of the diagram there should be a fourth ring asking “to what end?”
As I said earlier, every design decision is one made on behalf of the end-user. From this follows a bigger and more important realization: With every design decision, a path to the future is carved out for the end-user to follow. In other words, designers build the future with every decision they make. When we evaluate our decisions, it is not enough to ask “with what should we build it” and “how do we do that” and “why are we doing it”. We also have to ask ourselves “to what end? What world are we creating with this decision?”
That’s what an ethics for the tech industry is all about.
More to come
I have been working on this problem since 2009 and over the past two years I’ve put together a framework for a process of ethical evaluation to give tech workers the tools they need to identify ethical issues in their work and make sound decisions. That framework is summed up in my article "Using Ethics in Web Design" published by Smashing Magazine.
Senior SaaS Product Lead at Zebra Technologies
Shareholders aren’t too worried about how the dollars “ought to be made”. If you doubt this, it’s worth looking into Herbalife’s funding support vs. ethics. The issue is that regulators are too far behind. And why? Because Facebook, Google, and Twitter make it much more attractive for the smartest of us to work on their side than on the regulator’s.
Design HSE Manager, Technical Safety
If you don’t deploy a particular technology that is beneficial for some users, somebody else will. You cannot regulate technological development based on ethical principles alone, as not all of the actors involved, whether manufacturers or beneficiaries, will share the same ethical framework. The only option is to develop more sophisticated countermeasures.
Aceite Vegetal Prensado en Frío
Ethics and conscience are subjective concepts.
ESL Teacher | Technologist | Veteran
Nice job putting this article together. Thank you.
Interesting read. Agreed, we need to take it slow if it ought to be thoughtful and ethical.