Don't Let George Orwell's 'Facecrimes' Become A Reality
Chunka Mui
Futurist and Innovation Advisor @ Future Histories Group | Keynote Speaker and Award-winning Author
In George Orwell’s 1984, there is a moment when Winston Smith feels a pang of terror as he realizes he could be watched at any time without knowing it. Here's how Orwell describes it:
It was terribly dangerous to let your thoughts wander when you were in any public place or within range of a telescreen. The smallest thing could give you away. A nervous tic, an unconscious look of anxiety, a habit of muttering to yourself—anything that carried with it the suggestion of abnormality, of having something to hide. In any case, to wear an improper expression on your face (to look incredulous when a victory was announced, for example) was itself a punishable offence. There was even a word for it in Newspeak: FACECRIME, it was called.
The future that Orwell predicts in 1984 is here but, luckily, not widely distributed. Let's not let it spread.
We're all guilty of facecrimes, and of looking for them in others.
For example, I was recently looking at the people standing behind U.S. President Donald Trump as he mocked Dr. Christine Blasey Ford and her accusations of sexual assault against Brett Kavanaugh, the subsequently confirmed U.S. Supreme Court nominee. Even some of the president’s supporters in the U.S. Senate balked at his comments, or at least they said they did, albeit in carefully calculated statements. I was interested in this group's spontaneous reaction to the president's blunt comments, as captured on video. Who were these people, I wondered, and what were their views on this contentious issue?
With advances in face recognition, anyone could police others, and be policed in turn. Many faces can be identified by matching them against readily available datasets containing billions of tagged photos. By gauging people's on-camera reactions to contentious issues, much could be inferred about their general attitudes and preferences. Those initial profiles could then be enriched with insights gleaned from other public data, including footage of other public events.
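To illustrate how low the technical bar already is, here is a minimal sketch using the open-source face_recognition Python library. The file names and the one-to-one comparison are illustrative assumptions only; this is not a description of any particular organization's system.

```python
# A minimal sketch (not any specific system) of matching a face in crowd
# footage against a previously tagged photo, using the open-source
# face_recognition library. File names are purely illustrative.
import face_recognition

# Encode the face in a tagged reference photo (e.g., a public profile picture).
known_image = face_recognition.load_image_file("tagged_photo.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Encode every face detected in a single frame pulled from event footage.
frame = face_recognition.load_image_file("rally_frame.jpg")
frame_encodings = face_recognition.face_encodings(frame)

# Compare each detected face against the tagged reference.
for i, encoding in enumerate(frame_encodings):
    if face_recognition.compare_faces([known_encoding], encoding)[0]:
        print(f"Face #{i} in the frame appears to match the tagged photo.")
```

Scaled up to millions of tagged photos and hours of footage, this same handful of calls becomes the kind of crowd-policing capability described above.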
Organizers already police their crowds. Take, for example, the “plaid shirt guy” who was recently removed from a Trump rally for, as he described it, “not being enthusiastic enough.”
Where might this lead?
Organizers could easily analyze footage during and after rallies to learn which attendees are their most enthusiastic supporters, and which are not. With rudimentary social network analysis, holders of unacceptable sentiments could be “outed” to family, friends and coworkers.
Such analysis could apply to any of the many situations that are regularly recorded on security cameras, webcams and smartphones.
In some cases, such offenses are already punishable. For example, employers already judge the sentiments and actions of their employees, both at work and in other videoed settings—such as when participants in the white supremacist rally in Charlottesville, VA, were fired after being identified in footage of the event.
You might not be bothered by these early examples of retribution, but it doesn't have to stop there. One can easily imagine this happening on a regular basis, with punishment doled out for less egregious crimes or by more oppressive judges.
Consider a recent example from China, where a “smart eye” system monitors students' engagement and emotions in the classroom. Imagine pairing this with a “social credit” system that could control, among other things, students' access to top-notch schools.
Is China actually doing this? I'm not sure. But one can easily imagine how improper facial expressions might slide down the slippery slope to become punishable offenses.
This is a future to avoid.
* * *
I write, speak and advise on the digital future. I'm the author of four books on technology and innovation. This article is updated from one originally published at Forbes.
S.T.E.M. Advocate
6 years ago: Too late
Economist / Strategy / International Affairs / Supply Chain Business Ops ][ views are my own
6 years ago: Theatre classes for everyone!!! We will need it.
Contributor at Nasdaq
6 years ago: Chunka, thank you for the article and, as always, you raise many valid points. But, as with many forms of technological innovation, there are pros and cons. And one’s ideology (and politics, whether in the U.S. or abroad) will typically sway the use cases of such technology. In this case, for example, there are also many positive aspects of facial recognition. Assisting law enforcement in their investigations is one example. In terms of predictive analysis, and “facecrimes”: what if the facial expression tool were used to gauge whether bar patrons exiting a pub were making a “drunk face” and suggest they call an Uber? Orwell’s world of Big Brother was extreme, but I believe humans are too fallible to be trusted to always make the right decisions. This is where appropriate use of technology can help. The question is, can humans be trusted to define or agree on what “appropriate” is?