Why Certain Technologies Are Creepy - And What Engineers Can Do About It
Luiza Jarovsky
Co-founder of the AI, Tech & Privacy Academy, LinkedIn Top Voice, Ph.D. Researcher, Polyglot, Latina, Mother of 3. Join our AI governance training (1,100+ participants) & my weekly newsletter (54,000+ subscribers)
Today’s newsletter is sponsored by Didomi, G2 leader in the Consent Management Platform category.
Read time: 5 minutes.
-
Certain technologies can be creepy and cause harm to people. In today's newsletter, I will discuss why this happens, give examples, and propose what engineers can do about it.
In my ongoing Ph.D. research, I discuss unfair data practices across the data cycle, meaning unfair practices that happen during data collection, data processing, and data use. When unfair practices happen in the data use phase, they are associated with a lack of adequate oversight, guidelines, and enforcement, in addition to the absence of tools to protect vulnerable populations. As a consequence, users are left vulnerable and exposed to harm. Below are three examples.
The first example is the use of AirTags by abusive partners to stalk their current or former partners. An AirTag can be defined as a “shiny, half-dollar-sized coin with a speaker, Bluetooth antenna, and battery inside, which helps users keep track of their missing items.” Its main purpose is to help owners find luggage, wallets, keys, or other personal items that get lost. AirTags became increasingly popular when airports reopened after coronavirus lockdowns, as overcrowding caused a massive increase in lost luggage.
Although this was never the intended use of the AirTag, it started being used by abusive partners, ex-partners, and anyone else willing to stalk another individual without their knowledge. After obtaining access to records of eight police departments, Vice reported that:
“Of the 150 total police reports mentioning AirTags, in 50 cases women called the police because they started getting notifications that their whereabouts were being tracked by an AirTag they didn’t own. Of those, 25 could identify a man in their lives—ex-partners, husbands, bosses—who they strongly suspected planted the AirTags on their cars in order to follow and harass them. Those women reported that current and former intimate partners—the most likely people to harm women overall—are using AirTags to stalk and harass them.”
Specifically, in the context of Apple, there is an additional problem of scale, as AirTags can leverage the global network of nearly a billion iPhones and Macs to detect nearby AirTags and relay their location. The result is a massive surveillance system in which every Apple user acts as a live tracker unless they opt out of the Find My network.
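To make the scale problem concrete, here is a minimal, hypothetical sketch of the kind of logic an unwanted-tracking alert could use: if the same unknown tracker identifier keeps appearing near a person's phone across time and across locations, warn them. This is not Apple's actual protocol; the data structures, thresholds, and the assumption of a stable tracker identifier are all simplifications for illustration.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Sighting:
    """One Bluetooth advertisement observed by the potential victim's phone."""
    tracker_id: str   # pseudo-identifier of the tracker (assumed stable; real tags rotate it)
    timestamp: float  # seconds since epoch
    lat: float
    lon: float


def flag_possible_stalking(sightings, min_sightings=4, min_duration_s=3600, min_spread_km=1.0):
    """Return tracker IDs that appear to be travelling with the user:
    seen repeatedly, over a meaningful time window, across distinct places."""
    by_tracker = defaultdict(list)
    for s in sightings:
        by_tracker[s.tracker_id].append(s)

    suspicious = []
    for tracker_id, seen in by_tracker.items():
        if len(seen) < min_sightings:
            continue
        seen.sort(key=lambda s: s.timestamp)
        duration = seen[-1].timestamp - seen[0].timestamp
        # Crude distance proxy: latitude/longitude spread, roughly 111 km per degree.
        spread_km = 111 * max(
            abs(seen[-1].lat - seen[0].lat),
            abs(seen[-1].lon - seen[0].lon),
        )
        if duration >= min_duration_s and spread_km >= min_spread_km:
            suspicious.append(tracker_id)
    return suspicious
```

A real system, whether Apple's unwanted-tracking alerts or a third-party scanning app, has to handle rotating identifiers, battery limits, and false positives from shared households; the point of the sketch is simply that such safeguards must be bolted on after the fact, by the same network that makes the tracking possible.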
On the topic of abusive partners, ex-partners, and sexual predators, another technology that has been misused to oppress is deepfake software. Noelle Martin recounts that, when she was 18, she found her face superimposed onto explicit pornographic videos and images, as if she were one of the actresses. These videos and images had been created by a group of unknown sexual predators, and she discovered them by chance while running a reverse Google image search.
Even though deepfake technologies can have legitimate uses, such as learning tools, photo editing, image repair, and 3D transformation, their main application nowadays seems to be cyber exploitation. According to a Deeptrace report, 96% of all deepfake videos available online are non-consensual pornography.
Another example of unfair data use can be found in the realm of machine learning and facial recognition. Automated gender recognition (AGR) is a type of facial recognition technology that uses machine learning to automatically classify the person in a picture or video as male or female.
However, gender is not a binary feature but a spectrum, and for some people understanding it is a lifelong quest. How could an algorithm possibly categorize it when, at times, not even the individual has it clear yet? As the Human-Computer Interaction researcher Os Keyes stated:
“This technology tends to reduce gender to a simplistic binary and, as a result, is often harmful to individuals like trans and nonbinary people who might not fit into these narrow categories. When the resulting systems are used for things like gating entry for physical spaces or verifying someone’s identity for an online service, it leads to discrimination.”
It is an algorithm built to fail: no matter how accurate its developers claim it to be, attributing gender should not be the role of automated machines.
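To see why accuracy claims miss the point, consider a minimal, hypothetical sketch of the output layer of an AGR-style classifier (the names and shapes are illustrative, not any vendor's system). Whatever the input, the model can only spread its confidence across two predefined labels, so anyone outside that binary is misclassified by construction.

```python
import numpy as np

LABELS = ["female", "male"]  # the only answers this model can ever give


def softmax(logits: np.ndarray) -> np.ndarray:
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()


def classify_gender(face_embedding: np.ndarray, weights: np.ndarray, bias: np.ndarray) -> str:
    """Two-class head of a hypothetical AGR model.

    Structurally, there is no way for this function to answer 'neither',
    'both', or 'ask the person': it must pick one of two labels, and it will
    do so with apparent confidence for every face it is shown.
    """
    logits = weights @ face_embedding + bias  # shape (2,)
    probabilities = softmax(logits)
    return LABELS[int(np.argmax(probabilities))]
```

Improving the model only changes how often the two labels get swapped; it cannot create a third answer, which is exactly why the harm Keyes describes is independent of the reported accuracy.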
In the examples above, the data use, whether due to technological features or the lack of regulatory constraints, was invasive and limited the autonomy of the affected individuals. In some cases, the technology facilitated psychological or physical harm.
What I argue in my research, and will summarize here, is that, before making a product available to the public, its developers must ensure that it will not harm people's psychological well-being or physical safety, or cause any other type of harm.
For any product that deals with the collection and processing of personal data, in addition to a data protection impact assessment, a thorough evaluation of its potential for abusive use is needed, as sketched below. Engineers should be trained to identify the broad set of impacts that technology can have on individuals and on society as a whole, paying special attention to children, minorities, protected groups, and vulnerable populations.
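As one illustration of what such an evaluation could look like in practice, here is a minimal, hypothetical abusability review encoded as a checklist. The questions, names, and pass/fail rule are my own assumptions, not an established standard; a real assessment would be much richer and would involve the affected communities directly.

```python
from dataclasses import dataclass, field


@dataclass
class AbusabilityReview:
    """Hypothetical pre-release review, run alongside a data protection impact assessment."""
    product: str
    answers: dict = field(default_factory=dict)

    # Each question names a misuse pattern drawn from the examples above.
    QUESTIONS = {
        "reveals_location": "Can the product reveal a person's location to someone else?",
        "tracks_without_consent": "Can it be attached to, or pointed at, a person who never consented?",
        "impersonates_people": "Can it generate or alter images, audio, or video of a real person?",
        "infers_sensitive_traits": "Does it infer sensitive traits such as gender, health, or orientation?",
        "harms_vulnerable_groups": "Could misuse disproportionately affect children, minorities, or abuse survivors?",
    }

    def record(self, key: str, value: bool) -> None:
        assert key in self.QUESTIONS, f"unknown question: {key}"
        self.answers[key] = value

    def needs_mitigation_plan(self) -> bool:
        # Any 'yes' (or unanswered) question blocks release until mitigations such as
        # alerts, consent checks, rate limits, and reporting channels are designed and tested.
        return any(self.answers.get(key, True) for key in self.QUESTIONS)


review = AbusabilityReview(product="item tracker")
review.record("reveals_location", True)
review.record("tracks_without_consent", True)
print(review.needs_mitigation_plan())  # True -> do not ship without a mitigation plan
```

The exact questions matter less than making the review a gate: if any answer is "yes", the product does not ship until the team can show how the abuse scenario is detected, limited, or made reportable.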
Technology is immensely powerful, and it can bring many positive transformations. However, humans must always be the focus. It does not matter how advanced and innovative a certain technology is: there should always be adequate constraints and mechanisms to support humans and prevent harm.
Of course, it is not only the responsibility of engineers. Regulation should be tougher and more specific on unfair data uses. But this will be a topic for another edition of the newsletter.
-
See you next week. All the best, Luiza Jarovsky