Why Certain Technologies Are Creepy - And What Engineers Can Do About It

Today’s newsletter is sponsored by Didomi, G2 leader in the Consent Management Platform category.

Read time: 5 minutes.

-

Certain technologies can be creepy and cause harm to people. In today's newsletter, I will discuss why this happens, give examples, and propose what engineers can do about it.

In my ongoing Ph.D. research, I discuss unfair data practices across the data cycle, meaning unfair practices that happen during data collection, data processing, and data use. When unfair practices happen in the data use phase, they are associated with a lack of adequate oversight, guidelines, and enforcement, in addition to the absence of tools to protect vulnerable populations. As a consequence, users are left vulnerable and exposed to harm. I will explain with three examples.

The first example is the use of AirTags by abusive partners to stalk their current or former partners. An AirTag can be defined as a “shiny, half-dollar-sized coin with a speaker, Bluetooth antenna, and battery inside, which helps users keep track of their missing items.” Its main purpose is to help owners find luggage, wallets, keys, or any personal items that get lost. AirTags became increasingly popular when airports reopened after coronavirus lockdowns, as overcrowding caused a massive increase in lost luggage.

Although this was not the original plan for the AirTag, it started being used by abusive partners, ex-partners, and anyone wanting to stalk another individual without their knowledge. After obtaining access to the records of eight police departments, Vice reported that:

“Of the 150 total police reports mentioning AirTags, in 50 cases women called the police because they started getting notifications that their whereabouts were being tracked by an AirTag they didn’t own. Of those, 25 could identify a man in their lives—ex-partners, husbands, bosses—who they strongly suspected planted the AirTags on their cars in order to follow and harass them. Those women reported that current and former intimate partners—the most likely people to harm women overall—are using AirTags to stalk and harass them.”

Specifically in Apple's case, there is an additional problem of scale: AirTags can leverage the global network of nearly a billion iPhones and Macs to report their location. The result is a massive surveillance system in which every Apple user becomes a live tracking node unless they opt out of the Find My network.
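To make the engineering side concrete: third-party tools such as AirGuard approach unknown-tracker detection by watching for Bluetooth devices that keep reappearing nearby over time. Below is a minimal, illustrative sketch of that idea in Python, assuming the bleak BLE library. The Apple company identifier is real, but the time threshold and the naive address-keyed bookkeeping are simplifications (real AirTags periodically rotate their Bluetooth address, which a serious detector must handle).

```python
# Minimal sketch of "unknown tracker" detection: flag any device advertising
# Apple manufacturer data that has been continuously nearby for too long.
# Thresholds and the address-based tracking are illustrative only.
import asyncio
import time
from bleak import BleakScanner  # pip install bleak

APPLE_COMPANY_ID = 0x004C   # Bluetooth SIG company identifier for Apple
ALERT_AFTER_SECONDS = 1800  # flag devices seen around us for 30+ minutes

first_seen: dict[str, float] = {}

def on_advertisement(device, adv):
    # Only consider advertisements carrying Apple manufacturer data.
    if APPLE_COMPANY_ID not in adv.manufacturer_data:
        return
    now = time.monotonic()
    start = first_seen.setdefault(device.address, now)
    if now - start > ALERT_AFTER_SECONDS:
        print(f"Possible unknown tracker following you: {device.address}")

async def main():
    scanner = BleakScanner(detection_callback=on_advertisement)
    await scanner.start()
    await asyncio.sleep(3600)  # observe for an hour
    await scanner.stop()

asyncio.run(main())
```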

On the topic of abusive partners, ex-partners, and sexual predators, another technology that has been misused to oppress is deepfake software. Noelle Martin recounts that, when she was 18, she found her face superimposed onto explicit pornographic videos and images, as if she were one of the actresses. The videos and images had been edited by a group of unknown sexual predators, and she discovered the deepfakes by chance while performing a reverse Google image search.

Even though deepfake technologies can have legitimate uses, such as learning tools, photo editing, image repair, and 3D transformation, their main application nowadays seems to be cyber exploitation. According to a Deeptrace report, 96% of all deepfake videos available online are non-consensual pornography.
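As a brief technical aside on how such discoveries are even possible: reverse image search relies on image representations that survive editing and re-encoding. Perceptual hashing is one simple building block of that idea. The sketch below assumes the Pillow and ImageHash packages; the file names and the distance threshold are placeholders, not a standard.

```python
# Minimal sketch: a perceptual hash changes little when an image is resized,
# re-encoded, or lightly edited, so near-duplicates can be found by comparing
# hashes instead of pixels.
from PIL import Image          # pip install Pillow
import imagehash               # pip install ImageHash

original = imagehash.phash(Image.open("my_photo.jpg"))
suspect = imagehash.phash(Image.open("found_online.jpg"))

# The hash difference is a Hamming distance; small values mean "probably the
# same picture". The threshold 10 is illustrative.
if original - suspect <= 10:
    print("Likely a re-use or edited copy of the original photo")
```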

Another example of unfair data use can be found in machine learning and facial recognition. Automated gender recognition (AGR) is a type of facial recognition technology that uses machine learning to automatically classify the person in a picture or video as male or female.

However, gender is not a binary feature but a spectrum, and understanding it can be a lifelong quest. How could an algorithm possibly categorize it when sometimes not even the individual has it clear yet? As the Human-Computer Interaction researcher Os Keyes stated:

“This technology tends to reduce gender to a simplistic binary and, as a result, is often harmful to individuals like trans and nonbinary people who might not fit into these narrow categories. When the resulting systems are used for things like gating entry for physical spaces or verifying someone’s identity for an online service, it leads to discrimination.”

It is an algorithm built to fail: no matter how accurate its developers claim it can be, attributing gender should not be the role of automated machines.
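To see why the binary is baked in, consider what the final layer of a typical image classifier looks like. The sketch below (PyTorch, purely illustrative, not any vendor's actual model) has exactly two output units, so every face, including a nonbinary person's, is forced into one of two buckets, however low the model's confidence.

```python
# Illustrative sketch: the classification head of a typical AGR model maps
# image features to exactly two logits, so the system is architecturally
# incapable of answering "neither" or "not applicable".
import torch
import torch.nn as nn

class BinaryGenderHead(nn.Module):
    def __init__(self, feature_dim: int = 512):
        super().__init__()
        # Two output units: the binary is hard-coded into the architecture.
        self.classifier = nn.Linear(feature_dim, 2)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        logits = self.classifier(features)
        # Softmax forces every input into one of two buckets; probabilities
        # always sum to 1, so "low confidence" still yields a binary label.
        return torch.softmax(logits, dim=-1)

head = BinaryGenderHead()
fake_features = torch.randn(1, 512)  # stand-in for a face embedding
print(head(fake_features))           # e.g. tensor([[0.48, 0.52]])
```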

In the examples above, the data use, whether due to technological features or to a lack of regulatory constraints, was invasive and limited the autonomy of the affected individuals. In some cases, the technology facilitated psychological or physical harm.

What I argue in my research, and will summarize here, is that before making a product available to the public, its developers must ensure that it will not have adverse consequences in terms of psychological well-being, physical safety, or any other type of harm.

For any product that deals with the collection and processing of personal data, in addition to a data protection impact assessment, a thorough evaluation of its potential for abusive use is needed. Engineers should be trained to identify the broad set of impacts that a technology can have on individuals and on society as a whole, paying special attention to children, minorities, protected groups, and vulnerable populations.
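What could such an evaluation look like in practice? Below is a hypothetical, minimal structure for an "abusability review" that a team could run alongside a DPIA. The fields, the example scenario, and the shipping rule are mine, offered only as a starting point, not as an established standard.

```python
# A hypothetical, minimal "abusability review" an engineering team could run
# alongside a DPIA. Questions and threat actors are examples, not a standard.
from dataclasses import dataclass, field

@dataclass
class AbuseScenario:
    threat_actor: str      # e.g. "abusive ex-partner", "stalker", "employer"
    misuse: str            # how the feature could be turned against someone
    affected_group: str    # who bears the harm
    mitigation: str = ""   # product change, safeguard, or guardrail

@dataclass
class AbusabilityReview:
    product: str
    scenarios: list[AbuseScenario] = field(default_factory=list)

    def unmitigated(self) -> list[AbuseScenario]:
        return [s for s in self.scenarios if not s.mitigation]

review = AbusabilityReview("item tracker")
review.scenarios.append(AbuseScenario(
    threat_actor="abusive partner",
    misuse="plant the tracker in a victim's car or bag",
    affected_group="domestic abuse survivors",
    mitigation="alert nearby phones about unknown trackers",
))
assert not review.unmitigated()  # ship only when every scenario is addressed
```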

Technology is immensely powerful, and it can bring many positive transformations. However, humans must always be the focus: no matter how advanced and innovative a technology is, there should always be adequate constraints and mechanisms to support humans and prevent harm.

Of course, it is not only the responsibility of engineers. Regulation should be tougher and more specific on unfair data uses. But this will be a topic for another edition of the newsletter.

-

Before you go:

See you next week. All the best, Luiza Jarovsky

Peter Cranstone

CEO @ 3PMobile | Reimagining Digital Engagement | Low-cost Growth Engine for Web-based Businesses | Harnessing the Power of Digital Ecosystems through Consumer Choice.


My thought is a simple one - what is the incentive to solve the engineering challenge? Who cares? If you're going to solve the privacy challenge, you have to start elsewhere and create a cryptographic framework for identity and legal rights management. You have to be able to prove in a court of law that this data is mine and this identity is me. Until you can do that, everything has the potential to be fake. I would call it the KME platform - as opposed to the SWIFT KYC platform (Know Your Customer). I have to be able to own my KME environment. No one is going to build that until there is a financial incentive, i.e., a business model that aligns privacy and monetization. Until then, welcome to the deepfake and other creepy crawlies landscape. My best, Peter.

????? ?????

Quality & information security manager at Taldor


Historically, every technological innovation has carried some regression in its wings, but it also brings new developments to curb its risks. Let's hope that privacy protection laws, as well as privacy protection techniques, will mitigate the risks of the new gadgets and machine learning technologies you have described in your very wise and true article.

Christine Axsmith

Cyberstalking, Privacy, AI Policy Writer, with a little Royal Gossip


A few thoughts come to mind. Technology creators cannot see how their inventions will be used in the future: when YouTube was introduced, everyone laughed at the idea that people would make videos of themselves and post them on the Internet. Privacy impact analyses are an important part of design, but there also need to be external limits on the uses of new technologies.

Amalia Barthel, CIPM, CIPT, CRISC, CISM, PMP, CDPSE

Standards Council of Canada (SCC) Member | AI Risk Assessments | DPIAs | Privacy management programs | AI & Privacy Engineer | Lecturer, Instructor & Advisor | U of Toronto SCS | Digital Governance, Risk & Privacy Coach


Luiza - I am working with clients to actually define the privacy engineering process for project and product management teams. Would you be interested in creating a workshop together? I feel that this is the bottleneck. Lots of medium-size organizations do not have the resources to "understand" this properly, never mind define it in an implementable way. Would love to collaborate on this.
