Key Idea 10 - people should [REDACTED] privacy [REDACTED] consent to [REDACTED] AI
Hopefully I can get away with a bit of humor in the heading of today's article, but make no mistake: I will (in some small way) address a very serious topic this week.
Before we go any further, some important disclaimers:
The key word in that last point is ambition because, finally, I am also perfectly aware that I may fail even at piquing your interest.
With all that out of the way, I must admit that I struggle to even articulate the idea I'd like to convey.
I think it has something to do with highlighting that people should have agency, or control, over how AI (more generally, technology that can run at scale) is used to identify and authenticate them. For instance, when voice authentication technology is applied to recordings of their voice.
Today's key idea is mainly a call for people to think about their rights when it comes to their privacy, and how AI technologies may be used on their person.
We should probably be asking the individuals themselves (i.e., obtaining their consent) whether we can do this (this = voice authentication, Face ID, cookies, etc.), be clear when it is happening, and explain for what reason(s).
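To make that "asking first" idea concrete, here is a minimal sketch of what a default-deny consent record might look like in code. Everything here is my own illustration, not any real system's API: the class names, purposes, and fields are assumptions. The one design point that matters is that the absence of a record (or a revoked one) means no processing.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical purposes a person might consent to; names are illustrative only.
VOICE_AUTH = "voice_authentication"
FACE_ID = "face_id"

@dataclass
class ConsentRecord:
    """One explicit grant: who, which technology, why, and when."""
    user_id: str
    purpose: str
    reason: str
    granted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    revoked: bool = False

class ConsentRegistry:
    """Answers the question: may we run this technology on this person?"""

    def __init__(self):
        self._records: list[ConsentRecord] = []

    def grant(self, user_id: str, purpose: str, reason: str) -> ConsentRecord:
        rec = ConsentRecord(user_id, purpose, reason)
        self._records.append(rec)
        return rec

    def revoke(self, user_id: str, purpose: str) -> None:
        for rec in self._records:
            if rec.user_id == user_id and rec.purpose == purpose:
                rec.revoked = True

    def is_permitted(self, user_id: str, purpose: str) -> bool:
        # Default-deny: no matching, unrevoked record means no processing.
        return any(
            rec.user_id == user_id and rec.purpose == purpose and not rec.revoked
            for rec in self._records
        )

registry = ConsentRegistry()
registry.grant("alice", VOICE_AUTH, reason="caller verification for phone banking")
print(registry.is_permitted("alice", VOICE_AUTH))  # True: explicitly granted
print(registry.is_permitted("alice", FACE_ID))     # False: never asked
registry.revoke("alice", VOICE_AUTH)
print(registry.is_permitted("alice", VOICE_AUTH))  # False: consent withdrawn
```

Note that the `reason` field is mandatory: the sketch forces the organisation to state, at grant time, why the technology is being used, which is exactly the "for what reason(s)" transparency argued for above.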
But there are serious concerns around using technology, invasively, at scale, especially when it begins to encroach on individuals' privacy and, consequently, their freedom.
On one extreme, we slide down that slippery slope all the way to an Orwellian surveillance state. But the opposite extreme, where we can never be sure of who we're dealing with, is a problem too. At some point, in some situations, we as a society have legitimate cause to be able to confidently identify and authenticate individuals.
Think of crossing a border with your passport. Or if you need to apply for a large loan/mortgage. There are surely countless scenarios you can think of: in the event of sensitive circumstances or transactions, there arises a need to establish trust among individuals within our society.
I spent the past two months or so sharing what I considered to be important ideas on deepfakes, through the prism of voice authentication technology. While I and several of my colleagues tried to offer real world context around these technologies, the focus has still mainly been a technological one.
We argued in this series that, given the rise of deepfake voices that are indistinguishable, to the human ear, from real ones, this was an opportunity to talk about the tools available to us today to protect us from the harms of such deepfakes.
So if we've done a good job of convincing you of the merits of voice authentication technology, I thought it appropriate to pause and highlight the need for your permission to be granted for the use of such technology.
Unfortunately, the bad guys will not ask for your permission to clone your voice when they try to use it to break into your bank account, or defraud your coworkers or family members.
Be that as it may, there are rules on the use of people's personal data, especially when it begins to enter the realm of biometric data, and those rules are there for good reason.
People should have agency, or control, to approve the use of technologies that can be invasive in terms of privacy
My personal hope for this series was to share useful information on the benefits of a technology such as voice authentication and the accompanying spoofing countermeasures (aka deepfake detection). I do think that if we are genuinely worried about the threat of audio deepfakes, then these tools have become more important than ever before.
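As a sketch of how voice authentication and a spoofing countermeasure might work together, consider decision logic that accepts a caller only when the speaker-verification score is high enough and the deepfake-detection score is low enough. The function name, score ranges, and thresholds below are all my own illustrative assumptions, not any vendor's actual API; real systems tune these thresholds on evaluation data.

```python
def authentication_decision(speaker_score: float, spoof_score: float,
                            accept_threshold: float = 0.8,
                            spoof_threshold: float = 0.5) -> str:
    """Combine speaker verification with a deepfake (spoofing) check.

    speaker_score: similarity to the enrolled voiceprint, in [0, 1] (assumed).
    spoof_score:   likelihood the audio is synthetic, in [0, 1] (assumed).
    Thresholds are illustrative placeholders, not production values.
    """
    if spoof_score >= spoof_threshold:
        # The countermeasure fires first: a perfect voice match means
        # nothing if the audio itself appears machine-generated.
        return "reject: suspected deepfake"
    if speaker_score >= accept_threshold:
        return "accept"
    return "reject: speaker not verified"

print(authentication_decision(0.95, 0.1))  # accept
print(authentication_decision(0.95, 0.9))  # reject: suspected deepfake
print(authentication_decision(0.40, 0.1))  # reject: speaker not verified
```

The ordering is the point: the spoofing check vetoes the match, because a cloned voice can score highly against the genuine speaker's voiceprint.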
But it's for you to decide.
Some resources
The same disclaimers I listed above apply here. This is an informal collection of resources to maybe get you started on the topic of privacy. It is by no means complete or comprehensive, and heck, this may not even be a great list. But we try.
Of course I am not affiliated with any of the below institutions or individuals in any manner, and they've probably never heard of me.
Finally, for perhaps lighter but nonetheless insightful and engaging reading, I highly recommend the book Your Face Belongs to Us by New York Times journalist Kashmir Hill. Hill takes us through, yep, an Orwellian odyssey about a large-scale facial recognition system that is still in use today, and asks whether its use has always been fair/just/legitimate/not-creepy. The book was published just last year, I believe, so it is super current.
Comment (Authentic Leadership | Strategic Transformation | Financial Services | Risk Management | Customer Contact | Biometric Security), 1 month ago:
At a practical level, organisations that succeed with biometric security recognise the value of consent, as well as of putting care into the process of securing it. Your series has been clear on how critical such solutions are in keeping customers secure. However, we live in an age of skepticism and low trust, and the consent conversation is where you can make a positive contract with your customer: build their trust, sell the value, and work through any objections or worries. And your regulators will be happier too. It's been a wonderful series Haydar - thank you.
Comment (Experienced Tech Lead), 1 month ago:
You are going too far with this :) Even basic personal-information handling is pretty bad everywhere, in general. Each company tries to collect tons of irrelevant data and then fails to store it. I am not even talking about real compliance, just common-sense security. Try to at least get a copy of your personal information from the average business in Canada, or request to delete your account: good luck. We lack very basic control over very basic personal information.