Prove You're Not An AI
Jonathan Salem Baskin
I run a tech comms consultancy and write essays, books, and musicals.
A group of AI research luminaries has declared the need for tools that distinguish human users from artificial ones.
Such “Personhood Credentials,” or PHCs, would help people protect themselves from the privacy and security threats, not to mention the proliferation of falsehoods online, that will almost certainly come from a tidal wave of bots that get ever better at impersonating people.
Call it a Turing test for people.
Of course, whatever the august body of researchers comes up with won’t be as onerous as a multiple-choice questionnaire; PHCs will probably rely on some cryptobrilliant tech that works behind the scenes and finds proof of who we say we are in the cloud (or something).
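For illustration only, here’s a minimal sketch of what “behind the scenes” might amount to, assuming a PHC is little more than a signed, expiring attestation from some trusted issuer. Everything here (the names issue_phc and verify_phc, the shared-secret HMAC scheme) is hypothetical; a real system would presumably use asymmetric signatures or zero-knowledge proofs rather than anything this crude.

```python
import hashlib
import hmac
import json
import time

# Hypothetical issuer secret for demo purposes only; a real scheme
# would use asymmetric keys or zero-knowledge proofs, not a shared secret.
ISSUER_SECRET = b"demo-only-not-a-real-key"

def issue_phc(subject_id: str, ttl_seconds: int = 86400) -> dict:
    """Issue a toy personhood credential: a claim plus an HMAC signature."""
    claim = {"sub": subject_id, "exp": time.time() + ttl_seconds}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify_phc(credential: dict) -> bool:
    """Check signature and expiry. Note what this does NOT check:
    how the issuer decided the subject was human in the first place."""
    payload = json.dumps(credential["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, credential["sig"]):
        return False
    return credential["claim"]["exp"] > time.time()

cred = issue_phc("alice")
print(verify_phc(cred))  # True, until it expires or is tampered with
```

Even in this toy form, the limitation is visible: the cryptography only proves the credential is intact and unexpired, not that a human is actually behind the keyboard when it’s presented.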
I’m not convinced it’ll work, or that it’s intended to work on the problem it claims to address.
PHCs probably won’t work, or won’t work consistently, for starters, because they’ll always be in a race with computers that get better at hacking security and pretending to be human. The big money, both in investments and potential profits, will be on the hackers and imposters.
Even though such machines don’t exist yet, the future security threats of quantum computing are so real that the US government has already issued post-quantum encryption standards to combat capabilities it imagines hackers might have in, say, a decade, because once those machines are invented, the bad guys will be able to retroactively decrypt data captured today.
Think about that for a moment, Mr. Serling.
Now, imagine the betting odds on correctly identifying what criminal or crime-adjacent quantum tech might emerge sometime after 2030. There’s a very good chance that today’s PHCs will be tomorrow’s laserdiscs.
Add to that the vast amounts of smarts and money working on inventing AGI, or Artificial General Intelligence: computers that don’t just mimic human cognition but possess something equal to it or better. At least half of a huge sampling of AI experts concluded in 2021 that we’d have such computers by the late 2050s, and that wait time has shortened each time they’ve been canvassed.
What’ll be the use of a credential for personhood if an AGI-capable computer can legitimately claim it?
And then there are other implications of PHCs that may also be part of an ulterior purpose.
If they do get put into general use, they will never be used consistently. Some folks will neglect to comply or fail to qualify. Some computers will do a good enough job of faking personhood to get them, perhaps with the aid of human accomplices.
Just think of the complexities and nuisance people already experience trying to resolve existing online identity problems, credit card thefts, and medical billing issues. PHCs could make us look back fondly on them.
Anybody who claims that such inanities couldn’t happen because some inherent quality of technology, whether extant or planned, will prohibit them is either a liar or a fool. Centuries of tech innovation have taught us that we should always consider the worst things some new gizmo might deliver, not just the best ones.
Never say never.
Plus, a side-effect of making online users prove that they’re human is that it will become a litmus test for accessing services, sort of like CAPTCHA, only on steroids. Doing so will also make the data marketers capture on us more reliable. And it’ll make it easier to surveil us.
After all, what’s the point of monitoring someone if you can’t be entirely sure that they’re someone worth monitoring?
This is where my tinfoil hat worries seep into my thinking: What if the point of PHCs is to obliterate whatever remaining vestiges of anonymity we possess?
I’ll leave you with a final thought:
We human beings have done a pretty good job of lying, cheating, and otherwise being untruthful with one another since long before the Internet. History is filled with stories of scams based on people pretending to be someone or something they’re not.
Conversely, there’s this assumption underlying technology development and use that it’s somehow more trustworthy, perhaps because machines have no biases or personal agendas beyond those that are inflicted on them by their creators. This is why there’s so much talk about removing those influences from AI.
If we can build reliably agnostic devices, they’ll treat us more fairly than we treat one another.
So, maybe we need PHCs not to identify who we want to interact with, but to warn us away from who we want to avoid?
[This essay appeared originally at Spiritual Telegraph]
Thought provoking article, Jonathan. It's pretty clear we're going to need a solution to know whether or not we are interacting with bots and to keep data secure, but I think who is going to be in charge of this might be the bigger question in my mind. Based on who's running it, how widely would it even be adopted by governments, digital platforms, etc. if they don't themselves control it? And if it's not universally adopted, can the technology even be applied consistently enough to be trusted? When coughing up sensitive data like passports or biometrics to obtain the PHCs, can we trust that the issuers will not collect more personal data than they claim? As for your final thought about determining who we want to avoid, I wonder who exactly would be deciding that, and with what motivation? If they can get these off the ground, perhaps more concerning is which existing large internet entity would now be enabled to gain even more control over our digital lives. I think I just put on my tinfoil hat too.