Prove You're Not An AI

A group of AI research luminaries has declared the need for tools that distinguish human users from artificial ones.

Such “Personhood Credentials,” or PHCs, would help people protect themselves from the privacy and security threats, not to mention the proliferation of falsehoods online, that will almost certainly come with a tidal wave of bots that get ever better at impersonating people.

Call it a Turing test for people.

Of course, whatever the august body of researchers comes up with won’t be as onerous as a multiple-choice questionnaire; PHCs will probably rely on some cryptobrilliant tech that works behind the scenes and finds proof of who we say we are in the cloud (or something).

I’m not convinced it’ll work, or that it’s intended to work on the problem it claims to address.

PHCs probably won’t work, or won’t work consistently, for starters, because they’ll always be in a race with computers that get better at hacking security and pretending to be human. The big money, both in investments and potential profits, will be on the hackers and imposters.

Even though quantum computers don’t exist yet, their future security threats are so real that the US government has already issued standards to combat capabilities it imagines hackers might possess in, say, a decade, because once those machines are invented, the bad guys will be able to retroactively decrypt data they’ve already stolen (a strategy known as “harvest now, decrypt later”).

Think about that for a moment, Mr. Serling.

Now, imagine the betting odds on correctly identifying what criminal or crime-adjacent quantum tech might emerge sometime after 2030. There’s a very good chance that today’s PHCs will be tomorrow’s laserdiscs.

Add to that the vast amounts of smarts and money working on inventing AGI, or Artificial General Intelligence: machines that don’t just mimic human cognition but possess something equal to or better than it. At least half of a huge sampling of AI experts concluded in 2021 that we’d have such computers by the late 2050s, and that wait time has shortened each time they’ve been canvassed.

What’ll be the use of a credential for personhood if an AGI-capable computer can legitimately claim it?

And then there are other implications of PHCs that may also be part of an ulterior purpose.

If they do get put into general use, they will never be used consistently. Some folks will neglect to comply or fail to qualify. Some computers will do a good enough job to get them, perhaps with the aid of human accomplices.

Just think of the complexities and nuisance people already experience trying to resolve existing online identity problems, credit card thefts, and medical billing issues. PHCs could make us look back fondly on them.

Anybody who claims that such inanities couldn’t happen because some inherent quality of technology, whether extant or planned, will prohibit it is either a liar or a fool. Centuries of tech innovation have taught us that we should always consider the worst things some new gizmo might deliver, not just the best ones.

Never say never.

Plus, a side effect of making online users prove they’re human is that it will become a litmus test for accessing services, sort of like CAPTCHA on steroids. Doing so will also make the data marketers capture on us more reliable. It’ll also make it easier to surveil us.

After all, what’s the point of monitoring someone if you can’t be entirely sure that they’re someone worth monitoring?

This is where my tinfoil hat worries seep into my thinking: What if the point of PHCs is to obliterate whatever remaining vestiges of anonymity we possess?

I’ll leave you with a final thought:

We human beings have done a pretty good job of lying, cheating, and otherwise being untruthful with one another since long before the Internet. History is filled with stories of scams based on people pretending to be someone or something they’re not.

Conversely, there’s this assumption underlying technology development and use that it’s somehow more trustworthy, perhaps because machines have no biases or personal agendas beyond those that are inflicted on them by their creators. This is why there’s so much talk about removing those influences from AI.

If we can build reliably agnostic devices, they’ll treat us more fairly than we treat one another.

So, maybe we need PHCs not to identify who we want to interact with, but to warn us away from who we want to avoid?

[This essay appeared originally at Spiritual Telegraph]

Thought-provoking article, Jonathan. It's pretty clear we're going to need a solution to know whether or not we are interacting with bots and to keep data secure, but I think who is going to be in charge of this might be the bigger question. Based on who's running it, how widely would it even be adopted by governments, digital platforms, etc. if they don't themselves control it? And if it's not universally adopted, can the technology even be applied consistently enough to be trusted? When coughing up sensitive data like passports or biometrics to obtain the PHCs, can we trust that they will not collect more personal data than they claim? As for your final thought about determining who we want to avoid, I wonder who exactly would be deciding that, and with what motivation? If they can get these off the ground, perhaps more concerning is which existing large internet entity would now be enabled to gain even more control over our digital lives. I think I just put on my tinfoil hat too.
