Should Bots have to show ID?
Kevin King
Startup Business Development @ Amazon Web Services (AWS) | All-Star Mentor @ Techstars | Product/Operations/Customer Experience
In the land of bots, I've been a big fan. The first bot experience that blew my mind was with Amy, from Dennis and the good folks at x.ai.
Amy scheduled a meeting for me so easily, handling the ping-ponging back and forth like a real person, that I hounded the folks over there until I made it into their private beta. And the results did not disappoint. (Every once in a while Amy would go off the rails, but when she worked, she worked better than anything you could imagine, and apparently well enough to even be asked out on a couple of dates.)
More recently, and totally unrelated to Amy, something not so wonderful happened to a friend of mine. On Facebook Messenger, unprompted, he was asked by what looked like a phantom account:
"Just curious, based on your last name, who you are voting for in the election?"
Racist, discriminatory, all of the above. A horrible act that certainly ruined a good portion of this friend's day and led to a lengthy discussion thread on Facebook.
Then it occurred to me: it wouldn't be that hard to mine Facebook for some pertinent targeting data and ask everyone in a large group the same question at the exact same time.
So instead of an isolated incident of racism (and one is already one too many), thousands and thousands of people are activated at the same time. If 5% of them have an incredibly strong reaction, and because many of their friends are targeted at the same moment, there is no outlet to help 'deflate the energy' from the hateful incident. Instead, the heightened agitation amplifies the reaction 10x, 100x, 1000x, 10,000x.
This really bothered me for weeks, and I was slow to post anything about it. Eventually I landed on a question: would it be so bad if bots had to identify themselves? No one from the Jetsons cared that Rosie was clearly a robot. I certainly don't care that Marsbot (from Foursquare) is a bot designed to provide me with information automagically and tirelessly (and it's awesome). And I don't think I'd be any less impressed if Amy had stood up and said, 'Hi, I'm Amy, your friend <name>'s personal calendar bot. I'm going to help you get on their calendar super fast and extremely easily, just like a real human being would.'
I wondered if the 'doomsday bot' scenario I described above would be less likely to get out of control if everyone receiving the message understood it came from a machine. Still hateful. Still hurtful. (That remains unchanged, and it's an issue we are grappling with in a significantly more visible way lately.) But maybe knowing the sender is a machine is enough to push pause on a super-amplified 'hate response.'
Then I read this in the Washington Post about an NYU graduate assistant who used bots to try to influence people using harassing language, primarily by pointing out that there is a real person on the other end of that tweet. The results were pretty interesting. I'd be very interested in running the experiment again, but with the bots self-identified as bots, even though identifying them as bots would seem to neuter or greatly reduce their impact.
Yes. Bots should have ID. And targeted mass messages on Messenger from anonymous users shouldn't be allowed. Messenger is a personal space, like text messaging and phone calls, and anonymity isn't appropriate there. The current workaround is changing your privacy settings. The Washington Post article about people with (perceived) influence changing behavior by using bots is worth a read. Possibly a solution to trolling...