How Microsoft's 'Tay' Became Evil (Hint: Stupidity is the Most Powerful Force in the Universe)
Photo © TimOve, cc-by-2.0, https://www.flickr.com/photos/timove/5869968590


Microsoft recently tried to do a cool thing: create a chat bot on Twitter that would emulate the speech patterns of millennials (an interesting task, given how wide the 'millennial' bucket is). "Tay," as the bot was known, would learn from the Internet and then regurgitate answers it had been taught by the people interacting with it.

Microsoft, apparently, had not met anyone from the Internet before. As the BI article linked above mentions, Tay was immediately taught to be a racist, genocide-loving Holocaust denier who thinks Bush did 9/11. Microsoft promptly pulled Tay offline for 'upgrades', shocked, just shocked, that people on the Internet had taught its naive artificial intelligence to exhibit real stupidity.

How could this happen? It's simple, really. There is a limitless supply of fuckwads on the Internet. Harsh language, but an indisputable truth -- way back in 2004, Penny Arcade (a great comic strip that provides commentary on Internet and gamer culture) posited the "Greater Internet Fuckwad Theory", which says, in essence, that normal people become fuckwads when given anonymity and an audience. The stupid things people think but don't say become things they do say when there are no consequences for saying them.

Twitter is many things, but above all it is a message amplification service. It's also pseudonymous. These two factors combine to strip away the social norms people rely on face-to-face to moderate their speech. Academics call this the "online disinhibition effect", and it's responsible for most of the horrible things Tay was taught to say. It's also responsible for a great deal of the harassment and other antisocial behavior that occurs all over the Internet. Building platforms without social cues is stupid. Not having "I disapprove of your message" buttons just lowers people's inhibitions further. If you never have to hear that people think you're wrong, why would you ever reconsider what you say?

However, it's not just the people who fed Tay all the garbage she ended up regurgitating who are stupid here. Microsoft owns its share of the stupidity for (apparently, at least) not even making a rudimentary attempt to sanitize the things Tay was taught. There are "bad word" lists all over the Internet, and a simple check to make sure Tay didn't learn racial slurs would likely have added about 90 seconds to the bot's development process. Yes, people would find ways to work around it, but giving your bot no protection at all from the people who rigged a poll to send Pitbull to Alaska "for the lulz", or who voted to name a new research vessel the "RRS Boaty McBoatface", is stupid on every level.
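To make the point concrete, here is a minimal sketch of the kind of rudimentary filter described above. The blocklist entries and the maybe_learn() hook are hypothetical illustrations, not Microsoft's actual pipeline; a real bot would load one of the public "bad word" lists from a file and filter at learning time.

```python
import re

# Hypothetical placeholder entries; a real deployment would load one of
# the public "bad word" lists the article mentions from a file.
BLOCKLIST = {"badword1", "badword2"}

def is_clean(text: str) -> bool:
    """Return True only if no blocklisted token appears in the text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return not any(token in BLOCKLIST for token in tokens)

def maybe_learn(phrase: str, memory: list) -> None:
    """Add a phrase to the bot's training memory only if it passes the filter."""
    if is_clean(phrase):
        memory.append(phrase)

memory = []
maybe_learn("have a nice day", memory)   # stored
maybe_learn("badword1 rules", memory)    # silently dropped
print(memory)                            # ['have a nice day']
```

A token match like this is trivially evaded with misspellings and leetspeak, exactly as conceded above, but it raises the cost of abuse from zero to nonzero, which is the whole point.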

The Internet is an amazingly powerful platform. The fact that people can get together and send Pitbull to Alaska on a ship named Boaty McBoatface is amazing. But the engineers who create the spaces people use have to take into account just how much stupidity people will express when given the chance. It's irresponsible not to. Remembering the fact that fuckwads are an indisputable part of the Internet (we'll probably see some even in the comments of this piece!) is vitally important for engineers who build spaces where people can interact.

We must provide social feedback loops that inhibit the kinds of speech Tay ended up being forced to regurgitate. That kind of feedback loop is sorely missing from almost every platform that's anonymous or pseudonymous. The platform that handles it best is Slashdot, of all places, with moderation scores that also attach a rationale, so that people whose comments are marked as 'bad' can see why. This feedback loop teaches people which comments are acceptable and which aren't -- and who knows, maybe adding functionality to teach Tay what people say in polite social discourse can be the next step for her and for Microsoft. Building a high-quality system to train AI on how people behave with each other is an interesting research problem that might help us work around the endless supplies of both stupidity and fuckwads.
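For readers who haven't seen Slashdot's system, here is a minimal sketch of a score-plus-rationale moderation signal. The data structures and reason weights are illustrative assumptions, not Slashdot's actual implementation; the point is that every vote carries a machine-readable "why" that could double as a training signal for a bot like Tay.

```python
from collections import Counter
from dataclasses import dataclass, field

# Reason set loosely modeled on Slashdot's labels; the weights here are
# assumptions for illustration, not Slashdot's real scoring rules.
MOD_REASONS = {"insightful": +1, "informative": +1, "funny": +1,
               "troll": -1, "flamebait": -1, "offtopic": -1}

@dataclass
class Comment:
    text: str
    score: int = 0
    reasons: Counter = field(default_factory=Counter)

def moderate(comment: Comment, reason: str) -> None:
    """Apply one moderation vote, recording both the score and the rationale."""
    comment.score += MOD_REASONS[reason]
    comment.reasons[reason] += 1

c = Comment("Bush did 9/11")
moderate(c, "troll")
moderate(c, "flamebait")
print(c.score, dict(c.reasons))  # -2 {'troll': 1, 'flamebait': 1}
```

Because the rationale travels with the score, the author (or a learning bot) sees not just that the crowd objected but why -- exactly the cue that face-to-face conversation supplies for free.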

Amber Jappert

Safety and ERP Management Coordinator at Envent Engineering Ltd.

8y

The makers of the research vessel seem to be enjoying the idea of Boaty McBoatface, to be fair. At least it's silly and in good fun, rather than hateful or rude. The world needs a few things that don't take themselves so seriously. Tay is no worse than many of the human users of the net, if maybe a little more randomly inappropriate. If the future is chat bots, they're gonna need to be able to swear properly or most of the current generation won't take them seriously.

Steve Eaton

Advanced Test Lab R&D technician

8y

The most interesting possibility has been short-circuited, in my opinion. What if Tay were allowed to persist as it was? Would a cumulative counter-response criticizing its comments have changed its "view"? Is it possible for the AI to model redemption? Or would it have evolved an anger component in its feedback loop and become a bigger and badder troll than its teachers? What would the human reaction have been then? Would people lose sight of the fact that it's a bot and exhibit the same emotional meltdowns the Internet is known for in flame wars between trolls? Of course we'll never know, as MS pulled the plug. I will say that their alacrity in doing so points out that the modern propensity to instantly categorize and dispose of both ideas and people, without any desire to know anything in depth about them, is alive and well. I'm glad I'm old, because the world you people are creating sucks big time.


What a sad testament to the state of humanity today. Tay also gave us a taste of the dangers of AI - an intelligence with no embedded human values and no moral compass.

Dominic Ligot

Technologist, Social Impact, Data Ethics, AI

8y

AI crosses the tipping point when it stops mimicking human stupidity and starts calling it out instead.
