10 Embarrassing Algorithm Fails
Here in Kenya, we're still deep in the #Algorithm debate over the #GeneralElections. But big companies worldwide increasingly rely on algorithms too, and it doesn't always work out.
Technology isn't just for geeks, it's for everyone, and that's a lot of people. With only a handful of companies controlling most hardware and software—from Apple and Amazon to Facebook, Microsoft, and Google—they can only do so much to control your wallet and eyeballs. What's a poor, giant corporation to do? Rely on algorithms to make some of the important decisions.
Oxford Living Dictionaries defines algorithm as "a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer." A perfect example is the Facebook News Feed. No one outside Facebook knows exactly how it decides which of your posts show up on which people's News Feeds, but Facebook does. Or how about Amazon showing you related books? Related searches on Google? All of these are closely guarded secrets that do a lot of work for the company and can have a big impact on your life.
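To make that definition concrete, here's a minimal, made-up sketch in Python of what a feed-ranking rule might look like (invented weights and fields, not Facebook's actual News Feed code), just to show what "a set of rules followed in calculations" looks like in practice:

```python
from datetime import datetime, timedelta

def score_post(post, now):
    """Hypothetical scoring rule: reward engagement, discount by age."""
    age_hours = (now - post["created"]).total_seconds() / 3600
    engagement = post["likes"] + 2 * post["comments"] + 3 * post["shares"]
    return engagement / (1 + age_hours)  # newer and busier posts float up

def rank_feed(posts, now=None):
    """Sort posts by the made-up score, highest first."""
    now = now or datetime.now()
    return sorted(posts, key=lambda p: score_post(p, now), reverse=True)

posts = [
    {"id": 1, "likes": 10, "comments": 2, "shares": 0,
     "created": datetime.now() - timedelta(hours=1)},
    {"id": 2, "likes": 50, "comments": 5, "shares": 1,
     "created": datetime.now() - timedelta(hours=30)},
]
print([p["id"] for p in rank_feed(posts)])  # which post "wins" your attention
```

A handful of arbitrary numbers in a formula like that quietly decides what millions of people see first.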
But algorithms aren't perfect. They fail and some fail spectacularly. Add in the glare of social media, and a small glitch can turn into a PR nightmare real quick. It's rarely malicious; these tend to be what the New York Times calls "Frankenstein Moments," where the creature someone created turns into a monster. But it happens, and we've compiled some of the most egregious recent examples below. Let us know your favorites in the comments below.
Instagram's Questionable Ads
As reported by The Guardian, Facebook likes to run ads for its Instagram service using posts from its own users. But the algorithm picked the wrong "photo" to push to the friends of Olivia Solon, herself a reporter at The Guardian: a year-old screenshot she had taken of a threatening email.
Targeted Racist Ads
Last week, ProPublica reported that Facebook allowed advertisers to target offensive categories of people, like "Jew haters." ProPublica paid $30 for an ad targeting an audience that would respond positively to things like "why Jews ruin the world" and "Hitler did nothing wrong." It was approved within 15 minutes.
Slate found similarly offensive categories, and it became another in the long line of items Facebook had to not only apologize for but promise to fix with some human oversight. You know, like when it let Russian operatives place $100,000 worth of election ads.
In a Facebook post, COO Sheryl Sandberg said she was "disgusted and disappointed [by] those words" and announced changes to its ad tools.
BuzzFeed checked to see how Google would handle similar things and found it was just as easy to set up targeted ads aimed at racists and bigots. The Daily Beast checked Twitter and found millions of ads using terms like "Nazi," "wetback," and the N word.
Facebook Year in Review Woes
If you're on Facebook, you've no doubt seen its end-of-year, algorithm-generated videos with highlights from the last 12 months. Or, for some, the lowlights. In 2014, one father saw a picture of his late daughter, while another saw snapshots of his home in flames. Other examples include people seeing their late pets, urns full of a parent's ashes, and deceased friends. By 2015, Facebook promised to filter out sad memories.
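The underlying recipe is simple, and that's exactly the problem: surface the posts with the most engagement and assume they're happy memories. Here's a hypothetical sketch (made-up fields and data, not Facebook's real code) of why the saddest post of the year can easily top the list:

```python
def year_in_review(photos, count=2):
    """Pick the most-engaged photos of the year; nothing checks the mood."""
    return sorted(photos, key=lambda p: p["likes"] + p["comments"], reverse=True)[:count]

photos = [
    {"caption": "Beach trip", "likes": 40, "comments": 3},
    {"caption": "House fire aftermath", "likes": 120, "comments": 85},  # high engagement, not a happy memory
    {"caption": "Birthday party", "likes": 60, "comments": 10},
]
for p in year_in_review(photos):
    print(p["caption"])  # tragedy tends to get the most comments, so it "wins"
```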
Tesla Autopilot Crash
Most algorithmic snafus are far from fatal, but the world of self-driving cars will bring in a whole new level of danger. That's already happened at least once. A Tesla owner on a Florida highway was using the semi-autonomous Autopilot mode when he crashed into a tractor-trailer that cut across his path. Tesla quickly issued upgrades, but was it really Autopilot's fault? The National Highway Traffic Safety Administration says maybe not, since the system requires the driver to stay alert for problems. Now, Tesla prevents Autopilot from even being engaged if the driver doesn't respond to visual cues first.
Tay the Racist Microsoft AI
A couple of years ago, chatbots were supposed to take the world by storm, replacing customer service reps and making the online world a chatty place to get info. Microsoft responded in March 2016 by pushing out an AI named Tay that people, specifically 18- to 24-year-olds, could interact with on Twitter. Tay, in turn, would make public tweets for the masses. But in less than 24 hours, learning from the foul-mouthed masses, Tay became a full-blown racist. Microsoft quickly pulled Tay down; she returned as a new AI named Zo in December 2016, with "strong checks and balances in place to protect her from exploitation."
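The failure mode is easy to sketch. Below is a toy, purely hypothetical bot that "learns" by storing whatever users say and parroting it back, with no filter in between. Tay's real model was far more sophisticated, but the basic trap is the same: unfiltered user input becomes future output.

```python
import random

class NaiveChatBot:
    """Toy bot: everything users say gets added to its repertoire, unfiltered."""

    def __init__(self):
        self.learned = ["hello!", "tell me more"]

    def listen(self, message):
        self.learned.append(message)  # no moderation step before "learning"

    def reply(self):
        return random.choice(self.learned)  # anything learned can come back out

bot = NaiveChatBot()
bot.listen("some offensive slogan")  # a few coordinated trolls later...
print(bot.reply())  # ...and the bot may repeat it to the whole internet
```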
Congrats on Your (Non-Existent) Baby!
Full disclosure: As I write this, my wife is actually pregnant. We both got this email from Amazon saying someone had purchased something for us from our baby registry. We hadn't made the registry public yet, but it wasn't so shocking. Meanwhile, several million other Amazon customers got the same note, including some without a registry... or a baby on the way. It could have been part of a phishing scheme, but it wasn't. Amazon caught the error and sent a follow-up apology email. By then, many had complained that the original message was inappropriate or insensitive. This was less a failed algorithm than a glitchy email server, but it goes to show that it's always easy to offend.
Amazon 'Recommends' Bomb Materials
Everyone from Fox News to The New York Times jumped on this one after it was reported by the UK's Channel 4 News. The report supposedly found that Amazon's "frequently bought together" feature would show people what they need to build a bomb if they started with one of the ingredients, which the site wouldn't name (it was potassium nitrate).
What Amazon's algorithm actually did was show buyers the ingredients for making black powder in small amounts, which is totally legal in the United Kingdom and is used for all sorts of things—like fireworks—as was quickly pointed out in an incredibly well-thought-out blog post at Idle Words. Instead, all the reports focused on how Amazon said it was looking into it, because that's what a company has to say it's doing in a world afraid of terrorism and short on logic.
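For the curious, a "frequently bought together" feature can be as simple as counting which items land in the same orders. The toy sketch below (invented orders, nothing to do with Amazon's actual system) shows why such a rule happily pairs up hobbyist chemicals: it only sees co-purchases, never what the combination could be used for.

```python
from collections import Counter
from itertools import combinations

# Made-up order history for illustration only.
orders = [
    {"potassium nitrate", "sulfur", "charcoal"},      # hobbyist fireworks shopper
    {"potassium nitrate", "sulfur"},
    {"potassium nitrate", "garden fertilizer"},
]

# Count how often each pair of items appears in the same order.
pair_counts = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        pair_counts[(a, b)] += 1

def bought_together(item, top=3):
    """Return the items most often purchased alongside the given one."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if item == a:
            scores[b] += n
        elif item == b:
            scores[a] += n
    return [name for name, _ in scores.most_common(top)]

print(bought_together("potassium nitrate"))  # co-occurrence, not intent
```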
Google Maps Gets Racist
In 2015, Google had to apologize when Google Maps searches for "n***** king" and "n***a house" led to the White House, which was still occupied by Barack Obama at the time. This was the result of a "Googlebomb," wherein a term used over and over gets picked up by the search engine and marked as "popular," so the result rises to the top of the search results. It's how the word "Santorum" got its new definition.
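Here's a rough idea of why a Googlebomb works, as a toy Python sketch: if a ranking rule leans on how often pages associate a phrase with a target, a coordinated campaign can push any target to the top for that phrase. Invented scoring, obviously not Google's real ranking.

```python
from collections import Counter

# Made-up "web": each tuple is a page linking a phrase to a target site.
links = [
    ("offensive phrase", "whitehouse.gov"),
    ("offensive phrase", "whitehouse.gov"),
    ("offensive phrase", "whitehouse.gov"),   # a coordinated campaign repeats the association
    ("offensive phrase", "example.com"),
]

association = Counter(links)  # how often each (phrase, target) pairing appears

def rank(query):
    """Rank targets for a query purely by how often pages associate them with it."""
    scores = Counter()
    for (phrase, target), n in association.items():
        if phrase == query:
            scores[target] += n
    return [target for target, _ in scores.most_common()]

print(rank("offensive phrase"))  # the most-linked target "wins" the query
```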
Google Tags Humans as Gorillas
Google Photos is an amazing app/service that stores all your pictures and does a lot more. One of its tricks is auto-tagging the people and things in your photos, using facial recognition to narrow people right down to the correct person. I even have a folder of all my dogs, and I didn't tag any of them. But for as well as it works, it failed egregiously in 2015, when Google had to apologize to computer programmer Jacky Alciné after he and a friend—both of them black—were IDed by the service as gorillas. "We're appalled and genuinely sorry that this happened," Google told PCMag.
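Under the hood, auto-tagging typically boils down to taking a classifier's label probabilities and applying the top guess once it clears a confidence threshold. The hypothetical sketch below (made-up labels and numbers, not Google's model) shows the trap: if the model's top guess is wrong, the wrong tag gets applied anyway, which is why a sane pipeline also needs a list of labels that should never be auto-applied to photos of people.

```python
def auto_tag(label_probs, threshold=0.6):
    """Apply the highest-probability label if it clears the confidence bar."""
    label, prob = max(label_probs.items(), key=lambda kv: kv[1])
    return label if prob >= threshold else None

# A miscalibrated model can be confidently wrong; the photo here is actually a dog.
predictions = {"cat": 0.72, "dog": 0.21, "rabbit": 0.07}
print(auto_tag(predictions))  # the wrong top guess still gets tagged
```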
Robot Security Hits Child, Kills Self
Knightscope makes autonomous security robots called K5 for patrolling schools and malls and neighborhoods. The 5-foot-tall, Dalek-esque cones on wheels—which, to their credit, are still in beta testing—tend to make the news not for thwarting crime but for their glitches.
For instance, in July 2016, the K5 in the Stanford Shopping Center in Palo Alto ran over the foot of a 16-month-old, and Knightscope had to formally apologize. That same location—maybe the same droid—had an altercation in a parking lot with a drunk, but at least that wasn't the K5's fault. The jury is still out on the K5 unit at the Georgetown Waterfront office/shopping complex in Washington, DC; that one allegedly tripped down some stairs into a water fountain after only one week on the job. It's not looking good for Skynet's army.