Why Facebook and Google are giving users less, not more, control

Earlier this year I was browsing through my Facebook news feed when I came across a post by a friend about the unfortunate demise of one of his closest friends from high school. I was greatly moved by his words, and as I sat thinking about the tragic loss of a young life, I scrolled down. Right below that post was a hilarious video of a movie fight scene from the 1970s that had been shared by another friend. The “so bad it’s good” sequence helped explain why today’s producers hire specialist fight choreographers. Further below was a funny segment from a late-night show. By the time I closed the browser tab about ten minutes later, I realized just how inconsistent the experience had been. I was distraught and emotional one minute and amused and jovial the next. But these emotions, particularly my empathy for my friend’s pain, felt disingenuous. The news-feed experience itself was emotionally inconsistent; it didn’t mirror a genuine social interaction with a friend. Instead, it was more like watching a movie in which a lot of drama had been packed into the time available.

This is by no means the only unusual thing about the way my news feed is organized. Posts by friends about important social and political issues of the day appear to have the same weight as pictures posted by friends at airport business lounges. Posts about sports figures and sporting events of particular interest to me appear to be weighted the same as a picture of the food a friend is eating at a restaurant that evening. I wish there were a way to prioritize genuine personal stories from friends or the articles they share about arts, sports, technology, and entrepreneurship over some of the other content I see on Facebook. Given that the news-feed algorithm already uses rules to determine which posts to show users and in what order, it would seem logical to provide users with an extension that would enable them to personalize many of those rules to their unique tastes and preferences.

I spoke about this issue with a friend at Facebook, and not surprisingly, I wasn’t the first person to have thought about it. In 2015 Facebook began asking people what they wanted out of their news feeds. The answers varied considerably: some wanted to hear more about what friends, family, colleagues, and roommates were up to; others asked for less minutiae — especially information about relationship status changes and profile updates — and instead to be notified when a friend had posted something of substance on his or her wall. In response, Facebook launched a feature that gave people greater control over their news feeds. Using mixer-style controls — the sort that a DJ might use to increase treble and lower bass on a musical track — users could adjust a few sliders to receive, say, more wall post updates and fewer updates about relationship status and profile changes, or vice versa. Users could also choose to learn “More about these friends” or “Less about these friends.”
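
To make the idea concrete, here is a minimal sketch of how mixer-style controls over a ranking rule might work in principle. The post categories, slider weights, and scoring function below are illustrative assumptions of mine, not a description of Facebook’s actual news-feed algorithm, which is far more complex and not public.

```python
# A toy, hypothetical model of mixer-style feed controls: each slider scales
# the base relevance score that a ranking model assigns to a post category.
# This is NOT Facebook's actual algorithm; categories and weights are invented.

from dataclasses import dataclass


@dataclass
class Post:
    author: str
    category: str      # e.g. "wall_post", "relationship_status", "profile_update"
    base_score: float  # relevance score from some underlying ranking model


def rank_feed(posts: list[Post], slider_weights: dict[str, float]) -> list[Post]:
    """Order posts by base relevance scaled by the user's slider settings."""
    return sorted(
        posts,
        key=lambda p: p.base_score * slider_weights.get(p.category, 1.0),
        reverse=True,
    )


# Slider positions chosen by the user: 0.0 mutes a category, 2.0 boosts it.
sliders = {"wall_post": 1.5, "relationship_status": 0.3, "profile_update": 0.5}

feed = [
    Post("Alice", "wall_post", 0.6),
    Post("Bob", "relationship_status", 0.9),
    Post("Carol", "profile_update", 0.7),
]

for post in rank_feed(feed, sliders):
    print(post.author, post.category)
# Prints Alice, then Carol, then Bob: relationship-status updates sink even
# though the model scored Bob's post highest.
```

In this framing, the 2015 feature amounted to exposing a handful of such weights to users while everything else about the ranking stayed hidden.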

“Maybe one of your friends is dominating your News Feed by always writing boring notes on what she ate yesterday,” a Facebook engineer wrote in a blog post about the new tool. Request a tweak to the algorithm via your user preferences page, and “we’ll try not to subject you to any more of her culinary ramblings.” On the surface, it seemed a friendly move to help customers screen out annoying acquaintances and keep up-to-date with information they genuinely cared about. Below the surface, of course, lay business logic: if users were more likely to engage with the news feed, there was greater potential to pull in advertising dollars.

Unfortunately, when Facebook’s engineers examined the usage data after introducing the mixer-style controls, they found that engagement (measured by the number of likes, comments, and clicks on posts, as well as time spent on Facebook) fell. Contrary to expectations, users were interacting less with their news-feed posts than they had before. And surprisingly, although engagement measures fell, people who did customize their news feeds felt more positive about the algorithm.

The decrease in engagement led Facebook to eliminate some of the new features, but also raised many interesting questions. If Facebook’s original algorithm was better at predicting the content with which users were likely to engage, why did users resist it in the first place? And why did engagement drop with the customized algorithm even as user trust and satisfaction improved?

It could be argued that Facebook’s algorithm knows what we want better than we do ourselves. Human psychology drives us to click on the very items that we would like to think we’re above; as much as we prefer to believe we have a taste for interesting think pieces, we often succumb to our baser desires (finding out who’s dating whom). We could also give the Facebook users who employed these mixer controls some credit: maybe they turned down the volume on roommate relationship status reports because they knew that these were the very sorts of news-feed posts that would tempt them into lingering and clicking; in other words, they wanted to engage less with the network, and adjusted the algorithm in a way that would help them do just that.

There is a third possibility, one that moves us beyond the unlikely scenario of companies such as Facebook merely accepting the fact that customers want to engage less with their products. This explanation zeroes in on that moment when Facebook users clicked through to their News Feed Preferences page, thought about what they wanted from the app, and took steps to personalize it. It has to do, in other words, with control.

Researchers Berkeley Dietvorst, Joe Simmons, and Cade Massey — colleagues of mine who previously showed that we are less likely to trust algorithms once we see them fail — designed an experiment in which subjects were tasked with estimating how well high school students would perform on standardized tests. The study subjects were allowed to use advice from an algorithm (a mathematical model that predicted student scores based on historical patterns) if they wished, and they were allowed to see which factors the algorithm considered when making its own predictions.

The subjects were split into four groups: one group was not allowed to change the estimates provided by the algorithm; two groups could observe the algorithm’s estimates and make slight adjustments to the advice it offered; and the last group had free rein to change the algorithm’s estimates as much as they wished. Participants were then asked if they wanted to use the algorithm’s forecasts. The researchers were seeking to determine if some groups were more likely to adopt the algorithm than others.
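
To illustrate how the conditions differed, here is a small sketch under my own assumptions rather than the authors’ materials: the participant’s final estimate is clamped to within a fixed number of points of the model’s forecast, and the size of that bound is what varied across groups.

```python
# Illustrative sketch of the study's adjustment conditions (my own framing,
# not the researchers' code): a participant sees the model's forecast and may
# move it by at most a fixed number of points, depending on their group.

def final_estimate(model_forecast: float, participant_estimate: float,
                   max_adjustment: float) -> float:
    """Clamp the participant's estimate to within max_adjustment of the model."""
    low = model_forecast - max_adjustment
    high = model_forecast + max_adjustment
    return min(max(participant_estimate, low), high)


model_forecast = 72.0        # model's predicted score for a student (hypothetical)
participant_estimate = 90.0  # what the participant would like to report

print(final_estimate(model_forecast, participant_estimate, 0))    # 72.0: no control
print(final_estimate(model_forecast, participant_estimate, 5))    # 77.0: slight control
print(final_estimate(model_forecast, participant_estimate, 100))  # 90.0: free rein
```

Varying max_adjustment is all that separates the groups in this sketch.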

They found that users in the first group — those who could not change the algorithm’s estimates — were the least likely to adopt it. Those who were given the option to adjust the algorithm's forecasts were significantly more likely to rely on it. In fact, it didn’t matter how much control was offered. Merely being allowed to make even tiny tweaks to an algorithm increased the chances that a person would trust it.

It seems that those Facebook users who used the mixers to adjust their news-feed algorithms and felt more satisfied with the results were not alone: a little control goes a long way toward improving trust in an algorithm. So it seems logical that such control should be given to them, right? Wrong, said Facebook. While user satisfaction is a good thing, in this instance, it came at the cost of lower engagement. So Facebook decided to take back the control it gave users.

Google is likewise denying control to a different set of users for a far more high-minded reason. In 2012 its self-driving car division decided its vehicles were ready for use beyond the test track. Would any of its employees be willing to drive these vehicles on their commutes to work? Many were, and the experiment began, with video cameras recording what went on inside the car and out. The results wound up disturbing the experts — not because of the cars’ performance, which was sound, but because of how the humans behaved in them. Although the drivers of these vehicles had been instructed to remain alert and ready to take the wheel, the Google employees were reclining their seats and zoning out. Chris Urmson, chief technology officer of the division at the time, told an audience at the South by Southwest Conference in Texas: “We had somebody . . . look at their phone and says [sic] my battery’s low, so turns around, digs in their bag, pulls out their laptop, pulls out their charging cable, plugs the two in, looks up at their phone, yep, charging, and looks back out the window — at sixty-five miles an hour.”

There were three ways to react to these findings, recalls Urmson. The company could have ignored this aspect of the test drives and simply continued with them: the self-driving cars themselves had performed well, after all. Or it could have created mechanisms for reminding humans to keep their eyes on the road — a seat that doesn’t recline, a mild electric shock if the driver turns around. Or it could have accepted that such responses were inevitable (“human,” one might say) and built a better car — one that didn’t require human monitoring at all. Google chose the last option, and in November 2015 asked the National Highway Traffic Safety Administration if the company might be allowed to put cars on the road that lacked steering wheels, accelerator pedals, or foot brakes. During the congressional hearing on the proposal, Urmson said, “We saw in our own testing that the human drivers can’t always be trusted to dip in and out of the task of driving when the car is encouraging them to sit back and relax.”

You might ask why Google had to make such an extreme choice. You might also reasonably ask, given the research on how control affects trust, whether so radical a design is even wise, as it could make the difference between the public’s embracing self-driving cars and rejecting them. Google’s engineers are not unique among designers in believing that end-to-end automation — where algorithms make all the choices, with no guidance from humans — represents the ultimate in product design.

When I started my research on algorithmic decision making, these systems were known as decision support systems. The key word here is “support.” But algorithms are fast moving from a support role to becoming autonomous decision makers. When robo-advisers invest our money, there is little we need to (or can) do beyond giving them access to our bank accounts. They make all the choices, with little guidance from us — even though research shows that we do, in fact, want some measure of control. By ignoring this issue of trust, engineers may be ensuring that their machines run at optimal performance levels, but they risk the general population’s rejecting their innovations outright.

From A HUMAN’S GUIDE TO MACHINE INTELLIGENCE by Kartik Hosanagar, published by Viking, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © 2019 by Kartik Hosanagar.

Prakash Hebalkar

Strategy Advisor, Board Advisor, Mentor, Public Policy Advice. Silicon Valley based. Twitter: profitechconsu1

5y

Technology and people: an interesting discussion in Kartik Hosanagar’s book of the conflict in autonomous-system development between trust and public acceptance.

Khadar Mustafe

Student at Africa University

5y

Hello, hey, you fine?

Robiul Hasen

SEO Consultant And Digital Marketer

5y

nice

Nadeem Malik

Free-lance journalist

5y

I don't know if this is relevant, but when I started adding my extended family to my friends list, Facebook blocked me, saying it was impossible for someone to have so many people as friends. A month or so later I could resume using Facebook, but was required to add complete strangers. Little did I know that was part of Facebook's profiling programme. What I shared with them enabled them to make ads which apparently were supposed to interest me.
