Apple, Facebook and How Privacy Is Eating the World
In the fall of 2007, I was working at an ad agency and met with a Facebook sales rep. He told us about a new social commerce idea, one where you could notify your friends of purchases you had just made. To us advertisers, it sounded neat. Myspace was still the predominant social network of the day by a wide margin, and Facebook was the younger sibling that was always willing to push the envelope technologically to get its share of the ad budget.
This product had a name that has since become infamous: Beacon. What ensued shortly after the launch of Facebook Beacon was a scandal. The very day it made its debut, tech journalist Om Malik asked, “Is Facebook Beacon a Privacy Nightmare?” Security researchers discovered that users’ browsing activity was being reported back to Facebook even when they opted out and, worse yet, even when they were logged out. In less than a month, the New York Times reported that 50,000 Facebook members had signed an online petition objecting to the program, and Facebook responded by reining in the offering. A marriage proposal was partially ruined when the engagement ring purchase was broadcast to the future husband’s friends, which ultimately led to a $9.5 million class-action lawsuit settlement. In less than 2 years, this iteration of Facebook Beacon was dead.
There were two telling quotes from the NY Times article. One, from then Facebook vice president and current billionaire SPAC king Chamath Palihapitiya, concerned Facebook’s refusal to offer the universal opt-out from the program that users had requested. Another was from a Facebook user who started her comment with, “We know we don’t have a right to privacy, but[…].” In many ways, these two quotes drew the battle lines between corporations and users and the implied social contract one had with the other.
While the initial rollout of Facebook Beacon was bungled, it would define the strategy behind Facebook’s rapid ascent: utilizing its identity layer to capture off-platform information that became disproportionately valuable to Facebook. Facebook’s power lies not in the social network itself but in its ability to de-anonymize external data streams. Facebook Pixel, its current tracking solution, which is employed by countless advertisers, apps and websites, is often referred to as a beacon.
To be clear, this is not a singular indictment of Facebook, but it has been at the tip of the spear in designing products that pushed the envelope around ad targeting. It has rolled out many clever and innovative ways of capturing external data about its users. As Facebook grew and ran away with 20% of the global digital advertising market, it forced many of its competitors to do the same. In 2018, Google Chrome made a “seemingly small change” that allowed Google to capture all the browsing history of users who logged into Gmail. The advertising industry is now obsessed with implementing an identity layer on top of all ad-supported internet traffic. Facebook paved the way towards a de-anonymized internet and has taken the brunt of the PR damage in return for the lion’s share of the gains.
We have become accustomed to clicking “I Accept” whenever a 20-page click-wrap agreement presents itself while installing a software program. Legally, we’re told that we should have read all 20 pages, understood all of their intricacies and implications and decided of our own free will to enter into an agreement to obtain access to a company’s product. In practice, we really have no choice if we want to use a particular product. We play ball on their terms. Websites and apps took advantage of this behavior with the same “I Accept” windows, except that the rules often change after you sign up, and your data might be processed in drastically different ways after you have invested tons of time and effort. With traditional software, you could simply decide not to upgrade to the new version if you didn’t want to accept the new terms and conditions. By and large, this is not an option available to most consumer internet services.
Facebook, once again, provides a recent example of this phenomenon. In 2014, it paid nearly $16 billion for WhatsApp, a product that made $16 million in the 6 months leading up to the acquisition. At the time of the acquisition, WhatsApp made money by charging its users $1 per year from the second year onward. Facebook was never going to make its money back if it stuck to this model, and surely the founders of WhatsApp must have known this too. It should come as no surprise, then, that almost 6 years after the acquisition, Bloomberg reported that “WhatsApp Users Can No Longer Avoid Sharing Data With Facebook.” Mirroring the strategy it tried 14 years ago, Facebook would once again refuse to offer an easy opt-out. In fact, this time, there was none.
This Time, It’s Different
Unlike with the Beacon rollout, your friends would be none the wiser about the new WhatsApp privacy policy changes, but your overall ad targeting profile at Facebook might be. The reaction, once again, was swift. A change to WhatsApp’s terms of service and privacy policy was announced on January 4th. By the 15th, these changes would be delayed. In the 10 hectic days between the announcement and the retreat, a few things happened:
- Elon Musk touted Signal, an open-source messaging app funded by WhatsApp co-founder Brian Acton, to his Twitter followers. This added 26 million downloads in India alone (WhatsApp’s largest market). Signal would shortly race to the top of the US iOS App Store charts.
- Trust in Facebook was so low that disinformation spread rapidly regarding the overall security and privacy of WhatsApp, including unsubstantiated rumors that Facebook/WhatsApp could now read user messages and listen in to conversations.
- Turkish president Erdogan announced that he was moving his WhatsApp chat groups to state-owned BiP.
- WhatsApp rival Telegram reported that in a span of 72 hours, they added 25 million new users and Telegram CEO Pavel Durov referred to the exodus as the “largest digital migration in history.”
What was once a minor rebellion by 50,000 members of an up-and-coming social network had returned as a massive global defection that included multiple heads of state. Ironically, as Wired noted, most WhatsApp users had already been sharing their information with Facebook since 2016. In prior privacy policy updates, WhatsApp had offered its users a 30-day window to opt out of these changes. Like many Americans, I ended up downloading WhatsApp while abroad and making new friends with the locals. I certainly missed the opt-out window in 2016 as I only use the app sparingly. Moreover, as TechCrunch pointed out,
Most users will just tap ‘I agree’ to WhatsApp’s new T&Cs without reading them and realizing what they are agreeing to.
So what changed between 2016 and 2021? Perhaps the biggest story buried amidst the 2021 WhatsApp privacy scandal was the fact that European users were exempt from the forced data sharing that the rest of the world was told to prepare for. Unlike in 2007, there was now an expectation of privacy rights among consumers, at least those in Europe.
Most Favored Nations
In 2016, the European Union adopted the General Data Protection Regulation (GDPR). Germany’s experience with the horrors of surveillance, first under Nazi Germany and later under the East German Stasi, was the driving force that led to some of the world’s first information privacy laws in 1970 and ultimately to the most comprehensive and impactful privacy legislation in the world. For the first time, consumers had digital privacy rights that were guaranteed by the government with the full force of heavy fines behind them.
While the GDPR had good intentions, it had massive loopholes and lots of grey areas that resulted in an unpleasant experience for consumers. In advance of the regulation taking effect in 2018, inboxes were flooded with emails asking consumers to give explicit permission for the emails to continue. Marketers could no longer get away with harvested email lists from shady origins, but consumers could only perceive the change as an arcane assault on their inboxes. Consumers were also bombarded with consent windows asking for permission to use cookies in an age when cookies were quickly becoming obsolete. Research I conducted and later presented to the Office of the California Attorney General showed that, thanks to the use of dark patterns, it was far more difficult to opt out of data collection in Europe than it was to opt in; dark patterns were not explicitly regulated under the GDPR.
However, the most important role of the GDPR was that its core concepts laid a foundation for other countries and corporations to build upon. Other countries, including Brazil, Australia, Japan, Thailand, Chile, India and South Africa, have since adopted national privacy laws of their own based on this framework. The EU’s willingness to levy fines, coupled with the market power of 500 million consumers, has given the EU a level of leverage against Facebook that nobody else had. However, this unique leverage has set up an uneasy dynamic for citizens outside of the EU. It led many to ask, “Are we second-class digital citizens with fewer privacy rights?” India Today reported that WhatsApp’s separate privacy policies for Europe and India raise concerns. Teed up for an easy political win, the Indian technology ministry asked WhatsApp to withdraw the privacy policy change, stating,
This differential and discriminatory treatment of Indian and European users is attracting serious criticism and betrays a lack of respect for the rights and interest of Indian citizens who form a substantial portion of WhatsApp’s user base.
India won’t be the last country to feel offended by the disproportionate treatment of its citizens. What the GDPR set off was a global arms race towards privacy rights. The eventual outcome will be a global baseline of privacy rights that most people around the world can rely on.
One glaring omission from the list of countries with national data privacy laws is the United States. How long will it take before we realize that we’ve become second-class digital citizens? California has been at the forefront of privacy legislation, first with the passage of the California Consumer Privacy Act (CCPA) and more recently with the passage of the California Privacy Rights Act (CPRA). These pieces of legislation were unique because they arose from pressure by concerned citizens and gained significant support from voters.
Alastair Mactaggart, a political novice by his own admission, had a discussion with a Google engineer who told him that he would be horrified if he knew how much data Google was collecting on its users. Alastair worked with privacy researchers, including Ashkan Soltani, to craft legislation that closed the loopholes exploited under the GDPR while still passing muster as a state law. It spurred a ballot initiative that led to the passage of the CCPA. I personally moved back to California in 2019 to participate in the CCPA rule-making process, and my comments to the Office of the California Attorney General formed the basis for the rollout of one of the first laws in the world that aimed to inhibit dark patterns in the collection of consent. But still, despite our best collective efforts, the CCPA and CPRA are far from perfect. Research I conducted in August 2020 showed that 75% of sites weren’t compliant with the CCPA and almost all of them were using dark patterns to deter and prevent people from opting out of the sale of their data. The regulatory ability to prevent dark patterns, the psychological design tricks used to steer or force users towards an intended behavior, will ultimately determine the success or failure of privacy legislation. In the era of consent and #MeToo, we really must ask ourselves: does a forced “yes” really mean yes?
California will play the same role in the US that the EU played for the rest of the world. Nevada and Maine have also passed data privacy laws, and 23 other states are currently going through the legislative process to bring data privacy laws online as well. With at least half the country considering a data privacy law, it shouldn’t be long before we see a substantive draft of a national privacy law reaching the floor of Congress.
Previewing the Global Consent Standard
While it remains to be seen whether privacy legislation can hit the mark on its own, the spirit of the laws themselves has inspired some companies to leverage them as a key differentiator in the marketplace. Chief among these companies is Apple, which has led the marketplace in providing transparency about the types of data that apps collect and track, along with a truly easy way for consumers to opt out of certain types of data collection in iOS apps. It could easily be argued that Apple has done more to bring effective, widespread data privacy protection to consumers around the world than any privacy law has thus far. This notion would have been absurd just 8 years ago.
Apple’s approach to privacy has at differing times been cynical, convenient, brilliant and innovative. After getting sued twice for allowing the iPhone’s Unique Device Identifier (UDID)—a permanent ID that was unique to each Apple device—to be shared with third parties without users’ consent in many iOS apps, Apple closed the loophole and replaced the UDID with its Identifier for Advertisers (IDFA). The IDFA could, in theory, be reset by the user or flagged to reflect the user’s decision to opt out of targeted mobile ads. In practice, these settings were buried 3 levels deep in menus that users had to dig for (another example of a dark pattern). Most users either didn’t know about the settings or didn’t bother to dig through them to enable this protection. This was a convenient way to respond to privacy concerns without substantively affecting the amount of tracking that went on inside apps.
On the browser side of things, Apple’s Safari has always been at the forefront of privacy innovations. It was the first browser to introduce a private browsing mode, in 2005. It was also a pioneer in blocking cookies from sites you hadn’t visited by default, preventing most ad networks from easily tracking your browsing activity across multiple sites. Whether by coincidence or design, Safari’s strict anti-tracking policies pushed many website owners and new startups to drive users towards their iOS apps instead of their mobile sites. In an app, they could track far more data points, including a persistent identifier available to all apps that enabled cross-app tracking, and easily share data with numerous third parties via embedded pieces of code called SDKs (Software Development Kits). This was a win for Apple, since it drove loyalty and usage in the iOS app environment (which has had many apps that were exclusive to iOS or functioned much better than their Android counterparts). Apple would also benefit by extracting 30% of all revenues generated by in-app purchases (vs the 0% it would receive for purchases made through Safari). It was, however, a huge loss for the mobile web, since it created an environment that advertisers started to see as less valuable because it had less data than the increasingly signal-rich mobile app environment. By applying different privacy policies to apps and to the mobile web, Apple was able to engineer a virtuous cycle that kick-started the app economy and re-imagined how the internet worked on phones.
In response to Edward Snowden’s leaks in 2013 revealing that it had participated in the NSA’s PRISM program, Apple started to step up its brand message around privacy. Its participation in PRISM led one publication to declare that Apple’s Privacy Record Sucks. It might be surprising now to hear that the Electronic Frontier Foundation (EFF), a group famous for its promotion of Internet civil liberties, gave Apple the worst privacy score—1 out of 6 stars. Facebook, Google and Foursquare were ranked significantly better than Apple at that time. It was so important an issue that Tim Cook wrote an open letter explaining Apple’s stance on privacy. The letter was also designed to distance Apple from two other companies caught up in the PRISM scandal, Facebook and Google. Cook stated,
Our business model is very straightforward: We sell great products. We don’t build a profile based on your email content or web browsing habits to sell to advertisers. We don’t “monetize” the information you store on your iPhone or in iCloud. And we don’t read your email or your messages to get information to market to you. Our software and services are designed to make our devices better. Plain and simple.
As Macworld correctly noted at the time,
Apple likely sees a competitive advantage in privacy, especially when its biggest direct competition comes from advertising giant Google and the enterprise-friendly Microsoft. Apple believes consumers not only desire privacy, but will increasingly value privacy as a factor in their buying decisions.
Apple bolstered its privacy credentials with the use of differential privacy in its data analysis, a way of processing data in bulk to glean insights without having direct knowledge of the data points tied to any single individual or device. It also started to warn and punish companies that found loopholes and collected data they shouldn’t have. Uber added fingerprinting code to catch Chinese fraudsters who were using fake credit card numbers to get free rides and was caught red-handed in 2017, despite designing the code to be hidden from reviewers at Apple’s headquarters in Cupertino. This resulted in “[Uber] chief executive Travis Kalanick being hauled in to Cupertino for a personal dressing down from Tim Cook” along with a threat “to pull Uber’s app from the App Store if the company didn’t remove the fingerprinting feature.” Apple also yanked 2 apps from Facebook for logging all website and app usage to create better targeting profiles, ultimately leading Facebook to shut down both data collection programs. By 2018, Apple’s reputation around privacy had made more than a full recovery, with Fast Company declaring Forget the new iPhones: Apple’s best product is now privacy.
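To make the differential privacy idea mentioned above concrete, here is a minimal, illustrative sketch of the classic Laplace mechanism applied to an aggregate count. Apple’s actual deployment adds noise locally on each device with a more elaborate mechanism; the count, the epsilon value and the emoji example below are hypothetical.

```swift
import Foundation

// Illustrative only: the classic Laplace mechanism on an aggregate count.
// The core idea: add calibrated random noise so that no single user's
// contribution can be inferred from the published statistic.
func laplaceNoise(scale: Double) -> Double {
    // Inverse-CDF sampling from a Laplace(0, scale) distribution.
    let u = Double.random(in: -0.5 ..< 0.5)
    return -scale * (u < 0 ? -1.0 : 1.0) * log(1 - 2 * abs(u))
}

let trueCount = 12_430.0   // hypothetical: users who typed a given emoji today
let epsilon = 1.0          // privacy budget: smaller epsilon means more noise
// Each user contributes at most 1 to the count, so the noise scale is 1/epsilon.
let noisyCount = trueCount + laplaceNoise(scale: 1.0 / epsilon)
print("Published count: \(noisyCount.rounded())")
```

The noisy total remains useful in aggregate, while the presence or absence of any one user can no longer be confidently inferred from the published number.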
While many consumers now had a sense that Apple was safeguarding their privacy as best it could, the truth was a bit murkier. When the press started to peel back the onion to understand just how the app economy actually functioned, they discovered that iOS apps were still a haven for tracking and third-party data sharing despite Apple’s fervency about privacy protection. 2019 would be the seminal year when America started to learn a bit more about that economy. An investigation from The Washington Post revealed that 5,400 tracking events were sent from a reporter’s phone over the span of a week, mostly coming from the hidden SDKs inside his iOS apps. The NY Times did a deep dive into location data collected by these SDKs and was able to tie data points collected from a single device back to a specific person. The Wall Street Journal revealed that iOS apps were sending sensitive medical data including body weight, blood pressure, menstrual cycles and pregnancy status to Facebook. The Atlantic now spoke of Apple’s Empty Grandstanding About Privacy.
Under siege again over its privacy practices and with public awareness of privacy laws such as the CCPA rising, Apple took advantage of the moment to announce a slew of features that would, for the very first time, actually be effective at stopping in-app tracking. These features included data privacy labels showing users which pieces of information apps were using to track them, prompts that alerted users when apps were continuously collecting their geolocation data and, most importantly, a simple opt-in/opt-out mechanism that lets users decide whether they want to be tracked across different sites and apps.
The unifying theme behind all of these features is transparency. For the very first time, users now had insight into how their apps were processing their information. The most important feature, the App Tracking Transparency framework (ATT), would surface the IDFA opt-out that had been buried behind a labyrinth of settings menus for years. It was also a big gamble for Apple, since it threatened the $519 billion app economy that had been reliant on free-flowing data tied to persistent identifiers. Perhaps the most groundbreaking thing ATT accomplishes is that it is the first widely used consent prompt that doesn’t utilize dark patterns. It is just as easy to say yes to tracking as it is to say no. In one fell swoop, ATT has accomplished what years of privacy legislation haven’t yet achieved. The rollout of ATT will have users around the world accustomed to a privacy prompt that doesn’t suck. It will lead Android users to ask, “Are we second-class digital citizens?”
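For developers, the mechanics are simple. The sketch below shows one way an app might surface the ATT prompt and read the IDFA only if the user agrees; the helper function name is my own, and the app would also need an NSUserTrackingUsageDescription entry in its Info.plist to supply the one-line explanation shown in the system dialog.

```swift
import AppTrackingTransparency
import AdSupport

// Hypothetical helper: ask the user for tracking permission via the ATT prompt.
func requestTrackingConsent() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // Only with explicit consent is the real IDFA readable.
            let idfa = ASIdentifierManager.shared().advertisingIdentifier
            print("Tracking allowed, IDFA: \(idfa)")
        case .denied, .restricted, .notDetermined:
            // Without consent, the IDFA is returned as all zeros.
            print("Tracking not allowed")
        @unknown default:
            print("Tracking not allowed")
        }
    }
}
```

Crucially, “Allow” and “Ask App Not to Track” sit side by side in the same system dialog, so declining costs the user exactly one tap, the same as consenting.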
Unsurprisingly, the company most reliant on off-platform data collection became the fiercest critic of ATT. Facing a potential multi-billion dollar revenue hit, Facebook took out full-page newspaper ads accusing Apple of changing the internet for the worse and of hurting small businesses. The message was not received well, with Wired stating Nice Try, Facebook. iOS Changes Aren’t Bad for Small Businesses. Faced with the prospect of consumers actually exercising their privacy rights, companies have been scrambling to predict how privacy will affect their business models. Bumble, a dating app, said in its IPO registration statement that it expects between 0 and 20% of users to opt in to cross-app tracking and that the change would increase the cost of acquiring new customers.
The New Social Contract
Consumers are now entering a world where they don’t always have to click “I Accept” in order to use a product or service. In a world without forced yeses, companies will have to reconsider why and how they collect and store user data and who they share it with. While the focus of my story so far has been on the digital world, since it collects the most information, the same privacy concepts and rights apply at grocery stores and barbershops when you’re asked for your phone number.
These changes are welcome at a time when the internet is entering a dangerous phase. Facebook’s obsession with collecting real-world identities and building advertising products that offer advertisers easy access to this identity layer has pressured its competitors large and small to do the same. The end-state of this race for identity is an internet where you are always logged in and never anonymous. Everything you read and everything you watch will soon be logged to your email address, your name or your phone number. Companies such as LiveRamp already specialize in linking all these pieces of data together.
The NSA’s PRISM program cost only $20 million per year in 2013. What’s left unsaid is that the rest of its surveillance infrastructure was subsidized by ad revenue, paid for by advertisers and ultimately by consumers themselves. It’s unknown whether something has since replaced PRISM, but it would be foolish for any national surveillance agency to reinvent the wheel when extensive amounts of personal data are already there for the taking. The biggest reckoning we’ll have to face in the coming years is whether the internet is a surveillance infrastructure that merely needs consumers in order to function or a service for consumers that advertisers can sponsor if they choose to do so. Consumers will ultimately have to decide if they want to indirectly pay for their own surveillance. Apple might be the first company to make a fortune promoting privacy as a key brand value, but it won’t be the last.