Six Ways for Facebook to Restore Trust

Facebook’s financial success is based on the enormous amount of advertising revenue it generates, but there are frequent conflicts between generating higher ad revenue and furthering the best interests of its users (Facebook’s “customers” aren’t the users themselves, but the advertisers and others who pay to access users’ data). One conflict of interest Facebook confronts in trying to earn users’ trust, for instance, is that its business model reaps financial benefits from the proliferation of “fake news,” controversial stories, conspiracy theories, and the like. Disruptive, controversial, amazing (and untrue), or outright incendiary items generate more likes and shares, and thus far more ad revenue, than less sensational (and more factual) genuine news and stories.

In addition, Facebook often seems uninterested in, or even incapable of, securing its users’ personal information. Just in the last few days it suffered a hack that exposed the personal details, email addresses, and phone numbers of yet another 500 million-plus subscribers.

As a result of these and other issues, Facebook is starting to see an increasing number of users canceling their memberships, or substantially reducing their reliance on the product. Moreover, the company is likely to face significant regulatory constraints being imposed on it in the near future. The Federal Trade Commission already levied a $5 billion fine on the company for its Cambridge Analytica scandal, and sooner or later the Facebook behemoth’s rather thuggish, anti-competitive tactics might lead the federal government to try breaking the company up.

So my question is this: 

If you were a senior executive at Facebook today and wanted to improve the trust that users and the public have in the company, what steps would you take?

I put this problem to my Menlo College marketing students, who have been reading and talking about the subject of trustability, and about how people’s standards for what constitutes trustworthy behavior in a business have risen over the years as companies increase their efforts to behave in ways that are fairer and more transparent for their customers (for more, see Martha Rogers’ and my book on the subject, Extreme Trust).

My students’ ideas were absolutely brilliant, and any one of these suggestions would likely improve Facebook’s trustability in everyone’s eyes: 

  1. Eliminate bots posing as Facebook users. Two-factor authentication and CAPTCHA-like technologies would be a start, but the company could also insist on voice contact with new users, or it could simply identify (and block) bots by flagging accounts whose activity exceeds some threshold volume, or accounts that display other bot-like behavior.
  2. Fact-check all articles that exceed a certain number of likes or shares (while also searching for bots doing the liking and sharing), then label all articles as having been checked or not checked. Over time, the company could increase the number of fact-checked posts.
  3. If a large portion of a user’s own network has already blocked a particular ad or post, then Facebook should either label it as such, or perhaps proactively block it for the user. This would be a form of “social filtering” and could empower user opinions while defending Facebook’s reputation as well.  
  4. Inform users, individually, exactly what companies are buying their data. A user should be able to click on their account and see which companies have accessed their data, when, and perhaps even how much they paid. Of course, whatever revenue comes from success fees or commissions on sales, or ads clicked on, should also be documented for the user’s information.
  5. Allow users to opt in or out of ads, or to block specific advertisers, or simply to dial up (or down) their own individual “privacy setting.” A user should be able to specify how many ads they will tolerate per day/week/session, or perhaps per category.
  6. Share advertising revenues directly with users themselves. Maybe users should get an annual “dividend” based on the value they’ve created for Facebook that year, or perhaps users should get “cash back” or other benefits (maybe frequent flier miles?).

I found this last idea quite interesting in itself. Facebook’s average revenue per user from 190 million North American users in 2020 was about $160, which means their 40 million busiest, most active users likely generated $600 or more last year, while their very most active 5 or 10 million users generated perhaps $2500 or so. Each. And the company’s net profit on revenue is around 30%.

Now imagine, for instance, a Facebook business model that looks more like this: An active user gets money back on his or her Facebook usage, either as an annual cash “dividend” or a monthly credit. If a user dials up their personal privacy settings a bit (which would mean they are accepting fewer ads), then Facebook would have fewer opportunities to sell ads for that user, which would likely raise the price of reaching such a user. If the user chooses to block ads altogether, maybe it could even cost something to continue using the platform – perhaps a $4 to $5 monthly fee, which is the average profit Facebook would otherwise make. (The service would still be available for free, of course, the same way it is now: with ads. But users would still be free to make their own choice.)
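For readers who want to check the back-of-envelope math behind that $4 to $5 figure, here is a minimal sketch using only the numbers cited above (the $160 average revenue per user, the 190 million North American users, and the roughly 30% net margin come from the article; everything else follows from them):

```python
# Back-of-envelope check of the figures cited in the article.
# All inputs are taken from the text; this just makes the arithmetic explicit.

arpu_na = 160            # 2020 avg. revenue per North American user, USD/year
na_users = 190_000_000   # North American user count cited in the article
net_margin = 0.30        # Facebook's approximate net profit margin

total_na_revenue = arpu_na * na_users                  # total NA ad revenue/year
profit_per_avg_user_month = arpu_na * net_margin / 12  # avg. profit per user/month

print(f"Total NA ad revenue:        ${total_na_revenue / 1e9:.1f}B/year")
print(f"Avg. profit per user/month: ${profit_per_avg_user_month:.2f}")
```

The average user thus generates about $4 of profit per month, which is where the proposed $4 to $5 ad-free subscription price comes from; the most active users would of course represent several times that amount.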

Facebook is one of the world’s most valuable companies today. But it is clearly not trustable, and it should use today’s financial success to finance a wholesale revision of its basic business model. Before it’s too late.

Size and profit will not long protect a company that violates user trust, no matter how big, powerful, and unstoppable it appears. Just ask AOL. Remember them?

--------

P.S.

On the brighter side (for Facebook), one of my students has several friends in Myanmar, who told her that Facebook there is “like the internet,” and that the military is brutally trying to sustain its recent coup by stopping people on the street, demanding their smartphones, and then checking for any recent posts or other activity. Jail time (or worse) awaits anyone found to be opposing the coup. Thankfully, she said, Facebook’s new “profile lock” feature (originally pioneered as a way to protect women in India) is now being widely used and protects her friends.

John R.

Director of Supply Chain - Henderson

3y

Or you could just quit Facebook altogether and free up a ton of time and pain like I did... LinkedIn is quickly becoming too much like Facebook: far too political, with too much back-door selling. Suppliers market their wares to potential new customers; it's becoming the online version of the old Thomas Register books. I get enough spam on my cell phone, and it's really becoming a negative part of being on LinkedIn. About to tune out...

Frank Feather

Future-Proof Strategies: QAIMETA (Quantum + AI + Metaverse) | World-Leading Business Futurist | Dynamic Keynote Speaker | Board/C-Suite Advisor | "Glocal" Mindset | One Human DEI Family

3y

I hold out little hope for Facebook. It is a grossly mismanaged global monstrosity. Period. And the younger generations don't use it. It is maturing. In any event, it needs to be heavily regulated and broken up. It is a danger to civil society. It doesn't even know anything about being a responsible "media" company in the digital age.

Iulian M.

Project Manager | Scrum Master | SAFe 6.0, PSM, Prince 2 | CIPP/E, CIPM, CEH | solely my views

3y

FB is broken beyond repair, since it is fundamentally built on the unlawful and immoral paradigm of profiling users. Even at today’s scale it breathes the original concept: a tool used by a frustrated student to stalk girls.


They already do 3 (that's its whole design), as well as 4 and 5: they don't share dollar figures, but you can block certain advertisers and specific ads, and see all the companies that have included you in their marketing lists.
