Facebook may have a ‘determining role’ in genocide and must be regulated

Imagine if an agency came into your neighborhood and started inquiring about each person's gender, race, religion, and moral beliefs; their political affiliations; their social likes and dislikes; who their friends were; what they talked about; and the most intimate details of their relationships with loved ones. Then, after gathering and analyzing these data, the agency sold the information to businesses, foreign governments, anyone who might be interested in using it however they wished, no questions asked.

Then consider what would happen if sinister players started using these data to incite violence, spread hatred, or rig elections. They would have enough information, at a granular level, to identify, say, Muslims with extreme views, Christians who feel marginalized, or homemakers unhappy in their marriages. These troublemakers could also send false information to those groups, at very low cost and without having to disclose their identity or motivation.

I am describing what Facebook makes possible. The United Nations has accused Facebook of playing a “determining role” in stirring up hatred and genocide against the Rohingya Muslim minority in Myanmar. “It has substantively contributed to the level of acrimony and dissension and conflict, if you will, within the public,” said Marzuki Darusman, chairman of the UN Independent International Fact-Finding Mission on Myanmar.

Facebook also enabled the data firm Cambridge Analytica to acquire 50 million user profiles in the U.S., which were reportedly used to help Donald Trump's presidential campaign spread misinformation. Facebook data may also have been used to influence the Brexit vote in Britain, as well as regional elections in India.

To be clear, Facebook isn't scouting neighborhoods for information or knowingly supporting malicious actions. Its automated tools do the harvesting, and the company may be ignorant of the consequences of their use.

We gain a lot from the global exchange of information that Facebook makes possible. It has brought loved ones closer and made the world a smaller place. But Facebook’s ability to recognize faces in photos more accurately than humans can is worrying. It can learn all about us from the comments we post, the news stories we read, and the pages we “like.” Even if we don’t tell Facebook where we were on a particular evening, or who our friends are, it can “see” the photos other people have posted and learn what it needs.

Facebook also owns WhatsApp and harvests data from the app. Using the sensors on smartphones, WhatsApp has the ability to keep track of our location and activity levels and to know who we are meeting, where, and how long we have been with them. In an exchange of emails, the company indicated it does not track location beyond the country level and does not share contacts nor messages, which are encrypted, with Facebook.

WhatsApp did confirm, however, that it is sharing user phone numbers, device identifiers, operating system information, control choices, and usage information with the "Facebook family of companies." That leaves open the question of whether Facebook could then track those users at a granular level, even if WhatsApp doesn't.

Big Brother in George Orwell’s 1984 could have only dreamt of having the information that Facebook has.

Facebook has become a huge part of public, civil, and private life. UN investigator Yanghee Lee said about Myanmar, “…everything is done through Facebook … [but] I’m afraid that Facebook has now turned into a beast, and not what it originally intended.”

Technology has given us many gifts. But it is increasingly being manipulated in ways intended to promote the makers’ profit over individual and collective wellbeing. The good of internet platforms is now being offset by flaws invisible to most users. Social media is being weaponized in the name of profit.

This is happening because Facebook and other internet platforms are consciously turning their users into addicts to make their products and advertising more valuable. They combine propaganda techniques with addiction strategies perfected by the gambling industry.

They provide value to users while creating filter bubbles that reinforce pre-existing beliefs in ways that make those beliefs more extreme and inflexible, causing many users to reject new information and even facts.

This is why governments need to stringently regulate Facebook. France ordered WhatsApp to stop sharing user data with parent company Facebook. Others must do the same. And they must force Facebook to crack down on hate speech — with heavy fines for every single violation.

Not just Facebook's data, but all data must be protected. A good start is Europe's General Data Protection Regulation, going into effect in May 2018, which requires companies to get unambiguous consent from users to collect data, to clearly disclose how personal data are being used, and to spell out why the data are being collected. Governments must also ban any form of political advertising and the sale of data to third parties.

This is not a matter of protectionism. It is about freedom and democracy itself. Technology is making amazing things possible. But it also has a dark side. We have to balance the risks with the rewards.

For more, visit my website: www.wadhwa.com and follow me on Twitter: @wadhwa


Arun Kumar Sampathkumar

Associate Director - Aerospace & Defense (Advisory)

6y

While I agree that the spread of rumors and fake news can cause serious harm to society, I have to point out that all social media and messaging platforms are vulnerable to that. My view is that the medium is just a medium; it is its application for hostile activities that causes harm. While there is a need for regulating the content that is being shared, I think we risk the credibility of such social media platforms by installing too much regulation and evaluation outside of each user's scope. If there is a 'Fake News' button and I have a biased view [or a grudge] against a person/page/group, then what if I mark all their posts as 'Fake News'? If there is a 'Big Brother' looking over the content being shared on social media, then what if the 'Big Brother' has a biased approach [over time, if not immediately]? Here is something I find relevant to this discussion: https://www.youtube.com/watch?v=PlhbEvCrUBk

carrie mccoin

Student / Sales Associate at Jackson Shell in Meridan, Idaho

6y

In this current time, with the events that are occurring in this country, I for one support this dark side taking steps to secure those who are in the loop about what is happening and its potential to erupt into WWIII. Those who fear this in Facebook, I question.

Fabian Pascal

Editor & Publisher, DATABASE DEBUNKINGS; Data and Relational Fundamentalist; Consultant, Analyst, Author, Educator, Speaker

6y

Too late. You don't let a company become a global monopoly with whole countries in its pocket, one that can bring down economies and addict the gullible, and then hope to regulate it. You prevent it from ever reaching that status in the first place. We did have mechanisms in place for that and got rid of them. We deserve what we get.

Jordon Wallace

Student at Massachusetts Institute of Technology

6y

Is the induction of false narrative an enabler?

