Why Am I Seeing This?
Transparency is the new battleground in Meta’s fight for relevance against TikTok, but is this a battle it can win?
Meta’s new transparency focus
Meta is going on an all-out transparency offensive, following up its recent announcement of how the Instagram ranking algorithm works with an update on how it uses AI to inform what you see on both Facebook and Instagram.
The announcement, made by Nick Clegg, President of Global Affairs at Meta, provides more detail about how its AI systems rank content for Feed, Reels, Stories, and other parts of the platforms. Meta has also announced it is making it easier for people to control what they see on Facebook and Instagram, and is set to launch new tools to support public interest research.
This move towards being a more transparent, open company is a clear message to regulators in the US, the UK and across Europe that there is a material difference between Meta and TikTok, given the national security concerns around TikTok’s parent company, ByteDance. Meta is going on the offensive around transparency in a strategy to create distance between itself and the Chinese-owned TikTok, taking a leaf straight out of Apple’s recent playbook, in which Apple used privacy and user safety to curtail the mass surveillance tactics of Meta and other ad platforms.
With the current media interest, broad concern, and in some cases outright panic, around the growth of generative AI and the speed of change, it’s no wonder Meta wants to get out in front and share how it is using AI to provide recommendations. As part of this new openness, Meta has released 22 “system cards”, which are guides to how its AI systems rank content across all areas of Facebook and Instagram.
Not only that, Meta has also published the signals it tracks and uses to inform the content you see within the ranking system. This is something lawmakers and researchers have been asking for, to better understand what inputs companies like Meta use to surface specific material and what signals help identify dangerous or harmful content.
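To make the idea of ranking “signals” concrete, here is a minimal, purely illustrative sketch of how a handful of engagement signals might be weighted and combined into a score for ordering a feed. The signal names, weights and scoring function below are my own assumptions for illustration only; they are not Meta’s published system or signal set.

```python
# Illustrative toy ranker: combines hypothetical engagement signals into a
# single score per post. Not Meta's actual system; all names and weights
# are invented for illustration.

SIGNAL_WEIGHTS = {
    "likelihood_of_like": 3.0,      # assumed weight, for illustration only
    "likelihood_of_share": 5.0,
    "predicted_watch_time": 2.0,
    "follows_author": 1.5,
}

def score_post(signals: dict[str, float]) -> float:
    """Weighted sum of per-post signals (values assumed to lie between 0 and 1)."""
    return sum(SIGNAL_WEIGHTS[name] * signals.get(name, 0.0)
               for name in SIGNAL_WEIGHTS)

def rank_feed(candidates: list[dict]) -> list[dict]:
    """Order candidate posts by descending score."""
    return sorted(candidates, key=lambda post: score_post(post["signals"]), reverse=True)

# Example: two candidate reels with made-up signal values.
feed = rank_feed([
    {"id": "reel_a", "signals": {"likelihood_of_like": 0.8, "predicted_watch_time": 0.4}},
    {"id": "reel_b", "signals": {"likelihood_of_share": 0.9, "follows_author": 1.0}},
])
print([post["id"] for post in feed])  # reel_b outranks reel_a under these assumed weights
```

The point of the sketch is simply that each post’s position is the output of a function over tracked behaviours, which is why publishing the list of signals matters to researchers trying to audit what gets amplified.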
Why am I seeing this?
Most people won’t spend the time reading through updates on Meta’s Transparency Centre website, so to make this something everyday users can understand, Meta is expanding the “Why am I seeing this?” feature in Instagram Reels and Explore, and soon in Facebook Reels. As Meta outlined in its update: “You’ll be able to click on an individual reel to see more information about how your previous activity may have informed the machine learning models that shape and deliver the reels you see.”
Meta wants to give users more control over the experience, especially as more and more of the content we see comes from recommendation algorithms. As these algorithms learn from what you engage with, watch, and share, they can push you into ever more niche and sometimes dangerous rabbit holes of content. One complaint here, though, is that by providing more tools to users, Meta is once again pushing the burden of responsibility onto its user base to moderate and manage the content they see on the platform, rather than installing any stringent guardrails on the content users can post.
The worry is that while Meta is giving users greater transparency into how its systems work and how to manage their own experience on Instagram and Facebook, it is not taking any significant steps to curtail the mis- and disinformation that has proliferated across both platforms.
Nick Clegg wrote as much in a Medium post back in March entitled "You and the Algorithm: It Takes Two to Tango", in which he suggests that "personalization is at the heart of the internet’s evolution over the last two decades", adding, "it allows for a rich feedback loop in which our preferences and behaviours shape the service that is provided to us. It means you get the most relevant information and therefore the most meaningful experience."
But Clegg himself admits that these decisions are not simply automated: even though the editor has been removed, there are still human decisions at play. He states that before "we credit “the algorithm” with too much independent judgment, it is of course the case that these systems are designed by people". And for too long Meta, and Facebook before it, has either made the wrong decision or, in many cases, no decision at all to act or to create the guardrails needed to protect users from seeing or sharing mis- and disinformation.
Emma Steiner, a disinformation analyst, recently told news site Axios that platforms "are not willing to act on evolving disinformation narratives. So while [companies] can attempt to drop new policies ... I'm not sure they will make an actual serious attempt to counter the issue."
Past crimes and misdemeanours
This all seems very well-intentioned from Meta, but it's hard not to be cynical about a company whose past issues include the Cambridge Analytica scandal, in which over 87 million users' details were believed to have been harvested for ad targeting, as well as a separate leak of over 500 million Facebook users' personal details shared on a hacking forum back in April 2021. There have been so many privacy and security concerns at Meta over the last few years that it’s hard to list them all (though someone has on Wikipedia here): they include allegations of eavesdropping, data mining, performative surveillance and an array of phishing and e-commerce scams.
Since Mark Zuckerberg was hauled in front of Congress back in 2018 to answer questions about the company’s involvement with, and response to, the Cambridge Analytica scandal, Meta has been trying to show it takes privacy and security seriously. This latest push for more transparency over how user data and AI are used to personalise the experience for Facebook and Instagram users comes as a direct response to the issues TikTok has faced in the US, Europe and beyond.
TikTok CEO Shou Zi Chew was himself hauled in front of the US Congress and grilled for five hours over concerns around user data and national security, given the company's ties to the Chinese government. Meta is looking to put distance between itself and TikTok in the eyes of Congress and lawmakers globally.
So why now, Meta?
Given the growing concerns around AI and with a US election around the corner, Meta wants to show itself to be responsible both with how it handles user data and with how AI is being used to personalise an individual's experience on both Facebook and Instagram.
I think it’s important to acknowledge when platforms and companies make moves to be more transparent with users, but it’s also important to be aware of the timing and political implications of any such move by a company with the size and global influence of Meta.
Meta is fighting a battle for relevance, and following the recent success of Threads, the company now sees an opportunity to counter the threat posed by TikTok. Instead of doing so with a new product or innovation, it is using its lobbying power and concerns around TikTok’s Chinese ownership to frame itself more positively in the US and beyond.
Safety and transparency should not be perks, or tools for scoring political points against a rival. Instead, these need to be core user rights that governments around the world come together to protect.
Users expect, and frankly need, more awareness of and control over the content they see and how the algorithms and recommendation engines of major platforms work to serve up more of what we want every day. The rise of AI is speeding up how recommendation engines bombard us all with ever more content from users around the globe; ensuring there are some guardrails around this is a minimal but welcome step.
Let's hope that we've not (yet) “lost control of what we see, read — and even think — to the biggest social-media companies”, as Joanna Stern suggested in her brilliant Wall Street Journal article earlier in the year.