News, algorithms and much-needed readjustments
Enrique Dans
Senior Advisor for Innovation and Digital Transformation at IE University. Changing education to change the world...
Facebook and Google are trying to deal with the problem of fake news and offensive or factually incorrect information, and both face problems with their relevancy algorithms.
Both companies need solutions: Facebook has been criticized over its role in distributing fake news during the US election campaign, which could have altered the way many Americans voted, while Google has also been attacked for unquestioningly collating junk items in its snippets or via Google Home.
The challenge is to redesign an algorithm to prevent junk and fake news from being distributed while respecting the fact that there are people out there who like this stuff (sensationalism and biased information undoubtedly have their audience).
So far, four approaches have been identified, all of which involve introducing new data into the equation. Where can this new data for rating news items be extracted from?
- Authoritative sources: fact-checking pages such as Snopes, Politifact or others that comply with a few basic operating principles and use qualified people to check news and issue a verdict on it. This is the path chosen by Facebook, which flags news as “disputed” on the basis of the judgment issued by these types of pages, and by Google in France through CrossCheck, a tool developed by Google News Lab along with First Draft as part of a European media joint initiative. As a result, some six hundred French websites have been identified as unreliable; Libération also compiles and identifies fake news.
- Users’ opinions: peer-rating systems and the evaluation of patterns derived from their use. When partisan, sectarian or offensive news spreads, we can expect rapid viralization among those who like its tone or content, as well as the use of rating tools by those who consider it factually incorrect or unacceptable. Studying the patterns in which these negative evaluations are generated, together with an analysis of the users issuing them, can be one more element fed into the algorithm.
- Distribution patterns: analysis of how news spreads. Curves that are very fast or abrupt, that grow at the expense of people identified as having a particular bias, that spread within very homogeneous groups, or that follow clearly identifiable and attributable patterns should at least be subject to some type of supervision.
- Independent evaluators: a large number of people in diverse countries, and with a high level of diversity among them, dedicated to evaluating news. Google has recently hired ten thousand people for this; they cannot directly influence the positioning of news in the results pages, but they do generate qualified data about its credibility and tag news as “upsetting-offensive” based on an exhaustive 160-page guideline that serves as an additional input variable (a document that attempts to define the problem and should be used in journalism schools).
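The distribution-pattern idea above can be sketched as a simple heuristic. Everything here is a hypothetical illustration: the function names, the burst-ratio measure and the threshold are my assumptions, not any platform's actual logic, and in practice such a signal would be one input among many rather than a verdict.

```python
# Hypothetical sketch: flag share-count time series whose growth is
# unusually abrupt. Names and thresholds are illustrative assumptions.

def abruptness(shares_per_hour):
    """Ratio of the peak hourly share count to the mean.

    A high ratio suggests a sudden, concentrated burst of sharing
    rather than organic, gradual spread.
    """
    if not shares_per_hour:
        return 0.0
    mean = sum(shares_per_hour) / len(shares_per_hour)
    return max(shares_per_hour) / mean if mean else 0.0

def looks_suspicious(shares_per_hour, threshold=5.0):
    """Mark a curve for human supervision when its burst ratio is extreme."""
    return abruptness(shares_per_hour) >= threshold

# An organic story grows and decays smoothly; a coordinated push spikes.
organic = [2, 5, 9, 14, 12, 8, 4]   # ratio ~1.8, below threshold
burst   = [1, 1, 2, 90, 3, 1, 1]    # ratio ~6.4, flagged for review
```

Real systems would look at many more properties of the curve (who is sharing, how homogeneous the group is, whether the pattern is attributable), but the principle is the same: abnormal spread earns scrutiny, not automatic removal.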
We should also remember that we are talking here about input variables, not output variables: in other words, this isn’t about a given pattern, the opinion of an evaluator or the verdict of a secondary source automatically “disqualifying” or “eliminating” news, but about feeding that information into a machine learning algorithm that tries, over time, to derive patterns from it.
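The "input variables, not output variables" point can be made concrete with a toy model. The feature names and weights below are entirely made up for illustration; in a real system the weights would be learned from labeled data, and the model would be far richer than a single logistic function. The sketch only shows the shape of the idea: each signal contributes to a score, and no single signal decides the outcome.

```python
import math

# Hypothetical feature set: one input variable per signal discussed above.
FEATURES = ["fact_check_disputed", "user_report_rate",
            "spread_abruptness", "evaluator_score"]

# Illustrative weights, assumed for this sketch; a real model learns them.
WEIGHTS = {"fact_check_disputed": 2.0,
           "user_report_rate": 1.5,
           "spread_abruptness": 0.8,
           "evaluator_score": -2.5}   # higher evaluator score, more credible
BIAS = -1.0

def junk_probability(item):
    """Combine the input signals into a probability that an item is junk."""
    z = BIAS + sum(WEIGHTS[f] * item.get(f, 0.0) for f in FEATURES)
    return 1.0 / (1.0 + math.exp(-z))

# Two hypothetical items: no single feature settles either case.
reputable = {"fact_check_disputed": 0, "user_report_rate": 0.1,
             "spread_abruptness": 0.2, "evaluator_score": 0.9}
dubious   = {"fact_check_disputed": 1, "user_report_rate": 0.8,
             "spread_abruptness": 0.9, "evaluator_score": 0.1}
```

Note how even a "disputed" flag from a fact-checker is just one weighted term: an item with strong credibility signals elsewhere is not automatically eliminated, which is exactly the distinction between input and output variables.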
Why is it necessary to redesign algorithms? Simply because the environment they respond to also changes. Any tool is liable to be misused as soon as the incentives to do so are sufficient, which creates the need to protect those tools against such misuse. Most of the algorithms we know are, as such, works in progress that try to evolve with the characteristics of the environment or with the patterns of use it generates. The case of fake news, like Google’s earlier successive attempts to correct the weight given to sensationalism, shows this clearly.
As such, a much-needed evolution. But also, from a research point of view, a completely fascinating one.
(In Spanish, here)