Facebook’s ‘Disputed News’ Shows Intent But Does Not Solve The Fake News Problem
Last week Facebook started rolling out a feature that allows users to flag content as ‘disputed’, in a bid to curb the circulation of fake news on the site. Stories flagged by users will then be passed on to Facebook’s partner fact-checking organizations, which are signatories to Poynter’s International Fact-Checking Code of Principles. If the reviewing partners agree that a story is fake, it will be labelled with a ‘Disputed News’ tag and appear lower in the News Feed. Users will also get a reminder before sharing such links.
Why this commendable initiative only works on a PR level
While Facebook’s efforts in contributing to the solution of this growing problem are appreciated, scepticism remains around the tangible results this initiative can deliver:
- Facebook’s help page states that fact checkers will be signatories to the non-partisan Poynter Code of Principles. While Poynter acknowledges this on its site, it is quick to remind readers that this is just a minimum requirement for Facebook’s fact-checking partners. The final verdict on who gets approved as a fact-checking partner remains fully in Facebook’s hands.
- Facebook’s help page further states: ‘News stories that are reported as fake by people on Facebook may be reviewed by independent third-party fact-checkers.’ By using the word ‘may’, Facebook masterfully decreases its degree of responsibility while retaining control. There are currently no specific guidelines explaining how Facebook will decide which stories to pass on to fact-checkers. In all fairness, Facebook’s decision-making is not easy: if Facebook discloses the requirements for flagged stories to be passed on to fact-checkers, it would essentially be handing a manual to trolls and anyone else interested in disrupting news flows. If, on the other hand, Facebook keeps the selection formula secret, it risks bearing the blame for arbitrary labelling of fake news.
- The word ‘may’ above raises an additional question: what proportion of stories flagged by users as fake will actually be passed on for fact-checking? It is reasonable to believe Facebook will try to process as many as possible. But the sheer scale of the task could become problematic, especially because the fact-checking workload will ultimately sit with the fact-checking partners. These are organizations whose size may not enable them to cope with the daily fact-checking workload if Facebook doesn’t manage the number of daily queries it puts through for their consideration in the first place.
- The fake news flag may lull users into a false sense of security: users will start to see certain links labelled as ‘disputed news’, which will likely nurture the belief that Facebook is now taking care of fake news. As a result, users will be prone to letting their guard down even further on articles that aren’t labelled as disputed. Stories that do make it through Facebook’s fake news wall will thus become an even more powerful source of misinformation than they are now.
- While quality fact-checking requires time, links can spread and cause damage very quickly. Retractions, retrospective debunking and apologies are unfortunately of little tangible value once the original message has already gone viral. Thus, questions remain around what happens to flagged articles that are waiting to be reviewed. For example, should content be temporarily blocked from appearing in news feeds while a fact-check review is pending, and perhaps have sharing disabled, once a certain number of users have flagged it? There is no right answer: policing content in this manner is probably not Facebook’s role. On the other hand, if the company turns a blind eye to this ‘fact-check pending’ period and allows continued circulation of a potentially disputed article until it has been officially declared as such, the misinformation power of fake news will remain strong.
All in all, rather than solving the fake news problem, this initiative creates great PR for Facebook. It helps mitigate its accountability and deflect potential blame with regard to the rise of fake news. And perhaps that is the goal – it is important to remember that Facebook (like any other for-profit company) is here to make money. It has no commercial interest in fully eradicating fake news as long as users keep reading it. This is demonstrated by the fact that Facebook will not be deleting or blocking hoax content, but merely labelling it as ‘disputed’ and continuing to host it; disputed news can still be shared, and a faction of users will always believe ‘the truth’ is being purposefully censored. Facebook does, however, have a commercial interest in making sure the public views it as a trustworthy ally that cares about their lives and problems, as this protects and nurtures engagement on the platform. Thus, showing intent to tackle fake news may currently matter more to Facebook than its actual ability to deliver.