
I’ve previously written about how Facebook is a sort of quasi-monopolistic utility. Part of Facebook’s status as a dominant player is that it has a huge role in determining what news people see. A few developments in the past few months have raised interesting questions about how Facebook deals with its role as an information gatekeeper.
Humans = bad, robots = good
You may have read a while ago that Facebook was in trouble for supposedly showing a left-wing bias in an obscure part of its platform which was curated by human employees. This was not the main newsfeed but a small section called ‘Trending Topics’. In response to the controversy, Facebook switched from having humans curate the topics to using a supposedly more neutral and more scalable algorithm (in other words, moving to a newsfeed-like model).
Part of the case for getting rid of the human-curated Trending Topics section was Facebook’s argument that sorting through stories with humans simply wasn’t feasible:
… Facebook’s reach extends to so many people and countries and languages, that building editorial products that are curated by humans would not be feasible, [Will Cathcart, director of product management for the News Feed] says. Facebook wants the News Feed to feel like it has been tailored to your unique set of interests. “We think about what we’re trying to do in a really personalized way,” he says. “And so I think if you’re trying to build a product for over 1 billion people to be informed about the news that they care about, you can’t really be building a product that has judgments about particular issues, or particular publishers. Because that doesn’t match what 1 billion people around the world want.” Trending Topics are less personalized by design — they’re meant to reflect activity on the platform — but Facebook sees that as a math problem, rather than one to be solved with editorial judgment.
Algorithms aren’t neutral
The quote above suggests Facebook thinks that taking humans out of the equation, and relying instead on an algorithm to determine what’s relevant to each individual person, means annoying questions about bias will go away. However, as Nilay Patel at The Verge pointed out, algorithms aren’t actually neutral at all. Algorithms are made by people who naturally have biases, and changes to algorithms inevitably favour one party over another:
algorithms aren’t neutral, which is the real issue. Facebook is a powerful media gatekeeper because of the artificial scarcity of the News Feed — unlike Twitter, which blasts users with a firehose of content, Facebook’s News Feed algorithm controls what you see from all the people and organizations you follow. And changes to the News Feed algorithm divert enormous amounts of attention: last year Facebook was sending massive amounts of traffic to websites, but earlier this year Facebook prioritized video and that traffic dipped sharply. This month Facebook is prioritizing live video, so the media started making live videos. When media people want to complain, they complain about having to chase Facebook, because it feels like Facebook has a ton of control over the media. (Disclosure: Facebook is paying Verge parent company Vox Media to create Facebook Live videos.)
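To make Patel’s point concrete, here is a minimal Python sketch of a feed-ranking function. It is entirely hypothetical (nothing here reflects Facebook’s real, non-public ranking system), but it shows how the weights a ranking algorithm uses are themselves editorial choices: bump the weight on video and a different set of publishers wins the scarce slots at the top of the feed.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    kind: str       # e.g. "link", "photo", "video", "live_video"
    likes: int
    comments: int

# Hypothetical ranking weights. Nothing about these numbers is "neutral":
# whoever sets them decides which kinds of content (and, indirectly, which
# publishers) get seen. Doubling the video weight is an editorial decision.
WEIGHTS = {
    "link": 1.0,
    "photo": 1.2,
    "video": 2.0,       # "earlier this year Facebook prioritized video..."
    "live_video": 3.0,  # "...this month Facebook is prioritizing live video"
}

def score(post: Post) -> float:
    """Toy relevance score: engagement scaled by a per-format weight."""
    engagement = post.likes + 2 * post.comments
    return WEIGHTS.get(post.kind, 1.0) * engagement

def rank_feed(posts: list[Post], limit: int = 10) -> list[Post]:
    """Return only the top posts -- the artificial scarcity of the feed."""
    return sorted(posts, key=score, reverse=True)[:limit]

if __name__ == "__main__":
    feed = rank_feed([
        Post("News site", "link", likes=500, comments=40),
        Post("News site", "video", likes=300, comments=30),
        Post("Friend", "live_video", likes=50, comments=20),
    ])
    for p in feed:
        print(f"{p.author:10s} {p.kind:11s} score={score(p):.0f}")
```

Whoever chooses those numbers is making a judgment about particular formats and particular publishers, whatever the company says about neutrality.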
As various people have pointed out, the story about Facebook’s bias in Trending Topics — even if it’s true — doesn’t really matter in itself, because Trending Topics is such a small part of the platform. But it does say something interesting about Facebook’s relationship with the media.
Should Facebook have an obligation to fact check information?
Traditional journalism involves fact-checking stories so as to avoid publishing false or misleading information. Given Facebook now has an incredibly powerful role in determining what news people read, should it have a similar obligation to avoid presenting people with links to false information?
Here’s The Verge again:
But what about cases where people are obviously wrong? If 100 million people are posting, falsely, that Barack Obama was born in Kenya, does Facebook have a role to play in stopping that? “I think you already see that happen on the platform today,” Cathcart says. “It doesn’t have anything to do with us — people post a lot of this stuff and talk about it, and other people post different points of view. And the nitty-gritty of the details of how we should be involved I actually think is less important than building a platform where if people want to talk about that, it’s really easy to talk about that and find different points of view.”
The need for Facebook to fact-check information it promotes became obvious when it later promoted a fake story for 8 hours.
Facebook needs to be more responsible
At first glance, Facebook may not seem like a media outlet. After all, it doesn’t create content; it simply allows its users to share things through its platform. In reality, however, Facebook acts as a sort of robotic newspaper editor by automatically determining what content its users see. A necessary (but difficult) part of its job should be to filter out blatant falsehoods. I know it’s not easy to magically determine what’s true and what isn’t, but I’m sure the smart engineers at Facebook can try. One interesting approach, which Google has recently pursued with Google News, is to pair links to news stories with a link to a fact-check article.
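As a rough illustration of that pairing approach, here is a hedged Python sketch. The lookup table, the FactCheck type and the match-by-domain logic are all invented for illustration; a real system would need to match claims rather than domains, and would draw on structured fact-check data rather than a hard-coded dictionary.

```python
from dataclasses import dataclass
from typing import Optional
from urllib.parse import urlparse

@dataclass
class FactCheck:
    claim: str
    verdict: str   # e.g. "False", "Misleading", "True"
    url: str       # link to the fact-check article

# Hypothetical index of fact checks, keyed by the domain that published the
# original claim. Purely illustrative data.
FACT_CHECK_INDEX: dict[str, list[FactCheck]] = {
    "example-hoax-site.com": [
        FactCheck(
            claim="Barack Obama was born in Kenya",
            verdict="False",
            url="https://factchecker.example.org/obama-birthplace",
        ),
    ],
}

def annotate_link(story_url: str) -> tuple[str, Optional[FactCheck]]:
    """Return the story URL plus a fact-check link to show alongside it, if
    one is on file. The story is still shown: it is paired with a correction
    rather than suppressed."""
    domain = urlparse(story_url).netloc.lower().removeprefix("www.")
    checks = FACT_CHECK_INDEX.get(domain, [])
    return story_url, (checks[0] if checks else None)

if __name__ == "__main__":
    url, check = annotate_link("https://www.example-hoax-site.com/obama-kenya")
    print(url)
    if check:
        print(f'  Fact check: "{check.claim}" -- {check.verdict} ({check.url})')
```

The appeal of this design is that it adds context rather than censoring: the dubious story still appears, but readers see the correction right next to it.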
Another role the media has traditionally played is promoting balance and different viewpoints, which is also likely to fall by the wayside if Facebook devolves responsibility to an algorithm. A natural — and troubling — consequence of Facebook’s newsfeed responding to what people ‘like’ is the so-called echo-chamber effect, where users will only see views which are consistent with their own.
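The echo-chamber mechanism is easy to see in a toy model. In the sketch below (the stances, headlines and like-prediction formula are all invented), the feed ranks posts purely by how likely this particular user is to ‘like’ them; since agreement predicts likes, only views the user already holds make it into the top slots.

```python
from dataclasses import dataclass

@dataclass
class Post:
    headline: str
    stance: float  # invented scale: -1.0 (one political pole) to +1.0 (the other)

def predicted_like_probability(user_stance: float, post: Post) -> float:
    """Toy model: people are more likely to 'like' posts close to their own views."""
    return max(0.0, 1.0 - abs(user_stance - post.stance))

def rank_for_user(user_stance: float, posts: list[Post], limit: int = 2) -> list[Post]:
    """Optimising purely for predicted likes puts agreeable content on top."""
    return sorted(posts,
                  key=lambda p: predicted_like_probability(user_stance, p),
                  reverse=True)[:limit]

if __name__ == "__main__":
    posts = [
        Post("Op-ed you already agree with", stance=0.8),
        Post("Another take from your own side", stance=0.6),
        Post("Report sympathetic to the other side", stance=-0.5),
        Post("Strong opposing opinion piece", stance=-0.9),
    ]
    # A user with stance 0.7 never sees the opposing pieces in their top slots.
    for post in rank_for_user(user_stance=0.7, posts=posts):
        print(post.headline)
```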
The bottom line is that Facebook can’t just act as though its obligations end when it creates an ‘impartial’ algorithm to display what its users want. If Facebook is going to continue to play such an important role as an information intermediary, it needs to take its obligations as an information gatekeeper more seriously.
Postscript
Since the US election result, there have been some interesting developments regarding Facebook and accusations of ‘fake news’. Mark Zuckerberg released a statement saying fake news wasn’t a factor in the outcome of the election. He also said to journalists:
“I think the idea that fake news on Facebook — of which it’s a very small amount of the content — influenced the election in any way is a pretty crazy idea”
Then it came out that a group of Facebook employees had started a task force to tackle the problem, because they felt the company wasn’t taking the issue seriously enough.
Gizmodo also has an interesting story. They suggest Facebook may have avoided tackling the problem because it was worried about the impact on right-wing websites:
According to two sources with direct knowledge of the company’s decision-making, Facebook executives conducted a wide-ranging review of products and policies earlier this year, with the goal of eliminating any appearance of political bias. One source said high-ranking officials were briefed on a planned News Feed update that would have identified fake or hoax news stories, but disproportionately impacted right-wing news sites by downgrading or removing that content from people’s feeds. According to the source, the update was shelved and never released to the public. It’s unclear if the update had other deficiencies that caused it to be scrubbed.
“They absolutely have the tools to shut down fake news,” said the source, who asked to remain anonymous citing fear of retribution from the company. The source added, “there was a lot of fear about upsetting conservatives after Trending Topics,” and that “a lot of product decisions got caught up in that.”
Further reading
- “I’m Sorry Mr. Zuckerberg, But You Are Wrong” by Rick Webb
Updated 15 Nov with a link to Google’s new Google News feature and a postscript about the US election and fake news. Updated 20 Nov to add the further reading section.