
Facebook Attempts to Minimize the Importance of Fake News & Filter Bubble Problems

Posted by: Hadley Reynolds

“The problem with Facebook’s influence on political discourse is not limited to the dissemination of fake news. It’s also about echo chambers. The company’s algorithm chooses which updates appear higher up in users’ newsfeeds and which are buried. Humans already tend to cluster among like-minded people and seek news that confirms their biases. Facebook’s research shows that the company’s algorithm encourages this by somewhat prioritizing updates that users find comforting.”

So writes Zeynep Tufekci, an associate professor at the University of North Carolina School of Information and Library Science and a contributing opinion writer to The New York Times, in a recent editorial titled “Mark Zuckerberg Is in Denial.”

Facebook’s attempts to minimize the fake news and filter bubble problems amount to a cover-up of machine learning bias. Problems with machine learning bias have been well documented for years. They can be bizarre and extreme, like Microsoft’s experience with Tay, the ill-fated Twitter bot that, instead of serving as a benign experiment in “conversational understanding,” was spammed by Twitter users intent on “training” the software on misogyny, racism, and other flavors of hate speech. Or they can be subtle and complex, like those exposed in Facebook’s algorithms for selecting news items to include in members’ Newsfeeds.

What both examples share is the unintended consequence of delivering information selected by a “content engine” whose responses are colored by an underlying bias that can drift, subtly or dramatically, away from what the engineers who set up the system intended. In the case of Facebook’s Newsfeed selections, well-intentioned personalization can be subverted by an engine that continually adjusts its selections, creating a rabbit hole that eventually contains only one flavor of content.
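To make that feedback loop concrete, here is a minimal sketch of how an engagement-weighted ranker can drift toward a single flavor of content. The topics, weights, and click propensities below are hypothetical, and the ranking rule is a toy stand-in rather than Facebook’s actual Newsfeed algorithm.

```python
import random
from collections import defaultdict

# Toy illustration (not Facebook's actual algorithm): an engagement-weighted
# ranker that re-learns a user's topic preferences from clicks each round.
TOPICS = ["politics_left", "politics_right", "sports", "science"]

def rank_feed(posts, topic_weights):
    """Order candidate posts by the user's learned topic weights."""
    return sorted(posts, key=lambda p: topic_weights[p["topic"]], reverse=True)

def simulate(rounds=20, feed_size=5):
    # Start neutral: every topic begins with the same weight.
    weights = defaultdict(lambda: 1.0)
    # Hypothetical click propensities for one user.
    user_bias = {"politics_left": 0.8, "politics_right": 0.2,
                 "sports": 0.5, "science": 0.5}

    for _ in range(rounds):
        candidates = [{"topic": random.choice(TOPICS)} for _ in range(50)]
        feed = rank_feed(candidates, weights)[:feed_size]
        for post in feed:
            # Each click nudges the topic's weight upward, so comforting
            # content is shown more, clicked more, and shown more again:
            # the rabbit hole.
            if random.random() < user_bias[post["topic"]]:
                weights[post["topic"]] += 0.5

    return dict(weights)

if __name__ == "__main__":
    # After a few rounds the weights drift heavily toward one flavor of content.
    print(simulate())
```

Run repeatedly, the learned weights concentrate on whichever topic the simulated user already favors, even though the engine started from a neutral position, which is the essence of the filter bubble dynamic described above.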

In the carbon world, pre-Facebook, the public square and omnipresent legions of newspapers provided a soapbox for pushing ideas, programs, leaders, ideologies, and, yes, invective that passersby could attend to or not. And their attention was their own choice, not the choice of an underlying machine modulating information exchange.

Facebook’s intention may be to make users feel taken care of and happy with the news bits that are pushed up to them—they want to avoid riling up the digital natives. But perhaps digital natives should get truly restless when they learn that their own actions are miring them deeper and deeper in opaque filter bubbles, while, in the name of personalization, the Facebook ML machine spins them tightly into its web.


About the Author:

Hadley Reynolds is Co-founder and Managing Director at the Cognitive Computing Consortium. He is a leading analyst of the search, content management, and knowledge management industries, researching, speaking, and writing on emerging trends in these technologies and their impact on business practice. He currently leads the publications program at the Consortium.