Fake News
For the past day or two the people I follow on Twitter have been loudly calling for Facebook to crack down on fake news. The impetus is likely a Buzzfeed article detailing how a group of enterprising young Macedonians are making a lot of money from advertising on fake news sites. Their articles are designed to inflame partisan hatred, with clickbait-style titles like “Proof surfaces that Obama was born in Kenya—Trump was right all along…” Conservatives are their primary targets. Of this, one of the fake news authors said simply, “People in America prefer to read news about Trump.” In a delightful twist, one of these sites is now running an article titled “‘Fake news’ on social media influenced US election voters, experts say.”
In A Thousand Years of Nonlinear History, Manuel de Landa makes a case for the importance of energy gradients as an evolutionary and cultural force. Wherever the flows of matter and energy in the universe create a boundary or a gradient, organisms will evolve to exploit it. For example, solar energy gradients are exploited by phytoplankton that evolved to capture light using photosynthesis. The authors of fake news are exploiting information and attention gradients, transforming the cognitive processes of Facebook users into cold hard cash.
One reason this story is compelling is that we have the intuition that if fake news on Facebook is profitable, then we have a real structural problem. We can’t handle the moral dilemma: generating profit (capturing an energy flow) is good, but spreading misinformation is bad and could weaken our political institutions. In his defense of Facebook’s algorithmic approach, Mark Zuckerberg gives anarcho-capitalism a relativist spin: “Identifying the ‘truth’ is complicated.” As repellent as I find this on its face, he has a point. If it were easy to identify the truth, maybe people would not consume fake news in the first place. Zuckerberg also suggests that spreading misinformation is not so bad, all things considered: “on Facebook, more than 99% of what people see is authentic.” He seems to want to have it both ways: identifying fake news is hard, but we can trust our users to do it.
This raises a lot of questions, some of which I alluded to in my previous post examining the evidence for echo chambers in online news reading. How much inauthentic information is the right amount? How do we gauge the impact of fake news? If 1% of the information on Facebook is incorrect, how much damage do we expect to our political institutions? What about 2%, or 0.5%? How do we estimate the effect of fake news on our elections? (Remember “swiftboating”?) Who are the people reading the fake news? If they are already die-hard partisans, do we expect the news to change their thinking? Do people in fact believe the fake news is true? Why should I have the intuition that “the Pope endorsed Trump” is obviously false and should be ignored, while someone else has the intuition that it’s important and should be shared?
I think Facebook should identify and limit the spread of fake news articles. I agree with Zeynep Tufekci that Mark Zuckerberg is in denial. But I think it is more urgent to address why we are susceptible to fake news in the first place, and what the effects of fake news truly are. Only after answering some of these questions can we build social institutions that resist the spread of misinformation.