Posted on 27 August 2021
by Daron Acemoglu
-- this post authored by Daron Acemoglu, Asuman Ozdaglar, and James Siderius, VoxEU.org
Misinformation spreads rapidly on social media platforms. This column uses a model of online content-sharing to show that a social media platform that wishes to maximise content engagement will propagate extreme articles amongst its most extremist users. ‘Filter bubbles’ prevent the content from spreading beyond its extremist demographic, creating ‘echo chambers’ in which misinformation circulates. The threat of censorship and a corresponding loss in engagement could pressure platforms to fact-check themselves, while regulating their algorithms could mitigate the consequences of filter bubbles.
“Virginia is eliminating advanced high-school math courses.”
“Donald Trump tried to impeach Mike Pence.”
“President Biden is passing a bill forcing all Americans to cut out red meat.”
These headlines were among the many circulating on social media over the last few months. Each of the articles was found to contain misinformation - i.e. misleading information or arguments, often aiming to influence (a subset of) the public. Articles containing misinformation were also among the most viral content, with “falsehoods diffusing significantly farther, faster, deeper, and more broadly than the truth in all categories of information” (Vosoughi et al. 2018). There are increasing concerns that misinformation propagated on social media is further polarising the electorate and undermining democratic discourse.
Why does misinformation spread?
What makes misinformation spread virally on social media? What role do the algorithms of social media platforms play in this process? How can we control misinformation? In recent work (Acemoglu et al. 2021), we address these questions.
As Pennycook et al. (2021) show experimentally, social media users care about sharing accurate content online. Sharing misinformation, and being called out by others, can give the user a reputation for irresponsibility or recklessness and reduce her status on social media (see Altay et al. 2020). At the same time, online users obtain value from social affirmation, or ‘peer encouragement’, in the form of likes or re-tweets (see Eckles et al. 2016).
We capture these choices by allowing users to decide whether to share an article, to kill it (not share it at all), or to inspect (fact-check) it to find out whether it contains misinformation. Sharing brings direct benefits but may be costly if the article contains misinformation that is discovered by some of its recipients. This trade-off turns on two considerations for the user. The first is whether the article is likely to contain misinformation. Because the user has a pre-existing belief or ideology, she assesses the veracity of the article according to the distance between its message and her viewpoint, and is more likely to share content that is ideologically aligned with her views. The second is how the shared article will be perceived by those in her social circle who receive it. This depends, among other things, on whether her followers will fact-check it themselves, which in turn depends on the degree of ‘homophily’ in her network - that is, whether her social circle shares her views. Strategic calculations matter here: if she expects recipients to fact-check an article, she is encouraged to fact-check it first, since misinformation is more likely to be discovered. In our paper, we explore these strategic considerations and how they affect the spread of misinformation.
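As a purely numerical illustration - not the formal model in the paper; the logistic prior, payoffs, and parameter values below are stylised assumptions - the share/inspect/kill decision can be sketched as an expected-payoff comparison:

```python
import math

def misinformation_prior(distance, sensitivity=2.0):
    """Illustrative prior: content farther from the user's own ideology
    is judged more likely to contain misinformation (logistic in distance)."""
    return 1 / (1 + math.exp(-sensitivity * (distance - 1.0)))

def best_action(distance, share_benefit=1.0, reputation_cost=3.0,
                inspect_cost=0.2, recipient_check_prob=0.5):
    """Return the payoff-maximising action among share / inspect / kill."""
    p_misinfo = misinformation_prior(distance)
    # Sharing blindly: direct benefit minus the expected reputation loss
    # if a recipient fact-checks the article and exposes misinformation.
    share = share_benefit - reputation_cost * p_misinfo * recipient_check_prob
    # Inspecting first: pay the fact-checking cost, then share only
    # content that turns out to be accurate.
    inspect = (1 - p_misinfo) * share_benefit - inspect_cost
    kill = 0.0  # not sharing yields nothing but risks nothing
    return max([("share", share), ("inspect", inspect), ("kill", kill)],
               key=lambda action: action[1])[0]
```

Under these assumptions, ideologically aligned content (small distance) is shared without checking, moderately distant content is inspected first, and very distant content is killed. Lowering `recipient_check_prob` - the echo-chamber case, where followers rarely fact-check - makes blind sharing optimal over a wider range of distances.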
A user’s sharing network, and the degree of homophily therein, is determined by her social network and the platform’s recommendation algorithm. Perhaps unsurprisingly, social media users tend to engage with (e.g. ‘follow’ or ‘friend’) other users with similar ideological beliefs (Bakshy et al. 2015). In other words, conservatives tend to interact with other conservatives and liberals with other liberals. This forms an exogenous ‘echo chamber’ with a high degree of homophily, whereby users associate with other like-minded users who echo each other’s opinions. Social media platforms’ algorithms can exacerbate homophily by linking users of similar beliefs and not linking those of opposing beliefs. This creates an endogenous echo chamber (or ‘filter bubble’).
One of our main findings concerns the role of echo chambers in the spread of misinformation. When echo chambers and the extent of homophily are limited, misinformation does not spread very far. A piece of online content circulates until it reaches a user who disagrees with it, who then fact-checks it and reveals whether it contains misinformation. This fact-checking disciplines other users, who are then induced to inspect articles themselves before sharing them. Conversely, when homophily is high and there are extensive exogenous or endogenous echo chambers, users of similar beliefs associate strongly with each other and, recognising this, fact-check much less. As a result, misinformation spreads virally.
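A toy simulation - again an illustrative construction rather than the formal model, with the Gaussian belief step and disagreement threshold chosen purely for exposition - makes the mechanism concrete: beliefs along a sharing chain drift by a random step whose size shrinks as homophily rises, and the cascade dies the moment a recipient disagrees enough to fact-check.

```python
import random

def cascade_length(homophily, article_ideology=0.8, disagree_threshold=0.5,
                   max_steps=100, seed=0):
    """Length of one sharing chain. Each recipient's belief is drawn near
    the sender's (tighter when homophily is high); a recipient who
    disagrees with the article fact-checks it and stops the spread."""
    rng = random.Random(seed)
    belief = article_ideology  # the original sharer agrees with the article
    for step in range(max_steps):
        belief += rng.gauss(0, 1 - homophily)  # next recipient's belief
        if abs(belief - article_ideology) > disagree_threshold:
            return step  # fact-check triggered: the cascade ends here
    return max_steps

def average_cascade(homophily, trials=500):
    """Mean cascade length over independent simulated chains."""
    return sum(cascade_length(homophily, seed=t) for t in range(trials)) / trials
```

Under these assumptions, `average_cascade(0.9)` is far larger than `average_cascade(0.3)`: in a tight echo chamber, content travels many steps before meeting a dissenting user who fact-checks it, while in a heterogeneous network it is typically checked within a step or two.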
Another main conclusion of our analysis concerns the role of platforms in propagating misinformation. For platforms that wish to maximise user engagement - in the form of clicks or shares on their site - echo chambers can be highly advantageous. When the platform recommends content to the demographic most likely to agree with it, the content is more likely to be received positively and less likely to be fact-checked and discarded (when it contains misinformation), increasing engagement. This engagement effect can lead to endogenous echo chambers, as documented by Levy (2020) for Facebook.
In fact, our results show that echo chambers and the viral spread of misinformation are more likely when articles contain extreme content. When content is not politically charged, such as wedding photos or cooking videos, the platform does not have a strong incentive to create filter bubbles, and may even decide to inspect the veracity of an article and eradicate the misinformation itself. The same is true when the platform’s users hold moderate ideological beliefs. This is because viral spread is less likely for moderate content or among users with moderate ideologies. In contrast, with politically divisive content or strong polarisation of beliefs in the community, not only will the platform find it beneficial to create an echo chamber in order to maximise engagement, but it will do so without verifying the veracity of the article. In other words, the optimal platform algorithm is to recommend extreme content that aligns with the most extremist users, while adopting a filter bubble that prevents the content from spreading beyond this demographic. Though beneficial for the platform, these endogenous echo chambers for politically charged content lead to the viral spread of misinformation.
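A back-of-the-envelope calculation - the numbers are purely illustrative assumptions, not estimates from the paper - shows why a filter bubble can maximise engagement even though it reaches a smaller audience:

```python
def expected_engagement(audience_size, share_prob, fact_check_prob,
                        misinfo_prob=0.5):
    """Expected shares of one article: each recipient shares with
    share_prob, but fact-checking of misinformation kills the article,
    modelled here (crudely) as a discount on every share."""
    survives = 1 - misinfo_prob * fact_check_prob
    return audience_size * share_prob * survives

# Filter bubble: a small, aligned, extremist audience that shares
# eagerly and almost never fact-checks.
bubble = expected_engagement(audience_size=100, share_prob=0.9,
                             fact_check_prob=0.05)

# Broad feed: three times the audience, but disagreement makes sharing
# rarer and fact-checking (which kills the article) common.
broad = expected_engagement(audience_size=300, share_prob=0.3,
                            fact_check_prob=0.7)
```

Under these stylised numbers the bubble yields roughly 88 expected shares against roughly 59 for the broad feed, so an engagement-maximising algorithm prefers the bubble despite showing the article to only a third as many users.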
Regulation can help
Regulation can help mitigate the effects of endogenous echo chambers. We show that three types of policies can be effective: article provenance, censorship, and algorithm regulation. First, if the platform must be more transparent about the provenance of an article, users will be encouraged to fact-check content from less reliable sources more often. However, we also find that such a policy can backfire because of an ‘implied truth’ effect: content coming from well-known sources can lead to less fact-checking than is optimal. Second, if regulators threaten to censor a small subset of articles that contain extreme messages or might contain misinformation, the platform is incentivised to act more responsibly. In particular, the threat of censorship and the corresponding loss in engagement is enough to push the platform toward reducing the extent of homophily and fact-checking itself in instances which, without censorship, would have created filter bubbles. Finally, a policy that directly regulates the platform’s algorithms can mitigate the consequences of filter bubbles. An ideological segregation standard - whereby echo chambers within the sharing network are limited and content across the ideological spectrum is presented to all users - can lead both to more responsible platform algorithms and to more fact-checking by users themselves.
References
- Acemoglu, D, A Ozdaglar and J Siderius (2021), “Misinformation: Strategic Sharing, Homophily, and Endogenous Echo Chambers”, NBER Working Paper 28884.
- Altay, S, A-S Hacquin and H Mercier (2020), “Why do so few people share fake news? It hurts their reputation”, New Media & Society, 24 November.
- Bakshy, E, S Messing and L A Adamic (2015), “Exposure to ideologically diverse news and opinion on Facebook”, Science, 348: 1130-1132.
- Eckles, D, R F Kizilcec and E Bakshy (2016), “Estimating peer effects in networks with peer encouragement designs”, Proceedings of the National Academy of Sciences, 113: 7316-7322.
- Levy, R (2020), “Social Media, News Consumption, and Polarization: Evidence from a Field Experiment”, SSRN Scholarly Paper ID 3653388.
- Pennycook, G, Z Epstein, M Mosleh, A A Arechar, D Eckles and D Rand (2021), “Shifting attention to accuracy can reduce misinformation online”, Nature, 592: 590-595.
- Vosoughi, S, D Roy and S Aral (2018), “The spread of true and false news online”, Science, 359: 1146-1151.
About The Authors
Daron Acemoğlu is Charles P. Kindleberger Professor of Applied Economics in the Department of Economics at the Massachusetts Institute of Technology. He received a BA in economics from the University of York (1989), an MSc in mathematical economics and econometrics from the London School of Economics (1990), and a PhD in economics from the London School of Economics (1992). He is an elected fellow of the American Academy of Arts and Sciences, the Econometric Society, the European Economic Association, and the Society of Labor Economists.
He has received numerous awards and fellowships, including the inaugural T. W. Schultz Prize from the University of Chicago in 2004, the inaugural Sherwin Rosen Award for outstanding contribution to labor economics in 2004, the Distinguished Science Award from the Turkish Sciences Association in 2006, and the John von Neumann Award from Rajk College, Budapest in 2007. He was also awarded the John Bates Clark Medal in 2005, given every two years by the American Economic Association to the best economist in the United States under the age of 40, and holds an Honorary Doctorate from the University of Utrecht. His research interests include political economy, economic development and growth, human capital theory, growth theory, innovation, search theory, network economics and learning.
Asu Ozdaglar received the B.S. degree in electrical engineering from the Middle East Technical University, Ankara, Turkey, in 1996, and the S.M. and the Ph.D. degrees in electrical engineering and computer science from the Massachusetts Institute of Technology, Cambridge, in 1998 and 2003, respectively.
She is the MathWorks Professor of Electrical Engineering and Computer Science in the Electrical Engineering and Computer Science (EECS) Department at the Massachusetts Institute of Technology. She is the department head of EECS and the Deputy Dean of Academics in the Schwarzman College of Computing. Her research expertise includes optimization theory, with emphasis on nonlinear programming and convex analysis; game theory, with applications in communication, social, and economic networks; distributed optimization and control; and network analysis, with special emphasis on contagious processes, systemic risk and dynamic control.
Professor Ozdaglar is the recipient of a Microsoft fellowship, the MIT Graduate Student Council Teaching award, the NSF Career award, the 2008 Donald P. Eckman award of the American Automatic Control Council, the Class of 1943 Career Development Chair, the inaugural Steven and Renee Innovation Fellowship, and the 2014 Spira teaching award. She served on the Board of Governors of the IEEE Control Systems Society in 2010 and was an associate editor for IEEE Transactions on Automatic Control. She was the inaugural area co-editor for the area entitled "Games, Information and Networks" in the journal Operations Research. She is the co-author of the book Convex Analysis and Optimization (Athena Scientific, 2003).
James Siderius is a fifth-year PhD candidate in MIT LIDS advised by Asu Ozdaglar and Daron Acemoglu. His focus of research includes network models and learning with specific applications to game theory, economics, and finance, and a particular focus on systemic risk in endogenous financial networks and the spread of misinformation in social networks.