When Twitter banned more than 70,000 traffickers of false information from its platform in the wake of the violence at the U.S. Capitol on Jan. 6, 2021, the impact went beyond the silencing of those users.
A study co-authored by UC Riverside public policy and political science scholars, published in the journal Nature on Wednesday, June 5, found that the crackdown by Twitter (now called X after it was acquired by billionaire Elon Musk in late 2022) also significantly reduced the number of misinformation posts by users who stayed on the platform but had been following those who were kicked off.
Additionally, the study found that many of the misinformation traffickers, including those who posted Q-Anon conspiracy theories, left Twitter of their own accord after the massive de-platforming, which included the banning of then-President Donald Trump.
“There was a spillover effect,” said Kevin M. Esterling, a UCR professor of political science and public policy and a co-author of the study. “It wasn’t just a reduction from the de-platformed users themselves, but it reduced circulation on the platform as a whole.”
It was the first time such an effect had been shown, he said.
The researchers analyzed a panel of about 550,000 Twitter users in the United States who were active during the 2020 election cycle. The data were acquired by David Lazer, the corresponding author of the study, who is a professor of political science and computer and information science at Northeastern University in Boston.
A research team from Lazer’s laboratory collected posts through Twitter’s application programming interface, or API, a set of programmatic tools that let researchers query the platform and gather tweets and other information about its users. Users in the panel were verified as real people by cross-referencing them with voter registration data.
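The paper does not reproduce its collection code, but as an illustration only, a minimal sketch of how tweets for a panel of known user IDs might be gathered through the Twitter v2 API using the tweepy library is shown below. The bearer token, user IDs, pagination limits, and field choices are placeholders, not details from the study.

```python
# Illustrative only: pulling recent tweets for a panel of user IDs via the
# Twitter v2 API with tweepy. Credentials, IDs, and fields are placeholders;
# the study's actual pipeline and access level are not described here.
import tweepy

BEARER_TOKEN = "YOUR_BEARER_TOKEN"      # hypothetical credential
panel_user_ids = ["12345", "67890"]     # hypothetical panel members

client = tweepy.Client(bearer_token=BEARER_TOKEN, wait_on_rate_limit=True)

panel_tweets = {}
for user_id in panel_user_ids:
    tweets = []
    # Page through each user's timeline, keeping timestamps and URL entities.
    for page in tweepy.Paginator(client.get_users_tweets,
                                 id=user_id,
                                 tweet_fields=["created_at", "entities"],
                                 max_results=100,
                                 limit=5):
        tweets.extend(page.data or [])
    panel_tweets[user_id] = tweets
```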
The analysis found that panel members who had followed one or more of the 70,000 de-platformed accounts had been more frequent sharers of URLs (Internet addresses) known to disseminate misinformation than others in the panel.
The research also identified about 600 “super sharers” of misinformation in the panel, users who ranked in the top 0.1 percent of misinformation sharers in the months leading up to the Jan. 6 insurrection. Their numbers dropped by more than half after the de-platforming. Similarly, the roughly 650 Q-Anon sharers in the panel fell to about 200 within two weeks of the de-platforming.
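The published analysis defines super sharers as the top 0.1 percent of misinformation sharers; its exact computation is not reproduced here. Under the simple assumption of a per-user count of shared low-quality URLs, a threshold of that kind could be computed roughly as in the sketch below, where the column names and sample data are hypothetical.

```python
# Illustrative sketch: flagging "super sharers" as the top 0.1% of panel users
# by count of misinformation URLs shared. Columns and data are hypothetical;
# the published analysis may define and measure sharing differently.
import pandas as pd

# shares: one row per (user_id, url, is_misinfo) observation in the panel
shares = pd.DataFrame({
    "user_id":    ["u1", "u1", "u2", "u3", "u3", "u3"],
    "url":        ["a", "b", "c", "d", "e", "f"],
    "is_misinfo": [True, False, True, True, True, False],
})

# Count misinformation URL shares per user.
misinfo_counts = (shares[shares["is_misinfo"]]
                  .groupby("user_id")
                  .size()
                  .rename("misinfo_shares"))

# Users at or above the 99.9th percentile are labeled super sharers.
threshold = misinfo_counts.quantile(0.999)
super_sharers = misinfo_counts[misinfo_counts >= threshold]
print(super_sharers)
```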
When monitoring their platforms, social media companies face a tradeoff between private economic interests and the public interest, said Diogo Ferrari, co-author of the paper and a UCR assistant professor of political science. Fake news posts increase engagement, which helps a platform’s bottom line. But curbing it “is good for democracy and democratic governance,” he said.
The study’s title is “Twitter’s post-January 6 de-platforming reduced the reach of misinformation.” In addition to Esterling, Ferrari, and Lazer, its co-authors are Jon Green of Duke University and Stefan McCabe of the Institute for Data, Democracy and Politics at the George Washington University.
Journal: Nature
Method of Research: Data/statistical analysis
Article Title: Twitter’s post-January 6 deplatforming reduced the reach of misinformation
Article Publication Date: 5-Jun-2024
COI Statement: Authors declare that they have no competing interests.