False stories travel way faster than the truth
Twitter loves lies. A new study finds that false information on the social media network travels six times faster than the truth and reaches far more people.
And you can't blame bots. It's us, say the authors of the largest study of online misinformation.
Researchers at the Massachusetts Institute of Technology looked at more than 126,000 stories, tweeted millions of times between 2006 and the end of 2016. That period ended before Donald Trump took office but included the combative presidential campaign. The researchers found that "fake news" sped through Twitter "farther, faster, deeper and more broadly than the truth in all categories of information," according to the study in Thursday's journal Science.
"No matter how you slice it, falsity wins out," said co-author Deb Roy, who runs MIT's Laboratory for Social Machines and is a former chief media scientist at Twitter.
Twitter funded the study but had no say in the outcome, the researchers said.
The scientists calculated that the average false story takes about 10 hours to reach 1,500 Twitter users, while the truth takes about 60 hours to reach the same number. On average, false information reaches 35 percent more people than true news.
True news stories almost never got retweeted to 1,000 people, but the top 1 percent of false ones reached as many as 100,000.
Researchers also tracked how stories cascade, that is, how they pass from one person to another, like a family tree. False information reached as many as 24 generations of retweets, while true information maxed out at a dozen.
Concern over bogus stories online has escalated in recent months, fueled by evidence that Russians spread disinformation on social media during the 2016 presidential campaign.
Social media companies have experimented with computer algorithms and human fact-checkers to try to weed out false information and abuse online. Twitter said earlier this month that it is seeking help from outside experts to better deal with the problem, and Facebook this week announced a partnership with The Associated Press to identify and debunk false and misleading stories about the midterm elections.
"We have witnessed abuse, harassment, troll armies, manipulation through bots and human-coordination, misinformation campaigns and increasingly divisive echo chambers." That was according to a tweet by Twitter co-founder and CEO Jack Dorsey. "We aren't proud of how people have taken advantage of our service, or our inability to address it fast enough."
The MIT study took the 126,285 stories and checked them against six independent fact-checking sites to classify each as true, false or mixed. Nearly two-thirds were false, just under one-fifth were true, and the rest were mixed.
The six fact-checking websites agreed with each other on classification at least 95 percent of the time, and two outside researchers did independent fact-checking to verify the classifications, said co-author Sinan Aral, an MIT management professor.
Lead author Soroush Vosoughi, an MIT data scientist, identified the three false stories that traveled the farthest and fastest. The first was about a Muslim guard called a hero in the Paris bombings of 2015. The second was about an Iraq war veteran finishing as runner-up to Caitlyn Jenner for an ESPN courage award. And the third was about a 2000 episode of "The Simpsons" having a story line about a Trump presidency. (The scene was actually created in 2015.)
Kathleen Hall Jamieson, a University of Pennsylvania communications professor and co-founder of factcheck.org, had problems with the way the study classified true and false stories. The MIT team characterized a story's truth on a 1-to-5 scale, with 1 being completely false. Factcheck.org, Jamieson said, looks more at context and does not label something simply true or false.
She also suggested that calling this bogus information "false stories" does not capture how bad it is, saying it would "better be called viral deception."
The researchers identified obvious bots, which are automated accounts, and removed them from the analysis. The bots tweeted false information at a higher rate than humans, but not by much, and even without bots, lies still spread faster and farther, Roy said.
David Lazer, a political and computer scientist at Northeastern University, wasn't part of the study but wrote an accompanying report. He praised the MIT research but said the scientists may have missed many bots and cyborgs, accounts that are part human and part automated. His ongoing, not-yet-published research has found that about 80 percent of false stories come from just one-tenth of 1 percent of users.
The researchers dug deeper to find out what kind of false information travels fastest and farthest. They looked at false political stories but did not separate conservative from liberal ones. Stories that were surprising or anger-provoking spread faster than other types of lies, Aral said.
"Falsehood was significantly more novel than the truth," Aral said. "It's easy to be novel when you make things up."
That fits perfectly with previous research on the psychology of fake information, according to Yale University's Dan Kahan and Dartmouth College's Brendan Nyhan, scientists who study the phenomenon.
"The more strange and more sensational the story sounds, the more likely they are going to retweet," Kahan said.
Nyhan and Lazer said more fact-checking and teaching people how to tell fake from real can help, but the more effective solution will have to come from the social media platforms themselves.