- The journal Science has investigated how fake news spread on Twitter during the 2016 US election.
- The scientists warn against "cyborgs" – semi-automated accounts operated by humans.
- To limit the problem, they suggest, among other things, that Twitter limit the number of political news links a user can post per day.
Fake news is a political problem. After surprises such as the Brexit vote and the 2016 US presidential election, it was speculated that social-media lies disguised as news had a major impact on the two votes. Much has been researched and written since then, but how big the problem actually is has remained unclear.
On Thursday, the journal Science published a study on the spread of fake news on Twitter. To find out how much voters were influenced by this news, the researchers examined more than 16,000 Twitter accounts that were active during the 2016 US election and could be matched to registered US voters. This allowed the researchers to be sure they were dealing with real people. The voter registrations also gave them information about the gender, age, and race of the Twitter users, so they could check whether these characteristics affect the distribution and consumption of fake news.
The results confirm the findings of several studies published in the past two years. Much of the false news distributed on Twitter comes from a few, mostly right-wing news portals. Just seven of these portals account for more than 50 percent of shared fake news, including the infamous Infowars and the Daily Caller.
Cyborgs instead of social bots
Another finding: almost 80 percent of fake news articles are shared and consumed by just a few hyperactive users. The researchers assume that these "hyperactive" accounts are not "social bots", i.e. fully automated programs, but partially automated "cyborgs". Demographics also play a role: the probability of consuming and sharing large amounts of fake news increases with conservative political leaning and with age.
So what can be done about fake news? The researchers recommend targeting the "hyperactive" disseminators: Twitter could, for example, limit the number of political news articles a user can share to 20 per day. In the case of fake news, that would affect less than one percent of otherwise unproblematic users. A simulation by the study's authors shows that because the hyperactive disseminators are responsible for so much of the shared fake news, such a rule alone would reduce its volume by 32 percent.
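The study does not describe how such a cap would be implemented; as a minimal sketch, a per-user daily counter could gate each share attempt. The class and method names below, and the cap of 20, are illustrative, with the figure taken from the proposal described above.

```python
from collections import defaultdict

DAILY_CAP = 20  # proposed cap on political news shares per user per day


class ShareLimiter:
    """Toy model of a per-user daily cap on shared political links."""

    def __init__(self, cap=DAILY_CAP):
        self.cap = cap
        # (user, day) -> number of shares recorded so far
        self.counts = defaultdict(int)

    def try_share(self, user, day):
        """Record one share attempt; return True if allowed, False if capped."""
        key = (user, day)
        if self.counts[key] >= self.cap:
            return False  # cap reached: share is blocked
        self.counts[key] += 1
        return True


# A hyperactive account attempting 100 shares in one day gets only 20 through:
limiter = ShareLimiter()
allowed = sum(limiter.try_share("hyperactive_user", "2016-11-01") for _ in range(100))
print(allowed)  # → 20
```

In this model, ordinary users who share only a handful of political links per day never hit the cap, which mirrors the study's point that the rule would affect less than one percent of unproblematic users.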
The scientists also believe that warnings could help when users want to share or retweet articles from known fake news purveyors. Facebook has already introduced something similar: anyone who wants to share a story that has already been debunked by a Facebook-authorized fact-checking organization (such as the German research portal Correctiv) is alerted that its truthfulness is in doubt. Whether this has had a positive effect so far? On that, Facebook remains silent.