By Jonathan Learmont
Since the US presidential election in November 2016 produced a shock victory for Republican candidate Donald Trump, Mr Trump’s accusations of ‘fake news’ against any media outlet that contradicts his public statements have brought unprecedented attention to the spread of false news stories. In February this year, an investigation into fake news spread by Russian social media accounts, led by former FBI director Robert Mueller, sparked widespread research into its influence, most notably on the US election as well as on subsequent political events elsewhere.
A recent study by academics at MIT reports that a fake news article posted on Twitter was a staggering 70% more likely to be retweeted than real news. The researchers measured this by the speed at which a given story reached over 1,000 people. Political fake news spread the quickest by a significant margin: it reached 20,000 people three times faster than other types of fake news took to reach 10,000. This seems to support the hypothesis that Russian accounts on Twitter, mainly bots, spread political lies and retweet them to quickly gain traction for parties and causes favoured by the Russian government. However, the academics attribute this distorted sharing of fake news to human users, claiming that bots share fake and real news in equal quantities.
Such a sweeping conclusion about social media bots is ripe for debate in light of information that has emerged in recent weeks. Bots on Twitter are currently identified by Twitter themselves, based on unusual retweet and posting activity. As bots become more sophisticated, and better financed by Russian organisations with alleged involvement such as the Internet Research Agency (IRA), set up in 2013, their role in sharing fake news becomes harder to quantify. Known accounts linked with the IRA saw spikes in political retweet activity after Donald Trump’s presidential nomination and during the election itself.
Crucially, the study does not capture the effect of another social media titan on the spread of fake news: Facebook. Russian political advertisers built a presence there that put fake news in front of millions of US users, targeted at those most likely to share it. The full extent of the advertising spend is still unknown, and Facebook are under intense scrutiny for failing to clarify it. They have also been opaque about who funded the adverts, since most user accounts are private. In fact, the human element of sharing fake news may be even more electorally potent on Facebook than on Twitter: its ageing user base is less technologically savvy and more likely to vote, and so probably more prone to take an interest in false political information and share it unknowingly. Amid the controversy over its distribution of fake news, Facebook have talked about ‘fixing’ the platform and focusing on ‘meaningful interaction’.
Regardless of the measured effect of bots, this makes for worrying reading. Both social media and news websites make money from adverts, and the drive to attract page views for revenue has produced a proverbial race to the bottom in ‘clickbait’. These tactics make it harder for people to tell what is real and what is not. As alluded to in the report, unusual stories or those with a clickbait headline are more likely to be shared, and it seems a rising proportion of them are not even true, political or otherwise. Countries including Italy and Sweden want to introduce digital literacy classes to help vulnerable users, but this looks to be a long-term issue that will require the ongoing cooperation of Twitter and Facebook with government intelligence agencies to tackle directly.