By Maisie Marston
After Trump was elected in 2016, Mark Zuckerberg, CEO of Facebook, confidently asserted that “The idea that fake news on Facebook influenced the election in any way is a pretty crazy idea.” It later transpired that during the campaign, social media was plagued by both automated and human accounts spreading false and misleading messages. Most notably, the Russian Internet Research Agency was found to be involved and was later charged with criminal offences for election interference.
It became clear that the Facebook algorithm enabled the spread of fake news and confirmation bias, so Zuckerberg committed the company to “protecting our community from abuse and hate” and “defending against interference by nation states”. In other words, he would attempt to stop the proliferation of fake news, or ‘disinformation’ as it is now referred to by the UK Government.
In 2018, similar questions were asked about the impact of fake news on British politics, leading to an inquiry conducted by the Digital, Culture, Media and Sport Committee. The inquiry included evidence from Cardiff University's Crime and Security Research Unit, which described the influence of Russian-linked social media accounts designed to intensify the public impact of the 2017 terrorist attacks.
Currently, Facebook outlines three key areas on which it is focusing to tackle fake news. Working with over 30 fact-checking organisations globally, Facebook claims to be “disrupting economic incentives,” “building new products to curb the spread of false news” and “helping people make more informed decisions when they encounter false news”.
Despite Facebook introducing these safeguards, the committee was not satisfied, calling for a number of new regulations. It recommended that social media companies should be “obliged to take down known sources of harmful content, including proven sources of disinformation.” It concluded that, with regard to fake news, Zuckerberg had failed to show “leadership or personal responsibility”.
It has long been debated whether social media outlets should be responsible for filtering out fake news, or whether individuals should be personally responsible for this. The government has given a clear answer to this question: it is Facebook’s responsibility. Fake news puts our democracy at risk.
Tougher regulation could be the answer. In Germany, tech companies must remove hate speech within 24 hours or face a fine of €20m. As a consequence, one in six Facebook moderators is based in Germany. An update of electoral law and social media regulation could therefore result in similarly significant changes in the UK.