By Alexandra Banfi
Since the inception of news, disinformation has been a genuine societal threat. Whilst the term 'fake news' has enjoyed a renaissance thanks to Donald Trump, it's hardly exclusive to him – and it carries far more sinister intent than simple denial. It's a method of demeaning critical voices in the press and manipulating democracy.
The development of technology, especially social media, has facilitated and automated the spread of fake news. Those in power can now share their farcical realities – or 'alternative facts' – without verification or accountability, since social media affords them their own channel for distributing information. Consequently, for many members of the public, journalism becomes just another competing voice: the truth is determined by their preferred representatives rather than by facts. We need only turn to Brexit or Trump to see that fake news and the propensity to belittle the inconvenient findings of watchdog journalism pose a sinister threat to both democracy and journalism.
Yet technology, despite propagating the spread of fake news, is a broad and powerful force that can also serve as a valuable countermeasure. Tech giants such as Google and Facebook have recently faced immense pressure from government representatives to combat fake news on their platforms. By developing relationships with third-party fact-checking organisations, they aim to shield their users from disinformation. These companies are also adapting their algorithms to prioritise exposure to a plurality of opinions, rather than isolating consumers in their own echo chambers. The recently launched Google News Initiative aims to recalibrate Google's algorithms to promote verified news sources and protect journalists in the digital age. A primary aim of the initiative is to give young people a genuine digital education; it's a strong combination, one which acknowledges that the success of fake news is symptomatic of both a technological crisis and an education system that has failed to contain the torrential force of 21st-century information.
Tesla boss Elon Musk also took to Twitter to promote Pravda, a news credibility rating service open to everyone. Despite earnest intentions, the idea is fundamentally flawed. Crowdsourcing credibility in this fashion places too much faith in a populace whose readiness to dismiss inconvenient facts has already undermined democracy – so how could democratising the legitimacy of information be a remedy? Pravda would only provide an additional platform on which mobilised members of the public could discredit contradictory opinions.
The start-up Logically offers a more promising example, aiming to identify disinformation and bias through machine learning with human oversight. At present, artificial intelligence and machine learning lack the sophistication to perform this complex duty independently of humans, but they can analyse the statistical metadata of an article far more efficiently than we can. Combined with a human mind to conduct content analysis, this may be the most effective way to identify and defeat fake news. Whether artificial intelligence will ever function independently to identify bias and fake news is another debate, but the efforts of companies like Logically, alongside heightened public awareness of fact-checking resources such as PolitiFact, are instrumental in diminishing the influence of fake news.
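To make the idea of such a hybrid pipeline concrete, here is a minimal sketch of one way it could work: a statistical model scores an article from its metadata, and anything it is unsure about is routed to a human reviewer. The features, training data and thresholds below are invented for illustration and are not a description of Logically's actual system.

```python
# Hypothetical illustration only: a simple metadata classifier with a human review queue.
# Features, training data and thresholds are invented and do not reflect any real system.
from sklearn.linear_model import LogisticRegression

# Each article is reduced to statistical metadata:
# [source reliability score, shares per hour, ratio of emotive to neutral words]
X_train = [
    [0.9, 12, 0.10],   # reputable source, modest sharing, calm language
    [0.8, 30, 0.15],
    [0.2, 450, 0.60],  # unknown source, viral spread, highly emotive language
    [0.1, 600, 0.75],
]
y_train = [0, 0, 1, 1]  # 0 = likely genuine, 1 = likely fake

model = LogisticRegression().fit(X_train, y_train)

def triage(article_metadata):
    """Classify an article, deferring borderline cases to a human fact-checker."""
    p_fake = model.predict_proba([article_metadata])[0][1]
    if p_fake > 0.8:
        return "flag as likely disinformation"
    if p_fake < 0.2:
        return "publish in feed"
    return "queue for human content analysis"

print(triage([0.5, 200, 0.40]))  # an ambiguous case lands with a human reviewer
```

The point of the sketch is the division of labour: the machine handles the statistical signals at scale, while the judgement calls remain with people.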
Researchers at the University of Washington recently developed an AI capable of creating realistic videos by compiling sound and video from other clips. Although its purpose is to identify fake videos, its ability to create fake news is cause for concern. Fake news is going nowhere – in fact, its arsenal will only grow stronger as technology becomes more sophisticated. But technology is an extension of us, and so it carries the same potential for good and for harm. It may well have enabled the climate of disinformation, but technology by itself is not inherently evil. It's up to us to use these tools responsibly, and to shape a society capable of adapting to an informational machine far larger than any single one of us.
While alternative perspectives are indeed welcome, alternative facts should not be. Both tech and media companies must recognise that their intimate relationship creates a need for additional responsibility. Fake news is not an isolated internet issue that can be fought merely with algorithms: it ultimately comes down to providing digital education for everybody. Only by encouraging people to critically engage with what they are consuming will we abate this crisis.