By Emma Murphy
Who hasn’t enjoyed a good video of Steve Buscemi’s face superimposed onto Jennifer Lawrence during a press conference? Or Nicolas Cage in very un-Nicolas Cage places? Maybe you found some solace in the obviously fake footage of Kit Harington apologising for that final season (just me?). If you have, welcome to the realm of deepfakes. We’re hearing about them more and more, and with good reason. But what exactly are they, and how are they made, you ask? The answer is relatively uncomplicated, and startlingly accessible.
It starts with two algorithms working against each other in a generative adversarial network (GAN). The first algorithm is known as the discriminator; the second is the generator. A spam email creator is a good example for beginning to understand how a GAN works. The discriminator in this scenario mimics the spam detector used by most email providers: it compiles a large set of already-detected spam emails, its training set, and learns the common phrases and terms. The discriminator gets better as it works through the emails, noticing that pleas for money from an Ethiopian prince, or exciting claims of compensation entitlement, don’t usually make it to the inbox. The generator, in turn, uses the discriminator’s feedback to build new emails that avoid the usual buzzwords. The two are always in competition, creating a self-reinforcing cycle. Deepfake videos are created in much the same way: a GAN, loaded with videos from innumerable sources, picks up on the little nuances of human speech and expression and uses them to generate new content.
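The adversarial cycle described above can be caricatured in a few lines of plain Python. This is only an illustrative sketch, not a real neural network: the buzzwords, synonym swaps, and training emails are invented for the example.

```python
# Toy caricature of a GAN's discriminator/generator cycle, using the
# spam-filter analogy. All word lists here are invented for illustration.

TRAINING_SPAM = [
    "urgent plea from a prince, send money",
    "you are entitled to compensation, claim now",
]

def train_discriminator(spam_corpus, suspicious=("prince", "compensation", "money")):
    # The discriminator learns which buzzwords actually appear in known spam.
    blacklist = set()
    for email in spam_corpus:
        blacklist.update(word for word in suspicious if word in email)
    return blacklist

def is_spam(email, blacklist):
    # Flag an email if it contains any learned buzzword.
    return any(word in email for word in blacklist)

def generate_email(blacklist):
    # The generator rewrites its draft around whatever the discriminator
    # has learned to catch.
    synonyms = {"prince": "gentleman", "compensation": "refund", "money": "funds"}
    draft = "urgent plea from a prince, send money for your compensation"
    for banned, swap in synonyms.items():
        if banned in blacklist:
            draft = draft.replace(banned, swap)
    return draft

blacklist = train_discriminator(TRAINING_SPAM)
new_email = generate_email(blacklist)
print(is_spam(new_email, blacklist))  # → False: the new draft slips past
```

In a real GAN both sides are neural networks and the feedback is a training gradient rather than a word list, but the shape of the loop is the same: each catch sharpens the discriminator, and each escape sharpens the generator.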
That isn’t to say that the newly generated clips are by any means seamless. Many of the videos suffer from what is commonly known as the ‘uncanny valley’ – a phrase used to describe the funny feeling you get when watching something vaguely human, but not quite (think Alita: Battle Angel). However, Matt Turek, a member of the Pentagon’s deepfake program, confirms that it is becoming easier to create a deepfake than to detect one. AI systems are quickly developing the ability to detect deepfakes, but each detection just feeds the GAN, which learns to produce higher-quality, undetectable content based on what was caught out in the past. Visual and auditory inconsistencies in the mannerisms of the subjects being recreated are often noticeable to the human eye, but that might not be the case much longer. Another proposed defence is ‘hashing’, which derives a series of numbers from the video that no longer match if it is tampered with. Finally, and most controversially, there is a system of ‘authenticated alibis’: the suggestion that public figures constantly record themselves, so that if a deepfake appears, they can show what they were really doing at the time.
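The hashing idea is easy to demonstrate. Here is a minimal sketch using SHA-256 from Python’s standard library; the byte strings stand in for real video data, and actual provenance schemes are more elaborate (signing per-frame hashes, for instance):

```python
import hashlib

def fingerprint(video_bytes: bytes) -> str:
    # A cryptographic hash: any change to the input bytes, however small,
    # produces a completely different output.
    return hashlib.sha256(video_bytes).hexdigest()

original = b"\x00\x01frame-data"        # stand-in for a real video's bytes
published_hash = fingerprint(original)  # recorded at publication time

tampered = original + b"swapped-face-frames"
print(fingerprint(tampered) == published_hash)  # → False: tampering detected
```

Anyone holding the published fingerprint can re-hash a copy of the video and check whether it still matches; if a single frame has been altered, it won’t.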
You may be thinking: okay, cool. Two funky algorithms can make politicians say something wildly controversial or celebrities show boobies on PornHub. But it doesn’t affect you, right? Right? To that I say: on the bright side, at least it sort-of delegitimises anything damning you’ve ever done on video. With the increasing popularity of facial recognition as a form of cyber security (not to mention the countless pictures and videos of you already on the web), the threat to the average citizen is growing. “All of those images that you put of yourself online have exposed you,” says Hany Farid, a UC Berkeley professor whose research in media forensics has been funded by DARPA. “You gave it up freely. Nobody forced you to do it, it wasn’t even stolen from you – you gave it up.” Farid is concerned about the apparent indifference towards deepfakes from social media platforms. Following the upload of several deepfakes focusing on Facebook, Chief Executive Mark Zuckerberg finally came around, stating at the Aspen Ideas Festival that FB might consider a policy change. To make a convincing deepfake mask of the average social media user, an attacker would need a few hundred unobstructed facial shots to play around with. That may initially sound like a lot to compile, and there might not even be that many pictures of you floating around the web, but consider that the iPhone shoots video at at least 30 frames per second: a ten-second clip already yields around 300 frames.
The victims of face-swapping deepfakes are usually women. Extremely graphic, detailed and difficult to detect, pornographic deepfakes are uploaded to the internet frequently. Their legality is another largely untested issue in the courts. Google has recently added “involuntary synthetic pornographic imagery” to its ban list, which allows anyone to request the removal of such material. This, however, doesn’t promise the end of their creation and advancement. Discussion boards about deepfakes are cropping up across the web, and it’s even possible to request videos of people you know. It isn’t even expensive: about £16 a pop. A recent case of an average woman’s face being placed atop a porn actor’s body required only roughly 500 photos of her face, lifted exclusively from her social media accounts. This raises another frightening point: a quick reverse image search would lead you straight back to where the photos were originally shared.
Now, this isn’t an article intended to warn you off posting that gorgeous golden-hour selfie. Rather, it’s a bit of a crash course in deepfakes, and the current climate and rhetoric surrounding them. As the technology develops, it’s important to keep a level head and always be ready to scrutinise a little bit more.