The rise of deepfakes
Deepfakes are doctored videos created by sophisticated artificial intelligence (AI) systems that use deep learning. Deep learning is a subset of machine learning in which algorithms loosely modeled after the human brain learn from large amounts of data. Deepfakes are created by an AI analysing all the available footage of a target, then synthesising the target's likeness onto another video in a way that the model has learned to make look realistic.
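The "analyse, then synthesise" process above is often built on a shared-encoder, two-decoder architecture: one network learns to compress any face into a compact code, and a separate decoder per identity learns to reconstruct that person's face from the code. The sketch below is a deliberately toy illustration of that structure using random linear maps; real systems use deep convolutional networks trained on thousands of frames, and every array size here is an arbitrary assumption for demonstration.

```python
# Toy sketch of the shared-encoder / two-decoder idea behind many
# face-swap deepfakes. Illustrative only: the weights are random,
# untrained, and the "faces" are flat 64-value vectors.
import numpy as np

rng = np.random.default_rng(0)

# A shared "encoder" compresses any face into a small latent code.
W_enc = rng.normal(size=(8, 64))      # 64-pixel face -> 8-dim code

# Each identity gets its own "decoder" that reconstructs a face
# of that person from the shared code.
W_dec_a = rng.normal(size=(64, 8))    # decoder for person A
W_dec_b = rng.normal(size=(64, 8))    # decoder for person B

def encode(face):
    return W_enc @ face

def decode(code, W_dec):
    return W_dec @ code

face_a = rng.normal(size=64)          # one frame of person A

# The swap: encode A's expression and pose, but decode with B's
# decoder, producing "person B" wearing A's expression.
latent = encode(face_a)
swapped = decode(latent, W_dec_b)

print(swapped.shape)
```

In a trained system, the encoder is forced to capture identity-independent information (pose, expression, lighting) because both decoders must reconstruct faces from the same code; swapping decoders at inference time is what transfers one person's performance onto another's face.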
Deepfakes first caught attention in late 2017, when Gal Gadot’s face was plastered over a porn star’s body to create fake pornography. A recent study by the cyber-security company Deeptrace found that the number of deepfakes online has doubled in the past nine months. Today, pornography still makes up 96% of all deepfakes in existence. What has become more worrying recently is the emergence of deepfake apps on the Dark Web. The resulting ease of access to such technology has sparked concerns about its potential abuse for revenge porn and cyber-bullying. DeepNude, an app that can “undress” women with deepfake technology, justifies such concerns.
Deepfakes in politics
The emergence of doctored videos in political smear campaigns is not a new phenomenon. However, there are rising concerns about deepfake technology’s role in empowering misinformation campaigns. This was illustrated by Buzzfeed’s deepfake video of Obama mocking Trump as an “utter dipshit”. To protect the electorate from misinformation, California made it illegal to create or distribute deepfakes of politicians within 60 days of an election. However, critics question the enforceability of such laws, given that political speech enjoys the highest level of protection in the US.
Making matters worse, identifying deepfakes will only become harder. Hao Li, a pioneer in deepfakes, has said that we are six months to a year away from ordinary people being able to create deepfakes that seem “perfectly real”. If deepfakes become indistinguishable from real footage, laws prohibiting their abuse would be unenforceable. In the worst-case scenario, footage may no longer be reliable enough to be admissible as evidence in court. One can only hope that we will be able to adapt to this rapidly developing technology.