Deepfakes use machine learning to fabricate events that never happened. While some are amusing and creative, there are pressing concerns about the social and political implications of this rapidly evolving technology. Deepfakes have started to appear everywhere, from viral celebrity face swaps to impersonations of political leaders.
Millions got their first taste of the technology when they saw former US president Barack Obama using an expletive to describe then-president Donald Trump, or actor Bill Hader shape-shifting on a late-night talk show.
What are Deepfakes, Exactly?
Deepfakes are a form of human image synthesis: manipulated videos that present hyper-realistic, artificial renderings of a human being. These videos are generally crafted by blending an existing video with new images, audio, and video to create the illusion of speech. The blending is performed by generative adversarial networks (GANs), a class of machine learning systems.
AI great Yann LeCun called GANs “the most interesting idea in the last ten years in machine learning.” Before the development of GANs, neural networks were adept at classifying existing content (for instance, understanding speech or recognizing faces) but not at creating new content. GANs gave neural networks the power not just to perceive, but to create.
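The adversarial idea behind GANs can be shown in miniature: a generator produces samples, a discriminator learns to tell them from real data, and each is updated against the other. The sketch below is purely illustrative (a toy generator learning to mimic a 1-D Gaussian with hand-derived gradients), not a real deepfake pipeline, which would use deep convolutional networks on images.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: x = a*z + b, tries to mimic "real" samples from N(3, 1).
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + c), outputs P(x is real).
w, c = 0.0, 0.0

lr, batch = 0.05, 64

for step in range(500):
    real = rng.normal(3.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator ascent on log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascent on log D(fake) (the "non-saturating" GAN loss):
    # move the fakes toward whatever the discriminator currently calls real.
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

samples = a * rng.normal(0.0, 1.0, 1000) + b
print("mean of generated samples:", samples.mean())
```

After training, the generated samples drift toward the real distribution's mean of 3: the generator never sees the real data directly, only the discriminator's judgment of it, which is the core of the adversarial setup.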
Deepfakes came into the public consciousness in 2017. In fact, Reddit was the community that coined the term. Many redditors popularized the technique by swapping mainstream actresses’ faces onto pornographic performers’ bodies. The practice of swapping Nicolas Cage’s face onto other movie characters’ bodies also became a popular meme.
However, the volume of deepfake videos has grown considerably as deepfake software continues to be distributed. These videos are often easier to make than a convincing Photoshop edit, because they rely largely on machine learning rather than manual design skill. The software is also usually free, making it accessible to many casual users.
While impressive, today’s deepfake technology is still not quite to parity with authentic video footage—by looking closely, it is typically possible to tell that a video is a deepfake. But the technology is improving at a breathtaking pace. Experts predict that deepfakes will be indistinguishable from real images before long.

AI Is a Double-Edged Sword
“In January 2019, deep fakes were buggy and flickery,” said Hany Farid, a UC Berkeley professor and deepfake expert. “Nine months later, I’ve never seen anything like how fast they’re going. This is the tip of the iceberg.” Today we stand at an inflection point. In the months and years ahead, deepfakes threaten to grow from an Internet oddity to a widely destructive political and social force. Society needs to act now to prepare itself.
The opportunities for spreading disinformation like this at the very highest levels of government are almost limitless for those able to wield effective deepfake technology. Perhaps even more concerning is that doctored videos could also be used by hostile states or extortion-seeking cyber-criminals to undermine voters’ confidence in candidates up for election.
Financially motivated extortion and social engineering, and influence operations aimed at destabilizing democracies, are just the start. One expert recently claimed that as AI technology becomes more advanced and ubiquitous, the power to create highly convincing deepfakes could be in every smartphone user’s hands by the middle of this decade. So what can we do about it?
Fake futures
What is new is that the process has become cheaper and widely accessible. The amount of deepfake content is growing at an alarming rate: DeepTrace Technologies counted 7,964 deepfake videos online at the start of 2019, and by the end of the year that number had nearly doubled to 14,678.
Since deepfakes emerged in late 2017, several deepfake-generating apps and programs such as DeepFaceLab, FakeApp and FaceSwap have become readily available, and the pace of innovation has only accelerated.
And companies have moved swiftly to monetise it. In 2019, Amazon announced that Alexa devices could speak with the voices of celebrities. On Instagram, deepfake videos of virtual artists backed by Silicon Valley money can bring in millions of followers and revenue without paying talent to perform. And in China, a government-backed outlet introduced a virtual news anchor that would work “tirelessly” around the clock.
Increasingly, startups are attempting to commercialise deepfakes by licensing the technology to social media and gaming firms. But the potential for misuse is high. There is concern that in the wrong hands, the technology can pose a national security threat by creating fake news and misleading, counterfeit videos.
The Brookings Institution summed up the range of political and social dangers that deepfakes pose: “distorting democratic discourse; manipulating elections; eroding trust in institutions; weakening journalism; exacerbating social divisions; undermining public safety; and inflicting hard-to-repair damage on the reputation of prominent individuals, including elected officials and candidates for office.”
Because of the technology’s widespread accessibility, such footage could be created by anyone: state-sponsored actors, political groups, lone individuals. At the moment, the most pressing concern has been the hijacking of women’s faces in ‘revenge porn’ videos. In fact, according to a DeepTrace report, pornography makes up an astounding 96 percent of deepfake videos found online.
Digital impersonations are starting to have financial repercussions too. In the US, an audio deepfake of a CEO reportedly scammed one company out of $10 million, and in the UK an energy firm was duped into a fraudulent transfer of $243,000.
Combating deepfakes
One of the solutions to combat the explosive growth of deepfakes has been to turn to AI itself. For instance, researchers have built sophisticated deepfake detection systems that assess lighting, shadows, facial movements, and other features in order to flag images that are fabricated. Another innovative defensive approach is to add a filter to an image file that makes it impossible to use that image to generate a deepfake.
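At its core, feature-based detection is a classification problem: extract cues such as blink rate, lighting consistency, or facial-landmark jitter, then train a model to separate real from fabricated footage. The sketch below is a heavily simplified illustration of that idea, with made-up synthetic features standing in for the cues a real vision pipeline would extract; the feature names and class separations are assumptions, not measurements from any actual detector.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Simulated per-clip features: [blink_rate, lighting_consistency, landmark_jitter].
# A real detector would compute these from video; here we fabricate two
# populations so the classifier has something to learn.
real = rng.normal([0.3, 0.9, 0.1], 0.05, size=(200, 3))
fake = rng.normal([0.1, 0.6, 0.4], 0.05, size=(200, 3))
X = np.vstack([real, fake])
y = np.concatenate([np.zeros(200), np.ones(200)])  # 1 = flagged as deepfake

# Logistic-regression detector trained by plain gradient descent.
w, b0 = np.zeros(3), 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b0)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b0 -= 0.5 * np.mean(p - y)

preds = sigmoid(X @ w + b0) > 0.5
accuracy = (preds == y).mean()
print(f"training accuracy: {accuracy:.2%}")
```

Production detectors replace both the hand-crafted features and the linear classifier with deep networks, but the structure is the same: learned features in, real-versus-fake decision out, which is exactly why detection and generation end up in the cat-and-mouse dynamic described below.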
Sensity, a visual threat intelligence platform that applies deep learning for monitoring and detecting deepfakes, has created a detection platform that monitors over 500 sources where the likelihood of finding malicious deepfakes is high.
Yet such technological solutions are not likely to stem the spread of deepfakes over the long term. At best they will lead to an endless cat-and-mouse dynamic, similar to what exists in cybersecurity today, in which breakthroughs on the deepfake detection side spur further innovation in deepfake generation. The open-source nature of AI research makes this all the more likely.
California enacted a law in 2019 that made it illegal to create or distribute deepfakes of politicians within 60 days of an election. But enforcing bans is easier said than done, given the anonymity of the internet. Other legal avenues could take the form of defamation and the right of publicity, but their broad applicability might limit their impact.
The Path Forward
In the short term, the most effective solution may come from major tech platforms like Facebook, Google and Twitter voluntarily taking more rigorous action to limit the spread of harmful deepfakes.
In the end, no single solution will suffice. An essential first step is simply to increase public awareness of the possibilities and dangers of deepfakes. An informed citizenry is a crucial defense against widespread misinformation. The recent rise of fake news has led to fears that we are entering a “post-truth” world. Deepfakes threaten to intensify and accelerate this trajectory.
“The man in front of the tank at Tiananmen Square moved the world,” said NYU professor Nasir Memon. “Nixon on the phone cost him his presidency. Images of horror from concentration camps finally moved us into action. If the notion of not believing what you see is under attack, that is a huge problem. One has to restore the truth in seeing again.”
References
- Deepfake Technology: Implications for the Future – https://www.trtworld.com/magazine/a-deepfake-future-is-closer-than-you-think-should-we-be-worried-44722
- What are Deepfakes, Exactly? – https://www.uscybersecurity.net/deepfake/
- The Path Forward – https://www.forbes.com/sites/robtoews/2020/05/25/deepfakes-are-going-to-wreak-havoc-on-society-we-are-not-prepared/?sh=5c8a4fbc7494