Combatting disinformation: Putting deepfakes in perspective
The proliferation of deepfake technology adds a new chapter to the long history of media deployed as stealth weaponry, with disinformation used to stoke division and even violence. Combatting the problem requires not only innovation but also a shared commitment to human rights and dignity.
In July of 2020, we released a deepfake of Richard Nixon giving an Oval Office speech informing the public that the Apollo 11 mission had ended in tragedy. Our aim was to help people understand deepfakes — the use of artificial intelligence to make fake videos or recordings that seem real — but imagine if it had been intended for disinformation instead.
The project, In Event of Moon Disaster, spearheaded by its co-directors Francesca Panetta and Halsey Burgund for the MIT Center for Advanced Virtuality (D. Fox Harrell is the director of the center; Pakinam Amer is a research fellow), creates an alternative history of the moon landing. It combines edited archival NASA footage with an artificial intelligence-generated synthetic video of a Nixon speech, along with materials that demystify deepfakes.
After our video was released, nearly a million people viewed it within weeks. It circulated on social platforms and community sites, demonstrating the potent combination of synthetic media's capacity to fool the eyes and social media's capacity to reach eyeballs. The numbers bear this out: in an online quiz, 49 percent of visitors to our site incorrectly believed Nixon's synthetically altered face was real, and 65 percent thought his voice was real.
When deepfakes came under the spotlight last year, some media outlets ran sensational headlines signaling the “end of news” and the “collapse of reality” — but how worried should we be? The manipulation of media, both creative and nefarious, is not new. Society has long produced media with the capacity to cause harm. Consider Julian Dibbell's 1993 Village Voice article “A Rape in Cyberspace,” which reported on the then-new trauma of a woman's avatar being abused in an online multiplayer game.
Lewis D. Wheeler has his face mapped for the "In Event of Moon Disaster" project | Credit: MIT Center for Advanced Virtuality, 2019

The technology was different, but the harsh impacts on human behavior, and the anxiety around the blurring of virtual and physical worlds, were similar to what we are experiencing with deepfakes now. It's not the technology alone; it's also how we share it, watch it, regulate it, and how we believe it when we want to believe it — even if we know it's fake.
The current state of the deepfake media ecology
As of June 2019, some 49,000 deepfake videos were in circulation worldwide, with over 134 million views. Some 96 percent of these videos were deemed “non-consequential,” including, for example, fake pornography. However, even in this realm there are applications such as DeepNude, which remove women's clothing in images without their consent, just as in Dibbell's example of virtual abuse. Victims of fake pornography, like Australian student Noelle Martin, whose pictures were stolen and used in explicit videos, often sustain trauma and, despite their best efforts, cannot get the offending fakes removed.
So far, political deepfakes have been mostly satirical and understood as fake. But nefarious actors can also create videos making puppets of their enemies and amplify them on social media, with damaging impacts that could even influence elections. Just last autumn, The New York Times described how Congressman Steve Scalise posted an insidious video showing Joe Biden taking a false position in a discussion — yet this was not a high-tech deepfake; it used more rudimentary techniques.
Indeed, simpler forms of manipulation are also a threat. Cheapfakes is a term Joan Donovan and Britt Paris coined to describe simple video editing techniques — speeding up, slowing down, cutting, and recontextualizing existing footage — used to deceive. For example, a video of House Speaker Nancy Pelosi appearing drunk was a cheapfake: the footage was merely slowed down. While this category of disinformation is the bigger problem now, that will change as deepfakes become easier to create, according to Sam Gregory of WITNESS.
Why the deepfake media ecology matters
Video can be a powerful and positive force, useful in holding people accountable, according to David Rand, associate professor of management science and brain and cognitive sciences at MIT. “One concern is that as deepfakes get more and more common, they will erode that power of video to hold bad people accountable for bad things, because that video can be written off as fake.” In the wake of Black Lives Matter protests, Missouri Republican congressional candidate Winnie Heartstrong tweeted that George Floyd's death was a hoax and that the images were “created using deepfake technology — digital composites of two or more real persons.”
Imagine a zero-trust society, where anything can be dismissed as a forgery and everything can be plausibly denied. The worst case is a dystopia. It all comes down to us and our willingness to confront our own confirmation biases. “We're the bug in the code,” Danielle Citron, professor of law at Boston University, told Scientific American. “We have all these studies . . . about how even if you say something is a lie, if it confirms your own beliefs, we still believe the lie.” When CNN reporter Donie O'Sullivan showed Trump supporters how some Biden videos were faked, one of them casually shot back, “You call it a fake video. What it is, is an internet meme.”
What we can do
Combatting disinformation in the media requires a shared commitment to human rights and dignity — a precondition for addressing many social ills, malevolent deepfakes included. Along with this commitment, there are several ways we can guard against disinformation, both cheap and deep: counter-technologies, regulation, and public awareness. Counter-technology is important and reassuring, but it isn't a magic pill. Digital forensics researchers like Siwei Lyu at the State University of New York at Buffalo have been developing algorithms that can spot the digital traces deepfakes leave behind. Even Lyu admits these aren't bulletproof: as each new detection algorithm emerges, deepfake creators develop more sophisticated techniques to circumvent it.
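To make this cat-and-mouse dynamic concrete, here is a minimal sketch of what frame-level video screening can look like in Python. The scoring heuristic, threshold, and function names below are illustrative placeholders, not Lyu's actual method or any production forensic detector; real systems train classifiers on far subtler statistical artifacts.

```python
# Illustrative sketch only: a naive frame-level screening pipeline.
# The artifact score is a crude stand-in (blur variance), NOT a real
# forensic feature; actual detectors learn far subtler traces.
import cv2          # pip install opencv-python
import numpy as np


def sample_frames(video_path: str, every_n: int = 30) -> list:
    """Decode a video and keep every n-th frame."""
    cap = cv2.VideoCapture(video_path)
    frames, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            frames.append(frame)
        i += 1
    cap.release()
    return frames


def artifact_score(frame) -> float:
    """Stand-in heuristic: variance of the Laplacian, a generic
    sharpness measure that face-blending steps can distort."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())


def flag_for_review(video_path: str, threshold: float = 500.0) -> bool:
    """Flag a video for human review. The threshold is arbitrary here
    and would need calibrating on labeled real and fake footage."""
    scores = [artifact_score(f) for f in sample_frames(video_path)]
    return bool(scores) and float(np.median(scores)) > threshold
```

The point of the sketch is structural: any fixed scoring rule like this one can be probed and defeated once it is known, which is why detection remains an arms race rather than a solved problem.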
A Fridays for Future protest in Germany: Polarization over what is a fact and what isn't seems to be at an all-time high | Credit: Mika Baumeister / Unsplash
Regulation can also play a role. In Texas and California, it is illegal to create a deepfake with the intent of injuring a political candidate or influencing an election. That's a start, but laws alone cannot keep pace. This is where the public's media literacy comes into the picture. “If you have highly critical media consumption habits, you'd probably be more resilient,” said Wilson Center disinformation fellow Nina Jankowicz. When we consume media, it helps to evaluate its source, cross-reference it, and look for factual errors. This is just the kind of demystification of deepfake technology that In Event of Moon Disaster aims for.
It's worth remembering that deepfakes are just one tool in a long line of media innovations, from Granville Woods' telephone to Photoshop and now synthetic media. Media technologies can help keep us connected. They can be used for activism. They can be used to make educational art.
As society fractures, media forms turn into ammunition, with disinformation used to stoke oppression, division and violence. It is up to each of us to reckon with the limitations of our particular perspectives and the failures of our society to support justice for all, which must be a unifying cause.
It's not the technology we're worried about. It's us.
This essay initially appeared online in October 2020. It is re-published here with the kind permission of the Boston Globe and the authors.