Stranding astronauts on the moon
Francesca Panetta and Halsey Burgund reveal the creative process behind In Event of Moon Disaster, an alternative history of the Apollo 11 moon landing mission. Their work stands as a case study in how AI can expand (and challenge) the production of art and culture.
On July 20, 1969, President Nixon stepped into the Oval Office to break the devastating news that the Apollo 11 mission had ended in tragedy. After becoming the first humans to land and walk on the moon, Neil Armstrong and Buzz Aldrin were now stranded. With their oxygen running low, their lives would end on the moon.
“Fate has ordained that the men who went to the moon to explore in peace will stay on the moon to rest in peace,” Nixon announced straight to camera, to a horrified audience.
The course of history would have been forever changed if this had indeed happened, but as we know, Nixon never spoke these words. His speechwriter, William Safire, wrote them as a contingency, knowing that even after a successful landing, the mission still held a number of risks. The speech remained undelivered, gathering dust in the National Archives for decades, a tenuous connection to a speculative world.
US President Richard Nixon giving his famous moon landing address - or is it? | Credit: Francesca Panetta and Halsey Burgund, MIT Center for Advanced Virtuality, 2019

But with the advent of synthetic media, the speech presented a creative possibility: a bridge to this alternative reality. It could illustrate how modern technology can be used to manipulate even well-established narratives. It could also show the significant creative potential that AI-enhanced techniques can bring to artistic creation and the generation of culture.
This film project also had a strong educational component - to illustrate the potential of this new AI technology and to pull back the curtain on how it works. It was built for a variety of audiences: as an immersive art installation where you stepped back in time into a 1960s watch party, as an interactive website, and as a film available on YouTube and Vimeo.
We knew we had a strong idea and we knew it would take AI to produce it, but we didn’t know what it would be like to work with a technology that was so radically different from the artistic tools we were used to.
Technology is something the two of us have embraced throughout our careers (from audio augmented reality to interactive features), but this felt different from learning a new software tool or working with skilled engineers. We had long ago learned to trust our human collaborators, but what does it mean to “trust” something that purportedly makes its own decisions?
Understanding the process
Though there has been much talk of a deepfake apocalypse, when we jumped in, we quickly realised that making one (at least of this quality) is not as easy as popular perception suggests. "Press button, get deepfake" may be adequate for a limited set of comedic face-swapping productions, but creating our piece required a large amount of manual labour and creative decision-making from us.
We worked with two AI companies: Canny AI for the visuals and Respeecher for the audio. It was clear from the start that we were collaborating with two fantastic AI "handlers". Canny AI guided us to find appropriate “target footage” which they could puppeteer. Respeecher patiently talked us through our audio recording sessions. We learned that training data for the AI models is critical. "Garbage in, garbage out" seems to be largely true, though knowing what the AI deems "garbage" is not always straightforward. In one instance, using different, albeit equally high-quality, microphones for the training recordings and the speech delivery recordings yielded vastly degraded results. It turns out that the AI “hears” in a different way than we do.
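That microphone lesson suggests a simple discipline: profile your recordings before a synthesis session and check that the delivery takes match the training takes. Respeecher's actual pipeline is proprietary, so the sketch below is purely illustrative - a hypothetical consistency check in Python, with placeholder filenames and librosa as our assumed audio library:

```python
import librosa
import numpy as np

def recording_profile(path):
    """Summarise properties a voice model is sensitive to:
    sample rate, overall loudness, and rough spectral shape."""
    y, sr = librosa.load(path, sr=None)  # keep the native sample rate
    rms = float(np.sqrt(np.mean(y ** 2)))  # average loudness
    centroid = float(np.mean(librosa.feature.spectral_centroid(y=y, sr=sr)))
    return {"sample_rate": sr, "rms": rms, "spectral_centroid_hz": centroid}

# Placeholder filenames: one training take and one new delivery take.
train = recording_profile("training_take.wav")
delivery = recording_profile("delivery_take.wav")

for key in train:
    if key == "sample_rate":
        ok = train[key] == delivery[key]  # sample rates should match exactly
    else:
        ok = abs(train[key] - delivery[key]) <= 0.1 * abs(train[key])
    print(f"{key}: training={train[key]:.1f} delivery={delivery[key]:.1f} "
          f"[{'OK' if ok else 'MISMATCH'}]")
```

Even a crude check like this might have flagged our microphone mismatch before a full recording session was wasted.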
Then came weeks of post-production and a long chain of decisions which couldn't be outsourced. We sat in our studio taking multiple versions of the synthetic audio and sculpting the voice of the speech, adding background sound, fine-tuning the audio frequencies, then adjusting the length of each phrase to lip-sync with the visuals.
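We did that phrase-fitting by hand in conventional audio software, but the underlying operation is easy to picture as a pitch-preserving time stretch. A minimal sketch, assuming librosa's phase-vocoder stretch and hypothetical filenames:

```python
import librosa
import soundfile as sf

def fit_phrase_to_lips(audio_path, out_path, target_seconds):
    """Stretch or compress a synthetic phrase so its duration matches
    the mouth movements in the target footage, without shifting pitch."""
    y, sr = librosa.load(audio_path, sr=None)
    current_seconds = len(y) / sr
    rate = current_seconds / target_seconds  # rate > 1 shortens, < 1 lengthens
    stretched = librosa.effects.time_stretch(y, rate=rate)
    sf.write(out_path, stretched, sr)

# e.g. a phrase that must land on 3.2 seconds of on-screen lip movement
fit_phrase_to_lips("phrase_04_synthetic.wav", "phrase_04_fitted.wav", 3.2)
```

In practice the ear tolerates only small stretch factors before the voice starts to sound processed, which is one reason the manual, phrase-by-phrase approach was unavoidable.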
Building the drama
The creative impact of the film became clear when we placed the deepfake in context. Though we knew the synthetic Nixon would be a draw, we doubted it would have much artistic or emotional impact without a dramatic build-up. After all, deepfakes on their own are often nothing more than a gimmick.
We started the film with well-known broadcaster Walter Cronkite hinting at the dangers to come, allowed the audience to gawp at the extraordinary ambition of the mission, and then, using conventional film editing techniques, “crashed” the lunar lander onto the moon. The effect was dramatic and confirmed what we were beginning to suspect - there is huge creative potential for synthetic media in the arts and narrative storytelling.
The film was a hit in every context in which we presented it. It has been viewed over 1 million times on our site and YouTube, and the immersive installation won a prize at the International Documentary Film Festival Amsterdam (IDFA). MIT, among others, has built education courses around it, and Scientific American produced a 30-minute documentary on deepfakes centred on our film.
Along the way, our messaging was clear: this isn't real - don't believe it - but think about how easy it would be to be tricked.
Understanding the impact
After watching the film, we asked our audiences to tell us what they thought was real and what was fake. The quiz results from our website were illuminating.
The data hints at the power of synthetic media to deceive or, at the very least, to sow confusion and doubt. The only question our audiences seemed sure about was whether the NASA footage was real. The other three questions split the audience in half.
Over half of the viewing audience correctly identified Nixon's face as synthetic | © Halsey Burgund

The data also hints at a question we asked ourselves throughout the project: could the impact of the project be harmful? Could we inadvertently trick our audiences? Were we adding to moon landing conspiracy theories? And is it ethical to try to manipulate people into believing something that isn't true even if you reveal your motives and methods after the fact?
Perhaps it was a matter of semantics - if we called our piece a work of "synthetic media" instead of a "deepfake", could we switch from being part of the problem to being part of the solution?
We consulted experts on misinformation, including our funders at Mozilla, and the advice was consistent - it is the intention behind the use of the technology that is critical, rather than the technology itself. The intent to deceive - regardless of the tools used - is the problem.
We went to great lengths to provide contextual materials around the film. When we premiered the piece in a physical installation at IDFA in November 2019, we plastered the surrounding walls with tongue-in-cheek period “advertisements” for deepfake technology and created a newspaper explaining how we made the deepfake and what the technology implies. Our website hosts similar resources on how we made the film and the broader implications of the technology.
The directors provide visual context for the installation premiere at IDFA to broaden the educational message | © James Burke

When we began this project, we did not know what it would be like to use AI to create an aesthetic experience. Would it be a collaboration, or would it just be another technological tool, a means to an end? Would it make “decisions” on its own, or rather inspire us to make our own creative decisions, much as working with randomization and aleatoric composition techniques can do? And critically, could it replace us?
For us, it proved to be much more of a “cyborgian” relationship - our creative efforts and inclinations were enhanced and inspired by our work with AI. The role of the artist has not yet been superseded, but working together with AI does portend an exciting, albeit somewhat fraught, future for artistic evolution.