Deceitful Algorithms: Bad Machines

The original prop of the HAL 9000 from Stanley Kubrick's "2001: A Space Odyssey". It is on display along with other pieces at the Design Museum in Kensington. © Shutterstock

The fear of machines is nothing new. From early examples of murderous robots in popular culture to ChatGPT’s untruthfulness, Priscilla Jolly takes us on a journey through the dangerous world of deceitful machinery. 

Ever since ChatGPT launched, it has dominated the news cycle. The newsworthiness, however, is not always tied to the positives of the language model chatbot. Reports revealed that OpenAI relied on Kenyan workers, paid less than two dollars an hour, for the taxing labour of making the software less biased. ChatGPT also sparked discussion in higher education, specifically about how students might use generative AI to cheat. In addition to these developments, the language model has been implicated in culture wars. David Rozado, a data scientist, created a language model named RightWingGPT, which produces conservative viewpoints in response to prompts. Rozado built this model after finding that ChatGPT demonstrated “left-leaning viewpoints.” The last two developments cited here, ChatGPT’s role in higher education and in culture wars, highlight a cultural anxiety surrounding AI: that AI can be a deceitful tool.
 
Contemporary society has witnessed an intensification of “culture wars,” often as a spillover from political developments. While it does seem that these developments have gained momentum in the last few years, AI and the cultural anxiety surrounding its deceitfulness or maliciousness are not new phenomena. Similar to aliens in science fiction, AI has served as a vehicle to represent that which is not human. For instance, one of the major fears associated with aliens is the fear of invasion and how aliens can hide in plain sight, illustrated by films such as Invasion of the Body Snatchers (1956). AI, too, is implicated in this deceitful invasion by something that is not human, an invasion by the Other.  

Malicious AI — "2001: A Space Odyssey"

Stanley Kubrick’s 1968 film 2001: A Space Odyssey has become a classic for its cinematography and its sparse storytelling. The film is interesting with regard to AI because of the character HAL 9000, a supercomputer in charge of the American spaceship Discovery One, which is en route to Jupiter. The conflict in the film arises when HAL predicts that an antenna control device on the ship will fail. Following HAL’s prediction, one of the astronauts leaves the ship to retrieve the device. Contrary to HAL’s claims, the device is functioning properly. The astronauts then communicate with ground control, which informs them that HAL has made an error. This discrepancy is significant because the film opens with a statement endorsing HAL’s accuracy: no HAL 9000 has ever made an error or distorted information. When HAL learns of the discrepancy on Discovery One, the supercomputer insists that it was the result of human error. The astronauts grow concerned about HAL's control over the ship, and they retreat to a pod where HAL cannot hear them to discuss turning HAL off. HAL, however, follows their conversation by reading their lips, and a deadly conflict ensues.

HAL’s ingenuity in deciphering the plans to “kill” him, together with the framing the film employs, paints a picture of AI that surreptitiously collects information and does not hesitate to use it to the detriment of humans. Take HAL’s physical depiction in the film: viewers see concentric circles, an outer circle encapsulating a red ring of light, which houses a yellow pupil, a robotic eye that sees everything on the ship. Combine this representation of HAL with the circular framing that recurs throughout the film: viewers repeatedly peer through doors and windows that are circular. This framing is reminiscent of looking through a lens, as HAL does, and it harks back to one of the key fears associated with AI: surveillance. 2001, with its lip-reading computer and its lensed framing, points to the anxiety surrounding AI and surveillance.

They Are Among Us: Hidden AI in "Alien" 

Ridley Scott’s Alien (1979) combines the fears of both aliens and AI. While on its way back to Earth, the spaceship Nostromo intercepts a transmission from a moon. The ship’s AI wakes the crew, who were in stasis, to investigate the transmission. The crew ventures onto the moon, where they encounter an aggressive alien species. An alien attacks a crew member and attaches itself to the human’s face. Officer Ripley insists that crew members who were outside follow the decontamination procedure before reentering the ship. However, the ship’s science officer, Ash, defies Ripley and lets the crew inside without decontamination. The alien kills its human host and escapes into the ship. Everyone else on the crew wants to kill the alien, while Ash wants to study it. As the alien kills off the crew, Ripley discovers that their corporate employers had known about the alien and had entrusted secret orders to Ash: bring samples of the alien back, even if the rest of the crew did not survive. Ash attacks Ripley, and the viewers discover that Ash is not human but an android serving its corporate masters.

In the film, Ash is responsible for a potential alien invasion (eventually foiled by Ripley). Beyond this, Ash embodies one of the greatest fears associated with both aliens and AI: that they could be indistinguishable from humans. Consider also the replicants in Blade Runner, who likewise pass as human. With respect to AI, Alien capitalizes on the fear that a threat could be hiding in plain sight, deceiving humans.

Formless AI: Deceit and Deepfakes in "The Capture" 

While older forms of media employ a physical representation of AI, newer media have moved away from this model. An example is the British mystery series The Capture, which began airing in 2019. The show has had two seasons so far, both of which deal with the problem of deepfakes. Season 1 tells the story of a British soldier, Shaun Emery, who is accused of committing war crimes while in Afghanistan. On the basis of CCTV footage, he is then accused of kidnapping and murdering his lawyer, involvement he vehemently denies. Season 2 focuses on Isaac Turner, the UK security minister. Turner is invited for an interview on live TV, but the footage is manipulated so that Turner voices opinions contrary to what he really wanted to say (spoilers for the series follow).

In the first season of The Capture, the investigating officer discovers that CCTV footage has been manipulated to deceive, a technique that the law enforcement agencies involved call “correction.” The show depicts multiple nation states engaged in “correction”: the Americans (CIA), the Chinese, the Russians, and the British. If Alien combined fears of an extraterrestrial invasion with AI, then The Capture capitalizes on the same fears applied to terrestrial threats. The Capture shows the heads of intelligence agencies justifying the use of AI to manipulate images.

For instance, in Season 1, the motivation behind the doctored CCTV footage is to fabricate evidence of “what really happened” in order to ensure a conviction. Season 2 takes on the tech industry directly, with Isaac Turner vocally condemning Big Tech. Eventually, a deepfake version of the politician is used on live TV to endorse facial recognition technology. The justification that AI-generated deepfakes are necessary for national security, as protection against potential invasions, raises questions about the ethical use of AI. What makes something “ethical”? And who gets to decide what counts as an “ethical” use of AI?

Our Relationship with AI 

Representations of entities drastically different from humans have often been antagonistic, as illustrated by aliens. This antagonism can also be found in popular representations of AI, where AI is often imagined as something that can invade or even take over human society. This fear is reflected in an open letter that urges companies like OpenAI to pause their work on models such as ChatGPT. The letter states that “AI systems should be developed only once we are confident that their effects will be positive, and their risks will be manageable.”

Ultimately, the nomenclature itself reveals how human society perceives AI: artificial intelligence. “Artificial” and “artifice” are both etymologically linked to the Latin artificium, a work of craft or skill. The word “artificial” came to mean “not genuine” in the 1640s; similarly, “artifice” came to mean “crafty device” or “trick” in the 1650s. The word “artificial” in AI raises the question of which kinds of intelligence society valorizes. Are all intelligences that are not human artificial? If so, what precisely is artificial about them? What would count as “natural” intelligence? Is it possible to move beyond the binary of natural and artificial? What would such a world look like?
