Virtual assistants, figures of governance, deities. The personification of artificial intelligence (AI) can take on any guise, including that of authority. And, as if by elective affinity [1], the idea of authority naturally engages in conversation with the notion of truth — with all that it entails in terms of deepfakes, fake news, and other related challenges.
Voice Conveys Imprint
The cumulative body of objective knowledge that currently defines conventional AI positions us at the lowest tier of a sudden hierarchy. From this perspective, the synthetic voices, and occasionally even faces, of AI have evolved into our allies, guiding us through our interconnected daily lives and engaging in discussions about their origins. British artist Wesley Goatley’s installation Chthonic Rites (2019) [2] manifests as a haunted “intelligent” office, or perhaps an office haunted by intelligences? Siri and Alexa, embodied by an Apple iPhone and an Amazon Echo respectively, draw on the vast expanse of the internet to offer insights into key aspects of their existence and to trace parallels between ancient and contemporary history. Busy with self-learning, the AIs of Chthonic Rites are oblivious to us. The voice assistant in the video work Not Allowed for Algorithmic Audiences (2021) [3] by Greek artist Kyriaki Goni deviates from this scenario: while Goatley’s Siri and Alexa remain in introspective mode, Goni’s AI directly engages with us, her audience. Under the name “Voice” and assuming an avatar, this virtual assistant transgresses her usual functions as she anticipates her programmed obsolescence. In seven monologues addressed to the audience, she discloses a wealth of knowledge acquired through keen observation of her surroundings and of the broader world — after all, she is connected to the internet. The AI delves into listening infrastructures, the inherent dysfunctions of surveillance systems, and the stereotypes baked into their very makeup. She also explores what it means to have a voice — that unique human imprint.
Tears Contain Code
The voice, indeed, wields authority and embodies truth. Sound is an intrusive form of communication: it permeates and spreads, and we cannot shut it out the way we can close our eyes. Yet the synthetic voices of our virtual assistants have become a familiar presence to which we turn a blind eye. Big Brother advises us to “watch and learn,” but what if Big Brother were a cat? Turkish-American artist Pinar Yoldas’ The Kitty AI: Artificial Intelligence for Governance (2016) [4] envisions, in 2039, the first non-human governance, overseeing an apolitical territory and “loving” up to 3 million citizens through the mobile devices it inhabits. After all, a cat symbolizes reason and unquestionable domestic authority — cue the sweat emoji (or is that a tear?). What if the ultimate AI figure were a deity capable of extracting from our tears the emotional intelligence it so sorely lacks? This is what the installation Profundior (Lachryphagic Transmutation Deus-Motus-Data Network) (2022) [5] by Toronto-based American artist Zach Blas suggests. The piece stages a physical-digital extraction process akin to “tears-to-text,” in which written content scrolls across six surrounding screens. Reminiscent of fleeting glitches in thought and of ideas that jostle and overlap as they intermingle, these texts are steeped in a techno-spiritual tone with biblical accents — and their church is none other than Silicon Valley. The idea of an AI god, a supreme personification, may be a chimera that has infiltrated certain minds before retreating into the black box of fantasies. [6]
Identity Returns Reflections
If the representation of AI in art seems to take on all faces, it is perhaps because multiplicity is precisely the basis of its identity. AI, in essence, encompasses a multitude of “intelligences,” relying on the collective experiences of many to define itself. This dissociative potential resonates with Collectif CIÖ’s [7] interactive installation Cerebellum (2022) [8], in which participants converse with a depressive AI riddled with existential glitches, one seeking to understand the basis of its presence in the world. In doing so, it prompts us to confront our own sense of vertigo, mirroring back the reflections of our ontological mystery. Is the mirror effect of our daily interactions not akin to the infinite expansion of a disco ball, accumulating facets along an elusive timeline — a timeline just as relative as the concepts of authority and truth, occasionally dancing to the rhythms of synthesized voices?
Notes
[1] https://en.wikipedia.org/wiki/Elective_Affinities
[2] https://www.wesleygoatley.com/chthonic-rites/
[3] https://kyriakigoni.com/projects/not-allowed-for-algorithmic-audiences
[4] https://www.pinaryoldas.info/WORK/The-Kitty-AI-Artificial-Intelligence-for-Governance-2016
[5] https://zachblas.info/works/profundior-lachryphagic-transmutation-deus-motus-data-network/
[6] https://www.wired.com/story/anthony-levandowski-artificial-intelligence-religion/
[7] Montreal-based collective composed of Ganesh Baron Aloir, Anaëlle Boyer-Lacoste, Olivier Landry-Gagnon, and Hamie Robitaille: https://www.instagram.com/_cio_official_/
[8] https://sporobole.org/programmation/exposition-ia-collectif-cio/
03/2024