Proponents of AI promise incredible benefits — but at what cost? We often treat AI as a threat reserved for the far-off future, but our financial, judicial, and medical systems already rely on algorithms. Dr. Rua Williams reflects on the unexpected impact of AI technologies on marginalized groups.
Ethics, empathy, beneficence, and autonomy, variables that should be of central importance, are utterly absent... All they see is a wound.
— Bill Peace, April 2019
Is That a Promise or a Threat?
Proponents of AI make many promises — AI will liberate us from menial clerical labor; AI will make us faster, smarter, more profitable; AI will write better emails and rapidly build effective marketing materials; AI will solve complex logistical problems and help us make “data-driven” decisions; AI will give us faster diagnostics, more effective medicines, and targeted treatments. The claims are endless.
To understand these promises, we must first understand the terms — both the language and the conditions. AI, of course, means artificial intelligence. Technical and theoretical debates about the appropriateness of this term aside, the AI boom we are experiencing now is dominated by a specific set of techniques for algorithmic classification and prediction known as “machine learning.” Machine learning applications are, in concise terms, collections of statistical models applied to large data sets that produce inferences about a particular condition or case. A better term for understanding what AI is and what it does is “algorithmic inferences.”
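As a concrete illustration of what an “algorithmic inference” is, here is a minimal sketch in Python (hypothetical data and a stock scikit-learn model, not anything from a real system): a statistical model is fit to a data set and then emits a probability for a new case.

```python
# A minimal sketch of an "algorithmic inference" (hypothetical data and
# model): a statistical model fit to a data set, producing a probability
# for a new, unseen case.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))  # 1,000 cases, 3 features each
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(size=1000) > 0  # synthetic labels

model = LogisticRegression().fit(X, y)  # the "applied to large data sets" step

new_case = [[0.2, -0.4, 1.1]]
print(model.predict_proba(new_case))  # an inference about the case,
                                      # not a fact about the person behind it
```

The output is a probability conditioned entirely on the data the model was fit to, which is exactly where the terms and conditions below come in.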
This definition brings us to the terms, or conditions, by which we engage with these systems. The contemporary controversies of AI center on how these systems’ outputs have produced disproportionate, discriminatory, and even fatal consequences for marginalized people. Take, for example, when your phone won’t open for you because the data the facial recognition system was trained on didn’t include enough faces like yours. Or when your phone opens for your neighbor because the classification model fails to distinguish Asian faces from one another. Or when the credit card company assigns you a higher interest rate due to your gender because the lending data it was trained on includes no women before the late ’70s or mid-’80s. Imagine then that the image processing algorithm can’t detect your skin cancer because the data it was trained on didn’t include enough photographs of cancer markers on dark skin. Or even more horrifically, you are denied an organ transplant because the risk-benefit calculation determined that your life expectancy is too short, never mind that your life expectancy is directly impacted by racism and ableism in medicine.
When we ask a statistical aggregate to infer probabilities of human action, human worth, and human life, that model will harm the most vulnerable every time. These are the people who, when considered as data points, represent outliers — unexpected, inconvenient, or otherwise neglected exceptions or eccentricities to the neat rule of parametric normality. You see, to receive the promise of these systems, you must agree to the terms:
- These systems are built on data that includes historic and contemporary legacies of systemic racism, ableism, and gendered violence. (The sketch after this list shows how such skewed data propagates into disparate error rates.)
- These systems are built to solve problems or offer answers to questions that reflect an ongoing cultural ignorance about the nature of those problems or the motivations behind those questions.
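To make the first term concrete, here is a hedged sketch in Python (invented data and hypothetical numbers, using scikit-learn): a model trained on a data set that one group dominates performs well for that group and markedly worse for the underrepresented one.

```python
# A hedged sketch of the first "term": invented data in which group A
# dominates the training set and group B is the underrepresented outlier.
# All names and numbers here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(n, shift):
    """Two-feature cases for one group; the true decision boundary varies with `shift`."""
    X = rng.normal(size=(n, 2))
    y = X[:, 0] + shift * X[:, 1] > 0
    return X, y

# 950 training cases from group A, only 50 from group B.
XA, yA = make_group(950, shift=0.2)
XB, yB = make_group(50, shift=-1.0)

model = LogisticRegression().fit(np.vstack([XA, XB]), np.hstack([yA, yB]))

# Fresh samples from each group expose the disparity in error rates.
XA_test, yA_test = make_group(1000, shift=0.2)
XB_test, yB_test = make_group(1000, shift=-1.0)
print("accuracy for group A:", model.score(XA_test, yA_test))  # high
print("accuracy for group B:", model.score(XB_test, yB_test))  # markedly lower
```

Nothing in this model is malicious, and no line of it mentions the groups at all; the disparity falls directly out of whose cases the training data contains.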
AGI: Algorithmic Ghost-Making Infrastructures
Sometimes we mistake the threat of AI for a threat belonging to the future, because so many of its promises are pitched to us as something exciting “just over the horizon.” But the truth is, algorithms are already used in our financial, judicial, and medical systems, and those algorithms are often augmented by the same statistical methods behind the classification and prediction features we associate with AI. In medicine, as dictated by risk/benefit/profit calculations, these “algorithmic inferences” often engage in self-fulfilling prophecies by leading decision-makers to deny care to people because of their classification as disabled, thus accelerating their deaths.
Carrie Ann Lucas was a lawyer, disability activist, and adoptive mother to four disabled children. She was the founder of Disabled Parents Rights, an organization that fights for the parenting rights of disabled people. In 2018, United Healthcare denied her doctor’s orders for a specific antibiotic she needed to fight an infection, forcing her to take a less effective drug that she was allergic to. Lucas passed away in February 2019 due to complications arising from United Healthcare’s decision that the medication she needed was not “medically necessary.”
Bill Peace was a bioethicist, cultural anthropologist, and faculty member at Syracuse University. He was a full-time wheelchair user. In 2010, he developed a pressure sore, and doctors recommended they stop treatment and provide end-of-life care. In his words: “Somebody I had never met determined my life wasn’t worth living.” Peace survived the sore, continued his scholarship and activism, and lived to mourn his friend Carrie Ann. He passed away in August 2019 from insufficient wound care.
Michael Hickson was a father and husband, but he was also disabled and Black. In the summer of 2020, when he was admitted to a hospital in Texas with COVID, the medical team determined that treatment would not result in a sufficient “quality of life” and denied him medical intervention. Not only did they deny him access to a ventilator, but they also denied him hydration and food — the very same methods of euthanasia deployed against disabled people in Nazi Germany under Aktion T4.
For Carrie Ann, Bill, Michael, and all other disabled people, I ask: How can machine learning protect us from the establishment’s belief that we are “better off dead”? Integrations of AI into medicine rely on histories of ableist, racist, and eugenicist protocols, procedures, and data. Yet no amount of data doctoring will change the fact that we are asking AI eugenicist questions, to which there are only eugenicist answers.
Like all human intellectual and cultural pursuits, AI can be neither apolitical nor ahistorical.
The Purpose of a System...
Proponents of AI and other optimists are often ready to acknowledge the numerous problems, threats, dangers, and downright murders enabled by these systems to date. But they also dismiss critique and assuage skepticism with the promise that these casualties are themselves outliers — exceptions, flukes — or, if not, that they are eminently fixable with the right methodological tweaks.
Common practices of technology development can produce this kind of naivete. Alberto Toscano calls this a “Culture of Abstraction.” He argues that logical abstraction, core to computer science and other scientific analysis, influences how we perceive real-world phenomena. This abstraction away from the particular and toward idealized representations produces and sustains apolitical conceits in science and technology. We are led to believe that if we can just “de-bias” the data and build in logical controls for “non-discrimination,” the techno-utopia will arrive, and the returns will come pouring in. The argument here is that these adverse consequences are unintended. The assumption is that the intention of algorithmic inference systems is always good — beneficial, benevolent, innovative, progressive.
Stafford Beer gave us an effective analytical tool for evaluating a system without getting sidetracked by arguments about intent rather than real impact. The tool is called POSIWID, which stands for “The Purpose of a System Is What It Does.” This analytical frame provides “a better starting point for understanding a system than a focus on designers’ or users’ intention or expectations.”
Damien Patrick Williams uses the analogy of djinn to illustrate our relationship to unexpected outcomes from machine learning systems. In classical legend, djinn are neither good nor evil. They are spirits that are omnivalent, akin to nature: rain can water crops or flood streets. As such, when summoned, a djinn does precisely what it is told to do, whether or not you understand what you’ve truly asked of it. Like the humans in those legends, when we are confronted with unintended consequences, we want to blame the djinn for being a trickster, and we may even call it evil.
Intent is laden with moral valence, and people panic when it is implied that they may be immoral or that their moral intent has had immoral consequences. We don’t need to concern ourselves with intent — whether the intent of a system is “just” is irrelevant, even if it is relatively simple to point out basic flaws in the overall premise. The sum of a system’s parts — its premise, its specifications, its data, its methods, its outputs, and its impact — shows us what that system does. And what a system does, once analyzed, automatically becomes its purpose if that system is allowed to continue.
A Culture of Abdication
While intent can get in the way of understanding a system’s real impact, we still need to understand why we keep building systems with such horrifying consequences. Here too, intent is not quite the correct target for analysis. We may intend a system to give us faster, more accurate insights to support more effective choices, but what motivates us to turn to machine learning to build that system?
Again, it is the promises of AI that have driven this motivation. AI has promised fast, consistent, clever, actionable answers to inordinately complex questions. But most importantly, it has promised that these results can manifest ethereally, without the labor and discernment of humans. Put simply, it has promised to take complexity, and the responsibility for that complexity, out of our hands.
...Hype allows AI proponents to claim that they have built something a) of great power, b) over which they have no responsibility or control.
— Damien Patrick Williams
Building Solomon
The desire behind these systems is to abdicate the responsibility of complexity to some other power. To beg for wisdom without experience, to barter bits for judgment without the messiness of relation. We may be accidentally summoning djinn and struggling with the consequences, but it’s possible that we’re really trying to summon King Solomon himself.
Revered and renowned for his wisdom and judiciousness, King Solomon is a complex character present in the texts of multiple religions. While he has some historicity, in mythology as in contemporary religious studies, he is largely used as a literary moral and philosophical device.
In one tale, the Queen of Sheba, who wants to learn more about the nature of King Solomon’s wisdom, tests him with many riddles. In one trial, she presents him with two identical flowers: one made of fine gems, the other an actual flower. She asks him to determine which flower is real.
First, he smells one of the flowers and finds that it is fragrant, but when he smells the other flower, it also has a scent. Then he notices a drop of dew on one, but before he can announce that the dewy flower is the real one, he finds a similar drop on the other. He takes a deep breath to quell his frustration and opens a window; a bee flies in and alights on one of the flowers. “That is the real one,” he declares triumphantly.
Like a frustrated classification algorithm, King Solomon was ready to misclassify based on high statistical probability. But with additional context, he made the right determination.
One of the most famous stories of King Solomon involves a hearing where two women come to court, each claiming to be the mother of a newborn baby. King Solomon considers their case and decides that the baby should be cut in half so that each woman can have a piece. On hearing this, one woman cries out in horror and pleads to have the baby given to the other woman to spare its life. From this, the King deduces that she is the true mother, for she would rather see her child live than win ownership of half his body. Consequently, she takes her son home.
Sometimes, I think we have built a digital Solomon, but when he tells us to cut the baby in half, we all shrug and allow it. We are not afraid to pass judgment, but we are afraid to bear the responsibility of judging.
We may be able to learn more from King Solomon if we look to his dealings with the King of the Djinn, Asmodeus.
King Solomon receives a signet ring that gives him command over djinn. He uses this power to summon the djinn and to learn about their nature, their knowledge, and their weaknesses. He assigns many to work on constructing his great temple, performing tasks no human can complete. Eventually, he learns that the King of the Djinn, Asmodeus, has knowledge he needs to complete the temple, and he sends one of his cleverest courtiers to capture him. Asmodeus supplies King Solomon with the information and the labor he needs. At the completion of the temple, King Solomon expresses skepticism of the djinn’s power. “How can you be so powerful, if I’ve been able to use you thusly?” Asmodeus tells him, “Give me your ring, and I will show you my power.” The accounts do not explain whether curiosity, hubris, or a false sense of security led Solomon to make this foolish decision. Seizing the opportunity, with the signet in his possession and his bond of obedience broken, Asmodeus flings Solomon 400 miles from Jerusalem. Solomon must then travel back on foot as a beggar while Asmodeus poses as King Solomon.
How could a wise man make such a mistake? In the many texts that refer to him, Solomon is never a clearly characterized figure. He, like other prophets, is complex — and human. He is meant to teach, and the lessons come through his failures as often as his successes. When he teaches us wisdom, he demonstrates his wisdom through his relationships with nature and humanity. When he cautions us against foolishness, he demonstrates that foolishness through an unquestioning obedience to a power he does not understand.
Learning to Ask Wise Questions
When it comes to algorithmic inferences, there are two lessons we must learn. First, we must learn to contextualize these inferences with the material conditions and consequences of the actions we might take from them. And second, we must learn to ask different questions.
King Solomon does not teach us to ask “Who is the real mother?” but “Who cares for this child’s life?”
He does not teach us to ask “Which is the real flower?” but “Which flower sustains nature?”
He does not teach us to ask “Are you really that powerful?” but “What keeps us safe?”
What would it mean to privilege the outlier?