Lessons to learn: An interview with Stephanie Hankey
What do young people need to know about artificial intelligence (AI) in order to use it responsibly and make informed decisions? Stephanie Hankey, co-founder of the Tactical Tech non-governmental organisation (NGO), talks about her experiences with young digital natives and the need for educational institutions and civil society to join forces to promote digital literacy.
Stephanie, at EUNIC’s AI Week in December 2020, you said that Tactical Tech aims “to engage young people with technology that is neither overly enthusiastic nor dystopic.” What exactly are you hoping to achieve?
We want to encourage critical thinking around technology and AI. When we are overly optimistic and enthusiastic about technology and how it could solve our problems, or the opposite, when we are overly negative and suspicious about how it could ruin the way we live, then we are no longer able to meaningfully or maturely engage with the very real challenges technology presents. We think the complicated middle ground is more fruitful and interesting. Sometimes technology is great and sometimes it is terrible; sometimes it solves problems and sometimes it makes them worse. We need to consider both extremes if we want to develop technology that is ethical, equitable and really works for society.
When Tactical Tech was founded at the beginning of this century, there was a widespread optimistic view of digital technologies as an equalising force that would foster democracy and public discourse. That has not turned out to be the case – why not?
The data-driven technologies that dominate today have been profoundly shaped by business models that focus on accumulating assets for a small number of companies. They were not built for social good. Things like the advertising business model have greatly shaped the technologies we have, such as behavioural or emotional profiling and attention algorithms. Not only have these technologies been significantly shaped by these business models; they also reflect the ideals and values of our time, such as consumerism, individualism, efficiency and scale. Furthermore, these technologies were designed for an ideal world, for an archetypical or standardised world, and not for the real world with all its complexity.
Stephanie Hankey | © Stephanie Hankey
Can you elaborate a bit on the difference?
Many of these technologies were designed for specific user scenarios, for imaginary ideal people, lives and scenarios that have little to do with the complexity, chaos and complications of the real world. Concretely, this means a company might design WhatsApp to connect people and – naively or neglectfully, it is hard to say which – not consider that people could also use it for political manipulation and hate speech. Or someone could design a platform for live video streaming intending it to be used to stream birthday parties and somehow overlook the real chance that it might also be used to live stream criminal and violent acts – which we have unfortunately seen happen.
Is that a concept you would also like to extend to AI and machine learning (ML)?
Yes, it is the same problem. The widely used design and engineering methodologies are fundamentally flawed. There are two essential problems. One, as mentioned above, is that they are designed for an ideal and not a real world. With machine learning, for example, we have seen countless algorithms being fed biased data, including sexist and misogynistic content. They then amplify that bias back out in the results. The other issue is that technologies are being designed for individual users, but what we actually need are design methodologies for ‘society as user’. Until we overcome these two problems, we will keep repeating and amplifying the same mistakes.
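To make the first of those problems concrete, here is a minimal, self-contained sketch – purely illustrative and not drawn from Tactical Tech’s work – of how a model trained on biased labels can amplify that bias. The data and the group names are entirely synthetic.

```python
from collections import Counter

# Synthetic, hypothetical "historical hiring" data: 70% of group_a
# candidates were hired but only 30% of group_b candidates -- a skew
# inherited from past decisions, not a reflection of merit.
training_data = (
    [("group_a", "hired")] * 70 + [("group_a", "rejected")] * 30 +
    [("group_b", "hired")] * 30 + [("group_b", "rejected")] * 70
)

# A naive model: for each group, predict the majority outcome
# observed in the training data.
majority_outcome = {}
for group in ("group_a", "group_b"):
    outcomes = Counter(label for g, label in training_data if g == group)
    majority_outcome[group] = outcomes.most_common(1)[0][0]

# The 70/30 skew in the data becomes a 100/0 split in predictions:
# the bias is not merely reproduced, it is amplified.
for group in sorted(majority_outcome):
    print(group, "->", majority_outcome[group])
# group_a -> hired
# group_b -> rejected
```

A real ML pipeline is far more complex, but the underlying dynamic – skewed inputs hardening into absolute outputs – is the same.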
To spark discussion and raise awareness about the impact of technology on our daily lives, Tactical Tech invented creative formats like the “Glass Room”. This exhibition space looks like an Apple store, and staff members are called “Ingeniuses” – a nod to Apple’s “geniuses”. Nothing is for sale, though. The interactive exhibition is meant to prompt reflection on the impact and challenges of tech developments and to show possible ways of approaching them. So far, the large-scale Glass Room exhibitions in Berlin, New York, London and San Francisco, together with the pop-up versions being set up by libraries, schools and other venues, have drawn a total of more than 168,000 visitors. Why did you choose this unconventional format?
The format was crucial for engaging a truly diverse range of people in a conversation about how technology is changing our lives. Employing the visual and aesthetic language that is used to sell aspirational technology, and then subverting it to talk about what is wrong with technology, was a simple and powerful device we felt was essential to get everyday people interested in and relating to the content.
What have been the most surprising experiences with the Glass Room?
Most surprising has been the range of people the exhibition resonates with and how much they connect with the content. We have learned that everyone has a reason to care about how technology impacts their lives – from secondary school students to pensioners, and from San Francisco and Seoul to Sudan.
The interactive exhibition “The Glass Room” engages people in a conversation about how technology is changing our lives. | Photo (detail): © David Mirzoeff / Tactical Tech
What have you learned about young people’s relationships with digital technology, social media and potential risks through your interactions with them?
We are still at an early stage in our work, but we have already discovered two important things. First, young people are really good at using technology, but that does not mean they understand how it works – and they have not yet been given the tools to figure that out in a meaningful way. Second, they care deeply about how technology is linked to the issues they see around them, but this is not currently part of how they learn about or understand the world. When they learn about politics, democracy, the environment or geography at school, digital technologies are not integrated into the equation for the present and the future. There is a lot of scope for work in this area. That is what Tactical Tech hopes to do in the coming years through a process of co-development with young people and by working with existing cultural institutions, communities and educators.
As digital technologies become more important, do you see a future risk that the gap between young people could deepen along income lines – in the ability to buy state-of-the-art hardware and the skills to use it?
This is a good question, and you might logically assume that would happen. Instead, we are seeing that technology is an essential gateway for people living in difficult situations, as our research on smartphones as lifelines found. Young people from less privileged backgrounds often invest heavily in their technologies because these are essential for their access to the rest of the world and often instrumental to their ability to escape poverty or injustice.
Can AI help humans to learn better – and help us better understand how we learn?
Definitely. But in order to be effective, AI needs a large amount of data on each individual student, and we still have many problems to solve first. These include bias, profiling, privacy, educational streaming, stigma, the prioritisation of certain types of learning in the educational system, and the many assumptions that go unchallenged. This is all complicated by the question of how people like teachers, parents and carers might incorporate these insights into learning, and what kind of pressure this could put on the learner. I am not sure even the best-designed AI in the world can help us here. When we design these systems, we need to look at how these technologies play out in real scenarios with real young people, not in idealised, problem-solving models. Some recent scandals with online proctoring have shown just how much stress this puts on students. We also need to understand the problems data-heavy technology brings into non-ideal learning situations. Some of the problems I mentioned are inherent to data and some to education overall. The latter need fixing too; otherwise you are just adding technology to an already broken system.
What attitude would you like to see your children develop towards AI and ML?
I have two children, and they are both old enough to learn about AI and ML. I try to help them understand how these technologies relate to the world around them and how they impact their experiences and the lives of others. Technology is not some separate issue; it is part of the way we live. And, whether we like it or not, we have to accept technology’s omnipresent role if we want to understand and deal with it effectively. It is so important that young people come to understand technology as a mirror and amplifier of our societal and political values, and that they take an approach to learning that highlights all the pitfalls and trade-offs – only then can we support them in developing a useful attitude to both AI and ML.
This interview has been shortened. It can be viewed in its entirety on the Goethe-Institut’s One Zero Society website.