Marcelo Torres Llamas
A brief atlas of gender and AI
Explore the dynamic intersection of AI, gender, and technology with this talk by Marcelo Torres Llamas from Laboratoria. Dive into the challenges and solutions for bridging the gender gap in tech, understand the impact of AI on gender issues, and discover why diversity in tech is not just important, but essential. Perfect for tech enthusiasts, gender studies scholars, and everyone in between who seeks to shape a more inclusive digital future. Tune in to gain unique insights and join the conversation on creating a tech world that benefits everyone.
Summary
Marcelo Torres Llamas, from Laboratoria, highlights the intersection of AI and gender, emphasizing the systemic impact of AI on gender issues. Laboratoria, operating in Latin America, trains women in technology, addressing the gender gap in the tech sector, where less than 20% of workers are women. Marcelo discusses AI's creation, operation, and outcomes from a gender perspective, noting biases in data and algorithms and the underrepresentation of women in tech. He cites examples like AI's reliance on rare earth minerals mined predominantly in the Global South, and the tagging of data by low-wage workers, often women. Marcelo stresses the importance of gender diversity in tech development, algorithm audits, and promoting women's active participation in technology to mitigate biases and ensure inclusive outcomes. He concludes with Kranzberg's quote, "Technology is neither good nor bad, nor is it neutral," advocating for a gender-aware approach in technology.
Transcript
Mpho Raborife
Marcelo is the Regional Director of Business Development at Laboratoria, an organization that exists to promote a more diverse and inclusive digital economy by training women in LATAM to work in web development and user experience design, and by promoting their capabilities to help them find their first jobs in the IT sector.
Now, before Marcelo begins, just a quick reminder of the ground rules. Please switch off your cameras and mics during the talks and only unmute if you have been given the stage to share your perspective and opinion. Also, please remember to be respectful at all times and avoid any discriminatory or offensive language. If a conflict arises in any situation, the production team will intervene. Now that we are aware of the ground rules, Marcelo, please, the stage is yours.
Marcelo Torres Llamas
Thank you. Thank you so much for this kind introduction. I'm very excited to have a chance to outline some ideas about how AI and gender intersect and the effects this has. I'm going to start sharing my screen. And as you mentioned, I'm going to go briefly through the content, and then we can take time to discuss whatever questions may arise.
I'm based in Mexico City; I just didn't share that in the chat. I'm based in Mexico City and I am Mexican. So, a brief atlas of gender and AI. The intention here is to look from a systemic perspective at how AI is impacting gender. Some of this you might have seen already, and some of it is gender specific, but the information tries to convey an overview of how we should assess the impact through these lenses.
Before that, I want to share a little more about my background and the organization I work for. I work regionally for Laboratoria. As you mentioned, we train women to code and then help them get a job in the tech industry. Our founders noticed we had a chance to help Latin America: the region was growing, but we still did not have enough people to work in tech companies, and less than 20% of the people working in technology today are women. In some cases, like Peru, less than 10% of people working directly in technology are women. So this is part of our impact.
We've been around for nine years. More than 30,000 women have applied, more than 3,000 have graduated, and most of them go on to get a job. Part of our promise is not only to reskill or upskill, but above all to build relationships with companies to create cultural change, so that companies are ready to understand why it's important to have more women join their teams, why diversity matters, how it affects the quality of the products they create, and how it helps them prevent certain biases and effects. So that's pretty much an overview of Laboratoria.
And I wanted to start with this map, because maps are always beautiful, even though they are models and cannot capture the whole of reality. Each piece of this map reflects different realities and impacts regarding AI. This is an Oxford map showing how ready governments are: a readiness score for whether governments are prepared to adopt, regulate, and work with AI.
As we can see, the darker the blue, the more ready a country is; the lighter the blue, the less ready it is to adopt and regulate AI. And this second map shows women's rights: how countries perform in terms of pay, marriage, and so on. When you put these maps side by side, you can see certain similarities.
We can see, at least visually, certain similarities between how far countries have gone in deploying what they need to assess AI's impact and, at the same time, their degree of advancement in women's rights. Again, these are maps; these are oversimplifications.
But I wanted to show how these are related on a global scale, because here we have the privilege of having so many people from around the world. This is a global phenomenon; that may seem obvious, but it might not be for everyone.
And I want to start with the most basic question: how is artificial intelligence created, from a physical perspective? Of course, we have data biases and algorithmic biases. But if we go really deep, we find that even the hardware has its own impacts.
From a hardware perspective, for instance, you might know that in order to run AI models you need GPUs, which are different from CPUs. GPUs are very powerful chips that require rare earth minerals, and these components do not come from every country in the world. They come from particular countries, and some of those countries are in the Global South. So whenever you need to mine or access a specific region to get not only lithium but many other rare materials or minerals, you have this battle over who controls the supply chain.
And this is becoming a source of political turmoil, because we know that when natural resources have very high economic value, they can end up altering the status quo, and some of these countries might end up, we hope not, but might end up in civil wars or other social problems. And as you might know, women end up suffering more. So from a hardware perspective, women are impacted: the countries they live in don't have AI readiness, their rights are sometimes falling behind, and some of them will work in the mines. I tried to get the data, but of course this information isn't very public. Kate Crawford, in her book Atlas of AI, also mentions that underserved populations end up working in these kinds of mines and are impacted by the way we create the tools that create AI.
On the other side, from a software perspective, we can see that information tagging is a taxing task that usually takes a toll on low-wage workers. Again, you might already know this, but for an algorithm to understand whether an image shows a chihuahua or a blueberry muffin, you need human beings in the first part of the process. And you need thousands of them.
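[Editorial note: to make that human step concrete, here is a minimal sketch, with entirely hypothetical file names and tags, of how several annotators' judgments are commonly collapsed into a single training label by majority vote.]

```python
from collections import Counter

# Hypothetical tags from three human annotators for the same images.
# Every label a model trains on starts as human judgments like these.
annotations = {
    "img_001.jpg": ["chihuahua", "chihuahua", "blueberry muffin"],
    "img_002.jpg": ["blueberry muffin", "blueberry muffin", "blueberry muffin"],
}

def majority_label(tags):
    """Collapse several human judgments into one training label."""
    label, _count = Counter(tags).most_common(1)[0]
    return label

training_labels = {img: majority_label(tags) for img, tags in annotations.items()}
print(training_labels)
# {'img_001.jpg': 'chihuahua', 'img_002.jpg': 'blueberry muffin'}
```

Every label a model ever learns from begins as judgments like these, which is why the scale, and the working conditions, of tagging matter.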
Before Laboratoria, I worked at an AI company for two years, where I was also in business development. I had the chance to interact with the people doing the tagging. They were based in the Philippines and in Nigeria. Most of the time these are women, young people, or minorities, and they end up being paid a low wage. Maybe it is competitive within their own countries, but it is very little for the companies on the other side, in the West, who pay for this work.
So from a software perspective, it's not only what we, or the Global North, are paying these workers; the task also takes a toll on mental health, because you need to tag images of violence, images of homicides. The only way for the AI to understand this is for a human being to help train it. And we've seen news stories about this: these workers are often not even hired by the companies themselves but by contractors, which brings all sorts of further impacts. And of course, this work sometimes affects women more.
We know it mostly impacts the Global South, and much of the time these workers are women. Because it's a job you can do from home, and women often have to stay home to take care of the children, it becomes a job they can perform from home. So this is another way we can see the impact.
And a third way we can see the impact in how AI is created is the ratio of female to male labor participants in general, but also in the tech industry. As I mentioned before, it depends on the country, but on average around 20% of the people working directly in code, creating the digital products, are women: roughly a four-to-one ratio of men to women.
And when you don't have women working on technology, things get missed. I have two examples here. For instance, when Apple created its health app, they didn't include menstrual-cycle tracking, even though the senior VP of engineering had said it would give you everything you need. This shows us that they didn't have women working on that team, because if they had, they would have seen that they were missing a key feature.
Also, Siri and Alexa. We know that virtual or voice assistants nowadays offer male voices, but when they were created, they only had women's voices. And this reaffirms the stereotype that women are better at serving others. This is another impact.
This is not only about how the technology is created from a hardware perspective or a model perspective; it also has to do with the political, social, and business processes that created the technology. So that's how it is created, or manufactured.
Then, how does the technology operate, and what role do models play? Models, as you know, have inputs from different data sources; then you have the model itself; and then you get the results or outputs, depending on what you are asking the model to do. And we know there are certain ways to audit these models, or to explain them.
And here I'm sharing with you a representation of unexplainable AI. We know that these models tend to operate as black boxes: we don't know what's happening inside. We know the math and the statistics behind them, but we don't know for certain how they arrive at particular conclusions. So this is how the model works. And if you think about it just from the model perspective, you can already see that if you have bias in the data, you will have bias in the outputs, and so on.
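[Editorial note: as an illustration of that bias-in, bias-out point, here is a deliberately tiny sketch with made-up groups and numbers: a "model" that does nothing but memorize historical rates, and therefore reproduces whatever skew its history contains.]

```python
from collections import defaultdict

# Hypothetical historical records: (group, outcome), heavily skewed by group.
history = ([("group_a", 1)] * 80 + [("group_a", 0)] * 20
           + [("group_b", 1)] * 20 + [("group_b", 0)] * 80)

totals = defaultdict(int)
positives = defaultdict(int)
for group, outcome in history:
    totals[group] += 1
    positives[group] += outcome

def predict(group):
    """A 'model' that simply memorizes the historical positive rate per group."""
    return int(positives[group] / totals[group] >= 0.5)

print(predict("group_a"), predict("group_b"))  # 1 0: the input skew reappears
```

Real models are far more complex, but the mechanism is the same: the output distribution inherits the skew of the input distribution.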
But it's not only a model perspective; you have to see this through a broader lens. So I'm sharing these diagrams from the World Economic Forum, which apply this to public health. As you can see, across the world you have unequal access and resource allocation, discriminatory health care processes, and various a priori, pre-existing conditions that impact women. And we know that in the health care space, certain things affect women most.
All of that flows into discriminatory data: biases, a lack of representative data sets, other types of discrimination. And then, at deployment, power imbalances and agenda-setting generate a bigger problem. So it's not enough to look at AI and say, yeah, but this is not necessarily a gender problem. It might well be, if you think about what AI is being used for.
And maybe you will think, OK, how does this impact gender? Well, there are certain illnesses that are gender related. If we don't use this technology to analyze, to learn, to see how to better solve those problems, then for decades you have an imbalance; a power imbalance. It has happened before in education, and it has happened before in health care and public systems: we lag behind in the analysis and study of gender-specific problems.
So you will have a powerful tool, but you won't necessarily deploy that tool to try to solve women's problems, because you might say, OK, we should start with gender-neutral problems, right? And then we will drill down into women's problems. But that becomes a problem, because if this happens at scale, you can end up with literally two castes, two types of citizens. OK, and I don't want to spend too much time here.
Maybe we can discuss more in the Q&A. But I do want to bring this up; I don't know if all of you can see your keyboard. I know there are other configurations, but for most of the world our keyboards have this layout, which is QWERTY. QWERTY was originally created to slow down the pace at which people could type, because typewriters were mechanical and the type bars would get tangled up. It was literally designed to make it harder for people to type, and it worked very well.
But when we transitioned to new technologies, like the digital keyboards we have now, that problem disappeared. Yet we still have the QWERTY layout, because it's harder for everyone to relearn a new way of using their keyboard. So if we embed the problems I just mentioned into AI and assume we can fix them later, that can become a problem, because of course everyone will say, yeah, no problem.
We can have AI help us start working on gender problems, or clean up the data, and so on. But we might not want to do it. Or we might say other things are our priority. Or we might say, oh, new problems came up, and we need to fix those before fixing the older problems that we let persist over time. And this has happened before. Maybe it's a very simple example, but I think the point comes across, OK?
So we can see how important it is that the way models are created starts from a gender lens, or a gender perspective. And last but not least, what outcomes does it generate? We've seen the impact from a hardware perspective, a software perspective, and a model-creation perspective. There are certain outcomes, and we can already see them in social media: the present-day effects of the social media algorithm, which is a form of AI, even though it is not generative AI. We can discuss that too, because nowadays all the hype is around generative AI.
But AI is not only the capability of creating text or beautiful images; it is also the way information is displayed, curated, and presented, OK? Some effects we now know from social media are negative body image, social comparison, cyberbullying and trolling, and exposure to harmful content. Everyone in the world suffers from this; all teenagers suffer from this. But we already know that women suffer more, OK? So again, you could say, yeah, but some men also have negative body images, OK?
Yes, but the thing is, if you look at the data, and there are many peer-reviewed journal articles, this is impacting women more. So we cannot adopt gender-neutral optics. We need to understand that, time and again, some of these effects are gendered. And for the future, there are other things we need to be aware of.
For instance, as I mentioned before, virtual assistants and chatbots are reinforcing gender stereotypes. Not only that, we're seeing how women, including digital representations of women, are being sexualized as the technology evolves. Now everyone can have their own fantasy of how women should be, and this has other implications.
Then there are virtual assistants, online recommendations, and personalization. What happens if women only get recommendations to purchase a new bag or a new dress, while men get recommendations to join an executive program, or to train, or to learn new skills? This has happened, and it might happen again in the future. If we are not aware, online recommendations, personalization, and again, the way information is processed and presented, will have a gendered impact.
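[Editorial note: one hedged sketch of how that can happen, using invented click logs. A naive popularity recommender confines each group to whatever that group historically clicked, and each new recommendation then generates more of the same clicks, hardening the initial skew.]

```python
from collections import defaultdict

# Hypothetical click logs: (user_group, item_category).
clicks = ([("women", "fashion")] * 90 + [("women", "education")] * 10
          + [("men", "education")] * 70 + [("men", "fashion")] * 30)

# Count clicks per category within each group.
by_group = defaultdict(lambda: defaultdict(int))
for group, category in clicks:
    by_group[group][category] += 1

def recommend(group):
    """Naive popularity recommender: suggest the group's most-clicked category."""
    cats = by_group[group]
    return max(cats, key=cats.get)

print(recommend("women"), recommend("men"))  # fashion education
# Each recommendation drives more clicks in the same category,
# so the initial skew feeds back on itself over time.
```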
Hiring and personnel selection. We already know this; thankfully, a lot of people have blown the whistle. We know that hiring and personnel-selection models carry gender bias, because most of the people in senior positions are male, so the technology learns that males should be leading. And this ends up reinforcing wage disparities. Based on a study from the World Economic Forum, if we keep doing things the way we are, it might take up to 99 years, or even 135 years, to change that.
So we need to address this; not to be alarmist, but we need to be very aware of the effects. Some of them are obvious, some are not. And if we're going to have exponential growth in technology adoption and in the way these algorithms are deployed, we need to ask: if this has a minimal impact on gender today, when the technology grows exponentially, will that impact stay linear, or will it grow exponentially too?
That is the lens through which we assess our own work: how should we have this conversation with companies, and why should we aspire to help them understand how having more women will help? Because if not, we're going to have two worlds. This image is from Mexico City, my city, and you can see it very clearly there. In Latin America we see this stark contrast between the two all the time. And just to close, some solutions we can aspire to develop in the short term. Collecting and analyzing more inclusive data: I'm going to share this deck with you, of course, but it is important to have data, and processes, that guarantee you have more inclusive data.
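[Editorial note: one small, hypothetical example of such a process is a representation check run before training, flagging any group whose share of the data falls below a chosen threshold.]

```python
# Hypothetical dataset rows; in practice these would come from a data pipeline.
rows = [{"gender": "female"}] * 150 + [{"gender": "male"}] * 850

def representation_report(rows, field, threshold=0.30):
    """Report each group's share of the data; flag groups below the threshold."""
    counts = {}
    for row in rows:
        counts[row[field]] = counts.get(row[field], 0) + 1
    total = len(rows)
    for group, n in sorted(counts.items()):
        share = n / total
        flag = "  <-- underrepresented" if share < threshold else ""
        print(f"{group}: {share:.0%}{flag}")

representation_report(rows, "gender")
# female: 15%  <-- underrepresented
# male: 85%
```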
Gender diversity in technology development: it is important that we have women in all parts of the machinery. Women coding, but also designing and researching, because that brings in different views and protects the end result.
Algorithm audits and gender impact assessments: this mostly has to do with regulation, because no one does this on their own; it costs money. Sometimes you need to rerun a model or retrain it with different data sets, and every time you press play to train a model, that costs dollars. So we may need regulation that makes companies audit their own efforts. And that's related to transparency and accountability.
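[Editorial note: as one concrete form such an audit can take, here is a sketch of the "four-fifths rule" used in employment-discrimination analysis: each group's selection rate should be at least 80% of the most-selected group's rate. All numbers below are invented.]

```python
def four_fifths_audit(selections):
    """selections: {group: (selected, applicants)}; check for adverse impact."""
    rates = {g: s / a for g, (s, a) in selections.items()}
    benchmark = max(rates.values())  # rate of the most-selected group
    for group, rate in rates.items():
        ratio = rate / benchmark
        status = "OK" if ratio >= 0.8 else "ADVERSE IMPACT"
        print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} ({status})")

# Hypothetical screening outcomes from a hiring model.
four_fifths_audit({"men": (60, 100), "women": (30, 100)})
# men: selection rate 60%, impact ratio 1.00 (OK)
# women: selection rate 30%, impact ratio 0.50 (ADVERSE IMPACT)
```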
I don't want to single anyone out, but in OpenAI's last presentation they did mention that they wouldn't share the data sets they were using, and wouldn't share certain information that used to be shared as part of how AI was built. Nowadays AI is a huge potential market, and we're starting to see companies keep things as proprietary IP: not sharing information about model creation, even some of the math and statistics being used.
And that's a problem, because without transparency it's very hard to see what you can do. Then there is what we do at Laboratoria: promoting the active participation of women in technology design and development. We know the impacts are different everywhere. We work only in LATAM, and even within that one part of the world, the impact in Peru is different from the impact in Colombia, and different again from the impact in Brazil. There are different intersectionalities, and all of that needs to be taken into consideration.
But we can help protect women from the material effects of technology through regulation. We can prevent biases and black-box thinking. And last but not least, the promotion of women's talent; that, of course, is something we love. Because, as Melvin Kranzberg said a couple of decades ago, 'Technology is neither good nor bad; nor is it neutral.' So we need to have this conversation.
I think we're on time; maybe I ran a little over. But I want to thank you so much for your time, and hopefully this was interesting and useful for you.
Mpho Raborife
Thank you so much, Marcelo. Thank you for such an insightful talk. It speaks not only to Latin America in terms of gender and AI; globally, this is one problem in AI that affects everyone.
Ideas for further Use
- The talk can be used in universities and colleges, especially in courses related to gender studies, technology, computer science, and AI. It can provide students with real-world insights into the intersection of these fields and foster discussions on gender diversity in tech.
- The talk can be offered on online educational platforms or as part of webinar series focusing on AI, gender equality, and tech industry trends.
- Companies, particularly those in the tech industry, can use the talk as part of their diversity and inclusion training programs. It can help employees understand the importance of gender diversity in technology development and the impact of AI on societal issues.
Licence
Marcelo Torres Llamas: A brief atlas of gender and AI by Marcelo Torres Llamas (Laboratoria) for: Goethe Institut | AI2Amplify is licensed under Attribution-ShareAlike 4.0 International.
Documentation of the AI to amplify project: @eBildungslabor
The presentation is all rights reserved.
Image by Alan Warburton / © BBC / Better Images of AI / Virtual Human / Licensed under CC-BY 4.0