Pelonomi Moiloa

Protecting machines from us – lessons from the majority world

Why do AI tools need more specificity than generalization? Is current AI technology sustainable? Should our goal be human-like machines or machines that are useful to humans? These are some of the questions Pelonomi Moiloa reflects on in her talk.
 


Summary and transcript

Summary

Pelonomi Moiloa's talk delves into the critical theme of safeguarding machines from human influence. Opening with an introduction, she establishes herself and the intriguing title of her talk, "Protecting Machines from Us." Reflecting on the global shifts post-COVID, Moiloa notes the disparity between anticipated positive changes and the actual outcomes, prompting a deeper exploration of the evolving world.

Providing a concise history of AI, Moiloa traces its origins from Alan Turing's 1936 paper to the coining of the term "artificial intelligence" in 1956. She highlights the cyclic pattern of AI winters and subsequent booms, particularly the surge from 1993 onwards. Transitioning to current concerns in the AI field, Moiloa addresses brain drain, where talent from the global majority is attracted to well-funded labs, leaving local contexts neglected. Structural hindrances to growth, exemplified by financial challenges faced by AI initiatives in Africa, are discussed. Moiloa also sheds light on the predicament of startups forced to incorporate elsewhere due to a lack of local investment.

Moiloa then explores the impact of AI on society, raising concerns about unintentional and intentional maliciousness, including biases and harmful applications. She emphasizes that responsibility lies with humans rather than AI itself, as AI merely reflects human values and decisions. Encouraging a learning mindset from the majority world, Moiloa advocates for specificity over generalization in AI and the incorporation of indigenous ways of knowing, emphasizing collectivism and responsibility to ecosystems.

Highlighting the benefits of application-based technology from the majority world, Moiloa underscores human-centered, resource-efficient, and impact-driven approaches. This contrasts sharply with mainstream AI models, exemplified by ChatGPT, which she criticizes for its unsustainable resource consumption. In her closing remarks, Moiloa envisions a positive future where technology serves as a tool for collective problem-solving rather than a threat. Her talk calls for a shift in perspective, urging responsible and sustainable AI development that considers the collective good and specific contexts, steering away from replicating human biases in technology.

Transcript

OK, cool. Thank you so much. I think we're in the right place. Yeah. So as has been explained, my name is Pelonomi. I'm going to start my timer so that I don't talk for too long. But yes, my talk is around protecting machines from us. Yes, it is a completely clickbaity type title, but hopefully it catches your attention for long enough to hear what I have to say. And what I have to say is really a bit of a commentary on the very obvious change that is taking place in the world. I think after COVID that was a very big wake-up call in terms of the types of change that can happen and how we expect those changes to then result in other positive changes, but how that doesn't necessarily happen. And we're finding at the moment that those changes are resulting in something different. Yeah. So I'm just going to take you through a brief history of where this change has come from and just take you back to the slide where I've been thinking a lot about change in the context of Chinua Achebe's book Things Fall Apart, around "the centre cannot hold", and how it does feel like things in the world are kind of falling apart and things are going a little bit crazy, and to understand a little bit better where this change seems to be coming from. So, not sure everyone is aware, but AI as a field, as a thing, hasn't been around for very long. The first kind of thoughts and ideas of it were in 1936, when Alan Turing released the paper On Computable Numbers, which was kind of the first imagination of machines that we program to make numbers, which could then be programmed to make unimaginable numbers and things like that. And from that point, in 1956 at the Dartmouth Conference, far away, that's not here, AI or artificial intelligence was first coined as a term and something that was then injected into, yeah, the vocabulary of how we describe machines that we don't necessarily program explicitly. From that point, there was a bit of an AI winter. 
So this means that funding and interest in the field of AI took a dip and almost went down to zero in terms of funding and development and research in the field. And that kind of AI winter happened again in 1987 until 1993. What is my math? That is thirty years ago. Yeah, not very long ago. And what happened in these AI winters is not only that there was no faith about what AI as a technology could deliver, but there was also this extra hype around what it could possibly do. And then people realized it fell short and gave up on it. From 1993, however, there's been this huge boom in terms of technologies that enable AI and machine learning to be applied to different kinds of products and services across the field. Across the board we have robots who are able to sweep our homes and vacuum our homes using AI algorithms. I think in 1997 we had an AI bot beat the world chess champion, and this was then replicated in 2016 with DeepMind's AlphaGo beating the Go champion of the world, which I still have hesitations about in terms of how they dealt with that from a cultural perspective. But anyway, and now recently in 2023, we have ChatGPT, the elephant in the room. And if you haven't been hiding under a rock or have just been protecting your mental health from the craziness that's happening in the world, ChatGPT is a natural language processing tool that is able to simulate human language in particular contexts. It's just kind of learned from the Internet what is out there and has learned to relate ideas and concepts and words together in a way that can simulate a human. I will say, though, that it's predominantly the languages that exist on the Internet. So for the languages that come from places of oral history and oral tradition, it's very much not representative. So for example, I'm from South Africa, and it very much does not do well on our languages, as well as other languages on the continent and other languages in the majority world. 
But what ChatGPT has done is kind of triggered this state of chaos where everybody is kind of going a little bit crazy about AGI, so artificial general intelligence, where AI kind of replicates the capability of a human, but of course it's a computer, so it's able to do it at scale. And people are scared about the implications of what that means. And if you're in the AI sphere, or maybe not even if you're in the AI sphere, because this has just been such a thing that has taken over the media lately, there's been this question around AI sentience: whether AI can then take on the capabilities of understanding, will, and executing its will on the masses, on the world and all that exists within it. And firstly, I feel like this idea of thinking about AGI and sentience comes from quite a privileged position. So I don't have much patience to think about it, and I think there's other people thinking about it, and that's great. But coming from a global majority type context, the first response I want to say is: firstly, climate change will get us first. And as the hellfires sweep across the continent, maybe it's not the climate change and its direct effects that will wipe out everybody on earth; maybe it'll be the social instability that comes through climate refugees. But my second response is, I'm not really afraid of AI taking over the world, because I have other concerns at the moment that I'm a lot more scared of. This is Sophia, I think her name is. She was given citizenship, I think it's Saudi Arabia, in 2017. And this was a funny demo that went wrong. We've seen a number of demos that have gone wrong. But yeah, the things that I'm worried about with regards to AI is not impending doom by AI Armageddon where the robots come to get us. 
It's very much more related to a more nuanced reality that is happening right now, not some time 50 years in the future or beyond the point when we've reached a climate tipping point. These are things that are happening and things that are really worrying for me, especially as a practitioner in the AI field and as a leader in my company who's trying to make a difference in the world. And where the fear comes from is, yeah, threats that are not so much external, so this idea of this machine taking over the world, but ones that are more internal to the human condition. The first of these concerns is with regards to brain drain. What we're noticing a lot in the AI field is we have this attraction of the West, of fancy labs with lots of resources and lots of money, and they're pulling talented, great African brains into those environments, which is devastating, because firstly it feels in a way that it's a robbing of youth. So I'm speaking from the African perspective because that's where I come from, specifically the South African perspective, but I know that there are similar things happening in the rest of the majority world: South America, Southeast Asia, and yeah, other parts of Asia as well. So yeah, we're having this talent being drawn out of our context, which is devastating because it kind of takes away that future generation of ours that is meant to be able to help us get to a place that helps to combat our colonial past and colonial histories, in cases of places that have been colonized. But it's also devastating because they then can't work on the types of problems that are imperative to doing that, but also that are the problems that are facing people at home, people that they care about in the communities that they come from. Another one is the structural hindrances to growth that we're experiencing, and a lot of it is from a geopolitical perspective. So I'll provide you some examples. 
In Africa we have the biggest machine learning summer school in the world, and they bank with Barclays, and Barclays is trying to get rid of them as a client because they host these conferences on the African continent, which means they're often making payments to the African continent, and this is considered untrustworthy. And so this institution often just has to struggle with making payments when they're organizing these conferences in order to do that. Another example is Masakhane, which is a huge grassroots movement on the African continent to drive research in the machine learning space with regards to language. So here we're trying to increase access to research opportunities for people to create the types of tools that allow people on the continent to interact with services in their languages, but it also acts as a means of language and cultural and heritage preservation of these languages. But then they can take up to a year to receive funding, because they have to go through a whole checklist of requirements in order for them to receive that funding. They recently have just got approved to receive funding that they were awarded over a year ago from a very well known, Imperial-type institution that normally does really well with this kind of thing. One of the requirements is that they hire an external contractor in order to monitor their spending, to ensure that their spending is on the right thing. And this Masakhane has been around for years, and they've just been pushing out papers and doing really, really well in terms of research on the continent. But they need a supervisor, almost as if they're children. Another one of these is most of the startups on the African continent are having to incorporate elsewhere, in other parts of the world, which means that our IP and a lot of our income is being routed through the rest of the world. 
And a large part of this is because venture capitalist firms, which are quite new on the continent in terms of being a resource for development for start-ups, won't invest here, so we have to go elsewhere in order to secure that kind of funding. And then we've seen a number of these things as well with the BRICS agreement and what that means politically and how it hinders certain exchanges of power and money to enable technology on the continent. And these kinds of things are quite frustrating and quite difficult to get through. Another problem that we have is when people do leave the continent. There's this wonderful Toni Morrison quote that talks about racism being the greatest distraction, because instead of working on the things that you need to work on, the problems you want to solve in your local context, you're having to invest that energy into proving your right to exist. So what we find is a lot of people from the global majority who are super skilled technically and could be creating solutions that help to solve problems on the ground move overseas, and then they spend a lot of their energy fighting the institutions there in order to be recognized, in order to be seen, which again takes away from the problem-solving capability that we have here. Which also means that they're spending energy on fighting to be recognized and fighting to be seen, which is really, really unfortunate. And then the last concern, which is one that I think is quite public and quite well known: I term it unintentional maliciousness. So this is when you have bias that creeps into models because the models and the technology are just not made in a way that serves who they're meant to serve, and therefore mistakes are made that can be quite dire in terms of how they perpetuate discrimination, perpetuate power imbalances, etcetera, etcetera. 
But on the other side we also have intentional maliciousness, where systems are intentionally created in order to undermine the rights of the people who are subjected to that technology. So this could be in the form of surveillance, extraction from a data perspective in order to then exploit that audience that you've extracted the data from, and even crazy ones like having mobile apps that are able to remove the clothing off women. And so, what does that look like in a more subtle way, rather than these robots taking over the world? If you haven't watched the film Influence, it's a really eye-opening experience into how data analytics has been used around the world to influence political systems, creating political instability, economic instability in countries in the majority world, in order to, yeah, use these methods to do awful stuff. That means that the majority world just can't progress in the way that it wants to, and that the powers that be continue to be the powers that be. And some other examples of how that then affects the minorities that are in those greater contexts, smaller contexts: so we have these tools. We've got the example of proctoring software, which helps to detect cheating in online exams. This was particularly relevant over the COVID period. Of course, it didn't work very well on faces of color, so they often got accused of cheating when they weren't. In terms of racial bias and racial histories, as a result of redlining, you have mortgages that are higher and, I think, higher predictions of default for people of color in those areas. And then, of course, from a female perspective, having ChatGPT and many other technical tools that use a historical preference for male labor to make decisions and predictions about the current day, in ways that are not aligned with the future that we're hoping for. 
And when we point our fingers at the AI, we're forgetting to realize that an AI is just an abstraction of what it is that we actually fear. AI at this stage is not able to make decisions for itself. It has no will. It has no intent. It's not deciding who's important and who is not. It's the people behind it who are. And then what AI is really doing is kind of being this copier of the worst parts of the human imagination. And what is happening is we're embedding the worst parts of ourselves, the worst parts of humanity, in these AI machines to replicate at scale. And when we point our fingers at the AI, we're kind of abstracting the responsibility onto the AI, when really it belongs to the structures and the people who allow it to exist and persist in the environment that they're in. And so we look at trying to deal with this. I think there are many learnings that we can have from the majority world, and I particularly like to look at it from this perspective because I find that comparing where we are to where they are is not a very useful dance, because it can be quite reactionary rather than imagining something cool in the future that we're able to work towards. And so here's this quote, "The world has no end", and I can't even see the quote, "What's good among one people is an abomination with others." And what's important for us, being from the majority world, in order to better what we are able to do and offer suggestions for the rest of the world, is deciding what doesn't work for us, deciding what does work for us, and going with that and leaving the rest behind. And here, constraint is the compass. Just some observations in terms of the major differences between tech made in the global North versus the global South, from the majority world: there is this understanding that specificity is better than generalization. 
So generalization is the main driver behind machine learning methods: that you can group people together, observe them as this unit, and then decide that because they belong to that unit, you're able to exploit certain characteristics of that unit and pretend that they belong to each individual in that unit. So for example, ChatGPT being able to speak English, you think you can then deploy that system within the African context, and we recognize that it fails because it just doesn't understand the specificity of the local context. This happens a lot. The other thing that happens is with regards to application-based technology versus, or in addition to, fundamental science. One of my favorite experiences is going to conferences on the African continent, tech conferences, because all of the technology and all of the papers and posters, I mean not all of them, but a large majority, are dedicated to solving real problems rather than understanding the fundamental mathematics. That, yeah, they just have an idea of how the technology is going to interact in the world and with the people in the world, rather than just seeing the technology as mathematics, which is quite meaningful. So yeah, specificity. What does this mean? It means leveraging alternative humanhoods. So at the moment, the AI that we utilize today has a firm history within eugenics. So statistics, modern-day statistics, was developed in order to compare the differences between people's features in order to divide them. That's what it was used for. And a lot of the papers today also reference eugenics ideas. It's really creepy. There's this very wonderful rant by Emily Bender, and I can't remember who else, where they're talking about the ChatGPT paper, which is over 128 pages, and the number of eugenics references in that paper is overwhelming, very disappointing, and quite scary. Another factor in terms of the humanhood we are injecting into these machines is its ties to war. 
So that Dartmouth Conference where AI was termed was funded by DARPA, the American defense research agency. The ENIAC machine that Alan Turing developed as the first AI machine, which was able to break the Enigma machine anyway, was a natural language processing tool that was able to decipher messages during the war. And a lot of the funding within the AI field has been as a result of war, this culture of domination. And then of course this idea around the contemporary individual, which has helped to drive ideas around capitalism, capital gains, and the ideas of monopolies of these huge tech companies that just come in and squash the competition and overtake a market in order to exploit it. And what we could really benefit from is other ideas of humanhood. And what we find is that a lot of indigenous ways of knowing and being have humanhoods that are based on this idea of the collective. And not just the collective in the human sense, but how our collectives then interact with the rest of the world and the ecosystems that we interact with. And so, instead of having this idea where we're assigning greater power to some over others within this world ecosystem, we interact with different parts of the world as if they are equal contributors and equal beneficiaries, whether that be nature or the climate or people from different backgrounds. Considering this collective whole, and us being responsible and accountable to ourselves, yes, to each other of course, but also to this greater ecosystem that helps to keep us alive, is something that is missing from the machines that we make. And through inspiration from the indigenous knowledge systems of the global majority, I think tech could very much benefit from this view. And you know what, it'd be great if this is the kind of thing that we get AI to copy from us, in terms of how we interact with each other and the world. 
So just quickly, an example of where this has been really cool in terms of AI practice. This is what a normal paper heading kind of looks like. I haven't blurred out the names, I probably should have, but this paper's available on the web, and this is what papers normally look like in the scientific community with regards to AI and machine learning. This is a Masakhane paper. It has over 50 authors from over 24 countries on the African continent, all contributing communally to an idea, to develop this idea. It was a fight to get this paper in, but they did. They continue to publish papers like this, where the first page is taken up by authors, illustrating this collaborative effort, and other communities within the scientific community have followed suit in terms of collaboration in this way. This is kind of a cool quote: they were asked how they are so active without funding, because again, it's an organization of over 1,000 people contributing to this knowledge. And one of the members said, "What a strange question. Money is not the world's only motivator. The African concept of Ubuntu is a source of funding and the currency we use to trade and improve our community." Which is baffling, especially to funders. They can't understand when you don't want to be a monopoly and that you actually want to help the ecosystem thrive, want to help other companies do well. I'm quite over time, but I'm just quickly going to go through this concept of the application, specifically focusing on non-anthropomorphized tech. What we find is that the tech that we've been fed in the movies is very much envisioned as this replication of ourselves, this obsession with ourselves. And from the global majority, we have more of an idea of what tech is on an application level, in terms of it not being a replication of what humans can do, but instead utilizing the technology in order to leverage what it can do that we can't. 
And what this means is that it very rarely replicates the human image. Often it is more of a practical, problem-solving type application, because it is application based. And yeah, these are the three main factors that arise when we're dealing with application-based technology: especially that it's human centered, which often means that it's developed in far more responsible ways, because it is designed to be able to interact with the communities in which it's meant to cause impact. We also know that it is resource efficient. A lot of these models need to run on a mobile device. They can't be run in a huge data center somewhere in the cloud. And in terms of the data that needs to be used, we also can't use data from the Internet. We have to be quite selective and quite smart about how we build this technology on a smaller scale. And then it's also impact driven. Sometimes I am about to roll my eyes when we talk about technology and the SDGs, Sustainable Development Goals. But I think it's really important to understand that your technology is going to be used to impact something in a positive way, and that the impact that you're driving for is not capital driven. It's not about money, it's about making people's lives better. In comparison, this is ChatGPT. It created over 522 tons of CO2 for a single training run. This is the same as 860 people flying from Johannesburg to Sweden. It's basically 180 degrees across the whole world. In terms of power for a single training run, again, this is 12,000 megawatt hours. A single household in South Africa consumes 1,000 kilowatt hours per month. So this is the equivalent of 12,000 households in South Africa having energy for a month, which is a very sore spot, because we are currently experiencing power cuts on a major scale. And in dollars, this is more than the GDP, I think, of over a third of the world's countries. And this kind of tech is not sustainable. It cannot continue. 
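The household comparison in the passage above is simple unit arithmetic. A minimal sketch to make it checkable, using only the figures quoted in the talk (12,000 MWh per training run, 1,000 kWh per household per month, 522 tons of CO2, 860 flights); none of these numbers are independently verified here:

```python
# Back-of-the-envelope check of the talk's energy and CO2 comparisons.
# All input figures are the speaker's quoted values, not independent estimates.

TRAINING_ENERGY_MWH = 12_000        # quoted energy for a single training run
HOUSEHOLD_KWH_PER_MONTH = 1_000     # quoted average South African household usage

# 1 MWh = 1,000 kWh, so convert before dividing.
training_energy_kwh = TRAINING_ENERGY_MWH * 1_000
household_months = training_energy_kwh / HOUSEHOLD_KWH_PER_MONTH
print(f"Equivalent to {household_months:,.0f} South African households for a month")
# -> Equivalent to 12,000 South African households for a month

TRAINING_CO2_TONNES = 522           # quoted CO2 for a single training run
FLIGHTS = 860                       # quoted Johannesburg-to-Sweden passenger flights
print(f"About {TRAINING_CO2_TONNES / FLIGHTS:.2f} tonnes of CO2 per passenger flight")
# -> About 0.61 tonnes of CO2 per passenger flight
```

Put another way, 12,000 household-months is roughly 1,000 households powered for a whole year, which is the scale the talk is objecting to.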
And we need to think differently about doing that. Yeah. So I think there is a lot of doom and gloom around what technology can do and what it can create. But there is this possibility of this utopian robot-human hybrid future, where technology is working with us as a tool to help us solve those problems, rather than the threat of it deciding that we're bad and shouldn't be around anymore, and being used as a tool for neocolonialism, power, yeah, power things and bad things. It can do something great. And it should.


Best-of

Here you will find short excerpts from the talk. They can be used as micro-content, for example as an introduction to discussions or as individual food for thought.


Ideas for further use

  • The talk provides valuable insights into the challenges of brain drain and the unequal distribution of talent in the field of AI. This content could be utilized in educational contexts to emphasize the importance of diverse perspectives and the need to foster talent within local communities. It encourages a reevaluation of how tech education can be more inclusive and supportive of global majority contributions.
  • Moiloa’s comparison of mainstream AI models with approaches from the majority world offers a unique global perspective. This content could be integrated into technology and innovation courses to broaden students’ understanding of technology beyond Western-centric models. It encourages a more comprehensive view of technological development, acknowledging diverse cultural contexts and indigenous ways of knowing.
  • The talk highlights the unsustainable resource consumption of mainstream AI models, providing a basis for discussions on the environmental impact of technology. This content is relevant in courses focusing on sustainable technology and green computing, prompting considerations of how to develop AI in ways that are both socially responsible and environmentally sustainable.
  • Moiloa’s vision of a positive future where technology serves as a tool for collective problem-solving is inspiring. This concept could be integrated into leadership and innovation courses, encouraging a mindset shift towards collaborative and impact-driven technology development. It promotes the idea that technology should be a force for positive change, addressing real-world problems and benefiting communities globally.


Licence

Pelonomi Moiloa: Protecting machines from us – lessons from the majority world by Pelonomi Moiloa for: Goethe Institut | AI2Amplify is licensed under Attribution-Share Alike 4.0 International

Documentation of the AI to amplify project@eBildungslabor

Image by Alan Warburton / © BBC / Better Images of AI / Nature / Licenced by CC-BY 4.0