Nathan Ross-Adams

Building inclusive AI for sustainable development in Africa

In this ‘AI to amplify’ talk, Nathan Ross-Adams guides the audience into the world of inclusive AI. Discover the vital role AI plays in sustainable development across Africa, and explore the challenges and solutions in building AI systems that truly represent our diverse world. Drawing on his rich experience and engaging insights, Ross-Adams walks us through the complexities of ethical AI, data diversity, and practical steps towards achieving truly inclusive technology.
 
 
Summary

Ross-Adams urges active engagement with the topic, not passive acceptance. He highlights his diverse experiences in AI, stressing inclusivity's importance due to personal and family experiences with exclusion. The talk outlines the AI lifecycle, emphasizing its role in achieving Sustainable Development Goals and overcoming challenges like bias, limited data, and workforce diversity. Ethical considerations, language diversity, and cultural nuances are crucial for effective AI communication. Ross-Adams calls for actions including data inclusivity, collaborative partnerships, and policy implementation, concluding that practical actions are essential for meaningful progress in inclusive AI.

Transcript

Mpho Raborife
At Michalsons, he has developed and deployed trustworthy AI and ICT. He specializes in commercial transactions, data protection, information security, access to information and cybercrime law. His clients represent a range of data, technology and advertising companies and African governments, as well as software, hardware, robotics, biotech, pharmaceutical, telecoms and NGO clients. Now, thank you so much, Mr. Nathan Ross-Adams, for joining us today; you can begin now.

Nathan Ross-Adams
Perfect, thank you so much, Mpho. Hi everyone. I'm really looking forward to this session and to sharing a bit more with you about the exciting topic of inclusive AI. I would like to tie back to what Mpho mentioned early on: although I'm presenting information to you, the idea is not for you to just accept it. It's for you to engage with it, and to challenge me on any points when we do have the conversation.

Because the only way we grow as participants in these conversations is by actively engaging with the material. So let me just start off by sharing my screen, and you should be able to see it. The topic for today is building inclusive AI for sustainable development in Africa. And although the focus here is exclusively on the continent of Africa, the lessons and principles that I'll be talking about are applicable to any continent, really, where diversity in languages and cultural differences is at play.

Mpho gave a great introduction about me, but I want to speak more about why this topic matters to me. It's because, in my time working within the commercial space, I've worked in commercial, legal and technical roles relating to AI technologies for about the past ten years. And what I realized is that inclusivity is not just an abstract principle that lies over there; the need is actually to have an active perspective on what inclusivity means.

And I know that because I have a family history of people in my family being excluded for various reasons, on the grounds of race, sexual orientation and gender. And so I carry those perspectives with me in speaking about AI, and the expectation in becoming active participants in global conversations about it is to carry those perspectives, share them, and then actively look at the spaces you're in and actually implement them there.

So there is a big agenda for today, but the gist of it is to spark your thoughts and conversations about this topic. The idea is to understand the role of AI in achieving the Sustainable Development Goals; to identify the challenges of actually having inclusive AI, especially when it comes to bias and discrimination; to learn about the importance of accounting for different languages, dialects and cultural nuances, and what some of the ethical issues are; and then, practically, what you can actually do to take this conversation further.

So we always like starting the conversation off with what AI is, because there are so many definitions out there. For the purposes of this talk, I will follow the definition of AI that I find consistent, and that is: any computer system that can learn from data and make decisions based on that data, loosely speaking.

And then those decisions are geared towards specific outcomes. So there's an element of data, an element of technology, and an element of a specific output. And each of those outputs takes place in organizations across the AI lifecycle. Typically an organization would have a problem that they want to solve. Say it's a bank: they want to decide whether to grant finance to customers.

It may be in a government context, where they want to use AI to grant social grants to citizens. They then plan and design the AI, deal with the data, build and interpret the specific models, check them pre-deployment, then actually deploy them into the population, and then operate and monitor them.

So that's a typical AI lifecycle from beginning to end, and the cycle repeats itself. But now, looking specifically at the role of AI in achieving the Sustainable Development Goals: various institutes that focus on achieving the Sustainable Development Goals have put forward the following points, specifically that AI can help with data analysis within continents, helping governments and businesses make better decisions.
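The lifecycle just described could be sketched, purely for illustration, as a repeating sequence of stages. The stage names below are my own shorthand for the steps mentioned in the talk, not a formal standard:

```python
from enum import Enum


class AIStage(Enum):
    """Typical AI lifecycle stages, as described in the talk.

    The names are illustrative assumptions, not an official taxonomy.
    """
    PLAN_AND_DESIGN = 1
    COLLECT_AND_PREPARE_DATA = 2
    BUILD_AND_INTERPRET_MODELS = 3
    VERIFY_PRE_DEPLOYMENT = 4
    DEPLOY = 5
    OPERATE_AND_MONITOR = 6


def next_stage(stage: AIStage) -> AIStage:
    """Advance to the next stage; monitoring feeds back into planning,
    which is how the cycle repeats itself."""
    return AIStage(stage.value % len(AIStage) + 1)
```

For example, `next_stage(AIStage.OPERATE_AND_MONITOR)` wraps back around to `PLAN_AND_DESIGN`, mirroring the point that the lifecycle is a loop, not a one-off pipeline.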

Then also, they say it can help with agriculture and food security, especially when looking at the data relating to farming operations and how it could be shared and improved. A typical example there would be that there may be people in one part of South Africa who understand how to do farming within a specific area, or how to farm the land in a specific way.

And the lessons available from that data may be relevant to people in another province, and if they know that information, they could act on it. So we promote knowledge sharing in that context. There's also, I think, how AI could help with healthcare, renewable resources, and education and skills development. It's being introduced in some form or another in lessons in classrooms in South Africa and the rest of Africa, with coding lessons specifically having been introduced, and then also managing resources and promoting small business.

But while this is exciting and AI can help with these specific roles, when it comes to inclusivity I find it pretty challenging. And I speak about AI in the way that most people speak about it, but I caution against speaking about AI as this independent entity that exists, because when AI is trained on data, effectively the data that it's reflecting is typically from very specific datasets that primarily originate in the West.

And so, because of that, it becomes challenging to actually apply it within different contexts, because the AI typically reflects the biases of the people that created it and the data on which it was trained. And so the typical challenges involved in inclusive AI are bias in the data, as I mentioned; within Africa particularly, and in the global South, there's also limited data availability.

And often that is linked to the fact that, especially in smaller communities, where there is lots of diversity when it comes to language, how people operate and the community-related cultures within those contexts, they may not think data gathering is important. And so we have the question: we want to build inclusive AI, but we also want to respect the values and privacy of communities who don't want their data to be collected. There's a tension between those. And then there's also a lack of diversity in the workforce. What I mean here is that the people who work on training AI models typically do not represent diverse teams. So the people who are actively working on these technologies do not have an inclusivity mindset when they actually build these models.

And that is a big challenge in building inclusive AI, because if you don't have the perspective or the understanding that the population you'll ultimately service at the end of the day is diverse in various ways, the outcomes that the AI models create won't represent those populations. And I know it's not always easy to do, but there are very clear statistical methods that can address bias; they can adjust models and correct bias in a way that promotes the diversity of any specific communities that the AI deals with.
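The talk doesn't name a specific method, but one common, simple example of such a statistical correction is reweighing: giving samples from under-represented groups proportionally larger weights during model training so that each group contributes equally. A minimal sketch, with illustrative group labels of my own choosing:

```python
from collections import Counter


def reweigh(groups):
    """Per-sample weights so every group contributes equally in training.

    A simple bias-mitigation step: samples from under-represented groups
    get larger weights. `groups` is one label per training sample.
    """
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    # Each group's target share is n/k samples; weight = target / actual.
    return [n / (k * counts[g]) for g in groups]


# Hypothetical dataset skewed towards urban samples: the single rural
# sample receives a larger weight than each urban one.
weights = reweigh(["urban", "urban", "urban", "rural"])
```

This is only one of several approaches (others adjust the model's decision threshold or its outputs rather than the training data), but it illustrates how concretely measurable and correctable dataset skew can be.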

Then there are also quite a few ethical considerations, and that's really about the values of the specific community. They differ depending on which side of the world you are in, which country, or even which region. For example, in South Africa, what's been shown is that the values people hold within big cities, for example Cape Town or Johannesburg, are very different from the ethics and values of people within small-town communities in the same country.

And so it's very challenging to actually match the ethics in a way that promotes inclusivity. There are also infrastructure constraints, access to the Internet and data, for example, and then also having the right people, the actual representatives of the communities involved, participate in conversations about technology. That's particularly difficult in emerging economies because, on the one hand, you are trying to build an inclusive AI model, but from a government standpoint you also do not want to put so much red tape in place that businesses can't actually start to grow.

And as you may know, in the global South the informal economy makes up a large portion of governments' fiscus because of its contribution to local communities. So if we create standards and frameworks for inclusive AI, one of the big challenges or backlashes we typically face within our country is that it will prevent people from making a living and doing business.

And it is particularly difficult for organizations, or for people, to see the value in inclusive AI if they're trying to make enough money to provide for their families, for example. And so there's a big need for more investigation on that topic. But now, let's move away from the organizational and government perspective to why we actually need inclusive AI.

And it's really because of our language diversity. I mean, we all know there's a difference between the main language spoken in a particular country and the same language as spoken within your household and region. There are also different dialects, and there are cultural nuances between the different tribes and communities within a specific country and continent. So it's about getting the right people represented,

and then also having local languages accessible to local people. I mean, there was an issue in New Zealand recently where a company wanted to train an AI based on the languages of an Indigenous community, and the community objected, saying that they want to preserve their languages in their own way and do not want to be part of the model training.

Now, while I agree that that community's rights definitely need to be respected, the organization made good arguments about the need to preserve the national history of the country. And so you also have those tensions between what's good for the population as a whole versus what's good for specific sectors. And do you actually need all that data, and models trained on every single language available?

So there's a tension between those points as well. Then there's also the conversation about effective communication. How can we create conversational AI technologies that we can actually access via our mobile devices, or Internet-access devices like our laptops, if they do not effectively represent the languages of an entire population? And we know that in a South African context in particular this is pretty challenging, because we now have 12 official languages, 11 of them spoken languages and one of them sign language.

And so how do the devices and companies that provide these services actually offer communication in all of those forms? From a legal perspective, we already have a challenge in South Africa in that regard, so much so that it was declared that the official language for legal communication is English, and people need to request it in other languages if they want to.

I mentioned the challenge of avoiding bias and discrimination. It's not an easy topic to deal with, not only from an intentional perspective, where people are intentionally biased and discriminatory, but also from an inadvertent or unexpected perspective. And one point I find very interesting here, in a language context, is gendered language.

As you may know, some languages are gendered, others are gender-neutral, and others have masculine and feminine words geared towards a specific audience. And so if these biases and nuances are actually built into a language, how can these models be expected to behave inclusively and deal with a gender-diverse audience, when the language itself has pronouns and specific words that gender people, and gender items as well, inanimate objects?

So it's pretty challenging. Then, typically, from a statistical perspective, when the models are trained, they're usually trained for a specific audience. So, for example, if a bank decides to use AI to grant loans to people for housing finance, it would focus its datasets on the population relevant to that sector.

So, for example, if the data shows that the people who tend to purchase homes and then apply for home loans are aged between 30 and 50 years old and tend to be predominantly women from a certain province, the datasets that the banks train the algorithms on will be mostly focused on that audience.

The typical audience. That would be correct from their perspective, but a diverse team would actually point out the fact that not all of those people who are listed as women may identify as women. And you also have the consideration, with this concept of concept drift, that the population changes over time.
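Concept drift of this kind can at least be detected with simple statistics. One widely used check (a sketch of my own, not something named in the talk) is the population stability index, which compares the distribution a model was trained on against the distribution it currently sees:

```python
import math


def population_stability_index(expected, actual):
    """Compare two binned distributions over the same categories.

    `expected` holds the training-time shares per bin, `actual` the
    current shares; both should each sum to 1. A common rule of thumb
    is that a PSI above roughly 0.25 signals a population shift that
    warrants reviewing or retraining the model.
    """
    psi = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, 1e-6), max(a, 1e-6)  # guard against log(0)
        psi += (a - e) * math.log(a / e)
    return psi


# Hypothetical age-band shares of loan applicants: at training time
# versus today. The shift towards the third band pushes the PSI up.
drift = population_stability_index([0.2, 0.5, 0.3], [0.1, 0.4, 0.5])
```

The point is not this particular statistic but that "the population changes over time" is a measurable condition, so monitoring for it can be built into the operate-and-monitor stage of the lifecycle rather than left to chance.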

So the question is how to deal with that. And preserving cultural heritage also becomes an interesting debate; I mentioned that early on, about the differences between communities as well. So with that context in mind, the question is: what actions can you take? In the AI lifecycle, there is a big section which deals with managing and sourcing data.

So we need to start having conversations about how data can be inclusive. Then we also need to talk about how organizations, whether in the private or public sector, need collaborative partnerships with academia and international organizations to actually facilitate inclusivity. But then also looking at the design of AI models to check whether inclusivity is considered across the entire lifecycle.

And so you have the development of terms like ‘inclusivity by design’ and ‘ethics by design’, which seem to be becoming more popular. And then there are also various standards that you can rely on to address bias and discrimination within the models. There's a need for cultural sensitivity training, especially training that's contextualized to the population you're working with, and for capacity building, training your teams.

Governments need to set ethical guidelines for inclusive AI. There's also a need to promote accessibility, especially within the global South, where this is a key issue. And once these measures have been implemented, they need to evolve as society does. The last point is to have policy: there's no point having conversations about these topics if government doesn't follow through and implement policy and regulation that actually mandates inclusive AI.

And lastly, it's just to stay aware of what's actually happening within the space. There are several newsletters and subscriptions out there that deal with how to build inclusivity into AI models, and so an important part of that conversation is actually staying updated with that information. And my last slide basically says that this is the beginning.

I know it should say it's the end, but this is just the beginning, because from my perspective it's not good enough just to start these conversations and talk about them. They need to translate into practical actions, otherwise they become meaningless going forward. And I know it's challenging. I work in a commercial space, so implementing inclusivity, or encouraging organizations to promote inclusivity, usually needs to be tied to some sort of commercial objective, which can be soul-destroying at times.

But there is a willingness in organizations to build and facilitate trust with their stakeholders, so it's a vital part of the conversation. In a nutshell, that's the talk that I have for you today, and I think it's open to questions now.


Best-of

Here you will find short excerpts from the talk. They can be used as micro-content, for example as an introduction to discussions or as individual food for thought.
 
 


Ideas for further use

  • The talk can be used in university courses related to computer science, AI, ethics, and development studies. It provides valuable insights into the practical aspects of building inclusive AI systems.
  • For policymakers and government officials, this talk can inform decisions related to technology, data privacy, and sustainable development.
  • Non-governmental organizations and international development agencies can use the talk to understand the role of AI in sustainable development, particularly in African contexts, and how to incorporate inclusive practices in their projects.
  • Companies involved in AI and technology can use the talk to train their staff on the importance of inclusivity in AI development, helping them understand how to incorporate diverse datasets and consider ethical implications in their work.


Licence

Nathan Ross-Adams: Building inclusive AI for sustainable development in Africa by Nathan Ross-Adams (South Africa) for: Goethe Institut | AI2Amplify is licensed under Attribution-ShareAlike 4.0 International

Documentation of the AI to amplify project @eBildungslabor

 Quantified Human Image by Alan Warburton / © BBC / Better Images of AI / Quantified Human / Licenced by CC-BY 4.0