Alex Tsado

Steps to building ethical and practical AI tools

To what extent can AI help solve challenges in Africa? Why is it important to get involved in the AI debate and help shape AI? What can a responsible business strategy for an AI company look like? These are some of the questions Alex Tsado reflects on in his talk.
 


Summary and transcript

Summary

In this talk, Alex Tsado discusses AI policies, the GDPR, and the challenges programmers face in understanding and implementing them. He introduces himself as the founder of Alliance for AI, a non-profit organization that has so far advised five African governments on their AI strategy and policy. Tsado emphasizes the potential of AI to address challenges in Africa, such as resource limitations, lack of infrastructure, and limited internet connectivity.

He highlights the need for responsible AI practices to avoid biases in decision-making processes, citing examples of biased hiring, health treatment, and arrest decisions. Tsado encourages individuals to consider a career in AI and emphasizes the importance of building responsible AI companies. He shares insights into how his company, Ahura AI, uses AI for education, focusing on improving attention spans, providing personalized learning experiences, and addressing concerns about privacy and data misuse.

While discussing the challenges, Tsado presents the five main concerns he hears raised against AI and how his company addresses them:

  1. Data Selling: Some fear that companies may sell user data for profit. Tsado addresses this by clarifying that his company does not sell data and offers users the option to decide whether to sell their data, ensuring they receive the majority of the proceeds.
  2. Data Security: Concerns about hackers stealing video data are addressed by explaining that Ahura AI analyzes videos on users' devices locally through a Chrome extension, eliminating the need to save or send data to the cloud, thus minimizing the risk of theft.
  3. Cultural Understanding: Tsado acknowledges concerns that AI may not understand users from different cultural backgrounds. He counters this by emphasizing the importance of domain experts, diverse teams, and extensive testing to ensure the tool's effectiveness across various populations.
  4. Misuse of Data: To address fears of data misuse, Tsado outlines how his company carefully controls data sharing with employers and teachers, ensuring that sensitive information is not misused to discriminate or harm users.
  5. Automation of Harmful Use Cases: Tsado discusses concerns about AI being used for global takeover, automated warfare, or discriminatory practices. He emphasizes the importance of individual choices in using AI responsibly, advocating for positive contributions to the world.

Towards the end, Tsado touches upon ethical considerations in AI, urging the audience to make conscious choices in how they use AI tools. He concludes by inviting questions and providing contact information for Alliance for AI and Ahura AI.

Overall, the key points include the potential of AI in addressing African challenges, the importance of responsible AI practices, and practical examples from Tsado's experience in the field, along with addressing the five main concerns against AI.

Transcript

I'd like to show how we interpret, you know, AI frameworks. One of the criticisms in several WhatsApp groups I'm part of is that people share how the AI policies, the GDPR and all of those documents are difficult for programmers to understand, and they would love to see examples of how real companies are trying to follow these policies. So I figured it would be pretty valuable for you to see an example from our side.

For a quick introduction: I was born in Nigeria, I'm very, very Pan-African, and I've travelled a lot around the continent, hoping to drive innovation. But I currently live in the United States, in California. As I mentioned, I founded a company called Alliance for AI. It's actually a non-profit organization, and it has so far managed to advise five African governments on their AI strategy and policy. I was able to do that because I have an extensive background as an engineer, businessman and investor in companies. The most important past experience of mine was working at NVIDIA. Some of you on the call will be familiar with NVIDIA, maybe because you play video games; high-end video games are powered by NVIDIA graphics cards. For those who aren't familiar with NVIDIA, the NVIDIA graphics cards, or GPUs, you can think of as very fast computer chips. You need these chips to be very, very fast if you want to train your AI models really quickly and run them with low latency. So being at NVIDIA, I really got to see how AI started and where the largest governments in the world were looking to take it. And as an African, I was able to bring that knowledge and advise governments on the African continent, including South Africa, Rwanda and several others. We've been able to speak at the UN as well.

AI in Africa is pretty new, but we've been doing a lot of work for the last, I think, six years with a number of other ecosystem builders, and we've been able to get to a place that I think is buzzing. It's really growing; it's small, but it is growing, as you can see from these exciting pictures. The one on the left here is in South Africa, where they hosted what is called the Deep Learning Indaba. It's the largest gathering of AI researchers in Africa, and they had over 1,000 people. In fact, there was a recent global conference that was hosted in Africa for the first time last week; maybe some of you were lucky enough to attend. In the second image here you see a hackathon, the largest in-person hackathon that has been run on the continent. This was in Tunisia, which was also over 1,000 people. Very, very exciting stuff.

And some of the challenges, the problems that can be solved with AI, are what get me really excited. This is what I start to list on this page. You can see it says we lack the resources to copy the applications of other nations and continents, as you're well aware. Resources like roads: there are good roads, but not nearly enough of them. There are great teachers and great doctors, but definitely nowhere close to as many as we need on the continent. And the Internet penetration and connectivity have a long way to go. So with all these challenges, people look at it and think we're too far behind. In fact, many organizations put up numbers like $100 billion that would need to be spent every year to catch up with this resource limitation. And so a key question became: can AI help with this? I mean, the answer is yes.
Artificial intelligence pretty much is a tool that can empower people to do a lot more with fewer resources. For example, take a calculator, which is like a very early version of AI, if you want to think about it that way. If you are solving mathematics in your head, you can probably, I don't know, do 10 problems in 10 minutes if you're really good. But if you have a calculator, you can probably 10x that number and do about 100 questions in the same 10 minutes. It's a very similar thing with AI. If you build the right kind of AI tool, for example, and put it in the hands of a doctor, that doctor can go from seeing 5 patients in 30 minutes to maybe seeing 20 or 30 patients in the same time. So the changes can be incredible.

But as I mentioned, it's still very early days, and this is not what is happening in reality. In reality, the presence of African talent in AI is still very low. Some numbers say 0.9%; you'll see different numbers depending on where you look, but it's still very low. So we have a lot of work to do, at least to share the stories of the very, very exciting African companies that are in AI. The most exciting for me is InstaDeep, which was acquired a few months ago for $685 million. They were critical to helping figure out and come up with the COVID-19 vaccine. So I'm very proud of that company, and we need to share more stories like that.

But because these numbers are so low, you start to see things like the bias the last speaker was talking about. Since she has already talked about a lot of that, I will just share these few examples. You can see hiring decisions, health treatment decisions, arrest decisions. These decisions start to go very wrong when you train your AI models in a way that doesn't consider the context. For example, you just take the data of how people are put in jail or in prison without realizing that that data is already leaning towards putting more black people in jail. If you put it directly into the AI and don't figure out ways to balance out the data, you're just going to automate what used to happen before. AI is an automation tool; that's another way to think about it. So you're just going to automate the past, and most of us don't like the past. Why would you automate the past like that? That's a big problem, and as I mentioned, the previous speaker spoke a lot about this, so I won't spend too much time on it.
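One common way to "balance out the data" before training, sketched below in TypeScript under assumptions (the record shape and field names are illustrative, not from the talk), is to reweight samples by group frequency so that over-represented groups do not dominate what the model learns.

```typescript
// Minimal sketch: inverse-frequency sample weights per group, so a group that
// is over-represented in historical data does not dominate training.
interface Sample {
  group: string;   // e.g. a demographic attribute being audited for bias
  label: number;   // the historical outcome the model would otherwise copy
}

function computeSampleWeights(data: Sample[]): number[] {
  // Count how often each group appears in the training data.
  const counts = new Map<string, number>();
  for (const s of data) {
    counts.set(s.group, (counts.get(s.group) ?? 0) + 1);
  }
  const nGroups = counts.size;
  // Weight each sample by N / (nGroups * count(group)); every group then
  // contributes the same total weight to the loss.
  return data.map(s => data.length / (nGroups * (counts.get(s.group) as number)));
}

// A group appearing twice as often gets half the per-sample weight.
const weights = computeSampleWeights([
  { group: "A", label: 1 }, { group: "A", label: 0 },
  { group: "B", label: 1 },
]);
console.log(weights); // [0.75, 0.75, 1.5]
```

Reweighting is only one of several mitigation techniques; the broader point of the talk holds either way: whoever builds the pipeline has to decide, deliberately, not to reproduce the historical data as-is.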
The other points are maybe call-outs to you, wherever you are from in the world. If you don't want bias in AI, you have to join AI, so strongly consider a career in artificial intelligence. That's my first one. And the second one is, as you join the AI industry, learn to build a responsible company. It's very, very critical and important, because you're going to be automating processes, and if you're successful, it will affect millions of lives. I really hope you're not responsible for ruining millions of lives.

So now let me show you how we do this in our company. First of all, the company: it's called Ahura AI. It's a global AI company for education and learning. Here you can see pictures on the side. You switch on the Ahura AI webcam when you want to study or do your homework. You see the green dots on your face, and it starts to analyze things that you're doing, like whether you're closing your eyes, looking left or right, falling asleep or looking at your phone. It analyzes all of that and gives you a lot of information.

For example, you can see here: let's get focused. It gives you all that information. Some of the key benefits that users talk about are these. First, they're able to retrain their attention span. I think they say the average attention span today is less than 11 minutes, and it continues to shrink. So our AI tells you about the things that distract you, so you can retrain and improve your attention span. It will inform you of how you learn fastest, and it will advise you when you need to get help. There are a lot of students who spend hours and hours confused while looking at the content. The AI will understand when you are that confused and tell you that you should probably talk to your teacher, and the teacher might also get a ping that there are about 10 students who are very confused and probably need more help.

Now, switching on your webcam: when we started this three years ago, people got very worried. It was like, oh my God, what are they going to do with our video, our data? This is really a concerning thing. Why do they have the webcam on? And so we started to gather the top concerns that people talk about and to address each and every one of them, until we got to the place where people are now very comfortable with this tool.

The first thing people were saying is: you will sell our data and keep all the money for yourselves. First and foremost, we do not sell data. That's not our business model, and it's in the contract that we're not going to sell data, especially not just on our own. The second thing, after some time, was that some people did want their data to be sold, and so it became: we'll build a process where you can decide for your data to be sold, but then you will receive the majority of the money that's made from your data. That's very different from other companies, where they silently sell your data without telling you, they keep 100% of what they make, and you just use the tool for free.

The second concern is that your video will be stolen by bad actors. Imagine a hacker going into our database in the cloud and stealing the data. Well, to combat that, we don't even save your data. We have come up with a way, using a Chrome extension, to analyze the videos on your laptop. That way we don't need to save the data, and we don't need to send it to the cloud. So there's no data to steal, there's no video to steal, and no videos are going to land on Facebook where people see how you study, or how you eat or sleep while you're studying. That footage is not getting saved. That's very important. And for us it also works very well; it's less costly if you don't have to send the video all the way to the cloud. It was pretty difficult to figure out how to do this, but we worked on it because we find these things very important.
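As a rough sketch of what "analyze the video on the laptop, send nothing to the cloud" can look like in a browser, here is a minimal TypeScript example under assumptions; it is not Ahura AI's actual implementation, and the on-device model call estimateAttentionLocally is a hypothetical placeholder. The key property is that frames stay in local memory and only a small derived signal could ever be shared.

```typescript
// Sketch: capture webcam frames and analyze them entirely in the browser,
// so raw video never needs to be stored or uploaded.

// Hypothetical stand-in for an on-device model shipped with the extension.
async function estimateAttentionLocally(frame: ImageData): Promise<{ focused: boolean }> {
  return { focused: frame.width > 0 }; // placeholder logic only
}

async function runLocalAnalysis(video: HTMLVideoElement): Promise<void> {
  // Ask for camera access; the stream stays inside the page or extension.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  video.srcObject = stream;
  await video.play();

  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext("2d")!;

  setInterval(async () => {
    // Copy the current frame into local memory; it never leaves the machine.
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);

    // Run inference locally, then discard the frame.
    const { focused } = await estimateAttentionLocally(frame);

    // At most a tiny derived signal (never the video) would be reported.
    console.log(focused ? "focused" : "distracted");
  }, 2000);
}
```

Because nothing is persisted server-side, there is no stored video for an attacker to steal, which is the property the talk describes.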
The third concern (I'll talk about five in total) is that our AI will not understand you. When we talk to customers from different parts of the world, from Latin America, from Asia, they're not convinced that an American company has built an AI that will understand people from their small island. There are three big things we do there. The first one is domain experts. If you're building any AI tool or product, I plead with you: as a technology person, an AI person, you must have someone from the space you're building for. If you're building a tool for healthcare, you should have a doctor on the team. If you're building a tool for agriculture, you should have a farmer on the team, or someone who has that knowledge. On our side, because we're building an education platform, we're a company of AI people and teachers; we have teachers in our group who are domain experts. The second one is diversity of the team, both the executive team and the engineering team. On our team we have people from all across the world: from the US, from Latin America, from Africa, from Asia. I guess we're missing folks from Australia, so folks in Australia, apologies, and please reach out to me, I'm recruiting. The third one is testing. Because you have executives from all these parts of the world, you have these people all testing your tool and informing you about ways it might not work for their populations, because they care about this. It's very, very important. When you have those people at the table, they're able to influence how you build and deploy the product. And so we have extensive testing of our product.

The fourth concern is that the data will be misused to harm the user. We offer the tool to teaching institutions and also to companies; there are companies that want to give education benefits, learning benefits, to their employees, so the employees can keep getting better, be more productive and produce more. In that use case, the person who is learning gets all the data, all the analysis of how they are learning. But we share much less of that data with your boss, because if we were telling your boss all the links that you click and all the URLs you go to, some people could get fired because of that; it could be misused to treat employees differently. So we are very careful with the data that we share with your boss. With teachers, we share a bit more data, because by definition a teacher is supposed to understand what you're doing so that they can provide you with guidance. But on the teacher's side, we provide more aggregated data. For example, if a teacher sends a video to 20 people, we'll let the teacher know that 40% of those 20 people shut off the video after 2 minutes and walked away, so maybe there's something to change after 2 minutes. The teacher gets that kind of information, and in that way we're also protecting against the misuse of data. Another way to say this would be: don't over-share data. Only share as much data as you need, so you reduce the likelihood of harm.
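A minimal sketch of that aggregation idea in TypeScript, with a hypothetical event shape rather than Ahura AI's real data model: the teacher receives only a drop-off percentage, never per-student viewing logs.

```typescript
// Sketch: report an aggregated drop-off rate to the teacher instead of
// sharing per-student viewing logs. The event shape is hypothetical.
interface ViewEvent {
  studentId: string;      // kept internal, never included in the report
  secondsWatched: number;
}

function dropOffReport(events: ViewEvent[], cutoffSeconds: number): string {
  if (events.length === 0) return "no data";
  const dropped = events.filter(e => e.secondsWatched < cutoffSeconds).length;
  const pct = Math.round((dropped / events.length) * 100);
  // Only the aggregate leaves this function.
  return `${pct}% of ${events.length} students stopped before ${cutoffSeconds}s`;
}

// Example matching the talk: 8 of 20 students stop before the 2-minute mark.
const events: ViewEvent[] = Array.from({ length: 20 }, (_, i) => ({
  studentId: `s${i}`,
  secondsWatched: i < 8 ? 90 : 300,
}));
console.log(dropOffReport(events, 120)); // "40% of 20 students stopped before 120s"
```

A real system would likely also suppress the report when the group is too small for a percentage to be meaningfully anonymous.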
The last concern is that AI will automate and multiply harmful use cases. It would be cool if people could write in the chat about the biggest use cases they are worried about. The ones that we have seen, if I can click next, there we go: global takeover, people are scared of RoboCop; automated warfare; or discrimination against black people. All three of these are things that I think are happening. There are people who are building these things with artificial intelligence, and the way we see it, it is a choice. People choose what to do with this tool, and I really hope that all of you listening on this call choose to do something that will improve the world.

We chose to help prepare people for jobs, so that they can keep up with the really fast-changing pace of things in the workplace. I hope you choose a good use case as well, which is what you can see on my final slide, where I say that we have a number of ethical choices to make about artificial intelligence. Do we automate? Is our intent just to automate all the manual jobs, so that now you don't have any truck drivers anymore, or are we thinking about what tool we can build that will create jobs for thousands? We can reorient the way we think about that. The second one here is: do we build AI that can diagnose diseases, so your doctor can go from seeing three patients to seeing thirty, or do we want to build AI that can control people on social media? What are you going to choose to build? That's what's important.

That's the end of my talk, and it would be cool to get questions. These are ways you can reach out to the non-profit Alliance for AI; you can find it on any social media platform, or you can check our web page, which lists about 100 African AI companies. The last piece is the email for how to reach Alliance for AI. And for the education AI tool, if I go back a bit here, you can see that URL: Ahura dot AI is how you can get more information about that.


Best-of

Here you will find short excerpts from the talk. They can be used as micro-content, for example as an introduction to discussions or as individual food for thought.
 
 


Ideas for further use

  • The talk can be utilized to facilitate discussions on the practical steps individuals can take to contribute to responsible AI development. It serves as a valuable resource in university courses, specifically in computer science, AI, and ethics. By incorporating this talk into the curriculum, students gain practical insights into mitigating biases and fostering ethical AI development.
  • In corporate settings, the talk can inform and enhance training programs. By featuring it, companies can sensitize their employees to the critical importance of diversity and inclusion in AI development. The talk encourages a proactive approach, empowering staff to actively contribute to shaping technology that reflects diverse perspectives.
  • AI startups can use the talk as a foundational guide. It provides insights into understanding and addressing biases from the early stages of development. By emphasizing the long-term benefits of ethical considerations, startups can integrate responsible practices, ensuring sustainable and socially conscious business practices.
  • The talk can serve as a good introduction to current developments in AI in Africa.


Licence

Alex Tsado: Steps to building ethical and practical AI tools by Alex Tsado, Alliance4AI for: Goethe Institut | AI2Amplify is licensed under Attribution-ShareAlike 4.0 International

Documentation of the AI to amplify project: @eBildungslabor

Yasmin Dwiputri & Data Hazards Project / Better Images of AI / AI across industries / Licensed by CC-BY 4.0