#31 - Yves Moreau
AI and ethics: Do they go together?
Intro: As a scientist, I have a responsibility in that. I cannot solve the problems. But I felt that just staying in my lab and saying I'm going to solve more problems, build more cool tools, was not enough. As a scientist, I felt a responsibility to actually go into society, interact with people, try to figure out, you know, how we're going to change society, because there are so many changes at such a pace. And sometimes you ask yourself, well, are we actually choosing in what direction we want to go? And so that was one of the fundamental ideas: to interact with society, for scientists to take responsibility for their work, to reflect on what their work contributes to society.
Ask Different, the podcast by the Einstein Foundation.
Anton Stanislawski: My name is Anton Stanislawski. Welcome to the podcast. For most of us, artificial intelligence isn't much more than a fun tool to play around with. It's chatbots writing application letters. It's image generators that turn our ideas into beautiful pictures. At the moment, the Internet is flooded with stunning AI-created videos. But, obviously, AI is already doing so much more. It's probably one of those technical developments that fundamentally change how we as a society work. The challenge is to limit the risks and find a balance between benefits and risks. Those are not my words, but the words of Yves Moreau. He is a professor of engineering science at KU Leuven in Belgium. He calls himself a concerned scientist. A while ago, he helped stop a huge DNA database in China. For his work, he is receiving this year's Einstein Foundation Award for Promoting Quality in Research, and he is my guest today on the podcast. Yves Moreau, welcome. Good to have you.
Yves Moreau: Thank you. Thank you very much for the invitation and the opportunity to talk to you.
Stanislawski: First of all, congratulations on your Einstein award.
Moreau: Well, thank you very much. I was really very appreciative of the fact that the committee that selected the award actually went a little bit out of its way, I would say, in selecting a kind of work that's maybe less obviously related to quality in research. But, obviously, we can't do good research if we don't do ethical research when we're talking about human-related research.
Stanislawski: You are working with human DNA and big data sets, and it's about using AI to find, for example, hereditary diseases. Is there an easy way to explain what exactly you're doing?
Moreau: Well, it's not all that difficult. In each of our cells, we have DNA. Everybody has heard about that. It's a sequence, like a book in a certain sense. That's a little bit of a caricature, but it's like a book of 3 billion letters, like a CD full of letters. And these letters contain the DNA information that is needed to actually develop an entire organism. And sometimes mistakes occur in this DNA. It's not like we are all identical. Two people are actually around 99.9% identical at the DNA level. Most of the differences don't matter too much. They make us a little taller or shorter, change how our nose looks or the color of our eyes. But some variations can lead to very severe diseases. Think about familial breast cancer caused by mutations in the BRCA genes. That's something people have heard about, the story of Angelina Jolie, for example. These are mutations that cause or contribute to disease, and a big part of my work is trying to find those mutations by collecting data from patients all over the world and then trying to find the commonalities and differences between people who are affected by the disease and people who are not. It's like finding the spelling mistakes in a book: you just read through and try to find where the mistakes are. And sometimes the mistakes don't matter very much; you still understand. Sometimes a mistake turns one word into a completely different word that changes the meaning of a sentence and impacts the story. That's a metaphor you could use for the work that I'm doing.
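To make the book metaphor concrete, here is a toy sketch in Python of the kind of case-control comparison he describes: count how often each variant appears among affected people (cases) and unaffected people (controls), and flag variants strongly enriched in the cases. The variant names and counts are hypothetical, not from Moreau's actual pipeline; real analyses use proper statistical tests and far larger cohorts.

```python
# Toy case-control comparison: flag variants that are much more common
# among affected people (cases) than among unaffected people (controls).
# All variant names and numbers are hypothetical; real analyses use
# proper statistics (e.g. Fisher's exact test) and far larger cohorts.

cases = {"BRCA1:c.68_69del": 14, "chr7:117559590:A>G": 2, "chr1:55051215:G>A": 1}
controls = {"BRCA1:c.68_69del": 1, "chr7:117559590:A>G": 3, "chr1:55051215:G>A": 2}
n_cases, n_controls = 100, 100

for variant, carriers in cases.items():
    freq_cases = carriers / n_cases
    freq_controls = controls.get(variant, 0) / n_controls
    # Crude enrichment ratio; the pseudocount avoids division by zero.
    enrichment = (freq_cases + 1e-3) / (freq_controls + 1e-3)
    if enrichment > 5:
        print(f"{variant}: {freq_cases:.0%} of cases vs {freq_controls:.0%} of controls")
```

Run on these toy numbers, only the variant that is far more frequent in cases gets flagged: the "spelling mistake" that actually changes the story.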
Stanislawski: Now you're calling yourself a concerned scientist. What do you mean by that?
Moreau: Well, I think everything is going so fast. And actually, I thought, well, this is like being on the beach with a six-year-old kid who's trying to swim and making a good effort but still struggling a little bit because of the waves. And you are standing on the beach and you're seeing the tsunami wave coming. I didn't really see the large language models or even the image generation tools coming. Really, I was just as surprised as about anybody else. And many of my colleagues who are experts in artificial intelligence, when they are honest, they tell you, yeah, wow, we didn't see that one coming. But what I did see coming was that the pace of change was just going to keep increasing and that the impact on society would be really strong, in ways that I could not quite anticipate. And as a scientist, I thought, well, I have a responsibility in that. I cannot solve the problems, but I felt that just staying in my lab and saying I'm gonna solve more problems, build more cool tools, and then we'll just see whatever society does with that, this was not enough. As a scientist, I felt a responsibility to actually go into society, interact with people, try to figure out, you know, how we're going to change society, because there are so many changes at such a pace. And sometimes you ask yourself, well, are we actually choosing in what direction we want to go? And to some extent we are, but I think it's quite limited. To a large extent, it's just people being in competition with each other, rushing forward, not always knowing, well, where do I really want to go?
And so that was one of the fundamental ideas: to go and interact with society, for scientists to take responsibility for their work, to reflect on what their work contributes to society. That's what I mean by a concerned scientist. It's also a term that goes back to the notion of scientists stepping into society after World War II to talk about the risk of nuclear holocaust and the responsibility to prevent it.
Stanislawski: So let's look into it with the example of AI. You're calling for ethical standards in AI. What do you have in mind when you're calling for this?
Moreau: I'm calling for something slightly different. I think a lot of people are working on ethical standards. The EU is really leading the progress in this area with the GDPR protecting privacy, with the AI Act, with the Digital Services Act. There are real efforts. Maybe sometimes we'll make mistakes, but there's a clear effort to improve legal standards, and that is often the foundation of ethical standards. But I'm calling for more than that. I think that ethics, as a set of guidelines and commitments that scientists and technologists need to make, and regulation, which is grounded in law, are essential, but they can only function properly if they feed into and grow into a culture. And what I've seen, or what I've grown up with, is actually a tech community where the culture about, you know, who we are, why we do what we do, what our responsibility to society is, what our interaction with society is, is dominated by very naive narratives.
We have this idea that technology will always be good in the end, so we just move forward as fast as we can, everything will sort itself out, and we don't have to take responsibility or worry. I think that for ethics guidelines and legal regulation to really work, they need to feed into a culture where people will not see them as obstacles but actually as foundational. Why do we have to care about ethics in research? Because it is central to our mission. It's not an obstacle. And often, if you lack the proper culture, you will only see the ethical rules: if we want to access data, we have to fill in all kinds of paperwork; we find it boring; we find it not super meaningful; and we'd like it to be different. We'd like to be left alone because, well, we're not intending to do anything wrong, we're not doing anything wrong, so why do we have to go through all that? If we don't perceive that these difficulties are linked to very important processes that are meant to guard and protect much of society, then they don't make sense and we just want to go around them. And I hear that very often, people developing technology kind of saying, well, you know, in a milder voice, these ethics things, this regulation like the GDPR, they're just standing in the way and preventing us from doing our work. That is partly true, but they're not there for no reason. They are there because they're supposed to help us move our work forward in a way that is more beneficial to society.
Stanislawski: When talking about AI, you often hear sentences like: AI is only what we feed it. It uses only the information that we as humans give it. So AI, in a way, is a mirror of us. Now, most of us are born and raised in societies that are not at all free of racism or other forms of discrimination. So how can we prevent AI from copying us in this way?
Moreau: Yeah. That's one of the main research questions these days in the ethics of AI: how do we manage biases? What is a reasonable balance? I think with Gemini from Google coming out and starting to generate images of Black German soldiers from World War II, for example, or Native American Vikings and things like that, we kind of saw that, actually, we are struggling with this.
I think it started with a very good intention: we need to generate images that are diverse, that are not only always of older white men. And then suddenly you get yourself in a situation where you suffer a terrible backlash, because maybe you rushed a little too much, you went overboard, and you produced results that everybody's laughing at. But you have to see that there was at least some effort there. And there is a big tension about, okay, how should we build such systems? I mean, this shows that Google tried to make a strong effort on diversity. Maybe they overshot. But indeed, the question is finding this balance: how do we limit biases? There are some beautiful mathematical results, for example, that show that in certain circumstances it is impossible to make a fully unbiased AI system, and it would be just as impossible for a human being to be fully unbiased.
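He doesn't name a specific result here, but one well-known family of results of this kind are the fairness impossibility theorems. For any binary classifier, the false positive rate (FPR), false negative rate (FNR), positive predictive value (PPV), and the prevalence p of the condition in a group are tied together by an identity:

```latex
\mathrm{FPR} \;=\; \frac{p}{1-p}\cdot\frac{1-\mathrm{PPV}}{\mathrm{PPV}}\cdot\bigl(1-\mathrm{FNR}\bigr)
```

If two groups have different prevalences p, this identity implies that no classifier, human or machine, can equalize PPV, FPR, and FNR across both groups at once; some notion of "unbiased" necessarily has to give.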
Stanislawski: So, you're saying we need to have a debate about this, but at the same time, the pace in which AI is developing is so fast. Right? So, do you feel like we are already, like, a step or even more steps behind?
Moreau: Yeah. I mean, we're running awfully behind. The way that, for example, OpenAI pushed forward with ChatGPT 3.5 and then 4 was not particularly responsible. It was driven by economic incentives, and it's really problematic. And, you know, I was one of the co-signers of that letter asking for a six-month pause in the development of AI. If I'm reasonable, I know that it was never going to happen. But I think that, overall, we're going too fast. We are like teenagers driving a car half drunk, in a very perilous environment; we're going up or down a mountain. It's very dangerous. And what I don't understand is why there is so little perception that we could actually get to the same place, but it would be safer if we went a little slower. This is a really spectacular breakthrough. Maybe we need some time to digest this breakthrough, and then the next, and then the next, because this is not stopping. For the next decade at least, maybe several decades, we're going to see really important breakthroughs every couple of years.
Stanislawski: Breakthroughs that can be beneficial and harmful, right? You yourself are working at the interface between AI and genetics; in short, using AI to work with human data. And you're warning about the potential for abuse of these DNA databases. What is it that you are so concerned about?
Moreau: So I've been working in human genetics for about 20 years, trying to develop tools, and about 10 years ago, I said, well, we want more and more and more data. We want to accumulate as much data as we can, but that is data that is relatively sensitive. There was a big demand from patients or their families saying, well, find an answer; if you need to share the data with other people, share it. So I felt: we're bringing more and more and more data together. How do we protect the privacy of the patients from abuse? But also just protecting privacy as such, because I consider it a central right: of autonomy, of not being monitored, of not being in an overly paternalistic relation where somebody tells you, this and this and this is what you need to do. We need this kind of balance in society. And so I started working on developing systems that protect the privacy of patients better, in particular by not centralizing all the data in one place, but actually leaving it in the control of the original data controller, as it's called under EU regulation, meaning typically a hospital that has the data of several thousand patients and, say, wants to work with 20 other hospitals. How do we set up systems that can analyze all this data without having to put everything in one place? Centralizing raises privacy questions, but also questions of control: if we put everything in one place, there's suddenly a player that has a lot more weight in decision making than the other players. That's also an important dimension. And so a decade ago, I started working on that. And then, basically, I realized in 2016, 2017 that there was another field close to mine, because for me it's more genetics in biomedical research and clinical care. But there was also something called forensic genetics, and in that field there were really big developments. The deployment of these technologies should therefore be the result of a careful social debate, careful regulation, a balance of interests that allows us to develop them minimally, so that they are as effective as possible without actually becoming a threat to an entire society.
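A minimal sketch, in Python, of the kind of federated setup he describes, assuming a simple counting analysis: each hospital computes a local summary on its own premises, and only those summaries, never patient-level records, are sent to a coordinator. The structure, names, and data are illustrative, not his actual system.

```python
# Minimal sketch of a federated analysis: each hospital computes a local
# summary on site, and only those summaries (never patient-level records)
# are sent to the coordinator. Hospital data here is illustrative.

from dataclasses import dataclass

@dataclass
class LocalSummary:
    carriers: int   # patients carrying the variant at this site
    patients: int   # total patients analyzed at this site

def local_count(genotypes: list[int]) -> LocalSummary:
    """Runs inside the hospital; raw genotypes never leave the site."""
    return LocalSummary(carriers=sum(1 for g in genotypes if g > 0),
                        patients=len(genotypes))

def aggregate(summaries: list[LocalSummary]) -> float:
    """Runs at the coordinator, which sees only counts, not individuals."""
    total_carriers = sum(s.carriers for s in summaries)
    total_patients = sum(s.patients for s in summaries)
    return total_carriers / total_patients

# Three hypothetical hospitals; 0 = non-carrier, 1 or 2 = carrier.
site_data = [[0, 1, 0, 0, 2], [0, 0, 1], [1, 0, 0, 0]]
summaries = [local_count(g) for g in site_data]
print(f"Global carrier frequency: {aggregate(summaries):.1%}")
```

Production systems add further protections on top of this pattern, such as secure aggregation or differential privacy, so that even the shared summaries leak as little as possible.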
Stanislawski: And these databases can be a threat. There is a very concrete example of this from your work: you documented how Chinese authorities were trying to set up DNA databases, obtaining and analyzing the genomes of the Uyghur minority in particular. Can you briefly walk us through what was going on?
Moreau: I came across a BBC report that explained that in Xinjiang, a province in China that I had never really heard of, if you wanted to get a passport, you not only had to give your fingerprints but also had to have a 3D scan of your face taken, have your voice recorded, and provide a blood sample for DNA analysis. And I thought, well, that is really, really concerning. I reached out to Human Rights Watch. It turned out that they were interested in that problem as well and were starting to look into it, but they lacked the technical background. I mean, it's about DNA sequencers, about kits to look at small regions of the genome that allow people to be identified uniquely. There's a lot of technicality behind it, and so we started working together to assess what the situation was in Xinjiang. Human Rights Watch found a number of documents, just public documents from the authorities, showing that the police of Xinjiang were buying a large number of DNA sequencers and also kits to run DNA profiles, to create a DNA database in Xinjiang. And this was at a scale that was incompatible with normal police work. Normal police work should focus on collecting traces at crime scenes and comparing them against a database of a limited number of people who are considered most dangerous.
And the scale was just totally different. So it felt like they were trying to build a DNA database of the entire population of Xinjiang. Then Human Rights Watch discovered guidelines specifically stating that DNA should be collected to create a DNA profile and a database of all people between 12 and 65 in Xinjiang. So it was a mass DNA database of basically most of the population. That, at least, was the plan. It's not entirely clear to what extent the plan was eventually carried out; there are signs that maybe it was not carried out as originally planned because of all the pushback. But very importantly, this was only one of the dimensions of the surveillance and persecution of Uyghurs in Xinjiang. There were, of course, cameras everywhere, facial recognition. We have actually seen software where you can predict the ethnicity of someone from their picture, so you can track Uyghurs wherever they are, even outside of Xinjiang, which is totally Orwellian.
And then many more layers: tracking of vehicles through GPS, individual files on people that the police can access with a smartphone and add details to, a really very concerning form of surveillance. And that was linked to massive persecution. There is a set of camps that were deployed across the province, which has about 25 million people, about 15 million of them from the Muslim minorities, and many hundreds of thousands, maybe more than a million people, have been sent to these re-education camps. Families have been split apart: men have been sent to work in factories on the other side of the country, children have been sent to boarding schools to become good Chinese citizens and not be influenced by the religion, language, and culture of their parents. Women have been subjected to birth control and forced abortion on a very significant scale; we know that the birth rate in Xinjiang has collapsed because of all this persecution. So crimes against humanity have been taking place in Xinjiang for a decade. And so you cannot isolate the DNA database and say, yeah, but we should have a discussion about what exactly the risks linked to a DNA database are. This DNA database is a piece of mass surveillance, which is a piece of social control and persecution. And when you see it as a piece of the puzzle, then you say, well, we shouldn't add that extra piece to that dark puzzle.
Stanislawski: So, in this interview, we've been talking about the threats and opportunities of new technologies. You are asking your colleagues and fellow scientists to be more active in public debates, and I think you've made quite clear why you think that's so important. Do you feel like there is movement, for example in engineering, toward being more concerned about the outcomes of new technologies?
Moreau: Yeah. I think that things are moving in the right direction, maybe not as fast as I would want, but there are different things going on. First of all, the ethics questions are taking a more central place in scientific debate than they had maybe one or two decades ago. So we're talking more about it. Are we really doing things that are significantly better? Sometimes you can say, yeah, you talk a lot, but what do you really do? Okay. But at least we're talking about it; these are themes that were not present to the same extent 10 or 20 years ago.
So that's good. Second, there is also a matter of generation. Younger people are actually living in a quite different world. They grew up with this intense pressure about sustainable development, about the limits to growth, about a society that is changing in ways that are hard to follow even for digital natives. And when I teach my students, I see something that I find very, very promising. When I teach about ethics, the students are really, really engaged. The first time I taught it, I thought, well, you know, they're gonna be dismissive; they're gonna say, why do we need something about ethics? But in the six or seven years that I've been teaching a short ethics module, I've been really amazed by the engagement of the students. I feel that the younger generation, 20 to 30 years old, are really demanding not only answers but ways of thinking about the problem. They realize that there are problems. They realize that it's probably not going to be possible to just move forward as we did before, that we need to change the program of our civilization, and that to do that, we actually need ways to think about it. And that is where I think there is a major bottleneck if you are being trained in science and technology: we're now starting to talk a little bit about it and get some exposure, but we really lack the deep tools to think about our work, to take responsibility, to engage, to enter into debates. And I think the younger generations, even in science and technology, do not want to only hear Elon Musk telling us that we're going to colonize Mars and then go and colonize the stars. People realize that, okay, it's great, it's fantastic for science fiction, but it's not a program to run society in the 21st century. We need better ways to think. We need better stories to tell about what technology is for. We have hit planetary boundaries. We are hitting social boundaries: our societies are being severely disrupted by the pace of change, by political interference, for example. And we need to invent, I don't have the answer, a program for the coming centuries, for where we want to move society.
Stanislawski: Says Yves Moreau about his research on AI, big data, and the ethical questions that come with all of this. Thank you very much for taking the time and sharing your thoughts.
Moreau: Thank you very much for the invitation, and thank you to all the listeners as well.
Stanislawski: And that was today's episode of Ask Different, the podcast of the Einstein Foundation. If you are interested in more interviews with scientists, please consider following the podcast. There are more episodes coming up in the next weeks and months. Rating the podcast and sharing it also helps us a lot. My name is Anton Stanislawski. Thank you very much for listening.
Ask Different, the podcast by the Einstein Foundation.