UN Sustainable Development Goals

Human bias and machines. How can we crystallize equality in AI?

Dr. Joaquin Alvarez

Director of Academic Research at RMA Advisory. Researcher and professor at UAB, Community Engagement Director of Leading Cities, Member of ICRAC

February 22nd 2022 - United Arab Emirates

Crystallizing Equality: Navigating the Future of AI with Dr. Joaquin Rodriguez

Welcome to the World Higher Education Ranking Summit, where leading minds in higher education, innovators, and policymakers gather to discuss critical topics shaping our future. In this session, Dr. Joaquin Rodriguez, an expert in the relationship between society and technology, explores the complex world of artificial intelligence (AI) and its potential for both good and harm.

Dr. Joaquin highlights the importance of meaningful human control over AI technology and the need for a human-centric approach to its development. He discusses the responsibility of universities and society to prepare for peace, emphasizing the value of solidarity over competitiveness.

The conversation delves into the challenges posed by autonomous weaponry and the pressing need for international legal frameworks to prevent misuse. Dr. Joaquin emphasizes that technology alone cannot solve our problems and calls for responsible research and ethical applications of AI.

In addressing the future, Dr. Joaquin presents two paths: one leading to the resolution of global challenges like climate change through innovation and collaboration, and the other marked by continued power struggles and devastation. He urges us to choose the path of change and action, emphasizing the urgency of addressing climate change.

This insightful discussion sheds light on the role of AI in our future and the critical need for responsible development, ethical applications, and global cooperation to navigate the challenges ahead.

Speakers Info


Dr. Joaquin Alvarez, Research Director at RMA Advisory

Dr. Joaquin Alvarez holds a Ph.D. in global law and human security with a specialization in the relations between society and technology. He also holds a Master's degree in International Relations with a specialization in Peace and Security Studies from IBE, as well as a postgraduate degree in Project Management from the Center of High Academic Studies of the Organization of Ibero-American States.

Session Script: Human bias and machines. How can we crystallize equality in AI?


Introduction

Ladies and gentlemen, I'm honored to welcome you to the very first World Higher Education Ranking Summit, which brings together the world's brightest minds in higher education, its most prominent changemakers, innovators, and policymakers, to implement the best use of the new technologies that will drive your institution forward. Today we talk about an extraordinary topic that will affect our lives in the future: the militarization of robots, the dark side of AI, and what we can do to prevent bias and dangerous misuse of modern technology. There is a lot of mystification around AI today, and I'm very honored to be joined by Dr. Joaquin, who will shed light on this ambiguous topic. So let me introduce Dr. Joaquin to our honored audience. Joaquin Rodriguez has a PhD in International Public Law from the Autonomous University of Barcelona, and his specialty is the relations between society and technology. He currently works as an associate professor at the Faculty of Law of the Autonomous University of Barcelona. He is a member of the Board of Directors of the Catedra Manuel Ballbe on Human Security and Global Law, a member of ICRAC (International Committee for Robot Arms Control), and Community Engagement Director at Leading Cities. So, thank you so much, Dr. Joaquin, for joining us today. I'm very honored to ask you the first question.

"Artificial intelligence can be either a key tool for our survival, or a crystallization of the current dynamics of oppression. What do you think we should expect of AI in the future?"

Future of AI

First of all, thank you so much for having me. It's a great pleasure to take part in this conference. Turning to your question: I believe that nowadays we are facing a context in which many meta-narratives are trying to push for a wider penetration of technology into our lives. But we should always remember that technology is basically an amplification of the human; it is a human-made tool we have created. So technology by itself is not going to solve our problems. On the contrary, it has the ability to reproduce all the systems of oppression, all the biases that we already have. So, more than ever, it is necessary to reinforce the notion of meaningful human control over our technology. And we should also create the tools for governing technology from a social, human-centric approach.

This is a very important point, the human-centric approach. However, as pointed out by Kentaro Toyama, technology is seen as an amplifier of human will.

"Does that mean our future depends on humans and their willpower in how technology is used? And, as you know, not everyone is a good human. What shall we do?"

Humans’ future based on technology use

Of course, our future, the preservation of human rights and of our ecosystems, is going to be in our hands. I mean, there is no technology able to solve the problems that we have created ourselves, but technology can be a really important tool to help us with the huge challenges we are facing right now: the ecological crises we are embedded in, and the many societal challenges before us. But technology is value-laden; it is not neutral.

So we have to ensure that there is a direction to our technological frameworks, a direction pointed at those big problems, those big challenges that we are facing as a species, in order to provide solutions. But today, as has happened with previous technological efforts, there have been tendencies to use these tools to reproduce dynamics of oppression and to advance the militarization of technology. This phenomenon is nothing new. It happened with nuclear technology, for example: nuclear science offered many opportunities in healthcare and in advanced technological applications, but the decision was made to weaponize it. That was a human decision, and from that human decision came conflicts and horrors whose manifestations in our systems we can still hardly imagine today. That is why I would like to insist that when we are talking about technology, it is not neutral; it is a value-laden tool. So we need to establish mechanisms of control.

This is an exceptionally interesting point, because, looking at the world right now, the mechanism of control is not actually implemented. Certain AI systems are specifically designed to kill, known as lethal autonomous weapons. This tendency has been described by numerous scholars, who say this is the third revolution in warfare, after those defined by gunpowder and nuclear weaponry.

"So, Dr. Joaquin, what do you think: how will this impact civil society? And is there any regulation in humanitarian law, in international law, that will prevent the use of such machines on ordinary people?"

Impact of AI on civil society

Well, it is a multi-layer problem. On one hand, we have a revolutionary technology, artificial intelligence, that can provide us with opportunities to face common challenges. And then there is the specific interest of the military-industrial complex, and of many states like the USA, China, Russia, and North Korea, in weaponizing this technology. What does weaponized artificial intelligence mean? They want to advance in giving weapons autonomy, meaning that some critical phases of the weaponry cycle, like targeting or eliminating objectives, will be able to happen autonomously, that is, without a human in the loop. Today we have several systems with this range of autonomy or semi-autonomy: think, for example, of the Phalanx system of the US Navy; the Eurofighters of the European Union also use these kinds of systems, and there are many more in the hands of many other countries. So what we are talking about here is whether we are going to keep human conflict under human control, or whether we are willing to delegate this control to non-human systems like artificial intelligence. And this is a huge debate with gigantic ethical and political implications.

And on the other hand, do we have enough legal tools today to avoid this development? The answer is clearly no. We have an international legal framework materialized through what we call international humanitarian law, which is basically the Geneva Conventions and all the law derived from these conventions, like the specific treaties banning weapons such as cluster munitions. But as we are seeing right now in the conflict happening in Ukraine, one of the combatants, Russia, is using cluster munitions, and this is clearly illegal under international humanitarian law. So here we have one problem: we have weaknesses in enforcing this regulation; we do not have enough tools to ensure real compliance. And on another hand, in what we can call the field of the militarization of advanced technology, we find that there is a lack of regulation. That is why it is extremely important to move forward in the elaboration of a binding treaty on an international scale governing the development, research and, of course, use of autonomous systems in warfare.

This is not really optimistic, because what we see right now is that international law basically has no way to enforce its rules and make its position clear and strong while such things happen. Do you think there are mechanisms for the future to actually counter such violations and make things better for civil society?

Mechanism to counter violation

I believe that nowadays we have a huge responsibility to generate a broad consensus about the security architecture in Europe, and also worldwide. In order to do this, we have to move forward from those paradigms that we have inherited from the Cold War, which now seem to be fashionable again. We need to talk with everybody, and we need to understand international relations as a neo-medieval system, which basically means a system where sovereign entities are imperfect. So there are going to be overlaps, and there are going to be areas of collision. But we have to be ready to solve all of this through peaceful means. This is one of the key points. There is a Ukrainian activist named Yurii Sheliazhenko who has said recently: if you prepare for war, you are going to get war. If you prepare for peace, you are going to get peace.

And the real problem is that we have been working from a very classical philosophical approach that comes from the Romans: if you want peace, prepare for war. This is a fallacy, and it is completely wrong. Because if you are constantly preparing for a war, if you have this machinery at your disposal, at one moment or another you are going to use it. And if you are not going to use it directly, then the moment the weapons you have produced are no longer up to the requirements of first-order warfare, you are going to move this weaponry to other countries, and we are going to keep fueling conflicts like those in Yemen, Somalia, and so on. So, if we want to create a peaceful society, we have to work from education. This is one of the key concepts: the only way to prepare for peace is to advocate for peace. And so many times, in our universities and in our educational system, we are so focused on competitiveness that we are not teaching solidarity. Precisely what the world needs now, in order to face the new arms race and the climate crisis, is solidarity. It means working together, and it means moving forward to a new paradigm in international relations.

This is excellent, because it is so important for all academics, for all institutions, for every single educational establishment, to work towards collaboration in the realm of peace. As you have mentioned, the saying that if you want peace you should prepare for war does not work; in fact, it gives you the mindset that war will happen. What needs to be done is to change the framework from an educational perspective, so that students are taught that peacekeeping and peacebuilding are the most important things, and that they are achieved through collaboration, not competition. And as we see from what is happening right now, if the world had been working towards collaboration, and not an arms race and a geopolitical struggle for power, the whole situation would have been different. So, my next question is about new weaponry systems: systems that are able to make critical decisions, such as targeting and target elimination, basically killing, without significant levels of human interaction, in a completely solo, autonomous mode. And these systems are not science fiction; it feels as if George Orwell had written the world we live in right now.

"What can we as civil society, and we as a group of academics, do to counter the negative and devastating effects of the misuse of such weaponry?"

Our responsibility as a society to counter devastating effects

I believe the first thing we have to be clear about is that we have a truly important role to play as a society, as individuals, and of course as universities. So many times there is a kind of narrative that operates like a legend, one that says that innovation happens in the military sector. They talk about the origin of the internet, but it has an academic origin: universities were involved in its creation, even if the funding came from the military. The institutions that produce knowledge, the institutions that provide technology and culture to our societies, are precisely the universities. And this is the key point where we have to insist, in preparing ourselves for peace: universities have to rescind their compromises with military and defense systems. We should not be providing knowledge for the military industry; we should not be collaborating with sectors that only produce tools for killing. The university has to produce tools for preserving life, for fighting climate change, for feeding our planet. This is what we have to be clear about as universities: we have to make a stand, a stand for peace.

And we must make a stand for responsible research, meaning responsible and ethical applications of artificial intelligence and, of course, of the other technologies that we are developing. But this seems to be far from what is happening: we see that many universities worldwide have been involved in this new arms race, often without the knowledge of their staff or of their students. So we have to make this information public and transparent in each of the cases we encounter, so that we know which universities are providing knowledge transfer for weaponry manufacturing, weaponry design, and weaponry development, and which are not. Because those that are not are the key to building a new society with another agenda, one focused on preserving life: the life of our species, and of the other species on our planet.

This is extremely important. And as you know, this new ranking engine called UniRanks is actually focused on showcasing universities committed to sustainable development, using this as one of the major criteria in its ranking system. This is very important, because in the current ranking systems many universities do not care about sustainable development, while it is essential for the survival of humanity. And peace, and keeping the peace, is one of the UN Sustainable Development Goals. So following that agenda is a must for every institution around the world: to preserve peace around the globe, and to nurture the needed new generation of leaders who will be focused on peace, not war.


"And Dr. Joaquin, my next question is about your personal experience within the field of demilitarization. In your own experience, what do you think needs to change to create the mechanisms in law that will prevent the misuse of AI, and the misuse of weapons that do not require human interaction? We are talking about drones; we are talking about the various systems currently used as innovative weapons. What are the mechanisms, and perhaps the legislative procedures, that you are working on right now?"

Personal experience regarding demilitarization

In this field, what is most important right now is to move forward with the structuring of a legally binding instrument on an international scale. Nowadays, the main discussions about these tools are being held at the CCW, the Convention on Certain Conventional Weapons, at the United Nations. The problem in this forum is that there is a set of countries, like the US and Russia, that are blocking any capacity to advance the regulation and the discussions. So we are now in a very complicated moment for demilitarization, especially because every time there is a major conflict, the military superpowers tend not to advance the signing of new treaties or new weapons bans; in war, nearly all means are considered acceptable in order to secure victory. So this is one of the big problems we have: once war materializes, it is truly difficult to stop it.

On another hand, of course, using AI systems adds many levels of complexity, especially if both players have this kind of weaponry, because it makes us question to what point we as human operators are going to have the ability to stop the conflict at the moment we want to stop it. We are relinquishing our responsibilities; we are delegating the control of war, the control of conflict, to non-human agents, and this is extremely worrying. But on the other hand, if we truly want to prepare for peace, we have to prepare legal frameworks, that is clear, but we also have to prepare the new generations with a new awareness. And this is the work of universities and of the whole educational system. Of course, society as a whole has an important role to play: we cannot keep playing at competitiveness; we cannot keep encouraging our children always to compete, to be the best, to be the most brilliant, to be the valedictorian. This is completely senseless. We have to adapt our education to the needs of the younger generations, understanding that intelligence materializes in many different ways. So we have to use technology in a way that can cultivate and amplify what is good in ourselves.

And we have to take special care with those applications that can have horrible materializations, which can become crystallizations of mechanisms of oppression, like patriarchy, racism, and other forms of bias in artificial intelligence. There is a huge discussion on how we understand the role played by technology in this sense. But basically, to answer your question, we have to work along two lines: providing the legal frameworks that we truly need to face the current challenges, and preparing our younger generations for peace. I mean, we have to make war unthinkable. This is the best safeguard we can build, because when you cannot imagine something, it is difficult for it to take place. So we have to promote those scenarios that provide collaboration, solidarity, and further integration.

This is exceptionally important for everyone who is watching us today. And I just want to remind our audience that we have a chat box where you can submit questions to Dr. Joaquin, who will answer them according to his expertise, because I know that today's topic will generate lots of questions, especially about the future.
"And my next question is about the future. What do you think will happen to us in the next 50 years? What do you think will happen with robots, with AI? Do you see a bright future, or do you see the other side of technology?"

Future of humans with AI

I believe that nowadays we have two main paths in front of us, and we have to choose which one we are going to take. On one hand, maybe in the coming years we will realize that we are facing common challenges, like climate change, that need all our effort, our brightest minds, and our technology, and we will be able to direct all our innovation to solving this crucial challenge in our history. Or we can continue as we are now, fighting for power and for natural resources. And if we continue acting as we are acting nowadays, and the horrible materialization of the war in Ukraine is just a clear example, our future is going to be truly dark, because we do not have one second to lose on climate change, and yet we are once again conducting a war. And we have to remember, now that we are talking about the Green Deal, the European Green Deal, the American Green Deal, that there is no activity more polluting in human history than war. The CO2 emissions of any major human conflict are just horrible. And if we think about the situation in Ukraine, in Yemen, in Somalia, and in so many places where war is present, we see this natural devastation, this huge damage to our ecosystems. So we have to internalize that we cannot continue on this path, that this path just brings us closer to extinction. We need to change, and we need to be quick.

That is an actually brilliant line to summarize today's discussion, Dr. Joaquin: we must change, and we must change now. It was my honor and pleasure to interview you today, as my interest in this topic is really huge, especially regarding the future we are going to have within the next 50 years. I just want to remind everyone who is watching us today that Dr. Joaquin will be able to answer your questions; just put them in the chat box. This has been an extremely insightful and really brilliant conversation. Thank you so much for your time, and for everything that you are doing in all your efforts to demilitarize robots, to demilitarize the sector, and to counter the negative aspects and negative sides of AI. Thank you so much.

Thank you so much. It has been a great pleasure being with you.