
SPECIAL GUEST: Mary Wareham (of Human Rights Watch, the Campaign To Stop Killer Robots). An important question faces the human race. Will we decide to "outsource" the decision to take a human life to machines? Do we need autonomous killing machines as part of our military technological arsenal? Perhaps this seems like science fiction to you... but while most of us go about our day-to-day lives, this science fiction scenario is already beginning to come true. On this episode, we're joined by Mary Wareham, the Global Coordinator for the Campaign To Stop Killer Robots, to talk about the path our technological development is on and whether we can choose to rise above what could be one of our last mistakes as a species. Recorded 10/5/2014.


You can download the episode here.


Mike & Matt's Recommended Reading:

The Campaign To Stop Killer Robots site

Stop Killer Robots' Twitter account

Stop Killer Robots on Facebook

Mary Wareham on Twitter


Transcript:

Alpha: Welcome to another episode of Robot Overlordz. Episode #112. On this show, we take a look at how society is changing. Everything from pop culture reviews to political commentary, technology trends to social norms, all in under 30 minutes, every Tuesday and Thursday.

Mike Johnston: I'm Mike Johnston.

Matt Bolton: And I'm Matt Bolton.

MJ: And joining us on this episode is Mary Wareham of Human Rights Watch. Mary, thanks for joining us.

Mary Wareham: Thanks for having me.

MJ: Mary is the global coordinator for the Campaign to Stop Killer Robots. So, to start, could you tell us a little bit about -- first off, what is a killer robot?

MW: ‘Killer robots’ is a casual term that we're using, because the terminology is still changing here. At Human Rights Watch, we call these weapons ‘Fully Autonomous Weapons’; governments have started calling them ‘Lethal Autonomous Weapon Systems.’ Fully autonomous weapons are weapons that are no longer under human control with respect to two key functions: the selection of targets and the use of force, lethal or otherwise. These are weapons that are not yet here, but what we see at the moment are what we call precursors: systems that have some degree of autonomy and lethality, some being employed now and others under development. But there's not currently a fully autonomous weapon out there. I guess the most common type of autonomous weapon that people might think of is the remotely piloted aircraft, the armed drone, and that's certainly one example of what we're talking about. But there are also stationary devices on the ground that worry us, and there will be autonomous weapons used on the sea and under the sea as well. So, a whole range of different functions will be undertaken by autonomous weapons in future warfighting, and what we're concerned about is when the functions of selecting targets and using force are taken away from human control and given to a machine. That, for us, is an unacceptable use of autonomy and something that we don't want to see happen in the future, which is why we launched the Campaign to Stop Killer Robots just two years ago now -- or rather, we formed it two years ago and launched in April 2013.

MJ: What kind of steps has the campaign taken to get this changed?

MW: In the Campaign to Stop Killer Robots -- I'm a human rights activist, I do research, editing and writing -- we've joined forces with scientists, roboticists and other technical experts from an organization called the International Committee for Robot Arms Control. They formed their committee back in 2009 because they were so concerned at what they saw as this trend towards autonomy in warfare. So, we've teamed up with them and with other humanitarian and non-governmental organizations to form our coalition. It's a global coalition, active in dozens of countries, and we call for a preemptive ban on the development, production and use of fully autonomous weapons. We're concerned about the use of these weapons on the battlefield in times of warfare, but we're also worried about their use in law enforcement operations, in policing and in other circumstances, where a different set of international law applies, and that really broadens the scope of what we're talking about. So, it's not just a faraway battlefield that could be a problem; these things could be used in policing in the United States and in other countries in the future.

MB: I'm assuming that these are mainly being developed for military use, if they are at all. Is there one particular company or government that seems to be pushing forward, or are you trying to get out in front of it ahead of time so that nobody does start developing things like this?

MW: We know that drones are here to stay, that dozens and dozens of countries now have drones, which are used mostly for peaceful purposes -- for surveillance and many other uses, including humanitarian ones. We know that armed drones are being manufactured by a number of countries. Probably the most interesting countries on the development side are the United States, as well as the UK, Israel, South Korea, Russia and China -- perhaps Iran as well. Those are all countries developing robotic weapons systems and other weapons systems with varying degrees of autonomy. There are likely others, but it's still a fairly small field, which is why we felt it was time for a campaign now, while the technology has not been widely embraced and while there's still time. I think we're at the right time. People are acknowledging, ‘Yes, this is a real thing, it is happening, this is not just science fiction, it's not a thing in the future.’ There's a debate about when it's going to start happening; it's not going to be just one system that appears overnight and we're like ‘Ah-ha! There's the killer robot.’ A lot of people say that this is incremental -- even that it's inevitable, which we dispute -- that the technology is being introduced gradually in different countries. But what we're worried about is that we're sleepwalking into this future without looking at the negative aspects that could come with allowing these weapons to have full autonomy when it comes to targeting and attack decisions. That's our main concern.

MJ: What are some of those negative consequences?

MW: Well, a whole host of concerns have been raised. At the end of 2012, we put out a report at Human Rights Watch called ‘Losing Humanity.’ In that report -- because we're a human rights organization and we look at the laws of war -- we looked at how international law applies to fully autonomous weapons, and we found that it was inadequate and would have to be supplemented, because at the current state of technology, the types of autonomous weapons that concern us would not meet some of the key principles of international law. They would not be able to distinguish a civilian from a combatant, and they would not be able to make the complex proportionality decision that a military commander has to make before embarking on an operation: determining the military benefit to be gained from attacking a place. ‘How does that balance against the humanitarian considerations of the civilians on the ground?’ ‘Will more civilians be hurt as a result?’ Those are complex decisions taken on the battlefield by humans at the moment, and our concern at Human Rights Watch is that if you give that task to a machine -- ask it to interpret the laws of war -- we don't hold a lot of hope for that, and then there are accountability concerns on top of it. Other concerns have come from interesting sources. Just a couple of days after we launched our report, the Pentagon issued its own policy directive on autonomous weapons. It had a whole annex of the things that could go wrong -- hacking and spoofing and enemy counterattacks, what happens when the enemy gets hold of the equipment. That leads into the broader concern that I think many countries now have about proliferation: if one country gets this, will other countries want it? So, proliferation is a big one, the technical concerns are big, the legal concerns are big. Probably the biggest debating point at the moment, though, is the ethical and moral question: ‘Are we comfortable with outsourcing killing to a machine on the battlefield or in law enforcement? Is it morally right to allow a machine to take a human life -- to go out there and actively select a human and then take their life? Is that what we want to see happen?’ I don't think it is, and it's part of the reason for our coalition. Just a few months ago, we had 20 Nobel Peace Prize laureates issue a statement endorsing the campaign, expressing concern about where this technology is headed, and basically welcoming the fact that we're around and we're going to try to stop it. So, there are multiple concerns over the different types of technology we're talking about here, but they all come down to that notion of human control: what does it mean to control an autonomous weapons system? When does it cease to be meaningful control? At the moment, we've got armed drone pilots back in bases in Nevada, in the United States, controlling drones that are operating in Somalia and Afghanistan and Pakistan. They're still in control of those drones. Some would question whether that control is meaningful, given the burnout and the other ways that drone pilots can be affected by the work they do. But they're still in charge of it, and they still feel bad about it, because they're human. What would happen if that task were outsourced to a machine? That's the question.

MJ: You mentioned the Nobel laureates who have signed on -- what kind of response has the campaign gotten so far?

MW: We've been pleasantly surprised. It helped that a UN expert put out a report last year as well, just after we did, looking at the same set of concerns. He didn't call for a ban in his report, but he called for a moratorium, which is basically a pause on the development of such systems until an international framework can be agreed. So, there was a big effort in 2013 by governments to begin acknowledging this problem, and by the end of the year, in November, they agreed to the first formal talks on it, which they held in May of this year. So, they've started talking about the problem, about this move and trend in warfare, but it's very early days yet on that side. At those meetings in May, five countries endorsed our call for a ban on these weapons, but many more spoke about the need for meaningful human control. There was no country saying ‘What are we even sitting around talking about this for? Isn't this science fiction?’ No. Everybody said ‘We get it, we understand it. We've seen where drones have been used, and we're concerned about the civilian casualties there, but we understand how this is different from drones, and we understand the need to take action on it.’ Who knows what form that action will take, but the way it was picked up so quickly by governments gave us hope that they will move quickly with it. But I know from my past two decades of work with diplomats on other weapons that they can take a long time if they want to.

MJ: Yeah, I'm sure.

MB: Are you guys going to the UN or is it more to each individual country and trying to spread your message?

MW: We're going to work it both ways. We know that there won't be any action at the international level unless countries have policy at the domestic level, and unless people are aware of and understand this problem -- not just in the media and the public, but in government. So, it's going to be a long, hard haul. We held the first congressional briefing on the topic just a couple of weeks ago here in D.C. and had a great turnout and lots of interest. So, we're starting to lay the groundwork of understanding, but we need to do that in order to get the international talks where we want them to go, which is to result in an international treaty preemptively banning the weapons. There's a precedent for that. Back in the 1990s, blinding lasers were prohibited while the prototypes were still being developed, before they had actually been fielded and used -- which is why you don't hear about them anymore. So, while we don't quite have a precedent for banning a weapon system that does not yet exist at all, it's going to take a while to get there.

MB: Just thinking about it from a layman's perspective, let's say we went to war with Iran and they had this entire army of these killer robots and so did we. The whole thing just seems kind of pointless almost, just a ‘Here's our 50,000 robots against your 50,000 robots, and let's see which --’ you know?

MW: Yeah, there was just an article today about swarm technology, about using a swarm of robotic weapons, and that's one potential scenario. We talk about weapons systems because it might not be just one single weapon; it could be a whole system, and so you then have to look at the notion of human control over attacks involving those weapons. There's a whole range of different issues to be unpacked here when it comes to how an autonomous weapon functions on the battlefield. It's striking that for the last decade there's been this huge debate about the use of armed drones and the lack of transparency and all of that, but nobody had looked at the technology and where the technology is headed. I think when we came up and said ‘Let's do it,’ there was strong interest in doing that. I've heard from a lot of people over the last year or two that we're at the cusp of a whole new way of warfighting if we go down this road. It's a little bit like back during WWII, when the atomic bomb was being invented. We've got a tremendous technology that, if we harness it, will do a lot of great things for mankind. But if we don't establish some principles and some controls and some regulation, then we could end up really hurting ourselves in the future. That was heard strongly at the meeting that happened this year, especially from the military. You really have to question whether the military even wants this kind of technology. Some of the early polling that we've done has found that, no, they're actually not that interested in it, partly because military personnel would lose their jobs as a result, but also because they'd have to be responsible in some way for what would happen if the system went wrong. The chain of command has ultimate responsibility, and you cannot hold a robot accountable for its actions, but you can go to the commander and say ‘Hey, what happened?’ Or you go to the programmer who programmed it, perhaps a long time ago, and say ‘What happened?’ This is another part of the puzzle -- who's responsible for these weapons systems? We haven't sorted that out with armed drones, so we're not so sure how well we'll do with autonomous weapons.

MJ: You mentioned that one of the arguments some people have made is that these systems are inevitable. What do you say to the folks who say ‘Well, this is inevitable, so we have to protect ourselves and develop this technology’?

MW: It's the trend towards the high-tech battlefield that really drives the United States, and we see that a little bit with China and with Russia. But most of the rest of the world is fighting with basic weapons, not with these high-tech solutions. We've heard a lot about how robots would not have the same failings that we have -- they wouldn't rape, they wouldn't be child soldiers -- and how they would save a lot of soldiers' lives by removing the need for boots on the ground. I guess our question at Human Rights Watch is ‘Well, what about the civilians on the ground?’ What we know from the work we're doing right now, documenting the conflicts in Ukraine and Syria and elsewhere, is that civilians are always there in the mix, and they're the ones who bear the brunt of armed conflict these days. It's unbearable to watch at the moment, and it's really hard for us to see how putting autonomous weapons on the battlefield is going to make that any better for civilians.

MJ: For the average person who might hear this and suddenly get concerned, how would they get involved in the campaign?

MW: Well, we're just getting set up at the moment, so we don't yet have a huge outreach effort with legislation that we're trying to get adopted here in the United States or elsewhere. It's still very, very early days, but we are on Facebook and Twitter, we're on the internet, we have a good website. We're trying to encourage people to look us up, to follow us and friend us and learn a little bit more about it themselves. There will come a time soon when we will be asking for support, in terms of signing petitions and getting out there and helping in our efforts. So, this is why it was great to talk to you guys. The next thing we'll be doing is going to the UN in New York to talk to the governments again about what they're doing on this.

MB: Very cool.

MJ: What would you say to the people out there that still think this is science fiction? That might hear this and be like ‘Well, that's stuff that happens in movies.’

MW: Sometimes those movies do end up depicting the kind of future we're heading into. I guess it's one reason why we in the Killer Robots campaign haven't come up with our own vision of what a killer robot should look like. Everybody has their own idea from having seen the movies and read the books; everybody knows what a killer robot is. But at the same time, that's science fiction. So, look a little bit deeper at what's going on in the real world with the development of autonomous weapons systems and make up your own mind. There's a good amount of material out there on this now -- we've got a good bibliography on the campaign's website. I think the first thing everybody needs to do is get their head around this topic.

MJ: Okay. Matt, do you have anything else?

MB: No, I think we've learned from literally every sci-fi movie ever that this can only end badly, so.

MW: Unfortunately, yes. But I guess what we want to do is give people hope that they can change this and that we do have a say in our own future. It's not just something that our governments do for us. We can have a view and an opinion and we can shape it ourselves.

MB: Yeah, absolutely.

MJ: Well, fantastic. Mary, thanks very much for joining us.

MW: No problem. Thank you very much, guys, and I look forward to listening to it.

A: That's all for this episode of Robot Overlordz. You can find our show notes, including links from this episode, on our website at RobotOverlordz.fm. That's it for our broadcasting. We would love to hear your thoughts on this episode in our forum, or you can review us on iTunes. We're Robot Overlordz, with a Z.

MJ: Thanks everyone for listening.

MB: Thanks.


Image Credit: via Popular Resistance