
SPECIAL GUEST: Calum Chace (Surviving AI, Pandora's Brain). The human race stands on the verge of amazing technological breakthroughs. One of the most speculated-about concerns AI (Artificial Intelligence). Obviously on a show with ROBOT right in the name, we spend a lot of time talking about AI... and this time we're joined by author Calum Chace, who has written a fantastic look at the subject called Surviving AI. Join us and find out what wonders and problems the future of the technological singularity may hold. Recorded 9/27/2015.

You can download the episode here, or an even higher quality copy here...

Mike & Matt's Recommended Reading:

Calum's website, Pandoras-Brain.com

Review the Future Episode 64 - Calum Chace on "Is It Time to Start Worrying About AI?"

Ray Kurzweil's site, KurzweilAI.net

More coming soon...

Transcript:

Alpha: Welcome to another episode of Robot Overlordz, episode #210. On the show we take a look at how society is changing, everything from pop culture reviews to political commentary, technology trends to social norms, all in about thirty minutes or less, every Tuesday and Thursday.

Mike Johnston: Greetings cyborgs, robots, and natural humans. I’m Mike Johnston.

Matt Bolton: And I’m Matt Bolton.

MJ: And joining us on this episode is Calum Chace. Calum, thanks for joining us today.

Calum Chace: Thanks for having me on the show, guys.

MJ: For the benefit of our listeners, can you tell us a little bit about your background?

CC: Sure. So, I had a 30-year career in business and journalism. I started with the BBC a long time ago, and then I spent some time as a marketer, did an MBA and went into consulting, and ended up being a director and then a CEO of entrepreneurial businesses. About three and a half years ago, I took the opportunity to stop working for other people and concentrate on being a writer and a speaker about artificial intelligence, a subject which had interested me really all my life, but particularly since the year 2000, when, like a lot of other people, I read a book by Ray Kurzweil called “Are We Spiritual Machines?”, which made me think about the possibility that a human-level artificial intelligence might be created in my lifetime, or my kids’ lifetime, and that was a really stunning idea. So, I’ve been reading as much as I could about the subject since then, and when I had the opportunity to stop working for other people, I concentrated on it full-time. So, I’m now a writer about artificial intelligence, and I’ve written two books about it so far. One is a science fiction novel called “Pandora’s Brain,” which came out in March, and the other is a nonfiction book called “Surviving AI,” which has just been published. So, in a nutshell, that’s me.

MJ: What kind of research did you do? I mean, you mentioned Ray Kurzweil’s book that really triggered your interest in the subject. What additional research did you do while you were thinking about the issue, or while you were writing the books?

CC: Well, reading everybody who’s written about it, to the extent that you can do that in the time. I’ve read everything Kurzweil’s written and most of the things that Nick Bostrom has written on the subject; I think they’re probably the two leading thinkers on the subject. But there’s a whole host of other people, and one of the best ways to find out who to read is by listening to podcasts—yours and some of the other ones that I know you have had sort of crossovers with—and bit by bit, you track down who all the players are: the Ben Goertzels, the Aubrey de Greys, the Hugo de Garises, and so on and so forth. There’s a whole load of people who’ve done some good thinking about the subject, and I try to read as much of their stuff as I can.

MJ: In looking at your book—so you’ve got two, right? One is fiction, one is nonfiction, “Pandora’s Brain” and “Surviving AI.” What has the reaction to the books been like?

CC: It’s been very good; I’ve been absolutely delighted with the comments I’ve had so far. On “Surviving AI,” I’ve got about 12 or 13 endorsements from leading figures, people like Ben, Aubrey, Randal Koene, and David Wood, who’s the chair of the London Futurists group here. So, yeah, it’s been a good response so far.

MJ: So, “Surviving AI” is the nonfiction one. What’s the audience level that you’re looking at with that one?

CC: Well, I’m trying to do the impossible and appeal to everybody. It is definitely designed to be readable by people who are coming into the subject new. So, it doesn’t assume advanced knowledge. But I’ve tried to write it in enough detail that people who’ve been thinking about it for as long as I have will still find things of interest in it.

MJ: What do you think most people think when they hear that there’s the possibility of artificial general intelligence within their lifetimes?

CC: I think when you first say that to most people, they think you’re crazy. And I know that because I’ve been having a conversation about it with everybody I come across really, ever since I read Kurzweil’s book. For the longest time, people’s eyes would glaze over or they’d think I was a bit of a loony. And actually, everything changed last year when Nick Bostrom’s book came out, because that is a very, very rigorous exploration of why we should take seriously that AGI might happen in a small number of decades, by the end of the century or so. Then, of course, Bill Gates and Elon Musk and Stephen Hawking spoke out about it, and suddenly it’s all over the media. So, now it’s something that people are talking about. It’s not something that people dismiss out of hand so much anymore. Now the knee-jerk reaction of most people, I find, is, “Well, I don’t know how seriously to take this, but I don’t think I like it,” and that’s partly because there was sort of a law passed when Bostrom’s book came out that there had to be a picture of the Terminator published with every article about artificial intelligence, and I think hopefully we’re going to get past that soon, because clearly it isn’t always a bad news story. I think it’s entirely natural that when you go public and say, “Look, this thing may happen fairly soon, and it’s a double-edged technology; it’s like nuclear technology: it can do great things for us or it can do terrible things for us,” the media would dive straight for the bad side, because “if it bleeds, it leads.” Nobody wants to hear a good news story as much as they want to read a bad news story. But as I said, I think we’re getting through that, and I think people will now be more inclined to listen to a more nuanced and more balanced telling of the story. So, the reason why I wrote the book, and what I hope people will fairly quickly start thinking, is that this is a very, very powerful technology, and if we get it right, it can do amazingly good things for humans. If we get it wrong, or if we just ignore the potential problem and allow it to become real, it could be disastrous. It really could. I don’t think you can dismiss the really serious downside possibilities. But I think if we apply a good amount of effort to the Friendly AI project, then we can arrive at a really positive outcome.

MJ: So as a writer yourself, you’ve written a fiction story about AI; do you think there’s a tendency among writers, or people making creative work more generally, to go right to those negative scenarios? You’ve already referenced “The Terminator.” There are just so many images in popular culture of robots gone mad or technology gone mad, these types of scenarios happening. That kind of existential risk makes for a really good story, and it certainly seems dominant among the fictional portrayals of this kind of technology. Do you think that’s changing at all?

CC: I don’t know if it’s changing. And you’re right, I think the majority of fictional treatments of superintelligence are probably negative; they treat it as a threat. The reason for that is pretty obvious, and people often say it: you need jeopardy to make a good story. To make a good story, you need a hero, a heroine, a protagonist who gets into trouble and then, by developing their skills and abilities and by going on a personal journey, overcomes the trouble and wins in the end; that’s the typical story arc. So if you have a superintelligence which is all friendly and benign from day one, you’ve not got a terribly exciting story. There are some treatments of superintelligence in fiction which are very sort of warm and fuzzy and positive. The one that probably most people would be familiar with is the “Culture” series of novels by Iain M. Banks, where there is jeopardy, there is danger, there is unpleasantness, and he revels in the unpleasantness, but the AIs, the superintelligences in the Culture, are very positive; they’re the good guys. I think Hollywood seems to have a harder time coming up with an entirely positive superintelligence. The most positive I can think of is “Her,” which is one of my favorite movies about AI, and it’s interesting that the director tried to portray it more as an ordinary romance rather than a story about AI, which I completely reject. But in that—and I’ll avoid the spoiler—the AI is very well disposed toward you, so it can be done. One of the things I find really interesting is how hard it is to write about a superintelligence once it’s arrived. The run-up to it arriving, the threat of it arriving, or the promise, and then the transition is doable—well, I wouldn’t say easily, but it’s doable. But writing about the internal life, or even what a superintelligence looks like from the outside after it’s arrived, is really tough, and for obvious reasons. Ants trying to write about a human would be a big ask. For a human trying to write about a superintelligence that has progressed a long way beyond us in terms of intelligence, that’s a really hard thing to do.

MJ: Yeah, definitely. It would seem like that’s one of the reasons it’s actually called “the singularity,” as well.

CC: Well, exactly that. It gets really hard to see the other side of the event horizon.

MJ: Yeah. Do you think written fiction has an advantage there, that maybe the more visual arts just struggle with?

CC: Yeah, probably, yes. There’s never been a film made of any of Iain M. Banks’ “Culture” series books, despite the fact that they’re hugely popular, and it’s possibly because it’d be a really hard thing to do. So, writing convincingly about superintelligences and making movies about them is really hard. And, in fact, even in that series, I think he cheats a lot. Banks was a fantastic writer, very witty, a very skilled plotter and character writer. But the idea that the superintelligences he writes about would be quite so happy to just spend time hanging around with humans who operate millions of times more slowly than them, I never really buy it. I enjoy it, but I never really buy it.

MJ: One of the things I’ve noticed in a couple of the talks you’ve given, just in talking about the Friendly AI project, is how hard that is to define and how much we, as a species, already have struggled with some of the issues. I think the way you described it was, “How do you tell a machine what it is to be a moral and good individual when we haven’t really seemed to have done that so well as a species?” Do you think that we, as a species, are really ready for this kind of issue?

CC: Well, we kind of have to be; we don’t have a choice. I mean, stepping back a bit, it isn’t certain that we are going to create artificial general intelligence in the next few decades. And if we are going to, we don’t know when it’s going to happen. But it is a distinct possibility. There are a lot of people who are very expert in the space who think it could happen, and there are lots of good reasons for thinking it might happen, so we have to take it seriously, we have to tackle the problem. But you’re right, it’s a really hard problem: trying to draw up a comprehensive and failsafe moral code is really hard when we don’t have one ourselves. We’ve been arguing about what the good life means ever since the Ancient Greeks, and philosophy professors are still around; people studying the philosophy of ethics haven’t gotten to the point where they can give up. We are all contradictory, even within ourselves, about how we arrive at our moral judgements. We are all sometimes consequentialists, or utilitarians, which means that we care about the outcomes of an act rather than about the moral quality of the act itself, and at other times we are deontologists, meaning that we don’t worry too much about the outcome but about the nature of the act itself. We all waver between those two ways of looking at morality. So, trying to draw up a comprehensive moral code for a superintelligence is a hard thing to do. We’re going to have to sort of come at it around a corner somehow. I don’t know how we’re going to do it. Fortunately, it’s not my job. There are very bright people working on the problem, and we’ve probably got a few decades to solve it.

MJ: Yeah. Well, and it seems like your book would fit right into starting that debate and getting people talking about this and thinking about it, so that when it does happen potentially, or if it happens, that people have at least started thinking about that. Was that sort of your intent then in writing the book?

CC: That was absolutely my intent. Absolutely. That was why I started writing “Pandora’s Brain” a long time ago and why I’ve since written “Surviving AI.” The job of helping more and more people get up to speed with the issue is a job that’s going to keep me busy for quite a while. Initially, when I read Kurzweil’s book and started thinking about this, and I was talking to friends and they were just clearly humoring me, and they thought that I was quite entertaining talking about this stuff, but they just didn’t see any reason to take it seriously, at that point I thought I should write something about it just to help Kurzweil and others spread the ideas, get more and more people thinking about it. That job, the job of sort of getting it onto the agenda just as a start point, has really moved on a long way thanks to Nick Bostrom’s book and the responses that other people have had to that. But there’s a long way to go in getting people to be familiar with all the arguments, because it’s a complex set of issues. So, that’s what I’m trying to do.

MJ: Yeah, and it seems like this is going to make explicit some of the ways that we’ve been not-so-nice at treating each other. I mean, Matt and I have talked several times about killer robots, and sex robots, and all these types of issues; it seems like they’re going to bring to the fore all these discussions we haven’t even really had about how we treat each other, and make really explicit things like slavery and all those types of issues. Is that something that you’ve included in “Surviving AI”?

CC: I didn’t cover sex with robots. I did briefly cover killer robots, and my position on that is slightly away from the mainstream. I think the mainstream view among people who think about this is that killer robots should be banned now and forever. I’m not so sure about that, because I think if you’re going to have wars—I mean, you know, let’s not have wars—but if you’re going to have wars, who do you want pulling the trigger? A human, who is very flawed and may well fail to distinguish between a civilian and a combatant? Or an artificial entity, which is much, much better at making that distinction and getting it right? Listen, I’d rather have the decision made right. I’m not so sure that we should always be sticking rigorously to the argument that the moral responsibility has to rest with the human. There’s a whole lot of issues which we’re going to have to deal with well before we get to artificial general intelligence and superintelligence, and I do cover some of them in the book, and I’m sure I’m going to be writing, and lots of other people are going to be writing, about all of those issues as we go forward. There’s issues about how we program self-driving cars to take the decisions which are going to have to be taken. If a child runs into the road and there’s two possible outcomes, either the child dies or the driver dies, the car will have to make that decision, and it’ll have to be preprogrammed to make that decision. Then there’s technological unemployment, the possibility of that. Again, we don’t know whether that’s going to happen, and we don’t know when, but I think it’s quite likely. And if it does happen, we’ll need to be prepared for it, because we’re probably going to need a new economy, a new type of economy. We can’t just, you know, wake up one morning and find, “Wow, more than half the population is unemployed. Right, let’s invent a new economy on the hoof.” We need to be planning for it. So, there’s a lot of thinking to do about a lot of different issues that unfold as AI gets better and better.

MJ: Yeah, technological unemployment is one we’ve talked about quite a bit. I guess I’m curious, it seems to me that that puts sort of front and center the philosophical, kind of moral idea of people’s worth and how the economy should function. That seems like it’s a pretty big conversation on its own. I mean, pulling that out of the whole AI debate, just that alone, once you can have systems that take on a lot of this work potentially… Does it seem like we’re, as a society, having the right discussions around that yet?

CC: I don’t think nearly enough people are thinking about this. And you’re right, it’s a huge issue. In fact, it’s so big, I call it the “Economic Singularity.” We’re familiar with the term “Technological Singularity,” meaning the intelligence explosion, when a superintelligence comes along. But I think the possible implications of technological unemployment are so serious that it’s an economic singularity. There are a lot of people talking about it. There’s the book “The Second Machine Age,” by McAfee and Brynjolfsson, which has had a big impact, as has Martin Ford’s excellent “Rise of the Robots,” and your podcast and other people are doing a great job of trying to get people thinking about it. But most people, the great majority of people, are not thinking about it yet, which is astonishing, because it could be that in 30 or 40 years most people won’t be able to work, through no fault of their own. I have to say that from a European point of view there’s a very interesting way that the debate is unfolding in America—and it is, of course, mainly in America that it’s unfolding, because you guys are so much more interested in and attuned to the latest technological developments. And from a European point of view, the way that Americans struggle over the concept of universal basic income is quite interesting. I spent 30 years in business; I believe that capitalism and the Enlightenment and science and technology have done amazing things for humanity. Now is the best time to be a human, and the way that China has, over the last decade and a half, dragged itself from being a country of massive poverty to a country of great wealth, although, you know, it’s still very unequal, is proof positive that capitalism works. But when you get to a point where AIs can do everything—well, most things—that humans can do by way of work, and you get to a point where more than half of the population is unemployable, you have to have universal basic income. There just isn’t an alternative. I mean, you can’t have half the population starving. And yet it seems that because resistance to socialism is so strongly ingrained in the States, that’s still a discussion which is quite hard to have over there. Here in Europe, we have a more explicit welfare state. I mean, you do actually have one in the States, but it’s less explicit. Here we’ve got a fairly explicit one, and so I think we don’t find that concept so hard to take on board. But it’s interesting watching it from over here, watching the debate that’s happening over there.

MJ: What would you say to someone who’s skeptical of the whole idea of technological unemployment? I guess what I’m thinking is for the people that say technology has always destroyed jobs, it’s always created new jobs, this is just more of that—what would you say to someone who’s skeptical of that?

CC: Well, first of all, I’d say that they are in the majority. I think most people think that. And it certainly is true that we’ve had automation destroying jobs or, you know, replacing jobs ever since the beginning of the industrial revolution. In fact, before then. The earliest I’ve come across it was in the reign of Queen Elizabeth I. So, you know, we’ve got Queen Elizabeth II on the throne here now. Queen Elizabeth I, in around 1600, refused a patent to somebody who had invented a very basic machine to speed up the manufacturing of socks, which was apparently a surprisingly big part of the economy then. She said she wouldn’t give them the patent because it would render some of her subjects unemployed. So, in the broader scheme of things, she was wrong, because what we know is that the industrial revolution, as it gathered speed, did throw individual people out of work, and it was pretty unpleasant for them, but for society as a whole, we got richer and people ended up doing better jobs; more interesting, less dangerous jobs. What I would say to them, though, is that you have to take seriously the possibility that this time it’s different. One way of putting it is that in the mechanization of agriculture in the 20th century, humans found new jobs. So, famously, in around 1900, something like 40% of all Americans worked on the farm, but now it’s down to about 2%. But those humans all got better and more interesting and safer jobs. The horse didn’t do so well. The horse population went from about 2 million down to 100,000 or so, and that’s because what was being replaced was something that the horse couldn’t go on and do somewhere else, because it was just muscle power. Now, maybe this time around we’re like the horse, because now the AI is taking jobs which require cognitive ability, and that’s the unique thing that we bring to jobs. So, to take just one example: if you’re a doctor and most of what you do is diagnosing the illnesses that your patients have and then recommending treatments, and we get to the point where an AI can do that better than you, where do you go? What does that doctor do? Now, I think probably the great majority of people still think that however good an AI gets, there’s always going to be the difficult case that only the doctor can handle. So, you could envisage one outcome where humans who feel very slightly ill, who currently wouldn’t go to a doctor about it, get their AI on their smartphone or in the cloud to diagnose their slight malaise, and they get told, “Eat some spinach,” or, “Don’t eat some spinach,” or maybe it’s “take an aspirin,” or whatever. And so, we all get a great deal more health care, and in fact, health care stops being sick care, which is what it is now, and it becomes health care. So, we could move to a much better place. And the doctors are still there, still doing as much work as they do now, but just dealing only with the hard cases. But I don’t buy that, and I can’t prove this; I don’t think anybody could prove this one way or the other: I think the AIs are quite likely to get to the point where it’s going to be really, really hard for a human to be better than an AI at diagnosing, recommending, and prescribing. I think you’re probably going to see that right across the economy. So it does seem to me that most people will be unemployed.
But to the person who’s a skeptic, I’d say: you’re probably in the majority now, but don’t you think we should take seriously the possibility that you’re wrong? Because if you’re wrong, we need to prepare for that new world.

MJ: Well, and I think Martin Ford, in his book, used the example of the x-ray technicians: that had historically been a well-paying job, and now the machines are actually better than humans at diagnosing things. I think he added the caveat that the combination of human and machine is even better than the machine by itself, but that might not always be true. And that’s just one area. It’s fairly small, but that was a very lucrative job, and you’ve got people that were basing their future careers, their future life, really, around thinking they were going to be doing that, and now that’s gone.

CC: Yeah. I think Martin believes that we still will be able to find work. I do think most of the people who write about this end up concluding that, and I think it’s because the alternative is pretty scary. Actually, I think that there is a big issue beyond UBI, which very few people are addressing, and I’m beginning to work on my next book, which is called “The Economic Singularity,” which is about this whole subject. What worries me is cohesion. I think we’ll get to a point where we’ll find out whether humans will find new work to do, more stimulating and interesting work. Maybe we’ll team up with the machines in the way that a lot of people hope we will, or maybe we’ll find strange new jobs that nobody could imagine now. Maybe we’ll become dream wranglers or something. But if we find out that actually, no, most people are going to be unemployed, then we have to create an economic structure which allows us all to live lives of leisure. I think a lot of people at the moment just think, “Well, okay, if we can have a universal basic income, that kind of takes care of that problem.” But I don’t think it does. I think if that happens, we’d end up with an economy where a minority of people, and quite possibly a very small minority of people, would own everything. They’d own the AI, and because the AI is where the value gets added, they’d own everything. Now, you could say, “Well, actually, there’s a pretty small minority who own almost everything already,” and that’s broadly true. You know, the 1% owns a fantastic amount of global assets. But, at the moment, the rest of us are all involved in the economy; we’re engaged in it, we’re participants. If you have a world in which the minority owns all the AI and everybody else is just living on handouts from that minority, that, I think, is a very, very different situation. And there’s another layer to it, which is that that minority will have access to rapidly improving technologies to enhance themselves cognitively and physically. There’s a danger that the human race will end up splitting into two, or three, or four separate species, almost. I mean, they may still be able to interbreed, so they’ll still be one species, but they’ll be enormously different. Now, there’s a guy called Yuval Harari who wrote a really, really good book called “Sapiens,” which I highly recommend to anybody, and I know he’s working on another book in which he talks about this fracturing of society into two, and he’s pretty brutal about it. At the end of his TED talk, he talks about the two halves of society, or two sections, being the gods and the useless. Now, I wouldn’t be that brutal. But I do worry that we will have this fracturing of society, and I think, again, this is another thing we need to think about. You can’t forecast it, you can’t see in advance what’s going to happen, but what you can do is map out scenarios and say, “If scenario A happens, there’s a solution by doing this. If scenario B happens, there’s a solution by doing this. If scenario C happens, we’re all dead. So we have to avoid scenario C, we have to stop that happening.” That kind of scenario planning I think is well worth doing.

MJ: Yeah. Well, and it seems like science fiction is well-suited to that. I mean, certainly that kind of bifurcation sounds to me a little bit—it’s slightly different—but a little bit like H.G. Wells’ “The Time Machine,” with the Morlocks and the Eloi.

CC: Yeah, no, I think you’re exactly right; that’s a very good metaphor for it. And another one is “Brave New World,” which I reread recently; it’s actually rather a good book, it stands up well over time. Science fiction is great for mapping out possible futures. It’s a really bad idea to read it as a forecast. But as an imagination of possible futures, I think it’s a great tool for us.

MJ: So, where can people find you on the web, if they’re interested in learning more about you as an author or finding your books?

CC: Sure. So, I have a website or a blog, it’s at www.Pandoras-Brain.com. And from there, you can jump off to the book, “Pandora’s Brain,” and to “Surviving AI,” and there’s a whole lot of other blog posts that I write, and you can sign up for my email list if you’re interested in hearing early what I’m doing. And I tweet like crazy, because I read tons of stuff every day and I tweet a lot of the most interesting ones.

MJ: Okay, fantastic. Calum, thanks so much for joining us today.

CC: It’s been a great pleasure.

A: That’s all for this episode of Robot Overlordz. Are you interested in the future and how society is changing? We’d love to have you join our community. Visit our website to learn more and to connect with others that share that interest. You can find us at RobotOverlordz.FM. The site includes all of the show’s old episodes along with complete transcripts, links to more information about the topics and guests in each episode, and our mailing list and forums. We’d also love to hear what you think about the show. You can review us on iTunes or email us.

MJ: I’m [email protected].

MB: And I’m [email protected].

A: We hope to see you again in the future…

MJ: Thanks everyone for listening.

MB: Thanks.

Image Credit: By Steve Jurvetson from Menlo Park, USA (Caught Coding, uploaded by PDTillman) [CC BY 2.0 (http://creativecommons.org/licenses/by/2.0)], via Wikimedia Commons