By Will Clayton from Blackburn, UK (Blade Runner (Oscar Pistorius)) [CC BY 2.0 (http://creativecommons.org/licenses/by/2.0)], via Wikimedia Commons

SPECIAL GUEST: John Danaher. As our technologies advance, they are becoming more and more a part of our very selves. What does this enhancement mean for the future of the human race? Where are these technologies leading society? Will we all be unemployed due to robot workers, or will we in fact become cyborgs, enhanced into superhumans by this amazing tech? Lecturer and Philosopher John Danaher joins us to take a look at the reshaping of humanity. Recorded 2/22/2015.

 

You can download the episode here, or an even higher quality copy here...

 

Mike & Matt's Recommended Reading:

John Danaher's personal website

John's Philosophical Disquisitions blog

Neuroenhancement And The Extended Mind Hypothesis, by John Danaher (Philosophical Disquisitions, 1/17/2015)

Institute For Ethics and Emerging Technologies (IEET) site

John's profile on IEET

 

Transcript:

Alpha: Welcome to another episode of Robot Overlordz, episode #150. On the show, we take a look at how society is changing, everything from pop culture reviews to political commentary, technology trends to social norms, all in about thirty minutes or less, every Tuesday and Thursday.

Mike Johnston: Greetings cyborgs, robots, and natural humans. I’m Mike Johnston.

Matt Bolton: And I’m Matt Bolton.

MJ: Joining us on this episode of Robot Overlordz is John Danaher.

John Danaher: Hello.

MJ: John, thanks for joining us.

JD: No problem. My pleasure.

MJ: For the benefit of our listeners, could you tell us a little bit about your background?

JD: I’m a lecturer at an Irish university. I teach in a law school but my research interests are primarily in the philosophy and ethics of emerging technologies and a variety of other things; I have a fairly eclectic set of research interests.

MJ: What kinds of things are you interested in in general?

JD: In terms of technology, there are three main things I’m interested in at the moment. One is human enhancement technologies and the ethical and legal debate associated with them. Another is mind reading technologies, or brain-based lie detection and their use in courts. I’m also now developing an interest in artificial intelligence and robotics, and the social and ethical implications of those things.

MJ: It seems like all three of those are really on the cusp of transforming society quite a bit. 

JD: Yeah. I think the one that I’m most interested in and excited about at the moment is robotics and artificial intelligence. I think the pace of technological development in those areas is more rapid than people appreciate and that it will have fairly transformative effects on society in the relatively near future. Part of my interest as well is how human enhancement technologies relate to concerns that people have about robotics and the development of them. For example, people are concerned about things like technological unemployment and the effect that will have on the future of work or on human fulfillment. I’m interested in the way that human enhancement technology may be able to countermand or counteract some of those negative implications.

MJ: We talked a while back with Martin Ford, who wrote The Lights in the Tunnel, about technological unemployment. That book came out in 2009, and he’s got a new book coming out this year, Rise of the Robots. It seems like these ideas are starting to get out there a little bit more, at least from what I’ve seen on some of the sites that I read and that Matt may read as well. Do you think that we’re beginning to consider these issues a little bit more deeply than we have, or is this just something confined to the more futurist or technology-oriented communities?

JD: I don’t think it’s confined to those communities. The concerns about automation and technological unemployment have broken through to the mainstream to some extent. There’s the book The Second Machine Age as well, by Erik Brynjolfsson and Andrew McAfee from MIT, looking at the future of the economy and automation. You also find people like Paul Krugman writing about it in the New York Times in some of his opinion pieces, and Lawrence Summers wrote about it last year.

MJ: Do you think that we’re already seeing the effects of technological unemployment or is this something that people are maybe jumping the gun on?

JD: I’m not an expert on the changes in the economy or on whether we have evidence for it at the moment; I can only repeat what other people have claimed. McAfee and Brynjolfsson would argue that there’s at least some evidence for technological unemployment. The fact that the recovery in America in particular has been a more jobless recovery than previous ones--or at least that it took a longer time for the rate of unemployment to fall while productivity within the broader economy was increasing--is taken by some as evidence that human workers are more disposable and less needed nowadays for increases in productivity. Human capital is less essential to productivity, so there might be some initial evidence for this. You can also point to lots of anecdotal examples, like the Chinese restaurants that are now staffed by robot waiters, which are spreading across that country and may possibly come to Western countries in the near future.

MB: If you look back over history, there’s always been a desire to replace human workers or to speed them up--like the cotton gin. There have been a lot of inventions over the years that have replaced humans. Do you think that people are just getting replaced for certain jobs? Is it really going to happen on that massive a scale, or is it a shift where people are going to have to move from low-tech jobs to finding higher-tech jobs?

JD: That’s kind of the mainstream view of what’s going to happen. They refer to the luddite fallacy: the belief that the displacement of human work by machines is going to lead to mass structural unemployment. Some people view that as a fallacy because the luddites were arguing about this 200 years ago and yet it didn’t lead to a massive decline in human employment; people found other kinds of jobs. It may have led to a decline in employment in certain industries over a short period of time or over certain generations, but future generations recovered by finding new ways to employ themselves. I think there are some reasons to believe that the luddite fallacy argument is incorrect about the next wave of technological unemployment, the so-called “Second Machine Age” that Brynjolfsson and McAfee talk about in their book. There are a few reasons for this. One is that the actual extent of human employment might be overestimated--or the extent to which people have been able to find new jobs might be overestimated--given that there are lots of people in contemporary economies who are unemployed but just don’t show up in unemployment statistics, because people are spending much longer in education to try and find a job and lots more people have dropped out of looking for work entirely. There’s also the fact that the pace of technological change might be faster than it was in the past, and it might be more difficult for people to retrain and find jobs if technology is improving at a rate faster than the human ability to retrain and upskill. Finally, there’s an endpoint to the ways in which humans can retrain or upskill themselves: if we can create androids that are perfectly able to replicate human labor, then there’s pretty much no place for us to go anymore, no new industry for us to find, if robots are capable of doing all of that work.

MB: My only argument with that is if you replace all of the humans with robots, if I have a factory that’s cranking out something and the whole factory is staffed by robots, there’s going to be nobody to buy whatever it is I’m manufacturing because nobody is going to be able to have a job or work. So, to me, it seems like a self-correcting problem--robots aren’t going to go out and buy whatever it is you’re producing, it has to be humans--so, I would think that it could only go so far before you reach the point where people just can’t afford to buy anything if no one has a job.

JD: That’s kind of the classic Marxist perspective on it as well, that this is just another contradiction or tension in capitalism that’s bound to lead to some kind of crisis or crash point, and that could well happen.  The other counterargument to that though is reforms in the ways in which income is distributed in society might allow for more technological unemployment. A reform that’s actually touted by lots of futurists, the basic income guarantee, might be a way to actually facilitate more technological unemployment while at the same time allowing for the distribution of income that keeps up the demand for the different types of labor and output.

MJ: You mentioned that one of the other areas you’re interested in is human enhancement. As humans are going to have to compete with robots, what kinds of things are you looking at that would bring your average human up to a level at least on par with some of the capabilities of robots or artificial intelligence, or maybe even beyond them? It seems to me that a couple of articles have been written about how the pairing of artificial intelligence and human intelligence is more effective than either alone.

JD: Yeah, there are lots of examples of that which people point to. So, merging ourselves with technology or forming much closer partnerships with it so that it forms part of us or part of who we are might be a way to allow for improved technologically-driven productivity while maintaining human relevance in different spheres of life. The example that’s often used in discussions is about chess playing computers and how, from the late ‘90s onwards, chess playing computers were vastly superior to humans and the top chess playing computer could beat the top human pretty easily, but that the best chess of all was being played by human-machine partnerships, so called “freestyle chess.” That’s something that’s discussed in Brynjolfsson and McAfee’s book and also Tyler Cowen’s recent book about “moving beyond the great stagnation”--he points to that as a possible future for the economy, where humans form much tighter partnerships with machines and that’s a way of maintaining their economic relevance in the future. But not just their economic relevance but also their social relevance, and it also might genuinely improve human life and experience in different ways.

MJ: You have an article on your website, “Neuroenhancement and the Extended Mind Hypothesis,” which is about how these technologies intersect with actual human beings.

JD: This is a more philosophical perspective on the relationship between humans and technologies. There’s a view that’s prevalent among some philosophers that technologies, ranging from the most primitive types, like writing, to the most up-to-date kinds, like digital assistants and smart watches and augmented reality eyewear, are ways in which humans extend their minds into the environment around them--so much so that the partnership between a human and a technology is not a human interfacing with technology; the two things together are the human. It might be a difficult idea to understand. The human mind actually spreads into the environment around it; it’s not confined to the skull, to the boundary of skin and skull, as some people put it. So yeah, a smartphone, for example, that keeps a record of your daily appointments or of all the different conversations you’ve had with people--that smartphone is not just a device that you use to enhance your knowledge or your recall. That device is actually part of your mind, if that makes any sense.

MJ: I think it does, and this is something Matt and I have talked about a lot, particularly about smartphones. I hit a point in my own ownership of a smartphone where I suddenly became really uncomfortable when other people were holding it, because to a certain extent it did feel like a piece of me: I have a lot of private information on there now, it’s got access to all of my different accounts; there’s a lot in there that I consider mine, and I think that opens up some interesting areas for consideration. If you regard it as an external device, it’s not a big deal if the police have access to it, or if someone else picks it up and looks at it, or even if it’s lost. And yet these devices have become more integrated into our lives, and more and more of us is on them--Matt and I were talking, actually, before we connected with you, about not being able to remember people’s phone numbers anymore, and how if you’re separated from your phone, you even lose that means of communication with other people.

JD: Absolutely. This idea in philosophy was originally propounded by two philosophers called David Chalmers and Andy Clark back in the late ‘90s. They use an example that might be more intuitive for people because it deals with somebody who has a mental impairment and how they use technology to supplement that mental impairment. In that kind of case, you can see more clearly the ethical implications of the mind being extended into some kind of technological artifact. Their example was a contrast between two people, one called Otto and one called Inga, so this is kind of a philosophical thought experiment, it doesn’t involve real people but it illustrates the point. Otto has some sort of amnesia or early stages of dementia or something like that, and can’t remember various facts that he needs in his daily life. One day he needs to go to the Museum of Modern Art in New York to go to an exhibition, but he can’t remember the address. But fortunately he keeps all of this information in a notebook, so he just flips to the relevant page of the notebook and finds the address and goes to the exhibition. He’s to be contrasted with Inga, who’s just a regular human being who remembers this information because it’s stored somewhere in her brain and can just bring it into conscious recall immediately whenever she needs it. Chalmers and Clark’s suggestion is that both individuals are remembering, both individuals can be said to believe that the Museum of Modern Art is wherever it is and that interfering with Otto’s notebook, taking it away and ripping the page out, would be just as much of an ethical violation as reaching into Inga’s brain and ripping out the relevant neurocircuit where she can remember that information. When we’re dealing with people who have some kind of mental impairment, I think it’s easier to see why interfering with the technological artifact is ethically problematic. 
But when we’re dealing with someone like you who doesn’t have any mental impairment but has lots of contacts and personal information stored on their phone, we might not be able to see the ethical implications as clearly but I think by analogy they should carry over to that case.

MJ: I have to say, there was that point in my relationship with my own phone that it crossed the line into creepy when other people were handling it, and I had this argument with another friend of ours about that, he thought I was being silly. Maybe six months or a year later, finally he hit that point himself. I just thought that was funny, that it is a lot easier to see when you talk about people who have a physical cause for why they would use some type of enhancement than maybe some people that don’t have that. But to me, it seems like they really should be the same thing; I don’t understand the argument for confining those only to people that have a condition or a disability. If they enhance your life, it seems like that should be a win/win for everyone.

JD: There’s a danger, to some extent, when we use the term “enhancement” to refer to a technology, that we prejudge the kinds of arguments we’d make about it. As soon as it’s an enhancement, as you say, “It’s an improvement, so why would anybody object to it?” There’s a whole segment of philosophy that debates these issues about the definition of “enhancement.” But oftentimes people just prefer to talk about specific technological examples and how they augment certain capacities without judging whether they’re an enhancement overall. So people will look at things like smart drugs and how they improve concentration, and then they’ll ask, “Should we allow the use of those things?” They could have other negative implications, so there’s a richer debate to be had about whether they count as an enhancement. But I would agree with you: the distinction that some people draw between an enhancement and a treatment doesn’t seem philosophically sustainable to me, just because it’s very difficult to know what’s normal cognitive function, what’s sub-normal, and what’s above normal. So I think we should focus on the nature of the technology and the kinds of improvements it brings about, and not worry about whether it’s a treatment or an enhancement.

MJ: Well, take someone like, for example, “the blade runner”--the guys who are missing their lower legs but have those artificial legs engineered like cheetah legs. I have read the argument that that is somewhat cheating, but at the same time I don’t personally see any reason why they can’t compete. I think they segment out those competitions, or they make an argument that it’s not a fair race--a person on human legs versus those types of engineered legs. Yet it seems to me: why not have them run together?

JD: Obviously in the last Olympics they did allow Oscar Pistorius to run in the 400 meters. This was before he was charged with murdering his girlfriend, so we won’t know what the future will be until the next athlete comes along who has these bladed legs and can perform to the same ability as Oscar Pistorius. But I think the sports case is not that interesting, insofar as sports are, to an extent, arbitrary. It’s up to the authorities within a given sport to decide what the rules are going to be and what the nature of performance within the sport is going to be, and I think it is possibly legitimate for them to say that there is a type of running that is limited to human limbs. I think as well that as soon as artificial limbs actually improve performance to an extent that is vastly superior to human limbs, they won’t allow the use of those artificial legs in traditional track racing anymore. It’ll become an entirely new event and an entirely new sport. At the moment, they segment it out into the Paralympics and the regular Olympics, but with Oscar Pistorius there was some carryover from one to the other.

MJ: I see it elsewhere as well. I work in IT, and that involves a lot of certifications and things like that. When you go in to take a certification test, they cut you off from the internet, you’re not allowed to talk to anybody else in the testing environment, you’re not allowed to have notes or anything like that, and you’re just supposed to take this test. Almost always it’s a very contrived situation--for a Microsoft certification, they’re testing you on the Microsoft way of doing things. There’s some argument that maybe it’s worth knowing that stuff, but at the same time, in what I actually do in my job, I really would not be very good if it weren’t for the internet or being able to ask questions of other people who know things. I use the internet in a very enhancing way in order to do my job. It’s very frustrating to walk into those certification situations and have all of the tools that you would actually use to do this work suddenly cut off. Similar to how you were saying that sports are very managed and there are a lot of rules around them, it seems like the certifications are the same way, and I think about how many of the obstacles set for you in life are those contrived situations versus what you would actually face in a day-to-day situation.

JD: Education in general is an area that’s struggling with this question now. I teach in a university where we still have these traditional exams where people go in for two hours and they’re not allowed to look at a phone or have any internet connection, they have to answer questions based purely on what they remember. I teach law in particular, so we get students to remember lots of legal rules and cases and use them to analyze particular problems that they’re presented with in this two hour block of time, but it’s a very artificial set up and it’s not in any way close to what it will be like to practice as a lawyer. As a practicing lawyer, you will have access to all these devices and you’ll be able to search case databases online whenever you want. So, it’s questionable how useful the traditional methods of educational assessment are in light of these technological changes that we’re currently undergoing.

MJ: It seems like there’s a disconnect between the methods that we’re using to either measure or teach humans how to compete out in the world and yet all of these technologies, to tie it back to the technological unemployment issue--people are going to exist in the world with these enhancement technologies. It seems like a lot of the areas in life, whether it be the legal system itself or the education system, they’re still not caught up to the actual skillsets that they should either be measuring or teaching.

JD: That’s a longstanding problem with human institutions in general: they are slow to adapt to technological changes and are nearly always playing catch-up. You see the same in law in particular; the law is nearly always reactionary, in that it reacts to technological changes after the event and corrects problems that have arisen. It doesn’t do much in the way of anticipation, or try to change in line with technologies as they are emerging. This is a problem that will no doubt continue as long as the pace of technological change is accelerating.

MJ: You’ve worked with the Institute for Ethics and Emerging Technologies, correct?

JD: Yeah. I’ve never met any of the people involved in person; I just write things for them and they publish them.

MJ: How did you get together with them?

JD: They contacted me. I’ve been writing a blog for a long time, Philosophical Disquisitions, and I’ve always had an interest in enhancement technologies in particular, and in brain-based lie detection; those were my early research interests. I was writing about them on my blog for years, and Kris Notaro, who’s the managing director of the IEET, saw my work and wanted to know if he could republish it. I said they were more than welcome to do so, and they made me an affiliate scholar of the institute at the same time. They’ve been looking for people to provide them with interesting, thought-provoking material--not that I claim my own material is thought-provoking, but maybe other people find it to be. So that’s how that happened. That happens a lot, I find, with the internet. Humanity+ as well has taken up some of the material that I write and publishes it in their magazine, which you might be familiar with, H+ Magazine.

MJ: John, thanks so much for joining us today.

JD: Thank you very much for inviting me on the show.

MB: Thanks.

A: That’s all for this episode of Robot Overlordz. Are you interested in the future and how society is changing? We’d love to have you join our community. Visit our website to learn more and to connect with others that share that interest. You can find us at RobotOverlordz.FM. The site includes all of the show’s old episodes along with complete transcripts, links to more information about the topics and guests in each episode, and our mailing list and forums. We’d also love to hear what you think about the show. You can review us on iTunes or email us.

MJ: I’m Mike Johnston.

MB: And I’m Matt Bolton.

A: We hope to see you again in the future…

MJ: Thanks everyone for listening.

MB: Thanks.

 
