
SPECIAL GUEST: John C. Havens (Hacking Happiness). What is happiness? Can it be quantified? Can it be tracked? Can we increase our happiness? Our guest on this episode is an author who has written about just that: ways to use the methods and technologies of QUANTIFIED SELF to increase our wellbeing. In this episode we dig into wearables, health tracking, happiness and the pursuit of it, Facebook, ethics, and technology. Recorded 8/13/2015.

 

You can download the episode here, or an even higher quality copy here...

 

Mike & Matt's Recommended Reading:

John C. Havens website

John on Twitter

John's Blog

Hacking H(app)iness, by John C. Havens

More coming soon...

 

Transcript:

Alpha: Welcome to another episode of Robot Overlordz, episode #197. On the show we take a look at how society is changing, everything from pop culture reviews to political commentary, technology trends to social norms, all in about thirty minutes or less, every Tuesday and Thursday.

Mike Johnston: Greetings cyborgs, robots, and natural humans. I’m Mike Johnston.

Matt Bolton: And I’m Matt Bolton.

MJ: And joining us on this episode is John Havens. John, thanks for joining us.

John Havens: Thank you for having me.

MJ: As you know, our standard question for guests on the show is: could you tell people who maybe aren’t familiar with you a little bit about your background and kind of where you come from?

JH: Sure. I’m Nicolas Cage, I was going to be Superman… Wait a second, no, that was a past episode. Um… My name is John Havens, and I often use my middle initial, which is C, for Charles, and I did it before Michael J. Fox, just so you know. I moved to New York City in 1991 and I was a professional actor for about 15 years; I had principal roles on Broadway, TV, and film, and in 2005 I got into social media. As I told you before the show, I worked for About.com; I was their first guide to podcasting. I fell in love with the idea of being able to tell stories in easy ways, at least for me, in terms of podcasting, like you guys have done now for so long, being able to share your voice without having to have CBS or NBC sponsor it. Then I was EVP of social media at a big PR firm for a while. And then about four years ago my latest work was inspired when my dad passed away. He was a psychiatrist, and a lot of what I’m doing now is taking my love of technology and trying to help people increase their well-being. So, that’s my story.

MJ: Well, and you have a TEDx talk as well, right? From TEDxIndianapolis?

JH: I do. I’m also a consultant and speaker, as well as an author. I have a book that’s out now called “Hacking H(app)iness” and one that comes out in February. And speaking-wise, yeah, I did a TEDx and I’ve spoken at a lot of great places around the world in the past number of years.

MJ: Matt and I have both watched your TEDx. One of the things that jumped out at me is you mentioned the quantified self movement. How does that relate to your book, “Hacking H(app)iness”?

JH: Yeah, well I think, like I mentioned my dad—taking a measure of your life is something my dad helped people do for over 40 years. In his case, it was toe-to-toe, eye-to-eye, asking people the tough questions about their lives in a professional therapeutic situation. Quantified self, with all the different gadgets and wearable devices that you can have, really gives insights where there never have been insights before. And there’s everything from passive data collection, which I talked about quite a bit in the TEDx talk, in terms of a heart monitor or measuring sweat, something you wouldn’t normally be able to measure subjectively, meaning as a person sort of just sitting there counting your heartbeats. So that physiological data, combined with the subjective data, meaning where you might journal something: those two things combined are, I think, where there’s some real magic with quantified self. And on tracking: I talk about quantifying happiness or tracking happiness, and people understandably say, “Are you going to take away the power or the potency of true happiness by tracking it?” I remind them that tracking is a temporary, for me at least finite, thing where you’re taking a measure of something to find insights that then can let you take actions that will increase well-being.

MJ: That’s something I find really interesting. I’ve been doing self-tracking for a while; I have a Withings scale I step on in the morning and it records my weight. The individual numbers aren’t as interesting, I think, or really not at all, compared to the trend line, and it’s been a pretty useful tool as far as getting a little healthier and things like that. That seems to be the health angle that’s pitched by some of these companies in that space, wouldn’t you agree?

JH: Oh yeah, and health is a big thing that most people track. But I actually first fell in love with quantified self when I went to a New York City quantified self meeting, which is one of the biggest meetings, and a lot of people don’t realize that QUANTIFIED SELF is actually the name of the movement that led to the organization, a non-profit I think it is, or the site that helps people with this type of stuff. And the movement was really created by Kevin Kelly and Gary Wolf of WIRED magazine. So I went to the New York City meetup where a woman shared her take on quantified self: she lived in Paris for a year, and at the same time every morning she would go and take a photograph of the exact same corner outside where she lived. And it was quite gorgeous to see how she had a quantified corner, I think is what she called it, seeing at 7:03AM in that part of Paris how the physicality, the lighting, the people, the seasons really told a story about that one physical space. So health is certainly a wonderful thing for quantified self, and it’s where the majority of the quantifying happens. But I’ve seen a lot of projects like that one, and things like quantifying happiness, where people don’t really understand how you would do it. And the point that I always tell people is: look, find some area of your life that you want to change or improve. As Americans especially, our focus is usually, “I’m going to lose 40 pounds in three weeks!” and I’m like, that’s not the quantified self movement, because you’re already putting too much onus on a result. Whereas the real quantified selfers, the people that I really respect, will say, “Well, I’m going to take this area of my life and measure all these different things. If it’s health, for instance: sleep, and what I eat, and sex, and relationships. And just take time and breathe, take a step back, and start to see what insights filter in on a much bigger scale.” And then the quantifying is a way of saying, “These are insights; I think this area over here is causing me stress.” Then you can take actions, change those actions, and then recalibrate and say, “Now is my stress lower because I’ve stopped doing X?” So there’s a much bigger lens sometimes. I mean, don’t get me wrong, the apps are great too, but when people get in the habit and the practice of that bigger lens, it can be very helpful.
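
For readers who want to see what “the trend line, not the individual numbers” means mechanically, here is a minimal sketch in Python with made-up weights, not anything from an actual Withings scale; the same shape works for any daily metric in the measure-then-recalibrate loop John describes.

```python
# A minimal sketch (hypothetical numbers, not the Withings API) of why
# the trend line beats any single reading: a seven-day moving average
# smooths daily noise so the direction of change becomes visible.

daily_weights = [83.4, 83.9, 83.1, 83.6, 82.8, 83.2, 82.9,
                 82.7, 83.0, 82.4, 82.6, 82.1, 82.3, 81.9]  # kg, days 1-14

def moving_average(values, window=7):
    """Return the average of each full `window`-day span."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

trend = moving_average(daily_weights)
print(f"first 7-day average: {trend[0]:.1f} kg")
print(f"last 7-day average:  {trend[-1]:.1f} kg")
direction = "down" if trend[-1] < trend[0] else "up"
print(f"trend is {direction} {abs(trend[-1] - trend[0]):.1f} kg over the period")
```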

MJ: Yeah. Well, and I guess that’s one of the things that I found interesting about your TEDx talk: that stepping back and looking at all of the data rather than the specific things. Like, for example, in my Withings app, looking at my weight, that’s just one thing. But stepping back and looking at, like you said, sleep, and maybe tracking that somewhere else, and just how you’re feeling, whether you use a journal or something like that. That process of standing back and evaluating it, is that really what you’re talking about in your book, vs. the individual companies and apps in the quantified self space, or the health space, focusing on the individual numbers?

JH: Well, I think they both have their place and it’s kind of a complementary approach. In the current book, “Hacking H(app)iness,” the subtitle is “Why your personal data counts and how tracking it can change the world,” and there are a couple of levels. I talk a lot about positive psychology in the book, which is an empirical science that builds on traditional psychoanalysis. So like my dad, he was versed in Freud and Jung and all the psychological greats, as it were, which I for some reason just pictured like a band. [laughs] I pictured KISS. I have no idea why Gene Simmons would be in the lexicon of famous therapists. Anyway, that was random, sorry. The logic with traditional psychoanalysis, at least as a big-picture idea, is that we as individuals are sort of like a vase filled with water, and that water constitutes a sort of pain from the life experience we’ve had, and with a good therapist or trusted psychologist or psychiatrist, we can work through that pain and essentially pour it out, which is a good thing. Then positive psychology, which has now been around for about 16 years, says, well, in that void that now exists, why can we not then, as it were, pour positive affect in there? And the whole logic is that we know—you mentioned it with your scale—that if you eat healthier and you exercise more, you’ll improve your physical well-being. But what’s a new idea, especially, I think—not to pick on Americans, but we can be this way—is this logic that if you take actions like gratitude, altruism, finding what is called your “flow,” which is sort of the work you feel you’re built to do that brings you purpose, when you take these actions, you actually don’t—and this is one thing I love about positive psychology—you don’t have to like it. I don’t preach, “Okay, everyone, grab hands! Time for unicorns!” And not to knock that, if it helps people. But I’m a skeptic a lot of times, and when someone’s like, “Do a gratitude journal!” the first time I heard that, I was like, “That just sounds like bullshit!” you know? But it’s okay. Once you do the actions and you track it, there are multiple longitudinal studies from around the world, men/women, multiple cultures, showing that doing something like practicing gratitude changes the physical aspect of your body. There’s oxytocin that goes off in your brain; your physiology reacts in an objective way when you take these actions, so in one sense you are literally quantifying happiness. But I can’t tell either of you how to be happy from a standpoint of mood. I can guess, because we’re all geeks, and I can say, “Watch Battlestar Galactica,” whatever. But with the science of well-being, I can say, “Well, practice gratitude, altruism, etc.” and I can start to measure and show back to you, “Look, your whole emotional/mental well-being is increasing on a regular basis.”

MJ: Do you think our attitudes around mental health are shifting in general? It seems like there’s at least an awareness that there’s been a stigma around mental health and talking about some of these things. It sounds like, both from your background and some of the things you mentioned, like positive psychology, that there’s at least somewhat of a shift underway in our approach to that.

JH: Yeah, I hope so. The H(app)athon Project, a non-profit foundation that I created, has been working with a wonderful woman named Barbara Van Dahlen. She has an organization called Give an Hour, which helps returning veterans deal with things like PTSD, and she has a campaign that I’ve been supporting, which helps people identify five signs of mental illness, which is really powerful. I think it’s one in three people in the States who suffer from some form of mental illness. And by “illness,” I don’t mean a formal diagnosis, say, autism or schizophrenia, but something like a depression. And depression especially has seen an overt increase that’s very easy to correlate to things like increased healthcare costs and increased alcoholism. So if you have any tool that brings someone relief, that’s something—it’s very exciting, with positive psychology, to see how it helps people who are suffering from some pretty severe stuff. And yes, to your point, I think now there are a lot more people… I’m blanking on the name of the young woman who’s a singer who went through some serious depression. She was a host on X Factor for a while, and she’s been on tour talking about suicide and depression. So the stigma I think is lifting, to your point, which I think is fantastic.

MJ: Yeah. Thinking about some of these apps and social networking and things, last year, actually in July 2014, Matt and I did an episode on the Facebook experiment, where they were trying to manipulate people’s moods and things like that. The timeframe they were talking about, when that was reported on, was actually one where personally I went through some stuff that Matt knows about. And I found that whole story really disturbing. Do you see some downside in the fact that so much of the stuff out there now is app-ified and gamified, and that some of these corporations in the big data space or the social networking space are maybe less than fully responsible stewards of some of that information?

JH: Oh, yeah. There’s the question of whether a person feels the methodology of quantifying their life makes sense, and I think that’s a personal preference. The Facebook situation—I’m actually working on a piece right now about artificial intelligence and ethics. With the Facebook experiment, compare how academics have to go about researching things like emotion or depression: they have to go through what’s called an IRB, an Institutional Review Board. And that’s well-established—it’s kind of like the Bar for lawyers—where if a researcher wants to do a certain type of research, they go to a university and people who are well-versed in ethics review it and say, “When you want to do research on this group of people,” say, like with Facebook, people suffering from depression, “how are you going to do it?” And the participants explicitly have to know they’re part of the research and give permission, or their caregivers have to do that for them. So with Facebook—there’s a great piece by a guy named Jake Metcalf about A/B testing, if you want to read it, which is fantastic. There’s a lot of discussion right now about what consent is, and with Facebook the tricky thing is—and this is the majority of the internet economy—we’ve all given consent to Google or Facebook to do, carte blanche, whatever they want. So the thing with the Facebook experiment is that it was utterly unethical when compared to any standardized testing that’s done by any part of academia. And so Facebook defending itself, like, “Well, you know, people signed up for Facebook,” means nothing. It means nothing, in the sense of… I mean, it’s later at night and I’ve had a couple of beers. They’re absolutely full of shit. That’s all there is to it. Because the point is—another quick “for instance”: Facebook just filed for a new patent or a copyright, and it was in the Wall Street Journal a couple of days ago, where in writing they said, “We’re going to use a tool for this and this and this,” and the fourth part of the tool was: on Facebook, we can now target an individual to see not only whether their credit score would let them get a loan, but we can examine their entire friend network to see if they would be good loan risks. And this is pretty common amongst data brokers and credit bureaus, etc., but none of it happens with your knowledge. So there’s the depression side—and by the way, I’m sorry you went through whatever you went through—but now there’s the question of, within Facebook, an Oculus Rift world, like the show Black Mirror, which I love: augmented reality lenses over our eyes and something over our ears, and we’re immersed in a virtual environment, an Oculus-Rift-owned-by-Facebook kind of world… I brought that example up because there very easily could be a simple system to let anybody know when anyone else in that virtual world was a credit risk. So you might see someone’s face and it’s shrouded in a red haze, and that means they’re a credit risk. And for whatever reason, you wouldn’t give them Bitcoin in a game, and then in real life you’d feel weird around them and not want to be with them because they might ask you for money. Just to be clear: that is the world we live in, end of story, hard stop.
So to say things like, “Well, some of the companies are doing this…” No, that’s the way our personal data is simply treated and we’ve lost, we’ve given away our rights, end of story.

MB: It always seems amazing to me that… I mean, we literally give away almost all of our personal information. I see people on Facebook, “Which Beatle are you?” or whatever, and they answer all these weird questions and then it spits out some random answer, “you’re John Lennon,” or whatever, I’m just making stuff up. But we go through all this stuff, and all these companies have all this data on us, and yet we want our privacy. We bitch and complain that we have no privacy, and we want all this privacy, and there’s settings and all this stuff. But yet people just freely give away all of this information through apps and everything. These corporations can literally know everything there is to know about you just based on the stuff that you freely give them.

JH: Oh, yeah. That’s the tricky thing, is like… What’s the term, it’s a legal term… adhesion, a contract of adhesion, where the whole agreement of the internet economy that exists now—and it didn’t necessarily start this way, but this is where it is now—is “Use this service or don’t.” That’s the only choice you have, right? “If you don’t like it, leave the service.” But the reason that’s also bullshit is that Facebook, for instance, launched facial recognition technology many years ago, and by that I mean not the augmented reality off-web screen version. But they launched it so that if I’m on Facebook and I’m a Facebook user—so let’s say, “Okay, I’m cool with Facebook, I love Zuckerberg, yay.” Fine. I take my picture and you guys are in my picture at a family friend’s picnic. You guys don’t even own a phone, for instance. Now, with facial recognition, the two boxes that go around your faces, the system lets anyone else identify you and go, like, “Oh yeah, that’s John and Mike, and there’s the other two friends,” and they write all that stuff in. So you guys aren’t even on Facebook, but now your faces have been identified because of the tool. So this is the thing that I’m always mystified by: people are like, “Well, maybe Facebook didn’t realize that when they launched it.” Are you fucking kidding me? [laughs] Like, that many smart people… “Oh, Google with Street View, it was the mistake of one engineer. We’ve harvested unencrypted Wi-Fi data around the planet for months. And Bob, he was getting coffee, and Bob, you know…” I understand that I’m a—I try not to use the term “privacy advocate,” because I’m more of a control freak. I joke about that because I’m not here to tell anybody else how they should live their lives. But I am here to say that if there is not a framework of control, then what you call privacy or what I call privacy, none of it makes any difference, because it’s gone. And right now there is no framework to control our identity. And it’s not just the sense of the evil data brokers. I talk about this a lot, and I have this new book called “Heartificial Intelligence,” because I love the puns. And the uncanny valley, which you guys mentioned in one of your episodes—there’s this idea of what’s called the uncanny valley of advertising that we’re in right now. Gone are the days when you’re on Facebook going, “Why the hell am I getting an ad for Cap’n Crunch? I haven’t eaten Cap’n Crunch in 15 years,” and then, “Oh, I mentioned Cap’n Crunch to a friend in Gmail and posted a picture a couple of days ago. Oh, so now they’re connected.” Now we’re at the point where you click on something and you’re like, “Huh, you know, I guess I do want a Hot Pocket,” and that’s because at some point, somehow, you gave out information that said at 3:30 you do have a penchant for this type of stuff. And it’s getting so nuanced that there’s that sense of the uncanny, like, “I’m not sure if this was my preference anymore.” I think it was in VentureBeat a couple of days ago: Facebook and maybe IBM are now saying they can predict what you want before you want it with a pretty high level of accuracy. So in terms of data, why it’s important to control it: it’s not just because we’re being manipulated, it’s because we lose our sense of self.
And this is my dad speaking, but when you have hundreds of companies all coming at you with personalization algorithms, they’ve all defined various fractals of your identity, and they all have a bias. Typically they want you to buy stuff. So hearing, in one way or another, dozens of disparate voices that you aren’t really sure are there: that’s called schizophrenia. That’s what that is. So you need a locus of control for that, and that’s what the personal cloud and all the stuff I talk about is. That’s your foot in the door to sanity.

MB: I don’t know where these people…people have to know, I mean, every couple of months Facebook—”We’re worth billions of dollars,” and nobody in the world puts a credit card into Facebook, so you’re not paying for anything. I’ve never spent one dime on Facebook, but yet they make money hand over fist, so it’s obviously… What’s the old saying, “If you’re not the customer, you’re the product,” or whatever?

JH: Yep. By the way, I’m sure your listeners are going to be like, “Wow, this guy is… Phew, he’s a positive guy.” I should say: the reason this matters is that if we control our data, then, whether it’s Facebook or any other brand, there’s also a way to extend our sense of self so that individuals can increase their well-being. What I mean by that is, if we’re at the point where personalization algorithms know us better than we know ourselves in some ways, then it’s going to be that much harder for us to be able to say, “What is it that makes me happy?” So once the control of the data takes place, then whether it’s positive psychology or whatever else, the reason I’m telling people to track their actions is that devices and internet-of-things sensors track us all the time and, in essence, they’re telling us what our values are, meaning they’re measuring them. “You like this certain type of food,” or “you like to run; you like this certain type of music.” But once they start suggesting stuff back, there’s a threshold that gets crossed where, like I said before, you aren’t really sure where you end and the algorithms begin. So the control of the data happens, but then the user can say, “Well, I want to understand what my values are,” and, without any technology, take a real measure of that. In my new book, I have a values-tracking survey, which is free and all that stuff, but it’s based on a lot of psychological research that says if you don’t live to your values every day, your happiness decreases. The survey is a 21-day thing where there are 12 values, like health and family, that are common around the world. You first say, on a scale of 1 to 10, “Which of these values is most important to me right now?” and then at the end of every day for three weeks, you say, “Did I live to these values?” So for instance, if you said nature was really important to you when you started, but you don’t go outside at all for three weeks, so nature is like a two, then that’s a really good opportunity for quantifying, to say, “Well, either nature isn’t really a value of mine, even though I thought it was.” So you can kind of shift and go, okay, that’s interesting. Or you can say, “This is why I’m not happy. It’s not about my boss, it’s not about my wife, it’s not about Facebook. I say that these are my values and I’m not living to them.” And the insights that you can get there… For me, I did health; I saw that health was something that was out of alignment. And since I did the survey last year, I’ve lost about 32 pounds, because I want to be a man of integrity; I want to say, “If these are my values, I’m going to live to them.” So, I should say I really do want my message to be one that’s not just a skeptical dystopian dude, because the whole point of all this is to enlighten people and say: the data that’s being taken from you, especially when you put augmented reality over your eyes or you’re immersed in a virtual world, you will start to see what it is you’ve given away. It’s too precious to give away. You need it for the insights to make your life better.
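
As a rough illustration of the survey mechanics John describes, stated importance up front and a daily “did I live to it” score for three weeks, here is a minimal sketch with hypothetical value names and numbers; the actual survey in his book may be structured differently.

```python
# A rough sketch of values-gap tracking (hypothetical names/numbers,
# not the survey from John's book): rate each value's importance once,
# log how well you lived it each day, then compare the two.
from statistics import mean

stated_importance = {"health": 9, "family": 10, "nature": 8}  # 1-10, day 0

# One dict per day over the 21 days; shortened to three days here.
daily_log = [
    {"health": 4, "family": 8, "nature": 2},
    {"health": 5, "family": 9, "nature": 1},
    {"health": 3, "family": 7, "nature": 2},
]

for value, importance in sorted(stated_importance.items()):
    lived = mean(day[value] for day in daily_log)
    print(f"{value:8s} stated {importance:2d}, lived {lived:4.1f}, "
          f"gap {importance - lived:+.1f}")

# The biggest gap (nature here) is the insight: either it isn't really
# your value, or it is and you're not living to it -- your call which.
```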

MJ: So John, would you say taking that kind of personal control over your data is sort of the antidote to being manipulated by all this stuff and being Facebook-contagion experimented into a bad mood or into a good mood, or into buying a bag of Doritos or something?

JH: [laughs] Yeah, there’s a whole industry growing around a personal cloud mindset, which is much bigger in Europe than in the States, where you could look on any website, like WIRED or something, and you’ll see 17 kajillion ads using the word “cloud.” Like, every company, SAP, “It’s the cloud! The cloud!” A personal cloud, all that means is your data is stored on one server that you control, that is not hooked up to the internet unless you want it to be. So all the data that I create, at least from the point you start that personal cloud, sits behind a tiered set of passwords, where anyone who wants access to your data, you say whether or not they can get it and for how long. And that means, for instance, if you’re doing a transaction with a friend or a family member, it’s the equivalent of saying, “Sit down at my computer and look through any of my files that you want.” If it’s a new company you don’t know that well and don’t fully trust, you can say, “Well, I’m intrigued by your service, but what is the very core, base amount of data that you need before you’re willing to let me use your service?” And everyone uses this example, but it’s a good one: there’s a fascinating story of a metronome app. I play guitar and stuff, like blues guitar, and you download a metronome app because it clicks and keeps time. And this metronome app asked for all of a person’s contact information, other people’s contacts, harvesting it. Again, this is where you start to see that’s not terribly uncommon. So the personal cloud is a way that we can sort of rejigger things, to your point, Mike; we can take back our personal data… Right now, our data is utterly commoditized; that’s why it’s almost worthless on an individual basis. But once 10,000 people, a million people, essentially lock their data away, then when brands want to know what you like and what you don’t like, it’s up to the individual to make decisions about what they want to share. And by the way, there again, I want to be clear: if someone wants to share all their data and they’re cool with that, great. That is their choice, and they now have a framework where they can dial it up from 1 to 10. That’s their choice. But right now, that doesn’t exist.
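
The tiered-access idea is simple enough to sketch. Below is a toy model, a hypothetical design rather than any shipping personal-cloud product, where the owner stores data, grants a requester specific scopes for a limited time, and everything else is denied by default; it is the metronome story in miniature.

```python
# A toy personal-cloud permission model (hypothetical design, not a
# real product's API): data stays with the owner, and each requester
# gets only the scopes the owner granted, only until the grant expires.
import time

class PersonalCloud:
    def __init__(self):
        self._data = {}    # scope name -> value
        self._grants = {}  # requester -> (allowed scopes, expiry time)

    def store(self, scope, value):
        self._data[scope] = value

    def grant(self, requester, scopes, ttl_seconds):
        """The owner decides who gets what, and for how long."""
        self._grants[requester] = (set(scopes), time.time() + ttl_seconds)

    def read(self, requester, scope):
        scopes, expiry = self._grants.get(requester, (set(), 0.0))
        if scope not in scopes or time.time() > expiry:
            raise PermissionError(f"{requester} has no current grant for {scope!r}")
        return self._data[scope]

cloud = PersonalCloud()
cloud.store("tempo_preference", 120)       # all a metronome app needs
cloud.store("contacts", ["alice", "bob"])  # what it has no business asking for
cloud.grant("metronome-app", ["tempo_preference"], ttl_seconds=3600)

print(cloud.read("metronome-app", "tempo_preference"))  # 120
try:
    cloud.read("metronome-app", "contacts")  # never granted
except PermissionError as err:
    print(err)
```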

MJ: Well, it seems like most people don’t really understand what they’re giving away. To Matt’s point earlier, people put things on Facebook and don’t really seem to think about it. So since people are so awash in information and things screaming for their attention, with everybody publishing apps and games and TV shows and all this stuff, how do you break through to them and say: this is why you should care, this is why you should put some energy into it? When they’re so overwhelmed with the things they need to know, everything from the security of computers in general and their phones to all the pressures of daily life, how do you convince them that this is something they should invest some time and energy in?

JH: Well, it’s really, really hard. I mean, for the past two or three years I’ve been writing about these issues. Like, I wrote a piece a year and a half ago in The Guardian about quantified self, and a lot of the type of stuff I wrote about is now coming true. There was a woman at an office who was using a stress-based measurement tool, and she noticed that every day from 3:30 to 3:45 her stress increased a great deal, and she was wondering, “Is it something I’m eating?” And then she realized from her calendar that every day at that time, her jerk of a supervisor came by and sort of leaned over the side of her cubicle and was like, “You working hard or hardly working?” like just an asshat. And after about two weeks, she realized she could correlate his presence with an increase in stress, and then she found five or six other of her colleagues and they realized that as this guy went from cubicle to cubicle, in the same timeframe all of their stress increased in a similar way. So they all went to their CEO and said, “Look, here’s data showing how Bob, when he comes to our cubicles, increases our stress and our heart rates. Here is the verified data from eight different devices. Bob is literally killing us, and you fire Bob or we will take this company down.” That type of data (Fitbit data, as of about four months ago) is now admissible in a lot of court cases, and almost that exact scenario is now starting to happen in offices. So, here’s the thing: you don’t have to care about personal data; it’s already here. And the other example: I’m writing a piece about companion robots coming into homes, and I’m excited about companion robots. This is the thing about personal data that pisses me off. I’m a huge geek, right? I would love to have these robots come into my house and use affective computing, and when I’m depressed I could talk to the robot, and that’s great. And I’m being simplistic, but I actually feel that. The problem is that they’re hooked up to the cloud, and the manufacturers, depending on who they are, have their own bias and want the emotional data that’s going to be generated. And so the piece that I’m writing now is: as a parent, what’s parenting going to be like in the age of companion robots? So for instance, right now I know that my kids, who are 10 and 12, if they go to someone else’s house, the parents may take tons of pictures and put them up on Facebook. Okay. So with some parents, we’ll actually say, “We don’t like our kids’ faces to be on Facebook. Could you please honor that wish?” Now, when there are devices that just look like a cute face on a robot, just an android wrapped around the AI software, when they’re outfitted with affective computing, that means these machines can look and see when our pupils dilate as a correlate of happiness, or hear when the tone of our voice reflects that we’re angry vs. whatever else.
So for instance, your child goes over to someone else’s house, and then they come back and they say, “Daddy, Bobby and I had a fight, and Pepper or Buddy or Jibo,” whichever one of the robots, “recorded it, and while we were fighting, it told us to stop fighting and it said, ‘I could hear that your voices are angry.’” And so now that’s recorded, it’s gone to the cloud, and now maybe you get a call because someone says, “I saw your child, and your child was being abusive to the other child.” So now maybe, as a parent, you’re going to be brought into a situation where Child Protective Services is involved. Or there’s the precedent with Target I’m sure you’ve heard of, where a father got a letter about his 16-year-old daughter, and the letter was advertising stuff for a pregnant woman. And he was livid and said, “How dare you talk about my 16-year-old daughter being pregnant.” It turns out she was, because Target’s algorithms were so accurate about what she was searching for that they could tell she was pregnant. So are these companion robots going to sense a young woman’s physiology, see the flush in her face, hear a tremor in her voice, and announce, “Congratulations, Barbara. You’re menstruating”? Obviously I’m being hyperbolic, because no one is going to program a machine to say that, in one sense. But pick your scenario, and then consider that culturally this will be a paradigm shift in how parents deal with other parents and other kids. It’s not just that the devices are recording, which is tough enough; it’s that the devices are recording emotion, and they are also making suggestions that affect emotion. So a child could be learning empathy from a robot vs. another child. That may be a good thing down the road. But right now, today, the evidence from people like Sherry Turkle out of MIT and a lot of positive psychologists says that when kids, when humans, learn empathy from robots, it actually erodes human empathy, and these are things that you can’t get back. There’s a shit ton of stuff coming up soon where the message to most consumers is, “Cool! A fun robot that’s like a Furby AND it lets me make dinner hands-free while it gives me recipes! Awesome!” It’s also continuing to harvest our personal data, but now it’s also outfitted with affective computing sensors. It’s a paradigm shift, and sadly it’s going to take some kind of major thing. I don’t know what it’s going to be; some huge thing where the robots caused people to kill themselves, I don’t know. Again, I’m getting dystopian, but the point is that there’s no ethical framework, there are no standards for any of this stuff, because the money is in making the robots fast before really asking: is it going to improve human well-being overall, or are we just rushing ahead because the technology is something we CAN build? There’s a lot of great stuff to it, but the heart of it is fucked up, and that’s where the lack of data permissions is just going to be a really, really bad, bad thing.
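
Mechanically, the office-stress story earlier in that answer is just aligning a wearable’s time series with a calendar of events. Here is a minimal sketch with made-up heart-rate numbers, no real device’s output involved.

```python
# A minimal sketch of correlating stress with an event (hypothetical
# numbers, not real wearable output): compare average heart rate during
# the supervisor's 3:30-3:45 rounds against the rest of the day.
from statistics import mean

# (hour of day, heart rate in bpm) samples across one workday
samples = [(9.0, 68), (10.0, 70), (11.0, 69), (12.0, 72), (13.0, 70),
           (14.0, 71), (15.5, 96), (15.75, 94), (16.0, 74), (17.0, 70)]

visit_windows = [(15.5, 15.75)]  # when Bob leans over the cubicle

def during_visit(t):
    return any(start <= t <= end for start, end in visit_windows)

visit_hr = mean(hr for t, hr in samples if during_visit(t))
baseline = mean(hr for t, hr in samples if not during_visit(t))
print(f"baseline {baseline:.0f} bpm, during visits {visit_hr:.0f} bpm, "
      f"difference +{visit_hr - baseline:.0f} bpm")

# Repeat across days (and colleagues) and a consistent gap is the
# "verified data from eight different devices" in the story.
```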

MJ: Yeah. It seems like the heart of it really would be the hard part. And like you said, making the robots or computers or mobile phones or apps or whatever faster, more technologically aware, or the algorithms more zeroed-in on this or that—that’s the easy part. But actually the heart of it, that seems like society is not prepared for it and really these are conversations that we as a society, as a whole, should be having.

JH: Yeah. And again, to try to spin this positive, because that’s where I ultimately end up in my new book and try to be in general—the piece I’m doing for Mashable right now is about ethics and this stuff. Most of the time when you hear the word “standard,” understandably people roll their eyes and think it’s about risk mitigation, or lawyers telling you what you can’t do. But what I try to tell people is: remember that, even without any technology involved, most of us as individuals are often not good at knowing our own emotions. So with ethics and affective computing, when we have permission over how our data is shared, there’s a wonderful opportunity to have these sensors tell us, “Hey Mike, hey John, are you feeling a certain way?” And in that sense, there’s this glorious opportunity to move forward into a future where ethics is what I call applied ethics. Applied ethics is where you have tools where people can ask, “Are you aware of this emotional effect on someone’s values when you have this product going into their home?” That means the engineers and the manufacturers who are making this stuff way upstream will now have the opportunity to build things that are more appropriate for their end users, that encompass values, and that still have the advantage of risk mitigation. So innovation can grow enormously if we set these standards first, before we’re dealing with some kind of nuclear-war-level something with AI and trying to backpedal and say, “What did we do wrong?”

MJ: John, where can people find more information about you, and about some of the stuff you’ve written, and also your books?

JH: Well, thanks for asking. My website is my name, which is JohnCHavens.com. And on Amazon, my current book is called “Hacking H(app)iness,” and the new book, which I don’t know if you can order it yet, comes out next February, is called “Heartificial Intelligence: Embracing Our Humanity to Maximize Machines.” And then I’m on Twitter also @JohnCHavens.

MJ: Fantastic. Well John, thanks so much for joining us tonight.

JH: Well, thanks, guys, for having me. Again… my main message is utopian vs. dystopian, but I appreciate you letting me get up on the soapbox.

MB: No problem.

A: That’s all for this episode of Robot Overlordz. Are you interested in the future and how society is changing? We’d love to have you join our community. Visit our website to learn more and to connect with others that share that interest. You can find us at RobotOverlordz.FM. The site includes all of the show’s old episodes along with complete transcripts, links to more information about the topics and guests in each episode, and our mailing list and forums. We’d also love to hear what you think about the show. You can review us on iTunes or email us.

MJ: I’m Mike.

MB: And I’m Matt.

A: We hope to see you again in the future…

MJ: Thanks everyone for listening.

MB: Thanks.

 

Image Credit: By visualpanic from Barcelona, Catalunya (pep sala:el que senties per nadal) [CC BY 2.0], via Wikimedia Commons