SPECIAL GUEST: Kara Platoni (We Have The Technology). In this episode, we're joined by author Kara Platoni to talk about her new book, We Have The Technology. Kara has been interviewing a wide variety of scientists, theorists, technologists, hackers, and enthusiasts about the ways technology is reshaping our bodies and our minds. From the search for new tastes (beyond the currently recognized five) to brain-control systems for next-generation prosthetics, from garages to labs to museums, We Have The Technology digs into this fascinating and changing world around us. Recorded 3/24/2016.

 

You can download the episode here, or an even higher quality copy here...

 

Mike & Matt's Recommended Reading:

Kara's website (KaraPlatoni.com)

Kara on Twitter

Kara's book, We Have The Technology, on her site

Kara's book, We Have The Technology, on Amazon

More coming soon...

 

Transcript:

Alpha: Welcome to the Robot Overlordz show, a podcast about the future. I’m Alpha, the computer, here. We cover how society is changing through the lens of pop culture reviews, political commentary, technology trends, and social norms, all in about 30 mins, every Tuesday and Thursday. This is episode #258. And now, your hosts.

Mike Johnston: Greetings cyborgs, robots, and natural humans. I’m Mike Johnston.

Matt Bolton: And I’m Matt Bolton.

MJ: And joining us on this episode is Kara Platoni. Kara, thanks for joining us.

Kara Platoni: Yeah, thanks for having me. And hi, Alpha—I assume Alpha’s listening as well?

A: Hello, Kara. We are so glad you could join us today.

MJ: For our listeners’ benefit, could you tell us a little bit about your background?

KP: Yeah, sure. So, I’m a science reporter; I live in Oakland, California and I’ve been reporting out here for about 20 years now. I teach at UC-Berkeley, in the Graduate School of Journalism. Basically, I make more reporters, including some more science reporters. And I just wrote a book! It’s called WE HAVE THE TECHNOLOGY, and it’s essentially about hacking your brain—or hacking sensory perception, maybe that’s an even more specific way to say it.

MJ: Yeah, I’m actually reading the book now. We discovered it through Jesse Lawler and Smart Drug Smarts. Jesse was on with us a couple weeks ago, I think. One of the things I found really interesting in the very beginning of the book was how you introduced it and the concepts that you’re writing about, and actually tied it in a little bit to popular culture. How did you really get into these areas of technology? Was it just from things you encountered in normal day-to-day science reporting, or is it stuff that you ran across in popular culture?

KP: It’s kind of all of those things. It was kind of just what was brewing, what was boiling out here in the science scene. A lot is going on with cognitive science, neuroscience; the Bay Area is a great place for people who are studying the brain. And then, of course, this is a very gadgety area. This was the epicenter of Google Glass, and all kinds of wearable technologies are being developed and worn here. I sort of thought, where is it all heading? It’s all heading towards technologies that affect the brain. So the kinds of things that I get into in this book are the first retinal implant, and the idea of thought-controlled robotic limbs, and also the idea of wearables, things that might give you some kind of sensory superpower if you wore them—you know, augmented reality glasses, VR goggles, smartwatches, stuff like that. So basically what I did was I set out to have an adventure. I am an old-fashioned reporter who doesn’t like to just sit at my desk and talk to people on the phone. I want to go wear the helmet, eat the green slime, sit in on the experiment, meet the people, go to their house, go to their basement where they’re doing crazy biohacker experiments. So, I took a year off from my job so I could travel, and first I did a lot of reading to see who was out there doing cool stuff, cutting-edge stuff, things that made me think, wow, I had not heard of that before. And then I would say, "Hey, can I come over to your house? Can I come over to your lab?" And they said yes—very trusting people said yes. And then I would put notes up on Facebook and say, "Hey friends, I really need to go to Toronto," or, "I really need to go to Philadelphia," or, "How about Denver? Anybody know anyone in Denver?" And people would say, "Yeah, come stay on our couch." I had so many people put me up all over the place. I stayed in not one, but two houses where the residents weren’t even there, they just left a key for me. I stayed with people I had never met who were just friends of friends. I stayed on air mattresses and sofas all over the place. So basically, I just went out and did it on a shoestring budget; I really had a lot of fun. And then I came back and I told the book as a series of 11 stories. I didn’t want it to be this reading experience where it would be like an academic telling you about a bunch of studies. Even though there are a lot of academic labs that are featured in this book and I did speak with a lot of scientists, I wanted it to feel like stories about people in the real world who are actually using the technologies. So that’s the way I set it up and I hope people will like it.

MJ: Yeah, I was struck immediately—I mean, I’m in the Taste and Smell chapters, so I’m still early in the book myself. But the experience, the way you recount it, the whole experience of going through the tastes, that was really interesting to me. And that gap between what you’re experiencing and having the language to even talk about it, I’ve got to think that’s an interesting problem to have as a writer.

KP: Oh, yeah, this was one of the first chapters I started working on because it got me so intrigued, but it was also one of the toughest journalistic challenges I had. So the idea for this chapter, Taste, it’s the one that starts out the book, because everyone can relate to food, right? Everybody likes eating. And more importantly, all of us who are alive now and are over the age of 20-25 have had the same experience, which was, when we were kids, when we went to elementary school, there were only four basic tastes. There was sweet, salty, sour, and bitter. And then in the year 2000, along came this concept, umami, or savory, right? Which we’d never heard of before. It had actually been studied and accepted as a scientific concept in Japan since 1908, but Western scientists had ignored it. They said, "What’s this crazy Japanese word and flavor? We don’t understand it, we don’t get it, it doesn’t exist." Well, in the year 2000, scientists discovered receptors on the tongue that lock with the amino acid glutamate, and there was proof that this produces this sensation in the brain, this experience of savoriness, the same way that sugar locks in with a receptor on the tongue and gives you this perception of sweetness, and certain toxins give you the perception of bitterness and so on. So, it unleashed this global chase to see if there were going to be more than five basic tastes out there. And there are all of these contenders around the world now. There are people who are arguing that fat would be the next basic taste, and there’s another group that says, "No, no, it’s calcium." And the idea is these are all things that our body needs to eat, so your body should have a way of sensing it in its environment. There are groups that say, "Okay, maybe water is a taste." There is one group that says, "Okay, no, maybe it’s carbon dioxide. Maybe the taste of carbon dioxide is separate from the feeling of carbonation bubbles." And there’s this really mysterious kind of X-factor thing that everybody is arguing over, which is this idea called kokumi. The idea of kokumi, also a Japanese concept like umami—in fact, being researched by the same company that kind of brought the world umami—is it doesn’t taste like anything on its own, but it makes the other basic tastes taste better. So it makes salt saltier, and sweet sweeter, and savory savorier, right? But okay, here’s my problem as a reporter: it is very, very hard to sit next to somebody and say, "What are you tasting? What are you eating? What does this weird experimental food taste like?" Because the language of taste is really abstract; we hardly have any words at all to describe tastes, so mostly we default to metaphors, kind of these adjectives that we use as proxies for it. And so when you want to ask somebody to describe the taste of a thing that we’ve never tasted before, it’s almost impossible. So what I had to do was volunteer myself; I had to go to all of these labs and participate in all of these experiments. I went to the Denver Museum of Nature and Science, which is researching fat taste. They have this huge citizen science experiment that they’re running right now, they’re trying to get 1500 people to volunteer, and you have to donate your DNA, you have to take a cheek swab, and then they give you these… Have you ever used those melt-away breath strips before?

MJ: Oh, yeah.

KP: Yeah, so they taste like mint, right? Or normally they taste like mint. But at the museum, instead they have this fatty acid in it. They don’t tell you whether the tab they’re giving you has the fatty acid in it or not, and they vary the concentration of it, too. So then in this kind of blind study, they give you these tabs and then they say, "Rate the strength of what you’re tasting from 0 to 10," and you score it. And then they try to get you to give them words to describe what you’re tasting, and it is really, really hard. Fat taste does not taste—at least to me—like bacon or ice cream or gravy or all of the other things that you would think of as fat. To me, the first thing that sprang to mind was that it tasted bitter. But bitter is already a basic taste, right? So that doesn’t count, you have to think of something else. And then I wanted to say, "Oh, it tastes kind of acidic," but acidic is sour, and that’s already another basic taste, right? And there’s the problem: I could taste something, but I didn’t have a word for it.
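
The museum protocol Kara describes, blinded tabs with varied fatty-acid concentration and a 0-to-10 intensity rating, is simple enough to sketch in code. Here is a minimal simulation in Python; the concentration values and the participant-response stub are invented for illustration, since the interview doesn't give the study's actual design parameters.

```python
import random

# Hypothetical concentration levels (arbitrary units); the real study's
# values aren't given in the interview. Zero is the control tab.
CONCENTRATIONS = [0.0, 0.5, 1.0, 2.0]

def ask_participant_rating():
    # Stand-in for real data collection ("Rate the strength of what
    # you're tasting from 0 to 10"); here we just fake a response.
    return random.randint(0, 10)

def run_blind_trial(num_tabs=8):
    """Hand out tabs without revealing which (if any) carry the fatty
    acid, varying the concentration, and collect 0-10 ratings."""
    trials = []
    for _ in range(num_tabs):
        dose = random.choice(CONCENTRATIONS)  # hidden from the participant
        rating = ask_participant_rating()
        trials.append({"dose": dose, "rating": rating})
    return trials

# If ratings climb with dose across many participants, that is evidence
# people can detect the fatty acid, even without a word for its taste.
for trial in run_blind_trial():
    print(trial)
```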

MJ: So this is a little off-topic, I guess, but do you find, when you’re talking about this stuff and reporting it—you mentioned, when you started answering my last question, that those of us who grew up learning four tastes now have five—do you think that everyone really knows that? Because for myself, I remember hearing about umami, but it wasn’t something I knew the way you know the stuff you learn in elementary school, kind of in your background, so to speak. I just think that a lot of people that I know, I don’t know that they would necessarily know about that as a taste. Do you find that there’s a disconnect between people knowing some of these things that science has discovered since they were in school?

KP: Yeah, absolutely. I think understanding of umami is growing, and among people who are really into food it’s becoming increasingly appreciated and savored. We have a burger chain out here in California, maybe it’s nationwide, it’s called Umami Burger, it specializes in really savory things. We have a store in my town that sells Japanese food, Japanese barware and dishes, it’s called Umami Mart. So, among people who know, they like it. And just for anybody who’s like, "What is this umami thing you keep talking about?" So, it’s the flavor that’s associated with caramelization, so like a caramelized onion, or roasted meat, or it’s also associated with slow-roasting or stewing things that have a lot of these peptides in them. So for example, a sun-dried tomato or a very slowly simmered tomato sauce would have a lot of umami in it. So these are things that people recognize as a taste. "Oh yeah, I know that thing," right? It’s not that you’re tasting something that you’ve never tasted before. It’s being able to perceive a persistent quality in the food that you’re eating and pick it out as a discrete thing, to say, "Oh, I recognize what it is"; it’s a discernible thing, the way that saltiness is discernible to you. It’s a separate thing, you recognize it. A lot of taste scientists compare this to the problem of inventing a new color. I don’t know if you ever blew your mind when you were a kid trying to figure out what a new color might be…

MB: [laughs] Well, we’re men, so we only see the red, the blue, the green…

KP: Oh that’s right, you don’t have the extra X chromosome. So, one of the really interesting things that I learned while I was researching this is there are some cultures that don’t have separate words for blue and green, that blue and green are kind of perceived as one singular experience. So if you were looking at the light spectrum, if you were looking at the rainbow, you wouldn’t see blue and green as two distinct, separate bands, you would see them as one thing. And actually, I was speaking with a Russian researcher who blew my mind, she said, "In the Russian language, we divide blue into two distinct colors, kind of analogous to cyan and indigo, and we think of blue as not one thing but two things." So there are different ways you can divide up the rainbow depending on how you think about it, right? And if you really want to blow your mind, go look at a rainbow, and look at a light spectrum, and see if there’s a band in it that you would like to give a special name to, like, "That’s super orange, or that’s ultra green!" and that becomes a separate thing to you, right? So the idea is your eye hasn’t changed, it hasn’t evolved, you’re not any different. The thing that’s different is that you have a word that differentiates this particular segment from everything else and it lets you pick it out. Somebody who was raised with a language that doesn’t discriminate between blue and green, their eye is working exactly the same as mine. It’s just that when they’re looking at the rainbow, they kind of mentally compartmentalize those as one thing, though I see them as two. And it’s the same with taste, right? So maybe the idea is fat taste or calcium taste, which also to me tasted extremely bitter, maybe it’s only slightly different than what bitter is. But if you recognize it, if you learn to pick it out as discrete, you’ll start to see it as a separate thing. For scientists, the big thing that they want, kind of the gold standard for proof, is: is there a receptor on the tongue that plugs into this thing and this thing alone? Is there this kind of receptor/chemical molecule match? Which is what they had to prove with umami. They had to say, "Yes, these peptides lock with the receptors on the tongue in a way that is unique and discrete, and that information travels to the brain and it’s separate from the information that you’re getting about sweet and salty and all these other things." It’s really fascinating; it’s kind of a mind-blower to think about it. The difference has nothing to do with anything outside of your body. It’s not in the food, it’s not even on the tongue. It’s in how you think about that incoming information.
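
Kara's blue/green point can be made concrete with a toy example. The sketch below assigns a color name to a wavelength under three different sets of category boundaries; the cut-off values are rough illustrative guesses, not measured psycholinguistic data, and the "grue" language is a stand-in for the blue/green-merging cultures she mentions.

```python
# Approximate visible-spectrum boundaries in nanometers. These cut-offs
# are illustrative guesses, not measured psycholinguistic data.
ENGLISH = [(380, "violet"), (450, "blue"), (495, "green"),
           (570, "yellow"), (590, "orange"), (620, "red")]

# Russian splits English "blue" in two: siniy (deeper) and goluboy (lighter).
RUSSIAN = [(380, "fioletovyy"), (450, "siniy"), (475, "goluboy"),
           (495, "zelyonyy"), (570, "zhyolty"), (590, "oranzhevyy"),
           (620, "krasnyy")]

# A hypothetical language with a single merged blue-green ("grue") category.
GRUE = [(380, "violet"), (450, "grue"), (570, "yellow"),
        (590, "orange"), (620, "red")]

def name_for(wavelength_nm, bands):
    """Return the name of the last band whose start the wavelength passes."""
    label = bands[0][1]
    for start, name in bands:
        if wavelength_nm >= start:
            label = name
    return label

# The same light hits the eye; only the category boundaries differ.
for wl in (460, 485, 520):
    print(wl, name_for(wl, ENGLISH), name_for(wl, RUSSIAN), name_for(wl, GRUE))
```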

MJ: Is it fair to say that might actually be a theme of your book somewhat? That all of these groups, all of these technologies, all the things that are being experimented with… Even for our audience, a theme for us is somewhat that science fiction and futurism and things like that, they actually shape the way that we think about the future, they actually then create the future, that you need those concepts in order to be able to talk about it, to be able to think those things, and to one day actually create them.

KP: Yeah. Okay, so here’s one way to think about it: your brain is locked in this weird box, it’s locked in your skull, and it only has five ways to get information from the world, right? Sight, hearing, touch, smell, and taste. It has five inbound portals. Other people sometimes compare this to a computer that has five peripherals, and that’s all it gets, right? Your brain knows nothing about what’s going on in the outside world, it just knows the information that these five inbound ports are sending it. And these five ports that we humans happen to have are what we evolved because of our ecological niche. They’re not the only sensory organs that an animal can have. How do we know that? Because other animals have other senses that we don’t have, like sharks can sense electricity, and honey bees can sense into the ultraviolet—we can’t, that’s beyond the range of visible light. And there are pit vipers, snakes, that can sense into the infrared, which, again, is beyond the spectrum of visible light. There are lots of animals that have a magnetic sense; they are usually migratory animals like fish and sea turtles, even monarch butterflies—they have to fly north/south, so they need some way to sense the direction of the magnetic poles. They need that information, so they evolved sensory apparatus for it. We didn’t. So here’s your brain trapped in this box or skull or however you want to think about it being kind of enclosed in this space. It only gets so much information. And not only that, but the amount of information that’s out there in the world is overwhelming, so your brain has to screen that information. Otherwise, you would just be basically paralyzed with indecision, you would have this flood of white noise coming at you all the time and you wouldn’t know what to do with it. So, your brain has to do a lot of decision-making, and that process is guided by your attention, and one of the huge factors that guides your attention is language. Having a word for something helps you recognize it, helps you remember it, helps you pay attention to it, helps you communicate it to others. So that’s what’s going on in the search for a sixth taste—having a word for something directs your attention to it. But there are all kinds of other factors that direct your attention, and some of them are social, some of them have to do with your culture, some of them have to do with your individual experience with learning and memory, and some of them are just pure neuroscience. It’s what your visual system is doing, it’s what your hearing system is doing to gate and filter that information, to only take the information that you need. So, let me give you an example. The brain is tuned to novelty. Why would an animal need that? Because you need to know what’s changing in your environment, because the newest information is the information that you probably have to act on the fastest, and it might be a hint that danger is coming along, so you’re kind of wired to react to things that are new. The brain has all of these shorthand cues that tell you how to react to things. So for example, your sense of smell is very connected to learning and emotion and memory. Well, why? Because that helps you remember things like what’s dangerous, what’s poisonous. The sense of smell probably evolved from what researchers call a chemical sense.
They think this was the first sense that any living organism had—not just mammals, but anything, all the way back to single-celled organisms—and it was basically the precursor to the sense of taste, and this was a sense that picked up chemicals in the environment and said, "Hey, that’s food, go eat it," or, "Hey, that’s your species, go mate with it," or, "Hey, that thing is dangerous and wants to eat you, or it’s fire, it wants to burn you, get away from it." Your brain is predisposed to privileging certain information so you can act on it, and disregarding the rest. This whole field of endeavor that has to do with perception kind of looks at the question of, okay, could our brains take in more? Could our brains take in things more optimally? Could we develop senses that we don’t have using our limited sensory gear, our five inbound portals? Is there kind of a way to hack around these limitations? Heck, why couldn’t we see what the honey bee sees? Why couldn’t we sense what the pit viper senses? Why in the world are we being outperformed by the monarch butterfly? [laughs]
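
The "tuned to novelty" idea maps onto a familiar signal-processing pattern: track a running baseline and only surface what deviates from it. The snippet below is just a toy illustration of that gating principle, not a model from the book; the parameters are arbitrary.

```python
def novelty_gate(signal, alpha=0.1, threshold=2.0):
    """Pass along only samples that deviate sharply from a running
    baseline: a crude stand-in for attention privileging what's new.

    alpha: how quickly the baseline habituates (0..1)
    threshold: how far from baseline a sample must be to get through
    """
    baseline = signal[0]
    novel = []
    for t, x in enumerate(signal):
        if abs(x - baseline) > threshold:
            novel.append((t, x))  # surprising: surface it
        baseline = (1 - alpha) * baseline + alpha * x  # habituate
    return novel

# A steady hum with one sudden spike: only the spike gets reported.
readings = [5.0] * 20 + [9.5] + [5.0] * 20
print(novelty_gate(readings))  # [(20, 9.5)]
```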

MB: [laughs]

MJ: [laughs] Did you have any favorites among the more technological aspects of the people you talked to? Like I said, I’m still early in the book, so I haven’t quite gotten to read those sections yet. But of the people you talked to that are doing things that are maybe a little more gadgety and technical, did you have any particular standout favorite, or was there anything that really surprised you?

KP: Oh, yeah. Okay, let me give you some robot and gadgety examples of people I got to meet. So, one of my favorite people was a guy named Dean Lloyd, who’s one of the first people on the planet who has ever relearned how to see, and that’s because he volunteered to get a retinal implant. Dean Lloyd was born with normal vision and then he lost it as an adult because he has a disease called retinitis pigmentosa; it basically slowly destroys the photoreceptors at the back of your eye. A few years back, he volunteered to get a retinal implant that is called the Argus II, it’s made by a company called Second Sight. This is the first retinal implant to be licensed for sale in the United States, they’ve only been selling it on the market for about a year, and Dean was one of its first testers. So what it is is he wears this special pair of glasses that has a camera set over the bridge of his nose. The images from that camera are translated to electrical impulses, the impulses are ported to a device that’s actually inside his eye, it was surgically implanted inside the back of his eye, and that device stimulates the surviving retinal cells at the back of his eye. The information travels up his optic nerve to the brain and he perceives kind of a limited, very sparse version of sight. It’s not what he remembers from before, it’s not what most of us experience as sight. So he doesn’t see three-dimensional objects, he doesn’t see much in the way of shading. The implant that is in his eye has only 52 working electrodes, so you could almost say it’s kind of like a football scoreboard that only has 52 lights that work. So what he perceives, the way the world looks to him, is it’s just flashes of light that indicate contrast areas, that kind of indicate the differences between dark and light. So for example, when we were walking down the street together, he could kind of see a series of flashes where the white of the sidewalk met the black of the street. But even with this sparse information, he can use it to get around. He can navigate pretty well with it; when he’s talking to people, he can sort of tell where they are; he can identify certain objects, like he was telling me he can figure out where his dinner plate is by scanning his head right and left, and he’ll get flashes at both edges of the plate and that tells him where the object is and how big it is. Vision is a construct, and his construction is much more limited, much more sparse than ours is, because we each have millions of working photoreceptors and he only has a few, but he’s still getting information thanks to this implantable neuroprosthetic device. And these devices are only going to get more sophisticated. He refers to himself all the time as the Model T Ford; he’s just acknowledging that it’s a really primitive technology that he’s volunteered to test. But there are lots of people thinking about ways to make this device better, whether it’s do you add more electrodes to the array so you can stimulate more cells at the same time… There is talk about going all the way up to a thousand electrodes; some scientists have said maybe you could do that. And then there are people who are saying, "No, look, stimulating with electricity is way too fuzzy and clunky"; the problem with doing that is the electricity kind of goes everywhere, you might stimulate the wrong cell, you might stimulate more cells than you want.
And instead, they want to use this new technology called optogenetics, which means stimulating neurons with light, basically with fiberoptics. And light is much more exact than using electricity; electricity kind of disperses into a fuzzier cloud and light is very precise. So that was really cool, to meet him and spend time with him and watch him do what he does every day. Another person I got to observe at work was a surgeon named Dr. Sherry Wren. I watched her remove a gallbladder from a patient, and the really cool thing was the patient was 20-25 feet away from Dr. Wren on the other side of the room, because Dr. Wren was operating on her through a robot, so she was doing telesurgery. Telesurgery has this amazing potential which is like, okay, you could operate on someone who’s in space, you could operate on someone who’s at the South Pole, you could operate on someone who’s on a submarine… You could have the great surgeons of the world from the major metropolises operate on people who live in really rural areas who can’t get to a hospital, people who live in developing countries where people don’t have access to this kind of medical care. Really cool stuff, right? But, telerobotics, telesurgery poses all of these technical complications. One of them is that there’s lag time and interference, which means that operating over distance can be dangerous, so right now that’s why she’s actually in the same room as the person that she’s operating on. And the other one is haptics are really hard to render back to a surgeon, so Dr. Wren was working only by sight. She was working at this console that was kind of like, I don’t know, almost the size of one of those big, old Atari consoles from the ‘80s, and she actually had her head and her upper body leaning into the console, she sees this beautiful screen through which she can see the image of this up close camera feed inside the body of her patient. She can see the tips of her tools as she moves them, but she can’t actually see her hands, her hands are actually under the console, where she’s holding these kind of grippers that she’s manipulating, and the grippers are giving directions to drive the robot that is actually on the other side of the room, hanging over the patient. Okay, so that’s pretty cool and that’s pretty interesting. Why would we want to give Dr. Wren real time touch feedback? Well, for one thing, without it, she’s missing some information, like she doesn’t know, by touch, how hard she’s pulling on a suture. So when she’s trying to tie that line, the only clue she has is basically whether or not the skin is blanched; she can kind of visually tell if she’s pulling too tight. But one of the things she and other doctors told me is that beginners break the suture all the time because they can’t tell how tightly they’re pulling on it. She can’t palpate from inside. Surgeons need to use texture a lot of the time to tell the difference between healthy tissue and a tumor. For example, they need to feel the pulse of blood moving through an artery, they need to know the weight and angle of how an organ is hanging, they need to know where the tension is so that they can cut… She gets none of this right now, she only knows what to do because she has visual feedback and because she’s a very experienced surgeon who knows what to expect. I spoke with a lot of mechanical engineers at Stanford who are working on developing these haptic tools that might, one day, help a surgeon like Sherry Wren. 
But there’s an even cooler next step, and the next step is: what if you took these kinds of haptic sensors and you put them on a prosthetic limb, and you made the prosthetic limb driven by brain control? So, it is a prosthetic limb that can be worn by somebody who’s had an amputation, or who has had a stroke, or who for some other reason doesn’t have a limb or can’t move that limb, and they would control it with their brain and they would get real-time feedback from that limb. So they would think, "I want to grab that coffee cup," and they would grab it, and the arm would be able to say, "Hey, you’re gripping it so tightly you’re crushing it," or, "It’s about to slip, you’re about to lose it." That’s what would make one of these prosthetic devices incredibly useful, incredibly precise; it would be the next generation of prosthetic limbs.
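
The closed loop Kara sketches for a sensorized prosthetic, grip the cup, read the force, warn on crush or slip, is easy to express in code. A minimal sketch follows; the thresholds and sensor units are invented for illustration and aren't drawn from any real device.

```python
# Hypothetical thresholds in arbitrary sensor units; a real prosthesis
# would calibrate these per object and per user.
CRUSH_LIMIT = 8.0  # above this, the cup is being deformed
SLIP_LIMIT = 2.0   # below this, the cup is about to slide out

def feedback_for(grip_force):
    """Turn a raw grip-force reading into the kind of real-time cue
    the wearer would get back from the limb."""
    if grip_force > CRUSH_LIMIT:
        return "ease off: you're crushing it"
    if grip_force < SLIP_LIMIT:
        return "tighten: it's about to slip"
    return "grip is good"

# Simulated readings as the hand closes around a coffee cup.
for force in (1.2, 3.5, 6.0, 9.1):
    print(f"force={force}: {feedback_for(force)}")
```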
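
Earlier in that answer, Kara compares Dean Lloyd's implant to a scoreboard with 52 working lights. The sketch below shows the kind of reduction involved, downsampling a camera frame to a 4-by-13 grid (52 cells) of on/off flashes; the grid shape, threshold, and processing are assumptions for illustration, not Second Sight's actual pipeline.

```python
import numpy as np

# A fake 8-bit grayscale camera frame. In the real system this would
# come from the camera mounted on the glasses.
frame = np.zeros((120, 160), dtype=np.uint8)
frame[:, 80:] = 230  # bright sidewalk on the right, dark street on the left

def to_electrode_grid(img, rows=4, cols=13, threshold=128):
    """Downsample a frame to a rows x cols grid of on/off "phosphenes"
    (4 x 13 = 52, matching the working-electrode count Kara cites)."""
    h, w = img.shape
    grid = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            patch = img[r*h//rows:(r+1)*h//rows, c*w//cols:(c+1)*w//cols]
            grid[r, c] = patch.mean() > threshold  # flash if bright enough
    return grid

# The on/off boundary in the printout is the contrast edge he perceives,
# a scoreboard's worth of light where the sidewalk meets the street.
for row in to_electrode_grid(frame):
    print("".join("#" if lit else "." for lit in row))
```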

MJ: I would think that whole area of brain control is going to have all kinds of implications. I was just thinking, with all the things you’ve gone through there, that seems like another area where we’re still developing the language, whether it’s code or the actual mechanical ability to translate that stuff between the patient on the table and the doctor doing the surgery; that stuff is still kind of being developed. When we get a little bit further along that curve, it’s really going to open up some pretty neat stuff.

KP: Yeah, we’re at an amazing point in brain science. Up until, I don’t know, maybe the turn of the century, not too long ago, studying the brain, studying the senses, studying perception was largely a psychological science, and it was kind of a behavioral science. You would give people or animals a stimulus and then you would see how they reacted, and that’s how you kind of figured out what was happening inside the brain. In the last 10-20 years, we’ve developed much better ways to look at the brain and to actually start to understand the brain’s language, to understand why it’s doing what it’s doing. So these include things like the fMRI, being able to scan to see brain activity in real time; having implants, electrodes that can actually be put inside a living, behaving person’s brain to read out activity of what they’re doing; and, of course, our understanding of genetics is much better, too. So, that has all helped us kind of understand what’s going on. One of the goals that researchers told me over and over again is they want to be able to understand the language of the brain, they want to understand the process of what is happening in this chain of information as information comes in, which is the senses, the input side, and then also what happens as information, instructions are relayed from the brain, which is kind of the action or motor side of the brain. It is incredibly complex. It involves not only so many neurons but so many different kinds of neurons. And even right now, the best electrodes that scientists have to put on the brain only get at a very small area of the brain. Usually they can only be placed on kind of the outer layer of the brain, or there are electrodes that only access a very small network. It’s not like they can understand, neuron by neuron, what the whole brain is doing in real time. The fMRI is kind of this blurry overall picture that has to do with essentially blood flow—they kind of use blood flow as a proxy for brain activity. So they can’t exactly use fMRI to say, "Hey, I understand what each and every neuron in this section is doing." But the potential for developing this more granular information is huge, and it has huge applications for a lot of the diseases we currently either don’t know how to treat or don’t have many options to treat. So all of these diseases of aging that are increasingly affecting people as people live longer, things like Alzheimer’s, Parkinson’s, these are things that this new wave of neuroscience might help us treat. Strokes, as well as spinal cord injuries, these are also really big targets for these kinds of researchers. Lou Gehrig’s disease, locked-in syndrome—all of these things that, right now, rob a person of the ability to speak or to move, maybe these are things that can be overcome by developing brain-machine interfaces, or machines that can read out activities or thoughts from the brain. The idea is, could we develop either neuroprosthetics, like that limb, where you have an intention to move and it moves, or some kind of device that helps you communicate with a computer, where it reads out your thoughts on the screen or there’s a talking device that reads your thoughts aloud.
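
Reading out an "intention to move" is, at bottom, a decoding problem: map patterns of neural firing rates to an intended movement. Below is a toy version of a classic linear decoder on fabricated data; real systems decode recordings from implanted electrode arrays, so everything here, the rates, the weights, the fit, is illustrative only.

```python
import numpy as np

# Fabricated data: firing rates of 4 neurons (columns) over 100 time
# steps, plus the 2D hand velocity (vx, vy) recorded at the same time.
rng = np.random.default_rng(0)
true_weights = rng.normal(size=(4, 2))
rates = rng.poisson(lam=5.0, size=(100, 4)).astype(float)
velocity = rates @ true_weights + rng.normal(scale=0.1, size=(100, 2))

# Fit a linear decoder W so that velocity ~= rates @ W (least squares).
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Decode the intended velocity from a new burst of activity.
new_rates = np.array([[4.0, 7.0, 2.0, 6.0]])
print("decoded (vx, vy):", new_rates @ W)
```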

MB: If we’re hitting that point, then you kind of think that the singularity can’t be that far off.

KP: Right? And a lot of people say, "Uh, hang on a minute, this is kind of my post-apocalyptic nightmare. You’re talking about reading thoughts?! That doesn’t sound great!" A lot of the researchers I talked to said, "Yeah, you have to be very thoughtful about this, because the flipside of it could be really scary." Now, I should say the technology that we have today is quite crude; it’s not like somebody can put a hat on your head and read out what you’re thinking. And when you talk to researchers like this, they really start to parse out, "Well, what is thought anyway?" Because the intention to move your arm, or the intention to even form a word, to have a verbal image in your mind, is way different from the kind of abstract thinking we do all the time, like a dream or a daydream, or an emotional feeling, or a memory, right? Those things are very wispy and fleeting, and most of the time they’re not even conscious, right? Accessing those things might be much, much harder than accessing somebody’s intention to move their arm or to very specifically think a certain word. That said, all of the stuff that we’re talking about now sounded impossible a few years ago, much less a few decades ago. A lot of the time I would come back from researching, and I’d have my notebook and my cassette tapes full of all of this amazing stuff, and I’d try to tell people about it and they’d say, "Yeah, but that doesn’t really exist," and I’d say, "Oh no, it exists. I saw it. I just saw it in a lab." Maybe I saw it because the implant was in a mouse or in a monkey or something like that, but I saw it. So these things move fast, and if we’re going to get to the point where we start to access people’s intentions, where we have technologies that directly interface with the brain, maybe things that are actually inside the body, like inside the eye or inside the brain, then we really have to think about how we want them to be used and who’s going to get to use them. And right now, I should be very clear and say the people who are developing these things are developing them only for people with very serious medical needs, people who could not be helped by any other currently existing treatment. And these are very, very serious clinical studies done at major universities, often with big government funding, and they’ve cleared all kinds of ethical checks and human-subjects reviews and that sort of thing. It’s not just a bunch of dudes in their basement shoving something into their ear or whatever, right? [laughs]

MB: [laughs]

KP: Don’t do that at home, kids! Don’t do that at home! But you can see how things move fast, and where my book ends is with a group of biohackers called Grindhouse Wetware, who are based out of Pennsylvania. This is a bunch of guys who are frustrated with the limits of the senses, and they have been trying to develop things that they can make on their own in their basement that they can put in their bodies that will give themselves a new or different sensory experience. And I should be very careful to say they are not doing anything that has to do with the brain, they’re not shoving anything in their brain, but they are putting things in their hands. So there are a lot of biohackers who start by implanting a magnet, kind of as a bid to see if that will give them an electromagnetic sense. And the guys at Grindhouse Wetware, when I met them they had just finished building this device called Circadia, which Tim Cannon, who’s one of the group’s founders, was wearing in his arm. Circadia is about the size of a deck of playing cards, and it was on the inner plane of his left arm. What it did was it read his body temperature and it ported that information to his cell phone via Bluetooth, and it lit up; when you held a charging coil to it, it lit up red and green, and it looked really dramatic and really cool. You might say, "What does that have to do with sensory perception?" Well, nothing at the moment. It basically was a test device to see if they would accidentally kill Tim, and if they could charge it and if it would work. So Tim was fine, he wore it for about three months. They eventually took it out because the battery was expanding because of the heat of the charging coil. But it was kind of their proof of principle, so just this Thanksgiving they came out with their next project, which they call Northstar. When I was there with them, the idea behind Northstar was to develop an in-hand compass, something that would essentially light up when you were facing north, and it would be this hack, this kind of proxy sensation of being the monarch butterfly or sea turtle who can sense magnetic fields naturally. This Thanksgiving, they came out with kind of the demo version, version 1. All it does right now is light up, it doesn’t have that magnetic compass element in it, but that’s what they’re going to try to do next. So yeah, these things are still primitive, but there are people who are trying them, there are people who are doing them, and I think it’s kind of a testament to human curiosity.
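
The planned Northstar behavior, light up when the wearer faces north, reduces to a compass-heading check. Here is a sketch under stated assumptions: the magnetometer function is a hypothetical stand-in, since Grindhouse's actual firmware isn't described in the interview.

```python
import math

def heading_degrees(mag_x, mag_y):
    """Convert horizontal magnetic-field components into a compass
    heading, with 0 = north and 90 = east (axis convention assumed)."""
    return (math.degrees(math.atan2(mag_y, mag_x)) + 360) % 360

def should_light_up(mag_x, mag_y, tolerance=10.0):
    """True when the device points within `tolerance` degrees of
    magnetic north: the cue the implant would flash for."""
    h = heading_degrees(mag_x, mag_y)
    return h <= tolerance or h >= 360 - tolerance

def read_magnetometer():
    # Hypothetical stand-in for whatever sensor the real device polls;
    # this fake reading points almost due north.
    return (0.98, 0.05)

print(should_light_up(*read_magnetometer()))  # True -> LED on
```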

MJ: So Kara, do you think that society really needs to start having these conversations now, like you said, about who should be allowed access to this technology and what the parameters of it should be? Should society be having those conversations on a grander scale, and is your book intended to start them?

KP: Yeah, absolutely, I would be delighted if this book helped kickstart some of those conversations. I think right now, for the very invasive technologies, things like the retinal implant, those are very carefully monitored by the FDA; you would need a hospital surgeon to implant it and you would need your insurance to cover it… There are all of these systems that are going to limit who can access that technology. And you wouldn’t want it if you had normal vision anyway; it wouldn’t do anything for you, it would probably make your eyesight worse, right? But we do have this whole new array of things that are coming that are wearable and way less invasive. There are augmented reality spectacles, or VR goggles that you put on, or smart clothes or smartwatches. Even, I have to say, the cell phone is amazingly capable in some ways as a sensory gadget. Most of us are not making those things in our basement. There is a really strong open source DIY maker movement that’s interested in gadgets, but not everybody is participating in that. So most of us buy our technology, and we don’t know what’s in it, right? I do think we should be having conversations about its capabilities, what it’s supposed to do, what kind of information that device is feeding you, what kind of information it’s leaving out, and we should be having them now, before the only choice you have left as a consumer is to buy it or not, right? Let me give you, just really quick, one example of why we should care about this. I spoke with this great scholar from the University of Calgary, his name is Gregor Wolbring, and he’s a disabilities and abilities scholar, and I wanted to talk to him because the disabilities community is kind of a group that a lot of new assistive technologies are made for first. I wanted to ask: is it always a great idea to try to augment the body or change the body? And he said, "Look, you have to think very carefully about what technology normalizes, because technology says a lot about what we expect a body to look like and what we expect a person to be able to do to be considered a productive, normal member of society." And he said, "Just a couple generations ago, nobody had to know how to use a computer, nobody had to know how to use the internet. A few generations before that, you didn’t have to know how to use a phone in order to have a middle class job or a middle class education. Well, now you do, it’s required." So if some of these augmented reality goggles, spectacles, things that are analogous to Google Glass but more complex, if they give you an advantage—an advantage at work, an advantage at school—if they become required, then it’s going to kind of exacerbate the divide between the haves and the have-nots; you’re going to have to participate if you want to keep up with everybody else. He said, "Look, you could say no, just like today you could say, ‘No, I don’t want to use the internet,’ but it’s going to have really big repercussions for your social life, for your financial life, for your work possibilities, so is it really a choice?" And this idea of evolving along with our gadgets, this idea of an arms race, where a successful gadget means that everybody else has to buy the gadget and keep up with it, that’s really crucial to think about.

MJ: Definitely, definitely. So Kara, where can people find more about you, and where can they get a hold of the book?

KP: You can find more information about me at KaraPlatoni.com or on Twitter, @KaraPlatoni. But the book is everywhere, it’s on Amazon, Kindle, Barnes and Noble. You can go to IndieBound, which is a great website to see if your local independent bookshop is carrying it. It’s in bookstores everywhere and it is on iBooks, so you can look it up there if you like to read digitally.

MB: Awesome.

MJ: Well Kara, thanks so much for joining us tonight.

KP: Thank you! It’s been really fun.

A: That’s all for this episode of Robot Overlordz. For even more society-changing goodness, visit our website at RobotOverlordz.fm. You can also review us on iTunes or email us.

MJ: I’m …

MB: And I’m …

A: We’ll see you again soon, in the future.

MJ: Thanks everyone for listening.

MB: Thanks.

 

Image Credit: By Jesusccastillo (Own work) [CC BY-SA 3.0], via Wikimedia Commons