Episode 242 - HippOcratic SecurOmatic
Published February 2, 2016
SPECIAL GUEST: Beau Woods (I Am The Cavalry). We're taking a look at MEDICAL DEVICES today and the efforts to help the industry conquer cybersecurity by IAmTheCavalry.org (previously featured in Episodes 99 and 126). This time, we're joined by special guest, infosec expert Beau Woods, to talk about the Cavalry's new Hippocratic Oath For Devices, as well as get an update on auto safety and how you can get involved in the efforts to keep us all safer. Recorded on 1/28/2016.
Mike & Matt's Recommended Reading:
IAmTheCavalry.org, a grassroots organization focused on issues where computer security intersects public safety and human life
IAmTheCavalry.org's Five Star Automotive Cyber Safety Program
IAmTheCavalry.org's Hippocratic Oath For Connected Medical Devices
Beau Woods' website
Beau on Twitter
Episode 99 - Five Star Auto!, our previous episode with Josh Corman
Episode 126 - Cavalry-Charged Medicine!, our previous episode with Josh Corman and SpaceRogue
More coming soon...
Alpha: Welcome to another episode of Robot Overlordz, episode #242. On the show we take a look at how society is changing, everything from pop culture reviews to political commentary, technology trends to social norms, all in about thirty minutes or less, every Tuesday and Thursday.
Mike Johnston: Greetings cyborgs, robots, and natural humans. I’m Mike Johnston.
Matt Bolton: And I’m Matt Bolton.
MJ: Tonight we’re going to revisit a topic we’ve talked about before a little bit. We’ve had on, from I Am The Cavalry before, both Josh Corman and Space Rogue, and tonight we’re joined by Beau Woods from I Am The Cavalry. Beau, thanks for joining us.
Beau Woods: Thanks for having me on, guys.
MJ: So for our listeners’ benefit, could you tell us a little bit about your background and how you connected to I Am The Cavalry?
BW: Yeah, sure. So, I’m Beau Woods, I’ve been in information security for somewhere around ten years now; I tried to stop counting when it made me feel old. But I kind of got my start in a healthcare environment, working in a hospital. For a few years I was doing that, and part of what I was doing was sometimes having to go out and fix medical devices that were down, or at least take them offline, call the manufacturer, and determine what the root cause of a particular incident was. And it would involve a medical device or something on the medical device network, which ties into I Am The Cavalry, which I’ll explain in just a second. After I left healthcare, I went into consulting and did that for a couple years; moved from technical consulting to more business strategy on the consulting services side, and then got back into consulting and spent the last four years doing delivery and running a business doing consulting for both InfoSec industry folks as well as delivering to customers. So, a couple years ago now, at DEF CON, when Josh Corman and Nick Percoco launched I Am The Cavalry originally, I ran into Josh in one of the green rooms. I had been speaking at a conference called BSides Las Vegas at the same time Josh was speaking, and it was right after that talk that we both met and started having some of the same ideas, and talking about some of the things that I had seen in healthcare.
Josh was talking about the I Am The Cavalry initiative, which, briefly, is this: as information security professionals, we recognize that there’s a spectrum of consequences, ranging from the minor inconvenience of your app crashing on your phone, all the way up through credit card data being lost, intellectual property being lost, some type of permanent reputational harm being done; to, potentially now, consequences that might be life and death, as connected technology is entering medical devices, automobiles, power plants; as we saw in Ukraine recently, there was an example of some malware that may have caused some power issues… As the consequences of this connected technology in our society increase, someone should surely be looking out for the unintended side effects of the adoption of those connected technologies. And so as folks in the security industry, you kind of look around and you talk to governments wherever they exist in the world, you talk to people in corporations and you talk to some of the other folks about some of these issues, and you find that there’s really not a lot of conversation going on in this space. Each manufacturer is doing some things, but it doesn’t tend to be coordinated in a way that facilitates learning across all industries, so that we don’t repeat past mistakes, and so that we instead learn from our failures and replicate our successes. Actually, the name “I Am The Cavalry” came from that realization, that the cavalry isn’t coming to save us. John Wayne’s not coming in at the end of the movie with his cowboy hat to, you know, grab us and pull us out of the river or whatever it is. If the cavalry isn’t coming to save us, then the responsibility falls to us—it falls to each of us, not just a group of, you know, people who come from the security community, but to everyone, to take responsibility for kind of being the better world that they want to see.
So, “If the cavalry isn’t coming to save us, I have to be the cavalry, so I Am The Cavalry.” It’s a personal attestation of each individual who feels like they want to be part of the solution, to engage in a more proactive, collaborative effort rather than sitting on the sidelines, or worse, pulling down the people who are trying to help. So, that’s kind of the origin of I Am The Cavalry, and we’ve been around as an initiative for roughly two and a half years now. We are all volunteers, so anyone can step forward and do something, or join what someone else is doing, or draw attention to something, or, you know, frankly just follow us and see what we’re doing in case something does pique your interest or in case you do happen to find some lane where you can engage. For some of us, that’s taken the form of changing jobs to be more closely aligned to higher-consequence outcomes from devices; it’s also taken the form of going to talk at conferences in either the security industry or other industries; it’s taken the form of speaking in small groups, or to manufacturers, to some of the stakeholders in the various ecosystems. And it’s also taken the form of writing documents and frameworks—small, light philosophical frameworks—that can be used by the various industries.
MJ: Yeah, the getting involved effort is something both Josh and Space talked about, so we do have some of that information in the show notes for both of those episodes, and we’ll link to those episodes in our show notes for this episode, along with the links to I Am The Cavalry if people want to get involved. So I Am The Cavalry had a pretty big announcement on January 19th recently. Can you talk about what that announcement was?
BW: Yeah, sure. So on January the 19th, we launched what we call the Hippocratic Oath for Connected Medical Devices. The idea is that, just as physicians and caregivers take a Hippocratic Oath, a symbolic attestation to look out for the best interests of patients and always act in that manner, medical devices are increasingly the delivery side of those physicians’ orders, of the caregiver’s instructions, and so it makes sense that the medical devices would also have some symbolic attestation that they would act in the best interest of patients. And just as the original Hippocratic Oath is, again, kind of a personal attestation with several facets to it, so our Hippocratic Oath for connected medical devices is also a symbolic attestation with several facets. It’s a little bit tongue-in-cheek, in that the initial thought for the Hippocratic Oath was that the medical device itself would be symbolically taking this oath, which, of course, you know, we know that machines are not sentient yet, but were they sentient, they might be able to do that. Because they’re not, we tried to make the oath something that each person involved in the chain of care delivery could potentially take a look at and see how their role fits in, in lockstep with each of those points. So whether you’re a caregiver, whether you’re a patient, whether you’re a hospital administrator or somebody in the procurement department, whether you’re a medical device maker, a regulator, or a security researcher who might want to help in this area, we hope that you can look at this oath and see how you can contribute in a meaningful and positive way to upholding those principles.
MJ: It seems like those principles are fairly similar to the five star auto standard that I Am The Cavalry released. I know when we were talking with Josh the first time he was on with us, he mentioned that that was kind of written with medical devices in mind as well, that a lot of those same principles would carry over. So, is it fair to say that the Hippocratic Oath is the medical devices version of the five star auto plan? Are there any major differences?
BW: Yeah, that’s a very fair statement. Certainly philosophically we were trying to follow up with that, and just very briefly I’ll talk about the two. So, I’ll start with the Hippocratic Oath. There is a slightly longer version that has a little bit of a preamble, but essentially we tried to map it as closely as possible to the actual Hippocratic Oath, the idea being that, as caregivers or someone in the chain of delivery, your first and foremost responsibility is to protect human life. Even though we know that all systems will fail and that failures will happen in the environment—there will be defects, adverse conditions, accidents, and adversaries—the capabilities that we intend to improve or save lives should not inadvertently harm or end lives. So, where failure has a high consequence that affects patient safety, care delivery, and other aspects that really hit to human life, we should make sure that we do our utmost. And from the original Hippocratic Oath, one of the things that most people know and can say is the idea that we must first do no harm by introducing a treatment. So, that’s kind of the ethos. We broke that down into five distinct statements, or distinct capabilities and pledges. The first is cyber safety by design, which kind of combines the worlds and domains of cybersecurity and patient safety, and we say, “I will incorporate security development life cycle, adversarial resilience, and supply chain practices into design.” The second one is third-party collaboration. Once you acknowledge that flaws are unavoidable in the process, you invite others to discover flaws and tell you what those are. So, “I will invite disclosure of potential safety and security issues reported in good faith.” And then we wanted some mechanism to be able to learn from failures, from in-the-field negative outcomes where patients have a reaction to something that is not what the physician or the caregiver intended.
So we need some way to capture evidence. We often hear that nobody’s ever died from a cyber attack on a medical device, but we don’t really have the evidence captured to be able to say that conclusively one way or the other. So the third principle is evidence capture, and “I will facilitate evidence capture, preservation, and analysis to learn from safety investigations.” The fourth capability is around the idea that there will always be failure in the environment, that there will always be failure in a device, and that we should resist failure from the environment when one device fails or when there’s a malicious adversary who maybe walks into a hospital, that that should not put patient safety at risk unduly. And that when failure does occur in a medical device, there should be clear indicators that the device has failed, and that it should fail in a safe state, that, again, doesn’t put patient safety at risk. And then the fifth capability is cyber safety updates. So now once we know there is a failure mode that exists and we have a way to prevent against that failure mode, whether it’s a change in operations, whether it’s a change in the environment, or whether it’s an actual software update, that there will be a way to support prompt, agile, and secure updates to that operating ecosystem so that we can inoculate against any harm. That was a mouthful, and that was a long block of text that I just spewed out, so to just kind of really capture it in a memorable way is, with respect to patient harm, how do you anticipate and avoid patient harm? How do you invite help avoiding patient harm? How do you capture and learn from patient harm? How do you resist and contain patient harm? And then how do you inoculate against patient harm? And so those are the five principles outlined in the Hippocratic Oath, and as you mentioned, those really closely dovetail with our automotive five star cyber safety framework. 
There are a few differences because of the unique natures of healthcare delivery vs. automotive manufacture and operation, but those are really closely aligned. While we set out to design the first one, the automotive five star, with the express intent of having it be maybe not a universal set of principles, we certainly drew from the learning that the medical device industry has had. When we started doing the medical device one, we said, alright, let’s start with a blank slate, without too many preconceived notions—you’re never a completely blank slate, of course—and let’s see what we find. So, as it turns out, we ended up converging on an almost identical set of principles to the automotive one, so we think that there is really a more universal kind of set, or subset, or superset, that could be applied to any manufactured devices, or potentially any other type of software, that can support the same types of capabilities and can help achieve safer outcomes sooner.
MJ: It seems like that’s a pretty good set of standards. So, what has the reaction so far been like? Didn’t you guys announce this with the FDA, actually?
BW: We didn’t announce it with the FDA, but we staged it to occur just before an FDA workshop, to A) capitalize on some of the media buzz and some of the attention that that was getting; B) help seed some new thoughts and ideas to approach solutions going into the workshop; and also to draw more attention to the workshop itself. So, the reactions were very, very positive. Now, we worked pretty closely with some of the ecosystem stakeholders to develop the Hippocratic Oath, talking with medical device makers, talking with healthcare people, talking with security researchers who specialize in this; talking with patients, talking with physicians, talking with procurement offices, talking with many of the people that have a big role to play in patient safety through connected medical devices. And so, it wasn’t a surprise that we had some people who liked it. I don’t think I’ve heard from anyone who hasn’t liked it so far, which is really good. We got a lot of funny looks after we put out the automotive one, because a lot of people didn’t know us. I think by this point, enough people know us that are paying attention to the types of outlets and the types of conversations that the Hippocratic Oath showed up in, that they’ve already heard of us and they know that we’re not a threat, that we’re collaborative in nature. But actually, surprisingly, at the FDA workshop there was quite a bit of conversation about the Hippocratic Oath. Some of it was by us, or, you know, people that we’re very friendly with. But on the second day, after Josh had outlined the five critical capabilities of the Hippocratic Oath, somebody went up to the microphone, unsolicited, and said, “Is there anybody in the audience who would not take this oath, specifically sign up to the idea of cyber safety by design?” And he challenged the entire audience, both the live audience as well as the streaming and recorded audiences, to view and take the oath.
I thought that that was absolutely fantastic. You know, it was a great show of support from somebody who didn’t know us, hadn’t met us, but felt so strongly about some of these things that they were ready to potentially embarrass themselves in front of a government audience and whoever watches government programs on the internet, to call out that this is something that’s needed and necessary. So that was very, very rewarding.
MJ: Yeah, that’s great that you guys are getting buy-in. Do you think that there’s still, though, a little bit of a disconnect between maybe the buy-in at the higher level, that, “Yeah, these are principles we support and we would buy into them,” and then the actual execution, kind of when the rubber meets the road, of all the actual detail involved in implementing these in a way that actually works? That temptation to avoid maybe the shortcuts or avoid, whether it’s a fear of extra cost or extra time or whatever, that keeps manufacturers from doing this kind of thing?
BW: Yeah, certainly. You know, we’ve outlined a handful of principles that are great in theory, but as they say, the devil is always in the details, right? So, one of the things that we learned in the automotive space is that 1) there’s a great track record already of safety in these industries, in automotive as well as in healthcare, and we can learn as much from those stakeholders as we think we can provide to them from our advice. But 2) there are a lot of misconceptions, and we have to keep in mind that medical device makers and car makers are really in the early stages of their journey of making safer software that affects some of these critical systems. So, just as we took 30 years in the software industry, at least, to figure out some kind of commonly agreed-upon norms for what makes secure software development, what makes a secure architecture, how do we do safe updates, how do we do isolation without losing the benefits of connectivity… These device makers are also on that journey, and they’re still at a very, very early stage. So, if we can just help mentor them and give them the benefit of our scars without trying to pass judgment or supplant their decisions with ours, then I think we’re well on our way to compressing what might be a 30-year learning curve, as it was for us, to maybe two or three years. And I think that would be a fantastic thing if we could achieve an outcome like that, to get these capabilities out there. The mechanism that we use for the Hippocratic Oath and for the five star is actually not a technical set of controls.
It allows manufacturers to publish an attestation that says, “Yes, we follow this, and here are the steps that we take,” or, “Here is how you know that our pledge is true.” And it doesn’t have to be the technical details of their security development life cycle, but enough to reassure the buying public and to preserve confidence in those systems, that they are trustworthy, that people can rely on safe operation of the software within their vehicles and in their medical devices. And we think that that’s a method that empowers and encourages innovation. So rather than setting a floor of minimum standards, it triggers a race to the ceiling: who’s going to provide better assurances? Who’s going to be able to provide them at less cost? Who’s going to be able to get them out the door quickest, with the least hurdles? And so we really tried to think hard about how do we set not just what we think should happen, but try to establish preconditions and incentives that really make the industry see this as something that is in everyone’s best interest, to the point that they will open up a little bit about it and talk some about what they’re doing. One of the things that we’ve found is that, you know, people don’t like to tell their internal secrets, not because they think that they’ve done something wrong, but because they see those as competitive differentiators, that if their competitors knew, they might have an edge up. So we tried to pull some of that away and make it less about preserving intellectual property and more about reaching out to consumers, whether those be car buyers or institutional buyers at a hospital, to help them understand what capabilities are on offer and are possible, so that when a buyer is looking among multiple different options, they can take cyber safety capabilities into account.
Any time you go in for care delivery, any time you go seek treatment, the best treatment options that you have are the ones that you can make fully informed choices about. Fully informed choices about potential consequences, positive or negative; potential side effects that may come in any aspect, including the cyber safety domain.
MJ: To pull a little bit at the evidence capture thread, in looking at the way the media covers whether it be something like the Sony hack, or you mentioned the possibility of the Ukraine’s power issues being caused by malware or something like that, or the Ted Koppel book about the US electric grid… Do you see that there’s maybe another disconnect between the way the media covers this stuff and how your average layperson experiences that news, and maybe the evidence capture part of this whole process? That the media seems already set on a conclusion and ready to kind of put out the most sensationalized, scare-tactic version of, “Oh my god,” for example with the car security researchers, “they’re going to take control over your car over the cellular network and drive you off the road”?
BW: Yeah, there’s definitely a knowledge gap there between what even some of the best informed media personalities will know and what the security researchers themselves know. It’s a really big ask for somebody who is an expert in one domain to also be an expert in another domain. So, sometimes you see stories that come out that are wrong, purposefully or accidentally, that maybe over-hype something. Or we’ve also seen some that underplay certain things, or that mischaracterize them, or just get the details, get the facts wrong. Most of the things that we talk about in the media around these issues are anecdotal evidence, but they’re not really actuarial evidence. We don’t have data points that would withstand scrutiny and rigor from some type of a study to look at whether or not some of these things have happened or have not happened. Or at least, if that data exists, it tends to be locked away in individual companies who have been breached or not breached. But in the healthcare ecosystem and in the automotive ecosystem, there is a strong commitment to reporting failures so that others can learn from them, and in the automotive industry you have groups that are devoted to just bringing together information about how cars fail. The National Highway Traffic Safety Administration, the Department of Transportation, some of those other agencies do investigations of car crashes to find out why things happen. And I saw a study put out a couple years ago that said that safety innovations in automobiles have saved 614,000 lives on the road from automobile accidents over the past 60 years. So, there’s a really well-established, robust set of metrics and measurements that you can look at to tell that safety story. We want the same thing for cyber safety.
We want the same thing to know that if and when software is a contributing cause to a negative patient outcome or to a car crash, that we’re able to look at it with the same amount of forensic integrity and rigor that we spend looking at crumple zones, looking at airbag deployments, looking at speed or texting, or other things, when we’re doing these accident investigations and safety investigations so that we can get safer sooner.
MB: This may sound like a simplistic question, but if you were kind of to grade or rate the medical device makers, how they’re doing right now, how would you say that they are? Are they improving? Are they staying about the same as far as all this stuff goes?
BW: It’s kind of hard to say. I have the distinct pleasure that the medical device manufacturers I’ve worked with are the ones who really care, that are really interested in engaging with us to improve cyber safety. They recognize that it’s a problem, and in some cases they have made significant investments in improving the situation. So I really have a very biased kind of view about what the situation is. I also know, on the other hand, that there are some medical device makers that haven’t really come to the table, that don’t understand some of these principles, and that’s okay. But the goal is we want to have outreach among the different stakeholder communities so that they can learn from each other and teach each other. Something like the FDA workshop that happened, where there were 400-500 people in the room, and a large portion of those were medical device makers who were teaching each other, kind of tearing down some of the walls of corporate competitiveness and saying, “Look, in this common shared space, we have a shared goal, and that goal is patient safety. We really believe in that and we’re willing to invest in that; invest our time, invest some of the things that we have learned the hard way, so that we don’t repeat the same failures of the past.” And I think if I look into the automotive industry a couple years on from publishing our five star, there are some telling indicators and telling signs that are happening there. Obviously we talk about Tesla a lot because of the overlap between what they’re doing and what we’re talking about. Tesla has over-the-air updates for their cars, which they’re able to push out to update features but also to eliminate security vulnerabilities that may cause safety problems.
They have a well-publicized coordinated disclosure program that includes their automobiles, and they even reward researchers who report significant flaws in those vehicles, whether they have safety implications or whether they’re merely security vulnerabilities. So, I’d say that they’re, right now, kind of leading. But we recently saw that GM announced a coordinated disclosure program, and there was a news article the other day saying that GM has over 100 people working on their product security team, which I think are great indicators that that industry is getting it. Maybe we can’t take credit for that, and I wouldn’t take credit for the work of others, but I think that it shows that this type of thing is catching on. And if you look at a document that was recently published by NHTSA and the Department of Transportation, and signed by, I think it was, 18 automakers, it outlined the 2016 goals for proactive safety in the automotive field. There were four goals, and I could see evidence of ways that the five star could support each of those four goals, and I thought the most telling was #4, which had a big part that was a call to collaborate with the cybersecurity community. I think that that speaks to the efficacy of the approach of ambassadorship that we’re taking with this, because if you reach out your hand and offer assistance and help, then someone is likely to take it. If instead you point and laugh at failures, well then you set up an adversarial relationship, and that doesn't lend itself towards the other party coming and saying, “Well we should definitely work together,” right?
So I think that after many, many years of having this wall between us, whether it’s software makers or device makers, and the security research community, we’re finally starting to erode that wall through some of the work that we’ve done, but also some of the better attitudes of the security research community in recent years, to say, “Look, we’re all in this together, we know that no one knows everything.” We don’t know how to do what they do as well as they do; they don’t know how to do what we do as well as we do. So, where our domains of expertise collide and overlap, we’ll be better served, we’ll be safer sooner, if we work together.
MJ: Beau, what would you say to your average person, maybe? I mean, do you have something you kind of tell the people in your friend and family circles about how they should feel about this, or maybe how they could get involved in helping move that conversation forward, whether it’s sharing things from I Am The Cavalry, or following this stuff? Where can they go so that they don’t kind of perpetuate when the media does get a little sensationalized, spreading that panic and fear, and more to just make sure that we’re having the right conversations and that we’re all kind of taking that “We’re in this together” attitude to make things better for everyone?
BW: First off, one of the things is that when somebody sees a media story that indicates someone doesn’t care about their safety, it’s probably not true. The more that I work with automakers, the more that I talk with device makers, healthcare organizations, regulatory agencies, anyone involved in any of these ecosystems, the more it becomes clear that there are very conscientious, very passionate people within each of those organizations who are working every single day to make things safe for their customers, their patients, whoever the stakeholders and beneficiaries may be. The second thing is, when you’re engaging in some of these conversations with maybe your physician on what your course of treatment should be, ask these types of questions. What are the potential implications, and how do you know what those are? What’s the evidence to support that? Oftentimes, talking with physicians and doctors, you ask them about medical devices and they kind of think that they’re a black box. They don’t question them because they don’t know to question them. They think the devices themselves were built with a level of rigor and a level of care that’s beyond reproach. Engineers will tell you that all systems fail; physicians know that intellectually. But just the way that we tend to operate is, unless we see evidence of something or unless we’re made to question it, we tend to just accept things on their face a lot of times. And so, it’s something that physicians and healthcare administrators and other people in these ecosystems who could make a big difference don’t know to look in those areas. So, raising awareness of those things, but without prompting a fear response, is probably the best way for the average consumer to try and go about it. The worst thing we could do is to have such a fear about the safety of these devices that we choose not to avail ourselves of the best care options available.
In talking with buyers of cars, sometimes they’ll say, “Well, you know, I’m afraid that a hacker might hack my car, so I’m going to get a 1970s, or a 1980s, or a 1990s car.” And if you look at evidence from crash test ratings, a modern car from 2015 fares way, way better than a car from even the 1990s at preserving passenger and driver safety, and I would hate to think that by trying to raise awareness of some of the cyber safety concerns, we inadvertently hurt safety.
MJ: Is there anything that we haven’t covered that maybe we missed that you’d like to make sure we hit?
BW: The only thing is go to the website, IAmTheCavalry.org. We’ve got some information there. You can follow us on Twitter @IAmTheCavalry. You can join our discussion group, which we host on Google Groups. Come be a part of the conversation. No matter whether you think you have something to contribute or not, you might be surprised at what a single passionate individual is able to do.
MJ: Okay, fantastic. And we’ll put that in the show notes as well. Okay, well, Beau, thanks so much for joining us tonight.
BW: Thank you, guys, and I look forward to hearing more of the podcast and talking to you later.
A: That’s all for this episode of Robot Overlordz. Are you interested in the future and how society is changing? We’d love to have you join our community. Visit our website to learn more and to connect with others that share that interest. You can find us at RobotOverlordz.FM. The site includes all of the show’s old episodes along with complete transcripts, links to more information about the topics and guests in each episode, and our mailing list and forums. We’d also love to hear what you think about the show. You can review us on iTunes or email us.
A: We hope to see you again in the future…
MJ: Thanks everyone for listening.