EPISODE 59
Can DoCRA (Duty of Care Risk Analysis) tell you if your cybersecurity controls are reasonable?



About this episode

August 4, 2020

Kip Boyle, CEO of Cyber Risk Opportunities, and Jake Bernstein, JD and CyberSecurity Practice Lead at Focal Law Group, discuss DoCRA – Duty of Care Risk Analysis. It's an approach that helps organizations figure out whether their cybersecurity controls are reasonable. And we'll do that with the help of our guest, Chris Cronin.


Episode Transcript

Speaker 1: Welcome to the Cyber Risk Management Podcast. Our mission is to help you thrive as a cyber risk manager. On today's episode, your virtual chief information security officer is Kip Boyle and your virtual cyber security counsel is Jake Bernstein. Visit them at cyberriskopportunities.com and focallaw.com.

Kip Boyle: So, Jake, what are we going to talk about today?

Jake Bernstein: Hey Kip. Today, we're going to talk about DoCRA, which is the Duty of Care Risk Analysis. It's a risk assessment method that helps organizations figure out whether their cyber security controls are reasonable.

Kip Boyle: Well, that's great because guess what, on our listener survey recently, one of the things they said was, "Please talk more about risk frameworks, risk management frameworks."

Jake Bernstein: That's good, and I think as everyone knows by now, we really like talking about reasonable cyber security programs and controls. So, this will help. Today, we have as a guest DoCRA's inventor and main proponent, Chris Cronin. He is a partner at HALOCK Security Labs, and he'll be talking us through how DoCRA works. So, essentially it answers the question that we have been bantering about for a long time, which is, what exactly does reasonable mean? We're looking at it from a blend of both interests, cyber security and the law.

Kip Boyle: Yes. Chris, welcome to the podcast.

Chris Cronin: It's good to be with you guys.

Kip Boyle: Chris, you wrote Duty of Care Risk Analysis. I have to admit that this is a new thing for me. Jake, I think, has been working on this already, collaborating in the Sedona Conference with you. I've just recently learned that it's been adopted and is being taken up by the Center for Internet Security and in other places. So, could you give us a quick description of what DoCRA is? Just dive a little bit more deeply than we did in the intro.

Chris Cronin: Yeah, sure. Well, we know that generally this question about reasonable security controls has been a big question for lawyers, regulators, information security practitioners, companies that are trying to get compliant. They're struggling to find a workable definition of what a reasonable control is or what reasonable risk means.

So, what DoCRA is, is a way to state what risk is and what reasonable is, in a way that is spoken in the native language of all those professions, the lawyers, regulators, information security practitioners. So, it's a risk calculus that includes all of the things that each of these professions is interested in getting resolved. So, high level [crosstalk].

Kip Boyle: Well, that's no small thing. That's no small thing, Chris, because any listener of this podcast knows that Kip and Jake don't exactly talk the same language. We don't exactly look at the things from the same perspective. So, this is almost like a moonshot at a Rosetta Stone.

Chris Cronin: Yeah. I imagine. I'm trying to think of what species of being has the Rosetta Stone on the moon. It certainly got me interested in going there to find out.

Kip Boyle: Well, I'm thinking about 2001 [crosstalk], honestly, and the monolith.

Jake Bernstein: Hey, not that. Okay. So let's actually work that through, because what has been fascinating to me is the idea that attorneys and security engineers and business people speak a very different language, and surely that's true. But there's something they all have in common, and it's this idea of a balancing test. And Kip, in Fire Doesn't Innovate, you actually hit on this, but halfway through, you've got a description of how an organization can evaluate its controls on a sliding scale.

Chris Cronin: That means he read at least halfway through the book Kip.

Kip Boyle: I imagined that he just plopped it right down in the middle and said, "All right, there's got to be something here I can bring up on the podcast."

Jake Bernstein: That's where it naturally opened up. I did read the book. It's actually really helpful in a couple of ways. Later in the conversation, if we come to it, I'd like to talk about the importance of Fire Doesn't Innovate and how it associates with cyber insurance. But the interesting thing I wanted to address now is this way you have, on a sliding scale, an organization evaluating the strength of its controls.

And they might say, this control gives me a certain level of certainty about how effective it's going to be. But then a control can go over the other end of the scale, where it's actually doing more harm than good [crosstalk] business. And this is a common understanding among engineers, security people, lawyers, regulators, litigators, and businesses. You can have enough or too much of a control.

Kip Boyle: Yeah. I was just going to affirm that. Thank you for reading my book. I really appreciate it. And yes, that's what I was trying to do is acknowledge that unlike money, you can have too much security and there's got to be some way to flag that when you find it and then figure out how to dial it back. And since you read my book, Chris, you know that what I tried to do in the book and what I tried to do with my customers is strike a balance between the purely qualitative and the purely quantitative.

Because in my experience, neither one of those extremes delivers consistently good results. And more importantly, when you talk about business leaders, and I think lawyers and cybersecurity people might be the exception to this, but when you talk about business leaders, the people who actually make the decisions about how much risk we're going to accept, how much budget we're going to allocate, by and large, they are decision making engines. They want to make as many decisions as possible.

And so when you slow them down to explain new stuff to them that they don't understand, it really messes with their ability to make decisions and they get cranky. And so I try to meet them where they are, which is why my book looks the way that it does. But that doesn't mean that I don't believe that there's a place for FAIR or for all these other risk assessment techniques. I just think you've got to choose the right one for the context that you're working in.

Chris Cronin: Yeah, absolutely right. And this is exactly what we were striving for in DoCRA. And when we were working with Center for Internet Security to develop their risk assessment method based on DoCRA, that's exactly what they were bringing to the table as well. And Jake you know you can't succeed as an attorney, whether you're doing litigation support or strategic advisory or regulatory support unless you figured out something that makes sense to all parties, right?

Jake Bernstein: That's right. And I'm thinking, as you two are talking about this, how important it is to have something to point to when we are trying to describe reasonable. Because the core legal concept of the quintessential reasonable man is helpful in a lot of tort-based contexts, tort being your kind of typical slip and fall or negligence case. And the duty of care is at the core of those negligence cases.

Negligence is you had a duty of care and you violated it. And so, what's really interesting is that I suspect you did not choose Duty of Care Risk Analysis at random; I'm sure you did that on purpose, because duty of care has a lot of legal meaning. And just to kind of bring it back slightly to the discussion moments ago, Kip, you and I call it a nine and a 10 when there's too much security.

Another way of saying that interestingly is unreasonable security. And we normally say ... people normally think that having an unreasonable set of security controls is when you're not doing anything close to enough, but it is just as accurate to say that your controls are unreasonable if you've gone way too far.

Kip Boyle: Absolutely. And the biggest reason for that is because people don't respect the controls, they work around them, and you end up at a three. Like you're unreasonable, but you think you're reasonable. Like you have this false sense of security, you're living in the security bubble where you think everything's perfect, and behind your back, everybody's violating everything and you have no idea what's going on. And that's why that's so dangerous from a practitioner's point of view.

Jake Bernstein: Absolutely.

Chris Cronin: Exactly. And then to make sure that anybody who's involved in the risk discussion, either in operating a safe organization or in evaluating whether there was some kind of negligence after a breach, everyone in that conversation needs to come to an agreement on what we meant by risk. And this is one of the big challenges that we were trying to address with DoCRA. And so far, Jake, you've seen this with some of the conversations we have at the Sedona Conference.

Which we don't go into any detail about until the publication has been released, but we can at least say that people come to a risk discussion with a vague understanding of what risk means and not an equal understanding of what it means. And there's usually some consternation. And then when you explain the principles of Duty of Care Risk Analysis, then all of a sudden, a bunch of heads nod and say, okay, yeah, that makes sense. Because everyone's interested in [inaudible].

Jake Bernstein: And that's right. I guess the next part of the conversation is, how is DoCRA different from what people already do? And what I mean by that is, as a practical matter, businesses aren't out there implementing controls that hurt their ability to do business just to adhere to some kind of standard. They're almost always negotiating down their controls in some fashion so that they can operate.

Nobody wants to be at a nine or a 10, as long as they know they're at a nine or a 10. And I think you nailed it, which is: what is risk? And I'm curious, talk a little bit about what we see organizations doing and how that might not actually hit the bullseye on risk assessments at all. And just as an example, gap assessments, audits, and maturity assessments, or a vulnerability scan. A lot of people might think, "Oh, okay. I did a risk assessment because I did a gap assessment."

Or, "Oh, I have a SOC two, so I'm good." I think it's more complicated than that. So why don't you talk about that?

Chris Cronin: Yeah, no, you're exactly right. And this is where we see a big problem. And we'll get to the commonality here in a minute, but let's first establish what we mean by risk analysis. We're always looking for some kind of probability or the likelihood. We're trying to quantify or qualify the foreseeability of a problem occurring. And then the magnitude of that problem occurring.

And we want to know if I've got things in their current state, and I know what's happening in the world, I know bad things can happen, how likely is it that one of those bad things could happen in my environment and who gets hurt and how much when it does? And we find that when we ask those questions about specific information assets, specific controls, we can have a fairly good estimate, even through an estimated guess from capable people about how to prioritize risk that way.

If I've got a database of my visitors who come in and out and they leave their fingerprint and I don't secure it because I didn't consider that PII, and it's a third party desktop application that isn't connected to anything else, I might say my likelihood of something happening is really low. The impact, depending on who gets the information could be very high, bunch of names and fingerprints could be used for other stuff.

So, it's just thinking through the likelihood of harm. So let's first lay out that this is what we mean by risk analysis and why it's different from a gap assessment or an audit.
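[Editor's note: Chris's description of risk analysis, estimating the likelihood of harm and its magnitude per asset and prioritizing from there, can be sketched in a few lines of Python. The asset names and 1-5 scales below are illustrative assumptions, not part of the DoCRA standard.]

```python
# A minimal, hypothetical sketch of the risk analysis Chris describes:
# estimate likelihood and impact per asset, combine them, and prioritize.
# The 1-5 scales and asset names are invented for illustration.

def risk_score(likelihood: int, impact: int) -> int:
    """Simple qualitative risk score: likelihood (1-5) times impact (1-5)."""
    return likelihood * impact

# The visitor-fingerprint database example from the discussion:
# low likelihood (isolated third-party desktop app), high impact (biometric PII).
assets = {
    "visitor_fingerprint_db": risk_score(likelihood=1, impact=5),
    "internet_facing_web_app": risk_score(likelihood=4, impact=3),
    "internal_file_share": risk_score(likelihood=2, impact=2),
}

# Rank by score so the highest risks get attention first.
for name, score in sorted(assets.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score}")
```

This is exactly the thinking-through that distinguishes a risk analysis from a gap assessment or an audit: the output is a prioritized estimate of foreseeable harm, not a checklist of missing controls.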

Jake Bernstein: Exactly. So I think to define these terms: an audit is, okay, I'm saying that I have implemented and used these controls. I want you to audit that statement and basically affirm that my representation is true. And that's it. That is all an audit does; that is all an audit is meant to do. A gap assessment is, well, based on some external standard of some kind, we think that we have holes in our system. But neither of those is a true risk assessment.

And I think we should talk briefly about what are the risk assessments out there and what are the principles or what are the practices?

Kip Boyle: Yeah, before we do that, Chris, I would like to have a sidebar here hopefully very quickly about likelihood. So as a practitioner, I have a real problem calculating likelihood. I think it's an incredibly slippery slope. And here's why. What I've noticed is that risk managers bring their personal risk appetites and personal risk tolerances into their work.

So if I skydive on the weekend and scuba dive and race cars, then the likelihood of something bad happening to me is minuscule. It doesn't matter what forum I'm in. I could be at work, I could be doing one of my high risk activities. I just don't see it. In contrast, if I keep all my money under my mattress or in a mason jar buried in my backyard, everything looks like it's going to kill me. And people bring these attitudes into their work, and it's completely unreliable.

And so, I have predominantly taken the approach that I don't even discuss likelihood with my customers unless they bring it up, because it's just too hard to school them on this and calibrate them. Again, it completely derails the conversation. And what I find is that senior decision makers love gap analysis. They understand it. I don't have to explain anything to them, or very little. And so while it doesn't really measure up to the true definition of a risk analysis, that's the approach that I take. So I'm curious, what do you think about what I just said?

Chris Cronin: Yeah. You're exactly right. You're exactly right. Likelihood, or probability, is an area where there is a lot of variability in how people approach the question, and that can make risk analysis really meaningless or biased or idiosyncratic. And what we tried to do with risk analysis, so when we wrote DoCRA, we were not saying, "We think DoCRA is better than NIST 800-30. It's better than ISO 27001. It's better than FAIR.

It's better than Doug Hubbard's Applied Information Economics." We're not saying that at all. We're saying you can use those methods and use them in conformance with DoCRA. So we're not actually prescribing whether you do a probabilistic statistical analysis of probability, or you guess; we're wide open. In practice, here's what I do. In practice, I say, "Let's take a look at a reliable data set, one that might even be available to the public."

So if you look at the VERIS Community Database, that's a project that was started by Verizon, and they've got more than 8,000 records of breaches, reported breaches, and what some people thought were causes of those breaches. And then it can break things down by where the attack happened, who did the attack, whether it was an insider or an outsider or a third party involved, what industry they were in, and all this stuff. If you're looking at data like that, it can say, here are things that are more or less common as causes of reported attacks.

So it's a vector that's used more or less. It's an attack style that's used more or less. So, that might be an input. That's an input that we use, but it might be an input that someone uses to think about how likely something is. Just get public data about what happens more or less. There are other things that we do. Like we associate the strength of controls, the reliability of controls as a counter against the commonality of the attack.

So this is where whether you're a skydiver or someone who puts money under their mattress comes in. We ask, what is the position of this organization? How reliable are their controls to function correctly? And then we match that against how common the attack is, and then put that against a sort of bell curve distribution, or an adjusted distribution, to say, what would the resulting likelihood be given the commonality of attacks in environments like this and the strength of those controls? So [crosstalk] to do this.

Kip Boyle: Yeah. And I think that approach works much better with larger, more sophisticated organizations, as opposed to medium sized and smaller companies, which ironically have all of the same cyber risks pressing down on them as very large enterprises do, but they just don't have the sophistication and the means and the method. And I've had a really hard time in my experience. I have crashed and burned multiple times trying to bring more sophisticated methodologies to less sophisticated management teams.

And maybe it's because Kip's not good at that, or maybe somebody else would do a better job, but I have rarely seen success on how to do that.

Chris Cronin: No, don't be too hard on yourself. This is actually a really big challenge in the industry. And anything I've seen people do is idiosyncratic, outside of what FAIR does and what Doug Hubbard does. Now, Doug Hubbard's methods are very likely out of reach of a small or medium sized organization, because it takes a lot of effort to even set up probability models that match a company's situation.

Kip Boyle: Right. Although I do want to tell you, I've read Doug's books and I've talked with him and I love what he does. But I just don't know how to bring it to people.

Chris Cronin: Yeah. I've worked with Doug in the field and we've applied DoCRA using his methods. Yeah. He's a delightful guy. And isn't it funny, you can read books about probability and find them really easy to read and delightful?

Kip Boyle: That's because I'm weird.

Jake Bernstein: That sounds like a fib, but maybe it's true.

Chris Cronin: No, it's good. But what I want to do is bring this back to the original questions that you were asking about how likelihood comes into the idea of what a duty of care is. If an organization has the available tools, the personnel, the expertise to do a good job with probability analysis, then they very well may come up with better estimates of what probable outcomes are.

But I don't think that's the standard for demonstrating due care, to get it right. I think, and Jake, keep me honest here, but I think in order to show that you applied due care, you want to show that you did think through how things could have turned out, what the likelihood and the magnitude were, how it could have harmed you and others, and how you demonstrated whether your controls were overly burdensome given the risk they were there to prevent.

We're not given points for having been more accurate, but for showing that we did the thinking.

Jake Bernstein: That's right. And there's a really kind of interesting question here, which is, who ultimately gets to decide if someone has met the duty of care? And we can debate it; two different sides in a pre-litigation dispute can debate it. But at the end of the day, at least in the US, these types of things will actually be decided. They are a question of fact, and the jury will decide.

So at the end of the day, whatever we come up with, it has to be digestible by a jury of nonspecialists. And I think people routinely make decisions on duty of care in medical malpractice cases. But at the same time, you also know that a lot goes into those decisions with the jury. And so I think you're right. It is unlikely that a jury is going to be swayed by frankly, most forms of quantitative risk analysis.

Unless the numbers are big. Let me just unpack that slightly. I don't think a jury is going to care if we're off by five or 10% in some kind of quantitative risk analysis, let alone fractions of a percent. They will care if there's a quantitative risk analysis. And it's like 25 to 75%, some really large number that you can kind of understand. The problem with all of this though is that in general, people are not good at determining risk at all.

How many times have you heard that people are more afraid of flying than driving their car despite the fact that statistically you're far more likely to suffer an auto accident than a plane accident? And so we have to deal with that when we're dealing with all of these different principles, and risk assessment forms, and whether it's NIST 800-30 or FAIR or Risk IT, all of it. They're all just mechanisms to try to get at this question of, did we do enough?

Kip Boyle: Okay. And that's what I like about DoCRA so far. And again, I'm kind of new to this, but what I like about it is that it's extensible. I think of it as a standard that has a plugin architecture, and you get to decide what kind of risk assessment you want to apply.

Jake Bernstein: That was a quality IT joke there.

Kip Boyle: Well, I'm sorry. That's the way I think. And I can do that because I'm the co-host anyway. So Chris, all right. I apologize for devolving our conversation into mudslinging about risk assessment and likelihood. But let's get back on track. So DoCRA is kind of a meta enhancement to whatever risk assessment approach you're using. Is that about right? Or how would you characterize it?

Chris Cronin: Yeah. I think that's a good way of thinking about that. The idea here is that we see a lot of value and limitations to any of the risk assessment frameworks out there, and the best of them, their authors readily admit to that. So, what we're not doing is choosing one or the other. But back to that original question that you had about being able to speak a language that everyone understands.

If we can talk about risk saying it's always about some kind of probability and some kind of magnitude of harm, who are the parties in that NIST 800-30 scenario that you're asking about? How about in a FAIR scenario, or in Hubbard's Applied Information Economics scenarios? And let's make sure that when we're looking at risk, we take care of these three principles, no matter how we do it.

We want to make sure that when we evaluate the likelihood of harm, we think about the harm that could go to any interested party. We want to be sure that there's a definition of what an acceptable risk would be. And by acceptable, we mean the level of harm that could come to us, some redressable or correctable harm that could come to others. And we want to be sure that we've evaluated our control using this same method we use to evaluate the risk so we can compare them and say, "Did the control make me worse off based on any of those risks?"

So whether we're doing quantitative or qualitative, NIST 800-30, ISO 27005, FAIR, whatever, if we can take those three principles and apply them, we've done DoCRA.

Jake Bernstein: Okay. So first, let's restate that. So here are the three principles. One, your risk analysis must consider the interests of all parties that may be harmed by the risk. Another way of saying it is, you must identify all stakeholders. Second, those risks must be reduced to a level that authorities and potentially affected parties would find appropriate.

Another way that I might say that is, the risks have to be reduced to the point where a reasonable group of jurors would say that you met the duty of care. That's what we're kind of going for here. And then third, and this is really important, the safeguards that you put into place must not be more burdensome than the risks they protect against. In other words, we're not going to spend a million dollars to prevent one dollar of risk.
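[Editor's note: the third principle Jake restates, that a safeguard must not be more burdensome than the risk it protects against, can be illustrated with a toy comparison. The dataclass, control names, and dollar figures below are hypothetical, not drawn from the DoCRA standard.]

```python
# A toy illustration of DoCRA's third principle as restated above:
# a safeguard is unreasonable if its burden exceeds the risk it reduces.
# Control names and dollar figures are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Safeguard:
    name: str
    annual_burden: float   # cost/friction of running the control, in dollars
    risk_reduced: float    # expected annual loss the control prevents

def is_reasonable(s: Safeguard) -> bool:
    """Principle 3: the safeguard must not be more burdensome than the risk."""
    return s.annual_burden <= s.risk_reduced

controls = [
    Safeguard("mfa_for_remote_access", annual_burden=20_000, risk_reduced=500_000),
    Safeguard("million_dollar_control", annual_burden=1_000_000, risk_reduced=1),
]

for c in controls:
    verdict = "reasonable" if is_reasonable(c) else "unreasonable"
    print(f"{c.name}: {verdict}")
```

In practice the burden side would also include harm to the mission and to other parties, not just dollars, but the balancing test has this same shape.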

Kip Boyle: That's a great summary. And I can't help but to think about a comparable situation, which I think in this context would probably be the problem of pollution. So if a factory is discharging wastewater into a stream or river, back in the day in the United States, that was of no concern to the factory. In fact, it saved them money because they didn't have to dispose of that wastewater in any particular fashion.

So it was very advantageous for them to just discharge the water in whatever expedient way they could. And they absolutely did not consider the effect on all the stakeholders, which is why the United States had to create an entire agency, the EPA, to put some more structure into that decision making process. I think conceptually, this is a very similar problem space. What do you think, Chris?

Chris Cronin: Exactly, exactly, exactly. And you've hit on one of the most interesting areas of regulatory law that DoCRA grew out of. So back in 1992, Bill Clinton was elected president after 12 years of Republican administrations, where people were saying, "Oh, now we've got a Democrat in, regulations are going to grow." And EPA was one of those things that people were looking at greatly.

And what Clinton did within his first year, he signed Executive Order 12866, which said we're going to make sure that any regulation that goes in place has a cost-benefit analysis to demonstrate whether it's the right regulation. And what's a cost-benefit analysis? It's basically what we encoded into DoCRA. It's certainly that third principle. Whatever we do, we're going to make sure that we've looked at the potential of harm to all people.

But we're also going to make sure that the control that we require in a regulation doesn't create an outsized harm to the industry; there has to be a way that balances everything. And since 1993, Executive Order 12866 has been cited over and over and over again as the basis for other executive orders. Because it says regulations may only work if there's a balance between the public interest and the burden of the control that would affect it.

Kip Boyle: Okay. So conceptually we're in the same park, but I think the approach here is wildly different. We're talking about a non-regulatory approach so far, but do you think it's going to become a regulatory approach?

Jake Bernstein: I think it already is.

Kip Boyle: I'm not sure that it's reached the level that DoCRA is trying to get to.

Jake Bernstein: So part of the reason there is that there just haven't been a ton of cases litigated. But I think the FTC, at the end of the day, whether they say it outright or not, is applying a form of DoCRA when it says something is reasonable or not. Partly it's because there's no other way to really do it. We credited Chris with inventing DoCRA, but really he didn't. He restated well, for this particular audience, what the law has always been about: harm to others. This is not necessarily a new concept.

Kip Boyle: No. It's not. But there's something about the EPA and the way that it works in the United States that I don't think we've gotten to yet, which is environmental impact.

Jake Bernstein: Yes. Sorry. Yes. We do not yet have an EPA for cyber. And what you were going to say was, you were about to say environmental impact assessment. That is a formalized version of DoCRA specific to the environment, and most states have them. You always hear about the State Environmental Protection Act and the different assessments that have to be done.

I do think that you see that in a few places. For example, the GDPR in Europe has a data privacy impact assessment, or data protection impact assessment. That's essentially the exact same thing as what the EPA requires. But there's nothing quite like that in the US yet.

Kip Boyle: And focused on cyber security-

Jake Bernstein: Or cyber security and privacy, right?

Kip Boyle: ... and privacy. But Chris, do you think that we're on track to getting something like that?

Chris Cronin: Well, I don't think so, because we've never seen a successful federal action, whether in Congress or anywhere, to get a unified understanding of what cybersecurity even is. But you pointed to something that I've been very intrigued with, and this has to do with, again, why I just sat down and worked out DoCRA to begin with. When you look at the lack of a regulatory agency for cyber security, you also see a lack of lobbyists in cybersecurity. You don't have cybersecurity lobbyists.

Jake Bernstein: That's really interesting. You don't.

Chris Cronin: So, what's happened is, when you look at regulations for any industry, we mentioned EPA, but FAA, FTC, FDA, what happens is that the lobbyists go against the regulators and they do the regulatory impact assessments. And then the lobbyists go back to their constituents, the industries, and say, "This is how you're going to apply this rule." That creates an inefficient negotiation rather than a clear risk analysis.

In effect, they're doing their cost-benefit analysis as a financially interested debate that has something to do with putting money in the coffers of legislators who want to get back to office. What we're doing with DoCRA is saying, "Let's remove that. Let's not get into this battle of a lobbyist and a regulator sort of hashing things out. Let's actually use the tools that regulators gave us."

Because you look at the HIPAA Security Rule, the Gramm-Leach-Bliley Act, and the New York Department of Financial Services regulations. We see it with CCPA. They're all saying, get to risk appropriateness, or get to a reasonable application of the safeguard. So as long as they're telling us, "You draw a line on what's reasonable," then all we have to do is articulate carefully what the regulator would expect to see.

I want to see that you thought through the likelihood of harm to yourself and others. I want to be sure that you thought through what a non-redressable harm would be, in other words, acceptable risk. And I want to know that your controls are not more burdensome than the risks. And if you show me that, then I don't have a leg to stand on as your regulator.

So, because there's this void of regulatory oversight and a void of lobbyists sort of complicating things as a negotiation, we still have this gift to define our own level of risk. And I'll just say in practice before I turn this back over, is that what we've seen with regulators too, is that even after a breach, when they're presented this risk analysis, I'm talking to the federal and state regulators, they drop their oversight.

They say, "Good. That was reasonable. I'll let it go." That's been the most interesting thing. On a case by case basis, regulators just walking away and saying, yes, there was a breach, you showed that your controls are reasonable. Done.

Jake Bernstein: That's true. I think that is what's been happening, and I'm not sure what it's going to take. People have talked about the cyber 9/11 for years now, and I think it would take a cyber 9/11 before there was some kind of more in-depth government action. What do you think, Kip?

Kip Boyle: I'm wrestling with this because I'm trying to think of a case where I heard of a regulator saying, "You were reasonable. Move along. There's nothing here to see."

Jake Bernstein: So you don't hear about those because they don't make the news. The only reason we know they happen is you and I actually did a presentation with an FTC official who told us, "Yeah, there's times when we just drop things and we dropped them when it looks like they've been reasonable."

And here's why: as a regulator, what are your options? You really have two options. You can either settle a case, or you can try to win in court. The nature of a settlement is that the other side is agreeing with you to some degree. Right?

Kip Boyle: Yes. Yeah. And we've talked about this in a previous episode. Why does the FTC settle so much?

Jake Bernstein: We have, yes. We've talked about that exact thing. But in order to win in court, you're going to have to go and make the case that such and such was unreasonable. Now, if you're the FTC and you find out that someone has done something that you think is somewhat reasonable, your odds of being able to convince a court or a jury that they did something wrong have just plummeted.

Because we've laid out multiple principles. We've talked in depth about a lot of this stuff, but at the end of the day, truly the most distilled way to say this is, can you convince a jury that the party who ... let me start over on that. Can you convince a jury that someone did something wrong? That honestly is kind of the shortest way of saying it. And there's a lot of ways to kind of unpack that and build it out. And that's what we've been talking about, but that is the core. Go ahead.

Kip Boyle: Jake, I assume that most regulators who drop an action do so because they can't prove that something bad happened, not because they've come to the conclusion that somebody was reasonable.

Jake Bernstein: What's the difference though?

Kip Boyle: There's a big difference, I think. But I don't know whether it's material to this conversation, whether it's sufficiently different. That's what I'm thinking about right now.

Jake Bernstein: Okay. So let me try to summarize this then real quick, which is, I think what you're saying is we've never seen a situation where a regulator has come out and said, that is reasonable and therefore there's nothing for us to do. You're right. That doesn't happen.

Kip Boyle: I would love to see that.

Jake Bernstein: That would be interesting. So, there actually is, now that you've got me thinking about this. That's what the FDA does. Basically, the entire process of getting an FDA clearance is essentially affirming a drug or a device's reasonableness. And what are the crosstalk. It's safety and efficacy. When the FDA clears something for use, it has basically affirmed safety and efficacy.

And we're less concerned about safety in a way, just because we're talking about cybersecurity versus medical devices or drugs, although obviously safety-

Kip Boyle: They're converging.

Jake Bernstein: They're converging. They're converging. But nonetheless, the idea is wouldn't it be interesting if there was some affirmative mechanism where you could get clearance? The problem though is that fire doesn't innovate.

Kip Boyle: Damn. You're using my title against me.

Jake Bernstein: Because here's the thing, and we've said this before: cybersecurity isn't something you buy, it's something you do. If you think about a drug, a drug is the same. When you get approval for the drug, that drug doesn't change. Assuming that you manufacture it the same, and we can gloss over some of this, but fundamentally, that drug isn't evolving, it's not changing. Whereas the threat that cybersecurity is trying to protect against is constantly evolving.

So the best a hypothetical FDA for cybersecurity could do would be to say, at this point in time, you're reasonable. That's not necessarily that helpful, because what about tomorrow? I don't know.

Chris Cronin: Okay. So we actually have positive evidence of something that solves that problem.

Jake Bernstein: Really?

Chris Cronin: Yeah.

Kip Boyle: Is it crosstalk by any chance?

Chris Cronin: It does. It's kind of funny you should say that. So the State of Pennsylvania released the decision on their suit against Expedia and Orbitz, which suffered a breach. It's interesting why the breach happened. But there was a breach of information from Orbitz's applications, which Expedia had acquired. And the injunctive terms that the Pennsylvania Office of Attorney General provided for Expedia to behave better were DoCRA principles.

So what they're saying is, you're going to make sure in your program that you're going to base your controls on a risk evaluation that involves yourself and other interested parties. You're going to have a definition of acceptable risk based on the foreseeability of redressable or non-redressable harm, and your controls will not be more burdensome than the risks. And you're going to evaluate that through a risk analysis, and your program is going to be driven by it.
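To make that balancing test concrete, here is a minimal sketch of the kind of evaluation Chris describes. This is our illustration, not DoCRA's actual worksheets or scales: the 1-to-5 scoring, the acceptable-risk threshold, and the scoring function are all assumptions. Only the shape of the test, that residual risk must fall within acceptable risk and that a safeguard must not be more burdensome than the risk it reduces, comes from the discussion.

```python
from dataclasses import dataclass


@dataclass
class Risk:
    name: str
    impact: int       # hypothetical 1-5 scale: harm to yourself AND other interested parties
    likelihood: int   # hypothetical 1-5 scale: foreseeability of that harm

    @property
    def score(self) -> int:
        # Simple impact x likelihood scoring, a common convention (an assumption here)
        return self.impact * self.likelihood


ACCEPTABLE = 8  # hypothetical "definition of acceptable risk" the organization sets


def safeguard_is_reasonable(current: Risk, residual: Risk, burden: int) -> bool:
    """In this sketch, a safeguard is reasonable if it brings the risk down to
    the acceptable level AND is not more burdensome than the risk it reduces."""
    return residual.score <= ACCEPTABLE and burden <= current.score


# Example: a safeguard (say, encryption) that lowers likelihood from 4 to 1
before = Risk("unencrypted customer data", impact=5, likelihood=4)  # score 20
after = Risk("unencrypted customer data", impact=5, likelihood=1)   # score 5
print(safeguard_is_reasonable(before, after, burden=6))
```

In this toy run the safeguard passes: residual risk (5) is within the acceptable threshold (8), and the burden (6) is less than the risk it addresses (20). A safeguard whose burden exceeded the risk score would fail the test even if it reduced risk.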

Kip Boyle: Okay. Jake, you have to get the citation for that case and study it because we should do a whole episode on it.

Jake Bernstein: I think we probably will. I like that idea. One thing that I have to point out immediately, though, is that what that injunction does is really not any different from what the FTC consent decrees and decisions say. Which is, you have to follow this process, because DoCRA ultimately is a process. And that is significantly different from what the FDA does when it approves a specific drug for use and calls it safe and effective.

That is fundamentally a different thing. What we're saying in the cybersecurity context, at least right now, is that the best we can do is say, "You must follow this process, and then we can audit you to make sure you're following the process." That has potential value, but we can't ever say that you will be reasonable in the future. We could say that you're reasonable right now, because we checked and you're following this process, but it is just different. And by the way, I don't think that's ever going to change.

Kip Boyle: I think you're right.

Jake Bernstein: Because fire doesn't innovate.

Kip Boyle: Yeah. I think you're right. I think you have successfully convinced me that it wouldn't be all that useful. However, what I do think is interesting here, particularly in the case that Chris brought to our attention, is the emphasis on how your controls affect other stakeholders. That, I think, is innovative and necessary and a welcome emphasis in the broader conversation. I like that.

Jake Bernstein: Well, and not just that, I think if you don't do that, you're probably going to fail at being reasonable.

Kip Boyle: Well, I think as reasonableness evolves, I agree with you on that. But I just don't remember the FTC, in any of its settlements and consent decrees, really putting a big emphasis on the effect on outside parties quite the way that DoCRA seems to be doing it. I really like this because, as I said, when a factory discharges wastewater with heavy metals into a river, that's not okay. And it's not okay for people to play fast and loose with information assets that can cause harm to other people if they get out of control.

Jake Bernstein: You're right. And so the FTC hasn't explicitly said that in any of its current consent decrees. However, remember, and we've talked about this, the section of law that the FTC is using to enforce cybersecurity. That would be the unfairness prong of the FTC Act, currently codified at 15 U.S.C. § 45(n). And that actually does say something is unfair if there has been harm to consumers that the consumer couldn't reasonably avoid, and that isn't outweighed by some countervailing benefit to competition or commerce.
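For readers who like to see a legal standard laid out explicitly, the three-part unfairness test Jake paraphrases can be sketched as a simple boolean check. This is our illustration of the statutory structure, not anything the FTC publishes, and the parameter names are our own:

```python
def is_unfair(substantial_injury: bool,
              reasonably_avoidable: bool,
              benefits_outweigh_injury: bool) -> bool:
    """Sketch of the three-part unfairness test in 15 U.S.C. § 45(n):
    an act or practice is unfair only if it (1) causes or is likely to
    cause substantial injury to consumers, (2) which consumers cannot
    reasonably avoid, and (3) which is not outweighed by countervailing
    benefits to consumers or to competition."""
    return (substantial_injury
            and not reasonably_avoidable
            and not benefits_outweigh_injury)


# A breach exposing data consumers could not have protected themselves against:
print(is_unfair(substantial_injury=True,
                reasonably_avoidable=False,
                benefits_outweigh_injury=False))
```

All three prongs must be satisfied; if consumers could reasonably have avoided the harm, or the countervailing benefits outweigh it, the practice is not unfair under this section, however unwelcome it may be.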

Kip Boyle: Jake, it always warms my heart when you cite your sources.

Jake Bernstein: I know, but if you scorned and then change your perspective, isn't that DoCRA?

Kip Boyle: But I like it. I like what DoCRA's doing because it's elevating that. It hasn't been as prominent in the FTC's work, although it's there. And now what we're talking about is making it more prominent. And now I can actually go to my customers and say, "You can't just think about how this affects you. And here's why."

Jake Bernstein: That's totally right. And I feel like I need to be really clear. I love DoCRA. DoCRA is everything.

Kip Boyle: There goes your independence.

Jake Bernstein: But I guess what I'm trying to say, and what I've been saying, is that DoCRA isn't just a nice way of putting things, and it isn't a new thing. What it is, is a nice way to repackage principles that have been ensconced in law for centuries.

Kip Boyle: Well then, we're in violent agreement.

Jake Bernstein: Yes, we are. Let's violently agree some more.

Kip Boyle: Nobody wants to hear that anymore.

Jake Bernstein: No. And we're also getting ... this is also becoming a marathon episode.

Kip Boyle: It is. It is. So I want to circle back around for a moment because I want to retrace steps and just go back to something that Chris was trying to tell us about, which is the principles. That DoCRA has three principles and 10 practices. And I don't think we have time to go through all of them right now. We looked at the principles. But Chris as we wind the episode down, what final thoughts do you want to share with our audience about DoCRA that you want them to think about when they stop listening to this episode?

Chris Cronin: Yeah. That's a great setup for something that's really been on my mind for many years. When I look at risk assessment methods, I almost always see people applying them in a very inwardly focused way: how could I be hurt by this risk? And no cybersecurity breach, or a resulting suit or regulatory action, ever happened because an organization hurt itself.

And when I started on this endeavor, to say, "Hey, we're going to work with organizations to help them evaluate risks, not just to themselves, but to other people," I was a little reserved, and I had to ask myself, why am I feeling reserved about putting someone else's harm in this? We tend to think of American organizations, certainly for-profit organizations, as being selfish, self-motivated.

And that's just been wrong thinking. We've implemented DoCRA, my company HALOCK Security Labs, in more than 100 organizations. And organization after organization goes, "Oh, great, what a relief." And I thought, this is kind of interesting. They see the potential harm to others and the potential harm to themselves together, and they feel relieved.

They're not nervous about it. They don't say, "I don't want to look at harm to others." And so we started to realize that this problem is first solved in the boardroom. It's first solved in front of the CFO, the people who have to balance the harm they might do to others against the potentially difficult impacts of security controls.

That before a regulator ever steps in the room, before a litigator ever steps in the room, the people managing a business are very welcoming of the conscientious decision they've got before them: am I going to hurt someone more than they should be hurt if I do this? Am I going to hurt myself more than I should if I don't do this? And that has been, I have to tell you, not just heartwarming; it's been very affirming to see organizations, for-profit, nonprofit, governmental agencies, saying, "Yes, this is the calculus by which we want to make our decisions."

The myth of the selfish organization that's only motivated by its own purposes just disappears when you put this calculus in front of people. It's essentially a scaled depiction of the golden rule: you're going to do unto others as you would have them do unto you. And by the way, they're going to reciprocate. They're not going to ask you to do more than you should to take care of them.

So that's been the absolute best thing I've seen from DoCRA. Beyond the uptake from regulators and litigators and groups like the Center for Internet Security, it's been seeing organizations embrace the conscientiousness that comes with looking at what their duty of care is on paper and saying, "Yeah, we agree to that."

Kip Boyle: I like the practicality of that. I'm very practically minded, and I think that's super, super practical. I love it. Jake, what do you have to say as we wrap up?

Jake Bernstein: I think that's a really nice way of putting it. I think that most people don't want to hurt other people. We wouldn't have a society if that weren't the case. And I think that even in the Constitution, Article III, which is what lays out the federal courts, in order to go to a federal court, you have to be able to show as a litigant that you've been harmed.

And so I think it's completely natural, then, that when you're doing a risk analysis, you consider the harm you might cause to others. Because even if you're not motivated by pure altruism, you know that it is in your best interest to not hurt other people. And so I think-

Kip Boyle: If for no other reason than that you're hurting your own market.

Jake Bernstein: Exactly. If for no other reason than that if you hurt others, you too will end up getting hurt. And I think that this is the right way to think about cybersecurity risks and privacy risks. Maybe just as much, if not more so, for privacy than for cybersecurity, if you want to draw a distinction there.

Kip Boyle: Obviously privacy depends on good cybersecurity, so there's definitely a connection.

Jake Bernstein: There definitely is. But I think DoCRA, one, it's a great acronym, but even more than that, it's right. It's right in that cosmic sense of this is the right thing to do, and that has a lot of power.

Kip Boyle: Okay, thanks everybody. That wraps up this extended episode of the Cyber Risk Management Podcast. Today we learned about DoCRA, Duty of Care Risk Analysis. It's an approach that helps organizations figure out whether their cyber security controls are reasonable. And we did that with the help of our guest, Chris Cronin. Thanks everyone. We'll see you next time.

Jake Bernstein: See you next time.

Speaker 1: Thanks for joining us today on the Cyber Risk Management Podcast. Remember that cyber risk management is a team sport, so include your senior decision makers, legal department, HR, and IT for full effectiveness. So, if you want to manage cyber as the dynamic business risk it has become, we can help. Find out more by visiting us at cyberriskopportunities.com and focallaw.com. Thanks for tuning in. See you next time.

YOUR HOST:

Kip Boyle
Cyber Risk Opportunities

Kip Boyle is a 20-year information security expert and is the founder and CEO of Cyber Risk Opportunities. He is a former Chief Information Security Officer for both technology and financial services companies and was a cyber-security consultant at Stanford Research Institute (SRI).

YOUR CO-HOST:

Jake Bernstein

Newman DuWors LLP

Jake Bernstein is an attorney and Certified Information Systems Security Professional (CISSP) who practices extensively in cybersecurity and privacy as both a counselor and a litigator.