EPISODE 95

EP 95: What To Do about the Massive Insider Threat?

About this episode

December 21, 2021

There is a massive insider threat in all our organizations according to the Verizon Data Breach Investigations Report (DBIR). Why is that and what should we do about it? Our guest, John Grim, one of the long-time authors of the report, will tell us. Your hosts are Kip Boyle, vCISO with Cyber Risk Opportunities, and Jake Bernstein, Partner with K&L Gates.


Episode Transcript

Speaker 1: Welcome to The Cyber Risk Management Podcast. Our mission is to help executives thrive as cyber risk managers. Your hosts are Kip Boyle, virtual chief information security officer at Cyber Risk Opportunities, and Jake Bernstein, partner at the law firm of K&L Gates.
Visit them at cr-map.com and klgates.com.

Jake Bernstein: So Kip, what are we going to talk about today?

Kip Boyle: Hello, Jake. Listen, this is going to be a good one.

Today we're going to talk to a guest. I love it when we have guests. His name's John Grim. And what we're going to talk with him about is the role of the human element, because he looks at the data that is analyzed for the Verizon Data Breach Investigations Report, and I think this is going to be a fascinating look at this data in a way that, on this podcast anyway, we haven't looked at it before.

Jake Bernstein: That sounds great. John, welcome to our podcast.

John Grim: Thanks. It's good to be here.

Kip Boyle: So, John Grim, I really appreciate you being here. What we found out recently is that you're the head of Research, Development, and Innovation at the Verizon Threat Research Advisory Center. My gosh, that's a mouthful.

Now, it wasn't too long ago that we actually talked with your colleague on a recent episode, Suzanne Widup, and she was a delight. We learned a lot from her.

So you work at Verizon, she works at Verizon, but that's not the same Verizon that I get my mobile phone service from. Is that right?

John Grim: Well, we are one Verizon, but Suzanne and I are in the Verizon Business Group, which supports Verizon wireless and wireline enterprise customers, SMB customers, government entities, folks such as that. So Suzanne and I are looking outwards from Verizon. We're looking to support our customers when it comes to understanding data breaches and cybersecurity incidents.

Kip Boyle: Well, I told Suzanne this, I'm going to make sure you know it, the DBIR is so useful in my work and I'm very grateful for it, and it's amazing that I don't have to pay for this quality product.

Jake Bernstein: Agreed. It's one of the great tools of the security industry, and I think it's such a great service that is being performed here.
So John, about the report: you're not the primary author of the 2020 Data Breach Investigations Report, which of course we'll probably call the DBIR. But you are the primary author of the Insider Threat Report and the Cyber Espionage Report, both of which I did find and download, Kip.

Kip Boyle: Thanks a lot.

Jake Bernstein: So we know that Suzanne Widup gathers, cleans, and reports on the data in the DBIR. What do you do with the data once you get it?

John Grim: Absolutely, good question. I'm actually a contributor to the DBIR indirectly through the caseload that we have on the VTRAC team. So we are one of the 83 contributors to the Data Breach Investigations Report. But my role is more of a presenter or evangelist, talking about what we're seeing in the DBIR in terms of the numbers, and what we're seeing in the VTRAC caseload with the investigations that we conduct on behalf of our customers.

But you indicated I'm the primary author of the Insider Threat Report and the Cyber Espionage Report. Absolutely. Those are sister publications to the DBIR. We're using that data-driven insight to look specifically, in those two cases, at the insider threats as well as the espionage threat actors.

Kip Boyle: Got it. Okay, cool.

Now let's unpack that a little bit, John. So even when you look at the DBIR itself, and certainly when you look at the sister publications that you are really working on, the human element is really present in so many of these incidents and breaches.

And by the way, audience members, an incident and a breach each have a very specific definition in DBIR land. So I'm not using those terms interchangeably.

But John, just how big of a presence does the human element have in all these incidents and breaches?

John Grim: Absolutely, great question. And I think when people think of the insider threat or insider risk, they're thinking of a nefarious threat actor, somebody who's doing something mischievous, something wrong, stealing data or destroying something. But when we look at the human element, which is the greater dataset when it comes to insiders, we're looking at 85% of data breaches in this year's DBIR, the 2021 DBIR, involving the human element in some way, shape, or form.

And if you're wondering what "some way, shape, or form" is, this is looking at the DBIR dataset, or more specifically VERIS, which is the Vocabulary for Event Recording and Incident Sharing, and the action categories used by the threat actors.

So when we look at that 85%, we can break it out into three action categories.

The very first one that comes to mind is social. This is your social engineering, your phishing, your pretexting. In fact, phishing was big this year. It's at 36% of overall breaches.

So that's being driven by an external threat actor, typically with the insider, the end user, or the employee being on the other end of the phishing or pretexting when it comes to social.
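For readers who want to see how this taxonomy hangs together, here is a minimal sketch of how a phishing breach like the one John describes might be coded in a VERIS-style record. The category names follow the public VERIS schema, but the record values and the helper function are our illustration, not actual DBIR data or tooling.

```python
# Illustrative sketch of a VERIS-style record for a phishing breach.
# Field names follow the public VERIS schema (actor / action / attribute);
# the values are invented for illustration, not drawn from the DBIR corpus.
phishing_breach = {
    "actor": {
        "external": {"variety": ["Organized crime"], "motive": ["Financial"]},
    },
    "action": {
        # The 'social' action category discussed above.
        "social": {"variety": ["Phishing"], "vector": ["Email"]},
    },
    "attribute": {
        # 'Alter behavior' captures the manipulated, non-malicious employee.
        "integrity": {"variety": ["Alter behavior"]},
        "confidentiality": {"data": [{"variety": "Credentials"}]},
    },
}

def involves_human_element(incident: dict) -> bool:
    """Rough check mirroring the grouping John describes: any social,
    error, or misuse action present in the record."""
    return any(cat in incident.get("action", {}) for cat in ("social", "error", "misuse"))

print(involves_human_element(phishing_breach))  # True
```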

Kip Boyle: Okay. So that's the first category. And I think a lot of people really miss the subtlety here. So I just want to really highlight this.

So when we talk about the human element, we're talking about people who are otherwise loyal, highly engaged folks. They themselves are not malicious. They don't have malicious intent.

But through social engineering, phishing, pretexting, and that sort of thing, they can be manipulated into becoming the agent of an attacker.

I think that's what you're saying, right?

John Grim: Absolutely. In fact, when we look at the CIA triad, confidentiality, integrity, and availability, and we look at integrity specifically for breaches, we see "alter behavior" at the top. And that is the flip side of the social breach coin. This is that person being manipulated to click on a hyperlink, or divulge their credentials, maybe telephonically, via pretexting on the telephone to a threat actor. They're being tricked or manipulated.

Kip Boyle: Yeah. And so phishing, I think most people probably know, because we've been dealing with phishing for so long now. But pretexting is maybe a new term for people. Pretexting is when somebody misrepresents themselves and gives a false explanation for why they're about to have an interaction with you.

And something I've been reading about lately that I think is a good example of pretexting, but you can tell me if it's true, involves two-factor authentication. Criminals can call you up, impersonate your bank, for example, and suggest that you need to help them fight some kind of fraud on your account. Then they'll actually poke the bank to get an SMS code sent, and they ask you for that code on the phone call. You give it to them, and what you've just done is allow them to get into your account.

Is that a good example of pretexting?

John Grim: That's a good example. Absolutely.

It's probably the oldest trick in the book in terms of social engineering. It predates email: it can be telephonic, in person, and of course, nowadays it can be via email, simply asking questions, or via social media.

So it is pretexting. It's tricking somebody into believing that you're somebody you're not. And the threat actors find it works: it's the second-highest social action variety within our dataset, second only to phishing.

Jake Bernstein: It's a con game. The phrase "pretexting" gives it an air of something it's not. It's lying. These people are lying and they are tricking you.

I think that's important. It's just the verbal, live-action form of phishing.

John Grim: Absolutely. In espionage circles, we would call that a false flag. And I can say, when it comes to the integrity attribute, fourth on the list, after "alter behavior" at number one, is misrepresentation. And that actually rose this year in the dataset. A lot of that has to do with business email compromise, which follows from pretexting.

Kip Boyle: Right. Now, Jake, your previous work was at the state, right? Essentially you were on the lookout for con artists. Wasn't that a lot of what you were doing, dealing with people trying to rip off consumers?

Jake Bernstein: Yeah. And it's interesting because there's a fine line between straight-up fraud and lying, which is generally criminal, and the deceptive practices that I would deal with as a consumer protection lawyer. It's sometimes subtle, sometimes not. But in the context of social engineering, it would almost always be considered a criminal level of deception. They're just lying.

It's not like they're saying, "Oh, that car has 10,000 miles," when actually they've manipulated it and in fact, it has 20,000 miles.

This is straight up lies. They're not even connected to reality. They're just trying to manipulate you like a spy. That's what they're trying to do.

Kip Boyle: Ah. Now, John, you mentioned something a moment ago which made me realize we should be clear with the audience. Your background includes a lot of experience with... Did you say Army intelligence? Is that right?

John Grim: Absolutely. Counterintelligence, both your straightforward counterintelligence, or CI, and cyber counterintelligence, where you've got that technical component to it.

Kip Boyle: Got it. Okay. [crosstalk] Well, I think that makes you uniquely qualified to do this work.

Jake Bernstein: Well, I want to know a little bit more about the difference, as you see it, between normal CI and cyber CI. [crosstalk]

John Grim: Absolutely. "Normal CI," that's just my term. A lot of us refer to it as straight-laced CI, where you're just dealing with everyday people. There may be a computer system involved. But when you're looking at cyber CI, this is your network intrusions. This is external threat actors typically coming into the environment, or attempting to. There might also be an insider threat component as well, but it's really going to be focused on, or involving, a computer network.

Kip Boyle: Okay. Cool.

Jake Bernstein: And I know we still have two more VERIS action categories to talk about, and I want to do that. But I also want to point out, now that we're talking about insider threats, that this is not just malicious people, right?

The DBIR, I believe, categorizes manipulated people also as an insider threat because they are an insider threat. They're inside literally, and when they get manipulated, they become a threat, hence the name.

So I just want to verify that and understand that a little bit more.

John Grim: Absolutely. So we actually use the term internal actors versus external actors versus partner actors.

So external actors do not have any legitimate access to the data, the assets, and the environment. They're outside the building, or half a world away outside the enterprise environment.

The internal actors have some level of trust and privilege to the data, the assets and the environment. Now, they may have a motive with what they're doing, or they may not have a motive, and we're going to get into the second action category here shortly with error, where there isn't a motive.

And then finally, the partner actors are somewhere in between external and internal in terms of their trust and privilege. They have access, maybe remote access, into the victim organization's environment, and they're doing something malicious that leads to a data breach or a cybersecurity incident.

And when you look in the DBIR, and you look how these three different actor types or categories stack up, external, by far, is number one in the data set. Internal is a distant number two in the data set. And then partners are way at the bottom at number three.

And some folks are surprised about the partners, because you read about supply chain attacks, etc. But we really need to put it in perspective here. Who is actually driving that supply chain breach? If it can be tracked back to an external threat actor, such as in the SolarWinds case, then it's going to be coded as an external actor, even though there were multiple partners involved in that particular situation.

Jake Bernstein: Right. And that makes sense to me. A supply chain vulnerability that hits you because an actual external bad actor simply used it as the vector should still be categorized that way. To me, "partner" implies collusion between different threat actors, as opposed to the term partner that people often use when they're talking about business partners, business groups, industry partners, things like that.

So let's move on to the error.

So I'm curious, we've got misconfiguration and misdelivery.

Another word for error is mistake. Is that what we're talking about here?

John Grim: Absolutely. Another word for error is indeed mistake. Typically, the action varieties underneath error start with misconfiguration, as you said, and misdelivery.

Those two combined are 17% of the breaches this year.

And of course, there are all kinds of other error action varieties. But these two, to quote one of the data scientists from the DBIR, are the "burger and fries" of error. These are the ones that we've seen, time and time again, rise to the top of the list.

In fact, to put things into perspective, the number of error breaches this year is higher than last year, but we have over 1,200 more breaches overall that we're looking at. So frequency-wise, error is actually down for the first time in four years, even though the total number of error breaches has increased since last year.

Jake Bernstein: I'm not sure how to take that statistic. I think it's good, but maybe... I guess one takeaway is that error is down only as a percentage of the whole, but it is objectively going up.

Interesting. Hard to know. It is hard to know.

John Grim: Absolutely. To put it into perspective, and this has to do with the denominator of the dataset: social breaches, the first action category that we talked about, have actually increased since last year. They're up 11 points from 25%. And that has actually pushed down error's share. So you can look at it that way. There are more error breaches, but less frequency, because there's been movement within the denominator of our DBIR dataset.
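To make the denominator arithmetic concrete, here is a quick sketch with invented round numbers (not actual DBIR counts). It shows how a category's raw count can rise while its share of all breaches falls, simply because the total grew faster.

```python
# Toy figures only, chosen to illustrate the denominator effect John
# describes; they are not the actual DBIR counts.
last_year = {"error": 880, "total": 4000}
this_year = {"error": 900, "total": 5200}  # ~1,200 more breaches overall

for label, year in (("last year", last_year), ("this year", this_year)):
    share = 100 * year["error"] / year["total"]
    print(f"{label}: {year['error']} error breaches = {share:.1f}% of {year['total']}")

# Output:
# last year: 880 error breaches = 22.0% of 4000
# this year: 900 error breaches = 17.3% of 5200  (count up, share down)
```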

Kip Boyle: And it's getting technical now.

Jake Bernstein: Well, do you think that has to do with the pandemic, the work-from-home shift, and just the overall increase in threat actor activity over the last year?

John Grim: Absolutely. So we did indeed see that increase in social breaches in the era of COVID-19. Not to blame the virus, but folks are working from home, working from anywhere, so they're a little bit out of their element.

Now keep in mind, the dataset we had in the 2021 DBIR runs from roughly March of last year, when COVID really started here in the United States, to October 31st, which is the cutoff. So we have roughly six months of COVID-19 era data in the DBIR.

This next DBIR coming up will have the full 12 months. So what's interesting with this... [crosstalk] Yeah, absolutely.

To put it into perspective: folks are working from anywhere, so they're vulnerable from that standpoint. They may not be used to thinking security at home like they did at the office. Maybe they're using a laptop instead of a desktop, or maybe their mobile device.

The threat actors know this. They know that people are out of their element, and they can target them with their phishing emails. But they're also targeting them from the standpoint of COVID-19 being top of mind, or insurance being top of mind, or benefits being top of mind in terms of COVID-19. So they're very slick in that regard, crafting their emails to be something that's really click-worthy and getting people to click.

And they're setting up those fake websites, fake websites with more information on COVID, for example, that are simply looking to drive traffic so that, on the back end, they can harvest credentials. And they know that a lot of users use the same passwords or similar passwords across their accounts. If you've got single-factor authentication at play, then the threat actors are as good as in if the password they've harvested matches the enterprise password.

So these are the kinds of things that we're seeing in terms of the external threat actors being able to exploit the human element.
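The episode doesn't prescribe a fix at this point, but one common mitigation for the password-reuse problem John describes is screening passwords against known breach corpora. Below is a minimal sketch using the Have I Been Pwned "Pwned Passwords" k-anonymity range API; the example password is hypothetical, and only the first five characters of its SHA-1 hash ever leave the machine.

```python
# Minimal sketch: check whether a password appears in known breach corpora
# via the Have I Been Pwned k-anonymity range API. Only the first five
# characters of the SHA-1 hash are sent; matching is done locally.
import hashlib
import urllib.request

def times_pwned(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "password-screening-sketch"},
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    # Hypothetical password; a nonzero count means it should never be allowed.
    print(times_pwned("Winter2021!"))
```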

Kip Boyle: And the third VERIS action category is misuse. And I'm already fatigued from the fact that there's social and error, and now it's like, "Oh my gosh, let's heap it on."
All right, John, tell us about misuse.

John Grim: Okay, so let's heap it on. This action category, misuse, is what the impetus was for the Insider Threat Report that we did a couple of years ago. This is probably closest to what people think of when they're thinking of insider threat. We have to have an internal actor or a partner, plus the misuse action. And even then, we don't necessarily have a malicious insider threat.

So what are we talking about here?

Well, for misuse, you have to be misusing or abusing something that you were granted in trust, data, software applications, your device, and it leads to a breach. So we've got that out of the way.

But when we look at misuse, there isn't always malicious intention behind what leads to a data breach. It could be an internal actor that's using their device and cutting corners, leveraging shadow IT to get the job done. They're not necessarily intending for a breach to occur, but because of those actions, it leads to a data breach. So that would be an unintentional insider.

But the more nefarious ones are the ones who are disgruntled, or they're looking to steal data for espionage purposes, or maybe for their next job. That's when we get into the really malicious insiders, the insider threats that people really think of when they hear that term.

Kip Boyle: And there was one in the US Navy recently. Somebody who was part of the nuclear submarine program was reported to have tried to sell secrets, which ultimately, they were selling to an FBI undercover agent, if I remember correctly. But that's the classic situation you're talking about.

John Grim: Yes. Absolutely. I believe it was a camera media card that was in a peanut butter and jelly sandwich, or a peanut butter sandwich, which is classic old-school counterintelligence...

Kip Boyle: Trade craft, right?

John Grim: ... Trade craft. Absolutely.

Jake Bernstein: So one thing on this that I think is worthwhile to discuss. I'll start by saying that one of the things I counsel my clients on is, "Hey, look, I know the focus tends to be on personal information, personally identifiable information, and personal data because of the privacy laws, GDPR, CCPA." And people don't want to get hit by fines or penalties or lawsuits.

But I also often remind people, "Hey, look. If you've got trade secrets, if you've got proprietary information, if you've got customer data, there are bad guys who want that too."

And what I'm wondering is, do you think it's just not as widely reported or what would you say is the risk of this type of espionage, basically?

Should the average company be concerned about that? Or is that something that requires someone to be more targeted in their approach, as opposed to ransomware as a service? We're often trying to tell people, "Look, you don't have to be targeted to be a victim of cyber attacks."

But on the other hand, it does seem like, for certain types of cyber attacks, maybe those don't happen unless you're targeted.

So do you have any insight on that question, which I know I just totally threw at you?

John Grim: Well, that's a good question. Espionage is interesting. Every organization has proprietary information that they're looking to protect, but some industries are hit more by espionage threat actors. Public administration, which is the government, manufacturing, and mining and utilities we see higher in terms of espionage.

When we compare espionage to financial in terms of the motives, it's really interesting to see how the data shakes out when it comes to exfiltration.

So for example, financially motivated threat actors are targeting your PII, your PCI, your PHI, your banking information, as well as credentials.

When we look at the espionage threat actors, they're targeting different data types, with the exception of credentials and internal data. They're targeting trade secrets almost three out of four times. They're targeting government classified information. They're targeting system information.

And so why are they targeting that? Well, they're looking at that data from the standpoint of competitive advantage. It's more of a closed-loop system where they're looking to use that data for their own purposes, whether it be for eventual economic competitive advantage, or for nation-state advantage when it comes to national security.

Now, the financially motivated threat actors are looking to monetize that data rather quickly. So if cyber defenders are not detecting the exfiltration of the financial data, someone's going to potentially miss that data, miss the money. Consumers are going to see fraud on their credit card statements. Dark web watchers are going to see that data being sold or offered, or even access to the environment. Whereas espionage actors are not going to be selling that data. They're going to use it for that advantage.

The financially motivated data tends to be the data that's regulated, so there is mandatory compliance for securing it and mandatory reporting for its disclosure. You're going to see that data more in our dataset for those reasons. When it comes to espionage, that data generally doesn't have compliance behind it, in terms of securing it or reporting it. So it may be underreported in the dataset for that reason. But it's also due to the nature of the espionage threat actors: it's probably underdetected, and therefore underreported as well.

Jake Bernstein: Got it. That's a sobering thought actually, isn't it?

John Grim: It absolutely is.

Jake Bernstein: Basically, if they're good, and we already know how good threat actors are in general, just because of the way they're able to be so effective with brute-force ransomware attacks...

It's a little terrifying to think about: what if they don't want to be detected? How effective have they been? That's very sobering. I want to repeat that: I think that probably is an alarm bell that most people should start thinking about.

And I would also venture a guess based on my own experience that detection capabilities in the marketplace writ large, I should say in industry, in the economy, are not good. They're generally pretty weak. And I think that just adds to this concern that, "Hey, this is under reported. Not because it's not happening, but because no one knows that it's happening."

John Grim: It's a good point. And why don't they know? Well, it's the tools. You're looking at espionage threat actors who are, to use a cliche, living off the land, blending into the forest. They're leveraging tools already in the environment for illegitimate purposes, which really does hamper detection, because they're using tools that the organization would expect to be used and to see in their logs.

But it also hampers detection because they're not bringing in malicious software that could be detected. They're living off the land, blending into the forest. It also hampers incident response efforts. As a forensics investigator, looking through log entries trying to determine who did what is certainly challenging. It slows things down.
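To illustrate why living off the land is hard but not impossible to catch, here is a toy sketch of one heuristic defenders use: flag legitimate admin tools when they run in an unusual context, such as being spawned by an Office application. The tool names, the parent-child rule, and the sample events are illustrative assumptions, not a vendor rule set.

```python
# Toy heuristic for living-off-the-land activity: legitimate tools are
# normal on their own, so look for unusual context instead, e.g. an
# Office app spawning a shell. Names and events are illustrative only.
SUSPICIOUS_PARENTS = {"winword.exe", "excel.exe", "outlook.exe"}
ADMIN_TOOLS = {"powershell.exe", "cmd.exe", "wmic.exe", "certutil.exe"}

def suspicious(event: dict) -> bool:
    return (event["process"].lower() in ADMIN_TOOLS
            and event["parent"].lower() in SUSPICIOUS_PARENTS)

events = [
    {"parent": "explorer.exe", "process": "powershell.exe"},  # routine admin use
    {"parent": "winword.exe", "process": "cmd.exe"},          # phishing payload pattern
]
for e in events:
    if suspicious(e):
        print(f"flag for review: {e['parent']} -> {e['process']}")
```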

Jake Bernstein: Well, I can come up with mechanisms or stories in my head where it would be virtually impossible to detect. If a threat actor is using a compromised account and they're smart about it, and they're using it more or less as the real person would use it, good luck detecting that.

John Grim: Absolutely. And it's interesting you used that example of using a compromised account. When we looked at the Insider Threat Report a couple years ago, we actually illustrated that insiders are not just human beings, but they're also their accounts. So if an external threat actor has taken over an account that had trust and privileges granted to it in terms of the user, then you've got an insider threat there, all of a sudden from an external point of view.

And we also looked at the supply chain from the standpoint that insiders could be a piece of hardware that's made it into your enterprise environment. Perhaps it's got IP addresses that it's speaking out to that have been hard-coded into the firmware of the system. We've seen those kinds of situations as well.

Kip Boyle: It's insidious.

Jake Bernstein: That's a very good point.

Kip Boyle: Absolutely insidious. And beaconing. You mentioned that.

Man, we could have a whole episode just on the fact that almost all networks have no egress filtering and no egress monitoring. And the reason attackers have started to use beaconing to establish persistent access is because it's just not paid attention to, generally speaking.
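Kip's point about egress monitoring can be made concrete: command-and-control beacons tend to call home on a near-fixed interval, so the gaps between outbound connections to a single destination show unusually low jitter. Here is a minimal sketch of that idea; the timestamps and the jitter threshold are invented for illustration.

```python
# Minimal beaconing heuristic for egress logs: near-constant gaps between
# outbound connections to one destination suggest an automated check-in.
# Timestamps and the 10% jitter threshold are illustrative assumptions.
from statistics import mean, pstdev

def looks_like_beacon(timestamps: list, max_jitter: float = 0.1) -> bool:
    if len(timestamps) < 4:
        return False  # too few observations to judge periodicity
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) < max_jitter * mean(gaps)

# Seconds at which one internal host connected to one external IP:
human_browsing = [100.0, 130.5, 310.2, 340.9, 902.7]    # irregular
implant_checkin = [100.0, 400.1, 700.0, 999.8, 1300.2]  # ~300 s apart

print(looks_like_beacon(human_browsing))   # False
print(looks_like_beacon(implant_checkin))  # True
```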

Kip Boyle: Okay, this is a great conversation. So the human element is absolutely present in these incidents and data breaches, 85% of them, is what you're telling us, John. And to top it off, it's insidious. It's exceptionally challenging to even know that it's happening, to detect it, to realize that you are experiencing it. In the case of espionage, you may not realize that a bad turn of events for you a year or two down the road could be due to the fact that somebody stole all your proprietary data and then formulated a competitive strategy that blew you out of the water.

Did I do a pretty good job of summarizing where we've gotten to at this point?

John Grim: Absolutely.

So to put it into real context with an anonymized case here: we had a situation where we were asked to come in and do an investigation for a customer who didn't believe they were breached, but they saw a competitor roll out a piece of equipment very similar to what they had been developing. And as we did our forensic investigation, we determined that an engineer in their R&D department had been socially engineered via email phishing from, ostensibly, a recruiter. The engineer had clicked on the link, and the threat actors were able to gain access to the system, find computer-aided design drawings, and copy those out of the environment.

Nobody detected it at the time, until somebody saw a similar product being launched by a competitor. And it was strange to the person who noticed it, because the competitor didn't typically work with that kind of hardware, and it looked very similar to what they had been developing. So it's a classic example of insider threat with the espionage component there.

Kip Boyle: And you guys were actually able to piece that together and prove that's what happened, right?

John Grim: Absolutely. Through timeline analysis and looking at all evidence sources, to include interviews of people. Just to give the audience here an idea of what we would look at: we would look at network traffic. We would look at the endpoint systems, volatile data and memory dumps, as well as browser information and email information. And then we'd also look outside the wire at NetFlow data, as well as looking to see if we could find anything on the dark web. We wouldn't typically expect to see espionage information on the dark web, but we're going to check that anyway, just to be sure.

Kip Boyle: Okay. So we talked about how difficult it is to detect this, but they did detect it. They did scratch into it. Can you share anything? Do you know anything about what they did once it was confirmed? Let's talk about prevention and mitigation. What did they do?

John Grim: So I'll take a step back here. When it comes to espionage threat actors versus financial threat actors, take PCI, for example: if you find that RAM scraper, that malware going after PCI data, your containment and eradication is going to roll forward rather quickly. If you're looking at an espionage threat actor, by their very nature, even once you've detected them and confirmed that they're in the environment, they may not be using malware, or they may be using legitimate tools, or they may have other ways of moving laterally and maintaining persistence within the environment.

So I think folks in the security industry will realize that to really know for sure that the threat actors are out of your segmented network environment, you may have to go from scratch and rebuild and restore, to be sure.

And then you want to continue monitoring afterwards to make sure that you're not seeing any suspicious ingress or egress traffic, because cyber espionage threat actors really do enjoy those back doors. And they also enjoy the command and control that comes with their activities.

Kip Boyle: Right. Okay.

So you've artfully dodged my question a little bit about what these people did in response, but talking generally, that makes a lot of sense.

I got to imagine that their minds were completely blown when you actually brought back the evidence that they had been the victim of espionage.

Gosh, that's awful. I feel bad for them.

Okay. So let's keep going because there's still some more really interesting information to unpack with you today. And so let's continue this conversation about prevention, mitigation response.

So John, in a atypical organization, who do you think is most responsible, from a position, for managing this human element, this insider threat?

John Grim: That's a good question. Insider threat cases are interesting because, from a technical standpoint, they're typically not too complex. The espionage ones can be complex; you've got advanced threat actors.

Insiders are using the trust and privilege that they've been granted. They're probably not even using malicious software. But it's complex from the standpoint of, you've really got to involve different stakeholders, more so than a malware outbreak.

So what do I mean by that? Well, for insider threat, yes, absolutely. It is an IT security problem. Hopefully, there's an insider threat manager, but you've also got human resources. You've got your legal team. You may have to have physical security involved. So you've got a bunch of stakeholders who are not necessarily as technical as the cyber security folks or the IT security team, but they do have a stake. They do have a role in the insider threats investigation.

You would likely also, to expand on the legal piece there, have outside counsel that you're working with as well.

Kip Boyle: Yeah. If only we had a lawyer on the show to tell us whether that was a good idea.

Jake Bernstein: I know. That would be nice, wouldn't it?

Kip Boyle: Okay.

So John, I think what you're saying is this is a cross-functional issue, and so we shouldn't be surprised to see that we need to have people from many different disciplines participating in the prevention and the mitigation and the response.

But would you lay primary responsibility at the feet of any particular job title? Is it the CSO's primary job? Is it HR's primary job?

Jake Bernstein: It's not outside counsel's primary job. I can promise that.

Kip Boyle: Thank you.

John Grim: If there's an insider threat manager, I've seen those folks sitting underneath the CSO and reporting to the CSO. I've also seen them reporting to legal, or even under human resources. So it really does depend on the organization. And really, those folks tend to be the ones that we see most involved with insider threat cases. They all have a big piece of the pie in their own right, in terms of their roles and responsibilities.

Kip Boyle: Well, I can tell you as somebody who had the CSO title at an insurance company for seven years, I was daunted by this insider threat issue, the human element in all of these, because of that cross-functional aspect of it. It was much harder for me to get traction on that issue than it was for me to go play with access controls or audit firewall rules, you name it. The technical stuff that we can do and that we should be doing was so much more accessible to me and to my team. It felt more satisfying. We could get faster closure.

And so I guess I'm confessing that I struggled with this, because it's not an easy thing to do. I just don't know how many people actually embark on doing a lot of preventive things.

Maybe in giant enterprises, especially if you're part of the defense industrial base, it probably is a little bit more straightforward. But I was working at a mid-sized insurance company. We were doing about $500 million of business a year. And even though we weren't necessarily concerned about espionage, phishing and everything else was still an issue. It was just hard to get traction with it.

I remember having a conversation with our HR representative about personnel reliability. The case I was making to her was, "Okay. So we hired this person eight years ago and their background check was very good, but in eight years now they're teetering on bankruptcy and they're about to lose their home. Now they've got motivation to steal policyholder information, possibly sell it, convert it into cash, whatever. Shouldn't we be doing something on a regular basis, every few years, to just make sure that people still fit our criteria for personnel reliability?" And she looked at me like I was crazy.

Jake Bernstein: From an external perspective, what this feels like to me is the graduate-level class in cybersecurity, where most companies and most people are still trying to get into college. They're so far behind, or this is so far ahead of where most people are, that it's probably hard to even comprehend.

What you're talking about sounds more like military-level intelligence, counterintelligence, nation-state stuff. And the principals, that's where they come from, all that thinking, John being a prime example. Where would you learn to do this other than the military, police, things like that? I don't know that there's a clear answer.

John Grim: And that's a good point. There's not a clear answer. I've seen folks get into this field from that background of counterintelligence, or maybe law enforcement. But that's not necessarily the case anymore. There are also folks getting into the role from, maybe, HR, and they're becoming more involved in the insider threat program.

And let's not forget that with insider threat, we've only been talking cyber here. There's also the disgruntled worker who may be destroying property, or may be part of workplace violence as well. So you've got another element there, and you may have physical security involved as well, coming over and being part of the insider threat program.

Kip Boyle: Yeah.

Well, I think one of the big takeaways from the episode today, as we start to wrap it up, is that if you're responsible for, or playing a role in, protecting digital assets and information assets in your organization, you really need to widen your aperture here. You have to recognize that 85% of the things that are in the DBIR have some human element to them. And so you're not going to solve this problem just by continuing to fiddle with the settings on your security devices. As Jake likes to say, the blinky-light security just isn't going to get you where you need to go. You've got to find some way to get traction with the human element.

So this has been a fantastic conversation.

Jake, any last questions for John while we have him here?

Jake Bernstein: Many, but we don't have time for them.

Kip Boyle: Okay. All right. Great. And listen, John, we're so glad you were our guest. If you have any closing words, we'd love to hear from you. We'd also love to know if anybody wants to connect with you, how can they find you on the internet?

John Grim: Absolutely. So my final word here is: know your assets, know your people, and know who needs access to those assets and that data when it comes to insider threat. And don't forget about the non-technical detection mechanisms. Sensitize your workforce to be on the lookout for disgruntled employees, or folks who are bragging about accessing the CEO's email when they should not be doing that.

You can certainly reach me on LinkedIn at JO Grim. Happy to link in with folks. And if you have any questions, I'm happy to answer those questions as well.

Kip Boyle: John, you're very generous to join us today and to be our guest. Thank you.

And that wraps up this episode of The Cyber Risk Management Podcast. And today, we talked about the massive insider threat in all of our organizations, whether it's malicious or manipulated, and what you should do about it. And we did all that with the help of our guest, John Grim.

Thanks everybody. We'll see you next time.

Jake Bernstein: See you next time.

Speaker 1: Thanks for joining us today on The Cyber Risk Management Podcast. If you need to overcome a cyber security hurdle that's keeping you from growing your business profitably, then please visit us at cr-map.com.

Thanks for tuning in. See you next time.

YOUR HOST:

Kip Boyle
Cyber Risk Opportunities

Kip Boyle is a 20-year information security expert and is the founder and CEO of Cyber Risk Opportunities. He is a former Chief Information Security Officer for both technology and financial services companies and was a cybersecurity consultant at Stanford Research Institute (SRI).

YOUR CO-HOST:

Jake Bernstein
K&L Gates LLP

Jake Bernstein is an attorney and Certified Information Systems Security Professional (CISSP) who practices extensively in cybersecurity and privacy as both a counselor and litigator.