EPISODE 27
What’s at the intersection of AI and cybersecurity?



About this episode

May 14, 2019

Kip Boyle, CEO of Cyber Risk Opportunities, talks with Jake Bernstein, JD and Cybersecurity Practice Lead at Newman DuWors LLP, about three things that cyber risk managers will find at the intersection of artificial intelligence and cybersecurity.

Episode Transcript

Kip Boyle: Welcome to the Cyber Risk Management podcast. Our mission is to help executives become better cyber risk managers. We are your hosts. I'm Kip Boyle, CEO of Cyber Risk Opportunities.

Jake Bernstein: And I'm Jake Bernstein, cyber security counsel at the law firm of Newman DuWors.

Kip Boyle: And this is the show where we help you become a better cyber risk manager.

Jake Bernstein: The show is sponsored by Cyber Risk Opportunities and Newman DuWors LLP. If you have questions about your cybersecurity-related legal responsibilities-

Kip Boyle: And if you want to manage your cyber risks just as thoughtfully as you manage risks in other areas of your business, such as sales, accounts receivable, and order fulfillment, then you should become a member of our cyber risk managed program, which you can do for a fraction of the cost of hiring a single cybersecurity expert. You can find out more by visiting us at cyberriskopportunities.com, and newmanlaw.com.

Jake Bernstein: So Kip, what are we going to talk about today?

Kip Boyle: Jake, today we're going to talk about what's at the intersection of artificial intelligence and cyber security.

Jake Bernstein: Well, that sounds fun. Those are a couple of buzz words right now. Should we start off with a definition of AI?

Kip Boyle: Yeah, absolutely. And let's do it without turning this episode into a highly technical or a very deep dive tutorial on AI, because I don't want to numb the minds of our audience members.

Jake Bernstein: Well, that shouldn't be a problem because I don't think I am capable of a highly technical or deep tutorial on AI. So, let's go with the following definition, which I heard recently from a futurist, actually the Bald Futurist, and you can find him on Twitter.

Kip Boyle: Great.

Jake Bernstein: And he says that the core feature of AI is that it is trained with data as opposed to programmed. These are algorithms that have been around since the 1980s, but we didn't have either the computing power or the data sets to really use them. When we say AI, people always think Terminator or Skynet, when what we're really talking about right now is everything from self-driving cars to Alexa. And there are four main categories of what AI can do. These aren't discrete types of AI; they're more a way to think about what AI does.

Kip Boyle: Okay. Characteristics, then?

Jake Bernstein: Characteristics is a good way. Yes. So number one, they see, hear and can understand the world. That's something that we take for granted as humans, but in terms of having computers that can do that, it's really kind of a new thing.

Kip Boyle: All right. That's number one.

Jake Bernstein: And that's really important, by the way, for security, because people think CAPTCHAs, for example, are a decent mechanism for filtering out bot traffic. And that may have been true, but AI can be taught to see, and therefore it can interpret CAPTCHAs without any difficulty, so that's a practical example.

Kip Boyle: Right.

Jake Bernstein: The next one is discovery. And what we mean by that is, locating and finding patterns inside oceans of data that humans alone can't see.

Kip Boyle: Mm-hmm (affirmative).

Jake Bernstein: And that is important to understand, because with AI, we really should think of these systems as assistants.

Kip Boyle: Right.

Jake Bernstein: They're enhancements. They basically boost human capability. They don't necessarily, at this point, replace anything.

Kip Boyle: Right. They're not a general intelligence.

Jake Bernstein: They're not an artificial general intelligence. Yeah, you'll often hear, especially science fiction geeks, talking about AGIs and that's an artificial general intelligence.

Kip Boyle: Right. So it's helpful, I think, for our listeners to think about AI, as it exists today, more as a tool that you pick up and put in your hand like a wrench or a calculator, right?

Jake Bernstein: Yep. A very sophisticated wrench and calculator, but that is accurate. And it's a calculator and a wrench that can learn from experience.

Kip Boyle: Right.

Jake Bernstein: So that would be the third category, third characteristic. It can use trial and error and get better at a task over time. You can see why we call this artificial intelligence: it starts to look less and less like a traditional computer that is programmed and more and more like something closer to human.

And then the last one, which I admit is actually kind of the creepiest, is that AI can imagine and create. And you might be thinking, well, hold on now, I thought that was the province of people. And that's not exactly correct. A simple example is the ability of AI-powered systems to remove an occlusion from the foreground of a photograph. You might think, oh, that's no big deal, but you're asking your computer to remove data and fill it in where there is nothing. And humans might look at that and be like, "Oh, that's so easy, I know what's supposed to be behind that fence," but actually getting a computer to do that is impressive.

Kip Boyle: Because what you're really talking about is opening up Photoshop, what a human being would do, and then the human being would recognize, oh, there's a power line bisecting the photo. I'm going to erase that out of there to make the photo look really, really great. And so what you're talking about is in the future, we could just give the photo over to a computer and say, "Hey, erase that power line out of there."

Jake Bernstein: Well, not in the future, now. That application exists currently.

Kip Boyle: Well, I haven't seen it yet, so it's in Kip's future.

Jake Bernstein: Yeah. In the distant future. A really good example of this, which has, I think, tremendous impact on the cybersecurity field, which we'll be talking about, is something called a generative adversarial network. That sounds really fancy, but really what it is, is two different AIs. One is called the generator, and it basically makes stuff up randomly. And then the other half of this generative adversarial network is called the discriminator. And it's kind of like a forger and a detective working in tandem.

And using this technique, AI has been able to create pictures of people that look completely real, but they're not. They're not real people. And the way it works is, like I said, the generator digests huge quantities of data and says, "Here, this is a person." And the discriminator also digests huge quantities of data, and its job is to say, "No, that doesn't look right." And working together, they create very, very lifelike photographs.
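To make that generator-versus-discriminator loop concrete, here's a rough toy sketch in Python. It assumes PyTorch is installed, and the data and network sizes are invented purely for illustration; a real image-generating GAN is vastly larger, but the training loop follows the same pattern Jake describes.

```python
# Toy generative adversarial network: the "generator" fabricates samples,
# the "discriminator" learns to tell fakes from real data.
import torch
import torch.nn as nn

real_data = torch.randn(1024, 2) * 0.5 + 3.0   # stand-in for "real" samples

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # Train the discriminator: real samples should score 1, fakes should score 0.
    noise = torch.randn(64, 8)
    fake = generator(noise).detach()
    real = real_data[torch.randint(0, len(real_data), (64,))]
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train the generator: try to make the discriminator score its fakes as real.
    noise = torch.randn(64, 8)
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Over many rounds, the forger and the detective push each other to improve, which is why the output of mature GANs can look so convincingly real.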

Kip Boyle: Mm. Okay. So it's like pair programming, right?

Jake Bernstein: I suppose. Oh yeah. Like two people working together. Yes, it would be.

Kip Boyle: Yeah.

Jake Bernstein: And these GANs, I think, are a major component of using AI in the cybersecurity process, because if you think about what that will mean, you want your computer to be very good at both creating and spotting fakes. And so that's what this thing does.

Kip Boyle: Yeah. And we're going to need that. That's one of the things we're going to talk about, actually: lying at the intersection of artificial intelligence and cybersecurity are deep fakes, so we'll get to that in a moment.

But we want to keep the episode here practical, so what we're not going to do is explore the various types of artificial intelligence any more than we already did. We're not going to really talk about specific learning methods, so we're not going to get into neural networks or machine learning. We're not going to get into the details of all that; that's all out of scope for this episode. There's plenty of other places where you can go deep if you want to, but if you don't want to, then all right, we're not going to do it.

Jake Bernstein: And really those are the engineering details. What we care about as practitioners of cybersecurity is how the tool works and what we can do with the tool.

Kip Boyle: Right. So to thrive as a cyber risk manager, what do you need to know about what lies at the intersection of AI and cybersecurity? It turns out there are three things, so let's take them in turn.

The first thing that we're seeing is the use of artificial intelligence in order to fight cyber crime, and there are lots of examples of this. I thought we could talk about an example that may be a little more familiar to really anybody who's used a computer lately. It used to be that antivirus companies would study individual pieces of malicious code. They would hire people, security analysts, and they would capture samples of malicious code or malicious traffic. And then what they would do is they'd say, "Okay, let's focus on what's unique about this. And then let's create a signature so that if our antivirus engine ever sees this again, it'll match up to a signature and then it'll automatically block it."

And the underlying idea here was that one piece of malicious code released by a bad actor was going to be the same no matter where you encountered it. And so you could write a signature or another way to think of it is you could create a fingerprint of that piece of malicious code, and then as new attacks showed up, you would just repeat that process over and over again. You would just continue to write signatures.
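For illustration, that signature approach boils down to fingerprint matching. Here's a minimal, hypothetical Python sketch using only the standard library; the "known bad" hash below is just a placeholder, not a real malware signature, and a production antivirus engine ships millions of signatures plus richer pattern matching.

```python
# Classic signature-based detection: hash a file and look it up in a list
# of known-bad fingerprints.
import hashlib
import sys

KNOWN_BAD_SHA256 = {
    # Placeholder entry for illustration only.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 fingerprint of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    digest = sha256_of_file(sys.argv[1])
    if digest in KNOWN_BAD_SHA256:
        print("MALICIOUS: signature match")
    else:
        print("No signature match")
```

The weakness is exactly what Kip describes next: if the attacker changes even one byte, the fingerprint changes and the signature no longer matches.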

And the problem is that cyber attackers figured out, oh, about 10 years ago, that they needed to stop writing malware that was so easily fingerprinted. And so they started to come up with techniques where each copy of their malicious code would have a unique fingerprint. And at that point, the whole signature-based approach really just disintegrated, because there was just no way to fingerprint each individual piece of malicious code, even though they were highly related to each other. It just gave out.

And so now, antivirus companies are using artificial intelligence to create a statistical profile of what's considered to be bad behavior in place of actually writing a signature. And by doing that, not only can they increase their detection rates and blocking rates, but they can also develop and deploy these statistical profiles in real time, because they're actually feeding data to an artificial intelligence engine. And rather than having a security analyst write the profile, we now have computers writing the profile, so this is very powerful.
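Here's a hedged sketch of what that kind of statistical profiling might look like, assuming scikit-learn and a made-up set of behavioral features; it is not any vendor's actual implementation. Instead of matching exact fingerprints, the model learns what normal behavior looks like and flags statistical outliers.

```python
# Behavioral profiling instead of signatures: train on features such as
# files modified per minute, outbound connections, and processes spawned,
# then flag behavior that is statistically far from normal.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: files_modified_per_min, outbound_connections, new_processes_spawned
normal_behavior = rng.normal(loc=[5, 2, 1], scale=[2, 1, 0.5], size=(5000, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_behavior)

observed = np.array([
    [6, 2, 1],        # looks like ordinary user activity
    [400, 50, 30],    # mass file modification plus beaconing: ransomware-like
])
print(model.predict(observed))   # 1 = looks normal, -1 = anomalous
```

Because the profile is statistical rather than exact, a renamed or repacked piece of malware can still trip the alarm as long as it behaves like malware.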

Jake Bernstein: It is. Now, just to be clear, this does not mean that we don't need to have signature based detection on our systems. That's a basic form of cyber hygiene.

Kip Boyle: Yeah. I believe the turn of phrase is "necessary, but insufficient."

Jake Bernstein: Exactly. Yeah. This, I think, is a really interesting issue because, as we were just talking about with generative adversarial networks, we can use AI to fight cyber crime, but you can also use AI to enhance cyber crime. So I think the next arms race here is AI versus AI. And so if you think about being a security professional, we already know the bad guys are going to use AI, so you darn well better get acquainted with it as well, so that you can learn how to use AI to protect yourself when it's being used against you.

Kip Boyle: Right. And when the adversary is using an artificial intelligence to learn about your defenses and to modify its attack patterns appropriately and in real time, the only way you're going to be able to keep up is to do the same thing defensively. You're going to have to have some kind of an artificial intelligence that specializes in recognizing and classifying attack patterns, and then automatically generating defensive maneuvers. And already, I'm starting to think of Star Trek episodes where the Borg shows up and automatically adapts to the different weapons of the Enterprise.

Jake Bernstein: "They're adapting" is a very common phrase, I guess, used when fighting the Borg. But that is really what we're talking about here: needing to understand that you're likely to be facing viruses that can reprogram themselves in order to get around your defenses.

Kip Boyle: Right. Yeah. And doing it in real time or near real time, and without a human operator, potentially, in the loop to slow things down, or the other side of that coin is to make them better. Because remember we defined AI, in part, as a tool, as opposed to a sentient, general intelligence that acts autonomously.

But now we're starting to talk about the second thing that we're finding at the intersection of AI and cybersecurity, which is using artificial intelligence to commit cyber crime. And the one thing that I want to share that I think is a little bit speculative, although it could be happening right now, we talked about deep fakes when we did the intro to the episode. And I'm sure most of you in the audience have heard or seen deep fakes at this point.

And what's happening there is you take a video of a legitimate person's speech, so you just video record a person talking. But then you can use software to either replace the face of the person speaking with somebody else's face, or create an audio stream with words that the person never said and modify the movement of their mouth to make these new words match up with what it looks like the person is saying. And so now you can have a video that says things that the person never said, and it sounds very convincing. It matches the person's cadence of speech and the unique ways in which they talk, the particular word choices that they tend to make.

So what if you could take this capability and, instead of putting it out for amusement, what if you could just create an audio conversation? Let's say you have an AI study the speech patterns of an executive who talks a lot on, let's say, earnings calls or shows up on the evening news a lot, and then have the AI make fake social engineering calls using that executive's voice and patterns of speech, but saying things that they've never said. And have it call people in the organization and ask them to move money on short notice and actually hold full conversations with them. I don't know how we're going to deal with that.

Jake Bernstein: Well, we've talked in the past about what makes cyber attackers different than bank robbers. And one of them is the ability to automate and basically fire and forget and let your computer do the work for you while you go sit by the pool.

Kip Boyle: Right.

Jake Bernstein: And social engineering at least had been one of those things that required a human to sit around and actually work on it.

Kip Boyle: Yeah. High level of involvement.

Jake Bernstein: Particularly in making calls and things, but what you just suggested indicates that in the not too distant future, they'll be able to automate social engineering too, which I really hope means we can automate social engineering defenses, so we just have AIs talking to one another, trying to convince each other to let someone log in.

Kip Boyle: Yeah. Well, you know, the immediate thing that I thought of was kind of a spy versus spy thing, I guess. But I mean, in the military, we would establish code words and we would have shared secrets that we would pass back and forth to each other when we met face to face. And I could see the need in the future to actually establish one-time passwords or one-time codes that somebody would carry in their wallet. And that way, when somebody called, you could actually challenge them to authenticate, in real time, over the phone. And that could be difficult for an artificial intelligence, even a general artificial intelligence: how would it know what's on line three of my wallet card?
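As a simple illustration of Kip's wallet-card idea, here's a hypothetical sketch using Python's standard-library secrets module: pre-shared one-time codes that are handed out out-of-band, challenged for over the phone, and crossed off after a single use. The function names are invented for this example.

```python
# Low-tech countermeasure to voice deep fakes: a wallet card of pre-shared
# one-time codes. The person receiving the call challenges the caller for a
# specific line, then burns that code so it can't be replayed.
import secrets

def make_wallet_card(lines: int = 10) -> list:
    """Generate one-time codes to print on a card and hand out in person."""
    return [secrets.token_hex(4) for _ in range(lines)]

def verify_challenge(card: list, line_number: int, spoken_code: str) -> bool:
    """Check the code on a given line and invalidate it after use."""
    expected = card[line_number]
    if expected is not None and secrets.compare_digest(expected, spoken_code):
        card[line_number] = None   # one-time use: cross it off
        return True
    return False

card = make_wallet_card()
code = card[3]
print("What's on line three of your wallet card?")
print(verify_challenge(card, 3, code))   # True the first time
print(verify_challenge(card, 3, code))   # False on replay
```

The point of the design is that the secret never travels over the channel the attacker controls until the moment it's challenged, and it's worthless once spoken.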

Jake Bernstein: Quite. That's old school, in a lot of ways.

Kip Boyle: And I think we're going to have to re-embrace a lot of old school techniques. Things are just getting too difficult.

Jake Bernstein: Yeah. No, I think that might be true. I'm thinking of the World War II challenge phrases used to make sure that you're supposed to be where you are, and things like that.

Kip Boyle: Right. Yeah, if somebody slips into your foxhole in the middle of the night and it's an enemy soldier who went to school in the United States and speaks impeccable English, it's like, "Are you really on my side or not?" And of course I cannot help but think about the reprised series Battlestar Galactica, which was rebooted in roughly 2003, I think. But the basic premise of that reboot was that the adversaries were so sophisticated and so good at hacking into networked systems that only antiquated weapons could survive against them.

Jake Bernstein: Yes. I mean, everything was air gapped, as we would say these days, to prevent infiltration of the computer systems. And that I think is, I mean, we've, of course, just jumped the shark of our own rules into far future artificial general intelligences. But ...

Kip Boyle: Well, I, wasn't trying to focus on the intelligence. I was really focusing on the idea of embracing old ways of doing things as the countermeasure to super advanced technology.

Jake Bernstein: Yeah, exactly. One of the things that we often talk about here, and just to maybe bring this back to ground level for a moment, is the idea of creating a network traffic baseline. We'll do this. We recommend this to clients all the time when they want to set up some kind of IDS or IPS, you can't really-

Kip Boyle: An intrusion detection system.

Jake Bernstein: Right. Intrusion detection system or prevention system. You can't really do those things unless you know... I mean, this is more of [crosstalk]

Kip Boyle: Got to know what's normal.

Jake Bernstein: But you got to know what's normal. And what I'm thinking is that we're going to see more and more situations where, rather than take a baseline and then sit there and program rules, which is what you have to do now, instead, you're going to have some form of AI system that is basically constantly monitoring and taking its own baseline. It's hooked into HR, so it "knows" when new people have started.

Kip Boyle: Right.

Jake Bernstein: And it's basically keeping track of everything and just digesting huge quantities of data in hopes that it's going to be even better at detecting, on the fly, potentially malicious activity.
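Here's a minimal sketch of the kind of continuously updated baseline Jake is describing, using one invented per-host feature (bytes sent per minute) and Welford's online algorithm to keep running statistics. A real system would track many more signals and far more sophisticated models; this only illustrates the "learn the baseline, flag the outlier" idea.

```python
# A continuously updated traffic baseline: keep running statistics per host
# and alert on observations that deviate sharply from the learned norm.
from collections import defaultdict
import math

class RunningBaseline:
    """Welford's online algorithm: update mean/variance one sample at a time."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x: float):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def zscore(self, x: float) -> float:
        if self.n < 2:
            return 0.0
        std = math.sqrt(self.m2 / (self.n - 1))
        return 0.0 if std == 0 else (x - self.mean) / std

baselines = defaultdict(RunningBaseline)

def observe(host: str, bytes_per_min: float, threshold: float = 4.0):
    b = baselines[host]
    # Only alert once the baseline has seen enough samples to be meaningful.
    if b.n > 30 and abs(b.zscore(bytes_per_min)) > threshold:
        print(f"ALERT: {host} sent {bytes_per_min:.0f} B/min, far outside its baseline")
    b.update(bytes_per_min)

for minute in range(1000):
    observe("workstation-42", 5000 + (minute % 7) * 100)   # normal chatter
observe("workstation-42", 500_000)                          # exfiltration-sized burst
```

The baseline is never "programmed" with rules; it simply keeps learning what normal looks like, which is the shift Jake is pointing to.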

Kip Boyle: Inevitably it's going to be better because, as a former chief information security officer, I've dealt with amassing large quantities of system data with the hope of being able to find a needle in a haystack. I mean, you've got all these logs and bad behaviors going on. You just know it. And it's just a question of can you find it? And I'll tell you what, sending people to do a robot's job does not work.

Jake Bernstein: No, it doesn't. And if you think about what we're asking people to do is sit there and sift through, ah, ten-

Kip Boyle: Oh, it's mind numbing.

Jake Bernstein: Yeah.

Kip Boyle: Just horrible. I mean, people just won't do it, or they'll just go through the motions. I mean, you fuzz out very, very quickly. I mean, even the best accountants, auditors, people who are very, very good at looking at large quantities of data and picking out the anomalies, even they're not going to really sustain the effort for the amount of time that's needed. So I think that absolutely makes sense. Another way of using artificial intelligence to fight cyber crime is to analyze these vast quantities of systems events that we bring in from our networks. So, yeah, that makes a lot of sense.

And so now let's take a look at the third thing that's sitting at the intersection of artificial intelligence and cybersecurity, which is the use of cybersecurity to protect AI-powered systems. As we pointed out, in order to work, any AI-powered system needs massive data sets from which it creates the rules that it uses to make decisions. And a really good example of this that's been in the media a lot is self-driving cars. They are continuously gathering data to evaluate current road conditions, obstacles, things that are moving.

And we saw a news story not too long ago about a poor woman who was pushing a bike across a road, and the artificial intelligence driving the car did not understand what that obstacle was, and apparently just continued on because it decided that was not something to be bothered with. Well, okay, so that was an accident, but what would happen if somebody was deliberately corrupting the data sets? What if somebody standing on the side of the road, or remotely, had access to that car and corrupted the sensors or corrupted the data that the sensors gather and caused the artificial intelligence to make bad decisions? Could a cybersecurity system on board that vehicle keep your car on the road?

And let's say it couldn't. Let's say your car left the road and had an accident, would a police officer who responded to the scene, would that person believe you, that your autonomous car started acting screwy all by itself and that you didn't switch it over to manual and botch it up? This is a huge thing that I don't hear anybody talking about.

Jake Bernstein: No, it reminds me of the question of driving under the influence. Well, officer, I wasn't driving. My car was.

Kip Boyle: Right. And now you've got to pull the logs and prove it.

Jake Bernstein: Yep. Well, that's interesting. That's one of those parts of this, where again, the systems are going to have to be designed with this in mind. And it's going to be very difficult to ensure that our systems are always... we need to get them ahead of the bad guys as often as possible. And I think that there's going to be a lot of design work into failing safely.

Kip Boyle: There needs to be.

Jake Bernstein: And I think that because we can sit here right now in February 2019 and pontificate that all of this is going to happen easily enough, it probably will happen or already has happened. Which means that those in charge of security for these new systems need to take a different approach than has been taken in the past, which is kind of what got us in this mess. I mean, I think if you ask any serious security engineer, they would probably agree with the statement that things would be a lot better if we had only thought of security from the beginning of the internet.

Kip Boyle: Yeah. Yeah, absolutely. I mean, secure by design, right? Which means when you're laying out the blueprint of a system, there's somebody there during that process who's paying attention and baking in security from the get-go. That's a very rare thing to do, traditionally. And even though you've got people like Adam Shostack out there, who has been promoting for years and years and years now the idea of incorporating threat modeling into system designs early, early on, and although that's the most economical way to do it too, it's still not a very well established practice.

And so, with the self-driving cars that we have out there right now, I don't know, but I'd be willing to bet that they did not design these systems from the ground up with the type of cybersecurity systems that we're going to need. I mean, if you look at the recall of... Fiat Chrysler had a Jeep that had been hacked pretty spectacularly by a couple of researchers. It was written up in Wired magazine a couple of times. And the way they got into that Jeep, the way they established remote access to a Jeep that was driving down the freeway at 60, 70 miles an hour and took control of it, tells me that they did not design security in from the beginning.

Jake Bernstein: Exactly. And that's what is being said. That should be one of our main focuses as security experts in the field right now, is just making sure that to the extent our listeners are able to influence product design at the early, early stages-

Kip Boyle: Mm-hmm (affirmative). Yeah. They should be doing it.

Jake Bernstein: They should be doing it. And the thing is that, I think that the argument is even stronger these days to build it in from the very beginning, because at this point, no one should be caught unawares when it comes to cyber risks and the types of threats and damage that can result.

Kip Boyle: Yep. Yeah, absolutely. Absolutely. Well, that wraps up this episode of the Cyber Risk Management podcast. Today, we talked about what's at the intersection of artificial intelligence and cyber security. Thanks for being here. We'll see you next time.

Jake Bernstein: See you next time.

Kip Boyle: Thanks everybody for joining us today on the Cyber Risk Management podcast.

Jake Bernstein: Remember that cyber risk management is a team sport and needs to incorporate management, your legal department, HR, and IT for full effectiveness.

Kip Boyle: And management's goal should be to create an environment where practicing good cyber hygiene is supported and encouraged by every employee. So if you want to manage your cyber risks and ensure that your company enjoys the benefits of good cyber hygiene, then please contact us and consider becoming a member of our cyber risk managed program.

Jake Bernstein: You can find out more by visiting us at cyberriskopportunities.com and newmanlaw.com. Thanks for tuning in. See you next time.

YOUR HOST:

Kip Boyle
Cyber Risk Opportunities

Kip Boyle is a 20-year information security expert and is the founder and CEO of Cyber Risk Opportunities. He is a former Chief Information Security Officer for both technology and financial services companies and was a cyber-security consultant at Stanford Research Institute (SRI).

YOUR CO-HOST:

Jake Bernstein
K&L Gates LLP

Jake Bernstein is an attorney and Certified Information Systems Security Professional (CISSP) who practices extensively in cybersecurity and privacy as both a counselor and litigator.