
EP 133: ChatGPT and Cyber Risk Management
About this episode
June 6, 2023
Can ChatGPT help us manage Cyber Risk? Can any generative artificial intelligence be helpful? If so, how? And are there any limitations? Let’s find out with your hosts Kip Boyle, CISO with Cyber Risk Opportunities, and Jake Bernstein, Partner with K&L Gates.
Suggested “ChatGPT Prompt Engineering” course by Sean Melis:
https://www.udemy.com/course/chatgpt-101-supercharge-your-work-life-500-prompts-inc/
Episode Transcript
Speaker 1: Welcome to the Cyber Risk Management podcast. Our mission is to help executives thrive as cyber risk managers. Your hosts are Kip Boyle, virtual Chief Information Security Officer at Cyber Risk Opportunities, and Jake Bernstein, partner at the law firm of K&L Gates. Visit them at cr-map.com and klgates.com.
Jake Bernstein: So Kip, what are we going to talk about today in episode 133 of the Cyber Risk Management podcast?
Kip Boyle: Hey, Jake. It's good to be back. Today we've got a really fun topic. It's the thing everybody's really talking about, or maybe I should say nobody's talking about it, but that would be a bad joke because we're just going to jump on the bandwagon, aren't we? Right. Generative artificial intelligence. We're going to talk about it because everyone's trying to figure out the cyber risk management implications of this new technology. I'm reflecting on it a lot, and I can see a lot of other people are talking about it. So let's go ahead and open up what I think is probably going to be a recurring topic for us as this thing continues to roll out and we figure it out, right?
Jake Bernstein: Absolutely. And it really was inevitable that we would come and talk about this. The topic of generative AI in general right now is everywhere. It feels like I've personally been talking about it nonstop for the last month. I'll definitely give you a bit of background on what went down at my law firm's partner retreat with respect to AI.
Kip Boyle: Great.
Jake Bernstein: But then I do want to really try to focus on the cyber risk management implications of generative AI. And I think we should start off just by saying, look, we don't know the future. No one does. This is all a great deal of guesswork, but I do think it is absolutely a critical part of cyber risk management to be thinking about this now. By the way, I don't know if you've seen it yet, this is a complete non sequitur, but I think a future episode, maybe in the very near future, is going to be about the revisions to the NIST Cybersecurity Framework. I don't know if you've seen CSF 2.0, which adds a Govern function. I don't know why I just thought of that, but I think I thought of it because, as I was saying that we need to think about this, I always think about the CSF and the Identify and risk management functions, and that's really what we're doing here.
Kip Boyle: We are. We're identifying and then we're trying to figure out how to govern. Because a lot of companies are starting to figure out, oh my gosh, we can't trade secrets in ChatGPT no matter how helpful it is, because then we lose control over it or medical records or any confidential data of any kind. I could imagine that separate from cyber risk management, you're probably also having conversations in your law firm about what is the impact to the legal industry of ChatGPT?
Jake Bernstein: For sure. It is. Let's go ahead and just spend five, 10 minutes just talking about ChatGPT. And to be clear, ChatGPT is just one specific version of a generative AI, right?
Kip Boyle: It's the Kleenex of generative AI.
Jake Bernstein: Well, it's more like the prototype Kleenex in a sense. And if you look at how OpenAI positioned the original ChatGPT, technically the model it was using was GPT-3.5. This was a large language model, which is not the only type of generative AI, and it was trained on a large set of data up until roughly 2021. So ChatGPT, at least the GPT-3.5 version, is really one specific type. And really, I don't know if you've talked about it before, but let's just briefly define that GPT.
Kip Boyle: Sure.
Jake Bernstein: So what it stands for is generative pre-trained transformer. And I think this is really important for the overall conversation, and I've had this conversation a lot recently: what is generative AI, and what isn't it? There's so much confusion and uncertainty, and you expect this in the general population, but I do think it's critical for people in our profession to appreciate what this thing is and what it really is not.
Kip Boyle: What it really is. Because when you just poke at it with a stick, it seems like a human being, right? Because it's emulating this very human-like kind of interaction.
Jake Bernstein: It does.
Kip Boyle: And there's different ways to characterize it. One of the things that I like to think about is it's just math.
Jake Bernstein: It is just math.
Kip Boyle: It is statistics. It's just predictive. And when you say a large amount of data, we mean the entire internet.
Jake Bernstein: Pretty much, yeah. It's truly a large, it's a lot.
Kip Boyle: I think ginormous would be a better term, because I think people don't really realize just how ginormous the dataset was, and it's just statistics and probability. So every time it puts a word on the screen, it runs a math algorithm and asks, what should the next word be? And to human beings, it comes across as artificial intelligence, as in human-like behavior. But really it's just massive statistics and probability.
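(For the technically curious: the "what should the next word be?" loop Kip describes can be sketched in a few lines. This is a toy illustration, not how GPT is actually built; the scoring function here is a made-up stand-in for the billions of learned parameters in a real model.)

```python
import math
import random

def softmax(scores):
    # Convert raw scores into probabilities that sum to 1.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def next_word(context, vocabulary, score_fn):
    # Score every candidate word given the context so far, then sample one.
    probs = softmax([score_fn(context, word) for word in vocabulary])
    return random.choices(vocabulary, weights=probs, k=1)[0]

def generate(prompt, vocabulary, score_fn, length=20):
    # "It runs a math algorithm and asks, what should the next word be?"
    # -- repeated once per word. No thinking, just sampling.
    words = prompt.split()
    for _ in range(length):
        words.append(next_word(words, vocabulary, score_fn))
    return " ".join(words)
```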
Jake Bernstein: And that understanding, I think, is absolutely critical. And there's a couple of reasons for it. One, humans are really, really good at anthropomorphizing everything, right?
Kip Boyle: Cars, everything. Airplanes.
Jake Bernstein: Cars. Pets, which at least are alive.
Kip Boyle: Pets.
Jake Bernstein: But the fact is that, to our knowledge, and I'm not saying that this is going to be this way forever, this is not a show about biology and animal sapience, but I will just say that on a simplistic level, humans, Homo sapiens, are the only species, or at least the most successful species, to have what Steven Pinker called the Language Instinct. And this is, if you think about it, and again, Cyber Risk Management podcast, not philosophy.
Kip Boyle: This is groundwork, don't worry. This is groundwork.
Jake Bernstein: Philosophy of thought and metaphysics, but it really is groundwork. And the reason is that when you sit there and read a book or look at a screen and you think about thinking, what are you doing, right? Well, we think in language, although it is also possible to think in images, but really a mature human being. And I mean that in terms of-
Kip Boyle: Somebody over the age of 23 when the mind has been fully formed.
Jake Bernstein: Well, I was just going to go over the age of roughly 12. But just a functional, maybe we should say, well, just over the age of 25, because that is roughly when the brain is fully matured. There we go. What I meant is matured in the biological sense. That's what I meant. So you think about what that is. And really, most of the time we are thinking in language, we're thinking in words and phrases. It's so easy to make the mistake of believing that, for example, ChatGPT is thinking, but it's not. It is not thinking. And this has a ton of really critical implications in the way that we use it, the way we think about the technology, the way that we should and should not be worrying about it.
Kip Boyle: I heard somebody say, and I wish I'd thought of this, that ChatGPT, if you're trying to use it to get work done, is like this overly confident intern. It knows just enough to be dangerous. And when it doesn't know the answer it pretends it knows with a high degree of confidence.
Jake Bernstein: Oh my gosh.
Kip Boyle: If you don't know-
Jake Bernstein: It sounds so confident.
Kip Boyle: I know.
Jake Bernstein: So confident.
Kip Boyle: So if you yourself are not competent to be able to look at the results and say, well, okay, this is correct, and this is not correct, then you are going to take the result and you're going to become that overly confident intern, and you're going to turn it in as a work product, and other people are going to go, what the hell? This is completely wrong. So this is no panacea for the fact [inaudible].
Jake Bernstein: That's also a phenomenal place to start the more substantive discussion about cyber risk management. The first one that I wrote down here, and I'm going more broadly, was AI tools to help with the cyber risk management process. And let's think about this, right? And I want to be very clear. ChatGPT is one application of this technology. I'm not going to pretend to know all of the many applications. I will say that if all we get out of the so-called AI revolution is ChatGPT and some incremental improvements, I personally do not think it's going to be the revolution that the hype made it out to be over the last couple of months.
Kip Boyle: So we're not going to get an AI overlord? Dang.
Jake Bernstein: No.
Kip Boyle: I have the cake and banners ordered.
Jake Bernstein: I know. I know. The AI overlord requires something called artificial general intelligence. And for lack of a better phrase, that is basically a superhuman mind. And look, first of all, ChatGPT and all generative AI, nowhere close to that. In fact, the debate continues as to whether that's even possible. I have to think it probably is because we exist, therefore at least some computer type device, our brains, is capable-
Kip Boyle: We'll keep getting closer to it.
Jake Bernstein: We will. But I think the critical component right now is to just recognize that it ain't here yet. It may not ever be here, and there's certainly no reason to believe that we've already gotten it.
Kip Boyle: Although Hollywood has done a good job of getting us ready to get freaked out. Because you've got movies like The Matrix, which blamed the entire dystopia on rogue AI, as did the Terminator series.
Jake Bernstein: As did. But in some ways that was always going to be easier, because really we were just imagining us, but without feelings or without caring. And honestly, I'm not even sure of that. In the case of The Matrix, those AIs had feelings. They were just people. They honestly were just people.
Kip Boyle: Synthetic people.
Jake Bernstein: They were synthetic. They were synthetic minds, but they were people. And the thing about ChatGPT is that it is not a people, not at all.
Kip Boyle: No, even though it's confidently-
Jake Bernstein: Even though it confidently does that. So Kip, I know I've seen some discussions that you've been involved in both on LinkedIn and in our Slack chat and everywhere else, but what have you seen people trying to use generative AI for immediately?
Kip Boyle: Within our context?
Jake Bernstein: Within our context.
Kip Boyle: Right. Well, I've seen a lot of different things. Everything from, let me use ChatGPT to write a script for a podcast. I haven't done that yet, but I've heard other people do it. I think that's a reasonable use for ChatGPT within its bounds of competence. Or writing a policy, I think, is another good example. Hey, I need a policy on X, Y, Z. We all know that policies are a pain to write and not very fun. And so if I can get ChatGPT to give me a preliminary draft, that's a wonderful use of it. The other thing I've seen a lot, and you probably have too, is people red teaming ChatGPT, which is to say they're trying to explore the boundaries of it as a technology, and they're trying to make it do stuff it shouldn't do. Like write me a phishing email, write me a piece of malware, that sort of thing.
And if we are red teaming ChatGPT to find out if it can do that, then I have to believe that the bad guys and gals are already using it in their toolkit in order to sharpen their hacking tools and make better malware. Possibly, gosh, we did an episode on this before about malware that mutates every time it hops from one system to the next, so that it's really difficult to fingerprint and to observe behaviorally and that sort of thing. I haven't actually seen that, but based on all the red teaming, I've got to imagine that it's happening.
Jake Bernstein: For sure. And I think drafting policies, and I had three or four points that I was wanting to make about this topic, and that gets into a bunch of them. One of the things that I'm wondering about, and I don't know if this is actually even going to be possible with what this does, but you see a lot of generative AI driven summarizing tools. How useful would it be to have a generative AI able to monitor vulnerability news releases and cybersecurity newsfeeds and spit out a cogent summary of those issues? It's like a daily news summary for us as cyber risk managers to use. And like you said, it's the world's most confident intern. And that is a fundamental limitation of it, honestly.
Kip Boyle: I wouldn't give it anything more important to do than I would give to an intern, and I would scrutinize the output as I would scrutinize the results of intern work. And so I think that's the most important limitation that I would really encourage people to live by and to manage too.
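(A rough sketch of the daily-digest idea Jake floats above, using the OpenAI Python library as it existed when this episode aired. The feed URL and API key are placeholders, and, per Kip's caveat, the output should be QA'd like intern work before anyone relies on it.)

```python
import openai      # pip install openai (v0.x API, current as of mid-2023)
import feedparser  # pip install feedparser

openai.api_key = "YOUR_API_KEY"  # placeholder

def daily_vuln_digest(feed_url="https://example.com/vuln-news.rss"):
    # Pull the latest headlines from a (placeholder) vulnerability news feed.
    feed = feedparser.parse(feed_url)
    headlines = "\n".join(entry.title for entry in feed.entries[:20])

    # Ask the model for an executive-friendly summary of the headlines.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are a cyber risk analyst. Write a brief daily "
                        "digest of this vulnerability news for executives."},
            {"role": "user", "content": headlines},
        ],
        temperature=0.2,  # favor precision over creativity for a digest
    )
    return response.choices[0].message.content
```

Even then, a human still has to verify every claim in the digest before it goes into a report.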
Jake Bernstein: Just for fun, some real time follow up. As you were talking, I admit, I pay for a ChatGPT Pro.
Kip Boyle: I haven't signed up yet. How do you like it?
Jake Bernstein: Well, I went over and I flipped it over to GPT-4, which of course is the newer, superior successor to GPT-3.5. And I said, you are a cybersecurity expert, and I'd like you to write a podcast script about the uses of generative AI like yourself in cyber risk management. And I'm not going to read the whole thing, but let's just say the title is Generative AI: Revolutionizing Cyber Risk Management. It then includes, I find this highly entertaining, it then includes a bracketed spot for intro music. Then it says, host: welcome to another episode of Cyber Solutions, where we delve into the world of cybersecurity and technology. I'm your host, your name, my name. And in today's episode, we'll be exploring the remarkable potential of generative AI, specifically how it's revolutionizing cyber risk management. This goes on and on.
I'll just give you, let me tell you, it's got segment two, how generative AI enhances cyber risk management. It's got four points. One is threat modeling and simulation: generative AI can create realistic simulations of potential cyber attack scenarios. By analyzing historical data and current threat patterns, AI can help identify vulnerabilities and test the effectiveness of an organization's defenses. That's what we were talking about, in a way. Phishing detection and prevention: generative AI can analyze large volumes of data to recognize phishing patterns and detect malicious emails. It can also generate realistic phishing emails for training purposes, of course, helping employees recognize and avoid falling victim to scams. Then real-time incident response, and then security awareness training. And then it goes on a bit more. I think even that is kind of useful.
Kip Boyle: As long as you realize that you've got an overconfident intern.
Jake Bernstein: Let me do a brief sidebar here.
Kip Boyle: Sure.
Jake Bernstein: Like I'm sure every law firm, my law firm is thinking, how do we use generative AI? What do we do with this? And there's all kinds of discussions. Everything from, hey, let's see if we can get our own version of it, our own copy of it, and then it will be okay to train it on our own data, because it's going to be [inaudible].
Kip Boyle: It'll just be for you.
Jake Bernstein: Just for us and all these different things. But I think one of the things, and this podcast script that I just generated here is a good example. For lawyers, and I'm sure honestly for almost everybody, there's nothing quite as demoralizing at times, depending on your mood, as that experience of looking at a blank Word document with a blinking cursor just staring you in the face. You know you've got a 20-page memo or something to write, and it's just staring at you, and it's so hard to get started. Well, I have very little doubt that a good use of generative AI is the first draft. Right?
Kip Boyle: That's a wonderful use for it.
Jake Bernstein: It's a wonderful use for it.
Kip Boyle: Because I would ask an intern to do a first draft of something, and that makes a ton of sense. The fact that you generated a podcast script in real time I think is very cool. And it brings up a couple of things that I think are important to say. One is that how you prompt a generative model is really, really important. I think it's the most important thing because a flimsy prompt I think is going to give you a flimsy result, but a really, really thoughtful, well done prompt is going to increase the quality of the output. What do you think about that?
Jake Bernstein: Well, if we think about how these things work and why it does make perfect sense. Right? I'm going to use a really bad mathematical analogy because I don't have the math for a lot of stuff. But you know binomial expansion, the concept of it, just where you multiply things out and you make one expression into a somewhat longer expression?
Kip Boyle: Yes.
Jake Bernstein: Here's what I think about that. The more prompt I give a generative AI, the more nodes and branches of its neural net it engages, and it is a neural net. That is actually how these things work within the black box. The more grist for the mill we have provided it, the more data it has to start from. So it's going to be able to provide you with a much more thorough, and I'd say convincing, but it also could be accurate, response. And I want to be clear about something too. I'm not saying that these things are intentionally lying to us. That is not at all-
Kip Boyle: It has no capacity to lie.
Jake Bernstein: It has no capacity to do that.
Kip Boyle: It's not moral or immoral.
Jake Bernstein: It doesn't think, it's not moral, it doesn't care. It does not have motivation. It does not have emotion.
Kip Boyle: Nope. There's no greed or fear.
Jake Bernstein: No greed, no fear. It just does a whole lot of math based on almost every written word the human race has ever created.
Kip Boyle: And even some of the really awful writing too, right?
Jake Bernstein: Oh, yes.
Kip Boyle: It is a direct reflection of everything that's good and virtuous and awful about humanity. It's all in there. Which is why we can sometimes fake it into giving us really awful answers.
Jake Bernstein: It is us, Kip. It is us.
Kip Boyle: It's humanity.
Jake Bernstein: And I think one of the ways that I saw, and this is a little bit of another non sequitur, but that's what this episode-
Kip Boyle: That's what we're doing here today.
Jake Bernstein: That's what this episode was going to be, inevitably. I read an article about trying to reframe what AI is and the way we think about it. And I want to say that this is relevant to cyber risk management, because the way that we think about generative AI is going to drive how people relate to it and how it gets used. This article was about a concept called data dignity, and it was about opening up the black box. But it started with a really interesting concept, which you just hinted at without realizing it, which is that generative AI isn't a separate entity. It's not a thinking machine in any fashion. What it is, is Wikipedia to the extreme, right?
It is the ultimate in, at least for the moment, human collaboration tools. Because what it really is, is a collection of everything that we have said and done and analyzed in such a fashion that if we give it a good prompt, and that is critical, it will give us output that is useful. And the important thing to realize is that despite the name generative AI, it can't actually generate new ideas. That would imply thinking, which it does not do.
Kip Boyle: I'm so glad you brought this up. Let me know when you're done, because I have to say something about that.
Jake Bernstein: Okay. Well, I'm done with the point there, which is just that everything that comes out of it had to go into it.
Kip Boyle: That's right.
Jake Bernstein: The idea behind data dignity is that-
Kip Boyle: There's nothing new in there.
Jake Bernstein: There's nothing new in there. If there was a way, and hypothetically there should be, to know the origin of everything that comes out in a given usage of it, there would be mechanisms to remunerate people for their contributions and to just keep it real, as opposed to this mystical black box that simulates humanity, but only if you don't really know what it's doing. Okay, go ahead.
Kip Boyle: Okay. There's so much opportunity to branch off into other areas like what happens if I make a new piece of music based on a bunch of other music that ChatGPT has had access to, right? There's a lot of copyright implications. I'm not going to go there. But what I want to talk about is this idea that the results you're getting from ChatGPT could be new to you because you haven't read the whole internet, but it's not innovative. There's no innovation in those responses. Somebody's already written it. And that's how it got into the large language model. Now, the reason why I'm bringing this up is because I can't use ChatGPT a lot in my work, because only a portion of my work is routine. For example, writing a policy is a fairly routine thing, and I only need a policy that's good enough to get the job done in most cases.
However, if I'm writing an Inflection Point, which is a message I send out every two weeks to my subscribers, I'm trying to talk to them about what has happened recently that is material to cyber risk management. What have the cyber criminals done to us lately that is moving the bar? What are cyber soldiers and cyber warriors doing? Or, on the opposite side, what is government doing to protect us from this? And so I can't go to ChatGPT and prompt it and say, what's the latest in the world of cyber risk management that's innovative that I can talk to my subscribers about? It's just not there yet. I would be very careful: if I ever wanted to produce a work product that has any innovation in it at all, ChatGPT is not going to be that helpful for me. So I see it, in some ways cynically, as a race to mediocrity. Anything ChatGPT gives to me is by definition average.
Jake Bernstein: It is. And it's interesting too, you could foresee how in a science fiction novel, a civilization could easily stagnate completely because it invents generative AI and then it stops innovating.
Kip Boyle: That's right.
Jake Bernstein: Because it believes that the AI is going to do all the innovation for it. And even if the AI begins to feed off of itself, there's still no new ideas in there.
Kip Boyle: Not really, no. Not the way we understand this AI stuff today. Now, maybe 20 years from now, 50 years from now, maybe AI will be capable of innovation, but it's not right now.
Jake Bernstein: Heck, six months from now, it could change. We don't know. We don't know. What is important to understand is what generative AI is doing right now. One thing that it is doing, that it's pretty good at actually, and it's been around for a little while now, is helping coders create software, right? So much code, functions and, insert a whole bunch of programming terminology here, is recycled code, right? You're using it over and over and over again.
Kip Boyle: And coding is a wonderful example because when you talk about predictive language, coding has such strict syntax. It's unnatural.
Jake Bernstein: It is. It's a great fit. When we say the software can code itself, yes, but I'm not too worried about runaway Skynet, because it can only code what it has been trained on ultimately. Right?
Kip Boyle: Again, no innovation.
Jake Bernstein: No innovation. And ironically, if innovation were to happen, it would be a little bit like evolution, which would be accidental, literally a glitch that would have to cause something. And I'm not sure how likely that is or whether that's even possible. But what we know for sure is that it can definitely help you code stuff quickly.
Kip Boyle: And that's why we're worried about malicious hackers, because that's what they want to be able to do, is they want to code stuff so that it's more virulent and harder to detect. And so I think if I were somebody trying to do that, I would be asking ChatGPT to reveal to me what are the known methods of being stealthy and so forth. I don't need a lot of innovation there. I just need a lot of state of the art that I can then decide how I'm going to put it together and I can use the output to create innovation.
Jake Bernstein: Well, isn't ChatGPT the ultimate expression of the search engine in a way?
Kip Boyle: Well that's why Google's freaked out.
Jake Bernstein: That is why Google's freaked out. One other one along these lines, where I think ChatGPT and generative AI in general are a serious, actual threat, is one I mentioned earlier: phishing emails. It has been a while since you could detect phishing emails by their bad grammar, but oh my gosh, Kip, we've already said that ChatGPT is like the world's most confident sounding intern. That is about-
Kip Boyle: That is a phishing message.
Jake Bernstein: Well, if you were going to find the ultimate phishing email drafter, the world's most confident intern would be right up there.
Kip Boyle: Of course.
Jake Bernstein: And I really think that this is going to be a big issue, and it's already happening. I have great concerns about generative AI emailing back and forth. There was a great joke actually that someone told me. It wasn't meant as a joke, but we realized it was. In the future, a lawyer will come up with five bullet points and feed those five bullet points to his or her generative AI assistant to expand into a five paragraph email, and it'll do a great job of it, and it'll send it off. And then the recipient, also a lawyer, will tell his or her generative AI assistant to please summarize that five paragraph email into five bullet points, and it'll do a great job, such that we really just exchanged bullet points.
Kip Boyle: And nobody's actually done anything.
Jake Bernstein: Nobody's actually done anything. Is that productivity? I have no idea. Whatever. It's funny to think about. It's probably going to happen if it hasn't already happened. That's one of many, many ways that I think we're going to see this technology be deployed. One thing I wanted to mention real fast is deep fakes.
Kip Boyle: Definitely.
Jake Bernstein: Because deep fakes in-
Kip Boyle: The Pope in a puffy coat.
Jake Bernstein: The Pope in a puffy coat is a funny one. But imagine, this could happen really soon, imagine the ability for a hacker to, think about a business email compromise, right? Those really do rely on tricking people still.
Kip Boyle: Definitely.
Jake Bernstein: Obviously all this-
Kip Boyle: With clever writing.
Jake Bernstein: With clever writing. Here's my fear, is if I can break into an email account, what is to stop the attacker from feeding a generative AI all of the emails, all of the sent emails from a given account, and then essentially deep faking communications from the company president?
Kip Boyle: Easy.
Jake Bernstein: First of all, that is easy to do.
Kip Boyle: Easy.
Jake Bernstein: And second, it's going to sound really convincing.
Kip Boyle: Yes. And to make the point here, one of the things that I did is when I was learning prompt engineering, which is what I've heard people call it, and I think that's a pretty reasonable thing to call it. And by the way, if you want to learn about prompt engineering, I took a really great Udemy course, and I'm going to recommend it. Sean Melis is the guy who did it, S-E-A-N M-E-L-I-S. He's been working in AI for years and years and years. And so when he teaches you prompt engineering, he's teaching it to you as somebody who really understands what's going on behind the scenes. And so I really, really enjoyed his course. I'll put a link to it in the show notes. But what I want to say though is that... I've lost my train of thought. I can't believe it. My train just went completely off the rails, Jake.
Jake Bernstein: I hate when that happens.
Kip Boyle: No, I know. When I was teaching myself prompts, I was wondering, could I get ChatGPT to write something for me that would sound like me? Because when I write my Inflection Point emails, I'm writing in a very specific way because I'm trying to write for a very specific audience. And so I took a couple of my Inflection Points and I pasted them into ChatGPT, and I asked it, what is the writing style of this text? And it actually summarized for me what the writing style was. And so I took that and I added a few other things to a prompt, and then I said, I want you to write about, and I gave it a topic, and then I said, and I want you to write it in this style. And I pasted in what it told me my style of writing was. And I said, and I want 500 words on this topic. And boy, it did a good job. It did a really good job. It sounded just like me.
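(Paraphrasing Kip's two-step workflow as prompts; the bracketed pieces are placeholders for your own text and topic, not his exact wording.)

```
Step 1 - extract the style:
    "Here are two of my recent newsletters: [pasted text].
     What is the writing style of this text?"

Step 2 - reuse the style:
    "Write 500 words about [topic] for my subscribers,
     in this style: [style description from step 1]."
```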
Jake Bernstein: And that's the thing, is I find myself really going back and forth with generative AI. Because on the one hand, I'm like, it doesn't think, it's not smart, it's a mathematical model, it's useless, it's a waste of time. But then I realize, no, I don't think that's right either. I don't think it's a fad. I don't know that it's not, but I don't think it's a fad. I do think that it's a tool, and like all tools, we're going to have to figure out how to use it and how to use it to its fullest capacity. I do want to recognize that, on its face, the technology as an achievement is incredible. Even if it can't innovate, which is fine. Really, how much of what is written is truly innovative?
Kip Boyle: You don't always need innovative.
Jake Bernstein: You oftentimes don't need innovative stuff.
Kip Boyle: You often don't. And so for people who don't need innovative stuff and they just need good enough stuff, then sure, ChatGPT for the win. Absolutely. And I think if you're a fiction writer, for example, oh my gosh.
Jake Bernstein: Oh my gosh.
Kip Boyle: What a perfect fit. Because there's no such thing as a need to be accurate or correct when you're writing a story. It just needs to be believable. I think there are some use cases out there for ChatGPT, which have very few caveats to it. But when it comes to cyber risk management and when it comes to the law or engineering, I think there's a ton of caveats right now.
Jake Bernstein: There are. There are many caveats, but basically the caveat is simple. If you need something to be true, you've got to think twice about whether you can use it.
Kip Boyle: You have to screen whatever-
Jake Bernstein: We've been using the phrase, you have to QA it. You have to QA it a lot.
Kip Boyle: You do. And don't put any confidential information into ChatGPT or any of these models right now, where you don't completely control how that data's being used. This is intellectual property 101, right, Jake?
Jake Bernstein: I wish. It's probably not.
Kip Boyle: It is for you.
Jake Bernstein: We've all seen articles about the bad things that have been happening with people not understanding this idea.
Kip Boyle: I couldn't imagine putting a trade secret into ChatGPT and then going, oh my gosh, it's a trade secret. By definition it needs to be kept secret. Once you put it in one of these large language models, it is by definition not secret anymore.
Jake Bernstein: Agreed. Okay. So Kip, any other topics that you want to talk about before we wrap up this episode? One thing that's obvious, since we're talking about things not being true, is misinformation warfare. That was already a problem. It's going to become a bigger problem. And again, that confidence is an issue.
Kip Boyle: Because the last two elections in the United States have really been messed with by people who are releasing fake news and all that stuff. And really, the ability to create fake news is limited only by one's imagination. And if you could say to a generative AI, hey, give me 10 pieces of fake news around this topic, with headlines and so forth, and then take that and go over to an AI-generated photorealistic image that you could add to it, maybe some video. Oh my gosh. Talk about information warfare. People will absolutely freak out if they see a video of somebody doing something heinous to a member of their group, and they're going to shoot first and ask questions later. It's [inaudible].
Jake Bernstein: And that is a cyber risk management issue. This is a good example of how cyber risk can bleed over into the physical realm.
Kip Boyle: Definitely.
Jake Bernstein: That's a really good example of how it can happen. So with that sobering-
Kip Boyle: You thought this was going to be a quick episode.
Jake Bernstein: I know. I did. I don't know why I thought that. But with that sobering thought, should we go ahead and wrap it up? Any closing remarks, perhaps a bit more optimistic? I will say this, I do think that this technology is in its infancy, right? It is newborn, right? It's not even in the sitting up, let alone crawling or walking stage. And it's going to be an interesting next couple of years. I don't know where this is leading. Sam Altman, who's the CEO of OpenAI, did make a relatively interesting comment relatively recently, which was that he's already not sure there's much point in making even bigger models. I'm not sure if you saw this. And one could say, oh, well, is that a tacit admission that GPT-4 is about as good as it gets? But I don't think that's it.
I think what he was saying is that, look, at a certain point there's nothing to be gained by adding yet more data. What needs to happen now is specializing things. A good example, Kip: it is math, right? Is there a way to mathematically begin to eliminate these so-called hallucinations, which is the amusing computer science term for making stuff up? And the answer is, of course, I don't know, because I'm no data scientist. I can tell you that an idea that intrigues me is, if I were to train a generative AI on nothing but every single English language case decision, case opinion, legal opinion that has ever been written, is that going to end up giving me a different result? Particularly when you start playing around with things like temperature. I'm not sure if you've done that or not.
Kip Boyle: Mm-mm (negative).
Jake Bernstein: But it's essentially a way to manipulate the model's math to be either more creative, more precise, or more balanced. And these are all things that can be done. So like I said, this is the very, very beginning. And if we pause for a minute, and this will be my last thought for the episode, and then you can react. What if a future version of ChatGPT was 99.999% truthfully accurate? In other words, you could trust its output.
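(For the curious, the temperature knob Jake mentions is, in the usual formulation, just a divisor applied to the model's scores before they become probabilities; a minimal sketch:)

```python
import math

def softmax_with_temperature(scores, temperature=1.0):
    # Lower temperature sharpens the distribution (more precise/predictable);
    # higher temperature flattens it (more varied, "more creative").
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.5]
print(softmax_with_temperature(scores, 0.2))  # ~[0.99, 0.007, 0.0005]: near-deterministic
print(softmax_with_temperature(scores, 2.0))  # ~[0.48, 0.29, 0.23]: much more random
```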
Kip Boyle: You could rely on it without any material reservation.
Jake Bernstein: And assume that it still can't innovate, right? Because it still can't think.
Kip Boyle: But it's accurate.
Jake Bernstein: But would that not just be incredibly useful?
Kip Boyle: That would be tremendously useful to know that, hey, I need a fact, and I could go and get that fact, and I could be highly confident that the answer it gave me was indeed the fact and that I could rely on it to do whatever it is I need to do. That would be wonderful. I think that would be a boon. Talk about a productivity boost, right? My last word on this isn't quite so positive. I just want people to realize that we're in the beginning of a hype cycle.
Jake Bernstein: Yes, thank you for saying that.
Kip Boyle: Yep. It's really important not to allow yourself to get too caught up in the hype, because for those of us who are of a certain age, we can remember, when the internet came onto the scene for anybody who wanted to be on it, there was a huge hype cycle, and people were talking about how the internet was going to democratize everything, and it was going to lead to just these wonderful revolutions in human affairs. I heard people say things like, poverty will be eliminated and illiteracy will be eliminated, and blah, blah, blah, blah.
Jake Bernstein: It didn't happen. Did it?
Kip Boyle: Some things got better, but some things got a lot worse. And so just remember that that's a very typical developmental cycle for technology to go through. This is probably going to go through the same thing, so it's really important that you don't fall for the hype cycle. There's my final word on the subject, and that wraps up this episode of the Cyber Risk Management podcast. Today we talked about generative artificial intelligence and the many implications that this technology has for cyber risk management. Thanks for being here. We'll see you next time.
Jake Bernstein: See you next time.
Speaker 1: Thanks for joining us today on the Cyber Risk Management podcast. If you need to overcome a cybersecurity hurdle that's keeping you from growing your business profitably, then please visit us at cr-map.com. Thanks for tuning in. See you next time.
YOUR HOST:
Kip Boyle
Cyber Risk Opportunities
Kip Boyle is a 20-year information security expert and is the founder and CEO of Cyber Risk Opportunities. He is a former Chief Information Security Officer for both technology and financial services companies and was a cybersecurity consultant at Stanford Research Institute (SRI).
YOUR CO-HOST:
Jake Bernstein
K&L Gates LLP
Jake Bernstein is an attorney and Certified Information Systems Security Professional (CISSP) who practices extensively in cybersecurity and privacy as both a counselor and litigator.