
EP 154: NIST AI Risk Management Framework, part 2


About this episode

March 26, 2024

Here's part 2 of our look at what's in the NIST Artificial Intelligence Risk Management Framework (NIST AI RMF) and how to use it. Let's find out with your hosts Kip Boyle, CISO with Cyber Risk Opportunities, and Jake Bernstein, Partner with K&L Gates.


Episode Transcript

Speaker 1: Welcome to the Cyber Risk Management Podcast. Our mission is to help executives thrive as cyber risk managers. Your hosts are Kip Boyle, Virtual Chief Information Security Officer at Cyber Risk Opportunities, and Jake Bernstein, partner at the law firm of K&L Gates. Visit them at cr-map.com and klgates.com.

Jake Bernstein: Kip, what are we going to talk about today in episode 154 of the Cyber Risk Management Podcast?

Kip Boyle: We're going to pick up where we left off last time. We're going to finish our exploration of the NIST Artificial Intelligence Risk Management framework, or as I have been calling it, the AI RMF. I mean, why not? We need more acronyms, apparently. This is going to let us explore a little more deeply the map, measure, and manage functions. If we have time today, which is a wild card, we're also going to talk about the importance of having an AI Acceptable Use Policy, or AI AUP.

Jake Bernstein: Oh, no. Now, you're just gratuitous with these things.

Kip Boyle: I felt gratuitous the first time I said... I'm going off the rails. Governance. Governance is really important here. Time permitting, we'll dive into that as well.

Jake Bernstein: Okay. Speaking of governance, I believe we left off actually talking about the govern function. That's as far as we got. Before we move on to map, is there anything else that you, I suppose, or I... I would invite myself, want to say about the govern function?

Kip Boyle: I think it would be helpful simply to not try to summarize what we said in the last episode.

Jake Bernstein: Oh, definitely not. Definitely not.

Kip Boyle: Go back to the last episode if you didn't hear that. But I think what I will say is that it's important to realize what Jake and I are discovering is that the govern function seems to serve a slightly different role in terms of how you operationalize this framework, which is to say what you're going to see as we go through the map and the measure and the manage functions... They are a lot more context-specific/use case-specific. How are we going to use AI to solve this problem? Or how are we going to use AI to increase our productivity in this way or that way? The govern function, in contrast, is really about thinking of the organization and setting its policies and guardrails and, once it's dialed in, not really changing it a lot or considering a lot of different use cases. I think that's an important capstone idea. Would you say that differently? Or how would you-

Jake Bernstein: Yeah, I'll say it a little differently. I'm going to use a personal story example, which I don't think I ever do, but this is funny. This week is my dad's birthday... actually, it's today-

Kip Boyle: Nice.

Jake Bernstein: ...which is the day of recording. I don't know that you'll ever know what day we recorded unless you count backwards because you only know the day of publication. I'll leave that a mystery. But on Monday, I got a text from my mom saying, "Oh, it's Dad's birthday this week. We should plan something." Sorry. She did not use the P word. That's where I'm going with this. She said, "What should we do?" Now, the thing to understand about my family is that, as my wife calls us, we are chaos generators. We just generate chaos everywhere we go. You might say we're not planners, Kip. You might say, Kip, that we don't have much of a govern function. That has historically been true. The opposite-

Kip Boyle: Well, you govern as you go.

Jake Bernstein: We govern as we go, yes, which is the point I'm ultimately going to make here, which is my wife's family, on the other hand... Their govern function has always been very strong. They're planners. A birthday dinner would get planned two months in advance. Now, there's all different dynamics around that, of course. But here's what I am going to offer up to explain why the govern function with AI is so important: birthdays are going to happen. People are going to go do stuff about it.

Kip Boyle: They're even predictable.

Jake Bernstein: They're even predictable. Now, the question is: is it going to be planned or is it going to be chaotic? With AI, it's going to happen. Your business will use it whether or not you, as the business owners, managers, or senior executives, have thought about it at all. The question you have to ask yourself is, as a business, do you want to manage the risk of AI using the risk management framework, which will require governing everything about it? Or are you just going to accept what will be pure chaos? I think that, marital strife notwithstanding, chaos is livable. You can deal with it in certain areas of your life. But my contention, Kip, is that chaos is not acceptable in the business context because it destroys predictability. It destroys any sense of foreseeability. The capital markets don't like it. You're literally required to try to minimize chaos if you're a publicly traded company in the way that you disclose things to the SEC. Yes. To answer your question with a very long story, there are many ways to think about the govern function. One of them is chaos minimization.

Kip Boyle: I like that. That's a good story. That's a really good story. You've actually given me some insight into why some organizations are dysfunctional in a governance sense because maybe the CEO is not a planner. Maybe the CEO is the embodiment of chaos and chaos-generating and governing-as-you-go. They just will not set the tone at the top to allow anything different.

Jake Bernstein: Sometimes, I am accused, Kip... accused, of enjoying the chaos. There is some truth to that. To test out her theory-

Kip Boyle: Well, it's very energetic, isn't it?

Jake Bernstein: It is energetic. To test out her theory, I recently had a cup of water thrown at me while she shouted, "Chaos!" She thought this was hilarious. I initially was like, "What just happened?" But then, in order to prove my point, of course, I had to be like, "This is totally fine. I'm totally accepting. I can totally accept having this water tossed at me because I'm adaptable. I'm adapting to everything as it happens. No big deal."

Kip Boyle: Oh, my lord.

Jake Bernstein: I mean, honestly, sometimes it seems like I live in a sitcom.

Kip Boyle: Oh, you should tell more stories. It's only taken you 154 episodes to convince me.

Jake Bernstein: I know. That's true. But think of how this applies to organizations. If you have people who don't mind chaos... Look, there's a lot of CEOs out there-

Kip Boyle: Or thrive on it.

Jake Bernstein: I was just going to say. There's a lot of CEOs who thrive on it. Now, the problem with that is that maybe that's okay in a very small org. But rarely is it okay in a larger organization because, look, there's a lot of people who aren't going to like that level of chaos. It really stresses them out. What I'm seeing in clients is that the ability of AI to generate chaos within an organization really is pretty immense.

Kip Boyle: Fascinating.

Jake Bernstein: That was a lot longer than I think I intended it to go, but I think it was useful. I think there's lots of ways to think about this. This is one of them.

Kip Boyle: Totally. Thank you for allowing yourself to be a little vulnerable there. All right. Let's continue. Next, I'm looking at page 24 of the AI RMF. Yes, paper. I continue to look at my paper copy. Page 24. This is section 5.2. I remember asking you, Jake, "Map? What in the world does that mean? Because I'm really struggling." You said, "Well, think of it as context."

Jake Bernstein: I did say that, didn't I?

Kip Boyle: Yeah. That was really helpful. Would you expand on that a little bit?

Jake Bernstein: Sure. I mean, reading straight from this page, the map function establishes the context to frame risks related to an AI system. That's the first sentence. You could actually stop there. We could probably discuss that alone for the rest of the episode. We won't, but we could. You all know we could. The reason that they're saying this is, I think, a really interesting one. I don't want to say that this is a unique issue to AI because I doubt that it's unique, but it certainly should be top of mind for AI. That's the following. The AI lifecycle, which we did talk about last time, consists of many interdependent activities involving a diverse set of actors.

Kip Boyle: Now, you're reading. I can tell.

Jake Bernstein: Okay, I'm totally reading. But the point of this is that you're not going to control the whole thing. Rarely will you be in charge of the whole thing. Let's just think about this. What are the components of an AI system? We talked about this. It's the training data. It's the hardware that you're using to train the algorithm. It's the uses, the way people use it. It's the evolution of all that. It's all of these things. Even if you're the AI solution maker, even if you're the... Pardon me. Even if you're the organization that has developed the framework... sorry, not the framework, developed the AI solution, you don't control how people are going to use it.

Kip Boyle: No.

Jake Bernstein: It's going to evolve. Right away, you can see how if we're talking about risk management with AI, which, of course, we are, then you need to understand the ways that all the different activities and all the different actors in a specific context do it. That's what this is all about.

Kip Boyle: Okay. Yeah. It's context. We used, when we talked about this before, the idea of a use case. What is the use case? That's a suitable synonym for this, isn't it?

Jake Bernstein: Well, it's the first step in the mapping function, is what I would say.

Kip Boyle: Okay. It's knowing what your use case is.

Jake Bernstein: Yeah. I mean, I think when you say, "Okay, let's map," I think the first thing you have to do is say, "Oh, what's the use case?"

Kip Boyle: Got it.

Jake Bernstein: That's one of the things that I was going to say about the RMF this time, Kip, is that I don't know about you, but despite the number of hours I've spent reading it and looking at it, I'm still not totally sure on how to operationalize it.

Kip Boyle: I'm getting there.

Jake Bernstein: I am glad one of us is. It's worse for the measure function. It's actually more obvious what you would do. The problem is... Sorry. It's more obvious how you would do it. The problem is that actually doing it, the "what" of it, is incredibly complicated and perhaps too challenging. We'll talk about that. I do have some criticisms about the measure function. But back to map-

Kip Boyle: Yeah. Yeah. I was going to say before we go on to measure, I thought it would be helpful to do this. Jake and I recently had a meeting with a joint client. We were talking with them about use cases. They had already surveyed their organization to try to determine what's going on now as far as people could report it. I'm looking at the list of use cases. I just thought I would pick out some things without naming names. For example, somebody reported that ChatGPT is being used. This is shadow AI, isn't it? I love that.

Jake Bernstein: It is.

Kip Boyle: Shadow IT. Shadow AI.

Jake Bernstein: Yep. Shadow AI.

Kip Boyle: Somebody also said that they're piloting... Well, that's not a good word. They're testing the Outlook Sales Copilot, which comes out of the Microsoft tech stack.

Jake Bernstein: We call that an unofficial pilot. It's an unofficial pilot.

Kip Boyle: By the way, ChatGPT came up a lot in this. Auto-transcribing a video came up as a use case.

Jake Bernstein: That's a good tool.

Kip Boyle: That's AI. Let's see what else came up here. Photoshop has some AI capabilities in it. That's being unofficially tested.

Jake Bernstein: That's a really good example, Kip, of what I call the vendor issue with AI. Photoshop has been around for how long? I mean, decades.

Kip Boyle: 20-plus years. 30 years.

Jake Bernstein: 20-plus years. No one's going to think twice about, "Oh, my vendor uses Photoshop. Totally fine." But when Photoshop adds an AI function, which, of course, there's so many different ways you could do it... To be clear here, we're talking about generative artificial intelligence being the risk. There are AI functions in Photoshop and in many photo editing systems that have been there for some time.

Kip Boyle: Right. They just weren't seen as AI.

Jake Bernstein: They weren't.

Kip Boyle: There was no chat capability. You weren't talking to this thing. It was just providing you with the function. You didn't realize it was machine learning that was making it happen. I'm going to give you a great example of that right now. Grammarly.

Jake Bernstein: Grammarly is a good example. That's a machine learning "AI function" as well. Kip, it's interesting because we look at the categories. Here we are with the not-very-well-thought-out category names. You can do better. I know that you will.

Kip Boyle: It's just V1. They'll get better.

Jake Bernstein: It's V1. No need to apologize. We accept it already. We know that it will get better. But Map 1 simply is: context is established and understood. Even more interesting, Map 1.1 is intended purposes, potentially beneficial uses, context-specific laws, norms and expectations, and prospective settings in which the AI system will be deployed are understood and documented. Here you go. This is really the essence of map, which is figure out your context. Figure out the use cases. But obviously, it doesn't end there. If it ended there, then this would be a very short function, and I somehow doubt that NIST would've built an entire function around it. I think what's also really interesting... Tell me if you agree with this thought, Kip, which is that, in the AI RMF... I underlined it in red pencil, so I know this must have been important. Outcomes in the map function are the basis for the measure and manage functions. Does that happen in other frameworks that NIST has published? I'm not certain that it does in exactly the same way that it happens here.

Kip Boyle: No, it's not exactly the same. But the caveat that I'll give as I answer that is that I'm most familiar with the Risk Management Framework, the big brother to this one. This is an AI risk management framework; that one is just a risk management framework.

Jake Bernstein: Yeah. What is it? It's either 37 R2 or 800-37 R2 or 839. I always get confused.

Kip Boyle: Something like that. Then, they have a cyber resilience-oriented one, which is the NIST cybersecurity framework. Those are really the two that I keep in my mind. No, those two don't work quite in the same way. However, they do have outcomes. I think that's a key idea that we should be focused on. Outcomes, as we operationalize frameworks as opposed to checkbox exercises... "Make sure all these things are implemented." That's not exactly what we're going for here. We're not trying to get a checklist. We're actually focused on outcomes. What's actually happening at the desk level?

Jake Bernstein: Yep. I'm going to point out another one here, just a few of these. Map 1.3: the organization's mission and relevant goals for AI technology are understood and documented. Man, Kip, how many companies do you think are paying money for AI to AI vendors, and they don't really have a mission or a goal in mind? They're just like, "Oh, we got to do it."

Kip Boyle: The only people who have that documented are the AI companies themselves.

Jake Bernstein: Even then, we can't be sure.

Kip Boyle: No, we can't be sure, but they have a business model. We're going to sell AI to do that.

Jake Bernstein: They do have a business model. Yes. Yes, they do.

Kip Boyle: That's documented. I would agree with you that there's probably pockets of understanding and perhaps pockets of documentation. But that goes back to the use cases.

Jake Bernstein: It does.

Kip Boyle: Is there an overall organizational mission? I agree with you. I don't think anybody really has that nailed down yet.

Jake Bernstein: Then, I like map 2: categorization of the AI system is performed. What does that mean?

Kip Boyle: Gosh. Well, if we dig into it here, it's about specific tasks and methods.

Jake Bernstein: Classifiers, generative models, recommenders. I don't actually know what a... I mean, I can guess what a recommender and a classifier is. I think it's actually a good reminder that this is not the generative artificial intelligence risk management framework.

Kip Boyle: Correct.

Jake Bernstein: This is a broader focus. This is the AI risk management framework. There are many types of AI, not just generative. Keep that in mind.

Kip Boyle: Exactly, as you go through this. This is something that we are doing, too, Jake and I. We're trying to learn new terms of art, like classifiers and recommenders and so forth. You're going to find, as you go through this, that there's a lot of terms as well as acronyms. There was this acronym that I ran into as I was flipping through the pages here. TEVV-

Jake Bernstein: Can I guess? Is it TEVV?

Kip Boyle: Yeah.

Jake Bernstein: Yeah. It's TEVV. What does TEVV mean? What does TEVV mean, Kip?

Kip Boyle: I still don't know. I haven't burnt it into my brain yet. Have you?

Jake Bernstein: It was there. It's testing evaluation-

Kip Boyle: I circled it in my copy where it first showed up.

Jake Bernstein: ...something and verification. But clearly, I don't remember it exactly.

Kip Boyle: It's a little turn of phrase that is important because they've made it into an acronym. It appears throughout the AI RMF. But anyway, just an example. We've got a whole-

Jake Bernstein: I wrote it down here: test, evaluate, verify, validate.

Kip Boyle: We've got a whole new lexicon here.

Jake Bernstein: We really do. Okay. A couple of other highlights from map before we move into measure: map 3 is about AI capabilities, targeted usage goals, expected benefits and costs, et cetera. Map 4 really does talk about going over the risks and the benefits for each component of the AI system and, particularly, third-party software and data. Then, map 5 looks at the impacts of the technology to individuals, groups, communities, organizations, and society.

Kip Boyle: Well, I think of map 5 as the ethical-

Jake Bernstein: It is.

Kip Boyle: ...category.

Jake Bernstein: It is. Okay. Measure. Measure, in some ways, like I said, I think is the easiest one to conceptualize. I'll just read this first part here. The measure function employs quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyze, assess, benchmark, and monitor AI risk and related impacts. It includes documenting aspects of systems' functionality and trustworthiness. Then, I have a note here: "so much work," because they talk about how you have to develop processes and adopt them in the measure function. It needs to include things like rigorous software testing/performance assessment methodologies. The methodologies have to take into account uncertainty and comparisons to performance benchmarks.

I mean, this is really complicated in a lot of ways. This is where I think this framework stumbles a little. Well, Kip, we've talked at length about the difficulty of measuring cyber risk. I think our hackles would go up if NIST tried to add a measure function to the cybersecurity framework. I mean, they do say qualitative or mixed method. It's not like they're forcing you to go into some quantitative level. But the stuff I just read implies that you do have to dig in maybe a lot. I think assigning quantities or even qualitative descriptors is tough. I think it's going to be something that people really have to get used to and learn. TEVV, of course, shows up here a lot.
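To make that slightly less abstract, here's a toy sketch, under our own assumptions and with made-up numbers, of what "take uncertainty and benchmark comparisons into account" could look like for a single performance metric. Nothing in it comes from the AI RMF itself; it's just one illustrative way to report a measurement with its uncertainty instead of a bare point estimate.

```python
import random

# A toy performance assessment of the kind the measure function gestures at:
# compare a model's observed accuracy against a benchmark, and report the
# uncertainty rather than a single number. All figures are invented.
random.seed(0)
benchmark_accuracy = 0.90
outcomes = [1] * 183 + [0] * 17   # 183 correct out of 200 sampled predictions

def bootstrap_ci(samples, n_boot=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean."""
    means = []
    for _ in range(n_boot):
        resample = [random.choice(samples) for _ in samples]
        means.append(sum(resample) / len(resample))
    means.sort()
    lower = means[int((alpha / 2) * n_boot)]
    upper = means[int((1 - alpha / 2) * n_boot) - 1]
    return lower, upper

accuracy = sum(outcomes) / len(outcomes)
low, high = bootstrap_ci(outcomes)
print(f"Observed accuracy {accuracy:.3f}, 95% CI ({low:.3f}, {high:.3f})")
print("Meets benchmark" if low >= benchmark_accuracy else "Cannot claim benchmark is met")
```

The point isn't the statistics; it's that a single number hides exactly the uncertainty the measure function asks you to surface.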

Kip Boyle: But I want to make a comment, too, that as I read the measure section, what really stands out to me is if you're an organization that's incorporating artificial intelligence into products that you're building... Let's say you're an aircraft manufacturer and you want to put AI in the cockpit or you want to put AI in an automobile or in a medical device, a lot of this seems to be aimed at that situation as opposed to an organization that simply is trying to figure out where am I going to allow generative AI to show up in office automation software? What do you think?

Jake Bernstein: That's true. I think that's true. I do think that there is a lot of room for interpretation.

Kip Boyle: Well, this thing's trying to cover so many different situations.

Jake Bernstein: It really is trying to cover so much. I'm going to cheat. I'm going to use the playbook, which is online and-

Kip Boyle: Is excellent.

Jake Bernstein: It is. Yeah. I'm just going to look at measure one. Appropriate methods and metrics are identified and applied. 1.1 is approaches and metrics for measurement of AI risks enumerated during the map function are selected for implementation. A lot of this stuff gets dense. But what I'm going to do is click into the playbook here; it'll take me to this thing. Now, I have even more words to wade through here.

Kip Boyle: Oh, yeah. If you think the PDF is wordy, go to the playbook.

Jake Bernstein: Oh, man. Yeah. The playbook isn't-

Kip Boyle: It's great.

Jake Bernstein: It is great.

Kip Boyle: But you're just swimming with additional-

Jake Bernstein: Yeah. It's necessary, I think, to deal with the level of complexity here. But in the playbook, each one of these subcategories has a number of suggested actions, which is great. The suggested actions... I'm not going to read them because there are so many. But a couple of examples: report metrics to inform assessments of system generalizability and reliability. I don't know, Kip. There's a risk. I think the reason they cut this stuff out into the playbook is that it can be overwhelming. The risk management framework is important. But in order to be useful, it has to get used. I think they're trying to walk that line. One of the things I've said repeatedly that I love about the risk or, sorry, the cybersecurity framework is that you can do nothing else but rattle off the five, soon to be six, functions. I feel like you get a lot of value out of it. If you're going to-

Kip Boyle: That's 'cause it's following a life cycle of an incident. It's actually something-

Jake Bernstein: It is.

Kip Boyle: ...that you can ground it in.

Jake Bernstein: Not even so much an incident. Well, I guess it is an incident, isn't it?

Kip Boyle: Yeah.

Jake Bernstein: The way that they do it. Identify, protect, detect, respond, recover. Here, it's clearly something similar. You want to-

Kip Boyle: But they're trying to.

Jake Bernstein: They're trying to map, measure, and manage.

Kip Boyle: If this is implying that there's a life cycle, what is the subject of that life cycle? In the cybersecurity framework, we have an incident. What would it be here?

Jake Bernstein: I think it would be anything from buying an AI tool to developing one to determining how to use it and when not to use it and all that stuff.

Kip Boyle: Which is very broad.

Jake Bernstein: But it's very broad. It's much less focused than the CSF or even the privacy framework.

Kip Boyle: Maybe in the future, this will actually split into multiple frameworks that are more specialized. For example, if I'm not developing products and I'm just trying to figure out how to use AI in an office automation environment, I would love to have a tailored AI RMF that's just considering that.

Jake Bernstein: Yeah. For sure. Okay. Is there anything we want to say about measure? I think reading the categories is helpful since there aren't many of them.

Kip Boyle: Go ahead.

Jake Bernstein: AI systems are evaluated for trustworthy characteristics. Now, this is one that makes a lot of sense. We talked about that last time. We talked about the value of the trustworthy characteristics. Those are security, resilience, transparency, accountability, explainability... Privacy, fairness and bias, environmental impact... All of those things can be evaluated. Measure two is one of the longer... Actually, it's the longest category with 13 subcategories, simply in part because each of the AI trustworthy characteristics gets its own subcategory. But this one makes a lot of sense.

Kip Boyle: I'm thinking about figure 4 on page 12 as we talk about this.

Jake Bernstein: Exactly.

Kip Boyle: It contains a lot of the things that you just mentioned.

Jake Bernstein: Yep. That's right. Then, measure three is mechanisms for tracking identified AI risks over time are in place. That's important because these things... I mean, we talk about cyber-

Kip Boyle: Well, this is a risk register.

Jake Bernstein: It is. It is. We talk about cyber being a dynamic risk. Well-

Kip Boyle: Jeez. This is making it look painfully slow.

Jake Bernstein: It really is. This makes cyber look like evolution, which is slow. Then, let's see. Do we have a measure four? We do. Feedback about the efficacy of measurement is gathered and assessed. That's really important because if you don't have that function in there, then you could be doing all this work and not even be aware if it's doing anything.

Kip Boyle: Yeah. You wouldn't necessarily know until something awful happened.
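One lightweight way to picture Measure 3 and Measure 4, tracking identified AI risks over time and gathering feedback on whether the measurement approach itself is working, is a simple risk register. What follows is only a sketch under our own assumptions about fields and review cadence; the AI RMF doesn't prescribe any particular format.

```python
from dataclasses import dataclass
from datetime import date

# A hypothetical risk-register entry. Measure 3 asks for mechanisms that
# track identified AI risks over time; Measure 4 asks for feedback on
# whether the measurement approach itself is useful.
@dataclass
class AIRiskEntry:
    use_case: str
    risk: str
    rating: str                      # qualitative for now: "low" / "medium" / "high"
    last_reviewed: date
    next_review: date                # the cadence would come from the govern function
    measurement_feedback: str = ""   # is this rating or metric actually telling us anything?

register = [
    AIRiskEntry(
        use_case="Customer-facing chatbot",
        risk="Chatbot states a refund policy the company does not actually have",
        rating="high",
        last_reviewed=date(2024, 3, 1),
        next_review=date(2024, 6, 1),
        measurement_feedback="Monthly transcript spot-checks seem adequate so far",
    ),
]

# A tiny tracking mechanism of the kind Measure 3 has in mind: flag overdue reviews.
for entry in register:
    if entry.next_review < date.today():
        print(f"Review overdue: {entry.use_case} - {entry.risk}")
```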

Jake Bernstein: Okay. Now, Kip, why don't you take us through manage to the extent that we can? I have some confusion about manage. What's the difference between manage and govern?

Kip Boyle: Right. They're definitely related. Let me just grab something right out of the doc here. It says the manage function entails allocating risk resources to mapped and measured risks on a regular basis and as defined by the govern function. If we just look at that one sentence there, what we're saying is that if we have AI risks that have been mapped and measured, now we're going to manage them. Now, the question of how we're going to manage them, and how frequently we're going to manage them, and to what level of tolerance and acceptability... That would be informed by the decisions we made when we went through the govern function. That's my interpretation. What do you think?

Jake Bernstein: I think that's right. I mean, I think that govern sets the rules, and then manage actually executes those rules.

Kip Boyle: Yeah. Yeah, according to govern for each risk that was mapped and measured. Maybe that's what this is. This is the life cycle of-

Jake Bernstein: I think it is a life cycle.

Kip Boyle: ...specifically articulated AI risks.

Jake Bernstein: Yeah. I think it could be connected to specific AI use cases really, Kip. I think, in some ways, it does come back to that.

Kip Boyle: Well, I think a use case can have multiple risks. The use case could be-

Jake Bernstein: Oh, for sure.

Kip Boyle: I really like this inventory that I mentioned earlier. Use cases: auto-transcribing audio for videos... That's the use case. How many AI risks are there? One, two, three, four, five. All right. Let's move on. Grammarly. What's the use case? Well, people have it on their mobile devices to protect against typos and autocorrect causing trouble. All right. That's the use case. Now, let's talk about the risks. One, two, three. I think that's how this could be operationalized.
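As a rough illustration of the operationalization Kip just described (the structure and every name and risk below are hypothetical, our own invention rather than anything the AI RMF specifies), each use case carries its own list of mapped and measured risks, and the manage function works through them in the priority order that govern establishes.

```python
# A hypothetical inventory: each use case carries the risks that were mapped
# and measured for it; the manage function then prioritizes and responds to
# them according to the rules and tolerances set in the govern function.
inventory = {
    "Auto-transcribing audio for videos": [
        {"risk": "Sensitive meeting content sent to a third-party service", "priority": 1},
        {"risk": "Inaccurate transcript relied on in a legal or HR matter", "priority": 2},
    ],
    "Grammarly on mobile (finance team)": [
        {"risk": "Client financial data leaves the managed environment", "priority": 1},
    ],
}

# Manage 1 in miniature: work the risks in priority order, use case by use case.
for use_case, risks in inventory.items():
    print(f"\n{use_case}:")
    for item in sorted(risks, key=lambda r: r["priority"]):
        print(f"  [{item['priority']}] {item['risk']}")
```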

Jake Bernstein: I think so, too. I think the other thing that makes this perhaps just challenging overall is that, right now, I think people can sometimes get stuck thinking about the use cases, right? "Oh, ChatGPT." This list shows that there are many more. Some of them are probably low-risk. Some of them are much higher risk. But I think part of the problem is we don't yet all know automatically what the use cases are. It's very early in the development of all of these tools. I think that's normal.

Kip Boyle: I think people could confuse a tool with the use case. They could say Grammarly is a use case. No, because Grammarly on mobile is a use case. Grammarly on desktop is a use case. Even more specifically, Grammarly on mobile in the hands of a finance person versus Grammarly on desktop in the hands of a marketer.

Jake Bernstein: Yeah. I was going to say you can get even more use-case specific. Grammarly used in press releases. Grammarly used in published publications. There really is-

Kip Boyle: You can knock yourself out with specificity.

Jake Bernstein: You could knock yourself out with specificity if you wanted to.

Kip Boyle: Probably too much.

Jake Bernstein: Probably too much. Let's just quickly look at the manage categories that I've been doing.

Kip Boyle: Now that we understand what manage is, yeah, I agree. There's four categories. The first one says AI risks based on assessments and other analytical output from the map and measure functions are prioritized, responded to, and managed. There's four subcategories. I'm not going to read them all. But clearly, they're decomposed from the category I just read to you.

Jake Bernstein: Yep. I think the way that that particular category reads is very familiar to anyone who's read the cybersecurity or privacy frameworks.

Kip Boyle: Yep. Okay. The second manage category... Strategies to maximize AI benefits and minimize negative impacts are planned, prepared, implemented, documented, and informed by input from relevant AI actors. That's a mouthful.

Jake Bernstein: It's a mouthful, but-

Kip Boyle: I can't wait to simplify that.

Jake Bernstein: I know. I mean, it is a mouthful, Kip. Let's ignore the possibility of lawsuits. But let's think about something that really has been happening quite a bit, which is you get called before a congressional hearing. We're nowhere, of course, close to the McCarthy era of congressional hearings. But you never know when we could get back to something like that. Certainly, a lot of the big tech companies have been going before Congress.

This is a good time to ask, how are you going to answer questions from Congress about what you did with AI that may have done X, Y, or Z? If you don't know/if you haven't come up with anything/if you haven't planned, prepared, implemented, documented... That's the question I have. Some of this sounds like, "Oh, do I really have to do that in order to adopt an AI system?" I mean, obviously, the answer is no. You don't have to do any of this. All you have to do is probably pay some money and get people to use it. Question is, is that wise? The answer is no. It is not. That one is a mouthful, but I do think there's a lot packed in there. Sure enough, that does have four additional subcategories.

Kip Boyle: Right. It's interesting when you say, "Is it wise? It's not," well, that's where we get into the arguments. Where if you're-

Jake Bernstein: That's true. That's fair.

Kip Boyle: ...a high-flying, risk-taking venture startup, your appetite may be perfectly aligned with, "We're going to do it, but we're not going to write anything down."

Jake Bernstein: Which is pretty common.

Kip Boyle: Yeah. Pretty common. Okay. That's manage 2. Let's look at manage 3. AI risks and benefits from third-party entities are managed. There's only two subcategories here. This is where you get AI risks and benefits from Microsoft because you're subscribed to O365, and they start releasing AI functionality into something that you're already using that they make. Is that a good example, do you think?

Jake Bernstein: Yeah. I think that is. I think that is good.

Kip Boyle: Then, 3.2, which is one of the subcategories, talks about pre-trained models. That would be a generative-

Jake Bernstein: Generative.

Kip Boyle: Yeah. Generative AI. Then, the fourth and final category. Risk treatments, including response and recovery, and communication plans for the identified and measured AI risks are documented and monitored regularly. It would be the govern function that would tell us what is the definition of "regularly" in this context.

Jake Bernstein: Indeed, it would.

Kip Boyle: Yep. There's-

Jake Bernstein: Let's take a look at manage four now, just to finish up.

Kip Boyle: No, that was manage four.

Jake Bernstein: Was it?

Kip Boyle: Yes, sir.

Jake Bernstein: Oh right. Sorry. Manage three is the one that only has two subcategories.

Kip Boyle: That's right. Yep. I just did manage four. Manage 4 has three subcategories. That takes us to the end of the functions and categories, all of them. Now, what we need to do before we can end this episode is we need to talk about profiles.

Jake Bernstein: Do we? Do we have to talk about profiles, Kip?

Kip Boyle: Oh, I think we should, and we have the time.

Jake Bernstein: All right. I don't fully understand the profiles in any of the other risk management frameworks or the cybersecurity framework. What they're supposed to represent are specific implementations. In theory, they should be super helpful. I think they would be. But the RMF doesn't give us any. Instead, it gives examples and tells you what kinds there will be. There's use case profiles. There's temporal profiles, which describe a current or a desired target state at a point in time. Then, there's cross-sectoral profiles. But we don't have any profiles yet. The profiles in the cybersecurity framework are something that is growing. I think they're going to play a bigger role in CSF 2.0.

Kip Boyle: Yeah. 'Cause even the CSF didn't have profiles.

Jake Bernstein: It did not.

Kip Boyle: It said you should profile your organization or you should profile the framework for your organization, which is just a way of saying throw out the stuff that doesn't apply to you and keep the rest. But with time, they've actually created published profiles. The one that I remember the most is they have a manufacturing profile, which I thought was fantastic. We don't have any AI ones yet. But I'm imagining we will get some. They'll probably show up in the playbook or in other NIST resources. But I want to make the case that you do want to profile the AI RMF for your organization if you're going to use it. Remember what I said was it's just a matter of going through the categories and subcategories and either striking out the stuff that does not apply to your situation or perhaps modifying it a little bit or making it more contextually understandable for your organization. All right.

Jake Bernstein: Make it make sense.

Kip Boyle: Make it make sense.

Jake Bernstein: Ultimately, this thing is supposed to work for you.

Kip Boyle: I love it.

Jake Bernstein: It's not supposed to be busy work.

Kip Boyle: Yep. Exactly.
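Kip's description of profiling, walking the categories and subcategories and striking or adapting the ones that don't fit, can be captured in something as simple as a tailoring table. Here's a minimal sketch, assuming a plain Python list; the identifiers are ones discussed in this episode, but the keep-or-strike decisions and notes are invented purely for illustration.

```python
# A hypothetical organizational profile of the AI RMF: keep, adapt, or strike
# each category or subcategory for your own context. The IDs are ones discussed
# in this episode; the decisions and notes are made up for illustration.
profile = [
    {"id": "MAP 1.1",    "keep": True,  "note": "Document context for every approved use case"},
    {"id": "MAP 1.3",    "keep": True,  "note": "Tie each use case to a stated business goal"},
    {"id": "MEASURE 3",  "keep": True,  "note": "Quarterly review of the AI risk register"},
    {"id": "MANAGE 3.2", "keep": False, "note": "Not building on pre-trained models in-house yet"},
]

for row in profile:
    status = "KEEP" if row["keep"] else "STRIKE"
    print(f"{status:6} {row['id']:11} {row['note']}")
```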

Jake Bernstein: Okay. Kip, before we finally wrap up this whole discussion about the AI RMF. Was there anything else you wanted to talk about that was related to this area?

Kip Boyle: I do. Something that we are doing at Cyber Risk Opportunities... We're not a very big company. We can do this. But I've got everybody in the company currently reviewing and revising an AI acceptable use policy. An AI AUP. God, I love new acronyms. Yay. Not.

Jake Bernstein: To be fair, AUP is not a new acronym.

Kip Boyle: No.

Jake Bernstein: All we've done is add AI in front of it. To me, I give you a free pass on that one.

Kip Boyle: I think that the AI AUP does ultimately belong in the general acceptable use policy for information systems, which we've had now in organizations for quite some time.

Jake Bernstein: These all go in the employee handbook, really, is what we're talking about.

Kip Boyle: Yep. Yep. Exactly. You've got to give folks guardrails. You have to do it in a way that can make sense to them. Put it in a location that they will encounter. I think it's really important to have an AUP. It's going to take some time.

Now, if you're a small organization like we are, then you can get everybody to contribute. But if you're a larger organization, that's not reasonable. Form a subcommittee, think about who are the influential people in your organization, the folks, regardless of their job title, that other people go to when they want help with tough business problems. These could be senior individual contributors, middle managers... I mean, the same people that we would choose to interview if we were going to go and do what we call a culture cyber risk management action plan. Put a working group together composed of those folks and get them to come up with an AUP for your organization. That's going to take a little bit of time. While you do that, let's borrow from our friends in public relations and publish a holding statement. Tell everybody we're working on this, and give us some time.

Jake Bernstein: You can use the holding statement as well to set some initial ground rules just to avoid chaos to the extent possible.

Kip Boyle: You've got to tell people what you're doing is what this really comes down to.

Jake Bernstein: It really does. I'll say, look, if you develop that acceptable use policy, then you've actually started into the govern function right away, because that is where it would live: in the govern function. It is probably the most important document right now for every company to get in place. If you don't have one and you're listening to this episode, and you're like, "Gosh, I don't know where to start," that's okay. You can contact either of us. We can help you with that. By the way, it can be really simple. If you want it to be, "Don't use it while we figure this out-"

Kip Boyle: There you go.

Jake Bernstein: ...that's your choice. That's what a lot of companies did. That's what we did at first.

Kip Boyle: Yep. Maybe you have no technical way of enforcing that, but you'd at least need to tell people what the expectation is. Later on, as you firm up your AUP, you might want to implement some technical controls. But you got to start somewhere.

Jake Bernstein: Just to wrap this whole thing up, I think one of the main reasons behind this is that the currently popular AI tools are meant to be used not necessarily by organizations, although obviously that happens, but by individuals. Individuals get benefit from using ChatGPT. It's individuals who are writing something that get benefit from Grammarly. Because of that, you really have to focus on your employees and your people, because they're the ones who are going to make immediate use of these new tools if they think it's going to make them better at their jobs or more efficient or whatever.

Kip Boyle: I think that's right. As we wrap up this episode, there was a news story recently that I'll share with you, which is another situation: Air Canada had a chatbot, run by machine learning, on its website. A passenger asked that chatbot a question about a bereavement fare. "Somebody died. How can I get a discount on air travel?" The chatbot said, "No problem. Here's what you do." It gave the answer. Well, this traveler bought a ticket, and when they completed their travels, they filed a request for a partial reimbursement to reflect a discount on that travel. They followed the procedure that the chatbot gave them. The airline said, "What? That's ridiculous. We have no such policy. We're not going to reimburse you 40% or whatever the percentage was that the chatbot said that you were entitled to." This passenger said, "That's BS," and called the regulators. Well, guess what? The regulators said, "It's not reasonable to expect the passenger to know that the chatbot, which you told them they have to use, wouldn't be able to tell them an accurate company policy."

Jake Bernstein: Man, Kip. That is a great story of AI risk that really should have been managed.

Kip Boyle: Exactly. That doesn't have anything to do with, "Hey, I'm an individual, and I want to augment-

Jake Bernstein: No. That one does not.

Kip Boyle: ...my skills," although that is a very common situation. What we have is we've got this AI already deployed, helping organizations save money on customer service operations. There you go. There's at least one situation where we're not talking about big amounts of money. But it could easily turn into big amounts of money if the chatbot's telling thousands of your customers false information about your pricing structures every day.

Jake Bernstein: Yep. Okay. We're about to hit a record if we're not careful. Why don't you go ahead and wrap up this episode?

Kip Boyle: All right. I will wrap up this episode of the Cyber Risk Management Podcast. Okay. Today, what did we do? We finished part two, although we're probably going to talk about this some more, of our discussion of the NIST AI Risk Management Framework, the AI RMF. Based on the queries that we're getting from our customers and clients, we're convinced that this is going to play an important role in every organization that uses AI, which is everybody. We hope you take time to look at this thing. Use it in your organizations. Kick it around. Take it out for a spin. Tell us if you need any help. We're going through it right now, too. We'll see you next time.

Jake Bernstein: See you next time.

Speaker 1: Thanks for joining us today on the Cyber Risk Management Podcast. If you need to overcome a cybersecurity hurdle that's keeping you from growing your business profitably, then please visit us at cr-map.com. Thanks for tuning in. See you next time.

YOUR HOST:

Kip Boyle
Cyber Risk Opportunities

Kip Boyle is an information security expert with 20 years of experience and is the founder and CEO of Cyber Risk Opportunities. He is a former Chief Information Security Officer for both technology and financial services companies and was a cybersecurity consultant at Stanford Research Institute (SRI).

YOUR CO-HOST:

Jake Bernstein
K&L Gates LLP

Jake Bernstein is an attorney and Certified Information Systems Security Professional (CISSP) who practices extensively in cybersecurity and privacy as both a counselor and a litigator.