EPISODE 153
NIST AI Risk Management Framework, part 1

About this episode

March 12, 2024

What’s in the NIST Artificial Intelligence Risk Management Framework (NIST AI RMF)? And how do you use it? Let’s find out with your hosts Kip Boyle, CISO with Cyber Risk Opportunities, and Jake Bernstein, Partner with K&L Gates.

Episode Transcript

Speaker 1: Welcome to the Cyber Risk Management Podcast. Our mission is to help executives thrive as cyber risk managers. Your hosts are Kip Boyle, Virtual Chief Information Security Officer at Cyber Risk Opportunities, and Jake Bernstein, partner at the law firm of K&L Gates. Visit them at cr-map.com and klgates.com.

Jake: So Kip, what are we going to talk about today in episode 153 of the Cyber Risk Management Podcast?

Kip: I'm excited because we are going to talk about something that you and I are really, really starting to dig into with joint clients of ours. And I don't know if I've told you, but I'm going crazy trying out ChatGPT and this new open source project called Fabric, which leverages GPT-4 Turbo. Anyway, this stuff is just going crazy. So what we're going to talk about today is that NIST, in January of 2023, published something called the Artificial Intelligence Risk Management Framework, the AI RMF. And what we're going to do is help our listeners understand the importance of looking at AI through yet another NIST risk framework.

Jake: That's right, Kip. And as everyone knows by now, we're huge fans of NIST and both the cybersecurity framework, which is soon to be upgraded to version 2.0, perhaps by the time you listen to this, that will have already been published, but also the privacy framework. And it's worth pointing out that the NIST AI framework isn't some knee-jerk reaction to the sudden visibility and popularity of generative AI systems. As you just said, version 1 of the AI RMF was published in January, 2023, and obviously it's not old, but it was also clearly in development for some time before ChatGPT took the world by storm way back in November of 2022.

And it's worth pointing out as a final bit of background that NIST actually developed the AI RMF as a result of the National Artificial Intelligence Initiative Act of 2020. I will admit that I have not looked up this piece of legislation, but it directed NIST to do this, and NIST expects to continue refining and building on the AI RMF, essentially forever. It's very specifically intended to be a living document.

Kip: Right. And that's true for a lot of the frameworks that NIST publishes, especially the cybersecurity framework and lots of other things that we rely on. And that's good because innovation is a constant theme with anything cyber, so it should be a living document. I also want to point out that just like with the cybersecurity framework, NIST did not put a bunch of smart people in a room and say, "Okay, we're not letting you out of here until you come up with a framework." Not even close. What they did is what they often do, which is they went out in the industry and they said, "Okay, what works? Okay, what doesn't work? Okay, what should we be thinking about?" And so this really reflects the best thinking that was available to NIST throughout the economy, and that's something about these frameworks that I really enjoy.

But there are actually other international standards, policy think pieces, and even legislation that also contributed to NIST informing itself as it developed the AI RMF. So for example, we've got the Organization for Economic Cooperation and Development, which most people just call the OECD. There's a paper on artificial intelligence and society that they published in 2019. There are various drafts of a European Union AI Act, and there's another OECD publication called the Framework for Classification of AI Systems, and that's from 2022. So what's really interesting is how a lot of these regulatory environments and bodies were actually anticipating, in some ways, the release of AI into the everyday world that we're inhabiting, because usually we see the opposite, where something gets released and then NIST and other governmental bodies sort of struggle to catch up. So this is kind of a neat little turn here. But anyway, the bottom line is the framework is robust and I believe it's trustworthy, based in part on how it was created, and I think people should know that.

Jake: Yeah, I agree. And we almost made it five minutes before I decided to go off script. The point you made about these things not being in a vacuum, it's worth mentioning, and we may have mentioned it before, that AI algorithms were originally developed in, I believe, the 1950s. So yes, people have seen this coming for quite some time, and certainly AI and machine learning have been buzzwords off and on throughout the decades. But I really do think it was November 2022, when ChatGPT hit the mainstream, that this really took off. And so I really like that we weren't just left in the lurch in November. This thing was mostly done by the time ChatGPT came out. It was published just two months later, so that's really good, and that does cover the relatively short history of this whole thing.

So let's look at the structure. The AI RMF isn't terribly long. The primary document is a 48-page PDF that consists of two main parts plus four appendices. Part one is titled Foundational Information. And really honestly, it's very useful and I think it's worth a read if you want to dig into some of the theory and thinking behind AI and risk management. I will say that if you're already pretty comfortable with risk and just the concept of risk management, you can definitely skim a fair amount of it. But the AI material is probably new to most people, and I would encourage you to at least skim the risk parts, because it's very contextual with AI. And then the most valuable part, in my opinion, of part one is section three, titled AI Risks and Trustworthiness. But we're going to come back to that. So Kip, what's in part two?

Kip: Well, part two is kind of my favorite bit here, mostly because of my experience using the NIST cybersecurity framework now for over six years to help organizations become more cyber-resilient. So the AI RMF has a core, and the privacy framework has a core too, doesn't it, Jake?

Jake: It does, yeah. It's very similar.

Kip: Okay. And so what is the core? Well, these are the primary functions, and then there's a breakdown into a second level and a third level of detail. And just like in the NIST cybersecurity framework, we've got functions at the top level, categories at the second level, and subcategories at the third level of detail. Unlike the cybersecurity framework, the AI RMF has four primary functions, and in total it's not nearly as substantial as the cybersecurity framework is. And so that was something that I noticed right off the bat. However, it is a little ahead of the cybersecurity framework in the sense that while we're expecting govern to be a sixth function in version 2 of the cybersecurity framework, it's already its own function in the AI RMF, and so that's good. And there are all kinds of other things about the AI RMF that you'll see are very familiar from the way version 2 of the cybersecurity framework is going to be announced. I'm not going to dig into that, but just know that there are a lot of parallels in the way that it's all being published.

Now surrounding the govern function, there are three other functions that do kind of have an order of operations to them, similar to the cybersecurity framework. There's map, measure, and manage. Nice alliteration there, right?

Jake: Definitely.

Kip: Yep. Although I don't necessarily like these terms, it's very alliterative. But these three functions are unique within the NIST frameworks. Although I would say the map function probably closely resembles the identify function in the cybersecurity framework. But truth be told, Jake and I are still learning this framework, so neither one of us today is going to claim that we've got complete clairvoyance on all this.

Jake: Exactly. And who knows, we may decide to make a comment to NIST and maybe they'll rename something based off that comment.

Kip: Maybe.

Jake: You never know. So, okay, last, but by no means least, we have the four appendices. These are, of course, unique to the RMF and they cover important topics including how AI risks differ from traditional software risks and AI risk management and human AI interaction. We may or may not come back to those at the end of the episode. This may become a two-parter, we don't know, but stay tuned and you'll find out.

Kip: Or a three-parter, because we're actively working with clients right now to figure out how to use the AI RMF to guide people in the real world about, "Help, I've got AI seeping through the seams of my organization. What do I do? I don't even know if I want this, but to the extent that I can't stop it, what do I do about it?" But truth be told, Jake, you might have misspoken, because there are a few other components to the AI RMF; it's just that they're not in the PDF, and so I want to call attention to that. There's a big one called the playbook, the AI RMF Playbook. And unlike the framework, which is a pretty svelte 48 pages, this playbook is 230 pages if you download it as a PDF. I don't recommend that unless you want to skim it on an airplane ride or something where you don't have internet. You really want to use it in a web browser. I think that's really its primary... it was primarily designed to be consumed inside of a web browser online.

And we're seeing that with the cybersecurity framework as well. They're also taking some things out of a static PDF and putting it into a browser where it could be interacted with and also updated on a more frequent basis, which is really great.

Jake: I can't remember the fancy name they have for it, but I know that they've recently put the classic SP 800-53, now in revision 5, up online, and they do have a name for it. I think that one in particular, being a catalog, seems particularly suitable for kind of an online documentation system, but that's really what this playbook is: it does seem to be usable online. And yes, you did make a fair point. It's not part of the initial PDF, but I also think that NIST really intended this playbook and the actual RMF itself to go together. Just given how much effort they put into the playbook, it's clear that they see this as core to the tool.

Okay, let's go ahead and dig into part one, see what we can learn and what our listeners may want to go back and read if this is important to them. And one more mea culpa, I must say, I skipped over the executive summary. This has a very good executive summary, and obviously it comes very first in the AI RMF and in fact, Kip, there's a sentence about halfway down page one that really struck both of us when we were discussing this whole thing just recently. Would you care to read that?

Kip: Oh. Why sure. It's here in the script, so it's easy for me to do that. Here's the sentence.

Jake: Ignore the man behind the curtain. Please ignore that, man.

Kip: Here's the sentence, and it's great, and it really distracted Jake and me the other day when we were going through this document together. I think we probably spent 30 minutes just riffing off of this one. Here it is. "AI systems are inherently socio-technical in nature, meaning they are influenced by societal dynamics and human behavior." So Jake and I were talking about, "Well, okay, so if I do a search with ChatGPT today, that's definitely true. But if I was to do that same search with Google at the same time, or five years ago, I mean, wouldn't that also be the same? Because ChatGPT is trained on data that comes from the internet. Google performs searches based on data that comes from the internet." And we had to admit, or at least I felt really compelled to admit, that I think it's true that Google Search is influenced by societal dynamics and human behavior because it is socio-technical. But I think if I rolled back to the first Google Search version about 20 years ago, I don't think it would've been as much, because the internet-

Jake: No, I would agree-

Kip: ... was just different then.

Jake: It was. The internet was different. And again, I mean, the internet itself is a socio-technical beast. I mean, we talk about Web 1.0, Web 2.0, Web3-

Kip: inaudible.

Jake: ... all these different little things. And it's all... I mean, unless I've missed something, Kip, it really isn't as if the fundamental TCP/IP server client structure of the internet has changed. In fact, unfortunately it hasn't, for better or for worse. But what has changed is how we use it, what it means to society, what we put up there. Social media, there was nothing intrinsically new about the concepts behind social media, but it was the way things were organized and how easy it was to add content that really created the entirety, the whole concept of what we now call social media. So-

Kip: I also think it was the number of people that piled onto the internet over the years and what they decided to talk about. Early internet, what did we talk about? Well, if I go way back, we talked about defense projects, and we talked about network survivability, research, that sort of thing. But of course today we're talking about commerce, we're talking about deep fakes, we're talking about disinformation, fake news.

Jake: I mean, Kip, we're talking about life, the universe and everything.

Kip: Right. It's all on there now and so anything that is performing search based on it or is taking a cue from it in order to fill in a large language model and present it for interaction, well, that that's the fuel that it's burning.

Jake: It is, and at the risk of losing this episode to this one concept, I'm going to steal your section here and just read a tiny bit more for context. The next line, after the socio-technical one, is this: "AI risks and benefits can emerge from the interplay of technical aspects combined with societal factors related to how a system is used, its interactions with other AI systems, who operates it, and the social context in which it is deployed." I mean, that's what we were just talking about with respect to how the internet was. But with these AI models, it's even more so. Google, pre-algorithm Google, way back when it was literally an index of search terms, I would say was really not all that socio-technical. It was just indexing what was there.

Anyone could argue, "Well, the results were inherently socio-technical." But not really. I mean, they weren't.

Kip: But the algorithm was in-

Jake: No, the algorithm wasn't. The algorithm now I think has been heavily influenced by things like internet commerce, the entire online marketing and advertising ecosystem, everything that relates to that, certainly culture itself. No, and then the sheer number of people and the sheer amount of information. So it's much closer to that now. And I think you see that in so far as when ChatGPT came out, it was like, "Oh, it's going to be the Google killer." And of course, Google immediately moved toward Bard and other generative AI tools. So anyway, we don't want to go down that rabbit hole.

Kip: No, but I will say something that just ran through my head as you were talking. I was like, "You know what? It's like Soylent Green." Do you remember the closing-

Jake: I know the reference.

Kip: ... lines of Soylent Green, right? Who's the star of that? Was it Charlton Heston? Who was it? Who was-

Jake: I don't know.

Kip: I can't remember, but he's a very famous actor and he's running through the streets, "Soylent Green is people." So if you haven't seen Soylent Green, I just ruined the movie for you, sorry. But it just made me realize AI is people.

Jake: AI is people. It is, and it's meant to be. Okay, so Kip, for the sake of time, we really have to skim over the whole section that's-

Kip: And you thought we weren't going to have enough content?

Jake: No, no, no. I did not say that. I said something different, but we will not be distracted. Okay, so again, we have to skim over the whole section on framing risk, but it's worth your time to at least review quickly. "Of particular note are the discussions of the challenges of measuring risk and the follow-up issues of determining your organization's AI risk tolerance, risk prioritization, and how all of this ties into the org's overall integration and management of risk." And for better or for worse, this is one of those areas, and tell me if you agree, Kip, where outside help like us can really only facilitate internal discussions, but the decisions, particularly on things like risk tolerance and prioritization, have to come from within the org.

Kip: No doubt about it. This is an adaptive challenge. What I mean by that is the people with the problem are the ones that have to do the work to figure out, "What do we do about the problem?" Because with all this AI seeping into our organizations, this isn't really a cut-and-dried problem where we have a large knowledge base for how to deal with it. Nobody knows how to deal with it yet. So when you end up in a situation like this, you have to give the work to the people who have the problem. A very simple example is if you break your arm, you go to a doctor. Nobody wonders, "Oh, I've broken my arm, what do I do?" Everybody knows you go to the doctor. That's a very technical problem. There's an expert with a very technical and well-understood fix for that.

But this is different. Managing the risks of artificial intelligence, there is no expert. Jake and I are working hard to be able to have an expertise to offer, and I think we've done a good job of that so far. But the real answer is going to lie within the organizations themselves. And so we have to bring a certain amount of expertise, but I think we also have to facilitate the internal discussions, just like you said.

Jake: Agreed. So you want to take us to the second to last part of part one?

Kip: Yeah. Well, it talks about audience, and this is interesting because it really shows how broad the AI RMF is intended to be. And I think it also says a lot about just how pervasive AI is going to be in organizations, if it isn't already. So it introduces the concept of AI actors, and you can read a lot more about this in the document, and also the lifecycle and the key dimensions of an AI system. I want to take a moment and unpack that just a little bit. So imagine a circle with two layers on the outside and a core, and I'm just describing figure 2 in the RMF, so if you happen to have that available, just turn to that. The outer layer consists of the AI lifecycle stages, which are plan and design, collect and process data, build and use model, verify and validate, deploy and use, and then finally, operate and monitor.

Now the middle layer and the core are the key dimensions. Interestingly, NIST puts people and planet at the core to remind us that the AI system's biggest impacts are always going to be on humans and the planet at large. And it reminds me of how the cybersecurity framework talks about civil liberties with respect to surveillance. So even in the cybersecurity framework, there is a kind of human and cultural dimension injected into it, because it's not in our best interest to pretend that there aren't effects to be experienced by humans and the planet.

And the middle layer is the more recognizable set of system dimensions, in contrast: the data, the input, the AI model, the algorithms, tasks, outputs, application context. So it's all in there. And that's really all I can do in the time that we have available for this episode. But I'll just tell you, this section is worthwhile. I think it sets a great stage for the core, which we're going to explore. And maybe we will unpack this a little bit more in a future episode, I'm not sure, but there you go. Now what about part two?
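For readers who want that diagram written out, here's a minimal sketch of the figure 2 layers as Kip describes them. The variable names are ours, and the labels are paraphrased from the conversation rather than quoted from NIST, so treat it as a reading aid only.

```python
# A rough sketch (not NIST's exact wording) of the figure 2 layers described above:
# an outer ring of lifecycle stages, a middle ring of system dimensions,
# and "people and planet" at the core.

AI_LIFECYCLE_STAGES = [          # outer layer
    "Plan and Design",
    "Collect and Process Data",
    "Build and Use Model",
    "Verify and Validate",
    "Deploy and Use",
    "Operate and Monitor",
]

AI_KEY_DIMENSIONS = [            # middle layer
    "Application Context",
    "Data and Input",
    "AI Model and Algorithms",
    "Tasks and Outputs",
]

DIAGRAM_CORE = "People and Planet"   # the center of figure 2
```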

Jake: Okay, well, one more part before we get to part two. I really want to highlight the value of the AI risks and trustworthiness discussion in part one, section three. We recently spent an entire episode talking about digital trust, and somebody will put a link in the show notes for that.

Kip: Somebody will.

Jake: The two-second version of that episode is tools and rules. Needless to say, trustworthiness and AI go together like peas and carrots, don't they?

Kip: Yes. inaudible.

Jake: Maybe that is a poor simile or whatever... it's not a metaphor. I think it's a simile. But one thing is clear. In order for AI to be useful and not merely a toy, we have to learn to trust it. And how can we do that? NIST suggests the following characteristics of trustworthy AI. "As a baseline, AI must be valid and reliable. We then need to ensure that it is safe and also secure and resilient." Which to me seem like bedrock principles for the deployment of AI. I mean, just go watch Terminator, people. Come on, this is not new.

Kip: Yeah, or The Matrix, whichever you like.

Jake: Or the Matrix. Yep, yep. We also want our AI systems to be explainable and interpretable, which is not at all easy-

Kip: Or possible.

Jake: We're not going to go into... It may not even be possible. We don't know.

Kip: We'd like it, though.

Jake: We'd like it, yes. "For maximum trust, it's also important to have privacy enhanced AI systems that are fair, meaning harmful bias has been managed." Note that it's not eliminated. And it's also focusing on harmful bias. So fairness is oftentimes in the eye of the beholder, and there's of course a whole section in here about this particular characteristic. But it is worth pointing out.

Kip: Yep. AI is people.

Jake: It is. "And then finally, and relating to all the other characteristics, AI systems should be accountable and transparent." And man, Kip, this I think is going to be very important. I do want to point out that quite interestingly, the RMF does also point out that these characteristics may be in direct tension with each other, and they usually will be.

Kip: Yeah, they will. It reminds me-

Jake: Privacy... yeah.

Kip: It reminds me of Sam Altman. He was fired, right, as the CEO of OpenAI, ostensibly because the board of directors felt that he didn't pay enough attention to some of these factors and that he put more emphasis on the commercialization of the technology. Now that's just one explanation I heard. The reality may be different. But I would say that we're going to look back at that and we're going to say, "That was probably one of the first really high-profile instances of the tension you just mentioned."

Jake: Yeah, I think so. So as I mentioned, of course, the AI RMF goes into detail about each of these characteristics. I think we'll probably end up doing a second episode, maybe even a third, like you said, on the AI RMF where we explore these details. But for now, just know that these concepts come back in the core and you can't really understand the core without having gone through and looked at this stuff.

Kip: Yeah, or at least refer back to it. I mean, what I did is I skimmed the first part very thoroughly, but I didn't read it word for word. But what I found is that as I got to the core, I did need to flip back. And by the way, when I say "flip back," I mean that literally. Jake and I have both, I guess we're old men, but we both printed the framework on paper, and I found myself flipping back to part one a lot in order to understand what they were trying to say in the core. So however you want to do that, have two different windows open in your PDF Reader or however you want to do it.

But all right, so let's go on to part two. So the core of the AI RMF is deceptively simple because it's actually shorter than the cybersecurity framework core and the privacy framework core. But I don't think it's actually simple, and I don't necessarily think it's simpler than either of those, and that's why I say it's deceptive. But Jake and I are going to have to spend a lot of time. We've already decided that we're going to have to spend a lot more time going through the core. Even though it's not as big, it just feels like it's more densely packed.

Jake: It is. It's dense, it's very dense.

Kip: Yeah. And we have to unpack all of that terminology. Some of it is just overloaded words. Like, what does this word mean in this context versus some other context that we would use it in? Turns of phrase, very ethereal requirements, like we talked about accountability, explainability, some of that stuff is just like, "Okay, how do you do that?" The very nature of artificial intelligence makes it difficult to wrap your brain around it. But in any event, let's continue. "So the core uses the same top-level function, second-level category, third-level subcategory structure as the other two frameworks do." And if you look at figure 5 in the document, you can see how NIST intends for them to relate to each other. So let me just do a quick explanation. So govern sits at the center of the core. It's core to the core. And I think a good summary of it is: a culture of risk management is cultivated and present.

And there's a great example of the density of this. What does it mean to cultivate a culture of risk management, and what does it mean for it to be present, right? Because it's saying it has to be cultivated and it has to be present. I think that it means that there needs to be an ongoing conversation, and I think that means there need to be artifacts of that conversation and the decisions that have been made. Anyway, that's what it means to me. But I couldn't tell you definitively exactly what documents, exactly what those conversations should be. I think that's what we're all trying to figure out. Now, if you look at govern in the center and you look around the outside of that inner circle, you've got map at about the 10 o'clock position, measure at about the two o'clock position, and manage at about the six o'clock position. Now, I'm assuming that those of you listening to the episode now do know how to tell time using an analog clock face, because if you don't-

Jake: Or at least learn how to drive, 10 and 2, 10 and 2.

Kip: That's right. Because if you don't, that meant nothing to you, and you're probably pissed off that I did that. But let me give you a very short summary of those three functions. So map, what does that mean? It means context is recognized and risks related to context are identified. That's dense, man.

Jake: All of these are super dense, Kip, which is, I think... I love this because it's really demonstrating that, but go ahead, continue with the density.

Kip: Okay. I will. Measure, what does that mean? "Identified risks are assessed, analyzed, or tracked." Okay, that's a little less dense.

Jake: That one I can actually get, yeah.

Kip: Yep. And manage, "Risks are prioritized and acted upon based on projected impact." I'd say that's semi-dense.

Jake: Yeah. That's semi-dense, again. I mean, understanding those words is not necessarily that dense, but actually doing it, that's dense.
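To keep the four functions and their one-sentence summaries straight, here's a minimal sketch laid out as a simple Python dictionary. The variable name is ours, but the summary sentences are the ones Kip just read from the framework.

```python
# The four AI RMF core functions and the one-line summaries quoted above.
# GOVERN sits at the center of the core; MAP, MEASURE, and MANAGE surround it.

AI_RMF_CORE_FUNCTIONS = {
    "GOVERN":  "A culture of risk management is cultivated and present.",
    "MAP":     "Context is recognized and risks related to context are identified.",
    "MEASURE": "Identified risks are assessed, analyzed, or tracked.",
    "MANAGE":  "Risks are prioritized and acted upon based on projected impact.",
}
```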

Kip: Yeah. Yeah. And now another thing that Jake and I took away from reading this, and I would invite anybody else who wants to dive into it, or has, to share your reaction. But our reaction is that when we look at the govern function, we really think that the scope of what it's trying to address is the organization as a whole. In contrast, the other functions seem to be more limited.

Jake: Or at least specific.

Kip: Yeah, more specific. Maybe the scope isn't organization-wide. It seemed to us that the scope is more narrow. Use cases, right, Jake?

Jake: Yeah, use cases. We were thinking departmental or maybe even specific systems or even specific products. But at the end of the day, we both laugh because we both came up with use cases. And I think that is a useful way of looking at this, because departments of course can have many use cases, and a use case may consist of multiple AI systems. So I just think if we apply map, measure, and manage to individual products, that seems like too low level. At the same time, I don't understand, at least at this point, how we could map, measure, and manage across the entire organization. I think that's going to become clear as we kind of use the rest of the episode to... I don't want to say dig into, because we're not going to have time to do that in this episode, but at least look at the...

Kip: The categories in inaudible, the categories?

Jake: The categories.

Kip: Yeah, just a little bit inaudible.

Jake: Yeah, I think what we'll do here, Kip, is why don't we do this? Why don't we... Let's just take a look at the descriptions of each of the categories within each function. And I think we're not going... Even just trying to read the subcategory names would take too long.

Kip: I think it would bore people.

Jake: Well, actually it wouldn't, because, yeah, it would be very sad. So one criticism that I'm going to level right now against the AI RMF, and it's a 1.0, I get it, is that unlike the other frameworks you don't get, or at least I don't currently see, useful category and subcategory names. There are all kinds of examples and of course I'm blanking on them. Maybe Kip can find one in the CSF, but-

Kip: So detect, right. In the CSF, there's the detect function, and inside that there are some categories and subcategories that are descriptively named, that tell you in the name of the thing what we're talking about. And then it unpacks it more thoroughly in the third level of detail. So like anomalies and events, that we can detect anomalies and events. So that's the name of a category. And then in the subcategories, we break that down. But that's not how the AI RMF does it. They actually just repeat the name of the function and then they tack on a number. So for example, we've got govern, and then in there we've got categories of govern 1, govern 2, govern 3, govern 4, and then the subcategories are, I guess, this, right? Govern 1.1, govern 1.2, govern 1.3. I don't find this to be the way to go. This is difficult.

Jake: Yeah, and I think it's less descriptive. On the other hand, it is a lot faster as a citation.

Kip: Yeah, I guess.

Jake: I mean, I honestly just realized that right this second, but we'll see. We'll see what they decide to do. So I'm going to go through-

Kip: You know how we do it with CSF, right?

Jake: I know.

Kip: I mean, ID.RM.6-A, but we can still have meaningful titles.

Jake: I agree. I think they'll probably do that. And you know what, even if they don't, nothing stops us from coming up with our own titles. So-

Kip: Yeah, baby.

Jake: That's right. Okay. "Govern 1 is: policies, processes, procedures, and practices across the organization related to the mapping, measuring, and managing of AI risks are in place, transparent, and implemented effectively." My goodness, that is a very dense category right there, off the bat.

Kip: That's a full-time job description almost.

Jake: It is kind of a full-time job description, but still, you can already see what govern's going to be, right? It is the policies, the procedures-

Kip: And by the way, these requirements, I think of them as requirements. Even though they're categories and subcategories, they don't say they're requirements, but I think of them as requirements. They're just as overloaded as any NIST framework. Because if I really wanted to implement govern 1, I'd have to break that out into multiple testable statements.

Jake: Oh, for sure. And I mean, really, Kip-

Kip: There's too much in there.

Jake: The same is true. Even if we go to the next level of subcategories, govern 1 point whatever, they're all pretty dense. "Govern 2 is about accountability structures being in place so that the appropriate teams and individuals are empowered, responsible, and trained for mapping, measuring, and managing AI risk." It still has to be broken down, and it is.

Kip: Yep, but it's about people.

Jake: But it's at least a bit... it's a bit more inaudible.

Kip: It's about people and accountability. I like that. "Govern 3 is around workforce diversity, equity, and inclusion." So this is good. We talked about bias, right? There's a lot of bias in these systems, and so here's a govern category focused on that.

Jake: Yep. "Govern 4 is organizational teams are committed to a culture that considers and communicates AI risk." I love the use of the word culture here, Kip, because even... We'll just take your own company's primary services and products as an example. Way back in the day, I'm not even sure if I remember what we called it, but now you call it a Cyber Risk Management Action Plan, a CR-MAP.

Kip: Cyber Risk Management Action Plan. Yeah, that's right. And we have two different flavors, and one of them is called a Culture CR-MAP. And it's designed specifically to not only determine top cyber risks, but to actually interact with the influencers inside your organization in a way that actually nudges your culture towards the practice of reasonable cybersecurity. So it's actually changing the organization in a positive way, at the same time that it's actually learning something about the organization, which is, what are its top cyber risks.

Jake: And let's move on to govern 5. So govern 5 is "Processes are in place for robust engagement with relevant AI actors." I have no idea what that means on its face, so I'll move to govern 6. "Policies and procedures are in place to address AI risks and benefits arising from third-party software and data and other supply chain issues." Now that one I get; that one I think is being discussed actively now in many places. So given the length of this episode and the fact that we have not yet touched on map, measure, and manage, what do you say we put a break in the episode here and then pick this back up next time?
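Before the hosts wrap up, here's a rough shorthand index of the six govern categories they just walked through. The wording is condensed from the conversation rather than NIST's official text, so treat it as a reading aid only.

```python
# Shorthand index of the six GOVERN categories discussed in this episode.
# Paraphrased from the conversation; see the AI RMF itself for the official text.

GOVERN_CATEGORIES = {
    "GOVERN 1": "Policies, processes, procedures, and practices for mapping, "
                "measuring, and managing AI risks are in place, transparent, "
                "and implemented effectively.",
    "GOVERN 2": "Accountability structures empower, train, and assign responsibility "
                "to the teams and individuals who map, measure, and manage AI risk.",
    "GOVERN 3": "Workforce diversity, equity, and inclusion.",
    "GOVERN 4": "Organizational teams are committed to a culture that considers "
                "and communicates AI risk.",
    "GOVERN 5": "Processes are in place for robust engagement with relevant AI actors.",
    "GOVERN 6": "Policies and procedures address AI risks and benefits arising from "
                "third-party software, data, and other supply chain issues.",
}
```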

Kip: Well, yes. I think it bears saying that having gone through the different categories in govern, right, there are six of them, that should give listeners enough of an understanding of how the other functions are broken out, and it's just a good preview. So when you go in and do your own reading, you should be well-prepared for what you're going to encounter in the other functions. And no doubt we'll be back to continue to explore these, and as we work with our clients, we'll bring back to you, our audience, some of the insights that we're able to have, and we hope that that helps you. Okay. Anything else, Jake, before we wrap up?

Jake: Nope, wrap it up.

Kip: Okay. That wraps up this episode of the Cyber Risk Management Podcast. Today we really just started a discussion about what NIST's AI risk management framework is, how to use it, what does it really mean? We do feel like this framework could soon be just as important to our work and your work as the cybersecurity framework and the privacy framework. And hey, we're encouraging you to take the time to look at it and to use it in your organization and share with us what you experience. And until then, we'll see you next time.

Jake: See you next time.

Speaker 1: Thanks for joining us today on the Cyber Risk Management Podcast. If you need to overcome a cybersecurity hurdle that's keeping you from growing your business profitably, then please visit us at cr-map.com. Thanks for tuning in. See you next time.

YOUR HOST:

Kip Boyle
Cyber Risk Opportunities

Kip Boyle is an information security expert with 20 years of experience and is the founder and CEO of Cyber Risk Opportunities. He is a former Chief Information Security Officer for both technology and financial services companies and was a cybersecurity consultant at Stanford Research Institute (SRI).

YOUR CO-HOST:

Jake Bernstein
K&L Gates LLP

Jake Bernstein is an attorney and Certified Information Systems Security Professional (CISSP) who practices extensively in cybersecurity and privacy as both a counselor and a litigator.