EPISODE 77

Busted: The Truth about Cloud Security

About this episode

April 13, 2021

What do you know about cloud security marketing?! In today's episode, we do some mythbusting, specifically targeting common cloud security marketing messages, with the help of our guest, Paul Rich. To see more about Paul Rich, check out his LinkedIn profile: https://www.linkedin.com/in/parich/

Episode Transcript

Audio: Welcome to the Cyber Risk Management Podcast. Our mission is to help you thrive as a cyber risk manager. On today's episode, your virtual chief information security officer is Kip Boyle, and your virtual cybersecurity counsel is Jake Bernstein. Visit them at cyberriskopportunities.com and focallaw.com.

Kip Boyle: Jake. Hi. What are we going to talk about today?

Jake Bernstein: Hey, Kip. Today we're going to talk about cloud security and some myths, and we're going to bust those myths with the help of our guest, Paul Rich. Paul is-

Kip Boyle: I miss MythBusters. I'm glad we're doing this.

Jake Bernstein: That's true. It was a great show. Paul is an executive director at JPMorgan Chase, and before that, spent about 21 years at Microsoft, most recently working on Azure security. So Paul, welcome to the podcast.

Paul Rich: Hi. It's great to be here.

Kip Boyle: Paul, thanks for joining us. We really appreciate it. I enjoyed learning about your background. 21 years at Microsoft. Somebody who worked on Azure from the inside. And I just think that makes you incredibly well-qualified to come here today and to share some perspectives about the cloud and what's myth and what's truth. So thank you for being here.

Now, we've teased a little bit of your background, but I'd like you to introduce yourself, and I want our listeners to know even more about where you've been and what you've been doing.

Paul Rich: Well, I appreciate that. And it's fun to be here and to go over background because I feel like I've had kind of a combination of all the best Disney rides in my life as my personal and professional life. But professionally, it's like maybe the fun house where I've got a lot of variety.

So I started on the TRS-80 Model I back in the early '80s, and I fell in love with technology then. But I've also been an electrocardiogram tech and a State Farm insurance agent. I spent four years in the U.S. Navy and actually worked on encryption hardware devices and computers there.

And then after the Navy, I moved out here to Seattle and worked on all Novell NetWare systems, nothing Microsoft, routers and switches. And then because I realized my dream was to work for Microsoft, I worked hard to prepare. And got the call in '98, and carried that badge for 21 years.

I was really lucky to get in on the ground floor of what today is cloud, but back then that term didn't even exist. So I spent most of my time at Microsoft in Office 365 and the last couple of years in Azure security. Then I took a sabbatical and then joined JPMorgan, great company. I was very impressed working with them when I was at Microsoft.

And then outside of JPMorgan, I'm also co-chair of the Cloud Security Alliance working group that covers cloud key management. And I'm also co-chair of the International Association of Privacy Professionals, Seattle-Bellevue Chapter. So I keep pretty busy.

Kip Boyle: That's for sure. Just one of those co-chair positions would suck up all my free time.

Jake Bernstein: Wow, Paul, that is quite a background. I think there's a lot we could talk with you about, a lot you could offer our audience. But today we're going to focus on something extremely practical, and that is how to evaluate the many, many marketing claims made by cloud services companies, particularly in the SaaS realm, and really focusing on that myth-busting theme here on the Cyber Risk Management Podcast.

Kip Boyle: There's a lot of myths.

Jake Bernstein: There's a lot of myths. And if you could just give us a real quick overview of the four points that we're hoping to cover. Our regular listeners know that we may start with four points and only get to three, but we'll see what happens.

Paul Rich: Yeah, sure. And before I launch into my strongly held views, strongly held but persuadable, I should mention that the JPMorgan lawyers want me to make sure that you and your listeners know that these are my opinions and I'm not representing JPMorgan at all here. So with that-

Jake Bernstein: That's a well-done disclaimer there, Paul. As a lawyer, I approve.

Paul Rich: All right. Thank you, counsel. Actually, I've done a number of public talks. The SecureWorld talk that, Jake, you mentioned before we started recording was one of them on myths essentially about cloud services. More often than not, in the privacy and security realms, but sometimes in other realms too, like with availability.

So there are some key points that really speak to the mythology of cloud. One is these marketing stories or hooks that they put out there and their tendency to appeal to customers' control... Control issues sounds a little like pathology.

Jake Bernstein: No, that was on purpose.

Paul Rich: Yeah. The need for control and the desire customers have. There's a real common matrix that's bandied around the tech community that shows this matrix of control that you give up as you move from on-prem to IaaS and then to PaaS and then to SaaS. And it just gives a lot of people shivers when they look at that and realize what they're giving up by turning things over to the cloud provider. So that's one.

And then the second thing is there are consequences of system design failures on the part of the cloud service providers that can impact privacy and, obviously, risk. And those are things that I would say there's an implied mythology that the cloud providers don't have those kinds of failures, and so nobody really asks about them. So we can talk about that a little bit.

And then for me, near and dear to my heart is encryption. And one of the strong mythologies that's really been persistent and pernicious for a very long time now is having to do with encryption and the use of cloud services, both companies that ship appliances that have custom encryption, as well as the effects that encryption can have. So needing to think about standards-based and peer-reviewed encryption because it is a mathematical science.

And the last one, which I think is going to be nearest and dearest to your heart, Jake, is contracts. It's a fascinating area with cloud providers and one that's really ignored by just about the entire tech community. And we'll dispel some myths that a lot of people have. So those are four key areas we can talk about.

Kip Boyle: That's great. I love it. So I'm really excited about this topic. It's so important. Everybody's going to the cloud if they're not already there yet. A lot of customers that we work with are cloud-first companies. Right? They don't have any legacy local area networks, legacy wiring closets, switch closets. They've never co-located anything anywhere.

So they've just started out with cloud. And there's so many advantages to it, as it seems to me like we're converging on this idea of utility computing. And I remember hearing about this years ago, this idea that, in the future, computing would be like electricity. You would just have a wall socket anywhere you went. And if you wanted electricity, you just plug your gizmo or whatever it is into the wall.

You wouldn't have to worry about where does electricity come from? And do we have enough diesel in the generator? And whatever, right? So it just becomes this utility. And that's certainly what I'm seeing, but it's not as simple as electricity. It's way more complicated.
And these marketing messages are very, very clever. If we see Facebook figuring out ways to hook people, to actually get into how the human mind works and how human psychology works and actually can figure out ways to keep us engaged even when we should probably be doing other things, you can bet that there's a lot of human psychology behind the marketing messages from the cloud companies.

And so we've just got to be able to separate truth from fiction, especially CISOs and senior decision-makers. I mean, these are critical-thinking people. And if we can get them the facts, then they're going to be making better decisions. And that's really what they do, right, is that they are corporate decision-makers. And so we want to make sure that we're setting them up for success.

So we've got some points that we're going to explore here. What kinds of things should people be asking their cloud service sales reps?

Paul Rich: Yeah. I really like to try and help people think about how to maybe flip or migrate their questions from what they typically ask to what they really should be asking. So a simple example of that is the most common answer, or question, sorry, to go back to the control issue, is can I have control of something? Of whatever, fill in the blank.

And what they really should be asking their cloud provider is, will this prevent you from responding to a legal order, for example, for my data? Or will this prevent your personnel from being able to access my data?

So that's a simple transition from, "Can I have control of the encryption key?" To, "What are you really asking for?" And then instead of asking what's your SLA or what's your uptime, asking how is your cloud service or your technology underpinning it most likely to fail?

Kip Boyle: That's a great one, by the way. I want to inject an idea here, which is negative visualization. Have you heard of that particular turn of phrase, Paul?

Paul Rich: Educate me.

Kip Boyle: Yeah. So negative visualization, this is what decision-makers should be doing but that they never do, because they're always thinking about their happy path. Right? "Hey, we're going to migrate to the cloud and all these problems that we have are going to go away. And then we're going to be able to do all this other cool stuff that we've always talked about."

So they always think about how everything's going to go right. What they don't often do is think about and talk about how could things go wrong, and can we prepare for that somehow? So I love this question that you've brought up because it gives a great example of this idea of negative visualization. And I think it's a really important part of risk management.

Jake Bernstein: I'm pretty sure that negative visualization is actually just being a lawyer.

Paul Rich: Or a risk manager. Yeah. Right.

Jake Bernstein: Yeah. Certainly, that is... Yeah.

Paul Rich: I read a very influential book-

Kip Boyle: Wasn't there a course on that in law school?

Jake Bernstein: Yeah. Yeah. The whole thing.

Paul Rich: Yeah. So I can't remember the name of the book, but I read a very influential book a long time ago when I was at Microsoft. The title of the book and the theme of it was Man-Made Disasters.

So it went through Three Mile Island, the space shuttle. So they were focusing on very complex systems, not simple man-made disasters, but really highly, highly complex things like a nuclear power plant. And like what you're saying, Kip, how the designers of those systems, actually, people on the ground probably, did have some ideas about how it might fail. But at the upper echelon, they really saw this as kind of a bulletproof system.

And then when those systems failed, you look at something like Fukushima, just absolute incredible disaster, but also a wake-up moment where those of us in the public looking from the outside in would go, "Well, gee, that seems kind of obvious." Tsunamis happen in the Pacific Ocean, you locate... So the cloud is like that.

The technologies underpinning something like Amazon or Microsoft are very complex, and there are so many different ways they can fail. And we don't take the time to focus on that. We're focused on SLAs and penalties for SLA. So yeah, we can talk about that a little bit.

And then the last two questions I would bring up would be on the encryption piece. If a vendor or a cloud provider is starting to talk about encryption, you've got to nail them to the wall on where the standards body community is on this. There's the IEEE in the engineering field, if your listeners aren't familiar with it, and the international bodies that govern internet standards.

You have to be using an internet standard if you're using encryption. If you're not, you've gone rogue in that community and no one really can trust what you have implemented. So it's a peer-review model like any scientific endeavor.
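As a concrete illustration of that point, here is a minimal sketch, ours rather than anything demonstrated in the episode, of what standards-based encryption looks like in practice: a NIST-specified, peer-reviewed construction (AES-256-GCM, SP 800-38D) called through the widely reviewed Python `cryptography` package instead of anything homegrown.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# AES-256-GCM: a NIST-specified, peer-reviewed authenticated cipher.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)
nonce = os.urandom(12)  # 96-bit nonce; must never repeat under one key

# The third argument is associated data: authenticated but not encrypted.
ciphertext = aesgcm.encrypt(nonce, b"customer record", b"tenant-42")
assert aesgcm.decrypt(nonce, ciphertext, b"tenant-42") == b"customer record"
```

The point is not this particular library; it is that every primitive in the sketch has been through the open, peer-reviewed standards process being described here.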

And then lastly, when you see a marketing claim about privacy and security, you really want to ask, what does the contract say about this? So that'd be the final and maybe even the most important question.

Jake Bernstein: Very good. Yeah. I think all of that is just critically important, and it's oftentimes not something that people think about. Particularly, as we're talking about the negative visualization, I like to think of that as everyone being a lawyer.

The standards body one is really, really critical too. I know there's a whole talking point about this, but my favorite kind of marketing speak to dig into is, "Oh, you have a proprietary encryption mechanism? That's so fascinating." And that just goes to your standards body publication and protocol.

So let's go ahead and move into those four primary components. And I think the first one was talking about people's control issues, which we all have.

What I see is that... I think a lot of our shared clients, Kip's and mine, as you mentioned, are cloud-first. And it's almost like they don't think about... They're not thinking about giving up control in the same way because they're just kind of used to that.

But we also have clients with huge... I'm going to use the L-word, legacy networks. And they definitely, I would say sometimes to their detriment, refuse to give up control. So maybe speak to the value of control, what it really is. Is it some kind of goal in and of itself, in other words, an outcome? Or is it really just a tool, a component of an overall system? And let's go ahead and discuss that first.

Paul Rich: Yeah. It's an interesting dynamic with cloud-first entities. And I would love to hear your experiences with companies like that that maybe have turned a corner on how they, well, let's say ride along making assumptions and they're happy and blissfully ignorant of the contract in this case, with respect to privacy and their data.

And then as they grow up a little bit, maybe they start out paying attention to the startup issues that their attorneys are going to take care of for them. And once they graduate to a situation where maybe they start talking to peers, and their peers are saying, well, so-and-so had data subpoenaed from a cloud provider and it was turned over, now they're starting to pay attention to it. And then that raises the question of control.

Even later on, those cloud-first companies might be coming back and going, "Hmm, this is an area where I'd really like to protect myself. What can we do here?" And what they'll see is the marketing around controlling access to your data. Microsoft has a key feature in that arena, but all the cloud providers will market to you: hey, you can control your encryption keys.

So the smart person is going to respond to that by saying, "Okay. So my goal is to ensure that my data can't be subpoenaed. Is that going to actually address that concern? Is that going to accomplish that goal?"

This was an area I spent many years at Microsoft addressing with customers, so answering that question. And my experience is that, universally, with SaaS providers, the answer is no. There's no way around that.

With IaaS and PaaS, the dynamics can be different, but it's always the case with SaaS that the answer to that question is going to be no, that encryption control is not going to give you protection against that scenario.

And that includes the insider threat from the cloud provider, so their personnel being able to access your data.

Jake Bernstein: Isn't it the case that if you actually read Slack's privacy policy, they never actually say that Slack itself can't read your information? They can, they could, hypothetically.

Paul Rich: Yeah. And there's a good reason why contracts don't include language like that, not just because they might want to leave it intentionally open, but because Slack doesn't know if they can read your data because you could put data into Slack's service that's already been encrypted on-premises, using encryption keys that Slack can't access, so they can't ever see the data.

So that's an option with any of the SaaS services. With any cloud service, you can encrypt data and stick that data into the cloud versus putting it into the cloud and having the cloud encrypt the data for you.

If the cloud is encrypting your data, then your data is available to the cloud personnel. It's available for them to respond to a third-party data request. And that's the key differentiator there.

Are you encrypting your data and the cloud can't decrypt it? Or are you letting the cloud encrypt your data? And therefore, they obviously can decrypt it.
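To make that distinction concrete, here is a minimal sketch, assuming Python and the `cryptography` package, of the first option: encrypting on-premises before the data ever reaches the provider. The `upload_to_saas` function is a hypothetical stand-in for any SaaS client, not a real API.

```python
from cryptography.fernet import Fernet

def upload_to_saas(blob: bytes) -> None:
    """Hypothetical stand-in for any SaaS provider's upload call."""
    print(f"provider stored {len(blob)} opaque bytes")

# The key is generated and kept on-premises. The provider never sees it,
# so its personnel cannot read the data, and neither can a response to a
# third-party data request; the provider can only hand over ciphertext.
key = Fernet.generate_key()
f = Fernet(key)

upload_to_saas(f.encrypt(b"quarterly forecast: confidential"))

# Decryption happens back on the client with the locally held key:
# f.decrypt(token) recovers the document. If instead you let the cloud
# encrypt for you, the provider holds the key and can always decrypt.
```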

Jake Bernstein: Yeah. That's a really, really important point.

Kip Boyle: And even if a human being isn't reading your data, you get situations like Gmail, where Google used automated systems to read messages that were being sent around so that they could serve up ads, right? So that's kind of creepy.

Paul Rich: I agree. I'm not a very ad-positive person. And one of the things I was always impressed that Microsoft said is, "We'll never do that." And they have kept to that. Never is a long time, but they've kept that promise.

What they do, for example, in the Office 365 service, is they do read the data in the Office 365 service to populate intelligence in their product, it used to be called Delve, it's something Analytics, where they can say, "Oh, Kip often sends mail to Jake. And Jake reads those emails in five minutes. This must be a close contact in Kip's business community versus someone else."

It can surface things to Jake that you're working on that he doesn't even know about, because it can do this business analytics across your data and his data. So no human is looking at that, but their systems are constantly looking at your email, your documents, all of the crosstalk.

Kip Boyle: Yeah. And it makes one wonder that if a computer is allowed to do that by design, then how could that permission that the computer has, be abused so that a human being could gain access?

Paul Rich: Yeah. Well, that's the insider threat scenario really if you're mostly concerned about the cloud service provider personnel. And my experience, both personally, inside of Microsoft, and also with a number of colleagues I have who work at Amazon and Google, is that all of the major cloud providers have done a very good job of building checks and controls.

In the case of Microsoft, at least, the one I know the most about, zero access on an ongoing basis. So no individual at Microsoft that operates the cloud services in Office 365 has access to anything in production on an ongoing basis.

So every instance where they... If they come on call and they're supporting the service this weekend, they don't have access. When they get a call and they need to jump in and do something, if they need to access production resources, they'll raise a request, get that request approved.

The access will be limited in duration, by default, a short number of hours, unless they ask for more. But once that timer expires, their access evaporates again.

So that's one way that the major cloud providers have really evolved from the traditional on-prem where people have got 800, 900 different admins in a company that size and they always have access, so there's a lot of insider threat.
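What Paul describes is straightforward to model. Here is a hypothetical sketch, ours and not Microsoft's actual implementation, of the just-in-time pattern: access exists only as an approved, time-boxed grant that expires on its own.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AccessGrant:
    operator: str
    resource: str
    expires_at: datetime

    def is_active(self) -> bool:
        # No standing access: the grant "evaporates" once the timer expires.
        return datetime.now(timezone.utc) < self.expires_at

def approve_request(operator: str, resource: str, hours: int = 4) -> AccessGrant:
    # Approval mints a short-lived grant; the default duration is small
    # by design, and a longer window requires an explicit ask.
    return AccessGrant(operator, resource,
                       datetime.now(timezone.utc) + timedelta(hours=hours))

grant = approve_request("oncall-engineer", "prod-mailbox-service")
assert grant.is_active()  # usable now; False again once the window closes
```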

Jake Bernstein: And I think that issue just about lit me up completely because I was just thinking, okay, it's great that the big cloud providers do that. That's one of the questions I would ask the small cloud provider. Hey, how do your support personnel... Or how do you control access to this data?

I was thinking just-in-time access is what you were describing, and that's really ideal. It really prevents people from gaining too much power over time, accumulating information. But I do wonder how feasible that model is for smaller cloud entities. And I think that's a really fair question to ask people.

To me, that's probably a follow-up under that first bullet point, what will this do to keep my data secure or private? So yeah, that's great. That's a really, really key, I think, observation/insight that you've given us there.

Paul Rich: Sorry. On the theme of control and this particular aspect of it, there's a follow-up question to that, which would be, if you've implemented this just-in-time access model, does getting approved for that just-in-time access also include access to customer data?

And that might seem like, well, that's kind of weird, but actually, Microsoft separated those two things years ago. So if I were working for them and supporting the Office 365 systems, and I did get that access approved, it still wouldn't give me access to any customer data. That's a second approval that's necessary.

On top of just being able to connect to anything in the data center, I then need a second level of approval. And this is where control for the customer comes in because Microsoft built a feature where, for that second level to get access to your data, you can be in the approval path as the customer.

So the operator can get access to the systems, but not your data. And if they need access to your data, they have to come to you, the customer, and ask for the final approval. If you say no, then that operator will not get access.
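Extending the sketch above, the second approval level Paul describes is just another gate in front of customer data, with the customer as a mandatory approver. A hypothetical sketch:

```python
def request_data_access(operator: str,
                        system_approved: bool,
                        customer_approved: bool) -> str:
    # First gate: time-boxed system access, as in the earlier sketch.
    if not system_approved:
        return f"{operator}: denied, no system access"
    # Second gate: the customer sits in the approval path for data access
    # and can veto it even when the operator may touch the systems.
    if not customer_approved:
        return f"{operator}: systems only, customer data denied"
    return f"{operator}: time-boxed access to customer data granted"

print(request_data_access("oncall-engineer",
                          system_approved=True,
                          customer_approved=False))
# -> oncall-engineer: systems only, customer data denied
```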

Jake Bernstein: You know what is striking to me about this example and this discussion is how even though there's an absolute ton of technology being used and applied here, fundamentally, this is a low-tech solution. Right? The idea of, hey, ask for access, right?

And what it reminds me of is the easiest, best, lowest-cost, but yet most effective defense against typical business email compromise wire fraud is a phone call. Talk to the person. Don't wire money until you've verified it.

And what this reminds me of is you're talking about some of the most sophisticated technology on the planet right now, and yet the control really isn't based around technology at all. There is no machine learning going on here. There are no ultra-fancy algorithms running. It is a human approval process.

And I think that is a really important lesson for everyone to just remember and keep that in mind when you're designing your systems from scratch. The more you rely on pure technology, the more vulnerable it will be to not just hackers, but insiders.

And conversely, the more you rely on multiple human beings, the more secure it is, and for the simple reason that it's difficult. I mean, by the time you have to compromise five or six individual people, now it's a grand conspiracy instead of just one person who's an insider threat.

Kip Boyle: Yeah. Well, there's also the possibility of groupthink, right?

Jake Bernstein: Yep.

Kip Boyle: Just everybody focused on one thing that makes their life easier and not necessarily thinking about other stuff that could lead to some real serious data breaches and cataclysmic complex systems failures like Paul was mentioning in that book that he was reading. But what we've always said here is that, as a cyber risk manager, you want to bring all your resources to inaudible problem, right?

So you can get a technical control in place, you can train people to do things in a certain way, you can get a process implemented that's going to decrease the risk or the impact if something bad happens, and then you can get some managerial oversight. You can coordinate or orchestrate all four of those resources against something that's a big risk. That's usually the best approach.

Jake Bernstein: Let's go into talking point two. This is looking like it'll be one of those slightly longer episodes, but I think the value here is clear.

Kip Boyle: So privacy.

Jake Bernstein: Privacy. Yep. I think, for me, this question is privacy, but it's also about system design failure and how I think, typically, people overly focus on availability. And, oh, what's your uptime, downtime? What's your SLA?

But I think what we're going to talk about here is let's think about just a bit more than just that. What are the privacy consequences when a system fails?

Paul Rich: Yeah. Because I have the personal experience, I was still at Microsoft when this happened, I don't want to highlight Microsoft as being special in some kind of bad way here. This happens across the board.

But there was an incident that was public. They didn't release a lot of details, and I won't divulge the details today, but they had an incident a couple of years ago where there was a failure in the design of Office 365. And it has to do with how a lot of data gets cached because, in a cloud service of supersized scale, there's a tremendous amount of pressure on the system for handling usage.
You want to be responsive to customers' demands for downloading a document, right? All the things that you do with the cloud. And so there's a lot of caching that goes on.

And as you can imagine, every operation to the cloud involves some kind of authentication, authorization process, one or both of those things are happening. And sometimes you think, when you design a system, a computing system, I can put this stuff in a cache and I don't need to authenticate back to that cache because I've already authenticated them to a system that is in front of that cache.

So that was the general case with this incident: the cache had no authorization protection on it, and there was, you could think of it as a corruption in the computing system attached to it. And one Office 365 tenant was able to see a little bit of content from a different tenant.
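Concretely, the failure mode Paul describes looks something like the following, a deliberately simplified sketch rather than Office 365's actual design: if cache entries are not scoped and checked per tenant, a lookup that skips authorization can serve one tenant's content to another.

```python
# Anti-pattern: a shared cache keyed only by document ID. Authorization
# happened "in front of" the cache, so nothing rechecks it here, and
# whoever asks gets whatever is cached.
shared_cache = {"doc-123": "tenant A's confidential content"}

def get_document_unsafe(doc_id):
    return shared_cache.get(doc_id)  # no tenant check at all

# Safer pattern: scope the cache key to the tenant, so a hit for one
# tenant can never be served to another even if an upstream check fails.
scoped_cache = {("tenant-a", "doc-123"): "tenant A's confidential content"}

def get_document(tenant_id, doc_id):
    return scoped_cache.get((tenant_id, doc_id))

assert get_document_unsafe("doc-123") is not None   # tenant B sees it
assert get_document("tenant-b", "doc-123") is None  # isolation holds
```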

So that should never happen, right? That should be an inviolable separation between all tenants in any multi-tenanted cloud service. I considered that a big deal.

It didn't result in a terrible outcome because the data that was actually exposed was kind of minimal, but that was a case where no one thought that something like that could happen. And a customer can ask a question referring to an instance like that: hey, this happened; how could your systems fail in a way that something like that might happen?

And to be clear, I'm not suggesting that someone from Google or Salesforce or Slack is going to blurt out, "Well, this is exactly how that's going to play out." What they're almost certainly going to do is they're not going to tell you anything, in effect, about the "how."

But what you want to do is evaluate their response. Not that they divulged or didn't divulge something, but just did they acknowledge that this is a problem that they think about, that they actually do system failure design analysis as part of their development process?

In fact, there's a process that runs inside the Microsoft cloud called Chaos Monkey. Now, just that name can give you an idea. Just let a monkey go inside your environment, put them in the data center and see what breaks. It's a great idea.

Kip Boyle: What wires will they pull?

Paul Rich: Yeah. Microsoft actually designed this into their cloud system. So this thing runs continuously and can go around and break things. And having something like that tells you something about their mentality towards system failure. They want to experience unexpected system failures.

And a reasonable question to ask your cloud provider would be: do you, by design, have a process that runs around and breaks things?
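For readers who want to see the idea in miniature, here is a toy fault-injection sketch in the spirit of chaos engineering (Netflix popularized the Chaos Monkey name; this is our illustration, not any provider's actual tooling): wrap a dependency so it fails at random, and verify the caller degrades gracefully instead of assuming the happy path.

```python
import random

def flaky(fn, failure_rate: float = 0.2):
    # Wrap a dependency so that it fails at random: a toy fault injector.
    def wrapper(*args, **kwargs):
        if random.random() < failure_rate:
            raise ConnectionError("chaos: injected dependency failure")
        return fn(*args, **kwargs)
    return wrapper

@flaky
def fetch_profile(user_id: str) -> dict:
    return {"id": user_id, "name": "Kip"}

def profile_or_fallback(user_id: str) -> dict:
    # The caller must tolerate failure rather than assume the happy path.
    try:
        return fetch_profile(user_id)
    except ConnectionError:
        return {"id": user_id, "name": "(unavailable)"}

# Run it enough times and the injected failures will surface; the service
# keeps answering either way, which is the property chaos testing checks.
for _ in range(20):
    profile_or_fallback("u-77")
```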

Jake Bernstein: Yeah. I think that's a really interesting point. And what I'm thinking is that the last thing you want to hear from your cloud service provider is, "Oh. Well, we didn't think of that, or we didn't see that coming."

And I think the truism about computers is that it's a strength and a weakness. They do exactly what you tell them. And my opinion, and this is just an opinion, is that the designer of a computing system probably ought to understand everything that that system can and can't do, which includes ways that it fails.

And though I do understand that today's computing systems are almost incomprehensibly complex, I do think that this is a type of... I think the underlying question is how robust is your threat modeling?

And I don't mean that just like traditional threat modeling, but also, just general failure modeling. And I think that's a really good point, and I really would like people to remember that and to really ask their cloud providers.

Let's go ahead and move into the third talking point regarding the importance of peer review for encryption models and things like that, because I think that's one of the more common claims that you hear is, oh, such-and-such is fully encrypted, end-to-end encryption at rest, in transit.

And I think, for a lot of folks, encryption is a word that, for better or for worse, people just tend to trust it. Like, "Oh, it must be true." If they say it's encrypted, it must be true. That's at least my impression of the non-technical portion of customers. Why don't you go ahead and bust that myth?

Paul Rich: Thanks, Jake. I've been trying to come up with kind of an analog to this in the non-computer science world, and I haven't found a good one yet, but the best I can come up with right now is maybe a brain surgeon.

If you needed brain surgery, very complex operation, a lot of risk associated with it, and it's something that's not done by very many people in the world, you probably actually don't want to hear, "Hey, we've got this brand new technique. No one's ever tried it before. And even though you're able to use all of the existing techniques of brain surgery for this problem, we're going to try something totally innovative and new on you."

You might say, "You know, this time around, I'll skip the innovation and just stick with the thing that's got the good survival rate and so forth." Right? Maybe that's the best analogy I can draw is that encryption is a very hard science, it's really pure math. It's just math.

And math is not subject to... It doesn't work if one individual in the world says, "I've got this idea for how to do math differently." No one's going to start using that model until it's been tested through the scientific method. Right? So other people need to understand what their theory is and then try and replicate those results in kind of a laboratory experiment type thing, get the same results.

So science is a peer-reviewed endeavor, and encryption is almost, you can say one of the most pure sciences that there are. So if someone is marketing a product, a service, an appliance with innovative and new attached to encryption, more than likely, you should just run. Run the other way.

What you want here is tried and true, absent the paranoid portion of humanity that doesn't trust people like the National Security Agency. I do, as much as anyone could, I suppose. The NSA is really kind of "the buck stops here" for encryption in the United States.

So for commercial encryption, it's going to have been approved by the NSA and published through the National Institute of Standards and Technology, or NIST. So if it's not that, then it's probably something you just don't want to use.

Jake Bernstein: And I think there might be some widespread confusion about this topic because some people might think, well, wait a second, if this encryption method is publicly available and it's this approved thing, doesn't that mean that someone can break it because it's out there? It's right there.

I understand why you might make that temptation, right? There's two different things going on here, one is a total lack of understanding of what encryption is. And the other is kind of falling prey to one of the other myths of security, which is security by obscurity is that if something is... If you think it's hidden-

Paul Rich: And weird enough.

Jake Bernstein: ... And weird enough, that makes it secure. But maybe for our listeners who are not cryptographers, just really briefly describe why those two ideas are false.

Paul Rich: Why encryption-

Jake Bernstein: Yeah. Why publicly-tested encryption is actually the opposite of insecure, it's the most secure. And then the idea that security by obscurity in the encryption area, which is what you were saying, is untested, but why? Because I think, intuitively, for non-mathematicians, this is confusing stuff.

Paul Rich: Yeah. Well, I would say the easiest thing for the public to understand is this: how many nuclear physicists are there in the world today working on atomic bombs or things like that? The number is probably vanishingly small, in the hundreds.

And in the world of cryptography, in fact, I just recently did a search on the top, I don't know, 50 cryptographers in the world today, and I think you get less than that in a search engine. But there just aren't that many people that are doing this as a full-time job in real terms across the globe.

And it's a very difficult area, meaning even the people who do it, understand it's a difficult area. So it's very hard to get right. There's an infinite number of ways that you can build an encryption algorithm that will have a flaw in it that can be exploited by someone.

It's much harder to build something that we think has no flaws. This is why, at least in the Western European and US international community, cryptographers are putting their algorithms in front of the National Security Agency, which then evaluates them to determine whether it can break them.

And in fact, within the cryptography community, all of those people are looking at trying to break each other's cryptography, and it's how that community operates.
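To see how quickly an unreviewed scheme can fall, consider this deliberately bad toy, our illustration of the kind of flaw peer review exists to catch: a "proprietary" repeating-key XOR cipher, where a single known plaintext leaks the key and, with it, every other message.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # A toy "proprietary" scheme: repeating-key XOR. Never use this.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"sekret"
c1 = xor_cipher(b"hello world, this is message one", key)
c2 = xor_cipher(b"the second message stays private?", key)

# Known-plaintext attack: XORing a ciphertext with its plaintext leaks
# the keystream (we assume the key length is known, for this toy).
leaked = bytes(a ^ b for a, b in zip(c1, b"hello world, this is message one"))
recovered_key = leaked[: len(key)]

# The recovered key decrypts every other message too.
assert xor_cipher(c2, recovered_key) == b"the second message stays private?"
```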

So if you've got a company who's saying... Even sometimes they're saying, "We've got this thing, it's got this innovative new encryption model. And this one expert out there in the world, who is a cryptographer, has given us their seal of approval." That's still not following a peer-reviewed model.

Not to cast aspersions, but maybe they're on the board of directors of that company, maybe they're an early investor. There's some reason why one cryptographer is saying, "Yeah, I support this." Again, if you had one scientist saying this model works for some other area in science, you would not conclude that that is truth. You need peer review for it.

Kip Boyle: Yeah. It's just so mind-bendingly difficult to design these encryption systems and to design them without flaws. The peer review is the proven process to come as close as you can, but even that is not flawless because we've got encryption algorithms and key sizes that used to be completely safe that are not any longer because of the advances in computing systems.

So even though peer review is sort of the gold standard for encryption, for integrity, things can happen that can change how secure an algorithm is at a particular key size.

Paul Rich: Yeah. I don't know if you've talked about something like Heartbleed on any of your previous episodes, but that's an example of a very public, open-community, well-peer-reviewed encryption implementation that turned out to be broken.
So even very, very strong designs can eventually fall prey to flaws that stay hidden for a very long time. Following that model still gets you the lowest risk profile.

Kip Boyle: Yeah. Yeah. The problem I see with peer review, in terms of senior decision-makers and private industry, especially ones that are trying to put together some innovative new product, is they get impatient, they can't control it. It takes a long time to peer-review stuff.

Private companies tend to be rewarded for proprietary technologies and inventions, and so they naturally think, why should I put up with all this academic rigmarole that takes forever and that I don't understand? We'll just invent something ourselves, and we'll go to market. I've seen that pattern a lot.

Paul Rich: Yeah. And to try and make this more practical and down to earth for listeners, right now, the areas of privacy-preserving encryption, homomorphic encryption, format-preserving encryption, these are newer schemes that have developed really in the last five or 10 years, which might sound like, "Oh, five or 10 years." Those are still in diapers.

And the things that we're commonly using on the internet are 20, 30, even more years old in terms of how long they've been around and been tested. With quantum computing coming on the scene, there's huge concern in the financial services industry about that breaking existing encryption schemes.

And so you're going to see new innovative approaches come. And we need to be wary of those approaches, and particularly because quantum computing, again, one of those areas, there's so few experts in the world in that arena that having your own roll-your-own kind of thing is going to be that much more risky.

Kip Boyle: I want to throw one more thought in here, and then I think we need to go into the fourth point that we wanted to explore a bit. But homomorphic encryption, which is now starting to enter into a broader audience, it was actually first proposed in 1978.

Paul Rich: That's true. Yeah.

Kip Boyle: So this stuff just... It takes forever.

Paul Rich: Yeah. Maybe we can have a day to talk further about that. But one of the key characteristics of homomorphic encryption is the trade-off between strength and data leakage or information leakage.

So homomorphic encryption always leaks information. It's just a question of how far do I want to turn that dial? Do I want to do the Spinal Tap thing and turn it way up or not?

Jake Bernstein: And I think the last point on encryption, just to hit on it, is that, again, other than cryptographers, most people think of encryption as binary: it's either encrypted or it's not, and therefore, it's either totally safe or it isn't.

But the reality of encryption, as Kip alluded to with his comment about older systems being broken because of computing speed increases, everything in this area is really a reasonable trade-off between speed, performance, cost, time.

I'm sure that right now we could easily create an encryption system that we'd be confident will take millions of years to break, even after quantum computing maybe. The problem is that, sure, you might be able to do that, but it'd be totally impractical and useless because it would take five years to encrypt, and then it would take five years to decrypt. And that's useless, right?

Kip Boyle: It's impractical.

Jake Bernstein: I just want to highlight that encryption, though encryption is operating on ones and zeros, the overall concept is a series of reasonable trade-offs between these different features, if you will. And I think we probably need to have a whole separate episode talking with you, Paul, about encryption because it is so central to not just cloud computing, but to computing these days and privacy in general.

But I think though it is a commonly-used term, it is totally misunderstood. And my favorite example is the executive who just says, "Well, just encrypt it." And I always laugh because, invariably, that is not the answer because that would render the data unusable to the company. And like I said, major, major lack of understanding of what the E-word is and what it does.

Kip Boyle: Yeah. And authorized users can decrypt anything anyway. So it's not an insider threat countermeasure.

Jake Bernstein: Nope.

Paul Rich: That's right. Yeah.

Kip Boyle: Yeah.

Paul Rich: Yeah.

Jake Bernstein: All right.

Kip Boyle: Let's talk about point number four, which is contracts, which is Jake's favorite point, I think.

Jake Bernstein: It is, although I really like encryption right now. But go ahead, Paul.

Kip Boyle: I can tell.

Jake Bernstein: Let's talk about the contracts, and just hit that real fast.

Paul Rich: I greatly enjoyed your episode on negotiating the data security addendum. I learned a lot just listening to that episode.

Jake Bernstein: Thank you.

Paul Rich: There's so much more in the world of contracts. However, it shouldn't be any surprise that most people don't read contracts, and they don't want to read them.

Kip Boyle: Oh, yeah.

Jake Bernstein: What?

Paul Rich: Yeah.

Jake Bernstein: I don't. I can't.

Kip Boyle: Oh, yeah.

Jake Bernstein: Surely, that's not true.

Kip Boyle: I went to get a mortgage one time and the guy shoved the whole sheaf of papers in front of me, and I looked at them and I said, "I know nobody ever reads these, but I'm going to read them. So if you want to go get a snack or something, now's the time."

Paul Rich: Yeah. You're plus-one'ing me, Kip. You're plus-one'ing me. I read contracts pretty... And my wife is an attorney, and she is more likely to shuffle through them.

But certainly, in the world of engineering, and I would venture to say CISOs, but coming out of the world of Microsoft for so long in a program management role, even the people that were risk managers there, they're not reading contracts.

Now, there's a fantastic legal department at Microsoft. In fact, I worked very, very heavily with lawyers at Microsoft for quite a number of years. And I actually worked to put stuff into the terms of service for the cloud services at Microsoft. So I have direct experience with how to write contract language that's going to be consumed by the world.

But I think I was the only program manager that I ever met in Office 365 that had read the entire terms of service contract for our own service when I was in that company. And I don't think I've ever met any external engineer to a cloud that has read the cloud service providers contract.

In fact, it's almost exclusively attorneys. I don't even see the risk management people necessarily reading those contracts.

Kip Boyle: Yeah. Yeah. This is an example of some of the dysfunction that can occur because we work in silos, right? Most organizations are organized in silos, and those silos don't generally talk to each other.

And we see this all the time where the finance department thinks that IT is completely all over the business email compromise threat. And IT is like, "Meh. Money? What? We're just trying to stop the phishing emails." Because the silos don't talk to each other. So yeah, I think that's the phenomenon you're talking about.

Jake Bernstein: It is. And I think what's super challenging is just like, on one hand, it makes no sense for a lawyer to review code. And I'm not suggesting that. That doesn't actually make sense. You should not probably do that. However, I think it does make sense for lawyers to have a very, very high-level appreciation for what code is.

And the flip side of that is that even though it's not a valuable use of time for a non-lawyer to replicate the Contracts course from law school in an attempt to understand every clause, I think everyone who's involved should nonetheless take the time to get a high-level understanding of what the contract is saying.

And this is true, I think, of every contract: you need to understand the technical parts that relate specifically to the product or service better than average. That's what I would say; you really should.

You don't have to understand what an integration clause does because, honestly, that's kind of a pure legal thing. But the operative language, I think that's critical to understand.

Paul Rich: Yeah. Almost everything will surprise a technical person who's never read a cloud service provider's contract. But some of the things that I think would just shock people are... It's really unheard of for the big CSPs to include specific feature functionality in their contracts.

So if you're looking for... I'll give you an example, actually. Jake, you'll understand this. eDiscovery. We're going to use your email service and we need to be able to run legal discovery against our entire corpus of email data that's stored in your cloud service.

So I'm expecting to see in the contract that it says, "We'll provide you with eDiscovery capabilities." But that's not the case. If you read Microsoft's terms of service, eDiscovery is not even mentioned in there.

So the first thing that the technical crowd needs to understand is that cloud contracts don't really cover the technical arena directly in the primary terms of service. They may have addenda that the primary contract points to, saying that the things in that addendum apply in some way.

But for the most part, they have to keep technical things out of the contract because the cloud provider reserves the right to add features, to deprecate features, to change the way features work, to change the way they're delivered. Everything technical they put into a contract can come back to bite them by constraining their ability to keep innovating.

Kip Boyle: Yeah. Every time I log in to the Office 365 administrator interface, something's different.

Paul Rich: Yeah.

Kip Boyle: Every time. Just the pace of product evolution is just accelerated like crazy.

Paul Rich: And having worked, again, with attorneys a lot, people that are technical, that read contracts and can basically understand them are so valuable to the company that they're in because they're going to be snowflakes.

They're going to have this ability to bridge between the technical architecture and engineering and operations, and to convey to their colleagues in the technical arena, "Hey, that's a misunderstanding that you have about what to expect from the cloud service provider because it's not in the contract if you're counting on that thing being there."

And then the second thing is you'll just have a much better understanding of how cloud providers think about what they do and what they deliver to their customers.

Jake Bernstein: Yep.

Kip Boyle: I thought it was really interesting. You and Jake made a comment a moment ago that I've just been ruminating on here a little bit, and I just want to surface it, which is reading the contract for a cloud service provider is kind of like doing a code review, because you can't really see all the actual code that's being written to make the cloud service work.

But if you could, and if you did a code review, you'd better understand the limitations and the full capabilities of that cloud service provider and the services you're getting from it.

But I think reading the contract is probably a useful thing that you could do. And if you're inclined to do code reviews, you should probably do contract reviews too.

Jake Bernstein: At the risk of setting what is possibly the record length for the Cyber Risk Management Podcast, I think it's worthwhile, and, hey, we're not going to be offended if people play this back at 1.25X or something.

But here's a recent example. I had a client come to me, wanting me to review a SaaS product that is involved in... Basically, they're an emergency alert service via SMS, text message, but it's all a SaaS product.

And as I reviewed the terms of service, I realized this company is basically saying that we're going to sell you a product whose literal purpose is to function in a certain way during emergencies. But the contract says, "We offer no warranties of any kind. This may not work, and it may not work during emergencies."

And I brought that to my client's attention. And they ultimately decided that that wasn't going to work, and that they were willing to pay more for a different service that was technically similar but that had different contract language.

And that's a really good lesson in the importance of connecting the reality of expectations, or I should say the reality versus expectations, contracts, and cloud service. Do you have a reaction to that, Paul? Does that jibe with what you've seen or thought?

Paul Rich: Yeah. That's certainly not atypical. I think, particularly, if you look at the concerns that the CISO-level people have, which privacy and security are almost always, I think, top of mind... Availability is important, but if you run a highly available service but I can't count on you to protect my data, then I got to go look somewhere else.

Even if the contract, say, is 20 pages long, the privacy and operational security content, when I left Microsoft, was a few paragraphs. It wasn't much to read, but it clearly stated: how do we handle subpoenas? How do we handle third-party data requests? How do we handle the human personnel and their access to any systems that Microsoft runs that contain your data? So they cover those things in there.

And I think most people are going to be a little taken aback to try and square... Go back to the marketing claims that they're going to see, and then read the contract. No one in marketing's going to outright lie to you, but they're going to tell you-

Jake Bernstein: Well...

Paul Rich: Yeah. "Your data is totally protected. You're safe." Then you read the contract and the contract is going to tell you, look, we have to respond to legal requests. If we can, we will. If we can tell you that we received it, we will. But if we can't, then we obviously won't.

This is the important thing to understand, I think, for the audience. The contract supersedes any claims anybody ever makes from marketing, sales, even the people that are the engineers, if you get to talk directly to those personnel that are writing the code and building the service, nothing any of those people say overrides what's in the contract.

Jake Bernstein: Yeah. Pro-tip, that's the integration clause I mentioned earlier. That's the piece of language that basically serves to wrap the entirety of the deal inside the four corners of the contract and disclaim as meaningless any other conversations. That's what it does.

Kip Boyle: Yeah. I also want to point out here that the contract may say something like, "Customer agrees to..." There's going to be some language in there that's going to point you to their shared responsibility security model. Right? But they're not going to elaborate on it in the contract.

And it's really easy to skim over those kinds of terms and conditions, but the shared security model is fundamental to being able to use cloud services securely. And I've never seen any of those details in a contract, for all the reasons that we talked about, but it's critically important that you go check it out.

Paul Rich: Yeah.

Jake Bernstein: Agreed.

Paul Rich: We could spend an hour just on contracts.

Kip Boyle: Anything else on contracts that you want to mention?

Paul Rich: They're mostly easy to read. I've been impressed with Amazon's and Google's contracts, and I've read Microsoft's so many times. By the way, fun little fact: Microsoft publishes their contract monthly. Every single month, they roll out a new version of their terms of service for cloud.
So it's not sufficient to read it once and then forget it. And your terms are the contract as it stood when you signed it.

Kip Boyle: Wow.

Paul Rich: So you have to be able to go back in time and see what did I sign rather than what's current.

Kip Boyle: Well, that's funny because people think of contracts as a once-and-done, right? I mean, certainly, that's true for my mortgage, for other major contracts, they're cast in stone. But now, contracts innovate. So that's really interesting.

Jake Bernstein: Well, it's an innovation of the way that internet and software services work.

Kip Boyle: Oh, yeah.

Jake Bernstein: Right?

Kip Boyle: Absolutely.

Jake Bernstein: It's my old favorite thing. Your continued use of this service indicates consent to our revised terms. It's the way that we lawyers... Yeah. Let's not get distracted by that. It's a major component of almost all contracts that involve software.

Kip, let's go ahead and wrap this up. I think this has been a phenomenal episode, lengthy though it may be.

Kip Boyle: Well, Paul, we're really grateful that you are our guest today. Thank you so much for being here, for sharing what you know. The wisdom that you've accumulated over the years is super helpful.

Before we wrap it up, I just want to give you an opportunity to tell everybody how they can find you on the internet if you want to be found.

Paul Rich: Yeah. I should be easily found on LinkedIn, and I'm happy to connect with people there and bring people into the fold with these communities, with the Cloud Security Alliance and the IAPP that I'm a part of. I'd love to get more volunteers there, and I'm happy to consult with other folks on these topics too.

Kip Boyle: Great. Yeah. So we're going to put the URL to your LinkedIn profile in the show notes. Yeah, we do have show notes, sparse as they may be.

But thanks, Paul, for being here. And that wraps up this episode of the Cyber Risk Management Podcast. Today, we went myth-busting, specifically targeting common cloud security marketing messages that you'll see out there, and Paul Rich helped us do that. Thanks, everybody. We'll see you next time.

Jake Bernstein: See you next time.

Audio: Thanks for joining us today on the Cyber Risk Management Podcast. Remember that cyber risk management is a team sport, so include your senior decision-makers, legal department, HR, and IT for full effectiveness.

So if you want to manage cyber as the dynamic business risk it has become, we can help. Find out more by visiting us at cyberriskopportunities.com and focallaw.com. Thanks for tuning in. See you next time.

YOUR HOST:

Kip Boyle
Cyber Risk Opportunities

Kip Boyle is an information security expert with 20 years of experience and is the founder and CEO of Cyber Risk Opportunities. He is a former Chief Information Security Officer for both technology and financial services companies and was a cybersecurity consultant at Stanford Research Institute (SRI).

YOUR CO-HOST:

Jake Bernstein
K&L Gates LLP

Jake Bernstein is an attorney and Certified Information Systems Security Professional (CISSP) who practices extensively in cybersecurity and privacy as both a counselor and litigator.