
The Importance of Testing Your Cybersecurity Response with Steve Orrin

“You can’t secure what you don’t know. You’ve got to understand your assets, your services, and your resources.” - Steve Orrin

Cyberattacks can happen to an individual computer or an entire network. It’s vital to have well-tested plans in place before ransomware rears its ugly head. Today’s guest is Steve Orrin. Steve is Intel’s Federal CTO and Senior Principal Engineer. He leads public sector solutions architecture, strategy, and engagement and has held technology leadership positions at Intel. Steve was previously CTO, CSO, and co-founder of several successful security startups and is a recognized expert and frequent lecturer on enterprise security.

“Risk management isn’t about solving every security problem. It’s about managing the risk.” - Steve Orrin

Show Notes:

“If you can limit the impact of an attack, that can make a difference. You can’t prevent every attack but you can limit the amount of damage it does.” - Steve Orrin

Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review. 

Links and Resources:

Transcript:

Steve, thank you so much for coming on the Easy Prey Podcast today. 

Thank you for having me, Chris. It's a pleasure to be here. 

Can you give myself and the audience a little bit of a background about who you are, what you do, and how you got into the field? 

Sure thing. I'm Steve Orrin. I'm the Federal Chief Technology Officer and Senior Principal Engineer for Intel Corporation. Basically, my job revolves around helping the federal government and public sector adapt current and future technologies and architectures, and really push the envelope of what's possible when it comes to everything from hardware to software to cloud services, helping to map those technologies to their mission enterprise needs.

The other half of my job is translating government [inaudible 00:01:19] back into Intel [inaudible 00:01:12] so that our products and technologies evolve and develop toward those requirements as well. In some ways, I play a two-way communicator role, if you will, helping the government, and helping the Intel business units understand the government better. I'm mostly focused on the US public sector, but there are use cases that are global in nature.

My background is in cybersecurity, having run multiple security startups throughout the 90s and 2000s and then running security pathfinding for Intel for about nine years prior to taking the federal role.

You're a translator. 

Exactly. 

That actually is a surprisingly important role in the IT and security spaces. You have entities that don't understand each other's lingo trying to make business decisions, like keeping the company safe. 

Exactly, and helping them understand the bits and bytes of the technology and how you solve real-world problems. That's really the fundamental struggle you often see. Everyone wants to throw a technology widget at a problem, which may well be the right thing to do, but on the other hand, how do I take it and apply it to something that makes an impact?

Let's start talking about that. If you are running a small-to-large size business, which encompasses all of them, every business nowadays has a cybersecurity risk profile. If you have a device, if you have a website, if your people have phones, there's an attack surface there. There is a risk to your business model. Let's talk through some of that low-hanging fruit. Where the heck do I start in dealing with cybersecurity at my business?

Chris, that's a really good question, and often it's the thing that needs to be asked right up front: where to begin. There's so much to be done. IT, and IT security in particular, has a limited budget, a limited workforce, and an infinite amount of vulnerabilities, exploits, and tasks ahead of them. Where do you begin, and how do you make a measurable difference in the risk?

It really begins with what you just said: having a risk posture. Understanding the risk environment is step one. You can't secure what you don't know. Understanding your assets, your services, your resources, and having, whether it be a formal catalog or just an assessment, a view of the things you're trying to protect: that's step one.

From there, what really drives a successful risk profile and risk management is understanding the risk associated with those assets. This is often lost when people look at these big frameworks, whether it be the cybersecurity framework, which is a great framework. Buried in the text there is mapping those risks and mapping those assets to the business.

Not every system is your most critical system, and not every attack is the one that's going to bring your system down. Understand that risk management isn't about solving every security problem; it's about managing the risk. One of the key things we see in CIOs and CISOs that are successful is being able to do that assessment of both the systems and the things they need to protect, the data, but also how it maps to critical business.

Knowing what's important to keep your operations going, and applying the right risk management profiles to those resources, both from a prioritization perspective and from a security control perspective.

Understanding the dependencies is another key aspect: knowing what those dependent resources are. If you're a bank, and cash management and checking are your major sources of income and what you provide, understand that they are also dependent on other resources and capabilities. There may be certain systems that are more important than others, or that have to be protected and isolated from others.
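As a rough illustration of the asset-and-dependency assessment described above, here is a minimal Python sketch. The asset names, criticality scores, and dependencies are invented; a real catalog would come from your own inventory.

```python
# Hypothetical asset catalog: each asset gets a business-criticality
# score (1-10) and a list of the resources it depends on.
ASSETS = {
    "cash-management": {"criticality": 9, "depends_on": ["core-db", "auth-service"]},
    "core-db":         {"criticality": 8, "depends_on": []},
    "auth-service":    {"criticality": 7, "depends_on": ["core-db"]},
    "marketing-site":  {"criticality": 2, "depends_on": []},
}

def effective_criticality(name, assets):
    """A dependency inherits the criticality of whatever relies on it:
    if cash management needs the core DB, the core DB is at least as
    critical as cash management itself."""
    score = assets[name]["criticality"]
    for other, info in assets.items():
        if name in info["depends_on"]:
            score = max(score, effective_criticality(other, assets))
    return score

def protection_order(assets):
    """Assets sorted by effective criticality: where to spend first."""
    return sorted(assets, key=lambda a: effective_criticality(a, assets),
                  reverse=True)
```

In this toy data, the core database ends up scoring a 9 rather than its own 8, because cash management depends on it; that propagation is exactly the dependency insight being described.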

Step one is just getting a really rich and manageable risk profile and risk management infrastructure. [inaudible 00:05:19] OK, great. You understand your environment. You start getting an indication of the risk. The next question people often go to is, what are the technologies I need? What can I do that's different?

I think what's important here is something security industry professionals have always harped on: we need better security hygiene, and that's going to solve everything. It is important. We need good security hygiene. We need patch management. We need to have active sensors for detection.

But part of the reality we live with today is that most large and even medium organizations have that. They have good hygiene. They are doing the best practices, if you go down the checklist. They are in compliance with whatever regulations they need to be, and they are still getting attacked. They are still getting data breaches and ransomware and denial of service.

The question really isn't, how do I do better hygiene? It's, what are the key things I need to do above hygiene, beyond hygiene, to make a measurable difference in my security posture? That's where we start to look at key technologies or processes that can be put in place to raise the bar beyond that.

When you look at the CISO’s budget, if they've got 80% of their budget going to antivirus, firewalls, and data protection, which is sort of the hygiene and patch management, what's that next 20% up to that can actually make a bigger difference than just the security [inaudible 00:06:40]?

Some of the recommendations that I've given that have really resonated: one is looking at your overall priorities. If you can limit the impact of an attack, that can make a difference. You're not going to prevent every attack, but once it's in, you can prevent it from going catastrophic and going to scale in your organization. One of the technologies and approaches I often recommend is network segmentation, micro-segmentation.

I'm laughing because I just had this conversation yesterday with somebody. 

It's not the sexiest part of the security strategy. It often doesn't even require a new technology or new product. You probably have the core infrastructure, whether it be network devices like switches and routers or, even easier, software-defined networking, to do this today. Oftentimes there is pushback that it's going to be harder to manage. Those days are gone. The management tools that come with these products today are able to operate in highly micro-segmented environments. They are able to do it today if you plan for it.

One of the key things about micro-segmentation is it gives you three major benefits out of the gate. Number one, obviously, it means that you have smaller network enclaves, so when something does go wrong, it gets contained. It can be contained more quickly. It doesn't spread catastrophically to the whole organization.

When ransomware shows up at a branch office, it may take down the branch office, but it doesn't take down headquarters or 12 other branches. That's step one. It allows you to better contain the threats that are happening. 

Two, it allows you to get much more granular in your policies. Whether you are going down the path of zero trust or trying to get more of a risk management infrastructure, one of those tenets is having domain-specific policies, dynamic policies, that change with the risk or change with the business. Micro-segmentation allows you to implement those policies much more easily because you are not changing something for such a large organization.

Policies are hard. By doing one size fits all, you actually neuter the policy to the point where it's almost useless, because it has to fit every situation. Network segmentation allows me to create very specific policies about the security controls, the flows, the accesses. They are very domain-specific and context-aware, which means I can get very granular. I can actually control it really well and not have to worry about systems in another branch that have a different profile. I can give them a specific policy. It really becomes a benefit to be able to create these various policies that are context-aware and domain-specific.
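A default-deny, per-segment policy table of the kind described here can be sketched in a few lines of Python. The segment names, destinations, and ports below are hypothetical:

```python
# Each micro-segment gets its own narrow, domain-specific policy
# instead of one watered-down enterprise-wide rule set.
SEGMENT_POLICIES = {
    "branch-pos":  {"allowed_dst": {"hq-gateway"}, "allowed_ports": {443}},
    "hq-database": {"allowed_dst": set(), "allowed_ports": {5432}},
    "corp-users":  {"allowed_dst": {"hq-gateway", "internet"},
                    "allowed_ports": {53, 443}},
}

def is_flow_allowed(src_segment, dst, port):
    """Default deny: traffic passes only if the source segment's policy
    explicitly allows both the destination and the port."""
    policy = SEGMENT_POLICIES.get(src_segment)
    if policy is None:
        return False  # unknown segment: deny
    return dst in policy["allowed_dst"] and port in policy["allowed_ports"]
```

Because each policy only has to describe one small enclave, it can be strict without breaking some unrelated branch.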

The last benefit really leads to my next recommendation, an idea I've come up with. I call it threat canaries. A lot of people have existing infrastructure. It's hard to refresh it, and it's hard to push new tools and new sensors out to every environment, because you have to refresh every system. You have to create a whole new infrastructure. You have to buy new tools. One approach that I'm positioning is, once you have that micro-segmentation, you don't try to refresh every system in there to the latest and greatest with all these sensors turned on.

You can, but no one has the budget for that today. What if you could put one sensor or one node into the environment, a working node that is the latest and greatest, that has all the protection, the patches, the latest hardware, the latest OS, with sensors turned to 11, and let that be your canary in the coal mine?

Now you have multiple micro-segments, each with their own specific policies and a sensor for that environment that can be set to full mode, set to 11, so it's detecting everything, capturing everything, for that one sensor in that environment. What that will allow you to do is catch things early, before they become catastrophic, before they filter back to headquarters and start to impact a lot of the core systems that maybe somebody will be watching more closely.

Having network segmentation at the micro level allows you to deploy those sensors into that specific environment. One of the big challenges with sensors in general is that you get a lot of false positives, because you have to tune them to what's going on in the environment. The more generic the environment, the more false positives; the more specific the environment, the tighter you can tune your sensor.

This again is one of the benefits of micro-segmentation: I can tune that sensor to the specifics of that environment. If you are running Windows 11 in that environment and the applications being accessed are all database, then if you see something else, it's going to be an anomaly. Whereas if it's the whole organization, you have everything, so you have to allow for everything. Those three things really are the benefits that all start with micro-segmentation.
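The canary idea, one fully instrumented node whose baseline is tuned to its narrow segment, might look like this minimal Python sketch; the process names and the baseline set are invented for illustration:

```python
# In a tight micro-segment (say, a database enclave), the expected
# process set is small, so anything outside it is worth an alert.
DB_SEGMENT_BASELINE = {"sqlservr.exe", "w3wp.exe", "svchost.exe"}

def canary_alerts(observed_processes, baseline=DB_SEGMENT_BASELINE):
    """Report anything the canary sees that is outside the segment's
    expected baseline. In a generic, enterprise-wide environment this
    would drown in false positives; in a narrow micro-segment the
    baseline is small enough that deviations stand out."""
    return sorted(set(observed_processes) - baseline)
```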

So if you have a dozen micro-segments in your network, do you now say, "I can only afford one system, let me put it in the most business-critical area," so to speak?

Obviously, you want to put it in the most critical area, but you also want to put it in the most exposed, maybe the one that's got the most internet access or the most employees coming in. Think of it from both angles. Where you want to put that [inaudible 00:11:43] is where the most risk is.

If you’ve got a mainframe system in the back that's got guards with guns and no one gets to access it, you could put a threat [inaudible 00:11:52] there, but is that the first place you put it? Or do you put it really out on the edge, or into an environment that is the dirtier side of your network where things are more likely to happen, so you're able to catch things early and get that pre-knowledge?

Ideally, you should put sensors into every one of your network enclaves. Again, because you're only putting in one, it sort of limits the pain from a cost and management perspective. What I've seen happen is they start getting good data out of the sensors, and then it's a business justification to put in more sensors, to basically refresh more systems and push more tools into environments, because we're starting to get quality data out of them.

From a business perspective, for a CIO or CISO, putting those sensors into all the different areas, starting to collect data, and then mapping that to your dashboard. It's like, "Hey, we're catching these things. We're blocking these lateral moves." It's a good way to justify that we need more, because it's actually providing value. When you can show measurable impact on risk, that's the game.

That's where I suspect one of the larger challenges with security infrastructure is. How do you provide the quantifiable business case to management of, "This is why we need money," as opposed to, "We need everything dialed to 11 everywhere. Give us 10% of the company's entire budget"? At least if you can say, "Here's where we are seeing stuff, and this is what we're seeing," it's an easier business case.

Absolutely. Actually, one of the interesting side effects of having network-enclave-specific data: generally, the dashboard the CIO provides is a whole-enterprise view. We caught 4,000 attacks this week out of maybe 50,000 that came through, whatever that number is. If you are on the other side of that executive room with the people from the lines of business, it's hard for them to understand. Was that my system? Was that his system? Who’s getting the benefit of this?

If you can show this network sensor in the cash management enclave versus this network sensor in the credit card enclave, and show domain-specific results, that's how you get the lines of business to jump onboard and say, "Yes, I want to pay for more sensors in my environment, because I am seeing the benefit to my business," as opposed to just the macro view, which, again, becomes hard to slice and dice.
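Rolling detections up per enclave, so each line of business sees its own numbers rather than one opaque enterprise-wide total, is nearly a one-liner once events are tagged. The event fields and sample data here are hypothetical:

```python
from collections import Counter

def detections_by_enclave(events):
    """Count blocked detections per enclave so each line of business
    can see the benefit in its own environment."""
    return Counter(e["enclave"] for e in events if e["blocked"])

# Invented sample events for illustration.
events = [
    {"enclave": "cash-management", "blocked": True},
    {"enclave": "cash-management", "blocked": True},
    {"enclave": "credit-card",     "blocked": True},
    {"enclave": "credit-card",     "blocked": False},
]
```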

It's sort of, if you see lots of issues happening at one branch, we need to do some training over there. 

Yes. 

What was the next point now? 

After you've done the network segmentation and the sensors, one of the things we're starting to see is looking at your data. In the modern world, data is king. The old style was, I protect my network and my systems; that's where all the security is.

Look at a lot of the legacy security tools. Legacy doesn't mean old, it's just the approach. It's always been about, "How do I lock down my network, and how do I inspect and manage the system?" The endpoints, the nodes, the servers, even the cloud all follow that system-and-network approach. Again, that is the normal cyber hygiene.

The thing we all know today is that data is the oil; it's the thing that drives an enterprise. People understand that you have to protect the data wherever it lives. It may be in the cloud, it may be in your partner's cloud, it could be on an employee desktop at home. You want to be able to protect the data through all stages of its life cycle.

That's where concepts we're seeing in the cloud come to bear around confidential computing. Looking at, how do I protect my data in use? I've protected it elsewhere: I can store it on disk securely, and I can transport it securely with [inaudible 00:15:30] or TLS.

What about when it's being operated on? When it's in the cloud, in a virtual machine, in a container, being transacted upon? How do I protect the data there, edge to cloud? Recognizing that your data flows back and forth and all around.

What confidential computing is starting to do, and we are seeing the different cloud providers offer this as a service, is allow you to protect that last mile, so that you are protected from other tenants that are co-resident with you in a multi-tenant, hybrid cloud environment. It also means that it gets the cloud provider out of your trusted computing base.

From a compliance perspective, if your cloud provider or service provider isn't part of your trust domain, then you can wrest back control over who gets access. You don't have to worry about the rogue admin or an admin misconfiguration if you are controlling the security. That's what confidential computing is.

It's still nascent as far as wide-scale adoption, but the services are there. When we talk about where to focus next, that's one we recommend people start looking at now, so they can plan their application development and rollouts to take advantage of that last mile of security, especially in the age of data breaches and ransomware.

Your data is being compromised even though you've already got transport security, network security, and full disk encryption, so why are we still having this data breach problem? It's because they are sucking it right out of the database as it is being queried. At that point, it's not encrypted anymore. By encrypting it even in use, even if they were to exfiltrate the data, what they exfiltrate is encrypted data, and therefore you are less at risk. Again, it goes back to understanding the risk to the data.
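Confidential computing itself relies on hardware enclaves (TEEs), which can't be shown in a few lines, but the shape of the idea, ciphertext everywhere outside the trusted boundary, can. The toy XOR keystream below is for illustration only and is not real cryptography; production systems use vetted libraries and hardware support:

```python
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    # Deterministic byte stream derived from the key (toy construction).
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))

toy_decrypt = toy_encrypt  # XOR with the same keystream is its own inverse

record = b"acct=12345;balance=9000"
key = b"key-held-inside-enclave"   # in real confidential computing, the
                                   # key never leaves the hardware enclave
ciphertext = toy_encrypt(key, record)
# Outside the trusted boundary, an attacker who exfiltrates the query
# result gets only ciphertext; inside it, the record is recoverable.
assert toy_decrypt(key, ciphertext) == record
```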

This would prevent those, “Hey, someone found another AWS instance where the data was just sitting there.”

Yeah, many times. Again, it comes down to just putting your stuff in the right kind of instances. Human error will prevail. Someone puts their stuff into a confidential computing instance and it's secured, and then some admin at your organization moves it back to a dev-test environment. Again, that's why you have compensating controls to check when that happens, but it gives you another control to look for.

That's the key: giving you more things to monitor, which really leads to the next step, which is moving from an audit or annual check to a continuous monitoring approach. The idea of continuous monitoring isn't new. When the federal cloud programs came out, they came up with this concept that everyone should do continuous monitoring, which was hard, and it took a lot of work to get there.

The next piece of that puzzle, which people forget, is that it's not just "I need to monitor daily." We've had great examples, SolarWinds being one, where things were being monitored, but no one was looking at the data, or the data didn't provide the context to know what was really going on. The threat profile had changed.

It's really about continuous monitoring and continuous, or dynamic, policy and risk management. It's tying those two worlds together. Even if we are always monitoring, even if we have someone watching the dashboards all day long: if the threat environment changed, if there's a new attack or a new product brought in that has new threats, has the risk profile been updated? Or are we going to wait for the next six-month change cycle to do that? That's too late. That's six months of exposure that's not being looked at.

By making the policy side and the risk management more dynamic, we rely [inaudible 00:18:53] to them being able to have context for that continuous monitoring part as well. 
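Tying the monitoring feed to the policy, rather than to a six-month review cycle, can be sketched as a small update loop. The severity scale and thresholds here are invented for illustration:

```python
def updated_policy(current_policy, threat_feed):
    """Recompute the required control level whenever the threat
    environment changes, instead of waiting for the next audit cycle.
    threat_feed: iterable of dicts like {"severity": 0-10}."""
    policy = dict(current_policy)
    max_severity = max((t["severity"] for t in threat_feed), default=0)
    if max_severity >= 8:
        policy["control_level"] = "lockdown"
    elif max_severity >= 5:
        policy["control_level"] = "elevated"
    else:
        policy["control_level"] = "baseline"
    return policy
```

Run on every feed update, the risk profile tracks the threat environment continuously rather than lagging it by months.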

It's funny, I thought of a real-world example outside of the IT space of moving from that annual audit to continuous monitoring. A company I worked for used to do annual inventory. Of course, at the annual inventory, you find out that stuff is missing from locations, and now you have to go back 12 months trying to figure out when it disappeared.

OK, so we then moved to quarterly warehouse counts. Definitely better; things got caught quicker. But the thing that made a huge difference was when the company moved to daily cycle counts. We're not going to count every location in the warehouse every day, but we're going to hit the most active locations on a more frequent basis. If the inventory gets off, we find it right away. And it made those quarterly and annual inventory counts go really fast, because there weren't thousands of things misplaced. There were one or two things misplaced, and everybody got to go home quicker.

Exactly. It’s one of those examples, the warehouse use case, where technology is helping to provide the means to do that, whether it be barcodes, RFID tracking, and things like that, being able to better instrument the warehouse. We get that same correlation in IT. We can instrument our systems better and monitor them with that tracking data, if you will. Again, you’re going to find the losses quicker.

When you look at some of these large-scale data breaches, where we’re looking at terabytes of data that made its way out of the organization, yes, there was a breach, and there was something that wasn’t patched, but where was the exfiltration monitoring? The data is going to some external IP address, and we’re not monitoring and seeing this, even if it’s at a trickle for a long, long time. It’s, again, about instrumenting your environment.

The warehouse example is a great way of thinking about it. You want to know where every item in your warehouse is at a given time, so you can track it from an inventory and availability perspective, but also do your capacity planning based on that information. It’s the same idea in IT: “I need to know what resources are being utilized and for what reason. If they're being misused, I want to know quicker.”
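Trickle exfiltration of the kind described above reduces to keeping a running total of outbound bytes per unfamiliar destination. The allowlist, flows, and threshold below are all hypothetical:

```python
from collections import defaultdict

KNOWN_DESTINATIONS = {"10.0.0.5", "10.0.0.9"}   # internal services we expect
THRESHOLD_BYTES = 50_000_000                     # ~50 MB cumulative

def flag_exfiltration(flows):
    """flows: iterable of (dst_ip, n_bytes). Flag any unknown destination
    whose cumulative outbound volume crosses the threshold, even if each
    individual transfer is only a trickle."""
    totals = defaultdict(int)
    for dst_ip, n_bytes in flows:
        if dst_ip not in KNOWN_DESTINATIONS:
            totals[dst_ip] += n_bytes
    return {ip for ip, total in totals.items() if total > THRESHOLD_BYTES}
```

The point is the accumulation: no single transfer has to be large for the destination to be flagged.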

What is the next area that we look at?

It’s a great question. As we start looking at these basic building blocks, you see they all start linking together. It’s about getting your infrastructure ready for more dynamic policy and then doing the things to make it more dynamic. There’s a mapping to this. Some of it is implementing those controls, the network segmentation and confidential computing. The other part is being able to apply more dynamic, context-aware policies to make those changes. The next piece of that is driving your business accordingly.

All this greatness that we’ve done, how do we map that now into how we do business, dev test, and solution development? A lot of what we do in IT security is going back and trying to fix and modify the core businesses that we’re in. It's always sort of a retroactive approach. “We’ve got this cash management system. How do I better secure it? How do I isolate it? How do I protect its data?” The next optimization phase, really, is the last mile, if you will.

If you look at a maturity model, after you’ve figured out the basics, then it’s the optimization step, the last piece of the maturity model. How do you then inform new business? When the next application is being rolled out, or when you’re moving from an on-prem ERP system to a cloud-based or SaaS-based one, the models that you’ve already built in are going to be part of that decision, part of that development process.

You’re building for micro-segmentation. You’re building for dynamic policy changes. Then the system that comes out the other end of that process is ready for dynamic policy, ready for being loaded into an isolated network. Again, I hate using the buzzword, but these building blocks lead you toward that idea of a zero-trust architecture. At the end of the day, everyone thinks they're going to buy some product that gets them zero trust, but it’s not a destination.

It’s a journey, and it’s a long journey. It doesn’t end. You don’t ever become zero trust. It’s how you do business, how you do security, that becomes zero trust in its approach. These building blocks allow you to start thinking about the way you implement security. I think one of the things a lot of CISOs are seeing from these building-block steps is that they become less of an IT security function applying security to the business. They start becoming a partner with the business.

That’s the endgame everyone wants: to be a crucial part of the business conversation, just like legal is, just like marketing is, so that security is built in from the get-go. As we’ve always known, this has been [inaudible 00:23:54] for a long time. You don’t want to bolt on security; you want it built in. When CISOs become a partner to the business, number one, it becomes easier to get a budget because you’re aligned with what the business does, but it also becomes much more natural to how the corporation operates.

Is this more of a challenge for an existing business that’s been around a while? As the number of decades increases, do these processes become more difficult to implement? It’s one thing if you’re starting a new business. You can think about this from day one: we need to segment things as we bring on new business units. They're micro-segmented. We’ve got zero trust between these departments, so to speak. But if you’ve got a multibillion-dollar Fortune 500 company, and I assume most of them have probably done this stuff, the larger the entity is, is the process going to be that much harder to move through?

I think it’s perception. Perception-wise, yes, it feels harder, because they're already doing it a certain way. There are billions of dollars of revenue being generated based on an existing approach. But one of the things you see inside these larger organizations, which have large development organizations, is that they're not sitting still, because their customers aren’t sitting still. Very few of these large businesses are static, if any, in this day and age. They're all adopting digital technology.

They're all transforming, modernizing, whether it be going to the cloud, moving to micro-services, going to edge-based computing, all of the changes are happening. Even in the billion-dollar industries that have been around for a long time, there’s constant transformation.

There’s a perception, but actually, the reality is, you can get involved in those transformations just like you would in a smaller or newer business. In some cases, it works to your advantage, because they’ve already figured out how they move data around; now they're just transforming it. One thing that was really interesting: when we looked at the ransomware attacks that happened a couple of years ago now, some of the things we learned from that, just from how the IT market has evolved. I’m going to pick on the meat-packing one, JBS, out of Australia. We learned two really important things from that attack.

Number one, no one is immune from a cyberattack, because you can’t get any less sexy, I think, than meat-packing from a target perspective. But it also showed that every aspect of our lives has become digital. The meat-packing industry was reliant on digital infrastructure systems to move the meat through the system. When those got ransomed, the meat-packing shut down. That's a physical-world thing that is, again, not thought of as a digital environment, but it absolutely was what they call an Industry 4.0 environment: all automated, with quality assurance and testing and everything going on without many humans in the loop.

If you look at any modern manufacturing line, from frozen pizza to car gears, it’s all a large computer-based, edge, industrial system that is heavily reliant on IT systems. The key thing is that even some of these nontraditional businesses, ones you don't think of as IT businesses, are IT businesses. The opportunity to insert better security practices into those transformations as they continue to evolve is always there. But you’re right, there’s this common perception that this company has been doing it for 40 years; it’s how it’s done.

But when you talk to the CIOs, every day they're trying to figure out how to squeeze that extra couple of cents of efficiency out of the systems they have. They're willing to invest heavy dollars to take advantage of new technologies to help do that.

Do you think one of the confusions with ransomware and cyberattacks is that there has to be an intentional target? “We’re trying to infiltrate Electronic Arts. We’re trying to infiltrate this particular entity.” Or is some of it just entirely opportunistic? “Hey, we found this system. We have no idea whose it is until we get into it. Then we finally figured out, hey, it’s a meat-packing company. Let’s have some fun.”

You’ll find that there are three modes there. There is the targeted: “I’m going after this bank. I’m going after this entertainment company for a reason, whether it’s because they have a lot of money I want to get or I don’t like what they just did.” There’s, like you said, the opportunistic: “We sent out all these different probes, and they brought us back some really juicy things. We’re now going to focus on those juicy things.” Then there’s the purely automated, which is, go do crypto-mining anywhere you land.

Whether I’m crypto-mining in a bank’s headquarters or on an end user’s laptop, I don’t care. I’m still driving my bottom line, which is crypto-mining. You defined all three. The problem, from the CIO’s and the CISO’s perspective, is there’s no way to know what the intent of the attacker is, often even after they’ve gotten in. That attribution is very hard to do, and it’s rare. Look at the number of cases where we’ve been able to attribute an attack to a particular campaign, activity, or entity versus the ones where we haven’t. You hear about the ones where they say it’s TrickBot or it’s this group, because that gets into the news.

But how many thousands of attacks and breaches are there where no one has any idea? That's the reality we live in. You can't know in advance what the end intent is. The thing to keep in mind is that we've seen, across the industry, that everyone is a potential target, even if you're not being targeted. The automation and tooling the adversaries use allow them to go pretty deep and pretty broad without having to pick on a particular organization or craft a unique spear phish for a specific executive, administrator, or account in that organization.

They do that, and we see that, but we also see the commodity cases: phishing scams can be pretty broad. Again, the success rate doesn't have to be high given how little it costs to put them out there.

Yeah. That’s the unfortunate thing about phishing is it’s very cheap from the attacker’s perspective.

Indeed.

I can send out billions of emails. I guess that's really the issue with security as a whole: the attacker just needs to get it right once, but the defender needs to get it right 100% of the time.

Yeah. I think one of the things we as an industry are trying to do is move away from that "I need to be 100% secure" mindset. That's where you start seeing these standards coming out around resiliency. The goal isn't to be 100% protected, because that will never happen. But the two questions you want to ask are: how do I operate through, how do I continue to run my business while I'm actively being attacked or exploited? And resiliency: how do I get back to a known good state in a timely fashion? Business continuity, disaster recovery, "fail good" kinds of approaches.

The goal isn’t to be 100% protected, because that will never happen. But the two questions you want to ask are, how do I operate through? How do I continue to run my business while I’m actively being attacked or being exploited?… Click To Tweet

We've seen the NIST 1800 series and SP 800-193 as examples of platform-based resiliency. You assume you're going to get attacked. You assume that laptop or that server is going to get a piece of malware or have its firmware compromised. That's a given. The question is, "How do I get back to a known good state?" Whether [inaudible 00:31:09] is in 24 hours, in one hour, in two days, whatever that SLA is, how can I get back to operating in a known good state and be able to get back to doing my business?

Then there's the other side of the conversation, which is often not so much a technology question as just good business process: how do we maintain our operations while the ransomware is running wild? That's a critical part of overall business continuity planning. If I had to give one other recommendation, I would say: gamify things. You've got a business plan, a business continuity plan, a disaster plan, a ransomware plan. Assuming you have the plans, how often have you run them?

How do we maintain our operations while the ransomware is running wild? That’s a critical part of the overall business continuity planning. -Steve Orrin Click To Tweet

Have you actually done the game? The key thing is to do it not just with IT security, not just with IT, but with everybody: with legal, with your CEO, with your marketing team, with your brand team. Run the full exercise. The New York Times reports you just got hit by ransomware or are being extorted for $1 billion. Run the full exercise all the way through the forensics and getting systems back online, to the other side: what do you communicate to the press? What do you tell your analysts?

You have to gamify the whole thing, because what we see when these things actually happen is that there may be some good plans, but a lot of people spend a whole lot of time asking, "What am I supposed to do? What's the appropriate response? Let's get a meeting together." Then you have committees of committees trying to make decisions that, if we had gamified it, everyone would already know: their role, their job, and what needs to be done. One of the best recommendations is to run the scenarios. Do the war gaming. Do it with everybody. Get the board in the room.

What we've found with some of these boards, even at the big companies, is that they actually appreciate being brought in as part of the game exercise, for two reasons. Number one, they get a better sense that you are taking security seriously in the organization. But as board members, they also then see, "Hey, this company is doing it this way. I may be a board member here, but I'm running my own organization. We need to be running that too."

You start to spread the message about what the right practices are. This is another example where we shouldn't be keeping this stuff secret. Boards should be sharing amongst themselves. Companies should be sharing, CEO to CEO or CIO to CIO, how they run these business continuity exercises, because ransomware affects everyone. Data breaches affect everyone. The adversaries are collaborating. We need to be collaborating too. Definitely, gamifying is a key thing to help reduce the amount of time it takes to mount an adequate response.

Companies should be sharing CEO to CEO or CIO to CIO how they run these business continuity exercises because ransomware affects everyone. Data breaches affect everyone. The adversaries are collaborating. We need to be collaborating too.… Click To Tweet

Yeah. I think the process of doing it also helps educate everybody in the system: "Here's where you don't need to freak out. Here's where you do." And you're almost always guaranteed to find things you missed.

Yes. The gap analysis, the post mortem is very important.

For example: "We need to notify all of our customers that this happened, yet the customer database is compromised. How do you notify the customers?"

That’s a great example.

There was someone I was talking to who said you need to figure out which lawyers you're going to use before you need to use them. If you're in the middle of the event and then you go out asking your cohorts, "What lawyers should we use for this?" it's already too late.

Exactly. You should already have the retainer contract ready to go.

One of the things we were talking about before we started recording, and I think it ties into this, was supply chain issues. We're not talking the meat-packing supply chain here; we're talking the technology supply chain and how to address those issues. From my perspective, as a micro-business, I have control over what goes on my servers. I can watch that stuff. But how do I manage what's going on with my partners, and their partners, and the partners' partners down the line? As we were discussing, I had an issue where someone several links down the chain got compromised, and some cryptocurrency mining was injected into the supply chain.

It worked its way through the system to my website. My website was, unknown to me, crypto-mining in visitors' browsers for some unknown entity. My website wasn't compromised. My partner wasn't compromised. It was someone down the line. How in the world do we start addressing that type of issue?

Chris, it's a really good question, and it's being brought up in boardrooms, government agencies, and standards organizations as we speak. It's a challenge everyone is trying to figure out. With Log4j and SolarWinds, it's in the news and people are trying to respond. At its core, I think the first step in the process is transparency, or visibility, depending on which side of the coin you're on. You can't secure what you don't know, and you can't know unless you're able to get that information.

Whether it's something as formal as an SBOM, a Software Bill of Materials, or just understanding who your supply chain partners are and their ecosystem, getting that traceability and visibility lets you make better risk decisions. That's really the core. If you look at the guidance coming out, or even the SBOM committees, at its core it's not that the SBOM document itself gives you better security. It's not going to add a new firewall rule, if you will. But it gives you the ability to make better decisions, because we can only make good decisions based on information.

To take your example, if you have visibility into who those supply chain participants are, you can then start to look at how many times they have had vulnerabilities. Are they currently vulnerable, [inaudible 00:36:49]? Can I do that kind of mapping with the artifacts I'm getting? Then I can make better risk determinations. That could be something as draconian as, "I'm not going to buy or leverage this product because it's too vulnerable or too risky." That probably won't happen often. More often, what will happen is, "I'm taking on this risk and I'm going to mitigate it with other compensating controls." We're going to get a service from a third party whose developers I don't have much visibility into, but I'm going to put better security on what I do. I'm going to monitor the traffic from that service.

Again, it's a risk control I put in place because I have the knowledge, even if the knowledge tells me it's risky. Step one is getting that visibility. Step two is really connected to it, which is: how? We need to start asking vendors for it. Not just asking, "Could you give me this?" We as a whole industry, on both the consumer and the producer side, have to start putting our money where our mouths are and saying, "My contract is going to require these visibility artifacts."

"In order to promote your product or for me to leverage your product, and I'll give you the credit that I'm using the XYZ widget, I need to get these artifacts." What we've seen in Executive Order 14028 and the memorandums that came after, after all the requirements and [inaudible 00:38:10] standards, at the very bottom of it was: the federal acquisition regulations need to be updated to require SBOMs or similar artifacts as part of the contract process.

That last piece of the puzzle, making it a requirement, is absolutely crucial. Just like functionality is a requirement, any vendor that's going to be selling, in this case to the government, but it could be to corporate America, will start to adhere to it if it's in the contract. We've got to make this a requirement. Then, at the end of the day, we have to not make it a checkbox. Oftentimes, when we do regulations and controls, everyone says, "Give me my checklist," and goes tick, tick, tick. Done, we got an SBOM. Put it into a file and we're done.

That's not the case here. For supply chain security to work, we need to use that content and make risk decisions based on it. That means extracting the relevant details and mapping the CVEs. There's a new standard being worked on, called VEX, to let vendors better communicate which vulnerabilities actually apply to their products. There's a lot of work being done, but at the end of the day, the consumer or corporation bringing the software in needs to operationalize the artifacts they're getting.

There is some great guidance coming out of the ESF and others about what to do with those artifacts as a corporation or government entity. How do you scale them and operationalize them in your environment? It really is those three things coming together: getting the visibility, which has to run from the supplier and the developer all the way through to the customer; making it part of the contract and acquisition process; and using it once you get it.
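To make "operationalize the artifacts" concrete, here is a minimal sketch of the kind of cross-referencing Steve describes: take the components listed in an SBOM and check them against known-vulnerable versions. The SBOM fragment, component names, and vulnerability entries below are illustrative assumptions, not data from the episode; a real pipeline would ingest full CycloneDX or SPDX documents and query a live CVE feed such as the NVD.

```python
import json

# A minimal CycloneDX-style SBOM fragment (illustrative data only).
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "log4j-core", "version": "2.14.1"},
    {"name": "openssl",    "version": "3.0.7"}
  ]
}
"""

# Illustrative known-vulnerable versions; a real system would query a CVE feed.
known_vulnerable = {
    ("log4j-core", "2.14.1"): "CVE-2021-44228",
}

def flag_vulnerable_components(sbom_text, vuln_db):
    """Return (component, version, CVE) for every SBOM entry found in the vuln DB."""
    sbom = json.loads(sbom_text)
    findings = []
    for comp in sbom.get("components", []):
        key = (comp["name"], comp["version"])
        if key in vuln_db:
            findings.append((comp["name"], comp["version"], vuln_db[key]))
    return findings

for name, version, cve in flag_vulnerable_components(sbom_json, known_vulnerable):
    print(f"RISK: {name} {version} matches {cve}")
```

The point of the sketch is the decision it enables: once the match is surfaced, the organization can decide to reject the product, demand a patch, or accept the risk with compensating controls.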

Got you. As we wrap up here, let's take a step back. If there's one asset class on the cybersecurity landscape that people miss most, what do you think it is?

That's a very good question. What's the one thing they miss most? It comes up with SolarWinds or Log4j as examples: a blind spot has been the things that came into our enterprise through the front door, things that were approved. We've got lots of tools out there looking for malware. That's an industry that's been around for a very long time, looking for the anomaly up front. What we aren't doing a good job of is what I call product-specific monitoring.

Looking at the things we have, the data, the flows, the applications, and monitoring them for anomalous activity or aberrant behaviors. It's a shift from, "I'm looking for something weird on the network. I'm looking for a file that shouldn't be there." Again, we do a lot of that. That's the hygiene part. The thing that's often missed is looking at my word processor, or my ERP application, or in your case, my website and what's being delivered to the client. Is it acting in an aberrant way? Are we monitoring for that?

If you want to say which asset needs to have the dials turned up on visibility, it's the assets we actually manage: monitoring them specifically for known good behavior versus potentially anomalous behaviors.
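The product-specific monitoring Steve describes can be sketched very simply: build a known-good behavioral profile for an application, then flag any observed behavior outside it. The baseline fields and values below are illustrative assumptions; in practice the profile would be derived from observed dev/test runs or steady-state production telemetry.

```python
# A minimal "known good" behavioral baseline for one application
# (illustrative values; a real profile would come from observed normal runs).
baseline = {
    "outbound_hosts": {"db.internal", "api.internal"},
    "child_processes": {"worker"},
}

def detect_anomalies(observed, baseline):
    """Flag any observed behavior outside the application's known-good profile."""
    alerts = []
    for host in sorted(observed.get("outbound_hosts", set()) - baseline["outbound_hosts"]):
        alerts.append(f"unexpected outbound connection to {host}")
    for proc in sorted(observed.get("child_processes", set()) - baseline["child_processes"]):
        alerts.append(f"unexpected child process: {proc}")
    return alerts

# An observed production snapshot; the extra host is the anomaly
# (e.g., a crypto-mining pool the application should never contact).
observed = {
    "outbound_hosts": {"db.internal", "pool.cryptomine.example"},
    "child_processes": {"worker"},
}

for alert in detect_anomalies(observed, baseline):
    print("ALERT:", alert)
```

The design choice is the inversion Steve points to: rather than searching for known-bad signatures, you enumerate known-good behavior and treat everything else as suspect.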

Seeing things behave outside of their normal characteristics.

Exactly.

Unfortunately, for the small business owner, that's probably well outside the skill set of what they're going to be able to do.

Maybe. They can do things like looking at their dev/test environment and promoting some of their quality assurance tests into a production scan. You may not be able to afford a whole monitoring product, but running some validation tests against a production system after it's out there, to see if it still operates the way it did in dev/test, is an easy way for a small business to leverage what it has already invested in.

It won't catch everything, but you'll at least know whether the product is operating the way you expected and published it, in the case of website code or a transaction.
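For a website like the one in Chris's cryptojacking story, one such validation test is checking that the page production actually serves loads only the scripts dev/test published. This is a sketch under assumptions: the allowlist entries and page HTML are made up, and a real check would fetch the live page rather than use a string.

```python
from html.parser import HTMLParser

class ScriptSrcCollector(HTMLParser):
    """Collect the src attribute of every <script> tag on a page."""
    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            for name, value in attrs:
                if name == "src" and value:
                    self.sources.append(value)

def unexpected_scripts(html, allowlist):
    """Return script sources on the served page that dev/test never published."""
    parser = ScriptSrcCollector()
    parser.feed(html)
    return [src for src in parser.sources if src not in allowlist]

# What dev/test says the page should load (illustrative URLs).
allowlist = {"/js/app.js", "https://cdn.example.com/analytics.js"}

# What production is actually serving; the extra script stands in for
# an injected third-party miner like the one described in the episode.
served_page = """
<html><body>
<script src="/js/app.js"></script>
<script src="https://evil.example.net/miner.js"></script>
</body></html>
"""

for src in unexpected_scripts(served_page, allowlist):
    print("UNEXPECTED SCRIPT:", src)
```

Run on a schedule against the live site, a check like this would have surfaced the injected miner even though the site's own code was never compromised.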

I suppose having a pathway so that if someone says something is not behaving right, you don't just assume they're misunderstanding. You take all reports of incidents seriously and investigate them.

Absolutely have someone monitoring that info email.

In my case, it would be easy for me to be dismissive and say, "I know it's not on my side. I checked my code. It's not my code. You must be mistaken." It's an easy position to take. I guess the answer for small businesses is: never assume a report is wrong. Investigate it and take appropriate measures.

Yeah, and you put it under a bucket of customer service.

Yes. We're going to train all of our customer service people to be software engineers now. If people want to connect with you, where can they find you?

The best way to find me is on LinkedIn, at linkedin/sorrin. If you want to look at some of the materials I've written, go to intel.com/publicsector. That's where you'll find all of my government recommendations, and a lot of the work I've been doing on cybersecurity is linked from there as well.

Awesome, Steve. We'll make sure to link both of those in the show notes so people can find them without having to hit rewind over and over to get the spellings right. Thank you so much for coming on the Easy Prey Podcast today.

Thank you. It’s been my pleasure Chris.
