
Minimizing Damage From Cyberattacks with Stuart Madnick

“Consider the possibility that, no matter how cautious you are, you fall for something. How do you minimize the damage? This is something most people do not think about.” - Dr. Stuart Madnick

Many people use the internet as if it were a walk in the park, rather than realizing it is more like a trip down a dark alley. Today's guest is Stuart Madnick. Professor Madnick has been a faculty member at MIT since 1972. He served as the head of MIT's IT group for more than 20 years, and during that time, the group was consistently rated number one in the nation among schools for information technology programs. Dr. Madnick is a prolific writer, the author or co-author of over 380 books, articles, technical reports, and textbooks. He holds degrees in Electrical Engineering, Management, and Computer Science from MIT. He has been a visiting professor at Harvard University and at universities in six other countries.

“You can put a better lock on your front door, but if you put your key under the mat, are you any more secure?” - Dr. Stuart Madnick


“Spear phishing is so believable and plausible.” - Dr. Stuart Madnick

Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review. 


Transcript:

Stuart Madnick, thank you so much for coming on the Easy Prey Podcast today.

Thank you. Glad to join you.

Can you give myself and the audience a little bit of background about who you are and what you do? You have probably been involved in cybersecurity in one way or another longer than most of the listeners have been alive. This is going to be a great conversation.

First thing, I am Stuart Madnick. I am the John Norris Maguire Professor of Information Technologies at the MIT Sloan School of Management. I also have an appointment as a professor of engineering systems in the MIT School of Engineering. But most relevant to our conversation today, I am the Founding Director of Cybersecurity at MIT Sloan, which was formerly known as the Interdisciplinary Consortium—I can't remember it anymore, which is part of the reason why we changed the name.

Massachusetts Institute of Technology (MIT) – Cambridge, Massachusetts, USA

Getting back to your point, I've been involved in cybersecurity since at least, I think, 1974. I wrote several papers back then, and in 1979, co-authored a book called Computer Security. I've been involved in cybersecurity issues for a long time, although I've been involved in other things as well.

There's a wealth of stuff that I see us diving into. At what point did you start seeing cybersecurity as more than just an individual computer threat?

You may know the famous Morris worm—I don't remember what year it was. It was probably the late 1980s, and it basically shut down a significant portion of the Internet. Various things like that—although in some cases they were not deliberate or intended to cause great harm—were often experiments that went wild.

That's when it became clear that the Internet, which is a great wonder for many of us, allows things to happen that we could never envision before. In fact, it turns out there were examples of cybersecurity events pre-dating the Internet. Back in the era when people would exchange floppy disks and so on, there actually were examples of that. I don't remember the dates of those, but that's probably back in the early 1970s and so on.

I do remember a couple of those types of things from the days when my friends and I would meet and exchange copies of games and things like that. You always had to worry that you might bring something nasty home with you.

Exactly. I guess the key thing is when did it get to the attention of the average person and the public? I think that that's clearly post-Internet when everybody started connecting their computers and all of a sudden discovering all kinds of fun things were happening to their computers.

I think the first significant large-scale event—maybe not the right word—was the Melissa virus. That was probably mid- to late-90s or something like that. Everybody at the company I was working for was getting dozens of copies of this virus all day long, every day.

Commodore Amiga 2000 PC from 1980s


And then, of course, back in 1999-2000, just before the dot-com crash, there was the dot-com boom. Basically, everything was dot-coming, which meant everybody was getting on the Internet and, of course, all kinds of fun things started happening.

The reason why this is even an issue is that in my perception, the Internet was never really designed with security in mind. It was always this, “We're a bunch of educators. We trust each other.” Trust was just implicit in the process.

There are several aspects to that. Although I agree with everything you've just said, a lot of things have happened since then. Two things are true: people are very trusting of the Internet. It's like a cuddly teddy bear, if you will. Going down a dark alley at night probably makes people cautious, but connecting to the Internet doesn't have that same scary feel about it.

But the real problem we have, by and large—and that's why I'm glad to chat with you and your audience—is that so many people are not aware that it is a dark alley and what things to be warned about, so I think we need to make a much better effort at that.

One of the examples I often use—and I don't know how many decades it took—is that industrial settings, manufacturers, and so on understand the idea of safety. The example I use is if you were to go to a modern plant nowadays, you'll typically see a sign over the door that says, “550 days since the last industrial accident.” P.S. Please don't be the one who sets it back to zero. 

But when's the last time you went to a data center and saw a sign that said, “Fifty milliseconds since the last cyber attack”? In manufacturing and so on, safety is something you hear about, talk about, and worry about every single day. Cybersecurity is nowhere near that, and that's a big concern.

In some sense, the cybersecurity risks are almost ubiquitous. If you turn on a device that's public Internet-facing with no firewall, stuff just starts trying to connect to you without you initiating any contact. You just start getting scanned immediately.

When's the last time you went to a data center and saw a sign that said, “50 milliseconds since the last cyber attack”? -Stuart Madnick

One of the things is you may be familiar with the term Internet of Things (IoT) where almost everything has a computer in it. Ironically, I actually picked up a smart toothbrush with a computer in it. You may know about that. It'll actually send messages to your smartphone, giving you a status update of how good a job you're doing brushing your teeth. 

I used to joke that everything will have a computer except for a brick, and then someone showed me an article about smart bricks. The point is they're all over the place. They're spreading like weeds. Unfortunately, as you said before, not only was the Internet not designed with security in mind, but probably my toothbrush wasn't either.

Is that where we're starting to see problems? Before we started recording, we were talking about Internet-connected refrigerators. It sounds like a great idea, but who's the expert who comes and maintains your smart refrigerator?

Yeah. Of course, there have been examples of attacks on smart televisions. Basically, every device that's on the Internet is a candidate, particularly the newer devices. The newer they are, the less likely people think about security in them. Who worries about the security of your toothbrush?

Now I'm starting to wonder, “Is my toothbrush Internet-connected?” I sure hope it isn't. 

What are the failures you see on both the consumer side and the corporate side, the weaknesses where we're not doing what we need to be doing?

There's a limited number of things that we as individuals can do because a lot of times, these vulnerabilities come about through services you use. We'll come back to the corporate side later on. 

There are many things that we can do that most of us are somewhat aware of but not completely. The classic case is if you get an email saying, “Hello, I'm a Nigerian prince. I want to give you a lot of money. Please do the following things.” Believe it or not, I was told that still 1%-3% fall for that.

I actually, unfortunately, believe you.

I was told an interesting story. Usually, it's poorly worded and the English is terrible. I was told, “No, you don't understand. This is an IQ test.” It takes them time to walk you through getting the money out, so they want to make sure they've got someone who's really gullible and isn't too bright. They deliberately make it sloppy so that you, and I presume I, would push it off to the side. The 1%-3% who go that next step tell them, “Oh, I've got someone I've got a good chance of landing.” Ironically, some of the really bad ones are deliberately bad.

That's actually inspiring and discouraging at the same time. A lot of the ones that I saw—yesterday, in fact—were, “Oh, I got this notice from DHL that my package was delivered to the wrong address.” Maybe it was Amazon or something like that. But during COVID, everything's e-commerce, so everyone just assumes, “Oh, that must be my package. Let me click on that link.”

Exactly. But one thing—and I'm not sure the term is familiar to all of your listeners—is spear phishing. It's kind of the flip side of that. Those would be either the Nigerian prince or maybe something a little bit closer to home, your Amazon delivery, which is something you might be expecting.

Spear phishing is when they actually bother to either (a) rely upon your public information, things that you have on your Facebook page or whatever, or (b) get into your computer and actually read all your correspondence. Based upon that, they craft something.

The typical case that you hear about goes as follows. It's typically an email allegedly from the CEO of the company to the Chief Financial Officer (CFO) saying something like, “Well, we're negotiating a merger deal with this company in Indonesia,” which is something the company actually does. “There's a law firm over there that we need to send a payment to. Go send $200,000 to the following account in Indonesia.”

The email starts off with saying, “Hello there, Chris. It was great to get together with your wife at dinner last Wednesday. We've got to do that again. Now please, blah, blah.” 

Let's say you did have dinner with him last Wednesday. It just seems so obvious that this is your boss talking to you and asking you to do that. But ironically enough, although many of these scams are looking for hundreds of thousands of dollars, apparently there are parts of the world where a couple of hundred dollars is a decent week's pay.

I know of at least one university where half a dozen people in the department were hit with a message allegedly from the department chair saying, “My nephew's having a birthday party tonight. I'm in meetings all day long. Could you please stop by the local CVS or whatever, pick up a $100 gift card for me, and just email me the number on the back of the card?” Half a dozen fell for it. Basically, you’ve got to be alert. You’ve got to be cautious.

Basically, you’ve got to be alert. You’ve got to be cautious. -Stuart Madnick

To me, when I get the email saying, “Hey, I've got $10 million, and I'm trying to figure out a good, reliable humanitarian to give the money to,” it's like, “OK, come on, $10 million.” That's one of those things that, in our minds, is a red flag: “Millions of dollars, oh, it has to be fraudulent.” In your situation, you're talking about a few hundred dollars here or there. It's not outside the realm of a normal conversation.

Spear-phishing attempts are the ones that are so realistic and so plausible. I'll give you an extreme example. You might appreciate it because it involved someone like you, Chris. It was some kind of investigative reporter who was talking to a firm that helps companies with this. They were talking about spear phishing.

He said at the end of the meeting, “Hey, here’s what we’re going to do, Chris. Between now and next week, you will be a victim of spear phishing. We will pick you as a target. Let's talk next week.” 

Of course, during the week, various things came by that he realized were fishy and he discarded them, but before the week was out, he had fallen for at least one of them. Remember, he's someone who is aware of this stuff and he had been told ahead of time. So don't feel bad if you get hit. Better people than us have been hit successfully.

I think that's important to communicate to people. We were tongue-in-cheek saying that the Nigerian scammers are targeting people with low IQ; if victims aren't going to notice the grammar mistakes, the scammers have weeded everyone else out. But there are even scams that target the best and the brightest of us. As soon as we think we're doing everything right, that we're perfectly secure, and that we have all of our systems in place, that's the point where we need to start truly worrying about whether we're going to fall victim, because we've become overly reliant on “I wouldn't fall for that because I know better.”

Exactly. But while we're still talking about the typical consumer, there's one thing I often stress very much besides all the cautions we've talked about and many more of them. I often say, “Well, consider the possibility that no matter how cautious you are, you fall for something. How do you minimize the damage?”

Consider the possibility that no matter how cautious you are, you fall for something. How do you minimize the damage? -Stuart Madnick

This is something most people don't think about: minimizing the damage if they get into your computer, steal all your data, or corrupt all your data. Simple things, like all hygiene issues: how recently did you back up your data? It's a little bit like why you buy fire insurance. You don't want to have a fire in your house, but you have to be prepared. You probably have a fire extinguisher in your house also. In the same way, we do some things because we realize that a bad thing can happen, but what can we do to minimize the damage? That's a line of thinking that surprisingly few people have even started to think about.

I have to admit that I'm one of those people who at times is overly cautious about such things. If we're talking backups, I have a network-attached backup. I rotate through a physical backup, swapping out a drive I keep at my local bank, and I also have an online backup, because I'm thinking, “Well, if someone breaks into the house, they're going to get the physical one and the NAS. What if the house burns down?” Multiple places and multiple copies.

You're getting a double-A rating. Unfortunately, I suspect that the number of people like you is a small percentage.

It takes conscious effort to maintain it. Truly, I do like a lot of the online backup solutions because they don't require someone to say, “Let me pop a drive in and hit the button that says backup.” That automation, hopefully, removes a lot of operator inconvenience.

I liked the idea of minimizing damage. I think of my finances that way as well. We have a primary bank account where my wife and I do our banking and deposit our paychecks, but I also have a second bank account that is not connected to any other account. It has never wired money to or from any other bank account that I have, and it has enough money in it that I can survive a couple of months if someone gets into my regular bank account. While I know most people can't do that, I know a lot of people who are seven-figure earners for whom it hasn't even crossed their mind to have a second bank account, unless they're trying to hide it from their girlfriend or something like that.

As I said, I guess the key thing here when you talk about individuals is you need to think about not just disk backup or data backup, but if something bad happens to your computer—whether it be your bank account or whatever it is—what is your plan? The joke I use is that most people don't realize the batteries in their flashlight are dead until the electricity is turned off. That's not the right time to find out your batteries don't work.

Unfortunately, that's the case. Anecdotally, a lot of small businesses that do have automated on-site backups have never actually tried to restore from the backup. I know one company where the system went down. They needed to restore from the backup, and the instructions on how to do that were on the computer that died. They said, “Well, we don't have the password because it was on that computer.” They had never actually tried to restore their data. Effectively, if you've never tested your backup, I'm not sure that you really do have a backup.
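The test-your-restore advice above can be made concrete. Below is a minimal sketch in Python of a drill you could run periodically: restore the backup into a scratch directory and compare it file by file against the original. The function names and directory layout are illustrative, not from any particular backup product.

```python
import filecmp
import shutil
import tempfile
from pathlib import Path


def make_backup(source: Path, backup_dir: Path) -> None:
    """Copy the source tree into the backup location."""
    shutil.copytree(source, backup_dir, dirs_exist_ok=True)


def restore_drill(source: Path, backup_dir: Path) -> bool:
    """Restore the backup into a scratch directory and compare it against
    the original. Returns True only if the top-level files all match
    (recurse into diff.subdirs for a full tree check)."""
    with tempfile.TemporaryDirectory() as scratch:
        restored = Path(scratch) / "restored"
        shutil.copytree(backup_dir, restored)
        diff = filecmp.dircmp(source, restored)
        return not (diff.left_only or diff.right_only or diff.diff_files)
```

The point is the drill itself: a backup that has never passed `restore_drill` is, as the guest says, only a backup you hope you have.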

There are many things. There's probably a long laundry list, but I think it's more a way of thinking. Imagine that something has happened. What is your contingency? How do you back things up?

I do want to talk a little bit on the corporate side unless you want to go any further on the consumer side.

Let's go corporate.

This part, once again, is probably not something that the average listener is going to be able to deal with directly, but it is a matter of understanding. There used to be a radio show—maybe it's still going on—called The Rest of the Story. Do you remember that?

I believe so.

The idea is that you see the headline, but usually there is much more to it than the headline. The same thing is true in cybersecurity, for two reasons: (a) people like to make the story as simple as possible, but also (b) most companies want to minimize the damage to their reputation and so on.

You'll hear stories like, “Well, we had a cyberattack because someone forgot to update the software.” “We had a cyberattack because someone misconfigured this firewall.” These are actual headlines, by the way, more or less. I'm not reading them verbatim, but roughly speaking, those were the headlines. The impression that came across was that some individual was a little sloppy one day, wasn't paying attention, and made a minor mistake. Problem solved, […] for all we know, nothing more to worry about.

We've been researching about half a dozen of these cases in-depth. We've sometimes gone through over 50,000 pages—not every single page—of testimony, court documents, and so on. Almost always, a major cyberattack, one that steals lots of money or typically steals hundreds of thousands—if not millions—of people's Social Security numbers and so on, almost all of those things cannot be done with just a single mistake. In classic cases, someone breaks in the front door, but they've still got to find a way to unbolt the safe on the floor or find a way to get into the safe. There are lots of things that they have to do.

In one case, we wrote an article that appeared in MIS Quarterly Executive about Equifax, in which—I forget the actual number—100 million people's Social Security numbers, addresses, and probably shoe sizes were all stolen.

We identified 18 of what I call management indecisions—I call them semi-conscious decisions. What I mean by a semi-conscious decision is someone said, “Well, gee, the front door was wide open. Should we do something about it?” And someone answered, “Well, no, the weather is nice outside. Just leave the front door open.” “[…], but we're in a neighborhood where burglaries take place.” They made a hasty decision without stopping for a second to ask, “But what is the possible consequence of that decision?”

What were some of the 18 quasi-decisions that Equifax made which ultimately resulted in hundreds of millions of records being exposed?

Once again, I recommend you read the whole paper because I can't memorize all of that, but I'll give you a few examples. One of them is fascinating because they actually had a piece of hardware and software called an intrusion detection system. Basically, what it did was monitor the traffic going in and out of the network to see if anything odd was going on. The problem was that in order to monitor that traffic, it needed valid security certificates, and those certificates had expired nine months earlier. Basically, it was a brick sitting there doing nothing for nine months.

That's one incident. In fact, when they actually—for whatever reason—finally fixed the certificate, that's when the alarms went off nine months later that something was going on. But what's interesting about it was not just that nine months went on without anybody noticing this brick was sitting there. 

Apparently, this issue had come up earlier because, as you may know, in your systems you will have certificates all over the place for all kinds of permissions, if you will. Someone had pointed out that a big company like Equifax has thousands of certificates, and maintaining those certificates and keeping them up to date is a manual task.

Somebody said, “Well, this is very time-consuming but also very error-prone. They forget to check on one.” Someone had recommended that a monitoring system be put together and estimated how much it would have cost. It wasn't a huge amount of money, but it would take some time and effort, and it would have allowed them to centralize certificate management and get an alert that the old certificate over there is not working right.

Some management guy said, “It's not that important.” That decision cost them $1.7 billion—that and a bunch of other decisions. No one said, “Well, you realize, of course, you are putting $1.7 billion on the line.” It doesn't mean it's going to happen, but by not doing that, this is what could happen. That's what I mean by semi-conscious decision-making.
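The kind of centralized certificate-expiry monitoring the story describes is not hard to sketch. Here is a minimal, illustrative Python version (the function names, thresholds, and host list are my own, not Equifax's): it reads the certificate a server actually presents and flags anything near or past expiry.

```python
import socket
import ssl
from datetime import datetime, timezone
from typing import Optional


def days_remaining(not_after: str, now: Optional[datetime] = None) -> float:
    """Days until a certificate's notAfter timestamp, in the format the
    ssl module returns (e.g. 'Jun  1 12:00:00 2026 GMT').
    Negative means the certificate has already expired."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expires - now).total_seconds() / 86400


def check_host(host: str, port: int = 443, warn_days: int = 30) -> Optional[str]:
    """Fetch the TLS certificate that host:port serves and return an
    alert string if it is inside the warning window (or expired)."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            not_after = tls.getpeercert()["notAfter"]
    days = days_remaining(not_after)
    if days < warn_days:
        return f"{host}: certificate expires in {days:.0f} days"
    return None
```

Run something like this on a schedule over the full inventory of hosts and you get exactly what was recommended internally: one centralized alert instead of thousands of manual checks.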

Isn't that one of the big challenges—and I think this is starting to change—that IT officials, IT guys, or security guys in the company have been trying to go to management and ask for money so that something doesn't happen? It's hard to get funding. “Hey, we want you to spend a whole bunch of money so that something doesn't happen. If we do it right, you're never going to get your ROI.”

There are so many versions of that. I'll give you one extreme case. I talked to one security executive at a firm, and it turned out that they were about to cut his budget in half. Why? “We had no cybersecurity breaches this year.” It's only when he said, “You realize there were 50,000 attempted attacks that we were able to ward off,” that they said, “Hmm.” “If you cut my budget, maybe more of those 50,000 will actually make their way through.”

You're right. One of the problems is you don't appreciate what doesn't happen. That's why you, in some sense, do buy homeowners insurance and so on, but we haven't quite got to that level of the thinking process.

We hear the same thing in the terrorism space: law enforcement needs to get it right 100% of the time, and the terrorist only needs to get it right once.

That's what's called the asymmetric warfare aspect of it.

It really seems to be that way with cybersecurity as well. I've got to make sure everything's patched. There are hundreds of things that I have to look at. If I miss just one of them, or someone finds one that I didn't know about, that's all it takes for them to potentially compromise the system.

I'm going to flip it the other way around. We're going to use the Equifax case because we have a very detailed analysis of it. In order for that attack to be fully successful, as I said, a number of things had to go the attackers' way. In other words, you didn't have to be stupid once; you had to be stupid a dozen times. I strongly recommend against that.

Yes, I would as well.

You're right. Maybe you did forget this. I'll give you another example. If I remember correctly, they had the security group and the IT group as separate divisions. In fact, they actually reported quite separately in the hierarchy. I have no reason to think they didn't get along with each other, but basically, they were relatively separate.

What you'll read in the headlines is that a patch had not been applied. That's the only thing you’ll hear. Someone forgot to apply a patch. The trouble was the people who knew about the patch were a different division of the company that apparently didn't tell the division who was responsible for doing the patch. The left hand didn't know what the right hand had to do. 


It wasn't that someone was told to do the patch and forgot to do it. They were never told, because they were not in the loop when that information was passed. Individually, each one of these things is a little bit foolish. But to do that many of them? There's no reason for that. Unfortunately, we've seen this in half a dozen companies, so it's not like these are the outliers.

Do you see that starting to change, that larger corporations are being a lot more security-conscious and that they're giving more budget and more attention to cybersecurity risks?

It's difficult to do that over a short period of time. It's like glaciers moving. They move slowly. But I have two more ominous ways of thinking about it, too. The first one is I do say the good guys are getting better, which is your point there, but it also seems the bad guys are getting badder even faster.

Yes, maybe people are beginning to get a little bit of a realization of it, although you said something I want to make sure that everybody thinks about. Whether it's human nature, corporate, or whatever you want to call it, increasing your revenue and profit makes you very excited. Preventing cyberattacks, not so much. I've seen time after time after time when those things are just made lower priority. Maybe they've gone from the sub-basement to only the first-level basement in the building, but unfortunately, they're always way down in priority in most organizations.

Increasing your revenue and profit makes you very excited. Preventing cyberattacks, not so much. -Stuart Madnick

Lack of events doesn't make news either. “Hey, we've gone 20 years without a cybersecurity incident.” CNN or Fox is not going to give you headline coverage for that, but you mess it up once and you're going to get lots of coverage.

Exactly. The second ominous thing I want to mention—once again, maybe we're going from the individual to corporate, to the world—is cyberwar. I had a conversation just this morning. We have Professor Robert Pindyck. He's an economist at MIT. He was a speaker at one of our weekly meetings. He researches catastrophes. He researched mainly things like pandemics, climate change, and things like that. 

We were trying to talk to him about, well, if you think about it, cybersecurity could potentially be a catastrophe. We discussed a little bit what a catastrophic cyber event might look like. 

Once again, as he and I both noted, there was a period of time, in, I guess, the 1960s and 1970s, when a nuclear attack, I won't say was likely, but the reason people were building bomb shelters in their basements is because they took it pretty seriously.

We've managed to make it through that period so far—knock on wood—without the world coming to an end. The fact that a catastrophe can happen doesn't mean it will happen, but we're totally unprepared for a cyber catastrophe.

The fact that a catastrophe can happen doesn't mean it will happen, but we're totally unprepared for a cyber catastrophe. -Stuart Madnick

How would you define a cyber catastrophe? What would the proverbial bomb shelter be?

Firstly, I hope this is not keeping people awake at night unnecessarily, but there are several things to say about what a cyber catastrophe would be. First, you've got to understand something. Ninety-nine percent, or whatever you want to call it, of all cyberattacks are what I would call information technology (IT) attacks. They steal your data, lock up your computer, demand ransom, and so on.

In most cases, what you can do is wipe your computer clean. Hopefully, you've got your backup tapes working. It is an annoyance and can take you maybe a week or more in some extreme cases in a corporate network to get things up and running, but it's not a lasting problem in most cases. 

One of the things we've observed and actually have demonstrated in our own lab here is when you talk about cyber-physical devices. That could include your toothbrush, but more typically, we're talking about things like generators and things like that. We have shown how to make them explode. If you explode an $800,000 or a million-dollar generator, you don't go to the RadioShack down the street and say, “Could you please replace my million-dollar generator?” 

We actually had one incident that wasn't a cyberattack. It was a malfunction situation at MIT. It took three months to get up and running again. That's the first issue. We have seen cases of power grids going down or steel mills being destroyed. It has happened, but they're one-offs, and they don't get a lot of general publicity.

Power station

Would Stuxnet be a good example?

Exactly. A thousand centrifuges were destroyed by getting them into what we call a harmonic frequency. They basically shook themselves to death.

In IT terms, the computer gets compromised and tells the device to do something. At some point, the device physically fails because it can't safely do what it's being told to do.

And although this was not a cyber incident, to some extent that's related to the 737 MAX situation. One of the sensors falsely indicated a problem, the system pushed the nose down, and the planes crashed. That was not cyber, but it's the same idea: you send a phony signal to a device, and it goes and does something really bad. People can die or things can explode.

The first thing you’ve got to realize is that when that happens, you're talking about an outage measured not in hours or days, but in weeks, or even longer. That's what turns it into a cyber catastrophe. And in a cyberwar, you don't attack just one device.

I don't know your background, Chris. I'm an engineer by training, and possibly many of your listeners are as well. There was a certain reliability mindset there, in the sense that we were all taught about independent failures. If you're running a power station with eight generators, which are mechanical devices, you know that over time a device will fail, but the probability of turbine number one and turbine number two failing at the same time is really small. That's not true of a cyberattack. The cyberattack that attacks turbine one will go through turbines two through eight at the same time.

When I talk to engineers—this isn't rocket science—they had just never thought about the fact that the independent-failure logic they'd relied on for decades doesn't apply to a deliberate cyberattack. The difference here is not just that the Northeast power grid goes down, or that Texas freezes up, which is what happened with the snowstorm. It's that all the power systems across the country go out at the same time. Will that happen? I sure hope not. But technically, there is no reason why it could not happen. That's what I mean by a catastrophe.

I know we're talking power grids, and I don't want to get too into the weeds, but is that where systems need to be air-gapped? Like turbines one through four can be connected to each other, but turbines five through eight can't be connected to them? That sort of thought process on the engineering side?

First thing, for the benefit of anybody who hasn't heard the expression air gap, which is exactly what you're saying, the assumption is, “Well, our equipment is not connected to the Internet. There is an air gap between them.” 

I jokingly said to one of my students that I believe a virus made its way to the International Space Station—talk about an air gap. Stuxnet also attacked a system that was supposedly air-gapped, if you will.

Number one, there are ways to get past an air gap. A simple thing to do is sprinkle little USB memory sticks in the parking lot labeled "Naked photos on here" and rely on someone picking one up, bringing it in, and plugging it into a computer in the plant. There are all kinds of ways, and many of them aren't even sophisticated.

That said, back to your point: unfortunately, I still fear that many of the executives at power plants and the like think they're safe because of the air gap—number one—which makes them even more vulnerable, if you will. But more importantly—and once again, I've talked to quite a few people, and I'm sure there are exceptions—the forces are working entirely against air gapping.

Air gapping faces two pressures. First, a lot of times, if you want to run the system most efficiently, as we all know, you want to connect things together.

Second, and increasingly—I gave an interview recently about renewable energy. Renewable energy typically involves windmills, God knows where, up some mountain someplace or other. Do you want to drive up to the windmill in the middle of a snowstorm to see how things are going? More and more, we connect these systems to the Internet so we can talk to them and monitor them remotely. If anything, air gaps are diminishing in the world at large, not increasing.

Yeah. As we're trying to make things more efficient and more effective, it's introducing additional risk to the process.

Exactly. I've got to be clear about one thing I always stress to people. If you want to minimize your risk, don't get out of bed in the morning, because you can slip in the shower. Obviously, it's a balancing act you've got to decide for yourself, but the key thing is, number one, you don't want to take foolish risks. And if you are going to take a risk, make sure you've got some nice carpet on the bathroom floor, so when you fall, you fall on something soft.

It doesn't mean you can avoid all risks, but you definitely can minimize, number one, the chance of them occurring and, number two, the damage done if they do occur. That's probably a healthy way to think about it.

Like you said, there's no way to eliminate all risk, so the issue is to try to figure out, well, how much risk can you live with? If something does go wrong, what do you do? 

I think we understand that when it comes to driving a car—it's a perfect example—there's an inherent risk. No matter how good a driver you are, there are other drivers on the road, there are wild animals that run across it, and there's debris that pops tires. So we have insurance—hopefully, depending on what state you're in—to deal with the other side of that in case something does happen. There's coverage and protection.

Since you mentioned automobiles, let me mention something to you. It's a two-part story. About 15 or 20 years ago, I was a visiting professor at the University of Nice in France. By the way, Nice is really nice. The group I was working with there happened to be affiliated with another group at the university doing automotive telematics.

This was 15 or 20 years ago, so these were not autonomous vehicles, but they were looking at sensors, cars, emissions controls, and so on. After talking to them for a while, I came to the following realization. They had lots of technical issues to deal with: they had to find a way to make this equipment small, cheap, and able to survive a hostile roadway environment. They had a list of N things to worry about, and cybersecurity was somewhere down around N+1. This was 15 or 20 years ago.

About three years ago, just before COVID, I was on a trip to Singapore visiting one of the research groups over there designing and building autonomous vehicles, and I had a conversation with them. Twenty years later, the same basic conversation: they've got really major, serious technical problems to overcome, and once they've overcome all those, then they'll start thinking about cybersecurity. I'm not saying there are no exceptions—I'm sure there are companies that take it as a priority. I just haven't found one yet.

I'm trying to think of a way to phrase this. I don't think that the entities are thinking cybersecurity isn't important. I think part of the mentality is, “Well, if we can't solve the physical engineering issues, then we're never going to even need to address the cybersecurity issues.” It's almost like a design process issue. “We've figured out all the technical stuff and built it. It physically works and physically does what it's supposed to do. Now we can move on to cybersecurity. It's just, unfortunately, much further down the line.”

Exactly. This is an ongoing part of the research we're doing here now. Much like in software—nothing to do with cybersecurity—a lot of times, they're designing safe software, software that works correctly. It's easier to do that in the early stage than try to retrofit it later on. 

Use that same logic: a lot of times, we believe that if you design a system with cybersecurity in mind, it's a lot easier than trying to bolt it on afterward. In fact, in some cases it turns out not to be possible—teams realize that to make the product secure, it wouldn't work as designed, and they have to start over. It's a major mindset change, and it's happening at glacial speed, unfortunately.

I'm not going to blame educational institutions here, necessarily, but is part of it that there has to be a shift in how the people working on these systems are educated, so inherently it's going to take 10 to 20 years? At the corporate level, people realize, "Oh, gosh, we've got to change this," but that doesn't change how people are actually being taught in college.

I'm glad you brought that up, because we had a remote video conference about a month ago. I'm not sure if you or your viewers are familiar with it, but there's a thing called the Log4j vulnerability. For those who don't know, Log4j is a piece of software that's widely used in hundreds of millions of computers, and someone realized it has a vulnerability—one often called a zero-click vulnerability. In other words, it doesn't require clicking on a link from a Nigerian prince, and it doesn't require somebody to know your password. Basically, your back door is wide open and they can walk right on in. That's a pretty serious vulnerability.

What was interesting was that that piece of software goes through 8,000 quality assurance checks with every release. The trouble—and this goes back to education—is that we teach people how to design software to do what we intend it to do. We don't teach them to design software with an eye to how it might be misused.

Once again, I'm sure there's an exception out there, but I asked this group of a hundred or so people, "Does anybody know any university that teaches you how to design software that resists these kinds of problems?" It's not part of the software design process at any of the schools we know of. You almost have to flip your head around: think not about what the software is supposed to do, but about what it can be made to do.
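
As a concrete illustration of "what it can be made to do," here is a hypothetical toy logger—not actual Log4j code—that mirrors the same failure mode: a convenience feature (expanding `${...}` lookups in logged text) passes every test of its intended behavior, yet hands an attacker a capability. All names and values here are invented for the example.

```python
import os

def log(message: str) -> str:
    """Toy logger that helpfully expands ${ENVVAR} lookups in messages.
    This is the INTENDED feature, and it passes tests of intended use."""
    out = message
    while "${" in out:
        start = out.index("${")
        end = out.index("}", start)
        var = out[start + 2:end]
        out = out[:start] + os.environ.get(var, "") + out[end + 1:]
    return out

# Intended use: works exactly as designed.
os.environ["SERVICE"] = "turbine-ctl"
print(log("starting ${SERVICE}"))  # starting turbine-ctl

# Misuse: an attacker plants "${SECRET_TOKEN}" in a field they know gets
# logged (a username, a user-agent header), and the logger leaks the
# secret for them -- no click, no password, just the software doing what
# it CAN be made to do rather than what it was meant to do.
os.environ["SECRET_TOKEN"] = "hunter2"
print(log("login failed for user ${SECRET_TOKEN}"))
```

Testing only "does `${SERVICE}` expand correctly?"—the intended behavior—would never surface the second call as a problem, which is the gap in how software design is taught that Dr. Madnick is pointing at.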

“You almost have to flip your head around: not think about what the software is supposed to do, but what it can be made to do.” - Dr. Stuart Madnick Click To Tweet

Do you see schools starting to—for lack of a better description—offer hacking degrees? Like you're saying, you teach defensively: here are the things you do to protect yourself. But are there institutions actually starting to teach people how to probe existing technologies and break into them, as a degree, so to speak?

As you know, there's an increasing number of schools that offer degrees in cybersecurity, the bachelor level degrees or master's degrees. I have not studied all of them. Obviously, it's growing as the interest level grows. As you may know, I've heard reports of anywhere from one to two million open job positions. If you want to think of a career, that's not a bad place to go.

What I'm about to say is not scientifically grounded, but what I've seen in studies we've done—mostly going back two or three years, so it's not recent—is that most schools basically repackage things they've been teaching for decades and put the label "cybersecurity" on them. Are they actually adding anything that will specifically make things significantly better? Not a lot.

I won't say there's nobody out there because I haven't studied everyone, but unfortunately, people are taking the easy way out. They teach good software programming and hope that somehow that will solve the problem.

I definitely know the feeling. I think this is actually a good place to wrap up. There's a little bit of, "Oh, gosh, this is a bit scary," but there's also a bit of optimism in that there are people like you trying to figure out how we do things better and how we teach things better in ways that will mitigate these issues. It might not be tomorrow, but over time, they're going to get mitigated.

One thing I'll say at the end—what I alluded to earlier, and why I think what you're doing is so helpful—is the issue of thinking about how to change the way people see the world: the cybersecurity culture, whether in organizations or in your home life.

I have a lot of sayings I pass on to my students, and one of them is: you can put a better lock on your front door, but if you're still putting the key under the mat, are you any more secure? A lot of times, we have to really think about what we're doing, why we're doing it, and how we're doing it. That self-reflection is something we all need to work on—in our individual lives, our corporate lives, and in government.

For example, it was only after the Colonial Pipeline cyberattack […] that the government required pipelines to report cyberattacks. They weren't even required to report them before.

None of this is rocket science, but it requires you to start thinking about things in ways that we haven't done before. The hope we have is that we will get people thinking these ways and that will start to turn the tide in our favor.

If people want to find you online, where can they find you?

The best thing to do is visit our research center's website: cams.mit.edu. That's Cybersecurity at MIT Sloan. It has a lot of the articles we've written and news articles as well—both technical articles for people who want to go deep and overview articles for people who want something fun, or scary, to read, as the case may be.

Hopefully not too scary stuff. We'll make sure to link those in the show notes. Stuart, thank you so much for coming on the podcast today.

A pleasure.

 
