Gifted hackers can access data from a government website, a hospital medical system, or even a car. Most are not aware when their personal information is stolen, sold, and used until it is too late.
Today’s guest is Alissa Knight. Alissa is a recovering hacker of 20 years, a cybersecurity influencer, content creator, and the principal cybersecurity analyst at Alissa Knight and Associates. She is the author of the recently released book Hacking Connected Cars. Alissa has been quoted in articles by Brian Krebs and featured in numerous magazines, including PC Magazine, Wired, and Forbes.

“I don’t believe that there is true prevention…I think we need to unlearn the concept of prevention because no matter how much you learn and how much money you spend, they will eventually get a foothold.” - Alissa Knight
- [1:01] – Alissa shares how she started hacking at the age of 13 and how she got caught hacking a government network. Law enforcement came to arrest her at school.
- [2:27] – After this experience, Alissa later went on to own a few startups and sold them for millions of dollars.
- [3:44] – Alissa describes the combat training she did while owning a defense contracting company, and how she then transitioned back into cybersecurity.
- [5:10] – Her company shifted from defense contracting to private sector cyber security.
- [6:06] – While living in Germany, Alissa got into hacking connected cars.
- [7:07] – Although Alissa knows the risks of having connected technology, she is definitely a consumer of connected devices.
- [7:55] – We are seeing a fundamental change in cybersecurity because now it isn’t just about information. It can literally be life or death.
- [9:02] – Alissa loves cinematography and combines her knowledge of hacking and content creation.
- [10:17] – Cybersecurity content can be boring and uninteresting. Alissa says she got tired of seeing the same white papers and changed things up to make them more interesting, not just for her but for clients as well.
- [11:22] – Alissa references a book called Blue Ocean Strategy and summarizes its content in relation to her business model and content.
- [12:58] – “A lot of the content out there for security is told through the eyes of a blue team member. It’s told through the eyes of the defender. Very rarely do we see content being told through the eyes of the adversary.”
- [14:13] – Alissa describes what she wants people to see through her content.
- [15:58] – In Alissa’s opinion, we need to unlearn the concept of prevention.
- [17:27] – Chris points out that many mistakes are made when people think they have an impenetrable system. They become complacent.
- [18:20] – There are so many products out there right now that it becomes overwhelming, and many don’t know what to choose or buy.
- [19:17] – Alissa breaks down the categories of mHealth and describes how she was able to hack into them.
- [20:59] – When testing these systems through hacking, Alissa was shocked at how much information she was able to access about patients.
- [22:01] – Alissa explains the rule that CMS passed called FHIR.
- [24:36] – Describing the systems that hospital systems use, Alissa points out some issues with lack of security.
- [26:48] – Alissa shares a personal story about being diagnosed with cancer and the experience of getting an email with her medical data available through a mobile app.
- [29:21] – The average person is not digging deep to find out where their information may have been published on the dark web.
- [30:54] – Alissa explains the differences between what some providers can and cannot do with data.
- [31:41] – To explain a BOLA vulnerability, Alissa uses an easy-to-visualize analogy.
- [33:58] – Some of the problems in the APIs Alissa is testing are insecure coding and programming. She explains how these can expose patient health information in medical systems.
- [35:13] – Simply changing an ID slightly in a request that has already been authenticated is the number one vulnerability in APIs. Alissa says it’s the easiest hack in the world.
- [36:08] – Sharing a story about an experience with a pen tester, Chris demonstrates how important testing for vulnerabilities is.
- [38:16] – We as consumers have to rely on manufacturers to make more secure cars and our healthcare providers to create more secure programs. It’s unfortunately out of our hands.
- [39:54] – It is not an immediate thing to learn. Alissa points out the many tools and the importance of understanding them.
- [42:16] – Exploits and these penetration testing tools are important, but if they are in the wrong hands they can be used for different purposes.
- [43:32] – When the developer is responsible for data, it leads to many problems. Alissa describes what can happen.
- [46:19] – Alissa shares what she predicts will happen in the future.
- [47:28] – “I think zero trust should have been the foundational elements of the building blocks from the beginning.”
- [49:37] – There is a lot of amazing technology coming from Tel Aviv which is a shift from the past.
Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review.
Links and Resources:
- Podcast Web Page
- Facebook Page
- Easy Prey on Instagram
- Easy Prey on Twitter
- Easy Prey on LinkedIn
- Easy Prey on YouTube
- Easy Prey on Pinterest
- Alissa Knight on YouTube
- Alissa Knight Home Page
- Alissa Valentina Knight on LinkedIn
- Alissa Knight on Twitter
- Hacking Connected Cars: Tactics, Techniques, and Procedures by Alissa Knight
I know in your bio you refer to yourself as a recovering hacker. How did you get interested in hacking and why do you consider yourself recovering now?
I think it started off as a joke. I was wondering: people have to be getting sick of me constantly saying the same bio every single time. I have a few followers, and some of them make sure to watch every single thing that comes out. They’ve got to be tired of hearing the same thing over and over. I figured I’d come up with something funny and new, and I was like, “Recovering hacker works.”
It started as a joke, but I started hacking when I was 13. It's the typical Hollywood story. I finally got caught. I was hacking a government network, and law enforcement was waiting for me at school and arrested me right when I arrived. It was so embarrassing. At the time I was like, “This is cool.” But it was really embarrassing, especially now that I look back on it. But some kids get arrested for killing someone or drugs. I got arrested for hacking a government network.
Of all things you could get arrested for, I'd much rather know someone who got arrested for hacking than someone who got arrested for murder.
This is true, and I don't look good in orange.
It's not the new black? Sorry, bad joke.
Not for me, no. The charges were dropped. When I was arrested, they just played this out very badly. It was in the ‘90s, so they really didn't know what they were doing. I got off and went to work for the US Intelligence Community in cyber warfare. I started a company; it was my first startup, called NetStream. I sold that to a public company for about $3 million. I sold my second startup when I was 27 for about $5 million. I always made sure not to bring in venture capital.
I always owned 100% of the company or 95%, 98% of the company. Even though I ended up later starting a venture capital fund, I didn’t like working with VCs. I didn't want their hands in the pot.
They own you, in a sense.
They do. I have this thing with being told what to do, which is what has always made me this horrible employee. That left me to just be an entrepreneur, because that's all I can really do; I can't hold a job. I keep pissing off every boss I ever had. That's me. I'm still an entrepreneur as well. I've started and sold multiple companies, and I'm on my third.
Are they all in cybersecurity or that vertical?
Yeah. My current company, Brier & Thorn, started out as a defense contractor providing counterinsurgency support in Afghanistan and Iraq. There was a time when I didn't want to be a hacker anymore; I didn't want to be in cybersecurity anymore. I traded my keyboard in for an M4 and an AR-15 and went out to close-quarters combat training. I got into defense contracting, if you remember the Blackwater days. We were providing COIN support.
Then later, knowing that the conflicts in the Middle East were going to wind down, I transitioned. I wanted to get back into cybersecurity. I didn’t want to be shot at anymore. I went back into cybersecurity, and my first return was actually running cybersecurity at a nuclear power plant, San Onofre Nuclear Generating Station.
I'm close enough that I care about the fact that there's someone good there.
Yeah, I want to say that I'm good. I want to think that I'm good. But I was leading penetration testing efforts for a pretty major plant out in Southern California. The rest is history. Brier & Thorn went into cybersecurity at that point and was no longer doing defense contracting. We've gotten more into private-sector stuff, critical infrastructure protection.
I became a certified SCADA security architect. I began hacking SCADA systems. I think that was where I just got bit by the bug to hack embedded systems. I just got really big in hacking embedded systems. I wanted to do something different. I didn't want to be just this commodity pen tester. Not that there's anything wrong with pen testers, the folks focused on hacking traditional networks and systems, but I wanted to be different.
I got this contract to hack connected cars. I moved to Stuttgart, Germany. I always say that wrong. I say it like a white girl.
I lived in Germany, but I don't think I can say Stuttgart.
I always sound like just this white chick when I say it. I even got made fun of when I lived there for the way I said it, so I stopped saying it. I stopped trying. I lived in Stuttgart for a while. That's where I got big into hacking connected cars and started hacking cars for tier one and automakers as well—tier one OEMs. I had a really good time, continued to do it for the next few years, and then I wrote a book on it. I wrote a book on hacking connected cars. Now I'm focused on hacking APIs.
When it comes to connected cars, does it make you nervous to get in a connected car now?
I'm one of those weirdos that if it's my time to go, it's my time to go. I'm trading in my car for a Tesla. I'm all about technology. My wife and I just finished building a house in Las Vegas and everything's connected. I can talk to A-L-E-X-A from any room. If I say her name right now everything will get really loud. But at the same time, it also gives me an awesome cyber range. My own house becomes my own cyber range, we’re hacking connected IoT devices and smart home stuff.
Is it an issue? It’s an acceptable risk for you?
I know at some point something's going to get compromised, but I'm not that worried about it.
Look, the fact that I love hacking connected cars really speaks to how important I think it is, because, for me, it's one thing if you deface a website and another thing if you crash your car with your family in it. I think things are going to fundamentally change very soon here. The thing is that now we're looking at cybersecurity vulnerabilities that are affecting life and safety.
You look at the mHealth research that I published recently with Approov where I hacked these mobile health apps and APIs and was able to access other patient data and clinician reports. If I wanted to kill you, Chris, why walk up to you and shoot you when I can just hack an API and find out what you're allergic to, find out you're allergic to bees and fill your house with bees?
It definitely expands the concern for me where the vulnerabilities that I'm discovering and researching have real-life impacts. The other stuff is important, but when you're talking about things like—I can remotely start and stop the engine of any particular carmaker on the road, lock and unlock the doors, or control the steering wheel. You're talking about a major problem.
I think that's what brought me to where I am today as a content creator where you're probably wondering, “OK, what the heck is it you do?” I'm still trying to figure out what it is I do. I just say if a hacker and content creator were to have a baby, I would be the product of that. For me, I'm blending hacking with content creation. I love cinematography. I have really gotten big into videos.
A lot of the research I do is backed by a one- or two-minute theatrical trailer. You look at the law enforcement vehicle hack—I don’t know if you saw that video—but that and the mHealth video are very DreamWorks-y, very Hollywood-y. I like that. I love that. I love how I can blend hacking in a command shell or Metasploit shell with content creation. I think my clients, the cybersecurity vendors that work with me, dig that. I think it makes for a different story.
I think cybersecurity gets very bland, uninteresting.
It does. It’s the same thing.
Geeky, nerdy, let me put on my glasses and talk like Urkel type of things.
Yeah. It's sort of the same thing every year. I think another thing that drove me was I was tired of seeing the same white papers, the same blogs about how great we are. We're really awesome at talking about ourselves. You play that to companies and companies that have even bigger budgets to talk about themselves. I think CISOs are just tired of it. I'm speaking to CISOs through my storytelling, through my videos, through my podcast like this, through the white papers I write, the videos I create.
I feel like I'm speaking to those who are wanting to know, does the product really do what the marketing material says it does? That's what I think is so important today that vendors are missing. They're putting out these papers about their features, their capabilities, and what it does versus why you need it. I'm a big believer in people don't buy what you do, they buy why you need it.
I really try and help create the Blue Ocean Strategy for companies and help them execute on that. Have you read that book Blue Ocean Strategy?
No, I haven't.
Let me tell you about it, Chris.
Yes, tell me.
It's an amazing book, written by two business school professors. The book really starts off with the story of Cirque du Soleil: how the creators of Cirque du Soleil didn't want to compete against the circus. Cirque du Soleil wanted to redefine what the circus was. The concept of the Blue Ocean Strategy is that market share and demand can be artificially created. It doesn't need to be stolen from your competitors.
It's the idea that you eliminate your competition by making them irrelevant. Where people don't care about the difference between this deception technology and this one, or this NDR and this NDR, and this CDR and this CDR. What they care about is just eliminating the competition completely where you're not comparing apples to apples and commoditizing your products by comparing on features and price alone.
You're telling a story about what you can do for your customer. That's what I'm trying to do from this perspective. Now, Chris, what's very different about my content is that a lot of the content out there for security is told through the eyes of a blue team member. It's told through the eyes of the defender. Very rarely do we see content being told through the eyes of the adversary. That's what I'm trying to do with my content.
Are CISOs thinking more from the perspective of defense and not from the perspective of, “How are people attacking me and what are they thinking?”
I don't think it's necessarily wrong. I'm not saying you shouldn't think that way, but in the end, I also always go back to Sun Tzu: you can't defeat an enemy that you don't know. I think that's why it's really important to mix that defender content with that breaker content, or that adversarial content, as I like to call it.
In order to defeat your enemy, you need to know who your enemy is—how they think, how they operate. That's what I think we also showed with that recent deception technology research. I don’t know if you saw that one, but it was called I Effing Hate Illusive. The neat thing about that research, that report, and that video was that you actually see what deception technology does to disrupt the adversary's decision-making process: how frustrated I was and how much I effing hated Illusive.
For me, I want CISOs to see that. I want them to see. Tell me, Chris, how often do you see CISOs actually being able to visualize the return on investment in their cybersecurity products?
That’s the biggest thing. No one looks at that as a return on investment. It's looked at as risk mitigation at best.
How much can I afford that will make it look like if I do get hacked, at least the public will think I've done the right thing. And it's not about preventing it, it's about public relations sometimes.
I even ran into a CISO one time who said, “You know what? It's cheaper for me just to pay the fines and pay for the incident response cleanup than to pay for the prevention.” Obviously, I didn't agree with that position, but there are some CISOs out there who think that way and treat cybersecurity like forgoing an auto insurance policy: when you get in a car crash, it's cheaper to just pay to repair your car.
Sometimes, I think it comes from, “It's not my data, it's my customers' data.” It’s kind of the same theory as the way people look at politics: they're spending somebody else's money. If it's somebody else's data that's getting compromised, what do I care?
I'm exaggerating it a bit, but I think a little bit of that mentality.
I certainly don't want to stereotype, because obviously that's not all CISOs. I think there are just some individuals out there who feel that the response is cheaper than the prevention. I also don't believe that there is true prevention. There may be a lot of people out there who want to key my car for saying this, but I think we need to unlearn the concept of prevention, because no matter what you do, no matter how much money you spend, they will eventually get a foothold.
I think that's what I love about my research either with the mHealth and FHIR API stuff that I'm doing right now, or the deception stuff. Again, it goes back to Sun Tzu. It's not betting on the enemy not coming, it's the fact that the enemy will come but we need to make our position unassailable. I think that's what I love about my research is that it tells a story that you're not going to be able to prevent me from coming. You're not going to be able to prevent the adversary from getting a foothold on your network.
What you do is you lower your mean time to detection and mean time to respond because there's no such thing anymore as prevention. If they want in, they will get in and they’ll figure out a way. At the end of the day, if there are no vulnerabilities to exploit, they're just going to send an email to Chris and tell you to double-click on that PDF attachment because I have a purchase order for you that you've been waiting for, or whatever.
But I mean, at the end of the day, humans are the weakest link in security. If they want in, they'll find a way in. It's detecting them once they're there.
I think that's almost where people get into trouble is when they think they have an impenetrable system. At that point, you stop thinking about—assuming they get in—how do I prevent them from getting, not just the golden eggs, but how do I keep them from getting the goose?
It's arrogance, right? We become—what's the word I'm looking for? Lazy is not really the right word.
Complacent. Thank you, Chris. I haven't had enough coffee today. I think we just become complacent. Also, speaking in defense of the CISOs out there, I think there's just so much to choose from. There's so much crap out there. There's a lot of stuff out there. We live in a very exciting time right now where there are all these new product categories. Breach and attack simulation, does that mean that I don't need to do penetration testing anymore?
Does that mean I don't need to do vulnerability scanning anymore? I have this breach and attack simulation solution, that's automating pen testing for me. Then you have the deception technology category. That's really cool. There's all this stuff out there, and I honestly don't think CISOs know what to buy anymore. I don't think CISOs know what to choose and what to do.
I think that's another positive outcome of what I'm doing right now. I'm helping CISOs weed through all that noise and figure out what it is exactly they need to do.
Let’s circle back to mHealth. Can you tell the listeners what mHealth is, how it works?
Sure. There are all these different product categories, and I want your listeners to understand that they're not one and the same. You have mHealth, you have telehealth, you have telemedicine. What I'm doing right now is a two-phase vulnerability research project sponsored by Approov. The first phase was downloading 30 mHealth apps and hacking the APIs that they talk to.
I was able to hack every single one of the apps. There were hard-coded usernames and passwords, hard-coded API keys, and tokens. It's 2021 and we're still doing that. Not only API secrets for that hospital or that mHealth company, but also API secrets for third parties like payment processors. Really nasty stuff, easily reverse-engineered with free tools like MobSF.
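Hard-coded secrets like these are usually found by decompiling the app and searching the recovered sources. Below is a minimal sketch of that kind of scan; the regex patterns and the sample string are illustrative assumptions, not Alissa's actual tooling or findings.

```python
import re

# Illustrative regexes for spotting hard-coded secrets in decompiled
# app sources. Patterns and names are examples, not Alissa's tooling.
SECRET_PATTERNS = {
    "api_key": re.compile(r'(?i)api[_-]?key\s*[:=]\s*["\']([A-Za-z0-9_\-]{16,})["\']'),
    "password": re.compile(r'(?i)password\s*[:=]\s*["\']([^"\']+)["\']'),
    "bearer_token": re.compile(r'(?i)bearer\s+([A-Za-z0-9._\-]{20,})'),
}

def scan_source(text: str):
    """Return (kind, value) pairs for every suspected secret in text."""
    hits = []
    for kind, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((kind, match.group(1)))
    return hits

sample = 'static final String API_KEY = "AKIAEXAMPLEKEY12345678";'
print(scan_source(sample))  # [('api_key', 'AKIAEXAMPLEKEY12345678')]
```

Tools like MobSF automate this and much more (manifest analysis, insecure storage checks), but the core idea is the same: secrets shipped inside a mobile binary are readable by anyone who downloads the app.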
What was interesting about this research, compared to my 2019 research where I hacked all those fintech and financial services mobile apps, is that this time I actually got to go after the API. We got some hospitals and some companies involved that allowed us to target their APIs. That was really interesting because I was able to take the attack from the initial passive attack, where I'm not throwing any packets at the victim's network or APIs; I was just doing all the reverse engineering of the app locally and finding all the API secrets.
Then I was able to take that to an active attack, where I was actively going after the APIs. The results were startling. I mean, I was able to access thousands of patient records. Not just information on in-patients admitted to the hospital, but their family information, because some of these hospitals would take a picture of the patients when they went in, so I saw their pictures, their next of kin, their family information, and their PII. That's why PHI is worth so much on the dark web: over 1,000 times more than a credit card number.
It's because of all of the data in a PHI record. It's everything. It's your allergy information. Whether or not you're HIV positive. Just whatever's wrong with you in a single record. In my 20 years, I think this is probably the most jaw-dropping research, just shocking research I've been involved in.
Now in phase two, I'm going after FHIR APIs. What that is, Chris: the federal government's CMS, the Centers for Medicare and Medicaid Services, passed a final rule that patients must be able to have access to their patient data. The plumbing system for this would be FHIR APIs. FHIR stands for Fast Healthcare Interoperability Resources.
Obviously, this made an amazing pun opportunity for some really cool puns like playing with FHIR, putting the FHIR out for the topic, and the name of my research. We have a lot of fun with that. We ultimately decided on playing with FHIR. That's what made for the really cool teaser trailers on my YouTube channel. In addition to the FHIR stuff, there's something called Health Level Seven or hl7.org, which is a separate organization from CMS.
CMS is who the payers and the providers listen to. If they want to get paid for Medicare or Medicaid patient services, they have to listen to CMS or they don't get paid. They're very important people.
The rules, guidelines, and systems.
They're like the Gestapo.
They’re like the Gestapo for health services. I don’t know if I’m allowed. Is that politically correct to say? CMS is like the Big Brother for health and stuff in the United States.
Before, there was HL7, and now there's FHIR. They finally decided on FHIR as the standard. FHIR is not something you and I go to Best Buy and buy: “Ah, yes, we would like to buy one shrink-wrapped FHIR API, please.” It’s a document that tells you how to implement this and the framework for it, but it doesn't tell you how to secure it.
There are two major companies in the United States responsible for provisioning this data and information out—Cerner and Epic. They make up a majority of the US healthcare market.
They have something like open.epic; I think they call it Epic on FHIR. Then you have Cerner, with their FHIR API sandbox. The really interesting thing here, and the scary thing, is the way it works. If you and I want to start a company, and let's say we're in Pakistan (a lot of the companies I researched actually were in Pakistan, Israel, the Middle East, India), we can say, “Hey Chris, let's have a new startup. We're going to pull patient records from Epic.” We can do that. We go to open.epic and we can create an API client; they even give you a GUI wizard that builds the client. You can pull that Epic data, build this client, and publish it into something I think they call the App Orchard. I really like what Epic and Cerner are doing. There’s some really cool stuff happening with FHIR.
The interesting thing here is that they don't have any control over our lack of security. If you and I don't secure it at all—we’re pulling their data and we’re communicating their stuff—they don't have any control over that. It's really scary.
For them, it's a challenge because you have these companies that are just building these clients and they're pulling data from their APIs. They don't know where that data is going. They don't know how it's being secured. People are just pulling this data and its PHI. It's patient data. It's scary. I could go all day about this.
Is there anything that patients can do about it other than don't tell your doctor stuff you don't want them to know?
It's interesting because, about two years ago, I got cancer. Actually, this is the first time I've ever talked about this publicly. I went in for surgery, and it was really interesting because I got this email that said, “Your clinician report and your bloodwork data are available on this site. You can go log in and get it.” I was like, “Oh my God.” There was a mobile app for it, and I was like, “What is happening right now? I didn't authorize this. Who is this company? Why do they have my data from the surgery and from my x-rays? What is happening right now?”
This is what's happening. You go to a hospital, you go to a doctor, and you'll get this email. I think it's called MyChart. Anyway, you go to the site and you can grab your medical data. It's just sitting there. I don't ever remember signing anything for that. I don't remember agreeing that this other provider over here, and this company over here that makes a mobile app, probably outsourced to some company in the Middle East somewhere, should have it. Now I need this mobile app; I've got to go download it from the App Store. I don't know where the API even is. I wanted to hack the crap out of it. Sorry for my language. Bleep that out later if you have to. I didn't authorize any of this. What's the security around this? This is my PHI.
I think that's what's really appealing to me in the research that I do because, for me, it has a meaningful impact on people's lives and safety. It's way different, like I said, the days of World of Hell and mass website defacements are gone. That was 20 years ago. Now it's all about money. It's about profiting from the data you steal because it's worth more than oil now. It's worth more than Bitcoin.
Do you see things moving toward people hacking my personally identifiable information and health records and selling that, rather than trying to hack my bank account? Because then I don't know that it's happened, and if I don't know that it’s happened, I’ve got nowhere to report it. If you stole from my bank account, you bet I'm going to raise heck everywhere I go. But I'm never going to know if you get my medical records.
Yeah, and it's not like Chris Parker is paying $40,000 a month to subscribe to a threat intelligence platform that constantly scrapes the dark web looking for dumps containing his name. The average Joe or the average Jane on Main Street isn't either. I think a lot of people don't know. A lot of people never went out there to find out exactly what was in that breach. What was that credit agency that was hacked?
Yeah, no one went out there looking for the Equifax breach searchable web interface to find what data on you was published on the dark web from the Equifax breach. I think maybe we as humans, I won't even say Americans, but maybe we as humans like to play ostrich and put our head in the sand and say, “If I don't know what's been published out there about me, if I don't know, if I don't see it, it's not going to bother me.” It's scary to think about PHI breaches because we all individually have health concerns and we'll individually have health issues. We don't want that out there.
I mean, I don't care if my Bank of America or Chase credit card number is out there. It's probably been compromised more times than I have fingers. But how do you issue new healthcare information? I can go to you and say, “Hey, Chris, sorry, your credit card was breached. We're going to send you a new one in the mail.” But how do I send you new health information in the mail? How do I undo this or undo that from your health history so you've got a brand-new, clean health history? You can't do that. It stays with you.
Do the APIs allow for pushing information into the health records or is it just pulling information out?
I mean, obviously, it depends on the provider; it depends on the company. They'll have partnerships with other companies that fill that data store with the data they've got. If I go into surgery and there's new health information that needs to be put behind this FHIR API, obviously they can place that data there, upload that data there. But a lot of things like FHIR are meant for patients to be able to pull that data. That is a great question, though. For me, the vulnerabilities were about what access I had to view other people's information.
This is around a class of vulnerability called BOLA: broken object-level authorization. A great analogy I like to give your listeners, if they don't know what the heck that even means, is this: say you and I went to a cocktail party and there's a coat check. I actually get freaked out by coat checks; they freak me out. I never get my stuff coat checked.
Let's say you're standing in front of me and I’m like, “Chris has a really nice coat. I love that. That's a Prada coat.” I'm behind you and you get the number 18. You give the coat check your coat and maybe you give them your wallet for some reason—because Chris isn't thinking—and your keys. I come up behind you like, “I want that Prada coat.”
I get the number 17 for some reason. I take my number, I take a sharpie, and I change that seven to an eight. Thirty minutes later, I see you're still here, I go to the coat check, and I ask for 18. Now, the coat check person has authenticated me because I have a ticket and it looks legit. But they haven't authorized me to take your coat, wallet, and keys home. That's a great example of a BOLA vulnerability.
I'm authenticated, I have a token, I have a username and password. I'm legitimately allowed to communicate with that API, but I'm not authorized to pull your patient data. That's a perfect example of the way a BOLA vulnerability works. That was systemic across all of the APIs I tested.
Is that from careless programming, people just trying to make things easy, stage environments that just never got effectively moved to production without some piece being taken out?
Yeah, I think it's a combination of things. It's a combination of a lack of shift-left security, meaning security isn't being placed into the SDLC at the time the code is being written, all the way through "shift left, shield right," where you're also not protecting it in production. It's a combination of things as well involving a lack of certificate pinning. Why was I able to insert myself in the middle of that API request between the mobile app and the API? That's a lack of certificate pinning.
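The pinning idea Alissa describes can be sketched in a few lines. This is a simplified illustration, not the code of any app she tested: the client hardcodes the SHA-256 fingerprint of the legitimate server certificate, so an interception proxy like Burp, which presents its own certificate, fails the check. The certificate bytes here are placeholders.

```python
# Minimal sketch of certificate pinning. Instead of trusting any certificate
# a CA signed, the client pins the SHA-256 fingerprint of the server's
# certificate and refuses to talk if the presented cert doesn't match.
import hashlib

# Placeholder standing in for the DER bytes of the legitimate server cert.
REAL_SERVER_CERT = b"--placeholder DER bytes of the legitimate certificate--"

# Computed once at build/provisioning time and shipped inside the app.
PINNED_SHA256 = hashlib.sha256(REAL_SERVER_CERT).hexdigest()

def check_pin(presented_der: bytes) -> bool:
    """True only if the presented certificate matches the pinned fingerprint."""
    return hashlib.sha256(presented_der).hexdigest() == PINNED_SHA256
```

In a real client you would feed this the DER bytes from the live TLS session (in Python, `ssl.SSLSocket.getpeercert(binary_form=True)` returns them); a MITM proxy's substitute certificate hashes to a different value and the connection is refused.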
With that code, it's insecure coding. It's insecure programming. We're hardcoding API secrets and tokens in the app. We are not making sure that whoever requests a particular object ID is indeed the person who should be seeing it. These are all just IDs. It's patient/100, patient/101, 102. They're just object IDs. What the API is doing is saying, "Hey, that exists. It's a legitimate request that I understand, and it's being requested, so I'm going to provide it." The only problem is that Alissa is not Chris, and I'm requesting your patient data.
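The coat-check scenario maps directly onto code. Here is a toy sketch of the pattern, with made-up data, not any real API's implementation: the vulnerable handler checks that the caller is authenticated and that the object exists, but never that the caller owns it, which is exactly BOLA.

```python
# Toy patient store keyed by object ID, as in patient/100, patient/101, ...
PATIENTS = {
    "patient/100": {"owner": "alissa", "name": "Alissa"},
    "patient/101": {"owner": "chris",  "name": "Chris"},
}

def get_record_vulnerable(token_user: str, object_id: str) -> dict:
    """Authenticated? Yes (we have token_user). Authorized? Never checked.
    Any valid ID is served to any valid user -- the BOLA flaw."""
    return PATIENTS[object_id]

def get_record_fixed(token_user: str, object_id: str) -> dict:
    """Object-level authorization: the requested ID must belong to the caller."""
    record = PATIENTS[object_id]
    if record["owner"] != token_user:
        raise PermissionError("authenticated, but not authorized for this object")
    return record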
Yeah, I've seen that on websites before. I'll be logging into my account, and the URL is ID4583. I'm like, "What if I make that 84?" And there's somebody else's account information. That's not good.
Yeah. That's a great example of a BOLA vulnerability. It used to be called Insecure Direct Object Reference and they changed the name of it. I think it was OWASP. There's this OWASP API top 10 list and it's the top 10 API vulnerabilities. Guess what number one is? BOLA.
Just changing an ID once you've authenticated through some other mechanism.
It's the easiest hack in the world. To go back to your previous question, it's also a lack of penetration testing by the companies themselves, where they're not eating their own dog food. They're not testing their own APIs, or some of them are outsourcing it. They're trusting that the people they outsource the development to are actually doing security testing. In a lot of cases, clearly, they're not.
I know I had a previous episode, I'm trying to remember who it was. We were talking about pen testing. Having come from an IT background, not by any means a security person, the company hires someone for the purposes of insurance. “Hey, we need to have a pen test done because it will get the insurance rates down.” They did a pen test. I thought, “Well, I'm sure they're going to find stuff because I inherited—who knows what I inherited.”
The stuff that I did was pretty good, but there was stuff that was inherited that I didn't even know existed that they were able to find through the pen test. I'm like, "Oh my gosh, it doesn't matter how good I am—not that I'm that great—if someone else somewhere in the past in some legacy system didn't do it right, and I don't even know about it. How do I protect against that?"
Yeah. Actually, that's some new content that I want to work on here soon. The concept you're referring to is attack surface management. I'm actually going to be doing this for Illusive. A big component of their deception technology is being able to catalog the assets in your environment and know what it is you've got, because you can't protect what you don't know you have. You brought up a great reference right there where that was exactly the case: "Hey, look, I have these assets I didn't know existed that are reachable from the internet."
The same thing with APIs. That happens with APIs where a company will have over 1000 APIs. How is the CISO going to know about every single API—which ones are internet-facing, which ones are serving PII, which ones he or she needs to think about securing? That's attack surface management. It's very important in any defense-in-depth strategy to know what it is you've got in your environment, because the worst breach is the breach of an asset that you weren't patching, that you didn't know existed. That's such an important feature and capability.
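The inventory question Alissa raises—which of a thousand APIs are internet-facing and serving PII—boils down to being able to query an asset catalog. A toy sketch, with an invented schema (the field names are illustrative, not any product's format):

```python
# Toy API inventory for attack surface management. In practice this catalog
# would be discovered and maintained by tooling, not hand-written.
APIS = [
    {"name": "patients-v1",  "internet_facing": True,  "serves_pii": True},
    {"name": "billing-v2",   "internet_facing": True,  "serves_pii": True},
    {"name": "build-status", "internet_facing": False, "serves_pii": False},
]

def needs_priority_review(inventory: list) -> list:
    """The CISO's short list: reachable from the internet AND serving PII."""
    return [a["name"] for a in inventory
            if a["internet_facing"] and a["serves_pii"]]
```

The point is less the code than the capability: until every API is in the catalog with those two attributes filled in, the worst-case asset is the one missing from the list.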
I think, as consumers, we should just totally freak out then.
That's the thing. Unfortunately, you're not the first person who asked me that question because it's not like you can go to your local Best Buy and say, “Hey, Kim, do you have any API security products? What aisle is that?” Unfortunately, we as consumers need to rely on the automakers to make more secure cars, the health care companies to write more secure code. We just don't have any…that’s out of our hands. It's unfortunate.
That's the point of my research—is to wake big corporate America up and say, “Hey look, we need to be doing better.” In all sincerity, for me, I think that's why this is so important because when things like this go unchecked, like Capitol Hill passing federal rules and laws and regulations around a lot of things that we don't fully understand, in many cases don't even know how to secure, this is when you run into problems like this.
That's the difficulty, I think for consumers is, for one, we don't have the knowledge of what's happening, and then we don't have any power over it. I mean, in theory, they say your power is your wallet. You could take your money and go somewhere else. Well, I don't know that health provider B is going to be any better about maintaining my patient records. How do I know? The CISO is not going to sit down with me and explain all their security tactics.
Yeah. It's true. I mean, I think that's also the thing that gets me up in the morning too. I need to be making a difference. If you look at a lot of the vulnerability research I've done, it's very labyrinthine work. It's not like you can throw a rock and hit a pentester that knows how to hack a car or hack APIs. Even hackers are switching their attention over to APIs because they know that's where the data is. It's not something you can immediately pick up and know how to do.
You have to understand a lot of the things you're looking at, like JSON. You have to understand a lot of the tools because there are different tools. It’s not like you can load up a Metasploit scanner, auxiliary scanner, and do this stuff against APIs. A lot of the time, if you look at my videos, I am in Postman—an API client—or I'm in Burp Suite. You have to know how to use the tool and you have to know what you're doing.
You mentioned Burp Suite. Was that Jake? I had one of the guys who built Burp Suite on, and now I feel horrible that I can't remember his name.
I'm horrible with names, so don’t beat yourself up about it.
PortSwigger. Yes, I had one of the guys from PortSwigger on. Hey, I recognize Burp Suite.
It's a great tool. I actually used to use Postman all the time. Then I was like, you know what? I think it's time to learn Burp Suite. They have this awesome interceptor that you can use as a proxy. The neat thing is it will actually decrypt the SSL traffic for you. It presents a certificate in both directions and decrypts that traffic. It's really cool. I used to use a command-line tool called SSL MITM. Now I can do everything within one single tool, whereas before it was Postman and SSL MITM. I can do all that in a single tool with Burp Suite. It's pretty neat.
I sit there and I say, “Wow, it's great that these tools exist for people who are using them for good.”
Right, but what about the evildoers?
Then there are evildoers out there that are taking these great tools and using them for their own nefarious purposes.
Yeah. I mean, if you think about it, as we've seen, an airplane can be used for good and an airplane can also be used for bad. It's like that whole religious debate over guns. It's not guns that kill people, it's people that kill people—depending on what side of the debate you follow. I think exploits and these tools are important because we as defenders need them. Unfortunately, when they are in the wrong hands, they can also be used for the wrong reasons.
Yeah. I remember I was talking with Troy Hunt. One of the things that we were talking about was always questioning why you're providing information. If someone's asking you for something, really think, "Do they really need that?" Rather than just filling out the form and giving them everything they ask for: "Is this optional? Can I not give you this information?" Really try to have some sense of minimizing our footprint when we can. Obviously, there are situations where we can't.
Right. It's interesting that you bring that up, Chris. Actually, because a lot of the APIs have this problem where I'll notice that if you look at the API client, it may only display three fields out of the record that you're requesting. But if you intercept the traffic and look at it, you can see that the API is providing everything. They're leaving it up to the developer to filter through all the data that they're sending back and show only the fields that you are wanting to show.
This leads to many problems, obviously, because if you're providing everything on that individual in the database and you're leaving it up to the client to filter out the results, you have this woman-in-the-middle attack problem, as I like to call it. What happens if somebody sitting in the middle can see all of that?
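The over-fetching pattern Alissa describes is easy to see in a toy sketch. The record and field names here are invented for illustration: the vulnerable responder sends the whole record and trusts the client UI to hide fields, while the fixed one does the projection server-side so sensitive fields never cross the wire.

```python
# Toy record; in the real cases discussed, this would be a patient row.
RECORD = {"name": "Chris", "email": "c@example.com",
          "ssn": "000-00-0000", "diagnosis": "redacted"}

def respond_vulnerable(requested_fields: list) -> dict:
    """Returns everything regardless of what was asked for; anyone who can
    intercept the traffic sees the full record, not just the displayed fields."""
    return dict(RECORD)

def respond_filtered(requested_fields: list) -> dict:
    """Server-side projection: only the requested fields leave the server."""
    return {k: RECORD[k] for k in requested_fields if k in RECORD}
```

With the vulnerable responder, a client that only renders `name` still receives `ssn` and `diagnosis` over the wire; with the filtered one, they are never sent.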
I was wondering from like, okay, so with my website, I constantly have to think about bandwidth. It's not that any one particular page view is a lot of traffic, but you get millions and millions of page views. It becomes a huge amount of traffic and saving a few bytes here and a few bytes there could make a difference on bills when they get lots of traffic. I think that if I'm building an API, why am I shoving all the data when the person really only wants three fields? Obviously, there's a security risk. But there's just the cost of maintaining all this bandwidth, pushing all this data, slowing it down for everybody else.
Throwing a Volkswagen at a nail when a hammer will do.
Yeah. If someone didn't ask for the whole medical history, why are you giving the whole medical history? There's no conservation there, so to speak.
It feels like I'm beating up on developers a lot right now, which I'm trying not to. It's really nothing personal. I feel like it's lazy programming. I think it's just, why write all these additional calls when I could just deal with one?
I'll come to their defense and say, "I'm just providing flexibility for those who are using my API."
There it goes, forward compatibility.
Yes, forward compatibility, that's the phrase. I suppose there's always reasoning and some of it may be very reasonable, and some of it is maybe not-so-reasonable. Obviously, it is more difficult on both ends if you're saying, “I'm only going to specify which fields I want and you're only going to send me those fields.” Both people have to do more work because of that.
Right. The other thing is, it becomes more dangerous when it's fields from other object IDs, other records if you will, where you're expecting the client to also filter down to the particular user's data that you're wanting to show them. I'm in the middle of FHIR research right now. I think a lot of the findings are going to show that this is an endemic problem across all these APIs.
I think it's going to be a situation where they don't know because anyone can log into these companies and build an API client that reaches into their data and talks to these FHIR APIs and requests it. They don't know necessarily what they're going to need or want access to. Let's provide them everything and they can filter out the rest. I think that's what's going to end up happening. We'll see.
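For what it's worth, FHIR itself gives clients a way to ask for less: the `_elements` search result parameter lets a request name just the top-level elements it needs. A small sketch of constructing such a request—the server URL is hypothetical, and whether a given server honors the parameter varies by implementation:

```python
# Build a FHIR search URL that requests only the `name` and `birthDate`
# elements of Patient resources, using the standard `_elements` parameter.
from urllib.parse import urlencode

base = "https://fhir.example.com/Patient"   # hypothetical FHIR server
query = urlencode({"_elements": "name,birthDate"})
url = f"{base}?{query}"
```

Of course, `_elements` only limits what a well-behaved server returns to a well-behaved client; it does nothing about the authorization problems discussed earlier, and a server still has to enforce the subset itself rather than trust the client.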
Yeah. I think it's that fundamental shift that is in the process of happening. The internet was designed on the assumption that everybody trusts everybody. The first mail servers: "Hi, I'm legitimate mail." OK. It has a from address. OK.
See, that's a whole other show. That's a whole other episode where we can go down that rabbit hole of zero trust. I honestly think shame on us from the very beginning for thinking the internet was clean and trusted. Shame on us for trusting every employee just because they work for us. I think zero trust should have been a foundational building block from the very beginning, because there is no inside and outside, within the castle walls and outside the castle walls.
There is no wall. That’ll be the new meme.
There's not even a castle anymore. Now with the pandemic, the castle is gone with a lot of companies. There are companies that are just in a permanent work-at-home environment, and how do you trust that user's home network? Come on, you’ve got to be kidding me.
For me, the interesting thing is how we've evolved to this place where we can't even trust the users, can't trust the devices, right? It's zero trust extending to ZT for users, ZT for network, ZT for devices, and it's everything. We have to construct around the idea that nothing at all can be trusted. It's a very exciting time. I think it's going to get even more interesting.
There's a lot of really interesting tech coming out of Israel. I want to say the new president, the new administration, talked about this: how America really isn't innovating anymore. I'm going to sound anti-American here, and people will be like, "Oh, my God, that Alissa Knight, she's anti-American too." But I feel like it's true, because there was a time there where there really wasn't any new innovation. You would go to the RSA conference or you'd go to Black Hat briefings and it was like lipstick on a pig, where everything "new" was really just features. We were getting features mixed up with innovation and invention.
Unfortunately, innovation is now coming from outside of Silicon Valley. It's coming from Tel Aviv. I think Tel Aviv is the new Silicon Valley. It is. There are incredibly brilliant people out there making some really great things. A lot of these cybersecurity vendors have obviously contacted me to do content with them. I mean, there's some stuff that's coming that's just mind-blowing, and it didn't come out of San Jose, California. It came out of Tel Aviv.
I think we live in a very exciting time, and I think research like this is going to be important because CISOs need to know exactly what it is that's out there, how well it works, and how well it stands up to a real live attack. Because you don't know until it actually happens, right? Before adversarial content, you had to wait to see how your product stood up to an APT, and it was trial by fire. No pun intended. Oh, that's another pun I can use.
Trial by fire.
Trial by fire. Love it.
So let's wrap up here because we're coming up on almost an hour now. I think we could probably keep talking for a couple of hours.
If people want to see your research, see what you're doing, where can they find that?
So please definitely subscribe to my YouTube channel—check out the playlist and the videos. I am also on LinkedIn—definitely reach out to me on LinkedIn, and you can follow me on Twitter. Believe it or not, I also have an Instagram. A lot of it is pictures of me eating food or my food, but no, Twitter, LinkedIn, and YouTube. I publish a new video and I live stream every week.
Awesome. All those will be linked in the show notes so people can find you without having to try to figure out how to spell things because that makes it easy for people.
Yeah, everyone spells my name with a Y.
No matter how people spell a name, they're always going to spell it wrong. There's always a variant.
That is true because you can even misspell John. You can misspell Chris.
I know, and people often do, which really surprises me. Alissa, thank you so much for coming on the podcast today.
It was a pleasure. Thank you, Chris. I look forward to coming back, hopefully.