The Impact of AI on Scamming with Ran Levi

“AI has tremendous potential and we haven’t seen its real potential yet in cybersecurity.” - Ran Levi

The thought of your computer getting a virus implies something biological, but the fear of infection is real. Are there prevention techniques you need to be implementing now? Today’s guest is Ran Levi. Ran has been podcasting since 2007, starting with a podcast about history and technology. He is the co-founder of PI Media LTD, which helps other people produce podcasts. He is also the editor and host of the popular podcast Malicious Life, which tells stories from the history of cybersecurity with commentary from hackers, security experts, journalists, and politicians.

“Being ignorant is inviting someone to scam you.” - Ran Levi

Show Notes:

“Social media isn’t evil. But the scammers are using organizations like Facebook and others to help them gather the data they need to scam you. And we leave lots of traces behind us online.” - Ran Levi

Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review. 

Links and Resources:

Transcript:

Ran, thank you so much for coming on the Easy Prey Podcast today.

My pleasure. Thank you for inviting me.

Can you tell me and the audience a little bit about yourself and how you launched your podcast?

I’m a podcaster from Israel. My name is Ran Levi. About 15, coming up on 16, years ago, I started a podcast in Israel about history and technology. I was an engineer by trade for about 15 years, but writing and spreading knowledge has always been my passion, from a young age.

I started a podcast back when nobody knew what a podcast was, just because it was fun to do. It took off, and about six years ago, I quit my day job as an engineer and started a company producing podcasts, back when the podcasting world was virtually unknown in Israel.

About a year after I started my company, I got a phone call from Cybereason, an Israeli-American cybersecurity company. Their founders had been listening to my history show in Hebrew in Israel. They asked me if I could create a show in the same narrative style about cybersecurity. And what do you know? Five years earlier, I had written my third book about the history of cybersecurity, which was one of my favorite subjects because it blends technology and psychology, two of my favorite topics.

I said to them, “You know what? Yeah, let’s try,” because I had some reservations about whether I’d be able to do it in the kind of professional way I wanted to do my podcast. I’m a perfectionist. That’s a curse and a blessing, always.

Quite early on, I realized that my accent wasn’t good enough back then, so I spent some three or even four months improving my accent, practicing each day in front of the computer, saying tongue twisters and all that kind of stuff. Once I thought my accent had improved in a measurable way, we started the show. It’s been one hell of a ride, as they say.

Within, I think, half a year or a year, it rocketed, and I think that last month we were number nine in iTunes’ technology podcast category, which, for an Israeli guy whose English is a second language, is like playing in the NBA, I think.

That is definitely an accomplishment.

Thank you.

Can you tell me more about why the psychology and technology element is interesting to you?

Sure. As I said, it started with my book. I had already written two books about the history of science and technology on various topics, and I was looking for an interesting topic for the third book. Whether I’m writing books or podcasts or whatever, I’m always looking for great, great stories.

Stories are always a crucial element in any good episode. There’s the knowledge that you want to give the audience that they can use or enjoy and learn something new, but there has to be an interesting story involved.

I was trying to think of what fields in technology, what kinds of ideas, have great stories in them. I started looking in different directions, just trying to find ideas. Then I came across, I think it was, Security Now with Steve Gibson, who I think you interviewed for this podcast as well a few months back.

Yes, I have. He’s a great guy, and I found out he lives right down the street from me.

Amazing. I started listening to Security Now, and I was hooked. As a programmer and a hardware engineer, I knew a bit about cybersecurity and how to write secure code and stuff, but really only the basics. I had no idea about the history of cybersecurity.

As a kid, copying and pirating games in Israel back in the eighties and 90s was quite, I would say, normal, because we didn’t have a good software industry back then. Of course, I got lots of viruses back then, the Ping-Pong virus, all sorts of stuff. I think the worst one was probably Sasser, which tried to shut down your computer every 60 seconds and was a hassle to debug. Rebooting the machine like a hundred times in a row, something like that.

After listening to Security Now, I said to myself, “This is a very interesting topic. I should really dive into it.” Once I started learning the history and the ideas behind cybersecurity, I was even more hooked than I was before. 

In the cybersecurity business, we’re kind of used to the idea of autonomous software running on our machines, doing stuff that we can’t control. But I think for the public at large, that’s a very weird idea. It’s a machine. What do you mean it does what it wants?

Back then, when I was talking to people about viruses and worms and stuff, it blew many people’s minds that software can replicate itself, that it can do stuff autonomously, that it can resist attempts at debugging it or reverse engineering it, et cetera.

There is a kind of dread in the general public about the idea that something can infect your computer while you have no idea what it does there. That mindset is at the heart of a lot of cybersecurity stories. You have a machine that you interact with, an important part of everybody’s lives. It’s in our phones, our desktops at work, at home, yet something can happen which makes that machine turn against you in a way. There’s a very basic fear underlying the whole industry.

A little bit of the fear of the unknown.

Exactly, of something “living” in your computer, something that is almost alive. It’s a virus. Is it a real virus? People who are not in the industry sometimes don’t understand the basic ideas that we’re talking about, because when you use the word “virus,” it has a connotation of biological creatures. Worms, too. The words we use to describe things sometimes hint at something biological in nature, which it’s not, but the ideas are very, very interesting.

I started digging into the deep past of cybersecurity, back into the 50s and 60s. I interviewed the people who wrote the very first viruses that were discovered, and the people who wrote the very first antiviruses, back when people were amazed by all this.

One of the most interesting things that I discovered in my research involves Scientific American, the magazine. They had a very famous column about math puzzles, I think on one of the last pages of every issue. At one point, somebody sent a letter to the editor saying something about how he had discovered a way to create artificial life inside a computer.

We’re talking about 1978, I think, or 1979. Really, really early. Or maybe it was the early 80s, I forget. In the next column, the writer described the idea of creating artificial life and how it could be done. Then, in the following months, he got a deluge of letters from readers, each one describing how he had reinvented a virus, because the idea of creating self-replicating software seems very hard but in essence is not that difficult.

People have rediscovered how to create computer viruses at least a few hundred times in the past 40 or 50 years. And some of the people who discovered the idea were so afraid of it that they felt they had created a monster.

There’s an interesting story about it, a letter that a couple of Italian guys sent that magazine, about how they wrote a virus, understood it could spread out of control, and so burned the floppy disks and dismantled the computer so that no one would ever discover the idea again. Of course, it was probably discovered in parallel by a hundred more people at the same time.

It’s a very interesting topic. The psychology, the questions of trust, of privacy, of the fear of sensitive information leaking, or the greed of people who try to use cyberattacks to earn money and stuff.

Ransomware is a great example of using fear in the user interface. When software developers create interfaces, they usually try to make them nice, to make the software pleasurable to use. But when you think about ransomware, when they encrypt the user’s computer, they always display these scary images. Sometimes there are even weird noises and stuff, because they want to make you so afraid that you end up paying the ransom. Computers and emotions are a very interesting intersection.

And these days, software engineers have definitely learned how to manipulate our emotions to a great degree.

Exactly.

What have been some of your favorite topics, or stories, or episodes to tell?

It’s very difficult to decide.

They’re all your children.

Exactly. Probably one of my favorites was the Ashley Madison hack. Sex, money, and fear, that’s basically a great story. The right elements kind of joined together. Some of the stories I love are, I think, not very well-known.

For example, there’s the story of Blue Frog. Blue Frog was an Israeli company, I think, back in the early 2000s, which tried to fight spam by spamming the spammers, returning the favor, as they say.

It was an interesting story, technology-wise, how they started, et cetera, except one of the spammers was probably hit so hard that he decided he would end the company, literally. So he started harassing them. We’re talking about a Russian spammer who started harassing the Israeli founders in Israel by sending them what we think were private detectives, or hitmen, or whatever it was.

It’s a very scary story. It starts out fun and gets darker and darker. That’s why I love it so much. I love dark stories.

“Anonymity” is kind of one of those benefits of the Internet. I’m going to put anonymity in quotes because…probably not. 

Nobody’s really anonymous, yeah. 

But I think everybody’s fear is like, “Oh, I can say things online that I would never be able to say in person, or do things online that I’d never do in person.” As soon as someone shows up at your front door, it becomes very real very quickly.

Exactly. That’s why I love what Brian Krebs does on his website, on his blog, Krebs on Security. Every other week or something like that, he publishes research he’s done unmasking or doxxing spammers and hackers just from silly mistakes that they make, like registering different services with the same email. And you think to yourself, “These guys know the price they’re going to pay and they still make mistakes.”

It makes you think about regular people. Everybody makes mistakes online. If the spammers and hackers are making mistakes, what chance do regular people have? 

Is there a favorite guest you’ve interviewed?

We interviewed a lot of people over the years, probably hundreds. Even on a personal level, I interviewed Steve Wozniak, Apple’s co-founder. It was at a cybersecurity conference. I think it was organized by […] itself back then, and I had the pleasure of interviewing him onstage, talking about how Apple thought about security back in the 70s. Spoiler: they didn’t.

Nobody did.

Exactly. Wozniak was a prankster. That’s what he does. He pranked the pope. He pranked everybody. Some of his pranks kind of crossed the line into virus territory. He actually wrote a virus for the Apple II computer. Like everybody else, he understood what was going on, so he just deleted it and never talked about it.

It was very interesting to speak to the man. He’s one of my heroes as an engineer. He’s an absolute hero for me.

We had lots of interesting guests; it’s so difficult. There was […] Industries. They were a gang that operated in the 90s, some of the very, very early cybersecurity pioneers, and I got to interview most of them at some point. It was very interesting to learn how things were going in the early 90s. Nobody knew what they were doing, and when they discovered flaws in Microsoft’s Windows back then, Microsoft totally ignored them. These grassroots stories are always very interesting.

There were no bug bounty programs, no responsible disclosure programs.

Plus their stories are very interesting because at some point, they decided to go the commercial route. They started as hackers for fun, and at some point, they decided to sell the company and they became corporate, which kind of ruined everything for everybody. Some people were more commercial, doing more commercial stuff. Some people still held to the ethics of white-hat hackers who work for fun. These things never go well together, so they had to fire some of their old friends from the new company.

It ended on a sour note, but it’s fun to see that 20 years after that, they’re still friends. They talk about amazing incidents that they had, like when they had to testify in Congress about a certain incident, about how insecure the Internet is, et cetera, and they took a wrong turn. They had a big pickup truck or something, a big van I think, took the wrong turn, and ended up at the NSA’s headquarters for some reason, in a van full of electronic equipment and antennas. They’re very lucky to still be out of jail. It’s a fun story.

I’ll definitely have to listen to that one. Was there any incident you did an episode on that was the most alarming? Like, “Oh my gosh. I can’t believe that this was compromised, or how close the world was to ending”?

Interesting question. I don’t know if there’s a single story. The Equifax affair was very alarming. I’m not a US citizen, so I don’t have a credit history or a credit score. That’s something that is unique, I think, to the States. But knowing that everybody in the US is under some sort of surveillance, that’s basically what Equifax did, gathering information about everyone.

That was very surprising to me, that mass-scale surveillance of this sort is legitimate. Because I’m not familiar with anything like that from Israel, it was a big surprise to me.

Also, I had the pleasure of interviewing some security scientists, people who research all the basic stuff. I like these interviews very much because they give you a glimpse of the future of cybersecurity. Not everything comes around and really becomes a product or whatever.

For example, analyzing human characteristics from the way people write text in consoles, like shell text, and using AI in defense and offense. That’s really scary. I know from other things that I’ve read about AI that it has tremendous potential. I think we still haven’t seen its real potential come to fruition in cybersecurity. Once we do, that will be very interesting.

You can think about the latest generation of AI, like DALL-E 2 and GPT-3. The kind of AI which can really, really imitate human interactions, human text, images, whatever. Imagine communicating with somebody online, thinking that it’s human, and getting robbed or scammed by a machine. That’s something that so far has not been a real problem. You have chatbots, but it’s usually really obvious when you’re talking to a machine. With the current generation of AI, the lines are very, very blurred.

It gets particularly disturbing when you think of just the ability to learn from what does and doesn’t work, and what techniques work. I know there are call centers of scammers in various countries with playbooks. They have meetings about what does and doesn’t work. But it’s still people with foreign accents contacting people, so there’s always a giveaway in that sense. But once you start having AI making those phone calls or sending those emails or texts, it starts getting a little concerning how quickly it can learn to do bad things.

Exactly. I kind of saw it in person, in a way. I think it was last year, one of my listeners wrote to me and said, “I’m working at a company, a startup which works on synthesizing deepfaked voices. Can I train my machine on your voice? Because you’ve got hundreds and hundreds of hours of great audio.” I said, “Yeah, sure. Go ahead. That’s amazing.” And he did.

He sent me 60 seconds of me narrating a text. I mean, the machine narrating in my voice. It wasn’t as good as what you can hear today; let’s say it wouldn’t fool anybody yet. But you could see that we’re so close.

At a certain point I said to myself, “Wow. Within a year or two, I could use this on Malicious Life to narrate an episode or part of an episode, and I’m not sure the listeners would be aware that there’s a machine reading in my voice.”

Think about it. Somebody could do the same with my voice, calling my mother or something: “I need money. Give me money,” or something like that. The technology is there, and that scam will work because nobody can imitate my voice as well as a machine. That’s pretty darn scary.

I think that what potentially helps us prevent that in some sense is how well we know people. It’s one thing for you to be able to have a deepfake narrate something that you’ve written, so it’s not that the AI has written the content and written the episode. It’s still you who’s written the content and the episode. 

Hopefully, that still remains true with our interactions with people so we can kind of know, “Well that’s off. Ran would never say that in that way. Sounds like him, but he would never use that expression. He would never communicate in that form.” Hopefully, it’s a lot longer before AI figures that out.

I don’t have great hopes. I think once they really zoom in on a person, maybe myself, maybe somebody else, they can really, really learn the way that person speaks, their habits. It’s relatively easy, especially when somebody is very present on the Internet. You’ve got Instagram, Twitter, and Facebook. You can automatically learn a lot about someone if you just scrape the right websites. When that happens, I think we’ll probably have lots of episodes we can make for both our podcasts. No danger of that running out.

I’ll prognosticate that the first major case will be a politician.

Interesting, although when it is a deepfake of Biden or Putin or whoever, there are a hundred security or deepfake experts going over each and every bit, looking for the telltale signs of fakes. When it’s me, maybe one of the listeners will be nice and check things out.

But nobody, I think, is going to really think about or care about ordinary people. These people are going to get scammed a lot by AI if nobody takes care. Maybe the only way to solve that is using an AI to fight the scamming AI, an AI that notices there’s something wrong about that voice, that it’s not totally human. Something like that. That’s really science fiction-y stuff.

Maybe we start having Blade Runner-esque interviews with each other at the beginning of our conversations.

Yeah. What did he call it? The Voight-Kampff machine? You have to give me the Voight-Kampff test before the interview just to see that I’m a real human being.

So you’re walking through a desert and there’s a turtle. You turn it over on its back.

Exactly. Science fiction comes true.

I hope our conversations don’t go that way. It’s too disturbing.

We live in interesting times. 

That we do. You’ve done a lot of interviews and you’ve written a lot of stories. What are some of the lessons you’ve learned, some of the themes and takeaways that regular people, us, should be applying in our own lives to mitigate…?

Obviously, we can’t prevent Equifax from being hacked, but there are things that we can do in our own lives to mitigate what happens if Equifax gets hacked. What are some of the things that you think we should be doing?

The first thing I learned is never to trust an organization with my data. I have researched, written about, and told so many stories about organizations being hacked that I feel like I can no longer trust anyone, which kind of makes everybody’s life on the Internet a lot more miserable, because then you have to use different passwords for everything.

Ever since I started writing about cybersecurity, I never answer the kinds of security questions that people ask—what’s your spouse’s name or your first dog or whatever—nor do I give my real birthdate or my kids’ real birthdate. I’ve gotten a lot more paranoid over the years.

I think that’s good paranoia, because before then, like almost everybody else, I used to reuse passwords and use memorable passwords. Working on the book and the show really made me become more cynical, in a sense.

It’s pretty obvious that you can’t really trust any organization, whether it’s a government or a company, from the largest to the smallest, nor do we have to trust them. They’re not our friends. Our data is their livelihood. We need to really think about what we share online.

Also, it’s less fun being online when you’re always on the lookout for stuff like that. I stopped sharing too much of my own personal life online just because of what we talked about. People can really scrape those sites and get a good sense of who I am, where I live, who my family is, all the blind spots that everybody has in their lives.

I’m much more careful now when I’m online than I ever was before. It’s a bit of a shame. Back in the early 90s and 2000s, life was fun. You could catch a virus and maybe it would mess with your hard drive, but if you had a good backup, then OK, everything would be fine. Now, with ransomware and those hacks that expose your data to the world, it’s not fun anymore. When you start really digging deep, you become paranoid in a sense.

Healthy paranoia.

Yeah. Ignorance is bliss for some people. They don’t understand. They don’t know the dangers. Sometimes they’re lucky and nothing happens. My mother probably has no clue about how to handle her stuff online. Luckily, she’s an Israeli, an older woman, and it’s not like in the States. People in the States get scam calls a lot, so it’s really a frequent thing.

In other countries around the world, non-English-speaking countries, that’s not that big of a deal. It will be in the not-so-far future, but so far, people like my mother don’t see the dangers too much because life really is not that dangerous for them. But for someone like me, whose online life really revolves around all the usual suspects, Facebook, Twitter, I’m everywhere where everybody is. For people like us, we can’t be ignorant. Being ignorant is inviting somebody to scam you.

I think it’s one thing to say, “Well, I’m going to avoid companies that I believe are intentionally trying to use my data.” It’s easy to paint Facebook as a villain and say they just exist to vacuum up data, sell it, market it, and match it, but we need to be cautious even beyond companies like that.

What information am I giving the grocery store? Their intent may be a little bit different, but a breach is a breach. It starts getting merged with other data that we don’t want out there. It becomes complicated.

I’m not one of those people who think that Facebook is evil in a sense. I don’t see evilness in corporations. The way I see it, and from my experience, corporations have culture. They have processes. They have people. They don’t have emotions. The corporation itself is a machine.

So I don’t think Facebook is evil. That’s what they do. They take your data, but nobody cares if it’s Ran or Chris or whatever. They don’t usually care that much. But the scammers are using organizations like Facebook and others to help them gather the data they need in order to scam, and we leave lots of traces behind us online. Once you realize that, you can’t really keep browsing the web as you did before. You become much more cautious.

Even offline, I’m a lot more cautious about things. Anytime I’m going somewhere and they have a form to fill out for whatever reason, I can understand why you need this or that piece of information, but I don’t see why you need that one.

I think most of us at least—sheep is not the right word, but if a form is in front of us, we just fill out the form because that’s what we’ve been programmed to do. We don’t think about asking the person who gave us the form, “Hey, do I really need to fill out all of this, or can I just fill out my name and address?” 

And often, they’ll just say, “Oh, all we really need is your name and address. We don’t need anything else.” So it’s about figuring out what information we really need to disclose, to whom and when, and what we don’t need to, and pushing back a little bit on data collection.

Exactly. It’s something that we need to do in modern society. But probably we have no chance.

And as time goes by, unfortunately, there will probably be fewer and fewer fields that are voluntary and more that are mandatory. But I think some companies are starting to get smarter about understanding that users don’t always want to disclose a bunch of information.

If I want to join your website or forum, I should just be able to enter an email address and a password. You shouldn’t have to know a bunch of other stuff about me. I think some companies are starting to get better at saying, “Hey, let’s only ask for what we really need in order to perform our function. We don’t need anything more than that to do what we need to do.”

Correct. I think that for many companies, data is a liability. Having your customers’ data means you need to keep it safe. Ten or 15 years ago, you could store whatever information you wanted, and if there was a breach, well, there was a breach. We couldn’t do anything about it.

Now you can be sued to oblivion, basically, and regulators, like in Europe and in the States, are much more on top of how companies store data and what they are allowed to do with it. So now, data becomes a liability.

Personally, in my business, I try to keep as little information about users as I possibly can. Oftentimes, with people who subscribe or write to me, I just keep their emails, not even their names, because I know that I can get hacked somehow. If somebody really wants to target my company, they’ll hack me. There’s nothing I can really do about it.

It’s like not keeping a gun in your house if you don’t need to. For me, it’s the same kind of feeling. If you need a gun, OK. Keep it. Store it in a safe place. But if you don’t need a gun, that’s a liability, because then a kid could find it somewhere, and there will be lots of disasters. Data starts becoming […].

I had a great conversation with one of the guys at the EFF, the Electronic Frontier Foundation. There was a point in that conversation where I really switched over to thinking of data as a liability. I’ve always tried not to collect stuff that I don’t need, but I didn’t think of it as a liability.

One of the things that I started doing was I stopped retaining website logs. I’ll keep them for a certain amount of time for diagnostic purposes, but then everything just auto-deletes after a certain period, once it’s no longer useful for diagnosis. It’s just one less thing that I have that can either be hacked, or law enforcement comes along and says, “Hey, we want these logs, and we want to know this about your users.” Well, if I don’t have that information, that’s just less of a liability for me.
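
For readers who want to set up something similar, here is a minimal sketch of what that kind of retention policy might look like in practice. It assumes plain-file access logs and a 30-day window; the directory, filename pattern, and window length are hypothetical choices for illustration, not details from the episode.

# retention_sweep.py - delete access logs older than a fixed retention window.
# The directory, filename pattern, and 30-day window are illustrative assumptions.
import time
from pathlib import Path

RETENTION_DAYS = 30
LOG_DIR = Path("/var/log/mysite")  # hypothetical log directory

def sweep(now=None):
    now = now or time.time()
    cutoff = now - RETENTION_DAYS * 24 * 60 * 60
    for log_file in LOG_DIR.glob("access-*.log"):
        # st_mtime is the last-modified time; anything older than the cutoff
        # has aged out of the diagnostic window and is removed.
        if log_file.stat().st_mtime < cutoff:
            log_file.unlink()

if __name__ == "__main__":
    sweep()

Run from cron or any scheduler, a sweep like this keeps only a rolling window of logs, so there is simply less data on hand to be breached or demanded later.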

Exactly. I don’t envy the security guys in companies like Google, Facebook, and stuff. There’s so much information that can be stolen. That’s really scary to be in charge of so much data in those companies.

And really having to figure out who and what should or shouldn’t have access to what data. The larger the organization, the more complex that becomes: should your receptionist or your medical billing assistant be able to access patient history? Well, they need enough to be able to do the billing, but can you anonymize it so they don’t know who the individual is? You start worrying about how much […]

And when you think about it, in every modern company, information is the lifeblood of the company. A company, in the most basic sense, is a bunch of people processing information, usually, or processing physical goods, but even for that they need to process information as well, between different people or different functions in the company.

Once you start needing to sequester that information, you have to ask yourself: who needs the information, and where? Who needs it? Who doesn’t need that kind of information? That really makes it much more difficult for a company to function correctly, because you’re constantly messing around with authorizations, who has access to that information and who doesn’t. It makes life much more difficult once you have lots and lots of data.

Even for companies that are not like Facebook and Google, even for relatively small companies like mine. I don’t give all the employees of the company the same kind of access to information, which means that I spend some part of my time giving access and removing access. It makes life much more difficult now.

Your newly hired receptionist should not have access to HR data.

Exactly.
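
The least-privilege idea being described here can be sketched in a few lines. The roles and permission names below are hypothetical and purely illustrative, not taken from any system mentioned in the episode.

# access_check.py - toy role-based access control: each role is granted only
# the permissions it needs, and everything else is denied by default.
ROLE_PERMISSIONS = {
    "receptionist": {"calendar.read", "visitors.write"},
    "billing_assistant": {"invoices.read", "invoices.write"},  # billing, but no patient history
    "hr_manager": {"hr.read", "hr.write"},
}

def can_access(role, permission):
    # Default-deny: unknown roles and ungranted permissions are refused.
    return permission in ROLE_PERMISSIONS.get(role, set())

assert can_access("hr_manager", "hr.read")
assert not can_access("receptionist", "hr.read")  # the newly hired receptionist is denied

Granting and revoking access then becomes editing entries in one place, which is exactly the kind of administrative overhead described above, but it beats everyone being able to browse everything.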

But back in the old days, everybody had access to everything. People would be peeking around in the company folders to see, “What’s Sally making? Can I figure that out?”

Yeah. I was an electronics engineer at a company in Israel. I started my first day on the job; I was still a student. They didn’t have any real tasks for me to do yet since it was my first day, so I started browsing the network to see which computers were connected, where the servers were, just to get a sense of everything.

At some point, I probably got into a server I shouldn’t have gone into, and somehow every employee in the company, like 5000 people, got a message on their screen from me saying something like, “A test, a test.” Something like that. I was sure I was going to be fired, but nobody fired me. […] discovered who it was. If they did, they never told me.

But nowadays, I wouldn’t be able to do that. I wouldn’t be able to reach some part of the network which I have no access to.

Hopefully, you can’t. 

Hopefully.

Well, as we wrap up here, where can people find your podcast? Where can people find more about you online? Not that they’re snooping, but the stuff that you want people to know.

Malicious Life—the website is malicious.life. They can also find me personally on Twitter @ranlevi. That’s my handle. I also have a personal blog. Most of it is in Hebrew, but there are some articles that I write in English as well. That’s ranlevi.co.il. More like philosophical stuff, scientific musings, and stuff I write about. That’s it, I think.

That’s awesome. Ran, thank you so much for coming on the Easy Prey Podcast today.

It was a pleasure, Chris. Thank you very much.
