AI Supercharges Scams with Brett Winterford

Hosted By Chris Parker

289
“Attackers are leaning on AI to create a lot more synthetic material that is highly targeted to who they’re going after, and it’s imposing far less cost on them.” - Brett Winterford

Cybercriminals are accelerating their attacks in ways that weren’t possible a few years ago. Artificial intelligence is giving them the ability to spin up phishing campaigns, voice clones, and deepfakes in minutes instead of days. As a result, the gap between what’s genuine and what’s fake is closing fast, making it harder for both individuals and organizations to defend themselves.

I’m thrilled to welcome Brett Winterford, Vice President of Okta Threat Intelligence. Brett has had a front row seat to these changes. His team analyzes identity-based attacks and delivers insights to help organizations adapt their defenses. Brett previously served as Okta’s Regional CISO for Asia-Pacific and Japan and started his career as a journalist covering information security before moving into leadership roles in banking, government, and technology.

In this episode, Brett explains how AI is reshaping the speed and scale of cybercrime, why trusted platforms like email, SMS, and collaboration tools are being targeted, and what practical steps can reduce risk. He highlights the growing importance of phishing-resistant authentication methods like passkeys, the need for clearer communication between service providers and users, and the role of collaboration across industries and law enforcement in pushing back against attackers.

“You might be a target not because you have something of value, but because you have access to a capability that attackers need.” - Brett Winterford

Show Notes:

  • [00:00] Brett Winterford introduces himself as Vice President of Okta Threat Intelligence and explains how identity-based threats are monitored.
  • [02:00] He shares his career path from cybersecurity journalist to CISO roles and now to leading threat intelligence.
  • [05:48] Brett compares phishing campaigns of a decade ago with today’s AI-driven ability to launch attacks in minutes.
  • [08:00] He notes how reconnaissance and lure creation have become easier with artificial intelligence.
  • [10:40] Brett describes the shift from banking malware to generic infostealers that sell stolen credentials.
  • [12:30] He explains how cryptocurrency changed the targeting of attacks by offering higher payouts.
  • [14:21] We learn about the Poison Seed campaign that used compromised bulk email accounts to spread phishing.
  • [15:26] Brett highlights the rise of SMS and other trusted communication channels as phishing delivery methods.
  • [16:04] He explains how attackers exploit platforms like Microsoft Teams and Slack to bypass traditional defenses.
  • [18:30] Brett details a Slack-based campaign where attackers impersonated a CEO and smuggled phishing links.
  • [22:41] He warns that generative AI has erased many of the old “red flags” that once signaled a scam.
  • [23:01] Brett advises consumers to focus on top-level domains, official apps, and intent of requests to detect phishing.
  • [26:06] He stresses why organizations should adopt passkeys, even though adoption can be challenging.
  • [27:22] Brett points out that passkeys offer faster, more secure logins compared to traditional passwords.
  • [28:31] He explains how attackers increasingly rely on SMS, WhatsApp, and social platforms instead of email.
  • [31:00] Brett discusses voice cloning scams targeting both individuals and corporate staff.
  • [32:30] He warns about deepfake video being used in fraud schemes, including North Korean IT worker scams.
  • [34:59] Brett explains why traditional media-specific red flags are less useful and critical thinking is essential.
  • [37:15] He emphasizes the need for service providers to create trusted communication channels for verification.
  • [39:29] Brett talks about the difficulty of convincing users to reset credentials during real incidents.
  • [41:00] He reflects on how attackers adapt quickly and why organizations must raise the cost of attacks.
  • [44:18] Brett highlights the importance of cross-industry collaboration with groups like Interpol and Europol.
  • [45:24] He directs listeners to Okta’s newsroom for resources on threat intelligence and recent campaigns.
  • [47:00] Brett advises consumers to experiment with passkeys and use official apps to reduce risk.
  • [48:00] He closes by stressing the importance of having a trusted, in-app channel for security communications.
“Reconnaissance using artificial intelligence is dramatically easier for them, and not only is AI being used to create more compelling lures, it’s also being used to stand up the phishing kits in seconds.” - Brett Winterford

Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review. 

Links and Resources:

Transcript:

Brett, thank you so much for coming on the podcast today.

No worries, Chris. Thank you very much for hosting me.

You’re welcome. Can you give myself and the audience a little bit of background about who you are and what you do?

My name is Brett Winterford. I’m the vice president of Okta Threat Intelligence. Okta is, as you probably know, the world’s identity company. What we focus on is helping folks in the workplace securely access all of the applications and data that they need to do their job, and then in the customer world, when you sign into a lot of customer services, you’re often using our technology behind the scenes.

It’s delivered as a cloud service, naturally, so we’re seeing a lot of the world’s authentication events—that is, people verifying their identity to access something—and that is the front door for a lot of cyber attacks. We have access to pretty extraordinary data that our customers give us consent to use in certain situations to, I guess, assess what threat actors are doing when they’re going after a particular entity, when they’re going after a particular set of targets.

We study a lot of things like phishing attacks and other forms of social engineering, large-scale credential stuffing and password spray attacks, and other attacks that involve fraudulent identities.

One of my jobs is to make sure that we take those observations, make sure that our customers learn from it, adjust the configuration of their systems and security awareness programs for their staff, to make sure that they are best protected against the latest hijinks that threat actors are up to.

Is this something you’ve always wanted to do, or did somewhere along the line someone tap your shoulder and say, “We need you to do this or you lose your job”? What’s the story of how you got to where you are?

It’s a funny confluence of things that happened. I used to write about cybersecurity as a journalist similar to yourself, Chris, for many years here in Australia. I edited magazines and wrote columns for our major newspapers, all about cybersecurity before it was called cybersecurity—used to just be called information security.

Eventually, I went into the field and started working in cybersecurity teams internally at companies. At one of the largest banks here in the region, I worked in their cybersecurity team doing all kinds of things like advising the government on cyber policy, working on security awareness programs, funding undergraduate degrees in cybersecurity at universities, all these things.

Eventually I found my way through the cybersecurity field to the point where I was regional chief security officer for Okta in our region until recently. Jumping back into threat intelligence was an opportunity where I felt that our level of observation and some of the skills of our analysts weren’t fully being realized in terms of outcomes for customers. So I wanted to seize that opportunity and make sure that we were maximizing the value of what we could observe.

Is working in threat intelligence the kind of thing that keeps you up at night? Do you get the phone calls in the middle of the night, in the middle of the day where you’re 24/7?

I think it’s less stressful than being the chief security officer, for sure. Anyone who works as a chief security officer does get the phone calls that result in many days, or sometimes weeks, of lacking sleep.

In my case, it’s really only the time zone problem that results in me losing sleep. It’s about being comfortable with being able to release this information in a timely manner so that people can protect themselves before the campaign is through.

In other words, when a threat actor sets up infrastructure these days, it doesn’t take them very much investment or effort. If you’re going to try and disrupt that activity, you don’t have a lot of time to get the message out to the relevant people to help protect them.

That’s why it’s great that there are podcasts such as yours, for example, that take people back to base principles because you can’t always be reactive at the end of the day. You have to have an underlying level of knowledge, a set of capabilities, that are going to protect you, because otherwise, the creativity of threat actors and their ability to adapt to the defenses that are put in front of them is pretty remarkable.

If you were to think back, let’s say, five or 10 years, you were talking about the short time that you have to brief customers and make them aware of campaigns. How has that changed from five or 10 years ago?

I would’ve said that five or 10 years ago, that configuring the infrastructure behind something like a phishing campaign that’s targeted at a particular entity would have been one of the constraints on an attacker. Potentially some of the reconnaissance work they needed to do, too, to really understand what is it that Chris Parker is going to respond to? How can I be sure that I know enough about Chris that when I go after his organization in a really targeted way and I go after him, that I’m going to maximize my chances of success and minimize my chances of being caught?

These days, the ability to rapidly create targeted phishing infrastructure is pretty phenomenal and it’s powered by artificial intelligence, we’ve discovered, and the ability to also create infrastructure that other attackers can use and rent from you is very common now. There is a real specialization of tasks in the cyber crime industry where everyone has their job, they excel at their job, and collectively they’re very efficient.

That is no longer a constraint for an attacker now. Reconnaissance using artificial intelligence is dramatically easier for them. Not only is AI being used to create more compelling lures and lures in different languages, but it’s also being used to actually stand up the front-end infrastructure of the phishing kits in seconds. It literally takes seconds, and everything we’ve done to enable efficiency in the digital world is something that can be abused by threat actors.

Where was that a few years ago? How much time did it take attackers and the scammers to spin up their infrastructure? Is it days, weeks, months, years to launch an attack? And is it now minutes?

Days, and now minutes to hours. It’s not that it was really, really hard before, but the change is doing it at scale while still being highly targeted. In other words, I want to go after lots of organizations, but I want every single phishing page to look exactly like what that user would typically sign into, for example.

I want the lure that we send to that particular target to really appeal to them in some way or be very contextual to their daily experience. The ability to do that now, highly targeted, highly effective, but still at large scale is what’s changed. It’s gone from, as you say, days down to minutes.

To me, the old-style phishing scam might be: it’s near Christmas or any holiday, depending on the country, and we’re going to send lots of emails pretending to be the local variation of Amazon or whatever the big retailer is, saying a fraudulent order’s been placed. And oh my gosh, everyone has placed an order, so they’re either expecting a package or they’re alarmed that their account got compromised. If we just send it out to a hundred million people, we’re going to catch some.

Now, the more I hear about these scalable, highly targeted campaigns, what used to be spending hours and hours building a campaign to spear-phish a particular CEO, is that basically what they’re doing for everybody now?

Yeah. It’s a lot easier to be highly targeted. There are still a lot of large-scale events out there, but the nature of the cash out is different. Also, the ability to spin up new infrastructure quickly as existing infrastructure is identified means that you can have a perpetual campaign using different infrastructure over long periods of time.

It isn’t probably as seasonal as it used to be, to your point. One of the most profound changes: if I think back earlier in my career to certain East European groups that focused on banking malware—that is, malware specifically designed to compromise a user’s device and redirect payments from their online bank account—we saw over time that a lot of the world’s largest banks got a lot better at detecting that malicious activity.

A lot of those crews moved on to generic info stealer malware. In other words, just trying to infect lots of the world’s devices with generic malware, that just steals passwords and session tokens out of the browser, then reselling those compromised accounts onto attackers of all stripes who had all kinds of different motivations.

That was a really interesting change for me, that some of the world’s most capable actors used to be just focused on your online bank account. Now, they’re focused generally on just stealing credentials from your device to compromise a person in all kinds of different ways. There are so many online services that we use now that are very critical to our lives compared to 10 or 15 years ago.

I think the rise of cryptocurrency also changed the targeting a lot because now attackers can focus on holders of large cryptocurrency holdings and get a far bigger payday than when they were going after individuals’ bank accounts in the past.

It was funny. I recently got a support ticket from someone saying, “Do you think this is a scam?” And they had gotten a text message saying that their Coinbase account had been compromised and someone was using it to send or receive cryptocurrency with some crime syndicate.

This poor person panics and calls the phone number that was in the text message and starts going back. At some point, they realize something seems off here. “Why do they email me? I don’t know, but they emailed me.” So the first thing is, “Well, do you even have a Coinbase account?” And they’re like, “No.” I’m like, “Why did you respond?”

It’s interesting that people respond to crypto stuff even if they don’t have any crypto accounts. Within a week of hearing about that, I got that text message as well. It was like, “Here’s the case number. We need you to call Coinbase security. Here’s the telephone number.” I’m like, “Oh, OK. If I had a Coinbase account, that might look reasonably convincing.”

It sounds like some campaigns that we investigated a few months ago. One particular campaign you can read about publicly was called the Poison Seed campaign, the name that a different threat intelligence group gave to it. What was super interesting there was the adversary was aware of a lot of targets that did have a lot of cryptocurrency. It sounds like they also made some errors and sent it to people that had no holdings of cryptocurrency.

What was interesting there is you’re starting to see that the total number of phishing campaigns that land in inboxes is actually stabilizing at a very, very high plateau; for the last 12 months or so, around about the same number of phishing campaigns has landed in inboxes around the world. What’s super interesting is as email authentication has improved over time, the attackers are finding that they need to compromise a trusted sender in order to deliver their scams.

For that particular campaign, there was a previous set of activity where those threat actors went after anyone who had access to a bulk email account, so an account with a SendGrid or Amalga or one of these services where you can send emails to thousands of people and it’s coming from a trusted sender. Once they’ve compromised all of those accounts, then they could send the messages that you’re referring to. Also, sending obviously messages over SMS is a way that a lot of attackers have found that they can evade analysis from the email security firms in particular.

Super interesting campaign in that now you’re a target, not just because you have something of value to an attacker, but you’re also a target as an individual if you have access to a capability that they need to be able to phish other users.

Let’s geek out for a moment here. What you’re talking about is having authenticated sender platforms, people that have a website and it needs to send out notifications when people subscribe to things or newsletters and things like that. Those platforms are set up with SPF, DKIM, domain keys and all those things. My experience is I get a ton of email that doesn’t pass those things regardless. Does that really result in better deliverability for them?

It does. Over time, we are gradually seeing major web mail platforms introduce controls where it should be harder to get malicious messages into your inbox.

I know that Google and Yahoo started insisting on bulk senders registering and authenticating their mail some time ago. Microsoft’s been a little slower, but I’m pretty sure in the last few months they have also introduced some of those controls. Obviously, they’ve always been around in the corporate world.

We’ve been using SPF, DKIM, et cetera for forever, but as that starts to roll out to the major consumer email platforms, it does become a consideration around deliverability for threat actors. It’s interesting. They have to optimize their email campaigns just as marketers do.

To your point, for any of your audience, you might be running a company where you think, “Why would I be part of a supply chain of an attack? I’m running a mowing service.” But the fact is, if you have access to a bulk email capability that has good deliverability rates, because you do have a great relationship with all of your customers, then your access to that account is of interest to threat actors.

Are they trying to go after that business’s customers, or are they just trying to use that to send an email to anybody and everybody?

Typically, they have their own. In my experience with these particular campaigns, they had their own downstream targets in mind, so not necessarily that particular customer base, no.

Are you seeing where people are trying to exploit a company’s infrastructure to go after the company’s own customers?

Certainly. More so in the tech sector, for example. That is far more prevalent. The point I was making earlier was you could be any generic industry and your capabilities are still of interest to a threat actor. But certainly, yes. Supply chain attacks abuse trusted messaging platforms—very, very prevalent in the tech sector. Any capability that a SaaS company has, for example, to send notifications, is something that threat actors will target.

To give you an example, for the last few years we’ve seen a lot of campaigns where attackers have abused Microsoft Teams as a platform, because there is a capability to send messages in Teams from one tenant to another.

An attacker controls a tenant; sometimes they will take over the tenant of a small business that they’ve managed to compromise through password spray or some other technique. If the ultimate target they’re going after allows for external connections between those two Teams tenants, they can then deliver their phishing lure via a Teams message, which is going to be subject to less scrutiny than email.

We had a really interesting attack recently that one of our customers brought to our attention. We, first of all, identified some of the malicious domains being stood up. We notified our customer and they tracked this campaign with us. It was super interesting what the threat actors were doing.

What they’d done is they’d set up an attacker-controlled Slack instance, Slack being similar to Microsoft Teams. They had created this Slack instance to appear as though it was set up by the CEO of the firm they were targeting.

They pulled the profile information, the profile photo, the title, the name, et cetera. It appeared as though your CEO was inviting you to this new Slack channel because you’ve performed really well; maybe you’re going to go to President’s Club because you sold the most widgets, whatever it is. Some enticement to go and join this particular Slack tenant.

What was super interesting about these attacks was you didn’t necessarily have to fall for it. You didn’t have to accept the invitation to the Slack tenant to be delivered a phishing lure. What they managed to do was find a way to smuggle in phishing links that were obfuscated. They looked like they were directed to one particular benign site, but they were actually redirecting to a malicious phishing site. They managed to smuggle those into the notifications sent to the user before they’d even accepted the invitation.

In other words, when you invite someone to join a Slack tenant, that identity is created in Slack, and then you can send a DM (direct message) to that individual who hasn’t even yet joined the Slack tenant, and it will send it to their email inbox. Again, now that sender is a legitimate Slack notifications address. Whatever the domain is, it’s not something that your corporate IT team is necessarily going to block, because there are legitimate notifications from that domain that are going to come to your email inbox.

So a very clever way of getting around, again, some of the authentication methods that security teams rely on, and some of the signals that users rely on when they’re trying to detect malicious activity.

The other thing we loved about tracking this campaign is that the attackers explicitly asked the targeted user not to use Okta FastPass when they were signing in, which is as good an advertisement for our passwordless, phishing-resistant authentication as we’ve ever seen. So we blew that up into a news story to basically say, “Attackers are telling everyone not to use Okta. That means you should probably use Okta.”

I love it. So you were talking about it getting more and more difficult to spot the scams, with generative AI producing better grammar, mimicking a brand’s tone, even probably doing language translation to a language that the scammer’s not familiar with. What should consumers be doing to try to spot these new tactics?

If you’d asked me five to 10 years ago, I’d have said, “Oh, yeah, look for bad grammar, look for weird formatting of phone numbers. It’s Amazon’s logo from 20 years ago. It’s Bank of Amrica, not Bank of America; they’ve got a typo in the bank name.” All of those things are out the window because of generative AI.

It’s disappointing to say it, but judging by some of the lures that we see on a daily basis, it is getting very, very hard to distinguish between a phishing lure and a real sign-in page. It’s very difficult to then educate people on what to look out for.

Naturally, look at the top-level domain: predominantly, the best advice you can give someone is to notice when they expect to be directed to a specific domain and they aren’t. Not everyone’s going to remember the top-level domain of the service providers that they use, which is why, when you first start interacting with a service provider, it might be a good idea to download their app.

You are then relying, of course, on the fact that the app ecosystem is not allowing fraudulent apps to be registered. But generally speaking, once you’ve authenticated with a legitimate app of a service provider, there is a form of trust created between your device and the service of that service provider, which means that you don’t have to authenticate very frequently. More often than not, you’re just verifying your identity to the device, and the device is authenticating to the service on your behalf.

I think it’s incumbent upon service providers to make sure they’re really, really crystal clear on under what circumstances they would ask their customers to take a specific action, because all we can really rely on at this point is intent. “What am I being asked to do? Is it out of the ordinary? Is it not something I would typically expect to be asked by this particular service provider?”

This is why a lot of folks like me, but also a lot of folks like the cybersecurity agencies of most countries, are trying to encourage people to use passkeys, because with a passkey, there isn’t a secret to hand over anymore. At the point of enrollment with a service, you create a trusted relationship between your device or a security key and that service, and there is no password or OTP to hand over to an attacker who’s trying to phish you.

I understand that this is a very new technology that people are only just getting used to, but I think we’re going to have to get a lot more used to it, given what generative AI is delivering to attackers.

And I’ll make the point for organizations that are rolling out passkeys: don’t do what one of my vendors did, where every time I tried to sign in without using passkeys, I had to say, “No, I don’t want to do passkeys” about five times before it would let me in, every single time I tried to log into their platform.

For about three months, I was about ready to switch providers because they were so persistent about trying to get me to do something that, at the time, was not the appropriate choice for me.

It’s very difficult, because it’s easy to sit up on my perch and say, “Judging by what I can see in the threat landscape, you should all be using passkeys.” The reality of getting everyone onto a very new form of authentication is another matter. We’ve been using username and password for a little while now, and most of us are very familiar with that. It took a long time to get people to understand the need to use multi-factor authentication.

And still it’s a challenge for lots of people.

Yeah. But now we’re getting to a point where what I’m trying to convince people is if they can make that initial investment in enrollment in passkeys—and some of the management of those passkeys can be a little bit complex—if you get over that hurdle, what you get from there is not only phishing-resistant protection, but it’ll take you three to five seconds to sign in to your service provider, and it used to take you 30 seconds.

There is some initial pain, but the promise down the line is every subsequent authentication event, you’re saving yourself 25 seconds. That’s how I try to sell it to people.

Because everything about the existing authentication is slowing people down. Some people’s hesitancy to even do 2FA is, “Well, now I’ve got an extra step I’ve got to do, and I just need that extra 20 seconds of my life back.”

I dread to think of how many times I’ve clicked the “Forgot Password” button, and I’m someone who works in this field. I use password managers, but password managers can sometimes be complex across different browsers on different devices, and you end up thinking the fastest path to resolving this is to press “Forgot Password” and go through another password reset. I look forward to a future where we’ll have to click that button far less frequently.

I love it. I think we are seeing, and maybe you can confirm, deny, or suggest otherwise, is most of the delivery mechanisms moving outside of email now?

Oh, very much so. Email will always be a delivery method of note for a broad set of actors, but SMS, I think, has become so important to adversaries. It’s easily spoofable, it doesn’t tend to go through the same gateways that are looking for signs of inauthenticity.

Like I said, other trusted messaging channels, over-the-top messengers like WhatsApp, are very common now, and social media accounts are very common for scams. As I said also, there’s just abuse of what should be trustworthy services, where threat actors manage to spin up their own tenants in multi-tenant services and attack from that location.

I think we’re seeing more and more often that simply sending out tens of thousands or hundreds of thousands of email messages is fairly crude compared to the tools available to threat actors today.

What’s your thought on people’s concern about AI voice cloning and identity impersonation in terms of, “Well, I got a phone call and I think it’s from my kid.” The typical grandparent scam. It’s the grandkid. “Oh, I’m in jail and I’m in a different country; I need money.” Are you seeing more of that targeting corporate or is that still relegated to one-offs targeting grandparents?

No, definitely. We’re seeing the same thing in the corporate world. The most common one that you probably swat away every week is the impersonation of your CEO. Given that chief executives, generally speaking, speak at their quarterly earnings calls, et cetera, there’s a huge corpus of voiceprint material out there that attackers can train on. They have a sufficient amount of input to create a very compelling output.

There are WhatsApp messages with audio attached all the time being sent to our staff, all of our customers, where their CEO is impersonated and instructing them to do certain things. Voice is something that can be trivially cloned.

We are also obviously seeing deepfake videos being used in things like the DPRK scheme, the North Korean fraudulent IT worker scheme, which we study very closely. This scheme relies on a lot of other uses of AI besides deepfakes, but occasionally you’ll see a deepfake overlay used where there is some reason for that individual to change their appearance.

It’s still a little glitchy, so we educate people and give them various methods of understanding whether they are potentially interviewing someone who is not who they say they are. One of the easy things to do from a deepfake perspective is simply have them wave their hand in front of their face, because it nearly always glitches in a way that makes it clear that there is a deepfake overlay.

Wait. You mean there are six fingers suddenly?

The progress that technology has made over the past six months has been remarkably scary. I’d like to think that I would never be tricked by someone using a deepfake, but I don’t know if I could say that in a month or two from now.… Share on X

The blurring effect does look very, very strange. The progress that technology has made over the past six months has been remarkably scary. I’d like to think that I would never be tricked by someone using a deepfake, but I don’t know if I could say that in a month or two from now. The technology just keeps improving.

The disappointing thing is a lot of these advancements in AI, to me, give an asymmetrical advantage to the attacker and not the defender, not the target. As a person who might be on the receiving end of a scam, the tools you have available and accessible to you aren’t necessarily scaling as fast as the tools that are available to attackers.

At the end of the day, AI is great at creating synthetic things. So yeah, we’re going to see more of that, unquestionably. The advice we have to give people has to change alongside it. Again, we focus very, very much on taking in multiple signals and also having methods of anchoring your trust in something. A cryptographic relationship between your device and a service is something you can trust more readily than someone who’s calling you out of the blue.
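The idea of anchoring trust in a cryptographic relationship between a device and a service can be sketched in a few lines. This is a toy illustration, not how passkeys are actually implemented: real passkeys (WebAuthn) use public-key signatures, so the service never stores a secret an attacker could phish. The shared HMAC key below is only a standard-library stand-in for that enrolled device-to-service relationship.

```python
import hashlib
import hmac
import secrets

# Sketch: the service can verify the enrolled device itself, rather than
# trusting whoever happens to be on the phone. A fresh random challenge is
# issued per sign-in, so a captured response can't be replayed.

def enroll_device() -> bytes:
    """At enrollment, key material is bound to this specific device."""
    return secrets.token_bytes(32)

def device_respond(device_key: bytes, challenge: bytes) -> bytes:
    """The device proves possession of its key for this fresh challenge."""
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def service_verify(device_key: bytes, challenge: bytes, response: bytes) -> bool:
    """The service recomputes the expected response and compares in constant time."""
    expected = hmac.new(device_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

key = enroll_device()
challenge = secrets.token_bytes(16)
assert service_verify(key, challenge, device_respond(key, challenge))
# A response for one challenge fails verification against a different one:
assert not service_verify(key, secrets.token_bytes(16), device_respond(key, challenge))
```

The point of the sketch is the contrast with a phone call: a voice can be cloned, but a response bound to fresh key material and a fresh challenge cannot be guessed or replayed.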

With all these changes, we used to have, 10 years or five years ago maybe even, the red flags of, “Well, if the person won’t get on a FaceTime chat with you, then they’re a scammer. If they’re on a dating app and they’re trying to get you to move off the platform, that’s a red flag. If the grammar in the email’s bad, it’s a red flag.”

In some of the automated telephone calls, there are telltale signs of the same background noise repeating over and over and over in this conversation. That’s odd. Those media-specific red flags are really starting to go away with AI. What are the critical thinking pieces that we need to teach instead?

I wouldn’t say we should stop teaching those because most of the things you just mentioned, we still see even with AI. I think it’s great because it’s teaching people how to be cynical about these approaches.

Like being asked to move off one platform to another is a really clear signal. Someone approaches you on LinkedIn and then wants you to move to a mobile messenger service. Why? What is it they don’t want to expose in LinkedIn’s messaging, which monitors for certain scams?

I think it’s still great advice. The difficult thing for a regular person is just to understand when would my service provider call me and ask me to take an action? How would they verify their identity to me? They’re always asking me to verify my identity to them, but how do they prove that they’re legitimate to me?

As much as we can continue to impart this knowledge on how to spot those scams, those of us who work for service providers also need to get a lot better at setting really clear expectations and creating trusted methods of communication that people can rely on, rather than leaving it 100% up to the end user of a service to discern what’s malicious and what’s benign. As I said, the tradecraft of adversaries keeps adapting and improving at a rate that we can’t expect people’s knowledge to keep up with. Every conversation helps.

I think now the advice that works today, and hopefully it’ll work for at least a few more years, is you should always be suspicious of any unsolicited communication. If you didn’t ask your bank to call you and you get a call from your bank, be concerned that it’s not really your bank. If you get a call from your grandson and you weren’t really expecting a call from your grandson, and it’s something odd, trust that it seems odd.

I hate to tell people to be cynical or to be suspicious, but we’re now in a state where we have to have at least some baseline cynicism or distrust of what’s happening around us.

And we need to have the tools to respond when a bank calls. I should be able to say to them, “I want you to verify that you are my bank. If I log into my banking app now and there’s an inbox, whatever matter you’re raising with me on the phone, I should find that matter in that inbox.”

In other words, I should be able to look to some alternate channel to verify that the matter that you are calling me about is legitimate. I think that’s the skill we’ve got to teach people. -Brett Winterford Share on X

In other words, I should be able to look to some alternate channel to verify that the matter that you are calling me about is legitimate. I think that’s the skill we’ve got to teach people. But again, there are two sides to this. The bank has to make that available. If they’re going to call a user and say take an action, then they have to also provide the user an alternate channel of communication to verify that the approach is legitimate.
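This two-sided requirement can be sketched as a tiny model: the provider records every outbound contact as a “matter” in the customer’s authenticated in-app inbox, and the customer independently checks a caller’s claim against it. All names and references here are hypothetical, purely for illustration.

```python
# Hypothetical model of out-of-band verification: every legitimate outreach
# leaves a record the customer can find through a channel the caller does
# not control (the authenticated app), so an impostor's claim fails the check.

inbox: dict[str, list[str]] = {}  # customer id -> open matter references

def record_outreach(customer_id: str, matter_ref: str) -> None:
    """The provider logs a matter BEFORE calling the customer about it."""
    inbox.setdefault(customer_id, []).append(matter_ref)

def caller_claim_is_verifiable(customer_id: str, claimed_ref: str) -> bool:
    """The customer logs in themselves and looks for the claimed matter."""
    return claimed_ref in inbox.get(customer_id, [])

record_outreach("cust-42", "CARD-REISSUE-0917")
assert caller_claim_is_verifiable("cust-42", "CARD-REISSUE-0917")   # real outreach
assert not caller_claim_is_verifiable("cust-42", "URGENT-WIRE-NOW")  # impostor's claim
```

The design choice that matters is that the customer initiates the check through their own login, never through a link or number supplied by the caller.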

For the most part, obviously, banks don’t call their users for that reason. But outside of banks, there are plenty of other reasons why people have to be contacted.

The most difficult thing our customers have to do sometimes is things like mass password resets. It’s very easy to reset the passwords of tens of thousands of users. It’s very difficult to convince them that you really do need them to reset their password and that you are not a scammer, because scammers are impersonating IT and security teams. They know that that is a trust anchor that they can abuse.

Do you see campaigns where someone says, “Hey, you need to reset your password,” sends you to a phishing site, and asks you to enter your old password before setting a new one, so the attacker captures the old one? Is that a classic?

Absolutely, that’s one of the classics: there’s a problem with a factor of some kind, whether it’s a password, the SMS OTP you’ve set up, or an authenticator app you’ve installed on your phone. Please authenticate and we’ll set you up with a new one; we’ll reset it.

Resetting your credentials is a common scam. And when you have to legitimately convince users to reset their credentials, that’s the part of responding to a cyberattack where a lot of organizations don’t realize the complexity and cost lies: communicating to your users that they need to take action. That can be very tricky, because I know that if I get a message saying, “Reset your credential,” I tend to be quite cynical about it.

I’m the same way. Usually, it’s accompanied by a notice that, “We take your security and privacy very seriously.” It is a really challenging cat-and-mouse game between security and the scammers. Do you think the scammers and the adversaries have the upper hand at the moment because of AI and that it’s just a matter of time before the tides turn?

I think that they have, as I said, an asymmetrical advantage at the moment. Attackers are leaning on AI to create a lot more synthetic material that is highly targeted to who they’re going after, and it’s imposing far less cost on them.

At the same time, I try to remain optimistic. In the corporate environment, I see the adoption of phishing-resistant, passwordless authentication rocketing every year. We saw it go from 2% to 6% to 13% and higher. It’s doubling at a rate that makes me think we are really making ourselves harder targets for threat actors.

We’re starting to see them focus on the enrollment and recovery stages of the identity life cycle. That suggests they’re not defeating the general sign-in with phishing, so they’re having to be more creative. If attackers are getting on the phone in a corporate environment, they are exposing themselves to more risk, and that’s a good thing. Every time an attacker has to do something risky that makes it more likely they’ll get caught and go to prison, I’m happy.

In the consumer context, we really need service providers to do more, and I’m not just talking about passkey enrollment. I’m talking about being able to get a gauge on what a user’s normal pattern of behavior is, and then taking appropriate action when that pattern deviates significantly. I think there’s a lot more we can do around things like breached password protection; that is, not allowing users to create passwords that we know have been compromised in external breaches.
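Breached password protection can be illustrated with the k-anonymity range-check pattern popularized by Have I Been Pwned: only the first five characters of the password’s SHA-1 hash ever leave the client, and matching happens locally against the returned suffixes. The “service” below is a local stand-in, not a real API.

```python
import hashlib

# A small local set standing in for a breach corpus of SHA-1 hashes.
BREACHED = {hashlib.sha1(p).hexdigest().upper() for p in (b"password1", b"letmein")}

def range_query(prefix: str) -> set[str]:
    """Stand-in for the breach service: returns hash SUFFIXES under a 5-char prefix,
    so the service never learns which full hash the client is checking."""
    return {h[5:] for h in BREACHED if h.startswith(prefix)}

def is_breached(password: str) -> bool:
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # Only `prefix` would cross the network; the comparison stays client-side.
    return suffix in range_query(prefix)

assert is_breached("password1")
assert not is_breached("correct horse battery staple")
```

A signup or reset flow would reject any candidate for which `is_breached` returns true, which is the policy described above: never let a user choose a password already known to be compromised.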

There’s a lot we can do to identify synthetic traffic, the fraudulent creation of accounts, that sort of thing. I’m seeing that level of maturity ratchet up across our customers over time, and what gives me hope is seeing those capabilities used. Every now and then, you get little signals from the threat actor groups you track closely that they’re having to adapt and change. That means we are creating friction and cost, and that’s super important.

I’m also a big advocate for some of the cross-industry groups. We work with, for example, […], and do collaborative work with Interpol, Europol, and others where we are bringing down entire scam farms, working across industry bodies with law enforcement to address some of the scams I don’t get to have much impact on in my day-to-day job. With the right people in the room, it’s amazing what can be done.

I also remain optimistic that artificial intelligence might even create a generation of users of digital services who are inherently more cynical about any approach that comes to them, because they’ve grown up in a world in which they are less trusting of the messages they receive.

As we wrap up here, two questions. Are there any particular resources that you guys have that are consumer-facing that give a broad overview of the threat landscape?

If you go to Okta’s newsroom and look for stories tagged “Okta Threat Intelligence,” there are many resources where we’ve dug into a particular campaign and tried to educate people about what we can learn from it. If you browse Okta’s newsroom, you’ll see them.

The one that was particularly interesting from an AI perspective involved the abuse of a service called V0. If you search for that, it’s very interesting to see, again, just how quickly attackers are adapting to generative AI and using it to create not just compelling phishing campaigns, but the entire infrastructure behind their phishing operations.

Got you. Any parting advice? One or two actions that consumers should take if they haven’t already taken to reduce their risk, reduce their […] profile?

You know what I’m going to say. I’m going to say go to your service providers that you use regularly, see if they’re offering passkeys and start experimenting. Again, that initial, upfront investment in time is going to save you a lot of time in the future.

It doesn’t mean that you have to throw away all of your existing techniques, your password managers, et cetera. This is going to be a transition. But you’ll feel a lot more comfortable if you’re signing in using passkeys or signing in using the official apps of your service providers.

If you are using those methods, then being asked to access resources via any other method is going to seem a lot more unusual to you and help you to differentiate between the benign and the malicious.

I love it. And if you’re a platform provider, you should always provide an in-app confirmation of messages.

Yes, I love that. Basically, make sure that anything you’re going to communicate out to your customers, they have a way of authenticating to a trusted app where they can see the reason for that outreach. Just having a separate, trusted method of communication is essential, in my opinion.

I love it. Brett, thank you so much for coming on the podcast today.

No worries. Thanks for having me, Chris. I enjoyed the conversation.

About Your Host

Chris Parker

Chris Parker is the founder of WhatIsMyIPAddress.com, a tech-friendly website attracting a remarkable 13,000,000 visitors a month. In 2000, Chris created WhatIsMyIPAddress.com as a solution to finding his employer’s office IP address. Today, WhatIsMyIPAddress.com is among the top 3,000 websites in the U.S. 


COULD YOU BE EASY PREY?

Take the Easy Prey Self-Assessment.

YOU MAY ALSO LIKE

Dan Ariely – Why You Fall For Scams

Jared Shepard – Mobile Device Threats

Chris Kirschke – Past, Present, and Future of AI agents

Cynthia Hetherington – You Are Traceable with OSINT

Deviant Ollam – Anyone Could Walk In

PODCAST reviews

Excellent Podcast

Chris Parker has such a calm and soothing voice, which is a wonderful accompaniment for the kinds of serious topics that he covers. You want a soothing voice as you’re learning about all the ways the bad guys out there are desperately trying to take advantage of us, and how they do cleverly find new and more devious ways each day! It’s a weird world out there! Don’t let your guard down, this podcast will give you some explicit directions!

MTracey141

Required Listening

Some things are required reading – this podcast should be required listening for anyone using anything connected in the current world.

Apple Podcasts User

Fascinating stuff!

I've listened to quite a few of these podcasts now. Some of the topics I wouldn't have given a second look, but the interviewees have always been very interesting and knowledgeable. Fascinating stuff!

Apple Podcasts User

Excellent Show

Excellent interview. Don't give personal information over the phone … it can be abused in countless ways

George Jenson


Content, content, content!

Chris provides amazing content that everyone needs to hear to better protect themselves and learn from others’ mistakes to stay safe!

CaigJ3189

New Favorite Podcast!

Entertaining, educational, and I cannot get enough! I am excited for more phenomenal content to come, and this is the only podcast I check frequently to see if a new episode has rolled out.

brandooj

Big BIG ups!

What Chris is doing with this podcast is something that isn’t just desirable, but needed – everyone using the internet should be listening to this! Our naivete is constantly being used against us when we’re online; the best way to combat this is by arming the masses with the information we need to stay wary and keep ourselves safe. Big, BIG ups to Chris for putting the work in for us.

Riley

