Scammers aren’t just phishing your inbox anymore—they’re impersonating your voice, your face, and even your coworkers. Deep fakes and social engineering have moved beyond clever tricks and become powerful tools that bad actors are using to infiltrate businesses, breach accounts, and dismantle trust at scale. What used to take a hacker hours and expensive tools can now be done in minutes by anyone with a Wi-Fi connection and a little malicious intent.
Our guest today is Aaron Painter, CEO of Nametag, a company leading the charge in next-generation identity verification. Aaron’s background includes 14 years at Microsoft and executive roles in cloud tech across Europe and Asia. After witnessing firsthand how easily identity theft could unravel lives—especially during the shift to remote everything—he founded Nametag to answer a critical question: how can we know who’s really behind the screen? With Nametag, Aaron is building real-time, high-security ID checks that are already reshaping how help desks and businesses protect users.
In this conversation, we unpack the difference between authentication and identity, why traditional methods like security questions are dangerously outdated, and how mobile tech and biometrics are changing the game. Aaron also shares practical tips on protecting your most valuable digital asset—your email—and what consumers and companies alike can do to stay ahead of evolving threats. This one’s packed with insight, and more relevant than ever.
“The Face ID camera on a modern phone can be used to capture a 3D spatial selfie. That’s far richer and more secure than a simple webcam photo.” - Aaron Painter
Show Notes:
- [00:54] Aaron is the CEO of Nametag, a company he started five years ago that focuses on identity verification at high-risk moments.
- [01:37] He spent 14 years at Microsoft working on product, including running Microsoft China. He also ran a cloud computing company that was AWS's largest partner in Europe.
- [02:12] When everything went remote in 2020, he discovered that there were identity verification issues over phone lines.
- [03:03] He began building technology that will help accurately identify people when they call in to support or help desks.
- [04:22] Most of what we think of as identity is really just authentication.
- [07:41] A common new challenge is the rise of remote work and people having to connect remotely. The rise of technologies that make it easier to impersonate someone is also a problem.
- [10:38] Knowing who you hire and who you're working with matters.
- [11:03] Deep fakes and voice cloning have become so much easier.
- [15:47] How platforms have a responsibility to know their users.
- [18:11] How deep fakes are being exploited in the corporate world.
- [19:30] The vulnerability is often the human processes. Back doors and side doors are the leading ways that companies are breached.
- [23:53] High-value accounts and companies that know they have something to protect are early adopters of Aaron's technology.
- [24:50] Identity verification methods that use mobile phones and the cryptography built into the device.
- [27:07] Behavioral biometrics include the way we walk or the way we type.
- [29:56] If you're working with a company that offers additional security tools, take them up on it.
- [34:04] Dating sites are starting to verify profiles.
- [43:07] We all need to push for more secure ways to protect our accounts.
- [43:48] The importance of protecting your email.
Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review.
Links and Resources:
- Podcast Web Page
- Facebook Page
- whatismyipaddress.com
- Easy Prey on Instagram
- Easy Prey on Twitter
- Easy Prey on LinkedIn
- Easy Prey on YouTube
- Easy Prey on Pinterest
- Aaron Painter – LinkedIn
- Nametag
- Aaron Painter – Facebook
- LOYAL: A Leader's Guide to Winning Customer and Employee Loyalty
Transcript:
Aaron, thank you so much for coming on the podcast today.
Thanks for having me, Chris. I'm really excited to be here.
I'm excited for this as well. Can you give myself and the audience a little bit of background about who you are and what you do?
Yeah. My name is Aaron Painter. I'm the CEO of Nametag, a company I started about five years ago, focused on identity verification in high-risk moments, like when you call a help desk and need to verify that you are in fact the rightful account owner.
That's a really short description.
There's so much more to say.
I like it. What got you into the field? Was there an event that said, “Oh, my gosh,” or were you, “I've got a solution looking for a problem,” or, “There's this massive problem I see on the horizon; we've got to start doing something about it now”?
It was an interesting, winding journey. Professionally, I spent 14 years at Microsoft. I started in product and then spent most of my career outside the US. My last role with Microsoft was running Microsoft China, which was this crazy opportunity to build and create a very fast-growing market at the time. I left that and ran a cloud-computing company that was AWS's first and largest partner in Europe and was based in London.
About five years ago was just the start of the pandemic, and it was February of 2020. Suddenly, it felt like everything was going remote. Everything was no longer in person. I had several friends and family members that had their identity stolen. I said, “I'm going to be a good friend. I'm going to be a good son. We're going to jump on the phone. We'll figure this out.”
Every company we called just had this method to say, “Well, before I can help you, I need to ask some security questions.” We know what that's like. “Where did you live in whatever year? What street? What's the last four of your Social?” It was nonsense, so much so that clearly someone had called before us, knew the answers to those questions, and was able to take over accounts. That led me down this path to saying, “How is this possible in the modern age that we don't know who's behind the screen, who a caller is, who's logging into a site?”
We set out to try and solve that problem and build technology around it. Actually, that was a solution we had built technically, but we didn't quite know where to apply it. It was some really early customers who gave us great feedback and said, “Well, here; I've got use cases for you. What about when my employees or my customers call our help desk, or when my employees or customers have added things like multi-factor authentication to protect their accounts, but then they get locked out? They get a new phone, they upgrade their phone.” That's a use case for high-security identity verification. It just grew from there into other use cases. A bit of both—it was a personal problem, and then it actually was a solution that we are trying to narrow in on where we could best apply the technology.
I like that. I think of help desk like, that is one of the biggest scam vectors these days. “I'm from your IT department,” or, “I'm the help desk within Amazon.” I don't know why they're calling me to provide help desk support, but there's always a plausible reason. I always thought that that is one of the biggest weaknesses for a company, either making the outbound call or getting the inbound call. It'd just be so easy to like, “Hey, I'm Bob. I'm new with the IT department. Sharon told me to give you a call and you had an issue with your computer. Let me help you.”
There's a big difference, I've learned. Someone pointed this out to me, actually, just in a conversation today between authentication and identity. Identity is a big word, and it can mean who we are, what we stand for, and how we come across. In the digital space, most of what we think of as identity has really just been authentication. Do you have access to this device? Are you reachable on this phone number? Do you have the password? How do we authenticate someone or log them in?
There really hasn't been the technology previously to know who is this person. Who are they really? Are they human, let alone which human is trying to access the account? In today's world, we've talked in the past about a decrease in trust in so many aspects of our lives. It really matters that you know the identity of a person, and the identity at that moment.
I sometimes say that identity is like a credit card. You might be given the credit card, but it's actually more important when you swipe that card to use it. Is your account in good standing? Do you have available credit? It's a real-time question, not were you once given the card. The identity of someone confirming that is also a real-time question. Are you the real person at this moment or is someone maybe trying to impersonate you? That's a real-time question that we want to be able to answer at those moments that matter.
It's interesting that you refer to it as authentication versus identity. As you were spelling that out, it's like, yeah, most of the time when people are looking for “identity,” it really is authentication. Do you have the right credentials? Are you actually the person who's supposed to have the credentials?
Exactly right. It's just something we grew into. It's an evolution of how computing and internet infrastructure have existed since the sixties. This idea that you initially logged into something with a trusted email account, because everyone on the internet in the early days was from an academic institution or from government. If they had that email, that .gov or .edu domain, they were a trusted person.
As email addresses suddenly became easy to get and easy to dispose of, and you could have unlimited ones at Gmail, Hotmail, or you name it, unfortunately email as an identifier no longer represented someone's identity, who they were. It was just an alias to be known by in an online context. That really started to degrade trust for those times when you need to know who someone is.
Do you think that it was the decrease of the barrier to entry? I think of the parallels to the early days when you had to have an SSL certificate. The common thing you would hear people say is, “Oh, you want to look for the S because that means it's safe.” Not that it was secure, but that it means it's safe. I remember the processes that you had to go through. To get a certificate, you had to submit company paperwork, and it cost thousands of dollars. They had to send something in the mail to you, or whatever the case was.
There was a lot more to the process than it is now. Now the definition of the colloquialism of what does the S stand for is, well, it just means it's an encrypted connection. There's no verification of who it is that you're actually corresponding with that entity or if they're even the real entity. It just means no one's snooping on that conversation.
Yeah, I think that's right. There's a lot of merit to that, but there are probably two big trends that have emerged that have made this problem more challenging. One is the rise of remote work and people interacting with services remotely. If you worked in the government or you worked in education in those early days, you probably went to the academic institution or you went to a government office. People saw you; you were authenticated that way. Your first day on the job, someone saw you physically, maybe checked your government ID and said, “OK, great.”
When I started at Microsoft, that's how it was. After HR, I went to a security office. The security office checked my ID, gave me my username and password. They helped me set up with a smart card at that time, but it was a physical process. That's now remote for most workers and those kinds of jobs who start work.
The other big piece has just been the rise of these technologies that make it easier to impersonate someone. Particularly now, we think a lot about deep fakes, but it's just gotten so mainstream and easy for you to impersonate someone else, particularly in those remote contexts. When you put those two together, we've created this perfect storm where it's really hard to trust that person you might be interacting with and knowing who they are.
It's interesting because there are several people that work for me whom I have never met face to face and will likely never have the opportunity to meet face to face. They're halfway around the world and three hours from the nearest airport or something like that. I'm never going to run across this person, yet they're still part of my business and still part of the way that I do things. In what I'm doing, if they are actually someone different, it's not a big deal as long as the work gets done. But for a lot of companies, we're going to be talking about the IT workers. If you're hiring a remote IT worker, they potentially have access to stuff that all of a sudden becomes more of a risk to you if you don't know who they are.
I think you're right. Think of all the levels of training, certification, and requirements that so many companies encourage or require their employees to go through, let alone background checks, training on how to handle data types, how to handle privacy, certifications in certain areas, or proof of schooling. If you're an outsourcing firm, you might say, “Hey, my employees have been trained on this; they've been certified in certain things.” Suddenly, that's not the person doing the work. All those elements that we've been relying on for years matter. Maybe you don't know the person's home life or personal background, and that's less relevant, but certainly knowing that they've achieved certifications or training matters. We rely on that to make sure that our workers are doing the work that we need them to, at the standard we expect. It matters for a lot of companies that they know who they're hiring and they know who they're working with. There's a lot more subtlety there than, “Have you met in person?”

Deep Fakes now have babies talking and it looks so realistic.
Yeah. You were talking about deep fakes. How have they progressed over the last couple of years?
Deep fakes are an interesting full topic. It's a term that started in 2017-2018 with a user on Reddit who was initially creating synthetic content using celebrities. They were trying to make it look like a celebrity was doing something revealing, adult content, or shocking. It continued from there, partly with the technology advances in GenAI.
Microsoft researchers have shown this with voice cloning, for example. If you wanted to mimic your voice or someone else's a few years ago, you would read a 20-minute script that was a preset set of words, you would record it, a system would learn from that, and then it would get close. Now, with three to five seconds of someone's audio—it could be from this podcast, could be from a call, a voicemail, a social media post—you can suddenly create a relatively high-caliber audio deep fake that sounds like that person. The tools have gotten so easy that, like photo editors, photo sharing, or a lot of these consumer services we use, it's now that easy to make a deep fake of someone in voice, in video, a combination of both, let alone in written text.
This rise of the technology has given superpowers to a lot of us in being able to do cool new things. There are beneficial purposes we could talk about: reliving history, bringing back people who are no longer with us, loved ones, memory aids, wonderful things. But it also means that someone can impersonate you without you necessarily knowing or consenting to it. It's that ability to impersonate someone that's made it such a dangerous tool and given these superpowers to bad actors.
I remember back, and it was probably in the late teens, where there was someone who was working with a Tom Cruise lookalike to create a fake Tom Cruise. I think they even had a fake Tom Cruise account. The amount of work and compute power that it took to get someone who looked almost like Tom Cruise to look very much like Tom Cruise was such a barrier to entry, and the technology was so difficult to use that it was like, “OK, no one in real life is going to do this for Chris Parker, or no one's going to do this for Aaron because there's so much labor, so much compute power involved.” Is it now that it's almost down to free?
Absolutely. The level of sophistication and cost is maybe free to the end users, but not free to the cloud companies providing the service. Different topic. The ability to do it has certainly gotten easier. It's important to note, we don't always need some high-fidelity, 4K, project-it-in-Times-Square level of realism for a deep fake.
Often the scenarios where this happens, it's not some official release video from a government body. It's, “Did you catch this poorly captured video clip in the back of someone's car that they've recorded? Was that really them saying or doing X?” Or it's someone calling the help desk on a bad phone connection. “I'm sorry, I'm on vacation and I'm locked out of my account. Yeah, I know my connection's not great.” You don't need it to actually be 4K fidelity for it to be a very realistic deepfake, but yet the ability to create those high-fidelity ones is getting faster by the day.
It's one of those weird things—people might be even more trusting of the poor-quality stuff than the high-quality stuff.
We have this a lot with companies. Some of them say, “Our company maybe isn't that big,” or “Gosh, the people on our call center, help desk, or HR team, they've been there for years. They recognize everyone.” Same thing you think of in private banking or certain financial transactions. “Oh, I know everyone who calls me.” That's even worse in a way because you recognize their voice. You're more inclined to trust, “Oh, it's you again. Oh, of course I recognize you.” When, if that's actually someone running some deep fake voice emulator, you immediately default to trust, when in today's world, it is harder to default to trust.
I'm internally chuckling. I'm not in a situation where I can talk about the specifics of it, but there's a situation where there's a certain layer of protocols that are supposed to apply when I show up somewhere. I was just chuckling that it was like, “Oh, hey Chris. Come on in.” It's like, “Oh, you have now avoided all the security protocols. All the double checks have now gone out the window because you think you know me.”
That's right. I tend to think that platforms have a responsibility to know their users just like communities. You might operate with an alias, you might not need to always show your ID. Let's say you're on a social media platform. Maybe you want to be anonymous or maybe go by a pseudonym. That's great. There's business reason for platforms to offer that, but the platform should know who you are, because if your action is bad or someone's trying to impersonate you, they need the ability to protect the others, to protect your account, to protect you.
I think in the physical world, it's like when you go to a conference. You check in at their front desk, you're probably invited to begin with, but someone's checking your ID, your business card maybe, confirming your email, where you work, and then they give you, coincidentally, maybe a name tag to wear at that event. The people you meet there, they know that you've been vouched for or not likely impersonating someone.
When you go to the airport, same thing. To get to the secure area where the flights are departing, you go through a security check, and then you can hopefully feel safer and feel like you're in a bit more of a trusted space once you're past security. In the real world, we have these controls to verify who people are so that the environments can be safe. Increasingly in the online world, it's very difficult to get that same level of trust.
Also, the challenge is that people don't want to be verified. When I go to the airport, I want to make sure myself and everybody else are verified. If I'm going to go online and spout stuff that's going to be a political hot topic, I don't know that I want people to know who I am. I think that's one of the challenges that the platforms understand that not all users want to have that verification.
That might make perfect sense for you to be anonymous or operate by a pseudonym. At the same time, those platforms have challenges of combating bots, disinformation, misinformation, and other things they're also trying to solve for because you want to make sure that the contents you're reading maybe is from a real person and that person is at least known.
Each platform, I believe, can have its own business model and the value it provides, anonymity, or verified profiles that are real or otherwise. Regardless, I think certainly in the context of at work, the platform in that context is your employer, your corporate network. You know what, you really want to know who the person is? That matters.
In the corporate context, how are deep fakes and that lack of identity being exploited?
One of the biggest ones unfortunately started about two years ago with a particular cyber group called Scattered Spider. They were most known for a target that was MGM, the hotels and casinos in Las Vegas. MGM isn't meant to be singled out here, but 60 Minutes did a piece on the attack, and that made it mainstream. A bad actor called the employee IT help desk and, in roughly eight minutes, was able to convince the help desk rep, who probably got into that job to help people and not to be some identity interrogator, to reset access to the account. The bad actor was able to go in, deposit ransomware, and take MGM offline for two weeks. That level of attack has just continued.
It started before that, but that was the most visible, well-known one, and it has spread like wildfire in the months since. Back to the concept we were talking about earlier around authentication: however you've protected an account, it's only as secure as the recovery mechanism. If you've set up great things, phishing-resistant multi-factor authentication, a PIN sent here, an authenticator code typed in, amazing. There's some great technology there, but the vulnerability is not necessarily the technology; it's the human processes around it.
All it takes is for someone to call and say, “Well, that's me, but I can't access my account.” Suddenly, all the great technology safeguards you put into place around authentication go out the window, because now it's up to this person on the phone, or maybe on a video chat, to guess: “Are you really the rightful account owner? Should I let you into this account?” That back door or side door has become the leading way that companies are breached. Eighty percent-plus of cyber attacks today are identity-related, and it's often using this kind of methodology.
From the corporate compromise challenge, I've experienced two different versions of interacting with an account like that: something's going on with the account, and they say, “Oh, well. Call this number.” You talk to someone, “OK, blah, blah, blah. You just need to send over a photocopy of your driver's license.” I'm like, “OK, I can do that, but you don't have anything to compare it to. You're asking me to prove my identity with a document that you have never seen before.” In this particular case, you don't have the ability to actually authenticate that what I've produced is a legitimate document. Sure, it might look like a legitimate document, but you've got no way to prove it.
I also had a situation where someone used that technique to get into one of my accounts. They just provided forged documents. When I talked to the company, I'm like, “Well, why did you accept the forged documents?” “We didn't have any on file.” “You never asked for them.”
That's right. The challenge there is often that this method of identity verification with a driver's license and a selfie, maybe a passport, or whatever it might be, was built for regulatory compliance. It wasn't built for security. Now you can literally go to a GenAI tool and say, “Make me one of these,” or “Make me a photo of someone holding this ID.” It's shocking how good some of that has become.
Even a few years ago, you would use Photoshop, modify something, save it as a PDF, and upload the PDF. In your example, “I emailed the PDF; here you go. Yup, this is the person I want to be.” It was maybe fine for regulatory compliance, like checking the box. In financial services, you call it KYC or Know Your Customer regulations. But coincidentally, if you opened a bank account that way, there's very little chance that when you call the bank to transact, they ask for the same thing. They say, “Oh, now let's use security questions or some other method.” It wasn't built for security.
That was one of our big insights five years ago to say the methods that exist to do this today aren't correct. Actually, the analogy someone pointed out to me recently was, it's a lot like mobile check deposits. You would write a physical check and you might take it to the bank and deposit it. OK, great. Or maybe through the ATM. Fine, it was physical. That would route through a whole process.
There are movies like Catch Me if You Can about people who were good at check fraud and detecting check fraud. Amazing, but then eventually came mobile check deposits. That's what we use today, commonly. You're using a mobile app on your mobile phone to take a photo, capture, and deposit that check. It turns out using your mobile phone is actually quite a secure way to do that.
There was never, however, the in-between. There was never a “take a photo of this check and upload it into a web browser,” because that wasn't secure, so banks didn't rely on that. They never did. That was the skipped in-between. Yet for scanning an ID and taking a selfie, somehow the norm is still to use a web browser, save it as a PDF, and upload it, and that's just not built for security.
I'm asking maybe a loaded question: How many companies out there are actually redesigning their systems and building them for security? Or is it still, “We're just trying to figure out how to ratchet security on the side of the process”?
I think the companies that realize they have more and more important things to protect are some of the earliest adopters in thinking about new ways to do this. At least in our line of work, a lot of what we see are high-value accounts. Think of the internet domain providers. Think of HubSpot, which holds companies' CRM and financial information. Think of your average employee account. An employee that has access inside an organization, let alone an enterprise, is valuable. Workforce, and protecting those workforce accounts, has become one of the fastest-growing use cases because those are important accounts that you have to get right. They are some of the earliest adopters of these more secure methods of doing identity verification to keep the accounts safe.
What are some of the techniques of identity verification that go beyond authentication?
The key insight we had a few years ago, and have built a lot on since, was this idea of using mobile phones, using what exists in that mobile check deposit framework, for identity verification. It turns out that if you ask someone to scan an ID and take a selfie exclusively on a mobile phone in the right way, using mobile app technology and other things, you get a huge number of benefits. You get to benefit from the cryptography on the device, meaning you can't just upload something into a web browser. You know that your app on that device is talking to the cameras and the hardware on that device, so your evidence collection methods are more secure. Suddenly, you can trust the information you're collecting.
By the way, you get much richer information because, for example, the Face ID camera that's on modern iPhones can be used not just for Face ID but to capture a three-dimensional spatial selfie of someone. You can compare that in a much richer way than a webcam photo that you might otherwise get. These mobile phones, it turns out, provide a solution that can be used to do this identity verification in a more secure way. That's been one of the big insights that we had really early on, and now a bunch of people are looking at that space and saying, “Hey, maybe there's something here.”
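To make the device-cryptography idea concrete, here is a minimal, hypothetical sketch of the kind of check a server could run: the phone app signs the captured ID-and-selfie evidence with a hardware-bound private key, and the server verifies that signature against the public key enrolled earlier, so a plain web-browser upload can't pass. This is an illustration only, not Nametag's actual protocol; the function and variable names are invented for the example.

```python
# Illustrative sketch only: check that identity-verification evidence (an ID scan
# plus selfie payload) was signed by a device-bound key enrolled earlier, rather
# than arriving as an anonymous web-browser upload. Names are hypothetical.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.serialization import load_pem_public_key


def evidence_from_enrolled_device(payload: bytes,
                                  signature: bytes,
                                  enrolled_public_key_pem: bytes) -> bool:
    """Return True if `payload` carries a valid signature from the enrolled device key."""
    public_key = load_pem_public_key(enrolled_public_key_pem)  # EC public key, PEM-encoded
    try:
        # ECDSA over SHA-256: the mobile app would produce this signature with a
        # private key kept in the phone's secure hardware, so only that device can sign.
        public_key.verify(signature, payload, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False


# Hypothetical usage: reject anything that doesn't verify, as you would an untrusted upload.
# ok = evidence_from_enrolled_device(selfie_and_id_bytes, device_signature, device_pub_pem)
```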
I've heard of other interesting techniques over the years. I ran across one inadvertently by having somebody shoulder surfing at home. I was trying to help someone out and needed them to log into my computer. The computer password I was using is all muscle memory. Now I just mash the keyboard and it works. I don't think about the order I'm pressing keys in, where the shift is, where it isn't, where the funky characters are. There's a very specific way that I enter the password every single time. I've heard that how people type and how people enter their passwords is actually often more secure than the password itself.
Yeah. I think there's a lot of merit in a lot of those. The category people tend to put those in is behavioral biometrics—the way we type, the way we walk, called gait detection if it's an in-person scenario. Exactly, typing patterns, click patterns. There's a lot of merit in all of those other options.
One of the challenges that exists with them, though, is the same as with passwords or even passwordless options like passkeys, this growing trend: you need to know who is setting that up. You need to know whose behavior patterns you're watching, and what do you do if someone's locked out? How do you handle, “Oh, the gait detection isn't working,” or “The typing pattern is recognizing me incorrectly”? All you have to do is call someone and say that, and then you're back to the same challenge in the first place. It can provide a great sense of ongoing authentication, but it doesn't actually tell you who the person is behind the screen. Again, back to the difference between authentication and identity, it doesn't necessarily identify the person. It just authenticates that it's the same person who set that up.
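As a rough illustration of the typing-pattern idea, here is a toy keystroke-dynamics check: record the gaps between keystrokes while someone enrolls a password a few times, then compare a new attempt's gaps against that profile with a simple tolerance. Real behavioral-biometric systems are far more sophisticated; the timings and threshold below are made up for the example.

```python
# Toy keystroke-dynamics check: compare the timing gaps between keystrokes against
# a profile built at enrollment. All timings and the tolerance are invented examples.
from statistics import mean


def timing_gaps(timestamps: list[float]) -> list[float]:
    """Gaps (in seconds) between consecutive keystrokes."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]


def matches_profile(enrolled: list[list[float]], attempt: list[float],
                    tolerance: float = 0.08) -> bool:
    """True if the attempt's average deviation from the enrolled gap profile is small."""
    profile = [mean(column) for column in zip(*(timing_gaps(t) for t in enrolled))]
    gaps = timing_gaps(attempt)
    if len(gaps) != len(profile):
        return False
    return mean(abs(a - b) for a, b in zip(gaps, profile)) <= tolerance


# Hypothetical enrollment: the same password typed three times, keystroke timestamps in seconds.
enrolled_samples = [
    [0.00, 0.12, 0.25, 0.41, 0.52],
    [0.00, 0.11, 0.27, 0.43, 0.55],
    [0.00, 0.13, 0.24, 0.40, 0.50],
]
print(matches_profile(enrolled_samples, [0.00, 0.12, 0.26, 0.42, 0.53]))  # similar rhythm: True
print(matches_profile(enrolled_samples, [0.00, 0.30, 0.60, 0.95, 1.30]))  # different rhythm: False
```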
“OK, we need you to come into the office, key in your password a hundred times while you're in front of us, and you need to bring your passport.”
That's right.
It's all well and good until you close the car door on your finger and you can't type anymore.
There you go. Or something changes, or who knows? A more realistic, just consumer example even, there was an article about a year-and-a-half ago in The Wall Street Journal where this dad in Florida was locked out of his iCloud account. He offered Apple $10,000. He offered to fly to Cupertino to get a reset. He wanted his family photos. He said, “I don't understand. I use Face ID every day. You know exactly who I am.” Apple said, “No, we know that's the face that enrolled in Face ID, but we don't know that that's the owner of this iCloud account.” That's the fundamental difference.
You might use Face ID all day on your iPhone, just like you might use those behavioral biometric tools, but at the moment of getting locked out, you're back to the same question as when you set it up. “Who is this actually? Who's doing this? Whose face is this that we're matching against?” That's really the challenge that people are taking advantage of.
Let's get out of the corporate space and maybe we can draw some help for the audience. How do we shift our mindset from authentication, the verification of credentials, to identity verification when we're interacting with people or entities?
I think one of the valuable things to start with is, if the company you're working with or whatever consumer accounts you might have begin to offer additional security tools, take them up on it. Right or wrong, some might be good ones and some might be bad ones, but it's a good place to start. If they offer multi-factor authentication, great. Add that to your account. That is necessary. Not fully sufficient, but it is a great first step over just having a password. Start with that.
Friends and family on vacations where we get together and I'm like, “Hey, let's add MFA to a few of your accounts.” They think I'm crazy. The week after the vacation they're like, “Oh, I'm really glad we did that.” That's a really important first step.
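Since the one concrete takeaway here is to turn on multi-factor authentication wherever it's offered, here is a minimal sketch of how the most common flavor, the time-based one-time password (TOTP) behind authenticator apps, is computed per RFC 6238. The shared secret shown is a made-up example.

```python
# Minimal TOTP (RFC 6238) generator: the math behind most authenticator-app codes.
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_base32: str, digits: int = 6, period: int = 30) -> str:
    """Return the current time-based one-time password for a shared secret."""
    key = base64.b32decode(secret_base32.upper())
    counter = int(time.time() // period)                 # 30-second time step
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()   # HMAC-SHA1 per the RFC
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


# Example shared secret (made up); the server and the authenticator app both hold it.
print(totp("JBSWY3DPEHPK3PXP"))
```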
You get into other scenarios. We were chatting a bit earlier. How do you know when someone just texts you or reaches out to you? Are they the right person? Are they the person they claim to be or the person maybe you think you know? I think there's just this healthy curiosity or skepticism that sadly, rightly or wrongly, we probably all need to have. I tend to think about, “What's the person? What's the context? What's the channel?”
Is this a person I think I know, or is it a new random stranger in my life? Is it in a group chat or is it in a one-on-one chat? What's the context? They're suddenly asking for something that's outside of how I know this person to be. I had someone reach out to me on LinkedIn the other day from a known LinkedIn account, someone I knew, I hadn't spoken to them in years, and they started just asking me all these personal questions and questions about our business. It started with, “Hey, it's been a while.” I was like, “Yeah, it's great to hear from you.”
The questions very quickly got to something where I had to ask back like, “Hey, but why are you asking? It's great to hear from you, but this isn't stuff we normally talk about.” They quickly say, “Oh, yeah, sorry, I was just curious.” OK, but again, the context was wrong for how I knew that person and having not been in touch for years. It's just this healthy skepticism or curiosity I think we have to bring to all of these digital interactions.
Yeah. I had definitely connected on social media with a former coworker while we were working together. As with most social things, once you're connected with someone, you often don't go out of your way to disconnect from them unless they do something that you're horrified by, or unless you're one of those people who just says, “Hey, I just pruned my social media connections.” I'm not one of those people. I got a Facebook message from him, and he started to interact with me, and I'm like, “OK, this person has probably never reached out to me on Facebook before.” Yeah, we were connected, but we've never had a direct conversation. Within about three or four prompts, it was like, “Oh, hey. Can you help me out with my electric bill,” or something like that. I'm like, “Oh, OK. Yeah, it's a compromised account.”
For one, we weren't good enough friends that he would ever ask me for money. At least I don't think he would, but like you said, this person wouldn't normally ever contact me on Facebook. He would text me or call me if he really needed to get ahold of me. That's one of those things. My wife likes to say, context matters.
There you go. She's onto something.
She is. Say someone's going on a first date with someone. We saw their social media profile, we show up in person, and yeah, the faces match. That's authentication, not identity. Without showing a driver's license, are there ways that we can at least attempt to build an identity in our own minds for the people we're interacting with, versus just taking everything at face value?
It is interesting. When we first got started as a company and we built this way of doing identity verification, we thought those would be some of the most useful applications. Social media platforms, dating sites, exactly. How are you going to build a virtual relationship with someone and hopefully it gets to the point of let's meet up in person, yet maybe they logged in with their social media login and you don't know who it is?
It is interesting. It's taken a few years. At the time, there was less market interest, but just in the last few weeks, Bumble announced that they're now doing identity verification of profiles; Tinder does the same. These platforms are trying to bring some sense of knowing who someone is.
Digital signatures are another fascinating one to me. The idea that we generally sign documents with little more than a record of where your IP address is, and then, “Oh, I trust that's you who signed the document,” makes no sense to me. There are natural cases where you want some form of identity verification, but then you get into, well, is someone using a consumer-grade version of that, or are they using a security-built version of that? If it's easy to use a deep fake in the identity verification process, then it feels just like theatrics.
Maybe they're doing something that makes you feel good: “Oh, they're doing identity verification. That's amazing.” But maybe it was done once. Pick your favorite home rental platform. You can have a verified profile from years and years ago, and somebody else could be using that account. The credentials could have been given away. The account could have been taken over. You look and seem like a verified profile, yet there's no re-verification that's happened, or the way it was done in the first place was in a web browser and very insecure.
Again, take advantage of the tools as they come across. If a platform is offering that, I say great, go for it. You should use it as a consumer. You should take advantage of that enhanced security. Are there companies that do it better than others? Surely.
That's interesting because like what you were talking about, “OK, we did the identity verification once, but then we’ve just never done it ever again.”
One of the safety leaders at one of those large platforms explained that to me. They said, “Gosh, when we built this, we tried to do it like a hotel.” They said the difference was in a hotel, you check in every time with a credit card and an ID. They say, “OK, we know it's you. We know this is the name of the reservation. We'll go ahead and proceed and welcome you to our hotel.” Some of these home rental platforms, most in fact, don't do that. They say, “Create an account. Let’s try and ask you to verify your ID so we know it's you.” At one point in time, again, often using an insecure method, but at one point in time. Every time you go back to book or to be a host, it just assumes that you're still the same person because you have the username and password. That's not a reality in today's world where account takeovers are unfortunately too easy.
Maybe you have an answer to this. I don't think my bank has ever asked me. There are some financial institutions where I have set up an account and they were required to know their customer. I've had to provide identity documents once, but I'm trying to think if any institution has ever asked me, “Hey, as part of our every-five-year compliance, we need you to come in or resubmit these documents.” I don't think anyone's ever asked me for those again.
No, and my guess is during those five years, for most companies, they default to even less secure methods, even if you were verified and they checked the compliance box that your identity was confirmed when you opened that account. Just today, I called somewhere and they said, “I need to ask you some security questions.” They literally said, “What's your name?” That's the level. That happens probably once a week. Sometimes it's, “What's your address?” or “I can't help you unless you confirm your email address.” It's nonsense, theatrics.
I get frustrated because I feel bad for the person asking. Sometimes I just laugh. You know that's not keeping my account safe. By the way, then you can't hear me and I have to spell my email address three times. It's frustrating, it's time consuming, it has no security value, and yet that's the norm today. It's a sad state of affairs, and unfortunately it's enabling these bad actors to take advantage of these weaknesses in our society.
I think the latest one I had was, “What is the PIN code off of your bill?” I'm like, “Oh, the thing that could be snagged out of my mailbox is the way that you're proving who I am?”
At least they're trying; that's a little bit better. We forget how hard some of those are. The other part is that you put in friction with some things like that, where you might say, “Oh, gosh. I don't know. I don't save my mail, or I can't find that piece of paper.” We see it a lot with later-age demographics. People say, “What street did you live on in 1950x?” They're like, “I can't remember. I don't know.” “OK, on to the second question. You failed two questions.” It's just hard. It's hard to know, it's hard to remember. The questions are so obscure that they also don't serve the security benefit.
Yeah. I think there was one time I was doing one of the security questions. It asked the timeframe that I was at a particular address. I'm like, “That was 30 years ago. Was I at this one before that one or that one? I don't remember the order. I was there for six months or a year while I was in college. I don't remember what specific year that was.”
We forget the friction that we've begun to tolerate today for what's actually been very little security value.
The funny thing is I want to get access to the MPD breach because I don't know the answers to my own questions.
Some have, surely. Healthcare has been another wild one, where healthcare organizations, hospitals, doctors, and providers have all been targeted aggressively in these forms of attacks, particularly in the last 12 months. A lot of government agencies have been very clear in warning on this. It's actually sad to see how much healthcare data is trading for in these dark web-type environments, because people are using the information, frankly, to do targeted scam attacks, more targeted outreach to you. They're feeding AI models. If someone can reach out and say, “Hey, here's a care group for an ailment you might be facing,” or “I'm reaching out from the doctor's office. I know you were here last week,” it makes it much more personal. It's even harder to assess: is this real, or is this someone after my information?
“We're running this program for people with your health condition,” or, “We'll pay you to participate in this program.”
“Just put in your banking details here.” Meanwhile, it's a reversal and the money is withdrawn from the account. It's very common.
Because you are working with companies on identity, I always wonder about this. I've run across it, though I haven't made a habit of asking companies. Are there companies that are like, “Oh, yes. We have an additional level of security on your account that we can turn on, but we don't advertise that. We don't tell you about it; you have to ask us if you want it turned on”? Have you run across that in working with your customers and working with companies? “Sure, we have a higher level of security that we can operate your account with, but it's just not turned on by default.”
Yeah, definitely. It's actually a very hot debate inside companies, on the workforce side too: which employee demographics, privileged users maybe, or full-time employees versus contractors, do we want to put extra protections on? The same on the customer side. It's an interesting debate because some companies say, “No, the security should be equal for everyone, so we want to make it available to anyone.” Others say, “No, our best customers or the like should have access to it first.” It's a heavy debate. In a way, I don't mind it. I'm glad that people are at least thinking about how to further protect these accounts. You want that to be as universal and widespread as possible.
I guess the thing for consumers is to start with your most important accounts, contact them, and say, “Hey, I'm concerned about account takeovers. What additional security features can you enable on my account?”
When you see it, or at least see it marketed to you, great. Again, my advice would be to take advantage of whatever they're providing. Even if it's not perfect yet, it's probably in the right direction.
Any level of security is better than the minimal level of security.
Yes. However, interestingly, when you see companies that start to differentiate on it, to me that's actually also a draw. I know HubSpot offers account takeover protection on their accounts. I love that. In choosing CRM providers, I know that's a differentiator.
When I started calling around looking for commercial banking accounts, I'd say, “Hey, what's your level of security?” I think I found one that said, “Oh, you'll love this. We offer YubiKeys: hardware-backed devices you can plug in to access your account.” I said, “Oh, that's great. What happens if I lose it or I get locked out?” They said, “Oh, don't worry. We just send you an SMS text.” I said it's theatrics. I appreciate them trying to differentiate on security, but that's actually not going to keep my account more secure if they haven't thought about the recovery mechanisms.
In fact, they probably made the account less secure by doing that.
Again, you can always ask for more, but I think even asking for it starts to create this sense of demand. That matters because we all have to be pushing, I think, for more secure methods to protect these digital accounts in our lives. To your point, they're different things. You pick your most important accounts. They might be different for different people. In some cases, that's banking. In some cases you're a gamer and it's your gaming account, or you're a social media person, an influencer, and it's your social media profile. Whatever it might be, we all have digital accounts. Like all things in our lives, some matter more to different people, but all of them should be able to be protected and safe.
My general argument—and people disagree with me until they agree with me—the most important thing that you need to protect is your email.
I think there's a lot of merit to that. There's a central point for so much.
Because you log into your 401(k) with your email, you log into your Social Security with your email, you log into your bank with your email, you access your phone with it; everything revolves around our email. If we lose access to that, or someone else gains access to it, it's not a guarantee that they get access to everything else, but they've got one of the factors for a whole lot of accounts and can pretend to be you now. It comes back to the identity issue.
It does. Sadly, a lot of the consumer internet providers don't have great solutions there. Some of them increasingly offer multi-factor authentication, which is great, and you should absolutely turn that on for your email accounts. The flip side too is—I love Microsoft in many ways, but with Office 365, if you're a consumer or small business and you get locked out of your account, the average time to recover it is 14 days. They just freak out. There's a department, there's a whole process. “Oh, you're locked out, ooh, well.” If you can't access the admin account, there's a waiting period, there's delay, they send you information; it's wild. Imagine being locked out of that email account you rely on for 14 days.
I can also make the argument of, well, as long as no one else can get into that account for 14 days.
That's their argument. That's right. The waiting period calms things down.
If you think about the way most scams work—we even talked about it with deep fake technology—the barrier to entry gets lower and lower, and it becomes easier to execute. “Hey, I can get an SSL certificate for free now. I don't have to pay for it at all.” OK, so it really isn't a mark of security. It isn't authenticity. It isn't identity at that point. But if you introduce friction, it's not that they have to wait 14 days; by introducing friction, the chances are the scammer will move on to other targets rather than wait out the 14 days. Of course, your life is pretty miserable for those 14 days.
It's the challenge of relying so heavily on email, but I agree. It's definitely worth protecting. That's an account at the center of most of our lives; it's worth protecting. Particularly, again, if you're in a workplace context, protect your employees' email accounts. Today, most companies hire people and maybe HR runs a background check, typically against a Social Security number. Somebody can pass a background check, but it doesn't mean you know that the person showing up to work is the same one who passed the background check, or an I-9 check in the US to work; again, regulatory compliance. Then something goes over to IT that says, “IT, I just hired this new remote worker. Give them an email account.” Typically, they email the person's personal email, their Gmail account, with a link to set a password and to set up MFA. Gosh, that is a very risky proposition. If you are not positive who you hired, you're then inviting that bad actor onto the network.
We were chatting earlier about what we've seen from these North Koreans, and others working with North Koreans, impersonating employees to apply for jobs, get those jobs, and get access to the networks. It's causing just a massive security risk. It's almost at epidemic status right now with how common this has become.
It is so amazing how we have switched from nobody working remote to whole industries where, “No, we don't even have a physical office anymore.” The scammers will find any way that they can to exploit opportunities like this, unfortunately.
They're human processes. Even the excuses people use are the same: “I'm on vacation this week.” “My manager said I really need to do something.” “I just got a call and I can't access my account.” All the same stuff. It applies whether you're an all-remote company or you just allow some remote work. Some companies say, “Well, sorry. You have to come back to the office. You have to drive three hours or get on a plane.” OK, but that's not necessarily scalable across large organizations.
Yeah. Particularly if you have a fault in your system that requires everyone to reauthorize, now you're back to square one all over again.
You're right.
Doom and gloom. That's what the podcast is all about, doom and gloom.
I like that optimism too. Again, I think there are a lot of wonderful things that GenAI is bringing us. There are actually wonderful use cases for deep fakes that can be beneficial for society, but I'm also really glad that the attention is shifting to how we protect accounts. I think the increased awareness there is driving better behavior. It's actually driving change in industry. If we can make these digital accounts as secure as, or more secure than, the way we were doing things otherwise, it leaves us in a better position, from a national infrastructure perspective and in our individual lives. There's a lot to be said for conversations like this, just educating people and increasing awareness.
Yeah. I think the more that we can differentiate between identity and authentication in our lives, the better the scenario is. Maybe this is a little bit on the borderline of authentication: it has probably been a couple of years now since an auto window replacement company started to advertise, “We will send you a photo of the person who's coming to do the work.” That was the beginning of, “OK, now the company has said this is who is going to show up at your front door. If it's not this person, don't let him in.” At least it's the beginning of a process, as opposed to some random person walking in and, “Hey, I'm the service person you called.” “OK, come on in.”
Yeah. It's a great example. You're right. Bringing trust, consumer demand, and business demand for just increased trust in these scenarios, I do think that leads us to a better society.
Yeah, I a hundred percent agree. Aaron, if people want to get ahold of you, how can they find you online?
Probably most active on LinkedIn. One of the things we try and do is just publish a lot of content that's educating people and keeping them aware of the things we're seeing, the techniques we're seeing that work, the challenges that are coming up. Please follow me, follow our company, Nametag. We love to just have conversations around this stuff. It's really important in shaping so much of our society.
For people that are particularly interested, if they're running a business and they're looking for identity solutions, what's the website?
We're at getnametag.com.
Awesome. Aaron. Thank you so much for coming on the podcast today.
Thank you again for having me, Chris. It was super fun.