Easy Prey Podcast

Privacy vs Reality


Online security advice often sounds simple until you actually try to follow it. Between password managers, privacy settings, and data brokers, protecting yourself can start to feel like a full-time job. That gap between what sounds easy and what’s actually realistic is where a lot of people get stuck.

My guest today is Yael Grauer, a freelance investigative technology reporter who covers privacy, security, digital freedom, hacking, and mass surveillance. She also works as a program manager of cybersecurity research at Consumer Reports, where she manages Security Planner, a free resource that provides customized guidance to help people stay safe online.

We discuss what actually matters when it comes to protecting yourself, why so much of the responsibility ends up on individuals, and how to approach security in a way that’s realistic. She explains where the biggest risks tend to come from, what people often overlook, and how to make practical decisions without turning it into something that takes over your time.


Show Notes:


Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review. 

Links and Resources:

Transcript:

Yael, thank you so much for coming on the podcast today.

Thank you so much for having me.

Can you give me and the audience a little bit of background about what you do and why you do it?

Sure. My name is Yael. Right now, I'm a program manager at Consumer Reports, working on cybersecurity research and a digital security tool called Security Planner, among other things. And then my background is actually investigative reporting. But before I did investigative tech reporting, I was actually a health and fitness writer, and I wrote a little bit about content marketing.

But then I got really concerned about my own security, and I started doing all this research on it. I realized I have to write stories about this if I want to get paid this month and thought it was kind of more interesting than writing about health. Honestly, I was able to help more people who had really specific questions. I just got really into it and haven't looked back.

As an investigative writer, did it kind of freak you out the more you started researching security and privacy and these sorts of things?

I think it was more complex, yeah. But I was kind of aware of some of the things that people could do, like, oh, random people can find out where I live, like, before I started looking into ways to secure myself. I think I was aware of the issue. But yeah, I guess it did. I guess the fact that it's so difficult to do what should be basic things, and kind of learning more and more about all of the things that are not in place, and where the onus is on the consumer instead of companies or governments, it's a little disturbing. It gets kind of worse every day, too.


And that's the challenge, very much, is that it's on us as consumers to protect ourselves, whether it's from cybersecurity incidents or just even regular consumer protection stuff. So much of the onus is on us to be aware of what our risks are, what our responsibilities are, and even knowing, well, this really is the company's obligation, but they're not upholding it.

Yeah, I think it's really awful. I joke that the goal of Security Planner is to put itself out of business. For your listeners who don't know: you go to securityplanner.org and fill out a survey about what devices you have and what you're worried about, and it gives you a list of things you can do to secure yourself. And sometimes people tell me, “This list is too long.”

I think that's a societal failure. It's a policy failure. It's a failure on these companies that they're not building things that are secure by design, and that there are so many steps you have to take to make sure that you're protecting yourself. I see that as kind of a societal fail.


Running Security Planner, and I don't know if you guys run reports on the plans that are generated, what are kind of the top three things that people should be doing, or that are most recommended by Security Planner?

I'd say most people should be—it’s kind of the same stuff you've probably heard over and over again. Use a password manager to make sure you have a unique password for each account. Use multi-factor authentication, that kind of thing. Deleting unused accounts is a big one. Keeping your software updated, because there are bugs that people have found.

Actually, I just thought I'm doing this recording without updating my laptop, because I'm like, “I'm not going to have time to update it before we start.” Now I have, like, low-level anxiety, but sometimes people will just put it off forever, and that's bad.

Yeah. You were talking about there being so many things that we need to do. In the feedback people give you, is there a certain amount of paralysis people go through, from the perspective of, “Oh my gosh, there's so much. It's just too overwhelming,” so they don't do anything?

Yeah, I think that happens a lot when you tell somebody, like, “Oh, you've been reusing your same password. You have to enter all 500 of these into a password manager and reset all of them.” Like, people do not want to do that. Sadly, a lot of times, people don't even want to do anything until after they're compromised, and then they start panicking, so it's kind of like buying locks for your house after it's broken into.


But what we try to do with Security Planner is prioritize. We customize the list based on what each person is concerned about, and we tell them, “If you can only do one thing, do this thing. If you can only reset one password, reset your email password, because somebody who gets into your email can reset all of your other accounts. That's a really important one.”

We try to break it down and make it easier for people to do. There are people who are just like, “There's too much.” And then also, there's people who are like, “I'm already so secure and I forgot this one thing.” It's frustrating for sure.

It's challenging because there's so much tech in our lives, and we are so responsible for it ourselves, even tech that's provided to us. This is one of the things that gets me annoyed: so many people have routers that are provided by their internet service provider. They provide this to us, and then they charge us $6 a month to rent it, but they don't maintain the security on it.

They don't make sure the firmware gets updates. “Oh, it's end of life. Actually, it was end of life three years ago. There are no more security updates for it. Oh, and we didn't bother to tell you about it, and we're not going to replace it.”

Yeah, I know. That's terrible. I feel like some of this is on the companies. That's kind of what's so cool about working at Consumer Reports: we can approach it from many different angles, because we do have an advocacy team. I'm technically on the advocacy team, and we tell people, like, “Tell these companies to do better.” Or you can tell governments, “These are regulations you should pass,” or you tell individuals things that they can do. It's kind of a three-pronged approach. But yeah, it is bad. Everybody should be doing better, I think.

I mean, the advantage is, like, someone like Consumer Reports actually has a bully pulpit. You've got enough customers and influence that when Consumer Reports says stuff, at least some people listen.

Yeah, I hope so. I guess it's different. I want everything to change immediately, overnight, and it's kind of a long game. CR has made a lot of changes. There have been a ton of improvements, but I want Security Planner to be out of business. I want us to not have to tell people, “Here are the 50 settings you need to change to be secure.”


And that's not just on the policy government side. It's also the companies, like, “They should build their products secure by design, less deceptive design,” because people sign up for things without even realizing it. There's just so much I want to see happening.

It's challenging because Consumer Reports' goal is not to put companies out of business in a sense. It's like you're not trying to prevent commerce. It's like, “Well, we want commerce, but we want it to be responsible commerce.”

Right, exactly. No, I was saying I think Security Planner should be put out of business.

Yeah, no, I understood that. Consumer Reports wants companies to do better. It's not our goal to shut down entities; we just want them to do better.

We get excited when companies make changes that we recommend. Sometimes they do it because there is so much pushback from consumers, and sometimes it's because of regulations. But yeah, it's always good when things get better, and they are incrementally getting better. I try to remind myself of that.

And how long has Security Planner been around now?

Citizen Lab launched Security Planner in, I think, 2017, and it was really, really cool. Then we inherited it and kind of carried the torch forward on Citizen Lab's work in 2020, and it's gone through different iterations. It's gone through several designs. The really cool thing about being able to work on it at Consumer Reports is that we have a UX team, so we ran it through tests and made improvements based on feedback from user experience testing, and we just keep iterating on it.

Yeah, and I imagine it's not the sort of tool that you build once and then never have to go back and touch again. I have thought about the same thing when writing guides, like, “OK, here are all the settings that you need to change on your phone.” And then the realization that every point release of every operating system changes those instructions.

Yeah, actually, we've been really lucky. We've been working with a security education consultant named Jeff Landale, and he basically does a running audit; he's constantly looking to see what pages we need to update. And then I'll keep seeing articles and think, “Should we add this? Should we do something about this?” I'll throw it into our meeting agenda, and we go through and change it.

It is an ongoing thing, and I've been thinking about that a lot, because I was considering adding screenshots to some of the pages that are kind of harder to find. And I'm like, “I'm going to have to update this,” which I think might be worth it for some of them, probably not every single one. But every single time there's an Android or iOS update, I'm going to have to go back in and update it.

That's the annoying thing about that sort of thing when you're doing screenshots. It changes so much that it's frustrating for users: “Oh, that's not what my screen looks like.”

Yeah, exactly. And they're like, “This is completely outdated.” I think that was some of the problem we saw. There are a lot of guides out there that are outdated or not applicable for the user, and it's so frustrating for them, because they will type in what they're looking for and it will give them instructions for a different device, or settings from five years ago. So yeah, I think it's cool.

And then there's just keeping it relevant. Every once in a while, we get to take a page off. We took our first page down, which was about installing HTTPS Everywhere, and I'm like, “This is so cool that we get to take this down, because it's built in now. You don't have to do anything.” There are incremental improvements, but yeah, it's a lot of work.

But I think it's good, because there are a few other guides out there that are also up to date, like EFF's SSD, Surveillance Self-Defense. They keep theirs updated, and there are a few others out there. But it was interesting, because we link to other sites. We have a resource page, and we're constantly going through it, like, “These haven't been updated in the last year, so we have to take them out.” The internet is kind of a graveyard of abandoned security guides, which is sad.

But from a device level, for privacy and security, what are the top couple of settings that someone should change on their phone?


That's a tough one, because I think it really depends on the individual. But one of the things we try to tell people is to at least be aware of what information your apps are asking for. Some of it makes sense. If you use Instagram, it's going to need your camera. If you use a mapping tool, a GPS tool, it's going to need your location. Your flashlight app doesn't need your location. That's one of the settings where I'm like, “If you don't need it for functionality, don't give away that permission.” That's one of the things I tell people to do.

But a lot of it, it's really individual, so different people have different things they're concerned about, which would impact the settings that they need to change. I post a lot of things publicly on social media, but a lot of people don't want that information public, and they're like, “How is it that my friend's cousin knew about this thing because it's public on your Facebook page?”

That's one of those things. It's not so much that app permissions are challenging in and of themselves, but that every social media platform has different nomenclature, a different way of naming who can see this, who can reshare it, who can tag you, who can't tag you, and different defaults.

If you're not really, really aware of that particular platform, it's really easy to accidentally post, “Oh yeah, my cousin had a kid,” when they didn't want anyone to know. And now it's totally public for everybody. They were good about their security and their privacy, but because I blabbed publicly, now it's not private.

Sometimes when you reset or update something, they will just re-change your settings, and it's like, “Jeez, I turned this off. Now I have to go back in and turn it off again.”

No, I think that's super frustrating. But you make a good point, because there are times, like, I know people who don't want their pictures posted online, and I've seen people post their pictures because they just assumed it was fine and didn't know. Like, “Oh, you were posing for a picture and you didn't say anything.” Getting that consent.

That's a big one. We have a page in Security Planner for people who are worried about stalkerware, and a lot of times, they're worried that somebody has compromised their device because that somebody has a piece of information they didn't share with them. A lot of the time, it's because they're talking to friends who are friends with that person, or there are non-technical reasons for how that data is being, I don't want to say leaked, but shared, spread.

It becomes available.

Right, exactly. It's looking at all these things. But it is frustrating. I don't feel like every single person should need, kind of, James Bond-level awareness of these things, and yet here we are, right? Every woman I know doesn't post their location until after they've left, just things like that. It's bad. So we're trying to work on that.

And it's particularly difficult because what I have found running the websites and the podcast is that there are people that are concerned about their privacy and their security, but they're not technology experts. They start looking through the settings and they start looking through the phones, and they run across things that they don't understand. That can frequently become a, “Well, look, there's stalkerware on my phone.”

Right.

“There's open source stuff on my phone.” Some of that is just a native part of the way the phone has been designed, part of the operating system; it doesn't necessarily mean you've been compromised. But it's really challenging, because there are people who are being stalked, and there are people who think they're being stalked but aren't, because they just don't understand the technology.

Well, it's just that you're not in a clear headspace. You're panicking. I helped somebody reset an account that was locked. That should have taken five minutes, but because she was panicking, it took half an hour. Sometimes it's things like, you were sharing a mapping app with your partner and they're no longer your partner.

You forgot to turn that off or they have access to things that you forgot to reset or you set up an account together. Sometimes it's things like that. I think it's hard when you're in the panic of a moment to think about things in a really analytical way because we've all been there, I think.

When the OS or the app changes and there's a new version, sometimes the platforms reset the defaults. Now you're like, “Well, I know I previously set that to don't share. Now it's set to share. Does that mean someone compromised my device?”

It's tough. Also when people are like, “I swear I only told one person this thing,” it's hard to tell them, “Well, maybe that person told somebody else,” or, “Maybe somebody overheard you.” Then you end up just second-guessing. It's really hard to come up with technical solutions for these real-world problems.

You can't always have a technical solution for biological systems.

Right. Exactly.

Can't have technical solutions for humans because we're humans.

Have you seen that XKCD where somebody is like, “This is how you think people will get your password. This is how they'll actually get your password,” and he's, like, holding somebody out of a window with a wrench or something? It's a good one. And like, yes, this is true. Because people will ask me these bizarre things. They'll be like, “Can I be my own threat model and set up something so that I can't access it even if I want to?”

That isn't going to happen. I also do a talk, in my own personal capacity, with David from Freedom of the Press Foundation, who also does it in his personal capacity. We do a talk every year at this conference called practice con on the worst cybersecurity reporting, and we make it educational. A lot of what we cover is that people are worried about these things that are not happening in the wild.

They'll be like, “Oh, you should not charge your phone in public, because there's a tiny chance it could be, like, a rogue outlet or something.” Somebody always comes up to me afterward, every year, and they'll be like, “You're minimizing real threats.” But I don't know a single person who has been compromised because they charged their phone in public, and I do know people who have had personal safety issues come up because their phone wasn't charged. At some point you have to look at what is actually happening to real people, not what you think might be possible in a decade for people targeted by a nation state.

What should they be doing and not doing right now? How do you approach this?

I always think about something I learned. I went to survival school once; I really liked it. They had this concept called “What's going to kill you first?” You're trying to decide, like, “Should I drink this water from this lake? I might get giardia, but if I don't drink it, I will die of dehydration, and that would kill me first. So I'm going to drink the water and deal with giardia later.”

This is a concept I think people should use when thinking about security: not necessarily what's going to kill you, but what's going to get you first? And it's not the zero-day that might hit in 10 years.

Cybersecurity is one of those weird things, because we see so much on television, in the police and law shows, and we think what we see there is reality. Like, “Oh, some guy. What's their phone number? OK, I've got access to their device. Yep, I know exactly where they are. Let me read their email, and let me open up their Facebook account.” This stuff happens instantly.

And then they hear news stories about tools that are available to nation-state hackers, and when something happens to them, our minds go weird, our minds go to these places: “Clearly I'm a target of this sort of stuff. Someone has compromised me.” But I don't think nation states are going after…OK, you're a journalist.

If you're a journalist and you're reporting about what's going on in Iran, yeah, you might be a target of a nation state, you know, but if you're Joe Smith and you're working at the factory, nation states probably aren't trying to hack you.

There are actually some really cool tools for people who are being targeted. There are all these advanced protection programs, and Apple has a really good team on iOS that builds tools. But it makes it really hard to use all of your devices. I don't think the Advanced Protection Program does, but I'm like, “Man, I really want to play this game and I can't, because it just doesn't work since I'm in Lockdown Mode.”

If you want to be secure, enable Lockdown Mode, and you can make phone calls and that's about it.

There are just a few things. I am in Lockdown Mode because I've done some reporting as a freelancer on China, and I just think it's a good idea, because I think I'm more likely to be targeted than some of the people who use Security Planner. We kind of call them norms; I don't know what to call them.

The general public.

There are things I want to do where this weird app that I use just isn't working, and it's frustrating. But some people really are worried, and there are things you can do. A lot of times, though, people do those things while not doing the basic things, and it's like, do the basics first. Passwords first.

They're worried about being targeted by nation state platforms, but they are using the same password everywhere.

Right. It's like, cool, but who do you think is updating their software? I think people who are really technical also sometimes forget about physical security. I'm like, how will you know if somebody is at your doorstep, and what are you going to do? Do you have a plan? Because people have different comfort levels for involving law enforcement or not.

Before you're in this high-stress situation, you should figure out what you're going to do. Like, cool, you have your camera now. What are you going to do if you see somebody on it? What's your plan? I've made this mistake too, where I was sharing information about an Airbnb I was at once, because it was a very weird situation.

I don't smoke marijuana, and I accidentally stayed at an Airbnb where, looking at the description afterwards, I'm like, “Oh, no wonder they're really upset that I won't smoke with them,” because it was clearly described in a way that just went over my head. I was sharing this information with some people kind of casually, and they're like, “I think we can figure out where you're staying,” and they were able to just look it up and find it. Here I am, this security person.

It does go to show that innocent information we talk about casually can be used to identify very specific things these days, with the OSINT, open source intelligence, tools that are available to people, and even without tools, just searching.

I had a housemate once who was really, really panicking because I asked her if she was about to go hiking, and she thought I was psychic or something. It was just because she was filling up some water bottles. No, this is just pattern recognition. But people are like, “How did you know?” It's very funny.

Before we get too much further, I do want to ask what I try to ask my guests who are in the counter-scam, counter-fraud, cybersecurity space: have you ever been a victim of a scam, a fraud, or a cybersecurity incident? And what was it?

I'm lucky that they were pretty minor, but I have done dumb things online. One time, and I'm not even really a Swiftie, there was some Taylor Swift cardigan that I thought looked cute at some ridiculously low price, and I almost bought it. I was so close to buying it until I was like, “This is too good to be true.” And then there was a time when I got a message. I think I was on Skype.

I was recording a podcast on Skype, and I never use Skype, and I saw that somebody had messaged me. It happened to be somebody I had talked to the day before, whom I hadn't talked to in months. They were like, “I need your help.” I clicked on some link, and luckily it was a link that would have owned my computer if I was on Windows, but I was on a Mac. I eventually had a friend look into it, and she was like, “What are you sending me?”

“What's going on here?”

Yeah, exactly. That was dumb. I failed a phishing test once. That was embarrassing. I always thought if I failed a phishing test while working in cybersecurity, I would get fired or something. It wasn't targeted, but it felt targeted, because it just so happened to be something that was on my mind. None of us are above clicking on things.

I use a security key. I like to think that I won't be owned even if I screw up, but not everywhere accepts security keys. There have definitely been instances where I'm like, “If this was a phishing link, I totally….” Like, somebody once sent me a really targeted email saying, “I've followed your work for X years and I wrote a paper related….” And I'm like, “If this is a phishing link, I'm totally owned, because I definitely want to read this.” Everybody has their phish.

A lot of it is just a matter of circumstances aligning. I was talking to one of my previous guests who works in cybersecurity, and I think he ran the department that does the phishing tests. He had just recently been talking to HR about some benefits stuff, and then he got an email about benefits. The way the human mind works, he put it together as if the email were in response to the conversation he had just been having. He clicked on it and started to fill things in.

And then he was like, “Wait a sec, why is this not my password? Why is the password manager not wanting to autofill the password?” And it was like, “Oh, this is a phishing site.” It wasn't an internal phishing scam, but he had just been talking about benefits and he got an email about benefits, so it was like, “Well, this is clearly the email I was expecting.”

That happens a lot. I think a lot of phishing tests are just, if you click the link, you fail, but I don't know if I agree with that, because usually they will ask you for a password or some other information, and that's when, hopefully, it starts to click. Whereas you can just click a link when you're tired or stressed out. I think there was also one time when I bought something and then got a receipt from a weird email that didn't seem like the company.

I'm like, “This feels sketchy, and I'm going to immediately cancel it with my credit card company.” And that's actually why we do a consumer cyber readiness report every year, based on survey data from the survey team at Consumer Reports. We were looking closely at this because people who encounter a scam are more likely to lose money to that scam in certain demographic groups.

It depends on the year, but we've had data on Black consumers, Hispanic consumers, and also people who have lower household incomes. There's limited information you can glean from a survey, but the more we looked into it, we were like, “I think the people who are getting their money back are paying with credit cards, and the credit card companies will refund your money.”

Because if you're paying with an app, like different payment apps or other methods, you might not get that refund. If I had used a different method, I would have lost that money, which wasn't a lot of money, but still. I just think everybody's susceptible no matter what. You could work in cybersecurity, like your friend who ran the phishing tests. And that's why I think things like security keys are so important.

I think it's just a fallacy to think that we as humans can get everything right a hundred percent of the time.

And I'm glad that you do this, because I think it's good for people. A lot of times people don't get help, or don't even contact their banks, because they feel dumb. They're like, “Well….” I think it's good to normalize it for people. Not that it's good for people to get scammed.

But in order for the problem to go away, like in the sense of Security Planner, people have to talk about the need for Security Planner. Companies need to take action. In the same way with scams, if no one's talking about being scammed, then no one thinks there's a problem.

People like to think, “Well, I won't get scammed because I do XYZ.” But not everybody does everything. There are voice scams, like voice-cloning scams. And people are like, “Well, I'm perfectly secure,” but are your friends secure? Are the people you're traveling with? There are ways to target anybody. Hopefully it will happen less and less, or people will catch it earlier, or catch it while it's happening.

It gets complicated. With voice, OK, my wife and I can have a secret password, so if something seems out of context, we can ask for the password. But are we going to have a different password with every one of our friends? If my friend Bob calls me and says something and it sounds like Bob, do I ask Bob what the password is? Bob and I haven't established a password. Do I need a password manager for every friend of mine, and do they have the password manager?

I had a friend who just lost his phone, and I jokingly asked him for the code word, and he's like, “Did we set a code word?” But he had emailed me about something that only he and I would know about, which was kind of offhand. I had given him something to give to his girlfriend, and he was like, “Oh, she really liked the thing.” I was like, “OK, I know it's you,” even though I don't think he did that on purpose.

It's complicated and it's tough. Even if you tell everybody everything, they're not all going to do it. I had a friend whose Instagram was hacked, and it looked like she was selling things. The reason people were clicking and buying is that she's a pillar of the community, you know what I mean? She had such a good reputation as a warm, kindhearted person who had done so much for the community. You're targeting people through that trust. It really makes me upset.

I mean, that's why scammers target accounts like that: because there is authority, there is trust. “Of course Nancy's not going to defraud me, because Nancy's a pillar of the community.” Well, it's not Nancy anymore.

Right. Exactly. It was bad. That one was really bad, and it's frustrating that you can't get ahold of people. Also, there's been reporting, I know there was some on Reuters, about how much money these companies make from ads run by scammers, and how slow they've been to stop running them. They should do a better job of know-your-customer for those advertisers, and also just make it possible for people to get somebody on the phone for help.

There have been situations, before I was at CR, where there was some weird issue with somebody's account, and I knew people inside the company, so I could call them and they could flag it and look into it. That's nice and all, but I shouldn't get special privileges just because they know me from articles I've written, like…

I'm trying to think of all the people I know who've had their Facebook accounts compromised. Only one of them was able to regain the account. Most of them would have been fine if the account just got shut down. But even though dozens of their friends reported to Facebook that the account had been compromised and someone had taken it over, Facebook kept the account alive for at least two years.

It was actively scamming. If you looked at the feed, it was clear that the account was being used to scam people, and everyone was reporting it as a compromised account, and nothing happened. I definitely feel that these companies need to do a better job.

I think that some of them, I can't remember which, have cracked down on employees helping people they know, or people they don't know that well but know loosely. I'm like, “That's not better. Fix it the other way. Fix it the other way.”

“We cracked down on the problem. We're not letting our employees help people they know anymore.”

Exactly.

Wait, it used to be that you'd help some people. Now you're going to help no people.

Exactly. Now it's harder to get even that much help. I remember one time, and I wasn't even doing this to help people get their accounts fixed, but a bunch of accounts were being flagged and taken down on a social media site. I was like, “Huh, this is a really interesting story. I should write an article about it.” I contacted the company, and suddenly the accounts were back up, which might've been completely unrelated, or might not have been. All these people thought I had a special in, and they would contact me with account questions, and I'm like, “This might've been random. And if it wasn't random, it's not a thing I can replicate. I don't work for this company.” But it was very weird.

That's challenging. I know you've also written about mass surveillance. I'm trying to phrase this in a way that is not self-defeating: what should people be worried about with mass surveillance, and what can they actually do about it?

I worry about things like location tracking. We've seen reports recently about governments being able to access advertising data to track individuals. There are limited things you can do, and they might not all work: turning on Global Privacy Control, turning off your ad ID. There are a few little settings and tweaks, but not every state enforces Global Privacy Control. It's really tough. It's hard to fight nation-states. Using Signal, I think, is a really big one.
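For anyone curious what “turning on Global Privacy Control” actually does mechanically: GPC is a signal your browser attaches to every request, a `Sec-GPC: 1` HTTP header (plus a `navigator.globalPrivacyControl` flag in JavaScript). Here is a minimal sketch of how a site that chooses to honor the signal could check for it; the function name is illustrative, not from any particular framework:

```python
def honors_gpc(request_headers):
    """Check whether a request carries the Global Privacy Control
    signal: a Sec-GPC header with the value "1" means the visitor
    has asked not to have their data sold or shared."""
    return request_headers.get("Sec-GPC") == "1"

# A browser with GPC enabled sends the header on every request.
with_gpc = {"User-Agent": "ExampleBrowser/1.0", "Sec-GPC": "1"}
without_gpc = {"User-Agent": "ExampleBrowser/1.0"}

print(honors_gpc(with_gpc))     # True
print(honors_gpc(without_gpc))  # False
```

Whether a site is legally required to respect the signal depends on state law, which is the enforcement gap mentioned above.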

Whether you're a journalist or an activist or whatever, people are like, “Oh, I'm worried, so I could use Proton Mail.” But with a government order, they can use a mutual legal assistance treaty to get that information. It's like: don't use email, use Signal for those messages. Turn on disappearing messages for anything you're worried about. Some of it is hard because a lot of times you don't find out until it's too late.

Being careful who you share what information with. This came up when people were worried about being targeted for allegedly having abortions in states where abortion is illegal, and people were like, “Oh, we need to look at period-tracking app privacy.” And I'm like, “You need to look at who you're texting. You need to look at who you're messaging on Messenger when it's not end-to-end encrypted, or even if it is encrypted, who's going to turn that data over to authorities?” That's the kind of thing I think about and worry about.

Or who are you going to accidentally add to the group chat?

Yes. I keep trying to get added to those.

Because no journalist has ever been—OK, I'm going to go there.

Maybe if I changed my name to initials, because I've never been added to one of those. But everybody's had an email go to the wrong person. I've done this too, since we're talking about embarrassing things we've done: accidentally CCing the person you're talking about. That's bad, but not as bad as some of the government ones. Use the SCIF, if you're a government employee.

Be mindful. I think those are the challenges. There are legitimate conversations to have. If you're thinking about automatic license plate readers, there is a societal benefit: “Hey, we know that the owner of this car is a murderer and he just drove through this intersection.” That's good information to know. But once cities start saying, “Well, you went through this intersection at this time and the next intersection at that time, which means you were doing one mile an hour over the speed limit, so we're now going to automatically send you a ticket,” it's like, “OK, now you've gone too far.”

They're collecting so much data. How come that data is never available when it's needed? Why don't they know where Nancy Guthrie is? That kind of stuff. Some of this might be my own paranoia, I'm not sure, but I feel like people just collect a lot of data so they have ways to target individuals based on…

Maybe that data shows that they were doing something, but I don't feel like it's actually being used for crime stopping. Then we see that happening. This is something EFF has done a lot of work around: these tools, they're like, “Oh, we're only going to use these on the worst of the worst criminals, to stop major crimes like kidnapping and murder and whatever.” Then it's used for traffic enforcement, or suddenly you're targeting teenagers.

That's the challenge: somewhere along the line, someone goes, “Hey, there's something else we can do with this data,” and, “OK, well, let's do it.” It comes down to, “OK, how do I, as a consumer, fight back?” Well, now I give the entities I deal with the least amount of data that I have to. If there's a form, how many fields can I leave blank before the form rejects me?

Or where are you storing it? If you have a video doorbell or camera, where are you storing that footage? Are you storing it locally, so if police want access, they have to send the order to you individually and you can decide what you want to do? Or are you storing it with a company that can make that decision, sometimes without even alerting you? Just things to think about.

Like in the Nancy Guthrie case: she was not paying for the storage, yet they were able to recover some of the video from the cloud.

Right.

It was not being purged, or at least not in a destructive purge.

I don't understand how that works. It seems like whenever you need the data, they can't get it for you. It's complicated.

I mean, that's one of those things that makes you think. Let's take this particular case out of it: someone specifically said, “I don't want the data stored. I'm perfectly happy seeing my camera and looking at it in real time. If I store it locally, I store it locally.” But I don't know if, in the disclosures of that particular doorbell brand, and I don't know whether it was Ring or whoever it was, it was disclosed that, “Even if you don't use our cloud storage actively, there's still going to be data stored in the cloud anyway.”

You as a consumer don't have access to it, but law enforcement did. Maybe in this case it was beneficial, but if I'm thinking, “Oh, I thought that because I opted out of the cloud storage, it meant that people coming up to my front door had privacy,” well, maybe now they don't.

That should be clear. You mentioned also just how annoying it is to go through these things. You don't have to have all of your privacy policies be these 30-page legal documents. Even if you do have to have that, or think you need it, you can write it to be easy to read. I really love it when I read privacy policies where they say, “This is what this section means” in plain English on the side, like a layered policy. That's great, because you actually know what's going on.

I think people, even if it's something they don't like, like it more when they know ahead of time that this is a thing that could happen, instead of finding out the hard way or by reading an article.

People want clarity. It's like, “Look, we understand that sometimes the cost of doing business is us providing some information. We just want to know what you're going to do with it, in ways that are clear. If you're going to sell it, tell us you're going to sell it. Don't obfuscate it in 16 paragraphs of stuff I can't read. Just let me decide: I'm OK with you selling it, or I'm not OK with you selling it, and I can decide whether I want to do business with you or not.” But like you said, with the terms and conditions and the privacy policies, we quickly hit the check mark, hit OK. They're pretty squirrelly when you get into them.

The third-party thing is annoying too. I did an article, this was a really long time ago, for The Intercept about a project. I think it was Yale Privacy Lab and Exodus Privacy, a group in France, and they were showing which apps link to third-party trackers, basically. Even if you are OK with an app having your data, that doesn't mean that you have consented to them giving it to all their friends. It was just so ubiquitous, it was kind of shocking. I think people are shocked when they find these things out, but yeah, it's tough.

Is privacy dead then?

I mean, I don't know if I want to say it's dead. I think people can choose. I have a really public existence, but there are things that are private, that I've only told a few people, that nobody else would know about. At least as far as individuals go; with companies, there's a whole ’nother story. I think there are still ways to claw some of it back. Actually, in my personal capacity, I'm part of the Lockdown Systems Collective, and we help people delete their tweets, basically. It's really hard; there are ways to find things. It gets more and more technically difficult if I decide I want to delete it all or I want it to be ephemeral.

Some people are still uploading those old tweets to Bluesky, because we help them migrate to Bluesky, but some people just want them gone. Somebody can screenshot a post and keep reposting it forever, but usually they don't. I do think there are still ways to keep some things private. It's getting more and more difficult. Even some of the states are fighting back, though, like some of the things California is doing. It's really cool what you can do.

Yes, there are limitations and it's not perfect, but there are some meaningful restrictions on how data can be used or shared. So I don't know if I want to say it's completely dead. I also feel like when people say privacy is dead, “I can't have it anyway,” they then make these mistakes, like, “Oh, I'm just going to post everything everywhere and not even worry about it.” And that has its own risks.

Right, exactly. If I remember right, California has started a data broker removal service, right?

Yeah, it's really cool. I wrote a thing about it.

I love editors. Thank you, guys. We appreciate the work that you do.

California has a delete request and opt-out portal. It's called DROP. You can demand deletion of personal data from over 500 registered data brokers with a single request form, which is awesome. There are always limitations to these things, though. I'd have to look up the specific law, but I know that in a lot of states, the question is what counts as a data broker.

What's the definition of the word? That sort of thing.

Even when something's not a data broker, a lot of publicly available information is really hard to get removed. It's kind of funny, because I run this data broker opt-out list helping people remove their information, but I know it's not going to get everything out. If you own property, in a lot of states that's public record, and people can find it. If you vote, in a lot of states people can buy that data. I joke that it's security by obscurity, but it's kind of true.

How much work is the person trying to find out where you live willing to put in? Are they willing to leave a paper trail? How much do they know about ways to get this information?

You're kind of hoping that you're a little smarter than your adversary, but it's definitely not foolproof, and I don't even like that term.

Data removal doesn't erase history, in a sense. If it was out there, there's still stuff that's out there, but there are things you can do to at least reduce what is easily available. The data broker removal platforms, including the one you're affiliated with, how effective are they, at least in the moment, at getting data removed?

I'm not really affiliated with any; I run a list that helps people do it on their own. There are a few that I recommend because, in my experience and in some testing we've done with Tall Poppy, they work better than others. I also have a separate list called FIG, filling in the gaps of data broker removal services, because the services will tell you, “We won't get this information; you have to do it manually.”

But we did do some research, and it showed there were a couple, Optery's highest paid tier and EasyOptOuts, whose removal was almost as good as doing it manually. I usually recommend EasyOptOuts just because it's so much less expensive. Most people don't want to pay more than $20 a year, but Optery did do a little better.

For people who say, “I have some money and I do want to spend it on that,” that's definitely an option, but it's great that it's 20 bucks a year. That's what I use. I've used a lot of them and gone back and forth. It depends on what people are worried about. Are you just trying to do this preventatively? Sometimes people will reach out to me because they're about to run for office, or write an article that they think will upset people, or something like that.

They really want to make sure everything that can be taken down is down, so I have them check it manually as well. The thing that's kind of cool is that if you're paying for a service, any service, and they claim to get your information off somewhere and it's still on there, you can just have them redo it. You can just email them, as opposed to having to do it yourself. I don't know how much information on me is still up there.

Even though I run this list and go through it constantly, I'm always trying to remove my data. Someone will send me a new list or a new site I hadn't heard of, and I'm like, “Oh, there's my info again.” I'm paying for a thing, and I'm doing this constantly, and yet here it is. It's bad. We need legislation, I think.

Getting it removed does not mean that it won't get re-added later.

I think in California, they do make the brokers check to make sure it's still removed.

That's if you're in California, if the company is actually defined as a data broker under the law, and if they're actually being compliant.

There's a guy I work with who works on this, and he's really amazing. It's funny, because people will ask me about state laws and I'm like, “You should ask him. I could ask him, have him tell me, and then tell you, or you could just ask him directly.”

Every state that has a law about this has a different law. There's no national standard for how you get out of the data broker system.

Right, and I think people worry that if there were a national standard, and your state law is stricter, would the national one preempt it? There's a lot of concern about that, because people really like California's law. Would it preempt the law? Some of the people I work with, particularly Matt, have been doing tremendous work on these different state laws. One of them has a really cool provision, I'm trying to remember the term: they're not allowed to collect data unless they need to actually use it.

Oh, I like that.

It's great. I'm like, we need more of this.

There's a lot of discussion about, “Oh, well, if we have crypto identities, then we can authorize people to have access.” If we go to the doctor, we can authorize, “OK, you're allowed to have access to my health data, and I own it.” But I don't know that we're going to get there. I don't know that it's achievable or practical for everybody.

It's also kind of scary: if they have all that data and it's compromised… All of this stuff is just ongoing, because how would you roll it out? What about people who don't have certain things? A lot of people in the medical system don't even have phones. It gets tricky when you think about rollout, interoperability, the security protections. It's complicated.

I think sometimes people say their tech can do certain things because it sells, because it makes them money. Remember all the blockchain people? “Oh, we're going to use the blockchain to do X, Y, Z. We're going to have everything on it.”

Everything will be transparent and perfect and there'll never be mistakes again.

I used to write about blockchain, and these companies would send me press releases, and I would ask them, “Why are you on the blockchain?” And nine times out of ten, they could not answer. “Why aren't you just on a shared database? Why is it blockchain?” They would get so mad and be like, “You don't really understand this field.” I'm like, “Explain it to me, then.” I think what they didn't want to say is that they were just using blockchain because it would get them funding, or it was the cool buzzword and it would make them more money.

In the same way, as we're recording this in March 2026, if you don't say the word AI in a press release, you're not a cutting-edge company. AI is the new blockchain.

I'm interested to see, because I feel like a lot of the promises that have been made, it doesn't live up to them yet, at least in my experience. I'm also a little worried about, I think the term I heard was “cognitive surrender,” where people are like, “I'm just going to use AI to do everything and put my brain in a jar.” I'm like, “This is bad.” If I just don't use AI, or don't use it often, I'm going to be so much smarter than those people. You're outsourcing your brain, your functional capacity to actually do things, to a machine.

As soon as someone turns off the power, you will be the only one able to answer questions.

Close your eyes. I heard that in some interviews people were being told, “Close your eyes and answer this question.” At that point, you should just do an in-person interview. I took a test online, though. I just got an IAPP privacy certification, and I took the test and I remember some of the questions were framed in ways where it would have been really helpful to look things up. How reliant am I on search engines? It's like, “People, you have to use that muscle.”

This has been an awesome discussion, and I want to be sensitive to your time. Any advice for people as we wrap up, concerning cybersecurity, scams, or privacy?

I think securityplanner.org is a good place to start, but I'd also tell people that there are things you can do to protect yourself online. It's not all a lost cause. Don't give up hope.

Take incremental steps.

It's just one thing at a time. Thank you so much for having me. This has been a lot of fun.

You're welcome. How can people find you online if they want to connect with you?

I'm on Bluesky. I'm trying to remember my Bluesky name. I think it's yaelwrites.com and that's also my website and my email is yael@yaelwrites.com. I love hearing from people.

We'll make sure to include those in the show notes. Yael, thank you so much for coming on the podcast today.

Thank you.
