Being Foolproof to Misinformation with Sander van der Linden

“Certain types of information can actually impact people to do really bad things.” - Sander van der Linden

We’ve heard the polarizing narrative, “Either you’re with me or you’re my enemy.” Using this kind of polarizing statement can set people up to be manipulated. Today’s guest is Sander van der Linden. Dr. van der Linden is Professor of Social Psychology in Society and Director of the Cambridge Social Decision-Making Lab in the Department of Psychology at the University of Cambridge. He is ranked among the top 1% of highly cited social scientists worldwide and has published over 150 research papers. He’s the author of Foolproof: Why Misinformation Infects Our Minds and How to Build Immunity.

“How do masses of people become convinced of conspiracy theories that can actually lead to harm and violence?” - Sander van der Linden

Show Notes:

“Disinformation is more concerning. It is misinformation coupled with some psychological intention to deceive or harm people.” - Sander van der Linden

Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review. 

Links and Resources:

Transcript:

Sander, thank you so much for coming on the Easy Prey Podcast today.

Pleasure. Thanks for having me on the show.

Can you give myself and the audience a little bit of background about who you are and what you do?

I'm Sander van der Linden. I'm a professor of psychology at the University of Cambridge in the UK. I study very broadly how people are influenced by information, particularly information that's false and misinformation, including manipulation and other forms of influence, and how we can help people resist it through what I call psychological inoculation. That's what I do on a daily basis.

It seems like for you to have been in this field for 10 years, you had some insight as to what was going to happen in the world. But what in the world got you interested in this field? It seemed like an obscure field 10 years ago, at least to me.

It was relatively obscure. When I started out, my interest was originally in why people acquire odd beliefs about the world, including things like superstition, magical thinking, and how people are fooled by rhetorical tricks, illusions, and conspiracy theories, which was one of my main interests. At the time, nobody really cared about conspiracy theories. People thought it was a kooky thing to study like, “OK, who are these five individuals in Area 51 looking for aliens?”

It wasn't that I intentionally decided I was going to write my doctoral dissertation on something like conspiracy theories, but it was always a topic I found interesting. I've revealed this in some interviews.

When I was very young, I had this interest in duping my friends and seeing what the responses would be. It would just be innocent stuff. I'd say, “Oh, did you know the exam was canceled today?” And they'd be like, “Really?” They're cheering. As they would be walking away, I'm like, “No, no.” I debriefed them as a good psychologist, but I wouldn't actually let them walk out. I was like, “No, no.”

What made them think that that was legitimate information? Is it trust? I was always fascinated. When the Internet came about, I started getting really interested in online deception and this whole idea of people in chat rooms pretending to be anyone. 

I started getting into the early days of what I guess people call hacking. By no means did I ever become a professional or do anything illegal, for that matter. But I became fascinated by this process of how people might be manipulated.

On a personal level, it's a bit more grim. My family's Jewish. We're from Europe. Most of the family on my mother's side was executed during World War II by the Nazis. We always have these discussions about how certain kinds of information can actually impact people to do really bad things. How do masses of people become convinced of conspiracy theories that can actually lead to harm and violence? That was always something that was in the back of my mind as something that seems impactful and an interesting psychological puzzle.

It wasn't that I made that my topic of study right away. But over the years, all of these interests came together for me when I started studying psychology. Within psychology, I decided that influence, and how people become convinced of false information in particular, was going to be my ultimate topic.

That's interesting. Let's define some categories. Can you define what misinformation is?

This is the key thing. Whenever I give a talk, people always have their opinions about what's misinformation, what's not. For me, misinformation is just any information that is either false or misleading. I think it's important to have both in there, because people say, “But how do you determine whether something is false or not?”

There are all kinds of methods. There are independent fact checkers. There's scientific consensus in the field. Most scientists believe that the earth is round, not flat. None of us have perfect knowledge, but we rely on scientific consensus as a benchmark for what is likely to be true about the world, or on expert consensus, fact checkers, or other modes of acquiring knowledge.

Misleading often tends to be information that's not entirely false, but it is manipulative in some way—either it's deliberately lacking context, using a particular framing, or particular technique to influence people without their knowledge.

Misinformation for me is stuff that's either false or misleading, but then disinformation is what I'm much more concerned about, which is misinformation coupled with some psychological intention to actually deceive or harm other people with explicit intent. You're actually knowingly spreading false information with some agenda, which could be political, which makes it propaganda, or it could be non-political. That's what I'm much more concerned about.

Technically, if a journalist makes an error, that's misinformation. But we all know people make errors, and sometimes that happens. So I'm much more concerned about disinformation.

For misleading, in your mind, there's an intent behind it. It's not that there's a factual mistake, necessarily, but there's an implied intent behind it?

Yeah. That's where it gets really tricky, I think, from a legal perspective. We often ask ourselves this question: Can you be misleading without being intentional? I think the answer is yes. I think sometimes people can unconsciously create headlines that are misleading, but they didn't mean for it to be misleading. That happens sometimes, but I think the majority of the time, it is intentional, even though it might be difficult to prove.

I tend to use the word misinformation when there's no documented evidence to prove intent, which would make it disinformation, but sometimes we can prove it. Right off the bat, to get into some hot examples: the tobacco industry. One of the courts in the US made a pretty significant ruling not too long ago that the tobacco industry misled people on the link between health and smoking, and that they did so deliberately. That was sourced from documents showing that they knew it was harmful and intentionally decided to withhold that from people and cast doubt on the science.

That's an example where we can say that's clearly disinformation. Whereas a lot of the time, you just don't know whether they really intentionally did this, or whether it was an accident, or unconsciously somebody wrote a headline and they didn't really mean it. I tend to want to give people the benefit of the doubt, so I tend to say misinformation, unless we have evidence that it's disinfo.

So much of the information we get nowadays is online that there's a need to be more sensational when you talk about something, otherwise people won't listen to you. So people write clickbait-y headlines that are probably somewhat misleading in order to get people to click and read the article.

Yes, I think it would be a fair assessment. That's definitely true for social media companies, but also, I think for the Internet more generally. Even credible news outlets are forced more and more to come up with sensationalist headlines to get people to click. It's unfortunate because a lot of the advice that we give is about neutrality, objectivity, and not using manipulative wording and language.

Even though credible outlets will often totally agree with that, I think in the back of their minds, they're thinking, “That's nice, Sander, but we're not going to make any money that way. We need to survive, too.” People click on the story. It's an attention economy now. We're all fighting for people's attention. We have to use some of these tactics.

It is interesting because people often tell me about some of the techniques that I talk about in the book, Foolproof, that summarizes my research on this topic. Some people say, “Look, I'm an influencer. I use some of these techniques. Are you saying I'm spreading misinformation?”

It's like, “Well, what I'm saying is if you define misinformation as using manipulative tactics, then yes, because you're trying to influence people in ways that are not necessarily accurate and objective.” There can be various degrees of people being OK with that, as long as we're transparent, it's ethical, and so on.

Sometimes it's actually done to harm people, confuse people, or for less positive purposes. But at the end of the day, for me, it's like, well, it can't possibly be a bad thing for people to be aware of these techniques.

What is some of the language that you would define as manipulative? Are there some key phrases, emotional responses that indicate manipulative language even if we don't see it in the words that are being used?

When we look at data online, big data we gather, millions of observations scraped from the web, we analyze it with dictionaries and linguistic classifiers to understand the sentiment of different types of content. We see what predicts virality online and also what the accuracy level, or veracity, of the information is.

Typically, what you see is what you said in the beginning: information that tends to go viral also tends to be less accurate. It's the stuff that's shocking, that's novel, that's grabbing people's attention. Outrage is one of those factors that does really well, creating what we call moral outrage.

That's like, you won't believe…

Yeah. There's even a dictionary of moral-emotional words, and those are particular kinds of words. We know that emotional content is more likely to go viral, things like fear-mongering. Fear is a very popular emotion. Any type of headline that tries to convey a sense of fear or instills a sense of fear in people, or stuff that makes people angry and leads to outrage. There are very specific words, particularly things like “pedophile”; that's a moral-emotional word.

That's why a lot of the conspiracies tend to invoke satanic pedophile rings, because that combo is not only emotional, it's also moral. When you hit that nexus of something that's both negative and moral, then you get bonus points when you're predicting virality.

Nobody's going to be for satanic pedophiles.

Exactly. Everyone thinks that's bad. That's what gets you traction. One of the biggest predictors we found above and beyond moral and emotional language is what we call in-group versus out-group language. If you write something negative about the other side, that really gets traction on social media.

Let's say you're a liberal. You say something nasty about conservatives. And vice-versa. If you're conservative, you say something nasty about liberals. That really sees a lot of uptake on social media. That's what we later termed the perverse incentives of social media because people learn what gets rewarded.

If being negative about the other side gets traction, then that's what's being incentivized. That's not a good thing for discourse, because even though not all of it is false, the toxicity and polarization levels get high, and the veracity of the content gets lower and lower. It's eroding public discourse on a more subtle level, in some ways.

That's why I personally focus a lot on this manipulative content: even the stuff that's obviously false, if you quantify it in the media ecosystem, is not the most prevalent stuff. Flat earth and obvious falsehoods like that are not the bulk of the problem. The bulk is the stuff that might be a little true here and there, but is actually quite manipulative and misleading. That's what gets people worked up a lot. That's why you see a lot of the flame wars, and that's actually the most toxic content.
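
To make the dictionary-based analysis described above concrete, here is a minimal illustrative sketch of counting lexical cues such as moral-emotional words, fear words, and out-group mentions in a post. The word lists and scoring are placeholder assumptions, not the lab's actual dictionaries or classifiers.

```python
import re

# Placeholder lexicons; real studies use validated dictionaries and trained classifiers.
MORAL_EMOTIONAL = {"outrage", "evil", "shame", "betray", "corrupt", "pedophile"}
FEAR = {"danger", "threat", "deadly", "terrifying", "catastrophe"}
OUT_GROUP = {"liberals", "conservatives", "the left", "the right"}  # assumed out-group phrases

def score_post(text: str) -> dict:
    """Count simple lexical cues that this line of research links to higher engagement."""
    tokens = re.findall(r"[a-z']+", text.lower())
    lowered = text.lower()
    return {
        "moral_emotional": sum(t in MORAL_EMOTIONAL for t in tokens),
        "fear": sum(t in FEAR for t in tokens),
        "out_group": sum(phrase in lowered for phrase in OUT_GROUP),  # multi-word phrases
        "n_tokens": len(tokens),
    }

if __name__ == "__main__":
    headline = "Corrupt liberals ignore deadly threat - the outrage is real"
    print(score_post(headline))
```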

Is there a forecast model of the doom spiral, so to speak, that as language gets more harsh and rhetoric gets more inflammatory, is there a point in modeling it where it just breaks down? Or does it just keep getting worse and worse and worse until people say, “This is ridiculous, so I'm just not even going to pay attention to these people anymore”?

The models that I've seen, some of them do predict tipping points. These are, as I'm sure you're familiar with the term, early detection systems that are being built to understand discourse online and see what's taking off. But it's hard to say at which point people disengage.

When we talk to social media companies and we do some work with them, they often tell us that sometimes they do ping people. Maybe this is only me, and I'll be honest here, but Netflix sometimes asked me if I was still going to continue watching, because you've watched a lot of content. It's the same online.

When you're in an echo chamber or a flame war, sometimes they'll ping people and say, “Are you still interested in this?” Most people then actually say, “No, I'm not interested in this,” but somehow they get stuck in it. That's the problem, I think. A lot of people don't want this stuff, but somehow they find themselves part of some flame war or echo chamber.

Do you think part of the problem, and part of the perception of why we're seeing more disinformation, more misinformation, more manipulative language, is the algorithm? The algorithms are rewarding people for that behavior, so they're doing more of it, and therefore we're watching more of it. It comes back to that doom spiral. When you're inside it, it's hard to see that you're a part of it.

Yeah, and it is a really complicated question. There were a few papers published recently from Meta. I think the gist of what they were saying was that there's more going on than the algorithm. They tried to experiment with the algorithm, and they didn't necessarily find that it reduced polarization, but that also doesn't mean that the algorithm isn't the cause of it. It's a complex system.

The typical argument they bring is people were polarized before social media. How could social media possibly be the cause of all of this? It's like what you were saying. I think the algorithms can be the amplifier of something that's already existing with existing tensions in society. The algorithms certainly can be a powerful amplifier of those tensions by rewarding the content that's more extreme, more polarizing, more toxic, rather than incentivizing constructive debate, because that doesn't get clicks and engagement.

That's tied to the business model of how these companies are run. It's not an easy solution in terms of what else they would be optimizing for if it's not engagement. They often look at the user behavior. People are clicking, people are liking. That's input. It's telling us that this is what they want. But then if they survey people, people say, “Oh, we don't want that.”

It's complicated because if you're trapped in a system like that, then sure, you're going to click. But if you ask people to step outside the Matrix for a moment, people say, “Oh, no, that's not what I want.” How do you design systems that still make people money, but don't promote this content? That's really the million-dollar question that everyone is currently thinking about. It's really difficult. I'm happy to talk about some of the solutions we've explored. I think, broadly, algorithms are a major amplifier. 

To give you an example from a non-Western context, WhatsApp uses end-to-end encryption. They actually can't see what's being shared. They could do more than they do, but given that they want to preserve people's privacy, they don't actively intervene.

That was the problem years ago, where rumors were spread on WhatsApp about local kidnappers in India. People in local rural areas just take that stuff at face value. They get a message that goes viral saying, “There are kidnappers. Here's what they look like.” They go outside, they see somebody that looks like that, and they lynch that person.

It turns out that the information was fake, so that was extremely concerning, because these things were going viral, people were getting killed. WhatsApp was under tremendous pressure by the Indian government to release the phone numbers of these people. Obviously, they were like, “Whoa, we don't want to be releasing phone numbers to the government because they can go and hunt people down. That doesn't seem like a good scenario, but also, we don't want to be facilitating this stuff.”

Clearly, the lynchings weren't the result of the algorithm. There are ethnic tensions in India that were clearly being used to further some of this stuff. There are existing tensions people can tap into, and WhatsApp seemed like the ideal vehicle. They were obviously used as a way to do that. They didn't have the limitations on how many messages you can forward back then or how many groups.

That is how things can spread very quickly from one person to another and can get people killed. I think that's what differentiates modern times from when you were in the Roman era, and everything was going by horse and carriage. It took a bit longer for people to find you and spread a rumor.

I was going to say, is that part of the mix of turbocharging this, is that the misinformation can spread faster now than it could 20, 30, 40, or 100 years ago?

That's part of it, so it can definitely spread faster. In the book, I do some calculations. I actually calculated, based on historians' estimates, how long it would take for a message to travel through the Roman postal system versus how fast somebody like Obama can get a message to all of his followers. That is obviously a huge differential, but I think there's more to the story as well.

Part of the story is also the structure of the information environment has changed pretty drastically. People are getting inputs. It's not just cable TV anymore or just regular TV and radio. It's the podcast. It’s Spotify. It's YouTube. It’s Instagram. It’s Snapchat. It’s TikTok.

People are being bombarded with information, whether it's direct browsing social media or audio, from all sorts of venues. It's really difficult actually now to try to capture how much misinformation or how much inaccurate information somebody receives on a daily basis. 

There are studies that somewhat confidently claim to have estimates, but I'm always quite skeptical because they typically look at one platform during an election. There are dozens of platforms. It's not only your neighbors and your cranky uncle, but also the Internet, the TV, and everything. How do you calculate that? It's not an easy challenge to try to get the full picture.

We also have a landscape that's vastly different and much more fragmented. Now we're talking about mainstream social media like Twitter, or X as we should say now, and Facebook, and then you have your Rumble, Parler, and Telegram. It's becoming much more fragmented, which also means that when you're trying to reach people, you can't just go to one or two platforms and reach everyone. Everyone's scattered in their own echo chamber, so that makes it much more difficult.

Some people say, “Oh, sure, OK, but there are not that many people on these alt-tech platforms,” which for now might be true in relative terms. But people are jumping from one to the other. It is becoming more fragmented. You could say that's somewhat of a good thing; we don't want the market controlled by one or two platforms, because that's probably not a good thing, either. There should be a happy middle, but there's a lot of fragmentation at the moment.

The third factor, really, is the way that technology is shaping information itself. WhatsApp is different, because you get messages from people in your group. There's all this group psychology at work there. These are people you know and trust, usually. It has this inherent value; you think it's credible.

That's very different from anonymous tweets that you read on Twitter. YouTube is, again, different. It's the audio visual thing where you're being sucked into a charismatic podcaster. That's a whole different vehicle. I think the unique features of how people interact with technology are also quite different.

What in the world do we do to, using your phrase earlier, inoculate ourselves against misinformation and disinformation? The answer everybody would like to give, whether they love it or hate it, is that the platforms just need to do more. Put it on somebody else to solve the problem so that I don't have to solve it.

I guess there are two ways to look at it. What should the platforms be doing, whether it's mainstream media, broadcast, cable, news websites, social media platforms, versus what should I personally be doing in my life and my interactions?

As a psychologist, I'm biased towards things that I know about, which are solutions at the individual level, but you're quite right. When we have these conversations at a larger level, you have two camps of people. One says this individual stuff is a distraction, that we need much stricter regulation, and that the social media companies and the government need to fix it. The other says, “No, no. We don't want them interfering with free speech. They shouldn't be doing anything.”

Our solution maybe doesn't rub people the wrong way: it's just to empower individuals to discern what is manipulation and what is less likely to be manipulation. I'll first explain that, and then I'll come back to the tradeoff between these bigger issues.

The inoculation analogy follows the vaccine metaphor quite exactly in the sense that, just as you expose your body to a weakened or inactivated strain of a virus to try to trigger the production of antibodies to help confer resistance against future infection, it turns out that you could do the same with the human mind. 

In a lot of research, we found that when you expose the mind to weakened doses of misinformation, or to the tactics that are used to spread misinformation, and you refute them persuasively in advance and deconstruct the manipulation techniques for people, we can build up mental or cognitive resistance, even antibodies if you like, mental antibodies, against future attempts to dupe us with misinformation.

We started out doing this in the lab. We have people come in and we expose them to a weakened dose of potent misinformation, and then we refute it in advance, or help them see the manipulative tactics, and then we expose people to the full “virus” later on. And we find that they become more resistant.

Just as your body needs lots of copies of potential invaders to know which proteins are the healthy ones and which ones look like invaders, it works the same with the mind. The mind needs lots of micro-doses of what deception looks like in order to start recognizing the patterns.
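
As a rough sketch of the two-phase design described above (illustrative only, not the lab's actual protocol or stimuli), the inoculated group gets a forewarning plus a weakened dose and its refutation, and then both groups later see the full-strength claim and rate it:

```python
from dataclasses import dataclass

# Hypothetical stimuli, loosely modeled on the fake-petition example discussed later.
WEAKENED_DOSE = ("Some sites circulate a petition claiming thousands of "
                 "scientists reject climate science.")
REFUTATION = ("That petition is unverified: anyone can sign it, and it even "
              "contains fictional and deceased signatories.")
FULL_ATTACK = "BREAKING: 30,000 scientists declare global warming a hoax!"

@dataclass
class Materials:
    condition: str      # "inoculated" or "control"
    phase1: list        # shown immediately (the prebunk, if any)
    phase2: list        # shown later, e.g. 24 hours on

def build_materials(condition: str) -> Materials:
    """Assemble what each group sees; only the inoculated group gets the prebunk."""
    phase1 = []
    if condition == "inoculated":
        phase1 = [
            "Warning: you may encounter attempts to mislead you about this topic.",
            f"Weakened dose: {WEAKENED_DOSE}",
            f"Refutation: {REFUTATION}",
        ]
    # Both groups later face the full-strength claim and rate how reliable it seems.
    phase2 = [FULL_ATTACK, "On a scale of 1-7, how reliable is this claim?"]
    return Materials(condition, phase1, phase2)

for cond in ("inoculated", "control"):
    m = build_materials(cond)
    print(m.condition, m.phase1, m.phase2, sep="\n", end="\n\n")
```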

Let me say this: I'm not against facts. Fact-checking and education, that's all good. Maybe if we put this in cyber attack terms for some of the audience, it's good to have broad knowledge. But when you're faced with a specific attack on your system, you're going to have to have a response that's tailored to the type of attack that you're facing. Even though facts and critical thinking are useful in general, what we often find is people have no mental defenses for the specific misinformation that they come across.

You need to pre-bunk that in advance so that people can withstand the attack when it actually happens and when they're being targeted. I just did this. It sounds ridiculous, but I was just at a disinformation summit with 200 disinformation researchers. Somebody asked why people believe in flat earth. I said there are all sorts of social explanations for that. For a lot of people, it's not so much whether the earth is really flat or not; it's conspiratorial. People who believe in one conspiracy tend to believe in others.

The other side of the story is that, at a cognitive level, people actually don't have a lot of mental defenses. It's not like I can convince you that the earth is flat in one conversation, but I can get people to be a little bit more uncertain about their beliefs. When I asked that room of 200 people to be honest with themselves, and asked who would raise their hand and explain why the earth is round and not flat, what the physical science mechanisms are that explain it, only two people in a room of 200 were confident enough to raise their hands.

Most people don't have the mental defenses ready to actually counter-argue and resist somebody who is going to throw facts at you about why the earth is flat and all the science-y sounding stuff. It sounds ridiculous with flat earth, but they do it with whether it's vaccines, climate, or GMOs. Whatever the topic is, people can spout a lot of scientific stuff.

My own family forwarded me a WhatsApp message of some doctor who was using a blow dryer up his nose to kill the Coronavirus. It's something ridiculous, but he has scientific papers. If you don't know, then you don't have credible ways of responding.

That's really the idea behind inoculation. You need to get people ready, and there are various ways of doing that. You can do it with specific falsehoods, but you can also do it at a more general, technique-based level. That's where the approach is particularly useful: when you think of it as a broader-spectrum vaccine.

You can do it with flat earth. I can tell you in advance what the arguments are going to be, how to refute that, and so on. It doesn't scale very well because next time, they're going to come with some other conspiracy, and then you don't have the mental defense.

It's much more useful to actually inoculate people against the building blocks of conspiracy theories, or how polarizing headlines are crafted, or how emotions like fear-mongering or outrage are used, or trolling: baiting people online, what that looks like, how they want you to respond, and how you get sucked into these things, which is very active during elections. The same goes for false amplification through bots, bot armies for example, and impersonation: fake doctors, impersonated politicians and celebrities.

What we found is that you can inoculate people against these larger techniques. We've developed some games to get people interested. The other thing we found is that people don't want to be bored with some media literacy lecture.

We have some edgy games; one is called Bad News. It allows you to step into the shoes of a manipulator. It's a social media simulation, and your goal is actually to gain followers by duping people using weakened doses of some of these tactics. If you're being too ridiculous, you lose all credibility, because that's not what a clever manipulator would do.

People go through the simulation, and they build up resistance to these tactics by actually being in a simulator, like a flight simulator, rather than just being told about it. You get the experience, and that builds up people's immunity, so to speak. That's how we do it.

When you talked about low doses or low exposure to small amounts of misinformation, the thing that popped into my head, the conspiracy-theory response, was, “But who chooses what information you're seeding as misinformation?”

Yeah, of course. What sometimes people misunderstand is the key thing is not just to expose people to the weakened dose of the misinformation, but also give people the tools to dismantle it. Without that part, you run the risk of just spreading the misinformation. We use humor and sarcasm to weaken it.

The thing is, when you're synthesizing a weakened dose out of a specific falsehood, you are in a situation where there is some ground truth that you're going to have to establish. For example, coming back to the definition, you have to turn to the fact checkers or scientific consensus to say, “Well, this is actually the truth. This is misinformation. Now I'm going to extract a weakened dose.”

I can give you a specific example. Let's talk about climate change, for example. Over 97% or 99% of scientists think that humans are at least partly contributing to global warming. There are websites out there using fake petitions claiming that thousands of scientists say that global warming isn't real. But then when you look at the petition, there's the Spice Girls, Charles Darwin. It's unverified stuff. Anyone can sign this thing.

That could be the weakened dose. That's what we use in some of our experiments to say, “Look, here's what an unverified petition looks like.” You get people like Dr. Geri Halliwell from the Spice Girls and deceased people like Charles Darwin. You deconstruct the fake petition technique, and then you reinforce the idea that there is a scientific consensus on climate change.

You have to accept that there are some things that are true. The same with flat earth. You would show some of the techniques that flat-earthers use, and then you inoculate people with, “Well, here's what you actually need to know about why the earth is round. These are some of the […].” You get people paying attention: why is the earth round? And then you show some of the techniques that are used to convince people that the earth is flat.

However, some people, including social media companies, initially weren't comfortable with that. They were like, “Oh, we don't want to talk about climate change, immigration. We're not arbiters of truth. We need something a little lighter touch than this approach.”

For scientists, it's a bit different. There's a scientific consensus, or there are fact checkers, and we have to establish the truth somewhere. But if you're a social media company or a government, let's think about this. We started thinking about it, and we're like, “Well, actually, it's probably a good thing if social media companies or the government aren't going to tell people what they need to believe.”

Even though we might like one government, people are maybe not going to like the next government. So the question is, how can a government, a social media company, or any entity really scale this inoculation approach in a way that's going to be in the public interest?

One of the things that we came up with was what we call technique-level inoculation. The technique-level inoculation doesn't necessarily assume that there's a ground truth for something, but it does make people aware of key manipulative tactics, and then allows people to make their own informed choice about what they want to believe, but exposes the technique in a very nonpartisan way.

There are real examples. I can give some political examples that might resonate in the US and also some non-political examples. Talking about podcasts and YouTube, false dichotomies are huge. Politicians love using false dichotomies. The technique here is to present two options even though there are more. You take away all nuance, and it gets people into more extremist modes of thinking.

We did a bit of this work with Google, who owns YouTube. On YouTube, for example, you get these extremist recruitment videos like, “Oh, either you join ISIS or you're not a good Muslim.” That's like putting people into the mindset of either you become more extremist or you're not a good person. That one's extreme. But nonetheless, there are these political gurus on YouTube that often radicalize young men, but also other people using these rhetorical techniques.

It's OK to influence people if they know about it. That's my personal opinion, if you're being ethical about it. But if you use rhetorical, logical fallacies without people's knowledge to get them to be more extreme, I think that's what I would consider manipulation, and people should know about that.

Organizations use it, too. Some people are not going to like this example. Objectively, regardless of how you feel about guns, I think the NRA tweeted something during the mass shootings, “Either you're pro AR-15 rifles or you're against the Second Amendment.” That's just not true. People can like guns and also the Second Amendment. People can not like certain guns and also like the Second Amendment.

I think it's a tactic to get rid of all the nuance. So what do we do in the inoculation approach? We don't talk about any of this stuff. We show people a clip on YouTube. You know those annoying ads that you can't skip on YouTube? That's where the pre-bunk would go.

The weakened dose would be Star Wars. All of a sudden, the clip cuts to Revenge of the Sith. I'm not sure if you're a Star Wars fan, Chris, but this is Anakin Skywalker talking to Obi-Wan. For people who don't know, Anakin goes on to become Darth Vader.

That's a spoiler.

Spoiler alert. Not-so-spoiler alert. He says, “Either you're with me or you're my enemy.” And then Obi Wan replies and says, “Only a Sith deals in absolutes.” For the really clever listener, that's an absolute statement also.

The idea here is that the narrative goes: nobody wants to be a Sith, so don't use manipulation techniques. The thing is that “either you're with me or you're my enemy” is the structure of all false dichotomies used by real organizations, people, and politicians everywhere. You don't need to talk about any controversial stuff; you just give people the template of the manipulation technique.
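
As a toy illustration of that template idea (the patterns below are illustrative assumptions; the actual research uses trained classifiers and carefully crafted pre-bunks, not hand-written rules), a crude matcher for the false-dichotomy structure might look like this:

```python
import re

# Illustrative surface patterns for the "either X or Y" template; not a real classifier.
FALSE_DICHOTOMY_PATTERNS = [
    r"\beither\b.+\bor\b",                          # "either you're with me or ..."
    r"\byou're (either )?with (us|me)\b.+\bor\b",
    r"\bthere (are|is) only two\b",
    r"\bif you're not\b.+\byou're\b",
]

def flag_false_dichotomy(text: str) -> bool:
    """Return True if the text matches a crude false-dichotomy template."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in FALSE_DICHOTOMY_PATTERNS)

examples = [
    "Either you're with me or you're my enemy.",
    "Either you support this bill or you hate the Second Amendment.",
    "There are trade-offs on both sides worth discussing.",
]
for ex in examples:
    print(flag_false_dichotomy(ex), "-", ex)
```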

We test people, and 24 hours later we target them with a misinformation ad that uses some of these tactics, and we find that it helps boost people's recognition of these techniques. Another technique is called scapegoating: you blame a whole group in society for something that isn't their fault.

Again, the weakened dose is South Park. It's the Canadians. I think whether you're liberal or conservative, Star Wars, South Park, everyone can relate to that. We can all agree on both sides of the aisle that these techniques are bad. If it's your party who uses them, we should call it out. If it's the other party, we should call it out.

That's the thing: at the most minimal level, inoculation can work in that way. It is pretty minimal, because some people complain and say, “OK, so people can spot these techniques, but you're not changing their minds?” But changing minds is not the idea. That's not our goal. Our goal is to help people recognize manipulation online so that people are empowered to then form their own opinions.

Especially if you think about education, integrating this into schools, or when social media companies or governments use it, I hear from people, “Why do you work with Meta, Google, the US government, and so on?” People who don't like big corporations complain. People who don't like the government complain to me.

The thing is, if they end up making use of these techniques themselves against their own people or their own users, then they've only invested in making people more immune by scaling knowledge of these techniques. That's why it doesn't bother me.

That said, there's work we won't do. I got a call once from a shampoo company that claimed people were spreading misinformation about their shampoo, and they wanted to inoculate their customers. I don't want to get involved there in terms of what the truth is about this shampoo chemical. Is it really destroying people's hair?

That's the thing where I'm like, the ground truth here is unclear. There's no scientific consensus here or fact-checked information, so this is getting too murky. I can see how people might think about this approach, but ideally it would be used to preempt and pre-bunk these larger rhetorical fallacies that keep popping up time and again.

It's interesting, because this has wider implications beyond what we would generally think of, which I'll classify as media disinformation, or news and news-adjacent media. I think it would also help keep us from falling for individually targeted scams, because you now have the skill set to look at an interaction and say, “This isn't a reasonable interaction. Why is this unreasonable interaction happening?” and to step back and look at it with a little more skepticism, a little more distance.

I've been getting into the scamming literature, because a lot of scam experts have told me this seems like the ideal test case for the approach. I totally agree. There are some contexts where inoculation is a bit trickier, but any domain where techniques are being used against people, and you can break those techniques down and inoculate people against them, is a good context.

We've done work in Iraq, in formerly ISIS-occupied areas, where we break down the extremist recruitment techniques for youngsters and inoculate against them so that when it actually happens, they can recognize it. I think it's similar with scams. You're more of an expert on this than me, obviously, but there are different types of scams one can consider, and I have some computer science colleagues who tell me about this.

Part of the inoculation is forewarning people that they might be manipulated, to jumpstart the psychological immune system. It's asleep most of the time, because we just assume that most stuff is not deception. Sometimes that's a reasonable assumption, but other times it's not.

They tell me, “Look, this forewarning stuff, we've tried in computer security. Nobody cares about popup warnings.” I think they have a point there, but forewarning is only a small part of the inoculation. People debate about this, but in my opinion, the more important part is actually the pre-bunk, where you actually give people the tools.

I think in IT security, this is often missing. When I talk to our own IT security, they send out these emails warning about phishing. I'm like, “But you're not giving me the weakened dose of what the attack is going to look like, what to look out for. Give me a simulation.”

What I've told our own IT security is, “Do it on people in a weakened dose. Send out a phishing attempt, hook them, and then explain, ‘Oh, this is IT. We just want to let you know that you're vulnerable, and this is why, and here's how to resist it.’”

I think that's more like how we make the games. That's really going to shock people into thinking, “Oh, yeah, I might be susceptible. I've learned something new,” rather than just these emails telling you that this thing happens every now and then, with some vague tips. That's why I think there could be a lot of improvement.
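
A minimal sketch of what such a weakened-dose phishing drill could look like; the simulated message, the cue list, and the debrief wording are hypothetical illustrations, not a real security tool or IT policy:

```python
# Hypothetical simulated phish used as the "weakened dose" in an internal drill.
SIMULATED_PHISH = {
    "from": "it-helpdesk@examp1e-corp.com",   # look-alike domain (digit 1 instead of l)
    "subject": "URGENT: your mailbox will be deleted in 24 hours",
    "body": "Click here immediately to verify your password.",
}

# The cues the debrief deconstructs, so the lesson transfers to real attacks.
DEBRIEF_CUES = [
    "Urgency and threat ('deleted in 24 hours') to short-circuit careful thinking",
    "Look-alike sender domain (examp1e-corp.com vs example-corp.com)",
    "A request to 'verify your password' via an embedded link",
]

def debrief(user: str, clicked: bool) -> str:
    """Build the immediate follow-up message explaining the weakened dose."""
    outcome = "you clicked the link" if clicked else "you ignored or reported it"
    cues = "\n  - ".join(DEBRIEF_CUES)
    return (f"Hi {user}, that email was a simulated phish run by IT ({outcome}).\n"
            f"Cues to watch for next time:\n  - {cues}")

if __name__ == "__main__":
    print(SIMULATED_PHISH["subject"])
    print(debrief("sam", clicked=True))
```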

I was talking with the police force here in London a while ago, and they were telling me about these romance scams. I wasn't an expert in this, but they said it's infrequent; when it does happen, though, people are out millions, or hundreds of thousands sometimes. There are tactics that are used, so maybe pre-bunking those could be really valuable.

I've been thinking a lot about what are the tactics for different types of scams and how can we inoculate people against those scams? Some are targeted at older people, at the elderly. Some are targeted at younger users. I think it's very relevant in that space. What do you think?

Based on our conversation here, I think that pre-bunking is a great way to help people. Once somebody's in the midst of being scammed, let's use the romance scam, it's really hard to get them out of it, because part of the scammer's technique is separating a person from their family, separating them from their loved ones, separating them from the people that they trust. Like you said, “It's us against the world, and they won't understand us.”

The scammers in some sense are already doing their own pre-bunking to get the victim not to trust their own family members. It's almost like you have to reach people well before they're scammed. And saying, “Don't answer your phone”? Great. Now, when the doctor calls from a different number, they don't get that important call from their doctor.

There are some very broad generalization techniques that, yeah, may work, but they have other consequences, or they're just not realistic. People want to get phone calls from their friends and family. They want to get texts. Telling them, “Don't ever open your email. Don't respond to any text,” can be impractical advice. I've probably given lots of it over the years.

I like the concept of pre-bunking and pre-inoculating people against types of scams, because then you're not saying, “Well, this is the specific piece of the scam, this is the fake news portion of it.” You're saying, “Here are the techniques that they use,” and making people aware of the techniques. Then people start to see the techniques, as opposed to, “Well, this is the factual information that's wrong.” Which I like.

It's like being a cybersecurity analyst. Sure, if you build all your defenses against this one specific attack, then when they find a new vulnerability, they just go there, and you have no defense for it. But if you learn the broader patterns to look for, manipulation or subterfuge or attempts to trigger your emotional responses, then you start to see, “OK, that's what I'm looking for,” not just the individual phrase, like hanging up the moment the person says IRS.

Exactly. There are some really important analogies there, actually, that we could riff on a little bit. That's exactly right. We started out with the narrow-spectrum vaccine, because you can do that in a very controlled way in the lab. The exact question came up: you're going to be very resistant to this specific type of attack, which can be useful in contexts where a type of attack is recurrent, but then they're going to find some other vulnerability.

If you want to scale, it's going to be more helpful to have this broader pattern recognition, that broad-spectrum “vaccine.” The other thing you said is also really relevant for what we do, because people always ask me how to de-radicalize their uncle who believes in some crazy conspiracy theory.

The thing is, de-radicalization—I’ve also studied cults—that's really difficult. You need to get people a different social support network. You need to get them out. They're not going to listen to you the first few times. It's going to be a long endeavor. Maybe the next book is going to be how to de-radicalize people, but that's a much heavier lift.

People are like, “Why do you go on and on about pre-bunking?” Because it's much easier to inoculate people than to get them out of a cult. It's the same with these romance scams. Once they're committed to this other person, you can't get them out. I think the analogy there is really strong.

The other thing that you said that also really resonates with our research is you can do inoculation in a very broad sense. Social media companies always want things very short and very broad. I liked the idea of what you said about not answering your phone, and then when your doctor calls from an anonymous number, you don't answer the phone.

The thing is, with inoculation, you can get some of that stuff too sometimes when you only do the forewarning, for example. Sometimes the inoculation makes people a bit more skeptical about credible outlets, too, when they use a hint of these manipulation tactics. Actually, you get into this really interesting debate in our area about what the right level of skepticism is.

To translate the analogy: a colleague I do a lot of work with is convinced that you should actually be skeptical every time somebody calls you. I like this level of skepticism, where if somebody calls you from an unknown number, then maybe you don't answer your phone. Maybe it's good to have that level of skepticism.

Other people say, “OK, but what if your uncle dies or you won the lottery? There could be a huge cost to being overly skeptical.” Is it the same with news? If outlets that are very credible sometimes use a little bit of manipulation, and people are now attuned to manipulation too much, they're going to discount those outlets.

Some of it is like, if the New York Times, Reuters, or the BBC, who are generally well-rated factually, use clickbait and so on, maybe we should discount the headline we're talking about, maybe not the source but the headline, because they can make mistakes. Other people are like, “OK, but we don't want to get people too skeptical.”

That's the same here with the analogy that you used. You want to be skeptical enough that you're regularly scrutinizing your emails, but you don't want to stop opening your email altogether. That fine-tuning is really interesting, because we're doing a lot of experiments where we can dial the skepticism to various levels. I'm not sure how it works in the IT space, but it's an interesting question how to calibrate people's skepticism.
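
One simple way to frame that calibration question (an illustrative toy framing, not the analysis actually used in these experiments) is as a threshold problem: at each level of skepticism, how many false items get correctly rejected versus how many true items get wrongly rejected? The item scores below are made up.

```python
# Each item: (perceived manipulativeness score 0-1, is the item actually false?)
ITEMS = [
    (0.9, True), (0.8, True), (0.7, False), (0.6, True),
    (0.4, False), (0.3, False), (0.2, True), (0.1, False),
]

def rejection_rates(threshold: float) -> tuple:
    """Share of false items correctly rejected vs. true items wrongly rejected."""
    false_items = [s for s, is_false in ITEMS if is_false]
    true_items = [s for s, is_false in ITEMS if not is_false]
    hit_rate = sum(s >= threshold for s in false_items) / len(false_items)
    false_alarm_rate = sum(s >= threshold for s in true_items) / len(true_items)
    return hit_rate, false_alarm_rate

# Raising the threshold (less skepticism) misses more falsehoods;
# lowering it (more skepticism) starts rejecting credible content too.
for t in (0.25, 0.5, 0.75):
    hits, fas = rejection_rates(t)
    print(f"threshold={t:.2f}  reject-false={hits:.2f}  reject-true={fas:.2f}")
```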

I can see where it's perfectly reasonable to tell people that. You see the stories where there's the scam of the college student who's away, and their parents get an AI voice scam call with the caller ID of the kid's phone. “OK, if I get a call from my kid, should I always assume that it's not really my kid, and that we have a password we exchange before we start any conversation?”

It could definitely go too far, but there should be some indicators. If the kid is panicky and says, “I'm kidnapped,” then we need to take a step back and figure out whether that is a scam or not.

I agree with that. A lot of people in our space make the argument of, what's the base rate of the incident? If it's really low, then should you really be concerned about these types of rare instances? I'm sure you get this in security, too. Other people are like, “No, no. We should be vigilant about these black swan events, because they're so important.”

I think we're in the middle. I agree that heightened skepticism in this day and age is probably not a bad thing with so many scams here and there, but you don't want to go too far, just live in the woods, and not see anyone anymore ever.

There is that practical aspect of we still want to be able to live our lives and not be afraid of every interaction that we have with something that's unknown, because otherwise, we're never going to make new friends, we're never going to try new foods, we're never going to have new experiences and travel to countries we've never been to. Saying, “Just don't do anything new,” that makes for a really boring life.

Absolutely. One of the ways we've been able to integrate this is to give people feedback in our interventions. Whether you've watched the video or played the game, we give people a lot of quizzes with feedback. That seems to really help people not take the skepticism and turn it on everything that seems remotely manipulative.

It is interesting. You could take this argument, and I get this question a lot: “Look, Sander, this emotional manipulation, a lot of marketing people use this to sell their product. What are you doing here?” It's like, “Well, my field is helping people resist persuasion, particularly when it's malicious.”

A lot of marketing is not malicious, but I don't really see the harm if people recognize that you're using some tactics and now think twice before they buy a product. If you use a tactic, we can educate people on that tactic to level the playing field. That's how I see it. I think people who are committed to a cause see this as an obstacle, regardless of what the cause is: people who want to persuade see somebody who is helping people resist persuasion as an obstacle.

This is the conversation that I had with Dr. Cialdini. When does it switch from a marketing technique into manipulation? Where is that line? I don't know that we can easily answer it, and I don't know that we actually came up with a definitive answer, but that's where marketers have to watch out, that they're not crossing a line and misrepresenting their product, misrepresenting what it can do for people.

That's the challenge. Products are always selling a lifestyle. “If you buy our products, you'll be a happier person. People will like you more.” When the hair gel is not going to do that, or your car is not going to make people like you more, but that's what they're selling. That's the challenge. These are marketing techniques that are tried and true, so to speak.

Yeah, some of them are. I think there's an analogy with social media companies there, too, that there is no incentive to correct or be honest sometimes. In the long-term, I'm hoping that we can change some of these practices.

Bob helped me a bit with the book because obviously, he was uncovering the six principles of persuasion, and I was uncovering the six degrees of manipulation. We talked about this. I think there are some lines.

He talks about experts and how that can be persuasive. I think it's fine if you use experts to help sell your product. If these are real experts and they are honest about the product, I think that's fine marketing. Where it gets into manipulation is if this is a fake expert, somebody who lied about their credentials, or who doesn't actually know anything.

That would never happen.

You see a lot of that. I think, for me, that's the line. That's the manipulation versus OK marketing. Companies can make mistakes. You can say, “Oh, this product does this. We've advertised it like this.” It turns out, it wasn't the case. “Sorry, guys, here's the refund. We're going to do better next time.”

I think that a record of trustworthy behavior is what makes someone not explicitly manipulative. Companies who know that they're doing it, they don't alert people, and they keep doing it, I think have a very different marketing from those who recognize that they've made a mistake, are honest about it, and hope to do better next time. I think everyone makes mistakes.

There's this worry in society that we can't admit the mistakes anymore and that everyone's going to get canceled. Nobody's going to buy your product. I do think that in my field that we call intellectual humility, if we're all somewhat forgiving and humble, then maybe that creates a better culture for people to act in trustworthy ways.

I know the exact reason why companies don't apologize and don't fess up to mistakes: they don't want lawsuits. There is a financial disincentive for them to admit mistakes, and it's why you get these weirdly phrased statements.

It sounds like they're apologizing, but they're not really apologizing, which always makes me mistrust people when I hear a non-apology apology. They're trying to do it in a way that their lawyers have vetted, that won't put them in legal jeopardy. Throw lawsuits into the mix, and it messes all this stuff up.

You're absolutely right. I think the lawsuit element is looming over most of these companies. That is a system issue, and it gets us back to the system question, which I conveniently avoided.

The last thing I'll say on that is that it's true. I'm absolutely sympathetic to the idea that at the system level, things need to change. Companies shouldn't be incentivized to lie; something is broken if the incentives point the wrong way. It's the same for social media companies.

Do we need the government to take stronger action? I'm not a policy expert, and people have different opinions on this. There's this whole debate on free speech, obviously. But I do think that there are clear problems with the system that we need better guardrails. Who's going to implement these guardrails? Who's responsible?

I think we shouldn't put the onus all on people. The people didn't necessarily design the system. Some people have been trapped in the system or forced to work with it. I think the social media companies, governments, and other actors are responsible for enacting some larger, system-level change.

I do slightly worry sometimes about our approach, which is adopted by social media companies, governments, and other actors. We don't want this to be an excuse for not doing the harder things—changing the algorithm, taking down content if it's violating hate speech policies or some other stuff. That's the tricky balance. As long as the individual measures don't undermine the system-level measures, then I think it's fine, but it's a trade-off.

It's almost that you hope that if you can poke at this from 40 different angles, and everybody gives a little bit, then cumulatively there's a large advancement without any one entity's point of view overriding all the rest. You just incrementally creep, hopefully in a good way as opposed to a bad way.

Exactly. There are people who say, “Oh, no, no. It's the system. We need to change the system. You're a distraction.” And there are people who say, “No, no, you can't trust the system. People have to change.” I hope everyone now recognizes that it's a false dilemma.

That's a good example even coming out of our conversation. It is a false dilemma that it's either the system or it’s people. There's probably way more than two ways to tackle this broad issue. 

I wanted to ask you before we close out. You talked a bit about some videos and some games. Are those things that are publicly available that our audience can take a look at and participate in?

Absolutely. Everything's free, publicly available for educational and non-commercial purposes. The website is inoculation.science. All of our resources are housed there. It's also in the book. People can read about it, which is foolproofbook.com.

Awesome. If people want to follow you online, how can they connect with you?

I'm still, for the moment, on X. I'm @sander_vdlinden. I'm on LinkedIn. I play around with Instagram and TikTok under Professor Sander van der Linden, but I'm not very prolific on TikTok and Instagram. It's beyond my generation, I think, but Twitter and LinkedIn are my main channels.

Awesome. We will make sure to link all of those on our site so that people don't have to write it down while they're driving. They can easily visit the website and just click. Thank you so much for coming on the podcast today.

Pleasure to be on the show. Thanks for having me.
