I recently went to an open lecture at MIT by Adam Berinsky about misinformation and how to fight it. Professor Berinsky's book, Political Rumors: Why We Accept Misinformation and How to Fight It, brings together a decade of research on the political psychology of misinformation, leveraging tools such as field experiments. I found the lecture enlightening, particularly on questions of expertise, a perennial interest for anyone in STS.
Adam Berinsky is the Mitsui Professor of Political Science at MIT and the founding director of the MIT Political Experiments Research Lab. His books include Silent Voices: Public Opinion and Political Participation in America and In Time of War: Understanding American Public Opinion from World War II to Iraq.
Political Rumors: Why We Accept Misinformation and How to Fight It
We're living in an era of mistrust: there's been a loss of expertise and authority. When I was growing up, we looked to experts to decide particular issues and address particular problems — experts came in, presented solutions, and we accepted them. We might not have accepted their political views, but we trusted their expertise. Now, individuals and groups that should have authority aren't necessarily given authority: there is a lack of trust in facts and a mistrust of “experts.”
What happens when experts aren't trusted? In 2018, I gave a presentation to doctors about political rumors and what to do about them. There had been a decline in trust in doctors, while trust in the scientific community remained relatively stable — since 2016, however, trust in science has eroded across the board (not just among Republicans — among Democrats as well).

I started to become interested in political rumors in 2009. I had just gotten tenure and was looking for a new project, and there were really weird things happening in the country: people talked about the US government being involved in 9/11, about Obama not being born in this country, etc. Colleagues said these were crazy stories, not political science. I started by looking at the Wikipedia pages on the Barack Obama citizenship conspiracy theories. Obama had released a birth certificate showing that he was born in Hawaii, but there was a lot of back and forth, with people like Donald Trump saying that the certificate was faked. Obama then released his long-form birth certificate in 2011, and people started saying that this was a multilevel forgery. The rumors weren't just about Obama: we saw the same thing with election results (e.g., 2020), and back in 2016, Trump was already laying the groundwork to say that the election would be stolen from him.
Claims of stolen elections weren't just a 2020 Republican phenomenon; Democrats had theories that the 2004 election was stolen, too (cf. voting machines in Ohio). I'm not a huge fan of arguments that say that tech changes everything, but tech certainly changes some things. When I was growing up in New York in the 1970s and 80s, rumors were spread via posters on the sides of construction sites and by word of mouth. Now they're spread via Twitter — _exposure_ has changed. Most people don't pay much attention to politics; this audience pays attention to politics in a very specific way (you're at this talk!). This led me to my book (Political Rumors: Why We Accept Misinformation and How to Fight It) — I signed a contract with Princeton saying that it would come out in 2014, but it's coming out in 2023 [laughter].
Specific rumors that I started to study:
- July 2010 Birthers: do you believe that Obama was born in the US?
- July 2010 Death Panels: do you think the changes to the health care system via the ACA would create “death panels” with the authority to determine whether or not a gravely ill or injured person should receive health care based on their “level of productivity”?
- Most people who think there will be death panels say they're not going to support the policy. These rumors aren't just on the right; they're on the left as well.
- July 2010 Truthers: do you think that people in the federal government either assisted in the 9/11 attacks or took no action to stop them because they wanted the US to go to war in the Middle East?
- There was a whack-a-mole effect: you addressed falsehoods as they came out.
When we asked what people had heard that led them to this belief, those who said “not sure” would say, “I'm not sure what to think — where there's smoke, there's probably fire.” This portion of the population is really important to study because they're in an information environment where things are changing very quickly. If you keep hearing these kinds of stories, it becomes really problematic.
It's a pebble in the pond: creators drop a pebble and the effects ripple out (believers → uncertain → disbelievers). You shouldn't just target the people who believe these things; focus on the people who are uncertain, moving them toward disbelief.
Two central questions drive the book: how is it that reasonable people can come to believe bizarre stories about our political system and the politicians who run it? And how can we correct false information?
Why do people endorse rumors?
Do people really believe rumors? There is a structure to false beliefs: a general tendency toward conspiratorial thinking, combined with the power of partisanship. The people who say they believe conspiracies really do believe them! Think about your relative at the holidays: every year there's a new news story. These are people who believe these stories; on the other end, there are people who don't believe them at all. I was trained as a social psychologist, so I run surveys and experiments. We asked: for each of these seven rumors, do you believe it is true or false, or are you unsure?
From the early work in 2010: I asked about those seven rumors, and people accepted around two of them on average. What's important to note here is that it's not that people believe a lot of crazy things (or reject them all) — there are far more people who believe _some_ crazy things.
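To make that scoring concrete, here is a minimal Python sketch of tallying how many of the seven rumors each respondent endorses. The rumor labels and responses below are invented for illustration; this is not Berinsky's actual instrument or data.

```python
from collections import Counter

# Hypothetical survey: each respondent labels each of 7 rumors
# as "true", "false", or "not sure". (Illustrative data only.)
RUMORS = ["birther", "death_panels", "truther", "rumor4",
          "rumor5", "rumor6", "rumor7"]

responses = [
    {"birther": "false", "death_panels": "true", "truther": "not sure",
     "rumor4": "false", "rumor5": "true", "rumor6": "false", "rumor7": "false"},
    {"birther": "true", "death_panels": "true", "truther": "true",
     "rumor4": "not sure", "rumor5": "false", "rumor6": "true", "rumor7": "true"},
]

# Count, per respondent, how many rumors they endorse ("true").
endorsed = [sum(r[q] == "true" for q in RUMORS) for r in responses]
print(endorsed)  # e.g., [2, 5]

# The point in the talk: the distribution of these counts is not
# bimodal at 0 and 7. Most people endorse a few rumors, not all or none.
print(Counter(endorsed))
```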
How can we correct rumors?
Rumors are sticky: just because you present the truth to someone doesn't mean the correction will take hold. Intuitively, you might respond to the claim that Obama wasn't born in the country by pointing to the birth certificate — you might think that would debunk the rumor! But it comes down to source credibility: the people who should be the experts aren't necessarily the people who carry credibility in that moment. Corrections may actually backfire, reinforcing the rumor. Corrections also fade over time — it's not one and done. Right after Obama released the birth certificate, the share of people who believed the rumor dropped by about 20%, but people's attention shifted, and when we asked again a while later, that 20% decrease had completely disappeared. Corrections from people speaking against their apparent interest are the most effective. It's an easy cure, but it's hard to implement: who's making the correction, and where are they coming from?
Here, I'm looking at the effects over time. I ran an experiment with five conditions. In a control condition, I showed people nothing: what is the baseline belief in a political rumor? Is it possible to make things worse by exposing people to a more in-depth version of the rumor (e.g., an article that really goes into depth on death panels)? Then, let's bring in a non-partisan party, like the American Medical Association, to issue a correction. Or let's have a Republican say that there aren't death panels. (Or a Democrat.) Do any of these options work? So the five conditions were: control (no information), rumor only, rumor plus non-partisan correction, rumor plus Republican correction, and rumor plus Democratic correction.
If I give you the rumor and the non-partisan correction, nothing changes at all. The rumor plus a partisan correction actually worked. But when I went back to people two weeks later without showing them anything else, I found that the control stayed the same, the rumor stuck, and the corrections faded over time.
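As a rough sketch of the design described above (not Berinsky's actual code; the condition labels, belief scale, and numbers below are invented to mirror the pattern he reports), the random assignment and two-wave comparison might look like this in Python:

```python
import random
import statistics

# The five experimental conditions described in the talk.
CONDITIONS = ["control", "rumor_only", "nonpartisan_correction",
              "republican_correction", "democratic_correction"]

def assign(participant_ids):
    """Randomly assign each participant to one of the five conditions."""
    return {pid: random.choice(CONDITIONS) for pid in participant_ids}

# Hypothetical belief scores (1-5 agreement with the rumor), measured
# immediately after treatment (wave 0) and two weeks later (wave 1).
# These numbers are invented to mirror the reported pattern: the
# non-partisan correction does nothing, the partisan (Republican)
# correction works at first, and the effect fades by the follow-up.
data = [
    {"condition": "control",                "belief": [3.0, 3.0]},
    {"condition": "rumor_only",             "belief": [3.8, 3.8]},
    {"condition": "nonpartisan_correction", "belief": [3.7, 3.8]},
    {"condition": "republican_correction",  "belief": [2.9, 3.6]},
    {"condition": "democratic_correction",  "belief": [3.4, 3.7]},
]

def mean_belief(records, condition, wave):
    """Average belief score for one condition at one wave."""
    scores = [r["belief"][wave] for r in records if r["condition"] == condition]
    return statistics.mean(scores)

print(assign(["p1", "p2", "p3"]))  # e.g., {'p1': 'rumor_only', ...}
for c in CONDITIONS:
    print(f"{c:>23}: wave0={mean_belief(data, c, 0):.1f}, "
          f"wave1={mean_belief(data, c, 1):.1f}")
```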
Final thoughts and ongoing work
There's no magic bullet. There are different sources of “authority”: a wide range of messengers, and crowdsourced authority (e.g., Community Review on Facebook, Community Notes / Birdwatch on X / Twitter). We could also bundle interventions, jointly deploying multiple interventions in a “Swiss cheese model.” Fact checkers are really important, but they're often not the best people to deliver the story.
We don't want Snopes to deliver the message; we want different people with authority to deliver it. Who's the best person to deliver information about vaccines? Doctors might have the expertise, but people might say they're in the pockets of Big Pharma. If you have parents of autistic kids talking about how vaccines _don't_ cause autism, that's more powerful. I was a consultant at Twitter when they launched Birdwatch: anyone could write a note about any tweet, and notes rated helpful across the political spectrum would be promoted.

Prebunking, or inoculation, is another avenue, but it can't solve all of the problems; the people promoting it want to sell their books, and a competitive environment isn't built to _bring together_ solutions. We know now that there won't be one magic bullet, but there are different kinds of interventions that might come together (the Swiss cheese model): the holes in one layer are covered by cheese in another, and so on. There are lots of different things we can do, and we need to think collaboratively about how to bring this research together.
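To illustrate the Birdwatch idea in a deliberately simplified way: the sketch below promotes a note only when raters on both sides of the spectrum find it helpful. The real Community Notes system uses a more sophisticated bridging-based scoring model; the field names and thresholds here are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Note:
    """A crowdsourced note on a post, with helpfulness ratings
    bucketed by the rater's estimated political leaning."""
    text: str
    helpful_votes: dict = field(default_factory=lambda: {"left": 0, "right": 0})
    total_votes: dict = field(default_factory=lambda: {"left": 0, "right": 0})

def promote(note: Note, min_votes: int = 10, min_rate: float = 0.6) -> bool:
    """Promote a note only if raters on *both* sides of the spectrum
    find it helpful: a toy version of bridging rather than majority rule."""
    for side in ("left", "right"):
        if note.total_votes[side] < min_votes:
            return False
        if note.helpful_votes[side] / note.total_votes[side] < min_rate:
            return False
    return True

note = Note("The photo in this tweet is from 2015, not this week.",
            helpful_votes={"left": 14, "right": 11},
            total_votes={"left": 18, "right": 15})
print(promote(note))  # True: rated helpful by a majority on both sides
```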
Q&A period
I was surprised that you didn't talk about restoring public trust in experts. Do we have a shot at restoring that trust?
I study mass political behavior, so I'm interested in the individual reception side. We are where we are today not just because people aren't listening to experts, but because they're listening to political actors who benefit from these types of rumors and misinformation (Trump in the 2020 election: there was a reason he was promoting misinfo about election fraud). You had Republican secretaries of state saying the fraud claims were ridiculous, but what's really hard is the broadcast side. Thinking about the larger system, what can we do on the mass consumption side? This is a really hard question: who gets to decide who's an expert? That is difficult. In an ideal world, I think people should listen to experts, but in a world where expertise competes with political messages, it's really difficult. Doctors are concerned about people like Dr. Oz, who is motivated by financial gain. You can't just say “we're the experts, listen to us”; given the world we live in, we need to think about delivering the message in a way that doesn't just hinge on claiming expertise. To really fix this, we'd need to fix the elites, but there are some small things we can do to make it better.
Your work is oriented toward the population and individuals, and toward how rumors have arisen as a consequence of the internet. Are there material conditions that fed distrust before the internet? There is a tight relationship between the rise of neoliberalism, the rise of distrust, and the financialization of the economy. Do we have the institutional capacity to regulate this technology? What is our responsibility at MIT?
I'm going to take us back to the medical example: how can we fix this? There was a disconnect between expertise in prediction and the response to the pandemic. How can we convey that expertise comes with uncertainty? This is a very nuanced argument, and we should be making it at MIT: not just saying that something is right or wrong, but showing where we're certain and where we're not. We don't want to overclaim on the evidence; intellectual modesty is really important.
How do we think about rumors that are persistent (e.g., anti-Semitism — “Jews will not replace us,” “Israel was responsible for 9/11”)?
There is a book called "The Global Grapevine" that addresses this. I'm not a sociologist, but the themes are really important: we want to think about the specifics of those rumors, and also about how they tap into themes that come up again and again. We can probably learn from history.
What’s your next book and how is misinfo related to AI? We talked about how misinfo is related to the internet, but now deepfake technology is pervasive. How do we deal with that?
What does tech change and what does it not change? There have been attempts to deceive and manipulate information for a long time: faked pictures of UFOs were possible before the advent of Photoshop, and people adjusted after it arrived. How can we work with tech companies to label images and convey, from a user-experience standpoint, what we want those labels to convey? Another thing to think about is how we can use these technologies to help us: how can we use ChatGPT to argue with people about misinformation? Individual conversations are impossible to scale up with humans, but with AI, that actually becomes possible. It's not all bad, but we want to be very cautious about how we deploy it (and about how humans interact with it as well). Much of what seems new under the sun is actually very familiar, and we should be learning from that.