How misinformation spreads online—and what we can do about it

Internet researcher Renée DiResta reflects on the rise of conspiracy theories and other misinformation online, and shares ways that social media platforms can respond.

    Conspiracy theories about COVID-19 have proliferated online in the last six months. A “perfect storm” of factors has led to a number of persistent yet debunked theories about the origins of the virus.

    Renée DiResta, an Emerson Collective Fellow, is a researcher who investigates the movement of social media narratives in an effort to understand how misleading information — about health, politics, and more — spreads online. As the Technical Research Manager at the Stanford Internet Observatory, she’s currently at work on a project to combat misinformation about the upcoming 2020 election.

    DiResta recently spoke with Raffi Krikorian, Emerson Collective’s Managing Director of Engineering, about how the phenomena she studies undermine our democracy, and what technology platforms can do about them.

    Let’s start with the basics. How do conspiracy theories start online?

    One of the things that’s great about the internet today is that it lets you find people who are just like you. Certain platforms really prioritize creating places where people can go and find new friends. You used to come to Facebook with your “real” social network—the people you already knew in real life—but then the platform began to prioritize the creation of groups and the expansion of your online social network. That keeps you on the platform longer and increases your engagement. There are many very active groups.

    But there are also a lot of very persistent conspiratorial communities online. Conspiracies often start when people are looking for an explanation for a situation that makes them anxious—looking for a bad guy or a boogeyman when there isn’t necessarily one. When there’s no satisfactory explanation, people latch onto something that can make the world make sense. There are some recurring conspiracies about diseases: that they are caused by bioweapons; or by villains looking to reduce the world’s population; or by corporate profiteers wanting to sell a cure or vaccine. Unfortunately, for some people it’s more plausible that evil people are doing something than that—in the case of COVID-19—disease simply exists in the world.

    Conspiracies often start when people are looking for an explanation for a situation that makes them anxious — looking for a bad guy or a boogeyman when there isn’t necessarily one.

    Are conspiracy theories a new problem? How do we start to control the spread of misinformation online?

    Conspiracy theories are very, very old. Propaganda is also very, very old; it dates back to the era of the printing press. The word “propaganda” comes from the “Congregation for the Propagation of the Faith,” founded in 1622, after all, so it predates the internet by centuries. The type of conspiratorial content is not new, but the distribution mechanism is new: the virality of people sharing content, and the velocity at which information spreads.

    What we saw with the recent Plandemic video, and some of the other extremely viral hoaxes about COVID-19, was that people would share them widely, and then the platforms where they were shared would do something about it only after the content had two million views. That’s too late. So the question really becomes: how can we begin to assess these things much earlier?

    If you’ve been following platform policy over the last few weeks, there have been some acknowledgements that unchecked virality is problematic. Finally. Facebook made a comment saying that when things begin to go viral on the platform and the velocity spikes for particular content, someone will look it over and potentially throttle it, to give fact-checkers time to come in and address any misinformation. That’s one policy intervention they’re recognizing is possible. Addressing the means of distribution and adding context is different from simply taking down the content.
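    In engineering terms, that kind of velocity-based review can be pictured as a sliding-window counter over shares: when an item’s share rate jumps past a threshold, it is queued for human review and temporarily throttled. The sketch below is a minimal, hypothetical illustration in Python; the window size, threshold, and function names are assumptions, not a description of any platform’s actual system.

```python
from collections import defaultdict, deque
from typing import Optional
import time

# Hypothetical velocity-spike detector: count shares per content item in a
# sliding time window and flag items whose share rate crosses a threshold,
# so reviewers and fact-checkers can act before the item goes fully viral.
WINDOW_SECONDS = 3600        # assumed window: the last hour of shares
SPIKE_THRESHOLD = 10_000     # assumed rate: shares per window that triggers review

_share_log = defaultdict(deque)   # content_id -> timestamps of recent shares

def record_share(content_id: str, now: Optional[float] = None) -> bool:
    """Record one share; return True if the item should be queued for
    review and throttled until fact-checkers have had a chance to look."""
    now = time.time() if now is None else now
    log = _share_log[content_id]
    log.append(now)
    # Drop shares that have fallen out of the sliding window.
    while log and now - log[0] > WINDOW_SECONDS:
        log.popleft()
    return len(log) >= SPIKE_THRESHOLD
```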

    The platforms are also beginning to recognize that particular distribution interventions can improve the quality of information that people see. That’s where a second kind of intervention can be useful—what I and others call “Do Not Recommend”—in which the moderator allows the content to remain on the platform but removes it from recommendation engines. That means if I want to go learn about certain conspiracies, the platforms aren’t taking them down—they’re allowing that freedom of expression to persist—but they’re not proactively pushing them at people.
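    One way to picture “Do Not Recommend” in code: the content stays retrievable by direct search or link, but it is filtered out of the candidate pool before the recommendation engine ranks anything. This is a minimal sketch with hypothetical names; it does not correspond to any real platform’s API.

```python
from typing import Dict, List, Optional, Set

# Hypothetical "Do Not Recommend" list, populated by moderators. Items on it
# remain hosted (searchable, linkable) but are never proactively recommended.
DO_NOT_RECOMMEND: Set[str] = {"viral-hoax-video"}   # illustrative ID only

def recommendation_candidates(candidates: List[str]) -> List[str]:
    """Filter the candidate pool before ranking: flagged items are excluded
    from recommendations but are not removed from the platform."""
    return [c for c in candidates if c not in DO_NOT_RECOMMEND]

def fetch_content(content_id: str, store: Dict[str, str]) -> Optional[str]:
    """Direct lookup still works; the content itself stays up."""
    return store.get(content_id)
```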

    The type of conspiratorial content is not new, but the distribution mechanism is new: the virality of people sharing content, and the velocity at which information spreads.

    Do you think there’s a relationship between the rise of these conspiracy theories, and the breakdown in trust between the government and the people?

    Yes. COVID-19 was fascinating in that, all of a sudden, everybody was paying attention to the same thing, globally. The conspiracies are actually quite similar across the world, in terms of the particular narratives that emerged. Oftentimes there’s a locally specific government boogeyman or figurehead that’s more culturally relevant; but the general claims—of elite control, or the virus being unleashed upon the people—are pretty common themes worldwide.

    Countries with low degrees of trust in government will often be the ones where that “government boogeyman” narrative gets more traction. In the case of COVID-19 in the U.S., what we saw was that—as people were all getting on the internet and looking for information about the virus—we had a bit of a breakdown in official guidance and communication. There wasn’t very much in the way of effective government leadership, so stories filled the void.

    In addition, even our health institutions were not communicating effectively with the public in the way that people have come to expect in the age of online information. The CDC and the World Health Organization, for instance, were not participating in the conversation on social platforms in the way that grifters, charlatans, and conspiracy theorists were. People wanted to know about treatments and about masks, and the health institutions were very slow to say anything.

    Another problem was that platforms were trying to figure out what information to curate and whom to amplify. They made the conscious decision to boost the institutional health authorities, which, in most outbreaks, should be the safe bet. But when those authorities were weeks behind on updating mask guidance and things like that, the fact that the platforms had chosen to boost them really just served to further lower public trust.

      The Election Integrity Partnership’s objective is to detect and mitigate the impact of attempts to prevent or deter people from voting or to delegitimize election results.

      Let’s talk about the upcoming election. What are you doing at the Stanford Internet Observatory to prepare for it?

      At the Stanford Internet Observatory, we have a multistakeholder effort called the Election Integrity Partnership, a consortium of core research organizations plus government and civil-society partners, focused on detecting voting-specific misinformation and disinformation campaigns.

      We are very concerned about ensuring trust in the integrity of the voting process. What we expect to see is a wide range of claims, leading up to and on Election Day, alleging various forms of impropriety: claims that the machines don’t work; procedural misinformation telling people misleading things about where to go vote; allegations of rigging; allegations of polling-place misconduct. These are all things that local governments will see bubbling up in their communities, and the claims will need to be triaged, assessed—fact-checked in some cases—and examined for evidence that bad actors are the ones amplifying them.

      It’s a nonpartisan initiative to make sure that all channels of communication across all of the stakeholders are connected—researchers, fact-checkers, local governments, the tech platforms, CISA at DHS (which is responsible for election infrastructure security)—ensuring that information is routed to the most relevant party as quickly as possible.

      How do you stay sane online? You must open Twitter every day and see some wild things.

      I love what I do. It’s like I’m watching the emergence of a new communication infrastructure in real time, and seeing how so many of the old propaganda tactics are being reimagined for it, remade for the modern era. Any time a platform changes a feature or policy, it changes the playing field, and some new tactic emerges as a result. It’s a constant stream of novel things, which is fascinating.

      I follow news sources from across the partisan spectrum: a lot of different voices on both sides of the political aisle in the U.S., and a bunch of international voices as well. I think it’s important to understand what’s important to particular communities, and how particular narratives are received by those different communities—how events are seen to play out very differently depending on which news sources or influencers you’re following. I do try to stay pretty plugged in.

      I feel this work has made me less partisan in a lot of ways. Particularly over the last four years—watching the extent to which these campaigns target all Americans—I have not yet seen a single community that is wholly resistant. People running influence operations target all of us because it works. We all share the same human psychology.
