I recently went to a very lovely workshop hosted by the Media Ecosystems Analysis Group on health misinformation and social media platforms. I learned a lot from the presentations, and one highlight of the workshop was an open discussion of the research insights that can actually be helpful for non-academics (policymakers, tech practitioners, etc.). Here are some of the takeaways that I wrote up during the session, so please excuse slight messiness in language...
-
Non-official spaces for misinformation: We have a tendency (particularly in communications and political science) to want to talk about “official” sources of information: politicians, government websites, etc. Increasingly, the spaces of influence are sports, entertainment, and memes. No one went to those spaces for politics, but the speech there is now very political. It might not be explicitly political, and it might be read as comical, but it is a huge venue for misinformation.
-
Development of the alt-left: These “weird spaces” are not just a right-wing problem; the left has its own vectors of disinformation. Tankies are a good example of this: they are extreme Communists who have become the 4chan of the left. They are genocide deniers and very supportive of the CCP, yet they claim to be anti-fascist. They have created their own clones; it’s not just on Reddit. When tankies were banned from Reddit, they moved to a Mastodon-style Reddit alternative (they created their own technologies!).
-
Platforms and insider networks: Twitter is the classic drosophila of social media research, and with Pushshift, Reddit opened up; CrowdTangle did the same for Facebook (a rough sketch of that kind of data access follows this item). Increasingly, we know that disinformation actors coordinate on platforms that we really don’t study at all, so there are a lot of holes in our knowledge as these spaces become more popular (e.g., Telegram) and start to surface in the mainstream. There are also apps that are very mainstream (e.g., WhatsApp), but hard to study! If we’re going to take platforms like WhatsApp seriously, we know that insider networks are really important (e.g., your family chat, the building you live in, etc.). That notion of an insider network has very different dynamics as far as mis/disinformation go.
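For non-academic readers, here is a minimal sketch of the kind of data access Pushshift enabled: a keyword search over Reddit submissions via its public HTTP API. This is my own illustration, not something presented at the workshop; the endpoint URL, parameter names, and continued availability are assumptions based on the historical public API, and access has since been restricted, so treat it as illustrative rather than a working recipe.

```python
# Sketch of a Pushshift-style keyword search over Reddit submissions.
# Endpoint and parameter names are assumptions based on the old public API;
# access may have changed, so this is illustrative only.
import requests

PUSHSHIFT_URL = "https://api.pushshift.io/reddit/search/submission/"

def fetch_submissions(query, subreddit, limit=100):
    """Return submissions from `subreddit` whose text matches `query`."""
    params = {"q": query, "subreddit": subreddit, "size": limit}
    resp = requests.get(PUSHSHIFT_URL, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json().get("data", [])

# Example: pull a small sample of health-related posts for manual review.
for post in fetch_submissions("vaccine", "conspiracy", limit=25):
    print(post.get("created_utc"), post.get("title"))
```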
-
Demonizing your enemy: “Politics as a football match” (Zeynep Tufekci) really resonates with work in the Philippines; “obliterating your enemy” is an important component of this, and compromise is not possible. Misinformation becomes a fair tactic to demonize the other side because we don’t think of them as people we compromise or dialogue with, but as people we want out of the game entirely. Misinformation can also be about signaling: if you amplify something that you know is wrong, that is a reputational decision. You are signaling a political commitment.
-
Dealing with political identity: Wellness influencers play with identity in a really interesting way: they often strategically play with politics (e.g., “not right or left, not Democrat or Republican”) but use the rhetoric of wellness to advance their goals. This looks similar to US media at the turn of the 20th century: newspapers realized that they could sell twice as many copies if they could cater to more political stripes, so they invented “objectivity” in narrative reporting.
-
Information disorder and influence: Not all influence is bad! I design tools monitoring gender-related violence in media for activists; activists use these stories to change the media and inject feminist analysis into that space. The ability to manipulate and shift discourse isn’t always bad. It’s useful to remember that this is a counter-narrative: the “official” media outlets are really toxic. Influence isn’t something that we should always mistrust on its face. There are always influencers on social media, but if the influencers all believe the same thing, then there’s something wrong with the ecosystem. Social media is not all inauthentic; this is a question of proportionality. How many users are real people vs. bots? Saying that most disinformation is carried out by bots is an oversimplification; users might be following narratives that were planted by malicious actors, but that’s not the same thing. When it comes to known state-actor-sponsored troll accounts, most do not create new narratives. They upvote existing comments and accounts, and then seek to further polarize from there.
-
(In)authentic behavior: The reason disordered information can be so powerful is that the people spreading it are not bad actors; they are spreading something that they believe is true (or want to believe is true). A lot of it is honest, fair-trade, organically grown misinformation. The truth is that state actors, more than anything, are looking for existing divisions and finding ways to amplify them. They don’t have to create misinformation out of whole cloth. Pay less attention to the places that you think are influential; the spaces that matter are much smaller and more organic. Authenticity is what makes misinformation so powerful! The magnitude of the problem is going to become worse as the attention economy becomes even larger. Attention in and of itself is a commodity that is tradeable for money or for power; it’s what makes journalists push to be first and fast, and we can now expect attention to have an impact on an ecosystem in the same way that injecting money into an economy does.
-
Expanding the conversation past the Global North: A lot of these dynamics do not translate to places with low internet penetration, where they surface in real life rather than on the internet! There is a start-up and entrepreneurial economy working on this (e.g., influencers themselves sharing their success stories across Africa, South Asia, etc.), and we don’t see enough research on it. We know that Dark Social is where a lot of this activity is coordinated. In the Global North, we’re used to seeing these campaigns manifest online. In the Global South, we see them happen in the street or at a protest; they don’t surface on the platforms we know how to study because those platforms are not a meaningfully inclusive participatory sphere there. The old media model is about creating an audience and delivering that audience, for hire, to disinformation actors.
-
Information provenance: We need to ask where someone is getting their information — we can’t make assumptions about provenance anymore, especially since it’s so individual. Doing ethnographic research is critical to really understand this — how do they know what they know? Tracing those knowledge claims is really important. The question of where people get their knowledge is linked with their identity too!
-
Refining research in a dynamic world: The research we do affects the world we’re studying! Studying social media can be like shining a light on cockroaches: once you shine the light, all of the bugs scatter. There is a real opportunity for reflexivity here. A couple of years ago, there were lots of studies about the YouTube algorithm and radicalization. Many found that the algorithm does not have a radicalizing effect; if anything, it tried to direct you to mainstream news channels. However, the YouTube of those studies is not the YouTube of today, so researchers who believe that the algorithm is radicalizing and those who believe the opposite can both be right! The notion of a single media voice is the best picture of broadcast-era anxiety; now everyone is a publisher. Is the chaos a result of the plurality? Or is it something else? It’s hard to know!