How to Perform a Social Media Autopsy
"You know something is being manipulated but you can’t see how."
Welcome to Galaxy Brain — a newsletter from Charlie Warzel about technology and culture. You can read what this is all about here. If you like what you see, consider forwarding it to a friend or two. You can also click the button below to subscribe. And if you’ve been reading, consider going to the paid version.
There’s an iceberg-like quality to media manipulation and disinformation campaigns. What you can see is dramatic, but it’s a small fraction of the whole. Many media manipulators and trolls use the chaos and murky nature of big platforms as cover to spread falsehoods.
Erin Gallagher’s work helps me see the big picture. Gallagher is a Research Assistant with the Technology and Social Change Project at the Shorenstein Center. She specializes in visually mapping the way information travels across social media, especially Twitter. When I first came across her research in 2017, she’d been examining hashtags from specific news events and trying to chart how trolls and far-right influencers were pushing viral propaganda. Here’s one graphic she made showing the conversation around a single hashtag from April 15, 2017, when Proud Boys and anti-fascist activists clashed at protests in Berkeley:
This is what it looks like underneath the iceberg. It’s a total mess of accounts. Some are coordinating with each other but there’s also plenty of serendipitous stuff going on between accounts that don’t know each other. Since I began following her, Gallagher has gone on to study movements like QAnon as well as networks of serial online harassers. Recently, she performed a forensic analysis of a conspiracy theory that antifascist activists set last summer’s wildfires in Oregon. Gallagher’s work is like a social media autopsy.
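For readers curious about the mechanics behind a graph like this: the core move is to treat accounts as nodes and amplification (retweets, mentions) as edges. Below is a minimal sketch in Python using the networkx library. It assumes you’ve already collected the tweets for a hashtag; the records and field names are illustrative, and Gallagher’s own tooling may differ.

```python
# A minimal sketch of building a hashtag conversation graph: accounts are
# nodes, retweets are weighted edges pointing at the amplified account.
import networkx as nx

# Assume `tweets` was collected elsewhere (e.g., via the Twitter API);
# these records and field names are illustrative stand-ins.
tweets = [
    {"user": "account_a", "retweeted_user": "account_b"},
    {"user": "account_c", "retweeted_user": "account_b"},
    {"user": "account_b", "retweeted_user": None},
]

G = nx.DiGraph()
for t in tweets:
    source, target = t["user"], t["retweeted_user"]
    G.add_node(source)
    if target is None:
        continue  # an original tweet adds a node but no edge
    if G.has_edge(source, target):
        G[source][target]["weight"] += 1
    else:
        G.add_edge(source, target, weight=1)

# Accounts ranked by how heavily they are amplified (weighted in-degree).
most_amplified = sorted(G.in_degree(weight="weight"), key=lambda x: -x[1])
print(most_amplified[:10])
```

Coordinated clusters show up as dense knots of repeated edges; the visual “mess” in a graph like the one above is thousands of these edges drawn at once.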
Often, we frame particular disinformation events as the brainchild of one evil genius or group. Gallagher’s visualizations attempt to separate deliberate grifter coordination from unwitting bystanders. What often looks tidy is, in fact, incredibly complicated.
What follows is a lightly edited version of a long chat with Gallagher about how she studies these movements, how information warfare is evolving, and why the definition of online harassment is messier than many pundits think.
Your work is unique in that it’s both visual design and investigative social media analysis. How’d you get into it?
Back in 2013 I started working with a group of activists on a small indie news blog, mainly translating Spanish into English. In 2014, 43 students from Mexico were abducted and I watched Mexican social media just explode. I started following the protests and hashtags in order to translate, and I saw one hashtag that got spammed by bots. The bot campaign so effectively censored news happening on the ground that I couldn’t follow what was happening in the protests anymore. I’m not a trained journalist. I was supposed to be a graphic designer. But I was so interested in, and concerned by, what I’d seen. In U.S. media, automated accounts and social media manipulation are often framed as amplification — boosting fake news or a hashtag — but in other countries this was being discussed as a form of censorship. I thought that was interesting.
Were you surprised when these dynamics started to surface more in the American political conversations on social media — in the run-up to the 2016 election?
Definitely not. There was a report that came out in Bloomberg in March 2016 called “How to Hack an Election.” I’m not sure it made waves in the U.S., but when it came out, the researchers I’d come in contact with in Latin America told me it was like a bomb went off in their part of the world. This guy had been manipulating social media and influencing elections for a decade, and he explained how he did it. But it didn’t seem like it made much of an impression in the U.S. There was a notion of ‘that won’t happen here.’
Your work is heavily focused on visualizing online propaganda and manipulation campaigns. I’m curious how you’d describe what you’re trying to accomplish when you dissect a harassment campaign or look at a viral information operation that swarmed a hashtag?
We talk all the time about how there’s something wrong on the internet, but it’s often an abstract concept. You know something is being manipulated but you can’t see how, and so there is no way to understand it. I felt that making these visualizations of Twitter conversations, and of social media in general, would be a way to show people what this manipulation looks like. Initially, I wanted to know: what does an organic conversation look like versus a hijacked one? How do we tell the difference between what’s a bot and what’s a real person online? And what I’ve found is that it’s often extremely difficult to parse — you often can’t tell what is real and what is inauthentic just from graphing these campaigns.
People confusing real accounts for bots is so frustrating. Do you feel like regular internet users have gotten better at not always blaming bad things they see online on ‘bots?’
I think it’s still a lot more complicated than people realize. There is a misperception, still, in the U.S. specifically, regarding who is running fake accounts and why. The way most of the U.S. public learned about disinformation online is through the lens of Russian interference. But, of course, what is happening online is so much more complicated than simply ‘the Russians.’
This dynamic comes up in your work tracing fake anti-fascist accounts that have amplified conspiracy theories. Most recently you charted how fake anti-fascist accounts helped boost rumors that last year’s wildfires in the Pacific Northwest were caused by left-wing activists.
Yes. A good example is how the original operators of this fake Boston Antifa Twitter account were a young couple in Eugene, Oregon. They’re clearly right-leaning in their politics, but they created the account as a joke and set the location of the account to Russia. And, of course, a lot of people in the U.S. said, ‘Look at this! The Russians forgot to turn off location settings!’ It was so disheartening to see people fall for that. Since then, I’ve seen these exact fake Antifa accounts brought up in congressional hearings. You have powerful people assuming this is some foreign intel service, but really it’s just two shitposters in Oregon.
What I’ve tried to stress — especially recently — is that it’s so hard to tell if an account is a bot or not. I don’t even use that word much anymore. I ask, ‘Is this automated? Can I prove it?’ And if not, I don’t call it the b-word. Also, I try to remind people that the actual humans influencing media coverage and public discourse — even the trolls — are U.S.-based.
Yeah, the call is coming from inside the house…
And they’re a lot more influential than, you know, a botnet from Russia.
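As a concrete illustration of the ‘Is this automated? Can I prove it?’ standard: a crude first-pass check is posting volume relative to account age. The sketch below is a hypothetical heuristic, not Gallagher’s method; the account data and threshold are invented for the example, and, as she stresses, volume alone proves nothing.

```python
# A rough heuristic in the spirit of "Is this automated? Can I prove it?"
# This is an illustrative sketch, not Gallagher's method: posting volume
# is a weak signal that flags accounts worth a closer look, never proof.
from datetime import datetime, timezone

def tweets_per_day(created_at: datetime, tweet_count: int) -> float:
    """Average daily tweet volume over the lifetime of the account."""
    age_days = max((datetime.now(timezone.utc) - created_at).days, 1)
    return tweet_count / age_days

# Hypothetical account metadata of the kind the Twitter API returns.
account = {
    "created_at": datetime(2019, 5, 1, tzinfo=timezone.utc),
    "tweet_count": 300_000,
}

rate = tweets_per_day(account["created_at"], account["tweet_count"])
if rate > 100:  # illustrative threshold, not proof of automation
    print(f"{rate:.0f} tweets/day on average: investigate further")
```

Even a rate like this only earns an account a closer look: prolific humans can post at startling volume, and automated accounts can be throttled to look human.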
Recently, you published this long, forensic piece mapping an online harassment campaign and trying to trace it back to its origins. What struck me about it was not only the volume of the campaign but the complexity of its structure. You had automated accounts working with real people and seemingly associated groups working together, but not in an easily provable way. A lot of the harassment was happening across multiple platforms as well…
…It made me think about some recent conversations on Twitter arguing that journalists are too quick to cry ‘harassment’ on social media, and that we need to either raise the bar for what we call harassment or have narrower, cleaner definitions. But what you show is that these campaigns are, by nature, very messy. What do you think about that conversation?
It’s frustrating. I’d been looking at this particular troll network since May/June 2019 and I spent a lot of that time just trying to figure out how to explain it. I understood what was happening but it was so difficult to demonstrate. On one hand, I get why people underestimate these campaigns because often you can’t see them.
[The journalist] Carlos Maza said something in an interview about this that stuck with me. It was after he made a compilation video of [right-wing comedian] Stephen Crowder mentioning Carlos derogatorily on his show. Maza said that, anytime Crowder mentioned him in a video, he’d wake up to a wall of harassment on Instagram, on Facebook, on Twitter. It was just overwhelming but from the outside you couldn’t tell. Nobody could really see what he was dealing with because so much of the hate — especially the volume of it — was confined to his private screen. I think about that a lot. It makes it so hard for outsiders to know what somebody is dealing with.
And it’s cross-platform. Something said on Twitter that doesn’t violate the rules can end up feeding wild abuse on a separate platform. I’m often frustrated that our conversation gets flattened down to intent — like, ‘Did X person *mean* to sic their followers on them?’ Does that matter?
If you look at, say, Glenn Greenwald, he’s going on Tucker all the time. Over time, those appearances change the nature of Glenn’s audience. Tucker has a much different audience than, say, you or I. Now, I can’t imagine Glenn doesn’t understand the nature of his audience. So when he does a quote tweet he’s basically putting something out to his audience and saying, ‘look at this ridiculous thing.’ And that paired with his tone and the nature of the topics he picks up on — it’s an aggressive move. He likely knows what will happen next.
It’s a precarious balance, because obviously you can’t hold people responsible for the actions of their entire audience. But also, to some extent, you have to understand your audience and how they’ll likely react to what you’re putting out.
Yeah. You don’t want to play that game of making people responsible for their audience’s every action, but it also seems like reckless behavior if you don’t take it into account. It’s very hard.
We first spoke in 2017 and some of these digital manipulation networks felt novel then. I’m curious if you’re seeing an evolution in how propaganda and conspiracies spread and how harassment campaigns operate? What’s changed in recent years?
I don’t know if the tactics have changed much. Let’s take the Antifa stuff. In 2017 that narrative was just starting. But now audiences are primed. Regular news consumers on the right have this mental picture of Antifa as violent, nefarious terrorists — Trump, for example, did a lot of work solidifying that image in people’s minds. In 2017, right-wing trolls and shitposters had to work a lot harder to get people to think Antifa was doing nefarious stuff. So I don’t think the tactics have changed, but the audience’s understanding of the typical wedge issues is way more advanced.
People are more polarized and ready to receive the information.
Now, during a breaking news event you can see that all the shitposters just know exactly what to do. They know, ‘OK, here are the talking points — we’ll roll this out.’ It felt really clear during last summer’s George Floyd protests and the rumors about buses full of Antifa super soldiers coming to people’s towns. By the time we got to the Oregon wildfires, the rumors that spiraled out were a natural progression of these information networks.
I found the disinformation around the wildfires striking. It felt like a really worrying example of crises just piling on top of each other. You have a misinformation crisis, a crisis around racial justice causing mass protest and intensifying political polarization, and a climate crisis causing natural disasters. And each one just seemed to feed the others and increase the stakes.
There’s this amazing quote I’m planning to print out and frame, one that really resonated while I was examining the Antifa wildfire conspiracy. It’s from a network scientist named Peter Dodds, about complex systems: “There’s no love in a carbon atom, no hurricane in a water molecule, no financial collapse in a dollar bill.”
It really spoke to what you’re mentioning — the concept of complexity. You have all of these various crises happening at once, and when they happen at the same time you get brand new emergent effects. I think it’s profound and totally under-studied.
Do you think those emergent effects are intensifying in recent years? I feel like whether it’s with troll campaigns, or even something like the GameStop stock rally, we are witnessing more glaring examples of people who can collectively command a lot of attention and quickly use it to accumulate power.
I feel like this attention gathering is something that’s always been happening. But there’s so much access to huge, instant audiences that it happens at greater velocity now. Also, distribution tactics are more advanced. During the Oregon wildfire conspiracies for example, QAnon seized on the narrative. But what fascinated me is that, in this case, QAnon worked more as a simple distribution platform. They weren’t really creating their own rumors or theories, they just picked up what was already circulating. QAnon was acting more like a broadcast network.
Do you think that’s where QAnon is headed? Fewer byzantine conspiracy theories and more of a broadcast network?
I think it’s already that. The audience is so engaged and so Extremely Online. It’s a powerful distribution node. Back when I started this work an activist described this kind of information warfare as communities of people, perhaps divided by language or borders, but united in their common interest to create their own news engines to impose their preferred realities on everyone else. I think this work is not simply, ‘here’s a fake account’ or ‘here’s why this hashtag is trending.’ It’s about dissecting audiences and the people influencing them to believe certain things or vote a certain way.
Is that how you’d recommend people think about what they see online?
Looking critically at online audiences and who or what is influencing them is a good way to frame it. I’d also go a step further: Ask yourself, ‘What audience are you in? What is the content you’re seeing every day and sharing? Who makes it? Why are they making it?’ That’s a big ask, but it seems critical.
Ok! That’s it for today. If you read this newsletter and value it, consider going to the paid version, and come hang out with us on Sidechannel.
If you are a contingent worker or un- or under-employed, just email and I’ll give you a free subscription, no questions asked. If you’d like to underwrite one of those subscriptions, you can donate one here.
If you’re reading this in your inbox, you can find a shareable version online here. You can follow me on Twitter here, and Instagram here. Feel free to comment below — and you can always reach me at charliewarzel@gmail.com.
When it comes to networks, we are like Medieval people lurching towards an understanding of AIR -- much as we want to know and understand, much as we can feel it's IMPORTANT that we know and understand, that it will change our lives! -- we just can't yet conceive of those invisible atoms and molecules that we move through, and breathe in and out every day, let alone what they do, what they carry, how they interact.
Humanity exists in networks (AS networks?), whether digital or not, and those networks serve us well, but they have always "gone wrong" in various ways: the internet didn't invent the madness of crowds, social distrust, or misinformation. Unlike most of the networks throughout human history, though, digital networks both speed things up and leave a trail -- an utterly complicated, almost incomprehensible trail. Some people ask if we're "living in a simulation" -- maybe, maybe not, but either way, we're at least partially living in digital networks that are vast experiments in human behavior, accelerated and documented.
But all that data is meaningless if we can't make sense of it.
I just read Peter Pomerantsev's _Nothing is True and Everything is Possible_, about the waves of cultural desperation and hysteria that have washed over Russia in recent years, and what struck me most was that even though the stories he tells are absolute madness, none of them seemed alien or removed from my American experience of life... Russia isn't us, but we are connected, in profound ways, beyond shared humanity: our networks are taking similar shapes: feedback loops, vicious cycles and doom spirals. Too much irony layered on top of unspoken understandings, winks, memes that aren't for you, outright lies. Our responses to such dysfunctional networks drive us to extremes of belief and extremes of apathy. Some find solace in side-taking, some escape into cults, but most shrug and divorce themselves from the whole mess, deciding it's just a game you have to play to get along -- what is truth? Big shrug. It all moves too fast and is too complex for the average person to derive any human meaning.
I applaud you, Erin Gallagher, for the work you're doing, and I hope to read and see more about what you discover. And thank you Charlie Warzel for continuing to bring these ideas to the public in this new way.