This is Galaxy Brain — a newsletter from Charlie Warzel about technology and culture. You can read what this is all about here. If you like what you see, consider forwarding it to a friend or two. You can also click the button below to subscribe. And if you’ve been reading, consider going to the paid version.
PROGRAMMING NOTE 1: Months ago, before Galaxy Brain was even a twinkle in my eye, our family planned our first trip away from Montana, driving an obscene number of hours to warmer climes for the last week of April. Given that I’m only in Week Three of this new life adventure, our “vacation” is now quite poorly timed. Which is all to say, please bear with me this week as I try to do some light blogging by the pool.
PROGRAMMING NOTE 2: Fellow Sidechanneler Casey Newton has an important piece up on the enterprise software company Basecamp, which has been in the news lately after a great deal of internal drama. Casey’s piece has some fascinating reporting about the internal reckoning over racial justice and political discussions that led to this week’s controversy. Today, Anne Helen Petersen (of Culture Study fame) and I are going to do a live chat with Casey about the piece at 5:30 PT on Sidechannel. If you want to get access, you can subscribe here!
Can We Do Better Than Deplatforming?
In the next few days or weeks, Facebook’s independent Oversight Board will offer its ruling on whether the company ought to continue its ban (deplatforming) of Donald Trump. As you might expect, quite a few people have thoughts as to what the social network ought to do — so much so that the Oversight Board received over 9,000 public comments on the case and had to delay its decision in order to read them. Elected officials around the world are weighing in, perhaps because they’re worried that the decision might add a little friction to being a powerful person online.
Deplatforming is controversial. With groups, it seems to work so well that, at some point, it also becomes a potential liability. A paper published in October 2020 examining the aftermath of Reddit’s ban of r/The_Donald and r/Incels found that deplatforming toxic communities decreases the ability of bad actors or groups to attract new members. But it also suggested that when a banned group migrates to a new, more obscure platform, it tends to get more radical.
For power users, deplatforming can work. The best (recent) example: former Breitbart writer and noted troll Milo Yiannopoulos, who was banned from Twitter in 2016. In Yiannopoulos’s case, deplatforming didn’t cut him off from his audience as much as it cut him off from the rest of the mainstream media, who frequently amplified his bigoted, misogynistic, trolly posts, if only to debunk them. What good is a MAGA pundit who has lost the ability to trigger the libs?
Infowars’ Alex Jones is a slightly trickier example, because Jones has spent decades building out a media empire with an audience so bought in to the Infowars Cinematic Universe Of Conspiracies that it will tune in directly to his programming. Donald Trump falls far more into Jones’ category — but his laziness may ultimately hurt him. As both a brand and a former president, Trump has access to alternative platforms and could very easily build out the broadcast infrastructure, should he seek it. That he has hardly opted to leverage these opportunities and has instead taken to sending out Kid Mayor style press releases with a fake presidential seal remains one of my genuine 2021 surprises.
Understandably, most of the discussion around Trump’s case focuses on a binary outcome: Either Trump is nuked from the platform in perpetuity, or he’s reinstated and presumably begins tempting the fates with posts letting you know the next time he’ll call into Sean Hannity’s show. This makes sense, as most content moderation is bluntly framed: you either have a platform or you don’t.
But what if that’s too simple a frame for an issue that applies to the online speech and reach of world leaders? Even those who agree that the platforms made the right call to ban Trump are queasy about what the decision says about Big Tech’s power. (There was a point at the end of Trump’s presidency, for example, when he could unilaterally launch a nuclear strike — but not post about it.) There’s an omnipresent fear that a ban would only cause more harm by victimizing the offender and driving more of the conversation underground. These are valid fears! But what if the on-or-off-the-platform binary is a false dichotomy — and the result of a shallow understanding of what platforms do?
I confess I’d never given much thought to a different way, at least until talking with Aviv Ovadya last week. Ovadya, the founder of the Thoughtful Technology Project and a fellow at the German Marshall Fund, is one of those people I turn to when I’m trying to think about a well-worn subject from a different angle. Back in 2018, he broke my brain by outlining a number of worrying scenarios for our future that he called “the Infocalypse.”
Ovadya argues that we need to expand our idea of what’s possible when it comes to content moderation. His idea is that the Oversight Board should attempt to “unbundle” Facebook. Ovadya thinks that the Oversight Board and Facebook should look at the functions of the platform — and then figure out which features, in the hands of Donald Trump, are likely to result in real harm for users, and which are largely unimportant. Perhaps, he argues, there’s a way to separate them out. In a public comment to the Board, he laid out some examples of what this might look like. First, there are Facebook’s explicit posting functions, which include:
Providing free storage for new and existing content.
Enabling free broadcasting to a large audience.
Allowing one to pay to advertise and broadcast across Facebook properties.
Enabling one to target this advertising in a variety of ways.
Providing free analytics for understanding how page posts perform.
There are also implicit posting functions for a user like Donald Trump, which Ovadya suggests are equally important (I’ve sketched both sets of functions in the code after this list):
Allowing frictionless submission of arbitrary amounts and types of content — instantaneously and without review.
Enabling near frictionless access to content through links and search.
Enabling near frictionless re-sharing of content.
Recommending content such as videos.
Enabling the frictionless creation of unmoderated conversational spaces in the comment section of posts.
Associating content with the Facebook brand.
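To make the unbundling idea concrete, here’s a minimal sketch of those functions rendered as per-account toggles. To be clear, this is my illustration, not Ovadya’s proposal or anything resembling Facebook’s actual systems; every name in it is invented:

```python
# A minimal sketch of the "unbundling" framing: each platform function
# becomes a per-account toggle instead of part of an all-or-nothing
# account. Purely illustrative; these names are made up.
from dataclasses import dataclass, fields

@dataclass
class AccountCapabilities:
    # Explicit posting functions
    content_storage: bool = True        # free storage for new and existing content
    broadcasting: bool = True           # free broadcasting to a large audience
    paid_advertising: bool = True       # paying to advertise across properties
    ad_targeting: bool = True           # targeting that advertising
    analytics: bool = True              # free post-performance analytics
    # Implicit posting functions
    instant_unreviewed_posting: bool = True  # frictionless, instant submission
    link_and_search_access: bool = True      # content reachable via links and search
    resharing: bool = True                   # frictionless re-sharing by others
    recommendations: bool = True             # e.g., videos pushed to other users
    unmoderated_comments: bool = True        # open conversational spaces on posts

# A full ban flips every switch at once; unbundling flips them selectively.
def fully_deplatformed() -> AccountCapabilities:
    return AccountCapabilities(**{f.name: False for f in fields(AccountCapabilities)})
```

The point of the exercise is that once the functions are separated out, a moderator no longer has to choose between all of them and none of them.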
While Ovadya doesn’t advocate for or against deplatforming Trump, he believes that you could feasibly unbundle some of those features. This might look like reinstating Trump but with some or all of the following restrictions on his account (a toy sketch of enforcement follows the list):
Disabling all broadcasting and recommendations of posts.
Enforcing a 7-day delay before a submitted post is shown on the platform.
Limiting a user to 3 posts per week.
Requiring manual expert review of posts for violations before they are published.
Disabling comments, reactions, and resharing for posts.
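And here’s a toy sketch of what enforcing those restrictions might look like. Again, the function and field names are hypothetical, not a real moderation API at Facebook or anywhere else:

```python
# A toy enforcement gate for a restricted account. Entirely hypothetical:
# none of these names correspond to real platform APIs.
import datetime

POST_LIMIT_PER_WEEK = 3
PUBLICATION_DELAY = datetime.timedelta(days=7)

def gate_post(posts_this_week: int,
              submitted_at: datetime.datetime,
              passed_expert_review: bool) -> dict:
    """Decide whether, and when, a restricted account's post goes live."""
    if posts_this_week >= POST_LIMIT_PER_WEEK:
        return {"status": "rejected", "reason": "weekly post limit reached"}
    if not passed_expert_review:
        return {"status": "held", "reason": "awaiting manual expert review"}
    return {
        "status": "scheduled",
        "publish_at": submitted_at + PUBLICATION_DELAY,  # 7-day display delay
        "comments_enabled": False,     # no unmoderated conversational spaces
        "resharing_enabled": False,    # no frictionless re-sharing
        "recommendable": False,        # no broadcasting or recommendations
    }
```

Written out this way, the restrictions read less like a ban and more like a set of dials, which is exactly Ovadya’s point.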
Ovadya also says you could go further, and, for example, focus on advertisements and targeting. And I get it: you could argue that this type of restriction is essentially deplatforming by another name. But the real name for what Ovadya is describing is friction — real friction, not the toothless warning labels that social networks appended to Trump’s tweets and Facebook posts, knowing full well they weren’t effective.
The best part of Ovadya’s idea isn’t necessarily the potential restrictions, but the fact that his process asks us to think about what platforms are actually doing — and which parts explicitly and implicitly benefit their worst actors. When I think about these questions of platform regulation, friction seems like an increasingly helpful frame: a way to control not just distribution, but immediacy. This is especially true for the spread of disinformation, which often hinges on the ability of bad actors to publish quickly and gain a huge advantage over authoritative news outlets that take time to vet information before sharing it.
Frictionless platforms allow their most malicious users to take advantage of these windows of opportunity (this is known in academic circles as a data void, a term coined by Microsoft’s Michael Golebiewski). Ovadya’s frame helps us to see that limiting a user’s posts to three a week, or, say, enforcing a time delay on posts, would ultimately feel almost as extreme as deplatforming. And that, in turn, shows us how much these networks’ entire operation has come to rely on the availability of frictionless sharing.
Instead of asking platform executives to make these blunt, dramatic decisions, we should force them to address the ways their platforms really work. This means confronting uncomfortable truths, like the fact that platforms optimized for engagement will give a natural advantage to their most shameless users. Or that, as Renee DiResta and Matt DeButts argued in Columbia Journalism Review earlier this week, Facebook’s newsworthiness policy allowed Trump to stay on the platform, creating more space for the dangerous antics they ultimately banned him for. As they put it: “Donald Trump’s Facebook behavior is on trial, but its accomplice is at large: Facebook’s policy.”
This is the crucial part of Ovadya’s argument. Individual bad actors and insurgent movements are obviously a problem — but they’re one that Facebook can’t solve for on its own. Nobody can expect Facebook to moderate away shitty, exploitative people and movements. But we can and should expect Facebook to answer for and change the way its platform architecture gives outsize attention and money and power to its most shameless users.
Ovadya’s approach also reveals the shallowness with which most platforms and the Oversight Board are (at least publicly) approaching the question of moderation — especially in regards to leaders who might erode democratic norms. If, say, you’re inclined to think of Donald Trump’s platform presence as a malignant tumor in the body of democracy, the current platform approach would be akin to excising it with a hacksaw instead of a scalpel. It might work! But it could also cause unnecessary damage along the way.
“We’re thinking about this in the most rudimentary way possible,” Ovadya recently told me. “Deplatforming creates righteous victims, and it can connect bad people to worse people. It’s so broad that it puts the QAnon folks and the Nazis in the same category — and then it can force them into the same spaces, creating stronger networks between those who’ve already fallen far down that rabbit hole. Currently, enforcement is just very coarse. You want a finer grain set of consequences. We need more nuance.”
Putting my cards on the table, I think that Facebook is a better place without Donald Trump polluting it with garbage. After his election denial and the events of January 6th, I think there’s an extremely solid case for his permanent ban. But I also agree with Ovadya that we can have a more sophisticated conversation going forward around what we expect of world leaders online and how to make sure to hold them accountable before they destabilize our politics any further. Mostly though, I think conversations like this one are proof that we are still in the early days of understanding and governing the platforms. The limited array of enforcement options we have to balance speech and mitigate harms doesn’t feel adequate. I think we can, eventually, do better.
A Soothing Internet Thing
I joke that everyone has a pandemic catchphrase — some line they’ve repeated again and again to the people they’ve been stuck in captivity with as a coping mechanism. Mine is: “I miss bars.” I miss the faithful, sleepy neighborhood dive bar. I miss the busy, overpriced bar. I miss the random bar you have one drink in and then realize isn’t really your vibe. I miss the bar you would never go to normally but you’re in town on a work trip and it’s the closest one so screw it, you’re here. I miss going to bars with friends, and maybe most of all I miss going to a bar and bringing a book and reading a few pages only to strike up a meandering conversation with the bartender about nothing of interest. I realized this year that about 18 percent of my love of sports is watching them on mute in a bar.
Which is why I love this soundscape website by the folks at Maverick that lets you recreate a bar ambiance and toggle different settings to get that particular vibe you want. I put it on yesterday evening while I was writing this newsletter. Listening to the ambient chatter and clinking of various glassware with lo-fi music purring behind it filled me with joy and hope. Maybe it will do that for you, too.
No Bears Were Harmed In The Making*
Over in Sidechannel we have a ‘Best Thing You Saw Today’ room and this one brightened my day. The quickest way to my heart is via Bear Content.
*Galaxy Brain Corp. cannot confirm this but the rescue crews all seem to have nice and comfy padded mats laid out.
Better Than The Real Thing
This thread went viral so you likely saw it but I still wanted to share this post-pandemic New Yorker magazine cover that a School of Visual Arts illustration student did for an assignment. The whole thread is lovely but I found this particular one quite moving for reasons I’m not sure I can articulate. I think we should start a petition to let Lauren V. design an actual New Yorker cover.
Ok! That’s it for today. If you read this newsletter and value it, consider going to the paid version, and come hang out with us on Sidechannel.
If you are a contingent worker or un- or under-employed, just email and I’ll give you a free subscription, no questions asked. If you’d like to underwrite one of those subscriptions, you can donate one here.
If you’re reading this in your inbox, you can find a shareable version online here. You can follow me on Twitter here, and Instagram here. Feel free to comment below — and you can always reach me at charliewarzel@gmail.com.