Who Really Broke the Discourse?

Online moderation has a long history, and the predominant model today — reactive deletion of posts — is not the only one that works.

March 31, 2021

Illustration by Ning Yang.

The discussion around the failures of Twitter, Facebook, and other social media platforms to moderate content often starts with the assumption that the internet broke something fundamental in our common discourse. And as these companies try to regain control of the tenor of online conversation, some feel that platforms are breaking something else — the freedom that came with being online in the earliest days of the world wide web. But neither of these assumptions is really true. For years, early internet communities managed, imperfectly, to control and moderate conversations online. They exercised quite a bit of authority in doing so, but in a way that demanded a greater degree of community investment.

Earlier this month, Twitter turned to its users to ask about its moderation of world leaders, opening a public survey in fourteen languages. Should world leaders be subject to the same rules as everyone else, the company wanted to know, and what should happen to them if they broke a rule? As the platforms struggle with these questions, it can be useful to look backward.


Before the online moderator came the host. “A host is like a host at a party,” internet theorist and author of the book The Virtual Community Howard Rheingold wrote in 1998, in a guide titled “The Art of Hosting Good Conversations Online.” “You don’t automatically throw a great party by hiring a room and buying some beer. Someone needs to invite an interesting mix of people, greet people at the door, make introductions, start conversations, avert fisticuffs, encourage people to let their hair down and entertain each other.”

A virtual host, Rheingold wrote, must also be an exemplar for behavior in the group, and a “cybrarian” responsible for safekeeping “community memory” by pointing newcomers toward archives, posting links to past conversations, and hunting down digital resources. And, of course, a host must be an authority. “The host is the person who enforces whatever rules there may be, and will therefore be seen by many as a species of law enforcement officer,” Rheingold wrote.

In a recent telephone interview, Rheingold emphasized the distinction between “hosts” — the favored term on BBSes, the public, largely text-based bulletin boards that emerged in the 1980s — and contemporary moderators. “The word ‘moderator’ really implies that you only have censorship capabilities, whereas hosts were people,” he said. “They would invite people, introduce people, try to mitigate some of the fights.”

“There’s been a culture of online facilitation that happened in many, many chatrooms and forums for years and years, before Facebook came along and blew it all away,” he added.

Net artist Wolfgang Staehle was the founder of an early internet community called The Thing, which began as a BBS in 1991. The Thing was primarily for artists, many of whom already knew each other socially, though it wasn’t a closed community — they put up posters around New York with the phone number required to access the message board. Still, it was necessarily restricted by the limitations of internet access; a Pew survey in 1995 found that 14 percent of US adults had internet access at the time. (42 percent had never heard of the internet, and an additional 21 percent were “vague on the concept.”)

The Thing flourished due to activity on its message boards, which Staehle said broke down into a few types. “A lot of people hated the idea of moderation, so some of the message boards were completely open to whatever happened; sometimes it worked and sometimes it degenerated,” he said. “Then there were others that were…a bit like a symposium where specific people were invited.” And then, Staehle said, “There were a few that were in between, which were the most lively ones, filled with intelligent people and a few clowns who heckled them.”

The group expanded and became a sort of private network with nodes in New York, Berlin, and elsewhere, at a time when it was still rare to talk to anyone across the ocean via any technology, much less the internet. Staehle said that he often had to judge how to intervene in particular disputes as things “got overheated.” “I had to think about what kind of language wouldn’t inflame things further, and which characters were involved,” he said. “It was a kind of politics.” At its peak, the group had roughly 120 people, so this more personalized style was usually manageable.

Still, hosting had its challenges, the most divisive of which occurred in a chatroom run by artist Julia Scher called “Madame J’s Dungeon,” where members participated in S&M roleplay. Eventually, one user who went by the name Felix Melchior began posting bizarre and extremely violent fantasies, and other members wanted him expelled. “I still find this Felix Person very upsetting, I’m trying hard, I don’t know how some of you stay so cool, he truly frightens me,” one user wrote. Another took a more academic interest in his posts: “Reading the other weird fantasies [Felix seems] to have on others makes me want to actually develop the notion of Sadistic practices in the realm of fascism.” One curator appealed to Staehle to have him removed. Staehle considered it for a long time but ultimately declined: the virtual space was designed to facilitate a kind of net art performance, and he judged that the disturbing comments remained within the boundaries of that performance.

In 1993, in a multiplayer game called LambdaMOO, one player wrote and used a subprogram that allowed him to sexually assault other users’ avatars, leading to community-wide outrage. In a lengthy meeting, users sought to come to terms with what had occurred and to reach a consensus about what to do about it. In the end, a master programmer made an executive decision and deleted the user, though he would reemerge later under a new identity. The entire episode confounded ideas about online speech and about the boundary between virtual acts and physical, IRL ones; it became clear to many in the community that real harm bordering on violence could occur in virtual spaces. In the aftermath, journalist Julian Dibbell wrote, “I can no longer convince myself that our wishful insulation of language from the realm of action has ever been anything but a valuable kludge, a philosophically damaged stopgap against oppression that would just have to do till something truer and more elegant came along.”

Although the internet of the 1990s is romanticized, it was not exactly the good old days. Many of the problems that plague the contemporary internet already existed in nascent form. The lines between describing violence and inflicting violence were being pushed. There were actual Nazis online. There was sexual harassment, and sexual violence in new forms. Moderators were not always able, or eager, to tamp down on violations. But moderation was still happening largely within a framework of community — individuals or groups were weighing options and arriving at outcomes that may have been imperfect, but were nonetheless geared toward the preservation of communal values and norms.


A little more than a decade later, in 2006, much had changed: computers were cheaper and more accessible, and a majority of Americans reported going online from home every day. MySpace and Facebook were in their early days; online fan culture was in its heyday. The first iPhone was a year away from release. And YouTube, the video platform founded in 2005, employed about ten people on a team called the SQUAD (Safety, Quality, and User Advocacy Department), who were in charge of scrubbing offensive and malicious content from a site where users were watching more than 100 million videos a day. In 2009, Facebook had a team of about twelve doing similar work. These newly formed groups were tasked with deciding whether to accept or reject objectionable content that users had flagged, and occasionally with suspending accounts.

As more and more videos and photos were posted to these platforms, content moderators were exposed to horrific content, and had to make high-profile, occasionally politicized decisions. In the summer of 2009, during protests in Iran, a video was uploaded that showed protester Neda Agha-Soltan being killed by a bullet to the chest; as Catherine Buni and Soraya Chemaly reported for The Verge, it quickly became a focal point of debate for the team (which by then comprised more than two dozen workers). It was graphic but self-evidently newsworthy. “As [one content moderator] recalls, the guidelines they’d developed offered no clear directives regarding what constituted newsworthiness or what, in essence, constituted ethical journalism involving graphic content and the depiction of death,” they wrote. “But she knew the video had political significance and was aware that their decision would contribute to its relevance.” Ultimately, the video stayed up, with a warning about the graphic content, in what has now become something of a standard approach to violent, newsworthy material.

Many of these decisions were ad hoc; there was limited support and no charted path for what the team was trying to do. In most cases, though, the question was binary: “accept” or “reject,” remove or leave up. The days of hosting — the back-and-forth facilitation between members and moderators who were part of the online communities they managed — had more or less come to a close. Perhaps the most significant reason for the change was scale. When Rheingold wrote his guidelines for hosting in 1998, an estimated 3.6 percent of the world’s population were internet users. Today, more than 60 percent of the global population is online. And users have become increasingly concentrated on a few platforms: Facebook, Twitter, and YouTube now define the contemporary landscape of the social internet.

“Norms being overwhelmed by scale is not a new problem,” Rheingold said. “But when you’re talking about having billions of people online, that’s a level of scale that’s a whole new problem. How do you moderate something that’s got millions of posts per minute?”

Legal scholar Kate Klonick argues in a paper titled “The New Governors,” published in 2017 in The Harvard Law Review, that companies have grappled with scale by increasingly enforcing rules rather than allowing moderators to apply more ambiguous standards. Klonick writes that the platforms’ original standards-based approach — what one Facebook employee called the “Feel bad? Take it down” rule — produced patchy enforcement that was difficult to scale, but allowed for more normative judgment. As the platforms grew, standards were codified into rules, which have created their own sets of challenges, like the removal of breastfeeding photos and art depicting nude women to adhere to a “no nipple” policy.

Perhaps above all else, moderation structured by rules has created little incentive to think about community-building. Though the language of the early community-based internet has been co-opted by the platforms — even in the names of documents like Facebook’s “community guidelines” — there is very little that is communal about their approach to moderation.

Klonick notes that how these companies moderate content depends, at least partly, on their profit motives. She argues that companies are motivated both by a sense of corporate responsibility and identity and by material threats to ad revenue if objectionable content isn’t removed. Moderation at these companies is now largely outsourced, and much of it is done algorithmically; almost all of it is geared toward the reactive removal of content.

I spoke to Minna Ruckenstein, an associate professor at the Consumer Society Research Centre at the Helsinki Centre for Digital Humanities, who co-authored a paper on content moderation on Finnish conversational forums. The paper’s authors quickly noticed a difference between generations of moderators: older ones who had worked on the early internet remained invested in what she called, in a recent phone interview, “this kind of feeling of camaraderie,” while younger moderators were “often more detached.” In their research, Ruckenstein and her co-author observed that moderators are dehumanized by their roles, which in turn sucks the humanity from virtual communities. “They’re made into human algorithms,” she said. “All they can do is delete content.”


Despite its global financial dominance, Facebook’s daily user numbers in the US have been declining since 2017, especially among people between 12 and 34 years old. There are a variety of reasons for this, but one explanation is that people are turning to different platforms for a sense of connection online, some of which more closely resemble the early social internet. The invitation-only audio app Clubhouse has seen a recent explosion in popularity, as have “servers” on Discord, a platform originally geared toward gamers that in 2020 shifted toward a more general audience.

“Our Discord is really where our community is at,” said Taylor Moore, one of the hosts of the podcast “Rude Tales of Magic,” whose Discord server is home to a community of about 1,600 members. When Moore and his co-hosts started the podcast in 2019, they launched a Discord because it was a popular way to grow an audience; it has since turned into the heart of their community.

Moore said that, from the beginning, he knew moderation in the Discord was going to be “extremely high-stakes.” He had been a member of a few internet communities that had turned toxic and fallen apart, and he didn’t want to repeat those mistakes.

“There were two competing values,” Moore said. “One value was that we didn’t want to be prudish, repressive disciplinarians…We’re a chaotic bunch of weirdos and our audience is a bunch of great weirdos, and we didn’t want to be hall monitors. But on the other hand, there are some things about which we do not compromise. Period. That includes hate speech, harassment, anything like that. We have a zero tolerance policy — and it’s not just that we’re willing to boot it out, but we really wanted to build a place where it would never be engendered at all.”

Instead of making a list of rules or prohibited behaviors, Moore and his co-hosts made clear at the outset that they wouldn’t tolerate bad behavior in the Discord. “We said, ‘we’re going to stop any behavior that we think sucks, and we’re going to use our judgment,’” Moore said. He and the other hosts are actively involved in day-to-day moderation, along with three community members who have official moderator duties on the server; all decisions are made by consensus among them.

This represents a version of what Ruckenstein calls returning “the logic of care” to the culture of moderation. “Instead of accelerating anger and rage, which the current conversational culture is often doing — because the business logic for these companies is accelerating affective reactions — logic of care is trying to work with these feelings,” Ruckenstein told me.

Discord, like the early internet, is hardly utopian. In its early days especially, it was a haven for white supremacist communities; the platform provided what journalist April Glaser called “a safe space” for neo-Nazis, the alt-right, and white supremacists. Discord began cracking down on these groups in 2018, working with the Southern Poverty Law Center to purge their servers from the platform.

There remain plenty of issues with moderation on Discord, particularly when it comes to moderating real-time audio on the platform. Researcher Jed Brubaker, who co-authored a paper on Discord moderation, noted the challenges of cutting someone off mid-audio stream and of collecting evidence of bad behavior after the fact — challenges that audio-oriented platforms like Clubhouse will need to navigate more and more. But in many Discord servers, there is a return to a sense of communal responsibility for moderation. Discord even provides its own “Discord Moderator Academy,” a set of resources that range from the basics to more philosophical “seminars,” like one on “Community Governance Structures” — an acknowledgement of the work’s complexity.

For Moore, the positivity of the Rude Tales of Magic Discord has upended his sense of the possibility of virtual community. Since it started, they’ve only had to expel one person. “It’s not like we’re building a machine,” Moore said. “It’s more like we’re gardening. You need a good patch of land, sun and water, and you give it all that and hope the tomatoes turn out okay.”
