Very little sunlight makes it to the redwood forest floor. Even on clear days, but especially when it’s cloudy, a grayness suffuses. The grayness is dark but also green. It’s veiled but casts no shadow. It’s a quality of light but clings to your skin as a mist. In the grayness, in the greenness, mist melts into branch, and branch into trunk, and trunk into dirt. All pours onto the redwoods’ woven roots, as much above ground as below, tangled and tumbling off the sides of embankments, jutting up from fallen old growth, crisscrossing like veins across the trail. There’s a sense, in the forest, that what happens over there, beyond this tree line or gully, has everything to do with the here where you’re standing. Biologically, it does. Dense networks of fungi and roots link this tree to that tree to another and another with such efficiency, and such robust energetic exchange, that distinguishing one tree from the next becomes mere semantics. They are, in every way that matters, the same tree. What happens right here is what happens over there. Connection to the point of unity.
The internet as we know it is designed to enable precisely the intertwine seen in the redwoods. Information seeping from here to there with ambient ease is the point; content flowing from amateurs to professionals then back again is the point; expression spanning the globe in previously unimaginable ways is the point. All the network enclosures that kept polluted deluges like the Satanic Panics relatively confined in the 1980s and 1990s have grown increasingly permeable as roots have fused with trunks have fused with branches have fused with mists. Our problem now is not that things have gone unexpectedly wrong. It’s that they’ve gone exactly as they were designed to go—with profound consequences for how much pollution is generated, where it travels, and whom it affects.
This design is rooted in liberalism. “Liberal” in this sense does not necessarily mean “politically progressive” or “morally permissive,” as the term is often used in contemporary US politics. Instead, liberalism as a political philosophy extends all the way back to the Enlightenment of the eighteenth and early nineteenth centuries and back farther still to the scientific revolutions of the sixteenth and seventeenth centuries. Claimed in different combinations by a broad range of political perspectives, liberalism enshrines individual freedoms like free speech, a free press, property rights, and civil liberties.1 Liberalism informs libertarianism, which places particular emphasis on personal autonomy, as well as neoliberalism, which places particular emphasis on free-market capitalism.
Online, liberalism is most clearly articulated through the maxim that “information wants to be free.” This sentiment reflects liberalism’s staunch defense of negative personal freedoms: freedom from external restriction.2 John Perry Barlow, cofounder of the Electronic Frontier Foundation, soaringly affirmed the negative freedoms of online spaces in his 1996 essay “A Declaration of the Independence of Cyberspace.” Barlow’s essay does exactly what its title suggests. It’s our internet, Barlow argued; the government can jump off a cliff.
Barlow was hardly alone in his resistance to government intervention or his celebration of negative freedoms. As technology historian Steven Levy explains, freedom of information was foundational to the computer revolution of the 1960s, 1970s, and 1980s,3 and science fiction writer Bruce Sterling proclaimed in 1993 that freedom to do as one wished was one of the main draws of the early internet.4 Unsurprisingly, libertarianism was a dominant political philosophy among early internet adopters.
All these negative freedoms fundamentally shaped the digital landscape. First, freedom from censorship ensured that the maximum amount of information—regardless of how harmful, dehumanizing, or false—roared across the landscape. Second, freedom from regulation encouraged what journalism professor Meredith Broussard calls “technochauvinism,” the overall sense that if something can be done, that’s reason enough to do it.5 Build the website. Share the information. Thus spoke Mark Zuckerberg: move fast and break things.6
This origin story is only partial, of course, a history written by the victors. As technology reporter April Glaser chronicles, Barlow’s 1996 declaration was one of many network possibilities.7 Indigenous antiglobalization activists in Mexico, for example, were simultaneously building a very different sort of digital community. They may have shared some ideals with the likes of the Electronic Frontier Foundation, but their focus was on positive freedoms: empowering marginalized communities for the good of the collective, not merely protecting individual rights for the benefit of those individuals. Many leftist organizers followed the same path. Liberalism’s negative freedoms, however, were what attracted neoliberal entrepreneurs to the fledgling internet; communities to be cultivated became markets to be tapped.
We have the forest we have because freedom from won. Those who moved fast and broke things, who found increasingly ingenious ways to ensure that information wouldn’t just be free but also profitable, didn’t think about the toxins their social platforms might filter into the forest. Nor did they think about the toxins they themselves were spreading. In an impassioned 2019 acceptance speech for—appropriately enough—an Electronic Frontier Foundation award named after John Perry Barlow, sociologist danah boyd highlighted these toxins, particularly the industry’s normalized misogyny, racial exclusion, and tolerance of sexual predators. “These are the toxic logics that have infested the tech industry,” boyd stated. “And, as an industry obsessed with scale, these are the toxic logics that the tech industry has amplified and normalized.”8
Decision makers within the tech sector aren’t the only contributors to the internet’s polluted outcomes; the millions of its early adopters who structured their lives around the jubilant cry don’t ever tell me what to do also sidestepped the toxins they carried. Neither group thought about the ecological consequences of their actions, because they didn’t have to. They were positioned behind a set of deep memetic frames that kept them safe, happy, and utterly oblivious as pollution coursed through the forest’s roots.
One of these frames was the white racial frame, which allowed participants to—among other things—heed Barlow’s utterly Cartesian proclamation that the internet is the “new home of Mind” and that cyberspace “is not where bodies live.”9 Any harassment directed at people of color, any hate speech, any harm: none of that was real, it was just the internet—an easy thing to say when you’re not the one being targeted.
Another frame poisoning the forest was fetishized sight. As in our previous work, we use the term fetishization to label the tendency during online interaction to fixate on the object you’re looking at—just the GIF you’re sharing, just the post you’re reading, just the tweet you’re replying to—without considering the very real people represented in or producing or affected by those objects.10 When everything is flattened to pixels on the screen, it’s easy to forget the people standing behind those pixels, how being flattened might hurt them, and how our everyday actions might make that hurt worse. Fetishization, as we will see, is supercharged by the white racial frame, and the white racial frame is supercharged by fetishization.
Tangled up with both frames is what came to be known as “internet culture.” A jumble of sites, memes, and aesthetics that exploded to prominence in the mid- to late aughts, this culture maintained close ties to the tech sector and was a product of liberalism through and through. Its emphasis on fun and funny negative freedoms—share that meme, troll that stranger, joke about Hitler, it’s your right—downplayed the destructive, antidemocratic, and deeply polluted dimensions of fetishized sight. Its adherence to the white racial frame muted diverse ideas and experiences, erroneously claiming online spaces for white people, and in particular white men, whose centrality was taken as a given.
The compounding myopias of content producers, platform designers, and everyday social media users—each tapping into the roots of all the others—normalized a deeply detached, deeply ironic rhetorical style that created space for white supremacist violence to flourish a half-decade later. It also silenced the alarms being raised by the people who didn’t just see the poisons bubbling up, but were themselves being poisoned, and who cried for help but were ignored—or were punished for their effort.
The political and ethical failures at the heart of so-called internet culture make tracing its roots uncomfortable. And we mean personally uncomfortable. The two of us were ourselves part of that culture, as were many of our friends and colleagues. We all bear responsibility, and all must face what boyd describes as a “great reckoning” for the toxicity we collectively helped normalize.11 This toxicity wasn’t restricted to our own insular circles. Instead it helped wedge open the Overton window—the norms of acceptable public discourse—just enough for bigots to shimmy through in 2016. Their deluge of hate, falsehood, and conspiracy theory ripped the walls right off. But first came the absurdist, loud, silly fun that flourished a decade before. The pollution cast off by all that fun percolated underground, intensifying with each passing year. It may have emerged unnoticed by many. Ultimately it was felt by all.
At first glance, the term “internet culture” seems like it should be highly inclusive. Based on the assumption that internet culture is, well, culture on the internet, what on the internet wouldn’t be internet culture? That broad sense of the term, however, wasn’t the one that emerged in the mid-2000s to describe the slew of remixed jokes, jargon, and folk art occurring on sites like Something Awful, 4chan, and eventually Reddit and Tumblr.
We say “sense of the term” rather than “definition” because internet culture was never fully defined, not exactly. The people creating and remixing all that content certainly talked about a thing called “internet culture.” But just as often, they called it “meme culture” or even just “The Internet,” without explaining what they meant. These same people—many of whom called themselves “internet people”—explicitly and enthusiastically identified with it and actively contributed to it; the it was best summarized by a shrugged “I know it when I see it.” Even researchers who studied this it could waffle on the name and description. In a 2012 article coauthored with fellow internet researcher Kate Miltner, for example, Phillips blithely described the it as “early meme/ROFL/internet culture whatever.”12 Here we’ve settled on “internet culture,” because that was the term we encountered most often.
No matter what they labeled it, internet culture participants sorted themselves into a highly insular clique, with a highly recognizable aesthetic. In addition to embracing fetishized sight and irony as a mode of being, members echoed a familiar set of attitudes. The internet was a cordoned-off playpen that severed memes from consequences. All these memes were free to spread widely and should be free to spread widely; if you didn’t like something, you should just log off (also lol it’s just the internet). Being able to do or make or mock something was justification enough. Even participants who had little sense of what liberalism was—other than being broadly in favor of their own free speech—drew from and reinforced centuries-old liberal roots.
The most basic feature of internet culture was the people who embraced it. This included, first, everyday meme enthusiasts active on popular platforms like 4chan, Reddit, and YouTube. Meme enthusiasts employed by media and entertainment platforms like Urlesque, BuzzFeed, Gawker, Rocketboom, and Know Your Meme formed a crucial subsection of this group. Another group comprised employees of the social platforms that internet culture depended on, as well as people working for social-media-savvy marketing and advertising agencies like the Barbarian Group and Wieden+Kennedy. Rounding out these prominent early adopters were academics at institutions like MIT, Harvard, and NYU, which were home base for many internet people and actively supported conferences and talks related to internet culture.
The boundary between these groups was highly permeable; members of one group often knew members of other groups personally, professionally, or both. Phillips, for her part, was friends with a number of people within the BuzzFeed / Know Your Meme orbit, which introduced her to other media and entertainment circles, which looped back around to the academic circles Milner traveled in. People also frequently moved between groups, like those who started graduate school after working for media and entertainment companies or those who got jobs at social platforms after finishing their PhDs.
The preeminent internet culture conference, ROFLCon, epitomized this blur. It was held in 2008, 2010, and 2012 at MIT, with additional off-year summits in New York City and Portland, Oregon; Phillips was a conference attendee in 2010 and 2011 and a speaker in 2012. The overwhelming majority of its organizers were students affiliated with MIT or Harvard; attendees included everyday meme enthusiasts, largely students from Boston-area universities, and media and tech sector professionals. The Barbarian Group sponsored the first ROFLCon in 2008, Wieden+Kennedy sponsored the Portland summit in 2011, and Harvard’s Berkman Klein Center for Internet and Society sponsored the 2012 conference.
And then there were the trolls. In contemporary parlance, “trolling” is used as a blanket label for just about any undesirable thing someone could do or say, from being an ass on Facebook to being a bigot in person. Back then, the term wasn’t such a broad catchall.13 The trolling that overlapped with internet culture traces back to 2003, when an American fifteen-year-old named Christopher “moot” Poole created a simple image board called 4chan. Although the online sense of the term troll long predates 4chan, it took on a newfound and highly specific meaning as more and more participants, particularly on 4chan’s /b/ (or “random”) board, began calling themselves trolls and behaving in highly recognizable, highly idiosyncratic ways. After incubating for a few years, subcultural trolling enjoyed a golden age from about 2008 to 2012, the same years as internet culture more broadly—a parallel that derived, most basically, from just how seamlessly trolling subculture fed into internet culture writ large.
This tangle is illustrated most clearly by the fact that Poole helped organize the ROFLCon series and spoke on several ROFLCon panels throughout the conference’s four-year run. True to trollish form, Poole’s “ROFLTeam” profile image in 2010 and 2012 featured a Black man—Poole is white—wearing a leather jacket, holding a lightsaber, and standing in someone’s messy living room. Beneath the photo, the caption reads, “Moot Mootkins is part of the ROFLTeam. He is a motherfucker.”14 The line references the then-popular “Epic Beard Man” meme, which was based on cell phone footage posted to YouTube then shared on 4chan. In the video, an older, white, bearded man clad in a shirt reading “I am a motherfucker” violently assaults a Black man.
Poole’s invocation of Epic Beard Man was indicative, not just for the racist wink of the joke. One of the most basic markers of subcultural trolling during this time was the incessant creation and circulation of internet memes. As danah boyd explains, 4chan’s design primed it to become internet culture’s first meme factory.15 Because Poole didn’t have enough server space for everything being uploaded to 4chan’s various boards, he built the site so that it would delete older posts to make room for new ones. Users were frustrated when their favorite content would disappear, so they would frequently repost images, often after altering them slightly.
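The design constraint boyd describes—a board with finite space that silently deletes its oldest posts to make room for new ones—can be sketched in a few lines of code. This is purely an illustrative sketch, not 4chan’s actual implementation; the class, method names, and capacity are invented for the example.

```python
from collections import deque


class Board:
    """Illustrative sketch of a fixed-capacity image board.

    Not 4chan's real code: the names and the capacity are invented.
    The point is the constraint boyd describes -- limited server
    space means the oldest posts are silently pruned as new ones
    arrive, pressuring users to repost (and remix) what they want
    to keep alive.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.posts = deque()  # oldest post sits at the left end

    def post(self, image):
        # When the board is full, delete the oldest post to make room.
        if len(self.posts) >= self.capacity:
            self.posts.popleft()
        self.posts.append(image)


board = Board(capacity=3)
for image in ["cat.jpg", "dog.jpg", "rage.png", "trollface.png"]:
    board.post(image)

# "cat.jpg" has been pruned; only the three newest posts remain
print(list(board.posts))
```

Under this (hypothetical) scheme, the only way to keep a favorite image in circulation is to post it again—often tweaked along the way—which is exactly the repost-and-remix dynamic the paragraph above describes.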
4chan’s content didn’t stay confined to 4chan. Instead the memes enjoyed by trolls and the memes enjoyed by broader internet culture were strikingly symbiotic. Several factors contributed to this blur. First and foremost, the memes emerging from internet culture circles often were trolling memes, created and spread by trolls on 4chan. Either the trolls themselves were publicizing their own content off-site, or the trolls’ work resonated strongly with 4chan’s more casual visitors, who carried the memes beyond the site’s borders. Poole spoke to this dynamic during a 2010 ROFLCon panel titled “Mainstreaming the Web,” asserting, uncontested, that 4chan created the memes that everyone else on the panel—including representatives from internet culture hotbeds like Know Your Meme and the Cheezburger Network—studied, collated, and profited from.16
Even when memes hadn’t originated on 4chan, the aesthetic overlap between trolling subculture and internet culture was so pronounced that a meme from one could easily be mistaken for a meme from the other. They looked the same and did the same things with the same sense of humor.
Nick Douglas, a blogger who has written for internet culture publications like Urlesque, the Daily Dot, and Gawker, calls this unifying aesthetic “Internet Ugly.”17 As the moniker suggests, Internet Ugly is a “celebration of the sloppy and the amateurish” and describes apparent aesthetic misfires like rough editing, bad grammar, and a whole lot of shaky, freehand mouse drawing.18 The title of Douglas’s article on the subject, “It’s Supposed to Look like Shit,” pretty much sums it up.
Like boyd, Douglas traces the rise of Internet Ugly to 4chan’s platform architecture.19 Because threads on 4chan were deleted so quickly, sometimes within minutes, posters needed to act fast. The images posted quickly enough to be seen by others tended to be the most haphazard, slapdash, and outrageously non sequitur. Polished work, “good” work, simply took too long. As more and more people inside and outside 4chan replicated the aesthetic, the logistical necessity of Internet Ugly gave way to an established social norm. It entered the taproot of the network.
In a 2013 PBS Idea Channel video, Mike Rugnetta elaborates on the Internet Ugly aesthetic, which he christens “glitchy art.”20 For Rugnetta, the popularity of “malfunction-esque” content could be attributed, at least in part, to millennial nostalgia. As he explains, the technologies that kids (some kids, anyway) played with in the late 1980s and early 1990s, including 8-bit sound and video, magnetic tapes, and VCRs, were often unreliable and unpredictable. The resulting “ballet of mistakes” at the core of the busted-media look, Rugnetta argues, “sends you, or me at least, careening back to childhood.” The implication is that internet culture’s affection for weird, broken, ugly shit stems, perhaps counterintuitively, from comforting warm fuzzies.
Douglas similarly links Internet Ugly to personal experience.21 When internet people reacted with nostalgic glee or absurdist delight (or both) to Internet Ugly, or when they fell back on the “I know it when I see it” explanation, those reactions were a knowing wink, one that communicated “I am one of you; I am aware of all internet traditions.” Laughter was, by extension, an act of “laughing with,” performing for an audience who understood and appreciated a given message and were able to reciprocate in kind—not just by laughing back but by sharing their own absurdist fun.
As much as internet culture participants laughed with one another, however, they also laughed at others. The weaponization of laughter was most apparent on 4chan’s /b/, championed by participants—and by Chris Poole—as a free speech stronghold. Activity on /b/ could be so outrageous, so disruptive, and so aggressively transgressive that digital media scholar Finn Brunton described the board in 2011 as “the most broadly offensive artifact that has ever been produced in the history of human media.”22 Anthropologist Gabriella Coleman likewise highlights the offensiveness of the site, adding that trolling subculture was replete with “terrifying” and “hellish” elements, particularly when trolls used what they called “life-ruining tactics” to terrorize their targets for months or even years on end.23 As Phillips recounts, these same trolls regularly framed their activity as a kind of public service, one that helped protect, of course, free speech (exactly how this worked was never made clear). Trolls also claimed that their attacks encouraged people to think more critically, an outcome they prodded along by taunting, gaslighting, and deceiving targets. Many joked that they deserved a thank-you for their efforts.24
Although their targets could be quite diverse—at the time, trolls gleefully singled out women, progressives, and people of color, as well as Republicans, Christians, and white people generally25—trolling participants were unified by a particular kind of laughter. Described by trolls as “lulz,” this laughter celebrated their targets’ distress while simultaneously policing the boundaries between us and them. The trolling us laughed. The targeted them did not.
Trolls may have been especially ruthless in their attacks. Their laughter may have been especially loud, and their understanding of free speech especially myopic. However, the pursuit of lulz was endemic to broader internet culture. Exemplifying this register was the common call to “learn how to internet.” Learning how to internet meant knowing how to replicate or at least decode the internet culture aesthetic, to respond to memes “correctly,” and, most important of all, to not take anything too seriously. The result was to cleave the us who knew how to internet, who got the jokes, who responded to things with a troll face, from the them who didn’t or couldn’t or wouldn’t. For internet people, feeling distressed online—because someone saw something unseeable, because someone clicked a link they shouldn’t have, because someone fed the trolls—was a self-inflicted wound. Certainly not something to start censoring platforms over; that would ruin everything.
Douglas emphasizes the boundary policing behind “learning how to internet” in his discussion of “fail content,”26 a staple of Internet Ugly that highlights mistakes, bad ideas, and anyone accused of fucking something up. The laughter generated by fail content was directed at unfortunate souls on the other side of the us–them line, either because they were the supposed fuck-ups, or because they were clueless outsiders who didn’t get the joke. A related target of uproarious derision was “bad” media. Phillips describes this derision by using the Japanese word kuso, which loosely translates to “ha ha, awesome, this is terrible.”27 The kuso response accompanies a range of content, Phillips writes, from poorly executed images to glitch art remixes of videos from the 1980s and 1990s to “the online obsession with failure generally, which worships at the altar of ineptitude and technological incompetence.”28 Very often, kuso responses were framed as a strange sort of fandom. The laughing us didn’t hate the thing they were actively mocking; they loved it—never mind that proclamations like “I sure enjoy reveling in your humiliation” aren’t exactly compliments. That, of course, didn’t matter to the us. They could point and laugh, and that freedom was reason enough to do it.
Whether internet people talked about kuso, Internet Ugly, or lulz, whether they were malicious or jovial, the result of their fun was to flatten wholes into parts; all the surrounding context might as well not have existed (to them, anyway). Fetishized sight was so pervasive within these circles that it constituted a deep memetic frame; it made the world make sense to those seeing through it.
The act of cleaving digital text from broader context is, on one level, a feature of online play, reflective of the affordances of digital media. Affordances are simply what technologies allow people to do with them; they’re why people do what they do, because that’s what they can do. Most basically, digital media allow people to alter, edit, and remix content; manipulate parts of that content without disrupting the original; and save, store, and easily access the content later.29 The result is decontextualization: stuff that’s not connected to where it originated or what it started out as.
The simple fact that these actions can be undertaken and therefore are undertaken is not what creates fetishized sight. For the frame to flourish, a number of additional factors must converge. One of the most critical is the sharing imperative: that social media companies encourage their users to post, comment on, and generally spread as much content to as many people as possible. As media scholar Siva Vaidhyanathan argues, this impulse is quite literally coded into platform design.30 When bad information ends up mixing in with the good, the directive from corporate headquarters is not to quiet down and proceed with caution. It’s to post even more information as a corrective. This remained Mark Zuckerberg’s rallying cry straight through 2019, when, during a speech at Georgetown University, he doubled down on what Vaidhyanathan describes as his company’s “nineteenth century view of speech.”31 More memes, more comments, more pixels on the screen, faster and faster, across and between an ever-widening vortex of participants free to duke it out to their hearts’ content—democracy in action, right?
Maybe for some. Others, who find themselves trampled and dehumanized as a result, employ a different calculus informed by different frames. In her 2019 Electronic Frontier Foundation award speech, danah boyd lamented everything the tech sector lost by refusing to give these others—women and people of color and gender-nonconforming people and disabled people—a seat at the user design table.32 Those who were at that table, particularly as internet culture blossomed, were overwhelmingly white, male, and upwardly mobile. Historically, that’s the demographic least likely to experience persistent, coordinated abuse, least of all from reactionary bigots or violent misogynists. Consequently, the lead developers and top managers at social media companies didn’t have much reason to scan the forest for these specific risks. They had the luxury of emphasizing freedom from over freedom for.
As a consequence, rather than building platforms that protected users against the dangers of fetishized sight, and rather than seeking out, listening to, and learning from people who had been targeted, these developers and top managers built platforms that streamlined fetishized sight and encoded their own myopic frames into user experience. In short, because they only saw versions of themselves, only listened to versions of themselves, and only respected the experiences of versions of themselves, they all but invited the deluge of abuse, harassment, and manipulation that was soon to overrun their sites—a deluge they never saw coming, because they didn’t think they needed to look.
Assessing the platform’s long-standing failures to curb abuse and harassment, Twitter’s cofounder Ev Williams admits as much. “Had I been more aware of how people not like me were being treated and/or had I had a more diverse leadership team or board,” he explains, “we may have made it a priority sooner.”33 Charlie Warzel, writing for BuzzFeed News, further links Twitter’s “abiding commitment to free speech above all else” with the homogeneity of the company’s top decision makers. As one former senior employee told Warzel, “The original sin is a homogenous leadership. This is part of what exacerbated the abuse problem for sure—because they were often tone-deaf to the concern of users in the outside world, meaning women and people of color.”34 Tarleton Gillespie echoes a similar point in his study of how platform moderation policies shape social media. Many of the platform designers he interviewed were surprised by the levels of hatred and obscenity that so quickly overtook the networks they had created.35 They assumed, as one content policy manager explained, that the people using the sites would be like them, and designed their policies and tools accordingly.
Internet culture—and all the kuso, Internet Ugly, and lulz it incubated—grew out of this myopia. The resulting jokes and fun and jargon were just jokes and just fun and just jargon with few consequences for the internet culture clique. Grinning, clueless participants who had learned how to internet got to flatten entire lives into dehumanizing memes. They got to reduce misfortune, pain, and tragedy to lulzy punch lines. They got to comfortably cavort within a smug, self-satisfied us.
And by they, of course, we mean we.
In our early work, especially when we were graduate students, we were absolutely, undeniably guilty of seeing the world with fetishized sight. We didn’t know each other yet—we wouldn’t be introduced until 2012, after we’d both received our PhDs—but we got the same things wrong, evidencing our shared standpoint behind that frame.
For Milner, fetishized sight came in the form of a permissive attitude toward the jagged edge of the visual internet, itself a reflection of liberal assumptions about the inherent democratic good of unrestricted speech. As he wrote his dissertation on internet memes in 2011, Milner paid little attention to the very real people represented in the images he was including in its pages. He dropped in pictures of children in their homes, activists at protests, and “fail” after “fail” without ever stopping to think about how the people he included might feel about snapshot moments of their complex lives being enshrined in the academic pantheon for all eternity. He never asked anyone’s permission to let Google Scholar catalog and index that one time some kid fell off a trampoline, maybe breaking their leg, maybe bankrupting their parents. Who knows, who cares; the GIF was funny. “It’s already on the internet,” Milner said with a shrug. “It’s already public data.” Years later, a reviewer called him out for treating people like pixels, cautioning him to carefully weigh the costs and benefits of what he amplified. His immediate reaction was to get defensive. “Progress at any cost,” he bellowed, until he heard himself say that out loud, stopped, and said, “I guess I hadn’t thought of that before.”
For Phillips, fetishized sight came in the form of a permissive attitude toward the concept of harm on the internet, itself a reflection of John Perry Barlow–esque assumptions about online disembodiment: the internet was not where the body lived. Not only could you separate who people were offline from what they did online, but you should consider any bad behavior online in the context of subcultural norms. Trolls were playing a role, duh; that play was the thing to focus on. For example, as Phillips began working on a journal article about trolling on 4chan in 2009, she included example after example of trolls’ attacks, including their “jokes” about pedophilia, without reflecting on the embodied harm the trolls’ actions caused, or the fact that their “jokes” were based on all-too-real traumas. For her, these aggressions were just subcultural play, just trolls being trolls. Violence was an object of research interest, not something to worry about amplifying. One of the article’s reviewers critiqued her on that point, reminding her that pedophilia was not a funny joke on the internet; it destroyed lives, and further, attitudes like hers were what helped normalize violence. Her immediate reaction, like Milner’s, was to get defensive. “Normalizing violence, that’s stupid, I’m a folklorist,” she snorted as her face grew hot, which at first she thought was anger, then realized was embarrassment. She also hadn’t thought of that before.
We should have thought of that before. We should have seen the embodied consequences of what we shared without having to be told. That we didn’t is a personal failing. It also reflects something bigger: that we were seeing what our accepted frames encouraged us to see. Or encouraged us not to see, as was more often the case. We were far from alone in that myopia. What we missed, what so many other people missed, and more importantly why we missed it, emerged from something deeper than the digital tools we were using and deeper than the platforms we were posting on. Our fetishized sight emerged from yet another outcropping of liberalism and the Enlightenment before that: structural white supremacy. Or described another way, freedom for white people: white lives, white liberty, and the pursuit of white happiness at the expense of everyone else.
The extraordinary freedoms of whiteness, our own very much included, didn’t just allow pollution to seep into the public square. The freedoms of whiteness rendered that pollution invisible to the people peering out through the internet’s most carefree frame.
As our burgeoning cast of characters so far attests, the what’s what and who’s who of internet culture was as much about bodies in the world as it was about pixels on the screen. All those pixels would not have been represented, remixed, and mocked as they were without all those bodies doing the fetishizing. The resulting fun, or at least what seemed like fun for the people in on the joke, muted some while amplifying others, further entrenching the line between the us who laughed and the them who did not. Whiteness snaked across all of it, leaching assumptions, minimizations, and subtle (and sometimes not so subtle) aggressions into the root system.
In some ways, the demographics of early internet culture were straightforward: the people who participated in, studied, and otherwise policed the boundaries of “The Internet” were overwhelmingly white and majority male. The actual internet has, of course, never been exclusively or inherently white. In 2013, for example, as internet culture began to catapult into the mainstream, the Pew Research Center reported that people of color were marginally more likely to use social media than white folks.36 Whiteness as default, however, was baked into the “internet culture” narrative. It was the unexamined us against which all other people, and all other forms of expression, were measured.
ROFLCon is a case in point, both for how it reinforced the white majority and for the ambivalence at the core of that majority. Throughout the conference’s multiyear run, attendees skewed very white and very male, a point that ROFLCon’s founders Tim Hwang and Christina Xu admit somewhat sheepishly in a 2014 interview with the Journal of Visual Culture.37 This imbalance existed even though the ROFLCon organizing committee was racially diverse and in some years supermajority female. The question was, Why? What accounted for the striking difference between conference organizers and conference attendees?
Xu pins the discrepancy on the stereotypes tangled up with early internet culture, which equated being computer savvy with being a straight white dude. As she explains, unless you actively describe a space as being for “girl geeks” or “Black nerds,” those groups won’t think the space is for them and consequently won’t come to your event. Assumptions about who and what counted, and therefore who and what belonged, influenced the we the conference indirectly constructed. This we, in turn, played an implicit game of boundary policing. Excluded most stringently, even if inadvertently, was content popular within Black communities, which ROFLCon all but ignored in 2008 and 2010.38 “It’s not the internet culture I grew up on,” Xu admits of Black memes and influencers; “but that doesn’t make it not a part of it.”39 This realization, Xu says, prompted ROFLCon organizers to take more active steps in 2012 to emphasize inclusiveness.40
Still, the overall issue persisted. Conference attendees remained overwhelmingly white, and the whiteness of internet culture remained a normative default. ROFLCon 2012’s “Choose Your Own Adventure” event program booklet, for instance, paired conference panel descriptions with related campy 8-bit stock images of people sitting at computers and generally having fun. Tellingly, throughout the ninety-five-page program, there wasn’t a single pixelated figure obviously of color.41 Maybe this was meant to be an ironic send-up of the (presumed) homogeneity of internet culture. No matter the motive, the art, like so much else about the conference series, signaled that white attendees were the welcomed, privileged us and everyone else was the invisible, second-class them. There was nothing surprising about the resulting conference demographics.
As Xu’s admission about ROFLCon’s we underscores, issues of inclusiveness (or lack thereof) were bigger than the conference itself. The problem exemplifies, instead, the pervasive, unexamined whiteness—and very often default maleness—at the core of internet culture. Race and technology scholar André Brock’s 2012 study of Black Twitter highlights that default.42 Writing concurrently to the ROFLCon series, Brock chronicled how Black Twitter then—just like Black Twitter now—was buzzing with memes and jokes and was every bit the generator of internet culture that “internet culture” was. But, Brock explained, unlike what white folks were doing online, what Black folks were doing wasn’t granted legitimacy, if it was noticed at all, by the mostly white, mostly male gatekeepers laying claim to “The Internet.”43 One of the specific gatekeepers Brock cited in his 2012 piece was none other than Internet Ugly’s own Nick Douglas, who in 2009 contrasted how Black people use Twitter with the “correct” ways “normal” people use Twitter. For Douglas, Brock explained, “normal people” translated to “white guys with collars and spelling.”44
Feminist and critical race scholars have long underscored Brock’s point.45 So have scholars who study internet cultures outside the United States.46 As these scholars attest—along with an entire subdiscipline of scholars focused on online communities in North America—queer people, people of color, and Indigenous people have gifted digital spaces with boundless creativity, ingenuity, and playfulness.47 Even when, as André Brock explains of Twitter, they’re using a technology “that wasn’t originally designed for us.”48
That these stories and these researchers are frequently omitted from what white scholars call “internet culture” is one more example of how underrepresented groups are culturally muted. For decades, communication research has emphasized the pervasiveness of such omissions in offline spaces.49 People from underrepresented backgrounds have always had things to say, and have often put themselves at great risk to speak. But historically they’ve struggled to find a broader audience. That is, they’ve struggled to find white people willing to take a few steps back and hand over the microphone—or even just to listen.
To this point, Black feminists Moya Bailey and Trudy underscore the frequency with which Black women in particular are erased, ignored, and plagiarized online.50 They call this phenomenon misogynoir: anti-Black misogyny, a term they coined in 2008, for which they regularly go uncredited. That Black people, and Black women in particular, have been erased and plagiarized is a paradox as old as America. Commenting on the frequency with which white people have come to appropriate Black Twitter’s jokes, memes, and slang, critical race scholar Meredith Clark zooms out to the much broader historical pattern. “Black culture has been actively mined for hundreds of years for influences on mainstream American culture,”51 she states; the insult of plagiarism only deepens the injury of muting.
The muting and appropriation that pervade online spaces, both within internet culture and more broadly, are the result of what social theorist Joe R. Feagin calls the white racial frame: a fetishistic worldview that normalizes the oppressions of people of color.52 While the white racial frame often manifests as outright prejudice, discrimination, and racist violence, it’s also an everyday mental tool kit for navigating the world, one that privileges white bodies and white lives over the bodies and lives of people who aren’t white. Muting is one of the many actions within this frame that’s not obviously or physically violent but still enacts symbolic violence.53 Violence is still violence—still dehumanizing, still marginalizing, still harmful—whether symbolic or physical. The difference is that symbolic violence can be, and frequently is, perpetuated by people who replicate racist frames without ever considering themselves racist, and indeed, who might outright denounce racism.54
Media scholar Richard Dyer explains how the white racial frame is reinforced symbolically through images and ideas.55 For Dyer, whiteness is, obviously, a skin color. Beyond that, something can be socially white or representationally white, whiteness can be a characteristic of people and texts, and it can broadly be considered a quality.
According to Dyer, the first marker of symbolic whiteness is that it establishes and jealously guards its own centrality, a point underscoring sociologist Tressie McMillan Cottom’s observation that whiteness defends itself against a whole host of truths.56 Second, whiteness asserts power and control to maintain that centrality. Third, whiteness separates the lived experiences of the body from the abstract ideas of the mind, in the process downplaying the embodied consequences of whiteness. Together, these tactics reinforce structural white supremacy. They’re how white people have for centuries kept themselves in positions of power and privilege.
Online, the white racial frame likewise keeps whiteness the default center and norm. It enables white folks to pick and choose whom and what they pay attention to, to assume speech always works for the good (meaning their good), and to fetishize the nonwhite them for the benefit of the white us. And more often than not, white people are oblivious to all of it. They might not be the only group in the forest; indeed, they might be a minority in the forest. But the whole forest still feels the effects of their whiteness.
The subcultural trolling that flourished during the aughts perfectly entwines fetishized sight and the white racial frame. The connections are so complete that Dyer’s characteristics of symbolic whiteness are exactingly, even uncannily, replicated within early trolling norms. Trolls vigorously and often violently maintained the centrality of whiteness on 4chan’s /b/ board. They reveled in asserting control over others through off-site raids and on-site boundary policing. They erased their own embodied experiences by obsessively maintaining their own anonymity—while at the same time abstracting the violence they committed against others as a fun, consequence-free source of lulz.
If a clear barrier separated subcultural trolling and internet culture, these critiques would begin and end with the trolls themselves. That, however, is not how the forest works. Instead, the two sets of roots are tangled, with widespread consequences for the surrounding grove. A MemeFactory showcase, hosted by NYU on October 9, 2009, and then uploaded to Vimeo, epitomizes how seamlessly the fun of early internet culture gnarled up with trollish fetishization, and how seamlessly fetishization gnarled up with the white racial frame. It also illustrates the crucial role that laughter plays in sustaining deep memetic frames and in spreading pollution far and wide.
MemeFactory was a performance trio featuring Stephen Bruckert, Patrick Davison, and Mike Rugnetta, the eventual host of Idea Channel; Davison and Rugnetta also wrote for Know Your Meme. Like Know Your Meme, MemeFactory sought to archive internet culture for internet people and translate popular memes for audiences outside the esoteric forums and niche networks where the memes were created. MemeFactory’s highly choreographed, highly energetic live shows featured internet memes projected rapid-fire onto three separate screens. At times, text appeared on one screen to comment on the images featured on the other screens, or to undercut something Bruckert, Davison, and Rugnetta were saying. The result was dizzying and perfectly replicated the glitchy, ugly, breakneck, and, of course, lulzy feel of internet culture.
Unsurprisingly, many of the images featured in MemeFactory’s NYU performance originated on 4chan. Some were flagged as trolling memes, but many were not. Bruckert, Davison, and Rugnetta also spent a great deal of time discussing the ins and outs of 4chan, its various boards, and its decree that nothing should be taken seriously. As within internet culture more broadly, trolling played a foundational part in the show.
Unlike 4chan’s trolls, the MemeFactory performance didn’t outwardly weaponize or revel in the traps of whiteness. However, those traps were still scattered throughout. In the intervening decade, Bruckert, Davison, and Rugnetta—all of whom are white—have reconsidered many of the assumptions they once made about the impact, politics, and ethics of memes. Davison, for instance, told us that he “CRINGED” (caps lock his) when he reread the research he’d done during his MemeFactory years.57 In that research, and more broadly in his life, he explained, he was content to “bull-china-shop through issues of image-based sexual violence, of racism both implicit and explicit, as well as tons of other terrible mindsets.” Davison did not mince words in explaining why. “My social and political irresponsibility come from having been a cis white middle class dude in his mid-20s, plain and simple,” he said. “I was an incredible embodiment of privilege at the time.”
Thinking back on his own MemeFactory experience, Rugnetta echoed Davison’s all-caps CRINGE.58 “When MemeF (and even Know Your Meme, depending) comes up now,” Rugnetta told us, “I want to gesture to all my current work as a way of saying OK BUT IT’S DIFFERENT NOW”—a reflection of the fact that in the years after MemeFactory, Rugnetta turned a pointedly critical eye toward internet culture, trolling very much included, through his work on Idea Channel. Bruckert, too, looks back with deep discomfort, noting how his assumption at the time—that lampooning white supremacy would weaken white supremacy—only outfitted white supremacists with the plausible deniability of “just trolling.”59 He recalls satirically pantomiming bigoted ideologies in MemeFactory performances, Stephen Colbert–style, to show how ridiculous and wrong those ideologies are. The problem was that those jokes often failed as jokes and in their overarching mission. They just reinscribed the bigotries they had set out to critique, and spread them to new audiences.
It’s not that Bruckert, Davison, and Rugnetta were wholly unaware of the problems pervading early internet culture during MemeFactory’s heyday. As Davison explained to us, the trio did think about the impact and content of their shows, and they did have an ethics. The issue, he observed wryly, was that it was a “naive and myopic” ethics (italics his).60 Rugnetta underscored this point, telling us that he assumed the MemeFactory audience knew what the bad things were. He also assumed that the people targeted by the most harmful memes would understand their motives, which were not to hurt anybody. They shared the bad things because they had a responsibility to tell the truth about internet culture.
It was in this spirit that the MemeFactory trio offered a disclaimer at the outset of their 2009 NYU performance, acknowledging that the show would feature a great deal of racism, sexism, homophobia, and violence. They did not advocate any of it, they explained. But they had to include it, because that’s what the internet is; pretending otherwise would be to misrepresent the culture. The presentation then proceeded to mix all that racism, sexism, homophobia, and violence in among a deluge of more innocuous, absurd, and often laugh-out-loud funny memes. So many images were flying across the screen, so many ironic captions were clattering against whatever was being said, and so much laughter was echoing across the auditorium, that it was difficult to zero in on any one meme, let alone form a critical response to any of them.
The argument that you risk misrepresenting a culture if you don’t illuminate its harms dovetails with the assumption that, in order to understand something, you have to hold it up to the light and properly dissect it. At the time, these assertions were common within internet research circles. For some, they remain common, an extension of even deeper liberal ideas about free-flowing information and unrestricted speech. Bad speech isn’t the real danger, the argument goes. Censoring bad speech is. Similarly, ignoring the harms inherent to a particular culture means having less robust, less accurate, and less valuable discussions about that culture.
The basic sentiments might be true; how could a person talk about the harms of trolling, for instance, without talking about trolling itself? However, when all that harm is publicized as it’s analyzed, and more importantly laughed at as it’s analyzed, the argument becomes a tougher sell. It also becomes a tougher sell when considering the experiences and basic sense of safety of the people whose bodies are being targeted. Accurately representing a culture might be the goal. However, when that culture pushes already marginalized people farther to the margins, clinical analyses risk replicating the same marginalizations and contributing to a public square where fewer people feel safe and are safe. All because their bodies and comfort and overall wellness matter less than clinical accuracy.
One segment of the MemeFactory performance epitomized this myopia and the white racial frame at its core. It featured the “O RLY” meme, which originated on 4chan in 2005 with an image of a screeching snowy owl, captioned with the letters “O RLY?” (shorthand for “oh really?”). As memes do, the O RLY owl inspired countless variations. At the NYU show, a number of examples flashed rapid-fire across the screen, including a black owl captioned with the phrase “NGA RLY.” The crowd roared with laughter. Just as suddenly as it appeared, the image was replaced with another, and then another, and then another, as unmoored pixels flew everywhere, here and then gone and then onto the next bit of fun.
Watching this 2009 moment ten years later was jarring. Not because the MemeFactory performance was unique in its juxtaposition of fun memes and dehumanizing memes. Recall, for instance, how 4chan’s Chris Poole casually referenced the racist Epic Beard Man meme in his ROFLTeam profile page. As was so common within internet culture circles at the time, violence against a Black body was just another funny ROFLCon joke, a barely registered shrug, even with Harvard sponsoring the conference.61
Nor was the MemeFactory “O RLY” moment unique in its universalization of a white us, epitomized by a room full of mostly white college students laughing uproariously because that picture up there, did you see it, it made the black owl say the N-word, lol. How it might feel for a Black person to sit in that auditorium and be inundated by laughter directed quite literally at blackness, how it might feel to be reminded that the N-word is a hilarious punch line to lots of white people, was not part of that discussion.
Indeed, what was most jarring about this moment was that it wasn’t unique. What was jarring was thinking back and remembering how common that kind of imagery and those kinds of reactions were at the time (like when Phillips attended a MemeFactory performance in person in 2010 and laughed so hard she cried). Those images and reactions weren’t a bug in the MemeFactory performance; they were a feature of internet culture. They certainly were a feature of our early research. In public presentations she gave between 2008 and 2010, Phillips would regularly include similarly jarring juxtapositions in her slides. She didn’t include those images to be provocative. She included them because she didn’t think they were provocative. She, like the MemeFactory trio, assumed that everybody knew that racism was bad, so if you saw something racist, then it was obviously satire. And if it was satire, then what harm could it possibly do to surface it, or even laugh at it? Anyway, it was normal to collapse terrible things with funny things, that was just how the internet was, and I’m here to tell you about the internet, why are you looking at me like that?
For Milner, the same juxtapositions were front and center in his 2012 dissertation. Many of the images he chose to analyze were explicitly racist or sexist and often appeared on the same page as funny animal pictures and non sequitur absurdities. Sometimes Milner would comment on that racism or sexism, but all too often he would ignore it to focus on what really fascinated him, because look at how the position of the captions in figure 2.1 creates a visual ellipsis, thus indicating a punch line, isn’t that cool? That’s what memes were to him: just academic abstractions, just points of clinical interest. Racism was a by-product of the shock humor, not something his white body ever had to worry about.
Plus, Milner figured, echoing a point Rugnetta raised when reflecting on his MemeFactory days, the whole reason to do this work was to show how important internet culture was, to prove to his colleagues and classmates and uncles at Thanksgiving that memes were worthy of study. For Milner and Rugnetta, taking seriously the ugliness of so many memes risked—in their minds, anyway—undermining the claim that memes mattered. And so they foregrounded the beauty of the collaboration and the creativity and, as Rugnetta explained, “the potential (utopian only, please) futures this activity might suggest.”62 The resulting defense, Rugnetta continued, could be summarized as “I mean yeah some of it is racist but like, OTHER THAN THAT, how great is it?”
When confronted by all this racism, Milner in turn defaulted to the same disclaimer included at the start of the MemeFactory performance. Analyzing something did not mean you liked that thing. You were being a researcher, and what researchers do is explain what’s true. However, along with Phillips, along with Bruckert, Davison, and Rugnetta, along with so many of the other people we worked with and laughed with and were friends with at the time, Milner missed an important detail. You can’t argue that you’re making a careful analytic critique when at the same time you’re setting people up to enjoy the things you’re critiquing.
The dangers of collapsing fun into ugliness and ugliness into fun were especially prominent during MemeFactory’s 2009 segment on “fail” content. Throughout the night, the NYU crowd reveled in unfortunate misspellings, professional faux pas, and people simply struggling to behave “correctly.” Failures at life, you could say—someone’s version of life, anyway.
The rollicking guffaws that followed these fails were not single-handedly conjured by the MemeFactory performance. Bruckert, Davison, and Rugnetta might have teed up the punch lines, but they couldn’t force the audience to laugh. Yet laugh the audience did, uncontrollably, reflecting the norms the audience carried into the room with them—norms that simultaneously universalized the experiences of an implied us and demeaned the implied them. Look the right way, according to our standard. Talk the right way, according to our standard. Act the right way, according to our standard. Laughter, in those moments, was an act of naming and shaming difference. As was so often the case within internet culture, difference meant anything that deviated from white, middle-class, cisgender, straight, male norms. People who fit those norms provided nothing to laugh at, nothing to meme, so they were spared the fetishized trampling.
Among those not spared during the MemeFactory performance were several internet-infamous young white women who had inspired widespread mockery online. These young women, Bruckert, Davison, and Rugnetta explained, were known as “camwhores.” On 4chan, the slur described women who revealed their bodies on the site. At one point in the show, an image of one of these young women, a teenager, was projected onto the screen. The crowd exploded in boos and hisses. “KILL HER!” one man in the audience shouted.
Davison offered up an even more telling example from another MemeFactory performance. Late in the show, Davison explained, the trio had displayed a screenshot taken from a daytime talk show. The guest on the show was a very large Black woman. Someone had added the text “A WILD SNORLAX APPEARS”—“snorlax” referring to a type of oversized Pokémon. This slide, Davison explains, got the biggest laugh of the whole performance, so much so that it interrupted the flow of the script, compelling the three men to turn around to check what on the screen was causing such a ruckus. Part of him still wants to believe, Davison admits, that the audience was reacting more to the fact that anyone would say something like that—that it was metacommentary on the cruelty of the caption. And yet, Davison concedes, “I have to acknowledge that really, we were just telling our audience a fat joke that someone else had come up with, and they were laughing at this woman’s expense.”63
Obviously destructive, unabashedly dehumanizing trolls taking active, gleeful steps to harm others are easy to condemn; that’s a no-brainer. But the audience members who howled with laughter in response to these “failed” women did the same basic thing. They disconnected their laughter from its consequences for these particular women, and indeed for all the women, Black and white and brown, daily poisoned by violent misogyny and compoundingly poisoned by racism. The laughing us sidestepped all that. Instead they approached the targets of their amusement as punch lines, as pixels, as objects that never learned how to internet, never learned how to act right or look right. They did so because they could, because they were both willing and able to see lulz instead of people.
This willingness and ability to disregard consequences highlight the hazards of ironic fetishization. They also highlight its causes. As literary scholar Christy Wampole explains, arm’s-length, giddily ironic reactions—like the ones on display during the MemeFactory performance, across internet culture more broadly, and throughout our own early work—are a luxury enjoyed only by people who also enjoy an excess of comfort and lack of risk.64 For ironists, life is negative freedom: freedom to do and say what you please simply because it pleases you, without having to pay a price for any of it. Why not laugh; why not play; nothing matters. Conversely, where there is suffering, where there is injustice, where bodies bear the scars of violence and dehumanization, there is no irony, because there is no freedom. No freedom to go about your business unmolested, and no freedom from the harms bearing down on your body. The only thing that’s laughable is the idea that nothing matters.
Research published in 2019 by Stop Online Violence Against Women, an inclusive public affairs initiative, underscores the consequences of the ironic, fetishizing fun of internet culture.65 By 2012, Black women on social media were ringing the alarm bells about harassment campaigns that employed trolling tactics and internet culture aesthetics to demean and dehumanize Black women.66 Russian disinformation agents later replicated the same tactics and aesthetics to suppress the Black vote during the 2016 US presidential election. As part of these campaigns, bad actors essentially trained social media algorithms to accept bigoted content as normal. In the process, they trained white eyes—often already more than willing—to accept it as normal too.
When reflecting on internet culture, especially with ten years of hindsight, it’s easy to succumb to despair, coupled with a creeping sense of shame, over how terrible everything on the internet is—at least that’s how we often feel. That said, fetishized sight is not inevitable and doesn’t characterize every instance of online play. Decontextualization might be inevitable, the result of digital affordances. Fetishization, however, is something else entirely. It’s a deep memetic frame—one not everyone is looking through. Internet researcher An Xiao Mina, for instance, highlights memetic play in China that doesn’t rely on fetishization.67 When Chinese citizens critique government oppression with protest memes, Mina explains, they can be just as slapdash, quippy, and Internet Ugly as their American counterparts.68 The difference is that the punch line is the broader context: the state’s autocratic control, satirized with messages of resilience and resistance.
Leftist Brazilian memes evidence a similar dynamic, a point Viktor Chagas emphasizes in his analysis of the activist “struggle memes” shared across Brazil on WhatsApp and Facebook.69 Like Chinese resistance memes, struggle memes hinge on the context of political action within an increasingly repressive state. Through these memes, citizens gain a more holistic understanding of the political landscape and its stakes for everyday people. The memes described by Mina and Chagas—which have parallels around the globe, including some pockets of the United States—can still be funny. They can still be glitchy, non sequitur, and downright obscene. But they are not animated by aloof, nihilistic irony. And that makes all the difference.
The danger, in other words, isn’t the specific act of creating or sharing memes, of having fun on the internet. The danger is the perfect, even symbiotic, gnarl of fetishizing ideologies, fetishizing technologies, and fetishizing actions. Early internet culture epitomized this intertwine. Groups who found themselves muted or targeted by so much dissociated laughter, and all the streamlined sharing that carried it forth, could have foreseen the effects. Those who couldn’t, whose entire life got to be laughter, would not have known to heed Wampole’s stark warning, issued in 2012, that irony at these levels creates an ethical vacuum in the individual and collective psyche. “Historically,” Wampole explained, “vacuums eventually have been filled by something—more often than not, a hazardous something.”70
Until 2010, the laughter ricocheting through subcultural trolling and internet culture circles remained relatively quarantined to those circles—less quarantined than the subversion myths at the heart of the Satanic Panics, but still somewhat bounded within groups of internet people. As it accelerated, network climate change ensured that these boundaries would dissolve. By 2012, Christina Xu explains, “an industry had sprung up” around internet culture, so much so that she and ROFLCon cofounder Tim Hwang decided that 2012 would be the last conference.71 “In 2012,” Hwang says, “we were on the phone with Grumpy Cat’s agent, and it was like, ‘this cat has an agent.’ I think that fact alone is a really big indication of how the space of internet culture had changed in a four-year time period.”72
This shift occurred for many reasons, as Phillips chronicles.73 First, the introduction of content-generating sites like Meme Generator and Quickmeme in 2009 and 2010 allowed users to instantly create memes without knowledge of the photo-editing software that had been a barrier to internet culture contribution. Compounding these shifts in memetic production was the family-friendly Cheezburger Network’s 2011 acquisition of Know Your Meme. Cheezburger’s acquisition injected corporate capital into the site and, in the process, helped draw an even wider audience—including advertisers and marketers—to its growing meme database. Efforts to “memejack,” in which marketers would attempt to harness internet culture output, became increasingly common and drove increasing attention to specific memes and the communities that spawned them. An uptick in news coverage about the latest memes, including the exploits of trolling subculture on and around 4chan, also spurred the mainstreaming process.
Much more subtle but just as significant was the increasing, interconnected influence wielded by prominent internet people. As Davison underscored, it was weird how insular that world was. It was weird, he continued, that we—referring to Phillips and the whole network of academics and media types in the New York internet culture orbit—were all at ROFLCon. It was weird, Davison said, that he met Chris Poole several times. It was weird that Andrew Auernheimer, the notorious white supremacist and violent misogynist known as “weev,” attended the first MemeFactory show at NYU. At the time, however, Davison didn’t think about who was who and what was what. Everybody just stumbled around making and sharing content.
The issue was that this “everybody” was already on a fast track to success. Many, including ROFLCon’s organizers and superparticipants, attended some of the most prestigious universities in the world, and after college they accepted positions at some of the most prestigious media and technology companies in the world. Their voices, their jokes, and their frames tangled even more tightly with industry. Even as everyday enthusiasts produced a dizzying stream of content, the fact that the most prominent behind-the-scenes influencers were friends, or at least friends of friends, who actively promoted one another’s work across and between high-profile platforms, helped cohere that content. It also helped that content stretch its tendrils out to further mainstream attention.
Before long, the jokes, memes, and overall aesthetic of internet culture, including many elements of trolling subculture, were showing up on television and in movies. Trolling and meme merchandise was everywhere, in malls, at Target, and even on floats in the Macy’s Thanksgiving Day Parade. In just a few short years, internet culture hadn’t just gone mainstream. It had come to define popular culture—a popular culture steeped in the idea that the more information there is, the more memes people share, the more people comment, the more pixels people flatten, the better things are. For some. Certainly for the titans of liberalism who found ways to monetize all those memes, comments, and pixels while doing everything possible to avoid restricting speech. For them, it wasn’t just that information should be free; it was that free information was a gold mine.
And so the pollution embedded within internet culture became more potent as it seeped through ever-broadening swaths of the forest. The most obvious references to trolling subculture might have been minimized or simply forgotten as that taproot fed into so many others. Still, lulz became a dominant register, not just among extremely online influencers, but also among people who had no idea what forest they were even in, let alone how deep down the roots went. This included many journalists, as we’ll see in the next chapter, and, we’re certain, many people reading this book.
The resulting chorus of ironic, nihilistic, fetishistic laughter created the perfect conditions for bigotry to spread stealthily, tucked away within things that didn’t seem polluted at all. That seemed, instead, like harmless fun. As so many otherwise well-intentioned people watched, seeing nothing, trolls and lulz and glitchy art melted into supremacist hatred, like mists melting into branches melting into roots. Once there, its poison could spread, unfettered, from this tree to that, gathering more strength with each turn. Soon it couldn’t be contained and burst up through the sidewalk, into the public square. This process took time, but it wasn’t difficult; internet culture and white supremacy share the same frames. The difference is that white supremacists know they’re sowing poison.
The next chapter chronicles how bigots exploited these frames throughout the 2016 US presidential election cycle. The story of early internet culture helps ground that conversation. It also serves a more prescriptive function, underscoring that while our present troubles have grown from the seeds we’ve planted, they shouldn’t be seen as inevitable. Fetishized sight and the white racial frame are potent and, for white people in particular, difficult habits to break—or even to recognize as habitual. But it’s still possible for all of us to turn our heads, take in more of the landscape, and take seriously what too many among us haven’t noticed before. This is our first line of defense against the spread of bigoted pollution. It’s not a guaranteed fix. But if we don’t try, the forest may never recover.