Why Good Ideas Die Quietly and Bad Ideas Go Viral

The internet—it seemed like such a good idea at the time. Under conditions of informational poverty, our ancestors had no choice but to operate on a need-to-know basis. The absence of pertinent, reliable, and commonly held facts was at first a matter of mere logistics—the stable storage and orderly transfer of knowledge was costly and troublesome, and entropy was free—but, over time, the techniques of civilization afforded us better control over the collection and transmission of data. Vast triage structures evolved to determine who got to learn what, when: medieval guilds, say, or network news reports. These systems were supposed to function in everybody’s best interests. We were finite brutes of fragile competence, and none of us could confront the abyss of unmitigated complexity alone. Beyond a certain point, however, we couldn’t help but perceive these increasingly centralized arrangements as insulting, and even conspiratorial. We were grownups, and, as such, we could be trusted to handle an unadulterated marketplace of ideas. The logic of the internet was simple: first, fire all of the managers; then, sort things out for ourselves. In the time since, one of the few unambiguously good things to have emerged from this experiment is an entire genre of attempts to explain why it mostly hasn’t worked out.

This effort—the attempt to hash out what went so wrong—had something of a rocky start. After 2016, many liberals were inclined to diagnose the pathologies of the internet as a problem of supply. Some people have bad ideas and beliefs. These are bad either because they are false (“climate change is a myth,” “vaccines cause autism”) or because they are pernicious (“we should have a C.E.O. as a monarch,” “foreigners are criminals”). These ideas propagate because the internet provides bad actors with a platform to distribute them. This story was appealing, both because it was simple and because it made the situation seem tractable. The solution was to limit the presence of these bad actors, to cut off the supply at the source. One obvious flaw in this argument is that “misinformation” was only ever going to be a way to describe ideas you didn’t like. It was a childish fantasy to think that a neutral arbiter might be summoned into being, or that we would all defer to its judgments as a matter of course.

The major weakness of this account was that it tended to sidestep the question of demand. Even if many liberals agreed in private that those who believed untrue and harmful things were fundamentally stupid or harmful people, they correctly perceived that this was a gauche thing to say out loud. Instead, they attributed the embrace of such beliefs to “manipulation,” an ill-defined concept that is usually deployed as a euphemism for sorcery. These low-information people were vulnerable to such sorcery because they lacked “media literacy.” What they needed, in other words, was therapeutic treatment with more and better facts. All of this taken together amounted to an incoherent theory of information. On the one hand, facts were neutral things that spoke for themselves. On the other, random pieces of informational flotsam were elevated to the status of genuine facts only once they were vetted by credentialled people with special access to the truth.

There was, however, an alternative theory. The internet was not primarily a channel for the transmission of information in the form of evidence. It was better described as a channel for the transmission of culture in the form of memes. Users didn’t field a lot of facts and then assemble them into a world view; they fielded a world view and used it as a context for evaluating facts. The adoption of a world view had less to do with rational thought than it did with desire. It was about what sort of person you wanted to be. Were you a sophisticated person who followed the science? Or were you a skeptical person who saw through the veneer of establishment gentility?

This perspective has come to be associated with Peter Thiel, who introduced a generation of conservative-leaning acolytes to the work of the French theorist René Girard. This story has been told to hermeneutic exhaustion, but the key insight that Thiel drew from Girard was that people—or most people, at any rate—didn’t really have their own desires. They wanted things because other people wanted those things. This created conditions of communal coherence (everybody wanting the same thing) and good fellowship, which were simultaneously conditions of communal competition (everybody wanting the same thing) and ill will. When the accumulated aggression of these rivalries became intolerable, the community would select a scapegoat for ritual sacrifice—not the sort of person we were but the one we definitely were not. On the right, this manifested itself as various forms of xenophobia and a wholesale mistrust of institutional figures; on the left, as much of what came to be called cancel culture and its censorious milieu. Both were attempts to police the boundaries of us—to identify, in other words, those within our circle of trust and those outside of it.

The upshot of all of this was not that people had abandoned first principles, as liberals came to argue in many tiresome books about the “post-truth” era, or that they had abandoned tradition, as conservatives came to argue in many tiresome books about decadence. It was simply that, when people who once functioned on a need-to-know basis were all of a sudden forced to adjudicate all of the information all of the time, the default heuristic was just to throw in one’s lot with the generally like-minded. People who didn’t really know anything about immunity noticed that the constellation of views associated with their peers had lined up against vaccines, and the low-cost option was to just run with it; people who didn’t really know anything about virology noticed that the constellation of views associated with their peers had lined up against the lab-leak hypothesis, and they, too, took the path of least resistance. This is not to say that all beliefs are equally valid. It is simply to observe that most of us have better things to do than deal with unremitting complexity. It’s perfectly reasonable, as a first approximation of thinking, to conserve our time and energy by just picking a side and being done with it.

Liberals were skittish about this orientation because it replaced our hopes for democracy with resignation in the face of competing protection rackets. But what they really didn’t like was that their bluff had been called. Their preferred solution to informational complexity—that certain ideas and the people associated with them were Bad and Wrong and needed to be banished from the public sphere—wasn’t much better. The urge to “deplatform” made liberals seem weak, insofar as it implied less than total confidence in their ability to prevail on the merits. The conservative account was all about allegiance and power, but at least it didn’t pretend otherwise: conservatives were frank about their tribalism.

Recent discourse attending to a “vibe shift” has tended to emphasize a renewed acceptance, even in erstwhile liberal circles, of obnoxious or retrograde cultural attitudes—the removal of taboos, say, on certain slurs. Another way to look at the vibe shift is as a more fundamental shift to “vibes” as the unit of political analysis—an acknowledgment, on the part of liberals, that their initial response to an informational crisis had been inadequate and hypocritical. The vibe shift has been criticized as a soft-headed preference for mystical interpretation in place of empirical inquiry. But a vibe is just a technique of compression. A near-infinite variety of inputs is reduced to a single bit of output: YES or NO, FOR or AGAINST. It had been close, but the vibe shift was just the concession that AGAINST had prevailed.
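The compression at work here can be caricatured in a few lines of Python. Everything below—the function name, the inputs, the weighting—is invented for illustration, not anything the essay specifies; the point is only that the heuristic discards the evidence and keys on peer alignment alone.

```python
def vibe(signals: dict[str, float], peer_alignment: float) -> str:
    """Collapse a near-infinite variety of inputs into a single bit.

    signals: evidence one could, in principle, evaluate piece by piece
             (unused -- that is the point of the compression).
    peer_alignment: where the generally like-minded have landed,
                    on a scale from -1.0 (AGAINST) to 1.0 (FOR).
    """
    del signals  # the low-cost heuristic throws the evidence away
    return "FOR" if peer_alignment > 0 else "AGAINST"


# Mixed, ambiguous evidence; peers lean slightly against.
print(vibe({"study_a": 0.7, "op_ed_b": -0.2, "thread_c": 0.1},
           peer_alignment=-0.4))
# -> AGAINST
```

However rich or contradictory the inputs, the output is one bit—which is exactly the economy the essay attributes to picking a side and being done with it.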

One side effect of the vibe shift is that the media establishment has started to accept that there is, in fact, such a thing as a Silicon Valley intellectual—not the glib, blustery dudes who post every thought that enters their brains but people who prefer to post at length and on the margins. Nadia Asparouhova is an independent writer and researcher; she has held positions at GitHub and Substack, although she’s always been something of a professional stranger—at one company, her formal job title was just “Nadia.” Her first book, “Working in Public,” was an ethnographic study of open-source software engineering. The field was inflected with standard-issue techno-utopian notions of anarchically productive self-organization, but she found little evidence to support such naïve optimism. For the most part, open-source projects weren’t evenly distributed across teams of volunteers; they were managed by at most a few individuals, who spent the bulk of their waking hours in abject thrall to a user-complaint queue. Technology did not naturally lead to the proliferation of professional, creative, or ideological variety. Tools designed for workplace synchronization, she found at one of her tech jobs, became enforcement mechanisms for a recognizable form of narrow political progressivism. In the wake of one faux pas—when her Slack response to an active-shooter warning elicited a rebuke from a member of the “social impact team,” who reminded her that neighborhood disorder was the result of “more hardships than any of us will ever understand”—she decided to err on the side of keeping her opinions to herself.

Asparouhova found that she wasn’t the only one who felt disillusioned by the condition of these once promising public forums. She gradually retreated from the broadest public spaces of the internet, as part of a larger pattern of migration to private group chats—“a dark network of scattered outposts, where no one wants to be seen or heard or noticed, so that they might be able to talk to their friends in peace.” Before long, a loose collection of internet theorists took on the private-messaging channel as an object of investigation. In 2019, Yancey Strickler, one of the founders of Kickstarter, published an essay called “The Dark Forest Theory of the Internet.” The title was an allusion to Cixin Liu’s “Three-Body Problem,” which explains the Fermi paradox, or the apparent emptiness of the universe, as a strategic preference to remain invisible to predatory species. The writer Venkatesh Rao and the designer Maggie Appleton later expanded on the idea of the “cozyweb.” These texts took a fairly uncontroversial observation—that people were hotheaded dickheads on the public internet, and much more gracious, agreeable, and forgiving in more circumscribed settings—as a further sign that something was wrong with a prevailing assumption about the competitive marketplace of information. Maybe the winning ideas were not the best ideas but simply the most transmissible ones? Their faith in memetic culture had been shaken. It wasn’t selecting for quality but for ease of assimilation into preëxisting blocs.

In the fall of 2021, Asparouhova realized that this inchoate line of thought had been anticipated by a cult novel called “There Is No Antimemetics Division.” The book is brilliant, singular, and profoundly strange. Originally serialized, between 2008 and 2020, under the pseudonym qntm (pronounced “quantum,” and subsequently revealed to be a British writer and software developer named Sam Hughes), as part of a sprawling, collaborative online writing project called the SCP Foundation Wiki, “There Is No Antimemetics Division” is part Lovecraftian horror, part clinical science fiction, and part media studies. (This fall, an overhauled version will be published, for the first time, as a print volume.) Its plot can be summarized about as well as a penguin might be given driving directions to the moon, but here goes: it’s a time-looping thriller about a team of researchers trying to save the world from an extra-dimensional “memeplex” that takes the intermittent form of skyscraper-sized arthropods that can only be vanquished by being forgotten (kinda). The over-all concept is to literalize the idea of a meme—to imagine self-replicating cultural objects as quirky and/or fearsome supernatural monsters—and conjure a world in which some of them must be isolated and studied in secure containment facilities for the sake of humanity. What captured Asparouhova’s attention was the book’s introduction of something called a “self-keeping secret” or “antimeme.” If memes were by definition hard to forget and highly transmissible, antimemes were hard to remember and resistant to multiplication. If memes had done a lot of damage, maybe antimemes could be cultivated as the remedy.

This is the animating contrast of Asparouhova’s new book, “Antimemetics: Why Some Ideas Resist Spreading,” published with Yancey Strickler’s Dark Forest Collective. She has devoted her attention, as she puts it in the introduction, to the behavior of “ideas that resist being remembered, comprehended, or engaged with, despite their significance.” She is interested in ideas that cost something. Her initial examples are a little bizarre and slightly misleading: Why do we still observe daylight-saving time when nobody likes it? Why don’t people wash their hands when they know they should? (A clearer and more salient reference might be to the newly memetic “abundance agenda,” which remains essentially antimemetic in substance, insofar as it attempts to replace procedural fetishism and rhetorical grandstanding with the hard, unglamorous, possibly boring work of applying ourselves to basic problems of physical infrastructure.) What she’s ultimately after is a much bigger set of questions: Why can’t we manage to solve these big, obvious collective-action problems? Why, in other words, can’t we have nice things? As she puts it, “Our inability to make progress on consequential topics can be at least partly explained by the underlying antimemetic qualities that they share—meaning that it is strangely difficult to keep the idea top of mind.” These antimemes are crowded out by the electric trivia of online signalling: “As memes dominate our lives, we’ve fully embraced our role as carriers, reorienting our behavior and identities towards emulating the most powerful—and often the most primal and base—models of desire. Taken to the extreme, this could be seen as a horrifying loss of human capacity to build and create in new and surprising ways.”