Managing the Global Discourse

Looking up—street corner between Rue de l'Abbaye and Rue Saint Vincent—Nice, France. Image: Virtual-Pano, 2019. Wikimedia Commons.

In 1997, David Rothkopf, at that time managing director of Henry Kissinger’s international consulting firm Kissinger Associates, wrote an article straightforwardly titled “In Praise of Cultural Imperialism?” in Foreign Policy. Positioning himself against both the anti-globalisation left and the nativist right, Rothkopf declared that the “homogenizing influences of globalization that are most often condemned … by cultural romanticists are actually positive.” In fact, he wrote, “the decline of cultural distinctions may be a measure of the progress of civilization, a tangible sign of enhanced communications and understanding.” What he saw emerging in this post-national world was a unified “Global Information Infrastructure,” where satellites, fibre-optic cables, and the internet would “literally form the fabric of future civilization.” However, for this shining future to be fully realised, the whole world would have to adopt a new “global culture” and in the process essentially become America:

It is in the economic and political interests of the United States to ensure that if the world is moving towards a common language, it be English; that if the world is moving towards common communications, safety, and quality standards, they be American; that if the world is becoming linked by television, radio, and music, the programming be American; and that if common values are being developed, they be values with which Americans are comfortable.

Watching the “exporting of the American model” from the other side of the Atlantic, Pierre Bourdieu was appalled. What the newly intensified project of American cultural imperialism really represented, he wrote with his collaborator Loïc Wacquant, was a form of “symbolic violence.” Its main mode of operation was language. The newly globalized vocabularies of the new public management and postmodern identity politics—both suddenly omnipresent in the 1990s—were two aspects of the same phenomenon. A new and oddly authoritative set of keywords for describing social and economic reality (“flexibility” and “governance”; “exclusion,” “multiculturalism,” and “minority”) was being exported globally through the forcing mechanisms of American media, the academic publishing and conference circuits, think tanks, and cultural institutions.

Most disturbingly for Bourdieu and Wacquant, these new linguistic templates imposed (as a kind of cognitive overlay) the very particular racial history of the United States (and the “race relations” and anti-racist policy models developed to manage its legacies in line with American capitalism) as the new “universal point of view.” In fact, they argued, this was true of all aspects of what they dubbed the new “planetary vulgate” and its “constrained” modes of “communication.” Meaning was now increasingly concentrated into the capsule form of corporate slogans, management speak, and currently acceptable terminology. The global circulation of this new and simplified set of terms and concepts compelled everybody everywhere to accept the very particular values of neoliberalism and American identity politics as though they were universals:

By imposing on the rest of the world categories of perception homologous to its social structures, the USA is refashioning the entire world in its image: the mental colonization that operates through the dissemination of these concepts can only lead to a sort of generalized and even spontaneous ‘Washington consensus’.

The world beyond America was subject to a new linguistic imperialism that made the very words through which local cultures had formerly understood themselves obsolete and morally suspect. The old world, with its distinct cultures and histories, was becoming literally unthinkable because it could no longer be described in its own terms. Authoritative description lay with the new terminology. There was now a “new planetary commonsense” abroad, which constantly updated global sense-making according to the dictates of current best practice. The aura and prestige of the new and constantly evolving term sets brought managers, professionals, and cultural mediators everywhere smoothly into line with thinking in the new (American) metropole. Old “cardinalities” (the interpretive categories into which society had previously been sorted) were forgotten, seamlessly overwritten with new technological and social programming.

The straightforward and unapologetic American chauvinism championed by Rothkopf is, of course, no longer a feature of how American companies do their global business. Being a global brand today means recognising and celebrating global identities and interests, not harping on purely national ones. However, the disappearance of explicit American cultural imperialism is also a reflection of how enormously successful the post-Cold War project of “exporting the American model” through information technology has really been. The primacy of American interests no longer has to be insisted upon; American concepts and descriptions have spread virally into everyone’s heads. Cultural politics beyond America (particularly in the Anglosphere and increasingly in Europe) to a large extent has simply become American cultural politics. Left and right political parties throughout the Anglosphere can now seem like nothing so much as local branches of the Global Democratic and Global Republican parties. Similarly, as Bourdieu and Wacquant predicted, the American identity politics framework has thoroughly reforged local understandings of ethnicity and belonging beyond the US on both sides of the political spectrum.

Given new transmitting power by the internet, the new “planetary vulgate” is also thriving. The communication constraints imposed by social media and the phone screen are perfect vectors for its simplifying powers of expression. “Restacks with a quote,” Instagram captions, and tweets can intensify emotion and a sense of moral certainty just as they can radically simplify ideas. The current iteration of the planetary vulgate can be seen threading its way through Twitter’s 2010s-era content moderation documents as well as the academic literature on “trust and safety.” In his vision for the future of content moderation, Microsoft researcher and Cornell professor Tarleton Gillespie speaks of the tech industry’s need to work with an “ethic of care”; its duty to amplify “marginalized voices”; and the need for it to use its powers of calculation to address “structural inequities.” Each of these phrases encodes a capsule expression of the progressive worldview; they resonate with a very particular set of ideological commitments. Their ubiquity during the late 2010s represents the tech industry’s “soulful turn” under the moral guidance of academia and other members of the “global new class.” Among these were the “NGOs, activists and other organizations” whom the Trust and Safety Council liaised with and who helped Twitter “harness the power of public conversation” to “create action” on social issues.

Therapeutic culture, the new public management, and machinic forms of judgement and calculation were fusing into a new hybrid technocratic morality. Its logic was internalised by the newly “mission-driven” knowledge workers of the global new class, who then imposed it on the world as interpretations, procedures, and outward-facing duties. Partly self-serving, but also in large part sincere, members of this class wanted to (in effect) turn the supertanker of global capitalism around, pointing it towards the most pressing moral and existential challenges of the twenty-first century. At the same time, this new morality also represents the progressive knowledge worker’s will to power, status, and social control, expressed (in classic new class fashion) under the signs of working tirelessly on behalf of the marginalized. Yes, we work for the world’s largest corporations and most prestigious universities, they seem to say, but we care! We have checked our privilege and are using it in service of the vulnerable! Our expertise is making the world a better place! Please do not question our motives or moral authority!

If Rothkopf could see the emergence of a new “global culture” in 1997, Twitter executives increasingly came to see themselves as the stewards and custodians of a new and emerging “global discourse.” As with so many of the phenomena associated with 2010s-era “woke capitalism,” this was a concept in which narrow commercial interests and wider progressive commitments to recognition and inclusivity were bound together in a tight knot. Platform capitalism was thoroughly mystified as moral commitment and a duty of care. Aside from being moral principles, inclusivity and the promotion of “common understandings” are ways for a communications company to maximise its market reach. The “global conversation” is therefore simultaneously an ideal and evidence of the platform’s successful penetration into all global markets. At the same time, given the ethical dimension of “woke capitalism,” company communications insisted that the firm use its power to be seen pursuing more than its own financial self-interest.

When the COVID-19 pandemic was just beginning in January 2020, the Trust and Safety team wrote on the Twitter Blog that:

As the global conversation continues around the spread of COVID-19, we want to share the work we’re doing to surface the right information, to promote constructive engagement, and to highlight credible information on this emerging issue.

More or less organically, it seemed, the socially magnetic powers of the platforms had drawn a diverse and global public together into the same speech community. It fell to Trust and Safety to “protect the civic conversation,” particularly at crucial moments, such as during a global pandemic. Their task was to foster “constructive engagement” among users; to amplify and deboost in order to “surface the right information” while ensuring that “misinformation” sank from sight. However, it was no longer sufficient to aim only at keeping individuals safe and informed. The global nature of the new “civic conversation” meant that Trust and Safety’s commitment to the user experience had to be scaled up to planetary level. As Gillespie put it, “the offence and harm in question is not just to individuals, but to the public itself, and to the institutions on which it depends.” If it had once seemed that social media companies were simply “making up trust and safety as they went along,” those in charge of content moderation had now come to understand their project in terms no less all-encompassing than safeguarding the “collective values” of a global public.

In the second half of the 2010s, social media executives increasingly turned to the metaphor of public health to represent how their platforms could protect the new speakers of the “global discourse” from harm. Introducing initiatives to reduce the visibility of trolls and abusive content on Twitter in May 2018, David Gasca and Vice President of Trust and Safety Del Harvey described what they were doing as “serving healthy conversation” and improving “the overall health of our service.” The public health framing gave individual moderators a license to intervene at any point to “keep the conversation productive and healthy.” However, the binary logic of public health also enabled social media companies and commentators to stigmatise content and behaviour that was deemed divisive or unwanted on the platforms. These represented the semantic flipsides (or implied antonyms) of the normative concepts of health and desirability—“an unhealthy environment,” “a poisonous influence on the discourse,” “harmful speech,” and above all, “toxicity.”

When it issued a report on Twitter misogyny in 2018, Amnesty International claimed that abuse and harassment had made Twitter “a toxic place for women.” The report’s hashtag, #ToxicTwitter, suggested that misogyny had quite literally poisoned Twitter, as though the platform were now synonymous with the behaviour of its worst users. Digital toxicity evokes the idea of “toxic build-up,” whereby something that might have been bearable in small doses is deadly when encountered at higher concentrations. However, when used to describe “problematic content” on social media, “toxicity” has often eluded precise definition. This lack of clarity enables it to be applied in an extremely broad fashion.

A 2021 Big Data & Society article about COVID-masking discourse on Twitter described digital “toxicity” as being simply “a rude, disrespectful, or unreasonable comment that is likely to make you leave a discussion,” and asserted that this kind of “verbal aggressiveness” in the context of a global pandemic “constitutes a threat to public health.” Having defined digital toxicity in such sweeping terms, the authors argue that this behaviour:

creates a hostile environment that turns users away from online conversations about the issue and/or may distract them from acquiring factual, evidence-based information about face mask wearing as an effective measure to stop the spread of COVID-19.

Accordingly, they suggest, “toxic” comments on social media (in other words, those that “aggressively” express scepticism about the effectiveness of face masks) “should be continuously monitored and managed by public health agencies and organizations tasked with fighting the infodemic” as part of the wider institutional response to the COVID-19 pandemic.

These arguments erect a cordon of certainty around expertise at the same time as they declare that any questioning of it constitutes a public health emergency. There is no room here for any broader discussion about the practicalities of COVID masking. (Should masks be used outdoors as well as indoors? Are cloth masks as effective against virus particles as N95s?) It is simply taken for granted that all such practices are “effective,” and that anyone who questions them is an adversary of “the good” (“a threat to public health”). The idea that public health institutions were “fighting” an infodemic in 2020 as well as a pandemic is especially revealing in this respect. This new hybrid concept (coined by the World Health Organisation at the very start of the pandemic) spoke powerfully to expert and governing classes that had long been ambivalent about unregulated public speech on matters of consequence. Now, “unmanaged” or “ill-informed” public discourse could be collapsed outright into the category of contagious disease, suggesting that the normative response to non-compliant ideas and opinions should be to restrict their spread (as one would any other dangerous pathogen) for the care and protection of others.

All of these approaches to content moderation work according to an overarching compliance logic. Just as public health discourse does, compliance justifies itself on the basis of its ability to prevent harm. In the process, however, it fundamentally reconfigures reality. The world is sorted into those things that no-one could be against (care, protection, public health) and those things that no-one could be for (abuse, toxicity, harm). Shades of grey resolve into the certainties of black and white. The goal of institutions, when reframed in terms of compliance, becomes to apply procedures that prevent the absolute evils in the latter category of concepts, in order to promote the absolute goods in the former. Moreover, the effect of compliance is very often to replace what were once political questions (what are the limits of free speech?) with forms of decision-making that are restricted from public view or contestation.

The world when redescribed in the binary terms that compliance insists on imposing becomes, as the criminologists Tereza Østbø Kuldova, Jardar Østbø, and Thomas Raymen put it:

a Manichean fight between the forces of good and evil, the pure and the dirty, between those with integrity and those corrupt, and between a healthy body politic and a diseased body politic that infects others like a virus, requiring political-economic, cultural, and ideological vaccination.

Demands for compliance often begin with a clear-cut scandal, such as the genuine harassment that many female posters on Twitter were subjected to during the Gamergate episode. There is a corresponding and deeply felt desire that “something must be done to stop this.” Once established as “best practice” in one arena, however, compliance extends its territory, redefining more and more elements of social reality as “problems” to be eradicated according to its ramifying procedures. What were formerly seen as minor transgressions (or not even understood as transgressions at all) acquire a new aura of seriousness once the spotlight of compliance is turned on them.

Modern compliance is also deeply ideological. We can all sense (and political scientists have confirmed) that the bundles of values and beliefs that it is necessary to profess in order to be seen as “a good progressive” or “a good conservative” have become increasingly uniform (or “constrained”) in recent times. Digital transparency has accustomed us to pattern-matching. We can now see (at scale and in striking detail) which sorts of beliefs are most associated with the sorts of people who profess certain brands of politics. The degree to which online networks are also “technologies of reputation” compels us to announce our adherence to the beliefs of our avowed tribes (and to disavow those associated with “the other side,” so that we can maintain good standing with our ideological allies). That we now speak of certain beliefs, values, or personal aesthetics as being politically “coded” one way or another speaks to this emerging logic. Compliance acts to police the outer boundaries of these bundles of ideological acceptability. It takes attitudes, beliefs, or behaviours that have become “problematic,” and (under “terms and conditions” logic) places them beyond the bounds of acceptable utterance. Arguably, as the progressive strain of content moderation bedded in during the mid to late 2010s under the rhetoric of “inclusivity,” those terms of acceptable utterance became increasingly synonymous with the values and beliefs bundled within the “progressive” worldview.

These compliance practices rely above all on the application of labels, which denote the things no-one can be for—“misinformation,” for instance, or “hateful conduct”—that it is the mission of compliance to prevent. Labelling reifies complex and often contested concepts into simple and seemingly unassailable facts. As part of its expanded “hateful conduct” policy, released in October 2018, Twitter for the first time prohibited the “targeted misgendering or deadnaming of transgender individuals,” bringing it into line with other 2010s-era definitions of online abuse, such as those in the open source developer community’s Open Code of Conduct. A rapidly evolving (and ideologically contested) form of social etiquette became, through the power of policy, a quasi-legal prohibition. However, as the philosopher Holly Lawford-Smith observes, the policy seems to have been interpreted very widely by Twitter moderators. After 2018, many gender-critical users reported having their accounts suspended simply for arguing that there was a distinction to be made between biological sex and gender identity, rather than for any particular instances of misgendering or deadnaming. The effect of the policy was to give normative force to a particular (and contested) view of reality, and make arguments against it “unsayable” on the platform.

Like other forms of pejorative labelling, “hateful conduct” very obviously denotes an act of moral censure as well as one of simple classification. This form of censure could have relevance well beyond Twitter. Lawford-Smith observes that Australian media outlets reporting on her views after 2018 would regularly note “in shocked tones” that she had been banned from Twitter for “hateful conduct,” without providing details about the policy itself, or any context about how widely it was being applied. All of this illustrates the “simulated objectivity” that compliance seeks to create. The label and its caption act as overlays obscuring everything arbitrary and subjective about the compliance process itself. They can then become a kind of mobile index of negative social capital, capable of being referred to in other contexts as though they had judicial significance.

One need not be a gender-critical feminist to understand from these examples how effective platform compliance frameworks can be for ideological policing. Political viewpoints out of alignment with the worldviews encoded in moderation policies simply become “non-compliant”—that is, things no-one could possibly be for. They are (by the logic of compliance) redefined as problems to be managed (and censured) under highly normative (but supposedly “objective”) headings such as “hate,” all under the guise of protecting others from harm. Twitter’s attempts to “meet the cultural moment” between 2018 and 2022 by implementing new approaches to content moderation illustrate this process in action. A very particular understanding of the world (and how language should function) was made universal and applied across user communities worldwide as part of the terms and conditions of participating in the “global conversation.”

As the content moderation ethos that held sway between roughly 2018 and 2023 recedes into the past, it is worth asking what that project was exactly, and what its long-term consequences might be. Behind the idea that platforms had a duty to shape the “global conversation” was a belief in the fundamental power of language and representations to shape reality. By enforcing new etiquettes, amplifying the voices of the marginalized, and reducing the signals associated with “problematic” users, new forms of social equality could be brought into being in the “real world.” This was, in other words, yet another instance of the new class project of installing its particular ways of speaking—represented here by “the discourse”—as “the standard of good speech” everywhere. Like all forms of compliance ideology, this vision relied on the power of the anti-policy (that which would ban or suppress the “problematic”), as well as a faith that having the right procedures in place was the way to get universal social buy-in from all parties.

Pronouncing 2018 “the year we (sort of) cleaned up the internet” for Mashable, tech and wellness reporter Rachel Kraus expressed this new faith in the social power of content moderation in a particularly vivid way. “If a bigot rants in a forest, and nobody is around to hear him, does anyone care what that idiot says?”, she asked. Although she acknowledged that deplatforming abusive or problematic users was not a one-stop solution to the internet’s problems, she nevertheless saw it as one of:

the dire changes needed to proactively prevent racism, anti-Semitism, misogyny, radicalization, and blind hate from spreading online and into the real world … it’s a much, much, MUCH needed start to both combatting and understanding how toxic speech that leads to real-world violence spreads online.

Everything in Kraus’s article, from the defiantly slanted, no-filter sassiness of her tone to her reliance on pejorative labeling to settle arguments in advance, is saturated with the Manichaean logic of compliance thinking. Any “grievances” are simply bad ideas conjured into existence by the “hate groups” and “propaganda” that the deplatforming agenda aimed to expel from the internet. Nor does she acknowledge that the labels she uses (“toxicity,” “hate,” “radicalization”) are mobile and elastic terms that could be stretched to mark any amount of speech and behaviour as “problematic.” Like the spectre of “corruption” in anti-corruption campaigns, “hate” and its cognates become obstacles to the realization of a perfect world. Once hate and hate-speakers were removed from the global discourse, the Good—or, in the language of Twitter Trust and Safety, “a healthy conversation”—would inevitably emerge. The fully moderated online space would represent a simulated reality from which all dangerous elements were purged as unsafe, harmful, and against the terms of service.

This vision was most fully realized during the COVID-19 pandemic in 2020–21, during which content moderation came (for a time) to seem like a matter of life and death. On 1 April 2020, Twitter declared that it was broadening its definition of harm to remove content that contradicted “guidance from authoritative sources of global and public health information.” In May 2020, it began attaching labels to “potentially harmful and misleading” pandemic tweets, extending this policy to vaccine misinformation in 2021. The scope of these labels soon expanded to encompass a range of additional misinformation offences. The platform would eventually label the tweets of its “main character” Donald Trump multiple times in late 2020 and early 2021 for “election-related misinformation,” before permanently suspending his personal account on 8 January 2021.

That Twitter Trust and Safety eventually escalated to deplatforming the president himself was the most explicit expression imaginable of the power and authority that content moderation had assigned itself in its mission to protect the public from informational harm. At the same time, however, the certainties of compliance thinking perhaps blinded it to the simmering anger and resentment that its claims to be the arbiter of online truth could generate, even among those who had not themselves been subject to direct moderation. Over the last fifteen or twenty years, platform metrics have reformatted our sense of how “status and esteem” are generated, in the process creating new battles for “social recognition” (of which the desire to use social media to amplify “marginalized voices” is but one prominent example). However, as William Davies observes, those who feel that they have not been sufficiently recognized by the new platform-based social economies can suffer a corresponding sense of injury. The logic of social media, he writes:

is such that their principal effect is to generalize a feeling of misrecognition—heightening the urgency with which people seek recognition, but never satisfying this need. One effect of this process is the rise of groups who feel relatively deprived, to the point of political insurrection.

Posting on X about his lack of sympathy with progressive internet users who were punished for their responses to the assassination of Charlie Kirk, the conservative academic Geoffrey Miller perhaps spoke for many when he expressed this sense of misrecognition in the following terms:

Every single American conservative has lived their entire life under the Left’s cultural regime of censorship, intimidation, compelled speech, stigmatization, ostracism, & demonization. We’re not going to live under that regime any more.

According to Miller, the sense of misrecognition suffered by American conservatives amounts to more than occasional penalties for overstepping the boundaries of acceptable speech. The entire system of education and communication that ostensibly subjects non-progressives to the “regime” of progressive ideology is implicated. With the passing of Twitter (as X) into the hands of Elon Musk and Trump’s newfound willingness (since the Kirk shooting) to place direct pressure on broadcasters that oppose him, that system is (he suggests) coming to an end. Here we see the binary logic of social media and compliance culture expressed in its most straightforwardly nihilistic terms. If one is manifestly on the side of things no-one can be against and one’s political opponents refuse to submit to one’s point of view, what point is there in further dialogue? In such a situation, the purpose of political communication becomes simply to describe the opposition in terms of the things no-one can be for and seek their disqualification (or annihilation) on that basis.

Alex Garland has described the impetus for his 2024 film Civil War as the growing awareness that:

people on the left were simply not listening to people on the right … they were not listening to their fears and complaints; they were just getting angry with them, and exactly the same was true on the other side. The complete absence of communication.

Civil War itself dramatizes this total breakdown of communication and civility and the human chaos that results. Jesse Plemons’s militia leader seems to want to impose some sort of order on the slaughter when he asks the main characters, “what kind of American are you?” as he threatens to kill them. However, the undifferentiated truckload of dead American civilians his unit is burying belies any sense that there are orderly selection criteria for who dies and who does not. The agonies of a gravely wounded American Army soldier whimpering in pain and fear are silenced by a gunshot; the other members of his unit are captured and summarily executed. Later, as the secessionist Western Forces close in on Washington DC, it is clear that the old signifiers of sovereign authority retain no aura. The Lincoln Memorial is destroyed by helicopter missile fire; the president’s limousine is machine-gunned and the staffers inside killed; a Secret Service agent trying to negotiate the president’s surrender is gunned down mid-sentence; and the president himself is dragged out from beneath his desk and shot dead by grinning soldiers.

The assertion that one’s political opponents are (depending on one’s position on the political spectrum) either “fascists” or “communists” is nothing new. What seems to have changed in more recent years is that, with the addition of historical distance, any sense of real meaning has long leached out of these historical descriptors. They have become pure signs or labels that, when applied to one’s enemies, simply mark them as being existential threats and thus worthy in themselves of annihilation. As the more idealistic models for “managing the global conversation” themselves become a thing of the past, our label-based modes of communication and sense-making (both online and offline) are increasingly starting to resemble the war of all against all.
