Renée DiResta, Author at NOEMA (https://www.noemamag.com/author/renee-diresta/)

The Great Social Media Diaspora
Noema Magazine | January 7, 2025 | https://www.noemamag.com/the-great-decentralization

For the past two decades, most online discourse has occurred on a handful of social media platforms. Their dominion seemed unshakeable. The question wasn’t when a challenger to Twitter or Facebook might arrive but if one could ever do so successfully. Could a killer new app, or perhaps the cudgel of antitrust, make a difference?

Today, those same platforms still enjoy the largest user bases; massive breakout successes like TikTok are the rare exception, not the rule. However, user exodus to smaller platforms has become increasingly common — especially from X, the once-undisputed home of The Discourse. X refugees have scattered and settled again and again: to Gab and Truth Social, to Mastodon and Bluesky. 

What ultimately splintered social media wasn’t a killer app or the Federal Trade Commission — it was content moderation. Partisan users clashed with “referees” tasked with defining and enforcing rules like no hate speech, or making calls about how to handle Covid-19 content. Principles like “freedom of speech, not freedom of reach” — which proposed that “borderline” content (posts that fell into grey areas around hate speech, for example) remain visible but unamplified — attempted to articulate a middle ground. However, even nuanced efforts were reframed as unreasonable suppression by ideologues who recognized the power of dominating online discourse. Efforts to moderate became flashpoints, fueling a feedback loop where online norms fed offline polarization — and vice versa.

And so, in successive waves, users departed for alternatives: platforms where the referees were lax (Truth Social), nearly nonexistent (Telegram) or self-appointed (Mastodon). Much of this fracturing occurred along political lines. Today the Great Decentralization is accelerating, with newspapers of record, Luke Skywalker and others as the latest high-profile refugees to lead fresh retreats.

It was once novel features, like Facebook’s photo tagging or Twitter’s quote tweets, that drew users to social media sites. Now, it’s frequently ideological alignment that seduces users. People are decamping to platforms that they believe match their norms and values — and, in an increasingly polarized America, there is a chasm between the two sides. 

Yet there’s more to this migration than meets the eye. Beneath the surface lies a profound shift in the technology underpinning online socialization. In the latest wave of decampment — primarily to Bluesky — users are seeking out an ideological alternative to the increasingly right-wing X. They may be leaving for the vibes, but they are also stepping into a world that is foundationally different in ways that many are only beginning to grasp. The federated nature of emerging alternatives, like Mastodon and Bluesky — platforms structured as a network of independently-run servers with their own users and rules, connected by a common technological protocol — offers a potential future in which communities spin up their own instances (or servers) with their own rules.

This movement away from centralized trust and safety teams enforcing universal rules may sound like a fix for social media’s woes. Fewer violent clashes between culture warriors. Fewer histrionic accusations of “censorship.” The players becoming the referees. Isn’t that ideal? 

But new governance models come with new complexities, and it’s crucial to grapple with what’s on the horizon. What happens when sprawling online communities of tens of millions fracture into smaller, politically homogenous self-governing communities? And what does this mean for social cohesion and consensus, both online and off? 

Preceding The Great Decentralization

How did we arrive here? The centralized content moderation system that has begun to fracture was shaped by a mix of American political values, societal norms and economic realities, as researcher and professor Kate Klonick argued in the “Harvard Law Review” in 2018. Klonick’s essay “The New Governors” details how platform governance policies were largely crafted by American lawyers with First Amendment pedigrees.

These platforms were privately owned and operated, yes, but their governance hewed to the spirit of American law. Nonetheless, most platforms also saw it as their duty to moderate away “obscene, violent, or hate[ful]” content. This was due in part to a desire to be seen as good corporate citizens, but also was nakedly pragmatic: “Economic viability depends on meeting users’ speech and community norms,” Klonick wrote. When platforms created environments that met user expectations, users would spend time on the site, and revenue might increase. Simple economics. 

Yet, even as platforms sought to balance corporate responsibility, user safety and economic viability, the rules increasingly became flashpoints for discontent. Content moderation decisions were perceived not as neutral governance but as value-laden judgments — implicit declarations of whose voices were welcome and whose were not. Facebook’s removal of the iconic “Napalm Girl” photo in 2016 — due to its automated enforcement of rules against nudity — provoked global backlash, forcing the platform to reverse its decision and acknowledge the complexities of moderating at scale.

“It was once novel features … that drew users to social media sites. Now, it’s frequently ideological alignment.”

Around the same time, Twitter faced criticism for failing to adequately respond to the rise of Islamic State group propagandists, and to harassment campaigns like “Gamergate” (a 2014 online movement ostensibly about ethics in gaming journalism but widely perceived as a troll campaign targeting women in the industry).

These incidents underscored the tensions between enforcing community standards and protecting free expression. For many users, particularly those whose speech bordered on the controversial or offensive, the referees of Big Tech platforms seemed to wield disproportionate power, which fueled a sense of alienation and distrust. Rather than simply constraining what could be said online, the rules seemed to signal whose perspectives held power in the digital public square.

As these forces converged and hardened into the governance status quo, those who chafed under it faced a timeless choice: exit versus voice. Should they abandon a product or community in search of better options, or stay and speak out, channeling their frustration into demands for change?

The German-born economist Albert O. Hirschman argued that the choice between exit and voice for dissatisfied consumers was mediated by a third factor: loyalty. Loyalty, whether rooted in patriotism or brand affinity, can tether individuals to an institution or product, making them more inclined to call for change than to walk away. For years, loyalty to major platforms was less about affection and more about structural realities; monopolistic dominance and powerful network effects left social media users with few realistic alternatives. There weren’t many apps with the features, critical mass or reach to fulfill users’ needs for entertainment, connection or influence. Politicians and ideologues, too, relied on the platforms’ scale to propagate their messages. People stayed, even as their dissatisfaction simmered.

And so, voice was the answer. Politicians and advocacy groups pressured companies to change policies to suit their side’s needs — a process known as “working the refs” (referees) among those who study content moderation. In 2016, for example, “Trending Topicsgate” saw right-wing influencers and partisan media chastise Facebook for allegedly downranking conservative headlines on its trending topics feature. The outrage cycle worked: Facebook fired its human news curators and remade the system. (Their replacement, an algorithm, quickly busied itself spreading outrageous and untrue headlines, including from Macedonian troll factories, until the company ultimately decided to kill the feature.) Left-leaning organizations ref-worked over the years as well, applying pressure to maximize their interests.

Online partisan crowds began to perceive even one-off decisions as evidence of rank bias. Content moderation calls involving seemingly inconsequential interpersonal disputes were magnified into manufactroversies — proof of platforms kowtowing to identity politics or perpetuating some sort of supremacy. There were grains of truth: moderators did make mistakes, miss context and make bad calls as they worked through millions of decisions a quarter. Yet as disagreement became a partisan sport, platforms found themselves refereeing an escalating culture war. Efforts to impose order — to prevent real people from being doxed, stalked or even just harassed — were routinely transmuted into fodder for further tribal aggrievement.

On the right, in particular, moderation disputes were reframed as existential battles over political identity and free speech itself. Despite scant evidence of any actual systemic bias, right-wing influencers galvanized around the idea that platforms were targeting them; they moved from working the refs to challenging their right to operate.

Then-President Donald Trump, in particular, angry that his misleading tweets were labeled misleading, didn’t make nuanced arguments about transparency or the need for an appeals process. Instead, he set about delegitimizing content moderation itself and threatening regulatory action. Basic interventions like fact-check labels on disputed claims — and sometimes even the mere suspicion of intervention (i.e., if a tweet did not get its perceived due in engagement) — were reframed as tyrannical acts by tech elites conspiring against right-wing populists. The referees were no longer mediators in the culture war; they had become the opposition.

As this narrative became embedded in right-wing political identity, the market responded with opportunities for exit. Alt-platforms like Parler, which emerged in 2018, were created with the express goal of catering to Trump supporters who now believed mainstream platforms were irredeemably biased. Gettr and Truth Social followed, born of grievances surrounding the 2020 election, the January 6 riots and the moderation of the man most responsible for instigating them.

The new right-wing alt-platforms had refs on the same team, but they remained small — because the trade-off was that there were few libs around to own. There were few opportunities for partisan brawls or trolling. There were few bystanders to potentially recruit to a preferred cause. And so, political influencers, media figures and politicians across the political spectrum continued working the refs on major platforms, where the stakes — and the audiences — remained far greater.

“As disagreement became a partisan sport, platforms found themselves refereeing an escalating culture war.”

Then, in 2022, a seismic shift occurred: Elon Musk, a true believer in the theory of the corrupt refs, bought Twitter — and anointed himself as primary referee. The platform he now called X had always been relatively small but disproportionately influential: its concentration of the media- and politics-obsessed earned it the nickname “the public square.” More accurately, it often functioned as a gladiatorial arena — a chaotic space where consensus was shaped and hapless individuals became “main characters” in mob pile-ons. 

After the acquisition, Musk offered “amnesty” for those who’d fallen afoul of the old referees — including avowed neo-Nazis. Right-wing influencers on the platform seized the opportunity to work the new referee with a vengeance, and Musk responded by overhauling governance quickly and significantly in their favor. Posts that were formerly moderated, such as unfounded rumors of rigged elections or intentional misgendering of transgender users, were now fair game.  

Dissatisfaction with the new referee, policies and the overall environment on X thus led to an exodus from the platform by the American political left. At first, people hopped to Mastodon, which had the advantage of already existing. Another new market entrant, Bluesky, launched its beta with an invitation-only model driven by referral networks. The progressive-left community quickly established a foothold, and its users tested the relatively novice refs during moments of dissatisfaction over its nascent moderation policies. They debated whether hyperbolic speech constituted a “threat,” and under what conditions users should be banned. In one notable early incident, users confronted Bluesky’s developers on the platform and demanded public apologies after a bug allowed trolls to register slurs as usernames. By November 2023, Bluesky had 2 million users and a reputation as a very lefty space.

In July 2023, the 800-pound gorilla entered the competition for dissatisfied tweeters: Threads, owned by Meta. Positioned as a direct competitor to X, Threads marketed itself as “sanely run,” in the words of Chief Product Officer Chris Cox. However, the promise of sanity didn’t shield Threads from ref-working dynamics. Leadership’s decision to throttle political news and block some pandemic-related searches triggered a backlash from its largely liberal user base (some of whom began to promote Bluesky as a better place to be). Despite these tensions, Threads grew rapidly, self-reporting 275 million monthly active users by late October 2024; it was, even dissatisfied users sighed, better than X.

By November 2024, however, it was Bluesky’s growth that was accelerating dramatically, fueled by Trump’s reelection and Musk’s increasingly explicit alignment with the far-right. Musk, X’s most visible user as well as its chief referee, had become a vocal Trump surrogate and election-theft truther, and his platform’s algorithms appeared to boost him and his ideological allies.

Loyalty to the old Twitter steadily declined among previously vocal power users. And so, many chose to exit: In the weeks following the election, Bluesky broke 25 million users, spurred not so much by features but by ideological dissatisfaction and the allure of a platform where governance seemed to align more closely with progressive norms.

But does it? 

New Governance, New Challenges

The Great Decentralization — the migration away from large, centralized one-size-fits-all platforms to smaller, ideologically distinct spaces — is fueled by political identity and dissatisfaction. Yet what is most interesting about this latest wave of migration is the technology underpinning Bluesky, Mastodon and Threads — what it enables and what it inherently limits. These platforms prioritize something foundationally distinct from their predecessors: federation. Unlike centralized platforms, where curation and moderation are controlled from the top down, federation relies on decentralized protocols — ActivityPub for Mastodon (which Threads also supports) and the AT Protocol for Bluesky — that enable user-controlled servers and devolve moderation (and in some cases, curation) to that community level. This approach doesn’t just redefine moderation; it restructures online governance itself. And that is because, writ large, there are no refs to work. 

The trade-offs are important to understand. If centralized platforms with their centrally controlled rules and algorithms are “walled gardens,” federated social media might best be described as “community gardens,” shaped by members connected through loose social or geographical ties and a shared interest in maintaining a pleasant community space.

In the fediverse, users can join or create servers aligned with their interests or communities. They are usually run by volunteers, who manage costs and set rules locally. Governance is federated as well: While all ActivityPub servers, for example, share a common technological protocol, each sets its own rules and norms, and decides whether to interact with — or isolate from — the broader network. For example, when the avowedly Nazi-friendly platform Gab adopted Mastodon’s protocol in 2019, other servers defederated from it en masse, cutting ties and preventing Gab’s content from reaching their users. Yet Gab persisted and continued to grow, highlighting one of federation’s important limitations: defederation can isolate bad actors, but it doesn’t eliminate them.
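To make defederation concrete, here is a minimal sketch of the mechanic, with invented server names and a simple domain deny-list; real fediverse software such as Mastodon implements far richer policy controls, so treat this as an illustration of the idea rather than any project’s actual code.

```python
# Minimal sketch of fediverse-style defederation, using made-up server names.
# Each server keeps a local deny-list; content from listed domains is refused.

class Server:
    def __init__(self, domain):
        self.domain = domain
        self.blocked_domains = set()   # a purely local moderation decision
        self.inbox = []

    def defederate(self, other_domain):
        """Stop accepting content from another server."""
        self.blocked_domains.add(other_domain)

    def receive(self, post, origin_domain):
        # Defederation only protects *this* server's users; the post still
        # exists on its origin server and on any server that accepts it.
        if origin_domain in self.blocked_domains:
            return False
        self.inbox.append((origin_domain, post))
        return True


community_server = Server("community.example")
fringe_server = Server("fringe.example")

community_server.defederate("fringe.example")
accepted = community_server.receive("some post", "fringe.example")
print(accepted)  # False: blocked locally, but not removed from the wider network
```

The Gab episode’s limitation is visible in the last line: blocking is local, so the blocked server’s content persists everywhere that has not opted out.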

“The Great Decentralization … is fueled by political identity and dissatisfaction.”

Protocol-based platforms offer a significant potential future for social media: digital federalism, where local governance aligns with specific community norms, yet remains loosely connected to a broader whole. For some users, the smaller scale and greater control possible on federated platforms is compelling. On Bluesky — which is, for the moment, still largely just one instance run by the development team — the savvy are developing tools to customize the experience. There are shareable blocklists, curated feeds (views that let users see the latest posts on a creator-defined topic, like news or gardening or sports), and community-managed moderation tools that enable the application of categorization labels for posts or accounts (“Adult Content,” “Hate Speech,” etc.). These allow users to tailor their environment to their values and interests, giving them more control over what posts they see — ranging from spicy speech to nudes to politics — and which are hidden behind a warning or concealed altogether. And while there is, presently, a centralized content labeler controlled by the Bluesky moderation team, users can also simply turn it off.
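As a rough illustration of how this kind of composable moderation works, the sketch below applies per-user label preferences to decide whether a post is shown, shown behind a warning or hidden, and lets the user switch a labeler off entirely. The label names, preference values and function are invented for illustration and are not Bluesky’s actual AT Protocol implementation.

```python
# Illustrative sketch of label-based, user-configurable moderation.
# Labels are applied by a labeling service; each user maps labels to an action.

SHOW, WARN, HIDE = "show", "warn", "hide"

default_prefs = {
    "adult-content": WARN,
    "hate-speech": HIDE,
    "politics": SHOW,
}

def resolve(post_labels, prefs, labeler_enabled=True):
    """Return the strictest action requested by any label applied to the post."""
    if not labeler_enabled:          # users can simply turn a labeler off
        return SHOW
    severity = {SHOW: 0, WARN: 1, HIDE: 2}
    action = SHOW
    for label in post_labels:
        candidate = prefs.get(label, SHOW)
        if severity[candidate] > severity[action]:
            action = candidate
    return action

print(resolve({"politics"}, default_prefs))                 # show
print(resolve({"adult-content"}, default_prefs))            # warn
print(resolve({"hate-speech", "politics"}, default_prefs))  # hide
print(resolve({"hate-speech"}, default_prefs, False))       # show (labeler disabled)
```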

For some, this level of agency is appealing. However, most users never change the defaults on a given app or piece of technology: what they are looking for is relief from the drama, chaos and perceived ideological misalignment of other spaces. They are drawn not to “composable moderation” or “federated governance” — many, in fact, seem not to fully understand what they portend — but to the vibes of the instance. They want platforms to “compete on service and respect,” even as the large platforms, ref-worked by politicians with regulatory cudgels, would like nothing more than to stop making moderation calls as quickly as possible. Bluesky, on a mission to build a protocol that will ultimately render centralized moderation largely moot, has nonetheless had to quickly quadruple the size of its moderation team as users have flooded in.

And this is why it’s important to understand that the migration away from centralized refs comes with very real trade-offs. Without centralized governance, there is no single authority to mediate systemic issues or consistently enforce rules. Decentralization places a heavy burden on individual instance administrators, mostly volunteers, who may lack the tools, time or capacity to address complex problems effectively. 

Some of my own work, for example, has focused on the significant challenge of addressing even explicitly illegal content — child exploitation imagery — on the fediverse. Most servers run by volunteers are ill-equipped to deal with these issues, exposing administrators to legal liability and leaving users vulnerable. Fragmented enforcement leaves gaps that bad actors, including state-sponsored manipulators and spammers, can exploit with relative impunity.

Identity verification is another weak point, leading to impersonation risks that centralized platforms typically manage more effectively. Inconsistent security practices between servers can allow malicious actors to exploit weaker links. Professionalized companies (like Meta, which operates Threads) have experience managing some of these problems, but they require an economic incentive to participate.

While federation offers users more autonomy and fosters diversity, it makes it significantly harder to combat systemic harms or coordinate responses to threats like disinformation, harassment or exploitation. Moreover, because server administrators can only moderate locally — for example, they can only hide content on the server they operate — posts from one server can spread across the network onto others, with little recourse.

Posts promoting harmful pseudoscience (“drinking bleach cures autism”) or doxxing can persist unchecked on some servers, even if others reject or block the content. People who have become convinced that “moderation is censorship” may feel that this is an unmitigated win, but users across the political spectrum have consistently expressed a desire for platforms to address fake accounts and false or violent content. 

Beyond the challenges of addressing illegal or harmful content, the Great Decentralization raises deeper questions about social cohesion: Will the fragmentation of platforms exacerbate ideological silos and further erode the shared spaces needed for consensus and compromise?

Our communication spaces shape our norms and politics. The very tools that now directly empower users to curate their feeds and block unwanted content may also amplify divisions or reduce exposure to differing perspectives. Community-created blocklists, while useful for targeted groups seeking to avoid trolls, are blunt instruments. A wayward comment, a missed joke or personal animus on the part of a list creator can cast a wide, isolating net; people with nuanced views on contentious issues like abortion policy may self-censor to avoid being “mislabeled” and excluded.
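A toy example of why shared blocklists cast such a wide net: a subscriber inherits the union of every list they follow, so one curator’s over-broad entry silently blocks an account for everyone downstream. The list names and accounts below are invented.

```python
# Toy model of shared blocklist subscriptions. Subscribing to a list applies
# every entry in it; a single curator's wide net becomes everyone's wide net.

blocklists = {
    "anti-troll": {"troll_account_1", "troll_account_2"},
    "topic-x-critics": {"troll_account_2", "nuanced_journalist"},  # over-broad entry
}

def effective_blocks(subscriptions):
    blocked = set()
    for name in subscriptions:
        blocked |= blocklists[name]   # union of every subscribed list
    return blocked

my_blocks = effective_blocks(["anti-troll", "topic-x-critics"])
print("nuanced_journalist" in my_blocks)  # True: blocked by inheritance, not by choice
```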

“Without centralized governance, there is no single authority to mediate systemic issues or consistently enforce rules.”

Recent events on Bluesky illustrate these challenges. In mid-December, tensions erupted on the platform over the sudden presence of a prominent journalist and podcaster who writes about trans healthcare in ways that some of the vocal trans users on the platform considered harmful. In response, tens of thousands of users proactively blocked the account they perceived as problematic (blocks are public on Bluesky). Community labelers enabled users to hide his posts. Shared blocklists proliferated, including some that enabled users to mass-block followers of the controversial commentator. Journalists, many of whom follow people they do not personally agree with, commented that they were getting caught up in the wide net; to mitigate this, users in the community suggested that they create “alt” accounts to avoid sending an unwanted signal.

Shareable blocklists, however expansive, are tools designed to empower users. Yet a portion of the community was not satisfied with the tools. Instead, it began to ref-work the head of trust and safety on Bluesky, who was deluged with angry demands for a top-down response, including via a petition to ban the objectionable journalist. The journalist, in turn, also contacted the mods — he himself was on the receiving end of threatening language and doxing. The drama highlights the tension between the increased potential for users to act to protect their own individual spaces, and the persistent desire to have centralized referees act on a community’s behalf. And, unfortunately, it illustrates the challenges of moderating a large community with comparatively limited resources.

The idealistic goal of federalism in the American experiment was to maintain the nation’s unity while enabling local control of local issues. The digital version of this, however, seems to be a devolution, a retreat into separate spaces that may perhaps increase satisfaction within each outpost but does little to bridge ties, restore mutual norms or diminish animosity across groups. What happens when divergent norms grow so distinct that we can no longer even see or engage with each other’s conversations? The challenge of consensus is no longer simply difficult, it is structurally reinforced.

What’s Ahead

Whether you like or dislike them, centralized models of top-down policy and enforcement have defined the social media experience on large platforms like Facebook, Twitter and YouTube for two decades. As Nilay Patel of “The Verge” put it, content moderation is “the product” of these platforms: The decisions made by moderation teams shape not only what users see but how safe or threatened they feel. These policies have had profound effects, not only on societal phenomena like democracy and community cohesion but also on individual users’ sense of well-being. If the Great Decentralization continues, that experience will change.

While centralized governance on platforms like Twitter and Facebook became a highly politicized front in the culture war, it’s worth asking whether the system was truly broken. Centralized moderation, despite being imperfect, expensive and opaque, nonetheless offered articulated rules, sophisticated technology and professional enforcement teams. Criticism of these systems frequently stemmed from their lack of transparency or occasional high-profile errors, which fueled perceptions of bias and dissatisfaction.

This legitimation crisis eventually tipped the scales from voice to exit — and now, the shaping of a new online commons presents both a challenge and an opportunity. Yes, there is the potential for truly democratic online spaces free from the misaligned incentives that have, thus far, defined the platform-user relationship. But realizing such spaces will take significant work. 

There is also the looming question of economics. Federated alternatives must be financially sustainable if they intend to persist. Right now, Bluesky is primarily fueled by venture capital; it has broached having paid subscriptions and features in the future. But if the last two decades of social media experimentation have taught us anything, it’s that economic incentives inevitably have an outsized impact on governance and user experience.

“What happens when divergent norms grow so distinct that we can no longer even see or engage with each other’s conversations?”

Technologists (myself included) love to talk about faster innovation, better privacy, and more granular user control as the future of social media. But that’s not what most people think about. Most users just want good services, minimal risks to their well-being and a generally positive, entertaining environment. Ironically, these are the end states moderation has attempted to deliver. The argument that the downsides of social media participation — disinformation, doxxing and harassment — are emblematic of the triumph of “free speech” has been roundly rejected; very few users actually spend time on “absolutist” anything-goes communities; 8chan, for example, was never widely popular. And yet, our inability to agree on shared norms and values, both online and off, is pushing us apart into distinct online spaces.

Users who are drawn to Bluesky are gravitating to the culture of the main instance, which feels a bit like Old Twitter circa 2014 — a simpler, less toxic time. They crave a return to a less divisive and nasty American society. This longing reflects a deeper truth: online platforms don’t just mirror our offline values; they actively influence them. 

Federated platforms will give us the freedom to curate our online experience, and to create communities where we feel comfortable. They represent more than a technological shift — they’re an opportunity for democratic renewal in the digital public sphere. By returning governance to users and communities, they have the potential to rebuild trust and legitimacy in ways that centralized platforms no longer can. However, they also run the risk of further splintering our society, as users abandon those shared spaces where broader social cohesion may be forged.

The Great Decentralization is a digitalized reflection of our polarized politics that, going forward, will also shape them.

The New Media Goliaths
Noema Magazine | June 1, 2023 | https://www.noemamag.com/the-new-media-goliaths

One of the more remarkable artifacts of late-stage social media is the indelible presence of a particular character: the persecution profiteer. They are nearly unavoidable on Twitter: massive accounts with hundreds of thousands to millions of followers, beloved by the recommendation engine and often heavily monetized across multiple platforms, where they rail against the corporate media, Big Tech and elites. Sometimes, the elites have supposedly silenced them; sometimes, they’ve supposedly oppressed you — perhaps both. But either way, manipulation is supposedly everywhere, and they are supposedly getting to the bottom of it.

Many of these polemicists rely on a thinly veiled subtext: They are scrappy truth-tellers, citizen-journalist Davids, exposing the propaganda machine of the Goliaths. That subtext may have been true in last century’s media landscape, when independent media fought for audience scraps left by hardy media behemoths with unassailable gatekeeping power. But that all changed with the collapse of mass media’s revenue model and the rise of a new elite: the media-of-one. 

The transition was enabled by tech but realized by entrepreneurs. Platforms like Substack, Patreon and OnlyFans offered infrastructure and monetization services to a galaxy of independent creators — writers, podcasters and artists — while taking a cut of their revenue. Many of these creators adopted the mantle of media through self-declaration and branding, redefining the term and the industry. Many were very talented. More importantly, however, they understood that creating content for a niche — connecting with a very specific online audience segment — offered a path to attention, revenue and clout. In the context of political content in particular, the media-of-one creators offered their readers an editorial page, staffed with one voice and absent the rest of the newspaper. 

The rise of a profitable niche media ecosystem with a reach commensurate with mass media has been a boon for creators and consumers alike. YouTube, Instagram and TikTok have enabled sponsorships and ad-revenue sharing for quite some time — spawning a generation of influencers — but patronage opened additional paths to success. A tech blogger can start a podcast about Web3 with no infrastructural outlay, reaching their audience in a new medium. A Substack newsletter devoted to political history can amass thousands of subscribers, charge $5 a month and deliver its author a salary of up to seven figures. Pop culture pundits can earn a living producing content on Patreon, and web-cam adult performers can do the same on OnlyFans. Even Twitter has launched subscriptions.

Whatever the kink — from nudes to recipes to conspiracy theories — consumers can find their niche, sponsor it and share its output. This ecosystem has given rise to people with millions of followers, who shape the culture and determine what the public talks about each day.  

Well, their public, anyway. 

The Rise Of Niche Propaganda

Like the media, the public has increasingly fragmented. The internet enabled the flourishing of a plethora of online subcultures and communities: an archipelago of bespoke and targetable realities. Some of the most visible are defined by their declining trust in mass media and institutions. Recognizing the opportunity, a proliferation of media-of-one outlets have spun up to serve them.

In fact, the intersection of a burgeoning niche media ecosystem and a factionalized public has transformed precisely the type of content that so concerns the persecution profiteers: propaganda. Propaganda is information with an agenda, delivered to susceptible audiences to serve the objectives of the creator. Anyone so inclined can set up an account and target an audience, producing spin to fit a preferred ideological agenda. Those who achieve a degree of success are often increasingly cozy with politicians and billionaire elites who hold the levers of power and help advance shared agendas. In fact, the niche propagandists increasingly have an advantage over the Goliaths they rail against. They innately understand the modern communication ecosystem on which narratives travel and know how to leverage highly participatory, activist social media fandoms to distribute their messages; institutions and legacy media typically do not. 

Although the mechanics of who can spread propaganda, and how, have shifted significantly over the last two decades, public perception of the phenomenon has not. People discussing concerns about propaganda on social media frequently reference the idea of a powerful cabal composed of government, media and institutional authorities, manipulating the public into acquiescing to an elite-driven agenda. This misperception comes in large part from popular understanding of a theory presented by Noam Chomsky and Edward Herman in their 1988 book, “Manufacturing Consent: The Political Economy of the Mass Media.” 

“Manufacturing Consent” proposed a rather insidious process by which even a free press, such as that of the United States, filters the information that reaches the public by way of self-censorship and selective framing. Even without the overt state control of media present in authoritarian regimes, Chomsky and Herman argued, American media elites are influenced by access, power and money as they decide what is newsworthy — and thus, determine what reaches the public. Chomsky and Herman identified five factors, “five filters” — ownership, advertising, sourcing, catching flak, and fear — that comprised a system of incentives that shaped media output. 

Media “ownership” (the first filter) was expensive, requiring licenses and distribution technology — and so, the ecosystem was owned by a small cadre of the wealthy who often had other financial and political interests that colored coverage. Second, advertising meant that media was funded by ad dollars, which incentivized it to attract mainstream audiences that advertisers wanted and to avoid topics — say, critiques of the pharmaceutical industry — that might alienate them. Third, “sourcing” — picking experts to feature — let media elevate some perspectives while gatekeeping others. Fourth, fear of catching “flak” motivated outlets to avoid diverging from approved narratives, which might spark lawsuits or boycotts. And finally, “fear” highlighted the media’s capacity to cast people in the role of “worthy” or “unworthy” victims based on ideology. 

Throughout the 20th century, Chomsky and Herman argued, these incentives converged to create a hegemonic media that presented a filtered picture of reality. Media’s self-interest directly conflicted with the public interest — a problem for a democratic society that relied on the media to become informed. 

But legacy media is now only half the story, and the Goliaths are no longer so neatly distinguished. Technology reduced costs and eliminated license requirements, while platform users themselves became distributors via the Like and Share buttons. Personalized ad targeting enabled inclined individuals to amass large yet niche audiences who shared their viewpoints. The new elites, many of whom have millions of followers, are equally capable of “manufacturing consent,” masquerading as virtuous truth-tellers even as they, too, filter their output in accordance with their incentives.

However, something significant has changed: Rather than persuading a mass audience to align with a nationally oriented hegemonic point of view — Chomsky’s concern in the 1980s — the niche propagandists activate and shape the perception of niche audiences. The propaganda of today entrenches fragmented publics in divergent factional realities, with increasingly little bridging the gaps. 

“Positioning of niche media as a de facto wholesome antithesis to the ‘mainstream propaganda machine’ — Davids fighting Goliaths — is a marketing ploy.”

From Five Filters To Four Fire Emojis

As technology evolved and media and the public splintered, the five filters mutated. A different system of incentives drives the niche media Goliaths — we might call it the “four fire emoji” model of propaganda, in homage to Substack’s description of criteria it used to identify writers most likely to find success on its platform. 🔥🔥🔥🔥

In its early days of operation, Substack, which takes 10% of each subscription, reached out to media personalities and writers from traditional outlets, offering them an advance to start independent newsletters. To assess who might be a good investment, the company ranked writers from one to four fire emojis, depending on their social media engagement. Someone with a large, highly engaged following was more likely to parlay that attention into success on Substack. There is no algorithmic curation or ads; each post by the author of a newsletter is sent to the inbox of all subscribers. Substack describes its platform as a “new economic engine for culture,” arguing that authors might be less motivated to replicate the polarization of social media if they are paid directly for their work.

But the four fire emoji rubric inadvertently lays bare the existential drive of niche media: the need to capture attention above all else, as technology has driven the barrier to entry toward zero and the market is flooded with strivers. Getting attention on social media often involves seizing attention, through sensationalism and moral outrage. Niche media must convert that attention into patronage. A passionate and loyal fandom is critical to success because the audience facilitates virality, which delivers further attention, which can be parlayed into clout and money.

There is little incentive to appeal to everyone. In a world where attention is scarce, the political media-of-one entrepreneurs, in particular, are incentivized to filter what they cover and to present their thoughts in a way that galvanizes the support of those who will boost them — humans and algorithms alike. They are incentivized to divide the world into worthy and unworthy victims. 

In other words, they are incentivized to become propagandists. And many have. 

“It seems likely that at least some of the audience believes that they have escaped propaganda and exited the Matrix, without realizing that they are simply marinating in a different flavor.”

Consider a remarkable viral story from January 2023. Right-wing commentator Steven Crowder published a video accusing a major conservative news outlet (later revealed to be The Daily Wire) of offering him a repressive contract — a “slave contract,” as he put it, that would penalize him if the content he produced was deemed ineligible to monetize by major platforms like YouTube. “I believe that many of those in charge in the right-leaning media are actually at odds with what’s best for you,” he told his nearly 6 million YouTube subscribers. Audiences following along on Twitter assigned the scandal a hashtag: #BigCon. 

Underlying the drama was classic subtext: Crowder, the David, pitted against conservative media Goliaths. And yet, the contract Crowder derided as indentured servitude would have paid him $50 million.

Sustaining attention in a highly competitive market practically requires that niche propaganda be hyper-adversarial, as often as possible. The rhetorical style is easily recognizable: They are lying to you, while I have your best interests at heart. 

As it turns out, perpetual aggrievement at elites and the corporate profiteering media can be quite lucrative. On Substack, pseudoscience peddler Joseph Mercola touts his “Censored Library” to tens of thousands of paid subscribers at $5/month, revealing “must-read information” that the medical establishment purportedly hides from the public. Several prominent vaccine skeptics — who regularly post about how censored they are — are also high on the Substack leaderboard and in the tens-of-thousands-of-paid-subscribers club.

Matt Taibbi, a longtime journalist who’s also a lead Substack writer, devotes many posts to exposing imaginary cabals for an audience that grew significantly after billionaire Elon Musk gave him access to company emails and other internal documents. His successful newsletter solicited additional contributors: “Freelancers Wanted: Help Knock Out the Mainstream Propaganda Machine.” The patrons of particular bespoke realities reward the writers with page views and subscriber dollars; prominent members of political parties cite the work or move it through the broader partisan media ecosystem.

“The manufacture of consent is thriving within each niche.”

It is an objectively good thing that the five filter model is increasingly obsolete. Reducing the barriers to ownership, in particular, enabled millions of voices to enter the market and speak to the public. But the positioning of niche media as a de facto wholesome antithesis to the “mainstream propaganda machine” — Davids fighting Goliaths — is a marketing ploy. The four fire emoji model simply incentivizes a more factional, niche propaganda. 

Since the model relies on patronage, rather than advertising, the new propagandists are incentivized to tell their audiences what they want to hear. They are incentivized to increase the fracturing of the public and perpetuate the crisis of trust, in order to ensure that their niche audience continues to pay them, rather than one of their nearest neighbors (or, God forbid, a mainstream outlet). Subscribers don’t have unlimited funds; they will pick a handful of creators to support, and the rest will struggle. 

As attention and trust have fragmented, “sourcing” has also reoriented to ensure that writers feature people who are approved within the bespoke reality they target; for example, there are several different universes of COVID experts at this point. “Flak” is now a veritable gift: Rather than being afraid of it, the patronage propagandists are incentivized to court it. Attacks from ideological outsiders are a boon: “Subscribe to help us fight back!” So much of the media-of-one content is defined by what it is in opposition to — otherwise, it loses the interest of its audience. Partisan outlets have long played the fear game, as Chomsky pointed out in the 1980s, encouraging hatred of the other side — but now, the “unworthy victim” is your neighbor, who may have only moderately different political leanings.

The Effect: Lost Consensus, Endless Hostility

The devolution of propaganda into niches has deep and troubling implications for democratic society and social cohesion. It was Walter Lippmann, a journalist and early scholar of propaganda, who coined the phrase “the manufacture of consent” in 1922, using it to describe a process by which leaders and experts worked alongside media to inform the public about topics they did not have the time or capacity to understand. The premise was paternalistic at best.

However, Lippmann also had reservations about the extent to which “the public” existed; the idea of an omnicompetent, informed citizenry powering functional democracy was an illusion, he believed, and the “public” a phantom. People, Lippmann wrote, “live in the same world, but they think and feel in different ones.” Propaganda was manipulative, even damaging and sinister, Lippmann thought, but he also believed that the manufacture of consent was to some extent necessary for democratic governance, in order to bridge divides that might otherwise render democracy dysfunctional. 

Lippmann’s intellectual rival on the topics of propaganda, the public and democracy was the eminent philosopher John Dewey. Unlike Lippmann, Dewey believed “the public” did exist. It was complicated, it was chaotic — but it was no phantom. Dewey also rightly bristled at the idea of a chosen few wielding propaganda to shape public opinion; he saw it as an affront to true democracy. Instead, Dewey saw the press — when operating at its best — as a tool for informing and connecting the public, enabling people to construct a shared reality together.       

Though at odds in many respects, both Lippmann and Dewey acknowledged the challenges of a fractured public. The two men saw a dissonant public as both a natural state and as a barrier to a functioning, safe and prosperous society. Though they differed greatly in their proposed approaches, they agreed on the need to create harmony from that dissonance.     

One hundred years later, both approaches seem like an impossibility. It is unclear what entities, or media, can bridge a fragmented, polarized, distrustful public. The incentives are driving niche media in the opposite direction.

“Perhaps by highlighting the new incentives that shape the media-of-one ecosystem, we may reduce the public’s susceptibility to the propaganda it produces.”

The propagandists of today are not incentivized to create the overarching hegemonic national narrative that Chomsky and Herman feared. Rather, their incentives drive them to reinforce their faction’s beliefs, often at the expense of others. Delegitimization of outside voices is a core component of their messaging: The “mainstream” media is in cahoots with the government and Big Tech to silence the people, while the media-of-one are independent free-thinkers, a disadvantaged political subclass finally given access to a megaphone … though in many cases, they have larger audiences and far larger incomes. It seems likely that at least some of the audience believes that they have escaped propaganda and exited the Matrix, without realizing that they are simply marinating in a different flavor.

We should not glorify the era of a consolidated handful of media properties translating respectable institutional thinking for the masses — consolidated narrative control enables lies and deception. But rather than entering an age of “global public squares” full of deliberative discourse and constructive conversation, we now have gladiatorial arenas in which participants in niche realities do battle. Our increasingly prominent medias-of-one can’t risk losing the attention game in the weeds of nuance. We have a proliferation of irreconcilable understandings of the world and no way of bridging them. The internet didn’t eliminate the human predilection for authority figures or informed interpretations of facts and narratives — it just democratized the ability to position oneself in the role. The manufacture of consent is thriving within each niche. 

“Manufacturing Consent” ended with an optimistic take: that what was then a burgeoning cable media ecosystem would lead to more channels with varying perspectives, a recognition that truly independent and non-corporate media does exist and that it would find ways to be heard. But Chomsky and Herman also cautioned that if the public wants a news media that serves its interests rather than the interests of the powerful, it must go find it. Propaganda systems are demonstrably effective precisely because breaking free of such a filtered lens requires work. Perhaps by articulating to today’s public how the system has shifted and highlighting the new incentives that shape the media-of-one ecosystem, we may reduce the public’s susceptibility to the propaganda it produces.

How Online Mobs Act Like Flocks Of Birds
Noema Magazine | November 3, 2022 | https://www.noemamag.com/how-online-mobs-act-like-flocks-of-birds

Renée DiResta is an associate research professor at the McCourt School of Public Policy at Georgetown.

You’ve probably seen it: a flock of starlings pulsing in the evening sky, swirling this way and that, feinting right, veering left. The flock gets denser, then sparser; it moves faster, then slower; it flies in a beautiful, chaotic concert, as if guided by a secret rhythm.

Biology has a word for this undulating dance: “murmuration.” In a murmuration, each bird sees, on average, the seven birds nearest it and adjusts its own behavior in response. If its nearest neighbors move left, the bird usually moves left. If they move right, the bird usually moves right. The bird does not know the flock’s ultimate destination and can make no radical change to the whole. But each of these birds’ small alterations, when occurring in rapid sequence, shift the course of the whole, creating mesmerizing patterns. We cannot quite understand it, but we are awed by it. It is a logic that emerges from — is an embodiment of — the network. The behavior is determined by the structure of the network, which shapes the behavior of the network, which shapes the structure, and so on. The stimulus — or information — passes from one organism to the next through this chain of connections.

While much is still mysterious and debated about the workings of murmurations, computational biologists and computer scientists who study them describe what is happening as “the rapid transmission of local behavioral response to neighbors.” Each animal is a node in a system of influence, with the capacity to affect the behavior of its neighbors. Scientists call this process, in which groups of disparate organisms move as a cohesive unit, collective behavior. The behavior is derived from the relationship of individual entities to each other, yet only by widening the aperture beyond individuals do we see the entirety of the dynamic.

Online Murmurations

A growing body of research suggests that human behavior on social media — coordinated activism, information cascades, harassment mobs — bears striking similarity to this kind of so-called “emergent behavior” in nature: occasions when organisms like birds or fish or ants act as a cohesive unit, without hierarchical direction from a designated leader. How that local response is transmitted — how one bird follows another, how I retweet you and you retweet me — is also determined by the structure of the network. For birds, signals along the network are passed from eyes or ears to brains pre-wired at birth with the accumulated wisdom of the millennia. For humans, signals are passed from screen to screen, news feed to news feed, along an artificial superstructure designed by humans but increasingly mediated by at-times-unpredictable algorithms. It is curation algorithms, for example, that choose what content or users appear in your feed; the algorithm determines the seven birds, and you react.
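The “seven nearest birds” rule is, at heart, a local-alignment update of the kind used in boids-style flocking simulations. The toy model below, with arbitrary parameters, has each agent nudge its heading toward the average heading of its seven nearest neighbors; in the social media analogy, it is the ranking algorithm that decides which seven neighbors each of us sees.

```python
# Toy murmuration: each bird aligns its heading with its 7 nearest neighbors.
# Flock size, step size and alignment rate are arbitrary illustration values.
import math
import random

N, K, ALIGN = 50, 7, 0.3
birds = [{"x": random.random(), "y": random.random(),
          "angle": random.uniform(0, 2 * math.pi)} for _ in range(N)]

def step(birds):
    new_angles = []
    for b in birds:
        neighbors = sorted(
            (o for o in birds if o is not b),
            key=lambda o: (o["x"] - b["x"]) ** 2 + (o["y"] - b["y"]) ** 2
        )[:K]                                   # the "seven birds" it can see
        mean_x = sum(math.cos(o["angle"]) for o in neighbors) / K
        mean_y = sum(math.sin(o["angle"]) for o in neighbors) / K
        target = math.atan2(mean_y, mean_x)     # average neighbor heading
        # Nudge toward the neighbors' heading (angle wrap-around ignored for brevity).
        new_angles.append(b["angle"] + ALIGN * (target - b["angle"]))
    for b, a in zip(birds, new_angles):
        b["angle"] = a
        b["x"] += 0.01 * math.cos(a)            # small move along the new heading
        b["y"] += 0.01 * math.sin(a)

for _ in range(100):
    step(birds)
```

No bird knows the destination; the flock-level pattern emerges entirely from these local adjustments.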

Our social media flocks first formed in the mid ‘00s, as the internet provided a new topology of human connection. At first, we ported our real, geographically constrained social graphs to nascent online social networks. Dunbar’s Number held — we had maybe 150 friends, probably fewer, and we saw and commented on their posts. However, it quickly became a point of pride to have thousands of friends, then thousands of followers (a term that conveys directional influence in its very tone). The friend or follower count was prominently displayed on a user’s profile, and a high number became a heuristic for assessing popularity or importance. “Friend” became a verb; we friended not only our friends, but our acquaintances, their friends, their friends’ acquaintances.

“The behavior is determined by the structure of the network, which shapes the behavior of the network, which shapes the structure, and so on.”

The virtual world was unconstrained by the limits of physical space or human cognition, but it was anchored to commercial incentives. Once people had exhaustively connected with their real-world friend networks, the platforms were financially incentivized to help them find whole new flocks in order to maximize the time they spent engaged on site. Time on site meant a user was available to be served more ads; activity on site enabled the gathering of more data, the better to infer a user’s preferences in order to serve them just the right content — and the right ads. People You May Know recommendation algorithms nudged us into particular social structures, doing what MIT network researcher Sinan Aral calls the “closing of triangles”: suggesting that two people with a mutual friend in common should be connected themselves.
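“Closing triangles” is simply friend-of-friend recommendation: rank the people you are not yet connected to by how many mutual connections you share. A minimal sketch, with an invented friendship graph:

```python
# Minimal "People You May Know" sketch: rank non-friends by mutual friends.
# The friendship graph here is invented for illustration.

friends = {
    "alice": {"bob", "carol"},
    "bob": {"alice", "dave"},
    "carol": {"alice", "dave"},
    "dave": {"bob", "carol", "erin"},
    "erin": {"dave"},
}

def people_you_may_know(user):
    candidates = {}
    for friend in friends[user]:
        for fof in friends[friend]:
            if fof != user and fof not in friends[user]:
                candidates[fof] = candidates.get(fof, 0) + 1  # count shared friends
    return sorted(candidates, key=candidates.get, reverse=True)

print(people_you_may_know("alice"))  # ['dave'] — closes two triangles at once
```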

Eventually, even this friend-of-friending was tapped out, and the platforms began to create friendships for us out of whole cloth, based on a combination of avowed, and then inferred, interests. They created and aggressively promoted Groups, algorithmically recommending that users join particular online communities based on a perception of statistical similarity to other users already active within them.

This practice, called collaborative filtering, combined with the increasing algorithmic curation of our ranked feeds to usher in a new era. Similarity to other users became a key determinant in positioning each of us within networks that ultimately determined what we saw and who we spoke to. These foundational nudges, borne of commercial incentives, had significant unintended consequences at the margins that increasingly appear to contribute to perennial social upheaval.
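Collaborative filtering can be sketched just as simply: recommend the groups joined by the users whose memberships most resemble yours. The users, groups and similarity measure below are invented for illustration, but they show how interest in one community statistically pulls a user toward adjacent ones.

```python
# Toy collaborative filtering for group recommendations: suggest groups joined
# by the users most similar to you, weighted by membership overlap.

memberships = {
    "user_a": {"anti-vaccine", "chemtrails"},
    "user_b": {"chemtrails", "flat-earth"},
    "user_c": {"flat-earth", "pizzagate"},
    "me":     {"anti-vaccine"},
}

def similarity(a, b):
    """Jaccard overlap between two users' group memberships."""
    return len(a & b) / len(a | b)

def recommend_groups(user, k=2):
    scores = {}
    for other, groups in memberships.items():
        if other == user:
            continue
        sim = similarity(memberships[user], groups)
        for g in groups - memberships[user]:
            scores[g] = scores.get(g, 0) + sim   # weight by how similar the member is
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend_groups("me"))  # e.g. ['chemtrails', ...] — similarity pulls you along
```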

One notable example in the United States is the rise of the QAnon movement over the past few years. In 2015, recommendation engines had already begun to connect people interested in just about any conspiracy theory — anti-vaccine interests, chemtrails, flat earth — to each other, creating a sort of inadvertent conspiracy correlation matrix that cross-pollinated members of distinct alternate universes. A new conspiracy theory, Pizzagate, emerged during the 2016 presidential campaign, as online sleuths combed through a GRU hack of the Clinton campaign’s emails and decided that a Satanic pedophile cabal was holding children in the basement of a DC pizza parlor.

At the time, I was doing research into the anti-vaccine movement and received several algorithmic recommendations to join Pizzagate groups. Subsequently, as QAnon replaced Pizzagate, the highly active “Q research” groups were, in turn, recommended to believers in the prior pantheon of conspiracy theories. QAnon became an omni-conspiracy, an amoeba that welcomed believers and “researchers” of other movements and aggregated their esoteric concerns into a Grand Unified Theory. 

After the nudges to assemble into flocks come the nudges to engage — “bait,” as the Extremely Online call it. Twitter’s Trending Topics, for example, will show a nascent “trend” to someone inclined to be interested, sometimes even if the purported trend is, at the time, more of a trickle — fewer than, say, 2,000 tweets. But that act, pushing something into the user’s field of view, has consequences: the Trending Topics feature not only surfaces trends, it shapes them. The provocation goes out to a small subset of people inclined to participate. The user who receives the nudge clicks in, perhaps posts their own take — increasing the post count, signaling to the algorithm that the bait was taken and raising the topic’s profile for their followers. Their post is now curated into their friends’ feeds; they are one of the seven birds their followers see. Recurring frenzies take shape among particular flocks, driving the participants mad with rage even as very few people outside of the community have any idea that anything has happened. Marx is trending for you, #ReopenSchools for me, #transwomenaremen for the Libs Of TikTok set. The provocation is delivered, a few more birds react to what’s suddenly in their field of view, and the flock follows, day in and day out.
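The feedback loop described above can be reduced to a few lines: a topic that crosses a low threshold is surfaced to users inclined to care, some of them take the bait and post, and their posts push the count higher, which keeps the topic surfaced. The threshold, audience size and response rate below are made-up numbers, not Twitter’s actual logic.

```python
# Toy model of the trend feedback loop: surfacing drives engagement,
# and engagement keeps the topic surfaced. All numbers are illustrative.
import random

tweet_count = 1500            # a "trickle," below mass visibility
THRESHOLD = 1000              # low bar for showing the topic to inclined users
TAKE_THE_BAIT = 0.2           # chance a nudged user posts their own take

for hour in range(12):
    if tweet_count >= THRESHOLD:
        nudged_users = 5000                      # shown the nascent "trend"
        new_posts = sum(random.random() < TAKE_THE_BAIT for _ in range(nudged_users))
        tweet_count += new_posts                 # the nudge becomes the trend
    print(hour, tweet_count)
```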

“Trying to litigate rumors and fact-check conspiracy theories is a game of whack-a-mole that itself has negative political consequences.”

Eventually, perhaps, an armed man decides to “liberate” a DC pizza parlor, or a violent mob storms a nation’s capitol. Although mainstream tech platforms now act to disrupt the groups most inclined to harassment and violence — as they did by taking down QAnon groups and shutting down tens of thousands of accounts after the January 6th insurrection — the networks they nudged into existence have by this point solidified into online friendships and comradeships spanning several years. The birds scatter when moderation is applied, but quickly re-congregate elsewhere, as flocks do.

Powerful economic incentives determined the current state of affairs. And yet, the individual user is not wholly passive — we have agency and can decide not to take the bait. We often deploy the phrase “it went viral” to describe our online murmurations. It’s a deceptive phrase that eliminates the how and thus absolves the participants of all responsibility. A rumor does not simply spread — it spreads because we spread it, even if the system is designed to capture attention and encourage that spread.

Old Phenomenon, New Consequences

We tend to think of what we see cascading across the network — the substance, the specific claims — as the problem. Much of it is old phenomena manifesting in new ways: rumors, harassment mobs, disinformation, propaganda. But it carries new consequences, in large part because of the size and speed of the networks across which it moves. In the 1910s, a rumor might have stayed confined to a village or town. In the 1960s, it might have percolated across television programs, if it could get past powerful gatekeepers. Now, in the 2020s, it moves through a murmuration of millions, trends on Twitter and is picked up by 24/7 mass media. 

“We shape our tools, and thereafter they shape us,” argued Father John Culkin, a contemporary and friend of media theorist Marshall McLuhan. Theorists like Culkin and McLuhan — working in the 1960s, when television had seemingly upended the societal order — operated on the premise that a given technological system engendered norms. The system, the infrastructure itself, shaped society, which shaped behavior, which shaped society. The programming — the substance, the content — was somewhat secondary. 

This thinking progressed, spanning disciplines, with a sharpening focus on curation’s role in an information system then composed of print, radio and the newest entrant, television. In a 1971 talk, Herbert Simon, a professor of computer science and organizational psychology, attempted to reckon with the problem that the broadcast-era information glut created: attention scarcity. His paper is perhaps most famous for this passage:

In an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.

Most of the cost of information is not incurred by the producers, Simon argues, but by the recipients. The solution? Content curation — a system that, as he put it, “listens and thinks more than it speaks,” one that treats curation as the withholding of useless bait so that a recipient’s attention is not wasted flitting from one silly provocation to another.

I dug up the conference proceedings where Simon presented this argument. They include a discussion of the paper in which Simon’s colleagues responded to his theory, making arguments nearly identical to those of today. Karl Deutsch, then a professor of government at Harvard, expressed apprehension about curation, or “filtering,” as a solution to information glut — it might neglect to surface “uncongenial information,” in favor of showing the recipient only things they would receive favorably, leading to bad policy creation or suboptimal organizational behavior. Martin Shubik, then a professor of economics at Yale, tried to differentiate between data and information — is what we are seeing of value? From what was then the nascent ability of computers to play chess, he extrapolated the idea that information processing systems might eventually facilitate democracy. “Within a few years it may be possible to have a virtually instant referendum on many political issues,” he said. “This could represent a technical triumph — and a social disaster if instability resulted from instantaneous public reaction to incompletely understood affairs magnified by quick feedback.”

Though spoken half a century ago, the phrase encapsulates the dynamics of where we find ourselves today: “a technical triumph, and a social disaster.”

Simon, Deutsch and Shubik were discussing one of social media’s biggest fixations more than a decade before Mark Zuckerberg was even born. Content curation — deciding what information reaches whom — is complicated, yet critical. In the age of social media, however, conversations about this challenge have largely devolved into controversies about a particular form of reactive curation: content moderation, which attempts to sift the “good” from the “bad.” Today, the distributed character of the information ecosystem ensures that so-called “bad” content can emerge from anywhere and “go viral” at any time, with each individual participating user shouldering only a faint sliver of responsibility. A single re-tweet or share or like is individually inconsequential, but the murmuration may be collectively disastrous as it shapes the behavior of the network, which shapes the structure of the network, which shapes the behavior.

Substance As The Red Herring

In truth, the overwhelming majority of platform content moderation is dedicated to unobjectionable work: protecting children from porn, eliminating fraud and spam. However, since curation organizes and then directs the attention of the flock, the fight over the contested remainder is of great political importance because of its potential downstream impact on real-world power. And so, we have reached a point at which the conversation about what to do about disinformation, rumors, hate speech and harassment mobs is, itself, intractably polarized.

But the daily aggrievement cycles about individual pieces of content being moderated or not are a red herring. We are treating the worst dynamics of today’s online ecosystem as problems of speech in the new technological environment, rather than challenges of curation and network organization.

“We don’t know enough about how people believe and act together as groups.”

This overfocus on the substance — misinformation, disinformation, propaganda — and the fight over content moderation (and regulatory remedies like revising Section 230) makes us miss opportunities to examine the structure — and, in turn, to address the polarization, factional behavior and harmful dynamics that it sows.

So what would a structural reworking entail? How many birds should we see? Which birds? When?

First, it entails diverging from The Discourse of the past several years. Significant and sustained attention to the downsides of social media, including from Congressional leaders, began in 2017, but the idea that “it’s the design, stupid” never gained much currency in the public conversation. Some academic researchers and activist groups, such as the Center for Humane Technology, argued that recommender systems, nudges and attention traps seemed to be leading to Bad Things, but they had little in the way of evidence. We have more of that now, including from whistleblowers, independent researchers and journalists. At the time, though, the immediacy of some of the harms, from election interference to growing evidence of a genocide in Myanmar, suggested a need for quick solutions, not system-wide interrogations.

There was only minimal access to data for platform outsiders. Calls to reform the platforms turned primarily to arguments for either dismantling them (antitrust) or creating accountability via a stronger content moderation regime (the myriad of disjointed calls to reform 230 from both Republicans and Democrats). Since 2017, however, Congressional lawmakers have broached a few bills but accomplished very little. Hyperpartisans now fundraise off of public outrage; some have made being “tough on Big Tech” a key plank of their platform for years now, while delivering little beyond soundbites that can themselves be digested on Twitter Trending Topics.

Tech reformation conversations today remain heavily focused on content moderation of the substance, now framed as “free speech vs. censorship” — a simplistic debate that goes nowhere, while driving daily murmurations of outrage. Trying to litigate rumors and fact-check conspiracy theories is a game of whack-a-mole that itself has negative political consequences. It attempts to address bad viral content — the end state — while leaving in place the network structures and nudges that facilitate its reach.

More promising ideas are emerging. On the regulatory front, there are bills that mandate transparency, like the Platform Accountability and Transparency Act (PATA), in order to grant visibility into what is actually happening at the network level and better differentiate between real harm and moral panic. At present, data access into these critical systems of social connection and communication is granted entirely at the beneficence of the owner, and owners may change. More visibility into the ways in which networks are brought together, and the ways in which their attention is steered, could give rise to far more substantive debates about what categories of online behavior we seek to promote or prevent. For example, transparency into how QAnon communities formed might have allowed us to better understand the phenomenon — perhaps in time to mitigate some of its destructive effects on its adherents, or to prevent offline violence.

But achieving real, enforceable transparency laws will be challenging. Understandably, social media companies are loath to admit outside scrutiny of their network structures. In part, platforms avoid transparency because it offers few immediately tangible benefits but several potential drawbacks, including negative press coverage or criticism in academic research. In part, this is because of that foundational business incentive that keeps the flocks in motion: if my system produces more engagement than yours, I make more money. And, on the regulatory front, there is the simple reality that tough-on-tech language about revoking legal protections or breaking up businesses grabs attention; far fewer people get amped up over transparency.

“This overfocus on the substance makes us miss opportunities to examine the structure — and, in turn, to address the polarization, factional behavior and harmful dynamics that it sows.”

Second, we must move beyond thinking of platform content moderation policies as “the solution” and prioritize rethinking design. Policy establishes guardrails and provides justification to disrupt certain information cascades, but does so reactively and, presently, based on the message substance. Although policy shapes propagation, it does so by serving as a limiter on certain topics or types of rhetoric. Design, by contrast, has the potential to shape propagation through curation, nudges or friction.

For example, Twitter might choose to eliminate its Trending feature entirely, or in certain geographies during sensitive moments like elections — or it might, at a minimum, limit nudges to surfacing actual large-scale or regional trends, not small-scale ragebait. Instagram might enact a maximum follower count. Facebook might introduce more friction into its Groups, allowing only a certain number of users to join a specific Group within a given timeframe. These interventions are substance-agnostic and proactive rather than reactive.
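For a sense of how mechanical such friction can be, here is a minimal sketch of a rate limiter on Group joins — the cap and window are invented for illustration, and this is not Facebook's implementation:

```python
# A sketch of substance-agnostic friction: cap how many members may join a
# group per window, regardless of what the group is about. Numbers invented.
import time
from collections import deque

class GroupJoinLimiter:
    def __init__(self, max_joins=100, window_seconds=3600):
        self.max_joins = max_joins
        self.window = window_seconds
        self.joins = deque()          # timestamps of recent joins

    def try_join(self, now=None):
        """Return True if the join is allowed, False if it must wait."""
        now = time.time() if now is None else now
        while self.joins and now - self.joins[0] > self.window:
            self.joins.popleft()      # discard joins outside the window
        if len(self.joins) >= self.max_joins:
            return False              # friction: ask the user to try again later
        self.joins.append(now)
        return True

limiter = GroupJoinLimiter(max_joins=3, window_seconds=60)
print([limiter.try_join(now=t) for t in (0, 1, 2, 3, 61)])
# -> [True, True, True, False, True]
```

Nothing in the limiter inspects what anyone said; it only slows how fast a flock can assemble.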

In the short term, design interventions might be a self-regulatory endeavor — something platforms enact in good faith or to fend off looming, more draconian legislation. Here, too, however, we are confronted by the incentives: the design shapes the system and begets the behavior, but if the resulting behavior includes less time on site, less active flocks, less monetization, well…the incentives that run counter to that have won out for years now.

To complement policy and design, to reconcile these questions, we need an ambitious, dedicated field of study focused on the emergence and influence of collective beliefs that traces threads between areas like disinformation, extremism, and propaganda studies, and across disciplines including communication, information science, psychology, and sociology. We presently don’t know enough about how people believe and act together as groups, or how beliefs can be incepted, influenced or managed by other people, groups or information systems.

Studies of emergent behavior among animals show that there are certain networks that are simply sub-optimal in their construction — networks that lead schools/hives/flocks to collapse, starve or die. Consider the ant mill, or “death spiral,” in which a collection of army ants lose the pheromone track by which they navigate and begin to follow each other in an endless spiral, walking in circles until they eventually die of exhaustion. While dubbing our current system of communications infrastructure and nudges a “death spiral” may seem theatrical, there are deep, systemic and dangerous flaws embedded in the structure’s DNA.

Indeed, we are presently paying down the debt incurred by the bad design decisions of the past. The networks designed years ago — when amoral recommendation engines suggested, for example, that anti-vaccine activists might like to join QAnon communities — created real ties. They made suggestions and changed how we interact; the flocks surrounding us became established. Even as we rethink and rework recommendations and nudges, repositioning the specific seven birds in the field of view, the flocks from which we can choose are already formed — and some are toxic. We may, at this point, be better served as a society by starting from scratch and making a mass exodus from the present ecosystem into something entirely new: Web3 or the metaverse, perhaps, if they materialize; new apps, if all of that turns out to be vaporware.

But if starting from scratch isn’t an option, we might draw on work from computational biology and complex systems to re-envision our social media experience in a more productive, content-agnostic way. We might re-evaluate how platforms connect their users, and the factors that determine what recommenders and curation algorithms push into our field of view — considering the combination of structure (network), substance (rhetoric, or emotional connotation) and incentives that shapes information cascades. This could have a far greater impact than battling over content moderation as a path toward constructing a healthier information ecosystem. Our online murmurations can be more like the starlings’ — chaotic, yes, but also elegant — and less like the toxic emergent behaviors we have fallen into today.
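As a purely illustrative sketch — the feature names and weights below are invented, not a description of any real recommender — such a re-envisioned ranking might weigh structural and substantive signals alongside raw engagement:

```python
# A speculative re-ranking sketch: score candidate posts on structure and
# substance, not just predicted engagement. All features/weights are invented.
from dataclasses import dataclass

@dataclass
class Candidate:
    predicted_engagement: float    # incentive signal, 0..1
    source_cluster_overlap: float  # structure: how saturated the viewer's flock already is, 0..1
    outrage_score: float           # substance: estimated emotional provocation, 0..1

def rank_score(c: Candidate) -> float:
    # Downweight pure engagement, penalize ragebait, and favor items from
    # outside the viewer's densest cluster (varying the "seven birds" in view).
    return (0.3 * c.predicted_engagement
            - 0.4 * c.outrage_score
            + 0.3 * (1 - c.source_cluster_overlap))

candidates = [
    Candidate(0.9, 0.95, 0.8),   # viral ragebait already saturating the flock
    Candidate(0.5, 0.30, 0.1),   # calmer post from outside the usual cluster
]
print(max(candidates, key=rank_score))  # the calmer, out-of-cluster post wins
```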
