The Politics Of Superintelligence

The machines are coming for us, or so we’re told. Not today, but soon enough that we must seemingly reorganize civilization around their arrival. In boardrooms, lecture theatres, parliamentary hearings and breathless tech journalism, the specter of superintelligence increasingly haunts our discourse. It’s often framed as “artificial general intelligence,” or “AGI,” and sometimes as something still more expansive, but always as an artificial mind that surpasses human cognition across all domains, capable of recursive self-improvement and potentially hostile to human survival. But whatever it’s called, this coming superintelligence has colonized our collective imagination.

The scenario echoes the speculative lineage of science fiction, from Isaac Asimov’s “Three Laws of Robotics” — a literary attempt to constrain machine agency — to later visions such as Stanley Kubrick and Arthur C. Clarke’s HAL 9000 or the runaway networks of William Gibson. What was once the realm of narrative thought-experiment now serves as a quasi-political forecast.

This narrative has very little to do with any scientific consensus, emerging instead from particular corridors of power. The loudest prophets of superintelligence are those building the very systems they warn against. When Sam Altman speaks of artificial general intelligence’s existential risk to humanity while simultaneously racing to create it, or when Elon Musk warns of an AI apocalypse while founding companies to accelerate its development, we’re seeing politics masked as predictions.

The superintelligence discourse functions as a sophisticated apparatus of power, transforming immediate questions about corporate accountability, worker displacement, algorithmic bias and democratic governance into abstract philosophical puzzles about consciousness and control. This sleight of hand is neither accidental nor benign. By making hypothetical catastrophe the center of public discourse, architects of AI systems have positioned themselves as humanity’s reluctant guardians, burdened with terrible knowledge and awesome responsibility. They have become indispensable intermediaries between civilization and its potential destroyer, a role that, coincidentally, requires massive capital investment, minimal regulation and concentrated decision-making authority.

Consider how this framing operates. When we debate whether a future artificial general intelligence might eliminate humanity, we’re not discussing the Amazon warehouse worker whose movements are dictated by algorithmic surveillance or the Palestinian whose neighborhood is targeted by automated weapons systems. These present realities dissolve into background noise against the rhetoric of existential risk. Such suffering is actual, while the superintelligence remains theoretical, but our attention and resources — and even our regulatory frameworks — increasingly orient toward the latter as governments convene frontier-AI taskforces and draft risk templates for hypothetical future systems. Meanwhile, current labor protections and constraints on algorithmic surveillance remain tied to legislation that is increasingly inadequate.

In the U.S., Executive Order 14110 on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” mentions civil rights, competition, labor and discrimination, but it creates its most forceful accountability obligations for large, high-capability foundation models and future systems trained above certain compute thresholds, requiring firms to share technical information with the federal government and demonstrate that their models stay within specified safety limits. The U.K. has gone further still, building a Frontier AI Taskforce — now absorbed into the AI Security Institute — whose mandate centers on extreme, hypothetical risks. And even the EU’s AI Act, which does attempt to regulate present harms, devotes a section to systemic and foundation-model risks anticipated at some unknown point in the future. Across these jurisdictions, the political energy clusters around future, speculative systems.

Artificial superintelligence narratives perform very intentional political work, drawing attention from present systems of control toward distant catastrophe, shifting debate from material power to imagined futures. Predictions of machine godhood reshape how authority is claimed and whose interests steer AI governance, muting the voices of those who suffer under algorithms and amplifying those who want extinction scenarios to dominate the conversation. What poses as neutral futurism functions instead as an intervention in today’s political economy. Seen clearly, the prophecy of superintelligence is less a warning about machines than a strategy for power, and that strategy needs to be recognized for what it is. The power of this narrative draws from its history.

Bowing At The Altar Of Rationalism

Superintelligence as a dominant AI narrative predates ChatGPT and can be traced back to the peculiar marriage of Cold War strategy and computational theory that emerged in the 1950s. The RAND Corporation, an archetypal think tank where nuclear strategists gamed out humanity’s destruction, provided the conceptual nursery for thinking about intelligence as pure calculation, divorced from culture or politics.

“Whatever it’s called, this coming superintelligence has colonized our collective imagination.”

The early AI pioneers inherited this framework, and when Alan Turing proposed his famous test, he deliberately sidestepped questions of consciousness or experience in favor of observable behavior — if a machine could convince a human interlocutor of its humanity through text alone, it deserved the label “intelligent.” This behaviorist reduction would prove fateful, as in treating thought as quantifiable operations, it recast intelligence as something that could be measured, ranked and ultimately outdone by machines.

The computer scientist John von Neumann, as recalled by mathematician Stanislaw Ulam in 1958, spoke of a technological “singularity” in which accelerating progress would one day mean that machines could improve their own design, rapidly bootstrapping themselves to superhuman capability. This notion, refined by mathematician Irving John Good in the 1960s, established the basic grammar of superintelligence discourse: recursive self-improvement, exponential growth and the last invention humanity would ever need to make. These were, of course, mathematical extrapolations rather than empirical observations, but such speculations and thought experiments were repeated so frequently that they acquired the weight of prophecy, helping to make the imagined future they described look self-evident.

The 1980s and 1990s saw these ideas migrate from computer science departments to a peculiar subculture of rationalists and futurists centered around figures like the AI researcher Eliezer Yudkowsky and his Singularity Institute (later the Machine Intelligence Research Institute). This community built a dense theoretical framework for superintelligence: utility functions, the formal goal systems meant to govern an AI’s choices; the paperclip maximizer, a thought experiment where a trivial objective drives a machine to consume all resources; instrumental convergence, the claim that almost any ultimate goal leads an AI to seek power and resources; and the orthogonality thesis, which holds that intelligence and moral values are independent. They created a scholastic philosophy for an entity that didn’t exist, complete with careful taxonomies of different types of AI take-off scenarios and elaborate arguments about acausal trade between possible future intelligences.

What united these thinkers was a shared commitment to a particular style of reasoning. They practiced what might be called extreme rationalism, the belief that pure logic, divorced from empirical constraint or social context, could reveal fundamental truths about technology and society. This methodology privileged thought experiments over data and clever paradoxes over mundane observation, and the result was a body of work that read like medieval theology, brilliant and intricate, but utterly disconnected from the actual development of AI systems. It should be acknowledged that this disconnection did not make their efforts worthless: by pushing abstract reasoning to its limits, they clarified questions of control, ethics and long-term risk that later informed more grounded discussions of AI policy and safety.

The contemporary incarnation of this tradition found its most influential expression in Nick Bostrom’s 2014 book “Superintelligence,” which transformed fringe internet philosophy into mainstream discourse. Bostrom, a former Oxford philosopher, gave academic respectability to scenarios that had previously lived in science fiction and posts on blogs with obscure titles. His book, despite containing no technical AI research and precious little engagement with actual machine learning, became required reading in Silicon Valley, often cited by tech billionaires. Musk once tweeted: “Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.” Musk is right to counsel caution, as evidenced by the 1,200 to 2,000 tons of nitrogen oxides and hazardous air pollutants like formaldehyde that his own artificial intelligence company expels into the air in Boxtown, a working-class, largely Black community in Memphis.

This commentary shouldn’t be seen as an attempt to diminish Bostrom’s achievement, which was to take the sprawling, often incoherent fears about AI and organize them into a rigorous framework. But his book sometimes reads like a natural history project, in which he categorizes different routes to superintelligence and different “failure modes,” ways such a system might go wrong or destroy us, as well as solutions to “control problems,” schemes proposed to keep it aligned. This taxonomic approach made even wild speculation appear scientific. By treating superintelligence as an object of systematic study rather than a science fiction premise, Bostrom laundered existential risk into respectable discourse.

The effective altruism (EA) movement supplied the social infrastructure for these ideas. Its core principle is to maximize long-term good through rational calculation. Within that worldview, superintelligence risk fits neatly, for if future people matter as much as present ones, and if a small chance of global catastrophe outweighs ongoing harms, then preventing AI apocalypse becomes the top priority. On that logic, hypothetical future lives eclipse the suffering of people living today.

“The loudest prophets of superintelligence are those building the very systems they warn against.”

This did not stay an abstract argument: philanthropists identifying with effective altruism channeled significant funding into AI safety research, and money shapes what researchers study. Organizations aligned with effective altruism have been established in universities and policy circles, publishing reports and advising governments on how to think about AI. The U.K.’s Frontier AI Taskforce has included members with documented links to the effective altruism movement, and commentators argue that these connections help channel EA-style priorities into government AI risk policy.

Effective altruism encourages its proponents to move into public bodies and major labs, creating a pipeline of staff who carry these priorities into decision-making roles. Jason Matheny, former director of the Intelligence Advanced Research Projects Activity, a U.S. government agency that funds high-risk, high-reward research to improve intelligence gathering and analysis, has described how effective altruists can “pick low-hanging fruit within government positions” to exert influence. Superintelligence discourse isn’t spreading because experts broadly agree it is our most urgent problem; it spreads because a well-resourced movement has given it money and access to power.

This is not to deny the merits of engaging with the ideals of effective altruism or with the concept of superintelligence as articulated by Bostrom. The problem is how readily those ideas become distorted once they enter political and commercial domains. This intellectual genealogy matters because it reveals superintelligence discourse as a cultural product, ideas that moved beyond theory into institutions, acquiring funding and advocates. And its emergence was shaped within institutions committed to rationalism over empiricism, where individual genius was fetishized over collective judgment, and technological determinism was prioritized over social context.

Entrepreneurs Of The Apocalypse

The transformation of superintelligence from internet philosophy to boardroom strategy represents one of the most successful ideological campaigns of the 21st century. Tech executives who had previously focused on quarterly earnings and user growth metrics began speaking like mystics about humanity’s cosmic destiny, and this conversion reshaped the political economy of AI development.

OpenAI, founded in 2015 as a non-profit dedicated to ensuring artificial intelligence benefits humanity, exemplifies this transformation. It has since evolved into a peculiar hybrid, a capped-profit company controlled by a non-profit board, valued by some estimates at $500 billion, racing to build the very artificial general intelligence it warns might destroy us. This structure, byzantine in its complexity, makes perfect sense within the logic of superintelligence. If AGI represents both ultimate promise and existential threat, then the organization building it must be simultaneously commercial and altruistic, aggressive and cautious, public-spirited yet secretive.

Sam Altman, OpenAI’s CEO, has perfected the rhetorical stance of the reluctant prophet. In Congressional testimony, blog posts and interviews, he warns of AI’s dangers while insisting on the necessity of pushing forward. “Our mission is to ensure that AGI (Artificial General Intelligence) benefits all of humanity,” he wrote on his blog earlier this year. The argument has a distinct “we must build AGI before someone else does, because we’re the only ones responsible enough to handle it” feel to it. Altman seems determined to position OpenAI as humanity’s champion, bearing the terrible burden of creating God-like intelligence so that it might be restrained.

Still, OpenAI is also seeking a profit. And that is really what all this is about — profit. Superintelligence narratives carry staggering financial implications, justifying astronomical valuations for companies that have yet to show consistent paths to self-sufficiency. But if you’re building humanity’s last invention, perhaps normal business metrics become irrelevant. This eschatological framework explains why Microsoft would invest $13 billion in OpenAI, why venture capitalists pour money into AGI startups and why the market treats large language models like ChatGPT as precursors to omniscience.

Anthropic, founded by former OpenAI executives, positions itself as the “safety-focused” alternative, raising billions by promising to build AI systems that are “helpful, honest and harmless.” But it’s all just elaborate safety theatre, as harm has no genuine place in the competition between OpenAI, Anthropic, Google DeepMind and others — the true contest is in who gets to build the best, most profitable models and how well they can package that pursuit in the language of caution.

This dynamic creates a race to the bottom of responsibility, with each company justifying acceleration by pointing to competitors who might be less careful: The Chinese are coming, so if we slow down, they’ll build unaligned AGI first. Meta is releasing models as open source without proper safeguards. What if some unknown actor hits upon the next breakthrough first? This paranoid logic forecloses any possibility of genuine pause or democratic deliberation. Speed becomes safety, and caution becomes recklessness.

“[Sam] Altman seems determined to position OpenAI as humanity’s champion, bearing the terrible burden of creating God-like intelligence so that it might be restrained.”

The superintelligence frame reshapes internal corporate politics, as AI safety teams, often staffed by believers in existential risk, provide moral cover for rapid development, absorbing criticism that might otherwise target business practices and reinforcing the idea that these companies are doing world-saving work. If your safety team publishes papers about preventing human extinction, routine regulation begins to look trivial.

The well-publicized drama at OpenAI in November 2023 illuminates these dynamics. When the company’s board attempted to fire Sam Altman over concerns about his candor, the resulting chaos revealed underlying power relations. Employees, who had been recruited with talk of saving humanity, threatened mass defection if their CEO wasn’t reinstated — does their loyalty to Altman outweigh their quest to save the rest of us? Microsoft, despite having no formal control over the OpenAI board, exercised decisive influence as the company’s dominant funder and cloud provider, offering to hire Altman and any staff who followed him. The board members, who thought honesty an important trait in a CEO, resigned, and Altman returned triumphant.

Superintelligence rhetoric serves power, but it is set aside when it clashes with the interests of capital and control. Microsoft has invested billions in OpenAI and implemented its models in many of its commercial products. Altman wants rapid progress, so Microsoft wants Altman. His removal put Microsoft’s whole AI business trajectory at risk. The board was swept aside because it tried, as was its remit, to constrain OpenAI’s CEO. Microsoft’s leverage ultimately determined the outcome, and employees followed suit. It was never about saving humanity; it was about profit.

The entrepreneurs of the AI apocalypse have discovered a perfect formula. By warning of existential risk, they position themselves as indispensable. By racing to build AGI, they justify the unlimited use of resources. And by claiming unique responsibility, they deflect democratic oversight. The future becomes a hostage to present accumulation, and we’re told we should be grateful for such responsible custodians.

Superintelligence discourse actively constructs the future. Through constant repetition, speculative scenarios acquire the weight of destiny. This process — the manufacture of inevitability — reveals how power operates through prophecy.

Consider the claim that artificial general intelligence will arrive within five to 20 years. Across many sources, this prediction is surprisingly stable. But since at least the mid-20th century, researchers and futurists have repeatedly promised human-level AI “in a couple of decades,” only for the horizon to continuously slip. The persistence of that moving window serves a specific function: it’s near enough to justify immediate massive investment while far enough away to defer necessary accountability. It creates a temporal framework within which certain actions become compulsory regardless of democratic input.

This rhetoric of inevitability pervades Silicon Valley’s discussion of AI. AGI is coming whether we like it or not, executives declare, as if technological development were a natural force rather than a human choice. This naturalization of progress obscures the specific decisions, investments and infrastructures that make certain futures more likely than others. When tech leaders say we can’t stop progress, what they mean is, you can’t stop us.

Media amplification plays a crucial role in this process, as every incremental improvement in large language models gets framed as a step towards AGI. ChatGPT writes poetry; surely consciousness is imminent. Claude solves coding problems; the singularity is near. Such accounts, often sourced from the very companies building these systems, create a sense of momentum that becomes self-fulfilling. Investors invest because AGI seems near, researchers join companies because that’s where the future is being built and governments defer regulation because they don’t want to handicap their domestic champions.

The construction of inevitability also operates through linguistic choices. Notice how quickly “artificial general intelligence” replaced “artificial intelligence” in public discourse, as if the general variety were a natural evolution rather than a specific and contested concept, and how “superintelligence” — or whatever term the concept eventually assumes — then appears as the seemingly inevitable next rung on that ladder. Notice how “alignment” — ensuring AI systems do what humans want — became the central problem, assuming both that superhuman AI will exist and that the challenge is technical rather than political.

Notice how “compute,” which basically means computational power, became a measurable resource like oil or grain, something to be stockpiled and controlled. This semantic shift matters because language shapes possibility. When we accept that AGI is inevitable, we stop asking whether it should be built, and in the furor, we miss that we seem to have conceded that a small group of technologists should determine our future.

“When we accept that AGI is inevitable, we stop asking whether it should be built, and in the furor, we miss that we seem to have conceded that a small group of technologists should determine our future.”

When we simultaneously treat compute as a strategic resource, we further normalize the concentration of power in the hands of those who control data centers, and that control, as the failed ousting of Altman demonstrates, grants still more power to this chosen few.

Academic institutions, which are meant to resist such logics, have been conscripted into this manufacture of inevitability. Universities, desperate for industry funding and relevance, establish AI safety centers and existential risk research programs. These institutions, putatively independent, end up reinforcing industry narratives, producing papers on AGI timelines and alignment strategies, lending scholarly authority to speculative fiction. Young researchers, seeing where the money and prestige lie, orient their careers toward superintelligence questions rather than present AI harms.

International competition adds further to the apparatus of inevitability. The “AI arms race” between the United States and China is framed in existential terms: whoever builds AGI first will achieve permanent geopolitical dominance. This neo-Cold War rhetoric forecloses possibilities for cooperation, regulation or restraint, making racing toward potentially dangerous technology seem patriotic rather than reckless. National security becomes another trump card against democratic deliberation.

The prophecy becomes self-fulfilling through material concentration — as resources flow towards AGI development, alternative approaches to AI starve. Researchers who might work on explainable AI or AI for social good instead join labs focused on scaling large language models. The future narrows to match the prediction, not because the prediction was accurate, but because it commanded resources.

In financial terms, it is a heads-we-win, tails-you-lose arrangement: If the promised breakthroughs materialize, private firms and their investors keep the upside, but if they stall or disappoint, the sunk costs in energy-hungry data centers and retooled industrial policy sit on the public balance sheet. An entire macro-economy is being hitched to a story whose basic physics we do not yet understand.

We must recognize this process as political, not technical. The inevitability of superintelligence is manufactured through specific choices about funding, attention and legitimacy, and different choices would produce different futures. The fundamental question isn’t whether AGI is coming, but who benefits from making us believe it is.

The Abandoned Present

While we fixate on hypothetical machine gods, actual AI systems reshape human life in profound and often harmful ways. The superintelligence discourse distracts from these immediate impacts; one might even say it legitimizes them. After all, if we’re racing towards AGI to save humanity, what’s a little collateral damage along the way?

Consider labor, that fundamental human activity through which we produce and reproduce our world. AI systems already govern millions of workers’ days through algorithmic management. In Amazon warehouses, workers’ movements are dictated by handheld devices that calculate optimal routes, monitor bathroom breaks and automatically fire those who fall behind pace. While the cultural conversation around automation often emphasizes how it threatens to replace human labor, for many, automation is already actively degrading their profession. Many workers have become appendages to the algorithm, executing tasks the machine cannot yet perform while being measured and monitored by computational systems.

Frederick Taylor, the American mechanical engineer and author of “The Principles of Scientific Management,” is famous for his efforts to engineer maximum efficiency through rigid control of labor. What we have today is a form of tech-mediated Taylorism wherein work is broken into tiny, optimized motions, with every movement monitored and timed, just with management logic encoded in software rather than stopwatches. Taylor’s logic has been operationalized far beyond what he could have imagined. But when we discuss AI and work, the conversation immediately leaps to whether AGI will eliminate all jobs, as if the present suffering of algorithmically managed workers were merely a waystation to obsolescence.

The content moderation industry exemplifies this abandoned present. Hundreds of thousands of workers, primarily in the Global South, spend their days viewing the worst content humanity produces — including child abuse and sexual violence — to train AI systems to recognize and filter such material. These workers, paid a fraction of what their counterparts in Silicon Valley earn, suffer documented psychological trauma from their work. They’re the hidden labor force behind “AI safety,” protecting users from harmful content while being harmed themselves. But their suffering rarely features in discussions of AI ethics, which focus instead on preventing hypothetical future harms from superintelligent systems.

Surveillance represents another immediate reality obscured by futuristic speculation. AI systems enable unprecedented tracking of human behavior. Facial recognition identifies protesters and dissidents. Predictive policing algorithms direct law enforcement to “high-risk” neighborhoods that mysteriously correlate with racial demographics. Border control agencies use AI to assess asylum seekers’ credibility through voice analysis and micro-expressions. Social credit systems score citizens’ trustworthiness using algorithms that analyze their digital traces.

“An entire macro-economy is being hitched to a story whose basic physics we do not yet understand.”

These aren’t speculative technologies; they are real systems that are already deployed, and they don’t require artificial general intelligence, just pattern matching at scale. But the superintelligence discourse treats surveillance as a future risk — what if an AGI monitored everyone? — rather than a present reality. This temporal displacement serves power, because it’s easier to debate hypothetical panopticons than to dismantle actual ones.

Algorithmic bias pervades critical social infrastructures, amplifying and legitimizing existing inequalities by lending mathematical authority to human prejudice. The response from the AI industry? We need better datasets, more diverse teams and algorithmic audits — technical fixes for political problems. Meanwhile, the same companies racing to build AGI deploy biased systems at scale, treating present harm as acceptable casualties in the march toward transcendence. The violence is actual, but the solution remains perpetually deferred.

And beneath all of this, the environmental destruction accelerates as we continue to train large language models — a process that consumes enormous amounts of energy. When confronted with this ecological cost, AI companies point to hypothetical benefits, such as AGI solving climate change or optimizing energy systems. They use the future to justify the present, as though these speculative benefits should outweigh actual, ongoing damages. This temporal shell game, destroying the world to save it, would be comedic if the consequences weren’t so severe.

And just as it erodes the environment, AI also erodes democracy. Recommendation algorithms have long shaped political discourse, creating filter bubbles and amplifying extremism, but more recently, generative AI has flooded information spaces with synthetic content, making it impossible to distinguish truth from fabrication. The public sphere, the basis of democratic life, depends on people sharing enough common information to deliberate together.

When AI systems segment citizens into ever-narrower feeds, that shared space collapses. We no longer argue about the same facts because we no longer encounter the same world, but our governance discussions focus on preventing AGI from destroying democracy in the future rather than addressing how current AI systems undermine it today. We debate AI alignment while ignoring human alignment on key questions, like whether AI systems should serve democratic values rather than corporate profits. The speculative tyranny of superintelligence obscures the actual tyranny of surveillance capitalism.

Mental health impacts accumulate as humans adapt to algorithmic judgment. Social media algorithms, optimized for engagement, promote content that triggers anxiety, depression and eating disorders. Young people internalize algorithmic metrics — likes, shares, views — as measures of self-worth. The quantification of social life through AI systems produces new forms of alienation and suffering, but these immediate psychological harms pale beside imagined existential risks, receiving a fraction of the attention and resources directed toward preventing hypothetical AGI catastrophe.

Each of these present harms could be addressed through collective action. We could regulate algorithmic management, support content moderators, limit surveillance, audit biases, constrain energy use, protect democracy and prioritize mental health. These aren’t technical problems requiring superintelligence to solve; they’re just good old-fashioned political challenges demanding democratic engagement. But the superintelligence discourse makes such mundane interventions seem almost quaint. Why reorganize the workplace when work itself might soon be obsolete? Why regulate surveillance when AGI might monitor our thoughts? Why address bias when superintelligence might transcend human prejudice entirely?

The abandoned present is crowded with suffering that could be alleviated through human choice rather than machine transcendence, and every moment we spend debating alignment problems for non-existent AGI is a moment not spent addressing algorithmic harms affecting millions today. The future-orientation of superintelligence discourse isn’t just a distraction but an abandonment, a willful turning away from present responsibility toward speculative absolution.

Alternative Imaginaries For The Age Of AI

The dominance of superintelligence narratives obscures the fact that many other ways of doing AI exist, grounded in present social needs rather than hypothetical machine gods. These alternatives show that you do not have to join the race to superintelligence or renounce technology altogether. It is possible to build and govern automation differently now.

Across the world, communities have begun experimenting with different ways of organizing data and automation. Indigenous data sovereignty movements, for instance, have developed governance frameworks, data platforms and research protocols that treat data as a collective resource subject to collective consent. Organizations such as the First Nations Information Governance Centre in Canada and Te Mana Raraunga in Aotearoa insist that data projects, including those involving AI, be accountable to relationships, histories and obligations, not just to metrics of optimization and scale. Their projects offer working examples of automated systems designed to respect cultural values and reinforce local autonomy, a mirror image of the effective altruist impulse to abstract away from place in the name of hypothetical future people.

“The speculative tyranny of superintelligence obscures the actual tyranny of surveillance capitalism.”

Workers are also experimenting with different arrangements, and unions and labor organizations have negotiated clauses on algorithmic management, pushed for audit rights over workplace systems and begun building worker-controlled data trusts to govern how their information is used. These initiatives emerge from lived experience rather than philosophical speculation, from people who spend their days under algorithmic surveillance and are determined to redesign the systems that manage their existence. While tech executives are celebrated for speculating about AGI, workers who analyze the systems already governing their lives are still too easily dismissed as Luddites.

Similar experiments appear in feminist and disability-led technology projects that build tools around care, access and cognitive diversity, and in Global South initiatives that use modest, locally governed AI systems to support healthcare, agriculture or education under tight resource constraints. Degrowth-oriented technologists design low-power, community-hosted models and data centers meant to sit within ecological limits rather than override them. Such examples show how critique and activism can progress to action, to concrete infrastructures and institutional arrangements that demonstrate how AI can be organized without defaulting to the superintelligence paradigm, which demands that everyone else be sacrificed because a few tech bros believe they alone can see the greater good.

What unites these diverse imaginaries — Indigenous data governance, worker-led data trusts, and Global South design projects — is a different understanding of intelligence itself. Rather than picturing intelligence as an abstract, disembodied capacity to optimize across all domains, they treat it as a relational and embodied capacity bound to specific contexts. They address real communities with real needs, not hypothetical humanity facing hypothetical machines. Precisely because they are grounded, they appear modest when set against the grandiosity of superintelligence, but existential risk makes every other concern look small by comparison. You can predict the ripostes: Why prioritize worker rights when work itself might soon disappear? Why consider environmental limits when AGI is imagined as capable of solving climate change on demand?

These alternatives also illuminate the democratic deficit at the heart of the superintelligence narrative. Treating AI at once as an arcane technical problem that ordinary people cannot understand and as an unquestionable engine of social progress allows authority to consolidate in the hands of those who own and build the systems. Once algorithms mediate communication, employment, welfare, policing and public discourse, they become political institutions. The power structure is feudal, comprising a small corporate elite that holds decision-making power justified by special expertise and the imagined urgency of existential risk, while citizens and taxpayers are told they cannot grasp the technical complexities and that slowing development would be irresponsible in a global race. The result is learned helplessness, a sense that technological futures cannot be shaped democratically but must be entrusted to visionary engineers.

A democratic approach would invert this logic, recognizing that questions about surveillance, workplace automation, public services and even the pursuit of AGI itself are not engineering puzzles but value choices. Citizens do not need to understand backpropagation to deliberate on whether predictive policing should exist, just as they need not understand combustion engineering to debate transport policy. Democracy requires the right to shape the conditions of collective life, including the architectures of AI.

This could take many forms. Workers could participate in decisions about algorithmic management. Communities could govern local data according to their own priorities. Key computational resources could be owned publicly or cooperatively rather than concentrated in a few firms. Citizen assemblies could be given real authority over whether a municipality moves forward with contentious uses of AI, like facial recognition and predictive policing. Developers could be required to demonstrate safety before deployment under a precautionary framework. International agreements could set limits on the most dangerous areas of AI research. None of this is about whether AGI, or any other kind of superintelligence one can imagine, does or does not arrive; it’s simply about recognizing that the distribution of technological power is a political choice rather than an inevitable outcome.

“The real political question is not whether some artificial superintelligence will emerge, but who gets to decide what kinds of intelligence we build and sustain.”

The superintelligence narrative undermines these democratic possibilities by presenting concentrated power as a tragic necessity. If extinction is at stake, then public deliberation becomes a luxury we cannot afford. If AGI is inevitable, then governance must be ceded to those racing to build it. This narrative manufactures urgency to justify the erosion of democratic control, and what begins as a story about hypothetical machines ends as a story about real political disempowerment. This, ultimately, is the larger risk, that while we debate the alignment of imaginary future minds, we neglect the alignment of present institutions.

The truth is that nothing about our technological future is inevitable, other than the inevitability of further technological change. Change is certain, but its direction is not. We do not yet understand what kind of systems we are building, or what mix of breakthroughs and failures they will produce, and that uncertainty makes it reckless to funnel public money and attention into a single speculative trajectory.

Every algorithm embeds decisions about values and beneficiaries. The superintelligence narrative masks these choices behind a veneer of destiny, but alternative imaginaries — Indigenous governance, worker-led design, feminist and disability justice, commons-driven models, ecological constraints — remind us that other paths are possible and already under construction.

The real political question is not whether some artificial superintelligence will emerge, but who gets to decide what kinds of intelligence we build and sustain. And the answer cannot be left to the corporate prophets of artificial transcendence because the future of AI is a political field — it should be open to contestation. It belongs not to those who warn most loudly of gods or monsters, but to publics that should have the moral right to democratically govern the technologies that shape their lives.


The Last Days Of Social Media

At first glance, the feed looks familiar, a seamless carousel of “For You” updates gliding beneath your thumb. But déjà‑vu sets in as 10 posts from 10 different accounts carry the same stock portrait and the same breathless promise — “click here for free pics” or “here is the one productivity hack you need in 2025.” Swipe again and three near‑identical replies appear, each from a pout‑filtered avatar directing you to “free pics.” Between them sits an ad for a cash‑back crypto card.

Scroll further and recycled TikTok clips with “original audio” bleed into Reels on Facebook and Instagram; AI‑stitched football highlights showcase players’ limbs bending like marionettes. Refresh once more, and the woman who enjoys your snaps of sushi rolls has seemingly spawned five clones.

Whatever remains of genuine, human content is increasingly sidelined by algorithmic prioritization, receiving fewer interactions than the engineered content and AI slop optimized solely for clicks. 

These are the last days of social media as we know it.

Drowning The Real

Social media was built on the romance of authenticity. Early platforms sold themselves as conduits for genuine connection: stuff you wanted to see, like your friend’s wedding and your cousin’s dog.

Even influencer culture, for all its artifice, promised that behind the ring‑light stood an actual person. But the attention economy, and more recently, the generative AI-fueled late attention economy, have broken whatever social contract underpinned that illusion. The feed no longer feels crowded with people but crowded with content. At this point, it has far less to do with people than with consumers and consumption.

In recent years, Facebook and other platforms that facilitate billions of daily interactions have slowly morphed into the internet’s largest repositories of AI‑generated spam. Research has found what users plainly see: tens of thousands of machine‑written posts now flood public groups — pushing scams, chasing clicks — with clickbait headlines, half‑coherent listicles and hazy lifestyle images stitched together in AI tools like Midjourney.

It’s all just vapid, empty shit produced for engagement’s sake. Facebook is “sloshing” in low-effort AI-generated posts, as Arwa Mahdawi notes in The Guardian; some even bolstered by algorithmic boosts, like “Shrimp Jesus.”

Human and synthetic content are becoming increasingly difficult to tell apart, and platforms seem unable to police the difference, or uninterested in trying. Earlier this year, Reddit CEO Steve Huffman pledged to “keep Reddit human,” a tacit admission that floodwaters were already lapping at the last high ground. TikTok, meanwhile, swarms with AI narrators presenting concocted news reports and “what‑if” histories. A few creators do append labels disclaiming that their videos depict “no real events,” but many creators don’t bother, and many consumers don’t seem to care.

The problem is not just the rise of fake material, but the collapse of context and the acceptance that truth no longer matters as long as our cravings for colors and noise are satisfied. Contemporary social media content is more often rootless, detached from cultural memory, interpersonal exchange or shared conversation. It arrives fully formed, optimized for attention rather than meaning, producing a kind of semantic sludge, posts that look like language yet say almost nothing. 

We’re drowning in this nothingness.

The Bot-Girl Economy

If spam (AI or otherwise) is the white noise of the modern timeline, its dominant melody is a different form of automation: the hyper‑optimized, sex‑adjacent human avatar. She appears everywhere, replying to trending tweets with selfies, promising “funny memes in bio” and linking, inevitably, to OnlyFans or one of its proxies. Sometimes she is real. Sometimes she is not. Sometimes she is a he, sitting in a compound in Myanmar. Increasingly, it makes no difference.

This convergence of bots, scammers, brand-funnels and soft‑core marketing underpins what might be called the bot-girl economy, a parasocial marketplace fueled in large part by economic precarity. At its core is a transactional logic: Attention is scarce, intimacy is monetizable and platforms generally won’t intervene so long as engagement stays high. As more women turn to online sex work, many men are eager to pay them for their services. And as these workers try to cope with the precarity imposed by platform metrics and competition, some can spiral, forever downward, into a transactional attention-to-intimacy logic that eventually turns them into more bot than human. To hold attention, some creators increasingly opt to behave like algorithms themselves, automating replies, optimizing content for engagement, or mimicking affection at scale. The distinction between performance and intention must surely erode as real people perform as synthetic avatars and synthetic avatars mimic real women.

There is loneliness, desperation and predation everywhere.

“Genuine, human content is increasingly sidelined by algorithmic prioritization, receiving fewer interactions than the engineered content and AI slop optimized solely for clicks.”

The bot-girl is more than just a symptom; she is a proof of concept for how social media bends even aesthetics to the logic of engagement. Once, profile pictures (both real and synthetic) aspired to hyper-glamor, unreachable beauty filtered through fantasy. But that fantasy began to underperform as average men sensed the ruse, recognizing that supermodels typically don’t send them DMs. And so, the system adapted, surfacing profiles that felt more plausible, more emotionally available. Today’s avatars project a curated accessibility: They’re attractive but not flawless, styled to suggest they might genuinely be interested in you. It’s a calibrated effect, just human enough to convey plausibility, just artificial enough to scale. She has to look more human to stay afloat, but act more bot to keep up. Nearly everything is socially engineered for maximum interaction: the like, the comment, the click, the private message.

Once seen as the fringe economy of cam sites, OnlyFans has become the dominant digital marketplace for sex workers. In 2023, the then-seven-year-old platform generated $6.63 billion in gross payments from fans, with $658 million in profit before tax. Its success has bled across the social web; platforms like X (formerly Twitter) now serve as de facto marketing layers for OnlyFans creators, with thousands of accounts running fan-funnel operations, baiting users into paid subscriptions. 

The tools of seduction are also changing. One 2024 study estimated that thousands of X accounts use AI to generate fake profile photos. Many content creators have also begun using AI for talking-head videos, synthetic voices or endlessly varied selfies. Content is likely A/B tested for click-through rates. Bios are written with conversion in mind. DMs are automated or outsourced to AI impersonators. For users, the effect is a strange hybrid of influencer, chatbot and parasitic marketing loop. One minute you’re arguing politics, the next, you’re being pitched a girlfriend experience by a bot. 

Engagement In Freefall

While content proliferates, engagement is evaporating. Average interaction rates across major platforms are declining fast: Facebook and X posts now scrape an average 0.15% engagement, while Instagram has dropped 24% year-on-year. Even TikTok has begun to plateau. People aren’t connecting or conversing on social media like they used to; they’re just wading through slop, that is, low-effort, low-quality content produced at scale, often with AI, for engagement.

And much of it is slop: Less than half of American adults now rate the information they see on social media as “mostly reliable” — down from roughly two-thirds in the mid-2010s. Young adults register the steepest collapse, which is unsurprising; as digital natives, they better understand that the content they scroll past wasn’t necessarily produced by humans. And yet, they continue to scroll.

The timeline is no longer a source of information or social presence, but more of a mood-regulation device, endlessly replenishing itself with just enough novelty to suppress the anxiety of stopping. Scrolling has become a form of ambient dissociation, half-conscious, half-compulsive, closer to scratching an itch than seeking anything in particular. People know the feed is fake, they just don’t care. 

Platforms have little incentive to stem the tide. Synthetic accounts are cheap, tireless and lucrative because they never demand wages or unionize. Systems designed to surface peer-to-peer engagement are now systematically filtering out such activity, because what counts as engagement has changed. Engagement is now about raw user attention — time spent, impressions, scroll velocity — and the net effect is an online world in which you are constantly being addressed but never truly spoken to.

The Great Unbundling

Social media’s death rattle will not be a bang but a shrug.

These networks once promised a single interface for the whole of online life: Facebook as social hub, Twitter as news‑wire, YouTube as broadcaster, Instagram as photo album, TikTok as distraction engine. Growth appeared inexorable. But now, the model is splintering, and users are drifting toward smaller, slower, more private spaces, like group chats, Discord servers and federated microblogs — a billion little gardens.

Since Elon Musk’s takeover, X has shed at least 15% of its global user base. Meta’s Threads, launched with great fanfare in 2023, saw its number of daily active users collapse within a month, falling from around 50 million active Android users at launch in July to only 10 million the following month. Twitch recorded its lowest monthly watch-time in over four years in December 2024, just 1.58 billion hours, 11% lower than the December average from 2020-23.

“While content proliferates, engagement is evaporating.”

Even the giants that still command vast audiences are no longer growing exponentially. Many platforms have already died (Vine, Google+, Yik Yak), are functionally dead or zombified (Tumblr, Ello), or have been revived and died again (MySpace, Bebo). Some notable exceptions aside, like Reddit and Bluesky (though it’s still early days for the latter), growth has plateaued across the board. While social media adoption continues to rise overall, it’s no longer explosive. As of early 2025, around 5.3 billion user identities — roughly 65% of the global population — are on social platforms, but annual growth has decelerated to just 4-5%, a steep drop from the double-digit surges seen earlier in the 2010s.

Intentional, opt-in micro‑communities are rising in their place — like Patreon collectives and Substack newsletters — where creators chase depth over scale, retention over virality. A writer with 10,000 devoted subscribers can potentially earn more and burn out less than one with a million passive followers on Instagram. 

But the old practices are still evident: Substack is full of personal brands announcing their journeys, Discord servers host influencers disguised as community leaders and Patreon bios promise exclusive access that is often just recycled content. Still, something has shifted. These are not mass arenas; they are clubs — opt-in spaces with boundaries, where people remember who you are. And they are often paywalled, or at least heavily moderated, which at the very least keeps the bots out. What’s being sold is less a product than a sense of proximity, and while the economics may be similar, the affective atmosphere is different, smaller, slower, more reciprocal. In these spaces, creators don’t chase virality; they cultivate trust.

Even the big platforms sense the turning tide. Instagram has begun emphasizing DMs, X is pushing subscriber‑only circles and TikTok is experimenting with private communities. Behind these developments is an implicit acknowledgement that the infinite scroll, stuffed with bots and synthetic sludge, is approaching the limit of what humans will tolerate. A lot of people seem to be fine with slop, but as more start to crave authenticity, the platforms will be forced to take note.

From Attention To Exhaustion

The social internet was built on attention, not only the promise to capture yours but the chance for you to capture a slice of everyone else’s. After two decades, the mechanism has inverted, replacing connection with exhaustion. “Dopamine detox” and “digital Sabbath” have entered the mainstream. In the U.S., a significant proportion of 18‑ to 34‑year‑olds took deliberate breaks from social media in 2024, citing mental health as the motivation, according to an American Psychiatric Association poll. And yet, time spent on the platforms remains high — people scroll not because they enjoy it, but because they don’t know how to stop. Self-help influencers now recommend weekly “no-screen Sundays” (yes, the irony). The mark of the hipster is no longer an ill-fitting beanie but an old-school Nokia dumbphone. 

Some creators are quitting, too. Competing with synthetic performers who never sleep, they find the visibility race not merely tiring but absurd. Why post a selfie when an AI can generate a prettier one? Why craft a thought when ChatGPT can produce one faster?

These are the last days of social media, not because we lack content, but because the attention economy has neared its outer limit — we have exhausted the capacity to care. There is more to watch, read, click and react to than ever before — an endless buffet of stimulation. But novelty has become indistinguishable from noise. Every scroll brings more, and each addition subtracts meaning. We are indeed drowning. In this saturation, even the most outrageous or emotive content struggles to provoke more than a blink.

Outrage fatigues. Irony flattens. Virality cannibalizes itself. The feed no longer surprises but sedates, and in that sedation something quietly breaks: social media no longer feels like a place to be; it is a surface to skim.

No one is forcing anyone to go on TikTok or to consume the clickbait in their feeds. The content served to us by algorithms is, in effect, a warped mirror, reflecting and distorting our worst impulses. For younger users in particular, their scrolling of social media can become compulsive, rewarding their developing brains with unpredictable hits of dopamine that keep them glued to their screens.

Social media platforms have also achieved something more elegant than coercion: They’ve made non-participation a form of self-exile, a luxury available only to those who can afford its costs.

“Why post a selfie when an AI can generate a prettier one? Why craft a thought when ChatGPT can produce one faster?”

Our offline reality is irrevocably shaped by our online world: Consider the worker who deletes or was never on LinkedIn, excluding themselves from professional networks that increasingly exist nowhere else; or the small business owner who abandons Instagram, watching customers drift toward competitors who maintain their social media presence. The teenager who refuses TikTok may find herself unable to parse references, memes and microcultures that soon constitute her peers’ vernacular.

These platforms haven’t just captured attention, they’ve enclosed the commons where social, economic and cultural capital are exchanged. But enclosure breeds resistance, and as exhaustion sets in, alternatives begin to emerge.

Architectures Of Intention

The successor to mass social media is, as already noted, emerging not as a single platform, but as a scattering of alleyways, salons, encrypted lounges and federated town squares — those little gardens.

Maybe today’s major social media platforms will find new ways to hold the gaze of the masses, or maybe they will continue to decline in relevance, lingering like derelict shopping centers or a dying online game, haunted by bots and the echo of once‑human chatter. Occasionally we may wander back, out of habit or nostalgia, or to converse once more as a crowd, among the ruins. But as social media collapses on itself, the future points to a quieter, more fractured, more human web, something that no longer promises to be everything, everywhere, for everyone.

This is a good thing. Group chats and invite‑only circles are where context and connection survive. These are spaces defined less by scale than by shared understanding, where people no longer perform for an algorithmic audience but speak in the presence of chosen others. Messaging apps like Signal are quietly becoming dominant infrastructures for digital social life, not because they promise discovery, but because they don’t. In these spaces, a message carries more meaning because it is directed, not broadcast.

Social media’s current logic is designed to reduce friction, to give users infinite content for instant gratification, or at the very least, the anticipation of such. The antidote to this compulsive, numbing overload will be found in deliberative friction, design patterns that introduce pause and reflection into digital interaction, or platforms and algorithms that create space for intention.

This isn’t about making platforms needlessly cumbersome but about distinguishing between helpful constraints and extractive ones. Consider Are.na, a non-profit, ad-free creative platform for collecting and connecting ideas, founded in 2014, that feels like the anti-Pinterest: There’s no algorithmic feed or engagement metrics, no trending tab to fall into and no infinite scroll. The pace is glacial by social media standards. Connections between ideas must be made manually, and thus thoughtfully — there are no algorithmic suggestions or ranked content.

To demand intention over passive, mindless screen time, X could require a 90-second delay before posting replies, not to deter participation, but to curb reactive broadcasting and engagement farming. Instagram could show how long you’ve spent scrolling before allowing uploads of posts or stories, and Facebook could display, with each refresh, the carbon cost of its data centers, reminding users that digital actions have material consequences. These small moments of added friction and purposeful interruption — what UX designers currently optimize away — are precisely what we need to break the cycle of passive consumption and restore intention to digital interaction.
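As a thought experiment, here is a minimal sketch of what such a 90-second pause might look like in code; the names (ReplyDraft, publish) and the enforcement logic are hypothetical illustrations, not any platform’s actual API.

```python
# A minimal sketch of "deliberative friction": a reply can only be published
# once a 90-second reflection window has elapsed since it was drafted.
# ReplyDraft, publish and the window length are hypothetical, not any platform's API.
import time
from dataclasses import dataclass, field

REFLECTION_SECONDS = 90  # the hypothetical pause discussed above


@dataclass
class ReplyDraft:
    text: str
    drafted_at: float = field(default_factory=time.monotonic)

    def seconds_remaining(self) -> float:
        """How much of the reflection window is still left."""
        elapsed = time.monotonic() - self.drafted_at
        return max(0.0, REFLECTION_SECONDS - elapsed)


def publish(draft: ReplyDraft) -> str:
    """Refuse to post until the reflection window has passed."""
    remaining = draft.seconds_remaining()
    if remaining > 0:
        return f"Hold on: {remaining:.0f}s left to reread before this goes out."
    return f"Posted: {draft.text}"


draft = ReplyDraft("Actually, I think you're completely wrong about this...")
print(publish(draft))  # called immediately after drafting, so the reply is held back
```

The constraint is trivial to build; what is scarce is a business model willing to tolerate it.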

We can dream of a digital future where belonging is no longer measured by follower counts or engagement rates, but rather by the development of trust and the quality of conversation. We can dream of a digital future in which communities form around shared interests and mutual care rather than algorithmic prediction. Our public squares — the big algorithmic platforms — will never be cordoned off entirely, but they might sit alongside countless semi‑public parlors where people choose their company and set their own rules, spaces that prioritize continuity over reach and coherence over chaos. People will show up not to go viral, but to be seen in context. None of this is about escaping the social internet, but about reclaiming its scale, pace, and purpose.

Governance Scaffolding

The most radical redesign of social media might be the most familiar: What if we treated these platforms as public utilities rather than private casinos?

A public-service model wouldn’t require state control; rather, it could be governed through civic charters, much like public broadcasters operate under mandates that balance independence and accountability. This vision stands in stark contrast to the current direction of most major platforms, which are becoming increasingly opaque.

“Non-participation [is] a form of self-exile, a luxury available only to those who can afford its costs.”

In recent years, Reddit and X, among other platforms, have either restricted or removed API access, dismantling open-data pathways. The very infrastructures that shape public discourse are retreating from public access and oversight. Imagine social media platforms with transparent algorithms subject to public audit, user representation on governance boards, revenue models based on public funding or member dues rather than surveillance advertising, mandates to serve democratic discourse rather than maximize engagement, and regular impact assessments that measure not just usage but societal effects.

Some initiatives gesture in this direction. Meta’s Oversight Board, for example, frames itself as an independent body for content moderation appeals, though its remit is narrow and its influence ultimately limited by Meta’s discretion. X’s Community Notes, meanwhile, allows user-generated fact-checks but relies on opaque scoring mechanisms and lacks formal accountability. Both are add-ons to existing platform logic rather than systemic redesigns. A true public-service model would bake accountability into the platform’s infrastructure, not just bolt it on after the fact.

The European Union has begun exploring this territory through its Digital Markets Act and Digital Services Act, but these laws, enacted in 2022, largely focus on regulating existing platforms rather than imagining new ones. In the United States, efforts are more fragmented. Proposals such as the Platform Accountability and Transparency Act (PATA) and state-level laws in California and New York aim to increase oversight of algorithmic systems, particularly where they impact youth and mental health. Still, most of these measures seek to retrofit accountability onto current platforms. What we need are spaces built from the ground up on different principles, where incentives align with human interest rather than extractive, for-profit ends.

This could take multiple forms, like municipal platforms for local civic engagement, professionally focused networks run by trade associations, and educational spaces managed by public library systems. The key is diversity, delivering an ecosystem of civic digital spaces that each serve specific communities with transparent governance.

Of course, publicly governed platforms aren’t immune to their own risks. State involvement can bring with it the threat of politicization, censorship or propaganda, and this is why the governance question must be treated as infrastructural, rather than simply institutional. Just as public broadcasters in many democracies operate under charters that insulate them from partisan interference, civic digital spaces would require independent oversight, clear ethical mandates, and democratically accountable governance boards, not centralized state control. The goal is not to build a digital ministry of truth, but to create pluralistic public utilities: platforms built for communities, governed by communities and held to standards of transparency, rights protection and civic purpose.

The technical architecture of the next social web is already emerging through federated and distributed protocols such as ActivityPub (used by Mastodon and Threads) and Bluesky’s Authenticated Transfer Protocol, or atproto (a decentralized framework that allows users to move between platforms while keeping their identity and social graph), as well as various blockchain-based experiments like Lens and Farcaster.
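To make the federation point concrete, here is a minimal sketch of how identity discovery already works on ActivityPub networks such as Mastodon: a standard WebFinger lookup resolves a handle to the actor document that other servers federate with. The handle shown is a hypothetical placeholder, and error handling is omitted.

```python
# A minimal sketch of federated identity discovery on ActivityPub networks such
# as Mastodon: a WebFinger lookup resolves a handle like "user@example.social"
# to the actor document other servers federate with. The handle is a placeholder.
import json
import urllib.parse
import urllib.request


def resolve_actor(handle: str) -> str:
    """Return the ActivityPub actor URL for a handle such as 'user@example.social'."""
    user, domain = handle.split("@")
    query = urllib.parse.urlencode({"resource": f"acct:{user}@{domain}"})
    webfinger_url = f"https://{domain}/.well-known/webfinger?{query}"
    with urllib.request.urlopen(webfinger_url) as response:
        data = json.load(response)
    # The 'self' link carrying the ActivityPub media type points at the actor document.
    for link in data.get("links", []):
        if link.get("rel") == "self" and "activity+json" in link.get("type", ""):
            return link["href"]
    raise ValueError(f"No ActivityPub actor found for {handle}")


# print(resolve_actor("user@example.social"))  # hypothetical handle, left commented out
```

No single company sits in the middle of that exchange, which is precisely what makes it a different architecture rather than a different brand.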

But protocols alone won’t save us. The email protocol is decentralized, yet most email flows through a handful of corporate providers. We need to “rewild the internet,” as Maria Farrell and Robin Berjon argued in a Noema essay. We need governance scaffolding, shared institutions that make decentralization viable at scale. Think credit unions for the social web that function as member-owned entities providing the infrastructure that individual users can’t maintain alone. These could offer shared moderation services that smaller instances can subscribe to, universally portable identity systems that let users move between platforms without losing their history, collective bargaining power for algorithm transparency and data rights, user data dividends for all, not just influencers (if platforms profit from our data, we should share in those profits), and algorithm choice interfaces that let users select from different recommender systems.

Bluesky’s AT Protocol explicitly allows users to port their identity and social graph, but it is still early days, and portability across protocols and platforms remains extremely limited, if not effectively nonexistent. Bluesky also allows users to choose among multiple content algorithms, an important step toward user control. But these models remain largely tied to individual platforms and developer communities. What’s still missing is a civic architecture that makes algorithmic choice universal, portable, auditable and grounded in public-interest governance rather than market dynamics alone.

Imagine being able to toggle between different ranking logics: a chronological feed, where posts appear in real time; a mutuals-first algorithm that privileges content from people who follow you back; a local context filter that surfaces posts from your geographic region or language group; a serendipity engine designed to introduce you to unfamiliar but diverse content; or even a human-curated layer, like playlists or editorials built by trusted institutions or communities. Many of these recommender models do exist, but they are rarely user-selectable, and almost never transparent or accountable. Algorithm choice shouldn’t require a hack or browser extension; it should be built into the architecture as a civic right, not a hidden setting.
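To show how little machinery such a choice would require, here is a minimal sketch of user-selectable ranking over a single pool of posts; the post fields, feed names and example data are hypothetical, not any platform’s real schema.

```python
# A minimal sketch of user-selectable feed ranking: the same posts, reordered by
# whichever logic the reader picks. Post fields and feed names are hypothetical
# illustrations, not any platform's real schema.
import random
from dataclasses import dataclass


@dataclass
class Post:
    author: str
    text: str
    timestamp: float          # seconds since epoch
    author_follows_you: bool  # crude stand-in for a "mutuals" signal
    region: str


def chronological(posts, **_):
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)


def mutuals_first(posts, **_):
    # Mutuals first, newest first within each group.
    return sorted(posts, key=lambda p: (not p.author_follows_you, -p.timestamp))


def local_lens(posts, region="IE", **_):
    return [p for p in posts if p.region == region]


def serendipity(posts, seed=None, **_):
    rng = random.Random(seed)
    return rng.sample(posts, k=len(posts))


FEEDS = {
    "chronological": chronological,
    "mutuals-first": mutuals_first,
    "local": local_lens,
    "serendipity": serendipity,
}


def render_feed(posts, choice: str, **options):
    """The reader, not the platform, decides which ranking logic runs."""
    return FEEDS[choice](posts, **options)


posts = [
    Post("a", "morning thought", 100.0, False, "IE"),
    Post("b", "breaking news", 200.0, True, "US"),
]
print([p.author for p in render_feed(posts, "mutuals-first")])  # ['b', 'a']
```

The data never changes; only the ordering function the reader has chosen does, which is the whole point.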

“What if we treated these platforms as public utilities rather than private casinos?”

Algorithmic choice can also produce new hierarchies. If feeds can be curated like playlists, the next influencer may not be the one creating content, but the one editing it. Institutions, celebrities and brands will be best positioned to build and promote their own recommendation systems. For individuals, the incentive to do this curatorial work will likely depend on reputation, relational capital or ideological investment. Unless we design these systems with care, we risk reproducing old dynamics of platform power, just in a new form.

Federated platforms like Mastodon and Bluesky face real tensions between autonomy and safety: Without centralized moderation, harmful content can proliferate, while over-reliance on volunteer admins creates sustainability problems at scale. These networks also risk reinforcing ideological silos, as communities block or mute one another, fragmenting the very idea of a shared public square. Decentralization gives users more control, but it also raises difficult questions about governance, cohesion and collective responsibility — questions that any humane digital future will have to answer.

But there is a possible future where a user, upon opening an app, is asked how they would like to see the world on a given day. They might choose the serendipity engine for unexpected connections, the focus filter for deep reads or the local lens for community news. This is technically achievable — the data would be the same; only the ranking logic the user selects would change — but it would require a design philosophy that treats users as citizens of a shared digital system rather than cattle. While this is possible, it can feel like a pipe dream.

To make algorithmic choice more than a thought experiment, we need to change the incentives that govern platform design. Regulation can help, but real change will come when platforms are rewarded for serving the public interest. This could mean tying tax breaks or public procurement eligibility to the implementation of transparent, user-controllable algorithms. It could mean funding research into alternative recommender systems and making those tools open-source and interoperable. Most radically, it could involve certifying platforms based on civic impact, rewarding those that prioritize user autonomy and trust over sheer engagement.

Digital Literacy As Public Health

Perhaps most crucially, we need to reframe digital literacy not as an individual responsibility but as a collective capacity. This means moving beyond spot-the-fake-news workshops to more fundamental efforts to understand how algorithms shape perception and how design patterns exploit our cognitive processes. 

Some education systems are beginning to respond, embedding digital and media literacy across curricula. Researchers and educators argue that this work needs to begin in early childhood and continue through secondary education as a core competency. The goal is to equip students to critically examine the digital environments they inhabit daily, to become active participants in shaping the future of digital culture rather than passive consumers. This includes what some call algorithmic literacy, the ability to understand how recommender systems work, how content is ranked and surfaced, and how personal data is used to shape what you see — and what you don’t.

Teaching this at scale would mean treating digital literacy as public infrastructure, not just a skill set for individuals, but a form of shared civic defense. This would involve long-term investments in teacher training, curriculum design and support for public institutions, such as libraries and schools, to serve as digital literacy hubs. When we build collective capacity, we begin to lay the foundations for a digital culture grounded in understanding, context and care.

We also need behavioral safeguards like default privacy settings that protect rather than expose, mandatory cooling-off periods for viral content (deliberately slowing the spread of posts that suddenly attract high engagement), algorithmic impact assessments before major platform changes, and public dashboards that show, in real time, platform manipulation, that is, coordinated or deceptive behaviors that distort how content is amplified or suppressed. If platforms are forced to disclose their engagement tactics, these tactics lose power. The ambition is to make visible hugely influential systems that currently operate in obscurity.
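A rough sketch of how such a cooling-off period might be enforced, assuming a hypothetical engagement-velocity trigger, is below; the thresholds and field names are illustrative, not drawn from any real platform.

```python
# A minimal sketch of a cooling-off period for viral content: once a post's
# engagement velocity crosses a threshold, further algorithmic amplification is
# withheld for a fixed window. Thresholds and field names are illustrative only.
from dataclasses import dataclass
from typing import Optional

VELOCITY_THRESHOLD = 500   # interactions per minute that triggers the pause
COOLING_OFF_MINUTES = 60   # how long amplification is withheld once triggered


@dataclass
class PostStats:
    interactions_last_minute: int
    minutes_since_triggered: Optional[float] = None  # None means never triggered


def amplification_allowed(stats: PostStats) -> bool:
    """Return False while the post sits inside its cooling-off window."""
    if stats.interactions_last_minute >= VELOCITY_THRESHOLD:
        stats.minutes_since_triggered = 0.0
    if stats.minutes_since_triggered is None:
        return True
    return stats.minutes_since_triggered >= COOLING_OFF_MINUTES


print(amplification_allowed(PostStats(interactions_last_minute=800)))  # False: paused
print(amplification_allowed(PostStats(interactions_last_minute=12)))   # True: unaffected
```

Nothing here censors the post itself; the platform simply stops pushing it uphill for an hour.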

We need to build new digital spaces grounded in different principles, but this isn’t an either-or proposition. We also must reckon with the scale and entrenchment of existing platforms that still structure much of public life. Reforming them matters too. Systemic safeguards may not address the core incentives that inform platform design, but they can mitigate harm in the short term. The work, then, is to constrain the damage of the current system while constructing better ones in parallel, to contain what we have, even as we create what we need. 

The choice isn’t between technological determinism and Luddite retreat; it’s about constructing alternatives that learn from what made major platforms usable and compelling while rejecting the extractive mechanics that turned those features into tools for exploitation. This won’t happen through individual choice alone, though choice helps; nor will it happen through regulation alone, though regulation can help considerably. It will require our collective imagination to envision and build systems focused on serving human flourishing rather than harvesting human attention.

Social media as we know it is dying, but we’re not condemned to its ruins. We are capable of building better — smaller, slower, more intentional, more accountable — spaces for digital interaction, spaces where the metrics that matter aren’t engagement and growth but understanding and connection, where algorithms serve the community rather than strip-mine it.

The last days of social media might be the first days of something more human: a web that remembers why we came online in the first place — not to be harvested but to be heard, not to go viral but to find our people, not to scroll but to connect. We built these systems, and we can certainly build better ones. The question is whether we will do this or whether we will continue to drown.

