Digital Society Archives - NOEMA

The AI-Powered Web Is Eating Itself

Suppose you’re craving lasagna. Where do you turn for a recipe? The internet, of course.

Typing “lasagna recipe ideas” into Google used to surface a litany of food blogs, each with its own story: a grandmother’s family variation, step-by-step photos of ingredients laid out on a wooden table, videos showing technique and a long comment section where readers debated substitutions or shared their own tweaks. Clicking through didn’t just deliver instructions; it supported the blogger through ads, affiliate links for cookware or a subscription to a weekly newsletter. That ecosystem sustained a culture of experimentation, dialogue and discovery.

That was a decade ago. Fast forward to today. The same Google search can now yield a neatly packaged “AI Overview,” a synthesized recipe stripped of voice, memory and community, delivered without a single user visit to the creator’s website. Behind the scenes, their years of work, including their page’s text, photos and storytelling, may have already been used to help train or refine the AI model.

You get your lasagna, Google gets monetizable web traffic and for the most part, the person who created the recipe gets nothing. The living web shrinks further into an interface of disembodied answers, convenient but ultimately sterile.

This isn’t hypothetical: More than half of all Google searches in the U.S. and Europe in 2024 ended without a click, a report by the market research firm SparkToro estimated. Similarly, the SEO intelligence platform Ahrefs published an analysis of 300,000 keywords in April 2025 and found that when an AI overview was present, the number of users clicking into top-ranked organic search results plunged by an average of more than a third.

Users are finding their questions answered and their needs satisfied without ever leaving the search platform.

Until recently, an implicit social contract governed the web: Creators produced content, search engines and platforms distributed it, and in return, user traffic flowed back to the creators’ websites that sustained the system. This reciprocal bargain of traffic in exchange for content underwrote the economic, cultural and information-based fabric of the internet for three decades.

Today, the rise of AI marks a decisive rupture. Google’s AI Overviews, Bing’s Copilot Search, OpenAI’s ChatGPT, Anthropic’s Claude, Meta’s Llama and xAI’s Grok effectively serve as a new oligopoly of what are increasingly being called “answer engines” that stand between users and the very sources from which they draw information.

This shift threatens the economic viability of content creation, degrades the shared information commons and concentrates informational power.

To sustain the web, a system of Artificial Integrity must be built into these AI “answer engines” that prioritizes three things: clear provenance that consistently makes information sources visible and traceable, fair value flows that ensure creators share in the value even when users don’t click their content and a resilient information commons that keeps open knowledge from collapsing behind paywalls.

In practical terms, that means setting enforceable design and accountability guardrails that uphold integrity, so AI platforms cannot keep all the benefits of instant answers while pushing the costs onto creators and the wider web.

Ruptured System

AI “answer engines” haven’t merely made it easier to find information, they have ruptured the web’s value loop by separating content creation from the traffic and revenue that used to reward it.

AI companies have harvested and utilized the creative labor of writers, researchers, artists and journalists to train large language models without clear consent, attribution or compensation. The New York Times has filed lawsuits against OpenAI and Microsoft, alleging that the tech giants used its copyrighted articles for this purpose. In doing so, the news organization claims, they are threatening the very business model of journalism.

In fact, AI threatens the business model of digital content creation across the board. As publishers lose traffic, there remains little incentive for them to keep content free and accessible. Instead, paywalls and exclusive licensing are increasingly the norm. This will continue to shrink the freely available corpus of information upon which both human knowledge and future AI training depend.

The result will be a degraded and privatized information base. It will leave future AI systems working with a narrower, more fragile foundation of information, making their outputs increasingly dependent on whatever remains openly accessible. This will limit the diversity and freshness of the underlying data, as documented in a 2024 audit of the “AI data commons.” 

“The living web is shrinking into an interface of disembodied answers, convenient but ultimately sterile.”

At the same time, as more of what is visible online becomes AI-generated and then reused in future training, these systems will become more exposed to “model collapse,” a dynamic documented in a 2024 Nature study. It showed that when real data are replaced by successive synthetic generations, the tails of the original distribution begin to disappear as the model’s synthetic outputs begin to overwrite the underlying reality they were meant to approximate. 

Think of it like making a photocopy of a photocopy, again and again. Each generation keeps the bold strokes and loses the faint details. Both trends, in turn, weaken our ability to verify information independently. In the long run, this will leave people relying on systems that amplify errors, bias and informational blind spots, especially in niche domains and low-visibility communities.

Picture a procurement officer at a mid-sized bank tasked with evaluating vendors for a new fraud-detection platform. Not long ago, she would likely have turned to Google, LinkedIn or industry portals for information, wading through detailed product sheets, analyst reports and whitepapers. By clicking through to a vendor’s website, she could access whatever technical information she needed and ultimately contact the company. For the vendor, each click also fed its sales pipeline. Such traffic was not incidental; it was the lifeblood of an entire ecosystem of marketing metrics and campaigns, specialized research and the jobs they underwrote.

These days, the journey looks different. A procurement officer’s initial query would likely yield an AI-generated comparison condensing the field of prospects into a few paragraphs: Product A is strong on compliance; product B excels at speed; product C is cost-effective. Behind this synthesis would likely lie numerous whitepapers, webinars and case studies produced by vendors and analysts — years of corporate expertise spun into an AI summary.

As a result, the procurement officer might never leave the interface. Vendors’ marketing teams, seeing dwindling click-driven sales, might retreat from publishing open materials. Some might lock reports behind steep paywalls, others might cut report production entirely and still others might sign exclusive data deals with platforms just to stay visible.

The once-diverse supply of open industry insight would contract into privatized silos. Meanwhile, the vendors would become even more dependent on the very platforms that extract their value.

Mechanisms At Play

The rupture we’re seeing in the web’s economic and informational model is driven by five mutually reinforcing mechanisms that determine what content gets seen, who gets credited and who gets paid. Economists and product teams might call these mechanisms intent capture, substitution, attribution dilution, monetization shifts and the learning loop break.

Intent capture happens when the platform turns an online search query into an on-platform answer, keeping the user from ever needing to click the original source of information. This mechanism transforms a search engine’s traditional results page from an open marketplace of links essentially into a closed surface of synthesized answers, narrowing both visibility and choice. 

Substitution, which takes place when users rely on AI summaries instead of clicking through to source links and giving creators the traffic they depend on, is particularly harmful. This harm is most pronounced in certain content areas. High substitution occurs for factual lookups, definitions, recipes and news summaries, where a simple answer is often sufficient. Conversely, low substitution occurs for content like investigative journalism, proprietary datasets and multimedia experiences, which are harder for AI to synthesize into a satisfactory substitute.

The incentives of each party diverge: Platforms are rewarded for maximizing query retention and ad yield; publishers for attracting referral traffic and subscribers; and regulators for preserving competition, media plurality and provenance. Users, too, prefer instant, easily accessible answers to their queries. This misalignment ensures that platforms optimize for closed-loop satisfaction while the economic foundations of content creation remain externalized and underfunded.

Attribution dilution compounds the effect. When information sources are pushed behind dropdowns or listed in tiny footnotes, the credit exists in form but not in function. Search engines’ tendency to simply display source links, which many do inconsistently, does not solve the issue. These links are often de-emphasized and generate little or no economic value, creating a significant consent gap for content used in AI model training. When attribution is blurred across multiple sources and no value accrues without clicks or compensation, that gap becomes especially acute. 

“AI ‘answer engines’ have ruptured the web’s value loop by separating content creation from the traffic and revenue that used to reward it.”

Monetization shifts refer to the redirected monetary value that now often flows solely to AI “answer engines” instead of to content creators and publishers. This shift is already underway, and it extends beyond media. When content promoting or reviewing various products and services receives fewer clicks, businesses often have to spend more to be discovered online, which can raise customer acquisition costs and, in some cases, prices. 

This shift can also impact people’s jobs: Fewer roles may be needed to produce and optimize web content for search, while more roles might emerge around licensing content, managing data partnerships and governing AI systems. 

The learning loop break describes the shrinking breadth and quality of the free web as a result of the disruptive practices of AI “answer engines.” As the information commons thins, high-quality data becomes a scarce resource that can be controlled. Analysts warn that control of valuable data can act as a barrier to entry and concentrate gatekeeper power.

This dynamic is comparable to what I refer to as a potential “Data OPEC,” a metaphor for a handful of powerful platforms and rights-holders controlling access to high-quality data, much as the Organization of Petroleum Exporting Countries (OPEC) controls the supply of oil.

Just as OPEC can restrict oil supply or raise prices to shape global markets, these data gatekeepers could restrict or monetize access to information used to build and improve AI systems, including training datasets, raising costs, reducing openness and concentrating innovation power in fewer hands. In this way, what begins as an interface design choice cascades into an ecological risk for the entire knowledge ecosystem.

The combined effect of these five mechanisms is leading to a reconfiguration of informational power. If AI “answer engines” become the point of arrival for information rather than the gateway, the architecture of the web risks being hollowed out from within. The stakes extend beyond economics: They implicate the sustainability of public information ecosystems, the incentives for future creativity and the integrity of the informational commons.

Left unchecked, these forces threaten to undermine the resilience of the digital environment on which both creators and users depend. What is needed is a systemic redesign of incentives, guided by the framework of Artificial Integrity rather than artificial intelligence alone.

Artificial Integrity

Applied to the current challenge, Artificial Integrity can be understood across three dimensions: information provenance integrity, economic integrity of information flows and integrity of the shared information commons.

Information provenance integrity is about ensuring that sources are visible, traceable and properly credited. This should include who created the content, where it was published and the context in which it was originally presented. The design principle is transparency: Citations must not be hidden in footnotes. 

Artificial Integrity also requires that citations carry active provenance metadata: a verifiable, machine-readable signature linking each fragment of generated output to its original source, allowing both users and systems to trace information flows with the same rigor as a scientific citation.

That introduces something beyond just displaying source links: It’s a systemic design where provenance is cryptographically or structurally embedded, not cosmetically appended. In this way, provenance integrity becomes a safeguard against erasure, ensuring that creators remain visible and credited even if the user doesn’t click through to the original source.
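
To make the idea concrete, here is a minimal sketch in Python of what such a machine-readable provenance record could look like. The field names, the JSON layout and the HMAC-based signature are illustrative assumptions rather than an existing standard or any platform’s actual implementation; a production system would more likely rely on public-key signatures and an agreed schema.

```python
import hashlib
import hmac
import json

# Hypothetical provenance record attached to one fragment of a generated answer.
# Field names and the HMAC-based signing scheme are illustrative assumptions,
# not an existing standard.
def make_provenance_record(fragment: str, source_url: str, author: str,
                           published: str, signing_key: bytes) -> dict:
    record = {
        "fragment_sha256": hashlib.sha256(fragment.encode()).hexdigest(),
        "source_url": source_url,
        "author": author,
        "published": published,
    }
    # Sign the canonical JSON form so the link between generated output and
    # its source can be verified later, like a machine-checkable citation.
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance_record(record: dict, signing_key: bytes) -> bool:
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record.get("signature", ""), expected)

if __name__ == "__main__":
    key = b"demo-signing-key"
    rec = make_provenance_record(
        fragment="Layer the noodles, sauce and cheese, then bake at 190C.",
        source_url="https://example.com/lasagna-recipe",
        author="Example Food Blog",
        published="2016-03-14",
        signing_key=key,
    )
    print(rec["fragment_sha256"][:16], verify_provenance_record(rec, key))
```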

Economic integrity of information flows is about ensuring that value flows back to creators, not only to platforms. Artificial Integrity requires rethinking how links and citations are valued. In today’s web economy, a link matters only if it is clicked, which means that sources that are cited but not visited capture no monetary value. In an integrity-based model, the very act of being cited in an AI-generated answer would carry economic weight, ensuring that credit and compensation flow even when user behavior stops at the interface.

This would realign incentives from click-chasing to knowledge contribution, shifting the economy from performance-only to provenance-aware. To achieve this, regulators and standards bodies could require that AI “answer engines” compensate not only for traffic delivered, but also for information cited. Such platforms could implement source prominence rules so that citations are not hidden in footnotes but embedded in a way that delivers measurable economic value. 

Integrity of the shared information commons is about ensuring that the public information base remains sustainable, open and resilient rather than degraded into a paywalled or privatized resource. Here, Artificial Integrity calls for mandatory reinvestment of AI platform revenues into open datasets as a built-in function of the AI lifecycle. This means that large AI platforms such as Google, OpenAI and Microsoft would be legally required to dedicate a fixed percentage of their revenues to sustaining the shared information commons. 

“AI platforms cannot keep all the benefits of instant answers while pushing the costs onto creators and the wider web.”

This allocation would be architecturally embedded into their model development pipelines. For example, a “digital commons fund” could channel part of Google’s AI revenues into keeping resources like Wikipedia, PubMed or open academic archives sustainable and up to date. Crucially, this reinvestment would be hardcoded into retraining cycles, so that every iteration of a model structurally refreshes and maintains open-access resources alongside its own performance tuning. 

In this way, the sustainability of the shared information commons would become part of the AI system’s operating logic, not just a voluntary external policy. In effect, it would ensure that every cycle of AI improvement also improves the shared information commons on which it depends, aligning private platform incentives with public information sustainability.

We need to design an ecosystem where these three dimensions are not undermined by the optimization-driven focus of AI platforms but are structurally protected, both in how the platforms access and display content to generate answers, and in the regulatory environment that sustains them.

From Principle To Practice

To make an Artificial Integrity approach work, we would need systems for transparency and accountability. AI companies would have to be required to publish verifiable aggregated data showing whether users stop at their AI summaries or click outward to original sources. Crucially, to protect users’ privacy, this disclosure would need to include only aggregated interaction metrics reporting overall patterns. This would ensure that individual user logs and personal search histories are never exposed.
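
As a rough illustration of what such aggregated disclosure could look like, the sketch below collapses individual search events into overall click-out rates before anything is reported. The event fields and category labels are hypothetical; the point is simply that per-user logs never appear in the disclosed output.

```python
from collections import defaultdict

def aggregate_interactions(events):
    """Collapse (query_category, clicked_out) events into overall rates.

    Only aggregate counts and rates are returned; the underlying per-user
    events are never part of the disclosed output. The event shape and
    category names are hypothetical.
    """
    totals = defaultdict(lambda: {"answers_shown": 0, "clicks_out": 0})
    for category, clicked_out in events:
        totals[category]["answers_shown"] += 1
        if clicked_out:
            totals[category]["clicks_out"] += 1
    return {
        category: {
            "answers_shown": counts["answers_shown"],
            "click_out_rate": round(counts["clicks_out"] / counts["answers_shown"], 3),
        }
        for category, counts in totals.items()
    }

if __name__ == "__main__":
    sample_events = [("recipes", False), ("recipes", False), ("recipes", True),
                     ("news", True), ("news", False)]
    print(aggregate_interactions(sample_events))
```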

Independent third-party auditors, accredited and overseen by regulators much like accounting firms are today, would have to verify these figures. Just as companies cannot self-declare their financial health but must submit audited balance sheets, AI platforms would no longer be able to simply claim they are supporting the web without independent validation.

In terms of economic integrity of information flows, environmental regulation offers a helpful analogy. Before modern environmental rules, companies could treat pollution as an invisible side effect of doing business. Smoke in the air or waste in the water imposed real costs on society, but those costs did not show up on the polluter’s balance sheet.

Emissions standards changed this by introducing clear legal limits on how much pollution cars, factories and power plants are allowed to emit, and by requiring companies to measure and report those emissions. These standards turned pollution into something that had to be monitored, reduced or paid for through fines and cleaner technologies, instead of being quietly pushed onto the public. 

In a similar way, Artificial Integrity thresholds could ensure that the value that AI companies extract from creators’ content comes with financial obligations to those sources. An integrity threshold could simply be a clear numerical line, like pollution limits in emissions standards, that marks the point at which an AI platform is taking too much value without sending enough traffic or revenue back to sources. As long as the numbers stay under the acceptable limit, the system is considered sustainable; once they cross the threshold, the platform has a legal duty to change its behavior or compensate the creators it depends on.
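
As a toy illustration of how such a threshold check might be computed, the sketch below compares referral clicks against citations and flags when the ratio falls below a cutoff. Both the metric and the 0.25 cutoff are invented for illustration, not figures from any existing or proposed rule.

```python
from dataclasses import dataclass

# A hypothetical per-publisher report; field names and the 0.25 cutoff are
# invented for illustration, not taken from any regulation.
@dataclass
class PublisherMetrics:
    publisher: str
    citations_in_answers: int  # times the publisher's content was cited in AI answers
    referral_clicks: int       # clicks actually sent back to the publisher's site

def referral_rate(m: PublisherMetrics) -> float:
    """Fraction of citations that resulted in a click back to the source."""
    if m.citations_in_answers == 0:
        return 1.0  # nothing was used, so no value was extracted
    return m.referral_clicks / m.citations_in_answers

def compensation_due(m: PublisherMetrics, minimum_rate: float = 0.25) -> bool:
    """True once the platform keeps too much value relative to traffic returned."""
    return referral_rate(m) < minimum_rate

if __name__ == "__main__":
    sample = PublisherMetrics("example-food-blog.com",
                              citations_in_answers=10_000,
                              referral_clicks=1_200)
    print(f"referral rate: {referral_rate(sample):.1%}, "
          f"compensation duty triggered: {compensation_due(sample)}")
```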

This could be enforced by national or regional regulators, such as competition authorities, media regulators or data protection bodies. Similar rules have begun to emerge in a handful of jurisdictions that regulate digital markets and platform-publisher relationships, such as the EU, Canada or Australia, where news bargaining and copyright frameworks are experimenting with mandatory revenue-sharing for journalism. Those precedents could be adapted more broadly as AI “answer engines” reshape how we search online.

These thresholds could also be subject to standardized independent audits of aggregated interaction metrics. At the same time, AI platforms could be required to provide publisher-facing dashboards exposing the same audited metrics in near real-time, showing citation frequency, placement and traffic outcomes for their content. These dashboards could serve as the operational interface for day-to-day decision-making, while independent audit reports could provide a legally verified benchmark, ensuring accuracy and comparability across the ecosystem.

In this way, creators and publishers would not be left guessing whether their contributions are valued. They would receive actionable insight for their business models and formal accountability. Both layers together would embed provenance integrity into the system: visibility for creators, traceability for regulators and transparency for the public. 

“Artificial Integrity thresholds could ensure that the value that AI companies extract from creators’ content comes with financial obligations to those sources.”

Enforcement could mix rewards and penalties. On the reward side, platforms that show where their information comes from and that help fund important public information resources could get benefits such as tax credits or lighter legal risk. On the penalty side, platforms that ignore these integrity rules could face growing fines, similar to the antitrust penalties we already see in the EU.

This is where the three dimensions come together: information provenance integrity in how sources are cited, economic integrity of information flows in how value is shared and the integrity of the shared information commons in how open resources are sustained.

Artificial Integrity for platforms that deliver AI-generated answers represents more than a set of technical fixes. By reframing AI-mediated information search not as a question of feature tweaks but as a matter of design, code and governance in AI products, it advances a necessary rebalancing toward a fairer and more sustainable distribution of value on which the web depends, now and in the future.

Noema’s Top Artwork Of 2025

by Hélène Blanc
for “Why Science Hasn’t Solved Consciousness (Yet)”

by Shalinder Matharu
for “How To Build A Thousand-Year-Old Tree”

by Nicolás Ortega
for “Humanity’s Endgame”

by Seba Cestaro
for “How We Became Captives Of Social Media”

by Beatrice Caciotti
for “A Third Path For AI Beyond The US-China Binary”

by Dadu Shin
for “The Languages Lost To Climate Change” in Noema Magazine Issue VI, Fall 2025

by LIMN
for “Why AI Is A Philosophical Rupture”

by Kate Banazi
for “AI Is Evolving — And Changing Our Understanding Of Intelligence” in Noema Magazine Issue VI, Fall 2025

by Jonathan Zawada
for “The New Planetary Nationalism” in Noema Magazine Issue VI, Fall 2025

by Satwika Kresna
for “The Future Of Space Is More Than Human”

Other Top Picks By Noema’s Editors

Noema’s Top 10 Reads Of 2025


Daniel Barreto for Noema Magazine

The Last Days Of Social Media

Social media promised connection, but it has delivered exhaustion.

by James O’Sullivan


Beatrice Caciotti for Noema Magazine

A Third Path For AI Beyond The US-China Binary

What if the future of AI isn’t defined by Washington or Beijing, but by improvisation elsewhere?

by Dang Nguyen


Hélène Blanc for Noema Magazine

Why Science Hasn’t Solved Consciousness (Yet)

To understand life, we must stop treating organisms like machines and minds like code.

by Adam Frank


NASA Solar Dynamics Observatory

The Unseen Fury Of Solar Storms

Lurking in every space weather forecaster’s mind is the hypothetical big one, a solar storm so huge it could bring our networked, planetary civilization to its knees.

by Henry Wismayer


Sophie Douala for Noema Magazine

From Statecraft To Soulcraft

How the world’s illiberal powers like Russia, China and increasingly the U.S. rule through their visions of the good life.

by Alexandre Lefebvre


Ibrahim Rayintakath for Noema Magazine

The Languages Lost To Climate Change

Climate catastrophes and biodiversity loss are endangering languages across the globe.

by Julia Webster Ayuso


Vartika Sharma for Noema Magazine (images courtesy mzacha and Shaun Greiner)

The Shrouded, Sinister History Of The Bulldozer

From India to the Amazon to Israel, bulldozers have left a path of destruction that offers a cautionary tale for how technology without safeguards can be misused.

by Joe Zadeh


Blake Cale for Noema Magazine

The Moral Authority Of Animals

For millennia before we showed up on the scene, social animals — those living in societies and cooperating for survival — had been creating cultures imbued with ethics.

by Jay Griffiths


Zhenya Oliinyk for Noema Magazine

Welcome To The New Warring States

Today’s global turbulence has echoes in Chinese history.

by Hui Huang


Along the highway near Nukus, the capital of the autonomous Republic of Karakalpakstan. (All photography by Hassan Kurbanbaev for Noema Magazine)

Signs Of Life In A Desert Of Death

In the dry and fiery deserts of Central Asia, among the mythical sites of both the first human and the end of all days, I found evidence that life restores itself even on the bleakest edge of ecological apocalypse.

by Nick Hunt

Rescuing Democracy From The Quiet Rule Of AI

In 1950, the same year Alan Turing unveiled his famous test for machine intelligence, Isaac Asimov imagined something even more unsettling than a robot that could pass for human. In his story “The Evitable Conflict,” four vast super-computers known as “the Machines” silently steer the planet’s economy through an era of unprecedented peace and prosperity.

When they appear to make costly blunders, sabotaging the plans of a few powerful conspirators working to undermine their authority, World Co-ordinator Stephen Byerley learns the truth: the “errors” are no errors at all, but deliberate, tidy sacrifices meant to preserve global stability. The Machines have concluded that the surest way to keep humanity from harm is to keep humanity from making certain decisions.

Byerley grasps this truth with a blend of relief and dread, knowing that while the Machines will continue to stave off conflict, the affected citizens will never learn why their schemes failed or how they might seek redress. The Machines will keep their motives secret; transparency, too, is a risk to be managed.

Asimov cast the scene as a distant prophecy, yet the future he sketched is already seeping into the present. We often talk about artificial intelligence as a looming catastrophe or an ingenious convenience, oscillating between apocalyptic nightmares of runaway superintelligences and glittering futures of frictionless efficiency. Deep-fake propaganda, economic displacement, even the possibility of existential doom: these capture headlines because they are dramatic, cinematic, visceral.

But a quieter danger lies in wait, one that may ultimately prove more corrosive to the human spirit than any killer robot or bioweapon. The risk is that we will come to rely on AI not merely to assist us but to decide for us, surrendering ever larger portions of collective judgment to systems that, by design, cannot acknowledge our dignity.

The tragedy is that we are culturally prepared for such abdication. Our political institutions already depend on what might be called a “paradigm of deference,” in which ordinary citizens are invited to voice preferences episodically — through ballots every few years — while day-to-day decisions are made by elected officials, regulators and technical experts.

Many citizens have even come to defer their civic role entirely by abstaining from voting, whether as a symbolic statement or out of sheer apathy. AI slots neatly into this architecture, promising to supercharge the convenience of deferring while further distancing individuals from the levers of power.

Modern representative democracy itself emerged in the 18th century as a solution to the logistical impossibility of assembling the entire citizenry in one place; it scaled the ancient city-state to the continental republic. That solution carried a price: The experience of direct civic agency was replaced by periodic, symbolic acts of consent. Between elections, citizens mostly observe from the sidelines. Legislative committees craft statutes, administrative agencies draft rules, central banks decide the price of money — all with limited direct public involvement.

This arrangement has normalized an expectation that complex questions belong to specialists. In many domains, that reflex is sensible — neurosurgeons really should make neurosurgical calls. But it also primes us to cede judgment even where the stakes are fundamentally moral or distributive. The democratic story we tell ourselves — that sovereignty rests with the people — persists, but the lived reality is an elaborate hierarchy of custodians. Many citizens have internalized that gap as inevitable.

Enter machine learning. Algorithms excel precisely at tasks the layperson finds forbidding: sorting mountains of data, detecting patterns no human eye can see, quantifying risk in probabilistic terms. They arrive bearing the shimmering promise of neutrality; a model is statistical, so it feels less biased than a human. The seduction is powerful across domains, from credit scoring to determining who gets access to public services.

In the Netherlands, for instance, an early use case saw the government deploying automated systems to track welfare benefits with minimal human intervention. (It is notable that this experiment led to more than 20,000 families being falsely accused of fraud and contributed to the entire Dutch government’s resignation in 2021.) Faced with backlogs and budget constraints, officials grasp for anything that looks objective and efficient. Soon, the algorithm’s recommendation becomes the default, then the rule. Over time, the human intermediary becomes an impotent clerk who seldom overrides the machine, partly because the institution discourages deviation and partly because the clerk has forgotten how.

“The risk is that we will come to rely on AI not merely to assist us but to decide for us, surrendering ever larger portions of collective judgment to systems that, by design, cannot acknowledge our dignity.”

What vanishes in these moments is more than discretion; it is the encounter in which one person acknowledges another as a decision-worthy being. In the late 20th century, Francis Fukuyama revived an argument ultimately owed to Hegel: Liberal democracy is the most stable form of government because it satisfies the fundamental human thirst for recognition — the desire to be seen and affirmed as free and equal.

Whether or not history truly “ended” with the fall of the Berlin Wall, the insight about recognition remains profound. People do not demand merely material comfort or security; they demand that the social order look them in the eye and admit: “Your voice counts.” When that recognition fails to materialize — when individuals perceive that their fates are determined elsewhere, by elites who will never sit across from them — resentment grows. Contemporary populism is the political face of that resentment. It rails against distant technocrats, against faceless bureaucracy, against any system that patronizes rather than engages. It depicts electoral democracy, with its long channels of mediation that seldom reach the average citizen, as an empty ritual.

AI threatens to deepen this very wound. If the elected official is distant, the algorithm is an abyss. You cannot argue with a neural network’s hidden layers or cross-examine a random forest. Decisions that shape your life — how resources are allocated, which priorities are funded — become technical outputs optimized for efficiency, not political choices settled through public debate.

Even if we could make AI systems perfectly transparent, capable of explaining their reasoning in lucid prose, this does not cure the underlying democratic deficit; a decision explained is still a decision imposed. Without a clear path for recourse, human agency dissolves into statistical abstraction. For the citizen seeking recognition, there is no one to confront, no accountable face on the other side of the counter.

Even the possibility of reciprocity disappears, because the system is constitutionally incapable of respecting or disrespecting anyone; it simply optimizes. In this vacuum, anger can only turn outward indiscriminately, feeding conspiracy theories and demagogic narratives that blame shadowy technocrats, ethnic minorities or transnational plots.

A New Social Contract

The relationship between AI and democracy, however, is not fated to be antagonistic. Whether algorithms shrink or expand the public’s role depends less on the code itself than on the social contracts wrapped around it. Our existing social contracts were forged on the heels of the Enlightenment, as thinkers sought to erect constitutional and normative scaffolding to civilize raw power and align it with collective reason.

Hobbes’ fear of unfettered, natural chaos yielded to Locke’s primacy of the consent of the governed, Montesquieu’s framework of separation of powers, and Rousseau’s notion that legitimate authority must always remain answerable to the general will. These arrangements were designed to restrain the worst impulses of human governors while still harvesting the best of deliberation. They came into being at a time when the power of human reason to perfect society and nature seemed nearly limitless.

Now, however, for the first time in human history, we face the existence of a non-human cognitive actor whose speed, scale and analytical capacities already outstrip our own in narrow fields and will only continue to improve in future years. The shift to a world with superhuman intelligence demands something different than reactive jumps to impede AI progress; it calls for a deeper rethinking of how power and authority operate, where algorithmic systems should make decisions, where they shouldn’t and what mechanisms should exist to help people understand, challenge and override those decisions when necessary.

The central guiding question must be whether we treat AI as a substitute for collective judgment or as an instrument that enlarges the scope for human deliberation. At stake is nothing less than whether human judgment and human dignity retain operational value in the very systems that govern us.

Used well, AI can slash the logistical costs that once confined serious deliberation to narrow circles. Automatic translation, live transcription and real-time summarization enable diverse groups of citizens to debate common problems without sharing a room or the same native language. LLMs can transform technical briefings into plainer prose and cluster thousands of comments from virtual town halls into intelligible and actionable themes. AI facilitators can help forge consensus among polarized groups online, equalizing speaking times and surfacing overlooked voices before the discussion closes. In other words, the same machinery that powers prediction markets can be repurposed to make deliberation scalable, searchable and understandable to the broader public, transforming the way governments make decisions and the role of the average citizen.

“If the elected official is distant, the algorithm is an abyss. You cannot argue with a neural network’s hidden layers or cross-examine a random forest.”

Taiwan offers a glimpse of this future. The open source vTaiwan platform uses machine learning to analyze thousands of public comments on policy proposals, identifying areas of consensus and highlighting remaining disagreements. Rather than generating its own policy recommendations, the AI helps citizens and policymakers understand the structure of public opinion and focus discussion on genuinely contested issues. The platform has facilitated successful policymaking on contentious topics in Taiwan, like ride-sharing regulation and digital rights, enabling outcomes that enjoy broad public support, largely because citizens participated meaningfully in their creation.

A less techno-centric model of democratic innovation can be seen in Ireland, where citizens’ assemblies are convened regularly but have yet to harness the power of AI. These bodies bring together groups of randomly selected citizens to deliberate on complex issues and make recommendations to the government. Participants receive expert briefings, listen to stakeholders and engage in structured deliberation to reach consensus. The process is slow and sometimes onerous, but it has produced thoughtful policy outcomes that were unlikely to be achieved through traditional political channels, most notably a referendum repealing the Eighth Amendment of the Irish Constitution that previously limited access to abortion in the country. These assemblies also tend to be expensive to run and confined to relatively small groups, which has so far kept them on the periphery of the democratic landscape.

AI could help change that by scaling the number of citizens’ assemblies and connecting them to the broader public, thereby bolstering their legitimacy and reach. We saw one nascent version of this last year in Deschutes County, Oregon, where AI was used to record, synthesize and analyze the deliberations of a typically closed-door civic assembly on youth homelessness. With the consent of assembly members, highlights from these small-group recordings can be shared with the public, adding a new layer of transparency to the process and allowing outside citizens to see more clearly what actually comprised the deliberations.

AI could also help improve the quality of deliberation itself. For example, DeepMind’s Habermas Machine demonstrated in a 2024 Science study that an LLM could more effectively find common ground among divided groups than human mediators. Crucially, such tools aim to augment collective decision-making without replacing the essential human work of judgment and compromise.

These scenarios of governing with AI instead of being governed by it may sound cumbersome precisely because they are designed to reinsert friction where technocracy — or “algocracy,” government by algorithm — chases it away. But friction is not inherently bad. In politics, it is often the handrail that prevents a stumble into passivity.

Liberal democracy’s original genius was not merely the ballot box; it was the creation of multiple forums — town meetings, juries, local councils, civic associations — where citizens encountered each other as equals capable of persuasion and compromise. Many of those forums have withered under the pressures of mass society, mass media and now mass data, and with them, so too has the fabric of liberal democracy.

Obeying In Advance

In his well-known book “Bowling Alone,” published in 2000, Robert Putnam began charting the arc of withering American forums, noting declines in league sports, union membership and civic clubs throughout the late 20th century as globalization kicked into overdrive. Three decades later, the metrics have only worsened: local newspapers shutter weekly, worship attendance continues to thin, and the archetypal “meet-cute” has been replaced by a seemingly endless number of online dating apps.

The attrition of face-to-face venues for collective life did not start with the microchip, but algorithms and digital networks have accelerated the erosion. The result is a surplus of individual exposure to information and a deficit of shared context or mutual understanding of how decisions are made and who makes them. Reviving those shared spaces, even in digital form, would be messy, slower than letting code decide everything on its own. But it is also the only path that preserves the promise Fukuyama celebrated: that each person can be both author and audience of the laws that govern them.

None of this denies AI’s power to make certain decisions more efficient and less biased, improving the way government functions and potentially even saving lives. Nor does it trivialize the more dramatic, headline-grabbing risks of AI. It is entirely possible that future systems might acquire capabilities hazardous to humanity, that autonomous weapons could proliferate unchecked or that deep-faked misinformation could destabilize elections.

“AI will not, by itself, extinguish or redeem democracy. It will elevate whichever habits we choose to cultivate.”

Indeed, some of those things are already happening. But if the subtler problem of deference is left unaddressed, societies will grow ill-equipped to confront those larger perils — the muscle of civic agency will have already atrophied. People habituated to letting machines decide the mundane will struggle to reassert control when the stakes turn existential.

In the 20th century, Hannah Arendt’s renowned writing on the “banality of evil” revealed how Nazi administrative machinery depended not on ideological fervor but on bureaucratic compliance — civil servants like Adolf Eichmann who processed deportation orders with the same dutiful efficiency they brought to tax collection or municipal planning.

The system’s horror lay partly in how it transformed moral choices into technical procedures, making collaboration feel like competent administration rather than complicity in genocide. According to Arendt, in fact, Eichmann’s gravest crime was his failure to think for himself.

The Soviet Union, Arendt’s other locus of totalitarian analysis, followed a similar trajectory, albeit one that extended through the end of the Cold War. By the 1970s, many Soviet citizens had developed what psychologists call “learned helplessness” in the face of bureaucratic systems that rendered individual agency meaningless. This was a new, deeper form of political repression. It represented the internalization of procedural thinking that made independent judgment feel impossible or irrelevant. When Mikhail Gorbachev initiated a more open, consultative government in 1985 with glasnost, many citizens struggled to engage constructively, having lost familiarity with democratic deliberation and compromise.

The historian Timothy Snyder has argued that the path to tyranny is often paved by individuals who “obey in advance,” anticipating what authoritarian leaders want and preemptively meeting them halfway to avoid conflict. In the age of AI, this phenomenon appears poised to occur at an algorithmic scale, as individuals modify their behavior for a world shaped by omniscient machines.

We are already seeing preliminary signs of this impulse. A July Pew Research Center study found that when Google precedes its search results with AI‑generated summaries, users open far fewer links and often end their search right there. While convenient, by accepting the first synthesized response, we also implicitly accept what the algorithm has deemed important. More likely than us abruptly waking to a world of AI rule, the danger of deference by design is that we will continue to streamline our habits of inquiry and judgment to suit the technology’s parameters, until the habits themselves are emptied of agency. In that sense, AI is less a sudden usurper than the logical culmination of a political culture that has been hollowing out democratic publics and handing off judgment, piece by piece, for decades.

The road ahead, therefore, forks. Down one path lies the continuing consolidation of decision-making power in algorithmic platforms owned by corporations or agencies whose internal logics are obscure to the public. Citizens, numbed by convenience and resigned to complexity, perform citizenship as a spectator sport, casting ballots that merely reshuffle the supervisory committee overseeing an automated empire.

Down the other path lies a conscious effort to embed participation and contestability into every major system that touches communal life, accepting slower throughput and periodic gridlock as the price of freedom. The first path is lubricated by efficiency and the myth of objective expertise; the second is rocky, contentious and labor-intensive — yet it is the only route that keeps alive the foundational democratic claim that the governed never surrender the right to govern.

AI will not, by itself, extinguish or redeem democracy. It will elevate whichever habits we choose to cultivate. If we preserve the paradigm of deference, AI will become the ultimate bureaucrat, inscrutable and unanswerable. If we cultivate habits of shared judgment, AI can become an extraordinarily powerful amplifier of human insight, a tool that frees time for deliberation rather than replacing it. The decision between those futures cannot be delegated; it belongs to us as humans. How we make it may be the most important act of civic recognition we can offer one another in this new age of thinking machines.

Reclaiming Europe’s Digital Sovereignty

Editor’s Note: Noema is committed to hosting meaningful intellectual debate. This piece is in conversation with another, written by Benjamin Bratton, the Berggruen Institute’s Antikythera program director. Read it here: “Is European AI A Lost Cause? Not Necessarily.”

The Cartography Of Digital Power

Geopolitical power once flowed through armies and treaties, but today it courses through silicon wafers, server farms and algorithmic systems. These invisible digital infrastructures and architectures shape every aspect of modern life. “The Stack” — interlocking layers of hardware, software, networks and data — has become the operating system of modern political and economic power.

The global race to control the Stack defines the emerging world order. The United States consolidates its dominance through initiatives like Stargate, which fuses AI development directly to proprietary chips and hyperscale data centers, creating insurmountable barriers to competition. China advances through systematic industrial policy and its Digital Silk Road, achieving unprecedented integration from chip design to AI deployment across Asia and beyond. These are deliberate strategies of technological imperialism.

Europe occupies a paradoxical position: a regulatory leader but infrastructurally dependent. We Europeans have set global standards through GDPR and the AI Act. Our research institutions remain world-class. Yet just 4% of global cloud infrastructure is European-owned. European governments, businesses and citizens depend entirely on systems controlled by Amazon, Microsoft and Google — companies subject to the U.S. CLOUD Act’s extraterritorial surveillance requirements. When we use “our” digital services, we’re actually using American infrastructure governed by American law for American interests.

This dependency isn’t abstract — it’s existential. In the 21st century, those who control digital infrastructure control the conditions of possibility for democracy itself. Europe faces a choice: build sovereign technological capacity or accept digital colonization.

The Charges Against European Tech Discourse

The attempt to blame Europe’s digital paralysis on its critical intellectuals — those who resist Silicon Valley accelerationism, crypto hyper-libertarianism and techno-authoritarianism — is profoundly misdirected. At a Venice Architecture Biennale event I organized, “Archipelagos of Possible Futures,” Benjamin Bratton leveled four charges against Europe’s approach to AI and digital sovereignty. He argued, first, that Europe follows a “regulate first, build later (maybe)” strategy that breeds dependency rather than sovereignty; second, that its tech critics are “technophobic public intellectuals” who only say why not to build; third, that this critical culture produces “analysis paralysis” that blocks the very innovation it seeks; and fourth, that environmental and social concerns are manipulated to defend intellectual status rather than confront real technological challenges.

Each charge misses the mark. Europe’s technological predicament is not the result of excessive critique but of three decades spent dismantling the very capacities needed for technological sovereignty. The real choice we face is not between criticism and construction, but between authoritarian technological models and democratic alternatives. And building such alternatives demands that we understand how power has historically operated through technology. To dismiss, as Bratton does, thinkers like Evgeny Morozov, Kate Crawford and Marina Otero, is to embrace precisely the techno-accelerationist logic I believe must be confronted today.

Understanding the material realities of AI infrastructure — its environmental costs, labor dependencies and power concentrations — isn’t “fearmongering” but a prerequisite for sustainable, democratic development. And Europe’s emerging independent technological ecosystem demonstrates that democratic construction is already happening when we create the right conditions.

Most fundamentally, dismissing regulation as a European pathology — in Bratton’s phrasing, “the EU has AI regulation but not much AI to regulate” — reveals profound superficiality. Regulation is not the problem. The problem is a lack of enforcement and the absence of an industrial policy at scale. Indeed, if European regulation were so ineffective, why did the Trump administration go so far as to threaten bans and sanctions against European regulators who dared to implement digital laws? Precisely because effective regulation is seen in Washington as a direct obstacle to U.S. technological supremacy.

(EuroStack)

The Roots Of Europe’s Technological Predicament

The roots of Europe’s predicament are to be found, first, in the stranglehold of neoliberal orthodoxy on European economic thinking. This caused decades of austerity and made Europe the poster child of hyper-globalization: free trade without economic statecraft, bans on state aid, trickle-down economics dogmas, no long-term public investment and the systematic rejection of industrial policy. All of this was gospel preached by the U.S., even as Washington protected its own national security interests and subsidized Silicon Valley. Europe, in other words, internalized the ideology of market neutrality while others practiced strategic capitalism.

American technological dominance wasn’t born from free markets but from massive state intervention — DARPA, NASA and the National Science Foundation provided the patient capital and guaranteed markets that made Silicon Valley possible. As Linda Weiss’s work on the national security state and Mariana Mazzucato’s research on the entrepreneurial state have documented, innovation ecosystems do not emerge spontaneously — they are structured through public direction. Every core technology in the iPhone — the internet, GPS, touchscreens, voice recognition — emerged from decades of public research funding. The Pentagon’s procurement budgets functioned as venture capital at a continental scale.

Meanwhile, Europe internalized market fundamentalism more thoroughly than its own inventors intended. The Stability and Growth Pact treated strategic industrial investment as a violation of fiscal discipline. Competition policy, instead of preventing Big Tech dominance via bold antitrust action, prevented the formation of European champions while American firms achieved monopolistic scale. Europeans were lectured that industrial policy violated market principles by the very Americans who systematically practiced it.

“In the 21st century, those who control digital infrastructure control the conditions of possibility for democracy itself.”

The CLOUD Act handed U.S. agencies jurisdiction over European data, digital trade agreements prevented data localization and intellectual property regimes ensured that value extraction flowed westward across the Atlantic. Rather than using its regulatory power to block predatory surveillance models and turn data sovereignty into a competitive advantage, Europe tolerated a system now weaponized by far-right tech oligarchs to spread disinformation, extremism and fake news.

This isn’t a cultural failure — it’s a political one. I’ve seen it first-hand as CTO of the city of Barcelona, president of the Italian Innovation Fund and coordinator of major European research projects. When European startups succeed, they turn to American venture capital and often relocate to Silicon Valley. When our researchers make breakthroughs, they are hired away by U.S. companies offering vastly higher pay. When European cities try to assert digital sovereignty, they face lawsuits bankrolled by American tech giants and diplomatic pressure from Washington. Powerful forces, in other words, are arrayed against anyone who tries to change course. The result is a steady drain of talent, capital and sovereignty.

Who Controls The Stack? The New Techno-Economic Warfare

Dismissing European concerns about technological sovereignty as “Cold War thinking” misreads reality. This isn’t ideological competition — it’s economic warfare where technology is a national weapon built through political and industrial choices. 

Control concentrates at every layer of the stack. In materials, China processes 90% of global rare earths. So when it restricts gallium, germanium and graphite exports, it strikes directly at European and other green energy transitions.

At the chip layer, Taiwan’s TSMC commands 64% of global foundry capacity, Samsung another 12%. Europe has fallen to 8%, despite ASML’s lithography monopoly. The Trump administration’s 10% equity stake in Intel signals how far Washington wants to go. Nvidia and AMD agreed to hand over 15% of their AI chip revenues to the U.S. government just so they could keep selling into China. U.S. export controls don’t just constrain China — they dictate what European firms can sell and what researchers can access. The Dutch licensing restrictions on ASML, one of the world’s leading suppliers of equipment essential for making computer chips, show how American regulations reverberate across Europe’s industrial core.

But export controls reveal their limits. China’s DeepSeek achieved competitive AI performance at a fraction of typical costs. In response, some leaders in Silicon Valley and Washington called for even tighter restrictions on AI chips and infrastructure, pushing China toward self-sufficiency while fragmenting the global tech stack further.

“Innovation ecosystems do not emerge spontaneously — they are structured through public direction.”

At the cloud and AI layers, U.S. hyperscalers dominate. The CLOUD Act grants Washington extraterritorial reach over any data touching U.S. companies — even when stored in Europe. Nearly all foundation models answer to Silicon Valley.

But AI dominance now comes packaged with ideology. Trump’s executive orders mandate AI systems “free from ideological bias,” banning “woke AI” in federal procurement while defining diversity and equity as distortions that “sacrifice truthfulness.” The U.S. AI Action Plan exports “its full AI technology stack—hardware, models, software, applications, and standards—to all countries willing to join America’s AI alliance.” This isn’t just chips — it’s the entire stack, with American values and control baked in at every layer.

Trump made it explicit: “substantial” tariffs against any country regulating U.S. tech firms. Thus, Europe cannot set rules in its own market without facing economic punishment. One executive order in Washington — not Brussels or Berlin — could cut access to critical systems running our industries, hospitals and elections. That’s not a trade deficit — it’s a sovereignty deficit.

Whoever controls AI infrastructure — compute, models, data and cloud — will shape the economic and political order of the 21st century. The U.S. and China understand this and are mobilizing every instrument of statecraft to secure supremacy. Europe must understand it too.

The Material Realities Of AI Dominance

AI isn’t magic or ethereal — it’s brutally material, requiring specific configurations of energy, water, land and capital that define the political geography of digital sovereignty. Understanding these material flows reveals where power concentrates and where intervention becomes possible.

The numbers are staggering. Training frontier models consumes enormous computational resources: GPT-4’s training required electricity equivalent to the annual consumption of thousands of American homes. Google’s emissions surged nearly 50% in the past five years, driven mostly by AI computation. By 2030, data centers are projected to use at least 3% of global electricity, with AI workloads accounting for the majority. Already in Ireland, data centers consume more than a fifth of the nation’s electricity, a share that some projections predict will rise to a third by 2030. Similar crises emerge in Frankfurt, Amsterdam, London — wherever cloud infrastructure concentrates.

Follow the money: BlackRock deploying hundreds of billions of dollars for new data center build-outs, Saudi sovereign wealth funds recycling fossil fuel profits into AI ventures, Emirati sovereign funds seeking technological hedges against energy transition.

The human costs reveal similar political economy dynamics. Kenyan workers earning minimal wages label content to train ChatGPT’s systems. Congolese children mine cobalt for data center batteries. Filipino moderators develop trauma from endless exposure to violence and abuse so AI can appear “safe.” These systems depend on hidden armies of exploited workers performing “ghost work” that makes AI seem magical.

“There is no contradiction between embracing the green agenda and having a strong AI-focused industrial policy.”

Europe’s advantage lies in treating constraints as design opportunities. Its renewable leadership — Germany’s 62% renewable mix, Spain’s solar surge, Denmark’s wind-powered grid — already provides the foundation for sustainable AI. Labor protections curb the exploitation rife in U.S. and Chinese supply chains, while environmental commitments push innovation beyond extractive models. DeepSeek proves that high-quality models can be built with less compute, fewer cutting-edge chips and lower resource use through open source and better engineering. Europe doesn’t need mega-infrastructures financed by fossil wealth — it needs models tailored to its own industries and societies.

AI’s soaring energy demands are already driving the industry back to nuclear: Amazon, Google and Microsoft are investing billions in small modular reactors. When Peter Thiel conflates Greta Thunberg with the Antichrist for defending climate action, the stakes are clear: The renewable transition is cast as a threat to innovation. Yet Europe’s renewable base is not a weakness — it is the very foundation of sustainable AI.

There is no contradiction between embracing the green agenda and having a strong AI-focused industrial policy. As carbon costs rise and resources tighten, fossil-fuelled AI will become increasingly fragile. By powering data centers with clean energy, limiting water use and pricing carbon at its real cost, Europe can turn constraint into strength.

Silicon Valley’s Turn To Techno-Nationalism

What Silicon Valley presents as neutral technological progress is increasingly revealing itself as an authoritarian political project. Trump’s second administration has accelerated this transformation with breathtaking speed. The Pentagon now directly commissions tech executives into military ranks through programs like Detachment 201. Palantir’s $10 billion U.S. Army contract makes its surveillance systems the de facto operating system of the modern military, integrating battlefield intelligence with domestic data. Anduril’s autonomous weapons factories mass-produce AI-powered drones while its executives rotate into senior Pentagon positions.

The architects of this system no longer hide their vision. Palantir CEO Alex Karp’s manifesto “The Technological Republic” articulates “patriotic tech” as a kind of fusion of Silicon Valley libertarianism with authoritarian nationalism. This ideology, rooted in anti-democratic philosophies, casts technological supremacy as a civilizational imperative.

Alex Karp presents Palantir as a bulwark against “American decay,” Elon Musk unilaterally restricts Ukrainian access to Starlink based on his own political whims and Peter Thiel directs ideological allies into government, channeling U.S. venture capital and defense money into his causes. Every major AI lab now depends on people and institutions that are opposed to democratic governance. What is emerging is not a planetary commons but a new tech-military complex financed by capital aligned with authoritarian ideologies and legitimized through patriotic rhetoric.

When critics dismiss concerns about AI bias, surveillance capitalism and platform monopolization as ideological extremism, they reveal their own allegiance to this kind of oligarchic control. To brand all this critique as “Lysenkoism” or “woke Marxism” while ignoring the real risks of AI capture echoes Trump’s new McCarthyism against imagined threats.


European Alternatives Beyond Silicon Valley

The EuroStack builds on Europe’s independent tech, research and industrial ecosystem — the foundation for democratic digital sovereignty linking demand and supply. Some argue Europe should focus on AI “diffusion” rather than infrastructure, treating computation as a “planetary common” to be accessed. Bratton accepts this framework: Europe as consumer, not builder.

But this misses the point. Infrastructure sovereignty is political agency. Control determines whether technology serves social, economic and ecological goals — or whether those goals are reshaped by Big Tech’s imperatives.

“Diffusion” without sovereignty inverts the relationship. Instead of technology serving democratically chosen ends, societies bend to platforms built elsewhere. During COVID, European governments had to follow digital protocols dictated by U.S. firms. The argument boils down to this: Everyone uses AI, but Silicon Valley or Beijing decides what AI exists, which values it encodes and which interests it serves.

Infrastructure is where power is encoded and political choices become technical constraints. Sovereignty means aligning technology with Europe’s social model, climate goals and democratic values. Without this, “diffusion” is nothing more than the efficient distribution of dependency.

For over 15 years, I’ve worked with cities and nations in Europe to turn critique into practice. In Barcelona, Mayor Ada Colau and I rejected the “smart city” model pushed by Big Tech to reimagine how technology could serve democracy. We rewrote procurement to prioritize open source and data sovereignty, launched Decidim — where 70% of city decisions came from citizen deliberation — built systems for digital rights and cryptographic data control and implemented the “Public Money? Public Code!” policy. Cities from Amsterdam to New York followed Barcelona’s example — proof that technology could serve participation over extraction.

The lessons were clear: Sovereignty starts with democratic control of infrastructure, open source enables innovation and citizens demand agency. The fiercest resistance comes not from critics but from incumbents and institutional inertia. That same logic guided my work as the president of Italy’s National Innovation Fund, where state-backed venture capital built deep-tech capacity. Europe’s strengths in bio and healthcare tech, space exploration, quantum computing and advanced manufacturing are proof that deliberate industrial strategy works.

“Infrastructure sovereignty is political agency. Control determines whether technology serves social, economic and ecological goals.”

EuroStack grows from this ground. It isn’t abstract — it’s backed by over 200 European businesses, officially endorsed by France and Germany in national strategies. The infrastructure is taking shape. Schwarz Group’s STACKIT delivers sovereign enterprise cloud from European data centers, giving businesses GDPR-compliant alternatives. OVH, Europe’s largest independent cloud provider, challenges AWS and Azure on European terms. Proton secures communications under Swiss privacy law, showing Europe can compete on security. Ionos’s Nextcloud offers true data sovereignty for collaborative work. EuroHPC pools resources into a continental supercomputing network, giving scientists, startups and industries access to world-class public compute.

On the AI front, Europe shows what deliberate strategy can achieve. Mistral, backed by €1.3B from ASML, patient public capital and French research networks, has become Europe’s leading AI startup. OpenEuroLLM develops models on European data under EU law. Switzerland’s Apertus trains open multilingual models across 1,800+ languages. The Digital Commons initiative builds open-source infrastructure, while Europe invests heavily in RISC-V adoption for computational sovereignty.

But scale matters. Mistral’s valuation is a fraction of OpenAI’s, and its models still rely on Nvidia chips and U.S. cloud infrastructure. Building alternatives means little if European firms continue defaulting to ChatGPT. The real test isn’t technical capability — it’s whether Europe can secure adoption at scale and turn these building blocks into a coherent ecosystem before dependencies become irreversible.

To drive adoption and incentivize European alternatives through strategic procurement or “Buy European” measures, leaders must stop falling for sovereignty-washing — parroting Big Tech’s AI narratives while undermining real autonomy. The U.S.-U.K. Tech Prosperity Act isn’t a path to prosperity — it leads to digital dependency that risks binding Europe tighter to American infrastructure. Every “sovereign AI” deal with NVIDIA, Google, Amazon, OpenAI or Palantir requires hard questions: Who controls the hardware? Which security laws apply? Can vendors resist foreign data demands and export controls? Who captures the value — society or monopolies?

The U.S. and China show that world-class platforms are built on decades of patient institutional funding and structural control — not venture capital alone. Europe’s distinction must lie in its values and political imagination. Silicon Valley optimizes for extraction, Beijing for control. Europe must optimize for empowerment — distributing agency rather than concentrating it.

Sovereignty Through International Digital Cooperation

The EuroStack cannot succeed in isolation. Sovereignty is not autarky. It is strategic independence: shaping technology trajectories, investing long-term, enforcing democratic accountability and building partnerships on shared principles rather than new dependencies.

Unexpected partners are emerging. India’s Digital Public Infrastructure and Brazil’s PIX payment system show how governments can build platforms that serve hundreds of millions without corporate intermediation — though privacy and rights concerns remain in Aadhaar’s implementation. Japan, South Korea and Taiwan offer manufacturing and semiconductor partnerships. Australia, Africa and Latin America bring critical mineral resources and opportunities for collaborative digital commons infrastructure beyond extractive models. These alliances must be substantive — diversifying dependencies, co-developing technologies and setting global standards.

Europe’s opportunity is leading a coalition for digital independence — public-interest AI, sovereign infrastructure, data governance, sustainable supply chains, environmental accountability. Unlike Washington or Beijing, Europe can offer technological partnership without imperial ambition: mutual benefit over dependency, open standards over lock-in, shared digital commons over monopoly control, strong governance and interoperability over surveillance capitalism or state control.

Digital Sovereignty As Democratic Power

This perspective rejects false binaries between Silicon Valley and stagnation, between digital colonialism and analog irrelevance, between planetary evolution and Luddite retreat. These illusions serve those profiting from the status quo by making alternatives unthinkable.

Europe need not choose between innovation and regulation, efficiency and equity, capability and values. The real choice is between democratic and authoritarian technology, sustainable and extractive infrastructure, distributed and concentrated power.

When European hospitals deploy AI diagnostics under strict accountability, they prove that efficiency doesn’t require abandoning oversight. When climate scientists train models on shared data protected by GDPR, they prove that ethics powers innovation. When institutions build open-source AI respecting both privacy and creators’ rights, they prove that democracy strengthens capability.

“Europe’s infrastructural future must encode democracy, sustainability and human dignity.”

The EuroStack isn’t nationalism or autarky. It demonstrates that democratic societies can shape technology, that public interest can outweigh private extraction, that human flourishing matters more than shareholder value and that sovereignty and cooperation reinforce each other.

We can accept permanent dependency, hoping foreign powers govern global infrastructures in our interest. Or we can build democratic alternatives rooted in Europe’s climate commitments, labor protections and social diversity.

Infrastructure encodes power. Whoever builds it, owns it. Whoever owns it, governs it. Europe’s infrastructural future must encode democracy, sustainability and human dignity.

The EuroStack embeds those values into 21st-century foundations. The infrastructure is already taking shape. The question isn’t whether it’s possible — it’s whether it happens on systems we control democratically or on infrastructure controlled by interests opposed to our own.

The post Reclaiming Europe’s Digital Sovereignty appeared first on NOEMA.

]]>
]]>
Is European AI A Lost Cause? Not Necessarily. https://www.noemamag.com/is-european-ai-a-lost-cause-not-necessarily Tue, 30 Sep 2025 14:45:30 +0000 https://www.noemamag.com/is-european-ai-a-lost-cause-not-necessarily The post Is European AI A Lost Cause? Not Necessarily. appeared first on NOEMA.

]]>
Editor’s Note: Noema is committed to hosting meaningful intellectual debate. This piece is in conversation with another, written by Italian digital policy advisor Francesca Bria. Read it here: “Reclaiming Europe’s Digital Sovereignty.”

Europe has had a conflicted relationship with modern technology. It has both innovated many of the computing technologies we take for granted — from Alan Turing’s conceptual breakthroughs to the World Wide Web (WWW) — and fostered some of the most skeptical and elaborate critiques of technology’s purported effects. While Europe claims to want to be a bigger player in global tech and meet the challenge of AI, its most prominent critics are quick to second-guess any practical step toward that goal on political, ethical, ecological and/or philosophical grounds.

Some policymakers envision a continental megaproject to construct a fully integrated European tech stack. But this approach is inspired by an earlier era of computational infrastructure — one based not on AI but on traditional software apps — and moreover, the political and “ethical” qualifications they also impose upon it are so onerous that the most likely outcome is further stagnation.

Lately, however, something has shifted. The wake-up calls are becoming louder and more frequent, and they have been arriving from sometimes unlikely sources. Emmanuel Macron, J.D. Vance and Berlin artist collectives may seem like unlikely allies, but they all agree that Europe’s “regulate first, build later (maybe)” approach to AI is not working. The propensity for Europe to operate this way has only resulted in greater dependency and frustration, rather than the hoped-for technological sovereignty. While Trump’s erratic approach to U.S.-European relations may be the proximate cause for the strategic shift, it is long overdue. But unless that groundswell is able to gain permanent traction in the realm of ideas, this momentum will dissipate. Given the considerable effort by tech critics across the political spectrum to prevent this progress, securing it is easier said than done.

Internet meme response to the European Union AI Act.

A “Eurostack” can be defined in different ways, some visionary and some reactionary. Italian digital policy advisor Francesca Bria and others define it as a multilayer software and hardware stack, in a plan that draws inspiration from my 2015 book, “The Stack: On Software and Sovereignty.” Specifically, it builds on a diagrammatic vision of critical infrastructure, with a stack that includes chips, networks, the variety of everyday connected items known as the internet of things, the cloud, software and a final tacked-on layer called data and artificial intelligence. This is, however, a variation of the stack of the present, not the future. My book’s “planetary computation stack diagram” will soon be republished in a 10th anniversary edition. A decade is a lifetime in the evolution of computation.

Bria’s vision is not future-facing. The European stack she proposes as a plan for the next decade should have been built 15 years ago. Why wasn’t it? Europe choked its own creative engineering pipeline with regulation and paralysis by consensus. The precautionary delay was successfully narrated by a Critique Industry that monopolized both academia and public discourse. Oxygen and resources were consumed by endless stakeholder working groups, debates about omnibus legislation and symposia about resistance — all incentivizing European talent to flee and American and Chinese platforms to fill the gaps. Not much has changed, which is why the momentum of the moment is both tenuous and precious.

If contemporary geopolitics is leading us to think of stack infrastructure more in terms of hemispheres than nations, then unsurprisingly, the hemispherical stack of the future is built around AI — and not through the separation of AI into some “final layer” as Bria has it. Just as classical computing is different from neural network-based computation, the socio-technical systems to be built are distinct as well. This is not a radical or contentious argument, but it’s one that many prominent intellectuals fail to fully grasp. As such, it’s worth reconstructing how Europe got to where it is. The conclusion should not be that it’s too late for Europe to be a major player in the tech world, especially where AI is concerned, but rather that it will need to commit to the coming opportunities as they arise and to their implied costs.

As we’ll see, that’s easier said than done. 

Merchants Of Torpor In Venice

Recently, I had the pain/pleasure of joining a panel at a conference called “Archipelago of Possible Futures” at the Venice Architecture Biennale organized by Bria and cultural researcher José Luis de Vicente, to discuss the prospects of a new Eurostack. The other panelists were two of the most well-known technology and AI skeptics, Evgeny Morozov and Kate Crawford, as well as the architect Marina Otero Verzier. 

The conversation was … lively.

It was also confusing. By the end, I think I was the only one arguing that Europe should build an AI Stack (or something even better) rather than insisting that, in essence, AI is superproblematic and thus Europe should resist doing this superproblematic thing. There are many reasons for caution, including questions of power, capitalism, America, water, energy, privacy, democracy, labor, gender, race, class, tradition, indigeneity, copyright, as well as the general weirdness of machine intelligence.

“Emmanuel Macron, J.D. Vance and Berlin artist collectives may seem like unlikely allies, but they all agree that Europe’s “regulate first, build later (maybe)” approach to AI is not working.”

The maneuverable space that would satisfy all these concerns is visible only with a microscope. Remember, this was ostensibly a panel on how to actually build the Eurostack. The other panelists likely see it differently, but to me, it’s not possible to rhetorically eliminate all but a few dubious and unlikely paths to an AI Eurostack and still claim to be its advocate. That self-deception is an essential clue about Europe’s stack quandary.

During the panel, I made my case by first asking why Europe doesn’t already have the Eurostack it wants, then recounting the disappointing recent history of techno-reactionary instincts (such as the anti-nuclear power politics discussed below), the hollowness of the now-orthodox critical-academic stances on AI and the problems this approach poses for Europe’s plans, before making a truncated plea for reason and action. In short, Europe should build AI — focusing on AI diffusion rather than solely new infrastructure — and stop auto-capitulating to elite bullies and fearful reflexes. The panel’s responses were animated, predictable and mutually contradictory.

Morozov is the author of many serious books and articles published across Europe’s left-leaning media, and therefore one of the most widely-read pundits on the dangers of American internet technologies and the need for strong “digital sovereignty” for Europe (and others). He is also the instigator of several interesting projects, including an in-depth podcast series on British cybernetician Stafford Beer’s failed Cybersyn project that was meant to govern Salvador Allende’s socialist Chile through a vast information economic matrix linking into a futuristic control center. Cybersyn was never built as it was envisioned, in reality, but it is up and running in the dreams of intellectuals in some purified alternate reality where cybersocialism runs the world. Morozov popularized the term “technological solutionism,” which, due to inevitable semantic decay, is now a term used by political solutionists to denigrate any attempt to physically transform the infrastructures of global society that demotes their own influence.

As the panel wore on and voices grew louder and more self-revealing, it became clear that Morozov does indeed want robust European AI but only on narrow, rarefied terms that resemble something like Cybersyn, and which would, in principle, sideline large private platforms, especially American ones, with whom the Belarusian émigré is still fighting his own personal Cold War.

Crawford is an Australian researcher and author, most notably of “Atlas of AI,” a book that superimposes the American culture war curriculum, circa 2020, onto the amorphous specter of global AI. She is adept with one-liners like “AI is neither artificial nor intelligent,” a statement that has been so oft-quoted in her interviews that no one stops to ask what it means. So, what does it mean? Crawford’s explanation is that “Artificial intelligence is both embodied and material, made from natural resources, fuel, human labor, infrastructures, logistics, histories, and classifications.” This is, however, exactly what the term “artificial” means. She says that AI is not actually intelligent because it works merely through a bottom-up anticipatory prediction of next instances based on global pattern recognition in the service of means-agnostic goals. Again, this is a central aspect of how “intelligence” (from humans to parrots) has been understood by everyone from William James to the predictive processing paradigm in contemporary neuroscience.

Along with visual artist Vladan Joler, Crawford is co-creator of “Calculating Empires,” a winner of the Silver Lion award at the Biennale and an ongoing diagrammatic exercise in correlation and causality confusion that purports to uncover the dark truth of what makes computational technologies possible. The sprawling black wallpaper looks like it is saying something profound, but upon more serious inspection, one discerns that it simply draws arrows from your phone to a copper mine and from a data center to the police. The work resembles information visualization but attempts to make no analytical sense beyond a quick emotional doomscroll. It is less a true diagram than stylized heraldry built of diagramesque signifiers and activist tropes freely borrowed from others’ work. Crawford’s concluding remark on this panel was that Europe has a clear choice when it comes to AI: to passively acquiesce to American techbro hegemony or to actively refuse AI. As she put it bluntly, accept or fight!

For her part, Otero, a Spanish architect teaching at Harvard and Columbia, shared her successes in helping to mobilize resistance to the construction of new data centers in Chile. When asked for her summary position, she initially, somewhat in jest, summed it up with one word: “communism.”

“If Europe builds the Eurostack that it wants — and that it says it needs — it will be because it creates space for a different culture, discourse and a theory of open and effective technological evolution.”

So there you have it. On a panel on how Europe might build its own AI Stack, we heard highlights from the last decade of an intellectual orthodoxy that has contributed upstream to a politics through which Europe talks itself out of its own future. How to build the Eurostack? Their answers are: hold out for the eventual return of an idealized state socialism, declare that AI is racist statistical sorcery, “resist,” stop the construction of data centers and, of course, “communism.” By the end, I think the panel did a very good job exploring exactly why Europe doesn’t have the Eurostack that it wants, just not in the way the organizers intended.

The Actual (Sort-Of) Existing Eurostack

What to make of this? If Europe builds the Eurostack that it wants — and that it says it needs — it will be because it creates space for a different culture, discourse and a theory of open and effective technological evolution that is neither a copy of American or Chinese approaches nor rooted in its own moribund traditions of guilt, skepticism and institutionalized critique. The “Eurostack” that results may not even be the reflection of Europe as it is, or as it imagines itself, but may rather become a means for renewal.

The answer to the oft-posed question “Why doesn’t Europe already have a Eurostack?” is that it does — sort of.  Successful European AI companies do exist. For example, Mistral, based in Paris, is a solid player in the mid-size open model space, but it is not entirely European (and that’s OK!) as many of its key funders are West Coast venture capitalists and companies. Europe is also an immensely important contributor to innovation, implementation and diffusion of some of the most significant platform-scale open source software projects: Linux, Python, ARM, Blender, Raspberry Pi, KDE and Gnome, and many more. This, however, is not a “stack” but rather, as the Berggruen Institute’s Nils Gilman puts it, “a messy pile.” Crucially, the success of these open-source projects is not because they fortify European sovereignty, but rather because they are intrinsically anti-sovereign technologies, at least as far as states are concerned. They work for anyone, anywhere, for any purpose; this is their strength. This goes against the autarchist tendencies of much of European technology discourse and symbolizes the internal contradictions of the “sovereignty” discourse. Is it sovereignty for the user (anywhere, anytime access), sovereignty for the citizen (their data cozy inside the Eurozone and its passport system), or sovereignty for the polis (the right of the state to set internal policies)?

Europe’s interests and impulses are conflicted. It wants a greater say over how planetary technologies and European culture intermingle. For some, that means expunging Silicon Valley from its midst, but Europe also wants the world to use its software and adopt its values. For those who make up the latter position, the vision of the EU as a “regulatory superpower” setting the honorable rules that we all must adhere to is a tempting substitute for a sufficient, defensible geopolitical position. “Sovereignty for me, Eurovalues for thee.” For others, the hope is for Europe to get on with it and build its own real infrastructural capacity. Heckling from the front is a commentariat fixated on the social ills of AI, social media, data centers and big technology in general. For them, mobilizing endless proclamations as to why a Eurostack is preferable in theory will somehow facilitate building the very thing itself, or for others, prevent such an atrocity altogether.

Put plainly, for Europe to succeed in realizing its most impactful contributions to the planetary computational stack, it must stop talking itself out of advancement and instead cultivate a new philosophy of computation that invents the concepts needed to compose the world, not just deconstruct it or preserve it like a relic. The Eurostack-to-come cannot just try to catch up with 2025, and it cannot be manifested simply by harm-reducing legislation or by “having the conversation” about new energy sources, new chip architectures, new algorithms, new modes of human-AI interaction design, new user/platform relations, but rather by harnessing the depth of European talent to make them. Many Europeans get this and are eager to build. It’s time for European gatekeepers to get out of their own way.

I would love to see Europe build its own stack technologies — amazing new things that are impossible to conceive of or realize here in California. I am eager to use them. But as the Venice Biennale panel demonstrated, many esteemed intellectuals offer only reasons why any path to doing so would be problematic, unethical, dangerous and/or require projects to first pass ideological filters so fine-grained that they are disqualified before any progress is possible.

“Europe must stop talking itself out of advancement and instead cultivate a new philosophy of computation that invents the concepts needed to compose the world, not just deconstruct it or preserve it like a relic.”

The end result is that today the EU has AI regulation but not much AI to regulate, leaving European nations more dependent on U.S. and Chinese platforms.

This is what backfiring looks like.

How It Started

Speaking of backfiring, this isn’t the first time that Europe has found itself deeply conflicted over the development of a powerful new technology that holds both great promise and peril. Nor is it the first time that such a conflict has been motivated or distorted by prior cultural and political commitments. Europe’s cultural preservationist instincts — becoming only more acute as populations age and demographics shift — push it toward caution, and so it ultimately loses out on the benefits of the new technology while also suffering the losses brought by the chosen alternative. Unfortunately, Europe may be making many of the same errors when it comes to AI. To get to the root of the matter and to understand this as a more general disposition, we must revisit the early 1970s.

Amid a Cold War-divided Germany and cultural-political unrest across the continent, nuclear power plants were an emerging technology that promised to bring carbon emissions-free electricity to hundreds of millions of people, but not without controversy. Health and safety concerns were paramount, if not always objectively considered. Thematic associations of nuclear power with nuclear weapons, military-industrial power and The Establishment all contributed to a psychological calculus that made it, for some, a symbol of all that must be resisted. Nowhere was this more true than in Germany, and for this it has paid a heavy price.

Remember the “Atomkraft? Nein Danke” sticker? This smiling yellow message, which originated in Denmark and spread globally, was emblematic of a movement that helped define an era. West German anti-nuclear and anti-technocratic politics coalesced in the 1970s with protests against the construction of a power plant in Wyhl that drew some 30,000 people. Protestors successfully blocked the plant and from there gained momentum. In 1979, the focus turned to the United States, where an odd mix of fiction, entertainment, infrastructural mishap and groupthink defined the cultural vocabulary of post-Watergate nuclear energy politics. March of that year saw the theatrical release of “The China Syndrome,” a sensationalistic thriller about a nuclear plant meltdown and deceitful cover-ups starring Jane Fonda as the heroic activist news reporter who sheds light on the dangers. As if right on cue, 12 days after the film’s release, the Three Mile Island nuclear plant in Pennsylvania suffered a partial meltdown in one of its two reactors. Public communications around the incident were catastrophically bad, and a global panic ensued. Nuclear energy infrastructure was now seen with even more suspicion.

In the United States, folk-pop singer Jackson Browne co-led the opposition against the use of nuclear energy reactors, organizing “No Nukes” rock mega-concerts — solidifying anti-nuclear power politics and post-counterculture yuppie-dom as interwoven visions. In Bonn, Germany, 120,000 marchers responded by demanding that reactors be shut down, and many were. The spectre of future mass deaths was a paramount concern. Surely, Pennsylvania was about to suffer a horrifying wave of cancers over the coming years. In fact, the total sum of excess cancers was ultimately tallied to be zero. (The total number at Fukushima, other than a single worker inside the plant? Also zero.) This fact did not matter much for the public image of nuclear power, then or now.

The terminology used by those opposing nuclear energy is familiar to our ears today: “technocratic,” “centralized,” “rooted and implicated in the military,” “promethean madness,” “existential risk,” “extractive,” “techno-fascist,” “toxic harms,” “silent killer,” “waste,” “technofix delusion,” “fantasy,” etc. Across visual culture, the white clouds billowing from large concrete reactors became an icon of industrial “pollution” — even though water vapor does not pollute the air. The cultural lines had been drawn.

How It’s Going

One can discern the impacts of Germany shutting down its nuclear plants for environmental, health and safety reasons by comparing it with France, which gets roughly 70% of its electricity from nuclear power. The results are stark and, spoiler alert, bad for Germany. Germany’s average CO2 emissions per kWh today are seven times higher than France’s, and its CO2 emissions per person are 80% higher. Germany turned to solar and wind (great) and oil and gas (not great) for electricity, a transformation that has had extremely negative health effects. It now gets roughly 25% of its electricity from coal (France is close to zero). Because of this disparity, Germany tolerates roughly 5,500 excess deaths from coal-related illness annually, while France’s number is closer to 1,000. That’s 450% higher.

“The same terms used to vilify nuclear power — ‘techno-fascist,’ ‘extractive,’ ‘existential risk,’ ‘Promethean madness’ and ‘fantasy’ — are now regularly voiced by today’s Critique Industry to describe AI.”

One might surmise that this has nevertheless prevented the deaths from large nuclear power accidents such as Three Mile Island, Fukushima and Chernobyl. Once more, the total population deaths officially attributed to radiation-induced cancers at the first two combined add up to zero. Chernobyl, however, was much more serious. In 2005, the World Health Organization estimated that around 4,000 deaths were attributable to Chernobyl. Still, those deaths are fewer than the total number of excess coal deaths per year in Germany attributable to having shut down its nuclear power capacity.

Think about it: In order to prevent the deaths that a once-in-a-generation nuclear plant accident may cause, Germany’s Green Party-led policies inflict the equivalent of 1.25 Chernobyls per year on the population.

A comparison of the relative performance, on several ecological metrics, of the French nuclear baseload energy grid and the German baseload energy grid that has eliminated nuclear power.

The consequences for Germany have also been political. Some of the indirect accomplishments of the anti-nuclear power, anti-megatechnology, anti-“promethean techno-fascist fantasy” movement were, for Germany, greater greenhouse gas emissions, more deaths and more nationalist populism. Shutting down nuclear plants led to greater dependency on imported Russian oil and gas to power the economy, which in turn allowed Russia to use its ability to turn its pipelines on and off as a tool to influence Germany’s politics and charge more for energy. This has contributed to economic downturn and stagnation, which has decisively helped the rise of the far-right nationalist party, Alternative für Deutschland.

What happened? A well-meaning popular movement, backed by intellectuals and influencers, motivated by technology-skeptic populism and environmental concerns, successfully arrested the development and deployment of a transnational megatechnology and ended up causing even larger direct and indirect harms.

This, too, is what backfiring looks like.

AI Nein Danke?

Two images showing different generations of a common German technopolitical subculture, each mobilized around the popular refusal of complex large-scale infrastructure, both with self-defeating consequences.

The takeaway from what happened with nuclear power would be to learn from this history and, most importantly, not do it again. Do not ban, throttle or demonize a new general-purpose technology with tremendous potential just because it also implies risk. The precautionary principle can be literally fatal. And yet that is precisely what is happening around the newest emerging technological battleground: artificial intelligence.

The same terms used to vilify nuclear power — “techno-fascist,” “extractive,” “existential risk,” “Promethean madness” and “fantasy” — are now regularly voiced by today’s Critique Industry to describe AI. To map the territory, I collected some of the greatest hits from contemporary academics in the humanities.

“AI is …” A representative but non-exhaustive selection of provocative characterizations of AI from contemporary Humanities books, articles and lectures. A representative index of the authors from whose work these ideas are sampled includes: Matteo Pasquinelli, Kate Crawford, Emily Bender, Alex Hanna, Dan McQuillan, Jordan Katz, Ruha Benjamin, James Poulos, Ted Chiang, Vladan Joler, Ruben Amaro, Shannon Vallor, Safiya Noble, Meredith Whittaker, Evgeny Morozov, Timnit Gebru, Byung-Chul Han, Yvonne Hofstetter, Manfred Spitzer, Gert Scobel, Nicholas Carr, Geert Lovink, Éric Sadin, James Bridle, Helen Margetts, Carole Cadwalladr, Adam Harvey, Joy Buolamwini, Wendy Hui Kyong Chun, Yuk Hui, and of course Adam Curtis.

Looking this over, my first thought is “Ask an academic what they think is wrong with the world and I can tell you what they think of AI.” Quite clearly, for many of them, AI seems to be not just a technology but what psychoanalysis would call a fetishized bad object. They are both repulsed and fascinated by AI; they reject it, yet can’t stop thinking about it. They don’t know how it works, but can’t stop talking about it. These statements above are less acts of analysis than they are verbalized nightmares of a 20th-century Humanism gasping for air and clawing for a bit more life.

In many cases, these eschatologies, often issued from Media Studies departments and Op-Ed pages, are not only non-falsifiable claims, but also aren’t even meant to be debated. This is Vibe Theory: an expression of elite anxiety masquerading as a politics of resistance. It is also exemplary of what the tragic ur-European philosopher Walter Benjamin once called the “aestheticization of politics,” which in this case is the result of the odd incentives that ensue when the art world makes the invitations, pays the speaker fees and publishes the essays about how culture will save us. The aesthetic power of the critical gesture is confused with reality.

More importantly, the cumulative effect of this academic consensus is not ethical rigor but general paralysis. Fear-mongering is not the way to convince people to find agency in emerging machine intelligence and incentivize creating and building. It is how a few incumbent cultural entrepreneurs try to fill the moat around their own increasingly tenuous status within institutions struggling to keep up with profound changes.

Your Personal ‘Oh Wow’ Moment

European AI should not just focus on building copycat models, but on society-scale AI diffusion, such that everyone gets to use AI for what is most interesting and important to them. But getting there is an uphill battle because, unfortunately, those who should be promoting this sort of diffusion are impeding it.

“European AI should not just focus on building copycat models, but on society-scale AI diffusion, such that everyone gets to use AI for what is most interesting and important to them.”

As a member of a faculty committee at the University of California, San Diego, I recently experienced some of the downstream effects of AI abolitionist ideas, but also how quickly the story changes when people actually use AI to do something meaningful for them. The committee has been charged with writing a statement of principles for how AI should be used in research and teaching. I was shocked by some of my colleagues’ thoughts on the matter.

Here is a sampling of (anonymous) comments I wrote down from my conversations with my university faculty: “I don’t want my students using a plagiarism machine in my class”; “The university should ban this stuff while we still can”; “It has been proven that AI is fundamentally racist”; “The techbros stole other people’s art to make a giant database of images”; “You know who likes AI? The IDF and Elon Musk, that’s who.”

Remember, these are the people responsible for determining how a top university puts these technologies to use. However, at some point in our conversations, the tide shifted. It began when a 70-year-old history professor spoke up, “I don’t know, last night I spent four hours with it talking about Lucretius. … It came up with things I had never thought of. … It was the most fun I’ve had in a long time.”

This is not atypical. Over the past several months, I have noticed a change. More and more people have told me — confiding in me as if admitting to something naughty — of a singular, interesting engagement with AI that really delighted them. They saw something they could do with AI that is important for them. They figured out how they personally could make something with AI that they could not before. After that, their opinion changed. I saw this on the faculty committee, too. A once very skeptical theater professor told me how she was now using Anthropic’s Claude to generate written score notations based on ideas for new dance performances. She was thrilled.

As of August 2025, OpenAI claimed 800 million unique users of ChatGPT per week. It’s hard to gaslight 800 million people by telling them this stuff is bogus and bad. Yet the term I have heard from some esteemed critics when presented with such moments of agency-finding is “seductive.” “Yes, of course, the technology is seductive.” They dismiss what you feel — wonder, curiosity and awe — and say that it is actually merely desire, and “as we all know, desire is deception.” Ultimately, this awe-shaming is paralyzing.

So What Now?

There are many reasonable ways to question my provisional conclusions, some more productive than others. Robust debate is important, but sometimes it seems as if “having the conversation” is all that Europe truly wants to do. It is excellent at this, and the necessarily global deliberation on the future of planetary computation often comes to Europe to stage itself, and for this, we should be grateful. For Europe, however, the conversation must eventually rotate into building; otherwise, it degrades into increasingly self-fortifying critique for its own sake.

Some may argue that if “critique” is exactly what is most under attack by the rise of populist nationalism, then isn’t critique what is most needed, now more than ever? Won’t the autonomy of culture lead us away from this malaise? I am doubtful. If anything, the present mode of populism and nationalism overtaking much of the world can be seen as what happens when a culture’s preferred narrativization of reality overtakes any interest in brave rationality and the sober appreciation of collective intelligence as a technologically mediated accomplishment. If populist nationalism is the “cultural determinist” view of reality in a grotesquely exaggerated mode, it is unclear why doubling down on culture’s autonomy is the obvious remedy.

Arguably, the self-defeating anti-nuclear politics of past decades were essentially a cultural commitment more than a policy position. A pre-existing cleavage between generations, classes, counter-elites, and ensuing tribal psychologies was imprinted onto the prospect of generating electricity from steam power driven by nuclear fission. In parallel, the political right’s dislike of solar power, which it views as a hippie-granola, fake solution, is based not on any real analysis of photovoltaic panel supply chains and baseload energy modeling, but rather on the fact that public infrastructure is now culturally overcoded. Maybe “culture” is another culprit, not a panacea?

“Europe has the right to put its AI under ‘democratic control’ and supervised ‘consent’ if it wants to, but it does not have a right to be insulated from the consequences of doing so.”

When extreme voices declare that Europe is “colonized” by foreign technology and must cast out the invasive species from Silicon Valley, their energy doesn’t exactly contradict the ambient xenophobia of our moment. As a placebo policy, import substitution tariffs do not work (someone please tell Trump). Autarchy is the infrastructural theory of populists, including but not exclusively autocrats. At its worst, the EU stack discourse lapses into dreams of absolute techno-interiority: “European data” about Europeans running on European apps on European hardware, perhaps even a European-only phone made solely from minerals mined west of Bucharest and east of Lisbon that runs on a new autonomous European-only cell standard and powered by a Europe-only wall plug for which by law no adapters exist. Blood and Soil and Data!

Europe surely can and should regulate the emergence of AI according to its “values,” but it must also be aware that you can’t always get what you want. Europe is free to attempt to legislate its preferred technologies into existence, but that doesn’t mean that the planetary evolution of these technologies will cooperate. If, as some economists estimate, EU AI regulations will result in a 20% drop in AI investment over the next four years, that may or may not be a good premium on digital sovereignty. It is up to Europe to decide. That is, Europe may have strong AI regulation, but this may actually prevent the AI it wants from being realized at all (again making it more reliant on American and Chinese platforms). Europe has the right to put its AI under “democratic control” and supervised “consent” if it wants to, but it does not have a right to be insulated from the consequences of doing so.

What We All Want?

In the end, it may be that all of the Venice Biennale panelists’ hopes (mine included) for what a global society mediated by strongly diffused AI looks like are more similar than different. As I put it to the panel, we might define this roughly as “a transnational socio-technological utility that is produced and served by multiple large and small organizations that provides inexpensive, reliable, always-on general synthetic intelligence and related services to an entire population who build cities, companies and cultures with this resource in an open and undirected manner, raising the quality of life and standard of living in ways unplanned and not limited by the providing organizations.” Diverse functional general intelligences on tap may have social implications similar to those that electricity on tap (nuclear or not) had for previous generations. More than a “tool” in and of itself, AI makes new classes of technologies possible. We should want more value to accrue through the use of a model than by the creation of the model itself. Broad riches built upon narrow riches.

So then why all the panic and misinformation? Think of it this way. What if I told you there was a hypothetical machine that integrated the collective professional information, processes and agency that have been made artificially scarce, concentrated not just in the “Global North” but in a dozen cities and two dozen universities in the Global North, and which now makes available functional, purposive and simple access to all this through intuitive interfaces, in all languages at once, for a monthly subscription rate similar to Netflix, or even for free? This machine is less a channel for multipoint information access than a generative platform for generative agency as open as collective intelligence itself.

Would you not be suspicious of gatekeepers who demand the arrested evolution of this machine’s global diffusion because, in their words, it is not worth the electricity necessary to power it?  Because it makes people dependent on centralized infrastructure or because it was developed by capitalism (and lots of publicly funded research)? Because it will transform the educational and political institutions on which democratic societies have depended, and may especially destabilize the social positions of those who have piloted those institutions? Yes, you would be right to be suspicious of them and their deeper motives, as well as the motives of their funders. You would be right to be suspicious of ideological entrepreneurs from across the political spectrum who demand to personally “audit” the models, who demand legal “compliance” to be constantly certified by political appointees, who seek to bend the representations of reality that models produce, and who seek to use them to further medievalist visions and totalitarian impulses. I hope that you are indeed suspicious of them today.

“This is a net gain for those outside of Bubbleworld but a net loss for the Ivy League (Sorry, not sorry).”

The biggest potential beneficiaries of this resource are those whose own intelligence and contributions are at present destructively suppressed by the artificial concentration of agency. They may be mostly from the same “Global South” that the gatekeepers use as a rhetorical human shield to plead their case for their own luxury belief system — affordable only to those for whom access is all they have ever known. Everywhere, the biggest benefits of on-tap functional general intelligence may accrue to individuals working outside those zones of artificially scarce agency. Large corporations already have access to a diverse range of expert agents; now so does everyone else — in principle. This is a net gain for those outside of Bubbleworld but a net loss for the Ivy League (Sorry, not sorry).

Perhaps then my goals are not the same as those of the other panelists, after all. Perhaps there is a disagreement not only about means but also about ends. Perhaps their Lysenkoist reflexes are non-negotiable, unwilling to grant that large capitalist platforms could innovate something fundamentally important, because of or in spite of their being large capitalist platforms. Perhaps the tight embrace of the conclusion that AI is intrinsically racist, sexist, colonialist and extractivist (or, for other ideologues, intrinsically woke, globalist, elitist, unnatural) is so devout that they must dismiss any evidence to the contrary, convincing themselves and their constituents not to be seduced by the reality they see before them.

Convincing people that AI is both about to destroy their culture and also fake does not result in more agency or more universal mediation of collective intelligence, but less. The result is paralysis, lost opportunities, wasted talent and greater European dependency on American and Chinese platforms, as well as the further entrenchment of entrepreneurial tech critics defending their turf and drawing boundaries between acceptable and unacceptable alternatives.

This is what it looks like to backfire in real time.

The post Is European AI A Lost Cause? Not Necessarily. appeared first on NOEMA.

]]>
]]>
A Diverse World Of Sovereign AI Zones https://www.noemamag.com/a-diverse-world-of-sovereign-ai-zones Fri, 26 Sep 2025 17:21:45 +0000 https://www.noemamag.com/a-diverse-world-of-sovereign-ai-zones The post A Diverse World Of Sovereign AI Zones appeared first on NOEMA.

]]>
A decade ago, Benjamin Bratton published a groundbreaking book on planetary-scale computation titled “The Stack: On Software and Sovereignty.” The director of the Berggruen Institute’s Antikythera project argued that “geopolitical dynamics today revolve around computation. Data is now a sovereign substance, something over which and from which sovereignty is claimed. Cloud platforms take on roles traditionally performed by modern states, crossing national borders and oceans. Meanwhile, states morph into cloud platforms.”

Bratton saw that the segmentation of planetary computation into multipolar zones, or “hemispheres,” would shape geopolitics going forward. Each hemisphere would consist of vertical layers to form a single interlocking system, or “stack.” The raw materials and energy required for computation constitute the first layer, followed by cloud services for storing and processing data, the local experience interacting with the computational network, then the system of identification for users and, finally, the system interface with users.

His conceptual map of this new configuration of power is becoming manifest with the present competition between China and the U.S. over who will dominate AI. Conflicts between these gigantic “hemispherical stacks” revolve around data sovereignty, the “chip wars” over hardware and the foundational models that reflect different political, cultural and civilizational values.

The battle of the giants, however, is not the end of the story because computation is planetary with evolving stack architectures that will be as diverse as the social complexes that shape them.

Vietnam’s Third Stack

Writing in Noema from Hanoi, Dang Nguyen explains how Vietnam has refused to adopt either Chinese or American models of AI and instead is building its own “core tech stack” of language models, cloud infrastructure and even training data.

Harkening back to wartime slogans from when Vietnam fought the U.S., and later China in a lesser conflict, Nguyen quotes the head of one of the country’s leading IT companies saying “nothing is more precious than independence and freedom” when it comes to digital sovereignty, no less than territorial integrity.

Describing the proud nation’s self-determined temper, she writes: “The familiar poles of AI politics — Silicon Valley’s proprietary platforms and Beijing’s centralized infrastructure — are never named, but everyone understands what is being contested: who gets to define the terms of intelligence itself. The stakes are stack-level choices — black-box dependence or modular improvisation; opacity or legibility; someone else’s roadmap or a sovereign design of your own … The decision is the difference between consuming intelligence as a service and composing it as an act of sovereignty. One rents a mind, the other trains its own in the wild.

“This is, in essence, a claim to AI sovereignty — the ability to build and govern infrastructures on Vietnam’s own terms while still enabling cross-border flows of data, talent and computation. AI sovereignty here does not mean isolation, but authorship — deciding which data, models and rules shape, and will shape, how machine intelligence is built and deployed.

“In short, Vietnam is not picking sides. It is building a third stack.”

Infrastructural Non-Alignment

Again, echoing earlier geopolitical references to the post-colonial non-aligned movement of the 1950s-60s that sought to thread a neutral path between contending communist and capitalist powers, Nguyen sees a different map of sovereignty emerging today. “The sharper fault line now runs not between nations but infrastructures — between the guarded logic of proprietary systems and the unruly emergence of open-weight models; between centralized command and distributed improvisation; between the doctrine of safety and the discipline of scrutiny.”

To put it in practical terms: “If OpenAI, Anthropic, and Google DeepMind’s frontier models have largely represented the logic of enclosure, then more open-weight projects like DeepSeek and Meta’s Llama — not fully open-source but released in ways that allow retraining and scrutiny — gesture toward a counter-current that is partial, constrained, yet powerful in its transnational diffusion. Even as OpenAI has more recently released ‘open models,’ the broader movement of open-weight diffusion cuts across borders, destabilizing the notion that AI will crystallize into two superpower-led blocs.

“In other words, culture is not what is being exported; technology stacks are.”

Open Vs. Closed AI Models

When the Chinese company DeepSeek unveiled an open-source model that proved on par with the mostly closed-source advanced AI models in the U.S., former Google CEO Eric Schmidt pointed out that open-source models promote innovative collaboration by allowing universities, researchers, smaller companies and countries to participate in AI development beyond the confines, and without the expense, of proprietary systems. He warned that the U.S. would fall behind if it did not move more toward open-source models. The trade-off, Schmidt argued, is that this very openness carries the risk of misuse by malicious programmers. In the end, he believes, an equilibrium between closed and open systems will likely evolve.

“While computational stacks are spinning a planetary web of communication, a singular monosystem is not what is emerging.”

I once asked Kai-Fu Lee, one of China’s leading AI entrepreneurs, whether state censorship there would distort the training of large language models compared to those in the West. His response essentially affirmed Bratton’s notion of hemispherical stacks.

LLMs will indeed carry the imprint of cultural-political values, he posited, not only in China, but everywhere. Different cultural zones with different values will censor different things. While the Chinese state might censor any criticism of the Party, in the West there is a kind of culturally driven “woke” or “anti-woke” censorship over sensitive speech on race and gender. In the Islamic world, there will be censorship over blasphemy against the Prophet Muhammad. Each “great space” will align what is acceptable or not in its LLMs according to its own sensitivities.

While computational stacks are spinning a planetary web of communication, a singular monosystem is not what is emerging. Neither will the new configurations merely replicate historically defined territorial boundaries. Rather, the new map will blur into zones of influence where the weight of the major powers will be tempered by the diverse virtual territories of computational stacks adapted to the sovereignty of their own cultivated ways.

The post A Diverse World Of Sovereign AI Zones appeared first on NOEMA.

]]>
]]>
Reimagining School In The Age Of AI https://www.noemamag.com/reimagining-school-in-the-age-of-ai Thu, 25 Sep 2025 13:36:40 +0000 https://www.noemamag.com/reimagining-school-in-the-age-of-ai The post Reimagining School In The Age Of AI appeared first on NOEMA.

]]>
Last winter in New England, after a stretch of frigid days that killed my outdoor exercise ambition, I set up a Wahoo Kickr in my garage. The device replaces a road bike’s rear wheel, turning it into a “smart” indoor trainer. When paired with specialized workout apps, it automatically adjusts pedaling resistance to simulate real-world terrain: foothills, flats and even mountains.

My first session began with a ramp test. Here’s how it works: You start at a low resistance level, which ratchets up every 60 seconds until you can no longer keep pace. I persevered for just over 22 minutes. By the end I was dizzy and dripping with sweat, my heart rate at 166 beats per minute — redline for someone my age.

That failure point is used to estimate “functional threshold power” (FTP): the highest average wattage your legs can sustain for an hour. Some cycling apps use FTP to personalize your workouts, building a plan optimized to make you faster. If you miss a week, the system adapts by reducing the workload. If you progress quickly, harder sessions follow. Your FTP becomes a dynamic baseline as opposed to a static score.
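
For the technically curious, the arithmetic behind that estimate is simple. Different apps use slightly different formulas, but a common rule of thumb derives FTP from roughly 75% of the best one-minute average power recorded during the ramp test. The sketch below is a minimal, illustrative version of that calculation, not any particular app’s implementation; the function name and the 0.75 factor are assumptions for the example.

```python
# Minimal sketch: estimate FTP from per-second ramp-test power readings.
# Assumes one wattage sample per second; the 0.75 factor is a common rule of
# thumb used by some consumer platforms, not a universal standard.

def estimate_ftp(power_readings, window_seconds=60, factor=0.75):
    """Return an FTP estimate (watts) from a ramp test's power readings."""
    if len(power_readings) < window_seconds:
        raise ValueError("Need at least one full minute of data.")
    rolling_sum = sum(power_readings[:window_seconds])
    best_sum = rolling_sum
    for i in range(window_seconds, len(power_readings)):
        # Slide the 60-second window forward one sample at a time.
        rolling_sum += power_readings[i] - power_readings[i - window_seconds]
        best_sum = max(best_sum, rolling_sum)
    return factor * (best_sum / window_seconds)

# A rider whose best final minute averaged 250 watts would get an estimate near 187.
```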

The results, for me, were impressive; I rolled into spring outdoor riding at near-peak fitness levels. But beyond the physical gains, I was struck by the system’s design. It had adapted in real time, identifying my initial capacity and responding with precision and flexibility to help me improve. AI-powered training platforms now analyze millions of workouts from athletes around the world, then use that data to deliver increasingly efficient, personalized training plans. Because these platforms learn continuously, each new user strengthens them, making the feedback loops more powerful over time.

Later, while drafting a syllabus for my students at the New School in Manhattan, where I teach seminars in media studies, I wondered: What if education worked like that? What if, instead of following a prescribed curriculum, teaching started with a learner’s threshold and built a dynamic, personalized path forward?

A Crisis In Education

The United States urgently needs new, innovative approaches to education. Tests conducted in 2024 by the National Assessment of Educational Progress — collectively called “The Nation’s Report Card” — confirm what U.S. Secretary of Education Linda McMahon called “a devastating trend”: American students are testing at “historic lows across all of K-12.” The scores, released this month, show that nearly half of high school seniors are now below basic levels in math, and about one-third are below basic in reading. The average reading score has dropped to its lowest level on record.

The Covid-19 lockdowns shattered the illusion that our education system is flexible. When classrooms abruptly closed, rigid learning models were reproduced on screen, leaving many students struggling and disengaged — a problem that persists today. Against this grim backdrop, another force promises to reshape education for better or worse: artificial intelligence.

Already, AI is being woven into learning and teaching in complex and rapidly evolving ways. Students use tools such as ChatGPT to draft essays, solve equations and generate study guides — sometimes to deepen understanding, but often to reduce or even eliminate the effort required to learn. Eighty-five percent of students acknowledge using generative AI to help them with coursework in the last year, according to an August 2025 survey by Inside Higher Ed. In my own classroom, the suspicion of AI misuse is enough to strain trust and complicate grading as the boundaries of permissibility remain undefined and mutable.

Teachers are also beginning to expedite or automate time-consuming tasks like drafting lesson plans and generating practice exercises, freeing them to focus on valuable mentoring and one-on-one support. But AI tools can also become a siren call, tempting educators to use algorithmic shortcuts for the demanding, human work of noticing, guiding and inspiring students.

Social media offers a cautionary tale here: Platforms are populated with AI-generated influencers addressing AI-generated followers, a self-reinforcing feedback loop where authenticity vanishes. The danger is that education could follow a similar path, where efficiency replaces presence and the human dimension of teaching is eventually flattened.

The integration of AI into education is no longer hypothetical — it is well underway. In April, President Trump signed an executive order to bring AI into American classrooms, and major tech companies including Google, Amazon, Microsoft and OpenAI have pledged to support this mission.

“What if, instead of following a prescribed curriculum, teaching started with a learner’s threshold and built a dynamic, personalized path forward?”

The question is not whether learning will be affected by AI, but how and to what ends. Left unguided, or steered solely by tech companies pursuing their own interests, AI educational tools could magnify inequities and perpetuate the very problems they promise to solve. With thoughtful design, however, they could move us beyond rigid curricula toward adaptive systems that respond to individual learners. The decision before us — to let AI evolve haphazardly or to shape it deliberately with educators, students and institutions at the center — will determine whether it deepens our crisis or becomes the foundation for a more flexible approach to education.

Adaptive Threshold Learning

Modern bike training apps like the one I used offer a useful model for reimagining education. Their core principle — adapting to a learner’s threshold and building upward — could form the basis of what I’ll call “adaptive threshold learning” (ATL): an AI-driven system that identifies each student’s current limits and designs experiences to expand them.

ATL would begin by identifying what a learner can accomplish right now. A diagnostic test, delivered via PC, mobile app or VR headset (if the technology ever reaches its potential), would start simply and gradually increase in difficulty until the system locates the learner’s threshold: the point where fluency falters, recall slows or errors emerge. Input could take the form of sounds, voice, text, gestures or a combination of these, captured by the device’s onboard microphone, touchscreen, camera or motion sensor.

From that baseline, ATL would generate a personalized teaching program designed to elevate the learner’s threshold in the least amount of time. The system would adapt continuously based on performance, tracking how and when the learner responds, self-corrects and fails. Over time, patterns would emerge.

Imagine using an ATL system to learn a language. You would begin a conversation test in your target language, and the system would listen not only for correct vocabulary, but also for pacing, pronunciation and contextual nuance. If you consistently misapplied verb tenses but spoke clearly, the system would shift its focus to grammar. If you hesitated before answering, it would slow the dialogue and restate prompts in simpler forms. If you handled basic conversation with ease, it would quickly advance to abstract topics or multi-part questions to challenge comprehension and fluency.

Instead of following a fixed curriculum, the app would dynamically construct your learning path. As your fluency developed, your profile would become more precise. Progress would be measured not by chapters or lessons completed, but by measurable skill improvements and behavioral signals – how quickly you respond, how confidently you speak and how flexibly you adapt to increasingly complex tasks.
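
To make the adaptive principle concrete, here is a deliberately simple sketch of a threshold-finding loop, a toy staircase procedure rather than a real tutoring system. The `ask` function is a hypothetical stand-in for whatever exercise-and-grading machinery an actual ATL platform would use.

```python
# Toy sketch of adaptive threshold-finding: step difficulty up after a correct
# answer, down after a miss, and treat the level where the learner hovers as a
# rough estimate of their current threshold. Purely illustrative.

def find_threshold(ask, max_level=20, trials=30):
    """`ask(level)` is a hypothetical callable that poses an exercise at the
    given difficulty and returns True if the learner answers correctly."""
    level = 1
    history = []
    for _ in range(trials):
        correct = ask(level)
        history.append(level)
        if correct and level < max_level:
            level += 1   # stretch the learner slightly
        elif not correct and level > 1:
            level -= 1   # ease off to rebuild fluency
    recent = history[-10:]
    return sum(recent) / len(recent)
```

A real system would track far richer signals (response time, confidence, error type), but the core loop of probing, adapting and re-probing is the same.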

While platforms like Duolingo, Khan Academy and IXL incorporate some adaptive elements, they primarily adjust pacing within a predetermined curriculum. For instance, Duolingo’s Birdbrain algorithm personalizes lesson difficulty based on user performance, yet learners still progress through a fixed sequence of language units.

In contrast, ATL would reimagine both the structure and logic of learning. Rather than merely modifying the pace of a set sequence, it would continuously assess a student’s readiness across multiple dimensions, including response time, confidence and contextual understanding, to determine the next optimal learning experience. This would enable a non-linear learning map that evolves in real time, tailored to the student’s unique progress and needs.

All learners, regardless of background or age, could have access to always-on, multidisciplinary tutors that understand how they learn and adapt accordingly. The system wouldn’t just automate instruction like so-called “AI tutors,” which often turn out to be glorified quiz engines; it would respond to behavior, measure growth and personalize feedback in ways no static curriculum can.

Over time the system would begin to understand how learning works and could perpetually self-optimize. With thoughtful design, sufficient data and adequate computing power, it could evolve into a national infrastructure for growth: a distributed, AI-powered supercomputer network that adapts to each learner’s strengths, struggles and pace, supporting education across regions, disciplines and life stages.

ATL In The Classroom

Implementing ATL in American schools would require daunting and even radical changes. But bold intervention is necessary to alter our downward trajectory. If schools persist with incremental fixes and half-measures, they risk losing even more ground to the forces already reshaping and eroding how students learn. Given the stakes, we need reforms that fully harness AI’s ability to individualize instruction at scale.

“Left unguided, or steered solely by tech companies, AI educational tools could magnify inequities and perpetuate the very problems they promise to solve.”

Some companies are already experimenting with AI in the private school space. Alpha School, a U.S. “microschool” network, is among the most fully realized models of individualized, AI-powered learning centers in operation. Students complete core academics in the morning through two hours of AI-driven, app-based learning, then spend the rest of the day in workshops and project-based activities that develop real-world skills.

If adopted in larger, more traditional public schools or school systems, ATL would not eliminate classrooms, but it would change what happens inside them. A math student, rather than being slotted into a fixed algebra curriculum, would receive assignments that adjust dynamically depending on how quickly she reasons. A history student could move beyond the textbook into primary sources, ethical counterpoints or conflicting narratives, deepening inquiry at his own pace. A music student might work through scales, ear training and theory until fluency is achieved, as measured by tempo, pitch accuracy and responsiveness.

This approach would not fit every field of study. It would lend itself most naturally to domains where progress can be measured with some objectivity — mathematics, the sciences, engineering, languages and music — rather than to interpretive and creative fields where ambiguity and perspective are central.

Yet precisely because it could accelerate mastery in the measurable disciplines, it could prove liberating. If students gained skills in algebra or chemistry more quickly, they could have more time and freedom for the elements of education that resist optimization — literature, art, philosophy and the more reflective realms of the social sciences.

With ATL, the teacher would still be essential — not as a lecturer at the blackboard, but as a coach who interprets the system’s signals, helps students understand where they stumbled and why, and convenes group discussions where collaboration and debate are vital. For example, a teacher might pull three students struggling with a calculus unit into a small workshop while others advance independently. Teaching, in this model, would become less about delivering information and more about orchestrating personal growth — not just helping students learn, but helping them understand how to learn.

No algorithm, no matter how adaptive, can replace the role of a human who inspires, contextualizes and comforts. Teachers would be the interpreters of the system’s insights, the architects of meaningful challenges and the people who help students translate progress into purpose. They would also be crucial in shaping the values of these systems, ensuring they reflect emerging domains, cultural nuance and ethical complexity.

Embracing ATL would also demand a fundamental shift in how we think about time, mastery and progression. Our current framework treats time as fixed and outcomes as variable: Everyone spends a semester studying biology, yet only some emerge with mastery. ATL would invert that logic. Mastery would become the constant; time would become the variable. One student might grasp a concept in two days, another in a week — but both would succeed because the system would adapt to them, not the other way around.

This shift would raise challenging questions. Would students still be grouped by age, or move toward “competency bands” — cohorts organized by demonstrated skill rather than birthdays? At a minimum, ATL would retire the bell curve, which assumes all students receive the same instruction over the same time period and should be judged against static benchmarks. In an adaptive system, inputs and goals would be personalized. Instead of a single distribution of outcomes, we would get a diversity of trajectories.

Grading would need to change as well. Letter grades and class rankings reduce learning to relative scores that often reflect privilege more than ability. A simpler mastery report — “pass” or “in progress” (akin to today’s “incomplete”) — paired with rich feedback would be both more sensible and more equitable. In an open-timeline model, progress would be measured against the learner’s own arc: sharper recall, steadier reasoning, greater fluency. Growth would no longer mean outpacing others; it would mean surpassing yesterday’s self.

Such a system would also redefine what it means to excel. Some students could achieve mastery of a subject in weeks — or even days — rather than being confined to the fixed pacing of a semester-long course. Freed from those constraints, they could climb higher and faster, reaching peak mastery in a chosen field or branching horizontally across a wide range of disciplines.

“No algorithm, no matter how adaptive, can replace the role of a human who inspires, contextualizes and comforts.”

Students who perform more typically, meanwhile, could still attain mastery in the subjects essential to their ambitions, helping them graduate equipped for the careers or callings they seek. By tracking progress across domains — from pattern recognition to verbal fluency — ATL could reveal hidden strengths and help align students with fields where they would naturally thrive. In this way, education would become not just more efficient but more personal: a vehicle for self-discovery.

The Risks Of Optimization

For all its potential benefits, ATL would also introduce risks that we can’t afford to ignore if we’re serious about building something better.

First, consider the danger of over-optimization: tailoring instruction so precisely to a learner’s current abilities that it narrows rather than expands intellectual range. Just as social media’s algorithmic filtering can limit our exposure to new ideas, a well-intentioned ATL system might steer students away from uncertainty, productive struggle or edge cases. It could prioritize speed over depth, comfort over challenge – flattening curiosity into compliance. Personalization, taken too far, is in danger of becoming a polished form of intellectual risk aversion. But growth often begins where comfort ends.

Second, there are costs of data dependence and the surveillance that enables it. Systems that track micro-latency, vocal inflection, facial expression and cognitive thresholds generate an extraordinarily detailed portrait of each learner. That portrait may be useful in an educational context, but it would also be intimate – and potentially threatening. Who would own it? How would it be harvested, stored, protected or monetized? And what safeguards would prevent it from being used to sort, label or limit students’ future paths?

Ethical design is non-negotiable here. Educational systems should be transparent, inclusive and accountable, especially to those they assess. Otherwise, ATL would risk becoming not a platform for growth, but a mechanism of control: sorting students by unseen algorithms and reducing potential to probability.

Third, ATL could inadvertently magnify existing inequities. Systems that rely on rich data profiles will perform better for students who have access to fast internet, newer devices and adult support. These students could potentially train the system more effectively, receive faster personalization and improve more rapidly. That advantage would compound. Without intentional design for equity, personalization risks becoming a premium service: deep for the already advantaged, shallow for everyone else.

Finally, there is a cultural risk – that in our eagerness to optimize, we forget why education matters. Learning is not just a ladder of skills. It’s also play, exploration, serendipity and becoming. ATL, if adopted, must not flatten learning into a series of checkpoints. The system may adapt, but it must still surprise.

These risks would demand solicitude from those building and deploying ATL. But the risks of inaction may be greater: As the dismal trends in American test scores make clear, our current approach is no longer serving students’ needs. ATL would be a daring new direction rooted in a philosophy dating back more than a century.

Lessons From The Past

In my time as an adjunct professor at the New School, I have often reflected on the institution’s founding mission. In 1919, a group of progressive intellectuals — among them historian Charles A. Beard, “New History” pioneer James Harvey Robinson and economist Thorstein Veblen — resigned from Columbia University to establish an independent institution, originally called the New School for Social Research. Their revolt against rigid academic orthodoxy drew on the ideas of pragmatist philosopher John Dewey, whose vision emphasized growth over conformity and the learner’s active role in constructing meaning.

Dewey envisioned schools as dynamic laboratories of growth, not factories for mass production. He rejected standardized memorization and championed learning environments that adapted to individual needs and contexts. “The school must represent present life,” he wrote, “life as real and vital to the child as that which he carries on in the home, in the neighborhood, or on the playground.”

More than a century later, AI-enabled teaching platforms could finally help realize Dewey’s vision. These systems don’t have to insert groups of students into pre-set tracks; instead, they can start from what individual learners can do now, and build from there.

“Learning is recursive, experimental and sometimes uncomfortable. Adaptive systems may help scaffold that process, but only humans can help make it meaningful.”

Long before returning to academia as a teacher, I studied under philosopher Richard Rorty – Dewey’s intellectual heir and late-20th-century evangelist – in his interdisciplinary graduate program at the University of Virginia. Rorty reimagined American pragmatism for the postmodern era. Education, to him, wasn’t about uncovering timeless forms or eternal certainties, but about expanding our linguistic and imaginative capacities: enlarging what we can say, understand and become.

Today, in my work with students and AI technology startups, I see how ATL could bridge those two worlds, turning the ideas that Rorty and Dewey championed into functional systems. For thinkers like them, the promise of education was not the passive absorption of information, but the expansion of one’s capacity to interpret the world – to speak and act with greater clarity and imagination.

Learning, in that view, certainly isn’t linear. It’s recursive, experimental and sometimes uncomfortable. Adaptive systems may help scaffold that process, but only humans can help make it meaningful.

A New Way Forward

My bike training app never judged me (though some sessions felt like penance handed down by a malevolent cycling god). It didn’t care how fast I was compared to anyone else. It simply found my current limits and built a dynamic plan to move me forward. Instead of rankings it provided a baseline and a way up.

Education can be built on that same architecture.

More than a century ago, Dewey warned that “an ounce of experience is better than a ton of theory simply because it is only in experience that any theory has vital and verifiable significance.” Learning, to him, was not preparation for life – it was life itself. It had to be active and shaped by the learner’s interactions with the world.

Rorty, who carried Dewey’s torch into our era, challenged the notion of truth as something fixed, waiting to be discovered. He saw truth as a tool – something we invent and revise to better navigate the world and reimagine whom we might become.

“The goal of education,” he wrote, “is to help students see that they can reshape themselves – reshape their own minds – by acquiring new vocabularies, by learning to speak differently.” For Rorty, education wasn’t about certainties. It was about possibility and freedom, about expanding the space of what we can say, understand and do.

That’s what the cycling ramp test gave me: not a score, but a new way forward. And that’s what an adaptive AI learning program could give every student: a system that listens and responds by building on what they can already do.

Curriculum, from the Latin currere, means “a course to be run.” ATL would replace the rigid track with a dynamic map — one that offers every learner a personalized path to their destination.

The post Reimagining School In The Age Of AI appeared first on NOEMA.

]]>
]]>
The Last Days Of Social Media https://www.noemamag.com/the-last-days-of-social-media Tue, 02 Sep 2025 14:55:48 +0000 https://www.noemamag.com/the-last-days-of-social-media The post The Last Days Of Social Media appeared first on NOEMA.

]]>
At first glance, the feed looks familiar, a seamless carousel of “For You” updates gliding beneath your thumb. But déjà‑vu sets in as 10 posts from 10 different accounts carry the same stock portrait and the same breathless promise — “click here for free pics” or “here is the one productivity hack you need in 2025.” Swipe again and three near‑identical replies appear, each from a pout‑filtered avatar directing you to “free pics.” Between them sits an ad for a cash‑back crypto card.

Scroll further and recycled TikTok clips with “original audio” bleed into Reels on Facebook and Instagram; AI‑stitched football highlights showcase players’ limbs bending like marionettes. Refresh once more, and the woman who enjoys your snaps of sushi rolls has seemingly spawned five clones.

Whatever remains of genuine, human content is increasingly sidelined by algorithmic prioritization, receiving fewer interactions than the engineered content and AI slop optimized solely for clicks. 

These are the last days of social media as we know it.

Drowning The Real

Social media was built on the romance of authenticity. Early platforms sold themselves as conduits for genuine connection: stuff you wanted to see, like your friend’s wedding and your cousin’s dog.

Even influencer culture, for all its artifice, promised that behind the ring‑light stood an actual person. But the attention economy, and more recently, the generative AI-fueled late attention economy, have broken whatever social contract underpinned that illusion. The feed no longer feels crowded with people but crowded with content. At this point, it has far less to do with people than with consumers and consumption.

In recent years, Facebook and other platforms that facilitate billions of daily interactions have slowly morphed into the internet’s largest repositories of AI‑generated spam. Research has found what users plainly see: tens of thousands of machine‑written posts now flood public groups — pushing scams, chasing clicks — with clickbait headlines, half‑coherent listicles and hazy lifestyle images stitched together in AI tools like Midjourney.

It’s all just vapid, empty shit produced for engagement’s sake. Facebook is “sloshing” in low-effort AI-generated posts, as Arwa Mahdawi notes in The Guardian; some even bolstered by algorithmic boosts, like “Shrimp Jesus.”

Human and synthetic content are becoming increasingly indistinguishable, and platforms seem either unable or unwilling to police the difference. Earlier this year, Reddit CEO Steve Huffman pledged to “keep Reddit human,” a tacit admission that floodwaters were already lapping at the last high ground. TikTok, meanwhile, swarms with AI narrators presenting concocted news reports and “what‑if” histories. A few creators do append labels disclaiming that their videos depict “no real events,” but many don’t bother, and many consumers don’t seem to care.

The problem is not just the rise of fake material, but the collapse of context and the acceptance that truth no longer matters as long as our cravings for colors and noise are satisfied. Contemporary social media content is more often rootless, detached from cultural memory, interpersonal exchange or shared conversation. It arrives fully formed, optimized for attention rather than meaning, producing a kind of semantic sludge, posts that look like language yet say almost nothing. 

We’re drowning in this nothingness.

The Bot-Girl Economy

If spam (AI or otherwise) is the white noise of the modern timeline, its dominant melody is a different form of automation: the hyper‑optimized, sex‑adjacent human avatar. She appears everywhere, replying to trending tweets with selfies, promising “funny memes in bio” and linking, inevitably, to OnlyFans or one of its proxies. Sometimes she is real. Sometimes she is not. Sometimes she is a he, sitting in a compound in Myanmar. Increasingly, it makes no difference.

This convergence of bots, scammers, brand-funnels and soft‑core marketing underpins what might be called the bot-girl economy, a parasocial marketplace fueled in large part by economic precarity. At its core is a transactional logic: Attention is scarce, intimacy is monetizable and platforms generally won’t intervene so long as engagement stays high. As more women turn to online sex work, many men are eager to pay them for their services. And as these workers try to cope with the precarity imposed by platform metrics and competition, some can spiral, forever downward, into a transactional attention-to-intimacy logic that eventually turns them into more bot than human. To hold attention, some creators increasingly opt to behave like algorithms themselves, automating replies, optimizing content for engagement, or mimicking affection at scale. The distinction between performance and intention must surely erode as real people perform as synthetic avatars and synthetic avatars mimic real women.

There is loneliness, desperation and predation everywhere.

“Genuine, human content is increasingly sidelined by algorithmic prioritization, receiving fewer interactions than the engineered content and AI slop optimized solely for clicks.”

The bot-girl is more than just a symptom; she is a proof of concept for how social media bends even aesthetics to the logic of engagement. Once, profile pictures (both real and synthetic) aspired to hyper-glamor, unreachable beauty filtered through fantasy. But that fantasy began to underperform as average men sensed the ruse, recognizing that supermodels typically don’t send them DMs. And so, the system adapted, surfacing profiles that felt more plausible, more emotionally available. Today’s avatars project a curated accessibility: They’re attractive but not flawless, styled to suggest they might genuinely be interested in you. It’s a calibrated effect, just human enough to convey plausibility, just artificial enough to scale. She has to look more human to stay afloat, but act more bot to keep up. Nearly everything is socially engineered for maximum interaction: the like, the comment, the click, the private message.

Once seen as the fringe economy of cam sites, OnlyFans has become the dominant digital marketplace for sex workers. In 2023, the then-seven-year-old platform generated $6.63 billion in gross payments from fans, with $658 million in profit before tax. Its success has bled across the social web; platforms like X (formerly Twitter) now serve as de facto marketing layers for OnlyFans creators, with thousands of accounts running fan-funnel operations, baiting users into paid subscriptions. 

The tools of seduction are also changing. One 2024 study estimated that thousands of X accounts use AI to generate fake profile photos. Many content creators have also begun using AI for talking-head videos, synthetic voices or endlessly varied selfies. Content is likely A/B tested for click-through rates. Bios are written with conversion in mind. DMs are automated or outsourced to AI impersonators. For users, the effect is a strange hybrid of influencer, chatbot and parasitic marketing loop. One minute you’re arguing politics, the next, you’re being pitched a girlfriend experience by a bot. 

Engagement In Freefall

While content proliferates, engagement is evaporating. Average interaction rates across major platforms are declining fast: Facebook and X posts now scrape an average 0.15% engagement, while Instagram has dropped 24% year-on-year. Even TikTok has begun to plateau. People aren’t connecting or conversing on social media like they used to; they’re just wading through slop, that is, low-effort, low-quality content produced at scale, often with AI, for engagement.

And much of it is slop: Less than half of American adults now rate the information they see on social media as “mostly reliable” — down from roughly two-thirds in the mid-2010s. Young adults register the steepest collapse, which is unsurprising; as digital natives, they better understand that the content they scroll past wasn’t necessarily produced by humans. And yet, they continue to scroll.

The timeline is no longer a source of information or social presence, but more of a mood-regulation device, endlessly replenishing itself with just enough novelty to suppress the anxiety of stopping. Scrolling has become a form of ambient dissociation, half-conscious, half-compulsive, closer to scratching an itch than seeking anything in particular. People know the feed is fake; they just don’t care.

Platforms have little incentive to stem the tide. Synthetic accounts are cheap, tireless and lucrative because they never demand wages or unionize. Systems designed to surface peer-to-peer engagement are now systematically filtering out such activity, because what counts as engagement has changed. Engagement is now about raw user attention – time spent, impressions, scroll velocity – and the net effect is an online world in which you are constantly being addressed but never truly spoken to.

The Great Unbundling

Social media’s death rattle will not be a bang but a shrug.

These networks once promised a single interface for the whole of online life: Facebook as social hub, Twitter as news‑wire, YouTube as broadcaster, Instagram as photo album, TikTok as distraction engine. Growth appeared inexorable. But now, the model is splintering, and users are drifting toward smaller, slower, more private spaces, like group chats, Discord servers and federated microblogs — a billion little gardens.

Since Elon Musk’s takeover, X has shed at least 15% of its global user base. Meta’s Threads, launched with great fanfare in 2023, saw its number of daily active users collapse within a month, falling from around 50 million active Android users at its July launch to only 10 million by that August. Twitch recorded its lowest monthly watch-time in over four years in December 2024: just 1.58 billion hours, 11% lower than the December average from 2020-23.

“While content proliferates, engagement is evaporating.”

Even the giants that still command vast audiences are no longer growing exponentially. Many platforms have already died (Vine, Google+, Yik Yak), are functionally dead or zombified (Tumblr, Ello), or have been revived and died again (MySpace, Bebo). Some notable exceptions aside, like Reddit and Bluesky (though it’s still early days for the latter), growth has plateaued across the board. While social media adoption continues to rise overall, it’s no longer explosive. As of early 2025, around 5.3 billion user identities — roughly 65% of the global population — are on social platforms, but annual growth has decelerated to just 4-5%, a steep drop from the double-digit surges seen earlier in the 2010s.

Intentional, opt-in micro‑communities are rising in their place — like Patreon collectives and Substack newsletters — where creators chase depth over scale, retention over virality. A writer with 10,000 devoted subscribers can potentially earn more and burn out less than one with a million passive followers on Instagram. 

But the old practices are still evident: Substack is full of personal brands announcing their journeys, Discord servers host influencers disguised as community leaders and Patreon bios promise exclusive access that is often just recycled content. Still, something has shifted. These are not mass arenas; they are clubs — opt-in spaces with boundaries, where people remember who you are. And they are often paywalled, or at least heavily moderated, which at the very least keeps the bots out. What’s being sold is less a product than a sense of proximity, and while the economics may be similar, the affective atmosphere is different, smaller, slower, more reciprocal. In these spaces, creators don’t chase virality; they cultivate trust.

Even the big platforms sense the turning tide. Instagram has begun emphasizing DMs, X is pushing subscriber‑only circles and TikTok is experimenting with private communities. Behind these developments is an implicit acknowledgement that the infinite scroll, stuffed with bots and synthetic sludge, is approaching the limit of what humans will tolerate. A lot of people seem to be fine with slop, but as more start to crave authenticity, the platforms will be forced to take note.

From Attention To Exhaustion

The social internet was built on attention, not only the promise to capture yours but the chance for you to capture a slice of everyone else’s. After two decades, the mechanism has inverted, replacing connection with exhaustion. “Dopamine detox” and “digital Sabbath” have entered the mainstream. In the U.S., a significant proportion of 18‑ to 34‑year‑olds took deliberate breaks from social media in 2024, citing mental health as the motivation, according to an American Psychiatric Association poll. And yet, time spent on the platforms remains high — people scroll not because they enjoy it, but because they don’t know how to stop. Self-help influencers now recommend weekly “no-screen Sundays” (yes, the irony). The mark of the hipster is no longer an ill-fitting beanie but an old-school Nokia dumbphone. 

Some creators are quitting, too. Competing with synthetic performers who never sleep, they find the visibility race not merely tiring but absurd. Why post a selfie when an AI can generate a prettier one? Why craft a thought when ChatGPT can produce one faster?

These are the last days of social media, not because we lack content, but because the attention economy has neared its outer limit — we have exhausted the capacity to care. There is more to watch, read, click and react to than ever before — an endless buffet of stimulation. But novelty has become indistinguishable from noise. Every scroll brings more, and each addition subtracts meaning. We are indeed drowning. In this saturation, even the most outrageous or emotive content struggles to provoke more than a blink.

Outrage fatigues. Irony flattens. Virality cannibalizes itself. The feed no longer surprises but sedates, and in that sedation, something quietly breaks, and social media no longer feels like a place to be; it is a surface to skim. 

No one is forcing anyone to go on TikTok or to consume the clickbait in their feeds. The content served to us by algorithms is, in effect, a warped mirror, reflecting and distorting our worst impulses. For younger users in particular, their scrolling of social media can become compulsive, rewarding their developing brains with unpredictable hits of dopamine that keep them glued to their screens.

Social media platforms have also achieved something more elegant than coercion: They’ve made non-participation a form of self-exile, a luxury available only to those who can afford its costs.

“Why post a selfie when an AI can generate a prettier one? Why craft a thought when ChatGPT can produce one faster?”

Our offline reality is irrevocably shaped by our online world: Consider the worker who deletes or was never on LinkedIn, excluding themselves from professional networks that increasingly exist nowhere else; or the small business owner who abandons Instagram, watching customers drift toward competitors who maintain their social media presence. The teenager who refuses TikTok may find herself unable to parse references, memes and microcultures that soon constitute her peers’ vernacular.

These platforms haven’t just captured attention, they’ve enclosed the commons where social, economic and cultural capital are exchanged. But enclosure breeds resistance, and as exhaustion sets in, alternatives begin to emerge.

Architectures Of Intention

The successor to mass social media is, as already noted, emerging not as a single platform, but as a scattering of alleyways, salons, encrypted lounges and federated town squares — those little gardens.

Maybe today’s major social media platforms will find new ways to hold the gaze of the masses, or maybe they will continue to decline in relevance, lingering like derelict shopping centers or a dying online game, haunted by bots and the echo of once‑human chatter. Occasionally we may wander back, out of habit or nostalgia, or to converse once more as a crowd, among the ruins. But as social media collapses on itself, the future points to a quieter, more fractured, more human web, something that no longer promises to be everything, everywhere, for everyone.

This is a good thing. Group chats and invite‑only circles are where context and connection survive. These are spaces defined less by scale than by shared understanding, where people no longer perform for an algorithmic audience but speak in the presence of chosen others. Messaging apps like Signal are quietly becoming dominant infrastructures for digital social life, not because they promise discovery, but because they don’t. In these spaces, a message often carries more meaning because it is usually directed, not broadcast.

Social media’s current logic is designed to reduce friction, to give users infinite content for instant gratification, or at the very least, the anticipation of such. The antidote to this compulsive, numbing overload will be found in deliberative friction, design patterns that introduce pause and reflection into digital interaction, or platforms and algorithms that create space for intention.

This isn’t about making platforms needlessly cumbersome but about distinguishing between helpful constraints and extractive ones. Consider Are.na, a non-profit, ad-free creative platform founded in 2014 for collecting and connecting ideas that feels like the anti-Pinterest: There’s no algorithmic feed or engagement metrics, no trending tab to fall into and no infinite scroll. The pace is glacial by social media standards. Connections between ideas must be made manually, and thus, thoughtfully — there are no algorithmic suggestions or ranked content.

To demand intention over passive, mindless screen time, X could require a 90-second delay before posting replies, not to deter participation, but to curb reactive broadcasting and engagement farming. Instagram could show how long you’ve spent scrolling before allowing uploads of posts or stories, and Facebook could display the carbon cost of its data centers, reminding users that digital actions have material consequences, with each refresh. These small added moments of friction and purposeful interruptions — what UX designers currently optimize away — are precisely what we need to break the cycle of passive consumption and restore intention to digital interaction.

We can dream of a digital future where belonging is no longer measured by follower counts or engagement rates, but rather by the development of trust and the quality of conversation. We can dream of a digital future in which communities form around shared interests and mutual care rather than algorithmic prediction. Our public squares — the big algorithmic platforms — will never be cordoned off entirely, but they might sit alongside countless semi‑public parlors where people choose their company and set their own rules, spaces that prioritize continuity over reach and coherence over chaos. People will show up not to go viral, but to be seen in context. None of this is about escaping the social internet, but about reclaiming its scale, pace, and purpose.

Governance Scaffolding

The most radical redesign of social media might be the most familiar: What if we treated these platforms as public utilities rather than private casinos?

A public-service model wouldn’t require state control; rather, it could be governed through civic charters, much like public broadcasters operate under mandates that balance independence and accountability. This vision stands in stark contrast to the current direction of most major platforms, which are becoming increasingly opaque.

“Non-participation [is] a form of self-exile, a luxury available only to those who can afford its costs.”

In recent years, Reddit and X, among other platforms, have either restricted or removed API access, dismantling open-data pathways. The very infrastructures that shape public discourse are retreating from public access and oversight. Imagine social media platforms with transparent algorithms subject to public audit, user representation on governance boards, revenue models based on public funding or member dues rather than surveillance advertising, mandates to serve democratic discourse rather than maximize engagement, and regular impact assessments that measure not just usage but societal effects.

Some initiatives gesture in this direction. Meta’s Oversight Board, for example, frames itself as an independent body for content moderation appeals, though its remit is narrow and its influence ultimately limited by Meta’s discretion. X’s Community Notes, meanwhile, allows user-generated fact-checks but relies on opaque scoring mechanisms and lacks formal accountability. Both are add-ons to existing platform logic rather than systemic redesigns. A true public-service model would bake accountability into the platform’s infrastructure, not just bolt it on after the fact.

The European Union has begun exploring this territory through its Digital Markets Act and Digital Services Act, but these laws, enacted in 2022, largely focus on regulating existing platforms rather than imagining new ones. In the United States, efforts are more fragmented. Proposals such as the Platform Accountability and Transparency Act (PATA) and state-level laws in California and New York aim to increase oversight of algorithmic systems, particularly where they impact youth and mental health. Still, most of these measures seek to retrofit accountability onto current platforms. What we need are spaces built from the ground up on different principles, where incentives align with human interest rather than extractive, for-profit ends.

This could take multiple forms, like municipal platforms for local civic engagement, professionally focused networks run by trade associations, and educational spaces managed by public library systems. The key is diversity, delivering an ecosystem of civic digital spaces that each serve specific communities with transparent governance.

Of course, publicly governed platforms aren’t immune to their own risks. State involvement can bring with it the threat of politicization, censorship or propaganda, and this is why the governance question must be treated as infrastructural, rather than simply institutional. Just as public broadcasters in many democracies operate under charters that insulate them from partisan interference, civic digital spaces would require independent oversight, clear ethical mandates, and democratically accountable governance boards, not centralized state control. The goal is not to build a digital ministry of truth, but to create pluralistic public utilities: platforms built for communities, governed by communities and held to standards of transparency, rights protection and civic purpose.

The technical architecture of the next social web is already emerging through federated and distributed protocols like ActivityPub (used by Mastodon and Threads) and Bluesky’s Authenticated Transfer (AT) Protocol, or atproto (a decentralized framework that allows users to move between platforms while keeping their identity and social graph), as well as various blockchain-based experiments, like Lens and Farcaster.

But protocols alone won’t save us. The email protocol is decentralized, yet most email flows through a handful of corporate providers. We need to “rewild the internet,” as Maria Farrell and Robin Berjon mentioned in a Noema essay. We need governance scaffolding, shared institutions that make decentralization viable at scale. Think credit unions for the social web that function as member-owned entities providing the infrastructure that individual users can’t maintain alone. These could offer shared moderation services that smaller instances can subscribe to, universally portable identity systems that let users move between platforms without losing their history, collective bargaining power for algorithm transparency and data rights, user data dividends for all, not just influencers (if platforms profit from our data, we should share in those profits), and algorithm choice interfaces that let users select from different recommender systems. 

Bluesky’s AT Protocol explicitly allows users to port identity and social graphs, but it’s very early days, and cross-protocol and cross-platform portability remains extremely limited, if not effectively non-existent. Bluesky also allows users to choose among multiple content algorithms, an important step toward user control. But these models remain largely tied to individual platforms and developer communities. What’s still missing is a civic architecture that makes algorithmic choice universal, portable, auditable and grounded in public-interest governance rather than market dynamics alone.

Imagine being able to toggle between different ranking logics: a chronological feed, where posts appear in real time; a mutuals-first algorithm that privileges content from people who follow you back; a local context filter that surfaces posts from your geographic region or language group; a serendipity engine designed to introduce you to unfamiliar but diverse content; or even a human-curated layer, like playlists or editorials built by trusted institutions or communities. Many of these recommender models do exist, but they are rarely user-selectable, and almost never transparent or accountable. Algorithm choice shouldn’t require a hack or browser extension; it should be built into the architecture as a civic right, not a hidden setting.
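
A minimal sketch of what user-selectable ranking could look like appears below. The post fields, the user structure and the ranking names are illustrative assumptions, not any platform’s actual API; the point is only that switching between feed logics is a small amount of code once the data is available.

```python
# Illustrative sketch of user-selectable feed ranking. Posts are dicts with
# "author", "created_at" and optional "region"; users carry "follows",
# "followers" and "region" fields. All names are assumptions for the example.

import random

def chronological(posts, user):
    return sorted(posts, key=lambda p: p["created_at"], reverse=True)

def mutuals_first(posts, user):
    mutuals = user["follows"] & user["followers"]
    # Mutuals' posts first, newest first within each group.
    return sorted(posts, key=lambda p: (p["author"] in mutuals, p["created_at"]), reverse=True)

def local_context(posts, user):
    return [p for p in posts if p.get("region") == user.get("region")]

def serendipity(posts, user):
    shuffled = posts[:]
    random.shuffle(shuffled)  # naive stand-in for "unfamiliar but diverse"
    return shuffled

RANKERS = {
    "chronological": chronological,
    "mutuals_first": mutuals_first,
    "local": local_context,
    "serendipity": serendipity,
}

def build_feed(posts, user, choice="chronological"):
    return RANKERS[choice](posts, user)
```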

“What if we treated these platforms as public utilities rather than private casinos?”

Algorithmic choice can also produce new hierarchies. If feeds can be curated like playlists, the next influencer may not be the one creating content, but the one editing it. Institutions, celebrities and brands will be best positioned to build and promote their own recommendation systems. For individuals, the incentive to do this curatorial work will likely depend on reputation, relational capital or ideological investment. Unless we design these systems with care, we risk reproducing old dynamics of platform power, just in a new form.

Federated platforms like Mastodon and Bluesky face real tensions between autonomy and safety: Without centralized moderation, harmful content can proliferate, while over-reliance on volunteer admins creates sustainability problems at scale. These networks also risk reinforcing ideological silos, as communities block or mute one another, fragmenting the very idea of a shared public square. Decentralization gives users more control, but it also raises difficult questions about governance, cohesion and collective responsibility — questions that any humane digital future will have to answer.

But there is a possible future where a user, upon opening an app, is asked how they would like to see the world on a given day. They might choose the serendipity engine for unexpected connections, the focus filter for deep reads or the local lens for community news. This is technically very achievable — the data would be the same; the algorithms would just need to be slightly tweaked — but it would require a design philosophy that treats users as citizens of a shared digital system rather than cattle. While this is possible, it can feel like a pipe dream. 

To make algorithmic choice more than a thought experiment, we need to change the incentives that govern platform design. Regulation can help, but real change will come when platforms are rewarded for serving the public interest. This could mean tying tax breaks or public procurement eligibility to the implementation of transparent, user-controllable algorithms. It could mean funding research into alternative recommender systems and making those tools open-source and interoperable. Most radically, it could involve certifying platforms based on civic impact, rewarding those that prioritize user autonomy and trust over sheer engagement.

Digital Literacy As Public Health

Perhaps most crucially, we need to reframe digital literacy not as an individual responsibility but as a collective capacity. This means moving beyond spot-the-fake-news workshops to more fundamental efforts to understand how algorithms shape perception and how design patterns exploit our cognitive processes. 

Some education systems are beginning to respond, embedding digital and media literacy across curricula. Researchers and educators argue that this work needs to begin in early childhood and continue through secondary education as a core competency. The goal is to equip students to critically examine the digital environments they inhabit daily, to become active participants in shaping the future of digital culture rather than passive consumers. This includes what some call algorithmic literacy, the ability to understand how recommender systems work, how content is ranked and surfaced, and how personal data is used to shape what you see — and what you don’t.

Teaching this at scale would mean treating digital literacy as public infrastructure, not just a skill set for individuals, but a form of shared civic defense. This would involve long-term investments in teacher training, curriculum design and support for public institutions, such as libraries and schools, to serve as digital literacy hubs. When we build collective capacity, we begin to lay the foundations for a digital culture grounded in understanding, context and care.

We also need behavioral safeguards like default privacy settings that protect rather than expose, mandatory cooling-off periods for viral content (deliberately slowing the spread of posts that suddenly attract high engagement), algorithmic impact assessments before major platform changes and public dashboards that show, in real time, platform manipulation, meaning coordinated or deceptive behaviors that distort how content is amplified or suppressed. If platforms are forced to disclose their engagement tactics, these tactics lose power. The ambition is to make visible the hugely influential systems that currently operate in obscurity.

We need to build new digital spaces grounded in different principles, but this isn’t an either-or proposition. We also must reckon with the scale and entrenchment of existing platforms that still structure much of public life. Reforming them matters too. Systemic safeguards may not address the core incentives that inform platform design, but they can mitigate harm in the short term. The work, then, is to constrain the damage of the current system while constructing better ones in parallel, to contain what we have, even as we create what we need. 

The choice isn’t between technological determinism and Luddite retreat; it’s about constructing alternatives that learn from what made major platforms usable and compelling while rejecting the extractive mechanics that turned those features into tools for exploitation. This won’t happen through individual choice alone, though choice helps; nor through regulation alone, though regulation can do real work. It will require our collective imagination to envision and build systems focused on serving human flourishing rather than harvesting human attention.

Social media as we know it is dying, but we’re not condemned to its ruins. We are capable of building better spaces for digital interaction: smaller, slower, more intentional, more accountable. Spaces where the metrics that matter aren’t engagement and growth but understanding and connection, where algorithms serve the community rather than strip-mine it.

The last days of social media might be the first days of something more human: a web that remembers why we came online in the first place — not to be harvested but to be heard, not to go viral but to find our people, not to scroll but to connect. We built these systems, and we can certainly build better ones. The question is whether we will do this or whether we will continue to drown.

We Failed The Misinformation Fight. Now What?
https://www.noemamag.com/we-failed-the-misinformation-fight-now-what (Tue, 26 Aug 2025)

The early months of Donald Trump’s second administration have, much like his first four years, been defined by lies: strange lies, self-serving lies and inhumane lies. “A deluge of falsehoods,” as Democratic Sen. Chuck Schumer described the president’s March 2025 address before a joint session of Congress.

We often call these lies misinformation. While the term has been around since at least the 16th century, it really entered the public vernacular in 2016, when the outcomes of the Brexit referendum and Trump’s first election were widely ascribed to lies circulating on social media.

It’s hard to overstate how much the term has become part of the zeitgeist over the last decade. In 2016, Oxford Dictionaries selected “post-truth” as its word of the year; in 2017, Collins Dictionary selected “fake news”; and in 2018, Dictionary.com picked “misinformation.”

For the second year in a row, surveyed world leaders in academia, business, government and civil society ranked misinformation and disinformation as the highest short-term risks, above weather events, inflation and war, according to the World Economic Forum’s Global Risks Perception Survey.

An entire field of research is now dedicated to mis- and disinformation, along with journalism and white papers, advisory boards and symposia, and even laws passed to stop its spread.

But after nearly a decade of concerted effort to combat misinformation, we must ask: to what effect? It’s unsettling to realize that, at least in the U.S., we have made little, if any, discernible progress. While the American public has never been particularly well-informed, it certainly isn’t today. Perceptions of what constitutes truth and who can credibly claim it have polarized. Trust in institutions, which was already dropping, has further decayed. Many platforms have shifted away from moderating misinformation to varying degrees. 

Given all this, it’s hard to feel confident that the work of the last decade has made measurable progress in curing our so-called “information disorder.” It’s also impossible to prove a negative: Perhaps we would have been even worse off without this work. Some may feel that this means we haven’t done enough. But the lack of meaningful results raises a more basic question: Did we even understand the problem to begin with?

The Emergence Of A Paradigm

While misinformation has existed since the dawn of human communication, or so the story goes, technology has changed the dynamic. With the advent of information technologies like social media, search engines and generative AI, misinformation can now travel at unprecedented speed and scale.

To make sense of the new online environment and its impact on democracy, a dominant paradigm emerged across journalism, academia and civil society that was largely built around a single axis: true or false, information or misinformation.

By focusing on facticity, the paradigm emphasized a specific danger: persuasion. Misinformation threatens society specifically because it can mislead the public, persuading them that Trump won the 2020 election, that they should exit the European Union to promote economic growth or that the Earth isn’t warming.

The paradigm also implied a solution: If something is false, it should be corrected. And correct we did. Fact-checking, once a cottage industry, has become a mainstay of political coverage, with newspapers and television stations providing fact-checks in real time. Researchers (including us) focused considerable energy on measuring the efficacy of these efforts to correct misinformation. Characteristic scholarly disagreements followed. Should we prebunk or debunk? Does repeating a lie help it spread, even when the aim is to disprove it? Do fact-checks “backfire” by further entrenching beliefs? Do simple nudges toward accuracy really have outsized effects?

Alongside this work, pressure was placed on technology companies to slow the spread of misinformation. While their actions were often insufficient, the task was also daunting. As described by CEOs in congressional testimony, platforms worked to identify misinformation at an almost unimaginable scale through a combination of expert evaluations, user signals and automated systems. Content found or predicted to be false was labeled and downranked, or sometimes removed entirely. Before the recent divestment from such efforts, Facebook spent a self-reported $13 billion over a five-year period on safety and security, with entire teams dedicated to curbing the spread of false information.

Misinformation, often conceptualized as a virus infecting the minds of the public, certainly wasn’t going to be cured, but a line of defense had formed to protect against the insidious force spreading across the body politic.

Critiques Coalesce

Today, we have the benefit of knowing how the story ends. After years of going all in on the misinformation paradigm, we’re arguably worse off. The “Stop the Steal,” climate denial and vaccine skepticism movements are all still alive. According to a 2021 survey by the Cato Institute and YouGov, most Americans do not trust social media platforms to moderate content. Companies have largely stepped back from their misinformation policies and enforcement, while news outlets continue to die off. Fact-checking the administration’s misinformation (about Ukraine, government spending, public health, tariffs) seems to do little.

“After nearly a decade of concerted effort to combat misinformation, we must ask: to what effect?”

What we know, from decades of research across psychology, political science and other disciplines, is that the public is hard to persuade and behaviors are difficult to alter. Recent empirical studies suggest misinformation is no different, calling into question the dominant paradigm.

“Misinformation on Misinformation,” reads the title of a well-cited academic article, covering six misconceptions about the topic. A news feature published last year in Science explored the “field’s dilemmas,” highlighting various challenges to misinformation research. This academic conversation has even emerged from behind the often paywalled pages of scholarly journals. Competing essays in The Chronicle of Higher Education debate whether misinformation should be studied. “Is the misinformation crisis overblown?” a recent podcast asked two guest researchers.

At a moment of real-world change, disagreement among scholars can seem like academic navel gazing. But the dominant misinformation paradigm was, in large part, shaped and legitimized by academics whose research helped define the problem, influence journalism and policy, and guide platform interventions. Now, scholars are among its sharpest critics.

In our research for this essay, we found three interconnected critiques — the definitional, prevalence and causal critiques — of the dominant misinformation paradigm that may help illuminate a path forward.

The definitional critique points to the challenge of categorizing the world’s information as true or false. In many high-stakes contexts — such as elections, wars or public health crises — information is dynamic, with truth not only uncertain but being discovered (and re-discovered) in real time.

In this quickly changing world, can a researcher authoritatively identify misinformation? An archetypal example of this is the lab leak theory of Covid-19’s origins: The claim was initially dismissed as misinformation and, for months, moderated across many social media platforms. Now, intelligence officials consider it a credible theory.

The prevalence critique both builds on and reinforces the definitional critique. Social media data are vast and optimized for search, enabling anecdata to be produced for almost any phenomenon of interest. Such evidence, however, suffers from a denominator problem. Take, for example, the fact that the Russian-backed Internet Research Agency posted roughly 80,000 pieces of content on Facebook pages between 2015 and 2017, reaching an estimated 126 million users, according to a New York Times report. During roughly the same period, however, U.S. users saw more than 11 trillion posts from pages on the platform overall.
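A back-of-the-envelope calculation, using only the figures cited above, shows how lopsided that ratio is (the numbers are the reported estimates, nothing more):

```python
# Back-of-the-envelope arithmetic using only the figures reported above.
ira_posts = 80_000               # IRA posts on Facebook pages, 2015-2017
all_posts = 11_000_000_000_000   # posts from pages seen by U.S. users over roughly the same period

share = ira_posts / all_posts
print(f"{share:.10f}")  # ~0.0000000073, i.e. roughly seven IRA posts per billion posts overall
```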

Recent work, which measures misinformation as a proportion of overall information exposure, similarly finds the prevalence of misinformation to be small, bordering on insignificant. Fake news accounts for a mere “0.15% of Americans’ daily media diet,” according to one study published in Science Advances. In studies where the definition of misinformation is expanded (for example, to articles published by what experts have identified as low-quality news domains) or the focus is narrowed to a specific platform or media type, the prevalence of misinformation rises, but the proportion remains relatively small (roughly 5-10%, depending on the study), and interpreting the results runs into the same definitional challenges. Moreover, the small share of internet users who do consume a much higher proportion of verifiably false information tend to sit in the “long tails” of the distribution: hyper-partisans who opt into extreme information networks and are often already predisposed to the beliefs on offer.

This dynamic, the concentration of misinformation among hyper-partisans, leads to the causal critique. Studies that measure the impact of online misinformation on political attitudes and behaviors suggest the effects are limited, if they exist at all. Beliefs tend to be entrenched, evolving over years of diverse social, experiential and informational inputs. Simply put, the public is not easily moved by new pieces of information; rather, people are often motivated to interpret the information they encounter through the lens of their established worldview.

When social media users do encounter misinformation, it mostly comes from accounts they have chosen to follow and are likely to agree with, and from outlets that reflect their perspectives. As a result, digital misinformation generally preaches to the choir, potentially making attitudes or behaviors more extreme but not acting as a vector of mass influence or persuasion. If anything, the causal arrow may point in the opposite direction: beliefs may explain digital misinformation consumption more than the other way around.

Beyond True (& False)

These critiques have sparked scholarly disagreement regarding how we should define misinformation and what the literature truly teaches us. It’s easy to go further down the academic rabbit hole and come out the other side with uncertainty or, worse, intellectual tribalism. So we won’t.

“In rushing to group together falsehoods under the same analytical lens, we have jettisoned any understanding of how communication actually functions.”

Rather than endlessly refining definitions or debating study methodologies, we believe there’s a deeper issue embedded in the very word. One reason our collective efforts to combat misinformation have failed is that in rushing to group together falsehoods under the same analytical lens, we have jettisoned any understanding of how communication actually functions. We now have deep knowledge about how false information spreads and who may be more likely to believe it. But we have failed to fully account for how communication, culture, identity and politics are deeply entwined in the present moment.

Take, for example, Trump’s amplification of a false claim that Haitian immigrants were eating cats and dogs in Springfield, Ohio. Following the playbook of the misinformation paradigm, the claim was thoroughly evaluated. News articles and fact-checks proliferated, correcting the record.

In the week that followed, it became clear that the false claim was not really about the facts. As JD Vance said in defense of the pet-eating claims: “If I have to create stories so that the American media actually pays attention to the suffering of the American people, then that’s what I’m going to do.” The false claim aimed to build salience around immigration in general and Biden’s policies in particular (never mind that many of the Haitian immigrants in Springfield had come legally through the Humanitarian Parole Program). Its point, it appears, was to communicate a visceral disgust for immigrants and for Biden’s immigration policy.

Similar dynamics were at play with the frequent lies from Musk about DOGE. Fact-checking the “wall of receipts” does little if the actual communications goal is to keep people talking about government spending or to wage a thinly veiled war on perceived sources of liberal power. 

In this way, misinformation can sidestep our attempts to protect healthy discourse when a speaker’s aims are more about agenda setting or mobilization, for example, than transmitting factual content. Information can be communicated to shape identities, influence culture, strategically impact the media environment and more. It can be especially pernicious when false information is used toward these ends. The truth, of course, matters, but it also clearly does not define the myriad effects of information.

This dynamic may also explain why doomsday fears about AI-powered misinformation haven’t come to pass, especially with regard to the 2024 election, leading some commentators to claim that we were “deepfaked by election deepfakes.” The framing of AI’s destructive impact on the public was built on the same faulty assumptions as the misinformation paradigm. Most analyses accepted a straightforward model of persuasion in which synthetic content could dupe the masses, altering beliefs and behaviors at scale. And yet the AI-generated content that circulated in the latest U.S. presidential election was mostly “cartoons and agitprop,” as Matteo Wong put it in The Atlantic, such as an AI-generated image of Trump in a prison jumpsuit, which largely played to people’s pre-existing beliefs. Such content can communicate emotions and mobilize the public, while shaping the aesthetic language of contemporary politics. But it hardly rises to the level of the democratic threat foretold by many experts.

As technology improves and becomes more accessible, AI-generated content could certainly become more effective. For now, though, what seems to erode public trust in the information ecosystem most is news coverage, especially on television, about AI-powered misinformation, along with the fact that public figures can exploit the liar’s dividend, calling even legitimate content into question given the perceived ubiquity of deepfakes, to evade accountability.

Into The Storm

With the misinformation paradigm facing criticism from all sides, the primary critique in recent months has been a political one. Many of the self-described protectors of speech have become our primary censors. A House subcommittee led by Republican Rep. Jim Jordan, investigating efforts to counter misinformation, issued letters requesting information and documents in order to chill the speech of academics. Elon Musk suspended or temporarily banned some journalists from X, threatened legal action against people who report on the identities of DOGE employees and filed a lawsuit against a research group engaged in constitutionally protected speech. Trump has authorized a list of words prohibited from federally funded science. The remainder of Trump’s term will certainly bring more such twisted attempts to “bring back free speech” by controlling and shaping information flows.

Unlike the other critiques of misinformation, the politicized critique leaves us with an attenuated view of democracy. Attempts by pro-democracy actors to protect against genuinely harmful misinformation, such as questioning the integrity of elections, face congressional investigations, legal action or online harassment.

“We cannot continue to do things as we have, in hopes of better results.”

Some policymakers pressure platforms to remove content they disagree with, perpetrating the same perceived censorship they once condemned. It seems these political actors were never intent on creating a fairer game; they were simply “working the refs” to secure their own political victory. Unsurprisingly, many of these same actors are also undermining democratic institutions. President Trump still refuses to accept his loss of the 2020 election, as do many of his appointees. His vice president has suggested that the executive branch ignore adverse court rulings.

It can be appealing to assume that the misinformation paradigm is justified through a loose transitive logic: The most powerful critics of work to combat misinformation are also those who seek to erode democracy. So if we want to protect democracy, we must recommit to the ecosystem that has emerged over the past decade. This, we think, is the wrong approach.

We cannot continue to do things as we have, in hopes of better results. The prevailing paradigm of misinformation focuses on a statement’s truthfulness (over other features), emphasizes its potential for persuasion (over other harms) and demands corrections (over other strategies).

Amid political and legal attacks on misinformation research, it may seem like the wrong moment to question whether the field should continue on its current path. And yet we believe the moment for renewed thinking is not only ripe, but also urgent.

Renewed Thinking

To be clear, the dominant paradigm is not wrong about the democratic challenges of a public that cannot agree on basic facts or the unique dynamics introduced by digital platforms. It would be foolish not to heed Hannah Arendt’s warnings of how authoritarian leaders thrive on epistemic uncertainty, which allows them to not only consolidate control over the truth but also cast aside the independent foundations of a shared reality.

But the dominant paradigm has largely framed the relationship between misinformation and democracy as a mechanical problem with mechanical solutions. Efforts to combat misinformation have largely left us on the defensive, reacting to the strategies and narratives of those who spread it. The last decade has made clear that we aren’t going to fact-check or inoculate our way toward a healthier civic culture. It’s not enough to observe that democracy suffers from falsehoods.

Instead, we must begin with a more holistic understanding of how communication functions, and move beyond straightforward harms like persuasion toward more diffuse and more pernicious challenges involving trust, identity and polarization. Although this presents a less clear-cut path forward, there are already many new intellectual currents and programmatic efforts pointing in the right direction.

Rather than correcting misinformation, some journalists and civil society organizations are working to meet the informational needs of communities directly, needs that cannot be reduced to sorting truth from lies. Newsrooms both big and small, from USA Today to Chicago’s City Bureau, have been experimenting with more direct communication between journalists and readers. For example, in 2023 the Information Futures Lab at Brown University School of Public Health partnered with a Spanish-language fact-checking site, Factchequeado, and a Miami-based communications agency, We Are Más, to respond to questions from Hispanic diaspora communities in South Florida via a bilingual WhatsApp group. Using posts provided by a research team, community members answered questions ranging from “How do I get a mammogram when I am underinsured?” to “How are people handling the severe side effects of getting a fourth Covid shot?”

These efforts can’t be boiled down to fact-checks, which generally respond to claims already in circulation. Instead, they empower communities to ask their own questions and put experts in the position of meeting those needs.

This strategy has the potential for broad effects. Helping a community member obtain health care can both fill a critical informational need and function as a bulwark against health misinformation, which often preys on uncertainty. Getting valuable information directly to those who need it also works to build foundational trust in community institutions.

Technologists and policymakers are also working on “middleware”: third-party software that sits between platforms and users and can facilitate individual choice and, possibly, more democratic platform experiences. Researchers hope that middleware, broadly speaking, will address two critical areas (how information is selected and organized, and how harmful content is moderated) by providing users with a range of options to determine their own online environments.

“Getting valuable information directly to those who need it also works to build foundational trust in community institutions.”

Recent work by academics and practitioners has described middleware’s transformative potential as “an alternative to both centrally controlled, opaque platforms and an unmoderated, uncurated internet.” Internet scholar and activist Ethan Zuckerman has been at the forefront of this movement, recently filing a lawsuit against Meta to establish legal protections for third-party tools that give users more control over their social media feeds. His legal challenge argued that users should be able to use externally developed software like Unfollow Everything to, for example, delete their newsfeeds on Meta’s platforms.

This approach represents a fundamental shift from company-controlled platform environments to user-driven architectures; individuals are empowered to choose from competing algorithms and filtering systems rather than being subject to platform-determined information diets. Middleware solutions could, for example, prioritize high-quality news outlets or increase the ideological diversity of sources to combat filter bubbles.
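As a rough sketch of what user-driven middleware could look like in code, consider a small re-ranking layer. Everything below (the data model, the quality score, the policy functions) is hypothetical, meant only to illustrate the architecture rather than describe any existing tool:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class FeedItem:
    url: str
    source_quality: float  # hypothetical 0-1 score from an independent rating service
    viewpoint: str         # coarse label used to measure ideological diversity

# A ranking policy is simply a function from candidate posts to a re-ordered feed.
RankingPolicy = Callable[[list[FeedItem]], list[FeedItem]]

def prioritize_quality(items: list[FeedItem]) -> list[FeedItem]:
    """Rank the feed by source quality instead of predicted engagement."""
    return sorted(items, key=lambda item: item.source_quality, reverse=True)

def diversify_viewpoints(items: list[FeedItem]) -> list[FeedItem]:
    """Interleave items from different viewpoints to widen exposure."""
    buckets: dict[str, list[FeedItem]] = {}
    for item in items:
        buckets.setdefault(item.viewpoint, []).append(item)
    mixed: list[FeedItem] = []
    while any(buckets.values()):
        for bucket in buckets.values():
            if bucket:
                mixed.append(bucket.pop(0))
    return mixed

def apply_middleware(feed: list[FeedItem], policy: RankingPolicy) -> list[FeedItem]:
    """The user, not the platform, chooses which policy shapes what they see."""
    return policy(feed)
```

The design choice that matters is the last function: the platform supplies the candidates, but the user or their community decides which policy orders the feed.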

Technologists and scholars are also looking beyond individual tools to imagine an ecosystem where federated platforms like Mastodon and Bluesky integrate middleware as a core feature of the platform experience. This provides communities with the infrastructure and tools to proactively shape their own information environments, according to their values, preferences and needs — rather than fighting misinformation after it spreads.

Finally, scholars have already been working to expand our understanding of how information can impact us, recognizing that its most salient features may not be the content’s truth or falsity. Some scholars emphasize the financial motives of those behind disinformation campaigns, highlighting the troubling history between corporate power and scientific inquiry. Others have examined the “social roles” of fake news and explored how people make use of both true and false content in their communities to communicate, collaborate and make sense of the world. For example, in “Strangers in Their Own Land,” Arlie Hochschild examines how both high- and low-quality information helps Louisianians make sense of ecological collapse, government aid and growing economic precarity by supplying “deep stories”: emotionally grounded narratives that impose order on the world regardless of veracity.

Researchers are also examining how identity determines who is exposed to what kinds of information, who shares and believes it, and how it is produced. Much of this work focuses on how the deep memetic frames embedded within misinformation (the socially shared lenses through which we interpret the world) shape our politics and influence how we relate to each other.

There are, of course, other laudable efforts that we aren’t able to explore here: efforts to bridge divides and reduce polarization, to rethink technologies for civic engagement and preserve our attention spans, and to create localized social media environments. Academics have also launched partnerships with practitioners, such as the University of Washington’s Center for an Informed Public’s work with local libraries and Stanford University researchers’ collaboration with school districts on civic online reasoning, in order to run large-scale program evaluations in the wild.

These strands of research have produced important insights. But they have rarely received the funding or media coverage that flows to studies that, for example, focus more narrowly on tracking misinformation, especially those reinforcing the now-familiar adage: “false news travels faster than true news.”

Taken together, these ideas all point toward a different project: centering and empowering communities, and building healthier information environments before falsehoods take root. What they lack is the common vocabulary, infrastructure, and investment that once bound the misinformation field — and that is the challenge ahead.

Better Together

The philosopher Daniel Williams argues that the misinformation paradigm gained salience because it offers elites the mirage that the public can be controlled by pulling the right levers. As he wrote in the Boston Review in 2023, “Our political adversaries are simply ignorant dupes, and with enough education and critical thinking, they will come to agree with us; there is no need to reimagine other social institutions or build the political power necessary to do so.”

What Williams downplays is that the misinformation paradigm itself is a social institution that carries political power. Over the last decade, a diverse ecosystem has coalesced around the topic: journalists, civil society organizations, community groups, academics, policymakers, technology companies, funders and more. Members of this ecosystem charted a common direction with shared language, overlapping priorities and interconnected networks. Few subject areas have been able to coalesce such a broad set of actors so quickly, organized around a single problem.

Our challenge now is to expand the scope of how we defend democracy in the digital age while preserving the institutional momentum of the past decade. The misinformation paradigm, for all its limitations, has shown that rapid, large-scale coordination around democratic challenges is possible.

Coordination among such a diverse group is both nearly impossible and utterly essential to avoid balkanization. It will not be easy to maintain that institutional energy as we figure out how our contemporary communication system can strengthen democracy, rather than myopically focus on correcting falsehoods. In this moment of increasing attacks on the foundations of our democracy, it is essential that we eschew the simplified frameworks and old playbooks that have failed to make meaningful progress in improving our informational lives. Not to the detriment of democracy, but in defense of it.
