Future of Capitalism Archives - NOEMA
https://www.noemamag.com/article-topic/future-of-capitalism/

How The ‘AI Job Shock’ Will Differ From The ‘China Trade Shock’
https://www.noemamag.com/how-the-ai-job-shock-will-differ-from-the-china-trade-shock
Fri, 16 Jan 2026 17:49:28 +0000

Among the job doomsayers of the AI revolution, David Autor is a bit of an outlier. As the MIT economist has written in Noema, the capacity of mid-level professions such as nursing, design or production management to access greater expertise and knowledge once available only to doctors or specialists will boost the “applicable” value of their labor, and thus the wages and salaries that can sustain a middle class.

Unlike rote, low-level clerical work, cognitive labor of this sort is more likely to be augmented by decision-support information afforded by AI than displaced by intelligent machines.

By contrast, “inexpert” tasks, such as those performed by retirement home orderlies, child-care providers, security guards, janitors or food service workers, will be poorly remunerated even as they remain socially valuable. Since these jobs cannot be automated or enhanced by further knowledge, those who labor in them are a “bottleneck” to improved productivity that would lead to higher wages. Since there will be a vast pool of people without skills who can take those jobs, the value of their labor will be driven down even further.

This is problematic from the perspective of economic disparity because four out of every five jobs created in the U.S. are in this service sector.

So, when looking to the future of the labor market in an AI economy, we can’t talk about “job loss vs. gains” in any general sense. The key issue is not the quantity of jobs, but the value of labor, which really means the value of human expertise and the extent to which AI can enhance it, or not.

I discussed this and other issues with Autor at a recent gathering at the Vatican’s Pontifical Academy in Rome, convened to help address Pope Leo XIV’s concern over the fate of labor in the age of AI. We spoke amid the splendor of the Vatican gardens behind St. Peter’s Basilica.

The populist movements that have risen to power across the West today, particularly in the U.S., did so largely on the coattails of the backlash against globalization. Over the course of the U.S.-led free-trade policies during the post-Cold War decades, the rise of China as a cheap-labor manufacturing power with export access to the markets of advanced economies hollowed out the industrial base across large swaths of America and Europe — and the jobs it provided.

Some worry the AI shock will be even more devastating. Autor sees the similarity and the distinctions. What makes them the same is “it’s a big change that can happen quickly,” he says. But there are three ways in which they are different.

First, “the China trade shock was very localized. It was in manufacturing-intensive communities that made labor-intensive products such as furniture, textiles, clothing, plastic dolls and assembly of low-end hardware.”

AI’s effects will be much more geographically diffuse. “We’ve already lost millions of clerical worker jobs, but no one talks about ‘clerical shock.’ There is no clerical capital of America to see it disappear.”

Second, “the China trade shock didn’t just eliminate certain types of jobs. It eliminated entire industries all at once.” AI will shift the nature of jobs and tasks and change the way people work, but it “will not put industries out of business. … It will open new things and will close others, but it will not be an existential elimination, a great extinction.”

Third, “unless you were a very big multinational, what was experienced by U.S. firms during globalization was basically a shock to competition. All of a sudden, prices fell to a lower level than you could afford to produce.”

AI will be more of a productivity change that will be positive for many businesses. “That doesn’t mean it’s good for workers necessarily, because a lot of workers could be displaced. But business won’t be like, ‘Oh God, the AI shock. We hate this.’ They’ll be, like, ‘Oh great. We can do our stuff with fewer inputs.’” In short, tech-driven productivity is the route to greater profitability.

As we have often discussed in Noema, it is precisely this dynamic where productivity growth and wealth creation are being divorced from jobs and income that is the central social challenge. Increasingly, the gains will flow to capital — those who own the robots — and decreasingly to labor. The gap will inexorably grow, even with those who can earn higher wages and salaries through work augmented by AI.

Is the idea of “universal basic capital” (UBC), in which everyone has an ownership share in the AI economy through investment of their savings, a promising response?

Autor believes that what UBC offers is a “hedge” against the displacement or demotion of labor. Most of us are “unhedged,” he says, because “human capital is all we have and we are out of luck if that becomes devalued. So at least we would have a balanced portfolio.”

If the government seeds a UBC account, such as “baby bonds,” at the outset, Autor notes, it will grow in value over time through compounded investment returns. The problem with the alternative idea of “universal basic income” is that you are “creating a continual system of transfers where you are basically saying ‘Hey, you rich people over there, you pay for the leisure of everybody else over here.’ And that is not politically viable. ‘How do they get the right to our stuff?’”
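The compounding mechanics behind a seeded account like a “baby bond” are simple arithmetic. A minimal sketch of that growth, using a hypothetical seed amount and a hypothetical real rate of return (neither figure comes from the interview):

```python
# Illustrative only: the $5,000 seed and 6% annual return are
# assumptions for the example, not figures from Autor or Noema.

def future_value(seed: float, annual_return: float, years: int) -> float:
    """Value of a one-time seed after compounding annually for `years`."""
    return seed * (1 + annual_return) ** years

# A $5,000 account seeded at birth, compounding at a hypothetical 6%:
at_18 = future_value(5_000, 0.06, 18)   # roughly $14,300 at adulthood
at_65 = future_value(5_000, 0.06, 65)   # roughly $220,000 at retirement
print(f"At 18: ${at_18:,.0f}; at 65: ${at_65:,.0f}")
```

The point of the sketch is Autor’s: a one-time public seed, left untouched, does the redistributive work over decades without requiring the continual transfers that make UBI politically fraught.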

Autor compares the idea of “universal basic income” (UBI) to the “resource curse” of unstable countries with vast oil and mineral resources, where it appears that “money is just coming out of a hole in the ground.”

The related reason that UBC is important for Autor is that “the people who have a voice in democracies are those who are seen as economic contributors. If the ownership of capital is more diffuse, then everyone is a contributor,” and everyone has a greater voice, which they will use since they have a stake in the system.

The closer we get to widespread integration of AI into the broader economy, the clearer the patterns Autor describes will become. On that basis, responsible policymakers can formulate remedial responses that fit the new economic times we have entered, rather than relying on outmoded policies geared to conditions that no longer exist.

Noema’s Top Artwork Of 2025
https://www.noemamag.com/noemas-top-artwork-of-2025
Thu, 18 Dec 2025 15:41:01 +0000

by Hélène Blanc
for “Why Science Hasn’t Solved Consciousness (Yet)”

by Shalinder Matharu
for “How To Build A Thousand-Year-Old Tree”

by Nicolás Ortega
for “Humanity’s Endgame”

by Seba Cestaro
for “How We Became Captives Of Social Media”

by Beatrice Caciotti
for “A Third Path For AI Beyond The US-China Binary”

by Dadu Shin
for “The Languages Lost To Climate Change” in Noema Magazine Issue VI, Fall 2025

by LIMN
for “Why AI Is A Philosophical Rupture”

by Kate Banazi
for “AI Is Evolving — And Changing Our Understanding Of Intelligence” in Noema Magazine Issue VI, Fall 2025

by Jonathan Zawada
for “The New Planetary Nationalism” in Noema Magazine Issue VI, Fall 2025

by Satwika Kresna
for “The Future Of Space Is More Than Human”

Other Top Picks By Noema’s Editors

Noema’s Top 10 Reads Of 2025
https://www.noemamag.com/noemas-top-10-reads-of-2025
Tue, 16 Dec 2025 17:30:14 +0000

Your new favorite playlist: Noema’s Top 10 Reads of 2025 are also available as audio.

Daniel Barreto for Noema Magazine

The Last Days Of Social Media

Social media promised connection, but it has delivered exhaustion.

by James O’Sullivan


Beatrice Caciotti for Noema Magazine

A Third Path For AI Beyond The US-China Binary

What if the future of AI isn’t defined by Washington or Beijing, but by improvisation elsewhere?

by Dang Nguyen


Hélène Blanc for Noema Magazine

Why Science Hasn’t Solved Consciousness (Yet)

To understand life, we must stop treating organisms like machines and minds like code.

by Adam Frank


NASA Solar Dynamics Observatory

The Unseen Fury Of Solar Storms

Lurking in every space weather forecaster’s mind is the hypothetical big one, a solar storm so huge it could bring our networked, planetary civilization to its knees.

by Henry Wismayer


Sophie Douala for Noema Magazine

From Statecraft To Soulcraft

How the world’s illiberal powers like Russia, China and increasingly the U.S. rule through their visions of the good life.

by Alexandre Lefebvre


Ibrahim Rayintakath for Noema Magazine

The Languages Lost To Climate Change

Climate catastrophes and biodiversity loss are endangering languages across the globe.

by Julia Webster Ayuso


Vartika Sharma for Noema Magazine (images courtesy mzacha and Shaun Greiner)

The Shrouded, Sinister History Of The Bulldozer

From India to the Amazon to Israel, bulldozers have left a path of destruction that offers a cautionary tale for how technology without safeguards can be misused.

by Joe Zadeh


Blake Cale for Noema Magazine

The Moral Authority Of Animals

For millennia before we showed up on the scene, social animals — those living in societies and cooperating for survival — had been creating cultures imbued with ethics.

by Jay Griffiths


Zhenya Oliinyk for Noema Magazine

Welcome To The New Warring States

Today’s global turbulence has echoes in Chinese history.

by Hui Huang


Along the highway near Nukus, the capital of the autonomous Republic of Karakalpakstan. (All photography by Hassan Kurbanbaev for Noema Magazine)

Signs Of Life In A Desert Of Death

In the dry and fiery deserts of Central Asia, among the mythical sites of both the first human and the end of all days, I found evidence that life restores itself even on the bleakest edge of ecological apocalypse.

by Nick Hunt

The Politics Of Superintelligence
https://www.noemamag.com/the-politics-of-superintelligence
Tue, 09 Dec 2025 16:38:08 +0000

The machines are coming for us, or so we’re told. Not today, but soon enough that we must seemingly reorganize civilization around their arrival. In boardrooms, lecture theatres, parliamentary hearings and breathless tech journalism, the specter of superintelligence increasingly haunts our discourse. It’s often framed as “artificial general intelligence,” or “AGI,” and sometimes as something still more expansive, but always as an artificial mind that surpasses human cognition across all domains, capable of recursive self-improvement and potentially hostile to human survival. But whatever it’s called, this coming superintelligence has colonized our collective imagination.

The scenario echoes the speculative lineage of science fiction, from Isaac Asimov’s “Three Laws of Robotics” — a literary attempt to constrain machine agency — to later visions such as Stanley Kubrick and Arthur C. Clarke’s HAL 9000 or the runaway networks of William Gibson. What was once the realm of narrative thought-experiment now serves as a quasi-political forecast.

This narrative has very little to do with any scientific consensus, emerging instead from particular corridors of power. The loudest prophets of superintelligence are those building the very systems they warn against. When Sam Altman speaks of artificial general intelligence’s existential risk to humanity while simultaneously racing to create it, or when Elon Musk warns of an AI apocalypse while founding companies to accelerate its development, we’re seeing politics masked as predictions.

The superintelligence discourse functions as a sophisticated apparatus of power, transforming immediate questions about corporate accountability, worker displacement, algorithmic bias and democratic governance into abstract philosophical puzzles about consciousness and control. This sleight of hand is neither accidental nor benign. By making hypothetical catastrophe the center of public discourse, architects of AI systems have positioned themselves as humanity’s reluctant guardians, burdened with terrible knowledge and awesome responsibility. They have become indispensable intermediaries between civilization and its potential destroyer, a role that, coincidentally, requires massive capital investment, minimal regulation and concentrated decision-making authority.

Consider how this framing operates. When we debate whether a future artificial general intelligence might eliminate humanity, we’re not discussing the Amazon warehouse worker whose movements are dictated by algorithmic surveillance or the Palestinian whose neighborhood is targeted by automated weapons systems. These present realities dissolve into background noise against the rhetoric of existential risk. Such suffering is actual, while the superintelligence remains theoretical, but our attention and resources — and even our regulatory frameworks — increasingly orient toward the latter as governments convene frontier-AI taskforces and draft risk templates for hypothetical future systems. Meanwhile, current labour protections and constraints on algorithmic surveillance remain tied to legislation that is increasingly inadequate.

In the U.S., Executive Order 14110 on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” mentions civil rights, competition, labor and discrimination, but it creates its most forceful accountability obligations for large, high-capability foundation models and future systems trained above certain compute thresholds, requiring firms to share technical information with the federal government and demonstrate that their models stay within specified safety limits. The U.K. has gone further still, building a Frontier AI Taskforce — now absorbed into the AI Security Institute — whose mandate centers on extreme, hypothetical risks. And even the EU’s AI Act, which does attempt to regulate present harms, devotes a section to systemic and foundation-model risks anticipated at some unknown point in the future. Across these jurisdictions, the political energy clusters around future, speculative systems.

Artificial superintelligence narratives perform very intentional political work, drawing attention from present systems of control toward distant catastrophe, shifting debate from material power to imagined futures. Predictions of machine godhood reshape how authority is claimed and whose interests steer AI governance, muting the voices of those who suffer under algorithms and amplifying those who want extinction to dominate the conversation. What poses as neutral futurism functions instead as an intervention in today’s political economy. Seen clearly, the prophecy of superintelligence is less a warning about machines than a strategy for power, and that strategy needs to be recognized for what it is. The power of this narrative draws from its history.

Bowing At The Altar Of Rationalism

Superintelligence as a dominant AI narrative predates ChatGPT and can be traced back to the peculiar marriage of Cold War strategy and computational theory that emerged in the 1950s. The RAND Corporation, an archetypal think tank where nuclear strategists gamed out humanity’s destruction, provided the conceptual nursery for thinking about intelligence as pure calculation, divorced from culture or politics.

“Whatever it’s called, this coming superintelligence has colonized our collective imagination.”

The early AI pioneers inherited this framework, and when Alan Turing proposed his famous test, he deliberately sidestepped questions of consciousness or experience in favor of observable behavior — if a machine could convince a human interlocutor of its humanity through text alone, it deserved the label “intelligent.” This behaviorist reduction would prove fateful, as in treating thought as quantifiable operations, it recast intelligence as something that could be measured, ranked and ultimately outdone by machines.

The computer scientist John von Neumann, as recalled by mathematician Stanislaw Ulam in 1958, spoke of a technological “singularity” in which accelerating progress would one day mean that machines could improve their own design, rapidly bootstrapping themselves to superhuman capability. This notion, refined by mathematician Irving John Good in the 1960s, established the basic grammar of superintelligence discourse: recursive self-improvement, exponential growth and the last invention humanity would ever need to make. These were, of course, mathematical extrapolations rather than empirical observations, but such speculations and thought experiments were repeated so frequently that they acquired the weight of prophecy, helping to make the imagined future they described look self-evident.

The 1980s and 1990s saw these ideas migrate from computer science departments to a peculiar subculture of rationalists and futurists centered around figures like computer scientist Eliezer Yudkowsky and his Singularity Institute (later the Machine Intelligence Research Institute). This community built a dense theoretical framework for superintelligence: utility functions, the formal goal systems meant to govern an AI’s choices; the paperclip maximizer, a thought experiment where a trivial objective drives a machine to consume all resources; instrumental convergence, the claim that almost any ultimate goal leads an AI to seek power and resources; and the orthogonality thesis, which holds that intelligence and moral values are independent. They created a scholastic philosophy for an entity that didn’t exist, complete with careful taxonomies of different types of AI take-off scenarios and elaborate arguments about acausal trade between possible future intelligences.

What united these thinkers was a shared commitment to a particular style of reasoning. They practiced what might be called extreme rationalism, the belief that pure logic, divorced from empirical constraint or social context, could reveal fundamental truths about technology and society. This methodology privileged thought experiments over data and clever paradoxes over mundane observation, and the result was a body of work that read like medieval theology, brilliant and intricate, but utterly disconnected from the actual development of AI systems. It should be acknowledged that disconnection did not make their efforts worthless, and by pushing abstract reasoning to its limits, they clarified questions of control, ethics and long-term risk that later informed more grounded discussions of AI policy and safety.

The contemporary incarnation of this tradition found its most influential expression in Nick Bostrom’s 2014 book “Superintelligence,” which transformed fringe internet philosophy into mainstream discourse. Bostrom, a former Oxford philosopher, gave academic respectability to scenarios that had previously lived in science fiction and posts on blogs with obscure titles. His book, despite containing no technical AI research and precious little engagement with actual machine learning, became required reading in Silicon Valley, often cited by tech billionaires. Musk once tweeted: “Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.” Musk is right to counsel caution, as evidenced by the 1,200 to 2,000 tons of nitrogen oxides and hazardous air pollutants like formaldehyde that his own artificial intelligence company expels into the air in Boxtown, a working-class, largely Black community in Memphis.

This commentary shouldn’t be seen as an attempt to diminish Bostrom’s achievement, which was to take the sprawling, often incoherent fears about AI and organize them into a rigorous framework. But his book sometimes reads like a natural history project, in which he categorizes different routes to superintelligence and different “failure modes,” ways such a system might go wrong or destroy us, as well as solutions to “control problems,” schemes proposed to keep it aligned — this taxonomic approach made even wild speculation appear scientific. By treating superintelligence as an object of systematic study rather than a science fiction premise, Bostrom laundered existential risk into respectable discourse.

The effective altruism (EA) movement supplied the social infrastructure for these ideas. Its core principle is to maximize long-term good through rational calculation. Within that worldview, superintelligence risk fits neatly, for if future people matter as much as present ones, and if a small chance of global catastrophe outweighs ongoing harms, then preventing AI apocalypse becomes the top priority. On that logic, hypothetical future lives eclipse the suffering of people living today.

“The loudest prophets of superintelligence are those building the very systems they warn against.”

This did not stay an abstract argument as philanthropists identifying with effective altruists channeled significant funding into AI safety research, and money shapes what researchers study. Organizations aligned with effective altruism have been established in universities and policy circles, publishing reports and advising governments on how to think about AI. The UK’s Frontier AI Taskforce has included members with documented links to the effective altruism movement, and commentators argue that these connections help channel EA-style priorities into government AI risk policy.

Effective altruism encourages its proponents to move into public bodies and major labs, creating a pipeline of staff who carry these priorities into decision-making roles. Jason Matheny, former director of Intelligence Advanced Research Projects Activity, a U.S. government agency that funds high-risk, high-reward research to improve intelligence gathering and analysis, has described how effective altruists can “pick low-hanging fruit within government positions” to exert influence. Superintelligence discourse isn’t spreading because experts broadly agree it is our most urgent problem; it spreads because a well-resourced movement has given it money and access to power.

This is not to deny the merits of engaging with the ideals of effective altruism or with the concept of superintelligence as articulated by Bostrom. The problem is how readily those ideas become distorted once they enter political and commercial domains. This intellectual genealogy matters because it reveals superintelligence discourse as a cultural product, ideas that moved beyond theory into institutions, acquiring funding and advocates. And its emergence was shaped within institutions committed to rationalism over empiricism, where individual genius was fetishized over collective judgment, and technological determinism was prioritized over social context.

Entrepreneurs Of The Apocalypse

The transformation of superintelligence from internet philosophy to boardroom strategy represents one of the most successful ideological campaigns of the 21st century. Tech executives who had previously focused on quarterly earnings and user growth metrics began speaking like mystics about humanity’s cosmic destiny, and this conversion reshaped the political economy of AI development.

OpenAI, founded in 2015 as a non-profit dedicated to ensuring artificial intelligence benefits humanity, exemplifies this transformation. OpenAI has evolved into a peculiar hybrid, a capped-profit company controlled by a non-profit board, valued by some estimates at $500 billion, racing to build the very artificial general intelligence it warns might destroy us. This structure, byzantine in its complexity, makes perfect sense within the logic of superintelligence. If AGI represents both ultimate promise and existential threat, then the organization building it must be simultaneously commercial and altruistic, aggressive and cautious, public-spirited yet secretive.

Sam Altman, OpenAI’s CEO, has perfected the rhetorical stance of the reluctant prophet. In Congressional testimony, blog posts and interviews, he warns of AI’s dangers while insisting on the necessity of pushing forward. “Our mission is to ensure that AGI (Artificial General Intelligence) benefits all of humanity,” he wrote on his blog earlier this year. The argument carries an unmistakable subtext: we must build AGI before someone else does, because we’re the only ones responsible enough to handle it. Altman seems determined to position OpenAI as humanity’s champion, bearing the terrible burden of creating God-like intelligence so that it might be restrained.

Still, OpenAI is also seeking a profit. And that is really what all this is about — profit. Superintelligence narratives carry staggering financial implications, justifying astronomical valuations for companies that have yet to show consistent paths to self-sufficiency. But if you’re building humanity’s last invention, perhaps normal business metrics become irrelevant. This eschatological framework explains why Microsoft would invest $13 billion in OpenAI, why venture capitalists pour money into AGI startups and why the market treats large language models like ChatGPT as precursors to omniscience.

Anthropic, founded by former OpenAI executives, positions itself as the “safety-focused” alternative, raising billions by promising to build AI systems that are “helpful, honest and harmless.” But this is largely elaborate safety theatre: preventing harm has no genuine place in the competition between OpenAI, Anthropic, Google DeepMind and others — the true contest is over who gets to build the best, most profitable models, and how well they can package that pursuit in the language of caution.

This dynamic creates a race to the bottom of responsibility, with each company justifying acceleration by pointing to competitors who might be less careful: The Chinese are coming, so if we slow down, they’ll build unaligned AGI first. Meta is releasing models as open source without proper safeguards. What if some unknown actor hits upon the next breakthrough first? This paranoid logic forecloses any possibility of genuine pause or democratic deliberation. Speed becomes safety, and caution becomes recklessness.

“[Sam] Altman seems determined to position OpenAI as humanity’s champion, bearing the terrible burden of creating God-like intelligence so that it might be restrained.”

The superintelligence frame reshapes internal corporate politics. AI safety teams, often staffed by believers in existential risk, provide moral cover for rapid development, absorbing criticism that might otherwise target business practices and reinforcing the idea that these companies are doing world-saving work. If your safety team publishes papers about preventing human extinction, routine regulation begins to look trivial.

The well-publicized drama at OpenAI in November 2023 illuminates these dynamics. When the company’s board attempted to fire Sam Altman over concerns about his candor, the resulting chaos revealed underlying power relations. Employees, who had been recruited with talk of saving humanity, threatened mass defection if their CEO wasn’t reinstated — does their loyalty to Altman outweigh their quest to save the rest of us? Microsoft, despite having no formal control over the OpenAI board, exercised decisive influence as the company’s dominant funder and cloud provider, offering to hire Altman and any staff who followed him. The board members, who thought honesty an important trait in a CEO, resigned, and Altman returned triumphant.

Superintelligence rhetoric serves power, but it is set aside when it clashes with the interests of capital and control. Microsoft has invested billions in OpenAI and implemented its models in many of its commercial products. Altman wants rapid progress, so Microsoft wants Altman. His removal put Microsoft’s whole AI business trajectory at risk. The board was swept aside because they tried, as is their remit, to constrain OpenAI’s CEO. Microsoft’s leverage ultimately determined the outcome, and employees followed suit. It was never about saving humanity; it was about profit.

The entrepreneurs of the AI apocalypse have discovered a perfect formula. By warning of existential risk, they position themselves as indispensable. By racing to build AGI, they justify the unlimited use of resources. And by claiming unique responsibility, they deflect democratic oversight. The future becomes a hostage to present accumulation, and we’re told we should be grateful for such responsible custodians.

Superintelligence discourse actively constructs the future. Through constant repetition, speculative scenarios acquire the weight of destiny. This process — the manufacture of inevitability — reveals how power operates through prophecy.

Consider the claim that artificial general intelligence will arrive within five to 20 years. Across many sources, this prediction is surprisingly stable. But since at least the mid-20th century, researchers and futurists have repeatedly promised human-level AI “in a couple of decades,” only for the horizon to continuously slip. The persistence of that moving window serves a specific function: it’s near enough to justify immediate massive investment while far enough away to defer necessary accountability. It creates a temporal framework within which certain actions become compulsory regardless of democratic input.

This rhetoric of inevitability pervades Silicon Valley’s discussion of AI. AGI is coming whether we like it or not, executives declare, as if technological development were a natural force rather than a human choice. This naturalization of progress obscures the specific decisions, investments and infrastructures that make certain futures more likely than others. When tech leaders say we can’t stop progress, what they mean is, you can’t stop us.

Media amplification plays a crucial role in this process, as every incremental improvement in large language models gets framed as a step towards AGI. ChatGPT writes poetry; surely consciousness is imminent. Claude solves coding problems; the singularity is near. Such accounts, often sourced from the very companies building these systems, create a sense of momentum that becomes self-fulfilling. Investors invest because AGI seems near, researchers join companies because that’s where the future is being built and governments defer regulation because they don’t want to handicap their domestic champions.

The construction of inevitability also operates through linguistic choices. Notice how quickly “artificial general intelligence” replaced “artificial intelligence” in public discourse, as if the general variety were a natural evolution rather than a specific and contested concept, and how “superintelligence” — or whatever term the concept eventually assumes — then appears as the seemingly inevitable next rung on that ladder. Notice how “alignment” — ensuring AI systems do what humans want — became the central problem, assuming both that superhuman AI will exist and that the challenge is technical rather than political.

Notice how “compute,” which basically means computational power, became a measurable resource like oil or grain, something to be stockpiled and controlled. This semantic shift matters because language shapes possibility. When we accept that AGI is inevitable, we stop asking whether it should be built, and in the furor, we miss that we seem to have conceded that a small group of technologists should determine our future.

“When we accept that AGI is inevitable, we stop asking whether it should be built, and in the furor, we miss that we seem to have conceded that a small group of technologists should determine our future.”

When we simultaneously treat compute as a strategic resource, we further normalize the concentration of power in the hands of those who control data centers, who, in turn, as the failed ousting of Altman demonstrates, grant further power to this chosen few.

Academic institutions, which are meant to resist such logics, have been conscripted into this manufacture of inevitability. Universities, desperate for industry funding and relevance, establish AI safety centers and existential risk research programs. These institutions, putatively independent, end up reinforcing industry narratives, producing papers on AGI timelines and alignment strategies, lending scholarly authority to speculative fiction. Young researchers, seeing where the money and prestige lie, orient their careers toward superintelligence questions rather than present AI harms.

International competition adds further to the apparatus of inevitability. The “AI arms race” between the United States and China is framed in existential terms, that whoever builds AGI first will achieve permanent geopolitical dominance. This neo-Cold War rhetoric forecloses possibilities for cooperation, regulation or restraint, making racing toward potentially dangerous technology seem patriotic rather than reckless. National security becomes another trump card against democratic deliberation.

The prophecy becomes self-fulfilling through material concentration — as resources flow towards AGI development, alternative approaches to AI starve. Researchers who might work on explainable AI or AI for social good instead join labs focused on scaling large language models. The future narrows to match the prediction, not because the prediction was accurate, but because it commanded resources.

In financial terms, it is a heads-we-win, tails-you-lose arrangement: If the promised breakthroughs materialize, private firms and their investors keep the upside, but if they stall or disappoint, the sunk costs in energy-hungry data centers and retooled industrial policy sit on the public balance sheet. An entire macro-economy is being hitched to a story whose basic physics we do not yet understand.

We must recognize this process as political, not technical. The inevitability of superintelligence is manufactured through specific choices about funding, attention and legitimacy, and different choices would produce different futures. The fundamental question isn’t whether AGI is coming, but who benefits from making us believe it is.

The Abandoned Present

While we fixate on hypothetical machine gods, actual AI systems reshape human life in profound and often harmful ways. The superintelligence discourse distracts from these immediate impacts; one might even say it legitimizes them. After all, if we’re racing towards AGI to save humanity, what’s a little collateral damage along the way?

Consider labor, that fundamental human activity through which we produce and reproduce our world. AI systems already govern millions of workers’ days through algorithmic management. In Amazon warehouses, workers’ movements are dictated by handheld devices that calculate optimal routes, monitor bathroom breaks and automatically fire those who fall behind pace. While the cultural conversation around automation often emphasizes how it threatens to replace human labor, for many, automation is already actively degrading their profession. Many workers have become an appendage to the algorithm, executing tasks the machine cannot yet perform while being measured and monitored by computational systems.

Frederick Taylor, the American mechanical engineer who published “The Principles of Scientific Management” in 1911, is famous for his efforts to engineer maximum efficiency through rigid control of labor. What we have today is a form of tech-mediated Taylorism wherein work is broken into tiny, optimized motions, with every movement monitored and timed, just with management logic encoded in software rather than stopwatches. Taylor’s logic has been operationalized far beyond what he could have imagined. But when we discuss AI and work, the conversation immediately leaps to whether AGI will eliminate all jobs, as if the present suffering of algorithmically managed workers were merely a waystation to obsolescence.

The content moderation industry exemplifies this abandoned present. Hundreds of thousands of workers, primarily in the Global South, spend their days viewing the worst content humanity produces — including child abuse and sexual violence — to train AI systems to recognize and filter such material. These workers, paid a fraction of what their counterparts in Silicon Valley earn, suffer documented psychological trauma from their work. They’re the hidden labor force behind “AI safety,” protecting users from harmful content while being harmed themselves. But their suffering rarely features in discussions of AI ethics, which focus instead on preventing hypothetical future harms from superintelligent systems.

Surveillance represents another immediate reality obscured by futuristic speculation. AI systems enable unprecedented tracking of human behavior. Facial recognition identifies protesters and dissidents. Predictive policing algorithms direct law enforcement to “high-risk” neighborhoods that mysteriously correlate with racial demographics. Border control agencies use AI to assess asylum seekers’ credibility through voice analysis and micro-expressions. Social credit systems score citizens’ trustworthiness using algorithms that analyze their digital traces.

“An entire macro-economy is being hitched to a story whose basic physics we do not yet understand.”

These aren’t speculative technologies; they are real systems that are already deployed, and they don’t require artificial general intelligence, just pattern matching at scale. But the superintelligence discourse treats surveillance as a future risk — what if an AGI monitored everyone? — rather than a present reality. This temporal displacement serves power, because it’s easier to debate hypothetical panopticons than to dismantle actual ones.

Algorithmic bias pervades critical social infrastructures, amplifying and legitimizing existing inequalities by lending mathematical authority to human prejudice. The response from the AI industry? We need better datasets, more diverse teams and algorithmic audits — technical fixes for political problems. Meanwhile, the same companies racing to build AGI deploy biased systems at scale, treating present harm as acceptable casualties in the march toward transcendence. The violence is actual, but the solution remains perpetually deferred.

And beneath all of this, the environmental destruction accelerates as we continue to train large language models — a process that consumes enormous amounts of energy. When confronted with this ecological cost, AI companies point to hypothetical benefits, such as AGI solving climate change or optimizing energy systems. They use the future to justify the present, as though these speculative benefits should outweigh actual, ongoing damages. This temporal shell game, destroying the world to save it, would be comedic if the consequences weren’t so severe.

And just as it erodes the environment, AI also erodes democracy. Recommendation algorithms have long shaped political discourse, creating filter bubbles and amplifying extremism, but more recently, generative AI has flooded information spaces with synthetic content, making it impossible to distinguish truth from fabrication. The public sphere, the basis of democratic life, depends on people sharing enough common information to deliberate together.

When AI systems segment citizens into ever-narrower feeds, that shared space collapses. We no longer argue about the same facts because we no longer encounter the same world, but our governance discussions focus on preventing AGI from destroying democracy in the future rather than addressing how current AI systems undermine it today. We debate AI alignment while ignoring human alignment on key questions, like whether AI systems should serve democratic values rather than corporate profits. The speculative tyranny of superintelligence obscures the actual tyranny of surveillance capitalism.

Mental health impacts accumulate as humans adapt to algorithmic judgment. Social media algorithms, optimized for engagement, promote content that triggers anxiety, depression and eating disorders. Young people internalize algorithmic metrics — likes, shares, views — as measures of self-worth. The quantification of social life through AI systems produces new forms of alienation and suffering, but these immediate psychological harms pale beside imagined existential risks, receiving a fraction of the attention and resources directed toward preventing hypothetical AGI catastrophe.

Each of these present harms could be addressed through collective action. We could regulate algorithmic management, support content moderators, limit surveillance, audit biases, constrain energy use, protect democracy and prioritize mental health. These aren’t technical problems requiring superintelligence to solve; they’re just good old-fashioned political challenges demanding democratic engagement. But the superintelligence discourse makes such mundane interventions seem almost quaint. Why reorganize the workplace when work itself might soon be obsolete? Why regulate surveillance when AGI might monitor our thoughts? Why address bias when superintelligence might transcend human prejudice entirely?

The abandoned present is crowded with suffering that could be alleviated through human choice rather than machine transcendence, and every moment we spend debating alignment problems for non-existent AGI is a moment not spent addressing algorithmic harms affecting millions today. The future-orientation of superintelligence discourse isn’t just distraction but an abandonment, a willful turning away from present responsibility toward speculative absolution.

Alternative Imaginaries For The Age Of AI

The dominance of superintelligence narratives obscures the fact that many other ways of doing AI exist, grounded in present social needs rather than hypothetical machine gods. These alternatives show that the choice is not between joining the race to superintelligence and renouncing technology altogether. It is possible to build and govern automation differently now.

Across the world, communities have begun experimenting with different ways of organizing data and automation. Indigenous data sovereignty movements, for instance, have developed governance frameworks, data platforms and research protocols that treat data as a collective resource subject to collective consent. Organizations such as the First Nations Information Governance Centre in Canada and Te Mana Raraunga in Aotearoa insist that data projects, including those involving AI, be accountable to relationships, histories and obligations, not just to metrics of optimization and scale. Their projects offer working examples of automated systems designed to respect cultural values and reinforce local autonomy, a mirror image of the effective altruist impulse to abstract away from place in the name of hypothetical future people.

“The speculative tyranny of superintelligence obscures the actual tyranny of surveillance capitalism.”

Workers are also experimenting with different arrangements, and unions and labor organizations have negotiated clauses on algorithmic management, pushed for audit rights over workplace systems and begun building worker-controlled data trusts to govern how their information is used. These initiatives emerge from lived experience rather than philosophical speculation, from people who spend their days under algorithmic surveillance and are determined to redesign the systems that manage their existence. While tech executives are celebrated for speculating about AGI, workers who analyze the systems already governing their lives are still too easily dismissed as Luddites.

Similar experiments appear in feminist and disability-led technology projects that build tools around care, access and cognitive diversity, and in Global South initiatives that use modest, locally governed AI systems to support healthcare, agriculture or education under tight resource constraints. Degrowth-oriented technologists design low-power, community-hosted models and data centers meant to sit within ecological limits rather than override them. Such examples show how critique and activism can progress to action, to concrete infrastructures and institutional arrangements that demonstrate how AI can be organized without defaulting to the superintelligence paradigm that demands everyone else be sacrificed because a few tech bros can see the greater good that everyone else has missed.

What unites these diverse imaginaries — Indigenous data governance, worker-led data trusts, and Global South design projects — is a different understanding of intelligence itself. Rather than picturing intelligence as an abstract, disembodied capacity to optimize across all domains, they treat it as a relational and embodied capacity bound to specific contexts. They address real communities with real needs, not hypothetical humanity facing hypothetical machines. Precisely because they are grounded, they appear modest when set against the grandiosity of superintelligence, but existential risk makes every other concern look small by comparison. You can predict the ripostes: Why prioritize worker rights when work itself might soon disappear? Why consider environmental limits when AGI is imagined as capable of solving climate change on demand?

These alternatives also illuminate the democratic deficit at the heart of the superintelligence narrative. Treating AI at once as an arcane technical problem that ordinary people cannot understand and as an unquestionable engine of social progress allows authority to consolidate in the hands of those who own and build the systems. Once algorithms mediate communication, employment, welfare, policing and public discourse, they become political institutions. The power structure is feudal, comprising a small corporate elite that holds decision-making power justified by special expertise and the imagined urgency of existential risk, while citizens and taxpayers are told they cannot grasp the technical complexities and that slowing development would be irresponsible in a global race. The result is learned helplessness, a sense that technological futures cannot be shaped democratically but must be entrusted to visionary engineers.

A democratic approach would invert this logic, recognizing that questions about surveillance, workplace automation, public services and even the pursuit of AGI itself are not engineering puzzles but value choices. Citizens do not need to understand backpropagation to deliberate on whether predictive policing should exist, just as they need not understand combustion engineering to debate transport policy. Democracy requires the right to shape the conditions of collective life, including the architectures of AI.

This could take many forms. Workers could participate in decisions about algorithmic management. Communities could govern local data according to their own priorities. Key computational resources could be owned publicly or cooperatively rather than concentrated in a few firms. Citizen assemblies could be given real authority over whether a municipality moves forward with contentious uses of AI, like facial recognition and predictive policing. Developers could be required to demonstrate safety before deployment under a precautionary framework. International agreements could set limits on the most dangerous areas of AI research. None of this is about whether AGI, or any other kind of superintelligence one can imagine, does or does not arrive; it’s simply about recognizing that the distribution of technological power is a political choice rather than an inevitable outcome.

“The real political question is not whether some artificial superintelligence will emerge, but who gets to decide what kinds of intelligence we build and sustain.”

The superintelligence narrative undermines these democratic possibilities by presenting concentrated power as a tragic necessity. If extinction is at stake, then public deliberation becomes a luxury we cannot afford. If AGI is inevitable, then governance must be ceded to those racing to build it. This narrative manufactures urgency to justify the erosion of democratic control, and what begins as a story about hypothetical machines ends as a story about real political disempowerment. This, ultimately, is the larger risk, that while we debate the alignment of imaginary future minds, we neglect the alignment of present institutions.

The truth is that nothing about our technological future is inevitable, other than the inevitability of further technological change. Change is certain, but its direction is not. We do not yet understand what kind of systems we are building, or what mix of breakthroughs and failures they will produce, and that uncertainty makes it reckless to funnel public money and attention into a single speculative trajectory.

Every algorithm embeds decisions about values and beneficiaries. The superintelligence narrative masks these choices behind a veneer of destiny, but alternative imaginaries — Indigenous governance, worker-led design, feminist and disability justice, commons-driven models, ecological constraints — remind us that other paths are possible and already under construction.

The real political question is not whether some artificial superintelligence will emerge, but who gets to decide what kinds of intelligence we build and sustain. And the answer cannot be left to the corporate prophets of artificial transcendence because the future of AI is a political field — it should be open to contestation. It belongs not to those who warn most loudly of gods or monsters, but to publics that should have the moral right to democratically govern the technologies that shape their lives.

The post The Politics Of Superintelligence appeared first on NOEMA.

Address ‘Affordability’ By Spreading AI Wealth Around https://www.noemamag.com/address-affordability-by-spreading-ai-wealth-around Fri, 21 Nov 2025 17:37:58 +0000 https://www.noemamag.com/address-affordability-by-spreading-ai-wealth-around The post Address ‘Affordability’ By Spreading AI Wealth Around appeared first on NOEMA.

The most salient issue of American politics revealed in the recent elections is “affordability” for all those earners not in the top 10%. It is an especially acute concern among young adults facing economic precarity and the lost expectation of upward mobility as technological innovation disrupts labor markets.

Ready to jump on this turn of events as a path forward for a moribund party, progressive Democrats are reverting to the standard reflex in their policy toolbox: Tax the rich and redistribute income to the less well-off through government programs. As appealing, or even compelling, as that may be as an interim fix, it does not address the long-term structural dynamic that’s behind the accelerating economic disparity heading into the AI era.

In the end, the affordability challenge can’t be remedied in any enduring way by policies that just depend on hitting up the richest. It can only be met by spreading the wealth of ownership more broadly in the first place in an economy where the top 10% own 93% of all equities in financial markets.

That means, instead of relying solely on redistributing other people’s income, forward-looking policies should foster the “pre-distribution” of wealth through forms of “universal basic capital” (UBC) wherein everyone gets richer by owning a slice of an ever-enlarging pie driven by AI-generated productivity growth. That ought to be a rallying cry of the emergent “coalition of the precariat,” which encompasses all those who labor for a living when intelligent machines are coming for their livelihood.

The fairness Americans are looking for in today’s churning political economy is not only about constraining concentration of wealth at the top, but also about building it from below.

This agenda could provide common ground for populists in the Trump orbit — notably conservative Catholics like Vice-President J.D. Vance and Steve Bannon, who champion the left-behind working middle class — and the new generation of Democrats who want to restore the inclusive American Dream to an extractive economy that benefits the few at the expense of the many.

Where To Start

OpenAI’s Sam Altman has proposed a redistributive universal basic income (UBI) scheme as a safety net for displaced workers to be funded through the establishment of an “American Equity Fund.” It would be capitalized by taxing companies above a certain valuation at 2.5% of their market value each year, payable in shares. Proceeds from those holdings would be doled out as regular minimum payments to those whose income falls below a certain level.

Aware that this is only a stopgap income transfer that doesn’t change the pattern of wealth distribution, he has more recently shifted away from UBI and toward the idea of UBC, or what he calls “universal basic wealth.”

“What I would want is, like an ownership share in whatever the AI creates — that I feel like I’m participating in this thing that’s going to compound and get more valuable over time,” he has said.

These ideas could be married to extant policy.

The place to start is with an embryonic form of universal basic capital already established by the Republican-dominated U.S. Congress through its MAGA program: Money Accounts for Growth and Advancement.

Beginning in July, the MAGA program will initiate, by auto-enrollment, a $1,000 account for every child under 8 who is an American citizen. That initial deposit will be invested across the market by professional managers in a pool with all others. The funds will grow with compounded returns over the years until the account holder reaches 18. Families can add up to $5,000 per year to the account. All income from investment returns will be tax-advantaged upon withdrawal and can be used for education, starting a small business, helping purchase a home or in other ways.
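The compounding arithmetic behind such accounts is straightforward. Here is a minimal sketch, assuming a hypothetical 7% average annual return (actual returns would depend on market performance) and the maximum $5,000 annual family contribution; the function name and parameters are illustrative, not part of the program:

```python
def project_account(initial: float, annual_contribution: float,
                    annual_return: float, years: int) -> float:
    """Project an account balance with yearly compounding and contributions.

    The return rate here is an assumption for illustration, not a guarantee.
    """
    balance = initial
    for _ in range(years):
        # Grow the existing balance, then add the year's family contribution.
        balance = balance * (1 + annual_return) + annual_contribution
    return balance

# A $1,000 seed plus $5,000 per year at an assumed 7% return over 18 years
# compounds to roughly $173,000 under these hypothetical parameters.
print(f"${project_account(1_000, 5_000, 0.07, 18):,.0f}")
```

Under these assumptions, the $1,000 seed alone only grows about 3.4-fold over 18 years; most of the terminal balance comes from the ongoing family contributions and their compounding, which is why the question of who funds the deposits matters so much.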

The MAGA program is funded at roughly $30 billion per year only through 2028. The Trump administration has so far sought to use tariff revenues to pay for it. But rather than tax consumers in this way to keep the funds flowing after 2028, why not place a “productivity and wealth-sharing levy” of, say, 1% of market value each year on the highly concentrated wealth of Big Tech, with its skyrocketing (albeit fluctuating) valuations? This could seed the MAGA investment accounts into 2029 and beyond. Per Altman, such a levy could also be paid in shares.

As AI is integrated further throughout the entire economy in the coming decades, one could envision lowering the rate while broadening the base: an annual levy of 0.5% on all businesses worth more than, let’s say, $5 billion, capped at a cumulative assessment of 5% of their total equity. Once new enterprises reach this valuation threshold, they would also be subject to the same rules.
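On a cumulative-cap reading of that proposal, a 0.5% annual share levy would stop once a company had surrendered 5% of its equity in total, i.e. after about ten years. A hypothetical sketch of that schedule (the rate and cap are the article’s illustrative figures; how such a cap would actually be administered is an assumption):

```python
def levy_schedule(rate: float = 0.005, cap: float = 0.05) -> list[float]:
    """Yearly levy fractions of company equity until the cumulative cap is hit.

    rate: annual levy as a fraction of market value (0.5% here).
    cap:  maximum cumulative assessment as a fraction of total equity (5%).
    """
    schedule: list[float] = []
    total = 0.0
    while total + rate <= cap + 1e-12:  # tolerance for float rounding
        schedule.append(rate)
        total += rate
    remainder = cap - total
    if remainder > 1e-12:  # partial final year if rate doesn't divide the cap
        schedule.append(remainder)
    return schedule

years = levy_schedule()
print(len(years))            # 10 annual assessments of 0.5% each
print(round(sum(years), 4))  # 0.05 -- the 5% cumulative cap
```

The point of the cap is that the levy is a one-time transfer of a fixed slice of equity spread over a decade, not an open-ended annual dilution.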

“In the end, the affordability challenge can’t be remedied in any enduring way by policies that just rely on hitting up the richest.”

The MAGA accounts, just like investments by the richest Americans, promise to boom when productivity gains are realized as AI diffuses through all economic sectors over time. In this way, “ownership of the robots” will be broadened so upcoming generations can share in the wealth creation of generative AI that’s fueled, after all, by the raw material of their (and our) data.

Critically, the UBC idea is also not statist, but individualist. The proceeds of those levies would not go to the government, but only through the state as a collection agency directly into personal and family accounts. Since the state does not become the owner of wealth that remains private, the idea does not qualify as “socialist.” On the contrary, it makes everyone a capitalist.

A New Orientation

Sustaining the MAGA accounts in and of themselves, of course, is not a silver bullet that will slay an inequality chasm that has been building for decades. But it would signal a new orientation in the way we think systemically about how wealth is created and shared fairly in the AI economy of the future — an orientation that can guide other innovative ways to more widely implement the UBC concept across the entire population.

One such idea emerged in a brainstorming session with some of the more socially aware Big Tech titans of Silicon Valley. In this plan, all publicly traded companies with a valuation above a certain threshold could be required to contribute 2% of their value in shares each year to a sovereign wealth fund that supplements Social Security. From those holdings, every adult American — on the condition that they actively vote in elections — would receive a synthetic security, essentially an account indexed across the stock market, that must be vested for at least 20 years to allow the compounded returns to grow. Capital gains would be tax-exempt upon withdrawal.

The idea is to provide citizens with a literal stake and responsibility in the future of the system, both in terms of its economic fortunes and political stability.

Another proposal to get a jump-start on future AI job shock is to build up assets in the intermediate term while employment patterns still hold. This could be done by following the model of Australia’s superannuation fund, which we have often mentioned in Noema. The combination of the fund’s scale of participation, the continual inflow of savings from employer and employee contributions, and its long investment horizon earns compounded returns that have made the fund, started in 1991, worth $4.2 trillion today — more than the nation’s GDP. As a result, average wealth per adult in Australia is among the highest in the world, at $550,000.

The old paradigm of the Industrial Age, which relied on the bargaining power of labor to capture its fair share, just no longer works when intelligent machines capable of doing what most humans do are knocking at every door. As the value of labor diminishes, capital income from wealth ownership will become a significant hedge against diminishing or disappearing wages.

The usual argument against such a levy in a globalized economy has been that companies will leave for better pastures. But, given the enormous investments and political will to make the U.S. the dominant AI player, companies that succeed on that basis are not about to bolt for either anti-tech Europe or America’s strategic rival, China.

Economic Inclusiveness Is On The Right Side Of History

The recent arguments for lessening over-regulatory obstacles that stand in the way of achieving “abundance” are not wrong as far as they go. But abundance does not distribute itself fairly. That is the gap UBC proposes to fill.

Sharing the abundant wealth of an AI economy that is socially generated through the use of our data is so sensible a concept that it would, in time, become as normal and accepted a condition of doing business as paying into Social Security and Medicare.

Historically, as the work for which economist Daron Acemoglu was awarded the Nobel Prize in 2024 has shown, those societies that maintain inclusive social and economic institutions have prospered while those where wealth and power are concentrated at the top have ultimately splintered and failed. This is also the theme of Henry Wismayer’s recent essay in Noema on why once successful societies collapse.

Adopting policies that foster universal basic capital for the AI era would place America’s off-track trajectory once again on the right side of history.

The Progress Paradox https://www.noemamag.com/the-progress-paradox Thu, 13 Nov 2025 17:31:19 +0000 https://www.noemamag.com/the-progress-paradox The post The Progress Paradox appeared first on NOEMA.

In March 2024, Lina Khan took the stage before an audience of foreign policy experts to argue that the United States must resist growing calls to protect “national champions” in the technology sector. One of her arguments, familiar to all Americans, was that innovation is the fruit of market competition. Or, as she put it, “history and experience show us that lumbering monopolies mired in red tape and bureaucratic inertia cannot deliver the breakthrough innovations and technological advancement that hungry startups tend to create.” For example, she said, antitrust actions in prior years against IBM and AT&T paved the way for developments like personal computing and the internet. By contrast, government efforts to protect national champions like Boeing from competition have resulted in stagnant growth and cautionary tales.

Khan is quite right that national champions are bad, if for no other reason than because they produce brittle public dependencies on unaccountable private power. But the supposed natural alliance between market competition and innovation is more American mythology than nearly everyone across the political spectrum — right, left and center — would like to acknowledge. In truth, those nimble startups are not competing in anything like an ideal market.

Markets have always required some form of protectionist intervention — like intellectual property law — to help foster innovation. In recent years, startups have innovated because of a rich symbiosis with tech giants and their monopoly-seeking investors. Startups are indeed hungry, but their hunger is not to serve consumer needs or the national interest; it is to join the happy ranks of the monopolists. The nature of technological innovation is that competitive markets, without being “managed,” do not inspire it.

Today, this may sound bizarre, heterodox and jarring. But it was once fairly mainstream opinion. In the middle part of the 20th century, many of America’s most celebrated economic minds taught that competitive markets cause technological progress to stagnate. During the neoliberal era that followed, from the 1980s to the 2010s, this idea was largely forgotten and pushed to the margins of politics and academia. But it never lost its kernel of truth.

Old Wisdom

Economist John Kenneth Galbraith taught in the 1950s that under highly competitive conditions, private firms invest little or nothing in research and development, because competition pushes their profit margins too low to afford it. In the 1960s, the economist Kenneth Arrow further explained that competitive markets provide little or no incentive to invest in information goods like science and technology, because when markets function efficiently, the fruits of research are instantly copied and widely disseminated. These arguments also apply to artistic breakthroughs, philosophical ideas, journalism and much more. Markets are good at serving clear consumer demands, but advancing knowledge just isn’t what they do.

Midcentury American policymakers largely accepted this fact, thereby recognizing that the much-ballyhooed market economy did not foreordain the United States’ technological superiority over the Soviet Union. On the contrary, no matter how efficient the American corporate sector was at producing cheap toothpaste, the Soviet Union’s ability to decree massive, focused, ongoing state investments in science meant it could always pull ahead technologically (and thus also economically and militarily). How else to account for Sputnik? To keep pace, the U.S. government had to do the same, investing heavily in nonmarket institutions like the National Institutes of Health, the National Science Foundation and the Department of Defense.

Yet the deep lessons of Galbraith and Arrow were never fully absorbed into the market-enthused mainstream of American political thought. And as California’s technology scene morphed from the hierarchical defense-contractor culture of the 1960s to the libertarian Silicon Valley culture of the 1980s and ‘90s, it was almost entirely forgotten. A new faith emerged, one that preached that markets would generate progress of their own accord, and conversely, that technological breakthroughs would create new markets, in a virtuous cycle.

This doctrine was ideologically convenient for almost all politicians. The center-right could lean on it to pitch bustling markets as a path to technological progress. And the center-left could use it to pitch state-driven technological progress as the key to a thriving market society. It fit the optimistic, end-of-history spirit of the age like a glove. But the insights of Galbraith and Arrow remained true. Information goods are the wellspring of technological (as well as economic and cultural) “progress,” but only limits on market efficiency and abridgments of perfect competition create incentives to produce them.

“The supposed natural alliance between market competition and innovation is more American mythology than nearly everyone across the political spectrum … would like to acknowledge.”

Examples of such abridgments of market competition include direct state investments and intellectual property rights, like patents, which amount to temporary monopolies on information goods. But crucially, another important abridgment of markets is simply regular old monopoly — cornering resources, swallowing up competitors and otherwise creating dependencies that can be turned into pricing power. Monopoly power places certain private actors in a unique position to profit from information goods, including technical progress. Consequently, it also induces them to invest in those goods.

New Folly

Few Silicon Valley investors would have been able to articulate this in 1990, but by 2010, the sharpest had grokked it. Their biggest wins came when they invested money not in the best technologies or those addressing clear and present market demands, but in those most conducive to achieving and retaining monopoly power: sticky software platforms, social media networks, and infrastructural chokepoints. Any contradiction between this strategy and the Valley’s pervasive pro-market ideology was abstract and easily ignored. After all, the neoliberal period was the era of sugar-free soda and dairy-free butter: Everyone was glad to believe they could have their cake and eat it too.

In a way, turning a blind eye to the discomfiting economics of technology was the “glue” in the neoliberal consensus. Socially permissive liberals were happy to imagine that rapid social change was not being bought at the expense of creating unaccountable private power concentrations, while the Chamber of Commerce was happy to imagine that all this technological progress was bubbling forth from free and fair markets.

Justice Antonin Scalia surely induced some cognitive dissonance for neoliberals when, in the 2004 decision Verizon v. Trinko, he drove antitrust doctrine into the jaws of this contradiction. Writing for a unanimous court, he announced: “The mere possession of monopoly power, and the concomitant charging of monopoly prices, is not only not unlawful; it is an important element of the free-market system. The opportunity to charge monopoly prices — at least for a short period — is what attracts ‘business acumen’ in the first place; it induces risk-taking that produces innovation and economic growth.”

Hardly shying from the paradox, the Supreme Court thus simply expanded the permissibility of monopoly, with all its associated incentives for technological development, over competitive markets that might subordinate business activity to the needs of society and its consumers. That case, along with many similar developments, gradually transformed the way American law conceived of private power and, step by step, rolled out the red carpet for it.

It is still politically awkward to acknowledge that the United States has remained a technological leader into the 21st century largely by tolerating monopolies. But it is hardly a secret. In public writings such as 2014’s “Competition is for Losers,” Peter Thiel states with real candor: “Americans mythologize competition and credit it with saving us from socialist bread lines. Actually, capitalism and competition are opposites. Capitalism is premised on the accumulation of capital, but under perfect competition, all profits are competed away. The lesson for entrepreneurs is clear: if you want to create and capture lasting value, don’t build an undifferentiated commodity business.”

Technology entrepreneurs seeking venture funding have eagerly followed Thiel’s cue, building technologies aimed at addicting, corralling and manipulating users, while trying to tamp down political and intellectual narratives that could threaten their monopolies. Driving down bread prices — that is to say, determining how best to efficiently provide consumers with the basic things they need to thrive — is no longer an “interesting problem” to American capitalists, even though it is quite far from being “solved.” And Americans feel the results: growing dependencies on cheap (but often harmful) technology products, paired with crushingly expensive food, medical care, housing, utilities and transportation. Cheap “innovations” and unaffordable, poor-quality necessities.

Plenty of readers will still be skeptical. To many, the cutthroat Silicon Valley startup ecosystem proves on its face that competition, not monopoly, produces innovative technology. But look closer. Silicon Valley startups are routinely valued at many multiples more than their revenue or profits straightforwardly justify. What explains that? Investors are betting that those startups will eventually either become monopolies or merge into existing ones through acquisitions or other forms of financial consolidation. In other words, technology startups are “competing” not to serve consumers’ needs, but to become — or join — monopolies. The aim for many of today’s Silicon Valley startups is a high-profile acquisition and a lucrative “exit.”

“It is still politically awkward to acknowledge that the United States has remained a technological leader into the 21st century largely by tolerating monopolies.”

After uniting with monopoly power, erstwhile startups gain the ability not just to charge non-competitive prices, but to exert monopoly power in subtler ways. The social scientist Kean Birch and the economist D.T. Cochrane have usefully classified these unconventional forms of digital monopoly power as “enclave rents” (power from controlling ecosystems of devices), “expected monopoly rents” (capitalizing expected future monopoly power in share values, thus allowing owners to accelerate growth and acquisition), “engagement rents” (behavioral insights into deeply dependent and/or surveilled users), and “reflexivity rents” (using market dominance to shape future policy and enforcement in their favor).

Without an eye toward these and similar forms of monopoly power, one cannot fully understand the lofty valuations of startups. Thus, when they lose their trajectory toward uniting with monopoly, they also lose their access to capital — and promptly stop “innovating.”

Past Innovation ‘Champions’

The early 20th century is replete with examples of the close, uneasy kinship between monopoly and innovation. Take telecommunications. When Bell’s early patents expired in 1894, many new operators entered the market. Prices were driven down, and telecommunications became accessible. Every drop of possible use was squeezed out of the existing infrastructure, to the immediate advantage of consumers. In other words, the infrastructure was exploited efficiently. At the same time, technology stagnated. Operators did not invest in complex new long-distance infrastructure or switching technology, because to maximize the usefulness of such innovations, and also satisfy pro-competition local regulators, they would have had to provide connectivity to rival operators without those rivals having incurred any of the upfront costs. Gradually, it became clear that although there were fairly obvious ways to improve telecommunications technology, no one was doing it.

The landscape shifted during the financial crisis of 1907, when AT&T, financed by J.P. Morgan, bought up many small telecommunications operators. This gave it an effective monopoly over long-distance lines, which it then swiftly improved, researching and developing many complementary technological upgrades along the way. All this put it in a position to charge consumers exorbitant prices, which it also did.

Threatened by the U.S. Attorney General with antitrust action, in 1913 AT&T agreed to the so-called “Kingsbury Commitment,” promising to permit smaller operators to buy the use of its long-distance lines. In the bargain, it secured what amounted to legal approval of its monopoly — and promptly accelerated its technological pathbreaking. In 1914, AT&T built the first coast-to-coast line.

With AT&T now in a comfortably exclusive position to profit from advanced telecommunications, it spent subsequent years investing in advanced automatic switching research. In 1925, AT&T founded Bell Labs, an incubator that insulated top research technologists from market pressures and eventually gave birth to the personal computing revolution. This overall picture is paradoxical, complex and discomfiting. It is simultaneously reasonable to doubt that AT&T’s monopoly served the public interest, and also difficult to dispute that it accelerated investment in knowledge.

The kinship between monopoly and innovation is structural and timeless. To put it in the simplest possible terms: information is valuable only insofar as it can be controlled, and it is hard to control. As the writer Stewart Brand said, it “wants to be free.” This means that to transform information into profit, you need something like hard power. You need to have exclusive dominion over some part of the system. Being “just-another-vendor-in-the-marketplace” does not cut it.

When the Wilson administration effectively blessed AT&T’s dominance, it echoed an older way of thinking about corporations. In the early 19th century — hardly ancient history in 1913 — corporations could be created only by special legislative acts, and only for clearly defined, time-or-space-limited projects, such as building bridges or exploiting colonies. Even as recently as the late 19th century, corporations still had to enumerate their purposes for the state’s approval: They couldn’t simply be used as general vehicles for any profitable opportunity. Far from a minor bureaucratic detail, this older and more prescriptive understanding of a corporation’s telos reflected a recognition that chartering corporations constitutes an essentially hazardous delegation of a state’s responsibility to order society.

Wilson wanted AT&T to be dominant because he wanted it to develop telecommunications for the good of society. Inherent in this conception of the company as a national champion was a certain assumption of its subordination to the government’s authority to uphold the common good. But over the next century, this assumption was lost.

“The neoliberal period was the era of sugar-free soda and dairy-free butter: Everyone was glad to believe they could have their cake and eat it too.”

By the 1990s, private ownership of corporations that lacked any telos beyond enriching shareholders was considered the norm. Thus, when Boeing was permitted in 1997 to become the only major domestic producer of commercial airliners with scant conditions, it simply exploited and squandered this privilege (however limited it might have been in light of continued competition from Airbus). Instead of using the luxury of temporarily reduced competition to innovate in the national interest, it extracted profits, slashed quality, created foreign dependencies and nonetheless fell behind Airbus in the commercial market.

Similarly, in the early 2010s, when the Federal Trade Commission approved Facebook’s acquisitions of Instagram and WhatsApp, it imposed minimal conditions. The Obama administration blithely fêted Silicon Valley’s ascendance, blessing its dominance without imposing any meaningful public responsibility. Big Tech and its investors interpreted this not as a grant of responsibility for civic infrastructure, but as a blank check to pillage the social fabric.

The Irony Of Open Source

The mixed results of the open-source movement serve as another case in point. Open source has been sold to the public as a means of dissolving monopolies and accelerating technological progress. Its effects on power are more complex than that. By diffusing certain bodies of technical knowledge, open source prevents those bodies of knowledge from forming the basis of a monopoly. Potentially, it also enables more people to work on them, so that new techniques can develop faster. But who then pays for such efforts to advance knowledge? In the long run, the answer is either (a) nobody; or (b) somebody with an upstream monopoly that benefits from the diffusion of the open-sourced knowledge.

We saw shades of this in the brief DeepSeek panic. Nervous AI investors thought, for a day or two, that DeepSeek’s powerful open-source model meant that the AI giants had no “moat.” Their panic subsided when they remembered that big tech companies do not necessarily need to own a moat around AI models so long as they control enough other moats, like access to uniquely large amounts of computing power, energy, financial capital, political capital and consumer attention. This means Big Tech remains in pole position to capture a huge share of the value that AI unlocks, even if Silicon Valley’s engineers ultimately prove incapable of keeping its frontier models dramatically ahead of those being built in Shanghai, Singapore, Lagos and St. Petersburg. For this reason, global markets continue to use the big tech companies as easy conduits for investing trillions in AI.

The Vexing Politics Of Technology

All this is achingly annoying to the ideology of the old American center-left, because it explodes the Clinton and Obama era narratives that private innovation serves the public interest. In fact, when innovation is funded by investors betting that they can exploit it via monopoly power, it’s unlikely to leave society better off. To be sure, an (entirely hypothetical) innovation economy owned and tightly managed by trustworthy public authorities might serve the public interest handsomely — but this was almost the opposite of recent Democratic administrations’ innovation agendas. And one can see why: To advocate for such a model, Democrats would need to abandon their accommodation of private markets and instead argue for state control of technology infrastructure to a degree that has been at odds with American culture since the late 1970s.

Nothing more is required to grasp why the Democratic establishment’s technology policy now feels pressure from a socialist wing advocating for public ownership of infrastructure; an anti-monopoly movement that rejects the accommodation of Big Tech, venture capital and private equity; and an interventionist Trump administration.

But the ideological inconvenience is no less severe for Republicans from the libertarian-leaning center-right. Were they to acknowledge that innovation does not truly arise from free market competition, logic would compel them to concede either (a) that their true goal is not really a culture of fair market competition, but of raw contestation for power; or (b) that they do not, in fact, value innovation unqualifiedly. Lo and behold, precisely this schism has emerged in the Trump-era American right, with the Silicon Valley “tech right” representing the former, and populists and cultural conservatives representing the latter.

Where to from here? Both sides of the battered American center must first face their mistakes. This will be painful, not only because their errors have resulted in profound and long-term mis-governance, but also because their blind spots are deeply entangled with old, hard-to-kick ideological habits. The center-left’s sunny techno-utopianism traces its roots back to the rationalism of the French Revolution, via Karl Marx and early 20th-century progressives. The center-right’s fervent market fundamentalism is equally a relic of bygone eras, reflecting the thought of Friedrich Hayek and Milton Friedman — idea-warriors who pitched competitive markets as a cure-all largely to one-up the utopian promises of their techno-optimistic progressive foes. Thus, today, center-right and center-left thinking both feel like artifacts from yesterday’s ideological trenches. A new form of centrism that wants to speak to 2026 would need to thoroughly clear the decks.

“Both sides of the battered American center must first face their mistakes.”

This reckoning dovetails with a complex broader reappraisal of China and the West’s relationship with it. In the 1990s, the West misjudged its China policy by insisting on what was then a politically convenient belief: that democratization is the natural result of material prosperity. We risk a disturbingly similar mistake if we now insist upon the belief that breakthrough technological progress is the natural result of competitive markets (or, just as tenuously, democratic society). Nothing prevents China from outracing Silicon Valley and the West in AI, infrastructure and more.

Hard Choices

The point is not that the West should copy China, but that it cannot afford to duck hard choices. Shall we entrust our shared destiny to a hyper-empowered private sector accountable only to investors? Or, perhaps, shall we build societies in which conventional technological progress is not paramount? Or shall we ask our governments to manage technological progress in harmony with some robust conception of the common good? A new political center needs to face this choice boldly, not sweep it under the rug.

If America continues to prioritize private-sector-led technology development, it will come at a foreseeable, devastating cost to the social fabric — a savage new chapter in the book of modern catastrophes. If, on the other hand, we prioritize citizens’ social and economic well-being — as Europe has since World War II — we risk sacrificing technological dynamism. This should not be dismissed too hastily, but it is viable only if accompanied by strategies to avoid geo-strategic eclipse by other means (e.g., post-war Europe flourished through decades of technological non-leadership, but with the help of an American security shield and a huge reserve of accumulated wealth and prestige).

There is a third option: If we further empower the state to steer the progress and deployment of technology, we may avoid the worst outcomes. For example, we might just be able to install a meaningful, accountable sense of the common good at the head of the vast technical enterprise. We might be able to use the law nimbly, shielding the most precious elements of domestic life, religious life, education and culture from technology’s too-rapid disruptions.

However, this balancing act raises other difficulties. Relevant state authorities would need rigorous and enforced ethics measures to ensure that private interests are weeded out. They would also need to be guided by a coherent and unabashedly moral-philosophical vision, rather than the now-prevailing mishmash of contradictory ideas about technology’s proper role in society. Achieving this coherence would likely entail softening old commitments to certain liberal conceptions of the government’s role in cultivating the good life.

The job of centrists is to accept this troubling trilemma and craft a reasonable way forward. For example, technological infrastructure can and should, in many contexts, be owned and developed by public-interested actors. There is nothing “leftist” about this, because it is conducive not just to individuals’ economic well-being, but also to tradition and social stability. Marx, after all, wanted technology to upend prevailing social relations, and thus might even have celebrated Silicon Valley’s wild derailments of old norms and institutions.

In 2025, public ownership of infrastructure carries almost the opposite cultural meaning it had in Marx’s day, when it was a strictly efficiency-enhancing and “accelerationist” proposition. Today, it can just as easily moderate as accelerate technology’s economic and cultural disruptions. In fact, public ownership arguably represents a new face of moderate social conservatism.

Further, intellectual property — which is to say, the thicket of old rights-based compromises between markets and monopoly — can and should be profoundly recalibrated. Simply abandoning intellectual property rules, or allowing them to lose practical relevance, serves no one except existing monopolists. The pattern of needed intellectual property (IP) reforms is complex, but it should tilt more or less gently toward social stability, cultural virtues and widely distributed welfare.

For example, copyright’s fair use exception should be clarified to exclude AI training, thus sharpening copyright as a sword for human creators of new works against economic expropriation by AI, as Republican Sen. Josh Hawley of Missouri and Democratic Sen. Richard Blumenthal of Connecticut have proposed. But patents should be limited in scope and shortened in duration, to make it harder for companies and investors to corner markets and profit from old or relatively obvious inventions, like minor tweaks to well-understood drugs. And although legal protection of trade secrets and trademarks must continue, they should be counterbalanced with heightened responsibilities to the public.

The details of the needed reforms are technical, but the principles are not: When private actors use their IP to ends that society broadly condemns — advancing objectionable eugenic interventions, for example, or using trademarks to distort rather than clarify consumers’ perceptions of value — then society shouldn’t go out of its way to protect their investments. Such crucial conversations are long overdue.

“Public ownership arguably represents a new face of moderate social conservatism.”

Similarly, it is fashionable to dismiss data policy given the failures of the European Union’s strict privacy laws, known as the General Data Protection Regulation (GDPR). After all, GDPR didn’t meaningfully protect people from having their information exploited. But emerging legal models of data ownership still hold promise. For example, it has long been considered unworkable to create new rights in data that entitle individuals to share in data’s downstream profits and influence future use of the information. This is because individuals’ interests in data “overlap,” so that as consumers seeking convenience and good deals, we will always pull each other into a race to the bottom.

(For example, when I agree to Gmail’s terms of use, I also compromise your rights, because copies of your emails to me are in my Gmail inbox. When I agree to share my own genetic information with 23andMe, I also compromise the interests of my parents, siblings, and children. Digital services trying to provide powerful and seamless experiences, and individuals seeking such experiences, are incentivized to ignore these negative externalities. Consequently, in the name of convenience, consumers tend to undermine one another’s interests in a dynamic similar to the prisoner’s dilemma.)

But what if data rights could only be exerted via large, carefully regulated associations, not signed away by individuals? This has not been tried, and it might work. Such a new class of rights could be a crucial new tool for restoring a modicum of power back to consumers and for bringing order to the digital economy.

Last but not least, a recalibrated antitrust doctrine could be centered as a unifying, coalition-building project. In a cruel irony, antitrust law currently operates to prevent the formation of needed coalitions among many actors who are now being crushed by our heavily monopolized and consolidated economy. For example, large consolidated businesses can negotiate lower prices with their suppliers, but if many separate small businesses act in “combination” to do the same, it is unlawful. Such perversions of the spirit of antitrust now pervade American society, so that the huge gains from economic consolidation are captured mostly by financiers who can buy up businesses and combine them. We should, cautiously and advisedly, find ways to direct these windfalls instead to smaller business owner-operators who, much more than financial owners, contribute “off the ledger” to the fabric of communities.

A guiding principle for the next generation of technology policies might be this: Entrepreneurship contributes to the public interest when it competes to solve ordinary people’s existing problems, but not when it competes to lock consumers into ecosystems, addict them to dubious novelties, augment unaccountable monopolies or disrupt values and traditions that enjoy broad support. Indeed, when investors and technologists who transform society for the worse are rewarded with indefinite ownership of the infrastructure upon which the transformed society depends, everyone else loses. In short, we should be open to the state, and to other non-market institutions like the arts, civil society and even religion, exerting more influence upon where technological innovation goes next.

By embracing the seemingly apolitical, private-monopoly-led model of technological advancement, American centrists inked a Faustian bargain with social and economic dissolution. The truth is that technology — like media and other forms of informational power — is inherently political. It is categorically different from other kinds of market activity. It develops under the auspices of states or monopolies, transforming the social and cultural contexts within which politics occurs. Centrists must come to grips with this if they ever want to find a path back to their traditional stabilizing role.

The post The Progress Paradox appeared first on NOEMA.

Introducing The Futurology Podcast https://www.noemamag.com/introducing-the-futurology-podcast Thu, 28 Aug 2025 04:07:16 +0000

The world emerging before our eyes appears both as a wholly unfamiliar rupture from the patterns of the past that could frame a reassuring narrative going forward, and as a promise of possibilities never before imagined.

Prodigious leaps in technology, science, productive capacity and planetary interconnectedness herald a future that humanity has only dreamed of in the past. Yet these great transformations underway seem to have triggered in their wake a great political and cultural reaction among the multitudes they have bypassed or threaten to uproot. One is a condition of the other.

What is clear is that history is fast approaching an inflection point. We live either on the cusp of an entirely new era or on the brink of a return to an all-too-familiar, regressive and darker past.

From the tumultuous realm of geopolitical conflict to the roiling culture wars, the advent of intelligent machines and the capacity to redesign the human genome, a new Age of Upheaval is clearly upon us.

To help navigate the perilous and promising rapids of oncoming times, the Berggruen Institute has launched a new podcast: Futurology. This weekly series complements Noema in seeking out cutting-edge minds on the frontiers of change, looking to define the paradigm shifts that will help make sense of the world we are entering and figure out how to dwell in it.

The first episodes of the Futurology podcast illustrate its scope and breadth. You can find them on YouTube and wherever you get your podcasts.

Contemplating the extreme polarization in America these days, historian Niall Ferguson thinks the country is entering a “late republic stage” like the last days of the Roman Republic before it lapsed into an empire. Francis Fukuyama sees not the end of history, but the return to 19th-century-type spheres of influence among the great powers. Stateswoman Anne-Marie Slaughter envisions a more fluid world order with networks of the willing and middle powers playing a key role.

The so-called “godmother of AI,” Fei-Fei Li, argues it is up to us humans to put robots in their place and control them before they control us. Vandi Verma, the Jet Propulsion Lab scientist who guided the Mars Rover expedition, discusses how robots will be the ambassadors of the human species on other planets. John Markoff, the chronicler of the rise of Silicon Valley, worries about the new “cyberocracy” that is coming to dominate all of society. Thomas Moynihan fleshes out the philosophical implications of discovering that we humans are on a course to our own extinction. Indian novelist Rana Dasgupta ponders the contradiction that the “nation-state” is both obsolete — and experiencing a revival. Scholar Stephen Batchelor wonders what the world would be like if governed by Buddhists.

In the most recent episode of Futurology out this week, Nicolas Berggruen and I recount the origins of the Institute and our various projects over the last decade, from “The Think Long Committee” for California to our 21st Century Council meetings in Beijing with Chinese President Xi Jinping. We trace the evolution of the “three Ps” that are the programmatic core of the Institute’s work: planetary realism, pre-distribution of wealth through universal basic capital and participation without populism.

Upcoming episodes will also include the literary journalist Pico Iyer on how the world he has so relentlessly traveled is less connected today than when he wrote “Video Night in Kathmandu” nearly 40 years ago, and philosopher David Chalmers on solving “the hard problem” of determining the origins of consciousness.

Each podcast is introduced by Berggruen Institute President Dawn Nakagawa. You can tune in every Tuesday here.

What The MAGA Congress Got Right https://www.noemamag.com/what-the-maga-congress-got-right Fri, 08 Aug 2025 15:54:06 +0000

A decade ago, the French economist Thomas Piketty published “Capital In The 21st Century,” a blockbuster screed against the rich getting richer. In that weighty tome, he encapsulated the dynamic of steadily increasing inequality with the formula r > g: the compounded rate of return on capital is greater than the rate of economic growth.

In short, the inequality gap inexorably grows over time between those who own capital assets that appreciate in value, especially financial assets, and those who work and live paycheck to paycheck.
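A toy compounding calculation makes the r > g dynamic concrete. The rates below are purely hypothetical (r = 5% return on capital, g = 1.5% economic growth), chosen only to illustrate the mechanism, not drawn from Piketty's data:

```python
# Toy illustration of Piketty's r > g dynamic. All rates are hypothetical:
# r = 5% compounded return on capital, g = 1.5% economic growth.

def wealth_to_income_ratio(capital, income, r, g, years):
    """Compound capital at rate r and income at rate g, return their ratio."""
    for _ in range(years):
        capital *= 1 + r  # returns on capital are reinvested
        income *= 1 + g   # labor income merely tracks economic growth
    return capital / income

# Start a household with equal capital and annual income.
start = wealth_to_income_ratio(100.0, 100.0, r=0.05, g=0.015, years=0)
after_30 = wealth_to_income_ratio(100.0, 100.0, r=0.05, g=0.015, years=30)

print(start)     # 1.0
print(after_30)  # ≈ 2.77: the capital/income gap nearly triples in a generation
```

Because the ratio evolves as ((1 + r) / (1 + g))^t, any persistent gap between r and g compounds without bound; the particular numbers here only determine how fast.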

In a recent University of Chicago Journal of Political Economy study, Moritz Kuhn, Moritz Schularick and Ulrike I. Steins traced the growth of wealth inequality in America from 1949 to 2016. They pointed out the central importance of portfolio composition in creating the wealth gap: working families have little savings to invest, middle-class portfolios are dominated by housing, and rich households mostly own business equity.

Former World Bank economist Branko Milanović notes that this concentration of ownership of financial assets has accelerated in most countries since the 1990s, as “a rising share of total income is going to capital. That means total income will become more and more concentrated. With the extremely uneven distribution of financial assets, the wealth of a country is going more and more to only the people at the top and very little percolates downward.”

To boot, a New York Federal Reserve Bank study shows how the concentration of wealth reproduces itself because more time, effort and expertise are put into managing holdings as they grow larger. “Concentration in capital ownership causes a transition to an unequal steady state,” the study concludes. In 2024, the richest 10% owned 93% of all equity in the U.S.

As frequently noted in Noema, this condition will be exacerbated by the innovations of digital capitalism that are increasingly divorcing employment and income from productivity growth and wealth creation, generating an ever-accelerating gap between those who “own the robots” and those who labor for their livelihood.

Policies that respond to this challenge would foster an ownership share for all in the wealth generated by intelligent machines that are diminishing or displacing gainful employment. The aim is to enhance the assets of the less well off in the first place — pre-distribution — instead of only redistributing the income of others after the fact.

In our 2019 book “Renovating Democracy: Governing In The Age of Globalization and Digital Capitalism,” Nicolas Berggruen and I called this concept “universal basic capital,” or UBC. The idea is not just to break up the concentration of wealth at the top, but to build it from below. The best way to fight inequality in the digital age, we wrote, is to spread the equity around.

Enter MAGA Accounts

Who could have expected that it would be a MAGA-dominated U.S. Congress that is blazing an incipient path to universal basic capital in the world’s largest economy as one key way to counter the dynamic of inequality Piketty identified?

The Money Account for Growth and Advancement program was passed into law last month as part of the “One Big Beautiful Bill.” While in most respects that budget package benefits the rich, the so-called MAGA accounts aim to bolster the assets of the rest so that just like the rich, they too can own capital that will grow in compounding value from invested savings.

Starting as early as July 2026, every child under 8 who is an American citizen will be auto-enrolled into an account with a $1,000 deposit from the government. All investment income will be tax-deferred until the account comes to term when the child turns 18, after which withdrawals will be taxed at the low long-term capital gains rate. Families can add up to $5,000 per year to the account.

Distributions are only allowed once the child is 18, at which point account holders are allowed access to only 50% of their funds, and solely for higher education, training programs, small business loans and first-time home purchases.

At age 25, account holders are allowed to withdraw up to the full balance of the account, but only for those same specified purposes. Upon reaching 30, they can access the full balance for any purpose desired.
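To see why these accounts could matter at retirement scale, here is a rough compounding sketch under the rules above. The 7% annual return is an assumed figure, not part of the law, and it ignores fees and inflation.

```python
# Hypothetical trajectory of a MAGA account: the $1,000 government seed plus
# the $5,000 annual family contribution cap, at an assumed 7% yearly return.
SEED = 1_000
ANNUAL_CAP = 5_000   # statutory family contribution limit
RETURN = 0.07        # an assumption, not a guarantee

balance = SEED
for year in range(18):  # the account comes to term when the child turns 18
    balance = balance * (1 + RETURN) + ANNUAL_CAP

print(f"Balance at 18: ${balance:,.0f}")  # roughly $173,000 under these assumptions
```

A family contributing the maximum each year would, under this assumed return, hand an 18-year-old a six-figure capital stake; a family contributing nothing still sees the $1,000 seed more than triple.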

“Who could have expected that it would be a MAGA-dominated U.S. Congress that is blazing an incipient path to universal basic capital?”

Texas Sen. Ted Cruz, who championed the program, said of the accounts: “There are many Americans who don’t own stocks or bonds, are not invested in the market, and may not feel particularly invested in the American free enterprise system. This will give everyone a stake.”

Various similar “baby bond” schemes have been underway elsewhere, from California to Connecticut to France and the United Kingdom, mostly geared toward low earners.

The most successful was the U.K. Child Trust Fund. It was launched in 2003 by then Prime Minister Tony Blair and Chancellor of the Exchequer Gordon Brown. In the austerity years of Prime Minister David Cameron following the financial crisis of 2008, the trust was wound down in 2011. In terms of encouraging savings, however, it was a clear success. During its lifetime, 6.3 million new savings accounts were opened. As of April 2023, the total market value of the accounts was 9 billion pounds ($11.4 billion), of which the government contributed only 2 billion pounds ($2.5 billion).

The Big Worry

What is most worrisome about the otherwise worthy MAGA accounts is that some among the ideological right in the U.S. will see it as one day replacing Social Security.

Indeed, last week U.S. Treasury Secretary Scott Bessent actually said of the MAGA program, “In a way, it is a back door for privatizing Social Security. Social Security is a defined benefit plan paid out. To the extent that, if all of a sudden these accounts grow, and you have in the hundreds of thousands of dollars for your retirement, then that’s a game changer.”

Realizing the implications of his musings, Bessent hastened to clarify what he meant on X. What he called “Trump Baby Accounts” are “an additive benefit for future generations, which will supplement the sanctity of Social Security’s guaranteed payments. This is not an either-or question: our Administration is committed to protecting Social Security and to making sure seniors have more money.” Let’s hope that is a sincere pledge and not subject to change on a whim, as so much else is in Washington these days.

Paying The Piper

Over the longer term, the question is how such a massive budget commitment can be sustained when U.S. public debt is already 120% of GDP, or whether the program will go the way of the otherwise successful U.K. Child Trust Fund.

One idea floating around is to make the initial $1,000 deposit a loan, instead of a grant, to be paid back without interest when the account comes to term. As the MAGA account program matures, this would create a kind of revolving fund to keep it going.

Another notion being floated would use the considerable revenues expected from tariffs to fund the program in a kind of virtuous cycle where the home market for investment opportunities is boosted while generating returns for a broader class of citizens. Even accounting for any slowdown as a result of protectionist measures, the Yale Budget Lab projects $2.2 trillion in government revenue at current tariff rates from 2025-2036.

Whether implemented through MAGA accounts or otherwise, UBC ought to be embraced as a bipartisan agenda. After all, its American patriotic pedigree goes back to Thomas Paine, who proposed in 1797 that every newborn child should be provided with an equal endowment financed by an inheritance tax on wealthy landowners.

As left-leaning Nobel economist Joe Stiglitz and top hedge fund manager Ray Dalio agreed in a Noema exchange, UBC is “neither capitalist nor socialist,” but a practical way to more fairly share the wealth that transcends stale old ideological divides.


‘Climate Delusion’ Or Vital Solution? Carbon Capture’s Uphill Battle (July 15, 2025) — https://www.noemamag.com/climate-delusion-or-vital-solution-carbon-captures-uphill-battle

TRACY, Calif. — When Alexa Dennett’s EV glided into Heirloom Carbon Technologies’ parking lot last August, the prospects of a net-zero future looked bright. Joe Biden was still president, and his 2022 Inflation Reduction Act had turbocharged a fledgling industry for pulling carbon dioxide from ambient air, providing generous tax breaks for every metric ton of carbon sucked from the atmosphere.

On the drive to Heirloom’s new carbon capture plant in my rented electric Ford, I had passed dozens of wind turbines scattered across golden-brown hills. Electric cars, clean energy and federal investment aplenty — this felt like a society firmly on a low-carbon pathway.

Heirloom spokeswoman Dennett and her public relations colleague, Scott Coriell, were here to show me the guts of the first commercial plant in the United States to sell credits for what’s known as direct air capture (DAC) of carbon. The technology extracts carbon dioxide from the atmosphere and then puts it deep underground for permanent storage or utilizes it in applications like concrete. Heirloom’s services have been sought by companies like Microsoft, JPMorgan Chase and Shopify, who want to offset their carbon footprints.

Heirloom had recently announced a $475 million investment to build a larger facility in Louisiana and had also won $50 million — with eligibility of up to $600 million — in federal funding for a third plant, which was part of a larger endeavor called Project Cypress that would dwarf the other two in scale. Pulling carbon dioxide from the air, after years of hype and unmet optimism, seemed on the cusp of credibility.

But standing in the parking lot edged with newly planted trees, I knew the road ahead for carbon capture remained rocky. The technology under development had enemies. Influential ones. And not the ones you’d expect. A large portion of the environmentalists you might think would support the development of a carbon capture industry were deeply, and vociferously, opposed.

A Daunting Scale

The atmospheric math is unequivocal. Carbon must be pulled from the sky to stop the Earth from warming dangerously. In its most recent assessment report, the Intergovernmental Panel on Climate Change noted that “all available studies require at least some kind of carbon dioxide removal to reach net zero.”

But even if nations slash emissions at an eye-popping rate, they will still be shoveling carbon into the atmosphere for decades to come from hard-to-abate sectors like aviation, agriculture and steel. Before the Industrial Revolution, carbon dioxide made up 280 or fewer parts per million of our planet’s atmosphere. Today’s 430 parts per million is dangerously high.

The University of Oxford-led 2024 State of Carbon Dioxide Removal report predicts that by 2050, we will need to be removing around 7 to 9 billion metric tons of CO2 annually and permanently to avoid a dangerous rise in sea levels, catastrophic wildfires and crop failures around the world. Given the slow pace of emissions reductions to date, that amount is probably a low-ball estimate.

As an industrial task, removing this much CO2 from the atmosphere is mind-boggling. “The oil industry extracts about 4 billion tons of fluid out of the ground every year, and it took them 150 years to build that industry,” Heirloom’s co-founder and CEO Shashank Samala told me. However, Samala added, we now have only 20 to 30 years to make this transition.

The good news is that about 2 billion tons are already being removed annually, largely due to “conventional” methods like active forest restoration, the rewetting of peatlands and the rebuilding of coastal wetlands. These conventional methods help pull CO2 weighing the equivalent of 20,000 fully loaded aircraft carriers from the sky each year.

The bad news is that the availability of land limits how much this number can grow. There’s only so much spare land for carbon-mitigating landscape restoration given competing demands like agriculture and housing. Even worse, increasing global temperatures are reducing the effectiveness of natural carbon sinks. This puts the ball squarely in the court of approaches like DAC.

Simple Chemistry, Complex Machinery

Motors squealed intermittently as I stood alongside two dozen floor-to-ceiling towers of neatly stacked trays at the Heirloom plant. The trays, spaced about 3 inches apart, look like giant baking sheets covered in a thick, white powder that Dennett later told me was calcium hydroxide. This small test facility exists mostly as a proof of concept, extracting at most a paltry 1,000 metric tons of CO2 per year. But Dennett told me Heirloom is confident that “dumb rocks and smart robots” will help them scale.

“A large portion of the environmentalists you might think would support the development of a carbon capture industry were deeply, and vociferously, opposed.”

The dumb rock is limestone, which is mostly made up of calcite, one of the most abundant minerals on Earth. It is also relatively benign. (“It’s in your toothpaste,” Dennett said). The limestone is ground up and heated to around 1,650 degrees Fahrenheit in an electric kiln using renewable energy. Carbon dioxide released during heating is captured and compressed into a liquid destined for long-term storage. 

The powdery residue is hydrated to produce calcium hydroxide, or slaked lime, that is spread onto trays and exposed to the air. Slaked lime is hungry for CO2 so it can return to limestone again. With a gentle breeze blowing through the open-sided warehouse, the lime gets all the CO2 it wants without any energy input.

The smart robots are the source of the squealing noise. They whizz up and down the stacks on vertical runners. Sensors in the robots assess the percentage of slaked lime converted into limestone. Heirloom’s proprietary technology accelerates a years-long natural process to less than three days. When satisfied with the conversion, the robots extract a tray from the stack and dump the limestone for transport back to the electric kiln. The process repeats, reusing the same materials.
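The chemistry of this loop implies a fixed capture ratio: each ton of limestone that completes a cycle re-absorbs roughly 0.44 tons of CO2. The back-of-the-envelope sketch below uses standard molar masses; the cycles-per-year extrapolation from the three-day conversion time is my own, not a figure from Heirloom.

```python
# Back-of-the-envelope stoichiometry for the limestone loop, using standard
# molar masses. The cycles-per-year figure is an extrapolation from the
# three-day conversion time, not a number from Heirloom.
M_CACO3 = 100.09  # calcium carbonate (limestone), g/mol
M_CO2 = 44.01     # carbon dioxide, g/mol

# Each full cycle re-captures one CO2 per CaCO3: Ca(OH)2 + CO2 -> CaCO3 + H2O
co2_per_ton_limestone = M_CO2 / M_CACO3
print(round(co2_per_ton_limestone, 2))  # ~0.44 tons CO2 per ton cycled

cycles_per_year = 365 / 3  # conversion takes less than three days
print(round(co2_per_ton_limestone * cycles_per_year, 1))  # tons CO2 per ton per year
```

The ability to rerun the same rock tens of times a year, rather than once per decade as in nature, is what makes the accelerated cycle economically interesting.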

Heirloom envisions fleets of fast-moving robots tending to hundreds of thousands of trays. It is a modern, highly calibrated version of a process nature has performed for more than 2.5 billion years. And to make a difference, Heirloom plans to go big. 

“To not be a rounding error on climate change, you have to believe your technology has a billion-ton pathway,” Dennett told me. That’s a million times what the squealing robots manage today in Tracy.

A scale-up of this magnitude hinges on many things going right. It needs motivated engineers with ample funding. It also needs a highly motivated public willing to put their faith in a plan that sounds almost too magical to be true.

The Environmental Enemies

DAC, an engineering process first proposed in 1999 by Klaus Lackner, polarized the environmental community from the start. The World Resources Institute calls it “an important part of a climate solution portfolio.” The Center for Climate and Energy Solutions insists that “engineered carbon removal solutions will be necessary … to keep the target of warming by 1.5 degrees Celsius alive.”

But Lili Fuhr, director of the Fossil Economy Program at the Center for International Environmental Law, argues that DAC is “a dangerous distraction” and a fig leaf for more fossil fuel production. Paul Rauber, a longtime former editor of the Sierra Club’s magazine, dismissed it as a “boutique technology,” adding, “it’s too late for wishful thinking.” And Zoë Schlanger with The Atlantic labels DAC dismissively as America’s latest “climate delusion.”

There are plenty of good reasons to prioritize an end to burning carbon rather than burning it and then trying to claw it back. Coal, oil and gas give off lots of energy when ignited. But the dangerous gases they release are hard to contain, despite industry promises. The Kemper Project in Mississippi, once lavishly subsidized by the Obama Administration, was designed to gasify coal and siphon off CO2 before it reached the atmosphere. The technology proved difficult to scale. Costs tripled, and Kemper’s clean coal machinery produced electricity for only about 100 hours before being demolished in 2021.

The collapse of confidence in carbon offsets has ratcheted up skepticism. Offsets are designed to capture carbon in one place to compensate for emissions released in another. Until now, many offsets depended on the carbon absorbed by forests. But forest carbon is hard to certify, and an offset’s validity hinges entirely on the trees remaining intact. Numerous investigations have shown that timberlands set aside for offsets have later been burned or logged.

Another black eye for DAC is that some of the biggest players in the industry are fossil fuel companies. Occidental Petroleum bought the Canadian firm Carbon Engineering in 2023 for $1.1 billion. Occidental’s subsidiary, 1PointFive, is currently putting the finishing touches on what will be the world’s biggest DAC plant, named Stratos, in the Permian Basin in Texas. Not coincidentally, the basin is the source of nearly 50% of U.S. crude oil.

Fuhr points out that oil and gas companies already capture plenty of carbon at the smokestack, only to pump it back into aging wells. The pressurized gas snakes through fissures in the rock and acts as a solvent to force out more hydrocarbons. The process, known as Enhanced Oil Recovery (EOR), is a dubious use of captured CO2 if you are sincere about helping the climate. “It drives us away from addressing the root of the problem,” Fuhr told me.

“The collapse of confidence in carbon offsets has ratcheted up skepticism.”

DAC stands little chance of gaining the social license it needs to operate if sizeable portions of the environmental community oppose it. Investment is already faltering as governments and businesses pull back from their net-zero targets. DAC desperately needs a reset, but doing so requires a commodity often in short supply when it comes to climate change: trust.

Making Carbon Capture Credible

“Permanent, durable, believable.” Without these assurances, Coriell told me, as we stood beneath an imposing tower of Heirloom’s trays, the DAC industry is doomed. 

Holly Jean Buck, a sociologist in the University at Buffalo’s Department of Environment and Sustainability, agrees. Buck is the author of “Ending Fossil Fuels: Why Net Zero is Not Enough.” She spent a year in President Biden’s Office of Fossil Energy and Carbon Management creating plans for properly engaging communities on the frontlines of the energy transition. 

Buck told me the main conversation in carbon removal these days is about “trust and establishing trust infrastructure.” It is rare for an industry’s success to hinge so centrally on an ethical idea. But carbon removal exists in a strange cultural space. It responds to a slow-moving, largely invisible, global problem. It foregrounds an industry dogged by thorny questions — questions about pollution, corporate greed, global inequity and deceit. Ethical hackles go up even before a ton of carbon is siphoned into a tank.

The checklist for morally acceptable DAC will not look the same for everyone, but a picture is slowly emerging of what that might be.

The first condition for building trust is that the captured carbon must be real and not an accounting trick; it must be believable. “Our customers demand full transparency into the lifecycle emissions of the facility,” Dennett told me. The stench of fraud from the offsets of the 2010s still lingers. 

A whole sub-industry has emerged to create credible standards for monitoring, reporting and verifying (MRV) carbon removals. Anu Khan founded the Carbon Removal Standards Initiative to create consistency across policies being enacted around the globe. “We will not build the political coalition to scale carbon removal to climate-relevant volumes without broad societal trust that this is a real thing,” she told me.

The skepticism over carbon removal is understandable. Even without its checkered history, carbon credits are not like other purchasable commodities. “The buyer doesn’t ever take physical possession of the thing; they don’t ever know if it is or is not what they thought it was,” Khan told me. “The value of the product is largely reputational.” She thinks organizations firmly rooted in civil society, alongside government and industry, have a key role to play. It is early in the game, and the rules need to be set correctly by people without a vested interest.

MRV is largely a technical matter concerned with metrics and quantification. But the different players in the MRV ecosystem all know the fundamental challenge is to clear an ethical bar. Absolute Carbon, an American company founded to set benchmarks for carbon removal quality, promises “purchasing with confidence,” “mitigating the risks of greenwashing” and “leading with transparency and integrity.” The UK-based registry Isometric has developed detailed protocols for verifying carbon credits to “rebuild trust in carbon markets.” For both companies, it is ethics all the way down. 

But being believable does not only require good accounting. It requires convincing skeptics like Fuhr that DAC is not a fig leaf. About 50 million tons of CO2 are already pulled out of smokestacks at industrial facilities each year. That is real, measurable carbon. But the Global Carbon Capture and Storage Institute reports that 70-80% of it is pumped right back into the ground for EOR.

The International Energy Agency says that EOR reduces oil’s carbon emissions by 37%, thanks to the fact that a portion of the CO2 injected to extract the oil remains trapped in the sedimentary formations after the oil has been pushed out. That is good. But burning hydrocarbons still creates new emissions. Unless an oil company compensates for this with additional sequestration, it is still a net harm for the climate.

This is a deal-breaker for those who want to see the fossil fuel industry disappear as quickly as possible. “I think it’s really important to take note of how the industry sells the technology,” says Fuhr. They are saying, “This is going to allow us to keep drilling for decades to come.” The Science and Environmental Health Network describes EOR as “a moral failure, a climate failure, and a threat to public health and the environment.”

“It is early in the game, and the rules need to be set correctly by people without a vested interest.”

Many companies developing DAC facilities have adopted operating principles that swear off EOR. Many of the entities purchasing carbon credits have demanded it, knowing their own reputation for taking climate change seriously is at stake. So far, this includes the companies buying carbon credits from Occidental’s subsidiary, 1PointFive.

The Challenge of Permanent Storage

Beyond believable, the public needs reassurance that the carbon, once accurately counted, is stored somewhere it can do no harm. The storage must be permanent and durable.

One solution is to inject captured CO2 into concrete. Concrete mixed by the Romans still stands in the Colosseum and the Pantheon. A common metric of permanence is whether carbon is sequestered for more than 1,000 years. The Colosseum checks that box.

The Canadian company CarbonCure has developed a process for putting captured carbon into concrete. They spray CO2 under pressure into concrete slurry at the batching plant, where it mineralizes instantly into calcium carbonate. The calcium carbonate adds compressive strength to the concrete and allows a reduction in the amount of cement needed to bind the mixture together. Since cement production accounts for 4% of global emissions, the process potentially scores two carbon wins — less cement and captured CO2 that’s turned into carbonate.

Despite the advantages of concrete as a partial solution, the sheer scale of the need for permanent carbon storage has some people looking for geological alternatives. The most satisfying answer for permanence is to put the carbon deep underground, where it can sit in sedimentary formations similar to the ones that hold oil and gas for millennia. There is an elegant poetry to the idea of injecting carbon beneath the Earth’s surface, whence it came.

A complicated licensing process exists for what are known as Class VI wells that are suitable for this kind of permanent storage. But talk of sedimentary rock holding pockets of gas raises the specter of EOR again, even if the Class VI wells have a different purpose from the wells used for extracting oil.

A potentially more persuasive version of underground storage is a new technique involving basalt. Basalt is formed by magma rising from the Earth’s mantle, which cools into distinctively shaped polygons. It is found beneath about 10% of the Earth’s landmass and most of the ocean floor. Huge basalt formations lie close to the surface in volcanic regions such as Iceland, the U.S. Pacific Northwest and the Deccan Plateau in India. 

“In broad terms, we have orders of magnitude more storage capacity than we would ever need,” Sandra Ósk Snæbjörnsdóttir, chief scientist at Carbfix in Iceland, told me. The company is the world’s first to offer commercial CO2 sequestration in basalt. 

Carbfix has developed a technique for dissolving CO2 in water about 1,000 feet below the surface. The pressurized liquid reacts with minerals in the basalt to create carbonate rocks. Carbfix has shown that 95% of injected CO2 turns to rock within two years. Leakage is also highly unlikely since the dissolved CO2 flows downwards, and the mineralization is so quick. 

To date, the company has sequestered over 100,000 tons of carbon dioxide, including all the carbon captured in Iceland by Climeworks, the world’s first commercially operating DAC company. 

I asked Snæbjörnsdóttir how far the technology still had to go. “It’s ready,” she said, “and we are working on scaling to megaton scale.” It’s an exciting frontier. Basalt in Washington State alone could store up to 18 years’ worth of the carbon dioxide that the Oxford report said will need to be pulled from the atmosphere and sequestered annually by 2050 to maintain a safe climate. 
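As a sanity check on that claim, the arithmetic is straightforward. The 8-billion-ton annual figure below is my assumed midpoint of the Oxford report’s 7 to 9 billion-ton range; the 18-year figure is the one cited for Washington State.

```python
# Rough arithmetic linking the Oxford removal target to the Washington basalt
# claim; 8 billion tons/yr is an assumed midpoint of the 7-9 billion-ton range.
annual_removal_needed = 8e9  # metric tons of CO2 per year by 2050
years_of_storage = 18        # figure cited for Washington State basalt
capacity = annual_removal_needed * years_of_storage
print(f"{capacity / 1e9:.0f} billion tons")  # ~144 billion tons
```

On the order of 144 billion tons in one state’s basalt alone suggests geological capacity is not the binding constraint; injection infrastructure and cost are.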

Basalt injection may help silence the doubts about permanence inherited from earlier types of offsets that relied on forests. The guarantee, after all, is rock solid. And the geological formations where you find basalt have no associations with EOR. It is like starting from a clean slate, Snæbjörnsdóttir told me with a slight smile.

Reassurance on quantification, permanence and durability all fall on the technical side of DAC’s trust challenges. If the engineering is convincing, if the MRV proves airtight, a pathway starts to emerge for the foundational trust the new industry needs.

Unfortunately, DAC is also plagued by a whole other set of doubts. The industry promises to be massive. Are people going to want all that noise and infrastructure when these new industrial-scale facilities set up shop in their neighborhood?

“There is an elegant poetry to the idea of injecting carbon beneath the Earth’s surface, whence it came.”

Community Buy-In

“We can’t have sacrifice zones,” she told me.

I was speaking to Kasja Hendrickson, then director of technology policy at Carbon180, a non-governmental organization that works on U.S. carbon removal policy. Hendrickson knows DAC must differentiate itself from the extractive industries that came before.

DAC as an industry hopes to grow rapidly. Occidental Petroleum alone plans for up to 135 plants as big as Stratos by 2035. It will take several thousand large facilities around the world to meet the carbon capture needs anticipated by the IPCC. The industrial build-out will involve thousands of drilling rigs, storage tanks, pipelines, air contactors, worker housing and a huge energy infrastructure to support it all. “We are talking massive scale,” Fuhr told me. “An industry that has a planetary effect.”

Carbon180 has a whole team devoted to making sure the DAC industry grows in a way that is environmentally just. “DAC needs to be scaled in a way that brings people along,” Hendrickson told me. Developers must show they can build local economies while avoiding the environmental burdens that have plagued mineral extraction and energy production throughout history.

This type of trust cannot be built by engineers and MRV protocols. It accumulates slowly by engaging closely with the people who see the industry take off from their own backyards.

The progressive edge of the DAC community is working hard to ensure the build-out creates a positive legacy. A key element is to sign “community benefit agreements” before breaking ground on any new facility. These agreements are designed to ensure new carbon capture plants will be good neighbors. They require partnering with communities and learning from local people how the new industry can best serve them. One energy company’s recently signed community benefit agreement with an environmental advocacy group, part of a project to build a CO2 pipeline, included funding to train first responders, a community fund for counties along the pipeline and a promise to pay for cleanup after the company is gone.

Carbon180 calls this “transformative justice” or “removing forward.” One piece of this justice is helping people who honed their skills in the oil and gas industry transfer them to carbon sequestration. “It’s part of the just energy transition,” Hendrickson told me. “There must be a transition where we use that expertise.” Matching what matters to people with what matters to the planet builds a bridge between the global challenge of carbon and the local challenge of economic and social sustainability, she noted.

Hendrickson points out that the stakeholder engagement necessary for community benefit agreements makes good business sense, whatever your politics. “There is a fundamental financial payoff to engaging communities early and often, getting them bought into the process,” Hendrickson told me. A company has a business interest in building facilities that keep local people happy. 

An Industry Ready To Launch

The fans at Occidental’s Stratos plant are nearly ready to start spinning. Soon, the dry Texas air will blow over the potassium hydroxide and water solution in the plant’s contactors, pulling carbon from the sky. Once it is fully operational, potentially by late 2025, it’s expected to scrub 500,000 tons of CO2 from the atmosphere annually. 

1PointFive, the Occidental subsidiary building the plant, is ambiguous about where this captured carbon will go. “The CO2 is either permanently stored in underground reservoirs through secure geologic sequestration,” its description of the technology states, “or is used to make new products.” Occidental’s CEO Vicki Hollub is hardly reassuring about rapidly ending carbon emissions. “We believe that our direct capture technology is going to be the technology that helps to preserve our industry over time,” Hollub said at a 2023 conference for oil executives.

David Keith, a professor of geophysical sciences who designed the technology bought by Occidental, thinks that despite environmentalists’ skepticism, there is reason to welcome the oil and gas industry’s entry into DAC. It is evidence that climate concerns are being taken more seriously by the industry, and it could also be helpful to political progress on climate change overall.

“Legacy oil wants low carbon prices and high energy prices,” Keith wrote in a 2023 article for The Economist. “Carbon removal wants the opposite.” The more big players pushing on the right side of the ledger, the better.

“To be clear, we should never cut them any slack on the bad stuff they are doing,” Keith told me. “But don’t block the good stuff they are doing just because they are also doing bad stuff.”

“Matching what matters to people with what matters to the planet builds a bridge between the global challenge of carbon and the local challenge of economic and social sustainability.”

Environmentalists are right to be on alert for greenwashing, Keith said. But in the meantime, he thinks the oil and gas industry’s skillset with large industrial and chemical processes is a boon for the fledgling industry.

The Ethics Of Climate Solutions

For Hendrickson, there is a moral dimension to every climate-related question.

Rebuilding trust is a delicate process, especially in a sector that is perpetually shrouded in suspicion, Hendrickson told me. It takes time, as well as accurate math and engineering. It involves understanding what a community needs to welcome a new industry into its neighborhood. Both types of trust are essential for building the political will needed to tackle the climate problem.

As long as DAC remains market-driven rather than sponsored by governments and treated as a valuable public service, numerous potential pitfalls could prevent the building of what Buck calls the infrastructure of trust. “History,” Buck told me, “is full of people who have values that ran up against the demands and structures of capitalism and were forced to make compromises or step away from their values because of how the system operates.”

She suspects DAC may prove to be no different.

Although still optimistic about the future of Heirloom’s rocks and robots, Dennett left the company a few months after my visit and now works for an innovation incubator. Heirloom continues to raise money and is moving ahead with its two plants in Louisiana, including the one that is part of the federally funded DAC Hub.

According to the World Meteorological Organization, as of this past January, the last decade was the hottest on record. The need to remove carbon from the atmosphere is only becoming clearer. There is little evidence that reducing emissions is going to become a priority for every country in the world anytime soon. This means that carbon removal is fast becoming an atmospheric necessity. After years in development, the technology may finally be growing viable on the scale necessary to make a difference. Its advocates are still waiting for environmentalists to give the nascent industry their blessing.

Travel for this piece was supported by a Frank Allen Field Reporting Grant from the Institute for Journalism & Natural Resources.

The post ‘Climate Delusion’ Or Vital Solution? Carbon Capture’s Uphill Battle appeared first on NOEMA.

The Ascendance Of Algorithmic Tyranny
https://www.noemamag.com/the-ascendance-of-algorithmic-tyranny (July 1, 2025)

We are living through a transition. The ground beneath us is moving, subtly but unmistakably — what previous generations might have called a new zeitgeist, or we might today simply call a vibe shift.

The once-venerated forces of competition have given way to moat-building, rent-seeking and the financialization of hype. In this world, economic power flows from control — of the platforms, data and algorithms that help make our lives more efficient but also opaque, unsettling and destructive.

Artificial intelligence is situated near the heart of this shift. It is no longer just a technological exploration but a battleground for political and economic dominance. The U.S. and China are pouring billions into AI research under the banner of national security, while Europe is hoping to reduce its technological dependence.

We’ve seen this before. The transition toward an industrialized society brought with it a similar sense of vertigo: changes in labor, culture and global order that confounded the logic of a prior age. “All that is solid melts into air,” as philosopher Marshall Berman, echoing Karl Marx, wrote of the onslaught of industrial modernity. What united those shifts was not simply material transformation but, as we argue in “Seeing Like a Platform,” something deeper: a change in how we understood ourselves — a remapping of how power is imagined, articulated and enacted — defined by the rise of a new set of metaphors.

We are, we believe, living through a similar reordering — this time into what might be called a digital modernity. And as before, some of the most profound changes don’t have to do with the figureheads of the age or the rise and fall of empires, but with subtler matters: the shifts in language we use to describe our social world. These shifts do more than reflect change; they enact it. Our aim, then, is to examine the epistemological foundations of digital modernity: the ways of knowing, speaking and imagining that define this new era.

To some, such an endeavor may feel like a distraction in a moment of brute power and naked violence. Surely metaphors alone do not define politics. But neither are they only the linguistic veneer of society; metaphors are tools in the hands of the powerful. They also help us define how we might envision an alternative world. 

Henry Ford & Metaphors Of Power

To understand digital modernity, we must situate it historically. So, we first turn to an icon of industrial modernity: Henry Ford. In 1913, when Ford introduced the moving assembly line to his factory in Highland Park, Michigan, he did more than revolutionize manufacturing. He provided a new metaphor for modernity itself. The logic of the factory seeped beyond its walls, infiltrating bureaucracies, schools and governments. Society, it was now understood, could be engineered. Institutions became engines, citizens mere cogs. The machine became the guiding metaphor of the age, a template for power itself. Ford’s factory became the “epistemological building site on which the whole world-view was erected and from which it towered majestically over the totality of living experience,” sociologist Zygmunt Bauman memorably notes in “Liquid Modernity.”

The implicitly mechanistic language came not only to describe, but also to structure reality, bringing a world that saw itself in mechanical terms: precise, hierarchical and controlled. It inspired an active state characterized by a sometimes dangerous combination of ambition and self-assurance. Ford’s factory, in short, came to shape a new modernity — an industrial modernity.

Power always needs abstraction. To be controlled, reality’s unwieldy complexity needs to be slotted into models, categories and measures that allow for standardization and manipulation. Nature and social life are bureaucratically indigestible in their raw form and must be pre-processed to be seen and shaped. Power, in other words, requires maps. And maps are only useful because they are specific and leave things out.

When wielded by the state, maps become more than representations; they shape the world in their image. A state registry that designates taxable property-holders does not merely record a system of land tenure — it creates one, its categories made real by the force of law. Power has an epistemology, and that epistemology is inscribed onto reality itself.

This was the insight at the heart of political scientist James C. Scott’s “Seeing Like a State,” an influential meditation on how modern states, in their quest for “legibility,” remake the world to fit their own narrow field of vision. The modern state was inseparable from the rise of demographics and statistics (literally, the science of the state), which gave it a way of seeing — of rendering reality intelligible and, thus, governable. But vision, no matter how comprehensive it appears, is always selective. Certain things fall outside the field of view.

“Artificial intelligence is no longer just a technological exploration but a battleground for political and economic dominance.”

The modern state, confronted with the limits of its own perception, sought to make the world conform to its methods of representation. Diversity, mobility and local knowledge were obstacles to governance. The solution was to standardize, categorize and classify — to fix people in space, to assign them legible identities, to measure and control. Permanent surnames became standard. Borders hardened. Populations — a statistical concept — were sorted and segregated, compelled to reside among their designated categories. Forests were arranged in neat, monocultural rows. Cities were reimagined as grids, stripped of their messy, organic life.

These ambitions reached their zenith with Fordist industrialism, which shaped a modernity that celebrated mastery over nature and society, confident that human ingenuity could tame complexity through central planning and rational control. Yet, industrial modernity was contradictory, even schizophrenic, as it attempted to both fix reality through infrastructure and administration while simultaneously promoting incessant movement and continuous flow. And like any model, the imposition of this mechanical view was incomplete, uneven and contested.

Industrial modernity brought both progress and oppression. It raised the living standards of countless people, delivered affordable goods, stable jobs and a period of steady wage growth. 

But it also imposed a stifling uniformity, crushing individuality and local variation in the name of efficiency. The combination of ambitious self-confidence and partial blindness resulted in catastrophic failures. While it may be necessary to bracket aspects of the world to make it legible, the world left outside the brackets will often return to haunt the interventions. The monoculture “scientific” forests were susceptible to disease outbreaks, pests, fires and storm-felling. The square-grid cities of Le Corbusier and Robert Moses left out the human scale — what the influential urban author Jane Jacobs called the “sidewalk ballet.” Where Moses and Le Corbusier viewed cities as hopelessly inefficient and outdated, Jacobs saw an intricate and historically evolved web of social relations that made cities livable, creative and innovative. In their zeal to engineer society as if it were a clockwork, modernists destroyed the very social tissue that held it together. 

At its darkest extremes, industrial modernity took its way of seeing to a terrifying conclusion. Fixing, segregating and concentrating populations became means of exploitation and extermination. Ghettos. Apartheid. Camps.

Industrial modernity epitomized both the creative and destructive powers of humankind. For good or bad, its machinic metaphors suggested that societies could be designed according to the noblest or darkest ideologies — if societies are like machines, then human engineers decide their fate.

Liquid Modernity

The decline of industrial modernity was not merely the collapse of an economic model but the unraveling of a worldview. The once-seamless fusion of mass production, rising wages and stable employment began to disintegrate in the 1970s. The postwar boom had reached its limits. The once-devastated economies of Western Europe and Japan had completed their recoveries, their markets growing increasingly saturated. A central contradiction emerged: industrial modernity’s blueprints created their own discontents and could not contain capitalism’s dynamism.

Inflation soared, stagnation took hold and an environmental consciousness emerged in response to smog-choked cities and the growing awareness of planetary limits. At the same time, demands for greater democracy — and resistance to standardization — gathered force. Factory workers went on strike. Students rebelled. Civil rights activists and anti-war movements challenged the authority of states that had long assumed their legitimacy was self-evident. And then, in 1973, the Organization of the Petroleum Exporting Countries imposed an embargo in retaliation for the West’s support of Israel, sending the price of oil — industrial modernity’s lifeblood — skyrocketing. The system wobbled. Then it buckled.

In response to falling profit margins, many corporations looked outward. Where once industrial capitalism had been a closed loop — wages fueling consumption, consumption driving production — firms now sought refuge in the low-wage economies of the Global South. The consequences were epochal. By breaking the link between domestic production and domestic wages, globalization shattered the fragile balance that had sustained rapidly rising incomes and relatively low inequality in the West. The truce between capitalism and democracy — one that had defined the postwar order — came to an end.

Politically, it marked the demise of the self-confident state. The era in which governments widely saw themselves as stewards of economic progress gave way to something far less ambitious. The state no longer sought to design economic and social life, but rather to lubricate the machinery of production in order to compete in a global market. Neoliberalism did not simply shrink the state — it redefined its purpose.

“In their zeal to engineer society as if it were a clockwork, modernists destroyed the very social tissue that held it together.”

No longer an agent of redistribution, the state became a facilitator of wealth growth, shedding its old obligations in favor of deregulation, privatization and the steady erosion of public goods. The industrial state had sought to counteract or even transcend capitalism; the neoliberal state, by contrast, sought merely competitive advantages vis-à-vis other states. In place of engineering a cycle of investment and growth, grand infrastructure projects and social guarantees, it offered tax cuts, asset sales and subsidies for corporate investment.

Economically, the shift was equally stark. Manufacturing, once the engine of modernity, ceded its position to finance, technology and the culture industries. The dominance of mass production was replaced by what theorist David Harvey called flexible accumulation — a system in which markets became more fragmented, production more dispersed and labor more precarious.

If industrial modernity was defined by economies of scale — factories producing standardized goods for standardized consumers — the new order was defined by economies of scope: diversity, differentiation and niche markets took precedence over mass production. In this new landscape, financialization became paramount. Profits were no longer driven primarily by the production of goods but by the endless churn of capital itself — trading, speculation, and the extraction of value from debt.

As production shifted, so too did culture. The rigid structures of industrial society — its mass markets, mass media, and mass identities — gave way to something more fluid and fragmented: a postmodern consumer culture saturated with images, advertisements and aesthetic bricolage. Traditional categories blurred. Styles mixed. The past was repackaged as aesthetic fodder. Identity, once grounded in work and social class, became a matter of lifestyle curation, shaped by brands, social media and the omnipresent logic of consumer choice.

This cultural shift was more than a matter of taste — it reflected a deeper epistemic transformation. The decline of the industrial state was accompanied by a broader erosion of the modernist belief in universal truths, objective knowledge and rational planning. If Scott’s “Seeing Like a State” had exposed the limits of bureaucratic reason, postmodern thinkers went further, questioning whether such knowledge was ever more than a construct, a tool of power masquerading as truth. The old dream of a knowable, controllable world gave way to skepticism, relativism and pluralism. Grand narratives collapsed, replaced by fragments.

As Bauman observed, this was the arrival of liquid modernity. Where industrial modernity was rigid and hierarchical, liquid modernity was fluid and unstable. The institutions that had once provided security — stable jobs, lifelong careers, fixed social roles — dissolved under the pressures of globalization, technological acceleration and individualization. While industrial society had imposed conformity, it had at least offered predictability. Liquid modernity, by contrast, offered neither.

Toward Digital Modernity

The last decades have seen the gradual emergence of a new way of seeing. One that, in its metaphors and mechanics, offers something distinct from both the centralized command of the industrial era and the disoriented flux of its postmodern successor.

Digitalization first took root as part of the capitalist reorganization in the wake of the Fordist crisis of the 1970s. Replacing mass production with flexible specialization meant restructuring production toward automation and digitalization. Digital technology provided the infrastructure for the global financial system, enabling the acceleration and deepening of securitization, financialization and capital mobility.

But within these systems, something else was stirring. Geeks, hackers and countercultural thinkers found in computers the seeds of a different logic — one that resisted hierarchy, celebrated emergence and suggested that order did not need to be imposed from above but could arise from the interactions of many. The digital world was not a factory to be managed but an ecosystem to be explored. The internet, still in its infancy, became both a medium and a metaphor for an alternative social order.

At the same time, a similar revolution was underway in science. The old mechanical metaphors, built on equilibrium and linear causality, struggled to explain the messy complexity of real-world systems. Physics’ conventional mechanistic understanding of nature — with its emphasis on reductionism, linearity, equilibria and analytical solutions — could only represent part of the natural world. 

Pendula and two-body gravity systems may be simple enough to solve precisely or well-structured and machine-like enough to take apart into component pieces. But when a pendulum is subjected to too much initial force, or when a third body joins the gravitational system, the methods fail — and the systems effectively become unpredictable, entering the realm of what mathematicians call chaos.
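The sensitivity the passage gestures at can be made concrete in a few lines of code. The logistic map is a standard toy model of chaos (a stand-in here, not the pendulum or three-body system themselves, which require numerical integration): two trajectories that begin a ten-millionth apart soon bear no resemblance to each other.

```python
def logistic_trajectory(x0, r=4.0, steps=100):
    """Iterate the logistic map x -> r*x*(1-x); r=4 is fully chaotic."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-7)  # a nearly identical starting point
# Early on the trajectories agree; within a few dozen steps the gap
# grows roughly geometrically until they are effectively unrelated.
divergence = max(abs(x - y) for x, y in zip(a, b))
```

There is no noise and no hidden input: the rule is deterministic, yet prediction fails because tiny measurement errors are amplified at every step.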

“Neoliberalism did not simply shrink the state — it redefined its purpose.”

A growing movement of so-called complexity scientists argued that such chaos and complexity were not the exception but the norm. Reality was not a machine but a web of interactions where intelligence and order emerged from the bottom up. A single ant, they observed, is a simple creature, but together, ants form colonies capable of astonishingly sophisticated behavior, far more efficient than any top-down coordination. Nature, it seemed, had been running a decentralized system all along.

Digital technology allowed these ideas to be put into practice. Organic metaphors came to replace machinic metaphors. Instead of designing closed and comprehensive systems, programmers learned to create systems that evolve. The most fascinating computational experiments of the era — artificial life, cellular automata, simulations like the “Game of Life” — did not create order through command but through interaction. We see here the emergence of a new epistemology, one that saw the world not as a grid but as a complex, adaptive system. Wikipedia, Linux and open-source communities generally seemed to validate the promise: Social coordination does not require hierarchy. The network had replaced the machine as the central metaphor of the age. We were no longer cogs in a machine but birds in a swarm.
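The order-from-interaction idea is easy to demonstrate. Below is a minimal sketch of Conway’s Game of Life: no cell follows a plan or receives a command, yet stable and oscillating patterns emerge from one local rule applied everywhere.

```python
from collections import Counter

def step(live, rows, cols):
    """One generation of Conway's Game of Life on a wrapping grid.

    `live` is a set of (row, col) coordinates of living cells.
    """
    # Count how many live neighbors each cell has.
    counts = Counter(
        ((r + dr) % rows, (c + dc) % cols)
        for r, c in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # One local rule: birth with exactly 3 live neighbors,
    # survival with 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row flip to a column and back, forever —
# a persistent pattern no individual cell "knows" about.
blinker = {(1, 0), (1, 1), (1, 2)}
```

The rule is purely local, which is exactly the point the complexity scientists were making: global structure can be an emergent property rather than a designed one.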

For a moment, digitalization appeared to offer an alternative to both markets and the state — enabling leaderless and non-monetized forms of social organization through online experiments like Wikipedia or the original Couchsurfing. Social movements embraced its tools, imagining new models of collective decision-making beyond the slow bureaucracies of representative democracy. Theorists proposed ideas like the “sharing economy” and “commons-based peer production,” suggesting that digital technology could enable a more cooperative, decentralized world. The platforms that emerged in the 2000s — Facebook, Uber, Airbnb — initially presented themselves as the realization of this dream: breaking down barriers, bypassing gatekeepers, creating seamless peer-to-peer interactions. Crowds and swarms would replace states and corporations.

But power is never so easily displaced. Platforms, venture capitalists soon discovered, enabled their own form of power and control. They provided infrastructures that constrain and shape the dynamics of interaction. Digital interfaces and algorithms do not use force or command to steer their users — as in the scientific management typical of Fordist factories — but rather nudge, cajole and incentivize. Just as the rules governing the interactions of ants shape their collective intelligence and enable them to build complex infrastructures, platforms designed and tweaked their rules of interaction to maximize engagement, advertising revenue and data extraction. Collective intelligence and emergence, so dangerous for the blueprint designs of Fordist modernity, now fed corporate power.

Rather than eliminating intermediaries, digital platforms became them — only with more opaque and pervasive forms of control. What once appeared as decentralization revealed itself as a new structure of intermediation, one that expanded the market into ever more intimate corners of human life. The promise of a free and open internet curdled into an architecture of surveillance and commodification.

Platforms brought more than an expansion of commodification; they represented the rise of a new form of power: They revealed how decentralization and bottom-up systems could paradoxically function as a means of control. While industrial power was tied to repression and coercion, the power of the platform encourages our active participation and agency — making us complicit in the very systems that control our behavior, preferences and opportunities.

Individuality shifted from a threat to the homogeneity and obedience upon which industrial modernity rested, to a resource which could be channeled to fuel innovation and control. The billions of people who use Facebook or Instagram for their own creative, transgressive, mundane purposes fortify Meta’s power by feeding content and directing attention to its platforms.

More recently, the weaponization of swarm dynamics has taken a more explicitly political turn. For Elon Musk, his platform X can be “thought of as a collective, cybernetic super-intelligence” because “it consists of billions of bidirectional interactions.” That much is true. But unlike colonies of ants, X does have an owner with the oversight and power to tweak algorithms and parameters to achieve a desired colony dynamic.

The transformation has been epistemic as much as economic. Traditional governance, built on census data and static categories, relied on segmentation: populations were classified, counted and regulated. Digital governance works differently. Instead of imposing fixed grids, it detects patterns, clusters and trends, steering individuals through nudges rather than commands. It does not discipline through direct intervention but through the subtle orchestration of flows — shaping consumption, curating news, sorting visibility. The factory organizes workers into regimented shifts; the platform organizes them into fluid, ever-adapting supply chains. Uber drivers, delivery workers, ghost kitchen workers, content creators — all governed by an invisible web of feedback loops, ratings and automated incentives.

“If industrial modernity’s blind spot was its erasure of complexity, digital modernity’s is its naturalization of inequality.”

This shift in control reflects a deeper transformation in how we conceptualize society itself. If the industrial world was understood in mechanical terms — systems of input and output, gears and engines — today’s digital world is understood as organic: webs, networks, ecosystems.

Governance is no longer a matter of enforcing stability but of guiding adaptation. The dominant institutions of the digital age, from Amazon to TikTok, condition their users’ actions by providing platforms where particular social patterns emerge through interaction. A TikTok video does not go viral because an editor selects it — it spreads because an algorithm learns from user behavior, amplifying certain trends while burying others. Power operates not through decree but through design.
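A hypothetical sketch of that feedback loop (illustrative only — the weights and mechanism are assumptions, not any platform’s actual ranking system): items are shown in proportion to their past engagement, and being shown earns further engagement, so small early leads compound into virality.

```python
import random

def simulate_feed(items, rounds=5000, seed=1):
    """Toy rich-get-richer dynamic: visibility is proportional to past
    engagement, and each impression adds engagement. All numbers are
    illustrative assumptions, not a real platform's algorithm.
    """
    rng = random.Random(seed)
    engagement = {item: 1.0 for item in items}  # uniform prior
    for _ in range(rounds):
        # Show one item with probability proportional to its engagement.
        shown = rng.choices(list(engagement),
                            weights=list(engagement.values()))[0]
        engagement[shown] += 1.0  # being seen begets being seen
    return engagement

scores = simulate_feed(["a", "b", "c"])
```

No editor picks a winner; the skewed outcome is produced entirely by the design of the sampling rule — which is the sense in which power here operates through design rather than decree.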

Like all paradigms, this way of seeing both reveals and obscures. If industrial modernity’s blind spot was its erasure of complexity, digital modernity’s is its naturalization of inequality. Self-organization, for all its promise, does not ensure fairness. Networks are not inherently democratic. The very structures that enable swarm dynamics also enable the consolidation of unprecedented power. A few platforms, largely insulated from oversight, shape the conditions under which culture, social life and economic exchange take place.

In this model, control is exerted not through direct coercion but through environmental structuring. The platform does not force participation, but it makes certain behaviors more likely, certain choices more available and certain patterns of attention more profitable. This is the logic of the algorithmic feed, the personalized ad, the gamified social contract. It governs not by mandate but by suggestion — and in doing so, it rewires the very conditions under which power is recognized.

The robber barons and oligarchs of the digital era draw their power from their capacity to shape the infrastructures that control the flow of attention and data.

This marks a fundamental departure from earlier forms of governance. The industrial-modern state sought to see its population through demographic and statistical techniques to manage it — hence its obsession with classification, legibility and making things measurable. Power in digital modernity does not need to see individuals in quite the same way; instead, it detects behavioral patterns, clusters of interactions, tendencies that can be guided rather than commanded. What it loses in clarity, it gains in flexibility. Rather than mapping reality onto a rigid statistical model, it responds in real-time, adjusting itself to flows of data.

If the industrial state’s failure was its attempt to fit people and nature into rigid grids, the failure of digital power might be its attempt to render the world as a series of dynamic patterns. The metaphors of networks and clusters are just as partial and incomplete as the metaphor of the homogeneous grid before them.

And just as industrial modernity shaped the world by stamping it with the imprint of its designs, seeking to make it fit its partial vision of grids and uniformity, so digital power will leave its imprint: reshaping how we work, how we are governed, how we understand ourselves. Every governing logic has its blind spots, and every attempt to impose order on the world creates new forms of disorder. The question is not whether those who wield power within digital modernity will succeed in realizing their visions, but what will emerge in the gaps — in the places where digital modernity’s models fail and something is inevitably left out.

The Politics Of Digital Modernity

What we are living through is not just a vibe shift, then — it is the slow, seismic arrival of a new modernity. A digital modernity.

But modernities are not monoliths. They are fragmented, volatile and contested — sites of struggle as much as transformation. If past modernities are any indication, the answer will not be determined in theory but in practice. The metaphors we use to describe digital modernity will shape how we navigate it, just as those of industrial modernity once informed the dreams and designs of factory owners, bureaucrats and revolutionaries. 

As industrial modernity waned, it left behind a shared skepticism of top-down power — of institutions, bureaucracies and all forms of delegation. The left and right envisioned different futures but found common cause in distrusting anything that descended from above. For the right, this translated into an embrace of market logic. For the left, it inspired a politics shaped by figures like Jane Jacobs, Ralph Nader and Rachel Carson: defenders of local communities and ecosystems, critics of top-heavy schemes, champions of decentralization and legal constraint.

“The robber barons and oligarchs of the digital era draw their power from their capacity to shape the infrastructures that control the flow of attention and data.”

Digitalization didn’t invent these impulses; it turbocharged them. On the left, it promised more convivial, horizontal modes of exchange, liberated from both state and market. On the right, it lent new vigor to free-market mythology, as platforms assumed tasks once reserved for public institutions: regulating, coordinating, even building infrastructure.

But the skepticism toward institutions reshaped more than policy; it reshaped politics itself. The post-industrial left, deeply suspicious of hierarchy, increasingly eschewed delegation and organization. The right shared this disdain for traditional authority but had a release valve: a willingness to rally behind charismatic strongmen. The left, lacking such a mechanism, struggled to convert its energy into power, or its ideals into durable structures.

Digital culture amplified these anti-institutional reflexes. The left, disenchanted with bureaucracy, was seduced by the allure of frictionless mobilization. Why endure the tedium of organizational work when a hashtag could summon thousands? Why compromise one’s individuality when one could move, like a bird in a murmuration, beautiful and leaderless? Even when the left endorses public institutions, the support is tepid — unmoored from the tradition that once built and defended them. The problem runs deeper than policy: A political imagination that equates democracy with spontaneous, bottom-up emergence cannot sustain the investments — emotional, financial, political — required for long-term, collective action.

Our visions of liberation remain stuck in the metaphors of industrial modernity. Oppression is still imagined as imposed, mechanistic, external. Freedom, by contrast, is seen as organic and self-organizing. But these metaphors falter in the face of digital power. They obscure more than they reveal — about the world we live in and the one we might want to change.

The authoritarianism of digital modernity does not wear the mask of faceless bureaucracy. It operates through faceless collectivity. Its commands arrive not as decrees but as design choices. Its tyrannies are those of the nudge, the notification, the algorithm. The swarm replaces the structure; the feed replaces the plan.

If democracy is to endure, it must be reimagined not as a retreat to the local or the decentralized, but as a renewed assertion of our collective capacity to shape society deliberately and at scale. We must rediscover forms of organizing that enable rather than dilute collective power. As the sociologist Ruha Benjamin has written, resistance is not enough. We need creation. The vast technical and financial energies now aimed at achieving speculative goals like the AI singularity or Mars colonization could be redirected toward more urgent, earthly matters: housing shortages, economic precarity, the quiet crises of everyday life.

The high priests of digital modernity — epitomized by Musk — envision a world where collective wealth serves elite ambition, while the rest of us are reduced to reactive swarms: intelligent, maybe, but rudderless and dreamless. To critique the metaphors of digital modernity is to reclaim coordination and collective action not as a compromise, but as an act of imaginative solidarity.
