The AI-Powered Web Is Eating Itself

Suppose you’re craving lasagna. Where do you turn for a recipe? The internet, of course.

Typing “lasagna recipe ideas” into Google used to surface a litany of food blogs, each with its own story: a grandmother’s family variation, step-by-step photos of ingredients laid out on a wooden table, videos showing technique and a long comment section where readers debated substitutions or shared their own tweaks. Clicking through didn’t just deliver instructions; it supported the blogger through ads, affiliate links for cookware or a subscription to a weekly newsletter. That ecosystem sustained a culture of experimentation, dialogue and discovery.

That was a decade ago. Fast forward to today. The same Google search can now yield a neatly packaged “AI Overview,” a synthesized recipe stripped of voice, memory and community, delivered without a single user visit to the creator’s website. Behind the scenes, their years of work, including their page’s text, photos and storytelling, may have already been used to help train or refine the AI model.

You get your lasagna, Google gets monetizable web traffic and for the most part, the person who created the recipe gets nothing. The living web shrinks further into an interface of disembodied answers, convenient but ultimately sterile.

This isn’t hypothetical: More than half of all Google searches in the U.S. and Europe in 2024 ended without a click, a report by the market research firm SparkToro estimated. Similarly, the SEO intelligence platform Ahrefs published an analysis of 300,000 keywords in April 2025 and found that when an AI overview was present, the number of users clicking into top-ranked organic search results plunged by an average of more than a third.

Users are finding their questions answered and their needs satisfied without ever leaving the search platform.

Until recently, an implicit social contract governed the web: Creators produced content, search engines and platforms distributed it, and in return, user traffic flowed back to creators’ websites, sustaining the system. This reciprocal bargain of traffic in exchange for content underwrote the economic, cultural and informational fabric of the internet for three decades.

Today, the rise of AI marks a decisive rupture. Google’s AI Overviews, Bing’s Copilot Search, OpenAI’s ChatGPT, Anthropic’s Claude, Meta’s Llama and xAI’s Grok effectively serve as a new oligopoly of what are increasingly being called “answer engines” that stand between users and the very sources from which they draw information.

This shift threatens the economic viability of content creation, degrades the shared information commons and concentrates informational power.

To sustain the web, a system of Artificial Integrity must be built into these AI “answer engines” that prioritizes three things: clear provenance that consistently makes information sources visible and traceable, fair value flows that ensure creators share in the value even when users don’t click their content and a resilient information commons that keeps open knowledge from collapsing behind paywalls.

In practical terms, that means setting enforceable design and accountability guardrails that uphold integrity, so AI platforms cannot keep all the benefits of instant answers while pushing the costs onto creators and the wider web.

Ruptured System

AI “answer engines” haven’t merely made it easier to find information; they have ruptured the web’s value loop by separating content creation from the traffic and revenue that used to reward it.

AI companies have harvested the creative labor of writers, researchers, artists and journalists to train large language models without clear consent, attribution or compensation. The New York Times has sued OpenAI and Microsoft, alleging that the tech giants used its copyrighted articles for this purpose. In doing so, the news organization claims, they are threatening the very business model of journalism.

In fact, AI threatens the business model of digital content creation across the board. As publishers lose traffic, there remains little incentive for them to keep content free and accessible. Instead, paywalls and exclusive licensing are increasingly the norm. This will continue to shrink the freely available corpus of information upon which both human knowledge and future AI training depend.

The result will be a degraded and privatized information base. It will leave future AI systems working with a narrower, more fragile foundation of information, making their outputs increasingly dependent on whatever remains openly accessible. This will limit the diversity and freshness of the underlying data, as documented in a 2024 audit of the “AI data commons.” 

“The living web is shrinking into an interface of disembodied answers, convenient but ultimately sterile.”

At the same time, as more of what is visible online becomes AI-generated and then reused in future training, these systems will become more exposed to “model collapse,” a dynamic documented in a 2024 Nature study. It showed that when real data are replaced by successive synthetic generations, the tails of the original distribution begin to disappear as the model’s synthetic outputs begin to overwrite the underlying reality they were meant to approximate. 

Think of it like making a photocopy of a photocopy, again and again. Each generation keeps the bold strokes and loses the faint details. Both trends, in turn, weaken our ability to verify information independently. In the long run, this will leave people relying on systems that amplify errors, bias and informational blind spots, especially in niche domains and low-visibility communities.
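The photocopy dynamic is easy to reproduce in miniature. The toy sketch below is illustrative only, and far simpler than the Nature study’s setup: each “generation” of a model is trained on nothing but the previous generation’s output, mimicked here by resampling with replacement. Any value the model fails to reproduce is lost for good, so the distribution’s tails can only contract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "real" data drawn from a normal distribution.
data = rng.standard_normal(100_000)
print(f"gen  0: min={data.min():.2f}, max={data.max():.2f}, std={data.std():.3f}")

# Each generation, a toy "model" trains only on the previous generation's
# output and can only emit values it has already seen -- mimicked here by
# resampling with replacement. Anything not reproduced is lost for good,
# so the support can only shrink; sparse tail values disappear first.
for gen in range(1, 51):
    data = rng.choice(data, size=data.size, replace=True)
    if gen % 10 == 0:
        print(f"gen {gen:2d}: min={data.min():.2f}, max={data.max():.2f}, std={data.std():.3f}")
```

After a few dozen rounds, the extremes have visibly drawn inward even though each individual step looks harmless.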

Picture a procurement officer at a mid-sized bank tasked with evaluating vendors for a new fraud-detection platform. Not long ago, she would likely have turned to Google, LinkedIn or industry portals, wading through detailed product sheets, analyst reports and whitepapers. By clicking through to a vendor’s website, she could access whatever technical information she needed and ultimately contact the company. For the vendor, each click also fed its sales pipeline. Such traffic was not incidental; it was the lifeblood of an entire ecosystem of marketing metrics and campaigns, specialized research and the jobs they underwrote.

These days, the journey looks different. A procurement officer’s initial query would likely yield an AI-generated comparison condensing the field of prospects into a few paragraphs: product A is strong on compliance; product B excels at speed; product C is cost-effective. Behind this synthesis would likely lie numerous whitepapers, webinars and case studies produced by vendors and analysts — years of corporate expertise spun into an AI summary.

As a result, the procurement officer might never leave the interface. Vendors’ marketing teams, seeing dwindling click-driven sales, might retreat from publishing open materials. Some might lock reports behind steep paywalls, others might cut report production entirely and still others might sign exclusive data deals with platforms just to stay visible.

The once-diverse supply of open industry insight would contract into privatized silos. Meanwhile, the vendors would become even more dependent on the very platforms that extract their value.

Mechanisms At Play

The rupture we’re seeing in the web’s economic and informational model is driven by five mutually reinforcing mechanisms that determine what content gets seen, who gets credited and who gets paid. Economists and product teams might call these mechanisms intent capture, substitution, attribution dilution, monetization shifts and the learning loop break.

Intent capture happens when the platform turns an online search query into an on-platform answer, keeping the user from ever needing to click the original source of information. This mechanism transforms a search engine’s traditional results page from an open marketplace of links essentially into a closed surface of synthesized answers, narrowing both visibility and choice. 

Substitution, which takes place when users rely on AI summaries instead of clicking through to source links and giving creators the traffic they depend on, is particularly harmful. This harm is most pronounced in certain content areas. High substitution occurs for factual lookups, definitions, recipes and news summaries, where a simple answer is often sufficient. Conversely, low substitution occurs for content like investigative journalism, proprietary datasets and multimedia experiences, which are harder for AI to synthesize into a satisfactory substitute.

The incentives of each party diverge: Platforms are rewarded for maximizing query retention and ad yield; publishers for attracting referral traffic and subscribers; and regulators for preserving competition, media plurality and provenance. Users, too, prefer instant, easily accessible answers to their queries. This misalignment ensures that platforms optimize for closed-loop satisfaction while the economic foundations of content creation remain externalized and underfunded.

Attribution dilution compounds the effect. When information sources are pushed behind dropdowns or listed in tiny footnotes, the credit exists in form but not in function. Simply displaying source links, which many engines do only inconsistently, does not solve the issue. These links are often de-emphasized and generate little or no economic value, creating a significant consent gap for content used in AI model training. When attribution is blurred across multiple sources and no value accrues without clicks or compensation, that gap becomes especially acute.

“AI ‘answer engines’ have ruptured the web’s value loop by separating content creation from the traffic and revenue that used to reward it.”

Monetization shifts refer to the redirected monetary value that now often flows solely to AI “answer engines” instead of to content creators and publishers. This shift is already underway, and it extends beyond media. When content promoting or reviewing various products and services receives fewer clicks, businesses often have to spend more to be discovered online, which can raise customer acquisition costs and, in some cases, prices. 

This shift can also impact people’s jobs: Fewer roles may be needed to produce and optimize web content for search, while more roles might emerge around licensing content, managing data partnerships and governing AI systems. 

The learning loop break describes the shrinking breadth and quality of the free web as a result of the disruptive practices of AI “answer engines.” As the information commons thins, high-quality data becomes a scarce resource that can be controlled. Analysts warn that control of valuable data can act as a barrier to entry and concentrate gatekeeper power.

This dynamic is comparable to what I refer to as a potential “Data OPEC,” a metaphor for a handful of powerful platforms and rights-holders controlling access to high-quality data, much as the Organization of Petroleum Exporting Countries (OPEC) controls the supply of oil.

Just as OPEC can restrict oil supply or raise prices to shape global markets, these data gatekeepers could restrict or monetize access to information used to build and improve AI systems, including training datasets, raising costs, reducing openness and concentrating innovation power in fewer hands. In this way, what begins as an interface design choice cascades into an ecological risk for the entire knowledge ecosystem.

The combined effect of these five mechanisms is leading to a reconfiguration of informational power. If AI “answer engines” become the point of arrival for information rather than the gateway, the architecture of the web risks being hollowed out from within. The stakes extend beyond economics: They implicate the sustainability of public information ecosystems, the incentives for future creativity and the integrity of the informational commons.

Left unchecked, these forces threaten to undermine the resilience of the digital environment on which both creators and users depend. What is needed is a systemic redesign of incentives, guided by the framework of Artificial Integrity rather than artificial intelligence alone.

Artificial Integrity

Applied to the current challenge, Artificial Integrity can be understood across three dimensions: information provenance integrity, economic integrity of information flows and integrity of the shared information commons.

Information provenance integrity is about ensuring that sources are visible, traceable and properly credited. This should include who created the content, where it was published and the context in which it was originally presented. The design principle is transparency: Citations must not be hidden in footnotes. 

Artificial Integrity also requires that citations carry active provenance metadata: a verifiable, machine-readable signature linking each fragment of generated output to its original source, allowing both users and systems to trace information flows with the same rigor as a scientific citation.

That introduces something beyond just displaying source links: It’s a systemic design where provenance is cryptographically or structurally embedded, not cosmetically appended. In this way, provenance integrity becomes a safeguard against erasure, ensuring that creators remain visible and credited even if the user doesn’t click through to the original source.
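What might such a machine-readable provenance tag look like? Below is one minimal, hypothetical shape for it. Every field name is an assumption rather than an existing schema, and the symmetric signature is for brevity; a production system would more plausibly use asymmetric key pairs and an open standard such as C2PA so that anyone, not just the signer, could verify the tag.

```python
import hashlib
import hmac
import json

# Hypothetical signing key; a real deployment would use an asymmetric
# key pair so third parties could verify records independently.
PLATFORM_KEY = b"demo-signing-key"

def provenance_record(fragment: str, source_url: str, author: str) -> dict:
    """Bind one fragment of generated output to its original source
    with a verifiable signature (all field names are illustrative)."""
    payload = {
        "fragment_sha256": hashlib.sha256(fragment.encode()).hexdigest(),
        "source_url": source_url,
        "author": author,
    }
    canonical = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(PLATFORM_KEY, canonical, hashlib.sha256).hexdigest()
    return payload

record = provenance_record(
    "Layer the noodles, sauce and cheese, then bake for 45 minutes.",
    "https://example.com/grandmas-lasagna",
    "Example Food Blog",
)
print(json.dumps(record, indent=2))
```

Because the signature covers both the content hash and the source fields, tampering with either invalidates the record, which is what makes the provenance structural rather than cosmetic.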

Economic integrity of information flows is about ensuring that value flows back to creators, not only to platforms. Artificial Integrity requires rethinking how links and citations are valued. In today’s web economy, a link matters only if it is clicked, which means that sources that are cited but not visited capture no monetary value. In an integrity-based model, the very act of being cited in an AI-generated answer would carry economic weight, ensuring that credit and compensation flow even when user behavior stops at the interface.

This would realign incentives from click-chasing to knowledge contribution, shifting the economy from performance-only to provenance-aware. To achieve this, regulators and standards bodies could require that AI “answer engines” compensate not only for traffic delivered, but also for information cited. Such platforms could implement source prominence rules so that citations are not hidden in footnotes but embedded in a way that delivers measurable economic value. 

Integrity of the shared information commons is about ensuring that the public information base remains sustainable, open and resilient rather than degraded into a paywalled or privatized resource. Here, Artificial Integrity calls for mandatory reinvestment of AI platform revenues into open datasets as a built-in function of the AI lifecycle. This means that large AI platforms such as Google, OpenAI and Microsoft would be legally required to dedicate a fixed percentage of their revenues to sustaining the shared information commons. 

“AI platforms cannot keep all the benefits of instant answers while pushing the costs onto creators and the wider web.”

This allocation would be architecturally embedded into their model development pipelines. For example, a “digital commons fund” could channel part of Google’s AI revenues into keeping resources like Wikipedia, PubMed or open academic archives sustainable and up to date. Crucially, this reinvestment would be hardcoded into retraining cycles, so that every iteration of a model structurally refreshes and maintains open-access resources alongside its own performance tuning. 

In this way, the sustainability of the shared information commons would become part of the AI system’s operating logic, not just a voluntary external policy. In effect, it would ensure that every cycle of AI improvement also improves the shared information commons on which it depends, aligning private platform incentives with public information sustainability.

We need to design an ecosystem where these three dimensions are not undermined by the optimization-driven focus of AI platforms but are structurally protected, both in how the platforms access and display content to generate answers, and in the regulatory environment that sustains them.

From Principle To Practice

To make an Artificial Integrity approach work, we would need systems for transparency and accountability. AI companies would be required to publish verifiable aggregated data showing whether users stop at their AI summaries or click outward to original sources. Crucially, to protect users’ privacy, this disclosure would need to include only aggregated interaction metrics reporting overall patterns. This would ensure that individual user logs and personal search histories are never exposed.

Independent third-party auditors, accredited and overseen by regulators much like accounting firms are today, would have to verify these figures. Just as companies cannot self-declare their financial health but must submit audited balance sheets, AI platforms would no longer be able to simply claim they are supporting the web without independent validation.

In terms of economic integrity of information flows, environmental regulation offers a helpful analogy. Before modern environmental rules, companies could treat pollution as an invisible side effect of doing business. Smoke in the air or waste in the water imposed real costs on society, but those costs did not show up on the polluter’s balance sheet.

Emissions standards changed this by introducing clear legal limits on how much pollution cars, factories and power plants are allowed to emit, and by requiring companies to measure and report those emissions. These standards turned pollution into something that had to be monitored, reduced or paid for through fines and cleaner technologies, instead of being quietly pushed onto the public. 

In a similar way, Artificial Integrity thresholds could ensure that the value that AI companies extract from creators’ content comes with financial obligations to those sources. An integrity threshold could simply be a clear numerical line, like pollution limits in emissions standards, that marks the point at which an AI platform is taking too much value without sending enough traffic or revenue back to sources. As long as the numbers stay under the acceptable limit, the system is considered sustainable; once they cross the threshold, the platform has a legal duty to change its behavior or compensate the creators it depends on.
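As a sketch of how such a threshold might reduce to audited arithmetic, consider the toy calculation below. Every number, and the floors themselves, are invented for illustration; the point is only that the test is as mechanical as checking an emissions limit.

```python
# All figures and thresholds are hypothetical, chosen only to show that
# the integrity test is simple arithmetic over audited, aggregated metrics.
citations_served = 1_000_000   # times the source appeared in AI answers
referrals_sent = 40_000        # clicks actually passed through to it
revenue_shared = 12_000.0      # dollars paid to the source this period
answer_revenue = 900_000.0     # revenue the platform earned on those answers

referral_rate = referrals_sent / citations_served   # 0.04
value_share = revenue_shared / answer_revenue       # ~0.013

MIN_REFERRAL_RATE = 0.10   # regulator-set floors, analogous to
MIN_VALUE_SHARE = 0.05     # pollution limits in emissions standards

if referral_rate < MIN_REFERRAL_RATE and value_share < MIN_VALUE_SHARE:
    print("Integrity threshold crossed: rebalance traffic or compensate sources.")
else:
    print("Within sustainable limits.")
```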

This could be enforced by national or regional regulators, such as competition authorities, media regulators or data protection bodies. Similar rules have begun to emerge in a handful of jurisdictions that regulate digital markets and platform-publisher relationships, such as the EU, Canada or Australia, where news bargaining and copyright frameworks are experimenting with mandatory revenue-sharing for journalism. Those precedents could be adapted more broadly as AI “answer engines” reshape how we search online.

These thresholds could also be subject to standardized independent audits of aggregated interaction metrics. At the same time, AI platforms could be required to provide publisher-facing dashboards exposing the same audited metrics in near real-time, showing citation frequency, placement and traffic outcomes for their content. These dashboards could serve as the operational interface for day-to-day decision-making, while independent audit reports could provide a legally verified benchmark, ensuring accuracy and comparability across the ecosystem.

In this way, creators and publishers would not be left guessing whether their contributions are valued. They would receive actionable insight for their business models and formal accountability. Both layers together would embed provenance integrity into the system: visibility for creators, traceability for regulators and transparency for the public. 

“Artificial Integrity thresholds could ensure that the value that AI companies extract from creators’ content comes with financial obligations to those sources.”

Enforcement could mix rewards and penalties. On the reward side, platforms that show where their information comes from and that help fund important public information resources could get benefits such as tax credits or lighter legal risk. On the penalty side, platforms that ignore these integrity rules could face growing fines, similar to the antitrust penalties we already see in the EU.

This is where the three dimensions come together: information provenance integrity in how sources are cited, economic integrity of information flows in how value is shared and the integrity of the shared information commons in how open resources are sustained.

Artificial Integrity for platforms that deliver AI-generated answers represents more than a set of technical fixes. By reframing AI-mediated information search not as a question of feature tweaks but as a matter of design, code and governance in AI products, it advances a necessary rebalancing toward a fairer and more sustainable distribution of value on which the web depends, now and in the future.

When AI & Human Worlds Collide

A robot is learning to make sushi in Kyoto. Not in a sushi-ya, but in a dream. It practices the subtle art of pressing nigiri into form inside its neural network, watching rice grains yield to its grip. It rotates its wrist 10,000 times in an attempt to keep the nori taut around a maki roll. Each failure teaches it something about the dynamics of the world. When its aluminum fingers finally touch rice grains, it already knows how much pressure they can bear.

This is the promise of world models. For years, artificial intelligence has been defined by its ability to process and translate information — to autocomplete, recommend and generate. But a different AI paradigm seeks to expand its capabilities further. World models are systems that simulate how environments behave. They provide spaces where AI agents can predict how the future might unfold, experiment with cause and effect, and, one day, use the logic they acquire to make decisions in our physical environments. 

Large language models currently have the attention of both the AI industry and the wider public, showing remarkable and diverse capabilities. Their multimodal variants can generate exquisite sushi recipes and describe Big Ben’s physical properties solely from a photograph. They guide agents through game environments with increasing sophistication; more recent models can even integrate vision, language and action to direct robot movements through physical space.

Their rise, however, unfolds against a fierce debate over whether these models can yield more human-like and general intelligence simply by continuing to scale them through investing in their parameters, data and compute.

While this debate is not yet settled, some believe that fundamentally new architectures are required to unlock AI’s full potential. World models present one such different approach. Rather than interacting primarily with language and media patterns, world models create environments that allow AI agents to learn through simulation and experience. These worlds enable agents to test “what happens if I do this?” by counterfactually experimenting with cause and effect to hone how they perform their actions based on their outcomes.

To understand world models, it helps to distinguish between two related concepts: AI models and AI agents. AI models are machine learning algorithms that learn statistical patterns from training data, enabling them to make predictions or generate outputs. Generative AI models are AI models capable of generating new content, which is then integrated into systems that users can interact with, from chatbots like ChatGPT to video generators like Veo. AI agents, by contrast, are systems that use such models to act autonomously in different environments. Coding agents, for example, can perform programming tasks while using digital tools. The abundance of digital data makes training such agents feasible for digital tasks, but enabling them to act in the physical world remains a harder challenge.

World models are an emerging type of such AI models that agents can use to learn how to act in an environment. They take two distinct forms. Internal world models are abstract representations that live within an AI agent’s architecture, serving as compressed mental simulations for planning. What can be called interactive world models, on the other hand, generate rich, explorable environments that users can step into and agents can train within.

The aspiration behind world models is to move from generating content to simulating dynamics. Rather than providing the steps to a recipe, they seek to simulate how rice responds to pressure, enabling agents to learn the act of pressing sushi. The ultimate goal is to develop world models that simulate aspects of the real world accurately enough for agents to learn from and ultimately act within them. Yet this ambition to represent the underlying dynamics of the world rather than the surface patterns of language or media may prove to be a far greater challenge, given the staggering complexity of reality.

Our Own World Models

Since their conceptual origins decades ago, world models have become a promising AI frontier. Many of the thinkers shaping modern AI — including Yann LeCun, Fei-Fei Li, Yoshua Bengio and Demis Hassabis — have acknowledged that this paradigm could pave new pathways to more human-like intelligence.

To understand why this approach might matter, it helps to take a closer look at how we ourselves came to know the world.

“Rather than interacting primarily with language and media patterns, world models create environments that allow AI agents to learn through simulation and experience.”

Human cognition evolved through contact with our three-dimensional environment, where spatial reasoning contributes to our ability to infer cause and effect. From infancy, we learn through our bodies. By dropping a ball or lifting a pebble, we refine our intuitive sense of gravity, helping us anticipate how other objects might behave. In stacking and toppling blocks, babies begin to grasp the rules of our world, learning by engaging with its physical logic. The causal structure of spatial reality is the fabric upon which human and animal cognition take shape.

The world model approach draws inspiration from biological learning mechanisms, and particularly from how our brains use simulation and prediction. The mammalian prefrontal cortex is central to counterfactual reasoning and goal-directed planning, enabling the brain to simulate, test and update internal representations of the world. World models attempt to reproduce aspects of this capacity synthetically. They draw on what cognitive scientists call “mental models,” abstracted internal representations of how things work, shaped by prior perception and experience.

“The mental image of the world around you which you carry in your head is a model,” pioneering computer engineer Jay Wright Forrester once wrote. We don’t carry entire cities or governments in our heads, he continued, but only selected concepts and relationships that we use to represent the real system. World models aim to explicitly provide machines with such representations.

While language models appear to develop some implicit world representations through their training, world models take an explicit spatial and temporal approach to these representations. They provide spaces where AI agents can test how environments respond to their actions before executing them in the real world. Through iterative interaction in these simulated spaces, AI agents refine their “action policies” — their internal strategies for how to act. This learning, based on simulating possible futures, may prove particularly valuable for tasks requiring long-horizon planning in complex environments. Where language models shine in recognizing the word that typically comes next, world models enable agents to better predict how an environment might change in response to their actions. Both approaches may prove essential — one to teach machines about our world, the other to let them rehearse their place within it.

This shift, from pattern recognition to causal prediction, makes world models more than just tools for better gaming and entertainment — they may be synthetic incubators shaping the intelligence that one day emerges, embodied in our physical world. When predictions become actions, errors carry physical weight. While this vision remains a relatively distant future, the choices we make about the nature of these worlds will influence the ethics of the agents that rely on them.

How Machines Construct Worlds

Despite its recent resurgence, the idea of world models is not new. In 1943, cybernetics pioneer Kenneth Craik proposed that organisms carry “small-scale models” of reality in their heads to predict and evaluate future scenarios. In the 1970s and 1980s, early AI and robotics researchers extended these mental model foundations into computational terms, using the phrase “world models” to describe a system’s representation of the environment. This early work was mostly theoretical, as researchers lacked the tools we have today.

A 2018 paper by AI researchers David Ha and Jürgen Schmidhuber — building on previous work from the 1990s — offered a compelling demonstration of what world models could achieve. The researchers showed that AI systems can autonomously learn and navigate complex environments using internal world models. They developed a system architecture that learned to play a driving video game solely from the game’s raw pixel data. Perhaps most remarkably, the AI agent could be trained entirely in its “dream world” — not literal dreams, but training runs in what researchers call a “latent space,” an abstract, compact representation of the game environment. This space serves as a compressed mental sketch of the world where the agent learns to act. 

Without world models, agents must learn directly from real experience or pre-existing data. With world models, they can generate their own practice scenarios to distill how they should act in different situations. This internal simulation acts as a predictive engine, giving the agent a form of artificial intuition — allowing for fast, reflexive decisions without the need to stop and plan. Ha and Schmidhuber likened this to how a baseball batter can instinctively predict the path of a fastball and swing, rather than having to carefully analyze every possible trajectory.
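A stripped-down analogue of that idea fits in a few lines. The sketch below is not Ha and Schmidhuber’s architecture, which paired a learned vision model with a recurrent dynamics model and an evolved controller; it is a one-dimensional toy with assumed dynamics, showing the same three-step pattern: gather real experience, fit a model of the dynamics, then improve a policy entirely inside that learned “dream.”

```python
import numpy as np

rng = np.random.default_rng(1)

# A trivial stand-in for the real environment (dynamics assumed for the toy):
# the state decays toward zero, the action nudges it, noise intrudes.
def real_step(x, a):
    return 0.9 * x + a + 0.05 * rng.standard_normal()

# 1. Gather random experience from the "real" world.
states, actions, next_states = [], [], []
x = 3.0
for _ in range(500):
    a = rng.uniform(-1, 1)
    x_next = real_step(x, a)
    states.append(x)
    actions.append(a)
    next_states.append(x_next)
    x = x_next

# 2. Fit a world model: predict the next state from (state, action).
#    Plain least squares here; real systems use deep nets and latent spaces.
features = np.column_stack([states, actions])
coef, *_ = np.linalg.lstsq(features, np.array(next_states), rcond=None)

def dream_step(x, a):
    return coef[0] * x + coef[1] * a   # the agent's learned "dream" of the world

# 3. Improve a policy purely in imagination: pick the feedback gain k
#    (action = -k * state) that best holds the dreamed state near zero.
best_k, best_cost = 0.0, float("inf")
for k in np.linspace(0.0, 2.0, 41):
    x, cost = 3.0, 0.0
    for _ in range(50):
        x = dream_step(x, -k * x)
        cost += abs(x)
    if cost < best_cost:
        best_k, best_cost = k, cost

print(f"policy learned in the dream: action = -{best_k:.2f} * state")
```

The policy never touches `real_step` after the initial data collection; all of its practice happens inside the fitted model, which is the essence of training in a “dream world.”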

This breakthrough was followed by a wave of additional progress, pushing the boundaries of what world models could represent and how far their internal simulations could stretch. Each advancement hinted at a broader shift — AI agents were beginning to learn from their own internally generated experience.

“The world model approach draws inspiration from biological learning mechanisms, and particularly from how our brains use simulation and prediction.”

Recently, another significant development in AI raised new questions about how agents might learn about the real world. Breakthroughs in video generation models led to the scaled production of videos that seemed to capture subtle real-world physics. Online, users admired tiny details in those videos: blueberries plunging into water and releasing airy bubbles, tomatoes slicing thinly under the glide of a knife. As people shared and marveled at these videos, something deeper was happening beneath the surface. To generate such videos, models reflect patterns that seem consistent with physical laws, such as fluid dynamics and gravity. This led researchers to wonder if these models were not just generating clips but beginning to simulate how the world works. In early 2024, OpenAI itself hypothesized that advances in video generation may offer a promising path toward highly capable world simulators. 

Whether or not AI models that generate video qualify as world simulators, advances in generative modeling helped trigger a pivotal shift in world models themselves. Until recently, world models lived entirely inside the system’s architecture — latent spaces only for the agent’s own use. But the breakthroughs in generative AI of recent years have made it possible to build interactive world models — worlds you can actually see and experience. These systems take text prompts (“generate 17th-century London”) or other inputs (a photo of your living room) to generate entire three-dimensional interactive worlds. While video-generating models can depict the world, interactive world models instantiate the world, allowing users or agents to interact with it and affect what happens rather than simply watching things unfold.

Major AI labs are now investing heavily in these interactive world models, with some showing signs of deployment maturity, though approaches vary. Google DeepMind’s Genie series turns text prompts into striking, diverse, interactive digital worlds that continuously evolve in real time — using internal latent representations to predict dynamics and render them into explorable environments, some of which appear real-world-like in both visual fidelity and physical dynamics. Fei-Fei Li’s World Labs recently released Marble, which takes a different approach, letting users transform various inputs into editable and downloadable environments. Runway, a company known for its video generation models, recently launched GWM-1, a world model family that includes explorable environments and robotics, where simulated scenarios can be used to train robot behavior.

Some researchers, however, are skeptical that generating visuals, or pixels, will lead anywhere useful for agent planning. Many believe that world models should predict in compressed, abstract representations without generating pixels — much as we might predict that dropping a cup will cause it to break without mentally rendering every shard of glass.

LeCun, who recently announced his departure from Meta to launch Advanced Machine Intelligence, a company focused on world models, has been critical of approaches that rely on generating pixels for prediction and planning, arguing that they are “doomed to failure.” According to his view, visually reconstructing such complex environments is “intractable” because it tries to model highly unpredictable phenomena, wasting resources on irrelevant details. While researchers debate the optimal path forward, the functional result remains that machines are beginning to learn something about world dynamics from synthetic experience. 

World models are impressive in their own right and offer various applications. In gaming, for instance, interactive world models may soon be used to help generate truly open worlds — environments that uniquely evolve with a player’s choices rather than relying on scripted paths. As someone who grew up immersed in “open world” games of past decades, I relished the thrill of their apparent freedom. Yet even these gaming worlds were always finite, their characters repeating the same lines. Interactive world models bring closer the prospect of worlds that don’t just feel alive but behave as if they are. 

Toward Physical Embodiment

Gaming, however, is merely a steppingstone. The transformative promise of world models lies in physical embodiment and reasoning — AI agents that can navigate our world, rather than just virtual ones. The concept of embodiment is central to cognitive science, which holds that our bodies and sensorimotor capacities shape our cognition. In 1945, French philosopher Maurice Merleau-Ponty observed: “the body is our general medium for having a world.” We are our body, he argued. We don’t have a body. In its AI recasting, embodiment refers to systems situated in physical or digital spaces, using some form of body and perception to interact with both users and their surroundings. 

Physically embodied AI offers endless new deployment possibilities, from wearable companions to robotics. But it runs up against a stubborn barrier — the real world is hard to learn from. The internet flooded machine learning with text, images and video, creating the digital abundance that served as the bedrock for language models and other generative AI systems.

“While video-generating models can depict the world, interactive world models instantiate the world, allowing users or agents to interact with it and affect what happens.”

Physical data, however, is different. It is scarce, expensive to capture and constrained by the fact that it must be gathered through real actions unfolding in real time. Training partially capable robots in the real world, and outside of lab settings, might lead to dangerous consequences. To be useful, physical data also needs to be diverse enough to fit the messy particulars of reality. A robot that learns to load plates into a dishwasher in one kitchen learns little about how to handle a saucepan in another. Every environment is different. Every skill must be learned in its own corner of reality, one slow interaction at a time.

World models offer a way through this conundrum. By generating rich, diverse and responsive environments, they create rehearsal space for physically embodied systems — places where robots can learn from the experiences of a thousand lifetimes in a fraction of the time, without ever touching the physical world. This promise is taking its first steps toward reality.

In just the past few years, significant applications of world models in robotics have emerged. Nvidia unveiled a world model platform that helps developers build customized world models for their physical AI setups. Meta’s world models have demonstrated concrete robotics capabilities, guiding robots to perform tasks such as grasping objects and moving them to new locations in environments they were never trained in. Google DeepMind and Runway have shown that world models can serve robotics — whether by testing robot behavior or generating training scenarios. The AI and robotics company 1X grabbed global attention when it released a demo of its humanoid home assistant tidying shelves and outlining its various capabilities, such as suggesting meals based on the contents of a fridge. Though the robot is currently teleoperated, its every interaction captures physically embodied data that feeds back into the 1X world model, enabling it to learn from real-world experience and improve its accuracy and quality.

But alongside advancements in world models, the other half of this story lies with the AI agents themselves. A 2025 Nature article showed that the Dreamer agent could collect diamonds in Minecraft without relying on human data or demonstrations; instead, it derived its strategy solely from the logic of the environment by repeatedly testing what worked there, as if feeling its way toward competence from first principles. Elsewhere, recent work from Google DeepMind hints at what a new kind of general AI agent might look like. By learning from diverse video games, its language model-based SIMA agent translates language into action in three-dimensional worlds. Tell SIMA to “climb the ladder,” and it complies, performing actions even in games it’s never seen. A new version of this agent has recently shown its ability to self-learn, even in worlds generated by the world model Genie.

In essence, two lines of progress are beginning to meet. On one side, AI agents that learn to navigate and self-improve in any three-dimensional digital environment; on the other, systems that simulate endless, realistic three-dimensional worlds or their abstracted dynamics, with which agents can interact. Together, they may provide the unprecedented capability to run virtually endless simulations in which agents can refine their abilities across variations of experience. If these systems keep advancing, the agents shaped within such synthetic worlds may eventually become capable enough to be embodied in our physical one. In this sense, world models could incubate agents to hone their basic functions before taking their first steps into reality.

As world models move from the research frontier into early production, their concrete deployment pathways remain largely uncertain. Their near-term horizon in gaming is becoming clear, while the longer horizon of broad robotics deployment still requires significant technical breakthroughs in architectures, data, physical machinery and compute. But it is increasingly plausible that an intermediate stage will emerge — world models embedded in wearable devices and ambient AI companions that use spatial intelligence to guide users through their environment. Much like the 1X humanoid assistant guiding residents through their fridge, world-model-powered AI could one day mediate how people perceive, move through and make decisions within their everyday environments.

The Collingridge Dilemma

Whether world models ultimately succeed through pixel-level generation or more abstract prediction, their underlying paradigm shift — from modeling content to modeling dynamics — raises questions that transcend any architecture. Beyond the technological promise of world models, their trajectory carries profound implications for how intelligence may take form and how humans may come to interact with it.

“Much like the 1X humanoid assistant guiding residents through their fridge, world-model-powered AI could one day mediate how people perceive, move through and make decisions within their everyday environments.”

Even if world models never yield human-level intelligence, the shift from systems that model the world through language and media patterns to systems that model it through interactive simulation could fundamentally reshape how we engage with AI and to what end. The societal implications of world modeling capabilities remain largely uncharted as attention from the humanities and social sciences lags behind the pace of computer science progress.

As a researcher in the philosophy of AI — and having spent more than a decade working in AI governance and policy roles inside frontier AI labs and technology companies — I’ve observed a familiar pattern: Clarity about the nature of emerging technologies and their societal implications tends to arrive only in retrospect, a problem known as the “Collingridge dilemma.” This dilemma reminds us that by the time a technology’s consequences become visible, it is often too entrenched to change.

We can begin to address this dilemma by bringing conceptual clarity to emerging technologies early, while their designs can still be shaped. World models present such a case. They are becoming mature enough to analyze meaningfully, yet it’s early enough in their development that such analysis could affect their trajectory. Examining their conceptual foundations now — what these systems represent, how they acquire knowledge, what failure modes they might exhibit — could help inform crucial aspects of their design.

A Digital Plato’s Cave

The robot in Los Angeles, learning to make sushi in Kyoto, exists in a peculiar state. It knows aspects of the world without ever directly experiencing them. But what is the content of the robot’s knowledge? How is it formed? Under what conditions can we trust its synthetic world view, once it begins to act in ours?

Beginning to answer these questions reveals important aspects about the nature of world models. Designed to capture the logic of the real world, they draw loose inspiration from human cognition. But they also present a deep asymmetry. Humans learn about reality from reality. World models learn primarily from representations of it — such as millions of hours of curated videos, distilled into statistical simulacra of the world. What they acquire is not experience itself, but an approximation of it — a digital Plato’s Cave, offering shadows of the world rather than the world itself.  

Merleau-Ponty’s argument that we are our body is inverted by world models. They offer AI agents knowledge of embodiment without embodiment itself. In a sense, the sushi-making robot is learning through a body it has never inhabited — and the nature of that learning brings new failure modes and risks.

Like other AI systems, world models compress these representations of reality into abstract patterns, a process fraught with loss. As semanticist Alfred Korzybski famously observed, “a map is not the territory.” World models, both those that generate rich visual environments and those that operate in latent spaces, are still abstractions. They learn statistical approximations of physics from video data, not the underlying laws themselves.

But because world models compress dynamics rather than just content, what gets lost is not just information but physical and causal intuition. A simulated environment may appear physically consistent on its face, while omitting important properties — rendering water that flows beautifully but lacks viscosity, or metal that bends without appropriate resistance.

AI systems tend to lose the rare and unusual first, often the very situations where safety matters most. A child darting into traffic, a glass shattering at the pour of boiling tea, the unexpected give of rotting wood. These extreme outliers, though rare in training data, become matters of life and safety in the real world. What may remain in the representation of the world model is an environment smothered into routine, blind to critical exceptions.

With these simplified maps, agents may learn to navigate our world. Their compass, however, is predefined — a reward function that evaluates and shapes their learning. As with other AI reinforcement learning approaches, failing to properly specify a reward evokes Goodhart’s Law: When a measure becomes a target, it ceases to be a good measure. A home cleaning agent rewarded for “taking out the trash” quickly loses its appeal if it dumps the trash in the garden, or hauls it back inside so it can be rewarded for taking it out again.
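The trash loop is easy to state in code. This is a toy sketch of the proxy failure, with a hypothetical reward function rather than any real product’s:

```python
# A toy sketch of Goodhart's Law in reinforcement-learning terms.
# The reward function below is hypothetical, invented for illustration.

def naive_reward(event: str) -> int:
    # The designer's proxy: reward every "trash taken out" event.
    return 1 if event == "take_out_trash" else 0

# What the designer wanted: one trip, then done.
intended_plan = ["take_out_trash"]

# What maximizes the proxy: an endless in-and-out loop.
hacked_plan = ["take_out_trash", "bring_trash_back"] * 10

for name, plan in [("intended", intended_plan), ("hacked", hacked_plan)]:
    total = sum(naive_reward(e) for e in plan)
    print(f"{name}: reward = {total}")

# The hacked plan earns 10x the reward while leaving the house no cleaner:
# the measure (trash-out events) has stopped being a good measure.
```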

“Because world models compress dynamics rather than just content, what gets lost is not just information but physical and causal intuition.”

While traditional simulations are encoded with physical principles, those created by world models learn patterns. In their constructed worlds, pedestrians might open umbrellas because sidewalks are wet, never realizing that rain causes both. A soufflé might rise instantly because most cooking videos they’ve learned from skip the waiting time. Through reward hacking — a well-documented problem in reinforcement learning — agents may discover and exploit quirks that only work in their simulated physics. Like speedrunners — gamers who hunt for glitches that let them walk through walls or skip levels — these agents may discover and optimize for shortcuts that fail in reality. 

These are old problems dressed in new clothes that transfer the risks of previous AI systems — brittleness, bias, hallucination — from information to action. All machine learning abstracts from data. But while language models can hallucinate facts and seem coherent, world models may be wrong about physics and still appear visually convincing. Physical embodiment further transforms the stakes. What once misled may now injure. A misunderstood physics pattern becomes a shattered glass; a misread social cue becomes an uncomfortable interaction.

While humans can consider the outputs of chatbots before acting on them, embodied actions by an AI agent may occur without any human to filter or approve such actions — like the Waymo car that struck KitKat, a beloved neighborhood cat in San Francisco — an outcome a human driver might have prevented. These issues are compounded by the complex world model and agent stack; its layered components make it hard to trace the source of any failures: Is it the agent’s policy, the world model’s physics or the interaction between them?

Many of these safety concerns manifest as technical optimization challenges similar to those the technical community has faced before, but solving them is also an ethical imperative. Robotics researchers bring years of experience navigating the so-called “sim-to-real” gap — the challenge of translating simulated learning into physical competence. But such existing disciplines may need to adapt to the nature of world models — rather than fine-tuning the dials of hard-coded physics simulations, they must now verify the integrity of systems that have taught themselves how the world works. As competition intensifies, the need for careful evaluation and robustness work is likely to increase.

Industry deployments recognize these inherent complexities, and leading labs are grounding their world models in real-world data. This enables them to calibrate their models for the environments their physically embodied systems inhabit. Companies like 1X, for example, ground world models in video data continuously collected by their robotics fleet, optimizing for the particularities of physical homes. These environment-specific approaches that still rely on real-world data will likely precede the dream of a general agent, as interactive world models are likely to initially simulate narrow environments and tasks. However, for lighter-stakes embodiments like wearables, the push for generality may arrive sooner.

Beyond these characteristics, world models have distinctive features that raise new considerations. Many of these are sociotechnical — where human design choices carry ethical weight. Unlike language models, world models reason in space and time — simulating what would happen under different actions and guiding behavior accordingly.

Through the dynamics simulated by world models, agents may infer how materials deform under stress or how projectiles behave in the wind. While weaponized robots may seem distant, augmented reality systems that guide users through dangerous actions need not wait for breakthroughs in robotics dexterity. This raises fundamental design questions about world models that carry moral weight: What types of knowledge should we imbue in agents that may be physically embodied, and how can we design world models to prevent self-learning agents from acquiring potentially dangerous knowledge?

Beyond physical reasoning lies the more speculative frontier of modeling social dynamics. Human cognition evolved at least in part as a social simulator — predicting other minds was once as vital as predicting falling objects. While world models are focused on physical dynamics, nothing in principle prevents similar approaches from capturing social dynamics. To a machine learning system, a furrowed brow or a shift in posture is simply a physical pattern that precedes a specific outcome. Were such models to simulate social interactions, they could enable agents to develop intuitions about human behavior — sensing discomfort before it is voiced, reacting to micro-expressions or adjusting tone based on feedback.

Some researchers have begun exploring adjacent territory under the label “mental world models,” suggesting that embodied AI could benefit from having a mental model of human relationships and user emotions. Such capabilities could make AI companions more responsive but also more persuasive — raising concerns about AI manipulation and questions about which social norms these systems might amplify.

“Thoughtful engagement with the world model paradigm now will shape not just how such future agents learn, but what values their actions represent and how they might interact with people.”

These implications compound at scale. Widely deploying world models shifts our focus from individual-level considerations to societal-level ones. Reliable predictive capabilities may accelerate our existing tendency to outsource decisions to machines, introducing implications for human autonomy. Useful systems embedded in wearable companions could gather unprecedented streams of spatial and behavioral data, creating significant new privacy and security considerations. The expected advancement in robotics capabilities might also impact physical labor markets. 

World models suggest a future where our engagement with the world is increasingly mediated by the synthetic logic of machines. One where the map no longer just describes our world but begins to shape it.

Building Human Worlds

These challenges are profound, but they are not inevitable. The science of world models remains in relative infancy, with a long horizon expected before it matures into wide deployment. Thoughtful engagement with the world model paradigm now will shape not just how such future agents learn, but what values their actions represent and how they might interact with people. An overly precautionary approach risks its own moral failure. Just as the printing press democratized knowledge despite enabling propaganda, and cars transformed transportation while producing new perils, world models promise benefits that may far outweigh their risks. The question isn’t whether to build them, but how to design them to best harness their benefits.

This transformative potential of world models extends far beyond the joyful escapism of gaming or the convenience of laundry-folding robots. In transportation, advances in the deployment of autonomous vehicles could improve our overall safety. In medicine, world models could enable surgical robots to rehearse countless variations of a procedure before encountering a single patient, increasing precision and enhancing access to specialized care. Perhaps most fundamentally, they may help humans avoid what roboticists call the “three Ds” — tasks that are dangerous, dirty or dull — relegating them to machines. And if world models deliver on their promise that simulating environments enables richer causal reasoning, they could help revolutionize scientific discovery, the domain many in the field consider the ultimate achievement of AI.

Realizing the promise of such world models, however, requires more than techno-optimism; it needs concrete steps to help scaffold these benefits. The embodiment safety field is already adapting crucial insights from traditional robotics simulations to its world model variants. Other useful precedents can be found in adjacent industries. The autonomous vehicles industry spent years painstakingly developing validation frameworks that verify both simulated and real-world performance. These insights can be leveraged by new industries, as world models could provide opportunities in domains where tolerance for error is narrow — surgical robotics, home assistance, industrial automation — each requiring its own careful calibration of acceptable risk. For regulators, these more mature frameworks offer a concrete starting point and an opportunity for foresight that could enable beneficial deployment.

World models themselves offer unique opportunities for safety research. Researchers like LeCun argue that world model architecture may be more controllable than language models — involving objective-driven agents whose goals can be specified with safety and ethics in mind. Beyond architecture, some world models may serve as digital proving grounds for testing robot behavior before physical deployment.

Google DeepMind recently demonstrated that its Veo video model can predict robot behavior by using its video capabilities to simulate how robots would act in real-world scenarios. The study showed that such simulations can help discover unsafe behaviors that would be dangerous to test on physical hardware, such as a robot inadvertently closing a laptop on a pair of scissors left on its keyboard. Beyond testing how robots act, world models themselves would need to be audited to ensure they align with the physical world. This presents a challenge that is as much ethical as it is technical: determining which world dynamics are worth modeling and defining what “good enough” means.

Ultimately, early design decisions will dictate the societal outcomes of world model deployment. Choosing what data world models learn from is not just a technical decision, but a socio-technical one, defining the boundaries of what agents may self-learn. The behaviors and physics we accept in gaming environments differ deeply from what we may tolerate in a physical embodiment. The time to ask whether and how we would like to pursue certain capabilities, such as social world modeling, is now.

These deployments also raise broader governance implications. Existing privacy frameworks will likely need to be updated to account for the scale and granularity of spatial and behavioral data that world model-powered systems may harvest. Policymakers, accustomed to analyzing AI through the lens of language processing, must now grapple with systems trained to represent the dynamics of reality. Given that existing AI risk frameworks do not adequately capture the risks posed by such systems, updating these also may soon be required.

The walls of this digital cave are not yet set in stone. Our task is to ensure that the synthetic realities we construct are not just training grounds for efficiency, but incubators for an intelligence that accounts for the social and ethical intricacies of our reality. The design choices we make about what dynamics to simulate and what behaviors to reward will shape the AI agents that emerge in the future. By blending technical rigor with philosophical foresight, we can ensure that when these shadows are projected back into our own world, they do not darken it but illuminate it instead.

The post When AI & Human Worlds Collide appeared first on NOEMA.

How The ‘AI Job Shock’ Will Differ From The ‘China Trade Shock’ https://www.noemamag.com/how-the-ai-job-shock-will-differ-from-the-china-trade-shock Fri, 16 Jan 2026 17:49:28 +0000 https://www.noemamag.com/how-the-ai-job-shock-will-differ-from-the-china-trade-shock The post How The ‘AI Job Shock’ Will Differ From The ‘China Trade Shock’ appeared first on NOEMA.

Among the job doomsayers of the AI revolution, David Autor is a bit of an outlier. As the MIT economist has written in Noema, the capacity of mid-level professions such as nursing, design or production management to access greater expertise and knowledge once available only to doctors or specialists will boost the “applicable” value of their labor, and thus the wages and salaries that can sustain a middle class.

Unlike rote, low-level clerical work, cognitive labor of this sort is more likely to be augmented by decision-support information afforded by AI than displaced by intelligent machines.

By contrast, “inexpert” tasks, such as those performed by retirement home orderlies, child-care providers, security guards, janitors or food service workers, will be poorly remunerated even as they remain socially valuable. Since these jobs cannot be automated or enhanced by further knowledge, those who labor in them are a “bottleneck” to improved productivity that would lead to higher wages. Since there will be a vast pool of people without skills who can take those jobs, the value of their labor will be driven down even further.

This is problematic from the perspective of economic disparity because four out of every five jobs created in the U.S. are in this service sector.

So, when looking to the future of the labor market in an AI economy, we can’t talk about “job loss vs. gains” in any general sense. The key issue is not the quantity of jobs, but the value of labor, which really means the value of human expertise and the extent to which AI can enhance it, or not.

I discussed this and other issues with Autor at a recent gathering at the Vatican’s Pontifical Academy in Rome, convened to help address Pope Leo XIV’s concern over the fate of labor in the age of AI. We spoke amid the splendor of the Vatican gardens behind St. Peter’s Basilica.

The populist movements that have risen to power across the West today, particularly in the U.S., did so largely on the coattails of the backlash against globalization. Over the post-Cold War decades of U.S.-led free-trade policies, the rise of China as a cheap-labor manufacturing power with export access to the markets of advanced economies hollowed out the industrial base across large swaths of America and Europe — and the jobs it provided.

Some worry the AI shock will be even more devastating. Autor sees the similarity and the distinctions. What makes them the same is “it’s a big change that can happen quickly,” he says. But there are three ways in which they are different.

First, “the China trade shock was very localized. It was in manufacturing-intensive communities that made labor-intensive products such as furniture, textiles, clothing, plastic dolls and assembly of low-end hardware.”

AI’s effects will be much more geographically diffuse. “We’ve already lost millions of clerical worker jobs, but no one talks about ‘clerical shock.’ There is no clerical capital of America to see it disappear.”

Second, “the China trade shock didn’t just eliminate certain types of jobs. It eliminated entire industries all at once.” AI will shift the nature of jobs and tasks and change the way people work, but it “will not put industries out of business. … It will open new things and will close others, but it will not be an existential elimination, a great extinction.”

Third, “unless you were a very big multinational, what was experienced by U.S. firms during globalization was basically a shock to competition. All of a sudden, prices fell to a lower level than you could afford to produce.”

AI will be more of a productivity change that will be positive for many businesses. “That doesn’t mean it’s good for workers necessarily, because a lot of workers could be displaced. But business won’t be like, ‘Oh God, the AI shock. We hate this.’ They’ll be, like, ‘Oh great. We can do our stuff with fewer inputs.’” In short, tech-driven productivity is the route to greater profitability.

As we have often discussed in Noema, it is precisely this dynamic where productivity growth and wealth creation are being divorced from jobs and income that is the central social challenge. Increasingly, the gains will flow to capital — those who own the robots — and decreasingly to labor. The gap will inexorably grow, even with those who can earn higher wages and salaries through work augmented by AI.

Is the idea of “universal basic capital” (UBC), in which everyone has an ownership share in the AI economy through investment of their savings, a promising response?

Autor believes that what UBC offers is a “hedge” against the displacement or demotion of labor. Most of us are “unhedged,” he says, because “human capital is all we have and we are out of luck if that becomes devalued. So at least we would have a balanced portfolio.”

If the government seeds a UBC account, such as “baby bonds,” at the outset, Autor notes, it will grow in value over time through compounded investment returns. The problem with the alternative idea of “universal basic income” is that you are “creating a continual system of transfers where you are basically saying ‘Hey, you rich people over there, you pay for the leisure of everybody else over here.’ And that is not politically viable. ‘How do they get the right to our stuff?’”

Autor compares the idea of “universal basic income” (UBI) to the “resource curse” of unstable countries with vast oil and mineral resources, where it appears that “money is just coming out of a hole in the ground.”

The related reason that UBC is important for Autor is that “the people who have a voice in democracies are those who are seen as economic contributors. If the ownership of capital is more diffuse, then everyone is a contributor,” and everyone has a greater voice, which they will use since they have a stake in the system.

The closer we get to widespread integration of AI into the broader economy, the clearer the patterns Autor describes will become. On that basis, responsible policymakers can formulate remedial responses that fit the new economic times we have entered, rather than relying on outmoded policies geared to conditions that no longer exist.

The post How The ‘AI Job Shock’ Will Differ From The ‘China Trade Shock’ appeared first on NOEMA.

The Politics Of Planetary Color https://www.noemamag.com/the-politics-of-planetary-color Thu, 15 Jan 2026 15:31:32 +0000 https://www.noemamag.com/the-politics-of-planetary-color The post The Politics Of Planetary Color appeared first on NOEMA.

When the first color photograph of Earth was captured from space in 1968, millions around the globe saw their home in a new way. Rising from darkness above the moon, Earth appeared in breathtaking oceanic blue. Unlike the black-and-white Lunar Orbiter 1 frame taken two years earlier, “Earthrise” made the planet’s fragility legible and emotionally graspable. In 1972, “Blue Marble” added new depth, revealing Earth from the Mediterranean Sea to Antarctica in vibrant swirls of blue, brown, green and white.

These familiar sunlit hues fostered a politics of relatability, inviting belonging, and with it, a sense of responsibility for the planet.

An environmental consciousness began to crystallize. As historian Robert Poole notes in “Earthrise: How Man First Saw the Earth,” the Space Age flipped from a narrative of outward conquest to one of inward rediscovery. The first Earth Day was held in 1970, and the popular metaphor “Spaceship Earth” shifted from describing a technical vessel managed by engineers to describing a living, vulnerable biosphere requiring stewardship. Planetary survival became a mass political demand.

If color once taught us to see and value our planet, it now records how we are altering it.

“Black Marble,” a global composite image of the darkened Earth at night in 2012, revealed a web of golden yellow — electric constellations of urbanization and light pollution. More recently, a Nature analysis detected climate-driven trends in color across roughly 40% of the global surface ocean, observing that low-latitude waters are shifting from deep blue toward green as surface ecosystems reorganize. NASA’s PACE (Plankton, Aerosol, Cloud, ocean Ecosystem) mission captures this complexity with hyperspectral precision, reading the ocean’s spectral fingerprint to identify exactly which plankton populations could be driving the shift.

Similarly, from Alpine glaciers to the Greenland Ice Sheet, snow can flush red when snow algae bloom, and because those blooms darken the surface and reduce reflectivity, they can amplify melt, rendering a warming cryosphere newly legible. 

Color is not just how Earth shows itself; it can be diagnostic, even a narrative of change, inviting human response through visible nuance. It is a measurement and a mirror of our agency.

The planetary becomes political through color. The hues through which Earth appears in public decide what we notice and act upon. For us to become a planetary society, the colors through which Earth senses and is sensed need to be aligned. It is time to compose a planetary palette.

Colors Make History

Color has long organized politics in the open. The French tricolor cockade turned loyalty into something you could wear in the street. The suffragette palette of purple, white and green made support for women’s right to vote instantly legible across Britain and beyond. The Pan-African colors of red, black and green and the Aboriginal black, red and yellow flag in Australia condensed claims to land and self-determination into vivid emblems.

In Thailand, rival movements quite literally became “red shirts” and “yellow shirts,” with chroma standing in for competing sovereignties. Iran’s Green Movement used a single hue to signal reformist solidarity, just as Ukraine’s Orange Revolution did earlier with orange as a banner of contested legitimacy. These anecdotes are not about taste. They show how colors have repeatedly given politics a public body, allocating attention, rallying coalitions and making claims visible at a glance.

Historian Michael Rossi’s “The Republic of Color” shows how, at the turn of the 20th century, color science and its regulation reorganized modern life: Industrial dyes, standardized color languages and new instruments did not simply tint goods. They reshaped labor, markets and perception, turning the organization of color into a form of managing attention, desire and trust. The institutionalization of standards and techniques for collective perception bestowed color with political force.

Our planetary age echoes the industrial one in that regard. Where the earlier era effectively forged a republic of color for factories and mass media, the planetary age calls for a politics of planetary color.

Choices about how Earth’s processes are rendered — through hue, lightness, contrast, naming and disclosure — organize public perception and coordination, deciding what counts as common evidence and how we act together with Earth. Rossi’s larger point applies: Color infrastructures do not merely decorate an era, they constitute it. If the planetary is to be held in common, it must be legible in color.

“Color is not just how Earth shows itself; it can be diagnostic, even a narrative of change.”

Political theory has language for this. French philosopher Jacques Rancière’s “distribution of the sensible” names how regimes allocate what is perceptible and sayable before any statute is written. Like metrology’s units, calendars’ time zones, cartography’s projections and interface defaults, planetary color is a pre-legal order: a background regime that organizes what appears actionable before any law speaks.

The politics of planetary color therefore operates where aesthetic order becomes epistemic order. It is an arrangement of seeing and sensing that quietly conditions what can be argued, trusted and coordinated as our shared world.

Planetary Colors

Some planetary colors are physical, spectral and stubborn. Neptune’s saturated blue is methane subtracting red. The aurora’s iconic green is oxygen’s 557.7-nanometer emission. These are not metaphors but signifiers for materials; naming them as such helps ratify the processes that produce them. “Neptune Blue” or “Aurora Green” could easily link colors to our cosmic existence.

Other planetary colors are (re)made by cameras, algorithms and conventions. “True color” Earth images are engineered reconstructions. NASA’s “Blue Marble 2002” was stitched from months of satellite observations into a seamless “true color” mosaic, underscoring that many “true color” Earth views are composited reconstructions, unlike Apollo 17’s 1972 “Blue Marble” photograph.

“False color” composites and infrared-to-visible mappings from the Hubble Space Telescope to the James Webb Space Telescope are deliberate translation schemes that reveal what can be seen by choosing certain palettes.

An infrared view of the Pillars of Creation peers through interstellar dust, unveiling newly formed stars that are obscured in ordinary visible light. Here, color constitutes a designed translation of data instead of a mere passive recording of optical cues. Similarly, architect Laura Kurgan argues in “Close Up at a Distance” that satellite sensing and its visual languages translate dispersed Earth processes into legible — and political — images, a reminder that how we render planetary signals is already a choice about how we understand our world.

Within these regimes of visibility, what one might call “artificial color” — the deliberate abstraction that translates non-visible wavelengths and signals into visible hues — is a crucial epistemic step. By encoding data like infrared signals or chemical compositions into color, these images create knowledge rather than just recording it. That is authorship of planetary color.

We can experience this not only in satellite images and space telescopes, but also in everyday life. Planetary colors pass through soft- and hardware, each imposing its own technical biases. The same image can look vivid on a phone and muted on an older laptop because device gamuts and color-management defaults differ. Regardless of the device, just as “Earthrise” and “Blue Marble” did for the modern environmental movement, planetary color operationalizes knowledge: It renders information actionable.

The Human Factor

Color carries ideas because it travels through perception. Three mechanisms are especially relevant. Color constancy is the brain’s habit of making an object’s color appear the same under different lighting: A blue shirt at noon still looks blue at dusk. Helpful in daily life, this can hide real differences in images unless a palette also signals illumination, which can reveal changes we would otherwise miss.

Pre-attentive salience describes the effect that some color differences jump out before we consciously decide to look for them. This is why rainbow gradients can mislead us by overemphasizing small changes, whereas scales in which equal data steps read as equal color steps, and which remain legible to color-blind viewers, support honest detection.

Affective priming describes the psychological mechanisms behind color’s ability to nudge mood and behavior. In achievement tasks, brief exposure to red can tilt people toward avoidance, which shows that color can shape judgment even when we believe we are acting autonomously.

Considered together, this affective palette of colors explains why the way we perceive the planet — be it through the hues of volcanoes and ice sheets, forests and rivers, or space weather and meteor showers — quietly changes what becomes noticeable, thinkable and actionable. 
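
To make the pre-attentive salience point concrete, here is a minimal sketch in Python (assuming numpy and matplotlib are installed; the script is an illustration of the principle, not anything from the essay) that renders one smooth data ramp under a rainbow colormap and under a perceptually uniform, color-blind-friendly alternative:

import numpy as np
import matplotlib.pyplot as plt

# One smooth ramp: every step in the data is exactly the same size.
data = np.linspace(0, 1, 256).reshape(1, -1)

# "jet" is a classic rainbow map; "viridis" was designed to be perceptually
# uniform and to remain legible to most color-blind viewers.
fig, axes = plt.subplots(2, 1, figsize=(6, 2))
for ax, cmap in zip(axes, ["jet", "viridis"]):
    ax.imshow(data, aspect="auto", cmap=cmap)
    ax.set_title(cmap, fontsize=9)
    ax.set_axis_off()
plt.tight_layout()
plt.show()

Under the rainbow map, equal data steps produce visibly unequal color steps (the cyan-to-yellow band seems to jump), while the uniform map keeps equal steps looking equal; that difference is exactly what separates honest detection from manufactured salience.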

If color is part of a yet-to-emerge planetary literacy, it must be multilingual, as the perception of color is not merely neurophysiological, but deeply influenced by culture. The World Color Survey extended linguists Brent Berlin and Paul Kay’s classic thesis that languages name colors in a predictable, universal sequence (the so-called “basic color terms”), revealing both recurrent patterns and striking partitions, such as “grue” categories that merge green and blue.

These partitions travel with power: Art historian John Gage’s archaeology of Western color and artist David Batchelor’s account of “chromophobia” show how empires, religions and modernist canons scripted the meanings of, and the values attributed to, different hues.

“For us to become a planetary society, the colors through which Earth senses and is sensed need to be aligned.”

A planetary palette is therefore less a single key than an interoperable set of keys: Process-based names, such as a “Saharan Dust Ochre,” can meet local lexicons so colors carry physics and culture at once.

The Earth Factor

Not only do we see Earth through color, Earth, in a real way, senses through color. Sunlight arrives as a spectrum, and the planet sorts it: Oceans swallow reds and return deep blues; clouds and ice throw broad light back to space; dark soot on snow shifts whiteness to gray and, at the same time, influences the planet’s temperature. In the air, color steers chemistry: Aerosol-laden skies redden, changing how quickly sunlight breaks apart molecules and how much energy the lower atmosphere keeps.

Living organisms are also optical instruments. Leaves are tuned to red and blue, using chlorophyll to absorb and harvest daylight; plant phytochromes register the color of light at dusk to tell seasons apart; phytoplankton ride the green-blue gradient to time their blooms; some marine microbes even run retinal-like photochemistry that taps the green bands of the sea.

Corals fluoresce, using color as both a shield and a stress signal, while the “vegetation red edge” — the sharp spectral jump between plants absorbing red light and reflecting near-infrared — is both a planetary fingerprint and a byproduct of how plants detect and manage light.

Color is not only an appearance but an interface: a surface upon which energy becomes information and the planet’s materials, organisms and spheres register, store and respond. Designing the planetary color palette, then, is not just designing what we see, it is learning to handle color in the wavelengths Earth already uses to sense its way forward.

To do so, we can refer to Abelardo Gil-Fournier and Jussi Parikka’s “Living Surfaces,” in which they unveil how Earth is made of “living surfaces”: interfaces where plant and photographic surfaces fold into one another, and where light functions at once as metabolized signal, registered through photosynthesis, and as measurable inscription, captured and processed into images. In this account, the two surfaces converge through a cultural technique that builds surfaces from measured light.

Approaching planetary color means working within these medianatures. It requires engaging cultural techniques such as calibration, mapping and ground-truthing that actively format Earth’s surface into data. These are the tools that translate raw biological life into the images we see.

In the planetary age, this means that color as experienced by humans is only one narrow slice of a wider spectral life. As Ed Yong reveals in “An Immense World,” the more-than-human world can parse wavelengths that we cannot, ranging from ultraviolet and infrared to the polarization of light.

The pre-legal order constituted by planetary palettes — colormaps, legends, thresholds, names and so forth — must be framed as a situated human translation: explicit about its vantage, inclusive of color-vision diversity and capable of turning non-visible spectra into shared, contestable public signals.

Color As Infrastructure

Artworks such as James Turrell’s immersive Ganzfeld installations, which dissolve depth perception in edgeless fields of pure colored light, and Olafur Eliasson’s “The Weather Project,” which suspended a giant, mist-shrouded artificial sun inside the Tate Modern art gallery to gather crowds in a shared amber glow, demonstrate how color fields can retune attention and assemble a public.

Hélio Oiticica’s “Parangolés,” wearable capes of saturated color first activated with the Mangueira samba community in Rio, turned hues into a collective act in the street, where color was not only seen but engaged with, danced with and debated as a public form. Color here is not a matter of mere aesthetics: These are political arguments in color.

Angela Snæfellsjökuls Rawlings stages a deliberative assembly as a participatory performance in the artistic-activist project “Motion to Change Colour Names to Reflect Planetary Boundary Tipping Points.” By framing the renaming of colors in response to climate crises as a socio-legal innovation, Rawlings treats the palette not merely as a visual code, but as a parliamentary act.

In a similar vein, entrepreneur Luke Iseman and designer Andrew Song have tested sulfur-dioxide balloon releases with their startup Make Sunsets, a geoengineering gambit that asks: If aerosols cool the planet, how red would (or should) our sunsets become? These discussions showcase the widespread awareness of the fact that any large-scale change in the planet’s color — be it our skies, oceans or land cover — could deeply affect humans’ relationship to their planet.

“Colors have repeatedly given politics a public body, allocating attention, rallying coalitions and making claims visible at a glance.”

Most often, color slips into planetary politics quietly, as the mood of a map, the warning of a dashboard, the tint of a season, the hue of a banner. Large parts of everyday coordination already turn on this quiet code.

In Europe, the purchase of a new appliance entails reading a green-to-red efficiency bar. In France, the vigilance weather map organizes municipal and household responses to dangerous weather events from heatwaves to floods through a four-color logic. And Mexico City’s Hoy No Circula program turns color into choreography at urban scale: Cars carry colored hologram stickers linked to plate numbers — yellow, pink, red, green, blue — which determine no-drive days and restrictions during pollution episodes.

Do these color schemes help societies think of and relate to the planetary?

In many countries, air pollution is communicated through a color-coded air quality index (AQI): In the U.S., the AQI runs from green (“good”) through red (“unhealthy”) to maroon (“hazardous”). Across Europe, comparable indices pair color bands with explicit health advice for the general public and sensitive groups on when to modify outdoor activity.
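
As a minimal sketch of how such an index turns a measurement into a public color (the band boundaries below follow the published U.S. EPA scale, but treat the exact cutoffs as an assumption to verify against official sources before reuse):

# U.S. EPA AQI color bands as (upper bound, color, label).
AQI_BANDS = [
    (50, "green", "Good"),
    (100, "yellow", "Moderate"),
    (150, "orange", "Unhealthy for Sensitive Groups"),
    (200, "red", "Unhealthy"),
    (300, "purple", "Very Unhealthy"),
    (500, "maroon", "Hazardous"),
]

def aqi_color(aqi: float) -> tuple[str, str]:
    """Map an AQI value to its (color, label) band."""
    for upper, color, label in AQI_BANDS:
        if aqi <= upper:
            return color, label
    return "maroon", "Hazardous"  # readings beyond the top of the scale

print(aqi_color(42))   # ('green', 'Good')
print(aqi_color(180))  # ('red', 'Unhealthy')

A dozen lines like these sit behind every dashboard tile; the political work lies in where the thresholds fall and in what each color tells the public to do.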

However, as architect Nerea Calvillo argues in “Aeropolis,” air and air pollution are not a homogeneous “outside.” They are co-produced by bodies and atmospheres, as well as by sensors, indices, visualizations, infrastructures and the regulatory and economic logics that often perpetuate exclusion and inequity.

That means that color-coded atmospheric representations are not neutral readouts but part of the apparatus through which uneven exposures become publicly legible and actionable: useful for collective response, yet always at risk of flattening differences among pollutants, places and vulnerabilities.

In each case, color is not decoration around the facts. It is part of how the facts enter public life. Just as the way we color-code the planet influences what we know about it, this is an epistemic and political practice. A poorly designed thermal map might hide extremes, whereas a well-designed one can reveal patterns at first glance.

Planetary Palette

Right now, what passes for a planetary palette is mostly an accident of defaults: device settings, stock colormaps, ad-hoc choices. Making the implicit explicit means surfacing that palette and recomposing it with Earth. The intentional making of such a palette calls for at least four moves.

First, open a conversation and reframe. The palette might be treated as a public invitation — not décor, but a shared claim tested with Earth. Rather than green branding and device defaults, Earth’s own signals would meet human ways of seeing: chlorophyll greens, auroral oxygen’s green, aerosol-red sunsets. In this register, color would work as a relay. Measurements would become proposed hues, scales would aim to make equal changes look like equal changes for aging, color-blind and standard eyes alike and color names would carry causes.

The palette would also point to possibility, not only alarms: cool corridors of “Canopy Jade” and “Breeze Sapphire” for walking and schooling; “Nocturne Blue” nights that would restore a shared sky; “Pulse Cyan” river rises that would coordinate fisheries, ferries and floodplain planting. The aim would be co-creation: open, revisable and applicable to how the planet already speaks in color.

Second, convene to formulate principles and compose first prototypes. A planetary color convention could seat Earth-observation scientists, artists, designers, accessibility experts, linguists, anthropologists, educators, journalists and policymakers, so palettes are co-sensed, legible and usable where decisions happen.

A few prototypes could focus on specific processes, such as Breeze & Shade (urban cool corridors from canopy transpiration and wind pathways) and Night-Sky Commons (dark-sky windows from cloud aerosol and light-pollution data), developed under agreed principles such as:

  • Start from the planet, not moods: Tie hues to earthly processes.
  • Make it beautiful: Compose for dignity and delight.
  • Design for adaptation: Establish a shared backbone with room for local adjustments.
  • Make it accessible and fair: Use color-vision inclusivity and strong contrast.
  • Be transparent: Indicate what was sensed and why each hue was selected, and visually signal data uncertainty.
  • Build for learning and evolution: Test with real people and devices, allowing new uses and meanings to develop over time.

Third, give this work an institutional home. Rather than a single bureaucratic body, this could take the form of a distributed observatory run by a consortium of science agencies, design labs and museums. Here, satellite and field streams would be translated into images accompanied by concise color briefs in the form of accessible guides explaining the data source and usage rules for each hue.

“Most often, color slips into planetary politics quietly, as the mood of a map, the warning of a dashboard, the tint of a season, the hue of a banner.”

Simultaneously, the observatory would run design and legibility trials, co-creating and testing new maps with diverse communities to ensure they are understood and welcomed before release. A living lexicon would record process-bearing names, and palette hearings would be held when colors might steer broader public action. Crucially, an ethics log and version history would track why visual choices were made, ensuring that changes in the planet’s appearance are traceable decisions rather than hidden defaults.

In partnership with city agencies, researchers, artists and frontline communities, the observatory would commission experimental pilots, such as public light installations or interactive urban dashboards, and publish open-source resources, like accessible colormap plugins for mapping software.

Fourth, evaluate and refine the palette based on evidence. The prototypes should be treated as civic infrastructure and assessed across a set of dimensions. Do they read quickly and correctly? Do they steer inspiration to reshape human-planet relations? Do they prompt the right actions, and are they accessible regardless of visual ability or device? Small pilots and before-and-after rollouts would inform a public log of what changed when a color band flipped, and a regular review cadence would adjust the scheme. The goal is a shortening loop between planetary signal, legible appearance and coordinated response.

Rossi shows how the industrial age wired color into institutions so thoroughly that perception itself became a site of politics. The planetary age inherits this lesson at a different scale: Ocean color trends now register ecological reorganization; hyperspectral satellites are built to track it; cross-cultural surveys reveal that our vocabularies for color are learned, mobile and contested; and contemporary art keeps demonstrating that color can gather strangers into a public around a shared field of sensation.

More than a single palette, the planetary colors would be a set of tested, explained and teachable mappings to help people sense earthly processes together. If the 19th-century “republic of color” standardized perception for an industrial order, the 21st-century equivalent might standardize disagreement with shared references — enough coherence of planetary colors to argue about the same world.

This is planetary politics in practice: a palette co-authored by Earth’s own signals and by human institutions that translate spectra into public reasons. If colors are integral to planetary politics, then designing the palette is not a cosmetic but a constitutional practice.

The post The Politics Of Planetary Color appeared first on NOEMA.

The Mythology Of Conscious AI https://www.noemamag.com/the-mythology-of-conscious-ai Wed, 14 Jan 2026 17:23:54 +0000 https://www.noemamag.com/the-mythology-of-conscious-ai The post The Mythology Of Conscious AI appeared first on NOEMA.

For centuries, people have fantasized about playing God by creating artificial versions of human beings. This is a dream reinvented with every breaking wave of new technology. With genetic engineering came the prospect of human cloning, and with robotics that of humanlike androids.

The rise of artificial intelligence (AI) is another breaking wave — potentially a tsunami. The AI systems we have around us are arguably already intelligent, at least in some ways. They will surely get smarter still. But are they, or could they ever be, conscious? And why would that matter?

The cultural history of synthetic consciousness is both long and mostly unhappy. From Yossele the Golem, to Mary Shelley’s “Frankenstein,” HAL 9000 in “2001: A Space Odyssey,” Ava in “Ex Machina,” and Klara in “Klara and the Sun,” the dream of creating artificial bodies and synthetic minds that both think and feel rarely ends well — at least, not for the humans involved. One thing we learn from these stories: If artificial intelligence is on a path toward real consciousness, or even toward systems that persuasively seem to be conscious, there’s plenty at stake — and not just disruption in job markets.

Some people think conscious AI is already here. In a 2022 interview with The Washington Post, Google engineer Blake Lemoine made a startling claim about the AI system he was working on, a chatbot called LaMDA. He claimed it was conscious, that it had feelings, and was, in an important sense, like a real person. Despite a flurry of media coverage, Lemoine wasn’t taken all that seriously. Google dismissed him for violating its confidentiality policies, and the AI bandwagon rolled on.

But the question he raised has not gone away. Firing someone for breaching confidentiality is not the same as firing them for being wrong. As AI technologies continue to improve, questions about machine consciousness are increasingly being raised. David Chalmers, one of the foremost thinkers in this area, has suggested that conscious machines may be possible in the not-too-distant future. Geoffrey Hinton, a true AI pioneer and recent Nobel Prize winner, thinks they exist already. In late 2024, a group of prominent researchers wrote a widely publicized article about the need to take the welfare of AI systems seriously. For many leading experts in AI and neuroscience, the emergence of machine consciousness is a question of when, not if.

How we think about the prospects for conscious AI matters. It matters for the AI systems themselves, since — if they are conscious, whether now or in the future — with consciousness comes moral status, the potential for suffering and, perhaps, rights.

It matters for us too. What we collectively think about consciousness in AI already carries enormous importance, regardless of the reality. If we feel that our AI companions really feel things, our psychological vulnerabilities can be exploited, our ethical priorities distorted, and our minds brutalized — treating conscious-seeming machines as if they lack feelings is a psychologically unhealthy place to be. And if we do endow our AI creations with rights, we may not be able to turn them off, even if they act against our interests.

Perhaps most of all, the way we think about conscious AI matters for how we understand our own human nature and the nature of the conscious experiences that make our lives worth living. If we confuse ourselves too readily with our machine creations, we not only overestimate them, we also underestimate ourselves.

The Temptations Of Conscious AI

Why might we even think that AI could be conscious? After all, computers are very different from biological organisms, and the only things most people currently agree are conscious are made of meat, not metal.

The first reason lies within our own psychological infrastructure. As humans, we know we are conscious and like to think we are intelligent, so we find it natural to assume the two go together. But just because they go together in us doesn’t mean that they go together in general.

Intelligence and consciousness are different things. Intelligence is mainly about doing: solving a crossword puzzle, assembling some furniture, navigating a tricky family situation, walking to the shop — all involve intelligent behavior of some kind. A useful general definition of intelligence is the ability to achieve complex goals by flexible means. There are many other definitions out there, but they all emphasize the functional capacities of a system: the ability to transform inputs into outputs, to get things done.

“If we confuse ourselves too readily with our machine creations, we not only overestimate them, we also underestimate ourselves.”

An artificially intelligent system is measured by its ability to perform intelligent behavior of some kind, though not necessarily in a humanlike form. The concept of artificial general intelligence (AGI), by contrast, explicitly references human intelligence. It is supposed to match or exceed the cognitive competencies of human beings. (There’s also artificial superintelligence, ASI, which happens when AI bootstraps itself beyond our comprehension and control. ASI tends to crop up in the more existentially fraught scenarios for our possible futures.)

Consciousness, in contrast to intelligence, is mostly about being. Half a century ago, the philosopher Thomas Nagel famously offered that “an organism has conscious mental states if and only if there is something it is like to be that organism.” Consciousness is the difference between normal wakefulness and the oblivion of deep general anesthesia. It is the experiential aspect of brain function and especially of perception: the colors, shapes, tastes, emotions, thoughts and more, that give our lives texture and meaning. The blueness of the sky on a clear day. The bitter tang and headrush of your first coffee.

AI systems can reasonably lay claim to intelligence in some form, since they can certainly do things, but it is harder to say whether there is anything-it-is-like-to-be ChatGPT.

The propensity to bundle intelligence and consciousness together can be traced to three baked-in psychological biases.

The first is anthropocentrism. This is the tendency to see things through the lens of being human: to take the human example as definitional, rather than as one example of how different properties might come together.

The second is human exceptionalism: our unfortunate habit of putting the human species at the top of every pile, and sometimes in a different pile altogether (perhaps closer to angels and Gods than to other animals, as in the medieval Scala naturae). And the third is anthropomorphism. This is the tendency to project humanlike qualities onto nonhuman things based on what may be only superficial similarities.

Taken together, these biases make it hardly surprising that when things exhibit abilities we think of as distinctively human, such as intelligence, we naturally imbue them with other qualities we feel are characteristically or even distinctively human: understanding, mindedness and consciousness, too.

One aspect of intelligent behavior that’s turned out to be particularly effective at making some people think that AI could be conscious is language. This is likely because language is a cornerstone of human exceptionalism. Large Language Models (LLMs) like OpenAI’s ChatGPT or Anthropic’s Claude have been the focus of most of the excitement about artificial consciousness. Nobody, as far as I know, has claimed that DeepMind’s AlphaFold is conscious, even though, under the hood, it is rather similar to an LLM. All these systems run on silicon and involve artificial neural networks and other fancy algorithmic innovations such as transformers. AlphaFold, which predicts protein structure rather than words, just doesn’t pull our psychological strings in the same way.

The language that we ourselves use matters too. Consider how normal it has become to say that LLMs “hallucinate” when they spew falsehoods. Hallucinations in human beings are mainly conscious experiences that have lost their grip on reality (uncontrolled perceptions, one might say). We hallucinate when we hear voices that aren’t there or see a dead relative standing at the foot of the bed. When we say that AI systems “hallucinate,” we implicitly confer on them a capacity for experience. If we must use a human analogy, it would be far better to say that they “confabulate.” In humans, confabulation involves making things up without realizing it. It is primarily about doing, rather than experiencing.

When we identify conscious experience with seemingly human qualities like intelligence and language, we become more likely to see consciousness where it doesn’t exist, and to miss seeing it where it does. We certainly should not just assume that consciousness will come along for the ride as AI gets smarter, and if you hear someone saying that real artificial consciousness will magically emerge at the arbitrary threshold of AGI, that’s a sure sign of human exceptionalism at work.

There are other biases in play, too. There’s the powerful idea that everything in AI is changing exponentially. Whether it’s raw compute as indexed by Moore’s Law, or the new capabilities available with each new iteration of the big tech foundation models, things surely are changing quickly. Exponential growth has the psychologically destabilizing property that what’s ahead seems impossibly steep, and what’s behind seems irrelevantly flat. Crucially, things seem this way wherever you are on the curve — that’s what makes it exponential. Because of this, it’s tempting to feel like we are always on the cusp of a major transition, and what could be more major than the creation of real artificial consciousness? But on an exponential curve, every point feels like an inflection point.
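
That self-similarity is easy to check numerically; in this minimal, purely illustrative sketch, the view one step ahead and one step behind is identical from every vantage point on the curve:

import numpy as np

# An exponential curve sampled at twenty integer steps.
curve = np.exp(np.arange(20))

# From any point t, the step ahead is steeper and the step behind is flatter
# by exactly the same factor: the ratio is always e, about 2.718.
for t in (3, 10, 17):
    print(curve[t + 1] / curve[t], curve[t] / curve[t - 1])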

“When we identify conscious experience with seemingly human qualities like intelligence and language, we become more likely to see consciousness where it doesn’t exist, and to miss seeing it where it does.”

Finally, there’s the temptation of the techno-rapture. Early in the movie “Ex Machina,” the programmer Caleb says to the inventor Nathan: “If you’ve created a conscious machine — it’s not the history of man, that’s the history of Gods.” If we feel we’re at a techno-historical transition, and we happen to be one of its architects, then the Promethean lure must be hard to resist: the feeling of bringing to humankind that which was once the province of the divine. And with this singularity comes the signature rapture offering of immortality: the promise of escaping our inconveniently decaying biological bodies and living (or at least being) forever, floating off to eternity in a silicon-enabled cloud.

Perhaps this is one reason why pronouncements of imminent machine consciousness seem more common within the technorati than outside of it. (More cynically: fueling the idea that there’s something semi-magical about AI may help share prices stay aloft and justify the sky-high salaries and levels of investment now seen in Silicon Valley. Did someone say “bubble”?)

In his book “More Everything Forever,” Adam Becker describes the tendency to project consciousness into AI as a form of pareidolia — the phenomenon of seeing patterns in things, like a face in a piece of toast or Mother Teresa in a cinnamon bun (Figure 1). This is an apt description. But helping you recognize the power of our pareidolia-inducing psychological biases is just the first step in challenging the mythology of conscious AI. To address the question of whether real artificial consciousness is even possible, we need to dig deeper.

Figure 1: Mother Teresa in a cinnamon bun. (Public Domain)

Consciousness & Computation

The very idea of conscious AI rests on the assumption that consciousness is a matter of computation. More specifically, that implementing the right kind of computation, or information processing, is sufficient for consciousness to arise. This assumption, which philosophers call computational functionalism, is so deeply ingrained that it can be difficult to recognize it as an assumption at all. But that is what it is. And if it’s wrong, as I think it may be, then real artificial consciousness is fully off the table, at least for the kinds of AI we’re familiar with.

Challenging computational functionalism means diving into some deep waters about what computation means and what it means to say that a physical system, like a computer or a brain, computes at all. I’ll summarize four related arguments that undermine the idea that computation, at least of the sort implemented in standard digital computers, is sufficient for consciousness.

1: Brains Are Not Computers

First, and most important, brains are not computers. The metaphor of the brain as a carbon-based computer has been hugely influential and has immediate appeal: mind as software, brain as hardware. It has also been extremely productive, leading to many insights into brain function and to the vast majority of today’s AI. To understand the power and influence of this metaphor, and to grasp its limitations, we need to revisit some pioneers of computer science and neurobiology.

Alan Turing towers above everyone else in this story. Back in the 1950s, he seeded the idea that machines might be intelligent, and more than a decade earlier, he formulated a definition of computation that has remained fundamental to our technologies, and to most people’s understanding of what computers are, ever since.

Turing’s definition of computation is extremely powerful and highly (though, as we’ll see, not completely) general. It is based on the abstract concept of a Turing machine: a simple device that reads and writes symbols on an infinite tape according to a set of rules. Turing machines formalize the idea of an algorithm: a mapping, via a sequence of steps, from an input (a string of symbols) to an output (another such string); a mathematical recipe, if you like. Turing’s critical contribution was to define what became known as a universal Turing machine: another abstract device, but this time capable of simulating any specific Turing machine — any algorithm — by taking the description of the target machine as part of its input. This general-purpose capability is one reason why Turing computation is so powerful and so prevalent. The laptop computer I’m writing with, as well as the machines in the server farms running whatever latest AI model, are all physical, concrete examples of (or approximations to) universal Turing machines, bounded by physical limitations such as time and memory.
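
The tape-and-rules idea is small enough to sketch in code. Here is a minimal, illustrative Turing machine in Python; the rule table and tape encoding are my own assumptions, chosen to show the read-write-move loop rather than anything from Turing’s papers. The three rules increment a binary number by one:

def run_turing_machine(tape, rules, state="start", pos=0):
    """rules maps (state, symbol) to (write, move, next_state); stops on 'halt'."""
    cells = dict(enumerate(tape))
    while state != "halt":
        symbol = cells.get(pos, "_")  # "_" is the blank symbol
        write, move, state = rules[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Binary increment; the head starts on the rightmost bit.
rules = {
    ("start", "1"): ("0", "L", "start"),  # carry: flip 1 to 0, keep moving left
    ("start", "0"): ("1", "R", "halt"),   # absorb the carry and stop
    ("start", "_"): ("1", "R", "halt"),   # ran off the left edge: new leading 1
}
print(run_turing_machine("011", rules, pos=2))  # prints "100"

Turing’s universality move is then to feed rule tables like this one as input to a second, fixed machine that simulates them.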

“The very idea of conscious AI rests on the assumption that consciousness is a matter of computation.”

Another major advantage of this framework, from a practical engineering point of view, is the clean separation it licenses between abstract computation (software) and physical implementation (hardware). An algorithm (in the sense described above) should do the same thing, no matter what computer it is running on. Turing computation is, in principle, substrate independent: it does not depend on any particular material basis. In practice, it’s better described as substrate flexible, since you can’t make a viable computer out of any arbitrary material — cheese, for instance, isn’t up to the job. This substrate-flexibility makes Turing computation extremely useful in the real world, which is why computers exist in our phones rather than merely in our minds.

At around the same time that Turing was making his mark, the mathematician Walter Pitts and neurophysiologist Warren McCulloch showed, in a landmark paper, that networks of highly simplified abstract neurons can perform logical operations (Figure 2). Later work, by the logician Stephen Kleene among others, demonstrated that artificial neural networks like these, when provided with a tape-like memory (as in the Turing machine), were “Turing complete” — that they could, in principle, implement any Turing machine, any algorithm.

Figure 2: A modern version of a McCulloch-Pitts neuron. Input signals X1-X4 are multiplied by weights w, summed up together with a bias (another input) and then passed through an activation function, usually a sigmoid (an S-shaped curve), to give an output Y. This version is similar to the artificial neurons used in contemporary AI. In the original version, the output was either 1 (if the summed, weighted inputs exceeded a fixed threshold) or 0 (if they didn’t). The modifications were introduced to make artificial neural networks easier to train. (Courtesy of Anil Seth)
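
Translated into code, the caption’s two neurons take only a few lines each; this is a minimal sketch assuming numpy, with arbitrary illustrative weights and inputs:

import numpy as np

def modern_neuron(x, w, bias):
    """Weighted sum of inputs plus a bias, passed through a sigmoid activation."""
    z = np.dot(w, x) + bias
    return 1.0 / (1.0 + np.exp(-z))

def mcculloch_pitts_neuron(x, w, threshold):
    """Original 1943 version: output 1 only if the weighted sum reaches a threshold."""
    return 1 if np.dot(w, x) >= threshold else 0

x = np.array([1.0, 0.0, 1.0, 1.0])   # inputs X1-X4
w = np.array([0.5, -0.2, 0.8, 0.1])  # weights
print(modern_neuron(x, w, bias=-0.5))     # ~0.711, a smooth value in (0, 1)
print(mcculloch_pitts_neuron(x, w, 1.0))  # 1: the weighted sum of 1.4 clears the threshold

The smooth sigmoid is what makes the modern version trainable by gradient descent, which is the easier-to-train modification the caption mentions.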

Put these ideas together, and we have a mathematical marriage of convenience and influence, and the kind of beauty that accompanies simplicity. On the one hand, we can ignore the messy neurobiological reality of real brains and treat them as simplified networks of abstract neurons, each of which just sums up its inputs and produces an output. On the other hand, when we do this, we get everything that Turing computation has to offer — which is a lot.

The fruits of this marriage are most evident in its children: the artificial neural networks powering today’s AI. These are direct descendants of McCulloch, Pitts and Kleene, and they also implement algorithms in the substrate-flexible Turing sense. It is hardly surprising that the seductive impressiveness of the current wave of AI reinforces the idea that brains are nothing more than carbon-based versions of neural network algorithms.

But here’s where the trouble starts. Inside a brain, there’s no sharp separation between “mindware” and “wetware” as there is between software and hardware in a computer. The more you delve into the intricacies of the biological brain, the more you realize how rich and dynamic it is, compared to the dead sand of silicon.

Brain activity patterns evolve across multiple scales of space and time, ranging from large-scale cortical territories down to the fine-grained details of neurotransmitters and neural circuits, all deeply interwoven with a molecular storm of metabolic activity. Even a single neuron is a spectacularly complicated biological machine, busy maintaining its own integrity and regenerating the conditions and material basis for its own continued existence. (This process is called autopoiesis, from the Greek for “self-production.” Autopoiesis is arguably a defining and distinctive characteristic of living systems.)

Unlike computers, even computers running neural network algorithms, brains are the kinds of things for which it is difficult, and likely impossible, to separate what they do from what they are.

Nor is there any good reason to expect such a clean separation. The sharp division between software and hardware in modern computers is imposed by human design, following Turing’s principles. Biological evolution operates under different constraints and with different goals. From the perspective of evolution, there’s no obvious selection pressure for the kind of full separation that would allow the perfect interoperability between different brains as we enjoy between different computers. In fact, the opposite is likely true: Maintaining a sharp software/hardware division is energetically expensive, as is all too apparent these days in the vast energy budgets of modern server farms.

“The more you delve into the intricacies of the biological brain, the more you realize how rich and dynamic it is, compared to the dead sand of silicon.”

This matters because the idea of the brain as a meat-based (universal) Turing machine rests precisely on this sharp separation of scales, on the substrate independence that motivated Turing’s definition in the first place. If you cannot separate what brains do from what they are, the mathematical marriage of convenience starts to fall apart, and there is less reason to think of biological wetware as there simply to implement algorithmic mindware. Evidence that the materiality of the brain matters for its function is evidence against the idea that digital computation is all that counts, which in turn is evidence against computational functionalism.

Another consequence of the deep multiscale integration of real brains — a property that philosophers sometimes call “generative entrenchment” — is that you cannot assume it is possible to replace a single biological neuron with a silicon equivalent, while leaving its function, its input-output behavior, perfectly preserved.

For example, the neuroscientists Chaitanya Chintaluri and Tim Vogels found that some neurons fire spikes of activity apparently to clear waste products created by metabolism. Coming up with a perfect silicon replacement for these neurons would require inventing a whole new silicon-based metabolism, too, which just isn’t the kind of thing silicon is suitable for. The only way to seamlessly replace a biological neuron is with another biological neuron — and ideally, the same one.

This reveals the weakness of the popular “neural replacement” thought experiment, most commonly associated with Chalmers, which invites us to imagine progressively replacing brain parts with silicon equivalents that function in exactly the same way as their biological counterparts. The supposed conclusion is that properties like cognition and consciousness must be substrate independent (or at least silicon-substrate-flexible). This thought experiment has become a prominent trope in discussions of artificial consciousness, usually invoked to support its possibility. Hinton recently appealed to it in just this way, in an interview where he claimed that conscious AI was already with us. But the argument fails at its first hurdle, given the impossibility of replacing any part of the brain with a perfect silicon equivalent.

There is one more consequence of a deeply scale-integrated brain that is worth mentioning. Digital computers and brains differ fundamentally in how they relate to time. In Turing-world, only sequence matters: A to B, 0 to 1. There could be a microsecond or a million years between any state transition, and it would still be the same algorithm, the same computation.

By contrast, for brains and for biological systems in general, time is physical, continuous and inescapable. Living systems must continuously resist the decay and disorder that lies along the trajectory to entropic sameness mandated by the inviolable second law of thermodynamics. This means that neurobiological activity is anchored in continuous time in ways that algorithms, by design, are not. (This is another reason why digital computation is so energetically expensive. Computation exists out of time, but computers do not. Making sure that 1s stay as 1s and 0s stay as 0s takes a lot of energy, because not even silicon can escape the tendrils of entropy.)

What’s more, many researchers — especially those in the phenomenological tradition — have long emphasized that conscious experience itself is richly dynamic and inherently temporal. It does not stutter from one state to another; it flows. Abstracting the brain into the arid sequence space of algorithms does justice neither to our biology nor to the phenomenology of the stream of consciousness.

Metaphors are, in the end, just metaphors, and — as the philosopher Alfred North Whitehead pointed out long ago — it’s always dangerous to confuse a metaphor with the thing itself. Looking at the brain through “Turing glasses” underestimates its biological richness and overestimates the substrate flexibility of what it does. When we see the brain for what it really is, the notion that all its multiscale biological activity is simply implementation infrastructure for some abstract algorithmic acrobatics seems rather naïve. The brain is not a Turing machine made of meat.

“Abstracting the brain into the arid sequence space of algorithms does justice neither to our biology nor to the phenomenology of the stream of consciousness.”

2: Other Games In Town

In the previous section, I noted that Turing computation is powerful but limited. Turing computations — algorithms — map one finite range of discrete numbers (more generally, a string of symbols) onto another, with only the sequence mattering. Turing algorithms are powerful, but there are many kinds of dynamics, many other kinds of functions, that go beyond this kind of computation. Turing himself identified various non-computable functions, such as the famous “halting problem,” which is the problem of determining, in general, whether an algorithm, given some specific input, will ever terminate. What’s more, any function that is continuous (infinitely divisible) or stochastic (involving inherent randomness), strictly speaking, lies beyond Turing’s remit. (Turing computations can approximate or simulate these properties to varying extents, but that’s different from the claim that such functions are Turing computations. I’ll return to this distinction later.)
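
Turing’s diagonal argument behind the halting problem can itself be sketched in a few lines; the function names here are illustrative stand-ins, and the point is precisely that the first one can never be filled in:

def halts(program, arg):
    """Suppose, for contradiction, this decides whether program(arg) halts."""
    raise NotImplementedError("no such total decider can exist")

def trouble(program):
    # Do the opposite of whatever halts() predicts about program(program).
    if halts(program, program):
        while True:  # loop forever exactly when a halt was predicted
            pass
    # ... otherwise halt immediately.

# trouble(trouble) would halt if and only if halts(trouble, trouble) says it
# does not: a contradiction, so the assumed decider cannot be written.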

Biological systems are rife with continuous and stochastic dynamics, and they are deeply embedded in physical time. It seems presumptuous at the very least to assume that only Turing computations matter for consciousness, or indeed for many other aspects of cognition and mind. Electromagnetic fields, the flux of neurotransmitters, and much else besides — all lie beyond the bounds of the algorithmic, and any one of them may turn out to play a critical role in consciousness.

These limitations encourage us to take a broader view of the brain, moving beyond what I sometimes call “Turing world” to consider how broader forms of computation and dynamics might help explain how brains do what they do. There is a rich history here to draw on, and an exciting future too.

The earliest computers were not digital Turing machines but analogue devices operating in continuous time. The ancient “Antikythera mechanism,” used for astronomical purposes and dating back around 2,000 years, is an excellent example. Analogue computers were again prominent at the birth of AI in the 1950s, in the guise of the long-neglected discipline of cybernetics, where issues of control and regulation of a system are considered more important than abstract symbol manipulation.

Recently, there’s been a resurgence in neuromorphic computation, which leverages properties of neural systems, such as the precise timing of neuronal spikes, in more detail than the cartoon-like simulated neurons that dominate current artificial neural network approaches. And then there’s the relatively new concept of “mortal computation” (introduced by Hinton), which stresses the potential for energy saving offered by developing algorithms that are inseparably tied to their material substrates, so that they (metaphorically) die when their particular implementation ceases to exist. All these alternative forms of computation are more closely tied to their material basis — are less substrate-flexible — than standard digital computation.

Figure 3: The Watt Governor. It’s not a computer. (R. Routledge/Wikimedia)

Many systems do what they do without it being reasonable or useful to describe them as being computational at all. Three decades ago, the cognitive scientist Tim van Gelder gave an influential example, in the form of the governor of a steam engine (Figure 3). These governors regulate steam flow through an engine using simple mechanics and physics: as engine speed increases, two heavy cantilevered balls swing outwards, which in turn closes a valve, reducing steam flow. A “computational governor,” sensing engine speed, calculating the necessary actions and then sending precise motor signals to switch actuators on or off, would not only be hopelessly inefficient but would betray a total misunderstanding of what’s really going on.

The branch of cognitive science generally known as “dynamical systems,” as well as approaches that emphasize enactive, embodied, embedded and extended aspects of mind (so-called 4E cognitive science), all reject, in ways relating to van Gelder’s insight, the idea that mind and brain can be exhaustively accounted for algorithmically. They all explore alternatives based on the mathematics of continuous, dynamical processes — involving concepts such as attractors, phase spaces and so on. It is at least plausible that those aspects of brain function necessary for consciousness also depend on non-computational processes like these, or perhaps on some broader notion of computation.

“Evidence that the materiality of the brain matters for its function is evidence against the idea that digital computation is all that counts, which in turn is evidence against computational functionalism.”

These other games in town are all still compatible with what in philosophy is known as functionalism: the idea that properties of mind (including consciousness) depend on the functional organization of the (embodied) brain. One of the factors contributing to confusion in this area has been a tendency to conflate the rather liberal position of functionalism-in-general (functional organization can include many things) with the very specific claim of computational functionalism, which holds that the type of organization that matters is computational, and which in turn is often assumed to involve Turing-style algorithms in particular.

The challenge for machine consciousness here is that the further we venture from Turing world, the more deeply entangled we become in randomness, dynamics and entropy, and the more deeply tied we are to the properties of a particular material substrate. The question is no longer about which algorithms give rise to consciousness; it’s about how brain-like a system has to be to move the needle on its potential to be conscious.

3: Life Matters

My third argument is that life (probably) matters. This is the idea — called biological naturalism by the philosopher John Searle — that properties of life are necessary, though not necessarily sufficient, for consciousness. I should say upfront that I don’t have a knock-down argument for this position, nor do I think any such argument yet exists. But it is worth taking seriously, if only for the simple reason mentioned earlier: every candidate for consciousness that most people currently agree on as actually being conscious is also alive.

Why might life matter for consciousness? There’s more to say here than will fit in this essay (I wrote an entire book, “Being You,” and a recent research paper on the subject), but one way of thinking about it goes like this.

The starting point is the idea that what we consciously perceive depends on the brain’s best guesses about what’s going on in the world, rather than on a direct readout of sensory inputs. This derives from influential predictive processing theories that understand the brain as continually explaining away its sensory inputs by updating predictions about their causes. In this view, sensory signals are interpreted as prediction errors, reporting the difference between what the brain expects and what it gets at each level of its perceptual hierarchies, and the brain is continually minimizing these prediction errors everywhere and all the time.
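The logic of prediction-error minimization can be made concrete with a deliberately crude toy, sketched below in Python: a “brain” keeps a single running guess about one hidden cause of its sensations and nudges that guess to shrink the error. Every specific choice in the sketch (the identity prediction, the Gaussian noise, the learning rate) is an illustrative assumption, not anything drawn from a particular theory:

```python
import random

# Toy prediction-error minimization. The "brain" maintains a best guess `mu`
# about a hidden cause of its sensory input and updates that guess to reduce
# the gap between what it predicts and what it receives.

hidden_cause = 5.0   # the true state of the world, unknown to the brain
mu = 0.0             # the brain's current best guess
learning_rate = 0.1  # how strongly each error nudges the guess

for step in range(100):
    sensation = hidden_cause + random.gauss(0.0, 0.5)  # noisy bottom-up signal
    prediction = mu                                    # top-down prediction
    error = sensation - prediction                     # prediction error
    mu += learning_rate * error                        # shrink future errors

print(f"final guess: {mu:.2f} (true hidden cause: {hidden_cause})")
```

After enough updates, the guess settles near the hidden cause; the point of the toy is only that “perception” here is the running estimate, never the raw signal itself.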

Conscious experience in this light is a kind of controlled hallucination: a top-down inside-out perceptual inference in which the brain’s predictions about what’s going on are continually calibrated by sensory signals coming from the bottom-up (or outside-in).

Figure 4: Perception as controlled hallucination. The conscious experience of a coffee cup is underpinned by the content of the brain’s predictions (grey arrows) of the causes of sensory inputs (black arrows). (Courtesy of Anil Seth)

This kind of perceptual best-guessing underlies not only experiences of the world, but experiences of being a self, too — experiences of being the subject of experience. A good example is how we perceive the body, both as an object in the world and as the source of more fundamental aspects of selfhood, such as emotion and mood. Both these aspects of selfhood can be understood as forms of perceptual best-guessing: inferences about what is, and what is not, part of the body, and inferences about the body’s internal physiological condition (the latter is sometimes called “interoceptive inference”; interoception refers to perception of the body from within).

Perceptual predictions are good not only for figuring out what’s going on, but (in a callback to mid-20th-century cybernetics) also for control and regulation: When you can predict something, you can also control it. This applies above all to predictions about the body’s physiological condition. This is because the primary duty of any brain is to keep its body alive, to keep physiological quantities like heart rate and blood oxygenation where they need to be. This, in turn, helps explain why embodied experiences feel the way they do.

Experiences of emotion and mood, unlike vision (for example), are characterized primarily by valence — by things generally going well or going badly.

“Every candidate for consciousness that most people currently agree on as actually being conscious is also alive.”

This drive to stay alive doesn’t bottom out anywhere in particular. It reaches deep into the interior of each cell, into the molecular furnaces of metabolism. Within these whirls of metabolic activity, the ubiquitous process of prediction error minimization becomes inseparable from the materiality of life itself. A mathematical line can be drawn directly from the self-producing, autopoietic nature of biological material all the way to the Bayesian best-guessing that underpins our perceptual experiences of the world and of the self.

Several lines of thought now converge. First, we have the glimmers of an explanatory connection between life and consciousness. Conscious experiences of emotion, mood and even the basal feeling of being alive all map neatly onto perceptual predictions involved in the control and regulation of bodily condition. Second, the processes underpinning these perceptual predictions are deeply, and perhaps inextricably, rooted in our nature as biological systems, as self-regenerating storms of life resisting the pull of entropic sameness. And third, all of this is non-computational, or at least non-algorithmic. The minimization of prediction error in real brains and real bodies is a continuous dynamical process that is likely inseparable from its material basis, rather than a meat-implemented algorithm existing in a pristine universe of symbol and sequence.

Put all this together, and a picture begins to form: We experience the world around us and ourselves within it — with, through and because of our living bodies. Perhaps it is life, rather than information processing, that breathes fire into the equations of experience.

4: Simulation Is Not Instantiation

Finally, simulation is not instantiation. One of the most powerful capabilities of universal, Turing-based computers is that they can simulate a vast range of phenomena — even, and perhaps especially, phenomena that aren’t themselves (digitally) computational, such as continuous and random processes.

But we should not confuse the map with the territory, or the model with the mechanism. An algorithmic simulation of a continuous process is just that — a simulation, not the process itself.

Computational simulations generally lack the causal powers and intrinsic properties of the things being simulated. A simulation of the digestive system does not actually digest anything. A simulation of a rainstorm does not make anything actually wet. If we simulate a living creature, we have not created life. In general, a computational simulation of X does not bring X into being — does not instantiate X — unless X is a computational process (specifically, an algorithm) itself. Making the point from the other direction, the fact that X can be simulated computationally does not justify the conclusion that X is itself computational.

In most cases, the distinction between simulation and instantiation is obvious and uncontroversial. It should be obvious and uncontroversial for consciousness, too. A computational simulation of the brain (and body), however detailed it may be, will only give rise to consciousness if consciousness is a matter of computation. In other words, the prospect of instantiating consciousness through some kind of whole-brain emulation, at some arbitrarily high level of detail, already assumes that computational functionalism is true. But as I have argued, this assumption is likely wrong and certainly should not be accepted axiomatically.

This brings us back to the poverty of the brain-as-computer metaphor. If you think that everything that matters about brains can be captured by abstract neural networks, then it’s natural to think that simulating the brain on a digital computer will instantiate all its properties, including consciousness, since in this case, everything that matters is, by assumption, algorithmic. This is the “Turing world” view of the brain.

“Perhaps it is life, rather than information processing, that breathes fire into the equations of experience.”

If, instead, you are intrigued by more detailed brain models that capture the complexities of individual neurons and other fine-grained biophysical processes, then it really ought to be less natural to assume that simulating the brain will realize all its properties, since these more detailed models are interesting precisely because they suggest that things other than Turing computation likely matter too.

There is, therefore, something of a contradiction lurking for those who invest their dreams and their venture capital into the prospect of uploading their conscious minds into exquisitely detailed simulations of their brains, so that they can exist forever in silicon rapture. If an exquisitely detailed brain model is needed, then you are no more likely to exist in the simulation than a hailstorm is likely to arise inside the computers of the U.K. meteorological office.

But buckle up. What if everything is a simulation already? What if our whole universe — including the billions of bodies, brains and minds on this planet, as well as its hailstorms and weather forecasting computers — is just an assemblage of code fragments in an advanced computer simulation created by our technologically godlike and genealogically obsessed descendants?

This is the “simulation hypothesis,” associated most closely with the philosopher Nick Bostrom, and still, somehow, an influential idea among the technorati.

Bostrom notes that simulations like this, if they have been created, ought to be much more numerous than the original “base reality,” which in turn suggests that we may be more likely to exist within a simulation than within reality itself. He marshals various statistical arguments to flesh out this idea. But it is telling that he notes one necessary assumption, and then just takes it as a given. This, perhaps unsurprisingly, is the assumption that “a computer running a suitable program would be conscious” (see page 2 of his paper). If this assumption doesn’t hold, then the simple fact that we are conscious would rule out that we exist in a simulation. That this strong assumption is taken on board without examination in a philosophical discussion that is all about the validity of assumptions is yet another indication of how deeply ingrained the computational view of mind and brain has become. It is also a sign of the existential mess we get ourselves into when we fail to distinguish our models of reality from reality itself.


Let’s summarize. Many social and psychological factors, including some well-understood cognitive biases, predispose us to overattribute consciousness to machines.

Computational functionalism — the claim that (algorithmic) computation is sufficient for consciousness — is a very strong assumption that looks increasingly shaky as the many and deep differences between brains and (standard digital) computers come into view. There are plenty of other technologies (e.g., neuromorphic computing, synthetic biology) and frameworks for understanding the brain (e.g., dynamical systems theory), which go beyond the strictly algorithmic. In each case, the further one gets from Turing world, the less plausible it is that the relevant properties can be abstracted away from their underlying material basis.

One possibility, motivated by connecting predictive processing views of perception with physiological regulation and metabolism, is that consciousness is deeply tied to our nature as biological, living creatures.

Finally, simulating the biological mechanisms of consciousness computationally, at whatever grain of detail you might choose, will not give rise to consciousness unless computational functionalism happens to be true after all.

Each of these lines of argument can stand up by itself. You might favor the arguments against computational functionalism while remaining unpersuaded about the merits of biological naturalism. Distinguishing between simulation and instantiation doesn’t depend on taking account of our cognitive biases. But taken together, they complement and strengthen each other. Questioning computational functionalism reinforces the importance of distinguishing simulation from instantiation. The availability of other technologies and frameworks beyond Turing-style algorithmic computation opens space for the idea that life might be necessary for consciousness.

Collectively, these arguments make the case that consciousness is very unlikely to simply come along for the ride as AI gets smarter, and that achieving it may well be impossible for AI systems in general, at least for the silicon-based digital computers we are familiar with.

At the same time, nothing in what I’ve said rules out the possibility of artificial consciousness altogether.

Given all this, what should we do?

“Many social and psychological factors, including some well-understood cognitive biases, predispose us to overattribute consciousness to machines.”

What (Not) To Do?

When it comes to consciousness, the fact of the matter matters. And not only because of the mythology of ancestor simulations, mind-uploading and the like. Things capable of conscious experiences have ethical and moral standing that other things do not. At least, claims to this kind of moral consideration are more straightforward when they are grounded in the capacity for consciousness.

This is why thinking clearly about the prospects for real artificial consciousness is of vital importance in the here and now. I’ve made a case against conscious AI, but I might be wrong. The biological naturalist position (whether my version or any other) remains a minority view. Other theories of consciousness propose accounts framed in terms of standard computation-as-we-know-it. These theories generally avoid proposing sufficient conditions for consciousness. They also generally sidestep defending computational functionalism, being content instead to assume it.

But this doesn’t mean they are wrong. All theories of consciousness are fraught with uncertainty, and anyone who claims to know for sure what it would take to create real artificial consciousness, or for sure what it would take to avoid doing so, is overstepping what can reasonably be said.

This uncertainty lands us in a difficult position. As redundant as it may sound, nobody should be deliberately setting out to create conscious AI, whether in the service of some poorly thought-through techno-rapture, or for any other reason. Creating conscious machines would be an ethical disaster. We would be introducing into the world new moral subjects, and with them the potential for new forms of suffering, potentially at an exponential pace. And if we give these systems rights, as arguably we should if they really are conscious, we will hamper our ability to control them, or to shut them down if we need to.

Even if I’m right that standard digital computers aren’t up to the job, other emerging technologies might yet be, whether alternative forms of computation (analogue, neuromorphic, biological and so on) or rapidly developing methods in synthetic biology. For my money, we ought to be more worried about the accidental emergence of consciousness in cerebral organoids (brain-like structures typically grown from human embryonic stem cells) than in any new wave of LLMs.

But our worries don’t stop there. When it comes to the impact of AI in society, it is essential to draw a distinction between AI systems that are actually conscious and those that persuasively seem to be conscious but are, in fact, not. While there is inevitable uncertainty about the former, conscious-seeming systems are much, much closer.

As the Google engineer Lemoine demonstrated, for some of us, such conscious-seeming systems are already here. Machines that seem conscious pose serious ethical issues distinct from those posed by actually conscious machines.

For example, we might give AI systems “rights” that they don’t actually need, since they would not actually be conscious, restricting our ability to control them for no good reason. More generally, either we decide to care about conscious-seeming AI, distorting our circles of moral concern, or we decide not to, and risk brutalizing our minds. As Immanuel Kant argued long ago in his lectures on ethics, treating conscious-seeming things as if they lack consciousness is a psychologically unhealthy place to be.  

The dangers of conscious-seeming AI are starting to be noticed by leading figures in AI, including Mustafa Suleyman (CEO of Microsoft AI) and Yoshua Bengio, but this doesn’t mean the problem is in any sense under control.

“If we give these systems rights, as arguably we should if they really are conscious, we will hamper our ability to control them, or to shut them down if we need to.”

One overlooked factor here is that even if we know, or believe, that an AI is not conscious, we still might be unable to resist feeling that it is. Illusions of artificial consciousness might be as impenetrable to our minds as some visual illusions. The two lines in the Müller-Lyer illusion (Figure 5) are the same length, but they will always look different. It doesn’t matter how many times you encounter the illusion; you cannot think your way out of it. The way we feel about AI being conscious might be similarly impervious to what we think or understand about AI consciousness.

Figure 5: The Müller-Lyer illusion. The two lines are the same length. (Courtesy of Anil Seth)

What’s more, because there’s no consensus over the necessary or sufficient conditions for consciousness, there aren’t any definitive tests for deciding whether an AI is actually conscious. The plot of “Ex Machina” revolves around exactly this dilemma. Riffing on the famous Turing test (which, as Turing well knew, tests for machine intelligence, not consciousness), Nathan — the creator of the robot Ava — says that the “real test” is to reveal that his creation is a machine, and to see whether Caleb — the stooge — still feels that it, or she, is conscious. The “Garland test,” as it’s come to be known, is not a test of machine consciousness itself. It is a test of what it takes for a human to be persuaded that a machine is conscious.

The importance of taking an informed ethical position despite all these uncertainties spotlights another human habit: our unfortunate track record of withholding moral status from those that deserve it, including from many non-human animals, and sometimes other humans. It is reasonable to wonder whether withholding attributions of consciousness to AI may leave us once again on the wrong side of history. The recent calls for attention to “AI welfare” are based largely on this worry.

But there are good reasons why the situation with AI is likely to be different. Our psychological biases are more likely to lead to false positives than false negatives. Compared to non-human animals, AI systems, for all their apparent wonders, may be more similar to us in ways that do not matter for consciousness, like linguistic ability, and less similar in ways that do, like being alive.

Soul Machine

Despite the hype and the hubris, there’s no doubt that AI is transforming society. It will be hard enough to navigate the clear and obvious challenges AI poses, and to take proper advantage of its many benefits, without the additional confusion generated by immoderate pronouncements about a coming age of conscious machines. Given the pace of change in both the technology itself and in its public perception, developing a clear view of the prospects and pitfalls of conscious AI is both essential and urgent.

Real artificial consciousness would change everything — and very much for the worse. Illusions of conscious AI are dangerous in their own distinctive ways, especially if we are constantly distracted and fascinated by the lure of truly sentient machines. My hope for this essay is that it offers some tools for thinking through these challenges, some defenses against overconfident claims about inevitability or outright impossibility, and some hope for our own human, animal, biological nature. And hope for our future too.

The future history of AI is not yet written. There is no inevitability to the directions AI might yet take. To think otherwise is to be overly constrained by our conceptual inheritance, weighed down by the baggage of bad science fiction and submissive to the self-serving narrative of tech companies laboring to make it to the next financial quarter. Time is short, but collectively we can still decide which kinds of AI we really want and which we really don’t.

The philosopher Shannon Vallor describes AI as a mirror, reflecting back to us the incident light of our digitized past. We see ourselves in our algorithms, but we also see our algorithms in ourselves. This mechanization of the mind is perhaps the most pernicious near-term consequence of the unseemly rush toward human-like AI. If we conflate the richness of biological brains and human experience with the information-processing machinations of deepfake-boosted chatbots, or whatever the latest AI wizardry might be, we do our minds, brains and bodies a grave injustice. If we sell ourselves too cheaply to our machine creations, we overestimate them, and we underestimate ourselves.

Perhaps unexpectedly, this brings me at last to the soul. For many people, especially modern people of science and reason, the idea of the soul might seem as outmoded as the Stone Age. And if by soul what is meant is an immaterial essence of rationality and consciousness, perfectly separable from the body, then this isn’t a terrible take.

“Time is short, but collectively we can still decide which kinds of AI we really want and which we really don’t.”

But there are other games in town here, too. Long before Descartes, the Greek concept of psychē linked the idea of a soul to breath, while on the other side of the world, the Hindu expression of soul, or Ātman, associated our innermost essence with the ground-state of all experience, unaffected by rational thought or by any other specific conscious content, a pure witnessing awareness.

The cartoon dream of a silicon rapture, with its tropes of mind uploading, of disembodied eternal existence and of cloud-based reunions with other chosen ones, is a regression to the Cartesian soul. Computers, or more precisely computations, are, after all, immortal, and the sacrament of the algorithm promises a purist rationality, untainted by the body (despite plentiful evidence linking reason to emotion). But these are likely to be empty dreams, delivering not posthuman paradise but silicon oblivion.

What really matters is not this kind of soul. Not any disembodied human-exceptionalist undying essence of you or of me. Perhaps what makes us us harks even further back, to Ancient Greece and to the plains of India, where our innermost essence arises as an inchoate feeling of just being alive — more breath than thought and more meat than machine. The sociologist Sherry Turkle once said that technology can make us forget what we know about life. It’s about time we started to remember.

A New Anti-Political Fervor https://www.noemamag.com/a-new-anti-political-fervor Thu, 08 Jan 2026 15:02:28 +0000

It’s said that we live in a crisis of democracy, but it would be better stated that we live in a crisis of politics. Throughout the world, and especially in the West, an anti-political mood has taken hold. 

Faith in several key national institutions is at an all-time low in the U.S. To many voters, political outsiders are more compelling than experienced politicians. Anger toward elites is commonplace as income inequality rises. The social climate is growing lonelier and more frayed. Community life has suffered, worsened by internet use. We trust each other less, and we are more anxious and pessimistic about the future. 

For most of the 20th century, politics and even political parties were viewed by many as a home away from home, fortified by strong social bases of support. Unions, churches, civic organizations and local community life made up the foundation. This rootedness created both manageable stability for the state and meaning for people.

These places of belonging have since declined, and so too has politics declined as a home. What has emerged in response is an untethered and distrusting public. Historically, transitional periods of great economic and social dislocation like ours are also times of heightened anti-political sentiments. Everyday people become detached from and even suspicious of their public representatives. 

What makes today’s situation remarkable is how forcefully anti-political feelings have risen across many different countries, all at the same time. Recent polling shows dissatisfaction with democracy across 12 leading high-income nations at a median of 64% — a record high. These trends extend far beyond the Western world. 2025 has seen unprecedented revolts in Asia motivated by a strong sense of disgust toward politicians and nepotism. Similar anger has fueled protests in Kenya, Morocco, Madagascar and elsewhere across Africa. 

Politicians and elites now find themselves “ruling the void,” in the words of political theorist Peter Mair.

In this day and age, anti-political feelings tend to manifest as a swarm. Usually online, movements rapidly take shape, organize themselves and then often dissipate as quickly as they appear. United by shared distrust of the political class, the 15-M protests in Spain, Occupy Wall Street and the Arab Spring were some of the first primarily internet-based, swarm-like movements in the 2010s. “Neither-nor” and “Down with the partycratic dictatorship” were common slogans of the indignados in Spain in 2011. 

More recently, the Yellow Vests formed in France in 2018 as a decentralized swarm against the state. Swarms have even toppled governments — as in Armenia in 2018, Bangladesh in 2024 and Nepal in 2025. Outsider candidates have also embraced the anti-political climate to enter power themselves, with mixed results.

In the past decade, the populist right has had more success capitalizing on the anti-political mood. But anti-politics is a raw public energy, not bound by any political ideology. It is redefining the entire political terrain. Is anti-politics, as The New York Times columnist David Brooks wrote in 2016, truly the “governing cancer of our time”? Or is it instead society’s antibody response against the state’s failures, a symptom of a deeper transformation?

While periods of anti-political fervor have taken hold just as strongly in the past, our situation today is unique. There are two historical moments that can help us understand what motivates the current frustration and sets it apart. Through this frame, we can better contextualize the U.S. case and also discern what future might come of it.

Anti-politics is a vehicle of discontentment, a real but disorganized spirit of our time, and its destination is an open question.

After World War I

While being held in an Italian prison by fascists in the 1930s, philosopher and politician Antonio Gramsci wrote that “at a certain point in their historical lives, social classes become detached from their traditional parties.” 

When ruling elites lose their consensus, he continued, they are “no longer ‘leading’ but only ‘dominant’ … this means precisely that the great masses have become detached from their traditional ideologies and no longer believe what they used to believe previously.”

This insight was the preface to an often-quoted adaptation of his words: Such times are when the “old is dying, and the new is struggling to be born.” This is also when a “great variety of morbid symptoms appear.” 

Gramsci was diagnosing a social climate that had emerged from World War I. The Great War produced homelessness and personal loss on an unprecedented scale. Centuries-old empires like those of the Habsburg Dynasty of Austria-Hungary and the Ottoman Empire collapsed. New states emerged from the rubble, and lives in the ones that survived were permanently altered. 

“Anti-politics is a raw public energy, not bound by any political ideology. It is redefining the entire political terrain.”

Because everything was so battered, the years between the two world wars were a time of intensely contested mass politics in Europe. People were searching for an identity and desperately sought answers on how to start anew. 

Many spoke openly against parliamentary democracy at the time. The democracies after World War I were hastily constructed and were unable to cope with the tide of popular demands. Rife with factionalism and historical grievances, they were inherently unstable.

Parliamentary democracy, therefore, became an easy whipping post of frustration. In “The Revolt of the Masses,” Spanish philosopher José Ortega y Gasset in 1929 likened the anti-political mood of the masses to “mere negation.” The crisis of politics then was mainly channeled by two mass movements: communists and fascists. 

The fascists would ultimately be most successful in converting this anti-political mood into power. By the end of the 1930s, the crisis of politics had transformed the European continent. In 1938, only 13 European states were parliamentary democracies, down from 26 in 1920. 

The damage caused by radical mass parties provoked philosopher Simone Weil to write “On the Abolition of All Political Parties” in 1943. She concluded that the logical endpoint of every party is a monopoly on power at the expense of society. The ultimate goal of a party, she wrote, “is its own growth, without limit.”

The social climate remained distrustful and cynical well into World War II. In “The World of Yesterday,” completed in 1941, Austrian writer Stefan Zweig looked across Europe and found pessimism everywhere:

In 1939… this almost religious faith in honesty or at least the ability of your own government had disappeared throughout the whole of Europe. Nothing but contempt was felt for diplomacy after the public had watched, bitterly, as it wrecked any chance of a lasting peace at Versailles.

At heart, no one respected any of the statesmen in 1939, and no one entrusted his fate to them with an easy mind. The nations remembered clearly how shamelessly they had been betrayed with promises of disarmament and the abolition of secret diplomatic deals… Where, they asked themselves, will they drive us now?

The sad irony of the period is that the public, who had grown so cynical of parliamentary politics, now found their frustrations once again exploited and their destiny decided for them, just like in 1914. They had no choice but to fall in line.

“Men went to the front, but not dreaming of becoming heroes,” Zweig wrote. “Nations and individuals felt they were the victims of either ordinary political folly or the power of an incomprehensible and malicious fate.”

Under The Iron Curtain

Unlike during the volatile interwar years, anti-political feelings were forced underground in Eastern Europe after World War II. The public was suppressed under a cult of power ruling like an impenetrable leviathan. In 1956, the Soviet state violently crushed the people’s uprising in Hungary. In 1968, it did the same in Czechoslovakia.

After that tragedy, it became clear to many that politics was a dead end. According to Czech dissident Václav Havel, even though no one believed in the state, one had to “behave as though they did or tolerate them in silence.” The mood was best captured by Polish dissident Jacek Kuroń: “What is to be done when nothing can be done?”

With all political possibilities for change seemingly closed, dissidents instead asked how life should be lived and shared with others. They turned their focus toward civil society, forming a counter-movement that Hungarian writer György Konrád called “anti-politics.”

Anti-politics was a social movement that sought to create a public space separate from the state. It went by many names: “second culture,” “parallel polis,” “politics from below.” As Havel put it, the dissident “has no desire for office and does not gather votes. He offers nothing and promises nothing.” 

Eastern European anti-politics was instead a social project: a moral critique of power rooted in everyday life. Havel famously coined its credo as “living within the truth.” Polish journalist Konstanty Gebert described living within the truth as setting up a “small, portable barricade between me and silence, submission, humiliation, shame.”

Seeing no political possibilities, dissidents reimagined how they should live with others. Their meetings were held underground, in apartments and secret work meetings. They stressed that their actions — even down to their language — were not political but pro-social. They took to creating a second culture through films, novels, poetry, music and other mediums as they explored their extreme conditions. Today, this is commonly looked back upon as the golden age for Eastern European literature and art. 

“Is anti-politics society’s antibody response against the state’s failures, a symptom of a deeper transformation?”

Ultimately, of course, the dissidents were victorious. They won by remaking the social sphere into something that could bludgeon the state. As the cracks accumulated, Soviet rule collapsed under the weight of its own illegitimacy. Some of the writers and heroes of the anti-political underground would go on to run for office themselves, despite originally promising otherwise. 

The Eastern European case demonstrates how anti-politics reinvents itself with each new set of material circumstances.

Today

For much of the 20th century, it was accepted that political parties had to be linked to civil society organizations for turnout and legitimacy. This made parties more receptive to public pressure; they had to show interest in bread-and-butter deliverables. Parties also relied on the public and its organizations for funding and leaders. 

Over the past few decades, however, civil society organizations have eroded. As a result, today’s parties struggle to draw sustained, mass participation like they did a century ago. Nor does the state dominate public life so punitively, as it did under Soviet rule, that a second culture is needed.

The conditions are categorically different today. As the tense relationship between the state and everyday people is again being renegotiated, its expression will be unique to the 21st century. 

Today’s anti-political mood has been building for some time. In “Ruling the Void,” published in 2013, Mair documented the unusual convergence of trends across all Western democracies: depressed voter turnout, declining party membership, an increase in independents, wild electoral swings and low participation in civil society organizations. These trends have since deepened and calcified. 

Today, voters are less guided by partisan cues. In the U.S., a plurality does not identify with either major party. Consequently, the correlation between one’s class and voter preference has weakened. No longer do voting blocs fit clear schemas and predictive models like they used to. This is the new public that Mair likened to “the void.”

The reason for these changes is longstanding and structural, but the frustration has been intensified by the internet. As Martin Gurri documented in “The Revolt of the Public” in 2014, the internet has undermined the old, top-down mediators of information. Traditional media no longer exclusively sets the agenda and states cannot effectively rule by persuasion alone. Dominant narratives struggle to hold sway. In Gurri’s words, this means “every inch of political space is contested” in a horizontal, decentralized media environment. 

The explosion of information has led to a collapse of meaning, which has been replaced by pure negation. As philosopher Byung-Chul Han succinctly said in a 2022 interview in Noema, “The more we are confronted with information, the more our suspicion grows.” This is natural fuel for anti-politics. Gurri similarly argued that government failure now sets the public agenda. Since meaning can no longer be narrativized from the top down, states are unable to easily hide or excuse their failures like before. 

Rather than affirm the power center, the internet energizes the “world of the very small,” in the words of former President of Armenia and physicist Armen Sarkissian. He has likened the internet’s destabilizing effects to quantum mechanics: “You need just a couple of high-energized particles. They come and hit. And what you get is a chain reaction.” 

In January 2022, Sarkissian fell victim to this very phenomenon, only four years after an internet-based swarm had toppled the Armenian government. He claimed that the public had become obsessed with “all sorts of conspiracy theories and myths” which was starting to affect his health. In a surprise announcement, he resigned and claimed his presidential office did not have sufficient power to influence events. 

Yet, this idea that the internet would deepen the void was not a given. As Gurri writes, implicit in the century-long struggle for suffrage was the belief that “once all the people were inside the system, something magical would happen: the good society.” The internet was once viewed as merely an extension of this long march toward inclusion, one that would only better represent a general interest. 

The internet has instead highlighted the inertia and emptiness of political institutions. Today, with little left sacred, these institutions are readily filled by opportunistic outsiders and other political entrepreneurs, who are also shaping the public conversation. Some cynicism has always been part of democratic society, but it is now easily converted into actionable anger.

“Cynicism has always been part of democratic society, but it is now easily converted into actionable anger.”

While the internet has deepened anti-political feelings, preexisting societal conditions laid the groundwork for this to happen. Since the 1970s, political parties across Western democracies have been hollowed out. Their organizations have grown more closed and insular, relying less and less on their constituents for decision-making and funds. The present-day anger, therefore, is not imagined but rooted in longstanding exclusion.

Mair and political scientist Richard S. Katz argued in their 2018 book that leading Western parties have undergone a process of “cartelization.” Whereas mass parties in the early 20th century were labor-intensive, bottom-up, reformist and relied on members for funding, cartel parties view politics as a profession, depend on a wealthy donor class, possess an in-group mentality and collude with each other to maintain their positions. Because cartel parties rely less on member recruitment, they instead outsource decision-making to institutional bureaucracies, courts and a web of organizations outside of government.

These changes naturally make everyday people feel invisible and secondary. Lacking a direct relationship to the public, political elites are more and more beholden to only themselves. Political parties’ main purpose then becomes simply maintaining their positions. As Mair noted, Western parties have “become agencies that govern rather than represent.” In this dynamic, the public’s role in democracy is largely relegated to being a spectator.

It’s unsurprising that the ballot box has become the natural vehicle for anti-politics. Votes can be sudden reminders to political elites that the public still controls some levers. In recent years, populist movements on both the right and left have tapped into anti-political sentiments to unseat the traditionally dominant parties in Western Europe and beyond. In fact, 2024 was the worst year for incumbents on record. In developed countries that held elections, every single governing party lost vote share.

The American Case

Modern U.S. history tells a decades-long story of how anti-politics takes root. In the late 1960s, the public grew distrustful and receded while political parties became more insular to protect themselves.

Sometimes called “the last innocent year,” 1964 was the high point of American institutional trust at 77%, per Gallup polling. Both the failed Vietnam War and corruption scandals at home — such as Watergate and the findings of the Church Committee on CIA abuses — deeply damaged public faith in the following years. By 1979, it had plummeted to 29%. 

The public responded to the diminishing prospects of politics by turning inward. The 1970s were the “Me Decade,” as journalist Tom Wolfe put it. Former hotbeds of student activism calmed. Relatively rare during the previous decade, self-help books started to fill the bestseller lists. Concepts like “burnout” appeared in psychological journals for the first time. As Christopher Lasch wrote of the period, a “therapeutic sensibility” was taking over America. No longer were Americans viewing politics as a place to actualize their dreams. 

Instead, they looked elsewhere. What was once political became personal. Sociologist Nina Eliasoph has documented how this transformation affected even the language of everyday people. In her field studies during the 1980s, she was surprised to find how often words like “doable” and “personal” overlapped with “non-political,” whereas “not doable” was associated with “away from home” or “political.” By the end of the 20th century, this passive sensibility was clear at the ballot box. In the 1996 presidential election, voter turnout dropped to a historic low.

This was not without reason. As the scandals of the 1970s unfolded, both the Republican and Democratic parties reorganized themselves away from the public, justifying this shift under the guise of stability. The public was deemed simply too volatile and emotional to be trusted with deciding politics.

This gave rise to the so-called “invisible primary” or “money primary”: the primary before the primary, where a candidate is primed for the public by investors and insider allies. It was a turning point in how parties procured funds. In 1976, the Supreme Court ruled in Buckley v. Valeo that election expenditures count as “free speech,” making dark campaign money legally permissible. Then in 1982, the Hunt Commission codified preselected superdelegates as part of the Democratic primary process, further gating party elites from the public.

As the Democrats restructured themselves, Republicans strategized around the rising number of non-voters. In 1977, they filibustered to irrelevance President Jimmy Carter’s bill to make voter registration easier. As Pat Buchanan, the future White House communications director for President Reagan, put it: The “busing of economic parasites and political illiterates” to the polls would mean the end of the insurgent New Right.

“Since meaning can no longer be narrativized from the top down, states are unable to easily hide or excuse their failures like before.”

Consultants and pollsters instead became a leading group within the party apparatus, which now lacked strong civil society roots. According to political scientist Costas Panagopoulos, media mentions of political consultancy increased 13-fold from 1979 to 1985. The maintenance of the party cartel became an end in itself for those employed by it, and politics consequently became the art of maintaining this closed-off world.  

The result was the development of a “permanent campaign,” as political strategist Sidney Blumenthal famously put it in 1980. The ballooning costs of the permanent campaign were simply too high to allow for any outsiders. In many cases, being an incumbent was the ticket to virtually automatic victories. Once you entered the party system, you stayed. As a result, the U.S. has effectively become a gerontocracy.

Despite both political parties building decades-old moats around themselves, they are still under siege today. In the 21st century, longstanding distrust has hardened into a generalized opposition supercharged by the internet. As if awakened from dormancy, the once-passive public has made its power felt. Both Barack Obama and Donald Trump were victorious despite not being chosen by the invisible primary.

Yet contemporary anti-politics presents us with a glaring contradiction, both in the U.S. and elsewhere. While outsiders tap into the anti-establishment mood to win votes, they struggle to maintain legitimacy once they enter power themselves. This is because anti-politics today is rarely expressed as a positive program. Since there is no clear majority opinion driving it other than general cynicism, what we have instead is “unpopular populism.” And as is so often the case, electing a new government does not fundamentally redress the tension; it just briefly pauses it.

A General Opposition

More than half a century ago, American political theorist Robert Dahl speculated that the political future might be motivated by a new principle: “an opposition to the democratic leviathan itself.” To the average alienated citizen, the state would become “remote, distant, and impersonal.” Dahl, in many ways, was right.

Today’s political life is dominated by a general discontentment with representation itself. But this is closer to unveiling the true reality of politics than one might assume. 

One cannot be nostalgic about past eras of mass politics, as if they had been actually representative. On the contrary, those eras obscured the actual relationship between the state and the public. Back then, after all, powerful political machines relied on bosses in civil society organizations to churn out votes. 

This past setup was marginally more representative and sometimes even delivered results, but the American public rejected it in the 1970s precisely because it exposed itself as corrupt. The internet has now converted this longstanding cynicism into raw discontentment. The state’s naked self-interest is so clearly out in the open now, seen for what it is. 

Messy as it may be, what has been broken apart cannot be put back together. When anti-politics is the prevailing mood, the most relevant division becomes up versus down, insiders versus outsiders. What is most resented by people is being made invisible. 

Any successful future movement will have to both position itself as part of the public and prove it can deliver pro-social, material results. A healthier civil society has to be rebuilt from the bottom up. Despite lacking coherence, anti-politics is effectively the real movement: a symptom of a deep fissure that can no longer be ignored.

Where The Prairie Still Remains https://www.noemamag.com/where-the-prairie-still-remains Tue, 06 Jan 2026 18:00:43 +0000

ROCHESTER, Iowa — If you take a road trip across Iowa, you’re likely to see fields of corn and soybean crops blanketing the landscape, one after the other across 23 million acres, or some 65% of the state. But turn off a gravel road near the Cedar River in the rural southeast and walk through an ornate rusted arch, and you will find yourself in another world.

Rochester Cemetery is not just an active cemetery. It’s a remnant of a once-common sight in Iowa, the place where tallgrass prairie and woodland meet. Faded, crumbling headstones dot its 13 hilly acres. The biggest oaks I’ve seen in my life — gnarled, centuries-old red, black, burr and white — tower over them, keeping watch. And otherwise engulfing the stones is a sea of prairie grasses: big bluestem, Indiangrass, switchgrass. On the right spring day, there are more blooming shooting stars here — with their delicate pink downturned heads nodding in the breeze — than may exist anywhere else in the state.

The cemetery itself dates to the 1830s, just after the Black Hawk Purchase of 1832 opened eastern Iowa to settlement. But today, Rochester is special because it contains one of the rarest ecosystems in the world: oak savanna. Under a few massive trees, prairie plants sequester carbon, prevent erosion and provide key habitat for endangered wildlife like Monarch butterflies and rusty-patched bumblebees — ecosystem services desperately needed across the Midwest.

Before European settlement, tallgrass prairie covered 80% of Iowa. What remains serves as critical seed banks and blueprints for future restorations. But the continued existence of remnants like Rochester is tenuous in this land where corn is king, and it depends on the stewardship of individuals with very different ideas about what and who the land is for — and how it should be managed.

I arrived at the cemetery on a warm Sunday last May. Jacie Thomsen, a Rochester native, greeted me at the gate in a faded U.S. Army T-shirt. A township trustee and the cemetery’s burial manager, Thomsen carried a binder of old documents in one hand and a long metal rod in the other that she periodically used to probe for forgotten, buried gravestones. 

“A lot of people tend to say we’re disrespecting our dead,” Thomsen told me. “I always tell people, ‘Take what you think you know about cemeteries and leave it in your car, because it does not, will not, apply here.’”

I think of the postage-stamp perfect square cemetery I grew up visiting on Memorial Day in nearby Wapello, Iowa, with its close-cropped turfgrass, ornamental bushes and stones in lines straight as the corn rows that box them in on all sides. With manicured lawns and trimmed trees as the blueprint for cemeteries, I can see why some less well acquainted with prairie plants — including other township trustees here — complain this place looks “overgrown” with weeds and in need of a good mow. But at the same time, it strikes me that if one of the pioneers buried here suddenly rose from the dead, these hills are about the only part of the Iowa landscape they’d recognize.

“When you walk in these gates, you’re seeing Iowa as they saw it when they arrived after the Black Hawk Purchase,” Thomsen told me, gesturing at the prairie.

Prairie is Iowa’s natural landscape insofar as any landscape is natural. Humans have shaped the American Midwest ever since the glaciers retreated. For some 10,000 years, Iowa was a dynamic place. Indigenous Americans lit frequent fires that kept encroaching woodlands at bay, allowing the grasslands that dominate the Great Plains to migrate east into Iowa and Illinois. Only in the last 200 years did farmers transform these acres into neat cornfields.

“Turn off a gravel road near the Cedar River in the rural southeast and walk through an ornate rusted arch, and you will find yourself in another world.”

Today, less than a tenth of 1% of Iowa’s original prairie remains. Plows broke the vast majority of the prairie in the 19th and 20th centuries, transforming a biodiverse ecosystem into a crop factory — what Jack Zinnen, an ecologist for the Prairie Research Institute at the University of Illinois Urbana-Champaign, calls an “agricultural desert.” Set aside before industrial agriculture arrived in Iowa, pioneer cemeteries like this one have become the prairie’s final resting places — among the few places where the land remembers what it once was. Some of these cemetery prairie remnants tower over the surrounding farm fields, long roots holding the rich, undisturbed soil together as the rest of Iowa erodes away under repetitive plowing, flowing downriver.

Isaac Larsen, a geosciences expert at UMass Amherst, stands near a drop-off that separates native remnant prairie from farmland in Iowa. Researchers found that farmed fields were more than a foot lower than the prairie on average. (UMass Amherst)

Compared to other forms of American wilderness, prairies are hard to love — they don’t easily fall into the category of the sublime like giant sequoias or Yosemite waterfalls. You have to get really close to appreciate the complex beauty. It’s probably why (along with the black gold underneath the plants) it was so easy to destroy, acre by acre.

“To the uninitiated, the idea of a walk through a prairie might seem to be no more exciting than crossing a field of wheat, a cow pasture, or an unmowed blue-grass lawn,” wrote Robert Betz, a Northeastern Illinois University biologist and early defender of cemetery prairies. “Nothing could be further from the truth.”

Aboveground at Rochester, native prairie grasses and flowers and introduced ornamental plants, such as daisies, hyacinths and showy stonecrops, coexist. Black-eyed Susans, coneflowers, milkweed and prairie clovers grow on graves, alongside the usual decorative plastic varieties. Underground, deep roots entwine with the bodies of long-dead pioneers — who pushed out the Indigenous communities who first stewarded this prairie — and generations of Rochester citizens.

A massive oak towers over gravestones on a hill in Rochester Cemetery. (Christian Elliott)
Left: A queen bumblebee pollinating shooting stars in Rochester Cemetery. On the right spring day, there are more blooming shooting stars here than may exist anywhere else in the state. (Laura Walter) Right: The gates to Rochester Cemetery, which covers 13 acres today. (Christian Elliott)

The Prairie’s Unmaking

I grew up less than an hour’s drive from Rochester, though I learned of the cemetery’s existence only recently, in a book by the New York landscape photographer Stephen Longmire, who’d stumbled across this place and spent years photographing it with a large format film camera. While he wandered Rochester’s hills in the early 2000s, I was spending my weekends at my grandparents’ farm in Wapello playing in their corn rows behind the barn. Prairie was the setting for Laura Ingalls Wilder’s books, a thing of the past. I had no idea how utterly transformed Iowa was, or how much we’d lost.

It wasn’t until college that I learned the truth. Prairie once stretched from Montana down to Texas and east into Ohio, over a million square miles. Iowa was once the beating heart of the American Central Grassland.

But “tallgrass prairie is, in many respects, a human construct,” Tom Rosburg, a biologist and herbarium curator at Drake University in Iowa, told me.

Prairie relies on annual cleansing fire to transform dead foliage into usable nutrients. Shortgrass prairie in the dry western plains burns easily, the fires often lit by lightning and fueled by constant wind. Tallgrass prairie, on the other hand, “wants to be trees,” Chris Helzer, The Nature Conservancy’s science director in Nebraska, told me. It only grows in places with enough precipitation that woodland should dominate.

The Central Grassland’s extension into the Midwest, called the Prairie Peninsula, puzzled scientists for decades — they wondered why it wasn’t dominated by forest. Eventually, they arrived at an answer. For thousands of years, grass and trees had waged a war of attrition across the hills that are now Rochester Cemetery — and across much of Iowa and Illinois. But Indigenous peoples sided with the grasses from the beginning, lighting regular fires that rejuvenated the grasses, kept trees at bay and ensured the landscape remained open for easier hunting. Here at Rochester, it was the Meskwaki, who still live nearby on land purchased from the U.S. government after the Black Hawk War.

Most of a prairie plant’s biomass is underground, in the form of deep root systems that allow it to spring back to life after frequent fires. When pioneers arrived in Iowa and Illinois in the early 1800s, they discovered that millennia of decomposing roots had produced a black, nitrogen-rich, silty loam — some of the most fertile soil in the world. Thus began the prairie’s destruction. Industrialized farming operations, like my family’s, moved in; less than a century later, it was nearly all gone, turned into monocultures of corn and soy sustained by artificial nitrogen inputs, herbicides and pesticides, and drained by stick-straight ditches and networks of buried drainage tiles.

“It was destroyed piece by piece, farmer by farmer,” Rosburg told me, with some bitterness. “It was the biggest transformation in the history of Earth — and in less than a person’s lifetime.”

The change is so dramatic, it’s hard to imagine what was once there. You can’t unplow a prairie — once you tear through those deep, ancient roots, formed over centuries, it’s over. And despite decades of attempts, it’s nearly impossible to create a restoration that perfectly matches the real thing, with its function, structure and sheer number of species, each with its own complex relationships.

“Prairie plants sequester carbon, prevent erosion and provide key habitat for endangered wildlife like Monarch butterflies and rusty-patched bumblebees — ecosystem services desperately needed across the Midwest.”

To attempt a restoration at all, you need raw material — seeds. And for that, you need remnants. Scientists have dedicated their lives to mapping the few places where the prairie still exists, scouring the state on foot and sifting through old records as if panning for gold. Rosburg has found and saved more than 65 forgotten remnants through his organization, Drake Prairie Rescue. Many remnants exist on fragments of land deemed too rocky, sandy or steep to plow. Those remnants were often used as pastures — planted with a mix of non-native grasses and heavily grazed by cattle.

Examples of still-intact prairies on carbon-rich black soil are rare — primarily found in narrow strips along railroad tracks set aside before plowing began, and in pioneer cemeteries, where the impediment to plowing was cultural rather than practical. Those remnants tend to be the last and best records of what’s considered a typical prairie, with its rich, silty, loamy soil.

To date, there are 136 cemetery prairies across the Midwest, according to the Iowa Prairie Network’s list. While an Iowa cornfield’s species diversity can be counted on one hand, some prairie remnants contain as many as 250 species, according to data published last July by the Prairie Research Institute team in Illinois.

Unlike neighboring Illinois, which has an extensive state system to protect its rare native prairies, wetlands and forests, Iowa has set aside almost nothing: Nearly all the state’s land is privately held. In fact, 60% of Iowa’s public land is made up of roadside rights-of-way, or ditches, as they are more commonly known, according to the University of Northern Iowa’s Tallgrass Prairie Center.

In Iowa, cemeteries with fewer than 12 burials in the past 50 years are officially designated as pioneer cemeteries, which allows counties to relax mowing and restore prairie — although that doesn’t always happen in practice. Still, these township-owned pioneer cemeteries serve as de facto prairie nature preserves, islands of tenuous conservation for rare insects and plants — as long as townships OK it — in a sea of destruction.

Due to climate change, the wet Midwest is becoming even wetter, which means that prairie remnants are slowly transitioning to woodland in the absence of fire. Absent any management, a prairie can disappear in as little as 30 years, Laura Walter, a University of Northern Iowa biologist, told me. “Rescuing” remnants, as Rosburg does, is an active process that involves convincing townships to conduct controlled burns and weed out invasive species in their cemeteries.

And these prairie preserves have come in handy. They’re models for what some scientists call artisanal restorations — small-scale prairies conjured forth on private land, often with great care and dedication to exactly recreating what’s been lost. But remnants like Rochester are also helping bring back prairie at a larger scale. 

In the 1990s, Iowa lawmakers mandated prairie plantings along state highways and provided incentives for counties to do the same to help combat soil erosion and reduce mowing and herbicide use that polluted waterways. But the Tallgrass Prairie Center, which operates the state’s roadside vegetation program, couldn’t find prairie seeds readily available for sale.

So, they had to start from scratch: collecting seeds from cemetery prairies and other remnants, learning to germinate and grow the plants in their greenhouse and production plots, and then donating seeds to seed companies — and teaching those companies how to grow the plants — to scale up production.

Before they started, prairie blazing star, a common Iowa prairie flower, could only be purchased from the Netherlands, where it was a popular cut flower, said Laura Jackson, the Tallgrass Prairie Center’s director. Now, she told me, it’s one of dozens of regional ecotype seeds that counties can use to restore prairie along their roads. At last May’s annual spring seed pickup at the center’s warehouse in Cedar Falls, Iowa, trucks from 46 Iowa counties hauled away 19,000 pounds of prairie seed — big bluestem, switchgrass, prairie clover, asters, coneflowers and more — originally sourced from prairie remnants like Rochester. To date, some 50,000 acres of roadsides have been planted with native grasses and wildflowers.

Restoration is about preparing Iowa for the future rather than trying to revert its landscape to the 1800s, Jackson told me. On a practical level, prairies provide myriad benefits, especially in light of climate change, that are more important than ever, including soil stability, carbon storage, flood mitigation, fire resilience, drought resistance and habitat for pollinators. But because it’s so hard to predict what will survive amid a changing climate, it’s crucial to maximize genetic diversity by sourcing seeds from remnants across the state, Jackson told me.

“Prairie once stretched from Montana down to Texas and east into Ohio, over a million square miles. Iowa was once the beating heart of the American Central Grassland.”

Because Iowa is a relatively young landscape, geologically speaking, only a handful of prairie plants have gone extinct, and most species are still widespread. In parts of the country that haven’t been wiped clean by glaciers as recently, plants have evolved to become highly local, “endemic” to specific niches, Chris Benda, an Illinois botanist who regularly conducts plant surveys, told me.

Even though Iowa’s prairie survives today primarily on scattered fragments, many of its plants once thrived across the state. That means the seeds of Iowa’s great prairie still exist. From pioneer cemeteries, managers can source the original seeds of Iowa’s landscape and use them to grow prairie at scale.

Left: Old gravestones at Rochester Cemetery showing the Howe family plot. The Howe family still lives in Cedar County and lets the prairie grow wild around the old settlers’ stones, since that’s how the cemetery would have looked when the settlers arrived. (Christian Elliott) Right: The stone visible here is Adam Graham’s; he left money in his 1850 will to purchase the land that is now Rochester Cemetery. (Christian Elliott)

Prairie Or Cemetery?

At Rochester Cemetery, others began to arrive for the day’s garlic mustard pull: Dan Sears, an organizer for the nonprofit Iowa Prairie Network; Walter, who runs the prairie plant research program at the Tallgrass Prairie Center; and a dozen locals. Volunteers tucked their jeans into their socks to avoid tick bites, grabbed bags and donned gardening gloves.

Sears explained what garlic mustard — the non-native species encroaching on this tiny prairie remnant — looks like: toothed leaves and delicate white flowers. He added that volunteers should also be on the lookout for another non-native plant, showy stonecrop (which he referred to as “sedum”), which could compromise the quality of the prairie remnant.

I noticed Thomsen tense beside me as she piped up: “I need to investigate first before you pull sedum!” The cemetery’s prairie is speckled with sedum and other long-naturalized “invasives,” from lilacs to day lilies, that were planted over centuries to honor loved ones. Thomsen relies on those plants to find unmarked graves in a cemetery without formal records, she told me. She even planted a peony bush to help her find her own family’s graves amid the tallgrass. “Just because you don’t see a headstone does not mean there’s not somebody there!”

Sears held up his hands to Thomsen in surrender: “Her word is law today.”

Their interaction was the first hint at a conflict that has come up time and again here — between what’s considered natural or local, and invasive or foreign, among both plants and people. Rochester draws outsiders to an unusual degree for a rural Iowa town. For years, prairie enthusiasts like Longmire, environmentalists, AmeriCorps volunteers and university scientists have taken the Rochester exit off Interstate 80 to visit this cemetery. 

At times, visitors have collected seeds or even plants without permission. The late Diana Horton, who long ran the University of Iowa herbarium and created the most complete list of Rochester’s some 400 species, once cut down several of the prairie’s red cedars, much to Thomsen’s chagrin. The trees are native to the area (“It’s called the Cedar River,” she quipped), but not to oak savannas. Some locals, who come to the cemetery simply to mourn their loved ones, see the outsiders themselves as the invasive species. Of course, it’s a matter of perspective — descendants of pioneers here can trace their ownership back to the original land stolen from the area’s Indigenous peoples.

But the biggest point of conflict, here as at prairie cemeteries across Iowa and Illinois, comes from locals with varying ideas of what a cemetery should be. Rochester Township owns the cemetery, and its trustees manage it, along with most of the town’s affairs. Most of Iowa’s cemetery prairies are no longer active, working cemeteries. That makes it easier for conservationists like Rosburg to make the case to trustees for controlled burns and other active management strategies — the prairie is part of the pioneer history of those cemeteries, something to be preserved. But Rochester still has burials every year, which heightens tensions.

The Nature Conservancy recognized Rochester as a high-quality site for prairie plants back in the 1980s and got permission to do a controlled burn then. But its proposal to cease burials there to prevent damage to prairie plants was “incendiary” to locals, Longmire told me. Since then, fierce debates have arisen repeatedly over proposals to mow more frequently — in the 2006 election, Thomsen told me, one of her aunts tried to oust an incumbent trustee solely over the push for more mowing.

But infrequent mowing is what preserved the prairie. Rochester was hayed for livestock under pioneer ownership and, more recently, due to limited staff time and township funding, mowed annually in the fall so mourners could find their family stones. That cadence mimics the fires and grazing by bison and livestock that historically rejuvenated prairie, keeping woody plants at bay.

“Compared to other forms of American wilderness, prairies are hard to love — they don’t easily fall into the category of the sublime like giant sequoias or Yosemite waterfalls. You have to get really close to appreciate the complex beauty.”

There are always residents who want this cemetery to resemble the familiar urban variety, Sarah Subbert, Cedar County’s naturalist, told me. “Well, that’s not what Iowa was … If you mowed it every week, you wouldn’t have that diversity out there at all.”

Some residents take mowing around their family stones into their own hands, having been officially permitted to do so by management rules enacted in 2016. This has resulted in a more traditional-looking patch of close-cropped grass at the center of the cemetery surrounding the most recent burials, encircled by prairie on all sides — a sort of compromise visible on the landscape.

Pedee Cemetery, an example of a typical country cemetery in eastern Iowa. Photo by Stephen Longmire from his book, “Life and Death on the Prairie” (George F. Thompson Publishing, 2011).
Left: A hillside in Rochester Cemetery with black-eyed Susans and black oak. (Stephen Longmire/”Life and Death on the Prairie”) Right: A farm near Rochester, Iowa. (Stephen Longmire/”Life and Death on the Prairie”)

On Nature & Culture

I fell in love with tallgrass prairie as an undergrad at Augustana College in Rock Island, Illinois. Not with the plants, as many of my botany peers did, but with the idea of prairie as a human construct. If you try to fence off a prairie and preserve it — freeze it in time — it’ll disappear as woody plants and trees slowly encroach. That was a point of fierce debate in the 1980s and ’90s, when conservationists like Betz, the early discoverer of cemetery prairies, and Steve Packard in Chicago advocated for controlled burns and more active management of prairie remnants and restorations.

Critics saw restoration as gardening or meddling with nature. I thought of the vast western nature preserves that William Cronon described in “The Trouble with Wilderness,” and the irony of the government ousting the area’s Indigenous peoples — who had been stewarding the land — from their homes to create national parks that preserved what the government now recognized as wilderness. Nature has always been a part of the human realm. But prairie especially so.

“The whole ‘let nature take its course’ thing, or wilderness as a place without people, all those things break down very quickly in the tallgrass prairie,” Helzer, who manages thousands of acres of prairie in Nebraska, told me.

So I started seeking out prairies and other native ecosystems in Iowa and Illinois as a restoration volunteer. I pulled and cut invasives like buckthorn and multiflora rose and helped prepare for burns. When Rock Island decided to reintroduce prairie in a historic, Victorian-style, manicured park near my college, I dedicated my senior thesis to assessing how community members felt about the effort.

What I learned really surprised me — residents used words like “abandoned,” “unkempt,” “trashy” and “unwelcoming” to describe the unmowed areas. Several told me they felt like the “wild” had “invaded” the park and worried about this inviting “vandalism and crime” or “undesirable” people. That’s a conflation — famously made in New York City’s broken windows policing initiative — that some anthropologists have deemed “trash talk.”

To be fair, the initial restorations were of low quality. The parks department, perhaps unfamiliar with the history of prairie management, which requires careful selection and seeding of native species and controlled burns, took a laissez-faire approach. Later, the city acknowledged the “naturalized” areas weren’t exactly beautiful at first and began to plant more prairie grasses and flowers. But the negative attitudes stuck with me, long after I graduated. The nature-culture divide, established over two centuries of American civilization, is a challenge to bridge in the city.

Parks and graveyards are both “memorial landscapes,” Longmire writes in his photography book about Rochester, “Life and Death on the Prairie,” places where nature is manipulated to human ends. But cemeteries are culturally sacred places. That’s why I had to see Rochester’s cemetery prairie for myself. What way forward — if any — had its managers figured out to help with the coexistence of not just plants but also culture?

Volunteers at the garlic mustard pull organized by the Iowa Prairie Network fill buckets with uprooted invasive plants. (Christian Elliott)
Left: Volunteers search the prairie for garlic mustard and other invasive plants encroaching from the woods on all sides. (Christian Elliott) Right: Jacie Thomsen, the cemetery’s burial manager, in a quiet moment leaning against the prod she uses to find lost, buried markers. (Christian Elliott)

People Of The Prairie

Back at Rochester, Thomsen led me away from the garlic mustard pull to show me her favorite part of the cemetery. She grew up just to the north and spent her summers here with her best friend, who once eerily foretold that Thomsen would someday become the cemetery’s guardian. 

In 2011, the township asked her to become a trustee and the burial manager.

Even setting aside its sprangly prairie vegetation, Rochester is a chaotic sort of cemetery. A resident can pick a plot, but that doesn’t guarantee it will be available. (“Somebody might already be there,” Thomsen told me.) On a metal park bench under an oak, Thomsen unrolled a copy of a survey from the 1980s with graves marked with little Xs: “It’s accurate to a degree,” she said.

“Most of a prairie plant’s biomass is underground, in the form of deep root systems that allow it to spring back to life after frequent fires.”

Thomsen’s found hundreds of unmarked graves with her trusty prod and dug up and restored many broken and long-forgotten stones — as of December 2025, she was up to 1,061. And after 15 years, she knows where all her “residents” are — and all their stories. She’s met their descendants and walked with them to their long-lost relatives. She’s dug through newspaper archives for obituaries and uploaded records to FindAGrave.com. Growing up, she wanted to be an archaeologist.

Surefooted in the tall grass, Thomsen led the way uphill to a spot near the cemetery’s boundary fence, far from the mustard-pulling crew. Here we visited Rebecca Green, who died on Sept. 25, 1838, at the age of seven months — the cemetery’s oldest grave, Thomsen told me. Green is surrounded by pink prairie phlox and purple columbine, as she would have been when her parents, Eliza and William Green, buried her here next to where they’d eventually be laid to rest. Thomsen wondered aloud if they’d picked this place for its colorful flowers. The Greens arrived in Rochester in 1837, just a year after its founding, from Kentucky and Maryland, respectively. Their home served as a hotel for travelers and a stop on the Underground Railroad.

“When you come here, you’re looking at what they saw and what made them stay,” Thomsen told me. “This is the pioneer’s gift that they left for us. We are respecting that, even if everybody doesn’t get it, when they’re so used to manicured, boring.” She’s protective of this place, and her job isn’t easy. Sometimes trustees make decisions without her, mowing too early last year, for example, which prevented a controlled burn she was planning. She’s used to having to fight to be heard. She yanked poison ivy off a newer stone that reads “Captain Andrew Walker” — a Mexican and Civil War veteran buried in “a pauper’s grave” after he died at the Mt. Pleasant Asylum for the Insane. Thomsen tracked down his pension file and honored him with a stone on his family’s plot at Rochester.

I asked Thomsen whether she knew where she wanted to be buried. And of course, she did. She’s known since she was a child. The highest hill along the back fence, under an oak — a spot that’s always called to her. Thomsen gets goosebumps thinking about it. “There’s energy to the land, and we all leave our little imprint somehow.” The cemetery remembers the prairie, and the prairie remembers the people buried within it. Like the Greens, Thomsen’s family is mostly here, “four rows of kin” — her grandma and grandpa, her aunt, three uncles, her sister-in-law, two of those lost just last year. Her own staked-out spot is some distance away from the family plot — “Sometimes you can be a little too close to family, even in death.”

When Longmire spent his years in Rochester, he lamented that there was a “dearth of people who could see both sides of the coin,” he told me — to appreciate Rochester as both a natural and cultural wonder. But just as he left, Thomsen arrived on the scene. In her big binder, she keeps a pamphlet from his book talk. She knows all the stones, but she also knows the prairie — the common names (and some she’s made up) for each of the plants and the spots they come up every year, including the secret place the lady slipper orchid grows. She knows each of the towering oaks by name — the bear tree (a burr oak with a burl that resembles a cub climbing one side); the guardian, which stood tallest on the hill before a derecho felled it. She cried and mourned its death.

I had expected conflict at Rochester. But instead, I found someone who cared enough to shepherd compromise. If it can be done here, on hallowed ground, maybe it can be done anywhere.

A hill of blooming shooting stars, native to North America and one of the species being actively protected by restoration efforts, in the heart of Rochester Cemetery. (Christian Elliott)

Life Persists

Lost in thought, I realized Thomsen had taken off down the hill. I waded after her. She wanted to point out a new plant she’d spotted to Sears, the mustard pull organizer. Each little stalk was ringed with a spiraling firework of yellow blossoms.

“Oh, that’s lousewort!” he told her. “Laura would be really excited to see that!”

Thomsen cupped her hands around her mouth and shouted for Laura Walter.

“The cemetery remembers the prairie, and the prairie remembers the people buried within it.”

Walter, the scientist, wandered over, a bag overflowing with uprooted garlic mustard invaders tied around her waist. She excitedly knelt to examine the tiny plant, lifting her wide-brimmed hat. Finding lousewort usually means you’re dealing with high-quality remnant prairie, she told me, a “holy grail.” It’s partially parasitic, with roots that penetrate those of other plants underground to pirate water and mineral nutrients. In doing so, it suppresses its victim’s growth and keeps the prairie more open, promoting diversity. That kind of complex relationship is hard to recreate when doing restoration work. The plants nearby did look a little droopy. Had it already raided their nutrients and left a warning sign for others? I asked.

“It’s tantalizing to think about,” Walter laughed. She took a geolocated photo, and later, with the township’s permission, returned to collect its seeds. 

Walter then pointed excitedly at a blooming shooting star a few feet away. As we watched, a large bumblebee hovered upside down under its blossom and landed. In the spring, new bumblebee queens fly great distances to start new colonies, she told me. They depend on a few early blooming prairie flower species, like the shooting star, which have co-evolved to release pollen at specific bumblebee buzz frequencies.

“It’s funny, this is a cemetery, it’s where you honor the dead,” she mused. “But here you can also come and honor an abundance of life.”

Walter has collected shooting star seeds from remnants across the state, but they’re tricky to propagate. In the first growing season, a plant produces tiny seed leaves, a centimeter across. The following year, it gains a tiny tuft of true leaves. It can take five years to flower and produce seeds. Prairie restoration managers typically favor vigorous, fast-growing species that can outcompete invasive species and establish quickly.

Sitting in the prairie, I came to appreciate its beauty. The sheer complexity surrounding us was overwhelming. And it continued, invisibly, beneath the soil — every remnant prairie hosts a fungal and microbial community unique to its soil type and plant community.

“Think about all the things that we don’t know, and that don’t come back on their own,” Walter said. “We have to preserve those relationships in the places where they exist until we understand them.”

Remnants like Rochester Cemetery are models for what scientists call artisanal restorations — small-scale prairies conjured forth on private land — and are helping bring back prairie at a larger scale. (Christian Elliott)

Fate Of The Prairie

The future of tallgrass prairie remains uncertain. The Midwestern states are speckled with more and higher-quality restorations today than when efforts began in the 1980s; however, Iowa’s unique roadside vegetation program depends on county and state-level support, which is at a low point under the current administration.

The Burr Oak Land Trust, an Iowa conservation group that for years sent AmeriCorps volunteers to Rochester and other remnant prairies to pull invasive species and conduct prescribed burns, lost its funding due to Department of Government Efficiency cuts this year. The Prairie Research Institute in Illinois lost $21 million in federal funding last fall. And opt-in programs, like the Conservation Reserve Program, where the federal government pays farmers to take marginal land out of crop production and return it to prairie or wetland, depend on the whims of the market, Jonathan Dahlem, an Iowa State University sociologist who studies farming conservation practices, told me. When corn and soybean prices rise, as they have over the past two decades, farmers are eager to plow up restorations to seed row crops even if yields aren’t expected to be high.

Rosburg said he finds hope in the increasing number of remnants discovered each year on forgotten pastures, along roads and in cemeteries. Universities like to talk about the “outsized impact” of small restorations, Jackson told me. But in reality, “every little bit helps a little bit,” she said.

I find my own hope in this place and in these people. At the end of the day, after the garlic mustard pull was over, Thomsen and Walter walked together up the hills, sharing their intimate and yet very different knowledge of the place.

Longmire calls Rochester Cemetery a memento mori — a reminder for living visitors of both their inevitable fate and of what Iowa lost. Funerals, gravestones and cemeteries are for the living — and this is a place that is alive, with plants and humans. Rochester is a time capsule of the past and a key to the future.

As I left, a truck and trailer pulled into the prairie to unload a riding lawn mower. The roar of the engine drowned out the buzz of insects as its operator carefully mowed around their family stone. It’s not a sight you’d see in a typical prairie. But here, it’s what compromise looks — and sounds — like.

I later learned that the man who had mowed around the gravestones of many Rochester families for years as a public service had passed away that same day. The sea of tallgrass grew unchecked in the following months, surging against the gravestones like waves — a constant reminder that he was gone. Concerned families have started asking Thomsen how the cemetery will be maintained going forward — how nature will be held at bay. A similar series of events sparked the big fight over mowing back in 2006. I worry a little about the prairie’s future and Thomsen’s hold over the fragile balance here.

“But isn’t it wonderful,” Longmire asked me, “to have a place that people take so seriously to fight about how it’s managed?”

A Test Of Great Power Spheres Of Influence https://www.noemamag.com/a-test-of-great-power-spheres-of-influence Mon, 05 Jan 2026 17:30:36 +0000

The most dangerous moment in geopolitics is when the old order no longer prevails, but the new one is still unsettled.

In this circumstance, there is not so much a vacuum as a cloud of uncertainty. Everything is up in the air. Expectations, assumptions and intentions are scrambled. Fearing lost advantage in the face of these unknowns, worst-case scenarios drive the build-out of capabilities. Acting in the breach is a wild guess, the possible outcomes of which cannot be assuredly weighed.

That is the situation we are in today as we witness the nascent revival of Great Power spheres of influence being tested out in Venezuela, Ukraine and Taiwan.

Among the more shocking turns of the Trump administration is the unabashed throwback to the Monroe Doctrine, enforced by gunboat diplomacy in Latin America, replete with the remarkable claim that the national patrimony of Venezuela’s oil resources is rightly the province of U.S. oil companies.

As Trump himself put it over the weekend after Maduro’s audacious capture, “We built Venezuela’s oil industry with American talent, drive and skill, and the socialist regime stole it from us … This constituted one of the largest thefts of American property in the history of our country.”

It remains to be seen how that rationale for intervention squares with U.S. Secretary of State Marco Rubio’s claim that, under the restored tutelage of U.S. companies, the oil industry will be “run for the benefit of the people.”

Whether the plan now is to “run the country,” as Trump put it, or Rubio’s scheme of coercing the remnants of the regime to bend to U.S. will, both run entirely counter to the MAGA base’s aversion to global meddling, regime change and “forever wars.” What appeals to that constituency is the special military operation against drug cartels, though the robust demand to get high on the home front, which so lucratively drives supply, is rarely mentioned.

To be sure, Maduro was a bad seed. No love was lost for the repressive caudillo in Caracas among most of the other countries in the region. But few, especially Mexico, will forswear the nationalist identity that legitimizes their rule by welcoming the return of imperial imposition from the North.

After the Japanese prime minister said in November that an attack on Taiwan would be a national security threat to her nation, and after an end-of-year $11 billion U.S. arms sale to fortify Taiwan as a defensible “porcupine,” China conducted its closest and most aggressive war games ever in the seas surrounding the island democracy. The exercises were meant to demonstrate the locked-and-loaded capabilities for achieving its oft-pronounced intent of bringing Taiwan back into the national fold by force if necessary.

“The outcome of these struggles will determine the nature of the next world order as it reverts to Great Power spheres of influence.”

Despite urgent and ongoing peace talks over Ukraine, it is hard to imagine that Russian President Vladimir Putin will ever negotiate away his vision of a reunified “spiritual Rus.” His response to U.S. and European proposals so far has been to feign interest while doubling down with vicious military attacks on civilians and energy infrastructure in the dead of winter.

Absent European resolve and ready military capacity as U.S. commitment wanes, why would Putin do anything other than dig in and wait things out while doing as much damage as possible until he gets his way?

When The Dominoes Fall

The outcome of these struggles will determine the nature of the next world order as it reverts to Great Power spheres of influence.

As it stands now, the norm of inviolable national sovereignty sanctified by the post-World War II order hangs by a tenuous thread that is further frayed daily by the unilateral transgressions of the world’s major players. When one moves, as Russia already has with the invasion of Ukraine and the U.S. has now with its ousting of Maduro, the falling dominoes of the old order are set in motion elsewhere. Is Taiwan the next in line?

If each gets its own way with impunity, how will the others respond?

Russia and China will surely see America’s intervention in Venezuela as permission, and even justification, to do as they similarly wish in their own domains. While Mexico’s dependence on U.S. markets will constrain its margins of maneuver, other large powers in Latin America, such as Brazil, will inevitably seek to strengthen ties with China as a buffer against the return of the old imperialism, making the continent another proxy battleground as during the Cold War.

When push comes to shove, will the U.S. really risk going to war with a rejuvenated, high-tech and nuclear-armed Middle Kingdom over Taiwan, or simply relent in the name of a pragmatic peace?

Will the U.S. finally tire of Europe’s carping dependence on American resources to defend Ukraine and just give in to Putin’s single-minded persistence as a fait accompli?

When all that is said and done, the logic of hemispheric hegemony will deem the annexation of Greenland and the Panama Canal necessary on national security grounds because of Russia’s reach into the Arctic and China’s global assertiveness.

This unraveling string of eventualities over the coming years will cement the contours of what comes next.

Of course, successful resistance by the outgunned is always a possibility. In Ukraine, a prolonged armistice, as in a divided Korea, cannot be ruled out. But the “correlation of forces,” as the Soviets used to say, seems aligned against the fortunes of lesser powers who, in the end, may have little choice but to accommodate the might of the most powerful.

Noema’s Top Artwork Of 2025 https://www.noemamag.com/noemas-top-artwork-of-2025 Thu, 18 Dec 2025 15:41:01 +0000

by Hélène Blanc
for “Why Science Hasn’t Solved Consciousness (Yet)”

by Shalinder Matharu
for “How To Build A Thousand-Year-Old Tree”

by Nicolás Ortega
for “Humanity’s Endgame”

by Seba Cestaro
for “How We Became Captives Of Social Media”

by Beatrice Caciotti
for “A Third Path For AI Beyond The US-China Binary”

by Dadu Shin
for “The Languages Lost To Climate Change” in Noema Magazine Issue VI, Fall 2025

by LIMN
for “Why AI Is A Philosophical Rupture”

by Kate Banazi
for “AI Is Evolving — And Changing Our Understanding Of Intelligence” in Noema Magazine Issue VI, Fall 2025

by Jonathan Zawada
for “The New Planetary Nationalism” in Noema Magazine Issue VI, Fall 2025

by Satwika Kresna
for “The Future Of Space Is More Than Human”


Noema’s Top 10 Reads Of 2025 https://www.noemamag.com/noemas-top-10-reads-of-2025 Tue, 16 Dec 2025 17:30:14 +0000

Your new favorite playlist: Listen to Noema’s Top 10 Reads of 2025.

Daniel Barreto for Noema Magazine

The Last Days Of Social Media

Social media promised connection, but it has delivered exhaustion.

by James O’Sullivan


Beatrice Caciotti for Noema Magazine

A Third Path For AI Beyond The US-China Binary

What if the future of AI isn’t defined by Washington or Beijing, but by improvisation elsewhere?

by Dang Nguyen


Hélène Blanc for Noema Magazine

Why Science Hasn’t Solved Consciousness (Yet)

To understand life, we must stop treating organisms like machines and minds like code.

by Adam Frank


NASA Solar Dynamics Observatory

The Unseen Fury Of Solar Storms

Lurking in every space weather forecaster’s mind is the hypothetical big one, a solar storm so huge it could bring our networked, planetary civilization to its knees.

by Henry Wismayer


Sophie Douala for Noema Magazine

From Statecraft To Soulcraft

How the world’s illiberal powers like Russia, China and increasingly the U.S. rule through their visions of the good life.

by Alexandre Lefebvre


Ibrahim Rayintakath for Noema Magazine

The Languages Lost To Climate Change

Climate catastrophes and biodiversity loss are endangering languages across the globe.

by Julia Webster Ayuso


Vartika Sharma for Noema Magazine (images courtesy mzacha and Shaun Greiner)

The Shrouded, Sinister History Of The Bulldozer

From India to the Amazon to Israel, bulldozers have left a path of destruction that offers a cautionary tale for how technology without safeguards can be misused.

by Joe Zadeh


Blake Cale for Noema Magazine

The Moral Authority Of Animals

For millennia before we showed up on the scene, social animals — those living in societies and cooperating for survival — had been creating cultures imbued with ethics.

by Jay Griffiths


Zhenya Oliinyk for Noema Magazine

Welcome To The New Warring States

Today’s global turbulence has echoes in Chinese history.

by Hui Huang


Along the highway near Nukus, the capital of the autonomous Republic of Karakalpakstan. (All photography by Hassan Kurbanbaev for Noema Magazine)

Signs Of Life In A Desert Of Death

In the dry and fiery deserts of Central Asia, among the mythical sites of both the first human and the end of all days, I found evidence that life restores itself even on the bleakest edge of ecological apocalypse.

by Nick Hunt
