The Politics Of Planetary Color
Noema Magazine — Thu, 15 Jan 2026
https://www.noemamag.com/the-politics-of-planetary-color

When the first color photograph of Earth was captured from lunar orbit in 1968, millions around the globe saw their home in a new way. Rising from darkness above the moon, the planet appeared in breathtaking oceanic blue. Unlike the black-and-white Lunar Orbiter 1 frame taken two years earlier, “Earthrise” made the planet’s fragility legible and emotionally graspable. In 1972, “Blue Marble” added new depth, revealing Earth from the Mediterranean Sea to Antarctica in vibrant swirls of blue, brown, green and white.

These familiar sunlit hues fostered a politics of relatability, inviting belonging, and with it, a sense of responsibility for the planet.

An environmental consciousness began to crystallize. As historian Robert Poole notes in “Earthrise: How Man First Saw the Earth,” the Space Age flipped from a narrative of outward conquest to one of inward rediscovery. The first Earth Day was held in 1970, and the popular metaphor “Spaceship Earth” shifted from describing a technical vessel managed by engineers to describing a living, vulnerable biosphere requiring stewardship. Planetary survival became a mass political demand.

If color once taught us to see and value our planet, it now records how we are altering it.

“Black Marble,” a global composite image of the darkened Earth at night in 2012, revealed a web of golden yellow — electric constellations of urbanization and light pollution. More recently, a Nature analysis detected climate-driven trends in color across roughly 40% of the global surface ocean, observing that low-latitude waters are shifting from deep blue toward green as surface ecosystems reorganize. NASA’s PACE (Plankton, Aerosol, Cloud, ocean Ecosystem) mission captures this complexity with hyperspectral precision, reading the ocean’s spectral fingerprint to identify exactly which plankton populations could be driving the shift.

Similarly, from Alpine glaciers to the Greenland Ice Sheet, snow can flush red when snow algae bloom, and because those blooms darken the surface and reduce reflectivity, they can amplify melt, rendering a warming cryosphere newly legible. 

Color is not just how Earth shows itself; it can be diagnostic, even a narrative of change, inviting human response through visible nuance. It is a measurement and a mirror of our agency.

The planetary becomes political through color. The hues through which Earth appears in public decide what we notice and act upon. For us to become a planetary society, the colors through which Earth senses and is sensed need to be aligned. It is time to compose a planetary palette.

Colors Make History

Color has long organized politics in the open. The French tricolor cockade turned loyalty into something you could wear in the street. The suffragette palette of purple, white and green made support for women’s right to vote instantly legible across Britain and beyond. The Pan-African colors of red, black and green and the Aboriginal black, red and yellow flag in Australia condensed claims to land and self-determination into vivid emblems.

In Thailand, rival movements quite literally became “red shirts” and “yellow shirts,” with chroma standing in for competing sovereignties. Iran’s Green Movement used a single hue to signal reformist solidarity, just as Ukraine’s Orange Revolution did earlier with orange as a banner of contested legitimacy. These anecdotes are not about taste. They show how colors have repeatedly given politics a public body, allocating attention, rallying coalitions and making claims visible at a glance.

Historian Michael Rossi’s “The Republic of Color” shows how, at the turn of the 20th century, color science and its regulation reorganized modern life: Industrial dyes, standardized color languages and new instruments did not simply tint goods. They reshaped labor, markets and perception, turning the organization of color into a form of managing attention, desire and trust. The institutionalization of standards and techniques for collective perception bestowed color with political force.

Our planetary age echoes the industrial one in that regard. Where the earlier era effectively forged a republic of color for factories and mass media, the planetary age calls for a politics of planetary color.

Choices about how Earth’s processes are rendered — through hue, lightness, contrast, naming and disclosure — organize public perception and coordination, deciding what counts as common evidence and how we act together with Earth. Rossi’s larger point applies: Color infrastructures do not merely decorate an era, they constitute it. If the planetary is to be held in common, it must be legible in color.

“Color is not just how Earth shows itself; it can be diagnostic, even a narrative of change.”

Political theory has language for this. French philosopher Jacques Rancière’s “distribution of the sensible” names how regimes allocate what is perceptible and sayable before any statute is written. Like metrology’s units, calendars’ time zones, cartography’s projections and interface defaults, planetary color is a pre-legal order: a background regime that organizes what appears actionable before any law speaks.

The politics of planetary color therefore operates where aesthetic order becomes epistemic order. It is an arrangement of seeing and sensing that quietly conditions what can be argued, trusted and coordinated as our shared world.

Planetary Colors

Some planetary colors are physical, spectral and stubborn. Neptune’s saturated blue is methane subtracting red. The aurora’s iconic green is oxygen’s 557.7-nanometer emission. These are not metaphors but signifiers for materials; naming them as such helps ratify the processes that produce them. “Neptune Blue” or “Aurora Green” could easily link colors to our cosmic existence.

Other planetary colors are (re)made by cameras, algorithms and conventions. NASA’s “Blue Marble 2002,” for instance, was stitched from months of satellite observations into a seamless “true color” mosaic. Unlike Apollo 17’s single-exposure “Blue Marble” photograph of 1972, many “true color” Earth views are engineered, composited reconstructions.

“False color” composites and infrared-to-visible mappings, from the Hubble Space Telescope to the James Webb Space Telescope, are deliberate translation schemes that reveal what can be seen by choosing certain palettes.

An infrared view of the Pillars of Creation peers through interstellar dust, unveiling newly formed stars that are obscured in ordinary visible light. Here, color constitutes a designed translation of data rather than a mere passive recording of optical cues. Similarly, architect Laura Kurgan argues in “Close Up at a Distance” that satellite sensing and its visual languages translate dispersed Earth processes into legible — and political — images, a reminder that how we render planetary signals is already a choice about how we understand our world.

Within these regimes of visibility, what one might call “artificial color” — the deliberate abstraction that translates non-visible wavelengths and signals into visible hues — is a crucial epistemic step. By encoding data like infrared signals or chemical compositions into color, these images create knowledge rather than just recording it. That is authorship of planetary color.

We experience this not only in satellite images and space telescopes, but also in everyday life. Planetary colors pass through software and hardware, each imposing its own technical biases. The same image can look vivid on a phone and muted on an older laptop because device gamuts and color-management defaults differ. Regardless of the device, just as “Earthrise” and “Blue Marble” did for the modern environmental movement, planetary color operationalizes knowledge: It renders information actionable.

The Human Factor

Color carries ideas because it travels through perception. Three mechanisms are especially relevant. Color constancy is the brain’s habit of making an object’s color appear the same under different lighting: A blue shirt at noon still looks blue at dusk. Helpful in daily life, this can hide real differences in images unless a palette also signals illumination, which can reveal changes we would otherwise miss.

Pre-attentive salience describes how some color differences jump out before we consciously decide to look for them. This is why rainbow gradients can mislead us by overemphasizing small changes, whereas perceptually uniform scales, in which equal data steps are perceived as equal color steps and remain detectable to color-blind viewers, support honest detection.
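The “equal data steps, equal color steps” idea can be sketched in a few lines of Python. This is a toy illustration, not any agency’s actual colormap: a linear grayscale ramp (using Rec. 709 luma weights as a rough proxy for perceived lightness) yields identical lightness steps for identical data steps, while a naive HSV rainbow yields steps that vary wildly in size and even in sign.

```python
import colorsys

def gray(v):
    """Toy 'uniform' colormap: a value in [0, 1] maps to a gray RGB triple."""
    return (v, v, v)

def rainbow(v):
    """Toy rainbow colormap: sweep hue from blue (v=0) to red (v=1)."""
    return colorsys.hsv_to_rgb(0.7 * (1.0 - v), 1.0, 1.0)

def luminance(rgb):
    """Approximate perceived lightness (Rec. 709 luma weights)."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Lightness change per equal-sized data step, for each colormap.
vals = [i / 10 for i in range(11)]
gray_steps = [luminance(gray(b)) - luminance(gray(a))
              for a, b in zip(vals, vals[1:])]
rainbow_steps = [luminance(rainbow(b)) - luminance(rainbow(a))
                 for a, b in zip(vals, vals[1:])]

# Grayscale: every data step produces the same lightness step (~0.1).
# Rainbow: some steps are large, some small, some negative — lightness
# is not even monotonic, which is what misleads the eye.
print(max(gray_steps) - min(gray_steps))
print(max(rainbow_steps) - min(rainbow_steps))
```

The spread of the grayscale steps is effectively zero, while the rainbow’s lightness steps swing from strongly positive to strongly negative, which is exactly the property that makes rainbow maps overstate some changes and hide others.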

Affective priming describes the psychological mechanisms behind color’s ability to nudge mood and behavior. In achievement tasks, brief exposure to red can tilt people toward avoidance, which shows that color can shape judgment even when we believe we are acting autonomously.

Considered together, this affective palette of colors explains why the way we perceive the planet — be it through the hues of volcanoes and ice sheets, forests and rivers, or space weather and meteor showers — quietly changes what becomes noticeable, thinkable and actionable. 

If color is part of a yet-to-emerge planetary literacy, it must be multilingual, as the perception of color is not merely neurophysiological, but deeply influenced by culture. The World Color Survey extended linguists Brent Berlin and Paul Kay’s classic thesis that languages name colors in a predictable, universal sequence (the so-called “basic color terms”), revealing both recurrent patterns and striking partitions, such as “grue” categories that merge green and blue.

These partitions travel with power: Art historian John Gage’s archaeology of Western color and artist David Batchelor’s account of “chromophobia” show how empires, religions and modernist canons scripted the meanings of, and the values attributed to, different hues.

“For us to become a planetary society, the colors through which Earth senses and is sensed need to be aligned.”

A planetary palette is therefore less a single key than an interoperable set of keys: Process-based names, such as a “Saharan Dust Ochre,” can meet local lexicons so colors carry physics and culture at once.

The Earth Factor

Not only do we see Earth through color, Earth, in a real way, senses through color. Sunlight arrives as a spectrum, and the planet sorts it: Oceans swallow reds and return deep blues; clouds and ice throw broad light back to space; dark soot on snow shifts whiteness to gray and, at the same time, influences the planet’s temperature. In the air, color steers chemistry: Aerosol-laden skies redden, changing how quickly sunlight breaks apart molecules and how much energy the lower atmosphere keeps.

Living organisms are also optical instruments. Leaves are tuned to red and blue, using chlorophyll to absorb and harvest daylight; plant phytochromes register the color of light at dusk to tell seasons apart; phytoplankton ride the green-blue gradient to time their blooms; some marine microbes even run retinal-like photochemistry that taps the green bands of the sea.

Corals fluoresce, using color as both a shield and a stress signal, while the “vegetation red edge” — the sharp spectral jump between plants absorbing red light and reflecting near-infrared — is both a planetary fingerprint and a byproduct of how plants detect and manage light.

Color is not only an appearance but an interface: a surface upon which energy becomes information and the planet’s materials, organisms and spheres register, store and respond. Designing the planetary color palette, then, is not just designing what we see, it is learning to handle color in the wavelengths Earth already uses to sense its way forward.

To do so, we can refer to Abelardo Gil-Fournier and Jussi Parikka’s “Living Surfaces,” in which they unveil how Earth is made of “living surfaces”: interfaces where plant and photographic surfaces fold into one another, and where light functions at once as metabolized signal, registered through photosynthesis, and as measurable inscription, captured and processed into images. In this account, the two surfaces converge through a cultural technique that builds surfaces from measured light.

Approaching planetary color means working within these medianatures. It requires engaging cultural techniques such as calibration, mapping and ground-truthing that actively format Earth’s surface into data. These are the tools that translate raw biological life into the images we see.

In the planetary age, this means that color as experienced by humans is only one narrow slice of a wider spectral life. As Ed Yong reveals in “An Immense World,” the more-than-human world can parse wavelengths that we cannot, ranging from ultraviolet and infrared to the polarization of light.

The pre-legal order constituted by planetary palettes — colormaps, legends, thresholds, names and so forth — must be framed as a situated human translation: explicit about its vantage, inclusive of color-vision diversity and capable of turning non-visible spectra into shared, contestable public signals.

Color As Infrastructure

Artworks such as James Turrell’s immersive Ganzfeld installations, which dissolve depth perception in edgeless fields of pure colored light, and Olafur Eliasson’s “The Weather Project,” which suspended a giant, mist-shrouded artificial sun inside the Tate Modern art gallery to gather crowds in a shared amber glow, demonstrate how color fields can retune attention and assemble a public.

Hélio Oiticica’s “Parangolés,” wearable capes of saturated color first activated with the Mangueira samba community in Rio, turned hues into a collective act in the street, where color was not only seen but engaged with, danced with and debated as a public form. Color here is not a matter of mere aesthetics: These are political arguments in color.

Angela Snæfellsjökuls Rawlings stages a deliberative assembly as a participatory performance in the artistic-activist project “Motion to Change Colour Names to Reflect Planetary Boundary Tipping Points.” By framing the renaming of colors in response to climate crises as a socio-legal innovation, Rawlings treats the palette not merely as a visual code, but as a parliamentary act.

In a similar vein, entrepreneur Luke Iseman and designer Andrew Song have tested sulfur-dioxide balloon releases with their startup Make Sunsets, a geoengineering gambit that asks: If aerosols cool the planet, how red would (or should) our sunsets become? These discussions showcase the widespread awareness of the fact that any large-scale change in the planet’s color — be it our skies, oceans or land cover — could deeply affect humans’ relationship to their planet.

“Colors have repeatedly given politics a public body, allocating attention, rallying coalitions and making claims visible at a glance.”

Most often, color slips into planetary politics quietly, as the mood of a map, the warning of a dashboard, the tint of a season, the hue of a banner. Large parts of everyday coordination already turn on this quiet code.

In Europe, the purchase of a new appliance entails reading a green-to-red efficiency bar. In France, the vigilance weather map organizes municipal and household responses to dangerous weather events from heatwaves to floods through a four-color logic. And Mexico City’s Hoy No Circula program turns color into choreography at urban scale: Cars carry colored hologram stickers linked to plate numbers — yellow, pink, red, green, blue — which determine no-drive days and restrictions during pollution episodes.

Do these color schemes help societies think of and relate to the planetary?

In many countries, air pollution is communicated through a color-coded air quality index (AQI): In the U.S., the AQI runs from green (“good”) through red (“unhealthy”) to maroon (“hazardous”). Across Europe, comparable indices pair color bands with explicit health advice for the general public and sensitive groups on when to modify outdoor activity.

However, as architect Nerea Calvillo argues in “Aeropolis,” air and air pollution are not a homogeneous “outside.” They are co-produced by bodies and atmospheres, as well as by sensors, indices, visualizations, infrastructures and the regulatory and economic logics that often perpetuate exclusion and inequity.

That means that color-coded atmospheric representations are not neutral readouts but part of the apparatus through which uneven exposures become publicly legible and actionable: useful for collective response, yet always at risk of flattening differences among pollutants, places and vulnerabilities.

In each case, color is not decoration around the facts. It is part of how the facts enter public life. Just as the way we color-code the planet influences what we know about it, this is an epistemic and political practice. A poorly designed thermal map might hide extremes, whereas a well-designed one can reveal patterns at first glance.

Planetary Palette

Right now, what passes for a planetary palette is mostly an accident of defaults: device settings, stock colormaps, ad-hoc choices. Making the implicit explicit means surfacing that palette and recomposing it with Earth. The intentional making of such a palette calls for at least four moves.

First, open a conversation and reframe. The palette might be treated as a public invitation — not décor, but a shared claim tested with Earth. Rather than green branding and device defaults, Earth’s own signals would meet human ways of seeing: chlorophyll greens, auroral oxygen’s green, aerosol-red sunsets. In this register, color would work as a relay. Measurements would become proposed hues; scales would aim to make equal changes look like equal changes for aging, color-blind and standard eyes alike; and color names would carry causes.

The palette would also point to possibility, not only alarms: cool corridors of “Canopy Jade” and “Breeze Sapphire” for walking and schooling; “Nocturne Blue” nights that would restore a shared sky; “Pulse Cyan” river rises that would coordinate fisheries, ferries and floodplain planting. The aim would be co-creation: open, revisable and applicable to how the planet already speaks in color.

Second, convene to formulate principles and compose first prototypes. A planetary color convention could seat Earth-observation scientists, artists, designers, accessibility experts, linguists, anthropologists, educators, journalists and policymakers, so palettes are co-sensed, legible and usable where decisions happen.

A few prototypes could focus on specific processes, such as Breeze & Shade (urban cool corridors from canopy transpiration and wind pathways) and Night-Sky Commons (dark-sky windows from cloud aerosol and light-pollution data), developed under agreed principles such as:

  • Start from the planet, not moods: Tie hues to earthly processes.
  • Make it beautiful: Compose for dignity and delight.
  • Design for adaptation: Establish a shared backbone with room for local adjustments.
  • Make it accessible and fair: Use color-vision inclusivity and strong contrast.
  • Be transparent: Indicate what was sensed and why each hue was selected, and visually signal data uncertainty.
  • Build for learning and evolution: Test with real people and devices, allowing new uses and meanings to develop over time.

Third, give this work an institutional home. Rather than a single bureaucratic body, this could take the form of a distributed observatory run by a consortium of science agencies, design labs and museums. Here, satellite and field streams would be translated into images accompanied by concise color briefs in the form of accessible guides explaining the data source and usage rules for each hue.

“Most often, color slips into planetary politics quietly, as the mood of a map, the warning of a dashboard, the tint of a season, the hue of a banner.”

Simultaneously, the observatory would run design and legibility trials, co-creating and testing new maps with diverse communities to ensure they are understood and welcomed before release. A living lexicon would record process-bearing names, and palette hearings would be held when colors might steer broader public action. Crucially, an ethics log and version history would track why visual choices were made, ensuring that changes in the planet’s appearance are traceable decisions rather than hidden defaults.

In partnership with city agencies, researchers, artists and frontline communities, the observatory would commission experimental pilots, such as public light installations or interactive urban dashboards, and publish open-source resources, like accessible colormap plugins for mapping software.

Fourth, evaluate and refine the palette based on evidence. The prototypes should be treated as civic infrastructure and assessed across a set of dimensions. Do they read quickly and correctly? Do they steer inspiration to reshape human-planet relations? Do they prompt the right actions, and are they accessible regardless of visual ability or device? Small pilots and before-and-after rollouts would inform a public log of what changed when a color band flipped, and a regular review cadence would adjust the scheme. The goal is a shortening loop between planetary signal, legible appearance and coordinated response.

Rossi shows how the industrial age wired color into institutions so thoroughly that perception itself became a site of politics. The planetary age inherits this lesson at a different scale: Ocean color trends now register ecological reorganization; hyperspectral satellites are built to track it; cross-cultural surveys reveal that our vocabularies for color are learned, mobile and contested; and contemporary art keeps demonstrating that color can gather strangers into a public around a shared field of sensation.

More than a single palette, the planetary colors would be a set of tested, explained and teachable mappings to help people sense earthly processes together. If the 19th-century “republic of color” standardized perception for an industrial order, the 21st-century equivalent might standardize disagreement with shared references — enough coherence of planetary colors to argue about the same world.

This is planetary politics in practice: a palette co-authored by Earth’s own signals and by human institutions that translate spectra into public reasons. If colors are integral to planetary politics, then designing the palette is not a cosmetic but a constitutional practice.

The Mythology Of Conscious AI
Noema Magazine — Wed, 14 Jan 2026
https://www.noemamag.com/the-mythology-of-conscious-ai
For centuries, people have fantasized about playing God by creating artificial versions of human beings. This is a dream reinvented with every breaking wave of new technology. With genetic engineering came the prospect of human cloning, and with robotics that of humanlike androids.

The rise of artificial intelligence (AI) is another breaking wave — potentially a tsunami. The AI systems we have around us are arguably already intelligent, at least in some ways. They will surely get smarter still. But are they, or could they ever be, conscious? And why would that matter?

The cultural history of synthetic consciousness is both long and mostly unhappy. From Yossele the Golem to Mary Shelley’s “Frankenstein,” HAL 9000 in “2001: A Space Odyssey,” Ava in “Ex Machina” and Klara in “Klara and the Sun,” the dream of creating artificial bodies and synthetic minds that both think and feel rarely ends well — at least, not for the humans involved. One thing we learn from these stories: If artificial intelligence is on a path toward real consciousness, or even toward systems that persuasively seem to be conscious, there’s plenty at stake — and not just disruption in job markets.

Some people think conscious AI is already here. In a 2022 interview with The Washington Post, Google engineer Blake Lemoine made a startling claim about the AI system he was working on, a chatbot called LaMDA. He claimed it was conscious, that it had feelings, and was, in an important sense, like a real person. Despite a flurry of media coverage, Lemoine wasn’t taken all that seriously. Google dismissed him for violating its confidentiality policies, and the AI bandwagon rolled on.

But the question he raised has not gone away. Firing someone for breaching confidentiality is not the same as firing them for being wrong. As AI technologies continue to improve, questions about machine consciousness are increasingly being raised. David Chalmers, one of the foremost thinkers in this area, has suggested that conscious machines may be possible in the not-too-distant future. Geoffrey Hinton, a true AI pioneer and recent Nobel Prize winner, thinks they exist already. In late 2024, a group of prominent researchers wrote a widely publicized article about the need to take the welfare of AI systems seriously. For many leading experts in AI and neuroscience, the emergence of machine consciousness is a question of when, not if.

How we think about the prospects for conscious AI matters. It matters for the AI systems themselves, since — if they are conscious, whether now or in the future — with consciousness comes moral status, the potential for suffering and, perhaps, rights.

It matters for us too. What we collectively think about consciousness in AI already carries enormous importance, regardless of the reality. If we feel that our AI companions really feel things, our psychological vulnerabilities can be exploited, our ethical priorities distorted, and our minds brutalized — treating conscious-seeming machines as if they lack feelings is a psychologically unhealthy place to be. And if we do endow our AI creations with rights, we may not be able to turn them off, even if they act against our interests.

Perhaps most of all, the way we think about conscious AI matters for how we understand our own human nature and the nature of the conscious experiences that make our lives worth living. If we confuse ourselves too readily with our machine creations, we not only overestimate them, we also underestimate ourselves.

The Temptations Of Conscious AI

Why might we even think that AI could be conscious? After all, computers are very different from biological organisms, and the only things most people currently agree are conscious are made of meat, not metal.

The first reason lies within our own psychological infrastructure. As humans, we know we are conscious and like to think we are intelligent, so we find it natural to assume the two go together. But just because they go together in us doesn’t mean that they go together in general.

Intelligence and consciousness are different things. Intelligence is mainly about doing: solving a crossword puzzle, assembling some furniture, navigating a tricky family situation, walking to the shop — all involve intelligent behavior of some kind. A useful general definition of intelligence is the ability to achieve complex goals by flexible means. There are many other definitions out there, but they all emphasize the functional capacities of a system: the ability to transform inputs into outputs, to get things done.

“If we confuse ourselves too readily with our machine creations, we not only overestimate them, we also underestimate ourselves.”

An artificially intelligent system is measured by its ability to perform intelligent behavior of some kind, though not necessarily in a humanlike form. The concept of artificial general intelligence (AGI), by contrast, explicitly references human intelligence. It is supposed to match or exceed the cognitive competencies of human beings. (There’s also artificial superintelligence, ASI, which happens when AI bootstraps itself beyond our comprehension and control. ASI tends to crop up in the more existentially fraught scenarios for our possible futures.)

Consciousness, in contrast to intelligence, is mostly about being. Half a century ago, the philosopher Thomas Nagel famously offered that “an organism has conscious mental states if and only if there is something it is like to be that organism.” Consciousness is the difference between normal wakefulness and the oblivion of deep general anesthesia. It is the experiential aspect of brain function and especially of perception: the colors, shapes, tastes, emotions, thoughts and more, that give our lives texture and meaning. The blueness of the sky on a clear day. The bitter tang and headrush of your first coffee.

AI systems can reasonably lay claim to intelligence in some form, since they can certainly do things, but it is harder to say whether there is anything-it-is-like-to-be ChatGPT.

The propensity to bundle intelligence and consciousness together can be traced to three baked-in psychological biases.

The first is anthropocentrism. This is the tendency to see things through the lens of being human: to take the human example as definitional, rather than as one example of how different properties might come together.

The second is human exceptionalism: our unfortunate habit of putting the human species at the top of every pile, and sometimes in a different pile altogether (perhaps closer to angels and Gods than to other animals, as in the medieval Scala naturae). And the third is anthropomorphism. This is the tendency to project humanlike qualities onto nonhuman things based on what may be only superficial similarities.

Taken together, these biases explain why it’s hardly surprising that when things exhibit abilities we think of as distinctively human, such as intelligence, we naturally imbue them with other qualities we feel are characteristically or even distinctively human: understanding, mindedness and consciousness, too.

One aspect of intelligent behavior that’s turned out to be particularly effective at making some people think that AI could be conscious is language. This is likely because language is a cornerstone of human exceptionalism. Large Language Models (LLMs) like OpenAI’s ChatGPT or Anthropic’s Claude have been the focus of most of the excitement about artificial consciousness. Nobody, as far as I know, has claimed that DeepMind’s AlphaFold is conscious, even though, under the hood, it is rather similar to an LLM. All these systems run on silicon and involve artificial neural networks and other fancy algorithmic innovations such as transformers. AlphaFold, which predicts protein structure rather than words, just doesn’t pull our psychological strings in the same way.

The language that we ourselves use matters too. Consider how normal it has become to say that LLMs “hallucinate” when they spew falsehoods. Hallucinations in human beings are mainly conscious experiences that have lost their grip on reality (uncontrolled perceptions, one might say). We hallucinate when we hear voices that aren’t there or see a dead relative standing at the foot of the bed. When we say that AI systems “hallucinate,” we implicitly confer on them a capacity for experience. If we must use a human analogy, it would be far better to say that they “confabulate.” In humans, confabulation involves making things up without realizing it. It is primarily about doing, rather than experiencing.

When we identify conscious experience with seemingly human qualities like intelligence and language, we become more likely to see consciousness where it doesn’t exist, and to miss seeing it where it does. We certainly should not just assume that consciousness will come along for the ride as AI gets smarter, and if you hear someone saying that real artificial consciousness will magically emerge at the arbitrary threshold of AGI, that’s a sure sign of human exceptionalism at work.

There are other biases in play, too. There’s the powerful idea that everything in AI is changing exponentially. Whether it’s raw compute as indexed by Moore’s Law, or the new capabilities available with each new iteration of the big tech foundation models, things surely are changing quickly. Exponential growth has the psychologically destabilizing property that what’s ahead seems impossibly steep, and what’s behind seems irrelevantly flat. Crucially, things seem this way wherever you are on the curve — that’s what makes it exponential. Because of this, it’s tempting to feel like we are always on the cusp of a major transition, and what could be more major than the creation of real artificial consciousness? But on an exponential curve, every point is an inflection point.
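The self-similarity of exponential growth can be made concrete in a few lines of code. The sketch below is my own illustration (the function name, window and rate are arbitrary choices): it compares how much growth lies ahead of a point versus behind it, over a fixed window, and finds the same ratio wherever you stand on the curve.

```python
import math

def lookahead_ratio(x, window=1.0, rate=1.0):
    """Ratio of growth ahead of point x to growth behind it, on e^(rate*x)."""
    here = math.exp(rate * x)
    ahead = math.exp(rate * (x + window)) - here   # the "impossibly steep" future
    behind = here - math.exp(rate * (x - window))  # the "irrelevantly flat" past
    return ahead / behind

# The view is identical from every point on the curve:
print(round(lookahead_ratio(0.0), 3))   # -> 2.718
print(round(lookahead_ratio(10.0), 3))  # -> 2.718
```

The ratio works out to e^window at every x, which is the numerical version of the point in the text: on an exponential, no position on the curve is privileged.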

“When we identify conscious experience with seemingly human qualities like intelligence and language, we become more likely to see consciousness where it doesn’t exist, and to miss seeing it where it does.”

Finally, there’s the temptation of the techno-rapture. Early in the movie “Ex Machina,” the programmer Caleb says to the inventor Nathan: “If you’ve created a conscious machine — it’s not the history of man, that’s the history of Gods.” If we feel we’re at a techno-historical transition, and we happen to be one of its architects, then the Promethean lure must be hard to resist: the feeling of bringing to humankind that which was once the province of the divine. And with this singularity comes the signature rapture offering of immortality: the promise of escaping our inconveniently decaying biological bodies and living (or at least being) forever, floating off to eternity in a silicon-enabled cloud.

Perhaps this is one reason why pronouncements of imminent machine consciousness seem more common within the technorati than outside of it. (More cynically: fueling the idea that there’s something semi-magical about AI may help share prices stay aloft and justify the sky-high salaries and levels of investment now seen in Silicon Valley. Did someone say “bubble”?)

In his book “More Everything Forever,” Adam Becker describes the tendency to project consciousness into AI as a form of pareidolia — the phenomenon of seeing patterns in things, like a face in a piece of toast or Mother Teresa in a cinnamon bun (Figure 1). This is an apt description. But helping you recognize the power of our pareidolia-inducing psychological biases is just the first step in challenging the mythology of conscious AI. To address the question of whether real artificial consciousness is even possible, we need to dig deeper.

Figure 1: Mother Teresa in a cinnamon bun. (Public Domain)

Consciousness & Computation

The very idea of conscious AI rests on the assumption that consciousness is a matter of computation. More specifically, that implementing the right kind of computation, or information processing, is sufficient for consciousness to arise. This assumption, which philosophers call computational functionalism, is so deeply ingrained that it can be difficult to recognize it as an assumption at all. But that is what it is. And if it’s wrong, as I think it may be, then real artificial consciousness is fully off the table, at least for the kinds of AI we’re familiar with.

Challenging computational functionalism means diving into some deep waters about what computation means and what it means to say that a physical system, like a computer or a brain, computes at all. I’ll summarize four related arguments that undermine the idea that computation, at least of the sort implemented in standard digital computers, is sufficient for consciousness.

1: Brains Are Not Computers

First, and most important, brains are not computers. The metaphor of the brain as a carbon-based computer has been hugely influential and has immediate appeal: mind as software, brain as hardware. It has also been extremely productive, leading to many insights into brain function and to the vast majority of today’s AI. To understand the power and influence of this metaphor, and to grasp its limitations, we need to revisit some pioneers of computer science and neurobiology.

Alan Turing towers above everyone else in this story. Back in the 1950s, he seeded the idea that machines might be intelligent, and more than a decade earlier, he formulated a definition of computation that has remained fundamental to our technologies, and to most people’s understanding of what computers are, ever since.

Turing’s definition of computation is extremely powerful and highly (though, as we’ll see, not completely) general. It is based on the abstract concept of a Turing machine: a simple device that reads and writes symbols on an infinite tape according to a set of rules. Turing machines formalize the idea of an algorithm: a mapping, via a sequence of steps, from an input (a string of symbols) to an output (another such string); a mathematical recipe, if you like. Turing’s critical contribution was to define what became known as a universal Turing machine: another abstract device, but this time capable of simulating any specific Turing machine — any algorithm — by taking the description of the target machine as part of its input. This general-purpose capability is one reason why Turing computation is so powerful and so prevalent. The laptop computer I’m writing with, as well as the machines in the server farms running whatever latest AI model, are all physical, concrete examples of (or approximations to) universal Turing machines, bounded by physical limitations such as time and memory.

“The very idea of conscious AI rests on the assumption that consciousness is a matter of computation.”

Another major advantage of this framework, from a practical engineering point of view, is the clean separation it licenses between abstract computation (software) and physical implementation (hardware). An algorithm (in the sense described above) should do the same thing, no matter what computer it is running on. Turing computation is, in principle, substrate independent: it does not depend on any particular material basis. In practice, it’s better described as substrate flexible, since you can’t make a viable computer out of any arbitrary material — cheese, for instance, isn’t up to the job. This substrate-flexibility makes Turing computation extremely useful in the real world, which is why computers exist in our phones rather than merely in our minds.

At around the same time that Turing was making his mark, the mathematician Walter Pitts and neurophysiologist Warren McCulloch showed, in a landmark paper, that networks of highly simplified abstract neurons can perform logical operations (Figure 2). Later work, by the logician Stephen Kleene among others, demonstrated that artificial neural networks like these, when provided with a tape-like memory (as in the Turing machine), were “Turing complete” — that they could, in principle, implement any Turing machine, any algorithm.

Figure 2: A modern version of a McCulloch-Pitts neuron. Input signals X1-X4 are multiplied by weights w, summed up together with a bias (another input) and then passed through an activation function, usually a sigmoid (an S-shaped curve), to give an output Y. This version is similar to the artificial neurons used in contemporary AI. In the original version, the output was either 1 (if the summed, weighted inputs exceeded a fixed threshold) or 0 (if they didn’t). The modifications were introduced to make artificial neural networks easier to train. (Courtesy of Anil Seth)
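The neuron in Figure 2 is simple enough to write out in full. The sketch below implements both versions described in the caption — the original 1943 thresholded unit and the modern sigmoid unit; the particular weights and inputs are arbitrary illustrative values of my own.

```python
import math

def mcculloch_pitts(inputs, weights, threshold):
    """Original version: output 1 if the summed, weighted inputs reach a fixed threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def modern_neuron(inputs, weights, bias):
    """Modern version: weighted sum plus bias, passed through a sigmoid activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # S-shaped curve, output in (0, 1)

x = [1, 0, 1, 1]            # inputs X1-X4
w = [0.5, -0.2, 0.3, 0.1]   # weights

print(mcculloch_pitts(x, w, threshold=0.8))       # -> 1  (0.9 >= 0.8)
print(round(modern_neuron(x, w, bias=-0.4), 3))   # graded output near 0.622
```

The smooth sigmoid is what makes the modern version trainable by gradient descent, which is the modification the caption alludes to.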

Put these ideas together, and we have a mathematical marriage of convenience and influence, and the kind of beauty that accompanies simplicity. On the one hand, we can ignore the messy neurobiological reality of real brains and treat them as simplified networks of abstract neurons, each of which just sums up its inputs and produces an output. On the other hand, when we do this, we get everything that Turing computation has to offer — which is a lot.

The fruits of this marriage are most evident in its children: the artificial neural networks powering today’s AI. These are direct descendants of McCulloch, Pitts and Kleene, and they also implement algorithms in the substrate-flexible Turing sense. It is hardly surprising that the seductive impressiveness of the current wave of AI reinforces the idea that brains are nothing more than carbon-based versions of neural network algorithms.

But here’s where the trouble starts. Inside a brain, there’s no sharp separation between “mindware” and “wetware” as there is between software and hardware in a computer. The more you delve into the intricacies of the biological brain, the more you realize how rich and dynamic it is, compared to the dead sand of silicon.

Brain activity patterns evolve across multiple scales of space and time, ranging from large-scale cortical territories down to the fine-grained details of neurotransmitters and neural circuits, all deeply interwoven with a molecular storm of metabolic activity. Even a single neuron is a spectacularly complicated biological machine, busy maintaining its own integrity and regenerating the conditions and material basis for its own continued existence. (This process is called autopoiesis, from the Greek for “self-production.” Autopoiesis is arguably a defining and distinctive characteristic of living systems.)

Unlike computers, even computers running neural network algorithms, brains are the kinds of things for which it is difficult, and likely impossible, to separate what they do from what they are.

Nor is there any good reason to expect such a clean separation. The sharp division between software and hardware in modern computers is imposed by human design, following Turing’s principles. Biological evolution operates under different constraints and with different goals. From the perspective of evolution, there’s no obvious selection pressure for the kind of full separation that would allow the perfect interoperability between different brains as we enjoy between different computers. In fact, the opposite is likely true: Maintaining a sharp software/hardware division is energetically expensive, as is all too apparent these days in the vast energy budgets of modern server farms.

“The more you delve into the intricacies of the biological brain, the more you realize how rich and dynamic it is, compared to the dead sand of silicon.”

This matters because the idea of the brain as a meat-based (universal) Turing machine rests precisely on this sharp separation of scales, on the substrate independence that motivated Turing’s definition in the first place. If you cannot separate what brains do from what they are, the mathematical marriage of convenience starts to fall apart, and there is less reason to think of biological wetware as there simply to implement algorithmic mindware. Evidence that the materiality of the brain matters for its function is evidence against the idea that digital computation is all that counts, which in turn is evidence against computational functionalism.

Another consequence of the deep multiscale integration of real brains — a property that philosophers sometimes call “generative entrenchment” — is that you cannot assume it is possible to replace a single biological neuron with a silicon equivalent, while leaving its function, its input-output behavior, perfectly preserved.

For example, the neuroscientists Chaitanya Chintaluri and Tim Vogels found that some neurons fire spikes of activity apparently to clear waste products created by metabolism. Coming up with a perfect silicon replacement for these neurons would require inventing a whole new silicon-based metabolism, too, which just isn’t the kind of thing silicon is suitable for. The only way to seamlessly replace a biological neuron is with another biological neuron — and ideally, the same one.

This reveals the weakness of the popular “neural replacement” thought experiment, most commonly associated with Chalmers, which invites us to imagine progressively replacing brain parts with silicon equivalents that function in exactly the same way as their biological counterparts. The supposed conclusion is that properties like cognition and consciousness must be substrate independent (or at least silicon-substrate-flexible). This thought experiment has become a prominent trope in discussions of artificial consciousness, usually invoked to support its possibility. Hinton recently appealed to it in just this way, in an interview where he claimed that conscious AI was already with us. But the argument fails at its first hurdle, given the impossibility of replacing any part of the brain with a perfect silicon equivalent.

There is one more consequence of a deeply scale-integrated brain that is worth mentioning. Digital computers and brains differ fundamentally in how they relate to time. In Turing-world, only sequence matters: A to B, 0 to 1. There could be a microsecond or a million years between any state transition, and it would still be the same algorithm, the same computation.

By contrast, for brains and for biological systems in general, time is physical, continuous and inescapable. Living systems must continuously resist the decay and disorder that lie along the trajectory to entropic sameness mandated by the inviolable second law of thermodynamics. This means that neurobiological activity is anchored in continuous time in ways that algorithms, by design, are not. (This is another reason why digital computation is so energetically expensive. Computation exists out of time, but computers do not. Making sure that 1s stay as 1s and 0s stay as 0s takes a lot of energy, because not even silicon can escape the tendrils of entropy.)

What’s more, many researchers — especially those in the phenomenological tradition — have long emphasized that conscious experience itself is richly dynamic and inherently temporal. It does not stutter from one state to another; it flows. Abstracting the brain into the arid sequence space of algorithms does justice neither to our biology nor to the phenomenology of the stream of consciousness.

Metaphors are, in the end, just metaphors, and — as the philosopher Alfred North Whitehead pointed out long ago — it’s always dangerous to confuse a metaphor with the thing itself. Looking at the brain through “Turing glasses” underestimates its biological richness and overestimates the substrate flexibility of what it does. When we see the brain for what it really is, the notion that all its multiscale biological activity is simply implementation infrastructure for some abstract algorithmic acrobatics seems rather naïve. The brain is not a Turing machine made of meat.

“Abstracting the brain into the arid sequence space of algorithms does justice neither to our biology nor to the phenomenology of the stream of consciousness.”

2: Other Games In Town

In the previous section, I noted that Turing computation is powerful but limited. Turing computations — algorithms — map one finite range of discrete numbers (more generally, a string of symbols) onto another, with only the sequence mattering. Turing algorithms are powerful, but there are many kinds of dynamics, many other kinds of functions, that go beyond this kind of computation. Turing himself identified various non-computable functions, such as the famous “halting problem,” which is the problem of determining, in general, whether an algorithm, given some specific input, will ever terminate. What’s more, any function that is continuous (infinitely divisible) or stochastic (involving inherent randomness), strictly speaking, lies beyond Turing’s remit. (Turing computations can approximate or simulate these properties to varying extents, but that’s different from the claim that such functions are Turing computations. I’ll return to this distinction later.)

Biological systems are rife with continuous and stochastic dynamics, and they are deeply embedded in physical time. It seems presumptuous at the very least to assume that only Turing computations matter for consciousness, or indeed for many other aspects of cognition and mind. Electromagnetic fields, the flux of neurotransmitters, and much else besides — all lie beyond the bounds of the algorithmic, and any one of them may turn out to play a critical role in consciousness.

These limitations encourage us to take a broader view of the brain, moving beyond what I sometimes call “Turing world” to consider how broader forms of computation and dynamics might help explain how brains do what they do. There is a rich history here to draw on, and an exciting future too.

The earliest computers were not digital Turing machines but analogue devices operating in continuous time. The ancient “Antikythera mechanism,” used for astronomical purposes and dating back more than 2,000 years, is an excellent example. Analogue computers were again prominent at the birth of AI in the 1950s, in the guise of the long-neglected discipline of cybernetics, where issues of control and regulation of a system are considered more important than abstract symbol manipulation.

Recently, there’s been a resurgence in neuromorphic computation, which leverages properties of neural systems, such as the precise timing of neuronal spikes, that are ignored by the cartoon-like simulated neurons dominating current artificial neural network approaches. And then there’s the relatively new concept of “mortal computation” (introduced by Hinton), which stresses the potential for energy saving offered by developing algorithms that are inseparably tied to their material substrates, so that they (metaphorically) die when their particular implementation ceases to exist. All these alternative forms of computation are more closely tied to their material basis — are less substrate-flexible — than standard digital computation.

Figure 3: The Watt Governor. It’s not a computer. (R. Routledge/Wikimedia)

Many systems do what they do without it being reasonable or useful to describe them as being computational at all. Three decades ago, the cognitive scientist Tim van Gelder gave an influential example, in the form of the governor of a steam engine (Figure 3). These governors regulate steam flow through an engine using simple mechanics and physics: as engine speed increases, two heavy cantilevered balls swing outwards, which in turn closes a valve, reducing steam flow. A “computational governor,” sensing engine speed, calculating the necessary actions and then sending precise motor signals to switch actuators on or off, would not only be hopelessly inefficient but would betray a total misunderstanding of what’s really going on.
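There is an obvious irony in simulating van Gelder’s point in code — the simulation is exactly the computational stand-in he warns against — but the feedback structure itself is easy to sketch. In the toy model below (my own illustration; the constants are arbitrary, not a physical model of a real governor), engine speed and valve opening regulate each other continuously, with no sensing-calculating-actuating cycle anywhere in sight.

```python
def simulate(steps=5000, dt=0.01, setpoint=1.0, gain=2.0):
    """Toy governor-style feedback, integrated with simple Euler steps."""
    speed, valve = 0.0, 1.0
    for _ in range(steps):
        # Steam flow drives the engine; drag slows it.
        speed += dt * (valve - 0.5 * speed)
        # As speed exceeds the set point, the balls swing out and the valve closes.
        valve = min(1.0, max(0.0, 1.0 - gain * (speed - setpoint)))
    return speed

print(round(simulate(), 2))  # -> 1.2 (settles at a stable equilibrium)
```

The regulation here falls out of the coupled dynamics themselves — which is van Gelder’s point: describing the governor as computing anything adds nothing and misdescribes the mechanism.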

The branch of cognitive science generally known as “dynamical systems,” as well as approaches that emphasize enactive, embodied, embedded and extended aspects of mind (so-called 4E cognitive science), all reject, in ways relating to van Gelder’s insight, the idea that mind and brain can be exhaustively accounted for algorithmically. They all explore alternatives based on the mathematics of continuous, dynamical processes — involving concepts such as attractors, phase spaces and so on. It is at least plausible that those aspects of brain function necessary for consciousness also depend on non-computational processes like these, or perhaps on some broader notion of computation.

“Evidence that the materiality of the brain matters for its function is evidence against the idea that digital computation is all that counts, which in turn is evidence against computational functionalism.”

These other games in town are all still compatible with what in philosophy is known as functionalism: the idea that properties of mind (including consciousness) depend on the functional organization of the (embodied) brain. One factor contributing to confusion in this area has been a tendency to conflate the rather liberal position of functionalism-in-general (functional organization can include many things) with the very specific claim of computational functionalism, which holds that the type of organization that matters is computational, and which in turn is often assumed to involve Turing-style algorithms in particular.

The challenge for machine consciousness here is that the further we venture from Turing world, the more deeply entangled we become in randomness, dynamics and entropy, and the more deeply tied we are to the properties of a particular material substrate. The question is no longer about which algorithms give rise to consciousness; it’s about how brain-like a system has to be to move the needle on its potential to be conscious.

3: Life Matters

My third argument is that life (probably) matters. This is the idea — called biological naturalism by the philosopher John Searle — that properties of life are necessary, though not necessarily sufficient, for consciousness. I should say upfront that I don’t have a knock-down argument for this position, nor do I think any such argument yet exists. But it is worth taking seriously, if only for the simple reason mentioned earlier: every candidate for consciousness that most people currently agree on as actually being conscious is also alive.

Why might life matter for consciousness? There’s more to say here than will fit in this essay (I wrote an entire book, “Being You,” and a recent research paper on the subject), but one way of thinking about it goes like this.

The starting point is the idea that what we consciously perceive depends on the brain’s best guesses about what’s going on in the world, rather than on a direct readout of sensory inputs. This derives from influential predictive processing theories that understand the brain as continually explaining away its sensory inputs by updating predictions about their causes. In this view, sensory signals are interpreted as prediction errors, reporting the difference between what the brain expects and what it gets at each level of its perceptual hierarchies, and the brain is continually minimizing these prediction errors everywhere and all the time.

Conscious experience in this light is a kind of controlled hallucination: a top-down inside-out perceptual inference in which the brain’s predictions about what’s going on are continually calibrated by sensory signals coming from the bottom-up (or outside-in).

Figure 4: Perception as controlled hallucination. The conscious experience of a coffee cup is underpinned by the content of the brain’s predictions (grey arrows) of the causes of sensory inputs (black arrows). (Courtesy of Anil Seth)
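The prediction-error loop in Figure 4 can be caricatured in a few lines. The sketch below is a deliberately minimal toy of my own (a real predictive-processing model would involve hierarchies, precision weighting and generative models, none of which appear here): a single top-down “guess” is repeatedly nudged so as to explain away the error between prediction and sensory input.

```python
def perceive(sensory_input, prior_guess=0.0, learning_rate=0.1, steps=100):
    """Toy prediction-error minimization: update a guess to explain its input."""
    guess = prior_guess
    for _ in range(steps):
        prediction = guess                    # top-down prediction of the input
        error = sensory_input - prediction    # bottom-up prediction error
        guess += learning_rate * error        # nudge the guess to reduce the error
    return guess

# The guess converges on whatever best explains the incoming signal:
print(round(perceive(sensory_input=3.0), 2))  # -> 3.0
```

Even this caricature shows the inversion the text describes: what the system “perceives” is the settled content of its own prediction, continually calibrated by the error signal, rather than a direct readout of the input.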

This kind of perceptual best-guessing underlies not only experiences of the world, but experiences of being a self, too — experiences of being the subject of experience. A good example is how we perceive the body, both as an object in the world and as the source of more fundamental aspects of selfhood, such as emotion and mood. Both these aspects of selfhood can be understood as forms of perceptual best-guessing: inferences about what is, and what is not, part of the body, and inferences about the body’s internal physiological condition (the latter is sometimes called “interoceptive inference”; interoception refers to perception of the body from within).

Perceptual predictions are good not only for figuring out what’s going on, but (in a call back to mid-20th century cybernetics) also for control and regulation: When you can predict something, you can also control it. This applies above all to predictions about the body’s physiological condition. This is because the primary duty of any brain is to keep its body alive, to keep physiological quantities like heart rate and blood oxygenation where they need to be. This, in turn, helps explain why embodied experiences feel the way they do.

Experiences of emotion and mood, unlike vision (for example), are characterized primarily by valence — by things generally going well or going badly.

“Every candidate for consciousness that most people currently agree on as actually being conscious is also alive.”

This drive to stay alive doesn’t bottom out anywhere in particular. It reaches deep into the interior of each cell, into the molecular furnaces of metabolism. Within these whirls of metabolic activity, the ubiquitous process of prediction error minimization becomes inseparable from the materiality of life itself. A mathematical line can be drawn directly from the self-producing, autopoietic nature of biological material all the way to the Bayesian best-guessing that underpins our perceptual experiences of the world and of the self.

Several lines of thought now converge. First, we have the glimmers of an explanatory connection between life and consciousness. Conscious experiences of emotion, mood and even the basal feeling of being alive all map neatly onto perceptual predictions involved in the control and regulation of bodily condition. Second, the processes underpinning these perceptual predictions are deeply, and perhaps inextricably, rooted in our nature as biological systems, as self-regenerating storms of life resisting the pull of entropic sameness. And third, all of this is non-computational, or at least non-algorithmic. The minimization of prediction error in real brains and real bodies is a continuous dynamical process that is likely inseparable from its material basis, rather than a meat-implemented algorithm existing in a pristine universe of symbol and sequence.

Put all this together, and a picture begins to form: We experience the world around us and ourselves within it — with, through and because of our living bodies. Perhaps it is life, rather than information processing, that breathes fire into the equations of experience.

4: Simulation Is Not Instantiation

Finally, simulation is not instantiation. One of the most powerful capabilities of universal, Turing-based computers is that they can simulate a vast range of phenomena — even, and perhaps especially, phenomena that aren’t themselves (digitally) computational, such as continuous and random processes.

But we should not confuse the map with the territory, or the model with the mechanism. An algorithmic simulation of a continuous process is just that — a simulation, not the process itself.

Computational simulations generally lack the causal powers and intrinsic properties of the things being simulated. A simulation of the digestive system does not actually digest anything. A simulation of a rainstorm does not make anything actually wet. If we simulate a living creature, we have not created life. In general, a computational simulation of X does not bring X into being — does not instantiate X — unless X is a computational process (specifically, an algorithm) itself. Making the point from the other direction, the fact that X can be simulated computationally does not justify the conclusion that X is itself computational.

In most cases, the distinction between simulation and instantiation is obvious and uncontroversial. It should be obvious and uncontroversial for consciousness, too. A computational simulation of the brain (and body), however detailed it may be, will only give rise to consciousness if consciousness is a matter of computation. In other words, the prospect of instantiating consciousness through some kind of whole-brain emulation, at some arbitrarily high level of detail, already assumes that computational functionalism is true. But as I have argued, this assumption is likely wrong and certainly should not be accepted axiomatically.

This brings us back to the poverty of the brain-as-computer metaphor. If you think that everything that matters about brains can be captured by abstract neural networks, then it’s natural to think that simulating the brain on a digital computer will instantiate all its properties, including consciousness, since in this case, everything that matters is, by assumption, algorithmic. This is the “Turing world” view of the brain.

“Perhaps it is life, rather than information processing, that breathes fire into the equations of experience.”

If, instead, you are intrigued by more detailed brain models that capture the complexities of individual neurons and other fine-grained biophysical processes, then it really ought to be less natural to assume that simulating the brain will realize all its properties, since these more detailed models are interesting precisely because they suggest that things other than Turing computation likely matter too.

There is, therefore, something of a contradiction lurking for those who invest their dreams and their venture capital into the prospect of uploading their conscious minds into exquisitely detailed simulations of their brains, so that they can exist forever in silicon rapture. If an exquisitely detailed brain model is needed, then you are no more likely to exist in the simulation than a hailstorm is likely to arise inside the computers of the U.K. meteorological office.

But buckle up. What if everything is a simulation already? What if our whole universe — including the billions of bodies, brains and minds on this planet, as well as its hailstorms and weather forecasting computers — is just an assemblage of code fragments in an advanced computer simulation created by our technologically godlike and genealogically obsessed descendants?

This is the “simulation hypothesis,” associated most closely with the philosopher Nick Bostrom, and still, somehow, an influential idea among the technorati.

Bostrom notes that simulations like this, if they have been created, ought to be much more numerous than the original “base reality,” which in turn suggests that we may be more likely to exist within a simulation than within reality itself. He marshals various statistical arguments to flesh out this idea. But it is telling that he notes one necessary assumption, and then just takes it as a given. This, perhaps unsurprisingly, is the assumption that “a computer running a suitable program would be conscious” (see page 2 of his paper). If this assumption doesn’t hold, then the simple fact that we are conscious would rule out that we exist in a simulation. That this strong assumption is taken on board without examination in a philosophical discussion that is all about the validity of assumptions is yet another indication of how deeply ingrained the computational view of mind and brain has become. It is also a sign of the existential mess we get ourselves into when we fail to distinguish our models of reality from reality itself.


Let’s summarize. Many social and psychological factors, including some well-understood cognitive biases, predispose us to overattribute consciousness to machines.

Computational functionalism — the claim that (algorithmic) computation is sufficient for consciousness — is a very strong assumption that looks increasingly shaky as the many and deep differences between brains and (standard digital) computers come into view. There are plenty of other technologies (e.g., neuromorphic computing, synthetic biology) and frameworks for understanding the brain (e.g., dynamical systems theory), which go beyond the strictly algorithmic. In each case, the further one gets from Turing world, the less plausible it is that the relevant properties can be abstracted away from their underlying material basis.

One possibility, motivated by connecting predictive processing views of perception with physiological regulation and metabolism, is that consciousness is deeply tied to our nature as biological, living creatures.

Finally, simulating the biological mechanisms of consciousness computationally, at whatever grain of detail you might choose, will not give rise to consciousness unless computational functionalism happens to be true anyway.

Each of these lines of argument can stand up by itself. You might favor the arguments against computational functionalism while remaining unpersuaded about the merits of biological naturalism. Distinguishing between simulation and instantiation doesn’t depend on taking account of our cognitive biases. But taken together, they complement and strengthen each other. Questioning computational functionalism reinforces the importance of distinguishing simulation from instantiation. The availability of other technologies and frameworks beyond Turing-style algorithmic computation opens space for the idea that life might be necessary for consciousness.

Collectively, these arguments make the case that consciousness is very unlikely to simply come along for the ride as AI gets smarter, and that achieving it may well be impossible, at least for the silicon-based digital computers we are familiar with.

At the same time, nothing in what I’ve said rules out the possibility of artificial consciousness altogether.

Given all this, what should we do?

“Many social and psychological factors, including some well-understood cognitive biases, predispose us to overattribute consciousness to machines.”

What (Not) To Do?

When it comes to consciousness, the fact of the matter matters. And not only because of the mythology of ancestor simulations, mind-uploading and the like. Things capable of conscious experiences have ethical and moral standing that other things do not. At least, claims to this kind of moral consideration are more straightforward when they are grounded in the capacity for consciousness.

This is why thinking clearly about the prospects for real artificial consciousness is of vital importance in the here and now. I’ve made a case against conscious AI, but I might be wrong. The biological naturalist position (whether my version or any other) remains a minority view. Other theories of consciousness propose accounts framed in terms of standard computation-as-we-know-it. These theories generally avoid proposing sufficient conditions for consciousness. They also generally sidestep defending computational functionalism, being content instead to assume it.

But this doesn’t mean they are wrong. All theories of consciousness are fraught with uncertainty, and anyone who claims to know for sure what it would take to create real artificial consciousness, or for sure what it would take to avoid doing so, is overstepping what can reasonably be said.

This uncertainty lands us in a difficult position. As redundant as it may sound, nobody should be deliberately setting out to create conscious AI, whether in the service of some poorly thought-through techno-rapture, or for any other reason. Creating conscious machines would be an ethical disaster. We would be introducing into the world new moral subjects, and with them the potential for new forms of suffering, at a potentially exponential pace. And if we give these systems rights, as arguably we should if they really are conscious, we will hamper our ability to control them, or to shut them down if we need to.

Even if I’m right that standard digital computers aren’t up to the job, other emerging technologies might yet be, whether alternative forms of computation (analogue, neuromorphic, biological and so on) or rapidly developing methods in synthetic biology. For my money, we ought to be more worried about the accidental emergence of consciousness in cerebral organoids (brain-like structures typically grown from human embryonic stem cells) than in any new wave of LLMs.

But our worries don’t stop there. When it comes to the impact of AI in society, it is essential to draw a distinction between AI systems that are actually conscious and those that persuasively seem to be conscious but are, in fact, not. While there is inevitable uncertainty about the former, conscious-seeming systems are much, much closer.

As the Google engineer Lemoine demonstrated, for some of us, such conscious-seeming systems are already here. Machines that seem conscious pose serious ethical issues distinct from those posed by actually conscious machines.

For example, we might give AI systems “rights” that they don’t actually need, since they would not in fact be conscious, restricting our ability to control them for no good reason. More generally, either we decide to care about conscious-seeming AI, distorting our circles of moral concern, or we decide not to, and risk brutalizing our minds. As Immanuel Kant argued long ago in his lectures on ethics, habitually treating things that seem conscious as though they are not leaves us in a psychologically unhealthy place.

The dangers of conscious-seeming AI are starting to be noticed by leading figures in AI, including Mustafa Suleyman (CEO of Microsoft AI) and Yoshua Bengio, but this doesn’t mean the problem is in any sense under control.

“If we give these systems rights, as arguably we should if they really are conscious, we will hamper our ability to control them, or to shut them down if we need to.”

One overlooked factor here is that even if we know, or believe, that an AI is not conscious, we still might be unable to resist feeling that it is. Illusions of artificial consciousness might be as impenetrable to our minds as some visual illusions. The two lines in the Müller-Lyer illusion (Figure 5) are the same length, but they will always look different. It doesn’t matter how many times you encounter the illusion; you cannot think your way out of it. The way we feel about AI being conscious might be similarly impervious to what we think or understand about AI consciousness.

Figure 5: The Müller-Lyer illusion. The two lines are the same length. (Courtesy of Anil Seth)

What’s more, because there’s no consensus over the necessary or sufficient conditions for consciousness, there aren’t any definitive tests for deciding whether an AI is actually conscious. The plot of “Ex Machina” revolves around exactly this dilemma. Riffing on the famous Turing test (which, as Turing well knew, tests for machine intelligence, not consciousness), Nathan — the creator of the robot Ava — says that the “real test” is to reveal that his creation is a machine, and to see whether Caleb — the stooge — still feels that it, or she, is conscious. The “Garland test,” as it’s come to be known, is not a test of machine consciousness itself. It is a test of what it takes for a human to be persuaded that a machine is conscious.

The importance of taking an informed ethical position despite all these uncertainties spotlights another human habit: our unfortunate track record of withholding moral status from those that deserve it, including many non-human animals and sometimes other humans. It is reasonable to wonder whether withholding attributions of consciousness from AI may leave us once again on the wrong side of history. The recent calls for attention to “AI welfare” are based largely on this worry.

But there are good reasons why the situation with AI is likely to be different. Our psychological biases are more likely to lead to false positives than false negatives. Compared to non-human animals, AI systems may be more similar to us in ways that do not matter for consciousness, like linguistic ability, and less similar in ways that do, like being alive.

Soul Machine

Despite the hype and the hubris, there’s no doubt that AI is transforming society. It will be hard enough to navigate the clear and obvious challenges AI poses, and to take proper advantage of its many benefits, without the additional confusion generated by immoderate pronouncements about a coming age of conscious machines. Given the pace of change in both the technology itself and in its public perception, developing a clear view of the prospects and pitfalls of conscious AI is both essential and urgent.

Real artificial consciousness would change everything — and very much for the worse. Illusions of conscious AI are dangerous in their own distinctive ways, especially if we are constantly distracted and fascinated by the lure of truly sentient machines. My hope for this essay is that it offers some tools for thinking through these challenges, some defenses against overconfident claims about inevitability or outright impossibility, and some hope for our own human, animal, biological nature. And hope for our future too.

The future history of AI is not yet written. There is no inevitability to the directions AI might yet take. To think otherwise is to be overly constrained by our conceptual inheritance, weighed down by the baggage of bad science fiction and submissive to the self-serving narrative of tech companies laboring to make it to the next financial quarter. Time is short, but collectively we can still decide which kinds of AI we really want and which we really don’t.

The philosopher Shannon Vallor describes AI as a mirror, reflecting back to us the incident light of our digitized past. We see ourselves in our algorithms, but we also see our algorithms in ourselves. This mechanization of the mind is perhaps the most pernicious near-term consequence of the unseemly rush toward human-like AI. If we conflate the richness of biological brains and human experience with the information-processing machinations of deepfake-boosted chatbots, or whatever the latest AI wizardry might be, we do our minds, brains and bodies a grave injustice. If we sell ourselves too cheaply to our machine creations, we overestimate them, and we underestimate ourselves.

Perhaps unexpectedly, this brings me at last to the soul. For many people, especially modern people of science and reason, the idea of the soul might seem as outmoded as the Stone Age. And if by soul what is meant is an immaterial essence of rationality and consciousness, perfectly separable from the body, then this isn’t a terrible take.

“Time is short, but collectively we can still decide which kinds of AI we really want and which we really don’t.”

But there are other games in town here, too. Long before Descartes, the Greek concept of psychē linked the idea of a soul to breath, while on the other side of the world, the Hindu expression of soul, or Ātman, associated our innermost essence with the ground-state of all experience, unaffected by rational thought or by any other specific conscious content, a pure witnessing awareness.

The cartoon dream of a silicon rapture, with its tropes of mind uploading, of disembodied eternal existence and of cloud-based reunions with other chosen ones, is a regression to the Cartesian soul. Computers, or more precisely computations, are, after all, immortal, and the sacrament of the algorithm promises a purist rationality, untainted by the body (despite plentiful evidence linking reason to emotion). But these are likely to be empty dreams, delivering not posthuman paradise but silicon oblivion.

What really matters is not this kind of soul. Not any disembodied human-exceptionalist undying essence of you or of me. Perhaps what makes us us harks even further back, to Ancient Greece and to the plains of India, where our innermost essence arises as an inchoate feeling of just being alive — more breath than thought and more meat than machine. The sociologist Sherry Turkle once said that technology can make us forget what we know about life. It’s about time we started to remember.

The post The Mythology Of Conscious AI appeared first on NOEMA.

Where The Prairie Still Remains

(NOEMA, Jan. 6, 2026)

ROCHESTER, Iowa — If you take a road trip across Iowa, you’re likely to see fields of corn and soybean crops blanketing the landscape, one after the other across 23 million acres, or some 65% of the state. But turn off a gravel road near the Cedar River in the rural southeast and walk through an ornate rusted arch, and you will find yourself in another world.

Rochester Cemetery is not just an active cemetery. It’s a remnant of a once-common sight in Iowa, the place where tallgrass prairie and woodland meet. Faded, crumbling headstones dot its 13 hilly acres. The biggest oaks I’ve seen in my life — gnarled, centuries-old red, black, burr and white — tower over them, keeping watch. And otherwise engulfing the stones is a sea of prairie grasses: big bluestem, Indiangrass, switchgrass. On the right spring day, there are more blooming shooting stars here — with their delicate pink downturned heads nodding in the breeze — than may exist anywhere else in the state.

The cemetery itself dates to the 1830s, just after the Black Hawk Purchase added Iowa to the Union. But today, Rochester is special because it contains one of the rarest ecosystems in the world: oak savanna. Under a few massive trees, prairie plants sequester carbon, prevent erosion and provide key habitat for endangered wildlife like Monarch butterflies and rusty-patched bumblebees — ecosystem services desperately needed across the Midwest.

Before European settlement, tallgrass prairie covered 80% of Iowa. What remains serves as critical seed banks and blueprints for future restorations. But the continued existence of remnants like Rochester is tenuous in this land where corn is king, and it depends on the stewardship of individuals with very different ideas about what and who the land is for — and how it should be managed.

I arrived at the cemetery on a warm Sunday last May. Jacie Thomsen, a Rochester native, greeted me at the gate in a faded U.S. Army T-shirt. A township trustee and the cemetery’s burial manager, Thomsen carried a binder of old documents in one hand and a long metal rod in the other that she periodically used to probe for forgotten, buried gravestones. 

“A lot of people tend to say we’re disrespecting our dead,” Thomsen told me. “I always tell people, ‘Take what you think you know about cemeteries and leave it in your car, because it does not, will not, apply here.’”

I think of the postage-stamp perfect square cemetery I grew up visiting on Memorial Day in nearby Wapello, Iowa, with its close-cropped turfgrass, ornamental bushes and stones in lines straight as the corn rows that box them in on all sides. With manicured lawns and trimmed trees as the blueprint for cemeteries, I can see why some less well acquainted with prairie plants — including other township trustees here — complain this place looks “overgrown” with weeds and in need of a good mow. But at the same time, it strikes me that if one of the pioneers buried here suddenly rose from the dead, these hills are about the only part of the Iowa landscape they’d recognize.

“When you walk in these gates, you’re seeing Iowa as they saw it when they arrived after the Black Hawk Purchase,” Thomsen told me, gesturing at the prairie.

Prairie is Iowa’s natural landscape insofar as any landscape is natural. Humans have shaped the American Midwest ever since the glaciers retreated. For some 10,000 years, Iowa was a dynamic place. Indigenous Americans lit frequent fires that kept encroaching woodlands at bay, allowing the grasslands that dominate the Great Plains to migrate east into Iowa and Illinois. Only in the last 200 years did farmers transform these acres into neat cornfields.

“Turn off a gravel road near the Cedar River in the rural southeast and walk through an ornate rusted arch, and you will find yourself in another world.”

Today, less than a tenth of 1% of Iowa’s original prairie remains. Plows broke the vast majority of the prairie in the 19th and 20th centuries, transforming a biodiverse ecosystem into a crop factory — what Jack Zinnen, an ecologist for the Prairie Research Institute at the University of Illinois Urbana-Champaign, calls an “agricultural desert.” Set aside before industrial agriculture arrived in Iowa, pioneer cemeteries like this one have become the prairie’s final resting place — one of the few places where the land remembers what it once was. Some of these cemetery prairie remnants tower over the surrounding farm fields, long roots holding the rich, undisturbed soil together as the rest of Iowa erodes away under repetitive plowing, flowing downriver.

Isaac Larsen, a geosciences expert at UMass Amherst, stands near a drop-off that separates native remnant prairie from farmland in Iowa. Researchers found that farmed fields were more than a foot lower than the prairie on average. (UMass Amherst)

Compared to other forms of American wilderness, prairies are hard to love — they don’t easily fall into the category of the sublime like giant sequoias or Yosemite waterfalls. You have to get really close to appreciate the complex beauty. It’s probably why (along with the black gold underneath the plants) it was so easy to destroy, acre by acre.

“To the uninitiated, the idea of a walk through a prairie might seem to be no more exciting than crossing a field of wheat, a cow pasture, or an unmowed blue-grass lawn,” wrote Robert Betz, a Northeastern Illinois University biologist and early defender of cemetery prairies. “Nothing could be further from the truth.”

Aboveground at Rochester, native prairie grasses and flowers and introduced ornamental plants, such as daisies, hyacinths and showy stonecrops, coexist. Black-eyed Susans, coneflowers, milkweed and prairie clovers grow on graves, alongside the usual decorative plastic varieties. Underground, deep roots entwine with the bodies of long-dead pioneers — who pushed out the Indigenous communities who first stewarded this prairie — and generations of Rochester citizens.

A massive oak towers over gravestones on a hill in Rochester Cemetery. (Christian Elliott)
Left: A queen bumblebee pollinating shooting stars in Rochester Cemetery. On the right spring day, there are more blooming shooting stars here than may exist anywhere else in the state. (Laura Walter) Right: The gates to Rochester Cemetery which covers 13 acres today. (Christian Elliott)

The Prairie’s Unmaking

I grew up less than an hour’s drive from Rochester, though I learned of the cemetery’s existence only recently, in a book by the New York landscape photographer Stephen Longmire, who’d stumbled across this place and spent years photographing it with a large format film camera. While he wandered Rochester’s hills in the early 2000s, I was spending my weekends at my grandparents’ farm in Wapello playing in their corn rows behind the barn. Prairie was the setting for Laura Ingalls Wilder’s books, a thing of the past. I had no idea how utterly transformed Iowa was, or how much we’d lost.

It wasn’t until college that I learned the truth. Prairie once stretched from Montana down to Texas and east into Ohio, over a million square miles. Iowa was once the beating heart of the American Central Grassland.

But “tallgrass prairie is, in many respects, a human construct,” Tom Rosburg, a biologist and herbarium curator at Drake University in Iowa, told me.

Prairie relies on annual cleansing fire to transform dead foliage into usable nutrients. Shortgrass prairie in the dry western plains burns easily, the fires often lit by lightning and fueled by constant wind. Tallgrass prairie, on the other hand, “wants to be trees,” Chris Helzer, The Nature Conservancy’s science director in Nebraska, told me. It only grows in places with enough precipitation that woodland should dominate.

The Central Grassland’s extension into the Midwest, called the Prairie Peninsula, puzzled scientists for decades — they wondered why it wasn’t dominated by forest. Eventually, they arrived at an answer. For thousands of years, grass and trees had waged a war of attrition across the hills that are now Rochester Cemetery — and across much of Iowa and Illinois. But Indigenous peoples sided with the grasses from the beginning, lighting regular fires that rejuvenated the grasses, kept trees at bay and ensured the landscape remained open for easier hunting. Here at Rochester, it was the Meskwaki, who still live nearby on land purchased from the U.S. government after the Black Hawk War.

Most of a prairie plant’s biomass is underground, in the form of deep root systems that allow it to spring back to life after frequent fires. When pioneers arrived in Iowa and Illinois in the early 1800s, they discovered that millennia of decomposing roots had produced a black, nitrogen-rich, silty loam — some of the most fertile soil in the world. Thus began the prairie’s destruction. Industrialized farming operations, like my family’s, moved in, such that less than a century later, it was nearly all gone, turned into monocultures of corn and soy sustained by artificial nitrogen inputs, herbicides and pesticides, and drained by stick-straight ditches and networks of buried drainage tiles.

“It was destroyed piece by piece, farmer by farmer,” Rosburg told me, with some bitterness. “It was the biggest transformation in the history of Earth — and in less than a person’s lifetime.”

The change is so dramatic, it’s hard to imagine what was once there. You can’t unplow a prairie — once you tear through those deep, ancient roots, formed over centuries, it’s over. And despite decades of attempts, it’s nearly impossible to create a restoration that perfectly matches the real thing, with its function, structure and sheer number of species, each with its own complex relationships.

“Prairie plants sequester carbon, prevent erosion and provide key habitat for endangered wildlife like Monarch butterflies and rusty-patched bumblebees — ecosystem services desperately needed across the Midwest.”

To attempt a restoration at all, you need raw material — seeds. And for that, you need remnants. Scientists have dedicated their lives to mapping the few places where the prairie still exists, scouring the state on foot and sifting through old records as if panning for gold. Rosburg has found and saved more than 65 forgotten remnants through his organization, Drake Prairie Rescue. Many remnants exist on fragments of land deemed too rocky, sandy or steep to plow. Those remnants were often used as pastures — planted with a mix of non-native grasses and heavily grazed by cattle.

Examples of still-intact prairies, on rich black carbon soil, are rare — primarily found in narrow strips along railroad tracks set aside before plowing began and on pioneer cemeteries, where the impediment to plowing was cultural, rather than practical. Those remnants tend to be the last and best records of what’s considered a typical prairie, with its rich, silty, loamy soil.

To date, there are 136 cemetery prairies across the Midwest, according to the Iowa Prairie Network’s list. While an Iowa cornfield’s species diversity can be counted on one hand, some prairie remnants contain as many as 250 species, according to data published last July by the Prairie Research Institute team in Illinois.

Unlike neighboring Illinois, which has an extensive state system to protect its rare native prairies, wetlands and forests, in Iowa, nearly all the state’s land is privately held. In fact, 60% of Iowa’s public land is made up of roadside rights-of-way, or ditches, as they are more commonly known, according to the University of Northern Iowa’s Tallgrass Prairie Center.

In Iowa, cemeteries with fewer than 12 burials in the past 50 years are officially designated as pioneer cemeteries, which allows counties to relax mowing and restore prairie — although that doesn’t always happen in practice. Still, these township-owned pioneer cemeteries serve as de facto prairie nature preserves, islands of tenuous conservation for rare insects and plants — as long as townships OK it — in a sea of destruction.

Due to climate change, the wet Midwest is becoming even wetter, which means that prairie remnants are slowly transitioning to woodland in the absence of fire. Absent any management, a prairie can disappear in as little as 30 years, Laura Walter, a University of Northern Iowa biologist, told me. “Rescuing” remnants, as Rosburg does, is an active process that involves convincing townships to conduct controlled burns and weed out invasive species in their cemeteries.

And these prairie preserves have come in handy. They’re models for what some scientists call artisanal restorations — small-scale prairies conjured forth on private land, often with great care and dedication to exactly recreating what’s been lost. But remnants like Rochester are also helping bring back prairie at a larger scale. 

In the 1990s, Iowa lawmakers mandated prairie plantings along state highways and provided incentives for counties to do the same to help combat soil erosion and reduce mowing and herbicide use that polluted waterways. But the Tallgrass Prairie Center, which operates the state’s roadside vegetation program, couldn’t find prairie seeds readily available for sale.

So, they had to start from scratch, collecting seeds from cemetery prairies and other remnants, learning to germinate and grow plants in their greenhouse and production plots, and then donating seeds to seed companies while teaching them how to grow them in order to scale up production. 

Before they started, prairie blazing star, a common Iowa prairie flower, could only be purchased from the Netherlands, where it was a popular cut flower, said Laura Jackson, the Tallgrass Prairie Center’s director. Now, she told me, it’s one of dozens of regional ecotype seeds that counties can use to restore prairie along their roads. At last May’s annual spring seed pickup at the center’s warehouse in Cedar Falls, Iowa, trucks from 46 Iowa counties hauled away 19,000 pounds of prairie seed — big bluestem, switchgrass, prairie clover, asters, coneflowers and more — originally sourced from prairie remnants like Rochester. To date, some 50,000 acres of roadsides have been planted with native grasses and wildflowers.

Restoration is about preparing Iowa for the future rather than trying to revert its landscape to the 1800s, Jackson told me. On a practical level, prairies provide myriad benefits, especially in light of climate change, that are more important than ever, including soil stability, carbon storage, flood mitigation, fire resilience, drought resistance and habitat for pollinators. But because it’s so hard to predict what will survive amid a changing climate, it’s crucial to maximize genetic diversity by sourcing seeds from remnants across the state, Jackson told me.

“Prairie once stretched from Montana down to Texas and east into Ohio, over a million square miles. Iowa was once the beating heart of the American Central Grassland.”

Because Iowa is a relatively young landscape, geologically speaking, only a handful of prairie plants have gone extinct, and most species are still widespread. In parts of the country that haven’t been wiped clean by glaciers as recently, plants have evolved to become highly local, “endemic” to specific niches, Chris Benda, an Illinois botanist who regularly conducts plant surveys, told me.

Even though Iowa’s prairie survives today primarily on scattered fragments, many of its plants once thrived across the state. That means the seeds of Iowa’s great prairie still exist. From pioneer cemeteries, managers can source the original seeds of Iowa’s landscape and use them to grow prairie at scale.

Left: Old gravestones at Rochester Cemetery showing the Howe family plot. The Howe family still lives in Cedar County and lets the prairie grow wild around the old settlers’ stones, since that’s how the cemetery would have looked when they arrived. (Christian Elliott) Right: The stone visible here is Adam Graham’s; he left money in his 1850 will to purchase the land that is now Rochester Cemetery. (Christian Elliott)

Prairie Or Cemetery?

At Rochester Cemetery, others began to arrive for the day’s garlic mustard pull: Dan Sears, an organizer for the nonprofit Iowa Prairie Network; Walter, who runs the prairie plant research program at the Tallgrass Prairie Center; and a dozen locals. Volunteers tucked their jeans into their socks to avoid tick bites, grabbed bags and donned gardening gloves.

Sears explained what garlic mustard — the non-native species encroaching on this tiny prairie remnant — looks like, with toothed leaves and delicate white flowers. However, Sears added that volunteers should also be on the lookout for another non-native plant, showy stonecrop (which he referred to as “sedum”), which could compromise the quality of the prairie remnant. 

I noticed Thomsen tense beside me as she piped up: “I need to investigate first before you pull sedum!” The cemetery’s prairie is speckled with sedum and other long-naturalized “invasives,” from lilacs to day lilies, that were planted over centuries to honor loved ones. Thomsen relies on those plants to find unmarked graves in a cemetery without formal records, she told me. She even planted a peony bush to help her find her own family’s graves amid the tallgrass. “Just because you don’t see a headstone does not mean there’s not somebody there!”

Sears held up his hands to Thomsen in surrender: “Her word is law today.”

Their interaction was the first hint at a conflict that has come up time and again here — between what’s considered natural or local, and invasive or foreign, among both plants and people. Rochester draws outsiders to an unusual degree for a rural Iowa town. For years, prairie enthusiasts like Longmire, environmentalists, AmeriCorps volunteers and university scientists have taken the Rochester exit off Interstate 80 to visit this cemetery. 

At times, visitors have collected seeds or even plants without permission. The late Diana Horton, who long ran the University of Iowa herbarium and created the most complete list of Rochester’s some 400 species, once cut down several of the prairie’s red cedars, much to Thomsen’s chagrin. The trees are native to the area (“It’s called the Cedar River,” she quipped), but not to oak savannas. Some locals, who come to the cemetery simply to mourn their loved ones, see the outsiders themselves as the invasive species. Of course, it’s a matter of perspective — descendants of pioneers here can trace their ownership back to the original land stolen from the area’s Indigenous peoples.

But the biggest point of conflict, here as at prairie cemeteries across Iowa and Illinois, comes from locals with varying ideas of what a cemetery should be. Rochester Township owns the cemetery, and its trustees manage it, along with most of the town’s affairs. Most of Iowa’s cemetery prairies are no longer active, working cemeteries. That makes it easier for conservationists like Rosburg to make the case to trustees for controlled burns and other active management strategies — the prairie is part of the pioneer history of those cemeteries, something to be preserved. But Rochester still has burials every year, which heightens tensions.

The Nature Conservancy recognized Rochester as a high-quality site for prairie plants back in the 1980s and got permission to do a controlled burn then. But its proposal to cease burials there to prevent damage to prairie plants was “incendiary” to locals, Longmire told me. Since then, fierce debates have arisen repeatedly over proposals to mow more frequently — Thomsen told me that one of her aunts tried to oust an incumbent trustee solely over the need for increased mowing during the 2006 election.

But infrequent mowing is what preserved the prairie. Rochester was hayed for livestock under pioneer ownership and, more recently, due to limited staff time and township funding, mowed annually in the fall so mourners could find their family stones. That cadence mimics the fires and grazing by bison and livestock that historically rejuvenated prairie, keeping woody plants at bay.

“Compared to other forms of American wilderness, prairies are hard to love — they don’t easily fall into the category of the sublime like giant sequoias or Yosemite waterfalls. You have to get really close to appreciate the complex beauty.”

There are always residents who want this cemetery to resemble the familiar urban variety, Sarah Subbert, Cedar County’s naturalist, told me. “Well, that’s not what Iowa was … If you mowed it every week, you wouldn’t have that diversity out there at all.”

Some residents take mowing around their family stones into their own hands, having been officially permitted to do so by management rules enacted in 2016. This has resulted in a more traditional-looking patch of close-cropped grass at the center of the cemetery surrounding the most recent burials, encircled by prairie on all sides — a sort of compromise visible on the landscape.

Pedee Cemetery, an example of a typical country cemetery in eastern Iowa. Photo by Stephen Longmire from his book, “Life and Death on the Prairie” (George F. Thompson Publishing, 2011).
Left: A hillside in Rochester Cemetery with black-eyed Susans and black oak. (Stephen Longmire/”Life and Death on the Prairie”) Right: A farm near Rochester, Iowa. (Stephen Longmire/”Life and Death on the Prairie”)

On Nature & Culture

I fell in love with tallgrass prairie as an undergrad at Augustana College in Rock Island, Illinois. Not with the plants, as many of my botany peers did, but with the idea of prairie as a human construct. If you try to fence off a prairie and preserve it — freeze it in time — it’ll disappear as woody plants and trees slowly encroach. That was a point of fierce debate in the 1980s and ’90s, when conservationists like Betz, the early discoverer of cemetery prairies, and Steve Packard in Chicago advocated for controlled burns and more active management of prairie remnants and restorations.

Critics saw restoration as gardening or meddling with nature. I thought of the vast western nature preserves that William Cronon described in “The Trouble with Wilderness,” and the irony of the government ousting the area’s Indigenous peoples — who had been stewarding the land — from their homes to create national parks to preserve now government-recognized wilderness. Nature has always been a part of the human realm. But prairie especially so.

“The whole ‘let nature take its course’ thing, or wilderness as a place without people, all those things break down very quickly in the tallgrass prairie,” Helzer, who manages thousands of acres of prairie in Nebraska, told me.

So I started seeking out prairies and other native ecosystems in Iowa and Illinois as a restoration volunteer. I pulled and cut invasives like buckthorn and multiflora rose and helped prepare for burns. When Rock Island decided to reintroduce prairie in a historic, Victorian-style, manicured park near my college, I dedicated my senior thesis to assessing how community members felt about the effort.

What I learned really surprised me — residents used words like “abandoned,” “unkempt,” “trashy” and “unwelcoming” to describe the unmowed areas. Several told me they felt like the “wild” had “invaded” the park and worried about this inviting “vandalism and crime” or “undesirable” people. That’s a conflation — famously made in New York City’s broken windows policing initiative — that some anthropologists have deemed “trash talk.”

To be fair, the initial restorations were of low quality. The parks department, perhaps unfamiliar with the history of prairie management, which requires careful selection and seeding of native species and controlled burns, took a laissez-faire approach. Later, the city acknowledged the “naturalized” areas weren’t exactly beautiful at first and began to plant more prairie grasses and flowers. But the negative attitudes stuck with me, long after I graduated. The nature-culture divide, established over two centuries of American civilization, is a challenge to bridge in the city.

Parks and graveyards are both “memorial landscapes,” Longmire writes in his photography book about Rochester, “Life and Death on the Prairie” — places where nature is manipulated to human ends. But cemeteries are culturally sacred places. That’s why I had to see Rochester’s cemetery prairie for myself. What way forward — if any — had its managers found to let not just plants but also cultures coexist?

Volunteers at the garlic mustard pull organized by the Iowa Prairie Network fill buckets with uprooted invasive plants. (Christian Elliott)
Left: Volunteers search the prairie for garlic mustard and other invasive plants encroaching from the woods on all sides. (Christian Elliott) Right: Jacie Thomsen, the cemetery’s burial manager, in a quiet moment leaning against the prod she uses to find lost, buried markers. (Christian Elliott)

People Of The Prairie

Back at Rochester, Thomsen led me away from the garlic mustard pull to show me her favorite part of the cemetery. She grew up just to the north and spent her summers here with her best friend, who once eerily foretold that Thomsen would someday become the cemetery’s guardian. 

In 2011, the township asked her to become a trustee and the burial manager.

Even setting aside its sprangly prairie vegetation, Rochester is a chaotic sort of cemetery. A resident can pick a plot, but that doesn’t guarantee it will be available. (“Somebody might already be there,” Thomsen told me.) On a metal park bench under an oak, Thomsen unrolled a copy of a survey from the 1980s with graves marked with little Xs: “It’s accurate to a degree,” she said.

“Most of a prairie plant’s biomass is underground, in the form of deep root systems that allow it to spring back to life after frequent fires.”

Thomsen’s found hundreds of unmarked graves with her trusty prod and dug up and restored many broken and long-forgotten stones — as of December 2025, she was up to 1,061. And after 15 years, she knows where all her “residents” are — and all their stories. She’s met their descendants and walked with them to their long-lost relatives. She’s dug through newspaper archives for obituaries and uploaded records to FindAGrave.com. Growing up, she wanted to be an archaeologist.

Surefooted in the tall grass, Thomsen led the way uphill to a spot near the cemetery’s boundary fence, far from the mustard-pulling crew. Here we visited Rebecca Green, who died on Sept. 25, 1838, at the age of seven months. This made her grave the cemetery’s oldest, Thomsen told me. Green is surrounded by pink prairie phlox and purple columbine, as she would have been when her parents, Eliza and William Green, buried her here next to where they’d eventually be laid to rest. Thomsen wondered aloud if they’d picked this place for its colorful flowers. The Greens arrived in Rochester in 1837, just a year after its founding, from Kentucky and Maryland, respectively. Their home served as a hotel for travelers and a stop on the Underground Railroad.

“When you come here, you’re looking at what they saw and what made them stay,” Thomsen told me. “This is the pioneer’s gift that they left for us. We are respecting that, even if everybody doesn’t get it, when they’re so used to manicured, boring.” She’s protective of this place, and her job isn’t easy. Sometimes trustees make decisions without her — mowing too early last year, for example, which prevented a controlled burn she was planning. She’s used to having to fight to be heard. She yanked poison ivy off a newer stone that reads “Captain Andrew Walker” — a Mexican and Civil War veteran buried in “a pauper’s grave” after he died at the Mt. Pleasant Asylum for the Insane. Thomsen tracked down his pension file and honored him with a stone on his family’s plot at Rochester.

I asked Thomsen whether she knew where she wanted to be buried. And of course, she did. She’s known since she was a child. The highest hill along the back fence, under an oak — a spot that’s always called to her. Thomsen gets goosebumps thinking about it. “There’s energy to the land, and we all leave our little imprint somehow.” The cemetery remembers the prairie, and the prairie remembers the people buried within it. Like the Greens, Thomsen’s family is mostly here, “four rows of kin” — her grandma and grandpa, her aunt, three uncles, her sister-in-law, two of those lost just last year. Her own staked-out spot is some distance away from the family plot — “Sometimes you can be a little too close to family, even in death.”

During the years Longmire spent in Rochester, he lamented a “dearth of people who could see both sides of the coin,” he told me — who could appreciate Rochester as both a natural and cultural wonder. But just as he left, Thomsen arrived on the scene. In her big binder, she keeps a pamphlet from his book talk. She knows all the stones, but she also knows the prairie — the common names (and some she’s made up) for each of the plants and the spots they come up every year, including the secret place the lady slipper orchid grows. She knows each of the towering oaks by name — the bear tree (a burr oak with a burl that resembles a cub climbing one side); the guardian, which stood tallest on the hill before a derecho felled it. She cried and mourned its death.

I had expected conflict at Rochester. But instead, I found someone who cared enough to shepherd compromise. If it can be done here, on hallowed ground, maybe it can be done anywhere.

A hill of blooming shooting stars, native to North America and one of the species being actively protected by restoration efforts, in the heart of Rochester Cemetery. (Christian Elliott)

Life Persists

Lost in thought, I realized Thomsen had taken off down the hill. I waded after her. She wanted to point out a new plant she’d spotted to Sears, the mustard pull organizer. Each little stalk was ringed with a spiraling firework of yellow blossoms.

“Oh, that’s lousewort!” he told her, “Laura would be really excited to see that!”

Thomsen cupped her hands around her mouth and shouted for Laura Walter.

“The cemetery remembers the prairie, and the prairie remembers the people buried within it.”

Walter, the scientist, wandered over, a bag overflowing with uprooted garlic mustard invaders tied around her waist. She excitedly knelt to examine the tiny plant, lifting her wide-brimmed hat. Finding lousewort usually means you’re dealing with high-quality remnant prairie, she told me, a “holy grail.” It’s partially parasitic, with roots that penetrate those of other plants underground to pirate water and mineral nutrients. In doing so, it suppresses its victim’s growth and keeps the prairie more open, promoting diversity. That kind of complex relationship is hard to recreate when doing restoration work. The plants nearby did look a little droopy. Had it already raided their nutrients and left a warning sign for others? I asked.

“It’s tantalizing to think about,” Walter laughed. She took a geolocated photo, and later, with the township’s permission, returned to collect its seeds. 

Walter then pointed excitedly at a blooming shooting star a few feet away. As we watched, a large bumblebee hovered upside down under its blossom and landed. In the spring, new bumblebee queens fly great distances to start new colonies, she told me. They depend on a few early blooming prairie flower species, like the shooting star, which have co-evolved to release pollen at specific bumblebee buzz frequencies.

“It’s funny, this is a cemetery, it’s where you honor the dead,” she mused. “But here you can also come and honor an abundance of life.”

Walter has collected shooting star seeds from remnants across the state, but they’re tricky to propagate. In the first growing season, a plant produces tiny seed leaves, a centimeter across. The following year, it gains a tiny tuft of true leaves. It can take five years to flower and produce seeds. Prairie restoration managers typically favor vigorous, fast-growing species that can outcompete invasive species and establish quickly.

Sitting in the prairie, I came to appreciate its beauty; the sheer complexity surrounding us was overwhelming. And it continued, invisibly, beneath the soil — every remnant prairie has a fungal and microbial community unique to its soil type and plant community.

“Think about all the things that we don’t know, and that don’t come back on their own,” Walter said. “We have to preserve those relationships in the places where they exist until we understand them.”

Rochester Cemetery is a model of what scientists call artisanal restorations — small-scale prairies conjured forth on private land that are helping bring back prairie at a larger scale. (Christian Elliott)

Fate Of The Prairie

The future of tallgrass prairie remains uncertain. The Midwestern states are speckled with more and higher-quality restorations today than when efforts began in the 1980s; however, Iowa’s unique roadside vegetation program depends on county and state-level support, which is at a low point under the current administration.

The Burr Oak Land Trust, an Iowa conservation group that for years sent AmeriCorps volunteers to Rochester and other remnant prairies to pull invasive species and conduct prescribed burns, lost its funding due to Department of Government Efficiency cuts this year. The Prairie Research Institute in Illinois lost $21 million in federal funding last fall. And opt-in programs, like the Conservation Reserve Program, where the federal government pays farmers to take marginal land out of crop production and return it to prairie or wetland, depend on the whims of the market, Jonathan Dahlem, an Iowa State University sociologist who studies farming conservation practices, told me. When corn and soybean prices rise, as they have over the past two decades, farmers are eager to plow up restorations to seed row crops even if yields aren’t expected to be high.

Rosburg said he finds hope in the increasing number of remnants discovered each year on forgotten pastures, along roads and in cemeteries. Universities like to talk about the “outsized impact” of small restorations, Jackson told me. But in reality, “every little bit helps a little bit,” she said.

I find my own hope in this place and in these people. At the end of the day, after the garlic mustard pull was over, Thomsen and Walter walked together up the hills, sharing their intimate and yet very different knowledge of the place.

Longmire calls Rochester Cemetery a memento mori — a reminder for living visitors of both their inevitable fate and of what Iowa lost. Funerals, gravestones and cemeteries are for the living — and this is a place that is alive, with plants and humans. Rochester is a time capsule of the past and a key to the future.

As I left, a truck and trailer pulled into the prairie to unload a riding lawn mower. The roar of the engine drowned out the buzz of insects as its operator carefully mowed around their family stone. It’s not a sight you’d see in a typical prairie. But here, it’s what compromise looks — and sounds — like.

I later learned that the man who had mowed around the gravestones of many Rochester families for years as a public service had passed away that same day. The sea of tallgrass grew unchecked in the following months, surging against the gravestones like waves — a constant reminder that he was gone. Concerned families have started asking Thomsen how the cemetery will be maintained going forward — how nature will be held at bay. A similar series of events sparked the big fight over mowing back in 2006. I worry a little about the prairie’s future and Thomsen’s hold over the fragile balance here.

“But isn’t it wonderful,” Longmire asked me, “to have a place that people take so seriously to fight about how it’s managed?”

The post Where The Prairie Still Remains appeared first on NOEMA.

Noema’s Top Artwork Of 2025 https://www.noemamag.com/noemas-top-artwork-of-2025 Thu, 18 Dec 2025 15:41:01 +0000 https://www.noemamag.com/noemas-top-artwork-of-2025 The post Noema’s Top Artwork Of 2025 appeared first on NOEMA.

by Hélène Blanc
for “Why Science Hasn’t Solved Consciousness (Yet)”

by Shalinder Matharu
for “How To Build A Thousand-Year-Old Tree”

by Nicolás Ortega
for “Humanity’s Endgame”

by Seba Cestaro
for “How We Became Captives Of Social Media”

by Beatrice Caciotti
for “A Third Path For AI Beyond The US-China Binary”

by Dadu Shin
for “The Languages Lost To Climate Change” in Noema Magazine Issue VI, Fall 2025

by LIMN
for “Why AI Is A Philosophical Rupture”

by Kate Banazi
for “AI Is Evolving — And Changing Our Understanding Of Intelligence” in Noema Magazine Issue VI, Fall 2025

by Jonathan Zawada
for “The New Planetary Nationalism” in Noema Magazine Issue VI, Fall 2025

by Satwika Kresna
for “The Future Of Space Is More Than Human”

Other Top Picks By Noema’s Editors


Noema’s Top 10 Reads Of 2025 https://www.noemamag.com/noemas-top-10-reads-of-2025 Tue, 16 Dec 2025 17:30:14 +0000 https://www.noemamag.com/noemas-top-10-reads-of-2025 The post Noema’s Top 10 Reads Of 2025 appeared first on NOEMA.

Your new favorite playlist: Listen to Noema’s Top 10 Reads of 2025 via the sidebar player on your desktop or click here on your mobile phone.

Daniel Barreto for Noema Magazine

The Last Days Of Social Media

Social media promised connection, but it has delivered exhaustion.

by James O’Sullivan


Beatrice Caciotti for Noema Magazine

A Third Path For AI Beyond The US-China Binary

What if the future of AI isn’t defined by Washington or Beijing, but by improvisation elsewhere?

by Dang Nguyen


Hélène Blanc for Noema Magazine

Why Science Hasn’t Solved Consciousness (Yet)

To understand life, we must stop treating organisms like machines and minds like code.

by Adam Frank


NASA Solar Dynamics Observatory

The Unseen Fury Of Solar Storms

Lurking in every space weather forecaster’s mind is the hypothetical big one, a solar storm so huge it could bring our networked, planetary civilization to its knees.

by Henry Wismayer


Sophie Douala for Noema Magazine

From Statecraft To Soulcraft

How the world’s illiberal powers like Russia, China and increasingly the U.S. rule through their visions of the good life.

by Alexandre Lefebvre


Ibrahim Rayintakath for Noema Magazine

The Languages Lost To Climate Change

Climate catastrophes and biodiversity loss are endangering languages across the globe.

by Julia Webster Ayuso


Vartika Sharma for Noema Magazine (images courtesy mzacha and Shaun Greiner)

The Shrouded, Sinister History Of The Bulldozer

From India to the Amazon to Israel, bulldozers have left a path of destruction that offers a cautionary tale for how technology without safeguards can be misused.

by Joe Zadeh


Blake Cale for Noema Magazine

The Moral Authority Of Animals

For millennia before we showed up on the scene, social animals — those living in societies and cooperating for survival — had been creating cultures imbued with ethics.

by Jay Griffiths


Zhenya Oliinyk for Noema Magazine

Welcome To The New Warring States

Today’s global turbulence has echoes in Chinese history.

by Hui Huang


Along the highway near Nukus, the capital of the autonomous Republic of Karakalpakstan. (All photography by Hassan Kurbanbaev for Noema Magazine)

Signs Of Life In A Desert Of Death

In the dry and fiery deserts of Central Asia, among the mythical sites of both the first human and the end of all days, I found evidence that life restores itself even on the bleakest edge of ecological apocalypse.

by Nick Hunt


The Creative Intuition Of Frank Gehry https://www.noemamag.com/the-creative-intuition-of-frank-gehry Fri, 12 Dec 2025 17:22:40 +0000 https://www.noemamag.com/the-creative-intuition-of-frank-gehry The post The Creative Intuition Of Frank Gehry appeared first on NOEMA.

Before Frank Gehry, there were boxes, pyramids, domes and an occasional ziggurat. Not many can claim to have created an entirely new form, as the architect did with his famous Guggenheim Museum Bilbao in Spain and the Walt Disney Concert Hall in Los Angeles. Gehry, who died earlier this month at age 96, was such a truly original mind that Apple included his visage along with those of Albert Einstein and John Lennon in its famous “Think Different” ad campaign in 1997.

Like others who think differently and come from outside the insider establishment, he rebelled against the custodians of proper and hallowed ways.

This was most evident in his early days through the deconstruction of his own staid Dutch colonial-style home in Santa Monica, whose façade he disrupted with jutting angles of glass, corrugated metal, plywood and chain-link fence. It was not pretty. But, as fellow architect Thom Mayne has commented, the use of inexpensive everyday materials in a city where properties easily go for $20 million was a critical statement about the house as a status symbol. Mayne thought the house was “very aggressive politically … using chain link is saying fuck you to marble.”

Living in Los Angeles, I crossed paths with Gehry several times over the years, including in some formal conversations for New Perspectives Quarterly, the journal I edited. We met for lunch once in the late 1990s to discuss the formidable roadblocks to getting the Disney Hall built.

Gehry drew one of his famous scribbled sketches on a restaurant napkin and told me his original idea was to sheathe the building in stone, not metal, which created construction impediments. He railed against the aesthetic judgment of some members of the board overseeing the design, who were threatening to block funding. He seemed so convinced the project would never see the light of day that I threw away what would now be an immensely valuable sketch!

I last saw the famed architect for a video interview at his studio in L.A. in 2018 to discuss how the backlash against globalization was affecting cities, which Gehry happily saw as chaotic “collisions of thought like democracy.”

The most intellectually memorable conversation, though, was in 2014, when we drilled down on his creative vision and where it comes from:

Nathan Gardels: You once commented on your fascination with a dancing Shiva sculpture that belonged to the Norton Simon Museum. And you seem to have tried to capture this “frozen motion,” as you put it, in your buildings in Bilbao and at the Disney Hall in Los Angeles.

Interestingly, your attempts to capture this “frozen motion” in architecture correspond to the scientific pursuits of Ilya Prigogine, the theorist of chaos and complexity who won the Nobel Prize in chemistry in 1977.

“If the clock was the symbol of classical science,” Prigogine has said, “sculpture is more the symbol for today. Sculpture is time put into matter. In some of the most beautiful manifestations of sculpture, be it the dancing Shiva or in the miniature temples of Guerrero, there appears very clearly the search for a junction between stillness and motion, time arrested and time passing. It is this confrontation — a hidden unity just like dark and light — that will give our era its uniqueness.” A sculpture like the dancing Shiva is the symbol of the new work being done in physics because it “embodies some elements that conform to given rules and other elements that arise unexpectedly through the process of creation.”

Though your buildings look as if you’ve thrown together disconnected fragments, isn’t there really a synthesis, a hidden unity, as Prigogine suggests, in your designs?

Frank Gehry: You are absolutely right. I am amazed to hear this quote from Prigogine. That too is what I am seeking, though guided by intuition and not so consciously by intellect. It is all about a sense of movement. When I look outside the door, what do I see? An airplane flying over, a car passing by. Everything is moving. That is our environment. Architecture should deal with that.

For example, the best way to look at the building I did in Bad Oeynhausen, Germany, is to go across the road to the bar and just sit there and look out. Big trucks are whooshing by. When they come along the road, they fit into the form of the building. The movement of the trucks doesn’t conflict with the motionless building but integrates with it.

“Like others who think differently and come from outside the insider establishment, Frank Gehry rebelled against the custodians of proper and hallowed ways.”

I didn’t do this on purpose, but intuitively. Such a building strikes me as very much like the dancing Shiva. I used to sit there and just look at Norton Simon’s dancing Shiva. It was a remarkable sculpture. I swear it was moving. How did they do that?

I had a similar feeling when I saw the Elgin Marbles. The shield of the warriors seemed to be thrusting out. You could just feel the movement. These observations affected my work very much. When I would go out to the suburbs and see these huge tracts of housing under development, I was fascinated. You would see row after row of wood frames going up with piles of wood stacked all around. It was really vibrant. It looked far better than when the houses were actually finished.

The Walt Disney Concert Hall located in Los Angeles completed by Frank Gehry in 2003. (Carol M. Highsmith/Wikimedia Commons)

I used to fantasize: What would it look like if you just threw all those piles of wood into the air and just froze them there in mid-air? It would be magnificent. Indeed, the great organ in the new Disney Hall has some of that sense to it.

The Walt Disney Concert Hall organ. (cultivar413/Wikimedia Commons)

Gardels: In Los Angeles, there is neither utopia nor ruins — the downtown has been completely eradicated four times in the last 100 years. Creating architecture here is like building in a “pure space.” This corresponds to something the poet Octavio Paz said — that we live in the permanently temporary present of “pure time” without a past, and, since all utopias have failed, with an undetermined future.

One might even say that your Disney Concert Hall is a more perfect symbol of Los Angeles than its sponsors imagined. Pure time meets pure space in the frozen motion of those metallic waves. That is our reality today.

Gehry: I suspect there is some truth here, that I have tapped into something that is going on, that my buildings represent a certain way of seeing. At a personal level, though, it is hard to claim such things.

I’m not a theorist, but a vacuum cleaner. I listen. I look. And then I represent with my tools. As for the pure space of the present, there are a lot of constraints. Why do our leaders and the public at large want to live so much in the past? It seems the less faith they have in the future, the more they want to anchor their identity in the past. But the past is gone. It is a fiction of our insecurity. To anchor architecture in the past is to build nostalgic parks. It is to make ersatz out of heritage. And it is denial.

Authentic Theme Parks

Gardels: Arata Isozaki, the Japanese architect who built the Los Angeles Museum of Contemporary Art, says he likes to build in America because there is no irony. Relative to old societies like Japan, there is no ancestral territory, and thus little if any distance between the context and whatever new it is you want to create. There is no conflict with history in America, which, as [the French philosopher] Jean Baudrillard has put it, is essentially “space plus a spirit of fiction” — in other words, pure space.

Isozaki contrasted this with his concert hall in Kyoto, where the traditionalists fought against his design as unfitting for Japan’s ancient spiritual center. Isozaki argued back that Kyoto was little more than a theme park where tourist buses unload groups of Japanese looking into a past that has no reality for them today.

“They might as well be wearing Mickey Mouse ears,” Isozaki told the enraged traditionalists. With the arrival of pure space, the authentic becomes inauthentic and vice versa.

Gehry: As far as it goes, I have to agree. At the same time, though, there is, of course, something that is different. Kyoto grew out of a refined culture over the centuries. It evolved a method of building and an aesthetic that meant something. It was fashioned in a crucible of time, feeling and culture that was related to a spiritual connection with nature. When I took my kids there, it became an important part of their experience.

Disney World isn’t that. It is a ride. It is a fantasy. It is a built movie. Kyoto wasn’t. It may be abused as a theme park now, as Isozaki says. But its origins are real. And it is valuable to see Kyoto just as it is valuable to see a Picasso.

“To anchor architecture in the past is to build nostalgic parks. It is to make ersatz out of heritage. And it is denial.”
—Frank Gehry

Asia & The Generic City

Gardels: Rem Koolhaas, the Dutch architect, has declared that the city as we have known it is gone. We have arrived in the age of “the generic city,” liberated from “the captivity of the center” — and the personality, identity and constraints associated with that. Connected in cyberspace, we will all live in the floating, unanchored periphery. Should we leave our vague regrets behind and just embrace this open future?

Gehry: That is freedom. I suppose it is the pure space you’ve been talking about. And Rem is probably right that this form will cover most of the planet.

Gardels: What is your favorite city?

Gehry: Tokyo is my favorite city visually. It is partly the density that I like, but also the transitional quality. They have the history, but they didn’t stop because of it. On one street you will find a temple next to an eight-story building from the 1950s next to a 30-story building constructed in the 1970s. Then they plastered neon signs all over and stuck a roadway in the middle of it all, going off into space. It is dynamic, like those erector sets we used to play with as kids. Along the freeways and down at the Tokyo Bay, they build these Godzilla-size convention centers. But they are tasteful, more invested with architecture than you might find in America. They are clearly plugged in to a style sense.

Then they will build those wacko indoor ski resorts that look like the Eiffel Tower. It is weird, but beautiful. I see in Tokyo today what I see in my favorite writer, Salman Rushdie. He’s like James Joyce: his novels are episodic and open-ended — they go all over the place, in seven directions at once. The characters have layers of identity — plural identities.

Now, when they go the next step — as they already are in the Shinjuku area of Tokyo — there are only 50-story buildings, and it looks like 6th Avenue in New York. Then they lose it. When they get that big, they need more land. And that is when they overpower everything else.

Gardels: What is your image of the future city? For Koolhaas, the old cities of Asia will give way to the Generic City as they are obliterated with megastructures to accommodate the demographic deluge. That will happen either in an ordered way, as in Singapore, or in a more dystopian way, as in Blade Runner. Simply, as Koolhaas puts it, “the past is too small to inhabit.”

Gehry: I don’t know if we’re capable of speculating about the future. We know bits and pieces, but we can’t know what the aggregate is going to look like. I don’t have any hopes that it will be much to be excited about, though. Today, there are pockets of sanity that are of a scale where they are still visible in the chaos. In the future, the pockets of sanity will become tiny. Perhaps then the buildings I’m doing that look like they are moving will ultimately dematerialize into ether. The mega-scale will overpower all else. In rapidly growing Asia, they are interested in building, not architecture. I’ve been invited to China, but I’ve turned them down because I know the people building on a large scale there are Donald Trumps. Chinese Donald Trumps. As a friend of mine says, it is already over in China for architecture.

The post The Creative Intuition Of Frank Gehry appeared first on NOEMA.

Inside Denmark’s Hardline Immigration Experiment https://www.noemamag.com/inside-denmarks-hardline-immigration-experiment Tue, 02 Dec 2025 17:27:17 +0000 https://www.noemamag.com/inside-denmarks-hardline-immigration-experiment

COPENHAGEN, Denmark — Prime Minister Mette Frederiksen is keen to have immigration policy back at the center of Danish politics. In fact, she believes it will dominate the upcoming election. In an interview with the newspaper Politiken, Frederiksen described the lack of safety that she felt had become “the absolute biggest problem” for many Danes.

“Many of us know that there could be an assault at a subway station, or that a young guy could be sitting alone in the back seat of a bus, and suddenly two or three people with an Arab background come in and rip him apart,” Frederiksen, who leads the left-leaning Social Democratic party, said in September.

Her comments upset many in progressive circles, and especially the second- and third-generation Danish citizens of Arab descent who have felt targeted and estranged from their native country as a result. But the statement seemed deliberately aimed at cementing the party’s position as tough on immigration ahead of next year’s general elections.

In Denmark, where non-Western immigrants and their descendants comprise some 10% of its 6 million people, the issue of immigration was once a clear dividing line between left and right on the political spectrum. Today, being tough on newcomers is a cornerstone of political consensus. Over the past two decades, successive governments have tightened asylum laws, slashed welfare benefits for immigrants and pursued a zero-asylum policy. With that last goal nearly achieved — Denmark granted asylum to only 860 people in 2024 — the supposedly center-left-leaning government is now promising even stricter rules.

It’s not only Denmark. Countries across the old continent are grappling with a surge of populist right-wing parties, and more established parties seem to be trying to draw lessons from the Danish experience. The U.K. home secretary recently sent officials to Denmark to study its border control and asylum policies. Denmark’s strict rules on family reunions and temporary refugee stays are among the policies under review, the Guardian reports. While “getting to Denmark,” a phrase coined by political scientist Francis Fukuyama, may once have been considered the El Dorado of good governance, is this really where we all want to go?

The Seeds Of Anti-Elitism

In 1987, Denmark won its first Oscar with “Babette’s Feast,” an adaptation of a famous tale by Karen Blixen about a political refugee from France. Villagers greet Babette’s arrival in Denmark with sometimes subtle presumptions and whispered speculations. When, many years later, she wins the lottery, she throws an opulent banquet for the community with turtle soup, blinis crowned with caviar, and quail and foie gras. The villagers make a pact to reluctantly eat the foreign food, but take no pleasure in it.

The film’s interrogation of the parochialism of a small community and its fears toward the foreign and unfamiliar was a sign of the times, arriving at a moment when anti-immigration sentiments had started to seep into Danish politics. The right-wing Progress Party — born in the 1970s as a libertarian protest party against high taxes — had by the 1980s redirected much of its energy toward opposing Muslim immigration.

Its leader claimed that Turkish guest workers, invited during the economic boom of the 1960s, along with later refugees from Iran and Iraq, were eroding the Danish welfare state from within. Central to this critique was the 1983 Aliens Act — then the most liberal immigration law in Europe — that extended generous rights to individuals seeking asylum and family reunification. Even though immigrants from Muslim-majority countries were well under 2% of the population, the so-called party of progress cast such immigration as not just a threat to Danish national identity, but as something essentially incompatible with it. In doing so, the Progress Party attracted voters from smaller rural communities who were estranged from the educated elites of the capital, not entirely unlike the insular community depicted in “Babette’s Feast.”

Still, anti-immigration sentiment remained on the fringes of the extreme far right, fiercely rejected by politicians across the political spectrum throughout the 1980s and ‘90s. In a famous speech, then-Prime Minister Poul Nyrup Rasmussen, a Social Democrat, declared that the Danish People’s Party would never become “stuerent,” literally “clean for the living room,” an idiom signaling the party’s inherent disreputability as a political partner. For established parties, the Progress Party and its successor, the Danish People’s Party, were to be kept at arm’s length, eternally excluded from what was considered normal politics.

But that is a bygone era.

“Today, being tough on newcomers is a cornerstone of political consensus.”

In 2001, Anders Fogh Rasmussen made a surprising but calculated decision to accept the Danish People’s Party’s outside support for his center-right minority government in order to gain a long-awaited premiership. With its 22 seats, the People’s Party now effectively had veto and bargaining power over government policy, notably on assimilation and immigration, and none of the responsibilities.

The depth of that influence was dramatized in an episode of the popular Danish television series “Borgen,” with an apt Machiavellian episode title, “The Art of the Possible.” The series follows a fictional prime minister who reluctantly adopts increasingly tighter immigration laws to secure her government’s survival and retain the outside support of the populist party. Throughout the series, the issue of immigration served as a prism for illuminating left-right divisions in Danish politics at the time and highlighting how a relatively small far-right party could exert disproportionate influence over a single topic.

In real life, Rasmussen’s early-aughts government similarly adopted ever-tighter immigration and asylum laws, responding to demands from the Danish People’s Party and to growing popular concern over what many on the right viewed as the country’s liberal family reunification laws. These reunification laws enabled a “chain” effect on migration, allowing the arrival and permanent settlement of relatives from non-Western countries who gained access to free healthcare, schools and universities, among other benefits of the Danish welfare state. Sometimes, proponents suggested, these family reunifications even occurred through forced or arranged marriages.

In 2001, a new “Ministry for Integration” was established to centralize political control over a domain that was no longer considered a peripheral social issue but one that had been elevated to the very center of the government’s agenda. The following year, the government passed new laws as part of a so-called “immigration package.” Among the new mandates: foreign spouses had to be at least 24 years old before applying for family reunification; waiting periods for reunification grew longer; married couples had to demonstrate stronger ties to Denmark than to any other country and pass a language proficiency test; and applicants had to pay as much as $12,000 in today’s dollars against any future welfare expenditures.

Gradually, Rasmussen and his right-wing coalition grew less hesitant about using their new anti-immigration rhetoric and policies. Polls continuously showed that about half the population viewed immigration as a serious threat to Danish culture, and throughout the aughts and early 2010s, successive governments proposed increasingly harsh assimilation and migration policies, culminating in the controversial minister for integration, Inger Støjberg, posing by a birthday cake in her office to celebrate the government’s 50th tightening of immigration legislation.

At the border, officers subjected immigrants from non-Western countries to increasing scrutiny. Denmark sharpened its external controls on immigration, using its long-standing opt-out from European Union asylum cooperation to limit the number of asylum seekers and reduce immigration incentives.

As Syrians fled war and the Assad regime at the end of 2015 and in early 2016, Denmark partially closed its southern border with Germany and passed a new law enabling border officials to confiscate jewelry and other valuables from refugees, ostensibly to cover the cost of their asylum and, perhaps more importantly, to deter Syrian asylum seekers from choosing Denmark.

The Danish government ran ads in Arabic-language newspapers, urging potential immigrants to reconsider Denmark as a destination and warning that social benefits had been halved, family reunification suspended and permanent residency made contingent on mastering the Danish language.

In the span of about 15 years, what may have begun as concessions to a far-right support party had hardened into a governing consensus on the right and center-right of the political spectrum. More than 50 laws had been passed to tighten immigration, but in polls, voters from these parties continued to demand ever-stricter laws and ranked immigration as one of their top three priorities.

The Mirror & The Wall

Denmark’s drastic measures drew heavy criticism throughout the aughts and 2010s from liberal and left-wing parties in parliament, as well as internationally from rights groups, United Nations agencies and even, notably, Denmark’s own neighbors. Germany and Sweden, mature democracies that share a common history and ostensibly a similar political culture with Denmark, effectively provided a contrasting mirror image to the Danes, displaying the progressive and solidaristic posture that outside observers typically expect from wealthy Northern European welfare states.

“In the span of about 15 years, what may have begun as concessions to a far-right support party had hardened into a governing consensus on the right and center-right of the political spectrum.”

Unlike Denmark, Germany in 2015 stood squarely at the forefront of Europe’s refugee crisis after the fallout of the so-called “Arab Spring” and the Syrian Civil War. In a decision unmatched across the continent, Germany’s government waived its asylum rules and welcomed more than a million refugees. Then-Chancellor Angela Merkel’s famous exhortation “Wir schaffen das” (“We can do this”) resonated among Europeans as an exemplar of how a political leader could guide her fellow citizens, given the country’s capacity and moral imperative to welcome refugees.

In the fall of 2015, another event emerged as a comparative foil to Denmark’s immigration policies. Public broadcasters invited Danish and Swedish politicians from across the political spectrum to a debate on immigration and refugee policy, broadcast live on national television.

It was remarkable to witness representatives of these two adjacent Nordic countries understanding each other while speaking in their respective languages — with no live translation or interpreter — and yet portraying such diametrically opposite views on immigration. Swedish politicians on the left and the right recalled their country’s historic role in welcoming refugees fleeing Nazism, Stalinism and ethnic cleansing in Bosnia. Then these politicians argued that Denmark’s policies were “cynical” and “racist,” comparing Denmark’s public discourse around Muslims to Nazi Germany’s 1938 rhetoric on Jews. Meanwhile, Danish politicians dismissed Sweden’s posture as naïve and disingenuous. They argued that Swedish media and policymakers embraced political correctness to the point of self-censorship and had therefore limited coverage of the country’s disastrous efforts at multicultural assimilation.

In Denmark in 2015, the Social Democratic Party, once the most representative in the nation, again failed to form a new government. At the ballot box, the Danish People’s Party had become the second-largest party in parliament with roughly 22% of the vote.

Then-party leader Helle Thorning-Schmidt resigned, and the Social Democratic party entered a period of introspection. The party needed to reinvent itself. But how? Long regarded as the architects and guardians of the Danish welfare state, Social Democrats had historically been defenders of poorer, working-class people. But those were the very people among whom the Social Democrats had steadily lost political ground. In rural and poor areas, voters were turning to the far right.

Some critics argued that the Social Democrats had lost touch with the everyday concerns and cultural values of ordinary Danes and had become synonymous with the cosmopolitan elite of Copenhagen. Ordinary Danes, these critics suggested, were particularly preoccupied with immigrants from Muslim countries and their lack of assimilation. These Danes wanted to preserve the nation’s renowned welfare state but restrict its benefits to insiders — what political scientists today call “welfare chauvinism.” To regain power, pundits argued, the Social Democrats would have to take up the concerns of everyday Danes and reinvent themselves. That meant embracing a new platform of law-and-order, anti-immigration and anti-establishment policies typically associated with far-right parties.

Most notably, the Social Democrats endorsed a “paradigm shift” in Denmark’s migration and integration approach: from viewing refugees and asylum seekers as future citizens who could be permanently integrated into Danish society to treating them as temporary residents by default, to be repatriated as soon as their home countries were considered safe.

The Social Democratic Party’s transformation proved highly successful. In 2019, the party returned to power under Mette Frederiksen, who quickly advanced a series of restrictive asylum and migration measures, openly pursuing a goal of zero asylum seekers. Asylum seekers could now be sent to another country for processing; rejected asylum seekers were sent to detention facilities, many newly expanded, to await deportation. Benefits were also cut, and family reunification laws were tightened again with a ceiling on the maximum number of reunifications. Denmark also cut its U.N. refugee quota to 200 a year and revoked the temporary protected status of some Syrian refugees.

With such measures, the Social Democrats transformed themselves from a center-left party into something farther to the right, absorbing much of the far right’s immigration platform and, to a certain extent, what French sociologist Pierre Bourdieu might call its cultural habitus. Frederiksen, active on social media, posts images of herself polishing her own windows or eating simple open-faced sandwiches. In her yearly televised New Year’s address, copies of bestselling novels about social issues were prominently displayed in the window beside her. Such novels have reignited debates about the lack of class mobility and the steep societal divisions between urban and rural Denmark.

“Social Democrats had historically been defenders of poorer, working-class people. … (But) in rural and poor areas, voters were turning to the far right.”

For the Social Democrats, being tough on immigration is a way to signal their ties to everyday Danes and their efforts to take seriously the anxieties of those who feel left behind by a changing world. Social Democrats have reclaimed voters from the Danish People’s Party and repositioned themselves as guardians of the welfare state. Yet the needle keeps moving — and further rightward.

Last year, the Social Democrats’ shadow minister for immigration and integration, Frederik Vad, delivered an influential speech in parliament that he called “the third realization.” In this speech, Vad warned of a fifth column of Danish citizens with Muslim backgrounds who were allegedly “undermining Danish values from within.” In the subsequent public debate, a controversial book by the French anthropologist Florence Bergeaud-Blackler about Islamist networks in France was used to back up this claim. Vad argued, for example, that Muslims working in public institutions like schools, libraries and hospitals should be scrutinized for potentially promoting Islamist values, and that citizenship rights should be redefined to include aspects of loyalty and tilhørsforhold, or “belonging,” to Danish society by adhering to its values and culture.

Seeing their agenda adopted once again by the Social Democrats, the far-right People’s Party has quickly advanced its own, even more radical idea of “remigration” as a new frontline in Denmark’s migration debate.

The concept of remigration originates in the extreme ethno-nationalist Identitarian movement, whose French branch the French government has since banned. In Denmark, the far-right People’s Party advocates a remigration scheme that includes reviewing and potentially revoking Danish citizenship granted to migrants from Muslim-majority countries over the past 20 years, followed by their forced mass deportation.

In support of remigration, the People’s Party has also proposed measures that, in the party’s own words, make it “close to impossible to live an Islamic lifestyle in Denmark.” Such steps include prohibiting halal foods in schools, banning sharia-based arbitration and potentially shutting down Muslim schools and cultural centers if they do not adhere to “Danish values.”

Ultimately, such targeted restrictions are meant to pressure Danish citizens of Muslim faith to keep their heads down or be forced out. For now, remigration remains the official policy only of the People’s Party, though members of the center-right Conservative Party have expressed openness to discussing the idea.

Some experts, like Mira S. Skadegaard, a university professor who teaches about minority rights, have called remigration a modern form of ethnic cleansing against Muslim citizens in Denmark. Liberal party leader Martin Lidegaard recently denounced the Danish People’s Party’s remigration proposal as “wild, extremist, and un-Danish,” vowing to fight it both politically and legally. Denmark’s Minister of Foreign Affairs, Lars Løkke Rasmussen, similarly warned in an interview in November that such a plan hitches the right-wing bloc to a wagon they will regret; he has urged more voices to speak out against it. Meanwhile, Kristian Madsen, now editor-in-chief of the A4 news outlet and a former speechwriter for the ex-Danish Prime Minister Helle Thorning-Schmidt, argues that remigration is a “disgusting” concept, and in a recent column, he called out today’s Social Democrats for sharing in the People’s Party premise that Muslims are unwanted in Denmark.

All this raises the question: How far right will — or can — this go?

Horseshoe Politics & Its Discontents

The mainstreaming of far-right positions that started two decades ago in Denmark is no longer an aberration, even in erstwhile strong liberal and open democracies like Sweden and Germany — and even in the U.S., it seems. The far-right Sweden Democrats, hitherto shunned by the rest of the parties in parliament, have provided external support to keep the conservative minority government in power for the last few years, much like the Danish People’s Party did at the beginning of this century. Their rise can be directly correlated with rising crime in Sweden’s degraded suburbs.

Similarly, in neighboring Germany, the far right and xenophobic Alternative für Deutschland (AfD) has witnessed a seemingly unstoppable rise in the polls. Earlier this year, another taboo was shattered when Germany’s Christian Democrats passed a parliamentary vote on citizenship rules and border controls with the support of the AfD, thus cracking the “firewall” against it that had endured until then.

It is no exaggeration to claim that in this sphere, Denmark was the canary in the coal mine, predating such developments elsewhere in Europe and the United States by multiple political cycles or even a generation. The question now is what this trajectory suggests about the state of democratic politics and its normative underpinnings, and where things might go from here, in Denmark and beyond.

“The mainstreaming of far-right positions that started two decades ago in Denmark is no longer an aberration, even in erstwhile strong liberal and open democracies like Sweden and Germany.”

Compare, for example, the political consensus that has consolidated in Denmark around restricting immigration with that of a country like Italy, which has long been at the forefront of the fight against illegal immigration due to its geographic position in the middle of the Mediterranean.

Italians generally trace their experience with modern immigration back to Aug. 8, 1991, when the Vlora, a ship carrying some 20,000 Albanian migrants, docked in the southern port city of Bari. Since then, immigration has easily been the most divisive and polarizing issue of Italy’s identity politics. Subsequent Italian right-wing governments have attempted various plans and agreements that were later adopted by the rest of Europe. In 2008, Italy and Libya signed a “Friendship Treaty” that included an apology for Italy’s prior colonialism and a $5 billion infrastructure fund in exchange for the repatriation of immigrants. The deal showcased the same transactional logic — immigrant repatriation in exchange for hefty payments to an autocratic regime — perfected nearly a decade later in a deal between the European Union and Turkey after the 2015 refugee crisis.

In 2022, Italy’s far-right party won government power for the first time. Since then, Italy’s far-right government has successfully pushed Europe to adopt similar accords from Tunisia to Egypt. Today, Rome operates an extra-territorial asylum processing center in Albania. In a social media exchange with one of the authors of this article, the Albanian philosopher Lea Ypi referred to this practice as “fascist humanitarianism.”

Remarkably, the discursive practices and policy positions of a Scandinavian Social Democratic-led government, traditionally known for its progressivism and solidarity, closely align with those of far-right governments, such as Italy’s. Copenhagen has teamed up with Rome to question the reach of the European Court of Human Rights, which has already ruled against some of Denmark’s immigration policies, and to call for a renationalization of judicial powers to rein in the Court’s reach and “make political decisions in our own democracies.” And Denmark was the first country in Europe to transfer asylum seekers to countries outside the EU for processing (that policy has since been put on hold).

References to Muslim migrants as threats are now used in Italian far-right political slogans, but they were pioneered by the Danish far right three decades ago and are now a staple of Denmark’s political conversation, even among left-leaning parties such as the Social Democrats. In the jargon of political scientists, this is a textbook example of horseshoe theory, applied on a transnational scale.

When in power, center-left governments in Italy also pursued severely restrictive immigration policies, much as the Biden administration in the United States carried out a volume of deportations comparable to that of the first Trump administration. Yet Europe’s center-left forces have generally struggled to reconcile their political narrative with their political reality. Instead, they continue their old efforts to rebuild consensus around other left-leaning political goals and groups, while attempting to ignore their discomfort with the so-called paradigm shift — from assimilation to repatriation — of Denmark’s supposedly center-left Social Democrats. As a result of such mixed messaging, Europe’s center-left coalitions have been routinely punished at the polls.

Today’s Danish experience, however, is very different: It stands as an outlier. According to research by political economist Laurenz Guenther, the public in virtually all European countries is consistently more culturally conservative than its political establishment and to the right of mainstream politicians on issues such as immigration and criminal justice. But the Danish Social Democratic Party’s transformation has meant that its positions on immigration and criminal justice now align with the preferences of a slight majority of voters on non-Western immigration.

On the face of it, this alignment might make Denmark a virtuous paragon of representative democracy. In practice, however, the Danish case shows a worrying involution of democratic politics. In their upcoming book “What Europeans Think About Immigration and Why it Matters,” political scientists Andrew Geddes and James Dennison show how the public tends to interpret immigration through emotional, cultural and selective narratives. Public perception of a need for more law and order tends to result in more radical policies to address it — a dynamic that, in turn, has fueled the rise of anti-immigration movements on the right.

“Europe’s center-left forces have generally struggled to reconcile their political narrative with their political reality.”

Of course, not everyone in Denmark is against immigration from non-Western countries. There is strong opposition from some progressive liberal circles in urban areas, from civil rights non-governmental organizations working with immigrant communities and from younger generations. This was also evident in the upset municipal election result in Copenhagen last month, where the Social Democrats lost power for the first time in 122 years to a Green Left candidate. But perhaps the most surprising voices critical of the government’s immigration policy come from the business community, which is typically more politically conservative and right-wing.

Confederation of Danish Industries CEO Lars Sandahl Sørensen and the Danish Chamber of Commerce’s Executive Director Brian Mikkelsen have consistently argued for a more open immigration policy to help address the country’s aging population and overheated labor market. With unemployment rates as low as 2.6%, Denmark needs foreign labor to supplement virtually all sectors, private and public. Unable to ignore these voices, the government has taken some targeted measures to make it easier for employers to sponsor non-EU skilled worker permits.

Similarly, steps have been taken to attract international students, particularly in science, math and technology. Still, the tough-on-immigration policy and suspicion toward immigrants from non-Western countries remain intact. In the now-infamous Politiken interview, the prime minister scolded a Danish university for having too many students from Bangladesh: “Last year, one in six new master’s students,” she quipped, “was from Bangladesh. I mean, when you say that sentence, you think it’s a lie.” 

When viewed in this light, the Danish case is less a model than a warning about what happens to democratic politics when politicians from the center and center-left move to the right to regain or retain power, rather than deliberating, informing and modeling responsibility and respect. What were once signature proposals in the far-right playbook are now mainstream policies, and — once implemented by Denmark’s highly functional bureaucratic state — these proposals have had a ripple effect on Danish society, resulting in what sociologist Brooke Harrington terms “performative xenophobia.” Stringent migration laws and policy proposals signal toughness and Danish belonging, while those who criticize such proposals are perceived as naïve or disloyal to the homeland.

While immigration seems poised to dominate the upcoming election cycle’s discourse, as Denmark’s prime minister predicted, the broad public consensus around the current hardline posture means election results, paradoxically, are less likely to make a difference in determining Denmark’s stance on the subject.

The Danish experience offers a cautionary tale for other countries in Europe and beyond. Over the past two decades, adopting elements of the far-right agenda has not only made its policy propositions more acceptable and seemingly mainstream; it has created space for new demands from the far right, radicalized its discourse and increasingly normalized its worldview. If this is the final destination of “getting to Denmark,” it might not be worth the trip.


The Moral Authority Of Animals https://www.noemamag.com/the-moral-authority-of-animals Tue, 25 Nov 2025 14:50:32 +0000 https://www.noemamag.com/the-moral-authority-of-animals

A fine and highly trained dog is at work on a beautiful day at Panama City Beach, Florida. It’s spring break 2022; the sun is shining and spirits are high. Then chaos erupts.

The dog’s human colleague, a stocky, white police officer, is uniformed, armed and visibly irate. He is yelling at a young woman of color in a bikini. She walks away but the cop storms after her with the dog as other people gather around and shout.

It’s unclear what prompted the mayhem, which is captured in part in a shaky video. A young Black man, who looks to me like a high schooler, appears to try to defuse the situation, but the officer will not be calmed. He grabs the kid by the back of the neck, then throws him to the ground and pins him down.

Onlookers scream: “He didn’t do nothing!” The dog has had enough and attacks the person behaving aggressively — his own handler — biting the officer’s arm. When the clip is posted online, the dog is celebrated as the hero of the day for upholding justice and fairness.

The well-being of our societies depends on such qualities, which some assume to be uniquely human. But research has emerged showing that animals can be moral beings, too. In a world where power is misused, public morality has become slippery and dishonesty lurches sickeningly through public speech, animals can offer vital lessons for human ethics, political wisdom and social health.

Some animals display a sense of right and wrong, as that police K-9 demonstrated at Panama City Beach, and a sense of fairness. A dog may shake a human’s hand with his paw — repeatedly, without treats — because he enjoys doing so. But if a second dog is invited to join in and is given a treat, the unrewarded dog may show signs of stress and refuse to keep playing: It isn’t fair.

Similarly, in a famous experiment by researchers Frans de Waal and Sarah Brosnan, female capuchin monkeys trained to barter made their feelings about fair treatment clear. The researchers rewarded one capuchin with grapes (which the primates love) and another with cucumbers (which they care less for). When the second capuchin saw the other getting a grape, she refused to play along. Years later, the researchers videotaped the task with monkeys that had never done it before to see if the reaction might be stronger: The second capuchin reacted furiously, shaking the cage and hurling cucumber slices at the experimenter.

Fairness matters to dwarf mongooses, too. In the daytime, while they forage in groups, one must stand guard to watch out for predators. They take turns in this sentinel role. In the evening, when they all groom each other, those who spent more time on guard duty get more grooming: Fair’s fair. Dwarf mongooses also care about justice. If one has been mean during the day, perhaps shoving another away from food, the other mongooses take note and groom that one less.

Many animals mete out punishment for perceived wrongs, including some big cats, canids and primates. A troop of baboons was reportedly near a mountain road in Saudi Arabia in 2000 when one was hit and killed by a car. The whole group gathered in grief and fury, watching every vehicle that went by for three days until the car that had killed their friend passed again on that stretch of road. They chucked rocks, forcing the car to stop, then shattered the windscreen. The driver, fearing for his life, had to flee. Tigers too have been known to enact revenge, specifically targeting those who have provoked them.

Canids know that honesty matters. Biologist and animal behaviorist Marc Bekoff notes in “The Emotional Lives of Animals” that while canids sometimes “lie” — for example, they might perform a “play bow” to indicate friendship, then attack — they may face consequences for dishonesty. Coyotes who lie are ostracized by the pack. Dogs who “cheat” may be shamed and avoided by others.

Honesty, justice, fairness and the moral behavior shown by the police dog are part of the ethics that make societies healthy. Even in small ways, ethics matter. The word “etiquette” literally means “little ticket,” but it has come to name the little ethics of everyday conduct. This is not some dainty and spurious curlicue of arbitrary human behavior, but rather a demonstration of respect for others, important for social health. We humans are not the only animals to embrace it.

“Animals can offer vital lessons for human ethics, political wisdom and social health.”

Chimpanzees in Arnhem’s Royal Burgers’ Zoo in the Netherlands had learned the zookeepers’ rule that meals wouldn’t be served until all had assembled. But one day, as reported by Time magazine in 2007, two teenage chimps were more interested in staying out to play than coming in to eat. The others had to wait for hours, getting hungrier and angrier. When the two errant chimps finally showed up, zookeepers protected them from the others’ wrath in a separate enclosure overnight. But when they rejoined the group the next day, the others pummeled them, teaching them some manners. That night, those two were the first in for dinner.

Following Animals’ Lead

Many Indigenous philosophies consider that we humans are the “younger brothers of creation,” and that the animals, our elders, have lessons to teach us. For millennia before we showed up on the scene, social animals — those living in societies and cooperating for survival — had been creating cultures imbued with ethics. As Bekoff writes, “The origins of virtue, egalitarianism and morality are more ancient than our own species.”

In the opinion of some Australian anthropologists, notes ethologist Temple Grandin, early humans watched wolves and were educated by them. Indigenous Australians put it more directly, saying, “dogs make us human.” Millions of years before us, wolf ethos included babysitting the pups, sharing food with those too injured, sick or old to hunt and including friends in their packs, beyond the genetic kin. Wolf ethics also included being both a good individual and a good pack member. 

Human societies, while often quite different from one to the next, generally have a shared ethos similar to that of wolves: Look after the young; protect the tribe; consider the needs of the sick, injured or old; and value the cooperation of others who may not be kin (friends, in other words). It is biomimicry applied to the ethical world. Wolves were doing it first, and we aped them. 

In ancient folktales and medicine stories, animals are often at the heart of an ethical pivot. Many deal with issues of societal healing after the hero has been treated unethically and morality needs to prevail once again. In the Grimms’ fairytale “The Goose Girl,” the heroine has been cheated and lied about, but her horse, Falada, has moral authority in speaking the truth of her situation. In “Puss in Boots,” the miller’s youngest son has been wrongly disinherited, but Puss, avenging that wrong, creates a fortune for him far beyond expectation.

The medicine presented in these stories is often an ethical remedy for social ills. The wisdom of folktales aligns with the perception of Indigenous philosophy to tell us: Look to the animals for morality.

Animals Policing Humans

Many societies have overtly attributed to animals the job of policing human behavior. An Ancient Greek legend tells of a thief who attacked a poet and left him for dead. With his little remaining strength, the poet called out to cranes flying overhead, who became police-birds. They followed the thief, circling over him until he felt forced to confess.

Widespread Indigenous belief speaks of an animal archetype, often called the Master, Mistress or Owner of the Animals, who guards animals from hunters’ mistreatment and, by doing so, regulates human ethics. For the Ojibwe (or Anishinaabe, as they call themselves), among the most populous Indigenous peoples of North America, this figure is the Sky Bear, an archetype of a real bear, who is born in a Sky den and lives in Sky lodges. The bear approves of generosity and disapproves of selfishness and excess.

Among many other Native people of North America, the archetype is known as the Bear Master, who polices hunting: People must not take more than they need or disrespect the animals. Insulting or wasteful behavior also offends the Master of the Animals. These ethics are remarkably common: rewarding respect and generosity, punishing greed. They are also apparent in folktales of the animal helpers.

Across the Amazon, the Master of the Animals is also a guardian who protects his creatures from overhunting. The Master of Animals may appear to a hunter in his dreams, troubling his conscience. The worried dreamer may then talk to a shaman, who reinforces the warning. If the caution is disregarded, the Master of Animals may punish the hunter, making the animals scarce or withdrawing them.

When I was in the Amazon in 2000, I was struck by the parallels between this figure and the Greek god Pan, who guards his animals and whose presence makes people nervous, causing them to watch their step and behave well. 

“The origins of virtue, egalitarianism and morality are more ancient than our own species.”
— Marc Bekoff

According to Chisasibi Cree belief, hunters must treat caribou well and never overhunt. Caribou frequented Chisasibi Cree lands and were hunted with respect until early in the 20th century, when people went out armed with a novel weapon, the repeating rifle. The Elders said the hunters lost control of themselves and killed more caribou than they could carry away.

The caribou disappeared for decades. But in the winter of 1982 into 1983, a few returned. The following winter, they came in large numbers to an area that the hunters could reach by road, and over the course of a month or so, a huge and frenzied hunt took place. People shot wildly, leaving many animals injured and once again killing more than they could take away. The Elders were upset, warning that if the animals weren’t respected, they wouldn’t return.

The following year, there were indeed almost no caribou. The Elders reminded the hunters of the last long absence of the caribou. Were these hunters of the 1980s going to be the ones to lose the respect of the caribou? Chastened, the young hunters took heed and, according to wildlife biologist Peter Miles, this had far more impact than any government regulation or legislation.

The powerful actions of animals may be recognized as a form of natural law governing morality. Perhaps modernity, in its increasing severance from the minds of wild, free animals, has also cut ties to something utterly precious and necessary — a public, shared and visible conscience.

Animal Models Of Healthy Politics

Healthy societies need healthy politics, and animals can be good role models. Some operate their own kinds of referendums, taking account of each other’s wishes. Red deer will move off after a period of resting or feeding when 62% of the adults get to their feet. When African buffaloes make a collective decision to move, only the females’ votes count, expressed by standing, gazing in the direction they want to take, then lying down again. They watch each other, and when enough females want to move, they do.

In his book “Honeybee Democracy,” Thomas Seeley describes honeybees’ intricate decision-making processes for finding a new hive or leading fellow bees to feeding sites for nectar and pollen. The decisions depend on good research and on each bee communicating as truthfully as possible. Scout bees fly out to reconnoiter for a new site, and they dance to convey their findings. Other scouts check out the reports of good sites, returning to dance for the best. They listen to disagreement and recheck the sites. The new nest is chosen when all the scout bees are in agreement, dancing for the same site.

The bees offer a model for the consensus-building politics of citizens’ assemblies that help people reach cooperative decisions after careful deliberation. Healthy human societies could use some political medicine from honeybees. Tell the truth. Don’t suppress dissent. Listen to the experts. Always dance.

Fieldfares, gently speckled honey-colored birds, also demonstrate remarkable cooperation. They are much smaller than their enemy, the hooded crow, which snatches eggs from fieldfare nests. The first principle of political action is this: Find allies. So fieldfares gather together, fly above the hooded crow, and do a huge synchronized poop, bombing the bird, who has to exit the scene and clean its oily feathers.

When we think of political systems, we usually turn to human processes. But some animals model political practices that we could learn from to improve the health of our societies.

Rethinking Anthropomorphism

Fieldfare collective action? Baboon retribution? Dwarf mongoose sanctions? Dog fairness? Is there an issue of anthropomorphism here?

The term “anthropomorphism” is too often robed in a peculiar and partial rationalism that prefers “mechanomorphism,” treating animals as machines oiled by automatic response and fired by reflexive instinct.

An accusation of anthropomorphism is commonly used to pour scorn on all who regard animals with a fullness of intelligence — those of us who think with both logic and metaphor, who perceive with both measurements and intuitive sensitivity, who unlimit ourselves and embrace a possibility (never a certainty) of coming to knowledge through empathy, observation, self-forgetting and kinship.

Humans don’t know with certainty what is happening in other human minds, so we gently engage in the habit of twice-listening, where we hear someone and let their experience find resonance in ourselves. We listen to our gut. We watch our own responses. The juror in empathy steps into the skin of the alleged victim, knows that pallor and sweat, knows and believes her. Our kinship with other humans gives us license to guess and to feel our way into their minds.

“Perhaps modernity, in its increasing severance from the minds of wild, free animals, has also cut ties to something utterly precious and necessary — a public, shared and visible conscience.”

We are akin to other animals in shared common ancestry, in the continuum of evolution, and when we see animals (especially mammals) acting in a certain way in a certain situation, we can infer something of their minds. The accusation of anthropomorphism condemns the attribution of human traits or emotions to non-humans, but our characteristics and intentions are so very often held in common with other animals.

Our emotions are fundamentally theirs, as are our ways of expressing them. Love is warm, close and cuddling. Anger is a hot, violent rush of blood. Fear is a chilling freeze. Humans share so much with other animals — humor, language, culture, friendship, spirituality, art, politics, mother-love and a sense of home.

After a while, it feels silly to claim as “human” characteristics that are so manifestly shared with other creatures. Among the human traits that do not appear to be shared are capitalism, genocide and ecocide, and perhaps the divine right of kings (though queen bees may have something to say on that). 

Wise thinking uses what has been termed “critical anthropomorphism,” not directly translating animal behaviors into human terms, but using empathy to help make interpretations.

Critical anthropomorphism embraces as context an animal’s worldview — what that creature needs, feels, knows and wants — and blends that knowledge creatively with a human response. It subliminally attends to shared common ancestry and the inherent relatedness of living beings. Coined in the mid-1980s, the term has been useful for considering animals properly as living, subjective beings rather than as little more than robots with reflexes.

It’s very far from being an exact science, which is perhaps why some scientists, steeped in the Enlightenment’s drive to divest itself of anything with a whiff of uncertainty, mysticism or Indigenous culture, still go vinegar-mouthed at it. It’s an art. It’s a philosophy. As a working hypothesis, it is generous, and as an aid to understanding animals, it is legitimate. Critical anthropomorphism is not an enemy of the scientific process but a friendly adjunct.

Many animal observers now argue that those who attack anthropomorphism are suffering a nihilistic shutdown in their thinking, trapped in anthropodenial. The term was coined by de Waal, who described it as “a blindness to the human-like characteristics of other animals or the animal-like characteristics of ourselves.” The onus now is on the sneerers to prove that animals don’t experience responses and emotions comparable to ours.

Animals As Social Medicine

Animals can offer social medicine by their mere physical presence. When people stroke a cat, their oxytocin levels rise. When they interact with their dogs, the levels of oxytocin in both the human and the dog can nearly double. This is good not just for the individual, but for society. American neuroscientist Paul Zak calls oxytocin the “moral molecule” because it motivates people to treat others with compassion. He refers to his work on oxytocin as pioneering the study of the chemical basis for human goodness.

The effects of pet-keeping may also be a social prophylactic for the simplest of reasons, as the company of pets is a remedy for loneliness. (Pet-keeping is, incidentally, an ancient human universal; Indigenous people from the Arctic to the Amazon have kept pets.) I have often found my loneliness assuaged by my cat. Research backs up their salutary effect: In a 2013 study of elderly dog owners living alone, 75% of the men and 67% of the women said their dog was their only friend.

This matters to a society’s political health because when people are lonely, they become vulnerable. Loneliness increases the risk of stroke by 56%; it is more dangerous than smoking; it is a strong contributory factor in heart disease; and the lonely are more likely to have serious mental health problems.

By safeguarding so many of us against loneliness, animals help inoculate us against these ills, providing preventative medicine for our individual and collective well-being. They also give us a steady continuance of existence, a dependable self-sameness.

This world-hour is one of flux and havoc; a white-water, white-knuckle ride of hectic technological and political change; a weakening of social ties as a result of the precarity of jobs and housing. The future looks fearful and foreclosed as a buckling and breaking climate holds us all in jeopardy. It can make us feel seasick and disturbed, with no sure footing.

“Critical anthropomorphism embraces as context an animal’s worldview — what that creature needs, feels, knows and wants — and blends that knowledge creatively with a human response.”

In this state of chaos, we need animals desperately. They give us a sweet and fundamental gift in that they stay reliably the same. The dove that returned to Noah and settled for Picasso flies across your mind’s sky. A donkey is a donkey is a donkey, offering a stable stillness in a rupturing storm, the constancy of their being. Animals help us keep our paws on the ground as individuals. 

We can better our collective health if we are willing to learn from animals modeling consistent ethics and a constancy of morality. It is a reassurance that when truth and wrong and right and lies are smudged to toxic sludge, some creatures may offer us a clear-eyed view of good behavior.

The post The Moral Authority Of Animals appeared first on NOEMA.

From Cinema To The Noematograph
https://www.noemamag.com/from-cinema-to-the-noematograph (Fri, 14 Nov 2025)

For the celebrated novelist Ken Liu, whose works include “The Paper Menagerie” and the Chinese-to-English translation of “The Three-Body Problem,” science fiction is a way to plumb the anxieties, hopes and abiding myths of the collective unconscious.

In this pursuit, he argues in a Futurology podcast, AI should not be regarded as a threat to the distinctive human capacity to organize our reality or imagine alternative worlds through storytelling. On the contrary, the technology should be seen as an entirely new way to access that elusive realm beneath the surface and deepen our self-knowledge.

As a window into the interiority of others, and indeed, of ourselves, Liu believes the communal mirror of large language models opens the horizons of how we experience and situate our presence in the world.

“It’s fascinating to me to think about AI as a potential new artistic medium in the same way that the camera was a new artistic medium,” he muses. What the roving aperture enabled was the cinematic art form of capturing motion, “so you can splice movement around … and can break all kinds of rules about narrative art that used to be true.

“In the dramatic arts, it was just assumed that because you had to perform in front of an audience on the stage, that you had to follow certain unities to make your story comprehensible. The unity of action, of place, of time. You can’t just randomly jump around, or the audience wouldn’t be able to follow you.

“But with this motion-capturing machine, you can in fact do that. That’s why an actual movie is very different from a play.

“You can do the reaction shots, you can do the montages, you can do the cuts, you can do the swipes, you can do all sorts of things in the language of cinema.

“You can put audiences in perspectives that they normally can never be in. So it’s such a transformation of the understanding of presence, of how a subject can be present in a dramatic narrative story.”

He continues: “Rather than thinking about AI as a cheap way to replace filmmakers, to replace writers, to replace artists, think of [it] as a new kind of machine that captures something and plays back something. What is the thing that it captures and plays back? The content of thought, or subjectivity.”

The ancient Greeks called the content, or object of a person’s thought, “noema,” which is why this publication bears that name.

Liu thus invents the term “Noematograph” as analogous to “the cinematograph not for motion, but for thought … AI is really a subjectivity capturing machine, because by being trained on the products of human thinking, it has captured the subjectivities, the consciousnesses, that were involved in the creation of those things.”

An Interactive Art Form Where The Consumer Is Also The Creator

Liu sees value in what some regard as the worst qualities of generative AI.

“This is a machine that allows people to play with subjectivities and to craft their own fictions, to engage in their own narrative self-construction in the process of working with an AI,” he observes. “The fact that AI is sycophantic and shapeable by you is the point. It’s not another human being. It’s a simulation. It’s a construction. It’s a fictional thing.

“You can ask the AI to explain, to interpret. You can role-play with AI. You can explore a world that you construct together.

“You can also share these things with other humans. One of the great, fun trends on the internet involving using AI, in fact, is about people crafting their own versions of prompts with models and then sharing the results with other humans.

“And then a large group, a large community, comes together to collaboratively play with AI. So I think it’s the playfulness, it’s that interactivity, that I think is going to be really, really determinative of the future of AI as an art form.”

So, what will the product of this new art form look like?

“As a medium for art, what will come out of it won’t look anything like movies or novels … They’re going to be much more like conversations with friends. They’re going to be more like a meal you share with people. They are much more ephemeral in the moment. They’re about the participation. They’re about the consumer being also the creator.

“They’re much more personalized. They’re about you looking into the strange mirror and sort of examining your own subjectivity.”

AI Makes Us Visible To Ourselves

Much of what Liu posits echoes the views of the philosopher of technology Tobias Rees, expressed in a previous conversation with Noema.

As Rees describes it, “AI has much more information available than we do, and it can access and work through this information faster than we can. It also can discover logical structures in data — patterns — where we see nothing.

“AI can literally give us access to spaces that we, on our own, qua human, cannot discover and cannot access.”

He goes on: “Imagine an AI model … that has access to all your data. Your emails, your messages, your documents, your voice memos, your photos, your songs, etc.

“Such an AI system can make me visible to myself … it literally can lift me above me. It can show me myself from outside of myself, show me the patterns of thoughts and behaviors that have come to define me. It can help me understand these patterns, and it can discuss with me whether they are constraining me, and if so, then how. What is more, it can help me work on those patterns and, where appropriate, enable me to break from them and be set free.”

Philosophically put, says Rees, invoking the meaning of “noema” as Liu does, “AI can help me transform myself into an ‘object of thought’ to which I can relate and on which I can work.

“The work of the self on the self has formed the core of what Greek philosophers called meletē and Roman philosophers meditatio. And the kind of AI system I evoke here would be a philosopher’s dream. It could make us humans visible to ourselves from outside of us.”

Liu’s insight as a writer of science fiction realism is to see what Rees describes in the social context of interactive connectivity.

Art’s Vocation

The arrival of new technologies is always disruptive to familiar ways of seeing that were cultivated from within established capacities. Letting go of those comforting narratives that guide our inner world is existentially disorienting. It is here that art’s vocation comes into play as the medium that helps move the human condition along. To see technology as an art form, as Liu does, is to capture the epochal moment of transformation that we are presently living through.

The post From Cinema To The Noematograph appeared first on NOEMA.

Humanity’s Endgame
https://www.noemamag.com/humanitys-endgame (Thu, 06 Nov 2025)

LONDON — There are 8 million artifacts in the British Museum. But to commence his tale of existential jeopardy, risk expert Luke Kemp made a beeline for just two items housed in a single room. On a visit in early fall, beyond a series of first-floor galleries displaying sarcophagi from pharaonic Egypt, we stopped beside a scatter of human bones.

The exhibit comprised two of the 64 skeletons unearthed from the sands of Jebel Sahaba, in northern Sudan, in 1964. Believed to be over 13,000 years old, the bodies in this prehistoric cemetery were significant for what they revealed about how their owners died. Of those 64 skeletons, at least 38 showed signs of violent deaths: caved-in skulls, forearm bones with parry fractures from victims staving off blows, or other injuries. Whether a result of organized warfare, intercommunal conflict or even outright massacre, Jebel Sahaba is widely considered to be some of the earliest evidence of mass violence in the archaeological record.

According to Kemp, these shattered bones were a foreshadowing of another object in this room. Ten feet away, displayed at knee-height, was the Palette of Narmer. Hewn from a tapering tablet of grey-green siltstone, the item on display was an exact cast of the 5,000-year-old original — discovered by British archaeologists in 1898 — that now sits in Cairo’s Egyptian Museum.

At the center of the stone stands the giant figure of Narmer, the first king of Egypt. His left hand clasps the head of an enemy, presumed to be a rival ruler of the Western Delta. In his raised right hand he holds a mace. The image is thought to depict Narmer bludgeoning his greatest opponent to death, an act that solidified his sovereignty over all Egypt. Beneath his feet lie the contorted bodies of two other victims, while overhead a falcon presents Narmer with a ribbon, believed to represent the god Horus bestowing a gift of the Western Nile. “Here we have perfect historical evidence of what the social contract is. It’s written in blood,” Kemp told me. “This is the first depiction of how states are made.”

In the British Museum’s repository of ancient treasures and colonial loot, the palette is by no means a star attraction. For the half hour we spent in the room, few visitors gave it more than a passing glance. But to Kemp, its imagery “is the most important artwork in the world” — a blueprint for every city-state, nation and empire that has ever been carved out by force of arms, reified in stone and subsequently turned to dust.

Systematizing Collapse

When Kemp set out seven years ago to write his book about how societies rise and fall — and why he fears that our own is headed for disaster — one biblical event provided him with the perfect allegory: the story of the Battle of the Valley of Elah, recounted in 1 Samuel 17. Fought between the Israelites and the Philistines in the 11th century BCE, it’s a tale more commonly known by the names of its protagonists, David and Goliath.

Goliath, we are told, was a Philistine warrior standing “six cubits and a span,” or around 9 feet, 9 inches, clad in armor of bronze, the alloy of copper and tin that would give his epoch its name: the Bronze Age. As the rival armies faced off across the valley, the giant stepped onto the battlefield and laid down a challenge that the conflict should be resolved in single combat.

For 40 days, Goliath goaded his enemy to nominate a champion, until a shepherd named David came forward from the Israelite ranks, loaded a stone into his sling and hurled it into Goliath’s brow, killing him at a stroke, then took the giant’s head with his own sword. For centuries thereafter, the story of David and Goliath has served as a parable challenging the superiority of physical might. Even the most impressive entity has hidden frailties. A colossus can be felled by a single blow.

According to Kemp’s new book, “Goliath’s Curse,” it’s a lesson we would do well to heed. Early on, he dispenses with the word “civilization,” because in his telling, there is little that might be considered civil about how states are born and sustained. Instead, he argues that “Goliath” is a more apposite metaphor for the kind of exploitative, hierarchical systems that have grown to organize human society.

“‘Goliath’ is a more apposite metaphor for the kind of exploitative, hierarchical systems that have grown to organize human society.”

Like the Philistine warrior, the Goliath state is defined by its size; in time, centralized polities would evolve to dwarf the hunter-gatherer societies that prevailed for the first 300,000 years of Homo sapiens. Ostensibly, it is well-armored and intimidating, exerting power through the threat and exercise of violence. And, in kind with the biblical colossus, it is vulnerable: Those characteristics that most project strength, like autocracy and social complexity, conceal hidden weaknesses. (A more modern allegory, Kemp writes, can be found in the early Star Wars movies, in which a moon-sized space station with the capacity to blow up a planet can be destroyed by a well-placed proton torpedo.)

Kemp is, of course, by no means the first scholar to try to chart this violence and vulnerability through the ages. The question of what causes societies to fail is arguably the ultimate mission of big-picture history, and a perennial cultural fixation. In the modern era, the geographer Jared Diamond has found fame with theories that cast collapse as largely a product of geography and environment. The “Fall of Civilizations” podcast, hosted by the historian Paul Cooper, has over 220 million listens. Perusing a bookshop recently, I spotted a new release, entitled “A Brief History of the End of the F*cking World,” among the bestsellers.

What distinguishes Kemp’s book from much of the canon is the consistencies he identifies in how different political entities evolved, and the circumstances that precipitated their fall. A panoramic synthesis of archaeology, psychology and evolutionary biology, “Goliath’s Curse” is, above all, an attempt to systematize collapse. Reviewers have hailed the book as a skeleton key to understanding societal precarity. Cooper has described it as “a masterpiece of data-driven collapsology.”

Moreover, it is a sobering insight into why our own globalized society feels like it is edging toward the precipice. That’s because, despite all the features that distinguish modern society from empires of the past, some rules hold true throughout the millennia.

Becoming ‘Dr. Doom’

In September, Kemp traveled down from Cambridge to meet me in London for the day. Given his subject, I half-expected a superannuated and eccentric individual, someone like Diamond with his trademark pilgrim-father beard and penchant for European chamber music. But Kemp, 35, would prove to be the antithesis of the anguished catastrophist. The man waiting for me on the concourse at King’s Cross was athletic, swarthily handsome and lantern-jawed. He’d signed off emails regarding our plans to meet with a puckish “Cheerio.”

Kemp’s background is also hardly stereotypical of the bookish scholar. He spent his early years in the dairy-farming town of Bega in New South Wales, Australia, where cattle outnumbered people three-to-one. It was “something of a broken home,” he told me. His father was an active member of the Hells Angels, involved in organized crime, a formative presence that would later germinate Kemp’s interest in power dynamics, the way violence is at once a lever for domination and for ruin.

Escaping to Canberra after high school, Kemp read “interdisciplinary studies” at the Australian National University (ANU), where he found a mentor in the statistical climatologist Jeanette Lindsay. In 2009, it was Lindsay who persuaded him to join a student delegation heading to COP15 in Copenhagen, where Kemp found himself with a front-row seat to what he calls “the paralysis of geopolitics.”

At one stage, during a symposium over measures to curb deforestation, he watched his own Australian delegation engage in endless circumlocutions to derail the debate. Representatives from wealthier countries, most notably America, had large teams that they could rotate on and off the floor, enabling them to filibuster vital, potentially existential questions into deadlock. “If you’re from Tuvalu, you don’t have that privilege,” Kemp explained.

Afterward, Kemp became preoccupied by “a startling red thread” evident in so many spheres of international negotiation: the role of America as arbiter of, and all too often barrier to, multilateral cooperation. Kemp wrote his doctoral thesis on how pivotal issues — such as biodiversity loss, nuclear weapons and climate change — had grown captive to the whims of the world’s great superpower. Later, when he published a couple of academic articles on the same subject, “the ideas weren’t very popular,” he said. “Then Trump got elected, and suddenly the views skyrocketed.”

In 2018, Kemp relocated to the United Kingdom, landing a job as a research affiliate at Cambridge University’s “Centre for the Study of Existential Risk” (CSER, often pronounced, in an inadvertent nod to a historical avatar of unalloyed power, as “Caesar”). His brother’s congratulatory present, a 3-D printed, hand-engraved mask of the Marvel character “Dr. Doom,” would prove prophetic. Years later, as Kemp began to publish his theories of societal collapse, colleagues at CSER began referring to him by the very same moniker.

“Goliath hierarchies select for assholes — or, to use Kemp’s preferred epithet, ‘dark triad’ personalities: people with high levels of psychopathy, narcissism and Machiavellianism.”

It was around this time that Kemp read “Against the Grain,” a revisionist history of nascent conurbations by James C. Scott. Kemp had always been an avid reader of history, but Scott’s thesis, which argued that the growth of centralized states “hadn’t been particularly emancipatory or even necessarily good for human wellbeing,” turned some of Kemp’s earlier assumptions about human nature on their head.

Such iconoclastic ideas — subsequently popularized in blockbuster works of non-fiction like Rutger Bregman’s “Humankind” (2019), and “The Dawn of Everything” (2021) by David Graeber and David Wengrow — would prompt years of research and rumination about the preconditions that enable states and empires to rise, and why they never last forever.

‘Hobbes’ Delusion’

“Goliath’s Curse” opens with a refutation of a 17th-century figure whose theories still cast a long shadow across all considerations of societal fragility. In “Leviathan” (1651), the English philosopher Thomas Hobbes proposed that the social contract was contingent on the stewardship of a central authority — a “Leviathan” designed to keep a lid on humanity’s basest instincts. The primatologist Frans de Waal dubbed this doctrine “veneer theory.”

“Once civilization is peeled away, chaos spreads like brushfire,” Kemp writes. “Whether it be in post-apocalyptic fiction, disaster movies or popular history books, collapse is often portrayed as a Hobbesian nightmare.”

For decades now, the predominant version of history has been beholden to this misanthropic worldview. Many of the most influential recent theories of collapse have echoed Hobbes’ grand theory with specific exemplars. Diamond has famously argued that the society on Rapa Nui, or Easter Island, unraveled due to self-inflicted ecocide before devolving into civil war. That interpretation, in which the islanders deforested the land in the service of ancestor worship, has since been held up as a species-wide admonition — evidence, as researchers John Flenley and Paul Bahn have written, that “humankind’s covetousness is boundless. Its selfishness appears to be genetically inborn.” In “The Better Angels of Our Nature” (2011), Steven Pinker estimated that 15% of Paleolithic people died of violent causes.

But Kemp was struck by a persistent “lack of empirics” undermining these hypotheses, an academic tendency to focus on a handful of “cherry-picked” and emotive case studies — often on islands, in isolated communities or atypical environments that failed to provide useful analogs for the modern world. Diamond’s theories about the demise of Rapa Nui — so often presented as a salutary cautionary tale — have since been debunked.

To further rebut such ideas, Kemp highlights a 2013 study by the anthropologists Jonathan Haas and Matthew Piscitelli of Chicago’s Field Museum. In what amounted to the most comprehensive survey of violence in prehistory, the authors analyzed almost 3,000 skeletons interred during the Paleolithic Era. Of the more than 400 sites in the survey, they identified just one instance of mass conflict: the bones of Jebel Sahaba. “The presumed universality of warfare in human history and ancestry may be satisfying to popular sentiment; however, such universality lacks empirical support,” Haas and Piscitelli wrote.

If there was any truth to the Hobbesian standpoint, the Paleolithic, with its absence of stratified social structures, should have been marked by mass panic and all-out war. Yet the hunter-gatherer period appears to have been a time of relative, if fragile, peace. Instead, conflict and mass violence seemed to be by-products of the very hierarchical organization that Hobbes and his heirs essentialized. Cave art of armies wielding bows and swords dates only to around 10,000 years ago. “As soon as you start tugging on the thread of collapse, the entire tapestry of history unravels,” Kemp told me.

But if Hobbes was wrong about the human condition — if most people are averse to violence, if mass panic and mutual animosity are not the principal vectors of societal disintegration — what then explains the successive state failures in the historical record? Where or what, to mix metaphors, is Goliath’s Achilles’ heel?

What Fuels Goliath?

In seeking to disentangle a template of collapse from this historiography, Kemp turned to historical data, searching for traits of state emergence and disintegration shared by different polities. “When I see a pattern which needs to be explained, it becomes a fascination bordering upon obsession,” he told me.

A central pillar of his research was the Seshat Global History Databank, an open-source database incorporating more than 862 polities dating back to the early Neolithic. Named after the Egyptian goddess of wisdom, Seshat includes a range of metrics like the degree of centralization and the presence of different types of weaponry; it aggregates these to create nine “complexity characteristics” (CCs), including polity size, hierarchy, governmental framework and infrastructure.

“Wherever Goliath took hold, ‘arms races’ followed, as other status-seeking aspirants jostled for hegemony. And Goliaths were contagious.”

Using this and other sources, Kemp set out to collate his own novel dataset, this time focusing on the common features not of complexity, but of collapse. In keeping with Seshat’s old-god nomenclature, he dubbed it the “Mortality of States” index, shortened to “Moros,” after the Greek god of doom. Covering 300 states spanning the last five millennia, the resulting catalogue is, Kemp claims, “the most exhaustive list of state lifespans available today.”

To some extent, Kemp’s data told a story that has become received wisdom: As Earth thawed out from the last ice age, we entered the Holocene, a period of warmer temperatures and climatic stability. This shift laid the terrain for the first big inflection point: the advent of agriculture, which encouraged our previously itinerant species to settle in place, leading to greater population density and eventually proto-city-states. These early states rose and fell, often condemned by internal conflict, climatic shocks, disease or natural disasters. But gradually the organization of human societies trended toward higher levels of complexity, from the diffuse proto-city-states, through the birth of nations, then empires, to the globalized system of today. The violent paroxysms of the past were merely hiccups on a continuum toward increased sophistication and civility, and perhaps someday immortality. Such is the tale that is commonly framed as the arc of human progress.

But trawling through the data in more detail also revealed unexpected and recurrent patterns, leading Kemp to an early realization: states observably age. “For the first 200 years, they seem to become more vulnerable to terminating. And after 200 years, they stay at a high risk thereafter,” Kemp told me.

The other glaring commonality concerned the structure of these societies. “The common thread across all of them is not necessarily that they had writing or long-distance trade,” Kemp said. “Instead, it’s that they were organized into dominance hierarchies in which one person or one group gains hegemony through its ability to inflict violence on others.”

Kemp argues that dominance hierarchies arise due to the presence of three “Goliath fuels.” The first of these is “lootable resources,” assets that can be easily seen, stolen and stored. In this respect, the advent of agriculture was indisputably foundational. Cereal grains like wheat and rice could be taxed and stockpiled, giving rise to centralized authorities and, later, bureaucracies of the state.

The second Goliath fuel is “monopolizable weapons.” As weaponry evolved from flint to bronze, the expertise and relative scarcity of the source material required for early metallurgy meant that later weapons could be hoarded by powerful individuals or groups, giving those who controlled the supply chain a martial advantage over potential rivals.

The third criterion for Goliath evolution is “caged land,” territories with few exit options. Centralized power is predicated on barriers that hinder people from fleeing oppressive hierarchies.

In Kemp’s telling, every single political entity has grown from one of these seeds, or more commonly, a combination of all three. Bronze Age fiefdoms expanded at the tip of their metal weaponry. “Rome,” Kemp writes, “was an autocratic machine for turning grain into swords,” its vast armies sustained by crop imports from the Nile Valley, its endless military campaigns funded by the silver mines it controlled in Spain. In China, the Han dynasty circumscribed its territory with its Great Wall to the north, intended both to keep Xiongnu horseback raiders out and the citizenry in. Europe’s colonial empires were built, in Diamond’s famous summation, by “Guns, Germs and Steel.”

For millennia, the nature of forager societies kept these acquisitive impulses to some extent contained, Kemp argues. The evolutionary logic of hunting and gathering demanded cooperation and reciprocity, giving rise to “counter-dominance strategies”: teasing, shaming or exile. With the advent of Goliath polities, however, the “darker angels of our nature” were given free rein, yielding social arrangements “more like the dominance hierarchies of gorillas and chimpanzees.”

“Rather than a stepladder of progress,” Kemp writes, “this movement from civilization to Goliath is better described as evolutionary backsliding.” Moreover, Goliaths “contain the seeds of their own demise: they are cursed. This is why they have collapsed repeatedly throughout history.”

In Kemp’s narrative, our retrograde rush toward these vicious social structures has been less about consensus than the relentless ascent of the wrong sort of people. Goliath hierarchies select for assholes — or, to use Kemp’s preferred epithet, “dark triad” personalities: people with high levels of psychopathy, narcissism and Machiavellianism. Consequently, history has been shaped by pathological figures in the Narmer mold, dominance-seekers predisposed to aggression. Reinforced by exceptionalist and paranoid ideologies, these strongmen have used violence and patronage to secure their dominion, whether driven by a lust for power or to avenge a humiliation. Several of the rebellions that plagued dynastic China, Kemp points out, were spearheaded by aggrieved people who failed their civil service examinations.

“Whether societies collapsed through gradual depopulation, like Çatalhöyük, or abruptly, as with Teotihuacan’s conflagration, Kemp argues that the triggers were the same.”

Wherever Goliath took hold, “arms races” followed, as other status-seeking aspirants jostled for hegemony. And Goliaths were contagious. The growth of “one bellicose city-state” would often produce a domino effect, in which the threat of an ascendant Goliath would provoke other regional polities to turn to their own in-house authoritarian as a counterweight to the authoritarian next door.

In this way, humankind gravitated “from hunting and gathering to being hunted and gathered,” Kemp writes. Early states had little to distinguish them from “criminal gangs running protection rackets.” Many of the great men of history, who are often said to have bent society to their will, Kemp told me, are better thought of as “a rollcall of serial killers.”

The 1% View Of History

Back downstairs, on the British Museum’s ground floor, we walked into a long gallery off the central atrium containing dozens of megalithic totems from the great ages of antiquity. The giant granite bust of Rameses II sat beatific on a pediment, and visitors peered into a glass cabinet containing the Rosetta Stone. Kemp, slaloming through the crowds, murmured: “The 1% view of history made manifest.”

Along both walls of an adjacent corridor, we came upon a series of bas-reliefs from the neo-Assyrian city of Nimrud, in modern-day Iraq. Depicting scenes from the life of Ashurnasirpal II, who ruled Nimrud in the 9th century BCE, the gypsum slabs were like an artistic expression of Kemp’s historical themes: Ashurnasirpal sitting on a throne before vassals bearing tribute; Ashurnasirpal surrounded by protective spirits; Ashurnasirpal’s army ramming the walls of an enemy city, rivals dragging themselves along the ground, backs perforated with arrows. The entire carving was overlaid with cuneiform script, transcribed onto signage below, with sporadic sentences translated into English: “great king, strong king, king of the universe. … Whose command disintegrates mountains and seas.”

Across the atrium, in a low-lit room containing a bequest from the Rothschild family’s antique collection, Kemp lingered over an assortment of small wooden altarpieces, with biblical scenes and iconography carved in minuscule, intricate detail. Elite status could be projected in the imposing size of a granite statue, he said. But it could just as well be archived in the countless hours spent chiseling the Last Supper into a fragment of boxwood.

It is, of course, inevitable that our sense of history is skewed by this elite bias, Kemp explained. While quotidian objects and utensils were typically made of perishable materials, the palaces and monuments of the governing class were designed to be beautiful, awe-inspiring and durable. In the hours that we spent on the upper floors, we spied just one relic of ordinary life: a 3,000-year-old wooden yoke from Cambridgeshire.

Likewise, early writing often evolved to reinforce the “1% view of history” and formalize modes of control. The predominance of this elite narrative has produced a cultural blind spot, obscuring the brutality and oppression that has forever been the lot of those living at the base of a pyramid, both figurative and actual.

From all this aristocratic residue, Kemp sought to extract a “people’s history of collapse” — some means of inferring what it was like to live through collapse for the average person, rather than the elites immortalized in scripture and stone.

The Curse Of Inequality

If Kemp’s research revealed that historical state formation appears to follow a pattern, so, too, did the forces that inexorably led toward their demise. To illustrate how the process works, Kemp provides the example of Çatalhöyük, a proto-city that arose on the Konya Plain in south-central Turkey around 9,000 years ago, one of thousands of “tells,” mounded remnants of aborted settlements found throughout the Near East.

Excavations of the site’s oldest layers suggest that early Çatalhöyük was notable for its lack of social differentiation. Crammed together in a dense fractal of similarly sized mud-brick dwellings, the settlement in this period exhibits no remnants of fortification and no signs of warfare. Analysis of male and female skeletons has shown that both sexes ate the same diet and performed the same work, indicating a remarkable degree of gender equity.

This social arrangement, which the Stanford archaeologist Ian Hodder has described as “aggressively egalitarian,” lasted for around 1,000 years. Then, in the middle of the 7th millennium BCE, the archaeological record starts to shift. House sizes begin to diverge; evidence of communal activity declines. Later skeletal remains show more evidence of osteoarthritis, possibly betraying higher levels of workload and bodily stress. Economists have estimated that the Gini coefficient, which measures disparities in household income, doubled in the space of three centuries — “a larger jump than moving from being as equal as the Netherlands to as lopsided as Brazil,” Kemp writes. Within a few centuries, the settlement was abandoned.

“In almost every case, [societal] decline or collapse was foreshadowed by increases in the appearance of proxies of inequality.”

The fate of Çatalhöyük established a template that almost every subsequent town, city-state and empire would mirror. Its trajectory resounds throughout the historical record and across continents. Similar patterns can be discerned from the remnants of Jenne-Jeno in Mali, the Olmecs of Mesoamerica, Tiwanaku near Lake Titicaca, and Cahokia in pre-Columbian North America.

Occasionally, the archaeological record suggests a fluctuation between equality and disparity and back again. In Teotihuacan, near today’s Mexico City, the erection of the Feathered Serpent Pyramid by an emergent priestly class in around 200 CE ushered in a period of ritual bloodletting. A more egalitarian chapter followed, during which the temple was razed, and the city’s wealth was rechanneled into urban renewal. Then the old oligarchy reasserted itself, and the entire settlement, beset by elite conflict or popular rebellion, was engulfed in flames.

Whether societies collapsed through gradual depopulation, like Çatalhöyük, or abruptly, as with Teotihuacan’s conflagration, Kemp argues that the triggers were the same. As Daron Acemoğlu and James Robinson explored in “Why Nations Fail” (2012), the correlation between inequality and state failure often rests on whether a state’s institutions are inclusive, involving democratic decision-making and redistribution, or extractive: “designed to extract incomes and wealth from one subset of society to benefit a different subset.” Time and again, the historical record shows the same pattern repeating — of status competition and resource extraction spiraling until a tipping point, often in the shape of a rebellion, or an external shock, like a major climate shift or natural disaster, which the elites, their decision-making fatally undermined by the imperative to maintain their grip on power, fail to navigate.

In almost every case, decline or collapse was foreshadowed by increases in the appearance of proxies of inequality. A rise in the presence of large communal pots indicates an upsurge in feasting. Deviation in the size of dwellings, preserved in the excavated footprints of early conurbations, is a measure of social stratification, as wealth accumulates among the elite. Graves of that same nobility become stuffed with burial goods. Great monuments, honoring political and religious leaders or the gods who were supposed to have anointed them, proliferate. Many of the most lucrative lootable resources throughout history have been materials that connote elevated social standing, tokens of an obsession with conspicuous consumption (“wastefully using resources”) that marked a break from the hunter-gatherer principle of taking only what was needed. (Kemp wears a reminder of the human compulsion to covet beauty as much as utility, an obsidian arrowhead, on his wrist.)

All the while, these signs of burgeoning inequality have tended to be twinborn with an increasing concentration of power, and its corollary: violence. War, often instigated for no more reason than the pursuit of glory and prestige, was just “the continuation of status competition by other means,” Kemp writes. On occasion, this violence would be manifested in the ultimate waste of all: human sacrifice, a practice custom-made to demonstrate the leadership’s exceptionality — above ordinary morality.

Better Off Stateless

As Kemp dug into the data in more detail, his research substantiated another startling paradox. Societal collapse, though invariably catastrophic for elites, has often proved to be a boon for the population at large.

Here again, Kemp found that the historiography is subject to pervasive and fallacious simplifications. In his book, he repudiates the 14th-century Tuscan scholar Petrarch, who promulgated the notion that the fall of classical Rome and Greece ushered in a “dark age” of cultural atrophy and barbarism. His was a reiteration of sentiments found in many earlier examples of “lamentation literature,” left behind on engraved tablets and sheaves of papyrus, which have depicted collapse as a Gomorran hellscape. One of Kemp’s favorites is the “Admonitions of Ipuwer,” which portrays the decline of Egypt’s Old Kingdom as a time of social breakdown, civil war and cannibalism. “But it actually spends a lot more time fretting about poor people becoming richer,” he said.

In reality, Kemp contends, Petrarch’s “rise-and-fall vision of history is spectacularly wrong.” For if collapse often engulfed ancient polities “like a brushfire,” the scorched earth left behind was often surprisingly fertile. Again, osteoarcheology, the study of ancient bones, gives the lie to the idea that moments of societal disintegration always spelled misery for the population at large.

Take human height, which archaeologists often turn to as a biophysical indicator of general health. “We can look at things like did they have cavities in their teeth, did they have bone lesions,” Kemp explained. “Skeletal remains are a good indicator of how much exercise people were getting, how good their diet was, whether there was lots of disease.”

“Societal collapse, though invariably catastrophic for elites, has often proved to be a boon for the population at large.”

Prior to the rise of Rome, for example, average heights in regions that would subsequently fall under its yoke were increasing. As the empire expanded, those gains stalled. By the end of the Western Empire, people were eight centimeters shorter than they would have been if the preceding trends had continued. “The old trope of the muscle-bound Germanic barbarian is somewhat true. To an Italian soldier, they would have seemed very large,” Kemp said. People in the Mediterranean only started to get taller again following Rome’s decline. (In a striking parenthesis, Kemp points out that the average male height today remains two centimeters shorter than that of our Paleolithic forebears.)

Elsewhere, too, collapse was not necessarily synonymous with popular immiseration. The demise of the extravagant Mycenaean civilization in Greece was followed by a cultural efflorescence, paving the way for the proto-democracy of Athens. Collapse could be emancipatory, freeing the populace from instruments of state control such as taxes and forced labor. Even the Black Death, which killed as much as half of Europe’s population in the mid-14th century, became in time an economic leveler, slashing inequality and accelerating the decline of feudalism.

It’s a pattern that can still be discerned in modern contexts. In Somalia, the decade following the fall of the Barre regime in 1991 would see almost every single indicator of quality of life improve. “Maternal mortality drops by 30%, mortality by 24%, extreme poverty by 20%,” Kemp recounted from memory. Of course, there are endless caveats. But often, “people are better off stateless.”

Invariably, however, Goliaths re-emerged, stronger and more bureaucratically sophisticated than before. Colonial empires refined systems of extraction and dominance until their tentacles covered diffuse expanses of the globe. Kemp, never shy of metaphor, calls this the “rimless wheel,” a centripetal arrangement in which the core reaps benefits at the margins’ expense.

At times, such regimes were simply continuations of existing models of extraction. In 1521, when the Spanish conquistador Hernán Cortés unseated the Aztec ruler Moctezuma II, it was merely a case of “translatio imperii” — the handing over of empire. The European imperial projects in the Americas were an unforgivable stain, Kemp said. But, more often than not, they assumed the mantle from pre-existing hierarchies.

Endgame

In the afternoon, we walked north from the British Museum over to Coal Drops Yard, formerly a Victorian entrepôt for the import and distribution of coal, now a shiny vignette of urban regeneration. The morning rain had cleared, and Granary Square was full of tourists and office workers enjoying the late summer sun. Kids stripped to their underwear and played among low fountains; people chatted at public tables beneath a matrix of linden trees. Kemp and I found an empty table and sat down to talk about how it could all fall apart.

As “Goliath’s Curse” approaches its conclusion, the book betrays a sense of impending doom about our current moment. The final section, in which Kemp applies his schema to the present day, is entitled “Endgame,” after the stage in chess where only a few moves remain.

Today, we live in what Kemp calls the “Global Goliath,” a single interconnected polity. Its lootable resources are data, fossil fuels and the synthetic fertilizers derived from petrochemicals. Centuries of arms races have yielded an arsenal of monopolizable weapons like autonomous drones and thermonuclear warheads that are “50 trillion times more powerful than a bow and arrow.” The land — sectored into national borders, monitored by a “stalker complex” of mass surveillance systems and “digital trawl-nets” — is more caged than ever.

We have reached the apotheosis of the colonial age, a time when extractive institutions and administrative reach have been so perfected that they now span the globe. However, the resulting interdependencies and fetishes for unending growth have created an ever-growing catalog of “latent risks,” or accumulated hazards yet to be realized, and “tail risks,” or outcomes with a low probability but disastrous consequences. Kemp characterizes this predicament, in which the zenith of human achievement is also our moment of peak vulnerability, as a “rungless ladder.” The higher we go, the greater the fall.

“We have reached the apotheosis of the colonial age, a time when extractive institutions and administrative reach have been so perfected that they now span the globe.”

Under a series of apocalyptic subtitles — “Mors ex Machina,” “Evolutionary Suicide,” “A Hellish Earth” — Kemp enumerates the existential threats that have come to shape the widespread intuition, now playing out in our geopolitics, that globalized society is sprinting toward disaster. After the post-Cold War decades of non-proliferation, nuclear weapons stockpiles are now growing. The architects of artificial intelligence muse about its potential to wipe out humanity while simultaneously lobbying governments to obstruct regulation. Our densifying cities have become prospective breeding grounds for doomsday diseases. Anthropogenic climate change now threatens to shatter the stability of the Holocene, warming the planet at “an order of magnitude (tenfold) faster than the heating that triggered the world’s greatest mass extinction event, the Great Permian Dying, which wiped away 80–90% of life on earth 252 million years ago,” Kemp warns.

The culprits in this unfolding tragedy are not to be found among the ranks of common people. The free market has always been predicated on the concept of Homo economicus, a notional figure governed by dispassionate self-interest. But while most people don’t embody this paradigm, we are in thrall to political structures and corporations created in that image, with dark triad personalities at the wheel. “The best place to find a psychopath is in prison,” Kemp told me. “The second is in the boardroom.”

Now, deep into the Global Goliath’s senescence, several of the indicators that Kemp identifies as having historically presaged collapse — egalitarian backsliding, diminishing returns on extraction, the rise of oligarchy — are flashing red. Donning his risk analyst hat, Kemp arrives at the darkest possible prognosis: The most likely destination for our globalized society is “self-termination,” self-inflicted collapse on a hitherto unprecedented scale. Goliath is more powerful than ever, but it is on a collision course with David’s stone.

Lootable Silicon

All of this seemed hard to reconcile with the atmosphere of contented civility in Granary Square on this sunny September afternoon. I proposed that an advocate for global capitalism would doubtless view our current circumstances as evidence of the Global Goliath’s collective, trickle-down bounty.

“We should be thankful for a whole bunch of things that started, by and large, in the Industrial Revolution,” Kemp said. “Vaccines, the eradication of smallpox, low infant mortality and the fact that over 80% of the population is literate. These are genuine achievements to be celebrated.”

Kemp argued that most redistribution has been a product of “stands against domination”; for example, the formation of unions, public health movements and other campaigns for social justice. Meanwhile, underlying prosperity still depends on the rimless wheel: the hub exploiting the periphery. “If we were here 150 years ago, we’d be seeing child laborers working in these courtyards,” he said, gesturing at the former coal warehouses that are now an upmarket shopping mall and that once served as a nerve center of the fossil fuel industry that built the modern age.

The same dynamics hold sway today, albeit at a further remove. Just south of us, across the Regent’s Canal, sat the London headquarters of Google, a billion-dollar glass edifice. On catching sight of it, Kemp gave the building an enthusiastic middle finger.

Later, he explained: “The people sitting in that building are probably having a pretty good time. They have lots of ping pong tables and Huel. But the cobalt that they’re using in their microchips is still often dug up by artisanal miners in the Democratic Republic of Congo, getting paid less than a couple of dollars a day.”

Like much of the oligarchic class, the boy-gods of Silicon Valley still cleave to Hobbesian myths to justify their grip on wealth and power. Their techno-Utopian convictions, encapsulated in Bill Gates’ mantra that “innovation is the real driver of progress,” are merely a secular iteration of the divine mandates that Goliaths once used to legitimize their rule. Promises of rewards in the afterlife have been supplanted by dreams of a technological singularity and interplanetary civilization.

Another plausible eventuality, which Kemp dubs the “Silicon Goliath,” is a future in which democracy and freedom are crushed beneath the heel of advanced algorithmic systems. He is already at work on his next book about the evolution of mass surveillance, an inquiry that he told me “is in many ways even more depressing.”

Slaying Goliath

Toward the end of “Goliath’s Curse,” Kemp imagines a scenario in which the decision of whether to detonate the Trinity atomic bomb test in New Mexico in 1945 was made not by a Department of War but by a “Trinity jury,” an assembly of randomly selected members of the public.

“Now several of the indicators that Kemp identifies as having historically presaged collapse — egalitarian backsliding, diminishing returns on extraction, the rise of oligarchy — are flashing red.”

In such a counterfactual, with the Nazis defeated, Japan already inches from surrender and Manhattan Project physicists warning of a non-zero possibility that the test could ignite the whole atmosphere and exterminate all life on Earth, Kemp contends that a more inclusive decision-making process would have changed the course of history. “If you had a random selection by lottery of 100 U.S. citizens and asked them, ‘Should we detonate the bomb?’ What decision do they come to? Almost certainly ‘No,’” he told me.

As Kemp sees it, the widespread adoption of such open democracy is the only viable route to escape the endgame. These citizen juries wouldn’t be free-for-alls, where the loudest or most outrageous voice wins, but deliberative procedures that necessitate juror exposure to expert, nonpartisan context.

Such assemblies wouldn’t be enough to “slay Goliath” on their own, Kemp told me. “Corporations and states … [must] pay for the environmental and social damages they cause … to make the economy honest again.” Per capita wealth, Kemp added, should be limited to a maximum of $10 million.

I put it to Kemp that this wish list was beginning to sound like a Rousseauvian fever dream. But seven years immersed in the worst excesses of human folly had left him in no mood for half-measures. “I’m not an anarcho-primitivist,” he said. There was no point trying to revivify our hunter-gatherer past. “We’d need multiple planet Earths!” Kemp conceded. And yet the urgency of our current circumstances demanded a radical departure from the existing status quo, and no less a shift in mindset.

His final demotic prescription, “Don’t be a dick,” was an injunction to everyone that our collective future depends as much on moral ambition as political revolution. Otherwise, Goliath won’t be just a Bible story. It could also be our epitaph.

The post Humanity’s Endgame appeared first on NOEMA.
