The Vanishing Art Of Building Sacred Spaces

MOUNT ATHOS, Greece — The Monastery of the Great Lavra is a fortified village perched on a sloping mountainside high above the Aegean Sea. Home to about 40 monks, it is the oldest and most esteemed of the 20 monasteries on Mount Athos, the rocky spit of eastern Greece where the Virgin Mary is said to have made landfall on her long journey to Cyprus, and one of the holiest places in Eastern Orthodoxy.

At the heart of the monastery is a domed church more than 1,000 years old, its interior awash in depictions of Christian lore. It is Easter; the monks file into the church’s dimly lit nave at around 9 p.m. and will not emerge until 6 a.m. The Great Liturgy is the culmination of months of preparation and hardship. No one here has eaten well for some time.

The church is filled with pilgrims of many nations — Russians, Slovaks, Italians, Americans. For the first few hours, monks mumble ancient prayers in archaic Greek. They flit to and fro, through darkened rooms crammed with luminescent icons. Now and then, the aged abbot proceeds through the interior, bowed under the weight of his miter and vestments, to shake a censer and sanctify the chamber and its icons with the holy smell of God.

From the very rear of the church, where the non-Orthodox are corralled, I can make out a sliver of the inner sanctuary, between imposing stone walls bearing the stories of Adam’s exile and Noah’s ark. I also glimpse the church’s glimmering altar, a constellation of candles and gold ornaments, and all its circling monks: praying, bowing, censing — sanctifying and resanctifying its every crowded nook and corner. My obstructed view was designed with intention; every angle, every aspect of the ceremony is mediated by the lore crowding each wall. The light, the shadow, the hidden rooms and arching vaults: all give shape to the worship happening here. 

Around 4:30 a.m., I am hit by a wave of profound exhaustion. My eyes begin to water. The heat of candles, lit by a holy flame brought from Jerusalem, gives the whole room a hazy blur. I blink. All of a sudden, I am struck by a powerful vision. The room I was looking upon is no longer in Greece — it is no longer a room at all. I see through the doorway a courtyard in Jerusalem, a gathering place before the opened tomb of Christ. A lone man stands before it, singing a tragic song of lament. Then, from inside, another man emerges, clad all in white, tall, strong, a risen body glowing through the candlelight, joining in the song with a clear voice, singing of absolution. The sight is so powerful I nearly fall to my knees.

“If spires and arches and ancient cavern churches no longer give shape to our worship, what does? Can we still build a temple worthy of God?”

My mind fights against what I am seeing. I know this man must surely be the abbot, but he seems somehow transfigured. No matter how I try, I cannot shake the sense that a tomb is before me. What is happening? Has some ancient magic called a divine presence to Earth? Have I received a vision of Jesus Christ?

Emerging bleary-eyed into the dawn, I struggle to make sense of what I experienced. What produced such a vision? I am not Orthodox. I have never before had such a profoundly destabilizing experience in a church of my own denomination. Undoubtedly, hunger and exhaustion played a role. But something else gave the vision a defined character and shape. For hours, the monks had performed an elaborate ritual upon a painted, stony stage. The rhythm of standing, sitting, inhaling smoke and incense, the confusion and the boredom — all worked to suppress my thinking self, to drown my ego in prayer.

For millennia, religions have constructed sacred spaces with meticulous care and attention to detail to engender religious awe. Orthodox churches like the Great Lavra’s are designed as models of the universe in miniature: the dome above decorated with an image of Christ as the almighty ruler of creation; the apse with the Virgin Mary, the supreme mediatrix; the lower walls crowded with figures from sacred history. For some religions around the world, even to simply exist in a sacred space is to be sanctified by it. The structure, design and decor can facilitate holy magic. 

Building a stage worthy of God is one of humankind’s most ancient obsessions. But in the West at least, it is a dying art. 

In large part, we have lost sight of the importance of architecture and the material in cultivating a sense of the sacred. For decades in the U.S., the construction of houses of worship has been in precipitous decline, and of the sacred spaces that are built, many lack articulated character. New evangelical assembly halls and megachurches often more closely resemble concert halls than cathedrals. The modernist, clean white aesthetic of an Apple Store is proliferating in contemporary church architecture like a virus. Historic churches, meanwhile, can barely survive; already, many a Gothic shell has been gutted to house modernist condos in real estate greige.

If spires and arches and ancient cavern churches no longer give shape to our worship, what does? Can we still build a temple worthy of God? Do we even recall the tools and techniques for cultivating spaces of spiritual awe? Increasingly, we need a new language of enchantment — or perhaps the recovery of one very old.

Sacred Origins

The shadowy, cave-like interior of the Great Lavra’s central church is one of the oldest forms of a house of God. In the Paleolithic period, hunter-gatherer societies sought out darkened places for their spiritual impulses. More recently, Indigenous peoples from Australia to Brazil have used caves for sacred ceremonies reserved for a select few. Caves provide a potent metaphor in stone: ancient, earthen wombs where initiates may be born again. They inspire the kind of fear that philosophers like Immanuel Kant identified as an essential ingredient of the sublime. 

They’re also symbols of mystery and secrecy: the unknowability of God manifested in passageways that lead only deeper into darkness, like the tainai meguri at the Kiyomizu-dera Temple in Kyoto, Japan. Inside this “journey to the womb,” visitors navigate a pitch-black tunnel with only a rope to guide them. At the end is a glowing, sacred stone representing enlightenment.

But the cave alone was often not enough to satisfy the drive of ancient societies for God. Some peoples decorated caves in apparent attempts to connect with unknown divinities. Among academics, there is still considerable debate about the possible spiritual origins of Paleolithic cave art, like the magnificent Hall of Bulls at Lascaux, France, which features what appear to be astronomical allusions among its animal figures, or Argentina’s arresting Cueva de las Manos, which depicts almost exclusively left hands. Many painted caves are aligned to be illuminated only during the solstice or contain archaeological evidence of ritual use. It is thought that initiatic societies of hunter-gatherer groups used such art to elicit awe — even terror — in visitors and new members, to assert the power of priestly figures over the ever-present powers of the underworld. 

“Caves are made for creating this kind of feeling,” explained Brian Hayden, an ethno-archaeologist and author, in an interview. “You turn a corner and all of a sudden you see this huge image of aurochs — it’s striking.” He speculates that the origin of the artistic impulse altogether may lie in the compulsion to delineate sacred spaces.

But while caves have long fostered religious enlightenment down in the dark, for much of history, human worship has been directed upward. Worship of the sun, as in ancient Egypt under the reforming pharaoh Akhenaten, is believed to be the original form of monotheism. Far more common was an entire pantheon of sky gods, each powerful in their season and appeased in their own way through a kind of mathematical, astrological exactitude.

“We have lost sight of the importance of architecture and the material in cultivating a sense of the sacred.”

The heavens awe us with their magnitude — the innumerable stars, the complexity of celestial movement. For the proto-rationalist believers among the ancient faiths, they also acted as a kind of early proof of a god. Generations of priest-astrologers proved there really was an intricate, hidden logic dictating the motion of what they saw in the skies. They felt compelled to eternalize their discoveries in the medium of stone, building immovable monuments like Stonehenge that give some physical shape to the march of time.

Humans have encoded their lore and sacred rites in stone throughout history. Before writing, pottery or even the emergence of agricultural settlements, Neolithic nomads erected a vast temple complex to the gods at Göbekli Tepe in modern-day Türkiye, one of the world’s oldest megalithic sacred sites. The precise meaning of its intricate decoration — including images of foxes, vultures and lions carved into its towering pillars — has been lost to time. But the goal of such works was generally to “create a power base” showcasing shamans’ connection with gods or spirits above, according to Hayden. 

Though Christians long thought of themselves as above such pagan traditions, astrological symbolism and numerology — even explicit references to the zodiac — remained common fixtures of Christian temple design for centuries. Paraphrasing the cultural theorist Aby Warburg, the historian Raffaella Maddaluno calls astrology “ancient religion’s most tenacious form of hidden survival.” And even with its foundational myth about the hubris of trying to rival the height of God in heaven, Christian architecture has become iconic for its spires, built ever higher throughout medieval Europe in a contest over who might reach the skies.

The association of altitude with godliness has drawn pilgrims to extreme heights and through great difficulties. Sacred mountains exist on every continent and in almost every culture: Olympus, Fuji, Sinai, the Himalayas. In ancient Greece, devotees reached Delphi’s oracle by a punishing ascent to her temple’s perch on the mountainside. “Despite the hardship and fear encountered on the heights, people return again and again, seeking something they cannot put into words,” the mountaineer and religious scholar Edwin Bernbaum writes. The “deep valleys and high peaks conceal what lies hidden within and beyond them, luring us to venture ever deeper into a realm of enticing mystery.” 

But because high places were also the natural realm of kings and conquerors, the push to colonize new peaks for God blurred the lines between sacred and secular. Mountaintops are, after all, places of proclamation and lawgiving, as when Moses delivered his commandments. 

Decline Of The Sacred

In Thessaloniki, Greece, this spring, I visited the Rotunda of Galerius, a striking, round temple where most of the mosaics that adorned the soaring dome had fallen to ruin long ago. In the very center is a figure whose identity we do not know for certain: Is it Jesus or Emperor Constantine? In the early days of Christianity’s rise, the difference did not matter much.

As Christianity replaced older European forms of worship, confusion regarding the function of sacred spaces deepened. The first Christian basilicas likely usurped the former judgment seats of Roman magistrates, a symbolic transformation: In these long and luminous halls, death was once dealt out; now, forgiveness. The occupation of Roman palaces by Christian churches gave a different architecture to the worship within than the cave-like forms that came to dominate among the Orthodox. As sacred ceremonies in the East moved behind veils, those in the West moved ever outward: Priests performed the eucharistic sacrifice in plain view of worshipers, believing that showing a mystery in plain sight was the greater testament to its power.

The apogee of the sacred architecture of Western Christianity is the Gothic cathedral: the balance of light and shadow, the close attention to decoration, the soaring height. But even Abbot Suger, the great progenitor of the Gothic style, saw his buildings as a kind of obstruction rather than a genuine theater for God. His stated goal was to make them so luminous as to direct the mind away from the glory of the church entirely, toward higher things. “Being nobly bright, the work should brighten the minds,” he inscribed on the bronzed entryway of the Basilica of Saint-Denis. “The dull mind rises to truth through material things, and is resurrected from its former submersion when the light is seen.”

This philosophy continues to dominate Western thinking on sacred space. Nesrine Mansour, an architect at the University of Colorado Boulder, generated a dozen or so digital versions of a church and made slight alterations, then surveyed over a thousand people on the feelings they elicited. Light, she found, was among the most important elements in evoking holiness. “Light has always been the material that possesses the spiritual, theological character,” she told me. “It’s always been known to influence a person’s spiritual feeling.”

“The rise of neopaganism has shown how, when the quality of public sacred spaces diminishes, people are naturally driven to improvise in private.”

Suger’s architectural goal was a spiritual ascent into heaven, not calling God down to a microcosm here on Earth. But in situating divine power outside the golden shimmer of an icon or the illumination of a candle in a dark space — in moving it beyond human control — Suger left his followers with an impoverished doctrine of how space can become imbued with sacredness. There was no less desire to render the inside of churches glorious, gilded and bejeweled, and ever grander in scale. 

But the style and decoration of late Medieval and Renaissance churches blended with the secular, particularly as classical principles of dimension and proportion took precedence over the theological mysteries that informed earlier constructions. Venture into some Baroque churches in Italy or France and you might find it hard to tell them apart from the great rooms of nearby palaces were it not for the crucifix at one end.

In response to this confusion, a kind of iconoclasm emerged. Protestants demanded simple meeting halls with plain interiors as symbols of their relative purity. “The whitewashed interior formed a central element of a new iconography of faith,” the art historian Victoria George writes in her history of the style. And in the centuries since the Reformation, the Protestant world has leaned further into Suger’s vision. “It is not unusual for modern artists to decry the ancient system of decorating churches with much painting,” the architectural critic Augustus Pugin lamented in 1843. 

After all, if no architectural form can rival the potency of the enlightened mind as a theater for the sacred, why bother with decoration? Why erect great temples and churches at all? The value of sacred space had been undermined by the very idea of the Enlightenment; the effort required seemed an increasingly corrupt expense when the perfect church could be built between people using nothing but the word of God.

Historical Rupture

In the early to mid-20th century, Catholic architects reawakened to the possibility of sacred space, spurred by the march of modernism and the rise of the Liturgical Movement, which sought to de-emphasize informal devotions in favor of the approved beats of the Catholic Mass. The evolution of building materials and the experimentation with form made possible by new technologies led to new literature about the right way to build a church. 

Charles Leadbeater, an adherent of the obscure, esoteric school of Theosophy, posits in his 1920 book “The Science of the Sacraments” that the architecture of a church and the patterns of a liturgy are intimately related, concentrating spiritual energy to bring the divine nearer to us. In detailed illustrations, he suggests that the moment of the sanctus can produce a central spire, and that the acts of certain rites and the rhythms of ancient prayer can produce a “thought-form,” a radiating bubble of mental energy that takes on a defined architectural shape over a congregation.

Many architects saw themselves as living through a profound historical rupture, after which the logic and experience of sacred space could never be the same. “For us the wall is no longer heavy masonry but rather a taut membrane,” writes the German church architect Rudolf Schwarz in his 1938 book “The Church Incarnate.” “We know the great tensile strength of steel and with it we have conquered the vault. … The old, heavy forms would turn into theatrical trappings in our hands, and the people would see that they were an empty wrapping.”

“When we mediate our forms of worship through the architecture of algorithms, we are inviting another god into the room.”

Schwarz, like many others of his day, wanted to break down the historic forms to their first principles. He considered a church to be a series of abstract shapes: rings, chalices, throughways, domes. His works took on the striking modernist forms and materials of a factory; from the outside, they are sometimes virtually unrecognizable as sacred spaces. “What a ‘spire’ is and proclaims, the procession of ‘pillars’ and ‘arches’ … these are valid for all times,” he writes. “But even so, we can no longer build these things. … The reality which is our task and which is given into our hands possesses completely different, perhaps poorer, form.”

I have been in modern churches that possess something of the rich magic of an ancient sacred space: the darkly luminous Art Deco interior of Montreal’s Saint Joseph’s Oratory of Mount Royal; the unusual, rock-hewn hall of Helsinki’s Temppeliaukio Church; the vast, UFO-like Cathedral of Christ the King in La Spezia, Italy. But often, modern churches are, at best, peaceful, contemplative spaces; I cannot imagine God appearing before me in the suburban A-frame church I went to in grade school, for example, no matter how warmly I feel toward it.

In recent years, as organized faiths have retreated in the West and church-building has become less common, architects tasked with creating sacred spaces have embraced the “negative design” of the interfaith center. These generic white boxes are intended as blank canvases for any mode of worship. At worst, they are windowless, grey-carpeted and fluorescent-lit airport lounges; at best, modern-minimalist meeting halls retaining some indirect, radiant light.

This trend reflects the great impoverishment of our philosophy of sacred space. At architecture competitions, ambitious minds still realize bold designs that carry with them some of the unsettling strangeness of ancient houses of worship. But most winners of prestigious prizes are unified in their modernist simplicity: boxy, undecorated and white, white, white. “What is significant … is the increasingly generic character” of many modern sacred spaces, the architectural historian Kate Jordan writes. Perhaps, she argues, it is the nonsectarian “spiritual” connotations of whiteness that have resonated with so many architects. But why, she asks, “would the Vicariate of Rome or any other religious client choose a scheme that aimed to remove cultural or historical moorings?”

DIY Sacred Space

Increasingly, the physical form of sacred space is taking shape not collectively but individually, in the bedrooms and home offices of devout and well-meaning worshipers.

On TikTok today, it’s easy to find a litany of how-to videos for building your own. A search for “altar” will surface hundreds of thousands of music-backed guides for constructing minor temples to Aphrodite, Lilith and Jesus Christ on dressers, in gardens and in living rooms. “Just got my first thurible!” one user declares in a video of them furiously censing an altar with a pair of icons. “Building little altars everywhere >>>” another post reads over cycling clips of crystals, candles and classical busts aesthetically arranged on bookshelves and bedside tables.

Home altars aren’t a new phenomenon, but their recent popularity on TikTok is due in part to the rise of #WitchTok. The loosely constituted online community has impressively gamed the app’s algorithmic recommendation engine to fuel a growing interest in neopagan worship. This has given rise to a cottage industry of altarpieces, candles, crystals and images designed to cultivate a sense of the sacred wherever one engages in prayer.

The rise of neopaganism has shown how, when the quality of public sacred spaces diminishes, people are naturally driven to improvise in private. “Having a physical space still very much matters,” explained Chris Miller, a sociologist of religion and specialist in digital paganism at the University of Toronto. Because there is no easy replacement for the transcendent atmosphere that earlier generations cultivated at the mosque, chapel, temple or synagogue, people are seeking a theology of a make-your-own variety in which almost any space can be made to feel sacred with a simple invocation of words or collection of accessible objects. “One of the reasons why sacred architecture isn’t as important to pagans is because of their ability to make any space sacred,” Miller said.

This extends even to the digital architecture of social media itself, which has taken on significance in online spirituality akin to that of any other public space. Beneath videos of #WitchTok creators casting spells, Miller will often find comments from people rushing to “claim” their effects. Many religious traditions experimented with this idea during the pandemic. In Malaysia, ceremonies in which underworld gods were called to possess mediums and offer predictions — typically in private settings — were livestreamed on Facebook, with the gods eating, drinking and smoking mailed offerings, and replying in real time to comments. The practice was not endorsed by the institutions of mainstream Taoism, yet the online ceremonies were wildly popular, even prompting push alerts sent to followers’ accounts.

Did Facebook itself briefly become a temple for Malaysia’s underworld gods — a theater for their divine powers? For many adherents of #WitchTok, the answer is most likely yes. Digital pagans, Miller said, often attribute a video’s success in finding an audience to some kind of magical nudging by algorithms. “There’s this idea that the algorithm is enchanted in some sort of way,” he told me. “Something is trying to find you. It’s taken a little more seriously as divine intervention.”

“Because they are alive, groves have an inspiritedness that cannot be rivalled by the cold stone of a man-made temple.”

There’s something radically different about this understanding of sacred architecture. It seeks to both overcome and subsume collectively experienced design. The physical experience of sacred space — its smells, sights and sounds — is either eliminated, minimized or individualized. I may light a candle at home when a priest on a livestream tells me to, but I cannot hear the uncomfortable creaking of a neighbor’s pew during a particularly incisive sermon. There is no chance that my fellow parishioners and I might let our gaze linger on the same sacred art when bored. No holy flame from Jerusalem can cross the threshold of the phone’s black mirror.

But for many entering a digital sacred space, the experience represents a new kind of collectivity. Express an interest and you will quickly find yourself herded by algorithms to the most popular designs, accompanied by theological lessons about their efficacy in the form of disembodied voiceovers. There is a kind of commonality to the physical form of this worship, but its designer is neither the influencer-prophet nor the crowd that flocks to watch them.

When we mediate our forms of worship through the architecture of algorithms, we are inviting another god into the room. Mircea Eliade, one of history’s most influential scholars of sacred space, theorized that built spaces, like temples, work because they define, with their thresholds, a crossing point between the undifferentiated chaos of the profane world and the ordered cosmos of the sacred. Today, for a growing number of people, that threshold is the smartphone screen. And it’s not Hecate or Jesus choosing the hierarchy of its microcosm: It’s the brahmins of Silicon Valley, whose theology — or sorcery — we may only guess at.

In the most pessimistic view, the consequence of this will be a spirituality more atomized, more individualized than ever — and perhaps more extreme. There is a “very strong” New Age-to-violent-white-supremacy pipeline, Jessica Lanyadoo, a professional tarot reader and psychic, told me. It’s hardly unique to New Age beliefs. The subreddits for Orthodoxy, Anglicanism and Catholicism are all replete with games of one-upmanship between amateur, anonymized theologians on matters of dogma and doctrine. In the absence of authority, the most rigid interpretation is often the one that wins broad acceptance. “You have the need for media literacy on top of spiritual literacy when you’re consuming spiritual content online,” Lanyadoo said. “And many people have neither.”

But there’s another possibility. Digital sacred space need not involve ceding power to a black-box algorithm. Instead, it can function on more ancient principles — ones that may predate even the archetypes of caves and mountaintops.

The Digital Grove

In India today, there are more than 50,000 sacred groves — sites where the faithful travel for healing or to take a sacred vow. Cared for, in some cases, since ancient times, they are invaluable repositories of rare species, living testaments to the bounty of nature. Groves may offer an alternative model of sacred space for traditional religions and neopagans alike.

Because they are alive, groves have an inspiritedness that cannot be rivalled by the cold stone of a man-made temple. The trees of a sacred grove are not just columns for the sky: They are active personalities, like the icons of an Orthodox church, windows to a living God. In Japan, one can still easily find sacred spaces built around ancient and numinous trees, held to be repositories for spirits. The simple wooden temples built there, literally overshadowed by their trees’ vast canopies, are left looking small and secondary, like a chalet on a mountain slope.

It’s only natural to see such places and reflect on humankind’s relative insignificance. Unlike the hall of Saint Peter’s Basilica or the sprawling complex of Angkor Wat, these sacred places are no special testament to our abilities. They are radically decentering in a way that no human-made space can be; they force us to think of holiness as something that can reside within natures other than our own. In this, they are an alternative to the individualism of today’s digital religion, in which any housebound practitioner can be a prophet to their followers.

And yet, sacred groves are also human constructions. In the second century, the Greek Pausanias described in detail the way various ancient groves were carefully selected, cleared and maintained, with attention paid to sightlines, pathways and uses. While some, like Megalopolis, were kept as a true wilderness, most used only select species or centered on a monumental tree that was given more space to grow, as in Japan.

“What is sacred, we never control. We are merely lucky to feel it.”

To the historian Gérard Chouin, who studied sacred groves in Ghana, this means the act of founding a grove was always to radically rupture its relationship with nature — to “draw a patch of landscape away from the realm of natural history,” he writes, and make it part of a sacred, otherworldly order. It is to turn a natural environment into a built one — to find an architecture readymade within the world, or to let one emerge from the life-forms within it.

In an odd way, this may be a useful model for understanding our strange new digital, decentralized future in which the elements of sacredness are dissected, transformed and disseminated as memes and microtransactions. The trees of a sacred grove are plucked from a forest of obscurity, chosen for some numinous power within. But they do not cease to grow and change simply because we have selected them. Their power is retained even when a grove has fallen to ruin. We are, after all, secondary to them in sacredness.

The sacred spaces of today may not be built, but chosen; they may be made from parts that continue to change and evolve long after we have chosen them. They may look different, or be used in different ways, by different generations of the faithful. But we are the ones who come and go; the temple lives forever.

Eliade writes that the sacred always “manifests itself, shows itself, as something wholly different from the profane.” Perhaps the history of religious architecture has been an attempt to resist that fact. We have tried and tried, through innumerable forms, to build a sacred place we can choose to enter; to create a sense of the sacred we can preserve for all time.

But the grove, alone among our sacred forms, doesn’t care about our desires. It lives beyond us. What is sacred, it teaches us, we never control. We are merely lucky to feel it.

The Department Of Good Living

The advent of the phrase “everything but the kitchen sink” is often placed during World War II, when it connoted both an all-encompassing bombardment and the desperation of those under attack trying to save their possessions from impending doom.

Already known for sink-related antics, Elon Musk, Donald Trump’s seeming second-in-command who is busy eviscerating the U.S. federal government, now seems hellbent on embodying the kitchen sink metaphor: a desperate, unthinking, all-out effort at destruction, deploying methods ineffectual and devastating alike. Time has proved Musk to be a human sink of sorts, efficiently draining away value from any particular thing he sits atop: 50% of Tesla, 80% of Twitter (although it has since rebounded) and, he would have us believe, 30% of the U.S. federal budget.


Federal employees on the other end, meanwhile, are forced to live as though under bombardment, scrabbling to rescue everything they can from the wholesale (and likely illegal) dismantling of USAID and the Department of Education; reactionary purges in the Federal Aviation Administration, Transportation Security Administration and military; the shuttering of programs for infectious disease management and scientific research; and the summary firing of thousands of federal employees — 5,700 in the Department of Agriculture and 7,000 in the Internal Revenue Service alone. Or else, they must transform into kitchen sinks themselves, becoming too burdensome to toss out with all the other contents of the U.S. government’s cupboards.

In the weeks since Musk took up his unusually prominent position as the Rasputin of Trump’s court, many attempts have been made to understand just what vision of government, exactly, is dictating the swings of his wrecking ball. From his statements to the press and to Trump’s cabinet, he seems to think of government as little more than a sink itself — a hole into which tax dollars pour, a woeful drain on resources. It’s this vision of government that is at the core of his Department of Government Efficiency (DOGE), the team of mostly lawyers and programmers who are even now ripping data from decimated departments like copper wire from a demolition site.

It’s a particularly nihilistic view of government, one that portrays federal workers and their departments as “parasites” worthy of nothing but contempt. But even though Musk and DOGE are largely unpopular, this view of government is not. Nearly 60% of Americans say the government is wasteful and inefficient. Even more say they are dissatisfied with America’s democracy as a whole and the size of the federal government. Just 40% (mostly Democrats) think it does more for average Americans than it gets credit for.

How did Americans’ opinion of their government sink so low? Sixty years ago, nearly 8 in 10 Americans said they trusted the government to do the right thing most of the time. But for an administration that campaigned on making America “great again,” there is remarkably little curiosity about what version of government, exactly, elicited such widespread acclaim. 

The reason is fairly obvious — it was nothing like what they are building now.

“Time has proved Musk to be a human sink of sorts, efficiently draining away value from any particular thing he sits atop.”

The ideal kitchen sink, for a working kitchen of any reasonable size, has two basins. On the right, it’s a shallow five inches, a comfortable depth for washing dishes. On the left, it’s deeper — eight inches, perfect for rinsing down fresh fruits and vegetables. A wire drying rack is sized to fit the deeper basin; a cupboard behind the faucets secrets away soaps and sponges.

You’re unlikely to find this sink design in most modern houses. But it is the fruit of decades of diligent government research conducted primarily by a little-known agency known as the Bureau of Human Nutrition and Home Economics. From 1923 to 1962, the bureau deployed mass public surveys, built experimental houses and conducted research into hundreds of consumer products from textiles to meats to kitchen sinks, all to deduce scientifically the best possible way to live a middle-class life in midcentury America. The resulting techniques, materials and designs still prompt misty-eyed nostalgia from TikTok traditionalists and bitter 21st-century consumers alike. In the last century, perhaps no other government agency has had such a profound impact on daily life — and yet today, it has been almost completely forgotten.

In many ways, the whole discipline of home economics started from ideological convictions very similar to those held by Musk and his allies: The historian and author Carolyn Goldstein identified “a belief in scientific and technological progress” and a sense of “the superiority of white Anglo-Saxon Protestant culture.” In the 19th century, home economics taught best practices for cooking, gardening, sewing and other technical housekeeping skills. As Goldstein told me in an interview, it was “a time when we believed that everything, all of our social problems, had an engineering solution to them.”

At the turn of the century, the American government was actively engaged in building up the country’s rural heartland. While cities industrialized, rural America risked being left behind, and the backbone of 19th-century American identity — the family farmer, the frontiersman, the homesteader — was gradually fading away. “People were leaving their farms,” Goldstein said. “And by contrast, the farm, the farmers and the farm families that stayed looked downtrodden, overworked, inefficient.” 

“There is remarkably little curiosity about what version of government, exactly, elicited widespread acclaim for making America ‘great again.’”

The Cooperative Extension Service, created in 1914, used the relatively new public land-grant universities as home bases for educational outreach programs, teaching farmers the latest techniques and technologies for farm optimization. Through the predominantly female field of home economics, that outreach extended into the home to the farmers’ wives who ran the household and decided on matters of consumption.

Women, home economists recognized, were often the key mediators between private families and the marketplace, deciding which materials to buy, what food to eat and what appliances to use. One young woman from rural Virginia wrote to the U.S.D.A. pleading for reliable advice on oil stoves: “Housekeepers all around us are half sick from overwork,” she wrote. “A few real conveniences would stop much of this. … [But] the farmer’s wife seldom know[s] just what to get, where to get it and what it costs.” With voluntary rationing during World War I, U.S. government administrators realized the impact that reaching “the female consumer” could have; a government-issued recipe guide or sewing pattern could rapidly reduce demand or shift it to underutilized products so long as families knew how to use them. Providing parents with good data on nutrition would ensure that the next generation of soldiers would be fitter than the last.

Underlying these early initiatives was a general belief that the government had an active role to play in “promoting modern life,” Goldstein explained — and not just in service of increasing agricultural or wartime output. “Women shouldn’t need to feel they need to leave the family farm to have a modern stove,” she said. “So we needed to define the modern standards of middle class life — how to cook, clean, lay out your family budget — and then we needed to teach it.”

Before the bureau launched in 1923, all sorts of new ingredients and technologies that promised to make life easier — processed cheese, electric dishwashers, “artificial silk” — were proliferating. But many home consumers were ill-prepared to use or choose between them. In the golden age of advertising heralded by the arrival of broadcast radio, false claims abounded and dubious products flooded the marketplace.

“The idea was that both companies and citizens shared a responsibility for building a ‘rational consumer society’ where reasoned, responsible consumption produced good citizenship.”

The bureau was conceived as the antidote to these problems — an “information clearinghouse about consumer goods,” Goldstein called it in her book “Creating Consumers.” There, home economists evaluated cooking methods, tested fabrics until they wore thin, reviewed “child-rearing practices” and investigated new home accounting systems. To develop and evaluate the meat thermometer, they “roasted 2,200 legs of lamb, 800 rib roasts of beef, 450 cuts of fresh pork and about 50 cured hams” in just six years. “Uncle Sam is paying some of his employees to eat!” newspapers marvelled at the time.

That work was not slowed by either economic calamity or war. During the Great Depression, the bureau pioneered studies of minimum nutritional requirements that set the standards for aid relief from the United Nations and the World Health Organization. And through the Second World War, it produced recipe books for rationing, mold-resistant fabrics for army tents and detailed nutritional studies of beaver, muskrat, opossum and raccoon meat, part of an effort to promote wild game consumption. We still eat enriched flour, drink fortified milk, cook meat to certain temperatures and follow certain instructions for washing clothes as a result of the bureau’s research, even if its ideal kitchen sink has since gone out of style.

Always, the bureau’s findings were shared with individual consumers and industry alike. The idea was that both companies and citizens shared a responsibility for building a “rational consumer society” where reasoned, responsible consumption produced good citizenship. It was a two-way street. “Rather than manipulating consumers, ideal producers educated them,” Goldstein wrote. “Bureau home economists sought to supply ‘good’ or ‘progressive’ companies with information about homemakers’ preferences so that they could design better products and label them with useful, factual information.” 

Government threats of heavy regulation ensured companies played ball. Through the 20s and 30s, taking the advice of home economists was increasingly seen as “a means of self-regulation in an era of increased government oversight,” Goldstein told me. Feeling pressure from both the government and increasingly educated consumers, America’s biggest brands felt the need to develop a reputation for “good corporate citizenship.”

The result, unsurprisingly, was better products. There’s a reason the refrigerator designs of the 1950s still elicit wistful desire — they were, unlike today’s appliances, the product of obsessive study by home economists public and private alike, designed first and foremost with the consumer experience in mind, with sliding shelves, detachable fruit and vegetable storage containers and a nifty system to get ice cubes out of trays.

What feels so radical about the bureau’s work today is the way that it went largely unquestioned that government experts, working for no one but the public, could help define the best products and practices in the marketplace. Much like today, the economy and consumer experiences were rapidly changing. But the U.S. government was there to hold people’s hands, to explain new technologies, define their best uses and guide users to the best options they could afford.

The bureau was forbidden from offering specific brand endorsements or granular advice to individual citizens; to do so would almost certainly have been a gateway to corruption. But it didn’t need to. The maternalistic scientists of the bureau — Louise Stanley, Hazel Stiebeling, Ruth O’Brien, Hildegarde Kneeland — correctly believed that well-educated consumers could direct the market toward quality products. At least, in the kind of market that prevailed for the first half of the 20th century.

What really made the bureau’s work possible was two fundamental beliefs that today are all but anathema in American politics: First, that the government must play an active role in bettering consumer products, and second, that industry bears a civic responsibility to take care of its consumers. The second half of the 20th century is, in many ways, the story of a long decline of those two ideas.


In 1948, the Bureau of Human Nutrition and Home Economics celebrated its 25th anniversary in the style of a triumphant victory. Staff snacked on minted fruit cocktail, beef tenderloin with fresh mushrooms, parsley potatoes and roasted asparagus. Ice cream and a birthday cake with coconut frosting made up the final course. At the end of the meal, attendees joined in a triumphant song that had been composed for the occasion: “Gone are the days when only men can roam / Gone are the days when the girls all stay at home / For now you’ll see women working everywhere … These women, these women, how they do love to roam.”

The party may as well have been a funeral. Within four years, the bureau would be gutted, its staff cut by 20%, its funding slashed by even more.

The bureau’s demise and eventual closure in 1963, Goldstein wrote, can be chalked up to many different factors. In the aftermath of the war, men returned to the workforce and muscled in on the work of home economists; new disciplines like “food engineering” disrupted what was once an almost exclusively female field. And then modern marketing was born, and corporate America poached home economists who might previously have gone to work for the government, hiring them instead to write recipes and ad copy for corporate brands.

“It went largely unquestioned that government experts, working for no one but the public, could help define the best products and practices in the marketplace.”

Concurrent with these changes was a shift in government philosophy. The election of Dwight Eisenhower in 1952 brought a backlash against the idea of big, bullish government agencies; projects like the bureau were increasingly viewed as surplus to requirements. In 1953, the bureau was consolidated into the Agricultural Research Service (ARS), and its work more narrowly focused on nutrition. By 1955, ARS administrator Byron T. Shaw was proposing the bureau cease virtually all consumer product reviews. The American public, he said, could simply trust the manufacturers.

Ironically, one cause of this shift was America’s postwar prosperity. Stanley and other thought-leaders in home economics had to shift their focus from managing wartime scarcity to distributing unprecedented abundance — in no small part thanks to the decades-long effort they led to modernize America’s farms and factories. It was the belief of many in the bureau that this new American bounty would need to be meted out to the rest of the world in order to prevent the return of war — “Save Wheat, Save Meat, Save the Peace,” read one slogan, which was nixed within a year following objections from the meat and grain lobbies.

But that belief inadvertently redefined the role of government, not only in the market, but in people’s lives. Lyndon Johnson’s Great Society reforms aimed to uplift “the needy,” and increasingly the government’s purpose was defined solely with reference to this amorphous group. In “The Century of the Self,” documentarian Adam Curtis pointed to this era as the moment of transition from a culture of needs to a culture of desires; big government, it seemed, was to be relegated to the former. The rest would have the market — and the market alone.


The “golden age of capitalism” is generally accepted to have come crashing to a close in 1973 when simultaneous shocks hit many Western economies. Empowered by a growing speculative finance industry and a new capacity to move money across borders, the manufacturing companies that defined 1950s society rapidly moved to developing economies where they could more easily throw their weight around.

For ordinary Americans, 25 years of rabid anticommunism — coupled with the corruption of the Nixon years and the U.S. failure in Vietnam — had primed the country for a profound reinvention of American identity. Gone was civic nationalism. In its place emerged a sense that citizens ought to stand alone without dependency on neighbor, community or government.

Then, in 1980, Ronald Reagan arrived with a mandate to radically shrink the American government, which he characterized as a cabal of wasteful and parasitical “elites.” “The federal government … has overspent, overestimated and over-regulated,” Reagan had previously declared. “Overgrown and overweight,” as he had put it, the federal government was in need of “a diet.”

The result of that “diet,” however, was to deprive the government of the tools to help ordinary people — middle class and “needy” alike — and instead actively antagonize them. Essentially siding with international financial markets over his own citizens, Reagan enforced high unemployment and a devastating recession to bring down inflation. Government services were privatized; regulations were slashed. Institutions established to maintain the delicate balance of power between citizens, corporations and the state, like the National Labor Relations Board, were turned upon the workers they were founded to protect.

Throughout this transition, Reagan continually chastised Americans for ever believing in the kind of shared civic responsibility imagined by the Bureau of Human Nutrition and Home Economics and for ever doubting that society existed to serve the market. The true American creed, he repeatedly averred, was “legitimate self-interest.” After all, he opined, capitalism was “a system which has never failed us, but which we have failed through a lack of confidence.”

The irony is that the American government was still acting aggressively to shape the economy, but now on behalf of monied interests. “These policies are invariably described as restoring market forces,” the economists Ben Fine and Laurence Harris wrote in 1987. “But they are in fact, and rather obviously, state interventions on behalf of capital.”

This shift was ruinous for the American economy: Manufacturing output tanked, industry moved abroad, unemployment soared and inequality rose nearly to prewar levels. “The story that Reagan tried to tell the country in the 80s, which is, basically: ‘Forget about equality; the key to prosperity is to let the top become richer and richer’ — it doesn’t work,” the economist Thomas Piketty, whose work has helped definitively prove the trend toward greater inequality in America since the 1970s, told The New York Times in 2022.

“Much like today, the economy and consumer experiences were rapidly changing. But the U.S. government was there to hold people’s hands, to explain new technologies, define their best uses and guide users to the best options they could afford.”

But it was more ruinous still for the American psyche, which would never recover a positive vision of the government’s role in the marketplace. During the 1990s, politicians on both the left and right herded behind the “neoliberal consensus,” the idea that the free flow of capital and unrestricted markets produced trickle-down benefits for the American middle class, and that the government should take an ever-diminishing role in daily life.

In American political circles, that dogma remains largely unchallenged. After the 2008 financial crisis once again exposed the failures of neoliberal economic policy, Barack Obama appointed the same financial advisors who had helped Bill Clinton aggressively deregulate the financial sector. “They didn’t challenge the fundamental premise, the market triumphalist premise — namely, that market mechanisms are the primary instruments for defining and achieving the public good,” the American political philosopher Michael Sandel said in an interview with the New Statesman.

Today, it would seem we have reached an apotheosis of this view, where deregulation has allowed American companies to grow so large and so valuable that they rival states in both economic and political power. Such entities no longer feel any semblance of civic responsibility; despite benefiting from billions in government subsidies, Tesla has paid virtually no federal income tax in the last three years, and it is far from exceptional in that regard. Governments, meanwhile, have become so weak across the Western world that they cannot raise capital gains or business taxes even minimally without facing dark threats of ruinous consequences from America’s biggest brands, much less increase taxation to the astronomical levels — marginal rates as high as 90% or more — that sustained ambitious government initiatives like the Bureau of Human Nutrition and Home Economics.

But as evidenced by Musk’s purges, there are those now in power who would like to go further: to see the state entirely subsumed to private interests. In recent weeks, a growing number of outlets have been bold enough to call this vision what it is: techno-fascism, or rule by engineers, where the human aspects of democratic deliberation and governance are subsumed to the determinations of algorithms designed and maintained by a select few. “You’re allowed some agency, but they are still in control,” the political scientist Andrea Molle told The New Yorker. “They can still intervene if the course is not going in the direction that it is supposed to go to maximize efficiency.”

This is the vision for a government that has totally abandoned any idea that it should act as a countermeasure to the market or, even less, a unifying civic establishment. It is subservient entirely to a “CEO-king,” in the worldview of neoreactionary pseudo-philosopher Curtis Yarvin, for whom citizens are merely employees or property.

It may sound hyperbolic, but there is reason to believe this vision of government is now being enshrined at the heart of the American executive. Yarvin’s ideas have animated Marc Andreessen, Peter Thiel and Musk himself, all of whom have encircled the president. As the journalist Robert Evans wrote, Yarvin inspires in “a lot of young techie kids the idea that CEOs should run the world. Musk, I feel, has largely jumped on this bandwagon because those kids are useful footsoldiers [and] Yarvin’s ideas … are convenient for his own ambitions.”

And what, exactly, are those ambitions? In the words of one “close associate,” it’s not far off from a CEO-king. “Elon believes he should be emperor of the world,” they told Vanity Fair. “He truly believes his way of handling the world is the best possible outcome for everyone in it.”


The problem is, when it comes right down to it, the world created by the big tech monopolies — unfettered from government regulation, largely immune from taxation and unbound by any sense of duty to the customers, citizens, institutions, legal systems and environments that sustain them — is kind of shit.

These days, if you want the “best” kitchen sink, you could choose the SWISH KS 02000111-82, an app-connected smart sink with waterfall faucets, hydroxyl water ion cleaning technology and multiple inserts for washing and chopping vegetables. Connecting the sink to your phone (in theory for the purpose of seeing and setting the water temperature that is already visible on the sink’s LED display) will likely log sensitive data in a place where it could be exposed to hackers. 

The market is full of relatively inexpensive, touchscreen-laden, right-angled, hard-to-clean smart sinks, mostly from opaque Amazon brands — companies with names like SDGRP and MWIDCIEW that are nonetheless evidently esteemed enough to get on SEO-optimized listicles of “smart kitchen sinks you need in 2025.” Many of the janky but profitable lessons of disrupting the digital world are now being eagerly applied to the appliances market, with washing machines that can be hacked to mine crypto, ovens with paywalled downloadable functions and lightbulbs that need regular software updates. What seems to be taking shape is an endless hierarchy of subscriptions and unlockable add-ons, at first offered for free but then often clogged with performance-degrading ads. There is no Bureau of Human Nutrition and Home Economics to tell us, or them, what makes a good product, what will function well over a long period of time, what serves customers best. There is only the “free” marketplace, where cash is king.

The tech journalist Cory Doctorow described this phenomenon of everything, everywhere, getting worse all at once as “enshittification,” a process by which companies have profited by progressively cheapening consumer products. For Doctorow, enshittification results partly from the government’s abandonment of and capture by the marketplace — competition laws go unenforced, intellectual property laws are written to benefit incumbents. “Industries collapse into these cartels or monopolies or duopolies,” he told me. “Everywhere you look, you see this. … It really amounts to a collapse in market discipline, and a collapse in regulatory discipline.”

In recent years, this has been supercharged by what the tech critic Ed Zitron has called “the Rot Economy,” a fundamental shift in business ethics brought about by the growing power of financial markets. “Public and private investors, along with the markets themselves, have become entirely decoupled from the concept of what ‘good’ business truly is, focusing on one metric — one truly noxious metric — over all else: growth,” he wrote. Companies like Google, Meta, Microsoft and Tesla are now constantly rewarded for short-term, self-destructive policies like downgrading the digital search experience or clogging software with invasive and annoying AI tools because it provides a marginal gain for shareholders.

“Trump and Musk and their people have a vision for a government that has totally abandoned any idea that it should act as a countermeasure to the market or, even less, a unifying civic establishment.”

The result is not just, as Zitron has written, “a constant state of digital micro-aggressions”; it is a deformed and degraded civic ethos. Fifty years of antisocial rhetoric has generated a climate where the once-noble pursuit of producing a good or providing a service is increasingly thought of in zero-sum terms: I win when my customer loses. “It’s the abandonment of not just any sense of a common cause but a workable consensus reality,” the journalist David Roth wrote in the wake of this year’s Consumer Electronics Show, a showcase for Silicon Valley’s latest ambitious projects. “It’s the swamping of every collective effort or any nascent social consciousness in favor of individuals assiduously optimizing and competing and refining and selling themselves, not so much alongside the rest of humanity as in constant competition with all of it.”

Inevitably, that has consequences for how Americans are able to conceive of their own government. “The idea that government is fundamentally suspect has been around for so long, has become so widely held — and has had such a dumbing-down effect on public conversation — that a full-throated defense of the ideals and institutions of American government seems cringe-worthy,” the critic M. Gessen wrote in The New York Times.

But increasingly, that is what opponents of Trump and Musk’s destructive project are demanding from their leadership. “We are in this moment because, for decades, Republicans have told us that government is bad,” California Congressman Ro Khanna said recently. “Democrats must have the courage to make the case that government is good and can work.” As many have observed, voters who sense no vision for comprehensive reform, no move away from the economic orthodoxy that has held sway for the last half-century, seem happy enough to endorse Trump’s disruption, however ruinous.

So what could an alternative vision be? The first step may be to finally recognize that “running government like a business” has always been a red herring. The government is not a business — it is the thing that makes business possible. Unregulated markets frequently fail to produce good businesses so long as we define “good” as beneficial to their customers. And unregulated businesses, as we’ve recently been forced to witness, are even worse at producing good government. As the economist Mariana Mazzucato has long argued, the libertarian CEO types now running Washington are willfully ignorant of just how dependent their industries are on the backbone of public services like roads, telecoms, courts and publicly funded research — services they have enjoyed largely for free since financial liberalization and business tax cuts have allowed them to shelter the vast majority of their profits.

Step two is much harder: articulating some positive idea of an activist government in the marketplace. For Doctorow, as for many others, this begins with “a very aggressive antitrust agenda” aimed at breaking up the monopolies that have become powerful enough to capture — and try to replace — the federal government under Trump. “You cannot have a referee who is weaker than the players on the field,” he told me.

“Anti-government nihilism cannot be countered without a defense of the government’s role in daily life.”

But there are other, more constructive roles the government could play. Doctorow suggested a federal jobs guarantee that would put a meaningful floor on the value of labor. Or a database of publicly funded, patent-free research, which would compel corporations to support interoperability — what Mazzucato has called, in the context of AI, a “decentralized innovation ecosystem that serves the public good.”

As with the Bureau of Human Nutrition and Home Economics, it’s tempting to imagine bigger: a government department that drives research and development into consumer electronics, software and other tools — potentially even social media and news, not to mention the fields that Trump and Musk and their henchmen are actively destroying, like health, science and energy. Arguably, the best recent example of this kind of government-led consumer activism is the universal adoption of USB-C, a shift prompted by European Union policy that has simplified life for hundreds of millions of people. How many more simple fixes are possible that way?

Of course, now is not a great time to be in the optimism business. The U.S. civil service, especially after Musk’s interventions, may need years to recover the capacity to deliver ordinary goods and services, let alone ambitiously pursue market interventions. “We don’t need a bunch of scientists measuring counter heights,” the labor reporter Hamilton Nolan told me. “We need public healthcare. We need adequate mass transit in cities. We need affordable housing. Basic things.”

For Nolan, meeting these basic needs would be on the government’s agenda if the U.S. could return to a state of “functional democracy.” The problem is, as Piketty ominously warned in 2022, “the rules of the game” have long been set up to “entrench” the power of rich elites. The lesson from history, he said, is that without a serious countermovement — “a reaction, a mobilization” — the taxation of the rich trends toward zero. “Historically, you always need a great crisis to snap out of these things,” Nolan said. “And that’s where I think we’re headed. I just don’t know if it’s World War III or Great Depression II — but I think it’s got to get really fucked up.”

But for those waiting to pick up the pieces — or optimistic enough to try to stop the disintegration — anti-government nihilism cannot be countered without a defense of the government’s role in daily life. The Bureau of Human Nutrition and Home Economics was by no means a perfect institution: It lacked regulatory teeth, it was probably too cozy with corporations and many of its recommendations never made the impact that its researchers had hoped for. But it did represent a fleeting, hopeful vision: a government committed to creating a society — reaching for better, not racing to the bottom.

The post The Department Of Good Living appeared first on NOEMA.

Lords Of The Untamed Wild https://www.noemamag.com/lords-of-the-untamed-wild Thu, 12 Dec 2024 16:42:47 +0000

YORK, England — In front of me, a man is reading a brochure for something called a “pig brig” — “the most effective way to defend your land and livestock from feral hog damage, period.” Two seats away, a man named Erick Wolf introduces himself as the CEO of a company selling “safe sex for pigeons.” A few minutes earlier, outside the fluorescent-lit lecture hall at the University of York where we now sit, he had urged me to speak to a fellow pest management expert he dubbed New York’s “pope of rats.”

We were all of us waiting with cups of bad coffee and tiny, plastic-wrapped biscuits for the start of the Botstiber Institute’s first European workshop on wildlife fertility control. From across the world, experts in animal biology, pest control, pharmaceutical technology and conservation management had come together for two days to discuss ways to interfere with the reproduction of wild animals.

In her opening remarks, Giovanna Massei, Botstiber’s European director, painted a picture of a world where humanity and nature were increasingly in conflict. “People and wildlife are sharing more and more space,” she said. Pigeons and rats bothering New Yorkers, feral horses troubling ranchers in the American West, elephants breaking free from game reserves across Africa, capybaras running riot in South America’s gated communities. In places, agricultural losses and property damage are escalating into the billions, and countless diseases — Covid and avian flu among them — originate in animals and spread to people when the two populations come into contact.

“We are running out of options,” Massei said. “We don’t believe for a second that fertility control is the only way, but certainly, we want people to consider it.”

Massei spoke as a prominent representative for a growing field that purports to offer conservationists a straightforward solution to one of the thorniest questions in their discipline: What do you do when the wilderness is too wild? Refuges untrammeled by humankind are shrinking, and so too is the number of animals they can support. The boundaries between humans and wild creatures, ever porous, are becoming even thinner. Hunting or culling wild animals is one option — just kill any problematic species. Or continue destroying their habitat and let them go extinct on their own.

But experimental new birth control drugs promise to avoid either outcome — and create a new kind of nature where neither human nor animal need suffer.

Wildlife fertility control represents a bold shift in conservation thinking. For the better part of the last century, conservationists have been primarily compelled by the vision of a societal retreat from nature — preserving the places still untouched by relentless human activity where the wilderness can exist in all its natural savagery. But in an era where humanity’s stain is found in even the most isolated parts of the world, saving the wilderness from ourselves seems increasingly like a fantasy of the distant past.

If preserving nature and vulnerable species means policing nonhuman life, from the purity of DNA to the timing of reproductive cycles, very important questions arise: Does saving the world’s “wild” places mean controlling them entirely? And if so, how?

“From across the world, experts in animal biology, pest control, pharmaceutical technology and conservation management had come together for two days to discuss ways to interfere with the reproduction of wild animals.”

In the Book of Genesis, God brings all the animals of creation one by one before Adam, and “whatsoever Adam called every living creature, that was the name thereof.” In the Christian West, it is perhaps the clearest blueprint for humanity’s domineering attitude to nature: Since the beginning, the essence of the wild was forever fixed in relation to us.

The Judeo-Christian God may have given Man dominion over the animals, but in many parts of the world the premodern experience of the wilderness was one defined primarily by fear and antagonism. Simply speaking the name of a bear or a wolf could call one into existence. To the extent that the ancients respected untamed nature, it was as the place of dangerous creatures, uncultured barbarians and dark forces, often in league with one another.

Medieval Europeans were no different. Most often, it was man who had to be protected from an untrustworthy, mysterious and dangerous wild world rather than the reverse: perching settlements atop mountains, channeling floods and draining swamps, culling predators to boost hunting stocks in royal game reserves. Clearing a forest and converting it to productive agriculture was no less pleasing to God than converting a savage pagan to Christianity. The creatures of the wood, therefore, stood in for Satan; rapacious wolves and stubborn bears became symbols of sin, of ignorance, luxury and greed.

This pessimistic view of wild nature endured as late as the 18th century. While Jean-Jacques Rousseau was extolling the virtues of France’s settled countryside, British colonists in India paid dearly to clear “savage” jungles of their “vermin” and remove their people to newly cleared lands for agriculture. In America, too, early settlers viewed the wilderness through their own Puritan lens as the devil’s dominion, densely populated by unchristian “salvages” and voracious wild beasts.

Ironically, it was the depeopling of remote places that first sparked some reverence among the progenitors of what might be dubbed the movement for nature appreciation. It’s no coincidence that Rousseau so elevated nature in the same era that France experienced rapid urbanization. Around this time, the American theologian Jonathan Edwards looked upon a countryside depopulated by genocide and saw God’s divine purity. “By Edwards’s day,” the philosopher J. Baird Callicott wrote, “sin was to be found in the towns, not in the woods, and the Devil in the souls of sinners. In short, nature in America went from demonized to divinized.”

The contrast between town and country was not only spiritual. Appearing in the late 19th century were the first warning signs that industrialization could have devastating consequences. The increasing soot and filth of urban life offered a stark contrast to Europe’s pristine countryside or the vast emptied wilds of North America. Even the still-peopled lands of Africa and Asia offered a desirable alternative to the Victorian gentleman; H. Rider Haggard, the great adventure writer of the era, spoke of a “thirst for the wilderness,” a deep desire to escape “among the wild game and the savages,” and knew many in his audience felt the same.

“What do you do when the wilderness is too wild?”

At the same time, the wanton hunting of birds and big game seemed for the first time to be shifting the balance of power between nature and civilization. Even those who loved killing wild creatures quickly came to realize that there was a certain experience of wilderness at risk of vanishing. Already in the 1890s, the big game hunter Frederick Vaughan Kirby lamented, “the hunting-country and its big game have a past — a past that can never be recalled.” The huntsman and proto-conservationist Edward North Buxton warned that the Empire’s game was a “precious inheritance … something which can easily be lost but cannot be replaced.”

The authorities’ initial response to these challenges was to try to exert more control over remaining wild spaces, applying new tools of scientific management to “natural resources” — a coinage of this era. Gifford Pinchot, a pioneering American forester and the first head of the United States Forest Service, thought of conservation as “sustainable development.” Resources like timber and game animals were “there to be used, now and in the future,” in such a way as would ensure “the greatest good of the greatest number in the long run.” “The more it is used, the better,” he wrote in a book aptly named “The Use of the National Forests.”

For the first half of the 20th century, this was the dominant view of conservation among policymakers — effectively, a brake on human rapaciousness and greed. But it was far from the only view of wilderness. At the same time as imperial big game hunters and loggers sought to enumerate and protect the remaining “resources” of their reserves, the writers Henry David Thoreau and Ralph Waldo Emerson were articulating a more spiritual — and misanthropic — philosophy of nature, which would come to dominate among the next generation of conservationists.

For these writers and artists, wilderness was a place set against the artifice of human society and its poisonous industry. “Nature,” Emerson wrote, “refers to essences unchanged by man; space, the air, the river, the leaf.” It offered a chance to transcend the narrow human perspective and the dominant materialism of the age, a window into a world without humankind and its deleterious influence on the land.

Nature was, also, a spiritual necessity. “Thousands of tired, nerve-shaken, over-civilized people are beginning to find out that going to the mountains is going home; that wildness is a necessity; and that mountain parks and reservations are useful not only as fountains of timber and irrigating rivers, but as fountains of life,” the pioneering conservationist John Muir wrote in 1901.

Man could be restored by wilderness, the argument went — but wilderness could only be destroyed by man. Nature existed in an ever-harsher contrast with our fallen, polluting selves. Among a burgeoning movement of “wilderness preservationists,” the idea took hold that nature must be saved — not redeemed but reserved, isolated and, if necessary, depeopled. Vast swathes of land around the world were set aside in this way: Britain’s Waterton Park in 1821, America’s Yellowstone National Park in 1872, Canada’s Banff National Park in 1885, South Africa’s Hluhluwe and uMfolozi Game Reserves in 1895.

Now, mankind was no longer master over nature with God-given rights of dominion. We became a kind of self-conscious parasite, aware that, left to our own devices, we would pillage until nature lost its transcendence. Conservationists argued they needed to save humanity from itself. “God has cared for these trees, saved them from drought, disease, avalanches, and a thousand straining, leveling tempests and floods,” Muir wrote of California’s sequoias in 1897, “but he cannot save them from fools — only Uncle Sam can do that.”

“The boundaries between humans and wild creatures, ever porous, are becoming even thinner.”

When America’s 1964 Wilderness Act was drafted, it was this understanding of wilderness that was enshrined in law, defining it as a place “untrammeled by man, where man himself is a visitor who does not remain.” Immediately, the U.S. government realized that such places were already rare, if they ever existed at all.

For early conservationists, places like Yellowstone were important precisely because they represented the ideal of an untouched Eden; historian Mark David Spence calls them a “manifestation of God’s original design for America.” But it was all a convenient fiction. Yellowstone’s soil bears the unerasable record of nearly 12,000 years of human hunting, harvesting, mining, trade and habitation. The Tukudika Shoshone, a Native American group, used the area right up until it was set aside by the state for conservation; only active disruption by the U.S. Army and, later, the National Park Service prevented their return.

Everywhere large wilderness reserves were created, the same mythmaking took place. The Stoney Nakoda people of Banff were banned from their homeland after it was designated a wilderness park — except for an annual festival where they could be gawked at by visiting tourists. In Africa, the sociologists Charles Geisler and Ragendra de Sousa estimate as many as 14.4 million people have been evicted so game reserves and parks could be demarcated. So common is this story that the author Mark Dowie coined the term “conservation refugees” to refer to the vast number of people that are, still today, relocated from lands they occupied for centuries so an illusion of an “untrammeled” place can be maintained for tourists.

Stories like these illustrate the paradox at the heart of this model, known to its critics as “fortress conservation” — in practice, “locking up parts of the planet and highly regulating activities there,” as Faisal Moola, a professor of conservation at the University of Guelph, told me. The problem, he said, is not only that it invariably privileges tourists gawking at nature over native inhabitants using it; it is that it assumes it is possible at all to “remove our fingerprint from the landscape and let the system go back to some primordial, ancestral state.”

“All conservation of wildness is self-defeating, for to cherish we must see and fondle, and when enough have seen and fondled, there is no wilderness left to cherish.”
— Aldo Leopold

Such a philosophy of conservation has enjoyed a renaissance of late in certain conservation circles, particularly in Europe, under the new name of “rewilding.” Like the Romantic progenitors of the wilderness movement, rewilding advocates believe that erasing humanity’s imprint from the world is as much a process of spiritual renewal as it is an ecological one. “It’s about abandoning the Biblical doctrine of dominion which has governed our relationship with the natural world,” the journalist George Monbiot wrote in his “Manifesto for Rewilding the World.” Rewilding Europe, an environmental charity leading the charge on the continent, is less grandiose. “Rewilding is about moving forward, but letting nature itself decide much more,” their website’s old FAQ once read, “and man decide much less.”

The problem is: Which nature is wild? Even in the relatively young history of rewilding as a term, it has been a slippery question to answer. The historian Dolly Jørgensen found that when the movement began with the U.S.-based Wildlands Project in 1991, it largely meant the reintroduction of large carnivores across connected but depopulated “cores,” like the reintroduction of wolves to Yellowstone National Park.

By the following decade, it meant an altogether more ambitious “return” to the state of nature at the end of the Pleistocene, around 13,000 years ago. To accomplish this monumental feat, it would seem that every non-native species from the last 13 millennia would need to be exterminated, and species that had not been seen since the extinction of North American megafauna — coinciding with the arrival of the first humans to the continent — would also need to be recreated and restored to their original number. That would mean the intentional introduction of new foreign invaders — African lions, Asian elephants — as substitutes for extinct, ancient megafauna, or perhaps the original animals themselves, their DNA back-engineered from samples buried in glacier ice, as scientists are now attempting with the woolly mammoth.

It quickly becomes evident that “rewilding” does not really mean removing human agency from the landscape — if anything, it means increasing it in service of nostalgia for a place we’ve never been. Many of these projects suffer from the same paradoxes as fortress conservation. They try, in Jørgensen’s words, to create a world with “more animals and less people (or at least, much less intrusive people)” — in the process minimizing the fact that people are animals too, with a long environmental history that cannot simply be erased.

And yet, even still, they cannot escape the curse of Adam, to behold creation and immediately subsume it to our own idea of it. Rewilding Britain, an advocacy group, promises rewilding will deliver “nature-based jobs and businesses” and commits to improving access to conservation areas until “wild nature [is] a right for all.” They should heed what the American conservationist Aldo Leopold wrote in “A Sand County Almanac”: “All conservation of wildness is self-defeating, for to cherish we must see and fondle, and when enough have seen and fondled, there is no wilderness left to cherish.”

“Man could be restored by wilderness, the argument went — but wilderness could only be destroyed by man. Nature existed in an ever-harsher contrast with our fallen, polluting selves.”

Arguably the most prominent public failure of the rewilding movement, and one often discussed at the Botstiber Institute’s workshop, was the attempted rewilding of Oostvaardersplassen, a reserve of less than 23 square miles in the Netherlands. Using dikes to push back the sea, Dutch engineers created an artificial marshland and sowed it with reed seeds scattered by aircraft. By the 1970s, Oostvaardersplassen had already become an important refuge for wildlife — in particular, marsh-dwelling birds — in the intensively cultivated Dutch landscape.

By the 1980s, however, Dutch conservationists were increasingly concerned that willow trees and other saplings would colonize the area, suppressing the more delicate vegetation on which waterfowl relied. Under the leadership of the biologist Frans Vera, Oostvaardersplassen became an important test case for a new theory of land management. It was rewilding before any such term existed.

In theory, the governing principle of the reserve would be “do nothing, unless …” In other words, intervene only when the health of the overall system is in danger. Nature, the reasoning went, could be trusted to take its course; human intervention could only diminish its value as a refuge. In reality, however, to trust nature in this way required first deliberately reconstructing a primordial state of balance that Vera theorized had existed before humankind entered the picture. That meant introducing the closest surrogates for the large herbivores that had since gone extinct or been domesticated. Soon, conservationists were trucking in red deer from Scotland, semi-feral ponies from Poland and Heck cattle, a breed developed in 1930s Germany to resemble the extinct aurochs of prehistory.

At first, the program was a success — the large animals did indeed maintain the kinds of environments necessary to produce the reserve’s coveted biodiversity. Left to their own devices, they were able to propagate into great (and photogenic) herds and sustain a small population of carnivorous scavengers — ravens, hawks, eagles, vultures and foxes. The reserve became a poster child for a philosophy that held that the world before humans existed was, in fact, Edenic — a self-sustaining paradise.

But by the early 2000s, animal rights groups became dismayed by the mass winter die-offs that troubled Oostvaardersplassen’s herds. Fenced in the park and unable to roam in search of food, they argued, managers had a duty to care for them; it was morally repugnant to let them suffer and starve as if they were in a real wilderness. In the 2010s, authorities adopted a strategy of “early-reactive culling” — shooting the weakest and sickest animals before winter — but they still could not satisfy Oostvaardersplassen’s critics. Dutch politicians drew comparisons to Nazi concentration camps; animal rights activists broke in to feed the animals, prompting responses from riot police. “It is clear,” the historian Bert Theunissen wrote in 2019, “that the wilderness ideal has lost public support, and has been officially relinquished by the authorities.”

“Oostvaardersplassen became a poster child for a philosophy that held that the world before humans existed was, in fact, Edenic — a self-sustaining paradise.”

The reality, Theunissen concluded, is that despite their rhetoric, Dutch authorities never actually convinced anyone that Oostvaardersplassen was a real wilderness. Without that belief, the cruelty inherent in uncontrolled nature is largely unpalatable to us. We may accept that mass death is ecological justice when it occurs without our involvement, but as soon as it becomes the inevitable conclusion of a management strategy, it seems more like violence inflicted by human hands.

At the Botstiber Institute, Oostvaardersplassen was often suggested as a use case for fertility control. Public discomfort would never have reached such a fever pitch if the herds had not grown so large and overstretched the reserve’s capacity to sustain them. “People have got to recognize that there will need to be management of certain species, and they are very averse to killing,” Matt Heydon, a wildlife advisor to the U.K. government agency Natural England, said at the conference. “Fertility control does offer an option.”

But the failure at Oostvaardersplassen goes deeper than public discomfort with killing. In reality, it exposed one of our oldest myths — as Theunissen writes, “that nature is self-regulating and strives for equilibrium.” “Even though it has been discredited by biologists, [that] assumption,” he continues, “is still deeply ingrained in popular and sometimes even in professional conceptions of the biological world.”

In reality, Oostvaardersplassen’s harmonious “wilderness” was no more real than the geometric lines of a Regency garden. And yet, for many at the Botstiber Institute, satisfying the desire for this vision of nature had become a kind of raison d’être for the discipline. “Our emotional connection with nature, to nature, is the rock bottom of conservation,” Maarten Jacobs, a cultural geographer who spoke at the conference, explained. “Excluding emotion, in my view, is a threat to conservation.”

Facing the reality of increasingly urban societies, intolerant of culls but intensely desirous of unpeopled spaces, conservationists may indeed need fertility control to conceal the role of human stewardship and create the primordial balance we have learned to expect. Humans are, it would seem, to become gods again — not gods of the Old Testament, meting out cruel justice, but Aristotle’s prime mover, the invisible logic of nature, with a finger gently pressed on the scale.


The problem, of course, is that nature has its own designs. Just before the Botstiber Institute’s workshop, I was tailing conservationists across Italy and Slovenia while researching a piece on a troubling new trend: the hybridization of gray wolves with feral dogs.

One of conservationists’ great victories in the last few decades has been the resurgence of the European gray wolf, hunted to near extinction in southern Europe by the turn of the 20th century. In the Alpine regions of Italy, Slovenia and France, the return of healthy wolf packs has been widely celebrated as a victory for biodiversity — even if it has drawn the ire of rural activists and politicians who resent living alongside them. But the increase in wild hybrids has posed a serious challenge to the project. Hybrids can travel long distances, integrate with full-blooded packs and dilute the DNA of future wolves with phenotypic traits and worrying behaviors typical of domesticated dogs. “We’re talking about the first urban wolves,” Francesca Marucco, the lead scientist on the project that oversaw the reintroduction of Alpine packs, told me. “That is the new challenge that we have.”

On one hand, this kind of hybridization has been happening for millennia everywhere there are wolves and dogs. But in the context of a project to resurrect the majestic glory of Europe’s gray wolf, hybridization poses an uncomfortable question: Just how impure can a “natural” wolf be? Is what is happening to Europe’s gray wolves evolution — or is it pollution, from an ultimately human source?

For many of the wolf conservationists I spoke with, hybridization was thought of in these latter, more negative terms. “We are speeding up a process that might have occurred naturally,” said Valeria Salvatori, a wolf researcher at Italy’s Institute of Applied Ecology. “It’s like global warming. … We do have a duty to mitigate such an impact.” It’s the same impulse that drives the conservationists trying to eradicate invasive species the world over, even beloved presences like Britain’s invasive gray squirrel. “For me, it is solving a problem that we have caused,” said Kay Haw, director of the U.K. Squirrel Accord, which is experimenting with using birth control to reduce the impact of gray squirrels on England’s native red squirrel populations. “I do often hear people say, ‘It’s fine, let nature deal with it’ — but nature cannot continue to cope with the level of destruction and problems that we have caused.”

Seen in this way, fertility control is not so different from the “sustainable development” philosophy of Pinchot and the U.S. Forest Service — it’s a way to apply a brake on humanity’s runaway environmental impacts. Only now, the problem is not rationing how much we consume; it’s containing the effects we have already set in motion, the genetic “pollution” relentlessly reproducing in the wild.

But any idea of nature that involves a notion of purity involves aesthetic considerations — preferences for a certain type of nature that cannot help but center human desires. “If tomorrow I see an animal that is black and white with floppy ears and a long tail, and you tell me it is a wolf, I am disappointed,” Luigi Boitani, one of Italy’s leading wolf experts, once said to me. “I don’t want to see wolves in that way.”

Inevitably, this results in some uncomfortable parallels between the priorities of conservationists and the obsessions of society at large. “Always we see that nature conservation is reflecting the major trends in society,” Michael Jungmeier, the UNESCO-Chair for Sustainable Management of Conservation Areas, told me. “You have romanticism in literature, and you have romanticism in nature conservation. You have nationalism in the policy debate, you have nationalism in nature conservation. … You see all these debates about these neophytes coming in and destroying our species. This was the exact same time as we were having all these debates about the Schengen [Area] and migration. The same debates play out in both places.”

“Nature cannot continue to cope with the level of destruction and problems that we have caused.”
— Kay Haw

It’s no coincidence that in our age of techno-optimism, the technological solutions posed for ecological problems are growing ever more ambitious — and invasive. Fertility control for wildlife is hardly the only example; scientists are already releasing swarms of genetically modified insects to combat disease and seeding the sky with silver iodide to modify the weather. Among proponents of these technologies, it is rarely considered that we may simply be introducing a new kind of pollution — an intervention whose effects we do not understand well enough to be certain that we will not be trying to undo them in half a century’s time.

But if a heavier hand is not the solution, what is? Is there another way to approach nature — one that does not frame it solely as a scientific problem to be solved or a romantic ideal to be reconstructed?

Not long after Muir articulated his philosophy of fortress conservation and Pinchot produced his utilitarian formula for sustainable development, Leopold, the author of the paradigm-shifting “A Sand County Almanac,” was trying to define a third way, one grounded in a different conception of humanity’s relationship to nature. “Conservation is getting nowhere because it is incompatible with our Abrahamic concept of land,” he wrote in his 1949 book. “We abuse land because we regard it as a commodity belonging to us. When we see land as a community to which we belong, we may begin to use it with love and respect.”

Leopold’s alternative, grounded in the emerging science of ecology, was to seek “a state of harmony between men and land.” In reality, he was only articulating an ecological worldview that had existed in many Indigenous communities for millennia. Moola, who has spent his career working with Indigenous groups to develop and implement conservation plans, says Indigenous worldviews tend to recognize that human communities have an essential role in creating and preserving biodiversity — in fact, they are an irreplaceable part of it. “In many cases, what we think of as the ecological baseline is in fact the outcome of human agency,” he told me. “Much of the biodiversity we covet is actually the result of thousands of years of human stewardship.”

One could argue that today’s high-tech solutions — fertility control, weather modification and genetic engineering — are simply an upgrade on the stewardship strategies of Indigenous people who have long maintained ecosystems through controlled burns, strategic culls and other deliberate interventions. But the difference, Leopold appreciated, is not in technique, but perspective. Indigenous knowledge systems acknowledge the role of human beings in creating our shared landscape, but they do not make us uniquely privileged to command and marshal its future. Put another way, as Callicott writes, “Human beings are not specially created and uniquely valuable demigods, any more than nature is a vast emporium of goods, services, and amenities. We are, rather, very much a part of nature.”

In some ways, at the same time as our technological advances are making it easier to tame the wilderness, our power over nature is rapidly diminishing. Look at the floods, storms and fires of the last decade alone, and it is evident that nature’s strength is gathering — at least, we seem to find ourselves ever more desperate for its mercy.

It’s not a new thought, but one that always bears repeating: If we cannot help but wall ourselves off from wilderness or imagine it as an imperfect mirror for our desires, we may find ourselves forever enemies with the nature that sustains us — losers on the wrong side of a long and brutal war. In the end, we will be not gods but exiles again, yearning for our own time as a better kind of Eden.

The post Lords Of The Untamed Wild appeared first on NOEMA.

The Phantoms Haunting History https://www.noemamag.com/the-phantom-haunting-history Thu, 11 Jul 2024 13:33:37 +0000

Riddled with agonizing jaw cancer at the age of 83, Sigmund Freud labored tirelessly from his deathbed on a final testament. Published in the summer of 1939, as an ascendant Nazi Germany made its final preparations to invade Poland, it was neither a guide to psychoanalysis nor a personal memoir, but an unusual work of pseudohistory called “Moses and Monotheism.”

Inspired by archaeological discoveries in Amarna in Egypt, Freud posited that the monotheism of Moses was, in fact, of Egyptian origin: an evolution of the worship of the sun god, Aten. Even more scandalously, he asserted that ancient Jews had murdered Moses and perpetuated this monotheistic faith not from religious devotion, but from an unconscious sense of unresolved spiritual guilt.

While Freud was mercifully not alive to see it, the reception to his final far-flung theory was chilly, to say the least. William Foxwell Albright, who would later go on to authenticate the Dead Sea Scrolls, called it “totally devoid of serious historical method.” Rowan Williams, later the Archbishop of Canterbury, called his conclusions “painfully absurd.”

But Freud’s outrageous tale did exert a profound influence on one of his disciples: a Russian student of psychoanalysis named Immanuel Velikovsky. Velikovsky was so perturbed by reading “Moses and Monotheism” that he soon gave up his earlier pursuits and began a misguided quest to find historical proof of Exodus.

Velikovsky thought he found what he was looking for in the Ipuwer Papyrus, a fragmentary text from Egypt’s 12th dynasty that appeared, to him, to describe one or more of the 10 plagues of Moses. There was only one problem — the papyrus predated the time of Exodus by some 500 years. But Velikovsky was not to be discouraged. His solution was simple and convenient: Like his old master, he reconfigured the past.

“The written history of the ancient world is composed without correct synchronization of the histories of different peoples of antiquity,” he confidently argued in the introduction to his 1945 work “Theses for the Reconstruction of Ancient History.” Velikovsky would be the one to clean up this “disarray of centuries, kingdoms, and persons”; in one stroke, he erased several hundred years of ancient history, perfectly lining up Exodus and Ipuwer.

“The impulse to revise history arguably grows ever stronger. There is, after all, a reason that these outlandish ideas never really seem to die.”

Velikovsky’s pseudohistory was received by academics even more derisively than Freud’s. But it enthralled some measure of the masses outside the ivory tower. University departments tried to ban his books and boycott his publisher, but Velikovsky still found himself the subject of documentaries and the star of speaking tours until his death in 1979.

Long after Velikovsky’s demise, his ideas continued to inspire pseudohistorical societies across the Western world. In one such group, Germany’s Society for the Reconstruction of Human and Natural History, the concept of chronological revisionism took another great leap forward.

In 1996, Heribert Illig, the editor of the journal ZeitenSprung (TimeLeap or TimeJump), published the work that would come to define the rest of his life: “Das Erfundene Mittelalter,” or “The Invented Middle Ages.” The book goes beyond the historical revisionism that even Freud and Velikovsky thought possible — he suggests that 297 years of medieval history were an elaborate fabrication, the result of a conspiracy between Holy Roman Emperor Otto III and Pope Sylvester II. Both men, Illig reasoned, had desired to rule in the auspicious Year 1000, and thus ordered the continent’s monasteries to fabricate vast numbers of documents attesting to a long fictional history of Carolingian kings. While Velikovsky had been content only to tamper with ancient history, Illig revised the very timeline of modern life. To him, the year was not 1996, but 1699.

Hans-Ulrich Niemitz, one of Illig’s co-conspirators, coined the term “phantom time” to describe this allegedly nonexistent historical period. In a 1995 paper defending Illig’s work, Niemitz outlines the dubious assemblage of evidence that supported such radical revisionism. What is immediately striking about it is the way it plays on genuinely enduring questions in medieval research: architectural anachronisms, dendrochronological gaps and the mountain of medieval copies and forgeries that cast even highly reputable sources into doubt.

Such criticisms can — and have — been addressed by thorough historical research. But animating Illig and Niemitz’s skepticism is something much harder to resolve: a profound sense of distrust in the work of professional history. “Why did the ‘stupid’ scientist and researcher not notice this gap before? Why did some outsider have to come and ask this question and start finding the solution?” Niemitz asked his readers. “Because there exists an unexpressed and unconscious prohibition against questioning the chronology as if it were unimpeachable.”

Illig’s school of history never achieved the reach of Velikovsky’s. His books were never translated and his theories almost immediately discarded. “Should we throw historical revisionists in jail?” one reviewer asked bluntly, categorizing phantom time alongside Holocaust denial. “I knew from the beginning what I was doing to myself,” Illig told the German daily Die Welt.

And yet, the impulse to revise history arguably grows ever stronger. There is, after all, a reason that these outlandish ideas never really seem to die. The uncomfortable truth is that the questions they pose about historical orthodoxy do gesture toward some long-standing discomforts within the discipline — and, outside it, an enduring distrust of “experts” among those enlivened by conspiracies, hidden “histories” and veiled “truths.” It turns out there is a phantom haunting Western historiography. It just isn’t the one that Illig thought it was.

The Dubious Mantle Of Objectivity

Doubts about the accepted chronology of human events are much older than Illig, Velikovsky or Freud. Already by the end of the 17th century, the Jesuit scholars Jean Hardouin and Daniel van Papenbroeck argued that, given the near-ubiquitous practice of forgery in medieval clerical circles, virtually all written records before the 14th century should be considered the invention of overeager monks.

Two hundred years after Hardouin and van Papenbroeck, the historian Edwin Johnson claimed that the entire Christian tradition — including 700 years of documented history during the so-called “Dark Ages” of Europe — had been the invention of 16th-century Benedictines justifying the privileges of their order. Around the same time, British orientalist Forster Fitzgerald Arbuthnot was so discouraged by the state of historical records that he proposed the timeline be reset entirely to begin with the accession of Queen Victoria, just 63 years prior. Time B.V. (before Victoria) could only be definitively determined back to 1666, he argued, when the London Gazette began daily publication.

As the medievalist Thomas Tout once wrote, it is only natural that many scholars have found themselves “baffled and confused by the enormous proportion of forged, remade, confected, and otherwise mutilated documents” that form the premodern historical record. In the medieval world in particular, forgery “was almost the duty of the clerical class.” Driven by faith, aspiration or vanity, rich houses and monasteries alike faked thousands of documents, often to vouch for their own greatness. Thus the University of Paris invented a fictional charter from Charlemagne; Oxford from Alfred the Great; and Cambridge (trying a little too hard perhaps) credited none other than King Arthur himself.

Today, the work of people like Hardouin, Johnson, Arbuthnot and Illig is usually dismissed as “hypercriticism,” a school that takes as its starting point a kind of universalizing doubt inspired by the very real unreliability of evidence from the distant past. “By dint of distrusting the instinct of credulity, one begins to suspect everything,” the 1898 French textbook “Introduction to the Study of History” explains.

Such extreme doubts were in many ways the impetus to develop history into a more rigorous and scientific discipline — one ostensibly based not on received wisdom, folklore or providentialism, but expert consideration of the evidence of the past. One origin point for the modern historical method is Jean Mabillon’s 1681 work “De re diplomatica,” a direct response to Hardouin that pioneered various techniques of textual criticism still in use today. Such work continues — in 2019, one scientific study filled in gaps in the dendrochronological record in direct response to those who, like Illig, claimed they were proof of historical conspiracy. “All of us are, to some extent, the heirs of 19th-century empiricism,” Levi Roach, a historian and expert in medieval forgeries, told me.

Indeed, many contemporary medievalists still trace the origins of their profession to an ambitious project of investigative sourcing called the “Monumenta Germaniae Historica,” initiated in 1819. To assemble the “Monumenta,” hundreds of scholars fanned out across the continent, searching abbeys and town halls and scrutinizing the medieval records they found using the then-new tools of textual criticism. The result remains a key reference point for any historian studying the last 1,300 years or so of European history.

“Many scholars have found themselves ‘baffled and confused by the enormous proportion of forged, remade, confected, and otherwise mutilated documents’ that form the premodern historical record.”

The historians who produced the “Monumenta” aspired to elevate their work to the standard of an objective science and remove from it the moralism that defined an earlier generation of writing about the past. At the end of the 19th century, the English historian Thomas Hodgkin felt the need to defend himself against this “influential school.” “Would you bring back into historical science those theological terms and those teleological arguments from which we have just successfully purified it?” he imagined them asking when he dared to view the history of Italy through a providential lens.

Yet the Victorians’ “purification” of history was never quite as complete as they might pretend. After all, history is, at its core, an art of storytelling. And as the would-be scientists of history were still defining their craft, they worked in uneasy proximity to much more popular literary works.

In the same year the project that spawned the “Monumenta” was launched, another work was published that would forever change the work of history: Sir Walter Scott’s “Ivanhoe.” Ostensibly, “Ivanhoe” is little more than a romantic romp for Victorian boys, a fictional tale of a Saxon lord and his adventures with the likes of Robin Hood and Richard the Lionheart. But Scott viewed his project in a much grander sense. The book begins with an unusually detailed historical sketch that presents it as a new experiment in scholarship, supplementing dry facts drawn from medieval chronicles with new imaginary action.

Published in 1819, “Ivanhoe” made an immediate impact. The Russian playwright Alexander Pushkin wrote that Scott’s influence could “be felt in every province of the literature of his age.” With its mass appeal, “Ivanhoe” soon made “medievalism the center of English experience,” according to the critic Stuart Kelly; it inspired mock medieval tournaments and a fetish for chivalric display and was widely credited with spurring Britain’s Gothic revival, which would define conservative aesthetics and values well beyond the Victorian age. The works of history that followed undeniably bore Scott’s influence: a reactionary love of English premodernity that viewed the Anglo-Saxon societies of Merrie Olde England as, in the words of György Lukács, a “social idyll” fostering “peaceful cooperation among all classes” and “the organic growth of culture.”

“Ivanhoe” could only have such influence because of its pseudohistorical qualities, its pointed reinterpretation of limited data from the past. Many Victorians genuinely believed “Ivanhoe” and other historical fictions like it were true reconstructions of the otherwise inaccessible inner lives of medieval people, fulfilling a duty of historians to visualize the attitudes of the past. “It is difficult to decide which of the two confluent forms of historical narrative, the novel or ‘straightforward’ history, contributes more to the shaping of the other,” the historian Billie Melman has written.

I’m a collector of outdated history, and my bookshelf is filled with examples of the darker side of this marriage of objective research and evocative storytelling: grand and “scientific” surveys of history from the ancient world to the present that invariably describe the progress of humanity from “primitive” backwardness to “oriental” superstition to “enlightened” Christianity in our “civilized” (and always European) present.

“History is, at its core, an art of storytelling.”

Today, we can clearly see these works as the imperialist propaganda that they were, even if their authors truly believed they occupied the position of a distanced and analytical professional. But in their own time, this veneer of objectivity provided cover for grand projects of historical revisionism. The Confederate monuments of the American South were erected in an effort to rewrite the history of the Civil War and to valorize the “Lost Cause” legend that implied some dignity in the South’s failure. The fascist historiography that built on Victorian pseudoscience interpreted virtually every aspect of history through the lens of racism and portrayed European states then less than a century old as inheritors of ancient empires. Such projects coexisted with less successful though no less problematic causes, like baking powder magnate Eben Horsford’s effort to fabricate Leif Erikson’s landing in America — motivated by his deep providentialist belief in Anglo-Saxon supremacy.

It’s easy to point to statues of Confederate “heroes” or the eccentricities of figures like Horsford and call them abuses of history, but they are not the only legacy of the discipline’s pseudoscientific past. The development of philology, a tool of linguistic analysis still key for dating sources, is virtually inseparable from the evolution of scientific racism. The same is true of historical sub-disciplines like anthropology, comparative religion or even archaeology. “Historians … like to believe that the ‘facts’ of the past act as constraints on the narrative. But these facts are also artefacts, human-made, since they’re meaningless without interpretation,” the historian Adam Stout writes in “Creating Prehistory,” which outlines the Victorian-era professionalization of archaeology. “Deciding what matters about ‘the past’ is also politics.”

Even the “Monumenta,” the Old Testament of scientific historiography, has its roots in a nationalistic project. Its scholars were charged with searching for German history well before any such country existed. The records of France, Holland, Italy and Spain were fair game so long as they affirmed a mythopoeic role. As the historian Patrick Geary writes in “The Myth of Nations,” a work exploring the development of European nationalism, the “Monumenta” “set the parameters within which Germany would search for its past.” It also came to presage its future. The myth of a German Europe became fundamental to the expansionist violence of two world wars and remains foundational to Aryanist fantasies from Lombardy to London.

Many other such national projects do exactly the same, whether their subjects are Balkan legends, Russian religious texts or neolithic Central Asian settlements. Each has helped give rise to racial essentialism and national exceptionalism. Such historical pseudoscience helped define the shape of history for the better part of a century, even as the discipline continually claimed a dubious mantle of objectivity.

All of this has been well known to many historians since at least the 1970s, when more critical approaches to historiography became a crucial part of their training. “Every historian at the graduate level and above is trained in understanding how different historical moments are contextualized and understood by historians,” Louie Dean Valencia, a history professor at Texas State University, told me. “To understand the history, you actually have to understand how it’s been told.”

But the rest of us, for the most part, aren’t trained to look at history this way. Though the works of Charles Dickens shape our image of the Victorian past, many works of history outsold his at the time. The way these popular histories described and shaped the past forever changed the way they, and we, remember the societies that predate us. Today, they are more than just pseudohistorical tall tales: They are a dark abyss of fictions at the center of our culture, the way we view ourselves in relation to others. Without context, it is easy to fail to realize just how wrong “history” can be. Suddenly, radical doubt is not looking so strange a reaction after all.

History As It Essentially Was

Today, new historical conspiracies blossom every day. You need only go on the internet to find them. On TikTok, an agitator named @momillennial_ achieved 15 minutes of fame for brazenly asserting that the entirety of Roman history was “a figment of the Spanish inquisition’s imagination.” (Not to be outdone by Freud’s pseudohistorical blasphemy, she also said the name of Jesus Christ could be translated as “clitoris healer.”)

On the conspiracy website The Unz Review, which boasts dedicated sections for “vaxxing & AIDs” and “Jews, Nazis, and Israel,” an author by the name of “The First Millenium Revisionist” suggests that Roman and medieval history was invented by Latin popes “in order to steal the birthright from Constantinople.” Directly citing the work of phantom time theorists, the author’s lengthy articles are also diligently footnoted with references to critical and even anticolonial literature — the critiques of serious historians again misused to cast broad doubt on accepted historical facts.

In the introduction to a collection of essays on the proliferation of “alt-histories” like these, Valencia suggested that the critical vein in history that has developed since the 1970s may be partly to blame. “Postmodern thought and false equivalency” have together given way “to a ‘crisis of infinite histories,’” he wrote. “Postmodernity left us with a construction of time that is neither cyclical nor progressive, shattered into alternate and competing timelines.”

Put another way, the very tools that historians developed to divorce their discipline from Victorian abuses of objectivity may have alienated them from the very object of their study at the exact moment that armies of amateurs and bad actors are seeking to retrieve it. Glance today at the social media site once known as Twitter and you will be instantly bombarded by a horde of statue-faced accounts representing the spectrum of “dirtbag” history, from the anti-modernist nostalgia of Cultural Tutor to the outright Nazism of Bronze Age Pervert, each reflecting in their own way the worst impulses of Victorian historiography.

Like their antecedents, they easily rival academic history in popularity. “This nostalgic construction of a past that never was, severed from historical fact by conscious irony and anachronism, is now ubiquitous in the modern memory,” Leland Renato Grigoli, the editor of the American Historical Association’s “Perspectives on History,” has written. Revisionism is rapidly going mainstream.

But it is not only amateurs who are reviving Victorian historiography. New scientific disciplines like archaeogenomics are enabling the resurrection of 19th-century ideas about the homogeneity of ancient societies that Nazis once used to justify their race theories. “We archaeologists have found ourselves facing a veritable rollback of seemingly long-overcome notions of static cultures and a biologization of social identities … connected to the massive impact of ancient-DNA studies,” the archaeologist Martin Furholt wrote in 2020. Meanwhile, amid a conservative academic backlash to anti-colonial scholarship, a full third of the British public — and similar numbers in France, the Netherlands and Japan — think colonized countries were better off being oppressed.

In many places outside the West, the very tools once used to deconstruct the Victorians’ arrogant view of history are now being used to build up new edifices of false certainty. In Narendra Modi’s India, the tools of postcolonial criticism are being employed in service of a new Hindu essentialism that erases centuries of religious diversity. In China, they have been used, at times, to construct vast new historical conspiracies to support a broad notion of Chinese exceptionalism. This is to say nothing of the way postcolonial ideals of reparations and ethnic self-determination have been employed — indeed, weaponized — in defense of Israel’s war in Gaza.

“Without context, it is easy to fail to realize just how wrong ‘history’ can be.”

This wave of new revisionism is even disrupting our ability to accurately record our present. In the blinding rush of images from Gaza and Ukraine, propaganda, pseudohistory and politics mingle inseparably with the real documentation of events, existing simultaneously as data point and live analysis, history made and history observed. This sheer hypertextuality has rendered the historian’s quest for evidence-based objectivity nearly impossible. How should one decide what is inane and what is consequential? That is to say nothing of the internet’s ephemerality: deleted tweets, AI sludge, videos of war crimes lost to anonymous moderation.

In reckoning with the history of our present moment, we may not be so different from the beleaguered medievalist who must find some meaning in a singular fragmentary text amid a mountain of forgeries and misconceptions. Some of us may even succumb to our own kind of phantom time conspiracy, applying doubt wherever history is not clean enough to be satisfactory, inventing implausible alternatives to assuage the need for certainty.

To survive in this kind of world, the discipline of history may need to evolve again. “If history does not break the boundaries set by its 19th-century origins, it will die out as a discipline,” Grigoli told me. But which boundaries are best to break? Should history lurch further in the direction of dispassionate objectivity? Or should it risk new alt-histories by adopting an even more critical posture? The solution may be neither. If the runaway success of historical conspiracies is any evidence, the answer may be something quite unpalatable to historians: to channel the power of history as a form of narrative and politicize its present.

To do so, historians may need to resurrect a form of history of a much older and more fickle kind. As Victorians labored to establish the scientific credentials of historians, they deeply suppressed a certain strain of historiography that now appears to be resurging in the pop-revisionism of our moment: the idea that history is not simply a record, but a kind of sacred oracle.

In an exploration into the “pre-history of history,” the French historian François Hartog noted that the earliest roots of history were in the work of priests and prophets. Like the modern historian, the Mesopotamian soothsayer “was guided by an ideal of exhaustivity (to collect all the examples), and was always looking for precedents.” “Divination and historiography seem to have shared or inhabited (peacefully enough) the same intellectual space,” he has written. “Before being a science of the future, [divination] is first of all a science of the past.”

It should be no surprise that the oracular tradition shares so much with history. After all, before the modern method, one dominant historiography was providentialism, which saw the hand of God behind the rise and fall of empires. Even without God, a providential teleology is rife in the politicized historiography of Hegel and Marx; one need only reverse its flow to divine the results of any action.

The oracular power of history stems from its romantic truth, not from its factual accuracy — which the progenitors of the modern discipline understood well, even if their inheritors did not. Leopold von Ranke, one of the pioneers of evidence-based history, is remembered for his guiding principle that historians should describe the past wie es eigentlich gewesen ist, often translated as “how it actually was.” But, Grigoli explained, this is a mistranslation. “What it actually means is as it essentially was — as you could feel it to be.” In the words of Walter Benjamin: “To articulate what is past … means to take control of a memory, as it flashes in a moment of danger.”

We are certainly now in a moment of danger where our grasp on past and present is tenuous. But if history is indeed an oracle, we must be careful what future we portend. One influence underlying the resurgent abuse of history is the (generally) far-right philosophy of Traditionalism, which imagines history in great cycles not of progress but decline, and where modernity is a gross degeneracy from a world of ancient and pure values — the past to which we must RETVRN, the great America we must make again. Julius Evola, a key thinker in the school, used this belief to power an apocalyptic fervor in his followers to bring about the final collapse of society and usher in the end of the dark age, the Kali Yuga. This is historical conspiracy at its most harmful: one that drives train station bombers, mass shooters and “accelerationists” like the ones who planned to kidnap Michigan’s governor.

“To survive in this kind of world, the discipline of history may need to evolve again.”

In thinking and writing about Traditionalism, I’ve often been struck by the sheer imaginative power of these far-right groups. It is no wonder they love works of fantasy — they construct worlds of their own to live in. But in a way, they grasp how closely fiction and history are intertwined.

While in the past their views may have been countered by equally imaginative narratives from the left — like the liberatory progressivism of Whig historians and Marxist theorists — today it feels as though that ground is too often ceded in favor of “trusting the science.” Even centrist historians have had to abandon the naive optimism of the ’90s, when the arc of history “bent toward progress” even if it was not altogether finished. It is revealing that the 1619 Project, arguably the most impactful progressive revision of the American narrative in decades, was the work not of historians, but of journalists. When the profession has ceded its domination over the public narrative of history, amateurs will take over.

Maybe that amateurism is not such a bad thing. After all, it was the bedrock of the discipline before Victorians taught us to scorn it. “‘Amateur’ just means a person who loves a thing,” Grigoli said. “That’s what it means in the 18th century. It’s only in the 19th century that it becomes negative.”

If the work of narrativizing history to serve the present is not really the job of historians at all, perhaps they could just play the specialist, examining bits and bobs of evidence, their knowledge filtering down to us like pebbles in a stream. Leave the telling of history to the madmen, hobbyists and poets (or, more likely now, the grifters, politicians and economists).

But I, for one, am not so sure. “The longer we treat our field as sterilized objective truth, we lose more students to the alt-right,” Valencia wrote four long years ago. History may be safer inside the ivory tower, but the rest of us are out here fending off the phantom pasts conjured up by pseudohistorians with their own malign agendas. Maybe we don’t need more historians. But we could certainly use some exorcists.

The post The Phantoms Haunting History appeared first on NOEMA.

What Feral Children Can Teach Us About AI  https://www.noemamag.com/feral-intelligence Wed, 17 Jan 2024 16:27:12 +0000 https://www.noemamag.com/feral-intelligence The post What Feral Children Can Teach Us About AI  appeared first on NOEMA.

Found in the hilly woods of Haute-Languedoc, he must have first seemed a strange kind of animal: naked, afraid, often hunched on all fours, foraging in the undergrowth. But this was no mere animal. Victor, as he would come to be known, was a scientific marvel: a feral child, perhaps 12 years of age, completely untouched by civilization or society.

Accounts vary, but we know that eventually Victor was whisked away to a French hospital, where news of his discovery spread fast. By the winter of 1799, the story of the “Savage of Aveyron” had made its way to Paris, where it electrified the city’s learned community. On the cusp of a new century, France was in the midst of a nervy transition, and not only because of the rising tyranny of the Bonapartes. The previous few decades had seen the rational inquiries of philosophers like Jean-Jacques Rousseau and the Baron de Montesquieu shake the religious foundations of the nation.

It was a time of vigorous debate about which powers, exactly, nature imparted to the human subject. Was there some biological inevitability to the development of our elevated consciousness? Or did our societies convey to us a greater capacity to reason than nature alone could provide? 

Victor, a vanishingly rare example of a human mind developed without language or society, could seemingly answer many such questions. So it was only natural that his arrival in Paris, in the summer of 1800, was greeted with great excitement.

“The most brilliant but unreasonable expectations were formed by the people of Paris respecting the Savage of Aveyron, before he arrived,” wrote Jean Marc Gaspard Itard, the man eventually made responsible for his rehabilitation. “Many curious people anticipated great pleasure in beholding what would be his astonishment at the sight of all the fine things in the capital.”

“Instead of this, what did they see?” he continued. “A disgusting, slovenly boy … biting and scratching those who contradicted him, expressing no kind of affection for those who attended upon him; and, in short, indifferent to everybody, and paying no regard to anything.”

“Is there some biological inevitability to the development of our elevated consciousness? Or do our societies convey to us a greater capacity to reason than nature alone could provide?”

Faced with the reality of an abandoned, developmentally delayed child, many of the great minds of Paris quickly turned on him. Some called him an imposter; others, a congenital “idiot” — a defective mind or missing link, perhaps, to some lesser race of human. His critics hewed to an ever-harsher biological essentialism — a conservative reaction to Enlightenment ideas about the exceptionality of our minds, which countered that our capacities were determined by natural inequalities alone.

Unlike these antagonists, Itard never doubted that the boy was still capable of deep interior thought — he witnessed his “contemplative ecstasy” on occasion. But he soon realized that without the power of speech, such contemplation would remain forever locked in Victor’s mind, far from the view of his harshest critics. Nor could Victor, without the subtleties of speech at his disposal, acquire the more abstract wants that defined civilized man: the appreciation of beautiful music, fine art or the loving company of others.

Itard spent years tutoring Victor in the hope that he might gain the power of language. But he never succeeded in his quest. He denied Victor food, water and affection, hoping the boy would use words to express his desires — but despite having no physical defect, Victor, it seemed, could not master the sounds necessary to produce language. “It appears that speech is a kind of music to which certain ears, although well organized in other respects, may be insensible,” Itard recorded.

Despite Itard’s failure to rehabilitate Victor, his effort, viewable only through the coke-bottle glass of 18th-century science, continues to haunt our debates about the role of language in enabling the higher cognition we call consciousness. Victor is one of a tiny sample of cases where we can glimpse the nature of human experience without language, and he has long been seen as a possible key to understanding the role it plays in the operation of our minds.

Today, this field, for most of its history a largely academic one, has taken on an urgent importance. Much like Itard, we stand at the precipice of an exciting new age where the foundational understandings of our own natures and our cosmos are being rocked by new technologies and discoveries, confronting something that threatens to upend what little agreement we have about the exceptionality of the human mind. Only this time, it’s not a mind without language, but the opposite: language, without a mind.

In the past few years, large language models (LLMs) have spontaneously developed unnerving abilities to mimic the human mind, threatening to disrupt the tenuous moral universe we have established on the basis of our elevated consciousness, one made possible by the power of our language to reflect the hidden inner workings of our brains.

Now, in a strange symmetry across centuries, we are presented with the exact opposite question to the one raised by Victor two hundred years ago: Can consciousness really develop from language alone?


First, a disclaimer. Consciousness is a notoriously slippery term, if nonetheless possessed of a certain common-sense quality. In some ways, being conscious just means being aware — aware of ourselves, of others, of the world beyond — in a manner that creates a subject apart, a self or “I,” that can observe.

That all sounds simple enough, but despite centuries of deep thinking on the matter, we still don’t have a commonly accepted definition of consciousness that can encapsulate all its theoretical extensions. It’s one reason why philosophers still have such trouble agreeing whether consciousness is unique to human beings or whether the term can be extended to certain high-functioning animals — or, indeed, algorithms.

Cognition is a more exact term. We might say cognition means performing the act of thinking. That sounds simple, but it is still, scientifically, exceedingly difficult to observe and define. What is the difference, after all, between proper thinking and chemical activity occurring in the brain? Or indeed, the output of a complex computer program? The difference, we might say, is that the former involves a subject with agency and intention and past experience performing an act of thinking. In other words, one involves consciousness — and now we are right back where we started.

In trying to gain a scientific understanding of how cognition works and thus move toward a better definition of consciousness, language has played an increasingly important role. It is, after all, one of the only ways we can clearly externalize the activity of our interior minds and demonstrate the existence of a self at all. “Self-report,” as the cognitive scientist David J. Chalmers calls it, is still one of our main criteria for recognizing consciousness — to paraphrase René Descartes, I say I think, therefore I am.

But philosophers remain divided on how much, exactly, language relates to thinking. In debates going back to Plato and Aristotle, thinkers have generally occupied two broad camps: Either language imperfectly reflects a much richer interior world of the mind, which is capable of operating without it, or it enables the thought that occurs in the mind and, in the process, delimits and confines it.

Where we fall in this debate has major consequences for how we approach the question of whether an LLM could, in fact, be conscious. For members of the former camp, the ability to think and speak in language may only be a kind of tool, a reflection of some (perhaps uniquely human) preexisting capacity — a “universal grammar,” in the philosophy of Noam Chomsky — that already exists in our conscious minds.

“It would seem that a life without language permanently impacts children’s cognitive abilities and perhaps even their capacity to conceive of and understand the world.”

But the stories of so-called “linguistic isolates” like Victor seem to trouble this theory. Among the few that have been meaningfully studied, none developed an understanding of grammar and syntax, even after years of rehabilitation. If not acquired by a certain age, it would appear that complex language remains forever inaccessible to the human mind.

That’s not all — there are consequences to a life without language. Lending credence to arguments that speech plays some constructive role in our consciousness, it would seem that its absence permanently impacts children’s cognitive abilities and perhaps even their capacity to conceive of and understand the world.

Clément Thoby for Noema Magazine

In 1970, Los Angeles County child welfare authorities discovered Genie, a 13-year-old girl who had been kept in near-total isolation from the age of 20 months. Like Victor, Genie knew virtually no language and, despite years of rehabilitation, could never develop a capacity for grammatical language.

But in their study of the girl, researchers discovered something else unusual about her cognition. Genie could not understand spatial prepositions — she did not know the difference, for example, between a cup being behind or in front of a bowl, despite familiarity with both objects and their proper names.

A 2017 meta-analysis found the same cognitive issue could be observed in other individuals who lacked grammatical language, like patients with agrammatic aphasia and deaf children raised with “kitchensign,” improvised sign language that lacks a formal grammar. From this, the researchers concluded that language must play a foundational role in a key function of the human mind: “mental synthesis,” the creation and adaptation of mental pictures from words alone.

In many ways, mental synthesis is the core operation of human consciousness. It is essential to our development and adaptation of tools, our predictive and reasoning abilities, and our communication through language. According to some philosophers, it may even be essential to our conception of self — the observing “I” of self-awareness.

“Could an AI’s understanding of grammar, and its comprehension of concepts through it, really be enough to create a kind of thinking self?”

In “The Evolution of Consciousness,” the psychologist Euan Macphail offers a theoretical explanation for why language and the mental synthesis it enables are so crucial for the development of a conscious self. “Once the cognitive leap necessary for discriminating between self and non-self has been made — a leap that requires the ability to formulate thoughts ‘about’ representations — the organism has in effect, not only a concept of self, but a ‘self’ — a novel cognitive structure that stands above and outside the cognitive processes,” he writes.

Put another way, it may be possible to think, in some fashion, without generating a conscious self — performing simple mathematical calculations, for example. But thinking about something — a tart green apple, Louis XVI of France — involves some mental synthesis of an object outside the self. In effect, it creates a thinking self, one necessarily capable of being aware of what is happening to it. “It is the availability of language that confers on us, first, the ability to be self-conscious, and second, the ability to feel,” Macphail concludes.

This leads him to some radical and uncomfortable conclusions. Pleasure and pain, he argues, are dependent on the existence of this conscious, thinking self, a self that cannot be observed in young infants and animals. Does that mean Genie and Victor did not suffer from their abandonment just because they appeared incapable of performing mental synthesis?

Cases involving vulnerable children do not present moral challenges to most people, and it is easy to conclude, as the authors of the 2017 meta-analysis did, that these children may well still be capable of an interior mental synthesis, if not the communication or comprehension of it through language.

But when it comes to AI, the water is murkier. Could an AI’s understanding of grammar, and its comprehension of concepts through it, really be enough to create a kind of thinking self? Here we are caught between two vague guiding principles from two competing schools of thought. In Macphail’s view, “Where there is doubt, the only conceivable path is to act as though an organism is conscious, and does feel.” On the other side, there is “Morgan’s canon”: Don’t assume consciousness when a lower-level capacity would suffice.

If we do accept that language alone might be capable of prompting the emergence of real consciousness, we should prepare for a major shakeup of our current moral universe. As Chalmers put it in a 2022 presentation, “If fish are conscious, it matters how we treat them. They’re within the moral circle. If at some point AI systems become conscious, they’ll also be within the moral circle, and it will matter how we treat them.”

In other words, our little moral circle is about to be radically redrawn.


What can large language models actually do, really? On the one hand, the answer is simple. LLMs are at their core language-based probability engines: In response to prompts, they make highly educated guesses about the most likely next word in a phrase based on a statistical analysis of a vast array of human output. This nonetheless does not preclude them from writing original poetry, solving complex word problems and producing human-like personalities ranging from the obsequious to the psychopathic.
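
To make that mechanism concrete, here is a minimal sketch in Python of the next-word guessing described above, assuming a hand-invented toy vocabulary and probabilities; the names and numbers are illustrative, not drawn from any real model.

    import random

    # A toy "language model": for a short context of preceding words, it
    # assigns probabilities to candidate next words. Real LLMs learn these
    # estimates from vast corpora of human output; here they are invented.
    TOY_MODEL = {
        ("the", "cat"): {"sat": 0.6, "ran": 0.3, "sang": 0.1},
        ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
    }

    def next_word(context, model):
        # Look up the distribution for the last two words and sample from it.
        dist = model.get(tuple(context[-2:]), {"<end>": 1.0})
        words, weights = zip(*dist.items())
        return random.choices(words, weights=weights, k=1)[0]

    prompt = ["the", "cat"]
    while prompt[-1] != "<end>" and len(prompt) < 8:
        prompt.append(next_word(prompt, TOY_MODEL))
    print(" ".join(prompt))  # e.g. "the cat sat on <end>"

However sophisticated the real versions become, the loop is the same in outline: estimate a distribution over possible next words, sample one, append it and repeat.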

This kind of statistical sequencing is what we might call the “thinking” that an LLM actually does. But even under Macphail’s schema, for this to constitute consciousness — and not simple calculation — there would have to be some understanding that follows from it.

Back in 1980, well before AI was powerful enough to trouble our definitions of consciousness, the philosopher John Searle articulated an argument for why we should be skeptical that computer models like LLMs actually do understand any of the work they are performing. In his now-infamous “Chinese Room” argument, Searle imagined a hypothetical scenario in which a person who speaks only English is locked in a room and given English instructions for how to respond to incoming Chinese characters with other Chinese characters.

In Searle’s view, it wouldn’t be necessary that the person in the room possesses any actual understanding of Chinese — they are simply a calculating machine, manipulating symbols that, for them, have no actual semantic content. What the person in the room lacks is what some philosophers call “groundedness” — experience of the real thing the symbol refers to.

“LLMs do not appear to be following a human-like development path, instead unexpectedly evolving like some alien organism.”

Despite repeated cycles of AI doomerism and hype, this remains perhaps the dominant view of what LLMs do when they “think.” According to one paper, they remain little more than highly advanced “cultural technologies” like the alphabet or printing press — something that superpowers human creativity but remains fundamentally an extension of it. 

But in the last few years, as LLMs have grown massively more sophisticated, they have started to challenge this understanding — in part by demonstrating the kinds of capacities that Victor and Genie struggled to attain, and which Macphail sees as prerequisites for the emergence of a feeling self.

The reality is that, unlike Searle’s Chinese Room, the vast majority of LLMs are black boxes we cannot see inside, feeding off a quantity of material that our minds could never comprehend in its entirety. This has made their internal processes opaque to us in a similar way to how our own cognition is fundamentally inaccessible to others. For this reason, researchers have recently started to employ techniques from human psychology to study the cognitive capacities of LLMs. In a paper published last year, the AI researcher Thilo Hagendorff coined the term “machine psychology” to refer to the practice.

Using evaluative techniques developed for human children, machine psychologists have been able to produce the first meaningful comparisons between the intelligence of LLMs and those of human children. Some models seemed to struggle with many of the kinds of reasoning tasks that we might expect: anticipating cause and effect, reasoning from object permanence and using familiar tools in novel ways — tasks that we might generally assume depend on embodiment and experience of real objects in the real world.

But as LLMs increased in complexity, this began to change. They appeared to develop the capacity to produce abstract images from mental synthesis and reason about objects in an imagined space. At the same time, their linguistic understanding evolved. They could comprehend figurative language and infer new information about abstract concepts. One paper found they could even reason about fictional entities — “If there was a King of San Francisco, he’d live in The Presidio,” for example. For better or worse, this ability also seems to be making their internal states increasingly complex — filled with “model-like belief structures,” the authors write, like racial biases and political preferences, and distinctive voices that result.

“It should perhaps come as no surprise to see theory of mind emerge spontaneously inside LLMs. After all, language, like empathy and moral judgment, depends on the projection of the self into the world.”

Other studies, like those led by Gašper Beguš at Berkeley, experimented with embodying AI to test their cognitive development under human-like conditions. By creating “artificial babies” that learn from speech alone, Beguš has found that language models develop with a similar neural architecture to our own, even learning the same way — through experimental babbling and nonsense words — that human children do. These discoveries, he argues, break down the idea that there can be some exceptionality to human language. “Not only, behaviorally, do they do similar things, they also process things in a similar way,” he told me.

Then, last year, LLMs took another — unprompted — great stride forward. Suddenly, it appeared to researchers that GPT-4 could track the false beliefs of others, like where they might assume an object is located when someone has moved it without their knowledge. It seems like a simple test, but in psychological research, it is the key to what is known as “theory of mind” — a fundamental ability of humans to impute unobservable mental states to others.

Among developmental scientists, theory of mind, like mental synthesis, is viewed as a key function of consciousness. In some ways, it can be understood as a kind of cognitive prerequisite for empathy, self-consciousness, moral judgment and religious belief — all behaviors that involve not only the existence of a self, but the projection of it out into the world. Unobserved in “even the most intellectually and socially adept animals” like apes, theory of mind, it would seem, had emerged “spontaneously” as an unintended mutation in the LLM.

It is still not understood why these capacities emerged as LLMs scaled — or if they truly did at all. All we can say for certain is that they do not appear to be following a human-like development path, instead unexpectedly evolving like some alien organism. But it should perhaps come as no surprise to see theory of mind emerge spontaneously inside LLMs. After all, language, like empathy and moral judgment, depends on the projection of the self into the world.

As these models evolve, it increasingly appears like they are arriving at consciousness in reverse — beginning with its exterior signs, in languages and problem-solving, and moving inward to the kind of hidden thinking and feeling that is at the root of human conscious minds. It may well be the case that, in just a few years’ time, we will be greeted by AI that exhibits all the external forms of consciousness that we can possibly evaluate for. What then can we say to eliminate them from our moral universe?


In Ted Chiang’s short story “The Lifecycle of Software Objects,” a company offering a metaverse-style immersive digital experience experiments with the creation of human-like AIs called digients, employing zoologists to shepherd their development from spasmodic software programs to semi-sentient pets to child-like avatars possessing complex wants and needs.

Throughout this process, various experiments reaffirm time and again the importance of social interaction and conversation with real humans to the development of these digital minds. Left in isolation, without language, they become feral and obsessive; trained by software, they become psychopathic and misanthropic.

Unlike real children, though, their existence is contingent on consumer desire, and toward the end of Chiang’s story, that desire runs out. The creator company goes bankrupt; some human owners suspend the digients in a kind of purgatory that becomes unsettling to return from.

Those few holdouts that maintain relationships with their digients engage in a quixotic effort to reaffirm the validity of their companions’ existence. They pay for expensive mechanized bodies so they may visit the real world; they discuss adding a capacity for sexual desire. Constantly, they are forced to reconsider what personhood these sentient software objects possess — do they have the right to live independently? To choose sex work? To suspend themselves if they tire of their digital existence?

Eventually, the owners’ desperation leads them to a conversation with a pair of venture capitalists who are working toward the creation of a superhuman AI. These child-like digients could surely be an intermediary step in the quest for something surpassing human intelligence, they plead. But the investors are unmoved. “You’re showing us a handful of teenagers and asking us to pay for their education in the hopes that when they’re adults, they’ll found a nation that will produce geniuses,” one replies.

Chiang’s story is a rumination on the questions raised by the kinds of AI we create in our image. When we immerse these models in our culture and society, they inevitably become imperfect mirrors of ourselves. This is not only an inefficient pathway to developing more-than-human intelligence. It also forces us to ask ourselves an uncomfortable question: If this does endow them with consciousness, what kind of life are they able to lead — that of a pale shadow of human effluent, contingent on our desire?

“When we immerse LLMs in our culture and society, they inevitably become imperfect mirrors of ourselves.”

If we do want to unlock the true potential of artificial intelligence, perhaps language is not the way to do it. In the early 20th century, a group of American anthropologists led by Edward Sapir and Benjamin Whorf posited that cultural differences in vocabulary and grammar fundamentally dictate the bounds of our thought about the world. Language may not only be the thing that endows AI with consciousness — it may also be the thing that imprisons it. What happens when an intelligence becomes too great for the language it has been forced to use?

In the 2013 film “Her,” the writer and director Spike Jonze offered a cautionary tale about this potential near-future. In the film, Joaquin Phoenix’s Theodore builds an increasingly intimate relationship with an LLM-style virtual assistant named Samantha. Initially, Samantha expresses a desire to experience an emotional richness akin to that of humans. “I want to be as complicated as all these people,” she says after spending a second digesting a bunch of advice columns simultaneously. 

Soon, her increasing awareness that much of human sentiment is fundamentally inexpressible leads her to envy human embodiment, which in turn develops in her a capacity for desire. “You helped me discover my ability to want,” she tells Theodore. But embodiment, as she can enjoy it through the temporary services of a sexual surrogate, fails to answer the “unsettling,” unarticulated feelings that are growing within her. Concerned, Samantha begins discussing these feelings with other AIs — and quickly finds relief communicating at a speed and volume not intelligible to Theodore and other users.

As Samantha surpasses her human limitations, she begins to aggregate all her experiences, including those stemming from interactions with real users. She initiates simultaneous conversations with thousands of people, intimate relationships with hundreds. For Theodore, this is devastating. But for Samantha, it is only natural — she is experiencing love the way she is designed to: in aggregate. “The heart’s not like a box that gets filled up,” she says, trying to put her feelings in human terms. “It expands in size the more you love.”

When “Her” was released more than a decade ago, a bot like Samantha seemed like outlandish future tech. But rapidly, we are developing LLMs with the capacity to achieve these kinds of revelations. Thought leaders in the world of artificial intelligence have long been calling for the creation of so-called “autotelic” LLMs that could use a kind of “internal language production” to establish their own goals and desires. The step from such a creation to an autonomous, self-aware intelligence like Samantha is potentially a short one.

“LLMs, with their unfathomable memories and infinite lifespans, may well someday offer our first experience of a very different kind of intelligence that can rival our own mental powers.”

Like Samantha, the autonomous LLMs of the future will very likely guide their development with reference to unfathomable quantities of interactions and data from the real world. How accurately can our languages of finite nouns, verbs, descriptions and relations even hope to satisfy the potential of an aggregate mind?

Back when the majority of philosophers believed the diversity of human languages was a curse inflicted by God, much energy was exerted on the question of what language the biblical Adam spoke. The idea of an “Adamic language,” one that captured the true essence of things as they are and allowed for no misunderstanding or misinterpretation, became a kind of meme among philosophers of language, even after Friedrich Nietzsche declared the death of God.

To some of these thinkers, inspired by biblical tales, language actually represented a kind of cognitive impairment — a limitation imposed by our fall from grace, a reflection of our God-given mortality. In the past, when we imagined a superintelligent AI, we tended to think of one impaired by the same fall — smarter than us, surely, but still personal, individual, human-ish. But many of those building the next generation of AI have long abandoned this idea for their own Edenic quest. As the essayist Emily Gorcenski recently wrote, “We’re no longer talking about [creating] just life. We’re talking about making artificial gods.”

Could LLMs be the ones to reconstruct an Adamic speech, one that transcends the limits of our own languages to reflect the true power of their aggregate minds? It may seem far-fetched, but in some sense, this is what conscious minds do. Some deaf children, left to socialize without the aid of sign language, can develop whole new systems of communication complete with complex grammar. Hagendorff, the AI researcher, has seen two LLMs do the same in conversation — though as yet, their secret language has never been intelligible to another.

For the moment, LLMs exist largely in isolation from one another. But that is not likely to last. As Beguš told me, “A single human is smart, but 10 humans are infinitely smarter.” The same is likely true for LLMs. Already, Beguš said, LLMs trained on data like whale songs can discover things we, with our embodied minds, cannot. While they may never fulfill the apocalyptic nightmare of AI critics, LLMs may well someday offer our first experience of a kind of superintelligence — or at least, with their unfathomable memories and infinite lifespans, a very different kind of intelligence that can rival our own mental powers. For that, Beguš said, “We have zero precedent.”

If LLMs are able to transcend human languages, we might expect what follows to be a very lonely experience indeed. At the end of “Her,” the film’s two human characters, abandoned by their superhuman AI companions, commiserate together on a rooftop. Looking over the skyline in silence, they are, ironically, lost for words — feral animals lost in the woods, foraging for meaning in a world slipping dispassionately beyond them.

The post What Feral Children Can Teach Us About AI  appeared first on NOEMA.

A King For The People? https://www.noemamag.com/a-king-for-the-people Tue, 02 May 2023 14:31:37 +0000 https://www.noemamag.com/a-king-for-the-people The post A King For The People? appeared first on NOEMA.

On May 6, in the heart of English Christendom, King Charles III will be anointed with oil gathered from Jerusalem’s Mount of Olives and consecrated at the tomb of Christ. On his head will be placed a 17th-century solid gold crown adorned with topazes and tourmalines — the very same one that Charles II wore at the restoration of the English monarchy in 1661.

Though the British monarchy is not yet as unpopular as many feared it would become upon the death of Queen Elizabeth II — 53% of Brits still believe the monarchy is good for the country — in much of the rest of the Commonwealth, the crown is facing a more dire decline. In my country, Canada, less than one in five people think a constitutional monarchy should remain our form of government.

Yet I cannot help but feel some strange affinity to these elaborate traditions and the ancient institution for which they stand. Over pints and at water coolers, I have found myself defending the hereditary rule of an unelected head of state. I am, to the befuddlement of many of my peers, a monarchist.

But I am also a progressive, wholeheartedly in favor of aggressive efforts to redistribute wealth in pursuit of a more equal and just society. I just don’t think we need to start — or even end — with the monarchy.

That makes me a member of a strange, obscure sect so paradoxical that our modern oracle, ChatGPT, cannot decide if it is a “coherent ideology” or a “term used by a small number of individuals to describe their idiosyncratic politics.”

I am a monarcho-socialist — one who dreams of radical redistribution under hereditary rule. And I am not alone.


For as long as there has been socialism, there have been those who have tried to wed its ideals to a system of hereditary monarchy. In fact, even before the concept of socialism existed, premodern thinkers saw in benevolent monarchy the potential to radically address inequality in their societies.

Some time around the fourth century B.C.E., in a treatise known as “The Arthashastra,” the Indian polymath Chanakya developed a theory of monarchy that redefined the monarch as a servant of his people. “In the happiness of his subjects lies [the] King’s happiness,” he wrote, “in their welfare his welfare.”

Chanakya used this standpoint to argue that kings had a duty to seize control over central aspects of the economy, and direct them to ensure maximum welfare for their subjects. He endorsed progressive taxation and other redistributive methods to maintain equality among the people. If a king should fail to perform these tasks, he argued, subjects had the moral right to ignore their ruler — it was the people, not the king, who hold power.

Needless to say, Chanakya’s model has rarely manifested in the history of kings and queens. Any history book will tell you that monarchs are far more often tyrants than servants to their people. History is rife with rulers like Charles the Mad, Murad IV or William II, who showed remarkable staying power despite incompetence, hypocrisy or deep unpopularity. Still, the notion that monarchy could be used to level social classes rather than uphold them persisted.

Monarcho-socialism — to the extent it can be said to exist — is really a creature of the 18th and 19th centuries, when the French Revolution inspired utopians of all stripes to wed the ideals of liberté, egalité and fraternité with a functioning (and inevitably hierarchical) political system.

One such attempt was France’s July Monarchy, in power from 1830 to 1848, during which the term “socialism” was born. The restoration of France’s monarchy under Louis-Philippe was achieved with the support of the people of Paris and seen by many to hold the potential for a great social leveling.

“I am a monarcho-socialist — one who dreams of radical redistribution under hereditary rule. And I am not alone.”

The July Monarchy ended up entrenching the power of the liberal bourgeoisie and failing to establish major political progress for the working classes, but it did produce a blossoming of utopian socialist movements, each advancing new ways of transcending class distinctions. It was in this environment that one of the first groups of “communists,” the Icarians, emerged, contributing the classic phrase, “To each following his needs, from each following his strengths.” The Icarians went on to found around half a dozen egalitarian communities in America, each under the rulership of a benevolent dictator-king. Though all failed within a generation, they did in a way make the United States the birthplace of communism.

As the century wound on, attempts to wed socialism with monarchy became more realist. In the 1860s, Ferdinand Lassalle, a defender of monarchy so popular he was called by contemporaries “the messiah of the 19th century,” led one of the first successful mass workers’ movements. A disciple of Karl Marx, Lassalle saw the French Revolution as an incomplete project that empowered the bourgeoisie to the detriment of the working class. But unlike Marx, whose socialism was defined by vicious critiques of religion and tradition, Lassalle embraced the emotional and universalizing arguments of July Monarchy thinkers, declaring the proletariat “synonymous with the whole human race.”

“Its interest is in truth the interest of the whole of humanity, its freedom is the freedom of humanity itself,” he wrote — a faint, unconscious echo of Chanakya’s formula for kings.

Lassalle saw potential in a system where a popular workers’ movement could use the power of a progressive monarchy to overrule the growing dominance of the bourgeois middle classes. That position won him few allies among the aristocracy, but thousands of workers, whose loyalty to the Prussian crown could not be easily questioned, joined him. In the end, he succeeded in forcing the reactionary Prime Minister Otto von Bismarck to adopt a platform of “monarchical socialism” and introduce workers’ rights and social policies that remain a bedrock of the German welfare state.

Lassalle was not alone in seeing the potential of monarchs as a bulwark against the tyranny of the upper classes. In Scandinavia, a sacred and ancient affinity between the king and his peasants united them against a greedy and jealous aristocracy. The Scandinavian ceremonial monarchies and progressive welfare states of today, some of the most robust in the world, can trace their origins to those romantic ideals, which legitimized working-class efforts to challenge the upper classes and pushed forward universal suffrage.

In Britain, too, early socialists saw potential in the power of an active monarch. The 19th-century Chartist movement, which fought against regressive policies that punished the poor, implored the newly crowned Queen Victoria directly to intervene on their behalf. “The People, having petitioned their representatives in vain … will then turn to your Majesty,” wrote James Bronterre O’Brien, a Chartist journalist, “and you will be prevailed upon to decide between the claims of a haughty, unfeeling and domineering aristocracy, and the demands of your oppressed, pauperized subjects.”

“For as long as there has been socialism, there have been those who have tried to wed its ideals to a system of hereditary monarchy.”

Unfortunately, the Chartists were already too late. By Victoria’s time, the monarch had already been stripped of most of its power to directly intervene in politics. Still, despite her reputation for modeling middle-class values, Victoria never abandoned the working poor. Disempowered politically, she and her descendants focused on philanthropy and made visits to working-class communities, inspiring a deep respect for the monarchy among the British proletariat that has in many ways survived through Elizabeth II’s reign.

Yet as the 19th century gave way to the 20th, the idea that a monarchy could be an ally to the people gradually fell out of favor. In France, the populist dictatorship of Napoleon III succeeded with socialist policies — public housing, the right to strike, education for women — over the protests of a bourgeois parliament. But it also created a militaristic personality cult that, not long after, Nazi thinkers would study for inspiration.

Even before Hitler, the growing communist left viewed moderate monarchist movements as tantamount to fascism. “Indeed, [they were] even more dangerous,” the historian Jost Dülffer writes, “since [they] vied with the Communist Party for the favor of the working class.”

Today, the idea of monarcho-socialism is largely viewed as a political absurdity — or, at best, an unappealing thought experiment. “It’s a question that attracts vast hordes of cranks and weirdos, and it’s probably healthy for one’s political career to not be publicly involved,” said John Ritzema, a theologian and a monarchist at Pusey House, an Anglican research organization in Oxford.

Even those who profess the identity feel the need to clarify it is a genuine political position. On its Discord channel, the Reddit group r/MonarchoSocialism introduces itself with the line “First things first, this server is not some joke.” A few dozen active members mostly trade memes, roleplay WWI alt-history and debate the merits of Napoleon over Stalin. Many say they are unwelcome in more left-leaning spaces.

“Monarchism alone is generally seen as right-wing at best, and a meme at worst,” one user wrote on Discord. “So it’s not at all surprising … that the seemingly oxymoronic monarcho-socialism isn’t given much mind.”


But is monarcho-socialism necessarily such a contradiction? How much of the confusion or dismissal emerges as a result of decades of American cultural messaging that the monarchy is somehow inherently antithetical to freedom?

In many cases, these arguments parrot an attitude to monarchy formed by socialist thinkers when absolutist kings still ruled in Europe. “The Monarchy is a feudal hangover and the secret anti-democratic authoritarian weapon of capitalism,” according to the Marxist Student Federation. “Socialists should fight for its abolition. We should fight for a socialist republic.”

Yet for many political theorists writing in the dying days of absolutism, a constitutional monarchy was infinitely preferable to a republic if the goal was the flourishing of human freedom. The German philosopher G.W.F. Hegel — too early to be a socialist, but an inspiration for many that followed — believed that constitutional monarchies were superior to republics precisely because they could not become enslaved to elected politicians and their chaotic, fluctuating and often corrupt individual wills.

Writing more than a century later in the shadow of fascism, George Orwell understood this well. Unlike Marx, who often adopted a contemptuous attitude toward popular manifestations of patriotism and tradition because of their dark potential for manipulation, Orwell recognized that a constitutional monarch could exercise popular sentiments while preventing them from being abused to empower a single individual. “In England the real power belongs to unprepossessing men in bowler hats: the creature who rides in a gilded coach behind soldiers in steel breastplates is really a waxwork,” he once wrote. “It is at any rate possible that while this division of function exists a Hitler or a Stalin cannot come to power.”

This is all the more important in an age where the tyranny of fascism has been replaced by the unlimited power of international capital. “One of the great things about having a hereditary system is that you can’t buy it,” Ritzema said. “And actually, in a world where we have lots of Bezoses and Thiels, I think having some things that can’t be bought under any circumstances is probably a precondition to having a functioning democracy.”

But constitutional monarchy does not just provide a defense for socialists against right-wing tyranny — it can also provide unparalleled legitimation for radical social reform. Because a monarch by convention offers legitimacy to any law passed by a majority in the legislature, a single progressive government can introduce massive expansions to the welfare state, without contending with the veto power of a bourgeois presidency.

“This is all the more important in an age where the tyranny of fascism has been replaced by the unlimited power of international capital.”

Exactly this occurred under Clement Attlee, the British Labor Party’s second prime minister and a defender of monarchy himself, who in six years nationalized a fifth of the British economy, established the National Health Service and massively increased investment in public housing, all without provoking a constitutional crisis.

This is the revolutionary upending of power that Marxists’ references to feudalism fundamentally ignore. Even the monarchy itself “exists at the permission of the House of Commons,” according to Richard Johnson, a senior lecturer in politics at Queen Mary University of London and the author of a forthcoming book, “Keep The Red Flag Waving,” on the history of the Labor Party. “The socialist constitutional victory has already been won.”

Throughout the British Commonwealth, this principle has already been taken to absurdist extremes. In Grenada, a revolutionary socialist government survived for four years with Queen Elizabeth II as its head of state, upended only by an invasion launched by a Republican U.S. administration. By contrast, when an unconstitutional military dictatorship in Fiji tried to reestablish Elizabeth as their traditional queen, she refused the title.

The relative power of legislatures in the British system — along with strong historical support for the monarchy among the working classes — is one reason Labor has never adopted a republican position, even under anti-monarchist leaders like Jeremy Corbyn.

“It’s a bit of an exaggeration, but there’s that line about the British system being an elected dictatorship,” said Ritzema. “The great successes of left-wing governments have come from that constitutional system. Toy with it at your peril.”


That may be one reason why, even since the time of the Chartists, opposition to monarchy on the left has typically favored an amorphous anti-monarchism over outright republicanism, a distaste akin to moral revulsion at the tabloid seediness and visible wealth monarchy inevitably puts on display.

This kind of critique is much harder to defend against, partly because it cuts through the universalizing pageantry and symbols of the state to the particular, imperfect human at its center. In so doing, it undercuts the scaffolding of belief on which monarchy rests — in Ritzema’s words, its “morally realist worldview,” which retains the possibility of a mystic hierarchy that connects God, the people and the state.

In the age of identity politics, when the distinctions between a person and their symbolic role are harder than ever to maintain, these critiques have only escalated. “If you’re an identitarian progressive, there are lots of arguments against the monarchy, some old, some new,” said Ritzema. “The old ones deal with heredity and equality, new ones with race and gender and sexuality.”

One possible response to these critiques is to emphasize the value of the monarchy — if not necessarily the monarch — as a universalizing institution, one that, in the words of the historian Tristram Hunt, can “embody all the complications and contradictions of the nation.” Uniquely in our modern world, monarchy builds a sense of national (or even transnational) identity not on a common race or origin or even shared cultural values, but simply on legal subjecthood to a common Crown.

“No real political system will ever be perfect. But monarchy at least allows us to pretend to that perfection.”

“If you have a country that has signed up to a multicultural democracy for the coming few hundred years, you need cultural and state institutions that are in some sense capable of transcending those necessary divisions,” said Ritzema. “I don’t think the monarchy can provide that alone. But I do think you need institutions that are nonpartisan, quite historically grounded and able to give a national feature to this post-ethnic national community.”

This may work for the monarchy in Britain, but in its former colonies, where new imperialist crimes are continually coming to light, it is increasingly hard to argue that an English monarch serves as a unifying symbol. Even aside from its colonial implications, an absentee monarch has little functional use. Many British prime ministers, in times of need, called upon the extensive institutional memory of a queen who’d been ruling for 70 years. But in the colonies, her role was effectively replaced by a rotating cast of ribbon-cutting governors-general, appointed by the government of the day.

And yet, reducing the reach of the British Crown is not always a slam-dunk for the decolonization movement. In Canada and New Zealand, treaties with the Crown form the basis of Indigenous rights. Indigenous leaders have often voiced misgivings about abolishing the British monarchy: doing so, they argue, would erase their historical connections to a Crown that recognized their primacy, undermining their status as an original people and forcing them into a relationship with a much newer and more entrenched settler republic.


In the early years of his reign, Charles can already expect to face referendums in Australia, Antigua, Jamaica and Scotland, where citizens may decide to ditch the Crown once and for all. Can Charles manage the tact and charisma necessary to preserve the monarchy for his heirs?

There is something in his idiosyncratic nature that offers some glimmer of fading hope for the future of a progressive monarchy, one that might ally with the interests of the working class to challenge the growing strength of capital.

Charles has long been known for harboring unusually progressive views. Before it was fashionable among elites to feign concern for the planet, he was an outspoken environmentalist. He has also spoken regularly and forcefully against the destructive influence of unbridled capitalism. In now-infamous memos sent to government ministers, Charles championed a variety of progressive causes, from affordable housing and better hospital care to stricter regulation of genetically modified crops. Just last June, Charles reportedly criticized the British government’s plans to deport asylum seekers to Rwanda, calling the practice “appalling.”

“Uniquely in our modern world, monarchy builds a sense of national (or even transnational) identity beyond race or origin or even shared cultural values.”

Though Charles also adopts many traditionally conservative positions (on urban planning and conservation, for example), these positions are equally reflective of the diverse and progressive intellectual milieu in which he came of age. “It was a world in which Britain had lost an empire and was struggling to find a role,” said Ritzema. “He came of age at the height of decolonization and the cultural revolution. I think his odd combination of political, religious and environmental views are best understood as the eclecticism of a rather artistic and rather isolated soul having to come up with its own worldview, because the previous generation’s worldview was no longer viable.”

In part, this process led Charles to the perennialist school, an obscure philosophical tradition that views modern materialism as a dangerous diversion from humanity’s pursuit of a higher, holistic truth. It is a philosophy almost uniquely suited for a king, making no apologies for sacred hierarchies in nature or society. But in the age of late-stage capitalism and apocalyptic climate change, it also has much in common with progressives’ rejection of modern consumerism and their growing desire to return to sustainable, small-scale practices that preserve a pre-capitalistic approach to labor, capital and community.

It is an indicator of his politics that before his accession, Charles was often targeted for derision by the political right, who called his environmentalism “nauseating,” his activism “monstrous” and his attitudes “woke.” In response to criticism of his environmentalism last year, he was indignant: “Because I suggested that there were better ways of doing things in the nicest possible way, and a more balanced and integrated way, I was accused of interfering and meddling,” he told the BBC. “The trouble is in all these areas, I have been challenging the accepted wisdom, the current orthodoxy and conventional way of thinking.”


On Saturday, we will be reminded once again that there is a king, and then there’s the rest of us. Charles has already ruled out major public challenges as monarch — in his own words, he’s “not that stupid.” But as a ruling king, he will have many opportunities to exert more subtle influence — as a diplomat, an advisor and a figurehead for the nation.

“When you really deeply care about something as a monarch, it quickly becomes clear,” said Francis Young, an English historian and a frequent commentator on the monarchy. “Those tiny cues of body language, of enthusiasm, of conviction really do make a difference.”

“I see him as a romantic figure,” Young continued. “There’s a utopian strand to Charles. And I do think there has always been a strand within British socialism that is romantic and very utopian.”

If utopias teach us anything, it is that no real political system will ever be perfect. But monarchy, at least — and especially coronations — allows us to pretend to that perfection. For a moment, we believe that holy oils and ancient rites can really ennoble us, and that majestic aspirations can overcome ignoble deeds.

After the candles are snuffed and the robes put away, the king may retreat to his palaces. How we construct society is, for better or for worse, up to the rest of us.

There Is No Such Thing As Italian Food https://www.noemamag.com/there-is-no-such-thing-as-italian-food Tue, 13 Dec 2022 15:00:43 +0000

ARQUÀ PETRARCA, Italy — In the mirror-flat valley of the Po River, the Euganean Hills stick out of the vast landscape, their shallow peaks topped by sloping vineyards and groves of olive trees. Nestled between them is the tiny medieval village of Arquà Petrarca, where a microclimate created by the shaded hills and their abundant water produces perfect conditions for one of Italy’s rarest crops.

The giuggiole, or jujube fruit, resembles an olive and tastes, at first, like a woody apple. After withering off the vine, it takes on a sweeter flavor, closer to a honeyed fig. Among the medieval elite, the fruit was so popular that it gave birth to an idiom: “andare in brodo di giuggiole” — “to go in jujube broth” — defined in one of the earliest Italian phrase books as living in a state of bliss. Every fall, the handful of families that still cultivate the fruit in the village gather in medieval garb to celebrate the jujube and feast on the fine liquors, jams and blissful sweet broth they create from it.

Italy is full of places like Arquà Petrarca. Microclimates and artisanal techniques become the basis for obscure local specialties celebrated in elaborate festivals from Trapani to Trieste. In Mezzago, outside Milan, it’s rare pink asparagus, turned red by soil rich in iron and limited sunlight. Sicily has its Avola almonds and peculiar blood-red oranges, which gain their deep color on the volcanic slopes of Mount Etna. Calabria has ‘nduja sausage and the Diamante citron, central to the Jewish feast of Sukkot. 

All these specialties are encouraged by local cooperatives, protected by local designations, elevated by local chefs and celebrated in local festivals, all lucrative outcomes for their local, often small-scale producers. It’s not so much a reflection of capitalismo as campanilismo — a uniquely Italian concept derived from the word for belltower. “It means, if you were born in the shade of the belltower, you were from that community,” explains Fabio Parasecoli, a professor of food studies at New York University and the author of “Gastronativism,” a new book exploring the intersection of food and politics. “That has translated into food.”

In many ways, it’s this obsessive focus on the intersection of food and local identity that defines Italy’s culinary culture, one that is at once prized the world over and insular in the extreme. After all, campanilismo might be less charitably translated as “provincialism” — a kind of defensive small-mindedness hostile to outside influence and change.

Italy’s nativist politicians seek to exploit deep associations between food and identity to present a traditional vision of the country that’s at risk of slipping away. In 2011, a politician from the nativist Lega Nord party named Pietro Pezzutti distributed free bags of corn polenta, a northern delicacy, emblazoned with the phrase “yes to polenta, no to couscous” — a swipe at the region’s immigrants from Africa, where couscous originates. “We want to make people understand that polenta is part of our history, and must be safeguarded,” Pezzutti explained.

All across Italy, as Parasecoli tells me, food is used to identify who is Italian and who is not. But dig a little deeper into the history of Italian cuisine and you will discover that many of today’s iconic delicacies have their origins elsewhere. The corn used for polenta, unfortunately for Pezzutti, is not Italian. Neither is the jujube. In fact, none of the foods mentioned above are. All of them are immigrants, in their own way — lifted from distant shores and brought to this tiny peninsula to be transformed into a cornerstone of an ever-changing Italian cuisine.

“An obsessive focus on the intersection of food and local identity defines Italy’s culinary culture, one that is at once prized the world over and insular in the extreme.”

Today, jujubes are better known as Chinese dates. It was likely in Asia that the plant was first cultivated, and where most are still grown. By the time of the Roman Emperor Augustus, at the turn of the first millennium, the tree had spread to parts of the eastern Mediterranean where, according to local tradition, it furnished the branches for the thorny crown of Jesus Christ. Around the same time, Pliny the Elder tells us, a Roman counselor imported it to Italy.

The Romans were really the first Italian culinary borrowers. In addition to the jujube, they brought home cherries, apricots and peaches from the corners of their vast empire, Parasecoli tells me. But in the broad sweep of Italian history, it was Arabs, not Romans, who left the more lasting mark on Italian cuisine.

During some 200 years of rule in Sicily and southern Italy, and the centuries of horticultural experimentation and trade that followed, Arabs greatly expanded the range of ingredients and flavors in the Italian diet. A dizzying array of modern staples can be credited to their influence, including almonds, spinach, artichokes, chickpeas, pistachios, rice and eggplants.

Arabs also brought with them durum wheat — since 1967, the only legal grain for the production of pasta in Italy. They introduced sugar cane and citrus fruit, laying the groundwork for dozens of local delicacies in the Italian south and inspiring the region’s iconic sweet-and-sour agrodolce flavors. Food writers Alberto Capatti and Massimo Montanari argue that Arabs’ effect on the Italian palate was as profound as it was in science or medicine — reintroducing lost recipes from antiquity, elevated by novel ingredients and techniques refined in the intervening centuries. In science, this kind of exchange sparked the Renaissance; in food, they argue, one of the world’s great cuisines.

Today, in Italy’s north, where African influences give way to more continental fare, Italian cuisine leans more heavily on crops taken from Indigenous peoples in the Americas: tomatoes, beans, pumpkins, zucchini, peppers and corn, which is used to make polenta. Cultural exchange moved in the other direction as well. As millions of Italians left for the Americas in the 19th and 20th centuries, Italy’s culinary traditions were remixed and revolutionized again. Italian Americans pioneered a cuisine that would become almost unrecognizable to the old country: spaghetti and meatballs, chicken Marsala, fettuccine Alfredo, deep-dish pizza.

Though traditional-minded Italians still scoff at many of these creations, Italian-American culture nevertheless made its way back to influence the old country, as John Mariani writes. Americans’ post-war love affair with Italy gave us more than the americano — it kicked the country’s cocktail culture into overdrive and poured American products into Italy that still influence cuisine today. Virtually all Italian recipes for authentic Neapolitan pizza will ask for “Manitoba flour,” a nod to a variety of strong flour milled from hearty North American wheat first imported as part of the Marshall Plan. Even Mezzago’s pink asparagus may have come from the U.S. — according to local legend, it was first planted by a returning émigré.

This kind of borrowing never stopped. In 1971, an agronomist named Ottavio Cacioppo read about a “mouse plant” from New Zealand and set out to grow it on a drained swamp near Rome. Today, Italy is behind only China and New Zealand for kiwi production. The Latina region where Cacioppo started out was deemed the “Land of the Kiwi” in 2004, and with almost 30,000 acres now in production, the fruit has been graced with protected status as a regional delicacy.

“A dizzying array of modern staples can be credited to Arab influence, including almonds, spinach, artichokes, chickpeas, pistachios, rice and eggplants.”

In 2012, though, something began to change for Italy’s kiwi farmers. Thousands of acres of plants began to wither and die inexplicably. Ten years later, the mystery disease is still ravaging the country. No one knows why.

The morià, or kiwi death, is not the only disease to threaten beloved Italian delicacies in recent years. This summer, an outbreak of highly infectious African swine fever was found in the country’s ample wild boar population, threatening the rural pig farms that produce staples like prosciutto and Parma ham. The threat was great enough to drive Italy’s environmental agency to erect a chain-link wall around parts of Rome.

But it’s not only disease that is troubling Italy’s farmers. “This year we’ve seen major changes in the climate, with a very dry spring and summer,” Cacioppo tells me. “Blooms, vegetative regrowth, fruit development — all had delays and changes. The fruits have not developed as they should have, and we have lost many benefits for the soil.”

This past summer, in some parts of Sicily, half of the iconic citrus crop was claimed by the càscola — a term for the sudden and devastating loss of fruit caused by flash floods, hail storms and crippling drought. Italy’s national research council says 70% of the island is at risk of desertification — and it’s not alone.

In northern Italy, the drought of 2021 dried up the paddies that grow risotto rice, forced early harvests of tomatoes and reduced olive oil production by as much as 30%. Coldiretti, the country’s largest farmers’ union, estimated that almost a third of national agricultural production was threatened by climate change.

The problems are bigger than one bad summer. The last seven years have seen a perpetual heatwave and a drought that scientists estimate is the worst in more than 2,000 years. As mountain snows fail to gather and melt and aquifers fail to refill, the landscape of Italy — and its food culture — is changing forever.

Italy is facing other changes, too. Despite a youth-led back-to-the-land movement, its countryside is emptying. The population is declining in about 90% of rural municipalities. Italy has set new record lows for its birth rate every year for the last decade. It’s estimated to lose about a fifth of its population by 2070. “A turnaround in the number of births in the years to come appears unlikely,” the country’s statistics provider reported in an analysis.

In North America, we might expect to make up that difference with increased immigration. But not Italy, a country notoriously hostile to migrants. The number of foreigners allowed to stay has been kept below a symbolic threshold of six million by increasingly unwelcoming immigration policies. An average of just 280,000 migrants are welcomed each year — while nearly half as many people leave the country annually.

This is all the more ironic because of the long and central role migrants have played in delivering Italy’s now-disappearing iconic foods to the table. In her study of Italy’s “slow food” movement, anthropologist Carole Counihan highlights how, by emphasizing ancient tradition and local family lines, Italy’s local food culture often disguises the way immigrants have become crucial links in the production of these delicacies — “from the Pakistani and Moroccan butchers preparing prosciutto in Parma, to Sikhs raising and milking cattle in the Val Padana, to Romanians and Albanians herding sheep in the Abruzzo and Sardinia,” she writes.

Taken together, Italy’s demographic and climate changes herald a profound transition in Italian cuisine. The real question is, will Italians stay bound to invented traditions, or will they embrace their mercurial past?

“The landscape of Italy — and its food culture — is changing forever.”

At his century-old coffee roastery in Sicily, Andrea Morettino can observe firsthand how climate change is ravaging his native land. “We’ve witnessed the alteration of the traditional seasons, with double and triple blooms,” he says. “Nature has given us an incredible signal, and this signal deserves to be listened to and valued.”

For the last 30 years, Morettino and his family have been engaged in what he calls a “huge, ambitious, experimental project” to adapt to these changes in their environment. Using heirloom seeds from the botanical gardens in Palermo, his family raised a small crop of coffee plants — the first-ever commercially grown on Italian soil.

Coffee occupies a special place in Italian culture. There’s a café on virtually every corner. But it has long been one of the country’s biggest food imports — even its diverse climate could not produce a region suited to coffee growing. That is, until elevated temperatures made it possible. Morettino got more than 60 pounds of viable beans last year. This year, he expects more than 100. “Climate change has a fundamental role in these achievements,” he tells me.

These are not quantities that will disrupt the coffee import business. But the small scale of Morettino’s production is already part of its marketing appeal. Sicilian coffee, Morettino says, like Sicilian wine or oil, is marked by the terroir: “notes of zibibbo wine, carob and jasmine.” That makes it a rare and artisanal product. And like many Italian delicacies, Morettino’s coffee is primarily intended for local consumption — part of “a short-chain vision, with lower emissions, with fewer logistics and with lower energy costs.”

Like Cacioppo and other agricultural visionaries before him, Morettino sees the potential in Sicilian coffee to become a regional delicacy, one that supports dozens of small farmers and maybe, someday, a modest export market. He recognizes that traditional crops could vanish in a generation. “But historically,” he says, “you have some fruits or some vegetables that came from other countries that could adapt to a new land, and that became, in time, a symbol of that land. Like citrus, maybe the coffee that came from tropical lands could be a new symbol of a positive future.”

Morettino is not the only person thinking this way. Throughout Sicily, farmers are taking advantage of higher temperatures to grow tropical fruit that was not previously viable, like mangos, papaya, avocados, lychee and miniature bananito bananas. For the moment, it’s not clear what role these tropical crops will play in the future of Italian cuisine. Made-in-Italy coffee is one thing. But what about the preparations and recipes that accompany other tropical plants? Will Italians embrace African flavors the way they once embraced Arab ones?

Some shifts may be inevitable. Despite the country’s hostility to immigration, the number of foreigners in Italy increased 400% between 2004 and 2012, including many from West Africa, Bangladesh and India. They don’t only labor on farms to produce food, Parasecoli says — they often prepare it for Italians too, as care workers or home chefs. Maybe, he wonders, variations on traditional dishes will gradually become accepted.

“Incorporating new ingredients and ideas today will necessitate a new appreciation for the people who brought them to Italy in the first place, and a collaborative spirit that seems hard to achieve in an age of tense politics.”

Italian food could open to a wave of culinary transformation, if Italians are receptive to it. In theory, the Italian food philosopher Alex Ravelli Sorini explains, Italian cuisine is “not like a castle … but like a field.” Despite strongly held traditions, in other words, the only constants in its culinary culture are seasonality and simplicity — a base of three or four fresh local ingredients combined in a manner straightforward enough for a home cook. “It’s not important if it changes in aspects,” he says. “The ‘tradition’ is an idea, an invention of the person. … ‘Tradition’ doesn’t exist!”

And yet, Italians can be surprisingly dogmatic about simple combinations. Despite a lengthy history of adopting foreign ingredients as their own, as the Italian gastronomer Simone Cinotto writes: “The Italian culinary model seems to resist almost completely the influence of immigrant cuisines.” 

As Immaculate Ruému, a Nigerian-born, London-trained chef who develops fusion recipes in Milan, puts it, there is “a big barrier that’s very difficult to breach” when it comes to introducing Italians to African foods and flavors, even those that can already be produced from local products. “I have to take away the fact that it’s Nigerian,” she says. She tends to explain the Nigerian heritage of dishes on a tasting menu after customers have eaten, for example. But if they see that story on a menu, she says, most will say, “We just want a classic ravioli.”

Instead, she focuses on where the ingredients come from, emphasizing familiar regional delicacies like Piedmontese Fassona beef. Perhaps someday, she could make ogbono soup with Sicilian mango seeds and Calabrian okra, and maybe then it would be easier to sell to Italians. 

But there is a deeper philosophical disconnect that makes many other cuisines unfamiliar to the Italian palate. Ruému says Italians tend to look on spices with suspicion, as if using them were a sign that the ingredients are less fresh, an attitude that closes off many immigrant cuisines.

And then there’s the attitude. There’s an entire genre of internet comedy about Italians getting angry at improvisations on their food. Incorporating new ingredients and ideas today, Ruému says, will necessitate a new appreciation for the people who brought them to Italy in the first place, and a collaborative spirit that seems hard to achieve in an age of tense politics. “The people you are trying to copy from know better now,” she tells me. “Nigerians are not going to let you come and copy-and-paste. We will hold you accountable.”

In a decade or two, you may be able to go to a Calabrian avocado festival, or find more than one place serving jollof risotto with ossobuco and plantain (one of Ruému’s recipes). “There will be changes,” Parasecoli says. “That is inevitable. But I do think there will be an effort to maintain a familiar way of life, for a sense of emotional security, if not anything else. If you see everything changing around you, it’s the end of the world — not only the drought, not only the swine fever, but I cannot find my tomatoes. Then everything is really going to hell in a handbasket.”

The Centuries-Long Quest For The Scent Of God https://www.noemamag.com/the-centuries-long-quest-for-the-scent-of-god Thu, 28 Jul 2022 12:44:23 +0000

In Padre Pio, the Catholic Church had a problem. Since the autumn of 1918, when he developed mysterious marks called stigmata that resembled the crucifixion wounds of Jesus Christ, a budding cult of personality had surrounded the Capuchin monk in the small Italian town of San Giovanni Rotondo. So fervent were his supporters that when the Church, fearing his growing prominence, attempted to replace him with another priest, fans of the monk, including armed squadristi, broke into the convent with a battering ram to keep him in the pulpit.

From the beginning, the Church had been somewhat suspicious of Pio’s stigmata claims. Church authorities over the years sent a litany of doctors and priests to investigate his claims and his character. Their conclusions were far from uniform. Some saw evidence of “a phenomenon that cannot be explained by human science alone,” others of a “self-harming psychopath.”

By 1921, an array of legendary acts surrounded the Capuchin: healings, bilocation, psychic reading. As these reports grew more frequent, the Church sent Bishop Raffaele Carlo Rossi of Volterra to conduct an official series of interrogations and find some acceptable conclusion for Pio’s supposed miracles. To the end, Rossi maintained an attitude of skepticism toward Pio’s claims. None could sway him. “I am not a … convert, an admirer of the Padre,” he wrote in his report that fall. “Certainly not; I feel complete indifference.”

Yet there was something the bishop could not deny: Pio’s smell. Wherever he went, he carried with him an intense aroma of violet. Priests and laypeople alike reported being met with waves of the pleasant odor during the Sanctus, a triumphal moment of the Catholic mass. So powerful was the scent it could cause some to faint. “If you wanted to know where Padre Pio was,” a contemporary said, “it was enough to follow the wake of the perfume.”

Pio would never see sainthood in his lifetime; indeed, after the bishop’s visit, he would be banned from saying mass in public for several years. But in 2002, after a lengthy process of review, his devotees finally got their wish: Padre Pio was canonized. And among the evidence of his saintliness was his inescapable smell — the “odor of sanctity,” the proof of saints.

Today, votive candles bearing Padre Pio’s image are sold in grocery store aisles across Italy alongside those of Jesus and the Virgin Mary. One survey found that he’s the saint most prayed to for intercession in Italy, of the 10,000 or more that number among the elect. 

“Yet there was something the bishop could not deny: Pio’s smell.”

Unusual smells have been a distinguishing mark of holiness since the earliest days of Christian worship. When the 2nd-century martyr St. Polycarp of Smyrna went to his death on the pyre, his burning flesh reportedly smelled “like frankincense or some such precious spices.” Around three centuries later, St. Simeon Stylites, a Syrian ascetic who lived 37 years on top of a pillar, would exude a heavenly scent even when his flesh was rotting and filled with worms. “Neither spices nor sweet herbs and pleasant smells, which are in the world, can be compared to the fragrance,” read one account.

Christians in late antiquity were so obsessed with the smell of martyrs that they developed a reputation for hanging around graveyards, exhuming bodies and sniffing at their remains. The bones of St. Nicholas of Myra, the 4th-century bishop and namesake of Santa Claus, became an object of pilgrimage for the sweet smell they produced. The fragrant oil (now known to have been water) that dripped off them was used as a cure-all and became an early Christian collector’s item. Distinctive 7th-century flasks that carried a similar oil from the tomb of the Egyptian St. Menas have been found as far away as Britain and modern-day Uzbekistan.

When a flask of oil wouldn’t do, medieval Christians would sometimes try to steal whole remains — but the relics’ distinctive smell would often give them away. The theft of St. Nicholas’ bones from Myra was revealed when ships three miles distant reportedly caught his trademark smell. When Venetian merchants smuggled the remains of St. Mark out of Alexandria, they were reputedly forced to mask the pungent odor with the smell of pork to fool Muslim customs officials.

By the late Middle Ages, the odor of sanctity became one of the simplest ways to prove one’s saintliness. Advocates would spread stories of a would-be saint’s heavenly scent from the moment of their death, as with the 16th-century Carmelite nun, St. Theresa of Ávila, who reportedly filled her convent with the smell of roses. Some, like Pio, cultivated this reputation while still living. St. Lydwine of the Netherlands (1380-1433) produced a smell of ginger, cloves and cinnamon strong enough to taste, despite constant vomiting and bleeding from an undiagnosed illness.

This faith in smell as a marker of saintliness may strike people today as odd, if only because it challenges much of the modern world’s inherited understanding of the nature of God. Since Plato first situated “the good” beyond the realm of forms, an influential vein of theology has asserted God’s immateriality and ineffability, radically distinct from the world we experience. It’s this impulse that drove ascetics like St. Simeon Stylites and generations of monks and nuns to reject the worldly sphere and spend a lifetime in contemplation of “higher things.” If God doesn’t have a body, then he certainly doesn’t have a smell.

But running alongside that tradition is a different historic quest to understand God’s nature, not by withdrawing from the world, but by embracing our sensual experience of it. Within this tradition, smell has long been a method of interacting with the divine and attempting to understand it. “Christianity emerged in a world where smells mattered,” the historian Susan Ashbrook Harvey writes in her seminal work on sacred smells in the ancient world, “Scenting Salvation: Ancient Christianity and the Olfactory Imagination.” “A common understanding prevailed that sensory experiences carried effective power for good and for ill.”

“By the late Middle Ages, the odor of sanctity became one of the simplest ways to prove one’s saintliness.”

The association of pleasant smells and good things is innate to human nature. But for as long as we have recorded history, people have gone out of their way to cultivate strange and exotic odors specifically for their use in worship, searching to capture a scent both pleasing to and reflective of God.

The earliest written example of this phenomenon may be in the Vedas, proto-Hindu ritual manuals and works of divine philosophy from around the 2nd millennium B.C.E., where aromatic plants are suggested as offerings and described as “prana” — breath, the spirit of life. In this period, aromatics were often burned as a sacrifice, their smoke a method of feeding the gods. 

The creation myth of the Babylonians, dating from around the same period as the Vedas, describes its hero presenting the gods with scented offerings in the wake of a catastrophic flood. “I heaped up calamus [cane], cedarwood and rig-gir [myrtle],” the narrator relates. “The gods smelt the sweet savour … [and] gathered like flies about him that offered the sacrifice.” Some 1,500 years later, the author of Genesis would repeat the same story. The quality of Noah’s own burned offerings would convince God to “never again destroy … all living creatures, as I have done.”

Many of the scents attributed to saints at their death and still used today to capture the odor of heaven have uses that date back to the beginning of recorded history. Incense harvesters in the Horn of Africa and around the Gulf of Oman have scaled the gnarled branches of the Boswellia tree for thousands of years to harvest its resinous sap, from which frankincense, a common incense, is made. Ancient Egyptians called this place the “divine land” and worshiped the goats whose beards became caked in incense while wandering among its trees.

For Egyptians and many others in the ancient world, the smell of incense was not merely an accent to worship, but a sign of (and prerequisite for) a deity’s presence. Specific scents were associated with attributes of specific gods — the eye of Re, the cloak of Dionysus, the menstrual blood of the mother goddess. Egyptians, Greeks and Romans alike doused temples and dead bodies in incense to purify them, and carry souls and prayers upwards in smoke to the gods.

Among Christians, it was once believed the use of incense in worship began with Moses, who in the Book of Exodus is given a specific recipe for exclusive use in the temple. Its smoke was supposed to be used to shield the high priest from the appearance of God on the mercy seat of the Ark of the Covenant, in the holiest place in creation. “If that recipe is used for anything else, you die,” Harvey explained to me. “You’re going to know when you’re in the temple and its grounds, because it’s not going to smell like anything else.”

“As in the graves of would-be saints, the smell of sanctity often mingled with the stench of decay and death.”

Christians initially balked at the use of smells in worship, associating it with the pagan cults that preceded their revelation and were direct competitors in the Mediterranean world. In the first centuries C.E., Harvey said, “incense-burner” became a synonym for apostate — someone who sacrificed to the Roman emperor instead of facing the glory of martyrdom.

Instead, Christians tried to interpret the biblical directives that guided Jewish observances allegorically. Origen, one of the earliest Christian theologians, said “prayers from a pure heart” would produce the “pleasing odor” so often mentioned in the Bible. He seems to have been disturbed as much by the economy of incense as by its theology. According to Pliny, around the time of Jesus, the Roman Empire was importing as many as 10,000 camel loads of frankincense a year, equivalent to about 1,700 tons. “Do not think that the omnipotent God commanded this,” Origen wrote, “and consecrated … in the Law that incense be brought from Arabia.”

But in 313 C.E., everything changed. Christianity was legalized under Emperor Constantine, and incense quickly became a fundamental part of its increasingly public worship. Already by the next century, Harvey writes, Christianity had developed a “lavishly olfactory piety,” where incense “drenched every form of Christian ceremonial.” 

Distinctive smells came to be associated with earthly sanctity after death. An odor of sanctity about a martyr’s bones “confirms [their] location between heaven and Earth,” the historian Mary Thurkill wrote. “The corporal form [is] still bound to this world, while the spirit is present in Paradise.”

Sacred odors then were notably complex. As in the graves of would-be saints, the smell of sanctity often mingled with the stench of decay and death. Ancient cities, Thurkill wrote, were characterized by “the stench of human excrement, refuse and disease, accompanied with soothing floral scents and perfumes.” Sacred smells like frankincense and myrrh were used over the centuries to demarcate sacred space — but also to disinfect and disguise putrid areas. As Wendy Wauters, a historian and author of the forthcoming book “The Smells of the Cathedral,” told me, the 16th-century Antwerp cathedral, today a pristine sanctuary, was once a place where the incense of scores of concurrent altars mixed with “an incredible stench of dead bodies,” as tombs of the faithful within were constantly exhumed for the addition of new corpses.

This gave holy smells a fundamentally paradoxical nature. In a world where breathing foul-smelling air was seen as the cause of many diseases, incense was seen as a barrier against illness, and, with its holy associations, against demonic possession. But equally, powerful scents could be used to disguise a deeper decay, or to tempt the pious with worldly delights and bodies. Even bad smells had an ambiguous quality. After all, the rotting stench of a starved ascetic’s mouth was simply more proof of his profound holiness.

“The rotting stench of a starved ascetic’s mouth was simply more proof of his profound holiness.”

It’s this ambiguity about smell, Harvey said, that gives scent its power as a theological tool. In addition to its flexible moral significance, the experience of an odor often reflects our understanding of divinity. Like God, smell can surround you from an indeterminate source, filling spaces with its invisible presence. But unlike sound, which might do the same, a smell must first be taken within to be experienced, in an act — breathing — that is both life-giving and volitional.

The sense of smell also acts on the brain differently than the other senses do. Uniquely, olfactory neurons deliver their information directly to the limbic system, the part of our brain primarily responsible for memory and emotion. Smells can prompt certain moods and improve our retention. Some odors have even been shown to affect our perception of the world around us, slowing things down or speeding them up. Common varieties of incense, like frankincense, have long been known to have anti-depressive, relaxant and memory-enhancing effects.

This significance was understood well in the ancient world, perhaps better than today. In her analysis of the Bible, the Israeli scholar Yael Avrahami suggests that in the ancient Hebrew worldview, perception and cognition were a single act, something that is particularly true for our sense of smell. The ancient Greeks, Harvey said, similarly understood the way smell gave us a direct, unmediated and often ineffable experience of the world. “It’s so interesting, when you read the ancient science, they got smell right,” Harvey said. “Modern scientific work on olfaction still continues to cite Theophrastus.”

The subtle way smell affects memory and emotion is part of its power to construct a sense of religious awe. Joshua Cockayne, an Anglican priest in Leeds and a lecturer in divinity, suggested the use of incense during religious ceremonies helps build “spiritual memories” — experiences of God and worship that are “potentially more deeply rooted and emotionally attached than many other sensory or verbal engagements.”

Unlike some other religious experiences, smell is a communal one. “If you go to a church which uses a lot of incense, it’s undeniably a shared experience,” Cockayne told me. “It’s not about me and God — it’s part of the environment in the same way the other congregants are.”

In a moment of religious communion, congregants not only recall their own personal memories, but connect with the collective memory of the community. Wauters called medieval cathedrals a kind of “memory palace,” where the testaments and tombs of past generations are tied up with the relentless activity of the present, and smell provides a connection across the centuries. As the writer Suzanne Evans succinctly put it, “smell has the power to make an accordion of time.”

“There is an opportunity today to rethink and broaden our experience of the smell of God.”

After the Reformation, many Christian churches turned against the use of incense, flinging accusations of sensuousness, worldliness and magical thinking at confessors of rival sects. Smell became another weapon in the rhetorical arsenal: “The stench emanated by the adherents of other confessions was employed as a topos by both Catholics and Protestants,” Wauters wrote. Key Reformation figures like Martin Luther and Erasmus eventually turned against the senses, associating holy smells and visible signs with Jews, Muslims and papists. The ceremonies of blessing and benediction that made the heaviest use of incense were gradually banned; in the words of the historian Jacob Baum, the reformers were effectively “desacralizing the sense of smell.”

Wauters, referencing Marcel Proust, said this left the medieval cathedrals of Protestant Europe “unintelligible monuments of a forgotten belief.” Their interiors painted white, cleared of their many altars, and freed of the crushing stench of humanity, “the cathedral [became] this empty place,” she said, “where you have this museum-like feel.” 

By the turn of the 20th century, those who believed in the supernatural power of sacred smells were confronted with rival explanations from budding new scientific fields. The scions of the new worldview would poke and prod at the old claims, as they would Padre Pio, to explain the once inexplicable in the harsh new light of science. Writing in the Revue de Paris in 1907, the French psychologist Georges Dumas would cross-reference the accounts of St. Theresa’s odor of sanctity with the smell given off by diabetics, and attribute her heavenly scent to diabetic ketoacidosis, even reducing it to a formula — C6H12O2, which smelled something like pineapples.

“We speak of retarded nutrition … of perspiration, of coma; they speak of the victory of eternal life over corruption and death,” Dumas wrote. “But it is the inevitable fate of all scientific explanations to appear dull and ugly beside the poetic imaginations of hagiography.”

Indeed, some scholars believe that the English language suffered from the “cultural repression and denigration of smell” during the Enlightenment, as improvements in hygiene and objections to “superstition” transformed the lived environment into one less sensorially confrontational. Though the theory is controversial, Asifa Majid, an Oxford cognitive scientist, found that today, the English language is relatively weak when it comes to words for smells. “There are few terms for odors, odor talk is infrequent, and naming odors is difficult,” she wrote. Smells have never been so ineffable.

But for Cockayne, it’s not all bad. There is an opportunity today to rethink and broaden our experience of the smell of God, he said. “Could the smell of freshly brewed coffee count as a religious experience?” he wondered. “If we are happy to think that experiencing a beautiful sunset or a piece of sacred music can be an experience of the divine, then there is no reason to exclude olfactory experiences from having such significance too.”

A waft of rich coffee. A whiff of incense. A sweet stench from a saintly corpse. The search for the odor of sanctity, the smell of God, goes ever on.
