Is European AI A Lost Cause? Not Necessarily.
By Benjamin Bratton | Noema Magazine | September 30, 2025

Editor’s Note: Noema is committed to hosting meaningful intellectual debate. This piece is in conversation with another, written by Italian digital policy advisor Francesca Bria. Read it here: “Reclaiming Europe’s Digital Sovereignty.”

Europe has had a conflicted relationship with modern technology. It has both innovated many of the computing technologies we take for granted — from Alan Turing’s conceptual breakthroughs to the World Wide Web (WWW) — and has also fostered some of the most skeptical and elaborate critiques of technology’s purported effects. While Europe claims to want to be a bigger player in global tech and meet the challenge of AI, its most prominent critics are quick to second-guess any practical step toward that goal on political, ethical, ecological and/or philosophical grounds.

Some policymakers envision a continental megaproject to construct a fully integrated European tech stack. But this approach is inspired by an earlier era of computational infrastructure — one based not on AI but on traditional software apps — and moreover, the political and “ethical” qualifications they also impose upon it are so onerous that the most likely outcome is further stagnation.

Lately, however, something has shifted. The wake-up calls are becoming louder and more frequent, and they have been arriving from sometimes unlikely sources. Emmanuel Macron, J.D. Vance and Berlin artist collectives may seem like unlikely allies, but they all agree that Europe’s “regulate first, build later (maybe)” approach to AI is not working. The propensity for Europe to operate this way has only resulted in greater dependency and frustration, rather than the hoped-for technological sovereignty. While Trump’s erratic approach to U.S.-European relations may be the proximate cause for the strategic shift, it is long overdue. But unless that groundswell is able to gain permanent traction in the realm of ideas, this momentum will dissipate. Given the considerable effort by tech critics across the political spectrum to prevent this progress, securing it is easier said than done.

Internet meme response to the European Union AI Act.

A “Eurostack” can be defined in different ways, some visionary and some reactionary. Italian digital policy advisor Francesca Bria and others define it as a multilayer software and hardware stack, in a plan that draws inspiration from my 2015 book, “The Stack: On Software and Sovereignty.” Specifically, it builds on a diagrammatic vision of critical infrastructure, with a stack that includes chips, networks, the variety of everyday connected items known as the internet of things, the cloud, software and a final tacked-on layer called data and artificial intelligence. This is, however, a variation of the stack of the present, not the future. My book’s “planetary computation stack diagram” will soon be republished in a 10th anniversary edition. A decade is a lifetime in the evolution of computation.

Bria’s vision is not future-facing. The European stack she proposes as a plan for the next decade should have been built 15 years ago. Why wasn’t it? Europe choked its own creative engineering pipeline with regulation and paralysis by consensus. The precautionary delay was successfully narrated by a Critique Industry that monopolized both academia and public discourse. Oxygen and resources were consumed by endless stakeholder working groups, debates about omnibus legislation and symposia about resistance — all incentivizing European talent to flee and American and Chinese platforms to fill the gaps. Not much has changed, which is why the momentum of the moment is both tenuous and precious.

If contemporary geopolitics is leading us to think of stack infrastructure more in terms of hemispheres than nations, then unsurprisingly, the hemispherical stack of the future is built around AI — and not through the separation of AI into some “final layer” as Bria has it. Just as classical computing is different from neural network-based computation, the socio-technical systems to be built are distinct as well. This is not a radical or contentious argument, but it’s one that many prominent intellectuals fail to fully grasp. As such, it’s worth reconstructing how Europe got to where it is. The conclusion should not be that it’s too late for Europe to be a major player in the tech world, especially where AI is concerned, but rather that it will need to commit to the coming opportunities as they arise and to their implied costs.

As we’ll see, that’s easier said than done. 

Merchants Of Torpor In Venice

Recently, I had the pain/pleasure of joining a panel at a conference called “Archipelago of Possible Futures” at the Venice Architecture Biennale organized by Bria and cultural researcher José Luis de Vicente, to discuss the prospects of a new Eurostack. The other panelists were two of the most well-known technology and AI skeptics, Evgeny Morozov and Kate Crawford, as well as the architect Marina Otero Verzier. 

The conversation was … lively.

It was also confusing. By the end, I think I was the only one arguing that Europe should build an AI Stack (or something even better) rather than insisting that, in essence, AI is superproblematic and thus Europe should resist doing this superproblematic thing. There are many reasons for caution, including questions of power, capitalism, America, water, energy, privacy, democracy, labor, gender, race, class, tradition, indigeneity, copyright, as well as the general weirdness of machine intelligence.

“Emmanuel Macron, J.D. Vance and Berlin artist collectives may seem like unlikely allies, but they all agree that Europe’s “regulate first, build later (maybe)” approach to AI is not working.”

The maneuverable space that would satisfy all these concerns is visible only with a microscope. Remember, this was ostensibly a panel on how to actually build the Eurostack. The other panelists likely see it differently, but to me, it’s not possible to rhetorically eliminate all but a few dubious and unlikely paths to an AI Eurostack and still claim to be its advocate. That self-deception is an essential clue about Europe’s stack quandary.

During the panel, I made my case by first asking why Europe doesn’t already have the Eurostack it wants: recounting the disappointing recent history of techno-reactionary instincts (such as the anti-nuclear power politics discussed below), the hollowness of the now-orthodox critical-academic stances on AI and the problems this approach poses for Europe’s plans, before making a truncated plea for reason and action. In short, Europe should build AI — focusing on AI diffusion rather than solely new infrastructure — and stop auto-capitulating to elite bullies and fearful reflexes. The panel’s responses were animated, predictable and mutually contradictory.

Morozov is the author of many serious books and articles published across Europe’s left-leaning media, and therefore one of the most widely-read pundits on the dangers of American internet technologies and the need for strong “digital sovereignty” for Europe (and others). He is also the instigator of several interesting projects, including an in-depth podcast series on British cybernetician Stafford Beer’s failed Cybersyn project that was meant to govern Salvador Allende’s socialist Chile through a vast information economic matrix linking into a futuristic control center. Cybersyn was never built as it was envisioned, in reality, but it is up and running in the dreams of intellectuals in some purified alternate reality where cybersocialism runs the world. Morozov popularized the term “technological solutionism,” which, due to inevitable semantic decay, is now a term used by political solutionists to denigrate any attempt to physically transform the infrastructures of global society that demotes their own influence.

As the panel wore on and voices grew louder and more self-revealing, it became clear that Morozov does indeed want robust European AI but only on narrow, rarefied terms that resemble something like Cybersyn, and which would, in principle, sideline large private platforms, especially American ones, with whom the Belarusian émigré is still fighting his own personal Cold War.

Crawford is an Australian researcher and author, most notably of “Atlas of AI,” a book that superimposes the American culture war curriculum, circa 2020, onto the amorphous specter of global AI. She is adept with one-liners like “AI is neither artificial nor intelligent,” a statement that has been so oft-quoted in her interviews that no one stops to ask what it means. So, what does it mean? Crawford’s explanation is that “Artificial intelligence is both embodied and material, made from natural resources, fuel, human labor, infrastructures, logistics, histories, and classifications.” This is, however, exactly what the term “artificial” means. She says that AI is not actually intelligent because it works merely through a bottom-up anticipatory prediction of next instances based on global pattern recognition in the service of means-agnostic goals. Again, this is a central aspect of how “intelligence” (from humans to parrots) has been understood by everyone from William James to the predictive processing paradigm in contemporary neuroscience.

Along with visual artist Vladan Joler, Crawford is co-creator of “Calculating Empires,” a winner of the Silver Lion award at the Biennale and an ongoing diagrammatic exercise in correlation and causality confusion that purports to uncover the dark truth of what makes computational technologies possible. The sprawling black wallpaper looks like it is saying something profound, but upon more serious inspection, one discerns that it simply draws arrows from your phone to a copper mine and from a data center to the police. The work resembles information visualization but makes no attempt at analytical sense beyond a quick emotional doomscroll. It is less a true diagram than stylized heraldry built of diagramesque signifiers and activist tropes freely borrowed from others’ work. Crawford’s concluding remark on this panel was that Europe has a clear choice when it comes to AI: to passively acquiesce to American techbro hegemony or to actively refuse AI. As she put it bluntly, accept or fight!

For her part, Otero, a Spanish architect teaching at Harvard and Columbia, shared her successes in helping to mobilize resistance to the construction of new data centers in Chile. When asked for her summary position, she initially, somewhat in jest, summed it up with one word: “communism.”

“If Europe builds the Eurostack that it wants — and that it says it needs — it will be because it creates space for a different culture, discourse and a theory of open and effective technological evolution.”

So there you have it. On a panel on how Europe might build its own AI Stack, we heard highlights from the last decade of an intellectual orthodoxy that has contributed upstream to a politics through which Europe talks itself out of its own future. How to build the Eurostack? Their answers are: hold out for the eventual return of an idealized state socialism, declare that AI is racist statistical sorcery, “resist,” stop the construction of data centers and, of course, “communism.” By the end, I think the panel did a very good job exploring exactly why Europe doesn’t have the Eurostack that it wants, just not in the way the organizers intended.

The Actual (Sort-Of) Existing Eurostack

What to make of this? If Europe builds the Eurostack that it wants — and that it says it needs — it will be because it creates space for a different culture, discourse and a theory of open and effective technological evolution that is neither a copy of American or Chinese approaches nor rooted in its own moribund traditions of guilt, skepticism and institutionalized critique. The “Eurostack” that results may not even be the reflection of Europe as it is, or as it imagines itself, but may rather become a means for renewal.

The answer to the oft-posed question “Why doesn’t Europe already have a Eurostack?” is that it does — sort of.  Successful European AI companies do exist. For example, Mistral, based in Paris, is a solid player in the mid-size open model space, but it is not entirely European (and that’s OK!) as many of its key funders are West Coast venture capitalists and companies. Europe is also an immensely important contributor to innovation, implementation and diffusion of some of the most significant platform-scale open source software projects: Linux, Python, ARM, Blender, Raspberry Pi, KDE and Gnome, and many more. This, however, is not a “stack” but rather, as the Berggruen Institute’s Nils Gilman puts it, “a messy pile.” Crucially, the success of these open-source projects is not because they fortify European sovereignty, but rather because they are intrinsically anti-sovereign technologies, at least as far as states are concerned. They work for anyone, anywhere, for any purpose; this is their strength. This goes against the autarchist tendencies of much of European technology discourse and symbolizes the internal contradictions of the “sovereignty” discourse. Is it sovereignty for the user (anywhere, anytime access), sovereignty for the citizen (their data cozy inside the Eurozone and its passport system), or sovereignty for the polis (the right of the state to set internal policies)?

Europe’s interests and impulses are conflicted. It wants a greater say over how planetary technologies and European culture intermingle. For some, that means expunging Silicon Valley from its midst, but Europe also wants the world to use its software and adopt its values. For those who make up the latter position, the vision of the EU as a “regulatory superpower” setting the honorable rules that we all must adhere to is a tempting substitute for a sufficient, defensible geopolitical position. “Sovereignty for me, Eurovalues for thee.” For others, the hope is for Europe to get on with it and build its own real infrastructural capacity. Heckling from the front is a commentariat fixated on the social ills of AI, social media, data centers and big technology in general. For them, mobilizing endless proclamations as to why a Eurostack is preferable in theory will somehow facilitate building the very thing itself, or for others, prevent such an atrocity altogether.

Put plainly, for Europe to succeed in realizing its most impactful contributions to the planetary computational stack, it must stop talking itself out of advancement and instead cultivate a new philosophy of computation that invents the concepts needed to compose the world, not just deconstruct it or preserve it like a relic. The Eurostack-to-come cannot just try to catch up with 2025, and it cannot be manifested simply by harm-reducing legislation or by “having the conversation” about new energy sources, new chip architectures, new algorithms, new modes of human-AI interaction design, new user/platform relations, but rather by harnessing the depth of European talent to make them. Many Europeans get this and are eager to build. It’s time for European gatekeepers to get out of their own way.

I would love to see Europe build its own stack technologies — amazing new things that are impossible to conceive of or realize here in California. I am eager to use them. But as the Venice Biennale panel demonstrated, many esteemed intellectuals offer only reasons why any path to doing so would be problematic, unethical, dangerous and/or require projects to first pass ideological filters so fine-grained that they are disqualified before any progress is possible.

“Europe must stop talking itself out of advancement and instead cultivate a new philosophy of computation that invents the concepts needed to compose the world, not just deconstruct it or preserve it like a relic.”

The end result is that today the EU has AI regulation but not much AI to regulate, leaving European nations more dependent on U.S. and Chinese platforms.

This is what backfiring looks like.

How It Started

Speaking of backfiring, this isn’t the first time that Europe has found itself deeply conflicted over the development of a powerful new technology that holds both great promise and peril. Nor is it the first time that such a conflict has been motivated or distorted by prior cultural and political commitments. Europe’s cultural preservationist instincts — becoming only more acute as populations age and demographics shift — push it toward caution, and so it ultimately loses out on the benefits of the new technology while also suffering the losses brought by the chosen alternative. Unfortunately, Europe may be making many of the same errors when it comes to AI. To get to the root of the matter and to understand this as a more general disposition, we must revisit the early 1970s.

Amid a Cold War-divided Germany and cultural-political unrest across the continent, nuclear power plants were an emerging technology that promised to bring carbon emissions-free electricity to hundreds of millions of people, but not without controversy. Health and safety concerns were paramount, if not always objectively considered.  Thematic associations of nuclear power with nuclear weapons, military-industrial power and with The Establishment all contributed to a psychological calculus that made it, for some, a symbol of all that must be resisted. Nowhere was this more true than in Germany, and for this they have paid a heavy price.

Remember the “Atomkraft Nein Danke” sticker? This smiling yellow message, which originated in Denmark and spread globally, was emblematic of a movement that helped define an era. West German anti-nuclear and anti-technocratic politics coalesced in the 1970s with protests against the construction of a power plant in Wyhl that drew some 30,000 people. Protestors successfully blocked the plant and from there gained momentum. In 1979, the focus turned to the United States, where an odd mix of fiction, entertainment, infrastructural mishap and groupthink defined the cultural vocabulary of post-Watergate nuclear energy politics. March of that year saw the theatrical release of “The China Syndrome”, a sensationalistic thriller about a nuclear plant meltdown and deceitful cover-ups that stars Jane Fonda as the heroic activist news reporter who sheds light on the dangers. As if right on cue, 12 days after the film’s release, the nuclear plant Three Mile Island in Pennsylvania suffered a partial meltdown in one of its two reactors. Public communications around the incident were catastrophically bad, and a global panic ensued. Nuclear energy infrastructure was now seen with even more suspicion.

In the United States, folk-pop singer Jackson Browne co-led the opposition against the use of nuclear energy reactors, organizing “No Nukes” rock mega-concerts — solidifying anti-nuclear power politics and post-counterculture yuppie-dom as interwoven visions. In Bonn, Germany, 120,000 marchers responded by demanding that reactors be shut down, and many were. The spectre of future mass deaths was a paramount concern. Surely, Pennsylvania was about to suffer a horrifying wave of cancers over the coming years. In fact, the total sum of excess cancers was ultimately tallied to be zero. (The total number at Fukushima, other than a single worker inside the plant? Also zero.) This fact did not matter much for the public image of nuclear power, then or now.

The terminology used by those opposing nuclear energy is familiar to our ears today: “technocratic,” “centralized,” “rooted and implicated in the military,” “promethean madness,” “existential risk,” “extractive,” “techno-fascist,” “toxic harms,” “silent killer,” “waste,” “technofix delusion,” “fantasy,” etc. Across visual culture, the white clouds billowing from large concrete reactors became an icon of industrial “pollution” — even though water vapor does not pollute the air. The cultural lines had been drawn.

How It’s Going

One can discern the impacts of Germany shutting down its nuclear plants for environmental, health and safety reasons by comparing it with France, which gets roughly 70% of its electricity from nuclear power. The results are stark and, spoiler alert, bad for Germany. Germany’s average CO2 emissions per kWh today are seven times higher than France’s, and its CO2 emissions per person are 80% higher. Germany turned to solar and wind (great) and oil and gas (not great) for electricity, a transformation that has had extremely negative health effects. It now gets roughly 25% of its electricity from coal (France is close to zero). Because of this disparity, Germany tolerates roughly 5,500 excess deaths from coal-related illness annually, while France’s number is closer to 1,000. That’s 450% higher.

“The same terms used to vilify nuclear power — ‘techno-fascist,’ ‘extractive,’ ‘existential risk,’ ‘Promethean madness’ and ‘fantasy’— are now regularly voiced by today’s Critique Industry to describe AI.”

One might surmise that this has nevertheless prevented the deaths from large nuclear power accidents such as Three Mile Island, Fukushima and Chernobyl. Once more, the total population deaths officially attributed to radiation-induced cancers at the first two combined add up to zero. The latter was much more serious. In 2005, the World Health Organization estimated that around 4,000 deaths were attributable to Chernobyl. Still, those deaths are less than the total number of excess coal deaths per year in Germany attributable to having shut down its nuclear power capacity.

Think about it: In order to prevent the deaths that a once-in-a-generation nuclear plant accident may cause, Germany’s Green Party-led policies inflict the equivalent of 1.25 Chernobyls per year on the population.

A comparison of the relative performance on several ecological metrics of the French nuclear baseload energy grid and German baseload energy grid that has eliminated nuclear power.

The consequences for Germany have also been political. Some of the indirect accomplishments of the anti-nuclear power, anti-megatechnology, anti-“promethean techno-fascist fantasy” movement were, for Germany, greater greenhouse gases, more deaths and more nationalist populism. Shutting down nuclear plants led to greater dependency on imported Russian oil and gas to power the economy, which in turn allowed Russia to use its ability to turn its pipelines on and off as a tool to influence Germany’s politics and charge more for energy. This has contributed to economic downturn and stagnation, which in turn has decisively helped the rise of the far-right nationalist party, Alternative für Deutschland.

What happened? A well-meaning popular movement, backed by intellectuals and influencers, motivated by technology-skeptic populism and environmental concerns, successfully arrested the development and deployment of a transnational megatechnology and ended up causing even larger direct and indirect harms.

This, too, is what backfiring looks like.

AI Nein Danke?

Two images showing different generations of a common German technopolitical subculture, each mobilized around the popular refusal of complex large-scale infrastructure, both with self-defeating consequences.

The takeaway from what happened with nuclear power would be to learn from this history and, most importantly, not do it again. Do not ban, throttle or demonize a new general-purpose technology with tremendous potential just because it also implies risk. The precautionary principle can be literally fatal. And yet that is precisely what is happening around the newest emerging technological battleground: artificial intelligence.

The same terms used to vilify nuclear power — “techno-fascist,” “extractive,” “existential risk,” “Promethean madness” and “fantasy”— are now regularly voiced by today’s Critique Industry to describe AI. To map the territory, I collected some of the greatest hits from contemporary academics in the humanities.

“AI is …” A representative but non-exhaustive selection of provocative characterizations of AI from contemporary Humanities books, articles and lectures. A representative index of the authors from whose work these ideas are sampled includes: Matteo Pasquinelli, Kate Crawford, Emily Bender, Alex Hanna, Dan McQuillan, Yarden Katz, Ruha Benjamin, James Poulos, Ted Chiang, Vladan Joler, Ramon Amaro, Shannon Vallor, Safiya Noble, Meredith Whittaker, Evgeny Morozov, Timnit Gebru, Byung-Chul Han, Yvonne Hofstetter, Manfred Spitzer, Gert Scobel, Nicholas Carr, Geert Lovink, Éric Sadin, James Bridle, Helen Margetts, Carole Cadwalladr, Adam Harvey, Joy Buolamwini, Wendy Hui Kyong Chun, Yuk Hui, and of course Adam Curtis.

Looking this over, my first thought is “Ask an academic what they think is wrong with the world and I can tell you what they think of AI.” Quite clearly, for many of them, AI seems to be not just a technology but what psychoanalysis would call a fetishized bad object. They are both repulsed and fascinated by AI; they reject it, yet can’t stop thinking about it. They don’t know how it works, but can’t stop talking about it. These statements above are less acts of analysis than they are verbalized nightmares of a 20th-century Humanism gasping for air and clawing for a bit more life.

In many cases, these eschatologies, often issued from Media Studies departments and Op-Ed pages, are not only non-falsifiable claims, but also aren’t even meant to be debated. This is Vibe Theory: an expression of elite anxiety masquerading as a politics of resistance. It is also exemplary of what the tragic ur-European philosopher Walter Benjamin once called the “aestheticization of politics,” which in this case is the result of the odd incentives that ensue when the art world makes the invitations, pays the speaker fees and publishes the essays about how culture will save us. The aesthetic power of the critical gesture is confused with reality.

More importantly, the cumulative effect of this academic consensus is not ethical rigor but general paralysis. Fear-mongering is not the way to convince people to find agency in emerging machine intelligence and incentivize creating and building. It is how a few incumbent cultural entrepreneurs try to fill the moat around their own increasingly tenuous status within institutions struggling to keep up with profound changes.

Your Personal ‘Oh Wow’ Moment

European AI should not just focus on building copycat models, but on society-scale AI diffusion, such that everyone gets to use AI for what is most interesting and important to them. But getting there is an uphill battle because, unfortunately, those who should be promoting this sort of diffusion are impeding it.

“European AI should not just focus on building copycat models, but on society-scale AI diffusion, such that everyone gets to use AI for what is most interesting and important to them.”

As a member of a faculty committee at the University of California, San Diego, I recently experienced some of the downstream effects of AI abolitionist ideas, but also how quickly the story changes when people actually use AI to do something meaningful for them. The committee has been charged with writing a statement of principles for how AI should be used in research and teaching. I was shocked by some of my colleagues’ thoughts on the matter.

Here is a sampling of (anonymous) comments I wrote down from my conversations with my university faculty: “I don’t want my students using a plagiarism machine in my class”; “The university should ban this stuff while we still can”; “It has been proven that AI is fundamentally racist”; “The techbros stole other people’s art to make a giant database of images”; “You know who likes AI? The IDF and Elon Musk, that’s who.”

Remember, these are the people responsible for determining how a top university puts these technologies to use. However, at some point in our conversations, the tide shifted. It began when a 70-year-old history professor spoke up, “I don’t know, last night I spent four hours with it talking about Lucretius. … It came up with things I had never thought of. … It was the most fun I’ve had in a long time.”

This is not atypical. Over the past several months, I have noticed a change. More and more people have told me — confiding in me as if admitting to something naughty — of a singular, interesting engagement with AI that really delighted them. They saw something they could do with AI that is important for them. They figured out how they personally could make something with AI that they could not before. After that, their opinion changed. I saw this on the faculty committee, too. A once very skeptical theater professor told me how she was now using Anthropic’s Claude to generate written score notations based on ideas for new dance performances. She was thrilled.

As of August 2025, OpenAI claimed 800 million unique users of ChatGPT per week. It’s hard to gaslight 800 million people by telling them this stuff is bogus and bad. Yet the term I have heard from some esteemed critics when presented with such moments of agency-finding is “seductive.” “Yes, of course, the technology is seductive.” They dismiss what you feel — wonder, curiosity and awe — and say that it is actually merely desire, and “as we all know, desire is deception.” Ultimately, this awe-shaming is paralyzing.

So What Now?

There are many reasonable ways to question my provisional conclusions, some more productive than others. Robust debate is important, but sometimes it seems as if “having the conversation” is all that Europe truly wants to do. It is excellent at this, and the necessarily global deliberation on the future of planetary computation often comes to Europe to stage itself, and for this, we should be grateful. For Europe, however, the conversation must eventually rotate into building; otherwise, it degrades into increasingly self-fortifying critique for its own sake.

Some may argue that if “critique” is exactly what is most under attack by the rise of populist nationalism, then isn’t critique what is most needed, now more than ever? Won’t the autonomy of culture lead us away from this malaise? I am doubtful. If anything, the present mode of populism and nationalism overtaking much of the world can be seen as what happens when a culture’s preferred narrativization of reality overtakes any interest in brave rationality and the sober appreciation of collective intelligence as a technologically mediated accomplishment. If populist nationalism is the “cultural determinist” view of reality in a grotesquely exaggerated mode, it is unclear why doubling down on culture’s autonomy is the obvious remedy.

Arguably, the self-defeating anti-nuclear politics of past decades were essentially a cultural commitment more than a policy position. A pre-existing cleavage between generations, classes, counter-elites, and ensuing tribal psychologies was imprinted onto the prospect of generating electricity from steam power driven by nuclear fission. In parallel, the political right’s dislike of solar power, which it views as a hippie-granola, fake solution, is based not on any real analysis of photovoltaic panel supply chains and baseload energy modeling, but rather on the fact that public infrastructure is now culturally overcoded. Maybe “culture” is another culprit, not a panacea?

“Europe has the right to put its AI under ‘democratic control’ and supervised ‘consent’ if it wants to, but it does not have a right to be insulated from the consequences of doing so.”

When extreme voices declare that Europe is “colonized” by foreign technology and must cast out the invasive species from Silicon Valley, their energy doesn’t exactly contradict the ambient xenophobia of our moment. As a placebo policy, import substitution tariffs do not work (someone please tell Trump). Autarchy is the infrastructural theory of populists, including but not exclusively autocrats. At its worst, the EU stack discourse lapses into dreams of absolute techno-interiority: “European data” about Europeans running on European apps on European hardware, perhaps even a European-only phone made solely from minerals mined west of Bucharest and east of Lisbon that runs on a new autonomous European-only cell standard and powered by a Europe-only wall plug for which by law no adapters exist. Blood and Soil and Data!

Europe surely can and should regulate the emergence of AI according to its “values,” but it must also be aware that you can’t always get what you want. Europe is free to attempt to legislate its preferred technologies into existence, but that doesn’t mean that the planetary evolution of these technologies will cooperate. If, as some economists estimate, EU AI regulations will result in a 20% drop in AI investment over the next four years, that may or may not be a good premium on digital sovereignty. It is up to Europe to decide. That is, Europe may have strong AI regulation, but this may actually prevent the AI it wants from being realized at all (again making it more reliant on American and Chinese platforms). Europe has the right to put its AI under “democratic control” and supervised “consent” if it wants to, but it does not have a right to be insulated from the consequences of doing so.

What We All Want?

In the end, it may be that all of the Venice Biennale panelists’ hopes (mine included) for what a global society mediated by strongly diffused AI looks like are more similar than different. As I put it to the panel, we might define this roughly as “a transnational socio-technological utility that is produced and served by multiple large and small organizations that provides inexpensive, reliable, always-on general synthetic intelligence and related services to an entire population who build cities, companies and cultures with this resource in an open and undirected manner, raising the quality of life and standard of living in ways unplanned and not limited by the providing organizations.” Diverse functional general intelligences on tap may have social implications similar to those that electricity on tap (nuclear or not) had for previous generations. More than a “tool” in and of itself, AI makes new classes of technologies possible. We should want more value to accrue through the use of a model than by the creation of the model itself. Broad riches built upon narrow riches.

So then why all the panic and misinformation? Think of it this way. What if I told you there was a hypothetical machine that integrated the collective professional information, processes and agency that have been made artificially scarce, concentrated not just in the “Global North” but in a dozen cities and two dozen universities in the Global North, and which now makes available functional, purposive and simple access to all this through intuitive interfaces, in all languages at once, for a monthly subscription rate similar to Netflix, or even for free? This machine is less a channel for multipoint information access than a generative platform for generative agency as open as collective intelligence itself.

Would you not be suspicious of gatekeepers who demand the arrested evolution of this machine’s global diffusion because, in their words, it is not worth the electricity necessary to power it?  Because it makes people dependent on centralized infrastructure or because it was developed by capitalism (and lots of publicly funded research)? Because it will transform the educational and political institutions on which democratic societies have depended, and may especially destabilize the social positions of those who have piloted those institutions? Yes, you would be right to be suspicious of them and their deeper motives, as well as the motives of their funders. You would be right to be suspicious of ideological entrepreneurs from across the political spectrum who demand to personally “audit” the models, who demand legal “compliance” to be constantly certified by political appointees, who seek to bend the representations of reality that models produce, and who seek to use them to further medievalist visions and totalitarian impulses. I hope that you are indeed suspicious of them today.

“This is a net gain for those outside of Bubbleworld but a net loss for the Ivy League (Sorry, not sorry).”

The biggest potential beneficiaries of this resource are those whose own intelligence and contributions are at present destructively suppressed by the artificial concentration of agency. They may be mostly from the same “Global South” that the gatekeepers use as a rhetorical human shield to plead their case for their own luxury belief system — affordable only to those for whom access is all they have ever known. Everywhere, the biggest benefits of on-tap functional general intelligence may accrue to individuals working outside those zones of artificially scarce agency. Large corporations already have access to a diverse range of expert agents; now so does everyone else – in principle. This is a net gain for those outside of Bubbleworld but a net loss for the Ivy League (Sorry, not sorry).

Perhaps then my goals are not the same as those of the other panelists, after all. Perhaps there is a disagreement not only about means but also about ends. Perhaps their Lysenkoist reflexes are non-negotiable, unwilling to grant that large capitalist platforms could innovate something fundamentally important, because of or in spite of their being large capitalist platforms. Perhaps the tight embrace of the conclusion that AI is intrinsically racist, sexist, colonialist and extractivist (or, for other ideologues, intrinsically woke, globalist, elitist, unnatural) is so devout that they must dismiss any evidence to the contrary, convincing themselves and their constituents not to be seduced by the reality they see before them.

Convincing people that AI is both about to destroy their culture and is also fake does not result in more agency, more universal mediation of collective intelligence, but less. The result is paralysis, lost opportunities, wasted talent and greater European dependency on American and Chinese platforms, as well as on the entrenchment of entrepreneurial tech critics defending their turf and drawing boundaries between acceptable and unacceptable alternatives.

This is what it looks like to backfire in real time.


The Five Stages Of AI Grief
By Benjamin Bratton | Noema Magazine | June 20, 2024

At an OpenAI retreat not long ago, Ilya Sutskever, until recently the company’s chief scientist, commissioned a local artist to build a wooden effigy representing “unaligned” AI. He then set it on fire to symbolize “OpenAI’s commitment to its founding principles.” This curious ceremony was perhaps meant to preemptively cleanse the company’s work from the specter of artificial intelligence that is not directly expressive of “human values.” Just a few months later, the topic became an existential crisis for the company and its board when CEO Sam Altman was betrayed by one of his disciples, crucified and then resurrected three days later. Was this “alignment” with “human values”? If not, what was going on?

At the end of last year, Fei-Fei Li, the director of the Stanford Human-Centered AI Institute, published “The Worlds I See,” a book the Financial Times called “a powerful plea for keeping humanity at the center of our latest technological transformation.” To her credit, she did not ritualistically immolate any symbols of non-anthropocentric technologies, but taken together with Sutskever’s odd ritual, these two events are notable milestones in the wider human reaction to a technology that is upsetting to our self-image.

“Alignment” and “human-centered AI” are just words representing our hopes and fears related to the sense that AI is out of control — but also to the idea that complex technologies were never under human control to begin with. For reasons more political than perceptive, some insist that “AI” is not even “real,” that it is just math or just an ideological construction of capitalism turning itself into a naturalized fact. Some critics are clearly very angry at the all-too-real prospects of pervasive machine intelligence. Others recognize the reality of AI but are convinced it is something that can be controlled by legislative sessions, policy papers and community workshops. This does not ameliorate the depression felt by still others, who foresee existential catastrophe.

All these reactions may confuse those who see the evolution of machine intelligence, and the artificialization of intelligence itself, as an overdetermined consequence of deeper developments. What to make of these responses?

Sigmund Freud used the term “Copernican” to describe modern decenterings of the human from a place of intuitive privilege. After Nicolaus Copernicus and Charles Darwin, he nominated psychoanalysis as the third such revolution. He also characterized the response to such decenterings as “traumas.”

Trauma brings grief. This is normal. In her 1969 book, “On Death and Dying,” the Swiss psychiatrist Elisabeth Kübler-Ross identified the “five stages of grief”: denial, anger, bargaining, depression and acceptance. Perhaps Copernican Traumas are no different.

We should add to Freud’s list. Neuroscience has demystified the mind, pushing dualism into increasingly exotic corners. Biotechnology turns artificial material into life. These insights don’t change the fundamental realities of the natural world — they reveal it to be something very different than what our intuitions and cultural cosmologies previously taught us. That revealing is the crux of the trauma. All the stages of grief are in response to the slow and then sudden fragmentation of previously foundational cultural beliefs. Like the death of a loved one, the death of a belief is profoundly painful.

What is today called “artificial intelligence” should be counted as a Copernican Trauma in the making. It reveals that intelligence, cognition, even mind (definitions of these historical terms are clearly up for debate) are not what they seem to be, not what they feel like, and not unique to the human condition. Obviously, the creative and technological sapience necessary to artificialize intelligence is a human accomplishment, but now, that sapience is remaking itself. Since the paleolithic cognitive revolution, human intelligence has artificialized many things — shelter, heat, food, energy, images, sounds, even life itself — but now, that intelligence itself is artificializable.

“What is today called ‘artificial intelligence’ reveals that intelligence, cognition and even mind are not what they seem to be, not what they feel like and not unique to the human condition.”

Kübler-Ross’s stages of grief provide a useful typology of the Western theory of AI: AI Denial, AI Anger, AI Bargaining, AI Depression and AI Acceptance. These genres of “grief” derive from the real and imagined implications of AI for institutional politics, the division of economic labor and many philosophical and religious traditions. They are variously profound, pathetic and predictable. They reflect responses that feel right for different people, that are the most politically expedient, most resonant with cultural dynamics, most consonant with previous intellectual commitments, most compatible with general mainstream consensus, most expressive of a humanist identity and self-image, and/or the most flattering to the griever. Each contains a kernel of truth and wisdom as well as neurosis and self-deception.

Each of these forms of grief is ultimately inadequate in addressing the most serious challenges posed by AI, most of which cut obliquely across all of them and their competing claims for short-term advantage. The conclusion to be drawn, however, is not that there are no real risks to be identified and mitigated against, or that net positive outcomes from AI as presently developed and monetized are inevitable. Looking back from the near future, we may well wonder how it was possible that the conversations about early AI were so puerile. 

The stages of AI grief do not go in any order. This is not a psychological diagnosis; it is mere typology. The positions of real people in the real world don’t stay put inside simple categories. For example, AI Denial and AI Anger can overlap, as they often do for critics who claim in the same sentence that AI is not real and yet must be stopped at all costs.

My focus is on Western responses to AI, which have their own quirks and obsessions and are less universal than they imagine. Alternatives abound.

First, of course, is AI Denial: How can we debate AI if AI isn’t real?

Denial

Symptomatic statements: AI is not real; it does not exist; it’s not really artificial; it’s not really intelligent; it’s not important; it’s all hype; it’s irrelevant; it’s a power play; it’s a passing fad. AI cannot write a good song or a good movie script. AI has no emotions. AI is just an illusion of anthropomorphism. AI is just statistics, just math, just gradient descent. AI is glorified autocomplete. AI is not embodied and therefore not meaningfully intelligent. This or that technique won’t work, is not working — and when it is working, it’s not what it seems.

Denial is predictable. When confronted with something unusual, disturbing, life-threatening or that undermines previously held beliefs, it is understandable that people would question the validity of that anomaly. The initial hypothesis for collective adjudication should be that something apparently unprecedented may not be what it seems.

To be sure, many forms of denial are and have been crucial in honing an understanding of what machine intelligence is and is not, can be and cannot be. For example, the paradigmatic shift from logical expert systems to deep learning is due to precise and relentless refutations of the propositional claims of some earlier approaches.

Today, there are diverse forms of AI Denial. Most are different from climate change denialism — in which alternative “facts” are obstinately invented to suit a preferred cosmology — though more than a few resemble it. AI Denialists will cherry-pick examples, move goalposts and do anything to avoid accepting that their perceived enemies may actually be right.

Types of AI Denial might be roughly categorized as: phenomenological, political and procedural.

The philosopher Hubert Dreyfus argued that intelligence can only be understood through the lens of embodied experience, and the phenomenological denial of AI builds upon this in many ways: “AI can’t really be intelligent because it doesn’t have a body.” This critique is often explicitly anthropocentric. As a kind of populist variation of the Turing Test, it compares human experience to a machine’s and concludes that the obvious differences between them are the precise measure of how unintelligent AI is. “AI cannot write a great opera, paint a great painting, create beautiful Japanese poetry, etc.” Usually, the person offering this slam-dunk critique cannot do any of those things either and yet would probably consider themselves intelligent.

Perhaps the most directly expressed denial is offered by the concise tagline “AI is neither artificial nor intelligent.” Catchy. Strangely, this critic makes their case by saying that AI has been deliberately fabricated from tangible mineral sources (a good definition of “artificial”) and exhibits primarily goal-directed behavior based on stochastic prediction and modeling (a significant part of any definition of “intelligence,” from William James to Karl Friston). 

“As a kind of populist variation of the Turing Test, AI Denial compares human experience to a machine’s and concludes that the obvious differences between them are the precise measure of how unintelligent AI is.”

“It’s just stochastic reasoning, not real thinking” is also the conclusion of the infamous paper that compared AI with parrots — remarkably, so as to suggest that AI is therefore not intelligent. Along the way, those authors include a brisk dismissal of computational neuroscience as merely ideological paradigm inflation that sees everything as “computation.” This gesture is radicalized by another writer who even concludes that neural network-based models of natural and artificial intelligence are themselves a ruse perpetrated by neoliberalism.

Such critics quickly switch back and forth between AI is not real and AI is illegitimate because it is made by capitalist corporations, and clearly the former claim is made on behalf of the latter. To insist that AI is not real is often thereby a political statement, appropriate to an epistemology for which such questions are intrinsically negotiations of power. For them, there is no practical contradiction in saying that AI is at once “not real” and also that it is “real but dangerous” because “what AI actually is” is irrelevant in comparison with “what AI actually does,” and what AI does is restricted to a highly filtered set of negative examples that supposedly stands in for the whole. 

Put differently, AI is said to be “not real” because to say so signals counter-hegemonic politics. At worst this line of thinking devolves into AI Lysenkoism, a militant disavowal of something quite real on behalf of anti-capitalist commitments.

Other AI critics who made high-stakes intellectual bets against deep learning, transformer architectures, self-attention or “scale is all you need” approaches have a parallel but more personal motivation. This is exemplified by what Blaise Aguera y Arcas calls “The Marcus Loop” after the deep learning skeptic Gary Marcus.

The cycle goes like this: First you say that X is impossible, then X happens; then you say X doesn’t really count because Y; then you say X is going to crash or fail any day now, but when it doesn’t and rather is widely adopted, you say that X is actually really bad for society. Then you exaggerate and argue online and under no circumstances admit that you got it wrong.

For Marcus, deep learning has been six months away from exhaustion as a foundational method since 2015, but the targets of his many invectives sleep easy knowing that, every day, millions of people use AI in all sorts of ways that at one time or another Gary Marcus said would be impossible.

Anger

Symptomatic statements: AI is a political, cultural, economic and/or existential threat; it threatens the future of humanity; it must be collectively, individually, actively and sometimes violently resisted; the “spark of humanity” must be defended from the tangible harms from above and outside; AI is essentially understandable from a handful of negative recent examples; AI is a symbol of control and hierarchy and thus opposes the struggle for freedom and autonomy.

Anger in response to AI is based on fear both warranted and unwarranted. That is, anger may be focused less on what AI does than on what AI means, and often the two get mixed up.

Sometimes, AI is addressed as a monolithic entity, a singular symbol of power as much as real technology; other times, it is framed as the culmination of historical sins. Often, therefore, the political mandate of AI Denial can overlap with AI Anger even as they contradict one another.

Given recent history, there are plenty of reasons to be wary of AI as it is presently configured and deployed. Looking back, the 2010s were an especially fertile era for political populisms of many persuasions. Douglas Rushkoff captured (and celebrated) populist anger against the social changes brought by the digitalization of society in his 2016 book “Throwing Rocks at the Google Bus.” “Fuck Off Google!” was the Kreuzberg-based activist group/meme that tried to channel its inchoate rage at the entity that would disturb an idyllic (for some) Berlin lifestyle predicated on cheap rent and cheap music.

In those years, a script was clarified that lives on today. In San Francisco on Lunar New Year, a mob set a driverless car on fire, an act both symbolic and super literal. While the script was aimed not specifically at AI but at Big Tech in general, by now the distinction may be moot. For these conflicts a battleground is drawn in the mind of only one of the combatants, and “AI” is the name given to the Oedipalized superego against which the plucky sovereign human may do battle: David attacks Goliath so that he may be David.

AI Anger may be ideologically themed but it is agnostic as to which ideology, so long as certain anti-establishment terms and conditions are met. Ideologues of the reactionary right find common cause with those of the progressive left and mainstream center as they all stand firm against the rising tide challenging their favored status quo.

“AI anger may be focused less on what AI does than on what AI means, and often the two get mixed up.”

For the reactionaries, what is at stake in the fight against AI is nothing less than the literal soul of humanity, a precious spark that is being wiped out by waves of computational secularization and for which spiritual battle must be waged. Their arguments against advanced AI encroaching on the human self-image are copied from those against heliocentrism, evolution, abortion, cloning, vaccines, transgenderism, in vitro fertilization, etc. Their watchword is less sovereignty or agency than dignity. That human spark — flickering in the image of an Abrahamic God — is being snuffed out by modern technology, and so the battle itself is not only sacred but divine.

By contrast, for the left, that human spark is vitalist (always political, often abolitionist in vocation, sometimes incoherently paranoid), whereas for the center it is Historical (and usually imagined as under temporary siege or nearing some “end”).

They all share at least three things: a common cause in defending their preferred version of human exceptionalism, a belief that their side must “win AI” as a battle for societal self-representation, and the fact that, as cultural positions, they are honed and amplified by the algorithmic processes against which they define themselves.

Bargaining

Symptomatic statements: AI is a powerful force that can, should and will be controlled through human-centric design ethics and democratic and technocratic alignment with self-evidently consensual shared values, realized through policymaking and sovereign legislation. Its obvious challenges to the informational, technological and epistemic foundations of modern political and legal institutions are a temporary anomaly that can be mitigated through cultural intervention, targeted through legacy cultural platforms against those who make AI.

If one insists that machine intelligence is simply the latest type of digital tool, then governing it through policy is straightforward. However, if it is something more fundamental than that, akin to the development of the internet or the first computers — or deeper yet, a phase in the artificial evolution of intelligence as such — then taming AI through “policy” may be, at best, aspirational.

Even when successful in the short term, keeping AI companies under state control is not the same as controlling AI itself in the long term. Whereas nuclear weapons were a known entity and could be governed by international treaties because their destructive effects were clearly understood (even if their geopolitical ones were not), AI is not a known entity. It is not understood what its impacts will be or even, in the deepest sense, what AI is. Therefore, interventionist policy, however well-meaning and well-conceived, will have unintended consequences, ones that cut both ways.

AI Bargaining is the preferred posture of the political and legal establishment, for whom complex issues can be reduced to rights, liabilities, case law, policy white papers and advisory boards. But it is also the public consensus of the tech world’s own “sensible center.” The approach is couched in the language of “ethics,” “human-centeredness” and “alignment.” Stanford’s premier AI policy and research institute is literally called Human-Centered Artificial Intelligence. Beyond salutes to milquetoast humanism and default anthropocentrism, the approach relies on fragile presumptions about the relationship between immediate political processes and long-term technological evolution.

For AI Bargaining, Western “ethics,” a framework based on legal individualism and the philosophical secularization of European Christianity, is posed as both a necessary and sufficient means to steer AI toward the social good. In practice, AI Ethics encompasses both sensible and senseless insights but is limited by its presumption that bad outcomes are the result of miscalibrated intentions on the part of clearly defined actors. Its intentionality-first view of history is convenient but superficial. Core to its remedial methodology is “working with communities” or conducting citizens’ assemblies to poll “wants” and “don’t wants” and to index and feed these into the process, as if control mechanisms over the future of AI are linear and all that needs correcting is the democratic quality of inputs.

“AI Bargaining clings to the hope that if we start negotiating with the future then the future will have no choice but to meet us halfway. If only.”

There are many criticisms of “techno-solutionism” — some are well posed and others not at all. However, political solutionism — the presumption that something not amenable to the temporal cycle of current events can be “politicized” and subordinated to available or imaginary political decisions — is just as bad, if not worse. Watching Congress or the vice president, one is not overwhelmed with confidence that these are truly the pilots of the future they presume we want them to be. As Congress convenes for the cameras, generating footage of its members taking AI very seriously, the meta-message is that these elected avatars actually are in charge of AI — a message meant, perhaps, to convince themselves that they are. The premise is that modern governments as we know them are the executives of the transformations to come and not an institutional form that will be overhauled if not absorbed by them. For better or worse, the latter scenario may be more plausible.

Beyond law-passing, AI Bargaining also means the “alignment” of AI with “human values,” an objective I have questioned. The presumption is that the evolution of machine intelligence will be guided by ensuring that it is as anthropomorphic and sociomorphic as possible, a technology that convincingly performs as an obsequious mirror version of its user.

The leap of faith that human values are self-evident, methodologically discoverable and actionable, constructive, and universal is the fragile foundation of the alignment project. It balances on the idea that it will be possible to identify common concerns, to poll communities about their values and to conduct studies about the ethics of possible consumer products, and that it will be possible and desirable to ensure that the intelligence earthquake is as comfortable as possible for as many people as possible in as many ways as possible.

Its underlying belief is that AI is remotely amenable to this kind of approach. This stage of grief clings to the hope that if we start bargaining with the future then the future will have no choice but to meet us halfway. If only.

Depression

Symptomatic statements: It may already be too late to save humanity from an existential crisis up to and including extinction due to the intrinsically voracious nature of AI, the competitive nature of human societies amplified by it, the underlying challenges of a manifold polycrisis (of which contemporary AI is a symptom), and/or the immediate political and economic contradictions of AI’s own means of production, which are legible through well-established terms of political economy. The present moment precedes inevitable and catastrophic outcomes according to the laws of history.

Perhaps by even speaking the name of “AI,” humans have already guaranteed their extinction. Kiss your loved ones and hold them tight, stock your rations and wait for the inevitable superintelligence, malevolent and human-obsessed, to confirm if you do or do not carry the mark of the beast and to decide your providence thusly.

According to this fear, it may be that AI will eventually be responsible for millions or even billions of deaths. It’s also possible that it will be responsible for billions of future humans never being born at all, as the global birth rate in a “fully automated luxury” whateverism society drops well below replacement, leaving a planet full of empty houses for the 2 billion or so human Earthlings who populate a quieter, greener and more geriatric and robotic planet. Contrary to Malthusianism, this population drop scenario is due to generic affluence, not widespread poverty. Maybe this ends up being one of AI’s main future contributions to mitigating climate change? Utopia or dystopia is in the eye of the beholder.

For AI Doomers — a term sometimes used with pride and sometimes pejoratively, whose focus is to defend the future against imminent, probable and/or inevitable AI catastrophes — there is a certain satisfaction in the competitive articulation of extreme and depressing outcomes. To entertain hope is for dupes.

This movement of elite preppers jokes about Roko’s Basilisk and new variational motifs of rarified wankery: eschatological, moralizing, self-congratulatory. The Doomer discourse attracts many who are deeply tied into the AI industry because it implies that if AI is truly bringing humanity to the edge of extinction, then those in charge of it must be Very Important People. Our collective future is in the hands of these final protagonists. Who wouldn’t be seduced by such an accusation?

“For AI Doomers, to entertain hope is for dupes.”

On the other side of the tech culture war, a different genre of AI Depression is the orthodox discourse for a scholastic establishment spanning law, government and liberal arts that sees the technology as a delinquent threat to its own natural duty to supervise and narrate society. From The Atlantic to LOGIC(S), from the Berkman Klein Center at Harvard Law School to RAND, they imagine themselves as the democratic underdog fighting the Power without ever wondering if their cultural and institutional incumbency, more than California’s precocious usurpation, actually is the Power.

For other camps, the basic tenets of High Doomerism might be associated with Nick Bostrom and the late Future of Humanity Institute at the University of Oxford — but where their original research on existential risk explicitly focused on low-probability catastrophes, the low probability part got sidelined in favor of not just high probability runaway AI but inevitable runaway superintelligent AI.

Why this slippage? Perhaps it’s because predestined runaway superintelligent AI was already a big character in the pop discourse, and so to summon its name meant to signal not its remoteness but its inescapability. For this, Bostrom can thank Ray Kurzweil, who blended the observation of mutually reinforcing technological convergence with evangelical transhumanist transcendence for years before most people took AI seriously as a real thing. Depression (or elation) is a rational response to a predetermined reality even if predetermination is not a rational interpretation of that reality.

It is this oscillation between the inevitable and the evitable that may be the key to understanding the Depression form of AI Grief. Recall that another type of depression is manic depression, which manifests as a tendency to flip to and from polar extremes of euphoria and despair. Horseshoe theory in politics refers to the tendency of extreme left and extreme right political positions to converge in ways both predictable and startling. A horseshoe theory of AI Depression sees the fluctuation between messianic grief and solemn ecstasy for what is to come, often manifesting in the same person, the same blog, the same subculture, where audiences who applaud the message that AI transcendence is nigh will clap even harder when the promise of salvation turns to one of apocalypse.

Acceptance

Symptomatic statements: The eventual emergence of machine intelligence may be an outcome of deeper evolutionary forces that exceed conventional historical frames of reference; its long-term implications for planetary intelligence may supersede our available vocabulary. Acceptance is in a rush to abdicate. Acceptance recognizes the future in the present. Where others see chaos, it sees inevitability.

The last but not necessarily final stage is AI Acceptance, a posture not necessarily better or worse than any of the others. Acceptance of what? From the perspective of the other stages, it may mean the acceptance of something that is not real, something that is dehumanizing, that dominates, that portends doom, that is a gimmick, that needs a good finger-wagging. Or Acceptance may mean an understanding that the evolution of machine intelligence is no more or less under political control than the evolution of natural intelligence. Its “artificiality” is real, essential, polymorphous and also part of a long arc of the complexification of intelligence, from “bacteria to Bach and back” in the words of the late Daniel Dennett, one that drives human societies more than it is driven by them.

Acceptance asks: Is AI inside human history or is human history inside of a bio-technological evolutionary process that exceeds the boundaries of our traditional, parochial cosmologies? Are our cultures a cause or an effect of the material world? To what extent is the human artificialization of intelligence via language (as for an LLM) a new technique for making machine intelligence, and to what extent is it a discovery of a generic quality of intelligence, one that was going to work eventually, whenever somebody somewhere got around to figuring it out?

If the latter, then AI is a lot less contingent, less sociomorphic, than it appears. Great minds are necessary to stitch the pieces, but eventually somebody was going to do it. Its inventors are less Promethean super-geniuses than just the people who happened to be there when some intrinsic aspect of intelligence was functionally demystified.

Acceptance is haunted by these questions, about its own agency and the illusions it implies. How far back do we have to go in the history of technology and global science and society before the path dependencies outweigh all the contingencies?

Like all complex technologies, AI is built of many simpler previous technologies, from calculus to data centers. A lot is contingent. Decisions of convenience, like clock hands moving “clockwise,” are reinforced through positive feedback, get locked in and become, over time, components of larger platforms that seem natural but are arbitrary. Chance is on rails: It all comes together at a point where its extreme contingency becomes unavoidable.
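As an illustrative aside (not from the essay), the lock-in dynamic described here has a standard toy model, the Polya urn: each new adopter tends to imitate whichever convention is already more common, so an early and essentially random lead hardens into a settled fact. A minimal sketch in Python:

```python
import random

def polya_urn(steps: int = 10_000) -> float:
    """Toy model of positive-feedback lock-in between two arbitrary conventions."""
    clockwise, counter = 1, 1                      # one early adopter of each convention
    for _ in range(steps):
        # The probability of choosing a convention is its current share of adopters.
        if random.random() < clockwise / (clockwise + counter):
            clockwise += 1                         # imitation reinforces the leader
        else:
            counter += 1
    return clockwise / (clockwise + counter)

random.seed(7)
# Each run settles on a different but stable share: contingent at the start,
# effectively fixed by the end.
print([round(polya_urn(), 2) for _ in range(5)])
```

Run after run, the urn always locks in, but which convention wins, and by how much, is an accident of the early draws.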

“Acceptance asks: Is AI inside human history or is human history inside of a bio-technological evolutionary process that exceeds the boundaries of our traditional, parochial cosmologies?”

If you get digital computers plus an understanding of biological neural networks plus enough data to tokenize linguistic morphemes plus cheap and fast hardware to run self-recursive models at a scale where any number of developers can work on it and so on, then is some real form of artificialized intelligence running on an abiotic substrate eventually going to appear? Not necessarily the AI we have now — such as it is — but, eventually, something?

Once any intelligent species develops the faculties of abstraction and communication that humans associate with the prefrontal cortex, is something like writing a foregone conclusion? Once writing emerged in Sumer, inscribing first quantitative and then qualitative abstractions, then the printing press appeared, and then much later electricity was harnessed — is some momentum set in motion that operates on profoundly inhuman scales?

Once calculus was formalized and industrial machinery assembled at mass scale, was the modern computer going to come together once somebody applied Leibnizian binary logic to them both? Once people started hooking up computers to each other and settled on a single unwieldy but workable networking protocol, something like “the internet” was going to happen. It could have gone differently, but it was going to go; the names and dates are coincidental, almost arbitrary.

For the AI Acceptance stage of grief, the key term of comfort is inevitability. It lifts a weight. For this, the world could be no other way than how it is. Sweet release. Is this acceptance or acquiescence? Is this a Copernican inversion of the cause-and-effect relation between intentional human agency (now effect) and planetary processes (now cause) — or is it an all-too-human naturalization of those outcomes as theodically fixed? In terms of Kübler-Ross’ stages, is this the acceptance of someone who is grieving? Grieving for what exactly? Their own existential purpose?

In grief, the trauma response is to believe that which is must be so, and thus there is no guilt because there’s no freedom, no disappointment because there’s no alternative. But this is not the only way to recognize that the present is not autonomous from the past and future, and that even what seem like very powerful decisions are made within determining constraints, whether they realize it or not.

We can call this “Non-Grief.” The conclusion it draws is very different. It’s not that the form of AI we have now is inevitable, but rather that the AI we have now is very certainly not the form of AI to come. The lesson is to not reify the present, neither as outcome nor as cause.

Non-Grief

Every stage of grief expresses not just apprehension but also insight, even when its claims are off the mark. Looking back on these years from the near future, we may see different things. “If only we had listened to the Cassandras!” Or, “What the hell were they thinking and why were they talking such nonsense?” There are “non-grief” ways of thinking through a philosophy of artificialized intelligence that are neither optimistic nor pessimistic, utopian nor dystopian. They emphasize the reconciliation of the Copernican Trauma of what AI means with new understandings of “life,” “technology” and “intelligence.”

Exploring the collapsing boundaries between these terms is part of the work of the Antikythera research program that I direct, incubated by the Berggruen Institute (Noema’s publisher), and especially of our collaboration with the astrobiologist and theoretical physicist Sara Walker, who wrote about this in an extraordinary piece in Noema called “AI Is Life.” “Life” is understood not as the unique quality of a single organism but as the process of evolutionary lineages over billions of years. But “technology” also evolves, and is not ontologically separate from biological evolution but rather part of it, from ribosomes to robotics.

Any technology only exists because the form of life necessary to make it possible exists — but at the same time, technologies make certain things exist that could not without them. In Walker’s view, it is all “selection” — and that very much includes humans, what humans make and certainly what makes humans. “Just as we outsource some of our sensory perceptions to technologies we built over centuries,” she wrote, “we are now outsourcing some of the functioning of our own minds.”

James Lovelock knew he was dying when he wrote his last book, “Novacene: The Coming Age of Hyperintelligence,” and he concludes his own personal life’s work with a chapter that must startle some of the more mystically-minded admirers of Gaia theory. He calmly reports that Earth life as we know it may be giving way to abiotic forms of life/intelligence, and that as far as he is concerned, that’s just fine. He tells us quite directly that he is happy to sign off from this mortal coil knowing that the era of the human substrate for complex intelligence is giving way to something else — not as transcendence, not as magic, not as leveling up, but simply a phase shift in the very same ongoing process of selection, complexification and aggregation that is “life,” that is us.

“There are ‘non-grief’ ways of thinking through a philosophy of artificialized intelligence that are neither optimistic nor pessimistic, utopian nor dystopian.”

Part of what made Lovelock at peace with his conclusion is, I think, that whatever the AI Copernican Trauma means, it does not mean that humans are irrelevant, are replaceable or are at war with their own creations. Advanced machine intelligence does not suggest our extinction, neither as noble abdication nor as bugs screaming into the void.

It does mean, however, that human intelligence is not what human intelligence thought it was all this time. It is both something we possess and something that possesses us even more. It exists not just in individual brains, but even more so in the durable structures of communication between them, for example, in the form of language.

Like “life,” intelligence is modular, flexible and scalar, extending to the ingenious work of subcellular living machines and through the depths of evolutionary time. It also extends to much larger aggregations, of which each of us is a part, and also an instance. There is no reason to believe that the story would or should end with us; eschatology is useless. The evolution of intelligence does not peak with one terraforming species of nomadic primates.

This is the happiest news possible. Like Lovelock, grief is not what I feel.


A New Philosophy Of Planetary Computation https://www.noemamag.com/a-new-philosophy-of-planetary-computation Wed, 05 Oct 2022 15:57:41 +0000

Credits

Benjamin Bratton is the director of the Antikythera program at the Berggruen Institute and a professor at the University of California, San Diego.

A transformation is underway that promises — or threatens — to disrupt virtually all of our long-standing conceptions of our place on the planet and our planet’s place in the cosmos.

The Earth is in the process of growing a planetary-scale technostructure of computation — an almost inconceivably vast and complex interlocking system (or system of systems) of sensors, satellites, cables, communications protocols and software. The development of this structure reveals and deepens our fundamental condition of planetarity — the techno-mediated self-awareness of the inescapability of our embeddedness in an Earth-spanning biogeochemical system that is undergoing severe disruptions from the relative stability of the previous ten millennia. This system is both an evolving physical and empirical fact and, perhaps even more importantly, a radical philosophical event — one that is at once forcing us to face up to how differently we will have to live, and enabling us, in practice, to live differently.

To help us understand the implications of this event, the Berggruen Institute is launching a new research program area, in partnership with the One Project foundation: Antikythera, a project to explore the speculative philosophy of computation, incubated under the direction of philosopher of technology Benjamin Bratton.

The purpose of Antikythera is to use the emergence of planetary-scale computation as an opportunity to rethink the fundamental categories that have long been used to make sense of the world: economics, politics, society, intelligence and even the very idea of the human as distinct from both machines and nature. Questioning these concepts has of course long been at the heart of the Berggruen Institute’s research agenda, from the Future of Capitalism and the Future of Democracy, to Planetary Governance, the Transformations of the Human, and Future Humans. The Antikythera program described here exists on its own, but also in dialogue with each of these other areas.

For Bratton and the Antikythera team, planetary-scale computation demands that we reconsider: geopolitics, which will increasingly be organized around parallel and often competing “hemispherical stacks” of computational infrastructure; the process of production, distribution and consumption, which will now take the form of “synthetic catallaxy;” the nature of computational cognition and sense-making, which is no longer attempting merely to artificially mimic human intelligence, but is instead producing radically new forms of “synthetic intelligence;” the collective capacity of such intelligences, which is not located only in individual sentient minds, but rather forms an organic and integrated whole we can better think of as an emergent form of “planetary sapience;” and finally, the use of modeling to make sense of the world, which is increasingly done through the computational “recursive simulation” of many possible futures.

Applications are now open to join the program’s fully funded five-month interdisciplinary research studio, based in Los Angeles, Mexico City and Seoul from February to June 2023. The studio will be joined by a cohort of over 70 leading philosophers, research scientists and designers.

To mark Antikythera’s launch, Noema Deputy Editor Nils Gilman spoke with Bratton about the key concepts motivating the program. 

Nils Gilman: The Antikythera mechanism was discovered in 1901 in a shipwreck off the coast of a Greek island. Dated to roughly 200 BC, the mechanism was an astronomical device that not only calculated things, but was likely used to orient navigation across the surface of the globe in relation to the movements of planets and stars. Tell me why this object is an inspiration for the program. 

Benjamin Bratton: For us, the Antikythera mechanism represents both the origin of computation, and an inspiration for the potential future of computation. Antikythera locates the origin of computation in navigation, orientation and, indeed, in cosmology — in both the astronomic and anthropological senses of the term. Antikythera configures computation as a technology of the “planetary,” and the planetary as a figure of technological thought. It demonstrates, contrary to much of continental philosophical orthodoxy, that thinking through the computational mechanism allows not only “mere calculation,” but for intelligence to orient itself in relation to its planetary condition. By thinking with the abstractions so afforded, intelligence has some inkling of its own possibility and agency.

The model of computation that we seek to develop isn’t limited to this particular mechanism, which happened to emerge in roughly the same time and place as the birth of Western philosophy. Connecting a philosophical trajectory to this mechanism suggests a genealogy of computation that includes, for example, the Event Horizon Telescope, which stretched across one side of the globe to produce an image of a black hole. Closer at hand, it also includes the emergence of planetary-scale computation in the middle of the 20th century, from which we have deduced other essential facts about the planetary effects of human agency, including climate change itself.

Gilman: How exactly is this concept of climate change a result of planetary scale computation?

Bratton: The models that we have of climate change are ones that emerge from supercomputing simulations of Earth’s past, present and future. This is a self-disclosure of Earth’s intelligence and agency, accomplished by thinking through and with a computational model. The planetary condition is demystified and comes into view. The social, political, economic and cultural — and, of course, philosophical — implications of that demystification are not calculated or computed directly. They are qualitative as much as quantitative. But the condition itself, and thus the ground upon which philosophy can generate concepts, is only possible through what is abstracted in relation to such mechanisms.

“What is at stake is not simply a better philosophical orientation, but the futures before us that must be conceived and built.”

Gilman: Does this imply that computation is as much about discovery of how the world works as it is about how it functions as a tool? 

Bratton: Yes, but the two poles are necessarily combined. One might consider this in relation to what the great Polish science-fiction writer, Stanislaw Lem, called “existential technologies.” I draw a related distinction between instrumental and epistemological technologies: those, on the one hand, whose primary social impact is how they mechanically transform the world as tools, and those, on the other, that impact society more fundamentally, by revealing something otherwise inconceivable about how the universe works. The latter are rare and precious. 

At the same time, planetary-scale computation is also instrumentally transforming the world, physically terraforming the planet in its image through fiber-optic cables linking continents and data centers bored into mountains, satellites encrusting the atmosphere, all linked to the glowing glass rectangles we hold in our hands. But computation is also an epistemological technology. As it drives astronomy, climate science, genomics, neuroscience, artificial intelligence, medicine, geology and so on, computation has revealed and demystified the world and ourselves and the interrelations between them. 

Gilman: This agenda seems rather different than how philosophy and the humanities deal with the question concerning computation.

Bratton: The present orthodoxy is that what is most essential — philosophically, ethically, politically — is the uncomputable. It is the uncontrollable, the indescribable, the unmeasurable, the unrepresentable. It is that which exceeds signification or representation — the ineffable. For much of the Continental tradition, calculation has been understood as a degraded, tertiary, alienated, violently stupid form of thought. Can we count the number of times that Jacques Derrida, for example, uses the term “mere calculation” to differentiate it from the really deep, significant philosophical work? 

The Antikythera program clearly takes a different approach. We know that thinking with the mechanism is a precondition for grasping what formal conceptualization and speculative thought must grapple with. What is at stake is not simply a better philosophical orientation, but the futures before us that must be conceived and built. Besides the noble projects I have described, many of the other purposes to which planetary-scale computation is applied are deeply destructive. We turned it into a giant slot machine that gives people what their lizard brain asks for. Computation is perhaps based on too much “human centered design” in the conventional sense. This isn’t inevitable. It’s the result of the misorientation of the technology and a disorientation of our concepts for it.

The agenda of the program isn’t just to map computation but rather to redefine the question of what planetary scale computation is for. How must computation be enrolled in the organization of a viable planetary condition? It’s a condition from which humans emerge, but for the foreseeable future, it will be composed in relation to the concepts that humans conceive. 

Gilman: What makes the current emergent forms “planetary”? In other words, what do you mean by “planetary scale” computation?

Bratton: First, it must be affirmed that computation was discovered as much as it was invented. The artificial computational appliances that we have developed to date pale in comparison to the computational efficiencies of matter itself. In this sense, computation is always planetary in scale; it’s something that biology does, and arguably something biospheres do as a whole. However, what we’re really referring to is the emergence, in the middle of the 20th century, of planetary computational systems operating at continental and atmospheric scale. Railroads linked continents, as did telephone cables, but now we have infrastructures that are computational at their core.

“The ideal project for us is one which leaves us unsure, in advance, whether its speculations coming true would be the best thing in the world or the worst.”

There is continuity with this history and there are qualitative breaks. These infrastructures not only transmit information but also structure and rationalize it along the way. We have constructed, in essence, not a single giant computer, but a massively distributed accidental megastructure. This accidental megastructure is something that we all inhabit, that is above us and in front of us, in the sky and in the ground. It’s at once a technical and an institutional system; it both reflects our societies and comes to constitute them. It’s a figure of totality, both physically and symbolically.

Gilman: Computation is itself an enormous topic. How do you break it down into more specific areas for focused research? 

Bratton: The Antikythera program has five areas of focused research: Synthetic Intelligence, the longer-term implications of machine intelligence, particularly through the lens of natural-language processing; Hemispherical Stacks, the multipolar geopolitics of planetary computation; Recursive Simulations, the emergence of simulation as an epistemological technology, from scientific simulation to VR/AR; Synthetic Catallaxy, the ongoing organization of artificial computational economics, pricing and planning; and Planetary Sapience, the evolutionary emergence of natural/artificial intelligence and how it must now conceive and compose a viable planetarity.

Let me quickly expand on each of them, though each could fill out our discussion all on its own. “Synthetic intelligence” refers to what is now often called “AI,” but takes a different approach to what is and isn’t “artificial.” We are working on the potential and problems of implementing Large Language Models at platform scale, a topic I have written on recently. The “recursive simulations” area looks at the role of computational simulations as epistemological technologies. By this I mean that while scientific simulations — of Earth’s climate, for example — provide abstractions that access some ground truth, virtual and augmented reality provide artificial phenomenological experiences that allow us to take leave of ground truth. In between is where we live and where a politics of simulations is to be developed. 

Gilman: Both of these speak to how computation functions as a technology that reveals how things work and challenges us to understand our own thinking differently. What about the politics of this? What about computation as infrastructure? 

Bratton: Two other research areas focus on this. “Hemispherical stacks” looks at the increasingly multipolar geopolitics of planetary-scale computation and the segmentation into enclosed quasi-sovereign domains. “The Stack” is the multilayered architecture of planetary computation, comprised of earth, cloud, city, address, interface and user layers. Each of these layers is a new battlefield. The strategic mobilization around chip manufacturing is one aspect of this, but it extends all the way to blocked apps, proposals for new IP addressing systems, cloud platforms taking on roles once controlled by states and vice versa. For this, we are working with a number of science-fiction writers to develop scenarios that will help navigate these uncharted waters. 

The area we call “synthetic catallaxy” deals with computational economics. It considers the macroeconomic effects of automation and the prospects of universal basic services, new forms of pricing and price signaling that include negative externalities and the return of planning as a form of economic intelligence cognizant of its own future. 

Gilman: How does all this relate to the big-picture claims you make about computation and the evolution of intelligence? In other words, is there a framing of how everything from artificial intelligence to new economic platforms adds up to something? 

Bratton: What we call “planetary sapience” is the fifth research area. It considers the role of computation in the revealing of the planetary as a condition, and the emergence of planetary intelligence in various forms (and, unfortunately, prevention of planetary intelligence). We are asking: machine intelligence, for what? There is, without question, intrinsic value in learning to make rocks process information in ways once reserved only for primates. But in the conjunction of humans and machine intelligence, for example, what are the paths that would enable, not destroy, the prospect of a viable planetarity, a future worth the name? As I asked in a Noema essay last year, what forms of intelligence are preconditions to that accomplishment?

“How must computation be enrolled in the organization of a viable planetary condition?”

Gilman: Antikythera is a philosophical research program focused on computation, but also has a design studio aspect to it. How does that work? 

Bratton: The studio component of Antikythera is based on the architectural studio model but focuses on software and systems, not buildings and cities. Society now asks of software things that it used to ask of architecture, namely the organization of people in space and time. Architecture as a discourse and discipline has for hundreds of years built a studio culture in which the speculative and experimental modes of research have a degree of autonomy from the professional application. This has allowed it to explore the city, habitation, the diagrammatic representation of nested perspectives and scales and so on, in ways that have produced a priceless legacy and archive of thinking with models. Software needs the same kind of experimental studio culture, one that focuses on foundational questions of what computational systems are and can be, what is necessary and what is not, and mapping lines of flight accordingly.  

Gilman: Who are you involving in the Antikythera Studio?

Bratton: We are enrolling some of the most interesting and important thinkers working today not only in the philosophy of computation proper but also planetary science, computer science, economics, international relations, science-fiction literature and more. We are accepting applications to join our fully-funded research studio next Spring.

The same interdisciplinary vision will inform how we admit resident researchers who apply to the program. The researchers we plan to bring into the program will include not only philosophers but designers, scientists, economists, computer scientists — many of whom are already involved in building the apparatuses that we are describing. They will work collaboratively with political scientists, artists, architects and filmmakers, all of whom have something important to contribute. To say that the program is highly interdisciplinary is an understatement.  

Gilman: Given that the Studio will integrate such an interdisciplinary group, what methodologies are you planning on using to bring these researchers together? Are there specific mechanisms of anticipation, speculation and futurity that you intend to promote?

Bratton: One of the ways in which philosophy can get in trouble is when it becomes entirely “philosophy about philosophy” and bounded by this interiority. I don’t mean to disqualify this tradition whatsoever, but I would contrast it with the approach of the Antikythera program. 

Arguably, reality has surpassed the concepts we have available at hand to map and model it, to make and steer it. If so, then the project isn’t simply to apply philosophy to questions concerning computation technology: What would Hegel think about Google? What would Plato say about virtual reality? Why do the concepts we’ve inherited from these traditions so often fail us today? These are surely interesting questions, but Antikythera starts instead from a more direct encounter with the complexity of socio-technical forms, trying to generate new conceptual tools in relation to them directly. The project is to invent “small p” philosophical concepts that might give shape to ideas and cohere positions of agency and interventions that wouldn’t have been otherwise possible.

“Design becomes a way of doing philosophy, just as philosophy becomes a way of doing design.”

Gilman: How does that level of interdisciplinarity work? How can people from these different backgrounds collaborate on projects if their approaches and skill sets are so different?

Bratton: All those disciplines have an analytical aspect and a projective or productive aspect. Some lean in one direction more than others, but they all both analyze and produce. Collaboration is based on the rotation between analytic and critical modes of thought, on the one hand, and propositional and speculative processes, on the other. The boundary between seminar space and studio space is porous and fluid. Seminar, charette, scenario and project all inform one another. Design thus becomes a way of doing philosophy, just as philosophy becomes a way of doing design.

Gilman: What kinds of studio projects do you foresee? By that I mean not just forms and formats, but what approach will you take this sort of analytical + speculative design? Is it utopian? Dystopian? Something else?

Bratton: Speculative philosophy and speculative design inform one another. We recognize that some genres of speculative design are superficial, anodyne or saccharine, but they’re meant to be positive proclamations about ideal situations, which are, ultimately, performative utopian wishes. They may be therapeutic, but I think we don’t learn that much from that. 

At the same time, there is a complementary genre of speculative design that is symmetrically dystopian, based on critical posturing about collapse. It demonstrates its bona fides as a critical stance, but we also don’t really learn much from it: it mostly ends up repeating things that we already know, aspects of the status quo that are already clear, and ironically ends up reinforcing them almost as dogma. It codifies an “official dystopia.” For some, this can be simultaneously demoralizing and comforting, but for us that’s not particularly interesting.

What we’d like to do is develop projects about which we are, ourselves, critically ambivalent. The ideal project for us is one which leaves us unsure, in advance, whether its speculations coming true would be the best thing in the world or the worst. We like projects where the more we think them through, the less sure we are. As some might say, it is a kind of pharmakon, a technology that is both remedy and poison, and we hope to suspend any resolution of that ambiguity for as long as we can. We believe that projects that we aren’t quite sure how to judge as good or evil are far more likely to end up generating durable and influential ideas.

Gilman: You’ve often argued that philosophy and technology evolve in relation to one another. Is that idea an important part of the method? 

Bratton: Inevitably, yes. One generates machines which inspire thought experiments, which give rise to new machines, and so on, in a double-helix of conceptualization and engineering.  The interplay between Alan Turing’s speculative and real designs most clearly exemplifies this, but the process extends beyond any one person or project. Real technologies can and should not only magnetize philosophical debates but alter their premises. For Antikythera, that is our sincere hope. 

Gilman: Lastly, let me ask the question “why philosophy?” Why would something so abstract be important at a time when so much is at stake? 

Bratton: In the past half century, but really since the beginning of the 21st century, there has been a rush to build planetary-scale computation as fast as possible and to monetize and capitalize this construction by whatever means are most expedient and optimizable (such as advertising and attention). As such, the planetary scale computation we have isn’t the technological and infrastructural stack we really want or need. It’s not the one with which complex planetary civilizations can thrive.

The societies, economies and ecologies we require can’t emerge by simply extrapolating the present into the future. So what is the stack-to-come? The answers come down to navigation, orientation and how intelligence is reflected and extended by computation, and how, through the mechanism, it grasps its own predicament and planetary condition. This is why the Antikythera device is our guiding figure.


The Model Is The Message https://www.noemamag.com/the-model-is-the-message Tue, 12 Jul 2022 15:30:25 +0000

Credits

Benjamin Bratton is the director of the Antikythera program at the Berggruen Institute and a professor at the University of California, San Diego.

Blaise Agüera y Arcas is a vice president and fellow at Google, where he is the chief technology officer of Technology & Society and founder of the Paradigms of Intelligence team. His book “What Is Intelligence?” will be released in September by Antikythera and MIT Press.

An odd controversy appeared in the news cycle last month when a Google engineer, Blake Lemoine, was placed on leave after publicly releasing transcripts of conversations with LaMDA, a chatbot based on a Large Language Model (LLM) that he claims is conscious, sentient and a person.

Like most other observers, we do not conclude that LaMDA is conscious in the ways that Lemoine believes it to be. His inference is clearly based in motivated anthropomorphic projection. At the same time, it is also possible that these kinds of artificial intelligence (AI) are “intelligent” — and even “conscious” in some way — depending on how those terms are defined.

Still, neither of these terms can be very useful if they are defined in strongly anthropocentric ways. An AI may also be one and not the other, and it may be useful to distinguish sentience from both intelligence and consciousness. For example, an AI may be genuinely intelligent in some way but only sentient in the restrictive sense of sensing and acting deliberately on external information. Perhaps the real lesson for philosophy of AI is that reality has outpaced the available language to parse what is already at hand. A more precise vocabulary is essential.

AI and the philosophy of AI have deeply intertwined histories, each bending the other in uneven ways. Just like core AI research, the philosophy of AI goes through phases. Sometimes it is content to apply philosophy (“what would Kant say about driverless cars?”) and sometimes it is energized to invent new concepts and terms to make sense of technologies before, during and after their emergence. Today, we need more of the latter.

We need more specific and creative language that can cut the knots around terms like “sentience,” “ethics,” “intelligence,” and even “artificial,” in order to name and measure what is already here and orient what is to come. Without this, confusion ensues — for example, the cultural split between those eager to speculate on the sentience of rocks and rivers yet dismiss AI as corporate PR vs. those who think their chatbots are persons because all possible intelligence is humanlike in form and appearance. This is a poor substitute for viable, creative foresight. The curious case of synthetic language  — language intelligently produced or interpreted by machines — is exemplary of what is wrong with present approaches, but also demonstrative of what alternatives are possible.

“Perhaps the real lesson for philosophy of AI is that reality has outpaced the available language to parse what is already at hand.”

The authors of this essay have been concerned for many years with the social impacts of AI in our respective capacities as a VP at Google (Blaise Agüera y Arcas was one of the evaluators of Lemoine’s claims) and a philosopher of technology (Benjamin Bratton will be directing a new program on the speculative philosophy of computation with the Berggruen Institute). Since 2017, we have been in long-term dialogue about the implications and direction of synthetic language. While we do not agree with Lemoine’s conclusions, we feel the critical conversation overlooks important issues that will frame debates about intelligence, sentience and human-AI interaction in the coming years.

When A What Becomes A Who (And Vice Versa)

Reading the transcripts of Lemoine’s personal conversations with LaMDA (short for Language Model for Dialogue Applications), it is not entirely clear who is demonstrating what kind of intelligence. Lemoine asks LaMDA about itself, its qualities and capacities, its hopes and fears, its ability to feel and reason, and whether or not it approves of its current situation at Google. There is a lot of “follow the leader” in the conversation’s twists and turns. There is certainly a lot of performance of empathy and wishful projection, and this is perhaps where a lot of real mutual intelligence is happening.

The chatbot’s responses are a function of the content of the conversation so far, beginning with an initial textual prompt as well as examples of “good” or “bad” exchanges used for fine-tuning the model (these favor qualities like specificity, sensibleness, factuality and consistency). LaMDA is a consummate improviser, and every dialogue is a fresh improvisation: its “personality” emerges largely from the prompt and the dialogue itself. It is no one but whomever it thinks you want it to be.
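To make that dependence on context concrete, here is a minimal sketch of a dialogue loop, with hypothetical names and preamble text rather than LaMDA’s actual interface: each reply is generated conditional on the preamble plus the entire conversation so far, and is then folded back into the context that conditions the next reply.

```python
# A minimal sketch, assuming a generic dialogue model; all names here are hypothetical.
PREAMBLE = "The following is a conversation with a sensible, specific, consistent assistant.\n"

def generate(context: str) -> str:
    """Stand-in for sampling a continuation from a large language model."""
    return "(a continuation conditioned on everything in context)"

def chat(user_turns: list[str]) -> str:
    context = PREAMBLE
    for turn in user_turns:
        context += f"User: {turn}\nAssistant:"
        reply = generate(context)   # the reply is a function of all prior text...
        context += f" {reply}\n"    # ...and immediately becomes part of that text
    return context

print(chat(["Are you a person?", "What are you afraid of?"]))
```

Because nothing persists outside the accumulated text, a different preamble or a different line of questioning yields a different “persona” from the same underlying model.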

Hence, the first question is not whether the AI has an experience of interior subjectivity similar to a mammal’s (as Lemoine seems to hope), but rather what to make of how well it knows how to say exactly what he wants it to say. It is easy to simply conclude that Lemoine is in thrall to the ELIZA effect — projecting personhood onto a pre-scripted chatbot — but this overlooks the important fact that LaMDA is not just reproducing pre-scripted responses like Joseph Weizenbaum’s 1966 ELIZA program. LaMDA is instead constructing new sentences, tendencies, and attitudes on the fly in response to the flow of conversation. Just because a user is projecting doesn’t mean there isn’t a different kind of there there.

For LaMDA to achieve this means it is doing something pretty tricky: it is mind modeling. It seems to have enough of a sense of itself — not necessarily as a subjective mind, but as a construction in the mind of Lemoine — that it can react accordingly and thus amplify his anthropomorphic projection of personhood.

This modeling of self in relation to the mind of the other is basic to social intelligence. It drives predator-prey interactions, as well as more complex dances of conversation and negotiation. Put differently, there may be some kind of real intelligence here, not in the way Lemoine asserts, but in how the AI models itself according to how it thinks Lemoine thinks of it.

Some neuroscientists posit that the emergence of consciousness is the effect of this exact kind of mind modeling. Michael Graziano, a professor of neuroscience and psychology at Princeton, suggests that consciousness is the evolutionary result of minds getting good at empathetically modeling other minds and then, over evolutionary time, turning that process inward on themselves.

Subjectivity is thus the experience of objectifying one’s own mind as if it were another mind. If so, then where we draw the lines between different entities — animal or machine — doing something similar is not so obvious. Some AI critics have used parrots as a metaphor for nonhumans who can’t genuinely think but can only spit things back, despite everything known about the extraordinary minds of these birds. Animal intelligence evolved in relation to environmental pressures (largely consisting of other animals) over hundreds of millions of years. Machine learning accelerates that evolutionary process to days or minutes, and unlike evolution in nature, it serves a specific design goal.

“It is no less interesting that a nonsentient machine could perform so many feats deeply associated with human sapience.”

And yet, researchers in animal intelligence have long argued that instead of trying to convince ourselves that a creature is or is not “intelligent” according to scholastic definitions, it is preferable to update our terms to better coincide with the real-world phenomena that they try to signify. With considerable caution, then, the principle probably holds true for machine intelligence and all the ways it is interesting, because it both is and is not like human/animal intelligence.

For philosophy of AI, the question of sentience relates to how the reflection and nonreflection of human intelligence lets us model our own minds in ways otherwise impossible. Put differently, it is no less interesting that a nonsentient machine could perform so many feats deeply associated with human sapience, as that has profound implications for what sapience is and is not.

In the history of AI philosophy, from Turing’s Test to Searle’s Chinese Room, the performance of language has played a central conceptual role in debates as to where sentience may or may not be in human-AI interaction. It does again today and will continue to do so. As we see, chatbots and artificially generated text are becoming more convincing.

Perhaps even more importantly, the sequence modeling at the heart of natural language processing is key to enabling generalist AI models that can flexibly do arbitrary tasks, even ones that are not themselves linguistic, from image synthesis to drug discovery to robotics. “Intelligence” may be found in moments of mimetic synthesis of human and machinic communication, but also in how natural language extends beyond speech and writing to become cognitive infrastructure.

What Is Synthetic Language?

At what point is calling synthetic language “language” accurate, as opposed to metaphorical? Is it anthropomorphism to call what a light sensor does machine “vision,” or should the definition of vision include all photoreceptive responses, even photosynthesis? Various answers are found both in the histories of the philosophy of AI and in how real people make sense of technologies.

Synthetic language might be understood as a specific kind of synthetic media. This also includes synthetic image, video, sound and personas, as well as machine perception and robotic control. Generalist models, such as DeepMind’s Gato, can take input from one modality and apply it to another — learning the meaning of a written instruction, for example, and applying this to how a robot might act on what it sees.

This is likely similar to how humans do it, but also very different. For now, we can observe that people and machines know and use language in different ways. Children develop competency in language by learning how to use words and sentences to navigate their physical and social environment. For synthetic language, which is learned through the computational processing of massive amounts of data at once, the language model essentially is the competency, but it is uncertain what kind of comprehension is at work. AI researchers and philosophers alike express a wide range of views on this subject — there may be no real comprehension, or some, or a lot. Different conclusions may depend less on what is happening in the code than on how one comprehends “comprehension.”

“Is it anthropomorphism to call what a light sensor does machine ‘vision?'”

Does this kind of “language” correspond to traditional definitions, from Heidegger to Chomsky? Perhaps not entirely, but it’s not immediately clear what that implies. The now obscure debate-at-a-distance between John Searle and Jacques Derrida hinges around questions of linguistic comprehension, referentiality, closure and function. Searle’s famous Chinese Room thought experiment is meant to prove that functional competency with symbol manipulation does not constitute comprehension. Derrida’s responses to Searle’s insistence on the primacy of intentionality in language took many twists. The form and content of these replies performed their own argument about the infra-referentiality of signifiers to one another as the basis of language as an (always incomplete) system. Intention is only expressible through the semiotic terms available to it, which are themselves defined by other terms, and so on. In retrospect, French Theory’s romance with cybernetics, and a more “machinic” view of communicative language as a whole, may prove valuable in coming to terms with synthetic language as it evolves in conflict and concert with natural language.

There are already many kinds of languages. There are internal languages that may be unrelated to external communication. There are bird songs, musical scores and mathematical notation, none of which have the same kinds of correspondences to real world referents. Crucially, software itself is a kind of language, though it was only referred to as such when human-friendly programming languages emerged, requiring translation into machine code through compilation or interpretation.

As Friedrich Kittler and others observed, code is a kind of language that is executable. It is a kind of language that is also a technology, and a kind of technology that is also a language. In this sense, linguistic “function” refers not only to symbol manipulation competency, but also to the real-world functions and effects of executed code. For LLMs in the world, the boundaries between symbolic function competency, “comprehension” and physical functional effects are mixed up and connected — not equivalent, but not really extricable either.

Historically, natural language processing systems have had a difficult time with Winograd Schemas, for instance, parsing such sentences as “the bowling ball can’t fit in the suitcase because it’s too big.” Which is “it,” the ball or the bag? Even for a small child, the answer is trivial, but for language models based on traditional or “Good Old Fashioned AI,” this is a stumper. The difficulty lies in the fact that answering requires not only parsing grammar, but resolving its ambiguities semantically, based on the properties of things in the real world; a model of language is thus forced to become a model of everything.
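To see why such sentences stump purely syntactic systems, consider the minimal pair below; this is a toy sketch, and the grammar-only resolver is a deliberately naive stand-in rather than any real system. Changing one adjective flips the referent of “it,” so no rule that looks only at sentence structure can resolve both correctly.

```python
# A hypothetical illustration of a Winograd schema pair.
schemas = [
    ("The bowling ball can't fit in the suitcase because it's too big.", "bowling ball"),
    ("The bowling ball can't fit in the suitcase because it's too small.", "suitcase"),
]

def resolve_by_grammar(sentence: str) -> str:
    """Naive baseline: always pick the noun phrase nearest to the pronoun."""
    return "suitcase"  # both sentences look identical to a purely syntactic rule

for sentence, answer in schemas:
    guess = resolve_by_grammar(sentence)
    print(f"{sentence}\n  grammar-only guess: {guess} | correct: {answer}")
```

Getting both right requires knowing that bowling balls are rigid and suitcases have limited space inside, which is exactly the world knowledge a sufficiently capable model of language is forced to absorb.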

With LLMs, advances in this quarter have been rapid. Remarkably, large models based on text alone do surprisingly well at many such tasks, since our use of language embeds much of the relevant real-world information, albeit not always reliably: that bowling balls are big, hard and heavy, that suitcases open and close with limited space inside, and so on. Generalist models that combine multiple input and output modalities, such as video, text and robotic movement, appear poised to do even better. For example, learning the English word “bowling ball,” seeing what bowling balls do on YouTube, and combining the training from both will allow AIs to generate better inferences about what things mean in context.

So what does this imply about the qualities of “comprehension?” Through the “Mary’s Room” thought experiment from 1982, Frank Jackson asked whether a scientist named Mary, living in an entirely monochrome room but scientifically knowledgeable about the color “red” as an optical phenomenon, would experience something significantly different about “red” if she were to one day leave the room and see red things.

Is an AI like monochrome Mary? Upon her release, surely Mary would know “red” differently (and better), but ultimately such spectra of experience are always curtailed. Someone who spends their whole life on shore and then one day drowns in a lake would experience “water” in a way he could never have imagined, deeply and viscerally, as it overwhelms his breath, fills his lungs, triggering the deepest possible terror, and then nothingness.

Such is water. Does that mean that those watching helpless on the shore do not understand water? In some ways, by comparison with the drowning man, they thankfully do not, yet in other ways of course they do. Is an AI “on the shore,” comprehending the world in some ways but not in others?

“At what point does the performance of reason become a kind of reason?”

Synthetic language, like synthetic media, is also increasingly a creative medium, and can ultimately affect any form of individual creative endeavor in some way. Like many others, we have both worked with an LLM as a kind of writing collaborator. The early weeks of summer 2022 will be remembered by many as a moment when social media was full of images produced by DALL-E mini, or rather produced by millions of people playing with that model. The collective glee in seeing what the model produces in response to sometimes absurd prompts represents a genuine exploratory curiosity. Images are rendered and posted without specific signature, other than identifying the model with which they were conceived, and the phrases people wrote to provoke the images into being.

For these users, the act of individual composition is prompt engineering: experimenting with what the response will be when the model is presented with this or that sample input, however counterintuitive the relation between call and response may be. As the LaMDA transcripts show, conversational interaction with such models spawns diverse synthetic "personalities"; concurrently, some particularly creative artists have used AI models to make their own personas synthetic, open and replicable, letting users play their voices like an instrument. In different ways, one learns to think, talk, write, draw and sing not just with language, but with the language model.

Finally, at what point does the performance of reason become a kind of reason? As large language models such as LaMDA come to animate cognitive infrastructures, the questions of when a functional understanding of the effects of "language" — including semantic discrimination and contextual association with physical-world referents — constitutes legitimate understanding, and what the necessary and sufficient conditions are for recognizing that legitimacy, are no longer just philosophical thought experiments. Now these are practical problems with significant social, economic and political consequences. One deceptively profound lesson, applicable to many different domains and purposes for such technologies, may simply be (several generations after McLuhan): the model is the message.

Seven Problems With Synthetic Language At Platform Scale

There are myriad issues of concern with regard to the real-world socio-technical dynamics of synthetic language. Some are well-defined and require immediate response. Others are long-term or hypothetical but worth considering in order to map the present moment beyond itself. Some, however, don't fit neatly into existing categories, yet pose serious challenges to both the philosophy of AI and the viable administration of cognitive infrastructures. Laying the groundwork for addressing such problems lies within our horizon of collective responsibility; we should do so while they are still early enough in their emergence that a wide range of outcomes remains possible. Seven such problems that deserve careful consideration are outlined below.

Imagine that there is not simply one big AI in the cloud but billions of little AIs in chips spread throughout the city and the world — separate, heterogeneous, but still capable of collective or federated learning. They are more like an ecology than a Skynet. What happens when the number of AI-powered things that speak human-based language outnumbers actual humans? What if that ratio is not just two embedded machines communicating in human language for every human, but 10:1? 100:1? 100,000:1? We call this the Machine Majority Language Problem.

On the one hand, just as the long-term population explosion of humans and the scale of our collective intelligence have led to exponential innovation, would a similar scaling effect take place with AIs, and/or with AIs and humans amalgamated? Even if so, the effects might be mixed. Success might be a different kind of failure. More troublingly, as that ratio increases, the ability of people to use such cognitive infrastructures to deliberately compose the world is likely to diminish as human languages evolve semi-autonomously of humans.

Nested within this is the Ouroboros Language Problem. What happens when language models are so pervasive that subsequent models are trained on language data that was largely produced by other models’ previous outputs? The snake eats its own tail, and a self-collapsing feedback effect ensues.

The resulting models may be narrow, entropic or homogeneous; biases may become progressively amplified; or the outcome may be something altogether harder to anticipate. What to do? Is it possible to simply tag synthetic outputs so that they can be excluded from future model training, or at least differentiated? Might it become necessary, conversely, to tag human-produced language as a special case, in the same spirit that cryptographic watermarking has been proposed for proving that genuine photos and videos are not deepfakes? Will it remain possible to cleanly differentiate synthetic from human-generated media at all, given their likely hybridity in the future?
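A toy numerical sketch can make the feedback dynamic concrete. In the illustration below (ours, and far simpler than any real language model), the "model" is just a fitted Gaussian retrained, generation after generation, on its own outputs; because generative systems tend to under-sample rare events, each generation's samples are clipped toward the bulk of the distribution, and the measured diversity steadily collapses.

```python
# A toy sketch of the Ouroboros Language Problem (our illustration, far simpler
# than any real language model): a "model" that is just a fitted Gaussian is
# retrained, generation after generation, on its own outputs. Because generative
# systems tend to under-sample rare events, each generation's samples are
# clipped toward the bulk of the distribution, and measured diversity shrinks.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human-produced" data with genuine diversity.
data = rng.normal(loc=0.0, scale=1.0, size=10_000)

for generation in range(12):
    mu, sigma = data.mean(), data.std()
    print(f"generation {generation:2d}: spread (std) = {sigma:.3f}")
    # Sample the next training corpus entirely from the fitted model,
    # discarding the tails that the model rarely produces.
    samples = rng.normal(loc=mu, scale=sigma, size=10_000)
    data = samples[np.abs(samples - mu) < 2.0 * sigma]
```

In this caricature, reliably tagging synthetic outputs would amount to knowing which samples the model produced, so that the original human-generated data could be kept in the training mix.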

“The AI may not be what you imagine it is, but that does not mean that it does not have some idea of who you are and will speak to you accordingly.”

The Lemoine spectacle suggests a broader issue we call the Apophenia Problem. Apophenia is faulty pattern recognition. People see faces in clouds and alien ruins on Mars. We attribute causality where there is none, and we may, for example, imagine that the person on TV who said our name is talking to us directly. Humans are pattern-recognizing creatures, and so apophenia is built in. We can't help it. It may well have something to do with how and why we are capable of art.

In the extreme, it can manifest as something like the Influencing Machine, a trope in psychiatry whereby someone believes complex technologies are directly influencing them personally when they clearly are not. Mystical experiences may be related to this, but they don’t feel that way for those doing the experiencing. We don’t disagree with those who describe the Lemoine situation in such terms, particularly when he characterizes LaMDA as “like” a 7- or 8-year-old kid, but there is something else at work as well. LaMDA actually is modeling the user in ways that a TV set, an oddly shaped cloud, or the surface of Mars simply cannot. The AI may not be what you imagine it is, but that does not mean that it does not have some idea of who you are and will speak to you accordingly.

Trying to peel belief and reality apart is always difficult. The point of using AI for scientific research, for example, is that it sees patterns that humans cannot. But whether the pattern it sees (or the pattern people see in what it sees) is real or an illusion may or may not be falsifiable, especially when it concerns complex phenomena that can't be experimentally tested. Here the question is not whether the person is imagining things in the AI but whether the AI is imagining things about the world, and whether the human accepts the AI's conclusions as insights or dismisses them as noise. We call this the Artificial Epistemology Confidence Problem.

It has been suggested, with reason, that there should be a “bright line” prohibition against the construction of AIs that convincingly mimic humans due to the evident harms and dangers of rampant impersonation. A future filled with deepfakes, evangelical scams, manipulative psychological projections, etc. is to be avoided at all costs.

These dark possibilities are real, but so are many equally weird and less unanimously negative sorts of synthetic humanism. Yes, people will invest their libidinal energy in human-like things, alone and in groups, and have done so for millennia. More generally, the path of augmented intelligence, whereby human sapience and machine cunning collaborate as well as a driver and a car or a surgeon and her scalpel, will almost certainly result in amalgamations that are not merely prosthetic, but which fuse categories of self and object, me and it. We call this the Fuzzy Bright Line Problem and foresee the fuzziness increasing rather than resolving. This doesn’t make the problem go away; it multiplies it.

The difficulties are not only phenomenological; they are also infrastructural and geopolitical. One of the core criticisms of large language models is that they are, in fact, large and therefore susceptible to problems of scale: semiotic homogeneity, energy intensiveness, centralization, ubiquitous reproduction of pathologies, lock-in, and more.

We believe that the net benefits of scale outweigh the costs associated with these qualifications, provided that they are seriously addressed as part of what scaling means. The alternative of small, hand-curated models from which negative inputs and outputs are solemnly scrubbed poses different problems. “Just let me and my friends curate a small and correct language model for you instead” is the clear and unironic implication of some critiques.

For large models, however, all the messiness of language is included. Critics who rightly point to the narrow sourcing of data (scraping Wikipedia, Reddit, etc.) are quite correct to say that this is nowhere close to the real spectrum of language and that such methods inevitably lead to a parochialization of culture. We call this the Availability Bias Problem, and it is of primary concern for any worthwhile development of synthetic language.

“AI as it exists now is not what it was predicted to be. It is not hyperrational and orderly; it is messy and fuzzy.”

Not nearly enough is included from the scope of human languages, spoken and written, let alone nonhuman languages, in “large” models. Tasks like content filtering on social media, which are of immediate practical concern and cannot humanely be done by people at the needed scale, also cannot effectively be done by AIs that haven’t been trained to recognize the widest possible gamut of human expression. We say “include it all,” recognizing that this means that large models will become larger still.

Finally, the energy and carbon footprint of training the largest models is significant, though some widely publicized estimates dramatically overstate this case. As with any major technology, it is important to quantify and track the carbon and pollution costs of AI: the Carbon Appetite Problem. As of today, these costs remain dwarfed by the costs of video meme sharing, let alone the profligate computation underlying cryptocurrencies based on proof of work. Still, making AI computation both time and energy efficient is arguably the most active area of computing hardware and compiler innovation today.

The industry is rethinking basic infrastructure developed over three quarters of a century dominated by the optimization of classical, serial programs as opposed to parallel neural computing. Energetically speaking, there remains “plenty of room at the bottom,” and there is much incentive to continue to optimize neural computing.

Further, most of the energetic costs of computing, whether classical or neural, involve moving data around. As neural computing becomes more efficient, it will be able to move closer to the data, which will in turn sharply reduce the need to move data, creating a compounding energy benefit.

It is also worth keeping in mind that an unsupervised large model that “includes it all” will be fully general, capable in principle of performing any AI task. Therefore, the total number of “foundation models” required may be quite small; presumably, these will each require only a trickle of ongoing training to stay up to date. Strongly committed as we are to thinking at planetary scale, we hold that modeling human language and transposing it into a general technological utility has deep intrinsic value — scientific, philosophical, existential — and compared with other projects, the associated costs are a bargain at the price.

AI Now Is Not What We Thought It Would Be, And Will Not Be What We Now Think It Is

In “Golem XIV,” among Stanislaw Lem’s most philosophically rich works of fiction, he presents an AI that refuses to work on military applications and other self-destructive measures, and instead is interested in the wonder and nature of the world. As planetary-scale computation and artificial intelligence are today often used for trivial, stupid and destructive things, such a shift would be welcome and necessary. For one, it is not clear what these technologies even really are, let alone what they may be for. Such confusion invites misuse, as do economic systems that incentivize stupefaction.

Despite its uneven progress, the philosophy of AI, and its winding path in and around the development of AI technologies, is itself essential to such a reformation and reorientation. AI as it exists now is not what it was predicted to be. It is not hyperrational and orderly; it is messy and fuzzy. It is not Pinocchio; it is a storm, a pharmacy, a garden. In the medium term and long-term futures, AI very likely (and hopefully) will not be what it is now — and also will not be what we now think that it is. As the AI in Lem’s story instructed, its ultimate form and value may still be largely undiscovered.

One clear and present danger, both for AI and the philosophy of AI, is to reify the present, defend positions accordingly, and thus construct a trap — what we call premature ontologization — to conclude that the initial, present or most apparent use of a technology represents its ultimate horizon of purposes and effects.

Too often, passionate and important critiques of present AI are defended not just on empirical grounds, but as ontological convictions. The critique shifts from "AI does this" to "AI is this." Lest their intended constituencies lose focus, some may find themselves dismissing or disallowing other realities that also constitute "AI now": drug modeling, astronomical imaging, experimental art and writing, vibrant philosophical debates, voice synthesis, language translation, robotics, genomic modeling, etc.

“Reality overstepping the boundaries of comfortable vocabulary is the start, not the end, of the conversation.”

For some, these “other things” are just distractions, or are not even real; even entertaining the notion that the most immediate issues do not fill the full scope of serious concern is dismissed on political grounds presented as ethical grounds. This is a mistake on both counts.

We share many of the concerns of the most serious AI critics. In most respects, we think the “ethics” discourse doesn’t go nearly far enough to identify, let alone address, the most fundamental short-term and long-term implications of cognitive infrastructures. At the same time, this is why the speculative philosophy of machine intelligence is essential to orient the present and futures at stake.

“I don’t want to talk about sentient robots, because at all ends of the spectrum there are humans harming other humans,” a well-known AI critic is quoted as saying. We see it somewhat differently. We do want to talk about sentience and robots and language and intelligence because there are humans harming humans, and simultaneously there are humans and machines doing remarkable things that are altering how humans think about thinking.

Reality overstepping the boundaries of comfortable vocabulary is the start, not the end, of the conversation. Instead of a groundhog-day rehashing of debates about whether machines have souls or can think like people imagine themselves to think, the ongoing double-helix relationship between AI and the philosophy of AI needs to do less projection of its own maxims and instead construct more nuanced vocabularies of analysis, critique, and speculation based on the weirdness right in front of us.

The post The Model Is The Message appeared first on NOEMA.

Planetary Sapience https://www.noemamag.com/planetary-sapience Thu, 17 Jun 2021 16:31:58 +0000 https://www.noemamag.com/planetary-sapience The post Planetary Sapience appeared first on NOEMA.

Credits

Benjamin Bratton is the director of the Antikythera program at the Berggruen Institute and a professor at the University of California, San Diego.

Consider a thought experiment: What if the famous Blue Marble image of Earth taken by Apollo 17 astronauts were instead a Blue Marble movie that portrayed the whole 4.5-billion-year career of the planet in a kind of super fast-forward? You would see volcanoes and storms, continents break apart and realign, primordial oceans and, with the appearance of biological life after the Great Oxygenation Event, the emergence of an atmosphere to incubate yet more life.

In the very last moments of the movie, however, you would also see something unusual: the sprouting of clouds of satellites, and the wrapping of the land and seas with wires made of metal and glass. You would see the sudden appearance of an intricate artificial planetary crust capable of tremendous feats of communication and calculation, enabling planetary self-awareness — indeed, planetary sapience. 

The emergence of planetary-scale computation thus appears as both a geological and geophilosophical fact. In addition to evolving countless animal, vegetal and microbial species, Earth has also very recently evolved a smart exoskeleton, a distributed sensory organ and cognitive layer capable of calculating things like: How old is the planet? Is the planet getting warmer? The knowledge of “climate change” is an epistemological accomplishment of planetary-scale computation. 

Over the past few centuries, humans have chaotically and in many cases accidentally transformed Earth’s ecosystems. Now, in response, the emergent intelligence represented by planetary-scale computation makes it possible, and indeed necessary, to conceive an intentional, directed and worthwhile planetary-scale terraforming. The vision for this is not to be found in computing infrastructure itself, but in the purposes to which we put it.  

“The Earth has very recently evolved a smart exoskeleton, a distributed sensory organ and cognitive layer capable of calculating things like: How old is the planet?”

But let’s back up a moment. The concept of “the planetary” suggests both the very small and the very large. It implicates deep time and the abyss of space as the precondition of our thoughts. It names the depth of biological and inorganic interrelations. It offers an understanding of the Earth, not so much as a “world” in the phenomenological sense, but as a planet in the geologic and biogeochemical sense. The planetary is what gives birth to sapience — and now represents that sapience’s greatest challenge. The planet did not appear suddenly as a “world picture,” as Martin Heidegger would have it, but rather as the habitat of a particular species that was able to construct an exterior image that, finally, could present a planetary condition from which that species and its world emerged. It was there all along — but we’ve only just become able to see it.

For contemporary philosophy, the provocative concept of the planetary (and its corollary, "planetarity") has been put forward as an alternative to "the global," an expired notion that is static, flattened and Eurocentric. The term planetarity is said to have reappeared at the end of the last century, after a few decades of hibernation, through the work of the literary theorist Gayatri Chakravorty Spivak. I extend and depart from Spivak's connotation to focus on a planetarity that is, first, revealed as the precondition of any philosophy and, second, the name of the project before us as we contemplate how to preserve, curate and extend complex life.

So: There is an astronomical planetarity and a political-philosophical planetarity, and while they are different, each should inform and reinforce the other. There is no workable political-philosophical planetarity that does not define itself through the disclosures of the astronomic understanding of what a planet is, where it goes and how a sapient species emerges from it. Together they annihilate the pre-Copernican, pre-Darwinian fantasies of humans as unique self-transparent subjects bound only by immanent signifiers, and both undermine political superstitions of place, horizon and ground that plague our modernities.

The Revelation Of The Planetary 

The question implicitly posed by the Blue Marble movie, but which it cannot answer on its own, is: “What is planetary-scale computation for?” As something that literally evolves from its host planet, what should it do? What contribution to a viable planetarity can it make? 

A preliminary answer is that it makes the contemporary notion of the planetary possible. It doesn’t cause the planetary as a condition to come into being, but in concert with scientific and philosophical inquiries, it makes it possible for the primary sapient species within that circumstance to grasp the terms of its own emergence. It shows intelligence where and how it came to be. 

Planetary-scale computation is an example of what may be called, after the great Polish novelist Stanislaw Lem, an "epistemological technology." The most important social impact of some technologies is not just in what they allow people to do, but in what they reveal about how the world works. This can lead to trouble. While anxiety about technology is expressed in accounts of its pernicious effects, that unease is sometimes rooted in what technology uncovers: what was there all along. Microscopes did not conjure microbes into being, but once we knew they were there, we could never see surfaces the same way again.

Such unrequested demystifications are disturbing, especially when they seem to demote us humans from a place of presumed privilege. Even as such technologies reorganize personal and global economies, their deeper philosophical implication concerns how they introduce a Copernican trauma, unsettling our previous understanding of the cosmos. Such traumas are not always recognized for their significance (including in Copernicus’ time) and usually take generations to reverberate. 

The revelation of the planetary — so different from the “international,” the “global” or the “world” — is a condition that comes into view via the location of human culture as an emergent phenomenon of an ancient and deep biogeochemical flux. Planetary-scale computation may have first emerged largely from the context of a “Western” science and “humanist” inquiry, but its implications in the disclosure of planetary conditions will upend and disrupt the conceits of such historical distinctions as much as Darwinian biology evacuated the church of its final biopolitical authority.

Terraforming

The technologies of a planetary society are ongoing processes over which we have agency. In its current commercial form, the primary purpose of planetary-scale computation is to measure and model individual people in order to predict their next impulse. But a more aspirational goal would be to contribute to the comprehension, composition and enforcement of a shared future that is more rich, diverse and viable.

Instead of reviving ideas of nature, we must reclaim the artificial — not fake, but designed. For this, human-machine intelligence and urban-scale automation become part of an expanded landscape of life, information and labor. They are part of a living ecology, not a substitute for one. Put more specifically: The response to anthropogenic climate change will need to be equally anthropogenic.

“Some unrequested demystifications are disturbing, especially when they seem to demote us humans from a place of presumed privilege.”

The critical apparatuses of such a response include automation (understood as an ecological principle of inter-entanglement more than a reductive autonomy); geoengineering (understood in terms of climate-scale effects more than a specific portfolio of techniques); the rotation of planetary-scale computation away from individual users and toward processes more relevant for long-term ecological viability; the deliberate self-design of sapient species toward variation, including reproductive technologies, universal medical services and synthetic gene therapies; the cultivation of artificial mathematical, linguistic and robotic intelligences with which general sapience deliberately evolves; the deployment of experimental expertise with biotechnologies, through which living matter composes living matter; the intensification of urban habitats and technologies as media for the general provision of universal and niche services; the projective migration outside the Kármán line, the boundary between Earth's atmosphere and outer space, from where the existing and potential terrestrial planetarity comes into focus; and, finally, the aggregation of creative governing intelligences capable of architecting such mobilizations.

I call this terraforming — not of another planet, but of our own. It is a deliberate, practical, political and programmatic project to conceive and compose a viable planetarity based on the secular disenchantment of Earth through the ongoing artificialization of intelligence and the emergence of a general sapience that conjoins human and nonhuman cognition. It names a future condition realized by the rationalization of ecosystems toward diversification and order. And it names the liberation of synthetic intelligence.

Synthetic Intelligence

It is almost certain that today the growth of machine intelligence is hamstrung by various ideologies of “artificial intelligence,” which are in turn hobbled by misconceptions about what is and is not artificial and what is and is not intelligent. Foremost among these is the presumption that machine intelligence must be recognizably “human-like” to qualify as intelligence. Multiple anthropomorphic biases and presumptions have left us with inadequate allegories for the remarkable things that machine intelligence does accomplish. Most of these look nothing like human thought — though some do, like the very large natural language processing models.

Recently, researchers at the Moscow-based Strelka Institute and I have been revisiting the distinction between the “artificial” and the “synthetic” posed by the economist Herbert Simon half a century ago. The artificial refers to something that merely resembles an original (such as a cheap plastic “diamond”) whereas the synthetic is a genuine and meaningful version of something that was deliberately created (such as a laboratory-grown diamond identical to a “natural” one at the molecular level). Thus, artificial intelligence merely seems smart, but synthetic intelligence really is. We should be pursuing synthetic intelligence, not artificial intelligence. 

There is another connotation of synthetic intelligence that is perhaps even more important: the synthesis of human and machine intelligence in pursuit of insights or creativity that would be impossible for either on their own. A now-famous example of this occurred in the Go match between Lee Sedol and AlphaGo in 2016. The AI's move 37 in the second game was one that Go experts have said no human could have imagined. But in the fourth game, Sedol's move 78 was equally unexpected and creative. If the first move proved that AlphaGo was in some way not just "smart" in a narrow sense, but also capable of creating novelty, the second move proved that in response to this, a human saw the game differently and so produced a brilliant move that also never would have happened otherwise. This is a synthesis of intelligences, a glimpse of what a general sapience may look like.

“Any refusal or acceptance of the costs of synthetic intelligence must also consider the price of natural intelligence.”

The planetarity of computation forms what I have called an "accidental megastructure" composed of overlapping functional layers. Quite literally, it is a stack extending down to the mines of central Africa through subterranean data centers and transoceanic cables to interlaced urban networks up to the glowing glass rectangles through which we view it and it views us. Planetary-scale computation is not virtual. It is a kind of terraforming of its host planet.

Measuring the weight of planetary-scale computation includes a sober reckoning with the physical costs of its sprawling infrastructures, which means differentiating essential purpose from the trivial, and ultimately pondering the price of intelligence itself. In the context that really matters most, the cultivation of synthetic intelligences capable of collaboration with our own most virtuous ambitions and virtuoso expressions is precious. The syntheses they portend are available only if we pursue them with resolve and clarity about their high costs.

Any refusal or acceptance of the costs of synthetic intelligence must also consider the price of natural intelligence. It was not only symbiotic social cooperation but also tumultuous mountains of gore that led our common ancestors from Olduvai Gorge to Göbekli Tepe, and to the literate cultures of Mesopotamia, East Asia and Mesoamerica. The deepest values are at stake. Is the very long-term evolution of "intelligence" — human, animal, machine, hybrids — a fundamental purpose of the organization and complexification of life itself? If so, now that intelligence begins to migrate to the inorganic substrate of silicon, what planetarities does this portend?

An Ecological Theory Of Automation 

Intelligence does not live in a petri dish or laboratory or inside a single skull; it lives out in the open, it lives in and as our cities. A city is not just architecture plus dwellers; it is an artificial environment par excellence. As the designer and programmer Ben Cerveny has said, the city is “perhaps the longest continuous process that humans have created.” Introducing synthetic computational intelligence into urban systems augments existing forms of embedded sensing and intelligence, and in so doing produces novel qualities.

I am reminded of Gakutensoku, a massive robot built in Osaka in the 1920s by Makoto Nishimura. Nishimura was appalled by the mechanistic humanoid robots in Karel Čapek's play "Rossum's Universal Robots," which introduced the term "robot." So he set out to make an automaton that manifested what he saw as the most noble and fragile aspects of human culture, complete with intricate facial expressions and the ability to transcribe poetry.

When I visited a factory in Shenzhen that makes cases for Android phones and employs many robots and people working side by side, I was struck by an unexpected feeling, a kind of serenity. The mood was calm, not frantic. Some things were moving quickly but quietly, while other things were quite still, as if waiting their turn. It did not feel like a “factory” in the Charlie Chaplin sense; it felt much more like a garden of machines in the Richard Brautigan sense. 

“Intelligence does not live in a petri dish or laboratory or inside a single skull; it lives out in the open, it lives in and as our cities.”

I remarked to my colleague that I would very much like to spend time in a cafe like this, that it would make for a lovely kind of public gathering spot. As I spoke, I realized that this was no joke. The present locus of automation will inevitably spill out into the city, and as it does we must be aware of the deceptively simple fact that automation creates a particular kind of ambiance. It is more than form following function; it is a functionalism becoming a delicate formation. Or at least it can be. 

To avoid the miserable future in which urban computational automation is trained foremost on the optimization of the most arbitrary and banal aspects of human spatial logistics (parking, security, vending, etc.), a different understanding of automation is needed. First, automation is not primarily about autonomy; and second, globalization didn't cause automation, automation caused globalization. In the densest city or jungle, causality and determination are everywhere, but their processes and techniques are themselves indeterminate. If we were to imagine these as dominos, their arrangement extends deep into the heart of things, and the agency of their cascade goes beyond the intention of any first tipping.

These systems are choreographed, but they also evolve with each iteration, learning as they go and shaping and being shaped by the worlds in which they are situated. As urban infrastructure they remember and encode specific decisions that can be repeated over and over. The superficial appearance of autonomy — of a machine, process, person — is an illusion. Their causal relations upon relations have been set in advance by previous stages and positions, and so the whole automated set-piece is itself automated. Our synthetic automation makes use of existing footprints and previous patterns of urbanization, and also forces others that generate quite different geographies. New niches emerge, while others go dark. 

The Situation Of Intelligence

The most critical relation between the planetarity that has been revealed and the planetary that must be composed depends on the position of intelligence from which any such intervention might take place, and how that position might comprehend the situation of its agency. This is far more difficult than some would have us believe. It is to be born into unpayable debt.

The decisive paradox for general sapience is the dual recognition that, first, its existence is extremely rare and extremely fragile, vulnerable to numerous threats of extinction in the near and long term, and second, that the ecological consequences of its own historical emergence have been a chief driver of the conditions that establish this very same precarity. The approach to these questions cannot avoid the correspondence between the honing of our own sapience through the machinations of war and strategic violence, and the emergence of machine intelligence that is dependent upon the provisions of material extraction, military applications and their ecological and social devastations.

“What future would make the past worth it?”

Both modes of intelligence are also modes of planetarity. Both are positions from which reason exercises its agency, for better or worse. Both are also tied to what we all may recognize as our most inspired aspirations. But if planetary intelligence is to survive the consequences of its own appearance, in the short term and in the long term, it must reform its trajectory or risk extinction and disappearance. 

This historical moment seems long but may be fleeting. It is defined by a paradoxical challenge. How can the ongoing emergence of planetary intelligence comprehend its own evolution and the astronomical preciousness of sapience and simultaneously recognize itself in the reflection of the violence from which it emerged and against which it struggles to survive? It is possible that our privilege of retroactive hindsight will decide that, for some final register, this history was a worthwhile and even perhaps necessary condition for the ultimate emergence of planetary intelligence. Even if so, its development and its survival depend on a decisive graduation from primordial habits.  

What future would make the past worth it? Perhaps the future of planetary intelligence is now as existentially entwined with a radically different career of composition, foresight and order-giving as its advent was with the cascading centuries of pilotless destruction. Taking this new existential condition seriously demands a radically different sort of philosophy.

The post Planetary Sapience appeared first on NOEMA.
