The Danger Of Superhuman AI Is Not What You Think

Today’s generative AI systems like ChatGPT and Gemini are routinely described as heralding the imminent arrival of “superhuman” artificial intelligence. Far from a harmless bit of marketing spin, the headlines and quotes trumpeting our triumph or doom in an era of superhuman AI are the refrain of a fast-growing, dangerous and powerful ideology. Whether used to get us to embrace AI with unquestioning enthusiasm or to paint a picture of AI as a terrifying specter before which we must tremble, the underlying ideology of “superhuman” AI fosters the growing devaluation of human agency and autonomy and collapses the distinction between our conscious minds and the mechanical tools we’ve built to mirror them.

Today’s powerful AI systems lack even the most basic features of human minds; they do not share with humans what we call consciousness or sentience, the related capacity to feel things like pain, joy, fear and love. Nor do they have the slightest sense of their place and role in this world, much less the ability to experience it. They can answer the questions we choose to ask, paint us pretty pictures, generate deepfake videos and more. But an AI tool is dark inside.

That’s why, at a machine learning conference in September of 2023, I asked the Turing Award winner Yoshua Bengio why we keep hearing about “superhuman” AI when the products available are so far from what a human is, much less superhuman. My keynote prior to his had openly challenged this kind of rhetoric, which featured heavily in Bengio’s subsequent presentation — just as it does on his website and in his warnings to lawmakers and other audiences that humans risk “losing control to superhuman AIs” in just the next few years.

Bengio was once one of the more sober and grounded voices in the AI research landscape, so his sudden adoption of this rhetoric perplexed me. I certainly don’t disagree with him about the dangers of embedding powerful but unpredictable and unreliable AI systems in critical infrastructure and defense systems or the urgent need to govern these systems more effectively. But calling AI “superhuman” is not a necessary part of making those arguments.

So, I asked him, isn’t this rhetoric ultimately unhelpful and misleading given that the AI systems that we so desperately need to control lack the most fundamental capabilities and features of a human mind? How, I asked, does an AI system without the human capacity for conscious self-reflection, empathy or moral intelligence become superhuman merely by being a faster problem-solver? Aren’t we more than that? And doesn’t granting the label “superhuman” to machines that lack the most vital dimensions of humanity end up obscuring from our view the very things about being human that we care about?

I was trying to get Bengio to acknowledge that there is a huge difference between superhuman computational speed or accuracy — and being superhuman, i.e., more than human. The most ordinary human does vastly more than the most powerful AI system, which can only calculate optimally efficient paths through high-dimensional vector space and return the corresponding symbols, word tokens or pixels. Playing with your kid or making a work of art is intelligent human behavior, but if you view either one as a process of finding the most efficient solution to a problem or generating predictable tokens, you’re doing it wrong.

“Attempts to erase and devalue the most humane parts of our existence are nothing new; AI is just a new excuse to do it.”

Bengio refused to grant the premise. Before I could even finish the question, he demanded: “You don’t think that your brain is a machine?” Then he asked: “Why would a machine that works on silicon not be able to perform any of the computations that our brain does?”

The idea that computers work on the same underlying principles that our brains do is not a new one. Computational theories of the mind have been circulating since the 20th century origins of computer science. There are plenty of cognitive scientists, neuroscientists and philosophers who regard computational theories of mind as a mistaken or incomplete account of how the physical brain works (myself among them), but it’s certainly not a bizarre or pseudoscientific view. It’s at least conceivable that human brains, at the most basic level, might be best described as doing some kind of biological computation.

So what surprised and disturbed me about Bengio’s response was not his assumption that biological brains are a kind of machine or computer. What surprised me was his refusal to grant, at least initially, that human intelligence — whether computational at the core or not — involves a rich suite of capabilities that extend well beyond what even cutting-edge AI tools do. We are more than efficient mathematical optimizers and probable next token generators.

I had thought it was a fairly obvious — even trivial — observation that human intelligence cannot be reduced to these tasks, which can be executed by tools that even Bengio admits are as mindless, as insensible to the world of living and feeling, as your toaster. But he seemed to be insisting that human intelligence could be reduced to these operations — that we ourselves are no more than task optimization machines.

I realized then, with shock, that our disagreement was not about the capabilities of machine learning models at all. It was about the capabilities of human beings, and what descriptions of those capabilities we can and should license.

What Is Superhuman AI?

On his website, Bengio defines “superhuman AI” as an AI system that “outperforms humans on a vast array of tasks.” That’s pretty vague. What falls under the definition of a task? Is anything a human being does a task?

For decades, the AI research community’s holy grail of artificial general intelligence (AGI) was defined by equivalence with human minds — not just the tasks they complete. IBM still echoes this traditional notion in its definition of the AGI-focused research program Strong AI:

[AGI] would require an intelligence equal to humans; it would have a self-aware consciousness that has the ability to solve problems, learn, and plan for the future. … Strong AI aims to create intelligent machines that are indistinguishable from the human mind.

But OpenAI and researchers like Geoffrey Hinton and Yoshua Bengio are now telling us a different story. A self-aware machine that is “indistinguishable from the human mind” is no longer the defining ambition for AGI. A machine that matches or outperforms us on a vast array of economically valuable tasks is the latest target. OpenAI, which led the way in moving AGI’s goalposts, defines AGI in its charter as “highly autonomous systems that outperform humans at most economically valuable work.”

OpenAI’s AGI bait-and-switch wipes anything that does not count as economically valuable work from the definition of intelligence. That’s a massive erasure of our human capacity and a reduction of ourselves that we should resist. Are you no more than the work you completed today? Are you any less human or less intelligent if you spent your waking hours doing things that do not have well-defined “solutions,” that are not tasks that can be checked off a list, and that have no market price?

“By describing as superhuman a thing that is entirely insensible and unthinking, we implicitly erase or devalue the concept of a ‘human.'”

Once you have reduced the concept of human intelligence to what the markets will pay for, then suddenly, all it takes to build an intelligent machine — even a superhuman one — is to make something that generates economically valuable outputs at a rate and average quality that exceeds your own economic output. Anything else is irrelevant.

As the ideology behind this bait-and-switch leaks into the wider culture, it slowly corrodes our own self-understanding. If you try to point out, in a large lecture or online forum on AI, that ChatGPT does not experience and cannot think about the things that correspond to the words and sentences it produces — that it is only a mathematical generator of expected language patterns — chances are that someone will respond, in a completely serious manner: “But so are we.”

According to this view, characterizations of human beings as acting wisely, playfully, inventively, insightfully, meditatively, courageously, compassionately or justly are no more than poetic license. According to this view, such humanistic descriptions of our most valued performances convey no added truth of their own. They point to no richer realities of what human intelligence is. They correspond to nothing real beyond the opaque, mechanical calculation of word frequencies and associations. They are merely florid, imprecise words for that same barren task.
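To make that reductive picture concrete, consider a minimal sketch of what the “mechanical calculation of word frequencies and associations” looks like in code. This is a toy bigram model in Python, offered purely as an illustration; it assumes nothing about, and bears no resemblance to, the actual scale or architecture of systems like ChatGPT.

```python
import random
from collections import Counter, defaultdict

# A toy corpus; real systems train on hundreds of billions of tokens.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word (bigram frequencies).
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def next_token(prev: str) -> str:
    """Pick a continuation weighted only by observed frequency."""
    words, counts = zip(*successors[prev].items())
    return random.choices(words, weights=counts, k=1)[0]

# Emit a run of "statistically expected tokens" from a one-word prompt.
word, output = "the", ["the"]
for _ in range(6):
    word = next_token(word)
    output.append(word)
print(" ".join(output))
```

The result is a string of locally plausible words with nothing behind them. On the view described above, the difference between this toy and a human mind is a matter of scale, not of kind; the rest of this essay resists exactly that claim.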

I am still not sure whether Bengio himself truly believes this. Later in the Q&A following his talk, he asked to revisit my question, and it seemed that he wanted to strike a more conciliatory tone and seek some common ground. But when he refused to grant that humans are more than task machines executing computational scripts and issuing the statistically expected tokens, I took him at his word. If beating us at that game is all it takes to be superhuman, one might think that silicon “superhumans” have been among us since World War II, when the U.K.’s Colossus became the first computer to crack a code faster than humans could.

Yet Colossus only beat us at one task; according to Bengio, “superhuman” AI will beat us at a “vast array of tasks.” But that assumes being human is to be nothing more than a particularly versatile task-completion machine. Once you accept that devastating reduction of the scope of our humanity, the production of an equivalently versatile task-machine with “superhuman” task performance doesn’t seem so far-fetched; the notion is almost mundane.

So what’s the harm in speaking this way?

Being Superhuman

The word “superhuman” means “human, but more so.” To be superhuman is to have the same powers that humans do, plus other powers we lack — or to have human powers to a degree that we don’t. It’s not a word we use for something that’s of a radically different kind from us, something that lacks fundamental human qualities and powers but performs better than we do on some metrics. We don’t talk about “superhuman airplanes” or “superhuman cheetahs” even though airplanes and cheetahs both travel faster than any human has ever run.

We use and understand the term superhuman to mean something very much like us, but better. The fictional Superman is perhaps the best-known English-language articulation of the superhuman idea. Superman is not Earth-born, but he embodies and far exceeds our highest human ideals of physical, intellectual and moral strength. He isn’t superhuman just because he flies; a rocket does that. He isn’t superhuman because he can move heavy things; for this, a forklift will do. Nor is he superhuman because he excels at a “vast array” of such tasks. Instead, he is an aspirational magnification of what we see as most truly human.

There are no fundamental dimensions of the human personality missing from Superman. He is an imagined answer to the question: “What if us, only more so?” He desires, he suffers, he loves, he grieves, he hopes, he cares and he doubts; he experiences all these even more intensely and deeply than we do. He is as far as one can be from a mindless producer of efficiencies. His embodiment as Superman is a direct expression of each of the aspects of humanity that we value most, the things about our kind that we tend to see as universally shared.

By describing as superhuman a thing that is entirely insensible and unthinking, an object without desire or hope but relentlessly productive and adaptable to its assigned economically valuable tasks, we implicitly erase or devalue the concept of a “human” and all that a human can do and strive to become. Of course, attempts to erase and devalue the most humane parts of our existence are nothing new; AI is just a new excuse to do it.

“Maybe the moral and experiential poverty of AI will bring the most vitally human dimensions of our native intelligence back to the center of our attention and foster a cultural reclamation and restoration of their long-depreciated value.”

Indeed, for the entirety of the Industrial Age, those invested in the maximally efficient extraction of productive outputs from human bodies have been trying to get us to view ourselves — and more importantly one another — as flawed, inefficient, fungible machines destined to be disposed of as soon as our output rate slips below an expected peak or the moment a more productive machine can be found to step in.

The struggle against this reductive and cynical ideology has been hard-fought for a few hundred years thanks to vigorous resistance from labor and human rights movements that have articulated and defended humane, nonmechanical, noneconomic standards for the treatment and valuation of human beings — standards like dignity, justice, autonomy and respect.

Yet to finally convince us that humans are no more than mechanical generators of economically valuable outputs, it seems to have only required machine tools that generate such outputs in our primary currencies of human meaning: language and vision. Now that you can elicit an infinite multitude of these currencies from an app on your smartphone, we accept the advent of “superhuman AI” as a foregone conclusion, something already quite literally at hand.

Reclaiming Our Humanity

The battle is not lost, however. As the philosopher Albert Borgmann wrote in his 1984 book “Technology and the Character of Contemporary Life,” it is precisely when a technology has nearly supplanted a vital domain of human meaning that we are able to feel and mourn what has been taken from us. It is at that moment that we often begin to resist, reclaim and rededicate ourselves to its value.

His examples might seem mundane today. He wrote about the post-microwave revival of the art of cooking as a cherished creative and social practice, one irreplaceable by even the most efficient cooking machines. Indeed, the skilled and visionary practice of cooking now carries far greater cultural value and status than it did in the late 20th century. Similarly, the treadmill did not eliminate the irreplaceable art of running and walking outdoors just by offering a more convenient and efficient means to the same aerobic end. In fact, Borgmann thought the sensory and social poverty of the experience of using a treadmill or microwave could reinvigorate our cultural attention to what they diminished — activities that engage the whole person, that continually remind us of our place in the physical world and our belonging there with the other lives who share it. He was right.

Perhaps the ideology of “superhuman” AI, in which humans appear merely as slow and inefficient pattern matchers, could spark an even more expansive and politically significant revival of humane meaning and values. Maybe the moral and experiential poverty of AI will bring the most vitally human dimensions of our native intelligence back to the center of our attention and foster a cultural reclamation and restoration of their long-depreciated value.

What might that look like? Imagine any sector of society where the machine ideology now dominates and consider how it would look if the goal of mechanical optimization became secondary to enabling humane capabilities.

Let’s start with education. In many countries, the former ideal of a humane process of moral and intellectual formation has been reduced to optimized routines of training young people to mindlessly generate expected test-answer tokens from test-question prompts. Generative AI tools — some of which advertise themselves as “your child’s superhuman tutor” — promise to optimize even a kindergartener’s learning curve. Yet in the U.S., probably the world’s tech-savviest nation, young people’s love of reading is at its lowest levels in decades, while parents’ confidence in education systems is at a historic nadir.  

What would reclaiming and reviving the humane experience of learning look like? What kind of world might our children build for themselves and future generations if we let them love to learn again, if we taught them how to rediscover and embrace their humane potential? How would that world compare to one built by children who only know how to be underperforming machines?

Or consider the economy. How would the increasingly sorry state of our oceans, air, soil, food web, infrastructures and democracies look if we stopped rewarding mindless, metastatic growth in “domestic product” that we make machines (human or silicon, whichever is cheaper) churn out in any environmentally or socially poisonous form that can sell? How would the future we are headed for change if we mandated new economic incentives and measures tied to medium- and long-term indicators of health, sustainability, human development and social trust and resilience?

“What if, instead of replacing humane vocations in media, design and the arts with mindless mechanical remixers and regurgitators of culture like ChatGPT, we asked AI developers to help us with the most meaningless tasks in our lives?”

What if tax relief for wealthy corporations and investors depended entirely on how their activities enabled those humane indicators to rise? How would our jobs change, and how might young people’s enthusiasm for investing their energies in the workforce be boosted, if the measure of a company’s success were not simply the mechanical optimization of its share price, but a richer and longer-term assessment of its contribution to the quality of our lives together?

What about culture? How different would the future look if current efforts to use AI to replace human cultural outputs were stalled by a renewed affection for our own capacity to create meaning, to tell the world’s stories, to invent new forms of beauty and expression, to elevate and ornament the raw animal experience of living? What if, instead of replacing these humane vocations in media, design and the arts with mindless mechanical remixers and regurgitators of culture like ChatGPT, we asked AI developers to help us with the most meaningless tasks in our lives, the ones that drain our energy for everything else that matters? What if you never had to file another tax form?

What if we designed technologies like AI with and for the benefit of those most vulnerable to corruption, exploitation and injustice? What if we used our best AI tools to more quickly and reliably surface evidence of corrupt practices, increase their political cost and more systematically push corruption and exploitation toward the margins of public life? What if populations collectively vowed to reward only those politicians, police and judges willing to take the risks of demonstrating greater transparency, accountability and integrity in governing?

Even in these more humane futures, we’d be far from utopia. But those possible futures are still much brighter than any dominated by the ideology of superhuman AI.

That doesn’t mean that AI has no place in a more humane world. We need AI to take over inherently unsafe or human-unfriendly tasks like environmental cleanup and space exploration; we need it to help us slash the costs, redundancies and time burden of mundane administrative processes; we need AI to scale up infrastructure maintenance and repair; we need AI for the computational analysis of complex systems like climate, genetics, agriculture and supply chains. We are in no danger of running out of important things for our machines to do.

We are in danger of sleepwalking our way into a future where all we do is fail more miserably at being those machines ourselves. Might we be ready to wake ourselves up? In an era that rewards and recognizes only mechanical thinking, can humans still remember and reclaim what we are? I don’t think it is too late. I think now may be exactly the time.

The Thoughts The Civilized Keep

Credits

Shannon Vallor is the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence and the director of the Centre for Technomoral Futures at the University of Edinburgh. Her latest book is “The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking” (Oxford University Press).

It is a profoundly erroneous truism … that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case.

Civilization advances by extending the number of important operations which we can perform without thinking about them.

— Alfred North Whitehead, “An Introduction to Mathematics,” 1911

GPT-3 is the latest attempt by OpenAI, a tech research lab in San Francisco, to unlock artificial intelligence with an anvil rather than a hairpin. As brute force strategies go, the results are impressive. The language-generating model performs well across a striking range of contexts. Given only simple prompts, GPT-3 writes not just interesting short stories and clever songs, but also executable code such as web graphics.

GPT-3’s ability to dazzle with prose and poetry that appears entirely natural, even erudite or lyrical, is less surprising. It’s a parlor trick that its predecessor performed a year earlier, though the predecessor’s then-massive 1.5 billion parameters are swamped by the 175 billion GPT-3 uses to enhance its stylistic abstractions and semantic associations.

Just like their great-grandmother, Joseph Weizenbaum’s ELIZA, a natural language processing program developed in the 1960s, these systems benefit considerably from human reliance on familiar heuristics for judging speakers’ cognitive abilities. GPT-3 readily deploys artful and sonorous speech rhythms, sophisticated vocabularies and references, and erudite grammatical constructions. Like the bullshitter who gets past their first interview by regurgitating impressive-sounding phrases from the memoir of the company’s CEO, GPT-3 spits out some pretty good bullshit.

Yet the connections GPT-3 makes are not illusory or concocted from thin air. It and many other machine learning models for natural language processing and generation do, in fact, track and reproduce real features of the symbolic order in which humans express thought. And yet, they do so without needing to have any thoughts to express.

The hype around GPT-3 as a path to general artificial intelligence reveals the sterility of mainstream thinking about AI today. More importantly, it reveals the sterility of our current thinking about thinking.

“Like the bullshitter who gets past their first interview by regurgitating impressive-sounding phrases from the memoir of the company’s CEO, GPT-3 spits out some pretty good bullshit.”

A growing number of today’s cognitive scientists, neuroscientists and philosophers are aggressively pursuing well-funded research projects devoted to revealing the underlying causal mechanisms of thought, and how they might be detected, simulated or even replicated by machines. But the purpose of thought — what thought is good for — is a question widely neglected today, or else taken to have trivial, self-evident answers. Yet the answers are neither unimportant, nor obvious.

The neglect of this question leaves uncertain the place for thought in a future where unthinking intelligence is no longer an oxymoron, but soon to be a ubiquitous mode of machine presence, one that will be embodied in the descendants and cousins of GPT-3. There is an urgent question haunting us, an echo from Alfred North Whitehead’s conclusion in 1911 that civilizations advance by expanding our capacity for not thinking: What thoughts do the civilized keep?

Whitehead, of course, was explicitly talking about the operations of mathematics and the novel techniques that enable ever more advanced shortcuts to be taken in solving mathematical problems. To suggest that he is simply wrong, that all operations of thought must be forever retained in our cognitive labors, is to ignore the way in which shedding elementary burdens of thought often enables us to take up new and more sophisticated ones. As someone who lived through the late 20th century moral panic over students using handheld scientific calculators in schools, I embrace rather than deny the vital role that unthinking machines have historically played in enabling humans to stretch the limits of our native cognitive capacities.

Yet Whitehead’s observation leaves us to ask: What purpose, then, does thinking hold for us other than to be continually surpassed by mindless technique and left behind? What happens when our unthinking machines can carry out even those scientific operations of thought that mathematical tables, scientific calculators and early supercomputers previously freed us to pursue, such as novel hypothesis generation and testing? What happens when the achievements of unthinking machines move outward from the scientific and manufacturing realms, as they already have, to bring mindless intelligence further into the heart of social policymaking, political discourse and cultural and artistic production? In which domains of human existence will thinking — slow, fallible, fraught with tension, uncertainty and inconsistency — still hold its place? And why should we want it to?

The Labor Of Understanding

To answer these questions, we need to focus on what unthinking intelligence lacks. What is missing from GPT-3?

It’s more than just sentience, the ability to feel and experience joy or suffering. And it’s more than conscious self-awareness, the ability to monitor and report upon one’s own cognitive and embodied states. It’s more than free will, too — the ability to direct or alter those states without external compulsion.

Of course, GPT-3 lacks all of these capacities that we associate with minds. But there is a further capacity it lacks, one that may hold the answer to the question we are asking. GPT-3 lacks understanding.

This is a matter of some debate among artificial intelligence researchers. Some define understanding simply as behavioral problem-solving competence in a particular environment. But this is to mistake the effect for the cause, to reduce understanding to just one of the practical powers that flow from it.

For AI researchers to move past the behaviorist conflation of thought and action, the field needs to drink again from the philosophical waters that fed much AI research in the late 20th century, when the field was theoretically rich, albeit technically floundering. Hubert Dreyfus’s 1972 ruminations in “What Computers Can’t Do” (and 20 years later, in “What Computers Still Can’t Do”) still offer many soft targets for legitimate criticism, but his and other work of the era at least took AI’s hard problems seriously. Dreyfus in particular understood that AI’s true hurdle is not performance but understanding.

Understanding is beyond GPT-3’s reach because understanding cannot occur in an isolated computation or behavior, no matter how clever. Understanding is not an act but a labor. Labor is entirely irrelevant to a computational model that has no history or trajectory in the world. GPT-3 endlessly simulates meaning anew from a pool of data untethered to its previous efforts. This is the very power that enables GPT-3’s versatility; each task is a self-contained leap, like someone who reaches the flanks of Mt. Everest by being flung there by a catapult.

“Understanding is beyond GPT-3’s reach because understanding cannot occur in an isolated computation or behavior, no matter how clever.”

GPT-3 cannot think, and because of this, it cannot understand. Nothing under its hood is built to do it. The gap is not in silicon or rare metals, but in the nature of its activity.

Understanding does more than allow an intelligent agent to skillfully surf, from moment to moment, the associative connections that hold a world of physical, social and moral meaning together. Understanding tells the agent how to weld new connections that will hold under the weight of the intentions, values and social goals behind our behavior.

Predictive and generative models like GPT-3 cannot accomplish this. GPT-3 doesn’t even know that to successfully answer the question “Can AI be conscious?”, as the philosopher Raphaël Millière prompted it to do in an essay, it can’t randomly reverse its position every few sentences.

GPT-3 effortlessly completed the essay assigned by Millière. This is a sign not of GPT-3’s understanding, but of its absence. To write it, it did not need to think; it did not need to struggle to weld together, piece by piece, a singular position that would hold steady under the pressure of its other ideas and experiences, or questions from other members of its lived world.

The instantaneous improvisation of its essay wasn’t anchored to a world at all; instead, it was anchored to a data-driven abstraction of an isolated behavior-type, one that could be synthesized from a corpus of training data that includes millions of human essays, many of which happen to mention consciousness. GPT-3 generated an instant variation on those patterns and, by doing so, imitated the behavior-type “writing an essay about AI consciousness.”

But it did not need to know anything about what an essay on AI consciousness might seek to do, or how it would fit into the larger world of social meaning that makes the subject of AI consciousness worth seeking to understand. It is akin to the difference between a songbird’s tuneful mimicry of a human lullaby and a new human father’s variation — however tuneless — on the lullaby his mother once sang to him as a child. One act is anchored in an understanding of the shared social history of meaning that gives a lullaby significance. The other is not.

Understanding is a lifelong labor. It is also one carried out not by isolated individuals but by social beings who perform this cultural labor together and share its fruits. The labor of understanding is a sustained, social project, one that we pursue daily as we build, repair and strengthen the ever-shifting bonds of sense that anchor our thoughts to the countless beings, things, times and places that constitute a world. It is this labor that thinking belongs to.

When GPT-3 is unable to preserve the order of causes and effects in telling a story about a broken window, when it produces laughable contradictions within its own professions of sincere and studied belief in an essay on consciousness, when it is unable to distinguish between reliable scholarship and racist fantasies — GPT-3 is not exposing a limit in its labor of understanding. It is exposing its inability to take part in that labor altogether.

Thus, when we talk about intelligent machines powered by models such as GPT-3, we are using a reduced notion of intelligence, one that cuts out a core element of what we share with other beings. This is not a romantic or anthropocentric bias, or “moving the goalposts” of intelligence. Understanding, as joint world-building and world-maintaining through the architecture of thought, is a basic, functional component of human intelligence. This labor does something, without which our intelligence fails, in precisely the ways that GPT-3 fails.

(Artwork by Beeple)

A Legacy In Danger

While machines remain wholly incapable of the labor of understanding, there is a related phenomenon in the human world. Extremist communities, especially in the social media era, bear a disturbing resemblance to what you might expect from a conversation held among similarly trained GPT-3s. A growing tide of cognitive distortion, rote repetition, incoherence and inability to parse facts and fantasies within the thoughts expressed in the extremist online landscape signals a dangerous contraction of understanding, one that leaves its users increasingly unable to explore, share and build an understanding of the real world with anyone outside of their online haven.

Thus, the problem of unthinking is not uniquely a machine issue; it is something to which humans are, and always have been, vulnerable. Hence the long-recognized need for techniques and public institutions of education, cultural production and democratic practice that can facilitate and support the shared labor of understanding to which thought contributes. Had more nations invested in and protected such institutions in the 21st century, rather than defunding and devaluing them in the name of public austerity and private profit, we might have reached a point by now where humanity’s rich and diverse legacies of shared understanding were secured around the globe, standing only to be further strengthened by our technological innovations.

Instead, systems like GPT-3 now threaten to further obscure the value of understanding and thinking. For as their narrow competence and frequently unreliable performance are gradually supplanted by more stable, adaptable and robust forms of unthinking intelligence, AI systems will appear to be a far more attractive source of prudent decision-making and governance. This will certainly be true if the primary alternative is reliance upon an increasingly disordered tumult of conspiracy-addled humans struggling to hold together even the shared fruits of understanding that prior generations produced.

And so, if the breathlessly over-hyped warnings from Elon Musk, Bill Gates and others of artificially intelligent machines “taking over” our human affairs have any grounding in reality, it comes not from the imminent rise of machines that understand more than we do, but from our collective and institutional failures to preserve our own capacities for this labor.

“Extremist communities, especially in the social media era, bear a disturbing resemblance to what you might expect from a conversation held among similarly trained GPT-3s.”

In an era where the sense-making labor of understanding is supplanted as a measure of human intelligence by the ability to create an app that reinvents another thing that already exists — where we act more like GPT-3 every day — it isn’t a surprise that GPT-3 might be mistaken for the AI breakthrough that will spawn true machine intelligence. But as AI researcher Gary Marcus and many others have acknowledged, that goal awaits in a different direction. If machines ever do join us in the domain of understanding — if they become able to think, know and build new worlds with us or with one another — then GPT-3 will be a footnote in their story.

But even as someone who thinks about AI for a living, I don’t find myself worrying much about when, or if, machines will get there. I find myself worrying about whether, by the time they do, we will still be capable of thinking and understanding alongside them. We can get by and endure, for a while, by riding the coattails of those who labored before us. But not for much longer, unless we repair and restart the engines of thinking for future generations — the cultural institutions, social practices, norms and virtues that valorize and enable, rather than penalize and suppress, the shared human labor of understanding.

Humanity has reached a stage of civilization in which we can build space stations, decode our genes, split or fuse atoms and speak nearly instantaneously with others around the globe. Our powers to create and distribute vaccines against deadly pandemics, to build sustainable systems of agriculture, to develop cleaner forms of energy, to avert needless wars, to maintain the rule of law and justice and to secure universal human rights — these are the keys to our future.

Yet they are all legacies of past labors of understanding that even now we wield with increasingly unsteady and unthinking hands. Of course, these achievements would all be impossible if Whitehead’s words were not in large part true. But we have failed to seriously ask the question that should have followed: “What thoughts do the civilized keep?”
