America Is In A Late Republic Stage Like Rome

NOEMA Magazine, May 20, 2025
Niall Ferguson is the celebrated historian, commentator and biographer whose many books include “The Ascent of Money,” “Kissinger: The Idealist,” “Civilization: The West and the Rest” and “The Square and the Tower.” A senior fellow at the Hoover Institution at Stanford University and at the Belfer Center at Harvard University, Ferguson recently sat down with Noema Editor-in-Chief Nathan Gardels to discuss the Trump agenda, the conflict with China, polarization in America and his own conversion to Christianity. This interview is excerpted from a Berggruen Institute “Futurology” podcast episode.

Nathan Gardels: Under the Trump administration’s radical sovereigntism, which summarily defects from the rules-based liberal world order founded by the U.S. after World War II, it appears America is joining the other axes of upheaval, China and Russia. All these major powers now seek to build their own spheres of influence that challenge such an order.

How do you see this unfolding?

Niall Ferguson: Well, I don’t agree that the United States is somehow aligning itself in any way with the axis of whatever you want to call it, authoritarians, upheaval or ill will.

What’s odd about the four years before Trump is that the Biden-Harris administration came in and was welcomed by liberals around the world. “The adults were back in the room.” American foreign policy was going to respect alliances again, and it all went disastrously wrong. The allies have been sorely disappointed. The net result of the Biden administration’s foreign policy was that an axis formed that didn’t exist in 2020, an axis that brought together Russia, China, Iran and North Korea. And unlike the “axis of evil” of 2002, in the run-up to the Iraq War, it actually exists. It’s not just an idea for a speech. These powers cooperate, economically and militarily.

What went wrong? The answer is a disastrous failure of deterrence that really began in Afghanistan in 2021, got a lot worse in February 2022 when Russia invaded Ukraine, and got even worse in 2023 when Hamas and the Palestinian Islamic Jihad attacked Israel. So, I think one has to understand the re-election of Donald Trump as partly a public reaction against a very unsuccessful Democratic administration, a little bit like what happened in 1980 when Americans voted for Ronald Reagan and repudiated Jimmy Carter during the Iran hostage crisis.

When one asks what the result of re-electing Donald Trump is, I don’t think it’s a big win for China, Russia, Iran and North Korea. Quite the opposite. I think it’s bad news for them.

Let’s just break it down briefly. Many people wrongly thought that it would be beneficial to Vladimir Putin if Donald Trump were re-elected. I don’t think this war is going to be ended on Putin’s terms, if it’s going to be ended. Secondly, maximum pressure is now back on Iran. That’s important. Thirdly, tariffs have been increased on China, so the pressure is on China. Little Rocket Man in North Korea is still waiting to get whatever is coming to him, but I don’t think it’s going to be a love letter from the Trump administration.

In short, for the axis of ill will, it’s bad news that Trump is back.

Gardels: I didn’t mean it in that sense. I meant upheaval in the sense of the liberal international order of free trade and trusted alliances across a unified West. America is moving toward a sovereigntist way of governing itself that is unencumbered by a rules-based system in global affairs that takes into account the interests of others. Trumpist America is leveraging its mercantile might to get its way.

Ferguson: I am always reminded when people talk about the liberal international order of what Voltaire said about the Holy Roman Empire: it was neither holy nor Roman, nor an empire. And the same is true of the liberal international order. It was never very liberal, very international or very orderly. It’s actually an illusion that such a thing ever existed after 1945.

The real structure of power in the world was not the United Nations presiding over a liberal international order. There was a Cold War in which two empires, an American and a Soviet, struggled for power, and the United States at no point ceased to exercise power in the classical sense.

I read so many commentators saying, “How terrible and shocking it is that the United States is reverting to empire after the wonderful time of the liberal international order.” I wrote a book 20 years ago called “Colossus,” making the point that the United States has been an empire for many years and didn’t stop being an empire in 1945.

“When one asks what the result of re-electing Donald Trump is, I don’t think it’s a big win for China, Russia, Iran and North Korea.”

The interesting thing about the Cold War was that both empires accused the other of imperialism, each claiming that it wasn’t imperial. But they both, in fact, functionally were empires.

The United States today has much in common with the empires of the past, particularly in its ability to project military and naval power all around the world. So, I think we should probably be a little bit more skeptical about the concept of a liberal international order.

What’s interesting about Trump is that he’s open about it. He wants Greenland. He wants to retake the Panama Canal. And so, in a sense, we’ve gone back to the era of President William McKinley at the turn of the 20th century. But that’s not surprising, because Trump told us in the campaign back in the summer that McKinley was his hero, and that was not just the “tariff man” McKinley, but clearly also the McKinley who acquired, after the Spanish-American War, Puerto Rico, Guam and the Philippines, with an option on Cuba. So I think we are just back in a late 19th-century mode with Donald Trump.

Tariffs are late 19th century as are immigration restrictions. Much of the populist language that Trump uses would be instantly recognizable to anybody who has studied late 19th-century American history.

The progressive side took a very severe beating in the 2024 election, and they’re currently conducting one of the longest post-mortems in American political history. They’ve yet to figure out why they lost. So, the McKinleyite tendencies have the upper hand.

I don’t think this is anything other than a kind of revelation of what has always been true. There was a time when the neoconservatives openly talked about empire back during the Iraq War.

That all went wrong. One of the points I made in “Colossus” was that the United States is not actually very good at being an empire by the standards of, say, Britain in the 19th century. There’s a structural problem with American Empire, which is worth spelling out.

There are deficits that make it hard to be an effective empire. There’s a deficit in terms of manpower. I mean, America imports people. It doesn’t really export people. Very few Americans want to spend large amounts of time in hot, poor, dangerous places. Hence, the six-month tour of duty for the military abroad.

There’s another kind of deficit, which is the fiscal deficit. America can’t afford to occupy zones across the planet the way the British or French did.

Presently, there is also the problem that America is now spending more on debt interest payments than on the defense budget for the first time in its history. When that is the case, you’re probably in trouble. That’s been true, more or less, of every empire since 16th-century Spain.

And finally, there’s an attention deficit disorder, which I think is inherent in American public and political life. People lose interest in complicated, messy foreign adventures rather quickly, and that makes it very hard to complete them, whether it’s in Vietnam, Iraq or Afghanistan.

All these are structural problems. The American Empire is one of these strange cases of cognitive dissonance: Functionally, the United States has many of the characteristics of an empire, but Americans themselves don’t really want to be in the empire business, and this causes American power to oscillate. There are periods of strength, then there are periods of retreat. And after Trump overreaches, which he doubtless will, there’ll be another bout of retreat. We’ve seen this movie several times.

Gardels: If even the fiction of a rules-based order is not there, then everything’s up for grabs.

Some analysts, like Robert Kaplan, fear we are heading into what he calls a “global Weimar,” meaning that if there’s no authoritative hegemon, there’s going to be chaos in the vacuum, out of which something bad emerges, as happened domestically in interwar Germany.

On the other hand, you have the old idea of the conservative German jurist Carl Schmitt, who envisioned the emergence of several “Großräume,” or “great spaces,” where the main powers are dominant over sea, land and technology in their domain. These spheres of influence, he thought, might balance each other since none are strong enough to dominate.

How do you see the constellation of powers evolving going forward?

Ferguson: I think it’s simpler than any of that suggests. One doesn’t really need to resort to German analogies to explain anything much in the 21st century. We’re in Cold War II, and we’ve been in it for at least six years.

“Functionally, the United States has many of the characteristics of an empire, but Americans themselves don’t really want to be in the empire business, and this causes American power to oscillate.”

The People’s Republic of China is playing the part of the Soviet Union, and the United States is the United States. First of all, you can tell it’s a cold war because there are only two superpowers. There are no other AI superpowers. There are no other quantum superpowers in the realm of technology. There are just two.

The second is that there’s a clear ideological difference between the two, and it’s become more pronounced since Xi Jinping became the Chinese leader and emphasized the Marxist-Leninist roots of the People’s Republic of China.

The United States, even with Donald Trump as president, is fundamentally different. It’s a two-party system, not a one-party system. It’s a system in which the rule of law is real in the sense that even the president is constrained by the law. He may not like it, but he is, and he will be, and that’s fundamentally different from China. So there’s an ideological difference.

And as in the first Cold War, they’re engaged in a technological race as well as in classic geopolitical contests over Taiwan and the South China Sea.

Cold War II is still at a relatively early stage. Yet, already, more or less everything that’s going on in the world can be seen in that context. For example, the war in Ukraine was like the Korean War in 1950, the moment that a hot war made it clear that the world was now a world of two blocs.

If you look at who supports Ukraine and who supports Russia, it is basically the same as who supported South Korea and who supported North Korea in the early 1950s. The Middle East was also a Cold War theater. The Yom Kippur War in 1973 was probably the most important in those days, and here we are again, 50 years later almost to the day, there’s a surprise attack on Israel, and we all have to focus, once again, on the Middle East.

So, I think it’s easier to figure this out if one just thinks that we had an interwar period from about 1991, when the Soviet Union collapsed, until 2012, when Xi Jinping came to power, and certainly until 2016, when Donald Trump came to power.

In that interwar period between the two Cold Wars, we all had a great time. Give or take the odd financial crisis and give or take the odd terrorist attack, there was relative peace.

Importantly, being back in a Cold War is no guarantee that the outcome will be the same, that the U.S. somehow wins all cold wars.

China is a much more formidable opponent than the Soviet Union ever was. Economically, it’s much larger. It’s larger than the U.S. on a purchasing power parity basis. Even on a current dollar basis, it’s much closer — 70% of U.S. GDP roughly — than the Soviet Union ever was — 44% at its peak, not even half.

So this is a tougher Cold War for the United States. Let’s just understand that we had a very nice interwar period after the collapse of the Soviet Union, and now there’s a new Marxist-Leninist superpower that is an even bigger challenge than the last one.

Gardels: So it’s basically a bipolar order going forward?

Ferguson: Yes. And you can see that if you spend time in Europe.

Europeans would like to be players, but they’re not. In fact, they’re really an object of this Cold War more than they’re a subject, in the sense that they can’t exercise strategic autonomy. The war in Ukraine was thrust upon them as a result of the failure of American deterrence. Once that deterrence failed and Russia launched its invasion, it was the American decision to support Zelenskyy when he refused to flee. That, in turn, forced the war on the European allies. Essentially, Europe has been a passenger. European leaders have talked for years about strategic autonomy. The war in Ukraine revealed that they are very far from having it, and it will take many years for them to have it. They are also not contenders in the AI race, and that is pretty fundamental.

Gardels: China and Russia regard themselves these days as “civilizational states,” a way to legitimize their power through the continuity of history. In response to that, you have a lot of people in the West now — Elon Musk, Giorgia Meloni, Viktor Orban — saying what they are about is defending their own civilization.

“Being back in a Cold War is no guarantee that the outcome will be the same, that the U.S. somehow wins all cold wars. China is a much more formidable opponent than the Soviet Union ever was.”

For the Italian prime minister, Western civilization means, as she has put it: Greek philosophy, Roman law and Christian humanism. You wrote the introduction to Palantir CEO Alex Karp’s book, titled “The Technological Republic: Hard Power, Soft Beliefs and the Future of the West.”

So this is kind of a mirror reaction to the claims of Russia and China. Do you see this as an element of the conflict? It’s a cultural and civilizational clash as well as an ideological one.

Ferguson: Yes, I do. I wrote a book called “Civilization” quite a few years ago. The subtitle was “The West and the Rest.” I used to outrage my Harvard colleagues by teaching a course titled “Western Ascendancy: Mainsprings of Global Power.”

The argument of the book and of the course was that something very extraordinary happened in the world around about 1600. People from Western Europe started to leap ahead of the rest of the world in a variety of different ways. They evolved different systems of governance predicated on competition rather than political monopoly. That’s important.

They also pioneered a scientific method that was different from anything that had been done before, and far more effective at establishing ways of managing the natural world as well as understanding it. They also built systems of law — common law and civil law — based on the idea of private property as the foundation. They pioneered modern medicine. They had a different attitude toward consumption and work.

All these different ideas and institutions evolved over time, uniquely in the West, by which I mean Western Europe and the places where people from Western Europe settled in large numbers, like North America.

Other civilizations existed around the world, such as Islamic civilization, but they were fundamentally different. Islamic civilization achieved a great many things, but it didn’t achieve what I’ve just described. Chinese civilization was far more advanced in, say, the year 1000, than anything in Western Europe. But for most of the next millennium, China stagnated.

That is history. Now we are living through the end of that period of Western ascendancy.

Why is that? It is because the rest of the world finally realized, if you can’t beat them, join them. And so, people in non-Western societies, beginning in Japan, downloaded the killer apps of Western civilization. And of course, they work everywhere because one of the important things about ideas and institutions is that they don’t care what color you are or what your religious background is. If you adopt those ideas and institutions, your economy will grow, your human lifespan will increase and everything will be better.

It’s amazing that it took so long. It took into the late 20th century for China to accept that there really was only one path to prosperity, and it involved markets, it involved science. You couldn’t rig those because of Mao’s ideological predilections. Once they finally recognized this, the Chinese caught up and they caught up really quickly.

If you start the clock in 1600, there is not a huge difference between Chinese and European incomes. But they diverge spectacularly all the way until 1979, when, on a purchasing power basis, the average American was 22 times richer than the average Chinese. Now, in 2025, it’s maybe three times, because there’s been a dramatic reconvergence. That’s the story of our time.

That’s the way to think about this historical moment. The problem for the Chinese is that they did not download all the killer apps. They were never willing to download the political competition app, that is to say, the idea that there should be competition between institutions, branches of government and parties. Without that, they can’t really have rule of law, because you can’t have rule of law if there’s no accountability through a system of justice.

So what the Chinese did was to say, “Yeah, we’ll take science, and we’ll certainly take modern medicine, and we’ll have a consumer society, and we’ll have a work ethic, but we just don’t want those institutions that presuppose competition and private property rights.” That is why, in my view, their system can’t succeed. It is incomplete and thus fundamentally doomed. Over the next 10 or 20 years, it will unravel.

Gardels: Why, then, is it so important, as someone like Alex Karp argues, to so ferociously build up hard power superiority in AI and technology if the Chinese system is bound to unravel?

“Now we are living through the end of that period of Western ascendancy.”

Ferguson: Because the lesson of 20th-century history is very clear: Totalitarian regimes are capable of wreaking catastrophic damage, even if they’re ultimately unable to sustain themselves. It is in the period of their greatest strength that they’re at their most dangerous.

Nazi Germany certainly proved that point. So did the Soviet Union. Unlike in say, the 1990s, China is more than a military match for the United States in the Indo-Pacific region. It has a larger navy. It’s accumulated a huge arsenal of nuclear weapons as well as non-nuclear weapons, including advanced missiles that can sink American aircraft carriers.

This dramatic race for military parity has produced a grave threat not only to the United States, but to its allies. I agree with Alex Karp: A world in which China won would be a world in which individual liberty would be quite quickly snuffed out.

If you are an authoritarian regime with AI, a full system of social credit and a total surveillance technology, you can be a far more successful totalitarian state than anything in the mid-20th century, including Stalin’s Soviet Union. That is a real threat, and it’s at its most dangerous now because the United States and its allies are terribly overstretched and underfunded. We are in a situation in which, because of the end of the first Cold War, we thought we owed ourselves a peace dividend that led to a drastic decline in investment in defense technology. That complacency has left us very vulnerable, especially in the Indo-Pacific region.

So, over a 10- or 20-year time frame, free societies are likely to prevail because they will be more innovative.

In the short run, there’s a window of great danger, as there was in the 1930s and as there was again in the 1960s and ‘70s, when totalitarian regimes had a capacity to wage war on free societies and conceivably could win such a war. That’s why Karp is right. We must not allow them to acquire a decisive technological and particularly military technological advantage, because if they have it, they’re highly likely to use it.

Gardels: So where does AI fit into all this? It seems a comforting myth in the West that China can’t innovate. Look at DeepSeek, which matches the best of the West in generative AI, no less, as an open-source model.

Ferguson: A great deal of confusion has come into this debate because people use terms like artificial intelligence and large language models (LLMs) interchangeably. Large language models are a part of AI, but not, in my view, the most important part.

Much of what they do is, in a sense, fake human discourse and allow us quickly to generate texts that seem human, though they’re not generated through human intelligence. This is a toy, really. It’s a toy that allows you to generate books in seconds. It allows you to generate images in seconds. But what these things are is essentially fake human content. There’s some use for this. It probably poses a mortal threat to search of the variety that Google pioneered. But that’s not what matters about AI.

What matters about AI is its ability to do scientific research on a scale never before possible, and because of the harnessing of enormous computational power, to discover and design, for example, new viruses. It’s the power of the scientific AI that should worry us.

It’s also clear, because it’s already happened, that you can have AI-enabled weapon systems in which decisions about targeting and shooting are not taken by human actors, but are taken much more rapidly by artificial intelligence. What worried Henry Kissinger in the later years of his life were not the LLMs, they were the applications of AI to scientific research, and particularly to weapons systems.

Whatever we may say about how we’ll restrain ourselves, I don’t think there’s any guarantee that China will restrain itself. We know the kind of work they were already doing on viruses before AI, the “gain of function research” that very likely was connected to the outbreak of the Covid-19 pandemic in Wuhan. I shudder to think what kind of experiments are going on now with AI that makes it possible to conduct far more radical scientific exploration of virus structures. That’s just one example of why we should be worried.

Gardels: You are the biographer of Henry Kissinger. He has said that AI is a result of the Enlightenment philosophy of critical thinking, but now, with AI, we have a technology that needs a new philosophy. What does he mean by that?

“It’s the power of the scientific AI that should worry us.”

Ferguson: One of the more impressive things about Henry Kissinger, even in his 90s, was his ability to see the implications of artificial intelligence before nearly everybody else, other than the specialists in the field. The insight that he had, long before anyone had heard of ChatGPT, was that we had created technologies that were doing things and delivering outcomes that we could not explain. It was the fact that the reasoning of an artificial intelligence model was non-human, and that it therefore could deliver results we could not explain, by pathways we could not ourselves interpret, that struck him as a great shift. That takes us back to a pre-Enlightenment age, or, I would say, even a pre-Scientific Revolution age, in which much of what went on around human beings was unintelligible.

For most of history, things that went on in the natural world were unintelligible to human beings, so we attributed them to gods or other extraterrestrial forces. The thing that’s interesting about AI is that it has created a new possibility of bewildering outcomes that we cannot explain. And we won’t attribute them to gods. We’ll attribute them to large language models. We’ll attribute them to AI. What worried Kissinger was the sense that things were going to become as unfathomable as they had been to medieval peasants.

Gardels: So all those advances in knowledge take us back to a kind of ignorance.

Ferguson: Yes, they basically demote us. Artificial intelligence is the creation of an alien and superior intelligence in our midst, not coming from far away in another universe.

Think of “The Three-Body Problem.” In Cixin Liu’s science fiction novel, the Trisolarans come from a distant galaxy and are intellectually and technologically superior to us. In our imagination, we always assumed aliens would come from another world. But it turns out that we’re going to build them ourselves and endow them with intelligence that will ultimately be superior to us.

We should be very wary of where that is likely to lead. At the very least, we risk sharing the fate of the horses. Now, horses still exist, and very picturesque they can be. But long ago, they ceased to be the main form of transportation for human beings in a hurry. Just as they were entirely replaced, we are in danger of replacing ourselves the way we once replaced the horses.

Gardels: We’ve had this kind of extremely liberal open society in America that accommodates radical woke thought. Now things seem to have shifted to the prevailing ascent of what some call “the strong gods, family, faith and nation” that harkens back to traditionalist Christian values. Are we witnessing the last sigh of liberalism as the dominant philosophy, or just going through another cycle that will turn again?

Ferguson: I think what was striking about the Great Awokening, the last diffusion of extreme progressive ideology, was how intolerant it was. It made life extremely unpleasant on university campuses because the intolerance of radical progressives for any ideas to the right of themselves was a distinguishing feature of their brief reign of moral terror. In truth, for most of the last 60 years, most people retained considerable allegiance to faith and to nation and to family. You might have been flying over them between Los Angeles and New York, but that was, broadly speaking, the case.

What happened in the 1960s was that the elites, beginning in the English-speaking world, embraced a quite radical social change in which sexuality was far less strictly controlled, in which a whole range of different beliefs were given legitimacy and the gods of the Victorians of the 19th century were ridiculed and mocked.

I don’t think that cultural shift was deeply, profoundly influential on the wider population of the United States. They may have seen it on “The Ed Sullivan Show.” They may have read about it in the newspapers, but I’m not sure it fundamentally altered life in much of the United States. It probably had much more influence on the populations of Western Europe.

So what happened in the last 10 years was that the radical left, having been entirely defeated in the field of economics, decided to adopt a radical identity politics, aiming to transform our understanding of American history and of today’s American society in a way that was deliberately divisive and hostile to individual identity. It re-emphasized racial difference, abandoning the notion that a society could be color blind. It weaponized categories like “transgender,” a tiny minority of people.

“What happened in the last 10 years was that the radical left, having been entirely defeated in the field of economics, decided to adopt a radical identity politics.”

All of these things were calculated to create a new and revolutionary cultural environment. This was achieved to a large extent in many universities, but it didn’t really extend very far. And in fact, when one looks at the polling around the last election, you realize that the left of the Democratic Party on a whole range of issues, like, for example, the rights of transgender athletes to compete in women’s sports, diverged so far from mainstream opinion that they were almost off the charts. Mainstream opinion, regardless of whether it was the opinion of a white person or a brown person, hadn’t moved nearly as far on those identity issues as the left wanted to go.

So what has happened isn’t really a profound backlash, just a repudiation of those ideas by ordinary Americans. And interestingly, that repudiation went right across almost every demographic category. There was only one category of American voter that did not swing to Donald Trump between 2020 and 2024 — white women with college degrees. Everybody else moved away from what the progressive wing of the Democratic Party had been trying to achieve.

Gardels: So this silent majority, culturally and politically, has basically re-emerged.

Ferguson: It never went away, but simply reasserted itself in the face of a very intemperate, radically progressive movement that had detached itself from social reality. When Richard Nixon used the phrase “silent majority,” it was in response to anti-war protests in 1968-69. He understood that if you just did the numbers, the people protesting were a tiny minority of Americans. Most Americans were not actually with them, and so the appeal to a silent majority was a shrewd move by Nixon to exploit the fact that most people are, in fact, quite socially conservative and are not particularly interested in revolutions in their norms.

But the left forgot that again, and it walked into the same trap that the left walked into in ‘68, which was to go too far in radicalizing relations between the sexes and relations between the races. If you go too far in that direction, the silent majority says, “Hang on, we’re going to stop being silent as long as it takes to shut you up.”

Gardels: In your personal life, both you and your wife, Ayaan Hirsi Ali, have converted to Christianity. Does that fit into this larger cultural moment?

Ferguson: My parents left the Church of Scotland before I was even born. My mother, as a physicist, was a strict rationalist, long before anyone had heard of Richard Dawkins or Steven Pinker. I was brought up in a household in which the official line was that life was a cosmic accident.

I abandoned atheism, which is a form of faith in itself, in two steps. First, through historical study, I understood that no society based on atheism had been anything other than disastrous. In fact, the correlation between repudiation of religion and extreme violence is very close. The worst regimes in history engaged in anti-clerical activity: the Bolshevik regime, Mao’s regime in China, not to mention the Nazis, who turned against Christ as they identified him, not wrongly, as Jewish.

So, for a variety of historical reasons, I came to the view that you could not organize a society on the basis of atheism. I became like Tocqueville. I didn’t have any religious faith, but I felt it would be good if people generally did.

The second step that led me to become a Christian was the realization that one couldn’t organize one’s life as an individual or as a family without religious faith, and that the teachings of Christ are an extraordinarily powerful and revolutionary solution to some of the central problems of human existence.

We haven’t come up with anything better. Indeed, all attempts to come up with alternatives have, I think, been failures. So, for very personal reasons, my wife and I arrived at Christianity because there seemed to be no other way for us to live good, fulfilled lives and be effective parents.

Ayaan went on a very different journey. I wouldn’t speak for her, as she began as a Muslim and then spent a period of time as one of the “new atheists.” But she arrived in a very characteristic way, almost by first principles, at the need for a Christian God. She appreciated and arrived at the teachings of Christ in a way that I couldn’t, almost working them out, as it were, from scratch. But we both arrived at the same point.

“There was only one category of American voter that did not swing to Donald Trump between 2020 and 2024 — white women with college degrees.”

These are probably tiny little parts of a revival of religious faith that had been a long time coming, but I think is probably the only way that we in the West will be able to withstand the challenges that we currently face. It’s simply not feasible for us to have the strength to withstand the challenges from the Communist regimes in China and North Korea, the challenges from the nihilistic fascist regime in Russia, the challenge from Iran, the challenge from radical Islam. We can’t withstand those challenges with the scriptures of Richard Dawkins and Steven Pinker. That’s not enough.

Gardels: President Trump has launched a tariff war, as promised, mostly aimed at China. Your thoughts?

Ferguson: One has to understand that the tariffs are part of the backlash against China that Donald Trump led. He campaigned in 2016 as the first politician in a generation to stand up to the Chinese challenge. That was one of the reasons he won. And in his mind, tariffs were an important instrument for that return to a more combative approach. But it is not the only instrument. I don’t think we can separate the tariffs from the tech war. They weren’t separate in 2018-19, and they won’t be separate now.

The United States then not only imposed tariffs on Chinese exports to the U.S.; more importantly, it imposed export controls on important technology, particularly semiconductors going to China. We can trace that back to Trump, but it was stepped up by Joe Biden. I’m thinking particularly of the Commerce Department restrictions on Chinese access to the most sophisticated semiconductors.

That is actually more important than the tariffs in the U.S.-China rivalry, because export controls strike at China’s ability to compete technologically, particularly in AI. That’s why this isn’t going to be just a tit-for-tat game about tariffs. It will also involve measures relating to technology, including the kind of rare-earth minerals that China has considerable control over. Those things matter, not least because of their importance to technology in the West.

Gardels: You wrote a book a few years ago, “The Square and The Tower: Networks and Power From the Freemasons to Facebook.”

What we have today is a social media ecosystem which both concentrates control — the tower — and empowers a multitude of voices — the square. Republics have always put in place checks and balances when too much power is concentrated in one place. One of the impacts of this diversity of voices and fragmentation of the body politic is that different tribal silos don’t speak to each other.

The Korean philosopher Byung-Chul Han argues that the internet changes the way information flows. It goes from private space to private space without creating a public space, a public sphere as a common platform for democratic deliberation.

So, isn’t it also just as important not just to check concentration, but to have checks and balances when information flows are so distributed that the public square is disempowered?

Ferguson: In that book, I argue that the world is only intelligible with the help of network science. Through network science, one can see hierarchical entities like states or corporations at one end of the spectrum, and at the other end of the spectrum, there are distributed networks that are entirely decentralized, which is what the World Wide Web originally was.

What happened very quickly in the 21st century was that the World Wide Web became centralized, and it created its own hierarchy through companies that we call “hyper-scalers.”

For a time, this handful of companies created network platforms that so entirely dominated the internet and information flows that it ceased to be a truly distributed network. Everything was being channeled by the very powerful algorithms that the platforms used. I think that is still the case.

The power of the platforms reached a zenith in 2021 when they acted in lockstep, politically, against Trump after January 6, and then in support of the Biden administration. This was an extremely disturbing development. I felt there were really two coups that one could talk about in January 2021: the bungled one that happened at the Capitol and the successful one against Trump by the big tech companies, acting in proximity to the Biden administration on a range of issues.

That was one of the more troubling developments of our modern times. Elon Musk’s decision to buy Twitter and turn it into X broke that political monopoly up. I think that was a very desirable thing to happen.

“The power of the platforms reached a zenith in 2021 when they acted in lockstep, politically, against Trump after January 6, and then in support of the Biden administration.”

But we’ve now arrived at a new situation shaped by the natural tendency of networks to polarize because of homophily: birds of a feather flock together in any kind of network, even a small network of friends at a high school. If one looks at the United States today, the tendencies toward polarization have only gone further than when I wrote that book.

It occurs to me now that Americans are in much the same place as people in Glasgow when I was growing up. In Glasgow, there were two completely separate communities, Catholics and Protestants, Celtic and Rangers. They did not intermarry. They barely spoke when they met. They fought.

Americans have arrived at a Glaswegian state of polarization along partisan lines. Republicans and Democrats occupy separate cultural spaces, separate networks. Soon, there won’t be Democrats on X; they’ll all have gone to Bluesky. And this means that the two communities are becoming entirely separate, to the point that there is no longer intermingling across the partisan divide.

That’s quite dangerous, I think, for a republic, not because there’s no public sphere. It still exists. It’s just that the two rival clans or rival sects refuse to engage with one another in good faith.

I don’t know how you fix that. It may be inherent in the way that the internet has evolved structurally that we have ended up in a giant Glasgow. I’m not quite sure where that leads, probably just to a kind of schizophrenic politics, in which small changes at the margins in a small number of counties in a small number of states cause the politics to swing radically from Rangers to Celtic, from Republicans to Democrats.

And each time this happens, we see more of the pathological behavior we saw at the end of the Biden administration with the wild, preemptive pardoning of family members.

If I could strike a very pessimistic note for a moment, there is some sense of being in the late republic in America today, by which I mean that the institutions of the republic are being corroded by a latent civil war in which the stakes of political defeat become too high. That’s something of what eroded the Roman Republic and paved the way to the Empire.

My sense is that history has always been against any republic lasting 250 years. So this American republic is in its late republican phase with the intimations of empire, to bring our conversation back to where it began. That is the thing I worry about most as an American.

The post America Is In A Late Republic Stage Like Rome appeared first on NOEMA.

A Roadmap To Alien Worlds (NOEMA, Tue, 01 Apr 2025) https://www.noemamag.com/measuring-a-planets-acquired-memory

Theoretical physicist and astrobiologist Sara Imari Walker proposes that evolution and selection can operate at a planetary scale on Earth, and perhaps worlds beyond. In her telling, a planet accretes, iterates and then — crucially — evolves, acquiring information and memory that structure material possibilities. The following is a discussion between Walker and the Berggruen Institute’s historian of science Claire Isabel Webb.

Claire Isabel Webb: The James Webb Space Telescope (JWST), launched in 2021, has revealed the magnificence of our universe as never before. A main priority of NASA’s mission is to learn more about exoplanets’ atmospheres, where evidence of extraterrestrial life might be found. What are your hopes for how this technology of perception could help astrobiologists like you characterize alien signs of life?

Sara Imari Walker: The fact that we can build instruments and technologies that allow us to see billions of years into the universe’s past is in many ways more interesting than the images and information we get from these telescopes. 

Humans are part of Earth’s physical system. While what we see through our telescopes is extraordinary, it is more extraordinary that we emerged from the geochemistry of Earth and, after around four billion years of evolution, can construct telescopes and interpret what we see.

CIW: We see collective intelligence in organisms like honeybees, which waggle information to each other; starlings that murmurate; and slime molds that coordinate chemical responses to their environment despite their lack of brains. You argue that physical systems capable of intelligence can scale to planet Earth, but also, perhaps, to planets beyond.

How would thinking of planets, including all the flora and fauna they may foster, through the lens of physics, fundamentally change how we look for life beyond Earth?

SIW: We are realizing how little we know about exoplanets, even from the features we can infer, such as simple atmospheric gases. Astronomers hope that by analyzing the spectra of these gases, we might learn something about planetary chemistry and whether it indicates the presence of life. The diversity of planets is proving to be much broader than we could have naively anticipated.

CIW: Right, it was about four years ago that astronomers characterized — but have yet to confirm — a Hycean planet: a new potential world that’s ocean-covered with a thin hydrogen atmosphere that could be conducive to life emerging. There also are sub-Neptunes, Super-Earths, Mini-Neptunes and Mega-Earths. These neologisms speak to the fact that scientists are discovering many kinds of planets that don’t fit into the mold of our solar system’s planets.

SIW: Exactly. We have no priors for what we are seeing. Studying these worlds will raise many unknowns about alien environments and the potential biologies that could evolve there. To assume any of those exoplanets harbor life forms just like those on Earth enormously understates the theoretical possibility of alien life forms.

CIW: A possibility space is an arena where all plausible outcomes are considered, simulated and theorized; the concept also acknowledges the unknown lacunae of present knowledge. So, successfully detecting extraterrestrial life might mean we need to reframe how we currently conceptualize evidence for what even counts as “life” on Earth.

SIW: Yes. Historically, astronomy experiments that sought alien life forms fixated on detecting molecules rather than conceptualizing the life processes of an entire planet. That is, we are now starting to think about detecting entire biospheres.

But to do this, we need to develop new theories of life around the concept of complexity. By “complexity,” I mean the amount of information necessary to produce a particular set of structures; in other words, it is what determines what possibilities exist. And by “information,” I really mean causation and selection, which we formalize in assembly theory as the minimum number of contingent historical steps necessary for the observed objects to exist. How much selection and historical contingency must go into making what we observe? If the answer is significant, it suggests those features require much acquired memory and can only be produced by life.

Earth can provide a model for understanding planetary complexity. To begin to answer the question — How would one characterize our planet as a living world? — we can start with the concept that our planet has some four billion years of acquired memory. When scientists set out to characterize and then detect life, what I think we need to aim to detect is the depth in time of past states that the planet retains in its current state. This might seem like a weird way to think about it, but with it, we can then use assembly theory to follow how selection constructs entire atmospheres, and potentially detect alien life.

“To assume any of those exoplanets harbor life forms just like those on Earth enormously understates the theoretical possibility of alien life forms.”

This conceptual reorientation requires that we decouple our thinking about specific “things life constructs” (e.g., discrete units like a Lego building block) from entire systems of “life-constructing things” (e.g., an organism). There might be other information processing or intelligent systems in the universe — what we might consider “life” — and we would recognize these only because they can make things we know could not form in the absence of life. Knowing a planet’s full history over billions of years of planetary evolution is not necessary because the evolved objects themselves should be evidence of that history and whether “life” is a part of the history or not.

CIW: By a planet’s “acquired memory” then, you mean almost a Gordian knot of chemical, biological and physical data that enfolds over eons.

SIW: Yes. Because biological systems are constantly reproducing and building new structures, we tend not to realize how old some forms of life are. The interior structure of the ribosome has changed less than most rocks on this planet in the last four billion years. The lineage of sharks has been around longer than Saturn’s rings. Given the continual evolution of biological systems and the fact that scientists seek evidence of life through their physical traces, the question for astrobiologists and exoplanetary astronomers then becomes: “How do we infer processes that are deep in time simply from the structure of a planet’s atmosphere?”

We build optical telescopes because we think they are the best technologies to infer molecules that life processes produce. But the challenge is that we won’t directly “see” certain structures existing in the universe unless we know what to look for — in this case, we need to recognize an alien biosphere filtered through the lens of an atmosphere and then a telescope. To do this kind of inference, we need to better conceive of what life is, so we know what to look for.

CIW: Technologies must catch up with theories. Geologist Eduard Suess, writing in 1875, conceptualized Earth as a series of layered, interlocking spheres. The biosphere (“Biosphäre,” the term he coined) was a layer that enshelled all life on Earth. Soviet scientist Vladimir Vernadsky, about 60 years later, developed Suess’s concept. He described the biosphere as being in a state of momentous transition: an emerging noösphere, or “the energy of human culture.” There were glimmerings of humans’ impact on Earth in Vernadsky’s writing, and it was only in the subsequent decades that technologies — computers, satellite images, climate models — rendered in great scientific clarity the extent of that impact. We look for climate change through the technologies we’ve built to look at climate change. Of course, there is always room for surprise and serendipity.

SIW: To use an analogy, how we should look for life is somewhat akin to how we discovered gravitational waves. In 1916, Albert Einstein’s theory of general relativity predicted that the collisions of supermassive objects like black holes would create ripples in the very fabric of spacetime. Humans did not know how to build an instrument to measure this — an interferometer — nor did they possess the technological tools necessary to confirm the existence of gravitational waves. It took a century for us to develop the technology to make the detection. We made “first contact” in 2015 — confirming Einstein’s prediction of these waves — almost exactly 100 years later. Technology and exoplanetary insights can only work together. We did not have the technologies of perception to see gravitational waves in 1916. In 1916, cars were barely on the road!

CIW: Your analogy reminds me of my work with radio astronomers who search for extraterrestrial intelligence (like those at the SETI Institute). They make a distinction between biosignatures, such as planetary atmospheres that would indicate some form of life, and technosignatures, which are artifacts of intelligent alien technologies. Even if one is generous with the parameters of life or even “intelligence” existing beyond Earth, there’s no way to say with certainty that humans would be able to notice — let alone receive, let alone translate — a directed, intentional and meaningful communication. The interoperability — or ability of human and speculative alien transmissions to communicate effectively — is not guaranteed.

Gravitational waves, I think you’re saying, represent a different kind of epistemic endeavor. Einstein’s prediction of gravitational waves led to a century of theoretical research that allowed physicists to precisely predict the shape of the “chirp” of two black holes colliding — they characterized a disturbance in spacetime on the order of one-ten-thousandth of the diameter of a proton! Theory came first. Experiments to support that theory followed. In SETI, astronomers are developing experiments of expectation where the object is not guaranteed, let alone characterized with any theoretical clarity.

“The interior structure of the ribosome has changed less than most rocks on this planet in the last four billion years.”

SIW: Yes, this is exactly the challenge. Compared to life processes, predicting and detecting gravitational waves is a fairly simple problem. We do not have the right abstractable concept or theory to talk about extraterrestrial life, let alone alien intelligence. How are we going to possibly know we have the right technology or framework to see complex biological features in the universe or perceive alien signals? The coupling between how we build and use technology and how we conceptualize life is fundamental, yet unanswered.

So, some of JWST’s data might already indicate biosignatures in the composition of atmospheric chemistry. But I propose a conceptual reframing of how we even begin to interpret that data: We need to understand molecules’ presence as products of the collective evolution of living worlds, not as individual units. That’s where assembly theory, an explanation for life first developed by chemist Leroy (Lee) Cronin, comes in. It allows us to analyze the structures of molecular bonds, and the recurrence of certain bond structures, as indications of the minimum acquired memory necessary for a given chemical system to emerge.

CIW: Can you walk me through an example of how that works at the molecular level?

SIW: Basically, we take the molecule apart and we try to rebuild it by taking those constituent parts and joining them back together. Our goal is to discover the shortest possible route, only reusing parts we have already made. One can imagine doing something similar with Lego. Say one had a Lego castle, smashed it to pieces, and then asked how many steps are necessary to rebuild it — with the stipulation that the builder can only use things the builder has already built. This constraint bounds the minimum causation necessary for evolution to discover the object. And our hypothesis, which so far stands up to experimental testing for assembly theory as applied to molecules in the lab, is that some objects have sufficient minimum causation to mean that they are only producible by life.
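Walker’s Lego description can be made concrete with a toy calculation. The sketch below is only an illustration, not the published assembly-theory algorithm (which operates on molecular bond graphs, not strings): here single characters stand in for basic building blocks, concatenation stands in for a joining operation, and the name `assembly_index` is my own. The minimum number of joins needed to rebuild a target, where anything already built can be reused for free, plays the role of the assembly index.

```python
def assembly_index(target: str) -> int:
    """Toy assembly index: the minimum number of join operations needed
    to build `target` from single characters, where any intermediate
    object, once built, can be reused for free."""
    basics = frozenset(target)           # single characters are free building blocks
    best = [max(len(target) - 1, 0)]     # upper bound: join one character at a time

    def search(built: frozenset, steps: int) -> None:
        if target in built:              # a complete pathway was found
            best[0] = min(best[0], steps)
            return
        if steps + 1 >= best[0]:         # even one more join cannot beat the best pathway
            return
        for a in built:                  # try joining any two already-built objects
            for b in built:
                ab = a + b
                # prune: only keep products that appear inside the target
                if ab in target and ab not in built:
                    search(built | {ab}, steps + 1)

    search(basics, 0)
    return best[0]

# Reuse lowers the index: "ABABAB" needs only 3 joins (A+B, AB+AB, ABAB+AB),
# while a string of six distinct characters needs 5.
```

The pruning mirrors the stipulation in the interview: every step must join things the builder has already built, so highly repetitive objects (the analogue of molecules rich in recurring motifs) have short pathways, while objects with no internal reuse are maximally expensive.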

CIW: So, when a system gains sufficient complexity, life can assemble itself.

SIW: To see the holistic structure of what we really think “life” is, we need new ways of seeing. We might not use existing, familiar technologies of perception to detect extraterrestrial life because we don’t yet understand the full complexity of a planet’s total life processes.

We also need to be very careful not to overestimate the connection between the materials of life processes and the technologies that life processes produce. As Lee [Cronin] likes to point out, the social media app TikTok will not exist anywhere else in the universe; we don’t expect a technology that evolved on Earth to have evolved on every planet with life. This is because we implicitly recognize that humans are embedded in a particular technological space, which is a product of our biology, which itself is a product of geochemical events that happened on our planet an estimated 3.8 billion years ago — it is all contingent. But for some reason, when we look at biochemistry, we choose to talk about life’s complex processes in a way that implies these processes emerge linearly as individual objects out of a singular planetary condition rather than realizing biochemistry is a complex, iterative, interactive invention of deeply complex planetary systems.

CIW: Good to know that Earth is the only planet with TikTok! But you’re saying we should think of TikTok as a result of Earth’s complex systems that can be unwound to the molecular building blocks of life as we know it. Given the awesome number of combinations that can be made on an atomic level and the many events that led to humans making TikTok, telescopes and satellites, the number of possible systems that created intelligent life is enormous.

SIW: Yes it is enormous! And that is why I am excited about assembly theory because it allows us to formalize how big the space is that must select for a given object to exist. The mystery that I and other astrobiologists are trying to sort out is how signs of life not only might have initially emerged out of a particular planetary geochemistry, but how the awesome diversification of the structures of life have been elaborated on over billions of years.

Of course, there are constraints. Everything follows the laws of physics, and we expect those laws to impose universal constraints on how biologies and technologies get invented. For instance, we can presume that all flying creatures will have a winglike structure — but the particular details of those structures, like what they are made of, their precise shape and how they emerged and then evolved among species, varied enormously based on the specific historical context under which they emerged.

“We need to understand molecules’ presence as products of the collective evolution of living worlds, not as individual units.”

CIW: Convergent evolution describes how a pterodactyl and a bat both have wings, but those structures were born out of completely different evolutionary pathways.

SIW: Yes. In the same way, conceptualizing a planet’s acquired memory means expanding the definitions of what signifies life and the tools necessary to find it. Astrobiology needs to move beyond analyzing the details of molecular structures to analyzing macro-scale patterns that might really be universal signatures of life.

CIW: What you’re saying is that we need to understand chemistry in an exoplanetary atmosphere as the result of a global system — not as just some atoms bonding together. A bird’s wings are the result of a great chain of processes that have complexified themselves over billions of years. Wings are a phenomenon we see on Earth because they’re the result of evolution and selection that emerged from a planetary system of life. So, selection processes on Earth produced life, which produced intelligence and then technology.

SIW: And that intelligence is not only observable at an individual level, like a human doing math, but at a collective level, like humans producing AIs. That process of complexification scales to planetary scale living and intelligent processes.

CIW: Intelligence is a trace of complex objects that can be observed at the planetary level. The planet embeds a material lineage that tells us how complex objects — life — can assemble themselves into other complex objects, reflexively and recursively iterating. Earth has had enough time to build a memory that includes life, and this life includes technologies.

I am curious: What future observations might be evidence of planetary scale knowledge — its acquired memory? SETI scientists I worked with were generally leery of indulging in detailed speculation about the natures of alien beings. Given our limited knowledge, it’s fun, but perhaps not scientifically useful, to imagine if aliens have fur, or 10 eyes or can operate in the sixth dimension. We can only use the present technologies to search very narrowly for a range of radio frequencies that would indicate alien technology — not some cosmological brain-scanning device that would detect alien “intelligence.”

But indulge me for a moment: How might one design a futuristic successor instrument to the JWST that would search not merely for the presence of molecules but also be equipped to search for a concept such as intelligence by assessing an entire planet’s geological, biological and chemical structures?

SIW: I think in terms of radical abstraction about the nature of life. If we could build new technologies of perception that would see the world in terms of causal structure, it would be very easy to pick out objects and entities that would be “alive” — possessing a deeper causal structure of “liveliness.”

A planet that evolved a technosphere — a series of distinct and integrated systems, like satellites and spacecraft — is more “alive” than one with only a biosphere. That’s because the amount of causation that goes into assembling a technosphere is much higher. It has a much larger causal depth and, therefore, exists as an object that is deeper in time on a planet.

CIW: And one can calculate that causal depth using assembly theory.

SIW: Yes. Assembly theory is a mathematical description of life and its objects. But we hope to generalize assembly theory to all kinds of materials structured by life. It is not clear how speculative instruments might translate to measuring complexity through direct physical observations, but I think a key step will be inventing a new technology (e.g., a theory, like assembly theory in this case) that can help us see causal structure.

Detecting planetary life processes — either through direct observation or a conceptual framework — is difficult. From space telescopes, we are only getting photons from exoplanets rendered as spectra. While this can tell us a lot about a planet’s size and even atmospheric composition, building an instrument to characterize a planet’s living complexity is not straightforward. What kinds of measurements would we even take? Can we do so remotely? We are making a lot of headway on this, thinking about how to make inferences based on our observations of the diversity of bonds and elements in an atmosphere, which tell us something about its assembly.

CIW: So, it is not really a question of gathering enough information to eventually count a planet as one with deep, acquired memory, or of compiling more spectra; it is a question of understanding that data in this new way.

“If we see a sufficiently complex atmosphere — one that required a sufficient amount of time to produce — that might be the smoking gun of a biosignature.”

SIW: Right. We cannot just use information theory or computational language to detect life, because those depend on human-derived data labeling systems. We need a new paradigm where the contingency in the matter we observe and the computation of its complexity are the same. In assembly theory, we do this by treating objects as “informational”: Objects are made up of the operations the universe uses to build them as an intrinsic property, meaning different objects require different amounts of memory and, consequently, have different depths in time. Therefore, we should expect to require varying amounts of acquired memory for these to ever appear at a given time in the universe. To detect this from atmospheric data, we require a leap to the perspective of thinking about an atmosphere as a complex system assembled by evolution and selection.

CIW: Let us bring the complexity question of exoplanets back to the familiar context of Earth — indeed, the only place in the universe where we know life exists. Would one have to compile millions of years of atmospheric spectra over time, generating different timestamps for the evidence of evolving biological processes? Would one also have to journey to the bottom of the ocean to calculate the assembly index of sharks’ teeth to plug into a holistic theory of the emergence of life on Earth?

SIW: Right now, I am just not sure how much we can infer about life from atmospheric data. That is because the objects we are interested in inferring exist at such a large temporal scale, and most of the molecules in an atmosphere are not deep in time objects.

On the other hand, objects that life uniquely produces are objects that are immensely large in time. In what we call assembly time, we can stack all objects by the minimal number of physical operations necessary to build them and define a boundary between what can be produced anywhere and what requires a living (evolving) trajectory. But this requires us to assume time is an intrinsic feature of all objects. Humans are very large in time. Plants and humans are 3.8 billion years in clock time. Everything living on this planet has parts that extend that far back.

CIW: I have never heard anyone describe objects as being “large” (a physical phenomenon) in “time” (an immaterial phenomenon).

SIW: Sometimes doing new physics requires defining what is material in new ways. To talk about planetary atmospheres in an explanatory way that allows us to theorize about life, we might measure how large the atmosphere is in time. So, if we see a sufficiently complex atmosphere — one that required a sufficient amount of time to produce — that might be the smoking gun of a biosignature. It would be a definitive sign indicating that the planet possessed some kind of life.

But the problem is that volatile gases — those present both in the atmosphere of Earth, where we know life exists, and on worlds where we think life cannot exist — tend to be composed of very simple molecules. So, to detect “life,” we have to make many other inferences, like observing how molecules interact and how together they indicate complex processes of life. That is, one might see evidence in the total set’s composition that indicates an object much larger in time than any number of individual molecules.

I am proposing that we look at the whole composition of a system. That will allow us to understand a planet’s memory depth — evidence of its evolutionary history — that would have produced a total atmospheric composition we can observe. 

Many people are not optimistic that we have a scientific pathway for inferring the presence of life on exoplanets. I am not sure where I land on that question. But what I am doing now with Lee [Cronin] is working toward a large-scale project that will allow us to observe the emergence of alien life in the lab — e.g., generate an origin of life event from scratch.

We need to do this by building a “planet simulator.” It cannot be a computational experiment. It must be physical for two reasons: (1) the computations to simulate life are more efficient when implemented in the real universe, and (2) we do not know all the relevant physics to simulate them, so we must run the experiments in reality, using chemistry. The technology now exists to do this at scale, and we have a theory that will allow us to guide our search. The best way for us to demonstrate the principles that will allow us to discover alien life is to do the right kinds of theory-driven experiments here on Earth.

The profound question I want to answer in my lifetime is this: Can we evolve truly alien life — and perhaps intelligence — in the laboratory?

Editor’s Note: This interview has been edited for clarity and length.

The post A Roadmap To Alien Worlds appeared first on NOEMA.

Under Trump, You ‘Petition The King’ https://www.noemamag.com/to-make-government-efficient-empower-the-bureaucracy Tue, 25 Mar 2025 16:10:31 +0000

Noema Editor-in-Chief Nathan Gardels recently sat down with Francis Fukuyama at Stanford University for the Berggruen Institute’s “Futurology” podcast series. Fukuyama is the noted author of books such as “The End Of History And The Last Man,” “The Origins Of Political Order” and “Liberalism And Its Discontents.” What follows is an excerpt of their wide-ranging conversation.


Nathan Gardels: You said some years ago that democracy cannot survive the lack of belief in the possibility of impartial institutions. Today, that level of trust is almost zero and worsens daily with the continued denigration of the courts by the Trump team. Even Supreme Court Chief Justice John Roberts says such attacks are very dangerous to the rule of law. Where does the meter of democracy’s survival stand today?

Francis Fukuyama: Since Jan. 20, I place it much lower. There was much discussion before the election asking, “Is Trump a fascist? Is he an authoritarian?” I tended to pooh-pooh some of that. Comparing him to Hitler, I felt, was just a little overheated. But now one must admit he’s definitely an authoritarian. In these short months, you’re already seeing America turn toward authoritarianism.

The Constitution is all about the separation of powers, boxing in the executive so that the president has very clear but limited duties. Yet, what we’ve seen since Jan. 20 is a barrage of executive orders. It’s like the king is simply giving out commands to his subjects.

Under this administration, you don’t go to Congress to debate legislation. If you want to change something, like closing down a major agency, you petition the king. So, we’re already in an authoritarian phase. That’s just on an institutional level.

The other really poisonous thing that’s happening is the further erosion of trust. I had always thought of America as a relatively high-trust society. This is a tradition that goes back to Alexis de Tocqueville, the idea that Americans get along and can form civil society organizations to cooperate with each other. But that level of trust has disappeared along with trust in government, which itself was never high to begin with, but now has turned very poisonous.

Worse, the trust ordinary Americans have in one another is also lost. This is not a good situation in a democratic republic.

There are many causes behind this, but quite frankly, technology is a big part of it. 

Robert Putnam wrote his famous essay “Bowling Alone” in the mid-90s when he was already talking about a decline of trust in the American art of association. In one sense, that turned out not to be right. The internet was just coming into being, and people started associating online in ways that Putnam could never have imagined.

Yet, what you want in a democracy is a kind of generalized trust where people believe that their fellow citizens are trustworthy and honest and so forth. But what’s happened is that we now have these very tightly bonded groups marked by polarization in society generally. Within these groups, members exhibit a high degree of trust in one another but lack trust in others in society.

I’m sure the Proud Boys are really tight. They go out for beers and have a great time. It’s a narrow group, and it’s very much opposed to other narrow groups in the country. This is also going on on the left. The solidarity the working class once had has converted into identity politics, which again, binds people into smaller and smaller identity groups. As a result, you have more association than ever, but less social trust.

Gardels: So trust — but within silos.

You mentioned Trump’s executive orders. He justified his executive orders as a response to a “national emergency.”  This brings to my mind the “decisionist” theory of Weimar-era German jurist Carl Schmitt. Looking back at history, he argued that absolute sovereign authority derives its legitimacy from suspending the normal constitutional order in a “state of exception,” when the nation is threatened by enemies, within or without.  This seems to be the mentality of Team Trump — JD Vance, Elon Musk and Trump himself — all of whom claim the legitimacy of their actions despite judicial rulings.

Fukuyama: Certainly the declaration of emergencies over immigrant invasions is a way of getting around the current legal restrictions. How do you legitimize the ending of birthright citizenship, given the clear statement in the 14th Amendment that all persons “born or naturalized in the United States, and subject to the jurisdiction thereof,” are citizens? And so, they’re trying to hang their action on one little clause and say we’re in a state of exception because we’re being invaded, and that’s going to allow us to override the plain language of the amendment. So, there’s a lot of game playing in that.

“Under this administration, you don’t go to Congress to debate legislation. If you want to change something, like closing down a major agency, you petition the king.”

However, I do think that it does speak to a broader crisis in liberalism, which I would say is the issue of “excessive proceduralism.” Liberal societies are built on a rule of law, right? You’ve got rules that prevent powerful people from doing whatever they want, and the tendency in liberal societies is to simply pile up those rules, one on top of another, in the belief that that’s what gives you legitimacy. But that leads to outcomes where it’s very hard to convict criminals. It becomes very hard to build infrastructure around all the regulations and permits required.

This is one issue that I’ve paid a lot of attention to because it breeds this instinct for authoritarian government. People get tired of being so constrained. If you look at who cheered the Trump election, you get people like President Bukele in El Salvador, who has managed to jail a large swath of the youth population in the country. He’s brought down the crime rate, but it’s completely extrajudicial.

What we’re witnessing is a reaction to the excessive proceduralism in liberal societies, and then people want to move to the opposite. That’s leading Trump supporters to attack judges. In the face of all the lawsuits that are now filed against the president’s executive orders, Musk and Vance say, “Let’s impeach the judges. There should be no restrictions on what we do whatsoever.”

Gardels: So, is the Trump administration’s deep dive into the malpractices of the so-called Deep State increasing trust or decreasing it?

Fukuyama: It’s obviously decreasing trust. Musk’s operation is completely non-transparent. We don’t know what he’s doing. He’s making decisions that build upon the demonization of the federal bureaucracy. I actually think the vast majority of bureaucrats are meritocratic professionals who have joined the bureaucracy to serve the public.

Of course, there is some corruption due to bureaucratic capture by various powerful groups. Still, I think that the old ideal of the Pendleton Act, which set up the civil service in 1883, is basically in place. However, people like Russ Vought, the head of the Office of Management and Budget, are very radical. Vought has actually said he thinks that federal bureaucrats hate the American people, and therefore that justifies a war against them. He’s not concerned about whether they’re doing important things in the public interest, like making sure that airplanes don’t run into each other. He’d be happy to see a lot of the bureaucrats simply disappear.

Gardels: In your book “The Origins of Political Order” you talked about China being the first modern state because it developed an administrative bureaucracy — what developed into the so-called mandarinate where the best and brightest had to pass rigorous examinations administered by the state. This led to what some call China’s “institutional civilization,” which made it a great power for centuries.

Isn’t it a pretty simplistic notion to think that modern societies like America don’t need a governing apparatus to thrive and prevail?

Fukuyama: Yes. One of the central issues that I’ve been wrestling with for the last 25 years is the problem of delegation — that any political authority or any corporate authority or any organization, in general, has to delegate authority upward or downward within a hierarchy to the appropriate level of competence.

Who actually has knowledge about what’s really happening in the world? Is it the people at the top of the hierarchy, the president or the CEO? Or is it the worker bees at the bottom, who are actually dealing in markets, making things and delivering services?

One problem we’ve got is what I regard as a kind of dumb version of what economists call “the principal-agent model.” It says the principal gives the orders, and the agents must obey the orders. But in any actual organization, most of the knowledge is on the part of the agents. It’s the civil servants at the bottom who really understand how things work when the principals have no idea. And so, the authority actually goes from the bottom of the organization up to the top. The U.S. Army understands this very well. It is the second lieutenant in front of the building who is trying to take the town that really understands the situation, not the general who’s a couple of hundred klicks behind the front lines.

Therefore, you must delegate authority to lower levels. That’s what a bureaucracy is. It’s basically a hierarchical system in which the bureaucrats themselves need the autonomy to make good decisions based on their superior local knowledge and their ability to act quickly.

“What we’re witnessing is a reaction to the excessive proceduralism in liberal societies.”

The problem we have in the United States is that we do not trust the state. We don’t like the government, and so we don’t trust bureaucrats sufficiently to empower them to actually make decisions based on their good judgment. What we do instead is come up with lots of rules.

One example is the Federal Acquisition Regulation, by which a federal agency cannot buy a desk or a computer without referring to a rule book that is several hundred pages long about how to put out a bid for proposals, how to adjudicate disputes and so forth. That’s why it takes forever for them to actually purchase something like a computer system, which in the federal government is usually obsolete by the time the contract is actually executed.

If you really want to have a Department of Government Efficiency, the first thing you’ve got to do is not fire bureaucrats. You actually have to free them from all of these mountains of regulations because most bureaucrats, under present circumstances, are more concerned with complying with these detailed rules than they are in actually solving the problems that their constituents are facing.

The trouble is that conservatives in the United States, especially the far-right critics of the European Union, think that the big problem is that bureaucrats have too much power and, therefore, need to be constrained. We’ve heard this ever since the New Deal.

More lately, we’ve heard Musk say that there are all these bureaucrats out there running our lives, with no democratic control. That’s just nonsense. If anything, they’re over-constrained by all these rules.

What you really need to do is authorize them to actually make decisions within a mandate that is democratically established by Congress and by elected representatives. Our problem in the United States is that we don’t understand the cost of our dislike and distrust of the government. Countries in Scandinavia, Japan or South Korea, with a longer state tradition, provide better government services because they don’t automatically distrust anything that a bureaucrat does.

Gardels: So, the way to get efficiency is to empower the bureaucrats, not to disempower them.

Fukuyama: That’s right. And in order to trust them with that authority, they’ve got to be good, competent and capable of judgment. They must have the training, professionalism and technical knowledge. Let me give you one example of what not to do.

When we set up the TSA after 9/11, the Republicans particularly didn’t want workers to be able to unionize, but they also didn’t want to pay them a lot of money. So lawmakers said, “OK, just high school graduates or people with equivalency certificates are good enough to work in TSA.” Now, you cannot trust workers with that level of training to make complex decisions about who should be screened or not.

That’s not the way that the Israelis do it. Because of the early years of hijacking and bombing attempts, Israelis have faced right from the beginning a much bigger problem with airplane security. Their airport security people are not trained to follow rules so much as to exercise judgment. They scrutinize and question passengers about who they know, where they’ve been, what’s in their bags, and judge whether they are suspicious or not based on past profiles. You must be pretty well-trained to do that.

Trust in government requires trustworthy governments. If you want to minimize your costs and get the cheapest possible labor, you will get what you pay for.

Gardels: Singapore is renowned for its competent bureaucracy, which is based on the Confucian mandarinate model. The country’s employees are paid at a comparable scale to employees in the private sector.

Fukuyama: That’s something that Americans have had a real problem with. One of our deepest cultural traits is distrusting the government, and therefore we don’t have the cultural instincts to do that sort of thing.

Gardels: When do bureaucracies decay to the point where they’re moribund and counterproductive?

Fukuyama: Decay happens when institutional rules become rigid and fail to adapt to changing social conditions, the mobilization of new actors or changes in technology. When there’s a capture of the state’s institutions by entrenched stakeholders — sometimes called “state capture” — then the government no longer serves public purposes. It now serves the interests of the organized groups that have captured it.

That is one of the perils of any form of governance, not just democracies. It happened in the Ottoman Empire and in Imperial China.

“If you really want to have a Department of Government Efficiency, the first thing you’ve got to do is not fire bureaucrats. You actually have to free them from all of these mountains of regulations.”

Gardels: Is it possible, as Elon Musk says, to introduce the kind of innovations you have in Silicon Valley into the bureaucracy by “moving fast and breaking things”?

Fukuyama: Can the government be innovative? The answer is yes. Sometimes you can actually create kind of fenced-off gardens in which you permit higher degrees of risk-taking because that’s what leads to innovation — the ability to take risks with new products, ideas and ways of doing things.

The trouble is that when you get into the public sector, there is an extremely low tolerance for failure, and in a political setting where you have two competing political parties, nobody wants to be seen as failing. It is far more difficult in public life than in Silicon Valley to say, “Well, we thought this might work, and it didn’t work. But that’s okay. We’ve learned a lesson.”

That doesn’t fly politically. And so, it’s very hard to actually promote risk-taking in the government for the precise reason that we have a kind of zero-fault tolerance mentality. In Silicon Valley, making mistakes, trying things and failing is routine because it’s your own money or the money of some unfortunate VC that you’re risking. You just don’t have these huge political blowbacks from failure that you do in the public sector.

Gardels: That’s why in China, and now in places like Singapore and Malaysia, you have these special economic zones. They are zones where environmental regulations, tariffs and other restrictions that apply elsewhere are put aside to experiment. People expect there’ll be risks in that zone, but the risks are contained, so to speak, from the whole of society.

Fukuyama: And quite frankly, it probably helps that these are authoritarian states where you can actually protect certain sectors of the government from that kind of public criticism. In the United States, it’s much harder to do that because everything’s out in the open and because there’s so much political competition. Your enemies are going to seize on any failure immediately.

Gardels: How do you expect Musk’s chainsaw approach to slashing the bureaucracy to pan out?

Fukuyama: It’s going to be a disaster because the government does some pretty critical things, like controlling air traffic or certifying drugs for safety and efficacy. The moment an airplane crashes because you fired all the air traffic controllers, people are going to notice that, or when a plague gets out of control.

My beef with the American public, in general, is that they simply do not understand what their own government does and how critical it is. You have good, well-trained people running these agencies, people who know what they’re doing and who are trying to serve the public interest. The moment this ends, people will regret that they managed to undermine the system as it was.

Strong Gods & The Body Politic

Gardels: The 2024 election seemed to indicate that the “strong gods” of faith, family and nation have prevailed over the liberal sentiments of an open society that tolerated extreme wokeness. Is this a return to common sense, or is this the last sigh of liberalism as a political philosophy?

Fukuyama: Well, I cannot believe the latter because classical liberalism is still the only viable way of governing in a diverse society. For the last several years, I have argued that you have two interpretations of liberalism that have gotten us into trouble.

The one on the right is so-called neo-liberalism, a kind of extreme belief in the market and corresponding dislike of the state and its regulation. On the left is a form of what you might call woke liberalism, or identity politics, in which you no longer treat people as universal rights bearers but as members of particular identity groups — with special privileges carved out for them.

I think you can walk back from that kind of identity politics and still have a liberal society. A lot of people on the right say that woke liberalism is the inevitable consequence of liberalism itself, and therefore we must reject liberalism as a whole. I just don’t see any justification for that. You can walk back many of the extreme aspects of identity politics and still have a society that’s open, tolerant and pluralist.

Back To The 19th Century

Gardels: One could ask the same question about the liberal international rules-based order. Is it the last sigh for that? We have long regarded Russia and China as the main revisionist powers that want to get rid of that order. Now, the U.S. seems to have joined this axis of upheaval. It has become a sovereigntist power that abjures any entanglement in rules by others that might constrain it.

“My beef with the American public, in general, is that they simply do not understand what their own government does and how critical it is.”

If you’ve got all these sovereigntist powers out there doing their own thing unconstrained, that portends a new paradigm of world order.

Fukuyama: It’s not a new order, but actually a return to the 19th century.

I don’t think people voted for this. If you had asked before the election what would Trump’s foreign policy look like, most would have said isolationism. He criticized the forever wars. He doesn’t like NATO. He doesn’t like foreign entanglements. Then, everything we’ve heard since Jan. 20 goes way beyond that. He wants to take over Panama; he wants Greenland now; he wants Gaza so he can build hotels there; and Canada. That is a vision that completely overturns the post-1945 order.

One of the great achievements of the liberal rules-based order was that national greatness was delinked from territory. So, Japan could have the world’s second-largest economy, but no empire. The Japanese are just happy to stay on their archipelago and build a lot of Toyotas and Sony products. And now Trump seems to be reviving this idea that somehow the physical extent of your country is really what makes you great. He appears to want to go down in history as the first president since William McKinley to actually extend the physical territory of the United States by taking in these other places. That really is a 19th-century conception because it resurrects the whole idea of spheres of influence, that the U.S. should control territory near the United States.

Unfortunately, there are some other powers interested in doing that as well. Russia thinks that Ukraine is really part of Russia, and China thinks that Taiwan is part of China.  They have both been restrained from doing this easily or without resistance by post-1945 norms. Now, the United States seems to be getting into this game as well. That’s the way the world was in the 19th century, when great powers had empires, and we seem to be getting back into that kind of mindset. It’s a pretty naked return to power politics.

Gardels: What’s different from the 19th century, though, is information networks and AI. How does that fit in?

Fukuyama: Where does AI fit into this new geopolitical landscape? You have constant technological change, and that was true even in the age of empires.  So, you replace wooden steamers with modern steel battleships, and you can maintain the empire either way. I just think that AI is another tool that people are going to use.

What is hard to foresee at this point, is whether it tends to disperse or concentrate power. I think we thought that it tended to concentrate power because you needed vast amounts of computing power to train large language models. Maybe that’s not true with open source LLMs where virtually anybody can create an AI that can do all sorts of amazing things. I don’t think we really know the answer to that at this point, so I think it’s probably better not to speculate too much.

This really tests a number of theories that I’ve toyed with over the years, which is that there are certain functional things that powers must do, which pushes them in the same direction. So, nobody can ignore computers. You can’t have a great power that says, “No, no, we want to do everything with horses and wooden wagons.” I mean, it’s just not going to work.

For politics, one of the critical things that is pretty clear is that you can’t govern through central planning. That whole Soviet Communist approach to economics just doesn’t work, no matter what the technology is. You must have competition. You must have freedom of entrepreneurship and so forth. And then the question becomes, does that give an advantage to the United States? We still do have a relatively open economy where a lot of different people have access, and they can compete against each other. Is a country like China capable of replicating all of that? So far, the answer to that isn’t in yet, because China has been pretty good at keeping up. The way that [Chinese President] Xi Jinping has wanted to control the tech sector reveals the chinks in that armor. It’s going to limit their ability to innovate.

Gardels: At the same time, though, the kind of tech bans against China, as we’ve seen in the case of DeepSeek, have ended up not stymying but stimulating Chinese innovation. Those kinds of bans are the mother of innovation in societies with high capability and capacity.  It’s been true of their space program. It’s been true of their military program. China is a capable society that still rises no matter how the West tries to block them.

“You can walk back many of the extreme aspects of identity politics and still have a society that’s open, tolerant and pluralist.”

Fukuyama: That’s true. They are capable and bans do appear to stimulate them to compete harder. All of that is true. I think in the end, you’re not going to be able to block them from competing, but you can make it slower and more costly for them. And I think that’s as much as you can expect.

Gardels: China and Russia have lately been describing themselves as civilizational states to legitimate their power through a sense of historical continuity. In response to that, many in the West are now saying we need to talk about Western civilization in the same terms. Italian Prime Minister Giorgia Meloni, for example, says she is out to defend Western civilization, which, to her, is rooted in “Greek philosophy, Roman law and Christian humanism.”

And now you have the CEO of Palantir, Alex Karp, who’s written a book called “The Technological Republic: Hard Power, Soft Beliefs and the Future of the West,” arguing that the only way to maintain Western civilization’s predominance against these other civilizational states is to dominate the technological frontier. Do you see a civilizational element in all of this?

Fukuyama: This gets back to the old debate with Huntington, who also thought in civilizational terms. I still don’t really buy that argument, because, it seems to me, within each of these civilizations, you still have major fractures and ruptures.

Huntington was thinking about Islam as a civilization. It’s the only part of the world that actually thinks in real civilizational terms that transcend the nation-state. But if you look at the reality of the Middle East, they’re still divided into nations that compete against one another. And there are clear limits to the cooperation of people within either the Chinese sphere or the Russian sphere. There are also a lot of different ways to define these civilizations.

People who believe in Western civilization really have two completely different versions of it. Conservatives believe it’s based on Christianity in some way, and when they talk about the decline of the West, they basically mean the decline of Christian church attendance and traditional values.

But there’s another view, a liberal view, which is that the West represents the Enlightenment, born in reaction to the Medieval church to free people from orthodoxy. So, any Westerner who thinks there’s a single vision of a Western civilization is glossing over some really major complexities.

Gardels: But surely there’s a general resonance that differentiates Western open societies from China or Russia or the majority Muslim nations and Hindu India?

Fukuyama: In that sense, yes. I’m not sure that the old language of authoritarianism versus democracy is sufficient to explain all the cultural elements involved, but that’s where you actually get into all these complexities. Huntington did have a coherent vision that civilization was rooted in religion and religious traditions that are remarkably stable over time and that dictate the way that people behave and think. In these more secular times, I don’t really see what substitutes for religion in these newer versions that claim to be civilizations.

The Media Ecosystem

Gardels: Today’s social media ecosystem concentrates control in the Googles and Metas of the world but also empowers a multitude of voices that were never heard before. So, there is a kind of double dynamic going on.

In the past, republics have developed checks and balances whenever too much power is concentrated in one place. Surely we need that now with the tech giants.  But we also now have the opposite problem: Information these days flows from private space to private space without creating a public square.  Everyone dwells in silos.

Don’t republics now have to also build checks and balances when information flows are so distributed that the public square is so disempowered that there is no common platform for reaching a governing consensus? You can’t reach any kind of unity with that kind of fragmentation.

Fukuyama: That is true. Western societies that are based on liberal principles like freedom of speech are facing a real dilemma right now. It’s fine to say that there ought to be checks and balances and that you shouldn’t allow this concentration of power. It is not clear to me, though, how you actually achieve that.

I don’t think you want the government to be the fundamental check on what’s true and what’s fake news. But you also don’t want a large private, for-profit corporation to take on that responsibility because it does not see itself as the custodian of any kind of democratic public interest.

“One of the great achievements of the liberal rules-based order was that national greatness was delinked from territory.”

Between those two it’s not clear which is worse. Both of those forms of control are bad. One idea that makes the most sense in this conflict is the use of so-called “middleware.” It is the only technological solution to this problem. Middleware basically takes the content moderation function away from the big platforms — but does not give it to the government. Rather, it distributes it to third parties that competitively offer content moderation to users, so that each user chooses their own moderation service instead of having one dictated by some rich individual or state authority.

If you actually had alternative ways of moderating that same content, you could have a competitive ecosystem where the user could choose what kind of material he or she wanted rather than filter bubbles and compartmentalized information. As long as there are a lot of them, and you don’t have a single one that is really dominant, that would be compatible with our traditional notions of freedom of speech in a liberal order.
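Fukuyama’s middleware proposal is a policy idea rather than a specification, but its core mechanism can be sketched in a few lines of code. In this hypothetical Python sketch (all names and filter rules are invented for illustration), the platform stores the content, while competing third-party services supply the filtering and each user picks which service moderates their own feed:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Post:
    author: str
    text: str

# A moderation service is simply a predicate over posts: True means "show it."
ModerationService = Callable[[Post], bool]

def strict_civility(post: Post) -> bool:
    """Hypothetical third-party service that drops posts containing flagged words."""
    flagged = {"insult", "slur"}
    return not any(word in post.text.lower() for word in flagged)

def show_everything(post: Post) -> bool:
    """Hypothetical service that performs no moderation at all."""
    return True

@dataclass
class Platform:
    """Stores posts but delegates filtering to whichever service each user chose."""
    posts: List[Post] = field(default_factory=list)
    user_choice: Dict[str, ModerationService] = field(default_factory=dict)

    def publish(self, post: Post) -> None:
        self.posts.append(post)

    def choose_moderator(self, user: str, service: ModerationService) -> None:
        # The user, not the platform, selects the moderation layer.
        self.user_choice[user] = service

    def feed_for(self, user: str) -> List[Post]:
        # Users who picked no service see an unfiltered feed by default.
        service = self.user_choice.get(user, show_everything)
        return [p for p in self.posts if service(p)]
```

The design point is that `Platform` never hard-codes a filtering rule: two users of the same platform can see different feeds because moderation is a pluggable, competitive layer rather than a single policy imposed from above.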

Gardels: How is that different from these community monitoring functions that Meta, for example, now employs instead of content moderators?

Fukuyama: It is a step forward that they are moving there. The example that comes closest to the middleware ideal is Reddit. They’ve got lots of different communities with content moderation distributed among them. In short, the people who belong to the community decide on the moderation rules. That’s a pretty good system. It’s a much better alternative to a single platform with universal rules that are forced on everyone.

Gardels: That may help with content moderation. But it doesn’t do much to build the bridge of a public square across silos where issues are exposed to the body politic as a whole for deliberation, where the ideas can compete at that level and not within the silos.

Fukuyama: That’s a tough problem. Part of the problem is just the intrinsic nature of modern economies of scale that dominate in so many different areas. That’s why these platforms have gotten as big and powerful as they have. If you’re on a network with 100 million users, that’s better than one with 10 million. And so, there’s been this gravitation toward larger and larger platforms. I’m not quite sure what the solution is. Middleware is at least a way of moderating and pushing back against that.

Gardels: So, a distributed solution to a distributed problem. I want to go back for a second to the issue of the sovereigntist states of the 19th century and this tariff regime that seems to be rolling out. Most economists say this is like Smoot-Hawley; it’s going to create an economic disaster. But as economist Michael Pettis points out, the U.S. of 2025 is not the U.S. of the 1930s. In those days, Americans produced a lot more than they consumed. Now we consume a lot more than we can produce. And so, tariffs will actually serve to redirect demand domestically, and this will bolster GDP growth, raise wages and, in the longer term, even bring down inflation.

What's your view of this new mercantilist strategy, where tariffs are used to build national power, not just economically but to leverage Jordan over Gaza, or to leverage U.S. economic might against Canada, Panama or Denmark? As Trump puts it, "when you're sitting on a gold mine, tariffs are a good thing."

Fukuyama: Well, I think it all depends on the way that they’re applied and whether you can keep them in bounds. I mean, the thing is, we didn’t really have a free trade system previously in the so-called liberal international order. The whole time the Chinese were rising they were subsidizing certain key industries, so there actually wasn’t a level playing field.

That’s a bit of the Trump perspective that I think is correct, that we allowed the Chinese to take advantage. We allowed this with Korea and Japan also, but that was more deliberate because we wanted to build them up as anti-communist bulwarks in Asia. And so, we would be willing to run trade deficits with them in order to help them along. But when you get a power like China that is really opposed to us on certain key values, that’s not such a smart policy.

Therefore, you can deviate on our side from free trade by saying, “We’re not on a level playing field, and we’re simply trying to make it more level.” That part of it’s fine. That’s why Biden continued a lot of the Trump tariffs from his first administration. The whole problem, though, is when you start applying them indiscriminately against your two biggest trading partners that are actually friendly democracies. That’s where you really run into some pretty dangerous territory.

“Middleware basically takes the content moderation function away from the big platforms — but does not give it to the government.”

This ignores the reality that your national power depends on the national power of your friends. That’s the thing that sovereigntists forget.

Endless History

Gardels: A question everyone must ask you: Obviously, history did not end in liberal democracy after 1989. As we've been discussing, we're going back to the way the world was organized a century earlier, in spheres of influence carved out by the great powers. Endless history, one might say.

Fukuyama: Well, part of the problem is that people have memories that go back in generational cycles. One thing that seems pretty universal is that people don’t like living under dictatorships.

When Eastern Europe came out of communist dictatorship, people were overjoyed to be liberated. But it’s now 35 years since that happened. You’ve had an entire generation that’s grown up under the peace and prosperity that’s been provided by the European Union. They don’t remember what it was like to live under a communist dictatorship. And so, they can tell themselves, “Well, it’s really the EU bureaucracy that’s the new tyrant.”

One of the things that amazes me about some of the rhetoric on the right is that people will say we have no freedom in the United States anymore! They act as if liberal society is like living under a dictatorship. These are people who have no idea what an actual dictatorship is like, but they’ve talked themselves into this lather about how cancel culture is as bad as Stalinism.

I still retain the faith that liberalism is capable of correcting itself because of its open and critical spirit.

Editor’s note: This interview has been edited for clarity and length.

The post Under Trump, You ‘Petition The King’ appeared first on NOEMA.

Why AI Is A Philosophical Rupture
https://www.noemamag.com/why-ai-is-a-philosophical-rupture
Tue, 04 Feb 2025 17:55:23 +0000

Tobias Rees, founder of an AI studio located at the intersection of philosophy, art and technology, sat down with Noema Editor-in-Chief Nathan Gardels to discuss the philosophical significance of generative AI.

Nathan Gardels: What remains unclear to us humans is the nature of machine intelligence we have created through AI and how it changes our own understanding of ourselves. What is your perspective as a philosopher who has contemplated this issue not from within the Ivory Tower, but “in the wild,” in the engineering labs at Google and elsewhere?

Tobias Rees: AI profoundly challenges how we have understood ourselves.

Why do I think so?

We humans live by a large number of conceptual presuppositions. We may not always be aware of them — and yet they are there and shape how we think and understand ourselves and the world around us. Collectively, they are the logical grid or architecture that underlies our lives.

What makes AI such a profound philosophical event is that it defies many of the most fundamental, most taken-for-granted concepts — or philosophies — that have defined the modern period and that most humans still mostly live by. It literally renders them insufficient, thereby marking a deep caesura.

Let me give a concrete example. One of the most fundamental assumptions of the modern period has been that there is a clear-cut distinction between us humans and machines.

Here humans, living organisms; open and evolving; beings that are equipped with intelligence and, thus, with interiority.

There machines, lifeless, mechanical things; closed, determined and deterministic systems devoid of intelligence and interiority.

This distinction, which first surfaced in the 1630s, was constitutive of the modern notion of what it is to be human. For example, almost the entire vocabulary that was invented between the 17th and 19th centuries to capture what it truly is to be human was grounded in the human/intelligence-machine/mechanism distinction.

Agency, art, creativity, consciousness, culture, existence, freedom, history, knowledge, language, morals, play, politics, society, subjectivity, truth, understanding. All of these concepts were introduced with the explicit purpose of providing us with an understanding of what is truly unique human potential, a uniqueness that was grounded in the belief that intelligence is what lifts us above everything else — and that everything else ultimately can be sufficiently described as a closed, determined mechanical system.

The human-machine distinction provided modern humans with a scaffold for how to understand themselves and the world around them. The philosophical significance of AIs — of built, technical systems that are intelligent — is that they break this scaffold.

What that means is that an epoch that was stable for almost 400 years comes — or appears to come — to an end.

Poetically put, it is a bit as if AI releases ourselves and the world from the understanding of ourselves and the world we had. It leaves us in the open.

I am adamant that those who build AI understand the philosophical stakes of AI. That is why I became, as you put it, a philosopher in the wild.

Gardels: You say that AI is intelligent. But many people doubt that AI is “really” intelligent. They view it as just another tool like all previous human-invented technologies.

Rees: In my experience, this question is almost always grounded in a defensive impulse. A sometimes angry, sometimes anxious effort to hold on to or to re-inscribe the old distinctions. I think of it as a nostalgia for human exceptionalism, that is, a longing for a time when we humans thought there was only one form of intelligence, us.

AI teaches us that this is not so. And not just AI, of course. Over the last two decades or so the concept of intelligence has multiplied. We now know that there are lots of other kinds of intelligence: from bacteria to octopi, from Earth systems to the spiral arms of galaxies. We are an entry in a series. And so is AI.

To argue that these other things are not "really" intelligent because their intelligence differs from ours is a bit silly. That would be like one species of bird, say pelicans, insisting that only pelicans "really" know how to fly.

It is best if we get rid of the “really” and simply acknowledge that AI is intelligent, if in ways slightly different from us.

Gardels: What is intelligence?

Rees: Today, we appear to know that there are some baseline qualities to intelligence such as learning from experience, logical understanding and the capability to abstract from what one has learned to solve novel situations.

AI systems have all these qualities. They learn, they logically understand and they form abstractions that allow them to navigate new situations.

However, what experience or learning or understanding or abstraction means for an AI system and for us humans is not quite the same. That is why I suggested that AI is intelligent in ways slightly different from us.

“AI defies many of the most fundamental, most taken-for-granted concepts — or philosophies — that have defined the modern period and that most humans still mostly live by.”

Gardels: AI may be another kind of intelligence, but can we say it is, or can be, smarter than us?

Rees: For me, the question is not necessarily whether or not AI is smarter than us, but whether or not our different intelligences can be complementary. Can we be smarter together?

Let me sketch some of the differences I am seeing.

AI can operate on scales — both micro and macro — that are beyond human logical comprehension and capability.

For example, AI has much more information available than we do and it can access and work through this information faster than we can. It also can discover logical structures in data — patterns — where we see nothing.

Perhaps one must pause for a moment to recognize how extraordinary this is.

AI can literally give us access to spaces that we, on our own, qua human, cannot discover and cannot access. How amazing is this? There are already many examples of this. They range from discovering new moves in games like Go or chess, to discovering how proteins fold, to understanding whole Earth systems.

Given these more-than-human qualities, one could say that AI is smarter than us.

However, human smartness is not reducible to the kind of intelligence or smartness AI has. It has additional dimensions, ones that AI seems not to have.

The perhaps most important of these additional dimensions is our individual need to live a human life.

What does that mean? At the very least it means that we humans navigate the outside world in terms of our inside worlds. We must orient ourselves by way of thinking, in terms of a thinking self. These thinking selves must understand, make sense of, and be struck by, insights.

No matter how smart AI is, it cannot be smart for me. It can provide me with information, it can even engage me in a thought process, but I still need to orient myself in terms of my thinking. I still need to have my own experiences and my own insights, insights that enable me to live my life.

That said, AI, the specific non-human smartness it has, can be incredibly helpful when it comes to leading a human life.

The most powerful example I can think of is that it can make the self visible to itself in ways we humans cannot.

Imagine an on-device AI system — an AI model that exists only on your devices and is not connected to the internet — that has access to all your data. Your emails, your messages, your documents, your voice memos, your photos, your songs, etc.

I stress on-device because it matters that no third parties have access to your data.

Such an AI system can make me visible to myself in ways neither I nor any other human can. It literally can lift me above me. It can show me myself from outside of myself, show me the patterns of thoughts and behaviors that have come to define me. It can help me understand these patterns and it can discuss with me whether they are constraining me, and if so, then how. What is more, it can help me work on those patterns and, where appropriate, enable me to break from them and be set free.

Philosophically put, AI can help me transform myself into an “object of thought” to which I can relate and on which I can work.

The work of the self on the self has formed the core of what Greek philosophers called meletē and Roman philosophers meditatio. And the kind of AI system I evoke here would be a philosopher’s dream. It could make us humans visible to ourselves in ways no human interlocutor can, from outside of us, free from conversational narcissism.

You see, there can be incredible beauty in the overlap and the difference between our intelligence and that of AI.

Ultimately, I do not think of AI as a self-enclosed, autonomous entity that is in competition with us. Rather, I think of it as a relation.

Gardels: What is specifically new that distinguishes deep learning-based AI systems from the old human/machine dichotomy?

Rees: The kind of AI that ruled from the 1950s to the early 2000s was an attempt to think about the human from within the vocabulary provided by machines. It was an explicit, self-conscious attempt by engineers to explain all things human from within the conceptual space of the possibility of machines.

“AI could make us humans visible to ourselves in ways no human interlocutor can, from outside of us, free from conversational narcissism.”

It was called “symbolic AI” because the basic idea behind these systems was that we could store knowledge in mathematical symbols and then equip computers with rules for how to derive relevant answers from those symbolic representations.
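Rees's description of symbolic AI, where knowledge is stored as explicit symbols and answers are derived by hand-written rules, can be made concrete with a toy sketch; the facts and the single inference rule below are invented purely for illustration, not drawn from any actual system:

```python
# A minimal sketch of a symbolic-AI system: knowledge is stored as
# explicit symbolic facts, and a hand-written rule derives answers.
# Both the facts and the rule are hard-coded by the engineer; any
# question they do not cover simply fails, with no learning involved.

facts = {("Socrates", "is_a", "human"), ("human", "is_a", "mortal")}

def derives(subject, predicate):
    """Answer 'is subject a predicate?' by chaining is_a facts."""
    if (subject, "is_a", predicate) in facts:
        return True
    # Rule-based inference: X is_a Y and Y is_a Z implies X is_a Z
    return any(derives(mid, predicate)
               for (s, _, mid) in facts if s == subject)

print(derives("Socrates", "mortal"))       # True: derived by the rule
print(derives("Socrates", "philosopher"))  # False: no fact or rule covers it
```

The system's ceiling is exactly the engineer's foresight: everything it can ever conclude is already implicit in the facts and rules it was given.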

Some philosophers, most famously Herbert Dreyfus and John Searle, were very much provoked by this. They set out to defend the idea that humans are more than machines, more than rule-based algorithms.

But the kind of AI that has risen to prominence since the early 2010s, so-called deep learning systems or deep neural networks, are of an altogether different kind.

Symbolic AI systems, like all prior machines, were closed, determined systems. That means, first, that they were limited in what they could do by the rules we gave them. When they encountered a situation that was not covered by the rules, they failed. Let’s say they had no adaptive, no learning behavior. And it means as well that what they could do was entirely reducible to the engineers who built them. They could, ultimately, only do things we had explicitly instructed them to do. That is, they had no agency, no agentive capabilities of their own. In short, they were tools.

With deep learning systems, this is different. We do not give them their knowledge. We do not program them. Rather, they learn on their own, for themselves, and, based on what they have learned, they can navigate situations or answer questions they have never seen before. That is, they are no longer closed, deterministic systems.

Instead they have a sort of openness and a sort of agentive behavior, a deliberation or decision-making space, that no technical system before them ever had. Some people say AI has “only” pattern recognition. But I think pattern recognition is actually a form of discovering the logical structure of things. Roughly, when you have a student who identifies the logical principles that underlie data and who can answer questions based on these logical principles, wouldn’t you call that understanding?

In fact, one can push that a step further and say that AI systems appear to be capable of distinguishing truths from falsehoods. That's because truth is positively correlated with a consistent logical structure. Errors, so to speak, are each unique or different, while the truth is not. And what we see in AI models is that they can distinguish between statements that conform to the patterns that they discover and statements that don't.

So in that sense, AI systems have a nascent sense of truth.

Simply put, deep learning systems have qualities that, up until recently, were considered possible only for living organisms in general and for humans in particular.

Today's AI systems have qualities of both — and, thereby, are reducible to neither. They exist in between the old distinctions and show that the either-or logic that organized our understanding of reality — either human or machine, either alive or not, either natural or artificial, either being or thing — is profoundly insufficient.

Insofar as AI escapes these binary distinctions, it leads us into a terrain for which we have no words.

We could say, it opens up the world for us. It makes reality visible to us in ways we have never seen before. It shows us that we can understand and experience reality and ourselves in ways that lie outside of the logical distinctions that organized the modern period.

In some sense, we can see as if for the first time.

Gardels: So, deep-learning systems are not just tools, but agents with a degree of autonomy?

Rees: This question is a good example to showcase that AI is indeed philosophically new.

We used to think that agency has two prerequisites: being alive and having interiority, that is, a sense of self or consciousness. Now, what we can learn from AI systems is that this is apparently not the case. There are things that have agency but that are not alive and that do not have consciousness or a mind, at least not in the way we have previously understood these terms.

This insight, this decoupling of agency from life and from interiority, is a powerful invitation to see the world — and ourselves — differently.

For example, is what is true for agency — that it doesn’t need life and interiority — also true for things like intelligence, creativity or language? And how would we classify or categorize things in the world differently if this were the case?

“What makes AI a philosophical event is that these systems defy the formerly clear-cut distinction between humans and machines or between living things and nonliving things.”

In her essay in Noema, the astrophysicist Sara Walker said that “we need to get past our binary categorization of all things as either life or not.”

What interests me most is rethinking the concepts we have inherited from the modern period, from the perspective of the in-betweenness made visible to us by AI.

What is creativity from the perspective of the in-betweenness of AI? What language? What mind?

II. A New AIxial Age?

Gardels: Karl Jaspers was best known for his study of the so-called Axial Age when all the great religions and philosophies were born in relative simultaneity over two millennia ago — Confucianism in China, the Upanishads and Buddhism in India, Homer’s Greece and the Hebrew prophets. Jaspers saw these civilizations arising in the long wake of what he called “the first Promethean Age” of man’s appropriation of fire and earliest inventions.

For Charles Taylor, the first Axial Age resulted from the “great dis-embedding” of the person from isolated communities and their natural environment, where circumscribed awareness had been limited to the sustenance and survival of the tribe guided by oral narrative myth. The lifting out from a closed-off world, according to Taylor, was enabled by the arrival of written language. This attainment of symbolic competency capacitated an “interiority of reflection” based on abiding texts that created a platform for shared meanings beyond one’s immediate circumstances and local narratives.

Long story very short, this “transcendence” in turn led to the possibility of general philosophies, monotheistic religions and broad-based ethical systems. The critical self-distancing element of dis-embedded reflection further evolved into what the sociologist Robert Bellah called “theoretic culture,” to scientific discovery and the Enlightenment that spawned modernity. For Bellah, “Plato completed the transition to the Axial Age,” with the idea of theoria that “enables the mind to ‘view’ the great and the small in themselves abstracted from their concrete manifestations.”

The big question is whether the new level of symbolic competence reached by AI will play a similar role in fostering a “New AIxial Age” as written language did the first time around, when it gave rise to new philosophies, ethical systems and religions.

Rees: I am not sure today’s AI systems have what the modern period came to call symbolic competence.

That is related to what we’ve already discussed.

There was, ever since John Locke, the idea that we humans have a mind in which we store experiences in the form of symbols or symbolic representations and then we derive answers from these symbols.

Let’s say this conceptualization was understood throughout the modern period to be the basic infrastructure of intelligence.

In the late 19th century, philosophers like Ernst Cassirer gave this a twist. He suggested that the key to understanding what it is to be human is to see that we humans invent symbols or meaning and that symbol-making or meaning-making is what sets us apart as a species from everything else.

Deep learning, in general, and generative AI in particular, have broken with this human-centric concept of intelligence and replaced it with something else: The idea that intelligence is pretty much two things: learning and reasoning.

Essentially, learning means the capacity to discover abstract logical principles that organize the things we want to learn. Whether this is an actual data set or learning experiences that we humans make, there is no difference. Call it logical understanding.

The second defining feature of intelligence is the capacity to continuously and steadily refine and update these abstract logical principles, these understandings, and to apply them –– by way of reasoning –– to situations we live in and that we must navigate or solve.

Deep learning systems are most excellent at the first part –– but not so much the second. Basically, once they are trained, they cannot revise the things they have learned. They can only infer.

Be that as it may, there is nothing much symbolic here. At least not in the classical sense of the term.

I am emphasizing this absence of the symbolic because it is a beautiful way to show that deep learning has led to a pretty powerful philosophical rupture: Implicit in the new concept of intelligence is a radically different ontological understanding of what it is to be human, indeed, of what reality is or of how it is structured and organized.

Understanding this rupture with the older concept of intelligence and ontology of the human/the world is key, I think, to understanding your actual question: Are we entering what you call a new AIxial age, where AI will amount to something similar to what writing amounted to roughly 3,000 to 2,000 years ago?

“Are we entering what you call a new AIxial age, where AI will amount to something similar to what writing amounted to roughly 3,000 to 2,000 years ago?”

If we are lucky, the answer is yes. The potential is absolutely there.

But let me try to articulate what I think the challenge is so we truly can make this possible.

Let’s take the correlation between the emergence of writing, the birth of a vocabulary of interiority, and the rise of abstract or theoretical thought as our starting point.

I will do what I tried to do in my prior responses: Reflect on the historicity of the concepts we live by, point out how recent they are, that there is nothing timeless or universal about them, and then ask if AI challenges and changes them.

There is a beautiful book by Bruno Snell called “Die Entdeckung des Geistes” or, in an excellent English translation, “The Discovery of the Mind.”

The work's central thesis is that what we today call "mind," "consciousness" and "inner life" is not a given. It is not something that has always existed or was always experienced. Instead, it is a concept that only gradually emerged.

In beautiful, captivating prose Snell traces the earliest instances of the birth of what I think of as “a vocabulary of interiority.”

For example, he shows that in Homer's works, there is no general, abstract concept of "mind" or "soul." Instead, there is a whole flurry of terms that are very difficult to translate. For example, thymos, which is perhaps best rendered as a passion that overcomes and consumes one; noos, which originally meant something like sensory awareness; and psyche, a term by which Homer and his contemporaries most often meant "breath," that which animates, and not what we would call psyche today.

Simply put, there is absolutely no vocabulary of interiority in Homer. Or in Hesiod.

This changes at the turn from Archaic to Classical Greek. We begin to see the birth of a vocabulary of interiority and increasingly sophisticated ways of describing inner experience. The most important reference here is probably Sappho. Her poetry is among the very first explorations of what we today would call subjective experience and individual emotion.

I do not want to derail us by retelling the whole of Snell’s book. Rather, what interests me is to convey a sense of the possibility that we discussed earlier: We humans have not always experienced ourselves the way we do today. Every form of experience and thinking or understanding is conceptually mediated. This is also true, perhaps particularly so, for the idea of interiority and inner life.

Snell’s book is so wonderful because he shows the discontinuous, gradual emergence of new concepts that amount to the idea that there is something like an interiority and that this interiority — a kind of inner landscape — is where a single, self-identical “I” is located.

Now, what is crucial, is that the introduction of writing, which probably began right at the time of Homer, was key for the emergence of a conceptual vocabulary of interiority.

Snell touches on this only in passing, but later works, especially by Jack Goody, Eric Havelock and Walter Ong, have attended to this explicitly and all have more or less come to the same conclusion: The practice of writing created new possibilities for analytical thinking that led to increasingly abstract, classificatory nouns and to a form of systematic search and production of knowledge that was not seen anywhere in human history before.

These authors also made clear that the only unfortunate thing about Snell’s work is his use of the term “discovery” in his title. The mind was not discovered. It was constituted, invented, if you will. That is, it could have been constituted differently. And that is what Goody, Ong and others have amply shown. What mind is, what interiority is, is different in other places.

Let me summarize this simply by saying that the technology of writing had absolutely dramatic consequences for what it is to be human, for how we experience and understand ourselves as humans. Perhaps the two most important of these consequences were the systematic emergence of self-reflection and of abstract thought.

Can AI play as transformative a role in what it means to be human as writing once did?

Can AI mark the beginning of a whole new, perhaps radically discontinuous chapter for what it is to have a mind, to have interiority, to think? Can it help us think thoughts that are so new and so different that the way we have understood ourselves up until now becomes obsolete?

“Can AI mark the beginning of a whole new, perhaps radically discontinuous chapter for what it is to have a mind, to have interiority, to think?”

Oh yes, it can! AI absolutely has the potential to be such a major philosophical event.

The perhaps most beautiful, most fascinating and eye-opening way to show this potential of AI is what engineers call “latent space representations.”

When a large language model learns, it gradually distills ever more abstract logical principles from the data it is provided with.

It is best to think of this process as roughly similar to a structuralist analysis: The AI identifies the logical structure that organizes — that literally underlies — the totality of the data it is trained on and stores or memorizes it in the form of concepts. The way it does this is that it discovers the logic of the relations between different elements of the data. So, in text, roughly, that would be the words: What is the closeness between the different words in the training data?

If you will, an LLM discovers the many different degrees of relations between words.

Fascinatingly, what emerges from this learning process is a high-dimensional, relational space that engineers call latent — in the sense of hidden — space.

First, this means that something grows on the inside of an LLM during training. A hidden map of the logic of relations between words that the AI successively discovers. I say on the inside because we humans cannot observe this map from the outside.

The second thing it means is that this map is not just a list but a spatial arrangement.

Imagine a three-dimensional point cloud where each point stands for a word and where the distance between points reflects how close or far words are from one another in the training data.

It is just, and this is the third thing, that this spatial map doesn't have only the three dimensions — length, width, depth — our conscious human mind is comfortable operating in. Instead, it has many, many more dimensions: tens of thousands and, with the latest models, perhaps millions.

That is, the understanding an LLM has formed is a spatial architecture. It has a geometry that literally determines what, for an LLM, is thinkable.

It is literally the logical condition of possibility — the a priori — of the LLM.
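The spatial map Rees describes, where words sit as points whose distances encode how related they are, can be sketched in miniature; the three-dimensional word vectors below are invented for illustration (a real model learns such coordinates from data, in vastly higher-dimensional spaces):

```python
import math

# Toy "latent space": each word is a point; distance encodes relatedness.
# These 3-D vectors are hand-made for illustration only; a real LLM
# learns such coordinates during training, in far more dimensions.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    """Cosine similarity: near 1.0 means same direction, near 0.0 unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# "king" lies far closer to "queen" than to "apple" in this space.
print(cosine(embeddings["king"], embeddings["queen"]))
print(cosine(embeddings["king"], embeddings["apple"]))
```

The geometry is the point: what the model can "think" is constrained by where its training has placed things in this space and by how many dimensions the space has.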

For all we know, human brains also create latent space representations. The neurons in our brain work in a very similar fashion to how neurons work in a neural network.

Yet, despite this similarity, it appears that the latent space representations that a human brain produces and the latent space representations that an AI can produce are different from one another.

The two latent space representations likely overlap but they also differ significantly in kind and quality because of AI’s far greater dimensional scope.

Now imagine we could build AI so that the logic of possibility that defines the human brain gets extra latent spaces.

Imagine we built AI to add to our human mind logical spaces of possibility that we humans could travel but not produce on our own. The consequence would be that we humans could discover truths and think things that no human could have ever thought before AI. In this case, no one knows where the human mind might end and AI might begin.

We could take any theme and approach it from whole new perspectives. Imagine what this kind of co-cogitation between humans and AI would do to our current concept of interiority! Can you imagine what it would do to how we understand terms like mind, thought, having an idea or being creative?

As I outline this vision, I can hear the critical voices. They tell me that I make AI sound like a philosophical project while the companies building AI have very different motives.

I am entirely aware that I am giving AI philosophical and poetic dignity. And I do so consciously because I think AI has the potential to be an extraordinary philosophical event. It is our task as philosophers, artists, poets, writers and humanists to render this potential visible and relevant.

All this certainly has the makings of a new pivotal age.

Gardels: To grasp how deep learning through what AI scientists call backpropagation — the feeding back of prediction errors through an artificial neural network to adjust the strength of its connections — could lead to interiority and intention, it might be useful to look at an analogy from the materialist view of biology about how consciousness arises. The core issue here is whether disembodied intelligence can mimic embodied intelligence through deep learning.

Where does AI depart from, and where is it similar to the neural Darwinism described here by Gerald Edelman, the Nobel Prize-winning neuroscientist? What Edelman refers to as “reentrant interaction” appears quite similar to “backpropagation.”

“Imagine we built AI to add to our human mind logical spaces of possibility that we humans could travel but not produce on our own.”

According to Edelman, “Competition for advantage in the environment enhances the spread and strength of certain synapses, or neural connections, according to the ‘value’ previously decided by evolutionary survival. The amount of variance in this neural circuitry is very large. Certain circuits get selected over others because they fit better with whatever is being presented by the environment. In response to an enormously complex constellation of signals, the system is self-organizing according to Darwin’s population principle. It is the activity of this vast web of networks that entails consciousness by means of what we call ‘reentrant interactions’ that help to organize ‘reality’ into patterns.

“The thalamocortical networks were selected during evolution because they provided humans with the ability to make higher-order discriminations and adapt in a superior way to their environment. Such higher-order discriminations confer the ability to imagine the future, to explicitly recall the past and to be conscious of being conscious.

“Because each loop reaches closure by completing its circuit through the varying paths from the thalamus to the cortex and back, the brain can ‘fill in’ and provide knowledge beyond that which you immediately hear, see or smell. The resulting discriminations are known in philosophy as qualia. These discriminations account for the intangible awareness of mood, and they define the greenness of green and the warmness of warmth. Together, qualia make up what we call consciousness.”

Rees: There are neural processes happening in AI systems that are similar to — but not the same as — those in humans.

It seems likely that there is some form of backpropagation in the brain. And we just talked about the fact that both biological neural networks and artificial neural networks build latent space representations. And there is more.

But I do not think that makes them have interiority or intentionality in the way we have come to understand these terms.

In fact, I think the philosophical significance of AI is that it invites us to reconsider the way we previously understood these terms.

And the close connection between backpropagation and reentry that you observe is a great example of that.
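The mechanics at issue here can be made concrete with a minimal sketch, illustrative only and not drawn from any production system: a tiny two-layer network learns the XOR function by backpropagation, and its hidden activations form exactly the kind of learned latent space representation discussed above, a description of the inputs that exists in neither the data nor the targets.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# A tiny 2-layer network. Its hidden activations h are a learned
# "latent space representation" of the inputs.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10_000):
    # Forward pass: inputs -> latent representation h -> prediction.
    h = sigmoid(X @ W1 + b1)
    pred = sigmoid(h @ W2 + b2)

    # Backward pass (backpropagation): the prediction error is sent
    # back through the network, layer by layer, and every weight is
    # nudged in the direction that reduces that error.
    d_out = (pred - y) * pred * (1 - pred)   # error signal at the output
    d_hid = (d_out @ W2.T) * h * (1 - h)     # error flowing back into layer 1
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_hid); b1 -= lr * d_hid.sum(axis=0)

final = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(np.round(final.ravel(), 2))
```

After training, the network's outputs approach [0, 1, 1, 0]: a mapping it could only find by building an internal representation that no one programmed into it directly.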

The person who did perhaps more than anyone to make the concept of backpropagation accessible and widely known was David Rumelhart, a very influential psychologist and cognitive scientist who, like Edelman, lived and worked in San Diego.

Both Rumelhart and Edelman were key figures in the connectionist school. I say this because I think the theoretical impulse behind reentry and backpropagation is almost identical: the effort to develop a conceptual vocabulary that allows us to undifferentiate biological and artificial neural networks, in order to understand the brain better and in order to build better neural networks.

Some have suggested that the work of the connectionists was an attempt to think about the brain in terms of computers — but one could just as well say it was an attempt to think about computers or AI in terms of biology.

At base, what matters was the invention of a vocabulary that didn’t need to make distinctions.

There is a space in the middle, an overlap.

It is very difficult to overemphasize how powerful this kind of conceptual work has been over the last 40 years.

Arguably, the work of people like Rumelhart and Edelman has led to a concept of intelligence that can be described in a substrate-independent manner. And these are not just theoretical concepts but concrete engineering possibilities.

Does this mean that human brains and AI are the same thing?

Of course not. Are birds, planes and drones all the same thing? No, but they all make use of the general laws of aerodynamics. And the same may be true for brains and AI. The material infrastructure of intelligence is very different — but some of the principles that organize these infrastructures may be very similar.

In some instances, we likely will want to build AI systems similar to human brains. But in many cases, I presume, we do not. What makes AI attractive, in my thinking, is that we can build intelligent systems that do not yet exist — but that are perfectly possible.

I often think of AI as a kind of very early-stage experimental embryology. Indeed, I often think that AI is doing for intelligence what synthetic biology did for nature. Meaning, synthetic biology transformed nature into a vast field of possibility. The number of things that exist in nature is minuscule compared to the things that could exist in nature. In fact, many more things have existed in the course of evolution than there are now, and there is no reason why we can’t combine strands of DNA and make new things. Synthetic biology is the field of practice that can bring these possible things into existence.

“What makes AI attractive, in my thinking, is that we can build intelligent systems that do not yet exist — but that are perfectly possible.”

The same is true for AI and intelligence. Today, intelligence is no longer defined by a single or a few instances of existing intelligences but by the very many intelligent things that could exist.

Gardels: Back in the 1930s, much of philosophy from Heidegger to Carl Schmitt was against an emergent technological system that alienated humans from “being.” As Schmitt put it back then, “technical thinking is foreign to all social traditions; the machine has no tradition. One of Karl Marx’s seminal sociological discoveries is that technology is the true revolutionary principle, besides which all revolutions based on natural law are antiquated forms of recreation. A society built exclusively on progressive technology would thus be nothing but revolutionary; it would soon destroy itself and its technology.” As Marx put it, “all that is solid melts into air.”

Does the nature of AI make Schmitt’s perspective obsolete, or is it simply a fulfillment of his perspective?

Rees: I think the answer — and I take that to be very good news — is yes, it makes Schmitt’s perspective obsolete.

Let me first say something about Schmitt. He was essentially apocalyptic in his thinking.

Like all apocalyptic thinkers, he had a more or less definite, ontological and in his case also religious, worldview. Everything in his world had a definite, metaphysical meaning. And he thought the modern, liberal world, the world of the Enlightenment, was out there to destroy the timeless, ultimately, divine order of things. What is more, he thought that when this happened, all hell would break loose, and the end of the world would begin to unfold.

The lines that you quote illustrate this. On the one hand, the modern Enlightenment period: the factory, technology, the substanceless and relativizing quality of money, etc. — and, on the other hand, social, that is, racially defined, national traditions, images and symbols.

Schmitt was worried that the liberal order would de-substantize the world. Everything would become relative. And at least if we go by his writings, he thought that Jews were one of the key driving forces of this de-substantification of the world. Famously, Schmitt was a rabid antisemite.

He was so worried about the end of the world that he aligned himself with Hitler and the Nazis and their agendas.

From today’s perspective, of course, it is obvious that the ones who embraced modern technology to de-substantize humans, to deprive them of their humanity and to murder them on an industrial scale, were the Nazis.

It is difficult to suppress a comment on Heidegger here, who sought to “defend being against technology.” That said, I think there are important differences between the two.

But let me go to the second part of my reply, why I think AI renders his world obsolete.

AI has proven that the either-or logic at the core of Schmitt’s thinking doesn’t hold. One example of this is provided by Schmitt’s curious appropriation of Marx.

Famously, Marx described the rise of industry enabled by the steam engine as a dehumanizing event. Before capitalists discovered how they could use the steam engine to fabricate goods, most goods were made in artisanal workshops. Maybe these workshops were harsh places. But, or so Marx suggests, they were also places of human dignity and virtuosity.

Why? Well, because at the center of these workshops were humans who used tools. As Marx saw it, tools are nothing in themselves. What one can do with a tool depends entirely on the imagination and the virtuosity of the human who uses it.

With the steam engine, everything changed. It gave rise to factories in which goods were made by machines rather than by artisans. However, the machines were not entirely autonomous. They needed humans to assist them. That is, what the machines needed were not artisans. What they needed was not human imagination and virtuosity. On the contrary, what was needed were humans who could function as extensions of the machine. That work made these humans mindless and reduced them to mere machines.

That is why Marx described the machine as the “other” of the human and the factory as the place where humans are deprived of their own humanity.

Schmitt appropriated this for his own argument to juxtapose his kind of substance thinking with the modern, technical world. The net outcome is that you now have a juxtaposition of timeless, substantive, metaphysical truth on the one hand — and, on the other, the modern world of machines, of technology, of functionality, of relativity of values, of substance-less humans.

Hence, technology, for Schmitt, comes into view as an unnatural violence against the metaphysically timeless and true.

“The alternative to being against AI is to enter AI and try to show what it could be.”

Schmitt’s distinction was most certainly not timeless but intrinsic to the modern period and deeply indebted to its paradigm of the new machine versus the old human.

The deep-learning-based AI systems we have today defy and escape the “either-or” distinction of Schmitt — or of Marx and of Heidegger and all those who came after them.

AI clearly and beautifully shows us that there is a whole world in between these distinctions. A world of things, of which AI is just one, that have some qualities of intelligence and some qualities of machine — and that are reducible to neither. Things that are at once natural and built.

AI invites us to rethink ourselves and the world from within this in-between.

Let me say that I understand the wish to render human life meaningful. To render thought and intellectual insight critical and, so too, art, creativity, discovery, science and community. I totally get it and share it.

But I think the suggestion that all these things are on the one side, and AI and those who build it are on the other, is somewhat surprising and unfortunate.

A critical ethos grounded in this distinction reproduces the world it says it is against.

The alternative to being against AI is to enter AI and try to show what it could be. We need more in-between people. If my suggestion that AI is an epochal rupture is only modestly accurate, then I don’t really see what the alternative is.

This video, “overflow,” was generated with the Limn AI system based on a prompt enticing the AI to categorize an ambiguous drawing. The video reflects the AI’s effort to work through its learned categories of representation — never arriving at a stable representation, and instead exploring the hidden spaces between existing categories. (LIMN/Noema Magazine)

III. In-Betweenness & Symbiogenesis

Gardels: I’m wondering if there is a correspondence between your “in-betweenness” point and Blaise Agüera y Arcas’ idea that evolution advances not only by natural selection but through “symbiogenesis” — the mutual transformation that conjoins separate entities into one interdependent organism through the transfer of new information, for example, DNA fragments carried by bacteria that are “copy and pasted” into the cells they penetrate. What results is not either/or, but something new created by symbiosis.

Rees: I believe Blaise, like me, was influenced by an essay the American psychologist and computer scientist Joseph Licklider published in 1960, called “Man-Computer Symbiosis.”

This is how the essay begins:

“The fig tree is pollinated only by the insect Blastophaga grossorun. The larva of the insect lives in the ovary of the fig tree, and there it gets its food. The tree and the insect are thus heavily interdependent: the tree cannot reproduce without the insect; the insect cannot eat without the tree; together, they constitute not only a viable but a productive and thriving partnership. This cooperative ‘living together in intimate association, or even close union, of two dissimilar organisms’ is called ‘symbiosis.’”

Licklider goes on: “At present (…) there are no man-computer symbioses. The purposes of this paper are to present the concept and, hopefully, to foster the development of man-computer symbiosis by analyzing some problems of interaction between men and computing machines, calling attention to applicable principles of man-machine engineering, and pointing out a few questions to which research answers are needed. The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.”

What does symbiosis mean? It means that one organism cannot survive without the other, which belongs to a different species. More specifically, it means that one organism is dependent on functions performed by the other organism. More philosophically put, symbiosis means that there is an indistinguishability in the middle. An impossibility to say where one organism ends and the other (or the others) begin.

Is it conceivable that this kind of interdependence will in the future occur between humans and AI?

The traditional answer is: Absolutely not. The old belief is that humans belong to nature and, more specifically, to biology — to living things that can self-reproduce. Computers, on the other hand, belong to a totally different ontological category: the artificial, the merely technical. They don’t grow; they are constructed and built. They have neither life nor being.

Symbiosis, in that old way of thinking, is only possible within the realm of nature, between living things. In this way of thinking, there cannot possibly be a human-computer symbiosis.

I think there was also a sense that what Licklider meant was an enrolling of humans into the machine concept. Perhaps like a cyborg. And as humans are supposedly more than or different from machines, that would mean a loss of that which makes us human, of that which sets us apart from machines.

“AI can have agency, creativity, knowledge, language and understanding without either being alive or being human.”

But as we have discussed, AI renders this old, classically modern distinction between living humans or beings and inanimate machines or things insufficient.

AI leads us into a territory that lies outside of these old distinctions. If one enters this territory, one can see that things — things like AI — can have agency, creativity, knowledge, language and understanding without either being alive or being human.

That is, AI affords us an opportunity to experience the world anew and to rethink how we have thus far organized things in the world, the categories to which we assigned them.

But here is the question: Is human-AI symbiosis possible from within this new, still emergent territory — this in-between territory — in the sense of the indistinguishability just described?

I think so. And I am excited about it. A bit like Licklider, I am looking forward to a “partnership” that will allow us to “think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.”

When we can think thoughts we cannot think without AI, and when AI can process data in ways it cannot on its own, then no one can say where humans end and AI begins. Then we have indistinguishability, a symbiosis.

Let me add that what I describe here — with Licklider — is not a gradual human dependency on AI, where we outsource all thinking and decision-making to AI until we are barely able to think or decide on our own.

Quite the opposite. I am describing a situation of maximal human intellectual curiosity. A state where being human is being more than human. Where the cognitive boundary between humans and AI becomes meaningfully indistinct.

Is this different, in an ontologically meaningful way, from fungi-tree relationships?

Their relationship is essentially a communication, in which they cogitate together. Neither party can produce or process the information exchanged in this communication alone. The actual processing of the information — cognition — happens at the interface between them: Call it symbiosis.

What, if any, is the ontological difference between human-AI symbiosis and this fungi-tree symbiosis? I fail to see one.

Gardels: Perhaps such a symbiosis of inorganic and organic intelligence will spawn what Benjamin Bratton calls “planetary sapience,” where AI helps us better understand natural systems and align with them?

Rees: What if we linked AI to this fungi-tree symbiosis? AI could read and translate chemical and electrical signals from fungi-tree-soil networks. These signals contain information about ecosystem health, nutrient flows, stress responses. That is, AI could make the communication between fungi and trees intelligible to humans in real time.

We humans could then understand something — and possibly pose questions and thereby communicate — that we simply couldn’t otherwise, independent of AI. And simultaneously we can help AI ask the right questions and process information in ways it cannot on its own.

Now let’s expand the scope: What if AI could connect us to large-scale planetary systems that are impossible to know without AI? In fact, what if AI would become something like a self-monitoring planetary system into which we are directly looped. As Bratton has put it, “Only when intelligence becomes artificial and can be scaled into massive, distributed systems beyond the narrow confines of biological organisms, can we have a knowledge of the planetary systems in which we live.”

Perhaps in a way where — as DNA is the best storage for information we know — part of the information storage and the compute the AI relies on is actually done by mycorrhizal networks?

If anything, I can’t wait to have such a whole Earth symbiotic state — and to be a part of this form of reciprocal communication.

Gardels: What is the first next step to guiding us toward symbiosis between humans and intelligent machines that opens up the possibilities of AI augmenting the human experience as never before?

Rees: Ours is a time when philosophical research really matters. I mean, really, really matters.

As we have elaborated in this conversation, we live in philosophically discontinuous times. The world has been outgrowing the concepts we have lived by for some time now.

To some, that is very exciting. To many, however, it is not. The insecurity and confusion are widespread and real.

If history is any guide, we can assume that political unrest will occur, with possibly far-reaching consequences, including autocratic strongmen who try to force a return to the past.

One way to prevent such unfortunate outcomes is to do the philosophical work that can lead to new concepts that allow us all to navigate uncharted pathways.

“AI could make the communication between fungi-trees intelligible to humans in real-time.”

However, the kind of philosophical work that is needed cannot be done in the solitude of ivory towers. We need philosophers in the wild, in AI labs and companies. We need philosophers who can work alongside engineers to jointly discover new ways of thinking and experiencing that might be afforded to us by AI.

What I dream of are philosophical R&D labs that can experiment at the intersection of philosophical conceptual research, AI engineering and product making.

Gardels: Can you give a concrete example?

Rees: I think we live in unprecedented times, so giving an example is difficult. However, there is an important historical reference, the Bauhaus School.

When Walter Gropius founded the Bauhaus, in 1919, many German intellectuals were deeply skeptical of the industrial age. Not so Gropius. He experienced the possibilities that new materials like glass, steel and concrete offered as a conceptual rupture with the 19th century.

And so, he argued — very much against the dominant opinion — that it was the duty of architects and artists to explore these new materials, and to invent forms and products that would lift people out of the 19th and into the 20th century.

Today, we need something akin to the Bauhaus — but focused on AI.

We need philosophical R&D labs that would allow us to explore and practice AI as the experimental philosophy it is.

Billions are being poured into many different aspects of AI but very little into the kind of philosophical work that can help us discover and invent new concepts — new vocabularies for being human — in the world today. The Antikythera project of the Berggruen Institute under the leadership of Bratton is one small exception.

Philosophical R&D labs will not happen automatically. There will be no new guiding philosophies or philosophical ideas if we do not make strategic investments.

In the absence of new concepts, people — the public as much as engineers — will continue to understand the new in terms of the old. Because this doesn’t work, the result could be decades of turmoil.

The post Why AI Is A Philosophical Rupture appeared first on NOEMA.

AI Will Take Over Human Systems From Within https://www.noemamag.com/al-will-take-over-human-systems-from-within Wed, 09 Oct 2024 16:54:27 +0000

Yuval Noah Harari sat down with Noema Editor-in-Chief Nathan Gardels to discuss the themes of his new book, “Nexus.”

Nathan Gardels: The premise of your work is that what distinguishes sapiens is their ability to tell stories people believe in — stories that connect them and enable collective action. As the French philosopher Régis Debray has written in his reflections on how de Gaulle revived his country after its World War II defeat: “The myth makes the people, not the people the myth.”

What matters in the march of history, you say, is the information networks that convey those narratives. Can you elaborate on this point with some historical examples?

Yuval Harari: Our human superpower is the ability to cooperate in very large numbers. And for that, you need a lot of individuals to agree on laws, norms, values and plans of action. So how do you connect a lot of individuals into a network? You do it with information, of course, and most importantly, with mythologies, narratives and stories. We are a storytelling animal.

You can compare it to how an organism or a body functions. Originally there were only single-celled organisms. It took hundreds of millions, even billions, of years to create multicellular organisms like humans or elephants or whales. The big question for the multicellular organism is: How do you connect all these billions of cells into a functioning human being so that the liver, heart, muscles and brain all work together toward common goals?

In the body, you do that by transferring information, whether through the nervous system or through hormones and biochemicals. It is not just a single information network. There are actually several information networks combined to hold the body together.

It’s the same with the state, with a church, with an army, with a corporation. The big question is, how do you make all these billions of cells of individual humans cooperate as part of a single organism? The most important way you do it in humans is with stories.

Think about religion: visual information, images and icons. The most common portrait in history, the most famous face in history, is the face of Jesus. Over 2,000 years, billions of portraits of Jesus were created, and they are everywhere: in churches, in cathedrals, in private homes, in government offices. The amazing thing about all these portraits is that not a single one of them is true.

Not a single one of them is authentic because nobody has any idea what Jesus actually looked like. We don’t know of any portraits that were drawn of him during his lifetime. He was a very, very minor figure working in a province of the Roman Empire, known perhaps to a few thousand people who met him personally or heard rumors about him. The actual person of Jesus had a very, very small impact on history.

Yet the story of Jesus and the image of Jesus, most of it created after he was long dead, had a tremendous impact on history.  Even in the Bible, there is not a single word about what Jesus looked like.  We have one sentence in the Bible about the cloth he wore at a certain point, but no information about whether this man was tall or short, fat or thin, blonde or black-haired. Nothing.

Over the centuries, you have had millions of people closing their eyes and visualizing Christ because of this image created of him. Still, his story has united billions of people for close to 2,000 years now, with both good and bad consequences — from charity and hospitals and relief to the poor to crusades and Inquisitions and Holy Wars. It’s all, in the end, based on a story.

The network of cathedrals is kind of the nerve center of the whole thing. The question is, what do you preach to people in the cathedral? Do you preach that they should give some of their money and their time to help the poor, to heal the sick? Or do you tell them to wage war against the infidels and against the heretics?

Gardels: Networks built on stories bring people together through the information they make available. But what you call “a naïve view of information” can make things worse, not better. Can you explain what you mean by this?

“We are a storytelling animal.”

Harari: The naïve view of information, which is very common in places like Silicon Valley, thinks that information equals truth. If information is truth, the more information you have in the world, the more knowledge you have and the more wisdom you have; the answer to any problem is just more information.

People do acknowledge that there are lies and propaganda and misinformation and disinformation, but they say, “OK, the answer to all these problems with information is more information and more freedom of information. If we just flood the world with information, truth and knowledge and wisdom will kind of float to the surface on this ocean of information.” This is a complete mistake because the truth is a very rare and costly kind of information.

Most information in the world is not truth. Most information is junk. Most information is fiction and fantasies, delusions, illusions and lies. While truth is costly, fiction is cheap.

If you want to write a truthful account of things that happened in the Roman Empire, you need to invest so much time and energy and effort. Experts go to universities and spend 10 years just learning Latin and Greek and how to read ancient inscriptions. Then, just because you found an inscription by Augustus Caesar saying something doesn’t mean it’s true. Maybe it’s propaganda, maybe it was a mistake. How do you tell the difference between reliable and unreliable information? So, the truth is costly to find.

In contrast, if you want to write a fictional story about the Roman Empire, it’s very easy. You just write the first thing that comes to your mind. You don’t need to fact-check. You don’t need to know Latin or Greek or to go do archeological excavations and find ancient pottery sherds and try to interpret what they mean.

The truth is also complicated, while fiction can be made as simple as you would like it to be. What is the truth about why the Roman Republic fell, or why the Roman Empire fell? Is it because of loose sexual morals, as so many believe? The whole truth is very, very complicated and involves many factors, but fiction can be made as simple as you like.

Gardels: It is the very naïve simplicity of fiction that makes it easier for so many to grasp and for such narratives to capture so much attention.

Harari: Exactly. And finally, the truth is often painful to take in even at the level of individuals. It’s difficult to acknowledge the truth about how we behave, how we treat the people we love, how we treat ourselves. This is why people go to therapy for years to understand.

That applies as well at the level of nations and cultures. I look at my country, Israel.  If you have an Israeli politician who will tell people the truth, the whole truth and nothing but the truth about the Israeli-Palestinian conflict, that person will not win the election — guaranteed. People do not want to hear it; they do not want to acknowledge it.

That is true in the U.S. It’s true in India, Italy, in all the nations of the world. It’s also true for religions.

The truth can be unattractive. Fiction can make one’s image of reality as pleasing and attractive as you would like. So, in a competition between information that is costly, complicated and unattractive, and information that is cheap and simple and pleasing, it’s obvious which one will win.

If you just flood the world with information, truth is bound to lose. If you want truth to win and to acquire knowledge and wisdom, you must tilt the playing field. How? By building institutions that do the difficult work of investing the time, resources and effort to find the truth and to explain it and promote it.

These institutions can run the gamut from research institutions and universities to newspapers, to courts — though in the judicial system, it’s also often not easy to know what the truth is. Only if we invest in these kinds of institutions that preserve the hope of reaching the truth and acquiring knowledge and developing wisdom, can we tilt the balance.

Gardels: In other words, since the fictions or delusions you’ve described are what secure social cohesion, the prevailing logic of information networks is to privilege order over truth, which is disruptive.

“The truth is a very rare and costly kind of information.”

Harari: Yes. For an information network to function, you need two things. You need to know some truth. If you ignore reality completely, you will not be able to function in the universe and you will collapse. But at the same time, just knowing the truth is not enough. You also need to preserve order. You need to preserve cohesion.

For the human body to function, it needs to know some truth about the world: How to get water, how to get food, how to avoid predators. But the body also needs to preserve all these billions of cells working together. This is also true of armies and churches and states.

The key thing to understand is that order, in most cases, is more important than truth for societies to cohere and work together collectively.

If you think, for instance, about a country trying to develop nuclear weapons, what do you need to do to build an atom bomb? You obviously must know some facts about physics. If you ignore all the facts of physics, your bomb will not explode. But just knowing the facts of physics is not enough.

If you have a lone physicist, the most brilliant physicist in the world, and she knows that E = mc² and she’s an expert on quantum mechanics, she can’t build an atom bomb by herself. Impossible. Just knowing the truth is not enough. She needs help from millions of other people. She needs miners in some distant land to mine the uranium. She needs people to design and build the reactor and centrifuges to enrich the uranium to bomb-grade. And she needs people, of course, to farm food so that she and the miners and engineers and construction workers will have something to eat. You need all of them.

So, to motivate them collectively and bind them to the project, you need a story. You need a mythology. You need an ideology. And when it comes to building the mythology that will inspire these millions of people, the facts are not so crucial.  

Now, most of the time the people who understand nuclear physics get their orders from experts in mythology or ideology. If you go to Iran these days, you have experts in nuclear physics getting orders from experts in Shiite theology. If you go to Israel, the experts in nuclear physics are getting orders from experts in Jewish theology. If you were in the Soviet Union, the orders came from Communist ideologues. This is usually how it works in history: The people who understand order are giving the orders to the people who merely know the truth.

Narrative Warfare

Gardels: Networks of connectivity are a dual-use technology. They can foster social cohesion and collective action, but they can also divide. Particularly now with peer-to-peer social media, you have every group with its own identity, believing its own truth and spinning its own narrative.

This creates a kind of archipelago of subcultures, a fragmented sense of reality, which actually subverts cohesion.

As the Korean-German philosopher Byung-Chul Han puts it, peer-to-peer connectivity flows from private space to private space without creating a public sphere. Without a public sphere, there cannot be social cohesion. So you have this dual dynamic of cohesion — whether it’s for good or bad purposes — and then you have complete fragmentation. Order breaks down.

Harari: Absolutely. Stories unite, but stories also divide. Because a binding narrative is so important to keeping the order, to keeping things together, “narrative warfare” is the most potent type of warfare in the world, because it can cause the disintegration of the integral network. Yes, it goes both ways, absolutely.

Empowerment & Control

Gardels: There is another dual aspect of information networks: they both concentrate and disperse power at the same time.

This quote from DeepMind’s co-founder Mustafa Suleyman captures the contradictory nature of that duality:

“The internet centralizes in a few hubs while also empowering billions of people. It creates behemoths and yet gives everyone the opportunity to join in. Social media created a few giants and a million tribes. Everyone can build a website, but there is only one Google. Everyone can sell their niche products, but there is only one Amazon. The disruption of the internet is largely explained by this tension, this potent, combustible brew of empowerment and control.”

In other words, network connectivity tends to centralize to be more efficient, but it also creates billions of possibilities.

“Now, most of the time the people who understand nuclear physics get their orders from experts in mythology or ideology.”

Harari: Yes, but it’s not deterministic. You can build different kinds of information networks. One of the things that I try to do in my book, “Nexus,” is look again at the whole of human history from this viewpoint of information networks, to understand institutions like the Catholic Church, the Soviet Union or the Roman Empire as information networks. I’ve studied how information flows differently with different models.  If you do this, you see that many of the conflicts and wars that shaped history are actually the result of different models of information networks.

Maybe the best example is the tension between democracy and dictatorship. We tend to think of democracy and dictatorship as different ethical systems that believe in different political ideologies. That is true.  But at a more fundamental level, they are simply different models for how information flows in the world.

A dictatorship is a centralized information network. All the decisions are supposed to be made in just one place. There is one person who dictates everything. So all the information must flow to a central hub where all the decisions are being made and from where all the orders are sent.

A democracy, in contrast, is a distributed information network. It is decentralized. Most decisions are not made in the center but in other, more peripheral places.  In a democracy you will see that, yes, a lot of information is flowing to the center, let’s say to Washington in the United States. But not all of it.

You have lots of organizations, corporations, private individuals or voluntary associations that make decisions by themselves, without any kind of guidance or permission from Washington. Much of the information just flows between private companies and voluntary associations and individuals without ever passing through the Washington center, through the government.

The other thing that distinguishes these two models is that democracies retain strong self-correcting mechanisms that can identify and correct mistaken decisions made at the center.

The danger in a democracy, always, is that the center might use its power to accumulate ever more power until you get a dictatorship.

At the simplest level, in a democracy, you give power to a person or a party for four years or some limited term, on the condition that they must give it back and the people can make a different choice. What happens if they don’t give back the power you gave them now that they have it? What can compel them to give it back?

That has been the big problem of democracy from ancient Greece until modern America, and this is also the issue at the center of the present election in America. You have a person, Donald Trump, with a proven track record of not being keen to give up power after you give it to him. That is what makes the upcoming election such a huge gamble.

In places like Russia, or now also in Venezuela, the public gave power through elections to somebody who now doesn't want to give it up. Here, it is clear that the self-correcting mechanism of elections alone is not enough. If all the other distributed powers of self-correction, such as courts or free media, are in the hands of a government that suppresses any active opposition, it is very easy to manipulate electoral outcomes.

We see this again and again from ancient history, from the Roman Republic to the present day. Dictators don’t abolish elections; they just use them as a kind of facade to hide their power and as a kind of authoritarian ritual. You hold an election every four years in which you win every time by a 90% majority, and you say, “Look, the people love me.”

So elections by themselves are not enough. You need the entire range of self-correcting mechanisms which are known as the checks and balances of democracy to make sure that the distributed information network remains distributed and not overly centralized.

Leninist AI

Gardels: How does the advent of artificial intelligences amplify these models of information networks?

Harari: We don’t know yet. One prominent hypothesis is that AI could tilt the balance decisively in favor of centralized information networks, in favor of dictatorships. Why?

Let’s look back again at the 20th century. The 20th century ended with people convinced that democracy won, that democracy is simply more efficient than dictatorship. And again, the easiest way to understand it is not in ethical terms, but in terms of information.

“One prominent hypothesis is that AI could tilt the balance decisively in favor of centralized information networks, in favor of dictatorships.”

The argument then was that when you try to concentrate all the information of a country like the Soviet Union in one place, it is just extremely inefficient. Humans at the center are just unable to process so much information fast enough, so they make bad decisions, first and foremost, bad economic decisions. There is no mechanism to correct their mistakes, and the economy goes from bad to worse, until you have a collapse, which is what happened to the Soviet Union.

In contrast, what made the West successful was a distributed information system like the United States, which allowed information to go to many different places.  You didn’t just rely on a few bureaucrats in Washington to make all the important economic decisions. And if somebody in Washington made the wrong decision, you could replace him. You could correct a mistake. And this proved to be just far more efficient. So in the end, it was an economic competition in which the distributed system won, because it was far, far more efficient.

Now, enter AI and people say, "Ah, well, when you concentrate all the information in one place, humans are unable to process it, and they make really bad decisions. But AI is different. When you flood humans with information, they are overwhelmed. When you flood AI with information, it becomes better. Data is the food, the fuel, for the growth of AI. So the more of it, the better. What couldn't work in the 20th century, because you had humans at the center, might work in the 21st century when you put AI systems at the center."

What you see today, therefore, even in capitalist societies, is that one area after another is being monopolized by a single behemoth, as Suleyman pointed out. What we are seeing is the creation of extremely centralized information networks, because AI algorithms make such centralization far more efficient.

Not everybody agrees with this analysis. One weakness is that you still must account for the absence of self-correcting mechanisms. Yes, if you put all the information in one place, the AI can process that information in a way that humans can't, but it still makes mistakes. Like humans, AI is fallible, very, very fallible. So, this is just a recipe for disaster. Sooner or later, this kind of Leninist AI will make some terrible mistake, and there will be no mechanism to correct it.

Another thing worth pointing out, if you think about the impact on human dictators, is the threat AI poses to them. The biggest fear of every human dictator in history was not a democratic revolution; that has been a very rare occurrence in history. Not a single Roman emperor was toppled by a democratic revolution.

The biggest fear of every human autocrat is a subordinate who becomes more powerful than him and whom he doesn't know how to control. Whereas no Roman emperor was ever toppled by a democratic revolution, dozens of Roman emperors were assassinated, overthrown or manipulated by powerful subordinates — by an army general, a provincial governor, a wife or a cousin. This was always the biggest danger.

If I’m a human dictator, I should be terrified by AI because I’m bringing into the palace a subordinate that will be far more powerful than me and that I have no chance of controlling.

What we know from the history of dictatorships is that when you concentrate all power in the hands of one person, whoever controls that person controls the empire. What we also know is that it's relatively easy to manipulate autocrats. They tend to be extremely paranoid individuals. Every sultanate or Chinese empire has always had concubines, eunuchs and counselors who knew how to manipulate the paranoid person at the top.

For an AI to learn how to manipulate a paranoid Putin or a paranoid Maduro is like stealing candy from a baby. That would be the easiest thing in the world. And so, if you think about human dictatorships, AI poses an enormous danger. For the Putins and Maduros of the world, I would tell them, “Don’t rush to embrace AI.”

Alien Intelligence

Gardels: The most worrying thing about AI is how it hacks the master key of human civilization by appropriating what you’ve called the superpower of sapiens — language and the ability to construct stories that bind societies together.

In this context, you see AI as an alien force that is a threat to our species.

“For an AI to learn how to manipulate a paranoid Putin or a paranoid Maduro is like stealing candy from a baby.”

Harari: I think of AI as an acronym not for artificial intelligence, but for alien intelligence. I mean alien not in the sense that it’s coming from outer space, but alien in the sense that it thinks, makes decisions and processes information in a fundamentally different way than humans. It’s not even organic.

The most important thing to realize about AI is that it is not a tool. It’s an agent. Every previous technology in history was a tool in our hands. You invent a printing press, you decide what to print. You invent an atom bomb, you decide which cities to bomb. But you invent an AI, and the AI starts to make the decisions. It starts to decide which books to print and which cities to bomb, and eventually even which new AIs to develop. So don’t think about it like the previous technologies we’ve had in history. This is completely new.

For the first time, we have to contend with a very intelligent agent here on the planet. And it’s not one agent. It’s not like one big supercomputer. It’s potentially millions and billions of AI agents that are everywhere. You have AIs in the banks deciding whether to give us a loan. You have AIs in companies deciding whether to give us jobs. You have AIs at universities deciding whether to accept you and what grades to give you. AIs will be in armies, deciding whether to bomb our houses or to target us and kill us.

We haven’t seen anything yet. Let’s remember that the AIs of today like the ChatGPTs are extremely primitive. These AIs are likely to continue developing for decades, centuries, millennia and millions of years ahead.

I talked in the beginning about organic evolution from single-celled organisms like amoebas to multicellular organisms like dinosaurs, mammals and humans. It took billions of years of evolution. AI is now at its amoeba stage, basically. But it won’t take it billions of years to get to the dinosaur stage. It may take just 20 years, because digital evolution is far, far faster than organic evolution.

If ChatGPT is the amoeba, what do you think an AI T. rex would look like? Think about it very, very seriously, because we are likely to encounter AI T. rexes in 2040 or 2050, within the lifetime of most people reading this.

By definition, AIs are not something that we can plan for in advance, anticipating everything they will do. If you can anticipate everything they will do, then it is not AI.

An AI learns and changes by itself, and this is why the challenge is so big. The idea that, “Oh, we can just build some safety mechanisms into it, and we can just have these regulations,” completely misunderstands that what we are contending with is an alien agent that can act on its own and is not a tool like all previous technologies.

Gardels: Even now, in the primitive stages of AIs, aren’t we seeing the scale and scope of its impact?

Harari: Definitely. We are not talking only about the future, but what AI has already wrought. We’ve already seen at least one big catastrophe with the way that algorithm-driven social media destabilizes democracies and societies all over the world. This was kind of a first taste of what happens when you release an agent into the world that makes decisions by itself.

The algorithms of social media used by Twitter/X, YouTube or Facebook are extremely primitive. Though only first generation, these AIs have had a huge impact on history. These social media algorithms were given the task of increasing user engagement, to make more people spend more time on Facebook, more time on YouTube, more time on Twitter. What could go wrong? Engagement is a good thing, right?

Wrong, because the AIs discovered that the easiest way to increase user engagement is to spread hate, fear and greed, since that is what most readily captures human attention. You press the hate button in people's minds, and they are glued to the screen. They stay longer on the social media platforms, and the algorithms can quickly place ads while they have your attention.

Nobody instructed the AIs to spread hatred and outrage. Mark Zuckerberg or the other people who run Facebook or YouTube did not intend to deliberately spread hate. They gave power to these algorithms, and the algorithms did something unexpected and unanticipated because they are AI. This is what AIs do.
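This dynamic can be made concrete with a toy sketch in Python. It is a deliberately simplified illustration, not any platform's actual system: a bandit-style recommender rewarded only on an engagement proxy. The content categories and engagement probabilities are invented for the example; nobody tells the algorithm to favor outrage, but it discovers that outrage pays best for the metric it was given.

```python
# Toy illustration (not any platform's real system): an epsilon-greedy
# recommender whose only reward signal is "did the user engage?"
# Outrage content engages slightly more often, so the algorithm
# converges on it without ever being instructed to spread outrage.
import random

random.seed(0)

# Hypothetical engagement probabilities per content type (invented).
ENGAGE_PROB = {"informative": 0.30, "cute": 0.40, "outrage": 0.55}

counts = {c: 1 for c in ENGAGE_PROB}  # times each type was shown
wins = {c: 1 for c in ENGAGE_PROB}    # times it was engaged with

def recommend():
    # Explore 10% of the time; otherwise exploit the content type
    # with the best observed engagement rate so far.
    if random.random() < 0.1:
        return random.choice(list(ENGAGE_PROB))
    return max(ENGAGE_PROB, key=lambda c: wins[c] / counts[c])

for _ in range(100_000):
    choice = recommend()
    counts[choice] += 1
    if random.random() < ENGAGE_PROB[choice]:
        wins[choice] += 1

# The proxy goal ("engagement") is met; the unintended side effect is
# that the feed fills with the most provocative category.
print(max(counts, key=counts.get))
```

The point of the sketch is the proxy problem: give an optimizer a measurable stand-in like "engagement," and it will surface whatever content moves that number, regardless of side effects.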

“The most important thing to realize about AI is that it is not a tool. It’s an agent.”

The damage is not in the future. It is already in the past. American democracy is now in danger because of how these extremely primitive AIs have fragmented our societies.

Just imagine what havoc more sophisticated AI models will wreak 10 or 20 years from now, when the amoebas become dinosaurs.

We Need Checks & Balances On Distributed Power

Gardels: To go back to the self-correcting mechanisms of republics and democracies: In the past, they have defended themselves by putting in place checks and balances whenever too much power is concentrated in one place. The social splintering you've described suggests we now need another set of checks and balances for when power is so distributed into tribes by social media that the public sphere is disempowered, social cohesion can't hold and what binds societies together disintegrates.

Harari: Yes, you need checks and balances on the other side as well. Absolutely. Democracy always needs to find the middle path between dictatorship on one side, and anarchy on the other side. Anarchy is not democracy. If you lose all cohesion, that is not democracy.

Democracy, at least in the modern world of large-scale societies, requires patriotism and nationalism. It is very hard to maintain a democratic system without a cohesive national community. Lots of people get this wrong, especially on the left. They think that nationalism and patriotism are forces of evil, that they are negative; that the world would be such a wonderful place without patriotism. It won’t. It will fall into tribal anarchy.

The nation is a good thing when it's been built and maintained properly. Nationalism should not be about hate. If you're a patriot, it doesn't mean you hate anybody. It's not about hating foreigners. Nationalism is about love. It's about loving your compatriots. It's about being part of a network of millions of people, most of whom you've never met in your life, but whom you still care enough about that, for instance, you're willing to give 20%, 40%, even 60% of your income so that these strangers on the other side of the country can enjoy good health care, education, a working sewage system and drinking water. This is patriotism.

If this community of belonging disintegrates, then democracies fall apart. What is so worrying these days is that it is often those leaders who portray themselves as nationalists who are the most responsible for destroying the national community.

Look at my own country. Prime Minister Benjamin Netanyahu built his political career for years by destroying the Israeli nation and breaking it up into hostile tribes. He deliberately spreads hate, not just against foreigners, but between Israelis, dividing the nation against itself. For one group of Israelis, he’s the Messiah, the greatest person who ever lived.

For another major part of Israeli society, he’s the most hated person in the history of the country. One thing is clear, he’s the last person on Earth who can unite the Israeli nation. If you were to pick a random person here on the streets of Los Angeles, that person has a much better chance of uniting the Israeli nation than Netanyahu.

It’s the oldest trick in the book: Divide and rule. It destroys nations, and it also destroys democracy, because once the nation is split into rival tribes, democracy is unsustainable.

In a democracy, you see other people, not as your enemies, but as your political rivals. You say, “They are not here to destroy me and my way of life. I think they are wrong in the policies they suggest, but I don’t think they hate me. I don’t think they try to harm me. So OK, I was in power for four years or eight years, or whatever, and I tried a bunch of policies, and now they want to try different policies. I think they are wrong. But let’s try and see, and after a few years, if their policies actually turn out to be good, I’ll say I was wrong. They were right.”

If you start thinking of other people not as political rivals, but as enemies — a different tribe out to destroy my tribe — then every election turns into a war of survival. If the other tribe wins, that’s the end of us. So, we must do everything, anything legal or illegal, to win the war, because it is a war. And if we lose the elections, there is no reason to accept the verdict. And if we win the elections, we only take care of our own tribe.

“If you were to pick a random person here on the streets of Los Angeles, that person has a much better chance of uniting the Israeli nation than Netanyahu.”

Gardels: We see this same phenomenon, in varying degrees, across most Western democracies today.

You don’t see this in China. President Xi Jinping’s uniting narrative is the rejuvenation of Chinese civilization. Unabashedly, his regime privileges order over freedom in the name of social cohesion — and over truth.

Is it a foregone conclusion that China will be on the losing end of history? Perhaps it is democratic societies — so fragmented that they can’t hang together — that will be on the wrong side of history?

Harari: It’s not a foregone conclusion. Nothing is deterministic. We don’t know.  Absolutely, they could be on the right side of history, at least for a while. We could also have a split world in which you have different models, different systems, in different parts of the world, competing for quite a long time, like during the Cold War. That is, of course, very bad news, because then it’s going to be very, very difficult to have any kind of joint human action on the most important existential threats facing us, from climate change to the rise of AI.

Gardels: There is another area where the AI future is demonstrably already here.  Whatever one thinks about the war in Gaza, Israel has been very efficient in rooting out Hamas without massive casualties of its own. Some say the reason is its widespread use of AI in sifting through data and other intelligence and in identifying targets. What do you know about this?

Harari: This is something I’ve been working hard to understand over the last few months. I haven’t written anything about it because I still am not sure of the facts. So, I will be very, very careful about what I say.

What everybody I’ve talked with agrees on is that AI has been a major game changer in the war, not in terms of autonomous weapon systems, which everybody focuses on, but in terms of intelligence, and especially in choosing targets where the actual shooting will be done by humans.

The big question is, who is giving the orders? And here, there is a big debate. One camp argues that, increasingly, the AI is giving the orders, in the sense of selecting the targets for bombing. They say that Israel has deployed several very powerful AI tools that collect enormous amounts of data and information to discern patterns in the data that are often hidden from human eyes, patterns that analysts would take weeks to discover, but that AI can discover in minutes. Based on that, the AI identifies that X building is a Hamas headquarters to bomb or X person is a Hamas activist to kill.

Here is the big disagreement. One camp says, basically, the Israelis just do what the AI tells them to do: Bomb those buildings. They kill people based on this AI analysis, with very little oversight by humans who don’t have the time, or maybe the willingness, to review all the information and make sure that if the AI told us to bomb that building, it’s really a Hamas headquarters — that it’s not a false positive and the AI did not make a mistake.

The other camp says, yes, the AI is now central in choosing targets, but there are always humans in the loop for ethical reasons, because the Israeli army and security forces are committed to certain ethical standards. They don't just go blowing up buildings and killing people because some algorithm told them to. They check it all very thoroughly. They say: "The AI is crucial, because it suddenly brings to our attention a building we never even thought of checking. We thought it was a completely innocent building, but the AI said, 'No, this is Hamas' headquarters,' and then we have human analysts review all the relevant data.

We couldn't do it without the AI. But now that the AI has pointed out the target, it is much faster than in the past to review all the data, and if we have compelling evidence that this is Hamas' headquarters, then we bomb."

I’m not sure which camp to believe. Very likely, in different phases of the war, it worked differently. At certain times, less care was taken in making sure that the AI got it right and there was more tolerance for false positives and less tolerance for false negatives than at other times or other places.

What everybody agrees upon is that the AI definitely sped up the process of finding targets, which goes at least some way toward explaining Israel’s military success.

“AI has been a major game changer in the war … in terms of intelligence, and especially in choosing targets where the actual shooting will be done by humans.”

Again, there is a huge ethical and political debate to be had here. But if we just put on the cold glasses of military analysis without any ethics, then the Israeli security forces have had tremendous success.

People thought that Hamas built what was perhaps one of the biggest fortresses in the world underground in Gaza. According to Hamas sources, 900 kilometers [559 miles] of houses and underground tunnels, fortified with missiles, were constructed. The expectation was that the Israelis would never be able to take over the Gaza Strip, or that it would cost the Israelis many thousands of soldiers’ lives in very difficult street-to-street, house-to-house, combat.

This turned out not to be the case because of AI.

Hollywood Miscasts AI As The Terminator

Gardels: Hollywood has cast the AI story as “rogue robots revolting against their masters,” in films like “The Terminator” and “Blade Runner.” Is that the right image of AI we should have in our mind’s eye?

Harari: No, it’s completely misleading.  Hollywood did a huge service in focusing people’s attention on the AI problem long before anybody else really thought about it. But the actual scenario is misleading because the big robot rebellion is nowhere in sight. And this unfortunately makes a lot of people who grew up on “The Terminator” image complacent.  They look around, they don’t see any kind of Terminator or Skynet scenario as really being feasible. So, they say everything is okay.

These films portray a kind of general-purpose AI that you just throw into the world and that takes over. It can mine metals from the ground, build factories and assemble hordes of other robots.

However, the AIs of today are not like this. They don't have general intelligence. They are idiot savants: extremely intelligent in a very narrow field. AlphaGo knows how to play Go, but it can't bake a cookie. And the AIs used by the military can identify Hamas headquarters, but they cannot build weapons.

What the cinematic image so far misses is that the AIs don’t need to start from scratch. They are inserted into our own systems, and they can take our systems over from within.

I’ll give a parallel example. Think about lawyers. If you think about the best lawyer in the United States, this person is, in a way, an idiot savant. This person can be extremely knowledgeable and intelligent in a very, very narrow field like corporate tax law, but can’t bake a cookie and can’t produce shoes.  If you take this lawyer and drop him or her in the savannah, they are helpless — weaker than any elephant or any lion.

But they are not in the savannah. They are inside the American legal and bureaucratic system. And inside that system, they are more powerful than all the lions in the world put together because that single lawyer knows how to press the levers of the information network and can leverage the immense power of our bureaucracies and our systems.

This is the power that AI is gaining. It’s not that you take ChatGPT and throw it in the savannah and it builds an army. But if you throw it into the banking system, into the media system, it has immense power.

Gardels: Is it the unleashing of AI's algorithmic spirits into the bureaucracies of information networks, rather than some unit of general artificial intelligence, that ought to be the focus of our anxieties about these alien agents? Is it the presence of AIs in the banal tasks of large systems that is most worrisome?

Harari: Yes, it's the AI bureaucrats, not the Terminators, that will be calling the shots. That is what we need to worry about. Even in warfare, people may be pulling the trigger, but the orders will come from the AI bureaucrats.

Gardels: Big Tech says they are such huge players in society that they are bound to be responsible. If powerful AI models get too powerful and threaten to make their own decisions, then they can pull the plug or hit the kill switch. Do you have any faith in that perspective?

Harari: Only on a very small scale. It's like the Industrial Revolution. Imagine if, a century ago, all these coal and oil giants had told us, "You know, if industry causes this pollution and the ecological system is endangered, we'll just pull the plug." How do you pull the plug on the Industrial Revolution? How do you pull the plug on the internet?

“It’s the AI bureaucrats, not the Terminators, that will be calling the shots. That is what we need to worry about.”

They have in mind some small malfunction in one confined company or location or model.  If some unauthorized agent tries to launch nuclear missiles, the process can be shut down. That can be done. But that is not where the danger lies.

There will be countless AI bureaucrats, billions globally, in all the systems. They are in the healthcare system, in the education system, in the military. If, after a couple of years, we discover we made a big mistake somewhere and things are getting out of control, what do you do? You can't just shut down all the militaries and all the healthcare systems and all the education systems of the world. It is completely unrealistic and misleading to think so.

Gardels: What can be done preemptively to retard the proliferation of algorithmic spirits throughout all human-designed systems?

Harari: First, understand the problem and conceptualize it properly: not as rogue robots, but as AIs taking over from within, as we just discussed. We humans rush to solve problems and then end up solving the wrong ones. Stay with the problem a little; really understand it before you rush to offer solutions.

Information Fasting

Gardels: The knowledge contained in your books from “Sapiens” to “Homo Deus” to “Nexus” is encyclopedic. You’re like a Large Language Model. Ask a question, push the button and it all spews out. Where do you get your information? What do you read?

Harari: First of all, I have four people on our research team. So, if I want to know something about Neanderthals or about AIs, I get help from other people who delve deeply into the matter. Personally, I have an information diet the same way that people have food diets.

I think it is good advice for everybody to go on an information diet because we are flooded with far too much information, and most of it is junk. So, in the same way people are very careful about what they eat, they should be very careful about how much and what they consume in terms of information.

I tend to read long books and not short tweets. If I really want to understand what’s happening in Ukraine, what’s happening in Lebanon, whatever, I go and read several books, depending on what the issue is — if it’s about LLMs or the Roman Empire, history or biology or computer science.

The other thing I do is go on information fasts. Since most information is junk, and information needs processing, just putting more information in your head doesn’t make you smarter or wiser. It just fills your mind with junk.

So, as important as it is to consume information, we also need time off to digest it and to detoxify our minds. To do that, I meditate for two hours daily. Every year I go on a long retreat of between 30 and 60 days, during which I don’t consume any new information. These are silent retreats: you don’t even talk to the other people in the meditation center. You just process, you just digest, you just detoxify everything you accumulated during the year.

I know that for most people this is going to extremes. Most people just can’t afford the time and resources to do it. But still, I think it is a good idea for everybody to think more carefully about their information diet, and also to take at least short information fasts, say a few hours a day, or a full day once a week, when you don’t consume more information.

This interview was edited for clarity and length.

The post AI Will Take Over Human Systems From Within appeared first on NOEMA.

Social Media Messed Up Our Kids. Now It Is Making Us Ungovernable. https://www.noemamag.com/social-media-messed-up-our-kids-now-it-is-making-us-ungovernable Thu, 13 Jun 2024 16:54:09 +0000 https://www.noemamag.com/social-media-messed-up-our-kids-now-it-is-making-us-ungovernable The post Social Media Messed Up Our Kids. Now It Is Making Us Ungovernable. appeared first on NOEMA.

In a conversation with Noema editor-in-chief Nathan Gardels, the social psychologist Jonathan Haidt discusses the impact of social media on truth in politics, the mental health crisis of today’s youth, and what to do about it.

Nathan Gardels: For those who haven’t read your book, “The Anxious Generation,” can you summarize the main thesis?

Jonathan Haidt: It all begins with a mystery: Why were mental health statistics for American teenagers pretty flat, with no sign of any problem, from the late ’90s through 2010 or 2011? That is true whether we look at depression, anxiety or self-harm. And then, all of a sudden, in 2012, it’s as though someone flipped a switch, and the girls began getting much more anxious, depressed and self-harming. It was true of boys too, but not as suddenly; it was more gradual in the early 2010s.

We first discovered this on college campuses because the students who entered universities from 2014 to 2015 were very different from our stereotype of college students who want to have fun, who want to drink and party.

The students arriving in 2014 and 2015 were much more anxious. And they were especially triggered by words or jokes, speakers or books. It was that observation that led Greg Lukianoff to propose the hypothesis that college was doing something to kids to make them think in this distorted way. That was the basis of our book, “The Coddling of the American Mind.”

But now it’s becoming clearer that what we saw and wrote about in that book wasn’t just happening to college students, but actually to all teenagers born after 1995. And it was not only observable in the U.S., Britain and Canada but a lot of other countries as well. What happened? Why was it so sudden? So that’s the mystery.

Was it some chemical dropped in the water supply all over North America and Northern Europe, along with the South Pacific? Or was it the massive change in the technological environment of childhood in all these countries simultaneously? This seemed the obvious hypothesis.

So, the first chapter of “The Anxious Generation” discusses what actually happened to teen mental health. And then the rest of the book seeks to unravel the mystery. It’s not just about “social media is destroying everybody.” It’s a more subtle and interesting story about the transformation of childhood — a tragedy that occurred in three acts.

Act I, which I only hinted at in the book, was the loss of community. So, if you look at America, especially in the years just after World War II, social capital was very high. The best way to make people trust each other is to have someone attack them from the outside — come together, fight a war and win. Social capital was very high in the U.S. in the 1940s and 1950s, and then it begins to drop over succeeding decades for many reasons.

Robert Putnam talked about this in “Bowling Alone.” You have smaller family sizes; people retreat inside because now they have air conditioning and TV and they’re not out in the front yard socializing as much. So, for a lot of reasons, we begin to lose trust in each other. We begin to lose social capital. That’s Act I of the tragedy.

Because of that, Act II happens, which is when we take away play-based childhood. Children used to always play together. It didn’t matter if it was raining or snowing, if there was a crime wave or drunk drivers, kids went out to play. Like all mammals, we evolved to play, in order to wire up our relatively large brains.

But in the ’90s, we decided it was too dangerous for kids to be out and about. They’ll get kidnapped or sexually abused, we thought, because we no longer trusted our neighbors. So, we locked our kids up out of fear of each other. In other words, overprotection. This is the coddling part.

Then, after losing strong communities and play-based childhoods, we’re ready for the third act in the tragedy: the massive, sudden transformation of childhood between 2010 and 2015 into a phone-based childhood.

In 2010, the vast majority of teens across the developed world had cell phones. But they were flip phones or basic phones, with no internet browser. All you could do with them was text and call; that was pretty much it, aside from some games. They weren’t for constant communication. And that’s good. Kids could text their friends and say, “Let’s meet up at 3 p.m.” It was a simple tool. There was very little high-speed internet then and no front-facing camera. There was Facebook, but no Instagram. That’s the way things were in 2010.

“All of a sudden, in 2012, it’s as though someone flipped a switch, and the girls began getting much more anxious, depressed and self-harming.”

In 2010, kids in the U.S. and other Anglo countries still had a recognizably human childhood. They would meet up in person, even if they now had less freedom to roam. By 2015, that had all changed: about 80% of those kids had a smartphone with a front-facing camera and a bunch of social media apps. So now we have the selfie culture. Almost everyone has high-speed internet, and everyone can stream video.

In short, by 2015 we have what I call “the great rewiring of childhood.” That’s why 2012, incidentally the year Facebook bought Instagram, is when online life changed, especially for girls, who flocked onto Instagram. And it was right after that when we first noticed the widespread upsurge in anxiety, depression and self-harm.

Gardels: The main criticism of your thesis is that you are mistaking correlation for cause and being too technologically determinist. How do you respond to that?

Haidt: First of all, my story is not just about technology, it is sociological. It’s a cultural psychology story. It’s about the change of childhood and human development.

To those who argue these changes could have been caused by any number of factors, I say a couple of things. First, whatever other factor you might think was more determinative, did that happen in New Zealand and Iceland and Australia all at the same time? No one can identify such a factor. Nobody has proposed an alternative theory that works internationally.

Second, it is true that the data is mostly correlational. If you have 300 correlational studies and 25 experimental studies, I would say the data is mostly correlational. The scientific debate has been focused on a very, very narrow question: Do the hours spent on social media tell you anything about the level of mental illness, especially depression and anxiety? There’s a clear correlation in these studies.

But we also have experimental studies, which I cite in the book. I go into great detail about the difference between correlation and causation. Every week, every month, we have more experiments indicating the causality of anxiety-inducing technology.

There are so many causal pathways by which a phone-based childhood harms different kids in different ways. Let me just take the example of sextortion, a very common crime online. There are international sextortion gangs that display avatars of beautiful, sexy young women. An avatar flirts with a boy that she finds, usually on Instagram. And then she convinces him to swap nude images. Boom. Then the sextortionist reveals himself, not as a sexy girl but as a man who now has all the content he needs to ruin you: “I’m going to show this picture of you and your penis to everyone, because I have all your contacts, unless you pay me $500 in two hours.”

The boys panic, and some of them have killed themselves because of the shame. The FBI has identified 20 suicides that were direct results of sextortion, which means there are probably hundreds of cases they didn’t catch, and far more kids who were traumatized by the experience and the shame. Now, is that just a correlation? Would these boys have killed themselves anyway, even if they had not been sextorted? I don’t think so.

Gardels: What are the specific remedies you propose for parents to protect their kids?

Haidt: The key to the whole book is understanding collective action problems, which are sometimes referred to as “the tragedy of the commons,” where each person acting in their own interest ends up bringing about an outcome that’s bad for everyone. If you’re the only one who doesn’t put your sheep out to graze, if you’re the only one who doesn’t fish in the pond, you suffer while everyone else continues to do what they’re doing.

One of the main reasons that we all are giving our kids phones now at age nine or 10 — it gets younger all the time — is because the kid comes home from school and says, “Mom, everyone else has an iPhone, I have to have an iPhone, or I’ll be left out.”

This is a collective action problem because any parent who does the right thing and says, “No, you’re not going to get one until you’re mostly done with puberty,” is imposing a cost on their child. All over the developed world now, family life has devolved into a struggle over screen time and phones. This is terrible. So, the trick is to realize we’re in this problem because everybody else is in this problem.

“All over the developed world now, family life has devolved into a struggle over screen time and phones.”

We’re so deep into this that it is very hard for any family to get out of it by themselves. Some parents are tough and just say “no,” but the status environment doesn’t change for the kids.

What I’m trying to do with the book is to say, if we team up with a few other families, if a small group of parents can get the whole school or school district to say “no,” then they escape and we can change the situation very, very quickly.

What we need is the adoption of four norms that can break the back of the collective action problem.

One: No smartphone before high school. Just keep it out of middle school. Let the kids at least get through early puberty, which is the most sensitive period. You can give them a flip phone if you absolutely need to text. I understand the need to coordinate.

Two: No social media before the age of 16. Social media is entirely inappropriate for children; it cannot be made appropriate, because what you’re basically doing is saying: “How about we let the entire world get in touch with you? Let’s let all the companies try to sell things to you, let men all over the world who want to have sex with you contact you and try to trick you into sending photos.” There’s no way to make this safe. So just recognize that social media is a tool for adults. Eleven-year-olds don’t need to network with strangers.

Three: Schools need to be phone-free. Imagine if, when I was a kid growing up in the ’70s, we had been allowed to bring in our television sets and radios, along with all sorts of toys and games, put them on our desks and use them during class. That’s what teachers are facing today. Disgusted and frustrated that they can’t get through to students, teachers are quitting.

Also, global test scores have been dropping since 2012. This did not begin with Covid; it began around 2012. The result is a massive destruction of human capital. So, it’s just kind of obvious: you can’t have kids carrying the greatest distraction device ever invented in their pockets while they’re in class. Kids feel they must check their phones during the day; if others are texting, they have to text back. So, just lock up the phones in the morning and give them back at the end of the day.

Four: We need to restore a play-based childhood. Kids need more independence, free play and responsibility in the real world. If you’re going to roll back the phone and don’t restore play, a child can have no childhood. So, roll it back and instead, give them adventure and fun with other kids.

We parents need to overcome our own fears and let our children learn how to play with each other. Kids playing in groups are very safe. That’s how they learn to get along. That’s how they’re going to resolve disputes in life.

If we do these four things I’m pretty confident that rates of mental illness will come down within two years. Experience so far shows that phone-free schools get great results within a month. In various childhood independence projects, you get results within a month. If any community does all four of these, I believe they’re going to see pretty big drops in depression, anxiety, self-harm and other problems in short order.

Gardels: Do you worry that more prosperous parents with the means and time to be attentive to their kids will follow your advice, while the less well-off, busy working two jobs with less time for their kids, won’t? That this will just create a greater gap in society?

Haidt: Yes, I do expect that it will begin this way, with the most educated and wealthy families. But I think it will spread quickly as parents begin to see and hear about the benefits. Also, I should note that the most educated families apply the most limits, whereas children in low socioeconomic status, single-parent, Black or Hispanic families have one to two hours more screen time per day, so going phone-free will disproportionately help them.

Gardels: Implicit in your remarks is you don’t have any faith in the Instagrams or TikToks of the world to be able to regulate themselves so they do less harm?

“What we need is the adoption of four norms that can break the back of the collective action problem.”

Haidt: Right now, as long as you’re old enough to lie about your age, you can go to Pornhub. You can open 20 Instagram accounts, you can open TikTok accounts. The law says you have to be 13 to sign a contract with a company to give away your data without your parents’ knowledge. But the law is written in such a way that there’s no responsibility for the companies if they don’t know your real age. As long as they don’t know your real age, they can’t be held liable for serving you eating disorder content or sex and violence.

We’re talking about five to 10 companies here that own our children’s childhood. They have a lot more influence over our kids than we do in some ways. And they have no responsibility. They are literally protected from lawsuits by Section 230 of the Communications Decency Act, which shields them from liability for the content on their platforms.

This is a completely insane situation. And they’re making huge amounts of money. So no, I don’t expect them to do anything until they’re forced by legislation, or by enormous losses in court.

Gardels: Your book has obviously hit a chord with parents and with school authorities. Do you have any sense of how the TikTok crowd or kids themselves see it?

Haidt: When you survey kids who’ve been through this, it’s really hard to find members of Gen Z who are opposed to what I’m saying. In fact, I actually haven’t found any. They almost always say, “Yeah, you know, you’re right. This really messed us up. But, you know, what are you going to do? This is just the way things are, and I can’t quit because everyone else is on.” There’s just an extraordinary sense of fatalism. We don’t find any young people organizing to protect their rights to have these things. The older kids generally say, if we could get everyone off, we should do that.

Gardels: The Chinese cyberspace authorities have no qualms about imposing limits on social media. Here are the rules:

  • Children under 8: Can only use smart devices for 40 minutes per day and can only consume content about “elementary education, hobbies and interests, and liberal arts education”
  • Children aged 8 to 15: Can use their phone for no more than one hour per day
  • Children aged 16 to 17: Can use a handset for a maximum of two hours per day
  • Minor mode: Requires mobile devices, apps and app stores to have a built-in mode that would bar users under 18 from accessing the internet on mobile devices from 10 p.m. to 6 a.m.

Perhaps they will produce more mentally healthy kids?

Haidt: China is engaged in a battle with the United States for cultural and economic supremacy. Since our young people are giving away all of their available attention, there’s a good chance that they will be less creative and less productive. They don’t have any spare attention to actually do anything. I imagine that makes the Chinese government happy.

The worst single product for American children is TikTok. It sucks up more of their time, energy and attention than any other product. And it harms them. It doesn’t do anything good for them. TikTok has more influence over our kids than any other organization on the planet. So, there are many reasons to think that that is a danger not only to our kids, but to our country.

It seems the Chinese are doing the right thing by using their authoritarian system to reduce the damage to their own children.

Of course, authoritarian solutions are not right for us, but we can do similar things through democratic solutions, through community and civil society. One thing Tocqueville praised Americans for is that when something needs doing, say the townspeople need to build a bridge, they just do it. They don’t wait for the state, like in France. They don’t wait for the king, like in Britain. Americans come together as citizens, elect a leader, raise money and then they do it.

So, I’m hopeful that my book presents norms that we adopt ourselves, even if we never get any help from Congress or lawmakers. Doing it ourselves — in groups of parents organized around schools — is a very American solution to what I think is one of the largest problems facing America today.

“TikTok has more influence over our kids than any other organization on the planet.”

Gardels: To go back to the coddled generation argument. What do you make of all these kids in college today putting up barricades, occupying administration buildings protesting the war in Gaza?

Haidt: Most of the activism of the college kids has moved online. That tends to be very ineffective and creates a culture that is bad for activists. I put some research in the book showing that before 2010, being politically active was actually associated with better mental health. You were engaged, you were part of a group, you were energized. After 2010, activists, especially progressive activists, are the least happy people in the country. They are marinating in beliefs about oppressor versus victim and embracing the untruths of the coddled. That was certainly true until very recently.

Now it’s true these protests are in person. That’s at least better psychologically for them. They are physically present and interacting with others on campus.

Even so, I think there are signs that it’s different from previous generations. One is that the present protestors expect accommodation, often asking not to be punished for missing classes and requesting delayed exams. In other words, they expect a low cost to themselves. In previous periods of activism, civil disobedience meant that if you broke the law, you paid the consequences to show how committed you were to the cause.

To be sure, today’s actions are communal, which is always very exciting. It’s not as though Gen Z is incapable of acting in person; though, I would point out, it’s overwhelmingly at the elite schools that this is happening.

Gardels: One of the reasons that we have such a paralyzed and polarized society is that the public square has virtually disappeared. Until social media turbocharged fragmentation, there was a common space where competing ideas could be contested in the full gaze of the body politic.

As the philosopher Byung-Chul Han has observed, the peer-to-peer connectivity of social media redirects the flow of communication. Information is spread without forming a public sphere. It is produced in private spaces and distributed to private spaces. The web does not create a public.

The possibility of arriving at a governing consensus through negotiation and compromise is being shattered by a cacophony of niche propagandists egging on their own siloed tribe of the faithful to engage in an endless partisan battle. Indeed, Renée DiResta at Stanford calls the niche ideologues “the new media goliaths” who have supplanted mainstream platforms in terms of influence.

In short, the digital media ecosystem is disempowering the public sphere.

In this sense, social media is not only messing up our kids but undermining the basis of democratic discourse.

Do you agree with that?

Haidt: Absolutely. In an article for the Atlantic in 2019, I made the case, basically along the lines of Han, that massive changes in information flows and the way we connect people change the fundamental ground within which our democratic institutions are operating. And it’s quite possible that we are now so far outside the operating range of these institutions that they will fail.

I’m extremely alarmed about the future of this country. If you read Federalist #10, the Founding Fathers, who were excellent social psychologists, were very afraid of the passions of the people. They didn’t want us to have a direct democracy. They wanted cooling mechanisms of deliberation through reason. The system of governance they devised, with its checks and balances, is really like a complicated clock that they thought could last a very long time precisely because it was realistic about human frailties. And they were right.

Then, all of a sudden in the later post-war era, it all started going awry: first with television, then the internet and now, especially, peer-to-peer media. With television, at least there were editors. Jonathan Rauch wrote an amazing book called “The Constitution of Knowledge,” about both the Constitution and how knowledge is constituted.

He discussed how we make knowledge in universities and science and medicine. But he also discussed the U.S. Constitution and how the community of knowledge makers are governed by certain rules and checks and balances. We developed editors, filters and other mechanisms to vet truth.

All that’s going away now. Or at least the institutions are so weakened as to be feeble. I’m very alarmed. And, at the same time, what’s replacing them are the sorts of peer-to-peer networks that you’re talking about.

“Until social media turbocharged fragmentation, there was a common space where competing ideas could be contested in the full gaze of the body politic.”

In the history of humanity, when you connect people, there could be disruptions. But in the long run, that’s good. It increases the flow of knowledge and increases creativity. You get more value when you connect people. So, the telephone was great, the postal system was great.

Social media is not like those earlier innovations. I think the best metaphor here is to imagine a public square in which people talk to each other. They debate ideas or put forth ideas that may not always be brilliant. They may not always be civil, but people can speak while others listen. Sometimes people are moved by persuasion or dissuasion.

I think the Founding Fathers assumed that’s about the best we can hope for. Imagine one day, and I’ll call it 2009, that all changes. There’s no more public square. Everything takes place in the center of the Roman Colosseum. The stands are full of people who are there to see blood. That’s what they came for. They don’t want to see the lion and the Christian making nice; they want the one to kill the other. That’s what Twitter is often like.

It all becomes performative and comes at a superfast pace. Just as television changed the way we are and made us into passive consumers, the central act in social media is posting, judging, criticizing and joining mobs. Donald Trump is the quintessential person who thrives in that environment. If not for Twitter, Trump never could have been president. So, when our politics moved into the Roman Colosseum, I think the Founding Fathers would have said, “Let’s just give up. There’s no way we can build a democracy in this environment.”

Gardels: Just as republics have historically created institutional checks and balances when too much power is concentrated in one place, so too don’t we need to foster checks and balances for an age when power is so distributed that the public sphere is disempowered?

What I have in mind are citizens’ assemblies, indicative of the public as a whole, which deliberate issues in a non-partisan environment and, outside the electoral sphere where partisans vie for power by any means necessary, are able to come to a consensus through pragmatic, common-sense solutions.

Haidt: It’s possible to create these small artificial communities where you lock citizens away together for a week and have them discuss something. They work pretty well, from what I know, and they come up with solutions. But it’s not clear to me how you could use that to run a country. The way people feel about, let’s say, Donald Trump has very little to do with some ascertainment of fact.

If you use the word power, then I’m a little bit confused. But I think I see what you’re getting at. If we change the word to authority, it is clearer to me. When I wrote “The Righteous Mind,” I was on the left, and really tried to understand conservatives. Reading conservative writings, especially Edmund Burke and Thomas Sowell, really clarified the idea that we need institutions. We need religion, we need gods, even if it is not literally true. We need moral order and constraint.

The progressive impulse is to tear things down and make things new. The conservative impulse is to protect authority structures because we need them. Without them, we have chaos. Of course, there are times to tear things down. But during the 2010s, everything was torn down, to some extent. This is a time when we need to build.

I am very concerned that there is no longer any source of authority. There is no trusted authority, there is no way to find consensus on truth. It seems that the truth-seeking mechanisms, including the courts, came up with the answer that the last presidential election in the U.S. was not stolen. But there’s no real way to spread that around to the large portion of society that believes that it was.

With AI coming in, the problem of the loss of authority is going to be magnified tenfold or even a hundredfold when anyone can create a video of anyone saying anything in that person’s voice. It’s going to be almost impossible to know what’s true. We’re in for a wild ride if we’re going to try to run a democratic republic with no real authority. My fear is that we will simply become ungovernable. I hope not, I hope we find a way to adapt to living in our world after the fall of the tower of Babel, the fall of common understandings and common language.

This interview was edited for brevity and clarity.


Mapping AI’s Rapid Advance https://www.noemamag.com/mapping-ais-rapid-advance Tue, 21 May 2024 17:02:14 +0000 https://www.noemamag.com/mapping-ais-rapid-advance The post Mapping AI’s Rapid Advance appeared first on NOEMA.

Nathan Gardels: Generative AI is exponentially climbing the capability ladder. Where are we now? Where is it going? How fast is it going? When do you stop it, and how? 

Eric Schmidt: The key thing going on now is that we’re moving very quickly up the steps of the capability ladder. There are roughly three things happening that are going to profoundly change the world very quickly. And when I say very quickly, the cycle is roughly a new model every 12 to 18 months. So, let’s say within three or four years.

The first pertains to the question of the “context window.” For non-technical people, the context window is the prompt you give the model. That context window can now hold a million words. And this year, people are inventing a context window that is infinitely long. This is very important because it means that you can take the answer from the system, feed it back in and ask another question.

Say I want a recipe to make a drug. I ask, “What’s the first step?” and it says, “Buy these materials.” So, then you say, “OK, I bought these materials. Now, what’s my next step?” And then it says, “Buy a mixing pan.” And then the next step is “How long do I mix it for?”

That’s called chain of thought reasoning. And it generalizes really well. In five years, we should be able to produce 1,000-step recipes to solve really important problems in medicine and material science or climate change. 
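The feedback loop Schmidt describes, in which each answer is appended to the context before the next question is asked, can be sketched in a few lines of Python. The `ask_model` function below is a hypothetical stand-in for a real LLM API call; everything else is only meant to show the loop’s structure.

```python
# Sketch of the feedback loop: ask a question, append the answer to the
# context, then ask the next question against the enlarged context.

def ask_model(context: str, question: str) -> str:
    """Hypothetical stand-in for an LLM call. A real system would send
    the context plus the question to a model endpoint and return its reply."""
    return f"step answering: {question}"

def chain_of_steps(questions):
    context = ""
    answers = []
    for q in questions:
        answer = ask_model(context, q)
        answers.append(answer)
        # Feed the answer back in. The growing context is why a very long
        # (or "infinite") context window matters for multi-step recipes.
        context += f"\nQ: {q}\nA: {answer}"
    return answers, context

answers, final_context = chain_of_steps(
    ["What's the first step?", "What's my next step?", "How long do I mix?"]
)
print(len(answers))  # prints 3
```

A 1,000-step recipe of the kind Schmidt mentions would simply be this loop run with a much longer list of questions, which is exactly where context-window length becomes the binding constraint.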

The second thing going on presently is enhanced agency. An agent can be understood as a large language model that can learn something new. An example would be that an agent can read all of chemistry, learn something about it, have a bunch of hypotheses about the chemistry, run some tests in a lab and then add that knowledge to what it knows. 

These agents are going to be really powerful, and it’s reasonable to expect that there will be millions of them out there. So, there will be lots and lots of agents running around and available to you. 
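The learn-hypothesize-test-incorporate cycle Schmidt outlines for agents can be sketched as a small loop. Every name below is an illustrative stand-in (no real model, lab or agent framework is involved); the point is only the shape of the cycle, in which test results flow back into the agent’s knowledge.

```python
# Sketch of an agent's learning loop: study some material, propose
# hypotheses, test them, and add confirmed results to its knowledge.

class Agent:
    def __init__(self):
        self.knowledge = []

    def propose_hypotheses(self, material):
        # Stand-in: a real agent would use an LLM to read `material`
        # and generate candidate hypotheses.
        return [f"hypothesis about {material}"]

    def run_test(self, hypothesis):
        # Stand-in for a lab experiment or simulation; returns the
        # result and whether the hypothesis was confirmed.
        return (hypothesis, True)

    def study(self, material):
        for h in self.propose_hypotheses(material):
            result, confirmed = self.run_test(h)
            if confirmed:
                # The key step: what was learned is added to what
                # the agent already knows, ready for the next cycle.
                self.knowledge.append(result)

agent = Agent()
agent.study("chemistry")
print(agent.knowledge)  # prints ['hypothesis about chemistry']
```

Millions of such agents running concurrently, each accumulating its own knowledge, is the scenario Schmidt goes on to describe.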

The third development already beginning to happen, which to me is the most profound, is called “text to action.” You might say to an AI, “Write me a piece of software to do X,” and it does. You just say it and it transpires. Can you imagine having programmers that actually do what you say you want? And they do it 24 hours a day? These systems are good at writing code in languages like Python.

Put all that together, and you’ve got, (a) an infinite context window, (b) chain of thought reasoning in agents and then (c) the text-to-action capacity for programming. 

What happens then poses a lot of issues. Here we get into the questions raised by science fiction. What I’ve described is what is happening already. But at some point, these systems will get powerful enough that the agents will start to work together. So your agent, my agent, her agent and his agent will all combine to solve a new problem. 

Some believe that these agents will develop their own language to communicate with each other. And that’s the point when we won’t understand what the models are doing. What should we do? Pull the plug? Literally unplug the computer? It will really be a problem when agents start to communicate and do things in ways that we as humans do not understand. That’s the limit, in my view.

Gardels: How far off is that future? 

Schmidt: Clearly agents with the capacity I’ve described will occur in the next few years. There won’t be one day when we realize “Oh, my God.” It is more about the cumulative evolution of capabilities every month, every six months and so forth. A reasonable expectation is that we will be in this new world within five years, not 10. And the reason is that there’s so much money being invested in this path. There are also so many ways in which people are trying to accomplish this. 

You have the big guys with the large so-called frontier models at OpenAI, Microsoft, Google and Anthropic. But you also have a very large number of players building models one level below the frontier at much lower cost, all iterating very quickly.

“These agents are going to be really powerful, and it’s reasonable to expect that there will be millions of them out there.”

Gardels: You say “pull the plug.” How and when do you pull the plug? But even before you pull the plug, chain of thought reasoning is already here, and you know where that leads. Don’t you need to regulate at some point along the capability ladder, before you get where you don’t want to go?

Schmidt: A group of us from the tech world have been working very closely with the governments in the West on just this set of questions. And we have started talking to the Chinese, which of course, is complicated and takes time.  

At the moment, governments have mostly been doing the right thing. They’ve set up trust and safety institutes to learn how to measure and continuously monitor and check ongoing developments, especially of frontier models as they move up the capability ladder. 

So as long as the companies are well-run Western companies, with shareholders and exposure to lawsuits, all that will be fine. There’s a great deal of concern in these Western companies about the liability of doing bad things. It is not as if they wake up in the morning saying let’s figure out how to hurt somebody or damage humanity. Now, of course, there’s the proliferation problem outside the realm of today’s largely responsible companies. But in terms of the core research, the researchers are trying to be honest.

Gardels: By specifying the Western companies, you’re implying that proliferation outside the West is where the danger is. The bad guys are out there somewhere.

Schmidt: Well, one of the things that we know, and it’s always useful to remind the techno-optimists in my world, is that there are evil people. And they will use your tools to hurt people. 

The example that epitomizes this is facial recognition. It was not invented to constrain the Uyghurs. Its creators didn’t say, “We’re going to invent facial recognition in order to constrain a minority population in China,” but it’s happening.

All technology is dual use. All of these inventions can be misused, and it’s important for the inventors to be honest about that. In open-source and open-weights models, the source code and the weights [the numbers used to determine the strength of different connections] are released to the public. Those immediately go throughout the world, and who do they go to? They go to China, of course; they go to Russia; they go to Iran; they go to Belarus and North Korea. 
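As a toy illustration of what “weights” means here (this sketches no real model’s format, just the idea that weights are learned numbers anyone can inspect or retune once released):

```python
# Toy illustration of model "weights": the learned numbers that set the
# strength of different connections. An open-weights release publishes
# arrays like this one, which anyone can download, inspect or fine-tune.
weights = [0.8, -0.3, 0.5]

def model_output(features, w):
    # A model's output is computed from its inputs and its weights;
    # change the weights and you change the behavior.
    return sum(f * s for f, s in zip(features, w))

print(round(model_output([1.0, 2.0, 3.0], weights), 6))  # 1.7
```

Real open-weights releases ship billions of such numbers, which is why behavior trained into them can also be trained back out.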

When I was most recently in China, essentially all of the work I saw started with open-source models from the West and was then amplified. 

So, it sure looks to me like these leading firms in the West I’ve been talking about, the ones that are putting hundreds of billions into AI, will eventually be tightly regulated as they move further up the capability ladder. I worry that the rest will not. 

Look at this problem of misinformation and deepfakes. I think it’s largely unsolvable. And the reason is that code-generated misinformation is essentially free. Any person — a good person, a bad person — has access to these tools. It doesn’t cost anything, and they can produce very, very good images. There are some ways regulation can be attempted. But the cat is out of the bag; the genie is out of the bottle. 

That is why it is so important that these more powerful systems, especially as they get closer to general intelligence, have some limits on proliferation. And that problem is not yet solved.

Gardels: One thing that worries Fei-Fei Li of the Stanford Institute on Human-Centered AI is the asymmetry of research funding between the Microsofts and Googles of the world and even the top universities. As you point out, there are hundreds of billions invested in compute power to climb up the capability ladder in the private sector, but scarce resources for safe development at research institutes, no less the public sector. 

Do you really trust these companies to be transparent enough to be regulated by government or civil society that has nowhere near the same level of resources and ability to attract the best talent?

Schmidt: Always trust, but verify. And the truth is, you should trust and you should also verify. At least in the West, the best way to verify is to use private companies that are set up as verifiers because they can employ the right people and technology. 

In all of our industry conversations, it’s pretty clear that the way it will really work is you’ll end up with AI checking AI. It’s too hard for human monitoring alone.

“It’s always useful to remind the techno-optimists in my world … that there are evil people. And they will use your tools to hurt people.”

Think about it. You build a new model. Since it has been trained on new data, how do you know what it knows? You can ask it all the previous questions. But what if the agent has discovered something completely new, and you don’t think about it? The systems can’t regurgitate everything they know without a prompt, so you have to ask them chunk by chunk by chunk. So, it makes perfect sense that an AI itself would be the only way to police that.
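A minimal sketch of that “AI checking AI” loop, with stub functions standing in for real model calls (all names here are hypothetical, invented for illustration):

```python
# Hypothetical sketch of "AI checking AI": an evaluator model probes a
# newly trained model topic chunk by topic chunk, since the new model
# cannot regurgitate everything it knows without prompts.
def probe(model, topics):
    """Query the model on each chunk and collect answers the evaluator flags."""
    flagged = []
    for topic in topics:
        answer = model(topic)          # ask the new model about this chunk
        if evaluator_flags(answer):    # a second AI judges the answer
            flagged.append((topic, answer))
    return flagged

# Stub stand-ins so the sketch runs; a real system would call model APIs.
def evaluator_flags(answer):
    return "dangerous" in answer

toy_model = lambda topic: "dangerous synthesis route" if topic == "bio" else "benign answer"
print(probe(toy_model, ["math", "bio", "history"]))
```

The point of the sketch is the loop shape, not the stubs: coverage of a new model’s knowledge has to be assembled chunk by chunk by another automated system.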

Fei-Fei Li is completely correct. We have the rich private industry companies. And we have the poor universities who have incredible talent. It should be a major national priority in all of the Western countries to get basic research funding for hardware into the universities.

If you were a research physicist 50 years ago, you had to move to where the cyclotrons [a type of particle accelerator] were because they were really hard to build and expensive — and they still are. You needed to be near a cyclotron to do your work as a physicist. 

We never had that in software, our stuff was capital-cheap, not capital-intensive. The arrival of heavy-duty training of AI models, which requires ever more complex and sophisticated hardware, is a huge economic change. 

Companies are figuring this out. And the really rich companies, such as Microsoft and Google, are planning to spend billions of dollars because they have the cash. They have big businesses, the money’s coming in. That’s good. It is where the innovation comes from. Others, not least universities, can never afford that. They don’t have that capacity to invest in hardware, and yet they need access to it to innovate.

Gardels: Let’s discuss China. You accompanied Henry Kissinger on his last visit to China to meet President Xi Jinping with the mission of establishing a high-level group from both East and West to discuss on an ongoing basis both “the potential as well as catastrophic possibilities of AI.” 

As chairman of the U.S. National Security Commission on AI you argued that the U.S. must go all out to compete with the Chinese, so we maintain the edge of superiority. At the same time with Kissinger, you are promoting cooperation. Where to compete? Where is it appropriate to cooperate? And why?  

Schmidt: In the first place, the Chinese should be pretty worried about generative AI. And the reason is that they don’t have free speech. And so, what do you do when the system generates something that’s not permitted under the censorship regime?

Who or what gets punished for crossing the line? The computer, the user, the developer, the training data? It’s not at all obvious. What is obvious is that the spread of generative AI will be highly restricted in China because it fundamentally challenges the information monopoly of the Party-State. That makes sense from their standpoint. 

There is also the critical issue of automated warfare or AI integration into nuclear command and control systems, as Dr. Kissinger and I warned about in our book, “The Age of AI.” And China faces the same concerns that we’ve been discussing as we move closer to general artificial intelligence. It is for these reasons that Dr. Kissinger, who has since passed away, wanted Xi’s agreement to set up a high-level group. Subsequent meetings have now taken place and will continue as a result of his inspiration.

Everyone agrees that there’s a problem. But we’re still at the moment with China where we’re speaking in generalities. There is not a proposal in front of either side that is actionable. And that’s OK because it’s complicated. Because of the stakes involved, it’s actually good to take time so each side can actually explain what they view as the problem and where there is a commonality of concern. 

Many Western computer scientists are visiting with their Chinese counterparts and warning that, if you allow this stuff to proliferate, you could end up with a terrorist act, the misuse of AI for biological weapons, the misuse of cyber, as well as long-term worries that are much more existential. 

For the moment, the Chinese conversations I’m involved in largely concern bio and cyber threats.

The long-term threat goes something like this: AI starts with a human judgment. Then there is something technically called “recursive self-improvement,” where the model actually runs on its own through chain of thought reasoning. It just learns and gets smarter and smarter. When that occurs, or when agent-to-agent interaction takes place, we have a very different set of threats, which we’re not ready to talk to anybody about because we don’t understand them. But they’re coming.

“The spread of generative AI will be highly restricted in China because it fundamentally challenges the information monopoly of the Party-State.”

It’s going to be very difficult to get any actual treaties with China. What I’m engaged with is called a Track II dialogue, which means that it’s informal and a step away from official. It’s very hard to predict, by the time we get to real negotiations between the U.S. and China, what the political situation will be. 

One thing I think both sides should agree on is a simple requirement that, if you’re going to do training for something that’s completely new on the AI frontier, you have to tell the other side that you’re doing it. In other words, a no-surprise rule.

Gardels: Something like the Open Skies arrangement between the U.S. and Soviets during the Cold War that created transparency of nuclear deployments?

Schmidt: Yes. Even now, when ballistic missiles are launched by any major nuclear powers, they are tracked and acknowledged so everyone knows where they are headed. That way, they don’t jump to a conclusion and think it’s targeted at them. That strikes me as a basic rule, right? 

Furthermore, if you’re doing powerful training, there needs to be some agreements around safety. In biology, there’s a broadly accepted set of threat layers, Biosafety levels 1 to 4, for containment of contagion. That makes perfect sense because these things are dangerous. 

Eventually, in both the U.S. and China, I suspect there will be a small number of extremely powerful computers with the capability for autonomous invention that will exceed what we want to give either to our own citizens without permission or to our competitors. They will be housed in an army base, powered by some nuclear power source and surrounded by barbed wire and machine guns. It makes sense to me that there will be a few of those amid lots of other systems that are far less powerful and more broadly available.

Agreement on all these things must be mutual. You want to avoid a situation where a runaway agent in China ultimately gets access to a weapon and launches it foolishly, thinking that it is some game. Remember, these systems are not human; they don’t necessarily understand the consequences of their actions. They [large language models] are all based on a simple principle of predicting the next word. So, we’re not talking about high intelligence here. We’re certainly not talking about the kind of emotional understanding of history that we humans have.

So, when you’re dealing with non-human intelligence that does not have the benefit of human experience, what bounds do you put on it? That is a challenge for both the West and China. Maybe we can come to some agreements on what those are?

Gardels: Are the Chinese moving up the capability ladder as exponentially as we are in the U.S. with the billions going into generative AI? Does China have commensurate billions coming in from the government and/or companies?

Schmidt: It’s not at the same level in China, for reasons I don’t fully understand. My estimate, having now reviewed the scene there at some length, is that they’re about two years behind the U.S. Two years is not very far away, but they’re definitely behind. 

There are at least four companies that are attempting to do large-scale model training, similar to what I’ve been talking about. And they’re the obvious big tech companies in China. But at the moment they are hobbled because they don’t have access to the very best hardware, which has been restricted from export by the Trump and now Biden administrations. Those restrictions are likely to get tougher, not easier. And so as Nvidia and their competitor chips go up in value, China will be struggling to stay relevant. 

Gardels: Do you agree with the policy of not letting China get access to the most powerful chips? 

Schmidt: The chips are important because they enable the kind of learning required for the largest models. It’s always possible to do it with slower chips, you just need more of them. And so, it’s effectively a cost tax for Chinese development. That’s the way to think about it. Is it ultimately dispositive? Does it mean that China can’t get there? No. But it makes it harder and means that it takes them longer to do so.
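The cost-tax logic can be sketched with rough arithmetic (the numbers below are illustrative, not real chip specifications):

```python
# Sketch of the "cost tax": total training compute is roughly
# chips x chip_speed x time, so halving chip speed (export-restricted
# hardware) means doubling chips, or time, to match the same run.
def chips_needed(target_compute, chip_speed, days):
    return target_compute / (chip_speed * days)

target = 1e9  # arbitrary units of total compute for one training run
frontier = chips_needed(target, chip_speed=100, days=100)
restricted = chips_needed(target, chip_speed=50, days=100)
print(restricted / frontier)  # 2.0 -- twice the chips for the same run
```

This is why the restrictions slow rather than stop development: the same model remains reachable, just at a higher price in hardware or time.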

I don’t disagree with this strategy by the West. But I’m much more concerned about the proliferation of open source. And I’m sure the Chinese share the same concern about how it can be misused against their government as well as ours. 

We need to make sure that open-source models are made safe with guardrails in the first place, through what we call “reinforcement learning from human feedback” (RLHF), fine-tuned so those guardrails cannot be “backed out” by evil people. It must not be easy to make open-source models unsafe once they have been made safe.

This interview has been edited for clarity and brevity.

The post Mapping AI’s Rapid Advance appeared first on NOEMA.

How Seawater’s Teeming Life May Change Our Own
Noema, Oct. 24, 2023

The pioneering cartographer of the human genome, Craig Venter, discusses synthetic biology and his new book, “The Voyage of Sorcerer II: The Expedition That Unlocked The Secrets of The Ocean’s Microbiome,” with Noema Editor-in-Chief Nathan Gardels.

Nathan Gardels: Generative AI has been heralded lately as one of the great game-changing innovations of our time. I remember in one of our conversations years ago when you said already then that biology was becoming a computational science, opening a path to the “dawn of digital life.”  What is the impact of the ever-more empowered big data processing of AI, particularly generative AI, on genomics and the potential of synthetic biology?

Craig Venter: So far, one of the greatest impacts of generative AI has been in improving protein structure predictions, that is, 3D modeling of the proteins that gene sequences encode. That is a big deal because it allows us to understand many of the genes with unknown functions that provide the various chemical signals determining the growth, differentiation and development of cells. As far as anybody can tell, the predictions coming out of generative AI seem to be a big improvement over existing algorithms.

In 2016 we announced the first synthetic “minimal cell,” a self-replicating organism, a bacterial genome that encoded only the minimal set of genes necessary for the cell to survive. But even at that quite minimal level we still did not know the functions of up to 25% of those genes.

We have a very substantial amount of biology left to learn, even though everybody was getting to the point where they thought we knew it all.  As soon as you start to think that, you’re wrong. And so, AI tools, and certainly improved protein predictions, have been useful in helping to determine the structure and function of some of the unknown proteins that are essential for life.

We can’t design and create future synthetic living cells without knowledge of these unknown genes that encode the function of essential proteins. So we’re slowly making headway with these new tools.

Having said that, AI is only as good as the datasets it is trained on. And if you don’t know the answer to the function of genes to begin with, AI is not going to miraculously pull that out of the rear end of the computer. Yes, as I said, you can get hints by comparing the protein structures we know to these better-predicted ones. That is a helpful tool, but it’s not a savior. We have solved half of the 156 genes of unknown function that are essential for life.

What has most transformed genomics overall is the ever-increasing speed and decreasing cost of computation. With the amount of data we have, that is obviously essential. When we sequenced the first human genome in 2000, we had to build a one-and-a-half teraflop computer that cost about $150 million. Today that would cost only a few thousand dollars.
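A back-of-envelope check of that cost collapse, taking Venter’s figures and assuming $3,000 for “a few thousand dollars”:

```python
# Rough arithmetic on the compute cost drop Venter describes.
# 2000: a 1.5-teraflop machine cost about $150 million.
# Today: "a few thousand dollars" -- we assume $3,000 for the math.
cost_2000 = 150_000_000
cost_now = 3_000          # assumption: "a few thousand dollars"
teraflops = 1.5

per_tflop_2000 = cost_2000 / teraflops   # $100 million per teraflop
per_tflop_now = cost_now / teraflops     # $2,000 per teraflop
print(f"{per_tflop_2000 / per_tflop_now:,.0f}x cheaper per teraflop")
```

Under that assumption, the cost per teraflop has fallen by a factor of about 50,000 since the first human genome was sequenced.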

Now, reading the genetic code has gone from billions of government dollars in labs to $100 million at Celera to now only $200 to have your genome sequenced. That’s a radical change in the reading of the genetic code.

Writing the genetic code has lagged well behind, though there is very exciting new technology. Barry Merriman, co-founder of the company Avery, has developed a method of constructing and assembling synthetic DNA on computer chips, where each pixel can synthesize a separate chain of DNA. This potentially increases the rate of synthesis by at least 10,000-fold.

It took us 10 years to make the first minimal synthetic cell because there was so much trial and error. Many of our designs didn’t work because we were missing key components of biological knowledge. That took a long time because synthesis was so slow; we had to build an entire genome and test it. Then if it didn’t work, we had to take it apart and reassemble it adding other components.

With these new tools, potentially, we could make 1000 genomes at a time to readily see which one boots up and produces life. So, the rate of experimentation will be greatly enhanced, but it’s still trial and error. Either you have a gene set that leads to life, or you don’t. And just having one gene wrong at the minimal cell level meant the difference between life and no life.
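The screening loop Venter describes can be sketched as follows; the viability test here is a made-up stand-in for a real lab “boot-up” experiment, and the gene names are hypothetical:

```python
# Hypothetical sketch of parallel genome screening: build many candidate
# genomes at once and keep only the designs that "boot up" into living
# cells, instead of testing one design at a time.
def screen(candidates, boots_up):
    return [genome for genome in candidates if boots_up(genome)]

# Stub viability test: a design "boots" only if it retains every
# essential gene -- echoing the point that one wrong gene at the
# minimal-cell level means no life at all.
essential = {"geneA", "geneB"}
designs = [
    {"geneA", "geneB", "geneC"},  # viable: all essentials present
    {"geneA"},                    # dead: missing geneB
    {"geneA", "geneB"},           # viable: the minimal set itself
]
viable = screen(designs, lambda genome: essential <= genome)
print(len(viable))  # 2
```

Scaling the candidate list from one genome to a thousand changes the rate of experimentation, but, as Venter notes, each test is still binary: life or no life.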

But again, even with all the power and speed, AI can’t solve unknowns. It can just give us hints of what else to look at and look for.

“Reading the genetic code has gone from billions of government dollars in labs to $100 million at Celera to now only $200 to have your genome sequenced.”

Gardels: Yuval Noah Harari, the Israeli historian, says that generative AI has hacked “the master key” of human civilization — language and the ability to construct narratives. He worries that machines could one day author our narrative.

Your work in synthetic biology over decades, first mapping and then learning to read and write genetic code, is really hacking the master key of life itself.

In this sense, don’t you see synthetic biology as the parallel to artificial intelligence? Together they are fostering a phase transition in evolution.  

Venter: In the broader sense and long run as you frame it, yes. But the difference between synthetic biology and ChatGPT, for example, is that biology is real. It is either alive or not alive. ChatGPT generates a fair amount of misinformation. That doesn’t work in the world of biology as it does in politics.

Gardels: There’s no fake news in synthetic biology.

Venter: You can’t fake life.

Gardels: What are the practical implications of synthesizing genes for healthcare, predicting disease or preventing disease?

Venter: We made the first synthetic vaccine for the H7N9 flu in 2013. We showed that by just synthesizing the DNA of that virus — which we did in four days and four hours — and feeding it into our digital biological converter, we could just send that information digitally. We had a receiver at the other end at Novartis, which immediately printed out the new molecule chain for the vaccine and scaled up production.

Much of what happened with the rapid development of RNA vaccines for Covid derives from the approach that we developed during that first vaccine synthesis. Our team made a large number of Covid mutational variants for the various companies to test their vaccines against to see if they covered those variants. It took us less than a week to make more than 50 synthetic variants.

So, the technology for writing the genetic code is revolutionizing what’s happening in the vaccine world.

With all the tens of millions of genes that we have discovered in the ocean, we are finding new metabolic pathways to building life that are teaching us a lot we haven’t known about chemistry. We can now synthetically reproduce those pathways for cells and potentially create a new revolution in manufacturing everything from building materials to pandemic vaccines. With new discoveries come new tools.

Gardels: Gathering more data through exploring the microbiome of the seas has been the aim of the years-long project you describe in your new book, “The Voyage of Sorcerer II: The Expedition That Unlocked The Secrets of The Ocean’s Microbiome.”

As a way of saying that the universal is contained in the particular, the poet Khalil Gibran wrote that “you are not a drop in the ocean, you are an ocean in a drop.”

You have turned that notion into a scientific endeavor, going around the world at different locations, collecting seawater and discovering for the first time whole universes of teeming life below. These in turn provide large data sets that can be processed to help discover the secrets of primordial life that will drive future advances of synthetic biology.

Does that fairly describe your project? What have you discovered? And what are the implications?

Venter: Well, those are certainly some of the goals and the hoped-for outcome.

Our project started like most things in science with a relatively simple question — “What is life?” — that is difficult to answer. After sequencing the human genome, I was looking around at other places to apply this new set of computational tools to understanding biology. As a sailor, swimmer and surfer my mind turned naturally toward the sea.

It was clear to me that the more genomes we had, the more it helped by comparison to interpret our genomes and to interpret life. Reports were that there was a very low diversity of life in the oceans. Up until DNA sequencing, the way we discovered new microbes was by looking for them under the microscope or growing them in culture.

But basically, if they couldn’t be grown in a lab culture, they were deemed virtually not to exist. Because of our limited tools, we were missing probably 99% of the biology of our planet.

Darwin went around just to look and see what was there, making his fantastic observations. The voyage of HMS Beagle to observe the world empirically was discovery science at its best.

“Changing the ocean temperature by only one degree can kill off certain types of bacteria that make life on the planet Earth livable.”

Today, that approach to science is put down by the establishment. If you don’t have a hypothesis, you can’t go out and just do “a fishing expedition.” But I decided to do just that: to take a barrel of seawater and filter it through several layers to collect everything from the tiniest viruses to microbes to diatoms.

Then, we took those filters and isolated all the DNA and RNA and sequenced the genomes. Our first surprising discovery was that thousands of organisms dwelt in just one small extraction of seawater. In just one barrel from the Sargasso Sea, we came up with over 2,000 species, 148 of which had never been seen before. We stopped sequencing at 1.4 million new genes just from one sample of seawater from the ocean.

When we published that study in 2004, everybody’s outlook on what was there changed. Of course, you can’t extrapolate from sampling and sequencing in one location to the idea that the ocean is a giant homogenous soup. So, we decided to launch a sailing expedition around the world, like the famous HMS Challenger in the 1870s, regarded today as one of the foundations of oceanography.

At the time of the Challenger expedition, the theory was there couldn’t be life below 18,000 feet. How they came up with that brilliant hypothesis, no one knows. At the time, the first transatlantic telegraph cables were being laid and more knowledge about the ocean bottom was needed.

Challenger went out to various locations across the world, stopping every 200 miles, and, among other experiments, dredged the bottom and pulled it up to see what was there. In every sample from the sea bottom, they discovered new life. There was life at every depth.

We decided to follow their example with the Sorcerer II. The reason for the 200-mile interval is that’s roughly how far a decent-sized sailing vessel can sail in 24 hours. So we did the same thing. We stopped every 200 miles, only instead of dredging the ocean floor, we collected 400 liters of seawater at each stop, filtered out all the organisms, put the filters in the freezer till we got to port and sent them back to the Venter Institute near San Diego, where they could be sequenced.

What we found, astonishingly, is that every 200 miles, 80% of the sequences were unique. The diversity is incredible. We discovered far more organisms in the ocean than there are stars and planets in the universe! Yet, we know we are only scratching the surface, even with the tens of millions of organisms we discovered at these sailing intervals.
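That per-site uniqueness is just a set comparison between sampling stops; here is a toy version with made-up sequence IDs (the expedition’s real figure was about 80% unique per stop):

```python
# Toy illustration of per-site sequence uniqueness: compare sequence
# sets from two sampling stops and measure the fraction at the second
# stop never seen at the first. Sequence IDs are invented.
site_a = {"s1", "s2", "s3", "s4", "s5"}
site_b = {"s4", "s5", "s6", "s7", "s8"}

unique_fraction = len(site_b - site_a) / len(site_b)
print(unique_fraction)  # 0.6 in this toy example
```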

These organisms produce about 50% of the oxygen that we breathe, so to survive we need to preserve that environmental resource. Changing the ocean temperature by only one degree can kill off certain types of bacteria that make life on the planet Earth livable. That’s why in the Seychelles, you have all these white sandy beaches. They look beautiful, except when you understand where they came from — coral reefs dying from warming seas that are killing off key symbiotic bacteria that kept the coral alive. What we are learning is that we are changing our environment to the detriment of the conditions necessary for survival.

Everybody’s worried about the sea level rise from climate change. That is certainly going to be important. But far worse will happen if we wipe out the producers of oxygen, as we see with these huge areas the size of the United States or Africa with no oxygen in them at all, dead zones in the ocean plus several-mile-wide islands of plastic that we sailed through.

We discovered massive life we hadn’t known about during our expedition. But we also discovered how important that life is for our existence and how we are damaging it.

Gardels: Did you run across obstacles in collecting samples near sovereign territorial zones?

Venter: One of the challenges we faced in our expedition concerned the ownership of genetic resources. We were arrested twice on our voyage, once by the French and once by the British.

Sailing from the Galapagos to the Marquesas islands in French Polynesia is one of the longest open ocean passages. The current moves at roughly one knot across the open ocean — carrying all those microbes and viruses that belong to all humanity — toward the Marquesas. But as soon as the current crossed within 200 miles of the islands, the microbes and viruses became so-called French genetic heritage, until the currents carried them out of French waters.

“We’re not going to synthesize in the lab anything near the quantity that will replace our ocean environment … We humans have to change, or we’re going to choke ourselves to death.”

And so, while they are briefly in the French domain, the French don’t want anybody else collecting them and making discoveries. They want French scientists to make those discoveries. More than that, I think they were afraid we were going to take samples near their nuclear test sites to show how much mutation had been caused by all the radiation.

After all, back in 1985 French intelligence agents sank the Rainbow Warrior, a Greenpeace ship that was planning to sail into French Polynesia to protest their nuclear tests.

In our case, the U.S. Navy had to intervene and declare that we were a U.S. research vessel under U.S. government protection. Even collecting microbes and doing science these days is subject to outmoded nationalist notions.

Gardels: Maybe you will go down in French history as the “microbe pirate!”

On the climate point, if we are now understanding how we can synthesize the qualities of those microbes to produce oxygen, that’s obviously beneficial, at some point, for dealing with global warming, isn’t it?

Venter: Well, we could never reproduce what these organisms do at the scale of a planet covered mostly by oceans. The number of microbes that exist out there is like 10 to the 30th power.  We’re not going to synthesize in the lab anything near the quantity that will replace our ocean environment. We can maybe come up with things to help decrease the pollution. But fundamentally, we humans have to change, or we’re going to choke ourselves to death by eliminating our oxygen supply.

Gardels: The discoveries you describe remind me of the discoveries of quantum physics. What appeared solid actually turns out to be composed of particles and waves in constant motion and interaction. What once were thought to be empty volumes of water in the seas akin to a desert is actually teeming with a seemingly infinite population of organisms that create and sustain the conditions of biological existence.

As you say in the book, we really live on Planet Microbe. Perhaps naming our era the Anthropocene is a misconception? The main players are actually the microbes. We live really in the Microbiocene.

Venter: That is true. The main players in the human body that have made the Anthropocene are microbes. We have colonies of microbes within us that help keep us alive. You’ve got different microbes in your mouth, even different sub-colonies around each tooth and on your skin. The fact is, we live in a microbial world. We’re visitors in it. We’re hosts in the microbiome universe. If we eliminate the microbes, we eliminate our ability to live.

But without these tools of modern science invented by humans, they were essentially invisible. Just as we began to think we knew most of biology, to the point where we could read the genetic code and design a cell by rewriting that code, we have found out that we don’t know most of biology. The more we know the more we realize what we don’t know.

Gardels: As a scientist, what does that teach us about the presumed centrality of humans on the planet?

Venter: Well, we play a central role in this sense: We can either ensure our existence or ensure that we will go away. The microbes are here to stay. They will change and evolve as necessary to survive.

One of the most surprising things we found through our literal and metaphorical voyage is there are two symbiotic drivers of the genetic mutation behind evolution: bacteria and viruses.

SAR11 is a microbe that we found in thousands of variants. These organisms are constantly mutating, constantly evolving as a sort of cloud of species, not just one. If you take a milliliter of seawater, you will find a million such bacteria, but there are 10 million viruses as well that help control the microbial populations. The last thing these viruses want to do is kill off their hosts.

So the viruses that infect organisms actually carry updated photosynthetic genes. In essence, they go in and update the operating software in their host cells to make them robust and keep them alive.

This creates some philosophical issues about whether the viruses are in charge. But viruses aren't truly alive unless they're growing in a host. It's a grand scheme in nature where all these things are interdependent, not least within us human hosts. Each determines the functioning of the other.

“We can either ensure our existence or ensure that we will go away. The microbes are here to stay. They will change and evolve as necessary to survive.”

On their own, microbes will overpopulate. They will mutate themselves out of existence unless the viruses bring in new versions of the software and update it.

Covid, of course, taught most people the inconvenient truth that viruses can be horrible because they cause disease. Yet it is only the smallest fraction of viruses that affect humans in a disease-causing way. Mostly they’re part of the scheme that’s helping to keep us alive.

Gardels: They really are the carriers of information between species, the postman among all life forms.

Venter: Exactly, like a USB or CD for downloading new information.

Gardels: Back in 1975 at the Asilomar Conference on Recombinant DNA, scientists arrived at a consensus that human intervention should not alter the germline, where traits are passed down to the genome of future generations. Given the advances we’ve been discussing that can identify the function of genes and proteins to predict and prevent disease, will that still hold? Should that still hold?

Venter: Well, it depends on what species you're talking about. We created a totally new species that never existed before with our minimal synthetic cell. Any synthetic organism like that needs to be watermarked so it doesn't get confused with the evolution of the natural world.

We put our institutional names and some quotes from literature in the watermarks of the genetic code we synthesized. We need to make sure we’re keeping track of all synthetic organisms.

When it comes to the human genome, we do now have good tools for changing the germline to eliminate totally devastating diseases like Tay-Sachs or ataxia, a debilitating disease where you lose control over your muscles from a very early age.

The problem of intervention in such cases is that it is a slippery slope. Polls have been taken of young parents-to-be asking what they would change in their child's genetic code if they could. The top responses were mostly superficial. They would change their child's height. Those parents who wanted their kids to be athletic wanted to change fast-twitch to slow-twitch muscles that support endurance.

In other words, they were interested in cosmetic effects, not whether we could eliminate devastating diseases from the human population.

So how do you start the path of intervention without causing a landslide of demand for either frivolous or frightening changes?

Above all, the main reason we should not go down this path lightly is that, for all the hype, we’re still at an early stage of interpreting the human genome. I said 20 years ago that we know less than 1% about the functioning of all the genes in the genome. And I don’t think we’ve seen much progress in this respect since.

Yes, we’ve seen a lot of progress in mapping and processing genomes. We started the company Human Longevity in order to better understand how the phenotype of an individual is shaped by the interaction of the genotype with the environment. But we just still do not have sufficient knowledge and understanding of the genome to assuredly go in and make changes that do not cause more harm than good.

Gardels: Everything that is changed changes something else in ways we can’t know.

Venter: That is the key thing to understand. We just still don’t understand our own biology very well.

For example, some people treat manic depression as a disease that has genetic causes. It would be great to get rid of it. But the argument has been made that the most creative people also fall somewhere along the spectrum of manic depression. So will we eliminate all those great leaps of creativity from the human population if we manage to eliminate the depression gene?

CRISPR is a great tool for advancing research, but it is not a magic tool. You can make specific changes in genes with it, but it has what's called “off-target effects” that cause random changes in other genes. So if you think you're correcting Tay-Sachs disease, without measuring every other change in the genome that you're making and understanding its effect, you could be causing far more harm than good.

We get caught up in the science fiction of “now that we have these magic tools, we can rewrite the human genetic code.” We are decades, if not centuries, away from having enough knowledge to be able to do that intelligently and responsibly.

Gardels: China is also pursuing pathways to synthetic biology just as you are. Where does China stand on its progress in the same type of research?

Venter: China is investing far more than the U.S. government is in genomics generally as well as synthetic biology. They obviously recognize that it’s the future of medicine and the future of manufacturing. If we don’t change the level and kind of funding that exists in the U.S. for such basic research, we’ll be learning everything about biology in the future from China.

The post How Seawater’s Teeming Life May Change Our Own appeared first on NOEMA.

]]>
How To Govern The World Without World Government https://www.noemamag.com/how-to-govern-the-world-without-world-government Tue, 17 Jan 2023 17:39:35 +0000 https://www.noemamag.com/how-to-govern-the-world-without-world-government The post How To Govern The World Without World Government appeared first on NOEMA.

]]>
Noema Deputy Editor Nils Gilman and Associate Editor Jonathan Blake recently met with Harvard Kennedy School professor Roberto Mangabeira Unger to discuss his latest book, “Governing the World Without World Government.”

Noema: Your new book makes the case for how we should produce global public goods without relying on what you call “globalism” — that is, the belief in the possibility of supranational government. While it is obviously the case that the sovereign nation-state remains the bedrock of national politics and international relations, it is equally hard to deny that the idea and practice of state sovereignty impedes global cooperation and thereby threatens the conditions of global habitability. We live within a complex of planetary-scale physical, geochemical and biological systems that operate according to the laws of nature, regardless of the laws of nations. What should we do about this collision?

Roberto Unger: The dominant tenor of writing on global governance is animated by what I call a “soft globalism.” By that, I mean that many people who write about this topic are often antagonistic to national sovereignty and would prefer to see it attenuated. Yet these thinkers' soft globalism puts them at odds with the overwhelming preferences of contemporary humanity, which massively rejects any suggestion of a move toward a world state.

Whereas the soft globalists seem to think there exists a huge range of possible alternatives for governing the world worth considering, experience suggests there's only one option that works: voluntary cooperation among sovereign states to help solve problems that they cannot adequately solve alone.

Now, I don’t believe in national sovereignty simply because it’s the majority view. I agree with it substantively. The division of humanity into sovereign states is more than a brute fact. My position is that humanity develops its powers and its potential only by doing so in different directions, and can be unified only by being allowed to diverge. Visions of convergence — that we will all converge on the same set of best available practices and institutions — are a disaster. They subvert and impede the experiments by which humanity develops its potential.

All human beings are born nailed to two crosses. We are crucified, first, in a position within the internal social order of a nation-state. We are born into a particular class, caste or community and are required to spend our lives struggling to emancipate ourselves from the consequences of that crucifixion.

But we are crucified the second time by finding ourselves accidentally born into one of these national communities into which humanity is divided. I don’t diminish the significance of this double nightmare. But the alternatives to it are even worse. The idea of evolution toward a world state, a world empire, would be a prison from which we could not escape and in which we would have much less prospect of continuing the ascent of mankind.

Now, of course, we face problems that are global problems. How can we hope to avert the worst harms and achieve the most important common goods, when the world is divided into clashing, greedy, forceful and violent national states? The division of the world into sovereign nation-states is by far the lesser evil, compared to the union of mankind into a single state or into a collection of hegemonic states that would achieve an agreement among themselves and impose it on the rest of mankind in the name of what is allegedly necessary for all.

Our way of approaching the organization of the world should be in the service of the ability to create alternative structures, to create the possibility to resist the imposition of dogmatic blueprints by the powerful. That’s why we need pluralism.

Now, pluralism comes with dangers, including the danger of environmental destruction. Some of these national experiments will take us backward. But the fact that the future is open means it is inherently open to danger. It cannot be open without being dangerous.

“Humanity develops its potential only by doing so in different directions, and can be unified only by being allowed to diverge.”

Noema: Let me give a concrete example and ask which of the types of voluntary cooperation among plural political structures you think might be most productive to deal with it. Your home country, Brazil, happens to claim sovereignty over most of the Amazon rainforest. But many people — including climate scientists and ecologists — view the preservation of the Amazon as a global public good. Do you agree?

Unger: Yes, it is a subject in which all humanity has an interest. That’s true.

Noema: Okay, so given that all humanity has an interest in this, let’s imagine that Brazil were to elect a president who believed that the best thing to do with the Amazon was to chop the whole thing down and turn it into grazing land. What should be the response, given the voluntary cooperation model you propose?

Unger: One can always imagine examples that push anything to the limit, and, of course, we've had a case like this in former president Jair Bolsonaro, who I think you are referencing. Yet despite his lack of commitment to the preservation cause, he was very, very distant from the extreme case that you present. As Brazilians sometimes say, “We have preserved much of the Amazon, unlike, for example, the French or the Germans, who have chopped almost everything down and planted some trees in the garden. So why are you going after us?” Then it becomes a discussion about the details. And we come back to the world of reality, in which there aren't these simple contrasts.

Noema: One retort might be that the planetary sapience of the importance of preserving biodiversity hotspots did not exist at the time that the French, Germans and Americans razed their forests.

Unger: Let's forget about the past and focus on the question of contemporary environmentalism. The main temper of the environmental cause in the rich North Atlantic world is a kind of post-structural, post-ideological politics. Northern environmentalists would like the Amazon to be kept, in essence, as a park for the benefit of humanity, when in fact there are more than thirty million people living and working there.

Let me give you a concrete example with respect to the Amazon. What does sustainable development in the Amazon mean? It could mean two things. On the one hand, it could mean a primitive, artisanal extractivism, in which you have, for example, indigenous rubber tappers taking latex out of the trees. I think that, implicitly, that’s what a lot of these Amazon-preoccupied people in the rich countries have in mind. It’s a form of “sustainable” development that has no science, no technology, no scale and, therefore, no future. It’s a joke. It’s the same thing as having “primitive peoples” roaming around in a kind of zoo.

On the other hand, the alternative to that is having an advanced form of sustainable development based on technology and science and new institutional models. In other words, it would be a variant of the knowledge economy. It’s either primitive, craft production, or it’s highly advanced. And where is this high advancement to take place? Then we come back to the question of division of experiments. The reason to have divisions is so that we can go from a period in which we had a crazed president who is hostile to environmentalism of any kind to another period in which we can retake the idea of preservation as a variant of the knowledge economy. And instead of having what the Americans have, for example, which is an insular knowledge economy, that excludes the vast majority of workers and firms and therefore produces both stagnation and inequality, we can aspire to have a knowledge economy for the many. We can take the problem of preservation in the Amazon as one of the hooks or provocations for this project of building strong, inclusive economies.

“The idea of evolution toward a world state would be a prison in which we would have much less prospect of continuing the ascent of mankind.”

Noema: The issue, however, is that past experiments have closed the possibility of certain present and future experiments. Once the Europeans and North Americans chopped down forests and killed off so many species in the name of our national experiments, it foreclosed what might have been reasonable experiments about cutting down remaining intact forests — assuming we want to keep a habitable Earth, that is.

Unger: I understand that. What you’re saying is that there are terrible things that can happen as a result of this division of humanity into sovereign states. The fundamental answer to that was enunciated by British mathematician and philosopher Alfred North Whitehead when he said, “It is the business of the future to be dangerous.” Yes, it is dangerous. And there is no antidote to that, because the alternative danger is having a set of princes or an emperor imposing an order on humanity — which is much, much worse.

Noema: Let us pose an alternative governance model, one that we proposed in an earlier essay. We imagine a set of narrowly tailored planetary institutions dedicated to broad-strokes standard-setting on specific planetary problems. In this arrangement, a planetary climate change institution, for example, would set non-voluntary limits on atmospheric carbon emissions for the planet as a whole, handing out mandates to national governments. Each nation-state would then get to decide how to reach those targets, but the targets would be set at the supranational level, though by problem-oriented institutions, not a general-purpose world government.   

Unger: Here’s my question: How will this regime that you described come about? If it arises as a form of voluntary cooperation, then it would be an example of what I call a “special purpose coalition,” like the International Agency for Solar Policy and Application or the efforts to prohibit human trafficking or to safeguard biodiversity. But if someone imposes this regime on states by force, then I have a problem. Because if they can impose this apparently progressive maneuver by force, there are lots of other things that they can impose by force. And then we have the beginning of the world state, and the doors of the prison are locked forever.

“The division of the world into sovereign nation-states is by far the lesser evil.”

Noema: Is your problem with transnational institutions capable of setting binding targets that they represent a slippery slope toward a (potentially tyrannical) world government?

Unger: No, it's not a slippery slope argument. I'm focused on the question of whether the beginning of this process arises from an imposition, in which there is a background threat of force. Or is there going to be a regime of inducement and of cooperation? And I think that if we can impose this by force of arms, then there is nothing that we cannot impose by force of arms. Why stop there?

Noema: Is this simply a question of scale? After all, many if not all nation-states were formed by force against various unwilling populations. And, of course, many of these governments continue to be repressive. Is your fear that what is already happening at many national scales would be imposed on the global scale?

Unger: As I said in my metaphor about the crucifixion, nation-states are not beds of roses. I am not imagining some way to extirpate the element of oppression from human life. There is no neutral definition of a free society: Every institutional order tilts the scales, encouraging some forms of life while discouraging others. My fundamental argument is in favor of experimental pluralism. There will be a struggle in these different states, and some of them will be much more democratic than others, and some will allow for the enhancement of human agency more than others. But with humanity divided into different political communities, there mustn’t be just one conductor.

“Northern environmentalists would like the Amazon to be kept as a park for the benefit of humanity, when in fact there are more than thirty million people living and working there.”

Noema: Let’s turn to one important instance of institutional innovation arising out of a peaceful coming together of nation-states: the European Union. In your book, you describe the EU as a regional coalition of the willing that provides a possible “model for global order.”

Unger: Let’s accept the European Union as a model for globalization. If you take that idea seriously, then you’d quickly reach a question: Why is it that in Europe, so many who are young or old, or adventurous or romantic, or very left or very right, are against the European Union? The answer is that the EU lies under the dead hand of technocratic centrism. That’s why everyone who has life in them is against the Union.

Europe is a museum, the least interesting part of the world. Most signs of life there come from the right. Otherwise, a Frenchman just wants to sit in his cafe and be served by a Polish waiter. The idea that there should be ideological and political clashes and experiments has vanished.

How did that come to be? It came to be because the dominant architectural principle in the evolution of the EU is legal and institutional convergence. European economic and social policies are increasingly centralized in the EU government, de jure in Brussels though de facto in Berlin. Conversely, the power to develop the social and educational endowments of the citizens is delegated to the national and sub-national authorities.

What could be the alternative? The alternative should be just the opposite. The main vocation of the EU should be to ensure the capabilities of all its citizens and to develop their educational and economic endowments. Then the widest latitude of institutional experimentation should be devolved to the member states. This would be a model of globalization worth pursuing.

Such a change couldn't come about as a gift from the European technocracy to the peoples of Europe — it could only come about if the southern and eastern member states allied with opposition forces within Germany, and within France, to force a change. It's highly unlikely to happen in the present circumstance, but that's what would be necessary. And then the European Union would be a model for the kind of globalization that would be better for the world, rather than a kind that's worse.

Noema: We are sympathetic to this view that we should encourage plural and dynamic experimentation. At the same time, if global temperatures increase by four or five degrees centigrade, none of the experiments we’re going to be having are going to be very pleasant. And the fact remains: all the attempts at voluntary cooperation to reduce greenhouse gas emissions have failed. So what do we do?

Unger: First of all, I don't agree with the idea that climate change is somehow the supreme global harm. The major global danger is the same as it's always been: war among the great powers. Everything else is less important, including climate change. And it's a problem that we cannot escape because it is rooted in the division of the world, which we need, because the alternative to it is worse.

Noema: Your certainty that any alternative to the sovereign state will be worse seems to dismiss a range of possible futures. Yet back in 1987, you wrote, “History really is surprising; it does not just seem that way.” Do you still believe that?

Unger: We now know that there was a time in the history of the universe when the present structural entities did not exist. The basic subatomic structure described by particle physics did not exist, and the laws and constants and symmetries of nature as we now describe them did not apply. The universe has a history. So, history is prior to structure, already cosmologically.

Then we have in the evolution of the universe a series of events that increased this power (that always existed) for the production of the new. Already prior to life in the geological record, we find the creation of novelties, like the formation of crystals. Then comes the mind, consciousness and more. The evolution and ascent of humanity and the development of our powers of agency is related to this enhancement of our ability to create the new. And each of these is a prophecy of more creation of the new. The fundamental reason why reality is surprising is that the new is possible.

This brings me back to our question of governing the world without world government. Whatever we do with respect to the arrangements for the organization of the world, its consequence must not be to suppress or even to diminish our ability to create the new. Because our ability to create the new is our fundamental power.

This interview has been edited for length and clarity. It has not been revised by the interviewee.

The post How To Govern The World Without World Government appeared first on NOEMA.

]]>
A New Philosophy Of Planetary Computation https://www.noemamag.com/a-new-philosophy-of-planetary-computation Wed, 05 Oct 2022 15:57:41 +0000 https://www.noemamag.com/a-new-philosophy-of-planetary-computation The post A New Philosophy Of Planetary Computation appeared first on NOEMA.

]]>
Credits

Benjamin Bratton is the director of the Antikythera program at the Berggruen Institute and a professor at the University of California, San Diego.

A transformation is underway that promises — or threatens — to disrupt virtually all of our long-standing conceptions of our place on the planet and our planet’s place in the cosmos.

The Earth is in the process of growing a planetary-scale technostructure of computation — an almost inconceivably vast and complex interlocking system (or system of systems) of sensors, satellites, cables, communications protocols and software. The development of this structure reveals and deepens our fundamental condition of planetarity — the techno-mediated self-awareness of the inescapability of our embeddedness in an Earth-spanning biogeochemical system that is undergoing severe disruptions after ten millennia of relative stability. This system is both an evolving physical and empirical fact and, perhaps even more importantly, a radical philosophical event — one that is at once forcing us to face up to how differently we will have to live, and enabling us, in practice, to live differently.

To help us understand the implications of this event, the Berggruen Institute is launching a new research program area, in partnership with the One Project foundation: Antikythera, a project to explore the speculative philosophy of computation, incubated under the direction of philosopher of technology Benjamin Bratton.

The purpose of Antikythera is to use the emergence of planetary-scale computation as an opportunity to rethink the fundamental categories that have long been used to make sense of the world: economics, politics, society, intelligence and even the very idea of the human as distinct from both machines and nature. Questioning these concepts has of course long been at the heart of the Berggruen Institute’s research agenda, from the Future of Capitalism and the Future of Democracy, to Planetary Governance, the Transformations of the Human, and Future Humans. The Antikythera program described here exists on its own, but also in dialogue with each of these other areas.

For Bratton and the Antikythera team, planetary-scale computation demands that we reconsider: geopolitics, which will increasingly be organized around parallel and often competing “hemispherical stacks” of computational infrastructure; the process of production, distribution and consumption, which will now take the form of “synthetic catallaxy;” the nature of computational cognition and sense-making, which is no longer attempting merely to artificially mimic human intelligence, but is instead producing radically new forms of “synthetic intelligence;” the collective capacity of such intelligences, which is not located only in individual sentient minds, but rather forms an organic and integrated whole we can better think of as an emergent form of “planetary sapience;” and finally, the use of modeling to make sense of the world, which is increasingly done through the computational “recursive simulation” of many possible futures.

Applications are now open to join the program's fully funded five-month interdisciplinary research studio, based in Los Angeles, Mexico City and Seoul from February to June 2023. This studio will be joined by a cohort of over 70 leading philosophers, research scientists and designers.

To mark Antikythera’s launch, Noema Deputy Editor Nils Gilman spoke with Bratton about the key concepts motivating the program. 

Nils Gilman: The Antikythera mechanism was discovered in 1901 in a shipwreck off the coast of a Greek island. Dated to roughly 200 BC, the mechanism was an astronomical device that not only calculated things, but was likely used to orient navigation across the surface of the globe in relation to the movements of planets and stars. Tell me why this object is an inspiration for the program. 

Benjamin Bratton: For us, the Antikythera mechanism represents both the origin of computation, and an inspiration for the potential future of computation. Antikythera locates the origin of computation in navigation, orientation and, indeed, in cosmology — in both the astronomic and anthropological senses of the term. Antikythera configures computation as a technology of the “planetary,” and the planetary as a figure of technological thought. It demonstrates, contrary to much of continental philosophical orthodoxy, that thinking through the computational mechanism allows not only “mere calculation,” but for intelligence to orient itself in relation to its planetary condition. By thinking with the abstractions so afforded, intelligence has some inkling of its own possibility and agency.

The model of computation that we seek to develop isn’t limited to this particular mechanism, which happened to emerge in roughly the same time and place as the birth of Western philosophy. Connecting a philosophical trajectory to this mechanism suggests a genealogy of computation that includes, for example, the Event Horizon Telescope, which stretched across one side of the globe to produce an image of a black hole. Closer at hand, it also includes the emergence of planetary-scale computation in the middle of the 20th century, from which we have deduced other essential facts about the planetary effects of human agency, including climate change itself.

Gilman: How exactly is this concept of climate change a result of planetary scale computation?

Bratton: The models that we have of climate change are ones that emerge from supercomputing simulations of Earth’s past, present and future. This is a self-disclosure of Earth’s intelligence and agency, accomplished by thinking through and with a computational model. The planetary condition is demystified and comes into view. The social, political, economic and cultural — and, of course, philosophical — implications of that demystification are not calculated or computed directly. They are qualitative as much as quantitative. But the condition itself, and thus the ground upon which philosophy can generate concepts, is only possible through what is abstracted in relation to such mechanisms.

“What is at stake is not simply a better philosophical orientation, but the futures before us that must be conceived and built.”

Gilman: Does this imply that computation is as much about discovery of how the world works as it is about how it functions as a tool? 

Bratton: Yes, but the two poles are necessarily combined. One might consider this in relation to what the great Polish science-fiction writer, Stanislaw Lem, called “existential technologies.” I draw a related distinction between instrumental and epistemological technologies: those, on the one hand, whose primary social impact is how they mechanically transform the world as tools, and those, on the other, that impact society more fundamentally, by revealing something otherwise inconceivable about how the universe works. The latter are rare and precious. 

At the same time, planetary-scale computation is also instrumentally transforming the world, physically terraforming the planet in its image through fiber-optic cables linking continents and data centers bored into mountains, satellites encrusting the atmosphere, all linked to the glowing glass rectangles we hold in our hands. But computation is also an epistemological technology. As it drives astronomy, climate science, genomics, neuroscience, artificial intelligence, medicine, geology and so on, computation has revealed and demystified the world and ourselves and the interrelations between them. 

Gilman: This agenda seems rather different than how philosophy and the humanities deal with the question concerning computation.

Bratton: The present orthodoxy is that what is most essential — philosophically, ethically, politically — is the uncomputable. It is the uncontrollable, the indescribable, the unmeasurable, the unrepresentable. It is that which exceeds signification or representation — the ineffable. For much of the Continental tradition, calculation has been understood as a degraded, tertiary, alienated, violently stupid form of thought. Can we count the number of times that Jacques Derrida, for example, uses the term “mere calculation” to differentiate it from the really deep, significant philosophical work? 

The Antikythera program clearly takes a different approach. We know that thinking with the mechanism is a precondition for grasping what formal conceptualization and speculative thought must grapple with. What is at stake is not simply a better philosophical orientation, but the futures before us that must be conceived and built. Besides the noble projects I have described, many of the other purposes to which planetary-scale computation is applied are deeply destructive. We turned it into a giant slot machine that gives people what their lizard brain asks for. Computation is perhaps based on too much “human-centered design” in the conventional sense. This isn't inevitable. It's the result of the misorientation of the technology and a disorientation of our concepts for it.

The agenda of the program isn't just to map computation but rather to redefine the question of what planetary-scale computation is for. How must computation be enrolled in the organization of a viable planetary condition? It's a condition from which humans emerge, but for the foreseeable future, it will be composed in relation to the concepts that humans conceive.

Gilman: What makes the current emergent forms "planetary"? In other words, what do you mean by "planetary-scale" computation?

Bratton: First, it must be affirmed that computation was discovered as much as it was invented. The artificial computational appliances that we have developed to date pale in comparison to the computational efficiencies of matter itself. In this sense, computation is always planetary in scale; it's something that biology does, and arguably something that biospheres do as a whole. However, what we're really referring to is the emergence, in the middle of the 20th century, of planetary computational systems operating at continental and atmospheric scale. Railroads linked continents, as did telephone cables, but now we have infrastructures that are computational at their core.

“The ideal project for us is one which leaves us unsure, in advance, whether its speculations coming true would be the best thing in the world or the worst.”

There is continuity with this history, and there are qualitative breaks. These infrastructures not only transmit information but also structure and rationalize it along the way. We have constructed, in essence, not a single giant computer, but a massively distributed accidental megastructure. This accidental megastructure is something that we all inhabit, that is above us and in front of us, in the sky and in the ground. It's at once a technical and an institutional system; it both reflects our societies and comes to constitute them. It's a figure of totality, both physically and symbolically.

Gilman: Computation is itself an enormous topic. How do you break it down into more specific areas for focused research? 

Bratton: The Antikythera program has five areas of focused research: Synthetic Intelligence, the longer-term implications of machine intelligence, particularly through the lens of natural-language processing; Hemispherical Stacks, the multipolar geopolitics of planetary computation; Recursive Simulations, the emergence of simulation as an epistemological technology, from scientific simulation to VR/AR; Synthetic Catallaxy, the ongoing organization of artificial computational economics, pricing and planning; and Planetary Sapience, the evolutionary emergence of natural/artificial intelligence and how it must now conceive and compose a viable planetarity.

Let me quickly expand on each of them, though each could fill out our discussion all on its own. “Synthetic intelligence” refers to what is now often called “AI,” but takes a different approach to what is and isn’t “artificial.” We are working on the potential and problems of implementing Large Language Models at platform scale, a topic I have written on recently. The “recursive simulations” area looks at the role of computational simulations as epistemological technologies. By this I mean that while scientific simulations — of Earth’s climate, for example — provide abstractions that access some ground truth, virtual and augmented reality provide artificial phenomenological experiences that allow us to take leave of ground truth. In between is where we live and where a politics of simulations is to be developed. 

Gilman: Both of these speak to how computation functions as a technology that reveals how things work and challenges us to understand our own thinking differently. What about the politics of this? What about computation as infrastructure? 

Bratton: Two other research areas focus on this. “Hemispherical stacks” looks at the increasingly multipolar geopolitics of planetary-scale computation and the segmentation into enclosed quasi-sovereign domains. “The Stack” is the multilayered architecture of planetary computation, comprised of earth, cloud, city, address, interface and user layers. Each of these layers is a new battlefield. The strategic mobilization around chip manufacturing is one aspect of this, but it extends all the way to blocked apps, proposals for new IP addressing systems, cloud platforms taking on roles once controlled by states and vice versa. For this, we are working with a number of science-fiction writers to develop scenarios that will help navigate these uncharted waters. 

The area we call “synthetic catallaxy” deals with computational economics. It considers the macroeconomic effects of automation and the prospects of universal basic services, new forms of pricing and price signaling that include negative externalities and the return of planning as a form of economic intelligence cognizant of its own future. 

Gilman: How does all this relate to the big-picture claims you make about computation and the evolution of intelligence? In other words, is there a framing of how everything from artificial intelligence to new economic platforms adds up to something? 

Bratton: What we call "planetary sapience" is the fifth research area. It considers the role of computation in the revealing of the planetary as a condition, and the emergence of planetary intelligence in various forms (and, unfortunately, its prevention). We are asking: machine intelligence, for what? There is, without question, intrinsic value in learning to make rocks process information in ways once reserved only for primates. But in the conjunction of humans and machine intelligence, for example, what are the paths that would enable, not destroy, the prospect of a viable planetarity, a future worth the name? As I asked in a Noema essay last year, what forms of intelligence are preconditions to that accomplishment?

“How must computation be enrolled in the organization of a viable planetary condition?”

Gilman: Antikythera is a philosophical research program focused on computation, but also has a design studio aspect to it. How does that work? 

Bratton: The studio component of Antikythera is based on the architectural studio model but focuses on software and systems, not buildings and cities. Society now asks of software things that it used to ask of architecture, namely the organization of people in space and time. Architecture as a discourse and discipline has for hundreds of years built a studio culture in which the speculative and experimental modes of research have a degree of autonomy from the professional application. This has allowed it to explore the city, habitation, the diagrammatic representation of nested perspectives and scales and so on, in ways that have produced a priceless legacy and archive of thinking with models. Software needs the same kind of experimental studio culture, one that focuses on foundational questions of what computational systems are and can be, what is necessary and what is not, and mapping lines of flight accordingly.  

Gilman: Who are you involving in the Antikythera Studio?

Bratton: We are enrolling some of the most interesting and important thinkers working today not only in the philosophy of computation proper but also in planetary science, computer science, economics, international relations, science-fiction literature and more. We are accepting applications to join our fully funded research studio next spring.

The same interdisciplinary vision will inform how we admit resident researchers who apply to the program. The researchers we plan to bring into the program will include not only philosophers but designers, scientists, economists, computer scientists — many of whom are already involved in building the apparatuses that we are describing. They will work collaboratively with political scientists, artists, architects and filmmakers, all of whom have something important to contribute. To say that the program is highly interdisciplinary is an understatement.  

Gilman: Given that the Studio will integrate such an interdisciplinary group, what methodologies are you planning on using to bring these researchers together? Are there specific mechanisms of anticipation, speculation and futurity that you intend to promote?

Bratton: One of the ways in which philosophy can get in trouble is when it becomes entirely “philosophy about philosophy” and bounded by this interiority. I don’t mean to disqualify this tradition whatsoever, but I would contrast it with the approach of the Antikythera program. 

Arguably, reality has surpassed the concepts we have available at hand to map and model it, to make and steer it. If so, then the project isn't simply to apply philosophy to questions concerning computation technology: What would Hegel think about Google? What would Plato say about virtual reality? Why do the concepts we've inherited from these traditions so often fail us today? These are surely interesting questions, but Antikythera starts with a more direct encounter with the complexity of socio-technical forms and tries to generate new conceptual tools directly in relation to them. The project is to invent "small p" philosophical concepts that might give shape to ideas and cohere positions of agency and interventions that wouldn't have been otherwise possible.

“Design becomes a way of doing philosophy, just as philosophy becomes a way of doing design.”

Gilman: How does that level of interdisciplinarity work? How can people from these different backgrounds collaborate on projects if their approaches and skill sets are so different?

Bratton: All those disciplines have an analytical aspect and a projective or productive aspect. Some lean in one direction more than others, but they all both analyze and produce. Collaboration is based on the rotation between analytic and critical modes of thought, on the one hand, and propositional and speculative processes, on the other. The boundary between seminar space and studio space is porous and fluid. Seminar, charette, scenario and project all inform one another. Design thus becomes a way of doing philosophy, just as philosophy becomes a way of doing design.

Gilman: What kinds of studio projects do you foresee? By that I mean not just forms and formats, but what approach will you take to this sort of analytical and speculative design? Is it utopian? Dystopian? Something else?

Bratton: Speculative philosophy and speculative design inform one another. We recognize that some genres of speculative design are superficial, anodyne or saccharine: positive proclamations about ideal situations that are, ultimately, performative utopian wishes. They may be therapeutic, but I don't think we learn much from them.

At the same time, there is a complementary genre of speculative design that is symmetrically dystopian, based on critical posturing about collapse. It demonstrates its bona fides as a critical stance, but we also don't really learn much from it: it mostly ends up repeating things that we already know, aspects of the status quo that are already clear, and ironically ends up reinforcing them almost as dogma. It codifies an "official dystopia." For some, this can be simultaneously demoralizing and comforting, but for us that's not particularly interesting.

What we'd like to do is develop projects about which we are, ourselves, critically ambivalent. The ideal project for us is one which leaves us unsure, in advance, whether its speculations coming true would be the best thing in the world or the worst. We like projects where the more we think a project through, the less sure we are. As some might say, it is a kind of pharmakon, a technology that is both remedy and poison, and we hope to suspend any resolution of that ambiguity for as long as we can. We believe that projects that we aren't quite sure how to judge as good or evil are far more likely to end up generating durable and influential ideas.

Gilman: You’ve often argued that philosophy and technology evolve in relation to one another. Is that idea an important part of the method? 

Bratton: Inevitably, yes. One generates machines that inspire thought experiments, which give rise to new machines, and so on, in a double helix of conceptualization and engineering. The interplay between Alan Turing's speculative and real designs most clearly exemplifies this, but the process extends beyond any one person or project. Real technologies can and should not only magnetize philosophical debates but also alter their premises. For Antikythera, that is our sincere hope.

Gilman: Lastly, let me ask the question “why philosophy?” Why would something so abstract be important at a time when so much is at stake? 

Bratton: In the past half century, but really since the beginning of the 21st century, there has been a rush to build planetary-scale computation as fast as possible and to monetize and capitalize this construction by whatever means are most expedient and optimizable (such as advertising and attention). As such, the planetary-scale computation we have isn't the technological and infrastructural stack we really want or need. It's not the one with which complex planetary civilizations can thrive.

The societies, economies and ecologies we require can’t emerge by simply extrapolating the present into the future. So what is the stack-to-come? The answers come down to navigation, orientation and how intelligence is reflected and extended by computation, and how, through the mechanism, it grasps its own predicament and planetary condition. This is why the Antikythera device is our guiding figure.

The post A New Philosophy Of Planetary Computation appeared first on NOEMA.
