The Death Of The Scientist

A persistent hubris infects every age of our species’ scientific and technological development. It usually takes the form of individuals or institutions who are confident that — after thousands of years of human cultural evolution and billions of years of biological evolution — we have finally gotten to the bottom of reality. We are finally on the precipice of explaining everything.

The newest incarnation is found in discourse around artificial intelligence. Here, at least, it is acknowledged that humans, with our limited memory and information processing capacity, will never really know everything. Still, this newfound and humbler stance is supplemented with the assumption that we are the single superior biological species, the one that can build the technologies that will.

AlphaFold, an AI system developed by Google DeepMind, represents one of AI’s most celebrated achievements in science. Trained on more than 150,000 experimentally determined protein structures, AlphaFold 3 can now predict the structure of more than 200 million proteins as well as that of other biomolecules. Such scale was previously unimaginable. Earlier mathematical models could predict some features of protein structure, but nothing approaching this magnitude. The optimism is palpable: If AI can solve protein folding at this scale, what else might it accomplish?

Some proclaim that AI will solve all disease and make scientists obsolete, or even that artificial superintelligences will solve all of science. Yet many consider the protein folding problem unsolved. AlphaFold predicts 3D structures, but it does not explain the underlying physics, folding pathways or dynamic conformational ensembles. It works well for proteins made from the 20 or so amino acids found in terrestrial biology. To study proteins built from the hundreds of amino acids found in meteoritic materials, or to design novel therapeutic proteins, the model needs additional input. The limitation is not the algorithm or its scaling: The necessary data does not exist.

This tension reveals something profound about what science is, and how science defies precise definition. If we view science purely as the scientific method — observation, hypothesis, testing, analysis — then automation seems inevitable. AI algorithms demonstrably perform many, if not all, of these steps, and are getting better at them when guided by scientists.

But as philosopher Paul Feyerabend argued in “Against Method,” the very idea of a universal scientific method is misconceived. Most scientists invoke the scientific method only when writing for peer review, using it as a standardized form that allows reproducibility. Historically, scientific methods arise after discoveries are made, not before.

The question is not whether AI can execute steps in a method, but whether science generates knowledge in a way that is fundamentally something more.

If scale were all we needed, current AI would offer science a mundane solution: We could do more because we have larger-scale models. However, optimism around AI is not just about automation and scaling; it is also about theory of mind. Large language models (LLMs) like ChatGPT, Gemini and Claude have reshaped how many see intelligence, because interactions with these algorithms, by virtue of their design, give the appearance of a mind.

Yet as neuroscientist Anil Seth keenly observed, AlphaFold relies on the same underlying Transformer architecture as LLMs, and no one mistakes AlphaFold for a mind. Are we to conclude that such an algorithm, instantiated on silicon chips, will comprehend the world exactly as we do, and communicate with us in our language so effectively as to describe the world as we understand it? Or should we instead conclude that, after billions of years of the evolution of intelligence, it is perhaps easier than we thought to encode our own predictive and dynamic representational maps within such short spatial and temporal physical scales?

Consider how your own mind constructs your unique representation of reality. Each of us holds within our skulls a volume that generates an entire inner world. We cannot say this with the same certainty about any other entity, alive or not. Your sensory organs convert physical stimuli into electrical signals. In vision, photoreceptors respond to light and send signals along your optic nerve. Your brain processes this in specialized regions, detecting edges, motion and color contrasts in separate areas, then binds these fragmented perceptions into a unified object of awareness — what is called a percept — which forms your conscious experience of the world.

This is the binding problem: how distributed neural activity creates singular, coherent consciousness. Unlike “the hard problem of consciousness,” the still-open question of why we have intrinsic experience at all, binding is something we have scientific insights into: Synchronized neural activity and attention mechanisms coordinate information across brain regions to construct your unique mental model of the world. This model is literally the totality of your conscious understanding of what is real.

“The question is not whether AI can execute steps in a method, but whether science generates knowledge in a way that is fundamentally something more.”

Each of us is an inhabitant of one such mental model. What it is like to be inside a physical representation of the world, as we all are within our conscious experience, is nontrivial to explain scientifically (and some argue may not be possible).

Scientific societies face an analogous binding problem. Just as individual minds collect sense data to model the world, societies do the same through what Claire Isabel Webb, director of the Berggruen Institute’s Future Humans program, has called “technologies of perception”: Telescopes reveal cosmic depths, radiometric dating uncovers deep time, microscopes expose otherwise invisible structure, and now AI uncovers patterns in massive data.

Danish astronomer Tycho Brahe’s precise astronomical measurements, enabled by mechanical clocks and sophisticated angle-measuring devices, provided sense data that German astronomer Johannes Kepler transformed into mathematical models of elliptical orbits. Observations collected by a society across space and time, exemplified by the work of Copernicus, Brahe, Kepler, Galileo and others, came to be bound into a single scientific consensus representation of reality — a societal percept — in the form of a theory that describes what it means to move and to gravitate.

But there is a fundamental difference. Your subjective experience, what philosophers call qualia, is irreducibly private. In a very real sense, it may be the most private information of all that our universe creates, because it is uniquely and intimately tied to the features of your physical existence that cannot be replicated in anything else.

When you see the color red, a specific experience emerges from your neural architecture responding to wavelengths between 620 and 750 nanometers. I can point to something red, and you can acknowledge you are also seeing red, but we cannot transfer the actual experience of redness from your consciousness to mine. We cannot know if we share the same inner experience. All we can share are descriptions.

This is where science radically differs from experience: It is fundamentally intersubjective. If something exists only in one mind and cannot be shared, it cannot become scientific knowledge. Science requires verifying each other’s observations, building on a lineage of past discoveries and developing intergenerational consensus about reality. Scientific models must therefore be expressible in symbols, mathematics and language, because they must be copyable and interpretable between minds.

Science is definitionally unstable because it is not an objective feature of reality; it is more accurately understood as an evolving cultural system, born of consensus representation and adaptive to the new knowledge we generate.

When Sir Isaac Newton defined F = ma, he was not sharing his inner experience of force or acceleration. He created a symbolic representation of relationships between three core abstractions — force, mass, acceleration — each developed through metrological standardization. The formula became pervasive cultural knowledge because any mind or machine can interpret and apply it, regardless of how each experiences these concepts internally.

This reveals the most fundamental challenge of scientific knowledge: Our primary interface for sharing scientific ideas is symbolic representation. What we communicate are models of the world, not the world itself. Philosopher of science Nancy Cartwright argues scientific theories are simulacra; that is, they are useful fictions in mathematical and conceptual form that help us organize, predict and manipulate phenomena. Theories are cultural technologies.

When we use the ideal gas law (PV = nRT), we model gases as non-interacting points. This is not a claim that real gases are literally points with no volume that never interact; it is merely a simplification that works well enough in many cases. These simplified models matter because they are comprehensible and shareable between minds, and they are copyable between our calculating machines.
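
To make that copyability concrete, a few lines of code suffice. The snippet below is only an illustrative sketch, with arbitrarily chosen values, but any machine that runs it applies Newton’s law and the ideal gas law exactly as written, regardless of what it does or does not experience:

```python
# Illustrative sketch: two scientific simulacra encoded so that any
# calculating machine can copy and apply them. The values are arbitrary.

def newtons_second_law(mass_kg, acceleration_m_s2):
    """F = ma: force in newtons from mass and acceleration."""
    return mass_kg * acceleration_m_s2

def ideal_gas_pressure(n_mol, temperature_k, volume_m3, R=8.314):
    """PV = nRT rearranged to P = nRT / V, treating the gas as
    non-interacting point particles (a deliberate simplification)."""
    return n_mol * R * temperature_k / volume_m3

print(newtons_second_law(2.0, 9.8))              # 19.6 newtons
print(ideal_gas_pressure(1.0, 273.15, 0.0224))   # roughly one atmosphere
```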

The requirement that scientific knowledge must be shareable forces us to create simulacra at every descriptive level. Science’s intersubjective nature places strict physical constraints on what theories can be. Our scientific models must be expressible symbolically and interpretable between human minds. They are therefore necessarily abstractions: They can never fully capture reality, because no human mind has sufficient information processing and memory to encode the entire external world. Even societies have limits.

AI will also have limits.

These limits are not solely a matter of available compute power, made acute by the need for more data processing infrastructure to support the AI economy. More fundamentally, the current optimistic, and sometimes hubristic, dialogue around AI and artificial general intelligence (AGI) suggests these algorithms will be “more than human” in their ability to understand and explain the world, breaking what some perceive as limits on intelligence imposed by human biology.

“Our scientific models can never fully capture reality, because no human mind has sufficient information processing and memory to encode the entire external world.”

But this cannot be true by virtue of the very foundations of the theory of computation, and the lineages of human abstraction from which these technologies directly descend. As physicist David Deutsch writes, if the universe is indeed explicable, humans are already “universal explainers” because we are capable of understanding anything any computational system can: In terms of computational repertoire, both computers and brains are equivalently universal.

Other foundational theorems in computer science, like the no free lunch theorems by physicists David Wolpert and William Macready, indicate that when performance is averaged over all possible problems, no optimization algorithm (machine learning algorithms included) is universally better than any other. Stated another way, tailoring an algorithm to perform exceptionally well on one class of problems entails trade-offs that make it worse than average on others.
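
A toy calculation makes the trade-off tangible. The sketch below is an illustration added here, not Wolpert and Macready’s own formalism: It enumerates every possible scoring function on a tiny four-point search space and shows that a fixed search order and an adaptive strategy achieve exactly the same average performance.

```python
import itertools

# Toy illustration of the no-free-lunch idea: averaged over ALL possible
# objective functions on a tiny search space, two different search
# strategies perform identically. A sketch, not the formal theorem.

DOMAIN = [0, 1, 2, 3]     # four candidate solutions
CODOMAIN = [0, 1]         # each candidate scores 0 (bad) or 1 (good)

def fixed_order(f):
    """Evaluate points 0 and 1, ignoring what it sees along the way."""
    return max(f(0), f(1))

def adaptive(f):
    """Evaluate point 0, then choose the next point based on the result."""
    first = f(0)
    second = f(1) if first == 1 else f(3)
    return max(first, second)

def average_best(strategy):
    # Enumerate every function f: DOMAIN -> CODOMAIN (2**4 = 16 of them)
    # and average the best score the strategy finds with two evaluations.
    scores = []
    for values in itertools.product(CODOMAIN, repeat=len(DOMAIN)):
        f = lambda x, v=values: v[x]
        scores.append(strategy(f))
    return sum(scores) / len(scores)

print(average_best(fixed_order))  # 0.75
print(average_best(adaptive))     # 0.75 -- identical, as the theorems predict
```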

The physical world does not contain all possible problems, but the structure of the ones it does contain changes with biological and technological evolution. Just as no individual can comprehend everything all humans know, or will know, there can be no algorithm (AGI or otherwise) that is indefinitely better than all others.

At a deeper level, the very possibility of universal computation arises from a fundamental limitation: Universal computers can only describe computable things, never uncomputable ones — a limitation intrinsic to any computer we build. This limitation does not apply to individual human minds, only to what we share via language, and this is key to how we generate new social knowledge.

Scientific revolutions occur when our shared representational maps break down; that is, when existing concepts prove inadequate to cover phenomena we newly encounter or old ones we wish to explain. We must then invent new semantic representations capturing regularities old frameworks could not. At these times, nonconformism plays an outsized role in knowledge creation.

Consider the shift from natural theology to evolution. The old paradigm assumed organisms were designed by a creator, that species were fixed and that the Earth was young. As we learned to read deeper histories, through radiometric dating, phylogeny, selective breeding and extinction, we witnessed species change but never the spontaneous formation of new biological forms, and the old descriptions no longer fit.

Deeper historical memory forces new descriptions to emerge. Evolution and geology revealed concepts of deep time, astronomy introduced concepts of deep space, and now, as historian Thomas Moynihan points out, we are entering an age revealing a universe deep in possibility. Our world does not suddenly change or get older, but our understanding does. We repeatedly find ourselves developing radically new words and concepts to reflect new meaning as we discover it in the world.

Philosopher of science Thomas Kuhn recognized these transitions as paradigm shifts, noting how abrupt periods of change force scientists to reconceptualize how they see their field, what questions they ask, what methods they use and what they consider legitimate knowledge. What emerges are entirely new representations for describing the world, often including totally new descriptions of everyday objects we thought we understood.

Science, as Kuhn saw it, is messy, social and profoundly human. In an age when we are worried about alignment, re-alignment and alignment again with our own technological creations, paradigm shifts might best be described as the representational alignment of our societal percepts: moments when we must find new ways to keep our representations in sync with the changing structure of reality as it has been presented to us across millennia of cultural evolution.

Paradigm shifts reveal how the power of scientific thought does not lie in the literal truth of theories, but in our ability to identify new ways of describing the world and in how the structures we describe persist across different representational schemes. The culture of science helps distinguish between simulacra that approach causal mechanisms (sometimes called objective reality) and those that lead us astray. Crucially, discovering new features of reality requires building new descriptions. When frameworks fail to capture important worldly features, for example when we recognize patterns but cannot articulate them, new frameworks and representational maps must emerge.

Albert Einstein’s development of general relativity illustrates this. Seven years separated his realization that physics needed to transcend the linear Lorentz transformations of special relativity from his completion of the general theory. In his own reflections, he attributed the delay to the fact that “it is not so easy to free oneself from the idea that coordinates must have an immediate metrical meaning.” The mathematical structures imposed as models weren’t capturing meaning: They were missing features Einstein intuited must exist. Once he encoded his intuition, it became intersubjective and shareable between minds.

“Scientific ideas are not born solely of individual minds, but also of consensus interpretations of what those minds create.”

This brings us to why AI cannot replace human scientists. Controversy and debate over language and representation in science are not bugs; they are features of a societal system determining which models it wants. Stakes are high because our descriptive languages literally structure how we experience and interact with the world, forming the reality our descendants inherit.

AI will undoubtedly play a prominent role in “normal science,” something Kuhn defined as constituting the technical refinement of existing paradigms. Our world is growing increasingly complex, demanding correspondingly complex models. Scale is not all we need, but it will certainly help.

AlphaFold 3’s billions of parameters suggest parsimony and simplicity may not be science’s only path. If we want models mapping the world as tightly as possible, complexity may be necessary. This aligns with the view of the logical positivists Otto Neurath, Rudolf Carnap and the rest of the Vienna Circle: “In science there are no ‘depths’; there is surface everywhere.” If we have accurate, predictive models of everything, maybe there are no deeper truths to be uncovered.

This surface view misses a profound feature of scientific knowledge creation. The simulacra change, but underlying patterns we uncover by manipulating symbols remain, inarticulable and persistent, independent of our languages. The concept of gravity was unknown to our species before science, despite direct sensorial contact throughout human history and an inherited memory from the nearly 4-billion-year lineage of life that preceded us. Every species is aware of gravity, and some microorganisms even use this awareness to navigate. We knew it as a regularity before Newton’s mathematical description, and this knowledge persisted through Einstein’s radical reconceptualization.

Prior to Newton’s generation, Ptolemy’s model of planetary motion was the most widely adopted, as it had been for nearly 1,500 years. It posited circular orbits, and to improve its predictions, epicycles were added: Each planet moved in a small circle that itself moved in a larger circle around the Earth. Further epicycles were layered on to increase predictive accuracy, not unlike adding nodes to a machine learning model, with the accompanying risk of over-fitting.
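
The analogy can be made concrete with a small numerical experiment. In the sketch below, which uses an invented “planetary” signal rather than real astronomical data, each added epicycle contributes another pair of sine and cosine terms; the fit to past observations keeps improving while predictions of future positions typically do not.

```python
import numpy as np

# Toy numerical sketch of the epicycle/over-fitting analogy. The "planetary"
# signal is invented, not real astronomical data: each added epicycle is one
# more sine/cosine pair, fit by least squares to past observations and then
# asked to predict future ones.

rng = np.random.default_rng(0)

def observed_position(t):
    # A pretend planetary signal plus a little observational noise.
    return np.sin(t) + 0.3 * np.sin(2.3 * t) + 0.05 * rng.normal(size=t.shape)

t_past = np.linspace(0, 6, 30)      # observations used to fit the model
t_future = np.linspace(6, 12, 30)   # positions the model must predict
y_past, y_future = observed_position(t_past), observed_position(t_future)

def epicycle_features(t, n_epicycles):
    # Each epicycle contributes sin(k*t) and cos(k*t) at one more frequency.
    cols = [np.ones_like(t)]
    for k in range(1, n_epicycles + 1):
        cols += [np.sin(k * t), np.cos(k * t)]
    return np.column_stack(cols)

for n in (1, 3, 8, 14):
    X = epicycle_features(t_past, n)
    coef, *_ = np.linalg.lstsq(X, y_past, rcond=None)
    fit_err = np.mean((X @ coef - y_past) ** 2)
    pred_err = np.mean((epicycle_features(t_future, n) @ coef - y_future) ** 2)
    # Fit error shrinks as epicycles are added; prediction error typically
    # stalls or grows, which is the signature of over-fitting.
    print(f"{n:2d} epicycles: fit error {fit_err:.4f}, prediction error {pred_err:.4f}")
```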

We did not transition to the Newtonian model for its predictive power, but rather because it explained more. The modern concept of gravity was invented by this process of abstraction, and by the explanatory unification of our terrestrial experience of gravity with our celestial observations of it. It is likely that our species, and more specifically our species’ societies, will never forget gravity now that we have learned an abstraction to describe it, even as our symbols describing it may radically change.

It is this depth of meaning, inherent in our theories, that science discovers in the process of constructing new societal percepts. This cannot be captured by the surface level view, where science merely creates predictive maps, devoid of depth and meaning.

French literary critic Roland Barthes argued in his liberating 1967 essay “The Death of the Author” that texts contain multiple layers and meanings beyond their creators’ intentions. As with Feyerabend, this was a direct rebuttal “against method.” For Barthes, the method being refuted was literary criticism’s traditional practice of relying on the identity of an author to interpret an ultimate meaning or truth for a text. Instead, Barthes argued for abandoning the idea of a definitive authorial meaning in favor of a more socially constructed and evolving one.

Similarly, it can be said the scientist “dies” in our writings. When we publish, we submit work to our peers’ interpretation, criticism and use. The peer review process is currently a target for AI automation, born from a misconception that peer review is strictly about fact-checking. In reality, peer review is about debate and discussion among peers and gives scholars an opportunity to cocreate how new scientific work is presented in the literature.  That debate and cocreation are essential to the cultural system of science. It is only after peer review that we enter a method that allows reproducibility. Scientific ideas are not born solely of individual minds, but also of consensus interpretations of what those minds create.

The outputs of AI models arrive already “dead” in this crucial sense: They are produced without the embodied creative act of meaning-making that accompanies the modes of scientific discovery we have become accustomed to over the last 400 or so years. When a scientist develops a theory, even before peer review, there is an intentional act of explanation and an internal act of wrestling with intuition and its representation. AI models, by contrast, generate predictions through statistical pattern recognition, a very different process.

“Will AI transform science? Certainly. Will it replace scientists? Certainly not.”

Science and AI are cultural technologies; both are systems societies use to organize knowledge. When considering the role of AI in science, we should not be comparing individual AI models to individual human scientists, or their minds, as these are incomparable.

Rather, we must ask how the cultural systems of AI technologies and science will interact. The death of the scientist is the loss of the inner world that creates an idea, but this is also when the idea can become shared, and the inner world of the societal system of debate and controversy comes alive. When human scientists die in their published work, they birth the possibility of shared understanding. Paradigm shifts are when this leads to entirely new ways for societies to understand the world, forcing us to collectively see new structure underneath our representational maps, structure we previously could not recognize was there. 

An AI model can integrate an unprecedented number of observations. It can execute hypothesis testing, identify patterns in massive datasets and make predictions at scales an individual human cannot match. But current AI operates only within the representational schema humans give it, refining and extending them at scale. The creative act of recognizing that our maps are inadequate and building entirely new, social and symbolic frameworks to describe what was previously indescribable remains exceptionally challenging, impossible to reduce to method, and so far, uniquely human.

It is unclear how AI might participate in the intersubjective process of building scientific consensus. No one can yet foretell the role AI will play in a collective determination of which descriptions of reality a society will adopt, which new symbolic frameworks will replace those that have died, and which patterns matter enough to warrant new languages for their articulation.

The deeper question is not whether AI can do science, but whether societies can build shared representations and consensus meanings with algorithms that lack the intentional meaning creation that has always been at the heart of scientific explanation.

In essence, science itself is evolving, raising the question of what science after science will look like in an age when the cultural institution of science is radically transformed. We should be asking: If our species still craves meaning and understanding beyond algorithmic instantiation, what will science become?

Will AI transform science? Certainly. Will it replace scientists? Certainly not. But if we misunderstand what science is, mistaking the automation of method for the human project of collectively constructing, debating and refining the symbolic representations through which we make sense of reality, AI may instead foretell the death of science: We will miss the true opportunity to integrate AI into the cultural systems of science.

Science is not merely about prediction and automation; history tells us it is much more. It is about explanatory consensus, and an ongoing human negotiation of which descriptions of the world we will collectively adopt. That negotiation, the intersubjective binding of observations into shared meaning, is irreducibly social and, for now, irreducibly human.

A Roadmap To Alien Worlds

Theoretical physicist and astrobiologist Sara Imari Walker proposes that evolution and selection can operate at a planetary scale on Earth, and perhaps on worlds beyond. In her telling, a planet accretes, iterates and then — crucially — evolves, acquiring information and memory that structure material possibilities. The following is a discussion between Walker and the Berggruen Institute’s historian of science Claire Isabel Webb.

Claire Isabel Webb: The James Webb Space Telescope (JWST), launched in 2021, has revealed the magnificence of our universe as never before. A main priority of NASA’s mission is to learn more about exoplanets’ atmospheres, where evidence of extraterrestrial life might be found. What are your hopes for how this technology of perception could help astrobiologists like you characterize alien signs of life?

Sara Imari Walker: The fact that we can build instruments and technologies that allow us to see billions of years into the universe’s past is in many ways more interesting than the images and information we get from these telescopes. 

Humans are part of Earth’s physical system. While what we see through our telescopes is extraordinary, it is more extraordinary that we emerged from the geochemistry of Earth and, after around four billion years of evolution, can construct telescopes and interpret what we see.

CIW: We see collective intelligence in organisms like honeybees, which waggle information to each other; starlings that murmurate; and slime molds that coordinate chemical responses to their environment despite their lack of brains. You argue that physical systems capable of intelligence can scale to planet Earth, but also, perhaps, to planets beyond.

How would thinking of planets, including all the flora and fauna they may foster, through the lens of physics, fundamentally change how we look for life beyond Earth?

SIW: We are realizing how little we know about exoplanets, even from the features we can infer, such as simple atmospheric gases. Astronomers hope that by analyzing the spectra of these gases, we might learn something about planetary chemistry and whether it indicates the presence of life. The diversity of planets is proving to be much broader than we could have naively anticipated.

CIW: Right, it was about four years ago that astronomers characterized — but have yet to confirm — a Hycean planet: a new potential world that’s ocean-covered with a thin hydrogen atmosphere that could be conducive to life emerging. There also are sub-Neptunes, super-Earths, mini-Neptunes and mega-Earths. These neologisms speak to the fact that scientists are discovering many kinds of planets that don’t fit into the mold of our solar system’s planets.

SIW: Exactly. We have no priors for what we are seeing. Studying these worlds will raise many unknowns about alien environments and the potential biologies that could evolve there. To assume any of those exoplanets harbor life forms just like those on Earth enormously understates the theoretical possibility of alien life forms.

CIW: A possibility space is an arena where all plausible outcomes are considered, simulated and theorized; the concept also acknowledges the unknown lacunae of present knowledge. So, successfully detecting extraterrestrial life might mean we need to reframe how we currently conceptualize evidence for what even counts as “life” on Earth.

SIW: Yes. Historically, astronomy experiments that sought alien life forms fixated on detecting molecules rather than conceptualizing the life processes of an entire planet. That is, we are now starting to think about detecting entire biospheres.

But to do this, we need to develop new theories of life around the concept of complexity. By “complexity,” I mean the amount of information necessary to produce a particular set of structures; in other words, it is what determines what possibilities exist. And by “information,” I really mean causation and selection, which we formalize in assembly theory as the minimum number of contingent historical steps necessary for the observed objects to exist. How much selection and historical contingency must go into making what we observe? If the answer is significant, it suggests those features require a great deal of acquired memory and can only be produced by life.

Earth can provide a model for understanding planetary complexity. To begin to answer the question — How would one characterize our planet as a living world? — we can start with the concept that our planet has some four billion years of acquired memory. When scientists set out to characterize and then detect life, what I think we need to aim to detect is the depth in time of past states that the planet retains in its current state. This might seem like a weird way to think about it, but with it, we can then use assembly theory to follow how selection constructs entire atmospheres, and potentially detect alien life.

“To assume any of those exoplanets harbor life forms just like those on Earth enormously understates the theoretical possibility of alien life forms.”

This conceptual reorientation requires that we decouple our thinking about specific “things life constructs” (e.g., discrete units like a Lego building block) from entire systems of “life-constructing things” (e.g., an organism). There might be other information processing or intelligent systems in the universe — what we might consider “life” — and we would recognize these only because they can make things we know could not form in the absence of life. Knowing a planet’s full history over billions of years of planetary evolution is not necessary because the evolved objects themselves should be evidence of that history and whether “life” is a part of the history or not.

CIW: By a planet’s “acquired memory” then, you mean almost a Gordian knot of chemical, biological and physical data that enfolds over eons.

SIW: Yes. Because biological systems are constantly reproducing and building new structures, we tend not to realize how old some forms of life are. The interior structure of the ribosome has changed less than most rocks on this planet in the last four billion years. The lineage of sharks has been around longer than Saturn’s rings. Given the continual evolution of biological systems and the fact that scientists seek evidence of life through their physical traces, the question for astrobiologists and exoplanetary astronomers then becomes: “How do we infer processes that are deep in time simply from the structure of a planet’s atmosphere?”

We build optical telescopes because we think they are the best technologies to infer molecules that life processes produce. But the challenge is that we won’t directly “see” certain structures existing in the universe unless we know what to look for — in this case, we need to recognize an alien biosphere filtered through the lens of an atmosphere and then a telescope. To do this kind of inference, we need to better conceive of what life is, so we know what to look for.

CIW: Technologies must catch up with theories. Geologist Eduard Suess, writing in 1875, conceptualized Earth as a series of layered, interlocking spheres. The biosphere, or “Biosphäre,” as he coined it, was a layer that enshelled all life on Earth. Soviet scientist Vladimir Vernadsky, about 60 years later, developed Suess’s concept. He described the biosphere as being in a state of momentous transition: an emerging noösphere, or “the energy of human culture.” There were glimmerings of humans’ impact on Earth in Vernadsky’s writing, and it was only in the decades that followed that technologies — computers, satellite images, climate models — rendered in great scientific clarity the extent of that impact. How we look for climate change is through the technologies we’ve built to look at climate change. Of course, there is always room for surprise and serendipity.

SIW: To use an analogy, how we should look for life is somewhat akin to how we discovered gravitational waves. In 1916, Albert Einstein’s theory of general relativity predicted that the collisions of massive objects like black holes would create ripples in the very fabric of spacetime. Humans did not know how to build an instrument to measure this — an interferometer — nor did they possess the technological tools necessary to confirm the existence of gravitational waves. It took a century for us to develop the technology to make the detection. We made “first contact” in 2015 — confirming Einstein’s prediction of these waves — almost exactly 100 years later. Technology and exoplanetary insights can only work together. We did not have the technologies of perception to see gravitational waves in 1916. In 1916, cars were barely on the road!

CIW: Your analogy reminds me of my work with radio astronomers who search for extraterrestrial intelligence (like those at the SETI Institute). They make a distinction between biosignatures, such as planetary atmospheres that would indicate some form of life, and technosignatures, which are artifacts of intelligent alien technologies. Even if one is generous with the parameters of life or even “intelligence” existing beyond Earth, there’s no way to say with certainty that humans would be able to notice — let alone receive, let alone translate — a directed, intentional and meaningful communication. The interoperability — or ability of human and speculative alien transmissions to communicate effectively — is not guaranteed.

Gravitational waves, I think you’re saying, represent a different kind of epistemic endeavor. Einstein’s prediction of gravitational waves led to a century of theoretical research that allowed physicists to precisely predict the shape of the “chirp” of two black holes colliding — they characterized a disturbance in spacetime on the order of one-ten-thousandth of the diameter of a proton! Theory came first. Experiments to support such theory followed. In SETI, astronomers are developing experiments of expectation where the object is not guaranteed, let alone characterized with any theoretical clarity.

“The interior structure of the ribosome has changed less than most rocks on this planet in the last four billion years.”

SIW: Yes, this is exactly the challenge. Compared to life processes, predicting and detecting gravitational waves is a fairly simple problem. We do not have the right abstractable concept or theory to talk about extraterrestrial life, let alone alien intelligence. How are we going to possibly know we have the right technology or framework to see complex biological features in the universe or perceive alien signals? The coupling between how we build and use technology and how we conceptualize life is fundamental, yet unanswered.

So, some of JWST’s data might already indicate biosignatures in the composition of atmospheric chemistry. But I propose a conceptual reframing of how we even begin to interpret that data: We need to understand molecules’ presence as products of the collective evolution of living worlds, not as individual units. That’s where assembly theory, an explanation for life first developed by chemist Leroy (Lee) Cronin, comes in. It allows us to analyze the structures of molecular bonds, and the recurrence of certain bond structures, as indications of how much minimal acquired memory is necessary for a given chemical system to emerge.

CIW: Can you walk me through an example of how that works at the molecular level?

SIW: Basically, we take the molecule apart and try to rebuild it by taking those constituent parts and joining them back together. Our goal is to discover the shortest possible route, only reusing parts we have already made. One can imagine doing something similar with Lego. Say one had a Lego castle, smashed it to pieces, and then asked how many steps are necessary to rebuild it — with the stipulation that the builder can only use things the builder has already built. This constraint bounds the minimum causation necessary for evolution to discover the object. And our hypothesis, which has so far stood up to experimental tests of assembly theory applied to molecules in the lab, is that some objects require enough minimum causation that they can only be produced by life.
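
To make the counting concrete, here is a toy sketch in code. It uses strings of letters instead of molecules or Lego bricks (assembly theory proper is defined over molecular bonds), and it exhaustively searches for the shortest way to build a target while reusing anything already built:

```python
# Toy sketch of the assembly-index idea using strings instead of molecules
# or Lego bricks. We count the minimum number of joining steps needed to
# build a target, reusing any fragment already built along the way.
# Exhaustive search: suitable only for toy-sized examples.

def assembly_index(target):
    basic_units = set(target)      # the "building blocks": single letters
    best = [len(target)]           # loose upper bound on the answer

    def search(built, steps):
        if steps >= best[0]:       # cannot beat the best pathway found so far
            return
        if target in built:
            best[0] = steps        # found a shorter assembly pathway
            return
        available = built | basic_units
        for a in available:
            for b in available:
                candidate = a + b
                # Only build fragments that actually appear in the target.
                if candidate in target and candidate not in built:
                    search(built | {candidate}, steps + 1)

    search(frozenset(), 0)
    return best[0]

# "BANANA" can reuse the fragment "NA", so it needs fewer joins than
# building one letter at a time; "ABCDEF" has nothing to reuse.
print(assembly_index("BANANA"))   # 4
print(assembly_index("ABCDEF"))   # 5
```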

CIW: So, when a system gains sufficient complexity, life can assemble itself.

SIW: To see the holistic structure of what we really think “life” is, we need new ways of seeing. We might not use existing, familiar technologies of perception to detect extraterrestrial life because we don’t yet understand the full complexity of a planet’s total life processes.

We also need to be very careful not to overestimate the connection between the materials of life processes and the technologies that life processes produce. As Lee [Cronin] likes to point out, the social media app TikTok will not exist anywhere else in the universe; we don’t expect a technology that evolved on Earth to have evolved on every planet with life. This is because we implicitly recognize that humans are embedded in a particular technological space, which is a product of our biology, which itself is a product of geochemical events that happened on our planet an estimated 3.8 billion years ago — it is all contingent. But for some reason, when we look at biochemistry, we choose to talk about life’s complex processes in a way that implies these processes emerge linearly as individual objects out of a singular planetary condition rather than realizing biochemistry is a complex, iterative, interactive invention of deeply complex planetary systems.

CIW: Good to know that Earth is the only planet with TikTok! But you’re saying we should think of TikTok as a result of Earth’s complex systems that can be unwound to the molecular building blocks of life as we know it. Given the awesome number of combinations that can be made on an atomic level and the many events that led to humans making TikTok, telescopes and satellites, the number of possible systems that created intelligent life is enormous.

SIW: Yes, it is enormous! And that is why I am excited about assembly theory: It allows us to formalize how big the space is that must select for a given object to exist. The mystery that I and other astrobiologists are trying to sort out is not only how signs of life might have initially emerged out of a particular planetary geochemistry, but also how the awesome diversification of the structures of life has been elaborated over billions of years.

Of course, there are constraints. Everything follows the laws of physics, and we expect those laws to impose universal constraints on how biologies and technologies get invented. For instance, we can presume that all flying creatures will have a winglike structure — but the particular details of those structures, like what they are made of, their precise shape and how they emerged and then evolved among species, varied enormously based on the specific historical context under which they emerged.

“We need to understand molecules’ presence as products of the collective evolution of living worlds, not as individual units.”

CIW: Convergent evolution describes how a pterodactyl and a bat both have wings, but those structures were born out of completely different evolutionary pathways.

SIW: Yes. In the same way, conceptualizing a planet’s acquired memory means expanding the definitions of what signifies life and the tools necessary to find it. Astrobiology needs to move beyond analyzing the details of molecular structures to analyzing macro-scale patterns that might really be universal signatures of life.

CIW: What you’re saying is that we need to understand chemistry in an exoplanetary atmosphere as the result of a global system — not as just some atoms bonding together. A bird’s wings are the result of a great chain of processes that have complexified themselves over billions of years. Wings are a phenomenon we see on Earth because they’re the result of evolution and selection that emerged from a planetary system of life. So, selection processes on Earth produced life, which produced intelligence and then technology.

SIW: And that intelligence is not only observable at an individual level, like a human doing math, but at a collective level, like humans producing AIs. That process of complexification scales to planetary scale living and intelligent processes.

CIW: Intelligence is a trace of complex objects that can be observed at the planetary level. The planet embeds a material lineage that tells us how complex objects — life — can assemble themselves into other complex objects, reflexively and recursively iterating. Earth has had enough time to build a memory that includes life, and this life includes technologies.

I am curious: What future observations might be evidence of planetary scale knowledge — its acquired memory? SETI scientists I worked with were generally leery of indulging in detailed speculation about the natures of alien beings. Given our limited knowledge, it’s fun, but perhaps not scientifically useful, to imagine if aliens have fur, or 10 eyes or can operate in the sixth dimension. We can only use the present technologies to search very narrowly for a range of radio frequencies that would indicate alien technology — not some cosmological brain-scanning device that would detect alien “intelligence.”

But indulge me for a moment: How might one design a futuristic successor instrument to the JWST that would search not merely for the presence of molecules but also be equipped to search for a concept such as intelligence by assessing an entire planet’s geological, biological and chemical structures?

SIW: I think in terms of radical abstraction about the nature of life. If we could build new technologies of perception that would see the world in terms of causal structure, it would be very easy to pick out objects and entities that would be “alive” — possessing a deeper causal structure of “liveliness.”

A planet that evolved a technosphere — a series of distinct and integrated systems, like satellites and spacecraft — is more “alive” than one with a biosphere. That’s because the amount of causation that goes into assembling a technosphere is much higher. It has a much larger causal depth and, therefore, exists as an object that is deeper in time on a planet.

CIW: And one can calculate that causal depth using assembly theory.

SIW: Yes. Assembly theory is a mathematical description of life and its objects. But we hope to generalize assembly theory to all kinds of materials structured by life. It is not clear how speculative instruments might translate to measuring complexity through direct physical observations, but I think a key step will be inventing a new technology (e.g., a theory, like assembly theory in this case) that can help us see causal structure.

Detecting planetary life processes — either through direct observation or a conceptual framework — is difficult. From space telescopes, we are only getting photons from exoplanets rendered as spectra. While this can tell us a lot about a planet’s size and even atmospheric composition, building an instrument to characterize a planet’s living complexity is not straightforward. What kinds of measurements would we even take? Can we do so remotely? We are making a lot of headway on this, thinking about how to make inferences based on our observations of the diversity of bonds and elements in an atmosphere, which tell us something about its assembly.

CIW: So, it is not really a question of gathering enough information to eventually be able to count a planet as one with deep, acquired memory, or of compiling enough spectra to understand that data in this new way.

“If we see a sufficiently complex atmosphere — one that required a sufficient amount of time to produce — that might be the smoking gun of a biosignature.”

SIW: Right. We cannot just use information theory or computational language to detect life, because those depend on human-derived data labeling systems. We need a new paradigm where the contingency in the matter we observe and the computation of its complexity are the same. In assembly theory, we do this by treating objects as “informational”: Objects are made up of the operations the universe uses to build them as an intrinsic property, meaning different objects require different amounts of memory and, consequently, have different depths in time. Therefore, we should expect to require varying amounts of acquired memory for these to ever appear at a given time in the universe. To detect this from atmospheric data, we require a leap to the perspective of thinking about an atmosphere as a complex system assembled by evolution and selection.

CIW: Let us bring the complexity question of exoplanets back to the familiar context of Earth — indeed, the only place in the universe where we know life exists. Would one have to compile millions of years of atmospheric spectra over time, generating different timestamps for the evidence of evolving biological processes? Would one also have to journey to the bottom of the ocean to calculate the assembly index of sharks’ teeth to plug into a holistic theory of the emergence of life on Earth?

SIW: Right now, I am just not sure how much we can infer about life from atmospheric data. That is because the objects we are interested in inferring exist at such a large temporal scale, and most of the molecules in an atmosphere are not deep in time objects.

On the other hand, objects that life uniquely produces are immensely large in time. In what we call assembly time, we can stack all objects by the minimal number of physical operations necessary to build them and define a boundary between what can be produced anywhere and what requires a living (evolving) trajectory. But this requires us to assume time is an intrinsic feature of all objects. Humans are very large in time: In clock time, plants and humans are 3.8 billion years old, because everything living on this planet has parts that extend that far back.

CIW: I have never heard anyone describe objects as being “large” (a physical phenomenon) in “time” (an immaterial phenomenon).

SIW: Sometimes doing new physics requires defining what is material in new ways. To talk about planetary atmospheres in an explanatory way that allows us to theorize about life, we might measure how large the atmosphere is in time. So, if we see a sufficiently complex atmosphere — one that required a sufficient amount of time to produce — that might be the smoking gun of a biosignature. It would be a definitive sign indicating that the planet possessed some kind of life.

But the problem is that volatile gases — ones present in the atmosphere of Earth, where we know life exists, and in environments where we think life cannot exist — tend to be composed of very simple molecules. So, to detect “life,” we have to make many other inferences, like observing how molecules interact and how they together indicate complex processes of life. That is, one might see evidence in the total set’s composition that indicates an object much larger in time than any number of individual molecules.

I am proposing that we look at the whole composition of a system. That will allow us to understand a planet’s memory depth — evidence of its evolutionary history — that would have produced a total atmospheric composition we can observe. 

Many people are not optimistic that we have a scientific pathway for inferring the presence of life on exoplanets. I am not sure where I land on that question. But what I am doing now with Lee [Cronin] is working toward a large-scale project that will allow us to observe the emergence of alien life in the lab — that is, to generate an origin-of-life event from scratch.

We need to do this by building a “planet simulator.” It cannot be a computational experiment. It must be physical for two reasons: (1) the computations to simulate life are more efficient when implemented in the real universe, and (2) we do not know all the relevant physics to simulate them, so we must run the experiments in reality, using chemistry. The technology now exists to do this at scale, and we have a theory that will allow us to guide our search. The best way for us to demonstrate the principles that will allow us to discover alien life is to do the right kinds of theory-driven experiments here on Earth.

The profound question I want to answer in my lifetime is this: Can we evolve truly alien life — and perhaps intelligence — in the laboratory?

Editor’s Note: This interview has been edited for clarity and length.

AI Is Life

Toward the end of August 1924, the orbits of Mars and Earth carried the two sister planets closer to each other than they had been in around a century. Enthusiasm for the event spread across the United States. An article in the New York Times anticipated that astronomers “may definitively solve the question whether Mars is inhabited.” The government requested five minutes of complete radio silence, on the hour every hour, across the nation over the days when the planets were closest to one another, with the hope that this radio silence would increase our chance of detecting any signals broadcast by Martians. 

No message came.

As long as we have looked toward worlds that might be among the stars, we have hoped for and assumed life would be on them. Disappointment and shock greeted the news that no observable life was revealed by the first images of the surface of Mars. In the decades since, we have grown accustomed to images of other barren worlds.

But could we recognize life if it is really out there? We are embedded in a living world, yet we do not even recognize all the life on our own Earth. 

For most of human history, we were unaware of the legions of bacteria living and dying across the surface of everything in our environment — even within us. It took the technological innovation of the microscope in the late 16th century for us to finally see a microscopic world teeming with life. The first indication we had of viruses was the cryptic patterns of the infectious diseases they cause, but their existence was only confirmed in the late 19th century. We also did not know about the ecosystems thriving near hydrothermal vents in the darkest depths of the ocean floor until the second half of the 20th century, when submarines that could withstand intense pressures got us close enough to observe them.

“Attempts to define life have so far failed because they focus on containing the concept of life in terms of individuals rather than evolutionary lineages.”

The discovery of new forms of life requires the advent of technologies that allow us to sense and explore the world in new ways. But almost never do we consider those technologies themselves as life. A microbe is life, and surely a microscope is not. Right? But what is the difference between technology and life? Artificial intelligences like large language models, robots that look eerily human or act indistinguishably from animals, computers derived from biological parts — the boundary between life and technology is becoming blurry. 

A world in which machines acquire sufficient intelligence to replace biological life is the stuff of nightmares. But this fear of the artificiality of technology misses the potentially far-reaching role technologies may play in the evolutionary trajectories of living worlds. 

Complex (technological) objects do not just appear spontaneously in the universe, despite popular folklore to the contrary. Cells, dogs, trees, computers, you and I all require evolution and selection along a lineage to generate the information necessary to exist. 

Here on planet Earth, this is evident even in the rocks: Mineral diversity has co-evolved with life, for example through the process of biomineralization, in which organisms produce minerals to strengthen shells or skeletons or accomplish some other goal. The global rock record literally includes the fossilized remains of the history of life, because life has altered the geosphere so markedly. Because of this, we expect worlds with no life will have different compositions than the Earth does, even in the nonliving materials that compose them. 

Many of us would not recognize mineral diversity as “life” any more than we would the computer screen or magazine you are reading this text on as “life,” but these are products of a sequence of evolutionary events enacted only on Earth. This is as true for a raven as it is for a large language model like ChatGPT. Both are products of several billion years of selective adaptation: Ravens wouldn’t exist without dinosaurs and the evolution of wings and feathers, and ChatGPT wouldn’t exist without the evolutionary divergence of the human lineage from apes, where humans went on to develop language. 

Attempts to define life have so far failed because they focus on containing the concept of life in terms of individuals rather than evolutionary lineages. Invariably, something is included or excluded from the category of “living” that probably should not be. If you draw the line at self-reproducing or self-sustaining, viruses or parasites are excluded. (Viruses are often cited as a boundary case for exactly this reason.) If you draw the line based on the consumption of energy, fire can reasonably make the cut. Other definitions face similar problems. A popular one first developed by a NASA working group — “life is a self-sustaining chemical system capable of Darwinian evolution” — at first seems innocuous enough. But on closer conceptual inspection, it faces these same pitfalls. Only populations evolve — individuals do not. And it raises a question rather than providing an answer: Must all life rely on chemical reactions to exist?

“Technology, like biology, does not exist in the absence of evolution. Technology is not artificially replacing life — it is life.”

To move beyond these circular debates, we need to get past our binary categorization of all things as either “life” or “not.” We should not exclude examples based on naive assumptions about what life is before we develop an understanding of the deeper structure underlying the phenomena we colloquially call “life.” 

Consider the discovery of the nature of motion. When physicists talk about motion, the details of different examples of moving things are of no concern. Color, size, texture, age — none of that is important for calculating how objects move through space. We only care about mass, position and velocity (plus derivatives). Realizing that moving things can be described in terms of just a few observables was a huge conceptual leap made by our species. In the words of Isaac Asimov, “We all know we fall. Newton’s discovery was that the moon falls, too — and by the same rule that we do.”

All motion, whether here on Earth or on the other side of the observable universe, can be described in the same way. This discovery — this development of the laws that form the abstract description of motion — unified what happens terrestrially with what happens celestially. Before we understood motion at such a level of depth and abstraction, we had no idea the heavens were governed by the same laws as the Earth.

Just as our ancient ancestors could not have expected the same rules governing motion here on Earth to also apply to the heavens, a deep abstract structure underlying life need not conform to our current expectations. While there are many features of life that may be observed across many examples, such as replication or metabolism, these are not entirely universal — each has exceptions. 

In the search for a deep abstract mathematical framework that explains life, we are looking for the features we expect all life in the universe to share, whether it is here on Earth or anywhere else in the universe we might find it. Once we developed universal laws for motion, we were able to predict properties of moving objects we have not yet observed. In the same way, if we identify the “laws of life,” we should be able to predict the properties of alien examples. And just as with motion, we will have to ignore many details to get at a more universal and therefore deeper understanding. 


Technology Evolved

Our best estimates place the origin of life on this planet at approximately 3.8 billion years ago. Biological beings alive today are part of a lineage of information that can be traced backward in time through genomes to the earliest life. But evolution produced information that is not just genomic. Evolution produced everything around us, including things not traditionally considered “life.” Human technology would not exist without humans, and it is therefore part of the same ancient lineage of information that emerged with the origin of life.

Technology, like biology, does not exist in the absence of evolution. Technology is not artificially replacing life — it is life. 

It is important to distinguish what is meant here by “life” from what is meant by “alive.” By “life,” I mean all objects that can only be produced in our universe through a process of evolution and selection. Being “alive,” by contrast, is the active implementation of the dynamics of evolution and selection. Some objects — like a dead cat — are representative of “life” (because they only emerge in the universe through evolution) but are not themselves “alive.”

To understand life may therefore require us to unify the biological and technological, akin to how the celestial and terrestrial were unified in our explanations for motion. 

The canonical definition of technology is the application of scientific knowledge for practical use. Historically, where philosophy and technology intersect, the goal has been to apply old philosophical ideas to understand new technology. However, as the philosopher of mind David Chalmers has pointed out, in the area of techno-philosophy this logic can be inverted: Technology can be used as a new lens through which to revisit old questions in philosophy.

We can also ask what new insights might be gained by taking a broader, non-human-centric view of what constitutes technology, and how this can be used to reinvestigate old questions in philosophy and biology alike. Technology relies on scientific knowledge, but scientific knowledge is itself information that emerged in our biosphere. It makes things possible that would not be possible without it.

“To understand life may require us to unify the biological and technological, akin to how the celestial and terrestrial were unified in our explanations for motion.”

Consider satellites. Launching them into space would not have been possible on our planet without Newton’s invention of the laws of gravitation. Newton himself could not have invented those laws if, centuries earlier, humanity had not come to understand the mathematics of geometry or constructed timekeeping devices that allowed us to track seconds. And of course, none of this could have happened if our biosphere had not evolved organisms capable of making abstractions like these in the first place. 

Once knowledge of the laws of gravitation became encoded in our biosphere, new technologies were made possible, including satellites. Satellites are not launched from dead worlds or from worlds with only microbial life. They require a longer evolutionary trajectory of information acquisition. You can trace that lineage within the history of our species, but arguably it should be traced all the way back to the origin of life on Earth.

Technology, in the broadest sense, is the application of knowledge (information selected over time) that makes things possible that would not be possible in the absence of that knowledge. In effect, technologies emerge from what has been selected to exist. They are also what selects among possible futures — and builds them. Consider robust carbon removal technology, which could change the future evolutionary trajectory not just of humans but of a huge diversity of species on Earth.

We are accustomed to thinking of technology as uniquely human, but under this broader definition there are many examples across the biological realm. Just as the objects of life might include pencils and satellites, so too might technology include wings and DNA translation. Photosystems I and II — multiprotein complexes found in plants and other photosynthesizing creatures — harvest photons and use their energy to catalyze reactions. As evolutionary innovations, these technologies radically changed the climate of Earth in the Great Oxidation Event, a period about 2.5 billion years ago when cyanobacteria produced a great deal of atmospheric oxygen, which contributed to later conditions supportive of multicellular life.

People might want to differentiate between biological evolution and the intentionality of humans when we build technologies. After all, software developers and companies choose to produce technology in a different way than ravens evolved wings to fly. But both fundamentally rely on the same principles of selection. 

Arguably, the kind of selection humans do is much more efficient than natural selection on biological populations. It is more directed, which is only possible because we ourselves are already structures built across billions of years. We are bundles of possibilities refined by evolution, embodying the history of how we came to exist. The physics governing how we select what we create may be no different (other than in degree of directedness) from the physics by which we were selected by evolution. We are, after all, a manifestation of the very physics that allowed us to come to be.


Biological Innovations Are Technologies

Roughly 3.8 billion years ago, some of the most ancient technologies on our planet were first invented. Among them is the chemistry of translation.

Translation allows information stored in the sequences of DNA to be read out by the translation machinery of the cell to produce specific protein sequences. A universal code had evolved, digitally encoded in the sequence of nucleobases and used by all organisms (with minor variations), which allowed genes from one organism to be shared with another and retain their meaning. This technology is so robust that it has persisted for almost 4 billion years and is part of nearly everything alive on this planet right now. No technology yet invented by humans will last that long, although if we come to understand what we are, something might.
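To see the lookup-table character of the code, here is a deliberately toy sketch (the tiny codon subset, the function and the example sequence are illustrative choices, not a model of the cell’s actual machinery): each three-letter DNA codon maps to one amino acid, so any reader that shares the table recovers the same protein from the same gene.

```python
# Toy illustration of the genetic code as a shared lookup table.
# Small subset of the standard code (DNA codons -> amino acids).
CODON_TABLE = {
    "ATG": "Met",  # also the usual start signal
    "TTT": "Phe", "TTC": "Phe",
    "GGT": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
    "AAA": "Lys", "AAG": "Lys",
    "TGG": "Trp",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna: str) -> list[str]:
    """Read a DNA sequence three letters at a time, returning amino acids
    until a stop codon is reached."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE.get(dna[i:i + 3], "?")  # "?" = codon missing from this toy table
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("ATGTTTGGAAAGTGGTAA"))  # ['Met', 'Phe', 'Gly', 'Lys', 'Trp']
```

The robustness described above lives in the table itself: because (with minor variations) every organism reads codons the same way, a gene moved from one organism to another still translates to the same protein.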

Far earlier than humans, it was the biosphere that invented many technologies. Over billions of years, the innovations of sight and hearing, among many others, emerged through evolution and selection. We do not know exactly what the Earth looked like when life first emerged. In fact, neither did the life that existed at that time. Nothing alive then could see. The evolution of photon receptors and eventually eyes relied on many other innovations previously made over a great deal of time by single-cell organisms. Multicellular creatures like mammals, which rely on about 70 different specialized cells to see, further advanced the technology of sight, but only by building on what came before. The mantis shrimp evolved perhaps the most complex multicellular eye: It has compound eyes that move independently and have up to 16 color receptors (our eyes have three). 

The history of life on Earth is full of new and better organisms developing technologies by innovating on what came earlier, all the way back to the deep history of ancient life. A key feature of life is this evolutionary contingency: New objects only come into existence because there is a history that supports their formation. Multicellular eyes could not evolve before cells with photon receptors any more than ChatGPT could evolve before human language — both rely on previous developments in a lineage of evolving technology.

The technologies we are and that we produce are part of the same ancient strand of information propagating through and structuring matter on our planet. This structure of information across time emerged with the origin of life on Earth. We are lineages, not individuals. 

“The technologies we are and that we produce are part of the same ancient strand of information propagating through and structuring matter on our planet.”

Human technologies are therefore not much different from other innovations produced in our planet’s 3.8-billion-year living history — with the exception that they are in our evolutionary future, not our past. Multicellular organisms evolved vision; what I will call “multisocietal aggregates” of humans evolved microscopes and telescopes, which are capable of seeing into the smallest and largest scales of our universe. Life seeing life. All of these innovations are based on trial and error, selection and evolution acting on past objects.

Intelligence is playing a larger role in modern technology, but that is to be expected — intelligence itself improves via evolution. Evolution generates ever more complex systems — cells, multicellular aggregates like humans, societies, artificial intelligence and now multisocietal aggregates like international companies and groups that interact at the planetary scale. So-called “artificial intelligences” — large language models, computer vision, automated devices, robotics and more — are often discussed as disembodied and disengaged from any evolutionary context. But the technologies we are inventing today represent the recapitulation of life’s innovations into new substrates, and these are allowing the emergence of intelligent life at a new scale — the planetary. There is no “intelligence” in isolation; rather, complex ecosystems of technologies interact with biology to bring about new capabilities.

First came cells with photon receptors, then eyes, then microscopes and telescopes. Now, we are in the midst of another transition from the biological to the technological: We are using algorithms to interpret data and “see” the world for us.

It is a bit like how brains had to co-evolve to process the information gathered by eyes. How we think is another innovation evolved over billions of years, which is just now being recapitulated at a larger scale than our individual brains. We need to evolve technologies to process the huge amounts of data we are receiving and generating so we can “see” the world as a planet. 

The technology of computation first emerged when human brains, themselves the product of billions of years of evolution, attempted to build a mathematical abstraction that captured the structure of human thought. Just as we outsource some of our sensory perceptions to technologies we built over centuries, we are now outsourcing some of the functioning of our own minds. This allows the same principles that operate within us to function now at higher levels of organization, moving up from localized societies to global ones.


AI Is A Major Transition In Planetary Evolution

James Lovelock and Lynn Margulis’s Gaia hypothesis — that living organisms interact with the Earth to produce a self-regulating complex system that maintains conditions favorable for life — is sometimes interpreted to mean that the Earth itself is alive. Margulis and Lovelock’s insight was to recognize that, over eons, living organisms (trees, for example) produce gases that affect the atmosphere, warming or cooling the surface of the Earth to keep it within a range conducive to life. Other researchers have noticed that, at certain times in Earth’s history (the human-driven warming of today’s climate crisis being the latest example), life has failed to maintain this careful balance, leading to large-scale extinctions.

But we have yet to conceptualize the implications of the Gaia hypothesis because we don’t yet understand what life is. 

A challenge is that biological modes of evolution do not apply to biospheres. We do not yet fully understand what evolution is doing at the planetary scale. We do know that the history of individual organisms within the biosphere has gone from the simple to the more complex (though this is not true along every lineage). Prokaryotes are “simple” — mostly single-celled life with no internal organelles. “Complex” life evolved as cells became more structured, with specialized components inside and out, allowing multicellular life and tissues with specific functions.

Individual multicellular systems then formed societies. In human societies, we went on to evolve language. As the evolutionary biologists Eörs Szathmáry and John Maynard Smith have pointed out, each of these major evolutionary transitions has been associated with new modes of information transfer and storage. The multisocietal aggregates that have only very recently emerged on this planet are made possible through the interaction of linguistic societies. 

A natural extension of this evolutionary history is to recognize that “thinking” technologies may represent the next major transition in the planetary evolution of life on Earth. It is what we might expect as societies scale up and become more complex, just as life simpler than us has done in the past. The functional capabilities of a society have their deepest roots in ancient life, a lineage of information that propagates through physical materials. Just as a cell might evolve along a specific lineage into a multicellular structure (something that’s not inevitable but has happened independently on Earth at least 25 times), the emergence of artificial intelligences and planetary-scale data and computation can be seen as an evolutionary progression — a biosphere becoming a technosphere.

“The emergence of artificial intelligences and planetary-scale data and computation can be seen as an evolutionary progression — a biosphere becoming a technosphere.”

One example of planetary-scale computation is the global monitoring of planetary health, provided the data can be used to respond adaptively. Another is large language models, whose training requires the global integration of massive amounts of language data.

The Gaia hypothesis was intended to conceptualize how life has established feedback loops with the planet that allow it to maintain itself over time. It did not address the hierarchy of complexity that life evolves over time — that is, the major transitions of life recurring across scales, from molecular to cellular, to multicellular, to societal, to multisocietal, to planetary.

If life is truly a planetary phenomenon, we should expect to see the same features recurring across time at new levels of organization as they gradually scale up to the planetary. What is emerging now on Earth is planetary-scale, multisocietal life with a new brain-like functionality capable of integrating many of the technologies we have been constructing as a species over millennia. It is hard for us to see this because it is ahead of us in evolutionary time, not behind us, and therefore is a structure much larger in time than we are. Furthermore, it is hard to see because we are accustomed to viewing life on the scale of a human lifespan, not in terms of the trajectory of a planet. 

Life on this planet is very deeply embedded in time, and we as individuals are temporary instances of bundles of informational lineages. We are deeply human (it took 3.8 billion years to get here), and this is a critically important moment in the history of our planet, but it is not the pinnacle of evolution. What our planet can generate may just be getting started. In all likelihood, we are already a few rungs down in the hierarchy of informational systems that might be considered “alive” on this planet right now.

We are 3.8-billion-year-old lineages of information structuring matter on our planet. We need to recognize that our world teems with life, and also that life is what we are evolving into. It is only when we understand ourselves in this context that we have any hope of recognizing whatever life, currently unimagined and evolving along radically different lineages, might already exist or that we might yet generate to co-evolve with us.
