Life Need Not Ever End
Bobby Azarian | Noema Magazine | Feb. 28, 2023
https://www.noemamag.com/life-need-not-ever-end

Perhaps the most depressing scientific idea that has ever been put forth is the infamous “heat death hypothesis.” It is a theory about the future of the universe based on the second law of thermodynamics, which in its most well-known form states that entropy, a complicated and confusing term commonly understood to simply mean “disorder,” tends to increase over time in a closed system. Therefore, if we consider that the universe is itself a closed system, the law seems to suggest that the cosmos is becoming increasingly disorganized. It has also been described by many as “winding down.”  

As such, the second law appears to hold a chilling prophecy for humanity in the very long term. Essentially, it would seem to imply that life is doomed — not just life on Earth, but life anywhere in the cosmos. Consciousness, creativity, love — all of these are destined to disappear as the universe becomes increasingly disordered and dissolves into entropy. Life would merely be a transient statistical fluctuation, one that will fade away, along with all dreams of our existence having some kind of eternal meaning, purpose or permanence. The heat death prophecy foretells a future where all pattern and organization have ceased to be. In this cosmological model, everything must come to an end; there is simply no possibility of continual existence.

Fortunately, the gloomiest theory of all time may just be a speculative assumption based on a misunderstanding of the second law of thermodynamics. For one thing, the law may not be applicable to the universe as a whole, because the types of systems on which it has been empirically tested have well-defined boundaries. The expanding universe does not. Secondly, depending on how one interprets the second law, the inevitable increase in entropy may not correspond to an increase in cosmic disorder. 

In fact, some leading scientists are beginning to think that the cosmos is becoming increasingly complex and organized over time as a result of the laws of physics and the evolutionary dynamics that emerge from them. Seth Lloyd, Eric Chaisson and Freeman Dyson are among the well-known names who have questioned whether “disorder” is increasing in the cosmos. Outside of physics, complexity theorist Stuart Kauffman, neuroscientist Christof Koch and Google’s director of engineering Ray Kurzweil all believe that the universe is not destined to grow more disorganized forever, but more complex and rich with information. Many of them have a computational view of the universe, in which life plays a special role.

As Paul Davies, a prolific author and a highly respected theoretical physicist, wrote: “We now see how it is possible for the universe to increase both organization and entropy at the same time. The optimistic and pessimistic arrows of time can coexist: The universe can display creative unidirectional progress even in the face of the second law.” In other words, if we understand the second law better, we can see that it does not actually prohibit the continual growth of complexity and order in nature.

“Essentially, the heat death hypothesis seems to imply that life is doomed — not just life on Earth, but life anywhere in the cosmos.”

This is the cosmic narrative that the theoretical physicist and author Julian Barbour proposes in his new book “The Janus Point: A New Theory of Time,” which has received praise from some trusted names in the physics world, such as Martin Rees, Sean Carroll and Lee Smolin. Barbour believes that the second law — at least as it is popularly interpreted — does not apply to the universe as a whole, since it is always expanding due to the mysterious force known as dark energy. The old story of increasing cosmic disorder, Barbour concludes, may turn out to be the complete opposite of what is actually happening. Because the universe is not a bounded system, order can continue to increase indefinitely.

Barbour is not alone. David Deutsch, the father of quantum computation, has expressed a similar view in his bestselling mindbender “The Beginning of Infinity,” in which he argues that there are no fundamental limits to knowledge creation. This is a much stronger claim than Barbour’s, because it specifically suggests that life in the universe need not come to an end. 

Life is a crucial part of the cosmic story because the growth of complexity and organization enters a new phase when biology emerges. Life is a special form of complexity: It has the ability to create more complexity and to maintain organization against the tendency toward disorder. In a universe expanding without limit, the ability of intelligent life to continually construct complex order may not be limited by the laws of thermodynamics in the way once imagined.

This story of continual complexification would seem to go against the second law, a rock-solid pillar of physics. Remember, though, that both the first and second laws of thermodynamics were conceived before we knew the universe was expanding. To understand if these laws are applicable to the universe as a whole — and not just systems inside the universe — we must briefly explore the history of thermodynamics and understand its relationship with the phenomenon we call life. 

Doom

In the two fields of thermodynamics — classical and statistical — there are subtly different versions of the second law. The former emerged about half a century before the latter, and it was concerned with the flow of heat and energy. Statistical thermodynamics attempted to explain the findings of classical thermodynamics in terms of the behavior of ensembles of molecules and atoms, and it was more concerned with how configurations of particles evolve over time.

You could say that the original version of the second law, the classical version, was about the spreading out of energy, whereas the statistical version was more about ordered configurations of particles becoming more disordered. While the two versions are intimately related and in many instances become equivalent, they do not have the same cosmic implications.

The ideas that would become the second law can be traced back to the work of the French engineer Sadi Carnot in the early 1800s. Carnot wanted to understand how to make steam engines more efficient by analyzing how they used energy. He recognized that heat would spontaneously flow from hotter to colder systems, but never in reverse. We all experience this phenomenon on a daily basis, whenever a hot bath or cup of coffee inevitably cools to room temperature as heat is lost to the surrounding air. Carnot pointed out that this flow of heat creates a motive force, which can be harnessed to power machines. Through a cycle of heating and cooling steam inside a chamber (known as a cylinder) with a movable wall on one side (known as a piston), you can create a force of motion that can power an engine. 

What Carnot astutely noticed about this process was that it couldn’t be made 100% efficient. This is the basis for the original second law of thermodynamics. The conversion of thermal energy into mechanical energy always involves the loss of some useful energy to the environment in the form of heat. Once this useful energy is dissipated, meaning it gets spread and lost to its surroundings, it can no longer be harnessed to do physical work. The lost energy still technically exists somewhere out there in the universe, but it can’t be extracted to do anything useful, like sustaining an engine or some other machine. Since life is a machine of sorts, this has implications for how long it can persist in the universe.  
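Carnot's insight has a famously simple quantitative form, worked out later in the century: no engine operating between a hot reservoir at temperature T_hot and a cold one at T_cold can convert more than a fraction 1 − T_cold/T_hot of the heat it takes in into work. A minimal sketch (the temperatures below are illustrative, not from the essay):

```python
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum fraction of heat convertible into work by any engine
    running between two reservoirs (temperatures in kelvin)."""
    if not t_hot_k > t_cold_k > 0:
        raise ValueError("require t_hot_k > t_cold_k > 0")
    return 1.0 - t_cold_k / t_hot_k

# A boiler at 450 K exhausting to room-temperature air at 300 K:
print(f"{carnot_efficiency(450.0, 300.0):.1%}")  # 33.3% -- the rest is lost as heat
```

Even a perfect, frictionless engine hits this ceiling; real engines fall well short of it, which is the unavoidable loss Carnot identified.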

Because Carnot was an engineer, his insights were largely unknown or ignored by the physics community for decades, until two giants in the field — Lord Kelvin and Rudolf Clausius — explained their significance and relevance to the emerging science of thermodynamics.

The new field proposed two major laws that, when put together, seem to have cosmic implications. The first law says that energy is conserved. That means it cannot be created or destroyed — implying that the total amount is fixed — though it can be transformed from one form to another. The second essentially says that there is “free energy” — or energy available to do work — but as that energy is used for mechanical work, some of it inevitably gets dissipated as it is converted into heat, a form of energy that is no longer useful. Once energy is dispersed in this way, it can no longer be used to do mechanical work, like creating a force that could power a system.

In 1852, Lord Kelvin wrote a paper with what is considered to be the first statement of the second law, which he described as a universal tendency toward the dissipation of mechanical energy. The term “entropy,” introduced by Clausius in 1865, was originally defined as a measure of the energy in a system that is no longer available for work. Entropy, then, referred to dissipated energy, not structural disorder. 
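Clausius's definition can be put to work in a few lines. When heat Q leaves a body at temperature T, that body's entropy falls by Q/T; when the same heat arrives at a colder body, that body's entropy rises by more than that. A quick sketch (the numbers are illustrative) of why downhill heat flow always raises the total:

```python
def entropy_change(q_joules: float, t_hot_k: float, t_cold_k: float) -> float:
    """Net entropy change (J/K) when heat q flows from a hot reservoir
    to a cold one, using Clausius's definition dS = dQ/T."""
    ds_hot = -q_joules / t_hot_k   # hot body loses heat, so loses entropy
    ds_cold = q_joules / t_cold_k  # cold body gains the same heat, gains more entropy
    return ds_hot + ds_cold

# 1,000 J flowing from a 350 K cup of coffee into a 295 K room:
print(f"{entropy_change(1000.0, 350.0, 295.0):.2f} J/K")  # positive, as the second law demands
```

The energy still exists after the flow, exactly as the first law requires; what has increased is the entropy, the measure of how unavailable that energy now is.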

“The conversion of thermal energy into mechanical energy always involves the loss of some useful energy to the environment in the form of heat.”

Essentially, these discoveries suggested that a limited supply of free energy was always spreading out and dissipating, so there would come a time when no further mechanical work could be done, including the work required to sustain the biological machinery that we call “life.” One by one, the stars that supply the energy that powers biology would radiate away their usable energy, and life would cease to be. 

This sad story isn’t just local; all the stars throughout the cosmos will eventually burn out, causing any biosphere, anywhere, to degrade. Even if some form of life could develop the technology to explore the cosmos, eventually all useful energy in the universe would be converted into heat, leaving no energetic fuel for advanced forms of sentience to consume.  

At least, that was the assumption in the second half of the 19th century. This scenario became known as the “heat death” of the universe, and it seemed to be the nail in the coffin for any optimistic cosmology that promised, or even allowed, eternal life and consciousness. For example, one of the most popular cosmological models of the time was put forth by the evolutionary theorist Herbert Spencer, a contemporary of Charles Darwin who, in their day, was actually the more famous of the two. Spencer believed that the flow of energy through the universe was organizing it. He argued that biological evolution was just part of a larger process of cosmic evolution, and that life and human civilization were the current products of a process of continual cosmic complexification, which would ultimately lead to a state of maximal complexity, integration and balance among all things.

When the prominent Irish physicist John Tyndall told Spencer about the heat death hypothesis in a letter in 1858, Spencer wrote him back to say it left him “staggered”: “Indeed, not seeing my way out of the conclusion, I remember being out of spirits for some days afterwards. I still feel unsettled about the matter.”

Things got even gloomier when the Austrian physicist Ludwig Boltzmann put forward a new statistical interpretation of the second law in the latter half of the 19th century. That was when the idea that the universe is growing more disordered came into the picture. Boltzmann took the classical version of the second law — that useful energy inevitably dissipates — and tried to give it a statistical explanation on the level of molecules colliding and spreading out. He used one of the simplest models possible: a gas confined to a box. 

How does the evolution of a gas in a box explain the dissipation of useful energy? First, it should be understood that a gas is a collection of molecules moving around rapidly and chaotically, particles that Boltzmann assumed were like little billiard balls following fixed trajectories. Since the great Scottish physicist James Clerk Maxwell had recently shown that the kinetic energy of a molecule is determined by how fast it is moving, Boltzmann assumed the dissipation of usable energy described by Lord Kelvin was caused by pockets of excited molecular motion spreading out in space due to random collisions between neighboring molecules.  

For example, if a pocket of highly excited gas molecules starts out in some orderly configuration — let’s say the molecules are bunched together in one corner of the box — over time, the ensemble of particles will evolve to become increasingly spread out, or “disordered.” When an ordered pocket of excited molecular motion exists, there is an energy gradient in the system and the potential to do some work, but as these molecules interact with their neighbors and that excited motion gets dispersed, the gradient disappears. This dissipation of molecular order and free energy continues until the gas approaches a state of maximum entropy and disorder known as thermodynamic equilibrium. Paradoxically, this state of “total disorder” looks like a uniform distribution of gas molecules. 

The gas molecules spread out in this way for a simple statistical reason: There are many more ways for the gas molecules to be arranged in a disordered mess than in some orderly configuration. In other words, an orderly arrangement of particles moving around randomly will naturally become more disorganized, just like in pool, where the balls start off in an ordered formation but spread out and mix as collisions occur.
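The counting behind this argument is easy to make explicit. As a toy model (my simplification, not Boltzmann's actual calculation), let each of N molecules sit in either the left or right half of the box; the number of microstates with k molecules on the left is then the binomial coefficient C(N, k), and near-even splits utterly dominate:

```python
from math import comb

N = 100  # gas molecules, each in the left or right half of the box

all_left = comb(N, N)         # every molecule crammed into the left half: exactly 1 way
even_split = comb(N, N // 2)  # microstates that look like a uniform 50/50 split

print(all_left)                        # 1
print(f"{float(even_split):.3e}")      # ~1e29 ways to look evenly mixed
# With 2**100 total microstates, a random shuffle is overwhelmingly
# likely to land near the uniform, "disordered" configuration.
```

This is why equilibrium looks uniform: not because spreading out is favored by any force, but because almost every arrangement the molecules can stumble into is a spread-out one.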

Boltzmann, like Clausius and Kelvin before him, tried to apply his version of the second law to the entire universe — which, he assumed, must be a giant closed system of atoms and molecules bouncing around chaotically, not all that different from his gas in a box. According to his version of the second law of thermodynamics, the entire universe — as a system composed of atoms moving according to physical laws — must eventually tend toward a more disordered and random configuration, just like his box of gas molecules. To explain why there was so much complexity and order in the universe around him, he suggested that the universe must have started out in an extremely ordered state that had since evolved into what we see today, or that the ordered state of affairs we see in our neck of the cosmic woods was the result of a temporary statistical fluctuation away from the general trend toward disorder. 

“The universe can grow increasingly organized through the spread of intelligent life, as long as it can find the free energy it needs to build and maintain the cosmic organization it constructs.”

Of course, there were many problems with comparing Boltzmann’s gas-in-a-box model to the universe. The order-to-disorder transition only occurs when the particles in the system do not become statistically correlated with each other over time. Boltzmann’s H-theorem, on which the idea of a natural tendency toward disorder is based, assumes “molecular chaos,” the condition that colliding particles remain uncorrelated. But molecular and chemical forces often cause atoms and molecules to clump together into larger, more complex structures — meaning a gas evolving in a box is not an accurate representation of all the dynamics in nature.

Boltzmann’s model also ignored the influence of gravity, which is often described as an anti-entropic force due to its clumping effects on matter. Gravity’s effects on small objects like gas molecules are essentially so tiny that they are negligible for all practical purposes, meaning you can leave the force out of the model and still make accurate predictions about the state of the system. But at the scale of the universe, the effects of gravity become extremely important to the evolving structure of the system. Gravity is one factor driving the growth of order in the cosmos, and a good example of why the evolution of the universe looks very different from a gas spreading out in a box.

Of course, the attractive force of gravity doesn’t explain the emergence of life, which has been defying Boltzmann’s tendency toward disorder for about four billion years. Not only does life represent the formation of complexity, it constructs more of it. What explains this paradox? How does the biosphere grow more complex and organized if there’s a tendency for organized systems to fall apart? If cosmic complexity is to grow continuously, the process would then seem to curiously depend on life, the only form of complexity that can create more organization and actively sustain itself.

The quantum physicist Erwin Schrödinger addressed this paradox in his 1944 book “What Is Life?” What Schrödinger noticed was that instead of drifting toward thermodynamic equilibrium — which for life means a state of death and decay — biological organisms maintain their ordered living state by consuming free energy from the environment (which he called “negative entropy”). Boltzmann’s law of increasing disorder only applies to closed systems, and life on Earth is an open system. It is constantly receiving usable energy from the sun, which drives it away from thermodynamic equilibrium.

Of course, without a steady supply of incoming energy, equilibrium ensues and life perishes. But by feasting on the free energy in the environment, ordered systems can pay the physical price of staying organized and functional, just like burning more coal will allow a steam engine to continue to function. The cost is the dissipation of free energy and the production of thermal entropy, in the form of heat, which is constantly being released into the environment. 

Therefore, the continual growth of complexity in the form of biological and technological organization — in other words, the biosphere and the layer of industry and technology that sits on top of it — does not violate the classical version of the second law of thermodynamics. Because the biosphere is an open system that is continually getting energy from the sun, it can continuously build and maintain order. Local reductions in configurational entropy (disorder) are paid for by the simultaneous increase in thermal entropy (heat) caused by life’s constant use of free energy. As long as free energy continues to be used and dispersed, the total amount of entropy in the universe increases, and the classical version of the second law remains intact. 

However, it is important to note that the production of heat is not the same as the creation of structural disorder. Energy gets more dispersed as the universe organizes itself, and that is all the second law requires in this context. One could say that energetic disorder increases as structural order grows.

What this means is that the universe can grow increasingly organized through the spread of intelligent life, as long as it can find the free energy it needs to build and maintain the cosmic organization it constructs. Luckily, the universe offers a vast ocean of exploitable energy to beings that are intelligent enough to know how to extract it. In theory, a hyperintelligent civilization could spread through the cosmos, transforming all the matter in its midst into exotic forms of biological and computational machinery. This scenario might be hard to visualize, but it would not be very different from how life went from existing at just a single point on the Earth, not even visible with the naked eye, to covering the entire planet.

But how long could this go on for? The great science fiction writer Isaac Asimov called that “The Last Question” in a critically acclaimed short story about the fate of life in the universe. The story questions the prevailing view of the second law’s applicability to the entire universe, an assumption made by a series of characters in the story: “However it may be husbanded, however stretched out, the energy once expended is gone and cannot be restored. Entropy must increase to the maximum.” Asimov’s skepticism may have been one of his most prescient insights. In his 1964 biographical sketch of Clausius, Asimov called the heat death hypothesis the “scientific analog of the Last Judgement” and noted that “its validity is less certain now than it was a century ago. Though the laws of thermodynamics stand as firmly as ever, cosmologists are far less certain that the laws, as deduced in this small segment of the universe, necessarily apply to the universe as a whole and there is a certain willingness to suspend judgment on the matter of the heat-death.”

The Expanse

In the 1960s, the Harvard cosmologist David Layzer pointed out that although the entropy of the universe will continue to increase in accord with the second law of thermodynamics — an expanding intelligence will always be converting more free energy into thermal entropy — the maximum possible entropy of the expanding universe will presumably increase at a faster rate than the actual entropy, allowing for the continual growth of order and complexity. He called this difference between the universe’s actual entropy and its maximum possible entropy the “entropy gap.” As long as that gap exists, the universe will not be in thermodynamic equilibrium, and that means there will be energy gradients from which life can extract work.
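Layzer's idea reduces to a race between two curves. The toy model below uses invented growth laws purely for illustration (nothing here is derived from cosmology): if the maximum possible entropy rises faster than the actual entropy, the gap, which is the room available for ordered structure, widens without limit.

```python
def s_actual(t: float) -> float:
    """Actual entropy: assumed steady production as free energy is consumed."""
    return 10.0 * t

def s_max(t: float) -> float:
    """Maximum possible entropy: assumed to grow faster as the universe expands."""
    return 10.0 * t + t * t

for t in (1.0, 5.0, 10.0, 50.0):
    # The gap here is t*t: it grows as 1, 25, 100, 2500 -- never closing.
    print(f"t={t:>5}  entropy gap = {s_max(t) - s_actual(t):.0f}")
```

Under these assumed curves the universe produces entropy forever yet never reaches equilibrium, which is exactly the loophole Layzer identified in the heat death argument.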

Now we know the universe is not just expanding — which Edwin Hubble confirmed in 1929 — but that the expansion is accelerating due to the mysterious force known as “dark energy,” whose presence was theorized before the turn of the millennium. These developments give us reason to believe that the entropy gap will persist into the future, such that the universe may never reach the state of equilibrium predicted by the heat death hypothesis.

In his 2016 book “Humanity in a Creative Universe,” the complexity theorist Stuart Kauffman explained the significance of this: “[W]e do not have to worry about enough free energy. As the universe becomes larger, its maximum entropy increases faster than the loss of free energy by the second law, so there is always more than enough free energy to do work.” 

But where does this seemingly unlimited free energy come from, if the first law of thermodynamics suggests that nature has a fixed and finite amount? 

Well, it turns out that the first law of thermodynamics may also not apply to the universe as a whole, as was assumed, even though conservation of energy applies to systems within the universe. Challenges to our traditional notion of the first law are not uncommon in modern physics. For example, cosmic inflation theory — the leading cosmological model for how the universe became filled with all its energy and matter — proposes that during the early period of expansion, minuscule fractions of a second after the Big Bang, new matter and energy were being continuously created from nothing. In fact, the theory of cosmic inflation suggests more and more universes are being created, so in the totality of reality envisioned by this model, matter creation never ends.

The only way cosmic inflation theory can coexist with the first law is if we divide all the energy in the universe into two opposing categories: positive and negative. The so-called “positive energy” associated with new matter is balanced out by the “negative energy” of the gravitational force associated with that matter. According to this model, the sum total of energy of the universe is zero. It may seem like a desperate attempt by cosmologists to salvage the first law, but it works out mathematically. For this reason, Alan Guth calls the universe “the ultimate free lunch.” In principle, new energy can be continuously created, as long as the ratio of positive to negative energy remains balanced. While the implications of this concept are foggy, it is clear that applying the first and second laws of thermodynamics to the cosmos as a whole can get very tricky.

“These new developments give us reason to believe that the entropy gap will persist into the future, such that the universe may never come to the state of equilibrium predicted by the heat death hypothesis.”

In his 2011 book “The Beginning of Infinity,” Deutsch speculates over whether life could harness dark energy directly to power computation forever: “Depending on what dark energy turns out to be, it may well be possible to harness it in the distant future, to provide energy for knowledge-creation to continue forever.”

Some physicists have since argued that in theory, it is possible that dark energy could be used as a power source. A conference paper published by the American Astronomical Society proposes that “simple machines could, in theory, extract local power from the gravitationally repulsive cosmological constant,” even if “the amount of energy that could be liberated in a local setting is many orders of magnitude too small to be useful or even detectable.”

Whatever dark energy turns out to be, the cosmic expansion it is driving serves to keep the universe out of thermodynamic equilibrium, and a system not in equilibrium is a system that still has some energy and the capacity to do work.

On his blog Preposterous Universe, Sean Carroll writes: “If there exists a maximal entropy (thermal equilibrium) state, and the universe is eternal, it’s hard to see why we aren’t in such an equilibrium state — and that would be static, not constantly evolving. This is why I personally believe that there is no such equilibrium state, and that the universe evolves because it can always evolve.”

If there’s no inevitable equilibrium state, then there seems to be no reason to assume that an evolving intelligence must necessarily come to an end. In his 2006 book “Programming the Universe,” MIT’s Seth Lloyd speculates along these lines: “By scavenging farther and farther afield, our descendants will collect more and more matter and extract its energy. Some fraction of this energy will inevitably be wasted or lost in transmission. Some cosmological models allow the continued collection of energy ad infinitum, but others do not.”

While some cosmologists believe dark energy and the accelerating expansion will ultimately dilute the matter and energy in the universe to such a degree that life must come to an end, a popular new theory known as quintessence suggests that the accelerating expansion may begin to slow, creating even more uncertainty around any predictions for life’s future. Perhaps the dynamics of the universe’s expansion are what they need to be to allow for the continual growth of cosmic complexity? In a 2020 Nature article about quintessence, Carroll is quoted as saying, “We’re back to a situation where we have zero idea about how the universe is going to end.”

If Isaac Asimov were alive today, I believe he would be delighted to know that his “last question” is still open. The increase in entropy in the universe is not equivalent to increasing cosmic disorganization. Complexity and entropy can grow together, and perhaps even without limit. I like to believe that this means that the universe is on our side.

Correction: An earlier version of this essay incorrectly stated that the Hubble telescope confirmed that the universe was expanding. It was Edwin Hubble the person, not the telescope later named after him.

The Mind Is More Than A Machine
Bobby Azarian | Noema Magazine | June 9, 2022
https://www.noemamag.com/the-mind-is-more-than-a-machine


Before Kurt Gödel, logicians and mathematicians believed that all statements about numbers — and reality more generally — were either true or false, and that there must be a rule-based way of determining which category a specific statement belonged to. According to this logic, mathematical proof is the true source of knowledge. 

The Pythagorean theorem, for example, is a mathematical conjecture that is true: It has been proved formally, and in more ways than one. With many theorems, it may be extremely difficult to find a proof, but if a statement is true, it must have one — and if it is false, then it should be impossible to prove using the fundamental axioms and rules of inference of the formal mathematical system.

At least, that was the assumption made by leading mathematicians of the early 20th century like David Hilbert, and later Bertrand Russell and Alfred North Whitehead, who attempted to design an ultimate formal system that could, in theory, prove or disprove any conceivable mathematical theorem. Meanwhile, scientists and philosophers at that time were trying to demystify the mind by showing that human reasoning was the product of purely algorithmic processes. If we could somehow access the exact steps that brains were following to ascertain something, they argued, we would find that they were using strict rules of logic.

A brain, then, was nothing more than a squishy Turing machine — a simple device operating on reasonably simple rules that could compute the solution to any problem solvable with computation, given enough time and memory. This would mean that all the mystery and magic associated with conscious thought could be boiled down to logical operations, or rule-based symbol manipulation. The mind would be no more mysterious than a computer — everything it did would be determinable, definable and understood mathematically. It was a pretty sensible stance at the time. 
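For concreteness, here is what such a rule-following machine looks like in a few lines of code. The simulator and its three-rule program (which increments a binary number) are my own illustrative sketch, not something from the essay, but any computation can in principle be expressed as a rule table of this kind:

```python
# A minimal Turing machine: (state, symbol) -> (symbol to write, head move, next state).
# This three-rule program increments a binary number, with the head
# starting on the rightmost digit and "_" standing for a blank cell.
RULES = {
    ("carry", "1"): ("0", -1, "carry"),  # 1 + carry = 0, propagate carry left
    ("carry", "0"): ("1", 0, "halt"),    # 0 absorbs the carry; done
    ("carry", "_"): ("1", 0, "halt"),    # ran off the left edge: new leading 1
}

def run(tape: str, state: str = "carry") -> str:
    cells = dict(enumerate(tape))        # sparse tape: position -> symbol
    head = len(tape) - 1
    while state != "halt":
        write, move, state = RULES[(state, cells.get(head, "_"))]
        cells[head] = write
        head += move
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, "_") for i in range(lo, hi + 1))

print(run("1011"))  # 1100  (11 + 1 = 12 in binary)
print(run("111"))   # 1000  (7 + 1 = 8)
```

Turing's claim was that a finite rule table plus an unbounded tape suffices for any computation; the mechanist position described above is that the brain is, at bottom, nothing more than such a rule table.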

But Gödel, an eccentric Austrian logician, disproved that view even before Alan Turing invented his abstract machine, in a quite roundabout and loopy way. In 1931, Gödel published his famous incompleteness theorem, as it became known, which called into question the power of mathematics to explain all of reality — along with the hypothesis that the mind works like a formal system, or a mathematical machine.

With a clever use of paradox, Gödel would destroy the idea that truth is equivalent to mathematical proof. Taking inspiration from an old Greek logic statement involving self-reference called the “liar’s paradox,” he constructed a proposition about number theory using a ridiculously complex coding scheme that has become known as Gödel numbering. Although the theorem is virtually impossible to understand for anyone without an advanced degree in mathematics, we can comprehend it by translating it into similar statements in common language. 

“If we could somehow access the exact steps that brains were following to ascertain something, we would find that they were using strict rules of logic.”

To really grasp how the liar’s paradox inspired a new kind of theorem — one that would threaten the very foundations of mathematics — you have to explore the logic for yourself. Consider the following sentence, which is a more straightforward variation of the liar’s paradox, and try to determine whether it is true or false:

“This statement is false.”

This exercise works best if you say the statement out loud. Notice that trying to prove the sentence true or false sends you around a loop that does neither. If the statement is true, it would mean the statement is false, because it says that it is. So it can’t be true. But if the statement, “This statement is false,” is false, then that would mean that the statement is true, because it states that it is false. Thus, it can’t be false without being true.

Either pursuit leads to a contradiction, and it is impossible to see the logic of why unless you go around the loop. We are left with a proposition that can be proven neither true nor false, only because the statement has this strange and somewhat absurd property of referring to itself. While this discovery might appear trivial on the surface, it caused quite a stir among mathematicians and logicians because it demonstrated that no formal system can be considered consistent and complete if it produces what are known as “undecidable” conjectures.
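The loop can even be mechanized. In this toy sketch (my illustration, not Gödel's construction), the sentence's truth value is a variable that must equal its own negation, so any attempt to settle it simply oscillates:

```python
def evaluate_liar(assumed: bool, steps: int = 6) -> list:
    """Repeatedly update the truth value of 'This statement is false':
    the sentence is true exactly when it is false, so each pass flips it."""
    history = [assumed]
    for _ in range(steps):
        assumed = not assumed  # the statement asserts its own falsehood
        history.append(assumed)
    return history

print(evaluate_liar(True))  # [True, False, True, False, True, False, True]
```

No number of passes ever converges on an answer; within the rules of evaluation the proposition is undecidable, which is precisely the property Gödel would import into arithmetic.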

But the true brilliance of Gödel’s theorem was not that it constructed a mathematical statement that could not be proven true or false. Gödel bumped the loopiness up a level by creating a conjecture that was true, but unprovable. Notice that the following self-referential statement is not just about itself, but also its own provability:

“This statement has no proof.”

It was no easy feat, but Gödel created a mathematical statement that was the numerical equivalent of that sentence. The interesting thing about this particular proposition is that it is, in fact, true — it has no proof. We don’t even have to check, because if the proof did exist, it would mean that the statement is true. But it says that it has no proof, so once again, proving the statement would only disprove it. 

Even though it cannot be proven with the axioms of the system and the rules of inference, mathematicians can clearly see that its truth is self-evident by focusing on what the symbols mean. Gödel’s true but unprovable proposition proves that there are truths that exist outside the realm of what can be deduced using symbolic logic or computation.

Because mathematicians could see the truth of an undecidable conjecture, the great theoretical physicist Roger Penrose later argued that the mind must be doing something that goes beyond raw computation. In other words, the brain must be more than a symbol-shuffling machine. As he wrote in a 1994 paper:

The inescapable conclusion [of Gödel’s theorem] seems to be: Mathematicians are not using a knowably sound calculation procedure in order to ascertain mathematical truth. We deduce that mathematical understanding — the means whereby mathematicians arrive at their conclusions with respect to mathematical truth — cannot be reduced to blind calculation!

While Penrose can be credited for popularizing this insight, which was proposed by the British philosopher John Lucas nearly three decades earlier, it seems that Gödel himself was aware of that implication of his theorem, as can be seen by this famous quote of his: “Either mathematics is too big for the human mind or the human mind is more than a machine.”

What exactly is the difference between mind and machine? Machines compute, minds understand. Minds allow us to see truths that a purely algorithmic intelligence would be blind to. What is it that allows this curious ability that we call understanding? Conscious experience, presumably, which enables us to not just reason, but to reflect on reasoning itself.

“Gödel proved that there are truths that exist outside the realm of what can be deduced using symbolic logic or computation.”

While Penrose was justified in arguing that the mind is not a Turing machine, he made what many consider an unjustified leap when he proposed that the brain must then be some kind of quantum computer. Although this theory should not be dismissed on the grounds that it invokes a quantum explanation, the truth is that right now it is not taken seriously by most scientists working on the problem of consciousness. The most well-known criticism, supported by physicists like Max Tegmark, says that the brain is too warm, wet and noisy to sustain the kind of coherent quantum state that Penrose believes is responsible for conscious processing.

However, it is worth pointing out that researchers now think a growing number of biological processes exploit quantum mechanics — like bird navigation, which may rely on quantum entanglement, and photosynthesis, which may exploit quantum coherence. If quantum biology is real and takes place inside “warm and wet” systems, who’s to say that quantum neurobiology is impossible? If there’s some computational advantage to a quantum-mechanical process that exists in nature, natural selection will typically find a way to leverage it.

While Gödel’s incompleteness theorem made consciousness more mysterious to Penrose, it provided the solution to the puzzle for Douglas Hofstadter, the cognitive scientist who wrote the Pulitzer Prize-winning book “Gödel, Escher, Bach: An Eternal Golden Braid,” which was published in 1979, a decade before Penrose’s book. To Hofstadter, the mystery of subjectivity can only be explained with the concept of self-reference, the same property that allowed Gödel’s statements to transcend formal proof. By referring to themselves, symbols suddenly became meaningful, and semantics sprouted from syntax.

“Something very strange thus emerges from the Gödelian loop: the revelation of the causal power of meaning in a rule-bound but meaning-free universe,” Hofstadter wrote. The self that we associate with subjective experience, he argued, emerges from the same kind of self-reference “via a kind of vortex whereby patterns in a brain mirror the brain’s mirroring of the world, and eventually mirror themselves, whereupon the vortex of ‘I’ becomes a real, causal entity.”

More specifically, self-reference in the form of self-modeling produces an observer with causal power. Just as Gödel showed that math can reference itself — call it “metamathematics” — minds can do the same by looking back at the model of the world that evolution and adaptive learning have built up in brains. As Hofstadter wrote: “When and only when such a loop arises in a brain or in any other substrate, is a person — a unique new ‘I’ — brought into being. Moreover, the more self-referentially rich such a loop is, the more conscious is the self to which it gives rise.”
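Computation has its own version of this kind of strange loop: a quine, a program whose output is its own source code. The classic two-line Python quine below is offered only as an analogy for how a system can contain a complete description of itself, the same way a self-model lets a mind refer back to the thing doing the modeling.

```python
# A quine: the string s describes the whole program, including itself.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints exactly the two lines above: the string holds a template of the program, and formatting the template with its own representation reproduces the source, a self-reference that closes on itself instead of collapsing into paradox.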

The lovably loopy idea that consciousness emerges from self-modeling is supported by some intellectual heavyweights, like Judea Pearl, whose causal calculus forms the backbone of one of today’s most respected consciousness theories, integrated information theory. In a 2019 interview with MIT podcast host Lex Fridman, Pearl was clearly echoing Hofstadter’s big idea:

That’s consciousness. You have a model of yourself. Where do you get this model? You look at yourself as if you are a part of the environment. … I have a blueprint of myself, so at that level of a blueprint I can modify things. I can look at myself in the mirror and say, ‘Hmm, if I tweak this model I’m going to perform differently.’ That is what we mean by free will. … For me, consciousness is having a blueprint of your software.

So how does this idea line up with modern neuroscience? Most neuroscientists believe that consciousness arises when harmonized global activity emerges from the coordinated interactions of billions of neurons. This is because the synchronized firing of brain cells integrates information from multiple processing streams into a unified field of experience. This global activity is made possible by loops in the form of feedback. When feedback is present in a system, it means there is some form of self-reference at work, and in nervous systems, it can be a sign of self-modeling. Feedback loops running from one brain region to another integrate information and bind features into a cohesive perceptual landscape.

When does the light of subjective experience go out? When the feedback loops cease, because it is these loops that harmonize neural activity and bring about the global integration of information. When feedback is disrupted, the brain still keeps on ticking, functioning physiologically and controlling involuntary functions, but consciousness dissolves. The mental model is still embedded in the brain’s architecture, but the observer fades as the self-referential process of real-time self-modeling ceases to produce a “self.”

According to integrated information theory — invented by the neuroscientist Giulio Tononi — a system that has no feedback loops to integrate information can in theory display conscious behavior without having the corresponding experience that a system integrating information would have. Such systems are called “feed-forward systems” because the flow of information only travels one way. An example of a feed-forward system is the cerebellum, which contains more neurons than any other brain region, yet it does not appear to produce an observer. The neuroscientist Christof Koch, one of integrated information theory’s most high-profile supporters, explains the reason no self sprouts from the cerebellum in a 2018 Nature article titled “What Is Consciousness?”:

“The cerebellum is almost exclusively a feed-forward circuit: One set of neurons feeds the next, which in turn influences a third set. There are no complex feedback loops that reverberate with electrical activity passing back and forth.”
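The functional difference Koch describes can be caricatured in a few lines of code. This is a hypothetical toy contrast, not a model of any real neural circuit: a feed-forward chain maps each input straight through, so its output is fully determined by the present input, while a recurrent system feeds its own previous state back in, so its response depends on everything it has seen so far.

```python
def feed_forward(x):
    # One set of units feeds the next; nothing flows backward.
    h1 = 2 * x
    h2 = h1 + 1
    return h2

class Recurrent:
    def __init__(self):
        self.state = 0.0  # the loop: state persists and feeds back

    def step(self, x):
        self.state = 0.5 * self.state + x  # old state mixes with new input
        return self.state

# The same input always yields the same feed-forward output...
print(feed_forward(3), feed_forward(3))  # 7 7

# ...but the recurrent system's response depends on its history.
net = Recurrent()
print(net.step(3), net.step(3))  # 3.0 4.5
```

In this cartoon, only the recurrent system carries information about its own past activity, which is the kind of reverberating self-influence that integrated information theory associates with experience.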

“What exactly is the difference between mind and machine? Machines compute, minds understand.”

An equally famous consciousness theory invented by the neuroscientist Bernard Baars, known as global workspace theory, describes how a stream of conscious experience emerges when multiple sensory streams fuse to form a unified perceptual landscape. In this computational model, consciousness is referred to as a “global workspace” because its contents can be manipulated by the mind and broadcast globally to many regions in the brain at once.

The mental workspace is thought to be produced by feedback loops running from the frontal lobes to the parietal lobes and back again — so-called “fronto-parietal loops” — which integrate information over space and time. When these feedback loops cease, the global workspace ceases to be. Conscious processing is disrupted, and information is no longer made globally available.

What can we conclude from these neuroscience theories of consciousness? That Douglas Hofstadter was right — it is self-reference in the form of self-modeling that conjures up an observer, and it does so through feedback loops that entrain neural activity and integrate information. If it wasn’t for Gödel and his loopy incompleteness theorem, the significance of self-reference might not have been discovered, and Hofstadter probably would have never connected it to self-modeling in minds.

While there is still much for scientists to learn about how the brain generates conscious experience, it is clear that Gödel was correct in his assessment that the mind is more than just a machine. It is a generator of conscious experience that allows beings with brains to reflect on reasoning and to understand the meaning encoded in true but unprovable statements.

Could self-reference be the missing puzzle piece that allows for truly intelligent AIs, and maybe even someday sentient machines? Only time will tell, but Simon DeDeo, a complexity scientist at Carnegie Mellon University and the Santa Fe Institute, seems to think so: “Great progress in physics came from taking relativity seriously. We ought to expect something similar here: Success in the project of general artificial intelligence may require we take seriously the relativity implied by self-reference.”

The post The Mind Is More Than A Machine appeared first on NOEMA.