The Consciousness Conundrum

 by Steven Gussman



Author's Note: This article was originally published in the now-defunct Areo Magazine on 5/30/23. What appears below is pulled from a version I archived on 11/7/23 (though I don't believe changes were ever made to this article).


Consciousness is among the greatest outstanding scientific mysteries.


We have no direct way of telling whether a given being is conscious and therefore have to rely on basic induction. We know that we ourselves are conscious—as Descartes said, “I think therefore I am”—but to avoid solipsism, we must also reason that other people are conscious as well (that it is a feature of our species), and even that many other animals are conscious (judging by their complex behaviour, and the fact that their minds were generated by the same process of evolution by natural selection that generated our own).


Would it be possible to build an artificial human that could fool anyone who met it, but that would nevertheless be a mindless machine with no first-person perspective (a concept known in philosophy as a zombie)? As technology advances, this may become a practical issue and we may be faced with a dilemma: how can we tell the difference between a zombie, which we could legitimately regard as a mere tool at our disposal, and a conscious AI, i.e., an agent with rights and with the capacity to suffer?


Some network scientists argue that the complex structure of the brain produces consciousness by default. In other words, your first-person experience of the world is a side effect of the physics involved in a neural network. This hypothesis predicts that a zombie cannot exist—if an entity seems conscious, it must be conscious. The simplest version of this approach implies that consciousness is substrate-independent. This is the approach taken by Ray Kurzweil and Max Tegmark, who argue that consciousness is a core feature of complex neural networks, just as temperature is a feature of collections of particles and reflects the kinetic energy associated with their motion. This idea suggests that there must be a sliding scale of consciousness from, say, an ant to a cat to a human. This would be in keeping with all other evolved traits and capacities, which are seldom equal between species. This is a very imprecise hypothesis, however: there are many complex systems that no one thinks are conscious, from our planet's climate right up to the universe as a whole.
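To see what the temperature analogy is trading on, it may help to recall the standard kinetic-theory relation (a textbook result, not something stated in the article) connecting the temperature of an ideal gas to the average translational kinetic energy of its particles:

\[
\langle E_{\mathrm{kin}} \rangle = \tfrac{3}{2}\, k_B T ,
\]

where $k_B$ is Boltzmann's constant. Temperature is a statistical property of the collection as a whole rather than of any single molecule; the network-scientist claim is that consciousness stands to a suitably structured population of neurons roughly as temperature stands to a population of particles.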


When complex networks are structured just right, Max Tegmark and Ray Kurzweil have argued, they reflect information content about their environment—just as when atoms are structured just right, they form a mirror. But while a mirror merely reflects current visible information in its environment, the mind can notice patterns and make predictions about the future. However, this fails to explain why a neural network capable of understanding its environment should necessarily include a first-person experience of this process. Consciousness must be a necessary feature of our brains’ structure—the very existence of consciousness implies some reductionist physical correlate—but the network scientists go much further than this. They imply that the relationship between neural structure and consciousness is such that you cannot even create a good facsimile of a person without that facsimile becoming conscious. 


Evolutionists, by contrast, start by trying to explain why beings are conscious (leaving the how question for later). These biologists note that only multicellular animals appear to exhibit consciousness, despite the fact that the vast majority of the universe consists of far more basic physical and chemical phenomena. To the Darwinian, consciousness is a potential liability: a complex and costly feature of an organism. After all, many populations of simple, single-celled organisms have got by just fine without it for billions of years. For natural selection to have favoured consciousness, then, it must have raised an organism's fitness.


The evolutionist argues that the benefit gained is agency. To be conscious is to “feel that we freely author or own thoughts and actions,” as Sam Harris has pointed out; but unlike Harris, the evolutionist takes this feeling at face value. Evolving from a single-celled zombie to a free-willed agent, in this view, is adaptive, as it allows an organism to weigh the vagaries of the environment before making conscious executive decisions to engage in goal-based behaviours. This is how many people think culture evolved, as the product of many interacting minds converging on better ideas. Yet it is not obvious that the fitness benefits of free will outweigh the costs (such a being is now free to act against its genetic interests). And besides, a zombie could surely do just as good a job at passing on its genes, without the added cost of building and running a consciousness (in which case the zombie would be favoured by evolution, and we would not be conscious today).


Worse, the idea that we truly possess free will—rather than merely the illusion of it—is undermined by everything we know about science: everything, including the human brain, must ultimately follow the mechanics of physical law. There is no room for the intervention of a god-like agent that cannot be reduced to the many-body physics of an organism’s particular makeup. The mere illusion of free will that consciousness seems to offer cannot then increase an organism’s fitness.


So, is consciousness an expensive adaptation or a mere spandrel? 


The network scientist might argue that because there is something it is like to be a neural network, as an inevitable result of the way it is structured, there is no tension between their hypothesis and evolution. In this view, the neural network is adaptive precisely because of its ability to metabolize information in its environment that can be used to make predictions that improve the organism's fitness. First-person experience simply came along for the ride. This explanation is difficult to swallow. After all, we know only the general sketch of how even the simplest life-forms first emerged from the basic laws of physical chemistry. Natural selection among simple molecules provides the only satisfactory explanation of how this must have happened. That the human brain, which James Watson has described as the “most complex thing we have discovered in our universe,” could be explained by its physical properties alone and not by an evolutionary advantage is deeply unsatisfactory.


The holy grail for the evolutionist is to prove the compatibility of free will with a mechanical universe—a tall order, indeed. For the complexity scientists, it is to show that consciousness arises as an emergent phenomenon associated with any sufficiently structured information network. It is hard to tell how that would happen or what it would even mean. To get a sense of the difficulty of this problem, let’s return to the example of the mirror. Essentially, a mirror is a collection of atoms structured such that light that enters it and is absorbed by an atom at a particular location is quickly re-emitted by that atom at an angle matching its angle of incidence, so that the surface as a whole reflects the image back. How could a complex neural network do something analogous? It can be shown that the network stores and processes information from its environment—but how does it form a first-person experience out of this?
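To make that last claim concrete, here is a minimal, purely illustrative sketch (not from the article, with every name invented for the example) of a "network" reduced to a single trainable unit that stores information about its environment in two numbers and processes it to predict the next observation. It demonstrates exactly the kind of information storage and prediction at issue, and nothing more.

```python
# Toy, hypothetical illustration: a single linear unit that "stores" information
# about its environment in two numbers (w, b) and "processes" it to predict the
# next observation. All names here are invented for the example.

def environment(t):
    """A simple environmental signal: it alternates between 1.0 and 0.0."""
    return 1.0 if t % 2 == 0 else 0.0

w, b = 0.0, 0.0        # the unit's entire stored "knowledge" of the world
learning_rate = 0.1

for t in range(1000):
    x = environment(t)           # current observation
    target = environment(t + 1)  # what actually happens next
    prediction = w * x + b       # the unit's guess about the future
    error = prediction - target
    # Gradient-descent update on squared error: adjust the stored parameters.
    w -= learning_rate * error * x
    b -= learning_rate * error

print(f"learned parameters: w={w:.2f}, b={b:.2f}")
print(f"prediction after observing 1.0: {w * 1.0 + b:.2f}")  # close to 0.0
print(f"prediction after observing 0.0: {w * 0.0 + b:.2f}")  # close to 1.0
```

Under these (admittedly trivial) assumptions, the program ends up encoding a predictive model of its environment in w and b, which is the sense in which even very simple networks can be said to store and process information—while there is no plausible sense in which the loop above has a first-person experience of the signal it models.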


Very little progress, then, has been made on the hard problem of consciousness.


The crux of the issue is that consciousness appears to be a rare facet of the universe. Very few phenomena are conscious (compare this to a characteristic like temperature, which even black holes have). The only known examples of consciousness have arisen as the result of billions of years of evolution. Despite the fact that evolution is a ruthlessly economical process, network scientists argue that consciousness is a mere side effect of the way some information networks are structured. Something isn’t right here. A major piece of the puzzle is clearly missing.


As it stands, either true free will exists—and the foundations of science must be called into question—or evolution has spent significant resources building a machine that exhibits a feature that is totally useless to the organism it runs on. There must be a third way forward, one which respects the principles of the philosophy of science as well as the thrifty nature of evolution.


Steven Gussman

Steven Gussman is a scientist and video game developer on the east coast of the United States. He recently authored and self-published the book The Philosophy Of Science.
