"But even if one were willing to accept the perplexing claim that a smartphone could be conscious, could you ever know that it was true? Surely only the smartphone itself could ever know that." --Oliver Burkeman
In recent decades philosophers and scientists have been engaged in an ongoing and often bitter agon over the nature of consciousness, and there is no end in sight. At issue are some of the most tantalizing and potentially fruitful intellectual conundrums we face: a complex of philosophical/scientific problems that some hope will eventually erase the borders between philosophy and science. I’m not among the latter, but wherever one stands in this debate it’s clear that it delineates that border in fascinating ways.
Last week The Guardian published one of the best introductory articles I’ve seen on the subject (Oliver Burkeman: “Why can’t the world’s greatest minds solve the mystery of consciousness?”). Without trying to summarize the ground Burkeman covers (go read the article yourself: it’s well worth it) I might minimally say that two rough positions taken up by recent contenders are: 1) We are in some way fundamentally different from robots or zombies (David Chalmers); 2) There is no fundamental difference between us and robots, only differences of degree or sort (Daniel Dennett).
I’m with Chalmers in this spat, though as yet there’s no scientific route to decide who is more likely right. Which is little surprise: there’s no scientific route to deciding many things.
One of the problems with Dennett’s position (and that of others) is that it establishes too low a threshold for what counts as consciousness. In my own lexicon (which I’m sure Chalmers would have problems with too) consciousness is not necessarily present with the mere feeling of pain (as is suggested in the article) but rather arises with the capability of knowing “I feel pain”--the I here being in part a product of my own pained being’s recognition that there are other beings in existence.
In other words, for myself the problem of consciousness immediately raises the thorny problem of I-ness: how we experience that I-ness which then in turn experiences what are called qualia. Qualia (singular quale) are basic irreducible perceptions like the color green, the taste of a certain chocolate, or the pain of a crushed finger. It is, after all, our experience of qualia that poses what Chalmers has famously called “the hard problem of consciousness”. The hard problem is hard because--and this is difficult to phrase clearly--although there “is something that it is like” to experience a particular color, or a particular chocolate, that conscious experience, that somethingness, cannot be reduced to a mechanistic explanation. Or: Though neuroscience might be able to explain how the brain stores and accesses memories, and might be able to explain a host of other brain functions as well, there remain irreducible qualia that are not subject to such explanation, nor are they subject to comparison between conscious beings (there will never be any way of knowing if my red looks like your red).
Some scientists (Dennett again) insist that there is no hard problem of consciousness to begin with, or that if there are hard problems they are elsewhere--precisely where Chalmers sees the “easy problems”.
Without being able to elaborate much here, my own interest in this debate relates to the question of the degree to which consciousness entails subjectivity, and how consciousness itself, and qualia themselves, might be related to language or semiotics. I’d tentatively argue that full consciousness is only possible on a ground of something like an experience of selfhood.
Many have theorized that human selfhood, a person’s feeling of I-ness, arises initially through interaction with others (usually beginning with the recognition that one is a distinct being from one’s mother) and is then further established through the acquisition of language--that I-ness is, in large part, attendant on one’s learning and living within language. Your sense that you are you, then, that you are one being possessed of selfhood, arose not only via your physical singularity as an independent body, but also via a kind of dialectical back and forth with language. You are an individual physical Homo sapiens (a brain and body separate from others) that has been infected by language, which provided the means for a more complete parcelling out both of the regions of your individual self and of your perceptions of the world. It is on this double ground that you slowly developed the innerness that characterizes full human consciousness.
One may grasp the crux of what I mean by “full human consciousness” by considering an experiment that should never be done. As follows: Were it possible for a Homo sapiens to be raised in an environment with no other humans or animals (fed and protected within such an artificially limited environment) we’ve really no way of knowing whether full consciousness would develop, or even what sort of consciousness would. Certainly language would not develop. Would loneliness? Very likely not. Nonetheless, were this same Homo sapiens’ hand burnt, he or she would jerk it back in reaction. But such pulling back in pain would not, in my view at least, constitute full consciousness. The feeling of pain (or hunger, or satiation) is different from the experience of full consciousness.
This problem I pose here of full human consciousness is not exactly the same as the problem of consciousness per se, but it does at least clarify things by getting at what we humans experience as consciousness. It is this base state, after all, from which we always begin; it is from this ground that we always ask our questions, always approach more general problems in the field. And it is, I’m convinced, highly doubtful we will ever escape this base state to attain an objective perspective on the other problems of consciousness we might raise--such as, for instance, the question of animal consciousness. This is the kind of philosophical issue I believe the likes of Daniel Dennett don’t take seriously enough.
The problem of consciousness is important for a host of reasons. Many ethical problems are implicated in how one approaches it. For one: If our humanity lies in our conscious selfhood, our ability to experience I-ness, does this mean that those who have not yet attained consciousness (say, the unborn or the severely cognitively impaired) or those who have lost consciousness (say, those in a coma) are not truly human? As a Catholic, I would insist that No, in fact these are all humans, because humans are in any case endowed with a soul. Of course I’m well aware that this Catholic position is not at present scientifically provable (although research on near-death experiences continues to provide “troubling” data supporting the possibility of an identity principle independent of the body), and so cannot be part of this particular scientific debate. But the Catholic position is worth raising here if only because it underlines the hard questions remaining for secular ethicists. Namely: If our humanity lies in our conscious selfhood, can those who lack consciousness have any rights? And on what grounds?
Posing the problem of consciousness also leads inevitably to speculation as to the consciousness of animals and machines--a realm of inquiry fraught with both a troubling history (speculations that animals are mere automata) and a troubling future (cf. the possible threat posed by artificial intelligence).
What sense of I-ness do animals have? It is hard to know, though everyone sees that animals can interact with others as others. Does such interaction imply consciousness? If so, to what degree or of what kind?
Apart from I-ness, what is it like to be, for instance, a bat? This is the question posed in a classic paper by philosopher Thomas Nagel. Part of the upshot of the question is that regardless of advances in science, we will never be able to know. No amount of neurological understanding of how a bat’s brain works will ever tell us what it is like to be one. And that problem, again, relates to Chalmers’ posing of the hard problem.
Most scientists now agree that many species of animals have consciousness. I would argue, again, that for animal consciousness to be real it must relate to a kind of self against which perceptions can be experienced. Insects, then, very likely lack consciousness because the insect brain is merely a neural center for managing response to different kinds of stimuli. Here already, however, I run into the challenge that certain thinkers would pose: “On what grounds do you say, Eric, that your human brain is not itself merely a neural center for managing response to stimuli?” Or again: “Is not your laptop likewise a kind of digital center for managing response to stimuli? On what grounds do you know your laptop isn’t conscious?”
My answer to both these questions would again relate to I-ness or selfhood. When Nagel asked what it was like to be a bat, that was a fruitful question, but I don’t think it would be fruitful to ask what it is like to be a butterfly or an iPhone 6. Because there is likely no consciousness present in a butterfly to experience being a butterfly, the question of what it is like to be one approaches the question of what it is like to be a mushroom. Neither the butterfly nor the mushroom has a self to experience its being, and the same goes for an iPhone.
Though many creatures doubtless attain rudimentary consciousness, there are myriad remarkably complex creatures that very likely do not. The interesting question is why some creatures do have consciousness, and how they have it.
Will machines ever really attain consciousness? In fact, as my above remarks hint, some theorists in the field are trying to pre-empt the question by insisting that machines already do have consciousness--that your television, for instance, is conscious. I would say this claim (which I find ultimately flippant) merely erases the problem of consciousness. Which is perhaps the point such thinkers are trying to make. They seek to invalidate the problem from the get-go. Indeed, some theorists insist that many of the “mysteries of consciousness” don’t constitute valid problems in any case (cf. again Dennett). I think they are wrong.
As to the question of whether machines can think, we of course have the test Alan Turing proposed in a 1950 paper--what has come to be called the Turing test. Whether machines “can think” is not quite the same question as whether machines may ever be conscious, but the two are definitely strongly related.
One of the best answers to those who are existentially enthusiastic about AI (those who believe we are on the verge of having real and authentic one-to-one relations with machines) is still to be found in John Searle’s Chinese room thought experiment, where he clarified the difference between machines that merely simulate thinking and beings that really think. Searle’s general argument is that thought means being able to understand and experience, as a self, the meaning of what is being communicated. Again, for Searle, I-ness is the question, or: What is the nature of that entity that understands its responses (rather than merely computes “correct” responses)?
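Searle’s rulebook can be caricatured in a few lines of code: a purely syntactic lookup that maps incoming symbol strings to outgoing symbol strings. The rule entries below are invented for illustration; the point is that nothing in the program represents meaning, yet its replies can look fluent from outside.

```python
# A caricature of Searle's Chinese room: the "room" follows purely
# syntactic rules, matching input symbol strings to output symbol strings.
# The rulebook entries are invented for illustration; no meaning is
# represented anywhere in the program.

RULEBOOK = {
    "你好吗?": "我很好, 谢谢。",        # the program never "knows" these are greetings
    "你叫什么名字?": "我没有名字。",     # ...or that this is a question about names
}

def chinese_room(symbols: str) -> str:
    """Return whatever response the rulebook dictates, or a stock fallback.

    Nothing here understands Chinese: the function only matches and
    copies uninterpreted strings--which is exactly Searle's point.
    """
    return RULEBOOK.get(symbols, "对不起, 我不明白。")

print(chinese_room("你好吗?"))  # a fluent-looking reply, zero understanding
```

However large such a rulebook grew, it would remain symbol manipulation; on Searle’s view, no amount of it adds up to an entity that understands its own responses.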
A being may claim consciousness, and mimic it quite convincingly, without actually possessing it, but any being that thinks it is conscious is necessarily so. The takeaway? Verifying consciousness from outside is well nigh impossible.
In general, the crux in these debates tends to reside in the contrasting definitions of consciousness being put forward. Whether there is a “problem” of consciousness at all, and where the hard and easy problems lie, depends largely on how consciousness is defined. Which is why the participants are so often speaking past each other. Whereas for me those who say my cell phone is conscious are merely ignoring the problem, for them I am talking about a problem that doesn’t properly exist.
There are no clear answers to many of these questions. I stick with my position that consciousness is grounded in an identity--or, more minimally, that some experience of selfhood must exist for anything like true consciousness (at whatever level, including the level of qualia) to be present. The fully conscious being is a being that knows itself as an I (with the attendant awareness of others this implies), experiences itself in an innerness which necessarily has access to an outerness (both the outerness of the world and that of other selves). Animals are very likely participants in consciousness, though I believe the human experience of self grounded in language is of a quite different order from that which animals experience. The question of whether computers may ever attain consciousness seems to me as yet entirely open, and it seems very possible that it will always remain open regardless of how artificial intelligence may advance.
A final question: If animals participate in some way in consciousness and we participate more fully in it, is it likely that an even fuller participation in consciousness is possible in the universe? As a Catholic, I believe the answer is Yes--and that the endpoint of full consciousness is God. In a scientific register, I believe the answer is also Yes: there seems no reason to assume that consciousness may not be deepened or enhanced, and of course consciousness may very well exist elsewhere in the universe, at whatever degree of complexity.
There are many issues I’ve left out here (I haven’t discussed philosophical zombies, for instance) and I’ve perhaps been remiss in not more directly addressing certain challenges that might be made to my assertions regarding how the consciousness debate relates to the different problems of I-ness or selfhood. Further, I’m aware my depiction of Chalmers especially is lacking or maybe misleading (for one, in recent years he has not exactly been a skeptic regarding the possibility of machine consciousness). But I’ve tried to keep things at a basic level, and so have had to leave out much.