I’ve been asked to comment on Peter Hacker’s bizarre claim, made in his arrogant and boastful essay “The Bogus Mysteries of Consciousness,” that qualia don’t exist. So here goes.

Say What Now?

First, what are qualia? If you’re new to the idea, “qualia” means the qualitative properties of human experience. It’s a catch-all term for all the features unique to conscious experience, the “what it is like” to be seeing the color red or hearing a bass drumbeat or smelling cinnamon or feeling angry. Explaining why qualia exist and are the way they are is called the “hard problem” of consciousness because it’s really the last frontier of brain science, a question we haven’t yet resolved even hypothetically (in contrast to the other three unsolved frontiers of science—the origin of life, the origin of the universe, and the fundamental explanation of the Standard Model of particle physics—which all have fairly good hypotheses already on the table). Yes, the explanation for qualia most likely does have something to do with the inevitable physical effects of information processing. All evidence so far is converging on no other conclusion. But that still leaves us ignorant of a lot of the details.

This is mainly because we can’t access the information we need to answer this question. For example, to tell what actually is causally different between a neural synaptic circuit whose activation causes us to smell cinnamon rather than oranges (or see red or hear violins or feel ennui), we need resolutions of brain anatomy far beyond any present technology. The mere arrangement of synapses won’t even be enough, yet we don’t even have that—and since the input-output signal for any neuron is determined by something inside the neuron, such as perhaps methyl groups attached to the nuclear DNA of the cell, we’d need to be able to make a map even of that, for every single cell in the brain, which is far beyond any present physical capability. AI research could get there sooner, if it somehow achieves general AI and we can ask it about its personal phenomenology, but that’s just another technological capability we presently don’t have.

In any event, if you want to catch up on the history of this problem and its current state of play, see the entries in the Stanford Encyclopedia of Philosophy and the Internet Encyclopedia of Philosophy. And to catch up on where I land on this subject, see The Mind Is a Process Not an Object (as well as the relevant sections of How My Philosophy Would Solve the Unsolved Problems and How I’d Answer the PhilPapers Survey).

What Is Hacker on About?

What Hacker argues is not even quite the same thing as what so-called “eliminativists” argue. They don’t really argue “qualia don’t exist,” but that they don’t exist in the sense everyone supposedly assumes. Neither Paul and Patricia Churchland nor Daniel Dennett have actually argued qualia don’t exist in any sense at all. Which is a problem I have with eliminativists generally; they only confuse people with semantic games. Dennett proposes we must abandon qualia by providing “alternative explanations for the phenomena” that qualia are invoked to explain. But the phenomena to be explained are the qualia. Dennett thus confuses causal theories of qualia with the qualia themselves. The Churchlands make the same mistake. Once you correct their mistake, we’re back at square one: we have some distinctive phenomena we have to explain; and we have not yet fully explained them. It does not matter what you call those phenomena. You can’t change what a thing is by changing what you call it.

Hacker isn’t making that same mistake, because in those other cases (the Churchlands and Dennett) the explanations on offer are coherent enough that you can disentangle what they are actually trying to say in different words. Dennett, for example, ultimately gets around to admitting there are phenomena to explain, and he attempts an explanation of them. Hacker does neither. As such, I suspect Hacker has simply naively misunderstood the eliminativists, and gone off on an immature brag fest denouncing the stupidity of anyone who still thinks there are any phenomena to explain.

Dennett and the Churchlands don’t do that. They admit there is something to explain, and try to explain it; though what they provide is really a meta-explanation, which in each case reduces to the same thing: they propose qualia are an illusion; they are simply what it is to believe you are experiencing qualia. In other words, qualia are not an extra something that explains anything; they are, rather, the inevitable consequence of certain forms of information processing. I concur. I just don’t think it’s helpful to frame that as saying qualia don’t exist. That’s rather like realizing “when I see a mirage of water on the horizon, I know that that water doesn’t exist,” and then concluding “the mirage doesn’t exist.” That’s to confuse explanandum with explanans.

Why You Can’t Hide from This

No matter what word games you play, you still have to explain why cinnamon doesn’t smell like oranges, why activating one neural circuit causes you to experience a smell at all and not hear a bass drum or see the color red or feel disgust (and vice versa), or any other conceivable thing instead, and why any of this happens at all. We well know what it is like to process information without any of these phenomena: we call it our subconscious. So what makes the difference between just walking through life running purely on subconscious processes, and instead experiencing all these bizarre, and bizarrely specific, phenomena? What makes the difference between experiencing something as a smell, and experiencing it as a color? Or a sound? Or an emotion? Or anything else other than any of these things? Why, in other words, do smells or colors or sounds even exist at all?

And we don’t mean by this the biomechanics of our sensory systems. When we ask what makes the difference between cinnamon smelling like cinnamon and not oranges, we don’t mean what has to be different about the molecular receptors in the nose that distinguish between these two odors; those don’t have anything whatsoever to do with what things smell like. No matter what molecule stimulates a given neural tract in the nose, that’s just a binary signal, “on or off,” that flows into the brain. At best, perhaps, it has a quantity scale. But there’s nothing qualitative about it. That wire could go anywhere. It could go to the circuit that makes you see red, rather than smell anything, much less some particular thing. And for some people, it does: synesthesia is a thing. (So why are only some people synesthetes?)
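To make that point concrete, here is a minimal sketch in Python (every name and number in it is invented purely for illustration): the receptor hands the brain nothing but a bare activation level, and which experience, if any, results is fixed entirely by which downstream circuit that signal happens to be wired into.

```python
# Toy illustration: a receptor's output is just a bare signal (here a float
# between 0 and 1); nothing about the signal itself is "cinnamon-ish" or
# "red-ish". Which experience (if any) results depends entirely on which
# downstream circuit the signal happens to be wired into. All names and
# values here are hypothetical, for illustration only.

def receptor_response(molecule: str) -> float:
    """Return a bare activation level; nothing qualitative lives here."""
    binding_affinity = {"cinnamaldehyde": 0.9, "limonene": 0.1}  # made-up values
    return binding_affinity.get(molecule, 0.0)

# The same kind of signal could be routed to entirely different circuits:
downstream_wiring = {
    "olfactory_circuit_A": "smell of cinnamon",
    "olfactory_circuit_B": "smell of oranges",
    "visual_circuit_R": "seeing red",  # e.g. in a synesthete
}

def resulting_experience(signal: float, circuit: str) -> str:
    """Which quale (if any) results is fixed by the circuit, not the signal."""
    if signal < 0.5:
        return "no experience"
    return downstream_wiring.get(circuit, "no experience")

print(resulting_experience(receptor_response("cinnamaldehyde"), "olfactory_circuit_A"))
print(resulting_experience(receptor_response("cinnamaldehyde"), "visual_circuit_R"))
```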

Qualia are in fact undeniables. They therefore cannot not exist. The probability is literally zero. And that’s saying something, because almost nothing has a truly zero probability. But qualia are in fact the one and only thing that does. Because it is literally 100% impossible that “I am experiencing a white field with black markings inside it right now” is false; that it “isn’t happening” and thus “doesn’t exist.” That I am seeing letters on a computer screen as I type can be in doubt—maybe I’m hallucinating or dreaming this; maybe I am mistaken about what the sensory signals my brain is interpreting as letters on a computer screen actually signify; and so on. But that I am experiencing seeing letters on a computer screen is impossible to doubt. And why that is has to be explained.

Yes, qualia are fictional (our brain invents them to demarcate and navigate information), and yes, their “existence” will have something to do with information processing. Because we know if you remove or numb the pertinent information-processing circuit that generates any given experience, you remove the experience. And you can even cause the experience to occur by simply sticking a wire into the pertinent circuit and shocking it. So we know this is simply something that circuit does, and does differently than a circuit that doesn’t generate any phenomenological experience (as most circuits in our brain don’t) or that generates a different one than this (as all the remaining circuits in our brain do). What makes a “cinnamon circuit” cause that experience and not some other (or none at all)? This is the “Mystery of Consciousness” that Hacker daftly claims is “Bogus.” But it’s Hacker’s claim that’s bogus.

Hacker’s Catastrophic Derail

One thing that often throws everyone off, including the “eliminativists,” is the persistent yet completely unnecessary assumption that qualia are things. That they are objects, entities—evoking wonder at what mass or charge they have or whether we can bottle them. That would be as mistaken as thinking we can capture “running down the street” or “voting in an election” in a bottle, and weigh it on a scale. Those are not things, they are events. And like them, qualia are events, not things (I fully explicate this point in my article The Mind Is a Process Not an Object).

Thus qualia don’t “explain” things; they are the thing to be explained. And they don’t exist separately from the physical process underlying them; they are the physical process underlying them. So the question is: what is different between those physical processes and other physical processes, which don’t generate such phenomena? That is exactly identical to the question of what causes those events of experience to occur, and to have the qualities they do (rather than others instead). And this is the “hard problem” of consciousness. It is not unsolvable (we know what we need to do to get at the answer; we just don’t have the technology to get at it yet), nor is its being “mysterious” evidence against physicalism (physicalism poses no difficulty for explaining what “events” are and why they occur).

But Hacker ignores all that and launches his bizarre essay with the incredible declaration that “there is nothing mysterious or arcane about” consciousness, despite all actual experts the world over, from brain scientists to philosophers, concurring that there is. Indeed Hacker even slags off eliminativists in his first paragraph, noting that “Daniel Dennett” himself has said “that consciousness ‘is the most mysterious feature of our minds’,” and so Dennett, too, is among all the rest of the world’s experts “who should know better.” Hacker himself is a philosopher of relevant pedigree; so really, it is he who should know better.

I just laid out what the “mystery” of consciousness is; and it is very real, and indeed remains very much a mystery. Maybe not as much a mystery as why America elected Donald Trump to be their president or why ketchup-flavored ice cream is a thing. But some manner of mystery all the same. So how does Hacker try to argue that it isn’t a mystery? That there isn’t anything about it to explain?

Mostly Hacker argues by vacuous mockery. It takes quite a lot of reading to ever even discern an actual argument in anything he says. Indeed, the first time we get to anything even close to an argument is his sarcastic remark that:

There is something which it is like for you to believe that 25 x 25 = 625, which is different from what it is like for you to believe that 25 x 25 = 624. There is something it is like for you to intend to retire at 11.30, which is different from what it is like for you to intend to get up at 7.00. These are distinct qualia.

This isn’t, of course, an argument at all. He does not draw any conclusions or inferences from this declaration. He seems to imply that it is ridiculous and that its being ridiculous somehow means qualia don’t exist. But I can’t fathom how a serious philosopher could think that wasn’t bollocks. “These qualia don’t exist, therefore none do” is a shit argument.

It’s just all the worse that “Arguments to the Ridiculous” are usually already shit arguments. They typically just reify the fallacy of Argument from Lack of Imagination. To simply presume there is no qualitative difference in experiencing the conceptual distinctions he lists here is, in other words, a circular argument. And circular arguments are shit arguments. The rest of us aren’t this stupid. Belief means confidence; and we all know confidence feels different than the lack of it. Whereas if there is anyone out there who can “experience” the difference between “624” and “625” as quantities, that logically entails that for them there is something experientially different between them. And that’s exactly what the word “qualia” means.

Most of us, however, do not qualitatively experience any difference between such abstract numbers as 624 and 625. We comprehend them in a computational sense, absent any unique qualia. We generally have to work out in what way they differ; we don’t experience it directly, the way we do the difference between “two” and “three,” which are quantities we can directly apprehend in experience. And to feel the difference between those quantities we don’t even have to be the synesthete to whom chicken tastes “like three points,” but we could be—and how would Hacker explain that? But larger numbers, like 624 and 625? Those simply don’t “feel” any different to us except in fragmentary ways. We can “feel” that one of those quantities has one more than the other (but so do lots of other quantities); that both are in the hundreds (but so are lots of quantities); and we experience distinct features of the Arabic shape of the component numerals (but those numerals, and hence the attendant qualia, attach to lots of other numbers); and so on. But that’s it. And that’s what we need to explain.

By contrast, we can be fairly confident my desktop computer experiences none of these things. So why do I? And why do they feel like that, and not like something else? Of course—to some people, they do. The most common form of synesthesia is to experience color qualia in conjunction with various numbers. That Hacker doesn’t know this would suggest he is too scientifically illiterate to have any opinion on this topic worth consulting.

Indeed, in keeping with that ignorance, Hacker might blather on about how we could possibly know my desktop computer doesn’t experience these things as I do; at which point he should be instructed to read up on the science of comparative neuroanatomy. My desktop computer has none of the corresponding hardware we know my brain requires to experience those things. We know a computer’s entire contents, and nowhere in that inventory is any experiential circuitry analogous to ours. Yet my computer can agilely handle the conceptual content of these numbers through countless renderings and computations. Perhaps that does feel like something to it; but it won’t be at all like what it feels like to me: our phenomenological circuitry is too radically different. My computer’s phenomenology couldn’t even be identical to that of a flatworm; and yet it is surely far more distant from mine than a worm’s. And unless Hacker is going to profess a belief in magic, he cannot propose an effect can exist without a necessary and sufficient cause.

So now I am halfway through Hacker’s essay and have yet to encounter a single argument, apart from this garbage, which is the mere fragment of a possible argument—and that argument is trash.

Demystifier, Aisle Seven

In the second half of his essay Hacker makes the whole world face-palm when he backtracks from the stupid idea he’s been uselessly pushing for hundreds of words now by declaring “There is ignorance, but nothing mysterious.” Someone ship him a dictionary. Those mean the same thing. When Dennett calls the question of how the human brain generates the particular phenomenal experience that it does a mystery, he simply means we do not know how it does that. It’s a mystery. Have I really been duped into reading a thousands-of-words-long equivocation fallacy? Is Hacker that shitty a philosopher? I’d tell him it’s time for him to retire—but I see that he already has. Maybe he should stick to fishing. Or knitting. That’s a good hobby.

But it’s worse than that. When Hacker gets to trying to explain how there is no mystery to explain, he actually reverts to claiming we are not ignorant of how consciousness works. So which is it? Never mind. Here he declares “the question of what perceptual consciousness is for is trivial,” because obviously it has survival advantages. This is where he jumps the shark, revealing he doesn’t know what he is talking about. When scientists ask why qualitative experience evolved, they are not asking why the conceptual processing of perception or thought evolved—they already know why that’s useful. The “mystery” is not why our brains can do those things (for example, locate and react to movement in our “peripheral perception”). The mystery is why our brain can’t just do that as blindly as it does everything else—why does it have to experience doing it?

We don’t need colors. So why does our brain invent “red” when we could just simply respond to different wavelengths of light automatically? We don’t need to “experience” seeing anything to recognize something is there and is reflecting different wavelengths of light than something next to it, for example. So why does our brain bother “coloring” that in? Much less with specifically that color. Remember, red does not exist. Nothing outside our brain has any color. Redness is a fiction our brains made up to “represent” certain patterns of photon wavelengths. Why?
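To see how little is actually needed, here is a minimal sketch (hypothetical thresholds and labels, nothing more): a program that discriminates wavelengths and reacts to them usefully, while nothing anywhere in it sees red.

```python
# Toy illustration: functional wavelength discrimination with no "red" anywhere
# in it. The labels below are arbitrary tokens the program reacts to; nothing
# in this code experiences a color. Thresholds and behaviors are hypothetical.

def classify_wavelength(nm: float) -> str:
    """Map a measured wavelength (in nanometers) to a behavioral category."""
    if 620 <= nm <= 750:
        return "LONG_WAVE"   # the band our brains paint as "red"
    if 450 <= nm <= 495:
        return "SHORT_WAVE"  # the band our brains paint as "blue"
    return "OTHER"

def react(nm: float) -> str:
    """Trigger different behavior for different wavelengths; no quale required."""
    return {"LONG_WAVE": "stop", "SHORT_WAVE": "go"}.get(classify_wavelength(nm), "ignore")

print(react(680.0))  # -> "stop": the discrimination works, yet nothing here sees red
```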

And remember, that’s both senses of why: Why did it do that at all? And why did it do it in that specific way? Why are red things red and not blue? Why are they red and not some shade or pattern of grey? Why not some other completely alien color? What is it about the circuit that colors in parts of our visual field with “red” that is different from the circuit that colors it “blue”? And why does that physical difference in those circuits produce exactly that difference in color experience? This is what scientists are talking about when they say they don’t know “why” our brains evolved to do this, nor “how” any neural circuit even can do this.

Hacker seems not to know this. He seems to think scientists are confused about why wavelength discrimination is useful; but my computer can do that, and it needs no conscious experience to reap every resulting benefit. So what use is the experiential aspect of wavelength discrimination? And what use is that specific kind of experiential discrimination? (Colors instead of shades of grey; those color assignments instead of some others; and so on.) Neither is explained yet, by evolutionary biology or neurophysics. We have ideas. But Hacker seems not to know that either. He acts like all scientists and philosophers have done is throw up their hands and propose nothing. In fact they’ve been busy proposing a lot of good leads for answering these questions. Hacker seems not to know any. He can cherry-pick a Dennett quote; but he does not appear to have ever read him.

Take Hacker’s example of pain. He claims “consciousness of increasing pain is an incentive to decrease stress on an injury.” But that utterly fails as an explanation. All we need is the behavioral-response-to-stimuli effect. We do not have to feel pain at all. The useful behaviors Hacker refers to can be entirely programmed without it. So why are we programmed with it? What is pain for? The question is not, what are aversive stimuli for. If we just reflexively favored a wounded limb, no one would be mystified. But we don’t do that. Instead we have an elaborate phenomenology of pain, a completely unnecessary extra step—and one most annoying. Why?
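Here is a minimal sketch of that point (all the names and numbers are made up for illustration): every injury-favoring behavior Hacker cites can be written as a bare stimulus-response rule, with no phenomenology of pain anywhere in the loop.

```python
# Toy illustration: injury-favoring behavior as a bare stimulus-response rule.
# Damage signal in, load adjustment out; no phenomenology of pain anywhere.
# All names and thresholds are invented for illustration.

def limb_load(damage_signal: float) -> float:
    """Reduce the load placed on a limb in proportion to a damage signal (0 to 1)."""
    damage_signal = max(0.0, min(1.0, damage_signal))
    return 1.0 - damage_signal  # more damage -> less load; nothing is felt

def gait_plan(left_damage: float, right_damage: float) -> dict:
    """Favor the wounded limb exactly as an injured animal does."""
    return {
        "left_limb_load": limb_load(left_damage),
        "right_limb_load": limb_load(right_damage),
    }

print(gait_plan(left_damage=0.7, right_damage=0.0))
# -> shifts weight off the damaged left limb; the useful behavior, with nothing hurting
```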

We can tell this evolved early; comparative neuroanatomy shows that experiential pain as a mechanism is an attribute of neural systems going pretty far back (at least as far back as insects); by contrast, similar reactive systems in single-celled and simple multi-celled organisms, and plants, lack any of that computational architecture. They don’t need it. So why do animals? We can even today build injury-favoring robots without any of that phenomenological architecture. So why did evolution produce it? More importantly, how did evolution produce it? After all, we do not know how to program a robot or a computer to feel pain. Why?

This is the mystery that completely eludes Hacker—because he apparently read nothing on this subject, and knows nothing about the actual debates and concerns of real experts in it. He just pontificated a drunk uncle’s essay from the armchair, harrumphing at something he doesn’t even understand, and has made no effort to. This is annoying.

How Does One Solve These Mysteries?

When Hacker makes hopelessly naive declarations like, “Affective consciousness enables us to reflect on our moods and emotions and to bring them under rational control in a manner unavailable to other animals,” he is the one throwing up his hands and giving up. He is basically just covertly admitting he has no explanation for why we need affective consciousness to do this. He is likewise declaring we have no need of knowing how evolution could have produced such a remarkable feature, even were it needed. Even computationally. Much less biologically. This is about as antiscientific a behavior as you could ever expect from a purported philosopher.

We don’t know how a computational process can produce an “affective consciousness” to use in this way. That is the primary mystery of consciousness. Nor do we know why our brains, arranged as they are, generate the particular kind of affective consciousness we experience—why our emotions feel the way they do and not like something else. That is the secondary mystery of consciousness. Only then comes the tertiary mystery of consciousness: why evolution would have brought us down that road of DNA mutations toward developing an organ capable of any of that, rather than achieving the same goals in other, less mysterious ways (like simply making thought more rational, with no need for any phenomenology of emotion in the first place).

Unlike Hacker, I acknowledge these are serious questions that need serious answers. Not armchair pooh-poohing. Only a fool would think these questions can be ignored. And they haven’t been. Following Dennett, the Churchlands, and others, I know (unlike Hacker) that the most promising research program here is in the direction of integrated information processing. At a certain level of complexity, virtual world-building becomes inseparably phenomenological. In other words, you can’t have a complex integrated perceptual system that doesn’t eventuate a phenomenology—a “what it is like” to be navigating that virtual perceptual space. Which means philosophical zombies are logically impossible. A conclusion evidently unknown to Hacker, who appears never to have actually read anything on this. This is, on present evidence, the most likely solution to the primary mystery of consciousness. (This does not mean the hyper-specific idea called “Integrated Information Theory” is the ticket, however. All computational models of consciousness carry the same basic insight.)
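To illustrate only the bare idea of “integration” at work here (this is not IIT’s actual Phi, nor any specific published model, just a toy measure I’m using for illustration): one crude way to quantify how much the parts of a system constrain one another is the mutual information between its halves. Halves that run independently score zero; halves that carry information about each other score high.

```python
# Toy illustration of "integration" (not IIT's actual Phi; just mutual
# information between two halves of a tiny binary system, with made-up
# distributions): integrated halves constrain each other, independent halves don't.
from math import log2

def mutual_information(joint: dict) -> float:
    """Mutual information (in bits) between A and B, given a joint distribution P(a, b)."""
    p_a, p_b = {}, {}
    for (a, b), p in joint.items():
        p_a[a] = p_a.get(a, 0.0) + p
        p_b[b] = p_b.get(b, 0.0) + p
    return sum(p * log2(p / (p_a[a] * p_b[b])) for (a, b), p in joint.items() if p > 0)

coupled = {(0, 0): 0.5, (1, 1): 0.5}                                    # halves always agree
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}  # halves ignore each other

print(mutual_information(coupled))      # 1.0 bit of integration
print(mutual_information(independent))  # 0.0 bits: no integration
```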

From that it then follows the answer to the secondary and tertiary mysteries will be one of mechanism: eventually we will be able to map and diagram the specific neural circuits causally sufficient and necessary for generating every unique quale, and we will then be able to see what the physical difference is between a circuit that generates a scent and a circuit that generates a color, or a circuit that generates no quale at all; and then what the physical difference is between a circuit that generates the color red and a circuit that generates the color blue; and we will then be able to deduce all logically possible color circuits, and be able to begin discovering, and possibly even predicting, what colors any given circuit will generate and why. Likewise scents, feelings, and the like.

I do not think we will be able to predict all phenomenology independent of experiencing it ourselves (I suspect we would have to integrate a color circuit into our perceptual system to “experience” the color it produces: you have to be the process to know what it is like to be it), but we will be able to categorize them: at a mere glance we will be able to predict whether a circuit so-installed would make us see a color or smell an odor or feel a feeling, for example. And with all that information, we will be able to look at the evolutionary history of every component, all the way back to its most primitive known ancestor, and thereby answer the question of why evolution favored that route for that circuit, while favoring the development of non-conscious circuitry for other functions and systems in the brain.

I can say that, at this point, I suspect what we will find is that phenomenology-driven pathways are more computationally simple to develop for the complex purposes they serve. We may find, for example, that it is possible to design a consciousness circuit that does not feel pain but reacts in every way identically to the ways pain is meant to benefit its experiencer, but that the requisite programming is too irreducibly complex to arrive at blindly by stepwise mutation and natural selection. The pathway of coopting a phenomenological feedback loop was probably easier and thus more likely to be hit upon by any evolutionary process.

Conclusion

After pointless drunk-uncle rambling, completely missing the point, understanding nothing, and developing no cogent theory of why or how a completely natural, physical world can be compatible with experiential consciousness, Hacker eventually resorts to the idiotic declaration that “the world does not contain conscious states and events” but rather it “contains sentient creatures like us who are conscious (or unconscious) and are conscious of various things.” Those are the same goddamned things. There is no such thing as a “sentient creature” without conscious states and events. There is no such thing as “being conscious” and there being no events or states of consciousness. Hacker is literally writing contradictory nonsense.

Hacker closes his essay with a dozen more nonsensical contradictory statements like that one, which require no further parsing. The fact remains that there are indeed real mysteries of consciousness. There are phenomena we have not yet explained and cannot yet explain. We don’t know how a physical system can produce them (we have some ideas, but a long way to go to confirm them). We don’t know why different physical systems produce certain phenomena and not others (we have some ideas, but a long way to go to confirm them). And we don’t know how or why evolution ever got to or needed any of this to do any useful thing (we have some ideas, but a long way to go to confirm them).

Hacker does not explain any of this away. He instead simply ignores what all those mysteries actually are. He pretends there is nothing to explain, and therefore we don’t need to develop even the beginning of any answers to them, much less continue a major scientific research program to complete them. But we do. And we have. But dolts like Hacker want you to simply abandon all scientific and philosophical curiosity and responsibility and pretend there are no mysteries to solve about why we are the way we are, and how the world has made that possible. Please. New Year’s resolution. Don’t be like him.
